Monitoring a Kubernetes Environment
This post is part 3 in a 4-part series about Container Monitoring. Post 1 dives into some of the new challenges containers and microservices create and the information you should focus on. Post 2 describes how you can monitor your Mesos cluster. This article describes the challenges of monitoring Kubernetes, how it works and what this means for your monitoring strategy.
What is Kubernetes?
Kubernetes is a powerful orchestration system, originally developed by Google, for managing containerized applications in cloud and on-premises environments. Kubernetes automates the deployment, management, and scaling of containerized applications and services, and provides the infrastructure to build a truly container-centric development and operations environment.
Kubernetes introduces a new level of abstraction to your containerized environment: the pod. A pod is a group of one or more containers that are located on the same host and share that node's resources, such as network, memory, and storage. Each pod in Kubernetes gets its own IP address, which is shared by all the containers inside it.
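To make the pod abstraction concrete, here is a minimal, hypothetical Pod manifest (names and images are illustrative, not from the original post) that groups two containers so they share one network namespace and IP address:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar      # hypothetical example name
spec:
  containers:
  - name: web                 # main application container
    image: nginx:1.25
    ports:
    - containerPort: 80
  - name: log-forwarder       # sidecar sharing the pod's network and lifecycle
    image: busybox:1.36
    command: ["sh", "-c", "tail -f /dev/null"]
```

Because both containers live in the same pod, the sidecar can reach the web container on `localhost:80`, which is exactly the kind of intra-pod dependency a monitoring tool needs to understand.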
In short, Kubernetes consists of the following components:
- A control plane (master) running the API server, the scheduler, the controller manager, and the etcd key-value store
- Worker nodes, each running a kubelet, a kube-proxy, and a container runtime such as Docker
- The objects you deploy on top of them, such as pods, services, and deployments
To ensure good performance of your business services, it is critical to monitor Kubernetes itself as well as the health of your deployed applications, their containers, and the dependencies between them. The new abstraction introduced by Kubernetes requires you to rethink your monitoring strategy, especially if you are used to traditional monitoring tools and traditional hosts such as physical machines or VMs. Microservices changed the way we think about running services on VMs; Kubernetes has changed the way we manage and scale containers.
What does this mean for you?
Monitoring Kubernetes differs from traditional monitoring in several ways:
- More components (between hosts and applications) to monitor
- You need monitoring capabilities that can track the dynamic behavior of containers and applications inside them
- As the number of containers scales, the number of dependencies increases
- If a single component within a microservice fails, there may be no business impact, so the severity of alerts should reflect this. The traditional monitoring approach of testing whether something is ‘up’ or ‘down’ falls short.
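The point about dependencies growing with scale can be illustrated with a quick back-of-the-envelope calculation (a sketch, not from the original post): with n services, the number of *potential* pairwise dependencies grows quadratically, which is why manually maintained dependency maps stop working.

```python
def max_dependencies(n: int) -> int:
    """Upper bound on pairwise dependencies between n services:
    each unordered pair of services can form one dependency."""
    return n * (n - 1) // 2

# Dependencies grow much faster than the service count itself.
for n in (5, 10, 50):
    print(f"{n} services -> up to {max_dependencies(n)} dependencies")
```

Going from 10 to 50 services multiplies the potential dependencies from 45 to 1,225, which is why automated dependency discovery matters.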
Now you know that it’s critical to monitor the different layers and components of your Kubernetes environment. StackState integrates with all of them to provide you a holistic view of your Kubernetes cluster performance, its health and dependencies:
- The Kubernetes integration aggregates performance metrics and events from Kubernetes
- All services, clusters, nodes and pods including their dependencies are automatically synchronized
- The Docker integration automatically collects all the essential metrics you need
- With the other 80+ integrations, StackState is able to visualize your entire Business and IT landscape and collect its metrics
StackState automatically keeps track of what is running where, thanks to its service discovery capability. Whenever you spin up a container, the StackState Agent identifies which application is running inside it and automatically starts collecting and reporting the right metrics. If you destroy or stop a container, StackState understands that too. You can define configuration templates for specific images in a distributed configuration store; the StackState Agent uses them to dynamically reconfigure its checks as your container ecosystem changes.
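The template-driven discovery described above can be sketched conceptually as follows. This is not StackState's actual API or template format (which the post does not specify); it is a hypothetical illustration of matching container images against a store of check templates:

```python
# Hypothetical template store: image name -> check configuration.
# Real agents typically store these in a distributed config store.
CHECK_TEMPLATES = {
    "redis": {"check": "redisdb", "port": 6379},
    "nginx": {"check": "nginx", "status_url": "http://localhost/status"},
}

def configure_checks(running_images):
    """Return the check configs to activate for the images currently seen.

    Images without a matching template are simply ignored.
    """
    configs = []
    for image in running_images:
        base = image.split(":")[0]  # drop the tag, e.g. "redis:7" -> "redis"
        template = CHECK_TEMPLATES.get(base)
        if template is not None:
            configs.append({**template, "image": image})
    return configs

# As containers come and go, the agent re-runs this matching step,
# so checks follow the containers rather than static host configs.
print(configure_checks(["redis:7", "nginx:1.25", "postgres:16"]))
```

The key design idea is that configuration is keyed to the *image*, not the host, so checks automatically follow containers wherever the scheduler places them.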
In this post, we’ve walked through the challenges of monitoring Kubernetes, how it works, and what it means for your monitoring strategy. Request a free trial of StackState and start monitoring your Kubernetes cluster to gain greater visibility into the health, performance, and dependencies of your clusters and be better prepared to address potential issues.