Introduction

Kubernetes is a container-orchestration system for automating deployment, scaling, and management of containerized applications. The Kubernetes integration monitors the health and performance of your Kubernetes cluster.

What does OpsRamp monitor?

Using the agent, OpsRamp monitors the following Kubernetes components:

  • KubeDNS / CoreDNS
  • Kube Scheduler
  • Kube Controller
  • Kube API Server
  • Kubelet
  • Kube State (Not installed by default in the K8s Cluster)
  • Metrics Server (Not installed by default in the K8s Cluster)

Notes:

  • Install Kube State and Metrics Server manually to fetch and monitor metrics.
    • To deploy Kube State, get the latest version of the deployment YAML file and compatibility matrix from GitHub.
    • To deploy Metrics Server, get the latest version of the deployment YAML file and compatibility matrix from GitHub.
  • With Kubernetes monitoring, the OpsRamp agent also monitors Docker containers.

Configuring Kubernetes

Configuration involves:

  1. Installing kube-state metrics
  2. Performing additional configurations
  3. (Optional) Performing Optional Configuration

Prerequisites

The prerequisites for Kubernetes configuration include:

  1. Install kube-state-metrics.
    • Use the kube-state-metrics YAML version that matches the Kubernetes version of the cluster.
    • After deployment, make sure the kube-state-metrics service has a Cluster IP assigned. The agent requires this address to fetch metrics from kube-state-metrics. If the Cluster IP is not set (shown as None), modify the service.yaml file and remove the clusterIP: None line.
      Example of modified service.yaml file (version 1.9):
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app.kubernetes.io/name: kube-state-metrics
        app.kubernetes.io/version: 1.9.7
      name: kube-state-metrics
      namespace: kube-system
    spec:
      ports:
      - name: http-metrics
        port: 8080
        targetPort: http-metrics
      - name: telemetry
        port: 8081
        targetPort: telemetry
      selector:
        app.kubernetes.io/name: kube-state-metrics
    
  2. (Optional) To monitor using Metrics Server, deploy it (a verification example follows this list):
    kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml
    
  3. Perform additional configurations depending on the environment:
    • For AKS, install the kube-dns patch.
    • For GKE, enable RBAC.
    • For on-premise clusters, apply patches (optional).
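
If you deploy Metrics Server (step 2 above), you can confirm that it is running and serving the resource metrics API with standard kubectl commands. The deployment name metrics-server and the kube-system namespace below are the defaults used by the components.yaml manifest referenced above:

# Check that the Metrics Server deployment is available
kubectl -n kube-system get deployment metrics-server

# If the metrics API is serving, this prints CPU and memory usage per node
kubectl top nodes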

Install kube-state metrics

To see if kube-state-metrics is installed in the cluster, run the following command on the controller node(s):

kubectl get svc --all-namespaces | grep kube-state-metrics | grep -v grep

The following sample output confirms that kube-state-metrics is already installed in the cluster:

kube-system kube-state-metrics ClusterIP 10.96.186.34 <none> 8080/TCP,8081/TCP 19d

To install kube-state metrics, do the following on the Kubernetes controller node(s):

  1. Clone the kubernetes/kube-state-metrics GitHub repo.
  2. Run kubectl apply -f kube-state-metrics/kubernetes/.
git clone https://github.com/kubernetes/kube-state-metrics.git
kubectl apply -f kube-state-metrics/kubernetes/
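
After applying the manifests, you can confirm that the deployment is running and that the service has a Cluster IP assigned (the kube-system namespace below matches the service.yaml example shown earlier):

kubectl -n kube-system get deployment kube-state-metrics
kubectl -n kube-system get svc kube-state-metrics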

Perform additional configurations

Additional configurations depend on the Kubernetes environment:

  • Azure AKS
  • Google GKE

Azure AKS: Install kube-dns patch

For an Azure AKS environment, you must apply a DNS Service patch.

  • By default, the sidecar container is disabled.
  • The patch is required to export kube-dns metrics.

To install the kube-dns patch for the Azure (AKS) environment:

  1. Copy the provided kube-dns patch script (shown below) to kube-dns-metrics-patch.yaml.
  2. Execute the following command on the controller nodes to apply the patch:
kubectl patch deployment -n kube-system kube-dns-v20 --patch "$(cat kube-dns-metrics-patch.yaml)"
Sample kube-dns patch script

The following is the kube-dns patch script that you save as kube-dns-metrics-patch.yaml.

spec:
  template:
    spec:
      containers:
      - name: kubedns
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
      - name: sidecar
        image: k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.10
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local
        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 20Mi
            cpu: 10m

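After applying the patch, you can watch the rollout of the patched deployment (the deployment name matches the one used in the patch command above):

# Wait for the patched kube-dns deployment to roll out successfully
kubectl -n kube-system rollout status deployment/kube-dns-v20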

Google GKE: Enable RBAC

For a Google GKE environment, you must grant permissions to create roles in Kubernetes. To grant permission to create roles, execute the following command:

kubectl create clusterrolebinding cluster-admin-binding \
    --clusterrole cluster-admin \
    --user $(gcloud config get-value account)

This command binds the cluster-admin cluster role to the current user.
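
To confirm that the binding exists and references the cluster-admin role, you can inspect it with kubectl:

# Shows the role and the user bound by cluster-admin-binding
kubectl get clusterrolebinding cluster-admin-binding -o wide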

(Optional) Perform Optional Configuration

On-Premise: Apply patches

If patching is required on on-premise nodes, do the following:

  1. Create a user account on all nodes of the cluster to collect package information and install patches.
  2. Execute the following script on the Kubernetes nodes. Choose the script that matches the operating system used to create the cluster (Ubuntu and CentOS samples follow).
Sample Ubuntu script

The following is an example Ubuntu script:

#!/bin/bash

Configure()
{
            # Create working directories used by the patch feature
            mkdir /opt/opsramp/k8s/patch/{tmp,log} -p  > /dev/null 2>&1

            # Create the opskube user and grant it passwordless sudo limited
            # to the patch directory
            useradd opskube -s /bin/bash -d /opt/opsramp/k8s/patch/  > /dev/null 2>&1
            usermod -p '$6$dBsN2u5SuC.Niy.C$HxPpbXRZcaQpHui8D3QZshhdJz57xhU1roE12U4KEmlyCNiBNzcGRbrNI7.DREwsf18JUAMT27/VaZmr34Bul.' opskube > /dev/null 2>&1
            if [ -d /etc/sudoers.d ]
            then
                echo "opskube ALL=(ALL) NOPASSWD: /opt/opsramp/k8s/patch/" > /etc/sudoers.d/opskube
                chmod 0440 /etc/sudoers.d/opskube
            else
                sed -i '$ a opskube ALL=(ALL) NOPASSWD: /opt/opsramp/k8s/patch/' /etc/sudoers > /dev/null 2>&1
            fi

            # Install python-apt (used to collect package information) if it
            # is not already present
            dpkg -s python-apt | grep Status | grep installed
            STATUS1=$?
            if [ $STATUS1 -eq 0 ]
            then
                echo "python-apt already installed! No changes required!"
            else
                apt-get update > /dev/null 2>&1
                apt-get -y install python-apt > /dev/null 2>&1
            fi
}

# Run the configuration
Configure
Sample CentOS script

The following is an example CentOS script:

#!/bin/bash

Configure()
{
            # Create working directories used by the patch feature
            mkdir /opt/opsramp/k8s/patch/{tmp,log} -p  > /dev/null 2>&1

            # Create the opskube user and grant it passwordless sudo limited
            # to the patch directory
            useradd opskube -s /bin/bash -d /opt/opsramp/k8s/patch/  > /dev/null 2>&1
            usermod -p '$6$dBsN2u5SuC.Niy.C$HxPpbXRZcaQpHui8D3QZshhdJz57xhU1roE12U4KEmlyCNiBNzcGRbrNI7.DREwsf18JUAMT27/VaZmr34Bul.' opskube > /dev/null 2>&1
            if [ -d /etc/sudoers.d ]
            then
                echo "opskube ALL=(ALL) NOPASSWD: /opt/opsramp/k8s/patch/" > /etc/sudoers.d/opskube
                chmod 0440 /etc/sudoers.d/opskube
            else
                sed -i '$ a opskube ALL=(ALL) NOPASSWD: /opt/opsramp/k8s/patch/' /etc/sudoers > /dev/null 2>&1
            fi

            # Install rpm-python (used to collect package information) if it
            # is not already present
            rpm -qa | grep rpm-python
            STATUS1=$?
            if [ $STATUS1 -eq 0 ]
            then
                echo "rpm-python already installed! No changes required!"
            else
                yum -y install rpm-python > /dev/null 2>&1
            fi
}

# Run the configuration
Configure
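
Run the chosen script with root privileges on each node. For example (the file name below is only an example):

# Copy the script to the node, then execute it as root
sudo bash opsramp-k8s-patch-setup.sh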

Configuring OpsRamp

Configuration involves:

  1. Configuring and installing Kubernetes Integration
  2. Deploying the OpsRamp agent YAML file
  3. Applying required Kubernetes monitoring templates / Setting up Device Management Policy to apply templates
  4. (Optional) Configuring Docker and Kubernetes events

Step 1: Configuring and installing Kubernetes Integration

  1. From All Clients, select the client.
  2. Go to Setup > Integrations > Integrations.
  3. From Available Integrations, select Compute > Kubernetes and click Install.
  4. Provide the following details:
    • Name for the integration.
    • Deployment type: On-prem or Cloud (AWS, GKE, and AKS)
    • Container Engine: Docker or ContainerD
      Note: Primarily, Docker is used. ContainerD is used for K3s integration.
  5. Click Install.

Step 2: Deploy the agent

To deploy the agent on the Kubernetes nodes:

  1. Copy the YAML content and paste it into a new file on the kube-controller node (example file name: opsramp-agent-kubernetes.yaml).
  2. Execute the command kubectl apply -f opsramp-agent-kubernetes.yaml on the kube-controller node.
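
To verify the deployment, you can list the agent workloads. This check assumes the agent resource names include opsramp, in line with the example file name above:

# List the agent pods across all namespaces
kubectl get pods --all-namespaces | grep -i opsramp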

Environment Variables in an agent YAML file

  • You can adjust the following environment variable to change the Log Level of the agent (a placement sketch follows this list):
      - name: LOG_LEVEL
        value: "warn"
      
  • Worker Agent: This DaemonSet collects system performance metrics, container metrics (Docker or ContainerD), Kubelet metrics, and all container application metrics.
  • Master Agent: This Deployment collects k8s-apiserver, k8s-controller, k8s-scheduler, k8s-kube-state, k8s-metrics-server, and k8s-coreDNS / kubeDNS metrics.
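
For reference, these environment variables sit in the env list of the agent container spec in the Worker (DaemonSet) and Master (Deployment) manifests. The fragment below is a minimal placement sketch; the container name and image are placeholders, not the actual OpsRamp agent YAML:

# Minimal placement sketch -- container name and image are placeholders
spec:
  template:
    spec:
      containers:
      - name: opsramp-agent            # placeholder container name
        image: example/agent:latest    # placeholder image
        env:
        - name: LOG_LEVEL
          value: "warn"
        # Other variables described below (DOCKER_EVENTS, K8S_EVENTS,
        # proxy settings) are added to this same env list.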

Connecting Agents behind Proxy

To connect the agent through a proxy, set the following environment variables:

CONN_MODE=proxy
PROXY_SERVER=<ProxyServerIP>
PROXY_PORT=<ProxyPort>

If the proxy server requires authentication, also set the following credentials; otherwise, skip these environment variables.

PROXY_USER=<User>
PROXY_PASSWORD=<Password>
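
These go in the same env list of the agent container as the variables above; the values below are placeholders:

# Proxy settings as container environment variables (placeholder values)
env:
- name: CONN_MODE
  value: "proxy"
- name: PROXY_SERVER
  value: "192.0.2.10"       # placeholder proxy server IP
- name: PROXY_PORT
  value: "3128"             # placeholder proxy port
# Only if the proxy requires authentication:
- name: PROXY_USER
  value: "proxyuser"        # placeholder user
- name: PROXY_PASSWORD
  value: "proxypassword"    # placeholder password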

Step 3: Applying Monitoring Templates / Creating Device Management Policy

  1. Apply the appropriate Kubernetes template on the Integration resource (cluster resource) that is created after the deployment of the agent YAML file.

  2. Apply the Docker Host Monitoring template and the Kubelet template on each worker agent created under the Integration resource in the application.

Step 4: (Optional) Configuring Docker and Kubernetes events

Configure Docker Events

Docker events are disabled by default in the agent deployment YAML file. The agent supports the following three Docker events by default:

  • Start
  • Kill
  • Oom (Out of Memory)

To enable the Docker events, change the DOCKER_EVENTS environment variable to TRUE.

Disabled by Default

- name: DOCKER_EVENTS
  value: "FALSE"

Enabled

- name: DOCKER_EVENTS
  value: "TRUE"

Configure Kubernetes Events

OpsRamp Agent can forward the Kubernetes events that are generated in the cluster.

By default, this feature is disabled in the agent deployment YAML file. To enable it, change the K8S_EVENTS environment variable to TRUE.

Disabled by Default

- name: K8S_EVENTS
  value: "FALSE"

Enabled

- name: K8S_EVENTS
  value: "TRUE"

The events are categorized into the following three types:

  • Node
  • Pod
  • Other

Notes:

  • To opt out of any of these events, remove the event from the agent deployment YAML file.
  • To add an event that is not supported, add the event (Kube Event Reason) under the appropriate category. If the reason matches the actual Kubernetes event reason, events are forwarded as alerts.
  • For agent versions 8.0.1-1 and above, Kubernetes events are sent as monitoring alerts. For older agent versions, Kubernetes events are sent as maintenance alerts to the OpsRamp alert browser.
  • By default, all events are converted to Warning alerts.
  • To forward an event with a different alert state, append the alert state (Critical or Warning) to the event name, as shown below.
pod:
  - Failed:Critical
  - InspectFailed:Warning
  - ErrImageNeverPull
  - Killing

Events supported by default

List of supported events, by category:

Node

  • RegisteredNode
  • RemovingNode
  • DeletingNode
  • TerminatingEvictedPod
  • NodeReady
  • NodeNotReady
  • NodeSchedulable
  • NodeNotSchedulable
  • CIDRNotAvailable
  • CIDRAssignmentFailed
  • Starting
  • KubeletSetupFailed
  • FailedMount
  • NodeSelectorMismatching
  • InsufficientFreeCPU
  • InsufficientFreeMemory
  • OutOfDisk
  • HostNetworkNotSupported
  • NilShaper
  • Rebooted
  • NodeHasSufficientDisk
  • NodeOutOfDisk
  • InvalidDiskCapacity
  • FreeDiskSpaceFailed

Pod

  • Failed
  • InspectFailed
  • ErrImageNeverPull
  • Killing
  • OutOfDisk
  • HostPortConflict

Other

  • FailedBinding
  • FailedScheduling
  • SuccessfulCreate
  • FailedCreate
  • SuccessfulDelete
  • FailedDelete

List of Metrics

View the list of metrics and descriptions applicable to the following monitors: k8s-apiserver, k8s-controller, k8s-scheduler, k8s-kube-state, k8s-metrics-server, k8s-coreDNS / kubeDNS.

For complete details, view List of Metrics.

What to do next

After a discovery profile is created, do the following:

  • To view the integration, go to Infrastructure > Resources.
  • Assign monitoring templates to the resource.
  • Validate that the resource was successfully added to OpsRamp.

Frequently Asked Questions

Will the agents deploy automatically when a new node is attached to the Kubernetes cluster?

Yes. Worker agents are deployed using a DaemonSet, so when a new node joins the cluster, a worker agent is deployed on it automatically.

What happens when the pod having the agent is deleted?

A new pod with the agent is deployed automatically by the Kubernetes scheduler.

Does failure of an agent in one node affect the agents in other nodes?

No. All agents work independently. As a result, if one agent is not behaving properly then the impact is limited only to that agent.

What happens if the agent container in the pod crashes or gets deleted?

The agent restarts and monitoring resumes. Only in a rare scenario are the metrics for an iteration or two missed.

Are configuration updates made inside the container (by logging into the agent container) applied to the agent?

No. Any configuration update inside the agent container does not impact the running agent. All such configuration updates must be performed using config maps and applied again.

Will I get metrics while the agent is getting updated?

An agent update completes within seconds. When the agents are up with the new version, monitoring starts again and all monitoring frequencies are set accordingly. Only in a rare scenario is one iteration of monitoring missed.

What is the default Log Level for agents that are being deployed?

The Default Log Level is Warning.

Can a node have more than one agent installed?

No. Only one worker agent is installed per node, and one master agent is installed per cluster on any one of the nodes in the cluster.

Does monitoring stop if the master agent crashes or is deleted?

No. Only the kube-apps metrics stop; container metrics sent from the worker agents continue to be collected.