Monitoring PX-Backup with Prometheus and Grafana


This document describes how to monitor your PX-Backup cluster with Prometheus and Grafana.

Prerequisites

  • A running PX-Backup cluster
  • kubectl access to that cluster

The specs in this document assume PX-Backup is installed in the px-backup namespace; adjust the namespace fields if your installation differs.

Install and configure Prometheus

1. Enter the following combined spec and kubectl command to install the Prometheus Operator:

kubectl apply -f - <<'_EOF'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus-operator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus-operator
subjects:
  - kind: ServiceAccount
    name: prometheus-operator
    namespace: px-backup
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus-operator
rules:
  - apiGroups:
      - extensions
    resources:
      - thirdpartyresources
    verbs: ["*"]
  - apiGroups:
      - apiextensions.k8s.io
    resources:
      - customresourcedefinitions
    verbs: ["*"]
  - apiGroups:
      - monitoring.coreos.com
    resources:
      - alertmanagers
      - prometheuses
      - prometheuses/finalizers
      - servicemonitors
      - prometheusrules
      - podmonitors
      - thanosrulers
    verbs: ["*"]
  - apiGroups:
      - apps
    resources:
      - statefulsets
    verbs: ["*"]
  - apiGroups: [""]
    resources:
      - configmaps
      - secrets
    verbs: ["*"]
  - apiGroups: [""]
    resources:
      - pods
    verbs: ["list", "delete"]
  - apiGroups: [""]
    resources:
      - services
      - endpoints
    verbs: ["get", "create", "update"]
  - apiGroups: [""]
    resources:
      - nodes
    verbs: ["list", "watch"]
  - apiGroups: [""]
    resources:
      - namespaces
    verbs: ["list", "watch"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus-operator
  namespace: px-backup
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: prometheus-operator
  name: prometheus-operator
  namespace: px-backup
spec:
  selector:
    matchLabels:
      k8s-app: prometheus-operator
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: prometheus-operator
    spec:
      containers:
        - args:
            - --kubelet-service=kube-system/kubelet
            - --config-reloader-image=quay.io/coreos/configmap-reload:v0.0.1
          image: quay.io/coreos/prometheus-operator:v0.36.0
          name: prometheus-operator
          ports:
            - containerPort: 8080
              name: http
          resources:
            limits:
              cpu: 200m
              memory: 100Mi
            requests:
              cpu: 100m
              memory: 50Mi
      securityContext:
        runAsNonRoot: true
        runAsUser: 65534
      serviceAccountName: prometheus-operator
---
_EOF
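Before moving on, you can confirm that the operator came up cleanly. The commands below assume the px-backup namespace and the k8s-app label from the spec above:

```shell
# Wait for the operator Deployment to finish rolling out
kubectl -n px-backup rollout status deployment/prometheus-operator

# The operator pod should be Running
kubectl -n px-backup get pods -l k8s-app=prometheus-operator
```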

2. To grant Prometheus access to the metrics API, create the following Kubernetes objects:

  • ClusterRole
  • ClusterRoleBinding
  • Service
  • ServiceAccount

kubectl apply -f - <<'_EOF'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: px-backup-prometheus
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - services
  - endpoints
  - pods
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
- nonResourceURLs:
  - /metrics
  - /federate
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: px-backup-prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: px-backup-prometheus
subjects:
- kind: ServiceAccount
  name: px-backup-prometheus
  namespace: px-backup
---
apiVersion: v1
kind: Service
metadata:
  name: px-backup-prometheus
  namespace: px-backup
spec:
  type: ClusterIP
  ports:
    - name: web
      port: 9090
      protocol: TCP
      targetPort: 9090
  selector:
    prometheus: px-backup-prometheus
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: px-backup-prometheus
  namespace: px-backup
---
_EOF
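You can verify that all four objects exist before continuing; this sketch assumes the names from the spec above:

```shell
# Cluster-scoped RBAC objects
kubectl get clusterrole,clusterrolebinding px-backup-prometheus

# Namespaced Service and ServiceAccount
kubectl -n px-backup get service,serviceaccount px-backup-prometheus
```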

3. To specify the monitoring rules for PX-Backup, create a ServiceMonitor object by entering the following combined spec and kubectl command:

kubectl apply -f - <<'_EOF'
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
   namespace: px-backup
   name: px-backup-prometheus-sm
   labels:
     name: px-backup-prometheus-sm
spec:
   selector:
     matchLabels:
       app.kubernetes.io/name: px-backup
   namespaceSelector:
     any: true
   endpoints:
     - port: rest-api
       targetPort: 10001
---
_EOF
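This ServiceMonitor only picks up Services that carry the app.kubernetes.io/name: px-backup label. If Prometheus later shows no targets, a quick way to check for the label is:

```shell
# List Services carrying the label the ServiceMonitor selects on
kubectl get svc --all-namespaces -l app.kubernetes.io/name=px-backup
```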

4. Create a Prometheus instance that collects PX-Backup metrics by entering the following combined spec and kubectl command:

kubectl apply -f - <<'_EOF'
---
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
   name: px-backup-prometheus
   namespace: px-backup
spec:
   replicas: 2
   logLevel: debug
   serviceAccountName: px-backup-prometheus
   serviceMonitorSelector:
     matchLabels:
       name: px-backup-prometheus-sm
---
_EOF
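The operator reacts to this object by creating a two-replica Prometheus StatefulSet. To confirm that it is scraping PX-Backup, you can check the pods and then open the Prometheus targets page through the Service you created earlier:

```shell
# The pods carry the prometheus: px-backup-prometheus label the Service selects on
kubectl -n px-backup get pods -l prometheus=px-backup-prometheus

# Forward the Prometheus web UI, then inspect http://localhost:9090/targets
kubectl -n px-backup port-forward svc/px-backup-prometheus 9090
```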

Install and configure Grafana

1. Create a storage class for Grafana and persistent volume claims (PVCs) with the following names:

  • grafana-data
  • grafana-dashboard
  • grafana-source-config
  • grafana-extensions

kubectl apply -f - <<'_EOF'
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: px-grafana-sc
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "3"
  priority_io: "high"
allowVolumeExpansion: true
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
   name: grafana-data
   namespace: px-backup
   annotations:
     volume.beta.kubernetes.io/storage-class: px-grafana-sc
spec:
   accessModes:
     - ReadWriteOnce
   resources:
     requests:
       storage: 1Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
   name: grafana-dashboard
   namespace: px-backup
   annotations:
     volume.beta.kubernetes.io/storage-class: px-grafana-sc
spec:
   accessModes:
     - ReadWriteOnce
   resources:
     requests:
       storage: 1Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
   name: grafana-source-config
   namespace: px-backup
   annotations:
     volume.beta.kubernetes.io/storage-class: px-grafana-sc
spec:
   accessModes:
     - ReadWriteOnce
   resources:
     requests:
       storage: 1Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
   name: grafana-extensions
   namespace: px-backup
   annotations:
     volume.beta.kubernetes.io/storage-class: px-grafana-sc
spec:
   accessModes:
     - ReadWriteOnce
   resources:
     requests:
       storage: 1Gi
---
_EOF

Note the following about this storage class:

  • The provisioner field is set to kubernetes.io/portworx-volume. For details about the Portworx-specific parameters, refer to the Portworx Volume section of the Kubernetes website.
  • The repl: "3" parameter means that Portworx creates three replicas of each volume.
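Before installing Grafana, you can confirm that all four PVCs were bound by the Portworx provisioner:

```shell
# Each claim should report STATUS Bound with the px-grafana-sc storage class
kubectl -n px-backup get pvc grafana-data grafana-dashboard grafana-source-config grafana-extensions
```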

2. Enter the following command to install Grafana:

kubectl apply -n px-backup -f - <<'_EOF'
---
apiVersion: v1
kind: Service
metadata:
  name: grafana
  labels:
    app: grafana
spec:
  type: ClusterIP
  ports:
    - port: 3000
  selector:
    app: grafana
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
  labels:
    app: grafana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      securityContext:
       fsGroup: 2000
      containers:
        - image: grafana/grafana:6.1.6
          name: grafana
          imagePullPolicy: Always
          resources:
            limits:
              cpu: 100m
              memory: 100Mi
            requests:
              cpu: 100m
              memory: 100Mi
          readinessProbe:
            httpGet:
              path: /login
              port: 3000
          volumeMounts:
            - name: grafana
              mountPath: /etc/grafana/provisioning/dashboards
              readOnly: false
            - name: grafana-dash
              mountPath: /var/lib/grafana/dashboards
              readOnly: false
            - name: grafana-source-cfg
              mountPath: /etc/grafana/provisioning/datasources
              readOnly: false
            - name: grafana-plugins
              mountPath: /var/lib/grafana/plugins
              readOnly: false
      volumes:
      - name: grafana
        persistentVolumeClaim:
          claimName: grafana-data
      - name: grafana-dash
        persistentVolumeClaim:
          claimName: grafana-dashboard
      - name: grafana-source-cfg
        persistentVolumeClaim:
          claimName: grafana-source-config
      - name: grafana-plugins
        persistentVolumeClaim:
          claimName: grafana-extensions
---
_EOF

Note the following about this deployment:

  • The volumes section references the PVCs you created in the previous step.
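As with the Prometheus components, you can wait for the Deployment to roll out before moving on:

```shell
kubectl -n px-backup rollout status deployment/grafana
```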

3. Enter the following kubectl port-forward command to forward connections on port 3000 of your local machine (on all interfaces, because of the --address 0.0.0.0 flag) to port 3000 of the Grafana service:

kubectl port-forward svc/grafana --namespace px-backup --address 0.0.0.0 3000

4. Follow the instructions on the Grafana support for Prometheus page of the Prometheus documentation to create a Prometheus data source named px-backup.
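Instead of creating the data source through the UI, you can also provision it declaratively. The following is a minimal sketch of a Grafana data source provisioning file; the file name (prometheus.yaml) is arbitrary, and the file must end up in the directory backed by the grafana-source-config PVC (mounted at /etc/grafana/provisioning/datasources). The URL assumes the px-backup-prometheus Service created earlier:

```yaml
# prometheus.yaml -- Grafana data source provisioning (sketch)
apiVersion: 1
datasources:
  - name: px-backup
    type: prometheus
    access: proxy
    # In-cluster DNS name of the Prometheus Service created above
    url: http://px-backup-prometheus.px-backup.svc:9090
    isDefault: true
```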

5. Follow the instructions on the Importing a dashboard page of the Grafana documentation to import the PX-Backup dashboard JSON file.


Last edited: Tuesday, Sep 14, 2021