Service Mesh Microservices Architecture from Development to Canary Release Best Practices (Part 2)


Introduction

The previous article covered deploying DataKit in a Kubernetes environment, deploying the Bookinfo project on Istio, configuring a CI/CD pipeline for the reviews microservice, and performing a canary release across three versions of reviews. This article covers observability in Kubernetes and Istio.

1 Observability in Kubernetes

1.1 Docker Monitoring View

In a Kubernetes cluster, a Pod is the smallest scheduling unit, which can contain one or more containers. In Guance, you can use the Docker Monitoring View to observe containers.
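For reference, here is a minimal Pod manifest with two containers in the same Pod; the names and images below are illustrative and not part of the Bookinfo deployment:

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: app               # main application container
    image: nginx:1.25
  - name: log-agent         # sidecar sharing the Pod's network and lifecycle
    image: busybox:1.36
    command: ["sh", "-c", "tail -f /dev/null"]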
Log in to Guance, click on Scenarios -> Create Dashboard, and select Docker Monitoring View.

image

Enter Docker Monitoring View 1 as the dashboard name (the name can be customized), then click Confirm.

image

Enter the monitoring view and select the hostname and container name.

image

1.2 Kubernetes Monitoring View

Log in to Guance, click on Scenarios -> Create Dashboard, and select Kubernetes Monitoring View.

image

Enter Kubernetes Monitoring View as the dashboard name (the name can be customized), then click Confirm.

image

Enter the monitoring view and select the cluster name and namespace. Note: the values in the cluster name dropdown come from the global tag configured when DataKit was deployed in the previous article.
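For reference, the cluster name typically comes from a global tag set on the DataKit DaemonSet at deployment time. A minimal sketch of the relevant environment variable in datakit.yaml follows; ENV_GLOBAL_TAGS is the variable name in older DataKit releases and may differ in newer ones, and the tag value here is this example's cluster:

        env:
        - name: ENV_GLOBAL_TAGS
          value: host=__datakit_hostname,host_ip=__datakit_ip,cluster_name_k8s=k8s-solution-cluster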

image

image

1.3 ETCD Monitoring View

1.3.1 Enable ETCD Collector

In a Kubernetes cluster, a collector is enabled by defining its configuration in a ConfigMap and mounting it into the corresponding DataKit configuration directory. The ConfigMap containing etcd.conf is as follows:

apiVersion: v1
kind: ConfigMap
metadata:
  name: datakit-conf
  namespace: datakit
data:
    #### etcd
    etcd.conf: |-    
        [[inputs.prom]]
          ## Exporter address or file path (Exporter address should include http or https protocol)
          ## File paths differ across operating systems
          ## Windows example: C:\\Users
          ## UNIX-like example: /usr/local/
          urls = ["https://172.16.0.229:2379/metrics"]

          ## Collector alias
          source = "etcd"

          ## Metric type filtering, optional values are counter, gauge, histogram, summary
          # By default, only counter and gauge types are collected
          # If empty, no filtering is performed
          metric_types = ["counter", "gauge"]

          ## Metric name filtering
          # Supports regular expressions; multiple can be configured, matching any one is sufficient
          # If empty, no filtering is performed
          metric_name_filter = ["etcd_server_proposals","etcd_server_leader","etcd_server_has","etcd_network_client"]

          ## Measurement name prefix
          # Configuring this adds a prefix to the measurement name
          measurement_prefix = ""

          ## Measurement name
          # By default, the metric name is split by underscores "_", with the first segment as the measurement name and the rest as the current metric name
          # If measurement_name is configured, the metric name is not split
          # The final measurement name will include the measurement_prefix prefix
          # measurement_name = "prom"

          ## Collection interval "ns", "us" (or "µs"), "ms", "s", "m", "h"
          interval = "60s"

          ## Filtering tags, multiple tags can be configured
          # Matching tags will be ignored
          # tags_ignore = ["xxxx"]

          ## TLS configuration
          tls_open = true
          #tls_ca = "/etc/kubernetes/pki/etcd/ca.crt"
          tls_cert = "/etc/kubernetes/pki/etcd/peer.crt"
          tls_key = "/etc/kubernetes/pki/etcd/peer.key"

          ## Custom measurement names
          # Metrics with a specific prefix can be grouped into a single measurement
          # Custom measurement name configuration takes precedence over measurement_name configuration
          [[inputs.prom.measurements]]
            prefix = "etcd_"
            name = "etcd"

          ## Custom authentication method, currently only supports Bearer Token
          # [inputs.prom.auth]
          # type = "bearer_token"
          # token = "xxxxxxxx"
          # token_file = "/tmp/token"

          ## Custom tags
          # [inputs.prom.tags]
          #   some_tag = "some_value"

Log in to Rancher, under the Browse Clusters tab, select the k8s-solution-cluster cluster, navigate to More Resources -> Core -> ConfigMaps, choose the datakit namespace, click Edit Configuration on the datakit-conf row, click Add, add the etcd.conf configuration, and then click Save.

image

image

Log in to Rancher, under the Browse Clusters tab, select the k8s-solution-cluster cluster, navigate to Workloads -> DaemonSets, choose the datakit namespace, and click Edit Configuration on the datakit row.

image

On the Storage tab, mount the etcd.conf key of the ConfigMap to /usr/local/datakit/conf.d/etcd/etcd.conf, and click Save.

image
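For reference, the same mount expressed directly in the DataKit DaemonSet YAML uses the ConfigMap key as a subPath; a minimal sketch, in which the volume name is illustrative:

      # under the datakit container (spec.template.spec.containers):
        volumeMounts:
        - mountPath: /usr/local/datakit/conf.d/etcd/etcd.conf
          name: datakit-conf
          subPath: etcd.conf
      # under spec.template.spec:
      volumes:
      - configMap:
          name: datakit-conf
        name: datakit-conf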

1.3.2 Mount Certificate Files

To collect ETCD metrics over HTTPS, you need the certificates from the Kubernetes cluster. Specifically, mount the /etc/kubernetes/pki/etcd directory of the kubeadm-deployed cluster node into the DataKit container at /etc/kubernetes/pki/etcd.

      # under spec.template.spec:
      volumes:
      - hostPath:
          path: /etc/kubernetes/pki/etcd
        name: dir-etcd
      # under the datakit container (spec.template.spec.containers):
        volumeMounts:
        - mountPath: /etc/kubernetes/pki/etcd
          name: dir-etcd

Use Rancher to complete the configuration. Log in to Rancher, under the Browse Clusters tab, select the k8s-solution-cluster cluster, navigate to Workloads -> DaemonSets, choose the datakit namespace, and click Edit YAML on the datakit row.

image

Add the volumes and volumeMounts entries shown above, as in the image, and click Save.

image

image

1.3.3 Achieve ETCD Observability

Log in to Guance, click on Scenarios -> Create Dashboard, and select ETCD Monitoring View.

Enter ETCD Monitoring View as the dashboard name (the name can be customized), then click Confirm.

image

Enter the monitoring view and select the cluster name.

image

For more information on ETCD integration methods, refer to the ETCD integration documentation.

2 Observability in Istio
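
The Istio views below assume that Istio's Prometheus metrics are already being collected by DataKit, as set up in the previous article. If they are not, a prom collector pointed at istiod's metrics endpoint can be enabled in the same ConfigMap-plus-mount fashion as etcd.conf above. A minimal sketch follows; istiod.istio-system:15014 is Istio's default control-plane metrics endpoint, and the key name istiod.conf is illustrative:

    istiod.conf: |-
        [[inputs.prom]]
          ## istiod exposes Prometheus metrics on port 15014 by default
          urls = ["http://istiod.istio-system:15014/metrics"]
          source = "istiod"
          metric_types = ["counter", "gauge"]
          interval = "60s"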

2.1 Istio Mesh Monitoring View

Log in to Guance, click on Scenarios -> Create Dashboard, and select Istio Mesh Monitoring View.

image

Enter Istio Mesh Monitoring View as the dashboard name (the name can be customized), then click Confirm.

image

Enter the monitoring view and select the cluster name.

image

image

2.2 Istio Control Plane Monitoring View

Log in to Guance, click on Scenarios -> Create Dashboard, and select Istio Control Plane Monitoring View.

image

Enter the monitoring view and select the cluster name.

image

image

image

2.3 Istio Service Monitoring View

Log in to Guance, click on Scenarios -> Create Dashboard, and select Istio Service Monitoring View.

image

Enter the monitoring view and select the cluster name.

image

image

2.4 Istio Workload Monitoring View

Log in to Guance, click on Scenarios -> Create Dashboard, and select Istio Workload Monitoring View.

image

Enter the monitoring view and select the cluster name.

image

image

image
