Huawei Cloud DMS Kafka

Collect Huawei Cloud DMS Kafka Metrics data

Configuration

Install Func

It is recommended to activate the Guance integration extension DataFlux Func (Automata): all prerequisites are installed automatically, so you can proceed directly to installing the script.

If you deploy Func yourself, refer to Self-deployed Func.

Install Script

Note: Prepare in advance a Huawei Cloud AK with the required permissions (for simplicity, you can grant the global read-only permission Tenant Guest).

Automata Version Activation Script

  1. Log in to the Guance console.
  2. Click the [Integration] menu and select [Cloud Account Management].
  3. Click [Add Cloud Account], select [Huawei Cloud], and fill in the required information on the interface. If you have already configured the cloud account information, ignore this step.
  4. Click [Test]. After the test is successful, click [Save]. If the test fails, please check if the relevant configuration information is correct and retest.
  5. Click [Cloud Account Management]. You can see the added cloud account in the list. Click the corresponding cloud account to enter the details page.
  6. Click the [Integration] button on the cloud account details page. In the Not Installed list, find Huawei Cloud Kafka and click [Install]; complete the installation in the dialog that pops up.

Manual Activation Script

  1. Log in to the Func console, click [Script Market], enter the Guance script market, and search for integration_huaweicloud_kafka.

  2. Click [Install], then enter the corresponding parameters: Huawei Cloud AK, SK, and account name.

  3. Click [Deploy Startup Script]. The system will automatically create the Startup script set and configure the corresponding startup script.

  4. After activation, the corresponding automatic trigger configuration appears under "Management / Automatic Trigger Configuration". Click [Execute] to run it immediately instead of waiting for the schedule; after a short wait, you can view the execution task records and the corresponding logs.

Verification

  1. In "Management / Automatic Trigger Configuration", confirm that the corresponding task has an automatic trigger configuration, and check its task records and logs for any abnormalities.
  2. In Guance, check if there is asset information in "Infrastructure - Resource Catalog".
  3. In Guance, check if there is corresponding monitoring data in "Metrics".

Metrics

Huawei Cloud DMS Kafka metrics are collected by default. More metrics can be collected through configuration; see Huawei Cloud DMS Kafka metric details.
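The metrics in the reference above are served by Huawei Cloud's Cloud Eye (CES) metric-data API. As a hedged illustration of what a single-metric query looks like (the endpoint host, the SYS.DMS namespace, and the dimension name are assumptions to verify against the CES API reference; the installed script handles all of this, including AK/SK request signing, for you):

```python
import time

# Illustrative sketch only, NOT the integration script: assemble a Cloud Eye
# (CES) V1 "show metric data" query for one DMS Kafka metric.
CES_ENDPOINT = "https://ces.cn-north-4.myhuaweicloud.com"  # region assumed

def build_metric_query(project_id, instance_id, metric_name,
                       period=300, window_s=3600, statistic="average"):
    """Return (url, params) for GET /V1.0/{project_id}/metric-data."""
    now_ms = int(time.time() * 1000)
    url = f"{CES_ENDPOINT}/V1.0/{project_id}/metric-data"
    params = {
        "namespace": "SYS.DMS",                 # DMS Kafka namespace (assumed)
        "metric_name": metric_name,             # e.g. "group_msgs"
        "dim.0": f"instance_id,{instance_id}",  # dimension used by these metrics
        "from": str(now_ms - window_s * 1000),  # query window start (ms)
        "to": str(now_ms),                      # query window end (ms)
        "period": str(period),                  # 300 = 5-minute granularity
        "filter": statistic,                    # average / min / max / sum
    }
    return url, params

url, params = build_metric_query("my-project-id", "my-instance-id", "group_msgs")
```

A real request would then be sent to `url` with `params` plus a signed Authorization header; an unsigned request is rejected.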

Instance Monitoring Metrics

| Metric Name | Meaning | Unit | Dimension |
| --- | --- | --- | --- |
| current_partitions | Number of partitions in use in the Kafka instance | Count | instance_id |
| current_topics | Number of topics created in the Kafka instance | Count | instance_id |
| group_msgs | Total number of backlogged messages across all consumer groups in the Kafka instance | Count | instance_id |

Node Monitoring Metrics

| Metric Name | Meaning | Unit | Dimension |
| --- | --- | --- | --- |
| broker_data_size | Current message data size on the node | Byte | instance_id |
| broker_messages_in_rate | Messages produced per second | Count/s | instance_id |
| broker_bytes_in_rate | Bytes produced per second | Byte/s | instance_id |
| broker_bytes_out_rate | Bytes consumed per second | Byte/s | instance_id |
| broker_public_bytes_in_rate | Public network inbound traffic of the broker node per second | Byte/s | instance_id |
| broker_public_bytes_out_rate | Public network outbound traffic of the broker node per second | Byte/s | instance_id |
| broker_fetch_mean | Average duration of processing consumption requests on the broker node | ms | instance_id |
| broker_produce_mean | Average duration of processing production requests | ms | instance_id |
| broker_cpu_core_load | Average load per CPU core, collected at the virtual machine level of the Kafka node | % | instance_id |
| broker_disk_usage | Disk capacity usage, collected at the virtual machine level of the Kafka node | % | instance_id |
| broker_memory_usage | Memory usage, collected at the virtual machine level of the Kafka node | % | instance_id |
| broker_heap_usage | Heap memory usage of the Kafka process JVM on the Kafka node | % | instance_id |
| broker_alive | Whether the Kafka node is alive (1: alive, 0: offline) | - | instance_id |
| broker_connections | Current number of all TCP connections on the Kafka node | Count | instance_id |
| broker_cpu_usage | CPU usage of the Kafka node's virtual machine | % | instance_id |
| broker_total_bytes_in_rate | Total network inbound traffic of the broker node per second | Byte/s | instance_id |
| broker_total_bytes_out_rate | Total network outbound traffic of the broker node per second | Byte/s | instance_id |
| broker_disk_read_rate | Disk read traffic | Byte/s | instance_id |
| broker_disk_write_rate | Disk write traffic | Byte/s | instance_id |
| network_bandwidth_usage | Network bandwidth utilization | % | instance_id |

Consumer Group Monitoring Metrics

| Metric Name | Meaning | Unit | Dimension |
| --- | --- | --- | --- |
| messages_consumed | Number of messages consumed by the current consumer group | Count | instance_id |
| messages_remained | Number of messages remaining to be consumed by the consumer group | Count | instance_id |
| topic_messages_remained | Number of messages remaining to be consumed by the consumer group in the specified queue | Count | instance_id |
| topic_messages_consumed | Number of messages consumed by the consumer group in the specified queue | Count | instance_id |
| consumer_messages_remained | Number of messages remaining to be consumed by the consumer group | Count | instance_id |
| consumer_messages_consumed | Number of messages consumed by the consumer group | Count | instance_id |

Object

The collected Huawei Cloud Kafka object data structure can be viewed in "Infrastructure - Resource Catalog".

```json
{
  "measurement": "huaweicloud_kafka",
  "tags": {
    "RegionId"           : "cn-north-4",
    "charging_mode"      : "1",
    "connect_address"    : "192.168.0.161,192.168.0.126,192.168.0.31",
    "description"        : "",
    "engine"             : "kafka",
    "engine_version"     : "2.7",
    "instance_id"        : "beb33e02-xxxx-xxxx-xxxx-628a3994fd1f",
    "kafka_manager_user" : "",
    "name"               : "beb33e02-xxxx-xxxx-xxxx-628a3994fd1f",
    "port"               : "9092",
    "project_id"         : "f5f4c067d68xxxx86e173b18367bf",
    "resource_spec_code" : "",
    "service_type"       : "advanced",
    "specification"      : "kafka.2u4g.cluster.small * 3 broker",
    "status"             : "RUNNING",
    "storage_type"       : "hec",
    "user_id"            : "e4b27d49128e4bd0893b28d032a2e7c0",
    "user_name"          : "xxxx"
  },
  "fields": {
    "created_at"          : "1693203968959",
    "maintain_begin"      : "02:00:00",
    "maintain_end"        : "06:00:00",
    "storage_space"       : 186,
    "total_storage_space" : 300,
    "message"             : "{Instance JSON data}"
  }
}
```

Note: The fields in tags and fields may change with subsequent updates.

Tip 1: The value of tags.name is the instance ID, which is used as the unique identifier.

Tip 2: Fields such as message are JSON-serialized strings.
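Such fields must be deserialized before their nested attributes can be read. A minimal sketch (the payload shown is abbreviated and illustrative, not a full instance record):

```python
import json

# The object's fields.message holds the instance description as a
# JSON-serialized string; parse it before accessing nested keys.
obj = {
    "fields": {
        "storage_space": 186,
        "message": '{"engine": "kafka", "engine_version": "2.7"}',
    }
}
instance = json.loads(obj["fields"]["message"])  # str -> dict
print(instance["engine_version"])                # 2.7
```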
