MinIO¶
Collects MinIO performance metrics, including MinIO uptime, storage space distribution, bucket details, object size distribution, S3 TTFB (seconds) distribution, S3 traffic, S3 requests, etc.
Configuration¶
Supported Versions¶
- MinIO version: ALL
Note: Example MinIO version is RELEASE.2022-06-25T15-50-16Z (commit-id=bd099f5e71d0ea511846372869bfcb280a5da2f6)
Metric Collection¶
MinIO exposes metrics by default, which can be collected directly in the Prometheus format.
- Use minio-client (abbreviated as mc) to create authorization information:
$ mc alias set myminio http://192.168.0.210:9000 minioadmin minioadmin
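The bearer token and the scrape configuration shown below are then generated with mc (the exact subcommand is an assumption based on the note that follows; the alias name myminio matches the one created above):

$ mc admin prometheus generate myminio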
scrape_configs:
- job_name: minio-job
bearer_token: eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjQ4MTAwNzIxNDQsImlzcyI6InByb21ldGhldXMiLCJzdWIiOiJtaW5pb2FkbWluIn0.tzoJ7ifMxgx4jXfUKdD_Sq5Ll2-YlbaBu6FuNTZcc88t9o9STyg4yicRAgYmezVGFwYR2VFKvBSBnOnVnb0n4w
metrics_path: /minio/v2/metrics/cluster
scheme: http
static_configs:
- targets: ['192.168.0.210:9000']
Note

MinIO only provides a way to generate the token through mc for use in Prometheus metric collection; it does not set up the corresponding Prometheus server itself. The output contains bearer_token, metrics_path, scheme, and targets, which can be combined to assemble the final collection URL.
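For example, the assembled URL can be checked with curl before configuring DataKit (a minimal sketch; the host, port, and token placeholder are the example values from this page):

$ TOKEN="<bearer_token from the mc output above>"
$ curl -H "Authorization: Bearer ${TOKEN}" http://192.168.0.210:9000/minio/v2/metrics/cluster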
- Enable the DataKit collector
- Edit the prom-minio.conf configuration file:

prom-minio.conf
[[inputs.prom]]
# Exporter URLs
urls = ["http://192.168.0.210:9000/minio/v2/metrics/cluster"]
# Ignore request errors for URLs
ignore_req_err = false
# Collector alias
source = "minio"
metric_types = []
# Retain metrics to prevent time series explosion
metric_name_filter = ["minio_bucket","minio_cluster","minio_node","minio_s3","minio_usage"]
# Collection interval "ns", "us" (or "µs"), "ms", "s", "m", "h"
interval = "1m"
# TLS configuration
tls_open = false
# tls_ca = "/tmp/ca.crt"
# tls_cert = "/tmp/peer.crt"
# tls_key = "/tmp/peer.key"
# Filter tags; multiple tags can be configured
# Matching tags will be ignored, but the corresponding data will still be reported
tags_ignore = ["version","le","commit"]
# Custom authentication method, currently only supports Bearer Token
# token and token_file: configure one of them
[inputs.prom.auth]
type = "bearer_token"
token = "eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjQ4MTAwNzIxNDQsImlzcyI6InByb21ldGhldXMiLCJzdWIiOiJtaW5pb2FkbWluIn0.tzoJ7ifMxgx4jXfUKdD_Sq5Ll2-YlbaBu6FuNTZcc88t9o9STyg4yicRAgYmezVGFwYR2VFKvBSBnOnVnb0n4w"
# token_file = "/tmp/token"
# Custom measurement names
# Metrics with the following prefix can be grouped into a custom measurement
# Custom measurement name settings take precedence over measurement_name
#[[inputs.prom.measurements]]
# prefix = "cpu_"
# name = "cpu"
# [[inputs.prom.measurements]]
# prefix = "mem_"
# name = "mem"
# Discard data matching the following tag patterns
[inputs.prom.ignore_tag_kv_match]
# key1 = [ "val1.*", "val2.*"]
# key2 = [ "val1.*", "val2.*"]
# Add additional HTTP headers to the data pull request
[inputs.prom.http_headers]
# Root = "passwd"
# Michael = "1234"
# Rename tag keys in prom data
[inputs.prom.tags_rename]
overwrite_exist_tags = false
[inputs.prom.tags_rename.mapping]
# tag1 = "new-name-1"
# tag2 = "new-name-2"
# tag3 = "new-name-3"
# Send collected metrics as logs to the center
# If service field is empty, the service tag will be set to the measurement name
[inputs.prom.as_logging]
enable = false
service = "service_name"
# Custom Tags
[inputs.prom.tags]
# some_tag = "some_value"
# more_tag = "some_other_value"
Key parameter descriptions:

- urls: Prometheus metric address; fill in the metric URL exposed by MinIO
- source: Collector alias; minio is recommended
- interval: Collection interval
- metric_name_filter: Metric filtering; collect only the required metrics
- tls_open: TLS configuration
- metric_types: Metric types; if left empty, all metrics are collected
- tags_ignore: Ignore unnecessary tags
- [inputs.prom.auth]: Authorization information
  - token: Value of bearer_token
- Restart DataKit
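Restarting DataKit is typically done with its service command (a sketch; adjust to how DataKit is managed in your environment, e.g. via systemd):

$ datakit service -R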
Metrics¶
| Metric | Description |
| --- | --- |
| node_process_uptime_seconds | Node uptime |
| node_disk_free_bytes | Free disk space on the node |
| node_disk_used_bytes | Used disk space on the node |
| node_file_descriptor_open_total | Number of open file descriptors on the node |
| node_go_routine_total | Number of goroutines on the node |
| cluster_disk_online_total | Number of online disks in the cluster |
| cluster_disk_offline_total | Number of offline disks in the cluster |
| bucket_usage_object_total | Number of objects in the bucket |
| bucket_usage_total_bytes | Total bytes used in the bucket |
| bucket_objects_size_distribution | Object size distribution in the bucket |
| s3_traffic_received_bytes | S3 received traffic |
| s3_traffic_sent_bytes | S3 sent traffic |
| s3_requests_total | Total number of S3 requests |
| s3_requests_waiting_total | Number of S3 requests waiting |
| s3_requests_errors_total | Total number of S3 errors |
| s3_requests_4xx_errors_total | Number of S3 4xx errors |
| s3_time_ttfb_seconds_distribution | S3 TTFB (time to first byte) distribution in seconds |
| usage_last_activity_nano_seconds | Time since last activity |