---
dashboard:
  - desc: 'CouchBase built-in view by Exporter'
    path: 'dashboard/zh/couchbase_prom'
monitor:
  - desc: 'CouchBase monitor'
    path: 'monitor/zh/couchbase_prom'
---
# CouchBase
This collector gathers metrics from CouchBase instances, such as memory and disk usage and current connections, and sends them to Guance to help monitor and analyze CouchBase anomalies.
## Collector Configuration
### Prerequisites
#### Version Information

- CouchBase version: 7.2.0
- CouchBase Exporter version: `blakelead/couchbase-exporter:latest`
### Install CouchBase Exporter
Metrics are collected with the CouchBase Exporter client (`blakelead/couchbase-exporter`). The exporter's documentation can be found here.

Note: The username and password used below are the CouchBase Server credentials.
```shell
docker run -d --name cbexporter --publish 9191:9191 \
  --env EXPORTER_LISTEN_ADDR=:9191 \
  --env EXPORTER_TELEMETRY_PATH=/metrics \
  --env EXPORTER_SERVER_TIMEOUT=10s \
  --env EXPORTER_LOG_LEVEL=debug \
  --env EXPORTER_LOG_FORMAT=json \
  --env EXPORTER_DB_URI=http://172.17.0.92:8091 \
  --env EXPORTER_DB_TIMEOUT=10s \
  --env EXPORTER_DB_USER=Administrator \
  --env EXPORTER_DB_PASSWORD=pwd1234 \
  --env EXPORTER_SCRAPE_CLUSTER=true \
  --env EXPORTER_SCRAPE_NODE=true \
  --env EXPORTER_SCRAPE_BUCKET=true \
  --env EXPORTER_SCRAPE_XDCR=false \
  blakelead/couchbase-exporter:latest
```
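Once the container is running, you can check that the exporter is serving metrics. A minimal sketch, assuming the exporter is reachable on the local host at the port published above:

```shell
# Fetch the first lines of the exporter's Prometheus output to confirm it is up
curl -s http://127.0.0.1:9191/metrics | head -n 20
```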
Parameter introduction:

| Environment variable | Argument | Description | Default |
| --- | --- | --- | --- |
| | -config.file | Configuration file to load data from | |
| EXPORTER_LISTEN_ADDR | -web.listen-address | Address to listen on for HTTP requests | :9191 |
| EXPORTER_TELEMETRY_PATH | -web.telemetry-path | Path under which to expose metrics | /metrics |
| EXPORTER_SERVER_TIMEOUT | -web.timeout | Server read timeout in seconds | 10s |
| EXPORTER_DB_URI | -db.uri | Address of the CouchBase cluster | http://127.0.0.1:8091 |
| EXPORTER_DB_TIMEOUT | -db.timeout | CouchBase client timeout in seconds | 10s |
| EXPORTER_TLS_ENABLED | -tls.enabled | If true, enable TLS communication with the cluster | false |
| EXPORTER_TLS_SKIP_INSECURE | -tls.skip-insecure | If true, the certificate won't be verified | false |
| EXPORTER_TLS_CA_CERT | -tls.ca-cert | Root certificate of the cluster | |
| EXPORTER_TLS_CLIENT_CERT | -tls.client-cert | Client certificate | |
| EXPORTER_TLS_CLIENT_KEY | -tls.client-key | Client private key | |
| EXPORTER_DB_USER | *not allowed* | Administrator username | |
| EXPORTER_DB_PASSWORD | *not allowed* | Administrator password | |
| EXPORTER_LOG_LEVEL | -log.level | Log level: info, debug, warn, error, fatal | error |
| EXPORTER_LOG_FORMAT | -log.format | Log format: text, json | text |
| EXPORTER_SCRAPE_CLUSTER | -scrape.cluster | If false, won't scrape cluster metrics | true |
| EXPORTER_SCRAPE_NODE | -scrape.node | If false, won't scrape node metrics | true |
| EXPORTER_SCRAPE_BUCKET | -scrape.bucket | If false, won't scrape bucket metrics | true |
| EXPORTER_SCRAPE_XDCR | -scrape.xdcr | If false, won't scrape XDCR metrics | false |
| | -help | Command-line help | |
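The same settings can also be passed as command-line flags when running the exporter binary directly rather than in Docker. A sketch under the table above; the binary name is illustrative, and note that the credentials can only be supplied via environment variables (their argument column is *not allowed*):

```shell
# Credentials must come from the environment; the rest can be flags
EXPORTER_DB_USER=Administrator EXPORTER_DB_PASSWORD=pwd1234 \
couchbase-exporter \
  -web.listen-address=:9191 \
  -web.telemetry-path=/metrics \
  -db.uri=http://127.0.0.1:8091 \
  -log.level=error
```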
### Configuration Implementation
- Enable the DataKit Prom plugin: copy the sample configuration file to `couchbase-prom.conf`.
- Modify the `couchbase-prom.conf` configuration file:
```toml
[[inputs.prom]]
  ## Exporter URLs
  # e.g., "http://127.0.0.1:9191/metrics" for the exporter configured above
  urls = [""]

  ## Ignore request errors for the URL
  ignore_req_err = false

  ## Collector alias
  source = "couchbase"

  ## Data output destination
  # Configure this to write the collected data to a local file instead of sending it to the center
  # You can then debug the locally saved metrics with the command `datakit --prom-conf /path/to/this/conf`
  # If the URL is already configured as a local file path, --prom-conf will prioritize debugging the data in the output path
  # output = "/abs/path/to/file"

  ## Maximum data collection size in bytes
  # When outputting data to a local file, you can set a maximum data collection size
  # If the collected data exceeds this limit, it will be discarded
  # The default maximum is 32MB
  # max_file_size = 0

  ## Metric type filter; optional values are counter, gauge, histogram, summary
  # By default, only counter and gauge metrics are collected
  # If empty, no filtering is performed
  metric_types = ["counter", "gauge"]

  ## Metric name filter
  # Supports regular expressions; multiple patterns can be configured, and matching any one of them is sufficient
  # If empty, no filtering is performed
  # metric_name_filter = ["cpu"]

  ## Metric set name prefix
  # Configure this to add a prefix to the metric set name
  measurement_prefix = ""

  ## Metric set name
  # By default, the metric name is split on the underscore "_": the first field becomes the metric set name and the remaining fields become the current metric name
  # If measurement_name is configured, the metric name is not split
  # The final metric set name has measurement_prefix prepended
  # measurement_name = "prom"

  ## Collection interval: "ns", "us" (or "µs"), "ms", "s", "m", "h"
  interval = "10s"

  ## Tag filter; multiple tags can be configured
  # Matching tags will be ignored
  # tags_ignore = ["xxxx"]

  ## TLS configuration
  tls_open = false
  # tls_ca = "/tmp/ca.crt"
  # tls_cert = "/tmp/peer.crt"
  # tls_key = "/tmp/peer.key"

  ## Custom authentication; currently only Bearer Token is supported
  # Only one of token and token_file needs to be configured
  # [inputs.prom.auth]
  #   type = "bearer_token"
  #   token = "xxxxxxxx"
  #   token_file = "/tmp/token"

  ## Custom metric set names
  # Metrics sharing a prefix can be grouped into one metric set
  # Custom metric set name configuration takes precedence over measurement_name
  # [[inputs.prom.measurements]]
  #   prefix = "cpu_"
  #   name = "cpu"
  # [[inputs.prom.measurements]]
  #   prefix = "mem_"
  #   name = "mem"

  ## Custom tags
  [inputs.prom.tags]
  # some_tag = "some_value"
  # more_tag = "some_other_value"
```
- Restart DataKit (see the commands sketched below).
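A minimal sketch of debugging and restarting, assuming the conf lives under DataKit's default `conf.d` directory on a systemd-managed host (paths and service management vary by installation); the `--prom-conf` flag is the one mentioned in the sample above:

```shell
# Debug the collector configuration locally before restarting
datakit --prom-conf /usr/local/datakit/conf.d/prom/couchbase-prom.conf

# Restart DataKit so the new collector configuration takes effect
sudo systemctl restart datakit
```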
In Kubernetes, the collector can currently be enabled by injecting the collector configuration via a ConfigMap.
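A minimal sketch of such an injection, assuming a DaemonSet-deployed DataKit that mounts ConfigMap entries under its `conf.d` directory; the ConfigMap name, namespace, exporter address, and mount details are illustrative and depend on your deployment:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: datakit-conf
  namespace: datakit
data:
  # Mount this entry into the DataKit container at conf.d/prom/couchbase-prom.conf
  couchbase-prom.conf: |-
    [[inputs.prom]]
      urls = ["http://couchbase-exporter:9191/metrics"]
      source = "couchbase"
      metric_types = ["counter", "gauge"]
      interval = "10s"
```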
## Metrics
By default, all collected data is appended with a global tag named `host` (its value is the hostname of the machine where DataKit runs). You can also specify additional tags in the configuration via `[inputs.prom.tags]`:
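For example, the following adds two illustrative tags to every metric from this collector:

```toml
[inputs.prom.tags]
  cluster = "couchbase-prod"
  team = "dba"
```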
### Cluster metrics
Tags

| name | description |
| --- | --- |
| bucket | bucket name |
| host | hostname of the server where CouchBase is installed |
| instance | instance address |

Metrics

| name | description |
| --- | --- |
| cluster_rebalance_status | Rebalancing status |

Complete metrics can be found here.
### Node metrics
Tags

| name | description |
| --- | --- |
| bucket | bucket name |
| host | hostname of the server where CouchBase is installed |
| instance | instance address |

Metrics

| name | description |
| --- | --- |
| node_stats_couch_docs_data_size | CouchBase document data size on the node |
| node_stats_get_hits | Number of get hits |
| node_uptime_seconds | Node uptime |
| node_status | Status of the CouchBase node |

Complete metrics can be found here.
### Bucket metrics
Tags

| name | description |
| --- | --- |
| bucket | bucket name |
| host | hostname of the server where CouchBase is installed |
| instance | instance address |

Metrics

| name | description |
| --- | --- |
| bucket_ram_quota_percent_used | Memory used by the bucket, in percent |
| bucket_ops_per_second | Number of operations per second |
| bucket_item_count | Number of items in the bucket |
| bucketstats_curr_connections | Current bucket connections |
| bucketstats_delete_hits | Delete hits |
| bucketstats_disk_write_queue | Disk write queue depth |
| bucketstats_ep_bg_fetched | Disk reads per second |
| bucketstats_ep_mem_high_wat | Memory usage high water mark for auto-evictions |

Complete metrics can be found here.
### XDCR metrics
Tags

| name | description |
| --- | --- |
| bucket | bucket name |
| host | hostname of the server where CouchBase is installed |
| instance | instance address |

Metrics

Complete metrics can be found here.
## Logs
To collect CouchBase logs, follow these steps:

- Enable the DataKit logging plugin: copy the sample file.

Note: DataKit must be installed on the same host as CouchBase to collect CouchBase logs.

- Modify the `couchbase-prom.conf` configuration file:
# {"version": "1.9.2", "desc": "do NOT edit this line"}
[[inputs.logging]]
## Required
## File names or a pattern to tail.
logfiles = [
"/opt/couchbase/var/lib/couchbase/logs/couchdb.log",
]
## glob filteer
ignore = [""]
## Your logging source, if it's empty, use 'default'.
source = "couchdb"
## Add service tag, if it's empty, use $source.
service = "couchdb"
## Grok pipeline script name.
pipeline = ""
## optional status:
## "emerg","alert","critical","error","warning","info","debug","OK"
ignore_status = []
## optional encodings:
## "utf-8", "utf-16le", "utf-16le", "gbk", "gb18030" or ""
character_encoding = ""
## The pattern should be a regexp. Note the use of '''this regexp'''.
## regexp link: https://golang.org/pkg/regexp/syntax/#hdr-Syntax
# multiline_match = '''^\S'''
auto_multiline_detection = true
auto_multiline_extra_patterns = []
## Removes ANSI escape codes from text strings.
remove_ansi_escape_codes = false
## If the data sent failure, will retry forevery.
blocking_mode = true
## If file is inactive, it is ignored.
## time units are "ms", "s", "m", "h"
ignore_dead_log = "1h"
## Read file from beginning.
from_beginning = false
[inputs.logging.tags]
# some_tag = "some_value"
# more_tag = "some_other_value"
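After saving the configuration, restart DataKit and confirm the logging input is running. A sketch assuming a systemd-managed host and a recent DataKit with the `monitor` subcommand:

```shell
# Restart DataKit to pick up the logging configuration
sudo systemctl restart datakit

# Open DataKit's status view; the `logging` input should appear as running
datakit monitor
```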