APISIX
Installation and Configuration
Prerequisites
- Install APISIX
- Install DataKit
APISIX Configuration
The APISIX configuration file is `config.yaml`. Note that `datakit_host` should be set to the actual DataKit address: use the host IP in a host environment, or `datakit-service.datakit.svc` in a Kubernetes environment.
Metrics
APISIX can expose metrics via the Prometheus protocol. Add the following to the APISIX configuration file:

```yaml
apisix:
  prometheus:
    enabled: true
    path: /apisix/prometheus/metrics
    metricPrefix: apisix_
    containerPort: 9091
  plugins:
    - prometheus
```
You also need to enable the `prometheus` plugin in the global APISIX `plugins` section.
Logs
APISIX can report log data in several ways; the main one is the `http-logger` plugin. Configure the `http-logger` reporting address on the APISIX route with the following content:
```json
{
    "batch_max_size": 1,
    "uri": "http://<datakit_host>:9529/v1/write/logstreaming?source=apisix_logstreaming"
}
```
Traces
APISIX can report trace data via the OpenTelemetry protocol by enabling the `opentelemetry` plugin:
```yaml
apisix:
  plugins:
    - opentelemetry
  pluginAttrs:
    opentelemetry:
      resource:
        service.name: APISIX
        tenant.id: business_id
      collector:
        address: <datakit_host>:9529/otel
        request_timeout: 3
```
DataKit
Host
When DataKit runs directly on a host, data is collected through local collector configuration files. Enter the DataKit installation directory to configure them.
- Metrics

Enable the `prom` collector to collect APISIX metrics. Go to `conf.d/prom` under the DataKit installation directory and execute:

```shell
cp prom.conf.sample apisix.conf
```
Adjust the content of `apisix.conf`, mainly the `urls` field, so that it points to the APISIX metrics endpoint.
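A minimal sketch of what the adjusted `apisix.conf` might look like, assuming APISIX exposes metrics on port 9091 of the local host (the endpoint address is an assumption; adjust `urls` to your environment):

```toml
[[inputs.prom]]
  ## Endpoint exposed by the APISIX prometheus plugin (assumed local address)
  urls = ["http://127.0.0.1:9091/apisix/prometheus/metrics"]
  ## Measurement source name for the collected metrics
  source = "apisix"
```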
- Logs

Enable the `logstreaming` collector to collect APISIX logs. Go to `conf.d/log` under the DataKit installation directory and execute:

```shell
cp logstreaming.conf.sample logstreaming.conf
```

The default content needs no adjustment.
- Traces

Enable the `opentelemetry` collector to collect APISIX trace data. Go to `conf.d/opentelemetry` under the DataKit installation directory and execute:

```shell
cp opentelemetry.conf.sample opentelemetry.conf
```

The default content needs no adjustment.
- Restart
After making adjustments, restart DataKit.
Kubernetes
When DataKit runs in Kubernetes, configure it as follows.
- Metrics

Use the `KubernetesPrometheus` collector to collect Prometheus metrics. Edit `datakit.yaml` and add the `apisix.conf` entry to the ConfigMap.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: datakit-conf
  namespace: datakit
data:
  apisix.conf: |-
    [inputs.kubernetesprometheus]
      node_local = true
      scrape_interval = "30s"
      keep_exist_metric_name = false
      [[inputs.kubernetesprometheus.instances]]
        role       = "pod"
        namespaces = ["apisix"]
        selector   = "app.kubernetes.io/name=apisix"
        scrape     = "true"
        scheme     = "http"
        port       = "9091"
        path       = "/apisix/prometheus/metrics"
        interval   = "30s"
        [inputs.kubernetesprometheus.instances.custom]
          measurement        = "apisix"
          job_as_measurement = false
          [inputs.kubernetesprometheus.instances.custom.tags]
            node_name = "__kubernetes_pod_node_name"
            namespace = "__kubernetes_pod_namespace"
            pod_name  = "__kubernetes_pod_name"
            instance  = "__kubernetes_mate_instance"
            host      = "__kubernetes_mate_host"
```
Mount `apisix.conf` to DataKit's `/usr/local/datakit/conf.d/kubernetesprometheus/` directory.

```yaml
- mountPath: /usr/local/datakit/conf.d/kubernetesprometheus/apisix.conf
  name: datakit-conf
  subPath: apisix.conf
```
- Logs

Edit `datakit.yaml` and append `logstreaming` to the `ENV_DEFAULT_ENABLED_INPUTS` environment variable value, as shown below:

```yaml
- name: ENV_DEFAULT_ENABLED_INPUTS
  value: dk,cpu,disk,diskio,mem,swap,system,hostobject,net,host_processes,container,statsd,logstreaming
```
- Traces

Edit `datakit.yaml` and append `opentelemetry` to the `ENV_DEFAULT_ENABLED_INPUTS` environment variable value, and also set `ENV_INPUT_DDTRACE_COMPATIBLE_OTEL` to enable compatibility between OTEL and DDTrace data.

```yaml
- name: ENV_DEFAULT_ENABLED_INPUTS
  value: dk,cpu,disk,diskio,mem,swap,system,hostobject,net,host_processes,container,statsd,logstreaming,opentelemetry
- name: ENV_INPUT_DDTRACE_COMPATIBLE_OTEL
  value: "true"
```
- Restart
After making adjustments, restart DataKit.
Metrics

| Metric | Description | Type |
| --- | --- | --- |
| bandwidth | APISIX traffic (ingress/egress) | int |
| etcd_modify_indexes | Number of etcd modify index records | int |
| etcd_reachable | etcd availability: 1 means reachable, 0 means unreachable | int |
| http_latency_bucket | Request latency histogram buckets | int |
| http_latency_count | Number of request latency observations | int |
| http_latency_sum | Total of observed request latencies | int |
| http_requests_total | Total number of HTTP requests | int |
| http_status | HTTP status codes returned | int |
| nginx_http_current_connections | Current number of nginx connections | int |
| nginx_metric_errors_total | Number of nginx metric errors | int |
| node_info | Node information | int |
| shared_dict_capacity_bytes | Capacity of the nginx shared dict | int |
| shared_dict_free_space_bytes | Free space of the nginx shared dict | int |
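The `http_latency_*` metrics follow the usual Prometheus histogram convention, so the average request latency can be derived as `http_latency_sum / http_latency_count`. A minimal Python sketch of that calculation (the sample values are illustrative):

```python
def average_latency(latency_sum: float, latency_count: float) -> float:
    """Average request latency from Prometheus histogram sum/count pairs."""
    if latency_count == 0:
        # No observations yet; avoid division by zero
        return 0.0
    return latency_sum / latency_count

# Illustrative values: 1200 ms of total latency across 48 requests
print(average_latency(1200.0, 48.0))  # 25.0
```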
Logs
Use a Pipeline to extract `trace_id` from APISIX logs, correlating traces with logs:

```
# Parse the APISIX access log as JSON
jsonData = load_json(_)
requestJson = jsonData["request"]
responseJson = jsonData["response"]

add_key(http_status, responseJson["status"])
add_key(url, requestJson["url"])
add_key(client_ip, jsonData["client_ip"])

# The traceparent header has the form version-trace_id-parent_id-flags;
# the grok pattern captures the second segment as trace_id
trace_id = requestJson["headers"]["traceparent"]
grok(trace_id, "%{DATA}-%{DATA:trace_id}-%{DATA}")
```
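For reference, the same extraction logic can be sketched in plain Python. The sample log line below is a hypothetical `http-logger` payload (field names mirror the pipeline above), and the `traceparent` split follows the W3C Trace Context format `version-trace_id-parent_id-flags`:

```python
import json
import re

# Hypothetical APISIX http-logger payload for illustration
sample = json.dumps({
    "client_ip": "10.0.0.12",
    "request": {
        "url": "http://example.com/get",
        "headers": {"traceparent": "00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01"},
    },
    "response": {"status": 200},
})

data = json.loads(sample)
fields = {
    "http_status": data["response"]["status"],
    "url": data["request"]["url"],
    "client_ip": data["client_ip"],
}

# traceparent: version-trace_id-parent_id-flags; capture the second segment
m = re.match(r"[^-]+-(?P<trace_id>[^-]+)-", data["request"]["headers"]["traceparent"])
if m:
    fields["trace_id"] = m.group("trace_id")

print(fields["trace_id"])  # 0af7651916cd43dd8448eb211c80319c
```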