Dataway


Introduction

DataWay is the data gateway of Guance. All data reported by collectors to Guance must pass through the DataWay gateway.

Dataway Installation

  • Create a new Dataway

On the "Data Gateway" page in the Guance management console, click "Create Dataway". Enter a name and binding address, then click "Create".

After successful creation, the new Dataway is registered automatically and its installation script is generated.

Info

The binding address is the Dataway gateway address. It must be a complete HTTP address, for example http(s)://1.2.3.4:9528, including the protocol, host, and port. The host is usually the IP address of the machine where Dataway is deployed, but it can also be a domain name, which must resolve correctly.

Note: Ensure that collectors can access this address, otherwise data collection will fail.
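As a quick sanity check before creating the Dataway, the expected shape of the binding address can be validated with a regular expression (the address below is a placeholder):

```shell
# A complete HTTP(S) address requires protocol, host (IP or domain), and port.
addr="http://1.2.3.4:9528"
if printf '%s\n' "$addr" | grep -Eq '^https?://[A-Za-z0-9._-]+:[0-9]+$'; then
  echo "address format ok"
else
  echo "incomplete address: protocol, host and port are all required"
fi
```

An address such as `1.2.3.4:9528` (missing the protocol) would be rejected by this check.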

  • Install Dataway
DW_KODO=http://kodo_ip:port \
   DW_TOKEN=<tkn_XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX> \
   DW_UUID=<YOUR_UUID> \
   bash -c "$(curl https://static.guance.com/dataway/install.sh)"

Host installation is no longer recommended. Please install Dataway directly using the Kubernetes statefulset method.

After installation is complete, a dataway.yaml file will be generated in the installation directory. Its content is shown in the example below and can be modified manually, taking effect after restarting the service.

dataway.yaml
# ============= DATAWAY CONFIG =============

# Dataway UUID, obtained when creating a new Dataway
uuid:

# The workspace token; most of the time this is the
# system workspace's token.
token:

# secret_token is used under sinker mode to check whether incoming Datakit
# requests are valid.
secret_token:

# Whether the __internal__ token is allowed. If so, the data/request is
# directed to the workspace with the token above.
enable_internal_token: false

# Whether an empty token is allowed. If so, the data/request is
# directed to the workspace with the token above.
enable_empty_token: false

# Whether this Dataway is cascaded. For a cascaded Dataway, remote_host is
# another Dataway, not Kodo.
cascaded: false

# kodo(next dataway) related configures
remote_host:
http_timeout: 3s

http_max_idle_conn_perhost: 0 # default to CPU cores
http_max_conn_perhost: 0      # default no limit

insecure_skip_verify: false
http_client_trace: false
max_conns_perhost: 0
sni: ""

# dataway API configures
bind: 0.0.0.0:9528

# disable 404 page
disable_404page: false

# dataway TLS file path
tls_crt:
tls_key:

# enable pprof
pprof_bind: localhost:6060

api_limit_rate : 100000         # 100K
max_http_body_bytes : 67108864  # 64MB
copy_buffer_drop_size : 262144  # 256KB; copy buffers larger than this are released after use
reserved_pool_size: 4096        # reserved pool size for better GC

within_docker: false

log_level: info
log: log
gin_log: gin.log

ip_blacklist:
  ttl: "1m"
  clean_interval: "1h"

cache_cfg:
  # cache disk path
  dir: "disk_cache"

  # disable cache
  disabled: false

  clean_interval: "10s"

  # in MB, max single data package size in disk cache, such as HTTP body
  max_data_size: 100

  # in MB, single disk-batch(single file) size
  batch_size: 128

  # in MB, max disk size allowed to cache data
  max_disk_size: 65535

  # expire duration, default 7 days
  expire_duration: "168h"

prometheus:
  listen: "localhost:9090"
  url: "/metrics"
  enable: true

#sinker:
#  etcd:
#    urls:
#    - http://localhost:2379 # one or multiple etcd host
#    dial_timeout: 30s
#    key_space: "/dw_sinker" # subscribe to the etcd key
#    username: "dataway"
#    password: "<PASSWORD>"
#  #file:
#  #  path: /path/to/sinker.json

The Dataway pod yaml is as follows:

dataway-statefulset.yaml
---

apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: sts-utils-dataway
  name: dataway
  namespace: utils
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sts-utils-dataway
  serviceName: dataway
  template:
    metadata:
      annotations:
        datakit/logs: |
          [
            {
              "disable": false,
              "source": "dataway",
              "service": "dataway",
              "multiline_match": "^\\d{4}|^\\[GIN\\]"
            }
          ]
        datakit/prom.instances: |
          [[inputs.prom]]
            url = "http://$IP:9090/metrics"

            source = "dataway"
            measurement_name = "dw"
            interval = "10s"
            disable_instance_tag = true
          [inputs.prom.tags]
            service = "dataway"
            instance = "$PODNAME" # we can set as "xxx-$PODNAME"
      labels:
        app: sts-utils-dataway
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - sts-utils-dataway
              topologyKey: kubernetes.io/hostname
      containers:
        - env:
            - name: DW_REMOTE_HOST
              value: http://kodo.forethought-kodo:9527
            - name: DW_BIND
              value: 0.0.0.0:9528
            - name: DW_UUID
              value: agnt_xxxxxxxxxxxxxxxxxxxxxxxxxxxxx   # Dataway UUID
            - name: DW_TOKEN
              value: tkn_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx  # Dataway token
            - name: DW_PROM_LISTEN
              value: 0.0.0.0:9090
            - name: DW_LOG
              value: stdout
            - name: DW_LOG_LEVEL
              value: info
            - name: DW_GIN_LOG
              value: stdout
            - name: DW_DISKCACHE_DIR
              value: cache
            - name: DW_HTTP_TIMEOUT
              value: '3s'
            - name: DW_ENABLE_INTERNAL_TOKEN
              value: 'false'
            - name: DW_MAX_HTTP_BODY_BYTES
              value: '67108864'
            - name: DW_HTTP_CLIENT_TRACE
              value: 'on'
            - name: DW_RESERVED_POOL_SIZE
              value: '0'
            - name: DW_COPY_BUFFER_DROP_SIZE
              value: '262144'
            - name: DW_DISKCACHE_CAPACITY_MB
              value: '102400'
          image: pubrepo.guance.com/dataflux/dataway:1.12.1
          imagePullPolicy: IfNotPresent
          name: dataway
          ports:
            - containerPort: 9528
              name: 9528tcp01
              protocol: TCP
          resources:
            limits:
              cpu: '4'
              memory: 4Gi
            requests:
              cpu: 100m
              memory: 512Mi
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /usr/local/cloudcare/dataflux/dataway/cache
              name: dataway-cache
      dnsPolicy: ClusterFirst
      imagePullSecrets: []
      #nodeSelector:
      #  nodepool: dataway
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      #tolerations:
      #  - effect: NoSchedule
      #    key: nodepool
      #    operator: Equal
      #    value: dataway
  updateStrategy:
    rollingUpdate:
      partition: 0
    type: RollingUpdate
  volumeClaimTemplates:
    - apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: dataway-cache
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 100Gi
        storageClassName: xxxxxx  # High-Performance Storage StorageClass
        volumeMode: Filesystem

---

apiVersion: v1
kind: Service
metadata:
  name: dataway
  namespace: utils
spec:
  ports:
    - name: 9528tcp02
      nodePort: 30928
      port: 9528
      protocol: TCP
      targetPort: 9528
  selector:
    app: sts-utils-dataway
  type: NodePort

In dataway-statefulset.yaml, Dataway configuration can be modified through environment variables; see the Environment Variables section below.

Alternatively, a dataway.yaml can be mounted externally via ConfigMap, but it must be mounted as /usr/local/cloudcare/dataflux/dataway/dataway.yaml:

containers:
  - volumeMounts:
      - name: dataway-config
        mountPath: /usr/local/cloudcare/dataflux/dataway/dataway.yaml
        subPath: config.yaml
volumes:
  - configMap:
      defaultMode: 256
      name: dataway-config
      optional: false
    name: dataway-config
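For completeness, a matching ConfigMap might look like the following sketch (the name dataway-config and the config.yaml key must agree with the volume definition above; the config body is abbreviated):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: dataway-config
  namespace: utils
data:
  config.yaml: |
    # content of dataway.yaml goes here, for example:
    bind: 0.0.0.0:9528
    log_level: info
```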

The environment variables required for container installation are the same as for Kubernetes. Start a DataWay container using the following Docker command:

docker run -d \
    --name <YOUR-DW-IN-DOCKER> \
    -p 19528:9528 -p 19090:9090 \
    --mount type=bind,source=<host/path/for/diskcache>,target=/usr/local/cloudcare/dataflux/dataway/cache \
    --memory=2g --memory-reservation=256m \
    --cpus="2" \
    -e DW_UUID=<YOUR-AGNT_XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX> \
    -e DW_TOKEN=<YOUR-TKN_XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX> \
    -e DW_REMOTE_HOST=http://kodo.forethought-kodo:9527 \
    -e DW_BIND=0.0.0.0:9528 \
    -e DW_PROM_LISTEN=0.0.0.0:9090 \
    -e DW_HTTP_CLIENT_TRACE=true \
    -e DW_LOG_LEVEL=info \
    -e DW_LOG=stdout \
    -e DW_GIN_LOG=stdout \
    -e DW_DISKCACHE_CAPACITY_MB=65536 \
    pubrepo.guance.com/dataflux/dataway:1.12.1

Important Notes
  • Dataway can only run on Linux systems (currently only Linux arm64/amd64 binaries are released)
  • For host installation, the Dataway installation path is /usr/local/cloudcare/dataflux/dataway
  • Under Kubernetes, resource limits are set to 4000m/4Gi by default. Adjust according to the actual situation. The minimum requirement is 100m/512Mi.
  • Verify Dataway Installation

After installation is complete, wait a moment and refresh the "Data Gateway" page. If you see a version number in the "Version Information" column for the newly added data gateway, it means this Dataway has successfully connected to the Guance center. Frontend users can then use it to ingest data.

Once Dataway successfully connects to the Guance center, log in to the Guance console. On the "Integration" / "DataKit" page, you can view all Dataway addresses. Select the required Dataway gateway address, obtain the DataKit installation command, and execute it on the server to start collecting data.

Manage DataWay

Delete DataWay

On the "Data Gateway" page in the Guance management console, select the DataWay you want to delete, click "Configure", and then click the "Delete" button in the bottom left corner of the Edit DataWay dialog box that pops up.

Warning

After deleting the DataWay, you also need to log in to the server where the DataWay gateway is deployed, stop the DataWay service, and delete the installation directory to completely remove the DataWay.

Upgrade DataWay

On the "Data Gateway" page in the Guance management console, if an upgrade is available for the DataWay, an upgrade prompt will appear in the version information.

For host installation, run:

DW_UPGRADE=1 bash -c "$(curl https://static.guance.com/dataway/install.sh)"

For Kubernetes, simply replace the image version in the StatefulSet:

- image: pubrepo.guance.com/dataflux/dataway:1.12.1

Dataway Service Management

When Dataway is installed on a host, use the following commands to manage the Dataway service.

# Start
$ systemctl start dataway

# Restart
$ systemctl restart dataway

# Stop
$ systemctl stop dataway

For Kubernetes, restart the corresponding Pod.

Environment Variables

Image Environment Variables

The following environment variables are supported when Dataway runs in a Kubernetes environment.

Compatibility with existing dataway.yaml

Some older Dataways inject configuration via a ConfigMap (the file mounted into the container is generally named dataway.yaml). If the Dataway image finds such a file in the installation directory at startup, the following DW_* environment variables will not take effect; they only take effect after the existing ConfigMap mount is removed.

If the environment variables take effect, there will be a hidden (viewable via ls -a) .dataway.yaml file in the Dataway installation directory. You can cat this file to confirm the environment variables are effective.

HTTP Server Settings

Env Description
DW_REMOTE_HOST
type: string
required: Y
Kodo address, or next Dataway address, in the form http://host:port
DW_WHITE_LIST
type: string
required: N
Dataway client IP whitelist, comma-separated
DW_HTTP_TIMEOUT
type: string
required: N
Timeout setting for Dataway requests to Kodo or the next Dataway, default 3s
DW_HTTP_MAX_IDLE_CONN_PERHOST
type: int
required: N
Maximum idle connections setting for Dataway requests to Kodo Version-1.6.2
Default value is 1000 Version-1.11.2
DW_HTTP_MAX_CONN_PERHOST
type: int
required: N
Maximum connections setting for Dataway requests to Kodo, default is unlimited Version-1.6.2
DW_BIND
type: string
required: N
Dataway HTTP API binding address, default 0.0.0.0:9528
DW_API_LIMIT
type: int
required: N
Dataway API rate limiting setting. If set to 1000, each specific API can only be requested 1000 times within 1 second. Default is 100K.
DW_HEARTBEAT
type: string
required: N
Heartbeat interval between Dataway and the center, default 60s
DW_MAX_HTTP_BODY_BYTES
type: int
required: N
Maximum allowed HTTP Body size for Dataway API (in bytes), default 64MB
DW_TLS_INSECURE_SKIP_VERIFY
type: boolean
required: N
Ignore HTTPS/TLS certificate errors
DW_HTTP_CLIENT_TRACE
type: boolean
required: N
When Dataway acts as an HTTP client, it can enable collection of related metrics, which are eventually output in its Prometheus metrics.
DW_ENABLE_TLS
type: boolean
required: N
Enable HTTPS Version-1.4.1
DW_TLS_CRT
type: file-path
required: N
Specify the HTTPS/TLS crt file path Version-1.4.0
DW_TLS_KEY
type: file-path
required: N
Specify the HTTPS/TLS key file path Version-1.4.0
DW_SNI
type: string
required: N
Specify current Dataway SNI information Version-1.6.0
DW_DISABLE_404PAGE
type: boolean
required: N
Disable 404 page Version-1.6.1
DW_HTTP_IP_BLACKLIST_TTL
type: string
required: N
Set IP blacklist validity period, default 1m Version-1.11.0
DW_HTTP_IP_BLACKLIST_CLEAN_INTERVAL
type: string
required: N
Set IP blacklist cleanup interval, default 1h Version-1.11.0
HTTP TLS Settings

To generate a TLS certificate valid for one year, you can use the following OpenSSL command:

# Generate a TLS certificate valid for one year
$ openssl req -new -newkey rsa:4096 -x509 -sha256 -days 365 -nodes -out tls.crt -keyout tls.key
...

After executing this command, you will be prompted to enter some necessary information, including your country, region, city, organization name, department name, and your email address. This information will be included in your certificate.

After completing the information entry, you will generate two files: tls.crt (the certificate file) and tls.key (the private key file). Please keep your private key file safe and ensure its security.
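If you prefer to skip the interactive prompts (e.g., for scripted deployments), the subject can be supplied on the command line with -subj; the country, organization, and CN below are placeholders to adjust:

```shell
# Same certificate generation, but non-interactive: subject fields are
# passed via -subj instead of being answered at prompts.
openssl req -new -newkey rsa:4096 -x509 -sha256 -days 365 -nodes \
  -subj "/C=CN/O=Example/CN=dataway.example.com" \
  -out tls.crt -keyout tls.key

# Inspect the generated certificate's subject and validity period.
openssl x509 -in tls.crt -noout -subject -dates
```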

To enable the application to use these TLS certificates, you need to set the absolute paths of these two files into the application's environment variables. Here is an example of setting environment variables:

DW_ENABLE_TLS must be enabled first for the other two ENVs (DW_TLS_CRT/DW_TLS_KEY) to take effect. Version-1.4.1

env:
- name: DW_ENABLE_TLS
  value: "true"
- name: DW_TLS_CRT
  value: "/path/to/your/tls.crt"
- name: DW_TLS_KEY
  value: "/path/to/your/tls.key"

Replace /path/to/your/tls.crt and /path/to/your/tls.key with the actual paths where your tls.crt and tls.key files are stored.

After setting, you can test if TLS is working with the following command:

$ curl -k https://localhost:9528

If successful, an ASCII Art message saying It's working! will be displayed. If the certificate does not exist, the Dataway log will have an error similar to the following:

server listen(TLS) failed: open /path/to/your/tls.{crt,key}: no such file or directory

At this point, Dataway cannot start, and the curl command above will also report an error:

$ curl -vvv -k https://localhost:9528
curl: (7) Failed to connect to localhost port 9528 after 6 ms: Couldn't connect to server

Logging Settings

Env Description
DW_LOG
type: string
required: N
Log path, default is log. To output logs to standard output for easy log collection, simply configure it as stdout.
DW_LOG_LEVEL
type: string
required: N
Default is info, options include debug.
DW_GIN_LOG
type: string
required: N
Default is gin.log. This can also be configured as stdout for easy collection.
DW_LOG_PKG_ID
type: bool
required: N
Version-1.12.0 Whether to record package ID in logs. Default true.

Token/UUID Settings

Env Description
DW_UUID
type: string
required: Y
Dataway UUID, generated by the system workspace when creating a new Dataway.
DW_TOKEN
type: string
required: Y
Usually the data upload token of the system workspace.
DW_SECRET_TOKEN
type: string
required: N
When the Sinker function is enabled, this Token can be set.
DW_ENABLE_INTERNAL_TOKEN
type: boolean
required: N
Allow using __internal__ as the client Token, in which case the system workspace's Token is used by default.
DW_ENABLE_EMPTY_TOKEN
type: boolean
required: N
Allow uploading data without a Token, in which case the system workspace's Token is used by default.

Sinker Settings

Env Description
DW_SECRET_TOKEN
type: string
required: N
When the Sinker function is enabled, this Token can be set.
DW_CASCADED
type: string
required: N
Whether Dataway is cascaded.
DW_SINKER_ETCD_URLS
type: string
required: N
List of etcd addresses, separated by ,, e.g., http://1.2.3.4:2379,http://1.2.3.4:2380.
DW_SINKER_ETCD_DIAL_TIMEOUT
type: string
required: N
etcd connection timeout, default 30s.
DW_SINKER_ETCD_KEY_SPACE
type: string
required: N
etcd key name where the Sinker configuration is located (default /dw_sinker).
DW_SINKER_ETCD_USERNAME
type: string
required: N
etcd username.
DW_SINKER_ETCD_PASSWORD
type: string
required: N
etcd password.
DW_SINKER_FILE_PATH
type: file-path
required: N
Specify sinker rule configuration via a local file.
DW_SINKER_CACHE_BUCKETS
type: int
required: N
Version-1.12.0 Specify the number of Sinker cache buckets, default 64.
DW_SINKER_CACHE_RESERVED_CAPACITY
type: int
required: N
Version-1.12.0 Specify the upper limit of Sinker cache entries, default 1M (1<<20).
DW_SINKER_CACHE_TTL
type: int
required: N
Version-1.12.0 Specify the survival time of cached elements in Sinker, default 10m (10 minutes).
DW_SINKER_CACHE_PREALLOC
type: bool
required: N
Version-1.12.0 Pre-allocate cache memory, default false.
Warning

If both local file and etcd methods are specified, the Sinker rules in the local file take priority. If neither is specified, the sinker function is effectively turned off.
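As a sketch, enabling the etcd-based sinker on the Kubernetes deployment above could look like the following environment fragment (addresses and credentials are placeholders):

```yaml
env:
  - name: DW_SECRET_TOKEN
    value: secret_xxxxxxxx              # placeholder
  - name: DW_CASCADED
    value: 'false'
  - name: DW_SINKER_ETCD_URLS
    value: http://1.2.3.4:2379,http://1.2.3.4:2380
  - name: DW_SINKER_ETCD_DIAL_TIMEOUT
    value: 30s
  - name: DW_SINKER_ETCD_KEY_SPACE
    value: /dw_sinker
  - name: DW_SINKER_ETCD_USERNAME
    value: dataway
  - name: DW_SINKER_ETCD_PASSWORD
    value: <PASSWORD>
```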

Prometheus Metrics Exposure

Env Description
DW_PROM_URL
type: string
required: N
URL Path for Prometheus metrics (default /metrics).
DW_PROM_LISTEN
type: string
required: N
Address for exposing Prometheus metrics (default localhost:9090).
DW_PROM_DISABLED
type: boolean
required: N
Disable Prometheus metrics exposure.

Disk Cache Settings

Env Description
DW_DISKCACHE_DIR
type: file-path
required: N
Set the cache directory. This directory is usually externally mounted storage.
DW_DISKCACHE_DISABLE
type: boolean
required: N
Disable disk cache. To keep the cache enabled, leave this environment variable unset.
DW_DISKCACHE_CLEAN_INTERVAL
type: string
required: N
Cache cleanup interval, default 30s.
DW_DISKCACHE_EXPIRE_DURATION
type: string
required: N
Cache expiration time, default 168h (7d).
DW_DISKCACHE_CAPACITY_MB
type: int
required: N
Version-1.6.0 Set the available disk space size, in MB, default 20GB.
DW_DISKCACHE_BATCH_SIZE_MB
type: int
required: N
Version-1.6.0 Set the maximum size of a single disk cache file, in MB, default 64MB.
DW_DISKCACHE_MAX_DATA_SIZE_MB
type: int
required: N
Version-1.6.0 Set the maximum size of a single cache content (e.g., a single HTTP body), in MB, default 64MB. Single data packets exceeding this size will be discarded.
Tips

Setting DW_DISKCACHE_DISABLE disables the disk cache.
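For example, on the Kubernetes deployment above, the disk cache could be tuned with environment variables like these (values are illustrative):

```yaml
env:
  - name: DW_DISKCACHE_DIR
    value: cache
  - name: DW_DISKCACHE_CAPACITY_MB
    value: '102400'           # 100 GB, matching the 100Gi PVC above
  - name: DW_DISKCACHE_EXPIRE_DURATION
    value: '168h'             # 7 days (the default)
  - name: DW_DISKCACHE_CLEAN_INTERVAL
    value: '60s'
```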

Performance Related Settings

Version-1.6.0

Env Description
DW_COPY_BUFFER_DROP_SIZE
type: int
required: N
Single HTTP body buffers exceeding the specified size (in bytes) will be cleared immediately to avoid consuming too much memory. Default value 256KB.

Dataway API List

Details for each API below are to be added.

GET /v1/ping

Version-1.11.0

  • API Description: Get the current version number and release date of Dataway, and also return the egress IP of the client request.

If the 404 page is disabled on Dataway (disable_404page), this API is unavailable.

GET /v1/ntp

Version-1.6.0

  • API Description: Get the current Unix timestamp (in seconds) of Dataway.

POST /v1/write/:category

  • API Description: Receive various collected data uploaded by Datakit.

GET /v1/datakit/pull

  • API Description: Handle Datakit's request to pull central configuration (blacklist/Pipeline).

POST /v1/write/rum/replay

  • API Description: Receive Session Replay data uploaded by Datakit.

POST /v1/upload/profiling

  • API Description: Receive Profiling data uploaded by Datakit.

POST /v1/election

  • API Description: Handle Datakit's election request.

POST /v1/election/heartbeat

  • API Description: Handle Datakit's election heartbeat request.

POST /v1/query/raw

Handle DQL query requests. A simple example is as follows:

POST /v1/query/raw?token=<workspace-token> HTTP/1.1
Content-Type: application/json

{
    "token": "workspace-token",
    "queries": [
        {
            "query": "M::cpu LIMIT 1"
        }
    ],
    "echo_explain": <true/false>
}

Return example:

{
  "content": [
    {
      "series": [
        {
          "name": "cpu",
          "columns": [
            "time",
            "usage_iowait",
            "usage_total",
            "usage_user",
            "usage_guest",
            "usage_system",
            "usage_steal",
            "usage_guest_nice",
            "usage_irq",
            "load5s",
            "usage_idle",
            "usage_nice",
            "usage_softirq",
            "global_tag1",
            "global_tag2",
            "host",
            "cpu"
          ],
          "values": [
            [
              1709782208662,
              0,
              7.421875,
              3.359375,
              0,
              4.0625,
              0,
              0,
              0,
              1,
              92.578125,
              0,
              0,
              null,
              null,
              "WIN-JCHUL92N9IP",
              "cpu-total"
            ]
          ]
        }
      ],
      "points": null,
      "cost": "24.558375ms",
      "is_running": false,
      "async_id": "",
      "query_parse": {
        "namespace": "metric",
        "sources": {
          "cpu": "exact"
        },
        "fields": {},
        "funcs": {}
      },
      "index_name": "",
      "index_store_type": "",
      "query_type": "guancedb",
      "complete": false,
      "index_names": "",
      "scan_completed": false,
      "scan_index": "",
      "next_cursor_time": -1,
      "sample": 1,
      "interval": 0,
      "window": 0
    }
  ]
}

Return result description:

  • The real data is located in the inner series field.
  • name indicates the measurement name (here the CPU metric is queried; for log data, this field is absent).
  • columns indicates the names of the returned result columns.
  • values contains the corresponding column results for columns.

Info
  • The token in the URL request parameters can be different from the token in the JSON body. The former is used to verify the legality of the query request, and the latter is used to determine the target data's workspace.
  • The queries field can carry multiple queries, each query can carry additional fields. For the specific field list, refer to here

POST /v1/workspace

  • API Description: Handle workspace query requests initiated by the Datakit end.

POST /v1/object/labels

  • API Description: Handle requests to modify object Labels.

DELETE /v1/object/labels

  • API Description: Handle requests to delete object Labels.

GET /v1/check/:token

  • API Description: Check if the token is valid.

Dataway Metrics Collection

HTTP client metrics collection

To collect metrics for Dataway HTTP requests to Kodo (or the next hop Dataway), you need to manually enable the http_client_trace configuration. Or specify the environment variable DW_HTTP_CLIENT_TRACE=true.

Dataway itself exposes Prometheus metrics. They can be collected by the prom collector that comes with Datakit. An example collector configuration is as follows:

[[inputs.prom]]
  ## Exporter URLs.
  urls = [ "http://localhost:9090/metrics", ]
  source = "dataway"
  election = true
  measurement_name = "dw" # The dataway measurement set is fixed as dw, do not change it.
[inputs.prom.tags]
  service = "dataway"

If Datakit is deployed in the cluster (requires Datakit 1.14.2 or above), then Prometheus metrics exposure can be enabled in Dataway (the default POD yaml for Dataway already includes this):

annotations: # The following annotations are added by default.
   datakit/prom.instances: |
     [[inputs.prom]]
       url = "http://$IP:9090/metrics" # The port here (default 9090) depends on the situation.
       source = "dataway"
       measurement_name = "dw" # Fixed as this measurement set.
       interval = "10s"
       disable_instance_tag = true

     [inputs.prom.tags]
       service = "dataway"
       instance = "$PODNAME"

...
env:
- name: DW_PROM_LISTEN
  value: "0.0.0.0:9090" # Keep the port here consistent with the port in the url above.

If collection is successful, you can see the corresponding monitoring view by searching for dataway in the Guance "Scenes" / "Built-in Views".

Dataway Metrics List

The following are the metrics exposed by Dataway, obtained by requesting http://localhost:9090/metrics. You can use the following command to watch a specific metric in real time (refreshing every 3 seconds):

If some metrics cannot be found, the relevant business module may not be running yet. Some metrics exist only in the latest version; per-metric version information is not listed here. Refer to the metric list returned by the /metrics interface.

watch -n 3 'curl -s http://localhost:9090/metrics | grep -a <METRIC-NAME>'
TYPE NAME LABELS HELP
SUMMARY dataway_http_api_elapsed_seconds api,method,sinked,status API request latency
SUMMARY dataway_http_api_body_buffer_utilization api API body buffer utilization(Len/Cap)
SUMMARY dataway_http_api_body_copy api API body copy
SUMMARY dataway_http_api_body_copy_seconds api API body copy latency
SUMMARY dataway_http_api_body_copy_enlarge api API body copy enlarged pooled buffer
SUMMARY dataway_http_api_resp_size_bytes api,method,status API response size
SUMMARY dataway_http_api_req_size_bytes api,method,status API request size
COUNTER dataway_http_api_body_too_large_dropped_total api,method API request too large dropped
COUNTER dataway_http_api_with_inner_token api,method API request with inner token
COUNTER dataway_http_api_dropped_total api,method API request dropped when sinker rule match failed
COUNTER dataway_ip_blacklist_blocked_total api,method IP blacklist blocked requests total
COUNTER dataway_ip_blacklist_missed_total api,method IP blacklist missed total
COUNTER dataway_ip_blacklist_added_total api,method,reason IP blacklist added total
COUNTER dataway_syncpool_stats name,type sync.Pool usage stats
COUNTER dataway_http_api_copy_body_failed_total api API copy body failed count
COUNTER dataway_http_api_signed_total api,method API signature count
SUMMARY dataway_http_api_cached_bytes api,cache_type,method,reason API cached body bytes
SUMMARY dataway_http_api_reusable_body_read_bytes api,method API re-read body on forking request
SUMMARY dataway_http_api_recv_points api API /v1/write/:category received points
SUMMARY dataway_http_api_send_points api API /v1/write/:category send points
SUMMARY dataway_http_api_cache_points api,cache_type Disk cached /v1/write/:category points
SUMMARY dataway_http_api_cache_cleaned_points api,cache_type,status Disk cache cleaned /v1/write/:category points
COUNTER dataway_http_api_forked_total api,method,token API request forked total
GAUGE dataway_http_cli_info max_conn_per_host,max_idle_conn,max_idle_conn_per_host,timeout Dataway as client settings
GAUGE dataway_http_info cascaded,docker,http_client_trace,listen,max_body,release_date,remote,version Dataway API basic info
GAUGE dataway_last_heartbeat_time N/A Dataway last heartbeat with Kodo timestamp
SUMMARY dataway_http_api_copy_buffer_drop_total max API copy buffer dropped(too large cached buffer) count
GAUGE dataway_cpu_usage N/A Dataway CPU usage(%)
GAUGE dataway_mem_stat type Dataway memory usage stats
GAUGE dataway_open_files N/A Dataway open files
GAUGE dataway_cpu_cores N/A Dataway CPU cores
GAUGE dataway_uptime N/A Dataway uptime
COUNTER dataway_process_ctx_switch_total type Dataway process context switch count(Linux only)
COUNTER dataway_process_io_count_total type Dataway process IO count
COUNTER dataway_process_io_bytes_total type Dataway process IO bytes count
SUMMARY dataway_http_api_dropped_cache api,method,reason Dropped cache data due to various reasons
GAUGE dataway_httpcli_dns_resolved_address api,coalesced,host,server HTTP DNS resolved address
SUMMARY dataway_httpcli_dns_cost_seconds api,coalesced,host,server HTTP DNS cost
SUMMARY dataway_httpcli_tls_handshake_seconds api,server HTTP TLS handshake cost
SUMMARY dataway_httpcli_http_connect_cost_seconds api,server HTTP connect cost
SUMMARY dataway_httpcli_got_first_resp_byte_cost_seconds api,server Got first response byte cost
SUMMARY http_latency api,server HTTP latency
COUNTER dataway_httpcli_tcp_conn_total api,server,remote,type HTTP TCP connection count
COUNTER dataway_httpcli_conn_reused_from_idle_total api,server HTTP connection reused from idle count
SUMMARY dataway_httpcli_conn_idle_time_seconds api,server HTTP connection idle time
GAUGE dataway_sinker_rule_cache_size name Sinker rule cache size
GAUGE dataway_sinker_rule_error error Rule errors
GAUGE dataway_sinker_default_rule_hit info Default sinker rule hit count
GAUGE dataway_sinker_rule_last_applied_time source,version Rule last applied time(Unix timestamp)
SUMMARY dataway_sinker_rule_cost_seconds type Rule cost time seconds
SUMMARY dataway_sinker_rule_match_count type Sinker rule match count on each request
SUMMARY dataway_sinker_lru_cache_cleaned name Sinker LRU cache cleanup removed entries
SUMMARY dataway_sinker_lru_cache_dropped_ttl_seconds bucket,name,reason Sinker LRU cache dropped TTL seconds
COUNTER dataway_sinker_pull_total event,source Sinker pulled or pushed total
GAUGE dataway_sinker_rule_count type,with_default Sinker rule count
GAUGE dataway_sinker_rule_cache_get_total name,type Sinker rule cache get hit/miss count
COUNTER diskcache_rotate_total path Cache rotate count, mean file rotate from data to data.0000xxx
COUNTER diskcache_remove_total path Removed file count, if some file read EOF, remove it from un-read list
COUNTER diskcache_wakeup_total path Wakeup count on sleeping write file
COUNTER diskcache_pos_updated_total op,path .pos file updated count
COUNTER diskcache_seek_back_total path Seek back when Get() got any error
GAUGE diskcache_capacity path Current capacity(in bytes)
GAUGE diskcache_max_data path Max data to Put(in bytes), default 0
GAUGE diskcache_batch_size path Data file size(in bytes)
GAUGE diskcache_size path Current cache size that waiting to be consumed(get)
GAUGE diskcache_open_time no_fallback_on_error,no_lock,no_pos,no_sync,path Current cache Open time in unix timestamp(second)
GAUGE diskcache_last_close_time path Current cache last Close time in unix timestamp(second)
GAUGE diskcache_datafiles path Current un-read data files
SUMMARY diskcache_get_latency path Get() cost seconds
SUMMARY diskcache_put_latency path Put() cost seconds
SUMMARY diskcache_put_bytes path Cache Put() bytes
SUMMARY diskcache_get_bytes path Cache Get() bytes
SUMMARY diskcache_dropped_data path,reason Dropped data during Put() when capacity reached.
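
The metrics above are exposed in the Prometheus text format. As a minimal sketch (not part of Dataway), one exposed line can be split into its metric name and value with plain shell parameter expansion; the sample line and its path label are fabricated for illustration:

```shell
# Split one Prometheus text-format line (fabricated sample) into name and value.
sample='diskcache_size{path="/usr/local/cloudcare/dataflux/dataway/cache"} 1024'
name=${sample%%\{*}    # metric name: everything before the label block
value=${sample##* }    # value: everything after the last space
echo "${name}=${value}"   # → diskcache_size=1024
```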

Metrics Collection in Docker Mode

There are two host-installation modes: installing directly on the host machine, or installing via Docker. Here we explain how metrics collection differs when Dataway is installed via Docker.

When installed via Docker, the HTTP port exposed for metrics is mapped to port 19090 on the host machine (by default). In this case, the metrics collection address is http://localhost:19090/metrics.

If a different port is specified, the Docker installation maps it to that port plus 10000 on the host. The specified port therefore must not exceed 55535.

Additionally, when installed via Docker, the profile collection port is also exposed. By default, it is mapped to port 16060 on the host machine. Its mechanism is also to add 10000 to the specified port.
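
The "+10000" mapping rule described above can be sketched as follows (the port values are the documented defaults):

```shell
# Docker-mode port mapping: host port = specified port + 10000,
# so the specified port must not exceed 55535 (65535 - 10000).
metric_port=9090
profile_port=6060
echo "metrics host port: $((metric_port + 10000))"   # → 19090
echo "profile host port: $((profile_port + 10000))"  # → 16060
```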

Dataway's Own Log Collection and Processing

Dataway's own logs fall into two types: gin (HTTP access) logs and its program logs. They can be separated using the following Pipeline:

# Pipeline for dataway logging

# Test log samples
'''
2023-12-14T11:27:06.744+0800    DEBUG   apis    apis/api_upload_profile.go:272  save profile file to disk [ok] /v1/upload/profiling?token=****************a4e3db8481c345a94fe5a
[GIN] 2021/10/25 - 06:48:07 | 200 |   30.890624ms |  114.215.200.73 | POST     "/v1/write/logging?token=tkn_5c862a11111111111111111111111111"
'''

add_pattern("TOKEN", "tkn_\\w+")
add_pattern("GINTIME", "%{YEAR}/%{MONTHNUM}/%{MONTHDAY}%{SPACE}-%{SPACE}%{HOUR}:%{MINUTE}:%{SECOND}")
grok(_,"\\[GIN\\]%{SPACE}%{GINTIME:timestamp}%{SPACE}\\|%{SPACE}%{NUMBER:dataway_code}%{SPACE}\\|%{SPACE}%{NOTSPACE:cost_time}%{SPACE}\\|%{SPACE}%{NOTSPACE:client_ip}%{SPACE}\\|%{SPACE}%{NOTSPACE:method}%{SPACE}%{GREEDYDATA:http_url}")

# gin logging
if cost_time != nil {
  if http_url != nil  {
    grok(http_url, "%{TOKEN:token}")
    cover(token, [5, 15])
    replace(message, "tkn_\\w+(\\w{6})", "****************$1")
    replace(http_url, "tkn_\\w+(\\w{6})", "****************$1")
  }

  group_between(dataway_code, [200,299], "info", status)
  group_between(dataway_code, [300,399], "notice", status)
  group_between(dataway_code, [400,499], "warning", status)
  group_between(dataway_code, [500,599], "error", status)

  if sample(0.1) { # drop 90% debug log
    drop()
    exit()
  } else {
    set_tag(sample_rate, "0.1")
  }

  parse_duration(cost_time)
  duration_precision(cost_time, "ns", "ms")

  set_measurement('gin', true)
  set_tag(service,"dataway")
  exit()
}

# app logging
if cost_time == nil {
  grok(_,"%{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{NOTSPACE:status}%{SPACE}%{NOTSPACE:module}%{SPACE}%{NOTSPACE:code}%{SPACE}%{GREEDYDATA:msg}")
  if status == nil {
    grok(message,"Error%{SPACE}%{DATA:errormsg}")
    if errormsg != nil {
      add_key(status,"error")
      drop_key(errormsg)
    }
  }
  lowercase(status)

  # if debug level enabled, drop most of them
  if status == 'debug' {
    if sample(0.1) { # drop 90% debug log
      drop()
      exit()
    } else {
      set_tag(sample_rate, "0.1")
    }
  }

  group_in(status, ["error", "panic", "dpanic", "fatal","err","fat"], "error", status) # mark them as 'error'

  if msg != nil {
    grok(msg, "%{TOKEN:token}")
    cover(token, [5, 15])
    replace(message, "tkn_\\w+(\\w{6})", "****************$1")
    replace(msg, "tkn_\\w+(\\w{6})", "****************$1")
  }

  set_measurement("dataway-log", true)
  set_tag(service,"dataway")
}

Dataway bug report

Dataway itself exposes metrics and profiling collection endpoints. We can collect this information for troubleshooting.

Collect the following information based on your actual configured ports and addresses; the commands below assume the default parameters.

dw-bug-report.sh
br_dir="dw-br-$(date +%s)"
mkdir -p $br_dir

echo "save bug report to ${br_dir}"

# Modify the configuration here according to the actual situation.
dw_ip="localhost" # IP address where Dataway metrics/profile are exposed.
metric_port=9090  # Port for metrics exposure.
profile_port=6060 # Port for profile exposure.
dw_yaml_conf="/usr/local/cloudcare/dataflux/dataway/dataway.yaml"
dw_dot_yaml_conf="/usr/local/cloudcare/dataflux/dataway/.dataway.yaml" # This file exists for container installation.

# Collect runtime metrics.
curl -v "http://${dw_ip}:${metric_port}/metrics" -o $br_dir/metrics

# Collect profiling information.
curl -v "http://${dw_ip}:${profile_port}/debug/pprof/allocs" -o $br_dir/allocs
curl -v "http://${dw_ip}:${profile_port}/debug/pprof/heap" -o $br_dir/heap
curl -v "http://${dw_ip}:${profile_port}/debug/pprof/profile" -o $br_dir/profile # This command will run for about 30s.

cp $dw_yaml_conf $br_dir/dataway.yaml.copy
cp $dw_dot_yaml_conf $br_dir/.dataway.yaml.copy

tar czvf ${br_dir}.tar.gz ${br_dir}
rm -rf ${br_dir}

Run the script:

$ sh dw-bug-report.sh
...

After execution, a file similar to dw-br-1721188604.tar.gz will be generated. Extract this file.

FAQ

Request Body Too Large Issue

Version-1.3.7

Dataway has a default setting for the request body size (default 64MB). When the request body is too large, the client will receive an HTTP 413 error (Request Entity Too Large). If the request body is within a reasonable range, you can appropriately increase this value (in bytes):

  • Set the environment variable DW_MAX_HTTP_BODY_BYTES
  • Set max_http_body_bytes in dataway.yaml

If excessively large request packets occur during operation, they are reflected in both metrics and logs:

  • The metric dataway_http_too_large_dropped_total exposes the number of dropped large requests.
  • Search the Dataway log, e.g., grep 'drop too large request' log. The log outputs details of the HTTP request headers, making it easier to identify the client.
Warning

In the disk cache module, there is also a maximum write limit for a single data block (default 64MB). If you increase the maximum request body configuration, you must also adjust this configuration (DW_DISKCACHE_MAX_DATA_SIZE_MB) to ensure that large requests can still be written to the disk cache.
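
For Kubernetes deployments, the two limits can be raised together via environment variables, in the same env-list form used elsewhere in this document. The 128MB values below are illustrative only:

```yaml
env:
- name: DW_MAX_HTTP_BODY_BYTES          # HTTP body limit, in bytes
  value: "134217728"                    # 128 * 1024 * 1024
- name: DW_DISKCACHE_MAX_DATA_SIZE_MB   # single cache block limit, in MB
  value: "128"
```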


Dataway Service Management

When Dataway is installed on a host, use the following commands to manage the Dataway service.

# Start
$ systemctl start dataway

# Restart
$ systemctl restart dataway

# Stop
$ systemctl stop dataway

For Kubernetes, restart the corresponding Pod.

Environment Variables

Image Environment Variables

The following environment variables are supported when Dataway runs in a Kubernetes environment.

Compatibility with existing dataway.yaml

Because some older Dataway deployments inject configuration via a ConfigMap (the file mounted into the container is usually named dataway.yaml), if the Dataway image finds such a file in the installation directory at startup, the following DW_* environment variables will not take effect. They only take effect after the existing ConfigMap mount is removed.

If the environment variables take effect, there will be a hidden (viewable via ls -a) .dataway.yaml file in the Dataway installation directory. You can cat this file to confirm the environment variables are effective.

HTTP Server Settings

Env Description
DW_REMOTE_HOST
type: string
required: Y
Kodo address, or next Dataway address, in the form http://host:port
DW_WHITE_LIST
type: string
required: N
Dataway client IP whitelist, comma-separated
DW_HTTP_TIMEOUT
type: string
required: N
Timeout setting for Dataway requests to Kodo or the next Dataway, default 3s
DW_HTTP_MAX_IDLE_CONN_PERHOST
type: int
required: N
Maximum idle connections setting for Dataway requests to Kodo Version-1.6.2
Default value is 1000 Version-1.11.2
DW_HTTP_MAX_CONN_PERHOST
type: int
required: N
Maximum connections setting for Dataway requests to Kodo, default is unlimited Version-1.6.2
DW_BIND
type: string
required: N
Dataway HTTP API binding address, default 0.0.0.0:9528
DW_API_LIMIT
type: int
required: N
Dataway API rate limiting setting. If set to 1000, each specific API can only be requested 1000 times within 1 second. Default is 100K.
DW_HEARTBEAT
type: string
required: N
Heartbeat interval between Dataway and the center, default 60s
DW_MAX_HTTP_BODY_BYTES
type: int
required: N
Maximum allowed HTTP Body size for Dataway API (in bytes), default 64MB
DW_TLS_INSECURE_SKIP_VERIFY
type: boolean
required: N
Ignore HTTPS/TLS certificate errors
DW_HTTP_CLIENT_TRACE
type: boolean
required: N
When Dataway acts as an HTTP client, it can enable collection of related metrics, which are eventually output in its Prometheus metrics.
DW_ENABLE_TLS
type: boolean
required: N
Enable HTTPS Version-1.4.1
DW_TLS_CRT
type: file-path
required: N
Path to the HTTPS/TLS certificate (crt) file Version-1.4.0
DW_TLS_KEY
type: file-path
required: N
Path to the HTTPS/TLS private key file Version-1.4.0
DW_SNI
type: string
required: N
Specify current Dataway SNI information Version-1.6.0
DW_DISABLE_404PAGE
type: boolean
required: N
Disable 404 page Version-1.6.1
DW_HTTP_IP_BLACKLIST_TTL
type: string
required: N
Set IP blacklist validity period, default 1m Version-1.11.0
DW_HTTP_IP_BLACKLIST_CLEAN_INTERVAL
type: string
required: N
Set IP blacklist cleanup interval, default 1h Version-1.11.0
HTTP TLS Settings

To generate a TLS certificate valid for one year, you can use the following OpenSSL command:

# Generate a TLS certificate valid for one year
$ openssl req -new -newkey rsa:4096 -x509 -sha256 -days 365 -nodes -out tls.crt -keyout tls.key
...

After executing this command, you will be prompted to enter some necessary information, including your country, region, city, organization name, department name, and your email address. This information will be included in your certificate.

After completing the information entry, you will generate two files: tls.crt (the certificate file) and tls.key (the private key file). Please keep your private key file safe and ensure its security.
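
To skip the interactive prompts and inspect what was generated, you can pass a subject directly and examine the certificate with openssl. The subject value below is just an example, and a smaller 2048-bit key is used here only to keep the throwaway example fast:

```shell
# Generate a throwaway certificate non-interactively (example subject).
openssl req -new -newkey rsa:2048 -x509 -sha256 -days 365 -nodes \
  -subj "/CN=localhost" -out tls.crt -keyout tls.key

# Inspect the result: subject, expiry date, and validity right now.
openssl x509 -in tls.crt -noout -subject -enddate
openssl x509 -in tls.crt -noout -checkend 0   # exit code 0 while still valid
```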

To enable the application to use these TLS certificates, you need to set the absolute paths of these two files into the application's environment variables. Here is an example of setting environment variables:

DW_ENABLE_TLS must be enabled first for the other two ENVs (DW_TLS_CRT/DW_TLS_KEY) to take effect. Version-1.4.1

env:
- name: DW_ENABLE_TLS
  value: "true"
- name: DW_TLS_CRT
  value: "/path/to/your/tls.crt"
- name: DW_TLS_KEY
  value: "/path/to/your/tls.key"

Replace /path/to/your/tls.crt and /path/to/your/tls.key with the actual paths where your tls.crt and tls.key files are stored.

After setting, you can test if TLS is working with the following command:

$ curl -k https://localhost:9528

If successful, an ASCII Art message saying It's working! will be displayed. If the certificate does not exist, the Dataway log will have an error similar to the following:

server listen(TLS) failed: open /path/to/your/tls.{crt,key}: no such file or directory

At this point, Dataway cannot start, and the curl command above will also report an error:

$ curl -vvv -k https://localhost:9528
curl: (7) Failed to connect to localhost port 9528 after 6 ms: Couldn't connect to server

Logging Settings

Env Description
DW_LOG
type: string
required: N
Log path, default is log. To output logs to standard output for easy log collection, simply configure it as stdout.
DW_LOG_LEVEL
type: string
required: N
Default is info, options include debug.
DW_GIN_LOG
type: string
required: N
Default is gin.log. This can also be configured as stdout for easy collection.
DW_LOG_PKG_ID
type: bool
required: N
Version-1.12.0 Whether to record package ID in logs. Default true.

Token/UUID Settings

Env Description
DW_UUID
type: string
required: Y
Dataway UUID, generated by the system workspace when creating a new Dataway.
DW_TOKEN
type: string
required: Y
Usually the data upload token of the system workspace.
DW_SECRET_TOKEN
type: string
required: N
When the Sinker function is enabled, this Token can be set.
DW_ENABLE_INTERNAL_TOKEN
type: boolean
required: N
Allow using __internal__ as the client Token, in which case the system workspace's Token is used by default.
DW_ENABLE_EMPTY_TOKEN
type: boolean
required: N
Allow uploading data without a Token, in which case the system workspace's Token is used by default.

Sinker Settings

Env Description
DW_SECRET_TOKEN
type: string
required: N
When the Sinker function is enabled, this Token can be set.
DW_CASCADED
type: string
required: N
Whether Dataway is cascaded.
DW_SINKER_ETCD_URLS
type: string
required: N
List of etcd addresses, separated by ,, e.g., http://1.2.3.4:2379,http://1.2.3.4:2380.
DW_SINKER_ETCD_DIAL_TIMEOUT
type: string
required: N
etcd connection timeout, default 30s.
DW_SINKER_ETCD_KEY_SPACE
type: string
required: N
etcd key name where the Sinker configuration is located (default /dw_sinker).
DW_SINKER_ETCD_USERNAME
type: string
required: N
etcd username.
DW_SINKER_ETCD_PASSWORD
type: string
required: N
etcd password.
DW_SINKER_FILE_PATH
type: file-path
required: N
Specify sinker rule configuration via a local file.
DW_SINKER_CACHE_BUCKETS
type: int
required: N
Version-1.12.0 Specify the number of Sinker cache buckets, default 64.
DW_SINKER_CACHE_RESERVED_CAPACITY
type: int
required: N
Version-1.12.0 Specify the upper limit of Sinker cache entries, default 1M (1<<20).
DW_SINKER_CACHE_TTL
type: int
required: N
Version-1.12.0 Specify the survival time of cached elements in Sinker, default 10m (10 minutes).
DW_SINKER_CACHE_PREALLOC
type: bool
required: N
Version-1.12.0 Pre-allocate cache memory, default false.
Warning

If both local file and etcd methods are specified, the Sinker rules in the local file take priority. If neither is specified, the sinker function is effectively turned off.

Prometheus Metrics Exposure

Env Description
DW_PROM_URL
type: string
required: N
URL Path for Prometheus metrics (default /metrics).
DW_PROM_LISTEN
type: string
required: N
Address for exposing Prometheus metrics (default localhost:9090).
DW_PROM_DISABLED
type: boolean
required: N
Disable Prometheus metrics exposure.

Disk Cache Settings

Env Description
DW_DISKCACHE_DIR
type: file-path
required: N
Set the cache directory. This directory is usually externally mounted storage.
DW_DISKCACHE_DISABLE
type: boolean
required: N
Disable the disk cache. To keep the cache enabled, leave this environment variable unset.
DW_DISKCACHE_CLEAN_INTERVAL
type: string
required: N
Cache cleanup interval, default 30s.
DW_DISKCACHE_EXPIRE_DURATION
type: string
required: N
Cache expiration time, default 168h (7d).
DW_DISKCACHE_CAPACITY_MB
type: int
required: N
Version-1.6.0 Set the available disk space size, in MB, default 20GB.
DW_DISKCACHE_BATCH_SIZE_MB
type: int
required: N
Version-1.6.0 Set the maximum size of a single disk cache file, in MB, default 64MB.
DW_DISKCACHE_MAX_DATA_SIZE_MB
type: int
required: N
Version-1.6.0 Set the maximum size of a single cache content (e.g., a single HTTP body), in MB, default 64MB. Single data packets exceeding this size will be discarded.
Tips

Setting DW_DISKCACHE_DISABLE disables the disk cache.

Performance Related Settings

Version-1.6.0

Env Description
DW_COPY_BUFFER_DROP_SIZE
type: int
required: N
Single HTTP body buffers exceeding the specified size (in bytes) will be cleared immediately to avoid consuming too much memory. Default value 256KB.

Dataway API List

Details for each API below are to be added.

GET /v1/ping

Version-1.11.0

  • API Description: Get the current version number and release date of Dataway, and also return the egress IP of the client request.

If Dataway has the 404 page disabled (disable_404page), this API will not be available.

GET /v1/ntp

Version-1.6.0

  • API Description: Get the current Unix timestamp (in seconds) of Dataway.

POST /v1/write/:category

  • API Description: Receive various collected data uploaded by Datakit.

GET /v1/datakit/pull

  • API Description: Handle Datakit's request to pull central configuration (blacklist/Pipeline).

POST /v1/write/rum/replay

  • API Description: Receive Session Replay data uploaded by Datakit.

POST /v1/upload/profiling

  • API Description: Receive Profiling data uploaded by Datakit.

POST /v1/election

  • API Description: Handle Datakit's election request.

POST /v1/election/heartbeat

  • API Description: Handle Datakit's election heartbeat request.

POST /v1/query/raw

  • API Description: Handle DQL query requests. A simple example is as follows:

POST /v1/query/raw?token=<workspace-token> HTTP/1.1
Content-Type: application/json

{
    "token": "workspace-token",
    "queries": [
        {
            "query": "M::cpu LIMIT 1"
        }
    ],
    "echo_explain": <true/false>
}
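
The request above can be assembled with a heredoc and sent with curl; the endpoint and token here are placeholders to replace with your own:

```shell
# Build the DQL request body shown above (token is a placeholder).
cat > query.json <<'EOF'
{
  "token": "tkn_REPLACE_WITH_WORKSPACE_TOKEN",
  "queries": [ { "query": "M::cpu LIMIT 1" } ],
  "echo_explain": false
}
EOF

# Then POST it (assumes Dataway listening on localhost:9528):
#   curl -s -X POST "http://localhost:9528/v1/query/raw?token=tkn_REPLACE_WITH_WORKSPACE_TOKEN" \
#        -H "Content-Type: application/json" -d @query.json
```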

Return example:

{
  "content": [
    {
      "series": [
        {
          "name": "cpu",
          "columns": [
            "time",
            "usage_iowait",
            "usage_total",
            "usage_user",
            "usage_guest",
            "usage_system",
            "usage_steal",
            "usage_guest_nice",
            "usage_irq",
            "load5s",
            "usage_idle",
            "usage_nice",
            "usage_softirq",
            "global_tag1",
            "global_tag2",
            "host",
            "cpu"
          ],
          "values": [
            [
              1709782208662,
              0,
              7.421875,
              3.359375,
              0,
              4.0625,
              0,
              0,
              0,
              1,
              92.578125,
              0,
              0,
              null,
              null,
              "WIN-JCHUL92N9IP",
              "cpu-total"
            ]
          ]
        }
      ],
      "points": null,
      "cost": "24.558375ms",
      "is_running": false,
      "async_id": "",
      "query_parse": {
        "namespace": "metric",
        "sources": {
          "cpu": "exact"
        },
        "fields": {},
        "funcs": {}
      },
      "index_name": "",
      "index_store_type": "",
      "query_type": "guancedb",
      "complete": false,
      "index_names": "",
      "scan_completed": false,
      "scan_index": "",
      "next_cursor_time": -1,
      "sample": 1,
      "interval": 0,
      "window": 0
    }
  ]
}

Return result description:

  • The real data is located in the inner series field.
  • name indicates the measurement name (here the CPU metric is queried; for log data, this field is absent).
  • columns indicates the names of the returned result columns.
  • values contains the corresponding column results for columns.

Info
  • The token in the URL request parameters can be different from the token in the JSON body. The former is used to verify the legality of the query request, and the latter is used to determine the target data's workspace.
  • The queries field can carry multiple queries, and each query can carry additional fields. For the specific field list, refer to the DQL query documentation.

POST /v1/workspace

  • API Description: Handle workspace query requests initiated by the Datakit end.

POST /v1/object/labels

  • API Description: Handle requests to modify object Labels.

DELETE /v1/object/labels

  • API Description: Handle requests to delete object Labels.

GET /v1/check/:token

  • API Description: Check if the token is valid.

Dataway Metrics Collection

HTTP client metrics collection

To collect metrics for Dataway HTTP requests to Kodo (or the next-hop Dataway), you need to manually enable the http_client_trace configuration, or set the environment variable DW_HTTP_CLIENT_TRACE=true.

Dataway itself exposes Prometheus metrics. They can be collected by the prom collector that comes with Datakit. An example collector configuration is as follows:

[[inputs.prom]]
  ## Exporter URLs.
  urls = [ "http://localhost:9090/metrics", ]
  source = "dataway"
  election = true
  measurement_name = "dw" # The dataway measurement set is fixed as dw, do not change it.
[inputs.prom.tags]
  service = "dataway"

If Datakit is deployed in the cluster (requires Datakit 1.14.2 or above), the Prometheus metrics exposed by Dataway can also be collected within the cluster.


  1. This limit is used to prevent the Dataway container/Pod from being restricted by the system to only about 20,000 connections at runtime. Increasing this limit affects the efficiency of Dataway data uploads; when Dataway traffic is high, consider increasing the number of CPUs per Dataway instance or scaling Dataway horizontally.
