
Dataway


Introduction

DataWay is the data gateway of Guance. All data reported by collectors to Guance must pass through the DataWay gateway.

Dataway Installation

  • Create a New Dataway

In the Guance management console, go to the "Data Gateway" page and click "Create Dataway". Enter the name and binding address, then click "Create".

After successful creation, the new Dataway appears in the list and its installation script is generated automatically.

Info

The binding address is the Dataway gateway address. It must be a complete HTTP address, e.g., http(s)://1.2.3.4:9528, including the protocol, host address, and port. The host address is generally the IP address of the machine where Dataway is deployed, but a domain name can also be used, provided it resolves correctly.

Note: Ensure that the collector can access this address, otherwise data collection will fail.
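To quickly confirm that a collector host can actually reach the binding address, you can probe Dataway's /v1/ping endpoint (the address below is the example from above; replace it with your own):

```shell
# Probe the Dataway ping API from a collector host; a normal response
# contains Dataway version information.
curl -s http://1.2.3.4:9528/v1/ping
```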

  • Install Dataway
DW_KODO=http://kodo_ip:port \
   DW_TOKEN=<tkn_XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX> \
   DW_UUID=<YOUR_UUID> \
   bash -c "$(curl https://static.guance.com/dataway/install.sh)"

Host installation is no longer recommended. Please install Dataway via the Kubernetes StatefulSet method instead.

After installation, a dataway.yaml file will be generated in the installation directory. Its content is as shown in the example below and can be modified manually. Changes take effect after restarting the service.

dataway.yaml
# ============= DATAWAY CONFIG =============

# Dataway UUID, obtained when creating a new Dataway in the console
uuid:

# The workspace token. Most of the time this is the
# system workspace's token.
token:

# secret_token is used in sinker mode to check whether incoming Datakit
# requests are valid.
secret_token:

# Whether the __internal__ token is allowed. If so, the data/request is routed
# to the workspace with the token above
enable_internal_token: false

# Whether an empty token is allowed. If so, the data/request is routed
# to the workspace with the token above
enable_empty_token: false

# Whether this Dataway is cascaded. For a cascaded Dataway, remote_host points
# to another Dataway, not Kodo.
cascaded: false

# Kodo (or next Dataway) related configuration
remote_host:
http_timeout: 3s

http_max_idle_conn_perhost: 0 # default to CPU cores
http_max_conn_perhost: 0      # default no limit

insecure_skip_verify: false
http_client_trace: false
sni: ""

# dataway API configures
bind: 0.0.0.0:9528

# disable 404 page
disable_404page: false

# dataway TLS file path
tls_crt:
tls_key:

# enable pprof
pprof_bind: localhost:6060

api_limit_rate: 100000          # 100K requests per second
max_http_body_bytes: 67108864   # 64MB
copy_buffer_drop_size: 262144   # 256KB; copy buffers larger than this are released after use
reserved_pool_size: 4096        # reserved pool size for better GC

within_docker: false

log_level: info
log: log
gin_log: gin.log

ip_blacklist:
  ttl: "1m"
  clean_interval: "1h"

cache_cfg:
  # cache disk path
  dir: "disk_cache"

  # disable cache
  disabled: false

  clean_interval: "10s"

  # in MB, max single data package size in disk cache, such as HTTP body
  max_data_size: 100

  # in MB, single disk-batch(single file) size
  batch_size: 128

  # in MB, max disk size allowed to cache data
  max_disk_size: 65535

  # expire duration, default 7 days
  expire_duration: "168h"

prometheus:
  listen: "localhost:9090"
  url: "/metrics"
  enable: true

#sinker:
#  cache_options:
#    prealloc: true
#    reserved_capacity: 10000000 # max cached items
#    buckets: 64
#    ttl: 10m # clear unactive matches
#  etcd:
#    urls:
#    - http://localhost:2379 # one or multiple etcd host
#    dial_timeout: 30s
#    key_space: "/dw_sinker" # subscribe to the etcd key
#    username: "dataway"
#    password: "<PASSWORD>"
#  file:
#    path: /path/to/sinker.json

The Dataway Pod YAML is as follows:

dataway-statefulset.yaml
---

apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: sts-utils-dataway
  name: dataway
  namespace: utils
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sts-utils-dataway
  serviceName: dataway
  template:
    metadata:
      annotations:
        datakit/logs: |
          [
            {
              "disable": false,
              "source": "dataway",
              "service": "dataway",
              "multiline_match": "^\\d{4}|^\\[GIN\\]"
            }
          ]
        datakit/prom.instances: |
          [[inputs.prom]]
            url = "http://$IP:9090/metrics"

            source = "dataway"
            measurement_name = "dw"
            interval = "10s"
            disable_instance_tag = true
          [inputs.prom.tags]
            service = "dataway"
            instance = "$PODNAME" # we can set as "xxx-$PODNAME"
      labels:
        app: sts-utils-dataway
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - sts-utils-dataway
              topologyKey: kubernetes.io/hostname
      containers:
        - env:
            - name: DW_REMOTE_HOST
              value: http://kodo.forethought-kodo:9527
            - name: DW_BIND
              value: 0.0.0.0:9528
            - name: DW_UUID
              value: agnt_xxxxxxxxxxxxxxxxxxxxxxxxxxxxx   # Dataway UUID
            - name: DW_TOKEN
              value: tkn_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx  # Dataway token
            - name: DW_PROM_LISTEN
              value: 0.0.0.0:9090
            - name: DW_LOG
              value: stdout
            - name: DW_LOG_LEVEL
              value: info
            - name: DW_GIN_LOG
              value: stdout
            - name: DW_DISKCACHE_DIR
              value: cache
            - name: DW_HTTP_TIMEOUT
              value: '3s'
            - name: DW_ENABLE_INTERNAL_TOKEN
              value: 'false'
            - name: DW_MAX_HTTP_BODY_BYTES
              value: '67108864'
            - name: DW_HTTP_CLIENT_TRACE
              value: 'true'
            - name: DW_RESERVED_POOL_SIZE
              value: '0'
            - name: DW_COPY_BUFFER_DROP_SIZE
              value: '262144'
            - name: DW_DISKCACHE_CAPACITY_MB
              value: '102400'
          image: pubrepo.guance.com/dataflux/dataway:1.14.0
          imagePullPolicy: IfNotPresent
          name: dataway
          ports:
            - containerPort: 9528
              name: 9528tcp01
              protocol: TCP
          resources:
            limits:
              cpu: '4'
              memory: 4Gi
            requests:
              cpu: 100m
              memory: 512Mi
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /usr/local/cloudcare/dataflux/dataway/cache
              name: dataway-cache
      dnsPolicy: ClusterFirst
      imagePullSecrets: []
      #nodeSelector:
      #  nodepool: dataway
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      #tolerations:
      #  - effect: NoSchedule
      #    key: nodepool
      #    operator: Equal
      #    value: dataway
  updateStrategy:
    rollingUpdate:
      partition: 0
    type: RollingUpdate
  volumeClaimTemplates:
    - apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: dataway-cache
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 100Gi
        storageClassName: xxxxxx  # High-Performance Storage StorageClass
        volumeMode: Filesystem

---

apiVersion: v1
kind: Service
metadata:
  name: dataway
  namespace: utils
spec:
  ports:
    - name: 9528tcp02
      nodePort: 30928
      port: 9528
      protocol: TCP
      targetPort: 9528
  selector:
    app: sts-utils-dataway
  type: NodePort

In dataway-statefulset.yaml, the Dataway configuration can be modified via environment variables; see the Environment Variables section below.

Alternatively, you can mount an external dataway.yaml via ConfigMap, but it must be mounted as /usr/local/cloudcare/dataflux/dataway/dataway.yaml:

containers:
  - name: dataway
    volumeMounts:
      - name: dataway-config
        mountPath: /usr/local/cloudcare/dataflux/dataway/dataway.yaml
        subPath: config.yaml
volumes:
  - name: dataway-config
    configMap:
      defaultMode: 256
      name: dataway-config
      optional: false
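The referenced ConfigMap can be created from a local dataway.yaml, for example (namespace and names follow the sample YAML above; the key name config.yaml must match the subPath):

```shell
# Create the ConfigMap holding dataway.yaml in the utils namespace
kubectl create configmap dataway-config \
  --from-file=config.yaml=dataway.yaml \
  -n utils
```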

The environment variables required for container installation are the same as for Kubernetes. Start a DataWay container with the following Docker command:

docker run -d \
    --name <YOUR-DW-IN-DOCKER> \
    -p 19528:9528 -p 19090:9090 \
    --mount type=bind,source=<host/path/for/diskcache>,target=/usr/local/cloudcare/dataflux/dataway/cache \
    --memory=2g --memory-reservation=256m \
    --cpus="2" \
    -e DW_UUID=<YOUR-AGNT_XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX> \
    -e DW_TOKEN=<YOUR-TKN_XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX> \
    -e DW_REMOTE_HOST=http://kodo.forethought-kodo:9527 \
    -e DW_BIND=0.0.0.0:9528 \
    -e DW_PROM_LISTEN=0.0.0.0:9090 \
    -e DW_HTTP_CLIENT_TRACE=true \
    -e DW_LOG_LEVEL=info \
    -e DW_LOG=stdout \
    -e DW_GIN_LOG=stdout \
    -e DW_DISKCACHE_CAPACITY_MB=65536 \
    pubrepo.guance.com/dataflux/dataway:1.14.0
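After the container starts, you can check that it is serving on the mapped ports (19528 for the API and 19090 for metrics, as mapped above):

```shell
# API liveness check via the mapped port
curl -s http://localhost:19528/v1/ping

# Prometheus metrics via the mapped port
curl -s http://localhost:19090/metrics | head
```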

Notes
  • Dataway can only run on Linux systems (currently only Linux arm64/amd64 binaries are released).
  • For host installation, the Dataway installation path is /usr/local/cloudcare/dataflux/dataway.
  • In Kubernetes, the default resource limits are 4000m/4Gi. Adjust them to your actual workload; the minimum requirement is 100m/512Mi.
  • Verify Dataway Installation

After installation, wait a moment and refresh the "Data Gateway" page. If you see a version number in the "Version Information" column of the newly added data gateway, it indicates that this Dataway has successfully connected to the Guance center. Frontend users can then use it to ingest data.

Once Dataway successfully connects to the Guance center, log in to the Guance console. On the "Integration" / "DataKit" page, you can view all Dataway addresses. Select the required Dataway gateway address, obtain the DataKit installation command, and execute it on the server to start collecting data.

Managing DataWay

Deleting DataWay

In the Guance management console "Data Gateway" page, select the DataWay you want to delete, click "Configure". In the pop-up edit DataWay dialog, click the "Delete" button at the bottom left.

Warning

After deleting DataWay, you also need to log in to the server where the DataWay gateway is deployed, stop the DataWay service, and then delete the installation directory to completely remove DataWay.

Upgrading DataWay

In the Guance management console "Data Gateway" page, if an upgrade is available for DataWay, an upgrade prompt will appear in the version information area.

DW_UPGRADE=1 bash -c "$(curl https://static.guance.com/dataway/install.sh)"

Simply replace the image version:

- image: pubrepo.guance.com/dataflux/dataway:1.14.0

Dataway Service Management

When Dataway is installed on a host, you can use the following commands to manage the Dataway service.

# Start
$ systemctl start dataway

# Restart
$ systemctl restart dataway

# Stop
$ systemctl stop dataway

For Kubernetes, restart the corresponding Pod.
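With the sample StatefulSet above (namespace utils, name dataway), a restart might look like this:

```shell
# Rolling restart of all Dataway Pods
kubectl -n utils rollout restart statefulset/dataway

# Or restart a single Pod; the StatefulSet recreates it
kubectl -n utils delete pod dataway-0
```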

Environment Variables

Image Environment Variables

Dataway supports the following environment variables when running in a Kubernetes environment.

Compatibility with existing dataway.yaml

Since some older Dataways inject configuration via a ConfigMap (the mounted file is generally named dataway.yaml), if the Dataway image finds such a ConfigMap-mounted file in the installation directory at startup, the DW_* environment variables below will not take effect. They only take effect after the ConfigMap mount is removed.

If the environment variables are in effect, a hidden .dataway.yaml file (viewable via ls -a) will exist in the Dataway installation directory; you can cat this file to confirm.

HTTP Server Settings

  • DW_REMOTE_HOST (string, required): Kodo address, or the next Dataway address, in the form http://host:port
  • DW_WHITE_LIST (string, optional): Dataway client IP whitelist, comma-separated
  • DW_HTTP_TIMEOUT (string, optional): Timeout for Dataway requests to Kodo or the next Dataway, default 3s
  • DW_HTTP_MAX_IDLE_CONN_PERHOST (int, optional): Maximum idle connections for Dataway requests to Kodo (Version-1.6.2); default 1000 (Version-1.11.2)
  • DW_HTTP_MAX_CONN_PERHOST (int, optional): Maximum connections for Dataway requests to Kodo, default no limit (Version-1.6.2)
  • DW_BIND (string, optional): Dataway HTTP API binding address, default 0.0.0.0:9528
  • DW_API_LIMIT (int, optional): Dataway API rate limit; if set to 1000, each specific API is allowed only 1000 requests per second. Default 100K
  • DW_HEARTBEAT (string, optional): Heartbeat interval between Dataway and the center, default 60s
  • DW_MAX_HTTP_BODY_BYTES (int, optional): Maximum HTTP body size accepted by the Dataway API, in bytes, default 64MB
  • DW_TLS_INSECURE_SKIP_VERIFY (boolean, optional): Ignore HTTPS/TLS certificate errors
  • DW_HTTP_CLIENT_TRACE (boolean, optional): When Dataway acts as an HTTP client, collect client-side metrics and expose them via its Prometheus metrics
  • DW_ENABLE_TLS (boolean, optional): Enable HTTPS (Version-1.4.1)
  • DW_TLS_CRT (file-path, optional): Path to the HTTPS/TLS certificate file (Version-1.4.0)
  • DW_TLS_KEY (file-path, optional): Path to the HTTPS/TLS key file (Version-1.4.0)
  • DW_SNI (string, optional): SNI of the current Dataway (Version-1.6.0)
  • DW_DISABLE_404PAGE (boolean, optional): Disable the 404 page (Version-1.6.1)
  • DW_HTTP_IP_BLACKLIST_TTL (string, optional): IP blacklist time-to-live, default 1m (Version-1.11.0)
  • DW_HTTP_IP_BLACKLIST_CLEAN_INTERVAL (string, optional): IP blacklist cleanup interval, default 1h (Version-1.11.0)

HTTP TLS Settings

To generate a TLS certificate valid for one year, you can use the following OpenSSL command:

# Generate a TLS certificate valid for one year
$ openssl req -new -newkey rsa:4096 -x509 -sha256 -days 365 -nodes -out tls.crt -keyout tls.key
...

After executing this command, you will be prompted to enter some necessary information, including your country, region, city, organization name, department name, and your email address. This information will be included in your certificate.

After completing the information entry, two files will be generated: tls.crt (the certificate) and tls.key (the private key). Keep your private key file secure.
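To double-check the result, you can inspect the certificate's subject and validity window with openssl x509. The sketch below generates a throwaway certificate non-interactively (hypothetical CN, smaller 2048-bit key for speed) and then inspects it:

```shell
# Non-interactive variant of the command above, with a hypothetical subject
openssl req -new -newkey rsa:2048 -x509 -sha256 -days 365 -nodes \
  -subj "/CN=dataway.example.com" -out tls.crt -keyout tls.key

# Print the subject and the notBefore/notAfter validity dates
openssl x509 -in tls.crt -noout -subject -dates
```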

To enable the application to use these TLS certificates, you need to set the absolute paths of these two files into the application's environment variables. Here is an example of setting environment variables:

DW_ENABLE_TLS must be enabled first; only then do the other two ENVs (DW_TLS_CRT/DW_TLS_KEY) take effect. Version-1.4.1

env:
- name: DW_ENABLE_TLS
  value: "true"
- name: DW_TLS_CRT
  value: "/path/to/your/tls.crt"
- name: DW_TLS_KEY
  value: "/path/to/your/tls.key"

Replace /path/to/your/tls.crt and /path/to/your/tls.key with the actual paths where your tls.crt and tls.key files are stored.

After setting, you can test if TLS is effective with the following command:

$ curl -k https://localhost:9528

If successful, an ASCII Art message It's working! will be displayed. If the certificate does not exist, Dataway logs will show an error similar to:

server listen(TLS) failed: open /path/to/your/tls.{crt,key}: no such file or directory

In this case, Dataway cannot start, and the above curl command will also report an error:

$ curl -vvv -k https://localhost:9528
curl: (7) Failed to connect to localhost port 9528 after 6 ms: Couldn't connect to server

Logging Settings

  • DW_LOG (string, optional): Log path, default log. Set it to stdout to write logs to standard output for easier collection
  • DW_LOG_LEVEL (string, optional): Default info; debug is also available
  • DW_GIN_LOG (string, optional): Default gin.log; can also be set to stdout for easier collection
  • DW_LOG_PKG_ID (bool, optional): Whether to record the package ID in logs, default true (Version-1.12.0)

Token/UUID Settings

  • DW_UUID (string, required): Dataway UUID, generated by the system workspace when creating a new Dataway
  • DW_TOKEN (string, required): Usually the data-upload token of the system workspace
  • DW_SECRET_TOKEN (string, optional): Token that can be set when the Sinker functionality is enabled
  • DW_ENABLE_INTERNAL_TOKEN (boolean, optional): Allow __internal__ as the client token; such requests use the system workspace token by default
  • DW_ENABLE_EMPTY_TOKEN (boolean, optional): Allow uploading data without a token; such requests use the system workspace token by default

Sinker Settings

  • DW_SECRET_TOKEN (string, optional): Token that can be set when the Sinker functionality is enabled
  • DW_CASCADED (string, optional): Whether this Dataway is cascaded
  • DW_SINKER_ETCD_URLS (string, optional): List of etcd addresses, comma-separated, e.g., http://1.2.3.4:2379,http://1.2.3.4:2380
  • DW_SINKER_ETCD_DIAL_TIMEOUT (string, optional): etcd connection timeout, default 30s
  • DW_SINKER_ETCD_KEY_SPACE (string, optional): etcd key holding the Sinker configuration, default /dw_sinker
  • DW_SINKER_ETCD_USERNAME (string, optional): etcd username
  • DW_SINKER_ETCD_PASSWORD (string, optional): etcd password
  • DW_SINKER_FILE_PATH (file-path, optional): Load the sinker rule configuration from a local file
  • DW_SINKER_CACHE_BUCKETS (int, optional): Number of Sinker cache buckets, default 64 (Version-1.12.0)
  • DW_SINKER_CACHE_RESERVED_CAPACITY (int, optional): Upper limit of Sinker cache entries, default about 1 million (1<<20) (Version-1.12.0)
  • DW_SINKER_CACHE_TTL (string, optional): TTL of cached Sinker elements, default 10m (Version-1.12.0)
  • DW_SINKER_CACHE_PREALLOC (bool, optional): Pre-allocate cache memory, default false (Version-1.12.0)

Warning

If both local file and etcd methods are specified, the Sinker rules in the local file take priority. If neither is specified, the sinker functionality is effectively turned off.

Prometheus Metrics Exposure

  • DW_PROM_URL (string, optional): Prometheus metrics URL path, default /metrics
  • DW_PROM_LISTEN (string, optional): Prometheus metrics listen address, default localhost:9090
  • DW_PROM_DISABLED (boolean, optional): Disable Prometheus metrics exposure

Disk Cache Settings

  • DW_DISKCACHE_DIR (file-path, optional): Cache directory, generally on externally mounted storage
  • DW_DISKCACHE_DISABLE (boolean, optional): Disable the disk cache; to keep the cache enabled, remove this environment variable
  • DW_DISKCACHE_CLEAN_INTERVAL (string, optional): Cache cleanup interval, default 30s
  • DW_DISKCACHE_EXPIRE_DURATION (string, optional): Cache expiration time, default 168h (7 days)
  • DW_DISKCACHE_CAPACITY_MB (int, optional): Available disk space, in MB, default 20GB (Version-1.6.0)
  • DW_DISKCACHE_BATCH_SIZE_MB (int, optional): Maximum size of a single disk cache file, in MB, default 64MB (Version-1.6.0)
  • DW_DISKCACHE_MAX_DATA_SIZE_MB (int, optional): Maximum size of a single cached item (e.g., one HTTP body), in MB, default 64MB; single data packets exceeding this size are discarded (Version-1.6.0)
Tips

Setting DW_DISKCACHE_DISABLE disables disk cache.

Performance-Related Settings

Version-1.6.0

  • DW_COPY_BUFFER_DROP_SIZE (int, optional): HTTP body buffers exceeding this size, in bytes, are released immediately to avoid excessive memory use. Default 256KB

Dataway API List

Details for each API below are to be added.

GET /v1/ping

Version-1.11.0

  • API Description: Get the current version number and release date of Dataway, and also return the egress IP of the client request.

If DataWay has disabled the 404 page (disable_404page), this interface will not be available.

GET /v1/ntp

Version-1.6.0

  • API Description: Get the current Unix timestamp (in seconds) of Dataway
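Both status endpoints can be probed with curl (assuming Dataway listens on localhost:9528; the exact response bodies depend on the Dataway version):

```shell
# Version number, release date and client egress IP
curl -s http://localhost:9528/v1/ping

# Current Unix timestamp (seconds)
curl -s http://localhost:9528/v1/ntp
```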

POST /v1/write/:category

  • API Description: Receive various collected data uploaded by Datakit

GET /v1/datakit/pull

  • API Description: Handle Datakit requests for pulling center configuration (blacklist/Pipeline)

POST /v1/write/rum/replay

  • API Description: Receive Session Replay data uploaded by Datakit

POST /v1/upload/profiling

  • API Description: Receive Profiling data uploaded by Datakit

POST /v1/election

  • API Description: Handle Datakit election requests

POST /v1/election/heartbeat

  • API Description: Handle Datakit election heartbeat requests

POST /v1/query/raw

Handle DQL query requests. A simple example is as follows:

POST /v1/query/raw?token=<workspace-token> HTTP/1.1
Content-Type: application/json

{
    "token": "workspace-token",
    "queries": [
        {
            "query": "M::cpu LIMIT 1"
        }
    ],
    "echo_explain": <true/false>
}
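The request above can be issued with curl, for example (host and token are placeholders):

```shell
# DQL query through Dataway; replace <workspace-token> with a real token
curl -s -X POST 'http://localhost:9528/v1/query/raw?token=<workspace-token>' \
  -H 'Content-Type: application/json' \
  -d '{"token": "<workspace-token>", "queries": [{"query": "M::cpu LIMIT 1"}], "echo_explain": false}'
```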

Response example:

{
  "content": [
    {
      "series": [
        {
          "name": "cpu",
          "columns": [
            "time",
            "usage_iowait",
            "usage_total",
            "usage_user",
            "usage_guest",
            "usage_system",
            "usage_steal",
            "usage_guest_nice",
            "usage_irq",
            "load5s",
            "usage_idle",
            "usage_nice",
            "usage_softirq",
            "global_tag1",
            "global_tag2",
            "host",
            "cpu"
          ],
          "values": [
            [
              1709782208662,
              0,
              7.421875,
              3.359375,
              0,
              4.0625,
              0,
              0,
              0,
              1,
              92.578125,
              0,
              0,
              null,
              null,
              "WIN-JCHUL92N9IP",
              "cpu-total"
            ]
          ]
        }
      ],
      "points": null,
      "cost": "24.558375ms",
      "is_running": false,
      "async_id": "",
      "query_parse": {
        "namespace": "metric",
        "sources": {
          "cpu": "exact"
        },
        "fields": {},
        "funcs": {}
      },
      "index_name": "",
      "index_store_type": "",
      "query_type": "guancedb",
      "complete": false,
      "index_names": "",
      "scan_completed": false,
      "scan_index": "",
      "next_cursor_time": -1,
      "sample": 1,
      "interval": 0,
      "window": 0
    }
  ]
}

Response result description:

  • The actual data is located in the inner series field.
  • name indicates the measurement name (here the CPU metric is queried; for log-type data, this field is not present).
  • columns indicates the names of the returned result columns.
  • values contains the corresponding column results for those in columns.

Info
  • The token in the URL request parameters can be different from the token in the JSON body. The former is used to verify the legitimacy of the query request, and the latter is used to determine the target workspace where the data resides.
  • The queries field can contain multiple queries, each of which can carry additional fields. For the specific field list, refer to the corresponding DQL documentation.

POST /v1/workspace

  • API Description: Handle workspace query requests initiated by Datakit

POST /v1/object/labels

  • API Description: Handle requests to modify object Labels

DELETE /v1/object/labels

  • API Description: Handle requests to delete object Labels

GET /v1/check/:token

  • API Description: Check if the token is valid
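A token check might look like this (placeholder token, local address assumed):

```shell
curl -s http://localhost:9528/v1/check/tkn_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```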

Dataway Metrics Collection

HTTP Client Metrics Collection

To collect metrics for Dataway's HTTP requests to Kodo (or the next-hop Dataway), manually enable the http_client_trace configuration, or set the environment variable DW_HTTP_CLIENT_TRACE=true.

Dataway itself exposes Prometheus metrics. These metrics can be collected by Datakit's built-in prom collector. An example collector configuration is as follows:

[[inputs.prom]]
  ## Exporter URLs.
  urls = [ "http://localhost:9090/metrics", ]
  source = "dataway"
  election = true
  measurement_name = "dw" # The dataway measurement set is fixed as dw, do not change.
[inputs.prom.tags]
  service = "dataway"

If Datakit is deployed in the cluster (requires Datakit 1.14.2 or above), then Prometheus metrics exposure can be enabled in Dataway (the default POD yaml for Dataway already includes this):

annotations: # The following annotations are added by default.
   datakit/prom.instances: |
     [[inputs.prom]]
       url = "http://$IP:9090/metrics" # The port here (default 9090) depends on the actual situation.
       source = "dataway"
       measurement_name = "dw" # Fixed as this measurement set.
       interval = "10s"
       disable_instance_tag = true

     [inputs.prom.tags]
       service = "dataway"
       instance = "$PODNAME"

...
env:
- name: DW_PROM_LISTEN
  value: "0.0.0.0:9090" # Keep this port consistent with the port in the url above.

If collection is successful, you can see the corresponding monitoring view by searching for dataway in Guance "Scenarios" / "Built-in Views".

Dataway Metrics List

The following are the metrics exposed by Dataway. These metrics can be obtained by requesting http://localhost:9090/metrics. You can view a specific metric in real-time (3s) with the following command:

If some metrics cannot be found, the related business module may not be running yet. Some new metrics exist only in the latest version; per-metric version information is not marked here, so refer to the metric list returned by the /metrics interface.

watch -n 3 'curl -s http://localhost:9090/metrics | grep -a <METRIC-NAME>'
TYPE NAME LABELS HELP
SUMMARY dataway_http_api_elapsed_seconds api,method,sinked,status API request latency
SUMMARY dataway_http_api_body_buffer_utilization api API body buffer utilization(Len/Cap)
SUMMARY dataway_http_api_body_copy api API body copy
SUMMARY dataway_http_api_body_copy_seconds api API body copy latency
SUMMARY dataway_http_api_body_copy_enlarge api API body copy enlarged pooled buffer
SUMMARY dataway_http_api_resp_size_bytes api,method,status API response size
SUMMARY dataway_http_api_req_size_bytes api,method,status API request size
COUNTER dataway_http_api_body_too_large_dropped_total api,method API request too large dropped
COUNTER dataway_http_api_with_inner_token api,method API request with inner token
COUNTER dataway_http_api_dropped_total api,method API request dropped when sinker rule match failed
COUNTER dataway_ip_blacklist_blocked_total api,method IP blacklist blocked requests total
COUNTER dataway_ip_blacklist_missed_total api,method IP blacklist missed total
COUNTER dataway_ip_blacklist_added_total api,method,reason IP blacklist added total
COUNTER dataway_syncpool_stats name,type sync.Pool usage stats
COUNTER dataway_http_api_copy_body_failed_total api API copy body failed count
COUNTER dataway_http_api_signed_total api,method API signature count
SUMMARY dataway_http_api_cached_bytes api,cache_type,method,reason API cached body bytes
SUMMARY dataway_http_api_reusable_body_read_bytes api,method API re-read body on forking request
SUMMARY dataway_http_api_recv_points api API /v1/write/:category received points
SUMMARY dataway_http_api_send_points api API /v1/write/:category send points
SUMMARY dataway_http_api_cache_points api,cache_type Disk cached /v1/write/:category points
SUMMARY dataway_http_api_cache_cleaned_points api,cache_type,status Disk cache cleaned /v1/write/:category points
COUNTER dataway_http_api_forked_total api,method,token API request forked total
GAUGE dataway_http_cli_info max_conn_per_host,max_idle_conn,max_idle_conn_per_host,timeout Dataway as client settings
GAUGE dataway_http_info cascaded,docker,http_client_trace,listen,max_body,release_date,remote,version Dataway API basic info
GAUGE dataway_last_heartbeat_time N/A Dataway last heartbeat with Kodo timestamp
SUMMARY dataway_http_api_copy_buffer_drop_total max API copy buffer dropped(too large cached buffer) count
GAUGE dataway_cpu_usage N/A Dataway CPU usage(%)
GAUGE dataway_mem_stat type Dataway memory usage stats
GAUGE dataway_open_files N/A Dataway open files
GAUGE dataway_cpu_cores N/A Dataway CPU cores
GAUGE dataway_uptime N/A Dataway uptime
COUNTER dataway_process_ctx_switch_total type Dataway process context switch count(Linux only)
COUNTER dataway_process_io_count_total type Dataway process IO count
COUNTER dataway_process_io_bytes_total type Dataway process IO bytes count
SUMMARY dataway_http_api_dropped_cache api,method,reason Dropped cache data due to various reasons
GAUGE dataway_httpcli_dns_resolved_address api,coalesced,host,server HTTP DNS resolved address
SUMMARY dataway_httpcli_dns_cost_seconds api,coalesced,host,server HTTP DNS cost
SUMMARY dataway_httpcli_tls_handshake_seconds api,server HTTP TLS handshake cost
SUMMARY dataway_httpcli_http_connect_cost_seconds api,server HTTP connect cost
SUMMARY dataway_httpcli_got_first_resp_byte_cost_seconds api,server Got first response byte cost
SUMMARY http_latency api,server HTTP latency
COUNTER dataway_httpcli_tcp_conn_total api,server,remote,type HTTP TCP connection count
| TYPE | NAME | LABELS | DESCRIPTION |
| --- | --- | --- | --- |
| COUNTER | dataway_httpcli_conn_reused_from_idle_total | api,server | HTTP connection reused from idle count |
| SUMMARY | dataway_httpcli_conn_idle_time_seconds | api,server | HTTP connection idle time |
| GAUGE | dataway_sinker_rule_cache_size | name | Sinker rule cache size |
| GAUGE | dataway_sinker_rule_error | error | Rule errors |
| GAUGE | dataway_sinker_default_rule_hit | info | Default sinker rule hit count |
| GAUGE | dataway_sinker_rule_last_applied_time | source,version | Rule last applied time (Unix timestamp) |
| SUMMARY | dataway_sinker_rule_cost_seconds | type | Rule cost time in seconds |
| SUMMARY | dataway_sinker_rule_match_count | type | Sinker rule match count on each request |
| SUMMARY | dataway_sinker_lru_cache_cleaned | name | Entries removed during sinker LRU cache cleanup |
| SUMMARY | dataway_sinker_lru_cache_dropped_ttl_seconds | bucket,name,reason | Sinker LRU cache dropped TTL seconds |
| COUNTER | dataway_sinker_pull_total | event,source | Sinker pulled or pushed total |
| GAUGE | dataway_sinker_rule_count | type,with_default | Sinker rule count |
| GAUGE | dataway_sinker_rule_cache_get_total | name,type | Sinker rule cache get hit/miss count |
| COUNTER | diskcache_rotate_total | path | Cache rotate count; a rotation renames data to data.0000xxx |
| COUNTER | diskcache_remove_total | path | Removed file count; a file read to EOF is removed from the un-read list |
| COUNTER | diskcache_wakeup_total | path | Wakeup count on a sleeping write file |
| COUNTER | diskcache_pos_updated_total | op,path | .pos file updated count |
| COUNTER | diskcache_seek_back_total | path | Seek-back count when Get() hits an error |
| GAUGE | diskcache_capacity | path | Current capacity (in bytes) |
| GAUGE | diskcache_max_data | path | Max data size per Put() (in bytes), default 0 |
| GAUGE | diskcache_batch_size | path | Data file size (in bytes) |
| GAUGE | diskcache_size | path | Current cache size waiting to be consumed by Get() |
| GAUGE | diskcache_open_time | no_fallback_on_error,no_lock,no_pos,no_sync,path | Current cache Open time, Unix timestamp (seconds) |
| GAUGE | diskcache_last_close_time | path | Current cache last Close time, Unix timestamp (seconds) |
| GAUGE | diskcache_datafiles | path | Current count of un-read data files |
| SUMMARY | diskcache_get_latency | path | Get() cost in seconds |
| SUMMARY | diskcache_put_latency | path | Put() cost in seconds |
| SUMMARY | diskcache_put_bytes | path | Cache Put() bytes |
| SUMMARY | diskcache_get_bytes | path | Cache Get() bytes |
| SUMMARY | diskcache_dropped_data | path,reason | Data dropped during Put() when capacity is reached |

Metrics Collection in Docker Mode

Host installation comes in two modes: directly on the bare-metal host, or via Docker. This section explains how metrics collection differs when Dataway is installed via Docker.

When installing via Docker, the HTTP port for metrics exposure is mapped to port 19090 on the host machine (by default). In this case, the metrics collection address is http://localhost:19090/metrics.

If a different metrics port is specified, Docker installation adds 10000 to that port for the host-side mapping. Therefore, the configured port must not exceed 55535 (65535 - 10000).

Additionally, during Docker installation, a profile collection port will also be exposed. By default, it is mapped to port 16060 on the host machine. Its mechanism is also to add 10000 to the specified port.
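The port-mapping rule above can be expressed as a small calculation. A minimal sketch, assuming the default metrics port 9090 and profile port 6060:

```shell
# Docker mode maps each exposed port to <configured port> + 10000 on the host.
metric_port=9090   # default metrics port
profile_port=6060  # default profile port

host_metric_port=$((metric_port + 10000))   # 19090 by default
host_profile_port=$((profile_port + 10000)) # 16060 by default

echo "metrics address: http://localhost:${host_metric_port}/metrics"
echo "profile address: http://localhost:${host_profile_port}/debug/pprof/"
```

If you change either port in the Dataway configuration, apply the same +10000 offset when collecting from the host.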

Dataway Self-Log Collection and Processing

Dataway's own logs fall into two categories: Gin (HTTP access) logs and application logs. They can be separated using the following Pipeline:

# Pipeline for dataway logging

# Test sample logs
'''
2023-12-14T11:27:06.744+0800    DEBUG   apis    apis/api_upload_profile.go:272  save profile file to disk [ok] /v1/upload/profiling?token=****************a4e3db8481c345a94fe5a
[GIN] 2021/10/25 - 06:48:07 | 200 |   30.890624ms |  114.215.200.73 | POST     "/v1/write/logging?token=tkn_5c862a11111111111111111111111111"
'''

add_pattern("TOKEN", "tkn_\\w+")
add_pattern("GINTIME", "%{YEAR}/%{MONTHNUM}/%{MONTHDAY}%{SPACE}-%{SPACE}%{HOUR}:%{MINUTE}:%{SECOND}")
grok(_,"\\[GIN\\]%{SPACE}%{GINTIME:timestamp}%{SPACE}\\|%{SPACE}%{NUMBER:dataway_code}%{SPACE}\\|%{SPACE}%{NOTSPACE:cost_time}%{SPACE}\\|%{SPACE}%{NOTSPACE:client_ip}%{SPACE}\\|%{SPACE}%{NOTSPACE:method}%{SPACE}%{GREEDYDATA:http_url}")

# gin logging
if cost_time != nil:
  if http_url != nil:
    grok(http_url, "%{TOKEN:token}")
    cover(token, [5, 15])
    replace(message, "tkn_\\w{0,5}\\w{6}", "****************")
    replace(http_url, "tkn_\\w{0,5}\\w{6}", "****************")
  endif

  group_between(dataway_code, [200,299], "info", status)
  group_between(dataway_code, [300,399], "notice", status)
  group_between(dataway_code, [400,499], "warning", status)
  group_between(dataway_code, [500,599], "error", status)

  if sample(0.1): # keep only 10% of gin access logs
    drop()
    exit()
  else:
    set_tag(sample_rate, "0.1")
  endif

  parse_duration(cost_time)
  duration_precision(cost_time, "ns", "ms")

  set_measurement('gin', true)
  set_tag(service,"dataway")
  exit()
endif

# app logging
if cost_time == nil:
  grok(_,"%{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{NOTSPACE:status}%{SPACE}%{NOTSPACE:module}%{SPACE}%{NOTSPACE:code}%{SPACE}%{GREEDYDATA:msg}")
  if status == nil:
    grok(message,"Error%{SPACE}%{DATA:errormsg}")
    if errormsg != nil:
      add_key(status,"error")
      drop_key(errormsg)
    endif
  endif
  lowercase(status)

  # if debug level enabled, drop most of them
  if status == 'debug':
    if sample(0.1): # drop 90% debug log
      drop()
      exit()
    else:
      set_tag(sample_rate, "0.1")
    endif
  endif

  group_in(status, ["error", "panic", "dpanic", "fatal","err","fat"], "error", status) # mark them as 'error'

  if msg != nil:
    grok(msg, "%{TOKEN:token}")
    cover(token, [5, 15])
    replace(message, "tkn_\\w{0,5}\\w{6}", "****************")
    replace(msg, "tkn_\\w{0,5}\\w{6}", "****************")
  endif

  set_measurement("dataway-log", true)
  set_tag(service,"dataway")
endif

Dataway Bug Report

Dataway itself exposes metrics and profiling collection endpoints. We can collect this information for troubleshooting.

The information collected below depends on the ports and addresses actually configured; the commands listed use the default values.

dw-bug-report.sh
#!/bin/sh
br_dir="dw-br-$(date +%s)"
mkdir -p "$br_dir"

echo "save bug report to ${br_dir}"

# Modify the configuration here according to the actual situation.
dw_ip="localhost" # IP address where Dataway metrics/profile are exposed
metric_port=9090  # Port for metrics exposure
profile_port=6060 # Port for profile exposure
dw_yaml_conf="/usr/local/cloudcare/dataflux/dataway/dataway.yaml"
dw_dot_yaml_conf="/usr/local/cloudcare/dataflux/dataway/.dataway.yaml" # This file exists for container installation.

# Collect runtime metrics
curl -v "http://${dw_ip}:${metric_port}/metrics" -o "$br_dir/metrics"

# Collect profiling information
curl -v "http://${dw_ip}:${profile_port}/debug/pprof/allocs" -o "$br_dir/allocs"
curl -v "http://${dw_ip}:${profile_port}/debug/pprof/heap" -o "$br_dir/heap"
curl -v "http://${dw_ip}:${profile_port}/debug/pprof/profile" -o "$br_dir/profile" # This request runs for about 30s.

cp "$dw_yaml_conf" "$br_dir/dataway.yaml.copy"
cp "$dw_dot_yaml_conf" "$br_dir/.dataway.yaml.copy"

tar czvf "${br_dir}.tar.gz" "${br_dir}"
rm -rf "${br_dir}"

Run the script:

$ sh dw-bug-report.sh
...

After execution, a file similar to dw-br-1721188604.tar.gz will be generated. Extract this file.

FAQ

Request Body Too Large Issue

Version-1.3.7

Dataway enforces a limit on the HTTP request body size (64MB by default). When a request body exceeds the limit, the client receives an HTTP 413 error (Request Entity Too Large). If larger request bodies are legitimate in your setup, you can increase this value (in bytes) in either of two ways:

  • Set the environment variable DW_MAX_HTTP_BODY_BYTES
  • Set max_http_body_bytes in dataway.yaml

If oversized request bodies occur at runtime, they are visible in both metrics and logs:

  • The metric dataway_http_too_large_dropped_total counts dropped oversized requests.
  • Search the Dataway logs with grep 'drop too large request' log. The log entries include the HTTP request headers, which helps identify the offending client.
Warning

In the disk cache module, there is also a maximum data block write limit (default 64MB). If you increase the maximum request body configuration, you must also adjust this configuration (ENV_DISKCACHE_MAX_DATA_SIZE) to ensure that large requests can be correctly written to the disk cache.
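When raising the limit, the two settings should be increased together so that requests accepted over HTTP can still be written to the disk cache. A minimal sketch, using an illustrative target of 128 MB (the value itself is an assumption, not a recommendation):

```shell
# Illustrative values: raise both the HTTP body limit and the diskcache
# block limit to the same size (in bytes).
limit_bytes=$((128 * 1024 * 1024)) # 128 MB

export DW_MAX_HTTP_BODY_BYTES="$limit_bytes"       # HTTP request body limit
export ENV_DISKCACHE_MAX_DATA_SIZE="$limit_bytes"  # diskcache max data block size

echo "DW_MAX_HTTP_BODY_BYTES=$DW_MAX_HTTP_BODY_BYTES"
```

Restart the Dataway service after changing either setting for it to take effect.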


  1. This limit is used to prevent the Dataway container/Pod from being restricted by the system to use only about 20,000 connections at runtime. Increasing this limit will affect Dataway data upload efficiency. When Dataway traffic is high, consider increasing the CPU count for a single Dataway or horizontally scaling Dataway instances. 
