
User Access Metrics Monitoring


User access metrics monitoring tracks user access (RUM) metric data within the workspace. You can set threshold ranges for the monitored metrics, and the system automatically triggers alerts when a metric exceeds its thresholds.

Use Cases

Supports monitoring of metric data for multiple application types, such as Web, Android, iOS, and Miniapp. For example, it can monitor the JS error rate of a Web application by city dimension.
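To make that example concrete, here is a minimal sketch (illustrative only, not the platform's implementation) of how a per-city JS error rate could be derived from view records, assuming each record carries a city tag and a view_error_count field like the queries shown later on this page:

    from collections import defaultdict

    def js_error_rate_by_city(views):
        """views: dicts such as {"city": "...", "view_error_count": 0}.
        Error rate = views with at least one JS error / total views, per city."""
        totals, errored = defaultdict(int), defaultdict(int)
        for v in views:
            city = v.get("city", "unknown")
            totals[city] += 1
            if v.get("view_error_count", 0) > 0:
                errored[city] += 1
        return {city: errored[city] / totals[city] for city in totals}

    sample = [
        {"city": "Shanghai", "view_error_count": 0},
        {"city": "Shanghai", "view_error_count": 2},
        {"city": "Beijing", "view_error_count": 0},
    ]
    print(js_error_rate_by_city(sample))   # {'Shanghai': 0.5, 'Beijing': 0.0}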

Monitoring Configuration

Monitoring Frequency

The execution frequency of the monitoring rule.

Monitoring Interval

The time range for querying the monitored metrics. The available monitoring intervals vary depending on the monitoring frequency.

Monitoring Frequency    Monitoring Interval (Dropdown Options)
30s                     1m / 5m / 15m / 30m / 1h / 3h
1m                      1m / 5m / 15m / 30m / 1h / 3h
5m                      5m / 15m / 30m / 1h / 3h
15m                     15m / 30m / 1h / 3h / 6h
30m                     30m / 1h / 3h / 6h
1h                      1h / 3h / 6h / 12h / 24h
6h                      6h / 12h / 24h
12h                     12h / 24h
24h                     24h
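As a rough illustration of how the two settings combine (this reading of the semantics is an assumption based on the descriptions above, not a product guarantee): the monitoring frequency is how often the rule runs, and the monitoring interval is the lookback window each run queries.

    from datetime import datetime, timedelta

    FREQUENCY = timedelta(minutes=5)    # monitoring frequency: the rule executes every 5 minutes
    INTERVAL = timedelta(minutes=15)    # monitoring interval: each execution queries the last 15 minutes

    def query_window(run_time: datetime):
        """Time range of metric data queried by the run that starts at run_time."""
        return run_time - INTERVAL, run_time

    start, end = query_window(datetime.now())
    # Every 5 minutes the rule evaluates the monitored metrics between start and end;
    # consecutive runs therefore overlap by 10 minutes with these settings.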

Monitored Metrics

Set the metrics to monitor. You can select metric data for a single application type in the current workspace and define a time range, for example, all application metric data under the Web type in the current workspace.

Field descriptions:

• Application Type: application types supported by RUM, including Web, Android, iOS, and Miniapp.
• Application Name: retrieves the corresponding application list based on the selected application type.
• Metric: the list of metrics, divided by application type:
    Web / Miniapp: JS error count, JS error rate, resource error count, resource error rate, average first render time, average page load time, LCP (largest_contentful_paint), FID (first_input_delay), CLS (cumulative_layout_shift), FCP (first_contentful_paint), etc.
    Android / iOS: launch duration, total crash count, total crash rate, resource error count, resource error rate, FPS, average page load time, etc.
• Filtering Conditions: filter the detection data range based on metric tags. Multiple tags can be added, and both fuzzy and exact matching are supported.
• Monitoring Dimensions: up to three string (keyword) fields from the configured data can be selected as monitoring dimensions. The combination of these fields uniquely identifies a detection object, and the system determines per object whether the metric has reached the threshold that triggers an event. For example, with monitoring dimensions host and host_ip, a detection object could be {host: host1, host_ip: 127.0.0.1} (see the sketch after this list).
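To make the notion of a detection object concrete, the following sketch is purely illustrative (the host and host_ip fields follow the example above; the threshold is a placeholder): it groups query results by the selected monitoring dimensions and checks each combination against a threshold.

    DIMENSIONS = ("host", "host_ip")     # selected monitoring dimensions (up to three)
    THRESHOLD = 0.05                     # placeholder threshold for the monitored metric

    def detect(rows):
        """rows: dicts carrying the dimension tags plus a numeric 'value'.
        Each unique combination of dimension values is one detection object."""
        latest = {}
        for row in rows:
            detection_object = tuple((d, row[d]) for d in DIMENSIONS)
            latest[detection_object] = row["value"]          # keep the latest value per object
        # The threshold is judged per detection object, so each combination can trigger its own event.
        return [dict(obj) for obj, value in latest.items() if value >= THRESHOLD]

    print(detect([
        {"host": "host1", "host_ip": "127.0.0.1", "value": 0.08},
        {"host": "host2", "host_ip": "10.0.0.2", "value": 0.01},
    ]))   # [{'host': 'host1', 'host_ip': '127.0.0.1'}]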

Web / Miniapp Metrics Description

Each metric is listed with a query example:

• JS Error Count
  R::error:(count(`__docid`) as `JS Error Count`) { `app_id` = '' }

• JS Error Rate
  Web: eval(A/B, alias='Page JS Error Rate', A="R::view:(count(`view_url`)) {`view_error_count` > 0, `app_id` = ''}", B="R::view:(count(`view_url`)) { `app_id` = ''} ")
  Miniapp: eval(A/B, alias='JS Error Rate', A="R::view:(count(`view_name`)) {`view_error_count` > 0, `app_id` = '' }", B="R::view:(count(`view_name`)) { `app_id` = '' }")

• Resource Error Count
  R::resource:(count(`resource_url`) as `Resource Error Count`) {`resource_status` >=400, `app_id` = ''}

• Resource Error Rate
  eval(A/B, alias='Resource Error Rate', A="R::`resource`:(count(`resource_url`)) { `resource_status` >= '400',`app_id` = '' }", B="R::`resource`:(count(`resource_url`)) { `app_id` = '' }")

• Average First Render Time
  R::page:(avg(page_fpt)){`app_id` = '#{appid}'}

• Average Page Load Time
  R::view:(avg(loading_time)){`app_id` = '#{appid}'}

• Slow Page Load Count
  R::resource:(count(resource_load)){`app_id` = '#{appid}',`resource_load`>8000000000,resource_type='document'}

• Average Resource Load Time
  R::resource:(avg(`resource_load`) as `Load Time` ) {`app_id` = '#{appid}',resource_type!='document'}

• LCP (largest_contentful_paint): includes aggregation functions avg, P75, P90, P99
  R::view:(avg(largest_contentful_paint)){`app_id` = '#{appid}'}
  R::view:(percentile(`largest_contentful_paint`,75)){`app_id` = '#{appid}'}
  R::view:(percentile(`largest_contentful_paint`,90)){`app_id` = '#{appid}'}
  R::view:(percentile(`largest_contentful_paint`,99)){`app_id` = '#{appid}'}

• FID (first_input_delay): includes aggregation functions avg, P75, P90, P99
  R::view:(avg(first_input_delay)){`app_id` = '#{appid}'}
  R::view:(percentile(`first_input_delay`,75)){`app_id` = '#{appid}'}
  R::view:(percentile(`first_input_delay`,90)){`app_id` = '#{appid}'}
  R::view:(percentile(`first_input_delay`,99)){`app_id` = '#{appid}'}

• CLS (cumulative_layout_shift): includes aggregation functions avg, P75, P90, P99
  R::view:(avg(cumulative_layout_shift)){`app_id` = '#{appid}'}
  R::view:(percentile(`cumulative_layout_shift`,75)){`app_id` = '#{appid}'}
  R::view:(percentile(`cumulative_layout_shift`,90)){`app_id` = '#{appid}'}
  R::view:(percentile(`cumulative_layout_shift`,99)){`app_id` = '#{appid}'}

• FCP (first_contentful_paint): includes aggregation functions avg, P75, P90, P99
  R::view:(avg(first_contentful_paint)){`app_id` = '#{appid}'}
  R::view:(percentile(`first_contentful_paint`,75)){`app_id` = '#{appid}'}
  R::view:(percentile(`first_contentful_paint`,90)){`app_id` = '#{appid}'}
  R::view:(percentile(`first_contentful_paint`,99)){`app_id` = '#{appid}'}
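For reference, a small self-contained sketch (the sample values are made up) of what the avg and P75/P90/P99 aggregations used above report for a set of LCP samples:

    # Hypothetical LCP samples in milliseconds (made-up values, for illustration only).
    lcp_samples = [900, 1200, 1400, 1600, 2100, 2500, 3200, 4100, 5200, 8000]

    def percentile(values, p):
        """Simple nearest-rank percentile: the value that p% of the samples stay at or below."""
        ordered = sorted(values)
        rank = max(1, round(p / 100 * len(ordered)))
        return ordered[rank - 1]

    avg = sum(lcp_samples) / len(lcp_samples)
    p75, p90, p99 = (percentile(lcp_samples, p) for p in (75, 90, 99))
    print(avg, p75, p90, p99)   # 3020.0 4100 5200 8000: the mean hides the slow tail that P90/P99 expose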

Android / iOS Metrics Description

Each metric is listed with a query example:

• Launch Duration
  R::action:(avg(duration)) { `app_id` = '' ,action_type='app_cold_launch'}

• Total Crash Count
  R::error:(count(error_type)) {app_id='',`error_source` = 'logger' and is_web_view !='true'}

• Total Crash Rate
  eval(A.a1/B.b1, alias='Total Crash Rate',A="R::error:(count(error_type) as a1) {app_id='',`error_source` = 'logger',is_web_view !='true'} ",B="R::action:(count(action_name) as b1) { `app_id` = '',`action_type` in [`launch_cold`,`launch_hot`,`launch_warm`]} ")

• Resource Error Count
  R::resource:(count(`resource_url`) as `Resource Error Count`) {`resource_status` >=400, `app_id` = ''}

• Resource Error Rate
  eval(A/B, alias='Resource Error Rate', A="R::`resource`:(count(`resource_url`)) { `resource_status` >= '400',`app_id` = '' }", B="R::`resource`:(count(`resource_url`)) { `app_id` = '' }")

• Average FPS
  R::view:(avg(`fps_avg`)) { `app_id` = '' }

• Average Page Load Time
  R::view:(avg(`loading_time`)) { `app_id` = '' }

• Average Resource Load Time
  R::resource:(avg(`duration`)) { `app_id` = '' }

• Stutter Count
  R::long_task:(count(`view_id`)) { `app_id` = '' }

• Page Error Rate
  eval(A/B, alias='Page Error Rate',A="R::view:(count(`view_name`)) {`view_error_count` > 0, `app_id` = '' }",B="R::view:(count(`view_name`)) { `app_id` = '' }")

Trigger Conditions

Set alert levels for the trigger conditions. You can configure any one of the following severity levels: Critical, Major, Minor, Normal.

Configure the trigger conditions and severity levels. When the query result contains multiple values, an event is generated if any value meets the trigger condition.

For more details, refer to Event Level Description.

If Continuous Trigger Judgment is enabled, the trigger condition only takes effect after it has been met in multiple consecutive judgments; up to 10 consecutive judgments can be configured before an event is generated.
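A minimal sketch of this logic (the thresholds, the >= operator, and the consecutive count are placeholders, not product defaults): each severity has its own condition, an event fires if any value in the query result matches, and with Continuous Trigger Judgment the condition must hold for several consecutive detections first.

    # Placeholder severity conditions, ordered from most to least severe.
    SEVERITY_THRESHOLDS = [("critical", 0.10), ("major", 0.05), ("minor", 0.02)]
    CONSECUTIVE_REQUIRED = 3              # Continuous Trigger Judgment: 1 to 10 consecutive judgments
    breach_streak = 0

    def match_severity(values):
        """Return the highest severity whose condition is met by any value, or None."""
        for level, threshold in SEVERITY_THRESHOLDS:
            if any(v >= threshold for v in values):
                return level
        return None

    def on_detection(values):
        """Call once per detection run with all values returned by the query."""
        global breach_streak
        level = match_severity(values)
        breach_streak = breach_streak + 1 if level else 0
        if level and breach_streak >= CONSECUTIVE_REQUIRED:
            print(f"generate {level} event")   # only after enough consecutive breaches

    for values in ([0.01], [0.06, 0.01], [0.07], [0.12]):
        on_detection(values)                   # a critical event is generated on the fourth run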

Alert Levels

1. Critical (Red), Major (Orange), Minor (Yellow): determined by the configured condition judgment operators.

   For more details, refer to Operator Description.

2. Normal (Green): determined by the configured number of detections, as follows:

   • Each execution of a detection task counts as 1 detection; for example, with a detection frequency of 5 minutes, 1 detection = 5 minutes.
   • You can customize the number of detections; for example, with a detection frequency of 5 minutes, 3 detections = 15 minutes.

   Normal: after the detection rule takes effect, if a critical, major, or minor abnormal event has been generated and the detection results return to normal within the configured number of detections, a recovery alert event is generated.

   ⚠ Recovery alert events are not subject to Alert Mute restrictions. If the detection count for recovery alert events is not set, the alert event will not recover and will remain in the Events > Unrecovered Events List.
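The recovery behaviour can be sketched as follows (illustrative only; the detection count of 3 mirrors the example numbers above): after an abnormal event, the rule counts consecutive normal detections and emits a recovery alert once the configured count is reached.

    RECOVERY_DETECTIONS = 3     # e.g. 3 detections at a 5-minute frequency = 15 minutes

    def track_recovery(results):
        """results: chronological detection outcomes, True = abnormal, False = normal."""
        abnormal_open, normal_streak = False, 0
        for abnormal in results:
            if abnormal:
                abnormal_open, normal_streak = True, 0      # an unrecovered abnormal event is open
            elif abnormal_open:
                normal_streak += 1
                if normal_streak >= RECOVERY_DETECTIONS:
                    print("generate recovery alert event")  # Normal (Green), not subject to Alert Mute
                    abnormal_open, normal_streak = False, 0

    track_recovery([True, True, False, False, False])        # recovers on the third consecutive normal result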

Data Gap

For the data gap state, seven strategies can be configured:

1. Linked to the detection interval time range: judge the query result of the monitored metrics over the most recent detection interval, and trigger no event.

2. Linked to the detection interval time range: judge the query result of the monitored metrics over the most recent detection interval, and treat the query result as 0. The query result is then compared again with the thresholds configured in the trigger conditions above to determine whether an anomaly event should be triggered.

3. Custom-fill the detection interval value and trigger one of the following: a data gap event, a critical event, a major event, a minor event, or a recovery event (five strategies). When choosing this strategy, the configured custom data gap time should be >= the detection interval; if it is <= the detection interval, a data gap and an anomaly may be detected at the same time, in which case only the data gap handling result is applied.

Information Generation

Enabling this option generates "Information" events for detection results that do not match any of the trigger conditions above.

Note

If trigger conditions, data gap, and information generation are all configured, the detection results are handled with the following priority: data gap > trigger conditions > information event generation.
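Putting the data gap strategies and this priority together, a simplified decision flow could look like the sketch below (the strategy names, threshold, and helper function are assumptions for illustration, not the product's API):

    # Assumed names, for illustration only.
    GAP_STRATEGY = "treat_as_zero"    # e.g. "no_event", "treat_as_zero", or "trigger_gap_event"
    TRIGGER_THRESHOLD = 0.05          # placeholder trigger condition: value >= threshold
    INFORMATION_ENABLED = True

    def check_trigger(values):
        """Return 'critical' if any value breaches the placeholder condition, else None."""
        return "critical" if any(v >= TRIGGER_THRESHOLD for v in values) else None

    def handle_detection(values):
        """values: query result for one monitoring interval; an empty list means a data gap."""
        if not values:                              # 1. data gap has the highest priority
            if GAP_STRATEGY == "no_event":
                return None
            if GAP_STRATEGY == "treat_as_zero":
                values = [0]                        # re-compare 0 against the trigger conditions
            else:
                return "data_gap_event"             # or the configured critical/major/minor/recovery event
        level = check_trigger(values)               # 2. trigger conditions come next
        if level:
            return f"{level}_event"
        return "information_event" if INFORMATION_ENABLED else None   # 3. information generation last

    print(handle_detection([0.08]))   # critical_event
    print(handle_detection([]))       # information_event: the gap is treated as 0, which stays below the threshold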

Other Configurations

For more details, refer to Rule Configuration.
