Interval Detection V2¶
Interval Detection V2 uses historical data to construct confidence intervals that predict the normal fluctuation range of a metric. The system compares the characteristics of current data against historical data to determine whether they fall outside the confidence interval, thereby identifying anomalies and triggering alerts to help ensure data stability and security.
Main features:
- In-depth analysis: Predicts normal fluctuations based on historical data to build confidence intervals.
- Continuous updates: Continuously updated by the Guance algorithm team to enhance data processing capabilities.
Concepts¶
Confidence interval range (confidence_interval): a parameter that measures the tolerance for fluctuation of time-series data within a specific detection range, with values between 1% and 100%. When the data is highly volatile and random, this value can be increased; when the data fluctuates regularly, it can be decreased. An overly large confidence interval widens the upper and lower bounds, reducing the number of detected anomalies and possibly failing to detect any; an overly small confidence interval may report too many anomalies.
Therefore, adjusting this parameter reasonably based on the fluctuation characteristics of the data is crucial for balancing the sensitivity and accuracy of anomaly detection, effectively avoiding excessive false positives or missed anomalies.
Illustration:
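The exact detection algorithm is maintained by the Guance algorithm team, but the effect of the confidence_interval parameter can be pictured with a simplified sketch: a larger tolerance produces a wider band around the historical baseline, so fewer points fall outside it. The band construction below (mean ± a width scaled by confidence_interval) is a hypothetical simplification for illustration only, not the actual algorithm.

```python
import statistics

def confidence_band(history, confidence_interval):
    """Hypothetical illustration: derive an upper/lower band from historical
    values; a larger confidence_interval (1-100) tolerates wider fluctuation."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    # Scale the band width by the configured tolerance (simplified assumption,
    # not the actual Guance algorithm).
    width = stdev * (confidence_interval / 100) * 3
    return mean - width, mean + width

history = [98, 102, 97, 103, 100, 99, 101]
current = 103.5
for ci in (20, 80):
    low, high = confidence_band(history, ci)
    print(f"confidence_interval={ci}%: band=({low:.1f}, {high:.1f}), "
          f"current={current}, anomaly={not (low <= current <= high)}")
```

With these made-up numbers, the same current value is flagged as anomalous at 20% but falls inside the band at 80%, which is the trade-off the paragraph above describes.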
Detection Configuration¶
Detection Frequency¶
Refers to the execution frequency of the detection rule. It defaults to 10 minutes and cannot be changed.
Detection Metrics¶
Refers to the monitored metric data.
| Field | Description |
|---|---|
| Data Type | The data type currently being detected, including Metrics, APM, and RUM data. |
| Measurement | The measurement where the current detection metric is located. |
| Metric | The metric targeted by the current detection. |
| Aggregation Algorithm | Includes Avg by (average value), Min by (minimum value), Max by (maximum value), Sum by (sum), Last (last value), First by (first value), Count by (number of data points), Count_distinct by (number of distinct data points), p50 (median value), p75 (value at the 75th percentile), p90 (value at the 90th percentile), p99 (value at the 99th percentile). |
| Detection Dimension | Any string-type (keyword) field in the configured data can be selected as a detection dimension. Currently, up to three fields can be selected as detection dimensions. Combining multiple detection dimension fields identifies a specific detection object; Guance determines whether the statistical metric of that detection object meets the threshold of the trigger condition and generates an event if it does. For example, selecting the detection dimensions host and host_ip yields a detection object such as {host: host1, host_ip: 127.0.0.1} (see the sketch after this table). |
| Filter Conditions | Filters the data of the detection metrics based on the metric's tags to limit the data scope of the detection. One or more tag filters can be added. Supports fuzzy matching and fuzzy exclusion filtering conditions. |
| Alias | Custom name for the detection metric. |
| Query Method | Supports simple queries and expression queries. |
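As a rough illustration of how detection dimensions and an aggregation algorithm combine, each unique dimension combination becomes one detection object whose aggregated value is then checked against the trigger conditions. The field names host and host_ip come from the example above; the data points and the use of Avg by are made up.

```python
from collections import defaultdict

# Hypothetical data points: (tags, metric value)
points = [
    ({"host": "host1", "host_ip": "127.0.0.1"}, 0.72),
    ({"host": "host1", "host_ip": "127.0.0.1"}, 0.78),
    ({"host": "host2", "host_ip": "10.0.0.2"}, 0.41),
]

# Group by the selected detection dimensions (up to three fields).
dimensions = ("host", "host_ip")
groups = defaultdict(list)
for tags, value in points:
    key = tuple(tags[d] for d in dimensions)
    groups[key].append(value)

# Apply the aggregation algorithm (here: Avg by) per detection object.
for key, values in groups.items():
    detection_object = dict(zip(dimensions, key))
    print(detection_object, "avg =", sum(values) / len(values))
```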
Cross-Workspace Query Metrics¶
After authorization, detection metrics from other workspaces under the current account can be selected. Once the monitor rule is successfully created, alerts can be configured across workspaces.
Note
After selecting another workspace, the detection metric dropdown options only display data types that have been authorized for the current workspace.
Trigger Conditions¶
Configure trigger conditions and alert levels: you can configure trigger conditions for any of Emergency, Important, Warning, or Normal. Three forms of data comparison are supported: upward (data increase), downward (data decrease), and both upward and downward.
When the query result contains multiple values, an event is generated if any value meets the trigger condition.
For more details, refer to Event Level Description.
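A minimal sketch of how the three comparison forms might be evaluated against the confidence band described above; the band bounds and query values here are made up, and the actual evaluation is performed by the monitor itself.

```python
def breaches(value, lower, upper, direction):
    """Check one query result against the band for a given comparison form."""
    if direction == "upward":            # data increase beyond the upper bound
        return value > upper
    if direction == "downward":          # data decrease below the lower bound
        return value < lower
    return value > upper or value < lower  # both upward and downward

# An event is generated if ANY value in the query result meets the condition.
query_result = [101.0, 99.5, 107.3]
lower, upper = 95.0, 105.0
if any(breaches(v, lower, upper, "upward") for v in query_result):
    print("trigger event: at least one value exceeded the upper bound")
```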
Bulk Alert Protection¶
Enabled by default.
When the number of alerts generated in a single detection exceeds a preset threshold, the system automatically switches to a status summary strategy: Instead of processing each alert object individually, it generates a small number of summary alerts based on event status and pushes them.
This ensures timely notifications while significantly reducing alert noise and avoiding timeout risks caused by processing too many alerts.
Note
When this switch is enabled, the Event Details page of events generated by subsequent monitor detections will not display historical records or associated events.
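Conceptually, the summary strategy can be sketched as follows; the threshold value and event statuses are illustrative assumptions, not the product's internal values.

```python
from collections import Counter

SUMMARY_THRESHOLD = 50  # hypothetical preset threshold

def dispatch(alerts):
    """alerts: list of (detection_object, status) produced by one detection."""
    if len(alerts) <= SUMMARY_THRESHOLD:
        # Normal path: push one alert per detection object.
        return [f"{obj}: {status}" for obj, status in alerts]
    # Bulk protection: collapse into a few status-level summary alerts.
    by_status = Counter(status for _, status in alerts)
    return [f"{count} objects entered status '{status}'"
            for status, count in by_status.items()]

alerts = [(f"host{i}", "critical" if i % 3 else "warning") for i in range(120)]
for message in dispatch(alerts):
    print(message)
```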
Alert Level¶
- Alert levels Emergency (red), Important (orange), Warning (yellow);
- Alert level Normal (green): based on the configured number of detections, explained as follows:
    - Each execution of a detection task counts as 1 detection. For example, if Detection Frequency = 5 minutes, then 1 detection = 5 minutes.
    - The number of detections can be customized. For example, if Detection Frequency = 5 minutes, then 3 detections = 15 minutes.

| Level | Description |
|---|---|
| Normal | After the detection rule takes effect, if an Emergency, Important, or Warning abnormal event occurs and the data detection result returns to normal within the configured number of detections, a recovery alert event is generated.<br>❗️ Recovery alert events are not subject to Alert Silence restrictions. If the number of detections for recovery alert events is not set, the alert event will not recover and will remain in the Events > Unrecovered Events list. |
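The relationship between detection frequency and the recovery window can be sketched like this; the counting logic is a simplified assumption of the behavior described above.

```python
DETECTION_FREQUENCY_MIN = 5   # one detection every 5 minutes (example)
RECOVERY_DETECTIONS = 3       # custom number of detections configured for "Normal"

# The recovery window covered by the configured number of detections.
print("recovery window:", RECOVERY_DETECTIONS * DETECTION_FREQUENCY_MIN, "minutes")

def recovered(recent_results, n=RECOVERY_DETECTIONS):
    """True when the last n detection results were all normal again."""
    return len(recent_results) >= n and all(r == "normal" for r in recent_results[-n:])

print(recovered(["critical", "normal", "normal", "normal"]))    # True  -> recovery alert event
print(recovered(["critical", "critical", "normal", "normal"]))  # False -> still unrecovered
```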
Data Gap¶
Seven strategies can be configured for data gap status.
- Link to the detection interval time range and judge the query result of the detection metric over the most recent minutes; do not trigger an event.
- Link to the detection interval time range and judge the query result of the detection metric over the most recent minutes, treating the query result as 0; the query result is then re-compared with the thresholds configured in the Trigger Conditions above to determine whether to trigger an abnormal event.
- Custom-fill the data gap interval value and trigger one of: a data gap event, an emergency event, an important event, a warning event, or a recovery event. When selecting this strategy, the recommended custom data gap time is >= the detection interval. If the configured time is <= the detection interval, the data gap and abnormal conditions may both be met at the same time; in that case, only the data gap processing result is applied.
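A sketch of the "treat the query result as 0" strategy, where the filled value is re-checked against the configured trigger condition; the threshold, strategy names, and downward comparison here are illustrative assumptions.

```python
def handle_detection(query_result, lower_bound, strategy):
    """query_result is None when no data was returned in the recent window."""
    if query_result is None:
        if strategy == "no_event":
            return None                      # strategy 1: do not trigger an event
        if strategy == "treat_as_zero":
            query_result = 0                 # strategy 2: fill with 0, then re-compare
        elif strategy == "data_gap_event":
            return "data gap event"          # strategy 3: trigger the configured event
    # Re-compare against the trigger condition (e.g. a downward breach).
    return "abnormal event" if query_result < lower_bound else None

print(handle_detection(None, lower_bound=10, strategy="treat_as_zero"))   # abnormal event
print(handle_detection(None, lower_bound=10, strategy="no_event"))        # None
print(handle_detection(None, lower_bound=10, strategy="data_gap_event"))  # data gap event
```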
Information Generation¶
Enable this option to generate and record "Information" events for detection results that do not match any of the trigger conditions above.
Note
When trigger conditions, data gap, and information generation are configured simultaneously, triggering is judged according to the following priority: data gap > trigger conditions > information event generation.
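The stated priority can be expressed as a simple dispatch order; this is a sketch of the documented precedence, not the product's implementation.

```python
def classify(detection):
    """detection: dict with keys 'has_data' and 'breaches_trigger_condition'."""
    if not detection["has_data"]:
        return "data gap"            # highest priority
    if detection["breaches_trigger_condition"]:
        return "trigger condition"   # second priority
    return "information"             # lowest priority: everything else

print(classify({"has_data": False, "breaches_trigger_condition": False}))  # data gap
print(classify({"has_data": True,  "breaches_trigger_condition": True}))   # trigger condition
print(classify({"has_data": True,  "breaches_trigger_condition": False}))  # information
```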
Other Configurations¶
For more details, refer to Rule Configuration.

