Interval Detection V2¶
The V2 version of interval detection uses historical data to build confidence intervals and predict normal fluctuation ranges. The system compares current data characteristics with historical data to determine if they exceed the confidence interval, thereby identifying anomalies and triggering alerts to ensure data stability and security.
Key features:
- In-depth analysis: Predicts normal fluctuations based on historical data to build confidence intervals;
- Continuous updates: Continuously updated by the Guance algorithm team to enhance data processing capabilities.
Concepts¶
Confidence interval range (confidence_interval): A metric that measures the tolerance for fluctuation of time series data within a specific detection range, with values between 1% and 100%. When the data is highly volatile and random, this value can be increased appropriately; when the data fluctuates regularly, it can be decreased. If the confidence interval is too large, the upper and lower boundaries widen and fewer anomalies are detected, possibly none at all; if it is too small, too many anomalies may be detected.
Therefore, adjusting this parameter reasonably based on the fluctuation characteristics of the data is crucial for balancing the sensitivity and accuracy of anomaly detection, effectively avoiding excessive false positives or missed anomalies.
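The exact model is maintained by the Guance algorithm team and is not published here; the sketch below only illustrates the general idea, assuming a simple mean and standard-deviation band whose width is scaled by the `confidence_interval` setting. All names and the width formula are illustrative, not the actual algorithm.

```python
import statistics

def confidence_band(history, confidence_interval=0.95):
    """Illustrative only: derive an upper/lower band from historical points.

    A larger confidence_interval widens the band (fewer anomalies flagged);
    a smaller one tightens it (more anomalies flagged).
    """
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    # Map the 1%-100% setting onto a band width; the real algorithm differs.
    width = stdev * (1 + 2 * confidence_interval)
    return mean - width, mean + width

def is_anomalous(current_value, history, confidence_interval=0.95):
    lower, upper = confidence_band(history, confidence_interval)
    return current_value < lower or current_value > upper

# Example: regular data with a sudden spike
history = [50, 52, 49, 51, 50, 53, 48, 50]
print(is_anomalous(75, history))  # True  -> outside the band, would alert
print(is_anomalous(51, history))  # False -> inside the band, no alert
```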
Detection Configuration¶
Detection Frequency¶
The execution frequency of the detection rule; it is fixed at 10 minutes and cannot be changed.
Detection Metrics¶
The metrics data being monitored.
| Field | Description |
|---|---|
| Data Type | The type of data currently being detected, including Metrics, APM, and RUM data. |
| Measurement | The measurement to which the current detection metric belongs. |
| Metric | The specific metric targeted by the current detection. |
| Aggregation Algorithm | Includes Avg by (average), Min by (minimum), Max by (maximum), Sum by (sum), Last (last value), First by (first value), Count by (data point count), Count_distinct by (unique data point count), p50 (median), p75 (75th percentile), p90 (90th percentile), p99 (99th percentile). |
| Detection Dimension | Any string-type (keyword) field in the configured data can be selected as a detection dimension; currently, up to three fields can be selected. Combining multiple detection dimension fields determines a specific detection object, and Guance checks whether the statistical metric of each detection object meets the threshold in the trigger conditions; if it does, an event is generated. For example, with the detection dimensions host and host_ip, a detection object could be {host: host1, host_ip: 127.0.0.1}. |
| Filter Conditions | Filters the data of the detection metrics based on the labels of the metrics, limiting the data range of the detection; one or more label filters can be added; supports fuzzy matching and fuzzy non-matching filter conditions. |
| Alias | Custom name for the detection metric. |
| Query Method | Supports simple queries and expression queries. |
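To show how the fields in the table fit together, the hypothetical structure below mirrors them in one place. The keys and values are made up for illustration and are not an actual Guance API payload.

```python
# Hypothetical, for illustration only -- not a real Guance API payload.
detection_metric = {
    "data_type": "Metrics",
    "measurement": "cpu",                    # measurement the metric belongs to
    "metric": "usage_idle",                  # the specific metric being detected
    "aggregation": "Avg by",                 # e.g. Avg by / Max by / p99 ...
    "detection_dimensions": ["host", "host_ip"],                # up to three keyword fields
    "filters": [{"label": "env", "op": "=", "value": "prod"}],  # narrows the data range
    "alias": "idle_cpu",
    "query_method": "simple",                # "simple" or "expression"
}

# One detection object produced by the dimensions above might be:
detection_object = {"host": "host1", "host_ip": "127.0.0.1"}
```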
Cross-Workspace Query Metrics¶
After authorization, detection metrics from other workspaces under the current account can be selected. Once the monitor rule is successfully created, cross-workspace alert configurations can be implemented.
Note
After selecting another workspace, the detection metric dropdown will only display data types that have been authorized for the current workspace.
Trigger Conditions¶
Set the trigger conditions for alert levels: any one of the emergency, important, warning, or normal trigger conditions can be configured. Three directions of data comparison are supported: upward (data increase), downward (data decrease), and upward or downward.
Configure the trigger conditions and severity. When the query returns multiple values, an event is generated if any value meets the trigger conditions.
For more details, refer to Event Level Description.
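The following sketch restates the "any value matches" rule: each value returned by the query is checked against the configured direction, and one match is enough to generate an event. Function names, band values, and directions are illustrative, not the product's internal logic.

```python
def breaches(value, lower, upper, direction):
    """Check a single query result against the confidence band.

    direction: "up" (data increase), "down" (data decrease), or "both".
    """
    if direction == "up":
        return value > upper
    if direction == "down":
        return value < lower
    return value > upper or value < lower  # "both"

def should_trigger(query_results, lower, upper, direction):
    # An event is generated if ANY returned value meets the trigger condition.
    return any(breaches(v, lower, upper, direction) for v in query_results)

print(should_trigger([48, 51, 80], lower=45, upper=55, direction="up"))    # True
print(should_trigger([48, 51, 54], lower=45, upper=55, direction="down"))  # False
```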
Bulk Alert Protection¶
Enabled by default.
When the number of alerts generated in a single detection exceeds the preset threshold, the system automatically switches to a status summary strategy: instead of processing each alert object individually, it generates a small number of summary alerts based on the event status and pushes them.
This ensures timely notifications while significantly reducing alert noise and avoiding the risk of timeout due to processing too many alerts.
Note
When this switch is enabled, the Event Details subsequently generated by the monitor after detecting anomalies will not display historical records or related events.
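A hedged sketch of the summarization strategy described above: when one detection run produces more alerts than a threshold, the alerts are collapsed into one summary per event status instead of being pushed individually. The threshold value and field names are illustrative assumptions.

```python
from collections import defaultdict

def protect(alerts, threshold=50):
    """Illustrative only: collapse a large batch of alerts into per-status summaries."""
    if len(alerts) <= threshold:
        return alerts  # small batch: push each alert as-is

    grouped = defaultdict(list)
    for alert in alerts:
        grouped[alert["status"]].append(alert)

    # One summary alert per event status instead of one alert per object.
    return [
        {"status": status, "summary": f"{len(items)} objects triggered '{status}' alerts"}
        for status, items in grouped.items()
    ]

alerts = [{"status": "critical", "object": f"host{i}"} for i in range(120)]
print(protect(alerts))  # one summary: "120 objects triggered 'critical' alerts"
```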
Alert Levels¶
- Emergency (red), Important (orange), Warning (yellow): based on the configured condition judgment operators.
- Normal (green): based on the configured number of detections, as explained below:
    - Each execution of a detection task counts as 1 detection. For example, if detection frequency = 5 minutes, then 1 detection = 5 minutes;
    - The number of detections can be customized. For example, if detection frequency = 5 minutes, then 3 detections = 15 minutes.

| Level | Description |
|---|---|
| Normal | After the detection rule takes effect, if emergency, important, or warning anomaly events are generated and the data detection results return to normal within the configured custom number of detections, a recovery alert event is generated. ❗️ Recovery alert events are not subject to Alert Silence restrictions. If the number of detections for recovery alert events is not set, the alert event will not recover and will continue to appear in the Events > Unrecovered Events list. |
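A minimal sketch of the detection-count arithmetic and the recovery rule described above; it is purely illustrative, and the real evaluation is performed by the monitor itself.

```python
def recovery_window_minutes(detection_frequency_minutes, detections):
    # 1 detection = 1 execution of the rule, so N detections span N * frequency.
    return detection_frequency_minutes * detections

print(recovery_window_minutes(5, 1))  # 5  -> 1 detection  = 5 minutes
print(recovery_window_minutes(5, 3))  # 15 -> 3 detections = 15 minutes

def has_recovered(recent_results, detections):
    """Recovery fires once the last `detections` results are all normal.

    If `detections` is not configured (None), the event never auto-recovers.
    """
    if detections is None:
        return False
    window = recent_results[-detections:]
    return len(window) == detections and all(r == "normal" for r in window)

print(has_recovered(["critical", "normal", "normal", "normal"], 3))  # True
print(has_recovered(["critical", "normal", "normal"], None))         # False
```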
Data Gap¶
For the data gap status, seven strategies can be configured.

- Linked to the detection interval time range, judge the query results of the detection metric over the most recent minutes; no event is triggered;
- Linked to the detection interval time range, judge the query results of the detection metric over the most recent minutes; the query results are treated as 0. The query results are then re-compared with the thresholds configured in the Trigger Conditions above to determine whether an anomaly event should be triggered;
- Custom fill of the detection interval value, trigger data gap events, trigger emergency events, trigger important events, trigger warning events, or trigger recovery events. When choosing one of these strategies, it is recommended to set the custom data gap time >= the detection interval; if the configured time <= the detection interval, the data gap and anomaly conditions may both be met, in which case only the data gap handling result is applied.
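A hedged sketch of how these handling options could be applied when a query returns no data. The strategy names, the `evaluate` callback, and the fill behavior are illustrative assumptions, not the product's implementation.

```python
def handle_data_gap(query_result, strategy, evaluate, fill_value=None):
    """Illustrative only: apply a data-gap strategy when the query returns nothing.

    evaluate(value) re-checks the trigger conditions and returns an event or None.
    """
    if query_result is not None:
        return evaluate(query_result)        # data present: normal evaluation

    if strategy == "no_event":
        return None                          # gap ignored, nothing triggered
    if strategy == "treat_as_zero":
        return evaluate(0)                   # gap re-compared against thresholds
    if strategy == "custom_fill":
        return evaluate(fill_value)          # gap filled with a custom value
    return {"event": strategy}               # e.g. trigger a data-gap / severity event

# Example: with "treat_as_zero", a gap may still raise an anomaly event.
evaluate = lambda v: {"event": "critical"} if v < 10 else None
print(handle_data_gap(None, "treat_as_zero", evaluate))  # {'event': 'critical'}
print(handle_data_gap(None, "no_event", evaluate))       # None
```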
Information Generation¶
Enabling this option will generate "information" events for detection results that do not match the above trigger conditions.
Note
When trigger conditions, data gap, and information generation are all configured, triggering is judged in the following order of priority: data gap > trigger conditions > information event generation.
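The priority order can be read as a short decision chain. The sketch below only restates that rule; the function names and callbacks are illustrative.

```python
def evaluate_detection(query_result, gap_check, trigger_check, info_enabled):
    """Priority: data gap > trigger conditions > information event generation."""
    gap_event = gap_check(query_result)
    if gap_event is not None:
        return gap_event                  # data-gap handling wins first

    trigger_event = trigger_check(query_result)
    if trigger_event is not None:
        return trigger_event              # then the configured trigger conditions

    if info_enabled:
        return {"event": "information"}   # otherwise an information event, if enabled
    return None
```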
Other Configurations¶
For more details, refer to Rule Configuration.

