Log Details¶
In the log list, click the expand icon on the left side of a log to slide out its details page, where you can view detailed information about the log, including the time it was generated, host, source, service, content, extended fields, context, and more.
View Complete Log¶
When logs are reported to Guance, any single log larger than 1 MB is split into multiple logs in 1 MB chunks. For example, a 2.5 MB log is split into three logs of 1 MB, 1 MB, and 0.5 MB. You can check the completeness of the split logs through the following fields:
| Field | Type | Description |
|---|---|---|
| `__truncated_id` | string | The unique identifier of the log. Split logs share the same `__truncated_id`, with the ID prefix `LT_xxx` |
| `__truncated_count` | number | The total number of split logs |
| `__truncated_number` | number | The position of the log in the split sequence, starting from 0, where 0 indicates the first log in the sequence |
On the log details page, if the current log has been split into multiple logs, a View Complete Log button appears in the upper right corner. Clicking it opens a new page that lists all related logs in split order and highlights the log you selected before the jump, so you can quickly navigate to its upstream and downstream logs.
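To make the relationship between the three fields concrete, here is a minimal sketch (illustrative only, not Guance's internal implementation) that reorders hypothetical split logs by `__truncated_number` and stitches their `message` values back together:

```python
# Minimal sketch: logs that share a __truncated_id belong to the same original
# log, __truncated_number gives their order, and __truncated_count the expected total.
split_logs = [  # hypothetical query results for one oversized log
    {"__truncated_id": "LT_abc", "__truncated_number": 1, "__truncated_count": 3, "message": "part 2 "},
    {"__truncated_id": "LT_abc", "__truncated_number": 0, "__truncated_count": 3, "message": "part 1 "},
    {"__truncated_id": "LT_abc", "__truncated_number": 2, "__truncated_count": 3, "message": "part 3 "},
]

# Sort the pieces into their original order and verify none are missing.
parts = sorted(split_logs, key=lambda log: log["__truncated_number"])
assert len(parts) == parts[0]["__truncated_count"], "some split logs are missing"

# Concatenate the message fields to recover the full log content.
full_message = "".join(part["message"] for part in parts)
print(full_message)  # part 1 part 2 part 3
```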
Obsy AI Error Analysis¶
Guance can parse error logs with one click: a large model automatically extracts key information from the log, combines it with online search engines and operations knowledge bases, quickly analyzes possible fault causes, and provides preliminary solutions.
- Filter all logs with the status `error`;
- Click a single data entry to expand the details page;
- Click Obsy AI Error Analysis in the upper right corner;
- Start the anomaly analysis.
Error Details¶
If the current log contains `error_stack` or `error_message` field information, the system provides error details related to this log.
To view more log error information, go to Log Error Tracing.
Attribute Fields¶
Click on attribute fields for quick filtering and viewing. You can view host, process, trace, and container data related to the log.
| Operation | Description |
|---|---|
| Filter Field Value | Add this field to the log explorer to view all log data related to this field |
| Reverse Filter Field Value | Exclude this field value in the log explorer to view all log data except entries matching it |
| Add to Display Column | Add this field to the explorer list for viewing |
| Copy | Copy this field to the clipboard |
| View Related Containers | View all containers related to this host |
| View Related Processes | View all processes related to this host |
| View Related Traces | View all traces related to this host |
| View Related Inspection Data | View all inspection data related to this host |
Log Content¶
- The log content automatically switches between JSON and text viewing modes based on the `message` type. If the `message` field does not exist in the log, the log content section is not displayed. The log content supports expanding and collapsing and is expanded by default; when collapsed, only one line is shown.
- For logs with `source:bpf_net_l4_log`, JSON and packet viewing modes are provided automatically. Packet mode displays the client, server, time, and other information, and supports switching between absolute and relative time display, with absolute time as the default. The setting is saved in the local browser after switching.
JSON Search¶
In JSON-formatted logs, both keys and values can be searched. After you click one, the explorer search bar adds a query in the `@key:value` format.

For multi-level JSON data, use `.` to represent the hierarchy. For example, `@key1.key2:value` searches for the value of `key2` nested under `key1`.
For more details, refer to JSON Search.
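As an illustration of the dot-path convention, the sketch below (the field names and the helper are hypothetical, not a Guance API) flattens a nested `message` into the `@key1.key2:value` terms described above:

```python
# Illustrative helper: walk a nested JSON object and yield dot-path search terms.
def to_search_terms(obj, prefix=""):
    """Yield (dot_path, value) pairs such as ("http.status_code", 500)."""
    for key, value in obj.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            yield from to_search_terms(value, path)
        else:
            yield path, value

message = {"http": {"method": "GET", "status_code": 500}}  # hypothetical log message
for path, value in to_search_terms(message):
    print(f"@{path}:{value}")
# @http.method:GET
# @http.status_code:500
```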
Extended Fields¶
- In the search bar, you can enter field names or values to quickly search and locate fields;
- After checking the field alias option, the alias is displayed after the field name;
- Hover over an extended field and click the dropdown icon to choose one of the following operations for this field:
    - Filter Field Value
    - Reverse Filter Field Value
    - Add to Display Column
    - Perform Dimension Analysis: click to jump to Analysis Mode > Time Series Chart
    - Copy

Note
If you choose to add a field to the display column, an icon will appear in the list for easy identification.
Context Logs¶
The context query function of the log service helps you trace related records before and after an abnormal log along the timeline, so you can quickly locate the root cause of the problem.
- On the log details page, you can directly view the context logs of the current data;
- The dropdown box on the left lets you select an index to filter the corresponding data;
- Sort the data;
- Jump directly to a new log explorer page based on the current details page.
More on the related query logic:

Each scroll loads 50 logs from the returned data.

How is the returned data queried?

First, check whether the log has the `log_read_lines` field: if it exists, follow logic a; otherwise, follow logic b.

a. Get the `log_read_lines` value of the current log and filter with `log_read_lines >= {{log_read_lines.value - 30}}` and `log_read_lines <= {{log_read_lines.value + 30}}`.

DQL example (current log line number = 1354170):
L::RE(`.*`):(`message`) { `index` = 'default' and `host` = "ip-172-31-204-89.cn-northwest-1" AND `source` = "kodo-log" AND `service` = "kodo-inner" AND `filename` = "0.log" and `log_read_lines` >= 1354140 and `log_read_lines` <= 1354200} sorder by log_read_lines
b. Get the time of the current log and derive the query start and end times from it.

- Start time: 5 minutes before the current log time.
- End time: the time of the 50th log after the current log. If that time equals the current log time, use `time + 1 microsecond` as the end time; otherwise, use that time as the end time.
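The two branches can be summarized in a short sketch (field names follow the description above; the function shape and parameters are illustrative assumptions, not a Guance API):

```python
from datetime import timedelta

# Rough sketch of the two context-query branches described above.
def context_window(log, time_of_50th_following_log):
    if "log_read_lines" in log:
        # Logic a: filter by line number, 30 lines before and 30 lines after.
        line = log["log_read_lines"]
        return {"log_read_lines": (line - 30, line + 30)}
    # Logic b: derive a time range from the current log's timestamp.
    start = log["time"] - timedelta(minutes=5)   # 5 minutes before the current log
    end = time_of_50th_following_log             # time of the 50th log after it
    if end == log["time"]:                       # identical timestamp: extend by 1 microsecond
        end += timedelta(microseconds=1)
    return {"time": (start, end)}
```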
Log Context Page¶
Click to jump to the log context page. You can manage all current data through the following operations:
- Enter text in the search box to search and locate data;
- Click the button on the side to switch from the system default auto-wrap to content-overflow mode, where each log is displayed on one line and you can scroll horizontally as needed;
- Locate the current log directly;
- Return to the top/bottom;
- Load 100 logs up/down.
Correlation Analysis¶
The system supports correlation analysis of log data. In addition to error details, extended fields, and context logs, you can also get a one-stop understanding of the host, container, network, etc., corresponding to the log.
Built-in Pages¶
For built-in pages such as host, container, Pod, etc., you can perform the following operations:
(Taking the "Host" built-in page as an example)
- Edit the display fields of the current page; the system automatically matches the corresponding data based on these fields;
- Choose to jump to the metric view or the host details page;
- Filter the time range.
Note
Only workspace administrators can modify the display fields of built-in pages. It is recommended to configure common fields. If the page is shared by multiple explorers, field modifications will take effect in real time.
For example, if you configure the "index" field here and the log contains this field, it is displayed normally; if the trace explorer lacks this field, the corresponding value is not displayed.
Built-in Views¶
In addition to the default views displayed by the system here, you can also bind user views.
- Enter the built-in view binding page;
- View the default associated fields; you can keep or delete fields, or add new `key:value` fields;
- Select a view;
- After completing the binding, you can view the bound built-in views in the host object details and click the jump button to go to the corresponding built-in view page.