Log Details¶
In the log list, click the expand icon on the left side of a single log to slide out the detail page for that log. Here, you can view detailed information about the log, including the time it was generated, host, source, service, content, extended fields, context, and more.
View Complete Log¶
When logs are reported to Guance, if a single log entry exceeds 1M in size, the system splits it into multiple entries of at most 1M each. For example, a 2.5M log is split into 3 entries: 1M, 1M, and 0.5M. The completeness of split logs can be checked using the following fields:
| Field | Type | Description |
|---|---|---|
| __truncated_id | string | The unique identifier of the original log. Multiple entries split from the same log share the same __truncated_id, with the prefix LT_xxx. |
| __truncated_count | number | The total number of entries after splitting. |
| __truncated_number | number | The split order of the entry, starting from 0. 0 indicates the first piece of the log. |
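The splitting and reassembly described by these fields can be sketched as follows. The field names come from the table above; the splitting code itself is only an illustration of the documented behavior, not Guance's implementation.

```python
import uuid

CHUNK = 1024 * 1024  # 1M split threshold


def split_log(message: str) -> list[dict]:
    """Split an oversized message into 1M pieces with truncation metadata."""
    if len(message) <= CHUNK:
        return [{"message": message}]
    tid = f"LT_{uuid.uuid4().hex}"  # shared unique identifier, LT_ prefix
    pieces = [message[i:i + CHUNK] for i in range(0, len(message), CHUNK)]
    return [
        {
            "message": piece,
            "__truncated_id": tid,
            "__truncated_count": len(pieces),
            "__truncated_number": n,  # 0 marks the first piece
        }
        for n, piece in enumerate(pieces)
    ]


def reassemble(entries: list[dict]) -> str:
    """Rebuild the original message from its split pieces."""
    ordered = sorted(entries, key=lambda e: e["__truncated_number"])
    return "".join(e["message"] for e in ordered)


# A 2.5M log is split into 3 entries: 1M, 1M, and 0.5M.
log = "x" * int(2.5 * CHUNK)
parts = split_log(log)
print([len(p["message"]) for p in parts])  # [1048576, 1048576, 524288]
```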
On the log detail page, if the current log has been split into multiple pieces, a View Complete Log button will appear in the upper right corner. Clicking this button opens a new page listing all related logs in their split order. The page will also highlight the log selected before the jump with color for quick positioning of upstream and downstream logs.
Obsy AI Error Analysis¶
Guance provides the capability to analyze error logs with one click. It utilizes large language models to automatically extract key information from logs, combines online search engines and an operations knowledge base, quickly analyzes potential fault causes, and offers preliminary solutions.
- Filter all logs with a status of error;
- Click a single data entry to expand its detail page;
- Click Obsy AI Error Analysis in the upper right corner;
- Begin the anomaly analysis.
Error Details¶
If the current log contains error_stack or error_message field information, the system will provide error details related to this log entry.
To view more log error information, go to Log Error Tracing.
Attribute Fields¶
Click on attribute fields for quick filtering and viewing. You can view host, process, trace, and container data related to the log.
| Operation | Description |
|---|---|
| Filter Field Value | Add this field to the log explorer to view all log data related to this field. |
| Exclude Field Value | Add this field to the log explorer to view all log data excluding those related to this field. |
| Add to Display Columns | Add this field to the explorer list for viewing. |
| Copy | Copy this field to the clipboard. |
| View Related Containers | View all containers related to this host. |
| View Related Processes | View all processes related to this host. |
| View Related Traces | View all traces related to this host. |
| View Related Inspection Data | View all inspection data related to this host. |
Log Content¶
- The log content automatically switches between JSON and text viewing modes based on the message type. If the message field does not exist in the log, the log content section is not displayed. Log content supports expanding and collapsing; it is expanded by default and collapses to a single line of height.
- For logs with source:bpf_net_l4_log, both JSON and packet viewing modes are provided. The packet mode displays client, server, and time information, and supports switching between absolute and relative time display (absolute time is the default). The chosen configuration is saved in the local browser.
JSON Search¶
In JSON-formatted logs, you can search on both keys and values. After you click a key or value, the explorer search bar adds a term in the format @key:value for searching.
For multi-level JSON data, use . to represent the hierarchical relationship. For example, @key1.key2:value means searching for the value corresponding to key2 under key1.
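The mapping from nested JSON to the @key1.key2:value search syntax can be sketched with a small flatten helper. The helper below is illustrative only, not a Guance API; it just shows how nested keys join with . to form search expressions.

```python
import json


def search_terms(obj: dict, prefix: str = "") -> list[str]:
    """Flatten nested JSON into @key1.key2:value explorer search expressions."""
    terms = []
    for key, value in obj.items():
        path = f"{prefix}.{key}" if prefix else key  # "." marks hierarchy
        if isinstance(value, dict):
            terms.extend(search_terms(value, path))  # recurse into sub-objects
        else:
            terms.append(f"@{path}:{value}")
    return terms


log = json.loads('{"key1": {"key2": "value"}, "status": "error"}')
print(search_terms(log))  # ['@key1.key2:value', '@status:error']
```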
For more details, refer to JSON Search.
Extended Fields¶
- In the search bar, you can enter a field name or value to quickly search and locate fields.
- After checking the field alias, you can view it next to the field name.
- Hover over an extended field and click the dropdown icon to perform the following operations on that field:
- Filter Field Value
- Exclude Field Value
- Add to Display Columns
- Perform Dimensional Analysis: Click to jump to Analysis Mode > Time Series Chart
- Copy
Note
If you choose to add a field to the display columns, an icon will appear in the list for easy identification.
Context Logs¶
The log service's context query function helps you trace related records before and after an abnormal log occurrence through time clues, quickly locating the root cause of the problem.
- On the log detail page, you can directly view the context logs of the current data content.
- The dropdown on the left allows you to select an index to filter the corresponding data.
- Sort data.
- Directly jump to a new page of the log explorer from the current detail page.
- Display Item Configuration
- Settings
Supplementary Logic Explanation
Fifty entries are loaded each time you scroll.
How is the returned data queried?
Prerequisite: does the log have a log_read_lines field? If it does, follow logic a; otherwise, follow logic b.
a. Take the log_read_lines value of the current log and apply the filter log_read_lines >= {{log_read_lines.value - 30}} and log_read_lines <= {{log_read_lines.value + 30}}.
DQL Example: Current log line number = 1354170
Then:
L::RE(`.*`):(`message`) { `index` = 'default' and `host` = "ip-172-31-204-89.cn-northwest-1" AND `source` = "kodo-log" AND `service` = "kodo-inner" AND `filename` = "0.log" and `log_read_lines` >= 1354140 and `log_read_lines` <= 1354200} sorder by log_read_lines
b. Take the time of the current log and derive the query start and end times from it.
- Start Time: 5 minutes before the current log time.
- End Time: take the time of the 50th log entry after the current log. If that time equals the current log time, use time + 1 microsecond as the end time; otherwise, use that time as the end time.
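The choice between logic a and logic b can be sketched as below. The function and dictionary shapes are illustrative; only the ±30-line window and the 5-minute look-back come from the logic described above.

```python
from datetime import datetime, timedelta


def context_window(log: dict) -> dict:
    """Pick the context-query window for a log entry."""
    if "log_read_lines" in log:  # logic a: line-number window of +/-30
        line = log["log_read_lines"]
        return {"log_read_lines >=": line - 30, "log_read_lines <=": line + 30}
    # logic b: time window starting 5 minutes before the log's timestamp;
    # the end time is refined by the 50th following entry, as described above.
    ts = log["time"]
    return {"start": ts - timedelta(minutes=5), "end": ts}


# Current log line number = 1354170, matching the DQL example above.
print(context_window({"log_read_lines": 1354170}))
# {'log_read_lines >=': 1354140, 'log_read_lines <=': 1354200}
```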
Log Context Page¶
Click to jump to the log context page. You can manage all current data through the following operations:
- Enter text in the search box to search and locate data.
- Click the button on the side to switch the system's default word wrap to content overflow mode. In this mode, each log is displayed as a single line, and you can scroll left and right as needed.
- Directly locate the current log.
- Go to Top/Bottom.
- Load 100 entries Up/Down.
Correlation Analysis¶
The system supports correlation analysis of log data. In addition to error details, extended fields, and context logs, you can also get a one-stop understanding of the hosts, containers, networks, etc., corresponding to the log.
Built-in Pages¶
For built-in pages like Host, Container, Pod, etc., you can perform the following operations:
(Using the "Host" built-in page as an example)
- Edit the fields displayed on the current page. The system will automatically match corresponding data based on the fields.
- Choose to jump to the Metric View or Host Details page.
- Filter the time range.
Note
Only Workspace Administrators can modify the display fields of built-in pages. It is recommended to configure common fields. If the page is shared by multiple explorers, field modifications will take effect in real-time and synchronize.
For example: Configuring the "index" field here will display it normally if the log contains it. However, if the trace explorer lacks this field, the corresponding value cannot be displayed.
Built-in Views¶
In addition to the views displayed by the system by default here, you can also bind user views.
- Go to the built-in view binding page.
- View the default associated fields. You can choose to keep or delete fields, and also add new key:value fields.
- Select a view.
- After binding is complete, the bound built-in view can be viewed in the host object details. You can click the jump button to go to the corresponding built-in view page.