Explorer¶
After the application is created, the system centralizes each LLM request (from input prompt to output generation) as a log event in the LLM monitoring Explorer. You can use powerful log query methods to search, filter, and analyze Trace or Span data for a single application or for all applications.
| Term | Description |
|---|---|
| Span | Represents a single operation in the LLM application. Each model call, vector database query, or API request is an independent Span. It records detailed information about the operation, such as start time, duration, status (success/failure), model used, Token consumption, etc. |
| Trace | A Trace records the complete end-to-end path of an external request (e.g., user query) executed in the LLM application system. It is identified by a globally unique ID and organizes all Spans generated during the request into a tree structure with hierarchical and temporal relationships. In short, it may include a series of Spans such as "receive request > query database > call LLM > return result". |
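For illustration, the sketch below shows how one request can produce a Trace with nested Spans of the form "receive request > query database > call LLM > return result". It assumes the application is instrumented with the OpenTelemetry Python SDK; the span names and attribute keys (`llm.model`, `llm.total_tokens`) are illustrative assumptions, not fields required by the Explorer.

```python
# A minimal sketch: one external request becomes one Trace, and each step
# becomes a Span. Span names and attribute keys are illustrative only.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor
from opentelemetry.trace import Status, StatusCode

trace.set_tracer_provider(TracerProvider())
trace.get_tracer_provider().add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
tracer = trace.get_tracer("llm-app")

with tracer.start_as_current_span("receive_request") as root:    # root Span of the Trace
    root.set_attribute("user.query", "What is our refund policy?")

    with tracer.start_as_current_span("query_vector_db"):        # child Span: retrieval
        pass  # vector database lookup would happen here

    with tracer.start_as_current_span("call_llm") as llm_span:   # child Span: model call
        llm_span.set_attribute("llm.model", "gpt-4o")            # model used
        llm_span.set_attribute("llm.total_tokens", 512)          # Token consumption
        llm_span.set_status(Status(StatusCode.OK))               # success / failure

    root.set_status(Status(StatusCode.OK))                       # return result
```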
View Modes¶
The LLM monitoring Explorer provides several analysis views based on lists and charts.

The list view displays the latest Metrics data of all Traces or Spans under a specific application in the current workspace, including input, output, total Token count, duration, and more.

The chart views aggregate and filter data using the count, last, first, and count_distinct operations combined with by (group-by) conditions, and support the following chart types (see the sketch after this list):
- Top List
- Time Series
- Pie Chart
- Treemap
- Grouped Table Chart
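As a rough illustration of what these operations compute, the sketch below applies count, count_distinct, first, and last to a small set of span records grouped by a by field (here, the model name). The record fields are assumptions for illustration, not the Explorer's actual schema.

```python
# Illustrative only: count / last / first / count_distinct under a "by"
# (group-by) condition, applied to span records. Field names are assumptions.
from collections import defaultdict

spans = [
    {"model": "gpt-4o",   "trace_id": "t1", "duration_ms": 820},
    {"model": "gpt-4o",   "trace_id": "t2", "duration_ms": 1400},
    {"model": "claude-3", "trace_id": "t3", "duration_ms": 640},
]

groups = defaultdict(list)
for span in spans:
    groups[span["model"]].append(span)  # "by model"

for model, items in groups.items():
    print(model, {
        "count": len(items),                                           # count
        "count_distinct_traces": len({s["trace_id"] for s in items}),  # count_distinct
        "first_duration_ms": items[0]["duration_ms"],                  # first
        "last_duration_ms": items[-1]["duration_ms"],                  # last
    })  # results like these feed a top list, pie chart, treemap, etc.
```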
Details Page¶
On the details page, the left side lists the individual Spans and the Trace produced by the application, and the right side shows the preview and attribute information for the selected node.
Related Operations¶
- Search for and locate nodes by node name.
- Choose whether to display Metrics and Scores.

What are Metrics and Scores?

- Metrics: execution duration (time) display.
- Scores: model quality evaluation scores, descriptions, etc.
Preview¶
Includes node type, node name, execution duration, Token consumption, execution status, and execution details.
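As an illustration, the preview for one Span might carry data along these lines; the field names below are assumptions, not the Explorer's exact schema.

```python
# Hypothetical preview data for a single Span; field names are illustrative.
span_preview = {
    "node_type": "LLM",          # node type
    "node_name": "call_llm",     # node name
    "duration_ms": 820,          # execution duration
    "total_tokens": 512,         # Token consumption
    "status": "success",         # execution status
    "details": {"input": "...", "output": "..."},  # execution details (see below)
}
```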
Status Display¶
| Execution Status (status) | Possible Causes |
|---|---|
| Success | |
| Error | LLM call timeout, tool returning an exception, etc. |
| Warning | Output content triggers sensitive-word detection but does not interrupt execution, etc. |
Execution Details¶
Displays the Input and Output of the LLM application. The former contains all elements used to construct the final request sent to the model; the latter shows the result generated from that input.
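For example, for a model-call Span the Input and Output could look roughly like the following; the chat-style payload shape is an assumption for illustration, not the Explorer's display format.

```python
# Illustrative Input/Output for an LLM-call Span; the payload shape is assumed.
execution_details = {
    "input": {
        # everything used to construct the final request sent to the model
        "model": "gpt-4o",
        "messages": [
            {"role": "system", "content": "You are a support assistant."},
            {"role": "user", "content": "What is our refund policy?"},
        ],
        "temperature": 0.2,
    },
    "output": {
        # the result generated from that input
        "content": "Refunds are available within 30 days of purchase...",
        "finish_reason": "stop",
        "total_tokens": 512,
    },
}
```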
Attribute Information¶
Relevant field information included in the current Span or Trace.
