
Langfuse

Introduction to Langfuse

Langfuse is an open-source observability platform designed specifically for LLM (Large Language Model) applications. Its core features include:

  1. End-to-end tracing

    • Records LLM call chains (Prompt → LLM → Output)
    • Supports tracing of multi-step, complex workflows
  2. Metrics monitoring

    • Token usage statistics
    • Request latency monitoring
    • Cost calculation (based on per-model pricing)
  3. Data annotation and analysis

    • Manual annotation
    • Output quality scoring
    • A/B testing support
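As a rough mental model, a single trace can be pictured as a nested record like the one below. This is an illustrative sketch only; the field names are made up for clarity and are not the actual Langfuse API schema:

```python
# Illustrative shape of one trace (hypothetical fields, not the real schema):
trace = {
    "name": "qa-pipeline",
    "observations": [
        {"type": "SPAN", "name": "retrieve-context", "latency_ms": 120},
        {
            "type": "GENERATION",
            "name": "llm-call",
            "model": "gpt-4o",
            "usage": {"input_tokens": 350, "output_tokens": 120},
        },
    ],
}

# Token accounting of the kind Langfuse aggregates per trace:
total_tokens = sum(
    o["usage"]["input_tokens"] + o["usage"]["output_tokens"]
    for o in trace["observations"]
    if o["type"] == "GENERATION"
)
print(total_tokens)  # → 470
```

The platform aggregates exactly this kind of per-generation usage data to produce the token and cost metrics described above.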

Langfuse Endpoints

Site name (alias)    Domain
Hangzhou (CN1)       https://llm-openway.guance.com
Ningxia (CN2)        https://aws-llm-openway.guance.com
Beijing (CN3)        https://cn3-llm-openway.guance.com
Guangzhou (CN4)      https://cn4-llm-openway.guance.com
Hong Kong (CN6)      https://cn6-llm-openway.guance.com
Oregon (US1)         https://us1-llm-openway.guance.com
Frankfurt (EU1)      https://eu1-llm-openway.guance.com
Singapore (AP1)      https://ap1-llm-openway.guance.com
Jakarta (ID1)        https://id1-llm-openway.guance.com
Middle East (ME1)    https://me1-llm-openway.guance.com

Environment Setup

Install dependencies

# Core SDK
pip install langfuse

# Optional: async support (recommended for production)
pip install "langfuse[async]"

# Development tools (for testing)
pip install pytest langfuse-test
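After installing, a quick sanity check that the SDK is importable can look like the sketch below (the helper function here is our own, not part of any SDK):

```python
import importlib.util

def is_installed(pkg: str) -> bool:
    """Return True if a top-level package can be imported in this environment."""
    return importlib.util.find_spec(pkg) is not None

# Report whether the core SDK installed above is visible to this interpreter
print("langfuse:", "installed" if is_installed("langfuse") else "missing")
```

If this prints "missing", check that pip installed into the same Python environment you are running.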
Langfuse Integration Notes
  • Langfuse supports a large number of LLM integrations. We have tested data ingestion for the following; support for additional models remains to be verified.

    • Dify
    • LangChain
    • Ollama
    • Gemini
    • OpenAI
  • In the examples below, YOUR_LLM_APP_ID and YOUR_LLM_APP_TOKEN correspond to the Langfuse public key and secret key, respectively.

Python SDK Integration Example

  • Initialize the client

The credentials can be provided as environment variables:

LANGFUSE_PUBLIC_KEY="YOUR_LLM_APP_ID"
LANGFUSE_SECRET_KEY="YOUR_LLM_APP_TOKEN"
LANGFUSE_HOST="https://llm-openway.guance.com"

or passed to the constructor directly:

from langfuse import Langfuse

langfuse = Langfuse(
    public_key="YOUR_LLM_APP_ID",
    secret_key="YOUR_LLM_APP_TOKEN",
    host="https://llm-openway.guance.com"
)
  • Verify the connection
from langfuse import Langfuse

# Initialize with constructor arguments
langfuse = Langfuse(
    public_key="YOUR_LLM_APP_ID",
    secret_key="YOUR_LLM_APP_TOKEN",
    host="https://llm-openway.guance.com"
)

# Verify connection, do not use in production as this is a synchronous call
if langfuse.auth_check():
    print("Langfuse client is authenticated and ready!")

If the connection fails, an error similar to the following is raised:

langfuse.api.resources.commons.errors.unauthorized_error.UnauthorizedError: status_code: 401, body: {}
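A 401 here almost always means wrong keys rather than a transient failure, but network hiccups can also make a one-shot check flaky. One way to make a startup check more robust is a small retry wrapper of our own (check_with_retry is not part of the Langfuse SDK; you would pass it langfuse.auth_check):

```python
import time

def check_with_retry(auth_check, attempts=3, delay=1.0):
    """Call an auth-check callable, retrying transient failures.

    `auth_check` is any callable returning True on success (for example,
    langfuse.auth_check). Credential errors such as the 401 above raise
    inside the callable and are not retried here, since retrying cannot
    fix bad keys.
    """
    for i in range(attempts):
        if auth_check():
            return True
        if i < attempts - 1:
            time.sleep(delay)
    return False
```

Usage would then be something like: if not check_with_retry(langfuse.auth_check): raise SystemExit("Langfuse unreachable").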

Application Examples

A few simple examples are listed below.

Ollama Integration

If Ollama is deployed locally, Langfuse can be used to trace its API calls:

import os

# Set Langfuse credentials before creating the client
os.environ["LANGFUSE_PUBLIC_KEY"] = "YOUR_LLM_APP_ID"
os.environ["LANGFUSE_SECRET_KEY"] = "YOUR_LLM_APP_TOKEN"
os.environ["LANGFUSE_HOST"] = "https://llm-openway.guance.com"

from langfuse.openai import OpenAI  # drop-in replacement for openai.OpenAI

# Configure the OpenAI client to use http://localhost:11434/v1 as base url
client = OpenAI(
    base_url="http://localhost:11434/v1",  # locally deployed Ollama service
    api_key="ollama",  # required by the client, but unused by Ollama
)

stream = False  # set to True to stream the response

response = client.chat.completions.create(
    # model="llama3.1:latest",
    model="gemma3:4b",  # specify gemma3:4b
    stream=stream,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain how nuclear fusion/fission works."},
    ],
)

if stream:
    for chunk in response:
        content = chunk.choices[0].delta.content
        if content is not None:
            print(content, end="", flush=True)
else:
    print(response)
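Langfuse batches events in a background thread, so a short-lived script like the one above can exit before the last trace is sent. A minimal sketch of one way to guarantee a final flush (install_flush_hook is a helper of our own, not part of the SDK; get_client is the SDK accessor for the configured client):

```python
import atexit

def install_flush_hook(client):
    """Register client.flush() to run at interpreter shutdown, so buffered
    Langfuse events are sent even if the script exits abruptly."""
    atexit.register(client.flush)
    return client
```

In the Ollama script above this would be used as: from langfuse import get_client; install_flush_hook(get_client()).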

DeepSeek Integration

import os
from langfuse.openai import OpenAI
from langfuse import observe, get_client

os.environ["LANGFUSE_PUBLIC_KEY"] = "YOUR_LLM_APP_ID" 
os.environ["LANGFUSE_SECRET_KEY"] = "YOUR_LLM_APP_TOKEN" 
os.environ["LANGFUSE_HOST"] = "https://llm-openway.guance.com"

# Your DeepSeek API key (get it from https://platform.deepseek.com/api_keys)
os.environ["DEEPSEEK_API_KEY"] = "YOUR_DEEPSEEK_API_KEY"  # Replace with your DeepSeek API key

client = OpenAI(
    base_url="https://api.deepseek.com",
    api_key=os.getenv('DEEPSEEK_API_KEY'),
)

langfuse = get_client()

@observe()
def my_llm_call(prompt):
    completion = client.chat.completions.create(
        name="story-generator",
        model="deepseek-chat",
        messages=[
            {"role": "system", "content": "You are a creative storyteller."},
            {"role": "user", "content": prompt}
        ],
        metadata={"genre": "adventure"},
    )
    return completion.choices[0].message.content

with langfuse.start_as_current_span(name="my-ds-trace") as span:
    # Run your application here
    prompt = "Tell me a short story about a token that got lost on its way to the language model. Answer in 100 words or less."
    output = my_llm_call(prompt)

    # Pass additional attributes to the trace
    span.update_trace(
        input=prompt,
        output=output,
        user_id="user_123",
        session_id="session_abc",
        tags=["agent", "my-trace"],
        metadata={"email": "user@langfuse.com"},
        version="1.0.0",
    )

# Flush events in short-lived applications
langfuse.flush()

For more Langfuse integration examples, see here.

