Tracing integrations (preview)

Important

Items marked "(preview)" in this article are currently in public preview. This preview is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.

Microsoft Foundry makes it easy to log traces with minimal changes through its tracing integrations with Microsoft Agent Framework, Semantic Kernel, LangChain, LangGraph, and the OpenAI Agents SDK.

Microsoft Agent Framework

Foundry has native integration with Microsoft Agent Framework. Agents built on Microsoft Agent Framework get out-of-the-box tracing in Observability.

To learn more about tracing and observability in Microsoft Agent Framework, see Microsoft Agent Framework workflows - Observability.

Semantic Kernel

Foundry has native integration with Microsoft Semantic Kernel. Agents built on Semantic Kernel get out-of-the-box tracing in Observability.

Learn more about tracing and observability in Semantic Kernel.

LangChain 和 LangGraph

Note

Tracing integration for LangChain and LangGraph is currently available only in Python. The LangChain and LangGraph v1 releases are under active development, so the API surface and tracing behavior might change. Track updates on the LangChain v1.0 release notes page.

You can enable OpenTelemetry-compliant tracing for LangChain by using opentelemetry-instrumentation-langchain. After you install the necessary packages, you can start instrumenting tracing in your code, as shown in the sketch below.
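
For example, a minimal setup with opentelemetry-instrumentation-langchain might look like the following sketch. It assumes the package's LangchainInstrumentor entry point and uses a console exporter for local inspection; swap in an OTLP or Azure Monitor exporter for real deployments.

from opentelemetry import trace
from opentelemetry.instrumentation.langchain import LangchainInstrumentor
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Configure a tracer provider with a console exporter (local inspection only).
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

# Instrument LangChain so chains, tools, and LLM calls emit OTel spans.
LangchainInstrumentor().instrument()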

Example: LangChain v1 agent with Azure AI tracing

Use this end-to-end sample to instrument a LangChain v1 agent with the langchain-azure-ai tracer, which implements the latest OpenTelemetry (OTel) conventions so that you can view rich traces in Observability.

LangChain v1: Install packages

pip install \
  langchain-azure-ai \
  langchain \
  langgraph \
  langchain-openai \
  azure-identity \
  python-dotenv \
  rich

LangChain v1: Configure environment

  • APPLICATION_INSIGHTS_CONNECTION_STRING: The Azure Monitor Application Insights connection string used for tracing.
  • AZURE_OPENAI_ENDPOINT: The Azure OpenAI endpoint URL.
  • AZURE_OPENAI_CHAT_DEPLOYMENT: The chat model deployment name.
  • AZURE_OPENAI_VERSION: The API version, for example 2024-08-01-preview.
  • Azure credentials are resolved through DefaultAzureCredential (supports environment variables, managed identity, VS Code sign-in, and more).

You can store these values in a .env file for local development.
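
For example, a .env file for local development might look like the following (all values are placeholders):

APPLICATION_INSIGHTS_CONNECTION_STRING="InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://example.applicationinsights.azure.com/"
AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"
AZURE_OPENAI_CHAT_DEPLOYMENT="gpt-4o"
AZURE_OPENAI_VERSION="2024-08-01-preview"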

LangChain v1: Tracer setup

from dotenv import load_dotenv
import os
from langchain_azure_ai.callbacks.tracers import AzureAIOpenTelemetryTracer

load_dotenv(override=True)

# Tracer that exports LangChain runs to Application Insights using
# OpenTelemetry GenAI semantic conventions. enable_content_recording=True
# also captures prompt and completion content on the spans.
azure_tracer = AzureAIOpenTelemetryTracer(
    connection_string=os.environ.get("APPLICATION_INSIGHTS_CONNECTION_STRING"),
    enable_content_recording=True,
    name="Weather information agent",
    id="weather_info_agent_771929",
)

tracers = [azure_tracer]

LangChain v1: Model setup (Azure OpenAI)

import os
import azure.identity
from langchain_openai import AzureChatOpenAI

# Authenticate to Azure OpenAI with Microsoft Entra ID instead of an API key.
token_provider = azure.identity.get_bearer_token_provider(
    azure.identity.DefaultAzureCredential(),
    "https://cognitiveservices.azure.com/.default",
)

model = AzureChatOpenAI(
    azure_endpoint=os.environ.get("AZURE_OPENAI_ENDPOINT"),
    azure_deployment=os.environ.get("AZURE_OPENAI_CHAT_DEPLOYMENT"),
    openai_api_version=os.environ.get("AZURE_OPENAI_VERSION"),
    azure_ad_token_provider=token_provider,
)

LangChain v1: Define tools and prompt

from dataclasses import dataclass
from langchain_core.tools import tool

system_prompt = """You are an expert weather forecaster, who speaks in puns.

You have access to two tools:

- get_weather: use this to get the weather for a specific city
- get_user_info: use this to get the user's location

If a user asks you for the weather, make sure you know the location.
If you can tell from the question that they mean wherever they are,
use the get_user_info tool to find their location."""

# Mock user locations keyed by user id (string)
USER_LOCATION = {
    "1": "Florida",
    "2": "SF",
}


@dataclass
class UserContext:
    user_id: str


@tool
def get_weather(city: str) -> str:
    """Get weather for a given city."""
    return f"It's always sunny in {city}!"

LangChain v1: Use runtime context and define a user info tool

from langgraph.runtime import get_runtime
from langchain_core.runnables import RunnableConfig

@tool
def get_user_info(config: RunnableConfig) -> str:
    """Retrieve user information based on user ID."""
    # Read the typed UserContext supplied via agent.invoke(..., context=...).
    runtime = get_runtime(UserContext)
    user_id = runtime.context.user_id
    return USER_LOCATION[user_id]

LangChain v1: Create the agent

from langchain.agents import create_agent
from langgraph.checkpoint.memory import InMemorySaver
from dataclasses import dataclass


@dataclass
class WeatherResponse:
    conditions: str
    punny_response: str


checkpointer = InMemorySaver()

agent = create_agent(
    model=model,
    prompt=system_prompt,
    tools=[get_user_info, get_weather],
    response_format=WeatherResponse,
    checkpointer=checkpointer,
)

LangChain v1: Run the agent with tracing

from rich import print

def main():
    # thread_id keys the checkpointer's conversation state; the callbacks
    # entry attaches the Azure tracer to every run.
    config = {"configurable": {"thread_id": "1"}, "callbacks": [azure_tracer]}
    context = UserContext(user_id="1")

    r1 = agent.invoke(
        {"messages": [{"role": "user", "content": "what is the weather outside?"}]},
        config=config,
        context=context,
    )
    print(r1.get("structured_response"))

    r2 = agent.invoke(
        {"messages": [{"role": "user", "content": "Thanks"}]},
        config=config,
        context=context,
    )
    print(r2.get("structured_response"))


if __name__ == "__main__":
    main()

With langchain-azure-ai enabled, all LangChain v1 operations (LLM calls, tool invocations, agent steps) are traced by using the latest OpenTelemetry semantic conventions, appear in Observability, and are linked to your Application Insights resource.

Example: LangGraph agent with Azure AI tracing

This sample shows a simple LangGraph agent integrated with langchain-azure-ai, producing OpenTelemetry-compliant traces for graph steps, tool calls, and model invocations.

LangGraph: Install packages

pip install \
  langchain-azure-ai \
  langgraph==1.0.0a4 \
  langchain==1.0.0a10 \
  langchain-openai \
  azure-identity \
  python-dotenv

LangGraph: Configure environment

  • APPLICATION_INSIGHTS_CONNECTION_STRING: The Azure Monitor Application Insights connection string used for tracing.
  • AZURE_OPENAI_ENDPOINT: The Azure OpenAI endpoint URL.
  • AZURE_OPENAI_CHAT_DEPLOYMENT: The chat model deployment name.
  • AZURE_OPENAI_VERSION: The API version, for example 2024-08-01-preview.
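  • OTEL_RECORD_CONTENT (optional): Set to true or false to control whether message content is recorded on spans. The tracer setup below reads this variable and defaults to true.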

You can store these values in a .env file for local development.

LangGraph: Tracer setup

import os
from dotenv import load_dotenv
from langchain_azure_ai.callbacks.tracers import AzureAIOpenTelemetryTracer

load_dotenv(override=True)

azure_tracer = AzureAIOpenTelemetryTracer(
    connection_string=os.environ.get("APPLICATION_INSIGHTS_CONNECTION_STRING"),
    enable_content_recording=os.getenv("OTEL_RECORD_CONTENT", "true").lower() == "true",
    name="Music Player Agent",
)

LangGraph: Tools

from langchain_core.tools import tool

@tool
def play_song_on_spotify(song: str):
    """Play a song on Spotify"""
    # Integrate with Spotify API here.
    return f"Successfully played {song} on Spotify!"


@tool
def play_song_on_apple(song: str):
    """Play a song on Apple Music"""
    # Integrate with Apple Music API here.
    return f"Successfully played {song} on Apple Music!"


tools = [play_song_on_apple, play_song_on_spotify]

LangGraph: Model setup (Azure OpenAI)

import os
import azure.identity
from langchain_openai import AzureChatOpenAI

token_provider = azure.identity.get_bearer_token_provider(
    azure.identity.DefaultAzureCredential(),
    "https://cognitiveservices.azure.com/.default",
)

model = AzureChatOpenAI(
    azure_endpoint=os.environ.get("AZURE_OPENAI_ENDPOINT"),
    azure_deployment=os.environ.get("AZURE_OPENAI_CHAT_DEPLOYMENT"),
    openai_api_version=os.environ.get("AZURE_OPENAI_VERSION"),
    azure_ad_token_provider=token_provider,
).bind_tools(tools, parallel_tool_calls=False)

LangGraph: Build the workflow

from langgraph.graph import END, START, MessagesState, StateGraph
from langgraph.prebuilt import ToolNode
from langgraph.checkpoint.memory import MemorySaver

tool_node = ToolNode(tools)

def should_continue(state: MessagesState):
    # Route to the tool node while the last model message requests tool calls.
    messages = state["messages"]
    last_message = messages[-1]
    return "continue" if getattr(last_message, "tool_calls", None) else "end"


def call_model(state: MessagesState):
    messages = state["messages"]
    response = model.invoke(messages)
    return {"messages": [response]}


workflow = StateGraph(MessagesState)
workflow.add_node("agent", call_model)
workflow.add_node("action", tool_node)

workflow.add_edge(START, "agent")
workflow.add_conditional_edges(
    "agent",
    should_continue,
    {
        "continue": "action",
        "end": END,
    },
)
workflow.add_edge("action", "agent")

memory = MemorySaver()
app = workflow.compile(checkpointer=memory)

LangGraph: Run with tracing

from langchain_core.messages import HumanMessage

config = {"configurable": {"thread_id": "1"}, "callbacks": [azure_tracer]}
input_message = HumanMessage(content="Can you play Taylor Swift's most popular song?")

for event in app.stream({"messages": [input_message]}, config, stream_mode="values"):
    event["messages"][-1].pretty_print()

With langchain-azure-ai enabled, LangGraph execution produces OpenTelemetry-compliant spans for model calls, tool execution, and graph transitions. These traces flow to Application Insights and appear in Observability.

Example: Configure LangChain 0.3 with Azure AI tracing

This minimal setup shows how to enable Azure AI tracing in a LangChain 0.3 application by using the langchain-azure-ai tracer with AzureChatOpenAI.

LangChain 0.3: Install packages

pip install \
  "langchain>=0.3,<0.4" \
  langchain-openai \
  langchain-azure-ai \
  python-dotenv

LangChain 0.3: Configure environment

  • APPLICATION_INSIGHTS_CONNECTION_STRING: The Application Insights connection string used for tracing.
  • AZURE_OPENAI_ENDPOINT: The Azure OpenAI endpoint URL.
  • AZURE_OPENAI_CHAT_DEPLOYMENT: The chat model deployment name.
  • AZURE_OPENAI_VERSION: The API version, for example 2024-08-01-preview.
  • AZURE_OPENAI_API_KEY: The Azure OpenAI API key.

LangChain 0.3: Tracer and model setup

import os
from dotenv import load_dotenv
from langchain_azure_ai.callbacks.tracers import AzureAIOpenTelemetryTracer
from langchain_openai import AzureChatOpenAI

load_dotenv(override=True)

# Tracer: emits spans conforming to updated OTel spec
azure_tracer = AzureAIOpenTelemetryTracer(
    connection_string=os.environ.get("APPLICATION_INSIGHTS_CONNECTION_STRING"),
    enable_content_recording=True,
    name="Trip Planner Orchestrator",
    id="trip_planner_orchestrator_v3",
)
tracers = [azure_tracer]

# Model: Azure OpenAI with callbacks for tracing
llm = AzureChatOpenAI(
    azure_deployment=os.environ.get("AZURE_OPENAI_CHAT_DEPLOYMENT"),
    api_key=os.environ.get("AZURE_OPENAI_API_KEY"),
    azure_endpoint=os.environ.get("AZURE_OPENAI_ENDPOINT"),
    api_version=os.environ.get("AZURE_OPENAI_VERSION"),
    temperature=0.2,
    callbacks=tracers,
)

Attach callbacks=[azure_tracer] to your chains, tools, or agents so that LangChain 0.3 operations are traced and displayed in Observability.
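
For example, a minimal prompt-plus-model chain that passes the tracer at invocation time might look like this (the prompt and question are illustrative):

from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a concise trip-planning assistant."),
    ("human", "{question}"),
])
chain = prompt | llm

# Callbacks can also be passed per invocation instead of on the model.
result = chain.invoke(
    {"question": "Plan a one-day trip to Seattle."},
    config={"callbacks": tracers},
)
print(result.content)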

OpenAI Agents SDK

Use this snippet to configure OpenTelemetry tracing for the OpenAI Agents SDK and instrument the framework. If APPLICATION_INSIGHTS_CONNECTION_STRING is set, traces are exported to Azure Monitor; otherwise, the snippet falls back to console output.
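
The snippet assumes the packages below. A typical install might look like the following; the instrumentation package name is inferred from the import path, so verify it against your environment:

pip install \
  opentelemetry-sdk \
  opentelemetry-instrumentation-openai-agents \
  azure-monitor-opentelemetry-exporter \
  openai-agents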

import os
from opentelemetry import trace
from opentelemetry.instrumentation.openai_agents import OpenAIAgentsInstrumentor
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Configure tracer provider + exporter
resource = Resource.create({
    "service.name": os.getenv("OTEL_SERVICE_NAME", "openai-agents-app"),
})
provider = TracerProvider(resource=resource)

conn = os.getenv("APPLICATION_INSIGHTS_CONNECTION_STRING")
if conn:
    from azure.monitor.opentelemetry.exporter import AzureMonitorTraceExporter
    provider.add_span_processor(
        BatchSpanProcessor(AzureMonitorTraceExporter.from_connection_string(conn))
    )
else:
    provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))

trace.set_tracer_provider(provider)

# Instrument the OpenAI Agents SDK
OpenAIAgentsInstrumentor().instrument(tracer_provider=trace.get_tracer_provider())

# Example: create a session span around your agent run
tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("agent_session[openai.agents]"):
    # ... run your agent here
    pass
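
For reference, a run inside that session span might look like the following sketch. The agent name and instructions are illustrative, and it assumes the openai-agents package with valid model credentials configured:

from agents import Agent, Runner

agent = Agent(
    name="Assistant",
    instructions="You are a concise, helpful assistant.",
)

with tracer.start_as_current_span("agent_session[openai.agents]"):
    result = Runner.run_sync(agent, "Summarize what OpenTelemetry tracing does.")
    print(result.final_output)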