The Microsoft Agent Framework supports creating agents that use the OpenAI Responses service.
Getting started
Add the required NuGet packages to your project.
dotnet add package Microsoft.Agents.AI.OpenAI --prerelease
Create an OpenAI Responses agent
As a first step, you need to create a client to connect to the OpenAI service.
using System;
using Microsoft.Agents.AI;
using OpenAI;
OpenAIClient client = new OpenAIClient("<your_api_key>");
OpenAI exposes several services that provide model-calling capabilities. Choose the Responses service to create a Responses-based agent.
#pragma warning disable OPENAI001 // Type is for evaluation purposes only and is subject to change or removal in future updates.
var responseClient = client.GetOpenAIResponseClient("gpt-4o-mini");
#pragma warning restore OPENAI001
Finally, create the agent by using the CreateAIAgent extension method on the ResponseClient.
AIAgent agent = responseClient.CreateAIAgent(
    instructions: "You are good at telling jokes.",
    name: "Joker");
// Invoke the agent and output the text result.
Console.WriteLine(await agent.RunAsync("Tell me a joke about a pirate."));
Using the agent
The agent is a standard AIAgent and supports all standard AIAgent operations.
For more information about running and interacting with agents, see the Agent getting started tutorials.
Prerequisites
Install the Microsoft Agent Framework package.
pip install agent-framework --pre
Configuration
Environment variables
Set the environment variables required for OpenAI authentication:
# Required for OpenAI API access
OPENAI_API_KEY="your-openai-api-key"
OPENAI_RESPONSES_MODEL_ID="gpt-4o" # or your preferred Responses-compatible model
Alternatively, you can use a .env file in the project root:
OPENAI_API_KEY=your-openai-api-key
OPENAI_RESPONSES_MODEL_ID=gpt-4o
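If your setup doesn't pick up the .env file automatically, a minimal sketch that loads it yourself, assuming the python-dotenv package is installed (an extra dependency, not part of the Agent Framework):

import os

from dotenv import load_dotenv  # assumes python-dotenv is installed

# Load variables from .env in the project root into the process environment.
load_dotenv()

# The Agent Framework clients then read these values from the environment.
print(os.environ.get("OPENAI_RESPONSES_MODEL_ID"))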
Getting started
Import the required classes from the Agent Framework:
import asyncio
from agent_framework import ChatAgent
from agent_framework.openai import OpenAIResponsesClient
Create an OpenAI Responses agent
Basic agent creation
The simplest way to create a Responses agent:
async def basic_example():
    # Create an agent using OpenAI Responses
    agent = OpenAIResponsesClient().create_agent(
        name="WeatherBot",
        instructions="You are a helpful weather assistant.",
    )

    result = await agent.run("What's a good way to check the weather?")
    print(result.text)
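The examples in this article are async functions. To run one as a script, wrap it with asyncio.run (asyncio is already imported above), for example:

if __name__ == "__main__":
    # Run the example coroutine on the default event loop.
    asyncio.run(basic_example())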
Using explicit configuration
You can provide explicit configuration instead of relying on environment variables:
async def explicit_config_example():
    agent = OpenAIResponsesClient(
        ai_model_id="gpt-4o",
        api_key="your-api-key-here",
    ).create_agent(
        instructions="You are a helpful assistant.",
    )

    result = await agent.run("Tell me about AI.")
    print(result.text)
Basic usage patterns
Streaming responses
Get responses as they are generated for a better user experience:
async def streaming_example():
    agent = OpenAIResponsesClient().create_agent(
        instructions="You are a creative storyteller.",
    )

    print("Agent: ", end="", flush=True)
    async for chunk in agent.run_stream("Tell me a short story about AI."):
        if chunk.text:
            print(chunk.text, end="", flush=True)
    print()  # New line after streaming
Agent features
Reasoning models
Use advanced reasoning capabilities with models like GPT-5:
from agent_framework import HostedCodeInterpreterTool, TextContent, TextReasoningContent


async def reasoning_example():
    agent = OpenAIResponsesClient(ai_model_id="gpt-5").create_agent(
        name="MathTutor",
        instructions="You are a personal math tutor. When asked a math question, "
        "write and run code to answer the question.",
        tools=HostedCodeInterpreterTool(),
        reasoning={"effort": "high", "summary": "detailed"},
    )

    print("Agent: ", end="", flush=True)
    async for chunk in agent.run_stream("Solve: 3x + 11 = 14"):
        if chunk.contents:
            for content in chunk.contents:
                if isinstance(content, TextReasoningContent):
                    # Print reasoning content in a lighter color
                    print(f"\033[97m{content.text}\033[0m", end="", flush=True)
                elif isinstance(content, TextContent):
                    print(content.text, end="", flush=True)
    print()
Structured output
Get responses in structured formats:
from pydantic import BaseModel
from agent_framework import AgentRunResponse


class CityInfo(BaseModel):
    """A structured output for city information."""

    city: str
    description: str


async def structured_output_example():
    agent = OpenAIResponsesClient().create_agent(
        name="CityExpert",
        instructions="You describe cities in a structured format.",
    )

    # Non-streaming structured output
    result = await agent.run("Tell me about Paris, France", response_format=CityInfo)
    if result.value:
        city_data = result.value
        print(f"City: {city_data.city}")
        print(f"Description: {city_data.description}")

    # Streaming structured output
    structured_result = await AgentRunResponse.from_agent_response_generator(
        agent.run_stream("Tell me about Tokyo, Japan", response_format=CityInfo),
        output_format_type=CityInfo,
    )
    if structured_result.value:
        tokyo_data = structured_result.value
        print(f"City: {tokyo_data.city}")
        print(f"Description: {tokyo_data.description}")
Function tools
Equip your agent with custom functions:
from typing import Annotated
from pydantic import Field


def get_weather(
    location: Annotated[str, Field(description="The location to get weather for")],
) -> str:
    """Get the weather for a given location."""
    # Your weather API implementation here
    return f"The weather in {location} is sunny with 25°C."


async def tools_example():
    agent = OpenAIResponsesClient().create_agent(
        instructions="You are a helpful weather assistant.",
        tools=get_weather,
    )

    result = await agent.run("What's the weather like in Tokyo?")
    print(result.text)
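You can also supply several tools at once as a list. A minimal sketch that combines the get_weather function above with the hosted code interpreter tool; the get_time helper is hypothetical and only for illustration, and the mixed list is assumed to be accepted in the same way as the list form used in the file search example later in this article:

from datetime import datetime, timezone

from agent_framework import HostedCodeInterpreterTool


def get_time() -> str:
    """Hypothetical helper that returns the current UTC time as text."""
    return datetime.now(timezone.utc).isoformat()


async def multiple_tools_example():
    agent = OpenAIResponsesClient().create_agent(
        instructions="You are a helpful assistant.",
        # A list mixing function tools and a hosted tool (assumption: mixed lists
        # are supported, mirroring the list usage shown elsewhere in this article).
        tools=[get_weather, get_time, HostedCodeInterpreterTool()],
    )

    result = await agent.run("What time is it, and what's the weather in Tokyo?")
    print(result.text)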
Code interpreter
Enable the agent to execute Python code:
from agent_framework import HostedCodeInterpreterTool


async def code_interpreter_example():
    agent = OpenAIResponsesClient().create_agent(
        instructions="You are a helpful assistant that can write and execute Python code.",
        tools=HostedCodeInterpreterTool(),
    )

    result = await agent.run("Calculate the factorial of 100 using Python code.")
    print(result.text)
Code interpreter with file upload
For data analysis tasks, you can upload files and analyze them with code:
import os
import tempfile

from agent_framework import HostedCodeInterpreterTool
from openai import AsyncOpenAI


async def code_interpreter_with_files_example():
    print("=== OpenAI Code Interpreter with File Upload ===")

    # Create the OpenAI client for file operations
    openai_client = AsyncOpenAI()

    # Create sample CSV data
    csv_data = """name,department,salary,years_experience
Alice Johnson,Engineering,95000,5
Bob Smith,Sales,75000,3
Carol Williams,Engineering,105000,8
David Brown,Marketing,68000,2
Emma Davis,Sales,82000,4
Frank Wilson,Engineering,88000,6
"""

    # Create temporary CSV file
    with tempfile.NamedTemporaryFile(mode="w", suffix=".csv", delete=False) as temp_file:
        temp_file.write(csv_data)
        temp_file_path = temp_file.name

    # Upload file to OpenAI
    print("Uploading file to OpenAI...")
    with open(temp_file_path, "rb") as file:
        uploaded_file = await openai_client.files.create(
            file=file,
            purpose="assistants",  # Required for code interpreter
        )
    print(f"File uploaded with ID: {uploaded_file.id}")

    # Create agent using OpenAI Responses client
    agent = ChatAgent(
        chat_client=OpenAIResponsesClient(),
        instructions="You are a helpful assistant that can analyze data files using Python code.",
        tools=HostedCodeInterpreterTool(inputs=[{"file_id": uploaded_file.id}]),
    )

    # Test the code interpreter with the uploaded file
    query = "Analyze the employee data in the uploaded CSV file. Calculate average salary by department."
    print(f"User: {query}")
    result = await agent.run(query)
    print(f"Agent: {result.text}")

    # Clean up: delete the uploaded file
    await openai_client.files.delete(uploaded_file.id)
    print(f"Cleaned up uploaded file: {uploaded_file.id}")

    # Clean up temporary local file
    os.unlink(temp_file_path)
    print(f"Cleaned up temporary file: {temp_file_path}")
Thread management
Maintain conversation context across multiple interactions:
async def thread_example():
    agent = OpenAIResponsesClient().create_agent(
        name="Agent",
        instructions="You are a helpful assistant.",
    )

    # Create a persistent thread for conversation context
    thread = agent.get_new_thread()

    # First interaction
    first_query = "My name is Alice"
    print(f"User: {first_query}")
    first_result = await agent.run(first_query, thread=thread)
    print(f"Agent: {first_result.text}")

    # Second interaction - agent remembers the context
    second_query = "What's my name?"
    print(f"User: {second_query}")
    second_result = await agent.run(second_query, thread=thread)
    print(f"Agent: {second_result.text}")  # Should remember "Alice"
File search
Enable your agent to search through uploaded documents and files:
from agent_framework import HostedFileSearchTool, HostedVectorStoreContent


async def file_search_example():
    client = OpenAIResponsesClient()

    # Create a file with sample content
    file = await client.client.files.create(
        file=("todays_weather.txt", b"The weather today is sunny with a high of 75F."),
        purpose="user_data",
    )

    # Create a vector store for document storage
    vector_store = await client.client.vector_stores.create(
        name="knowledge_base",
        expires_after={"anchor": "last_active_at", "days": 1},
    )

    # Add file to vector store and wait for processing
    result = await client.client.vector_stores.files.create_and_poll(
        vector_store_id=vector_store.id,
        file_id=file.id,
    )

    # Check if processing was successful
    if result.last_error is not None:
        raise Exception(f"Vector store file processing failed with status: {result.last_error.message}")

    # Create vector store content reference
    vector_store_content = HostedVectorStoreContent(vector_store_id=vector_store.id)

    # Create agent with file search capability
    agent = ChatAgent(
        chat_client=client,
        instructions="You are a helpful assistant that can search through files to find information.",
        tools=[HostedFileSearchTool(inputs=vector_store_content)],
    )

    # Test the file search
    message = "What is the weather today? Do a file search to find the answer."
    print(f"User: {message}")
    response = await agent.run(message)
    print(f"Agent: {response}")

    # Cleanup
    await client.client.vector_stores.delete(vector_store.id)
    await client.client.files.delete(file.id)
Web search
Enable real-time web search capabilities:
from agent_framework import HostedWebSearchTool


async def web_search_example():
    agent = OpenAIResponsesClient().create_agent(
        name="SearchBot",
        instructions="You are a helpful assistant that can search the web for current information.",
        tools=HostedWebSearchTool(),
    )

    result = await agent.run("What are the latest developments in artificial intelligence?")
    print(result.text)
Image analysis
Analyze and understand images with multimodal capabilities:
from agent_framework import ChatMessage, TextContent, UriContent


async def image_analysis_example():
    agent = OpenAIResponsesClient().create_agent(
        name="VisionAgent",
        instructions="You are a helpful agent that can analyze images.",
    )

    # Create message with both text and image content
    message = ChatMessage(
        role="user",
        contents=[
            TextContent(text="What do you see in this image?"),
            UriContent(
                uri="your-image-uri",
                media_type="image/jpeg",
            ),
        ],
    )

    result = await agent.run(message)
    print(result.text)
Image generation
Generate images using the Responses API:
from agent_framework import DataContent, UriContent


async def image_generation_example():
    agent = OpenAIResponsesClient().create_agent(
        instructions="You are a helpful AI that can generate images.",
        tools=[{
            "type": "image_generation",
            "size": "1024x1024",
            "quality": "low",
        }],
    )

    result = await agent.run("Generate an image of a sunset over the ocean.")

    # Check for generated images in the response
    for content in result.contents:
        if isinstance(content, (DataContent, UriContent)):
            print(f"Image generated: {content.uri}")
Model Context Protocol (MCP) tools
Local MCP tools
Connect to local MCP servers for extended capabilities:
from agent_framework import MCPStreamableHTTPTool


async def local_mcp_example():
    agent = OpenAIResponsesClient().create_agent(
        name="DocsAgent",
        instructions="You are a helpful assistant that can help with Microsoft documentation.",
        tools=MCPStreamableHTTPTool(
            name="Microsoft Learn MCP",
            url="https://learn.microsoft.com/api/mcp",
        ),
    )

    result = await agent.run("How do I create an Azure storage account using az cli?")
    print(result.text)
Hosted MCP tools
Use hosted MCP tools to extend your agent's capabilities:
from agent_framework import HostedMCPTool


async def hosted_mcp_example():
    agent = OpenAIResponsesClient().create_agent(
        name="DocsBot",
        instructions="You are a helpful assistant with access to various tools.",
        tools=HostedMCPTool(
            name="Microsoft Learn MCP",
            url="https://learn.microsoft.com/api/mcp",
        ),
    )

    result = await agent.run("How do I create an Azure storage account?")
    print(result.text)
Using the agent
The agent is a standard BaseAgent and supports all standard agent operations.
For more information about running and interacting with agents, see the Agent getting started tutorials.