Microsoft Agent Framework supports creating agents that use the OpenAI Responses service.
Getting Started
Add the required NuGet package to your project.
dotnet add package Microsoft.Agents.AI.OpenAI --prerelease
Create an OpenAI Responses agent
To get started, create a client to connect to the OpenAI service.
using System;
using Microsoft.Agents.AI;
using OpenAI;
OpenAIClient client = new OpenAIClient("<your_api_key>");
OpenAI exposes several services that all provide model-calling capabilities. Select the Responses service to create a Responses-based agent.
#pragma warning disable OPENAI001 // Type is for evaluation purposes only and is subject to change or removal in future updates.
var responseClient = client.GetOpenAIResponseClient("gpt-4o-mini");
#pragma warning restore OPENAI001
Finally, create the agent using the CreateAIAgent extension method on the ResponseClient.
AIAgent agent = responseClient.CreateAIAgent(
    instructions: "You are good at telling jokes.",
    name: "Joker");

// Invoke the agent and output the text result.
Console.WriteLine(await agent.RunAsync("Tell me a joke about a pirate."));
Using the agent
The agent is a standard AIAgent and supports all standard agent operations.
For more information about running and interacting with agents, see the agent getting-started tutorials.
Prerequisites
Install the Microsoft Agent Framework package.
pip install agent-framework --pre
Setup
Environment variables
Configure the required environment variables for OpenAI authentication:
# Required for OpenAI API access
OPENAI_API_KEY="your-openai-api-key"
OPENAI_RESPONSES_MODEL_ID="gpt-4o" # or your preferred Responses-compatible model
You can also use a .env file at the root of your project:
OPENAI_API_KEY=your-openai-api-key
OPENAI_RESPONSES_MODEL_ID=gpt-4o
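The client reads these variable names from the environment when you pass no explicit arguments. A minimal sketch of resolving them up front so a missing key fails early (the helper name is ours, not part of the framework):

```python
import os

def resolve_openai_settings() -> dict:
    """Collect the settings OpenAIResponsesClient would otherwise read itself."""
    api_key = os.environ.get("OPENAI_API_KEY")
    # Fall back to a default model when the variable is unset.
    model_id = os.environ.get("OPENAI_RESPONSES_MODEL_ID", "gpt-4o")
    if not api_key:
        raise RuntimeError("OPENAI_API_KEY is not set")
    return {"api_key": api_key, "ai_model_id": model_id}
```

Failing fast here produces a clearer error than a rejected API call later.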
Getting Started
Import the required classes from Agent Framework:
import asyncio
from agent_framework import ChatAgent
from agent_framework.openai import OpenAIResponsesClient
Create an OpenAI Responses agent
Creating a basic agent
The simplest way to create a Responses agent:
async def basic_example():
    # Create an agent using OpenAI Responses
    agent = OpenAIResponsesClient().create_agent(
        name="WeatherBot",
        instructions="You are a helpful weather assistant.",
    )

    result = await agent.run("What's a good way to check the weather?")
    print(result.text)
Using explicit configuration
You can provide explicit configuration instead of relying on environment variables:
async def explicit_config_example():
    agent = OpenAIResponsesClient(
        ai_model_id="gpt-4o",
        api_key="your-api-key-here",
    ).create_agent(
        instructions="You are a helpful assistant.",
    )

    result = await agent.run("Tell me about AI.")
    print(result.text)
Basic usage patterns
Streaming responses
Get responses as they are generated for a better user experience:
async def streaming_example():
    agent = OpenAIResponsesClient().create_agent(
        instructions="You are a creative storyteller.",
    )

    print("Agent: ", end="", flush=True)
    async for chunk in agent.run_stream("Tell me a short story about AI."):
        if chunk.text:
            print(chunk.text, end="", flush=True)
    print()  # New line after streaming
Agent features
Reasoning models
Use advanced reasoning capabilities with models such as GPT-5:
from agent_framework import HostedCodeInterpreterTool, TextContent, TextReasoningContent

async def reasoning_example():
    agent = OpenAIResponsesClient(ai_model_id="gpt-5").create_agent(
        name="MathTutor",
        instructions="You are a personal math tutor. When asked a math question, "
        "write and run code to answer the question.",
        tools=HostedCodeInterpreterTool(),
        reasoning={"effort": "high", "summary": "detailed"},
    )

    print("Agent: ", end="", flush=True)
    async for chunk in agent.run_stream("Solve: 3x + 11 = 14"):
        if chunk.contents:
            for content in chunk.contents:
                if isinstance(content, TextReasoningContent):
                    # Reasoning content in bright white
                    print(f"\033[97m{content.text}\033[0m", end="", flush=True)
                elif isinstance(content, TextContent):
                    print(content.text, end="", flush=True)
    print()
Structured output
Get responses in structured formats:
from pydantic import BaseModel

from agent_framework import AgentRunResponse

class CityInfo(BaseModel):
    """A structured output for city information."""

    city: str
    description: str

async def structured_output_example():
    agent = OpenAIResponsesClient().create_agent(
        name="CityExpert",
        instructions="You describe cities in a structured format.",
    )

    # Non-streaming structured output
    result = await agent.run("Tell me about Paris, France", response_format=CityInfo)
    if result.value:
        city_data = result.value
        print(f"City: {city_data.city}")
        print(f"Description: {city_data.description}")

    # Streaming structured output
    structured_result = await AgentRunResponse.from_agent_response_generator(
        agent.run_stream("Tell me about Tokyo, Japan", response_format=CityInfo),
        output_format_type=CityInfo,
    )
    if structured_result.value:
        tokyo_data = structured_result.value
        print(f"City: {tokyo_data.city}")
        print(f"Description: {tokyo_data.description}")
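The response_format parameter is backed by the Pydantic model's JSON schema, so the parsed result.value is a validated CityInfo instance. A standalone sketch of the validation step involved, with no API call and a hypothetical payload:

```python
from pydantic import BaseModel

class CityInfo(BaseModel):
    """A structured output for city information."""

    city: str
    description: str

# A payload shaped like the model's schema, as the service might return it.
payload = '{"city": "Paris", "description": "Capital of France."}'
info = CityInfo.model_validate_json(payload)
print(info.city)  # → Paris
```

If the payload were missing a field or had a wrong type, model_validate_json would raise a ValidationError rather than return a partial object.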
Function tools
Equip your agent with custom functions:
from typing import Annotated

from pydantic import Field

def get_weather(
    location: Annotated[str, Field(description="The location to get weather for")],
) -> str:
    """Get the weather for a given location."""
    # Your weather API implementation here
    return f"The weather in {location} is sunny with 25°C."

async def tools_example():
    agent = OpenAIResponsesClient().create_agent(
        instructions="You are a helpful weather assistant.",
        tools=get_weather,
    )

    result = await agent.run("What's the weather like in Tokyo?")
    print(result.text)
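Because the tool is a plain Python function, the framework derives its name, description, and parameter schema from the signature, docstring, and Field annotation; the function itself stays an ordinary callable you can unit-test without any agent:

```python
from typing import Annotated

from pydantic import Field

def get_weather(
    location: Annotated[str, Field(description="The location to get weather for")],
) -> str:
    """Get the weather for a given location."""
    return f"The weather in {location} is sunny with 25°C."

# The function behaves like any other callable outside the agent.
print(get_weather("Tokyo"))  # → The weather in Tokyo is sunny with 25°C.
```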
Code interpreter
Let your agent write and execute Python code:
from agent_framework import HostedCodeInterpreterTool

async def code_interpreter_example():
    agent = OpenAIResponsesClient().create_agent(
        instructions="You are a helpful assistant that can write and execute Python code.",
        tools=HostedCodeInterpreterTool(),
    )

    result = await agent.run("Calculate the factorial of 100 using Python code.")
    print(result.text)
Code interpreter with file uploads
For data analysis tasks, you can upload files and analyze them with code:
import os
import tempfile

from agent_framework import HostedCodeInterpreterTool
from openai import AsyncOpenAI

async def code_interpreter_with_files_example():
    print("=== OpenAI Code Interpreter with File Upload ===")

    # Create the OpenAI client for file operations
    openai_client = AsyncOpenAI()

    # Create sample CSV data
    csv_data = """name,department,salary,years_experience
Alice Johnson,Engineering,95000,5
Bob Smith,Sales,75000,3
Carol Williams,Engineering,105000,8
David Brown,Marketing,68000,2
Emma Davis,Sales,82000,4
Frank Wilson,Engineering,88000,6
"""

    # Create temporary CSV file
    with tempfile.NamedTemporaryFile(mode="w", suffix=".csv", delete=False) as temp_file:
        temp_file.write(csv_data)
        temp_file_path = temp_file.name

    # Upload file to OpenAI
    print("Uploading file to OpenAI...")
    with open(temp_file_path, "rb") as file:
        uploaded_file = await openai_client.files.create(
            file=file,
            purpose="assistants",  # Required for code interpreter
        )
    print(f"File uploaded with ID: {uploaded_file.id}")

    # Create agent using OpenAI Responses client
    agent = ChatAgent(
        chat_client=OpenAIResponsesClient(),
        instructions="You are a helpful assistant that can analyze data files using Python code.",
        tools=HostedCodeInterpreterTool(inputs=[{"file_id": uploaded_file.id}]),
    )

    # Test the code interpreter with the uploaded file
    query = "Analyze the employee data in the uploaded CSV file. Calculate average salary by department."
    print(f"User: {query}")
    result = await agent.run(query)
    print(f"Agent: {result.text}")

    # Clean up: delete the uploaded file
    await openai_client.files.delete(uploaded_file.id)
    print(f"Cleaned up uploaded file: {uploaded_file.id}")

    # Clean up temporary local file
    os.unlink(temp_file_path)
    print(f"Cleaned up temporary file: {temp_file_path}")
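For reference, the analysis the interpreter is asked to perform can be reproduced locally on the same sample data. A plain-stdlib sketch (not part of the framework) of the expected average salary per department:

```python
import csv
import io
from collections import defaultdict

csv_data = """name,department,salary,years_experience
Alice Johnson,Engineering,95000,5
Bob Smith,Sales,75000,3
Carol Williams,Engineering,105000,8
David Brown,Marketing,68000,2
Emma Davis,Sales,82000,4
Frank Wilson,Engineering,88000,6
"""

# Group salaries by department, then average each group.
salaries: dict[str, list[int]] = defaultdict(list)
for row in csv.DictReader(io.StringIO(csv_data)):
    salaries[row["department"]].append(int(row["salary"]))

averages = {dept: sum(vals) / len(vals) for dept, vals in salaries.items()}
print(averages)  # → {'Engineering': 96000.0, 'Sales': 78500.0, 'Marketing': 68000.0}
```

Comparing the agent's answer against a known-good local computation like this is a useful sanity check when developing interpreter-backed workflows.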
Thread management
Maintain conversation context across multiple interactions:
async def thread_example():
    agent = OpenAIResponsesClient().create_agent(
        name="Agent",
        instructions="You are a helpful assistant.",
    )

    # Create a persistent thread for conversation context
    thread = agent.get_new_thread()

    # First interaction
    first_query = "My name is Alice"
    print(f"User: {first_query}")
    first_result = await agent.run(first_query, thread=thread)
    print(f"Agent: {first_result.text}")

    # Second interaction - agent remembers the context
    second_query = "What's my name?"
    print(f"User: {second_query}")
    second_result = await agent.run(second_query, thread=thread)
    print(f"Agent: {second_result.text}")  # Should remember "Alice"
File search
Let your agent search through uploaded documents and files:
from agent_framework import HostedFileSearchTool, HostedVectorStoreContent

async def file_search_example():
    client = OpenAIResponsesClient()

    # Create a file with sample content
    file = await client.client.files.create(
        file=("todays_weather.txt", b"The weather today is sunny with a high of 75F."),
        purpose="user_data",
    )

    # Create a vector store for document storage
    vector_store = await client.client.vector_stores.create(
        name="knowledge_base",
        expires_after={"anchor": "last_active_at", "days": 1},
    )

    # Add file to vector store and wait for processing
    result = await client.client.vector_stores.files.create_and_poll(
        vector_store_id=vector_store.id,
        file_id=file.id,
    )

    # Check if processing was successful
    if result.last_error is not None:
        raise Exception(f"Vector store file processing failed with status: {result.last_error.message}")

    # Create vector store content reference
    vector_store_content = HostedVectorStoreContent(vector_store_id=vector_store.id)

    # Create agent with file search capability
    agent = ChatAgent(
        chat_client=client,
        instructions="You are a helpful assistant that can search through files to find information.",
        tools=[HostedFileSearchTool(inputs=vector_store_content)],
    )

    # Test the file search
    message = "What is the weather today? Do a file search to find the answer."
    print(f"User: {message}")
    response = await agent.run(message)
    print(f"Agent: {response}")

    # Cleanup
    await client.client.vector_stores.delete(vector_store.id)
    await client.client.files.delete(file.id)
Web Search
Enable real-time web search capabilities:
from agent_framework import HostedWebSearchTool

async def web_search_example():
    agent = OpenAIResponsesClient().create_agent(
        name="SearchBot",
        instructions="You are a helpful assistant that can search the web for current information.",
        tools=HostedWebSearchTool(),
    )

    result = await agent.run("What are the latest developments in artificial intelligence?")
    print(result.text)
Image analysis
Analyze and understand images with multimodal capabilities:
from agent_framework import ChatMessage, TextContent, UriContent

async def image_analysis_example():
    agent = OpenAIResponsesClient().create_agent(
        name="VisionAgent",
        instructions="You are a helpful agent that can analyze images.",
    )

    # Create message with both text and image content
    message = ChatMessage(
        role="user",
        contents=[
            TextContent(text="What do you see in this image?"),
            UriContent(
                uri="your-image-uri",
                media_type="image/jpeg",
            ),
        ],
    )

    result = await agent.run(message)
    print(result.text)
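When the image is not hosted at a public URL, one common alternative is a base64 data: URI; building one needs only the standard library. Whether your version of UriContent accepts data URIs is an assumption to verify:

```python
import base64

# Encode raw image bytes as a data: URI that can stand in for a public URL.
image_bytes = b"\xff\xd8\xff\xe0 fake jpeg payload"
data_uri = "data:image/jpeg;base64," + base64.b64encode(image_bytes).decode("ascii")
print(data_uri.startswith("data:image/jpeg;base64,"))  # → True
```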
Image generation
Generate images using the Responses API:
from agent_framework import DataContent, UriContent

async def image_generation_example():
    agent = OpenAIResponsesClient().create_agent(
        instructions="You are a helpful AI that can generate images.",
        tools=[{
            "type": "image_generation",
            "size": "1024x1024",
            "quality": "low",
        }],
    )

    result = await agent.run("Generate an image of a sunset over the ocean.")

    # Check for generated images in the response
    for content in result.contents:
        if isinstance(content, (DataContent, UriContent)):
            print(f"Image generated: {content.uri}")
MCP (Model Context Protocol) tools
Local MCP tools
Connect to local MCP servers for extended functionality:
from agent_framework import MCPStreamableHTTPTool

async def local_mcp_example():
    agent = OpenAIResponsesClient().create_agent(
        name="DocsAgent",
        instructions="You are a helpful assistant that can help with Microsoft documentation.",
        tools=MCPStreamableHTTPTool(
            name="Microsoft Learn MCP",
            url="https://learn.microsoft.com/api/mcp",
        ),
    )

    result = await agent.run("How do I create an Azure storage account using az cli?")
    print(result.text)
Hosted MCP tools
Use hosted MCP tools for extended functionality:
from agent_framework import HostedMCPTool

async def hosted_mcp_example():
    agent = OpenAIResponsesClient().create_agent(
        name="DocsBot",
        instructions="You are a helpful assistant with access to various tools.",
        tools=HostedMCPTool(
            name="Microsoft Learn MCP",
            url="https://learn.microsoft.com/api/mcp",
        ),
    )

    result = await agent.run("How do I create an Azure storage account?")
    print(result.text)
Using the agent
The agent is a standard BaseAgent and supports all standard agent operations.
For more information about running and interacting with agents, see the agent getting-started tutorials.