Microsoft Agent Framework supports creating agents that use the Azure OpenAI Responses service.
Getting Started
Add the required NuGet packages to your project.
dotnet add package Azure.AI.OpenAI --prerelease
dotnet add package Azure.Identity
dotnet add package Microsoft.Agents.AI.OpenAI --prerelease
Create an Azure OpenAI Responses Agent
To get started, you need to create a client to connect to the Azure OpenAI service.
using System;
using Azure.AI.OpenAI;
using Azure.Identity;
using Microsoft.Agents.AI;
using OpenAI;
AzureOpenAIClient client = new AzureOpenAIClient(
    new Uri("https://<myresource>.openai.azure.com/"),
    new AzureCliCredential());
Azure OpenAI exposes multiple services, all of which provide model invocation capabilities. Select the Responses service to create a Responses-based agent.
#pragma warning disable OPENAI001 // Type is for evaluation purposes only and is subject to change or removal in future updates.
var responseClient = client.GetOpenAIResponseClient("gpt-4o-mini");
#pragma warning restore OPENAI001
Finally, create the agent using the CreateAIAgent extension method on the response client.
AIAgent agent = responseClient.CreateAIAgent(
    instructions: "You are good at telling jokes.",
    name: "Joker");
// Invoke the agent and output the text result.
Console.WriteLine(await agent.RunAsync("Tell me a joke about a pirate."));
Using the Agent
The agent is a standard AIAgent and supports all standard AIAgent operations.
For more information on running and interacting with agents, see the agent getting started tutorials.
Configuration
Environment Variables
Before using Azure OpenAI Responses agents, you need to configure these environment variables:
export AZURE_OPENAI_ENDPOINT="https://<myresource>.openai.azure.com"
export AZURE_OPENAI_RESPONSES_DEPLOYMENT_NAME="gpt-4o-mini"
Optionally, you can also set:
export AZURE_OPENAI_API_VERSION="preview" # Required for Responses API
export AZURE_OPENAI_API_KEY="<your-api-key>" # If not using Azure CLI authentication
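With these variables set, the clients in the examples below can be created without passing an endpoint or deployment name explicitly. If you prefer to fail fast on missing configuration, a small check such as the following sketch (plain Python, not part of the framework) can help:
import os

# Minimal, framework-independent sanity check for the required configuration.
required = ["AZURE_OPENAI_ENDPOINT", "AZURE_OPENAI_RESPONSES_DEPLOYMENT_NAME"]
missing = [name for name in required if not os.environ.get(name)]
if missing:
    raise RuntimeError(f"Missing required environment variables: {', '.join(missing)}")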
Installation
Add the Agent Framework package to your project:
pip install agent-framework-core --pre
Getting Started
Authentication
Azure OpenAI Responses agents use Azure credentials for authentication. The simplest approach is to use AzureCliCredential after running az login:
from azure.identity import AzureCliCredential
credential = AzureCliCredential()
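If you are not using Azure CLI authentication, the AZURE_OPENAI_API_KEY environment variable listed above can be used instead. The following is a minimal sketch of key-based authentication; it assumes the client also accepts an explicit api_key parameter, and the key value shown is a placeholder:
import asyncio
from agent_framework.azure import AzureOpenAIResponsesClient

async def main():
    # Assumption: api_key is accepted as an alternative to credential.
    # Prefer reading the key from AZURE_OPENAI_API_KEY rather than hard-coding it.
    agent = AzureOpenAIResponsesClient(api_key="<your-api-key>").create_agent(
        instructions="You are good at telling jokes.",
        name="Joker",
    )
    result = await agent.run("Tell me a joke about a pirate.")
    print(result.text)

asyncio.run(main())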
Create an Azure OpenAI Responses Agent
Basic Agent Creation
The simplest way to create an agent is to use AzureOpenAIResponsesClient with environment variables:
import asyncio
from agent_framework.azure import AzureOpenAIResponsesClient
from azure.identity import AzureCliCredential

async def main():
    agent = AzureOpenAIResponsesClient(credential=AzureCliCredential()).create_agent(
        instructions="You are good at telling jokes.",
        name="Joker"
    )
    result = await agent.run("Tell me a joke about a pirate.")
    print(result.text)

asyncio.run(main())
Explicit Configuration
You can also provide configuration explicitly instead of using environment variables:
import asyncio
from agent_framework.azure import AzureOpenAIResponsesClient
from azure.identity import AzureCliCredential

async def main():
    agent = AzureOpenAIResponsesClient(
        endpoint="https://<myresource>.openai.azure.com",
        deployment_name="gpt-4o-mini",
        api_version="preview",
        credential=AzureCliCredential()
    ).create_agent(
        instructions="You are good at telling jokes.",
        name="Joker"
    )
    result = await agent.run("Tell me a joke about a pirate.")
    print(result.text)

asyncio.run(main())
Agent Features
Reasoning Models
Azure OpenAI Responses agents support advanced reasoning models such as o1 for solving complex problems:
import asyncio
from agent_framework.azure import AzureOpenAIResponsesClient
from azure.identity import AzureCliCredential

async def main():
    agent = AzureOpenAIResponsesClient(
        deployment_name="o1-preview",  # Use reasoning model
        credential=AzureCliCredential()
    ).create_agent(
        instructions="You are a helpful assistant that excels at complex reasoning.",
        name="ReasoningAgent"
    )
    result = await agent.run("Solve this logic puzzle: If A > B, B > C, and C > D, and we know D = 5, B = 10, what can we determine about A?")
    print(result.text)

asyncio.run(main())
Structured Output
Get structured responses from Azure OpenAI Responses agents:
import asyncio
from typing import Annotated
from agent_framework.azure import AzureOpenAIResponsesClient
from azure.identity import AzureCliCredential
from pydantic import BaseModel, Field

class WeatherForecast(BaseModel):
    location: Annotated[str, Field(description="The location")]
    temperature: Annotated[int, Field(description="Temperature in Celsius")]
    condition: Annotated[str, Field(description="Weather condition")]
    humidity: Annotated[int, Field(description="Humidity percentage")]

async def main():
    agent = AzureOpenAIResponsesClient(credential=AzureCliCredential()).create_agent(
        instructions="You are a weather assistant that provides structured forecasts.",
        response_format=WeatherForecast
    )
    result = await agent.run("What's the weather like in Paris today?")
    weather_data = result.value
    print(f"Location: {weather_data.location}")
    print(f"Temperature: {weather_data.temperature}°C")
    print(f"Condition: {weather_data.condition}")
    print(f"Humidity: {weather_data.humidity}%")

asyncio.run(main())
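Because response_format parses the reply into an instance of your Pydantic model, the value returned by result.value supports standard Pydantic operations. A minimal sketch (assuming Pydantic v2) with illustrative, hard-coded values:
from pydantic import BaseModel

class WeatherForecast(BaseModel):
    location: str
    temperature: int
    condition: str
    humidity: int

# In the example above these values come from result.value; they are hard-coded here for illustration.
forecast = WeatherForecast(location="Paris", temperature=22, condition="Sunny", humidity=55)
print(forecast.model_dump_json(indent=2))  # serialize the structured result to JSON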
Function Tools
You can provide custom function tools to Azure OpenAI Responses agents:
import asyncio
from typing import Annotated
from agent_framework.azure import AzureOpenAIResponsesClient
from azure.identity import AzureCliCredential
from pydantic import Field

def get_weather(
    location: Annotated[str, Field(description="The location to get the weather for.")],
) -> str:
    """Get the weather for a given location."""
    return f"The weather in {location} is sunny with a high of 25°C."

async def main():
    agent = AzureOpenAIResponsesClient(credential=AzureCliCredential()).create_agent(
        instructions="You are a helpful weather assistant.",
        tools=get_weather
    )
    result = await agent.run("What's the weather like in Seattle?")
    print(result.text)

asyncio.run(main())
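The example above passes a single function. Elsewhere in this article, tools are also passed as a list (for example, the file search tool), so several function tools can be combined in the same way. A minimal sketch, assuming the list form is also accepted for plain Python functions; get_time is a hypothetical second tool:
import asyncio
from agent_framework.azure import AzureOpenAIResponsesClient
from azure.identity import AzureCliCredential

def get_weather(location: str) -> str:
    """Get the weather for a given location."""
    return f"The weather in {location} is sunny with a high of 25°C."

def get_time(timezone: str) -> str:
    """Get the current time for a given timezone (hypothetical helper)."""
    return f"The current time in {timezone} is 12:00."

async def main():
    agent = AzureOpenAIResponsesClient(credential=AzureCliCredential()).create_agent(
        instructions="You are a helpful assistant.",
        # Assumption: tools accepts a list of functions, mirroring the list form
        # used for hosted tools elsewhere in this article.
        tools=[get_weather, get_time],
    )
    result = await agent.run("What's the weather and local time in Seattle?")
    print(result.text)

asyncio.run(main())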
Code Interpreter
Azure OpenAI Responses agents support code execution through the hosted code interpreter:
import asyncio
from agent_framework import ChatAgent, HostedCodeInterpreterTool
from agent_framework.azure import AzureOpenAIResponsesClient
from azure.identity import AzureCliCredential

async def main():
    async with ChatAgent(
        chat_client=AzureOpenAIResponsesClient(credential=AzureCliCredential()),
        instructions="You are a helpful assistant that can write and execute Python code.",
        tools=HostedCodeInterpreterTool()
    ) as agent:
        result = await agent.run("Calculate the factorial of 20 using Python code.")
        print(result.text)

asyncio.run(main())
Code Interpreter with File Upload
For data analysis tasks, you can upload files and analyze them with code:
import asyncio
import os
import tempfile
from agent_framework import ChatAgent, HostedCodeInterpreterTool
from agent_framework.azure import AzureOpenAIResponsesClient
from azure.identity import AzureCliCredential
from openai import AsyncAzureOpenAI

async def create_sample_file_and_upload(openai_client: AsyncAzureOpenAI) -> tuple[str, str]:
    """Create a sample CSV file and upload it to Azure OpenAI."""
    csv_data = """name,department,salary,years_experience
Alice Johnson,Engineering,95000,5
Bob Smith,Sales,75000,3
Carol Williams,Engineering,105000,8
David Brown,Marketing,68000,2
Emma Davis,Sales,82000,4
Frank Wilson,Engineering,88000,6
"""
    # Create temporary CSV file
    with tempfile.NamedTemporaryFile(mode="w", suffix=".csv", delete=False) as temp_file:
        temp_file.write(csv_data)
        temp_file_path = temp_file.name

    # Upload file to Azure OpenAI
    print("Uploading file to Azure OpenAI...")
    with open(temp_file_path, "rb") as file:
        uploaded_file = await openai_client.files.create(
            file=file,
            purpose="assistants",  # Required for code interpreter
        )
    print(f"File uploaded with ID: {uploaded_file.id}")
    return temp_file_path, uploaded_file.id

async def cleanup_files(openai_client: AsyncAzureOpenAI, temp_file_path: str, file_id: str) -> None:
    """Clean up both local temporary file and uploaded file."""
    # Clean up: delete the uploaded file
    await openai_client.files.delete(file_id)
    print(f"Cleaned up uploaded file: {file_id}")
    # Clean up temporary local file
    os.unlink(temp_file_path)
    print(f"Cleaned up temporary file: {temp_file_path}")

async def main():
    print("=== Azure OpenAI Code Interpreter with File Upload ===")
    # Initialize Azure OpenAI client for file operations
    credential = AzureCliCredential()

    async def get_token():
        token = credential.get_token("https://cognitiveservices.azure.com/.default")
        return token.token

    openai_client = AsyncAzureOpenAI(
        azure_ad_token_provider=get_token,
        api_version="2024-05-01-preview",
    )
    temp_file_path, file_id = await create_sample_file_and_upload(openai_client)

    # Create agent using Azure OpenAI Responses client
    async with ChatAgent(
        chat_client=AzureOpenAIResponsesClient(credential=credential),
        instructions="You are a helpful assistant that can analyze data files using Python code.",
        tools=HostedCodeInterpreterTool(inputs=[{"file_id": file_id}]),
    ) as agent:
        # Test the code interpreter with the uploaded file
        query = "Analyze the employee data in the uploaded CSV file. Calculate average salary by department."
        print(f"User: {query}")
        result = await agent.run(query)
        print(f"Agent: {result.text}")

    await cleanup_files(openai_client, temp_file_path, file_id)

asyncio.run(main())
File Search
Enable your agent to search through uploaded documents and files:
import asyncio
from agent_framework import ChatAgent, HostedFileSearchTool, HostedVectorStoreContent
from agent_framework.azure import AzureOpenAIResponsesClient
from azure.identity import AzureCliCredential

async def create_vector_store(client: AzureOpenAIResponsesClient) -> tuple[str, HostedVectorStoreContent]:
    """Create a vector store with sample documents."""
    file = await client.client.files.create(
        file=("todays_weather.txt", b"The weather today is sunny with a high of 75F."),
        purpose="assistants"
    )
    vector_store = await client.client.vector_stores.create(
        name="knowledge_base",
        expires_after={"anchor": "last_active_at", "days": 1},
    )
    result = await client.client.vector_stores.files.create_and_poll(
        vector_store_id=vector_store.id,
        file_id=file.id
    )
    if result.last_error is not None:
        raise Exception(f"Vector store file processing failed with status: {result.last_error.message}")
    return file.id, HostedVectorStoreContent(vector_store_id=vector_store.id)

async def delete_vector_store(client: AzureOpenAIResponsesClient, file_id: str, vector_store_id: str) -> None:
    """Delete the vector store after using it."""
    await client.client.vector_stores.delete(vector_store_id=vector_store_id)
    await client.client.files.delete(file_id=file_id)

async def main():
    print("=== Azure OpenAI Responses Client with File Search Example ===\n")
    # Initialize Responses client
    client = AzureOpenAIResponsesClient(credential=AzureCliCredential())
    file_id, vector_store = await create_vector_store(client)

    async with ChatAgent(
        chat_client=client,
        instructions="You are a helpful assistant that can search through files to find information.",
        tools=[HostedFileSearchTool(inputs=vector_store)],
    ) as agent:
        query = "What is the weather today? Do a file search to find the answer."
        print(f"User: {query}")
        result = await agent.run(query)
        print(f"Agent: {result}\n")

    await delete_vector_store(client, file_id, vector_store.vector_store_id)

asyncio.run(main())
Model Context Protocol (MCP) Tools
Local MCP Tools
Connect to local MCP servers for extended functionality:
import asyncio
from agent_framework import ChatAgent, MCPStreamableHTTPTool
from agent_framework.azure import AzureOpenAIResponsesClient
from azure.identity import AzureCliCredential

async def main():
    """Example showing local MCP tools for Azure OpenAI Responses Agent."""
    # Create Azure OpenAI Responses client
    responses_client = AzureOpenAIResponsesClient(credential=AzureCliCredential())

    # Create agent
    agent = responses_client.create_agent(
        name="DocsAgent",
        instructions="You are a helpful assistant that can help with Microsoft documentation questions.",
    )

    # Connect to the MCP server (Streamable HTTP)
    async with MCPStreamableHTTPTool(
        name="Microsoft Learn MCP",
        url="https://learn.microsoft.com/api/mcp",
    ) as mcp_tool:
        # First query — expect the agent to use the MCP tool if it helps
        first_query = "How to create an Azure storage account using az cli?"
        first_result = await agent.run(first_query, tools=mcp_tool)
        print("\n=== Answer 1 ===\n", first_result.text)

        # Follow-up query (connection is reused)
        second_query = "What is Microsoft Agent Framework?"
        second_result = await agent.run(second_query, tools=mcp_tool)
        print("\n=== Answer 2 ===\n", second_result.text)

asyncio.run(main())
Hosted MCP Tools
Use hosted MCP tools with configurable approval workflows:
import asyncio
from agent_framework import ChatAgent, HostedMCPTool
from agent_framework.azure import AzureOpenAIResponsesClient
from azure.identity import AzureCliCredential

async def main():
    """Example showing hosted MCP tools without approvals."""
    credential = AzureCliCredential()

    async with ChatAgent(
        chat_client=AzureOpenAIResponsesClient(credential=credential),
        name="DocsAgent",
        instructions="You are a helpful assistant that can help with microsoft documentation questions.",
        tools=HostedMCPTool(
            name="Microsoft Learn MCP",
            url="https://learn.microsoft.com/api/mcp",
            # Auto-approve all function calls for seamless experience
            approval_mode="never_require",
        ),
    ) as agent:
        # First query
        first_query = "How to create an Azure storage account using az cli?"
        print(f"User: {first_query}")
        first_result = await agent.run(first_query)
        print(f"Agent: {first_result.text}\n")

        print("\n=======================================\n")

        # Second query
        second_query = "What is Microsoft Agent Framework?"
        print(f"User: {second_query}")
        second_result = await agent.run(second_query)
        print(f"Agent: {second_result.text}\n")

asyncio.run(main())
Image Analysis
Azure OpenAI Responses agents support multimodal interactions, including image analysis:
import asyncio
from agent_framework import ChatMessage, TextContent, UriContent
from agent_framework.azure import AzureOpenAIResponsesClient
from azure.identity import AzureCliCredential

async def main():
    print("=== Azure Responses Agent with Image Analysis ===")

    # Create an Azure Responses agent with vision capabilities
    agent = AzureOpenAIResponsesClient(credential=AzureCliCredential()).create_agent(
        name="VisionAgent",
        instructions="You are a helpful agent that can analyze images.",
    )

    # Create a message with both text and image content
    user_message = ChatMessage(
        role="user",
        contents=[
            TextContent(text="What do you see in this image?"),
            UriContent(
                uri="https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg",
                media_type="image/jpeg",
            ),
        ],
    )

    # Get the agent's response
    print("User: What do you see in this image? [Image provided]")
    result = await agent.run(user_message)
    print(f"Agent: {result.text}")

asyncio.run(main())
Using Threads for Context Management
Maintain conversation context across multiple interactions:
import asyncio
from agent_framework.azure import AzureOpenAIResponsesClient
from azure.identity import AzureCliCredential

async def main():
    agent = AzureOpenAIResponsesClient(credential=AzureCliCredential()).create_agent(
        instructions="You are a helpful programming assistant."
    )

    # Create a new thread for conversation context
    thread = agent.get_new_thread()

    # First interaction
    result1 = await agent.run("I'm working on a Python web application.", thread=thread, store=True)
    print(f"Assistant: {result1.text}")

    # Second interaction - context is preserved
    result2 = await agent.run("What framework should I use?", thread=thread, store=True)
    print(f"Assistant: {result2.text}")

asyncio.run(main())
Streaming Responses
Get responses as they are generated by using streaming:
import asyncio
from agent_framework.azure import AzureOpenAIResponsesClient
from azure.identity import AzureCliCredential

async def main():
    agent = AzureOpenAIResponsesClient(credential=AzureCliCredential()).create_agent(
        instructions="You are a helpful assistant."
    )

    print("Agent: ", end="", flush=True)
    async for chunk in agent.run_stream("Tell me a short story about a robot"):
        if chunk.text:
            print(chunk.text, end="", flush=True)
    print()

asyncio.run(main())
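Streaming can be combined with the threads shown in the previous section to stream a multi-turn conversation. A minimal sketch, assuming run_stream accepts the same thread and store parameters as run:
import asyncio
from agent_framework.azure import AzureOpenAIResponsesClient
from azure.identity import AzureCliCredential

async def main():
    agent = AzureOpenAIResponsesClient(credential=AzureCliCredential()).create_agent(
        instructions="You are a helpful programming assistant."
    )
    thread = agent.get_new_thread()

    # Stream the first turn; the thread keeps the conversation context.
    print("Agent: ", end="", flush=True)
    async for chunk in agent.run_stream("I'm building a CLI tool in Python.", thread=thread, store=True):
        if chunk.text:
            print(chunk.text, end="", flush=True)
    print()

    # Stream a follow-up turn on the same thread.
    print("Agent: ", end="", flush=True)
    async for chunk in agent.run_stream("Which argument parsing library should I use?", thread=thread, store=True):
        if chunk.text:
            print(chunk.text, end="", flush=True)
    print()

asyncio.run(main())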
Using the Agent
The agent is a standard BaseAgent and supports all standard agent operations.
For more information on running and interacting with agents, see the agent getting started tutorials.