AIModel Class
Protocol defining the interface for AI models that can generate text responses.
This protocol standardizes how different AI providers (OpenAI, Azure OpenAI, etc.) integrate with the Teams AI framework. Implementations should handle message processing, function calling, and optional streaming.
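Because this is a structural protocol, any object exposing a matching `generate_text` coroutine satisfies it. The following is a minimal, self-contained sketch of that idea; the message classes here are simplified stand-ins, not the real `microsoft.teams.ai` types, and the `EchoModel` is a hypothetical toy implementation.

```python
from __future__ import annotations

import asyncio
from dataclasses import dataclass, field
from typing import Awaitable, Callable, Protocol


# Simplified stand-ins for the framework's message types (hypothetical,
# not the real microsoft.teams.ai classes).
@dataclass
class UserMessage:
    content: str


@dataclass
class ModelMessage:
    content: str
    function_calls: list[str] = field(default_factory=list)


class AIModel(Protocol):
    """Structural protocol: any object with a matching generate_text conforms."""

    async def generate_text(
        self,
        input: UserMessage | ModelMessage,
        *,
        on_chunk: Callable[[str], Awaitable[None]] | None = None,
    ) -> ModelMessage: ...


class EchoModel:
    """Trivial model that satisfies AIModel structurally, with optional streaming."""

    async def generate_text(self, input, *, on_chunk=None):
        text = f"echo: {input.content}"
        if on_chunk is not None:
            # Stream the whole reply as a single chunk for simplicity.
            await on_chunk(text)
        return ModelMessage(content=text)


result = asyncio.run(EchoModel().generate_text(UserMessage("hi")))
print(result.content)  # echo: hi
```

No registration or inheritance is needed: conformance is checked structurally, which is what lets different providers (OpenAI, Azure OpenAI, etc.) plug in without sharing a base class.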
Constructor
AIModel(*args, **kwargs)
Methods
| Name | Description |
|---|---|
| generate_text | Generate a text response from the AI model. Implementations should handle function calling recursively: if the returned ModelMessage contains function_calls, they should be executed and the results fed back into the model for a final response. |
generate_text
Generate a text response from the AI model.
Note
Implementations should handle function calling recursively - if the returned
ModelMessage contains function_calls, they should be executed and the results
fed back into the model for a final response.
```python
async generate_text(
    input: UserMessage | ModelMessage | SystemMessage | FunctionMessage,
    *,
    system: SystemMessage | None = None,
    memory: Memory | None = None,
    functions: dict[str, microsoft.teams.ai.function.Function[pydantic.main.BaseModel]] | None = None,
    on_chunk: Callable[[str], Awaitable[None]] | None = None,
) -> ModelMessage
```
Parameters
| Name | Description |
|---|---|
| input (required) | The input message to process (user, model, function, or system message) |
| system | Optional system message to guide model behavior |
| memory | Optional memory storage for conversation history |
| functions | Optional dictionary of available functions the model can call |
| on_chunk | Optional callback for streaming text chunks as they arrive |
Keyword-Only Parameters
| Name | Description |
|---|---|
| system | Default value: None |
| memory | Default value: None |
| functions | Default value: None |
| on_chunk | Default value: None |
Returns
| Type | Description |
|---|---|
| ModelMessage | The generated response, potentially containing function calls |
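The recursive function-calling behavior described in the note above can be sketched as a loop: execute each requested function, feed its result back as a function message, and repeat until the model returns plain text. Everything below is a hypothetical toy (the `ToyModel`, `FunctionCall`, and `get_weather` names are made up for illustration), not the framework's actual implementation.

```python
import asyncio
from dataclasses import dataclass, field


# Hypothetical stand-ins for the framework's types; the real classes live
# in microsoft.teams.ai and differ in detail.
@dataclass
class FunctionCall:
    name: str
    argument: str


@dataclass
class ModelMessage:
    content: str
    function_calls: list[FunctionCall] = field(default_factory=list)


@dataclass
class FunctionMessage:
    name: str
    result: str


class ToyModel:
    """Fake provider: first turn requests a function, second turn answers."""

    async def _call_provider(self, message) -> ModelMessage:
        if isinstance(message, FunctionMessage):
            return ModelMessage(content=f"The weather is {message.result}.")
        return ModelMessage(
            content="", function_calls=[FunctionCall("get_weather", "Paris")]
        )

    async def generate_text(self, input, *, functions=None) -> ModelMessage:
        response = await self._call_provider(input)
        # Recursive function-call handling, as the protocol note requires:
        # execute each requested function and feed the result back into the
        # model until the response contains no more function calls.
        while response.function_calls:
            call = response.function_calls[0]
            result = await functions[call.name](call.argument)
            response = await self._call_provider(FunctionMessage(call.name, result))
        return response


async def get_weather(city: str) -> str:
    return f"sunny in {city}"


answer = asyncio.run(
    ToyModel().generate_text(
        "What's the weather in Paris?", functions={"get_weather": get_weather}
    )
)
print(answer.content)  # The weather is sunny in Paris.
```

The caller sees only the final ModelMessage; the intermediate function-call round trips are resolved inside `generate_text`, which is the contract the note describes.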