Important
If you're currently using an Azure AI Inference beta SDK with Microsoft Foundry Models or Azure OpenAI service, we strongly recommend that you transition to the generally available OpenAI/v1 API, which uses an OpenAI stable SDK.
For more information on how to migrate to the OpenAI/v1 API by using an SDK in your programming language of choice, see Migrate from Azure AI Inference SDK to OpenAI SDK.
In this article, you'll learn how to add a new model deployment to a Foundry Models endpoint. The deployment is available for inference in your Foundry resource when you specify the deployment name in your requests.
Prerequisites
To complete this article, you need the following:
An Azure subscription. If you're using GitHub Models, you can upgrade your experience and create an Azure subscription in the process. For more information, see Upgrade from GitHub Models to Foundry Models.
A Foundry project. This project type is managed under a Foundry resource (formerly known as Azure AI Services resource). If you don't have a Foundry project, see Create a project for Microsoft Foundry.
Azure role-based access control (RBAC) permissions to create and manage deployments. You need the Cognitive Services Contributor role or equivalent permissions for the Foundry resource.
Foundry Models from partners and community require access to Azure Marketplace. Ensure you have the permissions required to subscribe to model offerings. Foundry Models sold directly by Azure don't have this requirement.
Install the Azure CLI and the `cognitiveservices` extension for Foundry Tools:

```azurecli
az extension add -n cognitiveservices
```

Some commands in this tutorial use the `jq` tool, which might not be installed on your system. For installation instructions, see Download `jq`.

Identify the following information:
Your Azure subscription ID
Your Foundry Tools resource name
The resource group where you deployed the Foundry Tools resource
Add models
To add a model, first identify the model that you want to deploy. Query the available models as follows:
Sign in to your Azure subscription.
```azurecli
az login
```

If you have more than one subscription, select the subscription where your resource is located:

```azurecli
az account set --subscription $subscriptionId
```

Set the following environment variables with the name of the Foundry Tools resource you plan to use and its resource group:

```azurecli
accountName="<ai-services-resource-name>"
resourceGroupName="<resource-group>"
location="eastus2"
```

If you haven't created a Foundry Tools account yet, create one:

```azurecli
az cognitiveservices account create -n $accountName -g $resourceGroupName --custom-domain $accountName --location $location --kind AIServices --sku S0
```

Reference: az cognitiveservices account
Check which models are available to you and under which SKU. SKUs, also known as deployment types, define how Azure infrastructure processes requests. Models might offer different deployment types. The following command lists all the model definitions available:
```azurecli
az cognitiveservices account list-models \
    -n $accountName \
    -g $resourceGroupName \
| jq '.[] | { name: .name, format: .format, version: .version, sku: .skus[0].name, capacity: .skus[0].capacity.default }'
```

The output includes available models with their properties:

```json
{
  "name": "Phi-3.5-vision-instruct",
  "format": "Microsoft",
  "version": "2",
  "sku": "GlobalStandard",
  "capacity": 1
}
```

Reference: az cognitiveservices account list-models
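If `jq` isn't available, the same filtering can be done on the CLI's JSON output in a few lines of Python. This is an illustrative sketch, not part of the article's workflow; the sample data below is hypothetical and mimics the shape returned by `az cognitiveservices account list-models`.

```python
import json

# Hypothetical sample of `az cognitiveservices account list-models` output
# (truncated); the real command returns one entry per model definition.
raw = '''
[
  {"name": "Phi-3.5-vision-instruct", "format": "Microsoft", "version": "2",
   "skus": [{"name": "GlobalStandard", "capacity": {"default": 1}}]},
  {"name": "gpt-4o", "format": "OpenAI", "version": "2024-08-06",
   "skus": [{"name": "DataZoneStandard", "capacity": {"default": 10}}]}
]
'''

models = json.loads(raw)

# Mirror the jq filter: keep name, format, version, and the first SKU.
summary = [
    {
        "name": m["name"],
        "format": m["format"],
        "version": m["version"],
        "sku": m["skus"][0]["name"],
        "capacity": m["skus"][0]["capacity"]["default"],
    }
    for m in models
]

for entry in summary:
    print(entry)
```

In a real session you would pipe the CLI output into the script, or read it with `subprocess`, instead of hardcoding `raw`.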
Identify the model you want to deploy. You need the properties `name`, `format`, `version`, and `sku`. The property `format` indicates the provider offering the model. Depending on the type of deployment, you might also need capacity.

Add the model deployment to the resource. The following example adds `Phi-3.5-vision-instruct`:

```azurecli
az cognitiveservices account deployment create \
    -n $accountName \
    -g $resourceGroupName \
    --deployment-name Phi-3.5-vision-instruct \
    --model-name Phi-3.5-vision-instruct \
    --model-version 2 \
    --model-format Microsoft \
    --sku-capacity 1 \
    --sku-name GlobalStandard
```

Reference: az cognitiveservices account deployment
The model is ready to use.
You can deploy the same model multiple times if needed, as long as each deployment uses a different name. This capability is useful if you want to test different configurations for a given model, including content filters.
Use the model
Note
This section is identical for both the CLI and Bicep approaches.
You can consume deployed models by using the Foundry Models endpoints for the resource. When you construct your request, set the `model` parameter to the model deployment name you created. You can programmatically get the URI for the inference endpoint by using the following command:
Inference endpoint
```azurecli
az cognitiveservices account show -n $accountName -g $resourceGroupName | jq '.properties.endpoints["Azure AI Model Inference API"]'
```
To make requests to the Foundry Models endpoint, append the route `models`. For example: `https://<resource>.services.ai.azure.com/models`. You can see the API reference for the endpoint on the Azure AI Model Inference API reference page.
Inference keys
```azurecli
az cognitiveservices account keys list -n $accountName -g $resourceGroupName
```
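Putting the endpoint, key, and deployment name together, a request might be assembled as in the following sketch. This is illustrative and not from the article: the resource name, key, and `api-key` header are placeholders and assumptions (key-based auth is one of the schemes Azure AI services endpoints support); no network call is made here.

```python
# Illustrative sketch: building a chat-completions request for the
# Foundry Models endpoint. The resource name and key are placeholders.
endpoint = "https://<resource>.services.ai.azure.com"  # from `az cognitiveservices account show`
api_key = "<inference-key>"                            # from `az cognitiveservices account keys list`

# Requests go to the `models` route; `model` must be the deployment name.
url = f"{endpoint}/models/chat/completions"
headers = {"api-key": api_key, "Content-Type": "application/json"}
payload = {
    "model": "Phi-3.5-vision-instruct",  # the deployment name created earlier
    "messages": [{"role": "user", "content": "Hello!"}],
}

# With a real resource and key you could then send the request, for example:
# import requests
# response = requests.post(url, headers=headers, json=payload)
print(url)
```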
Manage deployments
You can see all the deployments available using the CLI:
Run the following command to see all the active deployments:
```azurecli
az cognitiveservices account deployment list -n $accountName -g $resourceGroupName
```

Reference: az cognitiveservices account deployment list
You can see the details of a given deployment:
```azurecli
az cognitiveservices account deployment show \
    --deployment-name "Phi-3.5-vision-instruct" \
    -n $accountName \
    -g $resourceGroupName
```

Reference: az cognitiveservices account deployment show
You can delete a given deployment as follows:
```azurecli
az cognitiveservices account deployment delete \
    --deployment-name "Phi-3.5-vision-instruct" \
    -n $accountName \
    -g $resourceGroupName
```
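Because several deployments can share one underlying model, it can be handy to post-process the `deployment list` output before deleting anything. The following sketch is illustrative: the sample data and the JSON shape (`name` plus `properties.model`) are assumptions based on typical CLI output, so check against your own output before relying on it.

```python
import json

# Hypothetical sample of `az cognitiveservices account deployment list`
# output (shape assumed for illustration; verify against your CLI output).
raw = '''
[
  {"name": "Phi-3.5-vision-instruct",
   "properties": {"model": {"name": "Phi-3.5-vision-instruct", "version": "2"}}},
  {"name": "phi-vision-strict-filter",
   "properties": {"model": {"name": "Phi-3.5-vision-instruct", "version": "2"}}}
]
'''

deployments = json.loads(raw)

# Find every deployment backed by a given model, e.g. before deleting them.
target = "Phi-3.5-vision-instruct"
matching = [d["name"] for d in deployments
            if d["properties"]["model"]["name"] == target]
print(matching)
```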
Install the Azure CLI.
Identify the following information:
- Your Azure subscription ID
- Your Foundry resource (formerly known as Azure AI Services resource) name
- The resource group where the Foundry resource is deployed
- The model name, provider, version, and SKU you want to deploy. You can use the Foundry portal or the Azure CLI to find this information. In this example, you deploy the following model:
  - Model name: `Phi-3.5-vision-instruct`
  - Provider: `Microsoft`
  - Version: `2`
  - Deployment type: Global standard
Set up the environment
The example in this article is based on code samples contained in the Azure-Samples/azureai-model-inference-bicep repository. To run the commands locally without having to copy or paste file content, clone the repository:
```bash
git clone https://github.com/Azure-Samples/azureai-model-inference-bicep
```
The files for this example are in:
```bash
cd azureai-model-inference-bicep/infra
```
Permissions required to subscribe to Models from Partners and Community
Foundry Models from partners and community that are available for deployment (for example, Cohere models) require Azure Marketplace. Model providers define the license terms and set the price for the use of their models through Azure Marketplace.
When deploying third-party models, ensure you have the following permissions in your account:
- On the Azure subscription:
  - `Microsoft.MarketplaceOrdering/agreements/offers/plans/read`
  - `Microsoft.MarketplaceOrdering/agreements/offers/plans/sign/action`
  - `Microsoft.MarketplaceOrdering/offerTypes/publishers/offers/plans/agreements/read`
  - `Microsoft.Marketplace/offerTypes/publishers/offers/plans/agreements/read`
  - `Microsoft.SaaS/register/action`
- On the resource group, to create and use the SaaS resource:
  - `Microsoft.SaaS/resources/read`
  - `Microsoft.SaaS/resources/write`
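To sanity-check an identity against this list, you could diff its granted RBAC actions against the required ones. This is an illustrative sketch (not an Azure API); the `granted` set below is a made-up example of an identity missing one permission.

```python
# Illustrative check: verify that a set of granted RBAC actions covers the
# Marketplace actions listed above. Action strings are taken from the article.
REQUIRED_SUBSCRIPTION_ACTIONS = {
    "Microsoft.MarketplaceOrdering/agreements/offers/plans/read",
    "Microsoft.MarketplaceOrdering/agreements/offers/plans/sign/action",
    "Microsoft.MarketplaceOrdering/offerTypes/publishers/offers/plans/agreements/read",
    "Microsoft.Marketplace/offerTypes/publishers/offers/plans/agreements/read",
    "Microsoft.SaaS/register/action",
}
REQUIRED_RESOURCE_GROUP_ACTIONS = {
    "Microsoft.SaaS/resources/read",
    "Microsoft.SaaS/resources/write",
}

def missing_actions(granted, required):
    """Return the required actions not present in the granted set."""
    return sorted(required - set(granted))

# Hypothetical identity: has all subscription actions but lacks SaaS write.
granted = REQUIRED_SUBSCRIPTION_ACTIONS | {"Microsoft.SaaS/resources/read"}
print(missing_actions(granted, REQUIRED_RESOURCE_GROUP_ACTIONS))
```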
Add the model
Use the template
Use the template `ai-services-deployment-template.bicep` to describe model deployments:

*ai-services-deployment-template.bicep*

```bicep
@description('Name of the Azure AI services account')
param accountName string

@description('Name of the model to deploy')
param modelName string

@description('Version of the model to deploy')
param modelVersion string

@allowed([
  'AI21 Labs'
  'Cohere'
  'Core42'
  'DeepSeek'
  'xAI'
  'Meta'
  'Microsoft'
  'Mistral AI'
  'OpenAI'
])
@description('Model provider')
param modelPublisherFormat string

@allowed([
  'GlobalStandard'
  'DataZoneStandard'
  'Standard'
  'GlobalProvisioned'
  'Provisioned'
])
@description('Model deployment SKU name')
param skuName string = 'GlobalStandard'

@description('Content filter policy name')
param contentFilterPolicyName string = 'Microsoft.DefaultV2'

@description('Model deployment capacity')
param capacity int = 1

resource modelDeployment 'Microsoft.CognitiveServices/accounts/deployments@2024-04-01-preview' = {
  name: '${accountName}/${modelName}'
  sku: {
    name: skuName
    capacity: capacity
  }
  properties: {
    model: {
      format: modelPublisherFormat
      name: modelName
      version: modelVersion
    }
    raiPolicyName: contentFilterPolicyName == null ? 'Microsoft.Nill' : contentFilterPolicyName
  }
}
```

Run the deployment:
```azurecli
RESOURCE_GROUP="<resource-group-name>"
ACCOUNT_NAME="<azure-ai-model-inference-name>"
MODEL_NAME="Phi-3.5-vision-instruct"
PROVIDER="Microsoft"
VERSION=2

az deployment group create \
    --resource-group $RESOURCE_GROUP \
    --template-file ai-services-deployment-template.bicep \
    --parameters accountName=$ACCOUNT_NAME modelName=$MODEL_NAME modelVersion=$VERSION modelPublisherFormat=$PROVIDER
```
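If you script deployments across many models, assembling the `--parameters` arguments programmatically keeps the template parameters and their values in one place. The helper below is an illustrative sketch, not part of the Azure CLI; it only renders `key=value` strings in the form the command above accepts.

```python
# Illustrative helper (not part of the Azure CLI): assemble the
# `--parameters` arguments for `az deployment group create` from a dict.
def bicep_parameters(params: dict) -> list[str]:
    """Render key=value pairs in the form the CLI accepts."""
    return [f"{key}={value}" for key, value in params.items()]

args = bicep_parameters({
    "accountName": "<azure-ai-model-inference-name>",
    "modelName": "Phi-3.5-vision-instruct",
    "modelVersion": 2,
    "modelPublisherFormat": "Microsoft",
})
print(args)
```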