Important
Items marked (preview) in this article are currently in public preview. This preview is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.
As you build with state-of-the-art models and create agents and apps with them, Microsoft Foundry playgrounds provide an on-demand, zero-setup environment designed for rapid prototyping, API exploration, and technical validation before you commit a single line of code to your production codebase.
Highlights of the Foundry playgrounds experience
Some highlights of the Foundry playgrounds experience include:
- AgentOps support for evaluations and tracing in the Agents playground.
- Open in VS Code for the Chat and Agents playgrounds. This feature saves you time by automatically importing your endpoint and key from Foundry to VS Code for multilingual code samples.
- Images Playground 2.0 for models such as gpt-image-1, Stable Diffusion 3.5 Large, and FLUX.1-Kontext-pro.
- Video playground for Azure OpenAI Sora-2.
- Audio playground for models such as gpt-4o-audio-preview, gpt-4o-transcribe, and gpt-4o-mini-tts.
Tip
In the screenshot of the playground landing page, the left pane of the portal is customized to show the Playgrounds tab. To learn how to show other items in the left pane, see Customize the left pane.
Playgrounds as the prelude to production
Modern development involves working across multiple systems—APIs, services, SDKs, and data models—often before you're ready to fully commit to a framework, write tests, or spin up infrastructure. As the complexity of software ecosystems increases, the need for safe, lightweight environments to validate ideas becomes critical. The playgrounds are built to meet this need.
The Foundry playgrounds provide ready-to-use environments with all the necessary tools and features preinstalled, so you don't need to set up projects, manage dependencies, or solve compatibility issues. The playgrounds accelerate developer velocity by helping you validate API behavior, get to code faster, reduce the cost of experimentation and time to ship, accelerate integration, optimize prompts, and more.
Playgrounds also give you quick clarity when you have questions, providing answers in seconds rather than hours and letting you test and validate ideas before you commit to building at scale. For example, the playgrounds are ideal for quickly answering questions like:
- What's the minimal prompt I need to get the output I want?
- Will this logic work before I write a full integration?
- How does latency or token usage change with different configurations?
- What model provides the best price-to-performance ratio before I evolve it into an agent?
Open in VS Code capability
The Chat playground and Agents playground let you work in VS Code by using the Open in VS Code button. You can find this button through the Foundry extension in VS Code.
Available on the multilingual code samples, Open in VS Code automatically imports your code sample, API endpoint, and key to a VS Code workspace in an /azure environment. This functionality makes it easy to work in the VS Code IDE from the Foundry portal.
To use the Open in VS Code functionality from the chat and agents playgrounds, follow these steps:
Select Try the Chat playground to open it. Alternatively, you can follow these steps in the Agents playground by selecting Let's go on the Agents playground card.
If you don't have a deployment already, select Create new deployment and deploy a model such as gpt-4o-mini. Make sure your deployment is selected in the Deployment box.
Select View code to see the code sample.
Select Open in VS Code to open VS Code in a new tab of your browser window.
You're redirected to the /azure environment of VS Code where your code sample, API endpoint, and key are already imported from the Foundry playground.
Browse the INSTRUCTIONS.md file for instructions to run your model.
View your code sample in the run_model.py file.
View relevant dependencies in the requirements.txt file.
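The imported sample generally amounts to a single chat call against your deployment. The following is a comparable sketch, assuming the openai Python package and a gpt-4o-mini deployment; the environment variable names, API version, and deployment name are placeholders that the playground fills in for you when it generates the real sample.

```python
# Minimal sketch of a chat call against a Foundry model deployment.
# Assumes the `openai` package and an Azure OpenAI-compatible endpoint;
# replace the placeholder endpoint, key, and deployment name with your own values.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-10-21",  # assumption: any GA chat-completions API version works here
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # your deployment name
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what the Foundry playgrounds are for."},
    ],
)
print(response.choices[0].message.content)
```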
The Model playground and Agents playground let you work in VS Code by using the Open in VS Code for the Web button. You can find this button on the Code tab in the chat pane of the model playground.
Available on the multilingual code samples, Open in VS Code for the Web automatically imports your code sample, API endpoint, and key to a VS Code workspace in an /azure environment. This functionality makes it easy to work in the VS Code IDE from the Foundry portal.
Agents playground
The agents playground lets you explore, prototype, and test agents without running any code. From this page, you can quickly iterate and experiment with new ideas.
To get started with the agents playground, see Quickstart: Create a new agent. To learn where the playground fits in your workflow, see Understanding the agent development lifecycle.
Chat playground
The chat playground is the place to test the latest reasoning models from providers including Azure OpenAI, DeepSeek, and Meta. For all reasoning models, the chat playground provides a chain-of-thought summary drop-down that lets you see how the model thinks through its response before it shares the output.
To learn more about the chat playground, see the Quickstart: Get answers in the chat playground.
Model playground
When you deploy a model in the Microsoft Foundry portal, you immediately land on its playground. The model playground is an interactive experience designed for developers to test and experiment with the latest models from providers like Azure OpenAI, DeepSeek, xAI, and Meta. The playground gives you full control over model behavior, safety, and deployment so that you can tune system prompts, compare model outputs in real time, or integrate tools like web search and code execution.
The playground is designed for fast iteration and production readiness. It supports everything from prototyping to performance benchmarking. The playground prepares you to use your model in a production workflow, upgrade your model into an agent, and continue to prototype in the agent playground with additional tools, knowledge, and memory before you deploy it as an agentic web application.
Benefits of using the model playground
Full-stack experimentation and control: Configure parameters (such as temperature, top_p, max_tokens), inject system prompts, and enable advanced tools like web search, file search, and code interpreter, all within a single environment. This setup lets you precisely tune model behavior and rapidly iterate on prompt engineering, grounding, and RAG workflows before you upgrade your model into an agent.
Built-in safety and governance: Assign or create guardrails to protect against jailbreaks, indirect prompt injections, and unsafe outputs. This integrated safety layer ensures you can validate compliance and responsible AI behaviors in a controlled, testable sandbox, without needing to wire external moderation logic.
Comparative and deployable by design: Compare up to three models in parallel with synced input/output to benchmark response quality. Export multilingual code samples, grab endpoints and keys, and open in VS Code for immediate integration, bridging experimentation to production in one streamlined developer workflow.
Compare models
Compare mode enables developers to run controlled, parallel evaluations across up to three models simultaneously, using a synchronized input stream. Each model receives the exact same prompt context, system message, and parameter configuration, ensuring consistent test conditions for output benchmarking. Responses stream in real time, allowing developers to measure and visualize differences in latency, token throughput, and response fidelity side-by-side.
To use compare mode from the playground of a deployed model:
- Select Compare models in the upper-right corner.
- Select up to two more models from existing or new deployments. Chat windows for the selected models open up side-by-side in the playground with synced prompt bars and setup. You can switch off sync from the Setup pane for each model, if needed.
- Enter your prompt in any of the prompt bars and see the prompt simultaneously appear in the others.
- Submit the prompt to see the output from each model simultaneously and compare the quality of the responses.
- Switch to the Code tab in the chat pane of each model to see multilingual code samples.
- For your preferred model, select either Open in VS Code for the Web from the Code tab to continue development work or Save as agent to continue prototyping in the agent playground.
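If you later want to repeat a similar side-by-side check outside the portal, a small script can send an identical prompt to each deployment and record latency and token usage. This is a minimal sketch, assuming the openai Python package and that the listed deployment names exist in your project; it isn't the code the playground exports.

```python
# Sketch: send one prompt to several deployments and compare latency and token usage.
# The deployment names below are assumptions; substitute your own.
import os
import time
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-10-21",
)

prompt = "Explain retrieval-augmented generation in two sentences."
deployments = ["gpt-4o-mini", "gpt-4o"]  # hypothetical deployment names

for deployment in deployments:
    start = time.perf_counter()
    response = client.chat.completions.create(
        model=deployment,
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,   # keep parameters identical across models
        max_tokens=200,
    )
    elapsed = time.perf_counter() - start
    usage = response.usage
    print(f"{deployment}: {elapsed:.2f}s, "
          f"{usage.prompt_tokens} prompt / {usage.completion_tokens} completion tokens")
    print(response.choices[0].message.content[:200], "\n")
```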
Generate and interpret code
With code interpreter, you can extend model capabilities beyond text generation by enabling in-line code execution within the playground. When activated, supported models can write, run, and debug code directly in a secure, sandboxed environment. This environment is ideal for performing calculations, data transformations, plotting visualizations, or validating logic.
To use code interpreter from the playground of a deployed model:
Expand the Tools section in the deployed model's playground.
Tip
The Tools section isn't visible in the playground if you use compare mode to run parallel evaluations on models. You first have to close the other models that you're using for comparison before you can see the detailed playground that includes tools and other options for your deployed model.
Select Add > Code interpreter, and attach your code files for the code interpreter.
Use the playground to ask questions, interpret, or streamline your code. For example, "How should I make the attached code files more efficient?"
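The View code sample in the playground shows the exact call for your deployment. For orientation, one way to drive a code-interpreter-style flow from code is the Assistants API in the openai Python package, sketched below under the assumption that your Azure OpenAI resource supports Assistants and that gpt-4o-mini is one of your deployment names.

```python
# Sketch: run a code interpreter request through the Assistants API.
# Assumes the `openai` package and an Azure OpenAI resource with Assistants enabled;
# the deployment name and API version are placeholders.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-05-01-preview",
)

assistant = client.beta.assistants.create(
    model="gpt-4o-mini",  # your deployment name
    instructions="You are a coding assistant. Run code when it helps answer the question.",
    tools=[{"type": "code_interpreter"}],
)

thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Compute the first 15 Fibonacci numbers and return them as a Python list.",
)

run = client.beta.threads.runs.create_and_poll(thread_id=thread.id, assistant_id=assistant.id)
if run.status == "completed":
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    print(messages.data[0].content[0].text.value)  # most recent (assistant) message
else:
    print("Run ended with status:", run.status)
```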
Audio playground
The audio playground (preview) lets you use text-to-speech and transcription capabilities with the latest audio models from Azure OpenAI.
To try the text-to-speech capability, follow these steps:
Select Try the Audio playground to open it.
If you don't have a deployment already, select Create new deployment and deploy a model such as gpt-4o-mini-tts. Make sure your deployment is selected in the Deployment box.
Input a text prompt.
Adjust model parameters such as voice and response format.
Select Generate to receive a speech output with playback controls for play, rewind, fast-forward, playback speed, and volume.
Download the audio file to your local computer.
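The playground's code sample shows the exact request for your deployment. For orientation, a text-to-speech call with the openai Python package and a gpt-4o-mini-tts deployment generally takes the following shape; the voice, API version, and output file name are placeholders.

```python
# Sketch: generate speech from text and save it locally.
# Assumes the `openai` package and a gpt-4o-mini-tts deployment; names are placeholders.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2025-03-01-preview",  # assumption: a version that supports gpt-4o-mini-tts
)

speech = client.audio.speech.create(
    model="gpt-4o-mini-tts",  # your deployment name
    voice="alloy",            # hypothetical voice choice
    input="Welcome to the Foundry audio playground.",
)

with open("speech.mp3", "wb") as f:
    f.write(speech.content)   # response body is the audio bytes (MP3 by default)
```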
To try the transcription capability, follow these steps:
If you don't have a deployment already, select Create new deployment and deploy a model such as gpt-4o-transcribe. Make sure your deployment is selected in the Deployment box.
(Optional) Include a phrase list to guide the transcription of your audio input.
Input an audio file by either uploading one or recording audio from the prompt bar.
Select Generate transcription to send the audio input to the model and receive a transcribed output in both text and JSON formats.
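Similarly, a transcription call with the openai Python package and a gpt-4o-transcribe deployment generally looks like the sketch below; the file name, API version, and optional prompt hint are placeholders.

```python
# Sketch: transcribe a local audio file with a gpt-4o-transcribe deployment.
# Assumes the `openai` package; file name and deployment name are placeholders.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2025-03-01-preview",  # assumption: a version that supports gpt-4o-transcribe
)

with open("meeting.wav", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="gpt-4o-transcribe",  # your deployment name
        file=audio_file,
        # prompt="Foundry, playground, Azure",  # optional hint, similar to a phrase list
    )

print(transcript.text)
```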
Video playground
The video playground (preview) is your rapid iteration environment for exploring, refining, and validating generative video workflows. It's designed for developers who need to go from idea to prototype with precision, control, and speed. The playground gives you a low-friction interface to test prompt structures, assess motion fidelity, evaluate model consistency across frames, and compare outputs across models—without writing boilerplate or wasting compute cycles. It's also a great demo interface for your chief product officer and engineering VP.
All model endpoints are integrated with Azure AI Content Safety. As a result, the video playground filters out harmful and unsafe content before it appears. If content moderation policies flag your text prompt or video generation, you get a warning notification.
You can use the video playground with the Azure OpenAI Sora-2 model.
Tip
See the DevBlog for Sora and video playground in Foundry.
Follow these steps to use the video playground:
Caution
Videos you generate are retained for 24 hours due to data privacy. Download videos to your local computer for longer retention.
Select Try the Video playground to open it.
If you don't have a deployment already, select Deploy now from the top right side of the homepage and deploy the sora-2 model.
On the homepage of the video playground, get inspired by pre-built prompts sorted by the industry filter. From here, you can view the videos in full display and copy the prompt from the bottom right corner of a video to build from it.
Copy the prompt to paste it in the prompt bar. Adjust key controls (for example, aspect ratio or resolution) to better understand the model's responsiveness and constraints.
Select Generate to generate a video based on the copied prompt.
Rewrite your text prompt syntax with gpt-4o by using Re-write with AI.
Switch on the Start with an industry system prompt capability, choose an industry, and specify the change required for your original prompt.
Select Update to update the prompt, and then select Generate to create a new video.
Go to the Generation history tab to review your generations as a grid or list view. When you select the videos, you open them in full screen mode for full immersion. Visually observe outputs across prompt tweaks or parameter changes.
In full screen mode, edit the prompt and submit it for regeneration.
Either in full screen mode or through the options button that appears when you hover over the video, download the videos to your local computer, view the video generation information tag, view code, or delete the video.
Select View code from the options menu to view contextual sample code for your video generations in several languages, including Python, JavaScript, C#, JSON, Curl, and Go.
Port the code samples to production by copying them into VS Code.
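The View code samples show the exact request shape for your sora-2 deployment. For rough orientation only, the sketch below assumes the Videos API surface that recent versions of the openai Python package expose (videos.create, videos.retrieve, videos.download_content); whether your endpoint and API version support this surface is an assumption, so prefer the sample the playground generates if it differs.

```python
# Rough sketch: generate a short video with a sora-2 deployment and download the result.
# Assumes a recent `openai` package that exposes the Videos API and an endpoint that supports it;
# the deployment name, API version, polling interval, and output file name are placeholders.
import os
import time
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="preview",  # assumption: a preview API version that enables video generation
)

video = client.videos.create(
    model="sora-2",  # your deployment name
    prompt="A drone shot over a coastal city at sunrise, cinematic, 5 seconds.",
)

# Video generation is asynchronous; poll until the job finishes.
while video.status in ("queued", "in_progress"):
    time.sleep(10)
    video = client.videos.retrieve(video.id)

if video.status == "completed":
    content = client.videos.download_content(video.id)
    content.write_to_file("generated_video.mp4")
else:
    print("Generation ended with status:", video.status)
```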
Follow these steps to use the video playground:
Caution
Videos you generate are retained for 24 hours due to data privacy. Download videos to your local computer for longer retention.
- Select Build from the upper-right navigation.
- Select Models from the left pane.
- Select a video generation model, such as sora-2, from your list of deployed models. If you don't have a deployment already, select Deploy base model from the top right side of the page and deploy the sora-2 model.
- Enter your text prompt: Start with any text prompt for the video you want to generate. For models that enable image-to-video generation, upload an image attachment to the prompt bar and generate the video.
- Explore the model API-specific generation controls: Adjust key controls (for example, aspect ratio and duration) for a deeper understanding of specific model responsiveness and constraints.
- Side-by-side observations in grid view: Visually observe outputs across prompt tweaks or parameter changes.
- Port to production with multi-lingual code samples: Use multi-language code samples with View Code. Video playground is your launchpad to development work in VS Code.
What to validate when experimenting in video playground
When you use the video playground to plan your production workload, explore and validate the following attributes:
Prompt-to-Motion Translation
- Does the video model interpret your prompt in a way that makes logical and temporal sense?
- Is motion coherent with the described action or scene?
Frame Consistency
- Do characters, objects, and styles remain consistent across frames?
- Are there visual artifacts, jitter, or unnatural transitions?
Scene Control
- How well can you control scene composition, subject behavior, or camera angles?
- Can you guide scene transitions or background environments?
Length and Timing
- How do different prompt structures affect video length and pacing?
- Does the video feel too fast, too slow, or too short?
Multimodal Input Integration
- What happens when you provide a reference image, pose data, or audio input?
- Can you generate video with lip-sync to a given voiceover?
Post-Processing Needs
- What level of raw fidelity can you expect before you need editing tools?
- Do you need to upscale, stabilize, or retouch the video before using it in production?
Latency and Performance
- How long does it take to generate video for different prompt types or resolutions?
- What's the cost-performance tradeoff of generating 5-second versus 15-second clips?
Images playground
The images playground is ideal for developers who build image generation flows. This playground is a full-featured, controlled environment for high-fidelity experiments designed for model-specific APIs to generate and edit images.
Tip
See the 60-second reel of the Images playground for gpt-image-1 and our DevBlog for Images Playground in Foundry.
You can use the images playground with these models:
- gpt-image-1 and dall-e-3 from Azure OpenAI.
- Stable Diffusion 3.5 Large, Stable Image Core, Stable Image Ultra from Stability AI.
- FLUX.1-Kontext-pro and FLUX-1.1-pro from Black Forest Labs.
Follow these steps to use the images playground:
Select Try the Images playground to open it.
If you don't have a deployment already, select Create a deployment and deploy a model such as gpt-image-1.
Enter your text prompt: Start with any text prompt for the image you want to generate. For models that enable image-to-image generation, upload an image attachment to the prompt bar.
Explore the model API-specific generation controls after model deployment: Adjust key controls (for example, number of variations, quality, size, image format) to better understand the model's responsiveness and constraints.
Select Generate.
Side-by-side observations in grid view: Visually observe outputs across prompt tweaks or parameter changes.
Transform with API tooling: Inpainting with text transformation is available for gpt-image-1. Alter parts of your original image with inpainting selection. Use text prompts to specify the change.
Port to production with multi-lingual code samples: Use Python, Java, JavaScript, C# code samples with View Code. Images playground is your launchpad to development work in VS Code.
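The View Code samples give you the exact request for your deployment. As rough orientation, an image generation call with the openai Python package and a gpt-image-1 deployment generally looks like the following sketch; the deployment name, size, API version, and output path are placeholders.

```python
# Sketch: generate an image with a gpt-image-1 deployment and save it locally.
# Assumes the `openai` package; deployment name, size, and file name are placeholders.
import base64
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2025-04-01-preview",  # assumption: a version that supports gpt-image-1
)

result = client.images.generate(
    model="gpt-image-1",  # your deployment name
    prompt="A watercolor illustration of a lighthouse at dusk",
    size="1024x1024",
    n=1,
)

# gpt-image-1 returns base64-encoded image data.
image_bytes = base64.b64decode(result.data[0].b64_json)
with open("lighthouse.png", "wb") as f:
    f.write(image_bytes)
```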
Follow these steps to use the images playground:
- Select Build from the upper-right navigation.
- Select Models from the left pane.
- Select an image generation model, such as gpt-image-1, from your list of deployed models. If you don't have a deployment already, select Deploy base model from the top right side of the page and deploy the gpt-image-1 model.
- Enter your text prompt: Start with any text prompt for the image you want to generate. For models that enable image-to-image generation, upload an image attachment to the prompt bar and generate the image.
- Explore the model API-specific generation controls: Adjust key controls (for example, number of variations and aspect ratio) for a deeper understanding of specific model responsiveness and constraints.
- Side-by-side observations in grid view: Visually observe outputs across prompt tweaks or parameter changes.
- Transform with API tooling: Inpainting with text transformation is available for gpt-image-1. Alter parts of your original image with inpainting selection. Use text prompts to specify the change.
- Port to production with multi-lingual code samples: Use multi-language code samples with View Code. Images playground is your launchpad to development work in VS Code.
What to validate when experimenting in images playground
By using the images playground, you can explore and validate the following aspects as you plan your production workload:
Prompt Effectiveness
- What kind of visual output does this prompt generate for my enterprise use case?
- How specific or abstract can my language be and still get good results?
- Does the model understand style references like "surrealist" or "cyberpunk" accurately?
Stylistic Consistency
- How do I maintain the same character, style, or theme across multiple images?
- Can I iterate on variations of the same base prompt with minimal drift?
Parameter Tuning
- What's the effect of changing model parameters like guidance scale, seed, steps, and others?
- How can I balance creativity versus prompt fidelity?
Model Comparison
- How do results differ between models, such as SDXL versus DALL·E?
- Which model performs better for realistic faces versus artistic compositions?
Composition Control
- What happens when I use spatial constraints like bounding boxes or inpainting masks?
- Can I guide the model toward specific layouts or focal points?
Input Variation
- How do slight changes in prompt wording or structure impact results?
- What's the best way to prompt for symmetry, specific camera angles, or emotions?
Integration Readiness
- Will this image meet the constraints of my product's UI, including aspect ratio, resolution, and content safety?
- Does the output conform to brand guidelines or customer expectations?