As organizations embrace AI agents to streamline operations and enhance productivity, they also face new security risks that these tools can introduce.
Without strong visibility and controls, misconfigured AI agents can expose sensitive data, enable unauthorized access, escalate privileges, and trigger unintended actions that weaken your organization’s security posture.
To provide comprehensive threat protection, Microsoft Defender combines posture management, which minimizes the attack surface, with threat detection that operates under the assumption that a breach can occur.
AI agent protection features
Microsoft Defender protects you against security threats with comprehensive AI agent protection, offering proactive exposure management and advanced threat hunting with these features:
- Detects all of your AI agents created with Microsoft Copilot Studio or Azure AI Foundry.
- Collects audit logs for your AI agents, continuously monitors the agents for suspicious activity, and enables detections and alerts. To enable this monitoring, make sure that you meet the prerequisites described later in this article.
- For Copilot Studio AI agents, Microsoft Defender:
- Integrates data from Copilot Studio AI agents into advanced hunting for proactive threat detection. You can use this data to create custom queries and hunt for potential threats.
- Protects your environment in real time by blocking suspicious or harmful actions initiated by your Copilot Studio AI agents during agent runtime, and triggers an informative alert integrated into the Microsoft Defender XDR incidents and alerts experience.
- For Azure AI Foundry AI agents, Microsoft Defender:
- Monitors your AI agents for misconfigurations and vulnerabilities, and identifies potential attack paths.
- Provides security recommendations to improve the security posture of your AI agents.
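Once agent data flows into advanced hunting, you can query it with KQL. The following is a minimal sketch only; it assumes Copilot Studio agent activity surfaces in the `CloudAppEvents` table and that the `Application` field identifies the app, which may differ from the schemas available in your tenant:

```kusto
// Hypothetical example: summarize recent activity attributed to Copilot Studio.
// Table and column names are assumptions; verify them against the schema
// reference in the advanced hunting page of the Defender portal.
CloudAppEvents
| where Timestamp > ago(7d)
| where Application has "Copilot Studio"
| summarize EventCount = count() by ActionType, AccountDisplayName
| order by EventCount desc
```

You can adapt a query like this into a custom detection rule so that matching activity raises alerts automatically.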
Prerequisites
To enable AI agent inventory and detection, you must opt in to the preview features of:
- Microsoft Defender for Cloud Apps
- Microsoft Defender for Cloud
- Microsoft Defender XDR
Discover your AI agents with the AI agent inventory in the Defender portal (Preview)
Microsoft Defender detects all of the AI agents created with Microsoft Copilot Studio and Azure AI Foundry. This inventory helps security teams discover, catalog, and continuously monitor AI agents across your organization.
- To set up AI agent inventory for agents created in Copilot Studio, see Discover and protect your AI Agents (Preview).
- To set up AI agent inventory for agents created in Azure AI Foundry, see Microsoft Defender for Cloud AI Security posture management.
The AI agent inventory page
The AI agent inventory page in Microsoft Defender provides a centralized view of all detected AI agents, along with their key attributes and security status.
1. Sign in to the Microsoft Defender portal.
2. In the left navigation pane, select Assets > AI Agents. A list of all detected AI agents appears.
3. Select Copilot Studio or Azure AI Foundry to filter the list by the tool used to create the agent.
4. To see detailed information about a specific AI agent, select the agent from the list.
AI agent details
When you select an AI agent from the inventory, the Agent pane opens, providing detailed information about the selected agent. The information displayed varies based on whether the agent was created in Azure AI Foundry or Copilot Studio.
- Select Open agent page to open the AI Agent page.
- Select Go hunt to perform advanced hunting.
- Select View on map to see the agent's location and related attack paths.
These AI agent details are displayed:
| AI Agent Information | Description |
|---|---|
| ID | Unique identifier assigned to the agent in Azure AI Foundry. |
| Name | Display name of the agent. |
| Account | The account or tenant under which the AI agent operates, typically linked to organizational ownership. |
| Deployment | Details about where and how the AI agent is deployed (for example, cloud environment, on-premises, hybrid). |
| Attack paths | Potential routes or methods that could be exploited to compromise the AI agent or its environment. |
| Risk factors | Key vulnerabilities or conditions that increase the likelihood of security threats to the AI agent. |
| Creation time | Date and time when the agent was created. |
| Project | The associated project or initiative that the AI agent supports or belongs to. |
| Model | The underlying AI/ML model powering the agent, including version or architecture details. |
| Recommendations | Suggested actions or best practices to improve security, performance, or compliance for the AI agent. |