Use Microsoft Purview capabilities to develop and deploy secure and compliant Microsoft Foundry or custom AI apps

Security and compliance are essential for enterprise AI adoption. Enterprise customers expect AI applications to be secure by design and to comply with both internal policies and external regulations.

Failure to meet these expectations can result in:

  • Rejection during security and compliance reviews.
  • Data leaks or other harmful AI behavior.
  • Loss of customer trust and reduced adoption.

To scale AI responsibly, organizations must build on a foundation of strong data security and governance. Because AI applications increasingly process sensitive enterprise data, developers should incorporate security and compliance controls early in the development lifecycle—during design and implementation.

Microsoft Purview provides APIs and services that enable Microsoft Foundry and other AI platforms to integrate enterprise-grade data security and governance controls into custom AI applications and agents, regardless of model or deployment platform.

This article explains how developers can integrate with Microsoft Purview to support data security and compliance requirements in AI apps and agents built with Microsoft Foundry, Agent Framework, or custom code. The following table summarizes the Microsoft Purview scenarios and the integration options developers can choose for each scenario.

| Scenario | Microsoft Foundry | Agent Framework | Microsoft Purview APIs |
|---|---|---|---|
| Govern data used at runtime by AI apps | Supported | Supported | Supported |
| Protect against data leaks and insider risks | Not supported | Supported | Supported |
| Prevent data oversharing | Not supported | Not supported | Supported |

Govern data used at runtime by AI apps

The following Microsoft Purview features can be used to provide data governance for AI apps:

  • Real-time analytics for sensitive data usage, risky behaviors, and unethical AI interactions.
  • Auditing for traceability.
  • Communication Compliance to detect harmful or unauthorized content.
  • Data Lifecycle Management and eDiscovery for legal and regulatory needs.

You can integrate these Microsoft Purview capabilities into your Microsoft Foundry or custom AI app using the following options:

  • Native integration of Microsoft Purview into Microsoft Foundry (recommended): When using models in Microsoft Foundry, use the Microsoft Purview setting for audit and related governance outcomes that is embedded in Microsoft Foundry. Azure admins can turn on the setting for any given Azure subscription; no developer action is required. The setting sends data from all Azure AI-based applications running in that subscription to Microsoft Purview to support governance and compliance outcomes. For more information on turning on the setting, see Enable Data Security for Azure AI with Microsoft Purview.

  • Use the APIs: Microsoft Foundry developers can use the Microsoft Purview APIs in the Microsoft Graph to programmatically send prompt and response data from their AI apps into Microsoft Purview. An illustrative sketch of this pattern appears after this list.

    Use these APIs for API-based integration:

    For more information, see Use Microsoft Purview APIs in the Microsoft Graph.

  • Agent Framework: If you're building agents with Microsoft Agent Framework, you can add Microsoft Purview capabilities that bring data security and compliance to your agents. In your agent's middleware pipeline, add Microsoft Purview policy middleware to:

    • Intercept and pass prompts and responses to Microsoft Purview.
    • Enforce your organization’s DLP policies in Microsoft Purview.

    For more information, see Use Microsoft Purview SDK with Agent Framework.
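For illustration, the following Python sketch shows the API-based option described above: capturing a prompt and response pair in the app and posting it to Microsoft Purview through Microsoft Graph. The endpoint URL and payload shape are assumptions for illustration only; confirm the actual paths and schemas in Use Microsoft Purview APIs in the Microsoft Graph.

```python
# Minimal sketch: send one prompt/response pair from an AI app to Microsoft Purview
# through Microsoft Graph so the interaction can be audited and governed.
# Assumptions to verify against the Graph API reference:
#   - GRAPH_ACTIVITY_URL is a placeholder for the Purview content-activity endpoint.
#   - The payload shape below is illustrative, not the published schema.
import requests

GRAPH_ACTIVITY_URL = (  # assumed endpoint; confirm in the Microsoft Graph docs
    "https://graph.microsoft.com/beta/me/dataSecurityAndGovernance/activities/contentActivities"
)


def log_ai_interaction(access_token: str, prompt: str, response: str, app_name: str) -> None:
    """Send a prompt and its response to Microsoft Purview for audit and analytics."""
    payload = {  # illustrative payload; align field names with the published schema
        "application": app_name,
        "interaction": {"prompt": prompt, "response": response},
    }
    result = requests.post(
        GRAPH_ACTIVITY_URL,
        headers={
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/json",
        },
        json=payload,
        timeout=30,
    )
    result.raise_for_status()
```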

Protect against data leaks and insider risks

Developers can use Agent Framework or Microsoft Purview APIs to enforce Microsoft Purview Data Loss Prevention (DLP) policies in their applications.

Agent Framework: If you're building agents with Microsoft Agent Framework, you can add Microsoft Purview capabilities that bring data security and compliance to your agents. In your agent's middleware pipeline, add Microsoft Purview policy middleware to:

  • Intercept and pass prompts and responses to Microsoft Purview.
  • Enforce your organization’s DLP policies in Microsoft Purview.

For more information, see Use Microsoft Purview SDK with Agent Framework.
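To make the middleware pattern concrete, the following Python sketch shows a policy middleware that intercepts the prompt and the response and asks Microsoft Purview whether either should be blocked. The names check_with_purview and call_next are placeholders; the actual Microsoft Agent Framework middleware interface and the Purview SDK helpers are described in the article linked above.

```python
# Minimal sketch of the Purview policy middleware pattern for an agent pipeline.
# The real Microsoft Agent Framework middleware interface and the Purview SDK
# helpers differ in detail; check_with_purview and call_next are placeholders.
from typing import Awaitable, Callable


async def purview_policy_middleware(
    prompt: str,
    call_next: Callable[[str], Awaitable[str]],
    check_with_purview: Callable[[str], Awaitable[bool]],
) -> str:
    """Intercept the prompt and response, pass both to Microsoft Purview,
    and enforce the organization's DLP policies before anything reaches the user."""
    # 1. Evaluate the inbound prompt against Purview DLP policies.
    if not await check_with_purview(prompt):
        return "This request was blocked by your organization's data loss prevention policy."

    # 2. Let the rest of the agent pipeline produce a response.
    response = await call_next(prompt)

    # 3. Evaluate the outbound response before returning it to the user.
    if not await check_with_purview(response):
        return "The response was blocked by your organization's data loss prevention policy."
    return response
```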

Use the APIs: This option lets applications evaluate and enforce behavior in AI apps and agents according to the policies set in Microsoft Purview. For example, you can protect sensitive information shared with large language models (LLMs), or control sensitive information shared with risky users in your AI apps.
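For illustration, the following Python sketch shows one way an app might ask Microsoft Purview to evaluate content against DLP policies before sending it to an LLM or returning it to a user. The endpoint URL and response fields are assumptions for illustration only; confirm the exact API names, paths, and schemas in the Microsoft Purview API documentation.

```python
# Minimal sketch: ask Microsoft Purview to evaluate content against the
# organization's DLP policies so the app can decide whether to block it.
# Assumptions to verify against the API reference:
#   - GRAPH_PROCESS_CONTENT_URL is a placeholder for the content-evaluation endpoint.
#   - The request and response shapes below are illustrative, not the published schema.
import requests

GRAPH_PROCESS_CONTENT_URL = (  # assumed endpoint; confirm in the Microsoft Graph docs
    "https://graph.microsoft.com/beta/me/dataSecurityAndGovernance/processContent"
)


def is_blocked_by_dlp(access_token: str, text: str) -> bool:
    """Return True when Purview's evaluation indicates the content should be blocked."""
    payload = {"content": text}  # illustrative; align with the published schema
    result = requests.post(
        GRAPH_PROCESS_CONTENT_URL,
        headers={
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/json",
        },
        json=payload,
        timeout=30,
    )
    result.raise_for_status()
    # Illustrative check: treat any returned policy action named "block" as a block decision.
    actions = result.json().get("policyActions", [])
    return any(action.get("action") == "block" for action in actions)
```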

Use these APIs for API-based integration:

Important

To set up a new DLP policy in Microsoft Purview to test DLP integration, run the New-DlpComplianceRule cmdlet. For more information, see New-DlpComplianceRule.

Azure AI code sample:

Prevent data oversharing

Data oversharing is restricted when sensitivity labels are applied to data. Developers can use either the Microsoft Purview APIs or Azure AI Search to honor sensitivity labels applied to data that LLMs use to generate responses. Both options ensure that AI-generated responses respect access controls, prevent data oversharing, and limit users to content they're authorized to view, just as they would outside the AI app environment.

When Microsoft Purview sensitivity label indexing is enabled, Azure AI Search checks the document label metadata at query time, applies access filters based on Microsoft Purview policies, and returns only the results that the requesting user is allowed to access.
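As an illustration of query-time filtering, the following Python sketch uses the Azure AI Search client to trim results with an OData security filter before they are used for grounding. The group_ids field is an assumed index field shown for illustration only; when Microsoft Purview sensitivity label indexing is enabled, Azure AI Search applies comparable label-based filters for you, as described above.

```python
# Minimal sketch of query-time security trimming with Azure AI Search.
# Assumption: the index has a filterable "group_ids" field listing the groups
# allowed to read each document. With Purview sensitivity label indexing enabled,
# Azure AI Search can apply comparable label-based filtering automatically.
from azure.identity import DefaultAzureCredential
from azure.search.documents import SearchClient

search_client = SearchClient(
    endpoint="https://<your-search-service>.search.windows.net",  # placeholder
    index_name="enterprise-docs",                                  # placeholder
    credential=DefaultAzureCredential(),
)


def search_as_user(query: str, user_group_ids: list[str]) -> list[dict]:
    """Return only the documents the requesting user's groups are allowed to see."""
    # Trim results to the user's groups before they are handed to the LLM for grounding.
    allowed = ",".join(user_group_ids)
    results = search_client.search(
        search_text=query,
        filter=f"group_ids/any(g: search.in(g, '{allowed}'))",
        top=5,
    )
    return [dict(doc) for doc in results]
```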

You can use any of the following APIs when building this scenario: