To maintain security and control when deploying agents in Microsoft 365, you must understand the underlying governance and administration models. Microsoft 365 offers two distinct architectural approaches, each with different security controls, consent mechanisms, and administrative capabilities.
You can deploy agents as Teams bots and in Microsoft 365 Copilot and Copilot Chat. You can also self-host agents on platforms you own, such as a web portal or mobile app.
The choice between these models affects how your organization manages data access, user permissions, and external service integrations. This article compares the Teams app model and Copilot agent model to help you understand their security implications and determine which approach best fits your organizational requirements.
Teams app model
The Teams app model implements outside-in security controls, where administrative consent occurs at the time of application acquisition, at the per-app and per-connector level. For example, Teams apps request consent on install, and users can't acquire Power Platform connectors that data policies block. This model provides granular control over external service access to Microsoft 365 tenant boundaries.
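As a sketch of where that consent surfaces, the following hypothetical Teams app manifest excerpt declares resource-specific permissions that an admin or team owner grants when the app is installed. The IDs and resource URI are placeholders, and fields unrelated to permissions are omitted.

```json
{
  "$schema": "https://developer.microsoft.com/en-us/json-schemas/teams/v1.16/MicrosoftTeams.schema.json",
  "manifestVersion": "1.16",
  "id": "00000000-0000-0000-0000-000000000000",
  "webApplicationInfo": {
    "id": "11111111-1111-1111-1111-111111111111",
    "resource": "api://contoso.example.com/11111111-1111-1111-1111-111111111111"
  },
  "authorization": {
    "permissions": {
      "resourceSpecific": [
        { "name": "ChannelMessage.Read.Group", "type": "Application" }
      ]
    }
  }
}
```

Because the permission set is fixed in the app package, administrators can review and block it before any tenant data is exposed.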
In this model, external services require explicit permission to access tenant data, even when they don't actively use granted permissions. Content and workloads are defined as granular objects, such as Teams messages, emails, and Microsoft Entra gated external data sources like ServiceNow.
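To illustrate what Entra-gated external data can look like, here's a hedged sketch of a Microsoft Graph connector item: the request body ingests an external record (for example, a ServiceNow ticket), and its ACL grants access only to a Microsoft Entra group. The group ID and ticket fields are placeholders.

```json
{
  "acl": [
    {
      "type": "group",
      "value": "22222222-2222-2222-2222-222222222222",
      "accessType": "grant"
    }
  ],
  "properties": {
    "title": "INC0012345: VPN access request"
  },
  "content": {
    "value": "Ticket description text goes here.",
    "type": "text"
  }
}
```

Sent as a PUT to /external/connections/{connectionId}/items/{itemId} in Microsoft Graph, this keeps the external record gated by the same Entra identities that govern the rest of the tenant.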
The service-to-service authentication mechanism reduces risks from DNS (Domain Name System) poisoning or domain hijacking attacks, because an attacker must compromise the application service itself to obtain application credentials. However, achieving appropriate content granularity for diverse customer requirements (per mailbox, per site, and so on) can be challenging.
Copilot agent model
The Copilot agent model uses inside-out security controls, where users provide consent at the point of invocation. Each time the agent sends data to an external service, the user is prompted to allow access to the specified endpoint before the information exits the tenant boundary. You can't isolate content or workload in this model because all content is of the Copilot Chat type once synthesized. Instead, the external URL becomes the granular object, or scope of control, enforced through link allowlisting and app package inspection.
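The following hypothetical API plugin manifest excerpt shows how that URL-level scope appears in practice: the only controllable surface is the endpoint the agent calls, declared in the runtime's spec. The function name and URLs are placeholders.

```json
{
  "schema_version": "v2.1",
  "name_for_human": "Contoso Orders",
  "description_for_human": "Look up order status in the Contoso order system.",
  "functions": [
    { "name": "getOrderStatus", "description": "Returns the status of an order." }
  ],
  "runtimes": [
    {
      "type": "OpenApi",
      "auth": { "type": "None" },
      "run_for_functions": ["getOrderStatus"],
      "spec": { "url": "https://api.contoso.example.com/openapi.json" }
    }
  ]
}
```

Administrators can inspect this package and allowlist or block the declared URL, but they can't scope which tenant content flows to it once a user consents.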
This model doesn't include service-level authentication credentials, so apply appropriate API hardening to the endpoints your agents call. Administrators can prevent a content type from leaving the tenant only by blocking its consumption in Copilot entirely through data loss prevention labels.
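One hardening step, sketched under the assumption that your API is described with OpenAPI: declare a security scheme so the endpoint rejects unauthenticated calls rather than trusting whatever traffic reaches it. The host name and path below are placeholders.

```json
{
  "openapi": "3.0.1",
  "info": { "title": "Contoso Orders API", "version": "1.0.0" },
  "servers": [{ "url": "https://api.contoso.example.com" }],
  "components": {
    "securitySchemes": {
      "bearerAuth": { "type": "http", "scheme": "bearer", "bearerFormat": "JWT" }
    }
  },
  "security": [{ "bearerAuth": [] }],
  "paths": {
    "/orders/{orderId}": {
      "get": {
        "summary": "Get order status",
        "parameters": [
          { "name": "orderId", "in": "path", "required": true, "schema": { "type": "string" } }
        ],
        "responses": { "200": { "description": "Order status" } }
      }
    }
  }
}
```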
The model enables granular consent at the per-message or per-invocation level, but it relies entirely on user decisions, with no opportunity for administrative intervention.
Next step
Understand agent data flows to identify security boundaries, trust requirements, and potential vulnerabilities in agent systems.