Remote Power BI MCP server tools

The remote Power BI MCP server provides tools that enable AI agents to chat with data in Power BI semantic models using natural language. Through these tools, AI assistants can retrieve model schemas, generate DAX queries, and execute queries to deliver insights from your data.

Important

The remote Power BI MCP server is in preview. Tool definitions, request formats, and response schemas may change as we enhance capabilities.

Note

The remote Power BI MCP server isn't a traditional REST API. Access it through MCP-compatible agents and frameworks rather than making direct HTTP calls. The server implements the Model Context Protocol specification, which provides a standardized interface for AI agents to discover and invoke tools.

Available tools

The MCP server provides the following tools for AI agents to invoke. For connection details, see Get started with the remote Power BI MCP server.
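
The exact tool names and input schemas are discoverable at runtime through the standard MCP list tools operation. The following Python sketch uses the open-source MCP Python SDK to enumerate the tools exposed by the remote server; the endpoint URL, the access token, and the way credentials are passed are placeholders and assumptions, so substitute the connection details from the Get started article:

import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

MCP_URL = "<remote-power-bi-mcp-endpoint>"        # placeholder; see the Get started article
ACCESS_TOKEN = "<microsoft-entra-access-token>"   # placeholder; authentication method is an assumption

async def list_power_bi_tools() -> None:
    # Open a streamable HTTP connection to the remote MCP server (assumed transport).
    async with streamablehttp_client(
        MCP_URL, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"}
    ) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

asyncio.run(list_power_bi_tools())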

Get Semantic Model Schema

Retrieves comprehensive metadata for a semantic model, including:

  • Tables, columns, measures, and relationships
  • Data types and hierarchies
  • AI-optimized metadata when configured by the model author

Required input: Semantic model ID
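
As a rough illustration, an MCP client can invoke the tool as follows. The tool name and argument key shown here are assumptions for the sketch; confirm the actual identifiers from the server's tool list (see the discovery sketch earlier). The session object is an initialized ClientSession from that sketch, and the model ID is a made-up GUID:

# Sketch only: the tool name and argument key are assumed, not confirmed.
schema = await session.call_tool(
    "get_semantic_model_schema",   # assumed tool name; check list_tools() output
    arguments={"semanticModelId": "11111111-2222-3333-4444-555555555555"},
)
print(schema.content)   # content items describing tables, columns, measures, relationships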

Generate Query

Generates optimized DAX queries from natural language prompts. The tool uses the same DAX generation engine as Copilot in Power BI, so the queries it creates follow best practices.

Required inputs:

  • Semantic model ID
  • Natural language question or prompt
  • Relevant schema context as determined by the agent (tables, columns, measures)

Requirements:

  • Consumes Copilot capacity when invoked, because query generation relies on Copilot in Power BI

Note

If you prefer not to consume Copilot capacity, disable this tool in your MCP client configuration and rely on your client's LLM to generate DAX directly.
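
As a sketch, and assuming the same initialized session as in the discovery example, an agent might invoke the tool like this. The tool name, argument keys, and the shape of the schema context are assumptions; confirm them against the server's tool list:

# Sketch only: tool name and argument keys are assumed, not confirmed.
generated = await session.call_tool(
    "generate_query",   # assumed tool name; check list_tools() output
    arguments={
        "semanticModelId": "11111111-2222-3333-4444-555555555555",
        "prompt": "Total sales by region for fiscal year 2024",
        # Schema context the agent judged relevant (assumed key and shape):
        "schemaContext": ["Sales[Amount]", "Region[RegionName]", "Date[FiscalYear]"],
    },
)
dax_query = generated.content   # content items holding the generated DAX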

Execute Query

Executes a DAX query against a semantic model and returns the results to the AI agent.

Required inputs:

  • Semantic model ID
  • DAX query expression

Permissions:

  • Users must have at least Build permissions on the semantic model
  • Queries execute in the context of the authenticated user

Security considerations:

  • Row-level security (RLS) is enforced when queries run under user authentication
  • RLS is currently not supported when using Service Principal authentication

See also: Execute Queries REST API
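
As a sketch, again assuming an initialized session and assumed tool and argument names, an agent might execute a generated or hand-written DAX query like this:

# Sketch only: tool name and argument keys are assumed, not confirmed.
dax = """
EVALUATE
SUMMARIZECOLUMNS(
    'Region'[RegionName],
    "Total Sales", SUM('Sales'[Amount])
)
"""
results = await session.call_tool(
    "execute_query",   # assumed tool name; check list_tools() output
    arguments={
        "semanticModelId": "11111111-2222-3333-4444-555555555555",
        "query": dax,
    },
)
for item in results.content:
    print(item)   # query results come back as MCP content items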

Best practices

Store semantic model IDs for reuse

Each tool requires a semantic model ID. Rather than asking users to provide the ID in every chat session, store frequently used model IDs where your agent can access them (see the loader sketch after this list). For example:

  • VS Code: Create a semantic-model-ids.json file in your workspace
  • Custom agents: Store IDs in environment variables or configuration files
  • Multi-model scenarios: Maintain a catalog mapping friendly names to model IDs
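
A minimal sketch of that convention follows; the file name, shape, and helper are illustrative, not something the MCP server requires:

# semantic-model-ids.json (illustrative):
# {
#   "sales": "11111111-2222-3333-4444-555555555555",
#   "finance": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
# }
import json
from pathlib import Path

def resolve_model_id(friendly_name: str, catalog_path: str = "semantic-model-ids.json") -> str:
    # Look up a semantic model ID by a human-friendly name.
    catalog = json.loads(Path(catalog_path).read_text())
    return catalog[friendly_name]

model_id = resolve_model_id("sales")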

Find your semantic model ID

To get a semantic model ID from the Power BI service:

  1. Sign in to Power BI
  2. Navigate to the workspace containing your semantic model
  3. Select the semantic model to open its details page
  4. Copy the semantic model ID from the URL

Semantic model URLs follow this format:

https://app.powerbi.com/groups/{workspaceId}/datasets/{semanticModelId}
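
If you script this step, the semantic model ID is the GUID that follows the datasets segment. A minimal Python sketch, using a made-up URL:

import re

url = "https://app.powerbi.com/groups/f3a1c0de-0000-0000-0000-000000000001/datasets/11111111-2222-3333-4444-555555555555"
match = re.search(r"/datasets/([0-9a-fA-F]{8}(?:-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12})", url)
semantic_model_id = match.group(1) if match else None
print(semantic_model_id)   # 11111111-2222-3333-4444-555555555555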

You can also retrieve semantic model IDs programmatically using the Power BI REST API.
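
For example, the Get Datasets In Group REST API lists the semantic models (datasets) in a workspace together with their IDs. A minimal sketch, assuming you already have a Microsoft Entra access token with Power BI API permissions; the workspace ID and token below are placeholders:

import requests

workspace_id = "f3a1c0de-0000-0000-0000-000000000001"   # placeholder workspace (group) ID
access_token = "<microsoft-entra-access-token>"          # placeholder token

response = requests.get(
    f"https://api.powerbi.com/v1.0/myorg/groups/{workspace_id}/datasets",
    headers={"Authorization": f"Bearer {access_token}"},
    timeout=30,
)
response.raise_for_status()
for dataset in response.json()["value"]:
    print(dataset["id"], dataset["name"])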

Limitations and considerations

Authentication and security

  • Row-level security (RLS): Currently not enforced when using Service Principal authentication. When a service principal executes queries, it has access to all data the principal is authorized to access. Carefully review security implications before exposing service principal-authenticated agents to end users.
  • Tenant settings: Administrators must enable "Users can use the Power BI Model Context Protocol server endpoint (preview)" for your organization.

Query generation

  • Complex DAX: Highly complex calculations or nested logic may not translate perfectly from natural language prompts.
  • Model optimization: Query generation quality improves significantly when you prepare your data for AI.

Performance

  • Model design impact: Query execution performance depends on semantic model design, size, and optimization.
  • Large schemas: Models with hundreds of tables or thousands of columns may result in large schema payloads.
  • Query complexity: Complex DAX queries may take longer to generate and execute.

Context and conversation

  • Context window limits: There are limits to how much context can be maintained across conversation turns, depending on the AI model used by your MCP client.
  • Stateless queries: Each query executes independently. The server doesn't maintain query state between requests.