semantic_model Package
Functions
cancel_refresh
Cancel a specific refresh of a semantic model.
cancel_refresh(dataset: str | UUID, request_id: str | None = None, workspace: str | UUID | None = None)
Parameters

| Name | Description |
|---|---|
| dataset<br>Required | Name or ID of the semantic model. |
| request_id | The request ID of a semantic model refresh. Defaults to the latest active refresh of the semantic model. Default value: None |
| workspace | The Fabric workspace name or a UUID object containing the workspace ID. Defaults to None, which resolves to the workspace of the attached lakehouse or, if no lakehouse is attached, to the workspace of the notebook. Default value: None |
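As a sketch of how this function might be used, the helper below cancels whatever refresh is currently running. The helper itself is hypothetical and assumes a Fabric notebook where sempy is available.

```python
def cancel_latest_refresh(dataset, workspace=None):
    """Cancel the latest active refresh of the given semantic model (sketch)."""
    import sempy.fabric as fabric  # deferred so this sketch imports outside Fabric

    # With request_id left as None, cancel_refresh falls back to the latest
    # active refresh of the model, per the parameter description above.
    fabric.cancel_refresh(dataset, workspace=workspace)
```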
get_refresh_execution_details
Poll the status of a specific refresh request using enhanced refresh with the Power BI REST API.
Note
This is a wrapper function for Datasets - Get Refresh Execution Details In Group. More details on the underlying implementation are in the PBI documentation.
get_refresh_execution_details(dataset: str | UUID, refresh_request_id: str | UUID, workspace: str | UUID | None = None, credential: TokenCredential | None = None) -> RefreshExecutionDetails
Parameters

| Name | Description |
|---|---|
| dataset<br>Required | Name or UUID of the dataset. |
| refresh_request_id<br>Required | ID of the refresh request whose status to check. |
| workspace | The Fabric workspace name or a UUID object containing the workspace ID. Defaults to None, which resolves to the workspace of the attached lakehouse or, if no lakehouse is attached, to the workspace of the notebook. Default value: None |
| credential | The credential for token acquisition. Must be an instance of azure.core.credentials.TokenCredential. If None, the default credential is used. Default value: None |

Returns

| Type | Description |
|---|---|
| RefreshExecutionDetails | A RefreshExecutionDetails instance with the status of the specified refresh request. |
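A polling loop on top of this function might look like the sketch below. It assumes a Fabric notebook where sempy is available, and that the returned object exposes a status attribute reporting "Unknown" while the refresh is still in progress (the enhanced-refresh convention); treat both as assumptions.

```python
import time

def wait_for_refresh(dataset, refresh_request_id, workspace=None, poll_seconds=30):
    """Poll a refresh request until it leaves the in-progress state (sketch)."""
    import sempy.fabric as fabric  # deferred so this sketch imports outside Fabric

    while True:
        details = fabric.get_refresh_execution_details(
            dataset, refresh_request_id, workspace=workspace
        )
        # Enhanced refresh reports "Unknown" while a refresh is still running.
        if details.status != "Unknown":
            return details
        time.sleep(poll_seconds)
```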
list_datasets
List datasets in a Fabric workspace.
⚠️ By default (mode="xmla"), this function leverages the Tabular Object Model (TOM) to interact with the target semantic model. To use this function in xmla mode, you must have at least ReadWrite permissions on the model. Alternatively, you can use mode="rest".
list_datasets(workspace: str | UUID | None = None, mode: str = 'xmla', additional_xmla_properties: str | List[str] | None = None, endpoint: Literal['powerbi', 'fabric'] = 'powerbi', credential: TokenCredential | None = None) -> DataFrame
Parameters

| Name | Description |
|---|---|
| workspace | The Fabric workspace name or a UUID object containing the workspace ID. Defaults to None, which resolves to the workspace of the attached lakehouse or, if no lakehouse is attached, to the workspace of the notebook. Default value: None |
| mode | Whether to use the XMLA ("xmla") or REST ("rest") API. See the REST docs for the returned fields. Default value: xmla |
| additional_xmla_properties | Additional XMLA model properties to include in the returned dataframe. Default value: None |
| endpoint | The endpoint to use when mode="rest". Supported values are "powerbi" and "fabric". When mode="xmla", this parameter is ignored. See PowerBI List Datasets for using "powerbi" and Fabric List Datasets for using "fabric". Default value: powerbi |
| credential | The credential for token acquisition. Must be an instance of azure.core.credentials.TokenCredential. If None, the default credential is used. Default value: None |

Returns

| Type | Description |
|---|---|
| DataFrame | Dataframe listing databases and their attributes. |
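For example, a sketch that lists dataset names without needing XMLA ReadWrite permissions by using mode="rest". The helper and the "Dataset Name" column are assumptions; check the REST docs for the exact returned fields.

```python
def list_dataset_names(workspace=None):
    """Return sorted dataset names in a workspace via the REST endpoint (sketch)."""
    import sempy.fabric as fabric  # deferred so this sketch imports outside Fabric

    # mode="rest" avoids the ReadWrite requirement that mode="xmla" carries.
    df = fabric.list_datasets(workspace=workspace, mode="rest")
    return sorted(df["Dataset Name"])  # column name assumed; see the REST docs
```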
list_refresh_requests
Poll the status of refresh requests for a given dataset using enhanced refresh with the Power BI REST API.
Note
This is a wrapper function for Datasets - Get Refresh History In Group. See the PBI documentation for details.
list_refresh_requests(dataset: str | UUID, workspace: str | UUID | None = None, top_n: int | None = None, credential: TokenCredential | None = None) -> DataFrame
Parameters

| Name | Description |
|---|---|
| dataset<br>Required | Name or UUID of the dataset. |
| workspace | The Fabric workspace name or a UUID object containing the workspace ID. Defaults to None, which resolves to the workspace of the attached lakehouse or, if no lakehouse is attached, to the workspace of the notebook. Default value: None |
| top_n | Limit the number of refresh operations returned. Default value: None |
| credential | The credential for token acquisition. Must be an instance of azure.core.credentials.TokenCredential. If None, the default credential is used. Default value: None |

Returns

| Type | Description |
|---|---|
| DataFrame | Dataframe with statuses of the refresh requests retrieved based on the passed parameters. |
refresh_dataset
Refresh data associated with the given dataset.
Note
This is a wrapper function for Datasets - Refresh Dataset In Group. For detailed documentation on the implementation, see the PBI documentation.
refresh_dataset(dataset: str | UUID, workspace: str | UUID | None = None, refresh_type: str = 'automatic', max_parallelism: int = 10, commit_mode: str = 'transactional', retry_count: int = 0, objects: List | None = None, apply_refresh_policy: bool = True, effective_date: date = datetime.date(2025, 12, 17), verbose: int = 0, credential: TokenCredential | None = None) -> str
Parameters

| Name | Description |
|---|---|
| dataset<br>Required | Name or UUID of the dataset. |
| workspace | The Fabric workspace name or a UUID object containing the workspace ID. Defaults to None, which resolves to the workspace of the attached lakehouse or, if no lakehouse is attached, to the workspace of the notebook. Default value: None |
| refresh_type | The type of processing to perform. Types align with the TMSL refresh command types: full, clearValues, calculate, dataOnly, automatic, and defragment. The add type isn't supported. Default value: automatic |
| max_parallelism | Determines the maximum number of threads that can run the processing commands in parallel. This value aligns with the MaxParallelism property that can be set in the TMSL Sequence command or by using other methods. Default value: 10 |
| commit_mode | Determines whether to commit objects in batches or only when complete. Modes are "transactional" and "partialBatch". Default value: transactional |
| retry_count | Number of times the operation retries before failing. Default value: 0 |
| objects | A list of objects to process. Each object includes table when processing an entire table, or table and partition when processing a partition. If no objects are specified, the entire dataset refreshes. Pass the output of json.dumps of a structure that specifies the objects to refresh, for example the "DimCustomer1" partition of table "DimCustomer" together with the complete table "DimDate". Default value: None |
| apply_refresh_policy | If an incremental refresh policy is defined, determines whether to apply the policy. Modes are true and false. If the policy isn't applied, the full process leaves partition definitions unchanged and fully refreshes all partitions in the table. If commitMode is transactional, applyRefreshPolicy can be true or false. If commitMode is partialBatch, applyRefreshPolicy of true isn't supported and must be set to false. Default value: True |
| effective_date | If an incremental refresh policy is applied, the effectiveDate parameter overrides the current date. Default value: 2025-12-17 |
| verbose | If set to a non-zero value, extensive log output is printed. Default value: 0 |
| credential | The credential for token acquisition. Must be an instance of azure.core.credentials.TokenCredential. If None, the default credential is used. Default value: None |

Returns

| Type | Description |
|---|---|
| str | The refresh request ID. |
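The objects parameter expects the json.dumps output of a list of table/partition specifiers. A minimal, pure-Python sketch of that structure, using the example from the parameter description (the build_refresh_objects helper is hypothetical):

```python
import json

def build_refresh_objects(tables=(), partitions=()):
    """Build the JSON payload for refresh_dataset's `objects` parameter (sketch)."""
    objs = [{"table": t} for t in tables]  # refresh these tables entirely
    objs += [{"table": t, "partition": p} for t, p in partitions]  # single partitions
    return json.dumps(objs)

# Refresh the "DimCustomer1" partition of "DimCustomer" and the complete "DimDate".
payload = build_refresh_objects(
    tables=["DimDate"],
    partitions=[("DimCustomer", "DimCustomer1")],
)
# payload == '[{"table": "DimDate"}, {"table": "DimCustomer", "partition": "DimCustomer1"}]'
```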
refresh_semantic_model
Refresh a semantic model synchronously with options to visualize the refresh progress and collect SSAS traces.
Note
This is a wrapper function for Datasets - Refresh Dataset In Group. For detailed documentation on the implementation, see the PBI documentation.
refresh_semantic_model(dataset: str | UUID, tables: str | List[str] | None = None, partitions: str | List[str] | None = None, workspace: str | UUID | None = None, refresh_type: str = 'automatic', max_parallelism: int = 10, commit_mode: str = 'transactional', retry_count: int = 0, apply_refresh_policy: bool = True, effective_date: date = datetime.date(2025, 12, 17), visualize: bool = False, verbose: int = 0, credential: TokenCredential | None = None) -> DataFrame | None
Parameters

| Name | Description |
|---|---|
| dataset<br>Required | Name or ID of the semantic model. |
| tables | A string or a list of tables to refresh. Each listed table is refreshed in its entirety. Default value: None |
| partitions | A string or a list of partitions to refresh. Partitions must be formatted as 'Table Name'[Partition Name]. Default value: None |
| workspace | The Fabric workspace name or a UUID object containing the workspace ID. Defaults to None, which resolves to the workspace of the attached lakehouse or, if no lakehouse is attached, to the workspace of the notebook. Default value: None |
| refresh_type | The type of processing to perform. Types align with the TMSL refresh command types: full, clearValues, calculate, dataOnly, automatic, and defragment. The add type isn't supported. Default value: automatic |
| max_parallelism | Determines the maximum number of threads that can run the processing commands in parallel. This value aligns with the MaxParallelism property that can be set in the TMSL Sequence command or by using other methods. Default value: 10 |
| commit_mode | Determines whether to commit objects in batches or only when complete. Modes are "transactional" and "partialBatch". Default value: transactional |
| retry_count | Number of times the operation retries before failing. Default value: 0 |
| apply_refresh_policy | If an incremental refresh policy is defined, determines whether to apply the policy. Modes are true and false. If the policy isn't applied, the full process leaves partition definitions unchanged and fully refreshes all partitions in the table. If commitMode is transactional, applyRefreshPolicy can be true or false. If commitMode is partialBatch, applyRefreshPolicy of true isn't supported and must be set to false. Default value: True |
| effective_date | If an incremental refresh policy is applied, the effectiveDate parameter overrides the current date. Default value: 2025-12-17 |
| visualize | If True, displays a Gantt chart showing the refresh statistics for each table/partition. Default value: False |
| verbose | If set to a non-zero value, extensive log output is printed. Default value: 0 |
| credential | The credential for token acquisition. Must be an instance of azure.core.credentials.TokenCredential. If None, the default credential is used. Default value: None |

Returns

| Type | Description |
|---|---|
| DataFrame or None | If visualize is set to True, a pandas dataframe showing the SSAS trace output used to generate the visualization; otherwise None. |
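Since partitions must be formatted as 'Table Name'[Partition Name], a small pure-Python formatting helper (hypothetical) can make call sites less error-prone:

```python
def format_partition(table, partition):
    """Format a partition reference as 'Table Name'[Partition Name] (sketch)."""
    return f"'{table}'[{partition}]"

# Build the partitions argument for the "DimCustomer1" partition of "DimCustomer".
partitions = [format_partition("DimCustomer", "DimCustomer1")]
# partitions == ["'DimCustomer'[DimCustomer1]"]
```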
resolve_dataset_id
Resolve the dataset ID by name in the specified workspace.
resolve_dataset_id(dataset_name: str, workspace: str | UUID | None = None, credential: TokenCredential | None = None) -> str
Parameters

| Name | Description |
|---|---|
| dataset_name<br>Required | Name of the dataset to be resolved. |
| workspace | The Fabric workspace name or a UUID object containing the workspace ID. Defaults to None, which resolves to the workspace of the attached lakehouse or, if no lakehouse is attached, to the workspace of the notebook. Default value: None |
| credential | The credential for token acquisition. Must be an instance of azure.core.credentials.TokenCredential. If None, the default credential is used. Default value: None |

Returns

| Type | Description |
|---|---|
| str | The ID of the specified dataset. |
resolve_dataset_name
Resolve the dataset name by ID in the specified workspace.
resolve_dataset_name(dataset_id: str | UUID, workspace: str | UUID | None = None, credential: TokenCredential | None = None) -> str
Parameters

| Name | Description |
|---|---|
| dataset_id<br>Required | Dataset ID or a UUID object containing the dataset ID to be resolved. |
| workspace | The Fabric workspace name or a UUID object containing the workspace ID. Defaults to None, which resolves to the workspace of the attached lakehouse or, if no lakehouse is attached, to the workspace of the notebook. Default value: None |
| credential | The credential for token acquisition. Must be an instance of azure.core.credentials.TokenCredential. If None, the default credential is used. Default value: None |

Returns

| Type | Description |
|---|---|
| str | The name of the specified dataset. |
resolve_dataset_name_and_id
Resolve the name and ID of a dataset in the specified or default workspace.
resolve_dataset_name_and_id(dataset: str | UUID, workspace: str | UUID | None = None, credential: TokenCredential | None = None) -> Tuple[str, str]
Parameters

| Name | Description |
|---|---|
| dataset<br>Required | The dataset name or ID (str or UUID). |
| workspace | The Fabric workspace name or ID (str or UUID). If None, the default workspace is used. Default value: None |
| credential | The credential for token acquisition. Must be an instance of azure.core.credentials.TokenCredential. If None, the default credential is used. Default value: None |

Returns

| Type | Description |
|---|---|
| Tuple[str, str] | A tuple containing the dataset name and dataset ID. |
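The three resolve functions are consistent with one another, which the hypothetical round-trip sketch below illustrates (it assumes a Fabric notebook where sempy is available):

```python
def describe_dataset(dataset, workspace=None):
    """Resolve a dataset both ways and sanity-check the results (sketch)."""
    import sempy.fabric as fabric  # deferred so this sketch imports outside Fabric

    name, dataset_id = fabric.resolve_dataset_name_and_id(dataset, workspace=workspace)
    # The combined call should agree with the two single-direction resolvers.
    assert fabric.resolve_dataset_id(name, workspace=workspace) == dataset_id
    assert fabric.resolve_dataset_name(dataset_id, workspace=workspace) == name
    return name, dataset_id
```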