What's coming?

Learn about features and behavioral changes in upcoming Azure Databricks releases.

Email notifications for expiring personal access tokens

Azure Databricks will soon send email notifications to workspace users approximately seven days before their personal access tokens expire. Notifications are sent only to workspace users (not service principals) with email-based usernames. All expiring tokens within the same workspace are grouped together in a single email.

See Monitor and revoke personal access tokens.
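
As a point of reference, here is a minimal sketch of how a workspace admin could find tokens expiring within the same seven-day window today, using the Databricks SDK for Python. The window and output format are illustrative, not the notification logic itself:

    from datetime import datetime, timedelta, timezone

    from databricks.sdk import WorkspaceClient

    w = WorkspaceClient()  # reads credentials from the environment
    cutoff = datetime.now(timezone.utc) + timedelta(days=7)

    for token in w.token_management.list():
        # expiry_time is epoch milliseconds; tokens without expiry report -1
        if token.expiry_time and token.expiry_time > 0:
            expires = datetime.fromtimestamp(token.expiry_time / 1000, tz=timezone.utc)
            if expires <= cutoff:
                print(f"{token.created_by_username}: {token.token_id} expires {expires:%Y-%m-%d}")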

Databricks Assistant Agent Mode preview

The Databricks Assistant Agent Mode preview will soon be enabled by default for most customers.

  • The agent can automate multiple steps. From a single prompt, it can retrieve relevant assets, generate and run code, fix errors automatically, and visualize results. It can also sample data and cell outputs to provide better results.
  • The Assistant in Agent Mode will choose between Azure OpenAI and Anthropic on Databricks (which uses endpoints hosted by Databricks Inc. in AWS within the Databricks security perimeter), and is available only when the partner-powered AI features setting is enabled.
  • Admins can disable the preview if needed until the feature reaches General Availability.

See Use the Data Science Agent, the blog post, and Partner-powered AI features.

Genie Research agent mode can soon use models served through Amazon Bedrock

Genie Research agent mode will soon be able to use models served through Amazon Bedrock when partner-powered AI features are enabled.

Updated end of support timeline for legacy dashboards

  • Official support for the legacy version of dashboards has ended as of April 7, 2025. Only critical security issues and service outages will be addressed.
  • November 3, 2025: Databricks began presenting users with a dismissible warning dialog when they access any legacy dashboard. The dialog reminds users that access to legacy dashboards will end on January 12, 2026, and provides a one-click option to migrate to AI/BI.
  • January 12, 2026: Legacy dashboards and their APIs will no longer be directly accessible. However, you will still be able to upgrade them in place to AI/BI dashboards. The migration page will be available until March 2, 2026.

To help transition to AI/BI dashboards, upgrade tools are available in both the user interface and the API. For instructions on how to use the built-in migration tool in the UI, see Clone a legacy dashboard to an AI/BI dashboard. For tutorials about creating and managing dashboards using the REST API, see Use Azure Databricks APIs to manage dashboards.
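
For the API path, here is a sketch using the Databricks SDK for Python's Lakeview migrate call; the dashboard ID and display name are placeholders:

    from databricks.sdk import WorkspaceClient

    w = WorkspaceClient()

    # Clone a legacy dashboard to a new AI/BI (Lakeview) dashboard.
    new_dashboard = w.lakeview.migrate(
        source_dashboard_id="abc123",  # placeholder: the legacy dashboard's ID
        display_name="Sales overview (AI/BI)",
    )
    print(new_dashboard.dashboard_id)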

Behavioral change for working with Delta table history and VACUUM

Databricks Runtime 18.0 will change how time travel queries and VACUUM work in Delta Lake for more predictable behavior.

Current behavior:

Time travel availability depends on when VACUUM last ran, which can be difficult to predict.

Changes in Databricks Runtime 18.0:

The following updates will make time travel deterministic and aligned with retention settings:

  • Time travel queries (SELECT, RESTORE, CDC, and CLONE with AS OF syntax) are blocked if they exceed delta.deletedFileRetentionDuration, as shown in the sketch after this list.
  • The retention period (RETAIN num HOURS) in VACUUM is ignored with a warning, except for RETAIN 0 HOURS, which permanently removes all history from a Delta table.
  • delta.logRetentionDuration must be greater than or equal to delta.deletedFileRetentionDuration if you modify either property.
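
For illustration, here is a minimal sketch of the new blocking behavior, with a hypothetical table name and retention window. On Databricks Runtime 18.0 and above, the second query is expected to fail instead of returning whatever history happens to survive:

    # Deleted files for this table are retained for 7 days.
    spark.sql("""
        ALTER TABLE main.default.events SET TBLPROPERTIES (
            'delta.deletedFileRetentionDuration' = 'interval 7 days'
        )
    """)

    # Within the retention window: allowed.
    spark.sql(
        "SELECT * FROM main.default.events "
        "TIMESTAMP AS OF current_timestamp() - INTERVAL 3 DAYS"
    )

    # Beyond the retention window: blocked with an error on Runtime 18.0 and above.
    spark.sql(
        "SELECT * FROM main.default.events "
        "TIMESTAMP AS OF current_timestamp() - INTERVAL 30 DAYS"
    )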

These changes will be released on the following timeline:

  • Mid-December 2025: Applies to all Delta tables on Databricks Runtime 18.0.
  • January 2026: Extends to serverless compute, Databricks SQL, and Databricks Runtime 12.2 and above for Unity Catalog managed tables.

For Unity Catalog managed tables, the changes apply to Databricks Runtime 12.2 and above. For all other Delta tables, changes apply to Databricks Runtime 18.0 and above.

Action required:

Verify that your time travel queries continue to work after the Databricks Runtime 18.0 release:

  • Review and update delta.deletedFileRetentionDuration to match your time travel needs. Verify that it’s less than or equal to delta.logRetentionDuration.
  • Stop setting the retention period in the VACUUM command. Use delta.deletedFileRetentionDuration instead, as shown in the sketch below.
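
A sketch of the recommended configuration, using a hypothetical table name: set the file retention to the time travel window you need, keep the log retention at least as long, and run VACUUM without a RETAIN clause.

    # Retention properties drive both time travel and VACUUM after this change.
    spark.sql("""
        ALTER TABLE main.default.events SET TBLPROPERTIES (
            'delta.deletedFileRetentionDuration' = 'interval 30 days',
            'delta.logRetentionDuration' = 'interval 30 days'
        )
    """)

    # No RETAIN clause: VACUUM honors delta.deletedFileRetentionDuration.
    spark.sql("VACUUM main.default.events")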

Lakehouse Federation sharing and default storage

Delta Sharing on Lakehouse Federation is in Beta, allowing Delta Sharing data providers to share foreign catalogs and tables. By default, data must be temporarily materialized and stored on default storage (Private Preview). Currently, users must manually enable the Delta Sharing for Default Storage – Expanded Access feature in the account console to use Lakehouse Federation sharing.

After Delta Sharing for Default Storage – Expanded Access is enabled by default for all Azure Databricks users, Delta Sharing on Lakehouse Federation will automatically be available in regions where default storage is supported.

See Default storage in Databricks and Add foreign schemas or tables to a share.
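
For context, adding a federated table to a share uses the same ALTER SHARE syntax as other Delta Sharing objects; the share and foreign catalog names below are placeholders:

    # Share a table from a foreign (Lakehouse Federation) catalog.
    spark.sql("ALTER SHARE my_share ADD TABLE fed_catalog.fed_schema.fed_table")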

Reload notification in workspaces

In an upcoming release, a message prompting you to reload will be displayed if your workspace tab has been open for a long time without refreshing. This helps ensure that you are always using the latest version of Databricks with the newest features and fixes.

SAP Business Data Cloud (BDC) Connector for Azure Databricks will soon be generally available

The SAP Business Data Cloud (BDC) Connector for Azure Databricks is a new feature that allows you to share data from SAP BDC to Azure Databricks and from Azure Databricks to SAP BDC using Delta Sharing. This feature will be generally available at the end of September.

Delta Sharing for tables on default storage will soon be enabled by default (Beta)

This default storage update expands Delta Sharing capabilities, allowing providers to share tables backed by default storage with any Delta Sharing recipient (open or Azure Databricks), including recipients using classic compute. This feature is currently in Beta and requires providers to manually enable Delta Sharing for Default Storage – Expanded Access in the account console. Soon, it will be enabled by default for all users.

See Limitations.

Updates to the outbound control plane public IPs

Azure Databricks is updating the outbound control plane public IPs and Azure service tags for improved security and zone availability. These changes are part of a control plane update that began rolling out on May 20, 2025.

If your organization uses resource firewalls to control inbound access:

  • If your firewall rules reference the Azure Databricks service tag, no action is required.
  • If you allow specific control plane public IPs, you must add all the outbound control plane IPs by September 26, 2025.

The previous outbound control plane IPs continue to be supported.

Behavior change for the Auto Loader incremental directory listing option

Note

The Auto Loader cloudFiles.useIncrementalListing option is deprecated. Although this note discusses a change to the option's default value and how to continue using it after this change, Databricks recommends against using this option in favor of file notification mode with file events.

In an upcoming Databricks Runtime release, the value of the deprecated Auto Loader cloudFiles.useIncrementalListing option will, by default, be set to false. Setting this value to false causes Auto Loader to perform a full directory listing each time it's run. Currently, the default value of the cloudFiles.useIncrementalListing option is auto, instructing Auto Loader to make a best-effort attempt at detecting if an incremental listing can be used with a directory.

To continue using the incremental listing feature, set the cloudFiles.useIncrementalListing option to auto. When you set this value to auto, Auto Loader makes a best-effort attempt to do a full listing once every seven incremental listings, which matches the behavior of this option before this change.
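
Here is a minimal sketch of pinning the option explicitly, with placeholder paths, keeping in mind that Databricks recommends file notification mode with file events over this deprecated option:

    # Opt back in to incremental listing once the default changes to "false".
    df = (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .option("cloudFiles.useIncrementalListing", "auto")  # future default: "false"
        .option("cloudFiles.schemaLocation", "/Volumes/main/default/landing/_schema")
        .load("abfss://landing@myaccount.dfs.core.windows.net/events")
    )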

To learn more about Auto Loader directory listing, see Auto Loader streams with directory listing mode.

Behavior change when dataset definitions are removed from Lakeflow Spark Declarative Pipelines

An upcoming release of Lakeflow Spark Declarative Pipelines will change the behavior when a materialized view or streaming table is removed from a pipeline. With this change, the removed materialized view or streaming table will not be deleted automatically when the next pipeline update runs. Instead, you will be able to use the DROP MATERIALIZED VIEW command to delete a materialized view or the DROP TABLE command to delete a streaming table. After dropping an object, running a pipeline update will not recover the object automatically. A new object is created if a materialized view or streaming table with the same definition is re-added to the pipeline. You can, however, recover an object using the UNDROP command.
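
A sketch of the new cleanup flow with hypothetical object names (the exact UNDROP syntax per object type is not spelled out above, so treat the last statement as illustrative):

    # After removing the definitions from the pipeline, drop the objects explicitly.
    spark.sql("DROP MATERIALIZED VIEW main.default.daily_sales")
    spark.sql("DROP TABLE main.default.events_stream")

    # Recover a dropped streaming table if the removal was a mistake.
    spark.sql("UNDROP TABLE main.default.events_stream")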

The sourceIPAddress field in audit logs will no longer include a port number

Due to a bug, certain authorization and authentication audit logs include a port number in addition to the IP in the sourceIPAddress field (for example, "sourceIPAddress":"10.2.91.100:0"). The port number, which is logged as 0, does not provide any real value and is inconsistent with the rest of the Databricks audit logs. To enhance the consistency of audit logs, Databricks plans to change the format of the IP address for these audit log events. This change will gradually roll out starting in early August 2024.

If the audit log contains a sourceIPAddress of 0.0.0.0, Databricks might stop logging it.
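
If downstream tooling parses this field, here is a sketch of a query that tolerates both formats during the rollout, assuming the system.access.audit system table (where the column is named source_ip_address):

    # Strip the buggy ":0" port suffix so old and new rows compare equally.
    spark.sql("""
        SELECT
            regexp_replace(source_ip_address, ':0$', '') AS source_ip,
            count(*) AS events
        FROM system.access.audit
        GROUP BY 1
    """).show()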