Managed connectors in Lakeflow Connect

Important

Managed connectors in Lakeflow Connect are in various release states. For details, see Release statuses.

This page provides an overview of managed connectors in Databricks Lakeflow Connect for ingesting data from SaaS applications and databases. The resulting ingestion pipeline is governed by Unity Catalog and is powered by serverless compute and Lakeflow Spark Declarative Pipelines. Managed connectors use efficient incremental reads and writes to make data ingestion faster, more scalable, and more cost-efficient, while keeping your data fresh for downstream consumption.

SaaS connector components

A SaaS connector has the following components:

| Component | Description |
| --- | --- |
| Connection | A Unity Catalog securable object that stores authentication details for the application. |
| Ingestion pipeline | A pipeline that copies the data from the application into the destination tables. The ingestion pipeline runs on serverless compute. |
| Destination tables | The tables where the ingestion pipeline writes the data. These are streaming tables, which are Delta tables with extra support for incremental data processing. |

SaaS connector components diagram
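
To make the components concrete, the following sketch creates a SaaS ingestion pipeline with the Databricks Python SDK, assuming a Unity Catalog connection named `my_salesforce_connection` already exists (see Authentication methods). The catalog, schema, and table names are placeholders, and the exact ingestion-definition fields can vary by SDK version, so treat this as a hedged outline rather than a definitive recipe.

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import pipelines

w = WorkspaceClient()

# Placeholder names throughout: replace the connection, catalog, schema,
# and table with your own values.
created = w.pipelines.create(
    name="salesforce-accounts-ingest",
    serverless=True,  # the ingestion pipeline for managed connectors runs on serverless compute
    ingestion_definition=pipelines.IngestionPipelineDefinition(
        connection_name="my_salesforce_connection",  # Unity Catalog connection securable
        objects=[
            pipelines.IngestionConfig(
                table=pipelines.TableSpec(
                    source_schema="objects",        # placeholder source schema
                    source_table="Account",         # source object to ingest
                    destination_catalog="main",     # where the destination streaming table lives
                    destination_schema="sales",
                )
            )
        ],
    ),
)
print(f"Created ingestion pipeline {created.pipeline_id}")
```

The destination tables this pipeline writes are streaming tables, so they pick up the incremental behavior described later on this page without extra configuration.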

Database connector components

A database connector has the following components:

| Component | Description |
| --- | --- |
| Connection | A Unity Catalog securable object that stores authentication details for the database. |
| Ingestion gateway | A pipeline that extracts snapshots, change logs, and metadata from the source database. The gateway runs continuously on classic compute so that it can capture changes before the source truncates its change logs. |
| Staging storage | A Unity Catalog volume that temporarily stores extracted data before it's applied to the destination tables. This lets you run the ingestion pipeline on whatever schedule you want while the gateway continuously captures changes, and it also helps with failure recovery. The staging volume is created automatically when you deploy the gateway, and you can customize the catalog and schema where it lives. Data is automatically purged from staging after 30 days. |
| Ingestion pipeline | A pipeline that moves the data from staging storage into the destination tables. The pipeline runs on serverless compute. |
| Destination tables | The tables where the ingestion pipeline writes the data. These are streaming tables, which are Delta tables with extra support for incremental data processing. |

Database connector components diagram
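
For a database source, the gateway and the ingestion pipeline are two separate pipelines that you create in order. The sketch below outlines this with the Databricks Python SDK for a hypothetical SQL Server source; the connection, catalog, schema, and storage names are placeholders, and the gateway- and ingestion-definition field names reflect the SDK's pipelines service as I understand it, so verify them against the connector documentation before relying on them.

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import pipelines

w = WorkspaceClient()

# 1. Ingestion gateway: extracts snapshots and change logs from the source
#    database and lands them in a Unity Catalog volume used as staging storage.
#    The gateway runs continuously on classic compute.
gateway = w.pipelines.create(
    name="sqlserver-gateway",
    gateway_definition=pipelines.IngestionGatewayPipelineDefinition(
        connection_name="my_sqlserver_connection",  # Unity Catalog connection (placeholder)
        gateway_storage_catalog="main",             # where the staging volume is created
        gateway_storage_schema="lakeflow_staging",
        gateway_storage_name="sqlserver_staging",
    ),
)

# 2. Ingestion pipeline: applies staged data to the destination streaming
#    tables on serverless compute, on whatever schedule you choose.
ingest = w.pipelines.create(
    name="sqlserver-ingest",
    serverless=True,
    ingestion_definition=pipelines.IngestionPipelineDefinition(
        ingestion_gateway_id=gateway.pipeline_id,
        objects=[
            pipelines.IngestionConfig(
                table=pipelines.TableSpec(
                    source_catalog="my_database",   # placeholder source database
                    source_schema="dbo",
                    source_table="customers",
                    destination_catalog="main",
                    destination_schema="sales",
                )
            )
        ],
    ),
)
```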

Orchestration

You can run your ingestion pipeline on one or more custom schedules. Lakeflow Connect automatically creates a job for each schedule that you add to a pipeline, with the ingestion pipeline as a task within that job. You can optionally add more tasks to the job.

Pipeline orchestration diagram for SaaS connectors

For database connectors, the ingestion gateway runs in its own job as a continuous task.

Pipeline orchestration diagram for database connectors
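
If you want to see what that orchestration looks like in code, the following sketch uses the Jobs API through the Databricks Python SDK to schedule an existing ingestion pipeline. The pipeline ID, job name, and cron expression are placeholders, and this is an illustrative outline of the pattern rather than the exact job definition that Lakeflow Connect creates for you.

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import jobs

w = WorkspaceClient()

# Placeholder ID of an existing ingestion pipeline.
PIPELINE_ID = "00000000-0000-0000-0000-000000000000"

job = w.jobs.create(
    name="salesforce-ingest-hourly",
    schedule=jobs.CronSchedule(
        quartz_cron_expression="0 0 * * * ?",  # run at the top of every hour
        timezone_id="UTC",
    ),
    tasks=[
        jobs.Task(
            task_key="run_ingestion_pipeline",
            pipeline_task=jobs.PipelineTask(pipeline_id=PIPELINE_ID),
        ),
        # Optionally append more tasks here, for example a notebook task that
        # transforms the freshly ingested data after the pipeline task succeeds.
    ],
)
print(f"Created job {job.job_id}")
```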

Incremental ingestion

Lakeflow Connect uses incremental ingestion to improve pipeline efficiency. On the first run of your pipeline, it ingests all of the selected data from the source. In parallel, it tracks changes to the source data. On each subsequent run of the pipeline, it uses that change tracking to ingest only the data that's changed from the prior run, when possible.

The exact approach depends on what's available in your data source. For example, the SQL Server connector can use both change tracking and change data capture (CDC). In contrast, the Salesforce connector selects a cursor column from a fixed list of options.

Some sources, or specific tables within a source, don't support incremental ingestion at this time. Databricks plans to expand incremental ingestion coverage over time.
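
Conceptually, and setting aside each connector's internals, cursor-based incremental ingestion comes down to remembering the highest cursor value seen so far and reading only rows beyond it on the next run. A minimal sketch of that idea, assuming a hypothetical source with a timestamp-style cursor column:

```python
from dataclasses import dataclass
from typing import Iterable, Optional


@dataclass
class Row:
    id: str
    modified_at: str  # ISO-8601 timestamp used as the cursor column (illustrative)


def incremental_read(
    source: Iterable[Row], last_cursor: Optional[str]
) -> tuple[list[Row], Optional[str]]:
    """Return rows changed since the previous run, plus the new cursor position.

    On the first run (last_cursor is None) this degenerates into a full read,
    matching the behavior described above: a full ingestion first, then
    change-only reads on subsequent runs.
    """
    changed = [r for r in source if last_cursor is None or r.modified_at > last_cursor]
    new_cursor = max((r.modified_at for r in changed), default=last_cursor)
    return changed, new_cursor
```

Storing the cursor between runs is also what lets a connector resume without missing data after a failure, as described in Failure recovery below.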

Networking

There are several options for connecting to a SaaS application or database.

  • Connectors for SaaS applications connect to the source's APIs. They're also automatically compatible with serverless egress controls.
  • Connectors for cloud databases can connect to the source via Private Link. Alternatively, if your workspace has a Virtual Network (VNet) or Virtual Private Cloud (VPC) that's peered with the VNet or VPC hosting your database, then you can deploy the ingestion gateway inside of it.
  • Connectors for on-premises databases can connect using services like AWS Direct Connect and Azure ExpressRoute.

Deployment

You can deploy ingestion pipelines using Databricks Asset Bundles, which enable best practices like source control, code review, testing, and continuous integration and delivery (CI/CD). Bundles are managed using the Databricks CLI and can be run in different target workspaces, such as development, staging, and production.

Failure recovery

As a fully managed service, Lakeflow Connect aims to recover from issues automatically when possible. For example, when a connector fails, it automatically retries with exponential backoff.

However, some errors require your intervention (for example, when credentials expire). In these cases, the connector tries to avoid missing data by storing the last position of the cursor. It can then resume from that position on the next run of the pipeline, when possible.

Monitoring

Lakeflow Connect provides robust alerting and monitoring to help you maintain your pipelines. This includes event logs, cluster logs, pipeline health metrics, and data quality metrics.

Release statuses

| Connector | Release status |
| --- | --- |
| Dynamics 365 | Public Preview |
| Google Analytics | Generally Available |
| MySQL | Public Preview |
| NetSuite | Public Preview |
| PostgreSQL | Public Preview |
| Salesforce | Generally Available |
| ServiceNow | Generally Available |
| SharePoint | Beta |
| SQL Server | Generally Available |
| Workday | Generally Available |

Feature availability

The following tables summarize feature availability for each managed ingestion connector. For additional features and limitations, see the documentation for your specific connector.

Google Analytics

| Feature | Availability |
| --- | --- |
| UI-based pipeline authoring | Yes |
| API-based pipeline authoring | Yes |
| Databricks Asset Bundles | Yes |
| Incremental ingestion | Yes |
| Unity Catalog governance | Yes |
| Orchestration using Databricks Workflows | Yes |
| SCD type 2 | Yes |
| API-based column selection and deselection | Yes |
| API-based row filtering | Yes |
| Automated schema evolution: New and deleted columns | Yes |
| Automated schema evolution: Data type changes | No |
| Automated schema evolution: Column renames | Yes - Treated as a new column (new name) and deleted column (old name). |
| Automated schema evolution: New tables | Yes - If you ingest the entire schema. See the limitations on the number of tables per pipeline. |
| Maximum number of tables per pipeline | 250 |

MySQL

| Feature | Availability |
| --- | --- |
| UI-based pipeline authoring | Yes |
| API-based pipeline authoring | Yes |
| Databricks Asset Bundles | Yes |
| Incremental ingestion | Yes |
| Unity Catalog governance | Yes |
| Orchestration using Databricks Workflows | Yes |
| SCD type 2 | Yes |
| API-based column selection and deselection | Yes |
| API-based row filtering | No |
| Automated schema evolution: New and deleted columns | Yes |
| Automated schema evolution: Data type changes | No |
| Automated schema evolution: Column renames | Yes - Treated as a new column (new name) and deleted column (old name). |
| Automated schema evolution: New tables | Yes - If you ingest the entire schema. See the limitations on the number of tables per pipeline. |
| Maximum number of tables per pipeline | 250 |

NetSuite

| Feature | Availability |
| --- | --- |
| UI-based pipeline authoring | Yes |
| API-based pipeline authoring | Yes |
| Databricks Asset Bundles | Yes |
| Incremental ingestion | Yes |
| Unity Catalog governance | Yes |
| Orchestration using Databricks Workflows | Yes |
| SCD type 2 | Yes |
| API-based column selection and deselection | Yes |
| API-based row filtering | No |
| Automated schema evolution: New and deleted columns | Yes |
| Automated schema evolution: Data type changes | No |
| Automated schema evolution: Column renames | Yes - Treated as a new column (new name) and deleted column (old name). |
| Automated schema evolution: New tables | Yes - If you ingest the entire schema. See the limitations on the number of tables per pipeline. |
| Maximum number of tables per pipeline | 200 |

Salesforce

| Feature | Availability |
| --- | --- |
| UI-based pipeline authoring | Yes |
| API-based pipeline authoring | Yes |
| Databricks Asset Bundles | Yes |
| Incremental ingestion | Yes - By default, formula fields require full snapshots. To enable incremental ingestion for formula fields, see Ingest Salesforce formula fields incrementally. |
| Unity Catalog governance | Yes |
| Orchestration using Databricks Workflows | Yes |
| SCD type 2 | Yes |
| API-based column selection and deselection | Yes |
| API-based row filtering | Yes |
| Automated schema evolution: New and deleted columns | Yes |
| Automated schema evolution: Data type changes | No |
| Automated schema evolution: Column renames | Yes - Treated as a new column (new name) and deleted column (old name). |
| Automated schema evolution: New tables | N/A |
| Maximum number of tables per pipeline | 250 |

Workday

| Feature | Availability |
| --- | --- |
| UI-based pipeline authoring | Yes |
| API-based pipeline authoring | Yes |
| Databricks Asset Bundles | Yes |
| Incremental ingestion | No |
| Unity Catalog governance | Yes |
| Orchestration using Databricks Workflows | Yes |
| SCD type 2 | Yes |
| API-based column selection and deselection | Yes |
| API-based row filtering | No |
| Automated schema evolution: New and deleted columns | Yes |
| Automated schema evolution: Data type changes | No |
| Automated schema evolution: Column renames | No - When DDL objects are enabled, the connector can rename the column. When DDL objects are not enabled, the connector treats this as a new column (new name) and a deleted column (old name). In either case, it requires a full refresh. |
| Automated schema evolution: New tables | Yes - If you ingest the entire schema. See the limitations on the number of tables per pipeline. |
| Maximum number of tables per pipeline | 250 |

SQL Server

| Feature | Availability |
| --- | --- |
| UI-based pipeline authoring | Yes |
| API-based pipeline authoring | Yes |
| Databricks Asset Bundles | Yes |
| Incremental ingestion | Yes |
| Unity Catalog governance | Yes |
| Orchestration using Databricks Workflows | Yes |
| SCD type 2 | Yes |
| API-based column selection and deselection | Yes |
| API-based row filtering | No |
| Automated schema evolution: New and deleted columns | Yes |
| Automated schema evolution: Data type changes | No |
| Automated schema evolution: Column renames | No - When DDL objects are enabled, the connector can rename the column. When DDL objects are not enabled, the connector treats this as a new column (new name) and a deleted column (old name). In either case, it requires a full refresh. |
| Automated schema evolution: New tables | Yes - If you ingest the entire schema. See the limitations on the number of tables per pipeline. |
| Maximum number of tables per pipeline | 250 |

PostgreSQL

| Feature | Availability |
| --- | --- |
| UI-based pipeline authoring | Yes |
| API-based pipeline authoring | Yes |
| Databricks Asset Bundles | Yes |
| Incremental ingestion | Yes |
| Unity Catalog governance | Yes |
| Orchestration using Databricks Workflows | Yes |
| SCD type 2 | Yes |
| API-based column selection and deselection | Yes |
| API-based row filtering | No |
| Automated schema evolution: New and deleted columns | Yes |
| Automated schema evolution: Data type changes | No |
| Automated schema evolution: Column renames | Yes - Treated as a new column (new name) and deleted column (old name). |
| Automated schema evolution: New tables | Yes - If you ingest the entire schema. See the limitations on the number of tables per pipeline. |
| Maximum number of tables per pipeline | 250 |

ServiceNow

| Feature | Availability |
| --- | --- |
| UI-based pipeline authoring | Yes |
| API-based pipeline authoring | Yes |
| Databricks Asset Bundles | Yes |
| Incremental ingestion | Yes - With exceptions when your table lacks a cursor field. |
| Unity Catalog governance | Yes |
| Orchestration using Databricks Workflows | Yes |
| SCD type 2 | Yes |
| API-based column selection and deselection | Yes |
| API-based row filtering | Yes |
| Automated schema evolution: New and deleted columns | Yes |
| Automated schema evolution: Data type changes | No |
| Automated schema evolution: Column renames | Yes - Treated as a new column (new name) and deleted column (old name). |
| Automated schema evolution: New tables | Yes - If you ingest the entire schema. See the limitations on the number of tables per pipeline. |
| Maximum number of tables per pipeline | 250 |

SharePoint

| Feature | Availability |
| --- | --- |
| UI-based pipeline authoring | Yes |
| API-based pipeline authoring | Yes |
| Databricks Asset Bundles | Yes |
| Incremental ingestion | Yes |
| Unity Catalog governance | Yes |
| Orchestration using Databricks Workflows | Yes |
| SCD type 2 | Yes |
| API-based column selection and deselection | Yes |
| API-based row filtering | No |
| Automated schema evolution: New and deleted columns | Yes |
| Automated schema evolution: Data type changes | No |
| Automated schema evolution: Column renames | No - Requires full refresh. |
| Automated schema evolution: New tables | Yes - If you ingest the entire schema. See the limitations on the number of tables per pipeline. |
| Maximum number of tables per pipeline | 250 |

Dynamics 365

| Feature | Availability |
| --- | --- |
| UI-based pipeline authoring | Yes |
| API-based pipeline authoring | Yes |
| Databricks Asset Bundles | Yes |
| Incremental ingestion | Yes - Via VersionNumber from Azure Synapse Link. |
| Unity Catalog governance | Yes |
| Orchestration using Databricks Workflows | Yes |
| SCD type 2 | Yes |
| API-based column selection and deselection | Yes |
| API-based row filtering | No |
| Automated schema evolution: New and deleted columns | Yes |
| Automated schema evolution: Data type changes | No |
| Automated schema evolution: Column renames | No - Requires full refresh. |
| Automated schema evolution: New tables | Yes - If you ingest the entire schema. See the limitations on the number of tables per pipeline. |
| Maximum number of tables per pipeline | 250 |

Authentication methods

The following table lists the supported authentication methods for each managed ingestion connector. Databricks recommends using OAuth U2M or OAuth M2M when possible. If your connector supports OAuth, basic authentication is considered a legacy method.
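
Whichever method you use, the credentials end up in a Unity Catalog connection object that the ingestion pipeline (or gateway) references. As a hedged sketch, here is one way to create such a connection for a MySQL source with basic (username/password) authentication using the Databricks Python SDK. The host, port, and credential values are placeholders, and the exact option keys each connector expects can differ, so confirm them in the connector-specific documentation.

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import catalog

w = WorkspaceClient()

# Placeholder values: host, port, and credentials are illustrative only.
connection = w.connections.create(
    name="my_mysql_connection",
    connection_type=catalog.ConnectionType.MYSQL,
    comment="Connection used by a Lakeflow Connect ingestion gateway",
    options={
        "host": "mysql.example.com",
        "port": "3306",
        "user": "ingestion_user",
        "password": "REDACTED",  # prefer pulling this from a secret rather than hardcoding it
    },
)
print(f"Created connection {connection.name}")
```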

Dynamics 365

| Authentication method | Availability |
| --- | --- |
| OAuth U2M | Supported |
| OAuth M2M | Not supported |
| OAuth (manual refresh token) | Not supported |
| Basic authentication (username/password) | Not supported |
| Basic authentication (API key) | Not supported |
| Basic authentication (service account JSON key) | Not supported |

Google Analytics

| Authentication method | Availability |
| --- | --- |
| OAuth U2M | Supported |
| OAuth M2M | Not supported |
| OAuth (manual refresh token) | Not supported |
| Basic authentication (username/password) | Not supported |
| Basic authentication (API key) | Supported (API-only) |
| Basic authentication (service account JSON key) | Not supported |

MySQL

| Authentication method | Availability |
| --- | --- |
| OAuth U2M | Not supported |
| OAuth M2M | Not supported |
| OAuth (manual refresh token) | Not supported |
| Basic authentication (username/password) | Supported |
| Basic authentication (API key) | Not supported |
| Basic authentication (service account JSON key) | Not supported |

NetSuite

| Authentication method | Availability |
| --- | --- |
| OAuth U2M | Not supported |
| OAuth M2M | Not supported |
| OAuth (manual refresh token) | Not supported |
| Basic authentication (username/password) | Not supported |
| Basic authentication (API key) | Not supported |
| Basic authentication (service account JSON key) | Supported |

Salesforce

| Authentication method | Availability |
| --- | --- |
| OAuth U2M | Supported |
| OAuth M2M | Not supported |
| OAuth (manual refresh token) | Not supported |
| Basic authentication (username/password) | Not supported |
| Basic authentication (API key) | Not supported |
| Basic authentication (service account JSON key) | Not supported |

ServiceNow

| Authentication method | Availability |
| --- | --- |
| OAuth U2M | Supported |
| OAuth M2M | Not supported |
| OAuth (manual refresh token) | Supported (API-only) |
| Basic authentication (username/password) | Not supported |
| Basic authentication (API key) | Not supported |
| Basic authentication (service account JSON key) | Not supported |

SharePoint

| Authentication method | Availability |
| --- | --- |
| OAuth U2M | Supported |
| OAuth M2M | Supported (Public Preview) |
| OAuth (manual refresh token) | Supported |
| Basic authentication (username/password) | Not supported |
| Basic authentication (API key) | Not supported |
| Basic authentication (service account JSON key) | Not supported |

SQL Server

| Authentication method | Availability |
| --- | --- |
| OAuth U2M | Supported |
| OAuth M2M | Supported |
| OAuth (manual refresh token) | Not supported |
| Basic authentication (username/password) | Not supported |
| Basic authentication (API key) | Supported |
| Basic authentication (service account JSON key) | Not supported |

Workday

| Authentication method | Availability |
| --- | --- |
| OAuth U2M | Not supported |
| OAuth M2M | Not supported |
| OAuth (manual refresh token) | Supported |
| Basic authentication (username/password) | Supported |
| Basic authentication (API key) | Not supported |
| Basic authentication (service account JSON key) | Not supported |

Dependence on external services

Databricks SaaS, database, and other fully managed connectors depend on the accessibility, compatibility, and stability of the application, database, or external service that they connect to. Databricks does not control these external services and therefore has limited (if any) influence over their changes, updates, and maintenance.

If changes, disruptions, or circumstances related to an external service impede or render impractical the operation of a connector, Databricks may discontinue or cease maintaining that connector. Databricks will make reasonable efforts to notify customers of discontinuation or cessation of maintenance, including updates to the applicable documentation.