Use the JAR task to deploy Scala or Java code compiled into a JAR (Java ARchive). Before you create a JAR task, see Create a Databricks compatible JAR.
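A JAR task runs the `main` method of a class packaged in your JAR. As a minimal sketch (the object name `Main` and helper `greeting` are hypothetical, not names this page prescribes), an entry point might look like:

```scala
// Hypothetical entry point for a JAR task.
// In the Jobs UI, the Main class field would be set to this
// class's fully qualified name (here simply "Main"; in a real
// project you would use a package such as com.example.Main).
object Main {

  // Pure helper so the argument handling is easy to test.
  def greeting(args: Array[String]): String =
    if (args.isEmpty) "Hello from the JAR task"
    else s"Hello, ${args.mkString(", ")}"

  // Task parameters configured in the Jobs UI arrive as args.
  def main(args: Array[String]): Unit =
    println(greeting(args))
}
```

Parameters configured on the task (see Configure task parameters below) are passed to `main` as the `args` array.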
Important
Serverless Scala and Java jobs are in Beta. You can use JAR tasks to deploy your JAR. To enable the preview if it isn't already enabled, see Manage Azure Databricks previews.
Requirements
- Choose a compute configuration that supports your workload.
- Upload your JAR file to a location or Maven repository compatible with your compute. See JAR library support.
- For standard access mode: An admin must add Maven coordinates and JAR paths to an allowlist. See standard access limitations.
Configure a JAR task
Add a JAR task from the Tasks tab in the Jobs UI by doing the following:
Click Add task.
Enter a name into the Task name field.
In the Type drop-down menu, select JAR.
Specify the Main class.
- This is the fully qualified name of the class containing the main method to be executed. This class must be included in a JAR configured as a Dependent library.
Click Compute to select or configure compute. You can choose classic or serverless compute.
Configure your environment and add dependencies:
For classic compute, click Add under Dependent libraries. The Add dependent library dialog appears.
- You can select an existing JAR file or upload a new JAR file.
- Not all storage locations support JAR files.
- Not all compute configurations support JAR files from every supported location.
- Each Library Source has a different flow for selecting or uploading a JAR file. See Install libraries.
For serverless compute, choose an environment, then click edit to configure it.
- For the Environment version, select 4 or higher.
- Add your JAR file.
- Add any other dependencies that you have. Don't include Spark dependencies, because these are already provided in the environment by Databricks Connect. For more information on dependencies in JARs, see Create an Azure Databricks compatible JAR.
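One common way to keep Spark out of your JAR is to mark it as a compile-only dependency in your build. As a sketch for sbt (the Scala and Spark version numbers here are placeholder assumptions; use the versions matching your target environment), the `provided` scope compiles against Spark without bundling it:

```scala
// build.sbt sketch: "provided" makes Spark available at compile
// time but excludes it from the packaged JAR, since the Databricks
// environment already supplies it.
scalaVersion := "2.12.18"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-sql" % "3.5.0" % "provided"
)
```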
(Optional) Configure Parameters as a list of strings passed as arguments to the main class. See Configure task parameters.
Click Save task.
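The same task can also be defined programmatically through the Jobs API. As a hedged sketch (the job name, task key, main class, parameters, and JAR path below are placeholders, and the compute configuration is omitted), the task portion of a job definition might look like:

```json
{
  "name": "my-jar-job",
  "tasks": [
    {
      "task_key": "jar_task",
      "spark_jar_task": {
        "main_class_name": "com.example.Main",
        "parameters": ["--env", "prod"]
      },
      "libraries": [
        { "jar": "/Volumes/main/default/libs/my-app.jar" }
      ]
    }
  ]
}
```

Here `spark_jar_task.main_class_name` corresponds to the Main class field in the UI, `parameters` to the task parameters, and `libraries` to the dependent library containing your class.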