Databricks Volumes
Connect Databricks Volumes to your preprocessing pipeline, and use the Unstructured Ingest CLI or the Unstructured Ingest Python library to batch process all your documents and store structured outputs locally on your filesystem.
The requirements are as follows.
Databricks personal access tokens (PATs) are supported only for Unstructured Ingest. Databricks-managed service principals are supported by both the Unstructured Platform and Unstructured Ingest; to learn how to create and use them, see the videos later on this page.
- The Databricks workspace URL. Get the workspace URL for AWS, Azure, or GCP. Examples:
  - AWS: `https://<workspace-id>.cloud.databricks.com`
  - Azure: `https://adb-<workspace-id>.<random-number>.azuredatabricks.net`
  - GCP: `https://<workspace-id>.<random-number>.gcp.databricks.com`
- The Databricks authentication details. For more information, see the documentation for AWS, Azure, or GCP.
The following videos show how to create a Databricks-managed service principal and then grant it access to a Databricks volume:
For the Unstructured Platform, only the following Databricks authentication type is supported:
- For OAuth machine-to-machine (M2M) authentication (AWS, Azure, and GCP): The client ID and OAuth secret values for the corresponding service principal. Note that for Azure, only Databricks-managed service principals are supported. Microsoft Entra ID-managed service principals are not supported.
For Unstructured Ingest, the following Databricks authentication types are supported:
- For Databricks personal access token authentication (AWS, Azure, and GCP): The personal access token’s value.
- For username and password (basic) authentication (AWS only): The user’s name and password values.
- For OAuth machine-to-machine (M2M) authentication (AWS, Azure, and GCP): The client ID and OAuth secret values for the corresponding service principal.
- For OAuth user-to-machine (U2M) authentication (AWS, Azure, and GCP): No additional values.
- For Azure managed identities (MSI) authentication (Azure only): The client ID value for the corresponding managed identity.
- For Microsoft Entra ID service principal authentication (Azure only): The tenant ID, client ID, and client secret values for the corresponding service principal.
- For Azure CLI authentication (Azure only): No additional values.
- For Microsoft Entra ID user authentication (Azure only): The Entra ID token for the corresponding Entra ID user.
- For Google Cloud Platform credentials authentication (GCP only): The local path to the corresponding Google Cloud service account’s credentials file.
- For Google Cloud Platform ID authentication (GCP only): The Google Cloud service account’s email address.
- The Databricks catalog name for the volume. Get the catalog name for AWS, Azure, or GCP.
- The Databricks schema name for the volume. Get the schema name for AWS, Azure, or GCP.
- The Databricks volume name, and optionally any path in that volume that you want to access directly. Get the volume information for AWS, Azure, or GCP.
- Make sure that the target user or service principal has access to the target volume. To learn more, see the documentation for AWS, Azure, or GCP.
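If you have the Databricks CLI installed, one quick way to confirm access is to list the volume’s contents. This is an optional sketch, assuming Databricks CLI v0.2xx and an already-configured authentication profile; the catalog, schema, volume, and profile names are placeholders:

```bash
# List the root of the volume to confirm that the caller can read it.
# <catalog>, <schema>, <volume>, and my-profile are placeholders.
databricks fs ls dbfs:/Volumes/<catalog>/<schema>/<volume>/ --profile my-profile
```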
Install the Databricks Volumes connector dependencies:
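For example, if you use pip, the connector’s dependencies are typically pulled in through a package extra. The exact extra name below is an assumption, so check the library’s documentation for your version:

```bash
# Install the Unstructured Ingest library together with the Databricks
# Volumes connector dependencies (the extra name is an assumption).
pip install "unstructured-ingest[databricks-volumes]"
```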
You might also need to install additional dependencies, depending on your needs. Learn more.
The following environment variables:
- `DATABRICKS_HOST` - The Databricks host URL, represented by `--host` (CLI) or `host` (Python).
- `DATABRICKS_CATALOG` - The Databricks catalog name for the volume, represented by `--catalog` (CLI) or `catalog` (Python).
- `DATABRICKS_SCHEMA` - The Databricks schema name for the volume, represented by `--schema` (CLI) or `schema` (Python). If not specified, `default` is used.
- `DATABRICKS_VOLUME` - The Databricks volume name, represented by `--volume` (CLI) or `volume` (Python).
- `DATABRICKS_VOLUME_PATH` - Any optional path to access within the volume, represented by `--volume-path` (CLI) or `volume_path` (Python).
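For example, in a shell you could set the general settings like this (all values are placeholders):

```bash
# General Databricks Volumes settings; replace the placeholder values.
export DATABRICKS_HOST="https://<workspace-id>.cloud.databricks.com"
export DATABRICKS_CATALOG="<catalog-name>"
export DATABRICKS_SCHEMA="<schema-name>"                       # omit to use "default"
export DATABRICKS_VOLUME="<volume-name>"
export DATABRICKS_VOLUME_PATH="<optional/path/within/volume>"  # optional
```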
Environment variables based on your authentication type, depending on your cloud provider:
- For Databricks personal access token authentication (AWS, Azure, and GCP):
  - `DATABRICKS_TOKEN` - The personal access token, represented by `--token` (CLI) or `token` (Python).
- For username and password (basic) authentication (AWS only):
  - `DATABRICKS_USERNAME` - The user’s name, represented by `--username` (CLI) or `username` (Python).
  - `DATABRICKS_PASSWORD` - The user’s password, represented by `--password` (CLI) or `password` (Python).
- For OAuth machine-to-machine (M2M) authentication (AWS, Azure, and GCP):
  - `DATABRICKS_CLIENT_ID` - The client ID value for the corresponding service principal, represented by `--client-id` (CLI) or `client_id` (Python).
  - `DATABRICKS_CLIENT_SECRET` - The OAuth secret value for the corresponding service principal, represented by `--client-secret` (CLI) or `client_secret` (Python).
- For OAuth user-to-machine (U2M) authentication (AWS, Azure, and GCP): No additional environment variables.
- For Azure managed identities (MSI) authentication (Azure only):
  - `ARM_CLIENT_ID` - The client ID value for the corresponding managed identity, represented by `--azure-client-id` (CLI) or `azure_client_id` (Python).
  - If the target identity has not already been added to the workspace, then you must also specify `DATABRICKS_AZURE_RESOURCE_ID`, represented by `--azure-workspace-resource-id` (CLI) or `azure_workspace_resource_id` (Python).
- For Microsoft Entra ID service principal authentication (Azure only):
  - `ARM_TENANT_ID` - The tenant ID value for the corresponding service principal, represented by `--azure-tenant-id` (CLI) or `azure_tenant_id` (Python).
  - `ARM_CLIENT_ID` - The client ID value for the corresponding service principal, represented by `--azure-client-id` (CLI) or `azure_client_id` (Python).
  - `ARM_CLIENT_SECRET` - The client secret value for the corresponding service principal, represented by `--azure-client-secret` (CLI) or `azure_client_secret` (Python).
  - If the service principal has not already been added to the workspace, then you must also specify `DATABRICKS_AZURE_RESOURCE_ID`, represented by `--azure-workspace-resource-id` (CLI) or `azure_workspace_resource_id` (Python).
- For Azure CLI authentication (Azure only): No additional environment variables.
- For Microsoft Entra ID user authentication (Azure only):
  - `DATABRICKS_TOKEN` - The Entra ID token for the corresponding Entra ID user, represented by `--token` (CLI) or `token` (Python).
- For Google Cloud Platform credentials authentication (GCP only):
  - `GOOGLE_CREDENTIALS` - The local path to the corresponding Google Cloud service account’s credentials file, represented by `--google-credentials` (CLI) or `google_credentials` (Python).
- For Google Cloud Platform ID authentication (GCP only):
  - `GOOGLE_SERVICE_ACCOUNT` - The Google Cloud service account’s email address, represented by `--google-service-account` (CLI) or `google_service_account` (Python).
- Alternatively, you can store the preceding settings in a local Databricks configuration profile and then just refer to the profile’s name (see the sketch after this list):
  - `DATABRICKS_PROFILE` - The name of the Databricks configuration profile, represented by `--profile` (CLI) or `profile` (Python).
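As a concrete illustration, here is a minimal sketch for OAuth M2M authentication, the one authentication type that both the Unstructured Platform and Unstructured Ingest support. All values and the profile name are placeholders; the `host`, `client_id`, and `client_secret` profile fields follow the standard Databricks client unified authentication format:

```bash
# Option 1: set the M2M credentials directly as environment variables.
export DATABRICKS_CLIENT_ID="<service-principal-client-id>"
export DATABRICKS_CLIENT_SECRET="<service-principal-oauth-secret>"

# Option 2: store the settings in a Databricks configuration profile in
# ~/.databrickscfg, and then refer to the profile by name instead.
cat >> ~/.databrickscfg <<'EOF'
[my-volumes-profile]
host          = https://<workspace-id>.cloud.databricks.com
client_id     = <service-principal-client-id>
client_secret = <service-principal-oauth-secret>
EOF
export DATABRICKS_PROFILE="my-volumes-profile"
```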
Now call the Unstructured Ingest CLI or the Unstructured Ingest Python library. Any supported destination connector can be used; this example uses the local destination connector.
This example sends data to Unstructured API services for processing by default. To process data locally instead, see the instructions at the end of this page.
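The following is a minimal sketch of a CLI invocation that reads files from the volume and writes the structured output locally. It assumes OAuth M2M authentication and the environment variables set earlier; the `databricks-volumes` subcommand spelling and the `local` destination’s `--output-dir` option are assumptions here, so confirm them with `unstructured-ingest --help` for your installed version:

```bash
# Read from the Databricks volume, partition via Unstructured API services,
# and write structured output to a local directory (path is a placeholder).
unstructured-ingest \
  databricks-volumes \
    --host "$DATABRICKS_HOST" \
    --catalog "$DATABRICKS_CATALOG" \
    --schema "$DATABRICKS_SCHEMA" \
    --volume "$DATABRICKS_VOLUME" \
    --volume-path "$DATABRICKS_VOLUME_PATH" \
    --client-id "$DATABRICKS_CLIENT_ID" \
    --client-secret "$DATABRICKS_CLIENT_SECRET" \
    --partition-by-api \
    --api-key "$UNSTRUCTURED_API_KEY" \
    --partition-endpoint "$UNSTRUCTURED_API_URL" \
  local \
    --output-dir "<local/output/path>"
```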
For the Unstructured Ingest CLI and the Unstructured Ingest Python library, you can use the `--partition-by-api` option (CLI) or the `partition_by_api` parameter (Python) to specify where files are processed:

- To do local file processing, omit `--partition-by-api` (CLI) or `partition_by_api` (Python), or explicitly specify `partition_by_api=False` (Python).

  Local file processing does not use an Unstructured API key or API URL, so you can also omit the following, if they appear:

  - `--api-key $UNSTRUCTURED_API_KEY` (CLI) or `api_key=os.getenv("UNSTRUCTURED_API_KEY")` (Python)
  - `--partition-endpoint $UNSTRUCTURED_API_URL` (CLI) or `partition_endpoint=os.getenv("UNSTRUCTURED_API_URL")` (Python)
  - The environment variables `UNSTRUCTURED_API_KEY` and `UNSTRUCTURED_API_URL`

- To send files to Unstructured API services for processing, specify `--partition-by-api` (CLI) or `partition_by_api=True` (Python).

  Unstructured API services also require an Unstructured API key and API URL, which you provide by adding the following:

  - `--api-key $UNSTRUCTURED_API_KEY` (CLI) or `api_key=os.getenv("UNSTRUCTURED_API_KEY")` (Python)
  - `--partition-endpoint $UNSTRUCTURED_API_URL` (CLI) or `partition_endpoint=os.getenv("UNSTRUCTURED_API_URL")` (Python)
  - The environment variables `UNSTRUCTURED_API_KEY` and `UNSTRUCTURED_API_URL`, representing your API key and API URL, respectively.
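For example, to switch the sketch above to local file processing, remove the `--partition-by-api`, `--api-key`, and `--partition-endpoint` lines and leave the rest of the command unchanged.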