This article covers connecting Unstructured to Databricks Volumes.

For information about connecting Unstructured to Delta Tables in Databricks instead, see Delta Tables in Databricks.

Batch process all your records to store structured outputs in Databricks Volumes.

The requirements are as follows.

  • A Databricks account on AWS, Azure, or GCP.

  • A workspace within the Databricks account for AWS, Azure, or GCP.

  • The workspace’s URL. Get the workspace URL for AWS, Azure, or GCP.

    Examples:

    • AWS: https://<workspace-id>.cloud.databricks.com
    • Azure: https://adb-<workspace-id>.<random-number>.azuredatabricks.net
    • GCP: https://<workspace-id>.<random-number>.gcp.databricks.com
  • The Databricks authentication details. For more information, see the documentation for AWS, Azure, or GCP.

    For the Unstructured Platform, only Databricks OAuth machine-to-machine (M2M) authentication is supported for AWS, Azure, and GCP. You will need the Client ID (or UUID or Application ID) and OAuth Secret (client secret) values for the corresponding service principal. Note that for Azure, only Databricks managed service principals are supported. Microsoft Entra ID managed service principals are not supported.

    For Unstructured Ingest, the following Databricks authentication types are supported:

    • For Databricks personal access token authentication for AWS, Azure, or GCP: The personal access token’s value.

    • For username and password (basic) authentication (AWS only): The user’s name and password values.

    • For OAuth machine-to-machine (M2M) authentication (AWS, Azure, and GCP): The client ID and OAuth secret values for the corresponding service principal.

    • For OAuth user-to-machine (U2M) authentication (AWS, Azure, and GCP): No additional values.

    • For Azure managed identities (formerly Managed Service Identities (MSI) authentication) (Azure only): The client ID value for the corresponding managed identity.

    • For Microsoft Entra ID service principal authentication (Azure only): The tenant ID, client ID, and client secret values for the corresponding service principal.

    • For Azure CLI authentication (Azure only): No additional values.

    • For Microsoft Entra ID user authentication (Azure only): The Entra ID token for the corresponding Entra ID user.

    • For Google Cloud Platform credentials authentication (GCP only): The local path to the corresponding Google Cloud service account’s credentials file.

    • For Google Cloud Platform ID authentication (GCP only): The Google Cloud service account’s email address.

  • The name of the parent catalog in Unity Catalog for AWS, Azure, or GCP for the volume.

  • The name of the parent schema (formerly known as a database) in Unity Catalog for AWS, Azure, or GCP for the volume.

  • The name of the volume in Unity Catalog for AWS, Azure, or GCP, and optionally any path in that volume that you want to access directly, beginning with the volume’s root.

  • The Databricks workspace user or service principal must have the following minimum set of privileges to read from or write to the existing volume in Unity Catalog:

    • USE CATALOG on the volume’s parent catalog in Unity Catalog.
    • USE SCHEMA on the volume’s parent schema (formerly known as a database) in Unity Catalog.
    • READ VOLUME and WRITE VOLUME on the volume.

    Learn more about how to check and set Unity Catalog privileges for AWS, Azure, or GCP. For a quick programmatic way to verify credentials and privileges, see the sketch after this requirements list.
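
If you want to confirm the service principal's credentials and volume privileges before running an ingest job, here is a minimal verification sketch using the Databricks SDK for Python (pip install databricks-sdk). It assumes OAuth M2M authentication and that the DATABRICKS_HOST, DATABRICKS_CLIENT_ID, and DATABRICKS_CLIENT_SECRET environment variables are set; the /Volumes path segments are placeholders for your own catalog, schema, and volume names. This sketch is illustrative and is not part of the connector itself.

# Verify authentication and Unity Catalog volume privileges (illustrative sketch).
import io
import os

from databricks.sdk import WorkspaceClient

# OAuth M2M: pass the service principal's credentials explicitly. (The SDK
# can also pick these up from the environment or a configuration profile.)
w = WorkspaceClient(
    host=os.environ["DATABRICKS_HOST"],
    client_id=os.environ["DATABRICKS_CLIENT_ID"],
    client_secret=os.environ["DATABRICKS_CLIENT_SECRET"],
)

# Confirm that authentication works.
print(w.current_user.me().user_name)

# Confirm USE CATALOG, USE SCHEMA, and WRITE VOLUME by writing a small test
# file to the volume. Replace the placeholder path segments with your names.
test_path = "/Volumes/<catalog>/<schema>/<volume>/privilege-check.txt"
w.files.upload(test_path, io.BytesIO(b"ok"), overwrite=True)

# Confirm READ VOLUME by reading the test file back, then clean it up.
print(w.files.download(test_path).contents.read())
w.files.delete(test_path)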

To use the Databricks Volumes connector with the Unstructured Ingest CLI or the Unstructured Ingest Python library, install the connector's dependencies:

pip install "unstructured-ingest[databricks-volumes]"

You might also need to install additional dependencies, depending on your needs. Learn more.

Set the following environment variables:

  • DATABRICKS_HOST - The Databricks host URL, represented by --host (CLI) or host (Python).
  • DATABRICKS_CATALOG - The Databricks catalog name for the Volume, represented by --catalog (CLI) or catalog (Python).
  • DATABRICKS_SCHEMA - The Databricks schema name for the Volume, represented by --schema (CLI) or schema (Python). If not specified, the schema named default is used.
  • DATABRICKS_VOLUME - The Databricks Volume name, represented by --volume (CLI) or volume (Python).
  • DATABRICKS_VOLUME_PATH - Any optional path to access within the volume, represented by --volume-path (CLI) or volume_path (Python).

Also set environment variables based on your authentication type, depending on your cloud provider:

  • For Databricks personal access token authentication (AWS, Azure, and GCP):

    • DATABRICKS_TOKEN - The personal access token, represented by --token (CLI) or token (Python).
  • For username and password (basic) authentication (AWS only):

    • DATABRICKS_USERNAME - The user’s name, represented by --username (CLI) or username (Python).
    • DATABRICKS_PASSWORD - The user’s password, represented by --password (CLI) or password (Python).
  • For OAuth machine-to-machine (M2M) authentication (AWS, Azure, and GCP):

    • DATABRICKS_CLIENT_ID - The client ID value for the corresponding service principal, represented by --client-id (CLI) or client_id (Python).
    • DATABRICKS_CLIENT_SECRET - The OAuth secret (client secret) value for the corresponding service principal, represented by --client-secret (CLI) or client_secret (Python).
  • For OAuth user-to-machine (U2M) authentication (AWS, Azure, and GCP): No additional environment variables.

  • For Azure managed identities (MSI) authentication (Azure only):

    • ARM_CLIENT_ID - The client ID value for the corresponding managed identity, represented by --azure-client-id (CLI) or azure_client_id (Python).
    • If the target identity has not already been added to the workspace, then you must also set DATABRICKS_AZURE_RESOURCE_ID - the Azure resource ID for the workspace, represented by --azure-workspace-resource-id (CLI) or azure_workspace_resource_id (Python).
  • For Microsoft Entra ID service principal authentication (Azure only):

    • ARM_TENANT_ID - The tenant ID value for the corresponding service principal, represented by --azure-tenant-id (CLI) or azure_tenant_id (Python).
    • ARM_CLIENT_ID - The client ID value for the corresponding service principal, represented by --azure-client-id (CLI) or azure_client_id (Python).
    • ARM_CLIENT_SECRET - The client secret value for the corresponding service principal, represented by --azure-client-secret (CLI) or azure_client_secret (Python).
    • If the service principal has not already been added to the workspace, then you must also set DATABRICKS_AZURE_RESOURCE_ID - the Azure resource ID for the workspace, represented by --azure-workspace-resource-id (CLI) or azure_workspace_resource_id (Python).
  • For Azure CLI authentication (Azure only): No additional environment variables.

  • For Microsoft Entra ID user authentication (Azure only):

    • DATABRICKS_TOKEN - The Entra ID token for the corresponding Entra ID user, represented by --token (CLI) or token (Python).
  • For Google Cloud Platform credentials authentication (GCP only):

    • GOOGLE_CREDENTIALS - The local path to the corresponding Google Cloud service account’s credentials file, represented by --google-credentials (CLI) or google_credentials (Python).
  • For Google Cloud Platform ID authentication (GCP only):

    • GOOGLE_SERVICE_ACCOUNT - The Google Cloud service account’s email address, represented by --google-service-account (CLI) or google_service_account (Python).
  • Alternatively, you can store the preceding settings in a local Databricks configuration profile and then simply reference the profile by name (a sample profile appears after this list):

    • DATABRICKS_PROFILE - The name of the Databricks configuration profile, represented by --profile (CLI) or profile (Python).
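
For example, an OAuth M2M profile in the local Databricks configuration file (~/.databrickscfg by default) might look like the following; the profile name and all values are placeholders:

[my-m2m-profile]
host          = https://<workspace-id>.cloud.databricks.com
client_id     = <client-id>
client_secret = <oauth-secret>

You would then set DATABRICKS_PROFILE to my-m2m-profile (or pass --profile my-m2m-profile) instead of setting the host and credential variables individually.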

Also set these environment variables:

  • UNSTRUCTURED_API_KEY - Your Unstructured API key value.
  • UNSTRUCTURED_API_URL - Your Unstructured API URL.

Now call the Unstructured Ingest CLI or the Unstructured Ingest Python library. The source connector can be any supported source connector. This example uses the local source connector, first with the CLI; a Python sketch follows:

#!/usr/bin/env bash

# Chunking and embedding are optional.

unstructured-ingest \
  local \
    --input-path "$LOCAL_FILE_INPUT_DIR" \
    --partition-by-api \
    --api-key "$UNSTRUCTURED_API_KEY" \
    --partition-endpoint "$UNSTRUCTURED_API_URL" \
    --strategy hi_res \
    --additional-partition-args '{"split_pdf_page":"true", "split_pdf_allow_failed":"true", "split_pdf_concurrency_level": 15}' \
    --chunking-strategy by_title \
    --embedding-provider huggingface \
  databricks-volumes \
    --profile "$DATABRICKS_PROFILE" \
    --host "$DATABRICKS_HOST" \
    --catalog "$DATABRICKS_CATALOG" \
    --schema "$DATABRICKS_SCHEMA" \
    --volume "$DATABRICKS_VOLUME" \
    --volume-path "$DATABRICKS_VOLUME_PATH"
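
For the Unstructured Ingest Python library, the equivalent pipeline looks roughly like the following sketch. The module and class names here follow the Ingest v2 connector pattern but are assumptions that can shift between library versions, so verify them against the connector reference for your installed version:

# Illustrative Python equivalent of the preceding CLI command. Module and
# class names are assumptions; verify them against your installed version.
import os

from unstructured_ingest.v2.interfaces import ProcessorConfig
from unstructured_ingest.v2.pipeline.pipeline import Pipeline
from unstructured_ingest.v2.processes.chunker import ChunkerConfig
from unstructured_ingest.v2.processes.connectors.databricks.volumes_native import (
    DatabricksNativeVolumesAccessConfig,
    DatabricksNativeVolumesConnectionConfig,
    DatabricksNativeVolumesUploaderConfig,
)
from unstructured_ingest.v2.processes.connectors.local import (
    LocalConnectionConfig,
    LocalDownloaderConfig,
    LocalIndexerConfig,
)
from unstructured_ingest.v2.processes.embedder import EmbedderConfig
from unstructured_ingest.v2.processes.partitioner import PartitionerConfig

# Chunking and embedding are optional, as in the CLI example.
Pipeline.from_configs(
    context=ProcessorConfig(),
    indexer_config=LocalIndexerConfig(input_path=os.getenv("LOCAL_FILE_INPUT_DIR")),
    downloader_config=LocalDownloaderConfig(),
    source_connection_config=LocalConnectionConfig(),
    partitioner_config=PartitionerConfig(
        partition_by_api=True,
        api_key=os.getenv("UNSTRUCTURED_API_KEY"),
        partition_endpoint=os.getenv("UNSTRUCTURED_API_URL"),
        strategy="hi_res",
        additional_partition_args={
            "split_pdf_page": "true",
            "split_pdf_allow_failed": "true",
            "split_pdf_concurrency_level": 15,
        },
    ),
    chunker_config=ChunkerConfig(chunking_strategy="by_title"),
    embedder_config=EmbedderConfig(embedding_provider="huggingface"),
    destination_connection_config=DatabricksNativeVolumesConnectionConfig(
        access_config=DatabricksNativeVolumesAccessConfig(
            client_id=os.getenv("DATABRICKS_CLIENT_ID"),
            client_secret=os.getenv("DATABRICKS_CLIENT_SECRET"),
        ),
        host=os.getenv("DATABRICKS_HOST"),
    ),
    uploader_config=DatabricksNativeVolumesUploaderConfig(
        catalog=os.getenv("DATABRICKS_CATALOG"),
        schema=os.getenv("DATABRICKS_SCHEMA"),
        volume=os.getenv("DATABRICKS_VOLUME"),
        volume_path=os.getenv("DATABRICKS_VOLUME_PATH"),
    ),
).run()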