This article covers connecting Unstructured to Delta Tables in Amazon S3. For information about
connecting Unstructured to Delta Tables in Databricks instead, see
Delta Tables in Databricks.
If you’re new to Unstructured, read this note first.

Before you can create a destination connector, you must first sign in to your Unstructured account:
- If you do not already have an Unstructured account, sign up for free. After you sign up, you are automatically signed in to your new Unstructured Starter account, at https://platform.unstructured.io. To sign up for a Team or Enterprise account instead, contact Unstructured Sales, or learn more.
- If you already have an Unstructured Starter or Team account and are not already signed in, sign in to your account at https://platform.unstructured.io. For an Enterprise account, see your Unstructured account administrator for instructions, or email Unstructured Support at support@unstructured.io.
If you are experiencing S3 connector or workflow failures after adding a new S3 bucket or updating an existing S3 bucket, it could be due to S3 latency issues. You might need to wait up to a few hours before any related S3 connectors and workflows begin working without failures.

Various Amazon S3 operations, such as propagating DNS records for new buckets, updating bucket access policies and permissions, reusing bucket names after deletion, and using AWS Regions that are not geographically close to your users or applications, can take a few minutes to hours to fully propagate across the Amazon network.
- An AWS account. Create an AWS account.
- An S3 bucket. You can create an S3 bucket by using the S3 console, following the steps in the S3 documentation or in the following video. Additional approaches that use AWS CloudFormation or the AWS CLI are in the how-to sections later on this page.
- For authenticated bucket read access, write access, or both, you should first block all public access to the bucket. After blocking all public access to the bucket, for read access, the authenticated AWS IAM user must have at minimum the `s3:ListBucket` and `s3:GetObject` permissions for that bucket. For write access, the authenticated AWS IAM user must have at minimum the `s3:PutObject` permission for that bucket. To grant these permissions, attach the appropriate bucket policy to the bucket. See the policy examples later on this page, and learn about bucket policies for S3. These permissions remain in effect until the bucket policy is removed from the bucket. To apply a bucket policy by using the S3 console, follow the steps in the S3 documentation or in the following video. Additional approaches that use AWS CloudFormation or the AWS CLI are in the how-to sections later on this page.
- Provide an AWS access key and secret access key for the authenticated AWS IAM user in the account. Create an AWS access key and secret access key by following the steps in the IAM documentation or in the following video.
- If the target files are in the root of the bucket, provide the path to the bucket, formatted as `protocol://bucket/` (for example, `s3://my-bucket/`). If the target files are in a folder, provide the path to the target folder in the S3 bucket, formatted as `protocol://bucket/path/to/folder/` (for example, `s3://my-bucket/my-folder/`).
- If the target files are in a folder, make sure the authenticated AWS IAM user has authenticated access to the folder as well. See examples of authenticated folder access.
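To confirm that the access key pair can reach the bucket before you create the connector, a quick check with the AWS CLI might look like the following sketch. The bucket and folder names here are the examples from above; substitute your own, and make sure the AWS CLI is configured with the IAM user's access key pair. (Deleting the test object afterward would additionally require `s3:DeleteObject`, which is beyond the minimum permissions described above.)

```bash
# Read check: list the target folder (requires s3:ListBucket).
aws s3 ls s3://my-bucket/my-folder/ --region us-east-1

# Write check: upload a small test object (requires s3:PutObject).
echo "test" > /tmp/unstructured-access-test.txt
aws s3 cp /tmp/unstructured-access-test.txt s3://my-bucket/my-folder/access-test.txt
```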
Add an access policy to an existing bucket
To use the Amazon S3 console to add an access policy that allows all authenticated AWS IAM users in the corresponding AWS account to read and write to an existing S3 bucket, do the following.

Your organization might have stricter bucket policy requirements. Check with your AWS account administrator if you are unsure.
- Sign in to the AWS Management Console.
- Open the Amazon S3 Console.
- Browse to the existing bucket and open it.
- Click the Permissions tab.
- In the Bucket policy area, click Edit.
- In the Policy text area, paste the JSON-formatted policy shown after these steps. To change the policy to restrict it to a specific user in the AWS account, change `root` to that specific username. In this policy, replace the following:
  - Replace `<my-account-id>` with your AWS account ID.
  - Replace `<my-bucket-name>` in two places with the name of your bucket.
- Click Save changes.
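The policy below is a minimal sketch of the JSON that the preceding steps refer to, granting the account's authenticated IAM identities the read and write permissions described earlier on this page (`s3:ListBucket`, `s3:GetObject`, and `s3:PutObject`). Your organization's policy might need to be stricter.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAccountReadWrite",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<my-account-id>:root"
      },
      "Action": [
        "s3:ListBucket",
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::<my-bucket-name>",
        "arn:aws:s3:::<my-bucket-name>/*"
      ]
    }
  ]
}
```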
Create a bucket with AWS CloudFormation
To use the AWS CloudFormation console to create an Amazon S3 bucket that allows all authenticated AWS IAM users in the corresponding AWS account to read and write to the bucket, do the following.

Your organization might have stricter bucket policy requirements. Check with your AWS account administrator if you are unsure.
- Save the YAML template shown after these steps to a file on your local machine, for example `create-s3-bucket.yaml`. To change the template's bucket policy to restrict it to a specific user in the AWS account, change `root` to that specific username.
- Sign in to the AWS Management Console.
- Open the AWS CloudFormation Console.
- Click Create stack > With new resources (standard).
- On the Create stack page, with Choose an existing template already selected, select Upload a template file.
- Click Choose file, and browse to and select the YAML file from your local machine.
- Click Next.
- Enter a unique Stack name and BucketName.
- Click Next two times.
- Click Submit.
- Wait until the Status changes to CREATE_COMPLETE.
- After the bucket is created, you can delete the YAML file, if you want.
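The template below is a minimal sketch of the YAML that step 1 refers to, reconstructed from the behavior this section describes: it creates the bucket, blocks all public access, and attaches the same account-wide read/write bucket policy shown earlier on this page. Treat it as a starting point rather than a definitive template.

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: >
  Create an S3 bucket that blocks all public access and allows
  authenticated IAM identities in this account to read and write.

Parameters:
  BucketName:
    Type: String
    Description: A globally unique name for the new S3 bucket.

Resources:
  Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Ref BucketName
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true

  BucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Ref Bucket
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          # To restrict access to a specific user, change "root" to that username.
          - Effect: Allow
            Principal:
              AWS: !Sub "arn:aws:iam::${AWS::AccountId}:root"
            Action:
              - s3:ListBucket
              - s3:GetObject
              - s3:PutObject
            Resource:
              - !Sub "arn:aws:s3:::${BucketName}"
              - !Sub "arn:aws:s3:::${BucketName}/*"
```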
Create a bucket with the AWS CLI
To use the AWS CLI to create an Amazon S3 bucket that allows all authenticated AWS IAM users in the corresponding AWS account to read and write to the bucket, do the following.

Your organization might have stricter bucket policy requirements. Check with your AWS account administrator if you are unsure.
- Install the AWS CLI.
- Set up the AWS CLI.
- Copy the script shown after these steps to a file on your local machine, for example a file named `create-s3-bucket.sh`. To change the script's bucket policy to restrict it to a specific user in the AWS account, change `root` to that specific username. In this script, replace the following:
  - Replace `<my-account-id>` with your AWS account ID.
  - Replace `<my-unique-bucket-name>` with the name of your bucket.
  - Replace `<us-east-1>` with your AWS Region.
- Run the script, for example: `bash create-s3-bucket.sh`
- After the bucket is created, you can delete the script file, if you want.
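The script below is a minimal sketch of what the preceding steps refer to, reconstructed from the behavior this section describes: it creates the bucket, blocks all public access, and attaches the same account-wide read/write bucket policy shown earlier on this page. Adjust it to your organization's requirements before running it.

```bash
#!/usr/bin/env bash
set -euo pipefail

ACCOUNT_ID="<my-account-id>"
BUCKET_NAME="<my-unique-bucket-name>"
REGION="<us-east-1>"

# Create the bucket. Note: for us-east-1, omit the
# --create-bucket-configuration option entirely.
aws s3api create-bucket \
  --bucket "$BUCKET_NAME" \
  --region "$REGION" \
  --create-bucket-configuration LocationConstraint="$REGION"

# Block all public access to the bucket.
aws s3api put-public-access-block \
  --bucket "$BUCKET_NAME" \
  --public-access-block-configuration \
  "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"

# Attach a bucket policy that grants authenticated IAM identities in this
# account read and write access. To restrict access to a specific user,
# change "root" to that username.
aws s3api put-bucket-policy --bucket "$BUCKET_NAME" --policy "$(cat <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::${ACCOUNT_ID}:root" },
      "Action": [ "s3:ListBucket", "s3:GetObject", "s3:PutObject" ],
      "Resource": [
        "arn:aws:s3:::${BUCKET_NAME}",
        "arn:aws:s3:::${BUCKET_NAME}/*"
      ]
    }
  ]
}
EOF
)"
```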
Delta table output format
A Delta table consists of Parquet files that contain data and a transaction log that stores metadata about the transactions. Learn more. The Delta Tables in Amazon S3 destination connector generates the following output within the specified path to the S3 bucket (or the specified folder within the bucket):
- Initially, one Parquet (`.parquet`) file per file in the source location. For example, for a file in the source location named `my-file.pdf`, an associated file with the extension `.parquet` is generated. Various kinds of file transactions can result in additional Parquet files being generated. These Parquet filenames are automatically generated by the Delta Lake engine and are not meant to be manually modified.
- A folder named `_delta_log` that contains metadata and change history about the `.parquet` files. As Parquet files are added to, changed, or removed from the specified bucket or folder path, the `_delta_log` folder is updated with any related metadata and change history details.

Together, the Parquet files and the `_delta_log` folder (and its contents) describe a single, versioned Delta table. Because of this, Unstructured recommends the following usage best practices:
- In the source location, each set of source files that is to be considered as a unit for change management purposes should be controlled by a unique, dedicated Delta Tables in S3 destination connector. This connector should reference a unique, dedicated output folder within the bucket. Having multiple workflows refer to different sets of source files while all sharing the same Delta table could result in data loss or table corruption.
- Avoid directly modifying, adding, or deleting Parquet data files or the `_delta_log` folder within a Delta table's directory. Doing so can lead to data loss or table corruption.
- If you need to copy or move a Delta table to a different location, you must move or copy its entire set of Parquet files and its associated `_delta_log` folder (and its contents) together as a unit. Note that the copied or moved Delta table will no longer be controlled by the original Delta Tables in S3 destination connector.
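As an illustration of this structure, here is a minimal sketch of reading back a table that the connector has written, assuming the open source `deltalake` Python package and the example bucket path used earlier on this page. AWS credentials are assumed to come from the environment.

```python
# A minimal sketch, assuming `pip install deltalake pandas` and the
# example path s3://my-bucket/my-folder/ from earlier on this page.
from deltalake import DeltaTable

# The connector's output path: the Parquet files plus the _delta_log
# folder together describe one versioned Delta table.
table = DeltaTable(
    "s3://my-bucket/my-folder/",
    storage_options={"AWS_REGION": "us-east-1"},  # credentials come from the environment
)

print(table.version())  # Current table version, read from _delta_log.
print(table.files())    # The Parquet data files that make up this version.

df = table.to_pandas()  # Load the table's records into a pandas DataFrame.
print(df.head())
```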
Create the destination connector
To create the destination connector:
- On the sidebar, click Connectors.
- Click Destinations.
- Click New or Create Connector.
- Give the connector a unique Name.
- In the Provider area, click Delta Table.
- Click Continue.
- Follow the on-screen instructions to fill in the fields as described later on this page.
- Click Save and Test.
- Name (required): A unique name for this connector.
- AWS Region (required): The AWS Region identifier (for example, `us-east-1`) for the Amazon S3 bucket you want to store the Delta Table in.
- Bucket URI (required): The URI of the Amazon S3 bucket you want to store the Delta Table in. This typically takes the format `s3://my-bucket/my-folder`.
- AWS Access Key ID (required): The AWS access key ID for the AWS IAM principal (such as an IAM user) that has the appropriate access to the S3 bucket.
- AWS Secret Access Key (required): The AWS secret access key for the corresponding AWS access key ID.