If you’re new to Unstructured, read this note first. Before you can create a source connector, you must sign in to your Unstructured account:
- If you do not already have an Unstructured account, sign up for free. After you sign up, you are automatically signed in to your new Unstructured Starter account, at https://platform.unstructured.io. To sign up for a Team or Enterprise account instead, contact Unstructured Sales, or learn more.
- If you already have an Unstructured Starter or Team account and are not already signed in, sign in to your account at https://platform.unstructured.io. For an Enterprise account, see your Unstructured account administrator for instructions, or email Unstructured Support at support@unstructured.io.
- After you sign in to your Unstructured Starter account, click API Keys on the sidebar.

  For a Team or Enterprise account, before you click API Keys, make sure you have selected the organizational workspace you want to create an API key for. Each API key works with one and only one organizational workspace. Learn more.
- Click Generate API Key.
- Follow the on-screen instructions to finish generating the key.
- Click the Copy icon next to your new key to add the key to your system’s clipboard. If you lose this key, simply return and click the Copy icon again.
If you are experiencing S3 connector or workflow failures after adding a new S3 bucket or updating an existing S3 bucket, it could be due to S3 latency issues. You might need to wait up to a few hours before any related S3 connectors and workflows begin working without failures.

Various Amazon S3 operations, such as propagating DNS records for new buckets, updating bucket access policies and permissions, reusing bucket names after deletion, and using AWS Regions that are not geographically close to your users or applications, can take a few minutes to hours to fully propagate across the Amazon network.
- An AWS account. Create an AWS account.
- An S3 bucket. You can create an S3 bucket by using the S3 console, following the steps in the S3 documentation or in the following video. Additional approaches that use AWS CloudFormation or the AWS CLI are in the how-to sections later on this page.
- Anonymous access to the bucket is supported but not recommended. (Use authenticated bucket read or write access, or both, instead.) To enable anonymous access, follow the steps in the S3 documentation or in the following animation.
- For authenticated bucket read or write access, or both, which is recommended over anonymous access, you should first block all public access to the bucket.

  After blocking all public access to the bucket, for read access, the authenticated AWS IAM user must have at minimum the `s3:ListBucket` and `s3:GetObject` permissions for that bucket. For write access, the authenticated AWS IAM user must have at minimum the `s3:PutObject` permission for that bucket. Permissions can be granted in one of the following ways:

  - Attach the appropriate bucket policy to the bucket. See the policy examples later on this page, and learn about bucket policies for S3. These permissions remain in effect until the bucket policy is removed from the bucket. To apply a bucket policy by using the S3 console, follow the steps in the S3 documentation or in the following video. Additional approaches that use AWS CloudFormation or the AWS CLI are in the how-to sections later on this page.
  - Have the IAM user temporarily assume an IAM role that contains the appropriate user policy. See the policy examples later on this page, and learn about user policies for S3. These permissions remain in effect until the assumed role’s time period expires. Learn how to use the IAM console to create a policy, create a role that references this policy, and then have the user temporarily assume the role by using the AWS CLI or an AWS SDK, which produces a temporary AWS access key (`AccessKeyId`), AWS secret access key (`SecretAccessKey`), and AWS STS session token (`SessionToken`).

    AWS STS credentials (consisting of an AWS access key, AWS secret access key, and AWS STS session token) can be valid for as little as 15 minutes or as long as 36 hours, depending on how the credentials were initially generated. After the expiry time, the credentials are no longer valid and no longer work with the corresponding S3 connector. You must replace the expired credentials by having the user temporarily assume the role again by using the AWS CLI or an AWS SDK, which produces a new, refreshed temporary AWS access key, AWS secret access key, and AWS STS session token.

    To overwrite the expired credentials with the new set:

    - For the Unstructured user interface (UI), manually update the AWS Key, AWS Secret Key, and STS Token fields in the Unstructured UI for the corresponding S3 source or destination connector.
    - For the Unstructured API, use the Unstructured Workflow Endpoint to call the update source or update destination connector operation for the corresponding S3 source or destination connector.
    - For Unstructured Ingest, change the values of `--key`, `--secret`, and `--token` (CLI) or `key`, `secret`, and `token` (Python) in your command or code for the corresponding S3 source or destination connector.
- If you used a bucket policy instead of having the IAM user temporarily assume an IAM role for authenticated bucket access, you must provide a long-term AWS access key and secret access key for the authenticated AWS IAM user in the account. Create an AWS access key and secret access key by following the steps in the IAM documentation or in the following video.
- If the target files are in the root of the bucket, you will need the path to the bucket, formatted as `protocol://bucket/` (for example, `s3://my-bucket/`). If the target files are in a folder, you will need the path to the target folder in the S3 bucket, formatted as `protocol://bucket/path/to/folder/` (for example, `s3://my-bucket/my-folder/`).
- If the target files are in a folder, and authenticated bucket access is enabled, make sure the authenticated AWS IAM user has authenticated access to the folder as well. See examples of authenticated folder access.
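The temporary role assumption described earlier can be sketched with the AWS CLI. The role ARN and session name below are illustrative placeholders:

```shell
# Assume the role. The response's Credentials object contains
# AccessKeyId, SecretAccessKey, and SessionToken, which map to the
# connector's AWS key, AWS secret key, and STS token values.
aws sts assume-role \
  --role-arn "arn:aws:iam::<my-account-id>:role/<my-role-name>" \
  --role-session-name "unstructured-s3-session" \
  --duration-seconds 3600 \
  --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
  --output text
```

When the credentials expire, rerun the same command and update the connector with the new values.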
Add an access policy to an existing bucket
To use the Amazon S3 console to add an access policy that allows all authenticated AWS IAM users in the corresponding AWS account to read and write to an existing S3 bucket, do the following.

Your organization might have stricter bucket policy requirements. Check with your AWS account administrator if you are unsure.
- Sign in to the AWS Management Console.
- Open the Amazon S3 Console.
- Browse to the existing bucket and open it.
- Click the Permissions tab.
- In the Bucket policy area, click Edit.
- In the Policy text area, copy the following JSON-formatted policy. To change the following policy to restrict it to a specific user in the AWS account, change `root` to that specific username. In this policy, replace the following:
  - Replace `<my-account-id>` with your AWS account ID.
  - Replace `<my-bucket-name>` in two places with the name of your bucket.
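The policy referenced in the steps above might look like the following sketch, built from the read and write permissions named earlier on this page (your organization’s policy requirements may differ):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::<my-account-id>:root" },
      "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject"],
      "Resource": [
        "arn:aws:s3:::<my-bucket-name>",
        "arn:aws:s3:::<my-bucket-name>/*"
      ]
    }
  ]
}
```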
- Click Save changes.
Create a bucket with AWS CloudFormation
To use the AWS CloudFormation console to create an Amazon S3 bucket that allows all authenticated AWS IAM users in the corresponding AWS account to read and write to the bucket, do the following.

Your organization might have stricter bucket policy requirements. Check with your AWS account administrator if you are unsure.
- Save the following YAML to a file on your local machine, for example `create-s3-bucket.yaml`. To change the following bucket policy to restrict it to a specific user in the AWS account, change `root` to that specific username.
- Sign in to the AWS Management Console.
- Open the AWS CloudFormation Console.
- Click Create stack > With new resources (standard).
- On the Create stack page, with Choose an existing template already selected, select Upload a template file.
- Click Choose file, and browse to and select the YAML file from your local machine.
- Click Next.
- Enter a unique Stack name and BucketName.
- Click Next two times.
- Click Submit.
- Wait until the Status changes to CREATE_COMPLETE.
- After the bucket is created, you can delete the YAML file, if you want.
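A minimal template matching the steps above might look like the following sketch (not Unstructured’s official template; the `BucketName` parameter corresponds to the BucketName value you enter when creating the stack):

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: S3 bucket with read/write access for authenticated IAM users in the account.
Parameters:
  BucketName:
    Type: String
Resources:
  MyBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Ref BucketName
  MyBucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Ref MyBucket
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            # Change "root" to a specific username to restrict access.
            Principal:
              AWS: !Sub "arn:aws:iam::${AWS::AccountId}:root"
            Action:
              - s3:ListBucket
              - s3:GetObject
              - s3:PutObject
            Resource:
              - !Sub "arn:aws:s3:::${BucketName}"
              - !Sub "arn:aws:s3:::${BucketName}/*"
```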
Create a bucket with the AWS CLI
To use the AWS CLI to create an Amazon S3 bucket that allows all authenticated AWS IAM users in the corresponding AWS account to read and write to the bucket, do the following.

Your organization might have stricter bucket policy requirements. Check with your AWS account administrator if you are unsure.
- Install the AWS CLI.
- Set up the AWS CLI.
- Copy the following script to a file on your local machine, for example a file named `create-s3-bucket.sh`. To change the following bucket policy to restrict it to a specific user in the AWS account, change `root` to that specific username. In this script, replace the following:
  - Replace `<my-account-id>` with your AWS account ID.
  - Replace `<my-unique-bucket-name>` with the name of your bucket.
  - Replace `<us-east-1>` with your AWS Region.
- Run the script, for example:

  `sh create-s3-bucket.sh`
- After the bucket is created, you can delete the script file, if you want.
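The script referenced above might look like the following sketch; it creates the bucket and then attaches a read/write policy consistent with the permissions described earlier on this page (the exact contents may differ from the original):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Replace these placeholder values before running.
ACCOUNT_ID="<my-account-id>"
BUCKET_NAME="<my-unique-bucket-name>"
REGION="<us-east-1>"

# Create the bucket. (For the us-east-1 Region, omit the
# --create-bucket-configuration option.)
aws s3api create-bucket \
  --bucket "${BUCKET_NAME}" \
  --region "${REGION}" \
  --create-bucket-configuration "LocationConstraint=${REGION}"

# Attach a bucket policy that lets all authenticated IAM users in the
# account list the bucket and read and write its objects. Change "root"
# to a specific username to restrict access to that user.
aws s3api put-bucket-policy \
  --bucket "${BUCKET_NAME}" \
  --policy "{
    \"Version\": \"2012-10-17\",
    \"Statement\": [{
      \"Effect\": \"Allow\",
      \"Principal\": { \"AWS\": \"arn:aws:iam::${ACCOUNT_ID}:root\" },
      \"Action\": [\"s3:ListBucket\", \"s3:GetObject\", \"s3:PutObject\"],
      \"Resource\": [
        \"arn:aws:s3:::${BUCKET_NAME}\",
        \"arn:aws:s3:::${BUCKET_NAME}/*\"
      ]
    }]
  }"
```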
Work with user-defined metadata
User-defined metadata in S3 is metadata that you can choose to set at the time that you upload a file to an S3 bucket. User-defined metadata is specified in S3 as a set of name-value pairs. Each name-value pair begins with `x-amz-meta-` and is followed by a unique name.
For more information about how to add or replace user-defined metadata for a file in S3, see the following:
Unstructured outputs any user-defined metadata that it finds for a file into the `metadata.data_source.record_locator.metadata` field of the document elements’ output for the corresponding file. For example, if Unstructured processes a file with the user-defined metadata `x-amz-meta-mymetadata` name set to the value `myvalue`, Unstructured outputs the following into the `metadata.data_source.record_locator.metadata` field of the document elements’ output for the corresponding file:
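A sketch of the expected field contents (the AWS SDKs typically return user-defined metadata with the `x-amz-meta-` prefix stripped, so the field would likely contain):

```json
{
  "mymetadata": "myvalue"
}
```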
FIPS and ambient credentials
The following information applies to Unstructured Enterprise accounts only.
Unstructured supports `fips`-based S3 endpoints and, for authenticated access to S3 FIPS buckets, S3 ambient credentials.

To use the Unstructured user interface (UI) to set up an S3 source or destination connector to use an S3 FIPS bucket and S3 ambient credentials, do the following:
- Create an environment variable named `ALLOW_AMBIENT_CREDENTIALS_S3`, and set its value to `true`.
- When creating the connector, for the S3 connector’s Bucket URI field, specify the path to the S3 FIPS bucket, formatted as `https://<bucket-name>.<endpoint>`, for example `https://my-bucket-name.s3-fips.us-gov-east-1.amazonaws.com`. If the target files are in a folder, specify the path to the target folder in the S3 FIPS bucket instead, formatted as `https://<bucket-name>.<endpoint>/path/to/folder` (for example, `https://my-bucket-name.s3-fips.us-gov-east-1.amazonaws.com/my-folder/`).
- For the Authentication Method field, select Ambient Credentials.
- Check the box labeled Use Ambient Credentials.
- Save and test the connector.
- Create an environment variable named `ALLOW_AMBIENT_CREDENTIALS_S3`, and set its value to `true`.
- When creating the connector, for the `config` parameter’s `remote_url` field, specify the path to the S3 FIPS bucket, formatted as `https://<bucket-name>.<endpoint>`, for example `https://my-bucket-name.s3-fips.us-gov-east-1.amazonaws.com`. If the target files are in a folder, specify the path to the target folder in the S3 FIPS bucket instead, formatted as `https://<bucket-name>.<endpoint>/path/to/folder` (for example, `https://my-bucket-name.s3-fips.us-gov-east-1.amazonaws.com/my-folder/`).
- For the `config` parameter, add an `ambient_credentials` field, and set its value to `true`.
- Run your code to create the connector.
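A sketch of what the `config` might contain, using the field names from the steps above (the exact shape depends on the SDK call you use):

```json
{
  "remote_url": "https://my-bucket-name.s3-fips.us-gov-east-1.amazonaws.com/my-folder/",
  "ambient_credentials": true
}
```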
Create the source connector
To create an S3 source connector, see the following examples.

- `<name>` (required) - A unique name for this connector.
- For AWS access key ID with AWS secret access key authentication, or for AWS STS token authentication:
  - `<key>` - The AWS access key ID for the authenticated AWS IAM user (required).
  - `<secret>` - The AWS secret access key corresponding to the preceding AWS access key ID (required).
- For AWS STS token authentication:
  - `<token>` - The AWS STS session token for temporary access (required).
- `<endpoint-url>` - A custom URL, if connecting to a non-AWS S3 bucket.
- `<remote-url>` (required) - The S3 URI to the bucket or folder, formatted as `s3://my-bucket/` (if the files are in the bucket’s root) or `s3://my-bucket/my-folder/`.
- `recursive` (source connector only) - Set to `true` to access subfolders within the bucket. The default is `false` if not otherwise specified.
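Putting these settings together, a filled-in example might look like the following sketch. The field names follow the `key`, `secret`, and `token` names shown earlier for Unstructured Ingest (Python); adjust them to match the interface you are using:

```json
{
  "name": "my-s3-source",
  "remote_url": "s3://my-bucket/my-folder/",
  "key": "<key>",
  "secret": "<secret>",
  "token": "<token>",
  "recursive": true
}
```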