
# S3

<Note>
  If you're new to Unstructured, read this note first.

  Before you can create a destination connector, you must first sign in to your Unstructured account:

  * If you do not already have an Unstructured account, [sign up for free](https://unstructured.io/?modal=try-for-free). After you sign up, you are automatically signed in to your new Unstructured **Let's Go** account, at [https://platform.unstructured.io](https://platform.unstructured.io).
    To sign up for a **Business** account instead, [contact Unstructured Sales](https://unstructured.io/?modal=contact-sales), or [learn more](/ui/overview#how-am-i-billed%3F).
  * If you already have an Unstructured **Let's Go**, **Pay-As-You-Go**, or **Business SaaS** account and are not already signed in, sign in to your account at
    [https://platform.unstructured.io](https://platform.unstructured.io). For other types of **Business** accounts, see your Unstructured account administrator for sign-in instructions,
    or email Unstructured Support at [support@unstructured.io](mailto:support@unstructured.io).

  After you sign in, the [Unstructured user interface](/ui/overview) (UI) appears, which you use to create your destination connector.

  After you create the destination connector, add it along with a
  [source connector](/ui/sources/overview) to a [workflow](/ui/workflows). Then run the workflow as a
  [job](/ui/jobs). To learn how, try out the [hands-on UI quickstart](/ui/quickstart#remote-quickstart) or watch the 4-minute
  [video tutorial](https://www.youtube.com/watch?v=Wn2FfHT6H-o).

  You can also create destination connectors with the Unstructured API.
  [Learn how](/api-reference/workflow/destinations/overview).

  If you need help, email Unstructured Support at [support@unstructured.io](mailto:support@unstructured.io).

  You are now ready to start creating a destination connector! Keep reading to learn how.
</Note>

Send processed data from Unstructured to Amazon S3.

The requirements are as follows.

The following video shows how to fulfill the minimum set of Amazon S3 requirements:

<iframe width="560" height="315" src="https://www.youtube.com/embed/hyDHfhVVAhs" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen />

<Warning>
  If your S3 connectors or workflows fail after you add a new S3 bucket or update an existing one, the cause
  might be S3 propagation latency. You might need to wait up to a few hours before the related S3 connectors
  and workflows run without failures.

  Several Amazon S3 operations, such as propagating DNS records for new buckets, updating bucket access policies and
  permissions, and reusing bucket names after deletion, can take from a few minutes to several hours to fully
  propagate across the AWS network. Delays can be longer in AWS Regions that are not geographically close to your users or applications.
</Warning>

The preceding video does not show how to create an AWS account, enable anonymous access to the bucket (which is supported but
not recommended), or generate temporary AWS STS credentials if your organization's security policies require them.
For more information, see the following:

* An AWS account. [Create an AWS account](https://aws.amazon.com/free).

  <iframe width="560" height="315" src="https://www.youtube.com/embed/lIdh92JmWtg" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen />

* An S3 bucket. You can create an S3 bucket by using the S3 console, following the steps [in the S3 documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/creating-bucket.html) or in the following video.
  Additional approaches that use AWS CloudFormation or the AWS CLI are in the how-to sections later on this page.

  <iframe width="560" height="315" src="https://www.youtube.com/embed/e6w9LwZJFIA" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen />

* Anonymous access to the bucket is supported but not recommended. (Use authenticated bucket read or write access or both instead.) To enable anonymous access, follow the steps
  [in the S3 documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-bucket-policies.html#example-bucket-policies-anonymous-user) or in the following animation.

  <img src="https://mintcdn.com/unstructured-53/vKFDfUfAWhz_siB3/img/connectors/s3-anon-access.gif?s=9d25aa55a82ce91a577a10d28b57043b" alt="Enable anonymous bucket access" data-og-width="1386" width="1386" data-og-height="830" height="830" data-path="img/connectors/s3-anon-access.gif" data-optimize="true" data-opv="3" />

* For authenticated bucket read or write access or both, which is recommended over anonymous access, you should first
  [block all public access to the bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/configuring-block-public-access-bucket.html).

  After blocking all public access to the bucket, for read access, the authenticated AWS IAM user must have at minimum the permissions of `s3:ListBucket` and `s3:GetObject` for that bucket.
  For write access, the authenticated AWS IAM user must have at minimum the permission of `s3:PutObject` for that bucket. Permissions
  can be granted in one of the following ways:

  * Attach the appropriate bucket policy to the bucket. See the policy examples later on this page, and [learn about bucket policies for S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-policy-language-overview.html). These permissions remain in effect until the bucket policy is removed from the bucket.
    To apply a bucket policy by using the S3 console, follow the steps [in the S3 documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/add-bucket-policy.html) or in the following video.
    Additional approaches that use AWS CloudFormation or the AWS CLI are in the how-to sections later on this page.

    <iframe width="560" height="315" src="https://www.youtube.com/embed/y4SfQoJpipo" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen />

  * Have the IAM user temporarily assume an IAM role that contains the appropriate user policy. See the policy examples later on this page, and [learn about user policies for S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-policy-language-overview.html). These permissions remain in effect until the assumed role's time period expires.
    Learn how to use the IAM console to [create a policy](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create-console.html),
    [create a role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html) that references this policy, and then
    [have the user temporarily assume the role by using the AWS CLI or an AWS SDK](https://docs.aws.amazon.com/code-library/latest/ug/sts_example_sts_AssumeRole_section.html), which produces
    a temporary AWS access key (`AccessKeyId`), AWS secret access key (`SecretAccessKey`), and AWS STS session token (`SessionToken`).

    <Warning>
      AWS STS credentials (consisting of an AWS access key, AWS secret access key, and AWS STS session token) can be valid for as little as 15 minutes or as long as 36 hours, depending on how the credentials were initially
      generated. After the expiry time, the credentials are no longer valid and will no longer work with the corresponding S3 connector.
      You must get a new set of credentials to replace the expired ones by [having the user temporarily assume the role again by using the AWS CLI or an AWS SDK](https://docs.aws.amazon.com/code-library/latest/ug/sts_example_sts_AssumeRole_section.html), which produces
      a new, refreshed temporary AWS access key, AWS secret access key, and AWS STS session token.

      To overwrite the expired credentials with the new set:

      * For the Unstructured user interface (UI), manually update the **AWS Key**, **AWS Secret Key**, and **STS Token** fields in the Unstructured UI
        for the corresponding S3 [source](/ui/sources/s3) or [destination](/ui/destinations/s3) connector.
      * For the Unstructured API, use the Unstructured API's workflow operations to call the
        [update source](/api-reference/workflow/overview#update-a-source-connector) or
        [update destination](/api-reference/workflow/overview#update-a-destination-connector) connector operation
        for the corresponding S3 [source](/api-reference/workflow/sources/s3) or
        [destination](/api-reference/workflow/destinations/s3) connector.
      * For Unstructured Ingest, change the values of `--key`, `--secret`, and `--token` (CLI) or `key`, `secret`, and `token` (Python) in your command or code for the
        corresponding S3 [source](/open-source/ingestion/source-connectors/s3) or [destination](/open-source/ingestion/destination-connectors/s3) connector.
    </Warning>

* If you used a bucket policy instead of having the IAM user temporarily assume an IAM role for authenticated bucket access, you must provide a long-term AWS access key and secret access key for the authenticated AWS IAM user in the account.
  Create an AWS access key and secret access key by following the steps [in the IAM documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_CreateAccessKey) or in the following video.

  <iframe width="560" height="315" src="https://www.youtube.com/embed/MoFTaGJE65Q" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen />

* If the target files are in the root of the bucket, you will need the path to the bucket, formatted as `protocol://bucket/` (for example, `s3://my-bucket/`).
  If the target files are in a folder, you will need the path to the target folder in the S3 bucket, formatted as `protocol://bucket/path/to/folder/` (for example, `s3://my-bucket/my-folder/`).

* If the target files are in a folder, and authenticated bucket access is enabled, make sure the authenticated AWS IAM user has
  authenticated access to the folder as well. [See examples of authenticated folder access](https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-bucket-policies.html#example-bucket-policies-folders).
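As a sanity check on the minimum permissions described above, the following sketch builds a least-privilege bucket policy for a single IAM user. This is only an illustration: the user ARN and bucket name are hypothetical placeholders, and your organization's policies might require stricter conditions.

```python
import json

def build_minimal_policy(user_arn: str, bucket: str,
                         read: bool = True, write: bool = True) -> dict:
    """Build a least-privilege S3 bucket policy for one IAM principal.

    Read access needs s3:ListBucket (on the bucket) plus s3:GetObject
    (on the objects); write access needs s3:PutObject (on the objects).
    """
    statements = []
    if read:
        statements.append({
            "Sid": "Read",
            "Effect": "Allow",
            "Principal": {"AWS": user_arn},
            "Action": ["s3:ListBucket", "s3:GetObject"],
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
        })
    if write:
        statements.append({
            "Sid": "Write",
            "Effect": "Allow",
            "Principal": {"AWS": user_arn},
            "Action": ["s3:PutObject"],
            "Resource": [f"arn:aws:s3:::{bucket}/*"],
        })
    return {"Version": "2012-10-17", "Statement": statements}

# Example with a hypothetical user and bucket:
policy = build_minimal_policy("arn:aws:iam::123456789012:user/example-user", "my-bucket")
print(json.dumps(policy, indent=2))
```

The resulting JSON can be pasted into the bucket's policy editor in the S3 console, or attached as a user policy with the `Principal` element removed.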

## Add an access policy to an existing bucket

To use the Amazon S3 console to add an access policy that allows all authenticated AWS IAM users in the
corresponding AWS account to read and write to an existing S3 bucket, do the following.

<Info>Your organization might have stricter bucket policy requirements. Check with your AWS account
administrator if you are unsure.</Info>

1. Sign in to the [AWS Management Console](https://console.aws.amazon.com/).

2. Open the [Amazon S3 Console](https://console.aws.amazon.com/s3/home).

3. Browse to the existing bucket and open it.

4. Click the **Permissions** tab.

5. In the **Bucket policy** area, click **Edit**.

6. In the **Policy** text area, copy the following JSON-formatted policy.
   To change the following policy to restrict it to a specific user in the AWS account, change `root` to that
   specific username.

   In this policy, replace the following:

   * Replace `<my-account-id>` with your AWS account ID.
   * Replace `<my-bucket-name>` in two places with the name of your bucket.

   ```json  theme={null}
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Sid": "AllowAuthenticatedUsersInAccountReadWrite",
               "Effect": "Allow",
               "Principal": {
                   "AWS": "arn:aws:iam::<my-account-id>:root"
               },
               "Action": [
                   "s3:GetObject",
                   "s3:PutObject",
                   "s3:ListBucket",
                   "s3:DeleteObject"
               ],
               "Resource": [
                   "arn:aws:s3:::<my-bucket-name>",
                   "arn:aws:s3:::<my-bucket-name>/*"
               ],
               "Condition": {
                   "StringEquals": {
                       "aws:PrincipalType": "IAMUser"
                   }
               }
           }
       ]
   }
   ```

7. Click **Save changes**.
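If you prefer to apply the same policy from code rather than the console, a boto3 sketch might look like the following. The account ID and bucket name are placeholders, and the AWS call at the bottom is guarded behind a flag so you can inspect the generated policy before anything touches your account.

```python
import json

def account_readwrite_policy(account_id: str, bucket: str) -> dict:
    """The same policy as in step 6: read/write for all IAM users in the account."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowAuthenticatedUsersInAccountReadWrite",
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{account_id}:root"},
            "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket", "s3:DeleteObject"],
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
            "Condition": {"StringEquals": {"aws:PrincipalType": "IAMUser"}},
        }],
    }

APPLY = False  # Set to True to actually apply the policy to your bucket.

if APPLY:
    import boto3  # Requires configured AWS credentials.

    bucket = "my-bucket-name"  # Placeholder.
    s3 = boto3.client("s3")
    s3.put_bucket_policy(
        Bucket=bucket,
        Policy=json.dumps(account_readwrite_policy("123456789012", bucket)),
    )
```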

## Create a bucket with AWS CloudFormation

To use the AWS CloudFormation console to create an Amazon S3 bucket that allows all authenticated AWS IAM users
in the corresponding AWS account to read and write to the bucket, do the following.

<Info>Your organization might have stricter bucket policy requirements. Check with your AWS account
administrator if you are unsure.</Info>

1. Save the following YAML to a file on your local machine, for example `create-s3-bucket.yaml`. To change
   the following bucket policy to restrict it to a specific user in the AWS account, change `root` to that
   specific username.

   ```yaml  theme={null}
   AWSTemplateFormatVersion: '2010-09-09'
   Description: 'CloudFormation template to create an S3 bucket with specific permissions for account users.'

   Parameters:
     BucketName:
       Type: String
       Description: 'Name of the S3 bucket to create'

   Resources:
     MyS3Bucket:
       Type: 'AWS::S3::Bucket'
       Properties:
         BucketName: !Ref BucketName
         PublicAccessBlockConfiguration:
           BlockPublicAcls: true
           BlockPublicPolicy: false
           IgnorePublicAcls: true
           RestrictPublicBuckets: true

     BucketPolicy:
       Type: 'AWS::S3::BucketPolicy'
       Properties:
         Bucket: !Ref MyS3Bucket
         PolicyDocument:
           Version: '2012-10-17'
           Statement:
             - Sid: AllowAllAuthenticatedUsersInAccount
               Effect: Allow
               Principal:
                 AWS: !Sub 'arn:aws:iam::${AWS::AccountId}:root'
               Action:
                 - 's3:GetObject'
                 - 's3:PutObject'
                 - 's3:ListBucket'
                 - 's3:DeleteObject'
               Resource:
                 - !Sub 'arn:aws:s3:::${BucketName}'
                 - !Sub 'arn:aws:s3:::${BucketName}/*'

   Outputs:
     BucketName:
       Description: 'Name of the created S3 bucket'
       Value: !Ref MyS3Bucket
   ```

2. Sign in to the [AWS Management Console](https://console.aws.amazon.com/).

3. Open the [AWS CloudFormation Console](https://console.aws.amazon.com/cloudformation/home).

4. Click **Create stack > With new resources (standard)**.

5. On the **Create stack** page, with **Choose an existing template** already selected, select **Upload a template file**.

6. Click **Choose file**, and browse to and select the YAML file from your local machine.

7. Click **Next**.

8. Enter a unique **Stack name** and **BucketName**.

9. Click **Next** two times.

10. Click **Submit**.

11. Wait until the **Status** changes to **CREATE\_COMPLETE**.

12. After the bucket is created, you can delete the YAML file, if you want.
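If you would rather launch the same template from code than from the console, boto3's CloudFormation client can create the stack. This is a sketch only: the stack name and bucket name are placeholders, the template file is the one you saved in step 1, and the AWS calls are guarded behind a flag so nothing runs on import.

```python
def stack_parameters(bucket_name: str) -> list[dict]:
    """Build the Parameters list for the template's BucketName parameter."""
    return [{"ParameterKey": "BucketName", "ParameterValue": bucket_name}]

DEPLOY = False  # Set to True to actually create the stack.

if DEPLOY:
    import boto3  # Requires configured AWS credentials.

    cfn = boto3.client("cloudformation")
    with open("create-s3-bucket.yaml") as f:  # The template from step 1.
        template_body = f.read()

    cfn.create_stack(
        StackName="my-s3-bucket-stack",  # Placeholder.
        TemplateBody=template_body,
        Parameters=stack_parameters("my-unique-bucket-name"),  # Placeholder.
    )
    # Block until the stack reaches CREATE_COMPLETE.
    cfn.get_waiter("stack_create_complete").wait(StackName="my-s3-bucket-stack")
```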

## Create a bucket with the AWS CLI

To use the AWS CLI to create an Amazon S3 bucket that allows all authenticated AWS IAM users in the
corresponding AWS account to read and write to the bucket, do the following.

<Info>Your organization might have stricter bucket policy requirements. Check with your AWS account
administrator if you are unsure.</Info>

1. [Install the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

2. [Set up the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-quickstart.html).

3. Copy the following script to a file on your local machine, for example a file named `create-s3-bucket.sh`.
   To change the following bucket policy to restrict it to a specific user in the AWS account, change `root` to that
   specific username.

   In this script, replace the following:

   * Replace `<my-account-id>` with your AWS account ID.
   * Replace `<my-unique-bucket-name>` with the name of your bucket.
   * Replace `<us-east-1>` with your AWS Region.

   ```bash  theme={null}
   #!/bin/bash

   # Set variables for the AWS account ID, Amazon S3 bucket name, and AWS Region.
   ACCOUNT_ID="<my-account-id>"
   BUCKET_NAME="<my-unique-bucket-name>"
   REGION="<us-east-1>"

   # Temporary filename for the bucket policy.
   # Do not change this variable.
   POLICY_FILE="bucket_policy.json"

   # Create the bucket. Regions other than us-east-1 require an explicit
   # location constraint.
   if [ "$REGION" = "us-east-1" ]; then
       aws s3api create-bucket --bucket "$BUCKET_NAME" --region "$REGION"
   else
       aws s3api create-bucket --bucket "$BUCKET_NAME" --region "$REGION" \
           --create-bucket-configuration LocationConstraint="$REGION"
   fi

   # Wait for the bucket to exist.
   echo "Waiting for bucket '$BUCKET_NAME' to be fully created..."
   aws s3api wait bucket-exists --bucket $BUCKET_NAME

   # Check if the wait command was successful.
   if [ $? -eq 0 ]; then
       echo "The bucket '$BUCKET_NAME' has been fully created."
   else
       echo "Error: Timed out waiting for bucket '$BUCKET_NAME' to be created."
       exit 1
   fi

   # Allow bucket policies to take effect while keeping the other
   # public access protections in place.
   aws s3api put-public-access-block \
       --bucket "$BUCKET_NAME" \
       --public-access-block-configuration \
       '{"BlockPublicPolicy": false, "IgnorePublicAcls": true, "BlockPublicAcls": true, "RestrictPublicBuckets": true}'

   # Check if the operation was successful.
   if [ $? -eq 0 ]; then
       echo "The block public policy access setting was removed from '$BUCKET_NAME'."
   else
       echo "Error: Failed to remove the block public policy access setting from '$BUCKET_NAME'."
       exit 1
   fi

   # Create the bucket policy.
   cat << EOF > $POLICY_FILE
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Sid": "AllowAuthenticatedUsersInAccountReadWrite",
               "Effect": "Allow",
               "Principal": {
                   "AWS": "arn:aws:iam::$ACCOUNT_ID:root"
               },
               "Action": [
                   "s3:GetObject",
                   "s3:PutObject",
                   "s3:ListBucket",
                   "s3:DeleteObject"
               ],
               "Resource": [
                   "arn:aws:s3:::$BUCKET_NAME",
                   "arn:aws:s3:::$BUCKET_NAME/*"
               ],
               "Condition": {
                   "StringEquals": {
                       "aws:PrincipalType": "IAMUser"
                   }
               }
           }
       ]
   }
   EOF

   # Apply the bucket policy.
   aws s3api put-bucket-policy --bucket $BUCKET_NAME --policy file://$POLICY_FILE

   # Check if the policy application was successful.
   if [ $? -eq 0 ]; then
       echo "The bucket policy was applied to '$BUCKET_NAME'."
   else
       echo "Error: Failed to apply the bucket policy to '$BUCKET_NAME'."
       exit 1
   fi

   # Verify the applied policy.
   echo "Verifying the applied policy:"
   aws s3api get-bucket-policy --bucket $BUCKET_NAME --query Policy --output text

   # Remove the temporary bucket policy file.
   rm $POLICY_FILE
   ```

4. Run the script, for example:

   ```bash  theme={null}
   sh create-s3-bucket.sh
   ```

5. After the bucket is created, you can delete the script file, if you want.
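To confirm from code that the policy the script applied grants the actions you expect, you can fetch and inspect it. The parsing helper below is generic; the boto3 call is guarded behind a flag, and the bucket name is a placeholder.

```python
import json

def allowed_actions(policy_json: str) -> set[str]:
    """Collect the actions granted by a bucket policy's Allow statements."""
    policy = json.loads(policy_json)
    actions: set[str] = set()
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        action = stmt.get("Action", [])
        # Action may be a single string or a list of strings.
        actions.update([action] if isinstance(action, str) else action)
    return actions

VERIFY = False  # Set to True to actually fetch the policy from AWS.

if VERIFY:
    import boto3  # Requires configured AWS credentials.

    s3 = boto3.client("s3")
    resp = s3.get_bucket_policy(Bucket="my-unique-bucket-name")  # Placeholder.
    granted = allowed_actions(resp["Policy"])
    missing = {"s3:GetObject", "s3:PutObject", "s3:ListBucket"} - granted
    print("Missing actions:", missing or "none")
```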

## FIPS and ambient credentials

<Note>
  The following information applies to [Unstructured Business dedicated instance and in-VPC](/business/overview) deployments only.
</Note>

Unstructured Business accounts support the Federal Information Processing Standard (FIPS) for
Amazon S3. [Learn more about AWS support for FIPS](https://aws.amazon.com/compliance/fips/). Specifically,
when creating an S3 connector with the [Unstructured user interface (UI)](/ui/overview) or
[Unstructured API's workflow operations](/api-reference/workflow/overview), Unstructured Business dedicated instance and in-VPC deployments
support the use of `fips`-based
[S3 endpoints](https://docs.aws.amazon.com/general/latest/gr/s3.html) and, for authenticated access to
S3 FIPS buckets, S3 ambient credentials.

To use the Unstructured user interface (UI) to set up an S3 source or destination connector
to use an S3 FIPS bucket and S3 ambient credentials, do the following:

1. Create an environment variable named `ALLOW_AMBIENT_CREDENTIALS_S3`, and set its value to `true`.
2. When creating the connector, for the S3 connector's **Bucket URI** field, specify the path to the S3 FIPS bucket, formatted as
   `https://<bucket-name>.<endpoint>`, for example
   `https://my-bucket-name.s3-fips.us-gov-east-1.amazonaws.com`.
   If the target files are in a folder, specify the path to the target folder in the S3 FIPS bucket instead,
   formatted as `https://<bucket-name>.<endpoint>/path/to/folder` (for example,
   `https://my-bucket-name.s3-fips.us-gov-east-1.amazonaws.com/my-folder/`).
3. For the **Authentication Method** field, select **Ambient Credentials**.
4. Check the box labeled **Use Ambient Credentials**.
5. Save and test the connector.

To use the Unstructured API's workflow operations to set up an S3 source or
destination connector to use an S3 FIPS bucket and S3 ambient credentials, do the following:

1. Create an environment variable named `ALLOW_AMBIENT_CREDENTIALS_S3`, and set its value to `true`.

2. When creating the connector,
   for the `config` parameter's `remote_url` field, specify the path to the S3 FIPS bucket, formatted as
   `https://<bucket-name>.<endpoint>`, for example
   `https://my-bucket-name.s3-fips.us-gov-east-1.amazonaws.com`.
   If the target files are in a folder, specify the path to the target folder in the S3 FIPS bucket instead,
   formatted as `https://<bucket-name>.<endpoint>/path/to/folder` (for example,
   `https://my-bucket-name.s3-fips.us-gov-east-1.amazonaws.com/my-folder/`).

3. For the `config` parameter, add an `ambient_credentials` field, and set its value to `true`. For example:

   <CodeGroup>
     ```python Python SDK theme={null}
     # ...
     config={
         "...": "...",
         "remote_url": "https://my-bucket-name.s3-fips.us-gov-east-1.amazonaws.com/my-folder/",
         "ambient_credentials": True
     }
     # ...
     ```

     ```bash curl theme={null}
     # ...
     {
         "...": "...", 
         "config": {
             "...": "...",
             "remote_url": "https://my-bucket-name.s3-fips.us-gov-east-1.amazonaws.com/my-folder/",
             "ambient_credentials": "true"
         }
     }
     # ...
     ```
   </CodeGroup>

4. Run your code to create the connector.

## Create the destination connector

To create the destination connector:

1. On the sidebar, click **Connectors**.
2. Click **Destinations**.
3. Click **New** or **Create Connector**.
4. Give the connector a unique **Name**.
5. In the **Provider** area, click **Amazon S3**.
6. Click **Continue**.
7. Follow the on-screen instructions to fill in the fields as described later on this page.
8. Click **Save and Test**.

Fill in the following fields:

* **Name** (*required*): A unique name for this connector.
* **Bucket URI** (*required*): The URI for the bucket or folder, formatted as `s3://my-bucket/` (if the files are in the bucket's root) or `s3://my-bucket/my-folder/`.
* **Recursive** (source connector only): Check this box to access subfolders within the bucket.
* **AWS Key**: For secret or token authentication, the AWS access key ID for the authenticated AWS IAM user.
* **AWS Secret Key**: For secret or token authentication, the AWS secret access key corresponding to the preceding AWS access key ID.
* **STS Token**: For token authentication, the AWS STS session token for temporary access.
* **Custom URL**: A custom URL, if connecting to a non-AWS S3 bucket.
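Before pasting a value into **Bucket URI**, it can help to check that it matches the expected `s3://bucket/` or `s3://bucket/path/to/folder/` shape. A small, hypothetical validator:

```python
def parse_bucket_uri(uri: str) -> tuple[str, str]:
    """Split an S3 URI into (bucket, folder_path); folder_path is '' for the root."""
    if not uri.startswith("s3://"):
        raise ValueError("Bucket URI must start with s3://")
    rest = uri[len("s3://"):].strip("/")
    if not rest:
        raise ValueError("Bucket URI must include a bucket name")
    bucket, _, folder = rest.partition("/")
    return bucket, folder

print(parse_bucket_uri("s3://my-bucket/"))            # ('my-bucket', '')
print(parse_bucket_uri("s3://my-bucket/my-folder/"))  # ('my-bucket', 'my-folder')
```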
