
Connect S3 to your preprocessing pipeline, and use the Unstructured Ingest CLI or the Unstructured Ingest Python library to batch process all your documents and store structured outputs locally on your filesystem.

You will need:

The Amazon S3 prerequisites:

For more information about these prerequisites, see the following:

  • An AWS account. Create an AWS account.

  • An S3 bucket. Create an S3 bucket. Additional approaches are described in the how-to sections at the end of this page.

  • Anonymous (supported but not recommended) or authenticated access to the bucket.

  • For authenticated bucket read access, the authenticated AWS IAM user must have, at minimum, the s3:ListBucket and s3:GetObject permissions for that bucket. Learn how.

  • For bucket write access, anonymous access to the bucket must be disabled, and the authenticated AWS IAM user must have, at minimum, the s3:PutObject permission for that bucket. Learn how.

  • For authenticated access, an AWS access key and secret access key for the authenticated AWS IAM user in the account. Create an AWS access key and secret access key.

  • For authenticated access in untrusted environments or enhanced security scenarios, an AWS STS session token for temporary access, in addition to an AWS access key and secret access key. Create a session token.

  • If the target files are in the root of the bucket, the path to the bucket, formatted as protocol://bucket/ (for example, s3://my-bucket/). If the target files are in a folder, the path to the target folder in the S3 bucket, formatted as protocol://bucket/path/to/folder/ (for example, s3://my-bucket/my-folder/).

  • If the target files are in a folder, and authenticated bucket access is enabled, make sure the authenticated AWS IAM user has authenticated access to the folder as well. Enable authenticated folder access.
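If you manage permissions with an identity-based policy attached to the IAM user rather than a bucket policy, a minimal policy granting the read and write permissions listed above might look like the following sketch (the bucket name is a placeholder):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "MinimalS3ReadWrite",
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::<my-bucket-name>",
                "arn:aws:s3:::<my-bucket-name>/*"
            ]
        }
    ]
}
```

Note that s3:ListBucket applies to the bucket itself, while s3:GetObject and s3:PutObject apply to the objects in it, which is why both Resource ARNs are needed.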

The S3 connector dependencies:

CLI, Python
pip install "unstructured-ingest[s3]"

You might also need to install additional dependencies, depending on your needs. Learn more.

The following environment variables:

  • AWS_S3_URL - The path to the S3 bucket or folder, formatted as s3://my-bucket/ (if the files are in the bucket’s root) or s3://my-bucket/my-folder/.

  • If the bucket does not have anonymous access enabled, provide the AWS credentials:

    • AWS_ACCESS_KEY_ID - The AWS access key ID for the authenticated AWS IAM user, represented by --key (CLI) or key (Python).
    • AWS_SECRET_ACCESS_KEY - The corresponding AWS secret access key, represented by --secret (CLI) or secret (Python).
    • AWS_TOKEN - If required, the AWS STS session token for temporary access, represented by --token (CLI) or token (Python).
  • If the bucket has anonymous read access enabled, set --anonymous (CLI) or anonymous=True (Python) instead of providing credentials.

These environment variables:

  • UNSTRUCTURED_API_KEY - Your Unstructured API key value.
  • UNSTRUCTURED_API_URL - Your Unstructured API URL.
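For example, you can export all of these variables in your shell before running the ingest command. Every value below is a placeholder; substitute your own:

```shell
# Placeholder values only; replace each one before running.
export AWS_S3_URL="s3://my-bucket/my-folder/"
export AWS_ACCESS_KEY_ID="my-access-key-id"
export AWS_SECRET_ACCESS_KEY="my-secret-access-key"
# Only needed if your organization requires temporary (STS) credentials:
export AWS_TOKEN="my-session-token"

export UNSTRUCTURED_API_KEY="my-unstructured-api-key"
export UNSTRUCTURED_API_URL="my-unstructured-api-url"
```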

Now call the Unstructured Ingest CLI or the Unstructured Ingest Python library. You can use any supported destination connector.

This example uses the local destination connector.
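As a sketch, a CLI run that reads from the bucket and writes structured JSON to a local directory might look like the following. The --key and --secret options are described above; the remaining option names are assumptions about the connector's CLI, so confirm them with unstructured-ingest s3 --help for your installed version:

```shell
# Sketch only: confirm option names against your installed CLI version.
unstructured-ingest \
  s3 \
  --remote-url "$AWS_S3_URL" \
  --key "$AWS_ACCESS_KEY_ID" \
  --secret "$AWS_SECRET_ACCESS_KEY" \
  --api-key "$UNSTRUCTURED_API_KEY" \
  --partition-endpoint "$UNSTRUCTURED_API_URL" \
  --output-dir local-ingest-output
```

If the bucket allows anonymous reads, replace --key and --secret with --anonymous.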

Add an access policy to an existing bucket

To use the Amazon S3 console to add an access policy that allows all authenticated AWS IAM users in the corresponding AWS account to read and write to an existing S3 bucket, do the following.

Your organization might have stricter bucket policy requirements. Check with your AWS account administrator if you are unsure.
  1. Sign in to the AWS Management Console.

  2. Open the Amazon S3 Console.

  3. Browse to the existing bucket and open it.

  4. Click the Permissions tab.

  5. In the Bucket policy area, click Edit.

  6. In the Policy text box, enter the following JSON-formatted policy. To restrict the policy to a specific IAM user in the AWS account, change root in the Principal ARN to user/<my-username> (for example, arn:aws:iam::<my-account-id>:user/<my-username>).

    In this policy, replace the following:

    • Replace <my-account-id> with your AWS account ID.
    • Replace <my-bucket-name> in two places with the name of your bucket.
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowAuthenticatedUsersInAccountReadWrite",
                "Effect": "Allow",
                "Principal": {
                    "AWS": "arn:aws:iam::<my-account-id>:root"
                },
                "Action": [
                    "s3:GetObject",
                    "s3:PutObject",
                    "s3:ListBucket",
                    "s3:DeleteObject"
                ],
                "Resource": [
                    "arn:aws:s3:::<my-bucket-name>",
                    "arn:aws:s3:::<my-bucket-name>/*"
                ],
                "Condition": {
                    "StringEquals": {
                        "aws:PrincipalType": "IAMUser"
                    }
                }
            }
        ]
    }
    
  7. Click Save changes.

Create a bucket with the AWS CLI

To use the AWS CLI to create an Amazon S3 bucket that allows all authenticated AWS IAM users in the corresponding AWS account to read and write to the bucket, do the following.

Your organization might have stricter bucket policy requirements. Check with your AWS account administrator if you are unsure.
  1. Install the AWS CLI.

  2. Set up the AWS CLI.

  3. Save the following script to a file on your local machine, for example a file named create-s3-bucket.sh. To restrict the bucket policy to a specific IAM user in the AWS account, change root in the Principal ARN to user/<my-username>.

    In this script, replace the following:

    • Replace <my-account-id> with your AWS account ID.
    • Replace <my-unique-bucket-name> with the name of your bucket.
    • Replace <us-east-1> with your AWS Region.
    #!/bin/bash
    
    # Set variables for the AWS account ID, Amazon S3 bucket name, and AWS Region.
    ACCOUNT_ID="<my-account-id>"
    BUCKET_NAME="<my-unique-bucket-name>"
    REGION="<us-east-1>"
    
    # Temporary filename for the bucket policy.
    # Do not change this variable.
    POLICY_FILE="bucket_policy.json"
    
    # Create the bucket. Outside us-east-1, S3 requires an explicit
    # location constraint that matches the Region.
    if [ "$REGION" = "us-east-1" ]; then
        aws s3api create-bucket --bucket "$BUCKET_NAME" --region "$REGION"
    else
        aws s3api create-bucket --bucket "$BUCKET_NAME" --region "$REGION" \
            --create-bucket-configuration LocationConstraint="$REGION"
    fi
    
    # Wait for the bucket to exist.
    echo "Waiting for bucket '$BUCKET_NAME' to be fully created..."
    aws s3api wait bucket-exists --bucket $BUCKET_NAME
    
    # Check if the wait command was successful.
    if [ $? -eq 0 ]; then
        echo "The bucket '$BUCKET_NAME' has been fully created."
    else
        echo "Error: Timed out waiting for bucket '$BUCKET_NAME' to be created."
        exit 1
    fi
    
    # Remove the "block public policy" bucket access setting.
    aws s3api put-public-access-block \
        --bucket $BUCKET_NAME \
        --public-access-block-configuration \
        '{"BlockPublicPolicy": false, "IgnorePublicAcls": false, "BlockPublicAcls": false, "RestrictPublicBuckets": false}'
    
    # Check if the operation was successful.
    if [ $? -eq 0 ]; then
        echo "The block public policy access setting was removed from '$BUCKET_NAME'."
    else
        echo "Error: Failed to remove the block public policy access setting from '$BUCKET_NAME'."
        exit 1
    fi
    
    # Create the bucket policy.
    cat << EOF > $POLICY_FILE
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowAuthenticatedUsersInAccountReadWrite",
                "Effect": "Allow",
                "Principal": {
                    "AWS": "arn:aws:iam::$ACCOUNT_ID:root"
                },
                "Action": [
                    "s3:GetObject",
                    "s3:PutObject",
                    "s3:ListBucket",
                    "s3:DeleteObject"
                ],
                "Resource": [
                    "arn:aws:s3:::$BUCKET_NAME",
                    "arn:aws:s3:::$BUCKET_NAME/*"
                ],
                "Condition": {
                    "StringEquals": {
                        "aws:PrincipalType": "IAMUser"
                    }
                }
            }
        ]
    }
    EOF
    
    # Apply the bucket policy.
    aws s3api put-bucket-policy --bucket $BUCKET_NAME --policy file://$POLICY_FILE
    
    # Check if the policy application was successful.
    if [ $? -eq 0 ]; then
        echo "The bucket policy was applied to '$BUCKET_NAME'."
    else
        echo "Error: Failed to apply the bucket policy to '$BUCKET_NAME'."
        exit 1
    fi
    
    # Verify the applied policy.
    echo "Verifying the applied policy:"
    aws s3api get-bucket-policy --bucket $BUCKET_NAME --query Policy --output text
    
    # Remove the temporary bucket policy file.
    rm $POLICY_FILE
    
  4. Run the script, for example:

    bash create-s3-bucket.sh
    
  5. After the bucket is created, you can delete the script file, if you want.

Create a bucket with AWS CloudFormation

To use the AWS CloudFormation console to create an Amazon S3 bucket that allows all authenticated AWS IAM users in the corresponding AWS account to read and write to the bucket, do the following.

Your organization might have stricter bucket policy requirements. Check with your AWS account administrator if you are unsure.
  1. Save the following YAML to a file on your local machine, for example create-s3-bucket.yaml. To restrict the bucket policy to a specific IAM user in the AWS account, change root in the Principal ARN to user/<my-username>.

    AWSTemplateFormatVersion: '2010-09-09'
    Description: 'CloudFormation template to create an S3 bucket with specific permissions for account users.'
    
    Parameters:
      BucketName:
        Type: String
        Description: 'Name of the S3 bucket to create'
    
    Resources:
      MyS3Bucket:
        Type: 'AWS::S3::Bucket'
        Properties:
          BucketName: !Ref BucketName
          PublicAccessBlockConfiguration:
            BlockPublicAcls: true
            BlockPublicPolicy: false
            IgnorePublicAcls: true
            RestrictPublicBuckets: true
    
      BucketPolicy:
        Type: 'AWS::S3::BucketPolicy'
        Properties:
          Bucket: !Ref MyS3Bucket
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Sid: AllowAllAuthenticatedUsersInAccount
                Effect: Allow
                Principal:
                  AWS: !Sub 'arn:aws:iam::${AWS::AccountId}:root'
                Action:
                  - 's3:GetObject'
                  - 's3:PutObject'
                  - 's3:ListBucket'
                  - 's3:DeleteObject'
                Resource:
                  - !Sub 'arn:aws:s3:::${BucketName}'
                  - !Sub 'arn:aws:s3:::${BucketName}/*'
    
    Outputs:
      BucketName:
        Description: 'Name of the created S3 bucket'
        Value: !Ref MyS3Bucket
    
  2. Sign in to the AWS Management Console.

  3. Open the AWS CloudFormation Console.

  4. Click Create stack > With new resources (standard).

  5. On the Create stack page, with Choose an existing template already selected, select Upload a template file.

  6. Click Choose file, and browse to and select the YAML file from your local machine.

  7. Click Next.

  8. Enter a unique Stack name and BucketName.

  9. Click Next two times.

  10. Click Submit.

  11. Wait until the Status changes to CREATE_COMPLETE.

  12. After the bucket is created, you can delete the YAML file, if you want.
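If you prefer the AWS CLI to the CloudFormation console, the same template can be deployed directly. This sketch assumes the file name create-s3-bucket.yaml from step 1; the stack and bucket names are placeholders:

```shell
# Create or update the stack from the template; "deploy" waits for completion.
aws cloudformation deploy \
    --template-file create-s3-bucket.yaml \
    --stack-name my-s3-bucket-stack \
    --parameter-overrides BucketName=my-unique-bucket-name

# Confirm the final stack status (expect CREATE_COMPLETE or UPDATE_COMPLETE).
aws cloudformation describe-stacks \
    --stack-name my-s3-bucket-stack \
    --query "Stacks[0].StackStatus" --output text
```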