If you’re new to Unstructured, read this note first. Before you can create a destination connector, you must first sign in to your Unstructured account. After you sign in, the Unstructured user interface (UI) appears, which you use to get your Unstructured API key:
  1. After you sign in to your Unstructured Starter account, click API Keys on the sidebar.
    For a Team or Enterprise account, before you click API Keys, make sure you have selected the organizational workspace you want to create an API key for. Each API key works with one and only one organizational workspace. Learn more.
  2. Click Generate API Key.
  3. Follow the on-screen instructions to finish generating the key.
  4. Click the Copy icon next to your new key to add the key to your system’s clipboard. If you lose this key, simply return and click the Copy icon again.
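The Python examples in this section read your key from the UNSTRUCTURED_API_KEY environment variable. As a minimal sketch (the variable name simply matches what the example later in this section expects), you can confirm the key is available before calling the API:
import os

# Assumes you have already exported the copied key in your shell, for example:
#   export UNSTRUCTURED_API_KEY="<your-api-key>"
api_key = os.getenv("UNSTRUCTURED_API_KEY")
if not api_key:
    raise RuntimeError("Set the UNSTRUCTURED_API_KEY environment variable to your Unstructured API key.")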
After you create the destination connector, add it along with a source connector to a workflow. Then run the workflow as a job. To learn how, try out the hands-on Workflow Endpoint quickstart, go directly to the quickstart notebook, or watch the two 4-minute video tutorials for the Unstructured Python SDK. You can also create destination connectors with the Unstructured user interface (UI). Learn how. If you need help, email Unstructured Support at support@unstructured.io. You are now ready to start creating a destination connector! Keep reading to learn how.
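As a rough sketch of that flow with the Unstructured Python SDK, the following assumes the Workflow Endpoint's create_workflow and run_workflow operations, a basic workflow type, and placeholder connector IDs; field and enum names may differ by SDK version, so treat the quickstart linked above as the authoritative reference.
import os

from unstructured_client import UnstructuredClient
from unstructured_client.models.operations import CreateWorkflowRequest, RunWorkflowRequest
from unstructured_client.models.shared import CreateWorkflow, WorkflowType

with UnstructuredClient(api_key_auth=os.getenv("UNSTRUCTURED_API_KEY")) as client:
    # Pair an existing source connector with the destination connector you
    # create later in this section. Both IDs are placeholders.
    workflow = client.workflows.create_workflow(
        request=CreateWorkflowRequest(
            create_workflow=CreateWorkflow(
                name="<workflow-name>",
                source_id="<source-connector-id>",
                destination_id="<destination-connector-id>",
                workflow_type=WorkflowType.BASIC
            )
        )
    )

    # Run the workflow as a job.
    job = client.workflows.run_workflow(
        request=RunWorkflowRequest(
            workflow_id=workflow.workflow_information.id
        )
    )

    print(job)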
Send processed data from Unstructured to Kafka. The requirements are as follows: a running Kafka cluster (for example, in Confluent Cloud), the hostname and port of the cluster's bootstrap server, the name of a topic on the cluster, and a Kafka API key and secret for access to the cluster. To create a Kafka destination connector, see the following example.
import os

from unstructured_client import UnstructuredClient
from unstructured_client.models.operations import CreateDestinationRequest
from unstructured_client.models.shared import CreateDestinationConnector

# The client reads your Unstructured API key from the environment variable
# described earlier in this section.
with UnstructuredClient(api_key_auth=os.getenv("UNSTRUCTURED_API_KEY")) as client:
    # Create the Kafka destination connector and return its details, including its ID.
    response = client.destinations.create_destination(
        request=CreateDestinationRequest(
            create_destination_connector=CreateDestinationConnector(
                name="<name>",
                type="kafka-cloud",
                config={
                    "bootstrap_servers": "<bootstrap-server>",
                    "port": <port>,
                    "group_id": "<group-id>",
                    "kafka_api_key": "<kafka-api-key>",
                    "secret": "<secret>",
                    "topic": "<topic>",
                    "batch_size": <batch-size>
                }
            )
        )
    )

    print(response.destination_connector_information)
Replace the preceding placeholders as follows (a filled-in example follows this list):
  • <name> (required) - A unique name for this connector.
  • <bootstrap-server> - The hostname of the Kafka cluster's bootstrap server to connect to.
  • <port> - The port number of the bootstrap server to connect to. The default is 9092 if not otherwise specified.
  • <group-id> - The ID of the consumer group. A consumer group is a way to allow a pool of consumers to divide the consumption of data over topics and partitions. The default is default_group_id if not otherwise specified.
  • <kafka-api-key> - For authentication, the API key for access to the cluster.
  • <secret> - For authentication, the secret for access to the cluster.
  • <topic> - The name of the topic to read messages from or write messages to on the cluster.
  • <batch-size> (destination connector only) - The maximum number of messages to send in a single batch. The default is 100 if not otherwise specified.
  • <num-messages-to-consume> (source connector only) - The maximum number of messages that the consumer will try to consume. The default is 100 if not otherwise specified.
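For illustration only, here is how the config block in the earlier example might look with hypothetical values for a Confluent Cloud cluster filled in; every value shown is made up, and the credentials are left as placeholders so they stay out of source code:
config = {
    "bootstrap_servers": "pkc-example.us-east-1.aws.confluent.cloud",  # hypothetical bootstrap server hostname
    "port": 9092,                        # default Kafka port
    "group_id": "default_group_id",      # default consumer group ID
    "kafka_api_key": "<kafka-api-key>",  # supply your cluster's API key
    "secret": "<secret>",                # supply your cluster's API secret
    "topic": "unstructured-output",      # hypothetical topic name
    "batch_size": 100                    # default batch size
}
After the create_destination call succeeds, the connector ID returned in response.destination_connector_information is what you reference as the destination when you build and run a workflow, as described earlier.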

Learn more