Kafka
Batch process all your records to store structured outputs in Kafka.
You will need:
The Kafka prerequisites:
- A Kafka cluster, such as ones provided by Confluent Cloud, Amazon Managed Streaming for Apache Kafka (Amazon MSK), or Google Cloud Managed Service for Apache Kafka.
- The hostname of the bootstrap Kafka cluster to connect to.
- The port number of the cluster.
- The name of the topic to read messages from and write messages to on the cluster.
- If you use Kafka API keys and secrets for authentication, the key and secret values.
The Kafka connector dependencies:
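For example, when using the Unstructured Ingest CLI or the Unstructured Ingest Python library, the connector's extras are typically installed as follows. This is a minimal sketch; the `kafka` extra name is an assumption based on this connector's name:

```bash
pip install "unstructured-ingest[kafka]"
```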
You might also need to install additional dependencies, depending on your use case.
The following environment variables:
- `KAFKA_BOOTSTRAP_SERVER` - The hostname of the bootstrap Kafka cluster to connect to, represented by `--bootstrap-server` (CLI) or `bootstrap_server` (Python).
- `KAFKA_PORT` - The port number of the cluster, represented by `--port` (CLI) or `port` (Python).
- `KAFKA_TOPIC` - The unique name of the topic to read messages from and write messages to on the cluster, represented by `--topic` (CLI) or `topic` (Python).
If you use Kafka API keys and secrets for authentication:
- `KAFKA_API_KEY` - The Kafka API key value, represented by `--kafka-api-key` (CLI) or `kafka_api_key` (Python).
- `KAFKA_SECRET` - The secret value for the Kafka API key, represented by `--secret` (CLI) or `secret` (Python).
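For example, you can set these environment variables in your shell before running the CLI. The values shown here are placeholders; substitute your own cluster details:

```bash
# Required connection settings (placeholder values):
export KAFKA_BOOTSTRAP_SERVER="pkc-12345.us-east-1.aws.confluent.cloud"
export KAFKA_PORT="9092"
export KAFKA_TOPIC="my-topic"

# Only if you use Kafka API keys and secrets for authentication:
export KAFKA_API_KEY="<your-kafka-api-key>"
export KAFKA_SECRET="<your-kafka-secret>"
```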
Additional settings include:

- `--confluent` (CLI) or `confluent` (Python): True to indicate that the cluster is running Confluent Kafka.
- `--num-messages-to-consume` (CLI) or `num_messages_to_consume` (Python): The maximum number of messages to get from the topic. The default is `1`.
- `--timeout` (CLI) or `timeout` (Python): The maximum amount of time, in seconds, to wait for the response of a request to the topic. The default is `1.0`.
Now call the Unstructured Ingest CLI or the Unstructured Ingest Python library. The source connector can be any of the supported source connectors; the example after the following notes uses the local source connector.
This example sends files to Unstructured API services for processing by default. To process files locally instead, see the instructions at the end of this page.
For the Unstructured Ingest CLI and the Unstructured Ingest Python library, you can use the `--partition-by-api` option (CLI) or the `partition_by_api` parameter (Python) to specify where files are processed:
- To do local file processing, omit `--partition-by-api` (CLI) or `partition_by_api` (Python), or explicitly specify `partition_by_api=False` (Python).

  Local file processing does not use an Unstructured API key or API URL, so you can also omit the following, if they appear:

  - `--api-key $UNSTRUCTURED_API_KEY` (CLI) or `api_key=os.getenv("UNSTRUCTURED_API_KEY")` (Python)
  - `--partition-endpoint $UNSTRUCTURED_API_URL` (CLI) or `partition_endpoint=os.getenv("UNSTRUCTURED_API_URL")` (Python)
  - The environment variables `UNSTRUCTURED_API_KEY` and `UNSTRUCTURED_API_URL`
- To send files to Unstructured API services for processing, specify `--partition-by-api` (CLI) or `partition_by_api=True` (Python).

  Unstructured API services also require an Unstructured API key and API URL, which you provide by adding the following:

  - `--api-key $UNSTRUCTURED_API_KEY` (CLI) or `api_key=os.getenv("UNSTRUCTURED_API_KEY")` (Python)
  - `--partition-endpoint $UNSTRUCTURED_API_URL` (CLI) or `partition_endpoint=os.getenv("UNSTRUCTURED_API_URL")` (Python)
  - The environment variables `UNSTRUCTURED_API_KEY` and `UNSTRUCTURED_API_URL`, representing your API key and API URL, respectively.
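For example, here is a sketch of a CLI command that reads files from a local directory, sends them to Unstructured API services for partitioning, and writes the structured output to a Kafka topic. It assumes the environment variables described above (including `UNSTRUCTURED_API_KEY` and `UNSTRUCTURED_API_URL`) are already set; `LOCAL_FILE_INPUT_DIR` is a placeholder for your input directory, and the source-then-destination command shape is an assumption based on how the Ingest CLI chains connectors:

```bash
unstructured-ingest \
  local \
    --input-path $LOCAL_FILE_INPUT_DIR \
    --partition-by-api \
    --api-key $UNSTRUCTURED_API_KEY \
    --partition-endpoint $UNSTRUCTURED_API_URL \
  kafka \
    --bootstrap-server $KAFKA_BOOTSTRAP_SERVER \
    --port $KAFKA_PORT \
    --topic $KAFKA_TOPIC \
    --kafka-api-key $KAFKA_API_KEY \
    --secret $KAFKA_SECRET
```

If your cluster runs Confluent Kafka, also pass the `--confluent` option described earlier. To process files locally instead, omit `--partition-by-api`, `--api-key`, and `--partition-endpoint`, as described in the notes above.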