POST /api/v1/jobs
curl --request POST \
  --url "${UNSTRUCTURED_API_URL}/api/v1/jobs/" \
  --header "unstructured-api-key: ${UNSTRUCTURED_API_KEY}" \
  --form "request_data={\"template_id\":\"f0a1b2c3-4d5e-6f7a-8b9c-0d1e2f3a4b5c\"}" \
  --form "input_files=@/path/to/document.pdf"
{
  "id": "b2c3d4e5-6f7a-8b9c-0d1e-2f3a4b5c6d7e",
  "workflow_id": "f0a1b2c3-4d5e-6f7a-8b9c-0d1e2f3a4b5c",
  "workflow_name": "My ETL Workflow",
  "status": "SCHEDULED",
  "created_at": "2026-01-01T00:00:00Z",
  "runtime": null,
  "input_file_ids": ["document.pdf"],
  "output_node_files": null,
  "job_type": "ephemeral"
}
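The curl request above can also be assembled in Python. The sketch below only builds the URL, headers, and multipart form fields for this endpoint; passing them to an HTTP client (for example `requests.post(**req)`) is left as a comment so the snippet stays self-contained. The function name and its structure are illustrative, not part of the Unstructured API.

```python
import json
import os


def build_job_request(api_url: str, api_key: str, template_id: str, file_path: str) -> dict:
    """Assemble the pieces of a POST /api/v1/jobs/ call.

    Returns a dict of keyword arguments in the shape accepted by
    common HTTP clients (e.g. requests.post(**req)).
    """
    return {
        "url": f"{api_url}/api/v1/jobs/",
        "headers": {"unstructured-api-key": api_key},
        # request_data is a JSON string sent as an ordinary form field.
        "data": {"request_data": json.dumps({"template_id": template_id})},
        # input_files carries the file contents as multipart/form-data;
        # repeat the ("input_files", ...) pair for each additional file.
        "files": [("input_files", open(file_path, "rb"))],
    }

# Usage (performs a live request; requires the third-party requests library):
# import requests
# req = build_job_request(
#     os.environ["UNSTRUCTURED_API_URL"],
#     os.environ["UNSTRUCTURED_API_KEY"],
#     "f0a1b2c3-4d5e-6f7a-8b9c-0d1e2f3a4b5c",
#     "/path/to/document.pdf",
# )
# print(requests.post(**req).json())
```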


An on-demand workflow job takes one or more local files as its only input, and its temporary workflow exists only for the duration of the job's run.
To run a workflow that was already created and still exists (also known as a long-lived workflow), see the run workflow endpoint instead. To run a workflow that takes files and data from remote locations as input (instead of local files), do the following instead:
  1. Create a source connector to the remote source locations.
  2. Create a destination connector to the remote destination location.
  3. Create a long-lived workflow that uses this specific source connector and destination connector.
  4. Run this long-lived workflow manually, if you have not already created the workflow to run on a schedule.

Body

request_data
string
required
Job configuration data.
  • To use a workflow template for a job, include a template_id field that specifies the unique ID of the workflow template. For more information, see list templates.
  • To use a custom workflow definition for a job, include a job_nodes field that specifies the settings for the job’s workflow nodes. For instructions, see Workflow Nodes.
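Both forms of `request_data` are JSON strings sent as a form field. A minimal sketch of each, assuming the template ID shown earlier; the contents of `job_nodes` are deliberately left as a placeholder, since the node schema is documented separately under Workflow Nodes:

```python
import json

# Form 1: reference an existing workflow template by its unique ID.
template_request = json.dumps(
    {"template_id": "f0a1b2c3-4d5e-6f7a-8b9c-0d1e2f3a4b5c"}
)

# Form 2: supply an inline workflow definition. The node objects that
# belong in this list are described under Workflow Nodes; none are
# invented here.
custom_request = json.dumps(
    {
        "job_nodes": [
            # ...node settings per the Workflow Nodes documentation...
        ]
    }
)
```

Either string is then passed as `--form "request_data=..."` in the curl call.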
input_files
array
Input files to process. Upload as multipart/form-data, in the following format:
--form "input_files=@</full/path/to/local/filename.extension>;filename=<filename.extension>;type=<local-file-media-type>" \
--form "input_files=@</full/path/to/local/filename.extension>;filename=<filename.extension>;type=<local-file-media-type>" # For each additional file to be uploaded.
For more information, see Unstructured API Quickstart - On-Demand Jobs.

Response

id
string
required
Unique identifier for the job.
workflow_id
string
required
Unique identifier of the workflow that created this job.
workflow_name
string
required
Name of the workflow that created this job.
status
string
required
Job status. One of: SCHEDULED, IN_PROGRESS, COMPLETED, STOPPED, FAILED.
created_at
string
required
ISO 8601 timestamp when the job was created.
runtime
string
ISO 8601 duration of the job run.
input_file_ids
array
IDs of input files for this job.
output_node_files
array
Output file metadata objects. Each object includes node_id, file_id, node_type, and node_subtype.
job_type
string
Job type. Default: ephemeral.
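A small sketch of inspecting a job response, assuming the JSON shape shown in the example above. The terminal-status set follows the enumeration given for the status field; the helper function itself is illustrative, not part of any client library.

```python
import json

# Of the statuses listed for the `status` field, these three indicate
# the job will not progress further.
TERMINAL_STATUSES = {"COMPLETED", "STOPPED", "FAILED"}


def is_finished(job: dict) -> bool:
    """Return True once the job has reached a terminal status."""
    return job["status"] in TERMINAL_STATUSES


# Parse a (truncated) response like the example above.
example = json.loads(
    '{"id": "b2c3d4e5-6f7a-8b9c-0d1e-2f3a4b5c6d7e",'
    ' "status": "SCHEDULED", "runtime": null, "job_type": "ephemeral"}'
)
```

A caller polling the job would re-fetch it until `is_finished(job)` returns True, then read `output_node_files`.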