Connect Jira to your preprocessing pipeline, and batch process all your documents using unstructured-ingest to store structured outputs locally on your filesystem.

First, install the Jira dependencies:

pip install "unstructured-ingest[jira]"

To process all the issues in the projects within your Jira domain, you must specify:

  • url: Atlassian (Jira) domain URL
  • api-token: API token to authenticate into Atlassian (Jira). See the Atlassian documentation for more information.
  • user-email: User email for the domain
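
For example, a minimal run that ingests the whole domain and writes the results to a local directory might look like the sketch below. The credentials and URLs are placeholders, and the --output-dir option for the local destination is an assumption; check unstructured-ingest --help for the exact options in your installed version.

# Minimal sketch (placeholder values): ingest every issue in the domain
# and write the partitioned output to ./jira-ingest-output.
unstructured-ingest \
  jira \
  --url https://your-domain.atlassian.net \
  --user-email your-email@example.com \
  --api-token $JIRA_API_TOKEN \
  --output-dir jira-ingest-output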

Optionally, you can limit which issues are processed:

  • list-of-projects: explicitly specify which project IDs should be processed
  • list-of-boards: explicitly specify which board IDs should be processed
  • list-of-issues: explicitly specify which issue IDs or keys should be processed

If any of these optional arguments are provided, the connector will ingest only the specified components and nothing else. If none of them are provided, all issues in all projects will be ingested.
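
For instance, to limit a run to a single project, you could add --list-of-projects to the command above. The project key and value format here are illustrative; consult the connector's --help output for how multiple values should be delimited.

# Sketch (placeholder values): ingest only the issues belonging to project PROJ.
unstructured-ingest \
  jira \
  --url https://your-domain.atlassian.net \
  --user-email your-email@example.com \
  --api-token $JIRA_API_TOKEN \
  --list-of-projects PROJ \
  --output-dir jira-ingest-output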

Make sure to set the --partition-by-api flag and pass in your API key with --api-key.

Additionally, if you're using the Unstructured Serverless API, a locally deployed Unstructured API, or an Unstructured API deployed on Azure or AWS, you also need to specify the API URL via the --partition-endpoint argument.
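
Putting it together, a run that partitions the documents remotely might look like the sketch below. The endpoint URL, API key, and --output-dir value are placeholders; substitute the URL of your Serverless, local, Azure, or AWS deployment.

# Sketch (placeholder values): partition via the Unstructured API instead of locally.
unstructured-ingest \
  jira \
  --url https://your-domain.atlassian.net \
  --user-email your-email@example.com \
  --api-token $JIRA_API_TOKEN \
  --output-dir jira-ingest-output \
  --partition-by-api \
  --api-key $UNSTRUCTURED_API_KEY \
  --partition-endpoint https://api.unstructuredapp.io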