Vector database ingestion
In this guide, we demonstrate how to use Unstructured, ChromaDB, and LangChain to summarize topics from the front page of CNN Lite. With this modern LLM stack, the entire workflow takes fewer than two dozen lines of code.
Gather Links with Unstructured
First, we gather links from the CNN Lite homepage using the partition_html function from Unstructured. When Unstructured partitions HTML pages, links are included in the metadata for each element, making link collection straightforward.
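Below is a minimal sketch of this step. The CNN Lite URL and the handling of site-relative links are assumptions for illustration, and the exact metadata fields may vary across Unstructured versions.

```python
# Sketch: collect article links from the CNN Lite homepage.
# Assumes https://lite.cnn.com/ is the current URL and that article links
# appear as relative paths in each element's link metadata.
from unstructured.partition.html import partition_html

cnn_lite_url = "https://lite.cnn.com/"
elements = partition_html(url=cnn_lite_url)

links = []
for element in elements:
    if element.metadata.link_urls:
        relative_link = element.metadata.link_urls[0]
        # CNN Lite links are site-relative (e.g. "/2024/..."), so join them
        # with the site root to get absolute article URLs.
        links.append(f"{cnn_lite_url}{relative_link.lstrip('/')}")
```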
Ingest Individual Articles with UnstructuredURLLoader
With the links in hand, we preprocess individual news articles using UnstructuredURLLoader. This loader fetches documents from the web and then uses Unstructured's partition function to extract text and metadata. Here we preprocess HTML files, but the loader also works with other response types, such as application/pdf. The result is a list of LangChain Document objects.
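The sketch below continues from the links gathered above; the langchain_community import path reflects recent LangChain releases and may differ in older versions.

```python
# Sketch: fetch each article URL and partition it into LangChain Documents.
from langchain_community.document_loaders import UnstructuredURLLoader

loader = UnstructuredURLLoader(urls=links)
docs = loader.load()  # one Document per article, with text and metadata
```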
Load Documents into ChromaDB
The next step is to load the preprocessed documents into ChromaDB. This process involves vectorizing the documents using OpenAI embeddings and loading them into Chroma’s vector store. Once in Chroma, similarity search can be performed to retrieve documents related to specific topics.
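A sketch of this step is shown below, assuming an OPENAI_API_KEY is set in the environment; the query string is an illustrative placeholder, not part of the guide.

```python
# Sketch: embed the documents with OpenAI embeddings and index them in Chroma.
from langchain_community.vectorstores import Chroma
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()  # reads OPENAI_API_KEY from the environment
vectorstore = Chroma.from_documents(docs, embeddings)

# Similarity search for a topic of interest (placeholder query).
query_docs = vectorstore.similarity_search("latest developments in AI policy", k=4)
```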
Summarize the Documents
After retrieving relevant documents from Chroma, we summarize them using LangChain. The load_summarize_chain function makes summarization easy, requiring only a choice of LLM and summarization chain type.
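A sketch of the summarization step follows; the "stuff" chain type and the default chat model are illustrative choices, and the input/output keys shown are the stuff chain's defaults.

```python
# Sketch: summarize the retrieved documents with a "stuff" summarize chain.
from langchain.chains.summarize import load_summarize_chain
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(temperature=0)  # default chat model; any supported model works
chain = load_summarize_chain(llm, chain_type="stuff")

result = chain.invoke({"input_documents": query_docs})
print(result["output_text"])
```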
Jupyter Notebook
To delve deeper into this example, you can access the full Jupyter Notebook here: News of the Day Notebook