Embedding
After partitioning, chunking, and summarizing, the embedding step creates arrays of numbers known as vectors that represent the text Unstructured extracts. These vectors are stored alongside the text itself. The vector embeddings are generated by an embedding model supplied by an embedding provider.
You typically save these embeddings in a vector store. When a user queries a retrieval-augmented generation (RAG) application, the application can run a similarity search against that vector store and return the items whose embeddings are closest to the embedding of the user's query.
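As a minimal sketch of what that lookup involves, the following standalone Python example ranks stored texts by cosine similarity to a query embedding. The sample texts and the tiny hand-written vectors are illustrative only; they are not part of Unstructured or any specific vector store, and real embeddings have hundreds or thousands of dimensions:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: 1.0 means identical direction, 0.0 means orthogonal."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy "vector store": each stored text paired with a fake embedding.
# In practice these vectors come from the embedding model, not hand-written values.
store = [
    ("Quarterly revenue grew 12 percent.", np.array([0.9, 0.1, 0.0])),
    ("The picnic was rained out.", np.array([0.1, 0.8, 0.3])),
    ("Profits rose in the third quarter.", np.array([0.8, 0.2, 0.1])),
]

# A fake embedding for the user's query, produced by the same model in practice.
query_embedding = np.array([0.85, 0.15, 0.05])

# Rank stored items by similarity to the query and return the closest ones.
ranked = sorted(store, key=lambda item: cosine_similarity(query_embedding, item[1]), reverse=True)
for text, _ in ranked[:2]:
    print(text)
```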
Here is an example of a document element generated by Unstructured, along with its vector embeddings generated by the embedding model sentence-transformers/all-MiniLM-L6-v2 on Hugging Face:
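The following is an illustrative sketch of such an element, shown as a Python dictionary. The text, element ID, and metadata are placeholders, and the embeddings array is truncated for display; a real all-MiniLM-L6-v2 vector contains 384 numbers:

```python
# Illustrative document element with embeddings (all values are placeholders).
# sentence-transformers/all-MiniLM-L6-v2 produces 384-dimensional vectors,
# so a real "embeddings" array would hold 384 numbers.
element = {
    "type": "NarrativeText",
    "element_id": "7a4c3f2e...",  # hypothetical ID, truncated
    "text": "Dense embeddings encode meaning as points in a vector space.",
    "metadata": {"filename": "example.pdf", "page_number": 1},
    "embeddings": [0.0421, -0.0183, 0.0957, "...", 0.0114],  # truncated for display
}
```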
Generate embeddings
To generate embeddings, choose one of the following embedding providers and models in the Providers section of an Embedder node in a workflow (a standalone code sketch follows the list):
- OpenAI: Use OpenAI to generate embeddings. Also, choose the model to use:
  - text-embedding-3-small, with 1536 dimensions.
  - text-embedding-3-large, with 3072 dimensions.
  - Ada 002 (Text), with 1536 dimensions.
- Vertex AI: Use Vertex AI to generate embeddings by using the textembedding-gecko@001 model, with 768 dimensions.
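For reference, here is a minimal sketch of what an embedding call looks like outside the workflow UI, using the OpenAI Python SDK with the text-embedding-3-small model listed above. The sample text and the OPENAI_API_KEY environment variable are assumptions for the sketch, not part of the workflow setup:

```python
import os
from openai import OpenAI

# Assumes OPENAI_API_KEY is set in the environment.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# Embed one chunk of text with text-embedding-3-small (1536 dimensions).
response = client.embeddings.create(
    model="text-embedding-3-small",
    input="Dense embeddings encode meaning as points in a vector space.",
)

vector = response.data[0].embedding
print(len(vector))  # 1536
```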