VectorShift
VectorShift is an integrated framework of no-code, low-code, and out-of-the-box generative AI solutions for building AI search engines, assistants, chatbots, and automations.
VectorShift’s platform allows you to design, prototype, build, deploy, and manage generative AI workflows and automation across two interfaces: no-code and code SDK. This hands-on demonstration uses the no-code interface to walk you through creating a VectorShift pipeline project that uses GPT-4o-mini to chat in real time with a PDF document. Unstructured processes the document and stores the processed data in a Pinecone vector database.
Prerequisites
- A Pinecone account. Get an account.
- A Pinecone API key. Get an API key.
- A Pinecone serverless index. Create a serverless index.
Unstructured recommends that all records in the target index have a field named `record_id` with a string data type. Unstructured can use this field to do intelligent document overwrites. Without this field, duplicate documents might be written to the index or, in some cases, the operation could fail altogether.
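The overwrite behavior that `record_id` enables can be sketched in plain Python. This is only an illustration of the intended semantics, not the Pinecone API: a record whose `record_id` matches an existing entry replaces it instead of accumulating as a duplicate.

```python
# Illustrative sketch of record_id-based overwrites (NOT the Pinecone API):
# a record that shares a record_id with an existing entry replaces it.

def upsert_with_record_id(index: dict, records: list[dict]) -> None:
    """Overwrite any existing entry that shares the same record_id."""
    for rec in records:
        index[rec["record_id"]] = rec

index: dict = {}
first_run = [{"record_id": "constitution.pdf", "text": "We the People ..."}]
second_run = [{"record_id": "constitution.pdf", "text": "We the People ... (reprocessed)"}]

upsert_with_record_id(index, first_run)
upsert_with_record_id(index, second_run)  # same record_id, so it overwrites

print(len(index))  # 1 entry: the reprocessed version replaced the original
```

Without a stable `record_id`, the second run would add a second copy of the same document instead of replacing the first.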
Also:
- Sign up for an OpenAI account, and get your OpenAI API key.
- Sign up for a VectorShift Starter account.
- Sign up for an Unstructured Platform account through the For Developers page.
Create and run the demonstration project
Get source data into Pinecone
Although you can use any supported file type, or data in any supported source type, as the input to Pinecone, this demonstration uses the text of the United States Constitution in PDF format.
- Sign in to your Unstructured Platform account.
- Create a source connector, if you do not already have one, to connect Unstructured to the source location where the PDF file is stored.
- Create a Pinecone destination connector, if you do not already have one, to connect Unstructured to your Pinecone serverless index.
- Create a workflow that references this source connector and destination connector.
- Run the workflow.
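After the workflow runs, each record in the index holds an embedding plus metadata for one chunk of the document. The field names below, other than `record_id` (described in the prerequisites), are assumptions for illustration; the key constraint is that the index's dimension matches the embedding model you select in VectorShift (openai/text-embedding-3-large produces 3072-dimensional vectors).

```python
# Assumed shape of one record written to the Pinecone index. Field names
# other than record_id are illustrative; the vector length must match
# the index dimension.
EMBEDDING_DIM = 3072  # output size of openai/text-embedding-3-large

record = {
    "id": "constitution.pdf-chunk-0001",  # unique per chunk (illustrative)
    "values": [0.0] * EMBEDDING_DIM,      # the chunk's embedding vector
    "metadata": {
        "record_id": "constitution.pdf",  # stable per source document
        "text": "We the People of the United States ...",
    },
}

print(len(record["values"]))  # 3072
```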
Create the VectorShift project
- Sign in to your VectorShift account dashboard.
- On the sidebar, click Pipelines.
- Click New.
- Click Create Pipeline from Scratch.
Add the Input node
In this step, you add a node to the pipeline. This node takes user-supplied chat messages and sends them as input both to Pinecone and to a text-based LLM for contextual searching.
In the top pipeline node chooser bar, on the General tab, click Input.
Add the Pinecone node
In this step, you add a node that connects to the Pinecone serverless index.
- In the top pipeline node chooser bar, on the Integrations tab, click Pinecone.
- In the Pinecone node, for Embedding Model, select openai/text-embedding-3-large.
- Click Connected Account.
- In the Select Pinecone Account dialog, click Connect New.
- Enter the API Key and Region for your Pinecone serverless index, and then click Save.
- For Index, select the name of your Pinecone serverless index.
- Connect the input_1 output from the Input node to the query input in the Pinecone node.
To make the connection, click and hold your mouse pointer inside the circle next to input_1 in the Input node, drag it over the circle next to query in the Pinecone node, and then release. A line appears between the two circles.
Add the OpenAI LLM node
In this step, you add a node that builds a prompt and then sends it to a text-based LLM.
- In the top pipeline node chooser bar, on the LLMs tab, click OpenAI.
- In the OpenAI LLM node, for System, enter the following text:
- For Prompt, enter the following text:
- For Model, select gpt-4o-mini.
- Check the box titled Use Personal API Key.
- For API Key, enter your OpenAI API key.
- Connect the input_1 output from the Input node to the Question input in the OpenAI LLM node.
- Connect the output output from the Pinecone node to the Context input in the OpenAI LLM node.
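The System and Prompt text for these fields is not reproduced on this page. A minimal pair consistent with the Question and Context inputs used in the connection steps could look like the following (assumed wording; VectorShift turns each {{Name}} token in a prompt into a node input, which is where the Question and Context inputs come from):

```text
System:
You are a helpful assistant. Answer the question using only the provided context.

Prompt:
Question: {{Question}}
Context: {{Context}}
```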
Add the Chat Memory node
In this step, you add a node that adds chat memory to the session.
- In the top pipeline node chooser bar, on the Chat tab, click Chat Memory.
- Connect the output from the Chat Memory node to the Memory input in the OpenAI LLM node.
Add the Output node
In this step, you add a node that displays the chat output.
- In the top pipeline node chooser bar, on the General tab, click Output.
- Connect the response output from the OpenAI LLM node to the input in the Output node.
Run the project
- In the upper corner of the pipeline designer, click the play (Run Pipeline) button.
- In the chat pane, on the Chatbot tab, enter a question into the Message Assistant box, for example, "What rights does the Fifth Amendment guarantee?" Then press the send button.
- Wait until the answer appears.
- Ask as many additional questions as you want.
Learn more
See the VectorShift documentation.