Embeddings, or vectors, are lists of floating-point numbers that capture semantic meaning and the relationships between entities in data, in a format that is efficient for machines and Large Language Models to parse.
For example, in the following log statements, you can see the user's query, 'How can I become a better developer?', and the query embedding, which is an array of floats representing that sentence.
Note that the length of this array is `3072`. This is the dimensionality (or number of dimensions) output by the embedding model, `text-embedding-3-large`, which I chose over the cheaper `text-embedding-3-small` model because my primary concern is accuracy.
Since the embedding model I'm using for this application outputs 3072 dimensions, I also set my Pinecone index to 3072 dimensions when I created it.
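To make the dimensionality requirement concrete, here is a minimal sketch of cosine similarity, the comparison that vector similarity search is built on. `cosineSimilarity` is a hypothetical helper for illustration, not part of the application code:

```typescript
// Cosine similarity between two embedding vectors: 1.0 means identical
// direction (semantically similar), values near 0 mean unrelated.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) {
    // This is why the Pinecone index dimension must match the model's
    // output dimension (3072 for text-embedding-3-large).
    throw new Error(`Dimension mismatch: ${a.length} vs ${b.length}`);
  }
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

Two vectors of different lengths simply cannot be compared this way, which is why the index and the model must agree on dimensionality up front.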
Let's look at how this works in the server-side API route.
## Server side
We'll step through the server-side API route section by section, building up to the complete route at the end.
### Retrieval phase
When the `/api/chat` route receives a request, I pop the latest user message off the request body and hand it to my context retrieval service:
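In outline, that first step might look like the following sketch; the simplified `Message` type and the `getLastUserMessage` helper name are mine, not the AI SDK's:

```typescript
// Simplified shape of a chat message; the AI SDK's Message type has more fields.
type Message = { role: 'user' | 'assistant'; content: string };

// Pull the most recent message off the chat history the client sends,
// and confirm it actually came from the user before retrieving context for it.
function getLastUserMessage(messages: Message[]): Message {
  const last = messages[messages.length - 1];
  if (!last || last.role !== 'user') {
    throw new Error('Expected the latest message to come from the user');
  }
  return last;
}
```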
Here's what the context service looks like:
The `getContext` function's job is to convert the user's message to vectors and retrieve the most relevant items from Pinecone.
It is a wrapper around the `getEmbeddings` and `getMatchesFromEmbeddings` functions, which are also defined in separate 'services' files.
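Here is a sketch of how such a wrapper composes the two helpers. To keep the example self-contained, the embedding and match functions are passed in as parameters, and the `minScore` cutoff is an assumption for illustration; in the real service they are imported from their own files:

```typescript
// A retrieved match: a relevance score plus whatever metadata was stored.
type ScoredMatch = { score: number; metadata: unknown };

// Sketch of a getContext-style wrapper: embed the message, query for
// neighbors, and keep only reasonably confident matches.
async function getContext(
  message: string,
  embed: (text: string) => Promise<number[]>,
  getMatches: (embeddings: number[], topK: number) => Promise<ScoredMatch[]>,
  minScore = 0.7, // illustrative cutoff, not from the original service
): Promise<ScoredMatch[]> {
  const embeddings = await embed(message);
  const matches = await getMatches(embeddings, 3);
  return matches.filter((m) => m.score > minScore);
}
```

Injecting the two functions also makes the wrapper trivial to exercise without network access.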
Here's the `getEmbeddings` function, which is a thin wrapper around OpenAI's embeddings endpoint:
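As a sketch, such a wrapper might call OpenAI's REST embeddings endpoint directly; the real service may use the `openai` SDK instead, and the `sanitize` helper is an assumption (OpenAI recommends replacing newlines in embedding input):

```typescript
// Replace newlines with spaces before embedding, per OpenAI's guidance.
function sanitize(input: string): string {
  return input.replace(/\n/g, ' ');
}

// Sketch of a thin wrapper over OpenAI's POST /v1/embeddings endpoint.
// Assumes OPENAI_API_KEY is set in the environment.
async function getEmbeddings(input: string): Promise<number[]> {
  const res = await fetch('https://api.openai.com/v1/embeddings', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: 'text-embedding-3-large',
      input: sanitize(input),
    }),
  });
  if (!res.ok) throw new Error(`Embedding request failed: ${res.status}`);
  const json = await res.json();
  return json.data[0].embedding as number[]; // 3072 floats
}
```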
So far, we've received the user's query and converted it into a query vector that we can send to Pinecone's vector database for similarity search.
The `getMatchesFromEmbeddings` function demonstrates how we use Pinecone to execute our query and return the nearest neighbors:
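As a sketch of what that function might look like, here is a version that posts to Pinecone's data-plane REST `/query` endpoint; the environment variable names and the injectable `fetchImpl` parameter are illustrative choices, and the real service uses the official Pinecone client:

```typescript
// A single nearest-neighbor result from the index.
type Match = { id: string; score: number; metadata?: Record<string, string> };

// Sketch of querying a Pinecone index for the topK nearest neighbors.
// fetchImpl is injectable so the sketch can be exercised without a network call.
async function getMatchesFromEmbeddings(
  embeddings: number[],
  topK: number,
  fetchImpl: typeof fetch = fetch,
): Promise<Match[]> {
  const res = await fetchImpl(`https://${process.env.PINECONE_INDEX_HOST}/query`, {
    method: 'POST',
    headers: {
      'Api-Key': process.env.PINECONE_API_KEY ?? '',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      vector: embeddings,    // the 3072-dimension query vector
      topK,                  // how many nearest neighbors to return
      includeMetadata: true, // we need the source path and text of each chunk
    }),
  });
  if (!res.ok) throw new Error(`Pinecone query failed: ${res.status}`);
  const json = await res.json();
  return (json.matches ?? []) as Match[];
}
```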
```javascript
// Imagine metadata like a JavaScript object
{
  "text": "In this article I reflect back on the year...",
  "source": "src/app/blog/2023-wins/page.mdx"
}
```

```javascript
{
  id: 'b10c8904-3cff-4fc5-86fc-eec5b1517dab',
  score: 0.826505,
  values: [ [length]: 0 ],
  sparseValues: undefined,
  metadata: {
    source: 'portfolio/src/app/blog/d...
  }
},
{
  id: 'b78bcb7c-c1a6-48a3-ac6b-ab58263b6ac1',
  score: 0.825771391,
  values: [ [length]: 0 ],
  sparseValues: undefined,
  metadata: {
    source: 'portfolio/src/app/blog/run-your-own-tech-blog/page.mdx',
    text: 'I wanted the ability to author code blocks of any kind directly in my post and I wanted outstanding image support ...'
  }
}
```
```javascript
const blogUrls = new Set<string>();
let docs: string[] = [];

(context as PineconeRecord[]).forEach(match => {
  const source = (match.metadata as Metadata).source
  // Ensure source is a blog url, meaning it contains the path src/app/blog
  if (!source.includes('src/app/blog')) return
  blogUrls.add((match.metadata as Metadata).source);
  docs.push((match.metadata as Metadata).text);
});
```
```javascript
let relatedBlogPosts: ArticleWithSlug[] = []

// Loop through all the blog urls and get the metadata for each
for (const blogUrl of blogUrls) {
  const blogPath = path.basename(blogUrl.replace('page.mdx', ''))
  const localBlogPath = `${blogPath}/page.mdx`
  const { slug, ...metadata } = await importArticleMetadata(localBlogPath);
  relatedBlogPosts.push({ slug, ...metadata });
}
```
```javascript
const serializedArticles = Buffer.from(
  JSON.stringify(relatedBlogPosts)
).toString('base64')

return new StreamingTextResponse(result.toAIStream(), {
  headers: {
    "x-sources": serializedArticles
  }
});
```
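The base64 round-trip for that `x-sources` header can be sketched in isolation; `ArticleStub` and both helper names here are mine, simplified stand-ins for the real types:

```typescript
// The x-sources header round-trip. HTTP headers are effectively limited to
// ASCII, so the article list is serialized to JSON and base64-encoded on the
// server; the client reverses the process in useChat's onResponse callback.
type ArticleStub = { slug: string; title: string };

function encodeSources(articles: ArticleStub[]): string {
  // Server side: Buffer.from(JSON.stringify(...)).toString('base64')
  return Buffer.from(JSON.stringify(articles)).toString('base64');
}

function decodeSources(header: string | null): ArticleStub[] {
  // Client side: the browser uses atob(); Buffer works the same way in Node
  return header
    ? (JSON.parse(Buffer.from(header, 'base64').toString('utf8')) as ArticleStub[])
    : [];
}
```

Base64 is the reason this works at all: raw JSON can contain characters that are not safe in a header value, while base64 output is always header-safe ASCII.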
```javascript
// Join all the chunks of text together, truncate to the maximum number of tokens, and return the result
const contextText = docs.join("\n").substring(0, 3000)
```
```javascript
const prompt = `
  Zachary Proser is a Staff software engineer, open-source maintainer and technical writer.
  Zachary Proser's traits include expert knowledge, helpfulness, cleverness, and articulateness.
  Zachary Proser is a well-behaved and well-mannered individual.
  Zachary Proser is always friendly, kind, and inspi...
```
```javascript
const result = await streamText({
  model: openai('gpt-4o'),
  system: prompt,
  prompt: lastMessage.content,
});
```
```javascript
'use client';

...

const { messages, input, setInput, handleInputChange, handleSubmit } = useChat({
  onResponse(response) {
    const sourcesHeader = response.headers.get('x-sources');
    const parsedArticles: ArticleWithSlug[] = sourcesHeader
      ? (JSON.parse(atob(sourcesHeader as string)) as ArticleWithSlug[])
      : [];
    setArticles(parsedArticles);
    s...
```
```javascript
// The questions are defined as an array of strings
const prepopulatedQuestions = [
  "What is the programming bug?",
  "Why do you love Next.js so much?",
  "What do you do at Pinecone?",
  "How can I become a better developer?",
  "What is ggshield and why is it important?"
];
```
...
```javascript
// The handler for clicking one of the pre-canned question buttons
const handlePrepopulatedQuestion = (question: string) => {
  handleInputChange({
    target: {
      value: question,
    },
  } as React.ChangeEvent);

  gtag("event", "chat_use_precanned_question", {
    event_category: "chat",
    event_label: question,
  });

  setIsLoading(true); // Set loading state here to indicate submission is processing

  const customSubmitEvent = {
    preventDefault: () => { },
  } as unknown as React.FormEvent;

  // Submit immediately after updating the input
  handleSubmit(customSubmitEvent);
};
```
```javascript
'use client';

const prepopulatedQuestions = [
  "What is the programming bug?",
  "Why do you love Next.js so much?",
  "What do you do at Pinecone?",
  "How can I become a better developer?",
  "What is ggshield and why is it important?"
];

const { messages, input, setInput, handleInputChange, handleSubmit } = useChat({
  onResponse(response) {
    const sourcesHeader = response.headers.get('x-sources');
    const parsedArticles: ArticleWithSlug[] = sourcesHeader
      ? (JSON.parse(atob(sourcesHeader as string)) as ArticleWithSlug[])
      : [];
    console.log(`parsedArticle %o`...
```
```javascript
const userFormSubmit = (e: React.FormEvent) => {
  setIsLoading(true); // Set loading state here
  handleSubmit(e);
};

const handlePrepopulatedQuestion = (question: string) => {
  handleInputChange({
    target: {
      value: question,
    },
  } as React.ChangeEvent);

  gtag("event", "chat_use_precanned_question", {
    event_category: "chat",
    event_label: question,
```