# Annapurna framework
Source: https://docs.rubrik.com/en-us/saas/ai/annapurna_framework.html
---
Annapurna is a generative AI application built on a retrieval-augmented generation (RAG) framework. It uses an embedding model to turn source data and user questions into vector embeddings, which are stored in a vector database. When a user types a question into an Annapurna chatbot, the framework compares the vector embedding of the question with the source data embeddings in the database to find the data set that best matches the question. Annapurna then combines this relevant data with the user's question to create the prompt sent to the LLM.
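The core idea of comparing embeddings can be sketched with a toy example. The `embed` function below is a stand-in for a real embedding model (it hashes words into a fixed-size vector, which a real model would never do); it only illustrates that both documents and questions become vectors of the same dimension that can be compared by cosine similarity.

```python
import hashlib
import math

def embed(text: str, dim: int = 64) -> list[float]:
    """Toy stand-in for an embedding model: hash each word into a
    fixed-size vector, then normalize to unit length. A real RAG
    deployment would call a trained embedding model instead."""
    vec = [0.0] * dim
    for word in text.lower().split():
        h = int(hashlib.md5(word.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Dot product of two unit-length vectors equals their cosine similarity."""
    return sum(x * y for x, y in zip(a, b))
```

Because both sides of the comparison live in the same vector space, a question and the source passages that answer it end up close together, which is what the vector database exploits during retrieval.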
The following image represents the flow of source data and prompts through the Annapurna framework.
1. Rubrik protects your data through backups and snapshots of on-premises, cloud, and SaaS data sources. The combined data is available in your data lake.
2. In your RSC domain, an administrator creates a chatbot.
3. Annapurna generates vector embeddings from your data stored in the data lake and saves them in a vector database in the cloud. The embeddings are refreshed with each source data update: embeddings for new or modified files are automatically created and stored in the vector database.
4. An end user enters a query in the chatbot.
5. The embedding model converts the user prompt into a vector representation and passes it to the vector database where the source data vectors are stored. Annapurna compares the vector embeddings and determines the data set that best matches the context of the query.
6. Annapurna passes a prompt consisting of the data set and the query to the LLM.
7. The LLM returns a response, which appears in the chatbot interface.
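Steps 4 through 6 above can be sketched as a minimal retrieval loop. Everything here is illustrative: the bag-of-words `embed` function stands in for the embedding model, the in-memory `index` list stands in for the cloud vector database, and the sample documents are hypothetical.

```python
import math

def embed(text: str) -> dict[str, int]:
    # Toy bag-of-words "embedding"; a real system uses a trained model.
    words = text.lower().split()
    return {w: words.count(w) for w in set(words)}

def similarity(a: dict[str, int], b: dict[str, int]) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Step 3 (done ahead of time): embed source documents into a vector store.
documents = [
    "Backups run nightly for on-premises virtual machines.",
    "Snapshots of SaaS data are retained for ninety days.",
]
index = [(doc, embed(doc)) for doc in documents]

def build_prompt(query: str, top_k: int = 1) -> str:
    q_vec = embed(query)                                   # step 5: embed the query
    ranked = sorted(index, key=lambda d: similarity(q_vec, d[1]), reverse=True)
    context = "\n".join(doc for doc, _ in ranked[:top_k])  # best-matching data set
    # Step 6: combine the retrieved data with the question into one prompt.
    return f"Context:\n{context}\n\nQuestion: {query}"
```

In step 7, the string returned by `build_prompt` would be sent to the LLM, and its response would be shown in the chatbot interface.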