| filetype | content | filename |
|---|---|---|
.md |
# cassandra-entomology-rag
This template performs RAG using Apache Cassandra® or Astra DB through CQL (the `Cassandra` vector store class).
## Environment Setup
For the setup, you will require:
- an [Astra](https://astra.datastax.com) Vector Database. You must have a [Database Administrator token](https://awesome-as... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\cassandra-entomology-rag\README.md |
.txt | # source: https://www.thoughtco.com/a-guide-to-the-twenty-nine-insect-orders-1968419
Order Thysanura: The silverfish and firebrats are found in the order Thysanura. They are wingless insects often found in people's attics, and have a lifespan of several years. There are about 600 species worldwide.
Order Diplura: Dipl... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\cassandra-entomology-rag\sources.txt |
.md |
# cassandra-synonym-caching
This template provides a simple chain template showcasing the usage of LLM Caching backed by Apache Cassandra® or Astra DB through CQL.
## Environment Setup
To set up your environment, you will need the following:
- an [Astra](https://astra.datastax.com) Vector Database (free tier is fi... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\cassandra-synonym-caching\README.md |
.md | # Chain-of-Note (Wikipedia)
Implements Chain-of-Note as described in https://arxiv.org/pdf/2311.09210.pdf by Yu, et al. Uses Wikipedia for retrieval.
Check out the prompt being used here https://smith.langchain.com/hub/bagatur/chain-of-note-wiki.
## Environment Setup
Uses Anthropic claude-2 chat model. Set Anthropi... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\chain-of-note-wiki\README.md |
.md | # Chat Bot Feedback Template
This template shows how to evaluate your chat bot without explicit user feedback. It defines a simple chat bot in [chain.py](https://github.com/langchain-ai/langchain/blob/master/templates/chat-bot-feedback/chat_bot_feedback/chain.py) and custom evaluator that scores bot response effective... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\chat-bot-feedback\README.md |
.md |
# cohere-librarian
This template turns Cohere into a librarian.
It demonstrates the use of a router to switch between chains that can handle different things: a vector database with Cohere embeddings; a chat bot that has a prompt with some information about the library; and finally a RAG chatbot that has access to t... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\cohere-librarian\README.md |
.md |
# csv-agent
This template uses a [csv agent](https://python.langchain.com/docs/integrations/toolkits/csv) with tools (Python REPL) and memory (vectorstore) for interaction (question-answering) with text data.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
To set up ... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\csv-agent\README.md |
.md | # Contributing
Thanks for taking the time to contribute a new template!
We've tried to make this process as simple and painless as possible.
If you need any help at all, please reach out!
To contribute a new template, first fork this repository.
Then clone that fork and pull it down locally.
Set up an appropriate dev... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\docs\CONTRIBUTING.md |
.md | # Templates
Highlighting a few different categories of templates
## ⭐ Popular
These are some of the more popular templates to get started with.
- [Retrieval Augmented Generation Chatbot](../rag-conversation): Build a chatbot over your data. Defaults to OpenAI and PineconeVectorStore.
- [Extraction with OpenAI Funct... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\docs\INDEX.md |
.md | # Launching LangServe from a Package
You can also launch LangServe directly from a package, without having to pull it into a project.
This can be useful when you are developing a package and want to test it quickly.
The downside of this is that it gives you a little less control over how the LangServe APIs are configu... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\docs\LAUNCHING_PACKAGE.md |
.md |
# elastic-query-generator
This template allows interacting with Elasticsearch analytics databases in natural language using LLMs.
It builds search queries via the Elasticsearch DSL API (filters and aggregations).
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
###... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\elastic-query-generator\README.md |
.md |
# extraction-anthropic-functions
This template enables [Anthropic function calling](https://python.langchain.com/docs/integrations/chat/anthropic_functions).
This can be used for various tasks, such as extraction or tagging.
The function output schema can be set in `chain.py`.
## Environment Setup
Set the `ANTH... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\extraction-anthropic-functions\README.md |
.md |
# extraction-openai-functions
This template uses [OpenAI function calling](https://python.langchain.com/docs/modules/chains/how_to/openai_functions) for extraction of structured output from unstructured input text.
The extraction output schema can be set in `chain.py`.
## Environment Setup
Set the `OPENAI_API_KEY... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\extraction-openai-functions\README.md |
.md |
# gemini-functions-agent
This template creates an agent that uses Google Gemini function calling to communicate its decisions on what actions to take.
This example creates an agent that can optionally look up information on the internet using Tavily's search engine.
[See an example LangSmith trace here](https://sm... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\gemini-functions-agent\README.md |
.md |
# guardrails-output-parser
This template uses [guardrails-ai](https://github.com/guardrails-ai/guardrails) to validate LLM output.
The `GuardrailsOutputParser` is set in `chain.py`.
The default example protects against profanity.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the O... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\guardrails-output-parser\README.md |
.md | # Hybrid Search in Weaviate
This template shows you how to use the hybrid search feature in Weaviate. Hybrid search combines multiple search algorithms to improve the accuracy and relevance of search results.
Weaviate uses both sparse and dense vectors to represent the meaning and context of search queries and docume... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\hybrid-search-weaviate\README.md |
.md |
# hyde
This template uses HyDE with RAG.
HyDE (Hypothetical Document Embeddings) is a method used to enhance retrieval by generating a hypothetical document for an incoming query.
The document is then embedded, and that embedding is utilized to look up real documents... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\hyde\README.md |
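The HyDE flow described in this row is simple enough to sketch in plain Python. A minimal illustration of the order of operations, where the three callables are hypothetical stand-ins for a real LLM, embedder, and vector store rather than the template's actual classes:

```python
def hyde_retrieve(query, generate_document, embed, vector_search):
    """HyDE: rather than embedding the query directly, first have an LLM
    write a hypothetical answer document, then use that document's
    embedding to look up real documents in the vector store."""
    hypothetical_doc = generate_document(query)  # LLM drafts a fake answer
    doc_embedding = embed(hypothetical_doc)      # embed the fake answer
    return vector_search(doc_embedding)          # nearest real documents
```

In the template itself these steps are wired together with LangChain components; the sketch only shows the control flow.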
.md |
# llama2-functions
This template performs extraction of structured data from unstructured data using a [LLaMA2 model that supports a specified JSON output schema](https://github.com/ggerganov/llama.cpp/blob/master/grammars/README.md).
The extraction schema can be set in `chain.py`.
## Environment Setup
This will ... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\llama2-functions\README.md |
.md | # mongo-parent-document-retrieval
This template performs RAG using MongoDB and OpenAI.
It does a more advanced form of RAG called Parent-Document Retrieval.
In this form of retrieval, a large document is first split into medium-sized chunks.
From there, those medium-sized chunks are split into small chunks.
Embeddings... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\mongo-parent-document-retrieval\README.md |
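The two-level splitting described above can be sketched without MongoDB: child chunks are what get embedded and searched, while the parent chunk each child belongs to is what gets returned to the LLM. A minimal illustration of the bookkeeping (chunk sizes here are illustrative, not the template's defaults):

```python
def build_parent_child_index(document, parent_size=400, child_size=100):
    """Split a document into medium parent chunks, then split each parent
    into small child chunks, recording which parent each child came from.
    In parent-document retrieval, the child text is embedded for search,
    and the matching child's parent chunk is what gets returned."""
    parents = [document[i:i + parent_size]
               for i in range(0, len(document), parent_size)]
    children = []  # (child_text, parent_id) pairs
    for parent_id, parent in enumerate(parents):
        for j in range(0, len(parent), child_size):
            children.append((parent[j:j + child_size], parent_id))
    return parents, children
```

A real implementation would store the child embeddings in the vector index and the parent chunks in a document store keyed by `parent_id`.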
.txt | Dune is a 1965 epic science fiction novel by American author Frank Herbert, originally published as two separate serials in Analog magazine. It tied with Roger Zelazny's This Immortal for the Hugo Award in 1966 and it won the inaugural Nebula Award for Best Novel. It is the first installment of the Dune Chronicles. It ... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\neo4j-advanced-rag\dune.txt |
.md | # neo4j-advanced-rag
This template allows you to balance precise embeddings and context retention by implementing advanced retrieval strategies.
## Strategies
1. **Typical RAG**:
- Traditional method where the exact data indexed is the data retrieved.
2. **Parent retriever**:
- Instead of indexing entire docum... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\neo4j-advanced-rag\README.md |
.md |
# neo4j_cypher
This template allows you to interact with a Neo4j graph database in natural language, using an OpenAI LLM.
It transforms a natural language question into a Cypher query (used to fetch data from Neo4j databases), executes the query, and provides a natural language response based on the query results.
... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\neo4j-cypher\README.md |
.md |
# neo4j-cypher-ft
This template allows you to interact with a Neo4j graph database using natural language, leveraging OpenAI's LLM.
Its main function is to convert natural language questions into Cypher queries (the language used to query Neo4j databases), execute these queries, and provide natural language respons... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\neo4j-cypher-ft\README.md |
.md |
# neo4j-cypher-memory
This template allows you to have conversations with a Neo4j graph database in natural language, using an OpenAI LLM.
It transforms a natural language question into a Cypher query (used to fetch data from Neo4j databases), executes the query, and provides a natural language response based on the ... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\neo4j-cypher-memory\README.md |
.md |
# neo4j-generation
This template pairs LLM-based knowledge graph extraction with Neo4j AuraDB, a fully managed cloud graph database.
You can create a free instance on [Neo4j Aura](https://neo4j.com/cloud/platform/aura-graph-database?utm_source=langchain&utm_content=langserve).
When you initiate a free database inst... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\neo4j-generation\README.md |
.txt | Dune is a 1965 epic science fiction novel by American author Frank Herbert, originally published as two separate serials in Analog magazine. It tied with Roger Zelazny's This Immortal for the Hugo Award in 1966 and it won the inaugural Nebula Award for Best Novel. It is the first installment of the Dune Chronicles. It ... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\neo4j-parent\dune.txt |
.md |
# neo4j-parent
This template allows you to balance precise embeddings and context retention by splitting documents into smaller chunks and retrieving their original or larger text information.
Using a Neo4j vector index, the package queries child nodes using vector similarity search and retrieves the corresponding ... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\neo4j-parent\README.md |
.md | # neo4j-semantic-layer
This template is designed to implement an agent capable of interacting with a graph database like Neo4j through a semantic layer using OpenAI function calling.
The semantic layer equips the agent with a suite of robust tools, allowing it to interact with the graph database based on the user's int... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\neo4j-semantic-layer\README.md |
.md | # neo4j-semantic-ollama
This template is designed to implement an agent capable of interacting with a graph database like Neo4j through a semantic layer using Mixtral as a JSON-based agent.
The semantic layer equips the agent with a suite of robust tools, allowing it to interact with the graph database based on the us... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\neo4j-semantic-ollama\README.md |
.txt | Dune is a 1965 epic science fiction novel by American author Frank Herbert, originally published as two separate serials in Analog magazine. It tied with Roger Zelazny's This Immortal for the Hugo Award in 1966 and it won the inaugural Nebula Award for Best Novel. It is the first installment of the Dune Chronicles. It ... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\neo4j-vector-memory\dune.txt |
.md |
# neo4j-vector-memory
This template allows you to integrate an LLM with a vector-based retrieval system using Neo4j as the vector store.
Additionally, it uses the graph capabilities of the Neo4j database to store and retrieve the dialogue history of a specific user's session.
Having the dialogue history stored as a g... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\neo4j-vector-memory\README.md |
.md |
# nvidia-rag-canonical
This template performs RAG using Milvus Vector Store and NVIDIA Models (Embedding and Chat).
## Environment Setup
You should export your NVIDIA API Key as an environment variable.
If you do not have an NVIDIA API Key, you can create one by following these steps:
1. Create a free account with ... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\nvidia-rag-canonical\README.md |
.md |
# openai-functions-agent
This template creates an agent that uses OpenAI function calling to communicate its decisions on what actions to take.
This example creates an agent that can optionally look up information on the internet using Tavily's search engine.
## Environment Setup
The following environment variabl... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\openai-functions-agent\README.md |
.md | # OpenAI Functions Agent - Gmail
Ever struggled to reach inbox zero?
Using this template, you can create and customize your very own AI assistant to manage your Gmail account. Using the default Gmail tools, it can read, search through, and draft emails to respond on your behalf. It also has access to a Tavily search... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\openai-functions-agent-gmail\README.md |
.md | # openai-functions-tool-retrieval-agent
The novel idea introduced in this template is using retrieval to select the set of tools to use to answer an agent query. This is useful when you have many tools to select from. You cannot put the description of all the tools in the prompt (because of context le... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\openai-functions-tool-retrieval-agent\README.md |
.md | # pii-protected-chatbot
This template creates a chatbot that flags any incoming PII and doesn't pass it to the LLM.
## Environment Setup
The following environment variables need to be set:
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage
To use this package, you should first hav... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\pii-protected-chatbot\README.md |
.md |
# pirate-speak
This template converts user input into pirate speak.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\pirate-speak\README.md |
.md | # pirate-speak-configurable
This template converts user input into pirate speak. It shows how you can allow
`configurable_alternatives` in the Runnable, allowing you to select from
OpenAI, Anthropic, or Cohere as your LLM Provider in the playground (or via API).
## Environment Setup
Set the following environment va... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\pirate-speak-configurable\README.md |
.md |
# plate-chain
This template enables parsing of data from laboratory plates.
In the context of biochemistry or molecular biology, laboratory plates are commonly used tools to hold samples in a grid-like format.
This template can parse the resulting data into a standardized (e.g., JSON) format for further processing.
## Envi... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\plate-chain\README.md |
.md | # propositional-retrieval
This template demonstrates the multi-vector indexing strategy proposed by Chen, et. al.'s [Dense X Retrieval: What Retrieval Granularity Should We Use?](https://arxiv.org/abs/2312.06648). The prompt, which you can [try out on the hub](https://smith.langchain.com/hub/wfh/proposal-indexing), di... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\propositional-retrieval\README.md |
.md | # python-lint
This agent specializes in generating high-quality Python code with a focus on proper formatting and linting. It uses `black`, `ruff`, and `mypy` to ensure the code meets standard quality checks.
This streamlines the coding process by integrating and responding to these checks, resulting in reliable and ... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\python-lint\README.md |
.md |
# rag-astradb
This template performs RAG using Astra DB (the `AstraDB` vector store class).
## Environment Setup
An [Astra DB](https://astra.datastax.com) database is required; free tier is fine.
- You need the database **API endpoint** (such as `https://0123...-us-east1.apps.astra.datastax.com`) ...
- ... and a **... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-astradb\README.md |
.txt | # source: https://www.thoughtco.com/a-guide-to-the-twenty-nine-insect-orders-1968419
Order Thysanura: The silverfish and firebrats are found in the order Thysanura. They are wingless insects often found in people's attics, and have a lifespan of several years. There are about 600 species worldwide.
Order Diplura: Dipl... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-astradb\sources.txt |
.md |
# rag-aws-bedrock
This template is designed to connect with the AWS Bedrock service, a managed service that offers a set of foundation models.
It primarily uses `Anthropic Claude` for text generation and `Amazon Titan` for text embedding, and utilizes FAISS as the vectorstore.
For additional context on the RAG p... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-aws-bedrock\README.md |
.md | # rag-aws-kendra
This template is an application that utilizes Amazon Kendra, a machine learning powered search service, and Anthropic Claude for text generation. The application retrieves documents using a Retrieval chain to answer questions from your documents.
It uses the `boto3` library to connect with the Bedro... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-aws-kendra\README.md |
.md |
# rag-chroma
This template performs RAG using Chroma and OpenAI.
The vectorstore is created in `chain.py` and by default indexes a [popular blog post on Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) for question-answering.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to acces... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-chroma\README.md |
.md |
# rag-chroma-multi-modal
Multi-modal LLMs enable visual assistants that can perform question-answering about images.
This template creates a visual assistant for slide decks, which often contain visuals such as graphs or figures.
It uses OpenCLIP embeddings to embed all of the slide images and stores them in Chroma... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-chroma-multi-modal\README.md |
.md |
# rag-chroma-multi-modal-multi-vector
Multi-modal LLMs enable visual assistants that can perform question-answering about images.
This template creates a visual assistant for slide decks, which often contain visuals such as graphs or figures.
It uses GPT-4V to create image summaries for each slide, embeds the summa... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-chroma-multi-modal-multi-vector\README.md |
.md |
# rag-chroma-private
This template performs RAG with no reliance on external APIs.
It utilizes Ollama for the LLM, GPT4All for embeddings, and Chroma for the vectorstore.
The vectorstore is created in `chain.py` and by default indexes a [popular blog post on Agents](https://lilianweng.github.io/posts/2023-06-23-agent... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-chroma-private\README.md |
.md |
# rag-codellama-fireworks
This template performs RAG on a codebase.
It uses codellama-34b hosted by Fireworks' [LLM inference API](https://blog.fireworks.ai/accelerating-code-completion-with-fireworks-fast-llm-inference-f4e8b5ec534a).
## Environment Setup
Set the `FIREWORKS_API_KEY` environment variable to acces... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-codellama-fireworks\README.md |
.md |
# rag-conversation
This template is used for [conversational](https://python.langchain.com/docs/expression_language/cookbook/retrieval#conversational-retrieval-chain) [retrieval](https://python.langchain.com/docs/use_cases/question_answering/), which is one of the most popular LLM use-cases.
It passes both a conver... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-conversation\README.md |
.md | # rag-conversation-zep
This template demonstrates building a RAG conversation app using Zep.
Included in this template:
- Populating a [Zep Document Collection](https://docs.getzep.com/sdk/documents/) with a set of documents (a Collection is analogous to an index in other Vector Databases).
- Using Zep's [integrated... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-conversation-zep\README.md |
.md |
# rag-elasticsearch
This template performs RAG using [ElasticSearch](https://python.langchain.com/docs/integrations/vectorstores/elasticsearch).
It relies on sentence transformer `MiniLM-L6-v2` for embedding passages and questions.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the Op... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-elasticsearch\README.md |
.md |
# rag-fusion
This template enables RAG fusion using a re-implementation of the project found [here](https://github.com/Raudaschl/rag-fusion).
It performs multiple query generation and Reciprocal Rank Fusion to re-rank search results.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-fusion\README.md |
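Reciprocal Rank Fusion itself is a compact scoring rule: each document's fused score is the sum of 1 / (k + rank) over every ranked result list it appears in. A minimal sketch, assuming the conventional smoothing constant k = 60 (not necessarily this template's setting):

```python
def reciprocal_rank_fusion(result_lists, k=60):
    """Merge several ranked lists of doc IDs into one fused ranking.

    A document's fused score is the sum over lists of 1 / (k + rank),
    where rank is its 1-based position in that list; documents ranked
    highly by several queries rise to the top."""
    scores = {}
    for results in result_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first
    return sorted(scores, key=scores.get, reverse=True)

# "a" is ranked first by both generated queries, so it is ranked first overall.
fused = reciprocal_rank_fusion([["a", "b", "c"], ["a", "c", "d"]])
```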
.md |
# rag-gemini-multi-modal
Multi-modal LLMs enable visual assistants that can perform question-answering about images.
This template creates a visual assistant for slide decks, which often contain visuals such as graphs or figures.
It uses OpenCLIP embeddings to embed all of the slide images and stores them in Chroma... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-gemini-multi-modal\README.md |
.md | # rag-google-cloud-sensitive-data-protection
This template is an application that utilizes Google Vertex AI Search, a machine learning powered search service, and
PaLM 2 for Chat (chat-bison). The application uses a Retrieval chain to answer questions based on your documents.
This template is an application that util... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-google-cloud-sensitive-data-protection\README.md |
.md | # rag-google-cloud-vertexai-search
This template is an application that utilizes Google Vertex AI Search, a machine learning powered search service, and
PaLM 2 for Chat (chat-bison). The application uses a Retrieval chain to answer questions based on your documents.
For more context on building RAG applications with ... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-google-cloud-vertexai-search\README.md |
.md |
# rag-gpt-crawler
GPT-crawler will crawl websites to produce files for use in custom GPTs or other apps (RAG).
This template uses [gpt-crawler](https://github.com/BuilderIO/gpt-crawler) to build a RAG app.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Crawling
... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-gpt-crawler\README.md |
.md | # rag-lancedb
This template performs RAG using LanceDB and OpenAI.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain p... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-lancedb\README.md |
.md |
# rag-matching-engine
This template performs RAG using Google Cloud Platform's Vertex AI with the matching engine.
It will utilize a previously created index to retrieve relevant documents or contexts based on user-provided questions.
## Environment Setup
An index should be created before running the code.
The ... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-matching-engine\README.md |
.md | # rag-momento-vector-index
This template performs RAG using Momento Vector Index (MVI) and OpenAI.
> MVI: the most productive, easiest to use, serverless vector index for your data. To get started with MVI, simply sign up for an account. There's no need to handle infrastructure, manage servers, or be concerned about ... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-momento-vector-index\README.md |
.md |
# rag-mongo
This template performs RAG using MongoDB and OpenAI.
## Environment Setup
You should export two environment variables, one being your MongoDB URI, the other being your OpenAI API KEY.
If you do not have a MongoDB URI, see the `Setup Mongo` section at the bottom for instructions on how to do so.
```shel... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-mongo\README.md |
.md | # RAG with Multiple Indexes (Fusion)
A QA application that queries multiple domain-specific retrievers and selects the most relevant documents from across all retrieved results.
## Environment Setup
This application queries PubMed, ArXiv, Wikipedia, and [Kay AI](https://www.kay.ai) (for SEC filings).
You will need ... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-multi-index-fusion\README.md |
.md | # RAG with Multiple Indexes (Routing)
A QA application that routes between different domain-specific retrievers given a user question.
## Environment Setup
This application queries PubMed, ArXiv, Wikipedia, and [Kay AI](https://www.kay.ai) (for SEC filings).
You will need to create a free Kay AI account and [get yo... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-multi-index-router\README.md |
.md |
# rag-multi-modal-local
Visual search is a familiar application to many with iPhones or Android devices. It allows users to search photos using natural language.
With the release of open source multi-modal LLMs, it's possible to build this kind of application for your own private photo collection.
Th... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-multi-modal-local\README.md |
.md |
# rag-multi-modal-mv-local
Visual search is a familiar application to many with iPhones or Android devices. It allows users to search photos using natural language.
With the release of open source multi-modal LLMs, it's possible to build this kind of application for your own private photo collection.
... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-multi-modal-mv-local\README.md |
.md |
# rag-ollama-multi-query
This template performs RAG using Ollama and OpenAI with a multi-query retriever.
The multi-query retriever is an example of query transformation, generating multiple queries from different perspectives based on the user's input query.
For each query, it retrieves a set of relevant documen... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-ollama-multi-query\README.md |
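The merge step described above — taking the unique union of documents retrieved across all generated queries — is small enough to sketch directly (a plain-Python illustration, not the template's actual retriever code):

```python
def unique_union(doc_lists):
    """Merge the documents retrieved for each generated query into a
    single deduplicated list, preserving first-seen order — the final
    step of a multi-query retriever."""
    seen = set()
    merged = []
    for docs in doc_lists:
        for doc in docs:
            if doc not in seen:
                seen.add(doc)
                merged.append(doc)
    return merged
```

In LangChain this is handled internally by the multi-query retriever; the sketch shows why overlapping results from different query phrasings do not produce duplicates.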
.txt | [INFO] Initializing machine learning training job. Model: Convolutional Neural Network Dataset: MNIST Hyperparameters: ; - Learning Rate: 0.001; - Batch Size: 64
[INFO] Loading training data. Training data loaded successfully. Number of training samples: 60,000
[INFO] Loading validation data. Validation data loaded... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-opensearch\dummy_data.txt |
.md | # rag-opensearch
This Template performs RAG using [OpenSearch](https://python.langchain.com/docs/integrations/vectorstores/opensearch).
## Environment Setup
Set the following environment variables.
- `OPENAI_API_KEY` - To access OpenAI Embeddings and Models.
And optionally set the OpenSearch ones if not using de... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-opensearch\README.md |
.md |
# rag-pinecone
This template performs RAG using Pinecone and OpenAI.
## Environment Setup
This template uses Pinecone as a vectorstore and requires that `PINECONE_API_KEY`, `PINECONE_ENVIRONMENT`, and `PINECONE_INDEX` are set.
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage
T... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-pinecone\README.md |
.md |
# rag-pinecone-multi-query
This template performs RAG using Pinecone and OpenAI with a multi-query retriever.
It uses an LLM to generate multiple queries from different perspectives based on the user's input query.
For each query, it retrieves a set of relevant documents and takes the unique union across all quer... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-pinecone-multi-query\README.md |
.md |
# rag-pinecone-rerank
This template performs RAG using Pinecone and OpenAI along with [Cohere to perform re-ranking](https://txt.cohere.com/rerank/) on returned documents.
Re-ranking provides a way to rank retrieved documents using specified filters or criteria.
## Environment Setup
This template uses Pinecone as... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-pinecone-rerank\README.md |
.md |
# rag-redis
This template performs RAG using Redis (vector database) and OpenAI (LLM) on Nike's financial 10-K filings.
It relies on the sentence transformer `all-MiniLM-L6-v2` for embedding chunks of the PDF and user questions.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-redis\README.md |
.md | # rag-self-query
This template performs RAG using the self-query retrieval technique. The main idea is to let an LLM convert unstructured queries into structured queries. See the [docs for more on how this works](https://python.langchain.com/docs/modules/data_connection/retrievers/self_query).
## Environment Setup
I... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-self-query\README.md |
.md | # rag-semi-structured
This template performs RAG on semi-structured data, such as a PDF with text and tables.
See [this cookbook](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_Structured_RAG.ipynb) as a reference.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-semi-structured\README.md |
.md |
# rag-singlestoredb
This template performs RAG using SingleStoreDB and OpenAI.
## Environment Setup
This template uses SingleStoreDB as a vectorstore and requires that `SINGLESTOREDB_URL` is set. It should take the form `admin:password@svc-xxx.svc.singlestore.com:port/db_name`
Set the `OPENAI_API_KEY` environment ... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-singlestoredb\README.md |
.md |
# rag_supabase
This template performs RAG with Supabase.
[Supabase](https://supabase.com/docs) is an open-source Firebase alternative. It is built on top of [PostgreSQL](https://en.wikipedia.org/wiki/PostgreSQL), a free and open-source relational database management system (RDBMS) and uses [pgvector](https://github.... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-supabase\README.md |
.md |
# rag-timescale-conversation
This template is used for [conversational](https://python.langchain.com/docs/expression_language/cookbook/retrieval#conversational-retrieval-chain) [retrieval](https://python.langchain.com/docs/use_cases/question_answering/), which is one of the most popular LLM use-cases.
It passes both... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-timescale-conversation\README.md |
.md | # RAG with Timescale Vector using hybrid search
This template shows how to use timescale-vector with the self-query retriever to perform hybrid search on similarity and time.
This is useful any time your data has a strong time-based component. Some examples of such data are:
- News articles (politics, business, etc.)
- ... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-timescale-hybrid-search-time\README.md |
.md |
# rag-vectara
This template performs RAG with Vectara.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
Also, ensure the following environment variables are set:
* `VECTARA_CUSTOMER_ID`
* `VECTARA_CORPUS_ID`
* `VECTARA_API_KEY`
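For example, in your shell (placeholder values):

```shell
export OPENAI_API_KEY="<your-openai-api-key>"
export VECTARA_CUSTOMER_ID="<your-customer-id>"
export VECTARA_CORPUS_ID="<your-corpus-id>"
export VECTARA_API_KEY="<your-vectara-api-key>"
```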
## Usage
To use this package, you shou... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-vectara\README.md |
.md |
# rag-vectara-multiquery
This template performs multiquery RAG with Vectara.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
Also, ensure the following environment variables are set:
* `VECTARA_CUSTOMER_ID`
* `VECTARA_CORPUS_ID`
* `VECTARA_API_KEY`
## Usage
To use ... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-vectara-multiquery\README.md |
.md |
# rag-weaviate
This template performs RAG with Weaviate.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
Also, ensure the following environment variables are set:
* `WEAVIATE_ENVIRONMENT`
* `WEAVIATE_API_KEY`
## Usage
To use this package, you should first have the ... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rag-weaviate\README.md |
.md | # research-assistant
This template implements a version of
[GPT Researcher](https://github.com/assafelovic/gpt-researcher) that you can use
as a starting point for a research agent.
## Environment Setup
The default template relies on ChatOpenAI and DuckDuckGo, so you will need the
following environment variable:
... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\research-assistant\README.md |
.md | # retrieval-agent
This package uses Azure OpenAI to do retrieval using an agent architecture.
By default, this does retrieval over Arxiv.
## Environment Setup
Since we are using Azure OpenAI, we will need to set the following environment variables:
```shell
export AZURE_OPENAI_ENDPOINT=...
export AZURE_OPENAI_API_V... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\retrieval-agent\README.md |
.md | # retrieval-agent-fireworks
This package uses open source models hosted on FireworksAI to do retrieval using an agent architecture. By default, this does retrieval over Arxiv.
We will use `Mixtral8x7b-instruct-v0.1`, which is shown in this blog to yield reasonable
results with function calling even though it is not f... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\retrieval-agent-fireworks\README.md |
.md |
# rewrite_retrieve_read
This template implements the query transformation (re-writing) method from the paper [Query Rewriting for Retrieval-Augmented Large Language Models](https://arxiv.org/pdf/2305.14283.pdf) to optimize RAG.
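The Rewrite-Retrieve-Read pattern can be sketched with toy components; here a stopword filter stands in for the LLM rewriter, and word overlap stands in for the retriever:

```python
# Rewrite-Retrieve-Read with stubbed components: the "rewriter" stands
# in for an LLM that turns a verbose user question into a search query.
def rewrite(question: str) -> str:
    stopwords = {"what", "is", "the", "can", "you", "tell", "me", "about", "please"}
    words = [w.strip("?.,!") for w in question.lower().split()]
    return " ".join(w for w in words if w and w not in stopwords)

def retrieve(query: str, corpus: dict[str, str]) -> str:
    # Stand-in for vector search: pick the doc with most shared words.
    q = set(query.split())
    return max(corpus, key=lambda k: len(q & set(corpus[k].split())))

corpus = {
    "doc_a": "nikola tesla pioneered alternating current",
    "doc_b": "thomas edison championed direct current",
}
query = rewrite("Can you tell me about Nikola Tesla please?")
print(query, retrieve(query, corpus))
```

In the template, the rewrite step is a real LLM call, which handles paraphrase and intent far better than keyword stripping; the point is that retrieval sees the rewritten query, not the raw question.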
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the O... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\rewrite-retrieve-read\README.md |
.md | # Langchain - Robocorp Action Server
This template enables using [Robocorp Action Server](https://github.com/robocorp/robocorp) served actions as tools for an Agent.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain ... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\robocorp-action-server\README.md |
.md |
# self-query-qdrant
This template performs [self-querying](https://python.langchain.com/docs/modules/data_connection/retrievers/self_query/)
using Qdrant and OpenAI. By default, it uses an artificial dataset of 10 documents, but you can replace it with your own dataset.
## Environment Setup
Set the `OPENAI_API_KEY... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\self-query-qdrant\README.md |
.md |
# self-query-supabase
This template allows natural-language structured querying of Supabase.
[Supabase](https://supabase.com/docs) is an open-source alternative to Firebase, built on top of [PostgreSQL](https://en.wikipedia.org/wiki/PostgreSQL).
It uses [pgvector](https://github.com/pgvector/pgvector) to store em... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\self-query-supabase\README.md |
.md | # shopping-assistant
This template creates a shopping assistant that helps users find products that they are looking for.
This template will use `Ionic` to search for products.
## Environment Setup
This template will use `OpenAI` by default.
Be sure that `OPENAI_API_KEY` is set in your environment.
## Usage
To us... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\shopping-assistant\README.md |
.md | # skeleton-of-thought
Implements "Skeleton of Thought" from [this](https://sites.google.com/view/sot-llm) paper.
This technique makes it possible to generate longer generations more quickly by first generating a skeleton, then generating each point of the outline.
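The two-phase idea can be sketched with stubbed LLM calls; the functions below are placeholders, but they show where the parallelism (and hence the speedup) comes from:

```python
from concurrent.futures import ThreadPoolExecutor

# Skeleton-of-Thought with stubbed LLM calls: first produce a short
# outline, then expand every point independently and in parallel.
def make_skeleton(topic: str) -> list[str]:
    # Stand-in for the LLM call that drafts the outline.
    return [f"{topic}: definition", f"{topic}: examples", f"{topic}: caveats"]

def expand_point(point: str) -> str:
    # Stand-in for the LLM call that fleshes out one outline point.
    return f"[expanded] {point}"

skeleton = make_skeleton("caching")
with ThreadPoolExecutor() as pool:
    sections = list(pool.map(expand_point, skeleton))
print(sections[0])
```

Because each point expansion depends only on the skeleton, the per-point LLM calls can be issued concurrently instead of generating the whole answer token by token.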
## Environment Setup
Set the `OPENAI_API_KEY` envir... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\skeleton-of-thought\README.md |
.md | # solo-performance-prompting-agent
This template creates an agent that transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas.
A cognitive synergist refers to an intelligent agent that collaborates with multiple minds, combining their individual strength... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\solo-performance-prompting-agent\README.md |
.md |
# sql-llama2
This template enables a user to interact with a SQL database using natural language.
It uses LLamA2-13b hosted by [Replicate](https://python.langchain.com/docs/integrations/llms/replicate), but can be adapted to any API that supports LLaMA2 including [Fireworks](https://python.langchain.com/docs/integr... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\sql-llama2\README.md |
.md |
# sql-llamacpp
This template enables a user to interact with a SQL database using natural language.
It uses [Mistral-7b](https://mistral.ai/news/announcing-mistral-7b/) via [llama.cpp](https://github.com/ggerganov/llama.cpp) to run inference locally on a Mac laptop.
## Environment Setup
To set up the environment,... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\sql-llamacpp\README.md |
.md | # sql-ollama
This template enables a user to interact with a SQL database using natural language.
It uses [Zephyr-7b](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha) via [Ollama](https://ollama.ai/library/zephyr) to run inference locally on a Mac laptop.
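The overall flow is: translate the question to SQL with the model, execute it, then answer from the result. A minimal sketch against an in-memory SQLite database, with a lookup table standing in for the LLM translation step (the `employees` schema is just an example):

```python
import sqlite3

# Natural-language-to-SQL with a stubbed model: the lookup table stands
# in for the LLM that would translate the question into a query.
def question_to_sql(question: str) -> str:
    templates = {
        "how many employees are there": "SELECT COUNT(*) FROM employees",
    }
    return templates[question.lower().rstrip("?")]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT)")
conn.executemany("INSERT INTO employees VALUES (?)", [("Ada",), ("Lin",)])

sql = question_to_sql("How many employees are there?")
count = conn.execute(sql).fetchone()[0]
print(count)  # → 2
```

The template replaces the lookup with a real model prompt that includes the database schema, so arbitrary questions can be translated rather than only known ones.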
## Environment Setup
Before using this template, you n... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\sql-ollama\README.md |
.md | # sql-pgvector
This template enables a user to use `pgvector` to combine PostgreSQL with semantic search / RAG.
It uses the [PGVector](https://github.com/pgvector/pgvector) extension as shown in the [RAG empowered SQL cookbook](https://github.com/langchain-ai/langchain/blob/master/cookbook/retrieval_in_sql.ipynb).
## E... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\sql-pgvector\README.md |
.md | # sql-research-assistant
This package does research over a SQL database.
## Usage
This package relies on multiple models, which have the following dependencies:
- OpenAI: set the `OPENAI_API_KEY` environment variable
- Ollama: [install and run Ollama](https://python.langchain.com/docs/integrations/chat/ollama)
- ll... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\sql-research-assistant\README.md |
.md | # stepback-qa-prompting
This template replicates the "Step-Back" prompting technique that improves performance on complex questions by first asking a "step back" question.
This technique can be combined with regular question-answering applications by doing retrieval on both the original and step-back question.
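A toy sketch of that combined retrieval, with a lookup table standing in for the LLM that abstracts the question one level up (the corpus and questions are made up for illustration):

```python
def step_back(question: str) -> str:
    # Stand-in for the LLM that rephrases the question more generically.
    generic = {
        "what year did the first iphone ship": "what is the history of the iphone",
    }
    return generic.get(question, question)

def retrieve(q: str, corpus: list[str]) -> list[str]:
    # Stand-in for a retriever: any shared word counts as a match.
    words = set(q.split())
    return [doc for doc in corpus if words & set(doc.split())]

corpus = ["iphone history begins in 2007", "android launched in 2008"]
question = "what year did the first iphone ship"
# Retrieve on both the original and the step-back question, then dedupe.
docs = list(dict.fromkeys(
    retrieve(question, corpus) + retrieve(step_back(question), corpus)
))
print(docs)
```

The broader step-back question often matches background documents that the narrow original question would miss, and both result sets feed the final answer.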
Read... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\stepback-qa-prompting\README.md |
.md |
# summarize-anthropic
This template uses Anthropic's `Claude2` to summarize long documents.
It leverages a large context window of 100k tokens, allowing for summarization of documents over 100 pages.
You can see the summarization prompt in `chain.py`.
## Environment Setup
Set the `ANTHROPIC_API_KEY` environment... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\summarize-anthropic\README.md |
.md |
# vertexai-chuck-norris
This template makes jokes about Chuck Norris using Vertex AI PaLM2.
## Environment Setup
First, make sure you have a Google Cloud project with
an active billing account, and have the [gcloud CLI installed](https://cloud.google.com/sdk/docs/install).
Configure [application default credentia... | C:\Users\wesla\CodePilotAI\repositories\langchain\templates\vertexai-chuck-norris\README.md |