Columns: id (string, length 14–15), text (string, length 17–2.72k), source (string, length 47–115)
ae8b65ca644d-2
print( chain.run( {"question": """Write a message to remind John to do password reset for his website to stay secure."""}, callbacks=[StdOutCallbackHandler()], ) ) From the output, you can see the following context from user input has sensitive data. # Context from user input During our recent meeting on February 23, ...
https://python.langchain.com/docs/integrations/llms/opaqueprompts
ae8b65ca644d-3
During our recent meeting on DATE_TIME_3, at DATE_TIME_2, PERSON_3 provided me with his personal details. His email is EMAIL_ADDRESS_1 and his contact number is PHONE_NUMBER_1. He lives in LOCATION_3, LOCATION_2, and belongs to the NRP_3 nationality with NRP_2 beliefs and a leaning towards the Democratic party. He ment...
https://python.langchain.com/docs/integrations/llms/opaqueprompts
ae8b65ca644d-4
prompt=PromptTemplate.from_template(prompt_template), llm = OpenAI() pg_chain = ( op.sanitize | RunnableMap( { "response": (lambda x: x["sanitized_input"]) | prompt | llm | StrOutputParser(), "secure_context": lambda x: x["secure_context"], } ) | (lambda x: op.desanitize(x["response"], x["secure_context"])) ) pg_chai...
https://python.langchain.com/docs/integrations/llms/opaqueprompts
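The sanitize/desanitize round trip in the pg_chain above can be illustrated with a toy placeholder map. This is a sketch of the idea only: the helper names and the regex-based detection are my own assumptions, while the real OpaquePrompts service detects sensitive data with trained models.

```python
import re

def sanitize(text, patterns):
    """Replace matches of each labeled regex with numbered placeholders.

    Returns the sanitized text plus the secure context needed to undo it.
    Toy stand-in for op.sanitize; `patterns` maps a label like
    "EMAIL_ADDRESS" to a regex.
    """
    secure_context = {}
    counters = {}

    def make_repl(label):
        def _repl(match):
            counters[label] = counters.get(label, 0) + 1
            placeholder = f"{label}_{counters[label]}"
            secure_context[placeholder] = match.group(0)
            return placeholder
        return _repl

    for label, pattern in patterns.items():
        text = re.sub(pattern, make_repl(label), text)
    return text, secure_context

def desanitize(text, secure_context):
    """Restore the original values recorded in the secure context.

    Toy stand-in for op.desanitize (naive replace; ignores the
    placeholder-prefix collisions a real implementation must handle).
    """
    for placeholder, original in secure_context.items():
        text = text.replace(placeholder, original)
    return text

patterns = {"EMAIL_ADDRESS": r"\b[\w.]+@[\w.]+\b"}
clean, ctx = sanitize("Mail john@example.com", patterns)
restored = desanitize(clean, ctx)
```

The LLM only ever sees `clean`; the `ctx` mapping stays on the client side, which is the point of the sanitize-before-prompt pattern.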
11d7b842c5f0-0
OpenAI OpenAI offers a spectrum of models with different levels of power suitable for different tasks. This example goes over how to use LangChain to interact with OpenAI models # get a token: https://platform.openai.com/account/api-keys from getpass import getpass OPENAI_API_KEY = getpass() import os os.environ["OP...
https://python.langchain.com/docs/integrations/llms/openai
a374f9d94021-0
OpenLLM 🦾 OpenLLM is an open platform for operating large language models (LLMs) in production. It enables developers to easily run inference with any open-source LLMs, deploy to the cloud or on-premises, and build powerful AI apps. Installation​ Install openllm through PyPI Launch OpenLLM server locally​ To start an ...
https://python.langchain.com/docs/integrations/llms/openllm
5020f7e88eea-0
OpenLM OpenLM is a zero-dependency OpenAI-compatible LLM provider that can call different inference endpoints directly via HTTP. It implements the OpenAI Completion class so that it can be used as a drop-in replacement for the OpenAI API. This changeset utilizes BaseOpenAI for minimal added code. This example goes ov...
https://python.langchain.com/docs/integrations/llms/openlm
2fd916aa0e02-0
PipelineAI PipelineAI allows you to run your ML models at scale in the cloud. It also provides API access to several LLM models. This notebook goes over how to use LangChain with PipelineAI. PipelineAI example​ This example shows how PipelineAI integrates with LangChain; it was created by PipelineAI. Setup​ The pipel...
https://python.langchain.com/docs/integrations/llms/pipelineai
0348f2f8eaba-0
Petals Petals runs 100B+ language models at home, BitTorrent-style. This notebook goes over how to use LangChain with Petals. Install petals​ The petals package is required to use the Petals API. Install petals using pip3 install petals. For Apple Silicon (M1/M2) users, please follow this guide https://github.com/bigscie...
https://python.langchain.com/docs/integrations/llms/petals
4bdaff350e3d-0
Predibase Predibase allows you to train, finetune, and deploy any ML model—from linear regression to large language models. This example demonstrates using LangChain with models deployed on Predibase. Setup To run this notebook, you'll need a Predibase account and an API key. You'll also need to install the Predibase Py...
https://python.langchain.com/docs/integrations/llms/predibase
4bdaff350e3d-1
overall_chain = SimpleSequentialChain( chains=[synopsis_chain, review_chain], verbose=True ) review = overall_chain.run("Tragedy at sunset on the beach") Fine-tuned LLM (Use your own fine-tuned LLM from Predibase)​ from langchain.llms import Predibase model = Predibase( model="my-finetuned-LLM", predibase_api_key=os.e...
https://python.langchain.com/docs/integrations/llms/predibase
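SimpleSequentialChain, as used above, simply feeds each chain's single string output into the next chain's single string input. A minimal stand-in (the lambda "chains" are hypothetical placeholders for the real LLM chains):

```python
class SimpleSequential:
    """Toy stand-in for SimpleSequentialChain: each step receives the
    previous step's single string output as its input."""

    def __init__(self, chains):
        self.chains = chains

    def run(self, text):
        for chain in self.chains:
            text = chain(text)
        return text

# Stub "chains" standing in for synopsis_chain and review_chain above.
synopsis_chain = lambda title: f"Synopsis of '{title}'"
review_chain = lambda synopsis: f"Review: {synopsis} is gripping."

overall = SimpleSequential([synopsis_chain, review_chain])
review = overall.run("Tragedy at sunset on the beach")
```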
84f9e3daaef8-0
Prediction Guard pip install predictionguard langchain import os import predictionguard as pg from langchain.llms import PredictionGuard from langchain import PromptTemplate, LLMChain Basic LLM usage​ # Optional, add your OpenAI API Key. This is optional, as Prediction Guard allows # you to access all the latest open ...
https://python.langchain.com/docs/integrations/llms/predictionguard
84f9e3daaef8-1
Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"]) llm_chain = LLMChain(prompt=prompt, llm=pgllm, verbose=True) question = "What NFL team won the Super Bowl in the year Justin Bieber was born?" llm_chain.predict(question=question) template = """Write a {adjec...
https://python.langchain.com/docs/integrations/llms/predictionguard
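The PromptTemplate in the snippet substitutes named input variables into the template string before the text reaches the LLM. A stdlib sketch of just that substitution step, with str.format standing in for the real class:

```python
template = """Question: {question}

Answer: Let's think step by step."""

def format_prompt(template, **variables):
    """Minimal stand-in for PromptTemplate: fill named variables into
    the template string."""
    return template.format(**variables)

prompt = format_prompt(template, question="What is 2 + 2?")
```

The formatted `prompt` is what LLMChain actually sends to the model.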
14245b7f3829-0
PromptLayer OpenAI PromptLayer is the first platform that allows you to track, manage, and share your GPT prompt engineering. PromptLayer acts as middleware between your code and OpenAI’s python library. PromptLayer records all your OpenAI API requests, allowing you to search and explore request history in the PromptLay...
https://python.langchain.com/docs/integrations/llms/promptlayer_openai
fb7831ffb230-0
RELLM RELLM is a library that wraps local Hugging Face pipeline models for structured decoding. It works by generating tokens one at a time. At each step, it masks tokens that don't conform to the provided partial regular expression. Warning - this module is still experimental pip install rellm > /dev/null Hugging Face...
https://python.langchain.com/docs/integrations/llms/rellm_experimental
fb7831ffb230-1
# We'll choose a regex that matches to a structured json string that looks like: # { # "action": "Final Answer", # "action_input": string or dict # } pattern = regex.compile( r'\{\s*"action":\s*"Final Answer",\s*"action_input":\s*(\{.*\}|"[^"]*")\s*\}\nHuman:' ) from langchain_experimental.llms import RELLM model = RE...
https://python.langchain.com/docs/integrations/llms/rellm_experimental
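RELLM's core move is that, at each decoding step, it masks every token whose addition could no longer extend a string matching the pattern. A heavily simplified sketch of that loop: instead of partial regex matching over model logits, it checks prefixes against a literal target, and the tiny vocabulary and greedy token choice are my own toy assumptions.

```python
def constrained_generate(vocab, is_valid_prefix, max_steps=20):
    """Greedy constrained decoding: at each step, keep only tokens whose
    addition still leaves a valid prefix of the target language.
    Toy stand-in for RELLM's per-token regex masking."""
    out = ""
    for _ in range(max_steps):
        allowed = [t for t in vocab if is_valid_prefix(out + t)]
        if not allowed:
            break
        # A real LLM would pick the highest-probability allowed token;
        # we just take the first one.
        out += allowed[0]
    return out

target = '{"action": "Final Answer"}'
vocab = ['{"action"', ': ', '"Final Answer"', '}', 'garbage']
is_prefix = lambda s: target.startswith(s)
result = constrained_generate(vocab, is_prefix)
```

Even with a vocabulary containing junk tokens, the mask forces the output into the required JSON shape, which is the property the structured-decoding approach relies on.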
86562de89390-0
Replicate Replicate runs machine learning models in the cloud. We have a library of open-source models that you can run with a few lines of code. If you're building your own machine learning models, Replicate makes it easy to deploy them at scale. This example goes over how to use LangChain to interact with Replicate m...
https://python.langchain.com/docs/integrations/llms/replicate
86562de89390-2
from getpass import getpass REPLICATE_API_TOKEN = getpass() import os
https://python.langchain.com/docs/integrations/llms/replicate
86562de89390-3
os.environ["REPLICATE_API_TOKEN"] = REPLICATE_API_TOKEN from langchain.llms import Replicate from langchain import PromptTemplate, LLMChain Calling a model​ Find a model on the replicate explore page, and then paste in the model name and version in this format: model_name/version. For example, here is LLama-V2. llm = R...
https://python.langchain.com/docs/integrations/llms/replicate
86562de89390-4
As another example, for this dolly model, click on the API tab. The model name/version would be: replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5 Only the model param is required, but we can add other model params when initializing. For example, if we were running stable diffusion...
https://python.langchain.com/docs/integrations/llms/replicate
86562de89390-5
image_output = text2image("A cat riding a motorcycle by Picasso") image_output 'https://replicate.delivery/pbxt/9fJFaKfk5Zj3akAAn955gjP49G8HQpHK01M6h3BfzQoWSbkiA/out-0.png' The model spits out a URL. Let's render it. poetry run pip install Pillow Collecting Pillow Using cached Pillow-10.0.0-cp39-cp39-manylinux_2_28_x86...
https://python.langchain.com/docs/integrations/llms/replicate
86562de89390-6
response = requests.get(image_output) img = Image.open(BytesIO(response.content)) img Streaming Response​ You can optionally stream the response as it is produced, which is helpful to show interactivity to users for time-consuming generations. See detailed docs on Streaming for more information. from langchain.callbac...
https://python.langchain.com/docs/integrations/llms/replicate
86562de89390-7
prompt = """ User: What is the best way to learn python? Assistant: """ start_time = time.perf_counter() raw_output = llm(prompt) # raw output, no stop end_time = time.perf_counter() print(f"Raw output:\n {raw_output}") print(f"Raw output runtime: {end_time - start_time} seconds") start_time = time.perf_counter() stop...
https://python.langchain.com/docs/integrations/llms/replicate
86562de89390-8
1. Online tutorials and courses: Websites such as Codecademy, Coursera, and edX offer interactive coding lessons and courses on Python. These can be a great way to get started, especially if you prefer a self-paced approach. 2. Books: There are many excellent books on Python that can provide a comprehensive introductio...
https://python.langchain.com/docs/integrations/llms/replicate
86562de89390-9
Please let me know if you have any other questions or if there is anything Raw output runtime: 32.74260359999607 seconds Stopped output: There are several ways to learn Python, and the best method for you will depend on your learning style and goals. Here are a few suggestions: Stopped output runtime: 3.235012899996945...
https://python.langchain.com/docs/integrations/llms/replicate
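The raw vs. stopped outputs timed above differ only in that a stop sequence truncates the generation at its first occurrence. A one-line stand-in for what passing `stop=` to an LLM call achieves (here applied client-side to already generated text):

```python
def apply_stop(text, stop_sequences):
    """Truncate generated text at the earliest stop sequence, if any."""
    cut = len(text)
    for stop in stop_sequences:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

raw = "There are several ways to learn Python.\n\n1. Online tutorials"
stopped = apply_stop(raw, ["\n\n"])
```

This is why the stopped run above finishes in a few seconds while the raw run takes over thirty: the provider can abort generation as soon as the stop sequence appears.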
86562de89390-10
> Entering new SimpleSequentialChain chain... Colorful socks could be named "Dazzle Socks" A logo featuring bright colorful socks could be named Dazzle Socks https://replicate.delivery/pbxt/682XgeUlFela7kmZgPOf39dDdGDDkwjsCIJ0aQ0AO5bTbbkiA/out-0.png > Finished chain. https://replicate.delivery/pbxt/682XgeUlFela7km...
https://python.langchain.com/docs/integrations/llms/replicate
c0ab64047b42-0
Arxiv arXiv is an open-access archive for 2 million scholarly articles in the fields of physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics. This notebook shows how to retrieve scientific articles from Arxiv.org into t...
https://python.langchain.com/docs/integrations/retrievers/arxiv
c0ab64047b42-1
docs[0].page_content[:400] # a content of the Document 'arXiv:1605.08386v1 [math.CO] 26 May 2016\nHEAT-BATH RANDOM WALKS WITH MARKOV BASES\nCAPRICE STANLEY AND TOBIAS WINDISCH\nAbstract. Graphs on lattice points are studied whose edges come from a finite set of\nallowed moves of arbitrary length. We show that the diamet...
https://python.langchain.com/docs/integrations/retrievers/arxiv
c0ab64047b42-2
from getpass import getpass OPENAI_API_KEY = getpass() import os os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY from langchain.chat_models import ChatOpenAI from langchain.chains import ConversationalRetrievalChain model = ChatOpenAI(model_name="gpt-3.5-turbo") # switch to 'gpt-4' qa = ConversationalRetrievalChain.fr...
https://python.langchain.com/docs/integrations/retrievers/arxiv
c0ab64047b42-3
-> **Question**: How does Compositional Reasoning with Large Language Models work? **Answer**: Compositional reasoning with large language models refers to the ability of these models to correctly identify and represent complex concepts by breaking them down into smaller, more basic parts and combining them in a str...
https://python.langchain.com/docs/integrations/retrievers/arxiv
c0ab64047b42-4
The HB-MB process was introduced by Bortz, Kalos, and Lebowitz in 1975 as a way to simulate the dynamics of interacting particles in a lattice at thermal equilibrium. The method has been used to study a variety of physical phenomena, including phase transitions, critical behavior, and transport properties. References:...
https://python.langchain.com/docs/integrations/retrievers/arxiv
8b2aa2e7aa85-0
With Kendra, users can search across a wide range of content types, including documents, FAQs, knowledge bases, manuals, and websites. It supports multiple languages and can understand complex queries, synonyms, and contextual meanings to provide highly relevant search results. import boto3 from langchain.retrievers im...
https://python.langchain.com/docs/integrations/retrievers/amazon_kendra_retriever
ddf3d6daa172-0
Azure Cognitive Search Azure Cognitive Search (formerly known as Azure Search) is a cloud search service that gives developers infrastructure, APIs, and tools for building a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications. Search is foundational to any app that sur...
https://python.langchain.com/docs/integrations/retrievers/azure_cognitive_search
f468fc335bbd-0
BM25 BM25, also known as Okapi BM25, is a ranking function used in information retrieval systems to estimate the relevance of documents to a given search query. This notebook goes over how to use a retriever that under the hood uses BM25, using the rank_bm25 package. from langchain.retrievers import BM25Retriever /worksp...
https://python.langchain.com/docs/integrations/retrievers/bm25
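BM25 scores a document for a query by combining each query term's frequency in the document (saturated by k1), its inverse document frequency, and a length normalization controlled by b. A compact stdlib implementation of the standard Okapi BM25 formula (a sketch of the scoring function, not the rank_bm25 internals):

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each tokenized doc against the query with Okapi BM25."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter()                      # document frequency per term
    for d in docs:
        df.update(set(d))
    scores = []
    for d in docs:
        tf = Counter(d)                 # term frequency in this doc
        score = 0.0
        for term in query:
            if term not in tf:
                continue
            idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
            denom = tf[term] + k1 * (1 - b + b * len(d) / avgdl)
            score += idf * tf[term] * (k1 + 1) / denom
        scores.append(score)
    return scores

docs = ["foo bar".split(), "foo foo baz".split(), "qux".split()]
scores = bm25_scores("foo".split(), docs)
best = scores.index(max(scores))        # doc with the most "foo" mass wins
```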
331c9696c849-0
This notebook shows how to use the ChatGPT Retriever Plugin within LangChain. # STEP 1: Load # Load documents using LangChain's DocumentLoaders # This is from https://langchain.readthedocs.io/en/latest/modules/document_loaders/examples/csv.html from langchain.document_loaders.csv_loader import CSVLoader loader = CSV...
https://python.langchain.com/docs/integrations/retrievers/chatgpt-plugin
331c9696c849-1
write_json("foo.json", data) # STEP 3: Use # Ingest this as you would any other json file in https://github.com/openai/chatgpt-retrieval-plugin/tree/main/scripts/process_json Okay, so we've created the ChatGPT Retriever Plugin, but how do we actually use it? The below code walks through how to do that. We want to use...
https://python.langchain.com/docs/integrations/retrievers/chatgpt-plugin
c7c5f52df5af-0
Chaindesk platform brings data from anywhere (Datasources: Text, PDF, Word, PowerPoint, Excel, Notion, Airtable, Google Sheets, etc.) into Datastores (container of multiple Datasources). Then your Datastores can be connected to ChatGPT via Plugins or any other Large Language Model (LLM) via the Chaindesk API. First, you...
https://python.langchain.com/docs/integrations/retrievers/chaindesk
c7c5f52df5af-1
Document(page_content="✨ Made with DaftpageOpen main menuPricingTemplatesLoginSearchHelpGetting StartedFeaturesAffiliate ProgramHelp CenterWelcome to Daftpage’s help center—the one-stop shop for learning everything about building websites with Daftpage.Daftpage is the simplest way to create websites for all purposes in...
https://python.langchain.com/docs/integrations/retrievers/chaindesk
2e32f06a0cac-0
Let's start by initializing a simple vector store retriever and storing the 2023 State of the Union speech (in chunks). We can set up the retriever to retrieve a high number (20) of docs. Document 1: One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United St...
https://python.langchain.com/docs/integrations/retrievers/cohere-reranker
2e32f06a0cac-1
So let’s not abandon our streets. Or choose between safety and equal justice. ---------------------------------------------------------------------------------------------------- Document 6: Vice President Harris and I ran for office with a new economic vision for America. Invest in America. Educate Americans. Grow ...
https://python.langchain.com/docs/integrations/retrievers/cohere-reranker
2e32f06a0cac-2
And fourth, let’s end cancer as we know it. This is personal to me and Jill, to Kamala, and to so many of you. Cancer is the #2 cause of death in America–second only to heart disease. ---------------------------------------------------------------------------------------------------- Document 11: He will never ext...
https://python.langchain.com/docs/integrations/retrievers/cohere-reranker
2e32f06a0cac-3
Third, support our veterans. Veterans are the best of us. I’ve always believed that we have a sacred obligation to equip all those we send to war and care for them and their families when they come home. My administration is providing assistance with job training and housing, and now helping lower-income veterans...
https://python.langchain.com/docs/integrations/retrievers/cohere-reranker
2e32f06a0cac-4
So let’s not abandon our streets. Or choose between safety and equal justice. Let’s come together to protect our communities, restore trust, and hold law enforcement accountable. That’s why the Justice Department required body cameras, banned chokeholds, and restricted no-knock warrants for its officers. Now let's ...
https://python.langchain.com/docs/integrations/retrievers/cohere-reranker
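The reranker step illustrated above takes the base retriever's candidates, rescores each against the query with a stronger relevance model, and keeps only the top_n. A toy sketch of that reorder-and-truncate step; the naive term-overlap scorer below is a hypothetical stand-in, nothing like Cohere's trained cross-encoder:

```python
def rerank(query, documents, top_n=3, score=None):
    """Reorder candidate documents by a relevance score and keep top_n.
    `score` stands in for a cross-encoder; the default is naive
    query-term overlap, which a real reranker vastly improves on."""
    if score is None:
        q = set(query.lower().split())
        score = lambda doc: len(q & set(doc.lower().split()))
    ranked = sorted(documents, key=score, reverse=True)
    return ranked[:top_n]

candidates = [
    "Vice President Harris and I ran for office.",
    "Justice Breyer has dedicated his life to serve this country.",
    "Cancer is the #2 cause of death in America.",
]
top = rerank("who did the president nominate to the supreme court justice",
             candidates, top_n=1)
```

The base retriever optimizes recall (hence fetching 20 docs above); the reranker then restores precision before the docs reach the LLM.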
713cf8005120-0
DocArray Retriever DocArray is a versatile, open-source tool for managing your multi-modal data. It lets you shape your data however you want, and offers the flexibility to store and search it using various document index backends. Plus, it gets even better - you can utilize your DocArray document index to create a Doc...
https://python.langchain.com/docs/integrations/retrievers/docarray_retriever
713cf8005120-1
# initialize the index db = InMemoryExactNNIndex[MyDoc]() # index data db.index( [ MyDoc( title=f"My document {i}", title_embedding=embeddings.embed_query(f"query {i}"), year=i, color=random.choice(["red", "green", "blue"]), ) for i in range(100) ] ) # optionally, you can create a filter query filter_query = {"year": {...
https://python.langchain.com/docs/integrations/retrievers/docarray_retriever
713cf8005120-2
# find the relevant document doc = retriever.get_relevant_documents("some query") print(doc) [Document(page_content='My document 28', metadata={'id': 'ca9f3f4268eec7c97a7d6e77f541cb82', 'year': 28, 'color': 'red'})] WeaviateDocumentIndex​ WeaviateDocumentIndex is a document index that is built upon Weaviate vector data...
https://python.langchain.com/docs/integrations/retrievers/docarray_retriever
713cf8005120-3
# find the relevant document doc = retriever.get_relevant_documents("some query") print(doc) [Document(page_content='My document 17', metadata={'id': '3a5b76e85f0d0a01785dc8f9d965ce40', 'year': 17, 'color': 'red'})] ElasticDocIndex​ ElasticDocIndex is a document index that is built upon ElasticSearch Learn more here: h...
https://python.langchain.com/docs/integrations/retrievers/docarray_retriever
713cf8005120-4
# index data db.index( [ MyDoc( title=f"My document {i}", title_embedding=embeddings.embed_query(f"query {i}"), year=i, color=random.choice(["red", "green", "blue"]), ) for i in range(100) ] ) # optionally, you can create a filter query filter_query = rest.Filter( must=[ rest.FieldCondition( key="year", range=rest.Rang...
https://python.langchain.com/docs/integrations/retrievers/docarray_retriever
713cf8005120-5
# find the relevant document doc = retriever.get_relevant_documents("some query") print(doc) [Document(page_content='My document 80', metadata={'id': '97465f98d0810f1f330e4ecc29b13d20', 'year': 80, 'color': 'blue'})] Movie Retrieval using HnswDocumentIndex movies = [ { "title": "Inception", "description": "A thief who ...
https://python.langchain.com/docs/integrations/retrievers/docarray_retriever
713cf8005120-6
# define schema for your movie documents class MyDoc(BaseDoc): title: str description: str description_embedding: NdArray[1536] rating: float director: str embeddings = OpenAIEmbeddings() # get "description" embeddings, and create documents docs = DocList[MyDoc]( [ MyDoc( description_embedding=embeddings.embed_quer...
https://python.langchain.com/docs/integrations/retrievers/docarray_retriever
713cf8005120-7
# find relevant documents docs = retriever.get_relevant_documents("space travel") print(docs) [Document(page_content='Interstellar explores the boundaries of human exploration as a group of astronauts venture through a wormhole in space. In their quest to ensure the survival of humanity, they confront the vastness of s...
https://python.langchain.com/docs/integrations/retrievers/docarray_retriever
713cf8005120-8
# find relevant documents docs = retriever.get_relevant_documents("action movies") print(docs) [Document(page_content="The lives of two mob hitmen, a boxer, a gangster's wife, and a pair of diner bandits intertwine in four tales of violence and redemption.", metadata={'id': 'e6aa313bbde514e23fbc80ab34511afd', 'title': ...
https://python.langchain.com/docs/integrations/retrievers/docarray_retriever
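The filter_query objects used throughout these examples restrict search to documents whose metadata satisfies range conditions. A toy evaluator for a Mongo-style `{"field": {"$lt"/"$gt"/"$eq": value}}` filter, which matches the InMemoryExactNNIndex syntax shown above (other backends, like Qdrant's rest.Filter, use their own syntax):

```python
OPS = {
    "$lt": lambda a, b: a < b,
    "$gt": lambda a, b: a > b,
    "$eq": lambda a, b: a == b,
}

def matches(doc, filter_query):
    """Check a document's metadata against a Mongo-style filter dict.
    All conditions must hold (implicit AND)."""
    for field, conditions in filter_query.items():
        for op, value in conditions.items():
            if not OPS[op](doc[field], value):
                return False
    return True

docs = [{"title": f"My document {i}", "year": i} for i in range(100)]
filter_query = {"year": {"$gt": 85, "$lt": 90}}
filtered = [d for d in docs if matches(d, filter_query)]
```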
5201c48903c9-0
ElasticSearch BM25 Elasticsearch is a distributed, RESTful search and analytics engine. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents. In information retrieval, Okapi BM25 (BM is an abbreviation of best matching) is a ranking function us...
https://python.langchain.com/docs/integrations/retrievers/elastic_search_bm25
5201c48903c9-1
'8631bfc8-7c12-48ee-ab56-8ad5f373676e', '8be8374c-3253-4d87-928d-d73550a2ecf0', 'd79f457b-2842-4eab-ae10-77aa420b53d7'] Use Retriever​ We can now use the retriever! result = retriever.get_relevant_documents("foo") [Document(page_content='foo', metadata={}), Document(page_content='foo bar', metadata={})]
https://python.langchain.com/docs/integrations/retrievers/elastic_search_bm25
6b31b1a8cd23-0
Google Cloud Enterprise Search Enterprise Search is a part of the Generative AI App Builder suite of tools offered by Google Cloud. Gen AI App Builder lets developers, even those with limited machine learning skills, quickly and easily tap into the power of Google’s foundation models, search expertise, and conversation...
https://python.langchain.com/docs/integrations/retrievers/google_cloud_enterprise_search
6b31b1a8cd23-1
Create and populate an unstructured data store​ Use Google Cloud Console to create an unstructured data store and populate it with the example PDF documents from the gs://cloud-samples-data/gen-app-builder/search/alphabet-investor-pdfs Cloud Storage folder. Make sure to use the Cloud Storage (without metadata) option. ...
https://python.langchain.com/docs/integrations/retrievers/google_cloud_enterprise_search
6b31b1a8cd23-2
if "google.colab" in sys.modules: from google.colab import auth as google_auth
https://python.langchain.com/docs/integrations/retrievers/google_cloud_enterprise_search
6b31b1a8cd23-3
google_auth.authenticate_user() Configure and use the Enterprise Search retriever​ The Enterprise Search retriever is implemented in the langchain.retrievers.GoogleCloudEnterpriseSearchRetriever class. The get_relevant_documents method returns a list of langchain.schema.Document documents where the page_content field of ...
https://python.langchain.com/docs/integrations/retrievers/google_cloud_enterprise_search
6b31b1a8cd23-4
max_documents - The maximum number of documents used to provide extractive segments or extractive answers get_extractive_answers - By default, the retriever is configured to return extractive segments. Set this field to True to return extractive answers. This is used only when engine_data_type is set to 0 (unstructured) ...
https://python.langchain.com/docs/integrations/retrievers/google_cloud_enterprise_search
6b31b1a8cd23-5
PROJECT_ID = "<YOUR PROJECT ID>" # Set to your Project ID SEARCH_ENGINE_ID = "<YOUR SEARCH ENGINE ID>" # Set to your data store ID retriever = GoogleCloudEnterpriseSearchRetriever( project_id=PROJECT_ID, search_engine_id=SEARCH_ENGINE_ID, max_documents=3, ) query = "What are Alphabet's Other Bets?" result = retriever....
https://python.langchain.com/docs/integrations/retrievers/google_cloud_enterprise_search
2d79838bde66-0
Google Drive Retriever This notebook covers how to retrieve documents from Google Drive. Prerequisites​ Create a Google Cloud project or use an existing project Enable the Google Drive API Authorize credentials for desktop app pip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib Inst...
https://python.langchain.com/docs/integrations/retrievers/google_drive
2d79838bde66-1
image/jpeg application/epub+zip application/pdf application/rtf application/vnd.google-apps.document (GDoc) application/vnd.google-apps.presentation (GSlide) application/vnd.google-apps.spreadsheet (GSheet) application/vnd.google.colaboratory (Notebook colab) application/vnd.openxmlformats-officedocument.presentationml...
https://python.langchain.com/docs/integrations/retrievers/google_drive
2d79838bde66-2
"and trashed=false"), num_results=2, # See https://developers.google.com/drive/api/v3/reference/files/list includeItemsFromAllDrives=False, supportsAllDrives=False, ) for doc in retriever.get_relevant_documents("machine learning"): print(f"{doc.metadata['name']}:") print("---") print(doc.page_content.strip()[:60]+"..."...
https://python.langchain.com/docs/integrations/retrievers/google_drive
f9e5598c592f-0
kNN In statistics, the k-nearest neighbors algorithm (k-NN) is a non-parametric supervised learning method first developed by Evelyn Fix and Joseph Hodges in 1951, and later expanded by Thomas Cover. It is used for classification and regression. This notebook goes over how to use a retriever that under the hood uses an...
https://python.langchain.com/docs/integrations/retrievers/knn
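k-NN retrieval ranks documents by the similarity between their embedding vectors and the query embedding, and keeps the k nearest. A stdlib sketch with cosine similarity; the hand-written 2-D "embeddings" are toy stand-ins for what a real embedding model would produce:

```python
import math

def knn(query_vec, doc_vecs, k=2):
    """Return indices of the k vectors nearest to the query,
    by cosine similarity (highest first)."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    sims = [(cos(query_vec, v), i) for i, v in enumerate(doc_vecs)]
    sims.sort(reverse=True)
    return [i for _, i in sims[:k]]

doc_vecs = [(1.0, 0.0), (0.9, 0.1), (0.0, 1.0)]
nearest = knn((1.0, 0.05), doc_vecs, k=2)
```

Unlike a parametric classifier, k-NN does no training: retrieval is just this distance computation over the stored vectors.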
a8b350bb4375-0
Lord of the Retrievers, also known as MergerRetriever, takes a list of retrievers as input and merges the results of their get_relevant_documents() methods into a single list. The merged results will be a list of documents that are relevant to the query and that have been ranked by the different retrievers. The MergerR...
https://python.langchain.com/docs/integrations/retrievers/merger_retriever
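The merging step can be sketched in a few lines: interleave each retriever's ranked results round-robin and drop duplicates, preserving rank order. This is a toy stand-in for MergerRetriever's combination of the underlying get_relevant_documents() lists, using plain strings in place of Document objects:

```python
from itertools import zip_longest

def merge_results(*result_lists):
    """Round-robin merge of ranked result lists, dropping duplicates.
    Each input list is assumed sorted best-first."""
    merged, seen = [], set()
    for rank_slice in zip_longest(*result_lists):
        for doc in rank_slice:
            if doc is not None and doc not in seen:
                seen.add(doc)
                merged.append(doc)
    return merged

merged = merge_results(["a", "b", "c"], ["b", "d"])
```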
a8b350bb4375-1
# Define two different retrievers with different embeddings and search types. retriever_all = db_all.as_retriever( search_type="similarity", search_kwargs={"k": 5, "include_metadata": True} ) retriever_multi_qa = db_multi_qa.as_retriever( search_type="mmr", search_kwargs={"k": 5, "include_metadata": True} ) # The Lord of th...
https://python.langchain.com/docs/integrations/retrievers/merger_retriever
a8b350bb4375-2
filter = EmbeddingsRedundantFilter(embeddings=filter_embeddings) reordering = LongContextReorder() pipeline = DocumentCompressorPipeline(transformers=[filter, reordering]) compression_retriever_reordered = ContextualCompressionRetriever( base_compressor=pipeline, base_retriever=lotr )
https://python.langchain.com/docs/integrations/retrievers/merger_retriever
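The LongContextReorder step above mitigates the "lost in the middle" effect: LLMs attend most to the beginning and end of a long context, so the strongest documents are moved to the ends of the list and the weakest to the middle. A sketch of that reordering idea (my own implementation of the stated behavior, not LangChain's source):

```python
def lost_in_the_middle_reorder(docs):
    """Reorder ranked docs (best first) so the strongest sit at the two
    ends of the context and the weakest in the middle."""
    reordered = []
    # Walk from least to most relevant, alternating front/back, so the
    # last (most relevant) items land at the extremes.
    for i, doc in enumerate(reversed(docs)):
        if i % 2 == 0:
            reordered.insert(0, doc)
        else:
            reordered.append(doc)
    return reordered

# Integers stand in for documents ranked 1 (best) to 5 (worst).
ordered = lost_in_the_middle_reorder([1, 2, 3, 4, 5])
```

In the result, ranks 1 and 2 occupy the first and last positions while rank 5 is buried in the middle.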
18c020e88f0b-0
Metal Metal is a managed service for ML Embeddings. This notebook shows how to use Metal's retriever. First, you will need to sign up for Metal and get an API key. You can do so here from metal_sdk.metal import Metal API_KEY = "" CLIENT_ID = "" INDEX_ID = "" metal = Metal(API_KEY, CLIENT_ID, INDEX_ID); Ingest Documen...
https://python.langchain.com/docs/integrations/retrievers/metal
13118753abdf-0
Pinecone Hybrid Search Pinecone is a vector database with broad functionality. This notebook goes over how to use a retriever that under the hood uses Pinecone and Hybrid Search. The logic of this retriever is taken from this documentation. To use Pinecone, you must have an API key and an Environment. Here are the instal...
https://python.langchain.com/docs/integrations/retrievers/pinecone_hybrid_search
13118753abdf-1
embeddings = OpenAIEmbeddings() To encode the text to sparse values you can either choose SPLADE or BM25. For out-of-domain tasks we recommend using BM25. For more information about the sparse encoders, check out the pinecone-text library docs. from pinecone_text.sparse import BM25Encoder # or from pinecone_text.spa...
https://python.langchain.com/docs/integrations/retrievers/pinecone_hybrid_search
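Hybrid search combines a dense (semantic embedding) score and a sparse (keyword, e.g. BM25 or SPLADE) score per document. One common formulation, and a reasonable mental model for the alpha parameter, is a convex combination of the two scores; this sketch applies it to precomputed per-document scores rather than to the vectors themselves, which is a simplification of how Pinecone implements it:

```python
def hybrid_scores(dense, sparse, alpha=0.5):
    """Blend dense and sparse relevance scores per document.
    alpha=1.0 is pure semantic search; alpha=0.0 is pure keyword search."""
    return [alpha * d + (1 - alpha) * s for d, s in zip(dense, sparse)]

dense = [0.9, 0.2, 0.5]   # semantic similarity per document
sparse = [0.1, 0.8, 0.5]  # keyword (BM25-style) score per document
semantic_only = hybrid_scores(dense, sparse, alpha=1.0)
balanced = hybrid_scores(dense, sparse, alpha=0.5)
```

Tuning alpha trades off semantic recall against exact keyword precision, which is why it is exposed as a retriever parameter.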
04f5d15eb9db-0
PubMed® by The National Center for Biotechnology Information, National Library of Medicine comprises more than 35 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full text content from PubMed Central and publisher web sites. [Document(pag...
https://python.langchain.com/docs/integrations/retrievers/pubmed
04f5d15eb9db-1
Document(page_content="BACKGROUND: ChatGPT is a large language model that has performed well on professional examinations in the fields of medicine, law, and business. However, it is unclear how ChatGPT would perform on an examination assessing professionalism and situational judgement for doctors.\nOBJECTIVE: We evalu...
https://python.langchain.com/docs/integrations/retrievers/pubmed
04f5d15eb9db-2
Document(page_content='', metadata={'uid': '37548971', 'Title': "Large Language Models Answer Medical Questions Accurately, but Can't Match Clinicians' Knowledge.", 'Published': '2023-08-07', 'Copyright Information': ''})]
https://python.langchain.com/docs/integrations/retrievers/pubmed
cd6ab017300a-0
RePhraseQueryRetriever A simple retriever that applies an LLM between the user input and the query passed to the retriever. It can be used to pre-process the user input in any way. The default prompt used in the from_llm classmethod: DEFAULT_TEMPLATE = """You are an assistant tasked with taking a natural language \ query f...
https://python.langchain.com/docs/integrations/retrievers/re_phrase
cd6ab017300a-1
QUERY_PROMPT = PromptTemplate( input_variables=["question"], template="""You are an assistant tasked with taking a natural language query from a user and converting it into a query for a vectorstore. In the process, strip out all information that is not relevant for the retrieval task and return a new, simplified quest...
https://python.langchain.com/docs/integrations/retrievers/re_phrase
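The retriever composes two steps: an LLM rewrites the raw user input into a cleaner search query, then the base retriever runs on the rewritten query. A sketch of that composition, with a trivial filler-word stripper standing in for the LLM and a stub retriever (both hypothetical):

```python
def re_phrase_retrieve(user_input, rephrase, retrieve):
    """Pre-process user input with `rephrase` (an LLM in the real
    chain), then hand the simplified query to the base retriever."""
    query = rephrase(user_input)
    return retrieve(query)

# Stub "LLM": strip filler words, as the default prompt asks the model to do.
FILLER = {"hi", "please", "i", "want", "to", "know", "about"}
rephrase = lambda s: " ".join(w for w in s.split() if w.lower() not in FILLER)
retrieve = lambda q: [f"doc matching '{q}'"]

results = re_phrase_retrieve("Hi please I want to know about langsmith",
                             rephrase, retrieve)
```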
f2e08b19737d-0
SVM Support vector machines (SVMs) are a set of supervised learning methods used for classification, regression, and outlier detection. This notebook goes over how to use a retriever that under the hood uses an SVM, using the scikit-learn package. Largely based on https://github.com/karpathy/randomfun/blob/master/knn_vs_svm...
https://python.langchain.com/docs/integrations/retrievers/svm
08fd7520ec2c-0
Vespa Vespa is a fully featured search engine and vector database. It supports vector search (ANN), lexical search, and search in structured data, all in the same query. This notebook shows how to use Vespa.ai as a LangChain retriever. In order to create a retriever, we use pyvespa to create a connection to a Vespa servic...
https://python.langchain.com/docs/integrations/retrievers/vespa
5de718f307ac-0
TF-IDF TF-IDF means term-frequency times inverse document-frequency. This notebook goes over how to use a retriever that under the hood uses TF-IDF, using the scikit-learn package. For more information on the details of TF-IDF see this blog post. # !pip install scikit-learn from langchain.retrievers import TFIDFRetriever Cr...
https://python.langchain.com/docs/integrations/retrievers/tf_idf
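TF-IDF weights each term by its frequency in a document times the log-inverse of how many documents contain it; retrieval then ranks documents by cosine similarity of these weighted vectors. A compact stdlib version of that pipeline (scikit-learn's TfidfVectorizer adds smoothing and normalization details omitted here):

```python
import math
from collections import Counter

def tfidf_retrieve(query, docs):
    """Return the index of the tokenized doc most similar to the query
    under TF-IDF cosine similarity."""
    N = len(docs)
    df = Counter()                      # document frequency per term
    for d in docs:
        df.update(set(d))

    def vec(tokens):
        tf = Counter(tokens)
        # Smoothed IDF so unseen terms don't divide by zero.
        return {t: tf[t] * math.log((1 + N) / (1 + df[t])) for t in tf}

    def cos(a, b):
        dot = sum(a[t] * b.get(t, 0.0) for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    qv = vec(query)
    sims = [cos(qv, vec(d)) for d in docs]
    return max(range(N), key=lambda i: sims[i])

docs = ["foo bar bar".split(), "baz foo".split(), "qux qux".split()]
best = tfidf_retrieve("bar".split(), docs)
```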
1bc7053edd3f-0
Weaviate Hybrid Search Weaviate is an open source vector database. Hybrid search is a technique that combines multiple search algorithms to improve the accuracy and relevance of search results. It combines the best features of keyword-based search algorithms with vector search techniques. The Hybrid search in Weaviate...
https://python.langchain.com/docs/integrations/retrievers/weaviate-hybrid
1bc7053edd3f-1
# client.schema.delete_all() from langchain.retrievers.weaviate_hybrid_search import WeaviateHybridSearchRetriever from langchain.schema import Document retriever = WeaviateHybridSearchRetriever( client=client, index_name="LangChain", text_key="text", attributes=[], create_schema_if_missing=True, ) Add some data: docs ...
https://python.langchain.com/docs/integrations/retrievers/weaviate-hybrid
1bc7053edd3f-2
), ] retriever.add_documents(docs) ['3a27b0a5-8dbb-4fee-9eba-8b6bc2c252be', 'eeb9fd9b-a3ac-4d60-a55b-a63a25d3b907', '7ebbdae7-1061-445f-a046-1989f2343d8f', 'c2ab315b-3cab-467f-b23a-b26ed186318d', 'b83765f2-e5d2-471f-8c02-c3350ade4c4f'] Do a hybrid search: retriever.get_relevant_documents("the ethical implications of AI...
https://python.langchain.com/docs/integrations/retrievers/weaviate-hybrid
1bc7053edd3f-3
Document(page_content="In his follow-up to 'Symbiosis', Prof. Sterling takes a look at the subtle, unnoticed presence and influence of AI in our everyday lives. It reveals how AI has become woven into our routines, often without our explicit realization.", metadata={})] Do a hybrid search with scores: retriever.get_rel...
https://python.langchain.com/docs/integrations/retrievers/weaviate-hybrid
1bc7053edd3f-4
Document(page_content='In her second book, Dr. Simmons delves deeper into the ethical considerations surrounding AI development and deployment. It is an eye-opening examination of the dilemmas faced by developers, policymakers, and society at large.', metadata={'_additional': {'explainScore': '(bm25)\n(hybrid) Document...
https://python.langchain.com/docs/integrations/retrievers/weaviate-hybrid
ab896a0a20fd-0
This notebook shows how to retrieve wiki pages from wikipedia.org into the Document format that is used downstream. First, you need to install the wikipedia python package. get_relevant_documents() has one argument, query: free text which is used to find documents in Wikipedia {'title': 'Hunter × Hunter',
https://python.langchain.com/docs/integrations/retrievers/wikipedia
ab896a0a20fd-1
{'title': 'Hunter × Hunter', 'summary': 'Hunter × Hunter (stylized as HUNTER×HUNTER and pronounced "hunter hunter") is a Japanese manga series written and illustrated by Yoshihiro Togashi. It has been serialized in Shueisha\'s shōnen manga magazine Weekly Shōnen Jump since March 1998, although the manga has frequently ...
https://python.langchain.com/docs/integrations/retrievers/wikipedia
ab896a0a20fd-2
questions = [ "What is Apify?", "When the Monument to the Martyrs of the 1830 Revolution was created?", "What is the Abhayagiri Vihāra?", # "How big is Wikipédia en français?", ] chat_history = []
https://python.langchain.com/docs/integrations/retrievers/wikipedia
ab896a0a20fd-3
for question in questions: result = qa({"question": question, "chat_history": chat_history}) chat_history.append((question, result["answer"])) print(f"-> **Question**: {question} \n") print(f"**Answer**: {result['answer']} \n")
https://python.langchain.com/docs/integrations/retrievers/wikipedia
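The question loop above can be factored into a small helper. Here `qa_fn` is a stand-in for the LangChain conversational chain in the notebook: any callable that takes `{"question", "chat_history"}` and returns a dict with an `"answer"` key works, which also makes the loop easy to test without a live retriever.

```python
def run_conversation(qa_fn, questions):
    """Ask each question in turn, feeding the accumulated chat history
    back into qa_fn, and return the (question, answer) history."""
    chat_history = []
    for question in questions:
        result = qa_fn({"question": question, "chat_history": chat_history})
        chat_history.append((question, result["answer"]))
        print(f"-> **Question**: {question}\n")
        print(f"**Answer**: {result['answer']}\n")
    return chat_history
```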
887c59c1d21b-0
OpenAI Let's load the OpenAI Embedding class. from langchain.embeddings import OpenAIEmbeddings embeddings = OpenAIEmbeddings() text = "This is a test document." query_result = embeddings.embed_query(text) [-0.003186025367556387, 0.011071979803637493, -0.004020420763285827, -0.011658221276953042, -0.0010534035786864363...
https://python.langchain.com/docs/integrations/text_embedding/openai
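A result like `query_result` above is just a list of floats; downstream components compare such vectors, most commonly by cosine similarity. A minimal, dependency-free sketch of that comparison:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors:
    dot(a, b) / (|a| * |b|), in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

Vector stores perform the same comparison (usually with optimized approximate-nearest-neighbor indexes) between the query embedding from `embed_query` and document embeddings from `embed_documents`.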
5ccb869a618c-0
Zep stores, summarizes, embeds, indexes, and enriches conversational AI chat histories, and exposes them via simple, low-latency APIs. This notebook demonstrates how to search historical chat message histories using the Zep Long-term Memory Store. NOTE: Unlike other Retrievers, the content returned by the Zep Retriever...
https://python.langchain.com/docs/integrations/retrievers/zep_memorystore
5ccb869a618c-1
" about?" ), }, { "role": "ai", "content": ( "Parable of the Sower is a science fiction novel by Octavia Butler," " published in 1993. It follows the story of Lauren Olamina, a young woman" " living in a dystopian future where society has collapsed due to" " environmental disasters, poverty, and violence." ), }, ]
https://python.langchain.com/docs/integrations/retrievers/zep_memorystore
5ccb869a618c-2
for msg in test_history: zep_memory.chat_memory.add_message( HumanMessage(content=msg["content"]) if msg["role"] == "human" else AIMessage(content=msg["content"]) )
https://python.langchain.com/docs/integrations/retrievers/zep_memorystore
5ccb869a618c-3
time.sleep(2) # Wait for the messages to be embedded Zep provides native vector search over historical conversation memory. Embedding happens automatically. NOTE: Embedding of messages occurs asynchronously, so the first query may not return results. Subsequent queries will return results as the embeddings are generate...
https://python.langchain.com/docs/integrations/retrievers/zep_memorystore
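Because Zep embeds messages asynchronously, an immediate search can legitimately come back empty. A simple way to handle this in client code is to poll with a short delay before giving up; here `search_fn` is a stand-in for something like `zep_retriever.get_relevant_documents`.

```python
import time

def search_with_retry(search_fn, query, attempts=5, delay=2.0):
    """Retry an initially-empty search a few times, waiting between
    attempts for asynchronous embedding to catch up."""
    for _ in range(attempts):
        results = search_fn(query)
        if results:
            return results
        time.sleep(delay)  # wait for embeddings to be generated
    return []
```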
5ccb869a618c-4
Document(page_content='You might want to read Ursula K. Le Guin or Joanna Russ.', metadata={'score': 0.7596040989115522, 'uuid': '166d9556-2d48-4237-8a84-5d8a1024d5f4', 'created_at': '2023-08-11T20:31:12.434522Z', 'role': 'ai', 'metadata': {'system': {'entities': [{'Label': 'ORG', 'Matches': [{'End': 40, 'Start': 23, '...
https://python.langchain.com/docs/integrations/retrievers/zep_memorystore
5ccb869a618c-5
Document(page_content='Octavia Estelle Butler (June 22, 1947 – February 24, 2006) was an American science fiction author.', metadata={'score': 0.7546476914454683, 'uuid': '7c093a2a-0099-415a-95c5-615a8026a894', 'created_at': '2023-08-11T20:31:12.399979Z', 'role': 'ai', 'metadata': {'system': {'entities': [{'Label': 'PE...
https://python.langchain.com/docs/integrations/retrievers/zep_memorystore
5ccb869a618c-6
Document(page_content='Who was Octavia Butler?', metadata={'score': 0.7758688965570713, 'uuid': 'b3322d28-f589-48c7-9daf-5eb092d65976', 'created_at': '2023-08-11T20:31:12.3856Z', 'role': 'human', 'metadata': {'system': {'entities': [{'Label': 'PERSON', 'Matches': [{'End': 22, 'Start': 8, 'Text': 'Octavia Butler'}], 'Na...
https://python.langchain.com/docs/integrations/retrievers/zep_memorystore
5ccb869a618c-7
Document(page_content='You might want to read Ursula K. Le Guin or Joanna Russ.', metadata={'score': 0.7596040989115522, 'uuid': '166d9556-2d48-4237-8a84-5d8a1024d5f4', 'created_at': '2023-08-11T20:31:12.434522Z', 'role': 'ai', 'metadata': {'system': {'entities': [{'Label': 'ORG', 'Matches': [{'End': 40, 'Start': 23, '...
https://python.langchain.com/docs/integrations/retrievers/zep_memorystore
da1d3a44835f-0
Let's load the SageMaker Endpoints Embeddings class. The class can be used if you host, e.g., your own Hugging Face model on SageMaker. For instructions on how to do this, please see here. Note: In order to handle batched requests, you will need to adjust the return line in the predict_fn() function within the custom in...
https://python.langchain.com/docs/integrations/text_embedding/sagemaker-endpoint
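The batching note above concerns the custom inference script deployed alongside the model, not LangChain itself. A hypothetical sketch of a `predict_fn` whose return line yields one vector per input (the names `model.encode`, `"text_inputs"`, and the `"vectors"` key are assumptions modeled on a typical custom inference script, not a verbatim copy of any particular one):

```python
def predict_fn(input_data, model):
    """Hypothetical SageMaker inference handler: embed a batch of texts
    and return one vector per input so batched requests work."""
    sentence_embeddings = model.encode(input_data["text_inputs"])
    # Return all vectors, not just sentence_embeddings[0], so a batch of
    # n inputs yields n embeddings.
    return {"vectors": [list(vec) for vec in sentence_embeddings]}
```

Returning only the first embedding is the common single-input shortcut that breaks once LangChain sends a batch of texts.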
19bf10187d3a-0
Self Hosted Embeddings Let's load the SelfHostedEmbeddings, SelfHostedHuggingFaceEmbeddings, and SelfHostedHuggingFaceInstructEmbeddings classes. from langchain.embeddings import ( SelfHostedEmbeddings, SelfHostedHuggingFaceEmbeddings, SelfHostedHuggingFaceInstructEmbeddings, ) import runhouse as rh # For an on-demand ...
https://python.langchain.com/docs/integrations/text_embedding/self-hosted
0ee100f94808-0
Sentence Transformers Embeddings SentenceTransformers embeddings are called using the HuggingFaceEmbeddings integration. We have also added an alias, SentenceTransformerEmbeddings, for users who are more familiar with directly using that package. SentenceTransformers is a Python package that can generate text and ima...
https://python.langchain.com/docs/integrations/text_embedding/sentence_transformers
3a6f3d83aa1b-0
Spacy Embedding Loading the Spacy embedding class to generate and query embeddings Import the necessary classes from langchain.embeddings.spacy_embeddings import SpacyEmbeddings Initialize SpacyEmbeddings. This will load the Spacy model into memory. embedder = SpacyEmbeddings() Define some example texts. These could...
https://python.langchain.com/docs/integrations/text_embedding/spacy_embedding