| id (string, 14-16 chars) | text (string, 44-2.73k chars) | source (string, 49-114 chars) |
|---|---|---|
d1ab74b19ad5-0 |
Milvus
Milvus#
This notebook shows how to use functionality related to the Milvus vector database.
To run, you should have a Milvus instance up and running: https://milvus.io/docs/install_standalone-docker.md
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import Charac... | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/milvus.html |
376cdfb06ecc-0 |
Annoy
Contents
Create VectorStore from texts
Create VectorStore from docs
Create VectorStore via existing embeddings
Search via embeddings
Search via docstore id
Save and load
Construct from scratch
Annoy#
This notebook shows how to use functionality related to the Annoy vector database.
“Annoy (Approxima... | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/annoy.html |
376cdfb06ecc-1 | vector_store.similarity_search_with_score("food", k=3)
[(Document(page_content='pizza is great', metadata={}), 1.0944390296936035),
(Document(page_content='I love salad', metadata={}), 1.1273186206817627),
(Document(page_content='my car', metadata={}), 1.1580758094787598)]
Create VectorStore from docs#
from langchain... | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/annoy.html |
376cdfb06ecc-2 | Document(page_content='Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. \n\nIn this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the Unit... | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/annoy.html |
376cdfb06ecc-3 | Document(page_content='Putin’s latest attack on Ukraine was premeditated and unprovoked. \n\nHe rejected repeated efforts at diplomacy. \n\nHe thought the West and NATO wouldn’t respond. And he thought he could divide us at home. Putin was wrong. We were ready. Here is what we did. \n\nWe prepared extensively and ca... | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/annoy.html |
376cdfb06ecc-4 | Document(page_content='We are inflicting pain on Russia and supporting the people of Ukraine. Putin is now isolated from the world more than ever. \n\nTogether with our allies –we are right now enforcing powerful economic sanctions. \n\nWe are cutting off Russia’s largest banks from the international financial system. ... | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/annoy.html |
376cdfb06ecc-5 | Document(page_content='And tonight I am announcing that we will join our allies in closing off American air space to all Russian flights – further isolating Russia – and adding an additional squeeze –on their economy. The Ruble has lost 30% of its value. \n\nThe Russian stock market has lost 40% of its value and tradin... | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/annoy.html |
376cdfb06ecc-6 | (Document(page_content='I love salad', metadata={}), 1.1273186206817627),
(Document(page_content='my car', metadata={}), 1.1580758094787598)]
Search via embeddings#
motorbike_emb = embeddings_func.embed_query("motorbike")
vector_store.similarity_search_by_vector(motorbike_emb, k=3)
[Document(page_content='my car', met... | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/annoy.html |
376cdfb06ecc-7 | Document(page_content='pizza is great', metadata={})
# same document has distance 0
vector_store.similarity_search_with_score_by_index(some_docstore_id, k=3)
[(Document(page_content='pizza is great', metadata={}), 0.0),
(Document(page_content='I love salad', metadata={}), 1.0734446048736572),
(Document(page_content='... | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/annoy.html |
376cdfb06ecc-8 | index.build(10)
# docstore
documents = []
for i, text in enumerate(texts):
metadata = metadatas[i] if metadatas else {}
documents.append(Document(page_content=text, metadata=metadata))
index_to_docstore_id = {i: str(uuid.uuid4()) for i in range(len(documents))}
docstore = InMemoryDocstore(
{index_to_docstor... | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/annoy.html |
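The from-scratch construction wires together three pieces: a vector index addressed by integer position, a mapping from position to a stable docstore id, and the docstore holding the documents. A minimal stand-in using plain dicts (LangChain's `InMemoryDocstore` and `Document` play the roles of the dicts here):

```python
import uuid

texts = ["pizza is great", "I love salad", "my car"]
documents = [{"page_content": text, "metadata": {}} for text in texts]

# Annoy only knows integer positions, so map position -> stable docstore id...
index_to_docstore_id = {i: str(uuid.uuid4()) for i in range(len(documents))}

# ...and docstore id -> document, so a hit at position i resolves in two hops
docstore = {index_to_docstore_id[i]: doc for i, doc in enumerate(documents)}

# resolving an (approximate) nearest-neighbour hit at position 0
hit = docstore[index_to_docstore_id[0]]
```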
63e46f1d18d7-0 |
Redis
Contents
RedisVectorStoreRetriever
Redis#
This notebook shows how to use functionality related to the Redis vector database.
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores.redis import Redis
from langchain.docum... | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/redis.html |
63e46f1d18d7-1 | print(rds.add_texts(["Ankush went to Princeton"]))
['doc:link:d7d02e3faf1b40bbbe29a683ff75b280']
query = "Princeton"
results = rds.similarity_search(query)
print(results[0].page_content)
Ankush went to Princeton
# Load from existing index
rds = Redis.from_existing_index(embeddings, redis_url="redis://localhost:6379", i... | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/redis.html |
63e46f1d18d7-2 | docs = retriever.get_relevant_documents(query)
We can also use similarity_limit as a search method. This only returns documents if they are similar enough
retriever = rds.as_retriever(search_type="similarity_limit")
# Here we can see it doesn't return any results because there are no relevant documents
retriever.get_... | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/redis.html |
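The `similarity_limit` search type only returns documents that clear a similarity threshold, which is why an unrelated query can come back empty. A conceptual pure-Python sketch (the threshold value, helper names, and toy vectors are illustrative assumptions, not Redis's actual defaults):

```python
def retrieve_with_limit(query_vec, docs, score_threshold=0.9):
    """Return only documents whose cosine similarity to the query clears
    the threshold; with no similar-enough documents the result is empty."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(x * x for x in b) ** 0.5
        return dot / (na * nb)

    return [text for text, vec in docs if cosine(query_vec, vec) >= score_threshold]

docs = [("Ankush went to Princeton", [1.0, 0.0])]
relevant = retrieve_with_limit([0.99, 0.1], docs)   # similar enough
irrelevant = retrieve_with_limit([0.0, 1.0], docs)  # nothing relevant -> empty
```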
cf7792123cf4-0 |
Weaviate
Weaviate#
This notebook shows how to use functionality related to the Weaviate vector database.
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Weaviate
from langchain.document_loaders import TextL... | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html |
cf7792123cf4-1 | },
],
},
]
}
client.schema.create(schema)
vectorstore = Weaviate(client, "Paragraph", "content")
query = "What did the president say about Ketanji Brown Jackson"
docs = vectorstore.similarity_search(query)
print(docs[0].page_content)
By Harrison Chase
... | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html |
bfa20afd25bd-0 |
MyScale
Contents
Setting up environments
Get connection info and data schema
Filtering
Deleting your data
MyScale#
This notebook shows how to use functionality related to the MyScale vector database.
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterText... | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/myscale.html |
bfa20afd25bd-1 | docs = docsearch.similarity_search(query)
Inserting data...: 100%|██████████| 42/42 [00:18<00:00, 2.21it/s]
print(docs[0].page_content)
As Frances Haugen, who is here with us tonight, has shown, we must hold social media platforms accountable for the national experiment they’re conducting on our children for profit.
... | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/myscale.html |
bfa20afd25bd-2 | docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
for i, d in enumerate(docs):
d.metadata = {'doc_id': i}
docsearch = MyScale.from_documents(docs, embeddings)
Inserting data...: 100%|██████████| 42/42 [00:15<00:00, 2.69it/s]
meta = docsearch.metadata_column
output = docsearch.similari... | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/myscale.html |
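Tagging every chunk with a `doc_id` in its metadata is what makes the Filtering section possible: the search can be restricted by a predicate on metadata before ranking. A toy sketch of that idea (MyScale evaluates the filter inside the database; the structures and helper name here are illustrative stand-ins):

```python
def filtered_search(docs, predicate):
    """Drop documents whose metadata fails the predicate, mirroring a
    WHERE clause applied before similarity ranking."""
    return [d for d in docs if predicate(d["metadata"])]

# toy chunks tagged the same way as in the notebook: metadata = {'doc_id': i}
docs = [
    {"page_content": f"chunk {i}", "metadata": {"doc_id": i}}
    for i in range(5)
]
subset = filtered_search(docs, lambda m: m["doc_id"] < 2)
```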
42ee0f7f1f92-0 |
OpenSearch
Contents
similarity_search using Approximate k-NN Search with Custom Parameters
similarity_search using Script Scoring with Custom Parameters
similarity_search using Painless Scripting with Custom Parameters
Using a preexisting OpenSearch instance
OpenSearch#
This notebook shows how to use func... | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/opensearch.html |
42ee0f7f1f92-1 | query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)
print(docs[0].page_content)
similarity_search using Script Scoring with Custom Parameters#
docsearch = OpenSearchVectorSearch.from_documents(docs, embeddings, opensearch_url="http://localhost:9200", is_appx_search... | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/opensearch.html |
42ee0f7f1f92-2 | docs = docsearch.similarity_search("Who was asking about getting lunch today?", search_type="script_scoring", space_type="cosinesimil", vector_field="message_embedding", text_field="message", metadata_field="message_metadata")
similarity_search using Approximate k-NN Search w... | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/opensearch.html |
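With `space_type="cosinesimil"`, script scoring ranks hits by cosine similarity between the query vector and the stored `vector_field`. The scoring rule itself is just the following (toy vectors and message texts are made up; OpenSearch evaluates this server-side over the indexed embeddings):

```python
def cosinesimil(a, b):
    """Cosine similarity: the scoring rule behind space_type="cosinesimil"."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

# toy "message_embedding" vectors for two indexed messages
messages = {
    "anyone up for lunch today?": [1.0, 0.0, 0.0],
    "the quarterly report is due": [0.0, 1.0, 0.0],
}
query = [0.9, 0.1, 0.0]
ranked = sorted(messages, key=lambda m: cosinesimil(query, messages[m]), reverse=True)
```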
87c448b8399e-0 |
PGVector
Contents
Similarity search with score
Similarity Search with Euclidean Distance (Default)
PGVector#
This notebook shows how to use functionality related to the Postgres vector database (PGVector).
## Loading Environment Variables
from typing import List, Tuple
from dotenv import load_dotenv
load_... | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/pgvector.html |
87c448b8399e-1 | # permission to create a table.
db = PGVector.from_documents(
embedding=embeddings,
documents=docs,
collection_name="state_of_the_union",
connection_string=CONNECTION_STRING,
)
query = "What did the president say about Ketanji Brown Jackson"
docs_with_score: List[Tuple[Document, float]] = db.similarity_... | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/pgvector.html |
87c448b8399e-2 | Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President h... | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/pgvector.html |
87c448b8399e-3 | Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President h... | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/pgvector.html |
2910e6085ad7-0 |
Deep Lake
Contents
Retrieval Question/Answering
Attribute based filtering in metadata
Choosing distance function
Maximal Marginal relevance
Delete dataset
Deep Lake datasets on cloud (Activeloop, AWS, GCS, etc.) or local
Creating dataset on AWS S3
Deep Lake API
Transfer local dataset to cloud
Deep Lake#
T... | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html |
2910e6085ad7-1 | query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
./my_deeplake/ loaded successfully.
Evaluating ingest: 100%|██████████| 1/1 [00:04<00:00
Dataset(path='./my_deeplake/', tensors=['embedding', 'ids', 'metadata', 'text'])
tensor htype shape dtype compressio... | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html |
2910e6085ad7-2 | docs = db.similarity_search(query)
./my_deeplake/ loaded successfully.
Deep Lake Dataset in ./my_deeplake/ already exists, loading from the storage
Dataset(path='./my_deeplake/', read_only=True, tensors=['embedding', 'ids', 'metadata', 'text'])
tensor htype shape dtype compression
------- ------- -... | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html |
2910e6085ad7-3 | Attribute based filtering in metadata#
import random
for d in docs:
d.metadata['year'] = random.randint(2012, 2014)
db = DeepLake.from_documents(docs, embeddings, dataset_path="./my_deeplake/", overwrite=True)
./my_deeplake/ loaded successfully.
Evaluating ingest: 100%|██████████| 1/1 [00:04<00:00
Dataset(path='./m... | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html |
2910e6085ad7-4 | [Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justic... | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html |
2910e6085ad7-5 | Document(page_content='And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. \n\nAs I said last year, especially to our younger transgender Americans, I will always have your back as your President... | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html |
2910e6085ad7-6 | [Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justic... | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html |
2910e6085ad7-7 | Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by... | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html |
2910e6085ad7-8 | Document(page_content='And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. \n\nAs I said last year, especially to our younger transgender Americans, I will always have your back as your President... | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html |
2910e6085ad7-9 | Document(page_content='Tonight, I’m announcing a crackdown on these companies overcharging American businesses and consumers. \n\nAnd as Wall Street firms take over more nursing homes, quality in those homes has gone down and costs have gone up. \n\nThat ends on my watch. \n\nMedicare is going to set higher standards ... | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html |
2910e6085ad7-10 | [Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justic... | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html |
2910e6085ad7-11 | Document(page_content='Tonight, I’m announcing a crackdown on these companies overcharging American businesses and consumers. \n\nAnd as Wall Street firms take over more nursing homes, quality in those homes has gone down and costs have gone up. \n\nThat ends on my watch. \n\nMedicare is going to set higher standards ... | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html |
2910e6085ad7-12 | Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by... | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html |
2910e6085ad7-13 | Document(page_content='And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. \n\nAs I said last year, especially to our younger transgender Americans, I will always have your back as your President... | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html |
2910e6085ad7-14 | username = "<username>" # your username on app.activeloop.ai
dataset_path = f"hub://{username}/langchain_test" # could be also ./local/path (much faster locally), s3://bucket/path/to/dataset, gcs://path/to/dataset, etc.
embedding = OpenAIEmbeddings()
db = DeepLake(dataset_path=dataset_path, embedding_function=embedd... | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html |
2910e6085ad7-15 | 'd6d6ccb7-e187-11ed-b66d-41c5f7b85421']
query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
print(docs[0].page_content)
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so ... | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html |
2910e6085ad7-16 | })
s3://hub-2.0-datasets-n/langchain_test loaded successfully.
Evaluating ingest: 100%|██████████| 1/1 [00:10<00:00
Dataset(path='s3://hub-2.0-datasets-n/langchain_test', tensors=['embedding', 'ids', 'metadata', 'text'])
tensor htype shape dtype compression
------- ------- ------- ------- ----... | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html |
2910e6085ad7-17 | username = "davitbun" # your username on app.activeloop.ai
source = f"hub://{username}/langchain_test" # could be local, s3, gcs, etc.
destination = f"hub://{username}/langchain_test_copy" # could be local, s3, gcs, etc.
deeplake.deepcopy(src=source, dest=destination, overwrite=True)
Copying dataset: 100%|██████████... | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html |
2910e6085ad7-18 | metadata json (4, 1) str None
text text (4, 1) str None
Evaluating ingest: 100%|██████████| 1/1 [00:31<00:00
Dataset(path='hub://davitbun/langchain_test_copy', tensors=['embedding', 'ids', 'metadata', 'text'])
tensor htype shape dtype compression
------- -... | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html |
12e2081ef580-0 |
Weaviate Hybrid Search
Weaviate Hybrid Search#
This notebook shows how to use Weaviate hybrid search as a LangChain retriever.
import weaviate
import os
WEAVIATE_URL = "..."
client = weaviate.Client(
url=WEAVIATE_URL,
)
from langchain.retrievers.weaviate_hybrid_search import WeaviateHybridSearchRetrieve... | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/weaviate-hybrid.html |
4de9b1dd4ec2-0 |
Metal
Contents
Ingest Documents
Query
Metal#
This notebook shows how to use Metal’s retriever.
First, you will need to sign up for Metal and get an API key. You can do so here
# !pip install metal_sdk
from metal_sdk.metal import Metal
API_KEY = ""
CLIENT_ID = ""
INDEX_ID = ""
metal = Metal(API_KEY, CLIENT... | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/metal.html |
4de9b1dd4ec2-1 | previous
ElasticSearch BM25
next
Pinecone Hybrid Search
Contents
Ingest Documents
Query
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023. | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/metal.html |
cfc6c35d5d13-0 |
Pinecone Hybrid Search
Contents
Setup Pinecone
Get embeddings and sparse encoders
Load Retriever
Add texts (if necessary)
Use Retriever
Pinecone Hybrid Search#
This notebook goes over how to use a retriever that under the hood uses Pinecone and Hybrid Search.
The logic of this retriever is taken from this... | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/pinecone_hybrid_search.html |
cfc6c35d5d13-1 | index = pinecone.Index(index_name)
Get embeddings and sparse encoders#
Embeddings are used for the dense vectors; the tokenizer is used for the sparse vectors
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
To encode the text to sparse values you can either choose SPLADE or BM25. For out of... | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/pinecone_hybrid_search.html |
cfc6c35d5d13-2 | Use Retriever#
We can now use the retriever!
result = retriever.get_relevant_documents("foo")
result[0]
Document(page_content='foo', metadata={})
... | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/pinecone_hybrid_search.html |
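Hybrid search blends two relevance signals: a dense score from embeddings and a sparse score from a keyword encoder such as BM25 or SPLADE. A common way to combine them, and the one this sketch assumes, is a convex combination weighted by an `alpha` parameter (the function name and the toy score values are illustrative, not from the notebook):

```python
def hybrid_score(dense, sparse, alpha=0.5):
    """Convex combination of dense (semantic) and sparse (keyword) scores.
    alpha=1.0 is pure dense search; alpha=0.0 is pure sparse search."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must be in [0, 1]")
    return alpha * dense + (1.0 - alpha) * sparse

# a document with strong keyword overlap but a weaker semantic match
score = hybrid_score(dense=0.2, sparse=0.9, alpha=0.3)
```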
3419869cb85e-0 |
ElasticSearch BM25
Contents
Create New Retriever
Add texts (if necessary)
Use Retriever
ElasticSearch BM25#
This notebook goes over how to use a retriever that under the hood uses ElasticSearch and BM25.
For more information on the details of BM25 see this blog post.
from langchain.retrievers import Elas... | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/elastic_search_bm25.html |
3419869cb85e-1 | result = retriever.get_relevant_documents("foo")
result
[Document(page_content='foo', metadata={}),
Document(page_content='foo bar', metadata={})]
... | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/elastic_search_bm25.html |
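BM25 is a bag-of-words ranking function, not an embedding method, which is why this retriever needs no vector store. To make the ranking concrete, here is a compact pure-Python BM25 with the usual `k1`/`b` defaults (a sketch of the classic Okapi formula, not Elasticsearch's exact implementation):

```python
import math
from collections import Counter

def bm25_scores(query_terms, corpus, k1=1.5, b=0.75):
    """Score each tokenized document in `corpus` against `query_terms`
    with the classic Okapi BM25 formula (k1 and b are the usual defaults)."""
    n = len(corpus)
    avgdl = sum(len(doc) for doc in corpus) / n
    # document frequency: in how many documents each term appears
    df = Counter(term for doc in corpus for term in set(doc))
    scores = []
    for doc in corpus:
        tf = Counter(doc)
        score = 0.0
        for term in query_terms:
            if term not in tf:
                continue
            idf = math.log(1 + (n - df[term] + 0.5) / (df[term] + 0.5))
            norm_len = 1 - b + b * len(doc) / avgdl  # length normalization
            score += idf * tf[term] * (k1 + 1) / (tf[term] + k1 * norm_len)
        scores.append(score)
    return scores

corpus = [["foo"], ["foo", "bar"], ["baz"]]
scores = bm25_scores(["foo"], corpus)
```

The shorter document containing "foo" outranks the longer one, and a document without the term scores zero.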
a68c7942261c-0 |
TF-IDF Retriever
Contents
Create New Retriever with Texts
Use Retriever
TF-IDF Retriever#
This notebook goes over how to use a retriever that under the hood uses TF-IDF, via scikit-learn.
For more information on the details of TF-IDF see this blog post.
from langchain.retrievers import TFIDFRetriever
# !... | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/tf_idf_retriever.html |
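TF-IDF weights a term by its frequency in a document, discounted by how common the term is across the corpus, and the retriever then ranks by cosine similarity between TF-IDF vectors. A minimal pure-Python version using the smoothed IDF that scikit-learn applies by default (a sketch of the idea, not the library's code):

```python
import math
from collections import Counter

def tfidf_weights(tokens, n_docs, df):
    """TF-IDF weights with smoothed IDF, as scikit-learn uses by default."""
    tf = Counter(tokens)
    return {t: c * (math.log((1 + n_docs) / (1 + df[t])) + 1) for t, c in tf.items()}

def cosine(u, v):
    """Cosine similarity between two sparse weight dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

docs = [["hello", "world"], ["goodbye", "world"], ["foo", "bar"]]
df = Counter(t for d in docs for t in set(d))
doc_vecs = [tfidf_weights(d, len(docs), df) for d in docs]
query_vec = tfidf_weights(["hello"], len(docs), df)
best = max(range(len(docs)), key=lambda i: cosine(query_vec, doc_vecs[i]))
```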
fdc67777964e-0 |
ChatGPT Plugin Retriever
Contents
Create
Using the ChatGPT Retriever Plugin
ChatGPT Plugin Retriever#
This notebook shows how to use the ChatGPT Retriever Plugin within LangChain.
Create#
First, let’s go over how to create the ChatGPT Retriever Plugin.
To set up the ChatGPT Retriever Plugin, please follow... | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/chatgpt-plugin-retriever.html |
fdc67777964e-1 | The below code walks through how to do that.
from langchain.retrievers import ChatGPTPluginRetriever
retriever = ChatGPTPluginRetriever(url="http://0.0.0.0:8000", bearer_token="foo")
retriever.get_relevant_documents("alice's phone number")
[Document(page_content="This is Alice's phone number: 123-456-7890", lookup_str=... | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/chatgpt-plugin-retriever.html |
fdc67777964e-2 | Document(page_content='Team: Angels "Payroll (millions)": 154.49 "Wins": 89', lookup_str='', metadata={'id': '59c2c0c1-ae3f-4272-a1da-f44a723ea631_0', 'metadata': {'source': None, 'source_id': None, 'url': None, 'created_at': None, 'author': None, 'document_id': '59c2c0c1-ae3f-4272-a1da-f44a723ea631'}, 'embedding': Non... | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/chatgpt-plugin-retriever.html |
ac68ec9ea98d-0 |
SVM Retriever
Contents
Create New Retriever with Texts
Use Retriever
SVM Retriever#
This notebook goes over how to use a retriever that under the hood uses an SVM, via scikit-learn.
Largely based on https://github.com/karpathy/randomfun/blob/master/knn_vs_svm.ipynb
from langchain.retrievers import SVMRet... | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/svm_retriever.html |
c68d6e7f8ee9-0 |
Time Weighted VectorStore Retriever
Contents
Low Decay Rate
High Decay Rate
Time Weighted VectorStore Retriever#
This retriever uses a combination of semantic similarity and recency.
The algorithm for scoring them is:
semantic_similarity + (1.0 - decay_rate) ** hours_passed
Notably, hours_passed refers to... | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/time_weighted_vectorstore.html |
c68d6e7f8ee9-1 | retriever.add_documents([Document(page_content="hello foo")])
['5c9f7c06-c9eb-45f2-aea5-efce5fb9f2bd']
# "Hello World" is returned first because it is most salient, and the decay rate is close to 0., meaning it's still recent enough
retriever.get_relevant_documents("hello world")
[Document(page_content='hello world', m... | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/time_weighted_vectorstore.html |
c68d6e7f8ee9-2 | # "Hello Foo" is returned first because "hello world" is mostly forgotten
retriever.get_relevant_documents("hello world")
[Document(page_content='hello foo', metadata={'last_accessed_at': datetime.datetime(2023, 4, 16, 22, 9, 2, 494798), 'created_at': datetime.datetime(2023, 4, 16, 22, 9, 2, 178722), 'buffer_idx': 1})]... | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/time_weighted_vectorstore.html |
69c0270812a2-0 |
Contextual Compression Retriever
Contents
Contextual Compression Retriever
Using a vanilla vector store retriever
Adding contextual compression with an LLMChainExtractor
More built-in compressors: filters
LLMChainFilter
EmbeddingsFilter
Stringing compressors and document transformers together
Contextual C... | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/contextual-compression.html |
69c0270812a2-1 | texts = text_splitter.split_documents(documents)
retriever = FAISS.from_documents(texts, OpenAIEmbeddings()).as_retriever()
docs = retriever.get_relevant_documents("What did the president say about Ketanji Brown Jackson")
pretty_print_docs(docs)
Document 1:
Tonight. I call on the Senate to: Pass the Freedom to Vote Act... | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/contextual-compression.html |
69c0270812a2-2 | We’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.
----------------------------------------------------------------------------------------------------
Document 3:
And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act... | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/contextual-compression.html |
69c0270812a2-3 | Let’s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill—our First Lady who teaches full-time—calls America’s best-kept secret: community colleges.
Adding contextual compression with an LLMChainExtractor#
Now let’s wrap our base retriever with a ContextualCompressionRetriever. We’l... | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/contextual-compression.html |
69c0270812a2-4 | More built-in compressors: filters#
LLMChainFilter#
The LLMChainFilter is a slightly simpler but more robust compressor that uses an LLM chain to decide which of the initially retrieved documents to filter out and which to return, without manipulating the document contents.
from langchain.retrievers.document_compres... | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/contextual-compression.html |
69c0270812a2-5 | from langchain.retrievers.document_compressors import EmbeddingsFilter
embeddings = OpenAIEmbeddings()
embeddings_filter = EmbeddingsFilter(embeddings=embeddings, similarity_threshold=0.76)
compression_retriever = ContextualCompressionRetriever(base_compressor=embeddings_filter, base_retriever=retriever)
compressed_doc... | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/contextual-compression.html |
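The EmbeddingsFilter keeps only documents whose embedding similarity to the query reaches `similarity_threshold` (0.76 above), which makes it much cheaper than an LLM-based compressor. A conceptual sketch with toy two-dimensional vectors (real embeddings have hundreds of dimensions, and the helper name is hypothetical):

```python
def embeddings_filter(query_vec, docs, similarity_threshold=0.76):
    """Keep documents whose cosine similarity to the query reaches the
    threshold: the cheap, LLM-free compression step."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(x * x for x in b) ** 0.5
        return dot / (na * nb)

    return [text for text, vec in docs if cosine(query_vec, vec) >= similarity_threshold]

docs = [("on-topic chunk", [1.0, 0.0]), ("off-topic chunk", [0.0, 1.0])]
kept = embeddings_filter([0.9, 0.2], docs)
```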
69c0270812a2-6 | We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling.
We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers.
We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have th... | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/contextual-compression.html |
69c0270812a2-7 | Below we create a compressor pipeline by first splitting our docs into smaller chunks, then removing redundant documents, and then filtering based on relevance to the query.
from langchain.document_transformers import EmbeddingsRedundantFilter
from langchain.retrievers.document_compressors import DocumentCompressorPipe... | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/contextual-compression.html |
69c0270812a2-8 | previous
ChatGPT Plugin Retriever
next
Databerry
Contents
Contextual Compression Retriever
Using a vanilla vector store retriever
Adding contextual compression with an LLMChainExtractor
More built-in compressors: filters
LLMChainFilter
EmbeddingsFilter
Stringing compressors and document transformers together
By Har... | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/contextual-compression.html |
6ec9ca2c8b1f-0 |
VectorStore Retriever
VectorStore Retriever#
The index - and therefore the retriever - that LangChain has the most support for is a VectorStoreRetriever. As the name suggests, this retriever is backed by a VectorStore.
Once you construct a VectorStore, it's very easy to construct a retriever. Let’s w... | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/vectorstore-retriever.html |
f4a4db54e1e9-0 |
Databerry
Contents
Query
Databerry#
This notebook shows how to use Databerry’s retriever.
First, you will need to sign up for Databerry, create a datastore, add some data, and get your datastore API endpoint URL
Query#
Now that our index is set up, we can set up a retriever and start querying it.
from lang... | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/databerry.html |
f4a4db54e1e9-1 | Document(page_content="✨ Made with DaftpageOpen main menuPricingTemplatesLoginSearchHelpGetting StartedFeaturesAffiliate ProgramHelp CenterWelcome to Daftpage’s help center—the one-stop shop for learning everything about building websites with Daftpage.Daftpage is the simplest way to create websites for all purposes in... | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/databerry.html |
f4a4db54e1e9-2 | Document(page_content=" is the simplest way to create websites for all purposes in seconds. Without knowing how to code, and for free!Get StartedDaftpage is a new type of website builder that works like a doc.It makes website building easy, fun and offers tons of powerful features for free. Just type / in your page to ... | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/databerry.html |
0c5b9a48ced3-0 |
Getting Started
Getting Started#
The default recommended text splitter is the RecursiveCharacterTextSplitter. This text splitter takes a list of characters. It tries to create chunks based on splitting on the first character, but if any chunks are too large it then moves onto the next character, and so fort... | https://python.langchain.com/en/latest/modules/indexes/text_splitters/getting_started.html |
0c5b9a48ced3-1 | previous
Text Splitters
next
Character Text Splitter
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023. | https://python.langchain.com/en/latest/modules/indexes/text_splitters/getting_started.html |
395c7efdf406-0 | .ipynb
.pdf
NLTK Text Splitter
NLTK Text Splitter#
Rather than just splitting on “\n\n”, we can use NLTK to split based on tokenizers.
How the text is split: by NLTK
How the chunk size is measured: by length function passed in (defaults to number of characters)
# This is a long document we can split up.
with open('../.... | https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/nltk.html |
395c7efdf406-1 | previous
Markdown Text Splitter
next
Python Code Text Splitter
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023. | https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/nltk.html |
fbcfb59b0fc7-0 | .ipynb
.pdf
Markdown Text Splitter
Markdown Text Splitter#
MarkdownTextSplitter splits text along Markdown headings, code blocks, or horizontal rules. It’s implemented as a simple subclass of RecursiveCharacterSplitter with Markdown-specific separators. See the source code to see the Markdown syntax expected by default... | https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/markdown.html |
33d617a24633-0 | .ipynb
.pdf
RecursiveCharacterTextSplitter
RecursiveCharacterTextSplitter#
This text splitter is the recommended one for generic text. It is parameterized by a list of characters. It tries to split on them in order until the chunks are small enough. The default list is ["\n\n", "\n", " ", ""]. This has the effect of tr... | https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/recursive_text_splitter.html |
bf8af6c5f44d-0 | .ipynb
.pdf
TiktokenText Splitter
TiktokenText Splitter#
How the text is split: by tiktoken tokens
How the chunk size is measured: by tiktoken tokens
# This is a long document we can split up.
with open('../../../state_of_the_union.txt') as f:
state_of_the_union = f.read()
from langchain.text_splitter import TokenT... | https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/tiktoken_splitter.html |
23cfef31e6f1-0 | .ipynb
.pdf
Spacy Text Splitter
Spacy Text Splitter#
Another alternative to NLTK is to use Spacy.
How the text is split: by Spacy
How the chunk size is measured: by length function passed in (defaults to number of characters)
# This is a long document we can split up.
with open('../../../state_of_the_union.txt') as f:
... | https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/spacy.html |
23cfef31e6f1-1 | By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023. | https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/spacy.html |
2139abeae8d6-0 | .ipynb
.pdf
Python Code Text Splitter
Python Code Text Splitter#
PythonCodeTextSplitter splits text along python class and method definitions. It’s implemented as a simple subclass of RecursiveCharacterSplitter with Python-specific separators. See the source code to see the Python syntax expected by default.
How the te... | https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/python.html |
c82ab7c3e5f6-0 | .ipynb
.pdf
Character Text Splitter
Character Text Splitter#
This is a simpler method. It splits based on characters (by default “\n\n”) and measures chunk length by number of characters.
How the text is split: by single character
How the chunk size is measured: by length function passed in (defaults to number of ... | https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/character_text_splitter.html |
c82ab7c3e5f6-1 | texts = text_splitter.create_documents([state_of_the_union])
print(texts[0])
page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally to... | https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/character_text_splitter.html |
c82ab7c3e5f6-2 | print(documents[0])
page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republica... | https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/character_text_splitter.html |
566636a32480-0 | .ipynb
.pdf
Hugging Face Length Function
Hugging Face Length Function#
Most LLMs are constrained by the number of tokens that you can pass in, which is not the same as the number of characters. In order to get a more accurate estimate, we can use Hugging Face tokenizers to count the text length.
How the text is split: ... | https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/huggingface_length_function.html |
63914e4e54f5-0 | .ipynb
.pdf
tiktoken (OpenAI) Length Function
tiktoken (OpenAI) Length Function#
You can also use tiktoken, an open-source tokenizer package from OpenAI, to estimate the number of tokens used. It will likely be more accurate for OpenAI models.
How the text is split: by character passed in
How the chunk size is measured: by tiktoken toke... | https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/tiktoken.html |
3440e9cd3eb5-0 | .ipynb
.pdf
Latex Text Splitter
Latex Text Splitter#
LatexTextSplitter splits text along Latex headings, headlines, enumerations and more. It’s implemented as a simple subclass of RecursiveCharacterSplitter with Latex-specific separators. See the source code to see the Latex syntax expected by default.
How the text is ... | https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/latex.html |
3440e9cd3eb5-1 | docs = latex_splitter.create_documents([latex_text])
docs
[Document(page_content='\\documentclass{article}\n\n\x08egin{document}\n\n\\maketitle', lookup_str='', metadata={}, lookup_index=0),
Document(page_content='Introduction}\nLarge language models (LLMs) are a type of machine learning model that can be trained on v... | https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/latex.html |
48c01de466fb-0 | .rst
.pdf
How-To Guides
How-To Guides#
A chain is made up of links, which can be either primitives or other chains.
Primitives can be either prompts, models, arbitrary functions, or other chains.
The examples here are broken up into three sections:
Generic Functionality
Covers both generic chains (that are useful in a ... | https://python.langchain.com/en/latest/modules/chains/how_to_guides.html |
504d9508b6bc-0 | .ipynb
.pdf
Getting Started
Contents
Why do we need chains?
Quick start: Using LLMChain
Different ways of calling chains
Add memory to chains
Debug Chain
Combine chains with the SequentialChain
Create a custom chain with the Chain class
Getting Started#
In this tutorial, we will learn about creating simple chains in ... | https://python.langchain.com/en/latest/modules/chains/getting_started.html |
504d9508b6bc-1 | print(chain.run("colorful socks"))
Cheerful Toes.
You can use a chat model in an LLMChain as well:
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
ChatPromptTemplate,
HumanMessagePromptTemplate,
)
human_message_prompt = HumanMessagePromptTemplate(
prompt=PromptTemplate(... | https://python.langchain.com/en/latest/modules/chains/getting_started.html |
504d9508b6bc-2 | {'text': 'Why did the tomato turn red? Because it saw the salad dressing!'}
If the Chain only takes one input key (i.e. only has one element in its input_variables), you can use the run method. Note that run returns a string instead of a dictionary.
llm_chain.run({"adjective":"lame"})
'Why did the tomato turn red? Because... | https://python.langchain.com/en/latest/modules/chains/getting_started.html |
504d9508b6bc-3 | 'The next four colors of a rainbow are green, blue, indigo, and violet.'
Essentially, BaseMemory defines the interface for how LangChain stores memory. It allows reading stored data through the load_memory_variables method and storing new data through the save_context method. You can learn more about it in the Memory section.
Deb... | https://python.langchain.com/en/latest/modules/chains/getting_started.html |
504d9508b6bc-4 | Combine chains with the SequentialChain#
The next step after calling a language model is to make a series of calls to a language model. We can do this using sequential chains, which are chains that execute their links in a predefined order. Specifically, we will use the SimpleSequentialChain. This is the simplest type ... | https://python.langchain.com/en/latest/modules/chains/getting_started.html |
504d9508b6bc-5 | "Step into Color with Rainbow Socks Co!"
Create a custom chain with the Chain class#
LangChain provides many chains out of the box, but sometimes you may want to create a custom chain for your specific use case. For this example, we will create a custom chain that concatenates the outputs of 2 LLMChains.
In order to cr... | https://python.langchain.com/en/latest/modules/chains/getting_started.html |
504d9508b6bc-6 | prompt_2 = PromptTemplate(
input_variables=["product"],
template="What is a good slogan for a company that makes {product}?",
)
chain_2 = LLMChain(llm=llm, prompt=prompt_2)
concat_chain = ConcatenateChain(chain_1=chain_1, chain_2=chain_2)
concat_output = concat_chain.run("colorful socks")
print(f"Concatenated o... | https://python.langchain.com/en/latest/modules/chains/getting_started.html |
444ee436a932-0 | .ipynb
.pdf
Sequential Chains
Contents
SimpleSequentialChain
Sequential Chain
Memory in Sequential Chains
Sequential Chains#
The next step after calling a language model is to make a series of calls to a language model. This is particularly useful when you want to take the output from one call and use it as the input to... | https://python.langchain.com/en/latest/modules/chains/generic/sequential_chains.html