copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
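The include/exclude semantics of dict() and json() can be sketched in plain Python. This is a simplified illustration, not pydantic's actual implementation: the hypothetical filter_fields helper only handles flat sets of top-level field names, whereas pydantic also accepts nested mapping specs.

```python
def filter_fields(data: dict, include=None, exclude=None, exclude_none=False) -> dict:
    """Simplified sketch of dict()-style field filtering.

    include/exclude are sets of top-level field names; nested
    MappingIntStrAny specs are not modeled here.
    """
    result = {}
    for key, value in data.items():
        if include is not None and key not in include:
            continue
        if exclude is not None and key in exclude:
            continue
        if exclude_none and value is None:
            continue
        result[key] = value
    return result

fields = {"host": "localhost", "port": 8123, "password": None}
filter_fields(fields, exclude={"password"})   # {'host': 'localhost', 'port': 8123}
filter_fields(fields, include={"host"})       # {'host': 'localhost'}
filter_fields(fields, exclude_none=True)      # {'host': 'localhost', 'port': 8123}
```

When both include and exclude are given, a field must pass both filters to appear in the output, matching the intersection behavior sketched above.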
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.clickhouse.ClickhouseSettings.html
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
Examples using ClickhouseSettings¶
ClickHouse Vector Search
langchain.vectorstores.sklearn.SKLearnVectorStore¶
class langchain.vectorstores.sklearn.SKLearnVectorStore(embedding: Embeddings, *, persist_path: Optional[str] = None, serializer: Literal['json', 'bson', 'parquet'] = 'json', metric: str = 'cosine', **kwargs: Any)[source]¶
A simple in-memory vector store based on the scikit-learn library
NearestNeighbors implementation.
Attributes
embeddings
Access the query embedding object if available.
Methods
__init__(embedding, *[, persist_path, ...])
aadd_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
aadd_texts(texts[, metadatas])
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
add_texts(texts[, metadatas, ids])
Run more texts through the embeddings and add to the vectorstore.
afrom_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
afrom_texts(texts, embedding[, metadatas])
Return VectorStore initialized from texts and embeddings.
amax_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
amax_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs)
Return VectorStoreRetriever initialized from this VectorStore.
asearch(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
asimilarity_search(query[, k])
Return docs most similar to query.
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.sklearn.SKLearnVectorStore.html
asimilarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
asimilarity_search_with_relevance_scores(query)
Return docs most similar to query.
delete([ids])
Delete by vector ID or other criteria.
from_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
from_texts(texts, embedding[, metadatas, ...])
Return VectorStore initialized from texts and embeddings.
max_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
max_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
persist()
search(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
similarity_search(query[, k])
Return docs most similar to query.
similarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
similarity_search_with_relevance_scores(query)
Return docs and relevance scores in the range [0, 1].
similarity_search_with_score(query, *[, k])
Run similarity search with distance.
__init__(embedding: Embeddings, *, persist_path: Optional[str] = None, serializer: Literal['json', 'bson', 'parquet'] = 'json', metric: str = 'cosine', **kwargs: Any) → None[source]¶
async aadd_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str]¶
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) → List[str][source]¶
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
kwargs – vectorstore specific parameters
Returns
List of ids from adding the texts into the vectorstore.
async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → VST¶
Return VectorStore initialized from texts and embeddings.
async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs: Any) → VectorStoreRetriever¶
Return VectorStoreRetriever initialized from this VectorStore.
Parameters
search_type (Optional[str]) – Defines the type of search that
the Retriever should perform.
Can be “similarity” (default), “mmr”, or
“similarity_score_threshold”.
search_kwargs (Optional[Dict]) – Keyword arguments to pass to the
search function. Can include things like:
k: Amount of documents to return (Default: 4)
score_threshold: Minimum relevance threshold
for similarity_score_threshold
fetch_k: Amount of documents to pass to MMR algorithm (Default: 20)
lambda_mult: Diversity of results returned by MMR;
1 for minimum diversity and 0 for maximum. (Default: 0.5)
filter: Filter by document metadata
Returns
Retriever class for VectorStore.
Return type
VectorStoreRetriever
Examples:
# Retrieve more documents with higher diversity
# Useful if your dataset has many similar documents
docsearch.as_retriever(
search_type="mmr",
search_kwargs={'k': 6, 'lambda_mult': 0.25}
)
# Fetch more documents for the MMR algorithm to consider
# But only return the top 5
docsearch.as_retriever(
search_type="mmr",
search_kwargs={'k': 5, 'fetch_k': 50}
)
# Only retrieve documents that have a relevance score
# Above a certain threshold
docsearch.as_retriever(
search_type="similarity_score_threshold",
search_kwargs={'score_threshold': 0.8}
)
# Only get the single most similar document from the dataset
docsearch.as_retriever(search_kwargs={'k': 1})
# Use a filter to only retrieve documents from a specific paper
docsearch.as_retriever(
search_kwargs={'filter': {'paper_title':'GPT-4 Technical Report'}}
)
async asearch(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
async asimilarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to query.
async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to embedding vector.
async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs most similar to query.
delete(ids: Optional[List[str]] = None, **kwargs: Any) → Optional[bool]¶
Delete by vector ID or other criteria.
Parameters
ids – List of ids to delete.
**kwargs – Other keyword arguments that subclasses might use.
Returns
True if deletion is successful,
False otherwise, None if not implemented.
Return type
Optional[bool]
classmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, persist_path: Optional[str] = None, **kwargs: Any) → SKLearnVectorStore[source]¶
Return VectorStore initialized from texts and embeddings.
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document][source]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
:param query: Text to look up documents similar to.
:param k: Number of Documents to return. Defaults to 4.
:param fetch_k: Number of Documents to fetch to pass to MMR algorithm.
:param lambda_mult: Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
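The greedy MMR selection described by these parameters can be sketched in plain Python. This is an illustrative implementation under the usual MMR definition, not the library's actual code; cosine similarity is assumed as the metric, and the hypothetical mmr_select works on raw embedding vectors rather than Documents.

```python
def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def mmr_select(query_vec, doc_vecs, k=4, lambda_mult=0.5):
    """Greedily pick k indices from the fetched candidates by MMR.

    lambda_mult=1 scores by pure query similarity (minimum diversity);
    lambda_mult=0 scores by pure dissimilarity to already-picked docs.
    """
    selected = []
    candidates = list(range(len(doc_vecs)))
    while candidates and len(selected) < k:
        best, best_score = None, float("-inf")
        for i in candidates:
            relevance = cosine(query_vec, doc_vecs[i])
            # Redundancy: worst-case similarity to anything already picked.
            redundancy = max(
                (cosine(doc_vecs[i], doc_vecs[j]) for j in selected),
                default=0.0,
            )
            score = lambda_mult * relevance - (1 - lambda_mult) * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
        candidates.remove(best)
    return selected

docs = [[0.9, 0.1], [0.95, 0.05], [0.1, 0.9]]
# lambda_mult=1.0 ranks purely by query similarity; a low lambda_mult
# trades similarity for diversity and picks the off-topic vector second.
mmr_select([1.0, 0.0], docs, k=2, lambda_mult=1.0)  # [1, 0]
mmr_select([1.0, 0.0], docs, k=2, lambda_mult=0.1)  # [1, 2]
```

In the real method, fetch_k controls how many candidates are retrieved before this re-ranking step, and only the top k survive it.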
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document][source]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
:param embedding: Embedding to look up documents similar to.
:param k: Number of Documents to return. Defaults to 4.
:param fetch_k: Number of Documents to fetch to pass to MMR algorithm.
:param lambda_mult: Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
persist() → None[source]¶
search(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document][source]¶
Return docs most similar to query.
similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query vector.
similarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
Parameters
query – input text
k – Number of Documents to return. Defaults to 4.
**kwargs – kwargs to be passed to similarity search. Should include:
score_threshold: Optional, a floating point value between 0 to 1 to
filter the resulting set of retrieved docs
Returns
List of Tuples of (doc, similarity_score)
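The score_threshold filtering described above can be sketched as follows. This is a plain-Python illustration of the documented behavior, not the library's code; the hypothetical apply_score_threshold helper assumes an inclusive cut-off on relevance scores already normalized to [0, 1].

```python
from typing import List, Optional, Tuple

def apply_score_threshold(
    docs_and_scores: List[Tuple[str, float]],
    score_threshold: Optional[float] = None,
) -> List[Tuple[str, float]]:
    """Keep only (doc, score) pairs whose relevance score meets the
    threshold; an inclusive cut-off is assumed here."""
    if score_threshold is None:
        return docs_and_scores
    return [(d, s) for d, s in docs_and_scores if s >= score_threshold]

results = [("doc a", 0.92), ("doc b", 0.55), ("doc c", 0.31)]
apply_score_threshold(results, score_threshold=0.5)  # keeps doc a and doc b
```

Passing no threshold returns the full result set unchanged, which mirrors score_threshold being optional in the kwargs.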
similarity_search_with_score(query: str, *, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]][source]¶
Run similarity search with distance.
Examples using SKLearnVectorStore¶
scikit-learn
langchain.vectorstores.tair.Tair¶
class langchain.vectorstores.tair.Tair(embedding_function: Embeddings, url: str, index_name: str, content_key: str = 'content', metadata_key: str = 'metadata', search_params: Optional[dict] = None, **kwargs: Any)[source]¶
Wrapper around Tair Vector store.
Attributes
embeddings
Access the query embedding object if available.
Methods
__init__(embedding_function, url, index_name)
aadd_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
aadd_texts(texts[, metadatas])
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
add_texts(texts[, metadatas])
Add texts data to an existing index.
afrom_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
afrom_texts(texts, embedding[, metadatas])
Return VectorStore initialized from texts and embeddings.
amax_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
amax_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs)
Return VectorStoreRetriever initialized from this VectorStore.
asearch(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
asimilarity_search(query[, k])
Return docs most similar to query.
asimilarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.tair.Tair.html
asimilarity_search_with_relevance_scores(query)
Return docs most similar to query.
create_index_if_not_exist(dim, ...)
delete([ids])
Delete by vector ID or other criteria.
drop_index([index_name])
Drop an existing index.
from_documents(documents, embedding[, ...])
Return VectorStore initialized from documents and embeddings.
from_existing_index(embedding[, index_name, ...])
Connect to an existing Tair index.
from_texts(texts, embedding[, metadatas, ...])
Return VectorStore initialized from texts and embeddings.
max_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
max_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
search(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
similarity_search(query[, k])
Returns the most similar indexed documents to the query text.
similarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
similarity_search_with_relevance_scores(query)
Return docs and relevance scores in the range [0, 1].
similarity_search_with_score(*args, **kwargs)
Run similarity search with distance.
__init__(embedding_function: Embeddings, url: str, index_name: str, content_key: str = 'content', metadata_key: str = 'metadata', search_params: Optional[dict] = None, **kwargs: Any)[source]¶
async aadd_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str]¶
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str][source]¶
Add texts data to an existing index.
async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → VST¶
Return VectorStore initialized from texts and embeddings.
async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs: Any) → VectorStoreRetriever¶
Return VectorStoreRetriever initialized from this VectorStore.
Parameters
search_type (Optional[str]) – Defines the type of search that
the Retriever should perform.
Can be “similarity” (default), “mmr”, or
“similarity_score_threshold”.
search_kwargs (Optional[Dict]) – Keyword arguments to pass to the
search function. Can include things like:
k: Amount of documents to return (Default: 4)
score_threshold: Minimum relevance threshold
for similarity_score_threshold
fetch_k: Amount of documents to pass to MMR algorithm (Default: 20)
lambda_mult: Diversity of results returned by MMR;
1 for minimum diversity and 0 for maximum. (Default: 0.5)
filter: Filter by document metadata
Returns
Retriever class for VectorStore.
Return type
VectorStoreRetriever
Examples:
# Retrieve more documents with higher diversity
# Useful if your dataset has many similar documents
docsearch.as_retriever(
search_type="mmr",
search_kwargs={'k': 6, 'lambda_mult': 0.25}
)
# Fetch more documents for the MMR algorithm to consider
# But only return the top 5
docsearch.as_retriever(
search_type="mmr",
search_kwargs={'k': 5, 'fetch_k': 50}
)
# Only retrieve documents that have a relevance score
# Above a certain threshold
docsearch.as_retriever(
search_type="similarity_score_threshold",
search_kwargs={'score_threshold': 0.8}
)
# Only get the single most similar document from the dataset
docsearch.as_retriever(search_kwargs={'k': 1})
# Use a filter to only retrieve documents from a specific paper
docsearch.as_retriever(
search_kwargs={'filter': {'paper_title':'GPT-4 Technical Report'}}
)
async asearch(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
async asimilarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to query.
async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to embedding vector.
async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs most similar to query.
create_index_if_not_exist(dim: int, distance_type: str, index_type: str, data_type: str, **kwargs: Any) → bool[source]¶
delete(ids: Optional[List[str]] = None, **kwargs: Any) → Optional[bool]¶
Delete by vector ID or other criteria.
Parameters
ids – List of ids to delete.
**kwargs – Other keyword arguments that subclasses might use.
Returns
True if deletion is successful,
False otherwise, None if not implemented.
Return type
Optional[bool]
static drop_index(index_name: str = 'langchain', **kwargs: Any) → bool[source]¶
Drop an existing index.
Parameters
index_name (str) – Name of the index to drop.
Returns
True if the index is dropped successfully.
Return type
bool
classmethod from_documents(documents: List[Document], embedding: Embeddings, metadatas: Optional[List[dict]] = None, index_name: str = 'langchain', content_key: str = 'content', metadata_key: str = 'metadata', **kwargs: Any) → Tair[source]¶
Return VectorStore initialized from documents and embeddings.
classmethod from_existing_index(embedding: Embeddings, index_name: str = 'langchain', content_key: str = 'content', metadata_key: str = 'metadata', **kwargs: Any) → Tair[source]¶
Connect to an existing Tair index.
classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, index_name: str = 'langchain', content_key: str = 'content', metadata_key: str = 'metadata', **kwargs: Any) → Tair[source]¶
Return VectorStore initialized from texts and embeddings.
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
search(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document][source]¶
Returns the most similar indexed documents to the query text.
Parameters
query (str) – The query text for which to find similar documents.
k (int) – The number of documents to return. Default is 4.
Returns
A list of documents that are most similar to the query text.
Return type
List[Document]
similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query vector.
similarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
Parameters
query – input text
k – Number of Documents to return. Defaults to 4.
**kwargs – kwargs to be passed to similarity search. Should include:
score_threshold: Optional, a floating point value between 0 to 1 to
filter the resulting set of retrieved docs
Returns
List of Tuples of (doc, similarity_score)
similarity_search_with_score(*args: Any, **kwargs: Any) → List[Tuple[Document, float]]¶
Run similarity search with distance.
Examples using Tair¶
Tair
langchain.vectorstores.milvus.Milvus¶
class langchain.vectorstores.milvus.Milvus(embedding_function: Embeddings, collection_name: str = 'LangChainCollection', connection_args: Optional[dict[str, Any]] = None, consistency_level: str = 'Session', index_params: Optional[dict] = None, search_params: Optional[dict] = None, drop_old: Optional[bool] = False)[source]¶
Initialize wrapper around the milvus vector database.
In order to use this you need to have pymilvus installed and a
running Milvus
See the following documentation for how to run a Milvus instance:
https://milvus.io/docs/install_standalone-docker.md
If looking for a hosted Milvus, take a look at this documentation:
https://zilliz.com/cloud and make use of the Zilliz vectorstore found in
this project.
IF USING L2/IP metric IT IS HIGHLY SUGGESTED TO NORMALIZE YOUR DATA.
Parameters
embedding_function (Embeddings) – Function used to embed the text.
collection_name (str) – Which Milvus collection to use. Defaults to
“LangChainCollection”.
connection_args (Optional[dict[str, any]]) – The connection args used for this class come in the form of a dict.
consistency_level (str) – The consistency level to use for a collection.
Defaults to “Session”.
index_params (Optional[dict]) – Which index params to use. Defaults to
HNSW/AUTOINDEX depending on service.
search_params (Optional[dict]) – Which search params to use. Defaults to
default of index.
drop_old (Optional[bool]) – Whether to drop the current collection. Defaults
to False.
The connection args used for this class comes in the form of a dict,
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.milvus.Milvus.html
here are a few of the options:
address (str): The actual address of the Milvus instance. Example address: "localhost:19530"
uri (str): The uri of the Milvus instance. Example uri: "http://randomwebsite:19530", "tcp:foobarsite:19530", "https://ok.s3.south.com:19530".
host (str): The host of the Milvus instance. Default at "localhost"; PyMilvus will fill in the default host if only port is provided.
port (str/int): The port of the Milvus instance. Default at 19530; PyMilvus will fill in the default port if only host is provided.
user (str): Use which user to connect to the Milvus instance. If user and password are provided, we will add related header in every RPC call.
password (str): Required when user is provided. The password corresponding to the user.
secure (bool): Default is false. If set to true, tls will be enabled.
client_key_path (str): If use tls two-way authentication, need to write the client.key path.
client_pem_path (str): If use tls two-way authentication, need to write the client.pem path.
ca_pem_path (str): If use tls two-way authentication, need to write the ca.pem path.
server_pem_path (str): If use tls one-way authentication, need to write the server.pem path.
server_name (str): If use tls, need to write the common name.
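Putting a few of these options together, a connection_args dict for a self-hosted instance might look like the sketch below. The host and port shown are the documented defaults; the credentials are placeholders, not real values.

```python
# Hypothetical connection settings for a self-hosted Milvus instance.
# host/port are the documented defaults; user/password are placeholders
# and may be omitted entirely for an unauthenticated deployment.
connection_args = {
    "host": "localhost",
    "port": 19530,
    "user": "reader",          # optional; "password" is required when set
    "password": "example-pw",  # placeholder
    "secure": False,           # set True to enable TLS
}
```

This dict would be passed as the connection_args parameter of the Milvus constructor shown above.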
Example
from langchain.vectorstores import Milvus
from langchain.embeddings import OpenAIEmbeddings
embedding = OpenAIEmbeddings()
# Connect to a milvus instance on localhost
milvus_store = Milvus(
    embedding_function=embedding,
    collection_name="LangChainCollection",
    drop_old=True,
)
Raises
ValueError – If the pymilvus python package is not installed.
Initialize the Milvus vector store.
Attributes
embeddings
Access the query embedding object if available.
Methods
__init__(embedding_function[, ...])
Initialize the Milvus vector store.
aadd_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
aadd_texts(texts[, metadatas])
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
add_texts(texts[, metadatas, timeout, ...])
Insert text data into Milvus.
afrom_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
afrom_texts(texts, embedding[, metadatas])
Return VectorStore initialized from texts and embeddings.
amax_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
amax_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs)
Return VectorStoreRetriever initialized from this VectorStore.
asearch(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
asimilarity_search(query[, k])
Return docs most similar to query.
asimilarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
asimilarity_search_with_relevance_scores(query)
Return docs most similar to query.
delete([ids])
Delete by vector ID or other criteria.
from_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
from_texts(texts, embedding[, metadatas, ...])
Create a Milvus collection, index it with HNSW, and insert the data.
max_marginal_relevance_search(query[, k, ...])
Perform a search and return results that are reordered by MMR.
max_marginal_relevance_search_by_vector(...)
Perform a search and return results that are reordered by MMR.
search(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
similarity_search(query[, k, param, expr, ...])
Perform a similarity search against the query string.
similarity_search_by_vector(embedding[, k, ...])
Perform a similarity search against an embedding vector.
similarity_search_with_relevance_scores(query)
Return docs and relevance scores in the range [0, 1].
similarity_search_with_score(query[, k, ...])
Perform a search on a query string and return results with score.
similarity_search_with_score_by_vector(embedding)
Perform a search on an embedding vector and return results with score.
__init__(embedding_function: Embeddings, collection_name: str = 'LangChainCollection', connection_args: Optional[dict[str, Any]] = None, consistency_level: str = 'Session', index_params: Optional[dict] = None, search_params: Optional[dict] = None, drop_old: Optional[bool] = False)[source]¶
Initialize the Milvus vector store.
|
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.milvus.Milvus.html
|
106702130be1-4
|
Initialize the Milvus vector store.
async aadd_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str]¶
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, timeout: Optional[int] = None, batch_size: int = 1000, **kwargs: Any) → List[str][source]¶
Insert text data into Milvus.
Inserting data when the collection has not been made yet will result
in creating a new collection. The data of the first entity decides
the schema of the new collection: the dim is extracted from the first
embedding and the columns are decided by the first metadata dict.
Metadata keys will need to be present for all inserted values. At
the moment there is no None equivalent in Milvus.
Parameters
texts (Iterable[str]) – The texts to embed; it is assumed
that they all fit in memory.
metadatas (Optional[List[dict]]) – Metadata dicts attached to each of
the texts. Defaults to None.
timeout (Optional[int]) – Timeout for each batch insert. Defaults
to None.
batch_size (int, optional) – Batch size to use for insertion.
Defaults to 1000.
Raises
MilvusException – Failure to add texts
Returns
The resulting keys for each inserted element.
Return type
List[str]
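The batching and metadata-key behavior described above can be sketched in plain Python (hypothetical helper names; the real work happens inside pymilvus calls):

```python
from typing import Any, Dict, Iterable, List


def batched(items: List[Any], batch_size: int = 1000) -> Iterable[List[Any]]:
    # Yield successive batches, mirroring the batch_size parameter above.
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]


def check_metadata_keys(metadatas: List[Dict[str, Any]]) -> List[str]:
    # The first metadata dict decides the collection's columns; every
    # later dict must supply the same keys (Milvus has no None value).
    keys = sorted(metadatas[0])
    for i, m in enumerate(metadatas):
        missing = [k for k in keys if k not in m]
        if missing:
            raise ValueError(f"metadata #{i} is missing keys: {missing}")
    return keys
```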
async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → VST¶
Return VectorStore initialized from texts and embeddings.
async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs: Any) → VectorStoreRetriever¶
Return VectorStoreRetriever initialized from this VectorStore.
Parameters
search_type (Optional[str]) – Defines the type of search that
the Retriever should perform.
Can be “similarity” (default), “mmr”, or
“similarity_score_threshold”.
search_kwargs (Optional[Dict]) – Keyword arguments to pass to the
search function. Can include things like:
k: Amount of documents to return (Default: 4)
score_threshold: Minimum relevance threshold
for similarity_score_threshold
fetch_k: Amount of documents to pass to MMR algorithm (Default: 20)
lambda_mult: Diversity of results returned by MMR;
1 for minimum diversity and 0 for maximum. (Default: 0.5)
filter: Filter by document metadata
Returns
Retriever class for VectorStore.
Return type
VectorStoreRetriever
Examples:
# Retrieve more documents with higher diversity
# Useful if your dataset has many similar documents
docsearch.as_retriever(
search_type="mmr",
search_kwargs={'k': 6, 'lambda_mult': 0.25}
)
# Fetch more documents for the MMR algorithm to consider
# But only return the top 5
docsearch.as_retriever(
search_type="mmr",
search_kwargs={'k': 5, 'fetch_k': 50}
)
# Only retrieve documents that have a relevance score
# Above a certain threshold
docsearch.as_retriever(
search_type="similarity_score_threshold",
search_kwargs={'score_threshold': 0.8}
)
# Only get the single most similar document from the dataset
docsearch.as_retriever(search_kwargs={'k': 1})
# Use a filter to only retrieve documents from a specific paper
docsearch.as_retriever(
search_kwargs={'filter': {'paper_title':'GPT-4 Technical Report'}}
)
async asearch(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
async asimilarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to query.
async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to embedding vector.
async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs most similar to query.
delete(ids: Optional[List[str]] = None, **kwargs: Any) → Optional[bool]¶
Delete by vector ID or other criteria.
Parameters
ids – List of ids to delete.
**kwargs – Other keyword arguments that subclasses might use.
Returns
True if deletion is successful,
False otherwise, None if not implemented.
Return type
Optional[bool]
classmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, collection_name: str = 'LangChainCollection', connection_args: dict[str, Any] = {'host': 'localhost', 'password': '', 'port': '19530', 'secure': False, 'user': ''}, consistency_level: str = 'Session', index_params: Optional[dict] = None, search_params: Optional[dict] = None, drop_old: bool = False, **kwargs: Any) → Milvus[source]¶
Create a Milvus collection, index it with HNSW, and insert the data.
Parameters
texts (List[str]) – Text data.
embedding (Embeddings) – Embedding function.
metadatas (Optional[List[dict]]) – Metadata for each text if it exists.
Defaults to None.
collection_name (str, optional) – Collection name to use. Defaults to
“LangChainCollection”.
connection_args (dict[str, Any], optional) – Connection args to use. Defaults
to DEFAULT_MILVUS_CONNECTION.
consistency_level (str, optional) – Which consistency level to use. Defaults
to “Session”.
index_params (Optional[dict], optional) – Which index_params to use. Defaults
to None.
search_params (Optional[dict], optional) – Which search params to use.
Defaults to None.
drop_old (Optional[bool], optional) – Whether to drop the collection with
that name if it exists. Defaults to False.
Returns
Milvus Vector Store
Return type
Milvus
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) → List[Document][source]¶
Perform a search and return results that are reordered by MMR.
Parameters
query (str) – The text being searched.
k (int, optional) – How many results to give. Defaults to 4.
fetch_k (int, optional) – Total results to select k from.
Defaults to 20.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5
param (dict, optional) – The search params for the specified index.
Defaults to None.
expr (str, optional) – Filtering expression. Defaults to None.
timeout (int, optional) – How long to wait before timeout error.
Defaults to None.
kwargs – Collection.search() keyword arguments.
Returns
Document results for search.
Return type
List[Document]
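The MMR reordering itself is independent of Milvus. A minimal pure-Python sketch of the trade-off that lambda_mult controls (an illustration, not the library's exact implementation):

```python
import math


def cosine(a, b):
    # Cosine similarity between two non-zero vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)


def mmr(query_vec, doc_vecs, k=4, lambda_mult=0.5):
    # Greedy maximal marginal relevance: at each step pick the candidate
    # maximizing lambda * relevance - (1 - lambda) * redundancy, so
    # lambda_mult=1 means pure relevance (minimum diversity) and
    # lambda_mult=0 means maximum diversity, as documented above.
    candidates = list(range(len(doc_vecs)))
    selected = []
    while candidates and len(selected) < k:
        best, best_score = None, -float("inf")
        for i in candidates:
            relevance = cosine(query_vec, doc_vecs[i])
            redundancy = max(
                (cosine(doc_vecs[i], doc_vecs[j]) for j in selected),
                default=0.0,
            )
            score = lambda_mult * relevance - (1 - lambda_mult) * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
        candidates.remove(best)
    return selected
```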
max_marginal_relevance_search_by_vector(embedding: list[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) → List[Document][source]¶
Perform a search and return results that are reordered by MMR.
Parameters
embedding (List[float]) – The embedding vector being searched.
k (int, optional) – How many results to give. Defaults to 4.
fetch_k (int, optional) – Total results to select k from.
Defaults to 20.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5
param (dict, optional) – The search params for the specified index.
Defaults to None.
expr (str, optional) – Filtering expression. Defaults to None.
timeout (int, optional) – How long to wait before timeout error.
Defaults to None.
kwargs – Collection.search() keyword arguments.
Returns
Document results for search.
Return type
List[Document]
search(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
similarity_search(query: str, k: int = 4, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) → List[Document][source]¶
Perform a similarity search against the query string.
Parameters
query (str) – The text to search.
k (int, optional) – How many results to return. Defaults to 4.
param (dict, optional) – The search params for the index type.
Defaults to None.
expr (str, optional) – Filtering expression. Defaults to None.
timeout (int, optional) – How long to wait before timeout error.
Defaults to None.
kwargs – Collection.search() keyword arguments.
Returns
Document results for search.
Return type
List[Document]
similarity_search_by_vector(embedding: List[float], k: int = 4, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) → List[Document][source]¶
Perform a similarity search against an embedding vector.
Parameters
embedding (List[float]) – The embedding vector to search.
k (int, optional) – How many results to return. Defaults to 4.
param (dict, optional) – The search params for the index type.
Defaults to None.
expr (str, optional) – Filtering expression. Defaults to None.
timeout (int, optional) – How long to wait before timeout error.
Defaults to None.
kwargs – Collection.search() keyword arguments.
Returns
Document results for search.
Return type
List[Document]
similarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
Parameters
query – input text
k – Number of Documents to return. Defaults to 4.
**kwargs – kwargs to be passed to similarity search. Should include:
score_threshold: Optional, a floating point value between 0 and 1 to
filter the resulting set of retrieved docs
Returns
List of Tuples of (doc, similarity_score)
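The [0, 1] relevance scores are derived from raw distances. For an L2 metric over normalized embeddings, one common normalization (an illustration of the idea, not necessarily the exact function this class uses) is:

```python
import math


def l2_distance_to_relevance(distance: float) -> float:
    # Map an L2 distance to a relevance score: distance 0 -> 1.0,
    # larger distances -> smaller scores. The sqrt(2) scale assumes
    # unit-normalized embeddings.
    return 1.0 - distance / math.sqrt(2)
```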
similarity_search_with_score(query: str, k: int = 4, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) → List[Tuple[Document, float]][source]¶
Perform a search on a query string and return results with score.
For more information about the search parameters, take a look at the pymilvus
documentation found here:
https://milvus.io/api-reference/pymilvus/v2.2.6/Collection/search().md
Parameters
query (str) – The text being searched.
k (int, optional) – The amount of results to return. Defaults to 4.
param (dict) – The search params for the specified index.
Defaults to None.
expr (str, optional) – Filtering expression. Defaults to None.
timeout (int, optional) – How long to wait before timeout error.
Defaults to None.
kwargs – Collection.search() keyword arguments.
Return type
List[Tuple[Document, float]]
similarity_search_with_score_by_vector(embedding: List[float], k: int = 4, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) → List[Tuple[Document, float]][source]¶
Perform a search on an embedding vector and return results with score.
For more information about the search parameters, take a look at the pymilvus
documentation found here:
https://milvus.io/api-reference/pymilvus/v2.2.6/Collection/search().md
Parameters
embedding (List[float]) – The embedding vector being searched.
k (int, optional) – The amount of results to return. Defaults to 4.
param (dict) – The search params for the specified index.
Defaults to None.
expr (str, optional) – Filtering expression. Defaults to None.
timeout (int, optional) – How long to wait before timeout error.
Defaults to None.
kwargs – Collection.search() keyword arguments.
Returns
Result doc and score.
Return type
List[Tuple[Document, float]]
Examples using Milvus¶
Milvus
Zilliz
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.milvus.Milvus.html
langchain.vectorstores.scann.dependable_scann_import¶
langchain.vectorstores.scann.dependable_scann_import() → Any[source]¶
Import scann if available, otherwise raise error.
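The guarded-import pattern behind this helper looks roughly like the following generic sketch (hypothetical function name; the real source is specific to scann):

```python
import importlib


def dependable_import(module_name: str, pip_name: str):
    # Try to import an optional dependency; fail with an actionable
    # message instead of a bare ModuleNotFoundError.
    try:
        return importlib.import_module(module_name)
    except ImportError as exc:
        raise ImportError(
            f"Could not import {module_name}. "
            f"Please install it with `pip install {pip_name}`."
        ) from exc
```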
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.scann.dependable_scann_import.html
langchain.vectorstores.azuresearch.AzureSearch¶
class langchain.vectorstores.azuresearch.AzureSearch(azure_search_endpoint: str, azure_search_key: str, index_name: str, embedding_function: Callable, search_type: str = 'hybrid', semantic_configuration_name: Optional[str] = None, semantic_query_language: str = 'en-us', fields: Optional[List[SearchField]] = None, vector_search: Optional[VectorSearch] = None, semantic_settings: Optional[SemanticSettings] = None, scoring_profiles: Optional[List[ScoringProfile]] = None, default_scoring_profile: Optional[str] = None, **kwargs: Any)[source]¶
Azure Cognitive Search vector store.
Attributes
embeddings
Access the query embedding object if available.
Methods
__init__(azure_search_endpoint, ...[, ...])
aadd_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
aadd_texts(texts[, metadatas])
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
add_texts(texts[, metadatas])
Add texts data to an existing index.
afrom_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
afrom_texts(texts, embedding[, metadatas])
Return VectorStore initialized from texts and embeddings.
amax_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
amax_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs)
Return VectorStoreRetriever initialized from this VectorStore.
asearch(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
asimilarity_search(query[, k])
Return docs most similar to query.
asimilarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
asimilarity_search_with_relevance_scores(query)
Return docs most similar to query.
delete([ids])
Delete by vector ID or other criteria.
from_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
from_texts(texts, embedding[, metadatas, ...])
Return VectorStore initialized from texts and embeddings.
hybrid_search(query[, k])
Returns the most similar indexed documents to the query text.
hybrid_search_with_score(query[, k, filters])
Return docs most similar to the query using a hybrid query.
max_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
max_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
search(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
semantic_hybrid_search(query[, k])
Returns the most similar indexed documents to the query text.
semantic_hybrid_search_with_score(query[, ...])
Return docs most similar to the query using a hybrid query.
similarity_search(query[, k])
Return docs most similar to query.
similarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
similarity_search_with_relevance_scores(query)
Return docs and relevance scores in the range [0, 1].
similarity_search_with_score(*args, **kwargs)
Run similarity search with distance.
vector_search(query[, k])
Returns the most similar indexed documents to the query text.
vector_search_with_score(query[, k, filters])
Return docs most similar to query.
__init__(azure_search_endpoint: str, azure_search_key: str, index_name: str, embedding_function: Callable, search_type: str = 'hybrid', semantic_configuration_name: Optional[str] = None, semantic_query_language: str = 'en-us', fields: Optional[List[SearchField]] = None, vector_search: Optional[VectorSearch] = None, semantic_settings: Optional[SemanticSettings] = None, scoring_profiles: Optional[List[ScoringProfile]] = None, default_scoring_profile: Optional[str] = None, **kwargs: Any)[source]¶
async aadd_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str]¶
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str][source]¶
Add texts data to an existing index.
async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → VST¶
Return VectorStore initialized from texts and embeddings.
async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs: Any) → VectorStoreRetriever¶
Return VectorStoreRetriever initialized from this VectorStore.
Parameters
search_type (Optional[str]) – Defines the type of search that
the Retriever should perform.
Can be “similarity” (default), “mmr”, or
“similarity_score_threshold”.
search_kwargs (Optional[Dict]) – Keyword arguments to pass to the
search function. Can include things like:
k: Amount of documents to return (Default: 4)
score_threshold: Minimum relevance threshold
for similarity_score_threshold
fetch_k: Amount of documents to pass to MMR algorithm (Default: 20)
lambda_mult: Diversity of results returned by MMR;
1 for minimum diversity and 0 for maximum. (Default: 0.5)
filter: Filter by document metadata
Returns
Retriever class for VectorStore.
Return type
VectorStoreRetriever
Examples:
# Retrieve more documents with higher diversity
# Useful if your dataset has many similar documents
docsearch.as_retriever(
search_type="mmr",
search_kwargs={'k': 6, 'lambda_mult': 0.25}
)
# Fetch more documents for the MMR algorithm to consider
# But only return the top 5
docsearch.as_retriever(
search_type="mmr",
search_kwargs={'k': 5, 'fetch_k': 50}
)
# Only retrieve documents that have a relevance score
# Above a certain threshold
docsearch.as_retriever(
search_type="similarity_score_threshold",
search_kwargs={'score_threshold': 0.8}
)
# Only get the single most similar document from the dataset
docsearch.as_retriever(search_kwargs={'k': 1})
# Use a filter to only retrieve documents from a specific paper
docsearch.as_retriever(
search_kwargs={'filter': {'paper_title':'GPT-4 Technical Report'}}
)
async asearch(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
async asimilarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to query.
async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to embedding vector.
async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs most similar to query.
delete(ids: Optional[List[str]] = None, **kwargs: Any) → Optional[bool]¶
Delete by vector ID or other criteria.
Parameters
ids – List of ids to delete.
**kwargs – Other keyword arguments that subclasses might use.
Returns
True if deletion is successful,
False otherwise, None if not implemented.
Return type
Optional[bool]
classmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, azure_search_endpoint: str = '', azure_search_key: str = '', index_name: str = 'langchain-index', **kwargs: Any) → AzureSearch[source]¶
Return VectorStore initialized from texts and embeddings.
hybrid_search(query: str, k: int = 4, **kwargs: Any) → List[Document][source]¶
Returns the most similar indexed documents to the query text.
Parameters
query (str) – The query text for which to find similar documents.
k (int) – The number of documents to return. Default is 4.
Returns
A list of documents that are most similar to the query text.
Return type
List[Document]
hybrid_search_with_score(query: str, k: int = 4, filters: Optional[str] = None) → List[Tuple[Document, float]][source]¶
Return docs most similar to the query using a hybrid query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query and score for each
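Hybrid retrieval fuses a keyword ranking with a vector ranking into one scored list. A common fusion scheme for this is Reciprocal Rank Fusion; the sketch below illustrates the idea only and is not Azure Cognitive Search's exact scoring:

```python
from typing import Dict, List


def reciprocal_rank_fusion(rankings: List[List[str]], k: int = 60) -> List[str]:
    # rankings: one ranked list of doc ids per retriever (e.g. keyword
    # and vector search). Each doc earns 1 / (k + rank + 1) per list;
    # ids are returned sorted by the summed score, best first.
    scores: Dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)
```

Documents ranked highly by both retrievers ("b" below) beat documents that top only one list, which is the point of hybrid search.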
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
search(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
semantic_hybrid_search(query: str, k: int = 4, **kwargs: Any) → List[Document][source]¶
Returns the most similar indexed documents to the query text.
Parameters
query (str) – The query text for which to find similar documents.
k (int) – The number of documents to return. Default is 4.
Returns
A list of documents that are most similar to the query text.
Return type
List[Document]
semantic_hybrid_search_with_score(query: str, k: int = 4, filters: Optional[str] = None) → List[Tuple[Document, float]][source]¶
Return docs most similar to the query using a hybrid query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query and score for each
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document][source]¶
Return docs most similar to query.
similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query vector.
similarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
Parameters
query – input text
k – Number of Documents to return. Defaults to 4.
**kwargs – kwargs to be passed to similarity search. Should include:
score_threshold: Optional, a floating point value between 0 and 1 to
filter the resulting set of retrieved docs
Returns
List of Tuples of (doc, similarity_score)
similarity_search_with_score(*args: Any, **kwargs: Any) → List[Tuple[Document, float]]¶
Run similarity search with distance.
vector_search(query: str, k: int = 4, **kwargs: Any) → List[Document][source]¶
Returns the most similar indexed documents to the query text.
Parameters
query (str) – The query text for which to find similar documents.
k (int) – The number of documents to return. Default is 4.
Returns
A list of documents that are most similar to the query text.
Return type
List[Document]
vector_search_with_score(query: str, k: int = 4, filters: Optional[str] = None) → List[Tuple[Document, float]][source]¶
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query and score for each
Examples using AzureSearch¶
Azure Cognitive Search
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.azuresearch.AzureSearch.html
langchain.vectorstores.pgembedding.EmbeddingStore¶
class langchain.vectorstores.pgembedding.EmbeddingStore(**kwargs)[source]¶
A simple constructor that allows initialization from kwargs.
Sets attributes on the constructed instance using the names and
values in kwargs.
Only keys that are present as
attributes of the instance’s class are allowed. These could be,
for example, any mapped columns or relationships.
Attributes
cmetadata
collection
collection_id
custom_id
document
embedding
metadata
registry
uuid
Methods
__init__(**kwargs)
A simple constructor that allows initialization from kwargs.
__init__(**kwargs)¶
A simple constructor that allows initialization from kwargs.
Sets attributes on the constructed instance using the names and
values in kwargs.
Only keys that are present as
attributes of the instance’s class are allowed. These could be,
for example, any mapped columns or relationships.
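The documented constructor behavior (kwargs set attributes, and only keys present as class attributes are allowed) can be sketched as follows. This mimics what a SQLAlchemy declarative constructor does; it is not the actual EmbeddingStore implementation, and the attribute names are illustrative:

```python
# Sketch: a kwargs-only constructor that rejects unknown keys, as the
# docstring above describes.
class KwargsOnlyKnown:
    # Class-level attributes stand in for mapped columns.
    custom_id = None
    document = None
    embedding = None

    def __init__(self, **kwargs):
        for key, value in kwargs.items():
            if not hasattr(type(self), key):
                raise TypeError(f"{key!r} is not an attribute of {type(self).__name__}")
            setattr(self, key, value)

row = KwargsOnlyKnown(custom_id="42", document="hello")
print(row.custom_id)  # → 42
try:
    KwargsOnlyKnown(bogus=1)
except TypeError as e:
    print(e)
```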
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pgembedding.EmbeddingStore.html
langchain.vectorstores.myscale.has_mul_sub_str¶
langchain.vectorstores.myscale.has_mul_sub_str(s: str, *args: Any) → bool[source]¶
Check if a string contains multiple substrings.
Parameters
s – string to check.
*args – substrings to check.
Returns
True if all substrings are in the string, False otherwise.
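The documented behavior reduces to an all-substrings check. The following is a reimplementation for illustration, not the langchain source:

```python
# Sketch: True only if every substring appears in s.
def has_mul_sub_str(s: str, *args: str) -> bool:
    return all(sub in s for sub in args)

print(has_mul_sub_str("SELECT * FROM docs WHERE id = 1", "SELECT", "WHERE"))  # → True
print(has_mul_sub_str("SELECT * FROM docs", "WHERE"))  # → False
```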
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.myscale.has_mul_sub_str.html
langchain.vectorstores.pgembedding.BaseModel¶
class langchain.vectorstores.pgembedding.BaseModel(**kwargs: Any)[source]¶
A simple constructor that allows initialization from kwargs.
Sets attributes on the constructed instance using the names and
values in kwargs.
Only keys that are present as
attributes of the instance’s class are allowed. These could be,
for example, any mapped columns or relationships.
Attributes
metadata
registry
uuid
Methods
__init__(**kwargs)
A simple constructor that allows initialization from kwargs.
__init__(**kwargs: Any) → None¶
A simple constructor that allows initialization from kwargs.
Sets attributes on the constructed instance using the names and
values in kwargs.
Only keys that are present as
attributes of the instance’s class are allowed. These could be,
for example, any mapped columns or relationships.
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pgembedding.BaseModel.html
langchain.vectorstores.sklearn.BaseSerializer¶
class langchain.vectorstores.sklearn.BaseSerializer(persist_path: str)[source]¶
Abstract base class for saving and loading data.
Methods
__init__(persist_path)
extension()
The file extension suggested by this serializer (without dot).
load()
Loads the data from the persist_path
save(data)
Saves the data to the persist_path
__init__(persist_path: str) → None[source]¶
abstract classmethod extension() → str[source]¶
The file extension suggested by this serializer (without dot).
abstract load() → Any[source]¶
Loads the data from the persist_path
abstract save(data: Any) → None[source]¶
Saves the data to the persist_path
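A hypothetical concrete serializer following the documented interface (extension/load/save against a persist_path) might look like this. It subclasses a local ABC rather than importing langchain, so the classes here are illustrative only:

```python
# Sketch: a JSON-backed serializer implementing the BaseSerializer
# interface described above.
import json
from abc import ABC, abstractmethod


class BaseSerializer(ABC):
    def __init__(self, persist_path: str) -> None:
        self.persist_path = persist_path

    @classmethod
    @abstractmethod
    def extension(cls) -> str:
        """The file extension suggested by this serializer (without dot)."""

    @abstractmethod
    def load(self):
        """Loads the data from the persist_path."""

    @abstractmethod
    def save(self, data) -> None:
        """Saves the data to the persist_path."""


class JsonSerializer(BaseSerializer):
    @classmethod
    def extension(cls) -> str:
        return "json"

    def save(self, data) -> None:
        with open(self.persist_path, "w") as f:
            json.dump(data, f)

    def load(self):
        with open(self.persist_path) as f:
            return json.load(f)
```

Any data that round-trips through json.dump/json.load can be persisted this way; other subclasses could swap in pickle or parquet with the same three methods.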
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.sklearn.BaseSerializer.html
langchain.vectorstores.docarray.in_memory.DocArrayInMemorySearch¶
class langchain.vectorstores.docarray.in_memory.DocArrayInMemorySearch(doc_index: BaseDocIndex, embedding: Embeddings)[source]¶
Wrapper around in-memory storage for exact search.
To use it, you should have the docarray package with version >=0.32.0 installed.
You can install it with pip install "langchain[docarray]".
Initialize a vector store from DocArray’s DocIndex.
Attributes
doc_cls
embeddings
Access the query embedding object if available.
Methods
__init__(doc_index, embedding)
Initialize a vector store from DocArray's DocIndex.
aadd_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
aadd_texts(texts[, metadatas])
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
add_texts(texts[, metadatas])
Run more texts through the embeddings and add to the vectorstore.
afrom_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
afrom_texts(texts, embedding[, metadatas])
Return VectorStore initialized from texts and embeddings.
amax_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
amax_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs)
Return VectorStoreRetriever initialized from this VectorStore.
asearch(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
asimilarity_search(query[, k])
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.docarray.in_memory.DocArrayInMemorySearch.html
Return docs most similar to query.
asimilarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
asimilarity_search_with_relevance_scores(query)
Return docs most similar to query.
delete([ids])
Delete by vector ID or other criteria.
from_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
from_params(embedding[, metric])
Initialize DocArrayInMemorySearch store.
from_texts(texts, embedding[, metadatas])
Create a DocArrayInMemorySearch store and insert data.
max_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
max_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
search(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
similarity_search(query[, k])
Return docs most similar to query.
similarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
similarity_search_with_relevance_scores(query)
Return docs and relevance scores in the range [0, 1].
similarity_search_with_score(query[, k])
Return docs most similar to query.
__init__(doc_index: BaseDocIndex, embedding: Embeddings)¶
Initialize a vector store from DocArray’s DocIndex.
async aadd_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str]¶
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str]¶
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
Returns
List of ids from adding the texts into the vectorstore.
async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → VST¶
Return VectorStore initialized from texts and embeddings.
async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs: Any) → VectorStoreRetriever¶
Return VectorStoreRetriever initialized from this VectorStore.
Parameters
search_type (Optional[str]) – Defines the type of search that
the Retriever should perform.
Can be “similarity” (default), “mmr”, or
“similarity_score_threshold”.
search_kwargs (Optional[Dict]) – Keyword arguments to pass to the
search function. Can include things like:
k: Amount of documents to return (Default: 4)
score_threshold: Minimum relevance threshold
for similarity_score_threshold
fetch_k: Amount of documents to pass to MMR algorithm (Default: 20)
lambda_mult: Diversity of results returned by MMR;
1 for minimum diversity and 0 for maximum. (Default: 0.5)
filter: Filter by document metadata
Returns
Retriever class for VectorStore.
Return type
VectorStoreRetriever
Examples:
# Retrieve more documents with higher diversity
# Useful if your dataset has many similar documents
docsearch.as_retriever(
search_type="mmr",
search_kwargs={'k': 6, 'lambda_mult': 0.25}
)
# Fetch more documents for the MMR algorithm to consider
# But only return the top 5
docsearch.as_retriever(
search_type="mmr",
search_kwargs={'k': 5, 'fetch_k': 50}
)
# Only retrieve documents that have a relevance score
# Above a certain threshold
docsearch.as_retriever(
search_type="similarity_score_threshold",
search_kwargs={'score_threshold': 0.8}
)
# Only get the single most similar document from the dataset
docsearch.as_retriever(search_kwargs={'k': 1})
# Use a filter to only retrieve documents from a specific paper
docsearch.as_retriever(
search_kwargs={'filter': {'paper_title':'GPT-4 Technical Report'}}
)
async asearch(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
async asimilarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to query.
async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to embedding vector.
async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs most similar to query.
delete(ids: Optional[List[str]] = None, **kwargs: Any) → Optional[bool]¶
Delete by vector ID or other criteria.
Parameters
ids – List of ids to delete.
**kwargs – Other keyword arguments that subclasses might use.
Returns
True if deletion is successful,
False otherwise, None if not implemented.
Return type
Optional[bool]
classmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
classmethod from_params(embedding: Embeddings, metric: Literal['cosine_sim', 'euclidean_dist', 'sqeuclidean_dist'] = 'cosine_sim', **kwargs: Any) → DocArrayInMemorySearch[source]¶
Initialize DocArrayInMemorySearch store.
Parameters
embedding (Embeddings) – Embedding function.
metric (str) – metric for exact nearest-neighbor search.
Can be one of: “cosine_sim”, “euclidean_dist” and “sqeuclidean_dist”.
Defaults to “cosine_sim”.
**kwargs – Other keyword arguments to be passed to the get_doc_cls method.
classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[Dict[Any, Any]]] = None, **kwargs: Any) → DocArrayInMemorySearch[source]¶
Create a DocArrayInMemorySearch store and insert data.
Parameters
texts (List[str]) – Text data.
embedding (Embeddings) – Embedding function.
metadatas (Optional[List[Dict[Any, Any]]]) – Metadata for each text
if it exists. Defaults to None.
metric (str) – metric for exact nearest-neighbor search.
Can be one of: “cosine_sim”, “euclidean_dist” and “sqeuclidean_dist”.
Defaults to “cosine_sim”.
Returns
DocArrayInMemorySearch Vector Store
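The exact nearest-neighbor search this store performs with the default "cosine_sim" metric can be sketched in plain Python: score every stored vector against the query and return the top k. This is an illustration of the technique; the real store delegates to DocArray's in-memory index:

```python
# Sketch: exact top-k search by cosine similarity over an in-memory
# list of (text, vector) pairs.
import math


def cosine_sim(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)


def similarity_search(query_vec, stored, k=4):
    """stored: list of (text, vector). Returns top-k texts by cosine similarity."""
    scored = sorted(stored, key=lambda tv: cosine_sim(query_vec, tv[1]), reverse=True)
    return [text for text, _ in scored[:k]]


docs = [("cat", [1.0, 0.0]), ("dog", [0.9, 0.1]), ("car", [0.0, 1.0])]
print(similarity_search([1.0, 0.05], docs, k=2))  # → ['cat', 'dog']
```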
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
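The selection procedure described above can be sketched as a greedy loop: pick the candidate with the best trade-off between similarity to the query (weighted by lambda_mult) and similarity to already-selected items (weighted by 1 - lambda_mult). This is an illustration using dot products as the similarity measure, not the library's implementation:

```python
# Sketch: maximal marginal relevance over raw vectors.
def mmr_select(query_vec, candidates, k=4, lambda_mult=0.5):
    """candidates: list of vectors. Returns indices of the selected items."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    selected = []
    while len(selected) < min(k, len(candidates)):
        best_idx, best_score = None, float("-inf")
        for i, vec in enumerate(candidates):
            if i in selected:
                continue
            relevance = dot(query_vec, vec)
            redundancy = max((dot(vec, candidates[j]) for j in selected), default=0.0)
            score = lambda_mult * relevance - (1 - lambda_mult) * redundancy
            if score > best_score:
                best_idx, best_score = i, score
        selected.append(best_idx)
    return selected
```

With lambda_mult below 0.5, a near-duplicate of an already-selected vector loses to a more diverse candidate even when it is closer to the query.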
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
search(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query.
similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query vector.
similarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
Parameters
query – input text
k – Number of Documents to return. Defaults to 4.
**kwargs – kwargs to be passed to similarity search. Should include:
score_threshold: Optional, a floating point value between 0 and 1 used to
filter the resulting set of retrieved docs
Returns
List of Tuples of (doc, similarity_score)
similarity_search_with_score(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of documents most similar to the query text and
cosine distance in float for each.
Lower score represents more similarity.
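Note that the two scoring methods point in opposite directions: similarity_search_with_score returns a cosine distance (lower is better), while similarity_search_with_relevance_scores returns a relevance in [0, 1] (higher is better). A common conversion, sketched here as an assumption rather than the store's exact formula, is:

```python
# Sketch: one plausible mapping from cosine distance to a relevance
# score (an assumption, not the store's documented formula).
def distance_to_relevance(distance: float) -> float:
    return 1.0 - distance


print(distance_to_relevance(0.0))  # identical vectors → 1.0
print(distance_to_relevance(1.0))  # orthogonal vectors → 0.0
```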
Examples using DocArrayInMemorySearch¶
DocArrayInMemorySearch
langchain.vectorstores.singlestoredb.SingleStoreDBRetriever¶
class langchain.vectorstores.singlestoredb.SingleStoreDBRetriever[source]¶
Bases: VectorStoreRetriever
Retriever for SingleStoreDB vector stores.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param k: int = 4¶
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the retriever. Defaults to None
This metadata will be associated with each call to this retriever,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a retriever with
its use case.
param search_kwargs: dict [Optional]¶
Keyword arguments to pass to the search function.
param search_type: str = 'similarity'¶
Type of search to perform. Defaults to “similarity”.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the retriever. Defaults to None
These tags will be associated with each call to this retriever,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a retriever with
its use case.
param vectorstore: SingleStoreDB [Required]¶
VectorStore to use for retrieval.
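Putting the parameters above together: the retriever validates its search_type against allowed_search_types (here only 'similarity') and then dispatches to the wrapped vectorstore with search_kwargs. The class below is an illustrative plain-Python sketch, not the real pydantic model built on VectorStoreRetriever:

```python
# Sketch: search_type validation and dispatch as described by the
# SingleStoreDBRetriever parameters.
class SimilarityOnlyRetriever:
    allowed_search_types = ("similarity",)

    def __init__(self, vectorstore, search_type="similarity", search_kwargs=None):
        if search_type not in self.allowed_search_types:
            raise ValueError(
                f"search_type {search_type!r} not allowed; "
                f"expected one of {self.allowed_search_types}"
            )
        self.vectorstore = vectorstore
        self.search_type = search_type
        self.search_kwargs = search_kwargs or {"k": 4}

    def get_relevant_documents(self, query):
        return self.vectorstore.similarity_search(query, **self.search_kwargs)
```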
async aadd_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Add documents to vectorstore.
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, max_concurrency: Optional[int] = None) → List[Output]¶
add_documents(documents: List[Document], **kwargs: Any) → List[str]¶
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.singlestoredb.SingleStoreDBRetriever.html
Add documents to vectorstore.
async aget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → List[Document]¶
Asynchronously get documents relevant to a query.
Parameters
query – string to find relevant documents for.
callbacks – Callback manager or list of callbacks.
tags – Optional list of tags associated with the retriever. Defaults to None.
These tags will be associated with each call to this retriever,
and passed as arguments to the handlers defined in callbacks.
metadata – Optional metadata associated with the retriever. Defaults to None.
This metadata will be associated with each call to this retriever,
and passed as arguments to the handlers defined in callbacks.
Returns
List of relevant documents
async ainvoke(input: str, config: Optional[RunnableConfig] = None) → List[Document]¶
async astream(input: Input, config: Optional[RunnableConfig] = None) → AsyncIterator[Output]¶
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, max_concurrency: Optional[int] = None) → List[Output]¶
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
get_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → List[Document]¶
Retrieve documents relevant to a query.
Parameters
query – string to find relevant documents for.
callbacks – Callback manager or list of callbacks.
tags – Optional list of tags associated with the retriever. Defaults to None.
These tags will be associated with each call to this retriever,
and passed as arguments to the handlers defined in callbacks.
metadata – Optional metadata associated with the retriever. Defaults to None.
This metadata will be associated with each call to this retriever,
and passed as arguments to the handlers defined in callbacks.
Returns
List of relevant documents
invoke(input: str, config: Optional[RunnableConfig] = None) → List[Document]¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None) → Iterator[Output]¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
allowed_search_types: ClassVar[Collection[str]] = ('similarity',)¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object,
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids,
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
langchain.vectorstores.pgvector.BaseModel¶
class langchain.vectorstores.pgvector.BaseModel(**kwargs: Any)[source]¶
A simple constructor that allows initialization from kwargs.
Sets attributes on the constructed instance using the names and
values in kwargs.
Only keys that are present as
attributes of the instance’s class are allowed. These could be,
for example, any mapped columns or relationships.
Attributes
metadata
registry
uuid
Methods
__init__(**kwargs)
A simple constructor that allows initialization from kwargs.
__init__(**kwargs: Any) → None¶
A simple constructor that allows initialization from kwargs.
Sets attributes on the constructed instance using the names and
values in kwargs.
Only keys that are present as
attributes of the instance’s class are allowed. These could be,
for example, any mapped columns or relationships.
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pgvector.BaseModel.html
All modules for which code is available
langchain._api.deprecation
langchain.agents.agent
langchain.agents.agent_iterator
langchain.agents.agent_toolkits.amadeus.toolkit
langchain.agents.agent_toolkits.azure_cognitive_services
langchain.agents.agent_toolkits.base
langchain.agents.agent_toolkits.conversational_retrieval.openai_functions
langchain.agents.agent_toolkits.conversational_retrieval.tool
langchain.agents.agent_toolkits.csv.base
langchain.agents.agent_toolkits.file_management.toolkit
langchain.agents.agent_toolkits.github.toolkit
langchain.agents.agent_toolkits.gmail.toolkit
langchain.agents.agent_toolkits.jira.toolkit
langchain.agents.agent_toolkits.json.base
langchain.agents.agent_toolkits.json.toolkit
langchain.agents.agent_toolkits.multion.toolkit
langchain.agents.agent_toolkits.nla.tool
langchain.agents.agent_toolkits.nla.toolkit
langchain.agents.agent_toolkits.office365.toolkit
langchain.agents.agent_toolkits.openapi.base
langchain.agents.agent_toolkits.openapi.planner
langchain.agents.agent_toolkits.openapi.spec
langchain.agents.agent_toolkits.openapi.toolkit
langchain.agents.agent_toolkits.pandas.base
langchain.agents.agent_toolkits.playwright.toolkit
langchain.agents.agent_toolkits.powerbi.base
langchain.agents.agent_toolkits.powerbi.chat_base
langchain.agents.agent_toolkits.powerbi.toolkit
langchain.agents.agent_toolkits.python.base
langchain.agents.agent_toolkits.spark.base
langchain.agents.agent_toolkits.spark_sql.base
langchain.agents.agent_toolkits.spark_sql.toolkit
langchain.agents.agent_toolkits.sql.base
langchain.agents.agent_toolkits.sql.toolkit
langchain.agents.agent_toolkits.vectorstore.base
https://api.python.langchain.com/en/latest/_modules/index.html
langchain.agents.agent_toolkits.vectorstore.toolkit
langchain.agents.agent_toolkits.xorbits.base
langchain.agents.agent_toolkits.zapier.toolkit
langchain.agents.agent_types
langchain.agents.chat.base
langchain.agents.chat.output_parser
langchain.agents.conversational.base
langchain.agents.conversational.output_parser
langchain.agents.conversational_chat.base
langchain.agents.conversational_chat.output_parser
langchain.agents.initialize
langchain.agents.load_tools
langchain.agents.loading
langchain.agents.mrkl.base
langchain.agents.mrkl.output_parser
langchain.agents.openai_functions_agent.agent_token_buffer_memory
langchain.agents.openai_functions_agent.base
langchain.agents.openai_functions_multi_agent.base
langchain.agents.react.base
langchain.agents.react.output_parser
langchain.agents.schema
langchain.agents.self_ask_with_search.base
langchain.agents.self_ask_with_search.output_parser
langchain.agents.structured_chat.base
langchain.agents.structured_chat.output_parser
langchain.agents.tools
langchain.agents.utils
langchain.agents.xml.base
langchain.cache
langchain.callbacks.aim_callback
langchain.callbacks.argilla_callback
langchain.callbacks.arize_callback
langchain.callbacks.arthur_callback
langchain.callbacks.base
langchain.callbacks.clearml_callback
langchain.callbacks.comet_ml_callback
langchain.callbacks.context_callback
langchain.callbacks.file
langchain.callbacks.flyte_callback
langchain.callbacks.human
langchain.callbacks.infino_callback
langchain.callbacks.manager
langchain.callbacks.mlflow_callback
langchain.callbacks.openai_info
langchain.callbacks.promptlayer_callback
langchain.callbacks.sagemaker_callback
langchain.callbacks.stdout
langchain.callbacks.streaming_aiter
langchain.callbacks.streaming_aiter_final_only
langchain.callbacks.streaming_stdout
langchain.callbacks.streaming_stdout_final_only
langchain.callbacks.streamlit.mutable_expander
langchain.callbacks.streamlit.streamlit_callback_handler
langchain.callbacks.tracers.base
langchain.callbacks.tracers.evaluation
langchain.callbacks.tracers.langchain
langchain.callbacks.tracers.langchain_v1
langchain.callbacks.tracers.run_collector
langchain.callbacks.tracers.schemas
langchain.callbacks.tracers.stdout
langchain.callbacks.tracers.wandb
langchain.callbacks.utils
langchain.callbacks.wandb_callback
langchain.callbacks.whylabs_callback
langchain.chains.api.base
langchain.chains.api.openapi.chain
langchain.chains.api.openapi.requests_chain
langchain.chains.api.openapi.response_chain
langchain.chains.base
langchain.chains.combine_documents.base
langchain.chains.combine_documents.map_reduce
langchain.chains.combine_documents.map_rerank
langchain.chains.combine_documents.reduce
langchain.chains.combine_documents.refine
langchain.chains.combine_documents.stuff
langchain.chains.constitutional_ai.base
langchain.chains.constitutional_ai.models
langchain.chains.conversation.base
langchain.chains.conversational_retrieval.base
langchain.chains.elasticsearch_database.base
langchain.chains.example_generator
langchain.chains.flare.base
langchain.chains.flare.prompts
langchain.chains.graph_qa.arangodb
langchain.chains.graph_qa.base
langchain.chains.graph_qa.cypher
langchain.chains.graph_qa.hugegraph
langchain.chains.graph_qa.kuzu
langchain.chains.graph_qa.nebulagraph
langchain.chains.graph_qa.neptune_cypher
langchain.chains.graph_qa.sparql
langchain.chains.hyde.base
langchain.chains.llm
langchain.chains.llm_bash.base
langchain.chains.llm_bash.prompt
langchain.chains.llm_checker.base
langchain.chains.llm_math.base
langchain.chains.llm_requests
langchain.chains.llm_summarization_checker.base
langchain.chains.llm_symbolic_math.base
langchain.chains.loading
langchain.chains.mapreduce
langchain.chains.moderation
langchain.chains.natbot.base
langchain.chains.natbot.crawler
langchain.chains.openai_functions.base
langchain.chains.openai_functions.citation_fuzzy_match
langchain.chains.openai_functions.extraction
langchain.chains.openai_functions.openapi
langchain.chains.openai_functions.qa_with_structure
langchain.chains.openai_functions.tagging
langchain.chains.openai_functions.utils
langchain.chains.prompt_selector
langchain.chains.qa_generation.base
langchain.chains.qa_with_sources.base
langchain.chains.qa_with_sources.loading
langchain.chains.qa_with_sources.retrieval
langchain.chains.qa_with_sources.vector_db
langchain.chains.query_constructor.base
langchain.chains.query_constructor.ir
langchain.chains.query_constructor.parser
langchain.chains.query_constructor.schema
langchain.chains.retrieval_qa.base
langchain.chains.router.base
langchain.chains.router.embedding_router
langchain.chains.router.llm_router
langchain.chains.router.multi_prompt
langchain.chains.router.multi_retrieval_qa
langchain.chains.sequential
langchain.chains.sql_database.query
langchain.chains.transform
langchain.chat_models.anthropic
langchain.chat_models.anyscale
langchain.chat_models.azure_openai
langchain.chat_models.azureml_endpoint
langchain.chat_models.base
langchain.chat_models.fake
langchain.chat_models.google_palm
langchain.chat_models.human
langchain.chat_models.jinachat
langchain.chat_models.mlflow_ai_gateway
langchain.chat_models.openai
langchain.chat_models.promptlayer_openai
langchain.chat_models.vertexai
langchain.docstore.arbitrary_fn
langchain.docstore.base
langchain.docstore.in_memory
langchain.docstore.wikipedia
langchain.document_loaders.acreom
langchain.document_loaders.airbyte
langchain.document_loaders.airbyte_json
langchain.document_loaders.airtable
langchain.document_loaders.apify_dataset
langchain.document_loaders.arxiv
langchain.document_loaders.async_html
langchain.document_loaders.azlyrics
langchain.document_loaders.azure_blob_storage_container
langchain.document_loaders.azure_blob_storage_file
langchain.document_loaders.base
langchain.document_loaders.bibtex
langchain.document_loaders.bigquery
langchain.document_loaders.bilibili
langchain.document_loaders.blackboard
langchain.document_loaders.blob_loaders.file_system
langchain.document_loaders.blob_loaders.schema
langchain.document_loaders.blob_loaders.youtube_audio
langchain.document_loaders.blockchain
langchain.document_loaders.brave_search
langchain.document_loaders.browserless
langchain.document_loaders.chatgpt
langchain.document_loaders.college_confidential
langchain.document_loaders.concurrent
langchain.document_loaders.confluence
langchain.document_loaders.conllu
langchain.document_loaders.csv_loader
langchain.document_loaders.cube_semantic
langchain.document_loaders.datadog_logs
langchain.document_loaders.dataframe
langchain.document_loaders.diffbot
langchain.document_loaders.directory
langchain.document_loaders.discord
langchain.document_loaders.docugami
langchain.document_loaders.dropbox
langchain.document_loaders.duckdb_loader
langchain.document_loaders.email
langchain.document_loaders.embaas
langchain.document_loaders.epub
langchain.document_loaders.etherscan
langchain.document_loaders.evernote
langchain.document_loaders.excel
langchain.document_loaders.facebook_chat
langchain.document_loaders.fauna
langchain.document_loaders.figma
langchain.document_loaders.gcs_directory
langchain.document_loaders.gcs_file
langchain.document_loaders.generic
langchain.document_loaders.geodataframe
langchain.document_loaders.git
langchain.document_loaders.gitbook
langchain.document_loaders.github
langchain.document_loaders.googledrive
langchain.document_loaders.gutenberg
langchain.document_loaders.helpers
langchain.document_loaders.hn
langchain.document_loaders.html
langchain.document_loaders.html_bs
langchain.document_loaders.hugging_face_dataset
langchain.document_loaders.ifixit
langchain.document_loaders.image
langchain.document_loaders.image_captions
langchain.document_loaders.imsdb
langchain.document_loaders.iugu
langchain.document_loaders.joplin
langchain.document_loaders.json_loader
langchain.document_loaders.larksuite
langchain.document_loaders.markdown
langchain.document_loaders.mastodon
langchain.document_loaders.max_compute
langchain.document_loaders.mediawikidump
langchain.document_loaders.merge
langchain.document_loaders.mhtml
langchain.document_loaders.modern_treasury
langchain.document_loaders.news
langchain.document_loaders.notebook
langchain.document_loaders.notion
langchain.document_loaders.notiondb
langchain.document_loaders.nuclia
langchain.document_loaders.obs_directory
langchain.document_loaders.obs_file
langchain.document_loaders.obsidian
langchain.document_loaders.odt
langchain.document_loaders.onedrive
langchain.document_loaders.onedrive_file
langchain.document_loaders.open_city_data
langchain.document_loaders.org_mode
langchain.document_loaders.parsers.audio
langchain.document_loaders.parsers.generic
langchain.document_loaders.parsers.grobid
langchain.document_loaders.parsers.html.bs4
langchain.document_loaders.parsers.language.code_segmenter
langchain.document_loaders.parsers.language.javascript
langchain.document_loaders.parsers.language.language_parser
langchain.document_loaders.parsers.language.python
langchain.document_loaders.parsers.pdf
langchain.document_loaders.parsers.registry
langchain.document_loaders.parsers.txt
langchain.document_loaders.pdf
langchain.document_loaders.powerpoint
langchain.document_loaders.psychic
langchain.document_loaders.pubmed
langchain.document_loaders.pyspark_dataframe
langchain.document_loaders.python
langchain.document_loaders.readthedocs
langchain.document_loaders.recursive_url_loader
langchain.document_loaders.reddit
langchain.document_loaders.roam
langchain.document_loaders.rocksetdb
langchain.document_loaders.rss
langchain.document_loaders.rst
langchain.document_loaders.rtf
langchain.document_loaders.s3_directory
langchain.document_loaders.s3_file
langchain.document_loaders.sitemap
langchain.document_loaders.slack_directory
langchain.document_loaders.snowflake_loader
langchain.document_loaders.spreedly
langchain.document_loaders.srt
langchain.document_loaders.stripe
langchain.document_loaders.telegram
langchain.document_loaders.tencent_cos_directory
langchain.document_loaders.tencent_cos_file
langchain.document_loaders.tensorflow_datasets
langchain.document_loaders.text
langchain.document_loaders.tomarkdown
langchain.document_loaders.toml
langchain.document_loaders.trello
langchain.document_loaders.tsv
langchain.document_loaders.twitter
langchain.document_loaders.unstructured
langchain.document_loaders.url
langchain.document_loaders.url_playwright
langchain.document_loaders.url_selenium
langchain.document_loaders.weather
langchain.document_loaders.web_base
langchain.document_loaders.whatsapp_chat
langchain.document_loaders.wikipedia
langchain.document_loaders.word_document
langchain.document_loaders.xml
langchain.document_loaders.xorbits
langchain.document_loaders.youtube
langchain.document_transformers.doctran_text_extract
langchain.document_transformers.doctran_text_qa
langchain.document_transformers.doctran_text_translate
langchain.document_transformers.embeddings_redundant_filter
langchain.document_transformers.html2text
langchain.document_transformers.long_context_reorder
langchain.document_transformers.nuclia_text_transform
langchain.document_transformers.openai_functions
langchain.embeddings.aleph_alpha
langchain.embeddings.awa
langchain.embeddings.base
langchain.embeddings.bedrock
langchain.embeddings.clarifai
langchain.embeddings.cohere
langchain.embeddings.dashscope
langchain.embeddings.deepinfra
langchain.embeddings.edenai
langchain.embeddings.elasticsearch
langchain.embeddings.embaas
langchain.embeddings.fake
langchain.embeddings.google_palm
langchain.embeddings.gpt4all
langchain.embeddings.huggingface
langchain.embeddings.huggingface_hub
langchain.embeddings.jina
langchain.embeddings.llamacpp
langchain.embeddings.localai
langchain.embeddings.minimax
langchain.embeddings.mlflow_gateway
langchain.embeddings.modelscope_hub
langchain.embeddings.mosaicml
langchain.embeddings.nlpcloud
langchain.embeddings.octoai_embeddings
langchain.embeddings.openai
langchain.embeddings.sagemaker_endpoint
langchain.embeddings.self_hosted
langchain.embeddings.self_hosted_hugging_face
langchain.embeddings.spacy_embeddings
langchain.embeddings.tensorflow_hub
langchain.embeddings.vertexai
langchain.embeddings.xinference
langchain.evaluation.agents.trajectory_eval_chain
langchain.evaluation.comparison.eval_chain
langchain.evaluation.criteria.eval_chain
langchain.evaluation.embedding_distance.base
langchain.evaluation.loading
langchain.evaluation.qa.eval_chain
langchain.evaluation.qa.generate_chain
langchain.evaluation.schema
langchain.evaluation.string_distance.base
langchain.graphs.arangodb_graph
langchain.graphs.hugegraph
langchain.graphs.kuzu_graph
langchain.graphs.memgraph_graph
langchain.graphs.nebula_graph
langchain.graphs.neo4j_graph
langchain.graphs.neptune_graph
langchain.graphs.networkx_graph
langchain.graphs.rdf_graph
langchain.indexes.graph
langchain.indexes.vectorstore
langchain.llms.ai21
langchain.llms.aleph_alpha
langchain.llms.amazon_api_gateway
langchain.llms.anthropic
langchain.llms.anyscale
langchain.llms.aviary
langchain.llms.azureml_endpoint
langchain.llms.bananadev
langchain.llms.base
langchain.llms.baseten
langchain.llms.beam
langchain.llms.bedrock
langchain.llms.cerebriumai
langchain.llms.chatglm
langchain.llms.clarifai
langchain.llms.cohere
langchain.llms.ctransformers
langchain.llms.databricks
langchain.llms.deepinfra
langchain.llms.edenai
langchain.llms.fake
langchain.llms.fireworks
langchain.llms.forefrontai
langchain.llms.google_palm
langchain.llms.gooseai
langchain.llms.gpt4all
langchain.llms.huggingface_endpoint
langchain.llms.huggingface_hub
langchain.llms.huggingface_pipeline
langchain.llms.huggingface_text_gen_inference
langchain.llms.human
langchain.llms.koboldai
langchain.llms.llamacpp
langchain.llms.loading
langchain.llms.manifest
langchain.llms.minimax
langchain.llms.mlflow_ai_gateway
langchain.llms.modal
langchain.llms.mosaicml
langchain.llms.nlpcloud
langchain.llms.octoai_endpoint
langchain.llms.ollama
langchain.llms.openai
langchain.llms.openllm
langchain.llms.openlm
langchain.llms.petals
langchain.llms.pipelineai
langchain.llms.predibase
langchain.llms.predictionguard
langchain.llms.promptlayer_openai
langchain.llms.replicate
langchain.llms.rwkv
langchain.llms.sagemaker_endpoint
langchain.llms.self_hosted
langchain.llms.self_hosted_hugging_face
langchain.llms.stochasticai
langchain.llms.symblai_nebula
langchain.llms.textgen
langchain.llms.tongyi
langchain.llms.utils
langchain.llms.vertexai
langchain.llms.vllm
langchain.llms.writer
langchain.llms.xinference
langchain.load.dump
langchain.load.load
langchain.load.serializable
langchain.memory.buffer
langchain.memory.buffer_window
langchain.memory.chat_memory
langchain.memory.chat_message_histories.cassandra
langchain.memory.chat_message_histories.cosmos_db
langchain.memory.chat_message_histories.dynamodb
langchain.memory.chat_message_histories.file
langchain.memory.chat_message_histories.firestore
langchain.memory.chat_message_histories.in_memory
langchain.memory.chat_message_histories.momento
langchain.memory.chat_message_histories.mongodb
langchain.memory.chat_message_histories.postgres
langchain.memory.chat_message_histories.redis
langchain.memory.chat_message_histories.rocksetdb
langchain.memory.chat_message_histories.sql
langchain.memory.chat_message_histories.streamlit
langchain.memory.chat_message_histories.zep
langchain.memory.combined
langchain.memory.entity
langchain.memory.kg
langchain.memory.motorhead_memory
langchain.memory.readonly
langchain.memory.simple
langchain.memory.summary
langchain.memory.summary_buffer
langchain.memory.token_buffer
langchain.memory.utils
langchain.memory.vectorstore
langchain.memory.zep_memory
langchain.model_laboratory
langchain.output_parsers.boolean
langchain.output_parsers.combining
langchain.output_parsers.datetime
langchain.output_parsers.enum
langchain.output_parsers.fix
langchain.output_parsers.json
langchain.output_parsers.list
langchain.output_parsers.loading
langchain.output_parsers.openai_functions
langchain.output_parsers.pydantic
langchain.output_parsers.rail_parser
langchain.output_parsers.regex
langchain.output_parsers.regex_dict
langchain.output_parsers.retry
langchain.output_parsers.structured
langchain.prompts.base
langchain.prompts.chat
langchain.prompts.example_selector.base
langchain.prompts.example_selector.length_based
langchain.prompts.example_selector.ngram_overlap
langchain.prompts.example_selector.semantic_similarity
langchain.prompts.few_shot
langchain.prompts.few_shot_with_templates
langchain.prompts.loading
langchain.prompts.pipeline
langchain.prompts.prompt
langchain.retrievers.arxiv
langchain.retrievers.azure_cognitive_search
langchain.retrievers.bm25
langchain.retrievers.chaindesk
langchain.retrievers.chatgpt_plugin_retriever
langchain.retrievers.contextual_compression
langchain.retrievers.databerry
langchain.retrievers.docarray
langchain.retrievers.document_compressors.base
langchain.retrievers.document_compressors.chain_extract
langchain.retrievers.document_compressors.chain_filter
langchain.retrievers.document_compressors.cohere_rerank
langchain.retrievers.document_compressors.embeddings_filter
langchain.retrievers.elastic_search_bm25
langchain.retrievers.ensemble
langchain.retrievers.google_cloud_enterprise_search
langchain.retrievers.kendra
langchain.retrievers.knn
langchain.retrievers.llama_index
langchain.retrievers.merger_retriever
langchain.retrievers.metal
langchain.retrievers.milvus
langchain.retrievers.multi_query
langchain.retrievers.parent_document_retriever
langchain.retrievers.pinecone_hybrid_search
langchain.retrievers.pubmed
langchain.retrievers.re_phraser
langchain.retrievers.remote_retriever
langchain.retrievers.self_query.base
langchain.retrievers.self_query.chroma
langchain.retrievers.self_query.deeplake
langchain.retrievers.self_query.myscale
langchain.retrievers.self_query.pinecone
langchain.retrievers.self_query.qdrant
langchain.retrievers.self_query.weaviate
langchain.retrievers.svm
langchain.retrievers.tfidf
langchain.retrievers.time_weighted_retriever
langchain.retrievers.vespa_retriever
langchain.retrievers.weaviate_hybrid_search
langchain.retrievers.web_research
langchain.retrievers.wikipedia
langchain.retrievers.zep
langchain.retrievers.zilliz
langchain.schema.agent
langchain.schema.document
langchain.schema.exceptions
langchain.schema.language_model
langchain.schema.memory
langchain.schema.messages
langchain.schema.output
langchain.schema.output_parser
langchain.schema.prompt
langchain.schema.prompt_template
langchain.schema.retriever
langchain.schema.runnable
langchain.schema.storage
langchain.server
langchain.smith.evaluation.config
langchain.smith.evaluation.runner_utils
langchain.smith.evaluation.string_run_evaluator
langchain.storage.encoder_backed
langchain.storage.exceptions
langchain.storage.file_system
langchain.storage.in_memory
langchain.text_splitter
langchain.tools.amadeus.base
langchain.tools.amadeus.closest_airport
langchain.tools.amadeus.flight_search
langchain.tools.amadeus.utils
langchain.tools.arxiv.tool
langchain.tools.azure_cognitive_services.form_recognizer
langchain.tools.azure_cognitive_services.image_analysis
langchain.tools.azure_cognitive_services.speech2text
langchain.tools.azure_cognitive_services.text2speech
langchain.tools.azure_cognitive_services.utils
langchain.tools.base
langchain.tools.bing_search.tool
langchain.tools.brave_search.tool
langchain.tools.convert_to_openai
langchain.tools.dataforseo_api_search.tool
langchain.tools.ddg_search.tool
langchain.tools.file_management.copy
langchain.tools.file_management.delete
langchain.tools.file_management.file_search
langchain.tools.file_management.list_dir
langchain.tools.file_management.move
langchain.tools.file_management.read
langchain.tools.file_management.utils
langchain.tools.file_management.write
langchain.tools.github.tool
langchain.tools.gmail.base
langchain.tools.gmail.create_draft
langchain.tools.gmail.get_message
langchain.tools.gmail.get_thread
langchain.tools.gmail.search
langchain.tools.gmail.send_message
langchain.tools.gmail.utils
langchain.tools.golden_query.tool
langchain.tools.google_places.tool
langchain.tools.google_search.tool
langchain.tools.google_serper.tool
langchain.tools.graphql.tool
langchain.tools.human.tool
langchain.tools.ifttt
langchain.tools.interaction.tool
langchain.tools.jira.tool
langchain.tools.json.tool
langchain.tools.metaphor_search.tool
langchain.tools.multion.create_session
langchain.tools.multion.update_session
langchain.tools.nuclia.tool
langchain.tools.office365.base
langchain.tools.office365.create_draft_message
langchain.tools.office365.events_search
langchain.tools.office365.messages_search
langchain.tools.office365.send_event
langchain.tools.office365.send_message
langchain.tools.office365.utils
langchain.tools.openapi.utils.api_models
langchain.tools.openweathermap.tool
langchain.tools.playwright.base
langchain.tools.playwright.click
langchain.tools.playwright.current_page
langchain.tools.playwright.extract_hyperlinks
langchain.tools.playwright.extract_text
langchain.tools.playwright.get_elements
langchain.tools.playwright.navigate
langchain.tools.playwright.navigate_back
langchain.tools.playwright.utils
langchain.tools.plugin
langchain.tools.powerbi.tool
langchain.tools.pubmed.tool
langchain.tools.python.tool
langchain.tools.requests.tool
langchain.tools.scenexplain.tool
langchain.tools.searx_search.tool
langchain.tools.shell.tool
langchain.tools.sleep.tool
langchain.tools.spark_sql.tool
langchain.tools.sql_database.tool
langchain.tools.steamship_image_generation.tool
langchain.tools.steamship_image_generation.utils
langchain.tools.vectorstore.tool
langchain.tools.wikipedia.tool
langchain.tools.wolfram_alpha.tool
langchain.tools.youtube.search
langchain.tools.zapier.tool
langchain.utilities.arxiv
langchain.utilities.awslambda
langchain.utilities.bash
langchain.utilities.bibtex
langchain.utilities.bing_search
langchain.utilities.brave_search
langchain.utilities.dalle_image_generator
langchain.utilities.dataforseo_api_search
langchain.utilities.duckduckgo_search
langchain.utilities.github
langchain.utilities.golden_query
langchain.utilities.google_places_api
langchain.utilities.google_search
langchain.utilities.google_serper
langchain.utilities.graphql
langchain.utilities.jira
langchain.utilities.loading
langchain.utilities.max_compute
langchain.utilities.metaphor_search
langchain.utilities.openapi
langchain.utilities.openweathermap
langchain.utilities.portkey
langchain.utilities.powerbi
langchain.utilities.pubmed
langchain.utilities.python
langchain.utilities.redis
langchain.utilities.requests
langchain.utilities.scenexplain
langchain.utilities.searx_search
langchain.utilities.serpapi
langchain.utilities.spark_sql
langchain.utilities.sql_database
langchain.utilities.tensorflow_datasets
langchain.utilities.twilio
langchain.utilities.vertexai
langchain.utilities.wikipedia
langchain.utilities.wolfram_alpha
langchain.utilities.zapier
langchain.utils.env
langchain.utils.formatting
langchain.utils.input
langchain.utils.math
langchain.utils.strings
langchain.utils.utils
langchain.vectorstores.alibabacloud_opensearch
langchain.vectorstores.analyticdb
langchain.vectorstores.annoy
langchain.vectorstores.atlas
langchain.vectorstores.awadb
langchain.vectorstores.azuresearch
langchain.vectorstores.base
langchain.vectorstores.cassandra
langchain.vectorstores.chroma
langchain.vectorstores.clarifai
langchain.vectorstores.clickhouse
langchain.vectorstores.deeplake
langchain.vectorstores.docarray.base
langchain.vectorstores.docarray.hnsw
langchain.vectorstores.docarray.in_memory
langchain.vectorstores.elastic_vector_search
langchain.vectorstores.faiss
langchain.vectorstores.hologres
langchain.vectorstores.lancedb
langchain.vectorstores.marqo
langchain.vectorstores.matching_engine
langchain.vectorstores.meilisearch
langchain.vectorstores.milvus
langchain.vectorstores.mongodb_atlas
langchain.vectorstores.myscale
langchain.vectorstores.opensearch_vector_search
langchain.vectorstores.pgembedding
langchain.vectorstores.pgvector
langchain.vectorstores.pinecone
langchain.vectorstores.qdrant
langchain.vectorstores.redis
langchain.vectorstores.rocksetdb
langchain.vectorstores.scann
langchain.vectorstores.singlestoredb
langchain.vectorstores.sklearn
langchain.vectorstores.starrocks
langchain.vectorstores.supabase
langchain.vectorstores.tair
langchain.vectorstores.tigris
langchain.vectorstores.typesense
langchain.vectorstores.usearch
langchain.vectorstores.utils
langchain.vectorstores.vectara
langchain.vectorstores.weaviate
langchain.vectorstores.xata
langchain.vectorstores.zilliz
langchain_experimental.autonomous_agents.autogpt.agent
langchain_experimental.autonomous_agents.autogpt.memory
langchain_experimental.autonomous_agents.autogpt.output_parser
langchain_experimental.autonomous_agents.autogpt.prompt
langchain_experimental.autonomous_agents.autogpt.prompt_generator
langchain_experimental.autonomous_agents.baby_agi.baby_agi
langchain_experimental.autonomous_agents.baby_agi.task_creation
langchain_experimental.autonomous_agents.baby_agi.task_execution
langchain_experimental.autonomous_agents.baby_agi.task_prioritization
langchain_experimental.autonomous_agents.hugginggpt.hugginggpt
langchain_experimental.autonomous_agents.hugginggpt.repsonse_generator
langchain_experimental.autonomous_agents.hugginggpt.task_executor
langchain_experimental.autonomous_agents.hugginggpt.task_planner
langchain_experimental.cpal.constants
langchain_experimental.generative_agents.generative_agent
langchain_experimental.generative_agents.memory
langchain_experimental.llms.anthropic_functions
langchain_experimental.llms.jsonformer_decoder
langchain_experimental.llms.llamaapi
langchain_experimental.llms.rellm_decoder
langchain_experimental.pal_chain.base
langchain_experimental.plan_and_execute.agent_executor
langchain_experimental.plan_and_execute.executors.agent_executor
langchain_experimental.plan_and_execute.executors.base
langchain_experimental.plan_and_execute.planners.base
langchain_experimental.plan_and_execute.planners.chat_planner
langchain_experimental.plan_and_execute.schema
langchain_experimental.sql.base
langchain_experimental.tot.base
langchain_experimental.tot.checker
langchain_experimental.tot.controller
langchain_experimental.tot.memory
langchain_experimental.tot.prompts
langchain_experimental.tot.thought
langchain_experimental.tot.thought_generation
pydantic.main
Source code for langchain.server
"""Script to run langchain-server locally using docker-compose."""
import subprocess
from pathlib import Path
from langsmith.cli.main import get_docker_compose_command
[docs]def main() -> None:
"""Run the langchain server locally."""
p = Path(__file__).absolute().parent / "docker-compose.yaml"
docker_compose_command = get_docker_compose_command()
subprocess.run([*docker_compose_command, "-f", str(p), "pull"])
subprocess.run([*docker_compose_command, "-f", str(p), "up"])
if __name__ == "__main__":
main()
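The script delegates command selection to `get_docker_compose_command`, which is imported from `langsmith.cli.main` and not shown on this page. As a rough sketch of what such a helper has to decide (an assumption for illustration, not the actual langsmith implementation), it must choose between the modern `docker compose` plugin and the legacy standalone `docker-compose` binary:

```python
import shutil
from typing import Callable, List, Optional


def pick_compose_command(
    which: Callable[[str], Optional[str]] = shutil.which,
) -> Optional[List[str]]:
    """Choose a compose invocation: prefer the `docker compose` plugin,
    fall back to the standalone `docker-compose` binary.

    Hypothetical helper; assumes a modern docker CLI ships the compose plugin.
    """
    if which("docker"):
        return ["docker", "compose"]
    if which("docker-compose"):
        return ["docker-compose"]
    return None


# Injecting a fake `which` makes the selection logic easy to exercise:
print(pick_compose_command(lambda name: "/usr/bin/x" if name == "docker-compose" else None))
# → ['docker-compose']
```

Taking the lookup function as a parameter keeps the logic testable without touching the host's PATH.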
https://api.python.langchain.com/en/latest/_modules/langchain/server.html
Source code for langchain.text_splitter
"""**Text Splitters** are classes for splitting text.
**Class hierarchy:**
.. code-block::
BaseDocumentTransformer --> TextSplitter --> <name>TextSplitter # Example: CharacterTextSplitter
RecursiveCharacterTextSplitter --> <name>TextSplitter
Note: **MarkdownHeaderTextSplitter** does not derive from TextSplitter.
**Main helpers:**
.. code-block::
Document, Tokenizer, Language, LineType, HeaderType
""" # noqa: E501
from __future__ import annotations
import copy
import logging
import re
from abc import ABC, abstractmethod
from dataclasses import dataclass
from enum import Enum
from typing import (
    AbstractSet,
    Any,
    Callable,
    Collection,
    Dict,
    Iterable,
    List,
    Literal,
    Optional,
    Sequence,
    Tuple,
    Type,
    TypedDict,
    TypeVar,
    Union,
    cast,
)
from langchain.docstore.document import Document
from langchain.schema import BaseDocumentTransformer
logger = logging.getLogger(__name__)
TS = TypeVar("TS", bound="TextSplitter")
def _make_spacy_pipeline_for_splitting(pipeline: str) -> Any:  # avoid importing spacy
    try:
        import spacy
    except ImportError:
        raise ImportError(
            "Spacy is not installed, please install it with `pip install spacy`."
        )
    if pipeline == "sentencizer":
        from spacy.lang.en import English

        sentencizer = English()
        sentencizer.add_pipe("sentencizer")
    else:
        sentencizer = spacy.load(pipeline, exclude=["ner", "tagger"])
https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html
    return sentencizer


def _split_text_with_regex(
    text: str, separator: str, keep_separator: bool
) -> List[str]:
    # Now that we have the separator, split the text
    if separator:
        if keep_separator:
            # The parentheses in the pattern keep the delimiters in the result.
            _splits = re.split(f"({separator})", text)
            splits = [_splits[i] + _splits[i + 1] for i in range(1, len(_splits), 2)]
            if len(_splits) % 2 == 0:
                splits += _splits[-1:]
            splits = [_splits[0]] + splits
        else:
            splits = re.split(separator, text)
    else:
        splits = list(text)
    return [s for s in splits if s != ""]
class TextSplitter(BaseDocumentTransformer, ABC):
    """Interface for splitting text into chunks."""

    def __init__(
        self,
        chunk_size: int = 4000,
        chunk_overlap: int = 200,
        length_function: Callable[[str], int] = len,
        keep_separator: bool = False,
        add_start_index: bool = False,
    ) -> None:
        """Create a new TextSplitter.

        Args:
            chunk_size: Maximum size of chunks to return
            chunk_overlap: Overlap in characters between chunks
            length_function: Function that measures the length of given chunks
            keep_separator: Whether to keep the separator in the chunks
            add_start_index: If `True`, includes chunk's start index in metadata
        """
        if chunk_overlap > chunk_size:
            raise ValueError(
                f"Got a larger chunk overlap ({chunk_overlap}) than chunk size "
                f"({chunk_size}), should be smaller."
            )
        self._chunk_size = chunk_size
        self._chunk_overlap = chunk_overlap
        self._length_function = length_function
        self._keep_separator = keep_separator
        self._add_start_index = add_start_index

    @abstractmethod
    def split_text(self, text: str) -> List[str]:
        """Split text into multiple components."""

    def create_documents(
        self, texts: List[str], metadatas: Optional[List[dict]] = None
    ) -> List[Document]:
        """Create documents from a list of texts."""
        _metadatas = metadatas or [{}] * len(texts)
        documents = []
        for i, text in enumerate(texts):
            index = -1
            for chunk in self.split_text(text):
                metadata = copy.deepcopy(_metadatas[i])
                if self._add_start_index:
                    index = text.find(chunk, index + 1)
                    metadata["start_index"] = index
                new_doc = Document(page_content=chunk, metadata=metadata)
                documents.append(new_doc)
        return documents

    def split_documents(self, documents: Iterable[Document]) -> List[Document]:
        """Split documents."""
        texts, metadatas = [], []
        for doc in documents:
            texts.append(doc.page_content)
            metadatas.append(doc.metadata)
        return self.create_documents(texts, metadatas=metadatas)
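The `index = text.find(chunk, index + 1)` bookkeeping in `create_documents` is what keeps `start_index` correct when the same chunk text occurs more than once in a document: each search starts strictly after the previous match. A minimal sketch of just that loop (a standalone illustration, not the LangChain API):

```python
from typing import List


def chunk_start_indexes(text: str, chunks: List[str]) -> List[int]:
    """Mirror of create_documents' start-index loop: search for each chunk
    strictly after the previous match so repeated chunks get distinct offsets."""
    indexes = []
    index = -1
    for chunk in chunks:
        index = text.find(chunk, index + 1)
        indexes.append(index)
    return indexes


print(chunk_start_indexes("abcabc", ["abc", "abc"]))  # → [0, 3]
```

Without the `index + 1` advance, `find` would return 0 for both occurrences of `"abc"`.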
    def _join_docs(self, docs: List[str], separator: str) -> Optional[str]:
        text = separator.join(docs)
        text = text.strip()
        if text == "":
            return None
        else:
            return text

    def _merge_splits(self, splits: Iterable[str], separator: str) -> List[str]:
        # We now want to combine these smaller pieces into medium size
        # chunks to send to the LLM.
        separator_len = self._length_function(separator)

        docs = []
        current_doc: List[str] = []
        total = 0
        for d in splits:
            _len = self._length_function(d)
            if (
                total + _len + (separator_len if len(current_doc) > 0 else 0)
                > self._chunk_size
            ):
                if total > self._chunk_size:
                    logger.warning(
                        f"Created a chunk of size {total}, "
                        f"which is longer than the specified {self._chunk_size}"
                    )
                if len(current_doc) > 0:
                    doc = self._join_docs(current_doc, separator)
                    if doc is not None:
                        docs.append(doc)
                    # Keep on popping if:
                    # - we have a larger chunk than in the chunk overlap
                    # - or if we still have any chunks and the length is long
                    while total > self._chunk_overlap or (
                        total + _len + (separator_len if len(current_doc) > 0 else 0)
                        > self._chunk_size
                        and total > 0
                    ):
                        total -= self._length_function(current_doc[0]) + (
                            separator_len if len(current_doc) > 1 else 0
                        )
                        current_doc = current_doc[1:]
            current_doc.append(d)
            total += _len + (separator_len if len(current_doc) > 1 else 0)
        doc = self._join_docs(current_doc, separator)
        if doc is not None:
            docs.append(doc)
        return docs
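`_merge_splits` packs small splits greedily up to `chunk_size`, then pops from the front of the running window until it fits within `chunk_overlap`, so consecutive chunks share trailing content. A simplified standalone sketch (it drops the separator-length accounting and the oversize warning, and joins with a single space):

```python
from typing import List


def merge_splits(splits: List[str], chunk_size: int, chunk_overlap: int) -> List[str]:
    """Greedy packing sketch of _merge_splits (simplified for illustration)."""
    docs: List[str] = []
    current: List[str] = []
    total = 0
    for s in splits:
        if total + len(s) > chunk_size and current:
            # Emit the current window, then pop from the front until the
            # retained tail fits inside the overlap budget.
            docs.append(" ".join(current))
            while total > chunk_overlap and current:
                total -= len(current[0])
                current = current[1:]
        current.append(s)
        total += len(s)
    if current:
        docs.append(" ".join(current))
    return docs


print(merge_splits(["aaaa", "bbbb", "cccc"], chunk_size=8, chunk_overlap=4))
# → ['aaaa bbbb', 'bbbb cccc']
```

Note how `"bbbb"` appears in both output chunks: that retained tail is exactly the configured overlap.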
    @classmethod
    def from_huggingface_tokenizer(cls, tokenizer: Any, **kwargs: Any) -> TextSplitter:
        """Text splitter that uses HuggingFace tokenizer to count length."""
        try:
            from transformers import PreTrainedTokenizerBase

            if not isinstance(tokenizer, PreTrainedTokenizerBase):
                raise ValueError(
                    "Tokenizer received was not an instance of PreTrainedTokenizerBase"
                )

            def _huggingface_tokenizer_length(text: str) -> int:
                return len(tokenizer.encode(text))

        except ImportError:
            raise ValueError(
                "Could not import transformers python package. "
                "Please install it with `pip install transformers`."
            )
        return cls(length_function=_huggingface_tokenizer_length, **kwargs)

    @classmethod
    def from_tiktoken_encoder(
        cls: Type[TS],
        encoding_name: str = "gpt2",
        model_name: Optional[str] = None,
        allowed_special: Union[Literal["all"], AbstractSet[str]] = set(),
        disallowed_special: Union[Literal["all"], Collection[str]] = "all",
        **kwargs: Any,
    ) -> TS:
        """Text splitter that uses tiktoken encoder to count length."""
        try:
import tiktoken
except ImportError:
raise ImportError(
"Could not import tiktoken python package. "
"This is needed in order to calculate max_tokens_for_prompt. "
"Please install it with `pip install tiktoken`."
)
if model_name is not None:
enc = tiktoken.encoding_for_model(model_name)
else:
enc = tiktoken.get_encoding(encoding_name)
def _tiktoken_encoder(text: str) -> int:
return len(
enc.encode(
text,
allowed_special=allowed_special,
disallowed_special=disallowed_special,
)
)
if issubclass(cls, TokenTextSplitter):
extra_kwargs = {
"encoding_name": encoding_name,
"model_name": model_name,
"allowed_special": allowed_special,
"disallowed_special": disallowed_special,
}
kwargs = {**kwargs, **extra_kwargs}
return cls(length_function=_tiktoken_encoder, **kwargs)
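Both classmethods above follow the same pattern: wrap an encoder into a `length_function` and hand it to the constructor. A dependency-free stand-in (whitespace "tokens", purely illustrative — the real splitters count tiktoken or HuggingFace token ids):

```python
from typing import Callable, List

def make_token_length(encode: Callable[[str], List[str]]) -> Callable[[str], int]:
    # Mirror the pattern above: any encoder becomes a length_function
    # by counting the tokens it produces.
    def _length(text: str) -> int:
        return len(encode(text))
    return _length

# Whitespace "tokens" as a stand-in for a real tokenizer's encode().
token_length = make_token_length(str.split)
# token_length("one two three") == 3
```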
[docs] def transform_documents(
self, documents: Sequence[Document], **kwargs: Any
) -> Sequence[Document]:
"""Transform sequence of documents by splitting them."""
return self.split_documents(list(documents))
[docs] async def atransform_documents(
self, documents: Sequence[Document], **kwargs: Any
) -> Sequence[Document]:
"""Asynchronously transform a sequence of documents by splitting them."""
raise NotImplementedError
[docs]class CharacterTextSplitter(TextSplitter):
"""Splitting text that looks at characters."""
[docs] def __init__(
self, separator: str = "\n\n", is_separator_regex: bool = False, **kwargs: Any
) -> None:
"""Create a new TextSplitter."""
super().__init__(**kwargs)
self._separator = separator
self._is_separator_regex = is_separator_regex
[docs] def split_text(self, text: str) -> List[str]:
"""Split incoming text and return chunks."""
# First we naively split the large input into a bunch of smaller ones.
separator = (
self._separator if self._is_separator_regex else re.escape(self._separator)
)
splits = _split_text_with_regex(text, separator, self._keep_separator)
_separator = "" if self._keep_separator else self._separator
return self._merge_splits(splits, _separator)
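A minimal sketch of the keep-separator split that `_split_text_with_regex` performs (a hypothetical stand-in, not the library's exact helper): split on a capturing group, then glue each matched separator back onto the chunk that follows it.

```python
import re
from typing import List

def split_keep_separator(text: str, separator: str) -> List[str]:
    # Split on a capturing separator pattern; re.split with one group
    # returns [chunk, sep, chunk, sep, ...], so re-attach each matched
    # separator to the chunk that follows it.
    parts = re.split(f"({separator})", text)
    merged = [parts[i] + parts[i + 1] for i in range(1, len(parts) - 1, 2)]
    result = ([parts[0]] if parts[0] else []) + merged
    return [s for s in result if s]

pieces = split_keep_separator("one\n\ntwo\n\nthree", r"\n\n")
# pieces == ["one", "\n\ntwo", "\n\nthree"]
```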
[docs]class LineType(TypedDict):
"""Line type as typed dict."""
metadata: Dict[str, str]
content: str
[docs]class HeaderType(TypedDict):
"""Header type as typed dict."""
level: int
name: str
data: str
[docs]class MarkdownHeaderTextSplitter:
"""Splitting markdown files based on specified headers."""
[docs] def __init__(
self, headers_to_split_on: List[Tuple[str, str]], return_each_line: bool = False
):
"""Create a new MarkdownHeaderTextSplitter.
Args:
headers_to_split_on: Headers we want to track
return_each_line: Return each line w/ associated headers
"""
# Output line-by-line or aggregated into chunks w/ common headers
self.return_each_line = return_each_line
# Given the headers we want to split on,
# (e.g., "#, ##, etc") order by length
self.headers_to_split_on = sorted(
headers_to_split_on, key=lambda split: len(split[0]), reverse=True
)
[docs] def aggregate_lines_to_chunks(self, lines: List[LineType]) -> List[Document]:
"""Combine lines with common metadata into chunks
Args:
lines: Line of text / associated header metadata
"""
aggregated_chunks: List[LineType] = []
for line in lines:
if (
aggregated_chunks
and aggregated_chunks[-1]["metadata"] == line["metadata"]
):
# If the last line in the aggregated list
# has the same metadata as the current line,
# append the current content to the last line's content
aggregated_chunks[-1]["content"] += " \n" + line["content"]
else:
# Otherwise, append the current line to the aggregated list
aggregated_chunks.append(line)
return [
Document(page_content=chunk["content"], metadata=chunk["metadata"])
for chunk in aggregated_chunks
]
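The aggregation step can be exercised standalone; a toy illustration with plain dicts in place of `Document` (hypothetical data):

```python
from typing import List

def aggregate(lines: List[dict]) -> List[dict]:
    # Merge consecutive lines whose header metadata is identical,
    # joining their content with " \n" as the method above does.
    chunks: List[dict] = []
    for line in lines:
        if chunks and chunks[-1]["metadata"] == line["metadata"]:
            chunks[-1]["content"] += " \n" + line["content"]
        else:
            chunks.append(dict(line))
    return chunks

lines = [
    {"content": "intro", "metadata": {"h1": "Title"}},
    {"content": "more intro", "metadata": {"h1": "Title"}},
    {"content": "body", "metadata": {"h1": "Title", "h2": "Section"}},
]
merged = aggregate(lines)
# merged has 2 entries: the first two lines collapse into one chunk
```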
[docs] def split_text(self, text: str) -> List[Document]:
"""Split markdown file
Args:
text: Markdown file"""
# Split the input text by newline character ("\n").
lines = text.split("\n")
# Final output
lines_with_metadata: List[LineType] = []
# Content and metadata of the chunk currently being processed
current_content: List[str] = []
current_metadata: Dict[str, str] = {}
# Keep track of the nested header structure
# header_stack: List[Dict[str, Union[int, str]]] = []
header_stack: List[HeaderType] = []
initial_metadata: Dict[str, str] = {}
for line in lines:
stripped_line = line.strip()
# Check each line against each of the header types (e.g., #, ##)
for sep, name in self.headers_to_split_on:
# Check if line starts with a header that we intend to split on
if stripped_line.startswith(sep) and (
# Header with no text OR header is followed by space
# Both are valid conditions that sep is being used as a header
len(stripped_line) == len(sep)
or stripped_line[len(sep)] == " "
):
# Ensure we are tracking the header as metadata
if name is not None:
# Get the current header level
current_header_level = sep.count("#")
# Pop headers at the same or deeper level off the stack
while (
header_stack
and header_stack[-1]["level"] >= current_header_level
):
# We have encountered a new header
# at the same or higher level
popped_header = header_stack.pop()
# Clear the metadata for the
# popped header in initial_metadata
if popped_header["name"] in initial_metadata:
initial_metadata.pop(popped_header["name"])
# Push the current header to the stack
header: HeaderType = {
"level": current_header_level,
"name": name,
"data": stripped_line[len(sep) :].strip(),
}
header_stack.append(header)
# Update initial_metadata with the current header
initial_metadata[name] = header["data"]
# Add the previous line to the lines_with_metadata
# only if current_content is not empty
if current_content:
lines_with_metadata.append(
{
"content": "\n".join(current_content),
"metadata": current_metadata.copy(),
}
)
current_content.clear()
break
else:
if stripped_line:
current_content.append(stripped_line)
elif current_content:
lines_with_metadata.append(
{
"content": "\n".join(current_content),
"metadata": current_metadata.copy(),
}
)
current_content.clear()
current_metadata = initial_metadata.copy()
if current_content:
lines_with_metadata.append(
{"content": "\n".join(current_content), "metadata": current_metadata}
)
# lines_with_metadata has each line with associated header metadata
# aggregate these into chunks based on common metadata
if not self.return_each_line:
return self.aggregate_lines_to_chunks(lines_with_metadata)
else:
return [
Document(page_content=chunk["content"], metadata=chunk["metadata"])
for chunk in lines_with_metadata
]
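The header-stack bookkeeping above is the heart of the splitter: a new header pops every stacked header at the same or deeper level before being pushed, so each content line inherits exactly the headers still open above it. A compact standalone sketch (simplified, hypothetical names, `#`/`##` only):

```python
from typing import Dict, List, Tuple

HEADERS: List[Tuple[str, str]] = [("##", "h2"), ("#", "h1")]  # longest first

def header_metadata(lines: List[str]) -> List[dict]:
    # (level, name, data) entries for every header still "open".
    stack: List[Tuple[int, str, str]] = []
    out: List[dict] = []
    for line in lines:
        s = line.strip()
        matched = False
        for sep, name in HEADERS:
            if s.startswith(sep) and (len(s) == len(sep) or s[len(sep)] == " "):
                level = sep.count("#")
                # A new header closes all same-or-deeper headers.
                while stack and stack[-1][0] >= level:
                    stack.pop()
                stack.append((level, name, s[len(sep):].strip()))
                matched = True
                break
        if not matched and s:
            out.append({"content": s,
                        "metadata": {n: d for _, n, d in stack}})
    return out

rows = header_metadata(["# Title", "intro", "## Part", "body", "# Next", "tail"])
# "intro" carries {"h1": "Title"}, "body" adds "h2", "tail" sees only "Next"
```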
# should be in newer Python versions (3.10+)
# @dataclass(frozen=True, kw_only=True, slots=True)
[docs]@dataclass(frozen=True)
class Tokenizer:
chunk_overlap: int
tokens_per_chunk: int
decode: Callable[[list[int]], str]
encode: Callable[[str], List[int]]
[docs]def split_text_on_tokens(*, text: str, tokenizer: Tokenizer) -> List[str]:
"""Split incoming text and return chunks using tokenizer."""
splits: List[str] = []
input_ids = tokenizer.encode(text)
start_idx = 0
cur_idx = min(start_idx + tokenizer.tokens_per_chunk, len(input_ids))
chunk_ids = input_ids[start_idx:cur_idx]
while start_idx < len(input_ids):
splits.append(tokenizer.decode(chunk_ids))
start_idx += tokenizer.tokens_per_chunk - tokenizer.chunk_overlap
cur_idx = min(start_idx + tokenizer.tokens_per_chunk, len(input_ids))
chunk_ids = input_ids[start_idx:cur_idx]
return splits
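The sliding window above advances by `tokens_per_chunk - chunk_overlap` each step, so consecutive chunks overlap by `chunk_overlap` tokens. A self-contained toy run, treating each character as one token (a standalone re-implementation for illustration, not an import from this module):

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass(frozen=True)
class ToyTokenizer:
    chunk_overlap: int
    tokens_per_chunk: int
    decode: Callable[[List[int]], str]
    encode: Callable[[str], List[int]]

def split_on_tokens(text: str, tok: ToyTokenizer) -> List[str]:
    # Same windowing as split_text_on_tokens: fixed-size windows over
    # the token ids, stepping by tokens_per_chunk - chunk_overlap.
    splits: List[str] = []
    ids = tok.encode(text)
    start = 0
    cur = min(start + tok.tokens_per_chunk, len(ids))
    chunk = ids[start:cur]
    while start < len(ids):
        splits.append(tok.decode(chunk))
        start += tok.tokens_per_chunk - tok.chunk_overlap
        cur = min(start + tok.tokens_per_chunk, len(ids))
        chunk = ids[start:cur]
    return splits

# One "token" per character: encode to code points, decode back.
tok = ToyTokenizer(
    chunk_overlap=2,
    tokens_per_chunk=4,
    decode=lambda ids: "".join(chr(i) for i in ids),
    encode=lambda s: [ord(c) for c in s],
)
chunks = split_on_tokens("abcdefgh", tok)
# chunks == ["abcd", "cdef", "efgh", "gh"]
```

Note the final short chunk: the window simply truncates at the end of the id list.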
[docs]class TokenTextSplitter(TextSplitter):
"""Splitting text to tokens using model tokenizer."""
[docs] def __init__(
self,
encoding_name: str = "gpt2",
model_name: Optional[str] = None,
allowed_special: Union[Literal["all"], AbstractSet[str]] = set(),
disallowed_special: Union[Literal["all"], Collection[str]] = "all",
**kwargs: Any,
) -> None:
"""Create a new TextSplitter."""
super().__init__(**kwargs)
try:
import tiktoken
except ImportError:
raise ImportError(
"Could not import tiktoken python package. "
"This is needed for TokenTextSplitter. "
"Please install it with `pip install tiktoken`."
)
if model_name is not None:
enc = tiktoken.encoding_for_model(model_name)
else:
enc = tiktoken.get_encoding(encoding_name)
self._tokenizer = enc
self._allowed_special = allowed_special
self._disallowed_special = disallowed_special
[docs] def split_text(self, text: str) -> List[str]:
def _encode(_text: str) -> List[int]:
return self._tokenizer.encode(
_text,
allowed_special=self._allowed_special,
disallowed_special=self._disallowed_special,
)
tokenizer = Tokenizer(
chunk_overlap=self._chunk_overlap,
tokens_per_chunk=self._chunk_size,
decode=self._tokenizer.decode,
encode=_encode,
)
return split_text_on_tokens(text=text, tokenizer=tokenizer)
[docs]class SentenceTransformersTokenTextSplitter(TextSplitter):
"""Splitting text to tokens using sentence model tokenizer."""
[docs] def __init__(
self,
chunk_overlap: int = 50,
model_name: str = "sentence-transformers/all-mpnet-base-v2",
tokens_per_chunk: Optional[int] = None,
**kwargs: Any,
) -> None:
"""Create a new TextSplitter."""
super().__init__(**kwargs, chunk_overlap=chunk_overlap)
try:
from sentence_transformers import SentenceTransformer
except ImportError:
raise ImportError(
"Could not import sentence_transformers python package. "
"This is needed for SentenceTransformersTokenTextSplitter. "
"Please install it with `pip install sentence-transformers`."
)
self.model_name = model_name
self._model = SentenceTransformer(self.model_name)
self.tokenizer = self._model.tokenizer
self._initialize_chunk_configuration(tokens_per_chunk=tokens_per_chunk)
def _initialize_chunk_configuration(
self, *, tokens_per_chunk: Optional[int]
) -> None:
self.maximum_tokens_per_chunk = cast(int, self._model.max_seq_length)
if tokens_per_chunk is None:
self.tokens_per_chunk = self.maximum_tokens_per_chunk
else:
self.tokens_per_chunk = tokens_per_chunk
if self.tokens_per_chunk > self.maximum_tokens_per_chunk:
raise ValueError(
f"The token limit of the models '{self.model_name}'"
f" is: {self.maximum_tokens_per_chunk}."
f" Argument tokens_per_chunk={self.tokens_per_chunk}"
f" > maximum token limit."
)
[docs] def split_text(self, text: str) -> List[str]:
def encode_strip_start_and_stop_token_ids(text: str) -> List[int]:
return self._encode(text)[1:-1]
tokenizer = Tokenizer(
chunk_overlap=self._chunk_overlap,
tokens_per_chunk=self.tokens_per_chunk,
decode=self.tokenizer.decode,
encode=encode_strip_start_and_stop_token_ids,
)
return split_text_on_tokens(text=text, tokenizer=tokenizer)
[docs] def count_tokens(self, *, text: str) -> int:
return len(self._encode(text))
_max_length_equal_32_bit_integer = 2**32
def _encode(self, text: str) -> List[int]:
token_ids_with_start_and_end_token_ids = self.tokenizer.encode(
text,
max_length=self._max_length_equal_32_bit_integer,
truncation="do_not_truncate",
)
return token_ids_with_start_and_end_token_ids
[docs]class Language(str, Enum):
"""Enum of the programming languages."""
CPP = "cpp"
GO = "go"
JAVA = "java"
JS = "js"
PHP = "php"
PROTO = "proto"
PYTHON = "python"
RST = "rst"
RUBY = "ruby"
RUST = "rust"
SCALA = "scala"
SWIFT = "swift"
MARKDOWN = "markdown"
LATEX = "latex"
HTML = "html"
SOL = "sol"
[docs]class RecursiveCharacterTextSplitter(TextSplitter):
"""Splitting text by recursively looking at characters.
Recursively tries to split by different characters to find one
that works.
"""
[docs] def __init__(
self,
separators: Optional[List[str]] = None,
keep_separator: bool = True,
is_separator_regex: bool = False,
**kwargs: Any,
) -> None:
"""Create a new TextSplitter."""
super().__init__(keep_separator=keep_separator, **kwargs)
self._separators = separators or ["\n\n", "\n", " ", ""]
self._is_separator_regex = is_separator_regex
def _split_text(self, text: str, separators: List[str]) -> List[str]:
"""Split incoming text and return chunks."""
final_chunks = []
# Get appropriate separator to use
separator = separators[-1]
new_separators = []
for i, _s in enumerate(separators):
_separator = _s if self._is_separator_regex else re.escape(_s)
if _s == "":
separator = _s
break
if re.search(_separator, text):
separator = _s
new_separators = separators[i + 1 :]
break
_separator = separator if self._is_separator_regex else re.escape(separator)
splits = _split_text_with_regex(text, _separator, self._keep_separator)
# Now go merging things, recursively splitting longer texts.
_good_splits = []
_separator = "" if self._keep_separator else separator
for s in splits:
if self._length_function(s) < self._chunk_size:
_good_splits.append(s)
else:
if _good_splits:
merged_text = self._merge_splits(_good_splits, _separator)
final_chunks.extend(merged_text)
_good_splits = []
if not new_separators:
final_chunks.append(s)
else:
other_info = self._split_text(s, new_separators)
final_chunks.extend(other_info)
if _good_splits:
merged_text = self._merge_splits(_good_splits, _separator)
final_chunks.extend(merged_text)
return final_chunks
[docs] def split_text(self, text: str) -> List[str]:
return self._split_text(text, self._separators)
[docs] @classmethod
def from_language(
cls, language: Language, **kwargs: Any
) -> RecursiveCharacterTextSplitter:
separators = cls.get_separators_for_language(language)
return cls(separators=separators, is_separator_regex=True, **kwargs)
[docs] @staticmethod
def get_separators_for_language(language: Language) -> List[str]:
if language == Language.CPP:
return [
# Split along class definitions
"\nclass ",
# Split along function definitions
"\nvoid ",
"\nint ",
"\nfloat ",
"\ndouble ",
# Split along control flow statements
"\nif ",
"\nfor ",
"\nwhile ",
"\nswitch ",
"\ncase ",
# Split by the normal type of lines
"\n\n",
"\n",
" ",
"",
]
elif language == Language.GO:
return [
# Split along function definitions
"\nfunc ",
"\nvar ",
"\nconst ",
"\ntype ",
# Split along control flow statements
"\nif ",
"\nfor ",
"\nswitch ",
"\ncase ",
# Split by the normal type of lines
"\n\n",
"\n",
" ",
"",
]
elif language == Language.JAVA:
return [
# Split along class definitions
"\nclass ",
# Split along method definitions
"\npublic ",
"\nprotected ",
"\nprivate ",
"\nstatic ",
# Split along control flow statements
"\nif ",
"\nfor ",
"\nwhile ",
"\nswitch ",
"\ncase ",
# Split by the normal type of lines
"\n\n",
"\n",
" ",
"",
]
elif language == Language.JS:
return [
# Split along function definitions
"\nfunction ",
"\nconst ",
"\nlet ",
"\nvar ",
"\nclass ",
# Split along control flow statements
"\nif ",
"\nfor ",
"\nwhile ",
"\nswitch ",
"\ncase ",
"\ndefault ",
# Split by the normal type of lines
"\n\n",
"\n",
" ",
"",
]
elif language == Language.PHP:
return [
# Split along function definitions
"\nfunction ",
# Split along class definitions
"\nclass ",
# Split along control flow statements
"\nif ",
"\nforeach ",
"\nwhile ",
"\ndo ",
"\nswitch ",
"\ncase ",
# Split by the normal type of lines
"\n\n",
"\n",
" ",
"",
]
elif language == Language.PROTO:
return [
# Split along message definitions
"\nmessage ",
# Split along service definitions
"\nservice ",
# Split along enum definitions
"\nenum ",
# Split along option definitions
"\noption ",
# Split along import statements
"\nimport ",
# Split along syntax declarations
"\nsyntax ",
# Split by the normal type of lines
"\n\n",
"\n",
" ",
"",
]
elif language == Language.PYTHON:
return [
# First, try to split along class definitions
"\nclass ",
"\ndef ",
"\n\tdef ",
# Now split by the normal type of lines
"\n\n",
"\n",
" ",
"",
]
elif language == Language.RST:
return [
# Split along section titles
"\n=+\n",
"\n-+\n",
"\n\\*+\n",
# Split along directive markers
"\n\n.. *\n\n",
# Split by the normal type of lines
"\n\n",
"\n",
" ",
"",
]
elif language == Language.RUBY:
return [
# Split along method definitions
"\ndef ",
"\nclass ",
# Split along control flow statements
"\nif ",
"\nunless ",
"\nwhile ",
"\nfor ",
"\ndo ",
"\nbegin ",
"\nrescue ",
# Split by the normal type of lines
"\n\n",
"\n",
" ",
"",
]
elif language == Language.RUST:
return [
# Split along function definitions
"\nfn ",
"\nconst ",
"\nlet ",
# Split along control flow statements
"\nif ",
"\nwhile ",
"\nfor ",
"\nloop ",
"\nmatch ",
"\nconst ",
# Split by the normal type of lines
"\n\n",
"\n",
" ",
"",
]
elif language == Language.SCALA:
return [
# Split along class definitions
"\nclass ",
"\nobject ",
# Split along method definitions
"\ndef ",
"\nval ",
"\nvar ",
# Split along control flow statements
"\nif ",
"\nfor ",
"\nwhile ",
"\nmatch ",
"\ncase ",
# Split by the normal type of lines
"\n\n",
"\n",
" ",
"",
]
elif language == Language.SWIFT:
return [
# Split along function definitions
"\nfunc ",
# Split along class definitions
"\nclass ",
"\nstruct ",
"\nenum ",
# Split along control flow statements
"\nif ",
"\nfor ",
"\nwhile ",
"\ndo ",
"\nswitch ",
"\ncase ",
# Split by the normal type of lines
"\n\n",
"\n",
" ",
"",
]
elif language == Language.MARKDOWN:
return [
# First, try to split along Markdown headings (starting with level 2)
"\n#{1,6} ",
# Note the alternative syntax for headings (below) is not handled here
# Heading level 2
# ---------------
# End of code block
"```\n",
# Horizontal lines
"\n\\*\\*\\*+\n",
"\n---+\n",
"\n___+\n",
# Note: the patterns above match horizontal rules of three or
# more *, -, or _; other horizontal-rule spellings are not handled
"\n\n",
"\n",
" ",
"",
]
elif language == Language.LATEX:
return [
# First, try to split along Latex sections
"\n\\\\chapter{",
"\n\\\\section{",
"\n\\\\subsection{",
"\n\\\\subsubsection{",
# Now split by environments
"\n\\\\begin{enumerate}",
"\n\\\\begin{itemize}",
"\n\\\\begin{description}",
"\n\\\\begin{list}",
"\n\\\\begin{quote}",
"\n\\\\begin{quotation}",
"\n\\\\begin{verse}",
"\n\\\\begin{verbatim}",
# Now split by math environments
"\n\\\\begin{align}",
"$$",
"$",
# Now split by the normal type of lines
" ",
"",
]
elif language == Language.HTML:
return [
# First, try to split along HTML tags
"<body",
"<div",
"<p",
"<br",
"<li",
"<h1",
"<h2",
"<h3",
"<h4",
"<h5",
"<h6",
"<span",
"<table",
"<tr",
"<td",
"<th",
"<ul",
"<ol",
"<header",
"<footer",
"<nav",
# Head
"<head",
"<style",
"<script",
"<meta",
"<title",
"",
]
elif language == Language.SOL:
return [
# Split along compiler information definitions
"\npragma ",
"\nusing ",
# Split along contract definitions
"\ncontract ",
"\ninterface ",
"\nlibrary ",
# Split along method definitions
"\nconstructor ",
"\ntype ",
"\nfunction ",
"\nevent ",
"\nmodifier ",
"\nerror ",
"\nstruct ",
"\nenum ",
# Split along control flow statements
"\nif ",
"\nfor ",
"\nwhile ",
"\ndo while ",
"\nassembly ",
# Split by the normal type of lines
"\n\n",
"\n",
" ",
"",
]
else:
raise ValueError(
f"Language {language} is not supported! "
f"Please choose from {list(Language)}"
)
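Each per-language list is ordered from coarse to fine, and `_split_text` uses the first entry that actually occurs in the text. A quick standalone check with the Python separators (same first-match scan, hypothetical helper name):

```python
import re
from typing import List

PYTHON_SEPARATORS: List[str] = [
    "\nclass ", "\ndef ", "\n\tdef ", "\n\n", "\n", " ", ""
]

def first_matching_separator(text: str) -> str:
    # Mirrors the scan in _split_text: "" always matches and is the
    # final fallback (character-level splitting).
    for s in PYTHON_SEPARATORS:
        if s == "" or re.search(re.escape(s), text):
            return s
    return ""

code = "import os\n\nclass Foo:\n    pass\n\ndef bar():\n    pass\n"
sep = first_matching_separator(code)
# sep == "\nclass " — class definitions win over everything finer
```

A text with no class or function definitions falls through to plain newlines.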
[docs]class NLTKTextSplitter(TextSplitter):
"""Splitting text using NLTK package."""
[docs] def __init__(self, separator: str = "\n\n", **kwargs: Any) -> None:
"""Initialize the NLTK splitter."""
super().__init__(**kwargs)
try:
from nltk.tokenize import sent_tokenize
self._tokenizer = sent_tokenize