50fa38b2bdcb-6
database (str, optional) – Database name. Additional optional arguments provide further customization over the connection: pure_python (bool, optional) – Toggles the connector mode. If True, operates in pure Python mode. local_infile (bool, optional) – Allows local file uploads. charset (str, optional) – Specifies the character set for string values. ssl_key (str, optional) – Specifies the path of the file containing the SSL key. ssl_cert (str, optional) – Specifies the path of the file containing the SSL certificate. ssl_ca (str, optional) – Specifies the path of the file containing the SSL certificate authority. ssl_cipher (str, optional) – Sets the SSL cipher list. ssl_disabled (bool, optional) – Disables SSL usage. ssl_verify_cert (bool, optional) – Verifies the server’s certificate. Automatically enabled if ssl_ca is specified. ssl_verify_identity (bool, optional) – Verifies the server’s identity. conv (dict[int, Callable], optional) – A dictionary of data conversion functions. credential_type (str, optional) – Specifies the type of authentication to use: auth.PASSWORD, auth.JWT, or auth.BROWSER_SSO. autocommit (bool, optional) – Enables autocommits. results_type (str, optional) – Determines the structure of the query results: tuples, namedtuples, dicts. results_format (str, optional) – Deprecated. This option has been renamed to results_type. Examples Basic Usage: from langchain.embeddings import OpenAIEmbeddings from langchain.vectorstores import SingleStoreDB vectorstore = SingleStoreDB( OpenAIEmbeddings(), host="https://user:password@127.0.0.1:3306/database"
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.singlestoredb.SingleStoreDB.html
50fa38b2bdcb-7
) Advanced Usage: from langchain.embeddings import OpenAIEmbeddings from langchain.vectorstores import SingleStoreDB vectorstore = SingleStoreDB( OpenAIEmbeddings(), distance_strategy=DistanceStrategy.EUCLIDEAN_DISTANCE, host="127.0.0.1", port=3306, user="user", password="password", database="db", table_name="my_custom_table", pool_size=10, timeout=60, ) Using environment variables: import os from langchain.embeddings import OpenAIEmbeddings from langchain.vectorstores import SingleStoreDB os.environ['SINGLESTOREDB_URL'] = 'me:p455w0rd@s2-host.com/my_db' vectorstore = SingleStoreDB(OpenAIEmbeddings()) async aadd_documents(documents: List[Document], **kwargs: Any) → List[str]¶ Run more documents through the embeddings and add to the vectorstore. Parameters documents (List[Document]) – Documents to add to the vectorstore. Returns List of IDs of the added texts. Return type List[str] async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str]¶ Run more texts through the embeddings and add to the vectorstore. add_documents(documents: List[Document], **kwargs: Any) → List[str]¶ Run more documents through the embeddings and add to the vectorstore. Parameters documents (List[Document]) – Documents to add to the vectorstore. Returns List of IDs of the added texts. Return type List[str]
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.singlestoredb.SingleStoreDB.html
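A minimal sketch of add_documents against an existing store; the connection values below are placeholders in the spirit of the Advanced Usage example above, and the metadata fields are purely illustrative.

from langchain.docstore.document import Document
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import SingleStoreDB

# Placeholder connection details; point these at your own SingleStoreDB instance.
vectorstore = SingleStoreDB(
    OpenAIEmbeddings(),
    host="127.0.0.1",
    port=3306,
    user="user",
    password="password",
    database="db",
)

docs = [
    Document(page_content="SingleStoreDB supports vector search.", metadata={"topic": "database"}),
    Document(page_content="LangChain wraps many vector stores.", metadata={"topic": "framework"}),
]
# Embeds the documents and inserts them into the table behind this vectorstore.
vectorstore.add_documents(docs)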
50fa38b2bdcb-8
Returns List of IDs of the added texts. Return type List[str] add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, embeddings: Optional[List[List[float]]] = None, **kwargs: Any) → List[str][source]¶ Add more texts to the vectorstore. Parameters texts (Iterable[str]) – Iterable of strings/text to add to the vectorstore. metadatas (Optional[List[dict]], optional) – Optional list of metadatas. Defaults to None. embeddings (Optional[List[List[float]]], optional) – Optional pre-generated embeddings. Defaults to None. Returns empty list Return type List[str] async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶ Return VectorStore initialized from documents and embeddings. async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → VST¶ Return VectorStore initialized from texts and embeddings. async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶ Return docs selected using the maximal marginal relevance. async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶ Return docs selected using the maximal marginal relevance. as_retriever(**kwargs: Any) → SingleStoreDBRetriever[source]¶ Return VectorStoreRetriever initialized from this VectorStore. Parameters
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.singlestoredb.SingleStoreDB.html
50fa38b2bdcb-9
Return VectorStoreRetriever initialized from this VectorStore. Parameters search_type (Optional[str]) – Defines the type of search that the Retriever should perform. Can be “similarity” (default), “mmr”, or “similarity_score_threshold”. search_kwargs (Optional[Dict]) – Keyword arguments to pass to the search function. Can include things like: k: Amount of documents to return (Default: 4) score_threshold: Minimum relevance threshold for similarity_score_threshold fetch_k: Amount of documents to pass to MMR algorithm (Default: 20) lambda_mult: Diversity of results returned by MMR; 1 for minimum diversity and 0 for maximum. (Default: 0.5) filter: Filter by document metadata Returns Retriever class for VectorStore. Return type VectorStoreRetriever Examples: # Retrieve more documents with higher diversity # Useful if your dataset has many similar documents docsearch.as_retriever( search_type="mmr", search_kwargs={'k': 6, 'lambda_mult': 0.25} ) # Fetch more documents for the MMR algorithm to consider # But only return the top 5 docsearch.as_retriever( search_type="mmr", search_kwargs={'k': 5, 'fetch_k': 50} ) # Only retrieve documents that have a relevance score # Above a certain threshold docsearch.as_retriever( search_type="similarity_score_threshold", search_kwargs={'score_threshold': 0.8} ) # Only get the single most similar document from the dataset docsearch.as_retriever(search_kwargs={'k': 1}) # Use a filter to only retrieve documents from a specific paper docsearch.as_retriever(
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.singlestoredb.SingleStoreDB.html
50fa38b2bdcb-10
docsearch.as_retriever( search_kwargs={'filter': {'paper_title':'GPT-4 Technical Report'}} ) async asearch(query: str, search_type: str, **kwargs: Any) → List[Document]¶ Return docs most similar to query using specified search type. async asimilarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document]¶ Return docs most similar to query. async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶ Return docs most similar to embedding vector. async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶ Return docs most similar to query. delete(ids: Optional[List[str]] = None, **kwargs: Any) → Optional[bool]¶ Delete by vector ID or other criteria. Parameters ids – List of ids to delete. **kwargs – Other keyword arguments that subclasses might use. Returns True if deletion is successful, False otherwise, None if not implemented. Return type Optional[bool] classmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶ Return VectorStore initialized from documents and embeddings.
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.singlestoredb.SingleStoreDB.html
50fa38b2bdcb-11
Return VectorStore initialized from documents and embeddings. classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, distance_strategy: DistanceStrategy = DistanceStrategy.DOT_PRODUCT, table_name: str = 'embeddings', content_field: str = 'content', metadata_field: str = 'metadata', vector_field: str = 'vector', pool_size: int = 5, max_overflow: int = 10, timeout: float = 30, **kwargs: Any) → SingleStoreDB[source]¶ Create a SingleStoreDB vectorstore from raw documents. This is a user-friendly interface that: Embeds documents. Creates a new table for the embeddings in SingleStoreDB. Adds the documents to the newly created table. This is intended to be a quick way to get started; an example sketch follows below. max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶ Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Parameters query – Text to look up documents similar to. k – Number of Documents to return. Defaults to 4. fetch_k – Number of Documents to fetch to pass to MMR algorithm. lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. Returns List of Documents selected by maximal marginal relevance.
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.singlestoredb.SingleStoreDB.html
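A minimal from_texts sketch for the quick-start path described above; the connection details are placeholders, and the default table name is spelled out only for clarity.

from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import SingleStoreDB

texts = ["SingleStoreDB can store embeddings.", "from_texts creates the table for you."]
metadatas = [{"source": "note-1"}, {"source": "note-2"}]

# Embeds the texts, creates the table if needed, and inserts the rows in one call.
vectorstore = SingleStoreDB.from_texts(
    texts,
    OpenAIEmbeddings(),
    metadatas=metadatas,
    table_name="embeddings",   # default table name
    host="127.0.0.1",          # placeholder connection details
    port=3306,
    user="user",
    password="password",
    database="db",
)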
50fa38b2bdcb-12
Defaults to 0.5. Returns List of Documents selected by maximal marginal relevance. max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶ Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Parameters embedding – Embedding to look up documents similar to. k – Number of Documents to return. Defaults to 4. fetch_k – Number of Documents to fetch to pass to MMR algorithm. lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. Returns List of Documents selected by maximal marginal relevance. search(query: str, search_type: str, **kwargs: Any) → List[Document]¶ Return docs most similar to query using specified search type. similarity_search(query: str, k: int = 4, filter: Optional[dict] = None, **kwargs: Any) → List[Document][source]¶ Returns the most similar indexed documents to the query text. Uses cosine similarity. Parameters query (str) – The query text for which to find similar documents. k (int) – The number of documents to return. Default is 4. filter (dict) – A dictionary of metadata fields and values to filter by. Returns A list of documents that are most similar to the query text. Return type List[Document] similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.singlestoredb.SingleStoreDB.html
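A short sketch of similarity_search with a metadata filter, reusing the vectorstore from the from_texts sketch above; the "source" metadata field is illustrative, not part of the API.

# Filter matches rows whose metadata contains {"source": "note-1"}.
docs = vectorstore.similarity_search(
    "How are embeddings stored?",
    k=4,
    filter={"source": "note-1"},
)
for doc in docs:
    print(doc.page_content, doc.metadata)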
50fa38b2bdcb-13
Return docs most similar to embedding vector. Parameters embedding – Embedding to look up documents similar to. k – Number of Documents to return. Defaults to 4. Returns List of Documents most similar to the query vector. similarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶ Return docs and relevance scores in the range [0, 1]. 0 is dissimilar, 1 is most similar. Parameters query – input text k – Number of Documents to return. Defaults to 4. **kwargs – kwargs to be passed to similarity search. Should include: score_threshold: Optional, a floating point value between 0 and 1 to filter the resulting set of retrieved docs Returns List of Tuples of (doc, similarity_score) similarity_search_with_score(query: str, k: int = 4, filter: Optional[dict] = None) → List[Tuple[Document, float]][source]¶ Return docs most similar to query. Uses cosine similarity. Parameters query – Text to look up documents similar to. k – Number of Documents to return. Defaults to 4. filter – A dictionary of metadata fields and values to filter by. Defaults to None. Returns List of Documents most similar to the query and score for each Examples using SingleStoreDB¶ SingleStoreDB
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.singlestoredb.SingleStoreDB.html
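A sketch of similarity_search_with_score on the same store; scores come back alongside the documents, so the loop below simply prints both.

results = vectorstore.similarity_search_with_score("How are embeddings stored?", k=2)
for doc, score in results:
    print(f"{score:.4f}", doc.page_content)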
bfeaaf090136-0
langchain.vectorstores.myscale.MyScaleSettings¶ class langchain.vectorstores.myscale.MyScaleSettings[source]¶ Bases: BaseSettings MyScale Client Configuration Attributes: myscale_host (str): URL to connect to the MyScale backend. Defaults to ‘localhost’. myscale_port (int): URL port to connect with HTTP. Defaults to 8443. username (str): Username to log in. Defaults to None. password (str): Password to log in. Defaults to None. index_type (str): Index type string. index_param (dict): Index build parameters. database (str): Database name to find the table. Defaults to ‘default’. table (str): Table name to operate on. Defaults to ‘vector_table’. metric (str): Metric to compute distance; supported values are (‘l2’, ‘cosine’, ‘ip’). Defaults to ‘cosine’. column_map (Dict): Column type map to project column names onto langchain semantics. Must have keys: text, id, vector; must be the same size as the number of columns. For example: {'id': 'text_id', 'vector': 'text_embedding', 'text': 'text_plain', 'metadata': 'metadata_dictionary_in_json'} Defaults to the identity map. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param column_map: Dict[str, str] = {'id': 'id', 'metadata': 'metadata', 'text': 'text', 'vector': 'vector'}¶ param database: str = 'default'¶ param host: str = 'localhost'¶ param index_param: Optional[Dict[str, str]] = None¶
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.myscale.MyScaleSettings.html
bfeaaf090136-1
param index_param: Optional[Dict[str, str]] = None¶ param index_type: str = 'IVFFLAT'¶ param metric: str = 'cosine'¶ param password: Optional[str] = None¶ param port: int = 8443¶ param table: str = 'langchain'¶ param username: Optional[str] = None¶ classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.myscale.MyScaleSettings.html
bfeaaf090136-2
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. classmethod from_orm(obj: Any) → Model¶ json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ Examples using MyScaleSettings¶ MyScale
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.myscale.MyScaleSettings.html
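A sketch of customizing MyScaleSettings and handing it to the MyScale vectorstore; the host, credentials, and column names are placeholders, and the MyScale import is assumed to sit next to MyScaleSettings in langchain.vectorstores.

from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import MyScale, MyScaleSettings

# Placeholder connection settings; column_map remaps the table's columns onto
# the text/id/vector/metadata names LangChain expects.
config = MyScaleSettings(
    host="msc-example.us-east-1.aws.myscale.com",
    port=8443,
    username="user",
    password="password",
    database="default",
    table="langchain",
    metric="cosine",
    column_map={
        "id": "doc_id",
        "text": "doc_text",
        "vector": "doc_embedding",
        "metadata": "doc_metadata",
    },
)
vectorstore = MyScale(OpenAIEmbeddings(), config)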
7f8a61dc6611-0
langchain.vectorstores.pinecone.Pinecone¶ class langchain.vectorstores.pinecone.Pinecone(index: Any, embedding_function: Callable, text_key: str, namespace: Optional[str] = None, distance_strategy: Optional[DistanceStrategy] = DistanceStrategy.COSINE)[source]¶ Wrapper around Pinecone vector database. To use, you should have the pinecone-client python package installed. Example from langchain.vectorstores import Pinecone from langchain.embeddings.openai import OpenAIEmbeddings import pinecone # The environment should be the one specified next to the API key # in your Pinecone console pinecone.init(api_key="***", environment="...") index = pinecone.Index("langchain-demo") embeddings = OpenAIEmbeddings() vectorstore = Pinecone(index, embeddings.embed_query, "text") Initialize with Pinecone client. Attributes embeddings Access the query embedding object if available. Methods __init__(index, embedding_function, text_key) Initialize with Pinecone client. aadd_documents(documents, **kwargs) Run more documents through the embeddings and add to the vectorstore. aadd_texts(texts[, metadatas]) Run more texts through the embeddings and add to the vectorstore. add_documents(documents, **kwargs) Run more documents through the embeddings and add to the vectorstore. add_texts(texts[, metadatas, ids, ...]) Run more texts through the embeddings and add to the vectorstore. afrom_documents(documents, embedding, **kwargs) Return VectorStore initialized from documents and embeddings. afrom_texts(texts, embedding[, metadatas]) Return VectorStore initialized from texts and embeddings. amax_marginal_relevance_search(query[, k, ...])
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pinecone.Pinecone.html
7f8a61dc6611-1
amax_marginal_relevance_search(query[, k, ...]) Return docs selected using the maximal marginal relevance. amax_marginal_relevance_search_by_vector(...) Return docs selected using the maximal marginal relevance. as_retriever(**kwargs) Return VectorStoreRetriever initialized from this VectorStore. asearch(query, search_type, **kwargs) Return docs most similar to query using specified search type. asimilarity_search(query[, k]) Return docs most similar to query. asimilarity_search_by_vector(embedding[, k]) Return docs most similar to embedding vector. asimilarity_search_with_relevance_scores(query) Return docs most similar to query. delete([ids, delete_all, namespace, filter]) Delete by vector IDs or filter. from_documents(documents, embedding, **kwargs) Return VectorStore initialized from documents and embeddings. from_existing_index(index_name, embedding[, ...]) Load pinecone vectorstore from index name. from_texts(texts, embedding[, metadatas, ...]) Construct Pinecone wrapper from raw documents. max_marginal_relevance_search(query[, k, ...]) Return docs selected using the maximal marginal relevance. max_marginal_relevance_search_by_vector(...) Return docs selected using the maximal marginal relevance. search(query, search_type, **kwargs) Return docs most similar to query using specified search type. similarity_search(query[, k, filter, namespace]) Return pinecone documents most similar to query. similarity_search_by_vector(embedding[, k]) Return docs most similar to embedding vector. similarity_search_with_relevance_scores(query) Return docs and relevance scores in the range [0, 1].
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pinecone.Pinecone.html
7f8a61dc6611-2
Return docs and relevance scores in the range [0, 1]. similarity_search_with_score(query[, k, ...]) Return pinecone documents most similar to query, along with scores. __init__(index: Any, embedding_function: Callable, text_key: str, namespace: Optional[str] = None, distance_strategy: Optional[DistanceStrategy] = DistanceStrategy.COSINE)[source]¶ Initialize with Pinecone client. async aadd_documents(documents: List[Document], **kwargs: Any) → List[str]¶ Run more documents through the embeddings and add to the vectorstore. Parameters documents (List[Document]) – Documents to add to the vectorstore. Returns List of IDs of the added texts. Return type List[str] async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str]¶ Run more texts through the embeddings and add to the vectorstore. add_documents(documents: List[Document], **kwargs: Any) → List[str]¶ Run more documents through the embeddings and add to the vectorstore. Parameters documents (List[Document]) – Documents to add to the vectorstore. Returns List of IDs of the added texts. Return type List[str] add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, namespace: Optional[str] = None, batch_size: int = 32, **kwargs: Any) → List[str][source]¶ Run more texts through the embeddings and add to the vectorstore. Parameters texts – Iterable of strings to add to the vectorstore. metadatas – Optional list of metadatas associated with the texts.
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pinecone.Pinecone.html
7f8a61dc6611-3
metadatas – Optional list of metadatas associated with the texts. ids – Optional list of ids to associate with the texts. namespace – Optional pinecone namespace to add the texts to. Returns List of ids from adding the texts into the vectorstore. async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶ Return VectorStore initialized from documents and embeddings. async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → VST¶ Return VectorStore initialized from texts and embeddings. async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶ Return docs selected using the maximal marginal relevance. async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶ Return docs selected using the maximal marginal relevance. as_retriever(**kwargs: Any) → VectorStoreRetriever¶ Return VectorStoreRetriever initialized from this VectorStore. Parameters search_type (Optional[str]) – Defines the type of search that the Retriever should perform. Can be “similarity” (default), “mmr”, or “similarity_score_threshold”. search_kwargs (Optional[Dict]) – Keyword arguments to pass to the search function. Can include things like: k: Amount of documents to return (Default: 4) score_threshold: Minimum relevance threshold for similarity_score_threshold
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pinecone.Pinecone.html
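A sketch of add_texts with explicit ids and a namespace, assuming an index created as in the class-level example above; the API key, environment, index name, and ids are placeholders.

import pinecone
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Pinecone

pinecone.init(api_key="***", environment="...")   # placeholders, as in the example above
index = pinecone.Index("langchain-demo")
vectorstore = Pinecone(index, OpenAIEmbeddings().embed_query, "text")

# Upserts are sent in batches of 32 by default; ids and namespace are optional.
ids = vectorstore.add_texts(
    ["first text", "second text"],
    metadatas=[{"source": "a"}, {"source": "b"}],
    ids=["doc-1", "doc-2"],
    namespace="my-namespace",
    batch_size=32,
)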
7f8a61dc6611-4
score_threshold: Minimum relevance threshold for similarity_score_threshold fetch_k: Amount of documents to pass to MMR algorithm (Default: 20) lambda_mult: Diversity of results returned by MMR; 1 for minimum diversity and 0 for maximum. (Default: 0.5) filter: Filter by document metadata Returns Retriever class for VectorStore. Return type VectorStoreRetriever Examples: # Retrieve more documents with higher diversity # Useful if your dataset has many similar documents docsearch.as_retriever( search_type="mmr", search_kwargs={'k': 6, 'lambda_mult': 0.25} ) # Fetch more documents for the MMR algorithm to consider # But only return the top 5 docsearch.as_retriever( search_type="mmr", search_kwargs={'k': 5, 'fetch_k': 50} ) # Only retrieve documents that have a relevance score # Above a certain threshold docsearch.as_retriever( search_type="similarity_score_threshold", search_kwargs={'score_threshold': 0.8} ) # Only get the single most similar document from the dataset docsearch.as_retriever(search_kwargs={'k': 1}) # Use a filter to only retrieve documents from a specific paper docsearch.as_retriever( search_kwargs={'filter': {'paper_title':'GPT-4 Technical Report'}} ) async asearch(query: str, search_type: str, **kwargs: Any) → List[Document]¶ Return docs most similar to query using specified search type. async asimilarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document]¶ Return docs most similar to query.
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pinecone.Pinecone.html
7f8a61dc6611-5
Return docs most similar to query. async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶ Return docs most similar to embedding vector. async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶ Return docs most similar to query. delete(ids: Optional[List[str]] = None, delete_all: Optional[bool] = None, namespace: Optional[str] = None, filter: Optional[dict] = None, **kwargs: Any) → None[source]¶ Delete by vector IDs or filter. :param ids: List of ids to delete. :param filter: Dictionary of conditions to filter vectors to delete. classmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶ Return VectorStore initialized from documents and embeddings. classmethod from_existing_index(index_name: str, embedding: Embeddings, text_key: str = 'text', namespace: Optional[str] = None) → Pinecone[source]¶ Load pinecone vectorstore from index name. classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, batch_size: int = 32, text_key: str = 'text', index_name: Optional[str] = None, namespace: Optional[str] = None, upsert_kwargs: Optional[dict] = None, **kwargs: Any) → Pinecone[source]¶ Construct Pinecone wrapper from raw documents. This is a user friendly interface that: Embeds documents. Adds the documents to a provided Pinecone index This is intended to be a quick way to get started. Example from langchain import Pinecone
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pinecone.Pinecone.html
7f8a61dc6611-6
Example from langchain import Pinecone from langchain.embeddings import OpenAIEmbeddings import pinecone # The environment should be the one specified next to the API key # in your Pinecone console pinecone.init(api_key="***", environment="...") embeddings = OpenAIEmbeddings() pinecone = Pinecone.from_texts( texts, embeddings, index_name="langchain-demo" ) max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, filter: Optional[dict] = None, namespace: Optional[str] = None, **kwargs: Any) → List[Document][source]¶ Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Parameters query – Text to look up documents similar to. k – Number of Documents to return. Defaults to 4. fetch_k – Number of Documents to fetch to pass to MMR algorithm. lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. Returns List of Documents selected by maximal marginal relevance. max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, filter: Optional[dict] = None, namespace: Optional[str] = None, **kwargs: Any) → List[Document][source]¶ Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Parameters
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pinecone.Pinecone.html
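A sketch of loading an existing index with from_existing_index and then deleting vectors; the index name, ids, and namespace are placeholders, and delete_all/namespace mirror the parameters listed above.

import pinecone
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Pinecone

pinecone.init(api_key="***", environment="...")   # placeholders
vectorstore = Pinecone.from_existing_index("langchain-demo", OpenAIEmbeddings())

# Remove specific vectors by id, or wipe an entire namespace.
vectorstore.delete(ids=["doc-1", "doc-2"])
vectorstore.delete(delete_all=True, namespace="scratch")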
7f8a61dc6611-7
Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Parameters embedding – Embedding to look up documents similar to. k – Number of Documents to return. Defaults to 4. fetch_k – Number of Documents to fetch to pass to MMR algorithm. lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. Returns List of Documents selected by maximal marginal relevance. search(query: str, search_type: str, **kwargs: Any) → List[Document]¶ Return docs most similar to query using specified search type. similarity_search(query: str, k: int = 4, filter: Optional[dict] = None, namespace: Optional[str] = None, **kwargs: Any) → List[Document][source]¶ Return pinecone documents most similar to query. Parameters query – Text to look up documents similar to. k – Number of Documents to return. Defaults to 4. filter – Dictionary of argument(s) to filter on metadata namespace – Namespace to search in. Default will search in ‘’ namespace. Returns List of Documents most similar to the query and score for each similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶ Return docs most similar to embedding vector. Parameters embedding – Embedding to look up documents similar to. k – Number of Documents to return. Defaults to 4. Returns List of Documents most similar to the query vector. similarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pinecone.Pinecone.html
7f8a61dc6611-8
Return docs and relevance scores in the range [0, 1]. 0 is dissimilar, 1 is most similar. Parameters query – input text k – Number of Documents to return. Defaults to 4. **kwargs – kwargs to be passed to similarity search. Should include: score_threshold: Optional, a floating point value between 0 and 1 to filter the resulting set of retrieved docs Returns List of Tuples of (doc, similarity_score) similarity_search_with_score(query: str, k: int = 4, filter: Optional[dict] = None, namespace: Optional[str] = None) → List[Tuple[Document, float]][source]¶ Return pinecone documents most similar to query, along with scores. Parameters query – Text to look up documents similar to. k – Number of Documents to return. Defaults to 4. filter – Dictionary of argument(s) to filter on metadata namespace – Namespace to search in. Default will search in ‘’ namespace. Returns List of Documents most similar to the query and score for each Examples using Pinecone¶ Pinecone Self-querying with Pinecone
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pinecone.Pinecone.html
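A sketch of similarity_search and similarity_search_with_score with a metadata filter and namespace, continuing from the from_existing_index sketch above; the "year" field and the filter syntax shown are illustrative.

docs = vectorstore.similarity_search(
    "What changed in the 2023 report?",
    k=4,
    filter={"year": {"$eq": 2023}},   # hypothetical metadata field
    namespace="my-namespace",
)

scored = vectorstore.similarity_search_with_score("What changed in the 2023 report?", k=4)
for doc, score in scored:
    print(f"{score:.3f}", doc.metadata)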
09757a8c2b51-0
langchain.vectorstores.starrocks.debug_output¶ langchain.vectorstores.starrocks.debug_output(s: Any) → None[source]¶ Print a debug message if DEBUG is True. Parameters s – The message to print.
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.starrocks.debug_output.html
6f7121690a88-0
langchain.vectorstores.clickhouse.Clickhouse¶ class langchain.vectorstores.clickhouse.Clickhouse(embedding: Embeddings, config: Optional[ClickhouseSettings] = None, **kwargs: Any)[source]¶ Wrapper around ClickHouse vector database. You need the clickhouse-connect python package and a valid account to connect to ClickHouse. ClickHouse can not only search with simple vector indexes; it also supports complex queries with multiple conditions, constraints, and even sub-queries. For more information, please visit the [ClickHouse official site](https://clickhouse.com/clickhouse) ClickHouse Wrapper to LangChain embedding_function (Embeddings): config (ClickHouseSettings): Configuration to ClickHouse Client Other keyword arguments will be passed into [clickhouse-connect](https://docs.clickhouse.com/) Attributes embeddings Access the query embedding object if available. metadata_column Methods __init__(embedding[, config]) ClickHouse Wrapper to LangChain aadd_documents(documents, **kwargs) Run more documents through the embeddings and add to the vectorstore. aadd_texts(texts[, metadatas]) Run more texts through the embeddings and add to the vectorstore. add_documents(documents, **kwargs) Run more documents through the embeddings and add to the vectorstore. add_texts(texts[, metadatas, batch_size, ids]) Insert more texts through the embeddings and add to the VectorStore. afrom_documents(documents, embedding, **kwargs) Return VectorStore initialized from documents and embeddings. afrom_texts(texts, embedding[, metadatas]) Return VectorStore initialized from texts and embeddings. amax_marginal_relevance_search(query[, k, ...]) Return docs selected using the maximal marginal relevance. amax_marginal_relevance_search_by_vector(...)
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.clickhouse.Clickhouse.html
6f7121690a88-1
amax_marginal_relevance_search_by_vector(...) Return docs selected using the maximal marginal relevance. as_retriever(**kwargs) Return VectorStoreRetriever initialized from this VectorStore. asearch(query, search_type, **kwargs) Return docs most similar to query using specified search type. asimilarity_search(query[, k]) Return docs most similar to query. asimilarity_search_by_vector(embedding[, k]) Return docs most similar to embedding vector. asimilarity_search_with_relevance_scores(query) Return docs most similar to query. delete([ids]) Delete by vector ID or other criteria. drop() Helper function: Drop data escape_str(value) from_documents(documents, embedding, **kwargs) Return VectorStore initialized from documents and embeddings. from_texts(texts, embedding[, metadatas, ...]) Create ClickHouse wrapper with existing texts max_marginal_relevance_search(query[, k, ...]) Return docs selected using the maximal marginal relevance. max_marginal_relevance_search_by_vector(...) Return docs selected using the maximal marginal relevance. search(query, search_type, **kwargs) Return docs most similar to query using specified search type. similarity_search(query[, k, where_str]) Perform a similarity search with ClickHouse similarity_search_by_vector(embedding[, k, ...]) Perform a similarity search with ClickHouse by vectors similarity_search_with_relevance_scores(query) Perform a similarity search with ClickHouse similarity_search_with_score(*args, **kwargs) Run similarity search with distance. __init__(embedding: Embeddings, config: Optional[ClickhouseSettings] = None, **kwargs: Any) → None[source]¶ ClickHouse Wrapper to LangChain
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.clickhouse.Clickhouse.html
6f7121690a88-2
ClickHouse Wrapper to LangChain embedding_function (Embeddings): config (ClickHouseSettings): Configuration to ClickHouse Client Other keyword arguments will be passed into [clickhouse-connect](https://docs.clickhouse.com/) async aadd_documents(documents: List[Document], **kwargs: Any) → List[str]¶ Run more documents through the embeddings and add to the vectorstore. Parameters documents (List[Document]) – Documents to add to the vectorstore. Returns List of IDs of the added texts. Return type List[str] async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str]¶ Run more texts through the embeddings and add to the vectorstore. add_documents(documents: List[Document], **kwargs: Any) → List[str]¶ Run more documents through the embeddings and add to the vectorstore. Parameters documents (List[Document]) – Documents to add to the vectorstore. Returns List of IDs of the added texts. Return type List[str] add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, batch_size: int = 32, ids: Optional[Iterable[str]] = None, **kwargs: Any) → List[str][source]¶ Insert more texts through the embeddings and add to the VectorStore. Parameters texts – Iterable of strings to add to the VectorStore. ids – Optional list of ids to associate with the texts. batch_size – Batch size of insertion metadata – Optional column data to be inserted Returns List of ids from adding the texts into the VectorStore. async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶ Return VectorStore initialized from documents and embeddings.
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.clickhouse.Clickhouse.html
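A sketch of constructing the wrapper with explicit ClickhouseSettings and adding texts; the host, port, and table name are placeholders for a locally running ClickHouse reachable through clickhouse-connect.

from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores.clickhouse import Clickhouse, ClickhouseSettings

# Placeholder connection settings for clickhouse-connect.
settings = ClickhouseSettings(host="localhost", port=8123, table="langchain_docs")
vectorstore = Clickhouse(OpenAIEmbeddings(), config=settings)

# Texts are embedded and inserted in batches of 32 by default.
vectorstore.add_texts(
    ["ClickHouse supports vector search.", "LangChain wraps it as a VectorStore."],
    metadatas=[{"doc_id": 1}, {"doc_id": 2}],
    batch_size=32,
)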
6f7121690a88-3
Return VectorStore initialized from documents and embeddings. async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → VST¶ Return VectorStore initialized from texts and embeddings. async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶ Return docs selected using the maximal marginal relevance. async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶ Return docs selected using the maximal marginal relevance. as_retriever(**kwargs: Any) → VectorStoreRetriever¶ Return VectorStoreRetriever initialized from this VectorStore. Parameters search_type (Optional[str]) – Defines the type of search that the Retriever should perform. Can be “similarity” (default), “mmr”, or “similarity_score_threshold”. search_kwargs (Optional[Dict]) – Keyword arguments to pass to the search function. Can include things like: k: Amount of documents to return (Default: 4) score_threshold: Minimum relevance threshold for similarity_score_threshold fetch_k: Amount of documents to pass to MMR algorithm (Default: 20) lambda_mult: Diversity of results returned by MMR; 1 for minimum diversity and 0 for maximum. (Default: 0.5) filter: Filter by document metadata Returns Retriever class for VectorStore. Return type VectorStoreRetriever Examples: # Retrieve more documents with higher diversity
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.clickhouse.Clickhouse.html
6f7121690a88-4
Return type VectorStoreRetriever Examples: # Retrieve more documents with higher diversity # Useful if your dataset has many similar documents docsearch.as_retriever( search_type="mmr", search_kwargs={'k': 6, 'lambda_mult': 0.25} ) # Fetch more documents for the MMR algorithm to consider # But only return the top 5 docsearch.as_retriever( search_type="mmr", search_kwargs={'k': 5, 'fetch_k': 50} ) # Only retrieve documents that have a relevance score # Above a certain threshold docsearch.as_retriever( search_type="similarity_score_threshold", search_kwargs={'score_threshold': 0.8} ) # Only get the single most similar document from the dataset docsearch.as_retriever(search_kwargs={'k': 1}) # Use a filter to only retrieve documents from a specific paper docsearch.as_retriever( search_kwargs={'filter': {'paper_title':'GPT-4 Technical Report'}} ) async asearch(query: str, search_type: str, **kwargs: Any) → List[Document]¶ Return docs most similar to query using specified search type. async asimilarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document]¶ Return docs most similar to query. async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶ Return docs most similar to embedding vector. async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶ Return docs most similar to query.
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.clickhouse.Clickhouse.html
6f7121690a88-5
Return docs most similar to query. delete(ids: Optional[List[str]] = None, **kwargs: Any) → Optional[bool]¶ Delete by vector ID or other criteria. Parameters ids – List of ids to delete. **kwargs – Other keyword arguments that subclasses might use. Returns True if deletion is successful, False otherwise, None if not implemented. Return type Optional[bool] drop() → None[source]¶ Helper function: Drop data escape_str(value: str) → str[source]¶ classmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶ Return VectorStore initialized from documents and embeddings. classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[Dict[Any, Any]]] = None, config: Optional[ClickhouseSettings] = None, text_ids: Optional[Iterable[str]] = None, batch_size: int = 32, **kwargs: Any) → Clickhouse[source]¶ Create ClickHouse wrapper with existing texts. Parameters embedding_function (Embeddings) – Function to extract text embedding texts (Iterable[str]) – List or tuple of strings to be added config (ClickHouseSettings, Optional) – ClickHouse configuration text_ids (Optional[Iterable], optional) – IDs for the texts. Defaults to None. batch_size (int, optional) – Batch size when transmitting data to ClickHouse. Defaults to 32. metadata (List[dict], optional) – Metadata for the texts. Defaults to None. Other keyword arguments will be passed into [clickhouse-connect](https://clickhouse.com/docs/en/integrations/python#clickhouse-connect-driver-api) Returns ClickHouse Index
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.clickhouse.Clickhouse.html
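A from_texts sketch matching the parameters listed above; config and batch_size are optional, and the connection values remain placeholders.

from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores.clickhouse import Clickhouse, ClickhouseSettings

texts = ["ClickHouse can filter with SQL.", "Vectors and metadata live in one table."]

# Embeds the texts and creates/populates the backing table in one call.
vectorstore = Clickhouse.from_texts(
    texts,
    OpenAIEmbeddings(),
    metadatas=[{"doc_id": 1}, {"doc_id": 2}],
    config=ClickhouseSettings(host="localhost", port=8123),  # placeholders
    batch_size=32,
)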
6f7121690a88-6
Returns ClickHouse Index max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶ Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Parameters query – Text to look up documents similar to. k – Number of Documents to return. Defaults to 4. fetch_k – Number of Documents to fetch to pass to MMR algorithm. lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. Returns List of Documents selected by maximal marginal relevance. max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶ Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Parameters embedding – Embedding to look up documents similar to. k – Number of Documents to return. Defaults to 4. fetch_k – Number of Documents to fetch to pass to MMR algorithm. lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. Returns List of Documents selected by maximal marginal relevance. search(query: str, search_type: str, **kwargs: Any) → List[Document]¶ Return docs most similar to query using specified search type.
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.clickhouse.Clickhouse.html
6f7121690a88-7
Return docs most similar to query using specified search type. similarity_search(query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any) → List[Document][source]¶ Perform a similarity search with ClickHouse Parameters query (str) – query string k (int, optional) – Top K neighbors to retrieve. Defaults to 4. where_str (Optional[str], optional) – where condition string. Defaults to None. NOTE – Please do not let end users fill this, and always be aware of SQL injection. When dealing with metadatas, remember to use {self.metadata_column}.attribute instead of attribute alone. The default name for it is metadata. Returns List of Documents Return type List[Document] similarity_search_by_vector(embedding: List[float], k: int = 4, where_str: Optional[str] = None, **kwargs: Any) → List[Document][source]¶ Perform a similarity search with ClickHouse by vectors Parameters query (str) – query string k (int, optional) – Top K neighbors to retrieve. Defaults to 4. where_str (Optional[str], optional) – where condition string. Defaults to None. NOTE – Please do not let end users fill this, and always be aware of SQL injection. When dealing with metadatas, remember to use {self.metadata_column}.attribute instead of attribute alone. The default name for it is metadata. Returns List of (Document, similarity) Return type List[Document] similarity_search_with_relevance_scores(query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any) → List[Tuple[Document, float]][source]¶ Perform a similarity search with ClickHouse Parameters
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.clickhouse.Clickhouse.html
6f7121690a88-8
Perform a similarity search with ClickHouse Parameters query (str) – query string k (int, optional) – Top K neighbors to retrieve. Defaults to 4. where_str (Optional[str], optional) – where condition string. Defaults to None. NOTE – Please do not let end users fill this, and always be aware of SQL injection. When dealing with metadatas, remember to use {self.metadata_column}.attribute instead of attribute alone. The default name for it is metadata. Returns List of documents Return type List[Document] similarity_search_with_score(*args: Any, **kwargs: Any) → List[Tuple[Document, float]]¶ Run similarity search with distance. Examples using Clickhouse¶ ClickHouse Vector Search
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.clickhouse.Clickhouse.html
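A sketch of similarity_search with a where_str condition, continuing from the ClickHouse sketches above; the metadata column name follows the default mentioned in the docstring, and the condition string must never be built from untrusted input.

# where_str is raw SQL appended to the query; never interpolate end-user input into it.
docs = vectorstore.similarity_search(
    "How do I filter results?",
    k=4,
    where_str="metadata.doc_id = 1",   # assumes the default metadata column name
)
for doc in docs:
    print(doc.page_content)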
352c93eddeb0-0
langchain.vectorstores.qdrant.Qdrant¶ class langchain.vectorstores.qdrant.Qdrant(client: Any, collection_name: str, embeddings: Optional[Embeddings] = None, content_payload_key: str = 'page_content', metadata_payload_key: str = 'metadata', distance_strategy: str = 'COSINE', vector_name: Optional[str] = None, embedding_function: Optional[Callable] = None)[source]¶ Wrapper around Qdrant vector database. To use, you should have the qdrant-client package installed. Example from qdrant_client import QdrantClient from langchain import Qdrant client = QdrantClient() collection_name = "MyCollection" qdrant = Qdrant(client, collection_name, embedding_function) Initialize with necessary components. Attributes CONTENT_KEY METADATA_KEY VECTOR_NAME embeddings Access the query embedding object if available. Methods __init__(client, collection_name[, ...]) Initialize with necessary components. aadd_documents(documents, **kwargs) Run more documents through the embeddings and add to the vectorstore. aadd_texts(texts[, metadatas, ids, batch_size]) Run more texts through the embeddings and add to the vectorstore. add_documents(documents, **kwargs) Run more documents through the embeddings and add to the vectorstore. add_texts(texts[, metadatas, ids, batch_size]) Run more texts through the embeddings and add to the vectorstore. afrom_documents(documents, embedding, **kwargs) Return VectorStore initialized from documents and embeddings. afrom_texts(texts, embedding[, metadatas, ...]) Construct Qdrant wrapper from a list of texts.
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.qdrant.Qdrant.html
352c93eddeb0-1
Construct Qdrant wrapper from a list of texts. amax_marginal_relevance_search(query[, k, ...]) Return docs selected using the maximal marginal relevance. amax_marginal_relevance_search_by_vector(...) Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Parameters query – Text to look up documents similar to. k – Number of Documents to return. Defaults to 4. fetch_k – Number of Documents to fetch to pass to MMR algorithm. Defaults to 20. lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. amax_marginal_relevance_search_with_score_by_vector(...) Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Parameters query – Text to look up documents similar to. k – Number of Documents to return. Defaults to 4. fetch_k – Number of Documents to fetch to pass to MMR algorithm. Defaults to 20. lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. as_retriever(**kwargs) Return VectorStoreRetriever initialized from this VectorStore. asearch(query, search_type, **kwargs) Return docs most similar to query using specified search type. asimilarity_search(query[, k, filter]) Return docs most similar to query. asimilarity_search_by_vector(embedding[, k, ...]) Return docs most similar to embedding vector.
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.qdrant.Qdrant.html
352c93eddeb0-2
Return docs most similar to embedding vector. asimilarity_search_with_relevance_scores(query) Return docs most similar to query. asimilarity_search_with_score(query[, k, ...]) Return docs most similar to query. asimilarity_search_with_score_by_vector(...) Return docs most similar to embedding vector. delete([ids]) Delete by vector ID or other criteria. from_documents(documents, embedding, **kwargs) Return VectorStore initialized from documents and embeddings. from_texts(texts, embedding[, metadatas, ...]) Construct Qdrant wrapper from a list of texts. max_marginal_relevance_search(query[, k, ...]) Return docs selected using the maximal marginal relevance. max_marginal_relevance_search_by_vector(...) Return docs selected using the maximal marginal relevance. max_marginal_relevance_search_with_score_by_vector(...) Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Parameters query – Text to look up documents similar to. k – Number of Documents to return. Defaults to 4. fetch_k – Number of Documents to fetch to pass to MMR algorithm. Defaults to 20. lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. search(query, search_type, **kwargs) Return docs most similar to query using specified search type. similarity_search(query[, k, filter, ...]) Return docs most similar to query. similarity_search_by_vector(embedding[, k, ...]) Return docs most similar to embedding vector. similarity_search_with_relevance_scores(query)
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.qdrant.Qdrant.html
352c93eddeb0-3
Return docs most similar to embedding vector. similarity_search_with_relevance_scores(query) Return docs and relevance scores in the range [0, 1]. similarity_search_with_score(query[, k, ...]) Return docs most similar to query. similarity_search_with_score_by_vector(embedding) Return docs most similar to embedding vector. __init__(client: Any, collection_name: str, embeddings: Optional[Embeddings] = None, content_payload_key: str = 'page_content', metadata_payload_key: str = 'metadata', distance_strategy: str = 'COSINE', vector_name: Optional[str] = None, embedding_function: Optional[Callable] = None)[source]¶ Initialize with necessary components. async aadd_documents(documents: List[Document], **kwargs: Any) → List[str]¶ Run more documents through the embeddings and add to the vectorstore. Parameters documents (List[Document]) – Documents to add to the vectorstore. Returns List of IDs of the added texts. Return type List[str] async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[Sequence[str]] = None, batch_size: int = 64, **kwargs: Any) → List[str][source]¶ Run more texts through the embeddings and add to the vectorstore. Parameters texts – Iterable of strings to add to the vectorstore. metadatas – Optional list of metadatas associated with the texts. ids – Optional list of ids to associate with the texts. Ids have to be uuid-like strings. batch_size – How many vectors to upload per request. Default: 64 Returns List of ids from adding the texts into the vectorstore.
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.qdrant.Qdrant.html
352c93eddeb0-4
Default: 64 Returns List of ids from adding the texts into the vectorstore. add_documents(documents: List[Document], **kwargs: Any) → List[str]¶ Run more documents through the embeddings and add to the vectorstore. Parameters documents (List[Document]) – Documents to add to the vectorstore. Returns List of IDs of the added texts. Return type List[str] add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[Sequence[str]] = None, batch_size: int = 64, **kwargs: Any) → List[str][source]¶ Run more texts through the embeddings and add to the vectorstore. Parameters texts – Iterable of strings to add to the vectorstore. metadatas – Optional list of metadatas associated with the texts. ids – Optional list of ids to associate with the texts. Ids have to be uuid-like strings. batch_size – How many vectors to upload per request. Default: 64 Returns List of ids from adding the texts into the vectorstore. async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶ Return VectorStore initialized from documents and embeddings.
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.qdrant.Qdrant.html
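A sketch of add_texts with explicit uuid-like ids, bootstrapping an in-memory collection via from_texts first; the collection name and metadata are illustrative.

import uuid

from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Qdrant

embeddings = OpenAIEmbeddings()
# Start from an in-memory collection, then append more texts in batches of 64 (the default).
qdrant = Qdrant.from_texts(["seed document"], embeddings, location=":memory:", collection_name="demo")

new_texts = ["second document", "third document"]
ids = [str(uuid.uuid4()) for _ in new_texts]   # ids have to be uuid-like strings
qdrant.add_texts(new_texts, metadatas=[{"n": 2}, {"n": 3}], ids=ids, batch_size=64)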
352c93eddeb0-5
Return VectorStore initialized from documents and embeddings. async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[Sequence[str]] = None, location: Optional[str] = None, url: Optional[str] = None, port: Optional[int] = 6333, grpc_port: int = 6334, prefer_grpc: bool = False, https: Optional[bool] = None, api_key: Optional[str] = None, prefix: Optional[str] = None, timeout: Optional[float] = None, host: Optional[str] = None, path: Optional[str] = None, collection_name: Optional[str] = None, distance_func: str = 'Cosine', content_payload_key: str = 'page_content', metadata_payload_key: str = 'metadata', vector_name: Optional[str] = None, batch_size: int = 64, shard_number: Optional[int] = None, replication_factor: Optional[int] = None, write_consistency_factor: Optional[int] = None, on_disk_payload: Optional[bool] = None, hnsw_config: Optional[common_types.HnswConfigDiff] = None, optimizers_config: Optional[common_types.OptimizersConfigDiff] = None, wal_config: Optional[common_types.WalConfigDiff] = None, quantization_config: Optional[common_types.QuantizationConfig] = None, init_from: Optional[common_types.InitFrom] = None, on_disk: Optional[bool] = None, force_recreate: bool = False, **kwargs: Any) → Qdrant[source]¶ Construct Qdrant wrapper from a list of texts. Parameters texts – A list of texts to be indexed in Qdrant. embedding – A subclass of Embeddings, responsible for text vectorization.
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.qdrant.Qdrant.html
352c93eddeb0-6
embedding – A subclass of Embeddings, responsible for text vectorization. metadatas – An optional list of metadata. If provided, it has to be of the same length as the list of texts. ids – Optional list of ids to associate with the texts. Ids have to be uuid-like strings. location – If :memory: - use in-memory Qdrant instance. If str - use it as a url parameter. If None - fallback to relying on host and port parameters. url – either host or str of “Optional[scheme], host, Optional[port], Optional[prefix]”. Default: None port – Port of the REST API interface. Default: 6333 grpc_port – Port of the gRPC interface. Default: 6334 prefer_grpc – If true - use gRPC interface whenever possible in custom methods. Default: False https – If true - use HTTPS(SSL) protocol. Default: None api_key – API key for authentication in Qdrant Cloud. Default: None prefix – If not None - add prefix to the REST URL path. Example: service/v1 will result in http://localhost:6333/service/v1/{qdrant-endpoint} for REST API. Default: None timeout – Timeout for REST and gRPC API requests. Default: 5.0 seconds for REST and unlimited for gRPC host – Host name of Qdrant service. If url and host are None, set to ‘localhost’. Default: None path – Path in which the vectors will be stored while using local mode. Default: None collection_name – Name of the Qdrant collection to be used. If not provided, it will be created randomly. Default: None distance_func – Distance function. One of: “Cosine” / “Euclid” / “Dot”.
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.qdrant.Qdrant.html
352c93eddeb0-7
Default: “Cosine” content_payload_key – A payload key used to store the content of the document. Default: “page_content” metadata_payload_key – A payload key used to store the metadata of the document. Default: “metadata” vector_name – Name of the vector to be used internally in Qdrant. Default: None batch_size – How many vectors to upload per request. Default: 64 shard_number – Number of shards in collection. Default is 1, minimum is 1. replication_factor – Replication factor for collection. Default is 1, minimum is 1. Defines how many copies of each shard will be created. Has effect only in distributed mode. write_consistency_factor – Write consistency factor for collection. Default is 1, minimum is 1. Defines how many replicas should apply the operation for us to consider it successful. Increasing this number will make the collection more resilient to inconsistencies, but will also make it fail if not enough replicas are available. Does not have any performance impact. Has effect only in distributed mode. on_disk_payload – If true - point's payload will not be stored in memory. It will be read from the disk every time it is requested. This setting saves RAM by (slightly) increasing the response time. Note: those payload values that are involved in filtering and are indexed - remain in RAM. hnsw_config – Params for HNSW index optimizers_config – Params for optimizer wal_config – Params for Write-Ahead-Log quantization_config – Params for quantization, if None - quantization will be disabled init_from – Use data stored in another collection to initialize this collection force_recreate – Force recreating the collection **kwargs – Additional arguments passed directly into REST client initialization This is a user-friendly interface that:
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.qdrant.Qdrant.html
352c93eddeb0-8
This is a user-friendly interface that: 1. Creates embeddings, one for each text 2. Initializes the Qdrant database as an in-memory docstore by default (and overridable to a remote docstore) 3. Adds the text embeddings to the Qdrant database This is intended to be a quick way to get started. Example from langchain import Qdrant from langchain.embeddings import OpenAIEmbeddings embeddings = OpenAIEmbeddings() qdrant = await Qdrant.afrom_texts(texts, embeddings, "localhost") async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document][source]¶ Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Parameters query – Text to look up documents similar to. k – Number of Documents to return. Defaults to 4. fetch_k – Number of Documents to fetch to pass to MMR algorithm. Defaults to 20. lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. Returns List of Documents selected by maximal marginal relevance. async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document][source]¶ Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.qdrant.Qdrant.html
352c93eddeb0-9
Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. :param embedding: Embedding vector to look up documents similar to. :param k: Number of Documents to return. Defaults to 4. :param fetch_k: Number of Documents to fetch to pass to MMR algorithm. Defaults to 20. Parameters lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. Returns List of Documents selected by maximal marginal relevance. async amax_marginal_relevance_search_with_score_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Tuple[Document, float]][source]¶ Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. :param embedding: Embedding vector to look up documents similar to. :param k: Number of Documents to return. Defaults to 4. :param fetch_k: Number of Documents to fetch to pass to MMR algorithm. Defaults to 20. Parameters lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. Returns List of Documents selected by maximal marginal relevance and distance for each. as_retriever(**kwargs: Any) → VectorStoreRetriever¶ Return VectorStoreRetriever initialized from this VectorStore. Parameters search_type (Optional[str]) – Defines the type of search that the Retriever should perform.
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.qdrant.Qdrant.html
352c93eddeb0-10
the Retriever should perform. Can be “similarity” (default), “mmr”, or “similarity_score_threshold”. search_kwargs (Optional[Dict]) – Keyword arguments to pass to the search function. Can include things like: k: Amount of documents to return (Default: 4) score_threshold: Minimum relevance threshold for similarity_score_threshold fetch_k: Amount of documents to pass to MMR algorithm (Default: 20) lambda_mult: Diversity of results returned by MMR; 1 for minimum diversity and 0 for maximum. (Default: 0.5) filter: Filter by document metadata Returns Retriever class for VectorStore. Return type VectorStoreRetriever Examples: # Retrieve more documents with higher diversity # Useful if your dataset has many similar documents docsearch.as_retriever( search_type="mmr", search_kwargs={'k': 6, 'lambda_mult': 0.25} ) # Fetch more documents for the MMR algorithm to consider # But only return the top 5 docsearch.as_retriever( search_type="mmr", search_kwargs={'k': 5, 'fetch_k': 50} ) # Only retrieve documents that have a relevance score # Above a certain threshold docsearch.as_retriever( search_type="similarity_score_threshold", search_kwargs={'score_threshold': 0.8} ) # Only get the single most similar document from the dataset docsearch.as_retriever(search_kwargs={'k': 1}) # Use a filter to only retrieve documents from a specific paper docsearch.as_retriever( search_kwargs={'filter': {'paper_title':'GPT-4 Technical Report'}} )
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.qdrant.Qdrant.html
352c93eddeb0-11
search_kwargs={'filter': {'paper_title':'GPT-4 Technical Report'}} ) async asearch(query: str, search_type: str, **kwargs: Any) → List[Document]¶ Return docs most similar to query using specified search type. async asimilarity_search(query: str, k: int = 4, filter: Optional[MetadataFilter] = None, **kwargs: Any) → List[Document][source]¶ Return docs most similar to query. :param query: Text to look up documents similar to. :param k: Number of Documents to return. Defaults to 4. :param filter: Filter by metadata. Defaults to None. Returns List of Documents most similar to the query. async asimilarity_search_by_vector(embedding: List[float], k: int = 4, filter: Optional[MetadataFilter] = None, search_params: Optional[common_types.SearchParams] = None, offset: int = 0, score_threshold: Optional[float] = None, consistency: Optional[common_types.ReadConsistency] = None, **kwargs: Any) → List[Document][source]¶ Return docs most similar to embedding vector. Parameters embedding – Embedding vector to look up documents similar to. k – Number of Documents to return. Defaults to 4. filter – Filter by metadata. Defaults to None. search_params – Additional search params offset – Offset of the first result to return. May be used to paginate results. Note: large offset values may cause performance issues. score_threshold – Define a minimal score threshold for the result. If defined, less similar results will not be returned. Score of the returned result might be higher or smaller than the threshold depending on the Distance function used. E.g. for cosine similarity only higher scores will be returned.
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.qdrant.Qdrant.html
352c93eddeb0-12
E.g. for cosine similarity only higher scores will be returned. consistency – Read consistency of the search. Defines how many replicas should be queried before returning the result. Values: - int - number of replicas to query, values should be present in all queried replicas ’majority’ - query all replicas, but return values present in the majority of replicas ’quorum’ - query the majority of replicas, return values present in all of them ’all’ - query all replicas, and return values present in all replicas Returns List of Documents most similar to the query. async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶ Return docs most similar to query. async asimilarity_search_with_score(query: str, k: int = 4, filter: Optional[MetadataFilter] = None, search_params: Optional[common_types.SearchParams] = None, offset: int = 0, score_threshold: Optional[float] = None, consistency: Optional[common_types.ReadConsistency] = None, **kwargs: Any) → List[Tuple[Document, float]][source]¶ Return docs most similar to query. Parameters query – Text to look up documents similar to. k – Number of Documents to return. Defaults to 4. filter – Filter by metadata. Defaults to None. search_params – Additional search params offset – Offset of the first result to return. May be used to paginate results. Note: large offset values may cause performance issues. score_threshold – Define a minimal score threshold for the result. If defined, less similar results will not be returned. Score of the returned result might be higher or smaller than the threshold depending on the Distance function used.
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.qdrant.Qdrant.html
352c93eddeb0-13
threshold depending on the Distance function used. E.g. for cosine similarity only higher scores will be returned. consistency – Read consistency of the search. Defines how many replicas should be queried before returning the result. Values: - int - number of replicas to query, values should be present in all queried replicas ’majority’ - query all replicas, but return values present in the majority of replicas ’quorum’ - query the majority of replicas, return values present in all of them ’all’ - query all replicas, and return values present in all replicas Returns List of documents most similar to the query text and distance for each. async asimilarity_search_with_score_by_vector(embedding: List[float], k: int = 4, filter: Optional[MetadataFilter] = None, search_params: Optional[common_types.SearchParams] = None, offset: int = 0, score_threshold: Optional[float] = None, consistency: Optional[common_types.ReadConsistency] = None, **kwargs: Any) → List[Tuple[Document, float]][source]¶ Return docs most similar to embedding vector. Parameters embedding – Embedding vector to look up documents similar to. k – Number of Documents to return. Defaults to 4. filter – Filter by metadata. Defaults to None. search_params – Additional search params offset – Offset of the first result to return. May be used to paginate results. Note: large offset values may cause performance issues. score_threshold – Define a minimal score threshold for the result. If defined, less similar results will not be returned. Score of the returned result might be higher or smaller than the threshold depending on the Distance function used. E.g. for cosine similarity only higher scores will be returned. consistency – Read consistency of the search. Defines how many replicas should be
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.qdrant.Qdrant.html
352c93eddeb0-14
consistency – Read consistency of the search. Defines how many replicas should be queried before returning the result. Values: - int - number of replicas to query, values should be present in all queried replicas ’majority’ - query all replicas, but return values present in the majority of replicas ’quorum’ - query the majority of replicas, return values present in all of them ’all’ - query all replicas, and return values present in all replicas Returns List of documents most similar to the query text and distance for each. delete(ids: Optional[List[str]] = None, **kwargs: Any) → Optional[bool][source]¶ Delete by vector ID or other criteria. Parameters ids – List of ids to delete. **kwargs – Other keyword arguments that subclasses might use. Returns True if deletion is successful, False otherwise, None if not implemented. Return type Optional[bool] classmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶ Return VectorStore initialized from documents and embeddings.
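A hedged sketch combining the async scored search and the delete method described above. It reuses the qdrant store and ids from the earlier add_texts sketch, assumes the deployment supports the async API (the in-memory mode may not), and the 0.8 threshold is illustrative.

import asyncio

async def search_then_prune(qdrant, stale_ids):
    # Keep only hits whose similarity clears the threshold; for cosine
    # similarity, higher scores mean closer matches (see the note above).
    hits = await qdrant.asimilarity_search_with_score("vector databases", k=4, score_threshold=0.8)
    for doc, score in hits:
        print(round(score, 3), doc.page_content)
    # Remove points by their uuid-like ids once they are no longer needed.
    qdrant.delete(ids=stale_ids)
    return hits

# asyncio.run(search_then_prune(qdrant, ids))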
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.qdrant.Qdrant.html
352c93eddeb0-15
Return VectorStore initialized from documents and embeddings. classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[Sequence[str]] = None, location: Optional[str] = None, url: Optional[str] = None, port: Optional[int] = 6333, grpc_port: int = 6334, prefer_grpc: bool = False, https: Optional[bool] = None, api_key: Optional[str] = None, prefix: Optional[str] = None, timeout: Optional[float] = None, host: Optional[str] = None, path: Optional[str] = None, collection_name: Optional[str] = None, distance_func: str = 'Cosine', content_payload_key: str = 'page_content', metadata_payload_key: str = 'metadata', vector_name: Optional[str] = None, batch_size: int = 64, shard_number: Optional[int] = None, replication_factor: Optional[int] = None, write_consistency_factor: Optional[int] = None, on_disk_payload: Optional[bool] = None, hnsw_config: Optional[common_types.HnswConfigDiff] = None, optimizers_config: Optional[common_types.OptimizersConfigDiff] = None, wal_config: Optional[common_types.WalConfigDiff] = None, quantization_config: Optional[common_types.QuantizationConfig] = None, init_from: Optional[common_types.InitFrom] = None, on_disk: Optional[bool] = None, force_recreate: bool = False, **kwargs: Any) → Qdrant[source]¶ Construct Qdrant wrapper from a list of texts. Parameters texts – A list of texts to be indexed in Qdrant. embedding – A subclass of Embeddings, responsible for text vectorization.
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.qdrant.Qdrant.html
352c93eddeb0-16
embedding – A subclass of Embeddings, responsible for text vectorization. metadatas – An optional list of metadata. If provided, it has to be of the same length as the list of texts. ids – Optional list of ids to associate with the texts. Ids have to be uuid-like strings. location – If :memory: - use in-memory Qdrant instance. If str - use it as a url parameter. If None - fallback to relying on host and port parameters. url – either host or str of “Optional[scheme], host, Optional[port], Optional[prefix]”. Default: None port – Port of the REST API interface. Default: 6333 grpc_port – Port of the gRPC interface. Default: 6334 prefer_grpc – If true - use the gRPC interface whenever possible in custom methods. Default: False https – If true - use HTTPS(SSL) protocol. Default: None api_key – API key for authentication in Qdrant Cloud. Default: None prefix – If not None - add prefix to the REST URL path. Example: service/v1 will result in http://localhost:6333/service/v1/{qdrant-endpoint} for REST API. Default: None timeout – Timeout for REST and gRPC API requests. Default: 5.0 seconds for REST and unlimited for gRPC host – Host name of Qdrant service. If url and host are None, set to ‘localhost’. Default: None path – Path in which the vectors will be stored while using local mode. Default: None collection_name – Name of the Qdrant collection to be used. If not provided, it will be created randomly. Default: None distance_func – Distance function. One of: “Cosine” / “Euclid” / “Dot”.
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.qdrant.Qdrant.html
352c93eddeb0-17
Default: “Cosine” content_payload_key – A payload key used to store the content of the document. Default: “page_content” metadata_payload_key – A payload key used to store the metadata of the document. Default: “metadata” vector_name – Name of the vector to be used internally in Qdrant. Default: None batch_size – How many vectors to upload per request. Default: 64 shard_number – Number of shards in collection. Default is 1, minimum is 1. replication_factor – Replication factor for collection. Default is 1, minimum is 1. Defines how many copies of each shard will be created. Has effect only in distributed mode. write_consistency_factor – Write consistency factor for collection. Default is 1, minimum is 1. Defines how many replicas should apply the operation for us to consider it successful. Increasing this number will make the collection more resilient to inconsistencies, but will also make it fail if not enough replicas are available. Does not have any performance impact. Has effect only in distributed mode. on_disk_payload – If true - point's payload will not be stored in memory. It will be read from the disk every time it is requested. This setting saves RAM by (slightly) increasing the response time. Note: those payload values that are involved in filtering and are indexed - remain in RAM. hnsw_config – Params for HNSW index optimizers_config – Params for optimizer wal_config – Params for Write-Ahead-Log quantization_config – Params for quantization, if None - quantization will be disabled init_from – Use data stored in another collection to initialize this collection force_recreate – Force recreating the collection **kwargs – Additional arguments passed directly into REST client initialization This is a user-friendly interface that:
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.qdrant.Qdrant.html
352c93eddeb0-18
This is a user-friendly interface that: 1. Creates embeddings, one for each text 2. Initializes the Qdrant database as an in-memory docstore by default (and overridable to a remote docstore) 3. Adds the text embeddings to the Qdrant database This is intended to be a quick way to get started. Example from langchain import Qdrant from langchain.embeddings import OpenAIEmbeddings embeddings = OpenAIEmbeddings() qdrant = Qdrant.from_texts(texts, embeddings, "localhost") max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document][source]¶ Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Parameters query – Text to look up documents similar to. k – Number of Documents to return. Defaults to 4. fetch_k – Number of Documents to fetch to pass to MMR algorithm. Defaults to 20. lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. Returns List of Documents selected by maximal marginal relevance. max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document][source]¶ Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Parameters embedding – Embedding to look up documents similar to.
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.qdrant.Qdrant.html
352c93eddeb0-19
among selected documents. Parameters embedding – Embedding to look up documents similar to. k – Number of Documents to return. Defaults to 4. fetch_k – Number of Documents to fetch to pass to MMR algorithm. lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. Returns List of Documents selected by maximal marginal relevance. max_marginal_relevance_search_with_score_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Tuple[Document, float]][source]¶ Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. :param embedding: Embedding vector to look up documents similar to. :param k: Number of Documents to return. Defaults to 4. :param fetch_k: Number of Documents to fetch to pass to MMR algorithm. Defaults to 20. Parameters lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. Returns List of Documents selected by maximal marginal relevance and distance for each. search(query: str, search_type: str, **kwargs: Any) → List[Document]¶
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.qdrant.Qdrant.html
352c93eddeb0-20
Return docs most similar to query using specified search type. similarity_search(query: str, k: int = 4, filter: Optional[MetadataFilter] = None, search_params: Optional[common_types.SearchParams] = None, offset: int = 0, score_threshold: Optional[float] = None, consistency: Optional[common_types.ReadConsistency] = None, **kwargs: Any) → List[Document][source]¶ Return docs most similar to query. Parameters query – Text to look up documents similar to. k – Number of Documents to return. Defaults to 4. filter – Filter by metadata. Defaults to None. search_params – Additional search params offset – Offset of the first result to return. May be used to paginate results. Note: large offset values may cause performance issues. score_threshold – Define a minimal score threshold for the result. If defined, less similar results will not be returned. Score of the returned result might be higher or smaller than the threshold depending on the Distance function used. E.g. for cosine similarity only higher scores will be returned. consistency – Read consistency of the search. Defines how many replicas should be queried before returning the result. Values: - int - number of replicas to query, values should be present in all queried replicas ’majority’ - query all replicas, but return values present in the majority of replicas ’quorum’ - query the majority of replicas, return values present in all of them ’all’ - query all replicas, and return values present in all replicas Returns List of Documents most similar to the query.
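A short sketch of the filtered similarity_search documented above, reusing the store and the "topic" metadata from the earlier add_texts sketch; the plain dict is one accepted form of MetadataFilter.

docs = qdrant.similarity_search(
    "how do vector stores work?",
    k=4,
    filter={"topic": "langchain"},  # match against payload metadata
    offset=0,  # paginate results starting from the first hit
)
for doc in docs:
    print(doc.metadata, doc.page_content)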
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.qdrant.Qdrant.html
352c93eddeb0-21
Returns List of Documents most similar to the query. similarity_search_by_vector(embedding: List[float], k: int = 4, filter: Optional[MetadataFilter] = None, search_params: Optional[common_types.SearchParams] = None, offset: int = 0, score_threshold: Optional[float] = None, consistency: Optional[common_types.ReadConsistency] = None, **kwargs: Any) → List[Document][source]¶ Return docs most similar to embedding vector. Parameters embedding – Embedding vector to look up documents similar to. k – Number of Documents to return. Defaults to 4. filter – Filter by metadata. Defaults to None. search_params – Additional search params offset – Offset of the first result to return. May be used to paginate results. Note: large offset values may cause performance issues. score_threshold – Define a minimal score threshold for the result. If defined, less similar results will not be returned. Score of the returned result might be higher or smaller than the threshold depending on the Distance function used. E.g. for cosine similarity only higher scores will be returned. consistency – Read consistency of the search. Defines how many replicas should be queried before returning the result. Values: - int - number of replicas to query, values should be present in all queried replicas ’majority’ - query all replicas, but return values present in the majority of replicas ’quorum’ - query the majority of replicas, return values present in all of them ’all’ - query all replicas, and return values present in all replicas Returns List of Documents most similar to the query. similarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.qdrant.Qdrant.html
352c93eddeb0-22
Return docs and relevance scores in the range [0, 1]. 0 is dissimilar, 1 is most similar. Parameters query – input text k – Number of Documents to return. Defaults to 4. **kwargs – kwargs to be passed to similarity search. Should include: score_threshold: Optional, a floating point value between 0 and 1 to filter the resulting set of retrieved docs Returns List of Tuples of (doc, similarity_score) similarity_search_with_score(query: str, k: int = 4, filter: Optional[MetadataFilter] = None, search_params: Optional[common_types.SearchParams] = None, offset: int = 0, score_threshold: Optional[float] = None, consistency: Optional[common_types.ReadConsistency] = None, **kwargs: Any) → List[Tuple[Document, float]][source]¶ Return docs most similar to query. Parameters query – Text to look up documents similar to. k – Number of Documents to return. Defaults to 4. filter – Filter by metadata. Defaults to None. search_params – Additional search params offset – Offset of the first result to return. May be used to paginate results. Note: large offset values may cause performance issues. score_threshold – Define a minimal score threshold for the result. If defined, less similar results will not be returned. Score of the returned result might be higher or smaller than the threshold depending on the Distance function used. E.g. for cosine similarity only higher scores will be returned. consistency – Read consistency of the search. Defines how many replicas should be queried before returning the result. Values: - int - number of replicas to query, values should be present in all queried replicas ’majority’ - query all replicas, but return values present in the majority of replicas
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.qdrant.Qdrant.html
352c93eddeb0-23
’majority’ - query all replicas, but return values present in the majority of replicas ’quorum’ - query the majority of replicas, return values present in all of them ’all’ - query all replicas, and return values present in all replicas Returns List of documents most similar to the query text and distance for each. similarity_search_with_score_by_vector(embedding: List[float], k: int = 4, filter: Optional[MetadataFilter] = None, search_params: Optional[common_types.SearchParams] = None, offset: int = 0, score_threshold: Optional[float] = None, consistency: Optional[common_types.ReadConsistency] = None, **kwargs: Any) → List[Tuple[Document, float]][source]¶ Return docs most similar to embedding vector. Parameters embedding – Embedding vector to look up documents similar to. k – Number of Documents to return. Defaults to 4. filter – Filter by metadata. Defaults to None. search_params – Additional search params offset – Offset of the first result to return. May be used to paginate results. Note: large offset values may cause performance issues. score_threshold – Define a minimal score threshold for the result. If defined, less similar results will not be returned. Score of the returned result might be higher or smaller than the threshold depending on the Distance function used. E.g. for cosine similarity only higher scores will be returned. consistency – Read consistency of the search. Defines how many replicas should be
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.qdrant.Qdrant.html
352c93eddeb0-24
queried before returning the result. Values: - int - number of replicas to query, values should be present in all queried replicas ’majority’ - query all replicas, but return values present in the majority of replicas ’quorum’ - query the majority of replicas, return values present in all of them ’all’ - query all replicas, and return values present in all replicas Returns List of documents most similar to the query text and distance for each. Examples using Qdrant¶ Qdrant Qdrant self-querying
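To complement the retriever-based MMR examples shown earlier, a direct call to max_marginal_relevance_search might look like the following sketch, reusing the same qdrant store as above.

diverse_docs = qdrant.max_marginal_relevance_search(
    "vector databases",
    k=4,  # documents returned
    fetch_k=20,  # candidates considered by the MMR algorithm
    lambda_mult=0.5,  # 0 favours maximum diversity, 1 minimum diversity
)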
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.qdrant.Qdrant.html
d45550c66a1b-0
langchain.vectorstores.sklearn.ParquetSerializer¶ class langchain.vectorstores.sklearn.ParquetSerializer(persist_path: str)[source]¶ Serializes data in Apache Parquet format using the pyarrow package. Methods __init__(persist_path) extension() The file extension suggested by this serializer (without dot). load() Loads the data from the persist_path save(data) Saves the data to the persist_path __init__(persist_path: str) → None[source]¶ classmethod extension() → str[source]¶ The file extension suggested by this serializer (without dot). load() → Any[source]¶ Loads the data from the persist_path save(data: Any) → None[source]¶ Saves the data to the persist_path
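A hedged sketch of using the serializer directly; in practice it is typically constructed by SKLearnVectorStore, and the column-dict payload below is an assumption about an accepted data shape rather than something stated above.

from langchain.vectorstores.sklearn import ParquetSerializer

serializer = ParquetSerializer(persist_path="/tmp/vectors.parquet")
print(ParquetSerializer.extension())  # suggested file extension, without the dot

data = {"ids": ["a", "b"], "texts": ["first doc", "second doc"]}
serializer.save(data)         # writes the data to persist_path in Apache Parquet format
restored = serializer.load()  # reads it back from persist_path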
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.sklearn.ParquetSerializer.html
0c69cf23b2e1-0
langchain.vectorstores.elastic_vector_search.ElasticVectorSearch¶ class langchain.vectorstores.elastic_vector_search.ElasticVectorSearch(elasticsearch_url: str, index_name: str, embedding: Embeddings, *, ssl_verify: Optional[Dict[str, Any]] = None)[source]¶ Wrapper around Elasticsearch as a vector database. To connect to an Elasticsearch instance that does not require login credentials, pass the Elasticsearch URL and index name along with the embedding object to the constructor. Example from langchain import ElasticVectorSearch from langchain.embeddings import OpenAIEmbeddings embedding = OpenAIEmbeddings() elastic_vector_search = ElasticVectorSearch( elasticsearch_url="http://localhost:9200", index_name="test_index", embedding=embedding ) To connect to an Elasticsearch instance that requires login credentials, including Elastic Cloud, use the Elasticsearch URL format https://username:password@es_host:9243. For example, to connect to Elastic Cloud, create the Elasticsearch URL with the required authentication details and pass it to the ElasticVectorSearch constructor as the named parameter elasticsearch_url. You can obtain your Elastic Cloud URL and login credentials by logging in to the Elastic Cloud console at https://cloud.elastic.co, selecting your deployment, and navigating to the “Deployments” page. To obtain your Elastic Cloud password for the default “elastic” user: Log in to the Elastic Cloud console at https://cloud.elastic.co Go to “Security” > “Users” Locate the “elastic” user and click “Edit” Click “Reset password” Follow the prompts to reset the password The format for Elastic Cloud URLs is https://username:password@cluster_id.region_id.gcp.cloud.es.io:9243. Example from langchain import ElasticVectorSearch
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.elastic_vector_search.ElasticVectorSearch.html
0c69cf23b2e1-1
Example from langchain import ElasticVectorSearch from langchain.embeddings import OpenAIEmbeddings embedding = OpenAIEmbeddings() elastic_host = "cluster_id.region_id.gcp.cloud.es.io" elasticsearch_url = f"https://username:password@{elastic_host}:9243" elastic_vector_search = ElasticVectorSearch( elasticsearch_url=elasticsearch_url, index_name="test_index", embedding=embedding ) Parameters elasticsearch_url (str) – The URL for the Elasticsearch instance. index_name (str) – The name of the Elasticsearch index for the embeddings. embedding (Embeddings) – An object that provides the ability to embed text. It should be an instance of a class that subclasses the Embeddings abstract base class, such as OpenAIEmbeddings() Raises ValueError – If the elasticsearch python package is not installed. Initialize with necessary components. Attributes embeddings Access the query embedding object if available. Methods __init__(elasticsearch_url, index_name, ...) Initialize with necessary components. aadd_documents(documents, **kwargs) Run more documents through the embeddings and add to the vectorstore. aadd_texts(texts[, metadatas]) Run more texts through the embeddings and add to the vectorstore. add_documents(documents, **kwargs) Run more documents through the embeddings and add to the vectorstore. add_texts(texts[, metadatas, ids, ...]) Run more texts through the embeddings and add to the vectorstore. afrom_documents(documents, embedding, **kwargs) Return VectorStore initialized from documents and embeddings. afrom_texts(texts, embedding[, metadatas]) Return VectorStore initialized from texts and embeddings.
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.elastic_vector_search.ElasticVectorSearch.html
0c69cf23b2e1-2
Return VectorStore initialized from texts and embeddings. amax_marginal_relevance_search(query[, k, ...]) Return docs selected using the maximal marginal relevance. amax_marginal_relevance_search_by_vector(...) Return docs selected using the maximal marginal relevance. as_retriever(**kwargs) Return VectorStoreRetriever initialized from this VectorStore. asearch(query, search_type, **kwargs) Return docs most similar to query using specified search type. asimilarity_search(query[, k]) Return docs most similar to query. asimilarity_search_by_vector(embedding[, k]) Return docs most similar to embedding vector. asimilarity_search_with_relevance_scores(query) Return docs most similar to query. client_search(client, index_name, ...) create_index(client, index_name, mapping) delete([ids]) Delete by vector IDs. from_documents(documents, embedding, **kwargs) Return VectorStore initialized from documents and embeddings. from_texts(texts, embedding[, metadatas, ...]) Construct ElasticVectorSearch wrapper from raw documents. max_marginal_relevance_search(query[, k, ...]) Return docs selected using the maximal marginal relevance. max_marginal_relevance_search_by_vector(...) Return docs selected using the maximal marginal relevance. search(query, search_type, **kwargs) Return docs most similar to query using specified search type. similarity_search(query[, k, filter]) Return docs most similar to query. similarity_search_by_vector(embedding[, k]) Return docs most similar to embedding vector. similarity_search_with_relevance_scores(query) Return docs and relevance scores in the range [0, 1]. similarity_search_with_score(query[, k, filter])
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.elastic_vector_search.ElasticVectorSearch.html
0c69cf23b2e1-3
similarity_search_with_score(query[, k, filter]) Return docs most similar to query. __init__(elasticsearch_url: str, index_name: str, embedding: Embeddings, *, ssl_verify: Optional[Dict[str, Any]] = None)[source]¶ Initialize with necessary components. async aadd_documents(documents: List[Document], **kwargs: Any) → List[str]¶ Run more documents through the embeddings and add to the vectorstore. Parameters documents (List[Document]) – Documents to add to the vectorstore. Returns List of IDs of the added texts. Return type List[str] async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str]¶ Run more texts through the embeddings and add to the vectorstore. add_documents(documents: List[Document], **kwargs: Any) → List[str]¶ Run more documents through the embeddings and add to the vectorstore. Parameters documents (List[Document]) – Documents to add to the vectorstore. Returns List of IDs of the added texts. Return type List[str] add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, refresh_indices: bool = True, **kwargs: Any) → List[str][source]¶ Run more texts through the embeddings and add to the vectorstore. Parameters texts – Iterable of strings to add to the vectorstore. metadatas – Optional list of metadatas associated with the texts. ids – Optional list of unique IDs. refresh_indices – Whether to refresh Elasticsearch indices. Returns List of ids from adding the texts into the vectorstore.
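A minimal sketch of add_texts against the constructor shown earlier; it assumes an Elasticsearch instance at the localhost URL used in the examples above and configured OpenAI credentials, and the metadata keys are placeholders.

from langchain import ElasticVectorSearch
from langchain.embeddings import OpenAIEmbeddings

store = ElasticVectorSearch(
    elasticsearch_url="http://localhost:9200",
    index_name="test_index",
    embedding=OpenAIEmbeddings(),
)
ids = store.add_texts(
    ["elasticsearch stores dense vectors", "langchain wraps elasticsearch"],
    metadatas=[{"source": "notes"}, {"source": "notes"}],
    refresh_indices=True,  # refresh the index so the new documents are searchable immediately
)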
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.elastic_vector_search.ElasticVectorSearch.html
0c69cf23b2e1-4
Returns List of ids from adding the texts into the vectorstore. async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶ Return VectorStore initialized from documents and embeddings. async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → VST¶ Return VectorStore initialized from texts and embeddings. async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶ Return docs selected using the maximal marginal relevance. async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶ Return docs selected using the maximal marginal relevance. as_retriever(**kwargs: Any) → VectorStoreRetriever¶ Return VectorStoreRetriever initialized from this VectorStore. Parameters search_type (Optional[str]) – Defines the type of search that the Retriever should perform. Can be “similarity” (default), “mmr”, or “similarity_score_threshold”. search_kwargs (Optional[Dict]) – Keyword arguments to pass to the search function. Can include things like: k: Amount of documents to return (Default: 4) score_threshold: Minimum relevance threshold for similarity_score_threshold fetch_k: Amount of documents to pass to MMR algorithm (Default: 20) lambda_mult: Diversity of results returned by MMR;
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.elastic_vector_search.ElasticVectorSearch.html
0c69cf23b2e1-5
lambda_mult: Diversity of results returned by MMR; 1 for minimum diversity and 0 for maximum. (Default: 0.5) filter: Filter by document metadata Returns Retriever class for VectorStore. Return type VectorStoreRetriever Examples: # Retrieve more documents with higher diversity # Useful if your dataset has many similar documents docsearch.as_retriever( search_type="mmr", search_kwargs={'k': 6, 'lambda_mult': 0.25} ) # Fetch more documents for the MMR algorithm to consider # But only return the top 5 docsearch.as_retriever( search_type="mmr", search_kwargs={'k': 5, 'fetch_k': 50} ) # Only retrieve documents that have a relevance score # Above a certain threshold docsearch.as_retriever( search_type="similarity_score_threshold", search_kwargs={'score_threshold': 0.8} ) # Only get the single most similar document from the dataset docsearch.as_retriever(search_kwargs={'k': 1}) # Use a filter to only retrieve documents from a specific paper docsearch.as_retriever( search_kwargs={'filter': {'paper_title':'GPT-4 Technical Report'}} ) async asearch(query: str, search_type: str, **kwargs: Any) → List[Document]¶ Return docs most similar to query using specified search type. async asimilarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document]¶ Return docs most similar to query. async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.elastic_vector_search.ElasticVectorSearch.html
0c69cf23b2e1-6
Return docs most similar to embedding vector. async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶ Return docs most similar to query. client_search(client: Any, index_name: str, script_query: Dict, size: int) → Any[source]¶ create_index(client: Any, index_name: str, mapping: Dict) → None[source]¶ delete(ids: Optional[List[str]] = None, **kwargs: Any) → None[source]¶ Delete by vector IDs. Parameters ids – List of ids to delete. classmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶ Return VectorStore initialized from documents and embeddings. classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, index_name: Optional[str] = None, refresh_indices: bool = True, **kwargs: Any) → ElasticVectorSearch[source]¶ Construct ElasticVectorSearch wrapper from raw documents. This is a user-friendly interface that: Embeds documents. Creates a new index for the embeddings in the Elasticsearch instance. Adds the documents to the newly created Elasticsearch index. This is intended to be a quick way to get started. Example from langchain import ElasticVectorSearch from langchain.embeddings import OpenAIEmbeddings embeddings = OpenAIEmbeddings() elastic_vector_search = ElasticVectorSearch.from_texts( texts, embeddings, elasticsearch_url="http://localhost:9200" )
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.elastic_vector_search.ElasticVectorSearch.html
0c69cf23b2e1-7
embeddings, elasticsearch_url="http://localhost:9200" ) max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶ Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Parameters query – Text to look up documents similar to. k – Number of Documents to return. Defaults to 4. fetch_k – Number of Documents to fetch to pass to MMR algorithm. lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. Returns List of Documents selected by maximal marginal relevance. max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶ Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Parameters embedding – Embedding to look up documents similar to. k – Number of Documents to return. Defaults to 4. fetch_k – Number of Documents to fetch to pass to MMR algorithm. lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. Returns List of Documents selected by maximal marginal relevance. search(query: str, search_type: str, **kwargs: Any) → List[Document]¶
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.elastic_vector_search.ElasticVectorSearch.html
0c69cf23b2e1-8
Return docs most similar to query using specified search type. similarity_search(query: str, k: int = 4, filter: Optional[dict] = None, **kwargs: Any) → List[Document][source]¶ Return docs most similar to query. Parameters query – Text to look up documents similar to. k – Number of Documents to return. Defaults to 4. Returns List of Documents most similar to the query. similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶ Return docs most similar to embedding vector. Parameters embedding – Embedding to look up documents similar to. k – Number of Documents to return. Defaults to 4. Returns List of Documents most similar to the query vector. similarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶ Return docs and relevance scores in the range [0, 1]. 0 is dissimilar, 1 is most similar. Parameters query – input text k – Number of Documents to return. Defaults to 4. **kwargs – kwargs to be passed to similarity search. Should include: score_threshold: Optional, a floating point value between 0 and 1 to filter the resulting set of retrieved docs Returns List of Tuples of (doc, similarity_score) similarity_search_with_score(query: str, k: int = 4, filter: Optional[dict] = None, **kwargs: Any) → List[Tuple[Document, float]][source]¶ Return docs most similar to query. :param query: Text to look up documents similar to. :param k: Number of Documents to return. Defaults to 4. Returns
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.elastic_vector_search.ElasticVectorSearch.html
0c69cf23b2e1-9
:param k: Number of Documents to return. Defaults to 4. Returns List of Documents most similar to the query. Examples using ElasticVectorSearch¶ ElasticSearch How to add memory to a Multi-Input Chain
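A short sketch of a scored, metadata-filtered query using the methods documented above; it reuses the store from the add_texts sketch earlier, and the filter key is illustrative.

results = store.similarity_search_with_score(
    "which tool wraps elasticsearch?",
    k=2,
    filter={"source": "notes"},  # restrict the search by document metadata
)
for doc, score in results:
    print(score, doc.page_content)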
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.elastic_vector_search.ElasticVectorSearch.html
36cb1b48bb72-0
langchain.vectorstores.sklearn.BsonSerializer¶ class langchain.vectorstores.sklearn.BsonSerializer(persist_path: str)[source]¶ Serializes data in binary JSON (BSON) using the bson python package. Methods __init__(persist_path) extension() The file extension suggested by this serializer (without dot). load() Loads the data from the persist_path save(data) Saves the data to the persist_path __init__(persist_path: str) → None[source]¶ classmethod extension() → str[source]¶ The file extension suggested by this serializer (without dot). load() → Any[source]¶ Loads the data from the persist_path save(data: Any) → None[source]¶ Saves the data to the persist_path
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.sklearn.BsonSerializer.html
df60de688f7a-0
langchain.vectorstores.myscale.MyScale¶ class langchain.vectorstores.myscale.MyScale(embedding: Embeddings, config: Optional[MyScaleSettings] = None, **kwargs: Any)[source]¶ Wrapper around the MyScale vector database. You need a clickhouse-connect python package, and a valid account to connect to MyScale. MyScale can not only search with simple vector indexes, it also supports complex queries with multiple conditions, constraints and even sub-queries. For more information, please visit the [myscale official site](https://docs.myscale.com/en/overview/) MyScale wrapper to LangChain. embedding (Embeddings): the embedding function. config (MyScaleSettings): configuration for the MyScale client. Other keyword arguments will be passed into [clickhouse-connect](https://docs.myscale.com/) Attributes embeddings Access the query embedding object if available. metadata_column Methods __init__(embedding[, config]) MyScale wrapper to LangChain aadd_documents(documents, **kwargs) Run more documents through the embeddings and add to the vectorstore. aadd_texts(texts[, metadatas]) Run more texts through the embeddings and add to the vectorstore. add_documents(documents, **kwargs) Run more documents through the embeddings and add to the vectorstore. add_texts(texts[, metadatas, batch_size, ids]) Run more texts through the embeddings and add to the vectorstore. afrom_documents(documents, embedding, **kwargs) Return VectorStore initialized from documents and embeddings. afrom_texts(texts, embedding[, metadatas]) Return VectorStore initialized from texts and embeddings. amax_marginal_relevance_search(query[, k, ...]) Return docs selected using the maximal marginal relevance. amax_marginal_relevance_search_by_vector(...)
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.myscale.MyScale.html
df60de688f7a-1
amax_marginal_relevance_search_by_vector(...) Return docs selected using the maximal marginal relevance. as_retriever(**kwargs) Return VectorStoreRetriever initialized from this VectorStore. asearch(query, search_type, **kwargs) Return docs most similar to query using specified search type. asimilarity_search(query[, k]) Return docs most similar to query. asimilarity_search_by_vector(embedding[, k]) Return docs most similar to embedding vector. asimilarity_search_with_relevance_scores(query) Return docs most similar to query. delete([ids]) Delete by vector ID or other criteria. drop() Helper function: Drop data escape_str(value) from_documents(documents, embedding, **kwargs) Return VectorStore initialized from documents and embeddings. from_texts(texts, embedding[, metadatas, ...]) Create MyScale wrapper with existing texts max_marginal_relevance_search(query[, k, ...]) Return docs selected using the maximal marginal relevance. max_marginal_relevance_search_by_vector(...) Return docs selected using the maximal marginal relevance. search(query, search_type, **kwargs) Return docs most similar to query using specified search type. similarity_search(query[, k, where_str]) Perform a similarity search with MyScale similarity_search_by_vector(embedding[, k, ...]) Perform a similarity search with MyScale by vectors similarity_search_with_relevance_scores(query) Perform a similarity search with MyScale similarity_search_with_score(*args, **kwargs) Run similarity search with distance. __init__(embedding: Embeddings, config: Optional[MyScaleSettings] = None, **kwargs: Any) → None[source]¶ MyScale wrapper to LangChain. embedding (Embeddings):
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.myscale.MyScale.html
df60de688f7a-2
MyScale wrapper to LangChain. embedding (Embeddings): the embedding function. config (MyScaleSettings): configuration for the MyScale client. Other keyword arguments will be passed into [clickhouse-connect](https://docs.myscale.com/) async aadd_documents(documents: List[Document], **kwargs: Any) → List[str]¶ Run more documents through the embeddings and add to the vectorstore. Parameters documents (List[Document]) – Documents to add to the vectorstore. Returns List of IDs of the added texts. Return type List[str] async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str]¶ Run more texts through the embeddings and add to the vectorstore. add_documents(documents: List[Document], **kwargs: Any) → List[str]¶ Run more documents through the embeddings and add to the vectorstore. Parameters documents (List[Document]) – Documents to add to the vectorstore. Returns List of IDs of the added texts. Return type List[str] add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, batch_size: int = 32, ids: Optional[Iterable[str]] = None, **kwargs: Any) → List[str][source]¶ Run more texts through the embeddings and add to the vectorstore. Parameters texts – Iterable of strings to add to the vectorstore. ids – Optional list of ids to associate with the texts. batch_size – Batch size of insertion. metadata – Optional column data to be inserted. Returns List of ids from adding the texts into the vectorstore. async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶ Return VectorStore initialized from documents and embeddings.
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.myscale.MyScale.html
df60de688f7a-3
Return VectorStore initialized from documents and embeddings. async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → VST¶ Return VectorStore initialized from texts and embeddings. async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶ Return docs selected using the maximal marginal relevance. async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶ Return docs selected using the maximal marginal relevance. as_retriever(**kwargs: Any) → VectorStoreRetriever¶ Return VectorStoreRetriever initialized from this VectorStore. Parameters search_type (Optional[str]) – Defines the type of search that the Retriever should perform. Can be “similarity” (default), “mmr”, or “similarity_score_threshold”. search_kwargs (Optional[Dict]) – Keyword arguments to pass to the search function. Can include things like: k: Amount of documents to return (Default: 4) score_threshold: Minimum relevance threshold for similarity_score_threshold fetch_k: Amount of documents to pass to MMR algorithm (Default: 20) lambda_mult: Diversity of results returned by MMR; 1 for minimum diversity and 0 for maximum. (Default: 0.5) filter: Filter by document metadata Returns Retriever class for VectorStore. Return type VectorStoreRetriever Examples: # Retrieve more documents with higher diversity
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.myscale.MyScale.html
df60de688f7a-4
Return type VectorStoreRetriever Examples: # Retrieve more documents with higher diversity # Useful if your dataset has many similar documents docsearch.as_retriever( search_type="mmr", search_kwargs={'k': 6, 'lambda_mult': 0.25} ) # Fetch more documents for the MMR algorithm to consider # But only return the top 5 docsearch.as_retriever( search_type="mmr", search_kwargs={'k': 5, 'fetch_k': 50} ) # Only retrieve documents that have a relevance score # Above a certain threshold docsearch.as_retriever( search_type="similarity_score_threshold", search_kwargs={'score_threshold': 0.8} ) # Only get the single most similar document from the dataset docsearch.as_retriever(search_kwargs={'k': 1}) # Use a filter to only retrieve documents from a specific paper docsearch.as_retriever( search_kwargs={'filter': {'paper_title':'GPT-4 Technical Report'}} ) async asearch(query: str, search_type: str, **kwargs: Any) → List[Document]¶ Return docs most similar to query using specified search type. async asimilarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document]¶ Return docs most similar to query. async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶ Return docs most similar to embedding vector. async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶ Return docs most similar to query.
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.myscale.MyScale.html
df60de688f7a-5
Return docs most similar to query. delete(ids: Optional[List[str]] = None, **kwargs: Any) → Optional[bool]¶ Delete by vector ID or other criteria. Parameters ids – List of ids to delete. **kwargs – Other keyword arguments that subclasses might use. Returns True if deletion is successful, False otherwise, None if not implemented. Return type Optional[bool] drop() → None[source]¶ Helper function: Drop data escape_str(value: str) → str[source]¶ classmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶ Return VectorStore initialized from documents and embeddings. classmethod from_texts(texts: Iterable[str], embedding: Embeddings, metadatas: Optional[List[Dict[Any, Any]]] = None, config: Optional[MyScaleSettings] = None, text_ids: Optional[Iterable[str]] = None, batch_size: int = 32, **kwargs: Any) → MyScale[source]¶ Create MyScale wrapper with existing texts. Parameters texts (Iterable[str]) – List or tuple of strings to be added. embedding (Embeddings) – Function to extract text embedding. config (MyScaleSettings, Optional) – MyScale configuration. text_ids (Optional[Iterable], optional) – IDs for the texts. Defaults to None. batch_size (int, optional) – Batch size when transmitting data to MyScale. Defaults to 32. metadata (List[dict], optional) – Metadata for the texts. Defaults to None. Other keyword arguments will be passed into [clickhouse-connect](https://clickhouse.com/docs/en/integrations/python#clickhouse-connect-driver-api) Returns MyScale Index
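A hedged sketch of from_texts with an explicit configuration; the connection values are placeholders, and the MyScaleSettings field names (host, port, username, password) are assumptions based on common usage rather than taken from the text above.

from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores.myscale import MyScale, MyScaleSettings

config = MyScaleSettings(  # placeholder credentials for a MyScale cluster
    host="your-myscale-host",
    port=8443,
    username="user",
    password="password",
)
vectorstore = MyScale.from_texts(
    texts=["myscale supports sql-style metadata filters"],
    embedding=OpenAIEmbeddings(),
    metadatas=[{"topic": "myscale"}],
    config=config,
    batch_size=32,  # rows transmitted to MyScale per insert
)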
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.myscale.MyScale.html
df60de688f7a-6
Returns MyScale Index max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶ Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Parameters query – Text to look up documents similar to. k – Number of Documents to return. Defaults to 4. fetch_k – Number of Documents to fetch to pass to MMR algorithm. lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. Returns List of Documents selected by maximal marginal relevance. max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶ Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Parameters embedding – Embedding to look up documents similar to. k – Number of Documents to return. Defaults to 4. fetch_k – Number of Documents to fetch to pass to MMR algorithm. lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. Returns List of Documents selected by maximal marginal relevance. search(query: str, search_type: str, **kwargs: Any) → List[Document]¶ Return docs most similar to query using specified search type.
similarity_search(query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any) → List[Document][source]¶ Perform a similarity search with MyScale. Parameters query (str) – query string k (int, optional) – Top K neighbors to retrieve. Defaults to 4. where_str (Optional[str], optional) – WHERE condition string. Defaults to None. NOTE – Never let end users fill in this value; always be aware of SQL injection. When filtering on metadata, remember to use {self.metadata_column}.attribute instead of attribute alone. The default name for the metadata column is metadata. Returns List of Documents Return type List[Document] similarity_search_by_vector(embedding: List[float], k: int = 4, where_str: Optional[str] = None, **kwargs: Any) → List[Document][source]¶ Perform a similarity search with MyScale by vector. Parameters embedding (List[float]) – query embedding vector k (int, optional) – Top K neighbors to retrieve. Defaults to 4. where_str (Optional[str], optional) – WHERE condition string. Defaults to None. NOTE – Never let end users fill in this value; always be aware of SQL injection. When filtering on metadata, remember to use {self.metadata_column}.attribute instead of attribute alone. The default name for the metadata column is metadata. Returns List of Documents Return type List[Document] similarity_search_with_relevance_scores(query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any) → List[Tuple[Document, float]][source]¶
Perform a similarity search with MyScale. Parameters query (str) – query string k (int, optional) – Top K neighbors to retrieve. Defaults to 4. where_str (Optional[str], optional) – WHERE condition string. Defaults to None. NOTE – Never let end users fill in this value; always be aware of SQL injection. When filtering on metadata, remember to use {self.metadata_column}.attribute instead of attribute alone. The default name for the metadata column is metadata. Returns List of documents most similar to the query text, each paired with its cosine distance as a float. A lower score represents higher similarity. Return type List[Tuple[Document, float]] similarity_search_with_score(*args: Any, **kwargs: Any) → List[Tuple[Document, float]]¶ Run similarity search with distance. Examples using MyScale¶ MyScale Self-querying with MyScale
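Because where_str is interpolated into the generated SQL, it should always be built by the application rather than taken from end users. An illustrative sketch of the metadata-filtered search described above (doc_id is an assumed metadata field from the ingestion sketch earlier, not part of the API):

# Filter on a metadata attribute; note the metadata column prefix.
docs = vectorstore.similarity_search(
    "how do I create an index?",
    k=4,
    where_str=f"{vectorstore.metadata_column}.doc_id = 'guide-001'",
)

# The relevance-score variant also returns the cosine distance per document.
scored = vectorstore.similarity_search_with_relevance_scores(
    "how do I create an index?", k=4
)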
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.myscale.MyScale.html
langchain.vectorstores.deeplake.DeepLake¶ class langchain.vectorstores.deeplake.DeepLake(dataset_path: str = './deeplake/', token: Optional[str] = None, embedding: Optional[Embeddings] = None, embedding_function: Optional[Embeddings] = None, read_only: bool = False, ingestion_batch_size: int = 1000, num_workers: int = 0, verbose: bool = True, exec_option: Optional[str] = None, **kwargs: Any)[source]¶ Wrapper around Deep Lake, a data lake for deep learning applications. Deep Lake's similarity search and filtering are integrated for fast prototyping; it also supports Tensor Query Language (TQL) for production use cases over billions of rows. Why Deep Lake? Stores not only the embeddings, but also the original data with version control. Serverless: it doesn't require another service and can be used with major cloud providers (S3, GCS, etc.). More than just a multi-modal vector store: the dataset can also be used to fine-tune your own LLM models. To use, you should have the deeplake python package installed. Example from langchain.vectorstores import DeepLake from langchain.embeddings.openai import OpenAIEmbeddings embeddings = OpenAIEmbeddings() vectorstore = DeepLake("langchain_store", embeddings.embed_query) Creates an empty DeepLakeVectorStore or loads an existing one. The DeepLakeVectorStore is located at the specified path. Examples >>> # Create a vector store with default tensors >>> deeplake_vectorstore = DeepLake( ... path = <path_for_storing_Data>, ... ) >>> >>> # Create a vector store in the Deep Lake Managed Tensor Database >>> data = DeepLake(
... path = "hub://org_id/dataset_name", ... exec_option = "tensor_db", ... ) Parameters dataset_path (str) – Path to existing dataset or where to create a new one. Defaults to _LANGCHAIN_DEFAULT_DEEPLAKE_PATH. token (str, optional) – Activeloop token, for fetching credentials to the dataset at path if it is a Deep Lake dataset. Tokens are normally autogenerated. Optional. embedding (Embeddings, optional) – Function to convert either documents or query. Optional. embedding_function (Embeddings, optional) – Function to convert either documents or query. Optional. Deprecated: keeping this parameter for backwards compatibility. read_only (bool) – Open dataset in read-only mode. Default is False. ingestion_batch_size (int) – During data ingestion, data is divided into batches. Batch size is the size of each batch. Default is 1000. num_workers (int) – Number of workers to use during data ingestion. Default is 0. verbose (bool) – Print dataset summary after each operation. Default is True. exec_option (str, optional) – DeepLakeVectorStore supports 3 ways to perform searching - “python”, “compute_engine”, “tensor_db” and auto. Default is None. - auto- Selects the best execution method based on the storage location of the Vector Store. It is the default option. python - Pure-python implementation that runs on the client.WARNING: using this with big datasets can lead to memory issues. Data can be stored anywhere. compute_engine - C++ implementation of the Deep Lake ComputeEngine that runs on the client. Can be used for any data stored in or connected to Deep Lake. Not for in-memory or local datasets.
or connected to Deep Lake. Not for in-memory or local datasets. tensor_db - Hosted Managed Tensor Database that isresponsible for storage and query execution. Only for data stored in the Deep Lake Managed Database. Use runtime = {“db_engine”: True} during dataset creation. **kwargs – Other optional keyword arguments. Raises ValueError – If some condition is not met. Attributes embeddings Access the query embedding object if available. Methods __init__([dataset_path, token, embedding, ...]) Creates an empty DeepLakeVectorStore or loads an existing one. aadd_documents(documents, **kwargs) Run more documents through the embeddings and add to the vectorstore. aadd_texts(texts[, metadatas]) Run more texts through the embeddings and add to the vectorstore. add_documents(documents, **kwargs) Run more documents through the embeddings and add to the vectorstore. add_texts(texts[, metadatas, ids]) Run more texts through the embeddings and add to the vectorstore. afrom_documents(documents, embedding, **kwargs) Return VectorStore initialized from documents and embeddings. afrom_texts(texts, embedding[, metadatas]) Return VectorStore initialized from texts and embeddings. amax_marginal_relevance_search(query[, k, ...]) Return docs selected using the maximal marginal relevance. amax_marginal_relevance_search_by_vector(...) Return docs selected using the maximal marginal relevance. as_retriever(**kwargs) Return VectorStoreRetriever initialized from this VectorStore. asearch(query, search_type, **kwargs) Return docs most similar to query using specified search type. asimilarity_search(query[, k]) Return docs most similar to query.
asimilarity_search(query[, k]) Return docs most similar to query. asimilarity_search_by_vector(embedding[, k]) Return docs most similar to embedding vector. asimilarity_search_with_relevance_scores(query) Return docs most similar to query. delete([ids]) Delete the entities in the dataset. delete_dataset() Delete the collection. ds() force_delete_by_path(path) Force delete dataset by path. from_documents(documents, embedding, **kwargs) Return VectorStore initialized from documents and embeddings. from_texts(texts[, embedding, metadatas, ...]) Create a Deep Lake dataset from a raw documents. max_marginal_relevance_search(query[, k, ...]) Return docs selected using maximal marginal relevance. max_marginal_relevance_search_by_vector(...) Return docs selected using the maximal marginal relevance. search(query, search_type, **kwargs) Return docs most similar to query using specified search type. similarity_search(query[, k]) Return docs most similar to query. similarity_search_by_vector(embedding[, k]) Return docs most similar to embedding vector. similarity_search_with_relevance_scores(query) Return docs and relevance scores in the range [0, 1]. similarity_search_with_score(query[, k]) Run similarity search with Deep Lake with distance returned. __init__(dataset_path: str = './deeplake/', token: Optional[str] = None, embedding: Optional[Embeddings] = None, embedding_function: Optional[Embeddings] = None, read_only: bool = False, ingestion_batch_size: int = 1000, num_workers: int = 0, verbose: bool = True, exec_option: Optional[str] = None, **kwargs: Any) → None[source]¶
Creates an empty DeepLakeVectorStore or loads an existing one. The DeepLakeVectorStore is located at the specified path. Examples >>> # Create a vector store with default tensors >>> deeplake_vectorstore = DeepLake( ... path = <path_for_storing_Data>, ... ) >>> >>> # Create a vector store in the Deep Lake Managed Tensor Database >>> data = DeepLake( ... path = "hub://org_id/dataset_name", ... exec_option = "tensor_db", ... ) Parameters dataset_path (str) – Path to existing dataset or where to create a new one. Defaults to _LANGCHAIN_DEFAULT_DEEPLAKE_PATH. token (str, optional) – Activeloop token, for fetching credentials to the dataset at path if it is a Deep Lake dataset. Tokens are normally autogenerated. Optional. embedding (Embeddings, optional) – Function to convert either documents or query. Optional. embedding_function (Embeddings, optional) – Function to convert either documents or query. Optional. Deprecated: keeping this parameter for backwards compatibility. read_only (bool) – Open dataset in read-only mode. Default is False. ingestion_batch_size (int) – During data ingestion, data is divided into batches. Batch size is the size of each batch. Default is 1000. num_workers (int) – Number of workers to use during data ingestion. Default is 0. verbose (bool) – Print dataset summary after each operation. Default is True. exec_option (str, optional) – DeepLakeVectorStore supports 3 ways to perform searching - “python”, “compute_engine”, “tensor_db” and auto. Default is None. - auto- Selects the best execution method based on the storage
Default is None. - auto- Selects the best execution method based on the storage location of the Vector Store. It is the default option. python - Pure-python implementation that runs on the client.WARNING: using this with big datasets can lead to memory issues. Data can be stored anywhere. compute_engine - C++ implementation of the Deep Lake ComputeEngine that runs on the client. Can be used for any data stored in or connected to Deep Lake. Not for in-memory or local datasets. tensor_db - Hosted Managed Tensor Database that isresponsible for storage and query execution. Only for data stored in the Deep Lake Managed Database. Use runtime = {“db_engine”: True} during dataset creation. **kwargs – Other optional keyword arguments. Raises ValueError – If some condition is not met. async aadd_documents(documents: List[Document], **kwargs: Any) → List[str]¶ Run more documents through the embeddings and add to the vectorstore. Parameters (List[Document] (documents) – Documents to add to the vectorstore. Returns List of IDs of the added texts. Return type List[str] async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str]¶ Run more texts through the embeddings and add to the vectorstore. add_documents(documents: List[Document], **kwargs: Any) → List[str]¶ Run more documents through the embeddings and add to the vectorstore. Parameters (List[Document] (documents) – Documents to add to the vectorstore. Returns List of IDs of the added texts. Return type List[str]
Returns List of IDs of the added texts. Return type List[str] add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) → List[str][source]¶ Run more texts through the embeddings and add to the vectorstore. Examples >>> ids = deeplake_vectorstore.add_texts( ... texts = <list_of_texts>, ... metadatas = <list_of_metadata_jsons>, ... ids = <list_of_ids>, ... ) Parameters texts (Iterable[str]) – Texts to add to the vectorstore. metadatas (Optional[List[dict]], optional) – Optional list of metadatas. ids (Optional[List[str]], optional) – Optional list of IDs. embedding_function (Optional[Embeddings], optional) – Embedding function to use to convert the text into embeddings. **kwargs (Any) – Any additional keyword arguments passed is not supported by this method. Returns List of IDs of the added texts. Return type List[str] async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶ Return VectorStore initialized from documents and embeddings. async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → VST¶ Return VectorStore initialized from texts and embeddings. async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶ Return docs selected using the maximal marginal relevance.
Return docs selected using the maximal marginal relevance. async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶ Return docs selected using the maximal marginal relevance. as_retriever(**kwargs: Any) → VectorStoreRetriever¶ Return VectorStoreRetriever initialized from this VectorStore. Parameters search_type (Optional[str]) – Defines the type of search that the Retriever should perform. Can be “similarity” (default), “mmr”, or “similarity_score_threshold”. search_kwargs (Optional[Dict]) – Keyword arguments to pass to the search function. Can include things like: k: Amount of documents to return (Default: 4) score_threshold: Minimum relevance threshold for similarity_score_threshold fetch_k: Amount of documents to pass to MMR algorithm (Default: 20) lambda_mult: Diversity of results returned by MMR; 1 for minimum diversity and 0 for maximum. (Default: 0.5) filter: Filter by document metadata Returns Retriever class for VectorStore. Return type VectorStoreRetriever Examples: # Retrieve more documents with higher diversity # Useful if your dataset has many similar documents docsearch.as_retriever( search_type="mmr", search_kwargs={'k': 6, 'lambda_mult': 0.25} ) # Fetch more documents for the MMR algorithm to consider # But only return the top 5 docsearch.as_retriever( search_type="mmr", search_kwargs={'k': 5, 'fetch_k': 50} )
search_kwargs={'k': 5, 'fetch_k': 50} ) # Only retrieve documents that have a relevance score # Above a certain threshold docsearch.as_retriever( search_type="similarity_score_threshold", search_kwargs={'score_threshold': 0.8} ) # Only get the single most similar document from the dataset docsearch.as_retriever(search_kwargs={'k': 1}) # Use a filter to only retrieve documents from a specific paper docsearch.as_retriever( search_kwargs={'filter': {'paper_title':'GPT-4 Technical Report'}} ) async asearch(query: str, search_type: str, **kwargs: Any) → List[Document]¶ Return docs most similar to query using specified search type. async asimilarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document]¶ Return docs most similar to query. async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶ Return docs most similar to embedding vector. async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶ Return docs most similar to query. delete(ids: Optional[List[str]] = None, **kwargs: Any) → bool[source]¶ Delete the entities in the dataset. Parameters ids (Optional[List[str]], optional) – The document_ids to delete. Defaults to None. **kwargs – Other keyword arguments that subclasses might use. - filter (Optional[Dict[str, str]], optional): The filter to delete by. - delete_all (Optional[bool], optional): Whether to drop the dataset. Returns
- delete_all (Optional[bool], optional): Whether to drop the dataset. Returns Whether the delete operation was successful. Return type bool delete_dataset() → None[source]¶ Delete the collection. ds() → Any[source]¶ classmethod force_delete_by_path(path: str) → None[source]¶ Force delete dataset by path. Parameters path (str) – path of the dataset to delete. Raises ValueError – if deeplake is not installed. classmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶ Return VectorStore initialized from documents and embeddings. classmethod from_texts(texts: List[str], embedding: Optional[Embeddings] = None, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, dataset_path: str = './deeplake/', **kwargs: Any) → DeepLake[source]¶ Create a Deep Lake dataset from a raw documents. If a dataset_path is specified, the dataset will be persisted in that location, otherwise by default at ./deeplake Examples: >>> # Search using an embedding >>> vector_store = DeepLake.from_texts( … texts = <the_texts_that_you_want_to_embed>, … embedding_function = <embedding_function_for_query>, … k = <number_of_items_to_return>, … exec_option = <preferred_exec_option>, … ) Parameters dataset_path (str) – The full path to the dataset. Can be: Deep Lake cloud path of the form hub://username/dataset_name.To write to Deep Lake cloud datasets, ensure that you are logged in to Deep Lake (use ‘activeloop login’ from command line)
(use ‘activeloop login’ from command line) AWS S3 path of the form s3://bucketname/path/to/dataset.Credentials are required in either the environment Google Cloud Storage path of the formgcs://bucketname/path/to/dataset Credentials are required in either the environment Local file system path of the form ./path/to/dataset or~/path/to/dataset or path/to/dataset. In-memory path of the form mem://path/to/dataset which doesn’tsave the dataset, but keeps it in memory instead. Should be used only for testing as it does not persist. texts (List[Document]) – List of documents to add. embedding (Optional[Embeddings]) – Embedding function. Defaults to None. Note, in other places, it is called embedding_function. metadatas (Optional[List[dict]]) – List of metadatas. Defaults to None. ids (Optional[List[str]]) – List of document IDs. Defaults to None. **kwargs – Additional keyword arguments. Returns Deep Lake dataset. Return type DeepLake max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, exec_option: Optional[str] = None, **kwargs: Any) → List[Document][source]¶ Return docs selected using maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Examples: >>> # Search using an embedding >>> data = vector_store.max_marginal_relevance_search( … query = <query_to_search>, … embedding_function = <embedding_function_for_query>, … k = <number_of_items_to_return>, … exec_option = <preferred_exec_option>, … ) Parameters
… exec_option = <preferred_exec_option>, … ) Parameters query – Text to look up documents similar to. k – Number of Documents to return. Defaults to 4. fetch_k – Number of Documents for MMR algorithm. lambda_mult – Value between 0 and 1. 0 corresponds to maximum diversity and 1 to minimum. Defaults to 0.5. exec_option (str) – Supports 3 ways to perform searching. - “python” - Pure-python implementation running on the client. Can be used for data stored anywhere. WARNING: using this option with big datasets is discouraged due to potential memory issues. ”compute_engine” - Performant C++ implementation of the DeepLake Compute Engine. Runs on the client and can be used for any data stored in or connected to Deep Lake. It cannot be used with in-memory or local datasets. ”tensor_db” - Performant, fully-hosted Managed Tensor Database.Responsible for storage and query execution. Only available for data stored in the Deep Lake Managed Database. To store datasets in this database, specify runtime = {“db_engine”: True} during dataset creation. **kwargs – Additional keyword arguments Returns List of Documents selected by maximal marginal relevance. Raises ValueError – when MRR search is on but embedding function is not specified. max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, exec_option: Optional[str] = None, **kwargs: Any) → List[Document][source]¶ Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected docs. Examples:
relevance optimizes for similarity to query AND diversity among selected docs. Examples: >>> data = vector_store.max_marginal_relevance_search_by_vector( … embedding=<your_embedding>, … fetch_k=<elements_to_fetch_before_mmr_search>, … k=<number_of_items_to_return>, … exec_option=<preferred_exec_option>, … ) Parameters embedding – Embedding to look up documents similar to. k – Number of Documents to return. Defaults to 4. fetch_k – Number of Documents to fetch for MMR algorithm. lambda_mult – Number between 0 and 1 determining the degree of diversity. 0 corresponds to max diversity and 1 to min diversity. Defaults to 0.5. exec_option (str) – DeepLakeVectorStore supports 3 ways for searching. Could be “python”, “compute_engine” or “tensor_db”. Defaults to “python”. - “python” - Pure-python implementation running on the client. Can be used for data stored anywhere. WARNING: using this option with big datasets is discouraged due to potential memory issues. ”compute_engine” - Performant C++ implementation of the DeepLake Compute Engine. Runs on the client and can be used for any data stored in or connected to Deep Lake. It cannot be used with in-memory or local datasets. ”tensor_db” - Performant, fully-hosted Managed Tensor Database.Responsible for storage and query execution. Only available for data stored in the Deep Lake Managed Database. To store datasets in this database, specify runtime = {“db_engine”: True} during dataset creation. **kwargs – Additional keyword arguments. Returns List[Documents] - A list of documents. search(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type. similarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document][source]¶ Return docs most similar to query. Examples >>> # Search using an embedding >>> data = vector_store.similarity_search( ... query=<your_query>, ... k=<num_items>, ... exec_option=<preferred_exec_option>, ... ) >>> # Run tql search: >>> data = vector_store.similarity_search( ... query=None, ... tql="SELECT * WHERE id == <id>", ... exec_option="compute_engine", ... ) Parameters k (int) – Number of Documents to return. Defaults to 4. query (str) – Text to look up similar documents. **kwargs – Additional keyword arguments include: embedding (Callable): Embedding function to use. Defaults to None. distance_metric (str): ‘L2’ for Euclidean, ‘L1’ for Nuclear, ‘max’ for L-infinity, ‘cos’ for cosine, ‘dot’ for dot product. Defaults to ‘L2’. filter (Union[Dict, Callable], optional): Additional filterbefore embedding search. - Dict: Key-value search on tensors of htype json, (sample must satisfy all key-value filters) Dict = {“tensor_1”: {“key”: value}, “tensor_2”: {“key”: value}} Function: Compatible with deeplake.filter. Defaults to None. exec_option (str): Supports 3 ways to perform searching.’python’, ‘compute_engine’, or ‘tensor_db’. Defaults to ‘python’. - ‘python’: Pure-python implementation for the client. WARNING: not recommended for big datasets.
WARNING: not recommended for big datasets. ’compute_engine’: C++ implementation of the Compute Engine forthe client. Not for in-memory or local datasets. ’tensor_db’: Managed Tensor Database for storage and query.Only for data in Deep Lake Managed Database. Use runtime = {“db_engine”: True} during dataset creation. Returns List of Documents most similar to the query vector. Return type List[Document] similarity_search_by_vector(embedding: Union[List[float], ndarray], k: int = 4, **kwargs: Any) → List[Document][source]¶ Return docs most similar to embedding vector. Examples >>> # Search using an embedding >>> data = vector_store.similarity_search_by_vector( ... embedding=<your_embedding>, ... k=<num_items_to_return>, ... exec_option=<preferred_exec_option>, ... ) Parameters embedding (Union[List[float], np.ndarray]) – Embedding to find similar docs. k (int) – Number of Documents to return. Defaults to 4. **kwargs – Additional keyword arguments including: filter (Union[Dict, Callable], optional): Additional filter before embedding search. - Dict - Key-value search on tensors of htype json. True if all key-value filters are satisfied. Dict = {“tensor_name_1”: {“key”: value}, ”tensor_name_2”: {“key”: value}} Function - Any function compatible withdeeplake.filter. Defaults to None. exec_option (str): Options for search execution include”python”, “compute_engine”, or “tensor_db”. Defaults to “python”. - “python” - Pure-python implementation running on the client. Can be used for data stored anywhere. WARNING: using this option with big datasets is discouraged due to potential memory issues.
option with big datasets is discouraged due to potential memory issues. ”compute_engine” - Performant C++ implementation of the DeepLake Compute Engine. Runs on the client and can be used for any data stored in or connected to Deep Lake. It cannot be used with in-memory or local datasets. ”tensor_db” - Performant, fully-hosted Managed Tensor Database.Responsible for storage and query execution. Only available for data stored in the Deep Lake Managed Database. To store datasets in this database, specify runtime = {“db_engine”: True} during dataset creation. distance_metric (str): L2 for Euclidean, L1 for Nuclear,max for L-infinity distance, cos for cosine similarity, ‘dot’ for dot product. Defaults to L2. Returns List of Documents most similar to the query vector. Return type List[Document] similarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶ Return docs and relevance scores in the range [0, 1]. 0 is dissimilar, 1 is most similar. Parameters query – input text k – Number of Documents to return. Defaults to 4. **kwargs – kwargs to be passed to similarity search. Should include: score_threshold: Optional, a floating point value between 0 to 1 to filter the resulting set of retrieved docs Returns List of Tuples of (doc, similarity_score) similarity_search_with_score(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]][source]¶ Run similarity search with Deep Lake with distance returned. Examples: >>> data = vector_store.similarity_search_with_score( … query=<your_query>, … embedding=<your_embedding_function>
… query=<your_query>, … embedding=<your_embedding_function> … k=<number_of_items_to_return>, … exec_option=<preferred_exec_option>, … ) Parameters query (str) – Query text to search for. k (int) – Number of results to return. Defaults to 4. **kwargs – Additional keyword arguments. Some of these arguments are: distance_metric: L2 for Euclidean, L1 for Nuclear, max L-infinity distance, cos for cosine similarity, ‘dot’ for dot product. Defaults to L2. filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.embedding_function (Callable): Embedding function to use. Defaults to None. exec_option (str): DeepLakeVectorStore supports 3 ways to performsearching. It could be either “python”, “compute_engine” or “tensor_db”. Defaults to “python”. - “python” - Pure-python implementation running on the client. Can be used for data stored anywhere. WARNING: using this option with big datasets is discouraged due to potential memory issues. ”compute_engine” - Performant C++ implementation of the DeepLake Compute Engine. Runs on the client and can be used for any data stored in or connected to Deep Lake. It cannot be used with in-memory or local datasets. ”tensor_db” - Performant, fully-hosted Managed Tensor Database.Responsible for storage and query execution. Only available for data stored in the Deep Lake Managed Database. To store datasets in this database, specify runtime = {“db_engine”: True} during dataset creation. Returns List of documents most similar to the querytext with distance in float. Return type List[Tuple[Document, float]] Examples using DeepLake¶ Deep Lake Activeloop’s Deep Lake
Examples using DeepLake¶ Deep Lake Activeloop’s Deep Lake Question answering over a group chat messages using Activeloop’s DeepLake QA using Activeloop’s DeepLake Analysis of Twitter the-algorithm source code with LangChain, GPT4 and Activeloop’s Deep Lake Use LangChain, GPT and Activeloop’s Deep Lake to work with code base DeepLake self-querying
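Putting the pieces above together, an illustrative end-to-end sketch (the dataset path, texts, and metadata values are placeholders; the filter dict follows the key-value format documented for similarity_search):

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import DeepLake

# Placeholder local dataset path; a hub:// path plus exec_option="tensor_db"
# would target the Managed Tensor Database instead.
db = DeepLake.from_texts(
    texts=["Deep Lake stores data and embeddings together.", "It supports TQL queries."],
    embedding=OpenAIEmbeddings(),
    metadatas=[{"topic": "storage"}, {"topic": "query"}],
    dataset_path="./deeplake_example/",
)

# Key-value filter on the metadata tensor, as described under similarity_search.
docs = db.similarity_search(
    "where is the data stored?",
    k=2,
    filter={"metadata": {"topic": "storage"}},
)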
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.deeplake.DeepLake.html
langchain.vectorstores.lancedb.LanceDB¶ class langchain.vectorstores.lancedb.LanceDB(connection: Any, embedding: Embeddings, vector_key: Optional[str] = 'vector', id_key: Optional[str] = 'id', text_key: Optional[str] = 'text')[source]¶ Wrapper around LanceDB vector database. To use, you should have lancedb python package installed. Example db = lancedb.connect('./lancedb') table = db.open_table('my_table') vectorstore = LanceDB(table, embedding_function) vectorstore.add_texts(['text1', 'text2']) result = vectorstore.similarity_search('text1') Initialize with Lance DB connection Attributes embeddings Access the query embedding object if available. Methods __init__(connection, embedding[, ...]) Initialize with Lance DB connection aadd_documents(documents, **kwargs) Run more documents through the embeddings and add to the vectorstore. aadd_texts(texts[, metadatas]) Run more texts through the embeddings and add to the vectorstore. add_documents(documents, **kwargs) Run more documents through the embeddings and add to the vectorstore. add_texts(texts[, metadatas, ids]) Turn texts into embedding and add it to the database afrom_documents(documents, embedding, **kwargs) Return VectorStore initialized from documents and embeddings. afrom_texts(texts, embedding[, metadatas]) Return VectorStore initialized from texts and embeddings. amax_marginal_relevance_search(query[, k, ...]) Return docs selected using the maximal marginal relevance. amax_marginal_relevance_search_by_vector(...) Return docs selected using the maximal marginal relevance. as_retriever(**kwargs)
Return docs selected using the maximal marginal relevance. as_retriever(**kwargs) Return VectorStoreRetriever initialized from this VectorStore. asearch(query, search_type, **kwargs) Return docs most similar to query using specified search type. asimilarity_search(query[, k]) Return docs most similar to query. asimilarity_search_by_vector(embedding[, k]) Return docs most similar to embedding vector. asimilarity_search_with_relevance_scores(query) Return docs most similar to query. delete([ids]) Delete by vector ID or other criteria. from_documents(documents, embedding, **kwargs) Return VectorStore initialized from documents and embeddings. from_texts(texts, embedding[, metadatas, ...]) Return VectorStore initialized from texts and embeddings. max_marginal_relevance_search(query[, k, ...]) Return docs selected using the maximal marginal relevance. max_marginal_relevance_search_by_vector(...) Return docs selected using the maximal marginal relevance. search(query, search_type, **kwargs) Return docs most similar to query using specified search type. similarity_search(query[, k]) Return documents most similar to the query similarity_search_by_vector(embedding[, k]) Return docs most similar to embedding vector. similarity_search_with_relevance_scores(query) Return docs and relevance scores in the range [0, 1]. similarity_search_with_score(*args, **kwargs) Run similarity search with distance. __init__(connection: Any, embedding: Embeddings, vector_key: Optional[str] = 'vector', id_key: Optional[str] = 'id', text_key: Optional[str] = 'text')[source]¶ Initialize with Lance DB connection
Initialize with Lance DB connection async aadd_documents(documents: List[Document], **kwargs: Any) → List[str]¶ Run more documents through the embeddings and add to the vectorstore. Parameters (List[Document] (documents) – Documents to add to the vectorstore. Returns List of IDs of the added texts. Return type List[str] async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str]¶ Run more texts through the embeddings and add to the vectorstore. add_documents(documents: List[Document], **kwargs: Any) → List[str]¶ Run more documents through the embeddings and add to the vectorstore. Parameters (List[Document] (documents) – Documents to add to the vectorstore. Returns List of IDs of the added texts. Return type List[str] add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) → List[str][source]¶ Turn texts into embedding and add it to the database Parameters texts – Iterable of strings to add to the vectorstore. metadatas – Optional list of metadatas associated with the texts. ids – Optional list of ids to associate with the texts. Returns List of ids of the added texts. async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶ Return VectorStore initialized from documents and embeddings. async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → VST¶ Return VectorStore initialized from texts and embeddings.
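Illustrative add_texts sketch (the database path, table name, texts, and ids are placeholders; it assumes a LanceDB table with the default vector, id, and text columns already exists):

import lancedb
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import LanceDB

db = lancedb.connect("./lancedb")       # placeholder local database path
table = db.open_table("my_table")       # assumes this table already exists
store = LanceDB(connection=table, embedding=OpenAIEmbeddings())

ids = store.add_texts(
    ["LanceDB is an embedded vector database.", "Vectors are stored alongside the data."],
    ids=["doc-1", "doc-2"],             # optional explicit ids, as documented above
)
docs = store.similarity_search("embedded vector database", k=2)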
Return VectorStore initialized from texts and embeddings. async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶ Return docs selected using the maximal marginal relevance. async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶ Return docs selected using the maximal marginal relevance. as_retriever(**kwargs: Any) → VectorStoreRetriever¶ Return VectorStoreRetriever initialized from this VectorStore. Parameters search_type (Optional[str]) – Defines the type of search that the Retriever should perform. Can be “similarity” (default), “mmr”, or “similarity_score_threshold”. search_kwargs (Optional[Dict]) – Keyword arguments to pass to the search function. Can include things like: k: Amount of documents to return (Default: 4) score_threshold: Minimum relevance threshold for similarity_score_threshold fetch_k: Amount of documents to pass to MMR algorithm (Default: 20) lambda_mult: Diversity of results returned by MMR; 1 for minimum diversity and 0 for maximum. (Default: 0.5) filter: Filter by document metadata Returns Retriever class for VectorStore. Return type VectorStoreRetriever Examples: # Retrieve more documents with higher diversity # Useful if your dataset has many similar documents docsearch.as_retriever( search_type="mmr", search_kwargs={'k': 6, 'lambda_mult': 0.25} )
) # Fetch more documents for the MMR algorithm to consider # But only return the top 5 docsearch.as_retriever( search_type="mmr", search_kwargs={'k': 5, 'fetch_k': 50} ) # Only retrieve documents that have a relevance score # Above a certain threshold docsearch.as_retriever( search_type="similarity_score_threshold", search_kwargs={'score_threshold': 0.8} ) # Only get the single most similar document from the dataset docsearch.as_retriever(search_kwargs={'k': 1}) # Use a filter to only retrieve documents from a specific paper docsearch.as_retriever( search_kwargs={'filter': {'paper_title':'GPT-4 Technical Report'}} ) async asearch(query: str, search_type: str, **kwargs: Any) → List[Document]¶ Return docs most similar to query using specified search type. async asimilarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document]¶ Return docs most similar to query. async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶ Return docs most similar to embedding vector. async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶ Return docs most similar to query. delete(ids: Optional[List[str]] = None, **kwargs: Any) → Optional[bool]¶ Delete by vector ID or other criteria. Parameters ids – List of ids to delete. **kwargs – Other keyword arguments that subclasses might use. Returns True if deletion is successful,
Returns True if deletion is successful, False otherwise, None if not implemented. Return type Optional[bool] classmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶ Return VectorStore initialized from documents and embeddings. classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, connection: Any = None, vector_key: Optional[str] = 'vector', id_key: Optional[str] = 'id', text_key: Optional[str] = 'text', **kwargs: Any) → LanceDB[source]¶ Return VectorStore initialized from texts and embeddings. max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶ Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Parameters query – Text to look up documents similar to. k – Number of Documents to return. Defaults to 4. fetch_k – Number of Documents to fetch to pass to MMR algorithm. lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. Returns List of Documents selected by maximal marginal relevance. max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶ Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.lancedb.LanceDB.html