param region_name: Optional[str] = None¶
The AWS region, e.g., us-west-2. Falls back to the AWS_DEFAULT_REGION environment variable
or the region specified in ~/.aws/config if it is not provided here.
embed_documents(texts: List[str], chunk_size: int = 1) → List[List[float]][source]¶
Compute doc embeddings using a Bedrock model.
Parameters
texts – The list of texts to embed.
chunk_size – Bedrock currently only allows single string
inputs, so chunk size is always 1. This input is here
only for compatibility with the embeddings interface.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Compute query embeddings using a Bedrock model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
validator validate_environment » all fields[source]¶
Validate that AWS credentials and the required Python package exist in the environment.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶
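Because Bedrock accepts only single-string inputs, embed_documents effectively issues one request per text. A minimal sketch of that behavior, where fake_embed is a hypothetical stand-in for the per-text Bedrock call (not the real client):

```python
from typing import List

def fake_embed(text: str) -> List[float]:
    # Hypothetical stand-in for the single-text Bedrock invocation.
    return [float(len(text))]

def embed_documents(texts: List[str], chunk_size: int = 1) -> List[List[float]]:
    # chunk_size is fixed at 1 for Bedrock and kept only for
    # compatibility with the embeddings interface.
    return [fake_embed(t) for t in texts]
```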
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.bedrock.BedrockEmbeddings.html
langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceInstructEmbeddings¶
class langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceInstructEmbeddings(*, cache: ~typing.Optional[bool] = None, verbose: bool = None, callbacks: ~typing.Optional[~typing.Union[~typing.List[~langchain.callbacks.base.BaseCallbackHandler], ~langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: ~typing.Optional[~langchain.callbacks.base.BaseCallbackManager] = None, tags: ~typing.Optional[~typing.List[str]] = None, pipeline_ref: ~typing.Any = None, client: ~typing.Any = None, inference_fn: ~typing.Callable = <function _embed_documents>, hardware: ~typing.Any = None, model_load_fn: ~typing.Callable = <function load_embedding_model>, load_fn_kwargs: ~typing.Optional[dict] = None, model_reqs: ~typing.List[str] = ['./', 'InstructorEmbedding', 'torch'], inference_kwargs: ~typing.Any = None, model_id: str = 'hkunlp/instructor-large', embed_instruction: str = 'Represent the document for retrieval: ', query_instruction: str = 'Represent the question for retrieving supporting documents: ')[source]¶
Bases: SelfHostedHuggingFaceEmbeddings
Runs InstructorEmbedding embedding models on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another
cloud like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example
from langchain.embeddings import SelfHostedHuggingFaceInstructEmbeddings
import runhouse as rh
model_name = "hkunlp/instructor-large"
gpu = rh.cluster(name='rh-a10x', instance_type='A100:1')
hf = SelfHostedHuggingFaceInstructEmbeddings(
model_name=model_name, hardware=gpu)
Initialize the remote inference function.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param embed_instruction: str = 'Represent the document for retrieval: '¶
Instruction to use for embedding documents.
param hardware: Any = None¶
Remote hardware to send the inference function to.
param inference_fn: Callable = <function _embed_documents>¶
Inference function to extract the embeddings.
param inference_kwargs: Any = None¶
Any kwargs to pass to the model’s inference function.
param load_fn_kwargs: Optional[dict] = None¶
Key word arguments to pass to the model load function.
param model_id: str = 'hkunlp/instructor-large'¶
Model name to use.
param model_load_fn: Callable = <function load_embedding_model>¶
Function to load the model remotely on the server.
param model_reqs: List[str] = ['./', 'InstructorEmbedding', 'torch']¶
Requirements to install on hardware to inference the model.
param pipeline_ref: Any = None¶
param query_instruction: str = 'Represent the question for retrieving supporting documents: '¶
Instruction to use for embedding query.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param verbose: bool [Optional]¶
Whether to print out response text.
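The two instructions above are paired with each input before encoding. A rough sketch of that pairing (illustrative only; the real model encodes the instruction-text pair jointly on the remote hardware):

```python
from typing import List

EMBED_INSTRUCTION = "Represent the document for retrieval: "
QUERY_INSTRUCTION = "Represent the question for retrieving supporting documents: "

def build_document_inputs(texts: List[str]) -> List[List[str]]:
    # Each document is paired with the document instruction.
    return [[EMBED_INSTRUCTION, t] for t in texts]

def build_query_input(text: str) -> List[str]:
    # Queries get the query instruction instead.
    return [QUERY_INSTRUCTION, text]
```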
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Check cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
embed_documents(texts: List[str]) → List[List[float]][source]¶
Compute doc embeddings using a HuggingFace instruct model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Compute query embeddings using a HuggingFace instruct model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
classmethod from_pipeline(pipeline: Any, hardware: Any, model_reqs: Optional[List[str]] = None, device: int = 0, **kwargs: Any) → LLM¶
Init the SelfHostedPipeline from a pipeline object or string.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]¶
Get the token IDs present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceInstructEmbeddings.html
langchain.embeddings.sagemaker_endpoint.SagemakerEndpointEmbeddings¶
class langchain.embeddings.sagemaker_endpoint.SagemakerEndpointEmbeddings(*, client: Any = None, endpoint_name: str = '', region_name: str = '', credentials_profile_name: Optional[str] = None, content_handler: EmbeddingsContentHandler, model_kwargs: Optional[Dict] = None, endpoint_kwargs: Optional[Dict] = None)[source]¶
Bases: BaseModel, Embeddings
Wrapper around custom Sagemaker Inference Endpoints.
To use, you must supply the endpoint name from your deployed
Sagemaker model & the region where it is deployed.
To authenticate, the AWS client uses the following methods to
automatically load credentials:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass
the name of the profile from the ~/.aws/credentials file that is to be used.
Make sure the credentials / roles used have the required policies to
access the Sagemaker endpoint.
See: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param content_handler: langchain.embeddings.sagemaker_endpoint.EmbeddingsContentHandler [Required]¶
The content handler class that provides an input and
output transform functions to handle formats between LLM
and the endpoint.
param credentials_profile_name: Optional[str] = None¶
The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which
has either access keys or role information specified.
If not specified, the default credential profile or, if on an EC2 instance,
credentials from IMDS will be used.
See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
param endpoint_kwargs: Optional[Dict] = None¶
Optional attributes passed to the invoke_endpoint
function. See the `boto3`_ docs for more info.
.. _boto3: https://boto3.amazonaws.com/v1/documentation/api/latest/index.html
param endpoint_name: str = ''¶
The name of the endpoint from the deployed Sagemaker model.
Must be unique within an AWS Region.
param model_kwargs: Optional[Dict] = None¶
Key word arguments to pass to the model.
param region_name: str = ''¶
The aws region where the Sagemaker model is deployed, eg. us-west-2.
embed_documents(texts: List[str], chunk_size: int = 64) → List[List[float]][source]¶
Compute doc embeddings using a SageMaker Inference Endpoint.
Parameters
texts – The list of texts to embed.
chunk_size – The chunk size defines how many input texts will
be grouped together as a request. If None, the chunk size
specified by the class will be used.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Compute query embeddings using a SageMaker inference endpoint.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
validator validate_environment » all fields[source]¶
Validate that AWS credentials and the required Python package exist in the environment.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
extra = 'forbid'¶
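The required content_handler converts between Python objects and the endpoint's wire format. A minimal sketch of the two transforms (the JSON field names here are assumptions; match them to your deployed model's actual contract):

```python
import json
from typing import Any, Dict, List

class SketchContentHandler:
    # MIME types advertised to the endpoint.
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, texts: List[str], model_kwargs: Dict[str, Any]) -> bytes:
        # Serialize the texts (plus any model kwargs) for invoke_endpoint.
        return json.dumps({"inputs": texts, **model_kwargs}).encode("utf-8")

    def transform_output(self, output: bytes) -> List[List[float]]:
        # Parse the embeddings out of the endpoint's JSON response.
        return json.loads(output.decode("utf-8"))["embeddings"]
```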
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.sagemaker_endpoint.SagemakerEndpointEmbeddings.html
langchain.embeddings.self_hosted_hugging_face.load_embedding_model¶
langchain.embeddings.self_hosted_hugging_face.load_embedding_model(model_id: str, instruct: bool = False, device: int = 0) → Any[source]¶
Load the embedding model.
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.load_embedding_model.html
langchain.embeddings.mosaicml.MosaicMLInstructorEmbeddings¶
class langchain.embeddings.mosaicml.MosaicMLInstructorEmbeddings(*, endpoint_url: str = 'https://models.hosted-on.mosaicml.hosting/instructor-xl/v1/predict', embed_instruction: str = 'Represent the document for retrieval: ', query_instruction: str = 'Represent the question for retrieving supporting documents: ', retry_sleep: float = 1.0, mosaicml_api_token: Optional[str] = None)[source]¶
Bases: BaseModel, Embeddings
Wrapper around MosaicML’s embedding inference service.
To use, you should have the
environment variable MOSAICML_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Example
from langchain.llms import MosaicMLInstructorEmbeddings
endpoint_url = (
"https://models.hosted-on.mosaicml.hosting/instructor-large/v1/predict"
)
mosaic_llm = MosaicMLInstructorEmbeddings(
endpoint_url=endpoint_url,
mosaicml_api_token="my-api-key"
)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param embed_instruction: str = 'Represent the document for retrieval: '¶
Instruction used to embed documents.
param endpoint_url: str = 'https://models.hosted-on.mosaicml.hosting/instructor-xl/v1/predict'¶
Endpoint URL to use.
param mosaicml_api_token: Optional[str] = None¶
param query_instruction: str = 'Represent the question for retrieving supporting documents: '¶
Instruction used to embed the query.
param retry_sleep: float = 1.0¶
How long to sleep if a rate limit is encountered.
embed_documents(texts: List[str]) → List[List[float]][source]¶
Embed documents using a MosaicML deployed instructor embedding model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Embed a query using a MosaicML deployed instructor embedding model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
validator validate_environment » all fields[source]¶
Validate that api key and python package exists in environment.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶
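retry_sleep governs how long the client pauses before retrying after a rate limit. One possible shape of that behavior (a simplified sketch, not the library's actual retry logic; request_fn is a hypothetical callable returning a status code and payload):

```python
import time

def call_with_retry(request_fn, retry_sleep: float = 1.0, max_retries: int = 1):
    # Retry after each rate-limit response (HTTP 429), sleeping in between.
    for _ in range(max_retries):
        status, payload = request_fn()
        if status != 429:
            return payload
        time.sleep(retry_sleep)
    # Final attempt after exhausting the retries.
    status, payload = request_fn()
    return payload
```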
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.mosaicml.MosaicMLInstructorEmbeddings.html
langchain.embeddings.llamacpp.LlamaCppEmbeddings¶
class langchain.embeddings.llamacpp.LlamaCppEmbeddings(*, client: Any = None, model_path: str, n_ctx: int = 512, n_parts: int = -1, seed: int = -1, f16_kv: bool = False, logits_all: bool = False, vocab_only: bool = False, use_mlock: bool = False, n_threads: Optional[int] = None, n_batch: Optional[int] = 8, n_gpu_layers: Optional[int] = None)[source]¶
Bases: BaseModel, Embeddings
Wrapper around llama.cpp embedding models.
To use, you should have the llama-cpp-python library installed, and provide the
path to the Llama model as a named parameter to the constructor.
Check out: https://github.com/abetlen/llama-cpp-python
Example
from langchain.embeddings import LlamaCppEmbeddings
llama = LlamaCppEmbeddings(model_path="/path/to/model.bin")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param f16_kv: bool = False¶
Use half-precision for key/value cache.
param logits_all: bool = False¶
Return logits for all tokens, not just the last token.
param model_path: str [Required]¶
param n_batch: Optional[int] = 8¶
Number of tokens to process in parallel.
Should be a number between 1 and n_ctx.
param n_ctx: int = 512¶
Token context window.
param n_gpu_layers: Optional[int] = None¶
Number of layers to be loaded into gpu memory. Default None.
param n_parts: int = -1¶
Number of parts to split the model into.
If -1, the number of parts is automatically determined.
param n_threads: Optional[int] = None¶
Number of threads to use. If None, the number
of threads is automatically determined.
param seed: int = -1¶
Seed. If -1, a random seed is used.
param use_mlock: bool = False¶
Force system to keep model in RAM.
param vocab_only: bool = False¶
Only load the vocabulary, no weights.
embed_documents(texts: List[str]) → List[List[float]][source]¶
Embed a list of documents using the Llama model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Embed a query using the Llama model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
validator validate_environment » all fields[source]¶
Validate that llama-cpp-python library is installed.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶
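The documented constraint that n_batch lie between 1 and n_ctx can be checked up front. A small sketch of that check (illustrative only; not the library's own validation code):

```python
def validate_n_batch(n_batch: int, n_ctx: int = 512) -> int:
    # n_batch must be between 1 and the token context window size.
    if not 1 <= n_batch <= n_ctx:
        raise ValueError(f"n_batch must be in [1, {n_ctx}], got {n_batch}")
    return n_batch
```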
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.llamacpp.LlamaCppEmbeddings.html
langchain.embeddings.elasticsearch.ElasticsearchEmbeddings¶
class langchain.embeddings.elasticsearch.ElasticsearchEmbeddings(client: MlClient, model_id: str, *, input_field: str = 'text_field')[source]¶
Bases: Embeddings
Wrapper around Elasticsearch embedding models.
This class provides an interface to generate embeddings using a model deployed
in an Elasticsearch cluster. It requires an Elasticsearch connection object
and the model_id of the model deployed in the cluster.
In Elasticsearch you need to have an embedding model loaded and deployed.
- https://www.elastic.co/guide/en/elasticsearch/reference/current/infer-trained-model.html
- https://www.elastic.co/guide/en/machine-learning/current/ml-nlp-deploy-models.html
Initialize the ElasticsearchEmbeddings instance.
Parameters
client (MlClient) – An Elasticsearch ML client object.
model_id (str) – The model_id of the model deployed in the Elasticsearch
cluster.
input_field (str) – The name of the key for the input text field in the
document. Defaults to ‘text_field’.
Methods
__init__(client, model_id, *[, input_field])
Initialize the ElasticsearchEmbeddings instance.
aembed_documents(texts)
Embed search docs.
aembed_query(text)
Embed query text.
embed_documents(texts)
Generate embeddings for a list of documents.
embed_query(text)
Generate an embedding for a single query text.
from_credentials(model_id, *[, es_cloud_id, ...])
Instantiate embeddings from Elasticsearch credentials.
from_es_connection(model_id, es_connection)
Instantiate embeddings from an existing Elasticsearch connection.
async aembed_documents(texts: List[str]) → List[List[float]]¶
Embed search docs.
async aembed_query(text: str) → List[float]¶
Embed query text.
embed_documents(texts: List[str]) → List[List[float]][source]¶
Generate embeddings for a list of documents.
Parameters
texts (List[str]) – A list of document text strings to generate embeddings
for.
Returns
A list of embeddings, one for each document in the input list.
Return type
List[List[float]]
embed_query(text: str) → List[float][source]¶
Generate an embedding for a single query text.
Parameters
text (str) – The query text to generate an embedding for.
Returns
The embedding for the input query text.
Return type
List[float]
classmethod from_credentials(model_id: str, *, es_cloud_id: Optional[str] = None, es_user: Optional[str] = None, es_password: Optional[str] = None, input_field: str = 'text_field') → ElasticsearchEmbeddings[source]¶
Instantiate embeddings from Elasticsearch credentials.
Parameters
model_id (str) – The model_id of the model deployed in the Elasticsearch
cluster.
input_field (str) – The name of the key for the input text field in the
document. Defaults to ‘text_field’.
es_cloud_id (str, optional) – The Elasticsearch cloud ID to connect to.
es_user (str, optional) – Elasticsearch username.
es_password (str, optional) – Elasticsearch password.
Example
from langchain.embeddings import ElasticsearchEmbeddings
# Define the model ID and input field name (if different from default)
model_id = "your_model_id"
# Optional, only if different from 'text_field'
input_field = "your_input_field"
# Credentials can be passed in two ways. Either set the env vars
# ES_CLOUD_ID, ES_USER, ES_PASSWORD and they will be automatically
# pulled in, or pass them in directly as kwargs.
embeddings = ElasticsearchEmbeddings.from_credentials(
model_id,
input_field=input_field,
# es_cloud_id="foo",
# es_user="bar",
# es_password="baz",
)
documents = [
"This is an example document.",
"Another example document to generate embeddings for.",
]
embeddings.embed_documents(documents)
classmethod from_es_connection(model_id: str, es_connection: Elasticsearch, input_field: str = 'text_field') → ElasticsearchEmbeddings[source]¶
Instantiate embeddings from an existing Elasticsearch connection.
This method provides a way to create an instance of the ElasticsearchEmbeddings
class using an existing Elasticsearch connection. The connection object is used
to create an MlClient, which is then used to initialize the
ElasticsearchEmbeddings instance.
Args:
model_id (str): The model_id of the model deployed in the Elasticsearch cluster.
es_connection (elasticsearch.Elasticsearch): An existing Elasticsearch connection object.
input_field (str, optional): The name of the key for the input text field in the document. Defaults to 'text_field'.
Returns:
ElasticsearchEmbeddings: An instance of the ElasticsearchEmbeddings class.
Example
from elasticsearch import Elasticsearch
from langchain.embeddings import ElasticsearchEmbeddings
# Define the model ID and input field name (if different from default)
model_id = "your_model_id"
# Optional, only if different from 'text_field'
input_field = "your_input_field"
# Create Elasticsearch connection
es_connection = Elasticsearch(
hosts=["localhost:9200"], http_auth=("user", "password")
)
# Instantiate ElasticsearchEmbeddings using the existing connection
embeddings = ElasticsearchEmbeddings.from_es_connection(
model_id,
es_connection,
input_field=input_field,
)
documents = [
"This is an example document.",
"Another example document to generate embeddings for.",
]
embeddings.embed_documents(documents)
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.elasticsearch.ElasticsearchEmbeddings.html
langchain.embeddings.tensorflow_hub.TensorflowHubEmbeddings¶
class langchain.embeddings.tensorflow_hub.TensorflowHubEmbeddings(*, embed: Any = None, model_url: str = 'https://tfhub.dev/google/universal-sentence-encoder-multilingual/3')[source]¶
Bases: BaseModel, Embeddings
Wrapper around tensorflow_hub embedding models.
To use, you should have the tensorflow_text python package installed.
Example
from langchain.embeddings import TensorflowHubEmbeddings
url = "https://tfhub.dev/google/universal-sentence-encoder-multilingual/3"
tf = TensorflowHubEmbeddings(model_url=url)
Initialize the tensorflow_hub and tensorflow_text.
param model_url: str = 'https://tfhub.dev/google/universal-sentence-encoder-multilingual/3'¶
Model name to use.
embed_documents(texts: List[str]) → List[List[float]][source]¶
Compute doc embeddings using a TensorflowHub embedding model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Compute query embeddings using a TensorflowHub embedding model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.tensorflow_hub.TensorflowHubEmbeddings.html
langchain.memory.chat_message_histories.momento.MomentoChatMessageHistory¶
class langchain.memory.chat_message_histories.momento.MomentoChatMessageHistory(session_id: str, cache_client: momento.CacheClient, cache_name: str, *, key_prefix: str = 'message_store:', ttl: Optional[timedelta] = None, ensure_cache_exists: bool = True)[source]¶
Bases: BaseChatMessageHistory
Chat message history cache that uses Momento as a backend.
See https://gomomento.com/
Instantiate a chat message history cache that uses Momento as a backend.
Note: to instantiate the cache client passed to MomentoChatMessageHistory,
you must have a Momento account at https://gomomento.com/.
Parameters
session_id (str) – The session ID to use for this chat session.
cache_client (CacheClient) – The Momento cache client.
cache_name (str) – The name of the cache to use to store the messages.
key_prefix (str, optional) – The prefix to apply to the cache key.
Defaults to “message_store:”.
ttl (Optional[timedelta], optional) – The TTL to use for the messages.
Defaults to None, i.e. the default TTL of the cache will be used.
ensure_cache_exists (bool, optional) – Create the cache if it doesn’t exist.
Defaults to True.
Raises
ImportError – Momento python package is not installed.
TypeError – cache_client is not of type momento.CacheClient.
Methods
__init__(session_id, cache_client, cache_name, *)
Instantiate a chat message history cache that uses Momento as a backend.
add_ai_message(message)
Add an AI message to the store
add_message(message)
Store a message in the cache.
add_user_message(message)
Add a user message to the store
clear()
Remove the session's messages from the cache.
from_client_params(session_id, cache_name, ...)
Construct cache from CacheClient parameters.
Attributes
messages
Retrieve the messages from Momento.
add_ai_message(message: str) → None¶
Add an AI message to the store
add_message(message: BaseMessage) → None[source]¶
Store a message in the cache.
Parameters
message (BaseMessage) – The message object to store.
Raises
SdkException – Momento service or network error.
Exception – Unexpected response.
add_user_message(message: str) → None¶
Add a user message to the store
clear() → None[source]¶
Remove the session’s messages from the cache.
Raises
SdkException – Momento service or network error.
Exception – Unexpected response.
classmethod from_client_params(session_id: str, cache_name: str, ttl: timedelta, *, configuration: Optional[momento.config.Configuration] = None, auth_token: Optional[str] = None, **kwargs: Any) → MomentoChatMessageHistory[source]¶
Construct cache from CacheClient parameters.
property messages: list[langchain.schema.BaseMessage]¶
Retrieve the messages from Momento.
Raises
SdkException – Momento service or network error
Exception – Unexpected response
Returns
List of cached messages
Return type
list[BaseMessage]
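The chat-history classes in this section share one interface: add_user_message and add_ai_message wrap add_message, clear empties the store, and messages returns what is cached. A backend-free sketch of that interface (messages simplified to (role, text) tuples instead of BaseMessage objects):

```python
from typing import List, Tuple

class InMemoryChatHistory:
    def __init__(self) -> None:
        self._messages: List[Tuple[str, str]] = []

    def add_message(self, role: str, text: str) -> None:
        # All specific add_* helpers funnel through this method.
        self._messages.append((role, text))

    def add_user_message(self, text: str) -> None:
        self.add_message("human", text)

    def add_ai_message(self, text: str) -> None:
        self.add_message("ai", text)

    def clear(self) -> None:
        # Remove the session's messages from the store.
        self._messages = []

    @property
    def messages(self) -> List[Tuple[str, str]]:
        return list(self._messages)
```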
https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.momento.MomentoChatMessageHistory.html
langchain.memory.chat_message_histories.sql.create_message_model¶
langchain.memory.chat_message_histories.sql.create_message_model(table_name, DynamicBase)[source]¶
Create a message model for a given table name.
:param table_name: The name of the table to use.
:param DynamicBase: The base class to use for the model.
Returns
The model class.
https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.sql.create_message_model.html
langchain.memory.chat_message_histories.dynamodb.DynamoDBChatMessageHistory¶
class langchain.memory.chat_message_histories.dynamodb.DynamoDBChatMessageHistory(table_name: str, session_id: str, endpoint_url: Optional[str] = None)[source]¶
Bases: BaseChatMessageHistory
Chat message history that stores history in AWS DynamoDB.
This class expects that a DynamoDB table with name table_name
and a partition Key of SessionId is present.
Parameters
table_name – name of the DynamoDB table
session_id – arbitrary key that is used to store the messages
of a single chat session.
endpoint_url – URL of the AWS endpoint to connect to. This argument
is optional and useful for test purposes, like using Localstack.
If you plan to use AWS cloud service, you normally don’t have to
worry about setting the endpoint_url.
Methods
__init__(table_name, session_id[, endpoint_url])
add_ai_message(message)
Add an AI message to the store
add_message(message)
Append the message to the record in DynamoDB
add_user_message(message)
Add a user message to the store
clear()
Clear session memory from DynamoDB
Attributes
messages
Retrieve the messages from DynamoDB
add_ai_message(message: str) → None¶
Add an AI message to the store
add_message(message: BaseMessage) → None[source]¶
Append the message to the record in DynamoDB
add_user_message(message: str) → None¶
Add a user message to the store
clear() → None[source]¶
Clear session memory from DynamoDB
property messages: List[langchain.schema.BaseMessage]¶
Retrieve the messages from DynamoDB
https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.dynamodb.DynamoDBChatMessageHistory.html
langchain.memory.chat_memory.BaseChatMemory¶
class langchain.memory.chat_memory.BaseChatMemory(*, chat_memory: BaseChatMessageHistory = None, output_key: Optional[str] = None, input_key: Optional[str] = None, return_messages: bool = False)[source]¶
Bases: BaseMemory, ABC
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param chat_memory: langchain.schema.BaseChatMessageHistory [Optional]¶
param input_key: Optional[str] = None¶
param output_key: Optional[str] = None¶
param return_messages: bool = False¶
clear() → None[source]¶
Clear memory contents.
abstract load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any]¶
Return key-value pairs given the text input to the chain.
If None, return all memories
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]¶
Save context from this conversation to buffer.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
abstract property memory_variables: List[str]¶
|
https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_memory.BaseChatMemory.html
|
d53698abc748-1
|
Input keys this memory class will load dynamically.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
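The save_context flow described above (pairing a chain's inputs and outputs into chat_memory, with input_key/output_key selecting which fields to store) can be sketched in plain Python. The key-resolution fallback shown here is an assumption for illustration:

```python
from typing import Dict, List, Optional, Tuple


class SimpleChatMemory:
    """Hypothetical sketch of how BaseChatMemory.save_context pairs the
    chain's input and output into the underlying message history."""

    def __init__(self, input_key: Optional[str] = None,
                 output_key: Optional[str] = None) -> None:
        self.input_key = input_key
        self.output_key = output_key
        self.chat_log: List[Tuple[str, str]] = []  # stand-in for chat_memory

    @staticmethod
    def _resolve(key: Optional[str], mapping: Dict[str, str]) -> str:
        # When no key is configured, a single-entry dict is unambiguous.
        if key is None:
            if len(mapping) != 1:
                raise ValueError("exactly one input/output key expected")
            key = next(iter(mapping))
        return mapping[key]

    def save_context(self, inputs: Dict[str, str],
                     outputs: Dict[str, str]) -> None:
        self.chat_log.append(("human", self._resolve(self.input_key, inputs)))
        self.chat_log.append(("ai", self._resolve(self.output_key, outputs)))

    def clear(self) -> None:
        self.chat_log = []
```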
|
https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_memory.BaseChatMemory.html
|
236783563787-0
|
langchain.memory.chat_message_histories.sql.SQLChatMessageHistory¶
class langchain.memory.chat_message_histories.sql.SQLChatMessageHistory(session_id: str, connection_string: str, table_name: str = 'message_store')[source]¶
Bases: BaseChatMessageHistory
Chat message history stored in an SQL database.
Methods
__init__(session_id, connection_string[, ...])
add_ai_message(message)
Add an AI message to the store
add_message(message)
Append the message to the record in db
add_user_message(message)
Add a user message to the store
clear()
Clear session memory from db
Attributes
messages
Retrieve all messages from db
add_ai_message(message: str) → None¶
Add an AI message to the store
add_message(message: BaseMessage) → None[source]¶
Append the message to the record in db
add_user_message(message: str) → None¶
Add a user message to the store
clear() → None[source]¶
Clear session memory from db
property messages: List[langchain.schema.BaseMessage]¶
Retrieve all messages from db
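The storage pattern behind a SQL-backed history (one row per message, filtered by session_id and read back in insertion order) can be sketched with the standard-library sqlite3 module. The table layout and JSON message encoding here are illustrative assumptions, not the class's actual schema:

```python
import json
import sqlite3


class SQLiteMessageHistory:
    """Hypothetical sketch of the storage pattern behind
    SQLChatMessageHistory: one row per message, keyed by session_id."""

    def __init__(self, session_id: str,
                 table_name: str = "message_store") -> None:
        self.session_id = session_id
        self.table = table_name
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute(
            f"CREATE TABLE IF NOT EXISTS {self.table} "
            "(id INTEGER PRIMARY KEY, session_id TEXT, message TEXT)"
        )

    def add_message(self, role: str, content: str) -> None:
        self.conn.execute(
            f"INSERT INTO {self.table} (session_id, message) VALUES (?, ?)",
            (self.session_id, json.dumps({"role": role, "content": content})),
        )

    @property
    def messages(self):
        rows = self.conn.execute(
            f"SELECT message FROM {self.table} "
            "WHERE session_id = ? ORDER BY id",
            (self.session_id,),
        ).fetchall()
        return [json.loads(r[0]) for r in rows]

    def clear(self) -> None:
        self.conn.execute(
            f"DELETE FROM {self.table} WHERE session_id = ?",
            (self.session_id,),
        )
```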
|
https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.sql.SQLChatMessageHistory.html
|
f9bfb8f6016a-0
|
langchain.memory.summary.ConversationSummaryMemory¶
class langchain.memory.summary.ConversationSummaryMemory(*, human_prefix: str = 'Human', ai_prefix: str = 'AI', llm: ~langchain.base_language.BaseLanguageModel, prompt: ~langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['summary', 'new_lines'], output_parser=None, partial_variables={}, template='Progressively summarize the lines of conversation provided, adding onto the previous summary returning a new summary.\n\nEXAMPLE\nCurrent summary:\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good.\n\nNew lines of conversation:\nHuman: Why do you think artificial intelligence is a force for good?\nAI: Because artificial intelligence will help humans reach their full potential.\n\nNew summary:\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good because it will help humans reach their full potential.\nEND OF EXAMPLE\n\nCurrent summary:\n{summary}\n\nNew lines of conversation:\n{new_lines}\n\nNew summary:', template_format='f-string', validate_template=True), summary_message_cls: ~typing.Type[~langchain.schema.BaseMessage] = <class 'langchain.schema.SystemMessage'>, chat_memory: ~langchain.schema.BaseChatMessageHistory = None, output_key: ~typing.Optional[str] = None, input_key: ~typing.Optional[str] = None, return_messages: bool = False, buffer: str = '', memory_key: str = 'history')[source]¶
Bases: BaseChatMemory, SummarizerMixin
Conversation summarizer to memory.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param ai_prefix: str = 'AI'¶
param buffer: str = ''¶
|
https://api.python.langchain.com/en/latest/memory/langchain.memory.summary.ConversationSummaryMemory.html
|
f9bfb8f6016a-1
|
param chat_memory: BaseChatMessageHistory [Optional]¶
param human_prefix: str = 'Human'¶
param input_key: Optional[str] = None¶
param llm: BaseLanguageModel [Required]¶
param output_key: Optional[str] = None¶
param prompt: BasePromptTemplate = PromptTemplate(input_variables=['summary', 'new_lines'], output_parser=None, partial_variables={}, template='Progressively summarize the lines of conversation provided, adding onto the previous summary returning a new summary.\n\nEXAMPLE\nCurrent summary:\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good.\n\nNew lines of conversation:\nHuman: Why do you think artificial intelligence is a force for good?\nAI: Because artificial intelligence will help humans reach their full potential.\n\nNew summary:\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good because it will help humans reach their full potential.\nEND OF EXAMPLE\n\nCurrent summary:\n{summary}\n\nNew lines of conversation:\n{new_lines}\n\nNew summary:', template_format='f-string', validate_template=True)¶
param return_messages: bool = False¶
param summary_message_cls: Type[BaseMessage] = <class 'langchain.schema.SystemMessage'>¶
clear() → None[source]¶
Clear memory contents.
classmethod from_messages(llm: BaseLanguageModel, chat_memory: BaseChatMessageHistory, *, summarize_step: int = 2, **kwargs: Any) → ConversationSummaryMemory[source]¶
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any][source]¶
Return history buffer.
predict_new_summary(messages: List[BaseMessage], existing_summary: str) → str¶
|
https://api.python.langchain.com/en/latest/memory/langchain.memory.summary.ConversationSummaryMemory.html
|
f9bfb8f6016a-2
|
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]¶
Save context from this conversation to buffer.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_prompt_input_variables » all fields[source]¶
Validate that prompt input variables are consistent.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
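The progressive-summarization step driven by the prompt above can be sketched as follows. The fake_llm stub is hypothetical; in the real class, predict_new_summary sends this prompt to the configured llm:

```python
def fake_llm(prompt: str) -> str:
    """Hypothetical stand-in for the LLM call; it echoes the prompt so the
    assembled text can be inspected."""
    return prompt


def predict_new_summary(llm, summary: str, new_lines: str) -> str:
    # Mirrors the PromptTemplate shown above: the current summary plus the
    # new conversation lines, asking for an updated summary.
    prompt = (
        "Current summary:\n" + summary +
        "\n\nNew lines of conversation:\n" + new_lines +
        "\n\nNew summary:"
    )
    return llm(prompt)
```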
|
https://api.python.langchain.com/en/latest/memory/langchain.memory.summary.ConversationSummaryMemory.html
|
e1c416b20b19-0
|
langchain.memory.entity.InMemoryEntityStore¶
class langchain.memory.entity.InMemoryEntityStore(*, store: Dict[str, Optional[str]] = {})[source]¶
Bases: BaseEntityStore
Basic in-memory entity store.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param store: Dict[str, Optional[str]] = {}¶
clear() → None[source]¶
Delete all entities from store.
delete(key: str) → None[source]¶
Delete entity value from store.
exists(key: str) → bool[source]¶
Check if entity exists in store.
get(key: str, default: Optional[str] = None) → Optional[str][source]¶
Get entity value from store.
set(key: str, value: Optional[str]) → None[source]¶
Set entity value in store.
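The five operations above amount to a thin wrapper over a dict, which can be sketched as (a hypothetical stand-in, not the class itself):

```python
from typing import Dict, Optional


class DictEntityStore:
    """Hypothetical sketch of the InMemoryEntityStore contract: a plain
    dict behind get/set/delete/exists/clear."""

    def __init__(self) -> None:
        self.store: Dict[str, Optional[str]] = {}

    def get(self, key: str, default: Optional[str] = None) -> Optional[str]:
        return self.store.get(key, default)

    def set(self, key: str, value: Optional[str]) -> None:
        self.store[key] = value

    def delete(self, key: str) -> None:
        # Forgiving delete for this sketch; the real class may raise on a
        # missing key.
        self.store.pop(key, None)

    def exists(self, key: str) -> bool:
        return key in self.store

    def clear(self) -> None:
        self.store = {}
```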
|
https://api.python.langchain.com/en/latest/memory/langchain.memory.entity.InMemoryEntityStore.html
|
ae30200ec75a-0
|
langchain.memory.summary_buffer.ConversationSummaryBufferMemory¶
class langchain.memory.summary_buffer.ConversationSummaryBufferMemory(*, human_prefix: str = 'Human', ai_prefix: str = 'AI', llm: ~langchain.base_language.BaseLanguageModel, prompt: ~langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['summary', 'new_lines'], output_parser=None, partial_variables={}, template='Progressively summarize the lines of conversation provided, adding onto the previous summary returning a new summary.\n\nEXAMPLE\nCurrent summary:\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good.\n\nNew lines of conversation:\nHuman: Why do you think artificial intelligence is a force for good?\nAI: Because artificial intelligence will help humans reach their full potential.\n\nNew summary:\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good because it will help humans reach their full potential.\nEND OF EXAMPLE\n\nCurrent summary:\n{summary}\n\nNew lines of conversation:\n{new_lines}\n\nNew summary:', template_format='f-string', validate_template=True), summary_message_cls: ~typing.Type[~langchain.schema.BaseMessage] = <class 'langchain.schema.SystemMessage'>, chat_memory: ~langchain.schema.BaseChatMessageHistory = None, output_key: ~typing.Optional[str] = None, input_key: ~typing.Optional[str] = None, return_messages: bool = False, max_token_limit: int = 2000, moving_summary_buffer: str = '', memory_key: str = 'history')[source]¶
Bases: BaseChatMemory, SummarizerMixin
Buffer with summarizer for storing conversation memory.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
|
https://api.python.langchain.com/en/latest/memory/langchain.memory.summary_buffer.ConversationSummaryBufferMemory.html
|
ae30200ec75a-1
|
param ai_prefix: str = 'AI'¶
param chat_memory: BaseChatMessageHistory [Optional]¶
param human_prefix: str = 'Human'¶
param input_key: Optional[str] = None¶
param llm: BaseLanguageModel [Required]¶
param max_token_limit: int = 2000¶
param memory_key: str = 'history'¶
param moving_summary_buffer: str = ''¶
param output_key: Optional[str] = None¶
param prompt: BasePromptTemplate = PromptTemplate(input_variables=['summary', 'new_lines'], output_parser=None, partial_variables={}, template='Progressively summarize the lines of conversation provided, adding onto the previous summary returning a new summary.\n\nEXAMPLE\nCurrent summary:\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good.\n\nNew lines of conversation:\nHuman: Why do you think artificial intelligence is a force for good?\nAI: Because artificial intelligence will help humans reach their full potential.\n\nNew summary:\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good because it will help humans reach their full potential.\nEND OF EXAMPLE\n\nCurrent summary:\n{summary}\n\nNew lines of conversation:\n{new_lines}\n\nNew summary:', template_format='f-string', validate_template=True)¶
param return_messages: bool = False¶
param summary_message_cls: Type[BaseMessage] = <class 'langchain.schema.SystemMessage'>¶
clear() → None[source]¶
Clear memory contents.
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any][source]¶
Return history buffer.
predict_new_summary(messages: List[BaseMessage], existing_summary: str) → str¶
|
https://api.python.langchain.com/en/latest/memory/langchain.memory.summary_buffer.ConversationSummaryBufferMemory.html
|
ae30200ec75a-2
|
prune() → None[source]¶
Prune buffer if it exceeds max token limit
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]¶
Save context from this conversation to buffer.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_prompt_input_variables » all fields[source]¶
Validate that prompt input variables are consistent.
property buffer: List[langchain.schema.BaseMessage]¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
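The prune step documented above (evict the oldest messages once the buffer exceeds max_token_limit, folding them into moving_summary_buffer) can be sketched with stub token-counting and summarization callables, both hypothetical:

```python
from typing import Callable, List, Tuple

Msg = Tuple[str, str]  # (role, content); stand-in for BaseMessage


def prune(buffer: List[Msg],
          count_tokens: Callable[[str], int],
          summarize: Callable[[List[Msg], str], str],
          moving_summary: str,
          max_token_limit: int) -> Tuple[List[Msg], str]:
    """Hypothetical sketch of ConversationSummaryBufferMemory.prune: pop
    the oldest messages until the buffer fits the token budget, then fold
    the pruned messages into the moving summary."""

    def total(msgs: List[Msg]) -> int:
        return sum(count_tokens(content) for _, content in msgs)

    pruned: List[Msg] = []
    while buffer and total(buffer) > max_token_limit:
        pruned.append(buffer.pop(0))
    if pruned:
        moving_summary = summarize(pruned, moving_summary)
    return buffer, moving_summary
```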
|
https://api.python.langchain.com/en/latest/memory/langchain.memory.summary_buffer.ConversationSummaryBufferMemory.html
|
f433d2b5baf9-0
|
langchain.memory.chat_message_histories.mongodb.MongoDBChatMessageHistory¶
class langchain.memory.chat_message_histories.mongodb.MongoDBChatMessageHistory(connection_string: str, session_id: str, database_name: str = 'chat_history', collection_name: str = 'message_store')[source]¶
Bases: BaseChatMessageHistory
Chat message history that stores history in MongoDB.
Parameters
connection_string – connection string to connect to MongoDB
session_id – arbitrary key that is used to store the messages
of a single chat session.
database_name – name of the database to use
collection_name – name of the collection to use
Methods
__init__(connection_string, session_id[, ...])
add_ai_message(message)
Add an AI message to the store
add_message(message)
Append the message to the record in MongoDB
add_user_message(message)
Add a user message to the store
clear()
Clear session memory from MongoDB
Attributes
messages
Retrieve the messages from MongoDB
add_ai_message(message: str) → None¶
Add an AI message to the store
add_message(message: BaseMessage) → None[source]¶
Append the message to the record in MongoDB
add_user_message(message: str) → None¶
Add a user message to the store
clear() → None[source]¶
Clear session memory from MongoDB
property messages: List[langchain.schema.BaseMessage]¶
Retrieve the messages from MongoDB
|
https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.mongodb.MongoDBChatMessageHistory.html
|
4e8711697a4a-0
|
langchain.memory.chat_message_histories.firestore.FirestoreChatMessageHistory¶
class langchain.memory.chat_message_histories.firestore.FirestoreChatMessageHistory(collection_name: str, session_id: str, user_id: str)[source]¶
Bases: BaseChatMessageHistory
Chat history backed by Google Firestore.
Initialize a new instance of the FirestoreChatMessageHistory class.
Parameters
collection_name – The name of the collection to use.
session_id – The session ID for the chat.
user_id – The user ID for the chat.
Methods
__init__(collection_name, session_id, user_id)
Initialize a new instance of the FirestoreChatMessageHistory class.
add_ai_message(message)
Add an AI message to the store
add_message(message)
Add a self-created message to the store
add_user_message(message)
Add a user message to the store
clear()
Clear session memory from this memory and Firestore.
load_messages()
Retrieve the messages from Firestore
prepare_firestore()
Prepare the Firestore client.
upsert_messages([new_message])
Update the Firestore document.
Attributes
messages
add_ai_message(message: str) → None¶
Add an AI message to the store
add_message(message: BaseMessage) → None[source]¶
Add a self-created message to the store
add_user_message(message: str) → None¶
Add a user message to the store
clear() → None[source]¶
Clear session memory from this memory and Firestore.
load_messages() → None[source]¶
Retrieve the messages from Firestore
prepare_firestore() → None[source]¶
Prepare the Firestore client.
Use this function to make sure your database is ready.
upsert_messages(new_message: Optional[BaseMessage] = None) → None[source]¶
Update the Firestore document.
messages: List[BaseMessage]¶
|
https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.firestore.FirestoreChatMessageHistory.html
|
7a541fd5a8e5-0
|
langchain.memory.chat_message_histories.redis.RedisChatMessageHistory¶
class langchain.memory.chat_message_histories.redis.RedisChatMessageHistory(session_id: str, url: str = 'redis://localhost:6379/0', key_prefix: str = 'message_store:', ttl: Optional[int] = None)[source]¶
Bases: BaseChatMessageHistory
Chat message history stored in a Redis database.
Methods
__init__(session_id[, url, key_prefix, ttl])
add_ai_message(message)
Add an AI message to the store
add_message(message)
Append the message to the record in Redis
add_user_message(message)
Add a user message to the store
clear()
Clear session memory from Redis
Attributes
key
Construct the record key to use
messages
Retrieve the messages from Redis
add_ai_message(message: str) → None¶
Add an AI message to the store
add_message(message: BaseMessage) → None[source]¶
Append the message to the record in Redis
add_user_message(message: str) → None¶
Add a user message to the store
clear() → None[source]¶
Clear session memory from Redis
property key: str¶
Construct the record key to use
property messages: List[langchain.schema.BaseMessage]¶
Retrieve the messages from Redis
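The Redis storage scheme suggested by the key and messages properties (a per-session record at key_prefix + session_id holding JSON-serialized messages) can be sketched with a dict standing in for Redis. The serialization and list-append details are assumptions for illustration:

```python
import json


class FakeRedisHistory:
    """Hypothetical sketch of RedisChatMessageHistory's storage scheme:
    one list per session, keyed by key_prefix + session_id, with each
    message serialized to JSON. A dict of lists stands in for Redis."""

    def __init__(self, session_id: str,
                 key_prefix: str = "message_store:") -> None:
        self.session_id = session_id
        self.key_prefix = key_prefix
        self._db = {}  # fake Redis: key -> list of JSON strings

    @property
    def key(self) -> str:
        # Mirrors the `key` property documented above.
        return self.key_prefix + self.session_id

    def add_message(self, role: str, content: str) -> None:
        self._db.setdefault(self.key, []).append(
            json.dumps({"role": role, "content": content})
        )

    @property
    def messages(self):
        return [json.loads(m) for m in self._db.get(self.key, [])]

    def clear(self) -> None:
        self._db.pop(self.key, None)
```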
|
https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.redis.RedisChatMessageHistory.html
|
763e03a8469b-0
|
langchain.memory.entity.SQLiteEntityStore¶
class langchain.memory.entity.SQLiteEntityStore(session_id: str = 'default', db_file: str = 'entities.db', table_name: str = 'memory_store', *args: Any)[source]¶
Bases: BaseEntityStore
SQLite-backed Entity store.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param session_id: str = 'default'¶
param table_name: str = 'memory_store'¶
clear() → None[source]¶
Delete all entities from store.
delete(key: str) → None[source]¶
Delete entity value from store.
exists(key: str) → bool[source]¶
Check if entity exists in store.
get(key: str, default: Optional[str] = None) → Optional[str][source]¶
Get entity value from store.
set(key: str, value: Optional[str]) → None[source]¶
Set entity value in store.
property full_table_name: str¶
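The SQLite-backed pattern can be sketched with the standard-library sqlite3 module. The full_table_name composition (table_name joined with session_id) and the key/value schema are assumptions for illustration, not the class's guaranteed layout:

```python
import sqlite3
from typing import Optional


class SQLiteStore:
    """Hypothetical sketch of the SQLiteEntityStore pattern: one key/value
    table per (table_name, session_id) pair."""

    def __init__(self, session_id: str = "default",
                 table_name: str = "memory_store") -> None:
        self.conn = sqlite3.connect(":memory:")
        # Assumed composition of the full_table_name property.
        self.full_table_name = f"{table_name}_{session_id}"
        self.conn.execute(
            f"CREATE TABLE IF NOT EXISTS {self.full_table_name} "
            "(key TEXT PRIMARY KEY, value TEXT)"
        )

    def set(self, key: str, value: Optional[str]) -> None:
        self.conn.execute(
            f"INSERT OR REPLACE INTO {self.full_table_name} (key, value) "
            "VALUES (?, ?)",
            (key, value),
        )

    def get(self, key: str, default: Optional[str] = None) -> Optional[str]:
        row = self.conn.execute(
            f"SELECT value FROM {self.full_table_name} WHERE key = ?",
            (key,),
        ).fetchone()
        return row[0] if row else default

    def exists(self, key: str) -> bool:
        return self.get(key) is not None

    def delete(self, key: str) -> None:
        self.conn.execute(
            f"DELETE FROM {self.full_table_name} WHERE key = ?", (key,)
        )

    def clear(self) -> None:
        self.conn.execute(f"DELETE FROM {self.full_table_name}")
```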
|
https://api.python.langchain.com/en/latest/memory/langchain.memory.entity.SQLiteEntityStore.html
|
f124b6b74dc0-0
|
langchain.memory.token_buffer.ConversationTokenBufferMemory¶
class langchain.memory.token_buffer.ConversationTokenBufferMemory(*, chat_memory: BaseChatMessageHistory = None, output_key: Optional[str] = None, input_key: Optional[str] = None, return_messages: bool = False, human_prefix: str = 'Human', ai_prefix: str = 'AI', llm: BaseLanguageModel, memory_key: str = 'history', max_token_limit: int = 2000)[source]¶
Bases: BaseChatMemory
Buffer for storing conversation memory.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param ai_prefix: str = 'AI'¶
param chat_memory: BaseChatMessageHistory [Optional]¶
param human_prefix: str = 'Human'¶
param input_key: Optional[str] = None¶
param llm: langchain.base_language.BaseLanguageModel [Required]¶
param max_token_limit: int = 2000¶
param memory_key: str = 'history'¶
param output_key: Optional[str] = None¶
param return_messages: bool = False¶
clear() → None¶
Clear memory contents.
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any][source]¶
Return history buffer.
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]¶
Save context from this conversation to buffer. Pruned.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property buffer: List[langchain.schema.BaseMessage]¶
String buffer of memory.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
|
https://api.python.langchain.com/en/latest/memory/langchain.memory.token_buffer.ConversationTokenBufferMemory.html
|
f124b6b74dc0-1
|
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
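The token-budget behaviour noted above ("Save context from this conversation to buffer. Pruned.") can be sketched as a loop that drops the oldest messages until the buffer fits; the whitespace token counter here is a stand-in for the model's tokenizer:

```python
from typing import Callable, List


def prune_to_limit(messages: List[str],
                   count_tokens: Callable[[str], int],
                   max_token_limit: int = 2000) -> List[str]:
    """Hypothetical sketch of ConversationTokenBufferMemory's pruning:
    drop the oldest messages until the buffer fits the token budget."""
    messages = list(messages)
    while messages and sum(count_tokens(m) for m in messages) > max_token_limit:
        messages.pop(0)  # oldest first
    return messages
```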
|
https://api.python.langchain.com/en/latest/memory/langchain.memory.token_buffer.ConversationTokenBufferMemory.html
|
4aa99c5b8210-0
|
langchain.memory.chat_message_histories.cassandra.CassandraChatMessageHistory¶
class langchain.memory.chat_message_histories.cassandra.CassandraChatMessageHistory(session_id: str, session: Session, keyspace: str, table_name: str = 'message_store', ttl_seconds: int | None = None)[source]¶
Bases: BaseChatMessageHistory
Chat message history that stores history in Cassandra.
Parameters
session_id – arbitrary key that is used to store the messages
of a single chat session.
session – a Cassandra Session object (an open DB connection)
keyspace – name of the keyspace to use.
table_name – name of the table to use.
ttl_seconds – time-to-live (seconds) for automatic expiration
of stored entries. None (default) for no expiration.
Methods
__init__(session_id, session, keyspace[, ...])
add_ai_message(message)
Add an AI message to the store
add_message(message)
Write a message to the table
add_user_message(message)
Add a user message to the store
clear()
Clear session memory from DB
Attributes
messages
Retrieve all session messages from DB
add_ai_message(message: str) → None¶
Add an AI message to the store
add_message(message: BaseMessage) → None[source]¶
Write a message to the table
add_user_message(message: str) → None¶
Add a user message to the store
clear() → None[source]¶
Clear session memory from DB
property messages: List[langchain.schema.BaseMessage]¶
Retrieve all session messages from DB
|
https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.cassandra.CassandraChatMessageHistory.html
|
35e95f2b09b4-0
|
langchain.memory.buffer.ConversationBufferMemory¶
class langchain.memory.buffer.ConversationBufferMemory(*, chat_memory: BaseChatMessageHistory = None, output_key: Optional[str] = None, input_key: Optional[str] = None, return_messages: bool = False, human_prefix: str = 'Human', ai_prefix: str = 'AI', memory_key: str = 'history')[source]¶
Bases: BaseChatMemory
Buffer for storing conversation memory.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param ai_prefix: str = 'AI'¶
param chat_memory: BaseChatMessageHistory [Optional]¶
param human_prefix: str = 'Human'¶
param input_key: Optional[str] = None¶
param output_key: Optional[str] = None¶
param return_messages: bool = False¶
clear() → None¶
Clear memory contents.
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any][source]¶
Return history buffer.
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None¶
Save context from this conversation to buffer.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property buffer: Any¶
String buffer of memory.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
|
https://api.python.langchain.com/en/latest/memory/langchain.memory.buffer.ConversationBufferMemory.html
|
35e95f2b09b4-1
|
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
|
https://api.python.langchain.com/en/latest/memory/langchain.memory.buffer.ConversationBufferMemory.html
|
3c8c0b3fd14e-0
|
langchain.memory.buffer.ConversationStringBufferMemory¶
class langchain.memory.buffer.ConversationStringBufferMemory(*, human_prefix: str = 'Human', ai_prefix: str = 'AI', buffer: str = '', output_key: Optional[str] = None, input_key: Optional[str] = None, memory_key: str = 'history')[source]¶
Bases: BaseMemory
Buffer for storing conversation memory.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param ai_prefix: str = 'AI'¶
Prefix to use for AI generated responses.
param buffer: str = ''¶
param human_prefix: str = 'Human'¶
param input_key: Optional[str] = None¶
param output_key: Optional[str] = None¶
clear() → None[source]¶
Clear memory contents.
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, str][source]¶
Return history buffer.
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]¶
Save context from this conversation to buffer.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_chains » all fields[source]¶
Validate that return messages is not True.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
|
https://api.python.langchain.com/en/latest/memory/langchain.memory.buffer.ConversationStringBufferMemory.html
|
3c8c0b3fd14e-1
|
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
property memory_variables: List[str]¶
Will always return list of memory variables.
:meta private:
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
|
https://api.python.langchain.com/en/latest/memory/langchain.memory.buffer.ConversationStringBufferMemory.html
|
b5762e49925e-0
|
langchain.memory.vectorstore.VectorStoreRetrieverMemory¶
class langchain.memory.vectorstore.VectorStoreRetrieverMemory(*, retriever: VectorStoreRetriever, memory_key: str = 'history', input_key: Optional[str] = None, return_docs: bool = False)[source]¶
Bases: BaseMemory
Class for a VectorStore-backed memory object.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param input_key: Optional[str] = None¶
Key name to index the inputs to load_memory_variables.
param memory_key: str = 'history'¶
Key name to locate the memories in the result of load_memory_variables.
param retriever: langchain.vectorstores.base.VectorStoreRetriever [Required]¶
VectorStoreRetriever object to connect to.
param return_docs: bool = False¶
Whether or not to return the result of querying the database directly.
clear() → None[source]¶
Nothing to clear.
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Union[List[Document], str]][source]¶
Return history buffer.
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]¶
Save context from this conversation to buffer.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
|
https://api.python.langchain.com/en/latest/memory/langchain.memory.vectorstore.VectorStoreRetrieverMemory.html
|
b5762e49925e-1
|
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
property memory_variables: List[str]¶
The list of keys emitted from the load_memory_variables method.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
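The flow described by the parameters above (save_context writes each exchange into the vector store as a small text document; load_memory_variables retrieves the most relevant ones under memory_key) can be sketched with a naive keyword-overlap score standing in for vector similarity. This is a hypothetical illustration, not the class's retriever:

```python
class KeywordRetrieverMemory:
    """Hypothetical sketch of the VectorStoreRetrieverMemory flow with a
    keyword-overlap score standing in for vector similarity."""

    def __init__(self, memory_key: str = "history", k: int = 1) -> None:
        self.memory_key = memory_key
        self.k = k
        self.docs = []  # stand-in for the vector store

    def save_context(self, inputs, outputs) -> None:
        # Flatten the exchange into a small "key: value" text document.
        lines = [f"{k}: {v}"
                 for k, v in list(inputs.items()) + list(outputs.items())]
        self.docs.append("\n".join(lines))

    def load_memory_variables(self, inputs):
        query = " ".join(str(v) for v in inputs.values())
        qtokens = set(query.lower().split())
        scored = sorted(
            self.docs,
            key=lambda d: len(qtokens & set(d.lower().split())),
            reverse=True,
        )
        return {self.memory_key: "\n".join(scored[: self.k])}
```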
|
https://api.python.langchain.com/en/latest/memory/langchain.memory.vectorstore.VectorStoreRetrieverMemory.html
|
396dc64c1a5b-0
|
langchain.memory.entity.RedisEntityStore¶
class langchain.memory.entity.RedisEntityStore(session_id: str = 'default', url: str = 'redis://localhost:6379/0', key_prefix: str = 'memory_store', ttl: Optional[int] = 86400, recall_ttl: Optional[int] = 259200, *args: Any, redis_client: Any = None)[source]¶
Bases: BaseEntityStore
Redis-backed Entity store. Entities get a TTL of 1 day by default, and
that TTL is extended by 3 days every time the entity is read back.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param key_prefix: str = 'memory_store'¶
param recall_ttl: Optional[int] = 259200¶
param redis_client: Any = None¶
param session_id: str = 'default'¶
param ttl: Optional[int] = 86400¶
clear() → None[source]¶
Delete all entities from store.
delete(key: str) → None[source]¶
Delete entity value from store.
exists(key: str) → bool[source]¶
Check if entity exists in store.
get(key: str, default: Optional[str] = None) → Optional[str][source]¶
Get entity value from store.
set(key: str, value: Optional[str]) → None[source]¶
Set entity value in store.
property full_key_prefix: str¶
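The TTL behaviour described above (entries expire after ttl seconds, and each read extends the lifetime by recall_ttl) can be simulated with a manual clock in place of Redis expiry. This is a hypothetical model of the documented semantics, not the class's implementation:

```python
class TTLStore:
    """Hypothetical sketch of the RedisEntityStore TTL behaviour: entries
    expire after `ttl` seconds, and each read pushes the expiry out by
    `recall_ttl`. A manual clock replaces real Redis expiry."""

    def __init__(self, ttl: int = 86400, recall_ttl: int = 259200) -> None:
        self.ttl, self.recall_ttl = ttl, recall_ttl
        self.now = 0  # manual clock, in seconds
        self._data = {}  # key -> (value, expires_at)

    def set(self, key, value) -> None:
        self._data[key] = (value, self.now + self.ttl)

    def get(self, key, default=None):
        item = self._data.get(key)
        if item is None or item[1] <= self.now:
            self._data.pop(key, None)  # expired or missing
            return default
        value, _ = item
        # Reading an entity extends its lifetime by recall_ttl.
        self._data[key] = (value, self.now + self.recall_ttl)
        return value
```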
|
https://api.python.langchain.com/en/latest/memory/langchain.memory.entity.RedisEntityStore.html
|
df27593112b3-0
|
langchain.memory.entity.BaseEntityStore¶
class langchain.memory.entity.BaseEntityStore[source]¶
Bases: BaseModel, ABC
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
abstract clear() → None[source]¶
Delete all entities from store.
abstract delete(key: str) → None[source]¶
Delete entity value from store.
abstract exists(key: str) → bool[source]¶
Check if entity exists in store.
abstract get(key: str, default: Optional[str] = None) → Optional[str][source]¶
Get entity value from store.
abstract set(key: str, value: Optional[str]) → None[source]¶
Set entity value in store.
|
https://api.python.langchain.com/en/latest/memory/langchain.memory.entity.BaseEntityStore.html
|
b717369bb821-0
|
langchain.memory.readonly.ReadOnlySharedMemory¶
class langchain.memory.readonly.ReadOnlySharedMemory(*, memory: BaseMemory)[source]¶
Bases: BaseMemory
A memory wrapper that is read-only and cannot be changed.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param memory: langchain.schema.BaseMemory [Required]¶
clear() → None[source]¶
Nothing to clear, got a memory like a vault.
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, str][source]¶
Load memory variables from memory.
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]¶
Nothing should be saved or changed
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
property memory_variables: List[str]¶
Return memory variables.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
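The wrapper pattern is simple: delegate reads to the wrapped memory and turn writes and clears into no-ops. A self-contained sketch with a hypothetical BufferMemory standing in for any writable BaseMemory (not langchain's classes):

```python
from typing import Any, Dict, List


class BufferMemory:
    """Hypothetical writable memory standing in for a BaseMemory."""

    def __init__(self) -> None:
        self.history: List[str] = []

    @property
    def memory_variables(self) -> List[str]:
        return ["history"]

    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:
        return {"history": "\n".join(self.history)}

    def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
        self.history.append(f"{inputs['input']} -> {outputs['output']}")

    def clear(self) -> None:
        self.history = []


class ReadOnlySharedMemory:
    """Delegates reads to the wrapped memory; writes and clears are no-ops."""

    def __init__(self, memory: BufferMemory) -> None:
        self.memory = memory

    @property
    def memory_variables(self) -> List[str]:
        return self.memory.memory_variables

    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:
        return self.memory.load_memory_variables(inputs)

    def save_context(self, inputs, outputs) -> None:
        pass  # nothing should be saved or changed

    def clear(self) -> None:
        pass  # nothing to clear
```

This is useful for giving one chain in a pipeline read access to shared history without letting it modify the record.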
https://api.python.langchain.com/en/latest/memory/langchain.memory.readonly.ReadOnlySharedMemory.html
langchain.memory.chat_message_histories.zep.ZepChatMessageHistory¶
class langchain.memory.chat_message_histories.zep.ZepChatMessageHistory(session_id: str, url: str = 'http://localhost:8000', api_key: Optional[str] = None)[source]¶
Bases: BaseChatMessageHistory
A ChatMessageHistory implementation that uses Zep as a backend.
Recommended usage:
# Set up Zep Chat History
zep_chat_history = ZepChatMessageHistory(
session_id=session_id,
url=ZEP_API_URL,
api_key=<your_api_key>,
)
# Use a standard ConversationBufferMemory to encapsulate the Zep chat history
memory = ConversationBufferMemory(
memory_key="chat_history", chat_memory=zep_chat_history
)
Zep provides long-term conversation storage for LLM apps. The server stores,
summarizes, embeds, indexes, and enriches conversational AI chat
histories, and exposes them via simple, low-latency APIs.
For server installation instructions and more, see:
https://docs.getzep.com/deployment/quickstart/
This class is a thin wrapper around the zep-python package. Additional
Zep functionality is exposed via the zep_summary and zep_messages
properties.
For more information on the zep-python package, see:
https://github.com/getzep/zep-python
Methods
__init__(session_id[, url, api_key])
add_ai_message(message)
Add an AI message to the store
add_message(message)
Append the message to the Zep memory history
add_user_message(message)
Add a user message to the store
clear()
Clear session memory from Zep.
search(query[, metadata, limit])
Search Zep memory for messages matching the query
Attributes
messages
Retrieve messages from Zep memory
zep_messages
Retrieve messages from Zep memory
zep_summary
Retrieve summary from Zep memory
add_ai_message(message: str) → None¶
Add an AI message to the store
add_message(message: BaseMessage) → None[source]¶
Append the message to the Zep memory history
add_user_message(message: str) → None¶
Add a user message to the store
clear() → None[source]¶
Clear session memory from Zep. Note that Zep is long-term storage for memory
and this is not advised unless you have specific data retention requirements.
search(query: str, metadata: Optional[Dict] = None, limit: Optional[int] = None) → List[MemorySearchResult][source]¶
Search Zep memory for messages matching the query
property messages: List[langchain.schema.BaseMessage]¶
Retrieve messages from Zep memory
property zep_messages: List[Message]¶
Retrieve messages from Zep memory
property zep_summary: Optional[str]¶
Retrieve summary from Zep memory
https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.zep.ZepChatMessageHistory.html
langchain.memory.entity.ConversationEntityMemory¶
https://api.python.langchain.com/en/latest/memory/langchain.memory.entity.ConversationEntityMemory.html
class langchain.memory.entity.ConversationEntityMemory(*, chat_memory: BaseChatMessageHistory = None, output_key: Optional[str] = None, input_key: Optional[str] = None, return_messages: bool = False, human_prefix: str = 'Human', ai_prefix: str = 'AI', llm: BaseLanguageModel, entity_extraction_prompt: BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last line of conversation. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.\n\nThe conversation history is provided just in case of a coreference (e.g. "What do you know about him" where "him" is defined in a previous line) -- ignore items mentioned there that are not in the last line.\n\nReturn the output as a single comma-separated list, or NONE if there is nothing of note to return (e.g. the user is just issuing a greeting or having a simple conversation).\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\nOutput: Langchain\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I\'m working with Person #2.\nOutput: Langchain, Person #2\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for extraction):\nHuman: {input}\n\nOutput:', template_format='f-string', validate_template=True), entity_summarization_prompt: BasePromptTemplate = PromptTemplate(input_variables=['entity', 'summary', 'history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant helping a human keep track of facts about relevant people, places, and concepts in their life. Update the summary of the provided entity in the "Entity" section based on the last line of your conversation with the human. If you are writing the summary for the first time, return a single sentence.\nThe update should only include facts that are relayed in the last line of conversation about the provided entity, and should only contain facts about the provided entity.\n\nIf there is no new information about the provided entity or the information is not worth noting (not an important or relevant fact to remember long-term), return the existing summary unchanged.\n\nFull conversation history (for context):\n{history}\n\nEntity to summarize:\n{entity}\n\nExisting summary of {entity}:\n{summary}\n\nLast line of conversation:\nHuman: {input}\nUpdated summary:', template_format='f-string', validate_template=True), entity_cache: List[str] = [], k: int = 3, chat_history_key: str = 'history', entity_store: BaseEntityStore = None)[source]¶
Bases: BaseChatMemory
Entity extractor & summarizer memory.
Extracts named entities from the recent chat history and generates summaries.
With a swappable entity store, entities persist across conversations.
Defaults to an in-memory entity store, and can be swapped out for a Redis,
SQLite, or other entity store.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param ai_prefix: str = 'AI'¶
param chat_history_key: str = 'history'¶
param chat_memory: BaseChatMessageHistory [Optional]¶
param entity_cache: List[str] = []¶
param entity_extraction_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last line of conversation. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.\n\nThe conversation history is provided just in case of a coreference (e.g. "What do you know about him" where "him" is defined in a previous line) -- ignore items mentioned there that are not in the last line.\n\nReturn the output as a single comma-separated list, or NONE if there is nothing of note to return (e.g. the user is just issuing a greeting or having a simple conversation).\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\nOutput: Langchain\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I\'m working with Person #2.\nOutput: Langchain, Person #2\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for extraction):\nHuman: {input}\n\nOutput:', template_format='f-string', validate_template=True)¶
param entity_store: langchain.memory.entity.BaseEntityStore [Optional]¶
param entity_summarization_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['entity', 'summary', 'history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant helping a human keep track of facts about relevant people, places, and concepts in their life. Update the summary of the provided entity in the "Entity" section based on the last line of your conversation with the human. If you are writing the summary for the first time, return a single sentence.\nThe update should only include facts that are relayed in the last line of conversation about the provided entity, and should only contain facts about the provided entity.\n\nIf there is no new information about the provided entity or the information is not worth noting (not an important or relevant fact to remember long-term), return the existing summary unchanged.\n\nFull conversation history (for context):\n{history}\n\nEntity to summarize:\n{entity}\n\nExisting summary of {entity}:\n{summary}\n\nLast line of conversation:\nHuman: {input}\nUpdated summary:', template_format='f-string', validate_template=True)¶
param human_prefix: str = 'Human'¶
param input_key: Optional[str] = None¶
param k: int = 3¶
param llm: langchain.base_language.BaseLanguageModel [Required]¶
param output_key: Optional[str] = None¶
param return_messages: bool = False¶
clear() → None[source]¶
Clear memory contents.
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any][source]¶
Returns chat history and all generated entities with summaries if available,
and updates or clears the recent entity cache.
New entity name can be found when calling this method, before the entity
summaries are generated, so the entity cache values may be empty if no entity
descriptions are generated yet.
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]¶
Save context from this conversation history to the entity store.
Generates a summary for each entity in the entity cache by prompting
the model, and saves these summaries to the entity store.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property buffer: List[langchain.schema.BaseMessage]¶
Access chat memory messages.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
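The save_context flow above (extract entities from the last line, then summarize each into the entity store) can be sketched end to end. FakeLLM and the function below are hypothetical stand-ins, and the summarization step is simplified to appending the new fact rather than prompting with entity_summarization_prompt:

```python
class FakeLLM:
    """Stub standing in for the llm parameter (hypothetical)."""

    def predict(self, prompt: str) -> str:
        # Pretend the extraction prompt returns the capitalized words
        # of the last human line, comma-separated, or NONE.
        last = prompt.rsplit("Human: ", 1)[-1]
        names = [w.strip(".,!?") for w in last.split() if w[:1].isupper()]
        return ", ".join(dict.fromkeys(names)) or "NONE"


def save_context(llm, entity_store, history, user_input, ai_output):
    """Sketch of the entity-memory flow: extract, summarize, persist."""
    raw = llm.predict(f"Extract proper nouns.\nHuman: {user_input}")
    entities = [] if raw == "NONE" else [e.strip() for e in raw.split(",")]
    for entity in entities:
        existing = entity_store.get(entity, "")
        # The real class prompts the LLM with entity_summarization_prompt
        # here; we just append the new fact to the running summary.
        entity_store[entity] = (existing + " " + user_input).strip()
    history.append((user_input, ai_output))
    return entities
```

Because the store is keyed by entity name, later turns that mention the same entity update its existing summary instead of starting over.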
langchain.memory.summary.SummarizerMixin¶
class langchain.memory.summary.SummarizerMixin(*, human_prefix: str = 'Human', ai_prefix: str = 'AI', llm: ~langchain.base_language.BaseLanguageModel, prompt: ~langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['summary', 'new_lines'], output_parser=None, partial_variables={}, template='Progressively summarize the lines of conversation provided, adding onto the previous summary returning a new summary.\n\nEXAMPLE\nCurrent summary:\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good.\n\nNew lines of conversation:\nHuman: Why do you think artificial intelligence is a force for good?\nAI: Because artificial intelligence will help humans reach their full potential.\n\nNew summary:\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good because it will help humans reach their full potential.\nEND OF EXAMPLE\n\nCurrent summary:\n{summary}\n\nNew lines of conversation:\n{new_lines}\n\nNew summary:', template_format='f-string', validate_template=True), summary_message_cls: ~typing.Type[~langchain.schema.BaseMessage] = <class 'langchain.schema.SystemMessage'>)[source]¶
Bases: BaseModel
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param ai_prefix: str = 'AI'¶
param human_prefix: str = 'Human'¶
param llm: langchain.base_language.BaseLanguageModel [Required]¶
param prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['summary', 'new_lines'], output_parser=None, partial_variables={}, template='Progressively summarize the lines of conversation provided, adding onto the previous summary returning a new summary.\n\nEXAMPLE\nCurrent summary:\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good.\n\nNew lines of conversation:\nHuman: Why do you think artificial intelligence is a force for good?\nAI: Because artificial intelligence will help humans reach their full potential.\n\nNew summary:\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good because it will help humans reach their full potential.\nEND OF EXAMPLE\n\nCurrent summary:\n{summary}\n\nNew lines of conversation:\n{new_lines}\n\nNew summary:', template_format='f-string', validate_template=True)¶
param summary_message_cls: Type[langchain.schema.BaseMessage] = <class 'langchain.schema.SystemMessage'>¶
predict_new_summary(messages: List[BaseMessage], existing_summary: str) → str[source]¶
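The progressive-summarization step renders the prompt template above with the current summary and the newly formatted lines, then asks the LLM for the updated summary. A sketch of that mechanic (the function signature loosely mirrors predict_new_summary; the message tuples and callable llm are simplifying assumptions):

```python
SUMMARY_TEMPLATE = (
    "Progressively summarize the lines of conversation provided, "
    "adding onto the previous summary returning a new summary.\n\n"
    "Current summary:\n{summary}\n\n"
    "New lines of conversation:\n{new_lines}\n\nNew summary:"
)


def predict_new_summary(llm, messages, existing_summary,
                        human_prefix="Human", ai_prefix="AI"):
    """Render the rolling prompt and ask the LLM for the updated summary.

    `messages` is a list of (role, text) tuples with role 'human' or 'ai'.
    """
    new_lines = "\n".join(
        f"{human_prefix if role == 'human' else ai_prefix}: {text}"
        for role, text in messages
    )
    prompt = SUMMARY_TEMPLATE.format(summary=existing_summary, new_lines=new_lines)
    return llm(prompt)
```

Each call folds only the new turns into the summary, so prompt size stays bounded regardless of conversation length.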
https://api.python.langchain.com/en/latest/memory/langchain.memory.summary.SummarizerMixin.html
langchain.memory.combined.CombinedMemory¶
class langchain.memory.combined.CombinedMemory(*, memories: List[BaseMemory])[source]¶
Bases: BaseMemory
Class for combining multiple memories’ data together.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param memories: List[langchain.schema.BaseMemory] [Required]¶
For tracking all the memories that should be accessed.
validator check_input_key » memories[source]¶
Check that if memories are of type BaseChatMemory that input keys exist.
validator check_repeated_memory_variable » memories[source]¶
clear() → None[source]¶
Clear context from this session for every memory.
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, str][source]¶
Load all vars from sub-memories.
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]¶
Save context from this session for every memory.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
property memory_variables: List[str]¶
All the memory variables that this instance provides.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
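The fan-out behavior and the repeated-variable check can be sketched without langchain. StaticMemory is a hypothetical sub-memory; the duplicate check loosely mirrors what the check_repeated_memory_variable validator enforces:

```python
from typing import Any, Dict, List


class StaticMemory:
    """Hypothetical sub-memory exposing one fixed variable."""

    def __init__(self, var: str, value: str) -> None:
        self.var, self.value = var, value

    @property
    def memory_variables(self) -> List[str]:
        return [self.var]

    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:
        return {self.var: self.value}


class CombinedMemorySketch:
    """Fans reads out to sub-memories; rejects overlapping variables."""

    def __init__(self, memories: List[Any]) -> None:
        seen: set = set()
        for m in memories:
            overlap = seen & set(m.memory_variables)
            if overlap:
                raise ValueError(f"Repeated memory variables: {overlap}")
            seen |= set(m.memory_variables)
        self.memories = memories

    @property
    def memory_variables(self) -> List[str]:
        return [v for m in self.memories for v in m.memory_variables]

    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:
        data: Dict[str, str] = {}
        for m in self.memories:
            data.update(m.load_memory_variables(inputs))
        return data
```

Rejecting overlapping variable names up front matters because load_memory_variables merges the sub-dicts; with a collision, one memory would silently shadow another.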
https://api.python.langchain.com/en/latest/memory/langchain.memory.combined.CombinedMemory.html
langchain.memory.chat_message_histories.file.FileChatMessageHistory¶
class langchain.memory.chat_message_histories.file.FileChatMessageHistory(file_path: str)[source]¶
Bases: BaseChatMessageHistory
Chat message history that stores history in a local file.
Parameters
file_path – path of the local file to store the messages.
Methods
__init__(file_path)
add_ai_message(message)
Add an AI message to the store
add_message(message)
Append the message to the record in the local file
add_user_message(message)
Add a user message to the store
clear()
Clear session memory from the local file
Attributes
messages
Retrieve the messages from the local file
add_ai_message(message: str) → None¶
Add an AI message to the store
add_message(message: BaseMessage) → None[source]¶
Append the message to the record in the local file
add_user_message(message: str) → None¶
Add a user message to the store
clear() → None[source]¶
Clear session memory from the local file
property messages: List[langchain.schema.BaseMessage]¶
Retrieve the messages from the local file
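A file-backed history can be sketched with a JSON file of (type, content) records. The class name and the dict message format below are assumptions for illustration; the real class serializes BaseMessage objects:

```python
import json
from pathlib import Path


class FileChatHistorySketch:
    """Sketch of a file-backed chat history stored as a JSON list."""

    def __init__(self, file_path: str) -> None:
        self.file_path = Path(file_path)
        if not self.file_path.exists():
            self.file_path.write_text(json.dumps([]))

    @property
    def messages(self):
        # Re-read on every access so the file stays the source of truth.
        return json.loads(self.file_path.read_text())

    def add_message(self, role: str, content: str) -> None:
        msgs = self.messages
        msgs.append({"type": role, "content": content})
        self.file_path.write_text(json.dumps(msgs))

    def add_user_message(self, content: str) -> None:
        self.add_message("human", content)

    def add_ai_message(self, content: str) -> None:
        self.add_message("ai", content)

    def clear(self) -> None:
        self.file_path.write_text(json.dumps([]))
```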
https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.file.FileChatMessageHistory.html
langchain.memory.chat_message_histories.postgres.PostgresChatMessageHistory¶
class langchain.memory.chat_message_histories.postgres.PostgresChatMessageHistory(session_id: str, connection_string: str = 'postgresql://postgres:mypassword@localhost/chat_history', table_name: str = 'message_store')[source]¶
Bases: BaseChatMessageHistory
Chat message history stored in a Postgres database.
Methods
__init__(session_id[, connection_string, ...])
add_ai_message(message)
Add an AI message to the store
add_message(message)
Append the message to the record in PostgreSQL
add_user_message(message)
Add a user message to the store
clear()
Clear session memory from PostgreSQL
Attributes
messages
Retrieve the messages from PostgreSQL
add_ai_message(message: str) → None¶
Add an AI message to the store
add_message(message: BaseMessage) → None[source]¶
Append the message to the record in PostgreSQL
add_user_message(message: str) → None¶
Add a user message to the store
clear() → None[source]¶
Clear session memory from PostgreSQL
property messages: List[langchain.schema.BaseMessage]¶
Retrieve the messages from PostgreSQL
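The storage pattern is a table of JSON-serialized messages keyed by session_id. The sketch below uses sqlite3 in place of a Postgres driver so it runs anywhere; the SQL shape and the session_id filtering mirror the description, while the class name and dict message format are assumptions:

```python
import json
import sqlite3


class SQLChatHistorySketch:
    """Session-scoped message table, sqlite3 standing in for Postgres."""

    def __init__(self, session_id: str, conn=None,
                 table_name: str = "message_store"):
        self.session_id = session_id
        self.table = table_name
        self.conn = conn or sqlite3.connect(":memory:")
        self.conn.execute(
            f"CREATE TABLE IF NOT EXISTS {self.table} "
            "(id INTEGER PRIMARY KEY, session_id TEXT, message TEXT)"
        )

    def add_message(self, role: str, content: str) -> None:
        self.conn.execute(
            f"INSERT INTO {self.table} (session_id, message) VALUES (?, ?)",
            (self.session_id, json.dumps({"type": role, "content": content})),
        )

    @property
    def messages(self):
        rows = self.conn.execute(
            f"SELECT message FROM {self.table} WHERE session_id = ?",
            (self.session_id,),
        ).fetchall()
        return [json.loads(r[0]) for r in rows]

    def clear(self) -> None:
        self.conn.execute(
            f"DELETE FROM {self.table} WHERE session_id = ?",
            (self.session_id,),
        )
```

Filtering every query by session_id is what lets many conversations share one table.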
https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.postgres.PostgresChatMessageHistory.html
langchain.memory.simple.SimpleMemory¶
class langchain.memory.simple.SimpleMemory(*, memories: Dict[str, Any] = {})[source]¶
Bases: BaseMemory
Simple memory for storing context or other bits of information that shouldn’t
ever change between prompts.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param memories: Dict[str, Any] = {}¶
clear() → None[source]¶
Nothing to clear, got a memory like a vault.
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, str][source]¶
Return key-value pairs given the text input to the chain.
If None, return all memories
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]¶
Nothing should be saved or changed, my memory is set in stone.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
property memory_variables: List[str]¶
Input keys this memory class will load dynamically.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
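The contract above is tiny: a fixed mapping returned verbatim on every load, with save_context and clear as deliberate no-ops. A minimal sketch (class name hypothetical):

```python
from typing import Any, Dict, Optional


class SimpleMemorySketch:
    """Fixed key-value context returned unchanged on every load."""

    def __init__(self, memories: Optional[Dict[str, Any]] = None) -> None:
        self.memories = dict(memories or {})

    @property
    def memory_variables(self):
        return list(self.memories)

    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
        return dict(self.memories)

    def save_context(self, inputs, outputs) -> None:
        pass  # set in stone

    def clear(self) -> None:
        pass  # nothing to clear
```

This is handy for injecting constants (a persona, a project name) into every prompt of a chain.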
https://api.python.langchain.com/en/latest/memory/langchain.memory.simple.SimpleMemory.html
langchain.memory.chat_message_histories.cosmos_db.CosmosDBChatMessageHistory¶
class langchain.memory.chat_message_histories.cosmos_db.CosmosDBChatMessageHistory(cosmos_endpoint: str, cosmos_database: str, cosmos_container: str, session_id: str, user_id: str, credential: Any = None, connection_string: Optional[str] = None, ttl: Optional[int] = None, cosmos_client_kwargs: Optional[dict] = None)[source]¶
Bases: BaseChatMessageHistory
Chat history backed by Azure CosmosDB.
Initializes a new instance of the CosmosDBChatMessageHistory class.
Make sure to call prepare_cosmos or use the context manager to make
sure your database is ready.
Either a credential or a connection string must be provided.
Parameters
cosmos_endpoint – The connection endpoint for the Azure Cosmos DB account.
cosmos_database – The name of the database to use.
cosmos_container – The name of the container to use.
session_id – The session ID to use, can be overwritten while loading.
user_id – The user ID to use, can be overwritten while loading.
credential – The credential to use to authenticate to Azure Cosmos DB.
connection_string – The connection string to use to authenticate.
ttl – The time to live (in seconds) to use for documents in the container.
cosmos_client_kwargs – Additional kwargs to pass to the CosmosClient.
Methods
__init__(cosmos_endpoint, cosmos_database, ...)
Initializes a new instance of the CosmosDBChatMessageHistory class.
add_ai_message(message)
Add an AI message to the store
add_message(message)
Add a self-created message to the store
add_user_message(message)
Add a user message to the store
clear()
Clear session memory from this memory and cosmos.
load_messages()
Retrieve the messages from Cosmos
prepare_cosmos()
Prepare the CosmosDB client.
upsert_messages()
Update the cosmosdb item.
Attributes
messages
add_ai_message(message: str) → None¶
Add an AI message to the store
add_message(message: BaseMessage) → None[source]¶
Add a self-created message to the store
add_user_message(message: str) → None¶
Add a user message to the store
clear() → None[source]¶
Clear session memory from this memory and cosmos.
load_messages() → None[source]¶
Retrieve the messages from Cosmos
prepare_cosmos() → None[source]¶
Prepare the CosmosDB client.
Use this function or the context manager to make sure your database is ready.
upsert_messages() → None[source]¶
Update the cosmosdb item.
messages: List[BaseMessage]¶
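The load/upsert pattern above keeps the whole session's message list in one document and rewrites it on change. A sketch with a plain dict standing in for the Cosmos container (the document shape and class name are assumptions; the real class goes through the azure-cosmos SDK):

```python
from typing import Dict, List, Tuple


class CosmosHistorySketch:
    """One document per (user_id, session_id), re-upserted on every write."""

    def __init__(self, container: Dict[Tuple[str, str], dict],
                 session_id: str, user_id: str) -> None:
        self.container = container
        self.key = (user_id, session_id)
        self.messages: List[dict] = []

    def load_messages(self) -> None:
        # Pull the existing document for this session, if any.
        doc = self.container.get(self.key)
        self.messages = list(doc["messages"]) if doc else []

    def upsert_messages(self) -> None:
        # Write the whole message list back as one document.
        self.container[self.key] = {
            "id": self.key[1],
            "messages": list(self.messages),
        }

    def add_message(self, role: str, content: str) -> None:
        self.messages.append({"type": role, "content": content})
        self.upsert_messages()

    def clear(self) -> None:
        self.messages = []
        self.container.pop(self.key, None)
```

Calling load_messages before use corresponds to the "call prepare_cosmos or use the context manager" requirement in the real class.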
langchain.memory.kg.ConversationKGMemory¶
https://api.python.langchain.com/en/latest/memory/langchain.memory.kg.ConversationKGMemory.html
class langchain.memory.kg.ConversationKGMemory(*, chat_memory: ~langchain.schema.BaseChatMessageHistory = None, output_key: ~typing.Optional[str] = None, input_key: ~typing.Optional[str] = None, return_messages: bool = False, k: int = 2, human_prefix: str = 'Human', ai_prefix: str = 'AI', kg: ~langchain.graphs.networkx_graph.NetworkxEntityGraph = None, knowledge_extraction_prompt: ~langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template="You are a networked intelligence helping a human track knowledge triples about all relevant people, things, concepts, etc. and integrating them with your knowledge stored within your weights as well as that stored in a knowledge graph. Extract all of the knowledge triples from the last line of conversation. A knowledge triple is a clause that contains a subject, a predicate, and an object. The subject is the entity being described, the predicate is the property of the subject that is being described, and the object is the value of the property.\n\nEXAMPLE\nConversation history:\nPerson #1: Did you hear aliens landed in Area 51?\nAI: No, I didn't hear that. What do you know about Area 51?\nPerson #1: It's a secret military base in Nevada.\nAI: What do you know about Nevada?\nLast line of conversation:\nPerson #1: It's a state in the US. It's also the number 1 producer of gold in the US.\n\nOutput: (Nevada, is a, state)<|>(Nevada, is in, US)<|>(Nevada, is the number 1 producer of, gold)\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: Hello.\nAI: Hi! How are you?\nPerson #1: I'm good. How are you?\nAI: I'm good too.\nLast line of conversation:\nPerson #1: I'm going to the store.\n\nOutput: NONE\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: What do you know about Descartes?\nAI: Descartes was a French philosopher, mathematician, and scientist who lived in the 17th century.\nPerson #1: The Descartes I'm referring to is a standup comedian and interior designer from Montreal.\nAI: Oh yes, He is a comedian and an interior designer. He has been in the industry for 30 years. His favorite food is baked bean pie.\nLast line of conversation:\nPerson #1: Oh huh. I know Descartes likes to drive antique scooters and play the mandolin.\nOutput: (Descartes, likes to drive, antique scooters)<|>(Descartes, plays, mandolin)\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for extraction):\nHuman: {input}\n\nOutput:", template_format='f-string', validate_template=True), entity_extraction_prompt: ~langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last line of conversation. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.\n\nThe conversation history is provided just in case of a coreference (e.g. "What do you know about him" where "him" is defined in a previous line) -- ignore items mentioned there that are not in the last line.\n\nReturn the output as a single comma-separated list, or NONE if there is nothing of note to return (e.g. the user is just issuing a greeting or having a simple conversation).\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\nOutput: Langchain\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I\'m working with Person #2.\nOutput: Langchain, Person #2\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for extraction):\nHuman: {input}\n\nOutput:', template_format='f-string', validate_template=True), llm: ~langchain.base_language.BaseLanguageModel, summary_message_cls: ~typing.Type[~langchain.schema.BaseMessage] = <class 'langchain.schema.SystemMessage'>, memory_key: str = 'history')[source]¶
|
https://api.python.langchain.com/en/latest/memory/langchain.memory.kg.ConversationKGMemory.html
|
5416e0286729-5
|
Bases: BaseChatMemory
Knowledge graph memory for storing conversation memory.
Integrates with external knowledge graph to store and retrieve
information about knowledge triples in the conversation.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param ai_prefix: str = 'AI'¶
param chat_memory: BaseChatMessageHistory [Optional]¶
param entity_extraction_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last line of conversation. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.\n\nThe conversation history is provided just in case of a coreference (e.g. "What do you know about him" where "him" is defined in a previous line) -- ignore items mentioned there that are not in the last line.\n\nReturn the output as a single comma-separated list, or NONE if there is nothing of note to return (e.g. the user is just issuing a greeting or having a simple conversation).\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\nOutput: Langchain\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the
UX, its integrations with various products the user might want ... a lot of stuff. I\'m working with Person #2.\nOutput: Langchain, Person #2\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for extraction):\nHuman: {input}\n\nOutput:', template_format='f-string', validate_template=True)¶
param human_prefix: str = 'Human'¶
param input_key: Optional[str] = None¶
param k: int = 2¶
param kg: langchain.graphs.networkx_graph.NetworkxEntityGraph [Optional]¶
param knowledge_extraction_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template="You are a networked intelligence helping a human track knowledge triples about all relevant people, things, concepts, etc. and integrating them with your knowledge stored within your weights as well as that stored in a knowledge graph. Extract all of the knowledge triples from the last line of conversation. A knowledge triple is a clause that contains a subject, a predicate, and an object. The subject is the entity being described, the predicate is the property of the subject that is being described, and the object is the value of the property.\n\nEXAMPLE\nConversation history:\nPerson #1: Did you hear aliens landed in Area 51?\nAI: No, I didn't hear that. What do you know about Area 51?\nPerson #1: It's a secret military base in Nevada.\nAI: What do you know about Nevada?\nLast line of conversation:\nPerson #1: It's a state in the US. It's also the number 1 producer of gold in the US.\n\nOutput: (Nevada, is a, state)<|>(Nevada, is in, US)<|>(Nevada, is the number 1 producer of, gold)\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: Hello.\nAI: Hi! How are you?\nPerson #1: I'm good. How are you?\nAI: I'm good too.\nLast line of conversation:\nPerson #1: I'm going to the store.\n\nOutput: NONE\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: What do you know about Descartes?\nAI: Descartes was a French philosopher, mathematician, and scientist who lived in the 17th
century.\nPerson #1: The Descartes I'm referring to is a standup comedian and interior designer from Montreal.\nAI: Oh yes, He is a comedian and an interior designer. He has been in the industry for 30 years. His favorite food is baked bean pie.\nLast line of conversation:\nPerson #1: Oh huh. I know Descartes likes to drive antique scooters and play the mandolin.\nOutput: (Descartes, likes to drive, antique scooters)<|>(Descartes, plays, mandolin)\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for extraction):\nHuman: {input}\n\nOutput:", template_format='f-string', validate_template=True)¶
param llm: langchain.base_language.BaseLanguageModel [Required]¶
param output_key: Optional[str] = None¶
param return_messages: bool = False¶
param summary_message_cls: Type[langchain.schema.BaseMessage] = <class 'langchain.schema.SystemMessage'>¶
Number of previous utterances to include in the context.
clear() → None[source]¶
Clear memory contents.
get_current_entities(input_string: str) → List[str][source]¶
get_knowledge_triplets(input_string: str) → List[KnowledgeTriple][source]¶
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any][source]¶
Return history buffer.
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]¶
Save context from this conversation to buffer.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
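The knowledge-extraction prompt above instructs the model to return triples as `(subject, predicate, object)` clauses joined by `<|>`, or the literal string `NONE`. As an illustration only (this is a sketch, not the library's own parser, and `parse_triples` is a hypothetical name), such a response could be split into tuples like this:

```python
from typing import List, Tuple

def parse_triples(response: str, delimiter: str = "<|>") -> List[Tuple[str, str, str]]:
    """Parse '(subj, pred, obj)<|>(subj, pred, obj)' LLM output into tuples."""
    if not response or response.strip() == "NONE":
        return []
    triples = []
    for part in response.split(delimiter):
        part = part.strip()
        # Drop the surrounding parentheses if present.
        if part.startswith("(") and part.endswith(")"):
            part = part[1:-1]
        fields = [f.strip() for f in part.split(",")]
        if len(fields) == 3:
            triples.append((fields[0], fields[1], fields[2]))
    return triples
```

Note that this naive comma split would mishandle triples whose fields themselves contain commas; it is only meant to show the shape of the format.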
https://api.python.langchain.com/en/latest/memory/langchain.memory.kg.ConversationKGMemory.html
langchain.memory.utils.get_prompt_input_key¶
langchain.memory.utils.get_prompt_input_key(inputs: Dict[str, Any], memory_variables: List[str]) → str[source]¶
Get the prompt input key.
Parameters
inputs – Dict[str, Any]
memory_variables – List[str]
Returns
A prompt input key.
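A sketch of one way to implement this lookup, assuming the prompt input is the single key in `inputs` that is neither a memory variable nor the reserved `stop` key (an assumption about the selection rule, not a copy of the library source):

```python
from typing import Any, Dict, List

def get_prompt_input_key(inputs: Dict[str, Any], memory_variables: List[str]) -> str:
    # Candidate keys: everything the caller passed minus keys that memory
    # itself supplies and the "stop" sequence list.
    prompt_input_keys = list(set(inputs) - set(memory_variables) - {"stop"})
    if len(prompt_input_keys) != 1:
        raise ValueError(f"One input key expected, got {prompt_input_keys}")
    return prompt_input_keys[0]
```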
https://api.python.langchain.com/en/latest/memory/langchain.memory.utils.get_prompt_input_key.html
langchain.memory.buffer_window.ConversationBufferWindowMemory¶
class langchain.memory.buffer_window.ConversationBufferWindowMemory(*, chat_memory: BaseChatMessageHistory = None, output_key: Optional[str] = None, input_key: Optional[str] = None, return_messages: bool = False, human_prefix: str = 'Human', ai_prefix: str = 'AI', memory_key: str = 'history', k: int = 5)[source]¶
Bases: BaseChatMemory
Buffer for storing conversation memory.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param ai_prefix: str = 'AI'¶
param chat_memory: BaseChatMessageHistory [Optional]¶
param human_prefix: str = 'Human'¶
param input_key: Optional[str] = None¶
param k: int = 5¶
param output_key: Optional[str] = None¶
param return_messages: bool = False¶
clear() → None¶
Clear memory contents.
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, str][source]¶
Return history buffer.
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None¶
Save context from this conversation to buffer.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property buffer: List[langchain.schema.BaseMessage]¶
String buffer of memory.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
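The `k` parameter above bounds how much history is rendered: only the most recent `k` human/AI exchanges reach the prompt. A plain-Python sketch of that windowing idea (independent of langchain; names here are illustrative):

```python
from typing import List, Tuple

def window_buffer(exchanges: List[Tuple[str, str]], k: int = 5,
                  human_prefix: str = "Human", ai_prefix: str = "AI") -> str:
    """Render only the last k (human, ai) exchanges as a history string."""
    lines = []
    for human, ai in exchanges[-k:]:  # keep only the k most recent exchanges
        lines.append(f"{human_prefix}: {human}")
        lines.append(f"{ai_prefix}: {ai}")
    return "\n".join(lines)
```

Keeping the window small trades recall of old turns for a prompt that stays within the model's context budget.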
https://api.python.langchain.com/en/latest/memory/langchain.memory.buffer_window.ConversationBufferWindowMemory.html
langchain.memory.chat_message_histories.in_memory.ChatMessageHistory¶
class langchain.memory.chat_message_histories.in_memory.ChatMessageHistory(*, messages: List[BaseMessage] = [])[source]¶
Bases: BaseChatMessageHistory, BaseModel
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param messages: List[langchain.schema.BaseMessage] = []¶
add_ai_message(message: str) → None¶
Add an AI message to the store
add_message(message: BaseMessage) → None[source]¶
Add a self-created message to the store
add_user_message(message: str) → None¶
Add a user message to the store
clear() → None[source]¶
Remove all messages from the store
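The contract above is small: append user/AI messages and clear the store. A plain-Python sketch of that contract, using simple `(role, text)` tuples in place of `BaseMessage` objects (the class name is illustrative, not part of the library):

```python
from typing import List, Tuple

class InMemoryHistory:
    """Minimal stand-in for an in-memory chat message store."""

    def __init__(self) -> None:
        self.messages: List[Tuple[str, str]] = []

    def add_user_message(self, message: str) -> None:
        self.messages.append(("human", message))

    def add_ai_message(self, message: str) -> None:
        self.messages.append(("ai", message))

    def clear(self) -> None:
        self.messages = []
```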
https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.in_memory.ChatMessageHistory.html
langchain.memory.motorhead_memory.MotorheadMemory¶
class langchain.memory.motorhead_memory.MotorheadMemory(*, chat_memory: BaseChatMessageHistory = None, output_key: Optional[str] = None, input_key: Optional[str] = None, return_messages: bool = False, url: str = 'https://api.getmetal.io/v1/motorhead', session_id: str, context: Optional[str] = None, api_key: Optional[str] = None, client_id: Optional[str] = None, timeout: int = 3000, memory_key: str = 'history')[source]¶
Bases: BaseChatMemory
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param api_key: Optional[str] = None¶
param chat_memory: BaseChatMessageHistory [Optional]¶
param client_id: Optional[str] = None¶
param context: Optional[str] = None¶
param input_key: Optional[str] = None¶
param output_key: Optional[str] = None¶
param return_messages: bool = False¶
param session_id: str [Required]¶
param url: str = 'https://api.getmetal.io/v1/motorhead'¶
clear() → None¶
Clear memory contents.
delete_session() → None[source]¶
Delete a session
async init() → None[source]¶
load_memory_variables(values: Dict[str, Any]) → Dict[str, Any][source]¶
Return key-value pairs given the text input to the chain.
If None, return all memories
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]¶
Save context from this conversation to buffer.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
property memory_variables: List[str]¶
Input keys this memory class will load dynamically.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
https://api.python.langchain.com/en/latest/memory/langchain.memory.motorhead_memory.MotorheadMemory.html
langchain.load.serializable.BaseSerialized¶
class langchain.load.serializable.BaseSerialized[source]¶
Bases: TypedDict
Base class for serialized objects.
Methods
__init__(*args, **kwargs)
clear()
copy()
fromkeys([value])
Create a new dictionary with keys from iterable and values set to value.
get(key[, default])
Return the value for key if key is in the dictionary, else default.
items()
keys()
pop(k[,d])
If the key is not found, return the default if given; otherwise, raise a KeyError.
popitem()
Remove and return a (key, value) pair as a 2-tuple.
setdefault(key[, default])
Insert key with a value of default if key is not in the dictionary.
update([E, ]**F)
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]
values()
Attributes
lc
id
clear() → None. Remove all items from D.¶
copy() → a shallow copy of D¶
fromkeys(value=None, /)¶
Create a new dictionary with keys from iterable and values set to value.
get(key, default=None, /)¶
Return the value for key if key is in the dictionary, else default.
items() → a set-like object providing a view on D's items¶
keys() → a set-like object providing a view on D's keys¶
pop(k[, d]) → v, remove specified key and return the corresponding value.¶
If the key is not found, return the default if given; otherwise,
raise a KeyError.
popitem()¶
Remove and return a (key, value) pair as a 2-tuple.
Pairs are returned in LIFO (last-in, first-out) order.
Raises KeyError if the dict is empty.
setdefault(key, default=None, /)¶
Insert key with a value of default if key is not in the dictionary.
Return the value for key if key is in the dictionary, else default.
update([E, ]**F) → None. Update D from dict/iterable E and F.¶
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k]
If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v
In either case, this is followed by: for k in F: D[k] = F[k]
values() → an object providing a view on D's values¶
id: List[str]¶
lc: int¶
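The serialized envelope documented here (`lc`, `id`) is extended by the subclasses below with `type` and, for constructors, `kwargs`. An illustrative sketch of that shape as plain `TypedDict`s (not the library code; the `*Sketch` names are hypothetical):

```python
from typing import Any, Dict, List, TypedDict

class BaseSerializedSketch(TypedDict):
    lc: int        # serialization format version marker
    id: List[str]  # fully qualified path, e.g. ["langchain", "llms", "openai", "OpenAI"]

class SerializedConstructorSketch(BaseSerializedSketch):
    type: str               # "constructor" (a Literal in the real class)
    kwargs: Dict[str, Any]  # constructor arguments to replay on load
```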
https://api.python.langchain.com/en/latest/load/langchain.load.serializable.BaseSerialized.html
langchain.load.serializable.SerializedSecret¶
class langchain.load.serializable.SerializedSecret[source]¶
Bases: dict
Serialized secret.
Methods
__init__(*args, **kwargs)
clear()
copy()
fromkeys([value])
Create a new dictionary with keys from iterable and values set to value.
get(key[, default])
Return the value for key if key is in the dictionary, else default.
items()
keys()
pop(k[,d])
If the key is not found, return the default if given; otherwise, raise a KeyError.
popitem()
Remove and return a (key, value) pair as a 2-tuple.
setdefault(key[, default])
Insert key with a value of default if key is not in the dictionary.
update([E, ]**F)
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]
values()
Attributes
type
clear() → None. Remove all items from D.¶
copy() → a shallow copy of D¶
fromkeys(value=None, /)¶
Create a new dictionary with keys from iterable and values set to value.
get(key, default=None, /)¶
Return the value for key if key is in the dictionary, else default.
items() → a set-like object providing a view on D's items¶
keys() → a set-like object providing a view on D's keys¶
pop(k[, d]) → v, remove specified key and return the corresponding value.¶
If the key is not found, return the default if given; otherwise,
raise a KeyError.
popitem()¶
Remove and return a (key, value) pair as a 2-tuple.
Pairs are returned in LIFO (last-in, first-out) order.
Raises KeyError if the dict is empty.
setdefault(key, default=None, /)¶
Insert key with a value of default if key is not in the dictionary.
Return the value for key if key is in the dictionary, else default.
update([E, ]**F) → None. Update D from dict/iterable E and F.¶
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k]
If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v
In either case, this is followed by: for k in F: D[k] = F[k]
values() → an object providing a view on D's values¶
id: List[str]¶
lc: int¶
type: Literal['secret']¶
https://api.python.langchain.com/en/latest/load/langchain.load.serializable.SerializedSecret.html
langchain.load.load.loads¶
langchain.load.load.loads(text: str, *, secrets_map: Optional[Dict[str, str]] = None) → Any[source]¶
Load a LangChain object from a JSON string produced by dumps, substituting serialized secrets with values from secrets_map.
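One way the `secrets_map` can be applied during deserialization (an illustrative sketch only, not the library's implementation): walk the parsed JSON and replace any node shaped like a SerializedSecret (`type: 'secret'` with an `id` list, as documented above) with the matching value from the map.

```python
from typing import Any, Dict

def resolve_secrets(node: Any, secrets_map: Dict[str, str]) -> Any:
    """Recursively swap {'type': 'secret', 'id': [...]} nodes for real values."""
    if isinstance(node, dict):
        if node.get("type") == "secret" and "id" in node:
            for secret_id in node["id"]:
                if secret_id in secrets_map:
                    return secrets_map[secret_id]
            return node  # no matching secret provided; leave the marker intact
        return {k: resolve_secrets(v, secrets_map) for k, v in node.items()}
    if isinstance(node, list):
        return [resolve_secrets(v, secrets_map) for v in node]
    return node
```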
https://api.python.langchain.com/en/latest/load/langchain.load.load.loads.html
langchain.load.dump.dumpd¶
langchain.load.dump.dumpd(obj: Any) → Dict[str, Any][source]¶
Return a json dict representation of an object.
https://api.python.langchain.com/en/latest/load/langchain.load.dump.dumpd.html
langchain.load.serializable.SerializedNotImplemented¶
class langchain.load.serializable.SerializedNotImplemented[source]¶
Bases: dict
Serialized not implemented.
Methods
__init__(*args, **kwargs)
clear()
copy()
fromkeys([value])
Create a new dictionary with keys from iterable and values set to value.
get(key[, default])
Return the value for key if key is in the dictionary, else default.
items()
keys()
pop(k[,d])
If the key is not found, return the default if given; otherwise, raise a KeyError.
popitem()
Remove and return a (key, value) pair as a 2-tuple.
setdefault(key[, default])
Insert key with a value of default if key is not in the dictionary.
update([E, ]**F)
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]
values()
Attributes
type
clear() → None. Remove all items from D.¶
copy() → a shallow copy of D¶
fromkeys(value=None, /)¶
Create a new dictionary with keys from iterable and values set to value.
get(key, default=None, /)¶
Return the value for key if key is in the dictionary, else default.
items() → a set-like object providing a view on D's items¶
keys() → a set-like object providing a view on D's keys¶
pop(k[, d]) → v, remove specified key and return the corresponding value.¶
If the key is not found, return the default if given; otherwise,
raise a KeyError.
popitem()¶
Remove and return a (key, value) pair as a 2-tuple.
Pairs are returned in LIFO (last-in, first-out) order.
Raises KeyError if the dict is empty.
setdefault(key, default=None, /)¶
Insert key with a value of default if key is not in the dictionary.
Return the value for key if key is in the dictionary, else default.
update([E, ]**F) → None. Update D from dict/iterable E and F.¶
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k]
If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v
In either case, this is followed by: for k in F: D[k] = F[k]
values() → an object providing a view on D's values¶
id: List[str]¶
lc: int¶
type: Literal['not_implemented']¶
https://api.python.langchain.com/en/latest/load/langchain.load.serializable.SerializedNotImplemented.html
langchain.load.dump.dumps¶
langchain.load.dump.dumps(obj: Any, *, pretty: bool = False) → str[source]¶
Return a json string representation of an object.
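A plausible reading of the `pretty` flag is that it toggles JSON indentation over the same dict representation that dumpd produces. A minimal sketch of that behavior over plain dicts (an assumption about the flag, not the library's exact implementation):

```python
import json
from typing import Any

def dumps_sketch(obj: Any, *, pretty: bool = False) -> str:
    """json.dumps over a plain dict representation; pretty toggles indentation."""
    return json.dumps(obj, indent=2 if pretty else None)
```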
https://api.python.langchain.com/en/latest/load/langchain.load.dump.dumps.html
langchain.load.serializable.Serializable¶
class langchain.load.serializable.Serializable[source]¶
Bases: BaseModel, ABC
Serializable base class.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
to_json() → Union[SerializedConstructor, SerializedNotImplemented][source]¶
to_json_not_implemented() → SerializedNotImplemented[source]¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
extra = 'ignore'¶
https://api.python.langchain.com/en/latest/load/langchain.load.serializable.Serializable.html
langchain.load.serializable.SerializedConstructor¶
class langchain.load.serializable.SerializedConstructor[source]¶
Bases: dict
Serialized constructor.
Methods
__init__(*args, **kwargs)
clear()
copy()
fromkeys([value])
Create a new dictionary with keys from iterable and values set to value.
get(key[, default])
Return the value for key if key is in the dictionary, else default.
items()
keys()
pop(k[,d])
If the key is not found, return the default if given; otherwise, raise a KeyError.
popitem()
Remove and return a (key, value) pair as a 2-tuple.
setdefault(key[, default])
Insert key with a value of default if key is not in the dictionary.
update([E, ]**F)
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]
values()
Attributes
type
kwargs
clear() → None. Remove all items from D.¶
copy() → a shallow copy of D¶
fromkeys(value=None, /)¶
Create a new dictionary with keys from iterable and values set to value.
get(key, default=None, /)¶
Return the value for key if key is in the dictionary, else default.
items() → a set-like object providing a view on D's items¶
keys() → a set-like object providing a view on D's keys¶
pop(k[, d]) → v, remove specified key and return the corresponding value.¶
If the key is not found, return the default if given; otherwise,
raise a KeyError.
popitem()¶
Remove and return a (key, value) pair as a 2-tuple.
Pairs are returned in LIFO (last-in, first-out) order.
Raises KeyError if the dict is empty.
setdefault(key, default=None, /)¶
Insert key with a value of default if key is not in the dictionary.
Return the value for key if key is in the dictionary, else default.
update([E, ]**F) → None. Update D from dict/iterable E and F.¶
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k]
If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v
In either case, this is followed by: for k in F: D[k] = F[k]
values() → an object providing a view on D's values¶
id: List[str]¶
kwargs: Dict[str, Any]¶
lc: int¶
type: Literal['constructor']¶
https://api.python.langchain.com/en/latest/load/langchain.load.serializable.SerializedConstructor.html
langchain.load.dump.default¶
langchain.load.dump.default(obj: Any) → Any[source]¶
Return a default value for a Serializable object or
a SerializedNotImplemented object.
https://api.python.langchain.com/en/latest/load/langchain.load.dump.default.html
langchain.load.serializable.to_json_not_implemented¶
langchain.load.serializable.to_json_not_implemented(obj: object) → SerializedNotImplemented[source]¶
Serialize a “not implemented” object.
Parameters
obj – object to serialize
Returns
SerializedNotImplemented
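An illustrative sketch of producing such a "not implemented" marker for an object that cannot be serialized, matching the attributes documented earlier for SerializedNotImplemented (`lc`, `type: 'not_implemented'`, `id`); the function name and the `lc` value used here are assumptions:

```python
from typing import Any, Dict

def to_json_not_implemented_sketch(obj: Any) -> Dict[str, Any]:
    """Record where an unserializable object came from, not its contents."""
    try:
        path = [obj.__module__, obj.__qualname__]            # classes, functions
    except AttributeError:
        path = [type(obj).__module__, type(obj).__qualname__]  # instances
    return {"lc": 1, "type": "not_implemented", "id": path}
```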
https://api.python.langchain.com/en/latest/load/langchain.load.serializable.to_json_not_implemented.html