| id | text | source |
|---|---|---|
0b5c55164a55-2
|
Return a full header of the agent’s status, summary, and current time.
get_summary(force_refresh: bool = False, now: Optional[datetime] = None) → str[source]¶
Return a descriptive summary of the agent.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
summarize_related_memories(observation: str) → str[source]¶
Summarize memories that are most relevant to an observation.
classmethod update_forward_refs(**localns: Any) → None¶
|
https://api.python.langchain.com/en/latest/generative_agents/langchain_experimental.generative_agents.generative_agent.GenerativeAgent.html
|
0b5c55164a55-3
|
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
|
https://api.python.langchain.com/en/latest/generative_agents/langchain_experimental.generative_agents.generative_agent.GenerativeAgent.html
|
8e75f318102f-0
|
langchain_experimental.generative_agents.memory.GenerativeAgentMemory¶
class langchain_experimental.generative_agents.memory.GenerativeAgentMemory[source]¶
Bases: BaseMemory
Memory for the generative agent.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param add_memory_key: str = 'add_memory'¶
param aggregate_importance: float = 0.0¶
Track the sum of the ‘importance’ of recent memories.
Triggers reflection when it reaches reflection_threshold.
param current_plan: List[str] = []¶
The current plan of the agent.
param importance_weight: float = 0.15¶
How much weight to assign the memory importance.
param llm: langchain.schema.language_model.BaseLanguageModel [Required]¶
The core language model.
param max_tokens_limit: int = 1200¶
param memory_retriever: langchain.retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever [Required]¶
The retriever to fetch related memories.
param most_recent_memories_key: str = 'most_recent_memories'¶
param most_recent_memories_token_key: str = 'recent_memories_token'¶
param now_key: str = 'now'¶
param queries_key: str = 'queries'¶
param reflecting: bool = False¶
param reflection_threshold: Optional[float] = None¶
When aggregate_importance exceeds reflection_threshold, stop to reflect.
param relevant_memories_key: str = 'relevant_memories'¶
param relevant_memories_simple_key: str = 'relevant_memories_simple'¶
param verbose: bool = False¶
add_memories(memory_content: str, now: Optional[datetime] = None) → List[str][source]¶
|
https://api.python.langchain.com/en/latest/generative_agents/langchain_experimental.generative_agents.memory.GenerativeAgentMemory.html
|
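The interplay of `aggregate_importance`, `importance_weight`, and `reflection_threshold` described in this chunk can be sketched as follows. This is a hypothetical simplification: the real `GenerativeAgentMemory` asks the LLM to score each memory's importance, whereas here the score is passed in directly, and `pause_to_reflect` is replaced by a counter.

```python
class ReflectionTriggerSketch:
    """Sketch of GenerativeAgentMemory's reflection trigger (simplified)."""

    def __init__(self, reflection_threshold=None, importance_weight=0.15):
        self.reflection_threshold = reflection_threshold
        self.importance_weight = importance_weight
        self.aggregate_importance = 0.0
        self.reflecting = False
        self.reflections = 0  # stand-in for calls to pause_to_reflect()

    def add_memory(self, memory_content: str, importance_score: float) -> None:
        # Weighted importance accumulates until it crosses the threshold.
        self.aggregate_importance += self.importance_weight * importance_score
        if (
            self.reflection_threshold is not None
            and self.aggregate_importance > self.reflection_threshold
            and not self.reflecting
        ):
            self.reflecting = True
            self.reflections += 1            # reflect on recent observations
            self.aggregate_importance = 0.0  # reset after reflecting
            self.reflecting = False
```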
8e75f318102f-1
|
Add observations or memories to the agent’s memory.

add_memory(memory_content: str, now: Optional[datetime] = None) → List[str][source]¶
Add an observation or memory to the agent’s memory.
chain(prompt: PromptTemplate) → LLMChain[source]¶
clear() → None[source]¶
Clear memory contents.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
|
https://api.python.langchain.com/en/latest/generative_agents/langchain_experimental.generative_agents.memory.GenerativeAgentMemory.html
|
8e75f318102f-2
|
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
fetch_memories(observation: str, now: Optional[datetime] = None) → List[Document][source]¶
Fetch related memories.
format_memories_detail(relevant_memories: List[Document]) → str[source]¶
format_memories_simple(relevant_memories: List[Document]) → str[source]¶
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, str][source]¶
Return key-value pairs given the text input to the chain.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
pause_to_reflect(now: Optional[datetime] = None) → List[str][source]¶
|
https://api.python.langchain.com/en/latest/generative_agents/langchain_experimental.generative_agents.memory.GenerativeAgentMemory.html
|
8e75f318102f-3
|
Reflect on recent observations and generate ‘insights’.
save_context(inputs: Dict[str, Any], outputs: Dict[str, Any]) → None[source]¶
Save the context of this model run to memory.
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
property memory_variables: List[str]¶
Input keys this memory class will load dynamically.
|
https://api.python.langchain.com/en/latest/generative_agents/langchain_experimental.generative_agents.memory.GenerativeAgentMemory.html
|
20b596aa5161-0
|
langchain.utils.utils.guard_import¶
langchain.utils.utils.guard_import(module_name: str, *, pip_name: Optional[str] = None, package: Optional[str] = None) → Any[source]¶
Dynamically imports a module and raises a helpful exception if the module is not
installed.
|
https://api.python.langchain.com/en/latest/utils/langchain.utils.utils.guard_import.html
|
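The guard-import behavior can be sketched with `importlib`; this is a minimal sketch, not the actual implementation, and the exact error wording is an assumption:

```python
import importlib
from typing import Any, Optional

def guard_import(module_name: str, *, pip_name: Optional[str] = None,
                 package: Optional[str] = None) -> Any:
    """Sketch: import a module, or raise an ImportError naming the pip package."""
    try:
        return importlib.import_module(module_name, package)
    except ImportError as e:
        pip = pip_name or module_name.split(".")[0]
        raise ImportError(
            f"Could not import {module_name} python package. "
            f"Please install it with `pip install {pip}`."
        ) from e
```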
7165189d5b71-0
|
langchain.utils.strings.stringify_value¶
langchain.utils.strings.stringify_value(val: Any) → str[source]¶
Stringify a value.
Parameters
val – The value to stringify.
Returns
The stringified value.
Return type
str
|
https://api.python.langchain.com/en/latest/utils/langchain.utils.strings.stringify_value.html
|
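A plausible pure-Python sketch of `stringify_value`, together with the companion `stringify_dict` it recurses into (the recursion into dicts and lists is an assumption based on the two entries' descriptions):

```python
from typing import Any

def stringify_value(val: Any) -> str:
    """Sketch: render any value as a readable string, recursing into containers."""
    if isinstance(val, str):
        return val
    if isinstance(val, dict):
        return "\n" + stringify_dict(val)
    if isinstance(val, (list, tuple)):
        return "\n".join(stringify_value(v) for v in val)
    return str(val)

def stringify_dict(data: dict) -> str:
    """Sketch: render a dict as "key: value" lines."""
    return "\n".join(f"{k}: {stringify_value(v)}" for k, v in data.items())
```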
82ebf17b45d6-0
|
langchain.utils.math.cosine_similarity¶
langchain.utils.math.cosine_similarity(X: Union[List[List[float]], List[ndarray], ndarray], Y: Union[List[List[float]], List[ndarray], ndarray]) → ndarray[source]¶
Row-wise cosine similarity between two equal-width matrices.
|
https://api.python.langchain.com/en/latest/utils/langchain.utils.math.cosine_similarity.html
|
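Row-wise cosine similarity can be sketched in pure Python; note the real function operates on NumPy arrays and returns an `ndarray`, while this dependency-free sketch uses nested lists:

```python
import math
from typing import List, Sequence

def cosine_similarity(X: Sequence[Sequence[float]],
                      Y: Sequence[Sequence[float]]) -> List[List[float]]:
    """Sketch: result[i][j] is the cosine of the angle between X[i] and Y[j]."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    return [[cos(a, b) for b in Y] for a in X]
```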
96ba6bcf9ee0-0
|
langchain.utils.input.get_bolded_text¶
langchain.utils.input.get_bolded_text(text: str) → str[source]¶
Get bolded text.
|
https://api.python.langchain.com/en/latest/utils/langchain.utils.input.get_bolded_text.html
|
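A one-line sketch of `get_bolded_text` using the standard ANSI bold escape sequence (the exact escape codes used upstream are an assumption):

```python
def get_bolded_text(text: str) -> str:
    """Sketch: wrap text in the ANSI bold escape sequence."""
    return f"\033[1m{text}\033[0m"
```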
4017cbe5104b-0
|
langchain.utils.formatting.StrictFormatter¶
class langchain.utils.formatting.StrictFormatter[source]¶
A subclass of formatter that checks for extra keys.
Methods
__init__()
check_unused_args(used_args, args, kwargs)
Check to see if extra parameters are passed.
convert_field(value, conversion)
format(format_string, /, *args, **kwargs)
format_field(value, format_spec)
get_field(field_name, args, kwargs)
get_value(key, args, kwargs)
parse(format_string)
validate_input_variables(format_string, ...)
vformat(format_string, args, kwargs)
Check that no arguments are provided.
__init__()¶
check_unused_args(used_args: Sequence[Union[int, str]], args: Sequence, kwargs: Mapping[str, Any]) → None[source]¶
Check to see if extra parameters are passed.
convert_field(value, conversion)¶
format(format_string, /, *args, **kwargs)¶
format_field(value, format_spec)¶
get_field(field_name, args, kwargs)¶
get_value(key, args, kwargs)¶
parse(format_string)¶
validate_input_variables(format_string: str, input_variables: List[str]) → None[source]¶
vformat(format_string: str, args: Sequence, kwargs: Mapping[str, Any]) → str[source]¶
Check that no arguments are provided.
|
https://api.python.langchain.com/en/latest/utils/langchain.utils.formatting.StrictFormatter.html
|
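The two overridden hooks listed above (`check_unused_args` rejecting extra keys, `vformat` rejecting positional arguments) can be sketched on top of `string.Formatter`; the error messages are assumptions:

```python
from string import Formatter
from typing import Any, List, Mapping, Sequence, Union

class StrictFormatter(Formatter):
    """Sketch: a Formatter that rejects extra keys and positional arguments."""

    def vformat(self, format_string: str, args: Sequence,
                kwargs: Mapping[str, Any]) -> str:
        if len(args) > 0:
            raise ValueError("No positional arguments allowed; use keyword arguments.")
        return super().vformat(format_string, args, kwargs)

    def check_unused_args(self, used_args: Sequence[Union[int, str]],
                          args: Sequence, kwargs: Mapping[str, Any]) -> None:
        extra = set(kwargs).difference(used_args)
        if extra:
            raise KeyError(f"Extra parameters passed: {sorted(extra)}")

    def validate_input_variables(self, format_string: str,
                                 input_variables: List[str]) -> None:
        # Formatting with dummy values surfaces missing/extra variables early.
        dummy = {var: "" for var in input_variables}
        self.format(format_string, **dummy)
```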
6c0d1403a505-0
|
langchain.utils.input.get_color_mapping¶
langchain.utils.input.get_color_mapping(items: List[str], excluded_colors: Optional[List] = None) → Dict[str, str][source]¶
Get a mapping from items to a supported color.
|
https://api.python.langchain.com/en/latest/utils/langchain.utils.input.get_color_mapping.html
|
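The color mapping can be sketched as cycling through a fixed palette; the palette below is an assumption for illustration:

```python
from typing import Dict, List, Optional

def get_color_mapping(items: List[str],
                      excluded_colors: Optional[List[str]] = None) -> Dict[str, str]:
    """Sketch: assign each item a color, cycling through a fixed palette."""
    colors = ["blue", "yellow", "pink", "green", "red"]  # assumed palette
    if excluded_colors is not None:
        colors = [c for c in colors if c not in excluded_colors]
    return {item: colors[i % len(colors)] for i, item in enumerate(items)}
```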
8618c562a532-0
|
langchain.utils.utils.check_package_version¶
langchain.utils.utils.check_package_version(package: str, lt_version: Optional[str] = None, lte_version: Optional[str] = None, gt_version: Optional[str] = None, gte_version: Optional[str] = None) → None[source]¶
Check the version of a package.
|
https://api.python.langchain.com/en/latest/utils/langchain.utils.utils.check_package_version.html
|
81e403e302f3-0
|
langchain.utils.utils.get_pydantic_field_names¶
langchain.utils.utils.get_pydantic_field_names(pydantic_cls: Any) → Set[str][source]¶
Get field names, including aliases, for a pydantic class.
Parameters
pydantic_cls – Pydantic class.
|
https://api.python.langchain.com/en/latest/utils/langchain.utils.utils.get_pydantic_field_names.html
|
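Collecting names plus aliases can be sketched against the pydantic v1 layout, where declared fields live in `__fields__` and each field object carries an `.alias` (this layout is an assumption of the sketch):

```python
from typing import Any, Set

def get_pydantic_field_names(pydantic_cls: Any) -> Set[str]:
    """Sketch: collect declared field names plus any aliases (pydantic v1 style)."""
    all_names = set()
    for name, field in pydantic_cls.__fields__.items():
        all_names.add(name)
        alias = getattr(field, "alias", None)
        if alias:
            all_names.add(alias)
    return all_names
```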
f3ce42742a9d-0
|
langchain.utils.input.get_colored_text¶
langchain.utils.input.get_colored_text(text: str, color: str) → str[source]¶
Get colored text.
|
https://api.python.langchain.com/en/latest/utils/langchain.utils.input.get_colored_text.html
|
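Coloring text amounts to wrapping it in ANSI escape codes; the exact color codes in the mapping below are an assumption for illustration:

```python
_TEXT_COLOR_MAPPING = {  # assumed ANSI codes
    "blue": "36;1",
    "yellow": "33;1",
    "green": "32;1",
    "red": "31;1",
}

def get_colored_text(text: str, color: str) -> str:
    """Sketch: wrap text in ANSI escape codes for the given color."""
    code = _TEXT_COLOR_MAPPING[color]
    return f"\u001b[{code}m{text}\u001b[0m"
```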
5e121653c77f-0
|
langchain.utils.math.cosine_similarity_top_k¶
langchain.utils.math.cosine_similarity_top_k(X: Union[List[List[float]], List[ndarray], ndarray], Y: Union[List[List[float]], List[ndarray], ndarray], top_k: Optional[int] = 5, score_threshold: Optional[float] = None) → Tuple[List[Tuple[int, int]], List[float]][source]¶
Row-wise cosine similarity with optional top-k and score threshold filtering.
Parameters
X – Matrix.
Y – Matrix, same width as X.
top_k – Max number of results to return.
score_threshold – Minimum cosine similarity of results.
Returns
Tuple of two lists. The first contains two-tuples of indices (X_idx, Y_idx); the second contains the corresponding cosine similarities.
|
https://api.python.langchain.com/en/latest/utils/langchain.utils.math.cosine_similarity_top_k.html
|
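Layering the threshold and top-k filtering on row-wise cosine similarity can be sketched in pure Python (the real function is NumPy-based; vectorization details are omitted here):

```python
import math
from typing import List, Optional, Sequence, Tuple

def cosine_similarity_top_k(
    X: Sequence[Sequence[float]],
    Y: Sequence[Sequence[float]],
    top_k: Optional[int] = 5,
    score_threshold: Optional[float] = None,
) -> Tuple[List[Tuple[int, int]], List[float]]:
    """Sketch: all pairwise cosine scores, threshold-filtered, best top_k first."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    scored = [((i, j), cos(a, b))
              for i, a in enumerate(X)
              for j, b in enumerate(Y)]
    if score_threshold is not None:
        scored = [(ij, s) for ij, s in scored if s >= score_threshold]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    if top_k is not None:
        scored = scored[:top_k]
    return [ij for ij, _ in scored], [s for _, s in scored]
```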
b977c0643605-0
|
langchain.utils.utils.xor_args¶
langchain.utils.utils.xor_args(*arg_groups: Tuple[str, ...]) → Callable[source]¶
Validate specified keyword args are mutually exclusive.
|
https://api.python.langchain.com/en/latest/utils/langchain.utils.utils.xor_args.html
|
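Mutual-exclusion validation is naturally a decorator; a minimal sketch (keyword-only checking and the error wording are assumptions):

```python
from functools import wraps
from typing import Callable, Tuple

def xor_args(*arg_groups: Tuple[str, ...]) -> Callable:
    """Sketch: decorator requiring exactly one kwarg per group to be non-None."""
    def decorator(func: Callable) -> Callable:
        @wraps(func)
        def wrapper(*args, **kwargs):
            for group in arg_groups:
                provided = [name for name in group if kwargs.get(name) is not None]
                if len(provided) != 1:
                    raise ValueError(
                        f"Exactly one argument in each group must be provided: {group}"
                    )
            return func(*args, **kwargs)
        return wrapper
    return decorator
```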
76c55d11e1a5-0
|
langchain.utils.utils.raise_for_status_with_text¶
langchain.utils.utils.raise_for_status_with_text(response: Response) → None[source]¶
Raise an error with the response text.
|
https://api.python.langchain.com/en/latest/utils/langchain.utils.utils.raise_for_status_with_text.html
|
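A sketch assuming a requests-style `Response` (duck-typed here, so the example carries no dependency); re-raising as `ValueError` with the body text is the assumed behavior:

```python
def raise_for_status_with_text(response) -> None:
    """Sketch: like response.raise_for_status(), but surface the body text."""
    try:
        response.raise_for_status()
    except Exception as e:
        # Re-raise with the (often more informative) response body attached.
        raise ValueError(response.text) from e
```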
752c37d6c6ba-0
|
langchain.utils.strings.comma_list¶
langchain.utils.strings.comma_list(items: List[Any]) → str[source]¶
Convert a list to a comma-separated string.
|
https://api.python.langchain.com/en/latest/utils/langchain.utils.strings.comma_list.html
|
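The conversion is a one-liner; a sketch:

```python
from typing import Any, List

def comma_list(items: List[Any]) -> str:
    """Sketch: join any items into a comma-separated string."""
    return ", ".join(str(item) for item in items)
```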
907b0c6afc34-0
|
langchain.utils.input.print_text¶
langchain.utils.input.print_text(text: str, color: Optional[str] = None, end: str = '', file: Optional[TextIO] = None) → None[source]¶
Print text with highlighting and no end characters.
|
https://api.python.langchain.com/en/latest/utils/langchain.utils.input.print_text.html
|
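A sketch of `print_text` combining the coloring idea with `end=''`; the ANSI palette is an assumption:

```python
import sys
from typing import Optional, TextIO

_TEXT_COLOR_MAPPING = {"blue": "36;1", "yellow": "33;1", "green": "32;1", "red": "31;1"}

def print_text(text: str, color: Optional[str] = None,
               end: str = "", file: Optional[TextIO] = None) -> None:
    """Sketch: print text, optionally ANSI-colored, with no trailing newline
    by default (end='')."""
    if color is not None:
        text = f"\u001b[{_TEXT_COLOR_MAPPING[color]}m{text}\u001b[0m"
    print(text, end=end, file=file or sys.stdout, flush=True)
```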
845e65141467-0
|
langchain.utils.utils.mock_now¶
langchain.utils.utils.mock_now(dt_value)[source]¶
Context manager for mocking out datetime.now() in unit tests.
Example:
with mock_now(datetime.datetime(2011, 2, 3, 10, 11)):
    assert datetime.datetime.now() == datetime.datetime(2011, 2, 3, 10, 11)
|
https://api.python.langchain.com/en/latest/utils/langchain.utils.utils.mock_now.html
|
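One way to implement such a context manager, sketched here (swapping in a `datetime.datetime` subclass whose `now()` is fixed; the actual implementation may differ):

```python
import datetime
from contextlib import contextmanager

@contextmanager
def mock_now(dt_value):
    """Sketch: temporarily replace datetime.datetime with a subclass whose
    now() returns a fixed value; the real class is restored on exit."""
    class MockDateTime(datetime.datetime):
        @classmethod
        def now(cls, tz=None):
            return dt_value

    real_datetime = datetime.datetime
    datetime.datetime = MockDateTime
    try:
        yield
    finally:
        datetime.datetime = real_datetime
```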
ffc666bfaf31-0
|
langchain.utils.env.get_from_env¶
langchain.utils.env.get_from_env(key: str, env_key: str, default: Optional[str] = None) → str[source]¶
Get a value from an environment variable, falling back to a default.
|
https://api.python.langchain.com/en/latest/utils/langchain.utils.env.get_from_env.html
|
f279f557f9be-0
|
langchain.utils.strings.stringify_dict¶
langchain.utils.strings.stringify_dict(data: dict) → str[source]¶
Stringify a dictionary.
Parameters
data – The dictionary to stringify.
Returns
The stringified dictionary.
Return type
str
|
https://api.python.langchain.com/en/latest/utils/langchain.utils.strings.stringify_dict.html
|
8b3358bd391d-0
|
langchain.utils.env.get_from_dict_or_env¶
langchain.utils.env.get_from_dict_or_env(data: Dict[str, Any], key: str, env_key: str, default: Optional[str] = None) → str[source]¶
Get a value from a dictionary or an environment variable.
|
https://api.python.langchain.com/en/latest/utils/langchain.utils.env.get_from_dict_or_env.html
|
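The dict-then-environment lookup can be sketched together with its `get_from_env` fallback; the precedence order and error wording are assumptions:

```python
import os
from typing import Any, Dict, Optional

def get_from_env(key: str, env_key: str, default: Optional[str] = None) -> str:
    """Sketch: read env_key from the environment, else fall back to default."""
    if env_key in os.environ and os.environ[env_key]:
        return os.environ[env_key]
    if default is not None:
        return default
    raise ValueError(
        f"Did not find {key}; please set the {env_key} environment variable."
    )

def get_from_dict_or_env(data: Dict[str, Any], key: str, env_key: str,
                         default: Optional[str] = None) -> str:
    """Sketch: prefer an explicit value in data, then the environment."""
    if key in data and data[key]:
        return data[key]
    return get_from_env(key, env_key, default=default)
```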
e6f13a7b3c05-0
|
langchain.utils.utils.build_extra_kwargs¶
langchain.utils.utils.build_extra_kwargs(extra_kwargs: Dict[str, Any], values: Dict[str, Any], all_required_field_names: Set[str]) → Dict[str, Any][source]¶
|
https://api.python.langchain.com/en/latest/utils/langchain.utils.utils.build_extra_kwargs.html
|
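The entry above gives only the signature; a plausible sketch of the behavior (moving values that do not match a declared field into `extra_kwargs`, and rejecting duplicates — both behaviors are assumptions):

```python
from typing import Any, Dict, Set

def build_extra_kwargs(
    extra_kwargs: Dict[str, Any],
    values: Dict[str, Any],
    all_required_field_names: Set[str],
) -> Dict[str, Any]:
    """Sketch: move values not matching a declared field into extra_kwargs,
    rejecting names supplied both explicitly and via extra_kwargs."""
    for field_name in list(values):
        if field_name in extra_kwargs:
            raise ValueError(f"Found {field_name} supplied twice.")
        if field_name not in all_required_field_names:
            extra_kwargs[field_name] = values.pop(field_name)
    return extra_kwargs
```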
5d50cc1fd707-0
|
langchain.memory.buffer_window.ConversationBufferWindowMemory¶
class langchain.memory.buffer_window.ConversationBufferWindowMemory[source]¶
Bases: BaseChatMemory
Buffer for storing conversation memory inside a limited size window.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param ai_prefix: str = 'AI'¶
param chat_memory: BaseChatMessageHistory [Optional]¶
param human_prefix: str = 'Human'¶
param input_key: Optional[str] = None¶
param k: int = 5¶
Number of messages to store in buffer.
param output_key: Optional[str] = None¶
param return_messages: bool = False¶
clear() → None¶
Clear memory contents.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
|
https://api.python.langchain.com/en/latest/memory/langchain.memory.buffer_window.ConversationBufferWindowMemory.html
|
5d50cc1fd707-1
|
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, str][source]¶
Return history buffer.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
|
https://api.python.langchain.com/en/latest/memory/langchain.memory.buffer_window.ConversationBufferWindowMemory.html
|
5d50cc1fd707-2
|
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None¶
Save context from this conversation to buffer.
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property buffer: List[langchain.schema.messages.BaseMessage]¶
String buffer of memory.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
Examples using ConversationBufferWindowMemory¶
Figma
Meta-Prompt
Voice Assistant
Create ChatGPT clone
|
https://api.python.langchain.com/en/latest/memory/langchain.memory.buffer_window.ConversationBufferWindowMemory.html
|
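The `k`-window behavior described above (only the last `k` human/AI exchanges stay visible) can be sketched without any LangChain machinery; this is an illustrative stand-in, not the actual class:

```python
class WindowBufferSketch:
    """Sketch of the k-window idea: only the last k exchanges (2*k messages)
    are returned; older turns fall out of the prompt."""

    def __init__(self, k: int = 5, human_prefix: str = "Human", ai_prefix: str = "AI"):
        self.k = k
        self.human_prefix = human_prefix
        self.ai_prefix = ai_prefix
        self.messages = []  # full history; the window is applied on read

    def save_context(self, human_input: str, ai_output: str) -> None:
        self.messages.append((self.human_prefix, human_input))
        self.messages.append((self.ai_prefix, ai_output))

    def load_memory_variables(self) -> dict:
        window = self.messages[-2 * self.k:] if self.k > 0 else []
        history = "\n".join(f"{role}: {text}" for role, text in window)
        return {"history": history}
```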
86c8affee6a7-0
|
langchain.memory.readonly.ReadOnlySharedMemory¶
class langchain.memory.readonly.ReadOnlySharedMemory[source]¶
Bases: BaseMemory
A memory wrapper that is read-only and cannot be changed.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param memory: langchain.schema.memory.BaseMemory [Required]¶
clear() → None[source]¶
Nothing to clear, got a memory like a vault.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
|
https://api.python.langchain.com/en/latest/memory/langchain.memory.readonly.ReadOnlySharedMemory.html
|
86c8affee6a7-1
|
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, str][source]¶
Load memory variables from memory.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
|
https://api.python.langchain.com/en/latest/memory/langchain.memory.readonly.ReadOnlySharedMemory.html
|
86c8affee6a7-2
|
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]¶
Nothing should be saved or changed.
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
property memory_variables: List[str]¶
Return memory variables.
Examples using ReadOnlySharedMemory¶
Shared memory across agents and tools
|
https://api.python.langchain.com/en/latest/memory/langchain.memory.readonly.ReadOnlySharedMemory.html
|
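The read-only wrapper pattern is simple to sketch: reads delegate to the wrapped memory, writes become no-ops. An illustrative stand-in, not the actual class:

```python
class ReadOnlyMemorySketch:
    """Sketch: wrap any memory so reads pass through and writes are no-ops."""

    def __init__(self, memory):
        self.memory = memory

    @property
    def memory_variables(self):
        return self.memory.memory_variables

    def load_memory_variables(self, inputs):
        return self.memory.load_memory_variables(inputs)

    def save_context(self, inputs, outputs):
        pass  # read-only: nothing should be saved or changed

    def clear(self):
        pass  # read-only: nothing to clear
```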
8831fc51c425-0
|
langchain.memory.vectorstore.VectorStoreRetrieverMemory¶
class langchain.memory.vectorstore.VectorStoreRetrieverMemory[source]¶
Bases: BaseMemory
VectorStoreRetriever-backed memory.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param exclude_input_keys: Sequence[str] [Optional]¶
Input keys to exclude in addition to memory key when constructing the document
param input_key: Optional[str] = None¶
Key name to index the inputs to load_memory_variables.
param memory_key: str = 'history'¶
Key name to locate the memories in the result of load_memory_variables.
param retriever: langchain.vectorstores.base.VectorStoreRetriever [Required]¶
VectorStoreRetriever object to connect to.
param return_docs: bool = False¶
Whether or not to return the result of querying the database directly.
clear() → None[source]¶
Nothing to clear.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
|
https://api.python.langchain.com/en/latest/memory/langchain.memory.vectorstore.VectorStoreRetrieverMemory.html
|
8831fc51c425-1
|
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Union[List[Document], str]][source]¶
Return history buffer.
|
https://api.python.langchain.com/en/latest/memory/langchain.memory.vectorstore.VectorStoreRetrieverMemory.html
|
8831fc51c425-2
|
Return history buffer.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]¶
Save context from this conversation to buffer.
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
|
https://api.python.langchain.com/en/latest/memory/langchain.memory.vectorstore.VectorStoreRetrieverMemory.html
|
8831fc51c425-3
|
property lc_serializable: bool¶
Return whether or not the class is serializable.
property memory_variables: List[str]¶
The list of keys emitted from the load_memory_variables method.
|
https://api.python.langchain.com/en/latest/memory/langchain.memory.vectorstore.VectorStoreRetrieverMemory.html
|
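The shape of this memory (save each exchange as a document, retrieve relevant documents on load) can be sketched with a naive keyword match standing in for the vector-store retriever; everything below is an illustrative simplification:

```python
class KeywordRetrieverMemorySketch:
    """Sketch of VectorStoreRetrieverMemory's shape: save_context writes the
    exchange as a document; load_memory_variables retrieves relevant ones.
    A naive keyword match stands in for the vector-store retriever."""

    def __init__(self, memory_key: str = "history", input_key: str = "input"):
        self.memory_key = memory_key
        self.input_key = input_key
        self.docs = []

    def save_context(self, inputs: dict, outputs: dict) -> None:
        # Exclude the memory key itself when forming the document text.
        parts = [f"{k}: {v}" for k, v in inputs.items() if k != self.memory_key]
        parts += [f"{k}: {v}" for k, v in outputs.items()]
        self.docs.append("\n".join(parts))

    def load_memory_variables(self, inputs: dict) -> dict:
        query = inputs[self.input_key].lower()
        relevant = [d for d in self.docs
                    if any(word in d.lower() for word in query.split())]
        return {self.memory_key: "\n".join(relevant)}
```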
600f4089ac57-0
|
langchain.memory.chat_message_histories.dynamodb.DynamoDBChatMessageHistory¶
class langchain.memory.chat_message_histories.dynamodb.DynamoDBChatMessageHistory(table_name: str, session_id: str, endpoint_url: Optional[str] = None)[source]¶
Chat message history that stores history in AWS DynamoDB.
This class expects that a DynamoDB table with name table_name
and a partition Key of SessionId is present.
Parameters
table_name – name of the DynamoDB table
session_id – arbitrary key that is used to store the messages
of a single chat session.
endpoint_url – URL of the AWS endpoint to connect to. This argument
is optional and useful for test purposes, like using Localstack.
If you plan to use AWS cloud service, you normally don’t have to
worry about setting the endpoint_url.
Attributes
messages
Retrieve the messages from DynamoDB
Methods
__init__(table_name, session_id[, endpoint_url])
add_ai_message(message)
Convenience method for adding an AI message string to the store.
add_message(message)
Append the message to the record in DynamoDB
add_user_message(message)
Convenience method for adding a human message string to the store.
clear()
Clear session memory from DynamoDB
__init__(table_name: str, session_id: str, endpoint_url: Optional[str] = None)[source]¶
add_ai_message(message: str) → None¶
Convenience method for adding an AI message string to the store.
Parameters
message – The string contents of an AI message.
add_message(message: BaseMessage) → None[source]¶
Append the message to the record in DynamoDB
add_user_message(message: str) → None¶
Convenience method for adding a human message string to the store.
Parameters
message – The string contents of a human message.
https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.dynamodb.DynamoDBChatMessageHistory.html
clear() → None[source]¶
Clear session memory from DynamoDB
Examples using DynamoDBChatMessageHistory¶
Dynamodb Chat Message History
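Putting the pieces above together, a minimal usage sketch (the table name, session id, and Localstack endpoint below are illustrative assumptions; a table with a SessionId partition key must already exist and AWS credentials must be configured):

```python
from langchain.memory.chat_message_histories.dynamodb import DynamoDBChatMessageHistory

# "SessionTable" and the Localstack endpoint are placeholders; omit
# endpoint_url entirely when talking to the real AWS cloud service.
history = DynamoDBChatMessageHistory(
    table_name="SessionTable",
    session_id="user-123",
    endpoint_url="http://localhost:4566",
)
history.add_user_message("Hi there!")
history.add_ai_message("Hello! How can I help?")
print(history.messages)  # messages stored under session "user-123"
history.clear()          # removes this session's record from the table
```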
langchain.memory.entity.ConversationEntityMemory¶
class langchain.memory.entity.ConversationEntityMemory[source]¶
Bases: BaseChatMemory
Entity extractor & summarizer memory.
Extracts named entities from the recent chat history and generates summaries.
With a swappable entity store, persisting entities across conversations.
Defaults to an in-memory entity store, and can be swapped out for a Redis,
SQLite, or other entity store.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param ai_prefix: str = 'AI'¶
param chat_history_key: str = 'history'¶
param chat_memory: BaseChatMessageHistory [Optional]¶
param entity_cache: List[str] = []¶
https://api.python.langchain.com/en/latest/memory/langchain.memory.entity.ConversationEntityMemory.html
param entity_extraction_prompt: langchain.schema.prompt_template.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last line of conversation. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.\n\nThe conversation history is provided just in case of a coreference (e.g. "What do you know about him" where "him" is defined in a previous line) -- ignore items mentioned there that are not in the last line.\n\nReturn the output as a single comma-separated list, or NONE if there is nothing of note to return (e.g. the user is just issuing a greeting or having a simple conversation).\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\nOutput: Langchain\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I\'m working with Person #2.\nOutput: Langchain, Person #2\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for extraction):\nHuman: {input}\n\nOutput:', template_format='f-string', validate_template=True)¶
param entity_store: langchain.memory.entity.BaseEntityStore [Optional]¶
param entity_summarization_prompt: langchain.schema.prompt_template.BasePromptTemplate = PromptTemplate(input_variables=['entity', 'summary', 'history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant helping a human keep track of facts about relevant people, places, and concepts in their life. Update the summary of the provided entity in the "Entity" section based on the last line of your conversation with the human. If you are writing the summary for the first time, return a single sentence.\nThe update should only include facts that are relayed in the last line of conversation about the provided entity, and should only contain facts about the provided entity.\n\nIf there is no new information about the provided entity or the information is not worth noting (not an important or relevant fact to remember long-term), return the existing summary unchanged.\n\nFull conversation history (for context):\n{history}\n\nEntity to summarize:\n{entity}\n\nExisting summary of {entity}:\n{summary}\n\nLast line of conversation:\nHuman: {input}\nUpdated summary:', template_format='f-string', validate_template=True)¶
param human_prefix: str = 'Human'¶
param input_key: Optional[str] = None¶
param k: int = 3¶
param llm: langchain.schema.language_model.BaseLanguageModel [Required]¶
param output_key: Optional[str] = None¶
param return_messages: bool = False¶
clear() → None[source]¶
Clear memory contents.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any][source]¶
Returns chat history and all generated entities with summaries if available,
and updates or clears the recent entity cache.
New entity names may be found when calling this method, before the entity
summaries are generated, so the entity cache values may be empty if no entity
descriptions have been generated yet.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]¶
Save context from this conversation history to the entity store.
Generates a summary for each entity in the entity cache by prompting
the model, and saves these summaries to the entity store.
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property buffer: List[langchain.schema.messages.BaseMessage]¶
Access chat memory messages.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
Examples using ConversationEntityMemory¶
Entity Memory with SQLite storage
langchain.memory.zep_memory.ZepMemory¶
class langchain.memory.zep_memory.ZepMemory[source]¶
Bases: ConversationBufferMemory
Persist your chain history to the Zep Memory Server.
The number of messages returned by Zep and when the Zep server summarizes chat
histories is configurable. See the Zep documentation for more details.
Documentation: https://docs.getzep.com
Example
memory = ZepMemory(
    session_id=session_id,   # Identifies your user or a user's session
    url=ZEP_API_URL,         # Your Zep server's URL
    api_key=<your_api_key>,  # Optional
    memory_key="history",    # Ensure this matches the key used in
                             # the chain's prompt template
    return_messages=True,    # Does your prompt template expect a string
                             # or a list of Messages?
)
chain = LLMChain(memory=memory, ...)  # Configure your chain to use the ZepMemory instance
Note
To persist metadata alongside your chat history, you will need to create a
custom Chain class that overrides the prep_outputs method to include the metadata
in the call to self.memory.save_context.
Zep provides long-term conversation storage for LLM apps. The server stores,
summarizes, embeds, indexes, and enriches conversational AI chat
histories, and exposes them via simple, low-latency APIs.
For server installation instructions and more, see:
https://docs.getzep.com/deployment/quickstart/
For more information on the zep-python package, see:
https://github.com/getzep/zep-python
Initialize ZepMemory.
Parameters
session_id (str) – Identifies your user or a user’s session
url (str, optional) – Your Zep server’s URL. Defaults to
“http://localhost:8000”.
https://api.python.langchain.com/en/latest/memory/langchain.memory.zep_memory.ZepMemory.html
api_key (Optional[str], optional) – Your Zep API key. Defaults to None.
output_key (Optional[str], optional) – The key to use for the output message.
Defaults to None.
input_key (Optional[str], optional) – The key to use for the input message.
Defaults to None.
return_messages (bool, optional) – Does your prompt template expect a string
or a list of Messages? Defaults to False, i.e. return a string.
human_prefix (str, optional) – The prefix to use for human messages.
Defaults to “Human”.
ai_prefix (str, optional) – The prefix to use for AI messages.
Defaults to “AI”.
memory_key (str, optional) – The key to use for the memory.
Defaults to “history”.
Ensure that this matches the key used in
chain’s prompt template.
param ai_prefix: str = 'AI'¶
param chat_memory: ZepChatMessageHistory [Required]¶
param human_prefix: str = 'Human'¶
param input_key: Optional[str] = None¶
param output_key: Optional[str] = None¶
param return_messages: bool = False¶
clear() → None¶
Clear memory contents.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any]¶
Return history buffer.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
save_context(inputs: Dict[str, Any], outputs: Dict[str, str], metadata: Optional[Dict[str, Any]] = None) → None[source]¶
Save context from this conversation to buffer.
Parameters
inputs (Dict[str, Any]) – The inputs to the chain.
outputs (Dict[str, str]) – The outputs from the chain.
metadata (Optional[Dict[str, Any]], optional) – Any metadata to save with
the context. Defaults to None
Returns
None
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property buffer: Any¶
String buffer of memory.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
Examples using ZepMemory¶
Zep Memory
langchain.memory.chat_message_histories.cassandra.CassandraChatMessageHistory¶
class langchain.memory.chat_message_histories.cassandra.CassandraChatMessageHistory(session_id: str, session: Session, keyspace: str, table_name: str = 'message_store', ttl_seconds: int | None = None)[source]¶
Chat message history that stores history in Cassandra.
Parameters
session_id – arbitrary key that is used to store the messages
of a single chat session.
session – a Cassandra Session object (an open DB connection)
keyspace – name of the keyspace to use.
table_name – name of the table to use.
ttl_seconds – time-to-live (seconds) for automatic expiration
of stored entries. None (default) for no expiration.
Attributes
messages
Retrieve all session messages from DB
Methods
__init__(session_id, session, keyspace[, ...])
add_ai_message(message)
Convenience method for adding an AI message string to the store.
add_message(message)
Write a message to the table
add_user_message(message)
Convenience method for adding a human message string to the store.
clear()
Clear session memory from DB
__init__(session_id: str, session: Session, keyspace: str, table_name: str = 'message_store', ttl_seconds: int | None = None) → None[source]¶
add_ai_message(message: str) → None¶
Convenience method for adding an AI message string to the store.
Parameters
message – The string contents of an AI message.
add_message(message: BaseMessage) → None[source]¶
Write a message to the table
add_user_message(message: str) → None¶
Convenience method for adding a human message string to the store.
Parameters
message – The string contents of a human message.
clear() → None[source]¶
https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.cassandra.CassandraChatMessageHistory.html
Clear session memory from DB
Examples using CassandraChatMessageHistory¶
Cassandra Chat Message History
Cassandra
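A minimal sketch of wiring up the class (the contact point, keyspace, and TTL below are illustrative; it assumes a reachable Cassandra node and an existing keyspace):

```python
from cassandra.cluster import Cluster
from langchain.memory.chat_message_histories.cassandra import CassandraChatMessageHistory

# "chat_keyspace" is a placeholder; the keyspace must already exist.
session = Cluster(["127.0.0.1"]).connect()
history = CassandraChatMessageHistory(
    session_id="user-123",
    session=session,        # an open Cassandra Session (DB connection)
    keyspace="chat_keyspace",
    ttl_seconds=3600,       # entries auto-expire after an hour; None disables
)
history.add_user_message("Hi there!")
history.add_ai_message("Hello! How can I help?")
print(history.messages)     # all session messages read back from the table
```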
langchain.memory.chat_message_histories.cosmos_db.CosmosDBChatMessageHistory¶
class langchain.memory.chat_message_histories.cosmos_db.CosmosDBChatMessageHistory(cosmos_endpoint: str, cosmos_database: str, cosmos_container: str, session_id: str, user_id: str, credential: Any = None, connection_string: Optional[str] = None, ttl: Optional[int] = None, cosmos_client_kwargs: Optional[dict] = None)[source]¶
Chat message history backed by Azure CosmosDB.
Initializes a new instance of the CosmosDBChatMessageHistory class.
Make sure to call prepare_cosmos or use the context manager to make
sure your database is ready.
Either a credential or a connection string must be provided.
Parameters
cosmos_endpoint – The connection endpoint for the Azure Cosmos DB account.
cosmos_database – The name of the database to use.
cosmos_container – The name of the container to use.
session_id – The session ID to use, can be overwritten while loading.
user_id – The user ID to use, can be overwritten while loading.
credential – The credential to use to authenticate to Azure Cosmos DB.
connection_string – The connection string to use to authenticate.
ttl – The time to live (in seconds) to use for documents in the container.
cosmos_client_kwargs – Additional kwargs to pass to the CosmosClient.
Attributes
messages
A list of Messages stored in-memory.
Methods
__init__(cosmos_endpoint, cosmos_database, ...)
Initializes a new instance of the CosmosDBChatMessageHistory class.
add_ai_message(message)
Convenience method for adding an AI message string to the store.
add_message(message)
Add a self-created message to the store
add_user_message(message)
Convenience method for adding a human message string to the store.
clear()
https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.cosmos_db.CosmosDBChatMessageHistory.html
Clear session memory from this memory and cosmos.
load_messages()
Retrieve the messages from Cosmos
prepare_cosmos()
Prepare the CosmosDB client.
upsert_messages()
Update the cosmosdb item.
__init__(cosmos_endpoint: str, cosmos_database: str, cosmos_container: str, session_id: str, user_id: str, credential: Any = None, connection_string: Optional[str] = None, ttl: Optional[int] = None, cosmos_client_kwargs: Optional[dict] = None)[source]¶
Initializes a new instance of the CosmosDBChatMessageHistory class.
Make sure to call prepare_cosmos or use the context manager to make
sure your database is ready.
Either a credential or a connection string must be provided.
Parameters
cosmos_endpoint – The connection endpoint for the Azure Cosmos DB account.
cosmos_database – The name of the database to use.
cosmos_container – The name of the container to use.
session_id – The session ID to use, can be overwritten while loading.
user_id – The user ID to use, can be overwritten while loading.
credential – The credential to use to authenticate to Azure Cosmos DB.
connection_string – The connection string to use to authenticate.
ttl – The time to live (in seconds) to use for documents in the container.
cosmos_client_kwargs – Additional kwargs to pass to the CosmosClient.
add_ai_message(message: str) → None¶
Convenience method for adding an AI message string to the store.
Parameters
message – The string contents of an AI message.
add_message(message: BaseMessage) → None[source]¶
Add a self-created message to the store
add_user_message(message: str) → None¶
Convenience method for adding a human message string to the store.
Parameters
message – The string contents of a human message.
clear() → None[source]¶
Clear session memory from this memory and cosmos.
load_messages() → None[source]¶
Retrieve the messages from Cosmos
prepare_cosmos() → None[source]¶
Prepare the CosmosDB client.
Use this function or the context manager to make sure your database is ready.
upsert_messages() → None[source]¶
Update the cosmosdb item.
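A minimal sketch of the two-step initialization described above, using a connection string (the endpoint, database, and container names are illustrative placeholders):

```python
from langchain.memory.chat_message_histories.cosmos_db import CosmosDBChatMessageHistory

history = CosmosDBChatMessageHistory(
    cosmos_endpoint="https://my-account.documents.azure.com:443/",
    cosmos_database="chat-db",
    cosmos_container="messages",
    session_id="session-1",
    user_id="user-123",
    connection_string="<your-connection-string>",  # or pass `credential` instead
)
history.prepare_cosmos()  # required before reading or writing messages
history.add_user_message("Hi there!")
history.add_ai_message("Hello! How can I help?")
print(history.messages)   # in-memory view of the stored session messages
```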
langchain.memory.buffer.ConversationStringBufferMemory¶
class langchain.memory.buffer.ConversationStringBufferMemory[source]¶
Bases: BaseMemory
Buffer for storing conversation memory.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param ai_prefix: str = 'AI'¶
Prefix to use for AI generated responses.
param buffer: str = ''¶
param human_prefix: str = 'Human'¶
param input_key: Optional[str] = None¶
param output_key: Optional[str] = None¶
clear() → None[source]¶
Clear memory contents.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
https://api.python.langchain.com/en/latest/memory/langchain.memory.buffer.ConversationStringBufferMemory.html
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, str][source]¶
Return history buffer.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]¶
Save context from this conversation to buffer.
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
property memory_variables: List[str]¶
Will always return list of memory variables.
:meta private:
langchain.memory.combined.CombinedMemory¶
class langchain.memory.combined.CombinedMemory[source]¶
Bases: BaseMemory
Combining multiple memories’ data together.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param memories: List[langchain.schema.memory.BaseMemory] [Required]¶
For tracking all the memories that should be accessed.
clear() → None[source]¶
Clear context from this session for every memory.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
https://api.python.langchain.com/en/latest/memory/langchain.memory.combined.CombinedMemory.html
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, str][source]¶
Load all vars from sub-memories.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]¶
Save context from this session for every memory.
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
property memory_variables: List[str]¶
All the memory variables that this instance provides.
Examples using CombinedMemory¶
How to use multiple memory classes in the same chain
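The contract above — memory_variables concatenating every sub-memory's variables, load_memory_variables merging their dicts, and save_context/clear fanning out to each sub-memory — can be sketched in plain Python. This is a toy model of the documented interface (class and key names are illustrative), not the langchain source:

```python
# Toy sketch of CombinedMemory semantics. Not the langchain implementation;
# it only models the interface described in this reference.

class ToyBufferMemory:
    """Minimal stand-in for a single memory exposing one variable."""
    def __init__(self, memory_key):
        self.memory_key = memory_key
        self.buffer = []

    @property
    def memory_variables(self):
        return [self.memory_key]

    def load_memory_variables(self, inputs):
        return {self.memory_key: "\n".join(self.buffer)}

    def save_context(self, inputs, outputs):
        self.buffer.append(f"{inputs['input']} -> {outputs['output']}")

    def clear(self):
        self.buffer.clear()


class ToyCombinedMemory:
    """Combines data from multiple memories, as CombinedMemory does."""
    def __init__(self, memories):
        self.memories = memories

    @property
    def memory_variables(self):
        # all variables provided by all sub-memories
        return [v for m in self.memories for v in m.memory_variables]

    def load_memory_variables(self, inputs):
        data = {}
        for m in self.memories:  # load all vars from sub-memories
            data.update(m.load_memory_variables(inputs))
        return data

    def save_context(self, inputs, outputs):
        for m in self.memories:  # save context for every memory
            m.save_context(inputs, outputs)

    def clear(self):
        for m in self.memories:  # clear context for every memory
            m.clear()
```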
langchain.memory.chat_message_histories.sql.SQLChatMessageHistory¶
class langchain.memory.chat_message_histories.sql.SQLChatMessageHistory(session_id: str, connection_string: str, table_name: str = 'message_store')[source]¶
Chat message history stored in an SQL database.
Attributes
messages
Retrieve all messages from db
Methods
__init__(session_id, connection_string[, ...])
add_ai_message(message)
Convenience method for adding an AI message string to the store.
add_message(message)
Append the message to the record in db
add_user_message(message)
Convenience method for adding a human message string to the store.
clear()
Clear session memory from db
__init__(session_id: str, connection_string: str, table_name: str = 'message_store')[source]¶
add_ai_message(message: str) → None¶
Convenience method for adding an AI message string to the store.
Parameters
message – The string contents of an AI message.
add_message(message: BaseMessage) → None[source]¶
Append the message to the record in db
add_user_message(message: str) → None¶
Convenience method for adding a human message string to the store.
Parameters
message – The string contents of a human message.
clear() → None[source]¶
Clear session memory from db
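The storage pattern above (a session-keyed message table with add_message, messages, and clear) can be sketched with stdlib sqlite3 standing in for the SQLAlchemy-backed implementation. Table and column names here are illustrative assumptions, not the real schema:

```python
# Sketch of the SQL-backed chat history pattern using stdlib sqlite3.
# The real SQLChatMessageHistory takes a connection string and uses
# SQLAlchemy; this toy version only models the documented interface.
import json
import sqlite3

class SqliteChatMessageHistory:
    def __init__(self, session_id, conn, table_name="message_store"):
        self.session_id = session_id
        self.conn = conn
        self.table = table_name
        conn.execute(
            f"CREATE TABLE IF NOT EXISTS {self.table} "
            "(session_id TEXT, message TEXT)"
        )

    @property
    def messages(self):
        # Retrieve all messages for this session from the db
        rows = self.conn.execute(
            f"SELECT message FROM {self.table} WHERE session_id = ?",
            (self.session_id,),
        ).fetchall()
        return [json.loads(r[0]) for r in rows]

    def add_message(self, message):
        # Append the message (here a plain dict) to the record in the db
        self.conn.execute(
            f"INSERT INTO {self.table} VALUES (?, ?)",
            (self.session_id, json.dumps(message)),
        )

    def add_user_message(self, text):
        self.add_message({"type": "human", "content": text})

    def add_ai_message(self, text):
        self.add_message({"type": "ai", "content": text})

    def clear(self):
        # Clear this session's memory from the db
        self.conn.execute(
            f"DELETE FROM {self.table} WHERE session_id = ?",
            (self.session_id,),
        )
```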
https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.sql.SQLChatMessageHistory.html
langchain.memory.entity.InMemoryEntityStore¶
class langchain.memory.entity.InMemoryEntityStore[source]¶
Bases: BaseEntityStore
In-memory Entity store.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param store: Dict[str, Optional[str]] = {}¶
clear() → None[source]¶
Delete all entities from store.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
delete(key: str) → None[source]¶
Delete entity value from store.
https://api.python.langchain.com/en/latest/memory/langchain.memory.entity.InMemoryEntityStore.html
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
exists(key: str) → bool[source]¶
Check if entity exists in store.
classmethod from_orm(obj: Any) → Model¶
get(key: str, default: Optional[str] = None) → Optional[str][source]¶
Get entity value from store.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
set(key: str, value: Optional[str]) → None[source]¶
Set entity value in store.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
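The entity-store contract that InMemoryEntityStore implements (get/set/delete/exists/clear over a dict mapping entity names to optional summary strings) fits in a few lines. This is a plain-Python sketch of the interface, not the langchain source:

```python
# Sketch of the BaseEntityStore contract backed by a plain dict.

class DictEntityStore:
    def __init__(self):
        self.store = {}  # entity name -> optional summary string

    def get(self, key, default=None):
        """Get entity value from store."""
        return self.store.get(key, default)

    def set(self, key, value):
        """Set entity value in store (value may be None)."""
        self.store[key] = value

    def delete(self, key):
        """Delete entity value from store."""
        del self.store[key]

    def exists(self, key):
        """Check if entity exists in store."""
        return key in self.store

    def clear(self):
        """Delete all entities from store."""
        self.store = {}
```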
langchain.memory.entity.RedisEntityStore¶
class langchain.memory.entity.RedisEntityStore[source]¶
Bases: BaseEntityStore
Redis-backed Entity store.
Entities get a TTL of 1 day by default, and
that TTL is extended by 3 days every time the entity is read back.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param key_prefix: str = 'memory_store'¶
param recall_ttl: Optional[int] = 259200¶
param redis_client: Any = None¶
param session_id: str = 'default'¶
param ttl: Optional[int] = 86400¶
clear() → None[source]¶
Delete all entities from store.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
https://api.python.langchain.com/en/latest/memory/langchain.memory.entity.RedisEntityStore.html
delete(key: str) → None[source]¶
Delete entity value from store.
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
exists(key: str) → bool[source]¶
Check if entity exists in store.
classmethod from_orm(obj: Any) → Model¶
get(key: str, default: Optional[str] = None) → Optional[str][source]¶
Get entity value from store.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
set(key: str, value: Optional[str]) → None[source]¶
Set entity value in store.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property full_key_prefix: str¶
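The TTL behavior described above (ttl=86400 on write, recall_ttl=259200 applied on read-back) can be modeled without Redis using a manual clock. This is one plausible reading of "extended by 3 days every time the entity is read back" — reading resets the expiry to now + recall_ttl — and the key layout (key_prefix:session_id:key) is an assumption about how full_key_prefix is composed:

```python
# Toy model of the Redis TTL-extension behavior, with a manual clock
# instead of a Redis server. Assumptions: reads reset expiry to
# now + recall_ttl, and keys are laid out as key_prefix:session_id:key.

class TtlEntityStore:
    def __init__(self, ttl=86400, recall_ttl=259200,
                 key_prefix="memory_store", session_id="default"):
        self.ttl = ttl                  # 1 day, as in RedisEntityStore
        self.recall_ttl = recall_ttl    # 3 days
        self.key_prefix = key_prefix
        self.session_id = session_id
        self.now = 0.0                  # manual clock for demonstration
        self._data = {}                 # full key -> (value, expires_at)

    @property
    def full_key_prefix(self):
        return f"{self.key_prefix}:{self.session_id}"

    def _full_key(self, key):
        return f"{self.full_key_prefix}:{key}"

    def set(self, key, value):
        # a fresh write lives for ttl seconds
        self._data[self._full_key(key)] = (value, self.now + self.ttl)

    def get(self, key, default=None):
        fk = self._full_key(key)
        entry = self._data.get(fk)
        if entry is None or entry[1] <= self.now:
            return default              # missing or expired
        value, _ = entry
        # reading the entity back extends its lifetime
        self._data[fk] = (value, self.now + self.recall_ttl)
        return value
```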
langchain.memory.chat_message_histories.momento.MomentoChatMessageHistory¶
class langchain.memory.chat_message_histories.momento.MomentoChatMessageHistory(session_id: str, cache_client: momento.CacheClient, cache_name: str, *, key_prefix: str = 'message_store:', ttl: Optional[timedelta] = None, ensure_cache_exists: bool = True)[source]¶
Chat message history cache that uses Momento as a backend.
See https://gomomento.com/
Instantiate a chat message history cache that uses Momento as a backend.
Note: to instantiate the cache client passed to MomentoChatMessageHistory,
you must have a Momento account at https://gomomento.com/.
Parameters
session_id (str) – The session ID to use for this chat session.
cache_client (CacheClient) – The Momento cache client.
cache_name (str) – The name of the cache to use to store the messages.
key_prefix (str, optional) – The prefix to apply to the cache key.
Defaults to “message_store:”.
ttl (Optional[timedelta], optional) – The TTL to use for the messages.
Defaults to None, i.e. the default TTL of the cache will be used.
ensure_cache_exists (bool, optional) – Create the cache if it doesn’t exist.
Defaults to True.
Raises
ImportError – Momento python package is not installed.
TypeError – cache_client is not of type momento.CacheClientObject
Attributes
messages
Retrieve the messages from Momento.
Methods
__init__(session_id, cache_client, cache_name, *)
Instantiate a chat message history cache that uses Momento as a backend.
add_ai_message(message)
Convenience method for adding an AI message string to the store.
add_message(message)
Store a message in the cache.
https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.momento.MomentoChatMessageHistory.html
add_user_message(message)
Convenience method for adding a human message string to the store.
clear()
Remove the session's messages from the cache.
from_client_params(session_id, cache_name, ...)
Construct cache from CacheClient parameters.
__init__(session_id: str, cache_client: momento.CacheClient, cache_name: str, *, key_prefix: str = 'message_store:', ttl: Optional[timedelta] = None, ensure_cache_exists: bool = True)[source]¶
Instantiate a chat message history cache that uses Momento as a backend.
Note: to instantiate the cache client passed to MomentoChatMessageHistory,
you must have a Momento account at https://gomomento.com/.
Parameters
session_id (str) – The session ID to use for this chat session.
cache_client (CacheClient) – The Momento cache client.
cache_name (str) – The name of the cache to use to store the messages.
key_prefix (str, optional) – The prefix to apply to the cache key.
Defaults to “message_store:”.
ttl (Optional[timedelta], optional) – The TTL to use for the messages.
Defaults to None, i.e. the default TTL of the cache will be used.
ensure_cache_exists (bool, optional) – Create the cache if it doesn’t exist.
Defaults to True.
Raises
ImportError – Momento python package is not installed.
TypeError – cache_client is not of type momento.CacheClientObject
add_ai_message(message: str) → None¶
Convenience method for adding an AI message string to the store.
Parameters
message – The string contents of an AI message.
add_message(message: BaseMessage) → None[source]¶
Store a message in the cache.
Parameters
message (BaseMessage) – The message object to store.
Raises
SdkException – Momento service or network error.
Exception – Unexpected response.
add_user_message(message: str) → None¶
Convenience method for adding a human message string to the store.
Parameters
message – The string contents of a human message.
clear() → None[source]¶
Remove the session’s messages from the cache.
Raises
SdkException – Momento service or network error.
Exception – Unexpected response.
classmethod from_client_params(session_id: str, cache_name: str, ttl: timedelta, *, configuration: Optional[momento.config.Configuration] = None, auth_token: Optional[str] = None, **kwargs: Any) → MomentoChatMessageHistory[source]¶
Construct cache from CacheClient parameters.
Examples using MomentoChatMessageHistory¶
Momento Chat Message History
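The caching pattern documented above — a session's messages live under a single cache key built from key_prefix plus session_id, and clear() removes only that key — can be sketched with a plain dict standing in for the Momento cache. The exact key layout is an assumption for illustration:

```python
# Toy model of a key-prefixed, session-scoped message cache.
# A shared dict stands in for the Momento cache client.

class ToyCacheHistory:
    def __init__(self, session_id, cache, key_prefix="message_store:"):
        self.key = key_prefix + session_id   # assumed key layout
        self.cache = cache                   # shared dict ~ the cache

    @property
    def messages(self):
        # Retrieve this session's messages from the cache
        return list(self.cache.get(self.key, []))

    def add_message(self, message):
        # Store a message under this session's key
        self.cache.setdefault(self.key, []).append(message)

    def clear(self):
        # Remove only this session's messages from the cache
        self.cache.pop(self.key, None)
```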
langchain.memory.chat_message_histories.rocksetdb.RocksetChatMessageHistory¶
class langchain.memory.chat_message_histories.rocksetdb.RocksetChatMessageHistory(session_id: str, client: ~typing.Any, collection: str, workspace: str = 'commons', messages_key: str = 'messages', sync: bool = False, message_uuid_method: ~typing.Callable[[], ~typing.Union[str, int]] = <function RocksetChatMessageHistory.<lambda>>)[source]¶
Uses Rockset to store chat messages.
To use, ensure that the rockset python package is installed.
Example

from langchain.memory.chat_message_histories import (
    RocksetChatMessageHistory
)
from rockset import RocksetClient

history = RocksetChatMessageHistory(
    session_id="MySession",
    client=RocksetClient(),
    collection="langchain_demo",
    sync=True
)
history.add_user_message("hi!")
history.add_ai_message("whats up?")
print(history.messages)
Constructs a new RocksetChatMessageHistory.
Parameters
session_id (-) – The ID of the chat session
client (-) – The RocksetClient object to use to query
collection (-) – The name of the collection to use to store chat
messages. If a collection with the given name
does not exist in the workspace, it is created.
workspace (-) – The workspace containing collection. Defaults
to “commons”
messages_key (-) – The DB column containing message history.
Defaults to “messages”
sync (-) – Whether to wait for messages to be added. Defaults
to False. NOTE: setting this to True will slow
down performance.
message_uuid_method (-) – The method that generates message IDs.
If set, all messages will have an id field within the
https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.rocksetdb.RocksetChatMessageHistory.html
additional_kwargs property. If this param is not set
and sync is False, message IDs will not be created.
If this param is not set and sync is True, the
uuid.uuid4 method will be used to create message IDs.
Attributes
ADD_TIMEOUT_MS
CREATE_TIMEOUT_MS
SLEEP_INTERVAL_MS
messages
Messages in this chat history.
Methods
__init__(session_id, client, collection[, ...])
Constructs a new RocksetChatMessageHistory.
add_ai_message(message)
Convenience method for adding an AI message string to the store.
add_message(message)
Add a Message object to the history.
add_user_message(message)
Convenience method for adding a human message string to the store.
clear()
Removes all messages from the chat history
__init__(session_id: str, client: ~typing.Any, collection: str, workspace: str = 'commons', messages_key: str = 'messages', sync: bool = False, message_uuid_method: ~typing.Callable[[], ~typing.Union[str, int]] = <function RocksetChatMessageHistory.<lambda>>) → None[source]¶
Constructs a new RocksetChatMessageHistory.
Parameters
session_id (-) – The ID of the chat session
client (-) – The RocksetClient object to use to query
collection (-) – The name of the collection to use to store chat
messages. If a collection with the given name
does not exist in the workspace, it is created.
workspace (-) – The workspace containing collection. Defaults
to “commons”
messages_key (-) – The DB column containing message history.
Defaults to “messages”
sync (-) – Whether to wait for messages to be added. Defaults
to False. NOTE: setting this to True will slow
down performance.
message_uuid_method (-) – The method that generates message IDs.
If set, all messages will have an id field within the
additional_kwargs property. If this param is not set
and sync is False, message IDs will not be created.
If this param is not set and sync is True, the
uuid.uuid4 method will be used to create message IDs.
add_ai_message(message: str) → None¶
Convenience method for adding an AI message string to the store.
Parameters
message – The string contents of an AI message.
add_message(message: BaseMessage) → None[source]¶
Add a Message object to the history.
Parameters
message – A BaseMessage object to store.
add_user_message(message: str) → None¶
Convenience method for adding a human message string to the store.
Parameters
message – The string contents of a human message.
clear() → None[source]¶
Removes all messages from the chat history
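The message-id rule documented for message_uuid_method above reduces to a short decision: an explicit method always wins; otherwise an id is generated (via uuid.uuid4) only when sync is True; with sync False and no method, no id is set. A sketch of that rule (the function name is illustrative, not langchain API):

```python
# Sketch of the documented id-assignment rule for RocksetChatMessageHistory.
import uuid

def message_id(message_uuid_method=None, sync=False):
    if message_uuid_method is not None:
        return message_uuid_method()      # explicit generator always wins
    if sync:
        return str(uuid.uuid4())          # sync=True falls back to uuid4
    return None                           # otherwise no id is created
```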
langchain.memory.token_buffer.ConversationTokenBufferMemory¶
class langchain.memory.token_buffer.ConversationTokenBufferMemory[source]¶
Bases: BaseChatMemory
Conversation chat memory with token limit.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param ai_prefix: str = 'AI'¶
param chat_memory: BaseChatMessageHistory [Optional]¶
param human_prefix: str = 'Human'¶
param input_key: Optional[str] = None¶
param llm: langchain.schema.language_model.BaseLanguageModel [Required]¶
param max_token_limit: int = 2000¶
param memory_key: str = 'history'¶
param output_key: Optional[str] = None¶
param return_messages: bool = False¶
clear() → None¶
Clear memory contents.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
https://api.python.langchain.com/en/latest/memory/langchain.memory.token_buffer.ConversationTokenBufferMemory.html
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any][source]¶
Return history buffer.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]¶
Save context from this conversation to buffer. Pruned.
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property buffer: List[langchain.schema.messages.BaseMessage]¶
String buffer of memory.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
Examples using ConversationTokenBufferMemory¶
ConversationTokenBufferMemory
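The pruning that "Save context from this conversation to buffer. Pruned." refers to can be sketched as: after appending the new exchange, drop the oldest messages until the buffer fits within max_token_limit. A crude whitespace word count stands in for the llm's real token counter in this sketch:

```python
# Sketch of token-limit pruning as in ConversationTokenBufferMemory's
# save_context. Whitespace splitting stands in for real LLM tokenization.

def count_tokens(text):
    return len(text.split())   # crude stand-in for the llm's token counter

def prune(buffer, max_token_limit):
    """Drop oldest messages until the buffer fits within the limit."""
    total = sum(count_tokens(m) for m in buffer)
    while buffer and total > max_token_limit:
        total -= count_tokens(buffer.pop(0))   # oldest first
    return buffer
```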
langchain.memory.chat_message_histories.postgres.PostgresChatMessageHistory¶
class langchain.memory.chat_message_histories.postgres.PostgresChatMessageHistory(session_id: str, connection_string: str = 'postgresql://postgres:mypassword@localhost/chat_history', table_name: str = 'message_store')[source]¶
Chat message history stored in a Postgres database.
Attributes
messages
Retrieve the messages from PostgreSQL
Methods
__init__(session_id[, connection_string, ...])
add_ai_message(message)
Convenience method for adding an AI message string to the store.
add_message(message)
Append the message to the record in PostgreSQL
add_user_message(message)
Convenience method for adding a human message string to the store.
clear()
Clear session memory from PostgreSQL
__init__(session_id: str, connection_string: str = 'postgresql://postgres:mypassword@localhost/chat_history', table_name: str = 'message_store')[source]¶
add_ai_message(message: str) → None¶
Convenience method for adding an AI message string to the store.
Parameters
message – The string contents of an AI message.
add_message(message: BaseMessage) → None[source]¶
Append the message to the record in PostgreSQL
add_user_message(message: str) → None¶
Convenience method for adding a human message string to the store.
Parameters
message – The string contents of a human message.
clear() → None[source]¶
Clear session memory from PostgreSQL
Examples using PostgresChatMessageHistory¶
Postgres Chat Message History
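One point worth noting about the methods above: the table is shared across sessions, and clear() removes only the current session's rows. Sketched with stdlib sqlite3 standing in for Postgres (table and column names are illustrative assumptions):

```python
# Session-scoped clear(): deleting one session's rows leaves others intact.
# sqlite3 stands in for Postgres here; the schema is illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE message_store (session_id TEXT, message TEXT)")
for sid, msg in [("a", "hi"), ("a", "hello"), ("b", "yo")]:
    conn.execute("INSERT INTO message_store VALUES (?, ?)", (sid, msg))

def messages(session_id):
    rows = conn.execute(
        "SELECT message FROM message_store WHERE session_id = ?",
        (session_id,),
    ).fetchall()
    return [r[0] for r in rows]

def clear(session_id):
    # only this session's rows are removed
    conn.execute(
        "DELETE FROM message_store WHERE session_id = ?", (session_id,)
    )
```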
https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.postgres.PostgresChatMessageHistory.html
langchain.memory.buffer.ConversationBufferMemory¶
class langchain.memory.buffer.ConversationBufferMemory[source]¶
Bases: BaseChatMemory
Buffer for storing conversation memory.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param ai_prefix: str = 'AI'¶
param chat_memory: BaseChatMessageHistory [Optional]¶
param human_prefix: str = 'Human'¶
param input_key: Optional[str] = None¶
param output_key: Optional[str] = None¶
param return_messages: bool = False¶
clear() → None¶
Clear memory contents.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
https://api.python.langchain.com/en/latest/memory/langchain.memory.buffer.ConversationBufferMemory.html
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any][source]¶
Return history buffer.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None¶
Save context from this conversation to buffer.
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property buffer: Any¶
String buffer of memory.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
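The buffer property above exposes the conversation as a single string built from the human/AI prefixes. A minimal sketch of that formatting, using a hypothetical helper (not the library's actual implementation):

```python
# Sketch of how a string conversation buffer can be assembled from
# (role, content) pairs, mirroring ConversationBufferMemory's
# human_prefix / ai_prefix behavior. format_buffer is illustrative only.
def format_buffer(turns, human_prefix="Human", ai_prefix="AI"):
    lines = []
    for role, content in turns:
        prefix = human_prefix if role == "human" else ai_prefix
        lines.append(f"{prefix}: {content}")
    return "\n".join(lines)

buffer = format_buffer([("human", "hi"), ("ai", "hello")])
```

Each saved exchange appends one "Human: …" and one "AI: …" line, and load_memory_variables returns the joined string under the configured memory_key.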
Examples using ConversationBufferMemory¶
Gradio Tools
SceneXplain
Dynamodb Chat Message History
Chat Over Documents with Vectara
Bedrock
QA over Documents
Structure answers with OpenAI functions
Agent Debates with Tools
Adding Message Memory backed by a database to an Agent
How to add memory to a Multi-Input Chain
How to add Memory to an LLMChain
How to use multiple memory classes in the same chain
How to customize conversational memory
How to add Memory to an Agent
Shared memory across agents and tools
Add Memory to OpenAI Functions Agent
Retrieval QA using OpenAI functions
https://api.python.langchain.com/en/latest/memory/langchain.memory.buffer.ConversationBufferMemory.html
langchain.memory.utils.get_prompt_input_key¶
langchain.memory.utils.get_prompt_input_key(inputs: Dict[str, Any], memory_variables: List[str]) → str[source]¶
Get the prompt input key.
Parameters
inputs – Dict[str, Any]
memory_variables – List[str]
Returns
A prompt input key.
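The selection logic can be sketched as follows: the prompt input key is assumed to be the single key in inputs that is neither a memory variable nor the reserved "stop" key. This is a sketch of that behavior, not the library's exact source:

```python
def get_prompt_input_key(inputs, memory_variables):
    # Candidate keys: everything in inputs that is not a memory
    # variable and not the reserved "stop" key.
    prompt_input_keys = list(set(inputs).difference(memory_variables + ["stop"]))
    if len(prompt_input_keys) != 1:
        raise ValueError(f"One input key expected, got {prompt_input_keys}")
    return prompt_input_keys[0]
```

For example, with inputs {"input": "hi", "history": ""} and memory_variables ["history"], the returned key is "input".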
https://api.python.langchain.com/en/latest/memory/langchain.memory.utils.get_prompt_input_key.html
langchain.memory.chat_message_histories.firestore.FirestoreChatMessageHistory¶
class langchain.memory.chat_message_histories.firestore.FirestoreChatMessageHistory(collection_name: str, session_id: str, user_id: str, firestore_client: Optional[Client] = None)[source]¶
Chat message history backed by Google Firestore.
Initialize a new instance of the FirestoreChatMessageHistory class.
Parameters
collection_name – The name of the collection to use.
session_id – The session ID for the chat.
user_id – The user ID for the chat.
Attributes
messages
A list of Messages stored in-memory.
Methods
__init__(collection_name, session_id, user_id)
Initialize a new instance of the FirestoreChatMessageHistory class.
add_ai_message(message)
Convenience method for adding an AI message string to the store.
add_message(message)
Add a Message object to the store.
add_user_message(message)
Convenience method for adding a human message string to the store.
clear()
Clear session memory from this memory and Firestore.
load_messages()
Retrieve the messages from Firestore.
prepare_firestore()
Prepare the Firestore client.
upsert_messages([new_message])
Update the Firestore document.
__init__(collection_name: str, session_id: str, user_id: str, firestore_client: Optional[Client] = None)[source]¶
Initialize a new instance of the FirestoreChatMessageHistory class.
Parameters
collection_name – The name of the collection to use.
session_id – The session ID for the chat.
user_id – The user ID for the chat.
add_ai_message(message: str) → None¶
Convenience method for adding an AI message string to the store.
Parameters
message – The string contents of an AI message.
add_message(message: BaseMessage) → None[source]¶
Add a Message object to the store.
Parameters
message – A BaseMessage object to store.
add_user_message(message: str) → None¶
Convenience method for adding a human message string to the store.
Parameters
message – The string contents of a human message.
clear() → None[source]¶
Clear session memory from this memory and Firestore.
load_messages() → None[source]¶
Retrieve the messages from Firestore.
prepare_firestore() → None[source]¶
Prepare the Firestore client.
Use this function to make sure your database is ready.
upsert_messages(new_message: Optional[BaseMessage] = None) → None[source]¶
Update the Firestore document.
https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.firestore.FirestoreChatMessageHistory.html
langchain.memory.entity.SQLiteEntityStore¶
class langchain.memory.entity.SQLiteEntityStore[source]¶
Bases: BaseEntityStore
SQLite-backed entity store.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param session_id: str = 'default'¶
param table_name: str = 'memory_store'¶
clear() → None[source]¶
Delete all entities from store.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
delete(key: str) → None[source]¶
Delete entity value from store.
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
exists(key: str) → bool[source]¶
Check if entity exists in store.
classmethod from_orm(obj: Any) → Model¶
get(key: str, default: Optional[str] = None) → Optional[str][source]¶
Get entity value from store.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
set(key: str, value: Optional[str]) → None[source]¶
Set entity value in store.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property full_table_name: str¶
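The store's surface (set/get/exists/delete/clear over a single key-value table) can be sketched with the standard sqlite3 module. The class, table schema, and column names below are illustrative, not SQLiteEntityStore's actual implementation:

```python
import sqlite3

class MiniEntityStore:
    """Sketch of a SQLite-backed entity store (illustrative schema)."""

    def __init__(self, path=":memory:", table="memory_store"):
        self.conn = sqlite3.connect(path)
        self.table = table
        # One row per entity: a key and its stored summary/value.
        self.conn.execute(
            f"CREATE TABLE IF NOT EXISTS {table} (key TEXT PRIMARY KEY, value TEXT)"
        )

    def set(self, key, value):
        self.conn.execute(
            f"INSERT OR REPLACE INTO {self.table} (key, value) VALUES (?, ?)",
            (key, value),
        )

    def get(self, key, default=None):
        row = self.conn.execute(
            f"SELECT value FROM {self.table} WHERE key = ?", (key,)
        ).fetchone()
        return row[0] if row else default

    def exists(self, key):
        return self.get(key) is not None

    def delete(self, key):
        self.conn.execute(f"DELETE FROM {self.table} WHERE key = ?", (key,))

    def clear(self):
        self.conn.execute(f"DELETE FROM {self.table}")

store = MiniEntityStore()
store.set("Alice", "Alice is an engineer.")
```

Parameter substitution (?) is used for values; the table name is interpolated directly, which is why it should come from trusted configuration only.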
Examples using SQLiteEntityStore¶
Entity Memory with SQLite storage
https://api.python.langchain.com/en/latest/memory/langchain.memory.entity.SQLiteEntityStore.html
langchain.memory.summary.SummarizerMixin¶
class langchain.memory.summary.SummarizerMixin[source]¶
Bases: BaseModel
Mixin for summarizer.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param ai_prefix: str = 'AI'¶
param human_prefix: str = 'Human'¶
param llm: langchain.schema.language_model.BaseLanguageModel [Required]¶
param prompt: langchain.schema.prompt_template.BasePromptTemplate = PromptTemplate(input_variables=['summary', 'new_lines'], output_parser=None, partial_variables={}, template='Progressively summarize the lines of conversation provided, adding onto the previous summary returning a new summary.\n\nEXAMPLE\nCurrent summary:\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good.\n\nNew lines of conversation:\nHuman: Why do you think artificial intelligence is a force for good?\nAI: Because artificial intelligence will help humans reach their full potential.\n\nNew summary:\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good because it will help humans reach their full potential.\nEND OF EXAMPLE\n\nCurrent summary:\n{summary}\n\nNew lines of conversation:\n{new_lines}\n\nNew summary:', template_format='f-string', validate_template=True)¶
param summary_message_cls: Type[langchain.schema.messages.BaseMessage] = <class 'langchain.schema.messages.SystemMessage'>¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
predict_new_summary(messages: List[BaseMessage], existing_summary: str) → str[source]¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
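predict_new_summary fills the default prompt above with the existing summary and the newly formatted conversation lines, then asks the LLM for the updated summary. The prompt-filling step can be sketched as plain string formatting; the template text is abridged here and only the variable slots matter:

```python
# Abridged version of SummarizerMixin's default prompt template.
# Only the {summary} and {new_lines} slots are exercised here; the
# LLM call that consumes the filled prompt is omitted.
TEMPLATE = (
    "Progressively summarize the lines of conversation provided, "
    "adding onto the previous summary returning a new summary.\n\n"
    "Current summary:\n{summary}\n\n"
    "New lines of conversation:\n{new_lines}\n\n"
    "New summary:"
)

prompt_text = TEMPLATE.format(
    summary="The human greets the AI.",
    new_lines="Human: How are you?\nAI: Fine, thanks.",
)
```

The LLM's completion of the filled prompt becomes the new running summary, which is fed back in as {summary} on the next turn.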
https://api.python.langchain.com/en/latest/memory/langchain.memory.summary.SummarizerMixin.html
langchain.memory.chat_message_histories.zep.ZepChatMessageHistory¶
class langchain.memory.chat_message_histories.zep.ZepChatMessageHistory(session_id: str, url: str = 'http://localhost:8000', api_key: Optional[str] = None)[source]¶
Chat message history that uses Zep as a backend.
Recommended usage:
# Set up Zep Chat History
zep_chat_history = ZepChatMessageHistory(
    session_id=session_id,
    url=ZEP_API_URL,
    api_key=<your_api_key>,
)
# Use a standard ConversationBufferMemory to encapsulate the Zep chat history
memory = ConversationBufferMemory(
    memory_key="chat_history", chat_memory=zep_chat_history
)
Zep provides long-term conversation storage for LLM apps. The server stores,
summarizes, embeds, indexes, and enriches conversational AI chat
histories, and exposes them via simple, low-latency APIs.
For server installation instructions and more, see:
https://docs.getzep.com/deployment/quickstart/
This class is a thin wrapper around the zep-python package. Additional
Zep functionality is exposed via the zep_summary and zep_messages
properties.
For more information on the zep-python package, see:
https://github.com/getzep/zep-python
Attributes
messages
Retrieve messages from Zep memory
zep_messages
Retrieve messages from Zep memory
zep_summary
Retrieve summary from Zep memory
Methods
__init__(session_id[, url, api_key])
add_ai_message(message[, metadata])
Convenience method for adding an AI message string to the store.
add_message(message[, metadata])
Append the message to the Zep memory history.
add_user_message(message[, metadata])
Convenience method for adding a human message string to the store.
clear()
Clear session memory from Zep.
search(query[, metadata, limit])
Search Zep memory for messages matching the query.
__init__(session_id: str, url: str = 'http://localhost:8000', api_key: Optional[str] = None) → None[source]¶
add_ai_message(message: str, metadata: Optional[Dict[str, Any]] = None) → None[source]¶
Convenience method for adding an AI message string to the store.
Parameters
message – The string contents of an AI message.
metadata – Optional metadata to attach to the message.
add_message(message: BaseMessage, metadata: Optional[Dict[str, Any]] = None) → None[source]¶
Append the message to the Zep memory history.
add_user_message(message: str, metadata: Optional[Dict[str, Any]] = None) → None[source]¶
Convenience method for adding a human message string to the store.
Parameters
message – The string contents of a human message.
metadata – Optional metadata to attach to the message.
clear() → None[source]¶
Clear session memory from Zep. Note that Zep is long-term storage for memory
and this is not advised unless you have specific data retention requirements.
search(query: str, metadata: Optional[Dict] = None, limit: Optional[int] = None) → List[MemorySearchResult][source]¶
Search Zep memory for messages matching the query.
Examples using ZepChatMessageHistory¶
Zep
https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.zep.ZepChatMessageHistory.html
langchain.memory.chat_message_histories.in_memory.ChatMessageHistory¶
class langchain.memory.chat_message_histories.in_memory.ChatMessageHistory[source]¶
Bases: BaseChatMessageHistory, BaseModel
In-memory implementation of chat message history.
Stores messages in an in-memory list.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param messages: List[langchain.schema.messages.BaseMessage] = []¶
A list of Messages stored in-memory.
add_ai_message(message: str) → None¶
Convenience method for adding an AI message string to the store.
Parameters
message – The string contents of an AI message.
add_message(message: BaseMessage) → None[source]¶
Add a self-created message to the store.
add_user_message(message: str) → None¶
Convenience method for adding a human message string to the store.
Parameters
message – The string contents of a human message.
clear() → None[source]¶
Remove all messages from the store.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
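The class's behavior is simple enough to sketch in a few lines. The stand-in below mirrors the add_user_message / add_ai_message / clear surface, using (role, text) tuples in place of BaseMessage objects; it is illustrative, not the library's code:

```python
# Minimal sketch of an in-memory chat message history. The (role, text)
# tuples stand in for the library's BaseMessage objects.
class MiniChatHistory:
    def __init__(self):
        self.messages = []

    def add_message(self, role, text):
        self.messages.append((role, text))

    def add_user_message(self, text):
        # Convenience wrapper for human messages.
        self.add_message("human", text)

    def add_ai_message(self, text):
        # Convenience wrapper for AI messages.
        self.add_message("ai", text)

    def clear(self):
        self.messages = []
```

Messages live only in the Python list, so the history is lost when the process exits; database-backed implementations (Firestore, Zep, DynamoDB) exist for persistence.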
Examples using ChatMessageHistory¶
Adding Message Memory backed by a database to an Agent
https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.in_memory.ChatMessageHistory.html
langchain.memory.summary_buffer.ConversationSummaryBufferMemory¶
class langchain.memory.summary_buffer.ConversationSummaryBufferMemory[source]¶
Bases: BaseChatMemory, SummarizerMixin
Buffer with summarizer for storing conversation memory.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param ai_prefix: str = 'AI'¶
param chat_memory: BaseChatMessageHistory [Optional]¶
param human_prefix: str = 'Human'¶
param input_key: Optional[str] = None¶
param llm: BaseLanguageModel [Required]¶
param max_token_limit: int = 2000¶
param memory_key: str = 'history'¶
param moving_summary_buffer: str = ''¶
param output_key: Optional[str] = None¶
param prompt: BasePromptTemplate = PromptTemplate(input_variables=['summary', 'new_lines'], output_parser=None, partial_variables={}, template='Progressively summarize the lines of conversation provided, adding onto the previous summary returning a new summary.\n\nEXAMPLE\nCurrent summary:\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good.\n\nNew lines of conversation:\nHuman: Why do you think artificial intelligence is a force for good?\nAI: Because artificial intelligence will help humans reach their full potential.\n\nNew summary:\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good because it will help humans reach their full potential.\nEND OF EXAMPLE\n\nCurrent summary:\n{summary}\n\nNew lines of conversation:\n{new_lines}\n\nNew summary:', template_format='f-string', validate_template=True)¶
param return_messages: bool = False¶
param summary_message_cls: Type[BaseMessage] = <class 'langchain.schema.messages.SystemMessage'>¶
clear() → None[source]¶
Clear memory contents.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any][source]¶
Return history buffer.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
predict_new_summary(messages: List[BaseMessage], existing_summary: str) → str¶
prune() → None[source]¶
Prune the buffer if it exceeds the max token limit.
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]¶
Save context from this conversation to buffer.
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property buffer: List[langchain.schema.messages.BaseMessage]¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
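The pruning step above is the distinctive part of this class: once the buffer exceeds max_token_limit, the oldest messages are popped and folded into the running summary. A sketch of that loop, where token_count and summarize are stand-ins for the LLM-backed pieces of the real class:

```python
# Sketch of summary-buffer pruning. token_count and summarize are
# injected stand-ins: the real class counts tokens with the LLM's
# tokenizer and summarizes via predict_new_summary.
def prune(buffer, summary, max_token_limit, token_count, summarize):
    pruned = []
    while buffer and token_count(buffer) > max_token_limit:
        pruned.append(buffer.pop(0))  # drop the oldest message first
    if pruned:
        summary = summarize(summary, pruned)
    return buffer, summary

buffer, summary = prune(
    ["a" * 10, "b" * 10, "c" * 10],
    "",
    max_token_limit=25,
    token_count=lambda msgs: sum(len(m) for m in msgs),
    summarize=lambda s, msgs: s + " ".join(msgs),
)
```

Here the three 10-character messages exceed the limit of 25, so the oldest is popped (bringing the count to 20) and absorbed into the summary, while the two newest stay verbatim in the buffer.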
Examples using ConversationSummaryBufferMemory¶
ConversationSummaryBufferMemory
https://api.python.langchain.com/en/latest/memory/langchain.memory.summary_buffer.ConversationSummaryBufferMemory.html
langchain.memory.summary.ConversationSummaryMemory¶
class langchain.memory.summary.ConversationSummaryMemory[source]¶
Bases: BaseChatMemory, SummarizerMixin
Conversation summarizer for chat memory.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param ai_prefix: str = 'AI'¶
param buffer: str = ''¶
param chat_memory: BaseChatMessageHistory [Optional]¶
param human_prefix: str = 'Human'¶
param input_key: Optional[str] = None¶
param llm: BaseLanguageModel [Required]¶
param output_key: Optional[str] = None¶
param prompt: BasePromptTemplate = PromptTemplate(input_variables=['summary', 'new_lines'], output_parser=None, partial_variables={}, template='Progressively summarize the lines of conversation provided, adding onto the previous summary returning a new summary.\n\nEXAMPLE\nCurrent summary:\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good.\n\nNew lines of conversation:\nHuman: Why do you think artificial intelligence is a force for good?\nAI: Because artificial intelligence will help humans reach their full potential.\n\nNew summary:\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good because it will help humans reach their full potential.\nEND OF EXAMPLE\n\nCurrent summary:\n{summary}\n\nNew lines of conversation:\n{new_lines}\n\nNew summary:', template_format='f-string', validate_template=True)¶
param return_messages: bool = False¶
param summary_message_cls: Type[BaseMessage] = <class 'langchain.schema.messages.SystemMessage'>¶
clear() → None[source]¶
Clear memory contents.
https://api.python.langchain.com/en/latest/memory/langchain.memory.summary.ConversationSummaryMemory.html