Source: https://api.python.langchain.com/en/stable/modules/base_classes.html

constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
This class is LangChain serializable.
property type: str
Type of the message, used for serialization.
class langchain.schema.ChatMessage(*, content, additional_kwargs=None, role)[source]
Bases: langchain.schema.BaseMessage
Type of message with arbitrary speaker.
Parameters
content (str)
additional_kwargs (dict)
role (str)
Return type
None
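ChatMessage carries an arbitrary role string alongside the content, unlike the fixed human/AI message types. A minimal standalone sketch of that shape (a plain dataclass mirroring the signature above, not importing langchain):

```python
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class ChatMessageSketch:
    """Mirrors langchain.schema.ChatMessage: content plus an arbitrary role."""
    content: str
    role: str
    additional_kwargs: Dict[str, Any] = field(default_factory=dict)

# A message from a custom speaker that is neither "human" nor "ai".
msg = ChatMessageSketch(content="Approved.", role="moderator")
```

The free-form `role` is what distinguishes this class from the fixed-speaker message types.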
classmethod construct(_fields_set=None, **values)
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = "allow" was set since it adds all passed values.
Parameters
_fields_set (Optional[SetStr])
values (Any)
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data
deep (bool) – set to True to make a deep copy of the model
self (Model)
Returns
new model instance
Return type
Model
dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]])
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]])
by_alias (bool)
skip_defaults (Optional[bool])
exclude_unset (bool)
exclude_defaults (bool)
exclude_none (bool)
Return type
DictStrAny
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]])
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]])
by_alias (bool)
skip_defaults (Optional[bool])
exclude_unset (bool)
exclude_defaults (bool)
exclude_none (bool)
encoder (Optional[Callable[[Any], Any]])
models_as_dict (bool)
dumps_kwargs (Any)
Return type
unicode
classmethod update_forward_refs(**localns)
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any)
Return type
None
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
This class is LangChain serializable.
property type: str
Type of the message, used for serialization.
langchain.schema.messages_to_dict(messages)[source]
Convert messages to dict.
Parameters
messages (List[langchain.schema.BaseMessage]) – List of messages to convert.
Returns
List of dicts.
Return type
List[dict]
langchain.schema.messages_from_dict(messages)[source]
Convert messages from dict.
Parameters
messages (List[dict]) – List of messages (dicts) to convert.
Returns
List of messages (BaseMessages).
Return type
List[langchain.schema.BaseMessage]
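The two helpers above are inverses: serializing a message list and deserializing it again should round-trip. A standalone sketch of that contract (the `{"type": ..., "data": ...}` wire shape is an assumption about this era of the library; the message type is reduced to a plain tuple so the example does not import langchain):

```python
from typing import Any, Dict, List, Tuple

# Hypothetical in-memory message shape: (type, content) pairs.
def messages_to_dict_sketch(messages: List[Tuple[str, str]]) -> List[Dict[str, Any]]:
    # Each message becomes a type tag plus its payload dict.
    return [{"type": t, "data": {"content": c, "additional_kwargs": {}}}
            for t, c in messages]

def messages_from_dict_sketch(dicts: List[Dict[str, Any]]) -> List[Tuple[str, str]]:
    # Inverse of the above: rebuild (type, content) pairs from the payloads.
    return [(d["type"], d["data"]["content"]) for d in dicts]

msgs = [("human", "hi"), ("ai", "hello")]
round_tripped = messages_from_dict_sketch(messages_to_dict_sketch(msgs))
```

A dict representation like this is what makes chat histories easy to persist as JSON.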
class langchain.schema.ChatGeneration(*, text='', generation_info=None, message)[source]
Bases: langchain.schema.Generation
Output of a single generation.
Parameters
text (str)
generation_info (Optional[Dict[str, Any]])
message (langchain.schema.BaseMessage)
Return type
None
attribute generation_info: Optional[Dict[str, Any]] = None
Raw generation info response from the provider.
attribute text: str = ''
Generated text output.
classmethod construct(_fields_set=None, **values)
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = "allow" was set since it adds all passed values.
Parameters
_fields_set (Optional[SetStr])
values (Any)
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data
deep (bool) – set to True to make a deep copy of the model
self (Model)
Returns
new model instance
Return type
Model
dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]])
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]])
by_alias (bool)
skip_defaults (Optional[bool])
exclude_unset (bool)
exclude_defaults (bool)
exclude_none (bool)
Return type
DictStrAny
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]])
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]])
by_alias (bool)
skip_defaults (Optional[bool])
exclude_unset (bool)
exclude_defaults (bool)
exclude_none (bool)
encoder (Optional[Callable[[Any], Any]])
models_as_dict (bool)
dumps_kwargs (Any)
Return type
unicode
classmethod update_forward_refs(**localns)
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any)
Return type
None
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
This class is LangChain serializable.
class langchain.schema.RunInfo(*, run_id)[source]
Bases: pydantic.main.BaseModel
Class that contains all relevant metadata for a Run.
Parameters
run_id (uuid.UUID)
Return type
None
classmethod construct(_fields_set=None, **values)
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = "allow" was set since it adds all passed values.
Parameters
_fields_set (Optional[SetStr])
values (Any)
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data
deep (bool) – set to True to make a deep copy of the model
self (Model)
Returns
new model instance
Return type
Model
dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]])
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]])
by_alias (bool)
skip_defaults (Optional[bool])
exclude_unset (bool)
exclude_defaults (bool)
exclude_none (bool)
Return type
DictStrAny
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]])
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]])
by_alias (bool)
skip_defaults (Optional[bool])
exclude_unset (bool)
exclude_defaults (bool)
exclude_none (bool)
encoder (Optional[Callable[[Any], Any]])
models_as_dict (bool)
dumps_kwargs (Any)
Return type
unicode
classmethod update_forward_refs(**localns)
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any)
Return type
None
class langchain.schema.ChatResult(*, generations, llm_output=None)[source]
Bases: pydantic.main.BaseModel
Class that contains all relevant information for a Chat Result.
Parameters
generations (List[langchain.schema.ChatGeneration])
llm_output (Optional[dict])
Return type
None
attribute generations: List[langchain.schema.ChatGeneration] [Required]
List of the things generated.
attribute llm_output: Optional[dict] = None
For arbitrary LLM provider specific output.
classmethod construct(_fields_set=None, **values)
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = "allow" was set since it adds all passed values.
Parameters
_fields_set (Optional[SetStr])
values (Any)
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data
deep (bool) – set to True to make a deep copy of the model
self (Model)
Returns
new model instance
Return type
Model
dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]])
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]])
by_alias (bool)
skip_defaults (Optional[bool])
exclude_unset (bool)
exclude_defaults (bool)
exclude_none (bool)
Return type
DictStrAny
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]])
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]])
by_alias (bool)
skip_defaults (Optional[bool])
exclude_unset (bool)
exclude_defaults (bool)
exclude_none (bool)
encoder (Optional[Callable[[Any], Any]])
models_as_dict (bool)
dumps_kwargs (Any)
Return type
unicode
classmethod update_forward_refs(**localns)
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any)
Return type
None
class langchain.schema.LLMResult(*, generations, llm_output=None, run=None)[source]
Bases: pydantic.main.BaseModel
Class that contains all relevant information for an LLM Result.
Parameters
generations (List[List[langchain.schema.Generation]])
llm_output (Optional[dict])
run (Optional[List[langchain.schema.RunInfo]])
Return type
None
attribute generations: List[List[langchain.schema.Generation]] [Required]
List of the things generated. This is List[List[]] because each input could have multiple generations.
attribute llm_output: Optional[dict] = None
For arbitrary LLM provider specific output.
attribute run: Optional[List[langchain.schema.RunInfo]] = None
Run metadata.
classmethod construct(_fields_set=None, **values)
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = "allow" was set since it adds all passed values.
Parameters
_fields_set (Optional[SetStr])
values (Any)
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data
deep (bool) – set to True to make a deep copy of the model
self (Model)
Returns
new model instance
Return type
Model
dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]])
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]])
by_alias (bool)
skip_defaults (Optional[bool])
exclude_unset (bool)
exclude_defaults (bool)
exclude_none (bool)
Return type
DictStrAny
flatten()[source]
Flatten generations into a single list.
Return type
List[langchain.schema.LLMResult]
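Because `generations` is a list of lists (one inner list of candidates per input prompt), flattening splits one batched result into per-prompt results. A standalone sketch of one plausible flattening (it ignores `llm_output` and `run` handling, and the per-prompt split is an assumption; generations are reduced to plain strings so the example does not import langchain):

```python
from typing import Any, List

def flatten_sketch(generations: List[List[Any]]) -> List[List[List[Any]]]:
    # One single-prompt "result" per input, each holding that prompt's
    # candidate list, mirroring the shape LLMResult.flatten() returns.
    return [[gen_list] for gen_list in generations]

nested = [["a1", "a2"], ["b1"]]  # two prompts; prompt 0 has two candidates
flat = flatten_sketch(nested)
```

This is useful when downstream code expects one result object per prompt rather than one batched object.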
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]])
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]])
by_alias (bool)
skip_defaults (Optional[bool])
exclude_unset (bool)
exclude_defaults (bool)
exclude_none (bool)
encoder (Optional[Callable[[Any], Any]])
models_as_dict (bool)
dumps_kwargs (Any)
Return type
unicode
classmethod update_forward_refs(**localns)
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any)
Return type
None
class langchain.schema.PromptValue[source]
Bases: langchain.load.serializable.Serializable, abc.ABC
Return type
None
classmethod construct(_fields_set=None, **values)
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = "allow" was set since it adds all passed values.
Parameters
_fields_set (Optional[SetStr])
values (Any)
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data
deep (bool) – set to True to make a deep copy of the model
self (Model)
Returns
new model instance
Return type
Model
dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]])
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]])
by_alias (bool)
skip_defaults (Optional[bool])
exclude_unset (bool)
exclude_defaults (bool)
exclude_none (bool)
Return type
DictStrAny
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]])
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]])
by_alias (bool)
skip_defaults (Optional[bool])
exclude_unset (bool)
exclude_defaults (bool)
exclude_none (bool)
encoder (Optional[Callable[[Any], Any]])
models_as_dict (bool)
dumps_kwargs (Any)
Return type
unicode
abstract to_messages()[source]
Return prompt as messages.
Return type
List[langchain.schema.BaseMessage]
abstract to_string()[source]
Return prompt as string.
Return type
str
classmethod update_forward_refs(**localns)
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any)
Return type
None
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
Return whether or not the class is serializable.
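PromptValue exists so the same prompt can be fed to completion models (as a string) and chat models (as messages). A standalone sketch of a hypothetical concrete subclass implementing both abstract methods (class names and the dict message shape are illustrative, not langchain's own):

```python
from abc import ABC, abstractmethod
from typing import Dict, List

class PromptValueSketch(ABC):
    """Mirrors the PromptValue interface: one prompt, two views."""
    @abstractmethod
    def to_string(self) -> str: ...
    @abstractmethod
    def to_messages(self) -> List[Dict[str, str]]: ...

class StringPromptValueSketch(PromptValueSketch):
    # Hypothetical concrete subclass wrapping a plain-text prompt.
    def __init__(self, text: str) -> None:
        self.text = text

    def to_string(self) -> str:
        return self.text

    def to_messages(self) -> List[Dict[str, str]]:
        # Completion-style text becomes a single human message for chat models.
        return [{"type": "human", "content": self.text}]

pv = StringPromptValueSketch("Tell me a joke.")
```

Chains can then stay agnostic about which kind of model ultimately consumes the prompt.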
class langchain.schema.BaseMemory[source]
Bases: langchain.load.serializable.Serializable, abc.ABC
Base interface for memory in chains.
Return type
None
abstract clear()[source]
Clear memory contents.
Return type
None
classmethod construct(_fields_set=None, **values)
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = "allow" was set since it adds all passed values.
Parameters
_fields_set (Optional[SetStr])
values (Any)
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data
deep (bool) – set to True to make a deep copy of the model
self (Model)
Returns
new model instance
Return type
Model
dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]])
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]])
by_alias (bool)
skip_defaults (Optional[bool])
exclude_unset (bool)
exclude_defaults (bool)
exclude_none (bool)
Return type
DictStrAny
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]])
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]])
by_alias (bool)
skip_defaults (Optional[bool])
exclude_unset (bool)
exclude_defaults (bool)
exclude_none (bool)
encoder (Optional[Callable[[Any], Any]])
models_as_dict (bool)
dumps_kwargs (Any)
Return type
unicode
abstract load_memory_variables(inputs)[source]
Return key-value pairs given the text input to the chain.
If None, return all memories.
Parameters
inputs (Dict[str, Any])
Return type
Dict[str, Any]
abstract save_context(inputs, outputs)[source]
Save the context of this model run to memory.
Parameters
inputs (Dict[str, Any])
outputs (Dict[str, str])
Return type
None
classmethod update_forward_refs(**localns)
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any)
Return type
None
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
Return whether or not the class is serializable.
abstract property memory_variables: List[str]
Input keys this memory class will load dynamically.
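The memory contract is: `save_context` records each chain run, `load_memory_variables` replays it under the keys advertised by `memory_variables`, and `clear` resets state. A standalone sketch of a minimal buffer-style implementation (class names and the `input`/`output` key choices are illustrative assumptions, not langchain's own):

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, List

class MemorySketch(ABC):
    """Mirrors the BaseMemory interface."""
    @abstractmethod
    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]: ...
    @abstractmethod
    def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None: ...
    @abstractmethod
    def clear(self) -> None: ...
    @property
    @abstractmethod
    def memory_variables(self) -> List[str]: ...

class BufferMemorySketch(MemorySketch):
    # Hypothetical implementation: keeps every turn in one "history" string.
    def __init__(self) -> None:
        self.turns: List[str] = []

    @property
    def memory_variables(self) -> List[str]:
        return ["history"]

    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
        return {"history": "\n".join(self.turns)}

    def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
        self.turns.append(f"Human: {inputs['input']}")
        self.turns.append(f"AI: {outputs['output']}")

    def clear(self) -> None:
        self.turns.clear()

mem = BufferMemorySketch()
mem.save_context({"input": "hi"}, {"output": "hello"})
```

A chain would merge the returned `{"history": ...}` dict into its prompt variables on the next run.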
class langchain.schema.BaseChatMessageHistory[source]
Bases: abc.ABC
Base interface for chat message history.
See ChatMessageHistory for default implementation.
add_user_message(message)[source]
Add a user message to the store.
Parameters
message (str)
Return type
None
add_ai_message(message)[source]
Add an AI message to the store.
Parameters
message (str)
Return type
None
add_message(message)[source]
Add a self-created message to the store.
Parameters
message (langchain.schema.BaseMessage)
Return type
None
abstract clear()[source]
Remove all messages from the store.
Return type
None
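Only `add_message` and `clear` are abstract; `add_user_message` and `add_ai_message` are string conveniences that wrap a message and delegate to `add_message`. A standalone sketch of that delegation with an in-memory store (messages reduced to `(type, content)` tuples so the example does not import langchain; class names are illustrative):

```python
from abc import ABC, abstractmethod
from typing import List, Tuple

class ChatHistorySketch(ABC):
    """Mirrors BaseChatMessageHistory: the convenience adders wrap add_message."""
    def add_user_message(self, message: str) -> None:
        self.add_message(("human", message))

    def add_ai_message(self, message: str) -> None:
        self.add_message(("ai", message))

    @abstractmethod
    def add_message(self, message: Tuple[str, str]) -> None: ...
    @abstractmethod
    def clear(self) -> None: ...

class InMemoryHistorySketch(ChatHistorySketch):
    def __init__(self) -> None:
        self.messages: List[Tuple[str, str]] = []

    def add_message(self, message: Tuple[str, str]) -> None:
        self.messages.append(message)

    def clear(self) -> None:
        self.messages.clear()

history = InMemoryHistorySketch()
history.add_user_message("hi")
history.add_ai_message("hello")
```

Subclasses backed by Redis, a database, or a file only have to override `add_message` and `clear`.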
class langchain.schema.Document(*, page_content, metadata=None)[source]
Bases: langchain.load.serializable.Serializable
Interface for interacting with a document.
Parameters
page_content (str)
metadata (dict)
Return type
None
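A Document is just text plus arbitrary metadata (source, page number, and so on) that retrievers and loaders carry along. A standalone dataclass sketch of that shape (field names follow the signature above; the metadata keys are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class DocumentSketch:
    """Mirrors langchain.schema.Document: text plus arbitrary metadata."""
    page_content: str
    metadata: dict = field(default_factory=dict)

doc = DocumentSketch(
    page_content="LangChain ships a schema module.",
    metadata={"source": "base_classes.html", "page": 1},
)
```

Keeping provenance in `metadata` is what lets applications cite where retrieved text came from.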
classmethod construct(_fields_set=None, **values)
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = "allow" was set since it adds all passed values.
Parameters
_fields_set (Optional[SetStr])
values (Any)
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data
deep (bool) – set to True to make a deep copy of the model
self (Model)
Returns
new model instance
Return type
Model
dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]])
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]])
by_alias (bool)
skip_defaults (Optional[bool])
exclude_unset (bool)
exclude_defaults (bool)
exclude_none (bool)
Return type
DictStrAny
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]])
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]])
by_alias (bool)
skip_defaults (Optional[bool])
exclude_unset (bool)
exclude_defaults (bool)
exclude_none (bool)
encoder (Optional[Callable[[Any], Any]])
models_as_dict (bool)
dumps_kwargs (Any)
Return type
unicode
classmethod update_forward_refs(**localns)
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any)
Return type
None
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
Return whether or not the class is serializable.
class langchain.schema.BaseRetriever[source]
Bases: abc.ABC
Base interface for retrievers.
abstract get_relevant_documents(query)[source]
Get documents relevant for a query.
Parameters
query (str) – string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
abstract async aget_relevant_documents(query)[source]
Get documents relevant for a query.
Parameters
query (str) – string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
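Any object that maps a query string to a list of documents satisfies the retriever interface; the ranking strategy is entirely up to the implementation. A standalone sketch using naive case-insensitive substring matching over an in-memory corpus (documents reduced to plain strings so the example does not import langchain; class names are illustrative):

```python
from abc import ABC, abstractmethod
from typing import List

class RetrieverSketch(ABC):
    """Mirrors the synchronous half of the BaseRetriever interface."""
    @abstractmethod
    def get_relevant_documents(self, query: str) -> List[str]: ...

class KeywordRetrieverSketch(RetrieverSketch):
    # Hypothetical corpus-backed retriever: keeps documents whose text
    # contains the query, compared case-insensitively.
    def __init__(self, corpus: List[str]) -> None:
        self.corpus = corpus

    def get_relevant_documents(self, query: str) -> List[str]:
        q = query.lower()
        return [doc for doc in self.corpus if q in doc.lower()]

retriever = KeywordRetrieverSketch(["Cats purr.", "Dogs bark.", "Cats nap."])
```

Real implementations typically swap the substring test for vector similarity search, but the interface stays the same.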
langchain.schema.Memoryο
alias of langchain.schema.BaseMemory
class langchain.schema.BaseLLMOutputParser[source]ο
Bases: langchain.load.serializable.Serializable, abc.ABC, Generic[langchain.schema.T]
Return type
None
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)ο
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
Return type
DictStrAny
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
abstract parse_result(result)[source]ο
Parse LLM Result.
Parameters
result (List[langchain.schema.Generation]) β
Return type
langchain.schema.T
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο | https://api.python.langchain.com/en/stable/modules/base_classes.html |
76417e48a30b-35 | Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.schema.BaseOutputParser[source]ο
Bases: langchain.schema.BaseLLMOutputParser, abc.ABC, Generic[langchain.schema.T]
Class to parse the output of an LLM call.
Output parsers help structure language model responses.
Return type
None
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)[source]ο
Return dictionary representation of output parser.
Parameters
kwargs (Any) β
Return type
Dict
get_format_instructions()[source]ο
Instructions on how the LLM output should be formatted.
Return type
str
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
abstract parse(text)[source]ο
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
text (str) β output of language model
Returns
structured output
Return type
langchain.schema.T
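The abstract parse method above is the heart of an output parser: text in, structure out. A minimal sketch with a hypothetical comma-separated-list parser, shown free of LangChain imports (a real parser would subclass BaseOutputParser, whose parse_result delegates to parse on the first generation's text):

```python
from typing import List


class CommaSeparatedListParser:
    """Sketch of a BaseOutputParser[List[str]]: raw model text in, list out."""

    def get_format_instructions(self) -> str:
        # Shown to the model so its output matches what parse expects.
        return "Respond with a comma-separated list, e.g. `foo, bar, baz`."

    def parse(self, text: str) -> List[str]:
        # Split on commas and trim whitespace around each item.
        return [part.strip() for part in text.strip().split(",")]


parser = CommaSeparatedListParser()
items = parser.parse("red, green , blue")  # -> ['red', 'green', 'blue']
```

Pairing the format instructions with the parse logic keeps the prompt and the parser in sync.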
parse_result(result)[source]ο
Parse LLM Result.
Parameters
result (List[langchain.schema.Generation]) β
Return type
langchain.schema.T
parse_with_prompt(completion, prompt)[source]ο
Optional method to parse the output of an LLM call with a prompt.
The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters
completion (str) β output of language model
prompt (langchain.schema.PromptValue) β prompt value
Returns
structured output
Return type
Any
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.schema.NoOpOutputParser[source]ο
Bases: langchain.schema.BaseOutputParser[str]
Output parser that just returns the text as is.
Return type
None
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return dictionary representation of output parser.
Parameters
kwargs (Any) β
Return type
Dict
get_format_instructions()ο
Instructions on how the LLM output should be formatted.
Return type
str
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
parse(text)[source]ο
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
text (str) β output of language model
Returns
structured output
Return type
str
parse_result(result)ο
Parse LLM Result.
Parameters
result (List[langchain.schema.Generation]) β
Return type
langchain.schema.T
parse_with_prompt(completion, prompt)ο
Optional method to parse the output of an LLM call with a prompt.
The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters
completion (str) β output of language model
prompt (langchain.schema.PromptValue) β prompt value
Returns
structured output
Return type
Any
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
exception langchain.schema.OutputParserException(error, observation=None, llm_output=None, send_to_llm=False)[source]ο
Bases: ValueError
Exception that output parsers should raise to signify a parsing error.
This exists to differentiate parsing errors from other code or execution errors
that also may arise inside the output parser. OutputParserExceptions will be
available to catch and handle in ways to fix the parsing error, while other
errors will be raised.
Parameters
error (Any) β
observation (str | None) β
llm_output (str | None) β
send_to_llm (bool) β
add_note()ο
Exception.add_note(note) β
add a note to the exception
with_traceback()ο
Exception.with_traceback(tb) β
set self.__traceback__ to tb and return self.
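The pattern this exception enables can be sketched as follows. The OutputParserException class below is a local stand-in mirroring the signature documented above, and the fallback branch marks where a real application might re-prompt the model instead:

```python
class OutputParserException(ValueError):
    """Local stand-in mirroring langchain.schema.OutputParserException."""

    def __init__(self, error, observation=None, llm_output=None, send_to_llm=False):
        super().__init__(error)
        self.observation = observation
        self.llm_output = llm_output
        self.send_to_llm = send_to_llm


def parse_int(text: str) -> int:
    try:
        return int(text.strip())
    except ValueError as exc:
        # Raise the parser-specific error so callers can distinguish a bad
        # completion from a genuine bug in the surrounding code.
        raise OutputParserException(
            f"could not parse integer: {text!r}", llm_output=text
        ) from exc


def parse_with_fallback(text: str, fallback: int) -> int:
    try:
        return parse_int(text)
    except OutputParserException:
        return fallback  # a retry against the LLM would go here


parse_with_fallback("42", 0)         # -> 42
parse_with_fallback("forty-two", 0)  # -> 0
```

Catching only OutputParserException lets other errors (network failures, bugs) propagate unchanged.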
class langchain.schema.BaseDocumentTransformer[source]ο
Bases: abc.ABC
Base interface for transforming documents.
abstract transform_documents(documents, **kwargs)[source]ο
Transform a list of documents.
Parameters
documents (Sequence[langchain.schema.Document]) β
kwargs (Any) β
Return type
Sequence[langchain.schema.Document]
abstract async atransform_documents(documents, **kwargs)[source]ο
Asynchronously transform a list of documents.
Parameters
documents (Sequence[langchain.schema.Document]) β
kwargs (Any) β
Return type
Sequence[langchain.schema.Document]
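A minimal sketch of the transformer contract, using stand-in classes rather than the real langchain.schema types; a concrete transformer maps a sequence of documents to another sequence:

```python
from dataclasses import dataclass, field
from typing import List, Sequence


@dataclass
class Document:
    """Stand-in for langchain.schema.Document."""
    page_content: str
    metadata: dict = field(default_factory=dict)


class DropDuplicatesTransformer:
    """Sketch of a BaseDocumentTransformer: drops exact-duplicate texts."""

    def transform_documents(self, documents: Sequence[Document]) -> List[Document]:
        seen, unique = set(), []
        for doc in documents:
            if doc.page_content not in seen:
                seen.add(doc.page_content)
                unique.append(doc)
        return unique


docs = [Document("a"), Document("b"), Document("a")]
deduped = DropDuplicatesTransformer().transform_documents(docs)  # keeps "a", "b"
```

Text splitters and redundancy filters in LangChain follow this same sequence-in, sequence-out shape.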
Chat Modelsο
class langchain.chat_models.ChatOpenAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model='gpt-3.5-turbo', temperature=0.7, model_kwargs=None, openai_api_key=None, openai_api_base=None, openai_organization=None, openai_proxy=None, request_timeout=None, max_retries=6, streaming=False, n=1, max_tokens=None, tiktoken_model_name=None)[source]ο
Bases: langchain.chat_models.base.BaseChatModel
Wrapper around OpenAI Chat large language models.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.chat_models import ChatOpenAI
openai = ChatOpenAI(model_name="gpt-3.5-turbo")
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (Any) β
model (str) β
temperature (float) β
model_kwargs (Dict[str, Any]) β
openai_api_key (Optional[str]) β
openai_api_base (Optional[str]) β
openai_organization (Optional[str]) β
openai_proxy (Optional[str]) β
request_timeout (Optional[Union[float, Tuple[float, float]]]) β
max_retries (int) β
streaming (bool) β
n (int) β
max_tokens (Optional[int]) β
tiktoken_model_name (Optional[str]) β
Return type
None
attribute max_retries: int = 6ο
Maximum number of retries to make when generating.
attribute max_tokens: Optional[int] = Noneο
Maximum number of tokens to generate.
attribute model_kwargs: Dict[str, Any] [Optional]ο
Holds any model parameters valid for create call not explicitly specified.
attribute model_name: str = 'gpt-3.5-turbo' (alias 'model')ο
Model name to use.
attribute n: int = 1ο
Number of chat completions to generate for each prompt.
attribute openai_api_base: Optional[str] = Noneο
Base URL path for API requests,
leave blank if not using a proxy or service emulator.
attribute openai_api_key: Optional[str] = Noneο
attribute openai_organization: Optional[str] = Noneο
attribute openai_proxy: Optional[str] = Noneο
attribute request_timeout: Optional[Union[float, Tuple[float, float]]] = Noneο
Timeout for requests to OpenAI completion API. Default is 600 seconds.
attribute streaming: bool = Falseο
Whether to stream the results or not.
attribute temperature: float = 0.7ο
What sampling temperature to use.
attribute tiktoken_model_name: Optional[str] = Noneο
The model name to pass to tiktoken when using this class.
Tiktoken is used to count the number of tokens in documents to constrain
them to be under a certain limit. By default, when set to None, this will
be the same as the embedding model name. However, there are some cases
where you may want to use this Embedding class with a model name not
supported by tiktoken. This can include when using Azure embeddings or
when using one of the many model providers that expose an OpenAI-like
API but with different models. In those cases, in order to avoid erroring
when tiktoken is called, you can specify a model name to use here.
completion_with_retry(**kwargs)[source]ο
Use tenacity to retry the completion call.
Parameters
kwargs (Any) β
Return type
Any
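The retry behavior can be sketched without tenacity. The real implementation retries OpenAI's transient errors with exponential backoff; this hypothetical helper imitates that mechanism, with ConnectionError standing in for the provider's error types:

```python
import time


def with_retry(call, max_retries=6, base_delay=1.0, sleep=time.sleep):
    """Retry `call` with exponential backoff, as completion_with_retry
    does via tenacity. Re-raises after the final attempt fails."""
    for attempt in range(max_retries):
        try:
            return call()
        except ConnectionError:  # stand-in for transient API errors
            if attempt == max_retries - 1:
                raise
            sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...


attempts = []

def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise ConnectionError("rate limited")
    return "ok"

result = with_retry(flaky, sleep=lambda s: None)  # -> "ok" after two failures
```

Injecting `sleep` keeps the helper testable; production code would use the real time.sleep.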
get_num_tokens_from_messages(messages)[source]ο
Calculate num tokens for gpt-3.5-turbo and gpt-4 with tiktoken package.
Official documentation: https://github.com/openai/openai-cookbook/blob/
main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)[source]ο
Get the tokens present in the text with tiktoken package.
Parameters
text (str) β
Return type
List[int]
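The accounting get_num_tokens_from_messages performs can be approximated as follows. The framing constants follow the OpenAI cookbook recipe for gpt-3.5-turbo and are assumptions of this sketch; a whitespace splitter stands in for tiktoken:

```python
def count_message_tokens(messages, encode=lambda s: s.split()):
    """Approximate chat token accounting: each message costs a few framing
    tokens plus its encoded role and content, and the assistant reply is
    primed with a few more. `encode` stands in for a tiktoken encoding."""
    tokens_per_message = 4  # cookbook constant for gpt-3.5-turbo-style framing
    total = 0
    for msg in messages:
        total += tokens_per_message
        total += len(encode(msg["role"])) + len(encode(msg["content"]))
    return total + 3  # reply priming


count_message_tokens([
    {"role": "system", "content": "You are terse."},
    {"role": "user", "content": "Hi"},
])  # -> 17 with the whitespace stand-in
```

The real method uses tiktoken and model-specific constants, so its counts differ; the structure of the sum is the point here.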
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.chat_models.AzureChatOpenAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model='gpt-3.5-turbo', temperature=0.7, model_kwargs=None, openai_api_key='', openai_api_base='', openai_organization='', openai_proxy='', request_timeout=None, max_retries=6, streaming=False, n=1, max_tokens=None, tiktoken_model_name=None, deployment_name='', openai_api_type='azure', openai_api_version='')[source]ο
Bases: langchain.chat_models.openai.ChatOpenAI
Wrapper around Azure OpenAI Chat Completion API. To use this class you
must have a deployed model on Azure OpenAI. Use deployment_name in the
constructor to refer to the "Model deployment name" in the Azure portal.
In addition, you should have the openai python package installed, and the
following environment variables set or passed in constructor in lower case:
- OPENAI_API_TYPE (default: azure)
- OPENAI_API_KEY
- OPENAI_API_BASE
- OPENAI_API_VERSION
- OPENAI_PROXY
For example, if you have gpt-35-turbo deployed, with the deployment name
35-turbo-dev, the constructor should look like:
AzureChatOpenAI(
deployment_name="35-turbo-dev",
openai_api_version="2023-03-15-preview",
)
Be aware the API version may change.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (Any) β
model (str) β
temperature (float) β
model_kwargs (Dict[str, Any]) β
openai_api_key (str) β
openai_api_base (str) β
openai_organization (str) β
openai_proxy (str) β
request_timeout (Optional[Union[float, Tuple[float, float]]]) β
max_retries (int) β
streaming (bool) β
n (int) β
max_tokens (Optional[int]) β
tiktoken_model_name (Optional[str]) β
deployment_name (str) β
openai_api_type (str) β
openai_api_version (str) β
Return type
None
attribute deployment_name: str = ''ο
attribute openai_api_base: str = ''ο
Base URL path for API requests,
leave blank if not using a proxy or service emulator.
attribute openai_api_key: str = ''ο
attribute openai_api_type: str = 'azure'ο
attribute openai_api_version: str = ''ο
attribute openai_organization: str = ''ο
attribute openai_proxy: str = ''ο
class langchain.chat_models.FakeListChatModel(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, responses, i=0)[source]ο
Bases: langchain.chat_models.base.SimpleChatModel
Fake ChatModel for testing purposes.
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
responses (List) β
i (int) β
Return type
None
attribute i: int = 0ο
attribute responses: List [Required]ο
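The behavior of a canned-response fake can be sketched as follows. FakeListChat is a hypothetical stand-in showing how the responses list and the i cursor interact; the cycling on overflow is an assumption of this sketch, not necessarily what FakeListChatModel does:

```python
from typing import List


class FakeListChat:
    """Sketch of a fake chat model: replays canned responses in order."""

    def __init__(self, responses: List[str]):
        self.responses = responses
        self.i = 0  # index of the next response to return

    def predict(self, _message: str) -> str:
        # The incoming message is ignored; tests only care about the output.
        response = self.responses[self.i]
        self.i = (self.i + 1) % len(self.responses)  # cycle for long tests
        return response


fake = FakeListChat(["first", "second"])
fake.predict("anything")  # -> "first"
fake.predict("anything")  # -> "second"
```

Fakes like this let chain logic be unit-tested deterministically, without network calls or API keys.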
class langchain.chat_models.PromptLayerChatOpenAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model='gpt-3.5-turbo', temperature=0.7, model_kwargs=None, openai_api_key=None, openai_api_base=None, openai_organization=None, openai_proxy=None, request_timeout=None, max_retries=6, streaming=False, n=1, max_tokens=None, tiktoken_model_name=None, pl_tags=None, return_pl_id=False)[source]ο
Bases: langchain.chat_models.openai.ChatOpenAI
Wrapper around OpenAI Chat large language models and PromptLayer.
To use, you should have the openai and promptlayer python
package installed, and the environment variable OPENAI_API_KEY
and PROMPTLAYER_API_KEY set with your openAI API key and
promptlayer key respectively.
All parameters that can be passed to the OpenAI LLM can also
be passed here. The PromptLayerChatOpenAI adds two optional
Parameters
pl_tags (Optional[List[str]]) β List of strings to tag the request with.
return_pl_id (Optional[bool]) β If True, the PromptLayer request ID will be
returned in the generation_info field of the
Generation object.
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (Any) β
model (str) β
temperature (float) β
model_kwargs (Dict[str, Any]) β
openai_api_key (Optional[str]) β
openai_api_base (Optional[str]) β
openai_organization (Optional[str]) β
openai_proxy (Optional[str]) β
request_timeout (Optional[Union[float, Tuple[float, float]]]) β
max_retries (int) β
streaming (bool) β
n (int) β
max_tokens (Optional[int]) β
tiktoken_model_name (Optional[str]) β
Return type
None
Example
from langchain.chat_models import PromptLayerChatOpenAI
openai = PromptLayerChatOpenAI(model_name="gpt-3.5-turbo")
attribute pl_tags: Optional[List[str]] = Noneο
attribute return_pl_id: Optional[bool] = Falseο
class langchain.chat_models.ChatAnthropic(*, client=None, model='claude-v1', max_tokens_to_sample=256, temperature=None, top_k=None, top_p=None, streaming=False, default_request_timeout=None, anthropic_api_url=None, anthropic_api_key=None, HUMAN_PROMPT=None, AI_PROMPT=None, count_tokens=None, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None)[source]ο
Bases: langchain.chat_models.base.BaseChatModel, langchain.llms.anthropic._AnthropicCommon
Wrapper around Anthropic's large language model.
To use, you should have the anthropic python package installed, and the
environment variable ANTHROPIC_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
from langchain.chat_models import ChatAnthropic
model = ChatAnthropic(model="<model_name>", anthropic_api_key="my-api-key")
Parameters
client (Any) β
model (str) β
max_tokens_to_sample (int) β
temperature (Optional[float]) β
top_k (Optional[int]) β
top_p (Optional[float]) β
streaming (bool) β
default_request_timeout (Optional[Union[float, Tuple[float, float]]]) β
anthropic_api_url (Optional[str]) β
anthropic_api_key (Optional[str]) β
HUMAN_PROMPT (Optional[str]) β
AI_PROMPT (Optional[str]) β
count_tokens (Optional[Callable[[str], int]]) β
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
Return type
None
get_num_tokens(text)[source]ο
Calculate number of tokens.
Parameters
text (str) β
Return type
int
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.chat_models.ChatGooglePalm(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model_name='models/chat-bison-001', google_api_key=None, temperature=None, top_p=None, top_k=None, n=1)[source]ο
Bases: langchain.chat_models.base.BaseChatModel, pydantic.main.BaseModel
Wrapper around Google's PaLM Chat API.
To use you must have the google.generativeai Python package installed and
either:
The GOOGLE_API_KEY environment variable set with your API key, or
Pass your API key using the google_api_key kwarg to the ChatGooglePalm
constructor.
Example
from langchain.chat_models import ChatGooglePalm
chat = ChatGooglePalm()
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (Any) β
model_name (str) β
google_api_key (Optional[str]) β
temperature (Optional[float]) β
top_p (Optional[float]) β
top_k (Optional[int]) β
n (int) β
Return type
None
attribute google_api_key: Optional[str] = Noneο
attribute model_name: str = 'models/chat-bison-001'ο
Model name to use.
attribute n: int = 1ο
Number of chat completions to generate for each prompt. Note that the API may
not return the full n completions if duplicates are generated.
attribute temperature: Optional[float] = Noneο
Run inference with this temperature. Must be in the closed
interval [0.0, 1.0].
attribute top_k: Optional[int] = Noneο
Decode using top-k sampling: consider the set of top_k most probable tokens.
Must be positive.
attribute top_p: Optional[float] = Noneο
Decode using nucleus sampling: consider the smallest set of tokens whose
probability sum is at least top_p. Must be in the closed interval [0.0, 1.0].
class langchain.chat_models.ChatVertexAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model_name='chat-bison', temperature=0.0, max_output_tokens=128, top_p=0.95, top_k=40, stop=None, project=None, location='us-central1', credentials=None, request_parallelism=5)[source]ο
Bases: langchain.llms.vertexai._VertexAICommon, langchain.chat_models.base.BaseChatModel
Wrapper around Vertex AI large language models.
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (_LanguageModel) β
model_name (str) β
temperature (float) β
max_output_tokens (int) β
top_p (float) β
top_k (int) β
stop (Optional[List[str]]) β
project (Optional[str]) β
location (str) β
credentials (Any) β
request_parallelism (int) β
Return type
None
attribute model_name: str = 'chat-bison'ο
Model name to use.
LLMsο
Wrappers on top of large language models APIs.
class langchain.llms.AI21(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, model='j2-jumbo-instruct', temperature=0.7, maxTokens=256, minTokens=0, topP=1.0, presencePenalty=AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True), countPenalty=AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True), frequencyPenalty=AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True), numResults=1, logitBias=None, ai21_api_key=None, stop=None, base_url=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around AI21 large language models.
To use, you should have the environment variable AI21_API_KEY
set with your API key.
Example
from langchain.llms import AI21
ai21 = AI21(model="j2-jumbo-instruct")
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
model (str) β
temperature (float) β
maxTokens (int) β
minTokens (int) β
topP (float) β
presencePenalty (langchain.llms.ai21.AI21PenaltyData) β
countPenalty (langchain.llms.ai21.AI21PenaltyData) β
frequencyPenalty (langchain.llms.ai21.AI21PenaltyData) β
numResults (int) β
logitBias (Optional[Dict[str, float]]) β
ai21_api_key (Optional[str]) β
stop (Optional[List[str]]) β
base_url (Optional[str]) β
Return type
None
attribute base_url: Optional[str] = Noneο
Base url to use, if None decides based on model name.
attribute countPenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True)ο
Penalizes repeated tokens according to count.
attribute frequencyPenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True)ο
Penalizes repeated tokens according to frequency.
attribute logitBias: Optional[Dict[str, float]] = Noneο
Adjust the probability of specific tokens being generated.
attribute maxTokens: int = 256ο
The maximum number of tokens to generate in the completion.
attribute minTokens: int = 0ο
The minimum number of tokens to generate in the completion.
attribute model: str = 'j2-jumbo-instruct'ο
Model name to use.
attribute numResults: int = 1ο
How many completions to generate for each prompt.
attribute presencePenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True)ο
Penalizes repeated tokens.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute temperature: float = 0.7ο
What sampling temperature to use.
attribute topP: float = 1.0ο
Total probability mass of tokens to consider at each step.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
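Stop sequences passed via the stop argument truncate the completion at the first match. A sketch of that post-processing, assuming a non-empty list of stop sequences (the regex escaping is an addition of this sketch for safety with special characters):

```python
import re


def enforce_stop_tokens(text: str, stop):
    """Truncate `text` at the first occurrence of any stop sequence,
    mirroring how LLM wrappers apply the `stop` argument client-side."""
    pattern = "|".join(re.escape(s) for s in stop)  # stop must be non-empty
    return re.split(pattern, text)[0]


enforce_stop_tokens("Answer: 42\nQuestion: next", ["\nQuestion:"])  # -> "Answer: 42"
```

Providers that support stop sequences natively apply the same cut server-side; the wrapper's argument exists so callers get consistent behavior either way.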
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model | https://api.python.langchain.com/en/stable/modules/llms.html |
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
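generate returns an LLMResult whose generations field is a list of lists: one inner list per input prompt, with one Generation per completion. A sketch of walking that structure, using minimal stand-in classes rather than the real langchain.schema types:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Generation:          # stand-in for langchain.schema.Generation
    text: str

@dataclass
class LLMResult:           # stand-in for langchain.schema.LLMResult
    generations: List[List[Generation]] = field(default_factory=list)

# Two prompts, one completion each -- the shape generate() produces.
result = LLMResult(generations=[[Generation("Paris")], [Generation("Berlin")]])
for prompt_idx, gens in enumerate(result.generations):
    for gen in gens:
        print(prompt_idx, gen.text)
```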
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters | https://api.python.langchain.com/en/stable/modules/llms.html |
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
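get_num_tokens is defined in terms of get_token_ids: it is simply the length of the ID list produced by the tokenizer. As an illustration only, a naive whitespace tokenizer (a stand-in for the real tokenizer the library uses) shows the relationship between the two methods:

```python
from typing import List

def get_token_ids(text: str) -> List[int]:
    # Naive stand-in: map each whitespace-separated word to an integer id.
    vocab = {}
    ids = []
    for word in text.split():
        ids.append(vocab.setdefault(word, len(vocab)))
    return ids

def get_num_tokens(text: str) -> int:
    # The token count is the length of the token-id list.
    return len(get_token_ids(text))

print(get_token_ids("to be or not to be"))  # -> [0, 1, 2, 3, 0, 1]
print(get_num_tokens("to be or not to be"))  # -> 6
```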
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters | https://api.python.langchain.com/en/stable/modules/llms.html |
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable. | https://api.python.langchain.com/en/stable/modules/llms.html |
class langchain.llms.AlephAlpha(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model='luminous-base', maximum_tokens=64, temperature=0.0, top_k=0, top_p=0.0, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalties_include_prompt=False, use_multiplicative_presence_penalty=False, penalty_bias=None, penalty_exceptions=None, penalty_exceptions_include_stop_sequences=None, best_of=None, n=1, logit_bias=None, log_probs=None, tokens=False, disable_optimizations=False, minimum_tokens=0, echo=False, use_multiplicative_frequency_penalty=False, sequence_penalty=0.0, sequence_penalty_min_length=2, use_multiplicative_sequence_penalty=False, completion_bias_inclusion=None, completion_bias_inclusion_first_token_only=False, completion_bias_exclusion=None, completion_bias_exclusion_first_token_only=False, contextual_control_threshold=None, control_log_additive=True, repetition_penalties_include_completion=True, raw_completion=False, aleph_alpha_api_key=None, stop_sequences=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around Aleph Alpha large language models.
To use, you should have the aleph_alpha_client python package installed, and the
environment variable ALEPH_ALPHA_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Parameters are explained more in depth here:
https://github.com/Aleph-Alpha/aleph-alpha-client/blob/c14b7dd2b4325c7da0d6a119f6e76385800e097b/aleph_alpha_client/completion.py#L10
Example
from langchain.llms import AlephAlpha | https://api.python.langchain.com/en/stable/modules/llms.html |
aleph_alpha = AlephAlpha(aleph_alpha_api_key="my-api-key")
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (Any) β
model (Optional[str]) β
maximum_tokens (int) β
temperature (float) β
top_k (int) β
top_p (float) β
presence_penalty (float) β
frequency_penalty (float) β
repetition_penalties_include_prompt (Optional[bool]) β
use_multiplicative_presence_penalty (Optional[bool]) β
penalty_bias (Optional[str]) β
penalty_exceptions (Optional[List[str]]) β
penalty_exceptions_include_stop_sequences (Optional[bool]) β
best_of (Optional[int]) β
n (int) β
logit_bias (Optional[Dict[int, float]]) β
log_probs (Optional[int]) β
tokens (Optional[bool]) β
disable_optimizations (Optional[bool]) β
minimum_tokens (Optional[int]) β
echo (bool) β
use_multiplicative_frequency_penalty (bool) β
sequence_penalty (float) β
sequence_penalty_min_length (int) β
use_multiplicative_sequence_penalty (bool) β
completion_bias_inclusion (Optional[Sequence[str]]) β
completion_bias_inclusion_first_token_only (bool) β
completion_bias_exclusion (Optional[Sequence[str]]) β
completion_bias_exclusion_first_token_only (bool) β | https://api.python.langchain.com/en/stable/modules/llms.html |
contextual_control_threshold (Optional[float]) β
control_log_additive (Optional[bool]) β
repetition_penalties_include_completion (bool) β
raw_completion (bool) β
aleph_alpha_api_key (Optional[str]) β
stop_sequences (Optional[List[str]]) β
Return type
None
attribute aleph_alpha_api_key: Optional[str] = Noneο
API key for Aleph Alpha API.
attribute best_of: Optional[int] = Noneο
returns the one with the "best of" results
(highest log probability per token)
attribute completion_bias_exclusion_first_token_only: bool = Falseο
Only consider the first token for the completion_bias_exclusion.
attribute contextual_control_threshold: Optional[float] = Noneο
If set to None, attention control parameters only apply to those tokens that have
explicitly been set in the request.
If set to a non-None value, control parameters are also applied to similar tokens.
attribute control_log_additive: Optional[bool] = Trueο
True: apply control by adding the log(control_factor) to attention scores.
False: (attention_scores - attention_scores.min(-1)) * control_factor
attribute echo: bool = Falseο
Echo the prompt in the completion.
attribute frequency_penalty: float = 0.0ο
Penalizes repeated tokens according to frequency.
attribute log_probs: Optional[int] = Noneο
Number of top log probabilities to be returned for each generated token.
attribute logit_bias: Optional[Dict[int, float]] = Noneο
The logit bias allows influencing the likelihood of generating tokens.
attribute maximum_tokens: int = 64ο
The maximum number of tokens to be generated. | https://api.python.langchain.com/en/stable/modules/llms.html |
attribute minimum_tokens: Optional[int] = 0ο
Generate at least this number of tokens.
attribute model: Optional[str] = 'luminous-base'ο
Model name to use.
attribute n: int = 1ο
How many completions to generate for each prompt.
attribute penalty_bias: Optional[str] = Noneο
Penalty bias for the completion.
attribute penalty_exceptions: Optional[List[str]] = Noneο
List of strings that may be generated without penalty,
regardless of other penalty settings
attribute penalty_exceptions_include_stop_sequences: Optional[bool] = Noneο
Should stop_sequences be included in penalty_exceptions.
attribute presence_penalty: float = 0.0ο
Penalizes repeated tokens.
attribute raw_completion: bool = Falseο
Force the raw completion of the model to be returned.
attribute repetition_penalties_include_completion: bool = Trueο
Flag deciding whether presence penalty or frequency penalty
are updated from the completion.
attribute repetition_penalties_include_prompt: Optional[bool] = Falseο
Flag deciding whether presence penalty or frequency penalty are
updated from the prompt.
attribute stop_sequences: Optional[List[str]] = Noneο
Stop sequences to use.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute temperature: float = 0.0ο
A non-negative float that tunes the degree of randomness in generation.
attribute tokens: Optional[bool] = Falseο
return tokens of completion.
attribute top_k: int = 0ο
Number of most likely tokens to consider at each step.
attribute top_p: float = 0.0ο
Total probability mass of tokens to consider at each step. | https://api.python.langchain.com/en/stable/modules/llms.html |
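top_k keeps only the k most likely tokens, while top_p keeps the smallest set of tokens whose cumulative probability reaches p. A sketch of the top-p (nucleus) filter, operating on a plain probability list rather than the model's real distribution:

```python
def top_p_filter(probs, p):
    """Return the indices of the smallest set of tokens whose
    cumulative probability is at least p (nucleus sampling)."""
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, total = [], 0.0
    for i in ranked:
        kept.append(i)
        total += probs[i]
        if total >= p:
            break
    return kept

# Token 1 alone covers 0.5 < 0.7, so token 0 is added too (0.5 + 0.3 >= 0.7).
print(top_p_filter([0.3, 0.5, 0.15, 0.05], p=0.7))  # -> [1, 0]
```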
attribute use_multiplicative_presence_penalty: Optional[bool] = Falseο
Flag deciding whether presence penalty is applied
multiplicatively (True) or additively (False).
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β | https://api.python.langchain.com/en/stable/modules/llms.html |
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict | https://api.python.langchain.com/en/stable/modules/llms.html |
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict(). | https://api.python.langchain.com/en/stable/modules/llms.html |
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the | https://api.python.langchain.com/en/stable/modules/llms.html |
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.AmazonAPIGateway(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, api_url, model_kwargs=None, content_handler=<langchain.llms.amazon_api_gateway.ContentHandlerAmazonAPIGateway object>)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around a custom Amazon API Gateway endpoint.
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
api_url (str) β
model_kwargs (Optional[Dict]) β
content_handler (langchain.llms.amazon_api_gateway.ContentHandlerAmazonAPIGateway) β
Return type
None
attribute api_url: str [Required]ο
API Gateway URL
attribute content_handler: langchain.llms.amazon_api_gateway.ContentHandlerAmazonAPIGateway = <langchain.llms.amazon_api_gateway.ContentHandlerAmazonAPIGateway object>ο
The content handler class that provides an input and
output transform functions to handle formats between LLM
and the endpoint. | https://api.python.langchain.com/en/stable/modules/llms.html |
attribute model_kwargs: Optional[Dict] = Noneο
Keyword arguments to pass to the model.
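The content handler pairs an input transform (LLM prompt and keyword arguments to the request body the gateway expects) with an output transform (gateway response back to a completion string). A hedged sketch of that pattern; the JSON schema used here ({"inputs": ..., "parameters": ...} in, {"generated_text": ...} out) is an assumption, not the contract of any particular endpoint:

```python
import json
from typing import Dict

class JsonContentHandler:
    """Illustrative transform pair; the request/response schema here
    is an assumption, not an Amazon API Gateway contract."""

    def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes:
        # Serialize the prompt plus model kwargs into the request body.
        return json.dumps({"inputs": prompt, "parameters": model_kwargs}).encode("utf-8")

    def transform_output(self, response_body: bytes) -> str:
        # Pull the completion string out of the response payload.
        return json.loads(response_body)["generated_text"]

handler = JsonContentHandler()
body = handler.transform_input("Hello", {"temperature": 0.2})
fake_response = json.dumps({"generated_text": "Hi there"}).encode("utf-8")
print(handler.transform_output(fake_response))  # -> Hi there
```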
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters | https://api.python.langchain.com/en/stable/modules/llms.html |
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict | https://api.python.langchain.com/en/stable/modules/llms.html |
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict(). | https://api.python.langchain.com/en/stable/modules/llms.html |
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the | https://api.python.langchain.com/en/stable/modules/llms.html |
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.Anthropic(*, client=None, model='claude-v1', max_tokens_to_sample=256, temperature=None, top_k=None, top_p=None, streaming=False, default_request_timeout=None, anthropic_api_url=None, anthropic_api_key=None, HUMAN_PROMPT=None, AI_PROMPT=None, count_tokens=None, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None)[source]ο
Bases: langchain.llms.base.LLM, langchain.llms.anthropic._AnthropicCommon
Wrapper around Anthropic's large language models.
To use, you should have the anthropic python package installed, and the
environment variable ANTHROPIC_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
import anthropic
from langchain.llms import Anthropic
model = Anthropic(model="<model_name>", anthropic_api_key="my-api-key")
# Simplest invocation, automatically wrapped with HUMAN_PROMPT
# and AI_PROMPT.
response = model("What are the biggest risks facing humanity?")
# Or if you want to use the chat mode, build a few-shot-prompt, or | https://api.python.langchain.com/en/stable/modules/llms.html |
# put words in the Assistant's mouth, use HUMAN_PROMPT and AI_PROMPT:
raw_prompt = "What are the biggest risks facing humanity?"
prompt = f"{anthropic.HUMAN_PROMPT} {raw_prompt}{anthropic.AI_PROMPT}"
response = model(prompt)
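HUMAN_PROMPT and AI_PROMPT are the turn delimiters the Claude completion API expects around each conversational turn. The wrapping performed in the snippet above can be sketched locally; the delimiter strings below are stand-ins for the anthropic package constants, written out here for illustration:

```python
# Local stand-ins for anthropic.HUMAN_PROMPT / anthropic.AI_PROMPT.
HUMAN_PROMPT = "\n\nHuman:"
AI_PROMPT = "\n\nAssistant:"

def wrap_prompt(raw_prompt: str) -> str:
    """Wrap a bare question into the Human/Assistant turn format."""
    return f"{HUMAN_PROMPT} {raw_prompt}{AI_PROMPT}"

print(wrap_prompt("What are the biggest risks facing humanity?"))
```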
Parameters
client (Any) β
model (str) β
max_tokens_to_sample (int) β
temperature (Optional[float]) β
top_k (Optional[int]) β
top_p (Optional[float]) β
streaming (bool) β
default_request_timeout (Optional[Union[float, Tuple[float, float]]]) β
anthropic_api_url (Optional[str]) β
anthropic_api_key (Optional[str]) β
HUMAN_PROMPT (Optional[str]) β
AI_PROMPT (Optional[str]) β
count_tokens (Optional[Callable[[str], int]]) β
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
Return type
None
attribute default_request_timeout: Optional[Union[float, Tuple[float, float]]] = Noneο
Timeout for requests to Anthropic Completion API. Default is 600 seconds.
attribute max_tokens_to_sample: int = 256ο
Denotes the number of tokens to predict per generation.
attribute model: str = 'claude-v1'ο
Model name to use.
attribute streaming: bool = Falseο
Whether to stream the results.
attribute tags: Optional[List[str]] = Noneο | https://api.python.langchain.com/en/stable/modules/llms.html |
Tags to add to the run trace.
attribute temperature: Optional[float] = Noneο
A non-negative float that tunes the degree of randomness in generation.
attribute top_k: Optional[int] = Noneο
Number of most likely tokens to consider at each step.
attribute top_p: Optional[float] = Noneο
Total probability mass of tokens to consider at each step.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β | https://api.python.langchain.com/en/stable/modules/llms.html |
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)¶
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
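generate() takes a batch of prompt strings and returns an LLMResult whose generations list is parallel to the input prompts, with one inner list of candidate generations per prompt (n candidates when n > 1). A sketch of that shape using stand-in dataclasses (field names follow langchain.schema; the texts are dummy values):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Generation:
    text: str

@dataclass
class LLMResult:
    # One inner list per input prompt; each inner list holds the candidate generations.
    generations: List[List[Generation]]
    llm_output: dict = field(default_factory=dict)

result = LLMResult(generations=[[Generation("Why did the chicken cross the road?")],
                                [Generation("Roses are red...")]])
best = [gens[0].text for gens in result.generations]  # best completion per prompt
```

Iterating result.generations in order therefore matches the order of the prompts passed in.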
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)[source]ο
Calculate number of tokens.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
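By default, get_num_tokens is simply the length of the list returned by get_token_ids. The relationship is illustrated below with a whitespace tokenizer standing in for the real one (actual LLM classes use tiktoken or a Hugging Face tokenizer, not this stand-in):

```python
def get_token_ids(text: str):
    # Stand-in tokenizer: assign each distinct whitespace-separated token an ID.
    vocab: dict = {}
    return [vocab.setdefault(tok, len(vocab)) for tok in text.split()]

def get_num_tokens(text: str) -> int:
    # Token count is just the length of the token-ID list.
    return len(get_token_ids(text))
```

This is why overriding get_token_ids in a subclass also changes get_num_tokens.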
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
stream(prompt, stop=None)[source]ο
Call Anthropic completion_stream and return the resulting generator.
BETA: this is a beta feature while we figure out the right abstraction.
Once that happens, this interface could change.
Parameters
prompt (str) – The prompt to pass into the model.
stop (Optional[List[str]]) β Optional list of stop words to use when generating.
Returns
A generator representing the stream of tokens from Anthropic.
Return type
Generator
Example
prompt = "Write a poem about a stream."
prompt = f"\n\nHuman: {prompt}\n\nAssistant:"
generator = anthropic.stream(prompt)
for token in generator:
yield token
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.Anyscale(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, model_kwargs=None, anyscale_service_url=None, anyscale_service_route=None, anyscale_service_token=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around Anyscale Services.
To use, you should have the environment variables ANYSCALE_SERVICE_URL,
ANYSCALE_SERVICE_ROUTE and ANYSCALE_SERVICE_TOKEN set with your Anyscale
Service, or pass it as a named parameter to the constructor.
Example
from langchain.llms import Anyscale
anyscale = Anyscale(anyscale_service_url="SERVICE_URL",
anyscale_service_route="SERVICE_ROUTE",
anyscale_service_token="SERVICE_TOKEN")
# Use Ray for distributed processing
import ray
prompt_list=[]
@ray.remote
def send_query(llm, prompt):
resp = llm(prompt)
return resp
futures = [send_query.remote(anyscale, prompt) for prompt in prompt_list]
results = ray.get(futures)
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
model_kwargs (Optional[dict]) β
anyscale_service_url (Optional[str]) β
anyscale_service_route (Optional[str]) β
anyscale_service_token (Optional[str]) β
Return type
None
attribute model_kwargs: Optional[dict] = Noneο
Keyword arguments to pass to the model. Reserved for future use.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.Aviary(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, model='amazon/LightGPT', aviary_url=None, aviary_token=None, use_prompt_format=True, version=None)[source]ο
Bases: langchain.llms.base.LLM
Allows you to use an Aviary.
Aviary is a backend for hosted models. You can
find out more about aviary at
http://github.com/ray-project/aviary
To get a list of the models supported on an
aviary, follow the instructions on the web site to
install the aviary CLI and then use:
aviary models
The AVIARY_URL and AVIARY_TOKEN environment variables must be set.
Example
from langchain.llms import Aviary
os.environ["AVIARY_URL"] = "<URL>"
os.environ["AVIARY_TOKEN"] = "<TOKEN>"
light = Aviary(model='amazon/LightGPT')
output = light('How do you make fried rice?')
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
model (str) β
aviary_url (Optional[str]) β
aviary_token (Optional[str]) β
use_prompt_format (bool) β
version (Optional[str]) β
Return type
None
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) –
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.AzureMLOnlineEndpoint(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, endpoint_url='', endpoint_api_key='', deployment_name='', http_client=None, content_formatter=None, model_kwargs=None)[source]ο
Bases: langchain.llms.base.LLM, pydantic.main.BaseModel
Wrapper around Azure ML Hosted models using Managed Online Endpoints.
Example
azure_llm = AzureMLOnlineEndpoint(
endpoint_url="https://<your-endpoint>.<your_region>.inference.ml.azure.com/score",
endpoint_api_key="my-api-key",
deployment_name="my-deployment-name",
content_formatter=content_formatter,
)
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
endpoint_url (str) β
endpoint_api_key (str) β
deployment_name (str) β
http_client (Any) β
content_formatter (Any) β
model_kwargs (Optional[dict]) β
Return type
None
attribute content_formatter: Any = Noneο
The content formatter that provides an input and output
transform function to handle formats between the LLM and
the endpoint
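A content formatter pairs a request transform with a response transform so the LLM's prompt and completion strings match the endpoint's wire format. The sketch below assumes a hypothetical list-in/list-out JSON contract; the real payload schema depends on the deployed model and on LangChain's ContentFormatterBase interface:

```python
import json

class SimpleContentFormatter:
    """Hypothetical formatter assuming a list-in/list-out JSON contract."""

    def format_request_payload(self, prompt: str, model_kwargs: dict) -> bytes:
        # Wrap the prompt and generation parameters into the endpoint's request body.
        return json.dumps({"inputs": [prompt], "parameters": model_kwargs}).encode("utf-8")

    def format_response_payload(self, output: bytes) -> str:
        # Unwrap the first completion string from the endpoint's response body.
        return json.loads(output.decode("utf-8"))[0]

fmt = SimpleContentFormatter()
payload = fmt.format_request_payload("Hello", {"max_tokens": 50})
completion = fmt.format_response_payload(b'["Hello there!"]')  # simulated endpoint reply
```

Swapping in a different formatter is how one endpoint wrapper supports models with different request/response schemas.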
attribute deployment_name: str = ''ο
Deployment Name for Endpoint. Should be passed to constructor or specified as
env var AZUREML_DEPLOYMENT_NAME.
attribute endpoint_api_key: str = ''ο
Authentication Key for Endpoint. Should be passed to constructor or specified as
env var AZUREML_ENDPOINT_API_KEY.
attribute endpoint_url: str = ''ο
URL of pre-existing Endpoint. Should be passed to constructor or specified as
env var AZUREML_ENDPOINT_URL.
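Each of these attributes follows the same resolution order: an explicit constructor argument wins, otherwise the named environment variable is consulted. A sketch of that lookup pattern (LangChain ships a similar internal utility; this standalone version is for illustration only):

```python
import os

def get_from_dict_or_env(values: dict, key: str, env_key: str) -> str:
    # Explicit constructor argument wins; otherwise fall back to the environment.
    if values.get(key):
        return values[key]
    if env_key in os.environ:
        return os.environ[env_key]
    raise ValueError(f"Did not find {key}; pass it or set {env_key}.")

os.environ["AZUREML_ENDPOINT_API_KEY"] = "secret-from-env"
key = get_from_dict_or_env({}, "endpoint_api_key", "AZUREML_ENDPOINT_API_KEY")
```

Raising early when neither source is set surfaces configuration mistakes at construction time rather than on the first request.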
attribute model_kwargs: Optional[dict] = Noneο
Keyword arguments to pass to the model.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.AzureOpenAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model='text-davinci-003', temperature=0.7, max_tokens=256, top_p=1, frequency_penalty=0, presence_penalty=0, n=1, best_of=1, model_kwargs=None, openai_api_key=None, openai_api_base=None, openai_organization=None, openai_proxy=None, batch_size=20, request_timeout=None, logit_bias=None, max_retries=6, streaming=False, allowed_special={}, disallowed_special='all', tiktoken_model_name=None, deployment_name='', openai_api_type='azure', openai_api_version='')[source]ο
Bases: langchain.llms.openai.BaseOpenAI
Wrapper around Azure-specific OpenAI large language models.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import AzureOpenAI
openai = AzureOpenAI(model_name="text-davinci-003")
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (Any) β
model (str) β
temperature (float) β
max_tokens (int) β
top_p (float) –
frequency_penalty (float) β
presence_penalty (float) β
n (int) β
best_of (int) β
model_kwargs (Dict[str, Any]) β
openai_api_key (Optional[str]) β
openai_api_base (Optional[str]) β
openai_organization (Optional[str]) β
openai_proxy (Optional[str]) β
batch_size (int) β
request_timeout (Optional[Union[float, Tuple[float, float]]]) β
logit_bias (Optional[Dict[str, float]]) β
max_retries (int) β
streaming (bool) β
allowed_special (Union[Literal['all'], typing.AbstractSet[str]]) β
disallowed_special (Union[Literal['all'], typing.Collection[str]]) β
tiktoken_model_name (Optional[str]) β
deployment_name (str) β
openai_api_type (str) β
openai_api_version (str) β
Return type
None
attribute allowed_special: Union[Literal['all'], AbstractSet[str]] = {}ο
Set of special tokens that are allowed.
attribute batch_size: int = 20ο
Batch size to use when passing multiple documents to generate.
attribute best_of: int = 1ο
Generates best_of completions server-side and returns the βbestβ.
attribute deployment_name: str = ''ο
Deployment name to use.
attribute disallowed_special: Union[Literal['all'], Collection[str]] = 'all'ο
Set of special tokens that are not allowed.
attribute frequency_penalty: float = 0ο
Penalizes repeated tokens according to frequency.
attribute logit_bias: Optional[Dict[str, float]] [Optional]¶
Adjust the probability of specific tokens being generated.
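Entries in logit_bias are added to the model's raw logits before sampling, so a large negative value (e.g. -100) effectively bans a token and a large positive value forces it. A conceptual illustration with a two-token vocabulary (token names stand in for the real integer token IDs the API expects):

```python
import math

def softmax(logits: dict) -> dict:
    # Convert logits to a probability distribution, numerically stabilized.
    m = max(logits.values())
    exps = {t: math.exp(v - m) for t, v in logits.items()}
    total = sum(exps.values())
    return {t: e / total for t, e in exps.items()}

logits = {"cat": 2.0, "dog": 2.0}        # model's raw scores: a tie
logit_bias = {"dog": -100.0}             # as passed via the logit_bias parameter
biased = {t: v + logit_bias.get(t, 0.0) for t, v in logits.items()}
probs = softmax(biased)                   # "dog" is now effectively banned
```

The same mechanism with a bias near +100 makes a token all but certain to be sampled.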
attribute max_retries: int = 6ο
Maximum number of retries to make when generating.
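When a request fails, the client retries up to max_retries times with exponentially growing delays between attempts (LangChain uses the tenacity library internally; this standalone sketch only shows the general shape of that policy):

```python
import time

def call_with_retries(call, max_retries: int = 6, base_delay: float = 0.0):
    # Exponential backoff: wait base, 2*base, 4*base, ... between attempts.
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the last error
            time.sleep(base_delay * (2 ** attempt))

attempts = {"n": 0}

def flaky():
    # Simulated transient failure that succeeds on the third attempt.
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = call_with_retries(flaky, max_retries=6, base_delay=0.0)
```

A real client would retry only on transient error types (rate limits, timeouts) rather than on every exception.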
attribute max_tokens: int = 256ο
The maximum number of tokens to generate in the completion.
-1 returns as many tokens as possible given the prompt and
the model's maximal context size.
attribute model_kwargs: Dict[str, Any] [Optional]ο
Holds any model parameters valid for create call not explicitly specified.
attribute model_name: str = 'text-davinci-003' (alias 'model')ο
Model name to use.
attribute n: int = 1ο
How many completions to generate for each prompt.
attribute presence_penalty: float = 0ο
Penalizes repeated tokens.
attribute request_timeout: Optional[Union[float, Tuple[float, float]]] = Noneο
Timeout for requests to OpenAI completion API. Default is 600 seconds.
attribute streaming: bool = Falseο
Whether to stream the results or not.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute temperature: float = 0.7ο
What sampling temperature to use.
attribute tiktoken_model_name: Optional[str] = Noneο
The model name to pass to tiktoken when using this class.
Tiktoken is used to count the number of tokens in documents to constrain
them to be under a certain limit. By default, when set to None, this will
be the same as the embedding model name. However, there are some cases
where you may want to use this Embedding class with a model name not
supported by tiktoken. This can include when using Azure embeddings or
when using one of the many model providers that expose an OpenAI-like
API but with different models. In those cases, in order to avoid erroring
when tiktoken is called, you can specify a model name to use here.
attribute top_p: float = 1ο
Total probability mass of tokens to consider at each step.
attribute verbose: bool [Optional]ο
Whether to print out response text.
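Taken together, the attributes above become the keyword arguments sent with each completion request, with model_kwargs merged in last. A minimal sketch of that assembly, using a hypothetical build_default_params helper (the real class builds an equivalent dict internally):

```python
from typing import Any, Dict, Optional


def build_default_params(
    temperature: float = 0.7,
    max_tokens: int = 256,
    top_p: float = 1,
    frequency_penalty: float = 0,
    presence_penalty: float = 0,
    n: int = 1,
    best_of: int = 1,
    model_kwargs: Optional[Dict[str, Any]] = None,
) -> Dict[str, Any]:
    """Assemble the keyword arguments for a completion call (sketch only)."""
    params = {
        "temperature": temperature,
        "max_tokens": max_tokens,
        "top_p": top_p,
        "frequency_penalty": frequency_penalty,
        "presence_penalty": presence_penalty,
        "n": n,
        "best_of": best_of,
    }
    # model_kwargs holds provider parameters not modeled as explicit fields;
    # merged last, so they can extend or override the defaults.
    params.update(model_kwargs or {})
    return params
```

For example, `build_default_params(model_kwargs={"logprobs": 5})` passes a parameter the class does not model explicitly.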
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = "allow" was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
create_llm_result(choices, prompts, token_usage)ο
Create the LLMResult from the choices and prompts.
Parameters
choices (Any) β
prompts (List[str]) β
token_usage (Dict[str, int]) β
Return type
langchain.schema.LLMResult
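The OpenAI API returns the choices for all prompts in one flat list, n entries per prompt; create_llm_result regroups them per prompt. A sketch of that regrouping, using a hypothetical group_choices helper (not the actual LangChain implementation):

```python
from typing import Any, Dict, List


def group_choices(
    choices: List[Dict[str, Any]], prompts: List[str], n: int
) -> List[List[str]]:
    """Split a flat list of API choices into one sublist of n texts per prompt."""
    assert len(choices) == len(prompts) * n, "expect n choices per prompt"
    return [
        [choice["text"] for choice in choices[i * n : (i + 1) * n]]
        for i in range(len(prompts))
    ]
```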
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_sub_prompts(params, prompts, stop=None)ο
Get the sub-prompts for the LLM call.
Parameters
params (Dict[str, Any]) β
prompts (List[str]) β
stop (Optional[List[str]]) β
Return type
List[List[str]]
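The sub-prompts are simply the input prompts split into batches of at most batch_size (default 20), so each API request stays a manageable size. A sketch of the batching idea, using a hypothetical sub_prompts helper:

```python
from typing import List


def sub_prompts(prompts: List[str], batch_size: int = 20) -> List[List[str]]:
    """Split prompts into consecutive batches of at most batch_size."""
    return [prompts[i : i + batch_size] for i in range(0, len(prompts), batch_size)]
```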
get_token_ids(text)ο
Get the token IDs using the tiktoken package.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
max_tokens_for_prompt(prompt)ο
Calculate the maximum number of tokens possible to generate for a prompt.
Parameters
prompt (str) β The prompt to pass into the model.
Returns
The maximum number of tokens to generate for a prompt.
Return type
int
Example
max_tokens = openai.max_tokens_for_prompt("Tell me a joke.")
static modelname_to_contextsize(modelname)ο
Calculate the maximum number of tokens possible to generate for a model.
Parameters
modelname (str) β The modelname we want to know the context size for.
Returns
The maximum context size
Return type
int
Example
max_tokens = openai.modelname_to_contextsize("text-davinci-003")
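Combining the two sizing methods: the room left for a completion is the model's context window minus the tokens already consumed by the prompt. A sketch with an illustrative (hypothetical) context-size table:

```python
# Illustrative context sizes; the real mapping lives in modelname_to_contextsize.
CONTEXT_SIZES = {
    "text-davinci-003": 4097,
    "text-curie-001": 2048,
}


def max_tokens_for_prompt(modelname: str, prompt_tokens: int) -> int:
    """Remaining completion budget = context window - prompt token count."""
    return CONTEXT_SIZES[modelname] - prompt_tokens
```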
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
prep_streaming_params(stop=None)ο
Prepare the params for streaming.
Parameters
stop (Optional[List[str]]) β
Return type
Dict[str, Any]
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
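save() serializes the dict() representation of the LLM to the given path, choosing the format from the file suffix (.yaml or .json). A minimal sketch of the JSON half, using a hypothetical save_llm helper:

```python
import json
from pathlib import Path


def save_llm(params: dict, file_path: str) -> None:
    """Write an LLM's config dict to disk; the real save() also handles .yaml."""
    path = Path(file_path)
    if path.suffix == ".json":
        path.write_text(json.dumps(params, indent=2))
    else:
        raise ValueError("This sketch only handles .json files")
```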
stream(prompt, stop=None)ο
Call OpenAI with streaming flag and return the resulting generator.
BETA: this is a beta feature while we figure out the right abstraction.
Once that happens, this interface could change.
Parameters
prompt (str) β The prompts to pass into the model.
stop (Optional[List[str]]) β Optional list of stop words to use when generating.
Returns
A generator representing the stream of tokens from OpenAI.
Return type
Generator
Example
generator = openai.stream("Tell me a joke.")
for token in generator:
yield token
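A caller typically consumes the stream token by token, accumulating the full completion as it arrives. A self-contained sketch with a stand-in generator in place of openai.stream():

```python
from typing import Generator, Iterable


def stream_tokens(tokens: Iterable[str]) -> Generator[str, None, None]:
    """Stand-in for stream(): yields completion tokens one at a time."""
    for token in tokens:
        yield token


# Accumulate streamed tokens into the final completion string.
completion = "".join(stream_tokens(["Why ", "did ", "the ", "chicken cross?"]))
```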
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
property max_context_size: intο
Get max context size for this model.
class langchain.llms.Banana(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, model_key='', model_kwargs=None, banana_api_key=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around Banana large language models.
To use, you should have the banana-dev python package installed,
and the environment variable BANANA_API_KEY set with your API key.
Any parameters that are valid to be passed to the call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import Banana
banana = Banana(model_key="")
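The BANANA_API_KEY lookup follows the usual pattern: an explicitly passed key wins, otherwise the environment variable is consulted. A sketch of that resolution, using a hypothetical resolve_api_key helper:

```python
import os
from typing import Optional


def resolve_api_key(
    explicit_key: Optional[str] = None, env_var: str = "BANANA_API_KEY"
) -> str:
    """Return the explicit key if given, else fall back to the environment."""
    key = explicit_key or os.environ.get(env_var)
    if not key:
        raise ValueError(f"Provide an API key or set {env_var}")
    return key
```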
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
model_key (str) β
model_kwargs (Dict[str, Any]) β
banana_api_key (Optional[str]) β
Return type
None
attribute model_key: str = ''ο
Model endpoint to use.
attribute model_kwargs: Dict[str, Any] [Optional]ο
Holds any model parameters valid for create call not
explicitly specified.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = "allow" was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.Baseten(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, model, input=None, model_kwargs=None)[source]ο
Bases: langchain.llms.base.LLM
Use your Baseten models in LangChain.
To use, you should have the baseten python package installed,
and run baseten.login() with your Baseten API key.
The required model param can be either a model id or model
version id. Using a model version ID will result in
slightly faster invocation.
Any other model parameters can also
be passed in with the format input={model_param: value, ...}
The Baseten model must accept a dictionary of input with the key
"prompt" and return a dictionary with a key "data" which maps
to a list of response strings.
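That contract can be expressed as a mock model: it accepts a dict containing the required "prompt" key (plus any other parameters) and returns a dict whose "data" key maps to a list of strings. A hypothetical sketch:

```python
from typing import Any, Dict, List


def mock_baseten_model(request: Dict[str, Any]) -> Dict[str, List[str]]:
    """Stand-in model honoring the Baseten contract described above."""
    prompt = request["prompt"]  # required input key
    # "data" must map to a list of response strings.
    return {"data": [f"echo: {prompt}"]}


response = mock_baseten_model({"prompt": "hello", "temperature": 0.2})
first_completion = response["data"][0]
```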
Example
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
model (str) β
input (Dict[str, Any]) β
model_kwargs (Dict[str, Any]) β
Return type
None
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = "allow" was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters | https://api.python.langchain.com/en/stable/modules/llms.html |