langchain.chains.combine_documents.base.BaseCombineDocumentsChain¶
You can use these to, e.g., identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to the langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Run the logic of this chain and add to output if desired.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs – boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info – Whether to include run info in the response. Defaults
to False.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Run the logic of this chain and add to output if desired.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs – boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info – Whether to include run info in the response. Defaults
to False.
abstract async acombine_docs(docs: List[Document], **kwargs: Any) → Tuple[str, dict][source]¶
Combine documents into a single string asynchronously.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, **kwargs: Any) → str¶
Run the chain as text in, text out, or with multiple variables in and text out.
abstract combine_docs(docs: List[Document], **kwargs: Any) → Tuple[str, dict][source]¶
Combine documents into a single string.
dict(**kwargs: Any) → Dict¶
Return dictionary representation of chain.
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prep inputs.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prep outputs.
prompt_length(docs: List[Document], **kwargs: Any) → Optional[int][source]¶
Return the prompt length given the documents passed in.
Returns None if the method does not depend on the prompt length.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, **kwargs: Any) → str¶
Run the chain as text in, text out, or with multiple variables in and text out.
save(file_path: Union[Path, str]) → None¶
Save the chain.
Parameters
file_path – Path to file to save the chain to.
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.base.BaseCombineDocumentsChain.html
langchain.chains.loading.load_chain¶
langchain.chains.loading.load_chain(path: Union[str, Path], **kwargs: Any) → Chain[source]¶
Unified method for loading a chain from LangChainHub or local fs.
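Example (an illustrative sketch; the path is a placeholder for a file previously written with Chain.save):
from langchain.chains.loading import load_chain

# "path/chain.yaml" is a placeholder path to a serialized chain
chain = load_chain("path/chain.yaml")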
https://api.python.langchain.com/en/latest/chains/langchain.chains.loading.load_chain.html
langchain.chains.openai_functions.openapi.get_openapi_chain¶
langchain.chains.openai_functions.openapi.get_openapi_chain(spec: Union[OpenAPISpec, str], llm: Optional[BaseLanguageModel] = None, prompt: Optional[BasePromptTemplate] = None, request_chain: Optional[Chain] = None, llm_kwargs: Optional[Dict] = None, verbose: bool = False, headers: Optional[Dict] = None, params: Optional[Dict] = None, **kwargs: Any) → SequentialChain[source]¶
Create a chain for querying an API from an OpenAPI spec.
Parameters
spec – OpenAPISpec or url/file/text string corresponding to one.
llm – language model, should be an OpenAI function-calling model, e.g.
ChatOpenAI(model="gpt-3.5-turbo-0613").
prompt – Main prompt template to use.
request_chain – Chain for taking the function's output and executing the request.
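Example (an illustrative sketch; the spec URL is a placeholder, and with llm unset a default OpenAI function-calling model is used):
from langchain.chains.openai_functions.openapi import get_openapi_chain

# The spec may be an OpenAPISpec object, a URL, a file path, or raw text.
chain = get_openapi_chain(spec="https://example.com/openapi.json")
answer = chain.run("What endpoints does this API expose?")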
https://api.python.langchain.com/en/latest/chains/langchain.chains.openai_functions.openapi.get_openapi_chain.html
langchain.chains.router.base.MultiRouteChain¶
class langchain.chains.router.base.MultiRouteChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, router_chain: RouterChain, destination_chains: Mapping[str, Chain], default_chain: Chain, silent_errors: bool = False)[source]¶
Bases: Chain
Use a single chain to route an input to one of multiple candidate chains.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
param default_chain: Chain [Required]¶
Default chain to use when none of the destination chains are suitable.
param destination_chains: Mapping[str, Chain] [Required]¶
Chains that return final answer to inputs.
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
param router_chain: RouterChain [Required]¶
Chain that routes inputs to destination chains.
param silent_errors: bool = False¶
If True, use default_chain when an invalid destination name is provided.
Defaults to False.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to the langchain.verbose value.
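Example (an illustrative construction sketch, not runnable as-is: router, physics_chain, math_chain, and fallback_chain are placeholders for chains built elsewhere, e.g. an LLMRouterChain and LLMChains):
from langchain.chains.router.base import MultiRouteChain

chain = MultiRouteChain(
    router_chain=router,
    destination_chains={"physics": physics_chain, "math": math_chain},
    default_chain=fallback_chain,
    silent_errors=True,  # fall back to default_chain on unknown destination names
)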
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Run the logic of this chain and add to output if desired.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs – boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info – Whether to include run info in the response. Defaults
to False.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Run the logic of this chain and add to output if desired.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs – boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info – Whether to include run info in the response. Defaults
to False.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, **kwargs: Any) → str¶
Run the chain as text in, text out, or with multiple variables in and text out.
dict(**kwargs: Any) → Dict¶
Return dictionary representation of chain.
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prep inputs.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prep outputs.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, **kwargs: Any) → str¶
Run the chain as text in, text out, or with multiple variables in and text out.
save(file_path: Union[Path, str]) → None¶
Save the chain.
Parameters
file_path – Path to file to save the chain to.
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
extra = 'forbid'¶
https://api.python.langchain.com/en/latest/chains/langchain.chains.router.base.MultiRouteChain.html
langchain.chains.query_constructor.ir.Visitor¶
class langchain.chains.query_constructor.ir.Visitor[source]¶
Bases: ABC
Defines interface for IR translation using visitor pattern.
Methods
__init__()
visit_comparison(comparison) – Translate a Comparison.
visit_operation(operation) – Translate an Operation.
visit_structured_query(structured_query) – Translate a StructuredQuery.
Attributes
allowed_comparators
allowed_operators
abstract visit_comparison(comparison: Comparison) → Any[source]¶
Translate a Comparison.
abstract visit_operation(operation: Operation) → Any[source]¶
Translate an Operation.
abstract visit_structured_query(structured_query: StructuredQuery) → Any[source]¶
Translate a StructuredQuery.
allowed_comparators: Optional[Sequence[langchain.chains.query_constructor.ir.Comparator]] = None¶
allowed_operators: Optional[Sequence[langchain.chains.query_constructor.ir.Operator]] = None¶
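Example (an illustrative subclass; the dictionary output format is invented here to show the visitor pattern, not a shipped translator):
from langchain.chains.query_constructor.ir import (
    Comparison,
    Operation,
    StructuredQuery,
    Visitor,
)

class DictTranslator(Visitor):
    """Render IR nodes as plain dictionaries."""

    def visit_comparison(self, comparison: Comparison) -> dict:
        return {
            "comparator": comparison.comparator.value,
            "attribute": comparison.attribute,
            "value": comparison.value,
        }

    def visit_operation(self, operation: Operation) -> dict:
        # Each argument is itself an IR node; dispatch back through accept().
        return {
            "operator": operation.operator.value,
            "arguments": [arg.accept(self) for arg in operation.arguments],
        }

    def visit_structured_query(self, structured_query: StructuredQuery) -> dict:
        filter_ = structured_query.filter
        return {
            "query": structured_query.query,
            "filter": filter_.accept(self) if filter_ is not None else None,
        }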
https://api.python.langchain.com/en/latest/chains/langchain.chains.query_constructor.ir.Visitor.html
langchain.chains.openai_functions.extraction.create_extraction_chain_pydantic¶
langchain.chains.openai_functions.extraction.create_extraction_chain_pydantic(pydantic_schema: Any, llm: BaseLanguageModel) → Chain[source]¶
Creates a chain that extracts information from a passage using a pydantic schema.
Parameters
pydantic_schema – The pydantic schema of the entities to extract.
llm – The language model to use.
Returns
Chain that can be used to extract information from a passage.
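Example (an illustrative sketch; llm is a placeholder for an OpenAI function-calling model such as ChatOpenAI(model="gpt-3.5-turbo-0613")):
from pydantic import BaseModel
from langchain.chains.openai_functions.extraction import (
    create_extraction_chain_pydantic,
)

class Person(BaseModel):
    name: str
    age: int

chain = create_extraction_chain_pydantic(pydantic_schema=Person, llm=llm)
people = chain.run("Alice is 30 and her brother Bob is 25.")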
https://api.python.langchain.com/en/latest/chains/langchain.chains.openai_functions.extraction.create_extraction_chain_pydantic.html
langchain.chains.prompt_selector.BasePromptSelector¶
class langchain.chains.prompt_selector.BasePromptSelector[source]¶
Bases: BaseModel, ABC
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
abstract get_prompt(llm: BaseLanguageModel) → BasePromptTemplate[source]¶
Get default prompt for a language model.
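Example (an illustrative sketch using ConditionalPromptSelector, the concrete selector shipped in the same module; llm is a placeholder model):
from langchain.chains.prompt_selector import (
    ConditionalPromptSelector,
    is_chat_model,
)
from langchain.prompts import PromptTemplate
from langchain.prompts.chat import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
)

default_prompt = PromptTemplate.from_template("Summarize: {text}")
chat_prompt = ChatPromptTemplate.from_messages(
    [HumanMessagePromptTemplate.from_template("Summarize: {text}")]
)

selector = ConditionalPromptSelector(
    default_prompt=default_prompt,
    conditionals=[(is_chat_model, chat_prompt)],
)
prompt = selector.get_prompt(llm)  # picks chat_prompt for chat models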
https://api.python.langchain.com/en/latest/chains/langchain.chains.prompt_selector.BasePromptSelector.html
langchain.chains.natbot.base.NatBotChain¶
class langchain.chains.natbot.base.NatBotChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, llm_chain: LLMChain, objective: str, llm: Optional[BaseLanguageModel] = None, input_url_key: str = 'url', input_browser_content_key: str = 'browser_content', previous_command: str = '', output_key: str = 'command')[source]¶
Bases: Chain
Implement an LLM-driven browser.
Example
from langchain import NatBotChain
natbot = NatBotChain.from_default("Buy me a new hat.")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
param llm: Optional[BaseLanguageModel] = None¶
[Deprecated] LLM wrapper to use.
param llm_chain: LLMChain [Required]¶
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
param objective: str [Required]¶
Objective that NatBot is tasked with completing.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to the langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Run the logic of this chain and add to output if desired.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs – boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info – Whether to include run info in the response. Defaults
to False.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Run the logic of this chain and add to output if desired.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs – boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info – Whether to include run info in the response. Defaults
to False.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, **kwargs: Any) → str¶
Run the chain as text in, text out, or with multiple variables in and text out.
dict(**kwargs: Any) → Dict¶
Return dictionary representation of chain.
execute(url: str, browser_content: str) → str[source]¶
Figure out next browser command to run.
Parameters
url – URL of the site currently on.
browser_content – Content of the page as currently displayed by the browser.
Returns
Next browser command to run.
Example
browser_content = "...."
llm_command = natbot.run("www.google.com", browser_content)
classmethod from_default(objective: str, **kwargs: Any) → NatBotChain[source]¶
Load with default LLMChain.
classmethod from_llm(llm: BaseLanguageModel, objective: str, **kwargs: Any) → NatBotChain[source]¶
Load from LLM.
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prep inputs.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prep outputs.
validator raise_deprecation » all fields[source]¶
Raise deprecation warning if callback_manager is used.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, **kwargs: Any) → str¶
Run the chain as text in, text out, or with multiple variables in and text out.
save(file_path: Union[Path, str]) → None¶
Save the chain.
Parameters
file_path – Path to file to save the chain to.
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
extra = 'forbid'¶
https://api.python.langchain.com/en/latest/chains/langchain.chains.natbot.base.NatBotChain.html
langchain.chains.llm_requests.LLMRequestsChain¶
class langchain.chains.llm_requests.LLMRequestsChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, llm_chain: LLMChain, requests_wrapper: TextRequestsWrapper = None, text_length: int = 8000, requests_key: str = 'requests_result', input_key: str = 'url', output_key: str = 'output')[source]¶
Bases: Chain
Chain that hits a URL and then uses an LLM to parse results.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
param llm_chain: LLMChain [Required]¶
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
param requests_wrapper: TextRequestsWrapper [Optional]¶
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a chain with its use case.
param text_length: int = 8000¶
param verbose: bool [Optional]¶
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to the langchain.verbose value.
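Example (an illustrative sketch; llm is a placeholder, and the prompt must consume the requests_result variable that holds the fetched page text):
from langchain.chains import LLMChain, LLMRequestsChain
from langchain.prompts import PromptTemplate

template = """Between >>> and <<< is the raw text of a web page.
Extract the page title, or answer "not found".
>>> {requests_result} <<<
Answer:"""
prompt = PromptTemplate(input_variables=["requests_result"], template=template)

chain = LLMRequestsChain(llm_chain=LLMChain(llm=llm, prompt=prompt))
result = chain({"url": "https://example.com"})  # "url" in, "output" out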
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Run the logic of this chain and add to output if desired.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs – boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info – Whether to include run info in the response. Defaults
to False.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Run the logic of this chain and add to output if desired.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs – boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info – Whether to include run info in the response. Defaults
to False.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, **kwargs: Any) → str¶
Run the chain as text in, text out, or with multiple variables in and text out.
dict(**kwargs: Any) → Dict¶
Return dictionary representation of chain.
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prep inputs.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prep outputs.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, **kwargs: Any) → str¶
Run the chain as text in, text out, or with multiple variables in and text out.
save(file_path: Union[Path, str]) → None¶
Save the chain.
Parameters
file_path – Path to file to save the chain to.
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_environment » all fields[source]¶
Validate that the API key and python package exist in the environment.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
extra = 'forbid'¶
https://api.python.langchain.com/en/latest/chains/langchain.chains.llm_requests.LLMRequestsChain.html
langchain.schema.HumanMessage¶
class langchain.schema.HumanMessage(*, content: str, additional_kwargs: dict = None, example: bool = False)[source]¶
Bases: BaseMessage
Type of message that is spoken by the human.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param additional_kwargs: dict [Optional]¶
param content: str [Required]¶
param example: bool = False¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
This class is LangChain serializable.
property type: str¶
Type of the message, used for serialization.
model Config¶
Bases: object
extra = 'ignore'¶
https://api.python.langchain.com/en/latest/schema/langchain.schema.HumanMessage.html
langchain.schema.Generation¶
class langchain.schema.Generation(*, text: str, generation_info: Optional[Dict[str, Any]] = None)[source]¶
Bases: Serializable
Output of a single generation.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param generation_info: Optional[Dict[str, Any]] = None¶
Raw generation info response from the provider
param text: str [Required]¶
Generated text output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
This class is LangChain serializable.
model Config¶
Bases: object
extra = 'ignore'¶
https://api.python.langchain.com/en/latest/schema/langchain.schema.Generation.html
langchain.schema.SystemMessage¶
class langchain.schema.SystemMessage(*, content: str, additional_kwargs: dict = None)[source]¶
Bases: BaseMessage
Type of message used to pass system instructions to the model.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param additional_kwargs: dict [Optional]¶
param content: str [Required]¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
This class is LangChain serializable.
property type: str¶
Type of the message, used for serialization.
model Config¶
Bases: object
extra = 'ignore'¶
https://api.python.langchain.com/en/latest/schema/langchain.schema.SystemMessage.html
langchain.schema.messages_to_dict¶
langchain.schema.messages_to_dict(messages: List[BaseMessage]) → List[dict][source]¶
Convert messages to dict.
Parameters
messages – List of messages to convert.
Returns
List of dicts.
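Example (round-trips with messages_from_dict, documented below):
from langchain.schema import (
    AIMessage,
    HumanMessage,
    messages_from_dict,
    messages_to_dict,
)

history = [HumanMessage(content="Hi!"), AIMessage(content="Hello!")]
as_dicts = messages_to_dict(history)     # JSON-serializable list of dicts
restored = messages_from_dict(as_dicts)  # back to BaseMessage objects
assert restored == history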
https://api.python.langchain.com/en/latest/schema/langchain.schema.messages_to_dict.html
langchain.schema.LLMResult¶
class langchain.schema.LLMResult(*, generations: List[List[Generation]], llm_output: Optional[dict] = None, run: Optional[List[RunInfo]] = None)[source]¶
Bases: BaseModel
Class that contains all relevant information for an LLM Result.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param generations: List[List[langchain.schema.Generation]] [Required]¶
List of the things generated. This is List[List[]] because
each input could have multiple generations.
param llm_output: Optional[dict] = None¶
For arbitrary LLM provider specific output.
param run: Optional[List[langchain.schema.RunInfo]] = None¶
Run metadata.
flatten() → List[LLMResult][source]¶
Flatten generations into a single list.
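Example (illustrative; the token counts are made up):
from langchain.schema import Generation, LLMResult

result = LLMResult(
    generations=[[Generation(text="Paris")], [Generation(text="Berlin")]],
    llm_output={"token_usage": {"total_tokens": 12}},
)
flat = result.flatten()  # one single-generation LLMResult per generation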
https://api.python.langchain.com/en/latest/schema/langchain.schema.LLMResult.html
langchain.schema.BaseMemory¶
class langchain.schema.BaseMemory[source]¶
Bases: Serializable, ABC
Base interface for memory in chains.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
abstract clear() → None[source]¶
Clear memory contents.
abstract load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any][source]¶
Return key-value pairs given the text input to the chain.
If None, return all memories.
abstract save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]¶
Save the context of this model run to memory.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
abstract property memory_variables: List[str]¶
Input keys this memory class will load dynamically.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
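Example (an illustrative toy subclass, not a shipped memory class):
from typing import Any, Dict, List
from langchain.schema import BaseMemory

class LastInputMemory(BaseMemory):
    """Replay the previous chain input under a 'history' key."""

    history: str = ""

    @property
    def memory_variables(self) -> List[str]:
        return ["history"]

    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
        return {"history": self.history}

    def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
        self.history = str(inputs)

    def clear(self) -> None:
        self.history = ""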
https://api.python.langchain.com/en/latest/schema/langchain.schema.BaseMemory.html
langchain.schema.FunctionMessage¶
class langchain.schema.FunctionMessage(*, content: str, additional_kwargs: dict = None, name: str)[source]¶
Bases: BaseMessage
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param additional_kwargs: dict [Optional]¶
param content: str [Required]¶
param name: str [Required]¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
This class is LangChain serializable.
property type: str¶
Type of the message, used for serialization.
model Config¶
Bases: object
extra = 'ignore'¶
https://api.python.langchain.com/en/latest/schema/langchain.schema.FunctionMessage.html
langchain.schema.AgentFinish¶
class langchain.schema.AgentFinish(return_values: dict, log: str)[source]¶
Bases: NamedTuple
Agent’s return value.
Create new instance of AgentFinish(return_values, log)
Methods
__init__()
count(value, /) – Return number of occurrences of value.
index(value[, start, stop]) – Return first index of value.
Attributes
log – Alias for field number 1
return_values – Alias for field number 0
count(value, /)¶
Return number of occurrences of value.
index(value, start=0, stop=9223372036854775807, /)¶
Return first index of value.
Raises ValueError if the value is not present.
log: str¶
Alias for field number 1
return_values: dict¶
Alias for field number 0
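Example:
from langchain.schema import AgentFinish

finish = AgentFinish(return_values={"output": "42"}, log="Final answer: 42")
finish.return_values["output"]  # "42"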
https://api.python.langchain.com/en/latest/schema/langchain.schema.AgentFinish.html
langchain.schema.messages_from_dict¶
langchain.schema.messages_from_dict(messages: List[dict]) → List[BaseMessage][source]¶
Convert messages from dict.
Parameters
messages – List of messages (dicts) to convert.
Returns
List of messages (BaseMessages).
https://api.python.langchain.com/en/latest/schema/langchain.schema.messages_from_dict.html
langchain.schema.BaseRetriever¶
class langchain.schema.BaseRetriever[source]¶
Bases: ABC
Base interface for a retriever.
Methods
__init__()
aget_relevant_documents(query, *[, callbacks]) – Asynchronously get documents relevant to a query.
get_relevant_documents(query, *[, callbacks]) – Retrieve documents relevant to a query.
async aget_relevant_documents(query: str, *, callbacks: Callbacks = None, **kwargs: Any) → List[Document][source]¶
Asynchronously get documents relevant to a query.
Parameters
query – string to find relevant documents for
callbacks – Callback manager or list of callbacks
Returns
List of relevant documents
get_relevant_documents(query: str, *, callbacks: Callbacks = None, **kwargs: Any) → List[Document][source]¶
Retrieve documents relevant to a query.
Parameters
query – string to find relevant documents for
callbacks – Callback manager or list of callbacks
Returns
List of relevant documents
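Example (an illustrative toy subclass; real retrievers usually wrap a vector store or search index):
from typing import Any, List
from langchain.schema import BaseRetriever, Document

class KeywordRetriever(BaseRetriever):
    """Substring-match a fixed in-memory corpus."""

    def __init__(self, docs: List[Document]):
        self.docs = docs

    def get_relevant_documents(self, query: str, **kwargs: Any) -> List[Document]:
        return [d for d in self.docs if query.lower() in d.page_content.lower()]

    async def aget_relevant_documents(self, query: str, **kwargs: Any) -> List[Document]:
        return self.get_relevant_documents(query)

docs = [Document(page_content="LangChain ships many retrievers.")]
KeywordRetriever(docs).get_relevant_documents("retrievers")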
https://api.python.langchain.com/en/latest/schema/langchain.schema.BaseRetriever.html
langchain.schema.BaseChatMessageHistory¶
class langchain.schema.BaseChatMessageHistory[source]¶
Bases: ABC
Base interface for chat message history.
See ChatMessageHistory for the default implementation.
Methods
__init__()
add_ai_message(message) – Add an AI message to the store.
add_message(message) – Add a self-created message to the store.
add_user_message(message) – Add a user message to the store.
clear() – Remove all messages from the store.
Attributes
messages
add_ai_message(message: str) → None[source]¶
Add an AI message to the store.
add_message(message: BaseMessage) → None[source]¶
Add a self-created message to the store.
add_user_message(message: str) → None[source]¶
Add a user message to the store.
abstract clear() → None[source]¶
Remove all messages from the store.
messages: List[langchain.schema.BaseMessage]¶
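Example (an illustrative in-memory implementation; add_user_message and add_ai_message wrap strings in the right message types and delegate to add_message):
from typing import List
from langchain.schema import BaseChatMessageHistory, BaseMessage

class InMemoryHistory(BaseChatMessageHistory):
    def __init__(self) -> None:
        self.messages: List[BaseMessage] = []

    def add_message(self, message: BaseMessage) -> None:
        self.messages.append(message)

    def clear(self) -> None:
        self.messages.clear()

history = InMemoryHistory()
history.add_user_message("Hi!")
history.add_ai_message("Hello!")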
https://api.python.langchain.com/en/latest/schema/langchain.schema.BaseChatMessageHistory.html
langchain.schema.ChatGeneration¶
class langchain.schema.ChatGeneration(*, text: str = '', generation_info: Optional[Dict[str, Any]] = None, message: BaseMessage)[source]¶
Bases: Generation
Output of a single generation.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param generation_info: Optional[Dict[str, Any]] = None¶
Raw generation info response from the provider
param message: langchain.schema.BaseMessage [Required]¶
param text: str = ''¶
Generated text output.
validator set_text » all fields[source]¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
This class is LangChain serializable.
model Config¶
Bases: object
extra = 'ignore'¶
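Example (the text field is filled in from the message by the set_text validator):
from langchain.schema import AIMessage, ChatGeneration

gen = ChatGeneration(message=AIMessage(content="Hello!"))
gen.text  # "Hello!"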
https://api.python.langchain.com/en/latest/schema/langchain.schema.ChatGeneration.html
langchain.schema.get_buffer_string¶
langchain.schema.get_buffer_string(messages: List[BaseMessage], human_prefix: str = 'Human', ai_prefix: str = 'AI') → str[source]¶
Get buffer string of messages.
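Example:
from langchain.schema import AIMessage, HumanMessage, get_buffer_string

get_buffer_string([HumanMessage(content="Hi!"), AIMessage(content="Hello!")])
# "Human: Hi!\nAI: Hello!"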
https://api.python.langchain.com/en/latest/schema/langchain.schema.get_buffer_string.html
langchain.schema.PromptValue¶
class langchain.schema.PromptValue[source]¶
Bases: Serializable, ABC
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
abstract to_messages() → List[BaseMessage][source]¶
Return prompt as messages.
abstract to_string() → str[source]¶
Return prompt as string.
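Example (PromptTemplate.format_prompt returns a concrete PromptValue):
from langchain.prompts import PromptTemplate

value = PromptTemplate.from_template("Tell me about {topic}").format_prompt(topic="whales")
value.to_string()    # plain string, for completion-style LLMs
value.to_messages()  # [HumanMessage(...)], for chat models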
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
extra = 'ignore'¶
https://api.python.langchain.com/en/latest/schema/langchain.schema.PromptValue.html
langchain.schema.BaseLLMOutputParser¶
class langchain.schema.BaseLLMOutputParser[source]¶
Bases: Serializable, ABC, Generic[T]
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
abstract parse_result(result: List[Generation]) → T[source]¶
Parse LLM Result.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
extra = 'ignore'¶
https://api.python.langchain.com/en/latest/schema/langchain.schema.BaseLLMOutputParser.html
langchain.schema.NoOpOutputParser¶
class langchain.schema.NoOpOutputParser[source]¶
Bases: BaseOutputParser[str]
Output parser that just returns the text as is.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
dict(**kwargs: Any) → Dict¶
Return dictionary representation of output parser.
get_format_instructions() → str¶
Instructions on how the LLM output should be formatted.
parse(text: str) → str[source]¶
Parse the output of an LLM call.
A method which takes in a string (assumed to be the output of a language model)
and parses it into some structure.
Parameters
text – output of language model
Returns
structured output
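Example:
from langchain.schema import NoOpOutputParser

NoOpOutputParser().parse("raw model output")  # returns "raw model output" unchanged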
parse_result(result: List[Generation]) → T¶
Parse LLM Result.
parse_with_prompt(completion: str, prompt: PromptValue) → Any¶
Optional method to parse the output of an LLM call with a prompt.
The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters
completion – output of language model
prompt – prompt value
Returns
structured output
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
extra = 'ignore'¶
https://api.python.langchain.com/en/latest/schema/langchain.schema.NoOpOutputParser.html
langchain.schema.BaseDocumentTransformer¶
class langchain.schema.BaseDocumentTransformer[source]¶
Bases: ABC
Base interface for transforming documents.
Methods
__init__()
atransform_documents(documents, **kwargs) – Asynchronously transform a list of documents.
transform_documents(documents, **kwargs) – Transform a list of documents.
abstract async atransform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document][source]¶
Asynchronously transform a list of documents.
abstract transform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document][source]¶
Transform a list of documents.
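Example (an illustrative toy transformer):
from typing import Any, Sequence
from langchain.schema import BaseDocumentTransformer, Document

class LowercaseTransformer(BaseDocumentTransformer):
    """Lowercase the page content of every document."""

    def transform_documents(self, documents: Sequence[Document], **kwargs: Any) -> Sequence[Document]:
        return [
            Document(page_content=d.page_content.lower(), metadata=d.metadata)
            for d in documents
        ]

    async def atransform_documents(self, documents: Sequence[Document], **kwargs: Any) -> Sequence[Document]:
        return self.transform_documents(documents)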
https://api.python.langchain.com/en/latest/schema/langchain.schema.BaseDocumentTransformer.html
langchain.schema.AIMessage¶
class langchain.schema.AIMessage(*, content: str, additional_kwargs: dict = None, example: bool = False)[source]¶
Bases: BaseMessage
Type of message that is spoken by the AI.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param additional_kwargs: dict [Optional]¶
param content: str [Required]¶
param example: bool = False¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
This class is LangChain serializable.
property type: str¶
Type of the message, used for serialization.
model Config¶
Bases: object
extra = 'ignore'¶
https://api.python.langchain.com/en/latest/schema/langchain.schema.AIMessage.html
langchain.schema.ChatResult¶
class langchain.schema.ChatResult(*, generations: List[ChatGeneration], llm_output: Optional[dict] = None)[source]¶
Bases: BaseModel
Class that contains all relevant information for a Chat Result.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param generations: List[langchain.schema.ChatGeneration] [Required]¶
List of the things generated.
param llm_output: Optional[dict] = None¶
For arbitrary LLM provider specific output.
https://api.python.langchain.com/en/latest/schema/langchain.schema.ChatResult.html
langchain.schema.OutputParserException¶
class langchain.schema.OutputParserException(error: Any, observation: str | None = None, llm_output: str | None = None, send_to_llm: bool = False)[source]¶
Bases: ValueError
Exception that output parsers should raise to signify a parsing error.
This exists to differentiate parsing errors from other code or execution errors
that also may arise inside the output parser. OutputParserExceptions will be
available to catch and handle in ways to fix the parsing error, while other
errors will be raised.
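Example (an illustrative sketch of the intended error-handling pattern):
from langchain.schema import OutputParserException

try:
    raise OutputParserException("Expected JSON, got prose.")
except OutputParserException:
    pass  # e.g. retry the LLM call or attempt to repair the output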
add_note()¶
Exception.add_note(note) – add a note to the exception.
with_traceback()¶
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
args¶
https://api.python.langchain.com/en/latest/schema/langchain.schema.OutputParserException.html
langchain.schema.BaseMessage¶
class langchain.schema.BaseMessage(*, content: str, additional_kwargs: dict = None)[source]¶
Bases: Serializable
Message object.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param additional_kwargs: dict [Optional]¶
param content: str [Required]¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
This class is LangChain serializable.
abstract property type: str¶
Type of the message, used for serialization.
model Config¶
Bases: object
extra = 'ignore'¶
https://api.python.langchain.com/en/latest/schema/langchain.schema.BaseMessage.html
langchain.schema.ChatMessage¶
class langchain.schema.ChatMessage(*, content: str, additional_kwargs: dict = None, role: str)[source]¶
Bases: BaseMessage
Type of message with arbitrary speaker.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param additional_kwargs: dict [Optional]¶
param content: str [Required]¶
param role: str [Required]¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
This class is LangChain serializable.
property type: str¶
Type of the message, used for serialization.
model Config¶
Bases: object
extra = 'ignore'¶
https://api.python.langchain.com/en/latest/schema/langchain.schema.ChatMessage.html
langchain.schema.Document¶
class langchain.schema.Document(*, page_content: str, metadata: dict = None)[source]¶
Bases: Serializable
Interface for interacting with a document.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param metadata: dict [Optional]¶
param page_content: str [Required]¶
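Example (the metadata keys are arbitrary; "source" is just a common convention):
from langchain.schema import Document

doc = Document(
    page_content="LangChain composes LLM applications.",
    metadata={"source": "notes.txt"},
)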
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
extra = 'ignore'¶
https://api.python.langchain.com/en/latest/schema/langchain.schema.Document.html
langchain.schema.RunInfo¶
class langchain.schema.RunInfo(*, run_id: UUID)[source]¶
Bases: BaseModel
Class that contains all relevant metadata for a Run.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param run_id: uuid.UUID [Required]¶
https://api.python.langchain.com/en/latest/schema/langchain.schema.RunInfo.html
langchain.schema.BaseOutputParser¶
class langchain.schema.BaseOutputParser[source]¶
Bases: BaseLLMOutputParser, ABC, Generic[T]
Class to parse the output of an LLM call.
Output parsers help structure language model responses.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
dict(**kwargs: Any) → Dict[source]¶
Return dictionary representation of output parser.
get_format_instructions() → str[source]¶
Instructions on how the LLM output should be formatted.
abstract parse(text: str) → T[source]¶
Parse the output of an LLM call.
A method which takes in a string (assumed to be the output of a language model)
and parses it into some structure.
Parameters
text – output of language model
Returns
structured output
parse_result(result: List[Generation]) → T[source]¶
Parse LLM Result.
parse_with_prompt(completion: str, prompt: PromptValue) → Any[source]¶
Optional method to parse the output of an LLM call with a prompt.
The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters
completion – output of language model
prompt – prompt value
Returns
structured output
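Example (an illustrative toy parser; LangChain ships a similar CommaSeparatedListOutputParser):
from typing import List
from langchain.schema import BaseOutputParser

class SimpleListParser(BaseOutputParser):
    """Split the completion on commas."""

    def parse(self, text: str) -> List[str]:
        return [part.strip() for part in text.strip().split(",")]

    def get_format_instructions(self) -> str:
        return "Answer with a comma-separated list of values."

SimpleListParser().parse("red, green, blue")  # ['red', 'green', 'blue']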
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
extra = 'ignore'¶
|
https://api.python.langchain.com/en/latest/schema/langchain.schema.BaseOutputParser.html
|
0c0efdd0ae97-0
|
langchain.embeddings.embaas.EmbaasEmbeddings¶
class langchain.embeddings.embaas.EmbaasEmbeddings(*, model: str = 'e5-large-v2', instruction: Optional[str] = None, api_url: str = 'https://api.embaas.io/v1/embeddings/', embaas_api_key: Optional[str] = None)[source]¶
Bases: BaseModel, Embeddings
Wrapper around embaas’s embedding service.
To use, you should have the
environment variable EMBAAS_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
# Initialise with default model and instruction
from langchain.embeddings import EmbaasEmbeddings
emb = EmbaasEmbeddings()
# Initialise with custom model and instruction
from langchain.embeddings import EmbaasEmbeddings
emb_model = "instructor-large"
emb_inst = "Represent the Wikipedia document for retrieval"
emb = EmbaasEmbeddings(
model=emb_model,
instruction=emb_inst
)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param api_url: str = 'https://api.embaas.io/v1/embeddings/'¶
The URL for the embaas embeddings API.
param embaas_api_key: Optional[str] = None¶
param instruction: Optional[str] = None¶
Instruction used for domain-specific embeddings.
param model: str = 'e5-large-v2'¶
The model used for embeddings.
embed_documents(texts: List[str]) → List[List[float]][source]¶
Get embeddings for a list of texts.
Parameters
texts – The list of texts to get embeddings for.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Get embeddings for a single text.
Parameters
text – The text to get embeddings for.
Returns
List of embeddings.
validator validate_environment » all fields[source]¶
Validate that api key and python package exist in environment.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.embaas.EmbaasEmbeddings.html
|
d86a28b1e73d-0
|
langchain.embeddings.embaas.EmbaasEmbeddingsPayload¶
class langchain.embeddings.embaas.EmbaasEmbeddingsPayload[source]¶
Bases: TypedDict
Payload for the embaas embeddings API.
Methods
__init__(*args, **kwargs)
clear()
copy()
fromkeys([value])
Create a new dictionary with keys from iterable and values set to value.
get(key[, default])
Return the value for key if key is in the dictionary, else default.
items()
keys()
pop(k[,d])
If the key is not found, return the default if given; otherwise, raise a KeyError.
popitem()
Remove and return a (key, value) pair as a 2-tuple.
setdefault(key[, default])
Insert key with a value of default if key is not in the dictionary.
update([E, ]**F)
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]
values()
Attributes
model
texts
instruction
clear() → None. Remove all items from D.¶
copy() → a shallow copy of D¶
fromkeys(value=None, /)¶
Create a new dictionary with keys from iterable and values set to value.
get(key, default=None, /)¶
Return the value for key if key is in the dictionary, else default.
items() → a set-like object providing a view on D's items¶
keys() → a set-like object providing a view on D's keys¶
pop(k[, d]) → v, remove specified key and return the corresponding value.¶
If the key is not found, return the default if given; otherwise,
raise a KeyError.
popitem()¶
Remove and return a (key, value) pair as a 2-tuple.
Pairs are returned in LIFO (last-in, first-out) order.
Raises KeyError if the dict is empty.
setdefault(key, default=None, /)¶
Insert key with a value of default if key is not in the dictionary.
Return the value for key if key is in the dictionary, else default.
update([E, ]**F) → None. Update D from dict/iterable E and F.¶
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k]
If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v
In either case, this is followed by: for k in F: D[k] = F[k]
values() → an object providing a view on D's values¶
instruction: str¶
model: str¶
texts: List[str]¶
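Since this is a TypedDict, a payload is just a plain dict whose keys are checked statically; a minimal sketch with illustrative values:
from langchain.embeddings.embaas import EmbaasEmbeddingsPayload

# At runtime this is an ordinary dict; the annotation only aids type checking.
payload: EmbaasEmbeddingsPayload = {
    "model": "e5-large-v2",
    "texts": ["hello world"],
    "instruction": "Represent the document for retrieval",
}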
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.embaas.EmbaasEmbeddingsPayload.html
|
2d15bcc0e2e9-0
|
langchain.embeddings.jina.JinaEmbeddings¶
class langchain.embeddings.jina.JinaEmbeddings(*, client: Any = None, model_name: str = 'ViT-B-32::openai', jina_auth_token: Optional[str] = None, jina_api_url: str = 'https://api.clip.jina.ai/api/v1/models/', request_headers: Optional[dict] = None)[source]¶
Bases: BaseModel, Embeddings
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param jina_api_url: str = 'https://api.clip.jina.ai/api/v1/models/'¶
param jina_auth_token: Optional[str] = None¶
param model_name: str = 'ViT-B-32::openai'¶
Model name to use.
param request_headers: Optional[dict] = None¶
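A construction sketch; the token below is a placeholder (it can also be supplied via the environment, per validate_environment):
from langchain.embeddings import JinaEmbeddings

jina = JinaEmbeddings(jina_auth_token="my-auth-token")  # placeholder token
vectors = jina.embed_documents(["a photo of a cat", "a photo of a dog"])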
embed_documents(texts: List[str]) → List[List[float]][source]¶
Call out to Jina’s embedding endpoint.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Call out to Jina’s embedding endpoint.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
validator validate_environment » all fields[source]¶
Validate that auth token exists in environment.
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.jina.JinaEmbeddings.html
|
986cae0e47d6-0
|
langchain.embeddings.google_palm.GooglePalmEmbeddings¶
class langchain.embeddings.google_palm.GooglePalmEmbeddings(*, client: Any = None, google_api_key: Optional[str] = None, model_name: str = 'models/embedding-gecko-001')[source]¶
Bases: BaseModel, Embeddings
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param client: Any = None¶
param google_api_key: Optional[str] = None¶
param model_name: str = 'models/embedding-gecko-001'¶
Model name to use.
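A construction sketch with a placeholder key (the key may also come from the environment):
from langchain.embeddings import GooglePalmEmbeddings

palm = GooglePalmEmbeddings(google_api_key="my-api-key")  # placeholder key
query_vector = palm.embed_query("What is PaLM?")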
embed_documents(texts: List[str]) → List[List[float]][source]¶
Embed search docs.
embed_query(text: str) → List[float][source]¶
Embed query text.
validator validate_environment » all fields[source]¶
Validate that api key and python package exist in environment.
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.google_palm.GooglePalmEmbeddings.html
|
870cb2f3a591-0
|
langchain.embeddings.octoai_embeddings.OctoAIEmbeddings¶
class langchain.embeddings.octoai_embeddings.OctoAIEmbeddings(*, endpoint_url: Optional[str] = None, model_kwargs: Optional[dict] = None, octoai_api_token: Optional[str] = None, embed_instruction: str = 'Represent this input: ', query_instruction: str = 'Represent the question for retrieving similar documents: ')[source]¶
Bases: BaseModel, Embeddings
Wrapper around OctoAI Compute Service embedding models.
The environment variable OCTOAI_API_TOKEN should be set
with your API token, or it can be passed
as a named parameter to the constructor.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param embed_instruction: str = 'Represent this input: '¶
Instruction to use for embedding documents.
param endpoint_url: Optional[str] = None¶
Endpoint URL to use.
param model_kwargs: Optional[dict] = None¶
Keyword arguments to pass to the model.
param octoai_api_token: Optional[str] = None¶
OCTOAI API Token
param query_instruction: str = 'Represent the question for retrieving similar documents: '¶
Instruction to use for embedding query.
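A construction sketch; both the endpoint URL and token below are placeholders:
from langchain.embeddings.octoai_embeddings import OctoAIEmbeddings

octo = OctoAIEmbeddings(
    endpoint_url="https://your-endpoint.octoai.cloud/predict",  # placeholder
    octoai_api_token="my-api-token",  # placeholder; OCTOAI_API_TOKEN also works
)
doc_vectors = octo.embed_documents(["OctoAI serves instruct embedding models."])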
embed_documents(texts: List[str]) → List[List[float]][source]¶
Compute document embeddings using an OctoAI instruct model.
embed_query(text: str) → List[float][source]¶
Compute query embedding using an OctoAI instruct model.
validator validate_environment » all fields[source]¶
Ensure that the API key and python package exist in environment.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.octoai_embeddings.OctoAIEmbeddings.html
|
c3d6b9d35af6-0
|
langchain.embeddings.huggingface.HuggingFaceInstructEmbeddings¶
class langchain.embeddings.huggingface.HuggingFaceInstructEmbeddings(*, client: Any = None, model_name: str = 'hkunlp/instructor-large', cache_folder: Optional[str] = None, model_kwargs: Dict[str, Any] = None, encode_kwargs: Dict[str, Any] = None, embed_instruction: str = 'Represent the document for retrieval: ', query_instruction: str = 'Represent the question for retrieving supporting documents: ')[source]¶
Bases: BaseModel, Embeddings
Wrapper around sentence_transformers embedding models.
To use, you should have the sentence_transformers
and InstructorEmbedding python packages installed.
Example
from langchain.embeddings import HuggingFaceInstructEmbeddings
model_name = "hkunlp/instructor-large"
model_kwargs = {'device': 'cpu'}
encode_kwargs = {'normalize_embeddings': True}
hf = HuggingFaceInstructEmbeddings(
model_name=model_name,
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs
)
Initialize the sentence_transformer.
param cache_folder: Optional[str] = None¶
Path to store models.
Can be also set by SENTENCE_TRANSFORMERS_HOME environment variable.
param embed_instruction: str = 'Represent the document for retrieval: '¶
Instruction to use for embedding documents.
param encode_kwargs: Dict[str, Any] [Optional]¶
Keyword arguments to pass when calling the encode method of the model.
param model_kwargs: Dict[str, Any] [Optional]¶
Keyword arguments to pass to the model.
param model_name: str = 'hkunlp/instructor-large'¶
Model name to use.
param query_instruction: str = 'Represent the question for retrieving supporting documents: '¶
Instruction to use for embedding query.
embed_documents(texts: List[str]) → List[List[float]][source]¶
Compute doc embeddings using a HuggingFace instruct model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Compute query embeddings using a HuggingFace instruct model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
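A usage sketch showing where the two instruction params apply (texts are illustrative):
from langchain.embeddings import HuggingFaceInstructEmbeddings

hf = HuggingFaceInstructEmbeddings()  # defaults from the params above
# embed_documents pairs each text with embed_instruction;
# embed_query pairs the query with query_instruction.
doc_vectors = hf.embed_documents(["Instructor models pair an instruction with each text."])
query_vector = hf.embed_query("How do instruct embeddings work?")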
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.huggingface.HuggingFaceInstructEmbeddings.html
|
70cc1777f1b2-0
|
langchain.embeddings.deepinfra.DeepInfraEmbeddings¶
class langchain.embeddings.deepinfra.DeepInfraEmbeddings(*, model_id: str = 'sentence-transformers/clip-ViT-B-32', normalize: bool = False, embed_instruction: str = 'passage: ', query_instruction: str = 'query: ', model_kwargs: Optional[dict] = None, deepinfra_api_token: Optional[str] = None)[source]¶
Bases: BaseModel, Embeddings
Wrapper around Deep Infra’s embedding inference service.
To use, you should have the
environment variable DEEPINFRA_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
There are multiple embeddings models available,
see https://deepinfra.com/models?type=embeddings.
Example
from langchain.embeddings import DeepInfraEmbeddings
deepinfra_emb = DeepInfraEmbeddings(
model_id="sentence-transformers/clip-ViT-B-32",
deepinfra_api_token="my-api-key"
)
r1 = deepinfra_emb.embed_documents(
[
"Alpha is the first letter of Greek alphabet",
"Beta is the second letter of Greek alphabet",
]
)
r2 = deepinfra_emb.embed_query(
"What is the second letter of Greek alphabet"
)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param deepinfra_api_token: Optional[str] = None¶
param embed_instruction: str = 'passage: '¶
Instruction used to embed documents.
param model_id: str = 'sentence-transformers/clip-ViT-B-32'¶
Embeddings model to use.
param model_kwargs: Optional[dict] = None¶
Other model keyword args
param normalize: bool = False¶
whether to normalize the computed embeddings
param query_instruction: str = 'query: '¶
Instruction used to embed the query.
embed_documents(texts: List[str]) → List[List[float]][source]¶
Embed documents using a Deep Infra deployed embedding model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Embed a query using a Deep Infra deployed embedding model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
validator validate_environment » all fields[source]¶
Validate that api key and python package exist in environment.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.deepinfra.DeepInfraEmbeddings.html
|
3c2ed997bce1-0
|
langchain.embeddings.openai.embed_with_retry¶
langchain.embeddings.openai.embed_with_retry(embeddings: OpenAIEmbeddings, **kwargs: Any) → Any[source]¶
Use tenacity to retry the embedding call.
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.openai.embed_with_retry.html
|
e89c25a860b3-0
|
langchain.embeddings.sagemaker_endpoint.EmbeddingsContentHandler¶
class langchain.embeddings.sagemaker_endpoint.EmbeddingsContentHandler[source]¶
Bases: ContentHandlerBase[List[str], List[List[float]]]
Content handler for embedding models.
Methods
__init__()
transform_input(prompt, model_kwargs)
Transforms the input to a format that the model can accept as the request body.
transform_output(output)
Transforms the output from the model to the format that the calling class expects.
Attributes
accepts
The MIME type of the response data returned from endpoint
content_type
The MIME type of the input data passed to endpoint
abstract transform_input(prompt: INPUT_TYPE, model_kwargs: Dict) → bytes¶
Transforms the input to a format that the model can accept
as the request body. Should return bytes or a seekable file-like
object in the format specified in the content_type
request header.
abstract transform_output(output: bytes) → OUTPUT_TYPE¶
Transforms the output from the model to the format that
the calling class expects.
accepts: Optional[str] = 'text/plain'¶
The MIME type of the response data returned from endpoint
content_type: Optional[str] = 'text/plain'¶
The MIME type of the input data passed to endpoint
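A sketch of a concrete JSON handler; the request and response shapes ({"inputs": ...} and {"vectors": ...}) are assumptions about a particular endpoint, not part of this API:
import json
from typing import Dict, List
from langchain.embeddings.sagemaker_endpoint import EmbeddingsContentHandler

class JSONEmbeddingsHandler(EmbeddingsContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: List[str], model_kwargs: Dict) -> bytes:
        # Serialize the texts into the body this (hypothetical) endpoint expects.
        return json.dumps({"inputs": prompt, **model_kwargs}).encode("utf-8")

    def transform_output(self, output: bytes) -> List[List[float]]:
        # Parse the response body into one vector per input text (assumed shape).
        return json.loads(output.decode("utf-8"))["vectors"]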
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.sagemaker_endpoint.EmbeddingsContentHandler.html
|
4625e4b73805-0
|
langchain.embeddings.cohere.CohereEmbeddings¶
class langchain.embeddings.cohere.CohereEmbeddings(*, client: Any = None, model: str = 'embed-english-v2.0', truncate: Optional[str] = None, cohere_api_key: Optional[str] = None)[source]¶
Bases: BaseModel, Embeddings
Wrapper around Cohere embedding models.
To use, you should have the cohere python package installed, and the
environment variable COHERE_API_KEY set with your API key or pass it
as a named parameter to the constructor.
Example
from langchain.embeddings import CohereEmbeddings
cohere = CohereEmbeddings(
model="embed-english-light-v2.0", cohere_api_key="my-api-key"
)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cohere_api_key: Optional[str] = None¶
param model: str = 'embed-english-v2.0'¶
Model name to use.
param truncate: Optional[str] = None¶
Truncate embeddings that are too long from start or end (“NONE”|”START”|”END”)
embed_documents(texts: List[str]) → List[List[float]][source]¶
Call out to Cohere’s embedding endpoint.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Call out to Cohere’s embedding endpoint.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
validator validate_environment » all fields[source]¶
Validate that api key and python package exist in environment.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.cohere.CohereEmbeddings.html
|
d06cbc061dd4-0
|
langchain.embeddings.huggingface_hub.HuggingFaceHubEmbeddings¶
class langchain.embeddings.huggingface_hub.HuggingFaceHubEmbeddings(*, client: Any = None, repo_id: str = 'sentence-transformers/all-mpnet-base-v2', task: Optional[str] = 'feature-extraction', model_kwargs: Optional[dict] = None, huggingfacehub_api_token: Optional[str] = None)[source]¶
Bases: BaseModel, Embeddings
Wrapper around HuggingFaceHub embedding models.
To use, you should have the huggingface_hub python package installed, and the
environment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Example
from langchain.embeddings import HuggingFaceHubEmbeddings
repo_id = "sentence-transformers/all-mpnet-base-v2"
hf = HuggingFaceHubEmbeddings(
repo_id=repo_id,
task="feature-extraction",
huggingfacehub_api_token="my-api-key",
)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param huggingfacehub_api_token: Optional[str] = None¶
param model_kwargs: Optional[dict] = None¶
Keyword arguments to pass to the model.
param repo_id: str = 'sentence-transformers/all-mpnet-base-v2'¶
Model name to use.
param task: Optional[str] = 'feature-extraction'¶
Task to call the model with.
embed_documents(texts: List[str]) → List[List[float]][source]¶
Call out to HuggingFaceHub’s embedding endpoint for embedding search docs.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Call out to HuggingFaceHub’s embedding endpoint for embedding query text.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
validator validate_environment » all fields[source]¶
Validate that api key and python package exist in environment.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.huggingface_hub.HuggingFaceHubEmbeddings.html
|
af563e21d1ab-0
|
langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceEmbeddings¶
class langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceEmbeddings(*, cache: ~typing.Optional[bool] = None, verbose: bool = None, callbacks: ~typing.Optional[~typing.Union[~typing.List[~langchain.callbacks.base.BaseCallbackHandler], ~langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: ~typing.Optional[~langchain.callbacks.base.BaseCallbackManager] = None, tags: ~typing.Optional[~typing.List[str]] = None, pipeline_ref: ~typing.Any = None, client: ~typing.Any = None, inference_fn: ~typing.Callable = <function _embed_documents>, hardware: ~typing.Any = None, model_load_fn: ~typing.Callable = <function load_embedding_model>, load_fn_kwargs: ~typing.Optional[dict] = None, model_reqs: ~typing.List[str] = ['./', 'sentence_transformers', 'torch'], inference_kwargs: ~typing.Any = None, model_id: str = 'sentence-transformers/all-mpnet-base-v2')[source]¶
Bases: SelfHostedEmbeddings
Runs sentence_transformers embedding models on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another cloud
like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example
from langchain.embeddings import SelfHostedHuggingFaceEmbeddings
import runhouse as rh
model_name = "sentence-transformers/all-mpnet-base-v2"
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
hf = SelfHostedHuggingFaceEmbeddings(model_id=model_name, hardware=gpu)
Initialize the remote inference function.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param hardware: Any = None¶
Remote hardware to send the inference function to.
param inference_fn: Callable = <function _embed_documents>¶
Inference function to extract the embeddings.
param inference_kwargs: Any = None¶
Any kwargs to pass to the model’s inference function.
param load_fn_kwargs: Optional[dict] = None¶
Keyword arguments to pass to the model load function.
param model_id: str = 'sentence-transformers/all-mpnet-base-v2'¶
Model name to use.
param model_load_fn: Callable = <function load_embedding_model>¶
Function to load the model remotely on the server.
param model_reqs: List[str] = ['./', 'sentence_transformers', 'torch']¶
Requirements to install on hardware to run inference on the model.
param pipeline_ref: Any = None¶
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
embed_documents(texts: List[str]) → List[List[float]]¶
Compute doc embeddings using a HuggingFace transformer model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float]¶
Compute query embeddings using a HuggingFace transformer model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
classmethod from_pipeline(pipeline: Any, hardware: Any, model_reqs: Optional[List[str]] = None, device: int = 0, **kwargs: Any) → LLM¶
Init the SelfHostedPipeline from a pipeline object or string.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]¶
Get the token IDs present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceEmbeddings.html
|
53a1683709b8-0
|
langchain.embeddings.minimax.embed_with_retry¶
langchain.embeddings.minimax.embed_with_retry(embeddings: MiniMaxEmbeddings, *args: Any, **kwargs: Any) → Any[source]¶
Use tenacity to retry the embedding call.
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.minimax.embed_with_retry.html
|
2bc26f3a56e2-0
|
langchain.embeddings.dashscope.DashScopeEmbeddings¶
class langchain.embeddings.dashscope.DashScopeEmbeddings(*, client: Any = None, model: str = 'text-embedding-v1', dashscope_api_key: Optional[str] = None, max_retries: int = 5)[source]¶
Bases: BaseModel, Embeddings
Wrapper around DashScope embedding models.
To use, you should have the dashscope python package installed, and the
environment variable DASHSCOPE_API_KEY set with your API key or pass it
as a named parameter to the constructor.
Example
from langchain.embeddings import DashScopeEmbeddings
embeddings = DashScopeEmbeddings(dashscope_api_key="my-api-key")
Example
import os
os.environ["DASHSCOPE_API_KEY"] = "your DashScope API KEY"
from langchain.embeddings.dashscope import DashScopeEmbeddings
embeddings = DashScopeEmbeddings(
model="text-embedding-v1",
)
text = "This is a test query."
query_result = embeddings.embed_query(text)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param dashscope_api_key: Optional[str] = None¶
param max_retries: int = 5¶
Maximum number of retries to make when generating.
param model: str = 'text-embedding-v1'¶
embed_documents(texts: List[str]) → List[List[float]][source]¶
Call out to DashScope’s embedding endpoint for embedding search docs.
Parameters
texts – The list of texts to embed.
chunk_size – The chunk size of embeddings. If None, will use the chunk size
specified by the class.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Call out to DashScope’s embedding endpoint for embedding query text.
Parameters
text – The text to embed.
Returns
Embedding for the text.
validator validate_environment » all fields[source]¶
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.dashscope.DashScopeEmbeddings.html
|
4ef490adb67b-0
|
langchain.embeddings.google_palm.embed_with_retry¶
langchain.embeddings.google_palm.embed_with_retry(embeddings: GooglePalmEmbeddings, *args: Any, **kwargs: Any) → Any[source]¶
Use tenacity to retry the embedding call.
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.google_palm.embed_with_retry.html
|
40f14cf60e4a-0
|
langchain.embeddings.self_hosted.SelfHostedEmbeddings¶
class langchain.embeddings.self_hosted.SelfHostedEmbeddings(*, cache: ~typing.Optional[bool] = None, verbose: bool = None, callbacks: ~typing.Optional[~typing.Union[~typing.List[~langchain.callbacks.base.BaseCallbackHandler], ~langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: ~typing.Optional[~langchain.callbacks.base.BaseCallbackManager] = None, tags: ~typing.Optional[~typing.List[str]] = None, pipeline_ref: ~typing.Any = None, client: ~typing.Any = None, inference_fn: ~typing.Callable = <function _embed_documents>, hardware: ~typing.Any = None, model_load_fn: ~typing.Callable, load_fn_kwargs: ~typing.Optional[dict] = None, model_reqs: ~typing.List[str] = ['./', 'torch'], inference_kwargs: ~typing.Any = None)[source]¶
Bases: SelfHostedPipeline, Embeddings
Runs custom embedding models on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another
cloud like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example using a model load function:
from langchain.embeddings import SelfHostedEmbeddings
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import runhouse as rh
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
def get_pipeline():
    model_id = "facebook/bart-large"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    return pipeline("feature-extraction", model=model, tokenizer=tokenizer)
embeddings = SelfHostedEmbeddings(
    model_load_fn=get_pipeline,
    hardware=gpu,
    model_reqs=["./", "torch", "transformers"],
)
Example passing in a pipeline path:
import pickle
import runhouse as rh
from transformers import pipeline
from langchain.embeddings import SelfHostedHFEmbeddings
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
pipe = pipeline(model="bert-base-uncased", task="feature-extraction")
rh.blob(pickle.dumps(pipe),
    path="models/pipeline.pkl").save().to(gpu, path="models")
embeddings = SelfHostedHFEmbeddings.from_pipeline(
    pipeline="models/pipeline.pkl",
    hardware=gpu,
    model_reqs=["./", "torch", "transformers"],
)
Init the pipeline with an auxiliary function.
The load function must be in global scope to be imported
and run on the server, i.e. in a module and not a REPL or closure.
Then, initialize the remote inference function.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param hardware: Any = None¶
Remote hardware to send the inference function to.
param inference_fn: Callable = <function _embed_documents>¶
Inference function to extract the embeddings on the remote hardware.
param inference_kwargs: Any = None¶
Any kwargs to pass to the model’s inference function.
param load_fn_kwargs: Optional[dict] = None¶
Keyword arguments to pass to the model load function.
param model_load_fn: Callable [Required]¶
Function to load the model remotely on the server.
param model_reqs: List[str] = ['./', 'torch']¶
Requirements to install on hardware to run inference on the model.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
embed_documents(texts: List[str]) → List[List[float]][source]¶
Compute doc embeddings using a HuggingFace transformer model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Compute query embeddings using a HuggingFace transformer model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
classmethod from_pipeline(pipeline: Any, hardware: Any, model_reqs: Optional[List[str]] = None, device: int = 0, **kwargs: Any) → LLM¶
Init the SelfHostedPipeline from a pipeline object or string.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]¶
Get the token IDs present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted.SelfHostedEmbeddings.html
|
5e01a1a4184b-0
|
langchain.embeddings.dashscope.embed_with_retry¶
langchain.embeddings.dashscope.embed_with_retry(embeddings: DashScopeEmbeddings, **kwargs: Any) → Any[source]¶
Use tenacity to retry the embedding call.
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.dashscope.embed_with_retry.html
|
783055376292-0
|
langchain.embeddings.vertexai.VertexAIEmbeddings¶
class langchain.embeddings.vertexai.VertexAIEmbeddings(*, client: _LanguageModel = None, model_name: str = 'textembedding-gecko', temperature: float = 0.0, max_output_tokens: int = 128, top_p: float = 0.95, top_k: int = 40, stop: Optional[List[str]] = None, project: Optional[str] = None, location: str = 'us-central1', credentials: Any = None, request_parallelism: int = 5)[source]¶
Bases: _VertexAICommon, Embeddings
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param credentials: Any = None¶
The default custom credentials (google.auth.credentials.Credentials) to use
param location: str = 'us-central1'¶
The default location to use when making API calls.
param max_output_tokens: int = 128¶
Token limit determines the maximum amount of text output from one prompt.
param model_name: str = 'textembedding-gecko'¶
Model name to use.
param project: Optional[str] = None¶
The default GCP project to use when making Vertex API calls.
param request_parallelism: int = 5¶
The amount of parallelism allowed for requests issued to VertexAI models.
param stop: Optional[List[str]] = None¶
Optional list of stop words to use when generating.
param temperature: float = 0.0¶
Sampling temperature; it controls the degree of randomness in token selection.
param top_k: int = 40¶
How the model selects tokens for output: the next token is selected from
among the top_k most probable tokens.
param top_p: float = 0.95¶
Tokens are selected from most probable to least until the sum of their
probabilities equals the top_p value.
embed_documents(texts: List[str], batch_size: int = 5) → List[List[float]][source]¶
Embed a list of strings. Vertex AI currently
sets a max batch size of 5 strings.
Parameters
texts – List[str] The list of strings to embed.
batch_size – The batch size of embeddings to send to the model.
Returns
List of embeddings, one for each text.
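A batching sketch (values illustrative; GCP credentials are assumed to be configured):
from langchain.embeddings import VertexAIEmbeddings

vertex = VertexAIEmbeddings()
texts = [f"document {i}" for i in range(12)]
# With the 5-string limit, 12 texts go out as batches of 5, 5, and 2.
vectors = vertex.embed_documents(texts, batch_size=5)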
embed_query(text: str) → List[float][source]¶
Embed a text.
Parameters
text – The text to embed.
Returns
Embedding for the text.
validator validate_environment » all fields[source]¶
Validates that the python package exists in environment.
property is_codey_model: bool¶
task_executor: ClassVar[Optional[Executor]] = None¶
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.vertexai.VertexAIEmbeddings.html
|
4f26aa187ddd-0
|
langchain.embeddings.minimax.MiniMaxEmbeddings¶
class langchain.embeddings.minimax.MiniMaxEmbeddings(*, endpoint_url: str = 'https://api.minimax.chat/v1/embeddings', model: str = 'embo-01', embed_type_db: str = 'db', embed_type_query: str = 'query', minimax_group_id: Optional[str] = None, minimax_api_key: Optional[str] = None)[source]¶
Bases: BaseModel, Embeddings
Wrapper around MiniMax’s embedding inference service.
To use, you should have the environment variables MINIMAX_GROUP_ID and
MINIMAX_API_KEY set with your API credentials, or pass them as named
parameters to the constructor.
Example
from langchain.embeddings import MiniMaxEmbeddings
embeddings = MiniMaxEmbeddings()
query_text = "This is a test query."
query_result = embeddings.embed_query(query_text)
document_text = "This is a test document."
document_result = embeddings.embed_documents([document_text])
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param embed_type_db: str = 'db'¶
For embed_documents
param embed_type_query: str = 'query'¶
For embed_query
param endpoint_url: str = 'https://api.minimax.chat/v1/embeddings'¶
Endpoint URL to use.
param minimax_api_key: Optional[str] = None¶
API Key for MiniMax API.
param minimax_group_id: Optional[str] = None¶
Group ID for MiniMax API.
param model: str = 'embo-01'¶
Embeddings model name to use.
embed(texts: List[str], embed_type: str) → List[List[float]][source]¶
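A sketch of this shared helper, which the two public methods call with the embed types from the params above (credentials assumed set in the environment):
from langchain.embeddings import MiniMaxEmbeddings

emb = MiniMaxEmbeddings()
# embed_documents uses embed_type_db ('db'); embed_query uses embed_type_query ('query').
db_vectors = emb.embed(texts=["a stored passage"], embed_type="db")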
embed_documents(texts: List[str]) → List[List[float]][source]¶
Embed documents using a MiniMax embedding endpoint.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Embed a query using a MiniMax embedding endpoint.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
validator validate_environment » all fields[source]¶
Validate that group id and api key exist in environment.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.minimax.MiniMaxEmbeddings.html
|
24d20d3fc89a-0
|
langchain.embeddings.openai.OpenAIEmbeddings¶
class langchain.embeddings.openai.OpenAIEmbeddings(*, client: Any = None, model: str = 'text-embedding-ada-002', deployment: str = 'text-embedding-ada-002', openai_api_version: Optional[str] = None, openai_api_base: Optional[str] = None, openai_api_type: Optional[str] = None, openai_proxy: Optional[str] = None, embedding_ctx_length: int = 8191, openai_api_key: Optional[str] = None, openai_organization: Optional[str] = None, allowed_special: Union[Literal['all'], Set[str]] = {}, disallowed_special: Union[Literal['all'], Set[str], Sequence[str]] = 'all', chunk_size: int = 1000, max_retries: int = 6, request_timeout: Optional[Union[float, Tuple[float, float]]] = None, headers: Any = None, tiktoken_model_name: Optional[str] = None)[source]¶
Bases: BaseModel, Embeddings
Wrapper around OpenAI embedding models.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key or pass it
as a named parameter to the constructor.
Example
from langchain.embeddings import OpenAIEmbeddings
openai = OpenAIEmbeddings(openai_api_key="my-api-key")
In order to use the library with Microsoft Azure endpoints, you need to set
the OPENAI_API_TYPE, OPENAI_API_BASE, OPENAI_API_KEY and OPENAI_API_VERSION.
The OPENAI_API_TYPE must be set to ‘azure’ and the others correspond to
the properties of your endpoint.
In addition, the deployment name must be passed as the model parameter.
Example
import os
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://<your-endpoint.openai.azure.com/"
os.environ["OPENAI_API_KEY"] = "your AzureOpenAI key"
os.environ["OPENAI_API_VERSION"] = "2023-03-15-preview"
os.environ["OPENAI_PROXY"] = "http://your-corporate-proxy:8080"
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings(
deployment="your-embeddings-deployment-name",
model="your-embeddings-model-name",
openai_api_base="https://your-endpoint.openai.azure.com/",
openai_api_type="azure",
)
text = "This is a test query."
query_result = embeddings.embed_query(text)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param allowed_special: Union[Literal['all'], Set[str]] = {}¶
param chunk_size: int = 1000¶
Maximum number of texts to embed in each batch
param deployment: str = 'text-embedding-ada-002'¶
param disallowed_special: Union[Literal['all'], Set[str], Sequence[str]] = 'all'¶
param embedding_ctx_length: int = 8191¶
param headers: Any = None¶
param max_retries: int = 6¶
Maximum number of retries to make when generating.
param model: str = 'text-embedding-ada-002'¶
param openai_api_base: Optional[str] = None¶
param openai_api_key: Optional[str] = None¶
param openai_api_type: Optional[str] = None¶
param openai_api_version: Optional[str] = None¶
param openai_organization: Optional[str] = None¶
param openai_proxy: Optional[str] = None¶
param request_timeout: Optional[Union[float, Tuple[float, float]]] = None¶
Timeout in seconds for the OpenAI request.
param tiktoken_model_name: Optional[str] = None¶
The model name to pass to tiktoken when using this class.
Tiktoken is used to count the number of tokens in documents to constrain
them to be under a certain limit. By default, when set to None, this will
be the same as the embedding model name. However, there are some cases
where you may want to use this Embedding class with a model name not
supported by tiktoken. This can include when using Azure embeddings or
when using one of the many model providers that expose an OpenAI-like
API but with different models. In those cases, in order to avoid erroring
when tiktoken is called, you can specify a model name to use here.
async aembed_documents(texts: List[str], chunk_size: Optional[int] = 0) → List[List[float]][source]¶
Call out to OpenAI’s embedding endpoint async for embedding search docs.
Parameters
texts – The list of texts to embed.
chunk_size – The chunk size of embeddings. If None, will use the chunk size
specified by the class.
Returns
List of embeddings, one for each text.
async aembed_query(text: str) → List[float][source]¶
Call out to OpenAI’s embedding endpoint async for embedding query text.
Parameters
text – The text to embed.
Returns
Embedding for the text.
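An async usage sketch, assuming OPENAI_API_KEY is set in the environment:
import asyncio
from langchain.embeddings import OpenAIEmbeddings

async def main() -> None:
    emb = OpenAIEmbeddings()
    # Run both embedding calls concurrently on the event loop.
    doc_vectors, query_vector = await asyncio.gather(
        emb.aembed_documents(["first doc", "second doc"]),
        emb.aembed_query("a test query"),
    )

asyncio.run(main())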
embed_documents(texts: List[str], chunk_size: Optional[int] = 0) → List[List[float]][source]¶
Call out to OpenAI’s embedding endpoint for embedding search docs.
Parameters
texts – The list of texts to embed.
chunk_size – The chunk size of embeddings. If None, will use the chunk size
specified by the class.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Call out to OpenAI’s embedding endpoint for embedding query text.
Parameters
text – The text to embed.
Returns
Embedding for the text.
validator validate_environment » all fields[source]¶
Validate that api key and python package exist in environment.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.openai.OpenAIEmbeddings.html
|
6ebbe8888dd4-0
|
langchain.embeddings.fake.FakeEmbeddings¶
class langchain.embeddings.fake.FakeEmbeddings(*, size: int)[source]¶
Bases: Embeddings, BaseModel
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param size: int [Required]¶
async aembed_documents(texts: List[str]) → List[List[float]]¶
Embed search docs.
async aembed_query(text: str) → List[float]¶
Embed query text.
embed_documents(texts: List[str]) → List[List[float]][source]¶
Embed search docs.
embed_query(text: str) → List[float][source]¶
Embed query text.
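Because the vectors carry no real semantics, this class is mainly a test double; a sketch:
from langchain.embeddings import FakeEmbeddings

fake = FakeEmbeddings(size=8)  # every vector has 8 dimensions
assert len(fake.embed_query("anything")) == 8
assert len(fake.embed_documents(["a", "b"])) == 2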
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.fake.FakeEmbeddings.html
|
fd7c71da2e5f-0
|
langchain.embeddings.aleph_alpha.AlephAlphaAsymmetricSemanticEmbedding¶
class langchain.embeddings.aleph_alpha.AlephAlphaAsymmetricSemanticEmbedding(*, client: Any = None, model: Optional[str] = 'luminous-base', hosting: Optional[str] = 'https://api.aleph-alpha.com', normalize: Optional[bool] = True, compress_to_size: Optional[int] = 128, contextual_control_threshold: Optional[int] = None, control_log_additive: Optional[bool] = True, aleph_alpha_api_key: Optional[str] = None)[source]¶
Bases: BaseModel, Embeddings
Wrapper for Aleph Alpha’s Asymmetric Embeddings
AA provides you with an endpoint to embed a document and a query.
The models were optimized to make the embeddings of documents and
the query for a document as similar as possible.
To learn more, check out: https://docs.aleph-alpha.com/docs/tasks/semantic_embed/
Example
from langchain.embeddings import AlephAlphaAsymmetricSemanticEmbedding
embeddings = AlephAlphaAsymmetricSemanticEmbedding()
document = "This is a content of the document"
query = "What is the content of the document?"
doc_result = embeddings.embed_documents([document])
query_result = embeddings.embed_query(query)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param aleph_alpha_api_key: Optional[str] = None¶
API key for Aleph Alpha API.
param compress_to_size: Optional[int] = 128¶
Whether the returned embeddings should be the original 5120-dimensional
vector or compressed to 128 dimensions.
param contextual_control_threshold: Optional[int] = None¶
Attention control parameters only apply to those tokens that have
explicitly been set in the request.
param control_log_additive: Optional[bool] = True¶
Apply controls on prompt items by adding the log(control_factor)
to attention scores.
param hosting: Optional[str] = 'https://api.aleph-alpha.com'¶
Optional parameter that specifies which datacenters may process the request.
param model: Optional[str] = 'luminous-base'¶
Model name to use.
param normalize: Optional[bool] = True¶
Whether returned embeddings should be normalized.
embed_documents(texts: List[str]) → List[List[float]][source]¶
Call out to Aleph Alpha’s asymmetric Document endpoint.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Call out to Aleph Alpha's asymmetric query embedding endpoint.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
validator validate_environment » all fields[source]¶
Validate that api key and python package exist in environment.
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.aleph_alpha.AlephAlphaAsymmetricSemanticEmbedding.html
|
6783bdb18cba-0
|
langchain.embeddings.base.Embeddings¶
class langchain.embeddings.base.Embeddings[source]¶
Bases: ABC
Interface for embedding models.
Methods
__init__()
aembed_documents(texts)
Embed search docs.
aembed_query(text)
Embed query text.
embed_documents(texts)
Embed search docs.
embed_query(text)
Embed query text.
async aembed_documents(texts: List[str]) → List[List[float]][source]¶
Embed search docs.
async aembed_query(text: str) → List[float][source]¶
Embed query text.
abstract embed_documents(texts: List[str]) → List[List[float]][source]¶
Embed search docs.
abstract embed_query(text: str) → List[float][source]¶
Embed query text.
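A sketch of a custom implementation of this interface; the digest-based "embedding" is purely illustrative, not a real model:
import hashlib
from typing import List
from langchain.embeddings.base import Embeddings

class DigestEmbeddings(Embeddings):
    # Illustrative only: derive a fixed-size pseudo-vector from each text's hash.
    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        return [self.embed_query(text) for text in texts]

    def embed_query(self, text: str) -> List[float]:
        digest = hashlib.sha256(text.encode("utf-8")).digest()
        return [byte / 255.0 for byte in digest[:16]]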
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.base.Embeddings.html
|
2fe1ab792df1-0
|
langchain.embeddings.huggingface.HuggingFaceEmbeddings¶
class langchain.embeddings.huggingface.HuggingFaceEmbeddings(*, client: Any = None, model_name: str = 'sentence-transformers/all-mpnet-base-v2', cache_folder: Optional[str] = None, model_kwargs: Dict[str, Any] = None, encode_kwargs: Dict[str, Any] = None)[source]¶
Bases: BaseModel, Embeddings
Wrapper around sentence_transformers embedding models.
To use, you should have the sentence_transformers python package installed.
Example
from langchain.embeddings import HuggingFaceEmbeddings
model_name = "sentence-transformers/all-mpnet-base-v2"
model_kwargs = {'device': 'cpu'}
encode_kwargs = {'normalize_embeddings': False}
hf = HuggingFaceEmbeddings(
model_name=model_name,
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs
)
Initialize the sentence_transformer.
param cache_folder: Optional[str] = None¶
Path to store models.
Can be also set by SENTENCE_TRANSFORMERS_HOME environment variable.
param encode_kwargs: Dict[str, Any] [Optional]¶
Keyword arguments to pass when calling the encode method of the model.
param model_kwargs: Dict[str, Any] [Optional]¶
Keyword arguments to pass to the model.
param model_name: str = 'sentence-transformers/all-mpnet-base-v2'¶
Model name to use.
embed_documents(texts: List[str]) → List[List[float]][source]¶
Compute doc embeddings using a HuggingFace transformer model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Compute query embeddings using a HuggingFace transformer model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.huggingface.HuggingFaceEmbeddings.html
|
2f3d88f9e008-0
|
langchain.embeddings.aleph_alpha.AlephAlphaSymmetricSemanticEmbedding¶
class langchain.embeddings.aleph_alpha.AlephAlphaSymmetricSemanticEmbedding(*, client: Any = None, model: Optional[str] = 'luminous-base', hosting: Optional[str] = 'https://api.aleph-alpha.com', normalize: Optional[bool] = True, compress_to_size: Optional[int] = 128, contextual_control_threshold: Optional[int] = None, control_log_additive: Optional[bool] = True, aleph_alpha_api_key: Optional[str] = None)[source]¶
Bases: AlephAlphaAsymmetricSemanticEmbedding
The symmetric version of Aleph Alpha’s semantic embeddings.
The main difference is that here both documents and
queries are embedded with SemanticRepresentation.Symmetric.
Example
from langchain.embeddings import AlephAlphaSymmetricSemanticEmbedding
embeddings = AlephAlphaSymmetricSemanticEmbedding()
text = "This is a test text"
doc_result = embeddings.embed_documents([text])
query_result = embeddings.embed_query(text)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param aleph_alpha_api_key: Optional[str] = None¶
API key for Aleph Alpha API.
param client: Any = None¶
param compress_to_size: Optional[int] = 128¶
Whether the returned embeddings should be the original 5120-dimensional vectors
or compressed to 128 dimensions.
param contextual_control_threshold: Optional[int] = None¶
Attention control parameters only apply to those tokens that have
explicitly been set in the request.
param control_log_additive: Optional[bool] = True¶
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.aleph_alpha.AlephAlphaSymmetricSemanticEmbedding.html
|
2f3d88f9e008-1
|
param control_log_additive: Optional[bool] = True¶
Apply controls on prompt items by adding the log(control_factor)
to attention scores.
param hosting: Optional[str] = 'https://api.aleph-alpha.com'¶
Optional parameter that specifies which datacenters may process the request.
param model: Optional[str] = 'luminous-base'¶
Model name to use.
param normalize: Optional[bool] = True¶
Whether the returned embeddings should be normalized.
embed_documents(texts: List[str]) → List[List[float]][source]¶
Call out to Aleph Alpha’s Document endpoint.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Call out to Aleph Alpha’s symmetric query embedding endpoint.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
validator validate_environment » all fields¶
Validate that the API key and Python package exist in the environment.
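A hedged authentication sketch: the key can be passed directly, or set in the environment before construction (assuming this integration reads the ALEPH_ALPHA_API_KEY environment variable, as is conventional for LangChain wrappers):
import os
from langchain.embeddings import AlephAlphaSymmetricSemanticEmbedding

os.environ["ALEPH_ALPHA_API_KEY"] = "your-api-key"  # assumed variable name
embeddings = AlephAlphaSymmetricSemanticEmbedding(normalize=True, compress_to_size=128)
vector = embeddings.embed_query("This is a test text")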
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.aleph_alpha.AlephAlphaSymmetricSemanticEmbedding.html
|
c57809e677d0-0
|
langchain.embeddings.modelscope_hub.ModelScopeEmbeddings¶
class langchain.embeddings.modelscope_hub.ModelScopeEmbeddings(*, embed: Any = None, model_id: str = 'damo/nlp_corom_sentence-embedding_english-base')[source]¶
Bases: BaseModel, Embeddings
Wrapper around modelscope_hub embedding models.
To use, you should have the modelscope python package installed.
Example
from langchain.embeddings import ModelScopeEmbeddings
model_id = "damo/nlp_corom_sentence-embedding_english-base"
embed = ModelScopeEmbeddings(model_id=model_id)
Initialize the modelscope embedding pipeline.
param embed: Any = None¶
param model_id: str = 'damo/nlp_corom_sentence-embedding_english-base'¶
Model name to use.
embed_documents(texts: List[str]) → List[List[float]][source]¶
Compute doc embeddings using a modelscope embedding model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Compute query embeddings using a modelscope embedding model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶
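Extending the example above into a hedged end-to-end sketch (assuming the modelscope package is installed; the model is downloaded on first use):
from langchain.embeddings import ModelScopeEmbeddings

embeddings = ModelScopeEmbeddings(model_id="damo/nlp_corom_sentence-embedding_english-base")
doc_vectors = embeddings.embed_documents(["first document", "second document"])
query_vector = embeddings.embed_query("a search query")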
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.modelscope_hub.ModelScopeEmbeddings.html
|
c6c7bda1ab37-0
|
langchain.embeddings.bedrock.BedrockEmbeddings¶
class langchain.embeddings.bedrock.BedrockEmbeddings(*, client: Any = None, region_name: Optional[str] = None, credentials_profile_name: Optional[str] = None, model_id: str = 'amazon.titan-e1t-medium', model_kwargs: Optional[Dict] = None)[source]¶
Bases: BaseModel, Embeddings
Embeddings provider to invoke Bedrock embedding models.
To authenticate, the AWS client uses the following methods to
automatically load credentials:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass
the name of the profile from the ~/.aws/credentials file that is to be used.
Make sure the credentials / roles used have the required policies to
access the Bedrock service.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param credentials_profile_name: Optional[str] = None¶
The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which
has either access keys or role information specified.
If not specified, the default credential profile or, if on an EC2 instance,
credentials from IMDS will be used.
See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
param model_id: str = 'amazon.titan-e1t-medium'¶
Id of the model to call, e.g., amazon.titan-e1t-medium. This is
equivalent to the modelId property in the list-foundation-models API.
param model_kwargs: Optional[Dict] = None¶
Keyword arguments to pass to the model.
param region_name: Optional[str] = None¶
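A hedged construction sketch, assuming boto3 is installed and a matching profile exists in ~/.aws/credentials; the profile and region values below are placeholders:
from langchain.embeddings import BedrockEmbeddings

embeddings = BedrockEmbeddings(
    credentials_profile_name="bedrock-admin",  # placeholder profile name
    region_name="us-east-1",  # placeholder region
    model_id="amazon.titan-e1t-medium",
)
vector = embeddings.embed_query("Hello Bedrock")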
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.bedrock.BedrockEmbeddings.html
|