| id | text | source |
|---|---|---|
110e7ca8530f-1
|
that after the tool is called, the AgentExecutor will stop looping.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the tool. Defaults to None.
These tags will be associated with each call to this tool,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a tool with its use case.
param verbose: bool = False¶
Whether to log the tool’s progress.
__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → str¶
Make tool callable.
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, max_concurrency: Optional[int] = None) → List[Output]¶
async ainvoke(input: Union[str, Dict], config: Optional[RunnableConfig] = None, **kwargs: Any) → Any¶
async arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run the tool asynchronously.
async astream(input: Input, config: Optional[RunnableConfig] = None) → AsyncIterator[Output]¶
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, max_concurrency: Optional[int] = None) → List[Output]¶
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.multion.create_session.MultionCreateSession.html
|
110e7ca8530f-2
|
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
invoke(input: Union[str, Dict], config: Optional[RunnableConfig] = None, **kwargs: Any) → Any¶
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.multion.create_session.MultionCreateSession.html
|
110e7ca8530f-3
|
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
run(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run the tool.
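As a quick illustration of the run()/arun() entry points above, here is a minimal sketch of calling this tool synchronously and asynchronously. The input keys ("query", "url") are assumptions about the tool's args schema, and a configured MultiOn client is required.

```python
import asyncio

from langchain.tools.multion.create_session import MultionCreateSession

tool = MultionCreateSession()

# Synchronous execution via run(); the dict keys are assumed field names.
print(tool.run({"query": "find cheap flights", "url": "https://www.google.com"}))

# Asynchronous execution via arun().
async def main() -> None:
    print(await tool.arun({"query": "find cheap flights", "url": "https://www.google.com"}))

asyncio.run(main())
```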
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.multion.create_session.MultionCreateSession.html
|
110e7ca8530f-4
|
stream(input: Input, config: Optional[RunnableConfig] = None) → Iterator[Output]¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
property args: dict¶
property is_single_input: bool¶
Whether the tool only accepts a single input.
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.multion.create_session.MultionCreateSession.html
|
59c9991c185d-0
|
langchain.tools.base.Tool¶
class langchain.tools.base.Tool[source]¶
Bases: BaseTool
Tool that takes in function or coroutine directly.
Initialize tool.
param args_schema: Optional[Type[pydantic.main.BaseModel]] = None¶
Pydantic model class to validate and parse the tool’s input arguments.
param callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None¶
Deprecated. Please use callbacks instead.
param callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None¶
Callbacks to be called during tool execution.
param coroutine: Optional[Callable[[...], Awaitable[str]]] = None¶
The asynchronous version of the function.
param description: str = ''¶
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
param func: Callable[[...], str] [Required]¶
The function to run when the tool is called.
param handle_tool_error: Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]] = False¶
Handle the content of the ToolException thrown.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the tool. Defaults to None.
This metadata will be associated with each call to this tool,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a tool with its use case.
param name: str [Required]¶
The unique name of the tool that clearly communicates its purpose.
param return_direct: bool = False¶
Whether to return the tool’s output directly. Setting this to True means
that after the tool is called, the AgentExecutor will stop looping.
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.base.Tool.html
|
59c9991c185d-1
|
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the tool. Defaults to None.
These tags will be associated with each call to this tool,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a tool with its use case.
param verbose: bool = False¶
Whether to log the tool’s progress.
__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → str¶
Make tool callable.
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, max_concurrency: Optional[int] = None) → List[Output]¶
async ainvoke(input: Union[str, Dict], config: Optional[RunnableConfig] = None, **kwargs: Any) → Any[source]¶
async arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run the tool asynchronously.
async astream(input: Input, config: Optional[RunnableConfig] = None) → AsyncIterator[Output]¶
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, max_concurrency: Optional[int] = None) → List[Output]¶
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.base.Tool.html
|
59c9991c185d-2
|
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_function(func: Callable, name: str, description: str, return_direct: bool = False, args_schema: Optional[Type[BaseModel]] = None, **kwargs: Any) → Tool[source]¶
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.base.Tool.html
|
59c9991c185d-3
|
Initialize tool from a function.
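A minimal sketch of wrapping a plain Python function with from_function; the function, name, and description below are illustrative, not part of the library.

```python
from langchain.tools.base import Tool

def word_count(text: str) -> str:
    """Return the number of words in the input text."""
    return str(len(text.split()))

# name and description tell the model how and when to call the tool.
tool = Tool.from_function(
    func=word_count,
    name="word_count",
    description="Counts the number of words in the given text.",
)

print(tool.run("how many words are in this sentence"))  # -> "7"
```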
classmethod from_orm(obj: Any) → Model¶
invoke(input: Union[str, Dict], config: Optional[RunnableConfig] = None, **kwargs: Any) → Any¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
run(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run the tool.
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.base.Tool.html
|
59c9991c185d-4
|
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None) → Iterator[Output]¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
property args: dict¶
The tool’s input arguments.
property is_single_input: bool¶
Whether the tool only accepts a single input.
Examples using Tool¶
DataForSeo API Wrapper
Google Serper API
SerpAPI
Google Search
Python REPL
Zep Memory
Dynamodb Chat Message History
Google Serper
Document Comparison
Natural Language APIs
Github Toolkit
Comparing Chain Outputs
Agent VectorDB Question Answering Benchmarking
AutoGPT
BabyAGI with Tools
Plug-and-Plai
Wikibase Agent
SalesGPT - Your Context-Aware AI Sales Assistant With Knowledge Base
Custom Agent with PlugIn Retrieval
Agent Debates with Tools
Adding Message Memory backed by a database to an Agent
How to add Memory to an Agent
Multi-Input Tools
Defining Custom Tools
Self ask with search
ReAct document store
OpenAI Multi Functions Agent
Combine agents and vector stores
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.base.Tool.html
|
59c9991c185d-5
|
Custom MRKL agent
Handle parsing errors
Shared memory across agents and tools
Custom multi-action agent
Running Agent as an Iterator
Timeouts for agents
Add Memory to OpenAI Functions Agent
Cap the max number of iterations
Custom agent
Use ToolKits with OpenAI Functions
Custom agent with tool retrieval
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.base.Tool.html
|
950a07fa5d2d-0
|
langchain.tools.gmail.utils.import_installed_app_flow¶
langchain.tools.gmail.utils.import_installed_app_flow() → InstalledAppFlow[source]¶
Import InstalledAppFlow class.
Returns
InstalledAppFlow class.
Return type
InstalledAppFlow
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.gmail.utils.import_installed_app_flow.html
|
31a52ca6730b-0
|
langchain.tools.file_management.file_search.FileSearchTool¶
class langchain.tools.file_management.file_search.FileSearchTool[source]¶
Bases: BaseFileToolMixin, BaseTool
Tool that searches for files in a subdirectory that match a regex pattern.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.file_management.file_search.FileSearchInput'>¶
Pydantic model class to validate and parse the tool’s input arguments.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated. Please use callbacks instead.
param callbacks: Callbacks = None¶
Callbacks to be called during tool execution.
param description: str = 'Recursively search for files in a subdirectory that match the regex pattern'¶
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
param handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False¶
Handle the content of the ToolException thrown.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the tool. Defaults to None.
This metadata will be associated with each call to this tool,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a tool with its use case.
param name: str = 'file_search'¶
The unique name of the tool that clearly communicates its purpose.
param return_direct: bool = False¶
Whether to return the tool’s output directly. Setting this to True means
that after the tool is called, the AgentExecutor will stop looping.
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.file_management.file_search.FileSearchTool.html
|
31a52ca6730b-1
|
param root_dir: Optional[str] = None¶
The final path will be chosen relative to root_dir if specified.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the tool. Defaults to None.
These tags will be associated with each call to this tool,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a tool with its use case.
param verbose: bool = False¶
Whether to log the tool’s progress.
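A minimal usage sketch, assuming the FileSearchInput schema referenced above exposes dir_path and pattern fields (verify against your version).

```python
from langchain.tools.file_management.file_search import FileSearchTool

# Restrict all searches to a root directory.
tool = FileSearchTool(root_dir="/tmp")

# Recursively look for Python files under the root directory.
print(tool.run({"dir_path": ".", "pattern": "*.py"}))
```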
__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → str¶
Make tool callable.
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, max_concurrency: Optional[int] = None) → List[Output]¶
async ainvoke(input: Union[str, Dict], config: Optional[RunnableConfig] = None, **kwargs: Any) → Any¶
async arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run the tool asynchronously.
async astream(input: Input, config: Optional[RunnableConfig] = None) → AsyncIterator[Output]¶
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, max_concurrency: Optional[int] = None) → List[Output]¶
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.file_management.file_search.FileSearchTool.html
|
31a52ca6730b-2
|
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
get_relative_path(file_path: str) → Path¶
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.file_management.file_search.FileSearchTool.html
|
31a52ca6730b-3
|
Get the relative path, returning an error if unsupported.
invoke(input: Union[str, Dict], config: Optional[RunnableConfig] = None, **kwargs: Any) → Any¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
run(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run the tool.
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.file_management.file_search.FileSearchTool.html
|
31a52ca6730b-4
|
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None) → Iterator[Output]¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
property args: dict¶
property is_single_input: bool¶
Whether the tool only accepts a single input.
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.file_management.file_search.FileSearchTool.html
|
c017c1db7a82-0
|
langchain.tools.gmail.utils.get_gmail_credentials¶
langchain.tools.gmail.utils.get_gmail_credentials(token_file: Optional[str] = None, client_secrets_file: Optional[str] = None, scopes: Optional[List[str]] = None) → Credentials[source]¶
Get credentials.
Examples using get_gmail_credentials¶
Gmail Toolkit
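A sketch of the typical setup shown in the Gmail toolkit: the token and client-secrets paths are local files you supply, and the scope is an example.

```python
from langchain.tools.gmail.utils import build_resource_service, get_gmail_credentials

credentials = get_gmail_credentials(
    token_file="token.json",
    scopes=["https://mail.google.com/"],
    client_secrets_file="credentials.json",
)
# Build the Gmail API resource consumed by the Gmail tools.
api_resource = build_resource_service(credentials=credentials)
```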
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.gmail.utils.get_gmail_credentials.html
|
0b5597f6dfb1-0
|
langchain.tools.office365.send_event.O365SendEvent¶
class langchain.tools.office365.send_event.O365SendEvent[source]¶
Bases: O365BaseTool
Tool for sending calendar events in Office 365.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param account: Account [Optional]¶
The account object for the Office 365 account.
param args_schema: Type[langchain.tools.office365.send_event.SendEventSchema] = <class 'langchain.tools.office365.send_event.SendEventSchema'>¶
Pydantic model class to validate and parse the tool’s input arguments.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated. Please use callbacks instead.
param callbacks: Callbacks = None¶
Callbacks to be called during tool execution.
param description: str = 'Use this tool to create and send an event with the provided event fields.'¶
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
param handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False¶
Handle the content of the ToolException thrown.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the tool. Defaults to None.
This metadata will be associated with each call to this tool,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a tool with its use case.
param name: str = 'send_event'¶
The unique name of the tool that clearly communicates its purpose.
param return_direct: bool = False¶
Whether to return the tool’s output directly. Setting this to True means
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.office365.send_event.O365SendEvent.html
|
0b5597f6dfb1-1
|
that after the tool is called, the AgentExecutor will stop looping.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the tool. Defaults to None.
These tags will be associated with each call to this tool,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a tool with its use case.
param verbose: bool = False¶
Whether to log the tool’s progress.
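A minimal sketch of calling the tool. The Account is resolved from O365 credentials configured in the environment, and the input keys below mirror SendEventSchema only as an assumption about your version.

```python
from langchain.tools.office365.send_event import O365SendEvent

tool = O365SendEvent()

# Field names are assumed from SendEventSchema; adjust to your version.
result = tool.run({
    "subject": "Project sync",
    "body": "Weekly status meeting.",
    "start_datetime": "2023-09-01 10:00:00",
    "end_datetime": "2023-09-01 10:30:00",
    "attendees": ["colleague@example.com"],
})
print(result)
```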
__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → str¶
Make tool callable.
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, max_concurrency: Optional[int] = None) → List[Output]¶
async ainvoke(input: Union[str, Dict], config: Optional[RunnableConfig] = None, **kwargs: Any) → Any¶
async arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run the tool asynchronously.
async astream(input: Input, config: Optional[RunnableConfig] = None) → AsyncIterator[Output]¶
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, max_concurrency: Optional[int] = None) → List[Output]¶
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.office365.send_event.O365SendEvent.html
|
0b5597f6dfb1-2
|
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.office365.send_event.O365SendEvent.html
|
0b5597f6dfb1-3
|
invoke(input: Union[str, Dict], config: Optional[RunnableConfig] = None, **kwargs: Any) → Any¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
run(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run the tool.
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.office365.send_event.O365SendEvent.html
|
0b5597f6dfb1-4
|
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None) → Iterator[Output]¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
property args: dict¶
property is_single_input: bool¶
Whether the tool only accepts a single input.
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.office365.send_event.O365SendEvent.html
|
a3f0353af8cd-0
|
langchain.tools.nuclia.tool.NucliaUnderstandingAPI¶
class langchain.tools.nuclia.tool.NucliaUnderstandingAPI[source]¶
Bases: BaseTool
Tool to process files with the Nuclia Understanding API.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.nuclia.tool.NUASchema'>¶
Pydantic model class to validate and parse the tool’s input arguments.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated. Please use callbacks instead.
param callbacks: Callbacks = None¶
Callbacks to be called during tool execution.
param description: str = 'A wrapper around Nuclia Understanding API endpoints. Useful for when you need to extract text from any kind of files. '¶
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
param handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False¶
Handle the content of the ToolException thrown.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the tool. Defaults to None.
This metadata will be associated with each call to this tool,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a tool with its use case.
param name: str = 'nuclia_understanding_api'¶
The unique name of the tool that clearly communicates its purpose.
param return_direct: bool = False¶
Whether to return the tool’s output directly. Setting this to True means
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.nuclia.tool.NucliaUnderstandingAPI.html
|
a3f0353af8cd-1
|
that after the tool is called, the AgentExecutor will stop looping.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the tool. Defaults to None.
These tags will be associated with each call to this tool,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a tool with its use case.
param verbose: bool = False¶
Whether to log the tool’s progress.
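A minimal sketch, assuming the zone and NUA key are supplied via environment variables and that NUASchema accepts the action/id/path keys shown; treat these specifics as assumptions about your version.

```python
import os

from langchain.tools.nuclia.tool import NucliaUnderstandingAPI

# Zone and NUA key are read from the environment (assumed variable names).
os.environ["NUCLIA_ZONE"] = "europe-1"
os.environ["NUCLIA_NUA_KEY"] = "<your NUA key>"

nua = NucliaUnderstandingAPI(enable_ml=False)

# Push a local file to the Understanding API for processing.
nua.run({"action": "push", "id": "report", "path": "./report.pdf"})
```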
__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → str¶
Make tool callable.
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, max_concurrency: Optional[int] = None) → List[Output]¶
async ainvoke(input: Union[str, Dict], config: Optional[RunnableConfig] = None, **kwargs: Any) → Any¶
async arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run the tool asynchronously.
async astream(input: Input, config: Optional[RunnableConfig] = None) → AsyncIterator[Output]¶
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, max_concurrency: Optional[int] = None) → List[Output]¶
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.nuclia.tool.NucliaUnderstandingAPI.html
|
a3f0353af8cd-2
|
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.nuclia.tool.NucliaUnderstandingAPI.html
|
a3f0353af8cd-3
|
invoke(input: Union[str, Dict], config: Optional[RunnableConfig] = None, **kwargs: Any) → Any¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
run(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run the tool.
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.nuclia.tool.NucliaUnderstandingAPI.html
|
a3f0353af8cd-4
|
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None) → Iterator[Output]¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
property args: dict¶
property is_single_input: bool¶
Whether the tool only accepts a single input.
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.nuclia.tool.NucliaUnderstandingAPI.html
|
31a149f67f4c-0
|
langchain.tools.python.tool.PythonAstREPLTool¶
class langchain.tools.python.tool.PythonAstREPLTool[source]¶
Bases: BaseTool
A tool for running python code in a REPL.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param args_schema: Optional[Type[BaseModel]] = None¶
Pydantic model class to validate and parse the tool’s input arguments.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated. Please use callbacks instead.
param callbacks: Callbacks = None¶
Callbacks to be called during tool execution.
param description: str = 'A Python shell. Use this to execute python commands. Input should be a valid python command. When using this tool, sometimes output is abbreviated - make sure it does not look abbreviated before using it in your answer.'¶
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
param globals: Optional[Dict] [Optional]¶
param handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False¶
Handle the content of the ToolException thrown.
param locals: Optional[Dict] [Optional]¶
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the tool. Defaults to None.
This metadata will be associated with each call to this tool,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a tool with its use case.
param name: str = 'python_repl_ast'¶
The unique name of the tool that clearly communicates its purpose.
param return_direct: bool = False¶
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.python.tool.PythonAstREPLTool.html
|
31a149f67f4c-1
|
Whether to return the tool’s output directly. Setting this to True means
that after the tool is called, the AgentExecutor will stop looping.
param sanitize_input: bool = True¶
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the tool. Defaults to None.
These tags will be associated with each call to this tool,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a tool with its use case.
param verbose: bool = False¶
Whether to log the tool’s progress.
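A short sketch: the tool parses the input with ast, executes the statements, and returns the value of the final expression.

```python
from langchain.tools.python.tool import PythonAstREPLTool

tool = PythonAstREPLTool()

# Statements are executed; the value of the last expression is returned.
print(tool.run("x = [1, 2, 3]; sum(x) * 2"))  # 12
```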
__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → str¶
Make tool callable.
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, max_concurrency: Optional[int] = None) → List[Output]¶
async ainvoke(input: Union[str, Dict], config: Optional[RunnableConfig] = None, **kwargs: Any) → Any¶
async arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run the tool asynchronously.
async astream(input: Input, config: Optional[RunnableConfig] = None) → AsyncIterator[Output]¶
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.python.tool.PythonAstREPLTool.html
|
31a149f67f4c-2
|
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, max_concurrency: Optional[int] = None) → List[Output]¶
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.python.tool.PythonAstREPLTool.html
|
31a149f67f4c-3
|
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
invoke(input: Union[str, Dict], config: Optional[RunnableConfig] = None, **kwargs: Any) → Any¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
run(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run the tool.
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.python.tool.PythonAstREPLTool.html
|
31a149f67f4c-4
|
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None) → Iterator[Output]¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
property args: dict¶
property is_single_input: bool¶
Whether the tool only accepts a single input.
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.python.tool.PythonAstREPLTool.html
|
42bb79f8ce81-0
|
langchain.tools.file_management.write.WriteFileInput¶
class langchain.tools.file_management.write.WriteFileInput[source]¶
Bases: BaseModel
Input for WriteFileTool.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param append: bool = False¶
Whether to append to an existing file.
param file_path: str [Required]¶
Name of the file.
param text: str [Required]¶
Text to write to the file.
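A small sketch showing the schema validating tool input; omitting a required field raises a pydantic ValidationError.

```python
from langchain.tools.file_management.write import WriteFileInput

payload = WriteFileInput(file_path="notes/todo.txt", text="buy milk", append=False)
print(payload.dict())
# {'file_path': 'notes/todo.txt', 'text': 'buy milk', 'append': False}
```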
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.file_management.write.WriteFileInput.html
|
42bb79f8ce81-1
|
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.file_management.write.WriteFileInput.html
|
42bb79f8ce81-2
|
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.file_management.write.WriteFileInput.html
|
0ab377126849-0
|
langchain.tools.office365.create_draft_message.O365CreateDraftMessage¶
class langchain.tools.office365.create_draft_message.O365CreateDraftMessage[source]¶
Bases: O365BaseTool
Tool for creating a draft email in Office 365.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param account: Account [Optional]¶
The account object for the Office 365 account.
param args_schema: Type[langchain.tools.office365.create_draft_message.CreateDraftMessageSchema] = <class 'langchain.tools.office365.create_draft_message.CreateDraftMessageSchema'>¶
Pydantic model class to validate and parse the tool’s input arguments.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated. Please use callbacks instead.
param callbacks: Callbacks = None¶
Callbacks to be called during tool execution.
param description: str = 'Use this tool to create a draft email with the provided message fields.'¶
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
param handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False¶
Handle the content of the ToolException thrown.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the tool. Defaults to None.
This metadata will be associated with each call to this tool,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a tool with its use case.
param name: str = 'create_email_draft'¶
The unique name of the tool that clearly communicates its purpose.
param return_direct: bool = False¶
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.office365.create_draft_message.O365CreateDraftMessage.html
|
0ab377126849-1
|
Whether to return the tool’s output directly. Setting this to True means
that after the tool is called, the AgentExecutor will stop looping.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the tool. Defaults to None.
These tags will be associated with each call to this tool,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a tool with its use case.
param verbose: bool = False¶
Whether to log the tool’s progress.
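A minimal sketch; the Account comes from O365 credentials configured in the environment, and the input keys below mirror CreateDraftMessageSchema only as an assumption about your version.

```python
from langchain.tools.office365.create_draft_message import O365CreateDraftMessage

tool = O365CreateDraftMessage()

# Field names are assumed from CreateDraftMessageSchema; adjust to your version.
result = tool.run({
    "subject": "Draft: quarterly report",
    "body": "Please review the numbers before Friday.",
    "to": ["manager@example.com"],
})
print(result)
```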
__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → str¶
Make tool callable.
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, max_concurrency: Optional[int] = None) → List[Output]¶
async ainvoke(input: Union[str, Dict], config: Optional[RunnableConfig] = None, **kwargs: Any) → Any¶
async arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run the tool asynchronously.
async astream(input: Input, config: Optional[RunnableConfig] = None) → AsyncIterator[Output]¶
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, max_concurrency: Optional[int] = None) → List[Output]¶
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.office365.create_draft_message.O365CreateDraftMessage.html
|
0ab377126849-2
|
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.office365.create_draft_message.O365CreateDraftMessage.html
|
0ab377126849-3
|
invoke(input: Union[str, Dict], config: Optional[RunnableConfig] = None, **kwargs: Any) → Any¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
run(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run the tool.
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.office365.create_draft_message.O365CreateDraftMessage.html
|
0ab377126849-4
|
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None) → Iterator[Output]¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
property args: dict¶
property is_single_input: bool¶
Whether the tool only accepts a single input.
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.office365.create_draft_message.O365CreateDraftMessage.html
|
4606e8f4f919-0
|
langchain.tools.azure_cognitive_services.utils.download_audio_from_url¶
langchain.tools.azure_cognitive_services.utils.download_audio_from_url(audio_url: str) → str[source]¶
Download audio from url to local.
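A minimal usage sketch; the URL below is a placeholder, any reachable audio file URL would do:
.. code-block:: python

    from langchain.tools.azure_cognitive_services.utils import download_audio_from_url

    # Placeholder URL for illustration only.
    local_path = download_audio_from_url("https://example.com/sample.wav")
    print(local_path)  # path to the locally downloaded audio file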
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.azure_cognitive_services.utils.download_audio_from_url.html
|
6b0f6b619943-0
|
langchain.tools.amadeus.closest_airport.ClosestAirportSchema¶
class langchain.tools.amadeus.closest_airport.ClosestAirportSchema[source]¶
Bases: BaseModel
Schema for the AmadeusClosestAirport tool.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param location: str [Required]¶
The location for which you would like to find the nearest airport along with optional details such as country, state, region, or province, allowing for easy processing and identification of the closest airport. Examples of the format are the following:
Cali, Colombia
Lincoln, Nebraska, United States
New York, United States
Sydney, New South Wales, Australia
Rome, Lazio, Italy
Toronto, Ontario, Canada
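For illustration, the schema can be instantiated directly with one of the formats above (a minimal sketch):
.. code-block:: python

    from langchain.tools.amadeus.closest_airport import ClosestAirportSchema

    schema = ClosestAirportSchema(location="Lincoln, Nebraska, United States")
    print(schema.location)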
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.amadeus.closest_airport.ClosestAirportSchema.html
|
6b0f6b619943-1
|
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.amadeus.closest_airport.ClosestAirportSchema.html
|
6b0f6b619943-2
|
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.amadeus.closest_airport.ClosestAirportSchema.html
|
a277ee0e7a82-0
|
langchain.tools.base.StructuredTool¶
class langchain.tools.base.StructuredTool[source]¶
Bases: BaseTool
Tool that can operate on any number of inputs.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param args_schema: Type[pydantic.main.BaseModel] [Required]¶
The input arguments’ schema.
The tool schema.
param callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None¶
Deprecated. Please use callbacks instead.
param callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None¶
Callbacks to be called during tool execution.
param coroutine: Optional[Callable[[...], Awaitable[Any]]] = None¶
The asynchronous version of the function.
param description: str = ''¶
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
param func: Callable[[...], Any] [Required]¶
The function to run when the tool is called.
param handle_tool_error: Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]] = False¶
Handle the content of the ToolException thrown.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the tool. Defaults to None
This metadata will be associated with each call to this tool,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a tool with its use case.
param name: str [Required]¶
The unique name of the tool that clearly communicates its purpose.
param return_direct: bool = False¶
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.base.StructuredTool.html
|
a277ee0e7a82-1
|
Whether to return the tool’s output directly. Setting this to True means
that after the tool is called, the AgentExecutor will stop looping.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the tool. Defaults to None
These tags will be associated with each call to this tool,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a tool with its use case.
param verbose: bool = False¶
Whether to log the tool’s progress.
__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → str¶
Make tool callable.
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, max_concurrency: Optional[int] = None) → List[Output]¶
async ainvoke(input: Union[str, Dict], config: Optional[RunnableConfig] = None, **kwargs: Any) → Any[source]¶
async arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run the tool asynchronously.
async astream(input: Input, config: Optional[RunnableConfig] = None) → AsyncIterator[Output]¶
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, max_concurrency: Optional[int] = None) → List[Output]¶
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.base.StructuredTool.html
|
a277ee0e7a82-2
|
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.base.StructuredTool.html
|
a277ee0e7a82-3
|
classmethod from_function(func: Callable, name: Optional[str] = None, description: Optional[str] = None, return_direct: bool = False, args_schema: Optional[Type[BaseModel]] = None, infer_schema: bool = True, **kwargs: Any) → StructuredTool[source]¶
Create tool from a given function.
A classmethod that helps to create a tool from a function.
Parameters
func – The function from which to create a tool
name – The name of the tool. Defaults to the function name
description – The description of the tool. Defaults to the function docstring
return_direct – Whether to return the result directly or as a callback
args_schema – The schema of the tool’s input arguments
infer_schema – Whether to infer the schema from the function’s signature
**kwargs – Additional arguments to pass to the tool
Returns
The tool
Examples
.. code-block:: python

    def add(a: int, b: int) -> int:
        """Add two numbers"""
        return a + b

    tool = StructuredTool.from_function(add)
    tool.run({"a": 1, "b": 2})  # 3
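A slightly fuller sketch of from_function with an explicit name, description and args_schema; the schema, function, and their names are hypothetical:
.. code-block:: python

    from pydantic import BaseModel

    from langchain.tools.base import StructuredTool

    class SearchInput(BaseModel):  # hypothetical args schema
        query: str
        max_results: int = 3

    def search(query: str, max_results: int = 3) -> str:
        """Pretend search over a fixed corpus."""  # hypothetical function
        return f"{max_results} results for {query!r}"

    tool = StructuredTool.from_function(
        search,
        name="corpus_search",
        description="Search a small in-memory corpus.",
        args_schema=SearchInput,
    )
    # Multi-input tools take a dict of arguments as tool_input.
    print(tool.run({"query": "prompt templates", "max_results": 2}))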
classmethod from_orm(obj: Any) → Model¶
invoke(input: Union[str, Dict], config: Optional[RunnableConfig] = None, **kwargs: Any) → Any¶
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.base.StructuredTool.html
|
a277ee0e7a82-4
|
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
run(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run the tool.
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.base.StructuredTool.html
|
a277ee0e7a82-5
|
stream(input: Input, config: Optional[RunnableConfig] = None) → Iterator[Output]¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
property args: dict¶
The tool’s input arguments.
property is_single_input: bool¶
Whether the tool only accepts a single input.
Examples using StructuredTool¶
Multi-Input Tools
Defining Custom Tools
|
https://api.python.langchain.com/en/latest/tools/langchain.tools.base.StructuredTool.html
|
a0ae616d1057-0
|
langchain.server.main¶
langchain.server.main() → None[source]¶
Run the langchain server locally.
|
https://api.python.langchain.com/en/latest/server/langchain.server.main.html
|
8bc625d3bce3-0
|
langchain.graphs.arangodb_graph.get_arangodb_client¶
langchain.graphs.arangodb_graph.get_arangodb_client(url: Optional[str] = None, dbname: Optional[str] = None, username: Optional[str] = None, password: Optional[str] = None) → Any[source]¶
Get the Arango DB client from credentials.
Parameters
url – Arango DB url. Can be passed in as named arg or set as environment
var ARANGODB_URL. Defaults to “http://localhost:8529”.
dbname – Arango DB name. Can be passed in as named arg or set as
environment var ARANGODB_DBNAME. Defaults to “_system”.
username – Can be passed in as named arg or set as environment var
ARANGODB_USERNAME. Defaults to “root”.
password – Can be passed in as named arg or set as environment var
ARANGODB_PASSWORD. Defaults to “”.
Returns
An arango.database.StandardDatabase.
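A minimal sketch, assuming a local ArangoDB instance; all connection values below are placeholders and could equally be supplied via the environment variables listed above:
.. code-block:: python

    from langchain.graphs.arangodb_graph import get_arangodb_client

    # Placeholders; ARANGODB_URL, ARANGODB_DBNAME, ARANGODB_USERNAME and
    # ARANGODB_PASSWORD may be used instead of named args.
    db = get_arangodb_client(
        url="http://localhost:8529",
        dbname="_system",
        username="root",
        password="",
    )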
|
https://api.python.langchain.com/en/latest/graphs/langchain.graphs.arangodb_graph.get_arangodb_client.html
|
557fbf0f6b7a-0
|
langchain.graphs.memgraph_graph.MemgraphGraph¶
class langchain.graphs.memgraph_graph.MemgraphGraph(url: str, username: str, password: str, *, database: str = 'memgraph')[source]¶
Memgraph wrapper for graph operations.
Create a new Memgraph graph wrapper instance.
Attributes
get_schema
Returns the schema of the Memgraph database
Methods
__init__(url, username, password, *[, database])
Create a new Memgraph graph wrapper instance.
query(query[, params])
Query Neo4j database.
refresh_schema()
Refreshes the Memgraph graph schema information.
__init__(url: str, username: str, password: str, *, database: str = 'memgraph') → None[source]¶
Create a new Memgraph graph wrapper instance.
query(query: str, params: dict = {}) → List[Dict[str, Any]]¶
Query Neo4j database.
refresh_schema() → None[source]¶
Refreshes the Memgraph graph schema information.
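A minimal sketch, assuming a Memgraph instance reachable over the Bolt protocol (connection details are placeholders):
.. code-block:: python

    from langchain.graphs.memgraph_graph import MemgraphGraph

    graph = MemgraphGraph(url="bolt://localhost:7687", username="", password="")  # placeholders
    graph.refresh_schema()
    rows = graph.query("MATCH (n) RETURN count(n) AS nodes")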
|
https://api.python.langchain.com/en/latest/graphs/langchain.graphs.memgraph_graph.MemgraphGraph.html
|
7fa032e64821-0
|
langchain.graphs.kuzu_graph.KuzuGraph¶
class langchain.graphs.kuzu_graph.KuzuGraph(db: Any, database: str = 'kuzu')[source]¶
Kùzu wrapper for graph operations.
Attributes
get_schema
Returns the schema of the Kùzu database
Methods
__init__(db[, database])
query(query[, params])
Query Kùzu database
refresh_schema()
Refreshes the Kùzu graph schema information
__init__(db: Any, database: str = 'kuzu') → None[source]¶
query(query: str, params: dict = {}) → List[Dict[str, Any]][source]¶
Query Kùzu database
refresh_schema() → None[source]¶
Refreshes the Kùzu graph schema information
Examples using KuzuGraph¶
KuzuQAChain
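A minimal sketch, assuming the kuzu Python package is installed separately (the on-disk database path is a placeholder):
.. code-block:: python

    import kuzu  # assumed to be installed alongside langchain
    from langchain.graphs.kuzu_graph import KuzuGraph

    db = kuzu.Database("./kuzu_db")  # placeholder database path
    graph = KuzuGraph(db)
    graph.refresh_schema()
    print(graph.get_schema)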
|
https://api.python.langchain.com/en/latest/graphs/langchain.graphs.kuzu_graph.KuzuGraph.html
|
af1f8bf5eb50-0
|
langchain.graphs.neo4j_graph.Neo4jGraph¶
class langchain.graphs.neo4j_graph.Neo4jGraph(url: str, username: str, password: str, database: str = 'neo4j')[source]¶
Neo4j wrapper for graph operations.
Create a new Neo4j graph wrapper instance.
Attributes
get_schema
Returns the schema of the Neo4j database
Methods
__init__(url, username, password[, database])
Create a new Neo4j graph wrapper instance.
query(query[, params])
Query Neo4j database.
refresh_schema()
Refreshes the Neo4j graph schema information.
__init__(url: str, username: str, password: str, database: str = 'neo4j') → None[source]¶
Create a new Neo4j graph wrapper instance.
query(query: str, params: dict = {}) → List[Dict[str, Any]][source]¶
Query Neo4j database.
refresh_schema() → None[source]¶
Refreshes the Neo4j graph schema information.
Examples using Neo4jGraph¶
Graph DB QA chain
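A minimal sketch, assuming a local Neo4j instance (credentials are placeholders):
.. code-block:: python

    from langchain.graphs.neo4j_graph import Neo4jGraph

    graph = Neo4jGraph(
        url="bolt://localhost:7687",  # placeholder
        username="neo4j",
        password="password",
    )
    graph.refresh_schema()
    rows = graph.query("MATCH (n) RETURN count(n) AS nodes")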
|
https://api.python.langchain.com/en/latest/graphs/langchain.graphs.neo4j_graph.Neo4jGraph.html
|
55bc1275503d-0
|
langchain.graphs.neptune_graph.NeptuneGraph¶
class langchain.graphs.neptune_graph.NeptuneGraph(host: str, port: int = 8182, use_https: bool = True)[source]¶
Neptune wrapper for graph operations. This version
does not support Sigv4 signing of requests.
Example
graph = NeptuneGraph(
    host='<my-cluster>',
    port=8182
)
Create a new Neptune graph wrapper instance.
Attributes
get_schema
Returns the schema of the Neptune database
Methods
__init__(host[, port, use_https])
Create a new Neptune graph wrapper instance.
query(query[, params])
Query Neptune database.
__init__(host: str, port: int = 8182, use_https: bool = True) → None[source]¶
Create a new Neptune graph wrapper instance.
query(query: str, params: dict = {}) → Dict[str, Any][source]¶
Query Neptune database.
Examples using NeptuneGraph¶
Neptune Open Cypher QA Chain
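A minimal query sketch following the constructor example above (the cluster endpoint is a placeholder):
.. code-block:: python

    from langchain.graphs.neptune_graph import NeptuneGraph

    graph = NeptuneGraph(host="<my-cluster>", port=8182)  # placeholder endpoint
    result = graph.query("MATCH (n) RETURN n LIMIT 5")    # returns a Dict[str, Any]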
|
https://api.python.langchain.com/en/latest/graphs/langchain.graphs.neptune_graph.NeptuneGraph.html
|
83b0434785d0-0
|
langchain.graphs.rdf_graph.RdfGraph¶
class langchain.graphs.rdf_graph.RdfGraph(source_file: Optional[str] = None, serialization: Optional[str] = 'ttl', query_endpoint: Optional[str] = None, update_endpoint: Optional[str] = None, standard: Optional[str] = 'rdf', local_copy: Optional[str] = None)[source]¶
RDFlib wrapper for graph operations.
Modes:
* local: Local file - can be queried and changed
* online: Online file - can only be queried, changes can be stored locally
* store: Triple store - can be queried and changed if update_endpoint available
Together with a source file, the serialization should be specified.
Set up the RDFlib graph
Parameters
source_file – either a path for a local file or a URL
serialization – serialization of the input
query_endpoint – SPARQL endpoint for queries, read access
update_endpoint – SPARQL endpoint for UPDATE queries, write access
standard – RDF, RDFS, or OWL
local_copy – new local copy for storing changes
Attributes
get_schema
Returns the schema of the graph database.
Methods
__init__([source_file, serialization, ...])
Set up the RDFlib graph
load_schema()
Load the graph schema information.
query(query)
Query the graph.
update(query)
Update the graph.
__init__(source_file: Optional[str] = None, serialization: Optional[str] = 'ttl', query_endpoint: Optional[str] = None, update_endpoint: Optional[str] = None, standard: Optional[str] = 'rdf', local_copy: Optional[str] = None) → None[source]¶
Set up the RDFlib graph
Parameters
source_file – either a path for a local file or a URL
serialization – serialization of the input
query_endpoint – SPARQL endpoint for queries, read access
|
https://api.python.langchain.com/en/latest/graphs/langchain.graphs.rdf_graph.RdfGraph.html
|
83b0434785d0-1
|
update_endpoint – SPARQL endpoint for UPDATE queries, write access
standard – RDF, RDFS, or OWL
local_copy – new local copy for storing changes
load_schema() → None[source]¶
Load the graph schema information.
query(query: str) → List[rdflib.query.ResultRow][source]¶
Query the graph.
update(query: str) → None[source]¶
Update the graph.
Examples using RdfGraph¶
GraphSparqlQAChain
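A minimal sketch of local mode, assuming a Turtle file on disk (the path is a placeholder):
.. code-block:: python

    from langchain.graphs.rdf_graph import RdfGraph

    graph = RdfGraph(source_file="./example.ttl", serialization="ttl", standard="rdfs")  # placeholder path
    graph.load_schema()
    rows = graph.query("SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 5")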
|
https://api.python.langchain.com/en/latest/graphs/langchain.graphs.rdf_graph.RdfGraph.html
|
169b828d9762-0
|
langchain.graphs.hugegraph.HugeGraph¶
class langchain.graphs.hugegraph.HugeGraph(username: str = 'default', password: str = 'default', address: str = '127.0.0.1', port: int = 8081, graph: str = 'hugegraph')[source]¶
HugeGraph wrapper for graph operations
Create a new HugeGraph wrapper instance.
Attributes
get_schema
Returns the schema of the HugeGraph database
Methods
__init__([username, password, address, ...])
Create a new HugeGraph wrapper instance.
query(query)
refresh_schema()
Refreshes the HugeGraph schema information.
__init__(username: str = 'default', password: str = 'default', address: str = '127.0.0.1', port: int = 8081, graph: str = 'hugegraph') → None[source]¶
Create a new HugeGraph wrapper instance.
query(query: str) → List[Dict[str, Any]][source]¶
refresh_schema() → None[source]¶
Refreshes the HugeGraph schema information.
Examples using HugeGraph¶
HugeGraph QA Chain
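A minimal sketch, assuming a HugeGraph server on the default address and port (the Gremlin query is illustrative):
.. code-block:: python

    from langchain.graphs.hugegraph import HugeGraph

    graph = HugeGraph(address="127.0.0.1", port=8081, graph="hugegraph")  # defaults shown explicitly
    graph.refresh_schema()
    rows = graph.query("g.V().limit(5)")  # illustrative Gremlin query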
|
https://api.python.langchain.com/en/latest/graphs/langchain.graphs.hugegraph.HugeGraph.html
|
ac5bdd62b6ff-0
|
langchain.graphs.neptune_graph.NeptuneQueryException¶
class langchain.graphs.neptune_graph.NeptuneQueryException(exception: Union[str, Dict])[source]¶
A class to handle queries that fail to execute
|
https://api.python.langchain.com/en/latest/graphs/langchain.graphs.neptune_graph.NeptuneQueryException.html
|
5767e16859dc-0
|
langchain.graphs.networkx_graph.NetworkxEntityGraph¶
class langchain.graphs.networkx_graph.NetworkxEntityGraph(graph: Optional[Any] = None)[source]¶
Networkx wrapper for entity graph operations.
Create a new graph.
Methods
__init__([graph])
Create a new graph.
add_triple(knowledge_triple)
Add a triple to the graph.
clear()
Clear the graph.
delete_triple(knowledge_triple)
Delete a triple from the graph.
draw_graphviz(**kwargs)
Provides better drawing
from_gml(gml_path)
get_entity_knowledge(entity[, depth])
Get information about an entity.
get_topological_sort()
Get a list of entity names in the graph sorted by causal dependence.
get_triples()
Get all triples in the graph.
write_to_gml(path)
__init__(graph: Optional[Any] = None) → None[source]¶
Create a new graph.
add_triple(knowledge_triple: KnowledgeTriple) → None[source]¶
Add a triple to the graph.
clear() → None[source]¶
Clear the graph.
delete_triple(knowledge_triple: KnowledgeTriple) → None[source]¶
Delete a triple from the graph.
draw_graphviz(**kwargs: Any) → None[source]¶
Provides better drawing
Usage in a jupyter notebook:
>>> from IPython.display import SVG
>>> self.draw_graphviz_svg(layout="dot", filename="web.svg")
>>> SVG('web.svg')
classmethod from_gml(gml_path: str) → NetworkxEntityGraph[source]¶
get_entity_knowledge(entity: str, depth: int = 1) → List[str][source]¶
Get information about an entity.
get_topological_sort() → List[str][source]¶
|
https://api.python.langchain.com/en/latest/graphs/langchain.graphs.networkx_graph.NetworkxEntityGraph.html
|
5767e16859dc-1
|
Get a list of entity names in the graph sorted by causal dependence.
get_triples() → List[Tuple[str, str, str]][source]¶
Get all triples in the graph.
write_to_gml(path: str) → None[source]¶
Examples using NetworkxEntityGraph¶
Graph QA
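A minimal in-memory sketch using KnowledgeTriple (documented below); the triple contents are made up for illustration:
.. code-block:: python

    from langchain.graphs.networkx_graph import KnowledgeTriple, NetworkxEntityGraph

    graph = NetworkxEntityGraph()
    graph.add_triple(KnowledgeTriple("LangChain", "is written in", "Python"))  # illustrative triple
    print(graph.get_triples())
    print(graph.get_entity_knowledge("LangChain", depth=1))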
|
https://api.python.langchain.com/en/latest/graphs/langchain.graphs.networkx_graph.NetworkxEntityGraph.html
|
dd15c2fb0e53-0
|
langchain.graphs.arangodb_graph.ArangoGraph¶
class langchain.graphs.arangodb_graph.ArangoGraph(db: Any)[source]¶
ArangoDB wrapper for graph operations.
Create a new ArangoDB graph wrapper instance.
Attributes
db
schema
Methods
__init__(db)
Create a new ArangoDB graph wrapper instance.
from_db_credentials([url, dbname, username, ...])
Convenience constructor that builds Arango DB from credentials.
generate_schema([sample_ratio])
Generates the schema of the ArangoDB Database and returns it. The user can specify a sample_ratio (0 to 1) to determine the ratio of documents/edges used (in relation to the Collection size) to render each Collection Schema.
query(query[, top_k])
Query the ArangoDB database.
set_db(db)
set_schema([schema])
Set the schema of the ArangoDB Database.
__init__(db: Any) → None[source]¶
Create a new ArangoDB graph wrapper instance.
classmethod from_db_credentials(url: Optional[str] = None, dbname: Optional[str] = None, username: Optional[str] = None, password: Optional[str] = None) → Any[source]¶
Convenience constructor that builds Arango DB from credentials.
Parameters
url – Arango DB url. Can be passed in as named arg or set as environment
var ARANGODB_URL. Defaults to “http://localhost:8529”.
dbname – Arango DB name. Can be passed in as named arg or set as
environment var ARANGODB_DBNAME. Defaults to “_system”.
username – Can be passed in as named arg or set as environment var
ARANGODB_USERNAME. Defaults to “root”.
password – Can be passed in as named arg or set as environment var
ARANGODB_PASSWORD. Defaults to “”.
|
https://api.python.langchain.com/en/latest/graphs/langchain.graphs.arangodb_graph.ArangoGraph.html
|
dd15c2fb0e53-1
|
Returns
An arango.database.StandardDatabase.
generate_schema(sample_ratio: float = 0) → Dict[str, List[Dict[str, Any]]][source]¶
Generates the schema of the ArangoDB Database and returns it.
The user can specify a sample_ratio (0 to 1) to determine the
ratio of documents/edges used (in relation to the Collection size)
to render each Collection Schema.
query(query: str, top_k: Optional[int] = None, **kwargs: Any) → List[Dict[str, Any]][source]¶
Query the ArangoDB database.
set_db(db: Any) → None[source]¶
set_schema(schema: Optional[Dict[str, Any]] = None) → None[source]¶
Set the schema of the ArangoDB Database.
Auto-generates Schema if schema is None.
Examples using ArangoGraph¶
ArangoDB QA chain
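A minimal sketch combining ArangoGraph with get_arangodb_client (documented above); the connection values and collection name are placeholders:
.. code-block:: python

    from langchain.graphs.arangodb_graph import ArangoGraph, get_arangodb_client

    db = get_arangodb_client(url="http://localhost:8529", username="root", password="")  # placeholders
    graph = ArangoGraph(db)
    docs = graph.query("FOR doc IN my_collection LIMIT 5 RETURN doc")  # hypothetical collection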
|
https://api.python.langchain.com/en/latest/graphs/langchain.graphs.arangodb_graph.ArangoGraph.html
|
01dfaaf40400-0
|
langchain.graphs.networkx_graph.KnowledgeTriple¶
class langchain.graphs.networkx_graph.KnowledgeTriple(subject: str, predicate: str, object_: str)[source]¶
A triple in the graph.
Create new instance of KnowledgeTriple(subject, predicate, object_)
Attributes
object_
Alias for field number 2
predicate
Alias for field number 1
subject
Alias for field number 0
Methods
__init__()
count(value, /)
Return number of occurrences of value.
from_string(triple_string)
Create a KnowledgeTriple from a string.
index(value[, start, stop])
Return first index of value.
__init__()¶
count(value, /)¶
Return number of occurrences of value.
classmethod from_string(triple_string: str) → KnowledgeTriple[source]¶
Create a KnowledgeTriple from a string.
index(value, start=0, stop=9223372036854775807, /)¶
Return first index of value.
Raises ValueError if the value is not present.
|
https://api.python.langchain.com/en/latest/graphs/langchain.graphs.networkx_graph.KnowledgeTriple.html
|
d070bb60be8c-0
|
langchain.graphs.nebula_graph.NebulaGraph¶
class langchain.graphs.nebula_graph.NebulaGraph(space: str, username: str = 'root', password: str = 'nebula', address: str = '127.0.0.1', port: int = 9669, session_pool_size: int = 30)[source]¶
NebulaGraph wrapper for graph operations
NebulaGraph inherits methods from Neo4jGraph for ease of use.
Create a new NebulaGraph wrapper instance.
Attributes
get_schema
Returns the schema of the NebulaGraph database
Methods
__init__(space[, username, password, ...])
Create a new NebulaGraph wrapper instance.
execute(query[, params, retry])
Query NebulaGraph database.
query(query[, retry])
refresh_schema()
Refreshes the NebulaGraph schema information.
__init__(space: str, username: str = 'root', password: str = 'nebula', address: str = '127.0.0.1', port: int = 9669, session_pool_size: int = 30) → None[source]¶
Create a new NebulaGraph wrapper instance.
execute(query: str, params: dict = {}, retry: int = 0) → Any[source]¶
Query NebulaGraph database.
query(query: str, retry: int = 0) → Dict[str, Any][source]¶
refresh_schema() → None[source]¶
Refreshes the NebulaGraph schema information.
Examples using NebulaGraph¶
NebulaGraphQAChain
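A minimal sketch, assuming a NebulaGraph service with an existing space (all connection values are placeholders):
.. code-block:: python

    from langchain.graphs.nebula_graph import NebulaGraph

    graph = NebulaGraph(space="my_space", address="127.0.0.1", port=9669)  # placeholders
    graph.refresh_schema()
    result = graph.query("MATCH (v) RETURN v LIMIT 5")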
|
https://api.python.langchain.com/en/latest/graphs/langchain.graphs.nebula_graph.NebulaGraph.html
|
0124db558222-0
|
langchain.graphs.networkx_graph.get_entities¶
langchain.graphs.networkx_graph.get_entities(entity_str: str) → List[str][source]¶
Extract entities from entity string.
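A minimal sketch; the comma-separated input format is an assumption about the LLM output this helper typically parses:
.. code-block:: python

    from langchain.graphs.networkx_graph import get_entities

    # Assumed comma-separated entity string produced by an extraction prompt.
    print(get_entities("LangChain, Sam"))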
|
https://api.python.langchain.com/en/latest/graphs/langchain.graphs.networkx_graph.get_entities.html
|
9f18b1812d21-0
|
langchain.graphs.networkx_graph.parse_triples¶
langchain.graphs.networkx_graph.parse_triples(knowledge_str: str) → List[KnowledgeTriple][source]¶
Parse knowledge triples from the knowledge string.
|
https://api.python.langchain.com/en/latest/graphs/langchain.graphs.networkx_graph.parse_triples.html
|
aefff8f4b7fb-0
|
langchain.prompts.pipeline.PipelinePromptTemplate¶
class langchain.prompts.pipeline.PipelinePromptTemplate[source]¶
Bases: BasePromptTemplate
A prompt template for composing multiple prompt templates together.
This can be useful when you want to reuse parts of prompts.
A PipelinePrompt consists of two main parts:
final_prompt: This is the final prompt that is returned
pipeline_prompts: This is a list of tuples, consisting of a string (name) and a Prompt Template.
Each PromptTemplate will be formatted and then passed
to future prompt templates as a variable with
the same name as name.
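For illustration, a minimal composition might look like the sketch below; it assumes PromptTemplate.from_template (documented elsewhere) and that input_variables are inferred from the component prompts in this version:
.. code-block:: python

    from langchain.prompts.pipeline import PipelinePromptTemplate
    from langchain.prompts.prompt import PromptTemplate

    full_prompt = PromptTemplate.from_template("{introduction}\n\n{example}")
    introduction = PromptTemplate.from_template("You are impersonating {person}.")
    example = PromptTemplate.from_template("Q: {question}\nA:")

    pipeline = PipelinePromptTemplate(
        final_prompt=full_prompt,
        pipeline_prompts=[("introduction", introduction), ("example", example)],
    )
    print(pipeline.format(person="Ada Lovelace", question="What is 2 + 2?"))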
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param final_prompt: langchain.schema.prompt_template.BasePromptTemplate [Required]¶
The final prompt that is returned.
param input_variables: List[str] [Required]¶
A list of the names of the variables the prompt template expects.
param output_parser: Optional[BaseOutputParser] = None¶
How to parse the output of calling an LLM on this formatted prompt.
param partial_variables: Mapping[str, Union[str, Callable[[], str]]] [Optional]¶
param pipeline_prompts: List[Tuple[str, langchain.schema.prompt_template.BasePromptTemplate]] [Required]¶
A list of tuples, consisting of a string (name) and a Prompt Template.
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, max_concurrency: Optional[int] = None) → List[Output]¶
async ainvoke(input: Input, config: Optional[RunnableConfig] = None) → Output¶
async astream(input: Input, config: Optional[RunnableConfig] = None) → AsyncIterator[Output]¶
|
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.pipeline.PipelinePromptTemplate.html
|
aefff8f4b7fb-1
|
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, max_concurrency: Optional[int] = None) → List[Output]¶
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return dictionary representation of prompt.
format(**kwargs: Any) → str[source]¶
Format the prompt with the inputs.
Parameters
kwargs – Any arguments to be passed to the prompt template.
Returns
A formatted string.
Example:
prompt.format(variable1="foo")
format_prompt(**kwargs: Any) → PromptValue[source]¶
Create Chat Messages.
|
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.pipeline.PipelinePromptTemplate.html
|
aefff8f4b7fb-2
|
classmethod from_orm(obj: Any) → Model¶
invoke(input: Dict, config: langchain.schema.runnable.RunnableConfig | None = None) → PromptValue¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
partial(**kwargs: Union[str, Callable[[], str]]) → BasePromptTemplate¶
Return a partial of the prompt template.
save(file_path: Union[Path, str]) → None¶
Save the prompt.
Parameters
file_path – Path to directory to save prompt to.
Example:
.. code-block:: python
prompt.save(file_path="path/prompt.yaml")
|
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.pipeline.PipelinePromptTemplate.html
|
aefff8f4b7fb-3
|
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None) → Iterator[Output]¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
|
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.pipeline.PipelinePromptTemplate.html
|
4275a2ae634e-0
|
langchain.prompts.chat.BaseChatPromptTemplate¶
class langchain.prompts.chat.BaseChatPromptTemplate[source]¶
Bases: BasePromptTemplate, ABC
Base class for chat prompt templates.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param input_variables: List[str] [Required]¶
A list of the names of the variables the prompt template expects.
param output_parser: Optional[BaseOutputParser] = None¶
How to parse the output of calling an LLM on this formatted prompt.
param partial_variables: Mapping[str, Union[str, Callable[[], str]]] [Optional]¶
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, max_concurrency: Optional[int] = None) → List[Output]¶
async ainvoke(input: Input, config: Optional[RunnableConfig] = None) → Output¶
async astream(input: Input, config: Optional[RunnableConfig] = None) → AsyncIterator[Output]¶
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, max_concurrency: Optional[int] = None) → List[Output]¶
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
|
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.BaseChatPromptTemplate.html
|
4275a2ae634e-1
|
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return dictionary representation of prompt.
format(**kwargs: Any) → str[source]¶
Format the chat template into a string.
Parameters
**kwargs – keyword arguments to use for filling in template variables
in all the template messages in this chat template.
Returns
formatted string
abstract format_messages(**kwargs: Any) → List[BaseMessage][source]¶
Format kwargs into a list of messages.
format_prompt(**kwargs: Any) → PromptValue[source]¶
Format prompt. Should return a PromptValue.
Parameters
**kwargs – Keyword arguments to use for formatting.
Returns
PromptValue.
classmethod from_orm(obj: Any) → Model¶
invoke(input: Dict, config: langchain.schema.runnable.RunnableConfig | None = None) → PromptValue¶
|
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.BaseChatPromptTemplate.html
|
4275a2ae634e-2
|
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
partial(**kwargs: Union[str, Callable[[], str]]) → BasePromptTemplate¶
Return a partial of the prompt template.
save(file_path: Union[Path, str]) → None¶
Save the prompt.
Parameters
file_path – Path to directory to save prompt to.
Example:
.. code-block:: python
prompt.save(file_path="path/prompt.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
|
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.BaseChatPromptTemplate.html
|
4275a2ae634e-3
|
stream(input: Input, config: Optional[RunnableConfig] = None) → Iterator[Output]¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
|
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.BaseChatPromptTemplate.html
|
189e8fc53655-0
|
langchain.prompts.example_selector.base.BaseExampleSelector¶
class langchain.prompts.example_selector.base.BaseExampleSelector[source]¶
Interface for selecting examples to include in prompts.
Methods
__init__()
add_example(example)
Add new example to store for a key.
select_examples(input_variables)
Select which examples to use based on the inputs.
__init__()¶
abstract add_example(example: Dict[str, str]) → Any[source]¶
Add new example to store for a key.
abstract select_examples(input_variables: Dict[str, str]) → List[dict][source]¶
Select which examples to use based on the inputs.
Examples using BaseExampleSelector¶
Custom example selector
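A toy subclass sketch showing the two abstract methods; the selection strategy here (first n stored examples) is made up for illustration:
.. code-block:: python

    from typing import Any, Dict, List

    from langchain.prompts.example_selector.base import BaseExampleSelector

    class FirstNExampleSelector(BaseExampleSelector):
        """Illustrative selector that returns the first n stored examples."""

        def __init__(self, n: int = 2) -> None:
            self.n = n
            self._examples: List[dict] = []

        def add_example(self, example: Dict[str, str]) -> Any:
            self._examples.append(example)

        def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:
            return self._examples[: self.n]

    selector = FirstNExampleSelector()
    selector.add_example({"input": "hi", "output": "hello"})
    print(selector.select_examples({"input": "hey"}))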
|
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.example_selector.base.BaseExampleSelector.html
|
14581543a231-0
|
langchain.prompts.chat.MessagesPlaceholder¶
class langchain.prompts.chat.MessagesPlaceholder[source]¶
Bases: BaseMessagePromptTemplate
Prompt template that assumes variable is already list of messages.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param variable_name: str [Required]¶
Name of variable to use as messages.
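A minimal sketch of the placeholder inside a chat prompt; ChatPromptTemplate, HumanMessagePromptTemplate and the message classes are documented elsewhere and assumed available:
.. code-block:: python

    from langchain.prompts.chat import (
        ChatPromptTemplate,
        HumanMessagePromptTemplate,
        MessagesPlaceholder,
    )
    from langchain.schema import AIMessage, HumanMessage

    prompt = ChatPromptTemplate.from_messages([
        MessagesPlaceholder(variable_name="history"),
        HumanMessagePromptTemplate.from_template("{question}"),
    ])
    messages = prompt.format_messages(
        history=[HumanMessage(content="Hi"), AIMessage(content="Hello!")],
        question="What did I just say?",
    )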
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
|
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.MessagesPlaceholder.html
|
14581543a231-1
|
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
format_messages(**kwargs: Any) → List[BaseMessage][source]¶
Format messages from kwargs.
Parameters
**kwargs – Keyword arguments to use for formatting.
Returns
List of BaseMessage.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
|
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.MessagesPlaceholder.html
|
14581543a231-2
|
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property input_variables: List[str]¶
Input variables for this prompt template.
Returns
List of input variable names.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Whether this object should be serialized.
Returns
Whether this object should be serialized.
Examples using MessagesPlaceholder¶
How to add Memory to an LLMChain
Add Memory to OpenAI Functions Agent
Types of `MessagePromptTemplate`
|
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.MessagesPlaceholder.html
|
e250371b4a7c-0
|
langchain.prompts.chat.ChatMessagePromptTemplate¶
class langchain.prompts.chat.ChatMessagePromptTemplate[source]¶
Bases: BaseStringMessagePromptTemplate
Chat message prompt template.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param additional_kwargs: dict [Optional]¶
Additional keyword arguments to pass to the prompt template.
param prompt: langchain.prompts.base.StringPromptTemplate [Required]¶
String prompt template.
param role: str [Required]¶
Role of the message.
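A minimal sketch using from_template (documented below); the role value and template text are illustrative:
.. code-block:: python

    from langchain.prompts.chat import ChatMessagePromptTemplate

    message_template = ChatMessagePromptTemplate.from_template(
        "May the {subject} be with you", role="Jedi"  # illustrative role and template
    )
    print(message_template.format(subject="force"))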
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
|
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.ChatMessagePromptTemplate.html
|
e250371b4a7c-1
|
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
format(**kwargs: Any) → BaseMessage[source]¶
Format the prompt template.
Parameters
**kwargs – Keyword arguments to use for formatting.
Returns
Formatted message.
format_messages(**kwargs: Any) → List[BaseMessage]¶
Format messages from kwargs.
Parameters
**kwargs – Keyword arguments to use for formatting.
Returns
List of BaseMessages.
classmethod from_orm(obj: Any) → Model¶
classmethod from_template(template: str, template_format: str = 'f-string', **kwargs: Any) → MessagePromptTemplateT¶
Create a class from a string template.
Parameters
template – a template.
template_format – format of the template.
**kwargs – keyword arguments to pass to the constructor.
Returns
A new instance of this class.
classmethod from_template_file(template_file: Union[str, Path], input_variables: List[str], **kwargs: Any) → MessagePromptTemplateT¶
Create a class from a template file.
Parameters
template_file – path to a template file. String or Path.
input_variables – list of input variables.
**kwargs – keyword arguments to pass to the constructor.
Returns
A new instance of this class.
|
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.ChatMessagePromptTemplate.html
|
e250371b4a7c-2
|
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property input_variables: List[str]¶
|
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.ChatMessagePromptTemplate.html
|
e250371b4a7c-3
|
Input variables for this prompt template.
Returns
List of input variable names.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Whether this object should be serialized.
Returns
Whether this object should be serialized.
Examples using ChatMessagePromptTemplate¶
Types of `MessagePromptTemplate`
|
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.ChatMessagePromptTemplate.html
|
025a24d98247-0
|
langchain.prompts.example_selector.ngram_overlap.NGramOverlapExampleSelector¶
class langchain.prompts.example_selector.ngram_overlap.NGramOverlapExampleSelector[source]¶
Bases: BaseExampleSelector, BaseModel
Select and order examples based on ngram overlap score (sentence_bleu score).
https://www.nltk.org/_modules/nltk/translate/bleu_score.html
https://aclanthology.org/P02-1040.pdf
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param example_prompt: langchain.prompts.prompt.PromptTemplate [Required]¶
Prompt template used to format the examples.
param examples: List[dict] [Required]¶
A list of the examples that the prompt template expects.
param threshold: float = -1.0¶
Threshold at which algorithm stops. Set to -1.0 by default.
For negative threshold:
select_examples sorts examples by ngram_overlap_score, but excludes none.
For threshold greater than 1.0:
select_examples excludes all examples, and returns an empty list.
For threshold equal to 0.0:
select_examples sorts examples by ngram_overlap_score,
and excludes examples with no ngram overlap with input.
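A minimal sketch, assuming the nltk dependency used for the BLEU score is installed; the translation examples are illustrative:
.. code-block:: python

    from langchain.prompts.example_selector.ngram_overlap import NGramOverlapExampleSelector
    from langchain.prompts.prompt import PromptTemplate

    example_prompt = PromptTemplate(
        input_variables=["input", "output"],
        template="Input: {input}\nOutput: {output}",
    )
    examples = [
        {"input": "See Spot run.", "output": "Ve a Spot correr."},
        {"input": "My dog barks.", "output": "Mi perro ladra."},
    ]
    selector = NGramOverlapExampleSelector(
        examples=examples,
        example_prompt=example_prompt,
        threshold=-1.0,  # sort by overlap but exclude none
    )
    print(selector.select_examples({"sentence": "Spot can run fast."}))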
add_example(example: Dict[str, str]) → None[source]¶
Add new example to list.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.example_selector.ngram_overlap.NGramOverlapExampleSelector.html
025a24d98247-1
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.example_selector.ngram_overlap.NGramOverlapExampleSelector.html
025a24d98247-2
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
select_examples(input_variables: Dict[str, str]) → List[dict][source]¶
Return list of examples sorted by ngram_overlap_score with input.
Descending order.
Excludes any examples with ngram_overlap_score less than or equal to threshold.
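Continuing the illustrative sketch above, select_examples takes a dict of input variables and returns the matching examples in descending overlap order:
selected = selector.select_examples({"input": "Spot can run fast."})
for example in selected:
    print(example_prompt.format(**example))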
classmethod update_forward_refs(**localns: Any) → None¶
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.example_selector.ngram_overlap.NGramOverlapExampleSelector.html
025a24d98247-3
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
Examples using NGramOverlapExampleSelector¶
Select by n-gram overlap
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.example_selector.ngram_overlap.NGramOverlapExampleSelector.html
d3de2c9f1b3c-0
langchain.prompts.chat.ChatPromptTemplate¶
class langchain.prompts.chat.ChatPromptTemplate[source]¶
Bases: BaseChatPromptTemplate, ABC
A prompt template for chat models.
Use to create flexible templated prompts for chat models.
Examples
from langchain.prompts import ChatPromptTemplate
template = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful AI bot. Your name is {name}."),
    ("human", "Hello, how are you doing?"),
    ("ai", "I'm doing well, thanks!"),
    ("human", "{user_input}"),
])
messages = template.format_messages(
    name="Bob",
    user_input="What is your name?"
)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param input_variables: List[str] [Required]¶
List of input variables in template messages. Used for validation.
param messages: List[Union[BaseMessagePromptTemplate, BaseMessage, BaseChatPromptTemplate]] [Required]¶
List of messages consisting of either message prompt templates or messages.
param output_parser: Optional[BaseOutputParser] = None¶
How to parse the output of calling an LLM on this formatted prompt.
param partial_variables: Mapping[str, Union[str, Callable[[], str]]] [Optional]¶
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, max_concurrency: Optional[int] = None) → List[Output]¶
async ainvoke(input: Input, config: Optional[RunnableConfig] = None) → Output¶
async astream(input: Input, config: Optional[RunnableConfig] = None) → AsyncIterator[Output]¶
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.ChatPromptTemplate.html
d3de2c9f1b3c-1
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, max_concurrency: Optional[int] = None) → List[Output]¶
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return dictionary representation of prompt.
format(**kwargs: Any) → str[source]¶
Format the chat template into a string.
Parameters
**kwargs – keyword arguments to use for filling in template variables
in all the template messages in this chat template.
Returns
formatted string
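For instance, reusing the template built in the class-level example above (a sketch, not from the reference):
text = template.format(name="Bob", user_input="What is your name?")
# `text` is a single string; the individual chat messages are collapsed together.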
format_messages(**kwargs: Any) → List[BaseMessage][source]¶
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.ChatPromptTemplate.html
d3de2c9f1b3c-2
Format the chat template into a list of finalized messages.
Parameters
**kwargs – keyword arguments to use for filling in template variables
in all the template messages in this chat template.
Returns
list of formatted messages
format_prompt(**kwargs: Any) → PromptValue¶
Format prompt. Should return a PromptValue.
Parameters
**kwargs – Keyword arguments to use for formatting.
Returns
PromptValue.
classmethod from_messages(messages: Sequence[Union[BaseMessagePromptTemplate, BaseChatPromptTemplate, BaseMessage, Tuple[str, str], Tuple[Type, str], str]]) → ChatPromptTemplate[source]¶
Create a chat prompt template from a variety of message formats.
Examples
Instantiation from a list of message templates:
template = ChatPromptTemplate.from_messages([
    ("human", "Hello, how are you?"),
    ("ai", "I'm doing well, thanks!"),
    ("human", "That's good to hear."),
])
Instantiation from mixed message formats:
template = ChatPromptTemplate.from_messages([
    SystemMessage(content="hello"),
    ("human", "Hello, how are you?"),
])
Parameters
messages – sequence of message representations.
A message can be represented using the following formats:
(1) BaseMessagePromptTemplate, (2) BaseMessage, (3) 2-tuple of
(message type, template); e.g., (“human”, “{user_input}”),
(4) 2-tuple of (message class, template), (5) a string which is
shorthand for (“human”, template); e.g., “{user_input}”
Returns
a chat prompt template
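A minimal sketch covering each accepted representation; the template text and variable names are illustrative assumptions, not from the reference:
from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.schema import SystemMessage

template = ChatPromptTemplate.from_messages([
    SystemMessage(content="You answer concisely."),          # (2) a plain BaseMessage
    HumanMessagePromptTemplate.from_template("{question}"),  # (1) a BaseMessagePromptTemplate
    ("ai", "Sure, let me think about that."),                # (3) a (message type, template) tuple
    "{follow_up}",                                           # (5) a string, shorthand for ("human", "{follow_up}")
])
messages = template.format_messages(question="What is 2 + 2?", follow_up="Explain briefly.")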
classmethod from_orm(obj: Any) → Model¶
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.ChatPromptTemplate.html
d3de2c9f1b3c-3
classmethod from_role_strings(string_messages: List[Tuple[str, str]]) → ChatPromptTemplate[source]¶
Create a chat prompt template from a list of (role, template) tuples.
Parameters
string_messages – list of (role, template) tuples.
Returns
a chat prompt template
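A brief illustrative sketch; the role and template strings are assumptions, and each tuple is treated as a role string plus a template:
from langchain.prompts import ChatPromptTemplate

template = ChatPromptTemplate.from_role_strings([
    ("system", "You are a {profession}."),
    ("human", "{question}"),
])
messages = template.format_messages(profession="historian", question="Who built the pyramids?")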
classmethod from_strings(string_messages: List[Tuple[Type[BaseMessagePromptTemplate], str]]) → ChatPromptTemplate[source]¶
Create a chat prompt template from a list of (role class, template) tuples.
Parameters
string_messages – list of (role class, template) tuples.
Returns
a chat prompt template
classmethod from_template(template: str, **kwargs: Any) → ChatPromptTemplate[source]¶
Create a chat prompt template from a template string.
Creates a chat template consisting of a single message assumed to be from
the human.
Parameters
template – template string
**kwargs – keyword arguments to pass to the constructor.
Returns
A new instance of this class.
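For example, a sketch of a single-message chat template; the template text is illustrative:
from langchain.prompts import ChatPromptTemplate

template = ChatPromptTemplate.from_template("Summarize the following text: {text}")
# The single message is assumed to come from the human.
messages = template.format_messages(text="LangChain provides prompt templates for chat models.")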
invoke(input: Dict, config: langchain.schema.runnable.RunnableConfig | None = None) → PromptValue¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.ChatPromptTemplate.html
d3de2c9f1b3c-4
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
partial(**kwargs: Union[str, Callable[[], str]]) → ChatPromptTemplate[source]¶
Return a new ChatPromptTemplate with some of the input variables already
filled in.
Parameters
**kwargs – keyword arguments to use for filling in template variables. Ought
to be a subset of the input variables.
Returns
A new ChatPromptTemplate.
Example
from langchain.prompts import ChatPromptTemplate
template = ChatPromptTemplate.from_messages(
    [
        ("system", "You are an AI assistant named {name}."),
        ("human", "Hi I'm {user}"),
        ("ai", "Hi there, {user}, I'm {name}."),
        ("human", "{input}"),
    ]
)
template2 = template.partial(user="Lucy", name="R2D2")
template2.format_messages(input="hello")
save(file_path: Union[Path, str]) → None[source]¶
Save prompt to file.
Parameters
file_path – path to file.
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None) → Iterator[Output]¶
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.ChatPromptTemplate.html
d3de2c9f1b3c-5
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
Examples using ChatPromptTemplate¶
Anthropic
OpenAI
Google Cloud Platform Vertex AI PaLM
JinaChat
Context
OpenAI Functions Metadata Tagger
Figma
Tagging
Structure answers with OpenAI functions
Extraction with OpenAI Functions
Multi-agent authoritarian speaker selection
How to add Memory to an LLMChain
Retry parser
Pydantic (JSON) parser
Few shot examples for chat models
Prompt Pipelining
Using OpenAI functions
Extraction
Retrieval QA using OpenAI functions
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.ChatPromptTemplate.html
34d4762170eb-0
langchain.prompts.chat.AIMessagePromptTemplate¶
class langchain.prompts.chat.AIMessagePromptTemplate[source]¶
Bases: BaseStringMessagePromptTemplate
AI message prompt template. This is a message sent from the AI.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param additional_kwargs: dict [Optional]¶
Additional keyword arguments to pass to the prompt template.
param prompt: langchain.prompts.base.StringPromptTemplate [Required]¶
String prompt template.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.AIMessagePromptTemplate.html
34d4762170eb-1
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
format(**kwargs: Any) → BaseMessage[source]¶
Format the prompt template.
Parameters
**kwargs – Keyword arguments to use for formatting.
Returns
Formatted message.
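A minimal illustrative sketch; the template string is an assumption made for this example:
from langchain.prompts import AIMessagePromptTemplate

ai_template = AIMessagePromptTemplate.from_template("The answer to your question is {answer}.")
ai_message = ai_template.format(answer="42")  # returns an AIMessage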
format_messages(**kwargs: Any) → List[BaseMessage]¶
Format messages from kwargs.
Parameters
**kwargs – Keyword arguments to use for formatting.
Returns
List of BaseMessages.
classmethod from_orm(obj: Any) → Model¶
classmethod from_template(template: str, template_format: str = 'f-string', **kwargs: Any) → MessagePromptTemplateT¶
Create a class from a string template.
Parameters
template – a template.
template_format – format of the template.
**kwargs – keyword arguments to pass to the constructor.
Returns
A new instance of this class.
classmethod from_template_file(template_file: Union[str, Path], input_variables: List[str], **kwargs: Any) → MessagePromptTemplateT¶
Create a class from a template file.
Parameters
template_file – path to a template file. String or Path.
input_variables – list of input variables.
**kwargs – keyword arguments to pass to the constructor.
Returns
A new instance of this class.
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.AIMessagePromptTemplate.html
34d4762170eb-2
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property input_variables: List[str]¶
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.AIMessagePromptTemplate.html
34d4762170eb-3
Input variables for this prompt template.
Returns
List of input variable names.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Whether this object should be serialized.
Returns
Whether this object should be serialized.
Examples using AIMessagePromptTemplate¶
Anthropic
OpenAI
JinaChat
Figma
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.AIMessagePromptTemplate.html
a161b033831c-0
langchain.prompts.base.StringPromptTemplate¶
class langchain.prompts.base.StringPromptTemplate[source]¶
Bases: BasePromptTemplate, ABC
String prompt that exposes the format method, returning a prompt.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param input_variables: List[str] [Required]¶
A list of the names of the variables the prompt template expects.
param output_parser: Optional[BaseOutputParser] = None¶
How to parse the output of calling an LLM on this formatted prompt.
param partial_variables: Mapping[str, Union[str, Callable[[], str]]] [Optional]¶
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, max_concurrency: Optional[int] = None) → List[Output]¶
async ainvoke(input: Input, config: Optional[RunnableConfig] = None) → Output¶
async astream(input: Input, config: Optional[RunnableConfig] = None) → AsyncIterator[Output]¶
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, max_concurrency: Optional[int] = None) → List[Output]¶
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.base.StringPromptTemplate.html
a161b033831c-1
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return dictionary representation of prompt.
abstract format(**kwargs: Any) → str¶
Format the prompt with the inputs.
Parameters
kwargs – Any arguments to be passed to the prompt template.
Returns
A formatted string.
Example:
prompt.format(variable1="foo")
format_prompt(**kwargs: Any) → PromptValue[source]¶
Format the prompt with the inputs and return a PromptValue.
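Because format is abstract, StringPromptTemplate is used by subclassing; a hypothetical sketch follows, where the subclass name and template text are assumptions and not part of the library:
from typing import Any

from langchain.prompts.base import StringPromptTemplate

class TopicPromptTemplate(StringPromptTemplate):
    """Illustrative subclass that implements the abstract format method."""

    def format(self, **kwargs: Any) -> str:
        return f"Write a short note about {kwargs['topic']}."

prompt = TopicPromptTemplate(input_variables=["topic"])
print(prompt.format(topic="prompt templates"))         # a plain string
print(prompt.format_prompt(topic="prompt templates"))  # wrapped in a PromptValue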
classmethod from_orm(obj: Any) → Model¶
invoke(input: Dict, config: langchain.schema.runnable.RunnableConfig | None = None) → PromptValue¶
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.base.StringPromptTemplate.html