| id | text | source |
|---|---|---|
6243b12ca803-4 | Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, on... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.ProposePromptStrategy.html |
6243b12ca803-5 | Subclasses should override this method if they can run asynchronously.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Utilize the LLM generate method for speed gains.
apply_and_parse(input_list: List[Dict[str, Any]], cal... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.ProposePromptStrategy.html |
6243b12ca803-6 | The main difference between this method and Chain.__call__ is that this
method expects inputs to be passed directly in as positional arguments or
keyword arguments, whereas Chain.__call__ expects a single input dictionary
with all the inputs
Parameters
*args – If the chain expects a single input, it can be passed in as... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.ProposePromptStrategy.html |
6243b12ca803-7 | Subclasses should override this method if they support streaming output.
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names:... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.ProposePromptStrategy.html |
6243b12ca803-8 | e.g., if the underlying runnable uses an API which supports a batch mode.
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶
The type of config this runnable accepts specified as a pydantic m... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.ProposePromptStrategy.html |
6243b12ca803-9 | exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
create_ou... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.ProposePromptStrategy.html |
6243b12ca803-10 | classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to v... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.ProposePromptStrategy.html |
6243b12ca803-11 | classmethod is_lc_serializable() → bool¶
Is this class serializable?
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defa... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.ProposePromptStrategy.html |
6243b12ca803-12 | predict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Format prompt with kwargs and pass to LLM.
Parameters
callbacks – Callbacks to pass to LLMChain
**kwargs – Keys to pass to prompt template.
Returns
Completion from LLM.
Example
completion = llm.predict(adjec... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.ProposePromptStrategy.html |
6243b12ca803-13 | Prepare prompts from inputs.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this method and Chain.__c... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.ProposePromptStrategy.html |
6243b12ca803-14 | save(file_path: Union[Path, str]) → None¶
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definiti... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.ProposePromptStrategy.html |
6243b12ca803-15 | Add fallbacks to a runnable, returning a new Runnable.
Parameters
fallbacks – A sequence of runnables to try if the original runnable fails.
exceptions_to_handle – A tuple of exception types to handle.
Returns
A new Runnable that will try the original runnable, and then each
fallback in order, upon failures.
with_liste... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.ProposePromptStrategy.html |
6243b12ca803-16 | Bind input and output types to a Runnable, returning a new Runnable.
property InputType: Type[langchain.schema.runnable.utils.Input]¶
The type of input this runnable accepts specified as a type annotation.
property OutputType: Type[langchain.schema.runnable.utils.Output]¶
The type of output this runnable produces speci... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.ProposePromptStrategy.html |
d6eedc0502c1-0 | langchain_experimental.tot.prompts.JSONListOutputParser¶
class langchain_experimental.tot.prompts.JSONListOutputParser[source]¶
Bases: BaseOutputParser
Class to parse the output of a PROPOSE_PROMPT response.
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, retur... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.prompts.JSONListOutputParser.html |
d6eedc0502c1-1 | to be different candidate outputs for a single model input.
Returns
Structured output.
async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support str... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.prompts.JSONListOutputParser.html |
d6eedc0502c1-2 | Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
bind(**kwargs: Any) → R... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.prompts.JSONListOutputParser.html |
d6eedc0502c1-3 | Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creat... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.prompts.JSONListOutputParser.html |
d6eedc0502c1-4 | methods will have a dynamic output schema that depends on which
configuration the runnable is invoked with.
This method allows getting an output schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate output.
invoke(input:... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.prompts.JSONListOutputParser.html |
d6eedc0502c1-5 | The unique identifier is a list of strings that describes the path
to the object.
map() → Runnable[List[Input], List[Output]]¶
Return a new Runnable that maps a list of inputs to a list of outputs,
by calling invoke() with each input.
parse(text: str) → List[str][source]¶
Parse the output of an LLM call.
classmethod pa... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.prompts.JSONListOutputParser.html |
d6eedc0502c1-6 | classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of stream, which calls invoke.
Subclasses should override t... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.prompts.JSONListOutputParser.html |
d6eedc0502c1-7 | fallback in order, upon failures.
with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶
Bind lifecycle listeners to a Runnable, returning a new Runnable.
on_start: Called before the runnable starts running, with the Run ... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.prompts.JSONListOutputParser.html |
d6eedc0502c1-8 | The type of output this runnable produces specified as a type annotation.
property config_specs: List[langchain.schema.runnable.utils.ConfigurableFieldSpec]¶
List configurable fields for this runnable.
property input_schema: Type[pydantic.main.BaseModel]¶
The type of input this runnable accepts specified as a pydantic ... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.prompts.JSONListOutputParser.html |
5ae854727651-0 | langchain_experimental.tot.controller.ToTController¶
class langchain_experimental.tot.controller.ToTController(c: int = 3)[source]¶
Tree of Thought (ToT) controller.
This is a version of a ToT controller, dubbed in the paper as a “Simple
Controller”.
It has one parameter c which is the number of children to explore for... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.controller.ToTController.html |
bbfca53a6a26-0 | langchain_experimental.tot.memory.ToTDFSMemory¶
class langchain_experimental.tot.memory.ToTDFSMemory(stack: Optional[List[Thought]] = None)[source]¶
Memory for the Tree of Thought (ToT) chain. Implemented as a stack of
thoughts. This allows for a depth first search (DFS) of the ToT.
Attributes
level
Return the current ... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.memory.ToTDFSMemory.html |
6b38c496bf04-0 | langchain_experimental.tot.thought_generation.SampleCoTStrategy¶
class langchain_experimental.tot.thought_generation.SampleCoTStrategy[source]¶
Bases: BaseThoughtGenerationStrategy
Sample thoughts from a Chain-of-Thought (CoT) prompt.
This strategy works better when the thought space is rich, such as when each
thought ... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.SampleCoTStrategy.html |
6b38c496bf04-1 | Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a chain with its use case.
param output_key: str = 'text'¶
param output_pars... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.SampleCoTStrategy.html |
6b38c496bf04-2 | accessible via langchain.globals.get_verbose().
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = Non... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.SampleCoTStrategy.html |
6b38c496bf04-3 | Utilize the LLM generate method for speed gains.
async aapply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Sequence[Union[str, List[str], Dict[str, str]]]¶
Call apply and then parse the results.
async abatch(inputs: List[Input], config:... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.SampleCoTStrategy.html |
6b38c496bf04-4 | addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to c... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.SampleCoTStrategy.html |
6b38c496bf04-5 | Format prompt with kwargs and pass to LLM.
Parameters
callbacks – Callbacks to pass to LLMChain
**kwargs – Keys to pass to prompt template.
Returns
Completion from LLM.
Example
completion = llm.predict(adjective="funny")
async apredict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.SampleCoTStrategy.html |
6b38c496bf04-6 | **kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we ha... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.SampleCoTStrategy.html |
6b38c496bf04-7 | step, and the final state of the run.
The jsonpatch ops can be applied in order to construct state.
async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of atransform, which buffers input and calls astream.
Subcla... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.SampleCoTStrategy.html |
6b38c496bf04-8 | classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.SampleCoTStrategy.html |
6b38c496bf04-9 | Create LLMChain from LLM and template.
generate(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) → LLMResult¶
Generate LLM result from inputs.
get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate input t... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.SampleCoTStrategy.html |
6b38c496bf04-10 | Parameters
input – The input to the runnable.
config – A config to use when invoking the runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing
purposes, ‘max_concurrency’ for controlling how much work to do
in parallel, and other keys. Please refer to the RunnableConfig
for more details.
Retur... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.SampleCoTStrategy.html |
6b38c496bf04-11 | Generate the next thought given the problem description and the thoughts
generated so far.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_r... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.SampleCoTStrategy.html |
6b38c496bf04-12 | Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain
memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False,
inputs are also added to the fi... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.SampleCoTStrategy.html |
6b38c496bf04-13 | Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
cont... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.SampleCoTStrategy.html |
6b38c496bf04-14 | classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶
Bind config to a Runnable, returning a new Runna... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.SampleCoTStrategy.html |
6b38c496bf04-15 | added to the run.
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶
Create a new Runnable that retries the original runnable on exceptions.
Parameters
retry_if_exc... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.SampleCoTStrategy.html |
6b38c496bf04-16 | property output_schema: Type[pydantic.main.BaseModel]¶
The type of output this runnable produces specified as a pydantic model. | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.SampleCoTStrategy.html |
0278d5fa14f0-0 | langchain_experimental.tot.base.ToTChain¶
class langchain_experimental.tot.base.ToTChain[source]¶
Bases: Chain
A Chain implementing the Tree of Thought (ToT).
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.base.ToTChain.html |
0278d5fa14f0-1 | and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a chain with its use case.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and pass... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.base.ToTChain.html |
0278d5fa14f0-2 | memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addi... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.base.ToTChain.html |
0278d5fa14f0-3 | Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, on... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.base.ToTChain.html |
0278d5fa14f0-4 | Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this ... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.base.ToTChain.html |
0278d5fa14f0-5 | # -> "The temperature in Boise is..."
async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
async astream_log(input: Any, conf... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.base.ToTChain.html |
0278d5fa14f0-6 | Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
bind(**kwargs: Any) → R... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.base.ToTChain.html |
0278d5fa14f0-7 | Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creat... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.base.ToTChain.html |
0278d5fa14f0-8 | Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate output to the runnable.
Runnables th... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.base.ToTChain.html |
0278d5fa14f0-9 | classmethod is_lc_serializable() → bool¶
Is this class serializable?
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defa... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.base.ToTChain.html |
0278d5fa14f0-10 | Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
Returns
A dictionary of all inputs, inc... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.base.ToTChain.html |
0278d5fa14f0-11 | addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output.
Example
# Suppose we have a single-input chain that takes a 'que... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.base.ToTChain.html |
0278d5fa14f0-12 | to_json_not_implemented() → SerializedNotImplemented¶
transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.base.ToTChain.html |
0278d5fa14f0-13 | The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and any tags or metadata
added to the run.
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_af... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.base.ToTChain.html |
0278d5fa14f0-14 | A map of constructor argument names to secret ids.
For example, {“openai_api_key”: “OPENAI_API_KEY”}
property output_schema: Type[pydantic.main.BaseModel]¶
The type of output this runnable produces specified as a pydantic model. | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.base.ToTChain.html |
0d20df055e77-0 | langchain_experimental.tot.prompts.CheckerOutputParser¶
class langchain_experimental.tot.prompts.CheckerOutputParser[source]¶
Bases: BaseOutputParser
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → Lis... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.prompts.CheckerOutputParser.html |
0d20df055e77-1 | Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, incl... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.prompts.CheckerOutputParser.html |
0d20df055e77-2 | Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
config_schema(*, include: Optional[Sequence[str]] = None) → Type[Bas... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.prompts.CheckerOutputParser.html |
0d20df055e77-3 | Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep co... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.prompts.CheckerOutputParser.html |
0d20df055e77-4 | This method allows to get an output schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate output.
invoke(input: Union[str, BaseMessage], config: Optional[RunnableConfig] = None) → T¶
Transform a single input into an out... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.prompts.CheckerOutputParser.html |
0d20df055e77-5 | Return a new Runnable that maps a list of inputs to a list of outputs,
by calling invoke() with each input.
parse(text: str) → ThoughtValidity[source]¶
Parse the output of the language model.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = No... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.prompts.CheckerOutputParser.html |
0d20df055e77-6 | stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of stream, which calls invoke.
Subclasses should override this method if they support streaming output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implem... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.prompts.CheckerOutputParser.html |
0d20df055e77-7 | Bind lifecycle listeners to a Runnable, returning a new Runnable.
on_start: Called before the runnable starts running, with the Run object.
on_end: Called after the runnable finishes running, with the Run object.
on_error: Called if the runnable throws an error, with the Run object.
The Run object contains information ... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.prompts.CheckerOutputParser.html |
0d20df055e77-8 | The type of input this runnable accepts specified as a pydantic model.
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.prompts.CheckerOutputParser.html |
9a3f93df3a5a-0 | langchain_experimental.tot.checker.ToTChecker¶
class langchain_experimental.tot.checker.ToTChecker[source]¶
Bases: Chain, ABC
Tree of Thought (ToT) checker.
This is an abstract ToT checker that must be implemented by the user. You
can implement a simple rule-based checker or a more sophisticated
neural network based cl... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.checker.ToTChecker.html |
9a3f93df3a5a-1 | Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not r... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.checker.ToTChecker.html |
9a3f93df3a5a-2 | metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, ... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.checker.ToTChecker.html |
9a3f93df3a5a-3 | these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the ... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.checker.ToTChecker.html |
9a3f93df3a5a-4 | callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the c... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.checker.ToTChecker.html |
9a3f93df3a5a-5 | Subclasses should override this method if they support streaming output.
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names:... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.checker.ToTChecker.html |
9a3f93df3a5a-6 | e.g., if the underlying runnable uses an API which supports a batch mode.
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶
The type of config this runnable accepts specified as a pydantic m... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.checker.ToTChecker.html |
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kw... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.checker.ToTChecker.html |
namespace is [“langchain”, “llms”, “openai”]
get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate output to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic output schema tha... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.checker.ToTChecker.html |
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a ... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.checker.ToTChecker.html |
memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False,
inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags:... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.checker.ToTChecker.html |
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None¶
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to s... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.checker.ToTChecker.html |
Bind config to a Runnable, returning a new Runnable.
with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶
Add fallbacks to a runnable, returning a new Runnable.
Parameters
fallbacks – A se... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.checker.ToTChecker.html |
between retries
stop_after_attempt – The maximum number of attempts to make before giving up
Returns
A new Runnable that retries the original runnable on exceptions.
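The retry contract described above (bounded attempts, an optional wait between them, re-raising once the budget is exhausted) can be sketched in plain Python. This is an illustrative sketch of the semantics, not LangChain's implementation; the parameter names follow the documentation above, `flaky` is a hypothetical example function:

```python
import time

def with_retry(fn, *, stop_after_attempt=3, wait_seconds=0.0,
               retry_if_exception_type=(Exception,)):
    """Wrap fn so it is retried up to stop_after_attempt times."""
    def wrapped(value):
        for attempt in range(1, stop_after_attempt + 1):
            try:
                return fn(value)
            except retry_if_exception_type:
                if attempt == stop_after_attempt:
                    raise  # give up: re-raise after the final attempt
                time.sleep(wait_seconds)  # wait between retries
    return wrapped

calls = {"n": 0}
def flaky(x):
    # Fails twice, then succeeds: simulates a transient API error.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return x * 2

safe = with_retry(flaky, stop_after_attempt=3)
print(safe(21))  # succeeds on the third attempt
```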
with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶
Bind input and output types... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.checker.ToTChecker.html |
langchain_experimental.tot.thought.Thought¶
class langchain_experimental.tot.thought.Thought[source]¶
Bases: BaseModel
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param children: Set[langchain_experiment... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought.Thought.html |
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, ex... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought.Thought.html |
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought.Thought.html |
langchain_experimental.tot.thought.ThoughtValidity¶
class langchain_experimental.tot.thought.ThoughtValidity(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶
VALID_INTERMEDIATE = 0¶
VALID_FINAL = 1¶
INVALID = 2¶ | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought.ThoughtValidity.html |
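These three states drive Tree-of-Thoughts search: an intermediate thought may be expanded further, a final thought ends the search, and an invalid thought is pruned. A toy sketch of a checker returning these states (the enum is mirrored locally for self-containment; `classify` is a hypothetical example, not the library's ToTChecker logic):

```python
from enum import Enum

class ThoughtValidity(Enum):
    # Mirrors the three states documented above.
    VALID_INTERMEDIATE = 0  # partial progress: keep expanding this branch
    VALID_FINAL = 1         # a complete solution: stop the search
    INVALID = 2             # a dead end: prune this branch

def classify(target: str, thought: str) -> ThoughtValidity:
    """Toy checker: treat a thought as a prefix of the target string."""
    if thought == target:
        return ThoughtValidity.VALID_FINAL
    if target.startswith(thought):
        return ThoughtValidity.VALID_INTERMEDIATE
    return ThoughtValidity.INVALID

print(classify("1234", "12"))    # VALID_INTERMEDIATE
print(classify("1234", "1234"))  # VALID_FINAL
print(classify("1234", "15"))    # INVALID
```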
langchain_experimental.tot.thought_generation.BaseThoughtGenerationStrategy¶
class langchain_experimental.tot.thought_generation.BaseThoughtGenerationStrategy[source]¶
Bases: LLMChain
Base class for a thought generation strategy.
Create a new model by parsing and validating input data from keyword arguments.
Raises Val... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.BaseThoughtGenerationStrategy.html |
You can use these to, e.g., identify a specific instance of a chain with its use case.
param output_parser: BaseLLMOutputParser [Optional]¶
Output parser to use.
Defaults to one that takes the most likely string but does not change it
otherwise.
param prompt: BasePromptTemplate [Required]¶
Prompt object to use.
param retur... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.BaseThoughtGenerationStrategy.html |
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.BaseThoughtGenerationStrategy.html |
e.g., if the underlying runnable uses an API which supports a batch mode.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, ... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.BaseThoughtGenerationStrategy.html |
Generate LLM result from inputs.
async ainvoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None, **kwargs: Any) → Dict[str, Any]¶
Default implementation of ainvoke, calls invoke from a thread.
The default implementation allows usage of async code even if
the runnable did not implement a native async versi... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.BaseThoughtGenerationStrategy.html |
Prepare prompts from inputs.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this method and Ch... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.BaseThoughtGenerationStrategy.html |
# -> "The temperature in Boise is..."
async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
async astream_log(input: Any, conf... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.BaseThoughtGenerationStrategy.html |
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
bind(**kwargs: Any) → R... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.BaseThoughtGenerationStrategy.html |
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creat... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.BaseThoughtGenerationStrategy.html |
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate input.
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “open... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.BaseThoughtGenerationStrategy.html |
classmethod is_lc_serializable() → bool¶
Is this class serializable?
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defa... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.BaseThoughtGenerationStrategy.html |
predict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Format prompt with kwargs and pass to LLM.
Parameters
callbacks – Callbacks to pass to LLMChain
**kwargs – Keys to pass to prompt template.
Returns
Completion from LLM.
Example
completion = llm.predict(adjec... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.BaseThoughtGenerationStrategy.html |
Prepare prompts from inputs.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this method and Chain.__c... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.BaseThoughtGenerationStrategy.html |
save(file_path: Union[Path, str]) → None¶
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definiti... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.BaseThoughtGenerationStrategy.html |
Add fallbacks to a runnable, returning a new Runnable.
Parameters
fallbacks – A sequence of runnables to try if the original runnable fails.
exceptions_to_handle – A tuple of exception types to handle.
Returns
A new Runnable that will try the original runnable, and then each
fallback in order, upon failures.
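The fallback contract described above (try the original, then each fallback in order, handling only the listed exception types) can be sketched with plain functions. This is an illustrative sketch, not LangChain's implementation; `broken` and `backup` are hypothetical stand-ins for a primary and a fallback model:

```python
def with_fallbacks(primary, fallbacks, exceptions_to_handle=(Exception,)):
    """Try primary, then each fallback in order, upon handled failures."""
    def invoke(value):
        first_error = None
        for fn in (primary, *fallbacks):
            try:
                return fn(value)
            except exceptions_to_handle as exc:
                if first_error is None:
                    first_error = exc
        raise first_error  # every candidate failed: surface the first error
    return invoke

def broken(x):
    # Simulates an unavailable primary model.
    raise ConnectionError("primary model unavailable")

def backup(x):
    return f"backup answer for {x!r}"

chain = with_fallbacks(broken, [backup])
print(chain("What is 2+2?"))  # served by the fallback
```

Note that exceptions outside `exceptions_to_handle` propagate immediately, which matches the documented purpose of that parameter.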
with_liste... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.BaseThoughtGenerationStrategy.html |
Bind input and output types to a Runnable, returning a new Runnable.
property InputType: Type[langchain.schema.runnable.utils.Input]¶
The type of input this runnable accepts specified as a type annotation.
property OutputType: Type[langchain.schema.runnable.utils.Output]¶
The type of output this runnable produces speci... | lang/api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.BaseThoughtGenerationStrategy.html |
langchain.utils.math.cosine_similarity_top_k¶
langchain.utils.math.cosine_similarity_top_k(X: Union[List[List[float]], List[ndarray], ndarray], Y: Union[List[List[float]], List[ndarray], ndarray], top_k: Optional[int] = 5, score_threshold: Optional[float] = None) → Tuple[List[Tuple[int, int]], List[float]][source]¶
Row... | lang/api.python.langchain.com/en/latest/utils/langchain.utils.math.cosine_similarity_top_k.html |
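The documented semantics (score every row of X against every row of Y by cosine similarity, keep the `top_k` highest-scoring `(row, column)` index pairs, optionally filtered by `score_threshold`) can be illustrated in pure Python. This is a sketch of the behavior under those assumptions, not the library's implementation, which is vectorized with NumPy:

```python
import math

def cosine_similarity_top_k(X, Y, top_k=5, score_threshold=None):
    """Return (index pairs, scores) for the top_k most similar row pairs."""
    def cos(a, b):
        dot = sum(p * q for p, q in zip(a, b))
        na = math.sqrt(sum(p * p for p in a))
        nb = math.sqrt(sum(q * q for q in b))
        return dot / (na * nb) if na and nb else 0.0

    # Score every (row of X, row of Y) pair.
    scored = [((i, j), cos(x, y))
              for i, x in enumerate(X) for j, y in enumerate(Y)]
    if score_threshold is not None:
        scored = [s for s in scored if s[1] >= score_threshold]
    scored.sort(key=lambda s: s[1], reverse=True)  # highest similarity first
    scored = scored[:top_k]
    return [idx for idx, _ in scored], [score for _, score in scored]

X = [[1.0, 0.0], [0.0, 1.0]]
Y = [[1.0, 0.0], [1.0, 1.0]]
idxs, scores = cosine_similarity_top_k(X, Y, top_k=2)
print(idxs)  # best (row, column) index pairs, highest similarity first
```

Here the identical pair `(0, 0)` scores 1.0 and ranks first, with the 45-degree neighbors following at about 0.707.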