param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
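The "Check Cache and run the LLM" behavior can be sketched as a plain memoization pattern. This is an illustrative mock (not LangChain source): a hypothetical in-memory cache keyed on the prompt and stop words, consulted before the underlying model is invoked.

```python
# Sketch of the check-cache-then-run pattern behind __call__, using a
# hypothetical in-memory cache keyed on (prompt, stop words).
from typing import Dict, List, Optional, Tuple

class CachedLLM:
    def __init__(self, generate_fn):
        self._generate = generate_fn          # the underlying LLM call
        self._cache: Dict[Tuple[str, Optional[Tuple[str, ...]]], str] = {}

    def __call__(self, prompt: str, stop: Optional[List[str]] = None) -> str:
        key = (prompt, tuple(stop) if stop else None)
        if key in self._cache:                # cache hit: skip the LLM call
            return self._cache[key]
        text = self._generate(prompt, stop)   # cache miss: run the LLM
        self._cache[key] = text
        return text

calls = []
def fake_llm(prompt, stop):
    calls.append(prompt)
    return prompt.upper()

llm = CachedLLM(fake_llm)
print(llm("tell me a joke"))   # runs the model
print(llm("tell me a joke"))   # served from the cache
print(len(calls))              # underlying model was invoked once
```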
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator build_extra » all fields¶
Build extra kwargs from additional params that were passed in.
create_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) → LLMResult¶
Create the LLMResult from the choices and prompts.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.OpenAI.html
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
get_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) → List[List[str]]¶
Get the sub-prompts for the LLM call.
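The batching behind sub-prompts can be sketched in a few lines. This is a hypothetical standalone version (the real method also threads API params through): split the prompt list into chunks of at most `batch_size`, the size used when sending multiple prompts to the completion API.

```python
# Hypothetical sketch of sub-prompt batching: chunk a prompt list into
# groups of at most `batch_size` for batched completion calls.
from typing import List

def get_sub_prompts(prompts: List[str], batch_size: int = 20) -> List[List[str]]:
    return [prompts[i : i + batch_size] for i in range(0, len(prompts), batch_size)]

print(get_sub_prompts(["a", "b", "c", "d", "e"], batch_size=2))
# → [['a', 'b'], ['c', 'd'], ['e']]
```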
get_token_ids(text: str) → List[int]¶
Get the token IDs using the tiktoken package.
max_tokens_for_prompt(prompt: str) → int¶
Calculate the maximum number of tokens possible to generate for a prompt.
Parameters
prompt – The prompt to pass into the model.
Returns
The maximum number of tokens to generate for a prompt.
Example
max_tokens = openai.max_tokens_for_prompt("Tell me a joke.")
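The arithmetic behind this method is simply the model's context size minus the tokens the prompt already uses. A minimal sketch, with a whitespace split standing in for the real tiktoken tokenizer (4097 is the text-davinci-003 context size):

```python
# Sketch of max_tokens_for_prompt: context size minus tokens consumed by
# the prompt. The whitespace-split tokenizer is a stub, not tiktoken.
CONTEXT_SIZE = 4097

def get_num_tokens(text: str) -> int:
    return len(text.split())          # stub tokenizer

def max_tokens_for_prompt(prompt: str) -> int:
    return CONTEXT_SIZE - get_num_tokens(prompt)

print(max_tokens_for_prompt("Tell me a joke."))  # 4097 - 4 = 4093
```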
static modelname_to_contextsize(modelname: str) → int¶
Calculate the maximum number of tokens possible to generate for a model.
Parameters
modelname – The modelname we want to know the context size for.
Returns
The maximum context size
Example
max_tokens = openai.modelname_to_contextsize("text-davinci-003")
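Conceptually this method is a lookup table from model name to context window. A minimal sketch with illustrative sizes for a few OpenAI models (the real method covers many more names):

```python
# Sketch of modelname_to_contextsize as a name -> context-size lookup.
# Sizes shown are illustrative values for a few OpenAI models.
CONTEXT_SIZES = {
    "text-davinci-003": 4097,
    "gpt-3.5-turbo": 4096,
    "gpt-4": 8192,
}

def modelname_to_contextsize(modelname: str) -> int:
    try:
        return CONTEXT_SIZES[modelname]
    except KeyError:
        raise ValueError(f"Unknown model: {modelname!r}")

print(modelname_to_contextsize("text-davinci-003"))  # 4097
```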
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
prep_streaming_params(stop: Optional[List[str]] = None) → Dict[str, Any]¶
Prepare the params for streaming.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
stream(prompt: str, stop: Optional[List[str]] = None) → Generator¶
Call OpenAI with streaming flag and return the resulting generator.
BETA: this is a beta feature while we figure out the right abstraction.
Once that happens, this interface could change.
Parameters
prompt – The prompt to pass into the model.
stop – Optional list of stop words to use when generating.
Returns
A generator representing the stream of tokens from OpenAI.
Example
generator = openai.stream("Tell me a joke.")
for token in generator:
    yield token
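The generator contract can be exercised without calling OpenAI. This mock (canned completion, hypothetical stop-word handling) shows the shape of a token stream and the plain for-loop consumption pattern:

```python
# Mock of the streaming pattern: a generator that yields tokens one at a
# time, ending early when a stop word is reached. No OpenAI call is made.
from typing import Generator, List, Optional

def fake_stream(prompt: str, stop: Optional[List[str]] = None) -> Generator[str, None, None]:
    # Pretend each word of a canned completion arrives as one token.
    for token in "Why did the chicken cross the road".split():
        if stop and token in stop:
            return                      # stop word reached: end the stream
        yield token

tokens = list(fake_stream("Tell me a joke."))
print(tokens)
```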
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_environment » all fields¶
Validate that the API key and Python package exist in the environment.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
property max_context_size: int¶
Get max context size for this model.
model Config¶
Bases: object
Configuration for this pydantic object.
allow_population_by_field_name = True¶
langchain.llms.octoai_endpoint.OctoAIEndpoint¶
class langchain.llms.octoai_endpoint.OctoAIEndpoint(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, endpoint_url: Optional[str] = None, model_kwargs: Optional[dict] = None, octoai_api_token: Optional[str] = None)[source]¶
Bases: LLM
Wrapper around OctoAI Inference Endpoints.
OctoAIEndpoint is a class to interact with OctoAI Compute Service large language model endpoints.
To use, you should have the octoai python package installed, and the
environment variable OCTOAI_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Example
from langchain.llms.octoai_endpoint import OctoAIEndpoint
OctoAIEndpoint(
octoai_api_token="octoai-api-key",
endpoint_url="https://mpt-7b-demo-kk0powt97tmb.octoai.cloud/generate",
model_kwargs={
"max_new_tokens": 200,
"temperature": 0.75,
"top_p": 0.95,
"repetition_penalty": 1,
"seed": None,
"stop": [],
},
)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.octoai_endpoint.OctoAIEndpoint.html
param endpoint_url: Optional[str] = None¶
Endpoint URL to use.
param model_kwargs: Optional[dict] = None¶
Key word arguments to pass to the model.
param octoai_api_token: Optional[str] = None¶
OCTOAI API Token
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
get_token_ids(text: str) → List[int]¶
Get the token IDs present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_environment » all fields[source]¶
Validate that the API key and Python package exist in the environment.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶
langchain.llms.base.BaseLLM¶
class langchain.llms.base.BaseLLM(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None)[source]¶
Bases: BaseLanguageModel, ABC
LLM wrapper should take in a prompt and return a string.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None¶
param callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None¶
param callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None¶
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str[source]¶
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult[source]¶
Run the LLM on the given prompt and input.
https://api.python.langchain.com/en/latest/llms/langchain.llms.base.BaseLLM.html
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult[source]¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str[source]¶
Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage[source]¶
Predict message from messages.
dict(**kwargs: Any) → Dict[source]¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult[source]¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult[source]¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
get_token_ids(text: str) → List[int]¶
Get the token IDs present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str[source]¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage[source]¶
Predict message from messages.
validator raise_deprecation » all fields[source]¶
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None[source]¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
validator set_verbose » verbose[source]¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
langchain.llms.baseten.Baseten¶
class langchain.llms.baseten.Baseten(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, model: str, input: Dict[str, Any] = None, model_kwargs: Dict[str, Any] = None)[source]¶
Bases: LLM
Use your Baseten models in LangChain.
To use, you should have the baseten python package installed,
and run baseten.login() with your Baseten API key.
The required model param can be either a model id or model
version id. Using a model version ID will result in
slightly faster invocation.
Any other model parameters can also
be passed in with the format input={model_param: value, …}
The Baseten model must accept a dictionary of input with the key
“prompt” and return a dictionary with a key “data” which maps
to a list of response strings.
Example
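The example body is missing from this extract. As a stand-in, the following mock shows only the documented contract, without calling any real Baseten model: the deployed model must accept `{"prompt": ...}` and return `{"data": [<response strings>]}`.

```python
# Mock of the documented Baseten model contract: input dict with a
# "prompt" key, output dict whose "data" key maps to a list of strings.
# No Baseten package or API call is involved.
from typing import Any, Dict

def mock_baseten_model(request: Dict[str, Any]) -> Dict[str, Any]:
    prompt = request["prompt"]                 # required input key
    return {"data": [f"echo: {prompt}"]}       # required output shape

response = mock_baseten_model({"prompt": "What is the meaning of life?"})
print(response["data"][0])
```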
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param input: Dict[str, Any] [Optional]¶
param model: str [Required]¶
param model_kwargs: Dict[str, Any] [Optional]¶
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param verbose: bool [Optional]¶
Whether to print out response text.
https://api.python.langchain.com/en/latest/llms/langchain.llms.baseten.Baseten.html
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
get_token_ids(text: str) → List[int]¶
Get the token IDs present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
langchain.llms.sagemaker_endpoint.LLMContentHandler¶
class langchain.llms.sagemaker_endpoint.LLMContentHandler[source]¶
Bases: ContentHandlerBase[str, str]
Content handler for LLM class.
Methods
__init__()
transform_input(prompt, model_kwargs)
Transforms the input to a format that model can accept as the request Body.
transform_output(output)
Transforms the output from the model to string that the LLM class expects.
Attributes
accepts
The MIME type of the response data returned from endpoint
content_type
The MIME type of the input data passed to endpoint
abstract transform_input(prompt: INPUT_TYPE, model_kwargs: Dict) → bytes¶
Transforms the input to a format that model can accept
as the request Body. Should return bytes or seekable file
like object in the format specified in the content_type
request header.
abstract transform_output(output: bytes) → OUTPUT_TYPE¶
Transforms the output from the model to string that
the LLM class expects.
accepts: Optional[str] = 'text/plain'¶
The MIME type of the response data returned from endpoint
content_type: Optional[str] = 'text/plain'¶
The MIME type of the input data passed to endpoint
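A concrete handler can be sketched without any SageMaker dependency. This is a hypothetical JSON request/response format; the actual shape depends on the deployed model, and the real class subclasses `LLMContentHandler`.

```python
# Minimal content-handler sketch: serialize the prompt plus model kwargs
# into a JSON request body, and parse the generated string back out of a
# JSON response body. The field names here are assumptions.
import json
from typing import Dict

class JsonContentHandler:
    content_type = "application/json"   # MIME type of the request body
    accepts = "application/json"        # MIME type expected in the response

    def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes:
        return json.dumps({"inputs": prompt, "parameters": model_kwargs}).encode("utf-8")

    def transform_output(self, output: bytes) -> str:
        return json.loads(output)[0]["generated_text"]

handler = JsonContentHandler()
body = handler.transform_input("Hello", {"temperature": 0.7})
print(body)
print(handler.transform_output(b'[{"generated_text": "Hello, world"}]'))
```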
https://api.python.langchain.com/en/latest/llms/langchain.llms.sagemaker_endpoint.LLMContentHandler.html
langchain.llms.azureml_endpoint.AzureMLOnlineEndpoint¶
class langchain.llms.azureml_endpoint.AzureMLOnlineEndpoint(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, endpoint_url: str = '', endpoint_api_key: str = '', deployment_name: str = '', http_client: Any = None, content_formatter: Any = None, model_kwargs: Optional[dict] = None)[source]¶
Bases: LLM, BaseModel
Wrapper around Azure ML Hosted models using Managed Online Endpoints.
Example
azure_llm = AzureMLModel(
endpoint_url="https://<your-endpoint>.<your_region>.inference.ml.azure.com/score",
endpoint_api_key="my-api-key",
deployment_name="my-deployment-name",
content_formatter=content_formatter,
)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param content_formatter: Any = None¶
The content formatter that provides an input and output
transform function to handle formats between the LLM and
the endpoint
param deployment_name: str = ''¶
Deployment Name for Endpoint. Should be passed to constructor or specified as
env var AZUREML_DEPLOYMENT_NAME.
param endpoint_api_key: str = ''¶
Authentication Key for Endpoint. Should be passed to constructor or specified as
env var AZUREML_ENDPOINT_API_KEY.
param endpoint_url: str = ''¶
URL of pre-existing Endpoint. Should be passed to constructor or specified as
env var AZUREML_ENDPOINT_URL.
param model_kwargs: Optional[dict] = None¶
Key word arguments to pass to the model.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
get_token_ids(text: str) → List[int]¶
Get the token IDs present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_client » http_client[source]¶
Validate that the API key and Python package exist in the environment.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.azureml_endpoint.AzureMLOnlineEndpoint.html
langchain.llms.promptlayer_openai.PromptLayerOpenAI¶
class langchain.llms.promptlayer_openai.PromptLayerOpenAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, client: Any = None, model: str = 'text-davinci-003', temperature: float = 0.7, max_tokens: int = 256, top_p: float = 1, frequency_penalty: float = 0, presence_penalty: float = 0, n: int = 1, best_of: int = 1, model_kwargs: Dict[str, Any] = None, openai_api_key: Optional[str] = None, openai_api_base: Optional[str] = None, openai_organization: Optional[str] = None, openai_proxy: Optional[str] = None, batch_size: int = 20, request_timeout: Optional[Union[float, Tuple[float, float]]] = None, logit_bias: Optional[Dict[str, float]] = None, max_retries: int = 6, streaming: bool = False, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', tiktoken_model_name: Optional[str] = None, pl_tags: Optional[List[str]] = None, return_pl_id: Optional[bool] = False)[source]¶
Bases: OpenAI
Wrapper around OpenAI large language models.
To use, you should have the openai and promptlayer python
package installed, and the environment variable OPENAI_API_KEY
and PROMPTLAYER_API_KEY set with your openAI API key and
promptlayer key respectively.
https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAI.html
All parameters that can be passed to the OpenAI LLM can also
be passed here. The PromptLayerOpenAI LLM adds two optional
parameters:
pl_tags – List of strings to tag the request with.
return_pl_id – If True, the PromptLayer request ID will be
returned in the generation_info field of the
Generation object.
Example
from langchain.llms import PromptLayerOpenAI
openai = PromptLayerOpenAI(model_name="text-davinci-003")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param allowed_special: Union[Literal['all'], AbstractSet[str]] = {}¶
Set of special tokens that are allowed.
param batch_size: int = 20¶
Batch size to use when passing multiple documents to generate.
param best_of: int = 1¶
Generates best_of completions server-side and returns the “best”.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param client: Any = None¶
param disallowed_special: Union[Literal['all'], Collection[str]] = 'all'¶
Set of special tokens that are not allowed.
param frequency_penalty: float = 0¶
Penalizes repeated tokens according to frequency.
param logit_bias: Optional[Dict[str, float]] [Optional]¶
Adjust the probability of specific tokens being generated.
param max_retries: int = 6¶
Maximum number of retries to make when generating.
param max_tokens: int = 256¶
The maximum number of tokens to generate in the completion.
-1 returns as many tokens as possible given the prompt and
the model's maximal context size.
param model_kwargs: Dict[str, Any] [Optional]¶
Holds any model parameters valid for create call not explicitly specified.
param model_name: str = 'text-davinci-003' (alias 'model')¶
Model name to use.
param n: int = 1¶
How many completions to generate for each prompt.
param openai_api_base: Optional[str] = None¶
param openai_api_key: Optional[str] = None¶
param openai_organization: Optional[str] = None¶
param openai_proxy: Optional[str] = None¶
param pl_tags: Optional[List[str]] = None¶
param presence_penalty: float = 0¶
Penalizes repeated tokens.
param request_timeout: Optional[Union[float, Tuple[float, float]]] = None¶
Timeout for requests to OpenAI completion API. Default is 600 seconds.
param return_pl_id: Optional[bool] = False¶
param streaming: bool = False¶
Whether to stream the results or not.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param temperature: float = 0.7¶
What sampling temperature to use.
param tiktoken_model_name: Optional[str] = None¶
The model name to pass to tiktoken when using this class.
Tiktoken is used to count the number of tokens in documents to constrain
them to be under a certain limit. By default, when set to None, this will
be the same as the embedding model name. However, there are some cases
where you may want to use this Embedding class with a model name not
supported by tiktoken. This can include when using Azure embeddings or
when using one of the many model providers that expose an OpenAI-like
API but with different models. In those cases, in order to avoid erroring
when tiktoken is called, you can specify a model name to use here.
param top_p: float = 1¶
Total probability mass of tokens to consider at each step.
param verbose: bool [Optional]¶
Whether to print out response text.
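The tiktoken_model_name fallback described above can be summarized in a minimal sketch (the resolution logic shown here is an assumed simplification, not the library's exact implementation): when the override is set it wins, otherwise token counting falls back to the model name.

```python
from typing import Optional

# Minimal sketch (assumed logic) of the tiktoken_model_name fallback:
# use the override when it is set, otherwise fall back to model_name.
def resolve_tiktoken_model(model_name: str, tiktoken_model_name: Optional[str] = None) -> str:
    return tiktoken_model_name or model_name

# A proxy serving an OpenAI-like API under a custom model name can still
# count tokens with a real tiktoken encoding:
assert resolve_tiktoken_model("my-proxy-model", "text-davinci-003") == "text-davinci-003"
assert resolve_tiktoken_model("text-davinci-003") == "text-davinci-003"
```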
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator build_extra » all fields¶
Build extra kwargs from additional params that were passed in.
create_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) → LLMResult¶
Create the LLMResult from the choices and prompts.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the message.
get_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) → List[List[str]]¶
Get the sub prompts for llm call.
get_token_ids(text: str) → List[int]¶
Get the token IDs using the tiktoken package.
max_tokens_for_prompt(prompt: str) → int¶
Calculate the maximum number of tokens possible to generate for a prompt.
Parameters
prompt – The prompt to pass into the model.
Returns
The maximum number of tokens to generate for a prompt.
Example
max_tokens = openai.max_tokens_for_prompt("Tell me a joke.")
static modelname_to_contextsize(modelname: str) → int¶
Calculate the maximum number of tokens possible to generate for a model.
Parameters
modelname – The modelname we want to know the context size for.
Returns
The maximum context size
Example
max_tokens = openai.modelname_to_contextsize("text-davinci-003")
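The relationship between modelname_to_contextsize and max_tokens_for_prompt can be sketched as follows. This is a hedged illustration: the lookup table and the token count below are assumptions for the example, not the library's real context-size table or tokenizer.

```python
# Illustrative context-size table (values are assumptions for this sketch).
CONTEXT_SIZES = {"text-davinci-003": 4097, "gpt-3.5-turbo": 4096}

def modelname_to_contextsize(modelname: str) -> int:
    # Look up the maximum context size for a model name.
    if modelname not in CONTEXT_SIZES:
        raise ValueError(f"Unknown model: {modelname!r}")
    return CONTEXT_SIZES[modelname]

def max_tokens_for_prompt(modelname: str, prompt_token_count: int) -> int:
    # Whatever the prompt does not use of the context window
    # is left over for the completion.
    return modelname_to_contextsize(modelname) - prompt_token_count

assert max_tokens_for_prompt("text-davinci-003", 6) == 4091
```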
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
prep_streaming_params(stop: Optional[List[str]] = None) → Dict[str, Any]¶
Prepare the params for streaming.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
stream(prompt: str, stop: Optional[List[str]] = None) → Generator¶
Call OpenAI with streaming flag and return the resulting generator.
BETA: this is a beta feature while we figure out the right abstraction.
Once that happens, this interface could change.
Parameters
prompt – The prompts to pass into the model.
stop – Optional list of stop words to use when generating.
Returns
A generator representing the stream of tokens from OpenAI.
Example
generator = openai.stream("Tell me a joke.")
for token in generator:
yield token
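Consuming the generator works the same way as consuming any Python generator. The sketch below uses a stand-in generator so that no API call is made; the real `stream` call would replace `fake_stream` here.

```python
# Stand-in for openai.stream(...): yields one token-like chunk at a time.
def fake_stream(prompt: str):
    for token in prompt.split():
        yield token

# Collect the streamed tokens as they arrive.
tokens = list(fake_stream("Tell me a joke."))
assert tokens == ["Tell", "me", "a", "joke."]
```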
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_environment » all fields¶
Validate that api key and python package exists in environment.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
property max_context_size: int¶
Get max context size for this model.
model Config¶
Bases: object
Configuration for this pydantic object.
allow_population_by_field_name = True¶
langchain.llms.aleph_alpha.AlephAlpha¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html
class langchain.llms.aleph_alpha.AlephAlpha(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, client: Any = None, model: Optional[str] = 'luminous-base', maximum_tokens: int = 64, temperature: float = 0.0, top_k: int = 0, top_p: float = 0.0, presence_penalty: float = 0.0, frequency_penalty: float = 0.0, repetition_penalties_include_prompt: Optional[bool] = False, use_multiplicative_presence_penalty: Optional[bool] = False, penalty_bias: Optional[str] = None, penalty_exceptions: Optional[List[str]] = None, penalty_exceptions_include_stop_sequences: Optional[bool] = None, best_of: Optional[int] = None, n: int = 1, logit_bias: Optional[Dict[int, float]] = None, log_probs: Optional[int] = None, tokens: Optional[bool] = False, disable_optimizations: Optional[bool] = False, minimum_tokens: Optional[int] = 0, echo: bool = False, use_multiplicative_frequency_penalty: bool = False, sequence_penalty: float = 0.0, sequence_penalty_min_length: int = 2, use_multiplicative_sequence_penalty: bool = False, completion_bias_inclusion: Optional[Sequence[str]] = None, completion_bias_inclusion_first_token_only: bool = False, completion_bias_exclusion: Optional[Sequence[str]] = None, completion_bias_exclusion_first_token_only: bool = False, contextual_control_threshold: Optional[float] = None, control_log_additive: Optional[bool] = True, repetition_penalties_include_completion: bool = True, raw_completion: bool = False, aleph_alpha_api_key: Optional[str] = None, stop_sequences: Optional[List[str]] = None)[source]¶
Bases: LLM
Wrapper around Aleph Alpha large language models.
To use, you should have the aleph_alpha_client python package installed, and the
environment variable ALEPH_ALPHA_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Parameters are explained more in depth here:
https://github.com/Aleph-Alpha/aleph-alpha-client/blob/c14b7dd2b4325c7da0d6a119f6e76385800e097b/aleph_alpha_client/completion.py#L10
Example
from langchain.llms import AlephAlpha
aleph_alpha = AlephAlpha(aleph_alpha_api_key="my-api-key")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param aleph_alpha_api_key: Optional[str] = None¶
API key for Aleph Alpha API.
param best_of: Optional[int] = None¶
returns the one with the “best of” results
(highest log probability per token)
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param completion_bias_exclusion: Optional[Sequence[str]] = None¶
param completion_bias_exclusion_first_token_only: bool = False¶
Only consider the first token for the completion_bias_exclusion.
param completion_bias_inclusion: Optional[Sequence[str]] = None¶
param completion_bias_inclusion_first_token_only: bool = False¶
param contextual_control_threshold: Optional[float] = None¶
If set to None, attention control parameters only apply to those tokens that have
explicitly been set in the request.
If set to a non-None value, control parameters are also applied to similar tokens.
param control_log_additive: Optional[bool] = True¶
True: apply control by adding the log(control_factor) to attention scores.
False: (attention_scores - attention_scores.min(-1)) * control_factor
param disable_optimizations: Optional[bool] = False¶
param echo: bool = False¶
Echo the prompt in the completion.
param frequency_penalty: float = 0.0¶
Penalizes repeated tokens according to frequency.
param log_probs: Optional[int] = None¶
Number of top log probabilities to be returned for each generated token.
param logit_bias: Optional[Dict[int, float]] = None¶
The logit bias allows to influence the likelihood of generating tokens.
param maximum_tokens: int = 64¶
The maximum number of tokens to be generated.
param minimum_tokens: Optional[int] = 0¶
Generate at least this number of tokens.
param model: Optional[str] = 'luminous-base'¶
Model name to use.
param n: int = 1¶
How many completions to generate for each prompt.
param penalty_bias: Optional[str] = None¶
Penalty bias for the completion.
param penalty_exceptions: Optional[List[str]] = None¶
List of strings that may be generated without penalty,
regardless of other penalty settings
param penalty_exceptions_include_stop_sequences: Optional[bool] = None¶
Should stop_sequences be included in penalty_exceptions.
param presence_penalty: float = 0.0¶
Penalizes repeated tokens.
param raw_completion: bool = False¶
Force the raw completion of the model to be returned.
param repetition_penalties_include_completion: bool = True¶
Flag deciding whether presence penalty or frequency penalty
are updated from the completion.
param repetition_penalties_include_prompt: Optional[bool] = False¶
Flag deciding whether presence penalty or frequency penalty are
updated from the prompt.
param sequence_penalty: float = 0.0¶
param sequence_penalty_min_length: int = 2¶
param stop_sequences: Optional[List[str]] = None¶
Stop sequences to use.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param temperature: float = 0.0¶
A non-negative float that tunes the degree of randomness in generation.
param tokens: Optional[bool] = False¶
Return tokens of the completion.
param top_k: int = 0¶
Number of most likely tokens to consider at each step.
param top_p: float = 0.0¶
Total probability mass of tokens to consider at each step.
param use_multiplicative_frequency_penalty: bool = False¶
param use_multiplicative_presence_penalty: Optional[bool] = False¶
Flag deciding whether presence penalty is applied
multiplicatively (True) or additively (False).
param use_multiplicative_sequence_penalty: bool = False¶
param verbose: bool [Optional]¶
Whether to print out response text.
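The two control_log_additive modes listed above can be illustrated numerically. This is only a sketch mirroring the stated formulas; the exact semantics live in the Aleph Alpha API, not in this code.

```python
import math

# Sketch of the two attention-control modes described for control_log_additive.
def apply_control(scores, control_factor, log_additive=True):
    if log_additive:
        # True: add log(control_factor) to the attention scores.
        return [s + math.log(control_factor) for s in scores]
    # False: (attention_scores - attention_scores.min(-1)) * control_factor
    lo = min(scores)
    return [(s - lo) * control_factor for s in scores]

# With control_factor == 1, the additive mode is a no-op (log(1) == 0).
assert apply_control([0.0, 1.0], 1.0) == [0.0, 1.0]
assert apply_control([2.0, 4.0], 0.5, log_additive=False) == [0.0, 1.0]
```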
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]¶
Get the token IDs present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_environment » all fields[source]¶
Validate that api key and python package exists in environment.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶
langchain.llms.huggingface_hub.HuggingFaceHub¶
class langchain.llms.huggingface_hub.HuggingFaceHub(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, client: Any = None, repo_id: str = 'gpt2', task: Optional[str] = None, model_kwargs: Optional[dict] = None, huggingfacehub_api_token: Optional[str] = None)[source]¶
Bases: LLM
Wrapper around HuggingFaceHub models.
To use, you should have the huggingface_hub python package installed, and the
environment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Only supports text-generation, text2text-generation and summarization for now.
Example
from langchain.llms import HuggingFaceHub
hf = HuggingFaceHub(repo_id="gpt2", huggingfacehub_api_token="my-api-key")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param huggingfacehub_api_token: Optional[str] = None¶
param model_kwargs: Optional[dict] = None¶
Key word arguments to pass to the model.
param repo_id: str = 'gpt2'¶
Model name to use.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_hub.HuggingFaceHub.html
param task: Optional[str] = None¶
Task to call the model with.
Should be a task that returns generated_text or summary_text.
param verbose: bool [Optional]¶
Whether to print out response text.
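The restriction to text-generation, text2text-generation and summarization noted above implies a task check along the following lines. The tuple of valid tasks comes from the docstring; the check itself and the error wording are assumptions for this sketch.

```python
# Tasks the docstring says are currently supported.
VALID_TASKS = ("text-generation", "text2text-generation", "summarization")

def check_task(task: str) -> str:
    # Reject any task outside the supported set (assumed error wording).
    if task not in VALID_TASKS:
        raise ValueError(
            f"Got invalid task {task}, currently only {VALID_TASKS} are supported"
        )
    return task

assert check_task("summarization") == "summarization"
```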
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]¶
Get the token IDs present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_environment » all fields[source]¶
Validate that api key and python package exists in environment.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶
langchain.llms.base.get_prompts¶
langchain.llms.base.get_prompts(params: Dict[str, Any], prompts: List[str]) → Tuple[Dict[int, List], str, List[int], List[str]][source]¶
Get prompts that are already cached.
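A prompt-cache lookup like this one splits the incoming prompts into hits (indexed by position) and misses that still need generation. The (prompt, llm_string) key shape below is an assumption for illustration, not the library's exact cache contract.

```python
# Hedged sketch of a get_prompts-style cache lookup.
def split_cached(cache, llm_string, prompts):
    existing, missing_idxs, missing_prompts = {}, [], []
    for i, prompt in enumerate(prompts):
        hit = cache.get((prompt, llm_string))
        if hit is not None:
            existing[i] = hit          # cached generations, keyed by position
        else:
            missing_idxs.append(i)     # positions still to be generated
            missing_prompts.append(prompt)
    return existing, missing_idxs, missing_prompts

cache = {("hi", "model=a"): ["hello"]}
existing, idxs, todo = split_cached(cache, "model=a", ["hi", "bye"])
assert existing == {0: ["hello"]} and idxs == [1] and todo == ["bye"]
```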
https://api.python.langchain.com/en/latest/llms/langchain.llms.base.get_prompts.html
langchain.llms.azureml_endpoint.AzureMLEndpointClient¶
class langchain.llms.azureml_endpoint.AzureMLEndpointClient(endpoint_url: str, endpoint_api_key: str, deployment_name: str)[source]¶
Bases: object
Wrapper around AzureML Managed Online Endpoint Client.
Initialize the class.
Methods
__init__(endpoint_url, endpoint_api_key, ...)
Initialize the class.
call(body)
call.
call(body: bytes) → bytes[source]¶
call.
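A client like this typically wraps one HTTP POST to the endpoint URL. The sketch below only builds the request; the header names and bearer-token scheme are assumptions for illustration, and nothing is sent.

```python
import urllib.request

# Hedged sketch of how a managed-online-endpoint client might build the
# request for call(body). Header names and auth scheme are assumptions.
def build_request(endpoint_url: str, endpoint_api_key: str, body: bytes):
    headers = {
        "Content-Type": "application/json",
        "Authorization": "Bearer " + endpoint_api_key,  # assumed scheme
    }
    return urllib.request.Request(endpoint_url, data=body, headers=headers)

req = build_request("https://example.invalid/score", "my-key", b"{}")
assert req.data == b"{}"
```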
https://api.python.langchain.com/en/latest/llms/langchain.llms.azureml_endpoint.AzureMLEndpointClient.html
langchain.llms.clarifai.Clarifai¶
class langchain.llms.clarifai.Clarifai(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, stub: Any = None, metadata: Any = None, userDataObject: Any = None, model_id: Optional[str] = None, model_version_id: Optional[str] = None, app_id: Optional[str] = None, user_id: Optional[str] = None, clarifai_pat_key: Optional[str] = None, api_base: str = 'https://api.clarifai.com', stop: Optional[List[str]] = None)[source]¶
Bases: LLM
Wrapper around Clarifai’s large language models.
To use, you should have an account on the Clarifai platform,
the clarifai python package installed, and the
environment variable CLARIFAI_PAT_KEY set with your PAT key,
or pass it as a named parameter to the constructor.
Example
from langchain.llms import Clarifai
clarifai_llm = Clarifai(clarifai_pat_key=CLARIFAI_PAT_KEY, user_id=USER_ID, app_id=APP_ID, model_id=MODEL_ID)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param api_base: str = 'https://api.clarifai.com'¶
param app_id: Optional[str] = None¶
Clarifai application id to use.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.clarifai.Clarifai.html
param callbacks: Callbacks = None¶
param clarifai_pat_key: Optional[str] = None¶
param metadata: Any = None¶
param model_id: Optional[str] = None¶
Model id to use.
param model_version_id: Optional[str] = None¶
Model version id to use.
param stop: Optional[List[str]] = None¶
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param userDataObject: Any = None¶
param user_id: Optional[str] = None¶
Clarifai user id to use.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]¶
Get the token IDs present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_environment » all fields[source]¶
Validate that we have all required info to access Clarifai
platform and python package exists in environment.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶
langchain.llms.cohere.completion_with_retry¶
langchain.llms.cohere.completion_with_retry(llm: Cohere, **kwargs: Any) → Any[source]¶
Use tenacity to retry the completion call.
https://api.python.langchain.com/en/latest/llms/langchain.llms.cohere.completion_with_retry.html
langchain.llms.gooseai.GooseAI¶
class langchain.llms.gooseai.GooseAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, client: Any = None, model_name: str = 'gpt-neo-20b', temperature: float = 0.7, max_tokens: int = 256, top_p: float = 1, min_tokens: int = 1, frequency_penalty: float = 0, presence_penalty: float = 0, n: int = 1, model_kwargs: Dict[str, Any] = None, logit_bias: Optional[Dict[str, float]] = None, gooseai_api_key: Optional[str] = None)[source]¶
Bases: LLM
Wrapper around GooseAI large language models.
To use, you should have the openai python package installed, and the
environment variable GOOSEAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import GooseAI
gooseai = GooseAI(model_name="gpt-neo-20b")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
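Beyond the minimal example above, additional create-call parameters can be routed through model_kwargs. A sketch, assuming GOOSEAI_API_KEY is set and the openai package is installed; the extra parameter name and values are illustrative, not verified against the GooseAI API:

```python
from langchain.llms import GooseAI

llm = GooseAI(
    model_name="gpt-neo-20b",
    temperature=0.7,
    max_tokens=256,
    # Illustrative extra parameter forwarded to the underlying create call.
    model_kwargs={"repetition_penalty": 1.2},
)
```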
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param client: Any = None¶
param frequency_penalty: float = 0¶
Penalizes repeated tokens according to frequency.
https://api.python.langchain.com/en/latest/llms/langchain.llms.gooseai.GooseAI.html
param gooseai_api_key: Optional[str] = None¶
param logit_bias: Optional[Dict[str, float]] [Optional]¶
Adjust the probability of specific tokens being generated.
param max_tokens: int = 256¶
The maximum number of tokens to generate in the completion.
-1 returns as many tokens as possible given the prompt and
the model's maximal context size.
param min_tokens: int = 1¶
The minimum number of tokens to generate in the completion.
param model_kwargs: Dict[str, Any] [Optional]¶
Holds any model parameters valid for create call not explicitly specified.
param model_name: str = 'gpt-neo-20b'¶
Model name to use
param n: int = 1¶
How many completions to generate for each prompt.
param presence_penalty: float = 0¶
Penalizes repeated tokens.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param temperature: float = 0.7¶
What sampling temperature to use
param top_p: float = 1¶
Total probability mass of tokens to consider at each step.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator build_extra » all fields[source]¶
Build extra kwargs from additional params that were passed in.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]¶
Get the token IDs present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_environment » all fields[source]¶
Validate that api key and python package exists in environment.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'ignore'¶
langchain.llms.vertexai.VertexAI¶
class langchain.llms.vertexai.VertexAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, client: _LanguageModel = None, model_name: str = 'text-bison', temperature: float = 0.0, max_output_tokens: int = 128, top_p: float = 0.95, top_k: int = 40, stop: Optional[List[str]] = None, project: Optional[str] = None, location: str = 'us-central1', credentials: Any = None, request_parallelism: int = 5, tuned_model_name: Optional[str] = None)[source]¶
Bases: _VertexAICommon, LLM
Wrapper around Google Vertex AI large language models.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
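A minimal usage sketch, assuming the google-cloud-aiplatform package is installed and Google Cloud application-default credentials plus a project are configured; the prompt is illustrative:

```python
from langchain.llms import VertexAI

# Defaults shown explicitly; all of these can be omitted.
llm = VertexAI(
    model_name="text-bison",   # Vertex AI foundation model
    temperature=0.0,           # deterministic-leaning sampling
    max_output_tokens=128,     # cap on generated tokens
)
print(llm("What is Vertex AI?"))  # requires valid GCP credentials
```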
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param credentials: Any = None¶
The default custom credentials (google.auth.credentials.Credentials) to use when making API calls.
param location: str = 'us-central1'¶
The default location to use when making API calls.
param max_output_tokens: int = 128¶
Token limit determines the maximum amount of text output from one prompt.
param model_name: str = 'text-bison'¶
The name of the Vertex AI large language model.
param project: Optional[str] = None¶
The default GCP project to use when making Vertex API calls.
param request_parallelism: int = 5¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.vertexai.VertexAI.html
The amount of parallelism allowed for requests issued to VertexAI models.
param stop: Optional[List[str]] = None¶
Optional list of stop words to use when generating.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param temperature: float = 0.0¶
Sampling temperature; it controls the degree of randomness in token selection.
param top_k: int = 40¶
How the model selects tokens for output: the next token is selected from among the top-k most probable tokens.
param top_p: float = 0.95¶
Tokens are selected from most probable to least until the sum of their probabilities equals the top-p value.
param tuned_model_name: Optional[str] = None¶
The name of a tuned model. If provided, model_name is ignored.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]¶
Get the token IDs present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_environment » all fields[source]¶
Validate that the python package exists in environment.
property is_codey_model: bool¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
task_executor: ClassVar[Optional[Executor]] = None¶
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
langchain.llms.openai.completion_with_retry¶
langchain.llms.openai.completion_with_retry(llm: Union[BaseOpenAI, OpenAIChat], **kwargs: Any) → Any[source]¶
Use tenacity to retry the completion call.
https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.completion_with_retry.html
langchain.llms.vertexai.is_codey_model¶
langchain.llms.vertexai.is_codey_model(model_name: str) → bool[source]¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.vertexai.is_codey_model.html
langchain.llms.google_palm.generate_with_retry¶
langchain.llms.google_palm.generate_with_retry(llm: GooglePalm, **kwargs: Any) → Any[source]¶
Use tenacity to retry the completion call.
https://api.python.langchain.com/en/latest/llms/langchain.llms.google_palm.generate_with_retry.html
langchain.llms.base.LLM¶
class langchain.llms.base.LLM(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None)[source]¶
Bases: BaseLLM
LLM class that expects subclasses to implement a simpler call method.
The purpose of this class is to expose a simpler interface for working
with LLMs, rather than expecting the user to implement the full _generate method.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None¶
param callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None¶
param callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None¶
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
https://api.python.langchain.com/en/latest/llms/langchain.llms.base.LLM.html
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]¶
Get the token IDs present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
langchain.llms.replicate.Replicate¶
class langchain.llms.replicate.Replicate(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, model: str, input: Dict[str, Any] = None, model_kwargs: Dict[str, Any] = None, replicate_api_token: Optional[str] = None)[source]¶
Bases: LLM
Wrapper around Replicate models.
To use, you should have the replicate python package installed,
and the environment variable REPLICATE_API_TOKEN set with your API token.
You can find your token here: https://replicate.com/account
The model param is required, but any other model parameters can also
be passed in with the format input={model_param: value, …}
Example
from langchain.llms import Replicate
replicate = Replicate(model="stability-ai/stable-diffusion:27b93a2413e7f36cd83da926f3656280b2931564ff050bf9575f1fdf9bcd7478",
input={"image_dimensions": "512x512"})
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param input: Dict[str, Any] [Optional]¶
param model: str [Required]¶
param model_kwargs: Dict[str, Any] [Optional]¶
param replicate_api_token: Optional[str] = None¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.replicate.Replicate.html
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator build_extra » all fields[source]¶
Build extra kwargs from additional params that were passed in.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]¶
Get the token IDs present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_environment » all fields[source]¶
Validate that api key and python package exists in environment.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶
langchain.requests.TextRequestsWrapper¶
class langchain.requests.TextRequestsWrapper(*, headers: Optional[Dict[str, str]] = None, aiosession: Optional[ClientSession] = None)[source]¶
Bases: BaseModel
Lightweight wrapper around requests library.
The main purpose of this wrapper is to always return a text output.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param aiosession: Optional[aiohttp.client.ClientSession] = None¶
param headers: Optional[Dict[str, str]] = None¶
async adelete(url: str, **kwargs: Any) → str[source]¶
DELETE the URL and return the text asynchronously.
async aget(url: str, **kwargs: Any) → str[source]¶
GET the URL and return the text asynchronously.
async apatch(url: str, data: Dict[str, Any], **kwargs: Any) → str[source]¶
PATCH the URL and return the text asynchronously.
async apost(url: str, data: Dict[str, Any], **kwargs: Any) → str[source]¶
POST to the URL and return the text asynchronously.
async aput(url: str, data: Dict[str, Any], **kwargs: Any) → str[source]¶
PUT the URL and return the text asynchronously.
delete(url: str, **kwargs: Any) → str[source]¶
DELETE the URL and return the text.
get(url: str, **kwargs: Any) → str[source]¶
GET the URL and return the text.
patch(url: str, data: Dict[str, Any], **kwargs: Any) → str[source]¶
PATCH the URL and return the text.
post(url: str, data: Dict[str, Any], **kwargs: Any) → str[source]¶
https://api.python.langchain.com/en/latest/requests/langchain.requests.TextRequestsWrapper.html
POST to the URL and return the text.
put(url: str, data: Dict[str, Any], **kwargs: Any) → str[source]¶
PUT the URL and return the text.
property requests: langchain.requests.Requests¶
model Config[source]¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
extra = 'forbid'¶
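A usage sketch for the wrapper's verb methods; network access is required, and httpbin.org is only an illustrative endpoint:

```python
from langchain.requests import TextRequestsWrapper

# Optional default headers are sent with every request.
wrapper = TextRequestsWrapper(headers={"Accept": "application/json"})

# Each verb method returns the response body as text.
body = wrapper.get("https://httpbin.org/get")
posted = wrapper.post("https://httpbin.org/post", data={"name": "demo"})
```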
langchain.requests.Requests¶
class langchain.requests.Requests(*, headers: Optional[Dict[str, str]] = None, aiosession: Optional[ClientSession] = None)[source]¶
Bases: BaseModel
Wrapper around requests to handle auth and async.
The main purpose of this wrapper is to handle authentication (by saving
headers) and enable easy async methods on the same base object.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param aiosession: Optional[aiohttp.client.ClientSession] = None¶
param headers: Optional[Dict[str, str]] = None¶
adelete(url: str, **kwargs: Any) → AsyncGenerator[ClientResponse, None][source]¶
DELETE the URL and yield the response asynchronously.
aget(url: str, **kwargs: Any) → AsyncGenerator[ClientResponse, None][source]¶
GET the URL and yield the response asynchronously.
apatch(url: str, data: Dict[str, Any], **kwargs: Any) → AsyncGenerator[ClientResponse, None][source]¶
PATCH the URL and yield the response asynchronously.
apost(url: str, data: Dict[str, Any], **kwargs: Any) → AsyncGenerator[ClientResponse, None][source]¶
POST to the URL and yield the response asynchronously.
aput(url: str, data: Dict[str, Any], **kwargs: Any) → AsyncGenerator[ClientResponse, None][source]¶
PUT the URL and yield the response asynchronously.
delete(url: str, **kwargs: Any) → Response[source]¶
DELETE the URL and return the text.
get(url: str, **kwargs: Any) → Response[source]¶
GET the URL and return the text.
https://api.python.langchain.com/en/latest/requests/langchain.requests.Requests.html
patch(url: str, data: Dict[str, Any], **kwargs: Any) → Response[source]¶
PATCH the URL and return the text.
post(url: str, data: Dict[str, Any], **kwargs: Any) → Response[source]¶
POST to the URL and return the text.
put(url: str, data: Dict[str, Any], **kwargs: Any) → Response[source]¶
PUT the URL and return the text.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
extra = 'forbid'¶
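Because the async methods yield an aiohttp ClientResponse from an async context manager, they are used with `async with`. A sketch; network access is required and the endpoint is illustrative:

```python
import asyncio

from langchain.requests import Requests


async def fetch_status(url: str) -> int:
    requests = Requests()
    # aget yields an aiohttp.ClientResponse inside an async context manager.
    async with requests.aget(url) as response:
        return response.status


# status = asyncio.run(fetch_status("https://httpbin.org/get"))  # needs network
```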
langchain.indexes.vectorstore.VectorstoreIndexCreator¶
class langchain.indexes.vectorstore.VectorstoreIndexCreator(*, vectorstore_cls: ~typing.Type[~langchain.vectorstores.base.VectorStore] = <class 'langchain.vectorstores.chroma.Chroma'>, embedding: ~langchain.embeddings.base.Embeddings = None, text_splitter: ~langchain.text_splitter.TextSplitter = None, vectorstore_kwargs: dict = None)[source]¶
Bases: BaseModel
Logic for creating indexes.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param embedding: langchain.embeddings.base.Embeddings [Optional]¶
param text_splitter: langchain.text_splitter.TextSplitter [Optional]¶
param vectorstore_cls: Type[langchain.vectorstores.base.VectorStore] = <class 'langchain.vectorstores.chroma.Chroma'>¶
param vectorstore_kwargs: dict [Optional]¶
from_documents(documents: List[Document]) → VectorStoreIndexWrapper[source]¶
Create a vectorstore index from documents.
from_loaders(loaders: List[BaseLoader]) → VectorStoreIndexWrapper[source]¶
Create a vectorstore index from loaders.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
extra = 'forbid'¶
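A sketch of the typical flow, assuming OPENAI_API_KEY is set (the default embedding is OpenAIEmbeddings) and the chromadb package is installed (the default vectorstore is Chroma); the file path is hypothetical:

```python
from langchain.document_loaders import TextLoader
from langchain.indexes import VectorstoreIndexCreator

loader = TextLoader("notes.txt")  # hypothetical local text file
index = VectorstoreIndexCreator().from_loaders([loader])

# The returned VectorStoreIndexWrapper supports question answering.
answer = index.query("What topics do the notes cover?")
```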
https://api.python.langchain.com/en/latest/indexes/langchain.indexes.vectorstore.VectorstoreIndexCreator.html
langchain.indexes.graph.GraphIndexCreator¶
class langchain.indexes.graph.GraphIndexCreator(*, llm: ~typing.Optional[~langchain.base_language.BaseLanguageModel] = None, graph_type: ~typing.Type[~langchain.graphs.networkx_graph.NetworkxEntityGraph] = <class 'langchain.graphs.networkx_graph.NetworkxEntityGraph'>)[source]¶
Bases: BaseModel
Functionality to create a graph index.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param graph_type: Type[langchain.graphs.networkx_graph.NetworkxEntityGraph] = <class 'langchain.graphs.networkx_graph.NetworkxEntityGraph'>¶
param llm: Optional[langchain.base_language.BaseLanguageModel] = None¶
async afrom_text(text: str) → NetworkxEntityGraph[source]¶
Create graph index from text asynchronously.
from_text(text: str) → NetworkxEntityGraph[source]¶
Create graph index from text.
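A sketch, assuming an LLM backed by a valid API key (here OpenAI) is available to extract knowledge triples from the text:

```python
from langchain.indexes import GraphIndexCreator
from langchain.llms import OpenAI

index_creator = GraphIndexCreator(llm=OpenAI(temperature=0))
graph = index_creator.from_text("Marie Curie discovered polonium and radium.")
print(graph.get_triples())  # list of (subject, object, relation) triples
```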
https://api.python.langchain.com/en/latest/indexes/langchain.indexes.graph.GraphIndexCreator.html
langchain.indexes.vectorstore.VectorStoreIndexWrapper¶
class langchain.indexes.vectorstore.VectorStoreIndexWrapper(*, vectorstore: VectorStore)[source]¶
Bases: BaseModel
Wrapper around a vectorstore for easy access.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param vectorstore: langchain.vectorstores.base.VectorStore [Required]¶
query(question: str, llm: Optional[BaseLanguageModel] = None, **kwargs: Any) → str[source]¶
Query the vectorstore.
query_with_sources(question: str, llm: Optional[BaseLanguageModel] = None, **kwargs: Any) → dict[source]¶
Query the vectorstore and get back sources.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
extra = 'forbid'¶
https://api.python.langchain.com/en/latest/indexes/langchain.indexes.vectorstore.VectorStoreIndexWrapper.html
langchain.document_transformers.get_stateful_documents¶
langchain.document_transformers.get_stateful_documents(documents: Sequence[Document]) → Sequence[_DocumentWithState][source]¶
Convert a list of documents to a list of documents with state.
Parameters
documents – The documents to convert.
Returns
A list of documents with state.
https://api.python.langchain.com/en/latest/document_transformers/langchain.document_transformers.get_stateful_documents.html
langchain.document_transformers.EmbeddingsRedundantFilter¶
class langchain.document_transformers.EmbeddingsRedundantFilter(*, embeddings: ~langchain.embeddings.base.Embeddings, similarity_fn: ~typing.Callable = <function cosine_similarity>, similarity_threshold: float = 0.95)[source]¶
Bases: BaseDocumentTransformer, BaseModel
Filter that drops redundant documents by comparing their embeddings.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param embeddings: langchain.embeddings.base.Embeddings [Required]¶
Embeddings to use for embedding document contents.
param similarity_fn: Callable = <function cosine_similarity>¶
Similarity function for comparing documents. Function expected to take as input
two matrices (List[List[float]]) and return a matrix of scores where higher values
indicate greater similarity.
param similarity_threshold: float = 0.95¶
Threshold for determining when two documents are similar enough
to be considered redundant.
async atransform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document][source]¶
Asynchronously transform a list of documents.
transform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document][source]¶
Filter down documents.
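The redundancy check can be sketched standalone. This is a simplified re-implementation of the behavior described (pairwise cosine similarity against documents already kept, dropping anything above the threshold), not the actual class, which embeds document contents itself and compares similarity matrices:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def filter_redundant(docs, embeddings, threshold=0.95):
    """Keep a document only if no already-kept document is too similar."""
    kept_idx = []
    for i, emb in enumerate(embeddings):
        if all(cosine_similarity(emb, embeddings[j]) < threshold
               for j in kept_idx):
            kept_idx.append(i)
    return [docs[i] for i in kept_idx]

docs = ["cats are great", "cats are great!", "quantum computing"]
embs = [[1.0, 0.0], [0.999, 0.01], [0.0, 1.0]]
# The near-duplicate second document is dropped.
print(filter_redundant(docs, embs))
```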
model Config[source]¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
|
https://api.python.langchain.com/en/latest/document_transformers/langchain.document_transformers.EmbeddingsRedundantFilter.html
|
langchain.experimental.plan_and_execute.schema.ListStepContainer¶
class langchain.experimental.plan_and_execute.schema.ListStepContainer(*, steps: List[Tuple[Step, StepResponse]] = None)[source]¶
Bases: BaseModel
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param steps: List[Tuple[langchain.experimental.plan_and_execute.schema.Step, langchain.experimental.plan_and_execute.schema.StepResponse]] [Optional]¶
add_step(step: Step, step_response: StepResponse) → None[source]¶
get_final_response() → str[source]¶
get_steps() → List[Tuple[Step, StepResponse]][source]¶
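A standalone sketch of the container's behavior, using hypothetical `Step`/`StepResponse` stand-ins for the real schema models and assuming, as the method names suggest, that the final response is taken from the last recorded step:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical stand-ins for the plan_and_execute schema models.
@dataclass
class Step:
    value: str

@dataclass
class StepResponse:
    response: str

@dataclass
class ListStepContainer:
    steps: List[Tuple[Step, StepResponse]] = field(default_factory=list)

    def add_step(self, step: Step, step_response: StepResponse) -> None:
        self.steps.append((step, step_response))

    def get_steps(self) -> List[Tuple[Step, StepResponse]]:
        return self.steps

    def get_final_response(self) -> str:
        # Assumption of this sketch: final output is the last step's response.
        return self.steps[-1][1].response

container = ListStepContainer()
container.add_step(Step("search the web"), StepResponse("found 3 results"))
container.add_step(Step("summarize"), StepResponse("summary done"))
print(container.get_final_response())  # → summary done
```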
|
https://api.python.langchain.com/en/latest/experimental/langchain.experimental.plan_and_execute.schema.ListStepContainer.html
|
langchain.experimental.llms.jsonformer_decoder.import_jsonformer¶
langchain.experimental.llms.jsonformer_decoder.import_jsonformer() → jsonformer[source]¶
Lazily import jsonformer.
|
https://api.python.langchain.com/en/latest/experimental/langchain.experimental.llms.jsonformer_decoder.import_jsonformer.html
|
langchain.experimental.plan_and_execute.planners.base.BasePlanner¶
class langchain.experimental.plan_and_execute.planners.base.BasePlanner[source]¶
Bases: BaseModel
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
abstract async aplan(inputs: dict, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Plan[source]¶
Given input, decide what to do.
abstract plan(inputs: dict, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Plan[source]¶
Given input, decide what to do.
|
https://api.python.langchain.com/en/latest/experimental/langchain.experimental.plan_and_execute.planners.base.BasePlanner.html
|
langchain.experimental.plan_and_execute.planners.chat_planner.PlanningOutputParser¶
class langchain.experimental.plan_and_execute.planners.chat_planner.PlanningOutputParser[source]¶
Bases: PlanOutputParser
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
dict(**kwargs: Any) → Dict¶
Return dictionary representation of output parser.
get_format_instructions() → str¶
Instructions on how the LLM output should be formatted.
parse(text: str) → Plan[source]¶
Parse into a plan.
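`parse()` turns an LLM's numbered-list output into a `Plan`. A minimal standalone sketch of that kind of parsing, where the `Step`/`Plan` dataclasses are stand-ins and the real splitting regex may differ:

```python
import re
from dataclasses import dataclass
from typing import List

# Hypothetical stand-ins; the real Plan/Step live in plan_and_execute.schema.
@dataclass
class Step:
    value: str

@dataclass
class Plan:
    steps: List[Step]

def parse_plan(text: str) -> Plan:
    """Split a numbered list ('1. ...\n2. ...') into plan steps."""
    parts = re.split(r"\n\s*\d+\.\s*", "\n" + text)
    return Plan(steps=[Step(value=p.strip()) for p in parts if p.strip()])

plan = parse_plan("1. Search for the population.\n2. Report the answer.")
print([s.value for s in plan.steps])
```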
parse_result(result: List[Generation]) → T¶
Parse LLM Result.
parse_with_prompt(completion: str, prompt: PromptValue) → Any¶
Optional method to parse the output of an LLM call with a prompt.
The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters
completion – output of language model
prompt – prompt value
Returns
structured output
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
extra = 'ignore'¶
|
https://api.python.langchain.com/en/latest/experimental/langchain.experimental.plan_and_execute.planners.chat_planner.PlanningOutputParser.html
|
langchain.experimental.autonomous_agents.autogpt.output_parser.preprocess_json_input¶
langchain.experimental.autonomous_agents.autogpt.output_parser.preprocess_json_input(input_str: str) → str[source]¶
Preprocess a string so it can be parsed as JSON.
Replace single backslashes with double backslashes,
while leaving already escaped ones intact.
Parameters
input_str – String to be preprocessed
Returns
Preprocessed string
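A simplified sketch of the behavior described above. This version treats only adjacency to another backslash as "already escaped"; the real function may additionally preserve valid JSON escape sequences such as `\n` or `\uXXXX`:

```python
import re

def preprocess_json_input(input_str: str) -> str:
    """Double each lone backslash so the string parses as JSON.

    A lone backslash is one neither preceded nor followed by another
    backslash; pairs that are already escaped are left intact.
    """
    return re.sub(r"(?<!\\)\\(?!\\)", r"\\\\", input_str)

print(preprocess_json_input(r"C:\temp\file"))     # → C:\\temp\\file
print(preprocess_json_input(r"already \\ escaped"))  # unchanged
```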
|
https://api.python.langchain.com/en/latest/experimental/langchain.experimental.autonomous_agents.autogpt.output_parser.preprocess_json_input.html
|
langchain.experimental.plan_and_execute.executors.base.BaseExecutor¶
class langchain.experimental.plan_and_execute.executors.base.BaseExecutor[source]¶
Bases: BaseModel
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
abstract async astep(inputs: dict, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → StepResponse[source]¶
Take step.
abstract step(inputs: dict, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → StepResponse[source]¶
Take step.
|
https://api.python.langchain.com/en/latest/experimental/langchain.experimental.plan_and_execute.executors.base.BaseExecutor.html
|
langchain.experimental.plan_and_execute.agent_executor.PlanAndExecute¶
class langchain.experimental.plan_and_execute.agent_executor.PlanAndExecute(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, planner: BasePlanner, executor: BaseExecutor, step_container: BaseStepContainer = None, input_key: str = 'input', output_key: str = 'output')[source]¶
Bases: Chain
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
param executor: langchain.experimental.plan_and_execute.executors.base.BaseExecutor [Required]¶
param input_key: str = 'input'¶
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
param output_key: str = 'output'¶
param planner: langchain.experimental.plan_and_execute.planners.base.BasePlanner [Required]¶
param step_container: langchain.experimental.plan_and_execute.schema.BaseStepContainer [Optional]¶
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Run the logic of this chain and add to output if desired.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs – boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info – Whether to include run info in the response. Defaults
to False.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Run the logic of this chain and add to output if desired.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs – boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info – Whether to include run info in the response. Defaults
to False.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, **kwargs: Any) → str¶
Run the chain as text in, text out or multiple variables, text out.
dict(**kwargs: Any) → Dict¶
Return dictionary representation of chain.
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prep inputs.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prep outputs.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, **kwargs: Any) → str¶
Run the chain as text in, text out or multiple variables, text out.
save(file_path: Union[Path, str]) → None¶
Save the chain.
Parameters
file_path – Path to file to save the chain to.
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property input_keys: List[str]¶
Input keys this chain expects.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
property output_keys: List[str]¶
Output keys this chain expects.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
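Putting the pieces together, the planner/executor/step-container control flow suggested by this class can be sketched with stand-in classes (not the actual implementation, which runs through `Chain` callbacks):

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Step:
    value: str

@dataclass
class StepResponse:
    response: str

class ToyPlanner:
    """Hypothetical planner: splits the objective into fixed steps."""
    def plan(self, inputs: dict) -> List[Step]:
        return [Step("research " + inputs["input"]), Step("write answer")]

class ToyExecutor:
    """Hypothetical executor: pretends to carry out each step."""
    def step(self, inputs: dict) -> StepResponse:
        return StepResponse("did: " + inputs["current_step"].value)

def plan_and_execute(inputs: dict, planner, executor) -> dict:
    steps: List[Tuple[Step, StepResponse]] = []
    for step in planner.plan(inputs):            # 1. plan up front
        response = executor.step({**inputs,      # 2. execute step by step
                                  "current_step": step,
                                  "previous_steps": steps})
        steps.append((step, response))           # 3. record in the container
    # 4. surface the last step's response as the chain output
    return {"output": steps[-1][1].response}

print(plan_and_execute({"input": "llamas"}, ToyPlanner(), ToyExecutor()))
```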
|
https://api.python.langchain.com/en/latest/experimental/langchain.experimental.plan_and_execute.agent_executor.PlanAndExecute.html
|
langchain.experimental.plan_and_execute.schema.BaseStepContainer¶
class langchain.experimental.plan_and_execute.schema.BaseStepContainer[source]¶
Bases: BaseModel
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
abstract add_step(step: Step, step_response: StepResponse) → None[source]¶
Add step and step response to the container.
abstract get_final_response() → str[source]¶
Return the final response based on steps taken.
|
https://api.python.langchain.com/en/latest/experimental/langchain.experimental.plan_and_execute.schema.BaseStepContainer.html
|
langchain.experimental.plan_and_execute.executors.agent_executor.load_agent_executor¶
langchain.experimental.plan_and_execute.executors.agent_executor.load_agent_executor(llm: BaseLanguageModel, tools: List[BaseTool], verbose: bool = False, include_task_in_prompt: bool = False) → ChainExecutor[source]¶
Load an agent executor.
Parameters
llm – BaseLanguageModel
tools – List[BaseTool]
verbose – bool. Defaults to False.
include_task_in_prompt – bool. Defaults to False.
Returns
ChainExecutor
|
https://api.python.langchain.com/en/latest/experimental/langchain.experimental.plan_and_execute.executors.agent_executor.load_agent_executor.html
|
langchain.experimental.autonomous_agents.autogpt.memory.AutoGPTMemory¶
class langchain.experimental.autonomous_agents.autogpt.memory.AutoGPTMemory(*, chat_memory: BaseChatMessageHistory = None, output_key: Optional[str] = None, input_key: Optional[str] = None, return_messages: bool = False, retriever: VectorStoreRetriever)[source]¶
Bases: BaseChatMemory
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param chat_memory: BaseChatMessageHistory [Optional]¶
param input_key: Optional[str] = None¶
param output_key: Optional[str] = None¶
param retriever: langchain.vectorstores.base.VectorStoreRetriever [Required]¶
VectorStoreRetriever object to connect to.
param return_messages: bool = False¶
clear() → None¶
Clear memory contents.
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any][source]¶
Return key-value pairs given the text input to the chain.
If None, return all memories
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None¶
Save context from this conversation to buffer.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
property memory_variables: List[str]¶
Input keys this memory class will load dynamically.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
|
https://api.python.langchain.com/en/latest/experimental/langchain.experimental.autonomous_agents.autogpt.memory.AutoGPTMemory.html
|
langchain.experimental.plan_and_execute.schema.StepResponse¶
class langchain.experimental.plan_and_execute.schema.StepResponse(*, response: str)[source]¶
Bases: BaseModel
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param response: str [Required]¶
|
https://api.python.langchain.com/en/latest/experimental/langchain.experimental.plan_and_execute.schema.StepResponse.html
|
langchain.experimental.generative_agents.memory.GenerativeAgentMemory¶
class langchain.experimental.generative_agents.memory.GenerativeAgentMemory(*, llm: BaseLanguageModel, memory_retriever: TimeWeightedVectorStoreRetriever, verbose: bool = False, reflection_threshold: Optional[float] = None, current_plan: List[str] = [], importance_weight: float = 0.15, aggregate_importance: float = 0.0, max_tokens_limit: int = 1200, queries_key: str = 'queries', most_recent_memories_token_key: str = 'recent_memories_token', add_memory_key: str = 'add_memory', relevant_memories_key: str = 'relevant_memories', relevant_memories_simple_key: str = 'relevant_memories_simple', most_recent_memories_key: str = 'most_recent_memories', now_key: str = 'now', reflecting: bool = False)[source]¶
Bases: BaseMemory
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param add_memory_key: str = 'add_memory'¶
param aggregate_importance: float = 0.0¶
Track the sum of the ‘importance’ of recent memories.
Triggers reflection when it reaches reflection_threshold.
param current_plan: List[str] = []¶
The current plan of the agent.
param importance_weight: float = 0.15¶
How much weight to assign the memory importance.
param llm: langchain.base_language.BaseLanguageModel [Required]¶
The core language model.
param max_tokens_limit: int = 1200¶
param memory_retriever: langchain.retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever [Required]¶
The retriever to fetch related memories.
param most_recent_memories_key: str = 'most_recent_memories'¶
param most_recent_memories_token_key: str = 'recent_memories_token'¶
param now_key: str = 'now'¶
param queries_key: str = 'queries'¶
param reflecting: bool = False¶
param reflection_threshold: Optional[float] = None¶
When aggregate_importance exceeds reflection_threshold, stop to reflect.
param relevant_memories_key: str = 'relevant_memories'¶
param relevant_memories_simple_key: str = 'relevant_memories_simple'¶
param verbose: bool = False¶
add_memories(memory_content: str, now: Optional[datetime] = None) → List[str][source]¶
Add observations or memories to the agent’s memory.
add_memory(memory_content: str, now: Optional[datetime] = None) → List[str][source]¶
Add an observation or memory to the agent’s memory.
chain(prompt: PromptTemplate) → LLMChain[source]¶
clear() → None[source]¶
Clear memory contents.
fetch_memories(observation: str, now: Optional[datetime] = None) → List[Document][source]¶
Fetch related memories.
format_memories_detail(relevant_memories: List[Document]) → str[source]¶
format_memories_simple(relevant_memories: List[Document]) → str[source]¶
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, str][source]¶
Return key-value pairs given the text input to the chain.
pause_to_reflect(now: Optional[datetime] = None) → List[str][source]¶
Reflect on recent observations and generate ‘insights’.
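The bookkeeping described by `aggregate_importance`, `importance_weight`, `reflection_threshold`, and `reflecting` can be sketched standalone. The real class generates insights with an LLM; here reflection is stubbed, and resetting the running total after reflecting is an assumption of this sketch:

```python
class ReflectionTrigger:
    """Standalone sketch of the reflection-trigger logic (not the real class)."""

    def __init__(self, reflection_threshold=1.0, importance_weight=0.15):
        self.reflection_threshold = reflection_threshold
        self.importance_weight = importance_weight
        self.aggregate_importance = 0.0
        self.reflecting = False   # guards against reflecting recursively
        self.memories = []
        self.insights = []

    def add_memory(self, content: str, importance: float) -> None:
        # Weight each memory's importance before accumulating (assumption).
        self.aggregate_importance += self.importance_weight * importance
        self.memories.append(content)
        if (self.reflection_threshold is not None
                and self.aggregate_importance > self.reflection_threshold
                and not self.reflecting):
            self.reflecting = True
            # Stub: the real class asks an LLM for high-level insights here.
            self.insights.append(f"insight over {len(self.memories)} memories")
            self.aggregate_importance = 0.0  # reset after reflecting
            self.reflecting = False

mem = ReflectionTrigger(reflection_threshold=1.0)
for i in range(8):
    mem.add_memory(f"obs {i}", importance=1.0)
print(len(mem.insights))  # reflection fired once, on the 7th memory
```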
save_context(inputs: Dict[str, Any], outputs: Dict[str, Any]) → None[source]¶
Save the context of this model run to memory.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
property memory_variables: List[str]¶
Input keys this memory class will load dynamically.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
|
https://api.python.langchain.com/en/latest/experimental/langchain.experimental.generative_agents.memory.GenerativeAgentMemory.html
|
langchain.experimental.autonomous_agents.baby_agi.task_prioritization.TaskPrioritizationChain¶
class langchain.experimental.autonomous_agents.baby_agi.task_prioritization.TaskPrioritizationChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, prompt: BasePromptTemplate, llm: BaseLanguageModel, output_key: str = 'text', output_parser: BaseLLMOutputParser = None, return_final_only: bool = True, llm_kwargs: dict = None)[source]¶
Bases: LLMChain
Chain to prioritize tasks.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
param llm: BaseLanguageModel [Required]¶
Language model to call.
param llm_kwargs: dict [Optional]¶
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
param output_key: str = 'text'¶
param output_parser: BaseLLMOutputParser [Optional]¶
Output parser to use.
Defaults to one that takes the most likely string but does not change it
otherwise.
param prompt: BasePromptTemplate [Required]¶
Prompt object to use.
param return_final_only: bool = True¶
Whether to return only the final parsed result. Defaults to True.
If False, extra information about the generation is returned as well.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Run the logic of this chain and add to output if desired.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs – boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info – Whether to include run info in the response. Defaults
to False.
async aapply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Utilize the LLM generate method for speed gains.
async aapply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Sequence[Union[str, List[str], Dict[str, str]]]¶
Call apply and then parse the results.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Run the logic of this chain and add to output if desired.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs – boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info – Whether to include run info in the response. Defaults
to False.
async agenerate(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) → LLMResult¶
Generate LLM result from inputs.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Utilize the LLM generate method for speed gains.
apply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Sequence[Union[str, List[str], Dict[str, str]]]¶
Call apply and then parse the results.
async apredict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Format prompt with kwargs and pass to LLM.
Parameters
callbacks – Callbacks to pass to LLMChain
**kwargs – Keys to pass to prompt template.
Returns
Completion from LLM.
Example
completion = llm.predict(adjective="funny")
async apredict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, str]]¶
Call apredict and then parse the results.
async aprep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) → Tuple[List[PromptValue], Optional[List[str]]]¶
Prepare prompts from inputs.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, **kwargs: Any) → str¶
Run the chain as text in, text out or multiple variables, text out.
create_outputs(llm_result: LLMResult) → List[Dict[str, Any]]¶
Create outputs from response.
dict(**kwargs: Any) → Dict¶
Return dictionary representation of chain.
classmethod from_llm(llm: BaseLanguageModel, verbose: bool = True) → LLMChain[source]¶
Get the response parser.
classmethod from_string(llm: BaseLanguageModel, template: str) → LLMChain¶
Create LLMChain from LLM and template.
generate(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) → LLMResult¶
Generate LLM result from inputs.
predict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Format prompt with kwargs and pass to LLM.
Parameters
callbacks – Callbacks to pass to LLMChain
**kwargs – Keys to pass to prompt template.
Returns
Completion from LLM.
Example
completion = llm.predict(adjective="funny")
predict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, Any]]¶
Call predict and then parse the results.
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prep inputs.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prep outputs.
prep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) → Tuple[List[PromptValue], Optional[List[str]]]¶
Prepare prompts from inputs.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, **kwargs: Any) → str¶
Run the chain as text in, text out or multiple variables, text out.
save(file_path: Union[Path, str]) → None¶
Save the chain.
Parameters
file_path – Path to file to save the chain to.
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
extra = 'forbid'¶
|
https://api.python.langchain.com/en/latest/experimental/langchain.experimental.autonomous_agents.baby_agi.task_prioritization.TaskPrioritizationChain.html
|