| id | text | source |
|---|---|---|
33f8c367a0e4-3 | param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[st... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.OpenAI.html |
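The `__call__` signature above describes a check-cache-then-run flow. A minimal offline sketch of that pattern follows; the `CachingLLM` class, cache key scheme, and model callable are hypothetical stand-ins for illustration, not LangChain's actual implementation.

```python
# Sketch of "check cache and run the LLM": look up (prompt, stop) first,
# only invoke the underlying model on a cache miss.
from typing import Callable, Dict, List, Optional, Tuple


class CachingLLM:
    def __init__(self, run_fn: Callable[[str], str]) -> None:
        self._run_fn = run_fn  # the underlying model call
        self._cache: Dict[Tuple[str, Tuple[str, ...]], str] = {}

    def __call__(self, prompt: str, stop: Optional[List[str]] = None) -> str:
        key = (prompt, tuple(stop or []))
        if key in self._cache:       # cache hit: skip the model call
            return self._cache[key]
        text = self._run_fn(prompt)  # cache miss: run the LLM
        self._cache[key] = text
        return text


calls = []
llm = CachingLLM(lambda p: (calls.append(p) or p.upper()))
print(llm("hello"))  # runs the model
print(llm("hello"))  # served from cache; the model is not called again
```

Repeating the same prompt leaves `calls` with a single entry, which is the point of checking the cache first.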
33f8c367a0e4-4 | dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
genera... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.OpenAI.html |
33f8c367a0e4-5 | Returns
The maximum context size
Example
max_tokens = openai.modelname_to_contextsize("text-davinci-003")
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → Base... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.OpenAI.html |
33f8c367a0e4-6 | validator validate_environment » all fields¶
Validate that the API key and Python package exist in the environment.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return t... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.OpenAI.html |
c19c1383e6af-0 | langchain.llms.octoai_endpoint.OctoAIEndpoint¶
class langchain.llms.octoai_endpoint.OctoAIEndpoint(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] =... | https://api.python.langchain.com/en/latest/llms/langchain.llms.octoai_endpoint.OctoAIEndpoint.html |
c19c1383e6af-1 | param callbacks: Callbacks = None¶
param endpoint_url: Optional[str] = None¶
Endpoint URL to use.
param model_kwargs: Optional[dict] = None¶
Keyword arguments to pass to the model.
param octoai_api_token: Optional[str] = None¶
OCTOAI API Token
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param... | https://api.python.langchain.com/en/latest/llms/langchain.llms.octoai_endpoint.OctoAIEndpoint.html |
c19c1383e6af-2 | dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
genera... | https://api.python.langchain.com/en/latest/llms/langchain.llms.octoai_endpoint.OctoAIEndpoint.html |
c19c1383e6af-3 | This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_environment » all fields[source]¶
Validate that the API key and Python package exist in the environment.
prop... | https://api.python.langchain.com/en/latest/llms/langchain.llms.octoai_endpoint.OctoAIEndpoint.html |
9f1a3f1e54a0-0 | langchain.llms.base.BaseLLM¶
class langchain.llms.base.BaseLLM(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None)[source]¶
Bases: BaseLanguageM... | https://api.python.langchain.com/en/latest/llms/langchain.llms.base.BaseLLM.html |
9f1a3f1e54a0-1 | Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult[source]¶
Take in a list of prompt values and return an LLMResult.
classmethod all_... | https://api.python.langchain.com/en/latest/llms/langchain.llms.base.BaseLLM.html |
9f1a3f1e54a0-2 | Get the token IDs present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str[source]¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage[source]¶
Predict message from messages.
validator raise... | https://api.python.langchain.com/en/latest/llms/langchain.llms.base.BaseLLM.html |
90d77bc2d0d7-0 | langchain.llms.baseten.Baseten¶
class langchain.llms.baseten.Baseten(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, model: str, input: Dict... | https://api.python.langchain.com/en/latest/llms/langchain.llms.baseten.Baseten.html |
90d77bc2d0d7-1 | param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[st... | https://api.python.langchain.com/en/latest/llms/langchain.llms.baseten.Baseten.html |
90d77bc2d0d7-2 | Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int... | https://api.python.langchain.com/en/latest/llms/langchain.llms.baseten.Baseten.html |
90d77bc2d0d7-3 | constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not t... | https://api.python.langchain.com/en/latest/llms/langchain.llms.baseten.Baseten.html |
248d18415414-0 | langchain.llms.sagemaker_endpoint.LLMContentHandler¶
class langchain.llms.sagemaker_endpoint.LLMContentHandler[source]¶
Bases: ContentHandlerBase[str, str]
Content handler for LLM class.
Methods
__init__()
transform_input(prompt, model_kwargs)
Transforms the input to a format that the model can accept as the request body.
... | https://api.python.langchain.com/en/latest/llms/langchain.llms.sagemaker_endpoint.LLMContentHandler.html |
3f5f5798a860-0 | langchain.llms.azureml_endpoint.AzureMLOnlineEndpoint¶
class langchain.llms.azureml_endpoint.AzureMLOnlineEndpoint(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optio... | https://api.python.langchain.com/en/latest/llms/langchain.llms.azureml_endpoint.AzureMLOnlineEndpoint.html |
3f5f5798a860-1 | env var AZUREML_ENDPOINT_API_KEY.
param endpoint_url: str = ''¶
URL of the pre-existing endpoint. Should be passed to the constructor or specified as
env var AZUREML_ENDPOINT_URL.
param model_kwargs: Optional[dict] = None¶
Keyword arguments to pass to the model.
param tags: Optional[List[str]] = None¶
Tags to add to the run t... | https://api.python.langchain.com/en/latest/llms/langchain.llms.azureml_endpoint.AzureMLOnlineEndpoint.html |
3f5f5798a860-2 | dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
genera... | https://api.python.langchain.com/en/latest/llms/langchain.llms.azureml_endpoint.AzureMLOnlineEndpoint.html |
3f5f5798a860-3 | This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_client » http_client[source]¶
Validate that the API key and Python package exist in the environment.
property... | https://api.python.langchain.com/en/latest/llms/langchain.llms.azureml_endpoint.AzureMLOnlineEndpoint.html |
b05556a4741d-0 | langchain.llms.promptlayer_openai.PromptLayerOpenAI¶
class langchain.llms.promptlayer_openai.PromptLayerOpenAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[... | https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAI.html |
b05556a4741d-1 | promptlayer key respectively.
All parameters that can be passed to the OpenAI LLM can also
be passed here. The PromptLayerOpenAI LLM adds two optional
Parameters
pl_tags – List of strings to tag the request with.
return_pl_id – If True, the PromptLayer request ID will be
returned in the generation_info field of the
Gen... | https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAI.html |
b05556a4741d-2 | -1 returns as many tokens as possible given the prompt and
the model's maximal context size.
param model_kwargs: Dict[str, Any] [Optional]¶
Holds any model parameters valid for create call not explicitly specified.
param model_name: str = 'text-davinci-003' (alias 'model')¶
Model name to use.
param n: int = 1¶
How many ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAI.html |
b05556a4741d-3 | when using one of the many model providers that expose an OpenAI-like
API but with different models. In those cases, in order to avoid erroring
when tiktoken is called, you can specify a model name to use here.
param top_p: float = 1¶
Total probability mass of tokens to consider at each step.
param verbose: bool [Optio... | https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAI.html |
b05556a4741d-4 | Build extra kwargs from additional params that were passed in.
create_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) → LLMResult¶
Create the LLMResult from the choices and prompts.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str... | https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAI.html |
b05556a4741d-5 | static modelname_to_contextsize(modelname: str) → int¶
Calculate the maximum number of tokens possible to generate for a model.
Parameters
modelname – The model name we want to know the context size for.
Returns
The maximum context size
Example
max_tokens = openai.modelname_to_contextsize("text-davinci-003")
predict(tex... | https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAI.html |
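The `modelname_to_contextsize` entry above maps a model name to its maximum context size. A minimal sketch of such a lookup follows; the table and the sizes in it are illustrative assumptions, not the authoritative values maintained by LangChain or OpenAI.

```python
# Hypothetical model-name -> context-size table for illustration only.
CONTEXT_SIZES = {
    "text-davinci-003": 4097,
    "text-curie-001": 2049,
}


def modelname_to_contextsize(modelname: str) -> int:
    """Return the maximum context size for a known model name."""
    try:
        return CONTEXT_SIZES[modelname]
    except KeyError:
        raise ValueError(f"Unknown model: {modelname!r}")


max_tokens = modelname_to_contextsize("text-davinci-003")
print(max_tokens)
```

Raising on an unknown name, rather than returning a default, surfaces typos in model names early.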
b05556a4741d-6 | Example
generator = openai.stream("Tell me a joke.")
for token in generator:
yield token
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_environment » all fields¶
Validate that the API key and Python package exist in environ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAI.html |
86517f2efda9-0 | langchain.llms.aleph_alpha.AlephAlpha¶ | https://api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html |
86517f2efda9-1 | class langchain.llms.aleph_alpha.AlephAlpha(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, client: Any = None, model: Optional[str] = 'lumi... | https://api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html |
86517f2efda9-2 | Optional[bool] = True, repetition_penalties_include_completion: bool = True, raw_completion: bool = False, aleph_alpha_api_key: Optional[str] = None, stop_sequences: Optional[List[str]] = None)[source]¶ | https://api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html |
86517f2efda9-3 | Bases: LLM
Wrapper around Aleph Alpha large language models.
To use, you should have the aleph_alpha_client python package installed, and the
environment variable ALEPH_ALPHA_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Parameters are explained more in depth here:
https://github.co... | https://api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html |
86517f2efda9-4 | If set to a non-None value, control parameters are also applied to similar tokens.
param control_log_additive: Optional[bool] = True¶
True: apply control by adding the log(control_factor) to attention scores.
False: (attention_scores - attention_scores.min(-1)) * control_factor
param disable_optimizations: Optional[b... | https://api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html |
86517f2efda9-5 | Flag deciding whether presence penalty or frequency penalty
are updated from the completion.
param repetition_penalties_include_prompt: Optional[bool] = False¶
Flag deciding whether presence penalty or frequency penalty are
updated from the prompt.
param sequence_penalty: float = 0.0¶
param sequence_penalty_min_length:... | https://api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html |
86517f2efda9-6 | Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required... | https://api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html |
86517f2efda9-7 | Get the token IDs present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator raise_deprecation » ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html |
654840d0f1a5-0 | langchain.llms.huggingface_hub.HuggingFaceHub¶
class langchain.llms.huggingface_hub.HuggingFaceHub(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] =... | https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_hub.HuggingFaceHub.html |
654840d0f1a5-1 | param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param task: Optional[str] = None¶
Task to call the model with.
Should be a task that returns generated_text or summary_text.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, c... | https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_hub.HuggingFaceHub.html |
654840d0f1a5-2 | dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
genera... | https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_hub.HuggingFaceHub.html |
654840d0f1a5-3 | This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_environment » all fields[source]¶
Validate that the API key and Python package exist in the environment.
prop... | https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_hub.HuggingFaceHub.html |
db2c71302137-0 | langchain.llms.base.get_prompts¶
langchain.llms.base.get_prompts(params: Dict[str, Any], prompts: List[str]) → Tuple[Dict[int, List], str, List[int], List[str]][source]¶
Get prompts that are already cached. | https://api.python.langchain.com/en/latest/llms/langchain.llms.base.get_prompts.html |
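`get_prompts` above partitions a prompt list into already-cached results and prompts that still need to be run, returning `(existing, llm_string, missing_idxs, missing_prompts)`. The sketch below illustrates that partition with a simplified in-memory cache and a simplified `llm_string` derivation; both are stand-ins, not LangChain's actual cache machinery.

```python
# Split prompts into cache hits (by index) and misses still to run.
import json
from typing import Any, Dict, List, Tuple

CACHE: Dict[Tuple[str, str], List[str]] = {}  # (prompt, llm_string) -> generations


def get_prompts(
    params: Dict[str, Any], prompts: List[str]
) -> Tuple[Dict[int, List[str]], str, List[int], List[str]]:
    llm_string = json.dumps(params, sort_keys=True)  # identifies the LLM config
    existing: Dict[int, List[str]] = {}
    missing_idxs: List[int] = []
    missing_prompts: List[str] = []
    for i, prompt in enumerate(prompts):
        hit = CACHE.get((prompt, llm_string))
        if hit is not None:
            existing[i] = hit  # cache hit: keep the generations by index
        else:
            missing_idxs.append(i)
            missing_prompts.append(prompt)
    return existing, llm_string, missing_idxs, missing_prompts


CACHE[("hi", json.dumps({"model": "m"}, sort_keys=True))] = ["hello"]
existing, llm_string, idxs, todo = get_prompts({"model": "m"}, ["hi", "bye"])
print(existing, idxs, todo)
```

Keeping hits keyed by their original index lets the caller merge cached and fresh generations back into prompt order.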
749f1191f4e3-0 | langchain.llms.azureml_endpoint.AzureMLEndpointClient¶
class langchain.llms.azureml_endpoint.AzureMLEndpointClient(endpoint_url: str, endpoint_api_key: str, deployment_name: str)[source]¶
Bases: object
Wrapper around AzureML Managed Online Endpoint Client.
Initialize the class.
Methods
__init__(endpoint_url, endpoint_a... | https://api.python.langchain.com/en/latest/llms/langchain.llms.azureml_endpoint.AzureMLEndpointClient.html |
39884ea8312a-0 | langchain.llms.clarifai.Clarifai¶
class langchain.llms.clarifai.Clarifai(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, stub: Any = None, m... | https://api.python.langchain.com/en/latest/llms/langchain.llms.clarifai.Clarifai.html |
39884ea8312a-1 | param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param clarifai_pat_key: Optional[str] = None¶
param metadata: Any = None¶
param model_id: Optional[str] = None¶
Model id to use.
param model_version_id: Optional[str] = None¶
Model version id to use.
param stop: Optional[Li... | https://api.python.langchain.com/en/latest/llms/langchain.llms.clarifai.Clarifai.html |
39884ea8312a-2 | Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optiona... | https://api.python.langchain.com/en/latest/llms/langchain.llms.clarifai.Clarifai.html |
39884ea8312a-3 | Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImple... | https://api.python.langchain.com/en/latest/llms/langchain.llms.clarifai.Clarifai.html |
3939dfbc5f18-0 | langchain.llms.cohere.completion_with_retry¶
langchain.llms.cohere.completion_with_retry(llm: Cohere, **kwargs: Any) → Any[source]¶
Use tenacity to retry the completion call. | https://api.python.langchain.com/en/latest/llms/langchain.llms.cohere.completion_with_retry.html |
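`completion_with_retry` wraps the provider call in retry logic; per the docstring, LangChain uses the tenacity library for this. The plain-Python loop below sketches the same idea without tenacity: retry transient failures with exponential backoff. The function shape here is a simplified stand-in.

```python
# Retry a flaky call with exponential backoff, re-raising after the
# final attempt fails.
import time
from typing import Any, Callable


def completion_with_retry(
    call: Callable[..., Any],
    max_attempts: int = 3,
    base_delay: float = 0.01,
    **kwargs: Any,
) -> Any:
    for attempt in range(1, max_attempts + 1):
        try:
            return call(**kwargs)
        except Exception:
            if attempt == max_attempts:
                raise  # out of retries: propagate the last error
            time.sleep(base_delay * 2 ** (attempt - 1))  # exponential backoff


attempts = []


def flaky(**kwargs):
    attempts.append(1)
    if len(attempts) < 3:
        raise RuntimeError("transient error")
    return "ok"


print(completion_with_retry(flaky))  # succeeds on the third attempt
```

tenacity expresses the same policy declaratively (stop conditions, wait strategies) instead of an explicit loop.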
4d14cab2576c-0 | langchain.llms.gooseai.GooseAI¶
class langchain.llms.gooseai.GooseAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, client: Any = None, mod... | https://api.python.langchain.com/en/latest/llms/langchain.llms.gooseai.GooseAI.html |
4d14cab2576c-1 | param frequency_penalty: float = 0¶
Penalizes repeated tokens according to frequency.
param gooseai_api_key: Optional[str] = None¶
param logit_bias: Optional[Dict[str, float]] [Optional]¶
Adjust the probability of specific tokens being generated.
param max_tokens: int = 256¶
The maximum number of tokens to generate in ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.gooseai.GooseAI.html |
4d14cab2576c-2 | Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
... | https://api.python.langchain.com/en/latest/llms/langchain.llms.gooseai.GooseAI.html |
4d14cab2576c-3 | Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
get_token_ids(text: str) → List[int]¶
Get the token IDs present in the text.
predi... | https://api.python.langchain.com/en/latest/llms/langchain.llms.gooseai.GooseAI.html |
4d14cab2576c-4 | property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'ignore'¶ | https://api.python.langchain.com/en/latest/llms/langchain.llms.gooseai.GooseAI.html |
c491a0b47a1d-0 | langchain.llms.vertexai.VertexAI¶
class langchain.llms.vertexai.VertexAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, client: _LanguageMo... | https://api.python.langchain.com/en/latest/llms/langchain.llms.vertexai.VertexAI.html |
c491a0b47a1d-1 | param request_parallelism: int = 5¶
The amount of parallelism allowed for requests issued to VertexAI models.
param stop: Optional[List[str]] = None¶
Optional list of stop words to use when generating.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param temperature: float = 0.0¶
Sampling tempera... | https://api.python.langchain.com/en/latest/llms/langchain.llms.vertexai.VertexAI.html |
c491a0b47a1d-2 | classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
dict(*... | https://api.python.langchain.com/en/latest/llms/langchain.llms.vertexai.VertexAI.html |
c491a0b47a1d-3 | Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to p... | https://api.python.langchain.com/en/latest/llms/langchain.llms.vertexai.VertexAI.html |
3bfc2340e427-0 | langchain.llms.openai.completion_with_retry¶
langchain.llms.openai.completion_with_retry(llm: Union[BaseOpenAI, OpenAIChat], **kwargs: Any) → Any[source]¶
Use tenacity to retry the completion call. | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.completion_with_retry.html |
cfb896d229a4-0 | langchain.llms.vertexai.is_codey_model¶
langchain.llms.vertexai.is_codey_model(model_name: str) → bool[source]¶ | https://api.python.langchain.com/en/latest/llms/langchain.llms.vertexai.is_codey_model.html |
3d572236cc2a-0 | langchain.llms.google_palm.generate_with_retry¶
langchain.llms.google_palm.generate_with_retry(llm: GooglePalm, **kwargs: Any) → Any[source]¶
Use tenacity to retry the completion call. | https://api.python.langchain.com/en/latest/llms/langchain.llms.google_palm.generate_with_retry.html |
7cdb7073e231-0 | langchain.llms.base.LLM¶
class langchain.llms.base.LLM(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None)[source]¶
Bases: BaseLLM
LLM class tha... | https://api.python.langchain.com/en/latest/llms/langchain.llms.base.LLM.html |
7cdb7073e231-1 | Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required... | https://api.python.langchain.com/en/latest/llms/langchain.llms.base.LLM.html |
7cdb7073e231-2 | Get the token IDs present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator raise_deprecation » ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.base.LLM.html |
971da2d85faf-0 | langchain.llms.replicate.Replicate¶
class langchain.llms.replicate.Replicate(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, model: str, inp... | https://api.python.langchain.com/en/latest/llms/langchain.llms.replicate.Replicate.html |
971da2d85faf-1 | param replicate_api_token: Optional[str] = None¶
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.replicate.Replicate.html |
971da2d85faf-2 | dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
genera... | https://api.python.langchain.com/en/latest/llms/langchain.llms.replicate.Replicate.html |
971da2d85faf-3 | This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_environment » all fields[source]¶
Validate that the API key and Python package exist in the environment.
prop... | https://api.python.langchain.com/en/latest/llms/langchain.llms.replicate.Replicate.html |
cf4dc4e305d9-0 | langchain.requests.TextRequestsWrapper¶
class langchain.requests.TextRequestsWrapper(*, headers: Optional[Dict[str, str]] = None, aiosession: Optional[ClientSession] = None)[source]¶
Bases: BaseModel
Lightweight wrapper around requests library.
The main purpose of this wrapper is to always return a text output.
Create ... | https://api.python.langchain.com/en/latest/requests/langchain.requests.TextRequestsWrapper.html |
cf4dc4e305d9-1 | POST to the URL and return the text.
put(url: str, data: Dict[str, Any], **kwargs: Any) → str[source]¶
PUT the URL and return the text.
property requests: langchain.requests.Requests¶
model Config[source]¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
extra = 'forbid'¶ | https://api.python.langchain.com/en/latest/requests/langchain.requests.TextRequestsWrapper.html |
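`TextRequestsWrapper` exists to always return the response body as text, whatever HTTP method is used. The sketch below illustrates that contract; the class here is a simplified stand-in around any requests-style session, and a fake session is injected so the example runs offline.

```python
# Wrapper that returns response .text for every method; the session can
# be the real `requests` module or any object with the same interface.
from typing import Any, Dict, Optional


class TextRequestsWrapper:
    def __init__(self, session: Any, headers: Optional[Dict[str, str]] = None):
        self.session = session
        self.headers = headers or {}

    def get(self, url: str, **kwargs: Any) -> str:
        return self.session.get(url, headers=self.headers, **kwargs).text

    def post(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str:
        return self.session.post(url, json=data, headers=self.headers, **kwargs).text


class _FakeResponse:
    def __init__(self, text: str) -> None:
        self.text = text


class _FakeSession:
    def get(self, url, headers=None):
        return _FakeResponse(f"GET {url}")

    def post(self, url, json=None, headers=None):
        return _FakeResponse(f"POST {url}")


wrapper = TextRequestsWrapper(_FakeSession())
print(wrapper.get("https://example.com"))
```

Injecting the session is also what makes the wrapper easy to test, as the fake above shows.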
9146ab47fe09-0 | langchain.requests.Requests¶
class langchain.requests.Requests(*, headers: Optional[Dict[str, str]] = None, aiosession: Optional[ClientSession] = None)[source]¶
Bases: BaseModel
Wrapper around requests to handle auth and async.
The main purpose of this wrapper is to handle authentication (by saving
headers) and enable ... | https://api.python.langchain.com/en/latest/requests/langchain.requests.Requests.html |
9146ab47fe09-1 | GET the URL and return the text.
patch(url: str, data: Dict[str, Any], **kwargs: Any) → Response[source]¶
PATCH the URL and return the text.
post(url: str, data: Dict[str, Any], **kwargs: Any) → Response[source]¶
POST to the URL and return the text.
put(url: str, data: Dict[str, Any], **kwargs: Any) → Response[source]¶... | https://api.python.langchain.com/en/latest/requests/langchain.requests.Requests.html |
a72178c04882-0 | langchain.indexes.vectorstore.VectorstoreIndexCreator¶
class langchain.indexes.vectorstore.VectorstoreIndexCreator(*, vectorstore_cls: ~typing.Type[~langchain.vectorstores.base.VectorStore] = <class 'langchain.vectorstores.chroma.Chroma'>, embedding: ~langchain.embeddings.base.Embeddings = None, text_splitter: ~langcha... | https://api.python.langchain.com/en/latest/indexes/langchain.indexes.vectorstore.VectorstoreIndexCreator.html |
c7d61773abbf-0 | langchain.indexes.graph.GraphIndexCreator¶
class langchain.indexes.graph.GraphIndexCreator(*, llm: ~typing.Optional[~langchain.base_language.BaseLanguageModel] = None, graph_type: ~typing.Type[~langchain.graphs.networkx_graph.NetworkxEntityGraph] = <class 'langchain.graphs.networkx_graph.NetworkxEntityGraph'>)[source]¶... | https://api.python.langchain.com/en/latest/indexes/langchain.indexes.graph.GraphIndexCreator.html |
eb45f8fe35dd-0 | langchain.indexes.vectorstore.VectorStoreIndexWrapper¶
class langchain.indexes.vectorstore.VectorStoreIndexWrapper(*, vectorstore: VectorStore)[source]¶
Bases: BaseModel
Wrapper around a vectorstore for easy access.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError i... | https://api.python.langchain.com/en/latest/indexes/langchain.indexes.vectorstore.VectorStoreIndexWrapper.html |
e7c07d00ee0d-0 | langchain.document_transformers.get_stateful_documents¶
langchain.document_transformers.get_stateful_documents(documents: Sequence[Document]) → Sequence[_DocumentWithState][source]¶
Convert a list of documents to a list of documents with state.
Parameters
documents – The documents to convert.
Returns
A list of document... | https://api.python.langchain.com/en/latest/document_transformers/langchain.document_transformers.get_stateful_documents.html |
7c666d3a7e36-0 | langchain.document_transformers.EmbeddingsRedundantFilter¶
class langchain.document_transformers.EmbeddingsRedundantFilter(*, embeddings: ~langchain.embeddings.base.Embeddings, similarity_fn: ~typing.Callable = <function cosine_similarity>, similarity_threshold: float = 0.95)[source]¶
Bases: BaseDocumentTransformer, Ba... | https://api.python.langchain.com/en/latest/document_transformers/langchain.document_transformers.EmbeddingsRedundantFilter.html |
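`EmbeddingsRedundantFilter` is constructed with an embeddings object, a `cosine_similarity` function, and a `similarity_threshold` of 0.95. The sketch below illustrates the underlying idea: embed each document and drop any whose cosine similarity to an already-kept document meets the threshold. The filtering function and the toy embedding are hypothetical stand-ins, not LangChain's implementation.

```python
# Greedy redundancy filter: keep a document only if it is sufficiently
# dissimilar from everything kept so far.
import math
from typing import Callable, List, Sequence


def cosine_similarity(a: Sequence[float], b: Sequence[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)


def filter_redundant(
    docs: List[str],
    embed: Callable[[str], List[float]],
    similarity_threshold: float = 0.95,
) -> List[str]:
    kept: List[str] = []
    kept_vecs: List[List[float]] = []
    for doc in docs:
        vec = embed(doc)
        if all(cosine_similarity(vec, v) < similarity_threshold for v in kept_vecs):
            kept.append(doc)
            kept_vecs.append(vec)
    return kept


# Toy embeddings: the near-duplicate points almost the same direction.
vecs = {"cats": [1.0, 0.0], "cats!": [0.999, 0.01], "dogs": [0.0, 1.0]}
print(filter_redundant(list(vecs), vecs.__getitem__))  # drops the near-duplicate
```

The greedy pass is order-dependent: the first of a redundant pair survives, which is usually acceptable for deduplicating retrieved context.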
79c6666e27a2-0 | langchain.experimental.plan_and_execute.schema.ListStepContainer¶
class langchain.experimental.plan_and_execute.schema.ListStepContainer(*, steps: List[Tuple[Step, StepResponse]] = None)[source]¶
Bases: BaseModel
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if t... | https://api.python.langchain.com/en/latest/experimental/langchain.experimental.plan_and_execute.schema.ListStepContainer.html |
08875b4e3983-0 | langchain.experimental.llms.jsonformer_decoder.import_jsonformer¶
langchain.experimental.llms.jsonformer_decoder.import_jsonformer() → jsonformer[source]¶
Lazily import jsonformer. | https://api.python.langchain.com/en/latest/experimental/langchain.experimental.llms.jsonformer_decoder.import_jsonformer.html |
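`import_jsonformer` defers the import of an optional dependency until it is actually used. The helper below sketches that lazy-import pattern in general form; the stdlib `json` module stands in for the optional package so the example runs anywhere, and the install hint text is illustrative.

```python
# Lazy import: resolve the module on first use and fail with an
# actionable install hint if it is missing.
import importlib


def lazy_import(module_name: str, install_hint: str):
    """Import a module on demand, with a helpful error if absent."""
    try:
        return importlib.import_module(module_name)
    except ImportError as exc:
        raise ImportError(
            f"Could not import {module_name}. {install_hint}"
        ) from exc


json_mod = lazy_import("json", "Install it with `pip install jsonformer`.")
print(json_mod.dumps({"ok": True}))
```

Deferring the import keeps the optional package out of the library's hard dependency set while still giving users a clear error at the point of use.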
langchain.experimental.plan_and_execute.planners.base.BasePlanner¶
class langchain.experimental.plan_and_execute.planners.base.BasePlanner[source]¶
Bases: BaseModel
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model. | https://api.python.langchain.com/en/latest/experimental/langchain.experimental.plan_and_execute.planners.base.BasePlanner.html
langchain.experimental.plan_and_execute.planners.chat_planner.PlanningOutputParser¶
class langchain.experimental.plan_and_execute.planners.chat_planner.PlanningOutputParser[source]¶
Bases: PlanOutputParser
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model. | https://api.python.langchain.com/en/latest/experimental/langchain.experimental.plan_and_execute.planners.chat_planner.PlanningOutputParser.html
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
extra = 'ignore'¶
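A planning output parser turns the LLM's numbered plan text into discrete steps. A stand-alone sketch of that parsing step (the real parser's prompt format and `Plan`/`Step` wrapper objects are omitted; `parse_plan` is illustrative):

```python
import re
from typing import List

def parse_plan(text: str) -> List[str]:
    # Split the model output on leading "1.", "2.", ... markers
    # and discard empty fragments.
    steps = re.split(r"\n\s*\d+\.\s*", "\n" + text)
    return [s.strip() for s in steps if s.strip()]

plan = parse_plan("1. Search the web\n2. Summarize results\n3. Write the answer")
```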
langchain.experimental.autonomous_agents.autogpt.output_parser.preprocess_json_input¶
langchain.experimental.autonomous_agents.autogpt.output_parser.preprocess_json_input(input_str: str) → str[source]¶
Preprocesses a string to be parsed as json.
Replace single backslashes with double backslashes,
while leaving already escaped ones intact. | https://api.python.langchain.com/en/latest/experimental/langchain.experimental.autonomous_agents.autogpt.output_parser.preprocess_json_input.html
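A stand-alone sketch of the described behavior; the exact set of escape sequences treated as "already escaped" is an assumption based on JSON's escape grammar:

```python
import re

def preprocess_json_input(input_str: str) -> str:
    # Double any backslash that does not already begin a valid JSON
    # escape sequence (\" \\ \/ \b \f \n \r \t \uXXXX), leaving
    # escaped sequences untouched.
    return re.sub(r'(?<!\\)\\(?!["\\/bfnrt]|u[0-9a-fA-F]{4})', r"\\\\", input_str)

fixed = preprocess_json_input('{"path": "C:\\Users"}')
```

This makes strings such as Windows paths parseable by `json.loads` without corrupting sequences like `\n` that are already legal JSON escapes.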
langchain.experimental.plan_and_execute.executors.base.BaseExecutor¶
class langchain.experimental.plan_and_execute.executors.base.BaseExecutor[source]¶
Bases: BaseModel
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model. | https://api.python.langchain.com/en/latest/experimental/langchain.experimental.plan_and_execute.executors.base.BaseExecutor.html
langchain.experimental.plan_and_execute.agent_executor.PlanAndExecute¶
class langchain.experimental.plan_and_execute.agent_executor.PlanAndExecute(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = ... | https://api.python.langchain.com/en/latest/experimental/langchain.experimental.plan_and_execute.agent_executor.PlanAndExecute.html |
There are many different types of memory - please see memory docs for the full catalog.
param output_key: str = 'output'¶
param planner: langchain.experimental.plan_and_execute.planners.base.BasePlanner [Required]¶
param step_container: langchain.experimental.plan_and_execute.schema.BaseStepContainer [Optional]¶
param tags: Optional[List[str]] = None¶
Optional list of tags associated... | https://api.python.langchain.com/en/latest/experimental/langchain.experimental.plan_and_execute.agent_executor.PlanAndExecute.html |
include_run_info – Whether to include run info in the response. Defaults
to False.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, include_run_info: bool = False) → ...
Validate and prep inputs.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prep outputs.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
run(*args: Any, callbacks: Optional[Union[List[BaseCa...
Return whether or not the class is serializable.
property output_keys: List[str]¶
Output keys this chain expects.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
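Putting the pieces together: plan-and-execute runs a planner once up front, feeds each step to an executor, and collects (step, response) pairs in a step container, returning the final response. A toy sketch with callable stand-ins for the planner and executor (all names here are illustrative, not LangChain's API):

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class Step:
    value: str

@dataclass
class StepResponse:
    response: str

@dataclass
class ListStepContainer:
    steps: List[Tuple[Step, StepResponse]] = field(default_factory=list)

    def add_step(self, step: Step, resp: StepResponse) -> None:
        self.steps.append((step, resp))

    def get_final_response(self) -> str:
        return self.steps[-1][1].response

def plan_and_execute(inputs: str,
                     planner: Callable[[str], List[Step]],
                     executor: Callable[[Step], StepResponse]) -> str:
    # 1) plan the whole task up front, 2) execute each step in order,
    # 3) return the response of the final step.
    container = ListStepContainer()
    for step in planner(inputs):
        container.add_step(step, executor(step))
    return container.get_final_response()

result = plan_and_execute(
    "2 + 3",
    planner=lambda q: [Step("parse " + q), Step("compute " + q)],
    executor=lambda s: StepResponse("done: " + s.value),
)
```

In the real chain the planner and executor are LLM-backed `BasePlanner`/`BaseExecutor` objects rather than plain callables.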
langchain.experimental.plan_and_execute.schema.BaseStepContainer¶
class langchain.experimental.plan_and_execute.schema.BaseStepContainer[source]¶
Bases: BaseModel
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model. | https://api.python.langchain.com/en/latest/experimental/langchain.experimental.plan_and_execute.schema.BaseStepContainer.html
langchain.experimental.plan_and_execute.executors.agent_executor.load_agent_executor¶
langchain.experimental.plan_and_execute.executors.agent_executor.load_agent_executor(llm: BaseLanguageModel, tools: List[BaseTool], verbose: bool = False, include_task_in_prompt: bool = False) → ChainExecutor[source]¶
Load an agent executor. | https://api.python.langchain.com/en/latest/experimental/langchain.experimental.plan_and_execute.executors.agent_executor.load_agent_executor.html
langchain.experimental.autonomous_agents.autogpt.memory.AutoGPTMemory¶
class langchain.experimental.autonomous_agents.autogpt.memory.AutoGPTMemory(*, chat_memory: BaseChatMessageHistory = None, output_key: Optional[str] = None, input_key: Optional[str] = None, return_messages: bool = False, retriever: VectorStoreRetrie... | https://api.python.langchain.com/en/latest/experimental/langchain.experimental.autonomous_agents.autogpt.memory.AutoGPTMemory.html |
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
property memory_variables: List[str]¶
Input keys this memory class will load dynamically.
model Config¶
Bases: object
Configuration for this pydantic object.
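AutoGPT-style memory combines documents fetched by a vectorstore retriever with the tail of the chat history. A toy sketch of that `load_memory_variables` shape (class and key names are illustrative, and a plain callable stands in for the retriever):

```python
from typing import Callable, Dict, List

class RetrieverBackedMemory:
    # Toy sketch: merge retrieved documents relevant to the current
    # input with the most recent chat messages.
    def __init__(self, retrieve: Callable[[str], List[str]],
                 num_recent: int = 4) -> None:
        self.retrieve = retrieve
        self.num_recent = num_recent
        self.chat_history: List[str] = []

    def load_memory_variables(self, inputs: Dict[str, str]) -> Dict[str, object]:
        query = inputs["input"]
        return {
            "relevant_docs": self.retrieve(query),
            "recent_messages": self.chat_history[-self.num_recent:],
        }

mem = RetrieverBackedMemory(retrieve=lambda q: [f"doc about {q}"], num_recent=2)
mem.chat_history += ["hi", "hello", "how are you?"]
vars_ = mem.load_memory_variables({"input": "cats"})
```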
langchain.experimental.plan_and_execute.schema.StepResponse¶
class langchain.experimental.plan_and_execute.schema.StepResponse(*, response: str)[source]¶
Bases: BaseModel
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model. | https://api.python.langchain.com/en/latest/experimental/langchain.experimental.plan_and_execute.schema.StepResponse.html
langchain.experimental.generative_agents.memory.GenerativeAgentMemory¶
class langchain.experimental.generative_agents.memory.GenerativeAgentMemory(*, llm: BaseLanguageModel, memory_retriever: TimeWeightedVectorStoreRetriever, verbose: bool = False, reflection_threshold: Optional[float] = None, current_plan: List[str] =... | https://api.python.langchain.com/en/latest/experimental/langchain.experimental.generative_agents.memory.GenerativeAgentMemory.html |
The retriever to fetch related memories.
param most_recent_memories_key: str = 'most_recent_memories'¶
param most_recent_memories_token_key: str = 'recent_memories_token'¶
param now_key: str = 'now'¶
param queries_key: str = 'queries'¶
param reflecting: bool = False¶
param reflection_threshold: Optional[float] = None¶
...
Save the context of this model run to memory.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the constructor.
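A distinctive feature of generative-agent memory is the `reflection_threshold`: importance scores accumulate as memories are saved, and crossing the threshold triggers a reflection step that distills insights from recent memories. A toy sketch of that trigger, with a string stand-in for the LLM summarization call (the class is illustrative, not LangChain's):

```python
from typing import List, Optional, Tuple

class ReflectingMemory:
    # Toy sketch: store (text, importance) memories and trigger a
    # "reflection" once accumulated importance crosses a threshold.
    def __init__(self, reflection_threshold: Optional[float] = None) -> None:
        self.memories: List[Tuple[str, float]] = []
        self.reflections: List[str] = []
        self.reflection_threshold = reflection_threshold
        self._aggregate = 0.0

    def add_memory(self, text: str, importance: float) -> None:
        self.memories.append((text, importance))
        self._aggregate += importance
        if (self.reflection_threshold is not None
                and self._aggregate > self.reflection_threshold):
            # Stand-in for an LLM call that summarizes recent memories.
            self.reflections.append(f"insight over {len(self.memories)} memories")
            self._aggregate = 0.0  # reset after reflecting

mem = ReflectingMemory(reflection_threshold=1.0)
mem.add_memory("saw a cat", 0.3)
mem.add_memory("ate lunch", 0.2)
mem.add_memory("won the lottery", 0.9)
```

Only the third, high-importance memory pushes the aggregate over the threshold, so exactly one reflection fires.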
langchain.experimental.autonomous_agents.baby_agi.task_prioritization.TaskPrioritizationChain¶
class langchain.experimental.autonomous_agents.baby_agi.task_prioritization.TaskPrioritizationChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, ca... | https://api.python.langchain.com/en/latest/experimental/langchain.experimental.autonomous_agents.baby_agi.task_prioritization.TaskPrioritizationChain.html |
There are many different types of memory - please see memory docs
for the full catalog.
param output_key: str = 'text'¶
param output_parser: BaseLLMOutputParser [Optional]¶
Output parser to use.
Defaults to one that takes the most likely string but does not change it
otherwise.
param prompt: BasePromptTemplate [Required]¶
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info – Whether to include run info in the response. Defaults
to False.
async aapply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[Base...
Generate LLM result from inputs.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Utilize the LLM generate method for speed gains.
apply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallba...
Create outputs from response.
dict(**kwargs: Any) → Dict¶
Return dictionary representation of chain.
classmethod from_llm(llm: BaseLanguageModel, verbose: bool = True) → LLMChain[source]¶
Get the response parser.
classmethod from_string(llm: BaseLanguageModel, template: str) → LLMChain¶
Create LLMChain from LLM and template.
Raise deprecation warning if callback_manager is used.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, **kwargs: Any) → str¶
Run the chain as text in, text out or multiple variables, text out.
save(file_path: Union[Path, str]) → None¶
...
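BabyAGI-style task prioritization formats the open task list into a prompt, asks the LLM for a renumbered list ordered by relevance to the objective, and parses the result back into task dicts. A toy sketch with a canned response standing in for the model call (function and key names are illustrative, not LangChain's API):

```python
from typing import Callable, Dict, List

def prioritize_tasks(tasks: List[Dict[str, str]],
                     objective: str,
                     llm: Callable[[str], str]) -> List[Dict[str, str]]:
    # Build the reprioritization prompt from the open task names.
    prompt = (
        f"Reprioritize these tasks for the objective: {objective}\n"
        + "\n".join(t["task_name"] for t in tasks)
    )
    # Parse "N. task name" lines from the model's reply.
    reordered: List[Dict[str, str]] = []
    for line in llm(prompt).splitlines():
        parts = line.strip().split(".", 1)
        if len(parts) == 2 and parts[0].strip().isdigit():
            reordered.append(
                {"task_id": parts[0].strip(), "task_name": parts[1].strip()}
            )
    return reordered

def fake_llm(prompt: str) -> str:
    # Canned reply standing in for a real LLM call.
    return "1. research topic\n2. draft outline"

new_tasks = prioritize_tasks(
    [{"task_name": "draft outline"}, {"task_name": "research topic"}],
    objective="write an article",
    llm=fake_llm,
)
```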