| id | text | source |
|---|---|---|
97baff38bd83-3 | Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Asynchronously... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.AzureOpenAI.html |
97baff38bd83-4 | Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrenc... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.AzureOpenAI.html |
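The stop-word behavior described in these entries (model output cut off at the first occurrence of any stop substring) can be sketched as a small helper. `enforce_stop` is a hypothetical name used here for illustration, not the library's public API:

```python
def enforce_stop(text: str, stop: list) -> str:
    """Cut `text` at the earliest occurrence of any stop substring."""
    cut = len(text)
    for s in stop:
        i = text.find(s)
        if i != -1:
            cut = min(cut, i)
    return text[:cut]
```

Passing `stop=["\nObservation:"]`, for example, truncates everything from that marker onward.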
97baff38bd83-5 | Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclu... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.AzureOpenAI.html |
97baff38bd83-6 | Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of pr... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.AzureOpenAI.html |
97baff38bd83-7 | get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model’s context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_sub_prompts(params: Dict... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.AzureOpenAI.html |
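As documented above, `get_num_tokens_from_messages` simply sums per-message token counts. A minimal sketch, assuming a naive whitespace tokenizer in place of a real subword tokenizer (both function names mirror the documented methods, but this is not the library's implementation):

```python
def get_num_tokens(text: str) -> int:
    """Naive token count: whitespace-split words (real LLMs use subword tokenizers)."""
    return len(text.split())

def get_num_tokens_from_messages(messages: list) -> int:
    """Sum token counts across all messages, per the documented behavior."""
    return sum(get_num_tokens(m) for m in messages)
```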
97baff38bd83-8 | Example
max_tokens = openai.max_tokens_for_prompt("Tell me a joke.")
static modelname_to_contextsize(modelname: str) → int¶
Calculate the maximum number of tokens possible to generate for a model.
Parameters
modelname – The modelname we want to know the context size for.
Returns
The maximum context size
Example
max_toke... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.AzureOpenAI.html |
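The pair of methods above can be sketched in pure Python: a lookup table from model name to context size, and a prompt-budget helper that subtracts the prompt's token count. The table entries here are a small illustrative subset (the real table covers many more models), and the whitespace token count stands in for a real subword tokenizer:

```python
CONTEXT_SIZES = {
    # Illustrative subset; the real lookup covers many more models.
    "text-davinci-003": 4097,
    "gpt-3.5-turbo": 4096,
    "gpt-4": 8192,
}

def modelname_to_contextsize(modelname: str) -> int:
    """Return the maximum context size for a known model name."""
    try:
        return CONTEXT_SIZES[modelname]
    except KeyError:
        raise ValueError(f"Unknown model: {modelname}")

def max_tokens_for_prompt(modelname: str, prompt: str) -> int:
    """Context size minus the prompt's (naive) token count."""
    return modelname_to_contextsize(modelname) - len(prompt.split())
```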
97baff38bd83-9 | Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Retur... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.AzureOpenAI.html |
97baff38bd83-10 | property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.AzureOpenAI.html |
7dde71060e11-0 | langchain.llms.edenai.EdenAI¶
class langchain.llms.edenai.EdenAI[source]¶
Bases: LLM
Wrapper around EdenAI models.
To use, you should have
the environment variable EDENAI_API_KEY set with your API token.
You can find your token here: https://app.edenai.run/admin/account/settings
feature and subfeature are required, but... | https://api.python.langchain.com/en/latest/llms/langchain.llms.edenai.EdenAI.html |
7dde71060e11-1 | Stop sequences to use.
param subfeature: Literal['generation'] = 'generation'¶
Subfeature of above feature, use generation by default
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.edenai.EdenAI.html |
7dde71060e11-2 | Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are ag... | https://api.python.langchain.com/en/latest/llms/langchain.llms.edenai.EdenAI.html |
7dde71060e11-3 | **kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Asynchronously pass messages to the model and ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.edenai.EdenAI.html |
7dde71060e11-4 | Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally... | https://api.python.langchain.com/en/latest/llms/langchain.llms.edenai.EdenAI.html |
7dde71060e11-5 | Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agno... | https://api.python.langchain.com/en/latest/llms/langchain.llms.edenai.EdenAI.html |
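The batched `generate` contract described here (a list of prompts in, a list of candidate generations per prompt out) can be sketched with a plain function and a stand-in model. `generate` and `fake_model` are hypothetical names; the real method returns a structured `LLMResult`, while this sketch returns the bare list-of-lists shape:

```python
def generate(prompts, model_fn, n=1):
    """Return one list of n candidate generations per input prompt."""
    return [[model_fn(p) for _ in range(n)] for p in prompts]

def fake_model(prompt: str) -> str:
    """Stand-in for a provider call: just uppercase the prompt."""
    return prompt.upper()
```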
7dde71060e11-6 | Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] =... | https://api.python.langchain.com/en/latest/llms/langchain.llms.edenai.EdenAI.html |
7dde71060e11-7 | Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
fir... | https://api.python.langchain.com/en/latest/llms/langchain.llms.edenai.EdenAI.html |
7dde71060e11-8 | stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_ref... | https://api.python.langchain.com/en/latest/llms/langchain.llms.edenai.EdenAI.html |
682747c3cb14-0 | langchain.llms.fireworks.BaseFireworks¶
class langchain.llms.fireworks.BaseFireworks[source]¶
Bases: BaseLLM
Wrapper around Fireworks large language models.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
pa... | https://api.python.langchain.com/en/latest/llms/langchain.llms.fireworks.BaseFireworks.html |
682747c3cb14-1 | param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Check Cache... | https://api.python.langchain.com/en/latest/llms/langchain.llms.fireworks.BaseFireworks.html |
682747c3cb14-2 | need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any languag... | https://api.python.langchain.com/en/latest/llms/langchain.llms.fireworks.BaseFireworks.html |
682747c3cb14-3 | Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off a... | https://api.python.langchain.com/en/latest/llms/langchain.llms.fireworks.BaseFireworks.html |
682747c3cb14-4 | Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creat... | https://api.python.langchain.com/en/latest/llms/langchain.llms.fireworks.BaseFireworks.html |
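The `copy()` semantics documented above (exclude takes precedence over include; update values are not validated) can be illustrated on a plain dict. `copy_fields` is a hypothetical helper mimicking the documented behavior, not pydantic itself:

```python
def copy_fields(data: dict, include=None, exclude=None, update=None) -> dict:
    """Mimic the documented copy() field selection on a plain dict."""
    keys = set(data) if include is None else set(include) & set(data)
    if exclude:
        keys -= set(exclude)          # exclude takes precedence over include
    out = {k: data[k] for k in keys}
    if update:
        out.update(update)            # note: not validated, per the docs
    return out
```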
682747c3cb14-5 | API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptVal... | https://api.python.langchain.com/en/latest/llms/langchain.llms.fireworks.BaseFireworks.html |
682747c3cb14-6 | get_token_ids(text: str) → List[int]¶
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.fireworks.BaseFireworks.html |
682747c3cb14-7 | Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
fir... | https://api.python.langchain.com/en/latest/llms/langchain.llms.fireworks.BaseFireworks.html |
682747c3cb14-8 | stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_ref... | https://api.python.langchain.com/en/latest/llms/langchain.llms.fireworks.BaseFireworks.html |
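The `stream` signature above returns an `Iterator[str]` of incremental chunks; joining the chunks reproduces the full output. A minimal sketch with a canned echo "completion" (the chunking scheme here is purely illustrative):

```python
from typing import Iterator

def stream(input_text: str) -> Iterator[str]:
    """Yield a canned completion in word-sized chunks, echoing the input."""
    for i, word in enumerate(input_text.split()):
        yield (" " if i else "") + word
```

A consumer can print each chunk as it arrives, or accumulate them with `"".join(...)`.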
c65ed7560c92-0 | langchain.llms.xinference.Xinference¶
class langchain.llms.xinference.Xinference[source]¶
Bases: LLM
Wrapper for accessing Xinference’s large-scale model inference service.
To use, you should have the xinference library installed:
.. code-block:: bash
pip install "xinference[all]"
Check out: https://github.com/xorbitsa... | https://api.python.langchain.com/en/latest/llms/langchain.llms.xinference.Xinference.html |
c65ed7560c92-1 | Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param client: Any = None¶
... | https://api.python.langchain.com/en/latest/llms/langchain.llms.xinference.Xinference.html |
c65ed7560c92-2 | async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any]... | https://api.python.langchain.com/en/latest/llms/langchain.llms.xinference.Xinference.html |
c65ed7560c92-3 | to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs... | https://api.python.langchain.com/en/latest/llms/langchain.llms.xinference.Xinference.html |
c65ed7560c92-4 | batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod cons... | https://api.python.langchain.com/en/latest/llms/langchain.llms.xinference.Xinference.html |
c65ed7560c92-5 | classmethod from_orm(obj: Any) → Model¶
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metada... | https://api.python.langchain.com/en/latest/llms/langchain.llms.xinference.Xinference.html |
c65ed7560c92-6 | to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model’s context window.
Pa... | https://api.python.langchain.com/en/latest/llms/langchain.llms.xinference.Xinference.html |
c65ed7560c92-7 | Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol... | https://api.python.langchain.com/en/latest/llms/langchain.llms.xinference.Xinference.html |
c65ed7560c92-8 | **kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/... | https://api.python.langchain.com/en/latest/llms/langchain.llms.xinference.Xinference.html |
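The `save` method above writes the LLM's parameters to the given path. A hedged sketch of the idea, serializing a parameter dict to a `.json` file with the standard library (`save_llm` is a hypothetical helper; the real method also supports YAML):

```python
import json
import tempfile
from pathlib import Path

def save_llm(params: dict, file_path) -> None:
    """Write an LLM's parameter dict to a .json file (sketch of save())."""
    path = Path(file_path)
    if path.suffix != ".json":
        raise ValueError("This sketch only handles .json files")
    path.write_text(json.dumps(params, indent=2))
```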
c65ed7560c92-9 | property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is s... | https://api.python.langchain.com/en/latest/llms/langchain.llms.xinference.Xinference.html |
15e2d4d6988e-0 | langchain.llms.mlflow_ai_gateway.MlflowAIGateway¶
class langchain.llms.mlflow_ai_gateway.MlflowAIGateway[source]¶
Bases: LLM
Wrapper around completions LLMs in the MLflow AI Gateway.
To use, you should have the mlflow[gateway] python package installed.
For more information, see https://mlflow.org/docs/latest/gateway/in... | https://api.python.langchain.com/en/latest/llms/langchain.llms.mlflow_ai_gateway.MlflowAIGateway.html |
15e2d4d6988e-1 | Check Cache and run the LLM on the given prompt and input.
async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
async agenerate(prompts: List[str], stop: Optional[Li... | https://api.python.langchain.com/en/latest/llms/langchain.llms.mlflow_ai_gateway.MlflowAIGateway.html |
15e2d4d6988e-2 | text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwarg... | https://api.python.langchain.com/en/latest/llms/langchain.llms.mlflow_ai_gateway.MlflowAIGateway.html |
15e2d4d6988e-3 | first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[... | https://api.python.langchain.com/en/latest/llms/langchain.llms.mlflow_ai_gateway.MlflowAIGateway.html |
15e2d4d6988e-4 | the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
classmethod from_orm(obj: Any) → Model¶
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHa... | https://api.python.langchain.com/en/latest/llms/langchain.llms.mlflow_ai_gateway.MlflowAIGateway.html |
15e2d4d6988e-5 | first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which co... | https://api.python.langchain.com/en/latest/llms/langchain.llms.mlflow_ai_gateway.MlflowAIGateway.html |
15e2d4d6988e-6 | json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Cal... | https://api.python.langchain.com/en/latest/llms/langchain.llms.mlflow_ai_gateway.MlflowAIGateway.html |
15e2d4d6988e-7 | to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.mlflow_ai_gateway.MlflowAIGateway.html |
15e2d4d6988e-8 | classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Out... | https://api.python.langchain.com/en/latest/llms/langchain.llms.mlflow_ai_gateway.MlflowAIGateway.html |
e71c7954f6aa-0 | langchain.llms.cohere.completion_with_retry¶
langchain.llms.cohere.completion_with_retry(llm: Cohere, **kwargs: Any) → Any[source]¶
Use tenacity to retry the completion call. | https://api.python.langchain.com/en/latest/llms/langchain.llms.cohere.completion_with_retry.html |
d6760bc5b09b-0 | langchain.llms.koboldai.clean_url¶
langchain.llms.koboldai.clean_url(url: str) → str[source]¶
Remove trailing slash and /api from url if present. | https://api.python.langchain.com/en/latest/llms/langchain.llms.koboldai.clean_url.html |
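The documented behavior (strip a trailing slash and a trailing `/api` segment, if present) is simple enough to sketch directly. This is a re-derivation from the docstring, not the library's actual implementation:

```python
def clean_url(url: str) -> str:
    """Strip a trailing slash, then a trailing '/api' segment, if present."""
    if url.endswith("/"):
        url = url[:-1]
    if url.endswith("/api"):
        url = url[: -len("/api")]
    return url
```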
10db2c2ff903-0 | langchain.llms.manifest.ManifestWrapper¶
class langchain.llms.manifest.ManifestWrapper[source]¶
Bases: LLM
HazyResearch’s Manifest library.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Option... | https://api.python.langchain.com/en/latest/llms/langchain.llms.manifest.ManifestWrapper.html |
10db2c2ff903-1 | async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any]... | https://api.python.langchain.com/en/latest/llms/langchain.llms.manifest.ManifestWrapper.html |
10db2c2ff903-2 | to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs... | https://api.python.langchain.com/en/latest/llms/langchain.llms.manifest.ManifestWrapper.html |
10db2c2ff903-3 | batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod cons... | https://api.python.langchain.com/en/latest/llms/langchain.llms.manifest.ManifestWrapper.html |
10db2c2ff903-4 | classmethod from_orm(obj: Any) → Model¶
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metada... | https://api.python.langchain.com/en/latest/llms/langchain.llms.manifest.ManifestWrapper.html |
10db2c2ff903-5 | to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model’s context window.
Pa... | https://api.python.langchain.com/en/latest/llms/langchain.llms.manifest.ManifestWrapper.html |
10db2c2ff903-6 | Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol... | https://api.python.langchain.com/en/latest/llms/langchain.llms.manifest.ManifestWrapper.html |
10db2c2ff903-7 | **kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/... | https://api.python.langchain.com/en/latest/llms/langchain.llms.manifest.ManifestWrapper.html |
10db2c2ff903-8 | property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is s... | https://api.python.langchain.com/en/latest/llms/langchain.llms.manifest.ManifestWrapper.html |
99dcdfb78892-0 | langchain.llms.base.BaseLLM¶
class langchain.llms.base.BaseLLM[source]¶
Bases: BaseLanguageModel[str], ABC
Base LLM abstract interface.
It should take in a prompt and return a string.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parse... | https://api.python.langchain.com/en/latest/llms/langchain.llms.base.BaseLLM.html |
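The abstract contract described above (take in a prompt, return a string, with subclasses supplying the actual model call) can be sketched without langchain. `SketchBaseLLM` and `EchoLLM` are hypothetical names; the real base class adds caching, callbacks, and batching on top of this core:

```python
from abc import ABC, abstractmethod
from typing import List, Optional

class SketchBaseLLM(ABC):
    """Minimal sketch of the BaseLLM contract: prompt in, string out."""

    @abstractmethod
    def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str:
        """Subclasses implement the actual model call."""

    def __call__(self, prompt: str, stop: Optional[List[str]] = None) -> str:
        text = self._call(prompt, stop)
        for s in stop or []:  # cut output at stop substrings, as documented
            i = text.find(s)
            if i != -1:
                text = text[:i]
        return text

class EchoLLM(SketchBaseLLM):
    """Toy subclass: echoes the prompt with a fixed tail."""
    def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str:
        return prompt + " STOP tail"
```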
99dcdfb78892-1 | async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any]... | https://api.python.langchain.com/en/latest/llms/langchain.llms.base.BaseLLM.html |
99dcdfb78892-2 | to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs... | https://api.python.langchain.com/en/latest/llms/langchain.llms.base.BaseLLM.html |
99dcdfb78892-3 | to the model provider API call.
Returns
Top model prediction as a message.
async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str][source]¶
batch(inputs: List[Union[PromptValue, str, List[BaseMessa... | https://api.python.langchain.com/en/latest/llms/langchain.llms.base.BaseLLM.html |
99dcdfb78892-4 | deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict[source]¶
Return a dictionary of the LLM.
classmethod from_orm(obj: Any) → Model¶
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None,... | https://api.python.langchain.com/en/latest/llms/langchain.llms.base.BaseLLM.html |
99dcdfb78892-5 | first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which co... | https://api.python.langchain.com/en/latest/llms/langchain.llms.base.BaseLLM.html |
99dcdfb78892-6 | json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Cal... | https://api.python.langchain.com/en/latest/llms/langchain.llms.base.BaseLLM.html |
99dcdfb78892-7 | to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage[source]¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If y... | https://api.python.langchain.com/en/latest/llms/langchain.llms.base.BaseLLM.html |
99dcdfb78892-8 | classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Out... | https://api.python.langchain.com/en/latest/llms/langchain.llms.base.BaseLLM.html |
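The `with_fallbacks` signature above wraps a runnable so that the listed exception types trigger the next fallback in sequence. A pure-Python sketch of that control flow over plain callables (a simplification of the Runnable machinery, not the library's implementation):

```python
def with_fallbacks(primary, fallbacks, exceptions_to_handle=(Exception,)):
    """Return a callable that tries `primary`, then each fallback in order."""
    def run(x):
        last_exc = None
        for fn in (primary, *fallbacks):
            try:
                return fn(x)
            except exceptions_to_handle as exc:
                last_exc = exc  # remember and try the next candidate
        raise last_exc
    return run
```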
e89c5df52a4a-0 | langchain.llms.ctransformers.CTransformers¶
class langchain.llms.ctransformers.CTransformers[source]¶
Bases: LLM
C Transformers LLM models.
To use, you should have the ctransformers python package installed.
See https://github.com/marella/ctransformers
Example
from langchain.llms import CTransformers
llm = CTransformer... | https://api.python.langchain.com/en/latest/llms/langchain.llms.ctransformers.CTransformers.html |
e89c5df52a4a-1 | param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Check Cache... | https://api.python.langchain.com/en/latest/llms/langchain.llms.ctransformers.CTransformers.html |
e89c5df52a4a-2 | need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any languag... | https://api.python.langchain.com/en/latest/llms/langchain.llms.ctransformers.CTransformers.html |
e89c5df52a4a-3 | Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off a... | https://api.python.langchain.com/en/latest/llms/langchain.llms.ctransformers.CTransformers.html |
e89c5df52a4a-4 | Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creat... | https://api.python.langchain.com/en/latest/llms/langchain.llms.ctransformers.CTransformers.html |
e89c5df52a4a-5 | Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these subst... | https://api.python.langchain.com/en/latest/llms/langchain.llms.ctransformers.CTransformers.html |
e89c5df52a4a-6 | json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Cal... | https://api.python.langchain.com/en/latest/llms/langchain.llms.ctransformers.CTransformers.html |
e89c5df52a4a-7 | to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.ctransformers.CTransformers.html |
e89c5df52a4a-8 | classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Out... | https://api.python.langchain.com/en/latest/llms/langchain.llms.ctransformers.CTransformers.html |
dad9bd7ce7e7-0 | langchain.llms.aviary.get_completions¶
langchain.llms.aviary.get_completions(model: str, prompt: str, use_prompt_format: bool = True, version: str = '') → Dict[str, Union[str, float, int]][source]¶
Get completions from Aviary models. | https://api.python.langchain.com/en/latest/llms/langchain.llms.aviary.get_completions.html |
33c82e1c2ba3-0 | langchain.llms.amazon_api_gateway.AmazonAPIGateway¶
class langchain.llms.amazon_api_gateway.AmazonAPIGateway[source]¶
Bases: LLM
Amazon API Gateway to access LLM models hosted on AWS.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parse... | https://api.python.langchain.com/en/latest/llms/langchain.llms.amazon_api_gateway.AmazonAPIGateway.html |
33c82e1c2ba3-1 | Check Cache and run the LLM on the given prompt and input.
async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
async agenerate(prompts: List[str], stop: Optional[Li... | https://api.python.langchain.com/en/latest/llms/langchain.llms.amazon_api_gateway.AmazonAPIGateway.html |
33c82e1c2ba3-2 | text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwarg... | https://api.python.langchain.com/en/latest/llms/langchain.llms.amazon_api_gateway.AmazonAPIGateway.html |
33c82e1c2ba3-3 | first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[... | https://api.python.langchain.com/en/latest/llms/langchain.llms.amazon_api_gateway.AmazonAPIGateway.html |
33c82e1c2ba3-4 | the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
classmethod from_orm(obj: Any) → Model¶
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHa... | https://api.python.langchain.com/en/latest/llms/langchain.llms.amazon_api_gateway.AmazonAPIGateway.html |
33c82e1c2ba3-5 | first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which co... | https://api.python.langchain.com/en/latest/llms/langchain.llms.amazon_api_gateway.AmazonAPIGateway.html |
33c82e1c2ba3-6 | json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Cal... | https://api.python.langchain.com/en/latest/llms/langchain.llms.amazon_api_gateway.AmazonAPIGateway.html |
33c82e1c2ba3-7 | to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.amazon_api_gateway.AmazonAPIGateway.html |
33c82e1c2ba3-8 | classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Out... | https://api.python.langchain.com/en/latest/llms/langchain.llms.amazon_api_gateway.AmazonAPIGateway.html |
113726c82884-0 | langchain.llms.fireworks.execute¶
langchain.llms.fireworks.execute(prompt: str, model: str, api_key: Optional[str], max_tokens: int = 256, temperature: float = 0.0, top_p: float = 1.0) → Any[source]¶
Execute LLM query | https://api.python.langchain.com/en/latest/llms/langchain.llms.fireworks.execute.html |
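The signature above maps naturally onto a request payload. A hedged sketch of that mapping (the field names below are assumptions for illustration, not the Fireworks API's actual wire format):

```python
# Hypothetical payload builder mirroring execute()'s parameters; the real
# helper sends something like this to the Fireworks completion endpoint.
def build_payload(prompt: str, model: str, max_tokens: int = 256,
                  temperature: float = 0.0, top_p: float = 1.0) -> dict:
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
        "top_p": top_p,
    }
```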
1a0a992ca971-0 | langchain.llms.rwkv.RWKV¶
class langchain.llms.rwkv.RWKV[source]¶
Bases: LLM, BaseModel
RWKV language models.
To use, you should have the rwkv python package installed, the
pre-trained model file, and the model’s config information.
Example
from langchain.llms import RWKV
model = RWKV(model="./models/rwkv-3b-fp16.bin",... | https://api.python.langchain.com/en/latest/llms/langchain.llms.rwkv.RWKV.html |
1a0a992ca971-1 | Token context window.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param temperature: float = 1.0¶
The temperature to use for sampling.
param tokens_path: str [Required]¶
Path to the RWKV tokens file.
param top_p: float = 0.5¶
The top-p value to use for sampling.
param verbose: bool [Optional]¶... | https://api.python.langchain.com/en/latest/llms/langchain.llms.rwkv.RWKV.html |
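The temperature and top_p parameters above control sampling. A minimal sketch of top-p (nucleus) filtering over a token distribution, to show what the parameter does (illustrative, not RWKV's implementation):

```python
# Keep the smallest set of highest-probability tokens whose cumulative
# probability reaches top_p; sampling then happens over this reduced,
# renormalized set.
def top_p_filter(probs: dict[str, float], top_p: float) -> dict[str, float]:
    kept, total = {}, 0.0
    for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
        kept[tok] = p
        total += p
        if total >= top_p:
            break
    z = sum(kept.values())  # renormalize the surviving mass
    return {t: p / z for t, p in kept.items()}
```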
1a0a992ca971-2 | Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Asynchronously... | https://api.python.langchain.com/en/latest/llms/langchain.llms.rwkv.RWKV.html |
1a0a992ca971-3 | Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrenc... | https://api.python.langchain.com/en/latest/llms/langchain.llms.rwkv.RWKV.html |
1a0a992ca971-4 | Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclu... | https://api.python.langchain.com/en/latest/llms/langchain.llms.rwkv.RWKV.html |
1a0a992ca971-5 | Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of pr... | https://api.python.langchain.com/en/latest/llms/langchain.llms.rwkv.RWKV.html |
1a0a992ca971-6 | get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model’s context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → L... | https://api.python.langchain.com/en/latest/llms/langchain.llms.rwkv.RWKV.html |
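As a rough illustration of the context-window check this method enables (real models use a proper tokenizer, so whitespace splitting here is only a stand-in):

```python
# Naive stand-in for get_num_tokens_from_messages: sum a per-message
# token count, then compare it against the model's context window.
def rough_token_count(messages: list[str]) -> int:
    return sum(len(m.split()) for m in messages)

def fits_context(messages: list[str], window: int) -> bool:
    return rough_token_count(messages) <= window
```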
1a0a992ca971-7 | classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Pass a single string input to t... | https://api.python.langchain.com/en/latest/llms/langchain.llms.rwkv.RWKV.html |
1a0a992ca971-8 | Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/... | https://api.python.langchain.com/en/latest/llms/langchain.llms.rwkv.RWKV.html |
1a0a992ca971-9 | Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
Examples using RWKV¶
RWKV-4 | https://api.python.langchain.com/en/latest/llms/langchain.llms.rwkv.RWKV.html |
975d85323c3a-0 | langchain.llms.fireworks.acompletion_with_retry¶
async langchain.llms.fireworks.acompletion_with_retry(llm: Union[BaseFireworks, FireworksChat], **kwargs: Any) → Any[source]¶
Use tenacity to retry the async completion call. | https://api.python.langchain.com/en/latest/llms/langchain.llms.fireworks.acompletion_with_retry.html |
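The real helper wraps the completion call with tenacity decorators; the pattern it implements looks roughly like this stdlib-only sketch (attempt count and delay are illustrative, not the library's actual policy):

```python
import time

# Retry-with-backoff pattern, stdlib only; tenacity provides an
# equivalent, configurable policy around the actual completion call.
def call_with_retry(fn, attempts: int = 3, delay: float = 0.0):
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(delay)
```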
1052727bcf11-0 | langchain.llms.tongyi.stream_generate_with_retry¶
langchain.llms.tongyi.stream_generate_with_retry(llm: Tongyi, **kwargs: Any) → Any[source]¶
Use tenacity to retry the completion call. | https://api.python.langchain.com/en/latest/llms/langchain.llms.tongyi.stream_generate_with_retry.html |
20fe4c734ec8-0 | langchain.llms.predibase.Predibase¶
class langchain.llms.predibase.Predibase[source]¶
Bases: LLM
Use your Predibase models with Langchain.
To use, you should have the predibase python package installed,
and have your Predibase API key.
Create a new model by parsing and validating input data from keyword arguments.
Rais... | https://api.python.langchain.com/en/latest/llms/langchain.llms.predibase.Predibase.html |
20fe4c734ec8-1 | async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any]... | https://api.python.langchain.com/en/latest/llms/langchain.llms.predibase.Predibase.html |
20fe4c734ec8-2 | to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs... | https://api.python.langchain.com/en/latest/llms/langchain.llms.predibase.Predibase.html |
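The LLMResult shape described here can be pictured as one list of candidate generations per input prompt, plus provider-specific output (the field names below are a simplified assumption, not the exact class layout):

```python
# Simplified picture of an LLMResult: one inner list per input prompt,
# each holding that prompt's candidate generations.
result = {
    "generations": [
        [{"text": "candidate A"}, {"text": "candidate B"}],  # prompt 1
    ],
    "llm_output": {"token_usage": {}},  # provider-specific extras (assumed)
}
assert len(result["generations"]) == 1  # one entry per input prompt
```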