| id | text | source |
|---|---|---|
ba1cb208b9b2-4 | Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclu... | https://api.python.langchain.com/en/latest/llms/langchain.llms.cohere.Cohere.html |
ba1cb208b9b2-5 | Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of pr... | https://api.python.langchain.com/en/latest/llms/langchain.llms.cohere.Cohere.html |
ba1cb208b9b2-6 | get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model’s context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → L... | https://api.python.langchain.com/en/latest/llms/langchain.llms.cohere.Cohere.html |
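The `get_num_tokens_from_messages` entry above returns the sum of per-message token counts. A minimal sketch of that contract, using whitespace splitting as a stand-in tokenizer (the real method delegates to the model's own tokenizer, e.g. via `get_token_ids`; `num_tokens_from_messages` and `whitespace_tokens` here are illustrative names, not LangChain APIs):

```python
from typing import Callable, List

def num_tokens_from_messages(messages: List[str],
                             count_tokens: Callable[[str], int]) -> int:
    # Mirrors the documented return value: the sum of the number of
    # tokens across the messages.
    return sum(count_tokens(m) for m in messages)

# Whitespace splitting stands in for a real model tokenizer here.
def whitespace_tokens(text: str) -> int:
    return len(text.split())

print(num_tokens_from_messages(["hello world", "fits in context?"],
                               whitespace_tokens))
```

This is the check described above ("will an input fit in a model's context window?"): compare the returned sum against the model's context length before calling it.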
ba1cb208b9b2-7 | classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Pass a single string input to t... | https://api.python.langchain.com/en/latest/llms/langchain.llms.cohere.Cohere.html |
ba1cb208b9b2-8 | .. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Union[Promp... | https://api.python.langchain.com/en/latest/llms/langchain.llms.cohere.Cohere.html |
ba1cb208b9b2-9 | property lc_serializable: bool¶
Return whether or not the class is serializable.
Examples using Cohere¶
Cohere | https://api.python.langchain.com/en/latest/llms/langchain.llms.cohere.Cohere.html |
46979765d3b9-0 | langchain.llms.gpt4all.GPT4All¶
class langchain.llms.gpt4all.GPT4All[source]¶
Bases: LLM
GPT4All language models.
To use, you should have the gpt4all python package installed, the
pre-trained model file, and the model’s config information.
Example
from langchain.llms import GPT4All
model = GPT4All(model="./models/gpt4a... | https://api.python.langchain.com/en/latest/llms/langchain.llms.gpt4all.GPT4All.html |
46979765d3b9-1 | param n_parts: int = -1¶
Number of parts to split the model into.
If -1, the number of parts is automatically determined.
param n_predict: Optional[int] = 256¶
The maximum number of tokens to generate.
param n_threads: Optional[int] = 4¶
Number of threads to use.
param repeat_last_n: Optional[int] = 64¶
Last n tokens t... | https://api.python.langchain.com/en/latest/llms/langchain.llms.gpt4all.GPT4All.html |
46979765d3b9-2 | Check Cache and run the LLM on the given prompt and input.
async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
async agenerate(prompts: List[str], stop: Optional[Li... | https://api.python.langchain.com/en/latest/llms/langchain.llms.gpt4all.GPT4All.html |
46979765d3b9-3 | text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwarg... | https://api.python.langchain.com/en/latest/llms/langchain.llms.gpt4all.GPT4All.html |
46979765d3b9-4 | first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[... | https://api.python.langchain.com/en/latest/llms/langchain.llms.gpt4all.GPT4All.html |
46979765d3b9-5 | the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
classmethod from_orm(obj: Any) → Model¶
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHa... | https://api.python.langchain.com/en/latest/llms/langchain.llms.gpt4all.GPT4All.html |
46979765d3b9-6 | first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which co... | https://api.python.langchain.com/en/latest/llms/langchain.llms.gpt4all.GPT4All.html |
46979765d3b9-7 | json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Cal... | https://api.python.langchain.com/en/latest/llms/langchain.llms.gpt4all.GPT4All.html |
46979765d3b9-8 | to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.gpt4all.GPT4All.html |
46979765d3b9-9 | classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Out... | https://api.python.langchain.com/en/latest/llms/langchain.llms.gpt4all.GPT4All.html |
1d38faa4714b-0 | langchain.llms.databricks.get_repl_context¶
langchain.llms.databricks.get_repl_context() → Any[source]¶
Gets the notebook REPL context if running inside a Databricks notebook.
Returns None otherwise. | https://api.python.langchain.com/en/latest/llms/langchain.llms.databricks.get_repl_context.html |
e9805ba24d87-0 | langchain_experimental.llms.rellm_decoder.RELLM¶
class langchain_experimental.llms.rellm_decoder.RELLM[source]¶
Bases: HuggingFacePipeline
RELLM wrapped LLM using HuggingFace Pipeline API.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be ... | https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.rellm_decoder.RELLM.html |
e9805ba24d87-1 | Check Cache and run the LLM on the given prompt and input.
async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
async agenerate(prompts: List[str], stop: Optional[Li... | https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.rellm_decoder.RELLM.html |
e9805ba24d87-2 | text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwarg... | https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.rellm_decoder.RELLM.html |
e9805ba24d87-3 | first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[... | https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.rellm_decoder.RELLM.html |
e9805ba24d87-4 | the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
classmethod from_model_id(model_id: str, task: str, device: int = - 1, model_kwargs: Optional[dict] = None, pipeline_kwargs: Optional[dict]... | https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.rellm_decoder.RELLM.html |
e9805ba24d87-5 | Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these subst... | https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.rellm_decoder.RELLM.html |
e9805ba24d87-6 | json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Cal... | https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.rellm_decoder.RELLM.html |
e9805ba24d87-7 | to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want ... | https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.rellm_decoder.RELLM.html |
e9805ba24d87-8 | classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Out... | https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.rellm_decoder.RELLM.html |
b84cd6bd5101-0 | langchain.llms.utils.enforce_stop_tokens¶
langchain.llms.utils.enforce_stop_tokens(text: str, stop: List[str]) → str[source]¶
Cut off the text as soon as any stop words occur. | https://api.python.langchain.com/en/latest/llms/langchain.llms.utils.enforce_stop_tokens.html |
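The `enforce_stop_tokens` entry above is simple enough to sketch. This is an illustrative reimplementation of the documented behavior (cut the text at the first occurrence of any stop substring), not necessarily the library's exact source:

```python
import re
from typing import List

def enforce_stop_tokens(text: str, stop: List[str]) -> str:
    """Cut off the text as soon as any stop words occur."""
    # Build an alternation of the escaped stop strings and keep only
    # the text before the first match.
    pattern = "|".join(re.escape(s) for s in stop)
    return re.split(pattern, text, maxsplit=1)[0]

print(enforce_stop_tokens("Answer: 42\nObservation: done", ["\nObservation:"]))
```

If no stop substring occurs, the split finds no match and the full text is returned unchanged.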
e819f8288595-0 | langchain.llms.modal.Modal¶
class langchain.llms.modal.Modal[source]¶
Bases: LLM
Modal large language models.
To use, you should have the modal-client python package installed.
Any parameters that are valid to be passed to the call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llm... | https://api.python.langchain.com/en/latest/llms/langchain.llms.modal.Modal.html |
e819f8288595-1 | async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any]... | https://api.python.langchain.com/en/latest/llms/langchain.llms.modal.Modal.html |
e819f8288595-2 | to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs... | https://api.python.langchain.com/en/latest/llms/langchain.llms.modal.Modal.html |
e819f8288595-3 | batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod cons... | https://api.python.langchain.com/en/latest/llms/langchain.llms.modal.Modal.html |
e819f8288595-4 | classmethod from_orm(obj: Any) → Model¶
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metada... | https://api.python.langchain.com/en/latest/llms/langchain.llms.modal.Modal.html |
e819f8288595-5 | to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model’s context window.
Pa... | https://api.python.langchain.com/en/latest/llms/langchain.llms.modal.Modal.html |
e819f8288595-6 | Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol... | https://api.python.langchain.com/en/latest/llms/langchain.llms.modal.Modal.html |
e819f8288595-7 | **kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/... | https://api.python.langchain.com/en/latest/llms/langchain.llms.modal.Modal.html |
e819f8288595-8 | property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is s... | https://api.python.langchain.com/en/latest/llms/langchain.llms.modal.Modal.html |
444216ca284e-0 | langchain_experimental.llms.jsonformer_decoder.JsonFormer¶
class langchain_experimental.llms.jsonformer_decoder.JsonFormer[source]¶
Bases: HuggingFacePipeline
Jsonformer wrapped LLM using HuggingFace Pipeline API.
This pipeline is experimental and not yet stable.
Create a new model by parsing and validating input data ... | https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.jsonformer_decoder.JsonFormer.html |
444216ca284e-1 | Check Cache and run the LLM on the given prompt and input.
async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
async agenerate(prompts: List[str], stop: Optional[Li... | https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.jsonformer_decoder.JsonFormer.html |
444216ca284e-2 | text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwarg... | https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.jsonformer_decoder.JsonFormer.html |
444216ca284e-3 | first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[... | https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.jsonformer_decoder.JsonFormer.html |
444216ca284e-4 | the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
classmethod from_model_id(model_id: str, task: str, device: int = - 1, model_kwargs: Optional[dict] = None, pipeline_kwargs: Optional[dict]... | https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.jsonformer_decoder.JsonFormer.html |
444216ca284e-5 | Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these subst... | https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.jsonformer_decoder.JsonFormer.html |
444216ca284e-6 | json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Cal... | https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.jsonformer_decoder.JsonFormer.html |
444216ca284e-7 | to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want ... | https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.jsonformer_decoder.JsonFormer.html |
444216ca284e-8 | classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Out... | https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.jsonformer_decoder.JsonFormer.html |
884d205fd46a-0 | langchain.llms.stochasticai.StochasticAI¶
class langchain.llms.stochasticai.StochasticAI[source]¶
Bases: LLM
StochasticAI large language models.
To use, you should have the environment variable STOCHASTICAI_API_KEY
set with your API key.
Example
from langchain.llms import StochasticAI
stochasticai = StochasticAI(api_ur... | https://api.python.langchain.com/en/latest/llms/langchain.llms.stochasticai.StochasticAI.html |
884d205fd46a-1 | Check Cache and run the LLM on the given prompt and input.
async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
async agenerate(prompts: List[str], stop: Optional[Li... | https://api.python.langchain.com/en/latest/llms/langchain.llms.stochasticai.StochasticAI.html |
884d205fd46a-2 | text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwarg... | https://api.python.langchain.com/en/latest/llms/langchain.llms.stochasticai.StochasticAI.html |
884d205fd46a-3 | first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[... | https://api.python.langchain.com/en/latest/llms/langchain.llms.stochasticai.StochasticAI.html |
884d205fd46a-4 | the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
classmethod from_orm(obj: Any) → Model¶
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHa... | https://api.python.langchain.com/en/latest/llms/langchain.llms.stochasticai.StochasticAI.html |
884d205fd46a-5 | first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which co... | https://api.python.langchain.com/en/latest/llms/langchain.llms.stochasticai.StochasticAI.html |
884d205fd46a-6 | json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Cal... | https://api.python.langchain.com/en/latest/llms/langchain.llms.stochasticai.StochasticAI.html |
884d205fd46a-7 | to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.stochasticai.StochasticAI.html |
884d205fd46a-8 | classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Out... | https://api.python.langchain.com/en/latest/llms/langchain.llms.stochasticai.StochasticAI.html |
4f66d75d8705-0 | langchain.llms.azureml_endpoint.HFContentFormatter¶
class langchain.llms.azureml_endpoint.HFContentFormatter[source]¶
Content handler for LLMs from the HuggingFace catalog.
Attributes
accepts
The MIME type of the response data returned from the endpoint
content_type
The MIME type of the input data passed to the endpoin... | https://api.python.langchain.com/en/latest/llms/langchain.llms.azureml_endpoint.HFContentFormatter.html |
c61ab2342eb6-0 | langchain.llms.ai21.AI21¶
class langchain.llms.ai21.AI21[source]¶
Bases: LLM
AI21 large language models.
To use, you should have the environment variable AI21_API_KEY
set with your API key.
Example
from langchain.llms import AI21
ai21 = AI21(model="j2-jumbo-instruct")
Create a new model by parsing and validating input ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.ai21.AI21.html |
c61ab2342eb6-1 | Metadata to add to the run trace.
param minTokens: int = 0¶
The minimum number of tokens to generate in the completion.
param model: str = 'j2-jumbo-instruct'¶
Model name to use.
param numResults: int = 1¶
How many completions to generate for each prompt.
param presencePenalty: langchain.llms.ai21.AI21PenaltyData = AI2... | https://api.python.langchain.com/en/latest/llms/langchain.llms.ai21.AI21.html |
c61ab2342eb6-2 | async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any]... | https://api.python.langchain.com/en/latest/llms/langchain.llms.ai21.AI21.html |
c61ab2342eb6-3 | to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs... | https://api.python.langchain.com/en/latest/llms/langchain.llms.ai21.AI21.html |
c61ab2342eb6-4 | batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod cons... | https://api.python.langchain.com/en/latest/llms/langchain.llms.ai21.AI21.html |
c61ab2342eb6-5 | classmethod from_orm(obj: Any) → Model¶
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metada... | https://api.python.langchain.com/en/latest/llms/langchain.llms.ai21.AI21.html |
c61ab2342eb6-6 | to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model’s context window.
Pa... | https://api.python.langchain.com/en/latest/llms/langchain.llms.ai21.AI21.html |
c61ab2342eb6-7 | Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol... | https://api.python.langchain.com/en/latest/llms/langchain.llms.ai21.AI21.html |
c61ab2342eb6-8 | **kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/... | https://api.python.langchain.com/en/latest/llms/langchain.llms.ai21.AI21.html |
c61ab2342eb6-9 | property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is s... | https://api.python.langchain.com/en/latest/llms/langchain.llms.ai21.AI21.html |
738ebb40c5d2-0 | langchain.llms.cerebriumai.CerebriumAI¶
class langchain.llms.cerebriumai.CerebriumAI[source]¶
Bases: LLM
CerebriumAI large language models.
To use, you should have the cerebrium python package installed, and the
environment variable CEREBRIUMAI_API_KEY set with your API key.
Any parameters that are valid to be passed t... | https://api.python.langchain.com/en/latest/llms/langchain.llms.cerebriumai.CerebriumAI.html |
738ebb40c5d2-1 | Check Cache and run the LLM on the given prompt and input.
async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
async agenerate(prompts: List[str], stop: Optional[Li... | https://api.python.langchain.com/en/latest/llms/langchain.llms.cerebriumai.CerebriumAI.html |
738ebb40c5d2-2 | text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwarg... | https://api.python.langchain.com/en/latest/llms/langchain.llms.cerebriumai.CerebriumAI.html |
738ebb40c5d2-3 | first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[... | https://api.python.langchain.com/en/latest/llms/langchain.llms.cerebriumai.CerebriumAI.html |
738ebb40c5d2-4 | the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
classmethod from_orm(obj: Any) → Model¶
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHa... | https://api.python.langchain.com/en/latest/llms/langchain.llms.cerebriumai.CerebriumAI.html |
738ebb40c5d2-5 | first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which co... | https://api.python.langchain.com/en/latest/llms/langchain.llms.cerebriumai.CerebriumAI.html |
738ebb40c5d2-6 | json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Cal... | https://api.python.langchain.com/en/latest/llms/langchain.llms.cerebriumai.CerebriumAI.html |
738ebb40c5d2-7 | to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.cerebriumai.CerebriumAI.html |
738ebb40c5d2-8 | classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Out... | https://api.python.langchain.com/en/latest/llms/langchain.llms.cerebriumai.CerebriumAI.html |
5006e74bf346-0 | langchain.llms.mlflow_ai_gateway.Params¶
class langchain.llms.mlflow_ai_gateway.Params[source]¶
Bases: BaseModel
Parameters for the MLflow AI Gateway LLM.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
para...
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, ex...
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on...
langchain.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference¶
class langchain.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference[source]¶
Bases: LLM
HuggingFace text generation API.
It generates text from a given prompt.
Attributes:
- max_new_tokens: The maximum number of tokens to generate.
-...
param async_client: Any = None¶
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param client: Any = None¶
param inference_server_url: str = ''¶
param max_new_tokens: int = 512¶
param metadata: Optional[Dict[str, Any]] = None¶
Metadata ...
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any]...
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs...
batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod cons...
classmethod from_orm(obj: Any) → Model¶
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metada...
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model’s context window.
Pa...
Generate a JSON representation of the model, include and exclude arguments as per dict().
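As a rough illustration of such a context-window check, here is a sketch using a naive whitespace tokenizer (a real check must use the model's own tokenizer, so the counts below are illustrative only; both helper names are hypothetical):

```python
def naive_token_count(text: str) -> int:
    # Crude whitespace tokenizer standing in for the model's real tokenizer.
    return len(text.split())

def fits_context(prompt: str, context_window: int, reserved_for_output: int = 0) -> bool:
    # An input fits if its token count plus the room reserved for the
    # completion does not exceed the context window.
    return naive_token_count(prompt) + reserved_for_output <= context_window

assert fits_context("one two three", context_window=10)
assert not fits_context("one two three", context_window=2)
```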
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
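A minimal illustration of how such an encoder function plugs into json.dumps() as its default fallback for non-serializable types:

```python
import json
from datetime import date

def encoder(obj):
    # Called by json.dumps for any object it cannot serialize natively.
    if isinstance(obj, date):
        return obj.isoformat()
    raise TypeError(f"Not serializable: {type(obj).__name__}")

payload = {"model": "demo", "created": date(2023, 8, 1)}
print(json.dumps(payload, default=encoder))
# {"model": "demo", "created": "2023-08-01"}
```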
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol...
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/...
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
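To illustrate how such a mapping could be consumed, here is a sketch that resolves each secret id against the environment (the `resolve_secrets` helper and the placeholder value are hypothetical, not part of langchain):

```python
import os

# Mapping in the style of lc_secrets:
# constructor argument name -> environment variable holding the secret.
lc_secrets = {"openai_api_key": "OPENAI_API_KEY"}

def resolve_secrets(secret_map):
    # Look each secret id up in the environment; None if unset.
    return {arg: os.environ.get(env_var) for arg, env_var in secret_map.items()}

os.environ["OPENAI_API_KEY"] = "sk-demo"  # placeholder value for the sketch
assert resolve_secrets(lc_secrets) == {"openai_api_key": "sk-demo"}
```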
property lc_serializable: bool¶
Return whether or not the class is s...
langchain.llms.vertexai.is_codey_model¶
langchain.llms.vertexai.is_codey_model(model_name: str) → bool[source]¶
Returns True if the model name is a Codey model.
Parameters
model_name – The model name to check.
Returns: True if the model name is a Codey model.
langchain.llms.anyscale.Anyscale¶
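A sketch of what such a check might look like, assuming Codey model ids contain the substring "code" (e.g. "code-bison", "codechat-bison", "code-gecko"); the actual langchain implementation may use a different rule:

```python
def is_codey_model(model_name: str) -> bool:
    # Assumption: Vertex AI Codey model ids contain "code".
    return "code" in model_name

assert is_codey_model("code-bison")
assert not is_codey_model("text-bison")
```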
class langchain.llms.anyscale.Anyscale[source]¶
Bases: LLM
Anyscale Service models.
To use, you should have the environment variable ANYSCALE_SERVICE_URL,
ANYSCALE_SERVICE_ROUTE and ANYSCALE_SERVICE_TOKEN set with your Anyscale
Service, or pass it as a named parameter to the constructo...
param verbose: bool [Optional]¶
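A sketch of reading the three environment variables the docstring names, failing early if any is missing (the `anyscale_config` helper and the placeholder values are hypothetical, for illustration only):

```python
import os

def anyscale_config():
    # The three settings the Anyscale docstring requires.
    required = ["ANYSCALE_SERVICE_URL", "ANYSCALE_SERVICE_ROUTE",
                "ANYSCALE_SERVICE_TOKEN"]
    missing = [name for name in required if name not in os.environ]
    if missing:
        raise EnvironmentError(f"Missing environment variables: {missing}")
    return {name: os.environ[name] for name in required}

# Placeholder values for the sketch:
os.environ.update({
    "ANYSCALE_SERVICE_URL": "https://example.anyscale.test",
    "ANYSCALE_SERVICE_ROUTE": "/v1/predict",
    "ANYSCALE_SERVICE_TOKEN": "token-demo",
})
cfg = anyscale_config()
assert cfg["ANYSCALE_SERVICE_ROUTE"] == "/v1/predict"
```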
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Check Cache...
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any languag...
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off a...
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creat...
Parameters
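The include/exclude/update semantics can be illustrated with a dict-based sketch (a simplified stand-in for pydantic's BaseModel.copy, not the real implementation): exclude takes precedence over include, and update values are applied without validation.

```python
import copy as _copy

def model_copy(data, include=None, exclude=None, update=None):
    # Dict-based illustration of BaseModel.copy semantics.
    fields = set(data) if include is None else set(include)
    fields -= set(exclude or ())          # exclude wins over include
    new = {k: _copy.copy(v) for k, v in data.items() if k in fields}
    new.update(update or {})              # applied unvalidated
    return new

data = {"a": 1, "b": 2, "c": 3}
assert model_copy(data, include={"a", "b"}, exclude={"b"}, update={"d": 4}) == {"a": 1, "d": 4}
```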
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these subst...
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Cal...
to the model provider API call.
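The stop-word behavior described here, cutting output at the first occurrence of any stop substring, can be sketched as follows (an illustrative helper, not langchain's own code):

```python
def apply_stop(text: str, stop=None) -> str:
    # Truncate text at the earliest occurrence of any stop substring.
    if not stop:
        return text
    cut = len(text)
    for s in stop:
        idx = text.find(s)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

assert apply_stop("Observation: 42\nThought: next", stop=["\nThought:"]) == "Observation: 42"
assert apply_stop("no stops here") == "no stops here"
```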
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want ...
classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Out...
langchain.llms.openai.AzureOpenAI¶
class langchain.llms.openai.AzureOpenAI[source]¶
Bases: BaseOpenAI
Azure-specific OpenAI large language models.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key.
Any parameters that are valid to be passed to...
Maximum number of retries to make when generating.
param max_tokens: int = 256¶
The maximum number of tokens to generate in the completion.
-1 returns as many tokens as possible given the prompt and
the model's maximal context size.
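The documented -1 behavior can be sketched as: use whatever room the context window leaves after the prompt (an illustrative helper; names and the context size used below are assumptions, not OpenAI or langchain API):

```python
def effective_max_tokens(max_tokens: int, context_size: int, prompt_tokens: int) -> int:
    # -1 means: generate as many tokens as the context window allows.
    if max_tokens == -1:
        return context_size - prompt_tokens
    return max_tokens

assert effective_max_tokens(256, context_size=4097, prompt_tokens=100) == 256
assert effective_max_tokens(-1, context_size=4097, prompt_tokens=100) == 3997
```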
param metadata: Optional[Dict[str, Any]] = None¶
Metadata to add to the run trace.
param...
them to be under a certain limit. By default, when set to None, this will
be the same as the embedding model name. However, there are some cases
where you may want to use this Embedding class with a model name not
supported by tiktoken. This can include when using Azure embeddings or
when using one of the many model pr...