| id | text | source |
|---|---|---|
8cfe372708b4-0 | langchain.llms.openai.BaseOpenAI¶
class langchain.llms.openai.BaseOpenAI[source]¶
Bases: BaseLLM
Base OpenAI large language model class.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param allowed_special:... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.BaseOpenAI.html |
8cfe372708b4-1 | Model name to use.
param n: int = 1¶
How many completions to generate for each prompt.
param openai_api_base: Optional[str] = None¶
param openai_api_key: Optional[str] = None¶
param openai_organization: Optional[str] = None¶
param openai_proxy: Optional[str] = None¶
param presence_penalty: float = 0¶
Penalizes repeated... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.BaseOpenAI.html |
8cfe372708b4-2 | param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Check Cache... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.BaseOpenAI.html |
8cfe372708b4-3 | need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any languag... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.BaseOpenAI.html |
8cfe372708b4-4 | Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off a... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.BaseOpenAI.html |
8cfe372708b4-5 | Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creat... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.BaseOpenAI.html |
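The `copy()` described above is pydantic model duplication, and the caveat that `update` data is not validated is the important part. As a rough stdlib analogy (not the pydantic API), `dataclasses.replace` shows the same update-without-revalidation idea; the `LLMParams` class here is made up for illustration:

```python
from dataclasses import dataclass, replace

@dataclass
class LLMParams:
    model: str = "text-davinci-003"
    temperature: float = 0.0

p = LLMParams()
# Like copy(update={"temperature": 0.7}): the new value is set
# directly, with no validation pass over the updated field.
p2 = replace(p, temperature=0.7)
print(p2.temperature)  # 0.7
print(p.temperature)   # 0.0 (the original is untouched)
```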
8cfe372708b4-6 | API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptVal... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.BaseOpenAI.html |
8cfe372708b4-7 | get_token_ids(text: str) → List[int][source]¶
Get the token IDs using the tiktoken package.
invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.BaseOpenAI.html |
8cfe372708b4-8 | max_tokens = openai.modelname_to_contextsize("text-davinci-003")
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.BaseOpenAI.html |
8cfe372708b4-9 | to the model provider API call.
Returns
Top model prediction as a message.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode =... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.BaseOpenAI.html |
8cfe372708b4-10 | eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
property max_context_size: int¶
Get max context size for this model... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.BaseOpenAI.html |
940ba16d3251-0 | langchain.llms.tongyi.generate_with_retry¶
langchain.llms.tongyi.generate_with_retry(llm: Tongyi, **kwargs: Any) → Any[source]¶
Use tenacity to retry the completion call. | https://api.python.langchain.com/en/latest/llms/langchain.llms.tongyi.generate_with_retry.html |
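`generate_with_retry` delegates the retry policy to the tenacity library; the control flow it buys is roughly the following hand-rolled sketch (illustrative only — the function names and the flaky call below are made up, not langchain or tenacity APIs):

```python
import time

def retry_call(fn, *, max_attempts=3, base_delay=0.01, exceptions=(Exception,)):
    # Retry fn with exponential backoff between attempts,
    # re-raising once the attempt budget is exhausted.
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except exceptions:
            if attempt == max_attempts:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))

calls = {"n": 0}
def flaky():
    # Hypothetical transient failure: errors twice, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient error")
    return "ok"

print(retry_call(flaky))  # ok, after two retried failures
```

tenacity adds configurable wait/stop strategies and jitter on top of this basic loop.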
fb5af9ff7e50-0 | langchain.llms.openai.acompletion_with_retry¶
async langchain.llms.openai.acompletion_with_retry(llm: Union[BaseOpenAI, OpenAIChat], run_manager: Optional[AsyncCallbackManagerForLLMRun] = None, **kwargs: Any) → Any[source]¶
Use tenacity to retry the async completion call. | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.acompletion_with_retry.html |
fedee7b712a4-0 | langchain.llms.textgen.TextGen¶
class langchain.llms.textgen.TextGen[source]¶
Bases: LLM
text-generation-webui models.
To use, you should have the text-generation-webui installed, a model loaded,
and --api added as a command-line option.
Suggested installation: use the one-click installer for your OS:
https://github.com/oob... | https://api.python.langchain.com/en/latest/llms/langchain.llms.textgen.TextGen.html |
fedee7b712a4-1 | Metadata to add to the run trace.
param min_length: Optional[int] = 0¶
Minimum generation length in tokens.
param model_url: str [Required]¶
The full URL to the textgen webui including http[s]://host:port
param no_repeat_ngram_size: Optional[int] = 0¶
If not set to 0, specifies the length of token sets that are complet... | https://api.python.langchain.com/en/latest/llms/langchain.llms.textgen.TextGen.html |
fedee7b712a4-2 | Similar to top_p, but select instead only the top_k most likely tokens.
Higher value = higher range of possible random results.
param top_p: Optional[float] = 0.1¶
If not set to 1, select tokens with probabilities adding up to less than this
number. Higher value = higher range of possible random results.
param truncati... | https://api.python.langchain.com/en/latest/llms/langchain.llms.textgen.TextGen.html |
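The `top_k` and `top_p` parameters described above restrict which tokens the sampler may pick. A minimal sketch of one common interpretation of each filter (implementations vary in how they treat the cumulative-probability cutoff):

```python
def top_p_filter(probs, top_p):
    # Keep the smallest set of most-likely tokens whose cumulative
    # probability reaches top_p (nucleus sampling).
    kept, cum = [], 0.0
    for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
        kept.append(tok)
        cum += p
        if cum >= top_p:
            break
    return kept

def top_k_filter(probs, top_k):
    # Keep only the top_k most likely tokens, regardless of mass.
    ranked = sorted(probs.items(), key=lambda kv: -kv[1])
    return [tok for tok, _ in ranked[:top_k]]

probs = {"the": 0.5, "a": 0.25, "dog": 0.15, "cat": 0.1}
print(top_p_filter(probs, 0.7))  # ['the', 'a']
print(top_k_filter(probs, 3))    # ['the', 'a', 'dog']
```

Lower values of either parameter narrow the candidate set; higher values widen the range of possible random results, as the parameter docs note.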
fedee7b712a4-3 | async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any]... | https://api.python.langchain.com/en/latest/llms/langchain.llms.textgen.TextGen.html |
fedee7b712a4-4 | to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs... | https://api.python.langchain.com/en/latest/llms/langchain.llms.textgen.TextGen.html |
fedee7b712a4-5 | batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod cons... | https://api.python.langchain.com/en/latest/llms/langchain.llms.textgen.TextGen.html |
fedee7b712a4-6 | classmethod from_orm(obj: Any) → Model¶
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metada... | https://api.python.langchain.com/en/latest/llms/langchain.llms.textgen.TextGen.html |
fedee7b712a4-7 | to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model’s context window.
Pa... | https://api.python.langchain.com/en/latest/llms/langchain.llms.textgen.TextGen.html |
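`get_num_tokens` is typically used to budget a prompt against the model's context window, as the note above suggests. A sketch of that budgeting, with a crude whitespace tokenizer standing in for the model's real one (a real count would come from a tokenizer such as tiktoken):

```python
def remaining_tokens(prompt, context_size, num_tokens_fn):
    # Completion-token budget left after the prompt -- roughly what
    # you could pass as max_tokens for this prompt.
    return max(context_size - num_tokens_fn(prompt), 0)

# Crude stand-in for get_num_tokens: whitespace splitting only
# approximates a real tokenizer's count.
approx_tokens = lambda text: len(text.split())

budget = remaining_tokens("Translate this sentence into French", 4097, approx_tokens)
print(budget)  # 4092
```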
fedee7b712a4-8 | Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol... | https://api.python.langchain.com/en/latest/llms/langchain.llms.textgen.TextGen.html |
fedee7b712a4-9 | **kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path=”path/... | https://api.python.langchain.com/en/latest/llms/langchain.llms.textgen.TextGen.html |
fedee7b712a4-10 | property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is s... | https://api.python.langchain.com/en/latest/llms/langchain.llms.textgen.TextGen.html |
52060ac4f403-0 | langchain.llms.openllm.OpenLLM¶
class langchain.llms.openllm.OpenLLM[source]¶
Bases: LLM
OpenLLM, supporting both in-process model
instances and remote OpenLLM servers.
To use, you should have the openllm library installed:
pip install openllm
Learn more at: https://github.com/bentoml/openllm
Example running an LLM mode... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openllm.OpenLLM.html |
52060ac4f403-1 | Metadata to add to the run trace.
param model_id: Optional[str] = None¶
Model ID to use. If not provided, will use the default model for the model name.
See ‘openllm models’ for all available model variants.
param model_name: Optional[str] = None¶
Model name to use. See ‘openllm models’ for all available models.
param ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openllm.OpenLLM.html |
52060ac4f403-2 | async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any]... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openllm.OpenLLM.html |
52060ac4f403-3 | to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openllm.OpenLLM.html |
52060ac4f403-4 | batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod cons... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openllm.OpenLLM.html |
52060ac4f403-5 | classmethod from_orm(obj: Any) → Model¶
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metada... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openllm.OpenLLM.html |
52060ac4f403-6 | to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model’s context window.
Pa... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openllm.OpenLLM.html |
52060ac4f403-7 | Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openllm.OpenLLM.html |
52060ac4f403-8 | **kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path=”path/... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openllm.OpenLLM.html |
52060ac4f403-9 | property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is s... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openllm.OpenLLM.html |
9c56a846455e-0 | langchain.llms.aleph_alpha.AlephAlpha¶
class langchain.llms.aleph_alpha.AlephAlpha[source]¶
Bases: LLM
Aleph Alpha large language models.
To use, you should have the aleph_alpha_client python package installed, and the
environment variable ALEPH_ALPHA_API_KEY set with your API key, or pass
it as a named parameter to th... | https://api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html |
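The key-resolution pattern described here (environment variable `ALEPH_ALPHA_API_KEY` or a named constructor parameter) looks roughly like the following; the helper name is made up for illustration, and langchain's own utility differs in detail:

```python
import os

def resolve_api_key(passed_value, env_key):
    # Prefer an explicitly passed key; otherwise fall back to the
    # environment variable; fail loudly if neither is set.
    if passed_value is not None:
        return passed_value
    try:
        return os.environ[env_key]
    except KeyError:
        raise ValueError(f"Pass a key or set {env_key}") from None

os.environ["ALEPH_ALPHA_API_KEY"] = "demo-key"  # stand-in value for the sketch
print(resolve_api_key(None, "ALEPH_ALPHA_API_KEY"))           # demo-key
print(resolve_api_key("explicit-key", "ALEPH_ALPHA_API_KEY"))  # explicit-key
```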
9c56a846455e-1 | If set to None, attention control parameters only apply to those tokens that have
explicitly been set in the request.
If set to a non-None value, control parameters are also applied to similar tokens.
param control_log_additive: Optional[bool] = True¶
True: apply control by adding the log(control_factor) to attention s... | https://api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html |
9c56a846455e-2 | param metadata: Optional[Dict[str, Any]] = None¶
Metadata to add to the run trace.
param minimum_tokens: Optional[int] = 0¶
Generate at least this number of tokens.
param model: Optional[str] = 'luminous-base'¶
Model name to use.
param n: int = 1¶
How many completions to generate for each prompt.
param nice: bool = Fal... | https://api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html |
9c56a846455e-3 | Stop sequences to use.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param temperature: float = 0.0¶
A non-negative float that tunes the degree of randomness in generation.
param tokens: Optional[bool] = False¶
Return tokens of the completion.
param top_k: int = 0¶
Number of most likely tokens to co... | https://api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html |
9c56a846455e-4 | Check Cache and run the LLM on the given prompt and input.
async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
async agenerate(prompts: List[str], stop: Optional[Li... | https://api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html |
9c56a846455e-5 | text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwarg... | https://api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html |
9c56a846455e-6 | first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[... | https://api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html |
9c56a846455e-7 | the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
classmethod from_orm(obj: Any) → Model¶
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHa... | https://api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html |
9c56a846455e-8 | first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which co... | https://api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html |
9c56a846455e-9 | json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Cal... | https://api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html |
9c56a846455e-10 | to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html |
9c56a846455e-11 | classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Out... | https://api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html |
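`with_fallbacks` wraps a Runnable so that exceptions in `exceptions_to_handle` hand off to the next alternative in sequence. Stripped of the Runnable machinery, the control flow is roughly this sketch (the function and the failing "provider" below are illustrative, not the langchain implementation):

```python
def with_fallbacks(primary, fallbacks, exceptions_to_handle=(Exception,)):
    # Try primary, then each fallback in order; re-raise the last
    # handled exception if every alternative fails.
    def run(value):
        last_exc = None
        for fn in (primary, *fallbacks):
            try:
                return fn(value)
            except exceptions_to_handle as exc:
                last_exc = exc
        raise last_exc
    return run

def primary(_):
    raise RuntimeError("rate limited")  # hypothetical provider failure

chain = with_fallbacks(primary, [lambda v: f"backup:{v}"])
print(chain("hi"))  # backup:hi
```

Unhandled exception types propagate immediately instead of triggering the fallback, which is why the real signature lets you narrow `exceptions_to_handle`.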
ba7db26775b2-0 | langchain.llms.petals.Petals¶
class langchain.llms.petals.Petals[source]¶
Bases: LLM
Petals Bloom models.
To use, you should have the petals python package installed, and the
environment variable HUGGINGFACE_API_KEY set with your API key.
Any parameters that are valid to be passed to the call can be passed
in, even if ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.petals.Petals.html |
ba7db26775b2-1 | What sampling temperature to use
param tokenizer: Any = None¶
The tokenizer to use for the API calls.
param top_k: Optional[int] = None¶
The number of highest probability vocabulary tokens
to keep for top-k-filtering.
param top_p: float = 0.9¶
The cumulative probability for top-p sampling.
param verbose: bool [Optional... | https://api.python.langchain.com/en/latest/llms/langchain.llms.petals.Petals.html |
ba7db26775b2-2 | Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Asynchronously... | https://api.python.langchain.com/en/latest/llms/langchain.llms.petals.Petals.html |
ba7db26775b2-3 | Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrenc... | https://api.python.langchain.com/en/latest/llms/langchain.llms.petals.Petals.html |
ba7db26775b2-4 | Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclu... | https://api.python.langchain.com/en/latest/llms/langchain.llms.petals.Petals.html |
ba7db26775b2-5 | Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of pr... | https://api.python.langchain.com/en/latest/llms/langchain.llms.petals.Petals.html |
ba7db26775b2-6 | get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model’s context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → L... | https://api.python.langchain.com/en/latest/llms/langchain.llms.petals.Petals.html |
ba7db26775b2-7 | classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Pass a single string input to t... | https://api.python.langchain.com/en/latest/llms/langchain.llms.petals.Petals.html |
ba7db26775b2-8 | .. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Union[Promp... | https://api.python.langchain.com/en/latest/llms/langchain.llms.petals.Petals.html |
ba7db26775b2-9 | property lc_serializable: bool¶
Return whether or not the class is serializable.
Examples using Petals¶
Petals | https://api.python.langchain.com/en/latest/llms/langchain.llms.petals.Petals.html |
fdb373857b48-0 | langchain_experimental.llms.rellm_decoder.import_rellm¶
langchain_experimental.llms.rellm_decoder.import_rellm() → rellm[source]¶
Lazily import rellm. | https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.rellm_decoder.import_rellm.html |
2b0cebe4a417-0 | langchain.llms.minimax.Minimax¶
class langchain.llms.minimax.Minimax[source]¶
Bases: LLM
Wrapper around Minimax large language models.
To use, you should have the environment variable
MINIMAX_API_KEY and MINIMAX_GROUP_ID set with your API key,
or pass them as a named parameter to the constructor.
.. rubric:: Example
Cr... | https://api.python.langchain.com/en/latest/llms/langchain.llms.minimax.Minimax.html |
2b0cebe4a417-1 | param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Check Cache... | https://api.python.langchain.com/en/latest/llms/langchain.llms.minimax.Minimax.html |
2b0cebe4a417-2 | need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any languag... | https://api.python.langchain.com/en/latest/llms/langchain.llms.minimax.Minimax.html |
2b0cebe4a417-3 | Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off a... | https://api.python.langchain.com/en/latest/llms/langchain.llms.minimax.Minimax.html |
2b0cebe4a417-4 | Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creat... | https://api.python.langchain.com/en/latest/llms/langchain.llms.minimax.Minimax.html |
2b0cebe4a417-5 | Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these subst... | https://api.python.langchain.com/en/latest/llms/langchain.llms.minimax.Minimax.html |
2b0cebe4a417-6 | json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Cal... | https://api.python.langchain.com/en/latest/llms/langchain.llms.minimax.Minimax.html |
2b0cebe4a417-7 | to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.minimax.Minimax.html |
2b0cebe4a417-8 | classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Out... | https://api.python.langchain.com/en/latest/llms/langchain.llms.minimax.Minimax.html |
c573674c31da-0 | langchain.llms.vertexai.completion_with_retry¶
langchain.llms.vertexai.completion_with_retry(llm: VertexAI, *args: Any, **kwargs: Any) → Any[source]¶
Use tenacity to retry the completion call. | https://api.python.langchain.com/en/latest/llms/langchain.llms.vertexai.completion_with_retry.html |
029374e3d6aa-0 | langchain.llms.self_hosted.SelfHostedPipeline¶
class langchain.llms.self_hosted.SelfHostedPipeline[source]¶
Bases: LLM
Model inference on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.self_hosted.SelfHostedPipeline.html |
029374e3d6aa-1 | model_reqs=["./", "torch", "transformers"],
)
Example passing model path for larger models:from langchain.llms import SelfHostedPipeline
import runhouse as rh
import pickle
from transformers import pipeline
generator = pipeline(model="gpt2")
rh.blob(pickle.dumps(generator), path="models/pipeline.pkl"
).save().to(gp... | https://api.python.langchain.com/en/latest/llms/langchain.llms.self_hosted.SelfHostedPipeline.html |
029374e3d6aa-2 | param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Check Cache... | https://api.python.langchain.com/en/latest/llms/langchain.llms.self_hosted.SelfHostedPipeline.html |
029374e3d6aa-3 | need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any languag... | https://api.python.langchain.com/en/latest/llms/langchain.llms.self_hosted.SelfHostedPipeline.html |
029374e3d6aa-4 | Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off a... | https://api.python.langchain.com/en/latest/llms/langchain.llms.self_hosted.SelfHostedPipeline.html |
029374e3d6aa-5 | Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creat... | https://api.python.langchain.com/en/latest/llms/langchain.llms.self_hosted.SelfHostedPipeline.html |
029374e3d6aa-6 | This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.self_hosted.SelfHostedPipeline.html |
029374e3d6aa-7 | Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
json(*, include: Op... | https://api.python.langchain.com/en/latest/llms/langchain.llms.self_hosted.SelfHostedPipeline.html |
029374e3d6aa-8 | Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.... | https://api.python.langchain.com/en/latest/llms/langchain.llms.self_hosted.SelfHostedPipeline.html |
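The `stop` parameter described above cuts model output at the first occurrence of any of the given substrings. A small stand-alone sketch of that truncation rule (the `apply_stop` helper is illustrative, not the library's internal function):

```python
from typing import Optional, Sequence


def apply_stop(text: str, stop: Optional[Sequence[str]] = None) -> str:
    """Cut model output at the earliest occurrence of any stop substring."""
    if not stop:
        return text
    cut = len(text)
    for s in stop:
        idx = text.find(s)
        if idx != -1:
            cut = min(cut, idx)  # keep the earliest cut point found
    return text[:cut]


assert apply_stop("Answer: 42\nQuestion: next", stop=["\nQuestion:"]) == "Answer: 42"
assert apply_stop("no stops here") == "no stops here"
```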
029374e3d6aa-9 | to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_fallbacks(fallba... | https://api.python.langchain.com/en/latest/llms/langchain.llms.self_hosted.SelfHostedPipeline.html |
acb83c275746-0 | langchain.llms.huggingface_pipeline.HuggingFacePipeline¶
class langchain.llms.huggingface_pipeline.HuggingFacePipeline[source]¶
Bases: LLM
HuggingFace Pipeline API.
To use, you should have the transformers python package installed.
Only supports text-generation, text2text-generation and summarization for now.
Example u... | https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_pipeline.HuggingFacePipeline.html |
acb83c275746-1 | Key word arguments passed to the pipeline.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_pipeline.HuggingFacePipeline.html |
acb83c275746-2 | This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_pipeline.HuggingFacePipeline.html
acb83c275746-3 | to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and on... | https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_pipeline.HuggingFacePipeline.html |
acb83c275746-4 | Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally... | https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_pipeline.HuggingFacePipeline.html |
acb83c275746-5 | Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of pr... | https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_pipeline.HuggingFacePipeline.html |
acb83c275746-6 | get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model’s context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → L... | https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_pipeline.HuggingFacePipeline.html |
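The documented behavior of `get_num_tokens_from_messages` is simply the sum of token counts across all messages. A sketch under a loud assumption: real implementations use a model-specific tokenizer, and whitespace splitting stands in for it here; the classes and helpers are illustrative, not langchain's:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class BaseMessage:
    """Simplified stand-in for a chat message."""
    content: str


def get_num_tokens(text: str) -> int:
    # Assumption: whitespace splitting as a stand-in for a real tokenizer.
    return len(text.split())


def get_num_tokens_from_messages(messages: List[BaseMessage]) -> int:
    # The documented behavior: sum the token counts across the messages.
    return sum(get_num_tokens(m.content) for m in messages)


msgs = [BaseMessage("hello there"), BaseMessage("general kenobi !")]
assert get_num_tokens_from_messages(msgs) == 5  # 2 + 3 tokens
```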
acb83c275746-7 | classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Pass a single string input to t... | https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_pipeline.HuggingFacePipeline.html |
acb83c275746-8 | .. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Union[Promp... | https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_pipeline.HuggingFacePipeline.html |
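The save/load round trip shown in the `llm.save(file_path="path/llm.yaml")` example can be sketched with only the standard library. `TinyLLM` is a hypothetical stand-in; the real method also supports YAML output, but JSON keeps this sketch dependency-free:

```python
import json
import tempfile
from pathlib import Path
from typing import Union


class TinyLLM:
    """Hypothetical stand-in whose serializable state is a params dict."""

    def __init__(self, model_name: str, temperature: float = 0.7):
        self.params = {"model_name": model_name, "temperature": temperature}

    def save(self, file_path: Union[Path, str]) -> None:
        # Persist the constructor parameters so the model can be rebuilt.
        Path(file_path).write_text(json.dumps(self.params))

    @classmethod
    def load(cls, file_path: Union[Path, str]) -> "TinyLLM":
        params = json.loads(Path(file_path).read_text())
        return cls(**params)


with tempfile.TemporaryDirectory() as d:
    path = Path(d) / "llm.json"
    TinyLLM("demo-model", temperature=0.1).save(path)
    restored = TinyLLM.load(path)

assert restored.params == {"model_name": "demo-model", "temperature": 0.1}
```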
acb83c275746-9 | property lc_serializable: bool¶
Return whether or not the class is serializable.
Examples using HuggingFacePipeline¶
Hugging Face
RELLM
JSONFormer | https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_pipeline.HuggingFacePipeline.html |
76a6ee07393e-0 | langchain.llms.databricks.Databricks¶
class langchain.llms.databricks.Databricks[source]¶
Bases: LLM
Databricks serving endpoint or a cluster driver proxy app for LLM.
It supports two endpoint types:
Serving endpoint (recommended for both production and development).
We assume that an LLM was registered and deployed to... | https://api.python.langchain.com/en/latest/llms/langchain.llms.databricks.Databricks.html |
76a6ee07393e-1 | If the endpoint model signature is different or you want to set extra params,
you can use transform_input_fn and transform_output_fn to apply necessary
transformations before and after the query.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data can... | https://api.python.langchain.com/en/latest/llms/langchain.llms.databricks.Databricks.html |
76a6ee07393e-2 | param host: str [Optional]¶
Databricks workspace hostname.
If not provided, the default value is determined by
the DATABRICKS_HOST environment variable if present, or
the hostname of the current Databricks workspace if running inside
a Databricks notebook attached to an interactive cluster in “single user”
or “no isola... | https://api.python.langchain.com/en/latest/llms/langchain.llms.databricks.Databricks.html |
76a6ee07393e-3 | async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any]... | https://api.python.langchain.com/en/latest/llms/langchain.llms.databricks.Databricks.html |
76a6ee07393e-4 | to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs... | https://api.python.langchain.com/en/latest/llms/langchain.llms.databricks.Databricks.html |
76a6ee07393e-5 | batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod cons... | https://api.python.langchain.com/en/latest/llms/langchain.llms.databricks.Databricks.html |
76a6ee07393e-6 | classmethod from_orm(obj: Any) → Model¶
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metada... | https://api.python.langchain.com/en/latest/llms/langchain.llms.databricks.Databricks.html |
76a6ee07393e-7 | to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model’s context window.
Pa... | https://api.python.langchain.com/en/latest/llms/langchain.llms.databricks.Databricks.html |
76a6ee07393e-8 | Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol... | https://api.python.langchain.com/en/latest/llms/langchain.llms.databricks.Databricks.html |
76a6ee07393e-9 | **kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/... | https://api.python.langchain.com/en/latest/llms/langchain.llms.databricks.Databricks.html
76a6ee07393e-10 | property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is s... | https://api.python.langchain.com/en/latest/llms/langchain.llms.databricks.Databricks.html |
33c7e08273db-0 | langchain.llms.aviary.get_models¶
langchain.llms.aviary.get_models() → List[str][source]¶
List available models | https://api.python.langchain.com/en/latest/llms/langchain.llms.aviary.get_models.html |
cffb9fdd9dcf-0 | langchain.llms.koboldai.KoboldApiLLM¶
class langchain.llms.koboldai.KoboldApiLLM[source]¶
Bases: LLM
Kobold API language model.
It includes several fields that can be used to control the text generation process.
To use this class, instantiate it with the required parameters and call it with a
prompt to generate text. F... | https://api.python.langchain.com/en/latest/llms/langchain.llms.koboldai.KoboldApiLLM.html |