| id | text | source |
|---|---|---|
7730760eb145-0 | langchain.llms.cerebriumai.CerebriumAI¶
class langchain.llms.cerebriumai.CerebriumAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, endpoin... | https://api.python.langchain.com/en/latest/llms/langchain.llms.cerebriumai.CerebriumAI.html |
7730760eb145-1 | param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[st... | https://api.python.langchain.com/en/latest/llms/langchain.llms.cerebriumai.CerebriumAI.html |
7730760eb145-2 | Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int... | https://api.python.langchain.com/en/latest/llms/langchain.llms.cerebriumai.CerebriumAI.html |
7730760eb145-3 | property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
... | https://api.python.langchain.com/en/latest/llms/langchain.llms.cerebriumai.CerebriumAI.html |
16b6ba791ab3-0 | langchain.llms.azureml_endpoint.HFContentFormatter¶
class langchain.llms.azureml_endpoint.HFContentFormatter[source]¶
Bases: ContentFormatterBase
Content handler for LLMs from the HuggingFace catalog.
Methods
__init__()
format_request_payload(prompt, model_kwargs)
Formats the request body according to the input schema ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.azureml_endpoint.HFContentFormatter.html |
40757a1584fb-0 | langchain.llms.forefrontai.ForefrontAI¶
class langchain.llms.forefrontai.ForefrontAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, endpoin... | https://api.python.langchain.com/en/latest/llms/langchain.llms.forefrontai.ForefrontAI.html |
40757a1584fb-1 | param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param temperature: float = 0.7¶
What sampling temperature to use.
param top_k: int = 40¶
The number of highest probability vocabulary tokens to
keep for top-k-filtering.
param top_p: float = 1.0¶
Total probability mass of tokens to consider at each s... | https://api.python.langchain.com/en/latest/llms/langchain.llms.forefrontai.ForefrontAI.html |
40757a1584fb-2 | dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
genera... | https://api.python.langchain.com/en/latest/llms/langchain.llms.forefrontai.ForefrontAI.html |
40757a1584fb-3 | This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_environment » all fields[source]¶
Validate that api key exists in environment.
property lc_attributes:... | https://api.python.langchain.com/en/latest/llms/langchain.llms.forefrontai.ForefrontAI.html |
57146e423078-0 | langchain.llms.nlpcloud.NLPCloud¶
class langchain.llms.nlpcloud.NLPCloud(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, client: Any = None,... | https://api.python.langchain.com/en/latest/llms/langchain.llms.nlpcloud.NLPCloud.html |
57146e423078-1 | param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param do_sample: bool = True¶
Whether to use sampling (True) or greedy decoding.
param early_stopping: bool = False¶
Whether to stop beam search at num_beams sentences.
param length_no_input: bool = True¶
Whether min_length... | https://api.python.langchain.com/en/latest/llms/langchain.llms.nlpcloud.NLPCloud.html |
57146e423078-2 | param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[st... | https://api.python.langchain.com/en/latest/llms/langchain.llms.nlpcloud.NLPCloud.html |
57146e423078-3 | Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int... | https://api.python.langchain.com/en/latest/llms/langchain.llms.nlpcloud.NLPCloud.html |
57146e423078-4 | property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
... | https://api.python.langchain.com/en/latest/llms/langchain.llms.nlpcloud.NLPCloud.html |
a697b75c8101-0 | langchain.llms.openllm.IdentifyingParams¶
class langchain.llms.openllm.IdentifyingParams[source]¶
Bases: TypedDict
Methods
__init__(*args, **kwargs)
clear()
copy()
fromkeys([value])
Create a new dictionary with keys from iterable and values set to value.
get(key[, default])
Return the value for key if key is in the dic... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openllm.IdentifyingParams.html |
a697b75c8101-1 | keys() → a set-like object providing a view on D's keys¶
pop(k[, d]) → v, remove specified key and return the corresponding value.¶
If the key is not found, return the default if given; otherwise,
raise a KeyError.
popitem()¶
Remove and return a (key, value) pair as a 2-tuple.
Pairs are returned in LIFO (last-in, first... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openllm.IdentifyingParams.html |
888983f8938c-0 | langchain.llms.ai21.AI21¶
class langchain.llms.ai21.AI21(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, model: str = 'j2-jumbo-instruct', t... | https://api.python.langchain.com/en/latest/llms/langchain.llms.ai21.AI21.html |
888983f8938c-1 | ai21 = AI21(model="j2-jumbo-instruct")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param ai21_api_key: Optional[str] = None¶
param base_url: Optional[str] = None¶
Base url to use, if None decides based o... | https://api.python.langchain.com/en/latest/llms/langchain.llms.ai21.AI21.html |
888983f8938c-2 | How many completions to generate for each prompt.
param presencePenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True)¶
Penalizes repeated tokens.
param stop: Optional[List[str]] = None¶
p... | https://api.python.langchain.com/en/latest/llms/langchain.llms.ai21.AI21.html |
888983f8938c-3 | Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optiona... | https://api.python.langchain.com/en/latest/llms/langchain.llms.ai21.AI21.html |
888983f8938c-4 | Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImple... | https://api.python.langchain.com/en/latest/llms/langchain.llms.ai21.AI21.html |
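The `set_verbose` validator implements a simple fallback: an explicit `True`/`False` wins, while `None` defers to the library-wide setting. The pattern, sketched with a stand-in global:

```python
_GLOBAL_VERBOSE = False  # stand-in for langchain's module-level verbosity flag

def resolve_verbose(verbose):
    """Return the explicit value, or fall back to the global when None."""
    return _GLOBAL_VERBOSE if verbose is None else verbose
```

This three-state design (`True`, `False`, `None`) is what lets per-instance settings override the global one without shadowing it by default.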
d9499e5804f0-0 | langchain.llms.stochasticai.StochasticAI¶
class langchain.llms.stochasticai.StochasticAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, api... | https://api.python.langchain.com/en/latest/llms/langchain.llms.stochasticai.StochasticAI.html |
d9499e5804f0-1 | Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
... | https://api.python.langchain.com/en/latest/llms/langchain.llms.stochasticai.StochasticAI.html |
d9499e5804f0-2 | Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]¶
Get the token IDs present in the text.
predi... | https://api.python.langchain.com/en/latest/llms/langchain.llms.stochasticai.StochasticAI.html |
d9499e5804f0-3 | property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶ | https://api.python.langchain.com/en/latest/llms/langchain.llms.stochasticai.StochasticAI.html |
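The `lc_secrets` map (constructor argument name → secret id, e.g. an environment variable) supports a common resolution pattern: arguments not passed explicitly are filled from the environment. A sketch of that lookup (hypothetical helper, not LangChain's actual resolver):

```python
import os

def resolve_secrets(lc_secrets: dict, provided: dict) -> dict:
    """Fill constructor args that are missing or None from the mapped
    environment variables; raise if a secret cannot be found anywhere."""
    resolved = dict(provided)
    for arg, env_var in lc_secrets.items():
        if resolved.get(arg) is None:
            value = os.environ.get(env_var)
            if value is None:
                raise ValueError(f"Missing {arg}; set the {env_var} environment variable")
            resolved[arg] = value
    return resolved
```

Keeping the mapping declarative also lets serialization replace secret values with their ids instead of leaking them.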
8c172dc3a900-0 | langchain.llms.gpt4all.GPT4All¶
class langchain.llms.gpt4all.GPT4All(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, model: str, backend: Op... | https://api.python.langchain.com/en/latest/llms/langchain.llms.gpt4all.GPT4All.html |
8c172dc3a900-1 | # Simplest invocation
response = model("Once upon a time, ")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param allow_download: bool = False¶
If model does not exist in ~/.cache/gpt4all/, download it.
par... | https://api.python.langchain.com/en/latest/llms/langchain.llms.gpt4all.GPT4All.html |
8c172dc3a900-2 | The penalty to apply to repeated tokens.
param seed: int = 0¶
Seed. If -1, a random seed is used.
param stop: Optional[List[str]] = []¶
A list of strings to stop generation when encountered.
param streaming: bool = False¶
Whether to stream the results or not.
param tags: Optional[List[str]] = None¶
Tags to add to the r... | https://api.python.langchain.com/en/latest/llms/langchain.llms.gpt4all.GPT4All.html |
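The `stop` parameter truncates the completion at the first occurrence of any stop string. A sketch of that post-processing step (similar in spirit to LangChain's `enforce_stop_tokens` helper, which uses `re.split`):

```python
def enforce_stop_tokens(text: str, stop) -> str:
    """Cut generated text at the earliest occurrence of any stop string."""
    if not stop:
        return text
    cut = len(text)
    for s in stop:
        idx = text.find(s)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]
```

For example, `enforce_stop_tokens("hello\nworld", ["\n"])` yields `"hello"`.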
8c172dc3a900-3 | Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.gpt4all.GPT4All.html |
8c172dc3a900-4 | Predict message from messages.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
validator ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.gpt4all.GPT4All.html |
aaae29986d92-0 | langchain.llms.databricks.get_default_api_token¶
langchain.llms.databricks.get_default_api_token() → str[source]¶
Gets the default Databricks personal access token.
Raises an error if the token cannot be automatically determined. | https://api.python.langchain.com/en/latest/llms/langchain.llms.databricks.get_default_api_token.html |
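A sketch of the environment-variable path only (the real helper can also pull the token from a Databricks notebook context); `DATABRICKS_TOKEN` is the conventional variable name for Databricks personal access tokens:

```python
import os

def get_default_api_token_sketch() -> str:
    """Read a Databricks personal access token from the environment,
    raising if it cannot be determined."""
    token = os.environ.get("DATABRICKS_TOKEN")
    if not token:
        raise ValueError("Databricks personal access token could not be determined")
    return token
```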
63ebed78a61a-0 | langchain.llms.human.HumanInputLLM¶
class langchain.llms.human.HumanInputLLM(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, input_func: Cal... | https://api.python.langchain.com/en/latest/llms/langchain.llms.human.HumanInputLLM.html |
63ebed78a61a-1 | Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
... | https://api.python.langchain.com/en/latest/llms/langchain.llms.human.HumanInputLLM.html |
63ebed78a61a-2 | get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]¶
Get the token IDs present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **... | https://api.python.langchain.com/en/latest/llms/langchain.llms.human.HumanInputLLM.html |
63ebed78a61a-3 | property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶ | https://api.python.langchain.com/en/latest/llms/langchain.llms.human.HumanInputLLM.html |
544eed1d6bb2-0 | langchain.llms.databricks.get_default_host¶
langchain.llms.databricks.get_default_host() → str[source]¶
Gets the default Databricks workspace hostname.
Raises an error if the hostname cannot be automatically determined. | https://api.python.langchain.com/en/latest/llms/langchain.llms.databricks.get_default_host.html |
7d792c0376fa-0 | langchain.llms.huggingface_endpoint.HuggingFaceEndpoint¶
class langchain.llms.huggingface_endpoint.HuggingFaceEndpoint(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: O... | https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_endpoint.HuggingFaceEndpoint.html |
7d792c0376fa-1 | param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param task: Optional[str] = None¶
Task to call the model with.
Should be a task that returns generated_text or summary_text.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, c... | https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_endpoint.HuggingFaceEndpoint.html |
7d792c0376fa-2 | dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
genera... | https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_endpoint.HuggingFaceEndpoint.html |
7d792c0376fa-3 | This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_environment » all fields[source]¶
Validate that api key and python package exist in environment.
prop... | https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_endpoint.HuggingFaceEndpoint.html |
3a5db92165fc-0 | langchain.llms.openai.AzureOpenAI¶
class langchain.llms.openai.AzureOpenAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, client: Any = Non... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.AzureOpenAI.html |
3a5db92165fc-1 | Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import AzureOpenAI
openai = AzureOpenAI(model_name="text-davinci-003")
Create a new model by parsing and validating input data from keyword arguments.
Raises Val... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.AzureOpenAI.html |
3a5db92165fc-2 | Holds any model parameters valid for create call not explicitly specified.
param model_name: str = 'text-davinci-003' (alias 'model')¶
Model name to use.
param n: int = 1¶
How many completions to generate for each prompt.
param openai_api_base: Optional[str] = None¶
param openai_api_key: Optional[str] = None¶
param ope... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.AzureOpenAI.html |
3a5db92165fc-3 | when tiktoken is called, you can specify a model name to use here.
param top_p: float = 1¶
Total probability mass of tokens to consider at each step.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHan... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.AzureOpenAI.html |
3a5db92165fc-4 | dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
genera... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.AzureOpenAI.html |
3a5db92165fc-5 | Returns
The maximum context size
Example
max_tokens = openai.modelname_to_contextsize("text-davinci-003")
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → Base... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.AzureOpenAI.html |
3a5db92165fc-6 | validator validate_azure_settings » all fields[source]¶
validator validate_environment » all fields¶
Validate that api key and python package exists in environment.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by t... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.AzureOpenAI.html |
15141fb5e9f3-0 | langchain.llms.openlm.OpenLM¶
class langchain.llms.openlm.OpenLM(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, client: Any = None, model: ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openlm.OpenLM.html |
15141fb5e9f3-1 | param best_of: int = 1¶
Generates best_of completions server-side and returns the "best".
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param client: Any = None¶
param disallowed_special: Union[Literal['all'], Collection[str]] = 'all... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openlm.OpenLM.html |
15141fb5e9f3-2 | param streaming: bool = False¶
Whether to stream the results or not.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param temperature: float = 0.7¶
What sampling temperature to use.
param tiktoken_model_name: Optional[str] = None¶
The model name to pass to tiktoken when using this class.
Tiktoken... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openlm.OpenLM.html |
15141fb5e9f3-3 | Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openlm.OpenLM.html |
15141fb5e9f3-4 | Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the message.
get_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) → List[List[str]]¶
Get the sub prompts for llm call.
get_token_ids(text: s... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openlm.OpenLM.html |
15141fb5e9f3-5 | Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
stream(prompt: str, stop: Optional[List[str]... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openlm.OpenLM.html |
15141fb5e9f3-6 | property max_context_size: int¶
Get max context size for this model.
model Config¶
Bases: object
Configuration for this pydantic object.
allow_population_by_field_name = True¶ | https://api.python.langchain.com/en/latest/llms/langchain.llms.openlm.OpenLM.html |
3e158fd539cb-0 | langchain.llms.rwkv.RWKV¶
class langchain.llms.rwkv.RWKV(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, model: str, tokens_path: str, strat... | https://api.python.langchain.com/en/latest/llms/langchain.llms.rwkv.RWKV.html |
3e158fd539cb-1 | param max_tokens_per_generation: int = 256¶
Maximum number of tokens to generate.
param model: str [Required]¶
Path to the pre-trained RWKV model file.
param penalty_alpha_frequency: float = 0.4¶
Positive values penalize new tokens based on their existing frequency
in the text so far, decreasing the model’s likelihood ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.rwkv.RWKV.html |
3e158fd539cb-2 | Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required... | https://api.python.langchain.com/en/latest/llms/langchain.llms.rwkv.RWKV.html |
3e158fd539cb-3 | Get the token present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator raise_deprecation » ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.rwkv.RWKV.html |
3e158fd539cb-4 | property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶ | https://api.python.langchain.com/en/latest/llms/langchain.llms.rwkv.RWKV.html |
358e31798a55-0 | langchain.llms.base.update_cache¶
langchain.llms.base.update_cache(existing_prompts: Dict[int, List], llm_string: str, missing_prompt_idxs: List[int], new_results: LLMResult, prompts: List[str]) → Optional[dict][source]¶
Update the cache and get the LLM output. | https://api.python.langchain.com/en/latest/llms/langchain.llms.base.update_cache.html |
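A simplified sketch of the pattern `update_cache` implements: results that were just generated for cache-miss prompts are written back to the cache and merged with the already-cached ones. (The real function keys the cache on the prompt plus `llm_string` and works with `LLMResult` objects.)

```python
def update_cache_sketch(existing_prompts, missing_prompt_idxs, new_generations, prompts, cache):
    """Write newly generated results into the cache and merge them into
    the per-prompt-index dict of results."""
    for i, idx in enumerate(missing_prompt_idxs):
        generation = new_generations[i]
        existing_prompts[idx] = generation
        cache[prompts[idx]] = generation
    return existing_prompts
```

After the merge, `existing_prompts` holds one result per prompt index regardless of whether it came from the cache or a fresh call.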
da07fce1c38c-0 | langchain.llms.manifest.ManifestWrapper¶
class langchain.llms.manifest.ManifestWrapper(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, clien... | https://api.python.langchain.com/en/latest/llms/langchain.llms.manifest.ManifestWrapper.html |
da07fce1c38c-1 | Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required... | https://api.python.langchain.com/en/latest/llms/langchain.llms.manifest.ManifestWrapper.html |
da07fce1c38c-2 | Get the token present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator raise_deprecation » ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.manifest.ManifestWrapper.html |
40f63856ef0e-0 | langchain.llms.petals.Petals¶
class langchain.llms.petals.Petals(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, client: Any = None, tokeniz... | https://api.python.langchain.com/en/latest/llms/langchain.llms.petals.Petals.html |
40f63856ef0e-1 | param max_length: Optional[int] = None¶
The maximum length of the sequence to be generated.
param max_new_tokens: int = 256¶
The maximum number of new tokens to generate in the completion.
param model_kwargs: Dict[str, Any] [Optional]¶
Holds any model parameters valid for create call
not explicitly specified.
param mod... | https://api.python.langchain.com/en/latest/llms/langchain.llms.petals.Petals.html |
40f63856ef0e-2 | Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.petals.Petals.html |
40f63856ef0e-3 | Predict message from messages.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
validator ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.petals.Petals.html |
97cf9b057a49-0 | langchain.llms.openllm.OpenLLM¶
class langchain.llms.openllm.OpenLLM(model_name: Optional[str] = None, *, model_id: Optional[str] = None, server_url: Optional[str] = None, server_type: Literal['grpc', 'http'] = 'http', embedded: bool = True, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openllm.OpenLLM.html |
97cf9b057a49-1 | param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param embedded: bool = True¶
Initialize this LLM instance in current process by default. Should
only set to False when using in conjunction with BentoML Service.
param llm_kwargs: Dict[st... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openllm.OpenLLM.html |
97cf9b057a49-2 | Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openllm.OpenLLM.html |
97cf9b057a49-3 | Get the token present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator raise_deprecation » ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openllm.OpenLLM.html |
97cf9b057a49-4 | Example:
.. code-block:: python
llm = OpenLLM(model_name="flan-t5",
              model_id="google/flan-t5-large",
              embedded=False,
)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION
)
svc = bentoml.Service(“langchain-openllm”, runners=[llm.runner])... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openllm.OpenLLM.html |
14a1c749fb43-0 | langchain.llms.aviary.Aviary¶
class langchain.llms.aviary.Aviary(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, model: str = 'amazon/LightG... | https://api.python.langchain.com/en/latest/llms/langchain.llms.aviary.Aviary.html |
14a1c749fb43-1 | param model: str = 'amazon/LightGPT'¶
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param use_prompt_format: bool = True¶
param verbose: bool [Optional]¶
Whether to print out response text.
param version: Optional[str] = None¶
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Op... | https://api.python.langchain.com/en/latest/llms/langchain.llms.aviary.Aviary.html |
14a1c749fb43-2 | dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
genera... | https://api.python.langchain.com/en/latest/llms/langchain.llms.aviary.Aviary.html |
14a1c749fb43-3 | This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_environment » all fields[source]¶
Validate that the api key and python package exist in the environment.
prop... | https://api.python.langchain.com/en/latest/llms/langchain.llms.aviary.Aviary.html |
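The `use_prompt_format` flag above suggests the Aviary wrapper can wrap the raw prompt in a model-specific template before dispatch. A minimal sketch of that idea, where the template string and function name are invented for illustration (not Aviary's real internals):

```python
# Hypothetical sketch of use_prompt_format=True: wrap the raw prompt in a
# model-specific template before sending it to the backend. The template
# below is invented for illustration, not Aviary's actual one.
PROMPT_TEMPLATE = "Instruction: {prompt}\nResponse:"

def build_request(prompt, use_prompt_format=True, template=PROMPT_TEMPLATE):
    """Return the string actually sent to the model."""
    return template.format(prompt=prompt) if use_prompt_format else prompt
```

With `use_prompt_format=False`, the prompt passes through untouched, which is useful when the caller has already formatted it.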
f98a9762d77c-0 | langchain.llms.loading.load_llm¶
langchain.llms.loading.load_llm(file: Union[str, Path]) → BaseLLM[source]¶
Load LLM from file. | https://api.python.langchain.com/en/latest/llms/langchain.llms.loading.load_llm.html |
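`load_llm` reads a saved config file and reconstructs the matching LLM class. A stdlib-only sketch of the underlying pattern, assuming a `"_type"` key that selects a constructor; the registry and function names here are invented, and the real function also accepts YAML:

```python
import json
import tempfile
from pathlib import Path

# Hypothetical constructor for one "_type"; the real loader maps each type
# string to the corresponding LLM class.
def fake_llm_from_config(config):
    return {"kind": "fake", **config}

TYPE_TO_LOADER = {"fake": fake_llm_from_config}

def load_llm_sketch(file):
    """Load an LLM description from a JSON file, dispatching on '_type'."""
    config = json.loads(Path(file).read_text())
    llm_type = config.pop("_type")
    if llm_type not in TYPE_TO_LOADER:
        raise ValueError(f"Loading {llm_type} LLM not supported")
    return TYPE_TO_LOADER[llm_type](config)

# Round-trip: write a config to disk, then load it back.
with tempfile.TemporaryDirectory() as d:
    path = Path(d) / "llm.json"
    path.write_text(json.dumps({"_type": "fake", "temperature": 0.7}))
    llm = load_llm_sketch(path)
```

The dispatch-on-`_type` shape is what lets `save()` and `load_llm` round-trip any registered LLM class.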
9000187e50ed-0 | langchain.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference¶
class langchain.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[... | https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html |
9000187e50ed-1 | - inference_server_url: The URL of the inference server to use.
- timeout: The timeout value in seconds to use while connecting to inference server.
- server_kwargs: The keyword arguments to pass to the inference server.
- client: The client object used to communicate with the inference server.
- async_client: The asyn... | https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html |
9000187e50ed-2 | param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[st... | https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html |
9000187e50ed-3 | Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int... | https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html |
9000187e50ed-4 | property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
... | https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html |
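The parameter list above (inference_server_url, timeout, server_kwargs, client) suggests the wrapper builds its client from those values at construction time. A sketch under that assumption, with a fake stand-in for the real `text_generation` client; all names here are illustrative:

```python
from dataclasses import dataclass, field

# Stand-in for the text-generation-inference client; the real wrapper builds
# a text_generation.Client from these same parameters.
@dataclass
class FakeTGIClient:
    base_url: str
    timeout: int
    headers: dict = field(default_factory=dict)

class TGIWrapperSketch:
    """Sketch of how inference_server_url, timeout, and server_kwargs
    combine to construct the client. The timeout default is illustrative."""
    def __init__(self, inference_server_url, timeout=120, server_kwargs=None):
        self.server_kwargs = server_kwargs or {}
        # server_kwargs are forwarded verbatim to the client constructor.
        self.client = FakeTGIClient(inference_server_url, timeout, **self.server_kwargs)

llm = TGIWrapperSketch(
    "http://localhost:8010",
    server_kwargs={"headers": {"Authorization": "Bearer <token>"}},
)
```

Keeping `server_kwargs` as an opaque pass-through dict lets callers set client options (auth headers, cookies) the wrapper itself does not model.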
49d173fa7dba-0 | langchain.llms.databricks.Databricks¶
class langchain.llms.databricks.Databricks(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, host: str =... | https://api.python.langchain.com/en/latest/llms/langchain.llms.databricks.Databricks.html |
49d173fa7dba-1 | the driver IP address or simply 0.0.0.0 instead of localhost only.
To wrap it as an LLM you must have “Can Attach To” permission to the cluster.
Set cluster_id and cluster_driver_port and do not set endpoint_name.
The expected server schema (using JSON schema) is:
inputs:
{"type": "object",
"properties": {
"prompt... | https://api.python.langchain.com/en/latest/llms/langchain.llms.databricks.Databricks.html |
49d173fa7dba-2 | param cluster_id: Optional[str] = None¶
ID of the cluster if connecting to a cluster driver proxy app.
If neither endpoint_name nor cluster_id is provided and the code runs
inside a Databricks notebook attached to an interactive cluster in “single user”
or “no isolation shared” mode, the current cluster ID is used ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.databricks.Databricks.html |
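The resolution rule above (an explicit endpoint, else an explicit cluster, else the cluster the notebook is attached to) can be sketched as a small helper. This is a hypothetical illustration, not the Databricks wrapper's real API:

```python
# Minimal sketch of the target-resolution rule: prefer endpoint_name, then an
# explicit cluster_id, then the currently attached cluster (hypothetical helper).
def resolve_target(endpoint_name=None, cluster_id=None, current_cluster_id=None):
    if endpoint_name is not None and cluster_id is not None:
        raise ValueError("Set either endpoint_name or cluster_id, not both.")
    if endpoint_name is not None:
        return ("endpoint", endpoint_name)
    if cluster_id is not None:
        return ("cluster", cluster_id)
    if current_cluster_id is not None:  # e.g. detected from the notebook context
        return ("cluster", current_cluster_id)
    raise ValueError("Need endpoint_name, cluster_id, or an attached cluster.")
```

Making the two explicit options mutually exclusive, as the docs instruct ("do not set endpoint_name"), avoids silently preferring one target over the other.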
49d173fa7dba-3 | param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[st... | https://api.python.langchain.com/en/latest/llms/langchain.llms.databricks.Databricks.html |
49d173fa7dba-4 | Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int... | https://api.python.langchain.com/en/latest/llms/langchain.llms.databricks.Databricks.html |
49d173fa7dba-5 | to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "... | https://api.python.langchain.com/en/latest/llms/langchain.llms.databricks.Databricks.html |
fc749a8c994a-0 | langchain.llms.openai.BaseOpenAI¶
class langchain.llms.openai.BaseOpenAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, client: Any = None,... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.BaseOpenAI.html |
fc749a8c994a-1 | Batch size to use when passing multiple documents to generate.
param best_of: int = 1¶
Generates best_of completions server-side and returns the "best".
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param disallowed_special: Union[Li... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.BaseOpenAI.html |
fc749a8c994a-2 | param streaming: bool = False¶
Whether to stream the results or not.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param temperature: float = 0.7¶
What sampling temperature to use.
param tiktoken_model_name: Optional[str] = None¶
The model name to pass to tiktoken when using this class.
Tiktoken... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.BaseOpenAI.html |
fc749a8c994a-3 | Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.BaseOpenAI.html |
fc749a8c994a-4 | Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the message.
get_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) → List[List[str]][source]¶
Get the sub prompts for llm call.
get_token_ids... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.BaseOpenAI.html |
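`get_sub_prompts` works together with `batch_size` ("Batch size to use when passing multiple documents to generate"): the prompt list is split into batches of at most `batch_size`. A sketch of that splitting step; the function name and default are illustrative, not the real signature:

```python
# Sketch of sub-prompt batching: split the prompt list into batches of at
# most batch_size, preserving order (illustrative name and default).
def sub_prompts_sketch(prompts, batch_size=20):
    return [prompts[i : i + batch_size] for i in range(0, len(prompts), batch_size)]
```

Each sub-batch then becomes one API call, so a final short batch is expected when `len(prompts)` is not a multiple of `batch_size`.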
fc749a8c994a-5 | Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
stream(prompt: str, stop: Optional[List[str]... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.BaseOpenAI.html |
fc749a8c994a-6 | property max_context_size: int¶
Get max context size for this model.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
allow_population_by_field_name = True¶ | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.BaseOpenAI.html |
46b9df699f70-0 | langchain.llms.cohere.Cohere¶
class langchain.llms.cohere.Cohere(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, client: Any = None, model: ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.cohere.Cohere.html |
46b9df699f70-1 | param k: int = 0¶
Number of most likely tokens to consider at each step.
param max_retries: int = 10¶
Maximum number of retries to make when generating.
param max_tokens: int = 256¶
Denotes the number of tokens to predict per generation.
param model: Optional[str] = None¶
Model name to use.
param p: int = 1¶
Total prob... | https://api.python.langchain.com/en/latest/llms/langchain.llms.cohere.Cohere.html |
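`max_retries` ("Maximum number of retries to make when generating") implies a retry loop around the generation call. A hypothetical, stdlib-only illustration of that loop with exponential backoff; the helper name and backoff schedule are invented, not Cohere's actual client internals:

```python
import time

# Hypothetical retry loop illustrating max_retries: re-attempt a transient
# failure with exponential backoff until it succeeds or attempts run out.
def with_retries(call, max_retries=10, base_delay=0.0):
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:
            if attempt == max_retries - 1:
                raise  # attempts exhausted: surface the last error
            time.sleep(base_delay * (2 ** attempt))

# A call that fails twice, then succeeds on the third attempt.
attempts = {"n": 0}
def flaky_generate():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("transient error")
    return "generation"

result = with_retries(flaky_generate, max_retries=5)
```

Re-raising on the final attempt keeps the original exception visible to the caller rather than swallowing it.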
46b9df699f70-2 | Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required... | https://api.python.langchain.com/en/latest/llms/langchain.llms.cohere.Cohere.html |
46b9df699f70-3 | Get the token present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator raise_deprecation » ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.cohere.Cohere.html |
796629b81c5d-0 | langchain.llms.fake.FakeListLLM¶
class langchain.llms.fake.FakeListLLM(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, responses: List, i: i... | https://api.python.langchain.com/en/latest/llms/langchain.llms.fake.FakeListLLM.html |
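The signature above (`responses: List, i: i...`) suggests FakeListLLM replays a fixed list of responses, tracking an index `i`. A minimal re-implementation of that idea for unit tests; wrapping around at the end of the list is a convenience of this sketch and may differ from the real class:

```python
# Minimal sketch of the FakeListLLM pattern: ignore the prompt and return the
# configured responses in order, tracking an index i (wrap-around is this
# sketch's choice, not necessarily the real behavior).
class FakeListLLMSketch:
    def __init__(self, responses):
        self.responses = responses
        self.i = 0

    def __call__(self, prompt, stop=None):
        response = self.responses[self.i]
        self.i = (self.i + 1) % len(self.responses)
        return response

llm = FakeListLLMSketch(responses=["first", "second"])
outputs = [llm("q1"), llm("q2"), llm("q3")]
```

Deterministic fakes like this let chains and agents be tested without any network calls or API keys.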