| id | text | source |
|---|---|---|
479f31c64c6b-22 | for token in generator:
yield token
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.Banana[source]#
Wrapper around Banana large language models.
To use, you should have the banana-dev python package ... | https://python.langchain.com/en/latest/reference/modules/llms.html |
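The Banana example in this chunk is truncated. A minimal sketch of instantiating the wrapper, assuming the banana-dev package is installed and BANANA_API_KEY is available; the model_key field name and its value are assumptions for illustration:

```python
import os
from langchain.llms import Banana

os.environ["BANANA_API_KEY"] = "..."  # placeholder; normally set in the environment
llm = Banana(model_key="your-deployed-model-key")  # assumed field name
print(llm("Tell me a joke."))
```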
479f31c64c6b-23 | Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a li... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-24 | Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the... | https://python.langchain.com/en/latest/reference/modules/llms.html |
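A minimal usage sketch of generate(); the prompts and stop sequence are illustrative, and any LLM subclass could stand in for OpenAI here:

```python
from langchain.llms import OpenAI  # illustrative choice of LLM subclass

llm = OpenAI()
result = llm.generate(["Tell me a joke.", "Tell me a fact."], stop=["\n\n"])
# LLMResult holds one list of generations per input prompt.
for gens in result.generations:
    print(gens[0].text)
```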
479f31c64c6b-25 | predict(text: str, *, stop: Optional[Sequence[str]] = None) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM... | https://python.langchain.com/en/latest/reference/modules/llms.html |
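A hedged sketch of predict() and predict_messages(); the prompts are illustrative:

```python
from langchain.llms import OpenAI  # illustrative choice of LLM subclass
from langchain.schema import HumanMessage

llm = OpenAI()
text = llm.predict("Translate 'hello' to French.", stop=["\n"])
msg = llm.predict_messages([HumanMessage(content="Say hi.")])
print(text, msg.content)
```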
479f31c64c6b-26 | "safetensors",
"xformers",],
max_length=50)
llm._deploy()
call_result = llm._call(input)
Validators
build_extra » all fields
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call ... | https://python.langchain.com/en/latest/reference/modules/llms.html |
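The code fragment at the top of this chunk is the tail of a Beam deployment example. A hedged reconstruction under assumed field names and illustrative values (only the trailing python_packages entries, max_length, _deploy, and _call appear in the chunk):

```python
from langchain.llms.beam import Beam  # assumed import path

llm = Beam(
    model_name="gpt2",            # illustrative values throughout
    name="langchain-gpt2",
    cpu=8,
    memory="32Gi",
    gpu="A10G",
    python_version="python3.8",
    python_packages=[
        "transformers",
        "torch",
        "accelerate",
        "safetensors",
        "xformers",
    ],
    max_length=50,
)
llm._deploy()                      # creates and deploys the Beam app
call_result = llm._call("Running machine learning on a remote GPU")
```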
479f31c64c6b-27 | Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ a... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-28 | Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of p... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-29 | Creates a Python file which will be deployed on beam.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardR... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-30 | Key word arguments to pass to the model.
field region_name: Optional[str] = None#
The AWS region, e.g., us-west-2. Falls back to the AWS_DEFAULT_REGION env variable
or the region specified in ~/.aws/config if not provided here.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, s... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-31 | Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-32 | Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = No... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-33 | Example
from langchain.llms import CTransformers
llm = CTransformers(model="/path/to/ggml-gpt-2.bin", model_type="gpt2")
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field config: Optional[Dict[str, Any]] = None#
The config parameters.
See marella/ctransformers
field... | https://python.langchain.com/en/latest/reference/modules/llms.html |
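A hedged sketch of passing a config dict to CTransformers; the config keys follow the ctransformers library referenced above (marella/ctransformers), and the shown values are assumptions:

```python
from langchain.llms import CTransformers

# Assumed config keys from the ctransformers library; values are illustrative.
config = {"max_new_tokens": 256, "temperature": 0.8, "repetition_penalty": 1.1}
llm = CTransformers(model="marella/gpt-2-ggml", model_type="gpt2", config=config)
print(llm("AI is going to"))
```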
479f31c64c6b-34 | Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) → str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema.BaseMessage#
Predict message from m... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-35 | Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of p... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-36 | Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.CerebriumAI[sourc... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-37 | Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a li... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-38 | Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-39 | predict(text: str, *, stop: Optional[Sequence[str]] = None) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-40 | field presence_penalty: float = 0.0#
Penalizes repeated tokens. Between 0 and 1.
field temperature: float = 0.75#
A non-negative float that tunes the degree of randomness in generation.
field truncate: Optional[str] = None#
Specify how the client handles inputs longer than the maximum token
length: Truncate from START,... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-41 | Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclu... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-42 | Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, ... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-43 | LLM wrapper around a Databricks serving endpoint or a cluster driver proxy app.
It supports two endpoint types:
Serving endpoint (recommended for both production and development).
We assume that an LLM was registered and deployed to a serving endpoint.
To wrap it as an LLM you must have “Can Query” permission to the en... | https://python.langchain.com/en/latest/reference/modules/llms.html |
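A minimal sketch of the serving-endpoint mode described above, assuming the code runs inside a Databricks notebook (so host and token can be inferred) and that an endpoint with the given name exists; the endpoint name is a placeholder:

```python
from langchain.llms import Databricks

llm = Databricks(endpoint_name="my-llm")  # placeholder endpoint name
print(llm("How are you?"))
```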
479f31c64c6b-44 | transformations before and after the query.
Validators
raise_deprecation » all fields
set_cluster_driver_port » cluster_driver_port
set_cluster_id » cluster_id
set_model_kwargs » model_kwargs
set_verbose » verbose
field api_token: str [Optional]#
Databricks personal access token.
If not provided, the default value is d... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-45 | a Databricks notebook attached to an interactive cluster in “single user”
or “no isolation shared” mode.
field model_kwargs: Optional[Dict[str, Any]] = None#
Extra parameters to pass to the endpoint.
field transform_input_fn: Optional[Callable] = None#
A function that transforms {prompt, stop, **kwargs} into a JSON-com... | https://python.langchain.com/en/latest/reference/modules/llms.html |
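A hedged sketch of transform_input_fn: per the field description it receives {prompt, stop, **kwargs} and must return a JSON-compatible request dict. The wrapper below and the endpoint name are assumptions for illustration:

```python
from langchain.llms import Databricks

def transform_input(**request):
    # Prepend an instruction prefix before the request is sent to the endpoint.
    request["prompt"] = f"Instruct: {request['prompt']}"
    return request

llm = Databricks(endpoint_name="my-llm", transform_input_fn=transform_input)
```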
479f31c64c6b-46 | Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ a... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-47 | Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of p... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-48 | Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.DeepInfra[source]... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-49 | Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a li... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-50 | Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-51 | predict(text: str, *, stop: Optional[Sequence[str]] = None) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-52 | Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) → str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema.BaseMessage#
Predict message from m... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-53 | Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of p... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-54 | Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.ForefrontAI[sourc... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-55 | Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given pro... | https://python.langchain.com/en/latest/reference/modules/llms.html |
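A minimal sketch of calling the async agenerate() API; the choice of OpenAI as the LLM subclass and the prompt are illustrative:

```python
import asyncio
from langchain.llms import OpenAI  # illustrative choice of LLM subclass

async def main() -> None:
    llm = OpenAI()
    result = await llm.agenerate(["Tell me a joke."])
    print(result.generations[0][0].text)

asyncio.run(main())
```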
479f31c64c6b-56 | Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep co... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-57 | Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: ... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-58 | Example
from langchain.llms import GPT4All
model = GPT4All(model="./models/gpt4all-model.bin", n_ctx=512, n_threads=8)
# Simplest invocation
response = model("Once upon a time, ")
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field allow_download: bool = False#
If mod... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-59 | The penalty to apply to repeated tokens.
field seed: int = 0#
Seed. If -1, a random seed is used.
field stop: Optional[List[str]] = []#
A list of strings to stop generation when encountered.
field streaming: bool = False#
Whether to stream the results or not.
field temp: Optional[float] = 0.8#
The temperature to use fo... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-60 | Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) → str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema.BaseMessage#
Predict message from m... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-61 | Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of p... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-62 | Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.GooglePalm[source... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-63 | Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given pro... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-64 | Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep co... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-65 | Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: ... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-66 | in, even if not explicitly saved on this class.
Example
from langchain.llms import GooseAI
gooseai = GooseAI(model_name="gpt-neo-20b")
Validators
build_extra » all fields
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field frequency_penalty: float = 0#
Penalizes repeated tokens ... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-67 | Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given pro... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-68 | Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep co... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-69 | Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: ... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-70 | it as a named parameter to the constructor.
Only supports text-generation and text2text-generation for now.
Example
from langchain.llms import HuggingFaceEndpoint
endpoint_url = (
"https://abcdefghijklmnop.us-east-1.aws.endpoints.huggingface.cloud"
)
hf = HuggingFaceEndpoint(
endpoint_url=endpoint_url,
hugg... | https://python.langchain.com/en/latest/reference/modules/llms.html |
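The HuggingFaceEndpoint example above is cut off mid-parameter. A hedged completion: the huggingfacehub_api_token parameter name and the task value are assumptions consistent with the surrounding reference (which says only text-generation and text2text-generation are supported):

```python
from langchain.llms import HuggingFaceEndpoint

endpoint_url = (
    "https://abcdefghijklmnop.us-east-1.aws.endpoints.huggingface.cloud"
)
hf = HuggingFaceEndpoint(
    endpoint_url=endpoint_url,
    huggingfacehub_api_token="my-api-key",  # assumed parameter name
    task="text-generation",
)
```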
479f31c64c6b-71 | Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) → str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema.BaseMessage#
Predict message from m... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-72 | Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of p... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-73 | Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.HuggingFaceHub[so... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-74 | Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given pro... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-75 | Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep co... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-76 | Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: ... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-77 | hf = HuggingFacePipeline.from_model_id(
model_id="gpt2",
task="text-generation",
pipeline_kwargs={"max_new_tokens": 10},
)
Example passing pipeline in directly:
from langchain.llms import HuggingFacePipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_id = "gpt2"
tokenizer = ... | https://python.langchain.com/en/latest/reference/modules/llms.html |
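The "passing pipeline in directly" example above is cut off at `tokenizer = ...`. A hedged completion using standard transformers calls (max_new_tokens=10 mirrors the earlier from_model_id example; the model id is the same "gpt2" shown above):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from langchain.llms import HuggingFacePipeline

model_id = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
pipe = pipeline(
    "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10
)
hf = HuggingFacePipeline(pipeline=pipe)
```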
479f31c64c6b-78 | Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a li... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-79 | Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
classmethod from_model_id(model_id: str, task: str, device: int = - 1, model_kwargs: Optional[dict] = None, pipeline_kwargs: Optional[dict] = None, **kwargs: Any) → langchain.llms.base.LLM[source]#
Construct the pipeline object from ... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-80 | Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: ... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-81 | Attributes:
- max_new_tokens: The maximum number of tokens to generate.
- top_k: The number of top-k tokens to consider when generating text.
- top_p: The cumulative probability threshold for generating text.
- typical_p: The typical probability threshold for generating text.
- temperature: The temperature to use when ... | https://python.langchain.com/en/latest/reference/modules/llms.html |
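The attribute list above matches LangChain's text-generation-inference wrapper. A hedged sketch under that assumption; the class name, server URL, and values are all assumptions for illustration:

```python
from langchain.llms import HuggingFaceTextGenInference  # assumed class for these attributes

llm = HuggingFaceTextGenInference(
    inference_server_url="http://localhost:8010/",  # illustrative URL
    max_new_tokens=512,
    top_k=10,
    top_p=0.95,
    typical_p=0.95,
    temperature=0.01,
)
```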
479f31c64c6b-82 | Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a li... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-83 | Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-84 | predict(text: str, *, stop: Optional[Sequence[str]] = None) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-85 | Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a li... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-86 | Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-87 | predict(text: str, *, stop: Optional[Sequence[str]] = None) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-88 | The number of logprobs to return. If None, no logprobs are returned.
field lora_base: Optional[str] = None#
The path to the Llama LoRA base model.
field lora_path: Optional[str] = None#
The path to the Llama LoRA. If None, no LoRA is loaded.
field max_tokens: Optional[int] = 256#
The maximum number of tokens to generat... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-89 | The temperature to use for sampling.
field top_k: Optional[int] = 40#
The top-k value to use for sampling.
field top_p: Optional[float] = 0.95#
The top-p value to use for sampling.
field use_mlock: bool = False#
Force system to keep model in RAM.
field use_mmap: Optional[bool] = True#
Whether to keep the model loaded i... | https://python.langchain.com/en/latest/reference/modules/llms.html |
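A minimal sketch wiring the fields above into a LlamaCpp instantiation; the model path and prompt are placeholders, and the values simply echo the defaults documented above:

```python
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./models/llama-7b.ggml.bin",  # placeholder local weights path
    max_tokens=256,
    top_k=40,
    top_p=0.95,
    use_mlock=False,
)
print(llm("Q: Name the planets in the solar system. A:"))
```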
479f31c64c6b-90 | Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set ... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-91 | Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int[source]#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the token p... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-92 | .. code-block:: python
llm.save(file_path="path/llm.yaml")
stream(prompt: str, stop: Optional[List[str]] = None, run_manager: Optional[langchain.callbacks.manager.CallbackManagerForLLMRun] = None) → Generator[Dict, None, None][source]#
Yields results objects as they are generated in real time.
BETA: this is a beta feat... | https://python.langchain.com/en/latest/reference/modules/llms.html |
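A hedged sketch of consuming the beta stream() generator described above; it assumes a LlamaCpp-style LLM whose result dicts carry an OpenAI-compatible "choices" shape (that shape is an assumption, not confirmed by this chunk):

```python
from langchain.llms import LlamaCpp

llm = LlamaCpp(model_path="./models/llama-7b.ggml.bin")  # placeholder path
prompt = "Question: What is the capital of France? Answer:"
for chunk in llm.stream(prompt, stop=["\n"]):
    # Each chunk is a result object yielded in real time.
    print(chunk["choices"][0]["text"], end="", flush=True)
```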
479f31c64c6b-93 | in, even if not explicitly saved on this class.
Example
from langchain.llms import Modal
modal = Modal(endpoint_url="")
Validators
build_extra » all fields
raise_deprecation » all fields
set_verbose » verbose
field endpoint_url: str = ''#
model endpoint to use
field model_kwargs: Dict[str, Any] [Optional]#
Holds any mo... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-94 | Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set ... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-95 | Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the token IDs present i... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-96 | Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.MosaicML[source]#
Wrapper around MosaicML’s LLM inference service.
To use, you should have the
environment variable MOSAICML_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.... | https://python.langchain.com/en/latest/reference/modules/llms.html |
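A minimal sketch of the MosaicML wrapper described above, assuming MOSAICML_API_TOKEN is set in the environment; the endpoint URL and model_kwargs are illustrative assumptions:

```python
from langchain.llms import MosaicML

llm = MosaicML(
    endpoint_url="https://models.hosted-on.mosaicml.hosting/mpt-7b-instruct/v1/predict",
    model_kwargs={"do_sample": False},  # illustrative kwargs
)
print(llm("Is a dog a mammal?"))
```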
479f31c64c6b-97 | Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given pro... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-98 | Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep co... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-99 | Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: ... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-100 | nlpcloud = NLPCloud(model="gpt-neox-20b")
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field bad_words: List[str] = []#
List of tokens not allowed to be generated.
field do_sample: bool = True#
Whether to use sampling (True) or greedy decoding.
field early_stopping: ... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-101 | field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → str#
Check Cache and run the LLM on the given prompt and inpu... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-102 | Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-103 | get_token_ids(text: str) → List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-104 | Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import OpenAI
openai = OpenAI(model_name="text-davinci-003")
Validators
build_extra » all fields
raise_deprecation » all fields
set_verbose » verbose
validate_en... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-105 | field presence_penalty: float = 0#
Penalizes repeated tokens.
field request_timeout: Optional[Union[float, Tuple[float, float]]] = None#
Timeout for requests to OpenAI completion API. Default is 600 seconds.
field streaming: bool = False#
Whether to stream the results or not.
field temperature: float = 0.7#
What sampli... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-106 | Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set ... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-107 | Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of p... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-108 | Parameters
prompt – The prompt to pass into the model.
Returns
The maximum number of tokens to generate for a prompt.
Example
max_tokens = openai.max_tokens_for_prompt("Tell me a joke.")
modelname_to_contextsize(modelname: str) → int#
Calculate the maximum number of tokens possible to generate for a model.
Parameters
mo... | https://python.langchain.com/en/latest/reference/modules/llms.html |
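A short sketch combining the two context-size helpers documented above; the model name is the same illustrative "text-davinci-003" used elsewhere in this reference:

```python
from langchain.llms import OpenAI

openai = OpenAI(model_name="text-davinci-003")
# Tokens still available for the completion after this prompt is counted:
max_tokens = openai.max_tokens_for_prompt("Tell me a joke.")
# Total context window for a given model name:
context_size = openai.modelname_to_contextsize("text-davinci-003")
```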
479f31c64c6b-109 | for token in generator:
yield token
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.OpenAIChat[source]#
Wrapper around OpenAI Chat large language models.
To use, you should have the openai python pac... | https://python.langchain.com/en/latest/reference/modules/llms.html |
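A minimal sketch of instantiating the OpenAIChat wrapper, assuming the openai package is installed and OPENAI_API_KEY is set; the model name is illustrative:

```python
from langchain.llms import OpenAIChat

llm = OpenAIChat(model_name="gpt-3.5-turbo")  # illustrative model name
print(llm("Hello, how are you?"))
```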
479f31c64c6b-110 | field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → str#
Check Cache and run the LLM on the given prompt and inpu... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-111 | Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-112 | get_token_ids(text: str) → List[int][source]#
Get the token IDs using the tiktoken package.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: boo... | https://python.langchain.com/en/latest/reference/modules/llms.html |
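A hedged sketch of get_token_ids(), which per the docstring above uses the tiktoken package; the prompt is illustrative:

```python
from langchain.llms import OpenAI

llm = OpenAI()
# Token IDs come from tiktoken's encoding for the configured model.
ids = llm.get_token_ids("tiktoken is great!")
print(len(ids), ids)
```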
479f31c64c6b-113 | Set of special tokens that are allowed.
field batch_size: int = 20#
Batch size to use when passing multiple documents to generate.
field best_of: int = 1#
Generates best_of completions server-side and returns the “best”.
field disallowed_special: Union[Literal['all'], Collection[str]] = 'all'#
Set of special tokens tha... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-114 | field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → str#
Check Cache and run the LLM on the given prompt and inpu... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-115 | Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-116 | Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) → List[List[str]]#
Get the sub prompts for llm call.
get_... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-117 | max_tokens = openai.modelname_to_contextsize("text-davinci-003")
predict(text: str, *, stop: Optional[Sequence[str]] = None) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema.BaseMessage#
Predict message from message... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-118 | Any parameters that are valid to be passed to the call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import Petals
petals = Petals()
Validators
build_extra » all fields
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field client: Any = ... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-119 | Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given pro... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-120 | Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep co... | https://python.langchain.com/en/latest/reference/modules/llms.html |
479f31c64c6b-121 | Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: ... | https://python.langchain.com/en/latest/reference/modules/llms.html |