id | text | source |
|---|---|---|
940339bf8f27-4 | text: The text to embed.
Returns:
Embeddings for the text.
"""
instruction_pair = [self.query_instruction, text]
embedding = self.client(self.pipeline_ref, [instruction_pair])[0]
return embedding.tolist()
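For context, a minimal usage sketch of this embed_query method (an assumption-laden illustration: the class name and runhouse cluster setup follow this module's self-hosted embeddings pattern and are not shown on this page):
.. code-block:: python
# Sketch only; assumes a configured runhouse environment.
import runhouse as rh
from langchain.embeddings import SelfHostedHuggingFaceInstructEmbeddings

gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")  # hypothetical cluster
embeddings = SelfHostedHuggingFaceInstructEmbeddings(hardware=gpu)
query_vector = embeddings.embed_query("What did the fox jump over?")  # List[float]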
... | https://python.langchain.com/en/latest/_modules/langchain/embeddings/self_hosted_hugging_face.html |
08100a365768-0 | Source code for langchain.embeddings.tensorflow_hub
"""Wrapper around TensorflowHub embedding models."""
from typing import Any, List
from pydantic import BaseModel, Extra
from langchain.embeddings.base import Embeddings
DEFAULT_MODEL_URL = "https://tfhub.dev/google/universal-sentence-encoder-multilingual/3"
clas... | https://python.langchain.com/en/latest/_modules/langchain/embeddings/tensorflow_hub.html |
08100a365768-1 | [docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:
"""Compute doc embeddings using a TensorflowHub embedding model.
Args:
texts: The list of texts to embed.
Returns:
List of embeddings, one for each text.
"""
texts = list(map(lambd... | https://python.langchain.com/en/latest/_modules/langchain/embeddings/tensorflow_hub.html |
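A brief, hedged usage sketch of this wrapper (the constructor defaults to DEFAULT_MODEL_URL above; the texts are illustrative):
.. code-block:: python
from langchain.embeddings import TensorflowHubEmbeddings

# Uses the multilingual universal-sentence-encoder model by default.
embeddings = TensorflowHubEmbeddings()
doc_vectors = embeddings.embed_documents(["hello world", "goodbye world"])
query_vector = embeddings.embed_query("hello")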
1738c04af99f-0 | Prompts#
The reference guides here all relate to objects for working with Prompts.
PromptTemplates
Example Selector
Output Parsers
| https://python.langchain.com/en/latest/reference/prompts.html |
d48f722d2cdf-0 | Models#
LangChain provides interfaces and integrations for a number of different types of models.
LLMs
Chat Models
Embeddings
| https://python.langchain.com/en/latest/reference/models.html |
d67c639f93c2-0 | Installation#
Official Releases#
LangChain is available on PyPI, so it is easily installable with:
pip install langchain
That will install the bare minimum requirements of LangChain.
A lot of the value of LangChain comes when integrating it wi... | https://python.langchain.com/en/latest/reference/installation.html |
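The page is truncated here. As a hedged illustration, integration dependencies in this era of LangChain were typically pulled in via pip extras (the exact extras names are an assumption):
pip install langchain[llms]
pip install langchain[all]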
26b9615fbf20-0 | Agents#
Reference guide for Agents and associated abstractions.
Agents
Tools
Agent Toolkits
| https://python.langchain.com/en/latest/reference/agents.html |
e0476d135f82-0 | Indexes#
Indexes refer to ways to structure documents so that LLMs can best interact with them.
LangChain has a number of modules that help you load, structure, store, and retrieve documents.
Docstore
Text Splitter
Document Loaders
Vector Stores
Retrievers
Document Compressors
Document Transformers
| https://python.langchain.com/en/latest/reference/indexes.html |
ded86bd066ca-0 | LLMs#
Wrappers on top of large language models APIs.
pydantic model langchain.llms.AI21[source]#
Wrapper around AI21 large language models.
To use, you should have the environment variable AI21_API_KEY
set with your API key.
Example
from langchain.llms import AI21
ai21 = AI21(model="j2-jumbo-instruct")
V... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-1 | field numResults: int = 1#
How many completions to generate for each prompt.
field presencePenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True)#
Penalizes repeated tokens.
field temperat... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-2 | Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ a... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-3 | Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of p... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-4 | Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
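A matching load sketch (hedged: assumes the load_llm helper in langchain.llms.loading from this version):
.. code-block:: python
from langchain.llms.loading import load_llm

llm = load_llm("path/llm.yaml")  # reads the saved config back into an LLM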
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.AlephAlpha[source... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-5 | True: apply control by adding the log(control_factor) to attention scores.
False: (attention_scores - attention_scores.min(-1)) * control_factor
field echo: bool = False#
Echo the prompt in the completion.
field frequency_penalty: float = 0.0#
Penalizes repeated tokens according to frequency.
field log_probs: Optiona... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-6 | field stop_sequences: Optional[List[str]] = None#
Stop sequences to use.
field temperature: float = 0.0#
A non-negative float that tunes the degree of randomness in generation.
field tokens: Optional[bool] = False#
Return tokens of the completion.
field top_k: int = 0#
Number of most likely tokens to consider at each step.... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-7 | Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ a... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-8 | Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of p... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-9 | Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.Anthropic[source]... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-10 | Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given pro... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-11 | Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep co... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-12 | Get the token present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: ... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-13 | Example
prompt = "Write a poem about a stream."
prompt = f"\n\nHuman: {prompt}\n\nAssistant:"
generator = anthropic.stream(prompt)
for token in generator:
yield token
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-14 | Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) → str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema.BaseMessage#
Predict message from m... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-15 | Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of p... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-16 | Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.AzureOpenAI[sourc... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-17 | Adjust the probability of specific tokens being generated.
field max_retries: int = 6#
Maximum number of retries to make when generating.
field max_tokens: int = 256#
The maximum number of tokens to generate in the completion.
-1 returns as many tokens as possible given the prompt and
the model's maximal context size.
f... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-18 | Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a li... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-19 | deep β set to True to make a deep copy of the model
Returns
new model instance
create_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) β langchain.schema.LLMResult#
Create the LLMResult from the choices and prompts.
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: L... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-20 | Get the token IDs using the tiktoken package.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exc... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-21 | Prepare the params for streaming.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
stream(prompt: str, stop: Optional[List[str]] = None) → Generator#
Call OpenAI with streaming flag... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-22 | field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → str#
Check Cache and run the LLM on the given prompt and inpu... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-23 | Behaves as if Config.extra = βallowβ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-24 | get_token_ids(text: str) β List[int]#
Get the token present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-25 | and BEAM_CLIENT_SECRET set with your client secret. Information on how
to get these is available here: https://docs.beam.cloud/account/api-keys.
The wrapper can then be called as follows, where the name, cpu, memory, gpu,
python version, and python packages can be updated accordingly. Once deployed,
the instance can be... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-26 | Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a li... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-27 | the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler]... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-28 | Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) → str#
Predict text from text.
predict_messages(messages: List[... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-29 | field model: str [Required]#
The path to a model file or directory or the name of a Hugging Face Hub
model repo.
field model_file: Optional[str] = None#
The name of the model file in repo or directory.
field model_type: Optional[str] = None#
The model type.
field verbose: bool [Optional]#
Whether to print out response ... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-30 | Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-31 | Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = No... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-32 | environment variable CEREBRIUMAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the call can be passed
in, even if not explicitly saved on this class.
Example
Validators
build_extra Β» all fields
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field en... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-33 | Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set ... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-34 | Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the token present i... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-35 | Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.Cohere[source]#
Wrapper around Cohere large language models.
To use, you should have the cohere python package installed, and the
environment variable COHERE_API_KEY set with your API key, or pass
it as a named ... | https://python.langchain.com/en/latest/reference/modules/llms.html |
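The docstring example is cut off above; a hedged reconstruction of the usual pattern (model name and key are placeholders):
.. code-block:: python
from langchain.llms import Cohere

cohere = Cohere(model="command", cohere_api_key="my-api-key")
print(cohere("Say hello."))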
ded86bd066ca-36 | Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given pro... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-37 | Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep co... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-38 | Get the token present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: ... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-39 | To wrap it as an LLM you must have βCan Queryβ permission to the endpoint.
Set endpoint_name accordingly and do not set cluster_id and
cluster_driver_port.
The expected model signature is:
inputs:
[{"name": "prompt", "type": "string"},
{"name": "stop", "type": "list[string]"}]
outputs: [{"type": "string"}]
Cluster dri... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-40 | Databricks personal access token.
If not provided, the default value is determined by
the DATABRICKS_API_TOKEN environment variable if present, or
an automatically generated temporary token if running inside a Databricks
notebook attached to an interactive cluster in "single user" or
"no isolation shared" mode.
field c... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-41 | field transform_input_fn: Optional[Callable] = None#
A function that transforms {prompt, stop, **kwargs} into a JSON-compatible
request object that the endpoint accepts.
For example, you can apply a prompt template to the input prompt.
field transform_output_fn: Optional[Callable[[...], str]] = None#
A function that tr... | https://python.langchain.com/en/latest/reference/modules/llms.html |
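A hedged sketch of a transform_input_fn along the lines described above (the template and endpoint name are illustrative):
.. code-block:: python
def transform_input(**request):
    # Wrap the raw prompt in a template before it is sent to the endpoint.
    request["prompt"] = f"Instruct: {request['prompt']}\n"
    return request

# llm = Databricks(endpoint_name="my-endpoint", transform_input_fn=transform_input)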
ded86bd066ca-42 | Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclu... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-43 | Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, ... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-44 | Wrapper around DeepInfra deployed models.
To use, you should have the requests python package installed, and the
environment variable DEEPINFRA_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Only supports text-generation and text2text-generation for now.
Example
from langchain.ll... | https://python.langchain.com/en/latest/reference/modules/llms.html |
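The example above is truncated; a hedged reconstruction (the model id is illustrative; assumes DEEPINFRA_API_TOKEN is set as noted):
.. code-block:: python
from langchain.llms import DeepInfra

di = DeepInfra(model_id="google/flan-t5-xl")
print(di("What is 2 + 2?"))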
ded86bd066ca-45 | Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ a... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-46 | Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of p... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-47 | Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.FakeListLLM[sourc... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-48 | Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set ... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-49 | Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the token present i... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-50 | Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.ForefrontAI[source]#
Wrapper around ForefrontAI large language models.
To use, you should have the environment variable FOREFRONTAI_API_KEY
set with your API key.
Example
from langchain.llms import ForefrontAI
f... | https://python.langchain.com/en/latest/reference/modules/llms.html |
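The example is cut off; a hedged sketch of the usual construction (the endpoint URL is a placeholder for your deployed Forefront model):
.. code-block:: python
from langchain.llms import ForefrontAI

forefrontai = ForefrontAI(endpoint_url="YOUR_ENDPOINT_URL")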
ded86bd066ca-51 | Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a li... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-52 | Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-53 | encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-54 | field f16_kv: bool = False#
Use half-precision for key/value cache.
field logits_all: bool = False#
Return logits for all tokens, not just the last token.
field model: str [Required]#
Path to the pre-trained GPT4All model file.
field n_batch: int = 1#
Batch size for prompt processing.
field n_ctx: int = 512#
Token cont... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-55 | field vocab_only: bool = False#
Only load the vocabulary, no weights.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → str#
Check Cache and run the LLM on the given prompt and in... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-56 | Behaves as if Config.extra = βallowβ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-57 | get_token_ids(text: str) β List[int]#
Get the token present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-58 | If unset, will default to 64.
field model_name: str = 'models/text-bison-001'#
Model name to use.
field n: int = 1#
Number of chat completions to generate for each prompt. Note that the API may
not return the full n completions if duplicates are generated.
field temperature: float = 0.7#
Run inference with this tempera... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-59 | Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) → str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema.BaseMessage#
Predict message from m... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-60 | Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of p... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-61 | Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.GooseAI[source]#
... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-62 | Penalizes repeated tokens.
field temperature: float = 0.7#
What sampling temperature to use.
field top_p: float = 1#
Total probability mass of tokens to consider at each step.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[U... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-63 | Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-64 | Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = No... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-65 | environment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Only supports text-generation and text2text-generation for now.
Example
from langchain.llms import HuggingFaceEndpoint
endpoint_url = (
"https://abcdefghijklmnop.us-east-1.aws.endpoints.hugging... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-66 | Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a li... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-67 | Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-68 | encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-69 | Task to call the model with.
Should be a task that returns generated_text or summary_text.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.Ba... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-70 | Behaves as if Config.extra = βallowβ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-71 | get_token_ids(text: str) β List[int]#
Get the token present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-72 | Example using from_model_id:from langchain.llms import HuggingFacePipeline
hf = HuggingFacePipeline.from_model_id(
model_id="gpt2",
task="text-generation",
pipeline_kwargs={"max_new_tokens": 10},
)
Example passing pipeline in directly:
from langchain.llms import HuggingFacePipeline
from transformers import A... | https://python.langchain.com/en/latest/reference/modules/llms.html |
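The second example is truncated; a hedged reconstruction using the standard transformers API (model id and max_new_tokens mirror the first example):
.. code-block:: python
from langchain.llms import HuggingFacePipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10)
hf = HuggingFacePipeline(pipeline=pipe)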
ded86bd066ca-73 | Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a li... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-74 | Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
classmethod from_model_id(model_id: str, task: str, device: int = -1, model_kwargs: Optional[dict] = None, pipeline_kwargs: Optional[dict] = None, **kwargs: Any) → langchain.llms.base.LLM[source]#
Construct the pipeline object from ... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-75 | Get the token present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: ... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-76 | Attributes:
- max_new_tokens: The maximum number of tokens to generate.
- top_k: The number of top-k tokens to consider when generating text.
- top_p: The cumulative probability threshold for generating text.
- typical_p: The typical probability threshold for generating text.
- temperature: The temperature to use when ... | https://python.langchain.com/en/latest/reference/modules/llms.html |
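A hedged sketch passing the attributes above to a text-generation-inference server (class name per this module; the server URL is a placeholder):
.. code-block:: python
from langchain.llms import HuggingFaceTextGenInference

llm = HuggingFaceTextGenInference(
    inference_server_url="http://localhost:8010/",
    max_new_tokens=512,
    top_k=10,
    top_p=0.95,
    typical_p=0.95,
    temperature=0.01,
)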
ded86bd066ca-77 | Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a li... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-78 | Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-79 | encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-80 | Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a li... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-81 | Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-82 | encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-83 | field logprobs: Optional[int] = None#
The number of logprobs to return. If None, no logprobs are returned.
field lora_base: Optional[str] = None#
The path to the Llama LoRA base model.
field lora_path: Optional[str] = None#
The path to the Llama LoRA. If None, no LoRA is loaded.
field max_tokens: Optional[int] = 256#
T... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-84 | field temperature: Optional[float] = 0.8#
The temperature to use for sampling.
field top_k: Optional[int] = 40#
The top-k value to use for sampling.
field top_p: Optional[float] = 0.95#
The top-p value to use for sampling.
field use_mlock: bool = False#
Force system to keep model in RAM.
field use_mmap: Optional[bool] ... | https://python.langchain.com/en/latest/reference/modules/llms.html |
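A hedged instantiation sketch using the fields above (the model path is a placeholder):
.. code-block:: python
from langchain.llms import LlamaCpp

llm = LlamaCpp(model_path="./models/ggml-model.bin", temperature=0.8, top_p=0.95)
print(llm("Q: Name the planets in the solar system. A:"))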
ded86bd066ca-85 | Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ a... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-86 | Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of p... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-87 | Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
stream(prompt: str, stop: Optional[List[str]] = None, run_manager: Optional[langchain.callbacks.manager.CallbackManagerForLLMRun] = None) → Generator[Dict, None, None][source]#
Yield... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-88 | To use, you should have the modal-client python package installed.
Any parameters that are valid to be passed to the call can be passed
in, even if not explicitly saved on this class.
Example
Validators
build_extra Β» all fields
raise_deprecation Β» all fields
set_verbose Β» verbose
field endpoint_url: str = ''#
model end... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-89 | Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set ... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-90 | Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the token present i... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-91 | Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.MosaicML[source]#
Wrapper around MosaicML's LLM inference service.
To use, you should have the
environment variable MOSAICML_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.... | https://python.langchain.com/en/latest/reference/modules/llms.html |
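A hedged usage sketch (assumes MOSAICML_API_TOKEN is set as noted above; the kwargs are illustrative):
.. code-block:: python
from langchain.llms import MosaicML

mosaic_llm = MosaicML(model_kwargs={"do_sample": False})
print(mosaic_llm("What is one good reason to smile?"))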