| id | text | source |
|---|---|---|
ded86bd066ca-92 | Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given pro... | https://python.langchain.com/en/latest/reference/modules/llms.html |
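The chunk above documents the async generation entry point. A minimal usage sketch, not taken from the reference itself and assuming a configured OpenAI API key:

```python
# Hedged sketch: batch async generation through the LLM base API.
import asyncio
from langchain.llms import OpenAI

llm = OpenAI(model_name="text-davinci-003")

async def main() -> None:
    # agenerate takes a list of prompts and returns an LLMResult with
    # one list of Generation objects per prompt.
    result = await llm.agenerate(["Tell me a joke.", "Name a color."])
    for generations in result.generations:
        print(generations[0].text)

asyncio.run(main())
```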
ded86bd066ca-93 | Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep co... | https://python.langchain.com/en/latest/reference/modules/llms.html |
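These are the standard pydantic copy() parameters. A sketch of how they combine on an LLM instance (illustrative values only):

```python
from langchain.llms import OpenAI

llm = OpenAI(temperature=0.7)

# update= overrides fields without re-running validation, so the new
# values must already be trusted; deep=True also clones nested objects.
deterministic_llm = llm.copy(update={"temperature": 0.0}, deep=True)
```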
ded86bd066ca-94 | Get the tokens present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: ... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-95 | nlpcloud = NLPCloud(model="gpt-neox-20b")
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field bad_words: List[str] = []#
List of tokens not allowed to be generated.
field do_sample: bool = True#
Whether to use sampling (True) or greedy decoding.
field early_stopping: ... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-96 | Total probability mass of tokens to consider at each step.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → st... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-97 | Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-98 | get_token_ids(text: str) β List[int]#
Get the token present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-99 | Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import OpenAI
openai = OpenAI(model_name="text-davinci-003")
Validators
build_extra » all fields
raise_deprecation » all fields
set_verbose » verbose
validate_en... | https://python.langchain.com/en/latest/reference/modules/llms.html |
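Because build_extra collects undeclared keyword arguments, parameters meant for the underlying openai.create call can be given straight to the constructor. A hedged sketch; the extra user= parameter is an OpenAI API field, not a field declared on the LangChain class:

```python
from langchain.llms import OpenAI

# Undeclared kwargs such as user= are gathered into model_kwargs by the
# build_extra validator and forwarded to the openai.create call.
openai_llm = OpenAI(model_name="text-davinci-003", user="my-app")
```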
ded86bd066ca-100 | field presence_penalty: float = 0#
Penalizes repeated tokens.
field request_timeout: Optional[Union[float, Tuple[float, float]]] = None#
Timeout for requests to OpenAI completion API. Default is 600 seconds.
field streaming: bool = False#
Whether to stream the results or not.
field temperature: float = 0.7#
What sampli... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-101 | Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set ... | https://python.langchain.com/en/latest/reference/modules/llms.html |
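A short sketch of construct(), which builds an instance from pre-validated data; because validators (including validate_environment) are skipped, the result is only safe if the values are already trusted:

```python
from langchain.llms import OpenAI

# No validators run here: no API-key check, no extra-kwargs handling.
# Fields left out fall back to their declared defaults.
llm = OpenAI.construct(model_name="text-davinci-003", temperature=0.2)
```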
ded86bd066ca-102 | Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of p... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-103 | Parameters
prompt – The prompt to pass into the model.
Returns
The maximum number of tokens to generate for a prompt.
Example
max_tokens = openai.max_tokens_for_prompt("Tell me a joke.")
modelname_to_contextsize(modelname: str) → int#
Calculate the maximum number of tokens possible to generate for a model.
Parameters
mo... | https://python.langchain.com/en/latest/reference/modules/llms.html |
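The two methods combine naturally to budget a completion. A sketch (the exact context size returned for a given model name is whatever the library's lookup table holds):

```python
from langchain.llms import OpenAI

llm = OpenAI(model_name="text-davinci-003")
prompt = "Tell me a joke."

# Tokens available for the completion = model context window minus
# the tokens already consumed by the prompt.
budget = llm.modelname_to_contextsize("text-davinci-003") - llm.get_num_tokens(prompt)
```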
ded86bd066ca-104 | for token in generator:
yield token
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.OpenAIChat[source]#
Wrapper around OpenAI Chat large language models.
To use, you should have the openai python pac... | https://python.langchain.com/en/latest/reference/modules/llms.html |
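The `for token in generator: yield token` fragment above is the streaming consumption pattern. A hedged sketch of using stream() directly, assuming each yielded chunk follows the OpenAI SDK's completion-chunk shape:

```python
from langchain.llms import OpenAI

llm = OpenAI()
# stream() calls OpenAI with the streaming flag and yields raw chunks.
for chunk in llm.stream("Write a haiku about the sea."):
    print(chunk["choices"][0]["text"], end="", flush=True)
```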
ded86bd066ca-105 | field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → str#
Check Cache and run the LLM on the given prompt and inpu... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-106 | Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-107 | get_token_ids(text: str) β List[int][source]#
Get the token IDs using the tiktoken package.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: boo... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-108 | Set of special tokens that are allowed.
field batch_size: int = 20#
Batch size to use when passing multiple documents to generate.
field best_of: int = 1#
Generates best_of completions server-side and returns the 'best'.
field disallowed_special: Union[Literal['all'], Collection[str]] = 'all'#
Set of special tokens tha... | https://python.langchain.com/en/latest/reference/modules/llms.html |
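A configuration sketch tying these fields together (values are illustrative):

```python
from langchain.llms import OpenAI

llm = OpenAI(
    batch_size=20,                      # prompts sent per underlying API call
    best_of=2,                          # candidates generated server-side; "best" returned
    allowed_special={"<|endoftext|>"},  # tiktoken may encode this special token
    disallowed_special="all",           # special tokens outside allowed_special raise an error
)
```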
ded86bd066ca-109 | field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → str#
Check Cache and run the LLM on the given prompt and inpu... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-110 | Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-111 | Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) → List[List[str]]#
Get the sub prompts for llm call.
get_... | https://python.langchain.com/en/latest/reference/modules/llms.html |
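A sketch of the token-counting helpers:

```python
from langchain.llms import OpenAI
from langchain.schema import HumanMessage

llm = OpenAI()
n_prompt = llm.get_num_tokens("How many tokens is this sentence?")
n_msgs = llm.get_num_tokens_from_messages([HumanMessage(content="hi there")])
```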
ded86bd066ca-112 | Example
max_tokens = openai.modelname_to_contextsize("text-davinci-003")
predict(text: str, *, stop: Optional[Sequence[str]] = None) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema.BaseMessage#
Predict message from... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-113 | Any parameters that are valid to be passed to the call can be passed
in, even if not explicitly saved on this class.
Example
Validators
build_extra » all fields
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field client: Any = None#
The client to use for the API calls.
field do_... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-114 | Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given pro... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-115 | Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep co... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-116 | Get the tokens present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: ... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-117 | in, even if not explicitly saved on this class.
Example
Validators
build_extra » all fields
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field pipeline_key: str = ''#
The id or tag of the target pipeline
field pipeline_kwargs: Dict[str, Any] [Optional]#
Holds any pipeline param... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-118 | Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclu... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-119 | Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the tokens present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, ... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-120 | Wrapper around Prediction Guard large language models.
To use, you should have the predictionguard python package installed, and the
environment variable PREDICTIONGUARD_TOKEN set with your access token, or pass
it as a named parameter to the constructor.
Example
Validators
raise_deprecation » all fields
se... | https://python.langchain.com/en/latest/reference/modules/llms.html |
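A hedged sketch of the two configuration routes the chunk describes; the name= argument mirrors the example elsewhere in these docs and should be treated as an assumption:

```python
import os
from langchain.llms import PredictionGuard

# Route 1: environment variable.
os.environ["PREDICTIONGUARD_TOKEN"] = "<your access token>"
pgllm = PredictionGuard(name="default-text-gen")  # name= assumed from docs examples

# Route 2 would pass the access token as a named constructor parameter instead.
```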
ded86bd066ca-121 | Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ a... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-122 | Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of p... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-123 | Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.PromptLayerOpenAI... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-124 | Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a li... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-125 | deep β set to True to make a deep copy of the model
Returns
new model instance
create_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) → langchain.schema.LLMResult#
Create the LLMResult from the choices and prompts.
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: L... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-126 | Get the token IDs using the tiktoken package.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exc... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-127 | Prepare the params for streaming.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
stream(prompt: str, stop: Optional[List[str]] = None) → Generator#
Call OpenAI with streaming flag... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-128 | Generation object.
Example
from langchain.llms import PromptLayerOpenAIChat
openaichat = PromptLayerOpenAIChat(model_name="gpt-3.5-turbo")
Validators
build_extra » all fields
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field allowed_special: Union[Literal['all'], AbstractSet[s... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-129 | Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a li... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-130 | Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-131 | encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-132 | in the text so far, decreasing the model's likelihood to repeat the same
line verbatim.
field penalty_alpha_presence: float = 0.4#
Positive values penalize new tokens based on whether they appear
in the text so far, increasing the model's likelihood to talk about
new topics.
field rwkv_verbose: bool = True#
Print deb... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-133 | Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ a... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-134 | Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of p... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-135 | Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.Replicate[source]... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-136 | Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a li... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-137 | Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-138 | encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-139 | The content handler class that provides input and
output transform functions to handle formats between LLM
and the endpoint.
field credentials_profile_name: Optional[str] = None#
The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which
has either access keys or role information specified.
If n... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-140 | Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a li... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-141 | Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-142 | encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-143 | hardware=gpu
)
Example passing a function that generates a pipeline (because the pipeline is not serializable):
from langchain.llms import SelfHostedHuggingFaceLLM
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import runhouse as rh
def get_pipeline():
model_id = "gpt2"
tokenizer = AutoTokenizer.from_... | https://python.langchain.com/en/latest/reference/modules/llms.html |
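The pipeline-factory example above is cut off mid-line; a hedged reconstruction of how such a factory is typically completed and handed to SelfHostedHuggingFaceLLM (cluster settings and parameter names follow the docs' other examples and are assumptions):

```python
from langchain.llms import SelfHostedHuggingFaceLLM
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import runhouse as rh

gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")

def get_pipeline():
    # Build the (non-serializable) pipeline on the remote box itself.
    model_id = "gpt2"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    return pipeline("text-generation", model=model, tokenizer=tokenizer)

llm = SelfHostedHuggingFaceLLM(
    model_load_fn=get_pipeline,
    hardware=gpu,
    model_reqs=["./", "torch", "transformers"],
)
```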
ded86bd066ca-144 | Hugging Face task ('text-generation', 'text2text-generation' or
'summarization').
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbac... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-145 | Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-146 | Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the tokens present i... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-147 | Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.SelfHostedPipeline[source]#
Run model inference on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and ... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-148 | hardware=gpu,
model_reqs=["./", "torch", "transformers"],
)
Example passing model path for larger models:
from langchain.llms import SelfHostedPipeline
import runhouse as rh
import pickle
from transformers import pipeline
generator = pipeline(model="gpt2")
rh.blob(pickle.dumps(generator), path="models/pipeline.pkl"
... | https://python.langchain.com/en/latest/reference/modules/llms.html |
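Completing the truncated larger-model example as a hedged sketch (the blob/save/to chain follows the runhouse usage shown in these docs; exact paths are illustrative):

```python
import pickle
import runhouse as rh
from transformers import pipeline
from langchain.llms import SelfHostedPipeline

gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")

# Serialize the pipeline locally, push it to the cluster, then load it there.
generator = pipeline(model="gpt2")
rh.blob(pickle.dumps(generator), path="models/pipeline.pkl").save().to(gpu, path="models")

llm = SelfHostedPipeline.from_pipeline(
    pipeline="models/pipeline.pkl",
    hardware=gpu,
    model_reqs=["./", "torch", "transformers"],
)
```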
ded86bd066ca-149 | Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a li... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-150 | Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
classmethod from_pipeline(pipeline: Any, hardware: Any, model_reqs: Optional[List[str]] = None, device: int = 0, **kwargs: Any) → langchain.llms.base.LLM[source]#
Init the SelfHostedPipeline from a pipeline object or string.
generate... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-151 | Get the tokens present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: ... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-152 | stochasticai = StochasticAI(api_url="")
Validators
build_extra » all fields
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field api_url: str = ''#
The API URL to use.
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not
explicitly sp... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-153 | Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclu... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-154 | Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the tokens present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, ... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-155 | Wrapper around Google Vertex AI large language models.
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field credentials: Any = None#
The default custom credentials (google.auth.credentials.Credentials) to use
field location: str = 'us-central1'#
The default location to... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-156 | Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a li... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-157 | Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-158 | encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-159 | field model_id: str = 'palmyra-instruct'#
Model name to use.
field n: Optional[int] = None#
How many completions to generate.
field presence_penalty: Optional[float] = None#
Penalizes repeated tokens regardless of frequency.
field repetition_penalty: Optional[float] = None#
Penalizes repeated tokens according to freque... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-160 | Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) → str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema.BaseMessage#
Predict message from m... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-161 | Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of p... | https://python.langchain.com/en/latest/reference/modules/llms.html |
ded86bd066ca-162 | Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns. | https://python.langchain.com/en/latest/reference/modules/llms.html |
9e9ae03c42ac-0 | Chat Models#
pydantic model langchain.chat_models.AzureChatOpenAI[source]#
Wrapper around Azure OpenAI Chat Completion API. To use this class you
must have a deployed model on Azure OpenAI. Use deployment_name in the
constructor to refer to the 'Model deployment name' in the Azure portal.
In addit... | https://python.langchain.com/en/latest/reference/modules/chat_models.html |
9e9ae03c42ac-1 | Example
get_num_tokens(text: str) → int[source]#
Calculate number of tokens.
pydantic model langchain.chat_models.ChatGooglePalm[source]#
Wrapper around Google's PaLM Chat API.
To use you must have the google.generativeai Python package installed and
either:
The GOOGLE_API_KEY environment variable set with your API ke... | https://python.langchain.com/en/latest/reference/modules/chat_models.html |
9e9ae03c42ac-2 | in, even if not explicitly saved on this class.
Example
from langchain.chat_models import ChatOpenAI
openai = ChatOpenAI(model_name="gpt-3.5-turbo")
field max_retries: int = 6#
Maximum number of retries to make when generating.
field max_tokens: Optional[int] = None#
Maximum number of tokens to generate.
field model_kw... | https://python.langchain.com/en/latest/reference/modules/chat_models.html |
9e9ae03c42ac-3 | main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb
get_token_ids(text: str) → List[int][source]#
Get the tokens present in the text with the tiktoken package.
pydantic model langchain.chat_models.ChatVertexAI[source]#
Wrapper around Vertex AI large language models.
field model_name: str = 'chat-bison'#
Model name t... | https://python.langchain.com/en/latest/reference/modules/chat_models.html |
3f58ae622f5d-0 | Output Parsers#
pydantic model langchain.output_parsers.CommaSeparatedListOutputParser[source]#
Parse out comma separated lists.
get_format_instructions() → str[source]#
Instructions on how the LLM output should be formatted.
parse(text: str) → List[str][source]#
Parse the output of an LLM call... | https://python.langchain.com/en/latest/reference/modules/output_parsers.html |
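A sketch of this parser's round trip:

```python
from langchain.output_parsers import CommaSeparatedListOutputParser

parser = CommaSeparatedListOutputParser()
instructions = parser.get_format_instructions()  # appended to the prompt
parser.parse("red, green, blue")                 # -> ["red", "green", "blue"]
```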
3f58ae622f5d-1 | field retry_chain: langchain.chains.llm.LLMChain [Required]#
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, parser: langchain.schema.BaseOutputParser[langchain.output_parsers.fix.T], prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['completion', 'error', 'instruc... | https://python.langchain.com/en/latest/reference/modules/output_parsers.html |
3f58ae622f5d-2 | and parses it into some structure.
Parameters
text – output of language model
Returns
structured output
pydantic model langchain.output_parsers.RegexDictParser[source]#
Class to parse the output into a dictionary.
field no_update_value: Optional[str] = None#
field output_key_to_format: Dict[str, str] [Required]#
field ... | https://python.langchain.com/en/latest/reference/modules/output_parsers.html |
3f58ae622f5d-3 | field retry_chain: langchain.chains.llm.LLMChain [Required]#
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, parser: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T], prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['completion', 'prompt'], outp... | https://python.langchain.com/en/latest/reference/modules/output_parsers.html |
3f58ae622f5d-4 | that was raised to another language model and telling it that the completion
did not work, and raised the given error. Differs from RetryOutputParser
in that this implementation provides the error that was raised back to the
LLM, which in theory should give it more information on how to fix it.
field parser: langchain.... | https://python.langchain.com/en/latest/reference/modules/output_parsers.html |
3f58ae622f5d-5 | The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters
completion – output of language model
prompt – prompt value
Returns
structured output
pydantic model langchain.output_parsers.StructuredOutputParser[sourc... | https://python.langchain.com/en/latest/reference/modules/output_parsers.html |
a2cced17e8ae-0 | Document Compressors#
pydantic model langchain.retrievers.document_compressors.CohereRerank[source]#
field client: Client [Required]#
field model: str = 'rerank-english-v2.0'#
field top_n: int = 3#
async acompress_documents(documents: Sequence[langchain.schema.Document], query: str) → Seq... | https://python.langchain.com/en/latest/reference/modules/document_compressors.html |
a2cced17e8ae-1 | similarity_threshold must be specified. Defaults to 20.
field similarity_fn: Callable = <function cosine_similarity>#
Similarity function for comparing documents. Function expected to take as input
two matrices (List[List[float]]) and return a matrix of scores where higher values
indicate greater similarity.
field simi... | https://python.langchain.com/en/latest/reference/modules/document_compressors.html |
a2cced17e8ae-2 | Compress page content of raw documents.
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, prompt: Optional[langchain.prompts.prompt.PromptTemplate] = None, get_input: Optional[Callable[[str, langchain.schema.Document], str]] = None, llm_chain_kwargs: Optional[dict] = None) → langchain.retrievers.docu... | https://python.langchain.com/en/latest/reference/modules/document_compressors.html |
7ac6f55dfb3b-0 | Docstore#
Wrappers on top of docstores.
class langchain.docstore.InMemoryDocstore(_dict: Dict[str, langchain.schema.Document])[source]#
Simple in memory docstore in the form of a dict.
add(texts: Dict[str, langchain.schema.Document]) → None[source]#
Add texts to in memory dictionary.
search(search: s... | https://python.langchain.com/en/latest/reference/modules/docstore.html |
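A sketch of the docstore's dict-like API:

```python
from langchain.docstore import InMemoryDocstore
from langchain.schema import Document

store = InMemoryDocstore({"1": Document(page_content="hello")})
store.add({"2": Document(page_content="world")})
doc = store.search("2")  # returns the Document; a miss yields an error string
```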
9ae427f4521a-0 | Example Selector#
Logic for selecting examples to include in prompts.
pydantic model langchain.prompts.example_selector.LengthBasedExampleSelector[source]#
Select examples based on length.
Validators
calculate_example_text_lengths » example_text_lengths
field example_prompt: langchain.prompts... | https://python.langchain.com/en/latest/reference/modules/example_selector.html |
9ae427f4521a-1 | Create k-shot example selector using example list and embeddings.
Reshuffles examples dynamically based on query similarity.
Parameters
examples – List of examples to use in the prompt.
embeddings – An initialized embedding API interface, e.g. OpenAIEmbeddings().
vectorstore_cls – A vector store DB interface class, e.g.... | https://python.langchain.com/en/latest/reference/modules/example_selector.html |
9ae427f4521a-2 | Create k-shot example selector using example list and embeddings.
Reshuffles examples dynamically based on query similarity.
Parameters
examples – List of examples to use in the prompt.
embeddings – An initialized embedding API interface, e.g. OpenAIEmbeddings().
vectorstore_cls – A vector store DB interface class, e.g... | https://python.langchain.com/en/latest/reference/modules/example_selector.html |
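The two chunks above describe from_examples on the semantic-similarity selector; a hedged sketch, assuming FAISS and OpenAI embeddings are installed:

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.prompts.example_selector import SemanticSimilarityExampleSelector
from langchain.vectorstores import FAISS

selector = SemanticSimilarityExampleSelector.from_examples(
    examples=[{"input": "happy", "output": "sad"},
              {"input": "tall", "output": "short"}],
    embeddings=OpenAIEmbeddings(),
    vectorstore_cls=FAISS,
    k=1,
)
selector.select_examples({"input": "joyful"})  # picks the closest stored example
```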
91834e277c59-0 | PromptTemplates#
Prompt template classes.
pydantic model langchain.prompts.BaseChatPromptTemplate[source]#
format(**kwargs: Any) → str[source]#
Format the prompt with the inputs.
Parameters
kwargs – Any arguments to be passed to the prompt template.
Returns
A formatted string.
Example:
prompt.... | https://python.langchain.com/en/latest/reference/modules/prompts.html |
91834e277c59-1 | file_path β Path to directory to save prompt to.
Example:
.. code-block:: python
prompt.save(file_path=βpath/prompt.yamlβ)
pydantic model langchain.prompts.ChatPromptTemplate[source]#
format(**kwargs: Any) → str[source]#
Format the prompt with the inputs.
Parameters
kwargs – Any arguments to be passed to the prompt tem... | https://python.langchain.com/en/latest/reference/modules/prompts.html |
91834e277c59-2 | A list of the names of the variables the prompt template expects.
field prefix: str = ''#
A prompt template string to put before the examples.
field suffix: str [Required]#
A prompt template string to put after the examples.
field template_format: str = 'f-string'#
The format of the prompt template. Options are: 'f-str... | https://python.langchain.com/en/latest/reference/modules/prompts.html |
91834e277c59-3 | field suffix: langchain.prompts.base.StringPromptTemplate [Required]#
A PromptTemplate to put after the examples.
field template_format: str = 'f-string'#
The format of the prompt template. Options are: 'f-string', 'jinja2'.
field validate_template: bool = True#
Whether or not to try validating the template.
dict(**kwa... | https://python.langchain.com/en/latest/reference/modules/prompts.html |
91834e277c59-4 | Format the prompt with the inputs.
Parameters
kwargs – Any arguments to be passed to the prompt template.
Returns
A formatted string.
Example:
prompt.format(variable1="foo")
classmethod from_examples(examples: List[str], suffix: str, input_variables: List[str], example_separator: str = '\n\n', prefix: str = '', **kwarg... | https://python.langchain.com/en/latest/reference/modules/prompts.html |
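A sketch combining from_examples with format():

```python
from langchain.prompts import PromptTemplate

prompt = PromptTemplate.from_examples(
    examples=["Q: 2+2\nA: 4"],
    suffix="Q: {question}\nA:",
    input_variables=["question"],
    prefix="Answer the questions.",
)
print(prompt.format(question="What is 3+3?"))
```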
91834e277c59-5 | Create Chat Messages.
langchain.prompts.load_prompt(path: Union[str, pathlib.Path]) → langchain.prompts.base.BasePromptTemplate[source]#
Unified method for loading a prompt from LangChainHub or local fs. | https://python.langchain.com/en/latest/reference/modules/prompts.html |
4830fa39a4c4-0 | Memory#
class langchain.memory.CassandraChatMessageHistory(contact_points: List[str], session_id: str, port: int = 9042, username: str = 'cassandra', password: str = 'cassandra', keyspace_name: str = 'chat_history', table_name: str = 'message_store')[source]#
Chat message history that stores history in... | https://python.langchain.com/en/latest/reference/modules/memory.html |
4830fa39a4c4-1 | Validators
check_input_key » memories
check_repeated_memory_variable » memories
field memories: List[langchain.schema.BaseMemory] [Required]#
For tracking all the memories that should be accessed.
clear() → None[source]#
Clear context from this session for every memory.
load_memory_variables(inputs: Dict[str, Any]) → D... | https://python.langchain.com/en/latest/reference/modules/memory.html |
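The validators above enforce distinct memory keys and consistent input keys across sub-memories; a hedged sketch of wiring two memories together:

```python
from langchain.llms import OpenAI
from langchain.memory import (CombinedMemory, ConversationBufferMemory,
                              ConversationSummaryMemory)

memory = CombinedMemory(memories=[
    ConversationBufferMemory(memory_key="chat_history", input_key="input"),
    ConversationSummaryMemory(llm=OpenAI(), memory_key="summary", input_key="input"),
])
# load_memory_variables merges the variables exposed by every sub-memory.
memory.load_memory_variables({"input": "hi"})
```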
4830fa39a4c4-2 | field entity_extraction_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last l... | https://python.langchain.com/en/latest/reference/modules/memory.html |
4830fa39a4c4-3 | a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\nOutput: Langchain\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: how\... | https://python.langchain.com/en/latest/reference/modules/memory.html |
4830fa39a4c4-4 | field entity_store: langchain.memory.entity.BaseEntityStore [Optional]#
field entity_summarization_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['entity', 'summary', 'history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant helping a human kee... | https://python.langchain.com/en/latest/reference/modules/memory.html |
4830fa39a4c4-5 | Knowledge graph memory for storing conversation memory.
Integrates with external knowledge graph to store and retrieve
information about knowledge triples in the conversation.
field ai_prefix: str = 'AI'# | https://python.langchain.com/en/latest/reference/modules/memory.html |