id stringlengths 14 16 | text stringlengths 44 2.73k | source stringlengths 49 114 |
|---|---|---|
0869763682fc-1 | PromptLayer:
Install requirements with pip install promptlayer (be sure to be on version 0.1.62 or higher)
Get an API key from promptlayer.com and set it using promptlayer.api_key=<API KEY>
SerpAPI:
Install requirements with pip install google-search-results
Get a SerpAPI api key and either set it as an environment var... | https://python.langchain.com/en/latest/reference/integrations.html |
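Several of the integrations above expect their API key in an environment variable. A minimal sketch of that pattern (the key value below is a placeholder; `SERPAPI_API_KEY` is the variable name LangChain's SerpAPI wrapper looks up):

```python
import os

# SERPAPI_API_KEY is the variable LangChain's SerpAPI wrapper reads;
# the value here is a placeholder, not a real key.
os.environ["SERPAPI_API_KEY"] = "your-serpapi-key"

# Reading it back, as the wrapper would:
api_key = os.environ.get("SERPAPI_API_KEY")
```

Setting the variable in the shell before launching Python works equally well; the wrapper only cares that the lookup succeeds.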
0869763682fc-2 | OpenSearch:
Install requirements with pip install opensearch-py
If you want to set up OpenSearch on your local machine, see here
DeepLake:
Install requirements with pip install deeplake
LlamaCpp:
Install requirements with pip install llama-cpp-python
Download model and convert following llama.cpp instructions
Milvus:
Install requi... | https://python.langchain.com/en/latest/reference/integrations.html |
3c2c153eb728-0 | .md
.pdf
Installation
Contents
Official Releases
Installing from source
Installation#
Official Releases#
LangChain is available on PyPI, so it is easily installable with:
pip install langchain
That will install the bare minimum requirements of LangChain.
A lot of the value of LangChain comes when integrating it wi... | https://python.langchain.com/en/latest/reference/installation.html |
680bebc7e49f-0 | .rst
.pdf
Models
Models#
LangChain provides interfaces and integrations for a number of different types of models.
LLMs
Chat Models
Embeddings
previous
API References
next
Chat Models
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023. | https://python.langchain.com/en/latest/reference/models.html |
74698829e3e3-0 | .rst
.pdf
Prompts
Prompts#
The reference guides here all relate to objects for working with Prompts.
PromptTemplates
Example Selector
Output Parsers
previous
How to serialize prompts
next
PromptTemplates
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023. | https://python.langchain.com/en/latest/reference/prompts.html |
78a900575384-0 | .rst
.pdf
Memory
Memory#
pydantic model langchain.memory.ChatMessageHistory[source]#
field messages: List[langchain.schema.BaseMessage] = []#
add_ai_message(message: str) → None[source]#
Add an AI message to the store
add_user_message(message: str) → None[source]#
Add a user message to the store
clear() → None[source]#... | https://python.langchain.com/en/latest/reference/modules/memory.html |
78a900575384-1 | load_memory_variables(inputs: Dict[str, Any]) → Dict[str, str][source]#
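The ChatMessageHistory methods listed above form a small interface. As an illustration, here is a plain-Python stand-in (not the real langchain class) that mirrors add_user_message, add_ai_message, clear, and the messages field:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Message:
    role: str     # "human" or "ai" (stand-in for langchain.schema.BaseMessage)
    content: str

@dataclass
class SimpleChatMessageHistory:
    """Minimal stand-in mirroring langchain.memory.ChatMessageHistory."""
    messages: List[Message] = field(default_factory=list)

    def add_user_message(self, message: str) -> None:
        self.messages.append(Message("human", message))

    def add_ai_message(self, message: str) -> None:
        self.messages.append(Message("ai", message))

    def clear(self) -> None:
        self.messages = []

history = SimpleChatMessageHistory()
history.add_user_message("hi!")
history.add_ai_message("Hello, how can I help?")
```

The real class stores typed BaseMessage objects rather than this toy dataclass, but the contract is the same.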
Return history buffer.
property buffer: List[langchain.schema.BaseMessage]#
String buffer of memory.
pydantic model langchain.memory.ConversationEntityMemory[source]#
Entity extractor & summarizer to memory.
field ai_prefix: str = 'AI'#
field chat_... | https://python.langchain.com/en/latest/reference/modules/memory.html |
78a900575384-2 | field entity_extraction_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last l... | https://python.langchain.com/en/latest/reference/modules/memory.html |
78a900575384-3 | a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\nOutput: Langchain\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: how\... | https://python.langchain.com/en/latest/reference/modules/memory.html |
78a900575384-4 | field entity_store: langchain.memory.entity.BaseEntityStore [Optional]#
field entity_summarization_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['entity', 'summary', 'history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant helping a human kee... | https://python.langchain.com/en/latest/reference/modules/memory.html |
78a900575384-5 | Knowledge graph memory for storing conversation memory.
Integrates with external knowledge graph to store and retrieve
information about knowledge triples in the conversation.
field ai_prefix: str = 'AI'# | https://python.langchain.com/en/latest/reference/modules/memory.html |
78a900575384-6 | field entity_extraction_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last l... | https://python.langchain.com/en/latest/reference/modules/memory.html |
78a900575384-7 | a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\nOutput: Langchain\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: how\... | https://python.langchain.com/en/latest/reference/modules/memory.html |
78a900575384-8 | field human_prefix: str = 'Human'#
field k: int = 2#
field kg: langchain.graphs.networkx_graph.NetworkxEntityGraph [Optional]# | https://python.langchain.com/en/latest/reference/modules/memory.html |
78a900575384-9 | field knowledge_extraction_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template="You are a networked intelligence helping a human track knowledge triples about all relevant people, things, concepts, etc. and integrati... | https://python.langchain.com/en/latest/reference/modules/memory.html |
78a900575384-10 | It's also the number 1 producer of gold in the US.\n\nOutput: (Nevada, is a, state)<|>(Nevada, is in, US)<|>(Nevada, is the number 1 producer of, gold)\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: Hello.\nAI: Hi! How are you?\nPerson #1: I'm good. How are you?\nAI: I'm good too.\nLast line of conversat... | https://python.langchain.com/en/latest/reference/modules/memory.html |
78a900575384-11 | huh. I know Descartes likes to drive antique scooters and play the mandolin.\nOutput: (Descartes, likes to drive, antique scooters)<|>(Descartes, plays, mandolin)\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for extraction):\nHuman: {input}\n\nOutput:", template_f... | https://python.langchain.com/en/latest/reference/modules/memory.html |
78a900575384-12 | field llm: langchain.schema.BaseLanguageModel [Required]#
field summary_message_cls: Type[langchain.schema.BaseMessage] = <class 'langchain.schema.SystemMessage'>#
Number of previous utterances to include in the context.
clear() → None[source]#
Clear memory contents.
get_current_entities(input_string: str) → List[str][... | https://python.langchain.com/en/latest/reference/modules/memory.html |
78a900575384-13 | field memory_key: str = 'history'#
field moving_summary_buffer: str = ''#
clear() → None[source]#
Clear memory contents.
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any][source]#
Return history buffer.
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]#
Save context from this con... | https://python.langchain.com/en/latest/reference/modules/memory.html |
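The save_context / load_memory_variables pair above is the core memory contract: record each exchange, then hand the accumulated buffer back under a memory key. A minimal stand-in (illustrative only, not langchain's implementation):

```python
from typing import Any, Dict, List

class SimpleBufferMemory:
    """Stand-in for the save_context / load_memory_variables contract."""

    def __init__(self, memory_key: str = "history") -> None:
        self.memory_key = memory_key
        self._turns: List[str] = []

    def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
        # Record one human/AI exchange.
        self._turns.append(f"Human: {inputs['input']}")
        self._turns.append(f"AI: {outputs['output']}")

    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
        # Return the accumulated history buffer under the memory key.
        return {self.memory_key: "\n".join(self._turns)}

    def clear(self) -> None:
        self._turns = []

mem = SimpleBufferMemory()
mem.save_context({"input": "hi"}, {"output": "hello"})
```

The "Human:"/"AI:" prefixes echo the human_prefix/ai_prefix fields documented above; input/output key names here are assumptions for the sketch.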
78a900575384-14 | property buffer: List[langchain.schema.BaseMessage]#
String buffer of memory.
class langchain.memory.CosmosDBChatMessageHistory(cosmos_endpoint: str, cosmos_database: str, cosmos_container: str, credential: Any, session_id: str, user_id: str, ttl: Optional[int] = None)[source]#
Chat history backed by Azure CosmosDB.
ad... | https://python.langchain.com/en/latest/reference/modules/memory.html |
78a900575384-15 | Append the message to the record in DynamoDB
clear() → None[source]#
Clear session memory from DynamoDB
property messages: List[langchain.schema.BaseMessage]#
Retrieve the messages from DynamoDB
class langchain.memory.InMemoryEntityStore[source]#
Basic in-memory entity store.
clear() → None[source]#
Delete all entities... | https://python.langchain.com/en/latest/reference/modules/memory.html |
78a900575384-16 | clear() → None[source]#
Nothing to clear, got a memory like a vault.
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, str][source]#
Load memory variables from memory.
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]#
Nothing should be saved or changed
property memory_variables: List... | https://python.langchain.com/en/latest/reference/modules/memory.html |
78a900575384-17 | delete(key: str) → None[source]#
Delete entity value from store.
exists(key: str) → bool[source]#
Check if entity exists in store.
property full_key_prefix: str#
get(key: str, default: Optional[str] = None) → Optional[str][source]#
Get entity value from store.
key_prefix: str = 'memory_store'#
recall_ttl: Optional[int]... | https://python.langchain.com/en/latest/reference/modules/memory.html |
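The entity-store methods above (get, delete, exists, clear, plus a key prefix as in key_prefix = 'memory_store') can be mimicked with a plain dict. This sketch is a stand-in, not the Redis-backed class itself, and the set method is an assumption mirroring the in-memory store:

```python
from typing import Dict, Optional

class SimpleEntityStore:
    """Dict-backed stand-in for the entity-store interface described above."""

    def __init__(self, key_prefix: str = "memory_store") -> None:
        self.key_prefix = key_prefix
        self._store: Dict[str, str] = {}

    def _full_key(self, key: str) -> str:
        # Mirrors the documented key_prefix behaviour.
        return f"{self.key_prefix}:{key}"

    def get(self, key: str, default: Optional[str] = None) -> Optional[str]:
        return self._store.get(self._full_key(key), default)

    def set(self, key: str, value: str) -> None:
        self._store[self._full_key(key)] = value

    def delete(self, key: str) -> None:
        self._store.pop(self._full_key(key), None)

    def exists(self, key: str) -> bool:
        return self._full_key(key) in self._store

    def clear(self) -> None:
        self._store = {}

store = SimpleEntityStore()
store.set("Nevada", "Nevada is a US state and a top gold producer.")
```

The Redis-backed store adds persistence and the recall_ttl expiry field; the interface is otherwise the same shape.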
78a900575384-18 | field retriever: langchain.vectorstores.base.VectorStoreRetriever [Required]#
VectorStoreRetriever object to connect to.
field return_docs: bool = False#
Whether or not to return the result of querying the database directly.
clear() → None[source]#
Nothing to clear.
load_memory_variables(inputs: Dict[str, Any]) → Dict[... | https://python.langchain.com/en/latest/reference/modules/memory.html |
1bda80c3a399-0 | .rst
.pdf
SearxNG Search
Contents
Quick Start
Searching
Engine Parameters
Search Tips
SearxNG Search#
Utility for using SearxNG meta search API.
SearxNG is a privacy-friendly free metasearch engine that aggregates results from
multiple search engines and databases and
supports the OpenSearch
specification.
More detai... | https://python.langchain.com/en/latest/reference/modules/searx_search.html |
1bda80c3a399-1 | # assuming the searx host is set as above or exported as an env variable
s = SearxSearchWrapper(engines=['google', 'bing'],
language='es')
Search Tips#
Searx offers a special
search syntax
that can also be used instead of passing engine parameters.
For example the following query:
s = SearxSearchWra... | https://python.langchain.com/en/latest/reference/modules/searx_search.html |
1bda80c3a399-2 | use a self-hosted instance and disable the rate limiter.
If you are self-hosting an instance you can customize the rate limiter for your
own network as described here.
For a list of public SearxNG instances see https://searx.space/
class langchain.utilities.searx_search.SearxResults(data: str)[source]#
Dict like wrappe... | https://python.langchain.com/en/latest/reference/modules/searx_search.html |
1bda80c3a399-3 | field params: dict [Optional]#
field query_suffix: Optional[str] = ''#
field searx_host: str = ''#
field unsecure: bool = False#
async aresults(query: str, num_results: int, engines: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) → List[Dict][source]#
Asynchronously query with json results... | https://python.langchain.com/en/latest/reference/modules/searx_search.html |
1bda80c3a399-4 | Run query through Searx API and parse results.
You can pass any other params to the searx query API.
Parameters
query – The query to search for.
query_suffix – Extra suffix appended to the query.
engines – List of engines to use for the query.
categories – List of categories to use for the query.
**kwargs – extra param... | https://python.langchain.com/en/latest/reference/modules/searx_search.html |
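To illustrate how the parameters documented above could combine into a single SearxNG request, here is a hedged sketch; build_searx_params is a hypothetical helper, and the real SearxSearchWrapper's internals may differ:

```python
from typing import Dict, List, Optional

def build_searx_params(
    query: str,
    query_suffix: Optional[str] = "",
    engines: Optional[List[str]] = None,
    categories: Optional[List[str]] = None,
) -> Dict[str, str]:
    """Sketch of mapping the documented parameters onto SearxNG
    request parameters (q, engines, categories)."""
    if query_suffix:
        # The suffix is appended to the query string itself.
        query = f"{query} {query_suffix}"
    params = {"q": query, "format": "json"}
    if engines:
        params["engines"] = ",".join(engines)
    if categories:
        params["categories"] = ",".join(categories)
    return params

params = build_searx_params(
    "langchain docs",
    query_suffix="site:python.langchain.com",
    engines=["google", "bing"],
)
```

Comma-joined engine lists match how SearxNG accepts multiple engines in one request; treat the exact parameter names as assumptions of this sketch.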
95d0f83655e7-0 | .rst
.pdf
LLMs
LLMs#
Wrappers on top of large language models APIs.
pydantic model langchain.llms.AI21[source]#
Wrapper around AI21 large language models.
To use, you should have the environment variable AI21_API_KEY
set with your API key.
Example
from langchain.llms import AI21
ai21 = AI21(model="j2-jumbo-instruct")
V... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-1 | field numResults: int = 1#
How many completions to generate for each prompt.
field presencePenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True)#
Penalizes repeated tokens.
field temperat... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-2 | Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creat... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-3 | encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forwar... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-4 | If set to a non-None value, control parameters are also applied to similar tokens.
field control_log_additive: Optional[bool] = True#
True: apply control by adding the log(control_factor) to attention scores.
False: (attention_scores - attention_scores.min(-1)) * control_factor
field echo: bool = False#
Echo the prom... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-5 | field repetition_penalties_include_prompt: Optional[bool] = False#
Flag deciding whether presence penalty or frequency penalty are
updated from the prompt.
field stop_sequences: Optional[List[str]] = None#
Stop sequences to use.
field temperature: float = 0.0#
A non-negative float that tunes the degree of randomness in... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-6 | Behaves as if Config.extra = "allow" was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-7 | Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-8 | Model name to use.
field streaming: bool = False#
Whether to stream the results.
field temperature: Optional[float] = None#
A non-negative float that tunes the degree of randomness in generation.
field top_k: Optional[int] = None#
Number of most likely tokens to consider at each step.
field top_p: Optional[float] = Non... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-9 | exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kw... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-10 | Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
stream(prompt: str, stop: Optional[List[str]] = None) → Generator[source]#
Call Anthropic completion_stream and return the resulting generator.
BETA: this is a beta feature while we ... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-11 | Set of special tokens that are allowed.
field batch_size: int = 20#
Batch size to use when passing multiple documents to generate.
field best_of: int = 1#
Generates best_of completions server-side and returns the "best".
field deployment_name: str = ''#
Deployment name to use.
field disallowed_special: Union[Literal['a... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-12 | Total probability mass of tokens to consider at each step.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) →... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-13 | deep – set to True to make a deep copy of the model
Returns
new model instance
create_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) → langchain.schema.LLMResult#
Create the LLMResult from the choices and prompts.
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: L... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-14 | Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
max_tokens_for_prompt(prompt: str) → int#
Calculate the maximum number of tokens possible to generate for a prompt.
Paramet... | https://python.langchain.com/en/latest/reference/modules/llms.html |
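max_tokens_for_prompt subtracts the prompt's token count from the model's context window. A sketch of that arithmetic, using a crude whitespace split as a stand-in for a real tokenizer and an assumed 4097-token window:

```python
def max_tokens_for_prompt(prompt: str, context_size: int = 4097) -> int:
    """Illustrative version of the calculation: context window minus the
    tokens consumed by the prompt. The whitespace split and the 4097
    default are stand-ins, not the library's real tokenizer or limit."""
    num_prompt_tokens = len(prompt.split())
    return context_size - num_prompt_tokens

remaining = max_tokens_for_prompt("Tell me a joke")
```

The real method uses the model's tokenizer, so its counts will differ from this whitespace approximation.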
95d0f83655e7-15 | for token in generator:
yield token
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.Banana[source]#
Wrapper around Banana large language models.
To use, you should have the banana-dev python package ... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-16 | Behaves as if Config.extra = "allow" was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-17 | Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-18 | model endpoint to use
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not
explicitly specified.
__call__(prompt: str, stop: Optional[List[str]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[Lis... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-19 | deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schem... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-20 | classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.Cohere[source]#
Wrapper around Cohere large language models.
To use, you should have the cohere python package installed, and the
environment variable COHE... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-21 | Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.sche... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-22 | Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_f... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-23 | To use, you should have the requests python package installed, and the
environment variable DEEPINFRA_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Only supports text-generation and text2text-generation for now.
Example
from langchain.llms import DeepInfra
di = DeepInfra(model_i... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-24 | Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creat... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-25 | encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forwar... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-26 | Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.sche... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-27 | Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_f... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-28 | To use, you should have the pyllamacpp python package installed, the
pre-trained model file, and the model's config information.
Example
from langchain.llms import GPT4All
model = GPT4All(model="./models/gpt4all-model.bin", n_ctx=512, n_threads=8)
# Simplest invocation
response = model("Once upon a time, ")
Validators
... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-29 | A list of strings to stop generation when encountered.
field streaming: bool = False#
Whether to stream the results or not.
field temp: Optional[float] = 0.8#
The temperature to use for sampling.
field top_k: Optional[int] = 40#
The top-k value to use for sampling.
field top_p: Optional[float] = 0.95#
The top-p value t... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-30 | Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creat... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-31 | encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forwar... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-32 | Model name to use
field n: int = 1#
How many completions to generate for each prompt.
field presence_penalty: float = 0#
Penalizes repeated tokens.
field temperature: float = 0.7#
What sampling temperature to use
field top_p: float = 1#
Total probability mass of tokens to consider at each step.
__call__(prompt: str, st... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-33 | exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kw... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-34 | Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.HuggingFaceEndpoi... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-35 | Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Crea... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-36 | Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingInt... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-37 | Only supports text-generation and text2text-generation for now.
Example
from langchain.llms import HuggingFaceHub
hf = HuggingFaceHub(repo_id="gpt2", huggingfacehub_api_token="my-api-key")
Validators
set_callback_manager » callback_manager
set_verbose » verbose
validate_environment Β» all fields
field model_kwargs: Opti... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-38 | Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creat... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-39 | encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forwar... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-40 | Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.sche... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-41 | dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
classmethod from_model_id(model_id: str, task: str, device: int = -1, model_kwargs: Optional[dict] = None, **kwargs: Any) → langchain.llms.base.LLM[source]#
Construct the pipeline object from model_id and task.
generate(prompts: List[str], stop: Optional[List... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-42 | Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.LlamaCpp[source]#
Wrapper aroun... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-43 | The maximum number of tokens to generate.
field model_path: str [Required]#
The path to the Llama model file.
field n_batch: Optional[int] = 8#
Number of tokens to process in parallel.
Should be a number between 1 and n_ctx.
field n_ctx: int = 512#
Token context window.
field n_parts: int = -1#
Number of parts to split... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-44 | __call__(prompt: str, stop: Optional[List[str]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.P... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-45 | dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-46 | Yields results objects as they are generated in real time.
BETA: this is a beta feature while we figure out the right abstraction:
Once that happens, this interface could change.
It also calls the callback manager's on_llm_new_token event with
similar parameters to the OpenAI LLM class method of the same name.
Args:pro... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-47 | Holds any model parameters valid for create call not
explicitly specified.
__call__(prompt: str, stop: Optional[List[str]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the give... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-48 | dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#... | https://python.langchain.com/en/latest/reference/modules/llms.html |
95d0f83655e7-49 | Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.NLPCloud[source]#
Wrapper around NLPCloud large language models.
To use, you should have the nlpcloud python package installed, and the
environment variable NLPCLOUD_API_KEY set with your API key.
Example
from l...
Remove input text from API response
field repetition_penalty: float = 1.0#
Penalizes repeated tokens. 1.0 means no penalty.
field temperature: float = 0.7#
What sampling temperature to use.
field top_k: int = 50#
The number of highest probability tokens to keep for top-k filtering.
field top_p: int = 1#
Total probabili...
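The `top_k` and `top_p` fields above control server-side sampling filters. The toy function below sketches what they mean: keep the k most probable tokens, then keep the smallest prefix whose cumulative probability reaches p. It is illustrative only, not code from LangChain or NLP Cloud.

```python
from typing import Dict


def filter_tokens(probs: Dict[str, float], top_k: int, top_p: float) -> Dict[str, float]:
    """Illustrative top-k / top-p (nucleus) filtering over a toy distribution."""
    # Keep the k most probable tokens...
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    # ...then keep the smallest prefix whose cumulative mass reaches top_p.
    kept, total = {}, 0.0
    for tok, p in ranked:
        kept[tok] = p
        total += p
        if total >= top_p:
            break
    z = sum(kept.values())  # renormalise the surviving mass
    return {tok: p / z for tok, p in kept.items()}


probs = {"the": 0.5, "a": 0.3, "cat": 0.15, "zzz": 0.05}
print(filter_tokens(probs, top_k=3, top_p=0.8))  # -> {'the': 0.625, 'a': 0.375}
```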
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
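The shallow-vs-deep distinction in `copy()` matters when a model holds mutable fields. The sketch below mimics the behaviour with a plain dataclass so it runs without pydantic; the `Settings` class and its fields are illustrative, not a LangChain model.

```python
import copy
from dataclasses import dataclass, field, replace
from typing import List


# A sketch of what pydantic's Model.copy(update=..., deep=...) does, using a
# plain dataclass; names here are illustrative.
@dataclass
class Settings:
    temperature: float = 0.7
    stop: List[str] = field(default_factory=list)


base = Settings(stop=["###"])
shallow = replace(base)                    # like copy(deep=False): shares mutable fields
deep = copy.deepcopy(base)                 # like copy(deep=True): fully independent
updated = replace(base, temperature=0.2)   # like copy(update={"temperature": 0.2})

base.stop.append("END")
print(shallow.stop)         # -> ['###', 'END']  (shared list)
print(deep.stop)            # -> ['###']         (independent)
print(updated.temperature)  # -> 0.2
```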
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python

    llm.save(file_path="path/llm.yaml")
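The point of `save()` is to persist the wrapper's parameters so the same configuration can be reconstructed later. This toy round trip shows the idea with the stdlib only; it handles just JSON and is not LangChain code (the real method picks JSON or YAML from the file suffix).

```python
import json
import tempfile
from pathlib import Path


def save_params(params: dict, file_path: Path) -> None:
    """Write model parameters to disk (toy stand-in for llm.save)."""
    file_path.write_text(json.dumps(params))


def load_params(file_path: Path) -> dict:
    """Read parameters back, recovering the saved configuration."""
    return json.loads(file_path.read_text())


with tempfile.TemporaryDirectory() as d:
    path = Path(d) / "llm.json"
    save_params({"model_name": "text-davinci-003", "temperature": 0.7}, path)
    restored = load_params(path)

print(restored["temperature"])  # -> 0.7
```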
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = "allow" was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) → List[List[str]]#
Get the sub prompts for llm call.
json(*, include: Optional[Union[AbstractSetIntStr,...
Prepare the params for streaming.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python

    llm.save(file_path="path/llm.yaml")
stream(prompt: str, stop: Optional[List[str]] = None) → Generator#
Call OpenAI with streaming flag and return the resulting generator.
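Consuming the generator that `stream()` returns looks like the sketch below. The generator here is a toy stand-in yielding canned tokens, not the real OpenAI-backed method; the point is that callers receive tokens incrementally and a stop sequence ends the stream early.

```python
from typing import Generator, List, Optional


def stream_tokens(prompt: str, stop: Optional[List[str]] = None) -> Generator[str, None, None]:
    """Toy stand-in for stream(): yields one token at a time instead of
    waiting for the full completion."""
    canned = ["Hello", ",", " ", "world", "!"]  # hypothetical model output
    for token in canned:
        if stop and token in stop:
            return                      # stop sequence ends the stream early
        yield token


# Callers consume tokens as they arrive, e.g. to print them incrementally.
pieces = [tok for tok in stream_tokens("greet me", stop=["!"])]
print("".join(pieces))  # -> Hello, world
```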
Set of special tokens that are allowed.
field disallowed_special: Union[Literal['all'], Collection[str]] = 'all'#
Set of special tokens that are not allowed.
field max_retries: int = 6#
Maximum number of retries to make when generating.
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not explicitly specified.
Behaves as if Config.extra = "allow" was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_...
Whether or not to use sampling; use greedy decoding otherwise.
field max_length: Optional[int] = None#
The maximum length of the sequence to be generated.
field max_new_tokens: int = 256#
The maximum number of new tokens to generate in the completion.
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model param...
Behaves as if Config.extra = "allow" was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntS...
Proxy name to use.
field temperature: float = 0.75#
A non-negative float that tunes the degree of randomness in generation.
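Temperature rescales the model's logits before sampling: values near zero sharpen the distribution toward the most likely token, while large values flatten it toward uniform. The function below is an illustrative sketch of that mechanism, not code from any provider's backend.

```python
import math
from typing import Dict


def softmax_with_temperature(logits: Dict[str, float], temperature: float) -> Dict[str, float]:
    """Sketch of how a temperature setting rescales token probabilities."""
    scaled = {t: l / temperature for t, l in logits.items()}
    m = max(scaled.values())                       # subtract max for numerical stability
    exps = {t: math.exp(v - m) for t, v in scaled.items()}
    z = sum(exps.values())
    return {t: e / z for t, e in exps.items()}


logits = {"yes": 2.0, "no": 0.0}
cold = softmax_with_temperature(logits, 0.1)   # near-greedy: "yes" dominates
hot = softmax_with_temperature(logits, 10.0)   # near-uniform: both plausible
print(round(cold["yes"], 4), round(hot["yes"], 4))
```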
__call__(prompt: str, stop: Optional[List[str]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.PromptLayerOpenAI[source]#
Wrapper around OpenAI large language models.
To use, you should have the openai and promptlayer python
package installed, and the environment variable OPENAI_API_KEY
and PROMPTLAYER_API_KEY set with your API keys.
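Since the wrapper reads both keys from the environment, they can be set before it is constructed. The snippet below sketches that setup; the key values are placeholders and `have_required_keys` is a hypothetical helper, not part of LangChain.

```python
import os

# Hypothetical setup for an environment-keyed wrapper like the one above:
# both keys are read from the environment at construction time.
os.environ["OPENAI_API_KEY"] = "sk-..."        # placeholder, not a real key
os.environ["PROMPTLAYER_API_KEY"] = "pl-..."   # placeholder, not a real key


def have_required_keys() -> bool:
    """Check that both environment variables are present and non-empty."""
    return all(os.environ.get(k) for k in ("OPENAI_API_KEY", "PROMPTLAYER_API_KEY"))


print(have_required_keys())  # -> True
```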
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = "allow" was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Get the number of tokens in the message.
get_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) → List[List[str]]#
Get the sub prompts for llm call.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntS...
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python

    llm.save(file_path="path/llm.yaml")
stream(prompt: str, stop: Optional[List[str]] = None) → Generator#
Call OpenAI with streaming flag and return the resulting generator.
BETA: this is a beta feature while we figure out the right abstraction.
Validators
build_extra » all fields
set_callback_manager » callback_manager
set_verbose » verbose
validate_environment » all fields
field allowed_special: Union[Literal['all'], AbstractSet[str]] = {}#
Set of special tokens that are allowed.
field disallowed_special: Union[Literal['all'], Collection[str]] = 'all'#
Set of special tokens that are not allowed.
Behaves as if Config.extra = "allow" was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_...