Dataset schema: id (string, 14-16 characters), text (string, 29-2.73k characters), source (string, 49-117 characters).
479f31c64c6b-122
in, even if not explicitly saved on this class. Example from langchain import PipelineAI pipeline = PipelineAI(pipeline_key="") Validators build_extra » all fields raise_deprecation » all fields set_verbose » verbose validate_environment » all fields field pipeline_key: str = ''# The id or tag of the target pipeline fi...
https://python.langchain.com/en/latest/reference/modules/llms.html
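A minimal usage sketch for the PipelineAI wrapper described above. The pipeline_key value is a placeholder for one of your own PipelineAI pipelines, and PIPELINE_API_KEY is assumed to be set in the environment:

from langchain import PipelineAI

# placeholder pipeline id/tag; substitute one from your PipelineAI account
llm = PipelineAI(pipeline_key="public/gpt-j:base")

# the wrapper is callable like any other LangChain LLM
print(llm("Once upon a time, "))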
479f31c64c6b-123
Predict message from messages. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set ...
https://python.langchain.com/en/latest/reference/modules/llms.html
479f31c64c6b-124
Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) → int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int# Get the number of tokens in the messages. get_token_ids(text: str) → List[int]# Get the token IDs present i...
https://python.langchain.com/en/latest/reference/modules/llms.html
479f31c64c6b-125
Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.PredictionGuard[source]# Wrapper around Prediction Guard large language models. To use, you should have the predictionguard python package installed, and the environment variable PREDICTIONGUARD_TOKEN set with y...
https://python.langchain.com/en/latest/reference/modules/llms.html
479f31c64c6b-126
Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult# Run the LLM on the given pro...
https://python.langchain.com/en/latest/reference/modules/llms.html
479f31c64c6b-127
Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep co...
https://python.langchain.com/en/latest/reference/modules/llms.html
479f31c64c6b-128
Get the token IDs present in the text. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: ...
https://python.langchain.com/en/latest/reference/modules/llms.html
479f31c64c6b-129
promptlayer key respectively. All parameters that can be passed to the OpenAI LLM can also be passed here. The PromptLayerOpenAI LLM adds two optional Parameters pl_tags – List of strings to tag the request with. return_pl_id – If True, the PromptLayer request ID will be returned in the generation_info field of the Gen...
https://python.langchain.com/en/latest/reference/modules/llms.html
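A hedged sketch of the pl_tags and return_pl_id parameters described above, assuming OPENAI_API_KEY and the PromptLayer key are set in the environment; the tag value is a placeholder:

from langchain.llms import PromptLayerOpenAI

llm = PromptLayerOpenAI(
    temperature=0,
    pl_tags=["langchain-docs-example"],  # placeholder tags attached to the PromptLayer request
    return_pl_id=True,                   # ask for the PromptLayer request id back
)

result = llm.generate(["Tell me a joke"])
generation = result.generations[0][0]
# with return_pl_id=True the request id is carried in generation_info, as described above
print(generation.text, generation.generation_info)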
479f31c64c6b-130
Predict message from messages. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set ...
https://python.langchain.com/en/latest/reference/modules/llms.html
479f31c64c6b-131
Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult# Take in a list of p...
https://python.langchain.com/en/latest/reference/modules/llms.html
479f31c64c6b-132
Parameters prompt – The prompt to pass into the model. Returns The maximum number of tokens to generate for a prompt. Example max_tokens = openai.max_tokens_for_prompt("Tell me a joke.") modelname_to_contextsize(modelname: str) → int# Calculate the maximum number of tokens possible to generate for a model. Parameters mo...
https://python.langchain.com/en/latest/reference/modules/llms.html
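A short sketch of the two token-budget helpers mentioned above, called on an OpenAI LLM instance as in the example; the model name is illustrative:

from langchain.llms import OpenAI

openai = OpenAI(model_name="text-davinci-003")

# total context window for a given model name
context_size = openai.modelname_to_contextsize("text-davinci-003")

# tokens left for generation after accounting for the prompt's own tokens
max_tokens = openai.max_tokens_for_prompt("Tell me a joke.")
print(context_size, max_tokens)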
479f31c64c6b-133
for token in generator: yield token classmethod update_forward_refs(**localns: Any) → None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.PromptLayerOpenAIChat[source]# Wrapper around OpenAI large language models. To use, you should have the openai and ...
https://python.langchain.com/en/latest/reference/modules/llms.html
479f31c64c6b-134
Model name to use. field prefix_messages: List [Optional]# Series of messages for Chat input. field streaming: bool = False# Whether to stream the results or not. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.bas...
https://python.langchain.com/en/latest/reference/modules/llms.html
479f31c64c6b-135
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model# Duplicate a model, optionally...
https://python.langchain.com/en/latest/reference/modules/llms.html
479f31c64c6b-136
get_token_ids(text: str) → List[int]# Get the token IDs using the tiktoken package. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = Fals...
https://python.langchain.com/en/latest/reference/modules/llms.html
479f31c64c6b-137
Example from langchain.llms import RWKV model = RWKV(model="./models/rwkv-3b-fp16.bin", strategy="cpu fp32") # Simplest invocation response = model("Once upon a time, ") Validators raise_deprecation » all fields set_verbose » verbose validate_environment » all fields field CHUNK_LEN: int = 256# Batch size for prompt pr...
https://python.langchain.com/en/latest/reference/modules/llms.html
479f31c64c6b-138
Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult# Run the LLM on the given pro...
https://python.langchain.com/en/latest/reference/modules/llms.html
479f31c64c6b-139
Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep co...
https://python.langchain.com/en/latest/reference/modules/llms.html
479f31c64c6b-140
Get the token IDs present in the text. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: ...
https://python.langchain.com/en/latest/reference/modules/llms.html
479f31c64c6b-141
The model param is required, but any other model parameters can also be passed in with the format input={model_param: value, …} Example from langchain.llms import Replicate replicate = Replicate(model="stability-ai/stable-diffusion: 27b93a2413e7f36cd83da926f365628 ...
https://python.langchain.com/en/latest/reference/modules/llms.html
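A hedged sketch of the input={model_param: value, ...} pattern described above; the model string and the parameter names are placeholders for a real Replicate model version:

from langchain.llms import Replicate

# "<owner>/<model>:<version-hash>" is a placeholder; copy the exact string from Replicate
llm = Replicate(
    model="<owner>/<model>:<version-hash>",
    input={"temperature": 0.75, "max_length": 200},  # extra model params go in `input`
)
print(llm("Who was the first person to walk on the moon?"))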
479f31c64c6b-142
Predict text from text. async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema.BaseMessage# Predict message from messages. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model# Creates a new model setting __dict__ a...
https://python.langchain.com/en/latest/reference/modules/llms.html
479f31c64c6b-143
Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult# Take in a list of p...
https://python.langchain.com/en/latest/reference/modules/llms.html
479f31c64c6b-144
Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: llm.save(file_path="path/llm.yaml") classmethod update_forward_refs(**localns: Any) → None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.SagemakerEndpoint...
https://python.langchain.com/en/latest/reference/modules/llms.html
479f31c64c6b-145
See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html field endpoint_kwargs: Optional[Dict] = None# Optional attributes passed to the invoke_endpoint function. See the boto3 docs for more info: https://boto3.amazonaws.com/v1/documentation/api/latest/index.html field endpoint_n...
https://python.langchain.com/en/latest/reference/modules/llms.html
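A sketch of wiring up the SagemakerEndpoint fields described above (endpoint name, credentials profile, endpoint_kwargs). The content handler assumes a JSON request/response shape; the endpoint name, region, and the CustomAttributes kwarg are placeholders, and transform_input/transform_output must match whatever your endpoint actually expects:

import json
from langchain.llms import SagemakerEndpoint
from langchain.llms.sagemaker_endpoint import LLMContentHandler

class ContentHandler(LLMContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs: dict) -> bytes:
        # assumed request shape; depends on the model served behind the endpoint
        return json.dumps({"inputs": prompt, "parameters": model_kwargs}).encode("utf-8")

    def transform_output(self, output) -> str:
        # assumed response shape; depends on the model served behind the endpoint
        body = json.loads(output.read().decode("utf-8"))
        return body[0]["generated_text"]

llm = SagemakerEndpoint(
    endpoint_name="my-endpoint",                         # placeholder endpoint name
    region_name="us-east-1",                             # placeholder region
    credentials_profile_name="default",                  # see the boto3 credentials guide linked above
    endpoint_kwargs={"CustomAttributes": "example"},     # illustrative invoke_endpoint attribute
    content_handler=ContentHandler(),
)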
479f31c64c6b-146
Take in a list of prompt values and return an LLMResult. async apredict(text: str, *, stop: Optional[Sequence[str]] = None) → str# Predict text from text. async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema.BaseMessage# Predict message from m...
https://python.langchain.com/en/latest/reference/modules/llms.html
479f31c64c6b-147
Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult# Take in a list of p...
https://python.langchain.com/en/latest/reference/modules/llms.html
479f31c64c6b-148
Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: llm.save(file_path="path/llm.yaml") classmethod update_forward_refs(**localns: Any) → None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.SelfHostedHugging...
https://python.langchain.com/en/latest/reference/modules/llms.html
479f31c64c6b-149
"text-generation", model=model, tokenizer=tokenizer ) return pipe hf = SelfHostedHuggingFaceLLM( model_load_fn=get_pipeline, model_id="gpt2", hardware=gpu) Validators raise_deprecation » all fields set_verbose » verbose field device: int = 0# Device to use for inference. -1 for CPU, 0 for GPU, 1 for second ...
https://python.langchain.com/en/latest/reference/modules/llms.html
479f31c64c6b-150
Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult# Run the LLM on the given pro...
https://python.langchain.com/en/latest/reference/modules/llms.html
479f31c64c6b-151
Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep co...
https://python.langchain.com/en/latest/reference/modules/llms.html
479f31c64c6b-152
Get the token IDs present in the text. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: ...
https://python.langchain.com/en/latest/reference/modules/llms.html
479f31c64c6b-153
cloud like Paperspace, Coreweave, etc.). To use, you should have the runhouse python package installed. Example for custom pipeline and inference functions:from langchain.llms import SelfHostedPipeline from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline import runhouse as rh def load_pipeline(): ...
https://python.langchain.com/en/latest/reference/modules/llms.html
479f31c64c6b-154
llm = SelfHostedPipeline.from_pipeline( pipeline="models/pipeline.pkl", hardware=gpu, model_reqs=["./", "torch", "transformers"], ) Validators raise_deprecation » all fields set_verbose » verbose field hardware: Any = None# Remote hardware to send the inference function to. field inference_fn: Callable = <f...
https://python.langchain.com/en/latest/reference/modules/llms.html
479f31c64c6b-155
Take in a list of prompt values and return an LLMResult. async apredict(text: str, *, stop: Optional[Sequence[str]] = None) → str# Predict text from text. async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema.BaseMessage# Predict message from m...
https://python.langchain.com/en/latest/reference/modules/llms.html
479f31c64c6b-156
Init the SelfHostedPipeline from a pipeline object or string. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult# Run the LLM on the given prompt ...
https://python.langchain.com/en/latest/reference/modules/llms.html
479f31c64c6b-157
Predict text from text. predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema.BaseMessage# Predict message from messages. save(file_path: Union[pathlib.Path, str]) → None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: ...
https://python.langchain.com/en/latest/reference/modules/llms.html
479f31c64c6b-158
Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult# Run the LLM on the given pro...
https://python.langchain.com/en/latest/reference/modules/llms.html
479f31c64c6b-159
Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep co...
https://python.langchain.com/en/latest/reference/modules/llms.html
479f31c64c6b-160
Get the token IDs present in the text. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: ...
https://python.langchain.com/en/latest/reference/modules/llms.html
479f31c64c6b-161
field location: str = 'us-central1'# The default location to use when making API calls. field max_output_tokens: int = 128# Token limit determines the maximum amount of text output from one prompt. field project: Optional[str] = None# The default GCP project to use when making Vertex API calls. field temperature: float...
https://python.langchain.com/en/latest/reference/modules/llms.html
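The fields above (project, location, max_output_tokens, temperature) appear to belong to the Vertex AI LLM wrapper; a minimal sketch, assuming the class is langchain.llms.VertexAI, google-cloud-aiplatform is installed, and application-default credentials are configured (the project id is a placeholder):

from langchain.llms import VertexAI

llm = VertexAI(
    project="my-gcp-project",      # placeholder GCP project id
    location="us-central1",        # default shown above
    max_output_tokens=128,
    temperature=0.2,
)
print(llm("Say hello in French."))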
479f31c64c6b-162
Take in a list of prompt values and return an LLMResult. async apredict(text: str, *, stop: Optional[Sequence[str]] = None) → str# Predict text from text. async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema.BaseMessage# Predict message from m...
https://python.langchain.com/en/latest/reference/modules/llms.html
479f31c64c6b-163
Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult# Take in a list of p...
https://python.langchain.com/en/latest/reference/modules/llms.html
479f31c64c6b-164
Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: llm.save(file_path="path/llm.yaml") classmethod update_forward_refs(**localns: Any) → None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.Writer[source]# W...
https://python.langchain.com/en/latest/reference/modules/llms.html
479f31c64c6b-165
field temperature: Optional[float] = None# What sampling temperature to use. field top_p: Optional[float] = None# Total probability mass of tokens to consider at each step. field verbose: bool [Optional]# Whether to print out response text. field writer_api_key: Optional[str] = None# Writer API key. field writer_org_id...
https://python.langchain.com/en/latest/reference/modules/llms.html
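A minimal sketch for the Writer fields listed above; the key and org id are placeholders (depending on your version they may also be read from environment variables):

from langchain.llms import Writer

llm = Writer(
    writer_api_key="<api-key>",   # placeholder
    writer_org_id="<org-id>",     # placeholder
    temperature=0.7,
    top_p=1.0,
)
print(llm("Write a one-line product tagline for a note-taking app."))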
479f31c64c6b-166
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclu...
https://python.langchain.com/en/latest/reference/modules/llms.html
479f31c64c6b-167
Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int# Get the number of tokens in the messages. get_token_ids(text: str) → List[int]# Get the token IDs present in the text. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, ...
https://python.langchain.com/en/latest/reference/modules/llms.html
479f31c64c6b-168
By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 02, 2023.
https://python.langchain.com/en/latest/reference/modules/llms.html
f0475204cd47-0
Utilities# General utilities. pydantic model langchain.utilities.ApifyWrapper[source]# Wrapper around Apify. To use, you should have the apify-client python package installed, and the environment variable APIFY_API_TOKEN set with your API key, or pass apify_api_token as a named parameter to the cons...
https://python.langchain.com/en/latest/reference/modules/utilities.html
f0475204cd47-1
Return type ApifyDatasetLoader call_actor(actor_id: str, run_input: Dict, dataset_mapping_function: Callable[[Dict], langchain.schema.Document], *, build: Optional[str] = None, memory_mbytes: Optional[int] = None, timeout_secs: Optional[int] = None) → langchain.document_loaders.apify_dataset.ApifyDatasetLoader[source]#...
https://python.langchain.com/en/latest/reference/modules/utilities.html
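A hedged sketch of call_actor as described above, mapping each dataset item to a Document; the actor id, run_input fields, and item keys are placeholders for a real Apify actor:

from langchain.utilities import ApifyWrapper
from langchain.schema import Document

apify = ApifyWrapper()  # reads APIFY_API_TOKEN from the environment

loader = apify.call_actor(
    actor_id="apify/website-content-crawler",   # placeholder actor
    run_input={"startUrls": [{"url": "https://python.langchain.com/en/latest/"}]},
    dataset_mapping_function=lambda item: Document(
        page_content=item.get("text", ""),
        metadata={"source": item.get("url")},
    ),
)
docs = loader.load()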
f0475204cd47-2
Set doc_content_chars_max=None if you don’t want to limit the content size. Parameters top_k_results – number of the top-scored document used for the arxiv tool ARXIV_MAX_QUERY_LENGTH – the cut limit on the query used for the arxiv tool. load_max_docs – a limit to the number of loaded documents load_all_available_meta ...
https://python.langchain.com/en/latest/reference/modules/utilities.html
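A short sketch of the parameters listed above (top_k_results, doc_content_chars_max), assuming they belong to langchain.utilities.ArxivAPIWrapper and the arxiv python package is installed; the query is illustrative:

from langchain.utilities import ArxivAPIWrapper

arxiv = ArxivAPIWrapper(
    top_k_results=3,              # number of top-scored documents to use
    doc_content_chars_max=None,   # None disables the content-size limit, per the note above
)
print(arxiv.run("attention is all you need"))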
f0475204cd47-3
Run commands and return final output. pydantic model langchain.utilities.BingSearchAPIWrapper[source]# Wrapper for Bing Search API. In order to set this up, follow instructions at: https://levelup.gitconnected.com/api-tutorial-how-to-use-bing-web-search-api-in-python-4165d5592a7e field bing_search_url: str [Required]# ...
https://python.langchain.com/en/latest/reference/modules/utilities.html
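A minimal sketch for the BingSearchAPIWrapper described above; the subscription key is a placeholder and the search URL shown is the standard Bing v7 endpoint (both can also come from environment variables, depending on your setup):

from langchain.utilities import BingSearchAPIWrapper

search = BingSearchAPIWrapper(
    bing_subscription_key="<subscription-key>",   # placeholder
    bing_search_url="https://api.bing.microsoft.com/v7.0/search",
)
print(search.run("python langchain"))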
f0475204cd47-4
num_results – The number of results to return. Returns snippet - The description of the result. title - The title of the result. link - The link to the result. Return type A list of dictionaries with the following keys run(query: str) → str[source]# pydantic model langchain.utilities.GooglePlacesAPIWrapper[source]# Wra...
https://python.langchain.com/en/latest/reference/modules/utilities.html
f0475204cd47-5
read the Managing Projects page and create a project in the Google API Console. - Install the library using pip install google-api-python-client The current version of the library is 2.70.0 at this time 2. To create an API key: - Navigate to the APIs & Services→Credentials panel in Cloud Console. - Select Create creden...
https://python.langchain.com/en/latest/reference/modules/utilities.html
f0475204cd47-6
Returns snippet - The description of the result. title - The title of the result. link - The link to the result. Return type A list of dictionaries with the following keys run(query: str) → str[source]# Run query through GoogleSearch and parse result. pydantic model langchain.utilities.GoogleSerperAPIWrapper[source]# W...
https://python.langchain.com/en/latest/reference/modules/utilities.html
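A sketch of the results() shape described above (snippet/title/link dictionaries), assuming GOOGLE_API_KEY and GOOGLE_CSE_ID are configured as in the setup instructions; the values below are placeholders:

import os
from langchain.utilities import GoogleSearchAPIWrapper

os.environ.setdefault("GOOGLE_API_KEY", "<api-key>")   # placeholder
os.environ.setdefault("GOOGLE_CSE_ID", "<cse-id>")     # placeholder

search = GoogleSearchAPIWrapper()
for item in search.results("LangChain", num_results=3):
    print(item["title"], item["link"])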
f0475204cd47-7
Wrapper around GraphQL API. To use, you should have the gql python package installed. This wrapper will use the GraphQL API to conduct queries. field custom_headers: Optional[Dict[str, str]] = None# field graphql_endpoint: str [Required]# run(query: str) → str[source]# Run a GraphQL query and get the results. pydantic ...
https://python.langchain.com/en/latest/reference/modules/utilities.html
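A minimal sketch for the GraphQLAPIWrapper fields above; the endpoint and header are placeholders, and run() returns the query result as a string:

from langchain.utilities import GraphQLAPIWrapper  # requires the gql package

graphql = GraphQLAPIWrapper(
    graphql_endpoint="https://example.com/graphql",        # placeholder endpoint
    custom_headers={"Authorization": "Bearer <token>"},    # placeholder header
)
print(graphql.run("query { __typename }"))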
f0475204cd47-8
pydantic model langchain.utilities.OpenWeatherMapAPIWrapper[source]# Wrapper for OpenWeatherMap API using PyOWM. Docs for using: Go to OpenWeatherMap and sign up for an API key Save your API KEY into OPENWEATHERMAP_API_KEY env variable pip install pyowm field openweathermap_api_key: Optional[str] = None# field owm: Any...
https://python.langchain.com/en/latest/reference/modules/utilities.html
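A minimal sketch following the setup steps above (API key in OPENWEATHERMAP_API_KEY, pyowm installed); the key and location string are placeholders:

import os
from langchain.utilities import OpenWeatherMapAPIWrapper

os.environ.setdefault("OPENWEATHERMAP_API_KEY", "<api-key>")  # placeholder

weather = OpenWeatherMapAPIWrapper()
print(weather.run("London,GB"))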
f0475204cd47-9
Execute a DAX command and return the result asynchronously. get_schemas() → str[source]# Get the available schemas. get_table_info(table_names: Optional[Union[List[str], str]] = None) → str[source]# Get information about specified tables. get_table_names() → Iterable[str][source]# Get names of tables available. run(co...
https://python.langchain.com/en/latest/reference/modules/utilities.html
f0475204cd47-10
Example with SSL disabled: from langchain.utilities import SearxSearchWrapper # note: the unsecure parameter is not needed if you pass the url scheme as http searx = SearxSearchWrapper(searx_host="http://localhost:8888", unsecure=True) Validators disable_ssl_warnings » unsecure v...
https://python.langchain.com/en/latest/reference/modules/utilities.html
f0475204cd47-11
engines – List of engines to use for the query. categories – List of categories to use for the query. **kwargs – extra parameters to pass to the searx API. Returns {snippet: The description of the result. title: The title of the result. link: The link to the result. engines: The engines used for the result. category:...
https://python.langchain.com/en/latest/reference/modules/utilities.html
f0475204cd47-12
pydantic model langchain.utilities.SerpAPIWrapper[source]# Wrapper around SerpAPI. To use, you should have the google-search-results python package installed, and the environment variable SERPAPI_API_KEY set with your API key, or pass serpapi_api_key as a named parameter to the constructor. Example from langchain impor...
https://python.langchain.com/en/latest/reference/modules/utilities.html
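A minimal sketch of the SerpAPIWrapper described above, with the key passed as a named parameter (it can also come from SERPAPI_API_KEY); the key and query are placeholders:

from langchain.utilities import SerpAPIWrapper  # requires google-search-results

search = SerpAPIWrapper(serpapi_api_key="<api-key>")  # placeholder key
print(search.run("What is the capital of France?"))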
f0475204cd47-13
Creating a remote Spark Session via Spark connect. For example: SparkSQL.from_uri(“sc://localhost:15002”) get_table_info(table_names: Optional[List[str]] = None) → str[source]# get_table_info_no_throw(table_names: Optional[List[str]] = None) → str[source]# Get information about specified tables. Follows best practices ...
https://python.langchain.com/en/latest/reference/modules/utilities.html
f0475204cd47-14
GET the URL and return the text asynchronously. async apatch(url: str, data: Dict[str, Any], **kwargs: Any) → str[source]# PATCH the URL and return the text asynchronously. async apost(url: str, data: Dict[str, Any], **kwargs: Any) → str[source]# POST to the URL and return the text asynchronously. async aput(url: str, ...
https://python.langchain.com/en/latest/reference/modules/utilities.html
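The async helpers above (aget/apost/apatch/aput) appear to belong to the text requests wrapper in langchain.utilities; a hedged sketch, assuming that class is TextRequestsWrapper and the URL is a placeholder:

import asyncio
from langchain.utilities import TextRequestsWrapper

async def main() -> None:
    requests_wrapper = TextRequestsWrapper()
    # GET the URL and return the response body as text
    text = await requests_wrapper.aget("https://example.com")
    print(text[:200])

asyncio.run(main())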
f0475204cd47-15
account_sid="ACxxx", auth_token="xxx", from_number="+10123456789" ) twilio.run('test', '+12484345508') field account_sid: Optional[str] = None# Twilio account string identifier. field auth_token: Optional[str] = None# Twilio auth token. field from_number: Optional[str] = None# A Twilio phone number in [E.164](h...
https://python.langchain.com/en/latest/reference/modules/utilities.html
f0475204cd47-16
Wrapper around WikipediaAPI. To use, you should have the wikipedia python package installed. This wrapper will use the Wikipedia API to conduct searches and fetch page summaries. By default, it will return the page summaries of the top-k results. It limits the Document content by doc_content_chars_max. field doc_conten...
https://python.langchain.com/en/latest/reference/modules/utilities.html
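A minimal sketch for the WikipediaAPIWrapper described above, limiting output with top_k_results and doc_content_chars_max; requires the wikipedia package, and the query is illustrative:

from langchain.utilities import WikipediaAPIWrapper

wiki = WikipediaAPIWrapper(top_k_results=2, doc_content_chars_max=1000)
print(wiki.run("LangChain"))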
ee26bc64bd63-0
Experimental Modules# This module contains experimental modules and reproductions of existing work using LangChain primitives. Autonomous Agents# Here, we document the BabyAGI and AutoGPT classes from the langchain.experimental module. class ...
https://python.langchain.com/en/latest/reference/modules/experimental.html
ee26bc64bd63-1
Get the next task. property input_keys: List[str]# Input keys this chain expects. property output_keys: List[str]# Output keys this chain expects. prioritize_tasks(this_task_id: int, objective: str) → List[Dict][source]# Prioritize tasks. class langchain.experimental.AutoGPT(ai_name: str, memory: langchain.vectorstores...
https://python.langchain.com/en/latest/reference/modules/experimental.html
ee26bc64bd63-2
Summary of the events in the plan that the agent took. generate_dialogue_response(observation: str, now: Optional[datetime.datetime] = None) → Tuple[bool, str][source]# React to a given observation. generate_reaction(observation: str, now: Optional[datetime.datetime] = None) → Tuple[bool, str][source]# React to a given...
https://python.langchain.com/en/latest/reference/modules/experimental.html
ee26bc64bd63-3
field traits: str = 'N/A'# Permanent traits to ascribe to the character. class langchain.experimental.GenerativeAgentMemory(*, llm: langchain.base_language.BaseLanguageModel, memory_retriever: langchain.retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever, verbose: bool = False, reflection_threshold: Opt...
https://python.langchain.com/en/latest/reference/modules/experimental.html
ee26bc64bd63-4
The core language model. load_memory_variables(inputs: Dict[str, Any]) → Dict[str, str][source]# Return key-value pairs given the text input to the chain. field memory_retriever: langchain.retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever [Required]# The retriever to fetch related memories. property m...
https://python.langchain.com/en/latest/reference/modules/experimental.html
1abd6a2753af-0
Document Transformers# Transform documents pydantic model langchain.document_transformers.EmbeddingsRedundantFilter[source]# Filter that drops redundant documents by comparing their embeddings. field embeddings: langchain.embeddings.base.Embeddings [Required]# Embeddings to use for embed...
https://python.langchain.com/en/latest/reference/modules/document_transformers.html
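A hedged sketch of the redundancy filter described above, assuming OpenAI embeddings are available via OPENAI_API_KEY; the document texts are illustrative:

from langchain.document_transformers import EmbeddingsRedundantFilter
from langchain.embeddings import OpenAIEmbeddings
from langchain.schema import Document

docs = [
    Document(page_content="LangChain provides a standard interface for chains."),
    Document(page_content="LangChain provides a standard interface for chains."),  # duplicate
    Document(page_content="Agents use an LLM to decide which tool to call next."),
]

redundant_filter = EmbeddingsRedundantFilter(embeddings=OpenAIEmbeddings())
unique_docs = redundant_filter.transform_documents(docs)
print(len(unique_docs))  # near-duplicates are dropped by comparing embeddings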
5b25cabdd8f4-0
YouTube# This is a collection of LangChain videos on YouTube. ⛓️Official LangChain YouTube channel⛓️# Introduction to LangChain with Harrison Chase, creator of ...
https://python.langchain.com/en/latest/additional_resources/youtube.html
5b25cabdd8f4-1
Run BabyAGI with Langchain Agents (with Python Code) by 1littlecoder How to Use Langchain With Zapier | Write and Send Email with GPT-3 | OpenAI API Tutorial by StarMorph AI Use Your Locally Stored Files To Get Response From GPT - OpenAI | Langchain | Python by Shweta Lodha Langchain JS | How to Use GPT-3, GPT-4 to Ref...
https://python.langchain.com/en/latest/additional_resources/youtube.html
5b25cabdd8f4-2
LangChain. Crear aplicaciones Python impulsadas por GPT by Jesús Conde Easiest Way to Use GPT In Your Products | LangChain Basics Tutorial by Rachel Woods BabyAGI + GPT-4 Langchain Agent with Internet Access by tylerwhatsgood Learning LLM Agents. How does it actually work? LangChain, AutoGPT & OpenAI by Arnoldas Kemekl...
https://python.langchain.com/en/latest/additional_resources/youtube.html
5b25cabdd8f4-3
⛓️ Build your own custom LLM application with Bubble.io & Langchain (No Code & Beginner friendly) by No Code Blackbox ⛓️ Simple App to Question Your Docs: Leveraging Streamlit, Hugging Face Spaces, LangChain, and Claude! by Chris Alexiuk ⛓️ LANGCHAIN AI- ConstitutionalChainAI + Databutton AI ASSISTANT Web App by Avra ⛓...
https://python.langchain.com/en/latest/additional_resources/youtube.html
5b25cabdd8f4-4
⛓️ Summarizing and Querying Multiple Papers with LangChain by Automata Learning Lab ⛓️ Using Langchain (and Replit) through Tana, ask Google/Wikipedia/Wolfram Alpha to fill out a table by Stian Håklev ⛓️ Langchain PDF App (GUI) | Create a ChatGPT For Your PDF in Python by Alejandro AO - Software & Ai ⛓️ Auto-GPT with L...
https://python.langchain.com/en/latest/additional_resources/youtube.html
5b25cabdd8f4-5
Videos (sorted by views) By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 02, 2023.
https://python.langchain.com/en/latest/additional_resources/youtube.html
2d1194c9f16e-0
Model Comparison# Constructing your language model application will likely involve choosing between many different options of prompts, models, and even chains to use. When doing so, you will want to compare these different options on different inputs in an easy, flexible, and intuitive way...
https://python.langchain.com/en/latest/additional_resources/model_laboratory.html
2d1194c9f16e-1
pink prompt = PromptTemplate(template="What is the capital of {state}?", input_variables=["state"]) model_lab_with_prompt = ModelLaboratory.from_llms(llms, prompt=prompt) model_lab_with_prompt.compare("New York") Input: New York OpenAI Params: {'model': 'text-davinci-002', 'temperature': 0.0, 'max_tokens': 256, 'top_p'...
https://python.langchain.com/en/latest/additional_resources/model_laboratory.html
2d1194c9f16e-2
names = [str(open_ai_llm), str(cohere_llm)] model_lab = ModelLaboratory(chains, names=names) model_lab.compare("What is the hometown of the reigning men's U.S. Open champion?") Input: What is the hometown of the reigning men's U.S. Open champion? OpenAI Params: {'model': 'text-davinci-002', 'temperature': 0.0, 'max_tok...
https://python.langchain.com/en/latest/additional_resources/model_laboratory.html
2d1194c9f16e-3
So the final answer is: Carlos Alcaraz
https://python.langchain.com/en/latest/additional_resources/model_laboratory.html
e92df267171d-0
Tracing# By enabling tracing in your LangChain runs, you’ll be able to more effectively visualize, step through, and debug your chains and agents. First, you should install tracing and set up your environment properly. You can use either a locally hosted...
https://python.langchain.com/en/latest/additional_resources/tracing.html
e92df267171d-1
Changing Sessions# To initially record traces to a session other than "default", you can set the LANGCHAIN_SESSION environment variable to the name of the session you want to record to: import os os.environ["LANGCHAIN_TRACING"] = "true" os.environ["LANGCHAIN_SESSION"] = "my_session" # Make sure this session actually ex...
https://python.langchain.com/en/latest/additional_resources/tracing.html
0276e2e8fde9-0
Indexes# Note Conceptual Guide Indexes refer to ways to structure documents so that LLMs can best interact with them. This module contains utility functions for working with documents, different types of indexes, and then examples for using those indexes in chains. The most common...
https://python.langchain.com/en/latest/modules/indexes.html
0276e2e8fde9-1
previous Zep Memory next Getting Started Contents Go Deeper By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 02, 2023.
https://python.langchain.com/en/latest/modules/indexes.html
7974129db4c1-0
Models# Note Conceptual Guide This section of the documentation deals with different types of models that are used in LangChain. On this page we will go over the model types at a high level, but we have individual pages for each model type. The pages contain more de...
https://python.langchain.com/en/latest/modules/models.html
109afbd228ed-0
Agents# Note Conceptual Guide Some applications will require not just a predetermined chain of calls to LLMs/other tools, but potentially an unknown chain that depends on the user’s input. In these types of chains, there is an “agent” which has access to ...
https://python.langchain.com/en/latest/modules/agents.html
109afbd228ed-1
The different abstractions involved in agents are as follows: Agent: this is where the logic of the application lives. Agents expose an interface that takes in user input along with a list of previous steps the agent has taken, and returns either an AgentAction or AgentFinish AgentAction corresponds to the tool to use ...
https://python.langchain.com/en/latest/modules/agents.html
109afbd228ed-2
Agents In this section we cover the different types of agents LangChain supports natively. We then cover how to modify and create your own agents. Toolkits In this section we go over the various toolkits that LangChain supports out of the box, and how to create an agent from them. Agent Executor In this section we go o...
https://python.langchain.com/en/latest/modules/agents.html
293f0adc4476-0
Memory# Note Conceptual Guide By default, Chains and Agents are stateless, meaning that they treat each incoming query independently (as are the underlying LLMs and chat models). In some applications (chatbots being a GREAT example) it is highly important to remember previous interactions, both at a sh...
https://python.langchain.com/en/latest/modules/memory.html
17f0db6a126c-0
Chains# Note Conceptual Guide Using an LLM in isolation is fine for some simple applications, but many more complex ones require chaining LLMs - either with each other or with other experts. LangChain provides a standard interface for Chains, as well as some common implementations of chains for ease of...
https://python.langchain.com/en/latest/modules/chains.html
6f6ebe244706-0
Prompts# Note Conceptual Guide The new way of programming models is through prompts. A “prompt” refers to the input to the model. This input is rarely hard coded, but rather is often constructed from multiple components. A PromptTemplate is responsible for the cons...
https://python.langchain.com/en/latest/modules/prompts.html
b45b62629dd9-0
Callbacks# LangChain provides a callbacks system that ...
https://python.langchain.com/en/latest/modules/callbacks/getting_started.html
b45b62629dd9-1
CallbackHandlers are objects that implement the CallbackHandler interface, which has a method for each event that can be subscribed to. The CallbackManager will call the appropriate method on each handler when the event is triggered. class BaseCallbackHandler: """Base callback handler that can be used to handle cal...
https://python.langchain.com/en/latest/modules/callbacks/getting_started.html
b45b62629dd9-2
def on_tool_end(self, output: str, **kwargs: Any) -> Any: """Run when tool ends running.""" def on_tool_error( self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any ) -> Any: """Run when tool errors.""" def on_text(self, text: str, **kwargs: Any) -> Any: """Run on a...
https://python.langchain.com/en/latest/modules/callbacks/getting_started.html
b45b62629dd9-3
The verbose argument is available on most objects throughout the API (Chains, Models, Tools, Agents, etc.) as a constructor argument, e.g. LLMChain(verbose=True), and it is equivalent to passing a ConsoleCallbackHandler to the callbacks argument of that object and all child objects. This is useful for debugging, as it w...
https://python.langchain.com/en/latest/modules/callbacks/getting_started.html
b45b62629dd9-4
# First, let's explicitly set the StdOutCallbackHandler in `callbacks` chain = LLMChain(llm=llm, prompt=prompt, callbacks=[handler]) chain.run(number=2) # Then, let's use the `verbose` flag to achieve the same result chain = LLMChain(llm=llm, prompt=prompt, verbose=True) chain.run(number=2) # Finally, let's use the req...
https://python.langchain.com/en/latest/modules/callbacks/getting_started.html
b45b62629dd9-5
My custom handler, token: My custom handler, token: Why My custom handler, token: did My custom handler, token: the My custom handler, token: tomato My custom handler, token: turn My custom handler, token: red My custom handler, token: ? My custom handler, token: Because My custom handler, token: it My custom h...
https://python.langchain.com/en/latest/modules/callbacks/getting_started.html
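The token-by-token output above comes from a custom handler hooked into streaming; a minimal sketch of such a handler, assuming a ChatOpenAI model with streaming=True (the exact model settings and prompt are illustrative):

from langchain.callbacks.base import BaseCallbackHandler
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

class MyCustomHandler(BaseCallbackHandler):
    def on_llm_new_token(self, token: str, **kwargs) -> None:
        # called for every new token when streaming is enabled
        print(f"My custom handler, token: {token}")

chat = ChatOpenAI(max_tokens=25, streaming=True, callbacks=[MyCustomHandler()])
chat([HumanMessage(content="Tell me a joke")])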
b45b62629dd9-6
) -> None: """Run when chain starts running.""" print("zzzz....") await asyncio.sleep(0.3) class_name = serialized["name"] print("Hi! I just woke up. Your llm is starting") async def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None: """Run when chain ends ...
https://python.langchain.com/en/latest/modules/callbacks/getting_started.html
b45b62629dd9-7
Sync handler being called in a `thread_pool_executor`: token: make Sync handler being called in a `thread_pool_executor`: token: up Sync handler being called in a `thread_pool_executor`: token: everything Sync handler being called in a `thread_pool_executor`: token: ! Sync handler being called in a `thread_pool_exec...
https://python.langchain.com/en/latest/modules/callbacks/getting_started.html
b45b62629dd9-8
from langchain.callbacks import tracing_enabled from langchain.llms import OpenAI # First, define custom callback handler implementations class MyCustomHandlerOne(BaseCallbackHandler): def on_llm_start( self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any ) -> Any: print(f"on_llm_...
https://python.langchain.com/en/latest/modules/callbacks/getting_started.html