| id | text | source |
|---|---|---|
c4e26144932d-1 | embed_documents(texts: List[str]) → List[List[float]][source]#
Call out to Aleph Alpha’s asymmetric Document endpoint.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Call out to Aleph Alpha’s asymmetric, query embedding endpoin... | https://python.langchain.com/en/latest/reference/modules/embeddings.html |
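The embed_documents / embed_query pair above is the common embedding interface: documents and queries may go through different endpoints of an asymmetric model. A minimal Python stand-in (not the real Aleph Alpha client; the hash-based vectors are placeholder assumptions) shows the expected call signatures and return shapes:

```python
from typing import List

class ToyAsymmetricEmbeddings:
    """Illustrative stand-in for an asymmetric embedding client with
    separate document and query sides. The hash-based vectors are
    placeholders, not real embeddings."""

    def __init__(self, dim: int = 4):
        self.dim = dim

    def _embed(self, text: str, side: str) -> List[float]:
        # Deterministic pseudo-embedding; a real client calls the API here.
        h = hash(side + text)
        return [((h >> (8 * i)) & 0xFF) / 255.0 for i in range(self.dim)]

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        # One embedding per input text, via the "document" side.
        return [self._embed(t, "doc:") for t in texts]

    def embed_query(self, text: str) -> List[float]:
        # A single embedding, via the "query" side.
        return self._embed(text, "query:")

emb = ToyAsymmetricEmbeddings()
doc_vecs = emb.embed_documents(["first text", "second text"])
query_vec = emb.embed_query("a question")
```

A real implementation would call the provider's document and query endpoints respectively; only the shapes are meaningful here.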
c4e26144932d-2 | Model name to use.
field truncate: Optional[str] = None#
Truncate embeddings that are too long from start or end ("NONE"|"START"|"END")
embed_documents(texts: List[str]) → List[List[float]][source]#
Call out to Cohere’s embedding endpoint.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one f... | https://python.langchain.com/en/latest/reference/modules/embeddings.html |
c4e26144932d-3 | Returns
The embedding for the input query text.
Return type
List[float]
classmethod from_credentials(model_id: str, *, es_cloud_id: Optional[str] = None, es_user: Optional[str] = None, es_password: Optional[str] = None, input_field: str = 'text_field') → langchain.embeddings.elasticsearch.ElasticsearchEmbeddings[source... | https://python.langchain.com/en/latest/reference/modules/embeddings.html |
c4e26144932d-4 | Embed search docs.
embed_query(text: str) → List[float][source]#
Embed query text.
pydantic model langchain.embeddings.HuggingFaceEmbeddings[source]#
Wrapper around sentence_transformers embedding models.
To use, you should have the sentence_transformers python package installed.
Example
from langchain.embeddings impor... | https://python.langchain.com/en/latest/reference/modules/embeddings.html |
c4e26144932d-5 | environment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Example
from langchain.embeddings import HuggingFaceHubEmbeddings
repo_id = "sentence-transformers/all-mpnet-base-v2"
hf = HuggingFaceHubEmbeddings(
repo_id=repo_id,
task="feature-extractio... | https://python.langchain.com/en/latest/reference/modules/embeddings.html |
c4e26144932d-6 | )
field cache_folder: Optional[str] = None#
Path to store models.
Can be also set by SENTENCE_TRANSFORMERS_HOME environment variable.
field embed_instruction: str = 'Represent the document for retrieval: '#
Instruction to use for embedding documents.
field model_kwargs: Dict[str, Any] [Optional]#
Keyword arguments to ... | https://python.langchain.com/en/latest/reference/modules/embeddings.html |
c4e26144932d-7 | Number of tokens to process in parallel.
Should be a number between 1 and n_ctx.
field n_ctx: int = 512#
Token context window.
field n_gpu_layers: Optional[int] = None#
Number of layers to be loaded into gpu memory. Default None.
field n_parts: int = -1#
Number of parts to split the model into.
If -1, the number of par... | https://python.langchain.com/en/latest/reference/modules/embeddings.html |
c4e26144932d-8 | query_result = embeddings.embed_query(query_text)
document_text = "This is a test document."
document_result = embeddings.embed_documents([document_text])
field embed_type_db: str = 'db'#
For embed_documents
field embed_type_query: str = 'query'#
For embed_query
field endpoint_url: str = 'https://api.minimax.chat/v1/em... | https://python.langchain.com/en/latest/reference/modules/embeddings.html |
c4e26144932d-9 | Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Compute query embeddings using a modelscope embedding model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.MosaicMLInstr... | https://python.langchain.com/en/latest/reference/modules/embeddings.html |
c4e26144932d-10 | Embed a query using a MosaicML deployed instructor embedding model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.OpenAIEmbeddings[source]#
Wrapper around OpenAI embedding models.
To use, you should have the openai python package installed, and the
environment... | https://python.langchain.com/en/latest/reference/modules/embeddings.html |
c4e26144932d-11 | query_result = embeddings.embed_query(text)
field chunk_size: int = 1000#
Maximum number of texts to embed in each batch
field max_retries: int = 6#
Maximum number of retries to make when generating.
field request_timeout: Optional[Union[float, Tuple[float, float]]] = None#
Timeout in seconds for the OpenAPI request.
e... | https://python.langchain.com/en/latest/reference/modules/embeddings.html |
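The chunk_size field above caps how many texts are sent per embedding request, and max_retries bounds retry attempts. The batching half of that can be sketched in plain Python (an illustration of the idea, not the OpenAIEmbeddings internals):

```python
from typing import Iterable, List

def batched(texts: List[str], chunk_size: int = 1000) -> Iterable[List[str]]:
    """Yield batches of at most chunk_size texts, one batch per API request."""
    for i in range(0, len(texts), chunk_size):
        yield texts[i:i + chunk_size]

batch_sizes = [len(b) for b in batched([f"text {i}" for i in range(2500)])]
# 2500 texts with chunk_size=1000 -> batches of 1000, 1000, 500
```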
c4e26144932d-12 | field content_handler: langchain.embeddings.sagemaker_endpoint.EmbeddingsContentHandler [Required]#
The content handler class that provides input and
output transform functions to handle formats between the LLM
and the endpoint.
field credentials_profile_name: Optional[str] = None#
The name of the profile in the ~/.aws/... | https://python.langchain.com/en/latest/reference/modules/embeddings.html |
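A content handler like the one above converts between the texts the library passes in and the endpoint's request/response format. A toy sketch (the JSON field names "inputs" and "embeddings" are assumptions for illustration, not SageMaker's actual schema):

```python
import json
from typing import List

class ToyContentHandler:
    """Illustrative input/output transform pair for an embeddings endpoint."""
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompts: List[str]) -> bytes:
        # Serialize texts into the request body the endpoint expects.
        return json.dumps({"inputs": prompts}).encode("utf-8")

    def transform_output(self, output: bytes) -> List[List[float]]:
        # Parse the endpoint's response body back into embedding vectors.
        return json.loads(output.decode("utf-8"))["embeddings"]

handler = ToyContentHandler()
payload = handler.transform_input(["hello"])
vectors = handler.transform_output(b'{"embeddings": [[0.1, 0.2]]}')
```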
c4e26144932d-13 | Compute query embeddings using a SageMaker inference endpoint.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.SelfHostedEmbeddings[source]#
Runs custom embedding models on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, ... | https://python.langchain.com/en/latest/reference/modules/embeddings.html |
c4e26144932d-14 | embeddings = SelfHostedHFEmbeddings.from_pipeline(
pipeline="models/pipeline.pkl",
hardware=gpu,
model_reqs=["./", "torch", "transformers"],
)
Validators
raise_deprecation » all fields
set_verbose » verbose
field inference_fn: Callable = <function _embed_documents>#
Inference function to extract the embeddi... | https://python.langchain.com/en/latest/reference/modules/embeddings.html |
c4e26144932d-15 | Validators
raise_deprecation » all fields
set_verbose » verbose
field hardware: Any = None#
Remote hardware to send the inference function to.
field inference_fn: Callable = <function _embed_documents>#
Inference function to extract the embeddings.
field load_fn_kwargs: Optional[dict] = None#
Keyword arguments to pass... | https://python.langchain.com/en/latest/reference/modules/embeddings.html |
c4e26144932d-16 | field model_id: str = 'hkunlp/instructor-large'#
Model name to use.
field model_reqs: List[str] = ['./', 'InstructorEmbedding', 'torch']#
Requirements to install on the hardware to run inference on the model.
field query_instruction: str = 'Represent the question for retrieving supporting documents: '#
Instruction to use for embe... | https://python.langchain.com/en/latest/reference/modules/embeddings.html |
c4e26144932d-17 | Compute query embeddings using a TensorflowHub embedding model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
previous
Chat Models
next
Indexes
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023. | https://python.langchain.com/en/latest/reference/modules/embeddings.html |
84873a6d52b2-0 | Utilities#
General utilities.
pydantic model langchain.utilities.ApifyWrapper[source]#
Wrapper around Apify.
To use, you should have the apify-client python package installed,
and the environment variable APIFY_API_TOKEN set with your API key, or pass
apify_api_token as a named parameter to the cons... | https://python.langchain.com/en/latest/reference/modules/utilities.html |
84873a6d52b2-1 | Return type
ApifyDatasetLoader
call_actor(actor_id: str, run_input: Dict, dataset_mapping_function: Callable[[Dict], langchain.schema.Document], *, build: Optional[str] = None, memory_mbytes: Optional[int] = None, timeout_secs: Optional[int] = None) → langchain.document_loaders.apify_dataset.ApifyDatasetLoader[source]#... | https://python.langchain.com/en/latest/reference/modules/utilities.html |
84873a6d52b2-2 | Parameters
top_k_results – number of the top-scored document used for the arxiv tool
ARXIV_MAX_QUERY_LENGTH – the cut limit on the query used for the arxiv tool.
load_max_docs – a limit to the number of loaded documents
load_all_available_meta –
if True: the metadata of the loaded Documents gets all available meta inf... | https://python.langchain.com/en/latest/reference/modules/utilities.html |
84873a6d52b2-3 | pydantic model langchain.utilities.BingSearchAPIWrapper[source]#
Wrapper for Bing Search API.
In order to set this up, follow instructions at:
https://levelup.gitconnected.com/api-tutorial-how-to-use-bing-web-search-api-in-python-4165d5592a7e
field bing_search_url: str [Required]#
field bing_subscription_key: str [Requ... | https://python.langchain.com/en/latest/reference/modules/utilities.html |
84873a6d52b2-4 | Returns
snippet - The description of the result.
title - The title of the result.
link - The link to the result.
Return type
A list of dictionaries with the following keys
run(query: str) → str[source]#
pydantic model langchain.utilities.GooglePlacesAPIWrapper[source]#
Wrapper around Google Places API.
To use, you shou... | https://python.langchain.com/en/latest/reference/modules/utilities.html |
84873a6d52b2-5 | - Install the library using pip install google-api-python-client
The current version of the library is 2.70.0 at this time
2. To create an API key:
- Navigate to the APIs & Services→Credentials panel in Cloud Console.
- Select Create credentials, then select API key from the drop-down menu.
- The API key created dialog... | https://python.langchain.com/en/latest/reference/modules/utilities.html |
84873a6d52b2-6 | title - The title of the result.
link - The link to the result.
Return type
A list of dictionaries with the following keys
run(query: str) → str[source]#
Run query through GoogleSearch and parse result.
pydantic model langchain.utilities.GoogleSerperAPIWrapper[source]#
Wrapper around the Serper.dev Google Search API.
Y... | https://python.langchain.com/en/latest/reference/modules/utilities.html |
84873a6d52b2-7 | This wrapper will use the GraphQL API to conduct queries.
field custom_headers: Optional[Dict[str, str]] = None#
field graphql_endpoint: str [Required]#
run(query: str) → str[source]#
Run a GraphQL query and get the results.
pydantic model langchain.utilities.LambdaWrapper[source]#
Wrapper for AWS Lambda SDK.
Docs for ... | https://python.langchain.com/en/latest/reference/modules/utilities.html |
84873a6d52b2-8 | Wrapper for OpenWeatherMap API using PyOWM.
Docs for using:
Go to OpenWeatherMap and sign up for an API key
Save your API KEY into OPENWEATHERMAP_API_KEY env variable
pip install pyowm
field openweathermap_api_key: Optional[str] = None#
field owm: Any = None#
run(location: str) → str[source]#
Get the current weather in... | https://python.langchain.com/en/latest/reference/modules/utilities.html |
84873a6d52b2-9 | get_schemas() → str[source]#
Get the available schemas.
get_table_info(table_names: Optional[Union[List[str], str]] = None) → str[source]#
Get information about specified tables.
get_table_names() → Iterable[str][source]#
Get names of tables available.
run(command: str) → Any[source]#
Execute a DAX command and return ... | https://python.langchain.com/en/latest/reference/modules/utilities.html |
84873a6d52b2-10 | # note the unsecure parameter is not needed if you pass the url scheme as
# http
searx = SearxSearchWrapper(searx_host="http://localhost:8888",
unsecure=True)
Validators
disable_ssl_warnings » unsecure
validate_params » all fields
field aiosession: Optional[Any] = None#
field cat... | https://python.langchain.com/en/latest/reference/modules/utilities.html |
84873a6d52b2-11 | **kwargs – extra parameters to pass to the searx API.
Returns
{snippet: The description of the result.
title: The title of the result.
link: The link to the result.
engines: The engines used for the result.
category: Searx category of the result.
}
Return type
Dict with the following keys
run(query: str, engines: Opt... | https://python.langchain.com/en/latest/reference/modules/utilities.html |
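The result dictionary above (snippet / title / link / engines / category) can be illustrated by shaping raw engine hits into that schema; the raw-side field names in this sketch are assumptions, not the actual searx API response format:

```python
from typing import Dict, List

def format_results(raw: List[Dict]) -> List[Dict]:
    """Shape raw engine hits into the result dict described above."""
    return [{
        "snippet": r.get("content", ""),     # description of the result
        "title": r["title"],                  # title of the result
        "link": r["url"],                     # link to the result
        "engines": r.get("engines", []),      # engines that produced it
        "category": r.get("category", "general"),  # searx category
    } for r in raw]

hits = format_results([{"title": "Example", "url": "https://example.com",
                        "content": "An example page.", "engines": ["duckduckgo"]}])
```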
84873a6d52b2-12 | To use, you should have the google-search-results python package installed,
and the environment variable SERPAPI_API_KEY set with your API key, or pass
serpapi_api_key as a named parameter to the constructor.
Example
from langchain import SerpAPIWrapper
serpapi = SerpAPIWrapper()
field aiosession: Optional[aiohttp.clie... | https://python.langchain.com/en/latest/reference/modules/utilities.html |
84873a6d52b2-13 | For example: SparkSQL.from_uri(“sc://localhost:15002”)
get_table_info(table_names: Optional[List[str]] = None) → str[source]#
get_table_info_no_throw(table_names: Optional[List[str]] = None) → str[source]#
Get information about specified tables.
Follows best practices as specified in: Rajkumar et al, 2022
(https://arxi... | https://python.langchain.com/en/latest/reference/modules/utilities.html |
84873a6d52b2-14 | PATCH the URL and return the text asynchronously.
async apost(url: str, data: Dict[str, Any], **kwargs: Any) → str[source]#
POST to the URL and return the text asynchronously.
async aput(url: str, data: Dict[str, Any], **kwargs: Any) → str[source]#
PUT the URL and return the text asynchronously.
delete(url: str, **kwar... | https://python.langchain.com/en/latest/reference/modules/utilities.html |
84873a6d52b2-15 | field account_sid: Optional[str] = None#
Twilio account string identifier.
field auth_token: Optional[str] = None#
Twilio auth token.
field from_number: Optional[str] = None#
A Twilio phone number in [E.164](https://www.twilio.com/docs/glossary/what-e164)
format, an
[alphanumeric sender ID](https://www.twilio.com/docs/... | https://python.langchain.com/en/latest/reference/modules/utilities.html |
84873a6d52b2-16 | fetch page summaries. By default, it will return the page summaries
of the top-k results.
It limits the Document content by doc_content_chars_max.
field doc_content_chars_max: int = 4000#
field lang: str = 'en'#
field load_all_available_meta: bool = False#
field top_k_results: int = 3#
load(query: str) → List[langchain... | https://python.langchain.com/en/latest/reference/modules/utilities.html |
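doc_content_chars_max above limits each Document's content; the truncation itself is just a character cut, sketched here:

```python
def limit_content(text: str, doc_content_chars_max: int = 4000) -> str:
    """Cut document text to at most doc_content_chars_max characters."""
    return text[:doc_content_chars_max]

clipped = limit_content("x" * 5000)
# len(clipped) == 4000; shorter texts pass through unchanged
```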
95f9b51ecced-0 | Retrievers#
pydantic model langchain.retrievers.ArxivRetriever[source]#
It is effectively a wrapper for ArxivAPIWrapper.
It wraps load() to get_relevant_documents().
It uses all ArxivAPIWrapper arguments without any change.
async aget_relevant_documents(query: str) → List[langchain.schema.Document]... | https://python.langchain.com/en/latest/reference/modules/retrievers.html |
95f9b51ecced-1 | get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
pydantic model langchain.retrievers.ChatGPTPluginRetriever[source]#
field aiosession: Optional[aiohttp.client.Clie... | https://python.langchain.com/en/latest/reference/modules/retrievers.html |
95f9b51ecced-2 | Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
Sequence of relevant documents
class langchain.retrievers.DataberryRetriever(datastore_url: str, top_k: Optional[int] = None, api_key: Optional[str] = None)[source]#
async aget_relevant_documents(query: str) → List[lang... | https://python.langchain.com/en/latest/reference/modules/retrievers.html |
95f9b51ecced-3 | Locate the “elastic” user and click “Edit”
Click “Reset password”
Follow the prompts to reset the password
The format for Elastic Cloud URLs is
https://username:password@cluster_id.region_id.gcp.cloud.es.io:9243.
add_texts(texts: Iterable[str], refresh_indices: bool = True) → List[str][source]#
Run more texts through t... | https://python.langchain.com/en/latest/reference/modules/retrievers.html |
95f9b51ecced-4 | Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
classmethod from_texts(texts: List[str], embeddings: langchain.embeddings.base.Embeddings, **kwargs: Any) → langchain.retrievers.knn.KNNRetriever[source]#
get_relevant_documents(query: str) → ... | https://python.langchain.com/en/latest/reference/modules/retrievers.html |
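The KNNRetriever above ranks stored texts by similarity between their embeddings and the query embedding. A self-contained sketch of that ranking step, using cosine similarity over toy vectors:

```python
import math
from typing import List

def knn_retrieve(query_vec: List[float], doc_vecs: List[List[float]],
                 texts: List[str], k: int = 2) -> List[str]:
    """Minimal k-nearest-neighbour retrieval: rank texts by cosine
    similarity of their (nonzero) embeddings to the query embedding."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)
    scored = sorted(zip(texts, doc_vecs),
                    key=lambda p: cos(query_vec, p[1]), reverse=True)
    return [t for t, _ in scored[:k]]

docs = ["cats", "dogs", "cars"]
vecs = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
top = knn_retrieve([1.0, 0.0], vecs, docs, k=2)
# top == ["cats", "dogs"]
```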
95f9b51ecced-5 | Parameters
query – string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
pydantic model langcha... | https://python.langchain.com/en/latest/reference/modules/retrievers.html |
95f9b51ecced-6 | get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
pydantic model langchain.retrievers.SelfQueryRetriever[source]#
Retriever that wraps around a vector store and use... | https://python.langchain.com/en/latest/reference/modules/retrievers.html |
95f9b51ecced-7 | get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
pydantic model langchain.retrievers.TFIDFRetriever[source]#
field docs: List[langchain.schema.Document] [Required]... | https://python.langchain.com/en/latest/reference/modules/retrievers.html |
95f9b51ecced-8 | field default_salience: Optional[float] = None#
The salience to assign memories not retrieved from the vector store.
None assigns no salience to documents not fetched from the vector store.
field k: int = 4#
The maximum number of documents to retrieve in a given call.
field memory_stream: List[langchain.schema.Document... | https://python.langchain.com/en/latest/reference/modules/retrievers.html |
95f9b51ecced-9 | Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
classmethod from_params(url: str, content_field: str, *, k: Optional[int] = None, metadata_fields: Union[Sequence[str], Literal['*']] = (), sources: Optional[Union[Sequence[str], Literal['*']]... | https://python.langchain.com/en/latest/reference/modules/retrievers.html |
95f9b51ecced-10 | class langchain.retrievers.WeaviateHybridSearchRetriever(client: Any, index_name: str, text_key: str, alpha: float = 0.5, k: int = 4, attributes: Optional[List[str]] = None, create_schema_if_missing: bool = True)[source]#
class Config[source]#
Configuration for this pydantic object.
arbitrary_types_allowed = True#
extr... | https://python.langchain.com/en/latest/reference/modules/retrievers.html |
95f9b51ecced-11 | Parameters
query – string to find relevant documents for
Returns
List of relevant documents
class langchain.retrievers.ZepRetriever(session_id: str, url: str, top_k: Optional[int] = None)[source]#
A Retriever implementation for the Zep long-term memory store. Search your
user’s long-term chat history with Zep.
Note: Yo... | https://python.langchain.com/en/latest/reference/modules/retrievers.html |
3de395b109e4-0 | Tracing
Contents
Tracing Walkthrough
Changing Sessions
Tracing#
By enabling tracing in your LangChain runs, you’ll be able to more effectively visualize, step through, and debug your chains and agents.
First, you should install tracing and set up your environment properly.
You can use either a locally hosted... | https://python.langchain.com/en/latest/additional_resources/tracing.html |
3de395b109e4-1 | Changing Sessions#
To initially record traces to a session other than "default", you can set the LANGCHAIN_SESSION environment variable to the name of the session you want to record to:
import os
os.environ["LANGCHAIN_TRACING"] = "true"
os.environ["LANGCHAIN_SESSION"] = "my_session" # Make sure this session actually ex... | https://python.langchain.com/en/latest/additional_resources/tracing.html |
1da9669fd24a-0 | YouTube
Contents
⛓️Official LangChain YouTube channel⛓️
Introduction to LangChain with Harrison Chase, creator of LangChain
Videos (sorted by views)
YouTube#
This is a collection of LangChain videos on YouTube.
⛓️Official LangChain YouTube channel⛓️#
Introduction to LangChain with Harrison Chase, creator of ... | https://python.langchain.com/en/latest/additional_resources/youtube.html |
1da9669fd24a-1 | Run BabyAGI with Langchain Agents (with Python Code) by 1littlecoder
How to Use Langchain With Zapier | Write and Send Email with GPT-3 | OpenAI API Tutorial by StarMorph AI
Use Your Locally Stored Files To Get Response From GPT - OpenAI | Langchain | Python by Shweta Lodha
Langchain JS | How to Use GPT-3, GPT-4 to Ref... | https://python.langchain.com/en/latest/additional_resources/youtube.html |
1da9669fd24a-2 | LangChain. Crear aplicaciones Python impulsadas por GPT by Jesús Conde
Easiest Way to Use GPT In Your Products | LangChain Basics Tutorial by Rachel Woods
BabyAGI + GPT-4 Langchain Agent with Internet Access by tylerwhatsgood
Learning LLM Agents. How does it actually work? LangChain, AutoGPT & OpenAI by Arnoldas Kemekl... | https://python.langchain.com/en/latest/additional_resources/youtube.html |
1da9669fd24a-3 | ⛓️ QA over documents with Auto vector index selection with Langchain router chains by echohive
⛓️ Build your own custom LLM application with Bubble.io & Langchain (No Code & Beginner friendly) by No Code Blackbox
⛓️ Simple App to Question Your Docs: Leveraging Streamlit, Hugging Face Spaces, LangChain, and Claude! by C... | https://python.langchain.com/en/latest/additional_resources/youtube.html |
1da9669fd24a-4 | ⛓️ LangChain In Action: Real-World Use Case With Step-by-Step Tutorial by Rabbitmetrics
⛓️ Summarizing and Querying Multiple Papers with LangChain by Automata Learning Lab
⛓️ Using Langchain (and Replit) through Tana, ask Google/Wikipedia/Wolfram Alpha to fill out a table by Stian Håklev
⛓️ Langchain PDF App (GUI) | Cr... | https://python.langchain.com/en/latest/additional_resources/youtube.html |
4f7fbb9c3514-0 | Model Comparison#
Constructing your language model application will likely involve choosing between many different options of prompts, models, and even chains to use. When doing so, you will want to compare these different options on different inputs in an easy, flexible, and intuitive way... | https://python.langchain.com/en/latest/additional_resources/model_laboratory.html |
4f7fbb9c3514-1 | pink
prompt = PromptTemplate(template="What is the capital of {state}?", input_variables=["state"])
model_lab_with_prompt = ModelLaboratory.from_llms(llms, prompt=prompt)
model_lab_with_prompt.compare("New York")
Input:
New York
OpenAI
Params: {'model': 'text-davinci-002', 'temperature': 0.0, 'max_tokens': 256, 'top_p'... | https://python.langchain.com/en/latest/additional_resources/model_laboratory.html |
4f7fbb9c3514-2 | names = [str(open_ai_llm), str(cohere_llm)]
model_lab = ModelLaboratory(chains, names=names)
model_lab.compare("What is the hometown of the reigning men's U.S. Open champion?")
Input:
What is the hometown of the reigning men's U.S. Open champion?
OpenAI
Params: {'model': 'text-davinci-002', 'temperature': 0.0, 'max_tok... | https://python.langchain.com/en/latest/additional_resources/model_laboratory.html |
4f7fbb9c3514-3 | So the final answer is:
Carlos Alcaraz
| https://python.langchain.com/en/latest/additional_resources/model_laboratory.html |
306e2ebbea37-0 | Deployments
Contents
Streamlit
Gradio (on Hugging Face)
Chainlit
Beam
Vercel
FastAPI + Vercel
Kinsta
Fly.io
Digitalocean App Platform
Google Cloud Run
SteamShip
Langchain-serve
BentoML
Databutton
Deployments#
So, you’ve created a really cool chain - now what? How do you deploy it and make it easily shareable... | https://python.langchain.com/en/latest/ecosystem/deployments.html |
306e2ebbea37-1 | Chainlit doc on the integration with LangChain
Beam#
This repo serves as a template for how to deploy a LangChain app with Beam.
It implements a Question Answering app and contains instructions for deploying the app as a serverless REST API.
Vercel#
A minimal example on how to run LangChain on Vercel using Flask.
FastAPI + Ve... | https://python.langchain.com/en/latest/ecosystem/deployments.html |
306e2ebbea37-2 | Databutton#
These templates serve as examples of how to build, deploy, and share LangChain applications using Databutton. You can create user interfaces with Streamlit, automate tasks by scheduling Python code, and store files and data in the built-in store. Examples include a Chatbot interface with conversational memo... | https://python.langchain.com/en/latest/ecosystem/deployments.html |
abd61a994364-0 | Quickstart Guide
Contents
Installation
Environment Setup
Building a Language Model Application: LLMs
LLMs: Get predictions from a language model
Prompt Templates: Manage prompts for LLMs
Chains: Combine LLMs and prompts in multi-step workflows
Agents: Dynamically Call Chains Based on User Input
Memory: Add S... | https://python.langchain.com/en/latest/getting_started/getting_started.html |
abd61a994364-1 | LangChain provides many modules that can be used to build language model applications. Modules can be combined to create more complex applications, or be used individually for simple applications.
LLMs: Get predictions from a language model#
The most basic building block of LangChain is calling an LLM on some input.
Le... | https://python.langchain.com/en/latest/getting_started/getting_started.html |
abd61a994364-2 | This is easy to do with LangChain!
First, let’s define the prompt template:
from langchain.prompts import PromptTemplate
prompt = PromptTemplate(
input_variables=["product"],
template="What is a good name for a company that makes {product}?",
)
Let’s now see how this works! We can call the .format method to forma... | https://python.langchain.com/en/latest/getting_started/getting_started.html |
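The .format call mentioned above fills the template's input variables. PromptTemplate adds validation on top, but the substitution itself behaves like a plain Python format string:

```python
# Plain-Python equivalent of formatting the prompt template above.
template = "What is a good name for a company that makes {product}?"
prompt_text = template.format(product="colorful socks")
# prompt_text: "What is a good name for a company that makes colorful socks?"
```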
abd61a994364-3 | Now we can run that chain only specifying the product!
chain.run("colorful socks")
# -> '\n\nSocktastic!'
There we go! There’s the first chain - an LLM Chain.
This is one of the simpler types of chains, but understanding how it works will set you up well for working with more complex chains.
For more details, check out... | https://python.langchain.com/en/latest/getting_started/getting_started.html |
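An LLM chain, as described above, is essentially "format the prompt, then call the model". A minimal sketch with a stand-in model function (fake_llm returns a canned string; a real chain calls an actual LLM):

```python
from typing import Callable

def make_chain(template: str, llm: Callable[[str], str]) -> Callable[[str], str]:
    """Compose a prompt template with a model call, chain-style."""
    def run(product: str) -> str:
        return llm(template.format(product=product))
    return run

def fake_llm(prompt: str) -> str:
    # Canned output standing in for a real LLM generation.
    return "Socktastic!"

chain = make_chain("What is a good name for a company that makes {product}?", fake_llm)
result = chain("colorful socks")
# result == "Socktastic!"
```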
abd61a994364-4 | pip install google-search-results
And set the appropriate environment variables.
import os
os.environ["SERPAPI_API_KEY"] = "..."
Now we can get started!
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.llms import OpenAI
# First,... | https://python.langchain.com/en/latest/getting_started/getting_started.html |
abd61a994364-5 | Thought: I now know the final answer
Final Answer: The high temperature in SF yesterday in Fahrenheit raised to the .023 power is 1.0974509573251117.
> Finished chain.
Memory: Add State to Chains and Agents#
So far, all the chains and agents we’ve gone through have been stateless. But often, you may want a chain or age... | https://python.langchain.com/en/latest/getting_started/getting_started.html |
abd61a994364-6 | Current conversation:
Human: Hi there!
AI:
> Finished chain.
' Hello! How are you today?'
output = conversation.predict(input="I'm doing well! Just having a conversation with an AI.")
print(output)
> Entering new chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The A... | https://python.langchain.com/en/latest/getting_started/getting_started.html |
abd61a994364-7 | AIMessage,
HumanMessage,
SystemMessage
)
chat = ChatOpenAI(temperature=0)
You can get completions by passing in a single message.
chat([HumanMessage(content="Translate this sentence from English to French. I love programming.")])
# -> AIMessage(content="J'aime programmer.", additional_kwargs={})
You can also pa... | https://python.langchain.com/en/latest/getting_started/getting_started.html |
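Chat models take typed message objects rather than a bare string. A toy sketch of that shape (fake_chat returns a canned reply; the real ChatOpenAI call is what produces the AIMessage):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class HumanMessage:
    content: str

@dataclass
class AIMessage:
    content: str

def fake_chat(messages: List[HumanMessage]) -> AIMessage:
    # Stand-in for a chat model: a list of messages in, one AIMessage out.
    return AIMessage(content="J'aime programmer.")

reply = fake_chat([HumanMessage(
    content="Translate this sentence from English to French. I love programming.")])
```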
abd61a994364-8 | You can recover things like token usage from this LLMResult:
result.llm_output['token_usage']
# -> {'prompt_tokens': 57, 'completion_tokens': 20, 'total_tokens': 77}
Chat Prompt Templates#
Similar to LLMs, you can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or mo... | https://python.langchain.com/en/latest/getting_started/getting_started.html |
abd61a994364-9 | from langchain import LLMChain
from langchain.prompts.chat import (
ChatPromptTemplate,
SystemMessagePromptTemplate,
HumanMessagePromptTemplate,
)
chat = ChatOpenAI(temperature=0)
template = "You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMe... | https://python.langchain.com/en/latest/getting_started/getting_started.html |
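Message prompt templates, as above, format each message independently before combining them into the chat prompt. The substitution step can be sketched with plain format strings (the (role, text) tuples are a simplification of the real message objects):

```python
# Each message template is formatted on its own, then combined in order.
system_template = ("You are a helpful assistant that translates "
                   "{input_language} to {output_language}.")
human_template = "{text}"

def format_chat_prompt(input_language: str, output_language: str, text: str):
    return [
        ("system", system_template.format(input_language=input_language,
                                          output_language=output_language)),
        ("human", human_template.format(text=text)),
    ]

messages = format_chat_prompt("English", "French", "I love programming.")
```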
abd61a994364-10 | agent = initialize_agent(tools, chat, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
# Now let's test it out!
agent.run("Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?")
> Entering new AgentExecutor chain...
Thought: I need to use a search engine to find Olivia Wilde... | https://python.langchain.com/en/latest/getting_started/getting_started.html |
abd61a994364-11 | '2.169459462491557'
Memory: Add State to Chains and Agents#
You can use Memory with chains and agents initialized with chat models. The main difference between this and Memory for LLMs is that rather than trying to condense all previous messages into a string, we can keep them as their own unique memory object.
from la... | https://python.langchain.com/en/latest/getting_started/getting_started.html |
abd61a994364-12 | conversation.predict(input="Tell me about yourself.")
# -> "Sure! I am an AI language model created by OpenAI. I was trained on a large dataset of text from the internet, which allows me to understand and generate human-like language. I can answer questions, provide information, and even have conversations like this on... | https://python.langchain.com/en/latest/getting_started/getting_started.html |
785ce0ed2fdd-0 | Concepts
Contents
Chain of Thought
Action Plan Generation
ReAct
Self-ask
Prompt Chaining
Memetic Proxy
Self Consistency
Inception
MemPrompt
Concepts#
These are concepts and terminology commonly used when developing LLM applications.
It contains reference to external papers or sources where the concept was fi... | https://python.langchain.com/en/latest/getting_started/concepts.html |
785ce0ed2fdd-1 | to respond in a certain way framing the discussion in a context that the model knows of and that
will result in that type of response.
For example, as a conversation between a student and a teacher.
Paper
Self Consistency#
Self Consistency is a decoding strategy that samples a diverse set of reasoning paths and then se... | https://python.langchain.com/en/latest/getting_started/concepts.html |
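The aggregation step of Self Consistency, once multiple reasoning paths have been sampled, is a majority vote over their final answers; a minimal sketch:

```python
from collections import Counter

def self_consistency(answers):
    """Pick the most common final answer across sampled reasoning paths."""
    return Counter(answers).most_common(1)[0][0]

final = self_consistency(["42", "42", "41"])
# final == "42"
```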
dd4359954237-0 | Tutorials
Contents
Tutorials#
This is a collection of LangChain tutorials mostly on YouTube.
⛓ icon marks a new video [last update 2023-05-15]
#
LangChain AI Handbook By James Briggs and Francisco Ingham
#
LangChain Tutorials by Edrick:
⛓ LangChain, Chroma DB, OpenAI Beginner Guide | ChatGPT with your PDF
La... | https://python.langchain.com/en/latest/getting_started/tutorials.html |
dd4359954237-1 | Question A 300 Page Book (w/ OpenAI + Pinecone)
Workaround OpenAI's Token Limit With Chain Types
Build Your Own OpenAI + LangChain Web App in 23 Minutes
Working With The New ChatGPT API
OpenAI + LangChain Wrote Me 100 Custom Sales Emails
Structured Output From OpenAI (Clean Dirty Data)
Connect OpenAI To +5,000 Tools (L... | https://python.langchain.com/en/latest/getting_started/tutorials.html |
dd4359954237-2 | ⛓ Using LangChain with DuckDuckGO Wikipedia & PythonREPL Tools
⛓ Building Custom Tools and Agents with LangChain (gpt-3.5-turbo)
⛓ LangChain Retrieval QA Over Multiple Files with ChromaDB
⛓ LangChain Retrieval QA with Instructor Embeddings & ChromaDB for PDFs
⛓ LangChain + Retrieval Local LLMs for Retrieval QA - No Ope... | https://python.langchain.com/en/latest/getting_started/tutorials.html |
dd4359954237-3 | Analyze Custom CSV Data with GPT-4 using Langchain
⛓ Build ChatGPT Chatbots with LangChain Memory: Understanding and Implementing Memory in Conversations
previous
Concepts
next
Models
Contents
By Harrison Chase
© Copyright 2023, Harrison Chase.
L... | https://python.langchain.com/en/latest/getting_started/tutorials.html |
dd34967fee30-0 | .md
.pdf
Agents
Contents
Create Your Own Agent
Step 1: Create Tools
(Optional) Step 2: Modify Agent
(Optional) Step 3: Modify Agent Executor
Examples
Agents#
Conceptual Guide
Agents can be used for a variety of tasks.
Agents combine the decision making ability of a language model with tools in order to create a syste... | https://python.langchain.com/en/latest/use_cases/personal_assistants.html |
dd34967fee30-1 | Modify the output parser. This is necessary if the agent is having trouble parsing the language model output.
(Optional) Step 3: Modify Agent Executor#
This step is usually not necessary, as this is pretty general logic.
Possible reasons you would want to modify this include adding different stopping conditions, or han... | https://python.langchain.com/en/latest/use_cases/personal_assistants.html |
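As a sketch of what that general logic looks like (plain Python with a stubbed agent and tool; the real AgentExecutor carries more machinery), a custom stopping condition is just an extra check inside the loop:

```python
def run_agent(agent_step, tools, max_iterations=5, should_stop=None):
    """Minimal agent-executor loop with a pluggable stopping condition.

    agent_step(history) -> ("tool_name", tool_input) or ("FINISH", answer)
    should_stop(history) -> True to abort early (the custom condition).
    """
    history = []
    for _ in range(max_iterations):
        if should_stop and should_stop(history):
            return "stopped early"
        action, arg = agent_step(history)
        if action == "FINISH":
            return arg
        observation = tools[action](arg)   # run the chosen tool
        history.append((action, arg, observation))
    return "max iterations reached"


# Stubbed agent: look something up once, then finish with what it found.
def fake_agent(history):
    if not history:
        return ("search", "LangChain")
    return ("FINISH", f"Found: {history[-1][2]}")


tools = {"search": lambda q: f"results for {q}"}
print(run_agent(fake_agent, tools))  # Found: results for LangChain
```

Different stopping conditions (token budgets, wall-clock limits, repeated actions) all slot into `should_stop` without touching the rest of the loop.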
6ae9a6aac5f8-0 | .md
.pdf
Extraction
Extraction#
Conceptual Guide
Most APIs and databases still deal with structured information.
Therefore, in order to better work with those, it can be useful to extract structured information from text.
Examples of this include:
Extracting a structured row to insert into a database from a sentence
Ex... | https://python.langchain.com/en/latest/use_cases/extraction.html |
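A toy illustration of the first example, extracting a structured row from a sentence. This uses a regex rather than the LLM-backed extraction chain, purely to show the input/output shape:

```python
import re


def extract_person(sentence: str) -> dict:
    """Pull a (name, age) row out of sentences like 'Alice is 30 years old.'"""
    match = re.search(r"(\w+) is (\d+) years old", sentence)
    if not match:
        raise ValueError("no structured row found")
    return {"name": match.group(1), "age": int(match.group(2))}


row = extract_person("Alice is 30 years old.")
print(row)  # {'name': 'Alice', 'age': 30}
```

An LLM-based chain does the same job for sentences a regex cannot anticipate, returning rows ready to insert into a database.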
4d8c6a280beb-0 | .md
.pdf
Autonomous Agents
Contents
Baby AGI (Original Repo)
AutoGPT (Original Repo)
MetaPrompt (Original Repo)
Autonomous Agents#
Autonomous Agents are agents designed to be long-running.
You give them one or more long-term goals, and they independently execute towards those goals.
The applications com... | https://python.langchain.com/en/latest/use_cases/autonomous_agents.html |
655d432a7947-0 | .md
.pdf
Summarization
Summarization#
Conceptual Guide
Summarization involves creating a smaller summary of multiple longer documents.
This can be useful for distilling long documents into the core pieces of information.
The recommended way to get started using a summarization chain is:
from langchain.chains.summarize ... | https://python.langchain.com/en/latest/use_cases/summarization.html |
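One common strategy behind such a chain is map-reduce: summarize each document, then summarize the summaries. A minimal sketch with a stubbed summarizer (a real chain would call an LLM at each step):

```python
def map_reduce_summarize(docs, summarize, group_size=2):
    """Summarize each doc, then recursively summarize the summaries."""
    summaries = [summarize(d) for d in docs]           # map step
    while len(summaries) > 1:                          # reduce step
        summaries = [
            summarize(" ".join(summaries[i:i + group_size]))
            for i in range(0, len(summaries), group_size)
        ]
    return summaries[0]


# Stub "summarizer": keep the first two words. A real chain calls an LLM here.
summarize = lambda text: " ".join(text.split()[:2])
docs = ["first document with extra detail", "second document with extra detail"]
print(map_reduce_summarize(docs, summarize))  # first document
```

The grouping keeps each reduce call bounded in size, which is what lets the approach scale past a model's context window.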
4f2ad6e47357-0 | .rst
.pdf
Evaluation
Contents
The Problem
The Solution
The Examples
Other Examples
Evaluation#
Note
Conceptual Guide
This section of documentation covers how we approach and think about evaluation in LangChain.
It covers both evaluation of internal chains/agents and how we would recommend people building on top of LangCh... | https://python.langchain.com/en/latest/use_cases/evaluation.html
4f2ad6e47357-1 | We intend this to be a collection of open source datasets for evaluating common chains and agents.
We have contributed five datasets of our own to start, but we fully intend this to be a community effort.
In order to contribute a dataset, you simply need to join the community and then you will be able to upload datase... | https://python.langchain.com/en/latest/use_cases/evaluation.html |
4f2ad6e47357-2 | SQL Question Answering (Chinook): A notebook showing evaluation of a question-answering task over a SQL database (the Chinook database).
Agent Vectorstore: A notebook showing evaluation of an agent doing question answering while routing between two different vector databases.
Agent Search + Calculator: A notebook showi... | https://python.langchain.com/en/latest/use_cases/evaluation.html |
5f972fb5be22-0 | .md
.pdf
Question Answering over Docs
Contents
Document Question Answering
Adding in sources
Additional Related Resources
End-to-end examples
Question Answering over Docs#
Conceptual Guide
Question answering in this context refers to question answering over your document data.
For question answering over other types ... | https://python.langchain.com/en/latest/use_cases/question_answering.html |
5f972fb5be22-1 | The LLM response will contain the answer to your question, based on the content of the documents.
The recommended way to get started using a question answering chain is:
from langchain.chains.question_answering import load_qa_chain
chain = load_qa_chain(llm, chain_type="stuff")
chain.run(input_documents=docs, question=... | https://python.langchain.com/en/latest/use_cases/question_answering.html |
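The "stuff" chain type simply stuffs every document into a single prompt. A minimal sketch with a stubbed LLM, to show what `load_qa_chain` wires up for you:

```python
def stuff_qa(llm, docs: list[str], question: str) -> str:
    """'stuff' strategy: concatenate all documents into one prompt."""
    context = "\n\n".join(docs)
    prompt = (
        "Use the following context to answer the question.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )
    return llm(prompt)


# Stub LLM that just reports the prompt size; a real one returns the answer.
fake_llm = lambda prompt: f"(stub answer from a {len(prompt)}-char prompt)"
docs = ["Doc one text.", "Doc two text."]
print(stuff_qa(fake_llm, docs, "What does doc one say?"))
```

Because everything goes into one prompt, "stuff" is the simplest chain type but only works when the combined documents fit in the model's context window; the other chain types exist for when they don't.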
5f972fb5be22-2 | Additional Related Resources#
Additional related resources include:
Utilities for working with Documents: Guides on how to use several of the utilities which will prove helpful for this task, including Text Splitters (for splitting up long documents) and Embeddings & Vectorstores (useful for the above Vector DB example... | https://python.langchain.com/en/latest/use_cases/question_answering.html |
b9803166eaf9-0 | .md
.pdf
Code Understanding
Contents
Conversational Retriever Chain
Code Understanding#
Overview
LangChain is a useful tool designed to parse GitHub code repositories. By leveraging VectorStores, Conversational RetrieverChain, and GPT-4, it can answer questions in the context of an entire GitHub repository or generat... | https://python.langchain.com/en/latest/use_cases/code.html |
b9803166eaf9-1 | The full tutorial is available below.
Twitter the-algorithm codebase analysis with Deep Lake: A notebook walking through how to parse GitHub source code and run conversational queries over it.
LangChain codebase analysis with Deep Lake: A notebook walking through how to analyze and do question answering over THIS code base.
prev... | https://python.langchain.com/en/latest/use_cases/code.html |
390ed1584c6d-0 | .md
.pdf
Chatbots
Chatbots#
Conceptual Guide
Since language models are good at producing text, they are ideal for creating chatbots.
Aside from the base prompts/LLMs, an important concept to know for Chatbots is memory.
Most chat based applications rely on remembering what happened in previous interactions, whic... | https://python.langchain.com/en/latest/use_cases/chatbots.html |
f03b9a043d36-0 | .md
.pdf
Querying Tabular Data
Contents
Document Loading
Querying
Chains
Agents
Querying Tabular Data#
Conceptual Guide
Lots of data and information is stored in tabular form, whether in CSVs, Excel sheets, or SQL tables.
This page covers all resources available in LangChain for working with data in this format.
D... | https://python.langchain.com/en/latest/use_cases/tabular.html |
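For the SQL case, a querying chain reduces to: have the model write a SQL query from the question, execute it, and hand the result back. The execute step, sketched with Python's built-in sqlite3 (the query is hard-coded here; in a real chain the LLM generates it):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, salary INTEGER)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?)",
    [("Alice", 85000), ("Bob", 72000)],
)

# In a SQL chain, the LLM would produce this query from a natural-language
# question such as "Who earns the most?"
query = "SELECT name FROM employees ORDER BY salary DESC LIMIT 1"
top_earner = conn.execute(query).fetchone()[0]
print(top_earner)  # Alice
```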
59829c596002-0 | .md
.pdf
Agent Simulations
Contents
Simulations with One Agent
Simulations with Two Agents
Simulations with Multiple Agents
Agent Simulations#
Agent simulations involve one or more agents interacting with each other.
Agent simulations generally involve two main components:
Long Term Memory
Simulation Environment
Spec... | https://python.langchain.com/en/latest/use_cases/agent_simulations.html |
59829c596002-1 | Simulated Environment: PettingZoo: an example of how to create a agent-environment interaction loop for multiple agents with PettingZoo (a multi-agent version of Gymnasium).
Generative Agents: This notebook implements a generative agent based on the paper Generative Agents: Interactive Simulacra of Human Behavior by Pa... | https://python.langchain.com/en/latest/use_cases/agent_simulations.html |
ae89d8e34aab-0 | .md
.pdf
Interacting with APIs
Contents
Chains
Agents
Interacting with APIs#
Conceptual Guide
Lots of data and information is stored behind APIs.
This page covers all resources available in LangChain for working with APIs.
Chains#
If you are just getting started, and you have relatively simple apis, you should get st... | https://python.langchain.com/en/latest/use_cases/apis.html |
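The pattern behind a simple API chain can be sketched with stubs (no network call; the endpoint and response below are hypothetical): the model's job is to turn a question into a request against a documented endpoint, and the chain executes it.

```python
def api_chain(question, pick_request, execute):
    """pick_request: question -> (url, params); execute performs the call."""
    url, params = pick_request(question)
    return execute(url, params)


# Stub for the LLM step: map a question onto a hypothetical endpoint.
def pick_request(question):
    return ("https://api.example.com/weather", {"city": "Paris"})


# Stub for the HTTP step; a real chain would use requests/urllib here.
def execute(url, params):
    return {"url": url, "params": params, "result": "18°C"}


print(api_chain("What's the weather in Paris?", pick_request, execute)["result"])
```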
413264baa068-0 | .ipynb
.pdf
SalesGPT - Your Context-Aware AI Sales Assistant
Contents
SalesGPT - Your Context-Aware AI Sales Assistant
Import Libraries and Set Up Your Environment
SalesGPT architecture
Architecture diagram
Sales conversation stages.
Set up the SalesGPT Controller with the Sales Agent and Stage Analyzer
Set up the AI... | https://python.langchain.com/en/latest/use_cases/agents/sales_agent_with_context.html |
413264baa068-1 | Here is the schematic of the architecture:
Architecture diagram#
Sales conversation stages.#
The agent employs an assistant that keeps track of which stage of the conversation it is in. These stages were generated by ChatGPT and can easily be modified to fit other use cases or modes of conversation.
Introduction... | https://python.langchain.com/en/latest/use_cases/agents/sales_agent_with_context.html |