| id | text | source |
|---|---|---|
a9eb6027c785-0 | Aleph Alpha#
The Luminous series is a family of large language models.
This example goes over how to use LangChain to interact with Aleph Alpha models.
# Install the package
!pip install aleph-alpha-client
# create a new token: https://docs.aleph-alpha.com/docs/account/#create-a-new-token
from ge... | https://python.langchain.com/en/latest/modules/models/llms/integrations/aleph_alpha.html |
4a68ad35537b-0 | ForefrontAI
Contents
Imports
Set the Environment API Key
Create the ForefrontAI instance
Create a Prompt Template
Initiate the LLMChain
Run the LLMChain
ForefrontAI#
The Forefront platform gives you the ability to fine-tune and use open source large language models.
This notebook goes over how to use Lang... | https://python.langchain.com/en/latest/modules/models/llms/integrations/forefrontai_example.html |
4a68ad35537b-1 |
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023. | https://python.langchain.com/en/latest/modules/models/llms/integrations/forefrontai_example.html |
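The steps listed in the Contents above (imports, API key, LLM instance, prompt template, LLMChain, run) follow the same shape for every integration on this page. Below is a minimal offline sketch of that shape; the `FakeLLM` class and its echoed completion are invented stand-ins for the real ForefrontAI client, so this runs without any API key:

```python
# Stand-in for an LLM integration class; real integrations
# (ForefrontAI, GooseAI, DeepInfra, ...) make an API call here.
class FakeLLM:
    def __call__(self, prompt: str) -> str:
        return f"[fake completion for: {prompt!r}]"

# Minimal prompt template: named variables filled into a string.
class PromptTemplate:
    def __init__(self, template: str, input_variables: list):
        self.template = template
        self.input_variables = input_variables

    def format(self, **kwargs) -> str:
        return self.template.format(**kwargs)

# Minimal chain: format the prompt, then call the LLM.
class LLMChain:
    def __init__(self, prompt: PromptTemplate, llm):
        self.prompt = prompt
        self.llm = llm

    def run(self, value: str) -> str:
        var = self.prompt.input_variables[0]
        return self.llm(self.prompt.format(**{var: value}))

template = "Question: {question}\n\nAnswer: Let's think step by step."
prompt = PromptTemplate(template=template, input_variables=["question"])
chain = LLMChain(prompt=prompt, llm=FakeLLM())
```

Swapping `FakeLLM` for a real integration class is the only change the per-provider notebooks make; the template and chain wiring stay the same.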
efd3efc42274-0 | Azure OpenAI
Contents
API configuration
Deployments
Azure OpenAI#
This notebook goes over how to use LangChain with Azure OpenAI.
The Azure OpenAI API is compatible with OpenAI’s API. The openai Python package makes it easy to use both OpenAI and Azure OpenAI. You can call Azure OpenAI the same way you ... | https://python.langchain.com/en/latest/modules/models/llms/integrations/azure_openai_example.html |
efd3efc42274-1 | import openai
response = openai.Completion.create(
engine="text-davinci-002-prod",
prompt="This is a test",
max_tokens=5
)
!pip install openai
import os
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_VERSION"] = "2022-12-01"
os.environ["OPENAI_API_BASE"] = "..."
os.environ["OPENAI_API_KEY"] ... | https://python.langchain.com/en/latest/modules/models/llms/integrations/azure_openai_example.html |
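The environment variables above are how the `openai` package is pointed at an Azure deployment rather than the public OpenAI endpoint. A small sketch of reading that configuration back and assembling the request parameters; the deployment name and key values are placeholders, and no request is actually sent:

```python
import os

# Placeholder Azure configuration; in practice these come from your
# Azure OpenAI resource, not from this example.
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_VERSION"] = "2022-12-01"
os.environ["OPENAI_API_BASE"] = "https://example-resource.openai.azure.com"
os.environ["OPENAI_API_KEY"] = "placeholder-key"

def build_completion_params(deployment: str, prompt: str, max_tokens: int = 5) -> dict:
    """Assemble the keyword arguments an Azure-style completion call expects.

    With the azure API type, `engine` names the *deployment*, not the model.
    """
    assert os.environ["OPENAI_API_TYPE"] == "azure"
    return {
        "engine": deployment,  # Azure deployment name
        "prompt": prompt,
        "max_tokens": max_tokens,
        "api_version": os.environ["OPENAI_API_VERSION"],
        "api_base": os.environ["OPENAI_API_BASE"],
    }

params = build_completion_params("text-davinci-002-prod", "This is a test")
```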
8a93e7d5ec24-0 | Manifest
Contents
Compare HF Models
Manifest#
This notebook goes over how to use Manifest and LangChain.
For more detailed information on Manifest, and how to use it with local Hugging Face models as in this example, see https://github.com/HazyResearch/manifest
Another example of using Manifest with Langc... | https://python.langchain.com/en/latest/modules/models/llms/integrations/manifest.html |
8a93e7d5ec24-1 | state_of_the_union = f.read()
mp_chain.run(state_of_the_union)
'President Obama delivered his annual State of the Union address on Tuesday night, laying out his priorities for the coming year. Obama said the government will provide free flu vaccines to all Americans, ending the government shutdown and allowing business... | https://python.langchain.com/en/latest/modules/models/llms/integrations/manifest.html |
8a93e7d5ec24-2 | )
manifest3 = ManifestWrapper(
client=Manifest(
client_name="huggingface",
client_connection="http://127.0.0.1:5002"
),
llm_kwargs={"temperature": 0.01}
)
llms = [manifest1, manifest2, manifest3]
model_lab = ModelLaboratory(llms)
model_lab.compare("What color is a flamingo?")
Input:
What col... | https://python.langchain.com/en/latest/modules/models/llms/integrations/manifest.html |
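`ModelLaboratory.compare` simply runs the same prompt through each configured LLM and reports the outputs side by side. That idea can be sketched without Manifest at all; the two stand-in models below are invented for illustration:

```python
# Each "model" is just a callable: prompt -> completion.
def terse_model(prompt: str) -> str:
    return "pink"

def verbose_model(prompt: str) -> str:
    return "A flamingo is typically pink or reddish-pink."

class ModelLab:
    def __init__(self, llms):
        self.llms = llms

    def compare(self, prompt: str) -> dict:
        # Run the identical prompt through every model and collect results.
        return {llm.__name__: llm(prompt) for llm in self.llms}

results = ModelLab([terse_model, verbose_model]).compare("What color is a flamingo?")
```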
f0fb96586572-0 | MosaicML#
MosaicML offers a managed inference service. You can either use a variety of open source models, or deploy your own.
This example goes over how to use LangChain to interact with MosaicML Inference for text completion.
# sign up for an account: https://forms.mosaicml.com/demo?utm_source=la... | https://python.langchain.com/en/latest/modules/models/llms/integrations/mosaicml.html |
4cfd72c65573-0 | Replicate
Contents
Setup
Calling a model
Chaining Calls
Replicate#
Replicate runs machine learning models in the cloud. We have a library of open-source models that you can run with a few lines of code. If you’re building your own machine learning models, Replicate makes it easy to deploy them at scale.
T... | https://python.langchain.com/en/latest/modules/models/llms/integrations/replicate.html |
4cfd72c65573-1 | Note that only the first output of a model will be returned.
llm = Replicate(model="replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5")
prompt = """
Answer the following yes/no question by reasoning step by step.
Can a dog drive a car?
"""
llm(prompt)
'The legal driving age of dog... | https://python.langchain.com/en/latest/modules/models/llms/integrations/replicate.html |
4cfd72c65573-2 | from langchain.chains import SimpleSequentialChain
First, let’s define the LLM for this model as Dolly, and text2image as a Stable Diffusion model.
dolly_llm = Replicate(model="replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5")
text2image = Replicate(model="stability-ai/stable-... | https://python.langchain.com/en/latest/modules/models/llms/integrations/replicate.html |
4cfd72c65573-3 | catchphrase = overall_chain.run("colorful socks")
print(catchphrase)
> Entering new SimpleSequentialChain chain...
novelty socks
todd & co.
https://replicate.delivery/pbxt/BedAP1PPBwXFfkmeD7xDygXO4BcvApp1uvWOwUdHM4tcQfvCB/out-0.png
> Finished chain.
https://replicate.delivery/pbxt/BedAP1PPBwXFfkmeD7xDygXO4BcvApp1uvWOwU... | https://python.langchain.com/en/latest/modules/models/llms/integrations/replicate.html |
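In a `SimpleSequentialChain`, the output of each step becomes the input of the next, which is how the company-name step above feeds the logo-generation step. A minimal pure-Python sketch of that piping; the two step functions are invented stand-ins for the Dolly and Stable Diffusion calls:

```python
# Stand-ins for the two chain steps: name a company, then "draw" a logo.
def name_company(product: str) -> str:
    return f"{product} co."

def make_logo(company: str) -> str:
    return f"<image: logo for {company}>"

class SimpleSequentialChain:
    """Feed each step's output into the next step."""

    def __init__(self, steps):
        self.steps = steps

    def run(self, value: str) -> str:
        for step in self.steps:
            value = step(value)
        return value

overall = SimpleSequentialChain([name_company, make_logo])
```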
73012bed4454-0 | Beam integration for langchain#
Calls the Beam API wrapper to deploy and make subsequent calls to an instance of the gpt2 LLM in a cloud deployment. Requires installation of the Beam library and registration of Beam Client ID and Client Secret. By calling the wrapper an instan... | https://python.langchain.com/en/latest/modules/models/llms/integrations/beam.html |
73012bed4454-1 | "torch",
"pillow",
"accelerate",
"safetensors",
"xformers",],
max_length="50",
verbose=False)
llm._deploy()
response = llm._call("Running machine learning on a remote GPU")
print(response)
| https://python.langchain.com/en/latest/modules/models/llms/integrations/beam.html |
2f2bc3d6e37a-0 | Hugging Face Local Pipelines
Contents
Load the model
Integrate the model in an LLMChain
Hugging Face Local Pipelines#
Hugging Face models can be run locally through the HuggingFacePipeline class.
The Hugging Face Model Hub hosts over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source a... | https://python.langchain.com/en/latest/modules/models/llms/integrations/huggingface_pipelines.html |
2f2bc3d6e37a-1 | question = "What is electroencephalography?"
print(llm_chain.run(question))
/Users/wfh/code/lc/lckg/.venv/lib/python3.11/site-packages/transformers/generation/utils.py:1288: UserWarning: Using `max_length`'s default (64) to control the generation length. This behaviour is deprecated and will be removed from the config ... | https://python.langchain.com/en/latest/modules/models/llms/integrations/huggingface_pipelines.html |
dc5596f5e11a-0 | GooseAI
Contents
Install openai
Imports
Set the Environment API Key
Create the GooseAI instance
Create a Prompt Template
Initiate the LLMChain
Run the LLMChain
GooseAI#
GooseAI is a fully managed NLP-as-a-Service, delivered via API, providing access to a range of open-source GPT models.
This notebook goes over how to u... | https://python.langchain.com/en/latest/modules/models/llms/integrations/gooseai_example.html |
cfa7d3e700a2-0 | Banana#
Banana is focused on building machine learning infrastructure.
This example goes over how to use LangChain to interact with Banana models.
# Install the package https://docs.banana.dev/banana-docs/core-concepts/sdks/python
!pip install banana-dev
# get new tokens: https://app.banana.dev/
... | https://python.langchain.com/en/latest/modules/models/llms/integrations/banana.html |
078cd197b3b9-0 | Structured Decoding with JSONFormer
Contents
HuggingFace Baseline
JSONFormer LLM Wrapper
Structured Decoding with JSONFormer#
JSONFormer is a library that wraps local HuggingFace pipeline models for structured decoding of a subset of the JSON Schema.
It works by filling in the structure tokens and then sa... | https://python.langchain.com/en/latest/modules/models/llms/integrations/jsonformer_experimental.html |
078cd197b3b9-1 | {arg_schema}
EXAMPLES
----
Human: "So what's all this about a GIL?"
AI Assistant:{{
"action": "ask_star_coder",
"action_input": {{"query": "What is a GIL?", "temperature": 0.0, "max_new_tokens": 100}}"
}}
Observation: "The GIL is python's Global Interpreter Lock"
Human: "Could you please write a calculator program ... | https://python.langchain.com/en/latest/modules/models/llms/integrations/jsonformer_experimental.html |
078cd197b3b9-2 | original_model = HuggingFacePipeline(pipeline=hf_model)
generated = original_model.predict(prompt, stop=["Observation:", "Human:"])
print(generated)
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
'What's the difference between an iterator and an iterable?'
That’s not so impressive, is it? It d... | https://python.langchain.com/en/latest/modules/models/llms/integrations/jsonformer_experimental.html |
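The core JSONFormer trick described above is that the fixed structure tokens of the JSON schema are written out verbatim and the model only fills in the value slots, so the output always parses. A toy version of that idea; the `toy_model` "LLM" and its canned values are invented for illustration:

```python
import json

def toy_model(field: str) -> str:
    # Stand-in for an LLM constrained to emit a single value per slot.
    return {"action": "ask_star_coder", "query": "What is a GIL?"}.get(field, "unknown")

def structured_decode(schema: dict) -> str:
    """Emit JSON where structure is fixed and only values are 'generated'."""
    filled = {}
    for field in schema["properties"]:
        filled[field] = toy_model(field)
    # Structure tokens ({, }, quotes, commas) come from json.dumps,
    # not from the model, so the result is always valid JSON.
    return json.dumps(filled)

schema = {"type": "object", "properties": {"action": "string", "query": "string"}}
out = structured_decode(schema)
```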
2aeceeef82b8-0 | Huggingface TextGen Inference#
Text Generation Inference is a Rust, Python, and gRPC server for text generation inference, used in production at Hugging Face to power the api-inference widgets for LLMs.
This notebook goes over how to use a self-hosted LLM using Text Generation Inference... | https://python.langchain.com/en/latest/modules/models/llms/integrations/huggingface_textgen_inference.html |
c26e85575b74-0 | Runhouse#
Runhouse allows remote compute and data across environments and users. See the Runhouse docs.
This example goes over how to use LangChain and Runhouse to interact with models hosted on your own GPU, or on-demand GPUs on AWS, GCP, or Lambda.
Note: Code uses SelfHosted name instead... | https://python.langchain.com/en/latest/modules/models/llms/integrations/runhouse.html |
c26e85575b74-1 | llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)
INFO | 2023-02-17 05:42:23,537 | Running _generate_text via gRPC
INFO | 2023-02-17 05:42:24,016 | Time to send message: 0.48 seconds
"\n\nLet's say we're talking sports ... | https://python.langchain.com/en/latest/modules/models/llms/integrations/runhouse.html |
c26e85575b74-2 | )
return pipe
def inference_fn(pipeline, prompt, stop = None):
return pipeline(prompt)[0]["generated_text"][len(prompt):]
llm = SelfHostedHuggingFaceLLM(model_load_fn=load_pipeline, hardware=gpu, inference_fn=inference_fn)
llm("Who is the current US president?")
INFO | 2023-02-17 05:42:59,219 | Running _generat... | https://python.langchain.com/en/latest/modules/models/llms/integrations/runhouse.html |
2586814794ca-0 | Google Cloud Platform Vertex AI PaLM#
Note: This is separate from the Google PaLM integration. Google has chosen to offer an enterprise version of PaLM through GCP, and this supports the models made available through there.
PaLM API on Vertex AI is a Preview offering, su... | https://python.langchain.com/en/latest/modules/models/llms/integrations/google_vertex_ai_palm.html |
2586814794ca-1 | prompt = PromptTemplate(template=template, input_variables=["question"])
llm = VertexAI()
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)
'Justin Bieber was born on March 1, 1994. The Super Bowl in 1994 was won by the... | https://python.langchain.com/en/latest/modules/models/llms/integrations/google_vertex_ai_palm.html |
8b10b74a0c6f-0 | OpenAI
Contents
OpenAI
if you are behind an explicit proxy, you can use the OPENAI_PROXY environment variable to pass through
OpenAI#
OpenAI offers a spectrum of models with different levels of power suitable for different tasks.
This example goes over how to use LangChain to interact with OpenAI models.
#... | https://python.langchain.com/en/latest/modules/models/llms/integrations/openai.html |
9908098664b6-0 | Anyscale#
Anyscale is a fully managed Ray platform, on which you can build, deploy, and manage scalable AI and Python applications.
This example goes over how to use LangChain to interact with the Anyscale service.
import os
os.environ["ANYSCALE_SERVICE_URL"] = ANYSCALE_SERVICE_URL
os.environ["ANYSCALE_S... | https://python.langchain.com/en/latest/modules/models/llms/integrations/anyscale.html |
9908098664b6-1 | def send_query(llm, prompt):
resp = llm(prompt)
return resp
futures = [send_query.remote(llm, prompt) for prompt in prompt_list]
results = ray.get(futures)
| https://python.langchain.com/en/latest/modules/models/llms/integrations/anyscale.html |
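The Ray snippet above fans the same chain out across a list of prompts in parallel. The same fan-out can be sketched with the standard library's thread pool in place of `@ray.remote`; the stand-in `llm` below is invented, but real calls are network-bound, which is exactly when this pattern pays off:

```python
from concurrent.futures import ThreadPoolExecutor

def llm(prompt: str) -> str:
    # Stand-in for a network-bound LLM call.
    return f"completion for {prompt!r}"

def send_query(llm, prompt):
    resp = llm(prompt)
    return resp

prompt_list = ["Why is the sky blue?", "What is 2 + 2?", "Name a flamingo color."]

# Submit one task per prompt and gather results in input order.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(lambda p: send_query(llm, p), prompt_list))
```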
5ec4e390cabb-0 | Hugging Face Hub
Contents
Examples
StableLM, by Stability AI
Dolly, by DataBricks
Camel, by Writer
Hugging Face Hub#
The Hugging Face Hub is a platform with over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source and publicly available, in an online platform where people can easily col... | https://python.langchain.com/en/latest/modules/models/llms/integrations/huggingface_hub.html |
5ec4e390cabb-1 | StableLM, by Stability AI#
See Stability AI’s organization page for a list of available models.
repo_id = "stabilityai/stablelm-tuned-alpha-3b"
# Others include stabilityai/stablelm-base-alpha-3b
# as well as 7B parameter versions
llm = HuggingFaceHub(repo_id=repo_id, model_kwargs={"temperature":0, "max_length":64})
# ... | https://python.langchain.com/en/latest/modules/models/llms/integrations/huggingface_hub.html |
5aea72ef55b7-0 | StochasticAI#
Stochastic Acceleration Platform aims to simplify the life cycle of a deep learning model: from uploading and versioning the model, through training, compression, and acceleration, to putting it into production.
This example goes over how to use LangChain to interact with Stochastic... | https://python.langchain.com/en/latest/modules/models/llms/integrations/stochasticai.html |
507cc4fc11b0-0 | Databricks
Contents
Wrapping a serving endpoint
Wrapping a cluster driver proxy app
Databricks#
The Databricks Lakehouse Platform unifies data, analytics, and AI on one platform.
This example notebook shows how to wrap Databricks endpoints as LLMs in LangChain.
It supports two endpoint types:
Serving endp... | https://python.langchain.com/en/latest/modules/models/llms/integrations/databricks.html |
507cc4fc11b0-1 | # See https://docs.databricks.com/dev-tools/auth.html#databricks-personal-access-tokens
# We strongly recommend not exposing the API token explicitly inside a notebook.
# You can use Databricks secret manager to store your API token securely.
# See https://docs.databricks.com/dev-tools/databricks-utils.html#secrets-uti... | https://python.langchain.com/en/latest/modules/models/llms/integrations/databricks.html |
507cc4fc11b0-2 | It uses a port number between [3000, 8000] and litens to the driver IP address or simply 0.0.0.0 instead of localhost only.
You have “Can Attach To” permission to the cluster.
The expected server schema (using JSON schema) is:
inputs:
{"type": "object",
"properties": {
"prompt": {"type": "string"},
"stop": {"... | https://python.langchain.com/en/latest/modules/models/llms/integrations/databricks.html |
507cc4fc11b0-3 | self.matched = self.stop[i]
return True
return False
def llm(prompt, stop=None, **kwargs):
check_stop = CheckStop(stop)
result = dolly(prompt, stopping_criteria=[check_stop], **kwargs)
return result[0]["generated_text"].rstrip(check_stop.matched)
app = Flask("dolly")
@app.route('/', method... | https://python.langchain.com/en/latest/modules/models/llms/integrations/databricks.html |
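The `CheckStop` criterion above halts generation when a stop string appears, and the wrapper then strips it from the output. The trimming half of that logic can be written as a small standalone helper:

```python
def trim_at_stop(text: str, stop=None) -> str:
    """Cut generated text at the first occurrence of any stop sequence."""
    if not stop:
        return text
    # Find the earliest stop sequence and drop it plus everything after.
    cut = len(text)
    for s in stop:
        i = text.find(s)
        if i != -1:
            cut = min(cut, i)
    return text[:cut].rstrip()

generated = "The answer is 4.\nHuman: next question"
trimmed = trim_at_stop(generated, stop=["Human:", "Observation:"])
```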
507cc4fc11b0-4 | # Use `transform_input_fn` and `transform_output_fn` if the app
# expects a different input schema and does not return a JSON string,
# respectively, or you want to apply a prompt template on top.
def transform_input(**request):
full_prompt = f"""{request["prompt"]}
Be Concise.
"""
request["prompt"] = f... | https://python.langchain.com/en/latest/modules/models/llms/integrations/databricks.html |
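`transform_input_fn` and `transform_output_fn` adapt between the schema LangChain sends and whatever the served app actually speaks. A runnable sketch of such a pair, wrapping the prompt with an instruction and unwrapping a JSON response; the `generated_text` field name is an assumption for illustration:

```python
import json

def transform_input(**request) -> dict:
    # Append an instruction to the user prompt before sending it on.
    request["prompt"] = f"{request['prompt']}\nBe Concise.\n"
    return request

def transform_output(raw: str) -> str:
    # Suppose the endpoint returns {"generated_text": "..."} as a JSON string.
    return json.loads(raw)["generated_text"].strip()

req = transform_input(prompt="How are you?", stop=None)
resp = transform_output('{"generated_text": " I am fine.\\n"}')
```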
b63f87ac3173-0 | Writer#
Writer is a platform for generating different kinds of language content.
This example goes over how to use LangChain to interact with Writer models.
You have to get the WRITER_API_KEY here.
from getpass import getpass
WRITER_API_KEY = getpass()
import os
os.environ["WRITER_API_KEY"] = WRITER_API_KEY
from... | https://python.langchain.com/en/latest/modules/models/llms/integrations/writer.html |
916736d489a4-0 | PredictionGuard
Contents
Basic LLM usage
Chaining
PredictionGuard#
How to use PredictionGuard wrapper
! pip install predictionguard langchain
import predictionguard as pg
from langchain.llms import PredictionGuard
Basic LLM usage#
pgllm = PredictionGuard(name="default-text-gen", token="<your access token>... | https://python.langchain.com/en/latest/modules/models/llms/integrations/predictionguard.html |
d7910b7fcb8d-0 | C Transformers#
The C Transformers library provides Python bindings for GGML models.
This example goes over how to use LangChain to interact with C Transformers models.
Install
%pip install ctransformers
Load Model
from langchain.llms import CTransformers
llm = CTransformers(model='marella/gp... | https://python.langchain.com/en/latest/modules/models/llms/integrations/ctransformers.html |
5cffedc84ed5-0 | Structured Decoding with RELLM
Contents
Hugging Face Baseline
RELLM LLM Wrapper
Structured Decoding with RELLM#
RELLM is a library that wraps local Hugging Face pipeline models for structured decoding.
It works by generating tokens one at a time. At each step, it masks tokens that don’t conform to the pro... | https://python.langchain.com/en/latest/modules/models/llms/integrations/rellm_experimental.html |
5cffedc84ed5-1 | Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
generations=[[Generation(text=' "What\'s the capital of Maryland?"\n', generation_info=None)]] llm_output=None
That’s not so impressive, is it? It didn’t answer the question and it didn’t follow the JSON format at all! Let’s try with the structured... | https://python.langchain.com/en/latest/modules/models/llms/integrations/rellm_experimental.html |
bece3f2c0ae9-0 | DeepInfra
Contents
Imports
Set the Environment API Key
Create the DeepInfra instance
Create a Prompt Template
Initiate the LLMChain
Run the LLMChain
DeepInfra#
DeepInfra provides several LLMs.
This notebook goes over how to use LangChain with DeepInfra.
Imports#
import os
from langchain.llms import DeepIn... | https://python.langchain.com/en/latest/modules/models/llms/integrations/deepinfra_example.html |
bece3f2c0ae9-1 | llm_chain.run(question)
| https://python.langchain.com/en/latest/modules/models/llms/integrations/deepinfra_example.html |
7ed61496929a-0 | GPT4All
Contents
Specify Model
GPT4All#
GitHub: nomic-ai/gpt4all, an ecosystem of open-source chatbots trained on a massive collection of clean assistant data including code, stories and dialogue.
This example goes over how to use LangChain to interact with GPT4All models.
%pip install gpt4all > /dev/null
... | https://python.langchain.com/en/latest/modules/models/llms/integrations/gpt4all.html |
7ed61496929a-1 | # # send a GET request to the URL to download the file. Stream since it's large
# response = requests.get(url, stream=True)
# # open the file in binary mode and write the contents of the response to it in chunks
# # This is a large file, so be prepared to wait.
# with open(local_path, 'wb') as f:
# for chunk in tqd... | https://python.langchain.com/en/latest/modules/models/llms/integrations/gpt4all.html |
76dd217583b1-0 | SageMakerEndpoint
Contents
Set up
Example
SageMakerEndpoint#
Amazon SageMaker is a system that can build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows.
This notebook goes over how to use an LLM hosted on a SageMaker endpoint.
!pip... | https://python.langchain.com/en/latest/modules/models/llms/integrations/sagemaker.html |
76dd217583b1-1 | import json
query = """How long was Elizabeth hospitalized?
"""
prompt_template = """Use the following pieces of context to answer the question at the end.
{context}
Question: {question}
Answer:"""
PROMPT = PromptTemplate(
template=prompt_template, input_variables=["context", "question"]
)
class ContentHandler(LLMC... | https://python.langchain.com/en/latest/modules/models/llms/integrations/sagemaker.html |
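A SageMaker `ContentHandler` serializes the prompt into the bytes the endpoint expects and parses the bytes it returns. A standalone sketch of that pair, assuming a JSON-in/JSON-out endpoint; the `text_inputs`/`generated_texts` field names follow a common Hugging Face container convention but are an assumption here:

```python
import json

class ContentHandler:
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs: dict) -> bytes:
        # Serialize the request body the endpoint container expects.
        body = {"text_inputs": prompt, **model_kwargs}
        return json.dumps(body).encode("utf-8")

    def transform_output(self, output: bytes) -> str:
        # Parse the endpoint's JSON response back into a plain string.
        response = json.loads(output.decode("utf-8"))
        return response["generated_texts"][0]

handler = ContentHandler()
payload = handler.transform_input("How long was Elizabeth hospitalized?", {"temperature": 0.0})
answer = handler.transform_output(b'{"generated_texts": ["About two weeks."]}')
```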
74da2c352e43-0 | AI21#
AI21 Studio provides API access to Jurassic-2 large language models.
This example goes over how to use LangChain to interact with AI21 models.
# install the package:
!pip install ai21
# get AI21_API_KEY. Use https://studio.ai21.com/account/account
from getpass import getpass
AI21_API_KEY = getpa... | https://python.langchain.com/en/latest/modules/models/llms/integrations/ai21.html |
206e2f7c1404-0 | PromptLayer OpenAI
Contents
Install PromptLayer
Imports
Set the Environment API Key
Use the PromptLayerOpenAI LLM like normal
Using PromptLayer Track
PromptLayer OpenAI#
PromptLayer is the first platform that allows you to track, manage, and share your GPT prompt engineering. PromptLayer acts as a middleware... | https://python.langchain.com/en/latest/modules/models/llms/integrations/promptlayer_openai.html |
206e2f7c1404-1 | The above request should now appear on your PromptLayer dashboard.
Using PromptLayer Track#
If you would like to use any of the PromptLayer tracking features, you need to pass the argument return_pl_id when instantiating the PromptLayer LLM to get the request id.
llm = PromptLayerOpenAI(return_pl_id=True)
llm_results... | https://python.langchain.com/en/latest/modules/models/llms/integrations/promptlayer_openai.html |
460f77423030-0 | NLP Cloud#
NLP Cloud serves high-performance pre-trained or custom models for NER, sentiment analysis, classification, summarization, paraphrasing, grammar and spelling correction, keyword and keyphrase extraction, chatbots, product description and ad generation, intent classification, text g... | https://python.langchain.com/en/latest/modules/models/llms/integrations/nlpcloud.html |
d1c4c1c82d87-0 | How to cache LLM calls
Contents
In Memory Cache
SQLite Cache
Redis Cache
Standard Cache
Semantic Cache
GPTCache
Momento Cache
SQLAlchemy Cache
Custom SQLAlchemy Schemas
Optional Caching
Optional Caching in Chains
How to cache LLM calls#
This notebook covers how to cache results of individual LLM calls.
im... | https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html |
d1c4c1c82d87-1 | llm("Tell me a joke")
CPU times: user 17 ms, sys: 9.76 ms, total: 26.7 ms
Wall time: 825 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
%%time
# The second time it is, so it goes faster
llm("Tell me a joke")
CPU times: user 2.46 ms, sys: 1.23 ms, total: 3.7 ms
Wall time: 2.67 ms
'\n\nWhy did ... | https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html |
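The speedup shown above (825 ms down to 2.67 ms) comes from an exact-match cache keyed on the prompt plus the LLM's parameters. A minimal in-memory version of that idea, with a counter to prove the expensive call runs only once; the `CachedLLM` wrapper and its fake completion are illustrative stand-ins:

```python
class InMemoryCache:
    def __init__(self):
        self._store = {}

    def lookup(self, prompt: str, llm_string: str):
        return self._store.get((prompt, llm_string))

    def update(self, prompt: str, llm_string: str, result: str):
        self._store[(prompt, llm_string)] = result

class CachedLLM:
    """Consult the cache before doing the (expensive) real call."""

    def __init__(self, cache):
        self.cache = cache
        self.calls = 0  # how many times the underlying "model" actually ran

    def _expensive_call(self, prompt: str) -> str:
        self.calls += 1
        return f"completion for {prompt!r}"

    def __call__(self, prompt: str) -> str:
        # Cache key includes the LLM's parameters, not just the prompt.
        llm_string = "fake-llm(temperature=0)"
        hit = self.cache.lookup(prompt, llm_string)
        if hit is not None:
            return hit  # second time around: the fast path
        result = self._expensive_call(prompt)
        self.cache.update(prompt, llm_string, result)
        return result

llm = CachedLLM(InMemoryCache())
first = llm("Tell me a joke")
second = llm("Tell me a joke")
```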
d1c4c1c82d87-2 | Semantic Cache#
Use Redis to cache prompts and responses and evaluate hits based on semantic similarity.
from langchain.embeddings import OpenAIEmbeddings
from langchain.cache import RedisSemanticCache
langchain.llm_cache = RedisSemanticCache(
redis_url="redis://localhost:6379",
embedding=OpenAIEmbeddings()
)
%... | https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html |
d1c4c1c82d87-3 | cache_obj.init(
pre_embedding_func=get_prompt,
data_manager=manager_factory(manager="map", data_dir=f"map_cache_{hashed_llm}"),
)
langchain.llm_cache = GPTCache(init_gptcache)
%%time
# The first time, it is not yet in cache, so it should take longer
llm("Tell me a joke")
CPU times: user 21.5 ms, sys... | https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html |
d1c4c1c82d87-4 | Wall time: 8.44 s
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
%%time
# This is an exact match, so it finds it in the cache
llm("Tell me a joke")
CPU times: user 866 ms, sys: 20 ms, total: 886 ms
Wall time: 226 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
%%time
# ... | https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html |
d1c4c1c82d87-5 | Wall time: 1.73 s
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'
%%time
# The second time it is, so it goes faster
# When run in the same region as the cache, latencies are single digit ms
llm("Tell me a joke")
CPU times: user 3.16 ms, sys: 2.98 ms, total: 6.14 ms
Wall time: 57.9 ms
'\n\nWhy did... | https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html |
d1c4c1c82d87-6 | idx = Column(Integer)
response = Column(String)
prompt_tsv = Column(TSVectorType(), Computed("to_tsvector('english', llm || ' ' || prompt)", persisted=True))
__table_args__ = (
Index("idx_fulltext_prompt_tsv", prompt_tsv, postgresql_using="gin"),
)
engine = create_engine("postgresql://postgres:p... | https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html |
d1c4c1c82d87-7 | llm = OpenAI(model_name="text-davinci-002")
no_cache_llm = OpenAI(model_name="text-davinci-002", cache=False)
from langchain.text_splitter import CharacterTextSplitter
from langchain.chains.mapreduce import MapReduceChain
text_splitter = CharacterTextSplitter()
with open('../../../state_of_the_union.txt') as f:
sta... | https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html |
d1c4c1c82d87-8 | %%time
chain.run(docs)
CPU times: user 11.5 ms, sys: 4.33 ms, total: 15.8 ms
Wall time: 1.04 s
'\n\nPresident Biden is discussing the American Rescue Plan and the Bipartisan Infrastructure Law, which will create jobs and help Americans. He also talks about his vision for America, which includes investing in education a... | https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html |
b55327981a3c-0 | How to write a custom LLM wrapper#
This notebook goes over how to create a custom LLM wrapper, in case you want to use your own LLM or a different wrapper than one that is supported in LangChain.
There is only one required thing that a custom LLM needs to implement:
A _call... | https://python.langchain.com/en/latest/modules/models/llms/examples/custom_llm.html |
b55327981a3c-1 | 'This is a '
We can also print the LLM and see its custom print.
print(llm)
CustomLLM
Params: {'n': 10}
| https://python.langchain.com/en/latest/modules/models/llms/examples/custom_llm.html |
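Putting the pieces above together: the custom LLM in this notebook "completes" by returning the first `n` characters of the prompt (hence the `'This is a '` output), and printing it shows its parameters. A plain-Python sketch of such a wrapper, without the LangChain base class:

```python
class CustomLLM:
    """Toy LLM: 'completes' by echoing the first n characters of the prompt."""

    def __init__(self, n: int):
        self.n = n

    def _call(self, prompt: str, stop=None) -> str:
        if stop is not None:
            raise ValueError("stop kwargs are not permitted.")
        return prompt[: self.n]

    def __str__(self):
        # Mirrors the custom print shown above.
        return f"CustomLLM\nParams: {{'n': {self.n}}}"

llm = CustomLLM(n=10)
```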
ba5649a217a8-0 | How to track token usage#
This notebook goes over how to track your token usage for specific calls. It is currently only implemented for the OpenAI API.
Let’s first look at an extremely simple example of tracking token usage for a single LLM call.
from langchain.llms import OpenAI
f... | https://python.langchain.com/en/latest/modules/models/llms/examples/token_usage_tracking.html |
ba5649a217a8-1 | print(f"Total Tokens: {cb.total_tokens}")
print(f"Prompt Tokens: {cb.prompt_tokens}")
print(f"Completion Tokens: {cb.completion_tokens}")
print(f"Total Cost (USD): ${cb.total_cost}")
> Entering new AgentExecutor chain...
I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised t... | https://python.langchain.com/en/latest/modules/models/llms/examples/token_usage_tracking.html |
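The callback above accumulates token counts and cost across every call made inside the `with` block. A sketch of that bookkeeping with a hand-rolled context manager; tokens are approximated here as whitespace-split words, and the per-token price is a made-up number, both labeled assumptions:

```python
from contextlib import contextmanager

class TokenCounter:
    def __init__(self, price_per_token: float):
        self.prompt_tokens = 0
        self.completion_tokens = 0
        self.price = price_per_token

    @property
    def total_tokens(self):
        return self.prompt_tokens + self.completion_tokens

    @property
    def total_cost(self):
        return self.total_tokens * self.price

    def record(self, prompt: str, completion: str):
        # Crude approximation: one token per whitespace-separated word.
        self.prompt_tokens += len(prompt.split())
        self.completion_tokens += len(completion.split())

@contextmanager
def get_fake_callback():
    # Invented price; real trackers look up per-model pricing.
    cb = TokenCounter(price_per_token=0.00002)
    yield cb

with get_fake_callback() as cb:
    cb.record("Tell me a joke", "Why did the chicken cross the road?")
    cb.record("Another one", "To get to the other side.")
```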
b9f33fe2ef80-0 | How to use the async API for LLMs#
LangChain provides async support for LLMs by leveraging the asyncio library.
Async support is particularly useful for calling multiple LLMs concurrently, as these calls are network-bound. Currently, OpenAI, PromptLayerOpenAI, ChatOpenAI an... | https://python.langchain.com/en/latest/modules/models/llms/examples/async_llm.html |
b9f33fe2ef80-1 | I'm doing well, thank you. How about you?
I'm doing well, thank you. How about you?
I'm doing well, how about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about yourself?
I'm doing well, thank you! How about you?
I'm doing well, thank you. How a... | https://python.langchain.com/en/latest/modules/models/llms/examples/async_llm.html |
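The repeated responses above come from several concurrent generate calls. The concurrency itself is plain `asyncio`; a sketch with a stand-in coroutine in place of the real network call, where the sleep simulates latency:

```python
import asyncio

async def fake_llm(prompt: str) -> str:
    # Stand-in for a network-bound LLM call; sleep simulates latency.
    await asyncio.sleep(0.01)
    return "I'm doing well, thank you. How about you?"

async def generate_concurrently(n: int):
    # All n calls are in flight at once, so wall time is roughly
    # one call's latency rather than n times that.
    tasks = [fake_llm("How are you?") for _ in range(n)]
    return await asyncio.gather(*tasks)

results = asyncio.run(generate_concurrently(10))
```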
961bc5ab2774-0 | How (and why) to use the human input LLM#
Similar to the fake LLM, LangChain provides a pseudo LLM class that can be used for testing, debugging, or educational purposes. This allows you to mock out calls to the LLM and simulate how a human would respond if they rece... | https://python.langchain.com/en/latest/modules/models/llms/examples/human_input_llm.html |
961bc5ab2774-1 | Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Begin!
Question: What is 'Bocchi the Rock!'?
Thought:
=====END OF PROMPT===... | https://python.langchain.com/en/latest/modules/models/llms/examples/human_input_llm.html |
961bc5ab2774-2 | Page: Manga Time Kirara Max
Summary: Manga Time Kirara Max (まんがタイムきららMAX) is a Japanese four-panel seinen manga magazine published by Houbunsha. It is the third magazine of the "Kirara" series, after "Manga Time Kirara" and "Manga Time Kirara Carat". The first issue was released on September 29, 2004. Currently the mag... | https://python.langchain.com/en/latest/modules/models/llms/examples/human_input_llm.html |
961bc5ab2774-3 | Observation: Page: Bocchi the Rock!
Summary: Bocchi the Rock! (ぼっち・ざ・ろっく!, Bocchi Za Rokku!) is a Japanese four-panel manga series written and illustrated by Aki Hamaji. It has been serialized in Houbunsha's seinen manga magazine Manga Time Kirara Max since December 2017. Its chapters have been collected in five tankōb... | https://python.langchain.com/en/latest/modules/models/llms/examples/human_input_llm.html |
78d285470f77-0 | .ipynb
.pdf
How to serialize LLM classes
Contents
Loading
Saving
How to serialize LLM classes#
This notebook walks through how to write and read an LLM configuration to and from disk. This is useful if you want to save the configuration for a given LLM (e.g., the provider, the temperature, etc.).
from langchain.llms i... | https://python.langchain.com/en/latest/modules/models/llms/examples/llm_serialization.html |
78d285470f77-1 | llm.save("llm.json")
llm.save("llm.yaml")
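The saved file is just the LLM's configuration fields plus a `_type` tag. Since reloading a real provider class needs that provider's package, here is a provider-agnostic sketch of the round-trip using only the stdlib `json` module; the field names below are assumptions modeled on LangChain's OpenAI serialization format:

```python
# Hypothetical sketch of the save/load round-trip with plain json;
# the config keys (`_type`, `model_name`, `temperature`) are assumed,
# not read from a real saved file.
import json
import os
import tempfile

config = {"_type": "openai", "model_name": "text-davinci-003", "temperature": 0.7}

path = os.path.join(tempfile.mkdtemp(), "llm.json")
with open(path, "w") as f:
    json.dump(config, f, indent=2)

with open(path) as f:
    loaded = json.load(f)

print(loaded == config)  # -> True
```

With LangChain installed, the equivalent load is `from langchain.llms.loading import load_llm; llm = load_llm("llm.json")`, as the notebook's loading section shows.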
previous
How to cache LLM calls
next
How to stream LLM and Chat Model responses
Contents
Loading
Saving
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023. | https://python.langchain.com/en/latest/modules/models/llms/examples/llm_serialization.html |
b745b536ed15-0 | .ipynb
.pdf
How (and why) to use the fake LLM
How (and why) to use the fake LLM#
We expose a fake LLM class that can be used for testing. This allows you to mock out calls to the LLM and simulate what would happen if the LLM responded in a certain way.
In this notebook we go over how to use this.
We start this with usi... | https://python.langchain.com/en/latest/modules/models/llms/examples/fake_llm.html |
ca481a0093a9-0 | .ipynb
.pdf
How to stream LLM and Chat Model responses
How to stream LLM and Chat Model responses#
LangChain provides streaming support for LLMs. Currently, we support streaming for the OpenAI, ChatOpenAI, and ChatAnthropic implementations, but streaming support for other LLM implementations is on the roadmap. To utili... | https://python.langchain.com/en/latest/modules/models/llms/examples/streaming_llm.html |
ca481a0093a9-1 | On a hot summer night.
Chorus
Sparkling water, sparkling water,
It's the best way to stay hydrated,
It's so crisp and so clean,
It's the perfect way to stay refreshed.
We still have access to the end LLMResult if using generate. However, token_usage is not currently supported for streaming.
llm.generate(["Tell me a jok... | https://python.langchain.com/en/latest/modules/models/llms/examples/streaming_llm.html |
ca481a0093a9-2 | Sparkling water, you're my favorite vibe
Bridge:
You're my go-to drink, day or night
You make me feel so light
I'll never give you up, you're my true love
Sparkling water, you're sent from above
Chorus:
Sparkling water, oh how you shine
A taste so clean, it's simply divine
You quench my thirst, you make me feel alive
S... | https://python.langchain.com/en/latest/modules/models/llms/examples/streaming_llm.html |
3a7b31851f46-0 | .ipynb
.pdf
Fake Embeddings
Fake Embeddings#
LangChain also provides a fake embedding class. You can use this to test your pipelines.
from langchain.embeddings import FakeEmbeddings
embeddings = FakeEmbeddings(size=1352)
query_result = embeddings.embed_query("foo")
doc_results = embeddings.embed_documents(["foo"])
prev... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/fake.html |
b50b967bc80c-0 | .ipynb
.pdf
Llama-cpp
Llama-cpp#
This notebook goes over how to use Llama-cpp embeddings within LangChain
!pip install llama-cpp-python
from langchain.embeddings import LlamaCppEmbeddings
llama = LlamaCppEmbeddings(model_path="/path/to/model/ggml-model-q4_0.bin")
text = "This is a test document."
query_result = llama.e... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/llamacpp.html |
52d1fb24c677-0 | .ipynb
.pdf
InstructEmbeddings
InstructEmbeddings#
Let’s load the HuggingFace instruct Embeddings class.
from langchain.embeddings import HuggingFaceInstructEmbeddings
embeddings = HuggingFaceInstructEmbeddings(
query_instruction="Represent the query for retrieval: "
)
load INSTRUCTOR_Transformer
max_seq_length 51... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/instruct_embeddings.html |
192cce7dc0fa-0 | .ipynb
.pdf
Cohere
Cohere#
Let’s load the Cohere Embedding class.
from langchain.embeddings import CohereEmbeddings
embeddings = CohereEmbeddings(cohere_api_key=cohere_api_key)
text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text])
previous
AzureOpe... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/cohere.html |
8976371294c3-0 | .ipynb
.pdf
Aleph Alpha
Contents
Asymmetric
Symmetric
Aleph Alpha#
There are two possible ways to use Aleph Alpha’s semantic embeddings. If you have texts with a dissimilar structure (e.g. a Document and a Query) you would want to use asymmetric embeddings. Conversely, for texts with comparable structures, symmetric ... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/aleph_alpha.html |
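Whichever variant is used, retrieval ultimately compares the query vector against document vectors with a similarity measure, typically cosine similarity; a provider-independent sketch in plain Python:

```python
# Cosine similarity between two embedding vectors:
# 1.0 means identical direction, 0.0 means orthogonal.
import math


def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # -> 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # -> 0.0
```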
4f3d6e9f9d85-0 | .ipynb
.pdf
Jina
Jina#
Let’s load the Jina Embedding class.
from langchain.embeddings import JinaEmbeddings
embeddings = JinaEmbeddings(jina_auth_token=jina_auth_token, model_name="ViT-B-32::openai")
text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([t... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/jina.html |
0e2fff1a1f51-0 | .ipynb
.pdf
MiniMax
MiniMax#
MiniMax offers an embeddings service.
This example goes over how to use LangChain to interact with MiniMax Inference for text embedding.
import os
os.environ["MINIMAX_GROUP_ID"] = "MINIMAX_GROUP_ID"
os.environ["MINIMAX_API_KEY"] = "MINIMAX_API_KEY"
from langchain.embeddings import MiniMaxEm... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/minimax.html |
f3c84578427f-0 | .ipynb
.pdf
Contents
!pip -q install elasticsearch langchain
import elasticsearch
from langchain.embeddings.elasticsearch import ElasticsearchEmbeddings
# Define the model ID
model_id = 'your_model_id'
# Instantiate ElasticsearchEmbeddings using credentials
embeddings = ElasticsearchEmbeddings.from_credentials(
m... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/elasticsearch.html |
60ea78c204e2-0 | .ipynb
.pdf
MosaicML embeddings
MosaicML embeddings#
MosaicML offers a managed inference service. You can either use a variety of open-source models or deploy your own.
This example goes over how to use LangChain to interact with MosaicML Inference for text embedding.
# sign up for an account: https://forms.mosaicml.c... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/mosaicml.html |
97dfa312263a-0 | .ipynb
.pdf
Hugging Face Hub
Hugging Face Hub#
Let’s load the Hugging Face Embedding class.
from langchain.embeddings import HuggingFaceEmbeddings
embeddings = HuggingFaceEmbeddings()
text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text])
previous
G... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/huggingfacehub.html |
f824f8925447-0 | .ipynb
.pdf
TensorflowHub
TensorflowHub#
Let’s load the TensorflowHub Embedding class.
from langchain.embeddings import TensorflowHubEmbeddings
embeddings = TensorflowHubEmbeddings()
2023-01-30 23:53:01.652176: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neu... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/tensorflowhub.html |
16291abda185-0 | .ipynb
.pdf
Google Cloud Platform Vertex AI PaLM
Google Cloud Platform Vertex AI PaLM#
Note: This is separate from the Google PaLM integration. Google has chosen to offer an enterprise version of PaLM through GCP, and this supports the models made available there.
PaLM API on Vertex AI is a Preview offering, su... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/google_vertex_ai_palm.html |
16291abda185-1 | previous
Fake Embeddings
next
Hugging Face Hub
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023. | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/google_vertex_ai_palm.html |
14dafd9ef5e5-0 | .ipynb
.pdf
OpenAI
OpenAI#
Let’s load the OpenAI Embedding class.
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text])
Let’s load the OpenAI Embedding class with fir... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/openai.html |
7611985f30ff-0 | .ipynb
.pdf
ModelScope
ModelScope#
Let’s load the ModelScope Embedding class.
from langchain.embeddings import ModelScopeEmbeddings
model_id = "damo/nlp_corom_sentence-embedding_english-base"
embeddings = ModelScopeEmbeddings(model_id=model_id)
text = "This is a test document."
query_result = embeddings.embed_query(tex... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/modelscope_hub.html |
fb65ad938940-0 | .ipynb
.pdf
AzureOpenAI
AzureOpenAI#
Let’s load the OpenAI Embedding class with environment variables set to indicate to use Azure endpoints.
# set the environment variables needed for openai package to know to reach out to azure
import os
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https:/... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/azureopenai.html |
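The environment setup above is truncated; a hedged sketch of the full set of variables the `openai` package reads in Azure mode (every value is a placeholder, not a working endpoint or key):

```python
# Azure-mode configuration for the openai package; all values are
# placeholders you must replace with your own resource details.
import os

os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://your-resource-name.openai.azure.com/"
os.environ["OPENAI_API_KEY"] = "your-azure-openai-key"
os.environ["OPENAI_API_VERSION"] = "2023-05-15"

# With the environment configured, point the embedding class at an
# Azure deployment (the deployment name here is hypothetical):
# from langchain.embeddings import OpenAIEmbeddings
# embeddings = OpenAIEmbeddings(deployment="your-embeddings-deployment")
```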