| id | text | source |
|---|---|---|
c6cdf35b784e-0 | .ipynb
.pdf
Replicate
Contents
Setup
Calling a model
Chaining Calls
Replicate#
Replicate runs machine learning models in the cloud. We have a library of open-source models that you can run with a few lines of code. If you’re building your own machine learning models, Replicate makes it easy to deploy them at scale.
T... | https://python.langchain.com/en/latest/modules/models/llms/integrations/replicate.html |
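The Setup section listed in the contents above boils down to installing the client and exporting an API token. A minimal sketch, assuming you already have a Replicate account (the `getpass` prompt is illustrative; `REPLICATE_API_TOKEN` is the variable the client reads):

```python
# !pip install replicate

import os
from getpass import getpass

from langchain.llms import Replicate

# The replicate client reads the token from this environment variable
os.environ["REPLICATE_API_TOKEN"] = getpass("Replicate API token: ")
```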
c6cdf35b784e-1 | Note that only the first output of a model will be returned.
llm = Replicate(model="replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5")
prompt = """
Answer the following yes/no question by reasoning step by step.
Can a dog drive a car?
"""
llm(prompt)
'The legal driving age of dog... | https://python.langchain.com/en/latest/modules/models/llms/integrations/replicate.html |
c6cdf35b784e-2 | from langchain.chains import SimpleSequentialChain
First, let's define the text LLM (here a Dolly model, per the code below) and text2image as a Stable Diffusion model.
dolly_llm = Replicate(model="replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5")
text2image = Replicate(model="stability-ai/stable-... | https://python.langchain.com/en/latest/modules/models/llms/integrations/replicate.html |
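The transcript below comes from wiring these two models into a `SimpleSequentialChain`. A sketch of how that chain could look, reconstructed from the truncated cell; the prompt wording is illustrative, not the notebook's exact text:

```python
from langchain import LLMChain, PromptTemplate
from langchain.chains import SimpleSequentialChain

# Step 1: the text LLM names a company for the given product
first_prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
chain = LLMChain(llm=dolly_llm, prompt=first_prompt)

# Step 2: the text LLM describes a logo for that company
second_prompt = PromptTemplate(
    input_variables=["company_name"],
    template="Write a description of a logo for this company: {company_name}",
)
chain_two = LLMChain(llm=dolly_llm, prompt=second_prompt)

# Step 3: the text-to-image model renders the described logo
third_prompt = PromptTemplate(
    input_variables=["company_logo_description"],
    template="{company_logo_description}",
)
chain_three = LLMChain(llm=text2image, prompt=third_prompt)

# Each step's output is fed in as the next step's input
overall_chain = SimpleSequentialChain(
    chains=[chain, chain_two, chain_three], verbose=True
)
```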
c6cdf35b784e-3 | catchphrase = overall_chain.run("colorful socks")
print(catchphrase)
> Entering new SimpleSequentialChain chain...
novelty socks
todd & co.
https://replicate.delivery/pbxt/BedAP1PPBwXFfkmeD7xDygXO4BcvApp1uvWOwUdHM4tcQfvCB/out-0.png
> Finished chain.
https://replicate.delivery/pbxt/BedAP1PPBwXFfkmeD7xDygXO4BcvApp1uvWOwU... | https://python.langchain.com/en/latest/modules/models/llms/integrations/replicate.html |
b2ec8626b0a7-0 | .ipynb
.pdf
GooseAI
Contents
Install openai
Imports
Set the Environment API Key
Create the GooseAI instance
Create a Prompt Template
Initiate the LLMChain
Run the LLMChain
GooseAI#
GooseAI is a fully managed NLP-as-a-Service, delivered via API. GooseAI provides access to a range of open-source language models.
This notebook goes over how to u... | https://python.langchain.com/en/latest/modules/models/llms/integrations/gooseai_example.html |
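The contents list above (install, imports, API key, instance, prompt template, chain) condenses to a short script. A minimal sketch; the question is illustrative:

```python
# GooseAI is accessed through the openai client library
# !pip install openai

import os
from getpass import getpass

from langchain import LLMChain, PromptTemplate
from langchain.llms import GooseAI

os.environ["GOOSEAI_API_KEY"] = getpass("GooseAI API key: ")

llm = GooseAI()
prompt = PromptTemplate(
    input_variables=["question"],
    template="Question: {question}\n\nAnswer: Let's think step by step.",
)
llm_chain = LLMChain(prompt=prompt, llm=llm)
llm_chain.run("What NFL team won the Super Bowl in the year Justin Bieber was born?")
```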
b2ec8626b0a7-1 | Google Cloud Platform Vertex AI PaLM
next
GPT4All
Contents
Install openai
Imports
Set the Environment API Key
Create the GooseAI instance
Create a Prompt Template
Initiate the LLMChain
Run the LLMChain
| https://python.langchain.com/en/latest/modules/models/llms/integrations/gooseai_example.html |
cea888ac1cdc-0 | .ipynb
.pdf
Amazon Bedrock
Contents
Using in a conversation chain
Amazon Bedrock#
Amazon Bedrock is a fully managed service that makes FMs from leading AI startups and Amazon available via an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case.
%pip install boto3
fro... | https://python.langchain.com/en/latest/modules/models/llms/integrations/bedrock.html |
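The truncated import presumably brings in the Bedrock LLM class; the page's contents also list "Using in a conversation chain". A minimal sketch of both, assuming an AWS profile with Bedrock access; the `model_id` shown is illustrative and should be replaced with a model your account can call:

```python
from langchain.chains import ConversationChain
from langchain.llms.bedrock import Bedrock
from langchain.memory import ConversationBufferMemory

# credentials_profile_name refers to a profile in ~/.aws/credentials
llm = Bedrock(
    credentials_profile_name="bedrock-admin",
    model_id="amazon.titan-tg1-large",  # illustrative model id
)

# Using in a conversation chain: memory carries prior turns
conversation = ConversationChain(
    llm=llm, verbose=True, memory=ConversationBufferMemory()
)
conversation.predict(input="Hi there!")
```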
e564df3eb57f-0 | .ipynb
.pdf
Google Cloud Platform Vertex AI PaLM
Google Cloud Platform Vertex AI PaLM#
Note: This is separate from the Google PaLM integration. Google has chosen to offer an enterprise version of PaLM through GCP, and this supports the models made available through there.
PaLM API on Vertex AI is a Preview offering, su... | https://python.langchain.com/en/latest/modules/models/llms/integrations/google_vertex_ai_palm.html |
e564df3eb57f-1 | llm = VertexAI()
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
'Justin Bieber was born on March 1, 1994. The Super Bowl in 1994 was won by the San Francisco 49ers.\nThe final answer: San Francisco 49ers.'
previous
F... | https://python.langchain.com/en/latest/modules/models/llms/integrations/google_vertex_ai_palm.html |
d6ae6e20dace-0 | .ipynb
.pdf
Structured Decoding with RELLM
Contents
Hugging Face Baseline
RELLM LLM Wrapper
Structured Decoding with RELLM#
RELLM is a library that wraps local Hugging Face pipeline models for structured decoding.
It works by generating tokens one at a time. At each step, it masks tokens that don’t conform to the pro... | https://python.langchain.com/en/latest/modules/models/llms/integrations/rellm_experimental.html |
d6ae6e20dace-1 | Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
generations=[[Generation(text=' "What\'s the capital of Maryland?"\n', generation_info=None)]] llm_output=None
That’s not so impressive, is it? It didn’t answer the question and it didn’t follow the JSON format at all! Let’s try with the structured... | https://python.langchain.com/en/latest/modules/models/llms/integrations/rellm_experimental.html |
7cb53e667c4a-0 | .ipynb
.pdf
Manifest
Contents
Compare HF Models
Manifest#
This notebook goes over how to use Manifest and LangChain.
For more detailed information on Manifest, and how to use it with local Hugging Face models like in this example, see https://github.com/HazyResearch/manifest
Another example of using Manifest with Langc... | https://python.langchain.com/en/latest/modules/models/llms/integrations/manifest.html |
7cb53e667c4a-1 | state_of_the_union = f.read()
mp_chain.run(state_of_the_union)
'President Obama delivered his annual State of the Union address on Tuesday night, laying out his priorities for the coming year. Obama said the government will provide free flu vaccines to all Americans, ending the government shutdown and allowing business... | https://python.langchain.com/en/latest/modules/models/llms/integrations/manifest.html |
7cb53e667c4a-2 | )
manifest3 = ManifestWrapper(
    client=Manifest(
        client_name="huggingface",
        client_connection="http://127.0.0.1:5002"
    ),
    llm_kwargs={"temperature": 0.01}
)
llms = [manifest1, manifest2, manifest3]
model_lab = ModelLaboratory(llms)
model_lab.compare("What color is a flamingo?")
Input:
What col... | https://python.langchain.com/en/latest/modules/models/llms/integrations/manifest.html |
56926ff53e01-0 | .ipynb
.pdf
StochasticAI
StochasticAI#
Stochastic Acceleration Platform aims to simplify the life cycle of a deep learning model, from uploading and versioning the model, through training, compression, and acceleration, to putting it into production.
This example goes over how to use LangChain to interact with Stochastic... | https://python.langchain.com/en/latest/modules/models/llms/integrations/stochasticai.html |
fd97a1987941-0 | .ipynb
.pdf
DeepInfra
Contents
Imports
Set the Environment API Key
Create the DeepInfra instance
Create a Prompt Template
Initiate the LLMChain
Run the LLMChain
DeepInfra#
DeepInfra provides several LLMs.
This notebook goes over how to use LangChain with DeepInfra.
Imports#
import os
from langchain.llms import DeepIn... | https://python.langchain.com/en/latest/modules/models/llms/integrations/deepinfra_example.html |
fd97a1987941-1 | llm_chain = LLMChain(prompt=prompt, llm=llm)
Run the LLMChain#
Provide a question and run the LLMChain.
question = "Can penguins reach the North pole?"
llm_chain.run(question)
"Penguins live in the Southern hemisphere.\nThe North pole is located in the Northern hemisphere.\nSo, first you need to turn the penguin South.... | https://python.langchain.com/en/latest/modules/models/llms/integrations/deepinfra_example.html |
445313b89a54-0 | .ipynb
.pdf
MosaicML
MosaicML#
MosaicML offers a managed inference service. You can either use a variety of open-source models or deploy your own.
This example goes over how to use LangChain to interact with MosaicML Inference for text completion.
# sign up for an account: https://forms.mosaicml.com/demo?utm_source=la... | https://python.langchain.com/en/latest/modules/models/llms/integrations/mosaicml.html |
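After signing up, the token goes in the `MOSAICML_API_TOKEN` environment variable and `MosaicML` behaves like any other LLM. A minimal sketch; `inject_instruction_format` asks the wrapper to format prompts for instruction-tuned models, and the question is illustrative:

```python
import os
from getpass import getpass

from langchain.llms import MosaicML

os.environ["MOSAICML_API_TOKEN"] = getpass("MosaicML API token: ")

# inject_instruction_format wraps the prompt for instruction-tuned models
llm = MosaicML(inject_instruction_format=True, model_kwargs={"do_sample": False})
llm("Is a dolphin a fish? Answer and explain.")
```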
8ce661b71d9a-0 | .ipynb
.pdf
PromptLayer OpenAI
Contents
Install PromptLayer
Imports
Set the Environment API Key
Use the PromptLayerOpenAI LLM like normal
Using PromptLayer Track
PromptLayer OpenAI#
PromptLayer is the first platform that allows you to track, manage, and share your GPT prompt engineering. PromptLayer acts as middleware... | https://python.langchain.com/en/latest/modules/models/llms/integrations/promptlayer_openai.html |
8ce661b71d9a-1 | The above request should now appear on your PromptLayer dashboard.
Using PromptLayer Track#
If you would like to use any of the PromptLayer tracking features, you need to pass the argument return_pl_id when instantiating the PromptLayer LLM to get the request ID.
llm = PromptLayerOpenAI(return_pl_id=True)
llm_results... | https://python.langchain.com/en/latest/modules/models/llms/integrations/promptlayer_openai.html |
0c0717be3f89-0 | .ipynb
.pdf
Writer
Writer#
Writer is a platform for generating different kinds of language content.
This example goes over how to use LangChain to interact with Writer models.
You have to get the WRITER_API_KEY here.
from getpass import getpass
WRITER_API_KEY = getpass()
import os
os.environ["WRITER_API_KEY"] = WRITER_API_KEY
from... | https://python.langchain.com/en/latest/modules/models/llms/integrations/writer.html |
a06029df9d4e-0 | .ipynb
.pdf
GPT4All
Contents
Specify Model
GPT4All#
GitHub: nomic-ai/gpt4all, an ecosystem of open-source chatbots trained on a massive collection of clean assistant data, including code, stories, and dialogue.
This example goes over how to use LangChain to interact with GPT4All models.
%pip install gpt4all > /dev/null
... | https://python.langchain.com/en/latest/modules/models/llms/integrations/gpt4all.html |
a06029df9d4e-1 | # # send a GET request to the URL to download the file. Stream since it's large
# response = requests.get(url, stream=True)
# # open the file in binary mode and write the contents of the response to it in chunks
# # This is a large file, so be prepared to wait.
# with open(local_path, 'wb') as f:
# for chunk in tqd... | https://python.langchain.com/en/latest/modules/models/llms/integrations/gpt4all.html |
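Once the model file exists locally, usage follows the standard LLMChain pattern. A minimal sketch, assuming `local_path` points at the file the (commented-out) download cell above would write:

```python
from langchain import LLMChain, PromptTemplate
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.llms import GPT4All

# Stream tokens to stdout as they are generated
callbacks = [StreamingStdOutCallbackHandler()]
llm = GPT4All(model=local_path, callbacks=callbacks, verbose=True)

prompt = PromptTemplate(
    input_variables=["question"],
    template="Question: {question}\n\nAnswer: Let's think step by step.",
)
llm_chain = LLMChain(prompt=prompt, llm=llm)
llm_chain.run("What NFL team won the Super Bowl in the year Justin Bieber was born?")
```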
ef8511b0914c-0 | .ipynb
.pdf
PipelineAI
Contents
Install pipeline-ai
Imports
Set the Environment API Key
Create the PipelineAI instance
Create a Prompt Template
Initiate the LLMChain
Run the LLMChain
PipelineAI#
PipelineAI allows you to run your ML models at scale in the cloud. It also provides API access to several LLMs.
This ... | https://python.langchain.com/en/latest/modules/models/llms/integrations/pipelineai_example.html |
ef8511b0914c-1 | Run the LLMChain#
Provide a question and run the LLMChain.
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
previous
Petals
next
Basic LLM usage
Contents
Install pipeline-ai
Imports
Set the Environment API Key
Create the PipelineAI instance
Create a Prompt Te... | https://python.langchain.com/en/latest/modules/models/llms/integrations/pipelineai_example.html |
64a63b65948b-0 | .ipynb
.pdf
Anyscale
Anyscale#
Anyscale is a fully managed Ray platform on which you can build, deploy, and manage scalable AI and Python applications.
This example goes over how to use LangChain to interact with the Anyscale service.
import os
os.environ["ANYSCALE_SERVICE_URL"] = ANYSCALE_SERVICE_URL
os.environ["ANYSCALE_S... | https://python.langchain.com/en/latest/modules/models/llms/integrations/anyscale.html |
64a63b65948b-1 | resp = llm(prompt)
return resp
futures = [send_query.remote(llm, prompt) for prompt in prompt_list]
results = ray.get(futures)
previous
Aleph Alpha
next
Azure OpenAI
| https://python.langchain.com/en/latest/modules/models/llms/integrations/anyscale.html |
c874924f2a60-0 | .ipynb
.pdf
Banana
Banana#
Banana is focused on building machine learning infrastructure.
This example goes over how to use LangChain to interact with Banana models.
# Install the package https://docs.banana.dev/banana-docs/core-concepts/sdks/python
!pip install banana-dev
# get new tokens: https://app.banana.dev/
... | https://python.langchain.com/en/latest/modules/models/llms/integrations/banana.html |
28722f954f09-0 | .ipynb
.pdf
Runhouse
Runhouse#
Runhouse allows remote compute and data across environments and users. See the Runhouse docs.
This example goes over how to use LangChain and Runhouse to interact with models hosted on your own GPU, or on-demand GPUs on AWS, GCP, Azure, or Lambda.
Note: Code uses SelfHosted name instead... | https://python.langchain.com/en/latest/modules/models/llms/integrations/runhouse.html |
28722f954f09-1 | llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
INFO | 2023-02-17 05:42:23,537 | Running _generate_text via gRPC
INFO | 2023-02-17 05:42:24,016 | Time to send message: 0.48 seconds
"\n\nLet's say we're talking sports ... | https://python.langchain.com/en/latest/modules/models/llms/integrations/runhouse.html |
28722f954f09-2 | )
return pipe
def inference_fn(pipeline, prompt, stop=None):
    return pipeline(prompt)[0]["generated_text"][len(prompt):]

llm = SelfHostedHuggingFaceLLM(model_load_fn=load_pipeline, hardware=gpu, inference_fn=inference_fn)
llm("Who is the current US president?")
INFO | 2023-02-17 05:42:59,219 | Running _generat... | https://python.langchain.com/en/latest/modules/models/llms/integrations/runhouse.html |
289af6aa352c-0 | .ipynb
.pdf
CerebriumAI
Contents
Install cerebrium
Imports
Set the Environment API Key
Create the CerebriumAI instance
Create a Prompt Template
Initiate the LLMChain
Run the LLMChain
CerebriumAI#
Cerebrium is an AWS SageMaker alternative. It also provides API access to several LLMs.
This notebook goes over how ... | https://python.langchain.com/en/latest/modules/models/llms/integrations/cerebriumai_example.html |
289af6aa352c-1 | Run the LLMChain#
Provide a question and run the LLMChain.
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
previous
Amazon Bedrock
next
Cohere
Contents
Install cerebrium
Imports
Set the Environment API Key
Create the CerebriumAI instance
Create a Prompt Temp... | https://python.langchain.com/en/latest/modules/models/llms/integrations/cerebriumai_example.html |
51ac4084a46c-0 | .ipynb
.pdf
Hugging Face Local Pipelines
Contents
Load the model
Integrate the model in an LLMChain
Hugging Face Local Pipelines#
Hugging Face models can be run locally through the HuggingFacePipeline class.
The Hugging Face Model Hub hosts over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source a... | https://python.langchain.com/en/latest/modules/models/llms/integrations/huggingface_pipelines.html |
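Loading a model is a one-liner via `from_model_id`, which downloads the model and tokenizer and wraps a `transformers` pipeline. A minimal sketch; the model ID and generation kwargs are illustrative:

```python
from langchain import HuggingFacePipeline

llm = HuggingFacePipeline.from_model_id(
    model_id="bigscience/bloom-560m",  # illustrative model id
    task="text-generation",
    model_kwargs={"temperature": 0, "max_length": 64},
)
```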
51ac4084a46c-1 | question = "What is electroencephalography?"
print(llm_chain.run(question))
/Users/wfh/code/lc/lckg/.venv/lib/python3.11/site-packages/transformers/generation/utils.py:1288: UserWarning: Using `max_length`'s default (64) to control the generation length. This behaviour is deprecated and will be removed from the config ... | https://python.langchain.com/en/latest/modules/models/llms/integrations/huggingface_pipelines.html |
20dbfeda26e6-0 | .ipynb
.pdf
OpenLM
Contents
Setup
Using LangChain with OpenLM
OpenLM#
OpenLM is a zero-dependency OpenAI-compatible LLM provider that can call different inference endpoints directly via HTTP.
It implements the OpenAI Completion class so that it can be used as a drop-in replacement for the OpenAI API. This changeset u... | https://python.langchain.com/en/latest/modules/models/llms/integrations/openlm.html |
20dbfeda26e6-1 | llm = OpenLM(model=model)
llm_chain = LLMChain(prompt=prompt, llm=llm)
result = llm_chain.run(question)
print("""Model: {}
Result: {}""".format(model, result))
Model: text-davinci-003
Result: France is a country in Europe. The capital of France is Paris.
Model: huggingface.co/gpt2
Result: Question: What is... | https://python.langchain.com/en/latest/modules/models/llms/integrations/openlm.html |
8480ed408398-0 | .ipynb
.pdf
C Transformers
C Transformers#
The C Transformers library provides Python bindings for GGML models.
This example goes over how to use LangChain to interact with C Transformers models.
Install
%pip install ctransformers
Load Model
from langchain.llms import CTransformers
llm = CTransformers(model='marella/gp... | https://python.langchain.com/en/latest/modules/models/llms/integrations/ctransformers.html |
c1f44a663278-0 | .ipynb
.pdf
SageMakerEndpoint
Contents
Set up
Example
SageMakerEndpoint#
Amazon SageMaker is a system that can build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows.
This notebook goes over how to use an LLM hosted on a SageMaker endpoint.
!pip... | https://python.langchain.com/en/latest/modules/models/llms/integrations/sagemaker.html |
c1f44a663278-1 | import json
query = """How long was Elizabeth hospitalized?
"""
prompt_template = """Use the following pieces of context to answer the question at the end.
{context}
Question: {question}
Answer:"""
PROMPT = PromptTemplate(
    template=prompt_template, input_variables=["context", "question"]
)
class ContentHandler(LLMC... | https://python.langchain.com/en/latest/modules/models/llms/integrations/sagemaker.html |
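The truncated class above implements the content-handler interface that converts between LangChain prompts and the endpoint's request/response payloads. A sketch of its general shape, assuming a JSON-in/JSON-out endpoint; the payload keys (`inputs`, `generated_text`) and endpoint/profile names are illustrative assumptions:

```python
import json

from langchain.llms.sagemaker_endpoint import LLMContentHandler, SagemakerEndpoint

class ContentHandler(LLMContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs: dict) -> bytes:
        # Serialize the prompt and generation params into the body the
        # endpoint expects; key names depend on the deployed model
        return json.dumps({"inputs": prompt, **model_kwargs}).encode("utf-8")

    def transform_output(self, output: bytes) -> str:
        # Parse the endpoint's JSON response back into plain text
        response_json = json.loads(output.read().decode("utf-8"))
        return response_json[0]["generated_text"]

llm = SagemakerEndpoint(
    endpoint_name="my-endpoint",            # illustrative
    credentials_profile_name="my-profile",  # illustrative
    region_name="us-west-2",
    model_kwargs={"temperature": 1e-10},
    content_handler=ContentHandler(),
)
```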
ccec0a0857f5-0 | .ipynb
.pdf
How to track token usage
How to track token usage#
This notebook goes over how to track your token usage for specific calls. It is currently only implemented for the OpenAI API.
Let’s first look at an extremely simple example of tracking token usage for a single LLM call.
from langchain.llms import OpenAI
f... | https://python.langchain.com/en/latest/modules/models/llms/examples/token_usage_tracking.html |
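The truncated import above is presumably `get_openai_callback`; everything called inside its context manager is tallied on the callback object whose fields are printed in the next chunk. A minimal sketch:

```python
from langchain.callbacks import get_openai_callback
from langchain.llms import OpenAI

llm = OpenAI(model_name="text-davinci-002", n=2, best_of=2)

# All OpenAI calls inside the with-block are counted on cb
with get_openai_callback() as cb:
    result = llm("Tell me a joke")
    print(cb.total_tokens)
```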
ccec0a0857f5-1 | print(f"Total Tokens: {cb.total_tokens}")
print(f"Prompt Tokens: {cb.prompt_tokens}")
print(f"Completion Tokens: {cb.completion_tokens}")
print(f"Total Cost (USD): ${cb.total_cost}")
> Entering new AgentExecutor chain...
I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised t... | https://python.langchain.com/en/latest/modules/models/llms/examples/token_usage_tracking.html |
2b8910665260-0 | .ipynb
.pdf
How (and why) to use the fake LLM
How (and why) to use the fake LLM#
We expose a fake LLM class that can be used for testing. This allows you to mock out calls to the LLM and simulate what would happen if the LLM responded in a certain way.
In this notebook we go over how to use this.
We start this with usi... | https://python.langchain.com/en/latest/modules/models/llms/examples/fake_llm.html |
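A minimal sketch of the idea: `FakeListLLM` returns its canned responses in order, so chain behavior is deterministic in tests. The prompts and responses here are illustrative, not the notebook's agent example:

```python
from langchain import LLMChain, PromptTemplate
from langchain.llms.fake import FakeListLLM

# One canned response is consumed per call, in order
llm = FakeListLLM(responses=["Paris", "Lyon"])

prompt = PromptTemplate(input_variables=["q"], template="{q}")
chain = LLMChain(llm=llm, prompt=prompt)

assert chain.run("What is the capital of France?") == "Paris"
assert chain.run("And its second-largest city?") == "Lyon"
```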
8db58dce53f9-0 | .ipynb
.pdf
How to stream LLM and Chat Model responses
How to stream LLM and Chat Model responses#
LangChain provides streaming support for LLMs. Currently, we support streaming for the OpenAI, ChatOpenAI, and ChatAnthropic implementations, but streaming support for other LLM implementations is on the roadmap. To utili... | https://python.langchain.com/en/latest/modules/models/llms/examples/streaming_llm.html |
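Streaming works by passing a callback handler whose `on_llm_new_token` hook fires for every token; `StreamingStdOutCallbackHandler` simply prints them as they arrive. A minimal sketch matching the lyrics transcript below:

```python
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.llms import OpenAI

llm = OpenAI(
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],
    temperature=0,
)
resp = llm("Write me a song about sparkling water.")
```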
8db58dce53f9-1 | On a hot summer night.
Chorus
Sparkling water, sparkling water,
It's the best way to stay hydrated,
It's so crisp and so clean,
It's the perfect way to stay refreshed.
We still have access to the end LLMResult if using generate. However, token_usage is not currently supported for streaming.
llm.generate(["Tell me a jok... | https://python.langchain.com/en/latest/modules/models/llms/examples/streaming_llm.html |
8db58dce53f9-2 | You quench my thirst, you make me feel alive
Sparkling water, you're my favorite vibe
Bridge:
You're my go-to drink, day or night
You make me feel so light
I'll never give you up, you're my true love
Sparkling water, you're sent from above
Chorus:
Sparkling water, oh how you shine
A taste so clean, it's simply divine
Y... | https://python.langchain.com/en/latest/modules/models/llms/examples/streaming_llm.html |
b80e41fed43a-0 | .ipynb
.pdf
How to use the async API for LLMs
How to use the async API for LLMs#
LangChain provides async support for LLMs by leveraging the asyncio library.
Async support is particularly useful for calling multiple LLMs concurrently, as these calls are network-bound. Currently, OpenAI, PromptLayerOpenAI, ChatOpenAI an... | https://python.langchain.com/en/latest/modules/models/llms/examples/async_llm.html |
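Concurrent calls use `agenerate`, the async counterpart of `generate`, gathered with `asyncio`. A minimal sketch (in a notebook, `await generate_concurrently()` replaces `asyncio.run`):

```python
import asyncio

from langchain.llms import OpenAI

async def async_generate(llm):
    resp = await llm.agenerate(["Hello, how are you?"])
    print(resp.generations[0][0].text)

async def generate_concurrently():
    llm = OpenAI(temperature=0.9)
    # Ten requests overlap on network I/O instead of running serially
    await asyncio.gather(*[async_generate(llm) for _ in range(10)])

asyncio.run(generate_concurrently())
```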
b80e41fed43a-1 | I'm doing well, thank you. How about you?
I'm doing well, thank you. How about you?
I'm doing well, how about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about yourself?
I'm doing well, thank you! How about you?
I'm doing well, thank you. How a... | https://python.langchain.com/en/latest/modules/models/llms/examples/async_llm.html |
cfc7b2cd6579-0 | .ipynb
.pdf
How to write a custom LLM wrapper
How to write a custom LLM wrapper#
This notebook goes over how to create a custom LLM wrapper, in case you want to use your own LLM or a different wrapper than one that is supported in LangChain.
There is only one required thing that a custom LLM needs to implement:
A _call... | https://python.langchain.com/en/latest/modules/models/llms/examples/custom_llm.html |
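A sketch of the minimal subclass consistent with the output shown below (`'This is a '` and `Params: {'n': 10}`): `_call` returns the first `n` characters of the prompt, and `_identifying_params` drives the custom print:

```python
from typing import Any, List, Mapping, Optional

from langchain.llms.base import LLM

class CustomLLM(LLM):
    n: int

    @property
    def _llm_type(self) -> str:
        return "custom"

    def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str:
        if stop is not None:
            raise ValueError("stop kwargs are not permitted.")
        # Echo back the first n characters of the prompt
        return prompt[: self.n]

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        """Parameters reported when the LLM is printed."""
        return {"n": self.n}

llm = CustomLLM(n=10)
llm("This is a foobar thing")  # -> 'This is a '
```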
cfc7b2cd6579-1 | 'This is a '
We can also print the LLM and see its custom print.
print(llm)
CustomLLM
Params: {'n': 10}
previous
How to use the async API for LLMs
next
How (and why) to use the fake LLM
| https://python.langchain.com/en/latest/modules/models/llms/examples/custom_llm.html |
08165be5c401-0 | .ipynb
.pdf
How to cache LLM calls
Contents
In Memory Cache
SQLite Cache
Redis Cache
Standard Cache
Semantic Cache
GPTCache
Momento Cache
SQLAlchemy Cache
Custom SQLAlchemy Schemas
Optional Caching
Optional Caching in Chains
How to cache LLM calls#
This notebook covers how to cache results of individual LLM calls.
im... | https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html |
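The truncated import begins the in-memory cache setup; the timing transcripts that follow show the first call hitting the API and the repeat call returning from cache in milliseconds. A minimal sketch:

```python
import langchain
from langchain.cache import InMemoryCache
from langchain.llms import OpenAI

# The global llm_cache is consulted before every LLM call
langchain.llm_cache = InMemoryCache()

# n=2, best_of=2 makes the uncached call slower, so the speedup is visible
llm = OpenAI(model_name="text-davinci-002", n=2, best_of=2)
```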
08165be5c401-1 | llm("Tell me a joke")
CPU times: user 17 ms, sys: 9.76 ms, total: 26.7 ms
Wall time: 825 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
%%time
# The second time it is, so it goes faster
llm("Tell me a joke")
CPU times: user 2.46 ms, sys: 1.23 ms, total: 3.7 ms
Wall time: 2.67 ms
'\n\nWhy did ... | https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html |
08165be5c401-2 | Semantic Cache#
Use Redis to cache prompts and responses and evaluate hits based on semantic similarity.
from langchain.embeddings import OpenAIEmbeddings
from langchain.cache import RedisSemanticCache
langchain.llm_cache = RedisSemanticCache(
    redis_url="redis://localhost:6379",
    embedding=OpenAIEmbeddings()
)
%... | https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html |
08165be5c401-3 | cache_obj.init(
    pre_embedding_func=get_prompt,
    data_manager=manager_factory(manager="map", data_dir=f"map_cache_{hashed_llm}"),
)
langchain.llm_cache = GPTCache(init_gptcache)
%%time
# The first time, it is not yet in cache, so it should take longer
llm("Tell me a joke")
CPU times: user 21.5 ms, sys... | https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html |
08165be5c401-4 | Wall time: 8.44 s
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
%%time
# This is an exact match, so it finds it in the cache
llm("Tell me a joke")
CPU times: user 866 ms, sys: 20 ms, total: 886 ms
Wall time: 226 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
%%time
# ... | https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html |
08165be5c401-5 | Wall time: 1.73 s
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'
%%time
# The second time it is, so it goes faster
# When run in the same region as the cache, latencies are single digit ms
llm("Tell me a joke")
CPU times: user 3.16 ms, sys: 2.98 ms, total: 6.14 ms
Wall time: 57.9 ms
'\n\nWhy did... | https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html |
08165be5c401-6 | idx = Column(Integer)
response = Column(String)
prompt_tsv = Column(TSVectorType(), Computed("to_tsvector('english', llm || ' ' || prompt)", persisted=True))
__table_args__ = (
Index("idx_fulltext_prompt_tsv", prompt_tsv, postgresql_using="gin"),
)
engine = create_engine("postgresql://postgres:p... | https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html |
08165be5c401-7 | llm = OpenAI(model_name="text-davinci-002")
no_cache_llm = OpenAI(model_name="text-davinci-002", cache=False)
from langchain.text_splitter import CharacterTextSplitter
from langchain.chains.mapreduce import MapReduceChain
text_splitter = CharacterTextSplitter()
with open('../../../state_of_the_union.txt') as f:
sta... | https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html |
08165be5c401-8 | %%time
chain.run(docs)
CPU times: user 11.5 ms, sys: 4.33 ms, total: 15.8 ms
Wall time: 1.04 s
'\n\nPresident Biden is discussing the American Rescue Plan and the Bipartisan Infrastructure Law, which will create jobs and help Americans. He also talks about his vision for America, which includes investing in education a... | https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html |
7bd6fce3cb05-0 | .ipynb
.pdf
How to serialize LLM classes
Contents
Loading
Saving
How to serialize LLM classes#
This notebook walks through how to write and read an LLM configuration to and from disk. This is useful if you want to save the configuration for a given LLM (e.g., the provider, the temperature, etc.).
from langchain.llms i... | https://python.langchain.com/en/latest/modules/models/llms/examples/llm_serialization.html |
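Loading and saving are symmetric: `load_llm` dispatches on the file extension (JSON or YAML) and `llm.save` writes either format, as the next chunk shows. A minimal round-trip sketch:

```python
from langchain.llms import OpenAI
from langchain.llms.loading import load_llm

# Persist the configuration (provider, model_name, temperature, ...)
llm = OpenAI(model_name="text-davinci-003", temperature=0.7)
llm.save("llm.json")

# Recreate an equivalent LLM from the saved configuration
llm = load_llm("llm.json")
```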
7bd6fce3cb05-1 | llm.save("llm.json")
llm.save("llm.yaml")
previous
How to cache LLM calls
next
How to stream LLM and Chat Model responses
Contents
Loading
Saving
| https://python.langchain.com/en/latest/modules/models/llms/examples/llm_serialization.html |
9b3fe37d86ee-0 | .ipynb
.pdf
How (and why) to use the human input LLM
How (and why) to use the human input LLM#
Similar to the fake LLM, LangChain provides a pseudo LLM class that can be used for testing, debugging, or educational purposes. This allows you to mock out calls to the LLM and simulate how a human would respond if they rece... | https://python.langchain.com/en/latest/modules/models/llms/examples/human_input_llm.html |
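The agent transcript that follows was produced by a setup along these lines: the `prompt_func` prints each prompt (hence the `=====END OF PROMPT======` markers below) and the human types the completion. A sketch, assuming the `wikipedia` package is installed for the tool:

```python
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.llms.human import HumanInputLLM

tools = load_tools(["wikipedia"])

# Print each prompt so the human knows what to answer
llm = HumanInputLLM(
    prompt_func=lambda prompt: print(
        f"\n===PROMPT====\n{prompt}\n=====END OF PROMPT======"
    )
)
agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
agent.run("What is 'Bocchi the Rock!'?")
```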
9b3fe37d86ee-1 | Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Begin!
Question: What is 'Bocchi the Rock!'?
Thought:
=====END OF PROMPT===... | https://python.langchain.com/en/latest/modules/models/llms/examples/human_input_llm.html |
9b3fe37d86ee-2 | Page: Manga Time Kirara Max
Summary: Manga Time Kirara Max (まんがタイムきららMAX) is a Japanese four-panel seinen manga magazine published by Houbunsha. It is the third magazine of the "Kirara" series, after "Manga Time Kirara" and "Manga Time Kirara Carat". The first issue was released on September 29, 2004. Currently the mag... | https://python.langchain.com/en/latest/modules/models/llms/examples/human_input_llm.html |
9b3fe37d86ee-3 | Observation: Page: Bocchi the Rock!
Summary: Bocchi the Rock! (ぼっち・ざ・ろっく!, Bocchi Za Rokku!) is a Japanese four-panel manga series written and illustrated by Aki Hamaji. It has been serialized in Houbunsha's seinen manga magazine Manga Time Kirara Max since December 2017. Its chapters have been collected in five tankōb... | https://python.langchain.com/en/latest/modules/models/llms/examples/human_input_llm.html |
9b3fe37d86ee-4 | =====END OF PROMPT======
These are not relevant articles.
Action: Wikipedia
Action Input: Bocchi the Rock!, Japanese four-panel manga series written and illustrated by Aki Hamaji.
Observation: Page: Bocchi the Rock!
Summary: Bocchi the Rock! (ぼっち・ざ・ろっく!, Bocchi Za Rokku!) is a Japanese four-panel manga series written a... | https://python.langchain.com/en/latest/modules/models/llms/examples/human_input_llm.html |
9b3fe37d86ee-5 | Action Input: Bocchi the Rock!, Japanese four-panel manga and anime series.
Observation: Page: Bocchi the Rock!
Summary: Bocchi the Rock! (ぼっち・ざ・ろっく!, Bocchi Za Rokku!) is a Japanese four-panel manga series written and illustrated by Aki Hamaji. It has been serialized in Houbunsha's seinen manga magazine Manga Time Kir... | https://python.langchain.com/en/latest/modules/models/llms/examples/human_input_llm.html |
9b3fe37d86ee-6 | Thought:These are not relevant articles.
Action: Wikipedia
Action Input: Bocchi the Rock!, Japanese four-panel manga series written and illustrated by Aki Hamaji.
Observation: Page: Bocchi the Rock!
Summary: Bocchi the Rock! (ぼっち・ざ・ろっく!, Bocchi Za Rokku!) is a Japanese four-panel manga series written and illustrated by... | https://python.langchain.com/en/latest/modules/models/llms/examples/human_input_llm.html |
7d45020d986d-0 | .rst
.pdf
Integrations
Integrations#
The examples here all highlight how to integrate with different chat models.
Anthropic
Azure
Google Cloud Platform Vertex AI PaLM
OpenAI
PromptLayer ChatOpenAI
previous
How to stream responses
next
Anthropic
| https://python.langchain.com/en/latest/modules/models/chat/integrations.html |
69794a56aa6a-0 | .rst
.pdf
How-To Guides
How-To Guides#
The examples here are all how-to guides for working with chat models.
How to use few shot examples
How to stream responses
previous
Getting Started
next
How to use few shot examples
| https://python.langchain.com/en/latest/modules/models/chat/how_to_guides.html |
19a01905d5c6-0 | .ipynb
.pdf
Getting Started
Contents
PromptTemplates
LLMChain
Streaming
Getting Started#
This notebook covers how to get started with chat models. The interface is based around messages rather than raw text.
from langchain.chat_models import ChatOpenAI
from langchain import PromptTemplate, LLMChain
from langchain.pro... | https://python.langchain.com/en/latest/modules/models/chat/getting_started.html |
19a01905d5c6-1 | [
SystemMessage(content="You are a helpful assistant that translates English to French."),
HumanMessage(content="I love programming.")
],
[
SystemMessage(content="You are a helpful assistant that translates English to French."),
HumanMessage(content="I love artificial intelligenc... | https://python.langchain.com/en/latest/modules/models/chat/getting_started.html |
19a01905d5c6-2 | system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template="{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
# get a chat completion from the formatted mes... | https://python.langchain.com/en/latest/modules/models/chat/getting_started.html |
19a01905d5c6-3 | A drink that's always on my mind
With every sip, I feel alive
Sparkling water, you're my vibe
Verse 2:
No sugar, no calories, just pure bliss
A drink that's hard to resist
It's the perfect way to quench my thirst
A drink that always comes first
Chorus:
Sparkling water, oh so fine
A drink that's always on my mind
With e... | https://python.langchain.com/en/latest/modules/models/chat/getting_started.html |
3692518f8db4-0 | .ipynb
.pdf
OpenAI
OpenAI#
This notebook covers how to get started with OpenAI chat models.
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    AIMessagePromptTemplate,
    HumanMessagePromptTemplate,
)
from langchain.schema impo... | https://python.langchain.com/en/latest/modules/models/chat/integrations/openai.html |
3692518f8db4-1 | AIMessage(content="J'adore la programmation.", additional_kwargs={})
previous
Google Cloud Platform Vertex AI PaLM
next
PromptLayer ChatOpenAI
| https://python.langchain.com/en/latest/modules/models/chat/integrations/openai.html |
3b90e3843203-0 | .ipynb
.pdf
PromptLayer ChatOpenAI
Contents
Install PromptLayer
Imports
Set the Environment API Key
Use the PromptLayerOpenAI LLM like normal
Using PromptLayer Track
PromptLayer ChatOpenAI#
This example showcases how to connect to PromptLayer to start recording your ChatOpenAI requests.
Install PromptLayer#
The promp... | https://python.langchain.com/en/latest/modules/models/chat/integrations/promptlayer_chatopenai.html |
3b90e3843203-1 | chat = PromptLayerChatOpenAI(return_pl_id=True)
chat_results = chat.generate([[HumanMessage(content="I am a cat and I want")]])
for res in chat_results.generations:
pl_request_id = res[0].generation_info["pl_request_id"]
promptlayer.track.score(request_id=pl_request_id, score=100)
Using this allows you to track... | https://python.langchain.com/en/latest/modules/models/chat/integrations/promptlayer_chatopenai.html |
84a16219f836-0 | .ipynb
.pdf
Azure
Azure#
This notebook goes over how to connect to an Azure-hosted OpenAI endpoint.
from langchain.chat_models import AzureChatOpenAI
from langchain.schema import HumanMessage
BASE_URL = "https://${TODO}.openai.azure.com"
API_KEY = "..."
DEPLOYMENT_NAME = "chat"
model = AzureChatOpenAI(
openai_api_ba... | https://python.langchain.com/en/latest/modules/models/chat/integrations/azure_chat_openai.html |
06351b9cdb24-0 | .ipynb
.pdf
Google Cloud Platform Vertex AI PaLM
Google Cloud Platform Vertex AI PaLM#
Note: This is separate from the Google PaLM integration. Google has chosen to offer an enterprise version of PaLM through GCP, and this supports the models made available through there.
PaLM API on Vertex AI is a Preview offering, su... | https://python.langchain.com/en/latest/modules/models/chat/integrations/google_vertex_ai_palm.html |
06351b9cdb24-1 | HumanMessage,
SystemMessage
)
chat = ChatVertexAI()
messages = [
    SystemMessage(content="You are a helpful assistant that translates English to French."),
    HumanMessage(content="Translate this sentence from English to French. I love programming.")
]
chat(messages)
AIMessage(content='Sure, here is the translat... | https://python.langchain.com/en/latest/modules/models/chat/integrations/google_vertex_ai_palm.html |
ce2dc56d44a1-0 | .ipynb
.pdf
Anthropic
Contents
ChatAnthropic also supports async and streaming functionality:
Anthropic#
This notebook covers how to get started with Anthropic chat models.
from langchain.chat_models import ChatAnthropic
from langchain.prompts.chat import (
ChatPromptTemplate,
SystemMessagePromptTemplate,
... | https://python.langchain.com/en/latest/modules/models/chat/integrations/anthropic.html |
98e71397c445-0 | .ipynb
.pdf
How to use few shot examples
Contents
Alternating Human/AI messages
System Messages
How to use few shot examples#
This notebook covers how to use few shot examples in chat models.
There does not appear to be solid consensus on how best to do few shot prompting. As a result, we are not solidifying any abst... | https://python.langchain.com/en/latest/modules/models/chat/examples/few_shot_examples.html |
98e71397c445-1 | template="You are a helpful assistant that translates English to pirate."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
example_human = SystemMessagePromptTemplate.from_template("Hi", additional_kwargs={"name": "example_user"})
example_ai = SystemMessagePromptTemplate.from_template("Argh m... | https://python.langchain.com/en/latest/modules/models/chat/examples/few_shot_examples.html |
84684baeedf7-0 | .ipynb
.pdf
How to stream responses
How to stream responses#
This notebook goes over how to use streaming with a chat model.
from langchain.chat_models import ChatOpenAI
from langchain.schema import (
HumanMessage,
)
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
chat = ChatOpenAI(s... | https://python.langchain.com/en/latest/modules/models/chat/examples/streaming.html |
84684baeedf7-1 | How to use few shot examples
next
Integrations
| https://python.langchain.com/en/latest/modules/models/chat/examples/streaming.html |
882fb64ddf7f-0 | .ipynb
.pdf
OpenAI
OpenAI#
Let’s load the OpenAI Embedding class.
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text])
Let’s load the OpenAI Embedding class with fir... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/openai.html |
80e6c6d11483-0 | .ipynb
.pdf
TensorflowHub
TensorflowHub#
Let’s load the TensorflowHub Embedding class.
from langchain.embeddings import TensorflowHubEmbeddings
embeddings = TensorflowHubEmbeddings()
2023-01-30 23:53:01.652176: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neu... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/tensorflowhub.html |
99841c93fdb6-0 | .ipynb
.pdf
AzureOpenAI
AzureOpenAI#
Let’s load the OpenAI Embedding class with environment variables set to indicate to use Azure endpoints.
# set the environment variables needed for openai package to know to reach out to azure
import os
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https:/... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/azureopenai.html |
bed9c7184688-0 | .ipynb
.pdf
Self Hosted Embeddings
Self Hosted Embeddings#
Let’s load the SelfHostedEmbeddings, SelfHostedHuggingFaceEmbeddings, and SelfHostedHuggingFaceInstructEmbeddings classes.
from langchain.embeddings import (
SelfHostedEmbeddings,
SelfHostedHuggingFaceEmbeddings,
SelfHostedHuggingFaceInstructEmbeddi... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/self-hosted.html |
bed9c7184688-1 | tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
return pipeline("feature-extraction", model=model, tokenizer=tokenizer)
def inference_fn(pipeline, prompt):
# Return last hidden state of the model
if isinstance(prompt, list):
return [emb[... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/self-hosted.html |
b7812bf301cc-0 | .ipynb
.pdf
Cohere
Cohere#
Let’s load the Cohere Embedding class.
from langchain.embeddings import CohereEmbeddings
embeddings = CohereEmbeddings(cohere_api_key=cohere_api_key)
text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text])
previous
Bedrock ... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/cohere.html |
2d5f80c687a6-0 | .ipynb
.pdf
Llama-cpp
Llama-cpp#
This notebook goes over how to use Llama-cpp embeddings within LangChain.
!pip install llama-cpp-python
from langchain.embeddings import LlamaCppEmbeddings
llama = LlamaCppEmbeddings(model_path="/path/to/model/ggml-model-q4_0.bin")
text = "This is a test document."
query_result = llama.e... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/llamacpp.html |
f4617ae85499-0 | .ipynb
.pdf
ModelScope
ModelScope#
Let’s load the ModelScope Embedding class.
from langchain.embeddings import ModelScopeEmbeddings
model_id = "damo/nlp_corom_sentence-embedding_english-base"
embeddings = ModelScopeEmbeddings(model_id=model_id)
text = "This is a test document."
query_result = embeddings.embed_query(tex... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/modelscope_hub.html |
5c35fb9b1955-0 | .ipynb
.pdf
Aleph Alpha
Contents
Asymmetric
Symmetric
Aleph Alpha#
There are two possible ways to use Aleph Alpha’s semantic embeddings. If you have texts with a dissimilar structure (e.g. a Document and a Query) you would want to use asymmetric embeddings. Conversely, for texts with comparable structures, symmetric ... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/aleph_alpha.html |
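A minimal sketch of both variants, assuming an `ALEPH_ALPHA_API_KEY` in the environment; the texts are illustrative:

```python
from langchain.embeddings import (
    AlephAlphaAsymmetricSemanticEmbedding,
    AlephAlphaSymmetricSemanticEmbedding,
)

# Asymmetric: document and query have dissimilar structures
document = "This is a content of the document"
query = "What is the content of the document?"
asym = AlephAlphaAsymmetricSemanticEmbedding()
doc_result = asym.embed_documents([document])
query_result = asym.embed_query(query)

# Symmetric: both texts share a comparable structure
text = "This is a test text"
sym = AlephAlphaSymmetricSemanticEmbedding()
doc_result = sym.embed_documents([text])
query_result = sym.embed_query(text)
```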
7f61c52ad78e-0 | .ipynb
.pdf
MiniMax
MiniMax#
MiniMax offers an embeddings service.
This example goes over how to use LangChain to interact with MiniMax Inference for text embedding.
import os
os.environ["MINIMAX_GROUP_ID"] = "MINIMAX_GROUP_ID"
os.environ["MINIMAX_API_KEY"] = "MINIMAX_API_KEY"
from langchain.embeddings import MiniMaxEm... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/minimax.html |
b8f28e5f950e-0 | .ipynb
.pdf
Bedrock Embeddings
Bedrock Embeddings#
%pip install boto3
from langchain.embeddings import BedrockEmbeddings
embeddings = BedrockEmbeddings(credentials_profile_name="bedrock-admin")
embeddings.embed_query("This is a content of the document")
embeddings.embed_documents(["This is a content of the document"])
... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/bedrock.html |
e09f0fc0e148-0 | .ipynb
.pdf
Sentence Transformers Embeddings
Sentence Transformers Embeddings#
SentenceTransformers embeddings are called using the HuggingFaceEmbeddings integration. We have also added an alias for SentenceTransformerEmbeddings for users who are more familiar with directly using that package.
SentenceTransformers is a... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/sentence_transformers.html |
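Because the integration is an alias, both class names create the same embedder. A minimal sketch; the `all-MiniLM-L6-v2` model name is illustrative:

```python
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddings

# SentenceTransformerEmbeddings is an alias of HuggingFaceEmbeddings
embeddings = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")

text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text])
```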
e82efbe145af-0 | .ipynb
.pdf
Google Cloud Platform Vertex AI PaLM
Google Cloud Platform Vertex AI PaLM#
Note: This is separate from the Google PaLM integration. Google has chosen to offer an enterprise version of PaLM through GCP, and this supports the models made available through there.
PaLM API on Vertex AI is a Preview offering, su... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/google_vertex_ai_palm.html |
e82efbe145af-1 | previous
Fake Embeddings
next
Hugging Face Hub
| https://python.langchain.com/en/latest/modules/models/text_embedding/examples/google_vertex_ai_palm.html |