Columns: id (string, 14–15 chars), text (string, 23–2.21k chars), source (string, 52–97 chars)
93870d4d9ad6-2
personal data as outlined in the Cloud Data Processing Addendum, subject to applicable restrictions and obligations in the Agreement (as defined in the Preview Terms).To use Vertex AI PaLM you must have the google-cloud-aiplatform Python package installed and either:Have credentials configured for your environment (gcl...
https://python.langchain.com/docs/integrations/llms/google_vertex_ai_palm
93870d4d9ad6-3
'```python\ndef is_prime(n):\n """\n Determines if a number is prime.\n\n Args:\n n: The number to be tested.\n\n Returns:\n True if the number is prime, False otherwise.\n """\n\n # Check if the number is 1.\n if n == 1:\n return False\n\n # Check if the number is 2.\n if n == 2:\n return True\n\n...
https://python.langchain.com/docs/integrations/llms/google_vertex_ai_palm
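The `is_prime` snippet in the chunk above is truncated mid-function; a completed sketch of the same trial-division approach (how the elided body continues is an assumption):

```python
def is_prime(n: int) -> bool:
    """Determines if a number is prime by trial division up to sqrt(n).

    A completed sketch of the truncated snippet above; the loop body
    after the n == 2 check is assumed, not taken from the source.
    """
    if n < 2:
        return False
    if n == 2:
        return True
    if n % 2 == 0:
        return False
    # Only odd divisors up to sqrt(n) need checking.
    i = 3
    while i * i <= n:
        if n % i == 0:
            return False
        i += 2
    return True
```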
53b6767b7355-0
C Transformers | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/ctransformers
53b6767b7355-1
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleAzure OpenAIAzureML Online EndpointBananaBasetenBeamBedrockCerebriumAIChatGLMClarifaiCohereC TransformersDatabric...
https://python.langchain.com/docs/integrations/llms/ctransformers
53b6767b7355-2
= PromptTemplate(template=template, input_variables=["question"])llm_chain = LLMChain(prompt=prompt, llm=llm)response = llm_chain.run("What is AI?")PreviousCohereNextDatabricksCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
https://python.langchain.com/docs/integrations/llms/ctransformers
d357652bbea9-0
Hugging Face Hub | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/huggingface_hub
d357652bbea9-2
········import osos.environ["HUGGINGFACEHUB_API_TOKEN"] = HUGGINGFACEHUB_API_TOKENPrepare Examples​from langchain import HuggingFaceHubfrom langchain import PromptTemplate, LLMChainquestion = "Who won the FIFA World Cup in the year 1994? "template = """Question: {question}Answer: Let's think step by step."""p...
https://python.langchain.com/docs/integrations/llms/huggingface_hub
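The chunk above builds a chain from a prompt template before calling the Hugging Face Hub model; a minimal pure-Python stand-in for the `PromptTemplate.format` step (illustrative only, not the langchain API, since running the real chain needs an API token):

```python
# Stand-in for langchain's PromptTemplate: substitute the question
# into the chain-of-thought template used in the chunk above.
template = """Question: {question}

Answer: Let's think step by step."""


def format_prompt(question: str) -> str:
    # str.format plays the role of PromptTemplate.format here.
    return template.format(question=question)
```

The formatted string is what `LLMChain` would send to the model.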
d357652bbea9-3
the Argentina won the world cup in 2022. So, the Argentina won the world cup in 1994. Question: WhoCamel, by Writer​See Writer's organization page for a list of available models.repo_id = "Writer/camel-5b-hf" # See https://huggingface.co/Writer for other optionsllm = HuggingFaceHub( repo_id=repo_id, mo...
https://python.langchain.com/docs/integrations/llms/huggingface_hub
8bffd875f53b-0
KoboldAI API | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/koboldai
8bffd875f53b-2
max_length=80)response = llm("### Instruction:\nWhat is the first book of the bible?\n### Response:")PreviousJSONFormerNextLlama-cppCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
https://python.langchain.com/docs/integrations/llms/koboldai
226a42d67377-0
Runhouse | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/runhouse
226a42d67377-2
with GCP, Azure, or Lambdagpu = rh.cluster(name="rh-a10x", instance_type="A100:1", use_spot=False)# For an on-demand A10G with AWS (no single A100s on AWS)# gpu = rh.cluster(name='rh-a10x', instance_type='g5.2xlarge', provider='aws')# For an existing cluster# gpu = rh.cluster(ips=['<ip of the cluster>'],# ...
https://python.langchain.com/docs/integrations/llms/runhouse
226a42d67377-3
model_id="google/flan-t5-small", task="text2text-generation", hardware=gpu,)llm("What is the capital of Germany?") INFO | 2023-02-17 05:54:21,681 | Running _generate_text via gRPC INFO | 2023-02-17 05:54:21,937 | Time to send message: 0.25 seconds 'berlin'Using a custom load function, we can load a custo...
https://python.langchain.com/docs/integrations/llms/runhouse
226a42d67377-4
| Time to send message: 0.3 seconds 'john w. bush'You can send your pipeline directly over the wire to your model, but this will only work for small models (<2 Gb), and will be pretty slow:pipeline = load_pipeline()llm = SelfHostedPipeline.from_pipeline( pipeline=pipeline, hardware=gpu, model_reqs=model_reqs)Inst...
https://python.langchain.com/docs/integrations/llms/runhouse
1071dcc2a60c-0
TextGen | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/textgen
1071dcc2a60c-2
langchainfrom langchain import PromptTemplate, LLMChainfrom langchain.llms import TextGenlangchain.debug = Truetemplate = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])llm = TextGen(model_url=model_url)llm_chain = LLMChain(prompt=promp...
https://python.langchain.com/docs/integrations/llms/textgen
520f4b4df23d-0
AzureML Online Endpoint | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/azureml_endpoint_example
520f4b4df23d-2
The deployment name of the endpointContent Formatter​The content_formatter parameter is a handler class for transforming the request and response of an AzureML endpoint to match the required schema. Since there is a wide range of models in the model catalog, each of which may process data differently from one anoth...
https://python.langchain.com/docs/integrations/llms/azureml_endpoint_example
520f4b4df23d-3
return str.encode(input_str) def format_response_payload(self, output: bytes) -> str: response_json = json.loads(output) return response_json[0]["summary_text"]content_formatter = CustomFormatter()llm = AzureMLOnlineEndpoint( endpoint_api_key=os.getenv("BART_ENDPOINT_API_KEY"), endpoint_url=os.ge...
https://python.langchain.com/docs/integrations/llms/azureml_endpoint_example
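The chunk above defines a custom content formatter for a summarization endpoint; a self-contained sketch of that pattern (the two method names and the `summary_text` field follow the snippet, but this class is illustrative and omits the rest of the AzureML handler interface):

```python
import json


class CustomFormatter:
    """Transforms request/response payloads for a summarization endpoint,
    mirroring the formatter pattern in the chunk above."""

    def format_request_payload(self, input_str: str) -> bytes:
        # The endpoint expects raw bytes.
        return str.encode(input_str)

    def format_response_payload(self, output: bytes) -> str:
        # The endpoint returns a JSON list like [{"summary_text": "..."}].
        response_json = json.loads(output)
        return response_json[0]["summary_text"]
```

The formatter can be exercised against a canned response without a live endpoint.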
520f4b4df23d-4
was again not involved in the album, out of her own decision to focus on the recovery of her health.[44] The EP then became their first album to enter the Billboard 200, debuting at number 112.[45] On November 18, Loona released the music video for "Star", another song on [12:00].[46] Peaking at number 40, "Star" is Lo...
https://python.langchain.com/docs/integrations/llms/azureml_endpoint_example
520f4b4df23d-5
HaSeul won her first music show trophy with "So What" on Mnet's M Countdown. Loona released their second EP titled [#] (read as hash) on February 5, 2020. HaSeul did not take part in the promotion of the album because of mental health issues. On October 19, 2020, they released their third EP called [12:00]. It was thei...
https://python.langchain.com/docs/integrations/llms/azureml_endpoint_example
520f4b4df23d-6
be stuck up. Try to understand others where they're coming from. Like minded people can build a tribe together.Serializing an LLM​You can also save and load LLM configurationsfrom langchain.llms.loading import load_llmfrom langchain.llms.azureml_endpoint import AzureMLEndpointClientsave_llm = AzureMLOnlineEndpoint( ...
https://python.langchain.com/docs/integrations/llms/azureml_endpoint_example
e6dda29fe6b8-0
GPT4All | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/gpt4all
e6dda29fe6b8-2
run locally, download a compatible ggml-formatted model. Download option 1: The gpt4all page has a useful Model Explorer section:Select a model of interestDownload using the UI and move the .bin to the local_path (noted below)For more info, visit https://github.com/nomic-ai/gpt4all.Download option 2: Uncomment the belo...
https://python.langchain.com/docs/integrations/llms/gpt4all
e6dda29fe6b8-3
verbose=True)# If you want to use a custom model add the backend parameter# Check https://docs.gpt4all.io/gpt4all_python.html for supported backendsllm = GPT4All(model=local_path, backend="gptj", callbacks=callbacks, verbose=True)llm_chain = LLMChain(prompt=prompt, llm=llm)question = "What NFL team won the Super Bowl i...
https://python.langchain.com/docs/integrations/llms/gpt4all
41bc4ac4ec9d-0
Baseten | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/baseten
41bc4ac4ec9d-2
and follow along with the deployed model's version ID.from langchain.llms import Baseten# Load the modelwizardlm = Baseten(model="MODEL_VERSION_ID", verbose=True)# Prompt the modelwizardlm("What is the difference between a Wizard and a Sorcerer?")Chained model callsWe can chain together multiple calls to one or multipl...
https://python.langchain.com/docs/integrations/llms/baseten
8d8793de2fb5-0
Writer | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/writer
8d8793de2fb5-2
from the error log.llm = Writer()llm_chain = LLMChain(prompt=prompt, llm=llm)question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"llm_chain.run(question)PreviousTongyi QwenNextMemoryCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
https://python.langchain.com/docs/integrations/llms/writer
46d34e9c5525-0
Clarifai | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/clarifai
46d34e9c5525-2
Check here to get or create a PAT.Dependencies# Install required dependenciespip install clarifaiImportsHere we will be setting the personal access token. You can find your PAT under settings/security in your Clarifai account.# Please login and get your API key from https://clarifai.com/settings/securityfrom getpass i...
https://python.langchain.com/docs/integrations/llms/clarifai
46d34e9c5525-3
'Justin Bieber was born on March 1, 1994. So, we need to figure out the Super Bowl winner for the 1994 season. The NFL season spans two calendar years, so the Super Bowl for the 1994 season would have taken place in early 1995. \n\nThe Super Bowl in question is Super Bowl XXIX, which was played on January 29, 1995. The...
https://python.langchain.com/docs/integrations/llms/clarifai
49ca6b066a9e-0
Huggingface TextGen Inference | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/huggingface_textgen_inference
49ca6b066a9e-2
typical_p=0.95, temperature=0.01, repetition_penalty=1.03,)llm("What did foo say about bar?")Streaming​from langchain.llms import HuggingFaceTextGenInferencefrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlerllm = HuggingFaceTextGenInference( inference_server_url="http://localhost...
https://python.langchain.com/docs/integrations/llms/huggingface_textgen_inference
a57a5d854a80-0
CerebriumAI | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/cerebriumai_example
a57a5d854a80-2
See here. You are given 1 hour of serverless GPU compute free to test different models.os.environ["CEREBRIUMAI_API_KEY"] = "YOUR_KEY_HERE"Create the CerebriumAI instance​You can specify different parameters such as the model endpoint url, max length, temperature, etc. You must provide an endpoint url.llm = Cerebriu...
https://python.langchain.com/docs/integrations/llms/cerebriumai_example
7efa063ce8d0-0
Replicate | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/replicate
7efa063ce8d0-3
Requirement already satisfied: certifi>=2017.4.17 in /root/Source/github/docugami.langchain/libs/langchain/.venv/lib/python3.9/site-packages (from requests>2->replicate) (2023.5.7) Installing collected packages: replicate Successfully installed replicate-0.9.0# get a token: https://replicate.com/accountfrom getpa...
https://python.langchain.com/docs/integrations/llms/replicate
7efa063ce8d0-4
Dogs do not have the ability to operate complex machinery like cars.\n\t* This is because dogs do not possess the necessary cognitive abilities to understand how to operate a car.\n2. Dogs do not have the physical dexterity or coordination to manipulate the controls of a car.\n\t* This is because dogs do not have the n...
https://python.langchain.com/docs/integrations/llms/replicate
7efa063ce8d0-5
= """Answer the following yes/no question by reasoning step by step. Can a dog drive a car?"""llm(prompt) 'No, dogs are not capable of driving cars since they do not have hands to operate a steering wheel nor feet to control a gas pedal. However, it’s possible for a driver to train their pet in a different behavio...
https://python.langchain.com/docs/integrations/llms/replicate
7efa063ce8d0-6
Streaming Response​You can optionally stream the response as it is produced, which is helpful to show interactivity to users for time-consuming generations. See detailed docs on Streaming for more information.from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlerllm = Replicate( streaming=...
https://python.langchain.com/docs/integrations/llms/replicate
7efa063ce8d0-7
you for the generation up until the stop sequence.import timellm = Replicate( model="a16z-infra/llama13b-v2-chat:df7690f1994d94e96ad9d568eac121aecf50684a0b0963b25a41cc40061269e5", input={"temperature": 0.01, "max_length": 500, "top_p": 1},)prompt = """User: What is the best way to learn python?Assistant:"""start_...
https://python.langchain.com/docs/integrations/llms/replicate
7efa063ce8d0-8
and "Automate the Boring Stuff with Python" by Al Sweigart. 3. Online communities: Participating in online communities such as Reddit's r/learnpython community or Python communities on Discord can be a great way to get support and feedback as you learn. 4. Practice: The best way to learn Python is by doing. Start...
https://python.langchain.com/docs/integrations/llms/replicate
7efa063ce8d0-9
There are several ways to learn Python, and the best method for you will depend on your learning style and goals. Here are a few suggestions: Stopped output runtime: 3.2350128999969456 secondsChaining Calls​The whole point of langchain is to... chain! Here's an example of how to do that.from langchain.chains import S...
https://python.langchain.com/docs/integrations/llms/replicate
7efa063ce8d0-10
input_variables=["company_logo_description"], template="{company_logo_description}",)chain_three = LLMChain(llm=text2image, prompt=third_prompt)Now let's run it!# Run the chain specifying only the input variable for the first chain.overall_chain = SimpleSequentialChain( chains=[chain, chain_two, chain_three], ver...
https://python.langchain.com/docs/integrations/llms/replicate
1a782c459ffd-0
Manifest | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/manifest
1a782c459ffd-2
langchain.text_splitter import CharacterTextSplitterfrom langchain.chains.mapreduce import MapReduceChain_prompt = """Write a concise summary of the following:{text}CONCISE SUMMARY:"""prompt = PromptTemplate(template=_prompt, input_variables=["text"])text_splitter = CharacterTextSplitter()mp_chain = MapReduceChain.from...
https://python.langchain.com/docs/integrations/llms/manifest
1a782c459ffd-3
client=Manifest( client_name="huggingface", client_connection="http://127.0.0.1:5000" ), llm_kwargs={"temperature": 0.01},)manifest2 = ManifestWrapper( client=Manifest( client_name="huggingface", client_connection="http://127.0.0.1:5001" ), llm_kwargs={"temperature": 0.01},)manifest3 = Mani...
https://python.langchain.com/docs/integrations/llms/manifest
1a782c459ffd-4
'temperature': 0.01} pink PreviousCaching integrationsNextModalCompare HF ModelsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
https://python.langchain.com/docs/integrations/llms/manifest
75508466fe09-0
PromptLayer OpenAI | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/promptlayer_openai
75508466fe09-2
can create a PromptLayer API Key at www.promptlayer.com by clicking the settings cog in the navbar.Set it as an environment variable called PROMPTLAYER_API_KEY.You also need an OpenAI Key, called OPENAI_API_KEY.from getpass import getpassPROMPTLAYER_API_KEY = getpass() ········os.environ["PROMPTLAYER_API_KE...
https://python.langchain.com/docs/integrations/llms/promptlayer_openai
75508466fe09-3
Overall, this gives you the opportunity to track the performance of different templates and models in the PromptLayer dashboard.PreviousPrediction GuardNextRELLMInstall PromptLayerImportsSet the Environment API KeyUse the PromptLayerOpenAI LLM like normalUsing PromptLayer TrackCommunityDiscordTwitterGitHubPythonJS/TSMo...
https://python.langchain.com/docs/integrations/llms/promptlayer_openai
d12ca5b395ef-0
SageMakerEndpoint | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/sagemaker
d12ca5b395ef-2
See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.htmlExample​from langchain.docstore.document import Documentexample_doc_1 = """Peter and Elizabeth took a taxi to attend the night party in the city. While in the party, Elizabeth collapsed and was rushed to the hospital.Since she was diag...
https://python.langchain.com/docs/integrations/llms/sagemaker
d12ca5b395ef-3
= load_qa_chain( llm=SagemakerEndpoint( endpoint_name="endpoint-name", credentials_profile_name="credentials-profile-name", region_name="us-west-2", model_kwargs={"temperature": 1e-10}, content_handler=content_handler, ), prompt=PROMPT,)chain({"input_documents": docs, "questi...
https://python.langchain.com/docs/integrations/llms/sagemaker
6aa02d5831fe-0
NLP Cloud | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/nlpcloud
6aa02d5831fe-2
getpass import getpassNLPCLOUD_API_KEY = getpass() ········import osos.environ["NLPCLOUD_API_KEY"] = NLPCLOUD_API_KEYfrom langchain.llms import NLPCloudfrom langchain import PromptTemplate, LLMChaintemplate = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=templat...
https://python.langchain.com/docs/integrations/llms/nlpcloud
0a72634c5820-0
OpenLLM | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/openllm
0a72634c5820-2
# Replace with remote host if you are running on a remote serverllm = OpenLLM(server_url=server_url)Optional: Local LLM Inference​You may also choose to initialize an LLM managed by OpenLLM locally from the current process. This is useful for development purposes and allows developers to quickly try out different types of...
https://python.langchain.com/docs/integrations/llms/openllm
726322b3977f-0
OpenLM | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/openlm
726322b3977f-2
not in os.environ: print("Enter your OpenAI API key:") os.environ["OPENAI_API_KEY"] = getpass()# Check if HF_API_TOKEN environment variable is setif "HF_API_TOKEN" not in os.environ: print("Enter your HuggingFace Hub API key:") os.environ["HF_API_TOKEN"] = getpass()Using LangChain with OpenLM​Here we're g...
https://python.langchain.com/docs/integrations/llms/openlm
726322b3977f-3
a complicated issue, and I don't see any solutions to all this, but it is still far morePreviousOpenLLMNextPetalsSetupUsing LangChain with OpenLMCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
https://python.langchain.com/docs/integrations/llms/openlm
0e9aba145aa6-0
Caching integrations | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/llm_caching
0e9aba145aa6-2
s "\n\nWhy couldn't the bicycle stand up by itself? It was...two tired!"# The second time it is, so it goes fasterllm("Tell me a joke") CPU times: user 238 µs, sys: 143 µs, total: 381 µs Wall time: 1.76 ms '\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'SQLite Cache​rm .langchain....
https://python.langchain.com/docs/integrations/llms/llm_caching
0e9aba145aa6-3
The first time, it is not yet in cache, so it should take longerllm("Tell me a joke") CPU times: user 6.88 ms, sys: 8.75 ms, total: 15.6 ms Wall time: 1.04 s '\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'# The second time it is, so it goes fasterllm("Tell me a joke") CPU times: user ...
https://python.langchain.com/docs/integrations/llms/llm_caching
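The timing chunks above show the same prompt answered far faster the second time because it is served from cache; a minimal pure-Python stand-in for that behaviour (the `CachedLLM` class here is hypothetical, not LangChain's cache API):

```python
class CachedLLM:
    """Memoizes completions by prompt, mimicking the cache-hit pattern
    in the chunks above: the underlying model is called at most once
    per distinct prompt."""

    def __init__(self, generate):
        self._generate = generate  # the (expensive) model call
        self._cache = {}           # prompt -> completion
        self.calls = 0             # number of real model calls made

    def __call__(self, prompt: str) -> str:
        if prompt not in self._cache:
            self.calls += 1
            self._cache[prompt] = self._generate(prompt)
        return self._cache[prompt]
```

A SQLite-backed cache follows the same shape with the dict swapped for a table keyed on the prompt.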
0e9aba145aa6-4
Wall time: 262 ms "\n\nWhy don't scientists trust atoms?\nBecause they make up everything."GPTCache​We can use GPTCache for exact match caching OR to cache results based on semantic similarityLet's first start with an example of exact matchfrom gptcache import Cachefrom gptcache.manager.factory import manager_fact...
https://python.langchain.com/docs/integrations/llms/llm_caching
0e9aba145aa6-5
side!'Let's now show an example of similarity cachingfrom gptcache import Cachefrom gptcache.adapter.api import init_similar_cachefrom langchain.cache import GPTCacheimport hashlibdef get_hashed_name(name): return hashlib.sha256(name.encode()).hexdigest()def init_gptcache(cache_obj: Cache, llm: str): hashed_llm =...
https://python.langchain.com/docs/integrations/llms/llm_caching
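The similarity-caching chunk above hashes the LLM name to derive a stable per-model cache location; that helper in isolation, as shown in the snippet:

```python
import hashlib


def get_hashed_name(name: str) -> str:
    # Stable, filesystem-safe identifier for a given LLM name,
    # used to keep each model's GPTCache data separate.
    return hashlib.sha256(name.encode()).hexdigest()
```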
0e9aba145aa6-6
prompts and responses.Requires momento to use, uncomment below to install:# !pip install momentoYou'll need to get a Momento auth token to use this class. This can either be passed in to a momento.CacheClient if you'd like to instantiate that directly, or as a named parameter auth_token to MomentoChatMessageHistory.from_c...
https://python.langchain.com/docs/integrations/llms/llm_caching
0e9aba145aa6-7
You can define your own declarative SQLAlchemyCache child class to customize the schema used for caching. For example, to support high-speed fulltext prompt indexing with Postgres, use:from sqlalchemy import Column, Integer, String, Computed, Index, Sequencefrom sqlalchemy import create_enginefrom sqlalchemy.ext.declar...
https://python.langchain.com/docs/integrations/llms/llm_caching
0e9aba145aa6-8
me a joke") CPU times: user 5.8 ms, sys: 2.71 ms, total: 8.51 ms Wall time: 745 ms '\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'llm("Tell me a joke") CPU times: user 4.91 ms, sys: 2.64 ms, total: 7.55 ms Wall time: 623 ms '\n\nTwo guys stole a calendar. They got six months eac...
https://python.langchain.com/docs/integrations/llms/llm_caching
0e9aba145aa6-9
ms, sys: 60.3 ms, total: 512 ms Wall time: 5.09 s '\n\nPresident Biden is discussing the American Rescue Plan and the Bipartisan Infrastructure Law, which will create jobs and help Americans. He also talks about his vision for America, which includes investing in education and infrastructure. In response to Russi...
https://python.langchain.com/docs/integrations/llms/llm_caching
76bd44cd747f-0
Modal | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/modal
76bd44cd747f-2
Use modal to run your own custom LLM models instead of depending on LLM APIs.This example goes over how to use LangChain to interact with a modal HTTPS web endpoint.Question-answering with LangChain is another example of how to use LangChain alongside Modal. In that example, Modal runs the LangChain application end-to-e...
https://python.langchain.com/docs/integrations/llms/modal
76bd44cd747f-3
a deployed Modal web endpoint, you can pass its URL into the langchain.llms.modal.Modal LLM class. This class can then function as a building block in your chain.from langchain.llms import Modalfrom langchain import PromptTemplate, LLMChaintemplate = """Question: {question}Answer: Let's think step by step."""prompt = P...
https://python.langchain.com/docs/integrations/llms/modal
a1fab997bb4a-0
Prediction Guard | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/predictionguard
a1fab997bb4a-2
the output structure/type of LLMs​template = """Respond to the following query based on the context.Context: EVERY comment, DM + email suggestion has led us to this EXCITING announcement! We have officially added TWO new candle subscription box options! 📦Exclusive Candle Box - $80 Monthly Candle Box - $45 (N...
https://python.langchain.com/docs/integrations/llms/predictionguard
a1fab997bb4a-3
= LLMChain(prompt=prompt, llm=pgllm, verbose=True)question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"llm_chain.predict(question=question)template = """Write a {adjective} poem about {subject}."""prompt = PromptTemplate(template=template, input_variables=["adjective", "subject"])llm_chain =...
https://python.langchain.com/docs/integrations/llms/predictionguard
d955ed71cf75-0
Petals | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/petals_example