d955ed71cf75-1
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleAzure OpenAIAzureML Online EndpointBananaBasetenBeamBedrockCerebriumAIChatGLMClarifaiCohereC TransformersDatabric...
https://python.langchain.com/docs/integrations/llms/petals_example
d955ed71cf75-2
HUGGINGFACE_API_KEYCreate the Petals instance You can specify different parameters such as the model name, max new tokens, temperature, etc.# this can take several minutes to download big files!llm = Petals(model_name="bigscience/bloom-petals") Downloading: 1%|▏ | 40.8M/7.19G [00:24<15:4...
https://python.langchain.com/docs/integrations/llms/petals_example
99260688a203-0
ChatGLM | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/chatglm
99260688a203-1
https://python.langchain.com/docs/integrations/llms/chatglm
99260688a203-2
ChatGLM-6B and ChatGLM2-6B has the same api specs, so this example should work with both.from langchain.llms import ChatGLMfrom langchain import PromptTemplate, LLMChain# import ostemplate = """{question}"""prompt = PromptTemplate(template=template, input_variables=["question"])# default endpoint_url for a local deploy...
https://python.langchain.com/docs/integrations/llms/chatglm
99260688a203-3
Truellm_chain = LLMChain(prompt=prompt, llm=llm)question = "北京和上海两座城市有什么不同?"llm_chain.run(question) ChatGLM payload: {'prompt': '北京和上海两座城市有什么不同?', 'temperature': 0.1, 'history': [['我将从美国到中国来旅游,出行前希望了解中国的城市', '...
https://python.langchain.com/docs/integrations/llms/chatglm
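The "ChatGLM payload" log above shows the JSON body the wrapper posts to the local endpoint. A minimal sketch of building that body with the standard library only — the field names are taken from the log, and the helper itself is hypothetical, not the wrapper's actual implementation:

```python
import json

def build_chatglm_payload(prompt, history=None, temperature=0.1):
    # Field names mirror the "ChatGLM payload" log above; this is a
    # sketch, not the ChatGLM wrapper's real request-building code.
    return {"prompt": prompt, "temperature": temperature, "history": history or []}

payload = build_chatglm_payload("北京和上海两座城市有什么不同?")
body = json.dumps(payload, ensure_ascii=False)  # ensure_ascii=False keeps the Chinese readable
```

The `history` list of [question, answer] pairs is what lets the endpoint carry multi-turn context, as in the run shown above.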
99260688a203-4
'北京和上海是中国的两个首都,它们在许多方面都有所不同。\n\n北京是中国的政治和文化中心,拥有悠久的历史和灿烂的文化。它是中国最古老的古都之一,也是中国历史上最后一个封建王朝的都城。北京有许多著名的古迹和景点,例如紫禁...
https://python.langchain.com/docs/integrations/llms/chatglm
99260688a203-5
天安门广场和长城等。\n\n上海是中国最现代化的城市之一,也是中国商业和金融中心。上海拥有许多国际知名的企业和金融机构,同时也有许多著名的景点和美食。上海的外滩是一个历史悠久的商业区,拥有许多欧式建筑和餐馆。\n\n除此之外...
https://python.langchain.com/docs/integrations/llms/chatglm
99260688a203-6
上海在交通和人口方面也有很大差异。北京是中国的首都,人口众多,交通拥堵问题较为严重。而上海是中国的商业和金融中心,人口密度较低,交通相对较为便利。\n\n总的来说,北京和上海是两个拥有独特魅力和特点的城市,可以根据自己...
https://python.langchain.com/docs/integrations/llms/chatglm
99260688a203-7
…时间来选择前往其中一座城市旅游。'
https://python.langchain.com/docs/integrations/llms/chatglm
99260688a203-8
© 2023 LangChain, Inc.
https://python.langchain.com/docs/integrations/llms/chatglm
161cfa46e3df-0
Predibase | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/predibase
161cfa46e3df-1
https://python.langchain.com/docs/integrations/llms/predibase
161cfa46e3df-2
wine?")print(response)Chain Call Setup llm = Predibase( model="vicuna-13b", predibase_api_key=os.environ.get("PREDIBASE_API_TOKEN"))SequentialChain from langchain.chains import LLMChainfrom langchain.prompts import PromptTemplate# This is an LLMChain to write a synopsis given a title of a play.template = """You ...
https://python.langchain.com/docs/integrations/llms/predibase
161cfa46e3df-3
replace my-finetuned-LLM with the name of your model in Predibase# response = model("Can you help categorize the following emails into positive, negative, and neutral?")
https://python.langchain.com/docs/integrations/llms/predibase
ce47c26d5670-0
AI21 | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/ai21
ce47c26d5670-1
https://python.langchain.com/docs/integrations/llms/ai21
ce47c26d5670-2
= LLMChain(prompt=prompt, llm=llm)question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"llm_chain.run(question) '\n1. What year was Justin Bieber born?\nJustin Bieber was born in 1994.\n2. What team won the Super Bowl in 1994?\nThe Dallas Cowboys won the Super Bowl in 1994.'
https://python.langchain.com/docs/integrations/llms/ai21
c26f2592eff4-0
DeepInfra | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/deepinfra_example
c26f2592eff4-1
https://python.langchain.com/docs/integrations/llms/deepinfra_example
c26f2592eff4-2
You can print your token with deepctl auth token# get a new token: https://deepinfra.com/login?from=%2Fdashfrom getpass import getpassDEEPINFRA_API_TOKEN = getpass() ········os.environ["DEEPINFRA_API_TOKEN"] = DEEPINFRA_API_TOKENCreate the DeepInfra instance You can also use our open source deepctl tool t...
https://python.langchain.com/docs/integrations/llms/deepinfra_example
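The chunk above reads the token with `getpass` and stores it in `DEEPINFRA_API_TOKEN`. A small stdlib sketch of resolving that variable later, with a loud failure when it is missing — the helper name is hypothetical:

```python
import os

def resolve_token(env_var="DEEPINFRA_API_TOKEN"):
    # Hypothetical helper: fetch the token set via os.environ above and
    # fail loudly if it is missing.
    token = os.environ.get(env_var)
    if not token:
        raise RuntimeError(f"{env_var} is not set; export it or use getpass as shown above")
    return token

os.environ["DEEPINFRA_API_TOKEN"] = "demo-token"  # placeholder, not a real token
```

Failing early like this surfaces a missing credential at startup instead of as an opaque HTTP 401 later.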
c26f2592eff4-3
the north pole!\n\nStill didn't understand?\nWell, you're a failure as a teacher."
https://python.langchain.com/docs/integrations/llms/deepinfra_example
68b279a657a4-0
StochasticAI | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/stochasticai
68b279a657a4-1
https://python.langchain.com/docs/integrations/llms/stochasticai
68b279a657a4-2
langchain.llms import StochasticAIfrom langchain import PromptTemplate, LLMChaintemplate = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])llm = StochasticAI(api_url=YOUR_API_URL)llm_chain = LLMChain(prompt=prompt, llm=llm)question = "Wh...
https://python.langchain.com/docs/integrations/llms/stochasticai
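The template/LLMChain pattern in the chunk above recurs across these integrations: a template with an input variable is filled in before the LLM is called. A minimal stdlib stand-in for that substitution step (this is an illustration of what `PromptTemplate` does, not its actual code):

```python
# A minimal stand-in for PromptTemplate: substitute the input variable into
# the template string, as the chunk above does before invoking the LLM.
template = "Question: {question}\n\nAnswer: Let's think step by step."

def format_prompt(question):
    return template.format(question=question)

prompt_text = format_prompt("What NFL team won the Super Bowl in the year Justin Bieber was born?")
```

The trailing "Let's think step by step." is the chain-of-thought nudge these docs reuse in nearly every example.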
774a4426c282-0
Beam | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/beam
774a4426c282-1
https://python.langchain.com/docs/integrations/llms/beam
774a4426c282-2
"<Your beam client id>"beam_client_secret = "<Your beam client secret>"# Set the environment variablesos.environ["BEAM_CLIENT_ID"] = beam_client_idos.environ["BEAM_CLIENT_SECRET"] = beam_client_secret# Run the beam configure commandbeam configure --clientId={beam_client_id} --clientSecret={beam_client_secret}Install th...
https://python.langchain.com/docs/integrations/llms/beam
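The Beam setup above exports the client id and secret as environment variables and then runs `beam configure`. A sketch of the same steps from Python, with the CLI call assembled as an argv list rather than executed (the credential values are placeholders):

```python
import os

# Placeholder credentials, mirroring the chunk above; real values come
# from your Beam dashboard.
beam_client_id = "<Your beam client id>"
beam_client_secret = "<Your beam client secret>"
os.environ["BEAM_CLIENT_ID"] = beam_client_id
os.environ["BEAM_CLIENT_SECRET"] = beam_client_secret

# The `beam configure` invocation from the docs, assembled but not run here.
cmd = ["beam", "configure",
       f"--clientId={beam_client_id}",
       f"--clientSecret={beam_client_secret}"]
```

Passing an argv list (e.g. to `subprocess.run`) avoids shell-quoting problems if a secret contains special characters.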
6e0bd04b7fc1-0
RELLM | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/rellm_experimental
6e0bd04b7fc1-1
https://python.langchain.com/docs/integrations/llms/rellm_experimental
6e0bd04b7fc1-2
Assistant:{ "action": "Final Answer", "action_input": "The capital of Pennsylvania is Harrisburg."}Human: "What's 2 + 5?"AI Assistant:{ "action": "Final Answer", "action_input": "2 + 5 = 7."}Human: 'What's the capital of Maryland?'AI Assistant:"""from transformers import pipelinefrom langchain.llms import HuggingFace...
https://python.langchain.com/docs/integrations/llms/rellm_experimental
6e0bd04b7fc1-3
regex=pattern, max_new_tokens=200)generated = model.predict(prompt, stop=["Human:"])print(generated) {"action": "Final Answer", "action_input": "The capital of Maryland is Baltimore." } Voila! Free of parsing errors.
https://python.langchain.com/docs/integrations/llms/rellm_experimental
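RELLM constrains *decoding* so every generated token keeps the output inside a regex. That decoder can't be reproduced with the stdlib, but the shape of the constraint can: the sketch below only validates an already-generated string against a pattern similar to the one the doc uses.

```python
import json
import re

# A pattern resembling the doc's action-JSON format. RELLM enforces this
# during generation; here we merely check a finished string against it.
pattern = re.compile(r'\{"action":\s*"Final Answer",\s*"action_input":\s*"[^"]*"\s*\}')

generated = '{"action": "Final Answer", "action_input": "The capital of Maryland is Baltimore."}'
ok = pattern.fullmatch(generated) is not None
parsed = json.loads(generated)  # matches the pattern, so this parse cannot fail
```

This is why the doc can say "free of parsing errors": anything the constrained decoder emits is valid JSON by construction.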
858969f36a57-0
Tongyi Qwen | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/tongyi
858969f36a57-1
https://python.langchain.com/docs/integrations/llms/tongyi
858969f36a57-2
osos.environ["DASHSCOPE_API_KEY"] = DASHSCOPE_API_KEYfrom langchain.llms import Tongyifrom langchain import PromptTemplate, LLMChaintemplate = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])llm = Tongyi()llm_chain = LLMChain(prompt=prom...
https://python.langchain.com/docs/integrations/llms/tongyi
8aabdd622f56-0
GooseAI | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/gooseai_example
8aabdd622f56-1
https://python.langchain.com/docs/integrations/llms/gooseai_example
8aabdd622f56-2
= GOOSEAI_API_KEYCreate the GooseAI instance You can specify different parameters such as the model name, max tokens generated, temperature, etc.llm = GooseAI()Create a Prompt Template We will create a prompt template for Question and Answer.template = """Question: {question}Answer: Let's think step by step."""prom...
https://python.langchain.com/docs/integrations/llms/gooseai_example
2822079b266b-0
JSONFormer | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/jsonformer_experimental
2822079b266b-1
https://python.langchain.com/docs/integrations/llms/jsonformer_experimental
2822079b266b-2
"""Query the BigCode StarCoder model about coding questions.""" url = "https://api-inference.huggingface.co/models/bigcode/starcoder" headers = { "Authorization": f"Bearer {HF_TOKEN}", "content-type": "application/json", } payload = { "inputs": f"{query}\n\nAnswer:", "temperature...
https://python.langchain.com/docs/integrations/llms/jsonformer_experimental
2822079b266b-3
"What's the difference between an SVM and an LLM?"AI Assistant:{{ "action": "ask_star_coder", "action_input": {{"query": "What's the difference between SGD and an SVM?", "temperature": 1.0, "max_new_tokens": 250}}}}Observation: "SGD stands for stochastic gradient descent, while an SVM is a Support Vector Machine."BEG...
https://python.langchain.com/docs/integrations/llms/jsonformer_experimental
2822079b266b-4
"type": "object", "properties": ask_star_coder.args, }, },}from langchain.experimental.llms import JsonFormerjson_former = JsonFormer(json_schema=decoder_schema, pipeline=hf_model)results = json_former.predict(prompt, stop=["Observation:", "Human:"])print(results) {"action": "ask_star_coder", "a...
https://python.langchain.com/docs/integrations/llms/jsonformer_experimental
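The `decoder_schema` in the chunk above is a plain JSON-Schema-style dict. Since `ask_star_coder.args` isn't available outside that notebook, the sketch below substitutes a hypothetical stand-in dict built from the parameters the chunk shows (`query`, `temperature`, `max_new_tokens`):

```python
# Hypothetical stand-in for ask_star_coder.args, using the parameters
# visible in the chunk above.
ask_star_coder_args = {
    "query": {"type": "string"},
    "temperature": {"type": "number"},
    "max_new_tokens": {"type": "integer"},
}

# Sketch of the decoder schema handed to JsonFormer: an action name plus
# a nested object whose properties are the tool's arguments.
decoder_schema = {
    "type": "object",
    "properties": {
        "action": {"type": "string"},
        "action_input": {
            "type": "object",
            "properties": ask_star_coder_args,
        },
    },
}
```

JSONFormer walks this schema at decode time, emitting the structural tokens itself and letting the model fill only the values.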
bb5ef7a74ba4-0
Aleph Alpha | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/aleph_alpha
bb5ef7a74ba4-1
https://python.langchain.com/docs/integrations/llms/aleph_alpha
bb5ef7a74ba4-2
model="luminous-extended", maximum_tokens=20, stop_sequences=["Q:"], aleph_alpha_api_key=ALEPH_ALPHA_API_KEY,)llm_chain = LLMChain(prompt=prompt, llm=llm)question = "What is AI?"llm_chain.run(question) ' Artificial Intelligence (AI) is the simulation of human intelligence processes by machines, especially c...
https://python.langchain.com/docs/integrations/llms/aleph_alpha
a23c4d46ba88-0
Banana | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/banana
a23c4d46ba88-1
https://python.langchain.com/docs/integrations/llms/banana
a23c4d46ba88-2
{question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])llm = Banana(model_key="YOUR_MODEL_KEY")llm_chain = LLMChain(prompt=prompt, llm=llm)question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"llm_chain.run(question)
https://python.langchain.com/docs/integrations/llms/banana
ff444a2dcb48-0
Azure OpenAI | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/azure_openai_example
ff444a2dcb48-1
https://python.langchain.com/docs/integrations/llms/azure_openai_example
ff444a2dcb48-2
your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource.export OPENAI_API_BASE=https://your-resource-name.openai.azure.com# The API key for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource.export OPENAI_API_KEY=<your Azure Op...
https://python.langchain.com/docs/integrations/llms/azure_openai_example
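The shell exports above can equally be set from Python before constructing the LLM. The values below are placeholders; `OPENAI_API_TYPE` and `OPENAI_API_VERSION` are the companion variables Azure OpenAI setups of this era commonly require alongside the base URL and key, so treat the exact version string as an assumption:

```python
import os

# Same configuration as the shell exports above, done from Python.
# All values are placeholders for your own resource.
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://your-resource-name.openai.azure.com"
os.environ["OPENAI_API_KEY"] = "<your Azure OpenAI API key>"
os.environ["OPENAI_API_VERSION"] = "2023-05-15"  # assumed version string
```

Setting these before the client is created means the deployment name is the only per-model argument left to pass.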
ff444a2dcb48-3
model_name="text-davinci-002",)# Run the LLMllm("Tell me a joke") "\n\nWhy couldn't the bicycle stand up by itself? Because it was...two tired!"We can also print the LLM and see its custom print.print(llm) AzureOpenAI Params: {'deployment_name': 'text-davinci-002', 'model_name': 'text-davinci-002', 'temperatur...
https://python.langchain.com/docs/integrations/llms/azure_openai_example
9ef6998cc538-0
PipelineAI | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/pipelineai_example
9ef6998cc538-1
https://python.langchain.com/docs/integrations/llms/pipelineai_example
9ef6998cc538-2
hours of serverless GPU compute to test different models.os.environ["PIPELINE_API_KEY"] = "YOUR_API_KEY_HERE"Create the PipelineAI instance When instantiating PipelineAI, you need to specify the id or tag of the pipeline you want to use, e.g. pipeline_key = "public/gpt-j:base". You then have the option of passing add...
https://python.langchain.com/docs/integrations/llms/pipelineai_example
176f630c1891-0
Cohere | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/cohere
176f630c1891-1
https://python.langchain.com/docs/integrations/llms/cohere
176f630c1891-2
= LLMChain(prompt=prompt, llm=llm)question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"llm_chain.run(question) " Let's start with the year that Justin Bieber was born. You know that he was born in 1994. We have to go back one year. 1993.\n\n1993 was the year that the Dallas Cowboys won th...
https://python.langchain.com/docs/integrations/llms/cohere
3b84d9f5b10b-0
octoai | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/octoai
3b84d9f5b10b-1
https://python.langchain.com/docs/integrations/llms/octoai
3b84d9f5b10b-2
Token from your OctoAI account page.Paste your API key in in the code cell belowimport osos.environ["OCTOAI_API_TOKEN"] = "OCTOAI_API_TOKEN"os.environ["ENDPOINT_URL"] = "https://mpt-7b-demo-kk0powt97tmb.octoai.cloud/generate"from langchain.llms.octoai_endpoint import OctoAIEndpointfrom langchain import PromptTemplate, ...
https://python.langchain.com/docs/integrations/llms/octoai
3b84d9f5b10b-3
human-powered aircraft control. He may have pioneered helicopters. As a scholar, he was interested in anatomy, geology, botany, engineering, mathematics, and astronomy.\nOther painters and patrons claimed to be more talented, but Leonardo da Vinci was an incredibly productive artist, sculptor, engineer, anatomist, and ...
https://python.langchain.com/docs/integrations/llms/octoai
9291743c093d-0
Llama-cpp | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/llamacpp
9291743c093d-1
https://python.langchain.com/docs/integrations/llms/llamacpp
9291743c093d-2
It supports several LLMs.This notebook goes over how to run llama-cpp within LangChain.Installation There are several options for installing the llama-cpp package: CPU-only usage; CPU + GPU (using one of many BLAS backends); Metal GPU (macOS with an Apple Silicon chip). CPU-only installation: pip install llama-cpp-pythonI...
https://python.langchain.com/docs/integrations/llms/llamacpp
9291743c093d-3
is stable to install the llama-cpp-python library by compiling from source. You can follow most of the instructions in the repository itself, but there are some Windows-specific instructions which might be useful.Requirements to install llama-cpp-python: git, python, cmake, Visual Studio Community (make sure you instal...
https://python.langchain.com/docs/integrations/llms/llamacpp
9291743c093d-4
CallbackManager([StreamingStdOutCallbackHandler()])# Verbose is required to pass to the callback managerCPU​Llama-v2# Make sure the model path is correct for your system!llm = LlamaCpp( model_path="/Users/rlm/Desktop/Code/llama/llama-2-7b-ggml/llama-2-7b-chat.ggmlv3.q4_0.bin", input={"temperature": 0.75, "max_l...
https://python.langchain.com/docs/integrations/llms/llamacpp
9291743c093d-5
news and your jokes. While I'm the one who's really makin' a difference, with my sat llama_print_timings: load time = 358.60 ms llama_print_timings: sample time = 172.55 ms / 256 runs ( 0.67 ms per token, 1483.59 tokens per second) llama_print_timings: prompt eval time = 613.36...
https://python.langchain.com/docs/integrations/llms/llamacpp
9291743c093d-6
the truth to light.\nStephen Colbert:\nTruth? Ha! You think your show is about truth? Please, it's all just a joke to you.\nYou're just a fancy-pants british guy tryin' to be funny with your news and your jokes.\nWhile I'm the one who's really makin' a difference, with my sat"Llama-v1# Make sure the model path is corre...
https://python.langchain.com/docs/integrations/llms/llamacpp
9291743c093d-7
ms / 48 tokens ( 52.58 ms per token) llama_print_timings: eval time = 23971.57 ms / 121 runs ( 198.11 ms per token) llama_print_timings: total time = 28945.95 ms '\n\n1. First, find out when Justin Bieber was born.\n2. We know that Justin Bieber was born on March 1, 1994.\n3. Next, we ne...
https://python.langchain.com/docs/integrations/llms/llamacpp
9291743c093d-8
callback_manager=callback_manager, verbose=True,)llm_chain = LLMChain(prompt=prompt, llm=llm)question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"llm_chain.run(question) We are looking for an NFL team that won the Super Bowl when Justin Bieber (born March 1, 1994) was born. Fi...
https://python.langchain.com/docs/integrations/llms/llamacpp
9291743c093d-9
ms per token) llama_print_timings: prompt eval time = 238.04 ms / 49 tokens ( 4.86 ms per token) llama_print_timings: eval time = 10391.96 ms / 255 runs ( 40.75 ms per token) llama_print_timings: total time = 15664.80 ms " We are looking for an NFL team that won the Super Bowl whe...
https://python.langchain.com/docs/integrations/llms/llamacpp
9291743c093d-10
layers of the model are offloaded to your Metal GPU; in most cases, setting it to 1 is enough for Metal. n_batch - how many tokens are processed in parallel; the default is 8, set it to a bigger number. f16_kv - for some reason, Metal only supports True, otherwise you will get an error such as Asserting on type 0
https://python.langchain.com/docs/integrations/llms/llamacpp
9291743c093d-11
GGML_ASSERT: .../ggml-metal.m:706: false && "not implemented"Setting these parameters correctly will dramatically improve the evaluation speed (see wrapper code for more details).n_gpu_layers = 1 # Metal set to 1 is enough.n_batch = 512 # Should be between 1 and n_ctx, consider the amount of RAM of your Apple Silicon...
https://python.langchain.com/docs/integrations/llms/llamacpp
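The Metal notes above boil down to three parameters. Gathering them as keyword arguments for `LlamaCpp` makes the configuration explicit — the model path below is a placeholder and the wrapper is not invoked here:

```python
# Metal parameters from the notes above, collected as LlamaCpp kwargs.
n_gpu_layers = 1   # 1 offloaded layer is enough to engage Metal
n_batch = 512      # between 1 and n_ctx; size it against your Apple Silicon RAM
llama_kwargs = {
    "model_path": "/path/to/llama-2-7b-chat.ggmlv3.q4_0.bin",  # placeholder path
    "n_gpu_layers": n_gpu_layers,
    "n_batch": n_batch,
    "f16_kv": True,  # Metal only supports True, per the note above
}
```

With a dict like this, the same settings can be reused across `LlamaCpp(**llama_kwargs)` calls for different prompts or chains.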
df1bd09c9d4f-0
Bedrock | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/bedrock
df1bd09c9d4f-1
https://python.langchain.com/docs/integrations/llms/bedrock
df1bd09c9d4f-2
there!")
https://python.langchain.com/docs/integrations/llms/bedrock
ef28a02df703-0
Hugging Face Local Pipelines | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/huggingface_pipelines
ef28a02df703-1
https://python.langchain.com/docs/integrations/llms/huggingface_pipelines
ef28a02df703-2
HuggingFacePipelinellm = HuggingFacePipeline.from_model_id( model_id="bigscience/bloom-1b7", task="text-generation", model_kwargs={"temperature": 0, "max_length": 64},) WARNING:root:Failed to default session, using empty session: HTTPConnectionPool(host='localhost', port=8000): Max retries exceeded with url...
https://python.langchain.com/docs/integrations/llms/huggingface_pipelines
ef28a02df703-3
Failed to establish a new connection: [Errno 61] Connection refused')) First, we need to understand what is an electroencephalogram. An electroencephalogram is a recording of brain activity. It is a recording of brain activity that is made by placing electrodes on the scalp. The electrodes are placed...
https://python.langchain.com/docs/integrations/llms/huggingface_pipelines
76caf237e446-0
ForefrontAI | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/forefrontai_example
76caf237e446-1
https://python.langchain.com/docs/integrations/llms/forefrontai_example
76caf237e446-2
ForefrontAI instance You can specify different parameters such as the model endpoint url, length, temperature, etc. You must provide an endpoint url.llm = ForefrontAI(endpoint_url="YOUR ENDPOINT URL HERE")Create a Prompt Template We will create a prompt template for Question and Answer.template = """Question: {ques...
https://python.langchain.com/docs/integrations/llms/forefrontai_example
6eab2d2cb90d-0
OpenAI | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/openai
6eab2d2cb90d-1
https://python.langchain.com/docs/integrations/llms/openai
6eab2d2cb90d-2
OPENAI_ORGANIZATIONfrom langchain.llms import OpenAIfrom langchain import PromptTemplate, LLMChaintemplate = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])llm = OpenAI()If you manually want to specify your OpenAI API key and/or organiz...
https://python.langchain.com/docs/integrations/llms/openai
7967da7bf0c2-0
Amazon API Gateway | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/amazon_api_gateway_example
7967da7bf0c2-1
https://python.langchain.com/docs/integrations/llms/amazon_api_gateway_example
7967da7bf0c2-2
the API calls you receive and the amount of data transferred out and, with the API Gateway tiered pricing model, you can reduce your cost as your API usage scales.LLM from langchain.llms import AmazonAPIGatewayapi_url = "https://<api_gateway_id>.execute-api.<region>.amazonaws.com/LATEST/HF"llm = AmazonAPIGateway(api_...
https://python.langchain.com/docs/integrations/llms/amazon_api_gateway_example
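The `api_url` placeholder above follows the standard API Gateway endpoint shape. A small sketch that assembles it from its parts — the id, region, and the `LATEST/HF` stage/route are taken from the placeholder, and the example values are hypothetical:

```python
# Build the API Gateway endpoint URL from its parts, mirroring the
# placeholder URL in the chunk above (id and region are hypothetical).
def gateway_url(api_gateway_id, region, stage="LATEST", route="HF"):
    return f"https://{api_gateway_id}.execute-api.{region}.amazonaws.com/{stage}/{route}"

api_url = gateway_url("abcd1234", "us-east-1")
```

Building the URL from named parts keeps the gateway id and region in one place when you deploy the same stack to several regions.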
7967da7bf0c2-3
the language model, and the type of agent we want to use.agent = initialize_agent( tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True,)# Now let's test it out!agent.run( """Write a Python script that prints "Hello, world!"""") > Entering new chain... I need to use th...
https://python.langchain.com/docs/integrations/llms/amazon_api_gateway_example
7967da7bf0c2-4
> Finished chain. '42.43998894277659'
https://python.langchain.com/docs/integrations/llms/amazon_api_gateway_example
5f5eb06e5450-0
Anyscale | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/anyscale
5f5eb06e5450-1
https://python.langchain.com/docs/integrations/llms/anyscale
5f5eb06e5450-2
PromptTemplate(template=template, input_variables=["question"])llm = Anyscale()llm_chain = LLMChain(prompt=prompt, llm=llm)question = "When was George Washington president?"llm_chain.run(question)With Ray, we can distribute the queries without an asynchronous implementation. This not only applies to the Anyscale LLM model, b...
https://python.langchain.com/docs/integrations/llms/anyscale
f3ec7e5c32e4-0
MosaicML | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/mosaicml
f3ec7e5c32e4-1
https://python.langchain.com/docs/integrations/llms/mosaicml
f3ec7e5c32e4-2
model_kwargs={"do_sample": False})llm_chain = LLMChain(prompt=prompt, llm=llm)question = "What is one good reason why you should train a large language model on domain specific data?"llm_chain.run(question)
https://python.langchain.com/docs/integrations/llms/mosaicml
619b5efe033c-0
Memory | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/memory/