Columns: id (string, lengths 14–16); text (string, lengths 29–2.73k); source (string, lengths 49–117)
7cd9c88f0159-1
Note that the input to this prompt template is just driver_id, since that is the only user-defined piece (all other variables are looked up inside the prompt template). from langchain.prompts import PromptTemplate, StringPromptTemplate template = """Given the driver's up to date stats, write them a note relaying those st...
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/connecting_to_a_feature_store.html
7cd9c88f0159-2
Here are the driver's stats: Conversation rate: 0.4745151400566101 Acceptance rate: 0.055561766028404236 Average Daily Trips: 936 Your response: Use in a chain# We can now use this in a chain, successfully creating a chain that achieves personalization backed by a feature store. from langchain.chat_models import ChatOpen...
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/connecting_to_a_feature_store.html
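The chunks above describe a prompt template whose only user-supplied input is `driver_id`, with every other variable fetched from a feature store at format time. A minimal sketch of that pattern, using a plain dict as a hypothetical stand-in for the feature store (the store contents and field names here are invented for illustration):

```python
# Hypothetical stand-in for a feature store: maps driver_id -> feature dict.
FAKE_FEATURE_STORE = {
    "driver_1001": {"conv_rate": 0.47, "acc_rate": 0.056, "avg_daily_trips": 936},
}

TEMPLATE = (
    "Given the driver's up-to-date stats, write them a note relaying those stats.\n"
    "Conversation rate: {conv_rate}\n"
    "Acceptance rate: {acc_rate}\n"
    "Average Daily Trips: {avg_daily_trips}\n"
    "Your response:"
)

def format_driver_prompt(driver_id: str) -> str:
    """The caller supplies only driver_id; all other variables are
    resolved inside the template by a feature-store lookup."""
    features = FAKE_FEATURE_STORE[driver_id]
    return TEMPLATE.format(**features)
```

The real notebooks do the same thing with Feast, Tecton, or Featureform clients in place of the dict lookup.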
7cd9c88f0159-3
user_transaction_metrics = FeatureService( name = "user_transaction_metrics", features = [user_transaction_counts] ) The above Feature Service is expected to be applied to a live workspace. For this example, we will be using the “prod” workspace. import tecton workspace = tecton.get_workspace("prod") feature_se...
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/connecting_to_a_feature_store.html
7cd9c88f0159-4
kwargs["transaction_count_30d"] = feature_vector["user_transaction_counts.transaction_count_30d_1d"] return prompt.format(**kwargs) prompt_template = TectonPromptTemplate(input_variables=["user_id"]) print(prompt_template.format(user_id="user_469998441571")) Given the vendor's up to date transaction stats, writ...
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/connecting_to_a_feature_store.html
7cd9c88f0159-5
client = ff.Client(host="demo.featureform.com") Prompts# Here we will set up a custom FeatureformPromptTemplate. This prompt template will take in the average amount a user pays per transaction. Note that the input to this prompt template is just avg_transaction, since that is the only user-defined piece (all other va...
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/connecting_to_a_feature_store.html
7cd9c88f0159-6
By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 02, 2023.
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/connecting_to_a_feature_store.html
6c32f77cf1b0-0
How to create a prompt template that uses few shot examples Contents Use Case Using an example set Create the example set Create a formatter for the few shot examples Feed examples and formatter to FewShotPromptTemplate Using an example selector Feed examples into ExampleSelector Feed example selector int...
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/few_shot_examples.html
6c32f77cf1b0-1
"answer": """ Are follow up questions needed here: Yes. Follow up: Who was the founder of craigslist? Intermediate answer: Craigslist was founded by Craig Newmark. Follow up: When was Craig Newmark born? Intermediate answer: Craig Newmark was born on December 6, 1952. So the final answer is: December 6, 1952 """ }, ...
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/few_shot_examples.html
6c32f77cf1b0-2
print(example_prompt.format(**examples[0])) Question: Who lived longer, Muhammad Ali or Alan Turing? Are follow up questions needed here: Yes. Follow up: How old was Muhammad Ali when he died? Intermediate answer: Muhammad Ali was 74 years old when he died. Follow up: How old was Alan Turing when he died? Intermediate ...
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/few_shot_examples.html
6c32f77cf1b0-3
Are follow up questions needed here: Yes. Follow up: Who was the mother of George Washington? Intermediate answer: The mother of George Washington was Mary Ball Washington. Follow up: Who was the father of Mary Ball Washington? Intermediate answer: The father of Mary Ball Washington was Joseph Ball. So the final answer...
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/few_shot_examples.html
6c32f77cf1b0-4
# This is the list of examples available to select from. examples, # This is the embedding class used to produce embeddings which are used to measure semantic similarity. OpenAIEmbeddings(), # This is the VectorStore class that is used to store the embeddings and do a similarity search over. Chroma,...
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/few_shot_examples.html
6c32f77cf1b0-5
suffix="Question: {input}", input_variables=["input"] ) print(prompt.format(input="Who was the father of Mary Ball Washington?")) Question: Who was the maternal grandfather of George Washington? Are follow up questions needed here: Yes. Follow up: Who was the mother of George Washington? Intermediate answer: The m...
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/few_shot_examples.html
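The few-shot chunks above assemble a prompt from formatted examples plus a suffix holding the new question. A minimal sketch of that assembly, without LangChain (the example content is abbreviated from the chunks; `format_few_shot` is a hypothetical helper, not the library API):

```python
def format_few_shot(examples, example_template, prefix, suffix, **inputs):
    """Render a few-shot prompt: optional prefix, each example formatted
    through example_template, then the suffix with the new inputs."""
    parts = [prefix] if prefix else []
    parts += [example_template.format(**ex) for ex in examples]
    parts.append(suffix.format(**inputs))
    return "\n\n".join(parts)

examples = [
    {"question": "Who lived longer, Muhammad Ali or Alan Turing?",
     "answer": "Muhammad Ali lived longer."},
]
prompt = format_few_shot(
    examples,
    example_template="Question: {question}\n{answer}",
    prefix="",
    suffix="Question: {input}",
    input="Who was the father of Mary Ball Washington?",
)
```

FewShotPromptTemplate produces the same shape, with the examples either given directly or chosen by an ExampleSelector.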
6d039cb11352-0
How to serialize prompts Contents PromptTemplate Loading from YAML Loading from JSON Loading Template from a File FewShotPromptTemplate Examples Loading from YAML Loading from JSON Examples in the Config Example Prompt from a File PromptTemplate with OutputParser How to serialize prompts# It is often pref...
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/prompt_serialization.html
6d039cb11352-1
prompt = load_prompt("simple_prompt.yaml") print(prompt.format(adjective="funny", content="chickens")) Tell me a funny joke about chickens. Loading from JSON# This shows an example of loading a PromptTemplate from JSON. !cat simple_prompt.json { "_type": "prompt", "input_variables": ["adjective", "content"], ...
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/prompt_serialization.html
6d039cb11352-2
output: sad - input: tall output: short Loading from YAML# This shows an example of loading a few shot example from YAML. !cat few_shot_prompt.yaml _type: few_shot input_variables: ["adjective"] prefix: Write antonyms for the following words. example_prompt: _type: prompt input_variables: ["i...
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/prompt_serialization.html
6d039cb11352-3
!cat few_shot_prompt.json { "_type": "few_shot", "input_variables": ["adjective"], "prefix": "Write antonyms for the following words.", "example_prompt": { "_type": "prompt", "input_variables": ["input", "output"], "template": "Input: {input}\nOutput: {output}" }, "exampl...
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/prompt_serialization.html
6d039cb11352-4
Output: short Input: funny Output: Example Prompt from a File# This shows an example of loading the PromptTemplate that is used to format the examples from a separate file. Note that the key changes from example_prompt to example_prompt_path. !cat example_prompt.json { "_type": "prompt", "input_variables": ["in...
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/prompt_serialization.html
6d039cb11352-5
"_type": "regex_parser" }, "partial_variables": {}, "template": "Given the following question and student answer, provide a correct answer and score the student answer.\nQuestion: {question}\nStudent Answer: {student_answer}\nCorrect Answer:", "template_format": "f-string", "validate_template": true...
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/prompt_serialization.html
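The serialization chunks show prompt configs stored as JSON/YAML with a `_type`, `input_variables`, and `template`. A minimal stand-in for `load_prompt` on the JSON case, assuming only the config shape shown in `simple_prompt.json` above:

```python
import json

# Same shape as simple_prompt.json in the chunk above.
config_json = """
{
  "_type": "prompt",
  "input_variables": ["adjective", "content"],
  "template": "Tell me a {adjective} joke about {content}."
}
"""

def load_prompt_from_json(raw: str):
    """Parse the config and return a format function that checks the
    declared input variables before substituting them."""
    cfg = json.loads(raw)
    assert cfg["_type"] == "prompt"
    template, variables = cfg["template"], set(cfg["input_variables"])

    def fmt(**kwargs):
        if set(kwargs) != variables:
            raise ValueError(f"expected variables {variables}, got {set(kwargs)}")
        return template.format(**kwargs)

    return fmt
```

The real `load_prompt` additionally dispatches on `_type` (e.g. `few_shot`) and supports `*_path` keys that pull pieces from separate files.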
00148f8e8872-0
How to create a custom prompt template Contents Why are custom prompt templates needed? Creating a Custom Prompt Template Use the custom prompt template How to create a custom prompt template# Let’s suppose we want the LLM to generate English language explanations of a function given its name. To achieve ...
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/custom_prompt_template.html
00148f8e8872-1
def get_source_code(function_name): # Get the source code of the function return inspect.getsource(function_name) Next, we’ll create a custom prompt template that takes in the function name as input, and formats the prompt template to provide the source code of the function. from langchain.prompts import String...
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/custom_prompt_template.html
00148f8e8872-2
prompt = fn_explainer.format(function_name=get_source_code) print(prompt) Given the function name and source code, generate an English language explanation of the function. Function Name: get_source_code Source Code: def get_source_code(function_name): # Get the source code of the fu...
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/custom_prompt_template.html
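The custom-template chunks above use `inspect.getsource` to pull a function's source into the prompt. A hypothetical plain-function equivalent of that custom `StringPromptTemplate` (the wording mirrors the printed prompt in the chunk):

```python
import inspect

def get_source_code(function_name):
    # Get the source code of the function, as in the notebook above.
    return inspect.getsource(function_name)

def explain_function_prompt(fn) -> str:
    """Format the function's name and source into an explanation prompt,
    mirroring what the notebook's custom template produces."""
    return (
        "Given the function name and source code, generate an English "
        "language explanation of the function.\n"
        f"Function Name: {fn.__name__}\n"
        "Source Code:\n"
        f"{get_source_code(fn)}"
    )
```

Note `inspect.getsource` only works for functions whose source is available on disk, which is why the notebook passes a regular module-level function.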
650011c4cc74-0
How to work with partial Prompt Templates Contents Partial With Strings Partial With Functions How to work with partial Prompt Templates# A prompt template is a class with a .format method which takes in a key-value map and returns a string (a prompt) to pass to the language model. Like other methods, it ...
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/partial.html
650011c4cc74-1
print(prompt.format(bar="baz")) foobaz Partial With Functions# The other common use is to partial with a function. The use case for this is when you have a variable you know that you always want to fetch in a common way. A prime example of this is with date or time. Imagine you have a prompt which you always want to ha...
https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/partial.html
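The two partialing styles described above (bind a string now, or bind a function that is re-evaluated at each format call, e.g. for the current date) can be sketched with one closure; `make_partial` is a hypothetical helper, not the library's `prompt.partial` API:

```python
from datetime import datetime

def make_partial(template: str, **fixed):
    """Bind some template variables now; values may be plain strings or
    zero-argument callables evaluated freshly at every format call."""
    def fmt(**kwargs):
        bound = {k: (v() if callable(v) else v) for k, v in fixed.items()}
        return template.format(**bound, **kwargs)
    return fmt

# Partial with a string:
foo = make_partial("{foo}{bar}", foo="foo")
# Partial with a function, so the date is fetched at format time:
dated = make_partial(
    "Tell me a {adjective} joke about the day {date}",
    date=lambda: datetime.now().strftime("%m/%d/%Y"),
)
```

Evaluating the callable inside `fmt` (not at construction) is the point: the date reflects when the prompt is formatted, not when it was built.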
143d58f4f82a-0
Output Parsers Output Parsers# Language models output text. But many times you may want to get more structured information than just text back. This is where output parsers come in. Output parsers are classes that help structure language model responses. There are two main methods an output parser must impl...
https://python.langchain.com/en/latest/modules/prompts/output_parsers/getting_started.html
143d58f4f82a-1
punchline: str = Field(description="answer to resolve the joke") # You can add custom validation logic easily with Pydantic. @validator('setup') def question_ends_with_question_mark(cls, field): if field[-1] != '?': raise ValueError("Badly formed question!") return field # S...
https://python.langchain.com/en/latest/modules/prompts/output_parsers/getting_started.html
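The getting-started chunk says an output parser must implement two methods: one returning format instructions for the prompt, and one parsing the model's reply. A toy parser showing that contract (the boolean example is invented for illustration, not from the notebook):

```python
class BooleanOutputParser:
    """Minimal parser implementing the two-method contract described above:
    get_format_instructions() and parse()."""

    def get_format_instructions(self) -> str:
        return "Answer with exactly one word: YES or NO."

    def parse(self, text: str) -> bool:
        cleaned = text.strip().upper().rstrip(".")
        if cleaned == "YES":
            return True
        if cleaned == "NO":
            return False
        raise ValueError(f"Could not parse output: {text!r}")
```

The instructions string is typically injected into the prompt (e.g. via `partial_variables`), and `parse` raises on anything off-schema so wrappers like OutputFixingParser can react.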
d96c49d5cecc-0
RetryOutputParser RetryOutputParser# While in some cases parsing mistakes can be fixed by looking only at the output, in other cases they cannot. An example of this is when the output is not just in the incorrect format, but is partially complete. Consider the below example. from langchain.prompts...
https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/retry.html
d96c49d5cecc-1
23 json_object = json.loads(json_str) ---> 24 return self.pydantic_object.parse_obj(json_object) 26 except (json.JSONDecodeError, ValidationError) as e: File ~/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/pydantic/main.py:527, in pydantic.main.BaseModel.parse_obj() File ~/.pyenv/version...
https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/retry.html
d96c49d5cecc-2
fix_parser.parse(bad_response) Action(action='search', action_input='') Instead, we can use the RetryOutputParser, which passes in the prompt (as well as the original output) to try again to get a better response. from langchain.output_parsers import RetryWithErrorOutputParser retry_parser = RetryWithErrorOutputParser....
https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/retry.html
89d1eaa1f860-0
Enum Output Parser Enum Output Parser# This notebook shows how to use an Enum output parser. from langchain.output_parsers.enum import EnumOutputParser from enum import Enum class Colors(Enum): RED = "red" GREEN = "green" BLUE = "blue" parser = EnumOutputParser(enum=Colors) parser.parse("red") <C...
https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/enum.html
89d1eaa1f860-1
During handling of the above exception, another exception occurred: OutputParserException Traceback (most recent call last) Cell In[8], line 2 1 # And raises errors when appropriate ----> 2 parser.parse("yellow") File ~/workplace/langchain/langchain/output_parsers/enum.py:27, in EnumOutputPars...
https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/enum.html
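The enum chunks show `parser.parse("red")` succeeding and `parser.parse("yellow")` raising. A sketch of that behavior with only the stdlib (`parse_enum` is a hypothetical function; the real class also strips whitespace and raises `OutputParserException`):

```python
from enum import Enum

class Colors(Enum):
    RED = "red"
    GREEN = "green"
    BLUE = "blue"

def parse_enum(enum_cls, text: str):
    """Map a raw completion onto an enum member by value; anything
    outside the enum's values is an error."""
    try:
        return enum_cls(text.strip())
    except ValueError as exc:
        allowed = [m.value for m in enum_cls]
        raise ValueError(f"Response {text!r} is not one of {allowed}") from exc
```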
acb1d174f78a-0
OutputFixingParser OutputFixingParser# This output parser wraps another output parser and tries to fix any mistakes. The Pydantic guardrail simply tries to parse the LLM response. If it does not parse correctly, then it errors. But we can do other things besides throw errors. Specifically, we can pass the mi...
https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/output_fixing_parser.html
acb1d174f78a-1
24 return self.pydantic_object.parse_obj(json_object) File ~/.pyenv/versions/3.9.1/lib/python3.9/json/__init__.py:346, in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw) 343 if (cls is None and object_hook is None and 344 parse_int is None and parse_float is N...
https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/output_fixing_parser.html
acb1d174f78a-2
Cell In[6], line 1 ----> 1 parser.parse(misformatted) File ~/workplace/langchain/langchain/output_parsers/pydantic.py:29, in PydanticOutputParser.parse(self, text) 27 name = self.pydantic_object.__name__ 28 msg = f"Failed to parse {name} from completion {text}. Got: {e}" ---> 29 raise OutputParserException(ms...
https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/output_fixing_parser.html
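The fixing-parser idea above is: try the wrapped parser, and on failure hand the bad output (and the error) to another model to repair, then parse again. A sketch with a stub in place of the LLM fixer (the quote-swapping `fix_quotes` is a hypothetical stand-in mirroring the misformatted-JSON example in this notebook):

```python
import json

def fixing_parse(parser, text, fix_fn):
    """Try the wrapped parser; on failure, call fix_fn with the bad output
    and the error message (standing in for an LLM call), then re-parse."""
    try:
        return parser(text)
    except Exception as exc:
        repaired = fix_fn(text, str(exc))
        return parser(repaired)

# Hypothetical "fixer": normalizes quote style, the exact misformatting
# an LLM fixer is asked to repair in the notebook.
fix_quotes = lambda bad, error: bad.replace("'", '"')

misformatted = "{'name': 'Tom', 'height': 185}"
result = fixing_parse(json.loads, misformatted, fix_quotes)
```

RetryOutputParser differs in that it also passes the original *prompt* back to the model, which helps when the output is incomplete rather than merely misformatted.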
0740e80feeee-0
PydanticOutputParser PydanticOutputParser# This output parser allows users to specify an arbitrary JSON schema and query LLMs for JSON outputs that conform to that schema. Keep in mind that large language models are leaky abstractions! You’ll have to use an LLM with sufficient capacity to generate well-form...
https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/pydantic.html
0740e80feeee-1
prompt = PromptTemplate( template="Answer the user query.\n{format_instructions}\n{query}\n", input_variables=["query"], partial_variables={"format_instructions": parser.get_format_instructions()} ) _input = prompt.format_prompt(query=joke_query) output = model(_input.to_string()) parser.parse(output) Joke(...
https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/pydantic.html
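The Pydantic-parser chunks combine schema-derived format instructions (injected via `partial_variables`) with validated parsing of the reply. A stdlib sketch of both halves, using a hand-written schema fragment as a stand-in for what Pydantic derives from the Joke model, and folding in the question-mark validator shown in the getting-started chunk:

```python
import json

# Hand-written JSON-schema fragment standing in for the derived Joke schema.
joke_schema = {
    "properties": {
        "setup": {"type": "string", "description": "question to set up a joke"},
        "punchline": {"type": "string", "description": "answer to resolve the joke"},
    },
    "required": ["setup", "punchline"],
}

def get_format_instructions(schema: dict) -> str:
    # Embed the schema in the prompt so the model knows the required shape.
    return ("The output should be formatted as a JSON instance that conforms "
            "to the JSON schema below.\n" + json.dumps(schema))

def parse_joke(text: str) -> dict:
    """Parse a completion and validate it against the schema, plus the
    'setup must end with ?' rule from the notebook's validator."""
    obj = json.loads(text)
    missing = [k for k in joke_schema["required"] if k not in obj]
    if missing:
        raise ValueError(f"missing fields: {missing}")
    if not obj["setup"].rstrip().endswith("?"):
        raise ValueError("Badly formed question!")
    return obj
```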
6a8183fd3437-0
Structured Output Parser Structured Output Parser# While the Pydantic/JSON parser is more powerful, we initially experimented with data structures having text fields only. from langchain.output_parsers import StructuredOutputParser, ResponseSchema from langchain.prompts import PromptTemplate, ChatPromptTemplate,...
https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/structured.html
6a8183fd3437-1
prompt = ChatPromptTemplate( messages=[ HumanMessagePromptTemplate.from_template("answer the users question as best as possible.\n{format_instructions}\n{question}") ], input_variables=["question"], partial_variables={"format_instructions": format_instructions} ) _input = prompt.format_prompt(...
https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/structured.html
c4ac9ccc5060-0
Datetime Datetime# This OutputParser shows how to parse LLM output into datetime format. from langchain.prompts import PromptTemplate from langchain.output_parsers import DatetimeOutputParser from langchain.chains import LLMChain from langchain.llms import OpenAI output_parser = DatetimeOutputParser() templ...
https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/datetime.html
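The datetime parser above follows the same two-method contract: a fixed pattern drives both the instructions given to the model and the parsing of its reply. A minimal stdlib sketch (the ISO-style pattern here is an assumption for illustration, not necessarily the library's default):

```python
from datetime import datetime

class DatetimeParser:
    """Toy datetime parser: one format string shared by the prompt
    instructions and the parsing step."""
    fmt = "%Y-%m-%dT%H:%M:%S"

    def get_format_instructions(self) -> str:
        return f"Write the datetime in the pattern {self.fmt}."

    def parse(self, text: str) -> datetime:
        return datetime.strptime(text.strip(), self.fmt)
```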
ac26bb1dc437-0
CommaSeparatedListOutputParser CommaSeparatedListOutputParser# Here’s another parser strictly less powerful than Pydantic/JSON parsing. from langchain.output_parsers import CommaSeparatedListOutputParser from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate from langch...
https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/comma_separated.html
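The comma-separated parser is the simplest of the family; its parse step is essentially a split-and-strip, which can be sketched in one function:

```python
def parse_comma_separated(text: str):
    """Split a completion like 'Vanilla, Chocolate, Strawberry' on commas
    and strip whitespace around each item."""
    return [item.strip() for item in text.strip().split(",")]
```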
040695d7d131-0
Maximal Marginal Relevance ExampleSelector Maximal Marginal Relevance ExampleSelector# The MaxMarginalRelevanceExampleSelector selects examples based on a combination of similarity to the inputs and diversity among the selected examples. It does this by finding the examples with the embeddin...
https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/mmr.html
040695d7d131-1
# This is the number of examples to produce. k=2 ) mmr_prompt = FewShotPromptTemplate( # We provide an ExampleSelector instead of examples. example_selector=example_selector, example_prompt=example_prompt, prefix="Give the antonym of every input", suffix="Input: {adjective}\nOutput:", input...
https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/mmr.html
040695d7d131-2
Give the antonym of every input Input: happy Output: sad Input: sunny Output: gloomy Input: worried Output:
https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/mmr.html
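The MMR selection described above (greedily pick the candidate that balances similarity to the query against similarity to what is already selected) can be sketched over precomputed similarity scores; the dict-based `query_sim`/`pair_sim` interface and the 0.5 trade-off weight are assumptions for illustration:

```python
def mmr_select(candidates, query_sim, pair_sim, k=2, lam=0.5):
    """Maximal marginal relevance over precomputed scores: at each step
    take the candidate maximizing lam * sim(query, c) minus
    (1 - lam) * max similarity to anything already selected."""
    selected = []
    pool = list(candidates)
    while pool and len(selected) < k:
        def score(c):
            redundancy = max((pair_sim.get((c, s), 0.0) for s in selected),
                             default=0.0)
            return lam * query_sim[c] - (1 - lam) * redundancy
        best = max(pool, key=score)
        selected.append(best)
        pool.remove(best)
    return selected
```

This reproduces the behavior shown in the chunk: after picking a "feeling" example, a near-duplicate feeling is skipped in favor of a more diverse one.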
df338a369dbf-0
How to create a custom example selector Contents Implement custom example selector Use custom example selector How to create a custom example selector# In this tutorial, we’ll create a custom example selector that selects every alternate example from a given list of examples. An ExampleSelector must implemen...
https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/custom_example_selector.html
df338a369dbf-1
# Add new example to the set of examples example_selector.add_example({"foo": "4"}) example_selector.examples # -> [{'foo': '1'}, {'foo': '2'}, {'foo': '3'}, {'foo': '4'}] # Select examples example_selector.select_examples({"foo": "foo"}) # -> array([{'foo': '1'}, {'foo': '4'}], dtype=object) previous Example Selectors...
https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/custom_example_selector.html
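An ExampleSelector implements `add_example` and `select_examples`. This is a deterministic every-other-example reading of that interface, matching the tutorial's description; note the tutorial's own output (`array([...], dtype=object)`) suggests its selector draws via NumPy and may pick differently:

```python
class AlternateExampleSelector:
    """Custom selector per the tutorial's description: add_example appends,
    select_examples returns every alternate example, ignoring the input."""

    def __init__(self, examples):
        self.examples = list(examples)

    def add_example(self, example):
        self.examples.append(example)

    def select_examples(self, input_variables):
        return self.examples[::2]

selector = AlternateExampleSelector([{"foo": "1"}, {"foo": "2"}, {"foo": "3"}])
selector.add_example({"foo": "4"})
```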
36e97fbbb6f3-0
NGram Overlap ExampleSelector NGram Overlap ExampleSelector# The NGramOverlapExampleSelector selects and orders examples based on which examples are most similar to the input, according to an ngram overlap score. The ngram overlap score is a float between 0.0 and 1.0, inclusive. The selector allows for a th...
https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/ngram_overlap.html
36e97fbbb6f3-1
{"input": "Spot can run.", "output": "Spot puede correr."}, ] example_prompt = PromptTemplate( input_variables=["input", "output"], template="Input: {input}\nOutput: {output}", ) example_selector = NGramOverlapExampleSelector( # These are the examples it has available to choose from. examples=examples, ...
https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/ngram_overlap.html
36e97fbbb6f3-2
Output: Ver correr a Spot. Input: My dog barks. Output: Mi perro ladra. Input: Spot can run fast. Output: # You can add examples to NGramOverlapExampleSelector as well. new_example = {"input": "Spot plays fetch.", "output": "Spot juega a buscar."} example_selector.add_example(new_example) print(dynamic_prompt.format(se...
https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/ngram_overlap.html
36e97fbbb6f3-3
Input: Spot plays fetch. Output: Spot juega a buscar. Input: Spot can play fetch. Output: # Setting threshold greater than 1.0 example_selector.threshold=1.0+1e-9 print(dynamic_prompt.format(sentence="Spot can play fetch.")) Give the Spanish translation of every input Input: Spot can play fetch. Output: previous Maxima...
https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/ngram_overlap.html
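The selector above ranks examples by an n-gram overlap score in [0.0, 1.0]. A toy version of such a score (the real selector uses a BLEU-style sentence score from NLTK, so treat this naive set-overlap as an illustration only):

```python
def ngram_overlap_score(source: str, example: str, n: int = 1) -> float:
    """Fraction of the example's n-grams that also occur in the source:
    1.0 for identical token sets, 0.0 for no overlap."""
    def ngrams(s):
        toks = s.lower().split()
        return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}
    src, ex = ngrams(source), ngrams(example)
    if not ex:
        return 0.0
    return len(src & ex) / len(ex)
```

With a score like this, a threshold of 0.0 excludes only zero-overlap examples, while a threshold above 1.0 excludes everything, matching the threshold behavior described in the chunks.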
d776f7426b1e-0
Similarity ExampleSelector Similarity ExampleSelector# The SemanticSimilarityExampleSelector selects examples based on which examples are most similar to the inputs. It does this by finding the examples with the embeddings that have the greatest cosine similarity with the inputs. from langchain.prompts.exam...
https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/similarity.html
d776f7426b1e-1
example_prompt=example_prompt, prefix="Give the antonym of every input", suffix="Input: {adjective}\nOutput:", input_variables=["adjective"], ) Running Chroma using direct local API. Using DuckDB in-memory for database. Data will be transient. # Input is a feeling, so should select the happy/sad example pr...
https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/similarity.html
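The mechanism above is: embed the query, then keep the k examples whose embedded inputs have the greatest cosine similarity to it. A self-contained sketch using a toy letter-count "embedding" as a deterministic stand-in for a real model such as OpenAIEmbeddings:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    if na == 0 or nb == 0:
        return 0.0
    return dot / (na * nb)

def bag_of_letters(text):
    # Toy deterministic "embedding" for illustration only.
    return [text.lower().count(c) for c in "abcdefghijklmnopqrstuvwxyz"]

def select_most_similar(examples, embed, query, k=1):
    """Rank examples by cosine similarity of their embedded inputs
    to the embedded query; keep the top k."""
    q = embed(query)
    ranked = sorted(examples, key=lambda ex: cosine(embed(ex["input"]), q),
                    reverse=True)
    return ranked[:k]
```

In the real selector the embeddings live in a vector store (Chroma in this notebook), which does the ranking as a similarity search.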
b7dddab2fbf8-0
LengthBased ExampleSelector LengthBased ExampleSelector# This ExampleSelector selects which examples to use based on length. This is useful when you are worried about constructing a prompt that will go over the length of the context window. For longer inputs, it will select fewer examples to include, while ...
https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/length_based.html
b7dddab2fbf8-1
# it is provided as a default value if none is specified. # get_text_length: Callable[[str], int] = lambda x: len(re.split("\n| ", x)) ) dynamic_prompt = FewShotPromptTemplate( # We provide an ExampleSelector instead of examples. example_selector=example_selector, example_prompt=example_prompt, pref...
https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/length_based.html
b7dddab2fbf8-2
Input: sunny Output: gloomy Input: windy Output: calm Input: big Output: small Input: enthusiastic Output:
https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/length_based.html
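The length-based behavior above (longer inputs leave room for fewer examples) can be sketched as a greedy loop over a word-count budget, using the default `get_text_length` shown in the chunk; the `max_length` budget of 12 below is an arbitrary illustration:

```python
import re

def get_text_length(text: str) -> int:
    # Default word counter from the chunk above.
    return len(re.split("\n| ", text))

def select_by_length(examples, example_template, input_text, max_length):
    """Greedily keep examples while the running word count
    (input plus formatted examples) stays within max_length."""
    selected, used = [], get_text_length(input_text)
    for ex in examples:
        cost = get_text_length(example_template.format(**ex))
        if used + cost > max_length:
            break
        selected.append(ex)
        used += cost
    return selected

examples = [{"input": "happy", "output": "sad"},
            {"input": "tall", "output": "short"},
            {"input": "energetic", "output": "lethargic"}]
```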
2fef064145cc-0
Chat Models Chat Models# Note Conceptual Guide Chat models are a variation on language models. While chat models use language models under the hood, the interface they expose is a bit different. Rather than expose a “text in, text out” API, they expose an interface where “chat messages” are the inputs and out...
https://python.langchain.com/en/latest/modules/models/chat.html
40921b52a82e-0
Text Embedding Models Text Embedding Models# Note Conceptual Guide This documentation goes over how to use the Embedding class in LangChain. The Embedding class is a class designed for interfacing with embeddings. There are lots of Embedding providers (OpenAI, Cohere, Hugging Face, etc) - this class is design...
https://python.langchain.com/en/latest/modules/models/text_embedding.html
efdff5a448f3-0
Getting Started Contents Language Models text -> text interface messages -> message interface Getting Started# One of the core value props of LangChain is that it provides a standard interface to models. This allows you to swap easily between models. At a high level, there are two main types of models: La...
https://python.langchain.com/en/latest/modules/models/getting_started.html
5df9f3e7a291-0
LLMs LLMs# Note Conceptual Guide Large Language Models (LLMs) are a core component of LangChain. LangChain is not a provider of LLMs, but rather provides a standard interface through which you can interact with a variety of LLMs. The following sections of documentation are provided: Getting Started: An overvi...
https://python.langchain.com/en/latest/modules/models/llms.html
6e7168377ec2-0
Integrations Integrations# The examples here are all “how-to” guides for how to integrate with various LLM providers. AI21 Aleph Alpha Anyscale Azure OpenAI Banana Beam integration for langchain Amazon Bedrock CerebriumAI Cohere C Transformers Databricks DeepInfra ForefrontAI Google Cloud Platform Vertex AI P...
https://python.langchain.com/en/latest/modules/models/llms/integrations.html
1a206d5936d7-0
Generic Functionality Generic Functionality# The examples here are all “how-to” guides for working with LLMs. How to use the async API for LLMs How to write a custom LLM wrapper How (and why) to use the fake LLM How (and why) to use the human input LLM How to cache LLM calls How to serialize LLM c...
https://python.langchain.com/en/latest/modules/models/llms/how_to_guides.html
ce120e6d0a36-0
Getting Started Getting Started# This notebook goes over how to use the LLM class in LangChain. The LLM class is a class designed for interfacing with LLMs. There are lots of LLM providers (OpenAI, Cohere, Hugging Face, etc) - this class is designed to provide a standard interface for all of them. In this p...
https://python.langchain.com/en/latest/modules/models/llms/getting_started.html
ce120e6d0a36-1
llm_result.generations[-1] [Generation(text="\n\nWhat if love neverspeech\n\nWhat if love never ended\n\nWhat if love was only a feeling\n\nI'll never know this love\n\nIt's not a feeling\n\nBut it's what we have for each other\n\nWe just know that love is something strong\n\nAnd we can't help but be happy\n\nWe just f...
https://python.langchain.com/en/latest/modules/models/llms/getting_started.html
055918386f2d-0
Beam integration for langchain Beam integration for langchain# Calls the Beam API wrapper to deploy and make subsequent calls to an instance of the gpt2 LLM in a cloud deployment. Requires installation of the Beam library and registration of Beam Client ID and Client Secret. By calling the wrapper an instan...
https://python.langchain.com/en/latest/modules/models/llms/integrations/beam.html
055918386f2d-1
"torch", "pillow", "accelerate", "safetensors", "xformers",], max_length="50", verbose=False) llm._deploy() response = llm._call("Running machine learning on a remote GPU") print(response)
https://python.langchain.com/en/latest/modules/models/llms/integrations/beam.html
ec23e5c7b605-0
OpenAI OpenAI# OpenAI offers a spectrum of models with different levels of power suitable for different tasks. This example goes over how to use LangChain to interact with OpenAI models # get a token: https://platform.openai.com/account/api-keys from getpass import getpass OPENAI_API_KEY = getpass() ······...
https://python.langchain.com/en/latest/modules/models/llms/integrations/openai.html
f26bb7448e78-0
Azure OpenAI Contents API configuration Deployments Azure OpenAI# This notebook goes over how to use Langchain with Azure OpenAI. The Azure OpenAI API is compatible with OpenAI’s API. The openai Python package makes it easy to use both OpenAI and Azure OpenAI. You can call Azure OpenAI the same way you ...
https://python.langchain.com/en/latest/modules/models/llms/integrations/azure_openai_example.html
f26bb7448e78-1
import openai response = openai.Completion.create( engine="text-davinci-002-prod", prompt="This is a test", max_tokens=5 ) !pip install openai import os os.environ["OPENAI_API_TYPE"] = "azure" os.environ["OPENAI_API_VERSION"] = "2022-12-01" os.environ["OPENAI_API_BASE"] = "..." os.environ["OPENAI_API_KEY"] ...
https://python.langchain.com/en/latest/modules/models/llms/integrations/azure_openai_example.html
0f02eed8b210-0
Basic LLM usage Contents Basic LLM usage Control the output structure/type of LLMs Chaining ! pip install predictionguard langchain import os import predictionguard as pg from langchain.llms import PredictionGuard from langchain import PromptTemplate, LLMChain Basic LLM usage# # Optional, add your OpenAI...
https://python.langchain.com/en/latest/modules/models/llms/integrations/predictionguard.html
0f02eed8b210-1
pgllm(prompt.format(query="What kind of post is this?")) # With "guarding" or controlling the output of the LLM. See the # Prediction Guard docs (https://docs.predictionguard.com) to learn how to # control the output with integer, float, boolean, JSON, and other types and # structures. pgllm = PredictionGuard(model="...
https://python.langchain.com/en/latest/modules/models/llms/integrations/predictionguard.html
25dd0210e2f5-0
ForefrontAI Contents Imports Set the Environment API Key Create the ForefrontAI instance Create a Prompt Template Initiate the LLMChain Run the LLMChain ForefrontAI# The Forefront platform gives you the ability to fine-tune and use open source large language models. This notebook goes over how to use Lang...
https://python.langchain.com/en/latest/modules/models/llms/integrations/forefrontai_example.html
ddec6b3f4625-0
Petals Contents Install petals Imports Set the Environment API Key Create the Petals instance Create a Prompt Template Initiate the LLMChain Run the LLMChain Petals# Petals runs 100B+ language models at home, BitTorrent-style. This notebook goes over how to use Langchain with Petals. Install petals# The p...
https://python.langchain.com/en/latest/modules/models/llms/integrations/petals_example.html
ddec6b3f4625-1
Run the LLMChain# Provide a question and run the LLMChain. question = "What NFL team won the Super Bowl in the year Justin Bieber was born?" llm_chain.run(question)
https://python.langchain.com/en/latest/modules/models/llms/integrations/petals_example.html
d0d3b387fe10-0
Databricks Contents Wrapping a serving endpoint Wrapping a cluster driver proxy app Databricks# The Databricks Lakehouse Platform unifies data, analytics, and AI on one platform. This example notebook shows how to wrap Databricks endpoints as LLMs in LangChain. It supports two endpoint types: Serving endp...
https://python.langchain.com/en/latest/modules/models/llms/integrations/databricks.html
d0d3b387fe10-1
# See https://docs.databricks.com/dev-tools/auth.html#databricks-personal-access-tokens # We strongly recommend not exposing the API token explicitly inside a notebook. # You can use Databricks secret manager to store your API token securely. # See https://docs.databricks.com/dev-tools/databricks-utils.html#secrets-uti...
https://python.langchain.com/en/latest/modules/models/llms/integrations/databricks.html
d0d3b387fe10-2
It uses a port number between [3000, 8000] and listens on the driver IP address (or simply 0.0.0.0) instead of localhost only. You have “Can Attach To” permission to the cluster. The expected server schema (using JSON schema) is: inputs: {"type": "object", "properties": { "prompt": {"type": "string"}, "stop": {"...
https://python.langchain.com/en/latest/modules/models/llms/integrations/databricks.html
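The expected request shape described above can be checked with a small hand-rolled validator. The excerpt truncates the `stop` spec, so this sketch assumes `stop` is an optional list of strings; it is an illustration of the schema, not the validation Databricks or LangChain actually performs:

```python
# Hand-rolled check for the driver proxy app's expected request payload:
# a JSON object with a string "prompt" and an optional list of string
# "stop" sequences (the "stop" shape is assumed from the truncated excerpt).

def validate_request(payload):
    if not isinstance(payload, dict):
        return False
    if not isinstance(payload.get("prompt"), str):
        return False
    stop = payload.get("stop")
    if stop is not None:
        if not isinstance(stop, list):
            return False
        if not all(isinstance(s, str) for s in stop):
            return False
    return True

print(validate_request({"prompt": "Hello", "stop": ["\n\n"]}))
```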
d0d3b387fe10-3
                self.matched = self.stop[i]
                return True
        return False

def llm(prompt, stop=None, **kwargs):
    check_stop = CheckStop(stop)
    result = dolly(prompt, stopping_criteria=[check_stop], **kwargs)
    return result[0]["generated_text"].rstrip(check_stop.matched)

app = Flask("dolly")

@app.route('/', method...
https://python.langchain.com/en/latest/modules/models/llms/integrations/databricks.html
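One detail worth flagging in the snippet above: `str.rstrip(chars)` strips any trailing characters drawn from the given set, not the exact suffix, so trimming a matched stop sequence with `rstrip` can remove more than intended. A dedicated suffix-removal helper expresses the intent more safely (a sketch, not the notebook's code):

```python
# Remove an exact stop sequence from the end of generated text.
# Unlike str.rstrip(stop), this only removes the literal suffix.

def strip_stop_suffix(text, stop):
    """Strip the exact `stop` string from the end of `text`, if present."""
    if stop and text.endswith(stop):
        return text[: -len(stop)]
    return text

print(strip_stop_suffix("The answer is 4.\nHuman:", "Human:"))
```

On Python 3.9+ the stdlib `str.removesuffix` does the same thing.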
d0d3b387fe10-4
# Use `transform_input_fn` and `transform_output_fn` if the app
# expects a different input schema and does not return a JSON string,
# respectively, or you want to apply a prompt template on top.

def transform_input(**request):
    full_prompt = f"""{request["prompt"]}
Be Concise.
"""
    request["prompt"] = f...
https://python.langchain.com/en/latest/modules/models/llms/integrations/databricks.html
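A runnable sketch of the `transform_input` hook shown above. The excerpt truncates the final assignment, so this version assumes the hook simply substitutes the augmented prompt back into the request before returning it:

```python
# Input-transform hook in the spirit of the snippet above: rewrite the
# outgoing request so the prompt carries an extra instruction. The ending
# of the original function is truncated in the excerpt; this completion
# is an assumption.

def transform_input(**request):
    full_prompt = f"""{request["prompt"]}
Be Concise.
"""
    request["prompt"] = full_prompt
    return request

out = transform_input(prompt="What is MLflow?")
print(out["prompt"])
```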
81797160b7ad-0
.ipynb .pdf Huggingface TextGen Inference Huggingface TextGen Inference# Text Generation Inference is a Rust, Python and gRPC server for text generation inference. Used in production at HuggingFace to power LLMs api-inference widgets. This notebook goes over how to use a self-hosted LLM using Text Generation Inference...
https://python.langchain.com/en/latest/modules/models/llms/integrations/huggingface_textgen_inference.html
daddbb5a14b3-0
.ipynb .pdf Cohere Cohere# Cohere is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions. This example goes over how to use LangChain to interact with Cohere models. # Install the package !pip install cohere # get a new token: https://dashboard.cohe...
https://python.langchain.com/en/latest/modules/models/llms/integrations/cohere.html
daddbb5a14b3-1
llm_chain.run(question) " Let's start with the year that Justin Beiber was born. You know that he was born in 1994. We have to go back one year. 1993.\n\n1993 was the year that the Dallas Cowboys won the Super Bowl. They won over the Buffalo Bills in Super Bowl 26.\n\nNow, let's do it backwards. According to our inform...
https://python.langchain.com/en/latest/modules/models/llms/integrations/cohere.html
8bfdcf41eb30-0
.ipynb .pdf NLP Cloud NLP Cloud# The NLP Cloud serves high performance pre-trained or custom models for NER, sentiment-analysis, classification, summarization, paraphrasing, grammar and spelling correction, keywords and keyphrases extraction, chatbot, product description and ad generation, intent classification, text g...
https://python.langchain.com/en/latest/modules/models/llms/integrations/nlpcloud.html
83e5719a69ec-0
.ipynb .pdf Hugging Face Hub Contents Examples StableLM, by Stability AI Dolly, by DataBricks Camel, by Writer Hugging Face Hub# The Hugging Face Hub is a platform with over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source and publicly available, in an online platform where people can easily col...
https://python.langchain.com/en/latest/modules/models/llms/integrations/huggingface_hub.html
83e5719a69ec-1
StableLM, by Stability AI# See Stability AI’s organization page for a list of available models. repo_id = "stabilityai/stablelm-tuned-alpha-3b" # Others include stabilityai/stablelm-base-alpha-3b # as well as 7B parameter versions llm = HuggingFaceHub(repo_id=repo_id, model_kwargs={"temperature":0, "max_length":64}) # ...
https://python.langchain.com/en/latest/modules/models/llms/integrations/huggingface_hub.html
fc3a20aaed76-0
.ipynb .pdf AI21 AI21# AI21 Studio provides API access to Jurassic-2 large language models. This example goes over how to use LangChain to interact with AI21 models. # install the package: !pip install ai21 # get AI21_API_KEY. Use https://studio.ai21.com/account/account from getpass import getpass AI21_API_KEY = getpa...
https://python.langchain.com/en/latest/modules/models/llms/integrations/ai21.html
174e2fa82dbd-0
.ipynb .pdf Llama-cpp Contents Installation CPU only installation Installation with OpenBLAS / cuBLAS / CLBlast Usage CPU GPU Llama-cpp# llama-cpp is a Python binding for llama.cpp. It supports several LLMs. This notebook goes over how to run llama-cpp within LangChain. Installation# There are a bunch of options for how t...
https://python.langchain.com/en/latest/modules/models/llms/integrations/llamacpp.html
174e2fa82dbd-1
template = """Question: {question}

Answer: Let's work this out in a step by step way to be sure we have the right answer."""

prompt = PromptTemplate(template=template, input_variables=["question"])

# Callbacks support token-wise streaming
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])

# Verbose ...
https://python.langchain.com/en/latest/modules/models/llms/integrations/llamacpp.html
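For simple variable substitution, `PromptTemplate.format` behaves like Python's `str.format`, so the rendered prompt can be previewed with plain strings. The blank-line placement in the template is assumed from the flattened excerpt:

```python
# Preview what the llama-cpp prompt template renders to for a given
# question, using plain str.format (equivalent for simple substitution).

template = """Question: {question}

Answer: Let's work this out in a step by step way to be sure we have the right answer."""

rendered = template.format(question="What is 1 + 1?")
print(rendered)
```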
174e2fa82dbd-2
llama_print_timings: eval time = 23971.57 ms / 121 runs ( 198.11 ms per token) llama_print_timings: total time = 28945.95 ms '\n\n1. First, find out when Justin Bieber was born.\n2. We know that Justin Bieber was born on March 1, 1994.\n3. Next, we need to look up when the Super Bowl was played in tha...
https://python.langchain.com/en/latest/modules/models/llms/integrations/llamacpp.html
174e2fa82dbd-3
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?" llm_chain.run(question) We are looking for an NFL team that won the Super Bowl when Justin Bieber (born March 1, 1994) was born. First, let's look up which year is closest to when Justin Bieber was born: * The year before he was born: 1...
https://python.langchain.com/en/latest/modules/models/llms/integrations/llamacpp.html
174e2fa82dbd-4
llama_print_timings: total time = 15664.80 ms " We are looking for an NFL team that won the Super Bowl when Justin Bieber (born March 1, 1994) was born. \n\nFirst, let's look up which year is closest to when Justin Bieber was born:\n\n* The year before he was born: 1993\n* The year of his birth: 1994\n* The year ...
https://python.langchain.com/en/latest/modules/models/llms/integrations/llamacpp.html
901c775a231d-0
.ipynb .pdf Modal Modal# The Modal Python Library provides convenient, on-demand access to serverless cloud compute from Python scripts on your local computer. Modal itself does not provide any LLMs, only the infrastructure. This example goes over how to use LangChain to interact with Modal. Here is another exam...
https://python.langchain.com/en/latest/modules/models/llms/integrations/modal.html
27f12aec27bf-0
.ipynb .pdf Aleph Alpha Aleph Alpha# The Luminous series is a family of large language models. This example goes over how to use LangChain to interact with Aleph Alpha models. # Install the package !pip install aleph-alpha-client # create a new token: https://docs.aleph-alpha.com/docs/account/#create-a-new-token from ge...
https://python.langchain.com/en/latest/modules/models/llms/integrations/aleph_alpha.html
2535554d7de0-0
.ipynb .pdf Structured Decoding with JSONFormer Contents HuggingFace Baseline JSONFormer LLM Wrapper Structured Decoding with JSONFormer# JSONFormer is a library that wraps local HuggingFace pipeline models for structured decoding of a subset of the JSON Schema. It works by filling in the structure tokens and then sa...
https://python.langchain.com/en/latest/modules/models/llms/integrations/jsonformer_experimental.html
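The idea of "filling in the structure tokens" can be illustrated in miniature: emit the JSON skeleton deterministically from the schema, and consult a generator only for the value slots. This toy stands in for JSONFormer's token-level machinery over a HuggingFace pipeline; the function names are illustrative:

```python
# Toy illustration of JSONFormer's division of labor: braces, keys and
# quotes come deterministically from the schema; only values are generated.

import json

def fill_schema(schema, generate_value):
    """Build a JSON object for a flat {"properties": {...}} schema,
    asking `generate_value(name, type)` only for the value slots."""
    obj = {}
    for name, spec in schema["properties"].items():
        obj[name] = generate_value(name, spec["type"])
    return json.dumps(obj)

schema = {"properties": {"name": {"type": "string"}, "age": {"type": "number"}}}
# A stand-in "model" that produces a value per slot:
result = fill_schema(schema, lambda name, typ: "Alice" if typ == "string" else 30)
print(result)
```

Because the structure is never sampled, the output is guaranteed to parse as JSON matching the schema, which is the property JSONFormer provides at the token level.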
2535554d7de0-1
{arg_schema} EXAMPLES ---- Human: "So what's all this about a GIL?" AI Assistant:{{ "action": "ask_star_coder", "action_input": {{"query": "What is a GIL?", "temperature": 0.0, "max_new_tokens": 100}} }} Observation: "The GIL is python's Global Interpreter Lock" Human: "Could you please write a calculator program ...
https://python.langchain.com/en/latest/modules/models/llms/integrations/jsonformer_experimental.html
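The few-shot format above expects the assistant to reply with a JSON object carrying an `action` and its `action_input`. A small parser for that shape (a sketch; the notebook leans on JSONFormer to guarantee the reply is well-formed JSON):

```python
# Parse the assistant's action JSON into (action, action_input),
# matching the shape shown in the few-shot examples above.

import json

def parse_action(reply):
    data = json.loads(reply)
    return data["action"], data["action_input"]

reply = ('{"action": "ask_star_coder", "action_input": '
         '{"query": "What is a GIL?", "temperature": 0.0, "max_new_tokens": 100}}')
action, action_input = parse_action(reply)
print(action, action_input["query"])
```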
2535554d7de0-2
original_model = HuggingFacePipeline(pipeline=hf_model) generated = original_model.predict(prompt, stop=["Observation:", "Human:"]) print(generated) Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation. 'What's the difference between an iterator and an iterable?' That’s not so impressive, is it? It d...
https://python.langchain.com/en/latest/modules/models/llms/integrations/jsonformer_experimental.html