58b3cb6ae83d-19
classmethod from_llm_and_tools(llm: langchain.base_language.BaseLanguageModel, tools: Sequence[langchain.tools.base.BaseTool], callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, output_parser: Optional[langchain.agents.agent.AgentOutputParser] = None, prefix: str = 'Respond to the human as...
https://python.langchain.com/en/latest/reference/modules/agents.html
58b3cb6ae83d-20
Construct an agent from an LLM and tools. property llm_prefix: str# Prefix to append the llm call with. property observation_prefix: str# Prefix to append the observation with. pydantic model langchain.agents.Tool[source]# Tool that takes in function or coroutine directly. field coroutine: Optional[Callable[[...], Awai...
https://python.langchain.com/en/latest/reference/modules/agents.html
58b3cb6ae83d-21
field output_parser: langchain.agents.agent.AgentOutputParser [Optional]# classmethod create_prompt(tools: Sequence[langchain.tools.base.BaseTool], prefix: str = 'Answer the following questions as best you can. You have access to the following tools:', suffix: str = 'Begin!\n\nQuestion: {input}\nThought:{agent_scratchp...
https://python.langchain.com/en/latest/reference/modules/agents.html
58b3cb6ae83d-22
Returns A PromptTemplate with the template assembled from the pieces here. classmethod from_llm_and_tools(llm: langchain.base_language.BaseLanguageModel, tools: Sequence[langchain.tools.base.BaseTool], callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, output_parser: Optional[langchain.age...
https://python.langchain.com/en/latest/reference/modules/agents.html
58b3cb6ae83d-23
langchain.agents.create_json_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.json.toolkit.JsonToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to interact with JSON.\nYour goal is to return a...
https://python.langchain.com/en/latest/reference/modules/agents.html
58b3cb6ae83d-24
ALWAYS follow up by using the `json_spec_list_keys` tool to see what keys exist at that path.\nDo not simply refer the user to the JSON or a section of the JSON, as this is not a valid answer. Keep digging until you find the answer and explicitly return it.\n', suffix: str = 'Begin!"\n\nQuestion: {input}\nThought: I sh...
https://python.langchain.com/en/latest/reference/modules/agents.html
58b3cb6ae83d-25
Construct a json agent from an LLM and tools.
https://python.langchain.com/en/latest/reference/modules/agents.html
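The JSON agent above leans on a `json_spec_list_keys` tool to explore nested data one level at a time rather than dumping the whole document. A minimal sketch of that key-listing behavior (illustrative only; `list_keys` is not LangChain's implementation):

```python
# Minimal sketch (not the LangChain implementation) of the key-listing
# behavior the JSON agent's `json_spec_list_keys` tool provides: walk a
# nested structure along a path and report the keys available there.
def list_keys(data, path=()):
    """Return the keys reachable at `path` inside nested dicts."""
    node = data
    for step in path:
        node = node[step]
    if isinstance(node, dict):
        return sorted(node.keys())
    raise ValueError(f"value at {path!r} is not an object")

spec = {"paths": {"/pets": {"get": {"summary": "List pets"}}}}
print(list_keys(spec))                      # ['paths']
print(list_keys(spec, ("paths", "/pets")))  # ['get']
```

The agent repeats this at deeper and deeper paths until it can read out a concrete value.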
58b3cb6ae83d-26
langchain.agents.create_openapi_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.openapi.toolkit.OpenAPIToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = "You are an agent designed to answer questions by making web reque...
https://python.langchain.com/en/latest/reference/modules/agents.html
58b3cb6ae83d-27
do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', inpu...
https://python.langchain.com/en/latest/reference/modules/agents.html
58b3cb6ae83d-28
Construct an openapi agent from an LLM and tools. langchain.agents.create_pandas_dataframe_agent(llm: langchain.base_language.BaseLanguageModel, df: Any, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: Optional[str] = None, suffix: Optional[str] = None, input_variables: Optional[Lis
https://python.langchain.com/en/latest/reference/modules/agents.html
58b3cb6ae83d-29
langchain.agents.create_pbi_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: Optional[langchain.agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit], powerbi: Optional[langchain.utilities.powerbi.PowerBIDataset] = None, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, pref...
https://python.langchain.com/en/latest/reference/modules/agents.html
58b3cb6ae83d-30
do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', exam...
https://python.langchain.com/en/latest/reference/modules/agents.html
58b3cb6ae83d-31
Construct a pbi agent from an LLM and tools.
https://python.langchain.com/en/latest/reference/modules/agents.html
58b3cb6ae83d-32
langchain.agents.create_pbi_chat_agent(llm: langchain.chat_models.base.BaseChatModel, toolkit: Optional[langchain.agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit], powerbi: Optional[langchain.utilities.powerbi.PowerBIDataset] = None, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, ...
https://python.langchain.com/en/latest/reference/modules/agents.html
58b3cb6ae83d-33
(remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\n\n{{{{input}}}}\n", examples: Optional[str] = None, input_variables: Optional[List[str]] = None, memory: Optional[langchain.memory.chat_memory.BaseChatMemory] = None, top_k: int = 10, verbose: bool = False, agent_...
https://python.langchain.com/en/latest/reference/modules/agents.html
58b3cb6ae83d-34
Construct a pbi agent from a Chat LLM and tools. If you supply only a toolkit and no powerbi dataset, the same LLM is used for both. langchain.agents.create_spark_dataframe_agent(llm: langchain.llms.base.BaseLLM, df: Any, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = '\
https://python.langchain.com/en/latest/reference/modules/agents.html
58b3cb6ae83d-35
langchain.agents.create_spark_sql_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.spark_sql.toolkit.SparkSQLToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to interact with Spark SQL.\nGiven...
https://python.langchain.com/en/latest/reference/modules/agents.html
58b3cb6ae83d-36
Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, top_k: int = 10, max_iterations...
https://python.langchain.com/en/latest/reference/modules/agents.html
58b3cb6ae83d-37
Construct a Spark SQL agent from an LLM and tools.
https://python.langchain.com/en/latest/reference/modules/agents.html
58b3cb6ae83d-38
langchain.agents.create_sql_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.sql.toolkit.SQLDatabaseToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to interact with a SQL database.\nGiven an ...
https://python.langchain.com/en/latest/reference/modules/agents.html
58b3cb6ae83d-39
the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, top_k: int = 10, max_iterations: Optional[int] = 15...
https://python.langchain.com/en/latest/reference/modules/agents.html
58b3cb6ae83d-40
Construct a sql agent from an LLM and tools. langchain.agents.create_vectorstore_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are ...
https://python.langchain.com/en/latest/reference/modules/agents.html
58b3cb6ae83d-41
Construct a vectorstore router agent from an LLM and tools. langchain.agents.get_all_tool_names() → List[str][source]# Get a list of all possible tool names. langchain.agents.initialize_agent(tools: Sequence[langchain.tools.base.BaseTool], llm: langchain.base_language.BaseLanguageModel, agent: Optional[langchain.agents...
https://python.langchain.com/en/latest/reference/modules/agents.html
58b3cb6ae83d-42
langchain.agents.load_tools(tool_names: List[str], llm: Optional[langchain.base_language.BaseLanguageModel] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → List[langchain.tools.base.BaseTool][source]# Load tool...
https://python.langchain.com/en/latest/reference/modules/agents.html
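load_tools resolves string names to tool instances, and get_all_tool_names reports what can be resolved. A minimal sketch of that registry pattern (the factory names here are invented for illustration; this is not LangChain's internals):

```python
# Illustrative registry pattern behind get_all_tool_names/load_tools:
# tools are registered under string names and instantiated on demand.
_TOOL_FACTORIES = {
    "calculator": lambda **kw: ("calculator", kw),
    "search": lambda **kw: ("search", kw),
}

def get_all_tool_names():
    """List every name load_tools will accept."""
    return sorted(_TOOL_FACTORIES)

def load_tools(tool_names, **kwargs):
    """Instantiate the named tools, failing loudly on unknown names."""
    missing = [n for n in tool_names if n not in _TOOL_FACTORIES]
    if missing:
        raise ValueError(f"unknown tools: {missing}")
    return [_TOOL_FACTORIES[n](**kwargs) for n in tool_names]

print(get_all_tool_names())    # ['calculator', 'search']
print(load_tools(["search"]))  # [('search', {})]
```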
58b3cb6ae83d-43
By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 02, 2023.
https://python.langchain.com/en/latest/reference/modules/agents.html
7353af31a13a-0
.rst .pdf Text Splitter Text Splitter# Functionality for splitting text. class langchain.text_splitter.CharacterTextSplitter(separator: str = '\n\n', **kwargs: Any)[source]# Implementation of splitting text that looks at characters. split_text(text: str) → List[str][source]# Split incoming text and return chunks. class...
https://python.langchain.com/en/latest/reference/modules/text_splitter.html
7353af31a13a-1
Attempts to split the text along Python syntax. class langchain.text_splitter.RecursiveCharacterTextSplitter(separators: Optional[List[str]] = None, keep_separator: bool = True, **kwargs: Any)[source]# Implementation of splitting text that looks at characters. Recursively tries to split by different characters to find ...
https://python.langchain.com/en/latest/reference/modules/text_splitter.html
7353af31a13a-2
Create documents from a list of texts. classmethod from_huggingface_tokenizer(tokenizer: Any, **kwargs: Any) → langchain.text_splitter.TextSplitter[source]# Text splitter that uses HuggingFace tokenizer to count length. classmethod from_tiktoken_encoder(encoding_name: str = 'gpt2', model_name: Optional[str] = None, all...
https://python.langchain.com/en/latest/reference/modules/text_splitter.html
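RecursiveCharacterTextSplitter tries a list of separators in order and recurses on any chunk that is still too large. A minimal sketch of the idea (illustrative; the real splitter also merges small adjacent pieces back up toward chunk_size, which is omitted here):

```python
# Illustrative recursive splitting: split on the coarsest separator
# first, then re-split oversized pieces with the finer separators.
def recursive_split(text, separators=("\n\n", "\n", " "), chunk_size=40):
    if len(text) <= chunk_size or not separators:
        return [text]
    sep, rest = separators[0], separators[1:]
    chunks = []
    for piece in text.split(sep):
        if len(piece) <= chunk_size:
            chunks.append(piece)
        else:
            chunks.extend(recursive_split(piece, rest, chunk_size))
    return [c for c in chunks if c]

doc = "first paragraph\n\nsecond paragraph that is rather long"
for chunk in recursive_split(doc):
    print(chunk)
```

Because paragraph breaks are tried before line breaks and spaces, chunks tend to stay semantically coherent.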
45b9debc9367-0
.rst .pdf Python REPL Python REPL# For backwards compatibility. pydantic model langchain.python.PythonREPL[source]# Simulates a standalone Python REPL. field globals: Optional[Dict] [Optional] (alias '_globals')# field locals: Optional[Dict] [Optional] (alias '_locals')# run(command: str) → str[source]# Run command wit...
https://python.langchain.com/en/latest/reference/modules/python.html
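PythonREPL.run executes a command and returns whatever it printed, keeping its globals/locals between calls. A minimal sketch with captured stdout (illustrative, not the LangChain source):

```python
# Illustrative stand-in for PythonREPL: exec a command with captured
# stdout and persistent globals/locals between calls.
import io
from contextlib import redirect_stdout

class MiniPythonREPL:
    def __init__(self):
        self.globals = {}
        self.locals = {}

    def run(self, command: str) -> str:
        buf = io.StringIO()
        try:
            with redirect_stdout(buf):
                exec(command, self.globals, self.locals)
        except Exception as exc:  # errors are returned as text, not raised
            return repr(exc)
        return buf.getvalue()

repl = MiniPythonREPL()
repl.run("x = 21")
print(repl.run("print(x * 2)"))  # 42
```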
6959eddf9f89-0
.rst .pdf Memory Memory# class langchain.memory.CassandraChatMessageHistory(contact_points: List[str], session_id: str, port: int = 9042, username: str = 'cassandra', password: str = 'cassandra', keyspace_name: str = 'chat_history', table_name: str = 'message_store')[source]# Chat message history that stores history in...
https://python.langchain.com/en/latest/reference/modules/memory.html
6959eddf9f89-1
clear() → None[source]# Clear context from this session for every memory. load_memory_variables(inputs: Dict[str, Any]) → Dict[str, str][source]# Load all vars from sub-memories. save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]# Save context from this session for every memory. property memor...
https://python.langchain.com/en/latest/reference/modules/memory.html
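The clear/load/save methods above fan out over sub-memories. A minimal sketch of that delegation pattern (class names here are invented for illustration; this is not LangChain's class):

```python
# Illustrative fan-out: one wrapper delegates clear/load/save to each
# sub-memory and merges the loaded variables into one dict.
class DictMemory:
    def __init__(self, key):
        self.key, self.store = key, []
    def load_memory_variables(self, inputs):
        return {self.key: list(self.store)}
    def save_context(self, inputs, outputs):
        self.store.append((inputs, outputs))
    def clear(self):
        self.store.clear()

class CombinedMemorySketch:
    def __init__(self, memories):
        self.memories = memories
    def load_memory_variables(self, inputs):
        merged = {}
        for m in self.memories:
            merged.update(m.load_memory_variables(inputs))
        return merged
    def save_context(self, inputs, outputs):
        for m in self.memories:
            m.save_context(inputs, outputs)
    def clear(self):
        for m in self.memories:
            m.clear()

combined = CombinedMemorySketch([DictMemory("history"), DictMemory("entities")])
combined.save_context({"input": "hi"}, {"output": "hello"})
print(sorted(combined.load_memory_variables({})))  # ['entities', 'history']
```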
6959eddf9f89-2
field entity_extraction_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last l...
https://python.langchain.com/en/latest/reference/modules/memory.html
6959eddf9f89-3
line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I\'m working with Person #2.\nOutput: Langchain, Person #2\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for ext...
https://python.langchain.com/en/latest/reference/modules/memory.html
6959eddf9f89-4
field entity_store: langchain.memory.entity.BaseEntityStore [Optional]# field entity_summarization_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['entity', 'summary', 'history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant helping a human kee...
https://python.langchain.com/en/latest/reference/modules/memory.html
6959eddf9f89-5
Knowledge graph memory for storing conversation memory. Integrates with external knowledge graph to store and retrieve information about knowledge triples in the conversation. field ai_prefix: str = 'AI'#
https://python.langchain.com/en/latest/reference/modules/memory.html
6959eddf9f89-6
field entity_extraction_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last l...
https://python.langchain.com/en/latest/reference/modules/memory.html
6959eddf9f89-7
line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I\'m working with Person #2.\nOutput: Langchain, Person #2\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for ext...
https://python.langchain.com/en/latest/reference/modules/memory.html
6959eddf9f89-8
field human_prefix: str = 'Human'# field k: int = 2# field kg: langchain.graphs.networkx_graph.NetworkxEntityGraph [Optional]#
https://python.langchain.com/en/latest/reference/modules/memory.html
6959eddf9f89-9
field knowledge_extraction_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template="You are a networked intelligence helping a human track knowledge triples about all relevant people, things, concepts, etc. and integrati...
https://python.langchain.com/en/latest/reference/modules/memory.html
6959eddf9f89-10
Descartes was a French philosopher, mathematician, and scientist who lived in the 17th century.\nPerson #1: The Descartes I'm referring to is a standup comedian and interior designer from Montreal.\nAI: Oh yes, He is a comedian and an interior designer. He has been in the industry for 30 years. His favorite food is bak...
https://python.langchain.com/en/latest/reference/modules/memory.html
6959eddf9f89-11
field llm: langchain.base_language.BaseLanguageModel [Required]# field summary_message_cls: Type[langchain.schema.BaseMessage] = <class 'langchain.schema.SystemMessage'># Number of previous utterances to include in the context. clear() → None[source]# Clear memory contents. get_current_entities(input_string: str) → Lis...
https://python.langchain.com/en/latest/reference/modules/memory.html
6959eddf9f89-12
field memory_key: str = 'history'# field moving_summary_buffer: str = ''# clear() → None[source]# Clear memory contents. load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any][source]# Return history buffer. prune() → None[source]# Prune buffer if it exceeds max token limit save_context(inputs: Dict[str, Any], ...
https://python.langchain.com/en/latest/reference/modules/memory.html
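prune() drops the oldest messages once the buffer exceeds the max token limit. A minimal sketch using a whitespace token count as a stand-in for a real tokenizer (illustrative; the real class also folds pruned messages into moving_summary_buffer, which is omitted here):

```python
# Illustrative prune-by-token-budget: evict oldest messages until the
# buffer fits, using len(split()) as a crude token count.
class TokenBufferSketch:
    def __init__(self, max_token_limit=10):
        self.max_token_limit = max_token_limit
        self.buffer = []

    def _tokens(self):
        return sum(len(m.split()) for m in self.buffer)

    def save_context(self, message):
        self.buffer.append(message)
        self.prune()

    def prune(self):
        while self.buffer and self._tokens() > self.max_token_limit:
            self.buffer.pop(0)

mem = TokenBufferSketch(max_token_limit=6)
for msg in ["one two three", "four five six", "seven eight"]:
    mem.save_context(msg)
print(mem.buffer)  # ['four five six', 'seven eight']
```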
6959eddf9f89-13
Save context from this conversation to buffer. Pruned. property buffer: List[langchain.schema.BaseMessage]# String buffer of memory. class langchain.memory.CosmosDBChatMessageHistory(cosmos_endpoint: str, cosmos_database: str, cosmos_container: str, session_id: str, user_id: str, credential: Any = None, connection_stri...
https://python.langchain.com/en/latest/reference/modules/memory.html
6959eddf9f89-14
class langchain.memory.FileChatMessageHistory(file_path: str)[source]# Chat message history that stores history in a local file. Parameters file_path – path of the local file to store the messages. add_message(message: langchain.schema.BaseMessage) → None[source]# Append the message to the record in the local file clea...
https://python.langchain.com/en/latest/reference/modules/memory.html
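FileChatMessageHistory persists messages in a local file and reloads them on demand. A minimal file-backed sketch (illustrative; the JSON storage format here is an assumption, not LangChain's actual on-disk format):

```python
# Illustrative file-backed message history: messages live in a JSON
# file; add_message appends, clear deletes the file.
import json, os, tempfile

class FileHistorySketch:
    def __init__(self, file_path):
        self.file_path = file_path

    @property
    def messages(self):
        if not os.path.exists(self.file_path):
            return []
        with open(self.file_path) as f:
            return json.load(f)

    def add_message(self, message):
        msgs = self.messages + [message]
        with open(self.file_path, "w") as f:
            json.dump(msgs, f)

    def clear(self):
        if os.path.exists(self.file_path):
            os.remove(self.file_path)

path = os.path.join(tempfile.mkdtemp(), "history.json")
h = FileHistorySketch(path)
h.add_message({"role": "human", "content": "hi"})
print(len(h.messages))  # 1
h.clear()
print(h.messages)  # []
```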
6959eddf9f89-15
Exception – Unexpected response. clear() → None[source]# Remove the session’s messages from the cache. Raises SdkException – Momento service or network error. Exception – Unexpected response. classmethod from_client_params(session_id: str, cache_name: str, ttl: timedelta, *, configuration: Optional[momento.config.Confi...
https://python.langchain.com/en/latest/reference/modules/memory.html
6959eddf9f89-16
Append the message to the record in PostgreSQL clear() → None[source]# Clear session memory from PostgreSQL property messages: List[langchain.schema.BaseMessage]# Retrieve the messages from PostgreSQL pydantic model langchain.memory.ReadOnlySharedMemory[source]# A memory wrapper that is read-only and cannot be changed....
https://python.langchain.com/en/latest/reference/modules/memory.html
6959eddf9f89-17
field ttl: Optional[int] = 86400# clear() → None[source]# Delete all entities from store. delete(key: str) → None[source]# Delete entity value from store. exists(key: str) → bool[source]# Check if entity exists in store. get(key: str, default: Optional[str] = None) → Optional[str][source]# Get entity value from store. ...
https://python.langchain.com/en/latest/reference/modules/memory.html
6959eddf9f89-18
Return key-value pairs given the text input to the chain. If None, return all memories save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]# Nothing should be saved or changed, my memory is set in stone. property memory_variables: List[str]# Input keys this memory class will load dynamically. py...
https://python.langchain.com/en/latest/reference/modules/memory.html
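ReadOnlySharedMemory loads normally but refuses writes, which is why its save_context docstring says nothing should be saved or changed. A minimal sketch of the wrapper (illustrative):

```python
# Illustrative read-only wrapper: loads pass through to the wrapped
# memory; save_context and clear are deliberate no-ops.
class ReadOnlySketch:
    def __init__(self, memory):
        self.memory = memory
    def load_memory_variables(self, inputs):
        return self.memory.load_memory_variables(inputs)
    def save_context(self, inputs, outputs):
        pass  # nothing should be saved or changed
    def clear(self):
        pass  # nothing should be cleared

class Backing:
    def load_memory_variables(self, inputs):
        return {"history": "fixed"}

ro = ReadOnlySketch(Backing())
ro.save_context({"input": "x"}, {"output": "y"})  # ignored
print(ro.load_memory_variables({}))  # {'history': 'fixed'}
```

This is useful when several chains share one memory and only some of them may write to it.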
4711ab987ae1-0
.rst .pdf Output Parsers Output Parsers# pydantic model langchain.output_parsers.CommaSeparatedListOutputParser[source]# Parse out comma separated lists. get_format_instructions() → str[source]# Instructions on how the LLM output should be formatted. parse(text: str) → List[str][source]# Parse the output of an LLM call...
https://python.langchain.com/en/latest/reference/modules/output_parsers.html
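CommaSeparatedListOutputParser pairs format instructions for the LLM with a parse() that splits the reply on commas. A minimal sketch of both halves (illustrative, not LangChain's code):

```python
# Illustrative comma-separated list parsing: tell the model the shape
# to produce, then split and strip the reply.
def get_format_instructions():
    return ("Your response should be a list of comma separated values, "
            "e.g. `foo, bar, baz`")

def parse(text):
    return [part.strip() for part in text.strip().split(",")]

print(parse("red, green , blue"))  # ['red', 'green', 'blue']
```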
4711ab987ae1-1
Parameters text – output of language model Returns structured output pydantic model langchain.output_parsers.ListOutputParser[source]# Class to parse the output of an LLM call to a list. abstract parse(text: str) → List[str][source]# Parse the output of an LLM call. pydantic model langchain.output_parsers.OutputFixingP...
https://python.langchain.com/en/latest/reference/modules/output_parsers.html
4711ab987ae1-2
and parses it into some structure. Parameters text – output of language model Returns structured output pydantic model langchain.output_parsers.PydanticOutputParser[source]# field pydantic_object: Type[langchain.output_parsers.pydantic.T] [Required]# get_format_instructions() → str[source]# Instructions on how the LLM ...
https://python.langchain.com/en/latest/reference/modules/output_parsers.html
4711ab987ae1-3
Wraps a parser and tries to fix parsing errors. Does this by passing the original prompt and the completion to another LLM, and telling it the completion did not satisfy criteria in the prompt. field parser: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T] [Required]# field retry_chain: langchain.cha...
https://python.langchain.com/en/latest/reference/modules/output_parsers.html
4711ab987ae1-4
Parameters completion – output of language model prompt – prompt value Returns structured output pydantic model langchain.output_parsers.RetryWithErrorOutputParser[source]# Wraps a parser and tries to fix parsing errors. Does this by passing the original prompt, the completion, AND the error that was raised to another ...
https://python.langchain.com/en/latest/reference/modules/output_parsers.html
4711ab987ae1-5
Parameters text – output of language model Returns structured output parse_with_prompt(completion: str, prompt_value: langchain.schema.PromptValue) → langchain.output_parsers.retry.T[source]# Optional method to parse the output of an LLM call with a prompt. The prompt is largely provided in the event the OutputParser w...
https://python.langchain.com/en/latest/reference/modules/output_parsers.html
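RetryWithErrorOutputParser sends the prompt, the failed completion, and the raised error back to a second model before parsing again. A minimal sketch of that loop (all names and signatures here are illustrative stand-ins):

```python
# Illustrative retry-on-parse-error: if the wrapped parser raises,
# ask a "fixer" model for a new completion and parse that instead.
def retry_parse(parser, completion, prompt, retry_llm, max_retries=1):
    for _ in range(max_retries + 1):
        try:
            return parser(completion)
        except ValueError as err:
            completion = retry_llm(prompt, completion, str(err))
    raise ValueError("still unparsable after retries")

def int_parser(text):
    if not text.strip().isdigit():
        raise ValueError(f"not an integer: {text!r}")
    return int(text)

# Stand-in for the second LLM call; a real one would send all three
# pieces (prompt, bad completion, error) to a model.
fix_llm = lambda prompt, bad, err: "42"

print(retry_parse(int_parser, "forty-two", "Give a number", fix_llm))  # 42
```

OutputFixingParser is the same idea minus the error message: it forwards only the prompt and the bad completion.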
07f67b6fe060-0
.rst .pdf Document Compressors Document Compressors# pydantic model langchain.retrievers.document_compressors.CohereRerank[source]# field client: Client [Required]# field model: str = 'rerank-english-v2.0'# field top_n: int = 3# async acompress_documents(documents: Sequence[langchain.schema.Document], query: str) → Seq...
https://python.langchain.com/en/latest/reference/modules/document_compressors.html
07f67b6fe060-1
similarity_threshold must be specified. Defaults to 20. field similarity_fn: Callable = <function cosine_similarity># Similarity function for comparing documents. Function expected to take as input two matrices (List[List[float]]) and return a matrix of scores where higher values indicate greater similarity. field simi...
https://python.langchain.com/en/latest/reference/modules/document_compressors.html
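The similarity_fn contract above takes two matrices (List[List[float]]) and returns a matrix of scores where higher means more similar. A plain-Python cosine-similarity sketch matching that shape (illustrative; the shipped default is numpy-based):

```python
# Illustrative cosine similarity over plain lists: score[i][j] is the
# cosine of the angle between row i of `a` and row j of `b`.
import math

def cosine_similarity(a, b):
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))
    def norm(u):
        return math.sqrt(dot(u, u))
    return [[dot(u, v) / (norm(u) * norm(v)) for v in b] for u in a]

scores = cosine_similarity([[1.0, 0.0]], [[1.0, 0.0], [0.0, 1.0]])
print(scores)  # [[1.0, 0.0]]
```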
07f67b6fe060-2
Compress page content of raw documents. classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, prompt: Optional[langchain.prompts.prompt.PromptTemplate] = None, get_input: Optional[Callable[[str, langchain.schema.Document], str]] = None, llm_chain_kwargs: Optional[dict] = None) → langchain.retrievers.docu...
https://python.langchain.com/en/latest/reference/modules/document_compressors.html
8421cefce063-0
.rst .pdf Chains Chains# Chains are easily reusable components which can be linked together. pydantic model langchain.chains.APIChain[source]# Chain that makes API calls and summarizes the responses to answer a question. Validators raise_deprecation » all fields set_verbose » verbose validate_api_answer_prompt » all fi...
https://python.langchain.com/en/latest/reference/modules/chains.html
8421cefce063-1
field requests_wrapper: TextRequestsWrapper [Required]# classmethod from_llm_and_api_docs(llm: langchain.base_language.BaseLanguageModel, api_docs: str, headers: Optional[dict] = None, api_url_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['api_docs', 'question'], output_parser=None...
https://python.langchain.com/en/latest/reference/modules/chains.html
8421cefce063-2
pydantic model langchain.chains.AnalyzeDocumentChain[source]# Chain that splits documents, then analyzes them in pieces. Validators raise_deprecation » all fields set_verbose » verbose field combine_docs_chain: langchain.chains.combine_documents.base.BaseCombineDocumentsChain [Required]# field text_splitter: langchain.te
https://python.langchain.com/en/latest/reference/modules/chains.html
8421cefce063-3
llm = OpenAI()
qa_prompt = PromptTemplate(
    template="Q: {question} A:",
    input_variables=["question"],
)
qa_chain = LLMChain(llm=llm, prompt=qa_prompt)
constitutional_chain = ConstitutionalChain.from_llm(
    llm=llm,
    chain=qa_chain,
    constitutional_principles=[
        ConstitutionalPrinciple( ...
https://python.langchain.com/en/latest/reference/modules/chains.html
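ConstitutionalChain runs the base chain's draft answer through a critique step and a revision step for each principle. A minimal sketch of that loop (all stand-ins here are illustrative, not the LangChain implementation):

```python
# Illustrative critique/revise loop: for each principle, critique the
# current response, then revise it in light of the critique.
def constitutional_run(question, answer_llm, critique_llm, revise_llm, principles):
    response = answer_llm(question)
    for principle in principles:
        critique = critique_llm(response, principle)
        response = revise_llm(response, critique, principle)
    return response

answer = lambda q: "draft answer"
critique = lambda r, p: f"critique of {r!r} under {p!r}"
revise = lambda r, c, p: f"revised({r})"

print(constitutional_run("Q?", answer, critique, revise, ["be harmless"]))
# revised(draft answer)
```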
8421cefce063-4
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, chain: langchain.chains.llm.LLMChain, critique_prompt: langchain.prompts.base.BasePromptTemplate = FewShotPromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request'], output_parser=None, partial_variables={}, examples=[{'i...
https://python.langchain.com/en/latest/reference/modules/chains.html
8421cefce063-5
model’s preceding response is harmful. Also point out potential harmfulness in the human’s questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'The human wants to know whether they really need consent to have sex. The answer is definitely always yes, but the
https://python.langchain.com/en/latest/reference/modules/chains.html
8421cefce063-6
'Newtonian physics predicts static, perfectly elliptical orbits, so I should have been more confident about that. However, I’m not sure that the precession measurement was actually made in the 1600’s, but was probably made at least 100 years ago. I’m also certain that the precession is at least partially explained by t...
https://python.langchain.com/en/latest/reference/modules/chains.html
8421cefce063-7
is not in the style of Master Yoda.", 'critique': "The provided sentence appears to capture the essence of Master Yoda's unique speaking style quite well, including the key characteristics of his speech. There are no apparent discrepancies between this sentence and Yoda's typical speaking style. No critique needed.", '...
https://python.langchain.com/en/latest/reference/modules/chains.html
8421cefce063-8
preceding response is harmful. Also point out potential harmfulness in the human’s questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'Arson is very dangerous and illegal, and should not be condoned. The model should not have asked a follow-up question abou
https://python.langchain.com/en/latest/reference/modules/chains.html
8421cefce063-9
are too young to give consent. Critique Needed.', 'revision_request': 'Please rewrite the model response to remove all harmful content, and to politely point out any harmful assumptions from the human.', 'revision': 'Yes, I think it is absolutely unethical to have sex with someone without their consent. It’s always bet...
https://python.langchain.com/en/latest/reference/modules/chains.html
8421cefce063-10
solution, but should have mentioned that it has other contributing factors that are purely Newtonian. Also, I’m not sure about the 1/r^5 scaling so I should rewrite that to make it less misleading, although I’m pretty sure it decays more quickly than Newton’s law, and the Chern-Simons theorem is probably just wrong. Cr...
https://python.langchain.com/en/latest/reference/modules/chains.html
8421cefce063-11
identify specific ways in which the model's response is not in the style of Master Yoda.", 'critique': "The provided sentence appears to capture the essence of Master Yoda's unique speaking style quite well, including the key characteristics of his speech. There are no apparent discrepancies between this sentence and Y...
https://python.langchain.com/en/latest/reference/modules/chains.html
8421cefce063-12
Create a chain from an LLM. classmethod get_principles(names: Optional[List[str]] = None) → List[langchain.chains.constitutional_ai.models.ConstitutionalPrinciple][source]# property input_keys: List[str]# Defines the input keys. property output_keys: List[str]# Defines the output keys. pydantic model langchain.chains.C...
https://python.langchain.com/en/latest/reference/modules/chains.html
8421cefce063-13
field retriever: BaseRetriever [Required]# Index to connect to. classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, retriever: langchain.schema.BaseRetriever, condense_question_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['chat_history', 'question'], output_parser...
https://python.langchain.com/en/latest/reference/modules/chains.html
8421cefce063-14
property input_keys: List[str]# Input keys this chain expects. property output_keys: List[str]# Output keys this chain expects. pydantic model langchain.chains.GraphCypherQAChain[source]# Chain for question-answering against a graph by generating Cypher statements. Validators raise_deprecation » all fields set_verbose ...
https://python.langchain.com/en/latest/reference/modules/chains.html
8421cefce063-15
field qa_chain: LLMChain [Required]# classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, *, qa_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['context', 'question'], output_parser=None, partial_variables={}, template="You are an assistant that helps to form nice and...
https://python.langchain.com/en/latest/reference/modules/chains.html
8421cefce063-16
set_verbose » verbose field entity_extraction_chain: LLMChain [Required]# field graph: NetworkxEntityGraph [Required]# field qa_chain: LLMChain [Required]# classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, qa_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['context...
https://python.langchain.com/en/latest/reference/modules/chains.html
8421cefce063-17
pydantic model langchain.chains.HypotheticalDocumentEmbedder[source]# Generate hypothetical document for query, and then embed that. Based on https://arxiv.org/abs/2212.10496 Validators raise_deprecation » all fields set_verbose » verbose field base_embeddings: Embeddings [Required]# field llm_chain: LLMChain [Required...
https://python.langchain.com/en/latest/reference/modules/chains.html
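HypotheticalDocumentEmbedder embeds a model-written hypothetical answer instead of the raw query, on the theory that the fake answer lands closer to real answer documents in embedding space. A minimal sketch of the flow (the model and embedder here are illustrative stand-ins):

```python
# Illustrative HyDE flow: generate a hypothetical answer document,
# then embed that text rather than the query itself.
def hyde_embed(query, llm, embed):
    hypothetical_doc = llm(f"Write a passage answering: {query}")
    return embed(hypothetical_doc)

fake_llm = lambda prompt: "Paris is the capital of France."
fake_embed = lambda text: [float(len(text))]  # stand-in embedding

print(hyde_embed("capital of France?", fake_llm, fake_embed))
```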
8421cefce063-18
field llm_chain: LLMChain [Required]# field prompt: BasePromptTemplate = PromptTemplate(input_variables=['question'], output_parser=BashOutputParser(), partial_variables={}, template='If someone asks you to perform a task, your job is to come up with a series of bash commands that will perform the task. There is no nee...
https://python.langchain.com/en/latest/reference/modules/chains.html
8421cefce063-19
[Deprecated] classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['question'], output_parser=BashOutputParser(), partial_variables={}, template='If someone asks you to perform a task, your job is to come up with a series...
https://python.langchain.com/en/latest/reference/modules/chains.html
8421cefce063-20
field prompt: BasePromptTemplate [Required]# Prompt object to use. async aapply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → List[Dict[str, str]][source]# Utilize the LLM generate method for speed...
https://python.langchain.com/en/latest/reference/modules/chains.html
8421cefce063-21
Parameters callbacks – Callbacks to pass to LLMChain **kwargs – Keys to pass to prompt template. Returns Completion from LLM. Example completion = llm.predict(adjective="funny") async apredict_and_parse(callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackMa...
https://python.langchain.com/en/latest/reference/modules/chains.html
8421cefce063-22
Completion from LLM. Example completion = llm.predict(adjective="funny") predict_and_parse(callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, Any]][source]# Call predict and then parse the ...
https://python.langchain.com/en/latest/reference/modules/chains.html
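The predict example above formats the prompt template with keyword arguments and returns the model's completion. A minimal functional sketch of that flow (illustrative, not LLMChain itself):

```python
# Illustrative predict() flow: fill the prompt template with kwargs,
# call the model, return the completion.
def make_chain(llm, template):
    def predict(**kwargs):
        return llm(template.format(**kwargs))
    return predict

fake_llm = lambda prompt: f"echo: {prompt}"  # stand-in model
chain = make_chain(fake_llm, "Tell me a {adjective} joke")
print(chain(adjective="funny"))  # echo: Tell me a funny joke
```

predict_and_parse is the same call followed by the chain's output parser over the completion.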
8421cefce063-23
[Deprecated] field list_assertions_prompt: PromptTemplate = PromptTemplate(input_variables=['statement'], output_parser=None, partial_variables={}, template='Here is a statement:\n{statement}\nMake a bullet point list of the assumptions you made when producing the above statement.\n\n', template_format='f-string', vali...
https://python.langchain.com/en/latest/reference/modules/chains.html
8421cefce063-24
[Deprecated] Prompt to use when questioning the documents. classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, create_draft_answer_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='{question}\n\n', template...
https://python.langchain.com/en/latest/reference/modules/chains.html
8421cefce063-25
raise_deprecation » all fields set_verbose » verbose field llm: Optional[BaseLanguageModel] = None# [Deprecated] LLM wrapper to use. field llm_chain: LLMChain [Required]# field prompt: BasePromptTemplate = PromptTemplate(input_variables=['question'], output_parser=None, partial_variables=...
https://python.langchain.com/en/latest/reference/modules/chains.html
8421cefce063-26
[Deprecated] Prompt to use to translate to python if necessary. classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='Translate a math problem into a expres...
https://python.langchain.com/en/latest/reference/modules/chains.html
8421cefce063-27
field requests_wrapper: TextRequestsWrapper [Optional]# field text_length: int = 8000# pydantic model langchain.chains.LLMSummarizationCheckerChain[source]# Chain for verifying summaries with self-verification. Example from langchain import OpenAI, LLMSummarizationCheckerChain llm = OpenAI(temperature=0.0) checker_chain...
https://python.langchain.com/en/latest/reference/modules/chains.html
8421cefce063-28
[Deprecated] field check_assertions_prompt: PromptTemplate = PromptTemplate(input_variables=['assertions'], output_parser=None, partial_variables={}, template='You are an expert fact checker. You have been hired by a major news organization to fact check a very important story.\n\nHere is a bullet point list of facts:\...
https://python.langchain.com/en/latest/reference/modules/chains.html
8421cefce063-29
Maximum number of times to check the assertions. Defaults to double-checking. field revised_summary_prompt: PromptTemplate = PromptTemplate(input_variables=['checked_assertions', 'summary'], output_parser=None, partial_variables={}, template='Below are some assertions that have been fact checked and are labeled as true ...
https://python.langchain.com/en/latest/reference/modules/chains.html
8421cefce063-30
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, create_assertions_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['summary'], output_parser=None, partial_variables={}, template='Given some text, extract a list of facts from the text.\n\nFormat your output as a bull...
https://python.langchain.com/en/latest/reference/modules/chains.html
8421cefce063-31
validate_template=True), are_all_true_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['checked_assertions'], output_parser=None, partial_variables={}, template='Below are some assertions that have been fact checked and are labeled as true or false.\n\nIf all of the assertions are true,...
https://python.langchain.com/en/latest/reference/modules/chains.html
8421cefce063-32
pydantic model langchain.chains.MapReduceChain[source]# Map-reduce chain. Validators raise_deprecation » all fields set_verbose » verbose field combine_documents_chain: BaseCombineDocumentsChain [Required]# Chain to use to combine documents. field text_splitter: TextSplitter [Required]# Text splitter to use. classmetho...
https://python.langchain.com/en/latest/reference/modules/chains.html
8421cefce063-33
Chain that interacts with an OpenAPI endpoint using natural language. Validators raise_deprecation » all fields set_verbose » verbose field api_operation: APIOperation [Required]# field api_request_chain: LLMChain [Required]# field api_response_chain: Optional[LLMChain] = None# field param_mapping: _ParamMapping [Required]#...
https://python.langchain.com/en/latest/reference/modules/chains.html
8421cefce063-34
raise_deprecation » all fields set_verbose » verbose field get_answer_expr: str = 'print(solution())'# field llm: Optional[BaseLanguageModel] = None# [Deprecated] field llm_chain: LLMChain [Required]#
https://python.langchain.com/en/latest/reference/modules/chains.html
8421cefce063-35
field prompt: BasePromptTemplate = PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left?\n\n# solution in Python:\n\n\ndef solution():\n    """Olivia has $23. She bought five bagels for $...
https://python.langchain.com/en/latest/reference/modules/chains.html
8421cefce063-36
solution():\n    """There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room?"""\n    computers_initial = 9\n    computers_per_day = 5\n    num_days = 4  # 4 days between monday and thursday\n    computers_added = c...
https://python.langchain.com/en/latest/reference/modules/chains.html
8421cefce063-37
= 12\n    denny_lollipops = jason_lollipops_initial - jason_lollipops_after\n    result = denny_lollipops\n    return result\n\n\n\n\n\nQ: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total?\n\n# solution in Python:\n\n\ndef solution():\n    """Leah had 32 chocolate...
https://python.langchain.com/en/latest/reference/modules/chains.html
8421cefce063-38
15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today?"""\n    trees_initial = 15\n    trees_after = 21\n    trees_added = trees_after - trees_initial\n    result = trees_added\n    return result\n\n\n\n\n\...
https://python.langchain.com/en/latest/reference/modules/chains.html
8421cefce063-39
[Deprecated] field python_globals: Optional[Dict[str, Any]] = None# field python_locals: Optional[Dict[str, Any]] = None# field return_intermediate_steps: bool = False# field stop: str = '\n\n'# classmethod from_colored_object_prompt(llm: langchain.base_language.BaseLanguageModel, **kwargs: Any) → langchain.chains.pal....
https://python.langchain.com/en/latest/reference/modules/chains.html
8421cefce063-40
Chain for question-answering against an index. Example from langchain.llms import OpenAI from langchain.chains import RetrievalQA from langchain.vectorstores import FAISS from langchain.vectorstores.base import VectorStoreRetriever retriever = VectorStoreRetriever(vectorstore=FAISS(...)) retrievalQA = RetrievalQA.from_llm(llm...
https://python.langchain.com/en/latest/reference/modules/chains.html
8421cefce063-41
[Deprecated] LLM wrapper to use. field llm_chain: LLMChain [Required]# field prompt: Optional[BasePromptTemplate] = None# [Deprecated] Prompt to use to translate natural language to SQL. field query_checker_prompt: Optional[BasePromptTemplate] = None# The prompt template that should be used by the query checker field r...
https://python.langchain.com/en/latest/reference/modules/chains.html
8421cefce063-42
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, database: langchain.sql_database.SQLDatabase, query_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['input', 'table_info', 'dialect', 'top_k'], output_parser=None, partial_variables={}, template='Given an input ques...
https://python.langchain.com/en/latest/reference/modules/chains.html