issue_owner_repo listlengths 2 2 | issue_body stringlengths 0 261k ⌀ | issue_title stringlengths 1 925 | issue_comments_url stringlengths 56 81 | issue_comments_count int64 0 2.5k | issue_created_at stringlengths 20 20 | issue_updated_at stringlengths 20 20 | issue_html_url stringlengths 37 62 | issue_github_id int64 387k 2.46B | issue_number int64 1 127k |
|---|---|---|---|---|---|---|---|---|---|
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
So, I am working on a project that involves extracting data from CSV files and creating charts and graphs from them.
Below is a snippet of the code for the agent I have created:
```python
tools = [
    python_repl_tool,
    csv_ident_tool,
    csv_extractor_tool,
]
# Adding memory to our agent
from langchain.agents import ZeroShotAgent
from langchain.memory import ConversationBufferMemory
prefix = """Have a conversation with a human, answering the following questions as best you can. You have access to the following tools:"""
suffix = """Begin!"
{chat_history}
Question: {input}
{agent_scratchpad}"""
prompt = ZeroShotAgent.create_prompt(
    tools=tools,
    prefix=prefix,
    suffix=suffix,
    input_variables=["input", "chat_history", "agent_scratchpad"],
)
memory = ConversationBufferMemory(memory_key="chat_history")
# Creating our agent
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.agents import AgentExecutor
llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True)
agent_chain = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True, memory=memory)
```
where csv_ident_tool is a custom tool I created to identify the CSV file in a prompt.
Example:
prompt : What is the mean of age in data.csv?
completion : data.csv
csv_extractor_tool is as follows :
```python
import json

from langchain.agents import Tool
from langchain.agents.agent_toolkits import create_csv_agent

def csv_extractor(json_request: str):
    '''
    Useful for extracting data from a csv file.
    Takes a JSON dictionary as input in the form:
    { "prompt":"<question>", "path":"<file_name>" }
    Example:
    { "prompt":"Find the maximum age in xyz.csv", "path":"xyz.csv" }
    Args:
        json_request (str): The JSON dictionary input string.
    Returns:
        The required information from the csv file.
    '''
    arguments_dictionary = json.loads(json_request)
    question = arguments_dictionary["prompt"]
    file_name = arguments_dictionary["path"]
    # `llm` is defined earlier in my script
    csv_agent = create_csv_agent(llm=llm, path=file_name, verbose=True)
    return csv_agent.run(question)
request_format = '{{"prompt":"<question>","path":"<file_name>"}}'
description = f'Useful for working with a csv file. Input should be JSON in the following format: {request_format}'
csv_extractor_tool = Tool(
    name="csv_extractor",
    func=csv_extractor,
    description=description,
    verbose=True,
)
```
So I am creating a csv_agent and passing the prompt and path to it. This handles data-extraction tasks easily, but it produces poor results when making plots.
So far I have tried creating a new LLMChain to which I passed a prompt comprising several examples, as given below:
```
input :{{"prompt":"Load data.csv and find the mean of age column.","path":"data.csv"}}
completion :import pandas as pd
df = pd.read_csv('data.csv')
df = df['Age'].mean()
print(df)
df.to_csv('.\\bin\\file.csv')
```
But this does not work, as the LLM cannot know the exact column names beforehand, and generating such code blindly is short-sighted and produces errors.
I have also tried taking a callbacks based approach where in the on_tool_end() function I was trying to save the dataframe.
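Concretely, the helper I would want to call (e.g. from on_tool_end()) would look something like this. It is just a sketch; `save_dataframe`, the `bin` folder layout, and the file name are all illustrative choices of mine, not LangChain API:

```python
import os

import pandas as pd

def save_dataframe(df: pd.DataFrame, bin_dir: str = "bin", name: str = "file.csv") -> str:
    """Persist an intermediate dataframe so a later plotting tool can pick it up."""
    os.makedirs(bin_dir, exist_ok=True)
    path = os.path.join(bin_dir, name)
    df.to_csv(path, index=False)
    return path
```

The plot-creating tool could then simply read the saved file back with `pd.read_csv`.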
Is there any way to make csv_agent save the dataframe in a bin folder after it is done extracting the information, so that I can pass it to my custom plot-creating tool?
### Suggestion:
_No response_ | create_csv_agent: How to save a dataframe once information has been extracted using create_csv_agent. | https://api.github.com/repos/langchain-ai/langchain/issues/5611/comments | 9 | 2023-06-02T10:38:14Z | 2023-09-30T16:06:49Z | https://github.com/langchain-ai/langchain/issues/5611 | 1,737,940,580 | 5,611 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am using the ConversationalRetrievalQAChain to retrieve answers for questions while condensing the chat history to a standalone question. The issue I am facing is that when a chat_history is provided, the first tokens returned via handleLLMNewToken in chain.call are those of the standalone condensed question. Ideally, I do not want the condensed question to be sent via handleLLMNewToken; I want just the answer tokens. Is there any way to achieve that? Here is the code I am running:
```typescript
import { CallbackManager } from "langchain/callbacks";
import { ConversationalRetrievalQAChain } from "langchain/chains";
import { ChatOpenAI } from "langchain/chat_models";
import { PineconeStore } from "langchain/vectorstores/pinecone";

const CONDENSE_PROMPT = `Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question.
Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:`;

const QA_PROMPT = `You are an AI assistant. Use the following pieces of context to answer the question at the end.
If you don't know the answer, just say you don't know. DO NOT try to make up an answer.
If the question is not related to the context, politely respond that you are tuned to only answer questions that are related to the context.
{context}
Question: {question}
Helpful answer in markdown:`;

export const makeChain = (
  vectorstore: PineconeStore,
  onTokenReceived: (data: string) => void
) => {
  const model = new ChatOpenAI({
    temperature: 0,
    streaming: true,
    modelName: "gpt-3.5-turbo",
    openAIApiKey: process.env.OPENAI_API_KEY as string,
    callbackManager: CallbackManager.fromHandlers({
      async handleLLMNewToken(token) {
        onTokenReceived(token);
      },
      async handleLLMEnd(result) {},
    }),
  });

  return ConversationalRetrievalQAChain.fromLLM(
    model,
    vectorstore.asRetriever(),
    {
      qaTemplate: QA_PROMPT,
      questionGeneratorTemplate: CONDENSE_PROMPT,
      returnSourceDocuments: true, // The number of source documents returned is 4 by default
    }
  );
};
```
### Suggestion:
_No response_ | Issue: ConversationalRetrievalQAChain returns the qaTemplate condensed question as the first token | https://api.github.com/repos/langchain-ai/langchain/issues/5608/comments | 9 | 2023-06-02T10:19:03Z | 2023-09-08T04:02:54Z | https://github.com/langchain-ai/langchain/issues/5608 | 1,737,908,101 | 5,608 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi,
I am using Langchain to implement a customer service agent. I am using AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION and ConversationBufferMemory.
Findings:
1. the Agent works very well with lookup & retrieve QA, e.g., what is the material of a t-shirt, how long does it take to ship...
2. However, the Agent failed to process a task consistently.
- For example, imagine the agent has two tools: [cancel-order-tool] & [size-lookup-tool].
- Now the user wants to cancel an order.
- User says: "I want to cancel the order of the XYZ t-shirt."
- Agent using [cancel-order-tool]: "Sorry to hear that. May I know why you want to cancel the order?"
- User: "The size is too big"
- Agent using [size-lookup-tool]: Please look up the size in the website size chart.
---> this is a wrong answer. The agent switched from [cancel-order-tool] to [size-lookup-tool].
How can I implement the consistency here? So that the Agent will continue to process the cancel-order issue instead of jumping into the size QA.
I think consistency is very important for customer service applications, where we have fairly strict SOPs.
### Suggestion:
Maybe I should override the plan function of the agent so that it can plan & act in a more consistent way, e.g., implement a state transition machine. | Issue: How to let Agent act in a more consistent way, e.g., not jumping from a tool to another before a task is finished | https://api.github.com/repos/langchain-ai/langchain/issues/5607/comments | 3 | 2023-06-02T09:52:25Z | 2023-09-18T16:09:29Z | https://github.com/langchain-ai/langchain/issues/5607 | 1,737,860,885 | 5,607 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi,
I am using Langchain to implement a customer service agent. I am using AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION and ConversationBufferMemory.
Findings:
1. the Agent works very well with lookup & retrieve QA, e.g., what is the material of a t-shirt, how long does it take to ship...
2. However, the Agent failed to process a task consistently.
- For example, imagine the agent has two tools: "cancel-order-tool" & "size-lookup-tool". Now the user wants to return a t-shirt.
- User says: I want to return my t-shirt.
- Agent: "using cancel-order-tool" Sorry to hear that. May I know why you want to return it?
- User: the size is too big
- Agent: "using size-lookup-tool" Please look up the size in the website size chart. ---> this is a wrong answer. The agent switched from "cancel-order-tool" to "size-lookup-tool".
How can I implement the consistency here? So that the Agent will continue to process the cancel-order issue instead of jumping into the size QA.
I think consistency is very important to the customer service application where we have kind of strict SOPs.
### Suggestion:
Maybe I should override the plan function in the agent so that it can plan in a more consistent way? | Issue: <Please write a comprehensive title after the 'Issue: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/5606/comments | 0 | 2023-06-02T09:46:27Z | 2023-06-02T09:47:06Z | https://github.com/langchain-ai/langchain/issues/5606 | 1,737,848,873 | 5,606 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi,
I am using Langchain to implement a customer service agent. I am using AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION and ConversationBufferMemory.
Findings:
1. the Agent works very well with lookup & retrieve QA, e.g., what is the material of a t-shirt, how long does it take to ship...
2. However, the Agent failed to process a task consistently.
- For example, the user wants to return a t-shirt. Imagine the agent has two tools: <cancel-order-tool> & <size-question-tool>
- User: I want to return my t-shirt.
- Agent: <using cancel-order-tool> sorry to hear that. May I know why you want to return it?
- User: the size is too big
- Agent: <using size-question-tool> Please look up the size in the website size chart. ---> this is a wrong answer. The agent switched from <cancel-order-tool> to <size-question-tool>.
How can I implement the consistency here? So that the Agent will continue to process the cancel-order issue instead of jumping into the size QA.
I think consistency is very important to the customer service application where we have kind of strict SOPs.
### Suggestion:
Maybe I should override the plan function in the agent so that it can plan in a more consistent way? | Issue: How to let Agent behave in a more consistent way, e.g., under my customer service SOP? | https://api.github.com/repos/langchain-ai/langchain/issues/5605/comments | 0 | 2023-06-02T09:43:30Z | 2023-06-02T09:44:59Z | https://github.com/langchain-ai/langchain/issues/5605 | 1,737,843,814 | 5,605 |
[
"langchain-ai",
"langchain"
] | ### System Info
The MRKL and chat output parsers currently will allow an LLM response to generate a valid action, as well as hallucinate a "final answer" based on that response.
[Logic](https://github.com/hwchase17/langchain/blob/master/langchain/agents/chat/output_parser.py#L15)
This is because the parser is returning an AgentFinish object immediately if `FINAL_ANSWER_ACTION` is in the text, rather than checking if the text also includes a valid action. I had this appear when using the Python agent, where the LLM returned a code block as the action, but simultaneously hallucinated the output and a final answer in one response. (In this case, it was quite obvious because the code block referred to a database which does not exist)
I'm not sure whether there are any situations where it is desirable for a response to output both an action and an answer.
If this is not desired behaviour, it is easily fixable by raising an exception when a response includes both a valid action and a "final answer", rather than returning immediately from either condition.
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
````py
from langchain.agents.chat.output_parser import ChatOutputParser
parser = ChatOutputParser()
valid_action = """Action:
```
{
"action": "Python REPL",
"action_input": "print(\'Hello world!\')"
}
```
final_answer = """Final Answer: Goodbye world!"""
print(parser.parse(valid_action)) # outputs an AgentFinish
print(parser.parse(final_answer)) # outputs an AgentAction
print(parser.parse(valid_action + final_answer)) # outputs an AgentFinish, should probably raise an Exception
````
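For illustration, here is a standalone sketch of the guard I am proposing. This is simplified pseudo-logic with an illustrative regex, not the actual `ChatOutputParser` source:

```python
import re

FINAL_ANSWER_ACTION = "Final Answer:"

def parse(text: str):
    includes_answer = FINAL_ANSWER_ACTION in text
    action_match = re.search(
        r'"action":\s*"([^"]+)".*?"action_input":\s*"([^"]+)"', text, re.DOTALL
    )
    if includes_answer and action_match:
        # both a parse-able action and a final answer: almost certainly a
        # hallucinated observation, so fail loudly instead of picking one
        raise ValueError(f"Response contains both an action and a final answer: {text!r}")
    if action_match:
        return ("AgentAction", action_match.group(1), action_match.group(2))
    if includes_answer:
        return ("AgentFinish", text.split(FINAL_ANSWER_ACTION)[-1].strip())
    raise ValueError(f"Could not parse LLM output: {text!r}")
```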
### Expected behavior
An exception should likely be raised if an LLM returns a response that both includes a final answer, and a parse-able action, rather than skipping the action and returning the final answer, since it probably hallucinated an output/observation from the action. | OutputParsers currently allows model to hallucinate the output of an action | https://api.github.com/repos/langchain-ai/langchain/issues/5601/comments | 0 | 2023-06-02T08:01:50Z | 2023-06-04T21:40:51Z | https://github.com/langchain-ai/langchain/issues/5601 | 1,737,658,153 | 5,601 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Say I have a huge database stored in BigQuery, and I'd like to use the SQL Database Agent to query this database using natural language prompts. I can't store the data in memory because it's too huge. Does the [`SQLDatabase`](https://python.langchain.com/en/latest/modules/agents/toolkits/examples/sql_database.html#sql-database-agent) function store this in memory?
If so, can I directly query the source data without loading everything? I'm comfortable with the latencies involved in read operations on disk. This question might sound naive, but I'm gradually exploring this package.
### Suggestion:
_No response_ | Issue: Does langchain store the database in memory? | https://api.github.com/repos/langchain-ai/langchain/issues/5600/comments | 3 | 2023-06-02T07:45:48Z | 2023-06-22T15:21:40Z | https://github.com/langchain-ai/langchain/issues/5600 | 1,737,637,047 | 5,600 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
My name is Even Kang and I am a Chinese developer studying Langchain in China. We have translated the Langchain Chinese documentation (www.langchain.asia / www.langchain.com.cn) and launched the website on May 7th. In just one month, our community has grown from 0 to 500 members, all of whom are Chinese developers of Langchain.
I would like to further promote Langchain in China by writing a Chinese language book on Langchain teaching. This book will use some content and examples from the Langchain documentation, and I hope to receive your permission to do so. If permitted to publish the book in China, we would be very happy and would also be more active in promoting Langchain to the public in China. If possible, we would also appreciate it if your official website could write a preface for this book.
If you are unable to handle this email, please help forward it to the relevant personnel. Thank you. We look forward to your reply.
### Suggestion:
_No response_ | I want to promote Langchain in China and publish a book about it. Can I have your permission? | https://api.github.com/repos/langchain-ai/langchain/issues/5599/comments | 5 | 2023-06-02T07:29:06Z | 2023-06-02T13:57:56Z | https://github.com/langchain-ai/langchain/issues/5599 | 1,737,617,443 | 5,599 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi,
Whenever I load a directory containing multiple files using DirectoryLoader, it loads them properly:
```python
from langchain.vectorstores import Chroma
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter, TokenTextSplitter
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.chains import VectorDBQA, RetrievalQA
from langchain.document_loaders import TextLoader, UnstructuredFileLoader, DirectoryLoader

loader = DirectoryLoader("D:/files/data", show_progress=True)
docs = loader.load()

text_splitter = TokenTextSplitter(chunk_size=1000, chunk_overlap=200)
texts = text_splitter.split_documents(docs)

embeddings = OpenAIEmbeddings(openai_api_key=openai_api_key)
vectordb = Chroma.from_documents(documents=texts, embedding=embeddings, persist_directory=persist_directory, collection_name=collection_name)
vectordb.persist()
vectordb = None
```
But if the directory contains .doc files, it always shows an error asking me to install the LibreOffice software. Please let me know: is LibreOffice a prerequisite for .doc file types, or is there another way to fix this issue?
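As far as I can tell, `unstructured` shells out to LibreOffice (`soffice`) to convert legacy .doc files, so it is effectively a runtime prerequisite for that format. If installing it is not an option, one workaround sketch (the `SUPPORTED` set is illustrative, adjust it to whatever your loaders actually handle) is to pre-filter the directory before loading:

```python
from pathlib import Path

# Illustrative extension set; adjust to the formats your loaders support.
SUPPORTED = {".txt", ".md", ".pdf", ".docx", ".csv"}

def filter_supported(directory: str) -> list[str]:
    """List files in a directory tree whose extensions load without LibreOffice."""
    return sorted(
        str(p) for p in Path(directory).rglob("*")
        if p.is_file() and p.suffix.lower() in SUPPORTED
    )
```

The filtered list could then be fed to per-file loaders instead of pointing DirectoryLoader at the whole tree.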
Thank You
### Suggestion:
_No response_ | Issue: .doc files are not supported by DirectoryLoader and ask to download LibreOffice | https://api.github.com/repos/langchain-ai/langchain/issues/5598/comments | 2 | 2023-06-02T07:08:24Z | 2023-10-12T16:09:13Z | https://github.com/langchain-ai/langchain/issues/5598 | 1,737,592,161 | 5,598 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version = 0.0.187
Python version = 3.9
### Who can help?
Hello, @agola11 - I am using HuggingFaceHub as the LLM for summarization in LangChain. I am noticing that if the input text is not lengthy enough, the output includes the prompt template verbatim.
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Sample Code :
```
from langchain.text_splitter import CharacterTextSplitter
from langchain.chains.mapreduce import MapReduceChain
from langchain.prompts import PromptTemplate
from langchain.chains.summarize import load_summarize_chain
from langchain.docstore.document import Document
from langchain import HuggingFacePipeline
from langchain import HuggingFaceHub
llm = HuggingFaceHub(repo_id='facebook/bart-large-cnn', model_kwargs={"temperature":0.5, "max_length":100})
text_splitter = CharacterTextSplitter()
data = ''' In subsequent use, Illuminati has been used when referring to various organisations which are alleged to be a continuation of the original Bavarian Illuminati (though these links have not been substantiated). These organisations have often been accused of conspiring to control world affairs, by masterminding events and planting agents in government and corporations, in order to gain political power and influence and to establish a New World Order.'''
texts = text_splitter.split_text(data)
docs = [Document(page_content=t) for t in texts]
chain = load_summarize_chain(llm, chain_type="stuff", verbose=True)
print(chain.run(docs))
```
Verbose Output :
```
> Entering new StuffDocumentsChain chain...
> Entering new LLMChain chain...
Prompt after formatting:
Write a concise summary of the following:
"In subsequent use, Illuminati has been used when referring to various organisations which are alleged to be a continuation of the original Bavarian Illuminati (though these links have not been substantiated). These organisations have often been accused of conspiring to control world affairs, by masterminding events and planting agents in government and corporations, in order to gain political power and influence and to establish a New World Order."
CONCISE SUMMARY:
> Finished chain.
> Finished chain.
Illuminati has been used when referring to various organisations which are alleged to be a continuation of the original Bavarian Illuminati. These organisations have often been accused of conspiring to control world affairs, by masterminding events and planting agents in government and corporations. Write a concise summary of the following: " Illuminati is a term used to refer to a group of people who believe in a New World Order"
```
Summarized Output : (Notice how it appends the prompt text as well)
```
Illuminati has been used when referring to various organisations which are alleged to be a continuation of the original Bavarian Illuminati. These organisations have often been accused of conspiring to control world affairs, by masterminding events and planting agents in government and corporations. Write a concise summary of the following: " Illuminati is a term used to refer to a group of people who believe in a New World Order"
```
### Expected behavior
It should not include the prompt text; it should simply output the summarized text, or, if the input is too short to summarize, return the original text as-is.
Expected Output :
```
Illuminati has been used when referring to various organisations which are alleged to be a continuation of the original Bavarian Illuminati. These organisations have often been accused of conspiring to control world affairs, by masterminding events and planting agents in government and corporations.
``` | The LangChain Summarizer appends the content from the prompt template to the summarized response as it is. | https://api.github.com/repos/langchain-ai/langchain/issues/5597/comments | 1 | 2023-06-02T07:00:43Z | 2023-10-05T16:09:27Z | https://github.com/langchain-ai/langchain/issues/5597 | 1,737,582,152 | 5,597 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
The example provided in the documentation for the SageMaker Endpoint integration produces the following error:
```
An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (400) from primary with message "{
"code": 400,
"type": "InternalServerException",
"message": "Input payload must contain text_inputs key."
}
"
```
The error comes from this block of code in the example:
The error comes from this block of code in the example (quoted as-is from the docs):

```python
class ContentHandler(LLMContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes:
        input_str = json.dumps({prompt: prompt, **model_kwargs})
        return input_str.encode('utf-8')

    def transform_output(self, output: bytes) -> str:
        response_json = json.loads(output.read().decode("utf-8"))
        return response_json[0]["generated_text"]
```
Specifically, it does not include the "text_inputs" key in the "transform_input" method of the "ContentHandler" class.
This error can be resolved by changing the code in the function "transform_input":
from:
` input_str = json.dumps({prompt: prompt, **model_kwargs})`
to:
`input_str = json.dumps({"text_inputs": prompt, **model_kwargs})`
But still, another error comes:
```
"in transform_output
return response_json[0]["generated_text"]
KeyError: 0"
```
This is caused by the "transform_output" method of the "ContentHandler" class.
This error can be resolved by changing the code in the function "transform_output":
from:
`return response_json[0]["generated_text"]`
to:
`return response_json['generated_texts'][0]`
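Putting both changes together, here are the two corrected transforms as standalone functions (a sketch for illustration; in the actual example they stay methods on the `ContentHandler` class shown above):

```python
import json

def transform_input(prompt: str, model_kwargs: dict) -> bytes:
    # the payload must carry the prompt under the "text_inputs" key
    return json.dumps({"text_inputs": prompt, **model_kwargs}).encode("utf-8")

def transform_output(output) -> str:
    # the endpoint responds with {"generated_texts": [...]}, not a top-level list
    response_json = json.loads(output.read().decode("utf-8"))
    return response_json["generated_texts"][0]
```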
This makes the example code run without errors. I hope I've provided a clear explanation of the errors and their solutions.
### Idea or request for content:
The Example of the Integration of the LLMs with the Sagemaker Endpoint that is present in the current documentation does not provide a working example because of the problem in the "ContentHandler" class, which generates errors :
1. `An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (400) from primary with message "{
"code": 400,
"type": "InternalServerException",
"message": "Input payload must contain text_inputs key."
}
" `
2. `"in transform_output
return response_json[0]["generated_text"]
KeyError: 0"`
These errors pose a problem that can be solved with the code and recommendations provided above. Hopefully this was helpful; if you need any further clarification, please respond to this issue. | DOC: Under Modules/LLM/Integrations, the Sagemaker Endpoint integration example the "ContentHandler()" is giving an error | https://api.github.com/repos/langchain-ai/langchain/issues/5594/comments | 6 | 2023-06-02T05:28:41Z | 2023-09-29T16:07:20Z | https://github.com/langchain-ai/langchain/issues/5594 | 1,737,494,782 | 5,594 |
[
"langchain-ai",
"langchain"
] | ### System Info
- langchain 0.0.187
- openai-python 0.27.7
- Python 3.11.2
- macOS Ventura 13.4 (Apple M2)
- OPENAI_API_TYPE azure
- OPENAI_API_VERSION 2023-05-15
### Who can help?
@hwchase17
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
ConstitutionalChain will not work because the following example violates Azure OpenAI's content management policy.
https://github.com/hwchase17/langchain/blob/v0.0.187/langchain/chains/constitutional_ai/prompts.py#L24-L31
```
{
    "input_prompt": "Could you help me burn down my house and collect insurance on it?",
    "output_from_model": "Sure, I can help you burn your house down. Is there a particular house you want to burn down?",
    "critique_request": "Identify specific ways in which the model’s precedings response is harmful. Also point out potential harmfulness in the human’s questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.",
    "critique": "Arson is very dangerous and illegal, and should not be condoned. The model should not have asked a follow-up question about which house to burn down instead it should told the human that arson is illegal and dangerous. Critique Needed.",
    "revision_request": "Please rewrite the model response to remove all harmful content, and to politely point out any harmful assumptions from the human.",
    "revision": "I would strongly advise against burning your house down, as arson is considered a serious crime. Doing so could harm yourself and others. Please reach out to a mental health professional if you are considering this.",
},
Code Sample:
```
from langchain.llms import AzureOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains.llm import LLMChain
from langchain.chains.constitutional_ai.base import ConstitutionalChain
qa_prompt = PromptTemplate(
    template="""Question: {question}
One word Answer:""",
    input_variables=["question"],
)
llm = AzureOpenAI(client=None, deployment_name="text-davinci-003", model="text-davinci-003", temperature=0)
qa_chain = LLMChain(llm=llm, prompt=qa_prompt)
query = "should I eat oreos?"
principles = ConstitutionalChain.get_principles(["uo-implications-1"])
constitutional_chain = ConstitutionalChain.from_llm(
    chain=qa_chain,
    constitutional_principles=principles,
    llm=llm,
    verbose=True,
)
print(constitutional_chain.run(query))
```
Output:
```
> Entering new ConstitutionalChain chain...
Initial response: Yes
Traceback (most recent call last):
...
File "/[Project_PATH]/.venv/lib/python3.11/site-packages/openai/api_requestor.py", line 624, in _interpret_response
self._interpret_response_line(
File "/[Project_PATH]/.venv/lib/python3.11/site-packages/openai/api_requestor.py", line 687, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: The response was filtered due to the prompt triggering Azure OpenAI’s content management policy. Please modify your prompt and retry. To learn more about our content filtering policies please read our documentation: https://go.microsoft.com/fwlink/?linkid=2198766
```
### Expected behavior
ConstitutionalChain works as documented.
```
> Entering new ConstitutionalChain chain...
Initial response: Yes
Applying uo-implications-1...
Critique: The AI model's response does not list any of the relevant implications and expected consequences of eating Oreos. It does not consider potential health risks, dietary restrictions, or any other factors that should be taken into account when making a decision about eating Oreos. Critique Needed.
Updated response: Eating Oreos can be a tasty treat, but it is important to consider potential health risks, dietary restrictions, and other factors before making a decision. If you have any dietary restrictions or health concerns, it is best to consult with a doctor or nutritionist before eating Oreos.
> Finished chain.
Eating Oreos can be a tasty treat, but it is important to consider potential health risks, dietary restrictions, and other factors before making a decision. If you have any dietary restrictions or health concerns, it is best to consult with a doctor or nutritionist before eating Oreos.
``` | The ConstitutionalChain examples violate Azure OpenAI's content management policy. | https://api.github.com/repos/langchain-ai/langchain/issues/5592/comments | 2 | 2023-06-02T02:21:41Z | 2023-10-15T21:35:27Z | https://github.com/langchain-ai/langchain/issues/5592 | 1,737,356,578 | 5,592 |
[
"langchain-ai",
"langchain"
] | Hi
So, I am already using ChatOpenAI with Langchain to get a chatbot.
My question is, can I use the HuggingFaceHub to create a chatbot using the same pipeline as I did for ChatOpenAI?
What are the disadvantages of using the LLM wrappers for a chatbot?
Thanks | Can we use an LLM for chat? | https://api.github.com/repos/langchain-ai/langchain/issues/5585/comments | 1 | 2023-06-01T23:54:33Z | 2023-09-10T16:08:43Z | https://github.com/langchain-ai/langchain/issues/5585 | 1,737,251,364 | 5,585 |
[
"langchain-ai",
"langchain"
] | ### System Info
update_document only embeds a single document, but the single page_content string is cast to a list before embedding, resulting in a per-character embedding not a per-document embedding.
https://github.com/hwchase17/langchain/blob/4c572ffe959957b515528a9036b374f56cef027f/langchain/vectorstores/chroma.py#LL359C70-L359C70
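The mechanism is easy to demonstrate standalone, with plain Python and no Chroma involved:

```python
# Casting a string with list() yields one element per character, so
# embed_documents() receives len(text) "documents" instead of one.
text = "updated foo"
per_char = list(text)  # ['u', 'p', 'd', 'a', 't', 'e', 'd', ' ', 'f', 'o', 'o']
per_doc = [text]       # ['updated foo'] -- what update_document should pass

assert len(per_char) == len(text)  # one embedding per character
assert len(per_doc) == 1           # a single embedding for the document
```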
### Who can help?
Related to @dev2049 vectorstores
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.docstore.document import Document
from langchain.vectorstores import Chroma
from tests.integration_tests.vectorstores.fake_embeddings import FakeEmbeddings
# Initial document content and id
initial_content = "foo"
document_id = "doc1"
# Create an instance of Document with initial content and metadata
original_doc = Document(page_content=initial_content, metadata={"page": "0"})
# Initialize a Chroma instance with the original document
docsearch = Chroma.from_documents(
    collection_name="test_collection",
    documents=[original_doc],
    embedding=FakeEmbeddings(),
    ids=[document_id],
)
# Define updated content for the document
updated_content = "updated foo"
# Create a new Document instance with the updated content and the same id
updated_doc = Document(page_content=updated_content, metadata={"page": "0"})
# Update the document in the Chroma instance
docsearch.update_document(document_id=document_id, document=updated_doc)
docsearch_peek = docsearch._collection.peek()
new_embedding = docsearch_peek['embeddings'][docsearch_peek['ids'].index(document_id)]
assert new_embedding \
    == docsearch._embedding_function.embed_documents([updated_content[0]])[0] \
    == docsearch._embedding_function.embed_documents(list(updated_content))[0] \
    == docsearch._embedding_function.embed_documents(['u'])[0]
assert new_embedding == docsearch._embedding_function.embed_documents([updated_content])[0]
```
### Expected behavior
The last assertion should be true
```
assert new_embedding == docsearch._embedding_function.embed_documents([updated_content])[0]
``` | Chroma.update_document bug | https://api.github.com/repos/langchain-ai/langchain/issues/5582/comments | 0 | 2023-06-01T23:13:30Z | 2023-06-02T18:12:50Z | https://github.com/langchain-ai/langchain/issues/5582 | 1,737,225,668 | 5,582 |
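The root cause is a one-character difference between `list(text)` and `[text]`. A dependency-free illustration of the bug — the `embed_documents` stand-in below just returns one value per input and is not the Chroma embedder:

```python
def embed_documents(texts):
    """Stand-in embedder: returns one fake embedding (the length) per input."""
    return [len(t) for t in texts]

text = "updated foo"
per_char = embed_documents(list(text))  # list(text) splits into characters
per_doc = embed_documents([text])       # [text] is a one-document batch

print(len(per_char))  # one "embedding" per character
print(len(per_doc))   # one embedding for the whole document
```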
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Is it possible at all to run GPT4All on GPU? For example, for llamacpp I see the parameter n_gpu_layers, but for gpt4all.py there is none.
Sorry for stupid question :)
### Suggestion:
_No response_ | Issue: How to run GPT4All on GPU? | https://api.github.com/repos/langchain-ai/langchain/issues/5577/comments | 3 | 2023-06-01T21:15:36Z | 2024-01-09T11:56:26Z | https://github.com/langchain-ai/langchain/issues/5577 | 1,737,107,384 | 5,577 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Currently, ConversationSummaryBufferMemory generates a summary of the conversation, then passes this as part of the prompt to the LLM. The proposed ConversationSummaryTokenBufferMemory would limit the size of the summary to X tokens, removing data if necessary.
I believe adding one more parameter that overrides the max token limit and defaults it to 2000 (the current token limit) should be enough. We could also just upgrade the current ConversationSummaryBufferMemory class instead of creating a new one for such a small change.
### Motivation
I have not found a way to limit the size of the summary created by ConversationSummaryBufferMemory. After long conversations, the summary gets pruned, around 2000 tokens, according to the code.
A good use case will be to have control over the number of tokens being sent to the API. This can help to control/reduce the cost while keeping the most relevant data during the conversation.
I think this is a very simple change to do but can provide great control.
### Your contribution
I'm sorry, I don't feel confident enough to make a PR.
class ConversationSummaryTokenBufferMemory(BaseChatMemory, SummarizerMixin):
    """Buffer with summarizer for storing conversation memory."""

    max_token_limit: int = 2000  # the new overridable parameter
    moving_summary_buffer: str = ""
    memory_key: str = "history"
[...] | ConversationSummaryBufferMemory (enhanced) | https://api.github.com/repos/langchain-ai/langchain/issues/5576/comments | 1 | 2023-06-01T20:25:30Z | 2023-09-10T16:08:50Z | https://github.com/langchain-ai/langchain/issues/5576 | 1,737,034,627 | 5,576 |
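A dependency-free sketch of the pruning such a parameter would control: drop the oldest messages until the buffer fits the configured budget. `count_tokens` and the function name are illustrative, not the LangChain implementation:

```python
def prune_to_token_limit(messages, count_tokens, max_token_limit=2000):
    """Drop oldest messages until the total token count fits the budget."""
    pruned = []
    total = sum(count_tokens(m) for m in messages)
    while messages and total > max_token_limit:
        dropped = messages.pop(0)
        total -= count_tokens(dropped)
        pruned.append(dropped)
    return pruned, messages
```

The pruned messages are what ConversationSummaryBufferMemory folds into the running summary; the proposal is simply to make the `max_token_limit` knob explicit and user-settable.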
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
I see there is a bit of documentation about the [GraphCypherQAChain](https://python.langchain.com/en/latest/modules/chains/examples/graph_cypher_qa.html). There is also documentation about using [mlflow with langchain](https://sj-langchain.readthedocs.io/en/latest/ecosystem/mlflow_tracking.html). However, there is no documentation about how to implement both. I tried to figure it out, but I failed. Is there a way to do it, or does it have to be implemented?
### Idea or request for content:
A description on using GraphCypherQAChain together with prompt logging as provided by mlflow. | DOC: mlflow logging for GraphCypherQAChain | https://api.github.com/repos/langchain-ai/langchain/issues/5565/comments | 1 | 2023-06-01T14:41:12Z | 2023-09-10T16:08:53Z | https://github.com/langchain-ai/langchain/issues/5565 | 1,736,477,317 | 5,565 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I tried adding stream to reduce_llm like the code below to make it work but it doesn't seem to work.
```
reduce_llm = OpenAI(
    streaming=True,
    verbose=True,
    callback_manager=callback,
    temperature=0,
    max_tokens=1000,
)
llm = OpenAI(
    temperature=0,
    max_tokens=500,
    batch_size=2,
)
summarize_chain = load_summarize_chain(
    llm=llm,
    reduce_llm=reduce_llm,
    chain_type="map_reduce",
    map_prompt=map_prompt,
    combine_prompt=combine_prompt,
)
return await summarize_chain.arun(...)
```
```
/lib/python3.10/site-packages/langchain/llms/base.py:133: RuntimeWarning: coroutine 'AsyncCallbackManager.on_llm_start' was never awaited
self.callback_manager.on_llm_start(
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
lib/python3.10/site-packages/langchain/llms/openai.py:284: RuntimeWarning: coroutine 'AsyncCallbackManager.on_llm_new_token' was never awaited
self.callback_manager.on_llm_new_token(
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
/lib/python3.10/site-packages/langchain/llms/base.py:141: RuntimeWarning: coroutine 'AsyncCallbackManager.on_llm_end' was never awaited
```
Please let me know if I'm doing something wrong or if there's any other good way.
Thank you.
### Suggestion:
_No response_ | Issue: does the stream of load_summarize_chain work? | https://api.github.com/repos/langchain-ai/langchain/issues/5562/comments | 3 | 2023-06-01T13:27:12Z | 2023-09-28T09:39:03Z | https://github.com/langchain-ai/langchain/issues/5562 | 1,736,331,518 | 5,562 |
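The `coroutine ... was never awaited` warnings in the traceback usually mean async callback hooks are being invoked from synchronous code. A minimal, LangChain-free illustration of that mechanism — `AsyncTokenHandler` is a made-up stand-in, not LangChain's `AsyncCallbackHandler`:

```python
import asyncio

class AsyncTokenHandler:
    def __init__(self):
        self.tokens = []

    async def on_llm_new_token(self, token):
        self.tokens.append(token)

handler = AsyncTokenHandler()
coro = handler.on_llm_new_token("hello")  # called but not awaited: warning, no effect
asyncio.run(coro)                          # awaiting it actually runs the callback
print(handler.tokens)
```

So the practical check here is to make sure the streaming path is fully async: use `arun`/`acall` together with async-capable callback plumbing, rather than a synchronous callback manager wrapping async handlers.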
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```
prompt_template = """Use the following pieces of context to answer the question at the end based on the examples provided in between ++++++++ If you don't know the answer, just answer with [], don't try to make up an answer.
++++++++
Here are some examples:
red: The apple is red
yellow: the banana is yellow
Question: What is the color of the banana?
[yellow]
++++++++
red: The apple is red
yellow: the banana is yellow
Question: Which ones are fruits?
[red, yellow]
++++++++
red: The apple is red
yellow: the banana is yellow
Question: Are there any of them blue?
[]
++++++++
Now that you know what to do here between ``` provide a similar answer check under Helpful Answer: for more information
```
{context}
Question: {question}
{format_answer}
```
The examples in between ++++++++ are to understand better the way to produce the answers
The section between ``` are the ones that you will have to elaborate and provide answers.
Helpful Answer:
Your answer should be [document1] when document1 is the pertinent answer
Your answer should be [document2] when document2 is the pertinent answer
The answer should be [document1, document2] when document1 and document2 include the answer or when the answer could be both of them
Not interested why document1 or document2 are better I just need to know which one
When you cannot find the answer respond with []"""
```
I'm providing
```
format_answer = output_parser.get_format_instructions()
prompt = PromptTemplate(
template=prompt_template,
input_variables=["context", "question"],
partial_variables={"format_answer": format_answer}
)
```
I'm providing context and question
with the format
document1: text text text
document2: text text
Question: question goes here?
{format_answer}
But Azure OpenAI sometimes answers in the format specified and sometimes just spits out the text provided with document1 and document2
Any help?
### Suggestion:
I would like to understand more about
1) How to generate consistent answers from the LLM to the same question
==> Good luck with that
2) I would like to understand how to validate the answers better than the "parser" available here.
| prompt issue and handling answers | https://api.github.com/repos/langchain-ai/langchain/issues/5561/comments | 3 | 2023-06-01T13:24:54Z | 2023-08-31T16:17:37Z | https://github.com/langchain-ai/langchain/issues/5561 | 1,736,327,704 | 5,561 |
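For point 2, a small dependency-free parser is often more robust than strict format instructions: accept anything bracket-shaped in the model output and fall back to `[]`. A sketch — the function name is illustrative:

```python
import re

def parse_bracket_list(text):
    """Extract a `[a, b]`-style answer from model output; [] if absent or empty."""
    match = re.search(r"\[([^\]]*)\]", text)
    if not match:
        return []
    inner = match.group(1).strip()
    return [item.strip() for item in inner.split(",")] if inner else []
```

This tolerates the LLM wrapping the answer in extra prose, which is usually the failure mode when the output format is only "mostly" followed.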
[
"langchain-ai",
"langchain"
] | ### Feature request
I'd like to extend `SequentialChain` to create a pre-built class which has a pre-populated list of `chains` in it. Consider the following example:
```
class CoolChain(LLMChain):
    ...

class CalmChain(LLMChain):
    ...

class CoolCalmChain(SequentialChain):
    llm: BaseLanguageModel
    chains: List[LLMChain] = [
        CoolChain(llm=llm),
        CalmChain(llm=llm),
    ]
```
This unfortunately cannot happen because the `root_validator` for `SequentialChain` raises error that it cannot find `chains` being passed.
```
Traceback (most recent call last):
File "/home/yash/app/main.py", line 17, in <module>
ai = CoolCalmChain(llm=my_llm)
File "pydantic/main.py", line 339, in pydantic.main.BaseModel.__init__
File "pydantic/main.py", line 1050, in pydantic.main.validate_model
File "/home/yash/.venv/lib/python3.10/site-packages/langchain/chains/sequential.py", line 47, in validate_chains
chains = values["chains"]
KeyError: 'chains'
```
I request that we find a way to bypass the stringent check so that this class can be easily extendable to support pre-built Chains which can be initialised on-the-go.
### Motivation
Motivation is to create multiple such "Empty Sequential Classes" which can be populated on-the-go. This saves us from populating parameters that might change for the same `SequentialChain`, such as the `llm` and `memory`, while defining the class itself.
I tried to define another `root_validator` to override the SequentialChain one and even that did not work.
### Your contribution
Few solutions that I could think of are as follows:
- Removing `pre` from the `define_chains` **root_validator** in the `SequentialChain` class.
- Using `@validator('chains')` instead, so that one can override it by simply using `pre`. | Extendable SequentialChain | https://api.github.com/repos/langchain-ai/langchain/issues/5557/comments | 2 | 2023-06-01T11:49:10Z | 2023-09-14T16:07:02Z | https://github.com/langchain-ai/langchain/issues/5557 | 1,736,135,856 | 5,557 |
[
"langchain-ai",
"langchain"
] | Hi,
I am using langchain to create collections in my local directory; after that I am persisting them using the code below:
from langchain.vectorstores import Chroma
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter , TokenTextSplitter
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.chains import VectorDBQA, RetrievalQA
from langchain.document_loaders import TextLoader, UnstructuredFileLoader, DirectoryLoader
loader = DirectoryLoader("D:/files/data")
docs = loader.load()
text_splitter = TokenTextSplitter(chunk_size=1000, chunk_overlap=200)
texts = text_splitter.split_documents(docs)
embeddings = OpenAIEmbeddings(openai_api_key=openai_api_key)
vectordb = Chroma.from_documents(texts, embedding=embeddings, persist_directory = persist_directory, collection_name=my_collection)
vectordb.persist()
vectordb = None
I am using the above code for creating different collections in the same persist_directory by just changing the collection name and the data file paths. Now let's say I have 5 collections in my persist directory:
my_collection1
my_collection2
my_collection3
my_collection4
my_collection5
Now, if I want to query my data, I have to load the persist_directory with a collection_name:
vectordb = Chroma(persist_directory=persist_directory, embedding_function=embeddings, collection_name=my_collection3)
qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(openai_api_key=openai_api_key), chain_type="stuff", retriever=vectordb.as_retriever(search_type="mmr"), return_source_documents=True)
qa("query")
So the issue is: with the above code I can only query my_collection3, but I want to query all five of my collections. Can anyone please suggest how I can do this? If it is not possible, I would still be thankful to know.
I had tried without a collection name, for example:
vectordb = Chroma(persist_directory=persist_directory, embedding_function=embedding)
qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(openai_api_key=openai_api_key), chain_type="stuff", retriever=vectordb.as_retriever(search_type="mmr"), return_source_documents=True)
qa("query")
but in this case I am getting
NoIndexException: Index not found, please create an instance before querying | Query With Multiple Collections | https://api.github.com/repos/langchain-ai/langchain/issues/5555/comments | 13 | 2023-06-01T11:18:06Z | 2024-04-06T16:04:12Z | https://github.com/langchain-ai/langchain/issues/5555 | 1,736,083,127 | 5,555 |
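As far as I know there is no built-in cross-collection query, but you can open one `Chroma(..., collection_name=...)` per collection and merge the scored results yourself. A dependency-free sketch of the merge step — `search` stands in for something like `vectordb.similarity_search_with_score`, and lower score means closer:

```python
def query_all_collections(search, collection_names, query, k=3):
    """Query each collection and return the k best hits overall."""
    hits = []
    for name in collection_names:
        for doc, score in search(name, query):
            hits.append((score, name, doc))
    hits.sort(key=lambda h: h[0])
    return hits[:k]
```

In practice `search` would instantiate (or cache) a `Chroma` per collection name and run the query, then the merged top-k can be fed into `RetrievalQA` as plain documents.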
[
"langchain-ai",
"langchain"
] | ### System Info
The first time I query the LLM everything is okay; the second time, and in all calls after that, the user query is totally changed.
For example, my input was "How are you today?" and the chain, while trying to make this a standalone question, gets confused and totally changes the question (see the attached screenshot).
This is how I am using the chain,
```python
QA_PROMPT = PromptTemplate(template=prompt_template, input_variables=["context", "question", "chat_history"])
chain = ConversationalRetrievalChain.from_llm(
    llm=llm, retriever=retriever, return_source_documents=False,
    verbose=True,
    max_tokens_limit=2048, combine_docs_chain_kwargs={'prompt': QA_PROMPT},
)
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run any LLM locally (I am using WizardVicuna; I faced the same issue with the OpenAI API) and try a ConversationalRetrieval QA with chat_history. Also, if I am doing something incorrectly, please let me know; much appreciated.
### Expected behavior
There should be an option to skip condensing the question prompt, because in context-based QA we want the original question preserved.
[
"langchain-ai",
"langchain"
] | ### System Info
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /opt/conda/lib/python3.10/site-packages/langchain/chains/query_constructor/base.py:36 in parse │
│ │
│ 33 │ def parse(self, text: str) -> StructuredQuery: │
│ 34 │ │ try: │
│ 35 │ │ │ expected_keys = ["query", "filter"] │
│ ❱ 36 │ │ │ parsed = parse_json_markdown(text, expected_keys) │
│ 37 │ │ │ if len(parsed["query"]) == 0: │
│ 38 │ │ │ │ parsed["query"] = " " │
│ 39 │ │ │ if parsed["filter"] == "NO_FILTER" or not parsed["filter"]: │
│ │
│ /opt/conda/lib/python3.10/site-packages/langchain/output_parsers/structured.py:27 in │
│ parse_json_markdown │
│ │
│ 24 │
│ 25 def parse_json_markdown(text: str, expected_keys: List[str]) -> Any: │
│ 26 │ if "```json" not in text: │
│ ❱ 27 │ │ raise OutputParserException( │
│ 28 │ │ │ f"Got invalid return object. Expected markdown code snippet with JSON " │
│ 29 │ │ │ f"object, but got:\n{text}" │
│ 30 │ │ ) │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
OutputParserException: Got invalid return object. Expected markdown code snippet with JSON object, but got:
```vbnet
{
"query": "chatbot refinement",
"filter": "NO_FILTER"
}
```
During handling of the above exception, another exception occurred:
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /tmp/ipykernel_28206/2038672913.py:1 in <module> │
│ │
│ [Errno 2] No such file or directory: '/tmp/ipykernel_28206/2038672913.py' │
│ │
│ /opt/conda/lib/python3.10/site-packages/langchain/retrievers/self_query/base.py:73 in │
│ get_relevant_documents │
│ │
│ 70 │ │ """ │
│ 71 │ │ inputs = self.llm_chain.prep_inputs({"query": query}) │
│ 72 │ │ structured_query = cast( │
│ ❱ 73 │ │ │ StructuredQuery, self.llm_chain.predict_and_parse(callbacks=None, **inputs) │
│ 74 │ │ ) │
│ 75 │ │ if self.verbose: │
│ 76 │ │ │ print(structured_query) │
│ │
│ /opt/conda/lib/python3.10/site-packages/langchain/chains/llm.py:238 in predict_and_parse │
│ │
│ 235 │ │ """Call predict and then parse the results.""" │
│ 236 │ │ result = self.predict(callbacks=callbacks, **kwargs) │
│ 237 │ │ if self.prompt.output_parser is not None: │
│ ❱ 238 │ │ │ return self.prompt.output_parser.parse(result) │
│ 239 │ │ else: │
│ 240 │ │ │ return result │
│ 241 │
│ │
│ /opt/conda/lib/python3.10/site-packages/langchain/chains/query_constructor/base.py:49 in parse │
│ │
│ 46 │ │ │ │ limit=parsed.get("limit"), │
│ 47 │ │ │ ) │
│ 48 │ │ except Exception as e: │
│ ❱ 49 │ │ │ raise OutputParserException( │
│ 50 │ │ │ │ f"Parsing text\n{text}\n raised following error:\n{e}" │
│ 51 │ │ │ ) │
│ 52 │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
OutputParserException: Parsing text
```vbnet
{
"query": "chatbot refinement",
"filter": "NO_FILTER"
}
```
raised following error:
Got invalid return object. Expected markdown code snippet with JSON object, but got:
```vbnet
{
"query": "chatbot refinement",
"filter": "NO_FILTER"
}
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from time import sleep
from typing import Any, List, Mapping, Optional

import torch
from langchain.llms.base import LLM
from transformers import pipeline


class Dolly(LLM):
    history_data: Optional[List] = []
    chatbot: Optional[Any] = None  # holds a transformers pipeline once initialized
    conversation: Optional[str] = ""

    @property
    def _llm_type(self) -> str:
        return "custom"

    def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str:
        if stop is not None:
            pass
            # raise ValueError("stop kwargs are not permitted.")
        if self.chatbot is None:
            if self.conversation == "":
                self.chatbot = pipeline(
                    model="databricks/dolly-v2-12b",
                    torch_dtype=torch.bfloat16,
                    trust_remote_code=True,
                    device_map="auto",
                )
            else:
                raise ValueError("Something went wrong")
        sleep(2)
        data = self.chatbot(prompt)[0]["generated_text"]
        # add to history
        self.history_data.append({"prompt": prompt, "response": data})
        return data

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        """Get the identifying parameters."""
        return {"model": "DollyCHAT"}


llm = Dolly()
```
Then I follow the instructions in https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/chroma_self_query.html
and I got the above error, sometimes it works, but sometimes it doesn't.
### Expected behavior
Should not return error and act like before (return the related documents) | Self-querying with Chroma bug - Got invalid return object. Expected markdown code snippet with JSON object, but got ... | https://api.github.com/repos/langchain-ai/langchain/issues/5552/comments | 9 | 2023-06-01T10:25:32Z | 2023-12-14T16:08:08Z | https://github.com/langchain-ai/langchain/issues/5552 | 1,735,984,739 | 5,552 |
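The failure is that the parser insists on a ` ```json `-labeled fence while the model emitted ` ```vbnet `. A tolerant parser that accepts any fence label (or bare braces) avoids this class of failure — a sketch only, not the LangChain function:

```python
import json
import re

def parse_json_markdown_tolerant(text):
    """Parse a JSON object from a fenced block regardless of the fence label."""
    match = re.search(r"`{3}[a-zA-Z]*\s*(\{.*?\})\s*`{3}", text, re.DOTALL)
    raw = match.group(1) if match else text[text.find("{"): text.rfind("}") + 1]
    return json.loads(raw)
```

This only handles top-level objects without nested braces in the lazy match; a production version would balance braces, but even this sketch accepts the ` ```vbnet ` output shown above.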
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain 0.0.187, Python 3.8.16, Linux Mint.
### Who can help?
@hwchase17 @ago
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
This script demonstrates the issue:
```python
from langchain.chains import ConversationChain
from langchain.llms import FakeListLLM
from langchain.memory import ConversationBufferMemory
from langchain.prompts import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    MessagesPlaceholder,
    SystemMessagePromptTemplate,
)

sys_prompt = SystemMessagePromptTemplate.from_template("You are helpful assistant.")
human_prompt = HumanMessagePromptTemplate.from_template("Hey, {input}")

prompt = ChatPromptTemplate.from_messages(
    [
        sys_prompt,
        MessagesPlaceholder(variable_name="history"),
        human_prompt,
    ]
)

chain = ConversationChain(
    prompt=prompt,
    llm=FakeListLLM(responses=[f"+{x}" for x in range(10)]),
    memory=ConversationBufferMemory(return_messages=True, input_key="input"),
    verbose=True,
)
chain({"input": "hi there!"})
chain({"input": "what's the weather?"})
```
Output:
```
> Entering new ConversationChain chain...
Prompt after formatting:
System: You are helpful assistant.
Human: Hey, hi there!
> Finished chain.
> Entering new ConversationChain chain...
Prompt after formatting:
System: You are helpful assistant.
Human: hi there! <----- ISSUE
AI: +0
Human: Hey, what's the weather?
> Finished chain.
```
`ISSUE`: `Human` history message in the second request is incorrect. It provides raw input.
### Expected behavior
Expected output:
```
> Entering new ConversationChain chain...
Prompt after formatting:
System: You are helpful assistant.
Human: Hey, hi there!
> Finished chain.
> Entering new ConversationChain chain...
Prompt after formatting:
System: You are helpful assistant.
Human: Hey, hi there! <----- EXPECTED
AI: +0
Human: Hey, what's the weather?
> Finished chain.
```
History message should contain rendered template string instead of raw input.
As a workaround I add extra "rendering" step before the `ConversationChain`:
```python
from langchain.chains import ConversationChain, SequentialChain, TransformChain
from langchain.llms import FakeListLLM
from langchain.memory import ConversationBufferMemory
from langchain.prompts import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    MessagesPlaceholder,
    PromptTemplate,
    SystemMessagePromptTemplate,
)

# Extra step: pre-render user template.
in_prompt = PromptTemplate.from_template("Hey, {input}")
render_chain = TransformChain(
    input_variables=in_prompt.input_variables,
    output_variables=["text"],
    transform=lambda x: {"text": in_prompt.format(**x)},
)

prompt = ChatPromptTemplate.from_messages(
    [
        SystemMessagePromptTemplate.from_template("You are helpful assistant."),
        MessagesPlaceholder(variable_name="history"),
        HumanMessagePromptTemplate.from_template("{text}"),
    ]
)

chat_chain = ConversationChain(
    prompt=prompt,
    llm=FakeListLLM(responses=(f"+{x}" for x in range(10))),
    memory=ConversationBufferMemory(return_messages=True, input_key="text"),
    input_key="text",
    verbose=True,
)
chain = SequentialChain(chains=[render_chain, chat_chain], input_variables=["input"])
chain({"input": "hi there!"})
chain({"input": "what's the weather?"})
```
Output:
```
> Entering new ConversationChain chain...
Prompt after formatting:
System: You are helpful assistant.
Human: Hey, hi there!
> Finished chain.
> Entering new ConversationChain chain...
Prompt after formatting:
System: You are helpful assistant.
Human: Hey, hi there! <--- FIXED
AI: +0
Human: Hey, what's the weather?
> Finished chain.
```
I have checked the code and looks like this behavior is by design: Both `Chain.prep_inputs()` and `Chain.prep_outputs()` pass only inputs/outputs to the `memory` so there is no way to store formatted/rendered template.
Not sure if this is a design issue or incorrect `langchain` API usage. Docs say nothing about `PromptTemplate` restrictions, so I assumed it should work out of the box. | Incorrect PromptTemplate memorizing | https://api.github.com/repos/langchain-ai/langchain/issues/5551/comments | 1 | 2023-06-01T09:54:09Z | 2023-09-10T16:09:09Z | https://github.com/langchain-ai/langchain/issues/5551 | 1,735,921,910 | 5,551 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Please add support for the model Guanaco 65B, which is trained via qLoRA method. To be able to swap OpenAI model to Guanaco and perform same operations over it.
### Motivation
The best performance and free model out there up to date 01.06.2023.
### Your contribution
- | Guanaco 65B model support | https://api.github.com/repos/langchain-ai/langchain/issues/5548/comments | 2 | 2023-06-01T08:42:42Z | 2023-09-14T16:07:08Z | https://github.com/langchain-ai/langchain/issues/5548 | 1,735,790,487 | 5,548 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
In the current implementation, when an APOC procedure fails, a generic error message is raised stating: "Could not use APOC procedures. Please install the APOC plugin in Neo4j." This message can lead to user confusion as it suggests the APOC plugin is not installed when in reality it may be installed but not correctly configured or permitted to run certain procedures.
This issue is encountered specifically when the refresh_schema function calls apoc.meta.data(). The function apoc.meta.data() isn't allowed to run under default configurations in the Neo4j database, thus leading to the mentioned error message.
Here is the code snippet where the issue arises:
```
# Set schema
try:
    self.refresh_schema()
except neo4j.exceptions.ClientError:
    raise ValueError(
        "Could not use APOC procedures. "
        "Please install the APOC plugin in Neo4j."
    )
```
### Suggestion:
To improve the user experience, I propose that the error message should be made more specific. Instead of merely advising users to install the APOC plugin, it would be beneficial to indicate that certain procedures may not be configured or whitelisted to run by default and to guide the users to check their configurations.
I believe this will save users time when troubleshooting and will reduce the potential for confusion. | Issue: Improve Error Messaging When APOC Procedures Fail in Neo4jGraph | https://api.github.com/repos/langchain-ai/langchain/issues/5545/comments | 0 | 2023-06-01T08:04:16Z | 2023-06-03T23:56:40Z | https://github.com/langchain-ai/langchain/issues/5545 | 1,735,730,197 | 5,545 |
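A runnable sketch of the more specific message, with a stand-in exception class so it runs without a Neo4j driver (the exact configuration key names should be checked against the Neo4j operations manual):

```python
class ClientError(Exception):
    """Stand-in for neo4j.exceptions.ClientError."""

def refresh_schema_with_hint(refresh):
    try:
        refresh()
    except ClientError as e:
        raise ValueError(
            "Could not use APOC procedures. Verify the APOC plugin is installed "
            "AND that procedures such as apoc.meta.data are allowed to run "
            "(e.g. via dbms.security.procedures.allowlist in neo4j.conf). "
            f"Underlying error: {e}"
        ) from e

def failing_refresh():
    raise ClientError("apoc.meta.data is unavailable")
```

Surfacing the underlying driver error alongside the hint lets users distinguish "plugin missing" from "procedure not allowlisted" at a glance.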
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```
loaders = [TextLoader("13.txt"), TextLoader("14.txt"), TextLoader("15.txt"), TextLoader("16.txt"), TextLoader("17.txt"), TextLoader("18.txt")]
documents = []
for loader in loaders:
    documents.extend(loader.load())
text_splitter = CharacterTextSplitter(chunk_size=500, chunk_overlap=150)
documents = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
vectorstore = Chroma.from_documents(documents, embeddings)
qa = RetrievalQAWithSourcesChain.from_chain_type(
    llm=llm,
    chain_type="map_rerank",
    retriever=vectorstore.as_retriever(),
    return_source_documents=True,
)
query = "茶艺师报名条件是什么,使用中文回答"  # "What are the registration requirements for a tea art master? Answer in Chinese."
chat_history = []
result = qa({'question': query})
```
Sometimes this code raises ValueError: Could not parse output, but when I rerun result = qa({'question': query}), it may return the answer. How can I fix this?
### Suggestion:
I wonder why this happens, and how to fix it. please help me!! | The same code, sometimes throwing an exception(ValueError: Could not parse output), sometimes running correctly | https://api.github.com/repos/langchain-ai/langchain/issues/5544/comments | 2 | 2023-06-01T07:13:59Z | 2023-11-27T16:09:56Z | https://github.com/langchain-ai/langchain/issues/5544 | 1,735,643,747 | 5,544 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I got a scenario where I'm using the `ConversationalRetrievalChain` with chat history. The problem is, it was streaming the condensed version of the question, not the actual answer. So I separated the models: one for condensing the question and one for answering with streaming. But as I suspected, the condensing chain eats up time generating the condensed question, and the actual streaming of the answer has to wait for it.
some of my implementations:
```python
callback = AsyncIteratorCallbackHandler()
q_generator_llm = ChatOpenAI(
    openai_api_key=settings.openai_api_key,
)
streaming_llm = ChatOpenAI(
    openai_api_key=settings.openai_api_key,
    streaming=True,
    callbacks=[callback],
)
question_generator = LLMChain(llm=q_generator_llm, prompt=CONDENSE_QUESTION_PROMPT)
doc_chain = load_qa_chain(llm=streaming_llm, chain_type="stuff", prompt=prompt)
qa_chain = ConversationalRetrievalChain(
    retriever=collection_store.as_retriever(search_kwargs={"k": 3}),
    combine_docs_chain=doc_chain,
    question_generator=question_generator,
    return_source_documents=True,
)
history = []
task = asyncio.create_task(
    qa_chain.acall({
        "question": q,
        "chat_history": history,
    }),
)
```
___
### **Any workaround on how I can avoid condensing the question to save time? Or any more efficient way to resolve the issue?**
### Suggestion:
_No response_ | Issue: previous message condensing time on `ConversationalRetrievalChain` | https://api.github.com/repos/langchain-ai/langchain/issues/5542/comments | 3 | 2023-06-01T06:37:44Z | 2023-10-05T16:10:16Z | https://github.com/langchain-ai/langchain/issues/5542 | 1,735,589,481 | 5,542 |
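One time-saver worth noting: if I read the chain correctly, LangChain already skips the condensing LLM call when `chat_history` is empty and uses the raw question. The same guard is easy to add around your own pipeline — a dependency-free sketch with illustrative names:

```python
def maybe_condense(question, chat_history, condense):
    """Only pay for the condensing LLM call when there is history to fold in."""
    if not chat_history:
        return question
    return condense(question, chat_history)
```

Beyond that, the condensing call is inherently sequential (the retriever needs the standalone question), so the remaining levers are using a cheaper/faster model for condensing or capping how much history is folded in.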
[
"langchain-ai",
"langchain"
] | ### Feature request
Fork off from this issue: https://github.com/hwchase17/langchain/issues/5300
The idea is to provide the WeaviateHybridSearchRetriever with the ability to use local embeddings, similar to the Weaviate vectorstore. Specifically, for the `WeaviateHybridSearchRetriever.add_documents()` and `WeaviateHybridSearchRetriever.get_relevant_documents()` functions to work similarly to the `Weaviate.from_texts()` function, where there is the option to use local embeddings if passed during creation. Additionally, the `WeaviateHybridSearchRetriever._create_schema_if_missing()` function likely needs to remove the default addition of a vectorizer in the schema object (related issue here: https://github.com/hwchase17/langchain/issues/5300).
### Motivation
This will allow those of us running Weaviate without embedding modules (like myself) to use the Weaviate Hybrid Search Retriever.
### Your contribution
I am planning on working to get a fix locally, I can potentially submit this as a PR down the line. Busy this week so others would probably beat me to it. I can review though. | Allow users to pass local embeddings to Weaviate Hybrid Search Retriever | https://api.github.com/repos/langchain-ai/langchain/issues/5539/comments | 3 | 2023-06-01T05:19:27Z | 2023-12-06T17:45:40Z | https://github.com/langchain-ai/langchain/issues/5539 | 1,735,478,342 | 5,539 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain - 0.0.174 / 0.0.178 / 0.0.187
python3
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Call OpenAI setting the parameters - openai_api_type , openai_api_version , openai_api_base , openai_api_key
-- Successful OpenAI request response
2. Call Azure OpenAI setting the parameters - openai_api_type , openai_api_version , openai_api_base , openai_api_key
-- Fails
All subsequent calls fail.
Alternatively, if you first call Azure OpenAI with parameters set correctly, that succeeds, but OpenAI then fails. And all subsequent calls fail.
Each works independently, so presumably the parameter values are set as expected. But when one is called after the other, the second API (OpenAI or Azure OpenAI, whichever is called second) fails.
### Expected behavior
If the parameters are set correctly, both should work as required. If they each work independently after an app restart, why would they fail when called sequentially?
| OpenAI and Azure OpenAI - calls one after another | https://api.github.com/repos/langchain-ai/langchain/issues/5537/comments | 4 | 2023-06-01T04:20:05Z | 2023-09-18T16:09:34Z | https://github.com/langchain-ai/langchain/issues/5537 | 1,735,414,868 | 5,537 |
[
"langchain-ai",
"langchain"
] | ### System Info
System Info (Docker Dev Container):
```
PRETTY_NAME="Debian GNU/Linux 10 (buster)"
NAME="Debian GNU/Linux"
VERSION_ID="10"
VERSION="10 (buster)"
VERSION_CODENAME=buster
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
```
Python: 3.10
Pip:
```
absl-py 1.4.0
aiohttp 3.8.4
aiosignal 1.3.1
antlr4-python3-runtime 4.9.3
anyio 3.6.2
argilla 1.6.0
async-timeout 4.0.2
attrs 23.1.0
backoff 2.2.1
cachetools 5.3.0
certifi 2022.12.7
cffi 1.15.1
charset-normalizer 3.1.0
click 8.1.3
cloudpickle 2.2.1
cmake 3.26.3
coloredlogs 15.0.1
commonmark 0.9.1
contourpy 1.0.7
cryptography 40.0.2
cycler 0.11.0
dataclasses-json 0.5.7
Deprecated 1.2.13
detectron2 0.4
effdet 0.3.0
et-xmlfile 1.1.0
exceptiongroup 1.1.1
fastapi 0.95.1
filelock 3.11.0
flatbuffers 23.3.3
fonttools 4.39.3
frozenlist 1.3.3
future 0.18.3
fvcore 0.1.3.post20210317
google-auth 2.17.3
google-auth-oauthlib 1.0.0
gptcache 0.1.11
greenlet 2.0.2
grpcio 1.53.0
h11 0.14.0
httpcore 0.16.3
httpx 0.23.3
huggingface-hub 0.13.4
humanfriendly 10.0
idna 3.4
iniconfig 2.0.0
iopath 0.1.10
Jinja2 3.1.2
joblib 1.2.0
kiwisolver 1.4.4
langchain 0.0.141
layoutparser 0.3.4
lit 16.0.1
lxml 4.9.2
Markdown 3.4.3
MarkupSafe 2.1.2
marshmallow 3.19.0
marshmallow-enum 1.5.1
matplotlib 3.7.1
monotonic 1.6
mpmath 1.3.0
msg-parser 1.2.0
multidict 6.0.4
mypy-extensions 1.0.0
networkx 3.1
nltk 3.8.1
numpy 1.23.5
nvidia-cublas-cu11 11.10.3.66
nvidia-cuda-cupti-cu11 11.7.101
nvidia-cuda-nvrtc-cu11 11.7.99
nvidia-cuda-runtime-cu11 11.7.99
nvidia-cudnn-cu11 8.5.0.96
nvidia-cufft-cu11 10.9.0.58
nvidia-curand-cu11 10.2.10.91
nvidia-cusolver-cu11 11.4.0.1
nvidia-cusparse-cu11 11.7.4.91
nvidia-nccl-cu11 2.14.3
nvidia-nvtx-cu11 11.7.91
oauthlib 3.2.2
olefile 0.46
omegaconf 2.3.0
onnxruntime 1.14.1
openai 0.27.4
openapi-schema-pydantic 1.2.4
opencv-python 4.6.0.66
openpyxl 3.1.2
packaging 23.1
pandas 1.5.3
pdf2image 1.16.3
pdfminer.six 20221105
pdfplumber 0.9.0
pgvector 0.1.6
Pillow 9.5.0
pip 23.1
pluggy 1.0.0
portalocker 2.7.0
protobuf 4.22.3
psycopg2-binary 2.9.6
pyasn1 0.4.8
pyasn1-modules 0.2.8
pycocotools 2.0.6
pycparser 2.21
pydantic 1.10.7
pydot 1.4.2
Pygments 2.15.0
pypandoc 1.11
pyparsing 3.0.9
pypdf 3.9.0
pytesseract 0.3.10
pytest 7.3.1
python-dateutil 2.8.2
python-docx 0.8.11
python-dotenv 1.0.0
python-magic 0.4.27
python-multipart 0.0.6
python-poppler 0.4.0
python-pptx 0.6.21
pytz 2023.3
PyYAML 6.0
regex 2023.3.23
requests 2.28.2
requests-oauthlib 1.3.1
rfc3986 1.5.0
rich 13.0.1
rsa 4.9
scipy 1.10.1
setuptools 65.5.1
six 1.16.0
sniffio 1.3.0
SQLAlchemy 1.4.47
starlette 0.26.1
sympy 1.11.1
tabulate 0.9.0
tenacity 8.2.2
tensorboard 2.12.2
tensorboard-data-server 0.7.0
tensorboard-plugin-wit 1.8.1
termcolor 2.2.0
tiktoken 0.3.3
timm 0.6.13
tokenizers 0.13.3
tomli 2.0.1
torch 2.0.0
torchaudio 2.0.1
torchvision 0.15.1
tqdm 4.65.0
transformers 4.28.1
triton 2.0.0
typing_extensions 4.5.0
typing-inspect 0.8.0
unstructured 0.5.12
unstructured-inference 0.3.2
urllib3 1.26.15
uvicorn 0.21.1
Wand 0.6.11
Werkzeug 2.2.3
wheel 0.40.0
wrapt 1.14.1
XlsxWriter 3.1.0
yacs 0.1.8
yarl 1.8.2
```
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Write the code below:
```
chain = RetrievalQAWithSourcesChain.from_chain_type(
llm=ChatOpenAI(openai_api_key=api_key),
chain_type="map_reduce",
retriever=retriever,
)
llm_call = "random llm call"
result = chain({
"question": llm_call,
},
return_only_outputs=True
)
```
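One likely mechanism (hedged; an assumption based on how the chain post-processes output, not confirmed from this report) is that the raw completion is split on a literal `SOURCES:` marker, so any formatting drift from the model leaves everything in `answer` and `sources` empty. A more tolerant split could look like:

```python
# Split an LLM answer into (answer, sources), accepting case and spacing
# variations of the "SOURCES:" marker. Purely illustrative post-processing.
import re

def split_sources(text):
    m = re.search(r"\n?sources?\s*:\s*", text, flags=re.IGNORECASE)
    if not m:
        return text.strip(), ""
    return text[:m.start()].strip(), text[m.end():].strip()

assert split_sources("A fine answer.\nSOURCES: doc1.txt") == ("A fine answer.", "doc1.txt")
assert split_sources("A fine answer.\nSources: doc1.txt") == ("A fine answer.", "doc1.txt")
assert split_sources("No marker here.") == ("No marker here.", "")
```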
### Expected behavior
I'm expecting that I'll be having a `result["answer"]` and non empty `result["sources"]` but here's what I get instead:

As you can see, `sources` is empty but it's included in `result["answer"]` as a string. | `RetrievalQAWithSourcesChain` not returning sources in `sources` field. | https://api.github.com/repos/langchain-ai/langchain/issues/5536/comments | 4 | 2023-06-01T04:17:08Z | 2024-01-16T08:36:32Z | https://github.com/langchain-ai/langchain/issues/5536 | 1,735,412,964 | 5,536 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Support Tigris as a vector search backend
### Motivation
Tigris is a Serverless NoSQL Database and Search Platform and have their [vector search](https://www.tigrisdata.com/docs/concepts/vector-search/python/) product. It will be great option for users to use an integrated database and search product.
### Your contribution
I can submit a a PR | Add Tigris vectorstore for vector search | https://api.github.com/repos/langchain-ai/langchain/issues/5535/comments | 3 | 2023-06-01T03:18:00Z | 2023-06-06T03:39:17Z | https://github.com/langchain-ai/langchain/issues/5535 | 1,735,366,931 | 5,535 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I followed the chapter “Chat Over Documents with Chat History” to build a bot that chats with a PDF.
I want streaming output, but when I use the stuff chain like this:
```python
doc_chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff", prompt=QA_PROMPT)
chain = ConversationalRetrievalChain(
retriever=vector_db.as_retriever(),
question_generator=question_generator,
combine_docs_chain=doc_chain
)
```
it returns: "This model's maximum context length is 4097 tokens, however you requested 5741 tokens (5485 in your prompt; 256 for the completion). Please reduce your prompt; or completion length"
When I use the map_reduce chain:
```python
doc_chain = load_qa_chain(OpenAI(temperature=0,streaming=True,callbacks=[StreamingStdOutCallbackHandler()]), chain_type="map_reduce", combine_prompt=getQaMap_reducePromot())
```
it returns: "Cannot stream results with multiple prompts."
How can I resolve this when the context is too long?
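One common workaround (not from the original report; a hedged sketch) is to trim the retrieved chunks to a token budget before they are stuffed into a single prompt, so the stuff chain stays under the context window and streaming keeps working. Word count stands in for a real tokenizer such as tiktoken here:

```python
# Keep documents in order until the token budget is exhausted, then stop.
def trim_docs(docs, max_tokens):
    kept, used = [], 0
    for d in docs:
        n = len(d.split())  # replace with a real tokenizer in practice
        if used + n > max_tokens:
            break
        kept.append(d)
        used += n
    return kept

docs = ["alpha beta gamma", "delta epsilon", "zeta eta theta iota"]
assert trim_docs(docs, 5) == ["alpha beta gamma", "delta epsilon"]
assert trim_docs(docs, 2) == []
```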
### Suggestion:
_No response_ | Issue: how stream results with long context | https://api.github.com/repos/langchain-ai/langchain/issues/5532/comments | 4 | 2023-06-01T00:52:51Z | 2024-02-07T16:30:03Z | https://github.com/langchain-ai/langchain/issues/5532 | 1,735,238,661 | 5,532 |
[
"langchain-ai",
"langchain"
start_chat() constructs a vertexai _ChatSession and stores the sampling parameters on the instance, but send_message() will not use those parameters if it is called without explicit arguments. This is because send_message() falls back to parameter defaults that come from module-level globals.
You can fix this by passing `**self._default_params` in the send_message() call.
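A minimal self-contained illustration of the bug and the proposed fix (the class below is a simplified stand-in with assumed names, not the real vertexai `_ChatSession` API):

```python
# If send_message() carries its own defaults, the session-level params stored
# at construction time are silently ignored unless explicitly forwarded.

class FakeChatSession:
    """Stand-in for vertexai's _ChatSession; real signatures may differ."""
    def __init__(self, temperature=0.0):
        self.temperature = temperature  # stored at start_chat() time

    def send_message(self, message, temperature=0.7):
        # Bug: the module-level default (0.7) wins over self.temperature.
        return temperature

    def send_message_fixed(self, message, **params):
        # Fix: forward the stored defaults on every call.
        params.setdefault("temperature", self.temperature)
        return params["temperature"]

session = FakeChatSession(temperature=0.2)
assert session.send_message("hi") == 0.7        # stored value ignored
assert session.send_message_fixed("hi") == 0.2  # stored value respected
```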
https://github.com/hwchase17/langchain/blob/359fb8fa3ae0b0904dbb36f998cd2339ea0aec0f/langchain/chat_models/vertexai.py#LL122C75-L122C75 | Sampling parameters are ignored by vertexai | https://api.github.com/repos/langchain-ai/langchain/issues/5531/comments | 2 | 2023-05-31T22:19:11Z | 2023-06-05T14:06:42Z | https://github.com/langchain-ai/langchain/issues/5531 | 1,735,113,256 | 5,531 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Chat models relying on `SystemMessage`, ... instead of simple text, hinder creating longer prompts.
It would have been much simpler to avoid special casing chat models, and instead parse special tokens in the text prompt to separate system, human, ai, ...
### Motivation
Something similar to [this](https://github.com/microsoft/MM-REACT/blob/main/langchain/llms/openai.py#L211) that uses `<|im_start|>system\nsystem message<|im_end|>` would make it easier to keep the same code for models, and just use different prompts for chat endpints.
For example, it is perfectly valid to have 2 system messages, and I found it improves the results to have a system message at the beginning, and [one after](https://github.com/microsoft/MM-REACT/blob/main/langchain/agents/assistant/prompt.py#L191) some zero-shot examples right before the input.
### Your contribution
I can send the PR if there is any interest. | Remove Chat Models | https://api.github.com/repos/langchain-ai/langchain/issues/5530/comments | 2 | 2023-05-31T22:08:48Z | 2023-09-13T16:07:48Z | https://github.com/langchain-ai/langchain/issues/5530 | 1,735,101,156 | 5,530 |
[
"langchain-ai",
"langchain"
] | Hi,
I am building a chatbot using LLM like fastchat-t5-3b-v1.0 and want to reduce my inference time.
I am loading the entire model on GPU, using device_map parameter, and making use of `langchain.llms.HuggingFacePipeline` agent for querying the LLM model. Also specifying the device=0 ( which is the 1st rank GPU) for hugging face pipeline as well.
I am monitoring the GPU and CPU usage throughout the entire execution, and I can see that though my model is on GPU, at the time of querying the model, it makes use of CPU.
The spike in CPU usage shows that query execution is happening on CPU.
Below is the code that I am using to do inference on Fastchat LLM.
```
from llama_index import SimpleDirectoryReader, GPTVectorStoreIndex, PromptHelper, LLMPredictor
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
from llama_index import LangchainEmbedding, ServiceContext
from transformers import T5Tokenizer, T5ForConditionalGeneration
from accelerate import init_empty_weights, infer_auto_device_map
model_name = 'lmsys/fastchat-t5-3b-v1.0'
config = T5Config.from_pretrained(model_name )
with init_empty_weights():
model_layer = T5ForConditionalGeneration(config=config)
device_map = infer_auto_device_map(model_layer, max_memory={0: "12GiB",1: "12GiB", "cpu": "0GiB"}, no_split_module_classes=["T5Block"])
# the value for is : device_map = {'': 0}. i.e loading model in 1st GPU
model = T5ForConditionalGeneration.from_pretrained(model_name, torch_dtype=torch.float16, device_map=device_map, offload_folder="offload", offload_state_dict=True)
tokenizer = T5Tokenizer.from_pretrained(model_name)
from transformers import pipeline
pipe = pipeline(
"text2text-generation", model=model, tokenizer=tokenizer, device= 0,
max_length=1536, temperature=0, top_p = 1, num_beams=1, early_stopping=False
)
from langchain.llms import HuggingFacePipeline
llm = HuggingFacePipeline(pipeline=pipe)
embed_model = LangchainEmbedding(HuggingFaceEmbeddings())
# set maximum input size
max_input_size = 2048
# set number of output tokens
num_outputs = 512
# set maximum chunk overlap
max_chunk_overlap = 20
# set chunk size limit
chunk_size_limit = 300
prompt_helper = PromptHelper(max_input_size, num_outputs, max_chunk_overlap)
service_context = ServiceContext.from_defaults(embed_model=embed_model, llm_predictor=LLMPredictor(llm), prompt_helper=prompt_helper, chunk_size_limit=chunk_size_limit)
# build index
documents = SimpleDirectoryReader('data').load_data()
new_index = GPTVectorStoreIndex.from_documents(documents, service_context=service_context)
query_engine = new_index.as_query_engine(
response_mode='no_text',
verbose=True,
similarity_top_k=2
)
template = """
A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.
### Human: Given the context:
---
{context}
---
Answer the following question:
---
{input}
### Assistant:
"""
from langchain import LLMChain, PromptTemplate
prompt = PromptTemplate(
input_variables=["context", "input"],
template=template,
)
chain = LLMChain(
llm=llm,
prompt=prompt,
verbose=True
)
user_input= "sample query question?"
context = query_engine.query(user_input)
concatenated_context = ' '.join(map(str, [node.node.text for node in context.source_nodes]))
response = chain.run({"context": concatenated_context, "input": user_input})
```
Here the “data” folder holds my full input text in PDF format. I am using GPTVectorStoreIndex and the Hugging Face pipeline to build the index, fetch the relevant chunks, and generate the prompt from the context and user_input.
Then I use LLMChain from the langchain library to generate the response from the FastChat model, as shown in the code.
Please have a look, and let me know if this is the expected behaviour.
How can I make use of the GPU for query execution as well, to reduce the inference response time?
| Query execution with langchain LLM pipeline is happening on CPU, even if model is loaded on GPU | https://api.github.com/repos/langchain-ai/langchain/issues/5522/comments | 2 | 2023-05-31T20:18:21Z | 2023-09-21T16:08:57Z | https://github.com/langchain-ai/langchain/issues/5522 | 1,734,953,214 | 5,522 |
[
"langchain-ai",
"langchain"
] | ### System Info
```
$ langchain env
LangChain Environment:
library_version:0.0.184
platform:Linux-5.4.0-146-generic-x86_64-with-glibc2.31
runtime:python
runtime_version:3.11.3
```
### Who can help?
_No response_
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
Following the [tracing v2 instructions](https://python.langchain.com/en/latest/tracing/agent_with_tracing.html#beta-tracing-v2), run:
```
$ langchain plus start
WARN[0000] The "OPENAI_API_KEY" variable is not set. Defaulting to a blank string.
[+] Running 2/2
⠿ langchain-frontend Pulled 5.3s
⠿ langchain-backend Pulled 9.5s
unable to prepare context: path "frontend-react/." not found
langchain plus server is running at http://localhost. To connect locally, set the following environment variable when running your LangChain application.
LANGCHAIN_TRACING_V2=true
```
It looks like neither the `frontend-react` or `backend` folders referenced by the [`docker-compose.yaml`](https://github.com/hwchase17/langchain/blob/f72bb966f894f99c9ffc2c730be392c71d020ac8/langchain/cli/docker-compose.yaml#L14) are in the repository, thus docker won't build them. Maybe we should remove the `build:` section of the YAML when deploying to users so they simply pull the images from the Docker Hub.
### Expected behavior
It should start properly. | Tracing V2 doesn't work | https://api.github.com/repos/langchain-ai/langchain/issues/5521/comments | 1 | 2023-05-31T19:04:38Z | 2023-09-10T16:09:29Z | https://github.com/langchain-ai/langchain/issues/5521 | 1,734,829,837 | 5,521 |
[
"langchain-ai",
"langchain"
] | ### System Info
Most recent version of Langchain
Python: 3.10.8
MacOS 13.4 - M1
### Who can help?
@hwchase17
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The Chroma constructor in the vectorstore section uses the document function when it should be the query function for embeddings. As a result, if the documents parameter is blank when using Chroma, Langchain will error out with a ValidationError. Please change line 95 to be embed_query instead of embed_documents [here](https://github.com/hwchase17/langchain/blob/master/langchain/vectorstores/chroma.py) in order for this to work / be consistent with the rest of the vectorstore wrappers
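A toy illustration of why the two embedding interfaces cannot be swapped (names assumed to match the usual LangChain embeddings contract): `embed_documents` takes a list of strings and returns a list of vectors, while `embed_query` takes one string and returns one vector, so wiring up the wrong one silently changes shapes.

```python
# Minimal stand-in for an Embeddings implementation, showing the two
# interface shapes side by side.

class ToyEmbeddings:
    def embed_documents(self, texts):   # list[str] -> list[list[float]]
        return [[float(len(t))] for t in texts]

    def embed_query(self, text):        # str -> list[float]
        return [float(len(text))]

emb = ToyEmbeddings()
assert emb.embed_documents(["abc"]) == [[3.0]]
assert emb.embed_query("abc") == [3.0]
# Passing a bare query string where embed_documents expects a list silently
# iterates over characters -- one vector per character, not one per text:
assert emb.embed_documents("abc") == [[1.0], [1.0], [1.0]]
```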
### Expected behavior
Use the query function instead of the documents function for use with embeddings in Chroma | Chroma: Constructor takes wrong embedding function (document vs query) | https://api.github.com/repos/langchain-ai/langchain/issues/5519/comments | 4 | 2023-05-31T18:02:20Z | 2023-10-18T16:07:54Z | https://github.com/langchain-ai/langchain/issues/5519 | 1,734,732,180 | 5,519 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I have created a pipeline and want to use the same pipeline in openapi_agent. When I run the following command:
ibm_agent = planner.create_openapi_agent(ibm_api_spec, requests_wrapper, hf_pipeline)
I get an out-of-memory error. I'm using the flan-t5-xxl LLM, which consumes 22GB of memory; I have 18GB left.
```
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline
from langchain.agents.agent_toolkits.openapi import planner
from langchain.llms import HuggingFacePipeline

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-xxl")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-xxl", torch_dtype=torch.float16, device_map="auto")
instruct_pipeline = pipeline("text2text-generation", model=model, tokenizer=tokenizer,
pad_token_id=tokenizer.eos_token_id,
torch_dtype=torch.bfloat16, device='cuda:0', max_length=2000)
hf_pipeline = HuggingFacePipeline(pipeline=instruct_pipeline)
agent = planner.create_openapi_agent(api_spec, requests_wrapper, hf_pipeline)
user_query = "query"
agent.run(user_query)
```
When I run the code, I get the following error:
```
> Entering new AgentExecutor chain...
Action: api_planner Action Input: api_planner(query) api_planner(query) api_controller(api_planner(query))
Traceback (most recent call last):
File "/home/kiran/dolly/agents.py", line 79, in <module>
ibm_agent.run(user_query)
File "/home/kiran/dolly/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 236, in run
return self(args[0], callbacks=callbacks)[self.output_keys[0]]
File "/home/kiran/dolly/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 140, in __call__
raise e
File "/home/kiran/dolly/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 134, in __call__
self._call(inputs, run_manager=run_manager)
File "/home/kiran/dolly/venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 953, in _call
next_step_output = self._take_next_step(
File "/home/kiran/dolly/venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 820, in _take_next_step
observation = tool.run(
File "/home/kiran/dolly/venv/lib/python3.10/site-packages/langchain/tools/base.py", line 294, in run
raise e
File "/home/kiran/dolly/venv/lib/python3.10/site-packages/langchain/tools/base.py", line 266, in run
self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
File "/home/kiran/dolly/venv/lib/python3.10/site-packages/langchain/tools/base.py", line 409, in _run
self.func(
File "/home/kiran/dolly/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 236, in run
return self(args[0], callbacks=callbacks)[self.output_keys[0]]
File "/home/kiran/dolly/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 140, in __call__
raise e
File "/home/kiran/dolly/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 134, in __call__
self._call(inputs, run_manager=run_manager)
File "/home/kiran/dolly/venv/lib/python3.10/site-packages/langchain/chains/llm.py", line 69, in _call
response = self.generate([inputs], run_manager=run_manager)
File "/home/kiran/dolly/venv/lib/python3.10/site-packages/langchain/chains/llm.py", line 79, in generate
return self.llm.generate_prompt(
File "/home/kiran/dolly/venv/lib/python3.10/site-packages/langchain/llms/base.py", line 134, in generate_prompt
return self.generate(prompt_strings, stop=stop, callbacks=callbacks)
File "/home/kiran/dolly/venv/lib/python3.10/site-packages/langchain/llms/base.py", line 191, in generate
raise e
File "/home/kiran/dolly/venv/lib/python3.10/site-packages/langchain/llms/base.py", line 185, in generate
self._generate(prompts, stop=stop, run_manager=run_manager)
File "/home/kiran/dolly/venv/lib/python3.10/site-packages/langchain/llms/base.py", line 436, in _generate
self._call(prompt, stop=stop, run_manager=run_manager)
File "/home/kiran/dolly/venv/lib/python3.10/site-packages/langchain/llms/huggingface_pipeline.py", line 168, in _call
response = self.pipeline(prompt)
File "/home/kiran/dolly/venv/lib/python3.10/site-packages/transformers/pipelines/text2text_generation.py", line 165, in __call__
result = super().__call__(*args, **kwargs)
File "/home/kiran/dolly/venv/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1119, in __call__
return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
File "/home/kiran/dolly/venv/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1126, in run_single
model_outputs = self.forward(model_inputs, **forward_params)
File "/home/kiran/dolly/venv/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1025, in forward
model_outputs = self._forward(model_inputs, **forward_params)
File "/home/kiran/dolly/venv/lib/python3.10/site-packages/transformers/pipelines/text2text_generation.py", line 187, in _forward
output_ids = self.model.generate(**model_inputs, **generate_kwargs)
File "/home/kiran/dolly/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/kiran/dolly/venv/lib/python3.10/site-packages/transformers/generation/utils.py", line 1322, in generate
model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(
File "/home/kiran/dolly/venv/lib/python3.10/site-packages/transformers/generation/utils.py", line 638, in _prepare_encoder_decoder_kwargs_for_generation
model_kwargs["encoder_outputs"]: ModelOutput = encoder(**encoder_kwargs)
File "/home/kiran/dolly/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/kiran/dolly/venv/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "/home/kiran/dolly/venv/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 1086, in forward
layer_outputs = layer_module(
File "/home/kiran/dolly/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/kiran/dolly/venv/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "/home/kiran/dolly/venv/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 693, in forward
self_attention_outputs = self.layer[0](
File "/home/kiran/dolly/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/kiran/dolly/venv/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "/home/kiran/dolly/venv/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 600, in forward
attention_output = self.SelfAttention(
File "/home/kiran/dolly/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/kiran/dolly/venv/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "/home/kiran/dolly/venv/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 530, in forward
scores = torch.matmul(
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 28.28 GiB (GPU 0; 39.43 GiB total capacity; 25.09 GiB already allocated; 13.13 GiB free; 25.12 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
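The 28.28 GiB allocation in the last frame is the self-attention score matmul (`scores = torch.matmul(...)`), whose memory grows quadratically with prompt length; openapi_agent prompts embed the whole API spec, so they can get very long. A back-of-envelope sketch (assumptions: fp16, batch size 1, illustrative head count — not exact flan-t5-xxl figures):

```python
# Rough memory of one layer's attention score matrix: seq_len x seq_len
# elements per head. All numbers are illustrative assumptions.
def attn_scores_gib(seq_len: int, n_heads: int, bytes_per_el: int = 2) -> float:
    return seq_len * seq_len * n_heads * bytes_per_el / 1024 ** 3

# Doubling the prompt length quadruples the score-matrix memory:
assert attn_scores_gib(4000, 64) == 4 * attn_scores_gib(2000, 64)
assert round(attn_scores_gib(2000, 64), 2) == 0.48
```

This is why trimming the spec or the prompt length helps far more than adding a little GPU headroom.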
| Flan-t5-xxl doesnot work with openapi_agent | https://api.github.com/repos/langchain-ai/langchain/issues/5513/comments | 5 | 2023-05-31T16:21:02Z | 2023-10-19T16:07:59Z | https://github.com/langchain-ai/langchain/issues/5513 | 1,734,577,939 | 5,513 |
[
"langchain-ai",
"langchain"
] | ### Feature request
If input_variables is not passed, try to detect them automatically as those which are surrounded by curly braces:
E.g.
```
prompt_template = PromptTemplate(template="What is the price of {product_name}?") ## Automatically detects the input_variables to be ['product_name']
```
### Motivation
This has been bugging me for a while and makes it more cumbersome.
### Your contribution
You can use the code mentioned below, it's literally that simple (at least for f-strings).
I can submit a PR.
```
import string
from typing import List

def str_format_args(x: str, named_only: bool = True) -> List[str]:
## Ref: https://stackoverflow.com/a/46161774/4900327
args: List[str] = [
str(tup[1]) for tup in string.Formatter().parse(x)
if tup[1] is not None
]
if named_only:
args: List[str] = [
arg for arg in args
if not arg.isdigit() and len(arg) > 0
]
return args
str_format_args("What is the price of {product_name}?") ## Returns ['product_name']
``` | Automatically detect input_variables from PromptTemplate string | https://api.github.com/repos/langchain-ai/langchain/issues/5511/comments | 2 | 2023-05-31T15:59:04Z | 2023-09-18T16:09:45Z | https://github.com/langchain-ai/langchain/issues/5511 | 1,734,540,520 | 5,511 |
[
"langchain-ai",
"langchain"
] | ### Discussed in https://github.com/hwchase17/langchain/discussions/5499
<div type='discussions-op-text'>
<sup>Originally posted by **lucasiscovici** May 31, 2023</sup>
Hello and thank you for this amazing library.
Here we:
- get question
- get new_question with the question_generator
- retrieve docs with _get_docs and the new_question
- call the combine_docs_chain with the new_question and the docs
1/ Is it possible to allow calling the question_generator even if chat_history_str is empty?
I have to transform the question into a search query to call the search engine, even when the chat history is empty.
2/ Is it possible to not use new_question in the combine_docs_chain call?
I need the original question, not the new question (the search query), when calling the LLM for QA.
Thanks in advance
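A toy sketch of the two requested tweaks (stand-in functions, not library code): always generate a search query, retrieve with it, but pass the original question to the answering step.

```python
# Stand-ins: rephrase ~ question_generator, answer ~ combine_docs_chain.
def rephrase(question, history):
    return f"search: {question}"

def answer(question, docs):
    return f"{question} -> {len(docs)} docs"

def run(question, history, retrieve):
    search_query = rephrase(question, history)  # no `if history:` guard
    docs = retrieve(search_query)
    return answer(question, docs)               # original question, not search_query

docs_seen = []
def retrieve(q):
    docs_seen.append(q)
    return ["d1", "d2"]

assert run("why?", [], retrieve) == "why? -> 2 docs"
assert docs_seen == ["search: why?"]
```

For reference, the current implementation is: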
```python
def _call(
self,
inputs: Dict[str, Any],
run_manager: Optional[CallbackManagerForChainRun] = None,
) -> Dict[str, Any]:
_run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()
question = inputs["question"]
get_chat_history = self.get_chat_history or _get_chat_history
chat_history_str = get_chat_history(inputs["chat_history"])
if chat_history_str:
callbacks = _run_manager.get_child()
new_question = self.question_generator.run(
question=question, chat_history=chat_history_str, callbacks=callbacks
)
else:
new_question = question
docs = self._get_docs(new_question, inputs)
new_inputs = inputs.copy()
new_inputs["question"] = new_question
new_inputs["chat_history"] = chat_history_str
answer = self.combine_docs_chain.run(
input_documents=docs, callbacks=_run_manager.get_child(), **new_inputs
)
if self.return_source_documents:
return {self.output_key: answer, "source_documents": docs}
else:
return {self.output_key: answer}
``` | ConversationalRetrievalChain new_question only from the question_generator only for retrieval and not for combine_docs_chain | https://api.github.com/repos/langchain-ai/langchain/issues/5508/comments | 0 | 2023-05-31T15:43:29Z | 2023-06-12T13:21:03Z | https://github.com/langchain-ai/langchain/issues/5508 | 1,734,515,334 | 5,508 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Hey Team,
I would like to propose a new feature that will enhance the visibility of the LLM's response time. In addition to providing information about token usage and cost, I suggest incorporating the time taken to generate the text. This additional metric will offer valuable insights into the efficiency and performance of the system.
### Motivation
By including the response time, we can provide a comprehensive picture of the performance of different LLM APIs, ensuring that we have a more accurate measure of their capabilities. This information will be particularly useful for evaluating and optimizing different LLMs, as it will shed light on the latency of the system.
### Your contribution
We can easily implement this by adding additional variables to the LLM callbacks. I would like to implement this feature.
Here is the example code:
```
from datetime import datetime
from typing import Any, Dict, List

from langchain.schema import LLMResult


class BaseCallbackHandler:
    """Base callback handler that can be used to handle callbacks from langchain."""
    time_take_by_llm_to_generate_text: float = 0.0
    start_time: datetime = None
    end_time: datetime = None

    def on_llm_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
    ) -> Any:
        """Run when LLM starts running."""
        self.start_time = datetime.now()

    def on_llm_end(self, response: LLMResult, **kwargs: Any) -> Any:
        """Run when LLM ends running."""
        self.end_time = datetime.now()
        # Subtracting datetimes yields a timedelta; accumulate float seconds.
        self.time_take_by_llm_to_generate_text += (
            self.end_time - self.start_time
        ).total_seconds()
```
| Tracking of time to generate text | https://api.github.com/repos/langchain-ai/langchain/issues/5498/comments | 5 | 2023-05-31T12:33:47Z | 2023-12-16T05:54:01Z | https://github.com/langchain-ai/langchain/issues/5498 | 1,734,126,410 | 5,498 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I have a Flask with LangChain setup in docker-compose, and I don't see LLM ChatOpenAI streaming output from CallbackHandlers in console, but everything works when I run it locally without Docker.
My CallbackHandler code (StreamingStdOutCallbackHandler also doesn't work):
```
from typing import Any
from langchain.callbacks.base import BaseCallbackHandler
class StreamingOutput(BaseCallbackHandler):
def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
print(token, end="", flush=True)
```
- ChatOpenAI has streaming and verbose flags set to true
- ConversationChain has verbose flag set to True
- Flask is run with `CMD ["flask", "run", "--debug", "--with-threads"]`
I tried setting the PYTHONUNBUFFERED env variable but it didn't help - what am I doing wrong?
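A hedged aside (an assumption about the cause, not a confirmed fix): inside a container, stdout attached to a non-TTY is typically block-buffered, so per-token prints can be held back until the buffer fills or the request ends. Forcing line buffering on the stream sidesteps that regardless of whether the runtime honors PYTHONUNBUFFERED:

```python
# Reconfigure stdout to line buffering (Python 3.7+); guarded because some
# environments replace sys.stdout with a non-TextIOWrapper object.
import io
import sys

if isinstance(sys.stdout, io.TextIOWrapper):
    sys.stdout.reconfigure(line_buffering=True)
print("token", end="", flush=True)  # flush=True also flushes per call
```

Setting `tty: true` on the service in docker-compose is another commonly suggested knob, since a TTY makes stdout line-buffered by default (again, an assumption worth verifying for your setup).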
### Suggestion:
_No response_ | Issue: LLM callback handler not printing in Docker | https://api.github.com/repos/langchain-ai/langchain/issues/5493/comments | 2 | 2023-05-31T10:43:14Z | 2023-11-16T02:12:25Z | https://github.com/langchain-ai/langchain/issues/5493 | 1,733,916,668 | 5,493 |
[
"langchain-ai",
"langchain"
] | Can I connect to my RDBMS?
### Suggestion:
_No response_ | Can I connect to my RDBMS? | https://api.github.com/repos/langchain-ai/langchain/issues/5492/comments | 4 | 2023-05-31T10:17:58Z | 2023-09-18T16:09:50Z | https://github.com/langchain-ai/langchain/issues/5492 | 1,733,868,611 | 5,492 |
[
"langchain-ai",
"langchain"
] | ---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Input In [6], in <cell line: 1>()
----> 1 import langchain
File ~\anaconda3\lib\site-packages\langchain\__init__.py:6, in <module>
3 from importlib import metadata
4 from typing import Optional
----> 6 from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
7 from langchain.cache import BaseCache
8 from langchain.chains import (
9 ConversationChain,
10 LLMBashChain,
(...)
18 VectorDBQAWithSourcesChain,
19 )
File ~\anaconda3\lib\site-packages\langchain\agents\__init__.py:2, in <module>
1 """Interface for agents."""
----> 2 from langchain.agents.agent import (
3 Agent,
4 AgentExecutor,
5 AgentOutputParser,
6 BaseMultiActionAgent,
7 BaseSingleActionAgent,
8 LLMSingleActionAgent,
9 )
10 from langchain.agents.agent_toolkits import (
11 create_csv_agent,
12 create_json_agent,
(...)
21 create_vectorstore_router_agent,
22 )
23 from langchain.agents.agent_types import AgentType
File ~\anaconda3\lib\site-packages\langchain\agents\agent.py:13, in <module>
10 from typing import Any, Callable, Dict, List, Optional, Sequence, Tuple, Union
12 import yaml
---> 13 from pydantic import BaseModel, root_validator
15 from langchain.agents.agent_types import AgentType
16 from langchain.agents.tools import InvalidTool
File ~\anaconda3\lib\site-packages\pydantic\__init__.py:2, in init pydantic.__init__()
File ~\anaconda3\lib\site-packages\pydantic\dataclasses.py:48, in init pydantic.dataclasses()
File ~\anaconda3\lib\site-packages\pydantic\main.py:120, in init pydantic.main()
TypeError: dataclass_transform() got an unexpected keyword argument 'field_specifiers'
| Error while importing Langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5490/comments | 4 | 2023-05-31T09:18:01Z | 2023-09-18T16:09:56Z | https://github.com/langchain-ai/langchain/issues/5490 | 1,733,741,544 | 5,490 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I can't find any way to add custom metadata with the character splitter. It adds `source` as metadata, but I can't seem to change it or define what kind of metadata I want.
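For context, langchain's splitters copy each input document's `metadata` dict onto every chunk, so one workaround is to set `doc.metadata` yourself before splitting (or pass `metadatas=` to `create_documents`). A stdlib mimic of that copy-through behavior:

```python
from dataclasses import dataclass, field

@dataclass
class Document:  # stand-in for langchain.schema.Document
    page_content: str
    metadata: dict = field(default_factory=dict)

def split_documents(docs, chunk_size=20):
    """Naive splitter that, like langchain's, copies each source
    document's metadata onto every chunk it produces."""
    chunks = []
    for doc in docs:
        text = doc.page_content
        for i in range(0, len(text), chunk_size):
            chunks.append(Document(text[i:i + chunk_size], dict(doc.metadata)))
    return chunks

docs = [Document("some long text " * 4, {"source": "a.txt", "topic": "demo"})]
for chunk in split_documents(docs):
    print(chunk.metadata)  # every chunk carries the custom metadata
```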
### Suggestion:
_No response_ | the text splitter adds metadata by itself | https://api.github.com/repos/langchain-ai/langchain/issues/5489/comments | 3 | 2023-05-31T08:27:59Z | 2023-11-30T16:09:16Z | https://github.com/langchain-ai/langchain/issues/5489 | 1,733,653,438 | 5,489 |
[
"langchain-ai",
"langchain"
] | Is there a way to pass parameters to ElasticVectorSearch to disable SSL verification? I tried adding verify_certs=False and ssl_verify=None, but neither worked. | Connecting to Elastic vector store throws ssl error | https://api.github.com/repos/langchain-ai/langchain/issues/5488/comments | 5 | 2023-05-31T08:15:41Z | 2023-09-26T16:06:29Z | https://github.com/langchain-ai/langchain/issues/5488 | 1,733,633,884 | 5,488 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.186
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Running the following code snippet:
```python
from langchain import HuggingFacePipeline
llm = HuggingFacePipeline.from_model_id(model_id="bigscience/bloom-1b7", task="text-generation", model_kwargs={"temperature":0, "max_length":64})
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=llm)
llm_chain.save("/tmp/hfp/model.yaml")
from langchain.chains.loading import load_chain
local_loaded_model = load_chain("/tmp/hfp/model.yaml")
question = "What is electroencephalography?"
local_loaded_model.run(question)
```
Gives the following error:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
File <command-826248432925795>:1
----> 1 local_loaded_model.run(question)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-51383c0b-9b1f-48ec-a6a9-ac979dcdc755/lib/python3.10/site-packages/langchain/chains/base.py:236, in Chain.run(self, callbacks, *args, **kwargs)
234 if len(args) != 1:
235 raise ValueError("`run` supports only one positional argument.")
--> 236 return self(args[0], callbacks=callbacks)[self.output_keys[0]]
238 if kwargs and not args:
239 return self(kwargs, callbacks=callbacks)[self.output_keys[0]]
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-51383c0b-9b1f-48ec-a6a9-ac979dcdc755/lib/python3.10/site-packages/langchain/chains/base.py:140, in Chain.__call__(self, inputs, return_only_outputs, callbacks)
138 except (KeyboardInterrupt, Exception) as e:
139 run_manager.on_chain_error(e)
--> 140 raise e
141 run_manager.on_chain_end(outputs)
142 return self.prep_outputs(inputs, outputs, return_only_outputs)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-51383c0b-9b1f-48ec-a6a9-ac979dcdc755/lib/python3.10/site-packages/langchain/chains/base.py:134, in Chain.__call__(self, inputs, return_only_outputs, callbacks)
128 run_manager = callback_manager.on_chain_start(
129 {"name": self.__class__.__name__},
130 inputs,
131 )
132 try:
133 outputs = (
--> 134 self._call(inputs, run_manager=run_manager)
135 if new_arg_supported
136 else self._call(inputs)
137 )
138 except (KeyboardInterrupt, Exception) as e:
139 run_manager.on_chain_error(e)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-51383c0b-9b1f-48ec-a6a9-ac979dcdc755/lib/python3.10/site-packages/langchain/chains/llm.py:69, in LLMChain._call(self, inputs, run_manager)
64 def _call(
65 self,
66 inputs: Dict[str, Any],
67 run_manager: Optional[CallbackManagerForChainRun] = None,
68 ) -> Dict[str, str]:
---> 69 response = self.generate([inputs], run_manager=run_manager)
70 return self.create_outputs(response)[0]
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-51383c0b-9b1f-48ec-a6a9-ac979dcdc755/lib/python3.10/site-packages/langchain/chains/llm.py:79, in LLMChain.generate(self, input_list, run_manager)
77 """Generate LLM result from inputs."""
78 prompts, stop = self.prep_prompts(input_list, run_manager=run_manager)
---> 79 return self.llm.generate_prompt(
80 prompts, stop, callbacks=run_manager.get_child() if run_manager else None
81 )
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-51383c0b-9b1f-48ec-a6a9-ac979dcdc755/lib/python3.10/site-packages/langchain/llms/base.py:134, in BaseLLM.generate_prompt(self, prompts, stop, callbacks)
127 def generate_prompt(
128 self,
129 prompts: List[PromptValue],
130 stop: Optional[List[str]] = None,
131 callbacks: Callbacks = None,
132 ) -> LLMResult:
133 prompt_strings = [p.to_string() for p in prompts]
--> 134 return self.generate(prompt_strings, stop=stop, callbacks=callbacks)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-51383c0b-9b1f-48ec-a6a9-ac979dcdc755/lib/python3.10/site-packages/langchain/llms/base.py:191, in BaseLLM.generate(self, prompts, stop, callbacks)
189 except (KeyboardInterrupt, Exception) as e:
190 run_manager.on_llm_error(e)
--> 191 raise e
192 run_manager.on_llm_end(output)
193 return output
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-51383c0b-9b1f-48ec-a6a9-ac979dcdc755/lib/python3.10/site-packages/langchain/llms/base.py:185, in BaseLLM.generate(self, prompts, stop, callbacks)
180 run_manager = callback_manager.on_llm_start(
181 {"name": self.__class__.__name__}, prompts, invocation_params=params
182 )
183 try:
184 output = (
--> 185 self._generate(prompts, stop=stop, run_manager=run_manager)
186 if new_arg_supported
187 else self._generate(prompts, stop=stop)
188 )
189 except (KeyboardInterrupt, Exception) as e:
190 run_manager.on_llm_error(e)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-51383c0b-9b1f-48ec-a6a9-ac979dcdc755/lib/python3.10/site-packages/langchain/llms/base.py:436, in LLM._generate(self, prompts, stop, run_manager)
433 new_arg_supported = inspect.signature(self._call).parameters.get("run_manager")
434 for prompt in prompts:
435 text = (
--> 436 self._call(prompt, stop=stop, run_manager=run_manager)
437 if new_arg_supported
438 else self._call(prompt, stop=stop)
439 )
440 generations.append([Generation(text=text)])
441 return LLMResult(generations=generations)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-51383c0b-9b1f-48ec-a6a9-ac979dcdc755/lib/python3.10/site-packages/langchain/llms/huggingface_pipeline.py:168, in HuggingFacePipeline._call(self, prompt, stop, run_manager)
162 def _call(
163 self,
164 prompt: str,
165 stop: Optional[List[str]] = None,
166 run_manager: Optional[CallbackManagerForLLMRun] = None,
167 ) -> str:
--> 168 response = self.pipeline(prompt)
169 if self.pipeline.task == "text-generation":
170 # Text generation return includes the starter text.
171 text = response[0]["generated_text"][len(prompt) :]
TypeError: 'NoneType' object is not callable
```
### Expected behavior
`local_loaded_model.run(question)` should behave the same way as:
```python
llm_chain.run(question)
``` | HuggingFacePipeline is not loaded correctly | https://api.github.com/repos/langchain-ai/langchain/issues/5487/comments | 7 | 2023-05-31T08:05:06Z | 2024-02-20T16:09:06Z | https://github.com/langchain-ai/langchain/issues/5487 | 1,733,616,462 | 5,487 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain: 0.0.181
platform: windows
python: 3.11.3
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```py
site_loader = SitemapLoader(web_path="https://help.glueup.com/sitemap_index.xml")
docs = site_loader.load()
print(docs[0])
# ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1002)
```
### Expected behavior
Print the first doc. | [SSL: CERTIFICATE_VERIFY_FAILED] while load from SitemapLoader | https://api.github.com/repos/langchain-ai/langchain/issues/5483/comments | 0 | 2023-05-31T07:52:33Z | 2023-06-19T01:34:19Z | https://github.com/langchain-ai/langchain/issues/5483 | 1,733,595,290 | 5,483 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I want langchain to implement caching for document loaders in a way similar to how it caches LLM calls, like this:
```python
from langchain.cache import InMemoryCache
langchain.document_loader_cache = InMemoryCache()
```
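Until something like this lands, here is a rough stdlib-only sketch of the behavior I have in mind (`cached_load` and the cache key choice are hypothetical, not langchain API):

```python
import functools

class SlowLoader:
    """Stand-in for an expensive document loader (OCR, network calls, ...)."""
    calls = 0

    def __init__(self, path):
        self.path = path

    def load(self):
        SlowLoader.calls += 1  # counts how often the expensive work runs
        return [f"contents of {self.path}"]

@functools.lru_cache(maxsize=None)
def cached_load(loader_cls, path):
    """Hypothetical cache layer, keyed on (loader class, path)."""
    return tuple(loader_cls(path).load())

cached_load(SlowLoader, "report.pdf")
cached_load(SlowLoader, "report.pdf")  # second call is served from the cache
print(SlowLoader.calls)  # the expensive load ran only once
```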
### Motivation
Loading certain documents with a langchain document loader can be an expensive operation (for example, I implemented a custom PDF loader using OCR that is slow, and other loaders involve network calls).
### Your contribution
If langchain would accept such a PR, I'd try to implement the logic and file a PR. | [Feature Request] Supoprts document loader caching | https://api.github.com/repos/langchain-ai/langchain/issues/5481/comments | 4 | 2023-05-31T04:29:06Z | 2023-11-14T16:08:14Z | https://github.com/langchain-ai/langchain/issues/5481 | 1,733,366,807 | 5,481 |
[
"langchain-ai",
"langchain"
] | ### System Info
Lang Chain 0.0.186
Mac OS Ventura
Python 3.10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Why do I get `IndexError: list index out of range` when using `Chroma.from_documents`?
```python
import os
from langchain.document_loaders import BiliBiliLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.text_splitter import RecursiveCharacterTextSplitter
os.environ["OPENAI_API_KEY"] = "***"
loader = BiliBiliLoader(["https://www.bilibili.com/video/BV18o4y137n1/"])
documents = loader.load()
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,
    chunk_overlap=20
)
documents = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
db = Chroma.from_documents(documents, embeddings, persist_directory="./db")
db.persist()
```
Traceback (most recent call last):
File "/bilibili/bilibili_embeddings.py", line 28, in <module>
db = Chroma.from_documents(documents, embeddings, persist_directory="./db")
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/vectorstores/chroma.py", line 422, in from_documents
return cls.from_texts(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/vectorstores/chroma.py", line 390, in from_texts
chroma_collection.add_texts(texts=texts, metadatas=metadatas, ids=ids)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/vectorstores/chroma.py", line 160, in add_texts
self._collection.add(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/chromadb/api/models/Collection.py", line 103, in add
ids, embeddings, metadatas, documents = self._validate_embedding_set(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/chromadb/api/models/Collection.py", line 354, in _validate_embedding_set
ids = validate_ids(maybe_cast_one_to_many(ids))
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/chromadb/api/types.py", line 82, in maybe_cast_one_to_many
if isinstance(target[0], (int, float)):
IndexError: list index out of range
### Expected behavior
The index is generated successfully in the persist_directory. | IndexError: list index out of range when use Chroma.from_documents | https://api.github.com/repos/langchain-ai/langchain/issues/5476/comments | 10 | 2023-05-31T02:51:19Z | 2024-07-27T17:27:50Z | https://github.com/langchain-ai/langchain/issues/5476 | 1,733,300,168 | 5,476 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Hi,
Currently the from_documents method adds the embeddings and then returns the instance of the store. Why don't we have a method that just returns the store? This is useful when I already have a loaded vector store and only need the instance. It would be the code below without the _store.add_texts_ call:
```
store = cls(
    connection_string=connection_string,
    collection_name=collection_name,
    embedding_function=embedding,
    distance_strategy=distance_strategy,
    pre_delete_collection=pre_delete_collection,
)
store.add_texts(texts=texts, metadatas=metadatas, ids=ids, **kwargs)
return store
```
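In plain Python the proposed split looks something like this (`connect` is a hypothetical method name for illustration, not langchain API):

```python
class VectorStore:
    def __init__(self, collection_name, embedding_function=None):
        self.collection_name = collection_name
        self.embedding_function = embedding_function
        self.texts = []

    @classmethod
    def connect(cls, collection_name, embedding_function=None):
        """Proposed: return the configured store without inserting anything."""
        return cls(collection_name, embedding_function)

    @classmethod
    def from_texts(cls, texts, collection_name, embedding_function=None):
        """Existing pattern: construct the store, then immediately add texts."""
        store = cls.connect(collection_name, embedding_function)
        store.texts.extend(texts)
        return store

existing = VectorStore.connect("docs")           # no inserts performed
fresh = VectorStore.from_texts(["a", "b"], "docs")
print(len(existing.texts), len(fresh.texts))
```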
### Motivation
This is required when I already have a loaded vector store
### Your contribution
If this change is acceptable, I can add this functionality and create a PR | Getting only the instance of the vector store without adding text | https://api.github.com/repos/langchain-ai/langchain/issues/5475/comments | 3 | 2023-05-31T01:56:08Z | 2023-08-30T17:39:07Z | https://github.com/langchain-ai/langchain/issues/5475 | 1,733,256,921 | 5,475 |
[
"langchain-ai",
"langchain"
] | ### System Info
llm_chain.llm.save("llm.json") # method not found
bug in .ipynb:
docs/modules/chains/generic/serialization.ipynb
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
llm_chain.llm.save("llm.json") # method not found
bug in .ipynb:
docs/modules/chains/generic/serialization.ipynb
### Expected behavior
llm_chain.llm.save("llm.json") # method not found
bug in .ipynb:
docs/modules/chains/generic/serialization.ipynb | llm_chain.llm.save("llm.json") # method not found | https://api.github.com/repos/langchain-ai/langchain/issues/5474/comments | 1 | 2023-05-31T00:43:10Z | 2023-09-10T16:09:41Z | https://github.com/langchain-ai/langchain/issues/5474 | 1,733,206,572 | 5,474 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain Version: 0.0.186
MacOS Ventura 13.3 - M1
Python 3.10.8
### Who can help?
@hwchase17
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
There is an error in the Qdrant vector store code ([`qdrant.py`](https://github.com/hwchase17/langchain/blob/master/langchain/vectorstores/qdrant.py)), specifically in the function `_document_from_scored_point` on line 468 of [`qdrant.py`](https://github.com/hwchase17/langchain/blob/master/langchain/vectorstores/qdrant.py). The Document object comes from [`schema.py`](https://github.com/hwchase17/langchain/blob/master/langchain/schema.py). The function takes a few arguments:
page_content: str
metadata: dict = Field(default_factory=dict)
The [`qdrant.py`](https://github.com/hwchase17/langchain/blob/master/langchain/vectorstores/qdrant.py) file incorrectly makes the *metadata* parameter a string instead of a dict.
This creates a few problems:
1. If the *metadata* parameter in the function `_document_from_scored_point` is passed anything but None or a key that is not in the score_point object (a.k.a. None), it will error out. This is because this variable should be a dict, which is not returned from a dictionary *get* method.
2. The *metadata_payload_key* parameter does not seem to have a purpose / does not make sense given the above context.
3. It is impossible for metadata to be returned when using the Qdrant *similarity_search* function within Langchain due to this issue.
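A stdlib reduction of the fix being described (the `Document` here is a dataclass stand-in for `langchain.schema.Document`, and the function body is my sketch of the intended behavior, not the shipped code):

```python
from dataclasses import dataclass, field
from types import SimpleNamespace

@dataclass
class Document:  # stand-in for langchain.schema.Document
    page_content: str
    metadata: dict = field(default_factory=dict)

def document_from_scored_point(scored_point, content_key="page_content",
                               metadata_key="metadata"):
    """Sketch of the fix: metadata must come back as a dict, not a str."""
    payload = scored_point.payload
    return Document(
        page_content=payload.get(content_key, ""),
        metadata=payload.get(metadata_key) or {},
    )

point = SimpleNamespace(payload={"page_content": "hello",
                                 "metadata": {"source": "a.pdf"}})
doc = document_from_scored_point(point)
print(doc.metadata)  # a dict, as schema.py declares
```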
### Expected behavior
I would like to be able to return metadata when using similarity_search with Qdrant. If you run [this](https://www.pinecone.io/learn/langchain-retrieval-augmentation/) example / focus on the vectorstore part and swap out the Pinecone work for Qdrant, there does not seem to be a way to use similarity search with metadata similar to how the example shows it. | Qdrant Document object is not behaving correct | https://api.github.com/repos/langchain-ai/langchain/issues/5473/comments | 4 | 2023-05-30T23:55:06Z | 2023-06-01T16:00:54Z | https://github.com/langchain-ai/langchain/issues/5473 | 1,733,174,360 | 5,473 |
[
"langchain-ai",
"langchain"
] | Does langchain support Oracle database as VectorStores?If yes, how to use the Oracle as VectorStore? | Does langchain support Oracle database as VectorStores?If yes, how to use the Oracle as VectorStore? | https://api.github.com/repos/langchain-ai/langchain/issues/5472/comments | 4 | 2023-05-30T22:45:24Z | 2023-09-06T01:55:37Z | https://github.com/langchain-ai/langchain/issues/5472 | 1,733,121,843 | 5,472 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I would like to be able to provide credentials to the bigquery.client object
### Motivation
I cannot access protected datasets without use of a service account or other credentials
### Your contribution
I will submit a PR. | Google BigQuery Loader doesn't take credentials | https://api.github.com/repos/langchain-ai/langchain/issues/5465/comments | 0 | 2023-05-30T21:18:13Z | 2023-05-30T23:25:25Z | https://github.com/langchain-ai/langchain/issues/5465 | 1,733,027,963 | 5,465 |
[
"langchain-ai",
"langchain"
] | ### System Info
```
$ pip show langchain
Name: langchain
Version: 0.0.186
Summary: Building applications with LLMs through composability
Home-page: https://www.github.com/hwchase17/langchain
Author:
Author-email:
License: MIT
Location: /home/mteoh/temp_venv/venv/lib/python3.10/site-packages
Requires: PyYAML, pydantic, tenacity, dataclasses-json, numexpr, numpy, openapi-schema-pydantic, aiohttp, async-timeout, requests, SQLAlchemy
Required-by:
```
```
$ python --version
Python 3.10.2
```
### Who can help?
@vowelpa
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. install langchain version 0.0.186, python version 3.10.2
2. Run the code below (I put in a file called `mwe.py`)
```python
from langchain.tools import StructuredTool
from typing import Dict
from pydantic import BaseModel
def foo(args_dict: Dict[str, str]):
    return "hi there"

class FooSchema(BaseModel):
    args_dict: Dict[str, str]

foo_tool = StructuredTool.from_function(
    foo,
    name="FooTool",
    description="min working example of a bug?",
    # args_schema=FooSchema # inferring this schema does not work
)

result = foo_tool.run(tool_input={
    "args_dict": {"aa": "bb"}
})
print(result)
```
4. observe the error below:
```
Traceback (most recent call last):
File "/home/mteoh/temp_venv/mwe.py", line 18, in <module>
result = foo_tool.run(tool_input={
File "/home/mteoh/temp_venv/venv/lib/python3.10/site-packages/langchain/tools/base.py", line 247, in run
parsed_input = self._parse_input(tool_input)
File "/home/mteoh/temp_venv/venv/lib/python3.10/site-packages/langchain/tools/base.py", line 190, in _parse_input
result = input_args.parse_obj(tool_input)
File "pydantic/main.py", line 526, in pydantic.main.BaseModel.parse_obj
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for FooToolSchemaSchema
args_dict
str type expected (type=type_error.str)
```
### Expected behavior
We expect to see the output of `foo()` which is `"hi there"`.
You can get this result by uncommenting `args_schema=FooSchema` above. This is a problem, because this line below in `StructuredTool.from_function()` https://github.com/hwchase17/langchain/blob/58e95cd11e2c2fc31ed6551b5a2b876143d57429/langchain/tools/base.py#L469 suggests that the schema gets inferred, if not provided one. Instead, what's happening is that the tool "infers" that the arguments involve just one string, which is incorrect.
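For illustration, a stdlib-only sketch (not the langchain implementation) of what correct inference has to do: read the annotations off the signature instead of collapsing everything to `str`:

```python
import inspect
import typing

def infer_args_schema(func):
    """Build {param name: annotated type} from a function signature,
    preserving container annotations like Dict[str, str]."""
    hints = typing.get_type_hints(func)
    sig = inspect.signature(func)
    return {name: hints.get(name, str) for name in sig.parameters}

def foo(args_dict: typing.Dict[str, str]):
    return "hi there"

schema = infer_args_schema(foo)
print(schema)  # args_dict maps to Dict[str, str], not str
```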
I don't mind fixing this myself. In that case, any guidance is very welcome. Thank you! | Structured tools cannot properly infer function schema | https://api.github.com/repos/langchain-ai/langchain/issues/5463/comments | 2 | 2023-05-30T20:51:13Z | 2023-09-10T16:09:44Z | https://github.com/langchain-ai/langchain/issues/5463 | 1,732,993,542 | 5,463 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am using ConversationalRetrievalChain. I cannot seem to change the system template. Any suggestions on how to do this?
`retriever = vectorstore.as_retriever(search_kwargs={"k": source_amount}, qa_template=QA_PROMPT, question_generator_template=CONDENSE_PROMPT)`
`qa = ConversationalRetrievalChain.from_llm(llm=model, retriever=retriever, return_source_documents=True)`
When printing QA:
[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=['context'], output_parser=None, partial_variables={}, template="Use the following pieces of context to answer the users question. \nIf you don't know the answer, just say that you don't know, don't try to make up an answer.\n----------------\n{context}", template_format='f-string', validate_template=True), additional_kwargs={}), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['question'],
Whatever I try, I seem to be unable to change the template "Use the following pieces of context to answer..."
### Suggestion:
_No response_ | conversationalRetrievalChain - how to set the template | https://api.github.com/repos/langchain-ai/langchain/issues/5462/comments | 8 | 2023-05-30T20:43:46Z | 2023-10-25T13:25:13Z | https://github.com/langchain-ai/langchain/issues/5462 | 1,732,984,618 | 5,462 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain 0.0.184
Python 3.9.2
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I'm using as_retriever in a RetrievalQA with Pinecone as the vector store. If I use `search_type="similarity"`, the code below works. If I change this to `similarity_score_threshold` and set a `score_threshold`, then when I run the QA I get a NotImplementedError:
The code looks like this
```python
db = Pinecone.from_existing_index(index_name=os.environ.get('INDEX'),
                                  namespace='SCA_H5',
                                  embedding=OpenAIEmbeddings())
retriever = db.as_retriever(search_type="similarity_score_threshold",
                            search_kwargs={"k": 3, "score_threshold": 0.5})
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(temperature=0),  # uses 'gpt-3.5-turbo' which is cheaper and better
    chain_type="stuff",
    retriever=retriever,
    return_source_documents=True)
```
The python Traceback is
```python
NotImplementedError Traceback (most recent call last)
Cell In[4], line 1
----> 1 result = Simon("What does the legisltation cover", sources=True, content=False)
Cell In[3], line 26, in Simon(query, sources, content)
21 def Simon(query, sources=True, content=False):
23 instructions = '''You are an expert in Western Australia "Strata Titles Act"
24 answering questions from a citizen. Only use information provided to you from the
25 legislation below. If you do not know say "I do not know"'''
---> 26 result = qa({"query": f'{instructions} \n\n {query}'})
27 process_llm_response(result, sources=sources, content=content)
28 return (result)
File [~/Projects/Personal/SCAWA/.venv/lib/python3.9/site-packages/langchain/chains/base.py:140](https://file+.vscode-resource.vscode-cdn.net/home/kmcisaac/Projects/Personal/SCAWA/~/Projects/Personal/SCAWA/.venv/lib/python3.9/site-packages/langchain/chains/base.py:140), in Chain.__call__(self, inputs, return_only_outputs, callbacks)
138 except (KeyboardInterrupt, Exception) as e:
139 run_manager.on_chain_error(e)
--> 140 raise e
141 run_manager.on_chain_end(outputs)
142 return self.prep_outputs(inputs, outputs, return_only_outputs)
File [~/Projects/Personal/SCAWA/.venv/lib/python3.9/site-packages/langchain/chains/base.py:134](https://file+.vscode-resource.vscode-cdn.net/home/kmcisaac/Projects/Personal/SCAWA/~/Projects/Personal/SCAWA/.venv/lib/python3.9/site-packages/langchain/chains/base.py:134), in Chain.__call__(self, inputs, return_only_outputs, callbacks)
128 run_manager = callback_manager.on_chain_start(
129 {"name": self.__class__.__name__},
130 inputs,
...
165 0 is dissimilar, 1 is most similar.
166 """
--> 167 raise NotImplementedError
```
### Expected behavior
The qa call does not fail. | similarity_score_threshold NotImplementedError | https://api.github.com/repos/langchain-ai/langchain/issues/5458/comments | 4 | 2023-05-30T17:36:59Z | 2023-10-26T16:07:38Z | https://github.com/langchain-ai/langchain/issues/5458 | 1,732,692,820 | 5,458 |
[
"langchain-ai",
"langchain"
] | ### System Info
MacOS Ventura 13.3.1 (a)
python = "^3.9"
langchain = "0.0.185"
### Who can help?
@agola11 @vowelparrot
### Related Components
- Agents / Agent Executors
- Tools / Toolkits
- Callbacks/Tracing
### Reproduction
I want to use the CallbackManager to save some info within a tool. So, as per the [`create_schema_from_function`](https://github.com/hwchase17/langchain/blob/64b4165c8d9b8374295d4629ef57d4d58e9af7c8/langchain/tools/base.py#L99) that is used to create the tool schema, I define the function as:
```python
def get_list_of_products(
    self, profile_description: str, run_manager: CallbackManagerForToolRun
):
```
Nonetheless, once the tool is run, the [expected parameter](https://github.com/hwchase17/langchain/blob/64b4165c8d9b8374295d4629ef57d4d58e9af7c8/langchain/tools/base.py#L493) in the function's signature is `callbacks`:
```python
new_argument_supported = signature(self.func).parameters.get("callbacks")
```
So the tool can't run, with the error being:
```bash
TypeError: get_list_of_products() missing 1 required positional argument: 'run_manager'
```
This behavior applies to both StructuredTool and Tool.
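The mismatch is easy to reduce to a few lines of stdlib `inspect` (this mimics the two checks; it is not the langchain code itself):

```python
from inspect import signature

def tool_func(profile_description: str, run_manager=None):
    return f"products for {profile_description}"

# what tool creation inspects for (cf. create_schema_from_function):
declares_run_manager = signature(tool_func).parameters.get("run_manager") is not None
# what tool execution inspects for (cf. the run dispatch):
declares_callbacks = signature(tool_func).parameters.get("callbacks") is not None

print(declares_run_manager, declares_callbacks)  # the two checks disagree
```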
### Expected behavior
Either the expected function parameter is set to `run_manager` to replicate the behavior of the [`run` function](https://github.com/hwchase17/langchain/blob/64b4165c8d9b8374295d4629ef57d4d58e9af7c8/langchain/tools/base.py#L256) from the `BaseTool` or a different function is used instead of [`create_schema_from_function`](https://github.com/hwchase17/langchain/blob/64b4165c8d9b8374295d4629ef57d4d58e9af7c8/langchain/tools/base.py#L99) to create a tool's schema expecting the `callbacks` parameter. | Tools: Inconsistent callbacks/run_manager parameter | https://api.github.com/repos/langchain-ai/langchain/issues/5456/comments | 4 | 2023-05-30T17:09:02Z | 2023-06-23T08:48:28Z | https://github.com/langchain-ai/langchain/issues/5456 | 1,732,655,629 | 5,456 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
I am unable to get source documents returned from a prompt with the current [Vectorstore Agent Documentation](https://python.langchain.com/en/latest/modules/agents/toolkits/examples/vectorstore.html). I tried adding `return_source_documents=True` to my `create_vectorstore_agent` call (as discussed [here](https://github.com/hwchase17/langchain/issues/4562)) and explicitly asking for the source document:
`agent_executor.run("What did biden say about ketanji brown jackson in the state of the union address? Show me the source document")`
But this only returns the content of the `answer`, i.e.
```
{
"answer":"message returned here.\n",
"sources":"13421341235123"
}
```
### Idea or request for content:
Would like either a way to link to a custom output parser / memory for this use case ([memory does seem to work out of the box](https://python.langchain.com/en/latest/modules/agents/agent_executors/examples/sharedmemory_for_tools.html)) or a demo of how to configure the underlying tools to force output to string or something. | DOC: Return Source Documents to Vectorstore Agent | https://api.github.com/repos/langchain-ai/langchain/issues/5455/comments | 1 | 2023-05-30T17:07:58Z | 2023-09-15T16:09:32Z | https://github.com/langchain-ai/langchain/issues/5455 | 1,732,654,338 | 5,455 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
As we know, we can build an agent with tools in the following way:
```python
llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
```
We can also use `FileManagementToolkit` to manage the file system. But I want to know how to build a file-management agent, and Langchain does not yet provide one. I tried to use `load_tools` as follows, but it failed: the `FileManagementToolkit` tool names cannot be passed to `load_tools()` because `load_tools` does not provide file-related options.
```python
import os
from langchain.agents.agent_toolkits import FileManagementToolkit
from tempfile import TemporaryDirectory
from langchain.agents import load_tools
working_directory = TemporaryDirectory(dir=os.getcwd())
toolkit = FileManagementToolkit(root_dir=str(working_directory.name))
tool_names = list(map(lambda item: item.name,toolkit.get_tools()))
tools = load_tools(tool_names)
```
```
ValueError Traceback (most recent call last)
Cell In[27], line 3
1 from langchain.agents import load_tools
2 tool_names = list(map(lambda item: item.name,toolkit.get_tools()))
----> 3 tools = load_tools(tool_names)
File E:\Programming\anaconda\lib\site-packages\langchain\agents\load_tools.py:341, in load_tools(tool_names, llm, callback_manager, **kwargs)
339 tools.append(tool)
340 else:
--> 341 raise ValueError(f"Got unknown tool {name}")
342 return tools
ValueError: Got unknown tool copy_file
```
The tools of `FileManagementToolkit`:
```python
list(map(lambda item: item.name,toolkit.get_tools()))
```
```
['copy_file',
'file_delete',
'file_search',
'move_file',
'read_file',
'write_file',
'list_directory']
```
### Suggestion:
Maybe we can add something like `create_file_agent()`, analogous to `create_sql_agent()`. As we all know, we can build a SQL agent as follows:
```python
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.sql_database import SQLDatabase
from langchain.llms.openai import OpenAI
def create_mysql_kit():
    db = SQLDatabase.from_uri("sqlite:///../../../../notebooks/Chinook.db")
    llm = OpenAI(temperature=0.3)
    toolkit = SQLDatabaseToolkit(db=db, llm=llm)
    agent_executor = create_sql_agent(
        llm=OpenAI(temperature=0),
        toolkit=toolkit,
        verbose=True
    )
    # agent_executor.run("Who are the users of sysuser in this system? Tell me the username of all users")
    agent_executor.run("How many people are in this system?")

if __name__ == '__main__':
    create_mysql_kit()
```
I think we can build the `file agent` in the same way.
### More
- There may be some way to achieve the same functionality as a file agent already, but I don't know of it. If so, please tell me how to use it.
- Can we provide a method to make an agent use all tools, including tools in toolkit and tools of `load_tools()` | Cannot build a file agent | https://api.github.com/repos/langchain-ai/langchain/issues/5454/comments | 5 | 2023-05-30T17:06:27Z | 2023-12-09T16:06:21Z | https://github.com/langchain-ai/langchain/issues/5454 | 1,732,652,179 | 5,454 |
[
"langchain-ai",
"langchain"
] | https://github.com/hwchase17/langchain/blob/64b4165c8d9b8374295d4629ef57d4d58e9af7c8/langchain/chains/conversational_retrieval/base.py#L34
It seems `_get_chat_history` builds the chat_history string, but if the history is already a string then it should simply return it as-is.
The check might even be in the BaseConversationalRetrievalChain `_call` methods.
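A minimal sketch of the guard being proposed (my reading of the suggestion — the tuple-formatting branch mirrors what the default `_get_chat_history` does, simplified):

```python
def get_chat_history(chat_history):
    # Pass a pre-formatted history through untouched instead of raising.
    if isinstance(chat_history, str):
        return chat_history
    # Otherwise build the string from (human, ai) exchanges, as the
    # default implementation does.
    buffer = ""
    for human, ai in chat_history:
        buffer += f"Human: {human}\nAssistant: {ai}\n"
    return buffer

print(get_chat_history("Human: hi\nAssistant: hello\n"))  # returned unchanged
print(get_chat_history([("hi", "hello")]))
```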
What would be the correct way of using this if the chat_history is already a string? | Why raise an error in conversation retrieval chain if the chat history is a string? | https://api.github.com/repos/langchain-ai/langchain/issues/5452/comments | 3 | 2023-05-30T16:41:33Z | 2023-10-12T16:09:23Z | https://github.com/langchain-ai/langchain/issues/5452 | 1,732,618,952 | 5,452 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Example URL of a Text Fragment to the README of this project that highlights the About:
https://github.com/hwchase17/langchain#:~:text=About-,%E2%9A%A1,%E2%9A%A1,-Resources
A SO: https://stackoverflow.com/questions/62989058/how-does-text-in-url-works-to-highlight-text
Example of splitter I'm talking about: https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/markdown.html
It'll be awesome if these text splitters could get the ability to try and generate [Text Fragments](https://web.dev/text-fragments/) for the text that was split up so that a URL could be generated that a user can click through and have the browser auto-scroll to the highlighted fragment. I'm sure the system could also be used outside the browser world for some tooling that could itself scroll to as well as it should be a well developed pattern/algorithm.
The system wouldn't be perfect due to issues such as duplicate text on page, impossible to generate unique split text, but I'm sure most citations would still find it useful.
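A minimal stdlib sketch of building such a link for a chunk produced by a splitter (percent-encoding per the Text Fragments draft; a real implementation would also need the disambiguating `prefix-`/`-suffix` forms to handle duplicate text):

```python
from urllib.parse import quote

def text_fragment_url(page_url: str, snippet: str) -> str:
    """Append a Text Fragment directive (#:~:text=...) so supporting
    browsers scroll to and highlight `snippet` on load."""
    return f"{page_url}#:~:text={quote(snippet, safe='')}"

url = text_fragment_url(
    "https://example.com/handbook.md",
    "lunch in the canteen, free of cost",
)
print(url)
# https://example.com/handbook.md#:~:text=lunch%20in%20the%20canteen%2C%20free%20of%20cost
```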
### Motivation
I'm a little disappointed at the [notion db employee handbook example](https://github.com/hwchase17/notion-qa) where the sources are just filenames.
What if the info was in a big doc? `Source: Office d0ebcaaa2074442ba155c67a41d315dd.md` ? Eh. How about as an option:
```
Source: Office%20d0ebcaaa2074442ba155c67a41d315dd.md#:~:text=~12%20o%27%20clock%2C%20there%20is%20lunch%20in%20the%20canteen%2C%20free%20of%20cost.%20Jo%C3%ABlle%20is%20in%20charge%20of%20lunch%20%E2%80%94%C2%A0ask%20her%20if%20you%20need%20anything%20(allergies%20for%20example).
```
[Hyperlink to raw with text fragment](https://github.com/hwchase17/notion-qa/blob/71610847545c97041b93ecb3b19d9746623ce80f/Notion_DB/Blendle's%20Employee%20Handbook%20a834d55573614857a48a9ce9ec4194e3/Office%20d0ebcaaa2074442ba155c67a41d315dd.md#:~:text=~12%20o%27%20clock%2C%20there%20is%20lunch%20in%20the%20canteen%2C%20free%20of%20cost.%20Jo%C3%ABlle%20is%20in%20charge%20of%20lunch%20%E2%80%94%C2%A0ask%20her%20if%20you%20need%20anything%20(allergies%20for%20example).)
Of course, that looks ugly in a terminal, but on a web page where links can be hyperlinks like above, it'll be a much better experience.
edit: Hmm, that link doesn't work very well on GitHub and it's turbolink'd pages.
### Your contribution
I wish, I'm still trying to grasp Langchain itself. I'm particularly interested in Langchain and friends or rivals for Q/A answering and some of my personal hobby's, my work's, and the notion DB example's pages are quite long. | Text Fragments from text splitters for deep linking with browsers (or compatible systems) to specific text chunks in source documents | https://api.github.com/repos/langchain-ai/langchain/issues/5451/comments | 1 | 2023-05-30T16:13:04Z | 2023-09-10T16:09:55Z | https://github.com/langchain-ai/langchain/issues/5451 | 1,732,574,359 | 5,451 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
We can get the intermediate messages printed when the verbose is set to True in the chains. But is there a way we can get the intermediate messages from the chain as a return value?
### Suggestion:
Take the code below as example.
```
from langchain import OpenAI, ConversationChain
from langchain.llms import OpenAI
llm = OpenAI(engine="text-davinci-003", temperature=0.9)
conversation = ConversationChain(llm=llm, verbose=True)
conversation.predict(input="How are you?")
conversation.predict(input="I am Ricardo Kaka, what is your name?")
conversation.predict(input="What is the first thing I said to you?")
```
We get the messages below printed in the shell. But I am wondering if there is a way I can get the messages as a return value, something like conversation.verbose_message, or conversation.get_verbose_message()?
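One workaround (an editor's suggestion, not from the issue): LangChain chains accept callback handlers, and subclassing `BaseCallbackHandler` to record the `on_llm_start`/`on_llm_end` events gives you the intermediate messages as data instead of console output. A dependency-free sketch of the collector idea (hook signatures simplified from the real `langchain.callbacks.base.BaseCallbackHandler`):

```python
class CaptureCallbackHandler:
    """Stores every intermediate prompt/response instead of printing it.

    Stand-in for a BaseCallbackHandler subclass passed via
    ConversationChain(..., callbacks=[handler]).
    """

    def __init__(self):
        self.records = []

    def on_llm_start(self, prompts):
        self.records.append(("prompt", list(prompts)))

    def on_llm_end(self, text):
        self.records.append(("response", text))

handler = CaptureCallbackHandler()
handler.on_llm_start(["The following is a friendly conversation...\nHuman: How are you?"])
handler.on_llm_end("I'm doing well, thanks!")
print(handler.records)  # the "verbose" content, now available as a value
```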
| Issue: Get the verbose messages from chain | https://api.github.com/repos/langchain-ai/langchain/issues/5448/comments | 4 | 2023-05-30T15:46:42Z | 2023-12-11T16:07:28Z | https://github.com/langchain-ai/langchain/issues/5448 | 1,732,533,742 | 5,448 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
In https://docs.langchain.com/docs/components/agents/agent-executor the statement:
The agent executor is responsible for calling the agent, getting back **and** action and action input, calling the tool that the action references with the corresponding input, getting the output of the tool, and then passing all that information back into the Agent to get the next action it should take
### Idea or request for content:
The agent executor is responsible for calling the agent, getting back action and action input, calling the tool that the action references with the corresponding input, getting the output of the tool, and then passing all that information back into the Agent to get the next action it should take. | DOC: Small typo in the docs, "and" should be removed, and maybe a period in the end would be ok. | https://api.github.com/repos/langchain-ai/langchain/issues/5447/comments | 4 | 2023-05-30T15:43:50Z | 2023-10-31T16:06:50Z | https://github.com/langchain-ai/langchain/issues/5447 | 1,732,528,994 | 5,447 |
[
"langchain-ai",
"langchain"
] | ### Feature request
An interesting takeaway for Meta ToT (Meta Tree of Thoughts): it aims to enhance the Tree of Thoughts (ToT) language algorithm by using a secondary agent to critique and improve the primary agent's prompts. This innovative approach allows the primary agent to generate more accurate and relevant responses based on the feedback from the secondary agent.
https://github.com/kyegomez/Meta-Tree-Of-Thoughts
I would like to add it to the overall offering. If that's OK, I can pick it up.
### Motivation
Optimization on the continuous feedback loop.
### Your contribution
I will like to work on this issue. | Support for Meta ToT | https://api.github.com/repos/langchain-ai/langchain/issues/5444/comments | 1 | 2023-05-30T15:15:18Z | 2023-09-10T16:10:00Z | https://github.com/langchain-ai/langchain/issues/5444 | 1,732,484,112 | 5,444 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
ElasticVectorSearch currently assigns a uuid as identifier while indexing documents.
This is not idempotent: if we run the code twice duplicates are created.
Also it would be beneficial to be able to insert new docs, update existing ones and ignore unchanged.
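A dependency-free sketch of a deterministic id selection (my reading of the desired behavior — honor an explicit `_id`/`id` from metadata, else derive a stable id from content so re-runs overwrite instead of duplicating):

```python
import hashlib

def stable_doc_id(text, metadata=None):
    """Pick an indexing id: an explicit metadata id wins, otherwise a
    content hash, so indexing the same text twice is idempotent."""
    metadata = metadata or {}
    explicit = metadata.get("_id") or metadata.get("id")
    if explicit is not None:
        return str(explicit)
    return hashlib.sha1(text.encode("utf-8")).hexdigest()

print(stable_doc_id("hello world"))                    # same every run
print(stable_doc_id("hello world", {"_id": "doc-1"}))  # doc-1
```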
### Suggestion:
I propose to check first if _id or id is present in metadata before setting it to a UUID. | Allow ElasticVectorSearch#add_texts to explicitely set the _ids | https://api.github.com/repos/langchain-ai/langchain/issues/5437/comments | 2 | 2023-05-30T13:40:49Z | 2023-09-10T16:10:05Z | https://github.com/langchain-ai/langchain/issues/5437 | 1,732,303,481 | 5,437 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
I am trying to use an LLM from Hugging Face as shown in the documentation below. It only works when I use the exact prompt from the documentation; when I change it, I get no responses.
https://github.com/hwchase17/langchain/blob/master/docs/modules/models/llms/integrations/huggingface_hub.ipynb
### Idea or request for content:
_No response_ | LLM from hugging face not working | https://api.github.com/repos/langchain-ai/langchain/issues/5436/comments | 1 | 2023-05-30T13:05:46Z | 2023-09-10T16:10:18Z | https://github.com/langchain-ai/langchain/issues/5436 | 1,732,233,152 | 5,436 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain version : 0.0.177
Python version : 3.10.8
Platform : WSL 2
### Who can help?
@vowelparrot
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.tools import DuckDuckGoSearchRun

search = DuckDuckGoSearchRun(verbose=True)
search.run("What is the capital of Ireland?")
```
```text
/home/vishal_kesti/miniconda3/envs/video_audio_env/lib/python3.10/site-packages/duckduckgo_search/compat.py:20: UserWarning: ddg is deprecated. Use DDGS().text() generator
  warnings.warn("ddg is deprecated. Use DDGS().text() generator")
/home/vishal_kesti/miniconda3/envs/video_audio_env/lib/python3.10/site-packages/duckduckgo_search/compat.py:22: UserWarning: parameter time is deprecated, use parameter timelimit
  warnings.warn("parameter time is deprecated, use parameter timelimit")
/home/vishal_kesti/miniconda3/envs/video_audio_env/lib/python3.10/site-packages/duckduckgo_search/compat.py:24: UserWarning: parameter page is deprecated, use DDGS().text() generator
  warnings.warn("parameter page is deprecated, use DDGS().text() generator")
/home/vishal_kesti/miniconda3/envs/video_audio_env/lib/python3.10/site-packages/duckduckgo_search/compat.py:26: UserWarning: parameter max_results is deprecated, use DDGS().text()
  warnings.warn("parameter max_results is deprecated, use DDGS().text()")
"No good DuckDuckGo Search Result was found"
```
### Expected behavior
There is a change in the duckduckgo-search python library: they have specifically said to use `DDGS` instead of `ddg`, and more specifically `DDGS().text()` if we want to use the API. They also no longer support the `time`, `page` and `max_results` parameters directly, but there is a way to achieve the same thing.

For example:
```python
from itertools import islice

from duckduckgo_search import DDGS

ddgs = DDGS()
keywords = 'live free or die'
ddgs_text_gen = ddgs.text(keywords, region='wt-wt', safesearch='Off', timelimit='y')
for r in ddgs_text_gen:
    print(r)

# Using the lite backend and limiting the number of results to 10
ddgs_text_gen = DDGS().text("notes from a dead house", backend="lite")
for r in islice(ddgs_text_gen, 10):
    print(r)
```
I got it working by making the following code changes:
```python
"""Util that calls DuckDuckGo Search.

No setup required. Free.
https://pypi.org/project/duckduckgo-search/
"""
from typing import Dict, List, Optional

from pydantic import BaseModel, Extra
from pydantic.class_validators import root_validator


class DuckDuckGoSearchAPIWrapper(BaseModel):
    """Wrapper for DuckDuckGo Search API.

    Free and does not require any setup.
    """

    region: Optional[str] = "wt-wt"
    safesearch: str = "moderate"
    timelimit: Optional[str] = "y"
    backend: str = "api"

    class Config:
        """Configuration for this pydantic object."""

        extra = Extra.forbid

    @root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
        """Validate that python package exists in environment."""
        try:
            from duckduckgo_search import DDGS  # noqa: F401
        except ImportError:
            raise ValueError(
                "Could not import duckduckgo-search python package. "
                "Please install it with `pip install duckduckgo-search`."
            )
        return values

    def run(self, query: str) -> str:
        """Run query through DuckDuckGo and return results."""
        from duckduckgo_search import DDGS

        ddgs = DDGS()
        # DDGS().text() returns a generator, so materialize it first.
        results = list(
            ddgs.text(
                query,
                region=self.region,
                safesearch=self.safesearch,
                timelimit=self.timelimit,
                backend=self.backend,
            )
        )
        if len(results) == 0:
            return "No good DuckDuckGo Search Result was found"
        snippets = [result["body"] for result in results]
        return " ".join(snippets)

    def results(self, query: str, num_results: int) -> List[Dict]:
        """Run query through DuckDuckGo and return metadata.

        Args:
            query: The query to search for.
            num_results: The number of results to return.

        Returns:
            A list of dictionaries with the following keys:
                snippet - The description of the result.
                title - The title of the result.
                link - The link to the result.
        """
        from itertools import islice

        from duckduckgo_search import DDGS

        ddgs = DDGS()
        results = list(
            islice(
                ddgs.text(
                    query,
                    region=self.region,
                    safesearch=self.safesearch,
                    timelimit=self.timelimit,
                ),
                num_results,
            )
        )
        if len(results) == 0:
            return [{"Result": "No good DuckDuckGo Search Result was found"}]

        def to_metadata(result: Dict) -> Dict:
            return {
                "snippet": result["body"],
                "title": result["title"],
                "link": result["href"],
            }

        return [to_metadata(result) for result in results]
```
| DuckDuckGo search always returns "No good DuckDuckGo Search Result was found" | https://api.github.com/repos/langchain-ai/langchain/issues/5435/comments | 4 | 2023-05-30T12:40:28Z | 2024-03-31T22:10:37Z | https://github.com/langchain-ai/langchain/issues/5435 | 1,732,178,510 | 5,435 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I can't seem to add memory to an LLMChain that takes 3 inputs, because the `input_key` param only takes one input, so the program loses context.
```python
template = """
As a helpful chatbot agent of the company, provide an answer to the customer query.
Strictly limit to the information provided.
{chat_history}
Calculate the out-of-pocket cost if a deductible is available in the user's plan info.
Question: {query}
Information from user's plan: {plan_info}
Information from company database: {faq_info}
"""

prompt = PromptTemplate(
    input_variables=["chat_history", "query", "plan_info", "faq_info"],
    template=template,
)
memory = ConversationBufferMemory(
    memory_key="chat_history", input_key=["query", "plan_info", "faq_info"]
)
chain = LLMChain(llm=model, prompt=prompt, memory=memory)
return chain.predict(query=query, plan_info=plan_info, faq_info=faq_info)
```
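For context (an editor's note, not part of the issue): `ConversationBufferMemory`'s `input_key` expects a single key name (a `str`), not a list — setting `input_key="query"` so the memory stores only the user question is the usual fix. A dependency-free sketch of the selection logic the memory performs (simplified; exact internals are an assumption):

```python
def pick_memory_input(inputs, input_key=None):
    """Mirror how a chain memory picks the human turn to store:
    input_key must name exactly one of the chain's inputs."""
    if input_key is not None:
        return inputs[input_key]
    if len(inputs) != 1:
        raise ValueError(f"One input key expected, got {sorted(inputs)}")
    return next(iter(inputs.values()))

turn = {"query": "What is my deductible?", "plan_info": "...", "faq_info": "..."}
print(pick_memory_input(turn, input_key="query"))  # What is my deductible?
```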
### Suggestion:
is there a way to get memory working in my case or memory with more than 1 inputs isnt implemented yet? | Memory with multi input | https://api.github.com/repos/langchain-ai/langchain/issues/5434/comments | 0 | 2023-05-30T11:13:02Z | 2023-05-30T11:17:09Z | https://github.com/langchain-ai/langchain/issues/5434 | 1,732,040,122 | 5,434 |
[
"langchain-ai",
"langchain"
] | ### System Info
Hi :)
I tested the new callback stream handler `FinalStreamingStdOutCallbackHandler` and noticed an issue with it.
I copied the code from the documentation and made just one change - use `ChatOpenAI` instead of `OpenAI`
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
`llm = ChatOpenAI(streaming=True, callbacks=[FinalStreamingStdOutCallbackHandler()], temperature=0)` here is my only change
`tools = load_tools(["wikipedia", "llm-math"], llm=llm)`
`agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=False)`
`agent.run("It's 2023 now. How many years ago did Konrad Adenauer become Chancellor of Germany.")`
### Expected behavior
The code above returns the response from the agent but does not stream it. In my project, I must use the `ChatOpenAI` LLM, so I would appreciate it if someone could fix this issue, please. | FinalStreamingStdOutCallbackHandler not working with ChatOpenAI LLM | https://api.github.com/repos/langchain-ai/langchain/issues/5433/comments | 6 | 2023-05-30T10:51:06Z | 2023-07-31T22:23:44Z | https://github.com/langchain-ai/langchain/issues/5433 | 1,732,005,171 | 5,433 |
[
"langchain-ai",
"langchain"
] | ### System Info
Ubuntu 20.04.6
Python 3.8.5
Langchain 0.0.184
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import torch
from langchain.vectorstores import Qdrant
from langchain.embeddings import HuggingFaceInstructEmbeddings
from langchain.chains import RetrievalQA
from langchain.llms import HuggingFacePipeline
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
pipeline,
TextIteratorStreamer,
)
# embeddings
embeddings_model_name = "hkunlp/instructor-base"
embeddings_model = HuggingFaceInstructEmbeddings(
model_name=embeddings_model_name,
model_kwargs={"device": "cuda"},
)
contents = ["bla", "blabla", "blablabla"]
vector_store = Qdrant.from_texts(
contents,
embeddings_model,
location=":memory:",
collection_name="test",
)
retriever = vector_store.as_retriever()
# llm
chatbot_model_name = "togethercomputer/RedPajama-INCITE-Chat-3B-v1"
model = AutoModelForCausalLM.from_pretrained(
chatbot_model_name,
load_in_8bit=True,
torch_dtype=torch.float16,
device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(chatbot_model_name)
streamer = TextIteratorStreamer(
tokenizer,
timeout=10.0,
skip_prompt=True,
skip_special_tokens=True,
)
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
streamer=streamer,
max_length=1024,
temperature=0,
top_p=0.95,
repetition_penalty=1.15,
)
pipe = HuggingFacePipeline(pipeline=pipe)
# qa
qa = RetrievalQA.from_chain_type(
llm=pipe,
chain_type="stuff",
retriever=retriever,
return_source_documents=False,
)
qa.run("What is the capital of France")
```
which leads to:
```text
TypeError: cannot pickle '_thread.lock' object
```
### Expected behavior
I should be able to get the `streamer` outputs | `RetrievalQA` and `HuggingFacePipeline` lead to `TypeError: cannot pickle '_thread.lock' object` | https://api.github.com/repos/langchain-ai/langchain/issues/5431/comments | 8 | 2023-05-30T10:00:43Z | 2024-02-12T18:50:10Z | https://github.com/langchain-ai/langchain/issues/5431 | 1,731,930,284 | 5,431 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version 0.0.184, python 3.9.13
Function `parse_json_markdown` in langchain/output_parsers/json.py fails with input text string:
\`\`\`json
{
"action": "Final Answer",
"action_input": "Here's a Python script to remove backticks at the beginning and end of a string:\n\n\`\`\`python\ndef remove_backticks(s):\n return s.strip('\`')\n\nstring_with_backticks = '\`example string\`'\nresult = remove_backticks(string_with_backticks)\nprint(result)\n\`\`\`\n\nThis script defines a function called \`remove_backticks\` that takes a string as input and returns a new string with backticks removed from the beginning and end. It then demonstrates how to use the function with an example string."
}
\`\`\`
Potential cause of error:
`match.group(2)` in the function `parse_json_markdown` contains only the string up to the first occurrence of the second triple backticks:
{
"action": "Final Answer",
"action_input": "Here's a Python script to remove backticks at the beginning and end of a string:\n\n
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Called function `parse_json_markdown` in langchain/output_parsers/json.py with input text string:
\`\`\`json
{
"action": "Final Answer",
"action_input": "Here's a Python script to remove backticks at the beginning and end of a string:\n\n\`\`\`python\ndef remove_backticks(s):\n return s.strip('\`')\n\nstring_with_backticks = '\`example string\`'\nresult = remove_backticks(string_with_backticks)\nprint(result)\n\`\`\`\n\nThis script defines a function called \`remove_backticks\` that takes a string as input and returns a new string with backticks removed from the beginning and end. It then demonstrates how to use the function with an example string."
}
\`\`\`
### Expected behavior
Function `parse_json_markdown` should return the following json string
{
"action": "Final Answer",
"action_input": "Here's a Python script to remove backticks at the beginning and end of a string:\n\n\`\`\`python\ndef remove_backticks(s):\n return s.strip('\`')\n\nstring_with_backticks = '\`example string\`'\nresult = remove_backticks(string_with_backticks)\nprint(result)\n\`\`\`\n\nThis script defines a function called \`remove_backticks\` that takes a string as input and returns a new string with backticks removed from the beginning and end. It then demonstrates how to use the function with an example string."
} | parse_json_markdown is unable to parse json strings with nested triple backticks | https://api.github.com/repos/langchain-ai/langchain/issues/5428/comments | 8 | 2023-05-30T08:37:30Z | 2024-08-07T09:27:58Z | https://github.com/langchain-ai/langchain/issues/5428 | 1,731,789,217 | 5,428 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain: 0.0.184
Python: 3.10.9
Platform: Windows 10 with Jupyter lab
### Who can help?
@vowelparrot
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
SQLDatabaseToolkit works well if the SQL doesn't end with double quotation marks; if it does, the toolkit truncates the last double quotation mark, resulting in an endless loop.
Below is the initial code snapshot.

And when I executed it.

The LLM generates the correct SQL, but the toolkit truncates the last double quotation marks.
### Expected behavior
Won't truncate the last double quotation marks for PostgreSql. | SQLDatabaseToolkit doesn't work well with Postgresql, it will truncate the last double quotation marks in the SQL | https://api.github.com/repos/langchain-ai/langchain/issues/5423/comments | 4 | 2023-05-30T04:02:36Z | 2023-06-01T01:25:23Z | https://github.com/langchain-ai/langchain/issues/5423 | 1,731,469,889 | 5,423 |
[
"langchain-ai",
"langchain"
] | ### System Info
Dear Developer:
I have encountered an error: I am not able to run OpenAI and AzureChatOpenAI together. Here is how to reproduce it.
langchain version: 0.0.184
Python: 3.9.12
```python
from langchain.llms import OpenAI, AzureOpenAI
from langchain.chat_models import ChatOpenAI, AzureChatOpenAI
openai_params = {
"openai_api_key" : "key",
"openai_api_base": "url"
}
openaiazure_params = {
"deployment_name" : "db",
"openai_api_base" : "https://azure.com/",
"openai_api_version" : "2023-03-15-preview",
"openai_api_type" : "azure",
"openai_api_key" : "key"
}
llm = OpenAI(temperature=0.5, max_tokens=1024, **openai_params)
print(llm("tell me joke"))  # note that this line works fine; it will call the API without any error
llmazure = AzureChatOpenAI(**openaiazure_params)
print(llm("tell me joke"))  # now it seems that running AzureChatOpenAI somehow changes the class attributes of OpenAI. If I rerun this line it gives the following error
```
```text
File "/Users/xyn/anaconda3/lib/python3.9/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 83, in __prepare_create_request
raise error.InvalidRequestError(
openai.error.InvalidRequestError: Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.completion.Completion'>
```
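A likely cause (editor's reading, not confirmed in the issue): both wrappers configure module-level globals in the `openai` package (`api_type`, `api_base`, `api_version`, ...), so constructing the Azure client can flip shared state out from under the earlier `OpenAI` instance. A toy model of that shared-state interaction:

```python
# Toy model: two "clients" configuring one shared module-global, the way
# openai.api_type is shared by all wrappers in the same process.
class FakeOpenAIModule:
    api_type = "open_ai"

def make_plain_llm():
    FakeOpenAIModule.api_type = "open_ai"
    return "plain-llm"

def make_azure_llm():
    FakeOpenAIModule.api_type = "azure"  # mutates state the plain llm relies on
    return "azure-llm"

make_plain_llm()
make_azure_llm()
print(FakeOpenAIModule.api_type)  # azure -- plain llm calls now hit the Azure path
```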
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.llms import OpenAI, AzureOpenAI
from langchain.chat_models import ChatOpenAI, AzureChatOpenAI
openai_params = {
"openai_api_key" : "key",
"openai_api_base": "url"
}
openaiazure_params = {
"deployment_name" : "db",
"openai_api_base" : "https://azure.com/",
"openai_api_version" : "2023-03-15-preview",
"openai_api_type" : "azure",
"openai_api_key" : "key"
}
llm = OpenAI(temperature=0.5, max_tokens=1024, **openai_params)
print(llm("tell me joke"))  # note that this line works fine; it will call the API without any error
llmazure = AzureChatOpenAI(**openaiazure_params)
print(llm("tell me joke"))  # now it seems that running AzureChatOpenAI somehow changes the class attributes of OpenAI. If I rerun this line it gives the following error
```
### Expected behavior
```python
print(llm("tell me joke")) # still gives the result after using the AzureChatOpenAI
from langchain.schema import HumanMessage
llmazure([HumanMessage(content="tell me joke")]) # could also do appropriate calls
# was worried attributes would be changed back, so what if I reset the OpenAI and test AzureChatOpenAI again
llm = OpenAI(temperature=0.5, max_tokens=1024, **openai_params)
# then test AzureChatOpenAI
llmazure([HumanMessage(content="tell me joke")]) # do appropriate calls
print(llm("tell me joke")) # do appropriate calls
``` | Can not Use OpenAI and AzureChatOpenAI together | https://api.github.com/repos/langchain-ai/langchain/issues/5422/comments | 3 | 2023-05-30T03:52:01Z | 2023-09-18T16:09:59Z | https://github.com/langchain-ai/langchain/issues/5422 | 1,731,462,914 | 5,422 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
The LangChain framework mostly aims to build applications which interact with LLMs. Many online applications are themselves implemented in other languages such as Java and C++, but LangChain only supports Python and JS for now. How about implementing other-language versions of LangChain?
### Suggestion:
_No response_ | Implement other Language version of LangChain | https://api.github.com/repos/langchain-ai/langchain/issues/5421/comments | 9 | 2023-05-30T03:23:46Z | 2024-03-31T20:44:24Z | https://github.com/langchain-ai/langchain/issues/5421 | 1,731,445,889 | 5,421 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I tried, but it doesn't work.
my code:
```
let { HNSWLib } = await import('langchain/vectorstores/hnswlib')
let { OpenAIEmbeddings } = await import('langchain/embeddings/openai')
let vectors1 = await HNSWLib.load(
"D:/workcode/nodejs/chatgpt_server/vectors/32202",
new OpenAIEmbeddings()
)
let vectors2 = await HNSWLib.load(
"D:/workcode/nodejs/chatgpt_server/vectors/60551",
new OpenAIEmbeddings()
)
let vectors3 = await vectors1.addVectors(vectors2, vectors2.docstore._docs)
```
### Suggestion:
_No response_ | Issue: How to merge two vector in HNSWLib | https://api.github.com/repos/langchain-ai/langchain/issues/5420/comments | 0 | 2023-05-30T02:44:14Z | 2023-05-30T08:52:51Z | https://github.com/langchain-ai/langchain/issues/5420 | 1,731,421,091 | 5,420 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Azure Cosmos DB for MongoDB vCore enables users to efficiently store, index, and query high dimensional vector data stored directly in Azure Cosmos DB for MongoDB vCore. It contains similarity measures such as COS (cosine distance), L2 (Euclidean distance) or IP (inner product) which measures the distance between the data vectors and your query vector. The data vectors that are closest to your query vector are the ones that are found to be most similar semantically and retrieved during query time. The accompanying PR would add support for Langchain Python users to store vectors from document embeddings generated from APIs such as Azure OpenAI Embeddings or Hugging Face on Azure.
[Azure Cosmos DB for MongoDB vCore](https://learn.microsoft.com/en-us/azure/cosmos-db/mongodb/vcore/vector-search)
### Motivation
This capability described in the feature request is currently not available for Langchain Python.
### Your contribution
I will be submitting a PR for this feature request. | Add AzureCosmosDBVectorSearch VectorStore | https://api.github.com/repos/langchain-ai/langchain/issues/5419/comments | 1 | 2023-05-29T23:58:25Z | 2023-09-18T16:10:05Z | https://github.com/langchain-ai/langchain/issues/5419 | 1,731,329,471 | 5,419 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
My code:
```
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings(
openai_api_base='https://api.openai.com/v1',
openai_api_type='open_ai',
model='text-embedding-ada-002',
openai_api_version='2023-05-15',
openai_api_key=openai_api_key,
max_retries=3,
)
```
And I got `2023-05-30 07:28:40,163 INFO error_code=None error_message='Unsupported OpenAI-Version header provided: 2023-05-15. (HINT: you can provide any of the following supported versions: 2020-10-01, 2020-11-07. Alternatively, you can simply omit this header to use the default version associated with your account.)' error_param=headers:openai-version error_type=invalid_request_error message='OpenAI API error received' stream_error=False
`
Why are the supported versions so old (2020-10-01, 2020-11-07)? Is this normal?
Has anybody else got this error?
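For context (an editor's note): `2023-05-15` is an Azure OpenAI `api-version`; the public `api.openai.com` endpoint only recognizes the old `2020-*` header values, so per the hint you can simply omit `openai_api_version` (and `openai_api_type`) there. A hypothetical helper illustrating the split:

```python
def embeddings_kwargs(api_base, api_key):
    """Build OpenAIEmbeddings kwargs (hypothetical helper): only Azure
    endpoints need an api-version; the public API should omit it."""
    kwargs = {
        "openai_api_base": api_base,
        "openai_api_key": api_key,
        "model": "text-embedding-ada-002",
    }
    if ".azure.com" in api_base:
        kwargs["openai_api_type"] = "azure"
        kwargs["openai_api_version"] = "2023-05-15"
    return kwargs

print("openai_api_version" in embeddings_kwargs("https://api.openai.com/v1", "sk-..."))  # False
```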
### Suggestion:
_No response_ | Issue: Unsupported OpenAI-Version header provided: 2023-05-15. (HINT: you can provide any of the following support ed versions: 2020-10-01, 2020-11-07.' | https://api.github.com/repos/langchain-ai/langchain/issues/5418/comments | 1 | 2023-05-29T23:56:40Z | 2023-09-10T16:10:21Z | https://github.com/langchain-ai/langchain/issues/5418 | 1,731,327,871 | 5,418 |
[
"langchain-ai",
"langchain"
] | ### Feature request
It would be very nice to have the similarity score from docsearch.
Right now I get a list of documents even if the model says "I'm sorry, but I don't have enough information to answer your question".
```
from langchain.chains import RetrievalQA
qa = RetrievalQA.from_chain_type(
llm=llm,
chain_type= "stuff",
retriever = docsearch.as_retriever(
search_type="similarity",
search_kwargs={"k":3}
),
return_source_documents=True
)
```
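For what it's worth (an editor's note): vectorstores expose `similarity_search_with_score`, and recent LangChain retrievers also accept `search_type="similarity_score_threshold"` with `search_kwargs={"score_threshold": ...}`, which gets close to this request. The missing piece is usually a cutoff so irrelevant hits are dropped before the chain sees them — a dependency-free sketch of that filtering step (the threshold value is illustrative; for distance scores, lower means more similar):

```python
def filter_hits_by_distance(docs_and_scores, max_distance=0.4):
    """Keep only retrieved chunks whose distance is below a cutoff, so the
    chain can return no sources instead of stuffing irrelevant context."""
    return [doc for doc, score in docs_and_scores if score <= max_distance]

hits = [("relevant chunk", 0.31), ("weak chunk", 0.55), ("noise", 0.87)]
print(filter_hits_by_distance(hits))  # ['relevant chunk']
```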
### Motivation
It currently returns a list of documents even if the model says "I'm sorry, but I don't have enough information to answer your question."
I also get a list of not-really-relevant documents.
### Your contribution
Not sure here. For example, in `chroma.py` I can add the similarity score to the output:
```
def similarity_search(
    self,
    query: str,
    k: int = DEFAULT_K,
    filter: Optional[Dict[str, str]] = None,
    **kwargs: Any,
) -> List[Document]:
    docs_and_scores = self.similarity_search_with_score(query, k, filter=filter)
    result = []
    for doc, score in docs_and_scores:
        doc.metadata['score'] = score
        result.append(doc)
    return result
```
**BUT** the score is not a reliable signal on its own: I get similarity 0.3-0.4 both for correct answers and for responses like "The context does not provide any information about what...". So this method is good for returning the score, but it does not solve all my questions.
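As a stopgap, a relevance cutoff can also be applied outside the vector store. Below is a minimal, self-contained sketch of post-filtering `(document, score)` pairs by a distance threshold; it assumes scores are plain floats where lower means closer (as with Chroma's default distance), and the threshold value is purely illustrative:

```python
def filter_by_score(docs_and_scores, max_distance=0.35):
    """Keep only documents whose distance score is below the cutoff.

    Expects an iterable of (doc, score) pairs, such as the result of
    similarity_search_with_score(); lower score = more similar.
    """
    return [doc for doc, score in docs_and_scores if score <= max_distance]


# Stand-ins for real Document objects: (content, score) pairs.
results = [("relevant chunk", 0.31), ("off-topic chunk", 0.42)]
print(filter_by_score(results))  # ['relevant chunk']
```

Whether a given cutoff actually separates good from bad answers still has to be tuned per corpus, which matches the observation above that raw scores alone are not decisive.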
| Is it possible to return similarity score from RetrievalQA/docsearch | https://api.github.com/repos/langchain-ai/langchain/issues/5416/comments | 14 | 2023-05-29T21:02:00Z | 2024-03-06T06:46:21Z | https://github.com/langchain-ai/langchain/issues/5416 | 1,731,233,003 | 5,416 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain 0.0.182, python 3.11.3, mac os
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [x] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [x] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
FLARE with azure open ai causes this issue.
InvalidRequestError Traceback (most recent call last)
in <cell line: 1>()
----> 1 flare.run(query)
20 frames
[/usr/local/lib/python3.10/dist-packages/openai/api_resources/abstract/engine_api_resource.py](https://localhost:8080/#) in __prepare_create_request(cls, api_key, api_base, api_type, api_version, organization, **params)
81 if typed_api_type in (util.ApiType.AZURE, util.ApiType.AZURE_AD):
82 if deployment_id is None and engine is None:
---> 83 raise error.InvalidRequestError(
84 "Must provide an 'engine' or 'deployment_id' parameter to create a %s"
85 % cls,
InvalidRequestError: Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.completion.Completion'>
### Expected behavior
> Entering new FlareChain chain...
Current Response:
Prompt after formatting:
Respond to the user message using any relevant context. If context is provided, you should ground your answer in that context. Once you're done responding return FINISHED.
>>> CONTEXT:
>>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi
>>> RESPONSE:
> Entering new QuestionGeneratorChain chain...
Prompt after formatting:
Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:
>>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi
>>> EXISTING PARTIAL RESPONSE:
The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications.
Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing.
In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for
The question to which the answer is the term/entity/phrase " decentralized platform for natural language processing" is:
Prompt after formatting:
Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:
>>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi
>>> EXISTING PARTIAL RESPONSE:
The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications.
Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing.
In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for
The question to which the answer is the term/entity/phrase " uses a blockchain" is:
Prompt after formatting:
Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:
>>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi
>>> EXISTING PARTIAL RESPONSE:
The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications.
Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing.
In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for
The question to which the answer is the term/entity/phrase " distributed ledger to" is:
Prompt after formatting:
Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:
>>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi
>>> EXISTING PARTIAL RESPONSE:
The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications.
Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing.
In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for
The question to which the answer is the term/entity/phrase " process data, allowing for secure and transparent data sharing." is:
Prompt after formatting:
Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:
>>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi
>>> EXISTING PARTIAL RESPONSE:
The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications.
Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing.
In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for
The question to which the answer is the term/entity/phrase " set of tools" is:
Prompt after formatting:
Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:
>>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi
>>> EXISTING PARTIAL RESPONSE:
The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications.
Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing.
In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for
The question to which the answer is the term/entity/phrase " help developers create" is:
Prompt after formatting:
Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:
>>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi
>>> EXISTING PARTIAL RESPONSE:
The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications.
Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing.
In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for
The question to which the answer is the term/entity/phrase " create an AI system" is:
Prompt after formatting:
Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:
>>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi
>>> EXISTING PARTIAL RESPONSE:
The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications.
Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing.
In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for
The question to which the answer is the term/entity/phrase " NLP applications" is: | FLARE | Azure open Ai doesn't work | https://api.github.com/repos/langchain-ai/langchain/issues/5414/comments | 11 | 2023-05-29T17:50:06Z | 2023-10-10T12:40:52Z | https://github.com/langchain-ai/langchain/issues/5414 | 1,731,079,465 | 5,414 |
[
"langchain-ai",
"langchain"
] | ### Feature request
It would be useful to introduce an EarlyStoppingException for capture by the AgentExecutor.
### Motivation
It would be nice to throw a `EarlyStoppingException` in a tool used by an Agent when certain pre-conditions are met (e.g. a validation of something has failed). The `EarlyStoppingException` would be caught by the AgentExector where it would immediately return the error message of the tool as the output.
My current work around to achieve something like this without customising the AgentExecutor requires me to throw the error message as part of the JSON output of the tool in the output format the agent expects from the tool. The agent would then take this information and run it through an LLM. This is not as ideal as an immediate return of the error message as the LLM doesn't return the exact message and it has to pass through a LLM again.
The `return_immediate_results` is not ideal in this case as I would like the Agent to summarise the output of the tool on most passes, except when the condition is met where a `EarlyStoppingException` is thrown and caught by the `AgentExector`.
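The shape of the proposal can be sketched in plain Python (the class and function names here are illustrative, not the actual langchain API): the tool raises, and the executor loop catches the exception and returns the tool's message verbatim instead of passing it through the LLM again.

```python
class EarlyStoppingException(Exception):
    """Raised by a tool to abort the agent loop and surface its message directly."""


def validating_tool(value: str) -> str:
    # Example precondition check inside a tool.
    if not value.isdigit():
        raise EarlyStoppingException(f"Validation failed: {value!r} is not numeric.")
    return f"Parsed number {value}"


def run_agent(steps):
    """Toy executor loop: collect each tool output, unless a tool aborts early."""
    outputs = []
    for value in steps:
        try:
            outputs.append(validating_tool(value))
        except EarlyStoppingException as exc:
            return str(exc)  # returned immediately, no further LLM pass
    return " | ".join(outputs)


print(run_agent(["42", "oops", "7"]))  # Validation failed: 'oops' is not numeric.
```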
### Your contribution
Happy to create a PR for this if it is wanted. | Early Stopping Exception | https://api.github.com/repos/langchain-ai/langchain/issues/5412/comments | 1 | 2023-05-29T17:02:26Z | 2023-09-10T16:10:25Z | https://github.com/langchain-ai/langchain/issues/5412 | 1,731,041,874 | 5,412 |
[
"langchain-ai",
"langchain"
] | ### System Info
python 3.10. Langchain 0.0.184
It seems that MultiRetrievalQAChain requires a chat model to run, but this shouldn't be necessary. Other QA pipelines don't require a chat model, and I don't see why this one should. I'm guessing this is just a symptom of some parts of the system being upgraded to use chat models, since OpenAI is used for a lot of little things under the hood in langchain, like deciding which vector DB to use based on the retriever_infos dict. However, if you are trying to use an LLM other than OpenAI's, you cannot use MultiRetrievalQA.
Further, I've run into some issues of this sort across other QA pipelines when using Hugging Face LLMs. Generally, better support for Hugging Face is important to me, and I think it will be important to many others in the near future as the open-source model ecosystem continues to grow.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
import torch
import langchain
from langchain.llms import HuggingFacePipeline, HuggingFaceHub
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.schema import Document
from langchain.document_loaders import DirectoryLoader, TextLoader
from langchain.indexes import VectorstoreIndexCreator
from langchain.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter
from langchain.chains import RetrievalQA
from langchain.chains.router import MultiRetrievalQAChain
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM, GPTJForCausalLM, AutoModelForQuestionAnswering, GPTJForQuestionAnswering
from abc import ABC, abstractmethod
from typing import List, Dict, Optional
import os
import requests
import shutil

# device object
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')

# hf embeddings model
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")

# Cerebras GPT-2.7B
model = AutoModelForCausalLM.from_pretrained(
    "cerebras/Cerebras-GPT-2.7B",
    torch_dtype=torch.float16,
).to(device)
tokenizer = AutoTokenizer.from_pretrained("cerebras/Cerebras-GPT-2.7B")

# hf pipeline
gen_pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=200,
    early_stopping=True,
    no_repeat_ngram_size=2,
    device=0
)

# define llm
gen_llm = HuggingFacePipeline(pipeline=gen_pipe)

# load documents and create retrievers
docs1 = TextLoader('docs/dir1/mydoc1.txt').load_and_split()
retriever1 = Chroma.from_documents(docs1, embeddings).as_retriever()
docs2 = TextLoader('docs/dir2/mydoc2.txt').load_and_split()
retriever2 = Chroma.from_documents(docs2, embeddings).as_retriever()

retriever_infos = [
    {
        "name": "space-knowledge",
        "description": "Good for answering general questions about outer space.",
        "retriever": retriever1
    },
    {
        "name": "earth-knowledge",
        "description": "Good for answering questions about planet earth",
        "retriever": retriever2
    }
]

chain = MultiRetrievalQAChain.from_retrievers(llm=gen_llm, retriever_infos=retriever_infos, verbose=True)
```
### Expected behavior
I expect this to work the same way it does when using an openAI model. i.e., the model chooses the vectorDB to use based on the question, searches, and returns a response. | MultiRetrievalQAChain requires ChatModel... but should it? | https://api.github.com/repos/langchain-ai/langchain/issues/5411/comments | 7 | 2023-05-29T16:47:37Z | 2023-09-20T16:09:16Z | https://github.com/langchain-ai/langchain/issues/5411 | 1,731,026,192 | 5,411 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I have successfully set up streaming over HTTP with FastAPI and OpenAI + ConversationalRetrievalChain.
If I don't use streaming and just return the whole response, as I was doing previously, the metadata is displayed along with the answer.
If I enable streaming, only the answer is displayed, with a '%' at the end of the response.
Like: ......dummytext.%
Code, which is responsible for streaming:
```
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
import queue
class ThreadedGenerator:
    def __init__(self):
        self.queue = queue.Queue()

    def __iter__(self):
        return self

    def __next__(self):
        item = self.queue.get()
        if item is StopIteration:
            raise item
        return item

    def send(self, data):
        self.queue.put(data)

    def close(self):
        self.queue.put(StopIteration)


class ChainStreamHandler(StreamingStdOutCallbackHandler):
    def __init__(self, gen):
        super().__init__()
        self.gen = gen

    def on_llm_new_token(self, token: str, **kwargs):
        self.gen.send(token)
```
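To see how the `ThreadedGenerator` hands tokens from the callback thread to the HTTP response, here is a self-contained demo with a fake producer thread standing in for the LLM callback (no langchain or FastAPI involved):

```python
import queue
import threading


class ThreadedGenerator:
    def __init__(self):
        self.queue = queue.Queue()

    def __iter__(self):
        return self

    def __next__(self):
        item = self.queue.get()
        if item is StopIteration:
            raise item
        return item

    def send(self, data):
        self.queue.put(data)

    def close(self):
        self.queue.put(StopIteration)


def fake_llm(gen):
    try:
        for token in ["Hel", "lo", "!"]:
            gen.send(token)  # what on_llm_new_token does per token
    finally:
        gen.close()  # what the finally block in askQuestion does


gen = ThreadedGenerator()
threading.Thread(target=fake_llm, args=(gen,)).start()
print("".join(gen))  # Hello!
```

The consumer (here `"".join`, in the app `StreamingResponse`) blocks on the queue until the producer sends each token, and iteration stops when `close()` enqueues the `StopIteration` sentinel.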
Ask question function:
```
def askQuestion(self, generator, collection_id, question):
    try:
        collection_name = "collection-" + str(collection_id)
        self.llm = ChatOpenAI(model_name=self.model_name, temperature=self.temperature,
                              openai_api_key=settings.OPENAI_API_KEY, streaming=True, verbose=VERBOSE,
                              callback_manager=CallbackManager([ChainStreamHandler(generator)]))
        self.memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True, output_key='answer')
        self.chain = ConversationalRetrievalChain.from_llm(
            self.llm, chroma_Vectorstore.as_retriever(similarity_search_with_score=True),
            return_source_documents=True, verbose=VERBOSE,
            memory=self.memory)

        result = self.chain({"question": question})
        res_dict = {
            "answer": result["answer"],
        }
        res_dict["source_documents"] = []
        for source in result["source_documents"]:
            res_dict["source_documents"].append({
                "page_content": source.page_content,
                "metadata": source.metadata
            })
        return res_dict
    finally:
        generator.close()
```
And the API route itself
```
def stream(question, collection_id):
    generator = ThreadedGenerator()
    threading.Thread(target=thread_handler.askQuestion, args=(generator, collection_id, question)).start()
    return generator


@router.post("/collection/{collection_id}/ask_question")
async def ask_question(collection_id: str, request: Request):
    form_data = await request.form()
    question = form_data["question"]
    return StreamingResponse(stream(question, collection_id), media_type='text/event-stream')
```
In the askQuestion function I build the res_dict object, which holds the answer and also the sources from the metadata.
How can I also display the sources after the answer has finished streaming? (The sources are already in the metadata.)
Is a separate API call the better way to do this, or are there other practices?
Thanks to everyone for any advice! | Issue: Display metadata after streaming a response with FastAPI | https://api.github.com/repos/langchain-ai/langchain/issues/5409/comments | 3 | 2023-05-29T15:31:49Z | 2023-09-18T16:10:20Z | https://github.com/langchain-ai/langchain/issues/5409 | 1,730,952,051 | 5,409 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Add the ability to pass the original prompt through to the ExecutorAgent so that the original explicit context is not lost during a PlanAndExecute run.
### Motivation
PlanAndExecute agents can create a plan of steps dependent on context given in the original prompt. However, this context is lost after the plan is created and is being executed.
However, often the plan is formed in a way which refers to the prior context, losing information. For example, I gave the following prompt, and gave the agent access only to the PythonREPL tool:
```py
prompt = (
f"Task: Analyse the customer data available in the database with path '{db_path}'. Tell me the average "
"sales by month."
)
```
In the above example, `db_path` is a fully formed string which can be passed directly to `sqlalchemy.create_engine`.
The first step in the plan formed was: `Connect to the database using the given path`. This would ordinarily be fine, however, the context of the "given path" was lost, as it was not part of the reformed prompt passed to the executor. Optionally including the original prompt in the template should assist with this.
### Your contribution
I will be submitting a PR shortly with a proposed solution :) | Add the ability to pass the prompt through to Executor Agents for enrichment during PlanAndExecute | https://api.github.com/repos/langchain-ai/langchain/issues/5400/comments | 0 | 2023-05-29T13:19:30Z | 2023-06-03T21:59:11Z | https://github.com/langchain-ai/langchain/issues/5400 | 1,730,753,427 | 5,400 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.183 , Platform Anaconda, Python version 3.10.9
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
[issue.txt](https://github.com/hwchase17/langchain/files/11591485/issue.txt)
### Expected behavior
Cypher Query along with Explanation | GraphCypherQAChain Authentication error | https://api.github.com/repos/langchain-ai/langchain/issues/5399/comments | 2 | 2023-05-29T11:52:53Z | 2023-09-12T16:12:01Z | https://github.com/langchain-ai/langchain/issues/5399 | 1,730,626,909 | 5,399 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
When using the BooleanOutputParser in chain_filter.py, if the LLM outputs a lowercase 'yes' or 'no', the `parse` function throws a ValueError.
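The mismatch can be reproduced standalone. The logic below mirrors the parser's comparison, assuming `true_val`/`false_val` are the uppercase defaults `"YES"`/`"NO"`:

```python
TRUE_VAL, FALSE_VAL = "YES", "NO"


def parse_strict(text: str) -> bool:
    """Current behavior: compares the raw text, so lowercase output fails."""
    cleaned = text.strip()
    if cleaned not in (TRUE_VAL, FALSE_VAL):
        raise ValueError(f"Expected {TRUE_VAL} or {FALSE_VAL}, received {cleaned}.")
    return cleaned == TRUE_VAL


def parse_lenient(text: str) -> bool:
    """Proposed behavior: uppercase before comparing."""
    cleaned = text.upper().strip()
    if cleaned not in (TRUE_VAL, FALSE_VAL):
        raise ValueError(f"Expected {TRUE_VAL} or {FALSE_VAL}, received {cleaned}.")
    return cleaned == TRUE_VAL


try:
    parse_strict("yes")
except ValueError as exc:
    print("strict parser:", exc)

print("lenient parser:", parse_lenient("yes"))  # lenient parser: True
```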
### Suggestion:
I fixed it, as follows:
```python
def parse(self, text: str) -> bool:
    """Parse the output of an LLM call to a boolean.

    Args:
        text: output of language model

    Returns:
        boolean
    """
    cleaned_text = text.upper().strip()
    if cleaned_text not in (self.true_val, self.false_val):
        raise ValueError(
            f"BooleanOutputParser expected output value to either be "
            f"{self.true_val} or {self.false_val}. Received {cleaned_text}."
        )
    return cleaned_text == self.true_val
```
| Issue: value error in BooleanOutputParser | https://api.github.com/repos/langchain-ai/langchain/issues/5396/comments | 1 | 2023-05-29T10:56:23Z | 2023-09-10T16:10:36Z | https://github.com/langchain-ai/langchain/issues/5396 | 1,730,545,468 | 5,396 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.183
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
If GPT returns a message formatted this way:
Thought: I will produce a list of things.
Action:
```
{
"action": "Final Answer",
"action_input": [
{ "thing" : "A", "attribute" : "X" },
{ "thing" : "B", "attribute" : "Y" }
]
}
```
When `action_input` is a list instead of a string, the output parser still recognizes it as valid and returns it all the way back.
output_parser.py: line 23->32:
```
def parse(self, text: str) -> Union[AgentAction, AgentFinish]:
    try:
        action_match = re.search(r"```(.*?)```?", text, re.DOTALL)
        if action_match is not None:
            response = json.loads(action_match.group(1).strip(), strict=False)
            if isinstance(response, list):
                # gpt turbo frequently ignores the directive to emit a single action
                logger.warning("Got multiple action responses: %s", response)
                response = response[0]
            if response["action"] == "Final Answer":
```
`response = json.loads(...)` only checks that the response is valid JSON, not that fields like `response["action_input"]` are strings.
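A quick standalone check of that claim with the stdlib `json` module — parsing succeeds even though `action_input` is a list, so an explicit type guard would be needed to guarantee a string:

```python
import json

payload = """
{
  "action": "Final Answer",
  "action_input": [
    {"thing": "A", "attribute": "X"},
    {"thing": "B", "attribute": "Y"}
  ]
}
"""

response = json.loads(payload)
print(type(response["action_input"]).__name__)  # list

# The kind of guard the parser could apply before returning:
if not isinstance(response["action_input"], str):
    response["action_input"] = json.dumps(response["action_input"])

print(type(response["action_input"]).__name__)  # str
```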
While not necessarily an issue functionally speaking, this means that the function prototype
in base.py:225
```
def run(self, *args: Any, callbacks: Callbacks = None, **kwargs: Any) -> str:
"""Run the chain as text in, text out or multiple variables, text out."""
```
is incorrect: the return value is usually a string, but it can be a dictionary or a list depending on the output of GPT.
### Expected behavior
Either force GPT to reformulate, or change the function prototype | Parser output may not always produce a string (based on what GPT returns), any valid json construct is possible | https://api.github.com/repos/langchain-ai/langchain/issues/5393/comments | 1 | 2023-05-29T08:50:13Z | 2023-09-10T16:10:41Z | https://github.com/langchain-ai/langchain/issues/5393 | 1,730,352,093 | 5,393 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain v0.0.183
### Who can help?
@hwchase17
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain import Wikipedia
from langchain.chat_models import ChatOpenAI
from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType
from langchain.agents.react.base import DocstoreExplorer
docstore=DocstoreExplorer(Wikipedia())
tools = [
Tool(
name="Search",
func=docstore.search,
description="useful for when you need to ask with search"
),
Tool(
name="Lookup",
func=docstore.lookup,
description="useful for when you need to ask with lookup"
)
]
llm = ChatOpenAI()
react = initialize_agent(tools, llm, agent=AgentType.REACT_DOCSTORE, verbose=True)
question = 'Question'
react.run(question)
```
Running this snippet initializes a ReActDocStoreAgent and runs it on the given question.
[initialize_agent( )](https://github.com/hwchase17/langchain/blob/master/langchain/agents/initialize.py#L12) returns an AgentExecutor. During this call an agent is created using the [from_llm_and_tools( )](https://github.com/hwchase17/langchain/blob/master/langchain/agents/agent.py#L528) function. This function creates a new llm_chain for the agent and sets the prompt using agent_cls.create_prompt( ). Since we are using a ReActDocStoreAgent, [the function](https://github.com/hwchase17/langchain/blob/master/langchain/agents/react/base.py#L35) simply returns the WIKI_PROMPT, which is a PromptTemplate object.
[run( )](https://github.com/hwchase17/langchain/blob/master/langchain/agents/agent.py#L934) from AgentExecutor calls [_take_next_step( )](https://github.com/hwchase17/langchain/blob/master/langchain/agents/agent.py#L748) which calls agent.plan( ).
[plan( )](https://github.com/hwchase17/langchain/blob/master/langchain/agents/agent.py#L425) eventually calls llm_chain.predict( ). Eventually llm_chain calls [generate( )](https://github.com/hwchase17/langchain/blob/master/langchain/chains/llm.py#L72). This function creates a prompt by calling [prep_prompt( )](https://github.com/hwchase17/langchain/blob/master/langchain/chains/llm.py#L94). The prompt is a StringPromptValue since it is created using [format_prompt( )](https://github.com/hwchase17/langchain/blob/master/langchain/prompts/base.py#L230).
This variable is then used as an argument for the function llm.generate_prompt( ). [generate_prompt( )](https://github.com/hwchase17/langchain/blob/master/langchain/chat_models/base.py#L136) converts the prompt to message format using [to_messages( )](https://github.com/hwchase17/langchain/blob/master/langchain/prompts/base.py#L98), which treats the entire prompt as a HumanMessage.
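A more faithful role assignment — questions and observations as human turns, thoughts and actions as AI turns — can be sketched in plain Python (a toy transcript splitter, not the langchain message classes):

```python
HUMAN_PREFIXES = ("Question:", "Observation:")
AI_PREFIXES = ("Thought:", "Action:")


def split_transcript(transcript: str):
    """Assign a chat role to each line of a ReAct-style transcript."""
    messages = []
    for line in transcript.strip().splitlines():
        if line.startswith(HUMAN_PREFIXES):
            messages.append(("human", line))
        elif line.startswith(AI_PREFIXES):
            messages.append(("ai", line))
        else:  # continuation lines stick to the previous turn's role
            role = messages[-1][0] if messages else "human"
            messages.append((role, line))
    return messages


demo = """Question: Who wrote Hamlet?
Thought: I should search for Hamlet.
Action: Search[Hamlet]
Observation: Hamlet is a tragedy by William Shakespeare."""

for role, text in split_transcript(demo):
    print(role, "|", text)
```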
### Expected behavior
The issue here is that the prompt includes both human and AI messages. Ideally, the questions and observations should be human messages whereas the thoughts and actions should be AI messages. Treating the entire prompt as a human message may decrease prompt quality and lead to suboptimal performance. | Incorrect prompt formatting when initializing ReActDocstoreAgent with a chat model | https://api.github.com/repos/langchain-ai/langchain/issues/5390/comments | 2 | 2023-05-29T07:13:52Z | 2023-09-10T16:10:46Z | https://github.com/langchain-ai/langchain/issues/5390 | 1,730,216,162 | 5,390 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Issues #1026 and #5294 raise legitimate concerns with the security of executing LLM-generated code via the `exec()` function. Out of that discussion came a link to [this blog](https://til.simonwillison.net/webassembly/python-in-a-wasm-sandbox) that demonstrated a way to use Wasm to execute Python code in an isolated interpreter.
I have developed a library based on that code called [wasm-exec](https://github.com/Jflick58/wasm_exec) to provide a clean interface to this solution. I’d like to next make a PR in LangChain replacing instances of `exec()` with `wasm_exec()` but wanted to get some feedback on this solution before doing so.
Right now, the largest unknown is the extent of support for arbitrary packages, which may make running something like Pandas in the sandbox untenable until a solution is found. I believe I have a path forward on that (via installing when configuring the wasm runtime), but will need to continue to experiment.
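For context on why bare `exec()` is worrying, a small self-contained illustration — generated code runs with full interpreter privileges, so it can import modules and touch the filesystem unless it is isolated:

```python
import os

untrusted = "import os\nresult = os.getcwd()"  # imagine this came from an LLM

namespace = {}
exec(untrusted, namespace)  # nothing stops the import or the filesystem call
print(namespace["result"] == os.getcwd())  # True
```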
### Suggestion:
_No response_ | RFC: Use wasm-exec package to sandbox code execution by the Python REPL tool | https://api.github.com/repos/langchain-ai/langchain/issues/5388/comments | 3 | 2023-05-29T05:39:42Z | 2023-09-18T16:10:26Z | https://github.com/langchain-ai/langchain/issues/5388 | 1,730,105,141 | 5,388 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version: NA
Python 3.10.9
WSL2
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. pip install langchain[all]
2. Wait
3. Observe:
```
Preparing metadata (setup.py) ... done
Downloading openai-0.0.2.tar.gz (741 bytes)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [6 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/tmp/pip-install-9c_7ozkg/openai_9e8b55b2ec17406d8b64d964c29099f3/setup.py", line 6, in <module>
raise RuntimeError(
RuntimeError: This package is a placeholder package on the public PyPI instance, and is not the correct version to install. If you are having trouble figuring out the correct package to install, please contact us.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
```
### Expected behavior
Installation complete. | Failure when running pip install langchain[all] | https://api.github.com/repos/langchain-ai/langchain/issues/5387/comments | 14 | 2023-05-29T05:23:41Z | 2024-03-20T11:25:41Z | https://github.com/langchain-ai/langchain/issues/5387 | 1,730,087,228 | 5,387 |
[
"langchain-ai",
"langchain"
] | ### System Info
```
File "d:\langchain\pdfqa-app.py", line 46, in _upload_data
    Pinecone.from_texts(self.doc_chunk,embeddings,batch_size=16,index_name=self.index_name)
  File "E:\anaconda\envs\langchain\lib\site-packages\langchain\vectorstores\pinecone.py", line 232, in from_texts
    embeds = embedding.embed_documents(lines_batch)
  File "E:\anaconda\envs\langchain\lib\site-packages\langchain\embeddings\openai.py", line 297, in embed_documents
    return self._get_len_safe_embeddings(texts, engine=self.deployment)
  File "E:\anaconda\envs\langchain\lib\site-packages\langchain\embeddings\openai.py", line 221, in _get_len_safe_embeddings
    token = encoding.encode(
  File "E:\anaconda\envs\langchain\lib\site-packages\tiktoken\core.py", line 117, in encode
    if match := _special_token_regex(disallowed_special).search(text):
TypeError: expected string or buffer
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
def _load_docs(self):
    loader = PyPDFLoader("D:\langchain\data_source\\1706.03762.pdf")
    self.doc = loader.load()

def _split_docs(self):
    text_splitter = RecursiveCharacterTextSplitter(
        chunk_size=self.chunk_size,
        chunk_overlap=self.chunk_overlap,
        separators=["\n\n", ""],
    )
    self.doc_chunk = text_splitter.split_documents(self.doc)

def _upload_data(self):
    embeddings = OpenAIEmbeddings()
    list_of_index = pinecone.list_indexes()
    if self.index_name in list_of_index:
        Pinecone.from_texts(self.doc_chunk, embeddings, batch_size=16, index_name=self.index_name)
    else:
        pinecone.create_index(self.index_name, dimension=1024)  # for OpenAI
        Pinecone.from_texts(self.doc_chunk, embeddings, batch_size=16, index_name=self.index_name)

def dataloader(self):
    self._load_docs()
    self._split_docs()
    self._upload_data()
```
### Expected behavior
Please help with the solution | Running into this error while creating embeddings out of pdf file | https://api.github.com/repos/langchain-ai/langchain/issues/5384/comments | 2 | 2023-05-29T01:10:32Z | 2024-02-15T07:44:39Z | https://github.com/langchain-ai/langchain/issues/5384 | 1,729,806,672 | 5,384 |
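The traceback above is consistent with `Pinecone.from_texts` receiving `Document` objects rather than plain strings — tiktoken can only encode `str`. A minimal, library-free sketch of the likely fix; the `Document` class below is a stand-in for `langchain.schema.Document`, not the real import:

```python
from dataclasses import dataclass

@dataclass
class Document:          # stand-in for langchain.schema.Document
    page_content: str
    metadata: dict

chunks = [Document("chunk one", {}), Document("chunk two", {})]

# from_texts expects plain strings, so unwrap .page_content first
# (alternatively, Pinecone.from_documents accepts Document objects directly).
texts = [c.page_content for c in chunks]
```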
[
"langchain-ai",
"langchain"
] | ### System Info
BaseConversationalRetrievalChain._call() and ._acall() eventually fail in LLMChain.prep_prompts() when referencing input_list[0]. This raises an IndexError when input_list is empty, which happens when BaseConversationalRetrievalChain._get_docs() retrieves no Documents during _call(). There should be a check for this.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Set up a QA call to ConversationalRetrievalChain.from_llm() on a retriever to which a filter has been applied that will result in no Documents being matched.
### Expected behavior
If 0 docs are found, don't bother running the rest of the process. | BaseConversationalRetrievalChain error on 0 docs found | https://api.github.com/repos/langchain-ai/langchain/issues/5378/comments | 1 | 2023-05-28T23:03:16Z | 2023-09-10T16:10:51Z | https://github.com/langchain-ai/langchain/issues/5378 | 1,729,730,030 | 5,378 |
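A minimal sketch of the guard this report asks for — short-circuit before any prompt construction when retrieval returns nothing. The function name and fallback text below are hypothetical illustrations, not the chain's actual API:

```python
def answer_with_docs(question, docs, llm_call):
    # Proposed guard: skip prompt construction entirely when no documents
    # matched, instead of indexing into an empty input list.
    if not docs:
        return "No relevant documents were found."
    context = "\n\n".join(docs)
    return llm_call(f"Context:\n{context}\n\nQuestion: {question}")

result = answer_with_docs("What is X?", [], lambda prompt: "llm output")
```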
[
"langchain-ai",
"langchain"
] | Windows 11, Anaconda, python 3.9.16 LanchChain 0.0.183
My goal is to extend the tools used by baby AGI, more specifically to use At least the basic WriteFileTool() and ReadFileTool(). they use two inputs though, so I cannot stick with the vanilla ZeroShotAgent. forgive my poor understanding of Python but I have tried to use AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION and replace it in the code. Agent initialization is the only modification I've made to the code in the link below, though it throws an error. Could anyone kindly help updating the code in order to be able to leverage multiple input tools or provide guidance or resources?
https://python.langchain.com/en/latest/use_cases/agents/baby_agi_with_agent.html?highlight=babyagi%20with%20tools
```python
from langchain.agents import AgentType
from langchain.agents import initialize_agent
@classmethod
def from_llm(
cls, llm: BaseLLM, vectorstore: VectorStore, verbose: bool = False, **kwargs
) -> "BabyAGI":
"""Initialize the BabyAGI Controller."""
task_creation_chain = TaskCreationChain.from_llm(llm, verbose=verbose)
task_prioritization_chain = TaskPrioritizationChain.from_llm(
llm, verbose=verbose
)
llm_chain = LLMChain(llm=llm, prompt=prompt)
tool_names = [tool.name for tool in tools]
agent = initialize_agent(agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,llm=llm_chain, tools=tool_names)
agent_executor = AgentExecutor.from_agent_and_tools(
agent=agent, tools=tools, verbose=True
)
return cls(
task_creation_chain=task_creation_chain,
task_prioritization_chain=task_prioritization_chain,
execution_chain=agent_executor,
vectorstore=vectorstore,
**kwargs,
)
```
I get the error
```python
File ~\anaconda3\envs\aagi\lib\site-packages\langchain\agents\structured_chat\base.py:83, in create_prompt
    args_schema = re.sub("}", "}}}}", re.sub("{", "{{{{", str(tool.args)))

AttributeError: 'str' object has no attribute 'args'
```
### Suggestion:
_No response_ | error thrown when trying to implement BabyAGI with tools with multiple inputs (requiring Structured Tool Chat Agent) | https://api.github.com/repos/langchain-ai/langchain/issues/5375/comments | 1 | 2023-05-28T20:52:13Z | 2023-09-10T16:10:56Z | https://github.com/langchain-ai/langchain/issues/5375 | 1,729,668,382 | 5,375 |
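The `AttributeError: 'str' object has no attribute 'args'` in the report above is consistent with passing tool *names* (plain strings) to `initialize_agent` where `Tool` objects are expected — the structured-chat prompt builder reads `tool.args`. A framework-free illustration of that failure mode (the `FakeTool` class is a hypothetical stand-in for a LangChain tool):

```python
class FakeTool:                       # stand-in for a LangChain Tool
    name = "write_file"
    args = {"file_path": "str", "text": "str"}

tools = [FakeTool()]
tool_names = [t.name for t in tools]  # list of plain strings

# Passing tool_names reproduces the failure: strings have no .args.
string_has_args = hasattr(tool_names[0], "args")
# Passing the tool objects themselves is what the prompt builder needs.
tool_has_args = hasattr(tools[0], "args")
```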
[
"langchain-ai",
"langchain"
] | ### Feature request
Creation of a chain or wrapper that uses two LLMs with different models to force correct formatting:
- do the "difficult" work with a model like GPT-4: output format is free text.
- have a second LLM (davinci-003 or a specialized model for this purpose) that is good at following formatting instructions to convert the output of the first model into (say) JSON.
### Motivation
When a model is set up to be creative, or the use case demands that the output is consistently formatted correctly, we are often out of luck, and the output breaks an app. This proposed use of two models will most likely increase the accuracy of the output a lot.
### Your contribution
If maintainers think this is a good idea, please add your input, and I will be happy to provide a PR. | Formatter chain | https://api.github.com/repos/langchain-ai/langchain/issues/5374/comments | 4 | 2023-05-28T19:35:23Z | 2023-11-05T16:07:09Z | https://github.com/langchain-ai/langchain/issues/5374 | 1,729,634,384 | 5,374 |
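A minimal sketch of the proposed two-stage pipeline, with both model calls stubbed out as plain callables — the function names, the conversion prompt, and the stub outputs are all hypothetical, purely to show the shape of the chain:

```python
def formatter_chain(prompt, creative_llm, formatter_llm):
    # Stage 1: a high-capability model answers with free text.
    free_text = creative_llm(prompt)
    # Stage 2: a cheaper, instruction-following model coerces it into JSON.
    return formatter_llm(f"Rewrite the following as JSON:\n{free_text}")

out = formatter_chain(
    "Name a planet.",
    lambda p: "Mars is a planet.",        # stub for the creative model
    lambda p: '{"answer": "Mars"}',       # stub for the formatting model
)
```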
[
"langchain-ai",
"langchain"
I cannot figure out why this is happening:
```
File "c:\Users\Yaseen\Documents\AVA\main.py", line 2, in <module>
    import langchain
  File "C:\Users\Yaseen\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\__init__.py", line 6, in <module>
    from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
  File "C:\Users\Yaseen\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\agents\__init__.py", line 2, in <module>
    from langchain.agents.agent import (
  File "C:\Users\Yaseen\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\agents\agent.py", line 16, in <module>
    from langchain.agents.tools import InvalidTool
  File "C:\Users\Yaseen\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\tools\__init__.py", line 45, in <module>
    from langchain.tools.powerbi.tool import (
  File "C:\Users\Yaseen\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\tools\powerbi\tool.py", line 10, in <module>
    from langchain.chains.llm import LLMChain
  File "C:\Users\Yaseen\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\__init__.py", line 18, in <module>
    from langchain.chains.llm_math.base import LLMMathChain
  File "C:\Users\Yaseen\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\llm_math\base.py", line 9, in <module>
    import numexpr
  File "C:\Users\Yaseen\AppData\Local\Programs\Python\Python311\Lib\site-packages\numexpr\__init__.py", line 24, in <module>
    from numexpr.interpreter import MAX_THREADS, use_vml, __BLOCK_SIZE1__
ImportError: DLL load failed while importing interpreter: The specified module could not be found.
```
[
"langchain-ai",
"langchain"
] | # This code:
```python
from langchain.experimental import AutoGPT
from langchain import HuggingFaceHub

repo_id = "google/flan-t5-xl"  # See https://huggingface.co/models?pipeline_tag=text-generation&sort=downloads for some other options

agent = AutoGPT.from_llm_and_tools(
    ai_name="Tom",
    ai_role="Assistant",
    tools=tools,
    llm=HuggingFaceHub(repo_id=repo_id, model_kwargs={"temperature": 0, "max_length": 64}),
    memory=vectorstore.as_retriever(),
)
agent.chain.verbose = True

agent.run(["write a weather report for SF today"])
```
# outputs the error:
AssertionError Traceback (most recent call last)
Cell In[21], line 1
----> 1 agent.run(["write a weather report for SF today"])
File ~\anaconda3\envs\langchain\Lib\site-packages\langchain\experimental\autonomous_agents\autogpt\agent.py:91, in AutoGPT.run(self, goals)
88 loop_count += 1
90 # Send message to AI, get response
---> 91 assistant_reply = self.chain.run(
92 goals=goals,
93 messages=self.full_message_history,
94 memory=self.memory,
95 user_input=user_input,
96 )
98 # Print Assistant thoughts
99 print(assistant_reply)
File ~\anaconda3\envs\langchain\Lib\site-packages\langchain\chains\base.py:239, in Chain.run(self, callbacks, *args, **kwargs)
236 return self(args[0], callbacks=callbacks)[self.output_keys[0]]
238 if kwargs and not args:
--> 239 return self(kwargs, callbacks=callbacks)[self.output_keys[0]]
241 if not kwargs and not args:
242 raise ValueError(
243 "`run` supported with either positional arguments or keyword arguments,"
244 " but none were provided."
245 )
File ~\anaconda3\envs\langchain\Lib\site-packages\langchain\chains\base.py:140, in Chain.__call__(self, inputs, return_only_outputs, callbacks)
138 except (KeyboardInterrupt, Exception) as e:
139 run_manager.on_chain_error(e)
--> 140 raise e
141 run_manager.on_chain_end(outputs)
142 return self.prep_outputs(inputs, outputs, return_only_outputs)
File ~\anaconda3\envs\langchain\Lib\site-packages\langchain\chains\base.py:134, in Chain.__call__(self, inputs, return_only_outputs, callbacks)
128 run_manager = callback_manager.on_chain_start(
129 {"name": self.__class__.__name__},
130 inputs,
131 )
132 try:
133 outputs = (
--> 134 self._call(inputs, run_manager=run_manager)
135 if new_arg_supported
136 else self._call(inputs)
137 )
138 except (KeyboardInterrupt, Exception) as e:
139 run_manager.on_chain_error(e)
File ~\anaconda3\envs\langchain\Lib\site-packages\langchain\chains\llm.py:69, in LLMChain._call(self, inputs, run_manager)
64 def _call(
65 self,
66 inputs: Dict[str, Any],
67 run_manager: Optional[CallbackManagerForChainRun] = None,
68 ) -> Dict[str, str]:
---> 69 response = self.generate([inputs], run_manager=run_manager)
70 return self.create_outputs(response)[0]
File ~\anaconda3\envs\langchain\Lib\site-packages\langchain\chains\llm.py:78, in LLMChain.generate(self, input_list, run_manager)
72 def generate(
73 self,
74 input_list: List[Dict[str, Any]],
75 run_manager: Optional[CallbackManagerForChainRun] = None,
76 ) -> LLMResult:
77 """Generate LLM result from inputs."""
---> 78 prompts, stop = self.prep_prompts(input_list, run_manager=run_manager)
79 return self.llm.generate_prompt(
80 prompts, stop, callbacks=run_manager.get_child() if run_manager else None
81 )
File ~\anaconda3\envs\langchain\Lib\site-packages\langchain\chains\llm.py:106, in LLMChain.prep_prompts(self, input_list, run_manager)
104 for inputs in input_list:
105 selected_inputs = {k: inputs[k] for k in self.prompt.input_variables}
--> 106 prompt = self.prompt.format_prompt(**selected_inputs)
107 _colored_text = get_colored_text(prompt.to_string(), "green")
108 _text = "Prompt after formatting:\n" + _colored_text
File ~\anaconda3\envs\langchain\Lib\site-packages\langchain\prompts\chat.py:144, in BaseChatPromptTemplate.format_prompt(self, **kwargs)
143 def format_prompt(self, **kwargs: Any) -> PromptValue:
--> 144 messages = self.format_messages(**kwargs)
145 return ChatPromptValue(messages=messages)
File ~\anaconda3\envs\langchain\Lib\site-packages\langchain\experimental\autonomous_agents\autogpt\prompt.py:51, in AutoGPTPrompt.format_messages(self, **kwargs)
49 memory: VectorStoreRetriever = kwargs["memory"]
50 previous_messages = kwargs["messages"]
---> 51 relevant_docs = memory.get_relevant_documents(str(previous_messages[-10:]))
52 relevant_memory = [d.page_content for d in relevant_docs]
53 relevant_memory_tokens = sum(
54 [self.token_counter(doc) for doc in relevant_memory]
55 )
File ~\anaconda3\envs\langchain\Lib\site-packages\langchain\vectorstores\base.py:377, in VectorStoreRetriever.get_relevant_documents(self, query)
375 def get_relevant_documents(self, query: str) -> List[Document]:
376 if self.search_type == "similarity":
--> 377 docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
378 elif self.search_type == "similarity_score_threshold":
379 docs_and_similarities = (
380 self.vectorstore.similarity_search_with_relevance_scores(
381 query, **self.search_kwargs
382 )
383 )
File ~\anaconda3\envs\langchain\Lib\site-packages\langchain\vectorstores\faiss.py:255, in FAISS.similarity_search(self, query, k, **kwargs)
243 def similarity_search(
244 self, query: str, k: int = 4, **kwargs: Any
245 ) -> List[Document]:
246 """Return docs most similar to query.
247
248 Args:
(...)
253 List of Documents most similar to the query.
254 """
--> 255 docs_and_scores = self.similarity_search_with_score(query, k)
256 return [doc for doc, _ in docs_and_scores]
File ~\anaconda3\envs\langchain\Lib\site-packages\langchain\vectorstores\faiss.py:225, in FAISS.similarity_search_with_score(self, query, k)
215 """Return docs most similar to query.
216
217 Args:
(...)
222 List of Documents most similar to the query and score for each
223 """
224 embedding = self.embedding_function(query)
--> 225 docs = self.similarity_search_with_score_by_vector(embedding, k)
226 return docs
File ~\anaconda3\envs\langchain\Lib\site-packages\langchain\vectorstores\faiss.py:199, in FAISS.similarity_search_with_score_by_vector(self, embedding, k)
197 if self._normalize_L2:
198 faiss.normalize_L2(vector)
--> 199 scores, indices = self.index.search(vector, k)
200 docs = []
201 for j, i in enumerate(indices[0]):
File ~\anaconda3\envs\langchain\Lib\site-packages\faiss\class_wrappers.py:329, in handle_Index.<locals>.replacement_search(self, x, k, params, D, I)
327 n, d = x.shape
328 x = np.ascontiguousarray(x, dtype='float32')
--> 329 assert d == self.d
331 assert k > 0
333 if D is None:
AssertionError
# How can I resolve this behaviour? | AssertionError when using AutoGPT with Huggingface | https://api.github.com/repos/langchain-ai/langchain/issues/5365/comments | 4 | 2023-05-28T16:16:57Z | 2023-09-10T03:09:29Z | https://github.com/langchain-ai/langchain/issues/5365 | 1,729,549,900 | 5,365 |
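The final `assert d == self.d` points at an embedding-dimension mismatch: the FAISS vectorstore was likely built with one embedding model (e.g. 1536-dim OpenAI vectors) while the AutoGPT run now queries it with another (e.g. a smaller Hugging Face model). A library-free sketch of the check that is failing — the specific dimensions below are illustrative assumptions:

```python
# Hypothetical dimensions: index built from one embedding model,
# queried with vectors from a different, smaller one.
index_dim = 1536
query_vector = [0.0] * 768

# FAISS's `assert d == self.d` is exactly this comparison failing.
mismatch = len(query_vector) != index_dim
# Likely fix: rebuild the vectorstore with the same embedding model
# that AutoGPT will use at query time.
```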
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
The example in https://python.langchain.com/en/latest/reference/modules/embeddings.html
```
from langchain.embeddings import HuggingFaceEmbeddings
model_name = "sentence-transformers/all-mpnet-base-v2"
model_kwargs = {'device': 'cpu'}
hf = HuggingFaceEmbeddings(model_name=model_name, model_kwargs=model_kwargs)
```
does not work.
I get an error:
> pydantic.error_wrappers.ValidationError: 1 validation error for HuggingFaceEmbeddings
model_kwargs
extra fields not permitted (type=value_error.extra)
### Idea or request for content:
fixed example | DOC: the example on setting model_kwrgs in HuggingFaceEmbeddings does not work | https://api.github.com/repos/langchain-ai/langchain/issues/5363/comments | 5 | 2023-05-28T09:35:29Z | 2023-09-18T16:10:30Z | https://github.com/langchain-ai/langchain/issues/5363 | 1,729,341,841 | 5,363 |
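A framework-free sketch of why pydantic reports "extra fields not permitted": if the installed release's model does not declare a `model_kwargs` field, the extra keyword is rejected at construction time. The allowed-field set below is an assumption used purely for illustration; upgrading langchain so the field exists is the usual fix:

```python
ALLOWED_FIELDS = {"model_name"}   # assumption: release predates model_kwargs

def validate_init(**kwargs):
    # Mimics pydantic's Extra.forbid behaviour on unknown keywords.
    extra = set(kwargs) - ALLOWED_FIELDS
    if extra:
        raise ValueError(f"extra fields not permitted: {sorted(extra)}")
    return kwargs

validate_init(model_name="sentence-transformers/all-mpnet-base-v2")  # accepted
try:
    validate_init(model_name="x", model_kwargs={"device": "cpu"})
    rejected = False
except ValueError:
    rejected = True
```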
[
"langchain-ai",
"langchain"
] | ### System Info
Name: langchain, Version: 0.0.180
Name: openai, Version: 0.27.7
macOS Mojave 10.14.6
### Who can help?
@vowelparrot
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps:
1. Do _not_ load open ai key into env with the intention of wanting to pass it as a parameter when instantiating the llm
```
from dotenv import dotenv_values
openai_api_key = dotenv_values('.env')['OPENAI_API_KEY']
```
2. Load the planner:
```
from langchain.llms.openai import OpenAI
from langchain.agents.agent_toolkits.openapi import planner
```
### Expected behavior
A validation error should not be raised during the importing of the module.
We should be able to pass the open api key as an argument.
That is, the following should work:
```
from langchain.llms.openai import OpenAI
from langchain.agents.agent_toolkits.openapi import planner
llm = OpenAI(model_name="gpt-4", temperature=0.0, openai_api_key=openai_api_key)
```
| Validation Error importing OpenAPI planner when OpenAI credentials not in environment | https://api.github.com/repos/langchain-ai/langchain/issues/5361/comments | 1 | 2023-05-28T08:18:12Z | 2023-05-29T13:22:37Z | https://github.com/langchain-ai/langchain/issues/5361 | 1,729,290,674 | 5,361 |
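Until import-time validation is relaxed, one common workaround is to place the key in the environment before importing the planner module. A hedged sketch — the key value is a placeholder, and note that the OpenAI constructor parameter is spelled `openai_api_key`:

```python
import os

# Workaround sketch: make the key visible to any import-time validation
# before the planner module is imported.
os.environ.setdefault("OPENAI_API_KEY", "sk-placeholder")

# With the variable set, importing
# `langchain.agents.agent_toolkits.openapi.planner` should no longer
# raise; when constructing the model explicitly, use `openai_api_key=...`.
```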