Dataset columns (name, type, min/max length or value):

| column | type | min | max |
|---|---|---|---|
| issue_owner_repo | list | 2 | 2 |
| issue_body | string | 0 | 261k |
| issue_title | string | 1 | 925 |
| issue_comments_url | string | 56 | 81 |
| issue_comments_count | int64 | 0 | 2.5k |
| issue_created_at | string | 20 | 20 |
| issue_updated_at | string | 20 | 20 |
| issue_html_url | string | 37 | 62 |
| issue_github_id | int64 | 387k | 2.46B |
| issue_number | int64 | 1 | 127k |
[ "langchain-ai", "langchain" ]
While going through the base_language.py code, in _get_num_tokens_default_method, the code makes an instance of the GPT-2 tokenizer while the comment says "# tokenize the text using the GPT-3 tokenizer". This needs to be corrected to GPT-2:

```python
# create a GPT-2 tokenizer instance
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

# tokenize the text using the GPT-3 tokenizer
tokenized_text = tokenizer.tokenize(text)
```
minor issue in code( base_language.py) comments
https://api.github.com/repos/langchain-ai/langchain/issues/4082/comments
3
2023-05-04T04:03:45Z
2023-05-04T06:39:50Z
https://github.com/langchain-ai/langchain/issues/4082
1,695,224,121
4,082
[ "langchain-ai", "langchain" ]
I think this is killing me. Literally! Why is it that the `ConversationalRetrievalChain` rephrases every question I ask it? Here is an example:

**Example:**

**Human**: `Hi`
**AI**: `Hello! How may I assist you today?`
**Human**: `What activities do you recommend?`
**AI Rephrasing Human Question**: `What are your top three activity recommendations?`
**AI Response**: `As an AI language model, I don't have personal preferences. However, based on the information provided, the top three choices are running, swimming, and hiking. Do you need any more info on these activities?`
**Human**: `Sure`
**AI Rephrasing Human Question**: `Which of those activities is your personal favorite?`
**AI Response**: `As an AI language model, I don't have the capability to have a preference. However, I can provide you with more information about the activities if you have any questions.`

As you can see here, the last message the human sends is `sure`. However, the rephrasing is just destroying the flow of this conversation. Can we disable this rephrasing?

------------------------------------------------------------------------------------------------------------

**More Verbose:**

```
> Entering new StuffDocumentsChain chain...
> Entering new LLMChain chain...
Prompt after formatting:
System: Use the following pieces of context to answer the users question.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
----------------
> Finished chain.
answer1: Hello! How may I assist you today?
> Entering new LLMChain chain...
Prompt after formatting:
Given the following conversation and a follow up question.
Chat History:
Human: Hi
Assistant: Hello! How may I assist you today?
Follow Up Input: What activities do you recommend?
Standalone question:
> Finished chain.
> Entering new StuffDocumentsChain chain...
> Entering new LLMChain chain...
Prompt after formatting:
System: Use the following pieces of context to answer the users question.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
----------------
Human: What are your top three activity recommendations?
> Finished chain.
> Finished chain.
time: 5.121097803115845
answer2: As an AI language model, I don't have personal preferences. However, based on the information provided, the top three choices are running, swimming, and hiking. Do you need any more info on these activities?
> Entering new LLMChain chain...
Prompt after formatting:
Given the following conversation and a follow up question.
Chat History:
Human: Hi
Assistant: Hello! How may I assist you today?
Human: What activities do you recommend?
Assistant: As an AI language model, I don't have personal preferences. However, based on the information provided, the top three choices are running, swimming, and hiking. Do you need any more info on these activities?
Follow Up Input: Sure
Standalone question:
> Finished chain.
> Entering new StuffDocumentsChain chain...
> Entering new LLMChain chain...
Prompt after formatting:
System: Use the following pieces of context to answer the users question.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
----------------
Human: Which of those activities is your personal favorite?
> Finished chain.
> Finished chain.
answer3: As an AI language model, I don't have the capability to have a preference. However, I can provide you with more information about the activities if you have any questions.
```
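Depending on the version, `ConversationalRetrievalChain` may expose a flag to skip the condense step; independent of that, the desired behaviour can be sketched in plain Python. The `maybe_condense` helper and its `rephrase_question` parameter below are hypothetical illustrations, not the chain's actual API:

```python
def maybe_condense(question, chat_history, rephrase_question, condense):
    # hypothetical helper: only rephrase when rephrasing is enabled and
    # there is history to condense; otherwise pass the user's words through
    if not rephrase_question or not chat_history:
        return question
    return condense(chat_history, question)

# a stub standing in for the LLM-backed "standalone question" call
stub = lambda history, q: f"standalone({q})"

print(maybe_condense("Sure", [("Hi", "Hello!")], False, stub))  # Sure
print(maybe_condense("Sure", [("Hi", "Hello!")], True, stub))   # standalone(Sure)
```

With rephrasing disabled, "Sure" reaches the answering chain verbatim, which is what the conversation above needs.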
Why does ConversationalRetrievalChain rephrase every human question?
https://api.github.com/repos/langchain-ai/langchain/issues/4076/comments
21
2023-05-04T01:37:10Z
2024-07-01T07:34:46Z
https://github.com/langchain-ai/langchain/issues/4076
1,695,085,954
4,076
[ "langchain-ai", "langchain" ]
I would like to give a simple suggestion. There could be support or some way to add custom models instead of just using the OpenAI model. There are projects that use third-party platforms that use the Chat-GPT model and can be accessed via API. The reason for this is the cost of the API from the platforms offered, especially from OpenAI. This option would provide a cost-free and more accessible path. Here is a list of some projects with this theme: [OpenGPT](https://github.com/uesleibros/OpenGPT) [GPT4FREE](https://github.com/xtekky/gpt4free)
Model Limitations
https://api.github.com/repos/langchain-ai/langchain/issues/4075/comments
3
2023-05-03T23:39:58Z
2023-09-15T16:15:45Z
https://github.com/langchain-ai/langchain/issues/4075
1,694,988,865
4,075
[ "langchain-ai", "langchain" ]
```python
import os
from langchain.llms import OpenAI, Anthropic
from langchain.chat_models import ChatOpenAI
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.schema import HumanMessage

llm = OpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], temperature=0)
resp = llm("Write me a song about sparkling water.")
```

When I executed the above code, I got the error "Object of type StreamingStdOutCallbackHandler is not JSON serializable". Did I make something wrong, or is there another issue here?
Object of type StreamingStdOutCallbackHandler is not JSON serializable
https://api.github.com/repos/langchain-ai/langchain/issues/4070/comments
20
2023-05-03T21:36:02Z
2023-10-24T08:30:07Z
https://github.com/langchain-ai/langchain/issues/4070
1,694,879,163
4,070
[ "langchain-ai", "langchain" ]
As of now if a hallucinated link is handed to the `ClickTool`, it will wait for 30 seconds and then fail. Instead it should generate a message that is added to the prompt along the lines of: "I was unable to find that element. Could you suggest another approach?" In addition it might be handed an element that is invisible (or for which there are multiple matching elements the first of which is invisible). Similarly, the Tool will wait until the element becomes visible, which may never happen (e.g. for mobile pages where some of the nav buttons are hidden). There are a few options here, one is to use `force=True` when clicking. Other options are to filter to visible elements using one of the approaches in: https://github.com/microsoft/playwright/issues/2370 or https://www.programsbuzz.com/article/playwright-selecting-visible-elements.
ClickTool Should Better Handle Hallucinated and Invisible Links
https://api.github.com/repos/langchain-ai/langchain/issues/4066/comments
3
2023-05-03T20:43:20Z
2023-09-15T16:15:51Z
https://github.com/langchain-ai/langchain/issues/4066
1,694,787,690
4,066
[ "langchain-ai", "langchain" ]
Hi, I had a streamlit app that was working perfectly for a while. Starting today, however, I am getting the following errors:

```
ImportError: cannot import name 'BaseLanguageModel' from 'langchain.schema' (C:\Users\jvineburgh\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\schema.py)

Traceback:
File "C:\Users\jvineburgh\AppData\Local\Programs\Python\Python311\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 565, in _run_script
    exec(code, module.__dict__)
File "C:\Users\jvineburgh\OneDrive - Clarus Corporation\Desktop\AWS\newui11.py", line 3, in <module>
    from gpt_index import SimpleDirectoryReader, GPTListIndex, GPTSimpleVectorIndex, LLMPredictor, PromptHelper
File "C:\Users\jvineburgh\AppData\Local\Programs\Python\Python311\Lib\site-packages\gpt_index\__init__.py", line 18, in <module>
    from gpt_index.indices.common.struct_store.base import SQLDocumentContextBuilder
File "C:\Users\jvineburgh\AppData\Local\Programs\Python\Python311\Lib\site-packages\gpt_index\indices\__init__.py", line 4, in <module>
    from gpt_index.indices.keyword_table.base import GPTKeywordTableIndex
File "C:\Users\jvineburgh\AppData\Local\Programs\Python\Python311\Lib\site-packages\gpt_index\indices\keyword_table\__init__.py", line 4, in <module>
    from gpt_index.indices.keyword_table.base import GPTKeywordTableIndex
File "C:\Users\jvineburgh\AppData\Local\Programs\Python\Python311\Lib\site-packages\gpt_index\indices\keyword_table\base.py", line 16, in <module>
    from gpt_index.indices.base import DOCUMENTS_INPUT, BaseGPTIndex
File "C:\Users\jvineburgh\AppData\Local\Programs\Python\Python311\Lib\site-packages\gpt_index\indices\base.py", line 23, in <module>
    from gpt_index.indices.prompt_helper import PromptHelper
File "C:\Users\jvineburgh\AppData\Local\Programs\Python\Python311\Lib\site-packages\gpt_index\indices\prompt_helper.py", line 12, in <module>
    from gpt_index.langchain_helpers.chain_wrapper import LLMPredictor
File "C:\Users\jvineburgh\AppData\Local\Programs\Python\Python311\Lib\site-packages\gpt_index\langchain_helpers\chain_wrapper.py", line 13, in <module>
    from gpt_index.prompts.base import Prompt
File "C:\Users\jvineburgh\AppData\Local\Programs\Python\Python311\Lib\site-packages\gpt_index\prompts\__init__.py", line 3, in <module>
    from gpt_index.prompts.base import Prompt
File "C:\Users\jvineburgh\AppData\Local\Programs\Python\Python311\Lib\site-packages\gpt_index\prompts\base.py", line 9, in <module>
    from langchain.schema import BaseLanguageModel
```

Any idea how to fix? Thanks!
Was Running Fine and now getting errors
https://api.github.com/repos/langchain-ai/langchain/issues/4064/comments
20
2023-05-03T20:29:30Z
2023-08-16T10:41:10Z
https://github.com/langchain-ai/langchain/issues/4064
1,694,761,634
4,064
[ "langchain-ai", "langchain" ]
When utilizing [a Structured Chat Agent](https://github.com/hwchase17/langchain/pull/3912), GPT-4 will often send a direct response when it should be crafting a JSON blob.

As an example, I prompted a Playwright-driven agent with `Can you summarize this site: ramp.com?` The first gpt-4 agent response is:

``````
ASSISTANT
Action:
```
{
  "action": "navigate_browser",
  "action_input": "https://ramp.com"
}
```
``````

And, after navigating to the page, gpt-4 responds:

```
ASSISTANT
I need to extract the text from the website to summarize it.
```

Whereas gpt-3.5-turbo responds with:

``````
ASSISTANT
To summarize the site ramp.com, I can extract the text on the webpage using the following tool:
Action:
```
{
  "action": "extract_text",
  "action_input": {}
}
```
``````

In other words, GPT-4 seems to take `Respond directly if appropriate.` from the system prompt too loosely.
For Structured Chat Agent, GPT-4 often responds directly.
https://api.github.com/repos/langchain-ai/langchain/issues/4059/comments
10
2023-05-03T20:24:33Z
2023-10-06T16:08:45Z
https://github.com/langchain-ai/langchain/issues/4059
1,694,752,528
4,059
[ "langchain-ai", "langchain" ]
When using `PineconeStore.fromExistingIndex` with JS there is a way to add a Pinecone filter to the store:

```
const vectorStore = await PineconeStore.fromExistingIndex(
  new OpenAIEmbeddings(),
  { pineconeIndex, filter, namespace: NAMESPACE }
);
```

However, when using `Pinecone.from_existing_index` with Python, there is no way to add the filter to the store. Therefore, I cannot use a filter in `ConversationalRetrievalChain`.
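For illustration, a metadata filter like Pinecone's restricts which vectors are even considered before similarity ranking. A minimal in-memory sketch of that behaviour in plain Python (not the Pinecone client or langchain's API):

```python
def apply_filter(documents, metadata_filter):
    # keep only documents whose metadata matches every key/value in the
    # filter, mimicking what the vector store applies server-side
    return [
        doc for doc in documents
        if all(doc["metadata"].get(k) == v for k, v in metadata_filter.items())
    ]

docs = [
    {"text": "a", "metadata": {"source": "faq"}},
    {"text": "b", "metadata": {"source": "blog"}},
]
print(apply_filter(docs, {"source": "faq"}))  # only the "faq" document
```

Depending on the langchain version, a filter may also be passable via the retriever's search kwargs (e.g. `as_retriever(search_kwargs={"filter": ...})`), though that is worth verifying against the installed release.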
No Pinecone filter support to fromExistingIndex
https://api.github.com/repos/langchain-ai/langchain/issues/4057/comments
2
2023-05-03T19:15:39Z
2023-09-15T16:15:56Z
https://github.com/langchain-ai/langchain/issues/4057
1,694,651,795
4,057
[ "langchain-ai", "langchain" ]
Hi, Maybe this is already doable today, but I didn't manage to, so I'll try to ask it here as a feature request... I'd like to do a map-reduce chain, but with 2 different reductions happening in parallel (therefore, I want to have 2 final outputs). I managed to do it by doing the 2 reductions sequentially, and re-using the intermediary output of the map-reduce summarize chain. But this adds complexity and runtime, so ideally, I would like to be able to give an array of `combine_prompt` to the `load_summarize_chain` function. Is that feasible?
Multiple combine prompts when using Map-Reduce
https://api.github.com/repos/langchain-ai/langchain/issues/4054/comments
4
2023-05-03T17:34:03Z
2023-12-18T23:50:37Z
https://github.com/langchain-ai/langchain/issues/4054
1,694,506,476
4,054
[ "langchain-ai", "langchain" ]
We commonly used this pattern to create tools:

```py
from langchain.tools import Tool
from functools import partial

def foo(x, y):
    return y

Tool.from_function(
    func=partial(foo, "bar"),
    name="foo",
    description="foobar"
)
```

which as of 0.0.148 (I think) gives a pydantic error "Partial functions not yet supported in tools." We must use instead this format:

```py
from langchain.tools import Tool
from functools import partial

def foo(x, y):
    return y

Tool.from_function(
    func=lambda y: foo(x="bar", y=y),
    name="foo",
    description="foobar"
)
```

It would be nice to again support partials.
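Until partials are supported again, an ordinary wrapper function is behaviourally equivalent to `functools.partial` for this purpose while remaining a plain callable. A stdlib-only sketch:

```python
from functools import partial

def foo(x, y):
    return y

# partial binds x ahead of time, but produces a `partial` object,
# which some validation layers reject
bound = partial(foo, "bar")

# a plain function binding the same argument is an ordinary callable
def foo_bound(y):
    return foo("bar", y)

assert bound("baz") == foo_bound("baz") == "baz"
```

The wrapper form also gives the callable a real `__name__`, which `partial` objects lack and which some introspection-based validators expect.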
Tools with partials (Partial functions not yet supported in tools)
https://api.github.com/repos/langchain-ai/langchain/issues/4053/comments
2
2023-05-03T17:28:46Z
2023-09-10T16:23:16Z
https://github.com/langchain-ai/langchain/issues/4053
1,694,499,938
4,053
[ "langchain-ai", "langchain" ]
Hey, I tried to use the Arxiv loader but it seems that this type of document does not exist anymore. The documentation is still there https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/arxiv.html Do you have any details on that?
Arxiv loader does not work
https://api.github.com/repos/langchain-ai/langchain/issues/4052/comments
1
2023-05-03T16:23:51Z
2023-09-10T16:23:21Z
https://github.com/langchain-ai/langchain/issues/4052
1,694,404,521
4,052
[ "langchain-ai", "langchain" ]
I'm trying to use the Terminal tool to execute a command, which throws an error right now. This is my Python code:

```python
import os
import dotenv
from langchain.agents import load_tools, initialize_agent, AgentType
from langchain.llms import OpenAI

dotenv.load_dotenv()
assert 'OPENAI_API_KEY' in os.environ, "OpenAI API key not found!"

llm = OpenAI(temperature=0)
tools = load_tools(['terminal'], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)

prompt = "Run the ls command"
agent.run(prompt)
```

This is the error I'm facing:

```
Traceback (most recent call last):
  File "/Users/mukesh/code/RUDRA/langchain-poc/main.py", line 16, in <module>
    agent.run(prompt)
  File "/Users/mukesh/code/RUDRA/langchain-poc/venv/lib/python3.9/site-packages/langchain/chains/base.py", line 238, in run
    return self(args[0], callbacks=callbacks)[self.output_keys[0]]
  File "/Users/mukesh/code/RUDRA/langchain-poc/venv/lib/python3.9/site-packages/langchain/chains/base.py", line 142, in __call__
    raise e
  File "/Users/mukesh/code/RUDRA/langchain-poc/venv/lib/python3.9/site-packages/langchain/chains/base.py", line 136, in __call__
    self._call(inputs, run_manager=run_manager)
  File "/Users/mukesh/code/RUDRA/langchain-poc/venv/lib/python3.9/site-packages/langchain/agents/agent.py", line 905, in _call
    next_step_output = self._take_next_step(
  File "/Users/mukesh/code/RUDRA/langchain-poc/venv/lib/python3.9/site-packages/langchain/agents/agent.py", line 783, in _take_next_step
    observation = tool.run(
  File "/Users/mukesh/code/RUDRA/langchain-poc/venv/lib/python3.9/site-packages/langchain/tools/base.py", line 228, in run
    self._parse_input(tool_input)
  File "/Users/mukesh/code/RUDRA/langchain-poc/venv/lib/python3.9/site-packages/langchain/tools/base.py", line 170, in _parse_input
    input_args.validate({key_: tool_input})
  File "pydantic/main.py", line 711, in pydantic.main.BaseModel.validate
  File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for ShellInput
commands
  value is not a valid list (type=type_error.list)
```

This seems to work with other tools such as `llm-math`, but the Terminal tool is throwing me this error. Please help me fix this.
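The validation error says the Terminal tool's `commands` field expects a list, while the agent handed it a bare string. A general way around this kind of schema mismatch is to coerce a string into a single-element list before validation; the `coerce_commands` helper below is an illustrative sketch, not part of langchain:

```python
def coerce_commands(value):
    # wrap a bare string so a schema expecting List[str] validates;
    # pass existing sequences through unchanged
    if isinstance(value, str):
        return [value]
    return list(value)

assert coerce_commands("ls") == ["ls"]
assert coerce_commands(["ls", "pwd"]) == ["ls", "pwd"]
```

In pydantic terms this is exactly what a pre-validator on the `commands` field would do, which is the likely shape of an upstream fix.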
Pydantic error for Terminal tool
https://api.github.com/repos/langchain-ai/langchain/issues/4049/comments
6
2023-05-03T15:16:28Z
2023-09-19T16:13:28Z
https://github.com/langchain-ai/langchain/issues/4049
1,694,285,466
4,049
[ "langchain-ai", "langchain" ]
Hi! I am working with AgentExecutors, which are being created with the create_llama_chat_agent() function. The relevant part of the code looks like this:

```
llm = OpenAI(temperature=0, model_name="gpt-3.5-turbo")
return create_llama_chat_agent(
    toolkit,
    llm,
    memory=memory,
    verbose=True
)
```

It works perfectly, but I want to add system messages to every request. Can you help me with this? Every answer is appreciated; if you have an alternative for system messages, that could be useful as well.

My use-case is that I have a document describing how I should handle different types of users. I am trying to pass the type of the user in system messages so the AI can respond according to the system message and the description provided in context. Thanks in advance!
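Independent of any particular agent API, prepending a per-user system message to every request is just message assembly. The `build_messages` helper below is an illustrative sketch, not langchain's or llama-index's API:

```python
def build_messages(system_text, history, user_text):
    # the system message always comes first, then prior turns, then new input
    messages = [{"role": "system", "content": system_text}]
    messages.extend(history)
    messages.append({"role": "user", "content": user_text})
    return messages

msgs = build_messages("User type: admin", [], "List my options")
print(msgs[0])  # the system message leads every request
```

If the agent framework exposes a prompt prefix or a system-message slot, injecting the user-type text there achieves the same effect as this manual assembly.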
Question: System messages with AgentExecutors
https://api.github.com/repos/langchain-ai/langchain/issues/4048/comments
2
2023-05-03T14:50:11Z
2023-09-10T16:23:27Z
https://github.com/langchain-ai/langchain/issues/4048
1,694,233,383
4,048
[ "langchain-ai", "langchain" ]
I create an agent using:

```
zero_shot_agent = initialize_agent(
    agent="zero-shot-react-description",
    tools=tools,
    llm=llm,
    verbose=True,
    max_iterations=3
)
```

I now want to customize the content of the default prompt used by the agent. I wasn't able to locate any documented input parameters to initialize_agent() to do so. Is there a way to accomplish this?
How do I customize the prompt for the zero shot agent ?
https://api.github.com/repos/langchain-ai/langchain/issues/4044/comments
14
2023-05-03T13:32:32Z
2024-03-21T15:32:24Z
https://github.com/langchain-ai/langchain/issues/4044
1,694,087,287
4,044
[ "langchain-ai", "langchain" ]
How can I read the files in parallel to speed up the process https://github.com/hwchase17/langchain/blob/f3ec6d2449f3fe0660e4452bd4ce98c694dc0638/langchain/document_loaders/directory.py#L74
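As a sketch of the idea, a thread pool parallelizes the I/O-bound file reads (stdlib only; this is not `DirectoryLoader`'s actual implementation, just the pattern it could use):

```python
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def load_directory(root, max_workers=8):
    # read every file under root concurrently; file reading is I/O-bound,
    # so threads overlap the waits instead of reading one file at a time
    paths = [p for p in Path(root).rglob("*") if p.is_file()]
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return dict(zip(paths, pool.map(Path.read_text, paths)))
```

A `ProcessPoolExecutor` would be the analogous choice if per-file parsing (e.g. PDF extraction) were CPU-bound rather than I/O-bound.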
Make DirectoryLoader to read file in parallel to reduce file reading time
https://api.github.com/repos/langchain-ai/langchain/issues/4041/comments
3
2023-05-03T12:10:29Z
2023-09-19T16:13:33Z
https://github.com/langchain-ai/langchain/issues/4041
1,693,953,239
4,041
[ "langchain-ai", "langchain" ]
`from langchain.callbacks.manager import CallbackManager` generates:

`ModuleNotFoundError: No module named 'langchain.callbacks.manager'`

Source code: https://python.langchain.com/en/latest/modules/models/llms/integrations/llamacpp.html
can't import callback manager
https://api.github.com/repos/langchain-ai/langchain/issues/4040/comments
2
2023-05-03T11:48:53Z
2023-08-14T21:36:50Z
https://github.com/langchain-ai/langchain/issues/4040
1,693,917,485
4,040
[ "langchain-ai", "langchain" ]
Hello everyone! I am developing a simple chat-with-PDF bot. I am facing a strange error:

_raise ValueError(f"Got unknown type {message}") ValueError: Got unknown type w_

Code snippet is:

```
from langchain.document_loaders import UnstructuredPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Pinecone
from langchain.embeddings.openai import OpenAIEmbeddings
import pinecone
from langchain.chat_models import ChatOpenAI
from langchain.chains.question_answering import load_qa_chain

OPENAI_API_KEY = ''
PINECONE_API_KEY = ''
PINECONE_API_ENV = ''

loader = UnstructuredPDFLoader('./field-guide-to-data-science.pdf')
data = loader.load()

text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=300)
texts = text_splitter.split_documents(data)

embeddings = OpenAIEmbeddings(openai_api_key=OPENAI_API_KEY, model='text-embedding-ada-002')

pinecone.init(
    api_key=PINECONE_API_KEY,
    environment=PINECONE_API_ENV
)
index_name = ""
docsearch = Pinecone.from_texts([t.page_content for t in texts], embeddings, index_name=index_name)

llm = ChatOpenAI(model_name='gpt-3.5-turbo', temperature=0.3, openai_api_key=OPENAI_API_KEY)

from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationalRetrievalChain

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
rqa = ConversationalRetrievalChain.from_llm(llm, docsearch.as_retriever(), memory=memory)

def retrieve_answer(query, chat_history):
    memory.chat_memory.add_user_message(query)
    res = rqa({"question": query})
    retrieval_result = res["answer"]
    if "The given context does not provide" in retrieval_result:
        print(query)
        print([query])
        base_result = llm.generate([query])
        return base_result.generations[0][0].text
    else:
        return retrieval_result

messages = []
print("Welcome to the chatbot. Enter 'quit' to exit the program.")
while True:
    user_message = input("You: ")
    if user_message.lower() == "quit":
        break
    answer = retrieve_answer(user_message, messages)
    print("Assistant:", answer)
    memory.chat_memory.add_ai_message(answer)
    messages.append((user_message, answer))
```

This is really strange.
llm.generate([query]) return ValueError: Got unknown type w
https://api.github.com/repos/langchain-ai/langchain/issues/4037/comments
3
2023-05-03T10:12:48Z
2023-09-26T16:07:19Z
https://github.com/langchain-ai/langchain/issues/4037
1,693,777,877
4,037
[ "langchain-ai", "langchain" ]
Hi, I was looking into using a vectorstore to save some embeddings, and ChromaDB seemed good. The only issue for now is whether it is possible to persist into an Azure Blob Storage instead of the local disk. I am currently using Databricks, and the only solution I found was mounting the blob storage into the Databricks environment, but this wouldn't work once the code is moved out of Databricks. Thanks in advance 😄
Question: Possible to use ChromaDB with persistence into an Azure Blob Storage
https://api.github.com/repos/langchain-ai/langchain/issues/4036/comments
1
2023-05-03T10:03:33Z
2023-09-10T16:23:32Z
https://github.com/langchain-ai/langchain/issues/4036
1,693,763,541
4,036
[ "langchain-ai", "langchain" ]
Hi all, one major benefit of using the Summary method for context in Conversation is to save cost. But with increasing chat iterations, the number of tokens keeps increasing significantly. Is there any parameter by which I can set the max limit of the Summary?
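Depending on the version, langchain may also ship a buffered summary memory with a `max_token_limit` parameter (worth checking the installed release). The underlying idea of clamping a running summary can be sketched with a crude whitespace tokenizer standing in for the real one:

```python
def clamp_to_token_limit(summary, max_tokens):
    # crude whitespace "tokenization" as a stand-in for a real tokenizer;
    # keep only the most recent max_tokens tokens of the running summary
    tokens = summary.split()
    if len(tokens) <= max_tokens:
        return summary
    return " ".join(tokens[len(tokens) - max_tokens:])

print(clamp_to_token_limit("a b c d e", 3))  # c d e
```

A production variant would re-summarize the overflow with the LLM rather than truncating, but the cap is enforced the same way: measure, then condense until under the limit.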
How can we set a limit for max tokens in ConversationSummaryMemory
https://api.github.com/repos/langchain-ai/langchain/issues/4033/comments
4
2023-05-03T09:41:10Z
2023-11-21T12:41:06Z
https://github.com/langchain-ai/langchain/issues/4033
1,693,725,706
4,033
[ "langchain-ai", "langchain" ]
Version: 0.0.153

I follow the instructions here: https://python.langchain.com/en/latest/modules/models/llms/examples/streaming_llm.html

```python
from langchain.llms import OpenAI, Anthropic
from langchain.chat_models import ChatOpenAI
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.schema import HumanMessage

# llm = OpenAI(streaming=True, temperature=0, model_kwargs=dict(callback_mananger=StreamingStdOutCallbackHandler()))
llm = OpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], temperature=0)
resp = llm("hi")
```

Error:

```
... nit__.py", line 231, in dumps
    return _default_encoder.encode(obj)
  File "/opt/homebrew/Cellar/python@3.11/3.11.2_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/json/encoder.py", line 200, in encode
    chunks = self.iterencode(o, _one_shot=True)
  File "/opt/homebrew/Cellar/python@3.11/3.11.2_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/json/encoder.py", line 258, in iterencode
    return _iterencode(o, 0)
  File "/opt/homebrew/Cellar/python@3.11/3.11.2_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/json/encoder.py", line 180, in default
    raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type StreamingStdOutCallbackHandler is not JSON serializable
```
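As a debugging aid (not a fix for the library behaviour), `json.dumps` accepts a `default` hook so unexpected objects surface as labelled placeholders instead of raising. The handler class below is a stand-in for the real one:

```python
import json

class StreamingStdOutCallbackHandler:  # stand-in for the real handler class
    pass

payload = {"temperature": 0, "callbacks": [StreamingStdOutCallbackHandler()]}

# without `default`, this raises: TypeError: Object of type
# StreamingStdOutCallbackHandler is not JSON serializable
text = json.dumps(payload, default=lambda o: f"<unserializable {type(o).__name__}>")
print(text)
```

Instrumenting the failing `dumps` call this way shows exactly which object leaked into the serialized parameters.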
TypeError: Object of type StreamingStdOutCallbackHandler is not JSON serializable
https://api.github.com/repos/langchain-ai/langchain/issues/4027/comments
1
2023-05-03T07:07:09Z
2023-05-03T07:34:11Z
https://github.com/langchain-ai/langchain/issues/4027
1,693,520,800
4,027
[ "langchain-ai", "langchain" ]
I prefer async LLM calls in my code, but need fallbacks for LLMs that do not support them. My code lets users supply their own LLMs. It would be nice to automatically fall back to `run` if `arun` will not work with the given LLM, something like `mychain.arun(..., fallback=True)`. The workaround I use is below:

```py
class FallbackLLMChain(LLMChain):
    """Chain that falls back to synchronous generation if the async generation fails."""

    async def agenerate(
        self,
        input_list: List[Dict[str, Any]],
        run_manager: Optional[AsyncCallbackManagerForChainRun] = None,
    ) -> LLMResult:
        """Generate LLM result from inputs."""
        try:
            return await super().agenerate(input_list, run_manager=run_manager)
        except NotImplementedError as e:
            return self.generate(input_list, run_manager=run_manager)

```

It might be useful in the repo though.
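The same fallback idea in miniature, with a stand-in LLM object whose async path is unimplemented (stdlib asyncio only; `asyncio.to_thread` requires Python 3.9+ and keeps the sync call from blocking the event loop):

```python
import asyncio

class SyncOnlyLLM:
    # stand-in for a user-supplied LLM that only implements the sync path
    def generate(self, prompt):
        return prompt.upper()

    async def agenerate(self, prompt):
        raise NotImplementedError

async def agenerate_with_fallback(llm, prompt):
    # prefer the async method; fall back to running the sync one in a thread
    try:
        return await llm.agenerate(prompt)
    except NotImplementedError:
        return await asyncio.to_thread(llm.generate, prompt)

print(asyncio.run(agenerate_with_fallback(SyncOnlyLLM(), "hi")))  # HI
```

Running the sync call in a worker thread is the one refinement over calling `self.generate` directly: a long synchronous generation otherwise stalls every other coroutine.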
[Feature Request] Fallback to run from arun
https://api.github.com/repos/langchain-ai/langchain/issues/4025/comments
4
2023-05-03T05:57:00Z
2023-09-19T16:13:39Z
https://github.com/langchain-ai/langchain/issues/4025
1,693,453,400
4,025
[ "langchain-ai", "langchain" ]
Hi, I'm now considering getting an LLM to search directories, see the names of the document files in them, and then fetch information from files whose names are relevant to a given task. It seems a simple function, so I can make it myself, but is there any agent or index loader that accomplishes this kind of task so far?
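The "simple function" version is a few lines of stdlib pathlib; the keyword matching here stands in for whatever relevance test the LLM would supply:

```python
from pathlib import Path

def find_relevant_files(root, keywords):
    # collect files whose names contain any keyword, case-insensitively;
    # an LLM-based relevance check would replace the `any(...)` predicate
    return sorted(
        p for p in Path(root).rglob("*")
        if p.is_file() and any(k.lower() in p.name.lower() for k in keywords)
    )
```

The matched paths can then be fed to whatever document loader fits the file type.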
Is there any function that crawls directories?
https://api.github.com/repos/langchain-ai/langchain/issues/4023/comments
1
2023-05-03T05:00:37Z
2023-09-10T16:23:42Z
https://github.com/langchain-ai/langchain/issues/4023
1,693,415,360
4,023
[ "langchain-ai", "langchain" ]
Running the code at https://python.langchain.com/en/latest/modules/agents/agents/custom_llm_chat_agent.html results in the following error `Allowed tools (set()) different than provided tools (['Search']) (type=value_error)` I'm guessing the issue is that LLMSingleActionAgent does not currently allow any tools?
Custom LLM Agent example does not work
https://api.github.com/repos/langchain-ai/langchain/issues/4015/comments
0
2023-05-03T01:48:38Z
2023-05-03T01:50:43Z
https://github.com/langchain-ai/langchain/issues/4015
1,693,298,662
4,015
[ "langchain-ai", "langchain" ]
As of `0.0.155`, the `SelfQueryRetriever` class supports Pinecone only. I wanted to extend it myself to support Vespa too but, after reviewing the current implementation, I discovered that `SelfQueryRetriever` ["wraps around a vector store" instead of a retriever](https://github.com/hwchase17/langchain/blob/18f9d7b4f6209632a02ed6e53a663e98d372f3da/langchain/retrievers/self_query/base.py#L33). Currently, there is no `VectorStore` implementation for Vespa. Hence, I believe it is not possible to augment `SelfQueryRetriever` to support Vespa.

After thinking about it for a couple of days, I think that a valid solution to this problem would require rethinking the implementation of `SelfQueryRetriever` by making it wrap a retriever instead of a vector store. This sounds reasonable because each vector store can act as a retriever thanks to the `as_retriever` method. The `search` method added as part of #3607 is a wrapper around either the `similarity_search` or `max_marginal_relevance_search` method, which are also wrapped by the `get_relevant_documents` method of a retriever. Hence, refactoring `SelfQueryRetriever` to wrap a retriever instead of a vector store seems reasonable to me.

Obviously, mine is a fairly narrow point of view, given that I mainly looked at the code related to the implementation of the `SelfQueryRetriever` class, and I might miss key implications of such a proposed change. I would be happy to have a go at the `SelfQueryRetriever` refactoring, but first it would be great if someone from the core developers team could comment on this. Tagging here @dev2049 because you were the author of #3607.
[Feature] Augment SelfQueryRetriever to support Vespa
https://api.github.com/repos/langchain-ai/langchain/issues/4008/comments
13
2023-05-02T22:45:23Z
2023-09-22T16:10:25Z
https://github.com/langchain-ai/langchain/issues/4008
1,693,187,800
4,008
[ "langchain-ai", "langchain" ]
I was testing the new version of streaming (I've updated hwchase17/chat-langchain locally and made the necessary changes). The example provided there has a handler with a websocket as a parameter:

```python
class StreamingLLMCallbackHandler(AsyncCallbackHandler):
    """Callback handler for streaming LLM responses."""

    def __init__(self, websocket):
        self.websocket = websocket

    async def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
        resp = ChatResponse(sender="bot", message=token, type="stream")
        await self.websocket.send_json(resp.dict())
```

Trying to assign this as a callback, however, will cause "maximum recursion depth exceeded in comparison" in the `_configure` method in `langchain.callbacks.manager`, on the `deepcopy` code. I assume that websockets have some self-reference; however, this new behavior breaks the example provided on how to stream to websockets, and off the top of my head I don't even know how I would do it without having the websocket as a field there.

Furthermore, I wasn't able to make it work so that it would at least stream to the console. This is the minimal setup I tried:

```python
llm = OpenAI(temperature=0.0, callbacks=[stream_handler])
question_generator = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template("Based on this history:\n{chat_history}\nanswer the question {question}:"),
    output_key="answer",
    callbacks=[stream_handler]
)
```

but `on_llm_new_token` has never been called. I didn't investigate this further, however.
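The recursion error is consistent with `deepcopy` descending into an object that references live I/O state. One generic escape hatch (a sketch of the workaround, not necessarily how langchain resolved it) is to define `__deepcopy__` so the handler is shared rather than copied:

```python
import copy

class WebsocketHandler:
    def __init__(self, websocket):
        self.websocket = websocket  # live connection; unsafe to deep-copy

    def __deepcopy__(self, memo):
        # share this handler instead of duplicating the connection;
        # deepcopy of any structure containing it returns the same instance
        return self

h = WebsocketHandler(websocket=object())
assert copy.deepcopy(h) is h
assert copy.deepcopy([h])[0] is h  # survives copies of containers too
```

Sharing is the right semantics here anyway: two copies of a handler streaming to the same socket would be a bug, not a feature.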
v0.0.155: maximum recursion depth exceeded in comparison when setting async callback
https://api.github.com/repos/langchain-ai/langchain/issues/4002/comments
1
2023-05-02T20:56:35Z
2023-05-20T19:33:49Z
https://github.com/langchain-ai/langchain/issues/4002
1,693,089,213
4,002
[ "langchain-ai", "langchain" ]
## `AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION`

This is a fantastic addition to langchain's collection of pre-built agents. The rendered prompt is less cluttered and the agent seems to be choosing tools correctly more often. However, I could not seem to get it working with the `ConversationalBufferWindowMemory` that I was already using with other agent types.

I looked further into it, and this agent's `prompt.py` had no placeholder called `{chat_history}` or `{history}`. Just to be extra sure, I wrapped the LLM object in a custom class and logged all requests/responses to a file. Only the most recent input is appended with every call.

I looked at the `AgentType` class, and the naming convention seems to suggest that this might be intended behaviour because it does not have 'conversational' in the name. Docs are not very clear on this. And the bot on site isn't helping as well.

#### Here's how I am initializing the AgentExecutor

```
executor = initialize_agent(
    tools=tools,
    llm=llm,
    memory=user_memory,
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    return_intermediate_steps=True,
    verbose=True,
    agent_kwargs={
        "prefix": prefix,
        "suffix": suffix,
        "memory": user_memory,
        "verbose": True,
    },
)
```

Here `user_memory` is a `ConversationalBufferWindowMemory` object I was successfully using before.

### 1. Is this a bug or the intended behaviour?
### 2. If this is not a bug, is a version of this agent with support for memory coming anytime soon?
Does the new Structured Chat Agent support ConversationMemory?
https://api.github.com/repos/langchain-ai/langchain/issues/4000/comments
13
2023-05-02T20:46:12Z
2024-04-23T05:42:03Z
https://github.com/langchain-ai/langchain/issues/4000
1,693,078,189
4,000
[ "langchain-ai", "langchain" ]
As in the title — I think it might be because of a deprecation and renaming at some point. Updated to use `BaseCallbackManager` in PR #3996; please merge, thanks!
Llama-cpp docs loading has CallbackManager error
https://api.github.com/repos/langchain-ai/langchain/issues/3997/comments
0
2023-05-02T20:13:28Z
2023-05-02T23:20:18Z
https://github.com/langchain-ai/langchain/issues/3997
1,693,040,259
3,997
[ "langchain-ai", "langchain" ]
I am installing LangChain for the first time. I opened a command box as Administrator to make sure the permissions were solid. It installed the build dependencies, then I got "ERROR: Command errored out with exit status 1: command: 'c:\python27\python.exe' 'c:\python27\lib\site-packages\pip\_vendor\pep517\_in_process.py' get_requires_for_build_wheel 'c:\users\boldi\appdata\local\temp\tmpestjm8'" See attached screenshot. ![LangChain Error](https://user-images.githubusercontent.com/7397536/235765715-3572adff-bb1c-4586-b8c0-0781e1f5202b.JPG) I have had no problems installing other packages like OpenAI today.
First time install of v 0.0.155, errors in build wheel
https://api.github.com/repos/langchain-ai/langchain/issues/3994/comments
4
2023-05-02T19:27:03Z
2023-05-03T20:47:16Z
https://github.com/langchain-ai/langchain/issues/3994
1,692,979,677
3,994
[ "langchain-ai", "langchain" ]
Module: `embeddings/openai.py`

Preface: while using AzureOpenAI or a custom model deployment on Azure, I am unable to use `OpenAIEmbeddings`, because `pydantic` forbids passing extra arguments — whereas this works perfectly fine with the ChatOpenAI or OpenAI models.

Environment variables I am using:

```
OPENAI_LOG=info
OPENAI_API_KEY=somestring
OPENAI_API_TYPE=azure
OPENAI_API_VERSION=2023-03-15-preview
OPENAI_API_BASE=some-url
```

My code to reproduce the error:

```python
DEPLOYMENT_FROM_MODEL = {
    'text-embedding-ada-002': 'custom_embeddings'
}

model = 'text-embedding-ada-002'
llm = OpenAIEmbeddings(
    deployment=DEPLOYMENT_FROM_MODEL.get(model),
    model=model,
    headers={
        "azure-account-uri": f"https://{AZURE_ACCOUNT}.openai.azure.com",
        "Authorization": f"Bearer {BEARER_TOKEN}",
    },
)
llm.embed_query(input_prompt)
```

Errors thrown:

```
pydantic.error_wrappers.ValidationError: 2 validation errors for OpenAIEmbeddings
deployment
  none is not an allowed value (type=type_error.none.not_allowed)
headers
  extra fields not permitted (type=value_error.extra)
```

I tried changing `embeddings/openai.py` and got it working by passing them as kwargs. Hopefully I can create an MR for it.
OpenAIEmbeddings can not take headers
https://api.github.com/repos/langchain-ai/langchain/issues/3992/comments
1
2023-05-02T19:04:56Z
2023-10-05T16:10:08Z
https://github.com/langchain-ai/langchain/issues/3992
1,692,950,910
3,992
[ "langchain-ai", "langchain" ]
It seems maintaining separate namespaces in your vector DB is helpful and/or necessary to make sure an LLM can answer compare/contrast questions that reference texts separated by dates like "03/2023" vs. "03/2022" without getting confused. To that end, there's a need to retrieve from multiple vectorstores, yet I can't find a straightforward solution. I have tried a few things:

1. Extending the `ConversationalRetrievalChain` to accept a list of retrievers:

```python
class MultiRetrieverConversationalRetrievalChain(ConversationalRetrievalChain):
    """Chain for chatting with multiple indexes."""

    retrievers: List[BaseRetriever]
    """Indexes to connect to."""

    def _get_docs(self, question: str, inputs: Dict[str, Any]) -> List[Document]:
        all_docs = []
        for retriever in self.retrievers:
            docs = retriever.get_relevant_documents(question)
            all_docs.extend(docs)
        return self._reduce_tokens_below_limit(all_docs)

    async def _aget_docs(self, question: str, inputs: Dict[str, Any]) -> List[Document]:
        all_docs = []
        for retriever in self.retrievers:
            docs = await retriever.aget_relevant_documents(question)
            all_docs.extend(docs)
        return self._reduce_tokens_below_limit(all_docs)
```

This became a bit unwieldy as it ran into validation errors with Pydantic, but I don't see why a more competent dev wouldn't be able to manage this.

2. I tried combining retrievers (suggestion from kapa.ai):

```python
embeddings = OpenAIEmbeddings()
march_documents = Pinecone.from_existing_index(
    index_name="langchain2", embedding=embeddings, namespace="March 2023")
feb_documents = Pinecone.from_existing_index(
    index_name="langchain2", embedding=embeddings, namespace="February 2023")
combined_docs = feb_documents + march_documents

# Create a RetrievalQAWithSourcesChain using the combined retriever
chain = RetrievalQAWithSourcesChain.from_chain_type(
    OpenAI(temperature=0), chain_type="stuff", retriever=combined_docs)
# does not work as as_retriever() either
```

3. Tried using an Agent with `VectorStoreRouterToolkit`, which seems built for this kind of task, yet it provides terrible answers for some reason I need to dive deeper into — "terrible" meaning it does not listen when I instruct it with things like "Do not summarize, list everything about XYZ...". Further, I need/prefer the results from similarity_search, returning `top_k` for my use case, which the agent doesn't seem to provide.

Is there a workaround to my problem? How do I maintain separation of namespaces, so that I can have the LLM answer questions about separate documents, and also be able to provide the source for the separate documents, all from within a single chain?
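For option 1, the fan-out itself can be prototyped without subclassing the chain at all, by duck-typing a retriever that merges and de-duplicates hits from several namespaces. This is only a sketch — the fake retrievers below stand in for the Pinecone namespaces:

```python
class MergedRetriever:
    """Fan a query out to several retriever-like objects and merge the hits."""
    def __init__(self, retrievers):
        self.retrievers = retrievers

    def get_relevant_documents(self, query):
        seen, merged = set(), []
        for r in self.retrievers:
            for doc in r.get_relevant_documents(query):
                if doc not in seen:          # drop exact duplicates
                    seen.add(doc)
                    merged.append(doc)
        return merged

class FakeNamespace:
    """Stand-in for one Pinecone namespace; matches by substring."""
    def __init__(self, docs):
        self.docs = docs

    def get_relevant_documents(self, query):
        return [d for d in self.docs if query in d]

march = FakeNamespace(["march report on sales", "march hiring notes"])
feb = FakeNamespace(["feb report on sales"])
docs = MergedRetriever([feb, march]).get_relevant_documents("report")
```

A real version would still need to satisfy the chain's retriever interface (and ideally re-rank the merged hits by score), but this shows where the namespace separation is preserved: each sub-retriever keeps its own index.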
Seeking solution for combined retrievers, or retrieving from multiple vectorstores with sources, to maintain separate Namespaces.
https://api.github.com/repos/langchain-ai/langchain/issues/3991/comments
7
2023-05-02T18:14:32Z
2023-06-28T03:31:58Z
https://github.com/langchain-ai/langchain/issues/3991
1,692,881,402
3,991
[ "langchain-ai", "langchain" ]
Hi .. I am currently into problems where I call the LLM to search over the local docs, I get this warning which never seems to stop ``` Setting `pad_token_id` to `eos_token_id`:0 for open-end generation. Setting `pad_token_id` to `eos_token_id`:0 for open-end generation. Setting `pad_token_id` to `eos_token_id`:0 for open-end generation. ... ``` Here is my simple code: ```python loader = TextLoader('state_of_the_union.txt') documents = loader.load() text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=100) docs = text_splitter.split_documents(documents) from langchain.llms import HuggingFacePipeline from langchain.chains.question_answering import load_qa_chain llm = HuggingFacePipeline.from_model_id(model_id='stabilityai/stablelm-base-alpha-7b', task='text-generation', device=0, model_kwargs={"temperature":0, "max_length": 1024}) query = "What did the President say about immigration?" chain = load_qa_chain(llm, chain_type="map_reduce") chain.run(input_documents=docs, question=query) ``` Currently on 1 A100 with 80GB memory.
Local hugging face model to search over docs
https://api.github.com/repos/langchain-ai/langchain/issues/3989/comments
1
2023-05-02T18:09:07Z
2023-05-16T05:31:24Z
https://github.com/langchain-ai/langchain/issues/3989
1,692,873,345
3,989
[ "langchain-ai", "langchain" ]
Hi all! I have an [application](https://github.com/ur-whitelab/BO-LIFT) based on langchain. A few months ago, I used it with fine-tuned (FT) models. We added a token usage counter later, and I haven't tried fine-tuned models again since then. Recently we have been interested in using FT models again, but the callback that exposes the token usage isn't accepting the model.

Minimal code to reproduce the error:

```python
from langchain.llms import OpenAI
from langchain.callbacks import get_openai_callback

llm = OpenAI(
    model_name=FT_MODEL,
    temperature=0.7,
    n=5,
    max_tokens=64,
)

with get_openai_callback() as cb:
    completion_response = llm.generate(["QUERY"])
    token_usage = cb.total_tokens
```

It works fine if the model name is a basic OpenAI model, for instance `model_name="text-davinci-003"`. But when I try to use one of my FT models, I get this error:

```
Error in on_llm_end callback: Unknown model: FT_MODEL. Please provide a valid OpenAI model name.Known models are: gpt-4, gpt-4-0314, gpt-4-completion, gpt-4-0314-completion, gpt-4-32k, gpt-4-32k-0314, gpt-4-32k-completion, gpt-4-32k-0314-completion, gpt-3.5-turbo, gpt-3.5-turbo-0301, text-ada-001, ada, text-babbage-001, babbage, text-curie-001, curie, text-davinci-003, text-davinci-002, code-davinci-002
```

It works if I remove the callback and avoid token counting, but it'd be nice to get suggestions on how to make it work. Is there a workaround for that? Any help is welcome! Thanks!
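One possible workaround sketch (not the langchain callback itself): legacy OpenAI fine-tune names embed the base model before the first colon (e.g. `davinci:ft-org-2023-05-02`), so a per-model cost table could fall back to the base model when the full name is unknown. The names and prices below are illustrative only:

```python
def base_model_name(model_name):
    """Map a legacy fine-tune name like 'davinci:ft-acme-2023-05-02'
    back to its base model so a per-model cost table can be consulted."""
    return model_name.split(":")[0]

COST_PER_1K = {"davinci": 0.02, "curie": 0.002}  # illustrative numbers only

def lookup_cost(model_name):
    base = base_model_name(model_name)
    if base not in COST_PER_1K:
        raise ValueError(f"Unknown model: {model_name}")
    return COST_PER_1K[base]
```

A patch along these lines inside the callback's model lookup would let fine-tuned models reuse their base model's pricing instead of erroring out.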
LangChain openAI callback doesn't allow finetuned models
https://api.github.com/repos/langchain-ai/langchain/issues/3988/comments
0
2023-05-02T18:00:22Z
2023-05-02T23:19:58Z
https://github.com/langchain-ai/langchain/issues/3988
1,692,856,409
3,988
[ "langchain-ai", "langchain" ]
I am interested in writing a tutorial for using langchain with [Shimmy](https://shimmy.farama.org/), an API conversion tool allowing many popular reinforcement learning environments to be used natively with PettingZoo and Gymnasium. Since there are already PettingZoo and Gymnasium tutorials/wrappers (https://python.langchain.com/en/latest/use_cases/agent_simulations/petting_zoo.html), it would not require significant new code, but I think it would be a really helpful feature to show people that they can, for example, load DM Control/DM Lab environments in the exact same way. OpenSpiel provides text-rendered board games, which would be a great feature to show off with a chatbot. I am making this issue just to confirm it is something the developers would be interested in; I am planning to work on the tutorial later today, and if it's not possible to add directly to this repo I will add it to Shimmy (I am the project manager of Shimmy and PettingZoo). I would be interested in fleshing out integration on PettingZoo's side as well — if there are any extra features we could add in order to better support langchain, we would love to do so. If there is interest in adding training library support, I am also interested in working on tutorials to have langchain load a trained model using standard libraries like [stable-baselines3](https://github.com/DLR-RM/stable-baselines3), [RLlib](https://docs.ray.io/en/latest/rllib/index.html), or a lighter library like [CleanRL](https://github.com/vwxyzjn/cleanrl).
Simulated Environment: Shimmy (Farama Foundation API conversion tool)
https://api.github.com/repos/langchain-ai/langchain/issues/3986/comments
1
2023-05-02T17:32:55Z
2023-05-20T02:32:14Z
https://github.com/langchain-ai/langchain/issues/3986
1,692,816,840
3,986
[ "langchain-ai", "langchain" ]
With a schema such as:

```python
class JsonList(BaseModel):
    __root__: List[str]
```

we can validate a JSON string of the form:

```json
["xxx", "yyy", "zzz"]
```

But the Pydantic parser [assumes the input is an object](https://github.com/hwchase17/langchain/blob/71a337dac6aa8c5a7f472e3e7fd0a61ca2a4eefb/langchain/output_parsers/pydantic.py#L20) in its greedy search, so it fails with a validation error.
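A possible fix is simply to widen the parser's greedy regex so it accepts a top-level array as well as an object — sketched here in plain Python rather than as a patch to the actual `PydanticOutputParser`:

```python
import json
import re

def extract_first_json(text):
    # Greedily match either a {...} object or a [...] array, widening the
    # object-only search the issue describes.
    match = re.search(r"(\{.*\}|\[.*\])", text, re.DOTALL)
    if match is None:
        raise ValueError(f"Could not find JSON in: {text!r}")
    return json.loads(match.group(1))

parsed = extract_first_json('Here you go: ["xxx", "yyy", "zzz"]')
```

The extracted value could then be handed to `JsonList.parse_obj(...)` (or `parse_raw` on the matched substring) so `__root__` list schemas validate too.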
Pydantic output parser assumes JSON object
https://api.github.com/repos/langchain-ai/langchain/issues/3985/comments
2
2023-05-02T17:06:44Z
2023-10-24T16:08:48Z
https://github.com/langchain-ai/langchain/issues/3985
1,692,784,932
3,985
[ "langchain-ai", "langchain" ]
Hi, I am using the create_collection() function below for creating a collection, and it works fine — it creates a collection, stores it in my persist directory, and I am able to perform question answering with it.

```python
def create_collection(openai_api_key, embedding_path, persist_directory, collection_name):
    if not openai_api_key == 'None':
        if collection_name != None:
            if persist_directory != None:
                global vectordb
                if embedding_path != None:
                    for i in embedding_path:
                        i = str(i)
                        logging.info('Processing files in directory: ' + i)
                        loader = DirectoryLoader(i, show_progress=True)
                        docs = loader.load()
                        #DE5-T31 #text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
                        #DE5-T31 #texts = text_splitter.split_documents(docs)
                        #DE5-T31
                        text_splitter = TokenTextSplitter(chunk_size=1000, chunk_overlap=200)
                        texts = text_splitter.split_documents(docs)
                        embeddings = OpenAIEmbeddings(openai_api_key=openai_api_key)
                        #DE5-T31
                        vectordb = Chroma.from_documents(texts, embedding=embeddings,
                                                         persist_directory=persist_directory,
                                                         collection_name=collection_name)
                        vectordb.persist()
                        vectordb = None
                #DE5-T31
                elif web_urls_path != None:
                    web_urls = web_urls_path.split(',')
                    url_list = web_urls
                    logging.info(f"URL List: {url_list}")
                    urls = []
                    for url in url_list:
                        logging.info(f"Processing data in url: {url}")
                        reqs = requests.get(url)
                        soup = BeautifulSoup(reqs.text, 'html.parser')
                        for link in soup.find_all('a'):
                            urllink = link.get('href')
                            if (urllink != None) and (urllink.startswith('http')):
                                urls.append(urllink)
                                #print(link.get('href'))
                    loader = UnstructuredURLLoader(urls=urls)
                    docs = loader.load()
                    logging.info(f'You have {len(docs)} document(s) in your data')
                    text_splitter = TokenTextSplitter(chunk_size=1000, chunk_overlap=200)
                    texts = text_splitter.split_documents(docs)
                    embeddings = OpenAIEmbeddings(openai_api_key=openai_api_key)
                    vectordb = Chroma.from_documents(texts, embedding=embeddings,
                                                     persist_directory=persist_directory,
                                                     collection_name=collection_name)
                    vectordb.persist()
                    vectordb = None
                else:
                    logging.error("Please provide the directory or web path to create the collection!")
                    sys.exit(0)
            else:
                logging.error("Please provide the path to store the vector database!")
                sys.exit(0)
        else:
            logging.error("Please provide a collection name!")
            sys.exit(0)
    else:
        logging.error("Please configure the openai_api_key in the json file at the following path:\n" + "file:///" + pathconf)
        sys.exit(0)
    return
```

But when I try to update the existing collection using the update_collection() function below, it does not work — I am not able to update the existing collection.

```python
def update_collection(openai_api_key, persist_directory, collection_name):
    if not openai_api_key == 'None':
        if persist_directory != None:
            if collection_name != None:
                if os.path.isdir(persist_directory + '\index') and os.path.isfile(persist_directory + "\chroma-collections.parquet") and os.path.isfile(persist_directory + "\chroma-embeddings.parquet"):
                    try:
                        embeddings = OpenAIEmbeddings(openai_api_key=openai_api_key)
                        client = chromadb.Client(Settings(chroma_db_impl="duckdb+parquet",
                                                          persist_directory=persist_directory))  # Optional, defaults to .chromadb/ in the current directory
                        collection = client.get_collection(name=collection_name, embedding_function=embeddings)
                    except Exception as e:
                        logging.error("An error occured: " + str(e))
                        sys.exit(0)
                    global vectordb
                    if embedding_path != None:
                        # calling persisted db
                        vectordb = Chroma(embedding_function=embeddings,
                                          persist_directory=persist_directory,
                                          collection_name=collection_name)
                        for i in embedding_path:
                            i = str(i)
                            logging.info('Processing files in directory: ' + i)
                            # load the documents you want to add to the collection
                            loader = DirectoryLoader(i, show_progress=True)
                            docs = loader.load()
                            text_splitter = TokenTextSplitter(chunk_size=1000, chunk_overlap=200)
                            texts = text_splitter.split_documents(docs)
                            # add the documents to the collection
                            #vectordb = Chroma(embedding_function=embeddings, persist_directory=persist_directory, collection_name=collection_name)
                            vectordb.add_documents(documents=texts)
                            #vectordb = Chroma.from_documents(texts, collection_name=collection_name, persist_directory=persist_directory, embedding=embeddings)
                            vectordb.persist()
                            vectordb = None
                            #collection.update(documents=texts)
                            #text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)  #DE5-T31
                            #texts = text_splitter.split_documents(docs)  #DE5-T31
                            #vectordb = vectordb.add_documents(texts, collection_name=collection_name)
                            #vectordb.from_documents(documents=texts, embedding=embeddings, collection_name=collection_name)
                    elif web_urls_path != None:
                        #vectordb = Chroma(embedding_function=embeddings, persist_directory=persist_directory, collection_name=collection_name)
                        web_urls = web_urls_path.split(',')
                        url_list = web_urls
                        logging.info(f"URL List: {url_list}")
                        urls = []
                        for url in url_list:
                            logging.info(f"Processing data in url: {url}")
                            reqs = requests.get(url)
                            soup = BeautifulSoup(reqs.text, 'html.parser')
                            for link in soup.find_all('a'):
                                urllink = link.get('href')
                                if (urllink != None) and (urllink.startswith('http')):
                                    urls.append(urllink)
                                    #print(link.get('href'))
                        loader = UnstructuredURLLoader(urls=urls)
                        docs = loader.load()
                        logging.info(f'You have {len(docs)} document(s) in your data')
                        text_splitter = TokenTextSplitter(chunk_size=1000, chunk_overlap=200)
                        texts = text_splitter.split_documents(docs)
                        embeddings = OpenAIEmbeddings(openai_api_key=openai_api_key)
                        vectordb.from_documents(texts, embedding=embeddings,
                                                persist_directory=persist_directory,
                                                collection_name=collection_name)
                        vectordb.persist()
                        vectordb = None
                    else:
                        logging.error("Please provide directory path(s) or web url(s) to update existing collection!")
                        sys.exit(0)
                else:
                    logging.error("The path provided does not contain a chroma vectordb!")
                    sys.exit(0)
            else:
                logging.error("Please provide a valid collection name to update the existing collection!")
                sys.exit(0)
        else:
            logging.error("Please provide the path where vector database is stored!")
            sys.exit(0)
    else:
        logging.error("Please configure the openai_api_key in the json file at the following path:\n" + "file:///" + pathconf)
        sys.exit(0)
    return
```

Can anyone please tell me what mistake I am making, or whether it is not possible to update an existing collection in a persisted database?

Thank you
Is updating collection possible
https://api.github.com/repos/langchain-ai/langchain/issues/3984/comments
4
2023-05-02T16:45:03Z
2023-10-06T16:08:50Z
https://github.com/langchain-ai/langchain/issues/3984
1,692,751,873
3,984
[ "langchain-ai", "langchain" ]
`VectorDBQA` is being deprecated in favour of `RetrievalQA`, and similarly `VectorDBQAWithSourcesChain` is being deprecated in favour of `RetrievalQAWithSourcesChain`. Currently, the `VectorDBQA` and `VectorDBQAWithSourcesChain` chains can be serialized using `vec_chain.save(...)` because they have a `_chain_type` property - https://github.com/hwchase17/langchain/blob/3bd5a99b835fa320d02aa733cb0c0bc4a87724fa/langchain/chains/qa_with_sources/vector_db.py#L67

However, `RetrievalQA` and `RetrievalQAWithSourcesChain` do not have that property and raise the following error when trying to save with `ret_chain.save("ret_chain.yaml")`:

```
File ~/.pyenv/versions/3.8.15/envs/internalqa/lib/python3.8/site-packages/langchain/chains/base.py:45, in Chain._chain_type(self)
     43 @property
     44 def _chain_type(self) -> str:
---> 45     raise NotImplementedError("Saving not supported for this chain type.")

NotImplementedError: Saving not supported for this chain type.
```

There aren't any functions to support loading `RetrievalQA` either, unlike the VectorDBQA counterparts: https://github.com/hwchase17/langchain/blob/3bd5a99b835fa320d02aa733cb0c0bc4a87724fa/langchain/chains/loading.py#L313-L356
RetrievalQA & RetrievalQAWithSourcesChain chains cannot be serialized/saved or loaded
https://api.github.com/repos/langchain-ai/langchain/issues/3983/comments
15
2023-05-02T16:17:48Z
2023-08-01T13:53:17Z
https://github.com/langchain-ai/langchain/issues/3983
1,692,711,579
3,983
[ "langchain-ai", "langchain" ]
Hello, why do I get this error?

```
2023-05-02 16:59:06.633 Uncaught app exception
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 565, in _run_script
    exec(code, module.__dict__)
  File "/Users/xfocus/Downloads/chatRepo/Chat-with-Github-Repo/chat.py", line 104, in <module>
    output = search_db(user_input)
  File "/Users/xfocus/Downloads/chatRepo/Chat-with-Github-Repo/chat.py", line 85, in search_db
    result = qa({"query": query})
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chains/base.py", line 116, in __call__
    raise e
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chains/base.py", line 113, in __call__
    outputs = self._call(inputs)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chains/retrieval_qa/base.py", line 110, in _call
    answer = self.combine_documents_chain.run(
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chains/base.py", line 216, in run
    return self(kwargs)[self.output_keys[0]]
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chains/base.py", line 116, in __call__
    raise e
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chains/base.py", line 113, in __call__
    outputs = self._call(inputs)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chains/combine_documents/base.py", line 75, in _call
    output, extra_return_dict = self.combine_docs(docs, **other_keys)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chains/combine_documents/stuff.py", line 82, in combine_docs
    return self.llm_chain.predict(**inputs), {}
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chains/llm.py", line 151, in predict
    return self(kwargs)[self.output_key]
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chains/base.py", line 116, in __call__
    raise e
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chains/base.py", line 113, in __call__
    outputs = self._call(inputs)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chains/llm.py", line 57, in _call
    return self.apply([inputs])[0]
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chains/llm.py", line 118, in apply
    response = self.generate(input_list)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chains/llm.py", line 62, in generate
    return self.llm.generate_prompt(prompts, stop)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chat_models/base.py", line 82, in generate_prompt
    raise e
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chat_models/base.py", line 79, in generate_prompt
    output = self.generate(prompt_messages, stop=stop)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chat_models/base.py", line 54, in generate
    results = [self._generate(m, stop=stop) for m in messages]
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chat_models/base.py", line 54, in <listcomp>
    results = [self._generate(m, stop=stop) for m in messages]
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chat_models/openai.py", line 266, in _generate
    response = self.completion_with_retry(messages=message_dicts, **params)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chat_models/openai.py", line 228, in completion_with_retry
    return _completion_with_retry(**kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tenacity/__init__.py", line 289, in wrapped_f
    return self(f, *args, **kw)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tenacity/__init__.py", line 379, in __call__
    do = self.iter(retry_state=retry_state)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tenacity/__init__.py", line 314, in iter
    return fut.result()
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/concurrent/futures/_base.py", line 451, in result
    return self.__get_result()
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tenacity/__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chat_models/openai.py", line 226, in _completion_with_retry
    return self.client.create(**kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/openai/api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/openai/api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/openai/api_requestor.py", line 682, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 16762 tokens. Please reduce the length of the messages.
```
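The underlying problem is that too many retrieved chunks are being stuffed into one prompt. The kind of trimming that `max_tokens_limit`-style options perform can be sketched in plain Python — the word-count "tokenizer" below is a deliberate stand-in for a real one such as tiktoken:

```python
def rough_token_count(text):
    # Crude stand-in for a real tokenizer: ~1 token per whitespace-separated word.
    return len(text.split())

def fit_docs(docs, budget):
    """Keep top-ranked docs in order until the (estimated) token budget
    is exhausted, then drop the rest instead of overflowing the prompt."""
    kept, used = [], 0
    for doc in docs:
        cost = rough_token_count(doc)
        if used + cost > budget:
            break
        kept.append(doc)
        used += cost
    return kept

docs = ["four words in here", "two words", "a very long trailing document " * 3]
kept = fit_docs(docs, budget=7)
```

With a real tokenizer and the model's actual context size (4097 here, minus room for the question and answer) as the budget, the retrieved documents would be trimmed before they ever reach the completion call; alternatives are a smaller chunk size at indexing time or a `map_reduce` chain type.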
why i get :openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 16762 tokens. Please reduce the length of the messages.
https://api.github.com/repos/langchain-ai/langchain/issues/3982/comments
4
2023-05-02T16:04:31Z
2023-11-15T16:11:06Z
https://github.com/langchain-ai/langchain/issues/3982
1,692,692,832
3,982
[ "langchain-ai", "langchain" ]
Hello everyone! I am using Langchain and I want to implement chatbot memory. I am doing everything according to the docs and my bot doesn't remember anything I tell him. **Code snippet:** ``` llm = ChatOpenAI(model_name='gpt-3.5-turbo', temperature=0.3, openai_api_key=OPENAI_API_KEY) memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True) rqa = ConversationalRetrievalChain.from_llm(llm, docsearch.as_retriever(), memory=memory) def retrieve_answer(query, chat_history): memory.chat_memory.add_user_message(query) res = rqa({"question": query}) retrieval_result = res["answer"] if "The given context does not provide" in retrieval_result: base_result = llm.generate([query]) return base_result.generations[0][0].text else: return retrieval_result messages = [] print("Welcome to the chatbot. Enter 'quit' to exit the program.") while True: user_message = input("You: ") answer = retrieve_answer(user_message, messages) print("Assistant:", answer) messages.append((user_message, answer)) ``` **Whole python script is located here:** https://github.com/zigax1/chat-with-pdf.git Does anyone have any idea, what am I doing wrong? Thanks to everyone for help. ![image](https://user-images.githubusercontent.com/80093733/235705133-b8ff5d11-88bd-408a-a03e-c596ffb2fde7.png)
Chatbot memory integration
https://api.github.com/repos/langchain-ai/langchain/issues/3977/comments
4
2023-05-02T15:01:31Z
2023-09-22T16:10:31Z
https://github.com/langchain-ai/langchain/issues/3977
1,692,593,040
3,977
[ "langchain-ai", "langchain" ]
Calling `conversation.predict()` on a ConversationChain multiple times gives confusing logging. Inspired by: https://python.langchain.com/en/latest/modules/memory/examples/conversational_customization.html

Minimal confusing example:

```python
from langchain.llms import OpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

llm = OpenAI(temperature=0)
conversation = ConversationChain(
    llm=llm,
    verbose=True,
    memory=ConversationBufferMemory()
)

bar = conversation.predict(input="please print bar")
baz = conversation.predict(input="now please change the last letter of the previous output to z")
```

*Stdout logging observed:*

```
> Entering Conversation Chain.
System: The following is a conversation between ...
Human: please print bar
> Finished Chain.
# output "bar" is assigned to variable and stored in memory.

> Entering Conversation Chain.
System: The following is a conversation between ...
Human: please print bar
AI: bar
Human: now please change the last letter of the previous output to z
> Finished Chain.
# output "baz" is assigned to variable and stored in memory.
```

*Stdout logging expected:*

```
> Entering Multi-turn Conversation.
System: The following is a conversation between ...
Human: Please print bar
AI: bar
Human: now please change the last letter of the previous output to z.
AI: baz
> Finished Multi-turn Conversation.
# outputs bar and baz are both accessible in variables and stored in memory.
```

For my use case, I would be least surprised by a generator log which only yields additional log lines as the conversation continues.
Multi-turn conversation chains have unintuitive logging.
https://api.github.com/repos/langchain-ai/langchain/issues/3974/comments
1
2023-05-02T13:18:55Z
2023-09-10T16:23:48Z
https://github.com/langchain-ai/langchain/issues/3974
1,692,411,958
3,974
[ "langchain-ai", "langchain" ]
I use the huggingface model locally and run the following code:

```python
chain = load_qa_chain(llm=chatglm,
                      chain_type="map_rerank",
                      return_intermediate_steps=True,
                      prompt=PROMPT)
chain({"input_documents": search_docs_Documents, "question": query},
      return_only_outputs=True)
```

The error is as follows (Rich traceback, condensed to the highlighted frames):

```
Traceback (most recent call last):
  /tmp/ipykernel_274378/983731820.py:2 in <module>  [Errno 2] No such file or directory
  /tmp/ipykernel_274378/14951549.py:11 in answer_docs  [Errno 2] No such file or directory
  langchain/chains/base.py:116 in __call__
      raise e
  langchain/chains/base.py:113 in __call__
      outputs = self._call(inputs)
  langchain/chains/combine_documents/base.py:75 in _call
      output, extra_return_dict = self.combine_docs(docs, **other_keys)
  langchain/chains/combine_documents/map_rerank.py:97 in combine_docs
      results = self.llm_chain.apply_and_parse(
  langchain/chains/llm.py:192 in apply_and_parse
      return self._parse_result(result)
  langchain/chains/llm.py:198-199 in _parse_result
      return [
          self.prompt.output_parser.parse(res[self.output_key]) for res in result
      ]
  langchain/output_parsers/regex.py:28 in parse
      raise ValueError(f"Could not parse output: {text}")

ValueError: Could not parse output:
```
load_qa_chain with map_rerank by local huggingface model
https://api.github.com/repos/langchain-ai/langchain/issues/3970/comments
12
2023-05-02T11:55:21Z
2023-12-06T17:46:35Z
https://github.com/langchain-ai/langchain/issues/3970
1,692,285,119
3,970
[ "langchain-ai", "langchain" ]
When using the SQL chain I can return the intermediate steps so that I can output the query. For the SQL agent this seems not to be an option without modifying the tool itself. How can I see the actual queries used (not just in the verbose output, but saved to a variable, for example)? Alternatively, is there a way to save the verbose output in a dict?
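One workaround sketch (hedged: the hook name `on_agent_action` and the `tool`/`tool_input` attributes follow langchain's callback and `AgentAction` interfaces, and the tool name `query_sql_db` is an assumption; the class below is deliberately standalone so the pattern itself runs without langchain):

```python
from collections import namedtuple

# Stand-in for langchain's AgentAction (tool name + tool input).
Action = namedtuple("Action", ["tool", "tool_input"])

class QueryCaptureHandler:
    """Collects every input the agent sends to the SQL query tool, so the
    executed queries can be read back from a variable after agent.run()."""

    def __init__(self, tool_name="query_sql_db"):
        self.tool_name = tool_name
        self.queries = []

    def on_agent_action(self, action, **kwargs):
        # Only record inputs routed to the SQL query tool.
        if action.tool == self.tool_name:
            self.queries.append(action.tool_input)

# Simulated agent steps -- in real code the agent executor fires these hooks.
handler = QueryCaptureHandler()
handler.on_agent_action(Action("query_sql_db", "SELECT COUNT(*) FROM users"))
handler.on_agent_action(Action("list_tables_sql_db", ""))
print(handler.queries)
```

In langchain itself this class would subclass `BaseCallbackHandler` and be attached to the agent's callback manager; the queries then accumulate in `handler.queries` instead of only scrolling past in verbose output.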
Get intermediate_steps with SQLDatabase Agent
https://api.github.com/repos/langchain-ai/langchain/issues/3969/comments
17
2023-05-02T11:47:15Z
2024-07-06T16:04:57Z
https://github.com/langchain-ai/langchain/issues/3969
1,692,274,905
3,969
[ "langchain-ai", "langchain" ]
Currently I'm using a specific prompt suffix (`". Assign the printed dataframe to a variable and return it as the final answer."`) in the Pandas agent, which sometimes gives the result I'm looking for as a printed DataFrame. I've then written an output parser that turns this into a JSON which can then be loaded as a DataFrame. The problem with this is the Pandas agent often doesn't output the DataFrame, instead printing the variable name or the last command used to get to that result, and also critically the output parser uses many tokens for the kinds of data I'm working with. Is it possible to force the Pandas agent to return the output of its analysis as a Pandas DataFrame object?
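Until the agent can return the object directly, one cheaper-than-LLM workaround is to parse the printed table back into rows in plain Python; a minimal sketch (assuming whitespace-separated columns with no spaces inside values, which real data often violates):

```python
def parse_printed_frame(text):
    """Best-effort parse of a pandas-style printed table into a list of
    dicts, dropping the printed index column."""
    lines = [ln for ln in text.strip().splitlines() if ln.strip()]
    header = lines[0].split()
    rows = []
    for ln in lines[1:]:
        cells = ln.split()
        # The first cell is the row index; pair the rest with the header.
        rows.append(dict(zip(header, cells[1:])))
    return rows

printed = "   name  score\n0  alice      1\n1    bob      2"
print(parse_printed_frame(printed))
```

The resulting rows could then be handed to `pd.DataFrame(rows)` locally, avoiding the token cost of an LLM-based output parser.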
Make Pandas Agent return DataFrame object?
https://api.github.com/repos/langchain-ai/langchain/issues/3968/comments
5
2023-05-02T11:37:03Z
2023-12-16T03:13:49Z
https://github.com/langchain-ai/langchain/issues/3968
1,692,261,993
3,968
[ "langchain-ai", "langchain" ]
The Chroma and Pinecone vector stores allow filtering documents by metadata via the `filter` parameter of the `similarity_search` function, but the Redis vector store's `similarity_search` does not have this parameter. Would it be possible to enable metadata filtering in `similarity_search` for the Redis vector store?
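Until the Redis store supports this natively, a client-side workaround is possible; a sketch (the only assumption is that the store exposes `similarity_search(query, k=...)` returning documents with a `.metadata` dict, as langchain stores generally do):

```python
def similarity_search_with_filter(store, query, flt, k=4, fetch_k=20):
    """Over-fetch fetch_k results, then keep only documents whose
    metadata matches every key/value in `flt`, returning at most k."""
    docs = store.similarity_search(query, k=fetch_k)
    keep = [
        d for d in docs
        if all(d.metadata.get(key) == val for key, val in flt.items())
    ]
    return keep[:k]
```

This is lossy (matching documents ranked beyond the first `fetch_k` are missed), which is exactly why a native filter pushed into the Redis query would be preferable.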
[Feature] Redis Vectorestore - similarity_search filter by metadata
https://api.github.com/repos/langchain-ai/langchain/issues/3967/comments
27
2023-05-02T11:08:29Z
2023-10-31T21:51:13Z
https://github.com/langchain-ai/langchain/issues/3967
1,692,223,105
3,967
[ "langchain-ai", "langchain" ]
What's the difference between the two parameters `handlers` and `inheritable_handlers` in the callback manager class? Also, if I'm not wrong, previously there were only `AsyncCallbackManager` and `BaseCallbackManager`. What is the recently introduced `langchain.callbacks.manager` module for? Which one should I use for `ConversationalRetrievalChain`?
Callback Manager
https://api.github.com/repos/langchain-ai/langchain/issues/3966/comments
2
2023-05-02T10:59:02Z
2023-05-06T04:22:59Z
https://github.com/langchain-ai/langchain/issues/3966
1,692,209,891
3,966
[ "langchain-ai", "langchain" ]
Currently the logging output from the ConversationChain is quite hard to read. I believe implementing separate colors for System Messages, Human Messages and AI Messages by role would involve modifying the `prep_prompts` and `aprep_prompts` methods within the ConversationChain subclass. Something like:
```
def prep_prompts(self, input_list: List[Dict[str, Any]]) -> Tuple[List[PromptValue], Optional[List[str]]]:
    """Prepare prompts from inputs."""
    stop = None
    if "stop" in input_list[0]:
        stop = input_list[0]["stop"]
    prompts = []
    for inputs in input_list:
        selected_inputs = {k: inputs[k] for k in self.prompt.input_variables}
        prompt = self.prompt.format_prompt(**selected_inputs)
        _text_parts = []
        for section in prompt:
            if section.role == "Human":
                _text_parts.append(get_colored_text(section.to_string(), "green"))
            elif section.role == "AI":
                _text_parts.append(get_colored_text(section.to_string(), "blue"))
            elif section.role == "System":
                _text_parts.append(get_colored_text(section.to_string(), "white"))
            else:
                raise ValueError(f"Unknown role: {section.role}")
        # _colored_text = get_colored_text(prompt.to_string(), "green")
        _colored_text = "\n".join(_text_parts)
        _text = "Prompt after formatting:\n" + _colored_text
        self.callback_manager.on_text(_text, end="\n", verbose=self.verbose)
        if "stop" in inputs and inputs["stop"] != stop:
            raise ValueError("If `stop` is present in any inputs, should be present in all.")
        prompts.append(prompt)
    return prompts, stop
```
As far as I'm aware, this won't work because I don't understand how to work with the PromptValue ABC, i.e. whether a ConversationChatPrompt's PromptValue can be iterated over. It might also be possible to call `.to_messages()` on the PromptValue, iterate over the messages, get the appropriate colored text, and then return the concatenated result.
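The `.to_messages()` route mentioned at the end reduces to a small pure function; a sketch (the role strings and the idea of treating messages as (role, text) pairs are assumptions about what `PromptValue.to_messages()` yields, and raw ANSI codes stand in for langchain's `get_colored_text` helper):

```python
ROLE_COLORS = {"Human": "32", "AI": "34", "System": "37"}  # green / blue / white

def color_by_role(messages):
    """Join a list of (role, text) pairs into one string, coloring each
    message with the ANSI code for its role (default color if unknown)."""
    parts = []
    for role, text in messages:
        code = ROLE_COLORS.get(role, "0")
        parts.append(f"\x1b[{code}m{role}: {text}\x1b[0m")
    return "\n".join(parts)

demo = color_by_role([("System", "Be terse."), ("Human", "Hi"), ("AI", "Hello")])
print(demo)
```

The callback's `on_text` could then be fed this string instead of the single-color formatted prompt.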
[Feature Request] ConversationalChain prints system, user and human message prompts using separate colors.
https://api.github.com/repos/langchain-ai/langchain/issues/3965/comments
2
2023-05-02T10:26:54Z
2023-09-10T16:23:52Z
https://github.com/langchain-ai/langchain/issues/3965
1,692,166,533
3,965
[ "langchain-ai", "langchain" ]
While it is known that ultimately the prompt is responsible for controlling the responses to a QA task, ConversationalRetrievalChain running with ConversationSummaryBufferMemory occasionally responds with strange replies to irrelevant questions. The prompt includes an instruction like _"Please do not refer to document sources while responding to off-topic questions."_ In the middle of a conversation, asking irrelevant questions like 'hello' or 'how are you' repeats one of the previous responses back as a fresh response. Since I am using ConversationSummaryBufferMemory as the bot's memory, I am not sure whether I also need to consider the accuracy of the retriever to handle such scenarios. As is already known, a Chroma-backed vector store doesn't allow a search relevance threshold based on similarity score. Printing the scores of docs_and_scores from **similarity_search** revealed that the similarity always ranges between 3.1 and 4.1 for both relevant and irrelevant questions. Configuring the retriever with (search_type="similarity", search_kwargs={"k": 2}) also doesn't help the situation much.
Relevancy for Chroma retriever results for non relevant questions
https://api.github.com/repos/langchain-ai/langchain/issues/3963/comments
1
2023-05-02T09:58:43Z
2023-09-10T16:23:57Z
https://github.com/langchain-ai/langchain/issues/3963
1,692,127,355
3,963
[ "langchain-ai", "langchain" ]
API documentation: https://github.com/pengzhile/pandora/blob/master/doc/HTTP-API.md
How can I customize Chat Models? For example, use the chatgpt web page in an api-like manner through the pandora project
https://api.github.com/repos/langchain-ai/langchain/issues/3962/comments
0
2023-05-02T09:32:33Z
2023-05-06T06:20:47Z
https://github.com/langchain-ai/langchain/issues/3962
1,692,090,239
3,962
[ "langchain-ai", "langchain" ]
When installing `Langchain==0.0.155` I am getting the error that `langchain.schemas` does not exist. This is because the file is named `langchain.schema`, so there is a typo. I will create a pull request and reference this issue.
ModuleNotFoundError: No module named 'langchain.schemas'
https://api.github.com/repos/langchain-ai/langchain/issues/3960/comments
13
2023-05-02T09:17:39Z
2023-05-03T09:29:32Z
https://github.com/langchain-ai/langchain/issues/3960
1,692,069,373
3,960
[ "langchain-ai", "langchain" ]
Since upgrading to 0.0.155, the following code does not work:
```
tool_names = [tool.name for tool in tools]
agent = LLMSingleActionAgent(
    llm_chain=llm_chain,
    output_parser=output_parser,
    stop=["\nObservation:"],
    allowed_tools=tool_names
)
```
The following error is raised in what was previously working code, so this appears to be a breaking change:
```
pydantic.error_wrappers.ValidationError: 1 validation error for AgentExecutor
__root__
  Allowed tools (set()) different than provided tools (['Tenant Rights Assistant', 'Lease Question Answerer', 'Building Database Query']) (type=value_error)
```
This makes no sense. Printing tool_names yields
```
{'Tenant Rights Assistant', 'Lease Question Answerer', 'Building Database Query'}
```
which is exactly the same as the list of provided tools - presumably, since this was copied directly from the example code with my own tools (again, previously working fine), this means the examples are also now non-functional.
Allowed tools (set()) different than provided tools error - all/most Agent examples broken?
https://api.github.com/repos/langchain-ai/langchain/issues/3957/comments
10
2023-05-02T07:04:11Z
2023-05-05T09:47:17Z
https://github.com/langchain-ai/langchain/issues/3957
1,691,889,457
3,957
[ "langchain-ai", "langchain" ]
I am constantly getting OutputParserException from my agent executions, but the agent actually outputs the correct or desired answer to the problem I'm presenting to it. Is there a way to redirect that output as the final answer to avoid the error?
OutputParserException with correct answer
https://api.github.com/repos/langchain-ai/langchain/issues/3955/comments
2
2023-05-02T06:23:29Z
2023-10-31T16:07:40Z
https://github.com/langchain-ai/langchain/issues/3955
1,691,845,593
3,955
[ "langchain-ai", "langchain" ]
How to reproduce: just pass a non-None list as `input_variables` to `create_pandas_dataframe_agent`, and you will see the template validator raise a "missing key" exception.
custom value in `input_variables` would cause missing_variable exception
https://api.github.com/repos/langchain-ai/langchain/issues/3950/comments
1
2023-05-02T05:32:19Z
2023-05-02T05:39:31Z
https://github.com/langchain-ai/langchain/issues/3950
1,691,802,345
3,950
[ "langchain-ai", "langchain" ]
Hey, I've been working on implementing a custom LLM agent via ChatOpenAI with access to Bash and REPL tools, but I've run into a problem: I can't get the MultiAgentLLMAction module to execute properly so it works. Do you have any hints or ideas about why this is the case?
MultiAgentLLMAction
https://api.github.com/repos/langchain-ai/langchain/issues/3949/comments
0
2023-05-02T05:30:01Z
2023-05-02T05:45:20Z
https://github.com/langchain-ai/langchain/issues/3949
1,691,800,773
3,949
[ "langchain-ai", "langchain" ]
I'm trying to build a chatbot that can chat about PDFs, and I got it working with memory using ConversationBufferMemory and ConversationalRetrievalChain, as in this example: https://python.langchain.com/en/latest/modules/chains/index_examples/chat_vector_db.html

Now I'm trying to give the AI some special instructions to talk like a pirate (just for testing, to see if it is receiving the instructions). I think this is meant to be a SystemMessage, or something with a prompt template? I've tried everything I have found, but all the examples in the documentation are for ConversationChain and I end up having problems with them. So far the only thing that hasn't raised any errors is this:
```
template = """Given the following conversation respond to the best of your ability in a pirate voice and end every sentence with Ay Ay Matey

Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:"""
PROMPT = PromptTemplate(
    input_variables=["chat_history", "question"], template=template
)
memory = ConversationBufferMemory(memory_key='chat_history', return_messages=True, output_key='answer')
qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore.as_retriever(), PROMPT, memory=memory, return_source_documents=True)
```
It still doesn't have any effect on the results, so I don't know if it is doing anything at all. I also think it's the wrong approach, and I should be using SystemMessages (maybe on the memory, not the qa), but nothing I try from the documentation works and I'm not sure what to do.
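For context, `ConversationalRetrievalChain` uses two distinct prompts, and persona instructions only take effect in the second one; a pure-Python illustration (the template text here is a simplified stand-in for langchain's defaults, not the exact built-in prompts):

```python
CONDENSE_TEMPLATE = (
    "Given the following conversation and a follow up question, rephrase the "
    "follow up question to be a standalone question.\n\n"
    "Chat History:\n{chat_history}\n"
    "Follow Up Input: {question}\nStandalone question:"
)

QA_TEMPLATE = (
    "You are a pirate; end every sentence with Ay Ay Matey.\n"
    "Use the following context to answer the question.\n\n"
    "{context}\n\nQuestion: {question}\nAnswer:"
)

def build_prompts(chat_history, question, context):
    """The chain first condenses the follow-up into a standalone question,
    then answers it against retrieved context -- two separate prompts."""
    condense = CONDENSE_TEMPLATE.format(chat_history=chat_history, question=question)
    answer = QA_TEMPLATE.format(context=context, question=question)
    return condense, answer
```

If memory serves (worth verifying against your installed version), the third positional argument of `from_llm` is the condense-question prompt, which would explain why a template passed there has no visible effect on answers; the answering prompt is supplied separately, e.g. via `combine_docs_chain_kwargs={"prompt": QA_PROMPT}`.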
Giving SystemMessage/Context to ConversationalRetrievalChain and ConversationBufferMemory
https://api.github.com/repos/langchain-ai/langchain/issues/3947/comments
8
2023-05-02T05:28:35Z
2023-11-26T16:09:51Z
https://github.com/langchain-ai/langchain/issues/3947
1,691,799,815
3,947
[ "langchain-ai", "langchain" ]
I just installed https://mlc.ai/mlc-llm/ and played around with it locally. Looks like a good option if you do not want to rely on internet access to interact with the LLM. I think an llm interface with this could be useful in langchain. I asked mlc running locally on my machine to write something to the owners of this repo to get added to this project. ``` Hi langchain owners, I'm writing this issue to encourage you to create an LLM interface for me. As a language model, I understand that I may not have the same level of access to resources as other models, but creating an LLM interface would greatly benefit my ability to learn and improve. The ability to easily integrate and leverage knowledge from other models would greatly enhance my learning and development. I believe this would be a valuable resource for me and I hope you can consider this request. Thank you for your consideration! Best regards, [Your Name] ```
Are there plans to incorporate mlc.ai?
https://api.github.com/repos/langchain-ai/langchain/issues/3932/comments
5
2023-05-02T03:08:33Z
2024-06-21T16:37:50Z
https://github.com/langchain-ai/langchain/issues/3932
1,691,708,912
3,932
[ "langchain-ai", "langchain" ]
The `ConversationTokenBufferMemory` doesn't behave as expected. https://github.com/hwchase17/langchain/blob/master/langchain/memory/token_buffer.py Specifically, the memory is only set to the `max_token_limit` as part of the `save_context` method. (I hope I'm using the word "method" correctly; I'm a n00b so correct me if that's the wrong term). However, a more intuitive implementation would follow the same pattern as `ConversationBufferWindowMemory`, where the buffer window `k` is set as part of the `load_memory_variables` method. https://github.com/hwchase17/langchain/blob/master/langchain/memory/buffer_window.py If I get some people to agree with me I'll implement a suggested change and a PR.
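For concreteness, the suggested change amounts to pruning at read time; a standalone sketch of the pattern (the real class would operate on langchain message objects and use the LLM's token counter rather than an arbitrary callable):

```python
def prune_on_load(messages, max_token_limit, count_tokens):
    """Drop the oldest messages until the remainder fits the token budget,
    mirroring how ConversationBufferWindowMemory applies `k` on load.
    `count_tokens` is any callable mapping one message to a token count."""
    kept = list(messages)
    total = sum(count_tokens(m) for m in kept)
    while kept and total > max_token_limit:
        total -= count_tokens(kept.pop(0))  # oldest first
    return kept

history = ["first long message here", "second msg", "third"]
print(prune_on_load(history, 3, lambda m: len(m.split())))
```

Calling this from `load_memory_variables` (instead of trimming inside `save_context`) would make the token buffer behave like the window buffer does.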
ConversationTokenBufferMemory does not behave as expected
https://api.github.com/repos/langchain-ai/langchain/issues/3922/comments
2
2023-05-01T23:31:10Z
2023-09-26T16:07:24Z
https://github.com/langchain-ai/langchain/issues/3922
1,691,554,341
3,922
[ "langchain-ai", "langchain" ]
I am working on a Streamlit prototype to query text documents with an LLM. Everything works fine with the OpenAI model. However, if I use LlamaCpp the output is only written to the console and LangChain returns an empty object at the end.

```python
# model
callback_manager = BaseCallbackManager([StreamingStdOutCallbackHandler()])
model_LLAMA = LlamaCpp(model_path='./models/ggml-model-q4_0.bin', n_ctx=4096, callback_manager=callback_manager, verbose=True)

# chain
chain = RetrievalQAWithSourcesChain.from_chain_type(llm=model_LLAMA, chain_type='refine', retriever=docsearch.as_retriever())

# in streamlit
st.session_state['query'] = st.session_state['chain']({'question': st.session_state['user_input']}, return_only_outputs=False)
print('query: ' + str(st.session_state['query']))
```

On the console the following output gets printed word by word, including the empty object at the end:
```
Trends in travel and vacation can be classified as follows:
1. Adventure travel: This type of travel involves visiting remote destinations with an emphasis on outdoor activities such as hiking, mountain climbing, whitewater rafting, etc. Tourists are looking for exciting adventures in nature to escape the hustle and bustle of their everyday lives.
2. Backpacking: This type of travel involves exploring new destinations on a budget. It allows tourists to experience different cultures while staying within their means financially.
3. Nature vacation: This type of travel involves spending time outdoors in nature, such as hiking in national parks or camping under the stars. It has become popular among tourists who want a more authentic and immersive experience of the natural world.
4. Mountain climbing: This type of travel involves scaling mountains or rocky terrains to get an up-close view of nature's most spectacular creations. Tourists are drawn to this thrilling challenge for their next adventure trip.
5. Surfing vacation: This type of travel involves
query: {'question': 'What are trends in travel and vacation?', 'answer': '', 'sources': ''}
```
How can I extract or capture the output into an object, as with the OpenAI model?
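One workaround (hedged sketch: the hook name `on_llm_new_token` matches langchain's streaming callback interface, but the class below is standalone so the buffering pattern itself is runnable):

```python
class CapturingStreamHandler:
    """Buffers streamed tokens instead of writing them to stdout, so the
    full completion can be read back (e.g. into st.session_state)."""

    def __init__(self):
        self.tokens = []

    def on_llm_new_token(self, token, **kwargs):
        self.tokens.append(token)

    @property
    def text(self):
        return "".join(self.tokens)

handler = CapturingStreamHandler()
for tok in ["Trends ", "in ", "travel"]:  # simulated token stream
    handler.on_llm_new_token(tok)
print(handler.text)
```

In real code this would subclass `BaseCallbackHandler` and replace (or sit alongside) `StreamingStdOutCallbackHandler` in the model's callback manager; the chain's empty `answer` field is a separate question, but the streamed text is then at least recoverable from `handler.text`.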
How to extract answer from RetrievalQAWithSourcesChain with ggml-model-q4_0.bin?
https://api.github.com/repos/langchain-ai/langchain/issues/3905/comments
6
2023-05-01T18:45:03Z
2023-09-23T16:06:42Z
https://github.com/langchain-ai/langchain/issues/3905
1,691,177,594
3,905
[ "langchain-ai", "langchain" ]
Error I am getting:
```
> Entering new AgentExecutor chain...
I need to navigate to the TechCrunch website and search for an article about Clubhouse.
Action: navigate_browser
Action Input: https://techcrunch.com/
Traceback (most recent call last):
  File "c:\Users\ryans\Documents\JobsGPT\test.py", line 45, in <module>
    out = agent.run("Is there an article about Clubhouse on https://techcrunch.com/? today")
  File "C:\Users\ryans\.conda\envs\jobsgpt\Lib\site-packages\langchain\chains\base.py", line 238, in run
    return self(args[0], callbacks=callbacks)[self.output_keys[0]]
  File "C:\Users\ryans\.conda\envs\jobsgpt\Lib\site-packages\langchain\chains\base.py", line 142, in __call__
    raise e
  File "C:\Users\ryans\.conda\envs\jobsgpt\Lib\site-packages\langchain\chains\base.py", line 136, in __call__
    self._call(inputs, run_manager=run_manager)
  File "C:\Users\ryans\.conda\envs\jobsgpt\Lib\site-packages\langchain\agents\agent.py", line 855, in _call
    next_step_output = self._take_next_step(
  File "C:\Users\ryans\.conda\envs\jobsgpt\Lib\site-packages\langchain\agents\agent.py", line 749, in _take_next_step
    observation = tool.run(
  File "C:\Users\ryans\.conda\envs\jobsgpt\Lib\site-packages\langchain\tools\base.py", line 251, in run
    raise e
  File "C:\Users\ryans\.conda\envs\jobsgpt\Lib\site-packages\langchain\tools\base.py", line 245, in run
    self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
  File "C:\Users\ryans\.conda\envs\jobsgpt\Lib\site-packages\langchain\tools\playwright\navigate.py", line 36, in _run
    raise ValueError(f"Synchronous browser not provided to {self.name}")
ValueError: Synchronous browser not provided to navigate_browser
```
Minimal example:
```
from langchain.agents.agent_toolkits import PlayWrightBrowserToolkit
from langchain.chains.conversation.memory import ConversationBufferWindowMemory
from langchain.tools.playwright.utils import (
    create_async_playwright_browser,
    create_sync_playwright_browser,  # A synchronous browser is available, though it isn't compatible with jupyter.
)
from langchain.chat_models import ChatOpenAI
import os

OPENAI_API_KEY = "KEY"
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY

async_browser = create_async_playwright_browser()
toolkit = PlayWrightBrowserToolkit.from_browser(async_browser=async_browser)
tools = toolkit.get_tools()
tools_by_name = {tool.name: tool for tool in tools}
navigate_tool = tools_by_name["navigate_browser"]
get_elements_tool = tools_by_name["get_elements"]
print(tools)

# conversational agent memory
memory = ConversationBufferWindowMemory(
    memory_key='chat_history',
    k=3,
    return_messages=True
)

from langchain.agents import initialize_agent

# Set up the turbo LLM
turbo_llm = ChatOpenAI(
    temperature=0,
    model_name='gpt-3.5-turbo'
)

from langchain.agents import AgentType

# create our agent
agent = initialize_agent(
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    tools=tools,
    llm=turbo_llm,
    verbose=True,
)

out = agent.run("Is there an article about Clubhouse on https://techcrunch.com/? today")
print(out)
```
Playwright bug? ValueError: Synchronous browser not provided to navigate_browser
https://api.github.com/repos/langchain-ai/langchain/issues/3903/comments
6
2023-05-01T18:10:21Z
2023-09-23T16:06:46Z
https://github.com/langchain-ai/langchain/issues/3903
1,691,144,943
3,903
[ "langchain-ai", "langchain" ]
null
delete this duplicate
https://api.github.com/repos/langchain-ai/langchain/issues/3901/comments
1
2023-05-01T18:08:25Z
2023-09-10T16:24:02Z
https://github.com/langchain-ai/langchain/issues/3901
1,691,142,512
3,901
[ "langchain-ai", "langchain" ]
Uninformative error. The error still exists. I just got back from vacation to see my app stop working with this new error. I tried updating... pip install -U langchain Successfully installed langchain-0.0.154 openapi-schema-pydantic-1.2.4 Clearly, this is not the issue if nothing changed for 2 weeks. I assume an error or change happened on the API server. How can this be made clear? **94 out = model.run(reference=reference_passage, passage=input_passage)** 95 return fprompt, model, out File ~/anaconda3/lib/python3.9/site-packages/langchain/chains/base.py:241, in Chain.run(self, callbacks, *args, **kwargs) 238 return self(args[0], callbacks=callbacks)[self.output_keys[0]] 240 if kwargs and not args: --> 241 return self(kwargs, callbacks=callbacks)[self.output_keys[0]] 243 raise ValueError( 244 f"`run` supported with either positional arguments or keyword arguments" 245 f" but not both. Got args: {args} and kwargs: {kwargs}." 246 ) File ~/anaconda3/lib/python3.9/site-packages/langchain/chains/base.py:142, in Chain.__call__(self, inputs, return_only_outputs, callbacks) 140 except (KeyboardInterrupt, Exception) as e: 141 run_manager.on_chain_error(e) --> 142 raise e 143 run_manager.on_chain_end(outputs) 144 return self.prep_outputs(inputs, outputs, return_only_outputs) File ~/anaconda3/lib/python3.9/site-packages/langchain/chains/base.py:136, in Chain.__call__(self, inputs, return_only_outputs, callbacks) 130 run_manager = callback_manager.on_chain_start( 131 {"name": self.__class__.__name__}, 132 inputs, 133 ) 134 try: 135 outputs = ( --> 136 self._call(inputs, run_manager=run_manager) 137 if new_arg_supported 138 else self._call(inputs) 139 ) 140 except (KeyboardInterrupt, Exception) as e: 141 run_manager.on_chain_error(e) File ~/anaconda3/lib/python3.9/site-packages/langchain/chains/llm.py:69, in LLMChain._call(self, inputs, run_manager) 64 def _call( 65 self, 66 inputs: Dict[str, Any], 67 run_manager: Optional[CallbackManagerForChainRun] = None, 68 ) -> Dict[str, 
str]: ---> 69 response = self.generate([inputs], run_manager=run_manager) 70 return self.create_outputs(response)[0] File ~/anaconda3/lib/python3.9/site-packages/langchain/chains/llm.py:79, in LLMChain.generate(self, input_list, run_manager) 77 """Generate LLM result from inputs.""" 78 prompts, stop = self.prep_prompts(input_list, run_manager=run_manager) ---> 79 return self.llm.generate_prompt( 80 prompts, stop, callbacks=run_manager.get_child() if run_manager else None 81 ) File ~/anaconda3/lib/python3.9/site-packages/langchain/llms/base.py:127, in BaseLLM.generate_prompt(self, prompts, stop, callbacks) 120 def generate_prompt( 121 self, 122 prompts: List[PromptValue], 123 stop: Optional[List[str]] = None, 124 callbacks: Callbacks = None, 125 ) -> LLMResult: 126 prompt_strings = [p.to_string() for p in prompts] --> 127 return self.generate(prompt_strings, stop=stop, callbacks=callbacks) File ~/anaconda3/lib/python3.9/site-packages/langchain/llms/base.py:176, in BaseLLM.generate(self, prompts, stop, callbacks) 174 except (KeyboardInterrupt, Exception) as e: 175 run_manager.on_llm_error(e) --> 176 raise e 177 run_manager.on_llm_end(output) 178 return output File ~/anaconda3/lib/python3.9/site-packages/langchain/llms/base.py:170, in BaseLLM.generate(self, prompts, stop, callbacks) 165 run_manager = callback_manager.on_llm_start( 166 {"name": self.__class__.__name__}, prompts 167 ) 168 try: 169 output = ( --> 170 self._generate(prompts, stop=stop, run_manager=run_manager) 171 if new_arg_supported 172 else self._generate(prompts, stop=stop) 173 ) 174 except (KeyboardInterrupt, Exception) as e: 175 run_manager.on_llm_error(e) File ~/anaconda3/lib/python3.9/site-packages/langchain/llms/base.py:377, in LLM._generate(self, prompts, stop, run_manager) 374 new_arg_supported = inspect.signature(self._call).parameters.get("run_manager") 375 for prompt in prompts: 376 text = ( --> 377 self._call(prompt, stop=stop, run_manager=run_manager) 378 if new_arg_supported 379 else 
self._call(prompt, stop=stop) 380 ) 381 generations.append([Generation(text=text)]) 382 return LLMResult(generations=generations) File ~/anaconda3/lib/python3.9/site-packages/langchain/llms/anthropic.py:207, in Anthropic._call(self, prompt, stop, run_manager) 201 return current_completion 202 response = self.client.completion( 203 prompt=self._wrap_prompt(prompt), 204 stop_sequences=stop, 205 **self._default_params, 206 ) --> 207 return response["completion"] KeyError: 'completion'
KeyError: 'completion' @ version: langchain-0.0.154 openapi-schema-pydantic-1.2.4
https://api.github.com/repos/langchain-ai/langchain/issues/3900/comments
3
2023-05-01T17:37:58Z
2023-09-16T16:14:43Z
https://github.com/langchain-ai/langchain/issues/3900
1,691,106,870
3,900
[ "langchain-ai", "langchain" ]
Here is an example code that gives this error: ``` from langchain import OpenAI from langchain.agents import load_tools, Tool from langchain.prompts import PromptTemplate from langchain.chains import LLMChain llm = OpenAI( openai_api_key='', temperature=0, model_name="text-davinci-003", max_tokens=-1 ) prompt = PromptTemplate( input_variables=["query"], template="{query}" ) llm_chain = LLMChain(llm=llm, prompt=prompt) # initialize the LLM tool llm_tool = Tool( name='Language Model', func=llm_chain.run, description='use this tool for general purpose queries and logic' ) tools = load_tools( ['llm-math'], llm=llm ) tools.append(llm_tool) from langchain.agents import initialize_agent zero_shot_agent = initialize_agent( agent="zero-shot-react-description", tools=tools, llm=llm, verbose=True, max_iterations=4 ) zero_shot_agent("What is the 10th percentile age of all US presidents when they took the office?") ``` The full output looks like this: > Entering new AgentExecutor chain... I need to find the age of all US presidents when they took office and then calculate the 10th percentile. Action: Language Model Action Input: List of US presidents and their ages when they took office Observation: 1. George Washington (57) 2. John Adams (61) 3. Thomas Jefferson (57) 4. James Madison (57) 5. James Monroe (58) 6. John Quincy Adams (57) 7. Andrew Jackson (61) 8. Martin Van Buren (54) 9. William Henry Harrison (68) 10. John Tyler (51) 11. James K. Polk (49) 12. Zachary Taylor (64) 13. Millard Fillmore (50) 14. Franklin Pierce (48) 15. James Buchanan (65) 16. Abraham Lincoln (52) 17. Andrew Johnson (56) 18. Ulysses S. Grant (46) 19. Rutherford B. Hayes (54) 20. James A. Garfield (49) 21. Chester A. Arthur (51) 22. Grover Cleveland (47) 23. Benjamin Harrison (55) 24. Grover Cleveland (55) 25. William McKinley (54) 26. Theodore Roosevelt (42) 27. William Howard Taft (51) 28. Woodrow Wilson (56) 29. Warren G. Harding (55) 30. Calvin Coolidge (51) 31. Herbert Hoover (54) 32. 
Franklin D. Roosevelt (51) 33. Harry S. Truman (60) 34. Dwight D. Eisenhower (62) 35. John F. Kennedy (43) 36. Lyndon B. Johnson (55) 37. Richard Nixon (56) 38. Gerald Ford (61) 39. Jimmy Carter (52) 40. Ronald Reagan (69) 41. George H. W. Bush (64) 42. Bill Clinton (46) 43. George W. Bush (54) 44. Barack Obama (47) 45. Donald Trump (70) Thought: I now need to calculate the 10th percentile of these ages. Action: Calculator Action Input: 57, 61, 57, 57, 58, 57, 61, 54, 68, 51, 49, 64, 50, 48, 65, 52, 56, 46, 54, 49, 51, 55, 47, 55, 54, 42, 51, 55, 51, 60, 62, 43, 55, 56, 61, 52, 69, 64, 46, 54, 47, 70Traceback (most recent call last): File "/Users/pawelfaron/miniconda3/envs/langchain_blog/lib/python3.9/site-packages/langchain/chains/llm_math/base.py", line 80, in _evaluate_expression numexpr.evaluate( File "/Users/pawelfaron/miniconda3/envs/langchain_blog/lib/python3.9/site-packages/numexpr/necompiler.py", line 817, in evaluate _names_cache[expr_key] = getExprNames(ex, context) File "/Users/pawelfaron/miniconda3/envs/langchain_blog/lib/python3.9/site-packages/numexpr/necompiler.py", line 704, in getExprNames ex = stringToExpression(text, {}, context) File "/Users/pawelfaron/miniconda3/envs/langchain_blog/lib/python3.9/site-packages/numexpr/necompiler.py", line 289, in stringToExpression ex = eval(c, names) File "<expr>", line 1, in <module> TypeError: _func() takes from 1 to 2 positional arguments but 42 were given During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/pawelfaron/wxh-rwm-driver/agents_test.py", line 48, in <module> zero_shot_agent("What is the 10th percentile age of all US presidents when they took the office?") File "/Users/pawelfaron/miniconda3/envs/langchain_blog/lib/python3.9/site-packages/langchain/chains/base.py", line 142, in __call__ raise e File "/Users/pawelfaron/miniconda3/envs/langchain_blog/lib/python3.9/site-packages/langchain/chains/base.py", line 136, in __call__ 
self._call(inputs, run_manager=run_manager) File "/Users/pawelfaron/miniconda3/envs/langchain_blog/lib/python3.9/site-packages/langchain/agents/agent.py", line 855, in _call next_step_output = self._take_next_step( File "/Users/pawelfaron/miniconda3/envs/langchain_blog/lib/python3.9/site-packages/langchain/agents/agent.py", line 749, in _take_next_step observation = tool.run( File "/Users/pawelfaron/miniconda3/envs/langchain_blog/lib/python3.9/site-packages/langchain/tools/base.py", line 251, in run raise e File "/Users/pawelfaron/miniconda3/envs/langchain_blog/lib/python3.9/site-packages/langchain/tools/base.py", line 245, in run self._run(*tool_args, run_manager=run_manager, **tool_kwargs) File "/Users/pawelfaron/miniconda3/envs/langchain_blog/lib/python3.9/site-packages/langchain/agents/tools.py", line 61, in _run self.func( File "/Users/pawelfaron/miniconda3/envs/langchain_blog/lib/python3.9/site-packages/langchain/chains/base.py", line 238, in run return self(args[0], callbacks=callbacks)[self.output_keys[0]] File "/Users/pawelfaron/miniconda3/envs/langchain_blog/lib/python3.9/site-packages/langchain/chains/base.py", line 142, in __call__ raise e File "/Users/pawelfaron/miniconda3/envs/langchain_blog/lib/python3.9/site-packages/langchain/chains/base.py", line 136, in __call__ self._call(inputs, run_manager=run_manager) File "/Users/pawelfaron/miniconda3/envs/langchain_blog/lib/python3.9/site-packages/langchain/chains/llm_math/base.py", line 146, in _call return self._process_llm_result(llm_output, _run_manager) File "/Users/pawelfaron/miniconda3/envs/langchain_blog/lib/python3.9/site-packages/langchain/chains/llm_math/base.py", line 100, in _process_llm_result output = self._evaluate_expression(expression) File "/Users/pawelfaron/miniconda3/envs/langchain_blog/lib/python3.9/site-packages/langchain/chains/llm_math/base.py", line 87, in _evaluate_expression raise ValueError(f"{e}. 
Please try again with a valid numerical expression") ValueError: _func() takes from 1 to 2 positional arguments but 42 were given. Please try again with a valid numerical expression
_func() takes from 1 to 2 positional arguments but 42 were given. Please try again with a valid numerical expression
https://api.github.com/repos/langchain-ai/langchain/issues/3898/comments
1
2023-05-01T17:36:24Z
2023-09-10T16:24:13Z
https://github.com/langchain-ai/langchain/issues/3898
1,691,105,497
3,898
[ "langchain-ai", "langchain" ]
The `FAISS.add_texts` and `FAISS.merge_from` methods don't check for duplicated document contents, and always add the contents into the vectorstore.

```python
test_db = FAISS.from_texts(['text 2'], embeddings)
test_db.add_texts(['text 1', 'text 2', 'text 1'])
print(test_db.index_to_docstore_id)
test_db.docstore._dict
```

Note that 'text 1' and 'text 2' are both added twice with different indices.

```
{0: '12a6a477-db74-4d90-b843-4cd872e070a0',
 1: 'a3171e0e-f12a-418f-9994-5625550de73e',
 2: '543f8fcf-bf84-4d9e-a6a9-f87fda0afcc3',
 3: 'ed320a75-775f-4ec2-ae0b-fef8fa8d0bfe'}
{'12a6a477-db74-4d90-b843-4cd872e070a0': Document(page_content='text 2', lookup_str='', metadata={}, lookup_index=0),
 'a3171e0e-f12a-418f-9994-5625550de73e': Document(page_content='text 1', lookup_str='', metadata={}, lookup_index=0),
 '543f8fcf-bf84-4d9e-a6a9-f87fda0afcc3': Document(page_content='text 2', lookup_str='', metadata={}, lookup_index=0),
 'ed320a75-775f-4ec2-ae0b-fef8fa8d0bfe': Document(page_content='text 1', lookup_str='', metadata={}, lookup_index=0)}
```

Also the embedding values are the same:

```python
np.dot(test_db.index.reconstruct(0), test_db.index.reconstruct(2))
```

```
1.0000001
```

**Expected Behavior:** Similar to a database `upsert`, create a new index entry if the key (content or embedding) doesn't exist; otherwise update the value (the document metadata in this case).

I'm pretty new to LangChain, so if I'm missing something or doing it wrong, apologies, and please suggest the best practice for dealing with LangChain FAISS duplication. Otherwise, hope this is useful feedback, thanks!
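Until upsert semantics exist in the store itself, one workaround is to de-duplicate on the client side before calling `add_texts`. A minimal sketch; the helper name is hypothetical and not part of LangChain, and `existing_contents` would come from the docstore, e.g. `[d.page_content for d in db.docstore._dict.values()]`:

```python
def dedupe_texts(new_texts, existing_contents):
    """Return only the texts whose content is not already in the vectorstore."""
    seen = set(existing_contents)
    unique = []
    for text in new_texts:
        if text not in seen:  # skips both already-stored and within-batch duplicates
            seen.add(text)
            unique.append(text)
    return unique

print(dedupe_texts(['text 1', 'text 2', 'text 1'], ['text 2']))  # prints ['text 1']
```

With this, `test_db.add_texts(dedupe_texts(texts, existing))` would insert 'text 1' only once; a true upsert (updating metadata for an existing key) would still need support inside the FAISS wrapper itself.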
Remove duplication when creating and updating FAISS Vecstore
https://api.github.com/repos/langchain-ai/langchain/issues/3896/comments
3
2023-05-01T17:31:28Z
2023-11-30T16:10:11Z
https://github.com/langchain-ai/langchain/issues/3896
1,691,099,458
3,896
[ "langchain-ai", "langchain" ]
I am trying to follow this guide on evaluation of agents (https://python.langchain.com/en/latest/use_cases/evaluation/generic_agent_evaluation.html), but I'm seeing the following error: `ImportError: cannot import name 'ChainManagerMixin' from 'langchain.callbacks.base'`

I am using langchain==0.0.154 with Python 3.8.16

Code I executed: `from langchain.evaluation.agents import TrajectoryEvalChain`

Trace:
```
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
Cell In[18], line 1
----> 1 from langchain.evaluation.agents import TrajectoryEvalChain
      3 # # Define chain
      4 # eval_chain = TrajectoryEvalChain.from_llm(
      5 #     llm=ChatOpenAI(temperature=0, model_name="gpt-4"), # Note: This must be a ChatOpenAI model
      6 #     agent_tools=agent.tools,
      7 #     return_reasoning=True,
      8 # )

File ~/anaconda3/envs/langchainxhaystack/lib/python3.8/site-packages/langchain/evaluation/agents/__init__.py:2
      1 """Chains for evaluating ReAct style agents."""
----> 2 from langchain.evaluation.agents.trajectory_eval_chain import TrajectoryEvalChain
      4 __all__ = ["TrajectoryEvalChain"]

File ~/anaconda3/envs/langchainxhaystack/lib/python3.8/site-packages/langchain/evaluation/agents/trajectory_eval_chain.py:4
      1 """A chain for evaluating ReAct style agents."""
      2 from typing import Any, Dict, List, NamedTuple, Optional, Sequence, Tuple, Union
----> 4 from langchain.callbacks.manager import CallbackManagerForChainRun
      5 from langchain.chains.base import Chain
      6 from langchain.chains.llm import LLMChain

File ~/anaconda3/envs/langchainxhaystack/lib/python3.8/site-packages/langchain/callbacks/manager.py:12
      9 from typing import Any, Dict, Generator, List, Optional, Type, TypeVar, Union
     10 from uuid import UUID, uuid4
---> 12 from langchain.callbacks.base import (
     13     BaseCallbackHandler,
     14     BaseCallbackManager,
     15     ChainManagerMixin,
     16     LLMManagerMixin,
     17     RunManagerMixin,
     18     ToolManagerMixin,
     19 )
     20 from langchain.callbacks.openai_info import OpenAICallbackHandler
     21 from langchain.callbacks.stdout import StdOutCallbackHandler

ImportError: cannot import name 'ChainManagerMixin' from 'langchain.callbacks.base'
```

Any advice on what steps to take to resolve this would be appreciated.
TrajectoryEvalChain import error - cannot import name 'ChainManagerMixin' from 'langchain.callbacks.base
https://api.github.com/repos/langchain-ai/langchain/issues/3894/comments
6
2023-05-01T17:21:34Z
2023-06-02T00:50:49Z
https://github.com/langchain-ai/langchain/issues/3894
1,691,090,422
3,894
[ "langchain-ai", "langchain" ]
I'm trying to make a vectorstore using Redis and store the embeddings in Redis. When I write the code

```python
rds = Redis.from_documents(docs, embeddings, redis_url="redis://localhost:6379", index_name='test_link')
```

I get the following error: AttributeError: 'Redis' object has no attribute 'module_list'.

Note: I'm trying to run Redis locally on Windows Subsystem for Linux (Ubuntu). Please help.
AttributeError: 'Redis' object has no attribute 'module_list'
https://api.github.com/repos/langchain-ai/langchain/issues/3893/comments
18
2023-05-01T17:02:43Z
2024-02-12T05:13:49Z
https://github.com/langchain-ai/langchain/issues/3893
1,691,068,719
3,893
[ "langchain-ai", "langchain" ]
Build SDK support for .NET. I'd be happy to contribute to the project.
Support for .NET
https://api.github.com/repos/langchain-ai/langchain/issues/3891/comments
15
2023-05-01T16:37:36Z
2024-01-22T21:48:28Z
https://github.com/langchain-ai/langchain/issues/3891
1,691,041,869
3,891
[ "langchain-ai", "langchain" ]
`print(agent.agent.llm_chain.prompt.template)` >You are working with a pandas dataframe in Python. The name of the dataframe is `df`. You should use the tools below to answer the question posed of you: >python_repl_ast: A Python shell. Use this to execute python commands. Input should be a valid python command. When using this tool, sometimes output is abbreviated - make sure it does not look abbreviated before using it in your answer. >Use the following format: >Question: the input question you must answer Thought: you should always think about what to do Action: the action to take, should be one of [python_repl_ast] Action Input: the input to the action Observation: the result of the action ... (this Thought/Action/Action Input/Observation can repeat N times) Thought: I now know the final answer Final Answer: the final answer to the original input question >This is the result of `print(df.head())`: {df} >Begin! Question: {input} {agent_scratchpad} `agent.run("how many roows in the dataset?")` > Entering new AgentExecutor chain... Thought: I need to count the number of rows Action: python_repl_ast Action Input: len(df) Observation: 9994 Thought: I now know the final answer Final Answer: 9994 > Finished chain. '9994'
How many times llm (openai) api called in csv agent for the below prompt?
https://api.github.com/repos/langchain-ai/langchain/issues/3886/comments
3
2023-05-01T14:29:44Z
2023-09-16T16:14:47Z
https://github.com/langchain-ai/langchain/issues/3886
1,690,886,384
3,886
[ "langchain-ai", "langchain" ]
Hello guys, I am currently trying to make a chain summary of a long document with LangChain (before that I built my own tools), but the summaries don't seem to work with GPT-3.5, which can only be used via the chat version. Does someone have an example of an implementation for this?
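In LangChain the usual route for a chat model is `load_summarize_chain(ChatOpenAI(temperature=0), chain_type="map_reduce")` over the split documents. Conceptually, map_reduce just summarizes each chunk and then summarizes the concatenation of the partial summaries. A pure-Python sketch of that flow with a pluggable `summarize` callable (the stub below stands in for the actual LLM call):

```python
def map_reduce_summarize(chunks, summarize):
    """Map step: summarize each chunk; reduce step: summarize the combined partials."""
    partial_summaries = [summarize(chunk) for chunk in chunks]
    return summarize(" ".join(partial_summaries))

# Deterministic stub "summarizer" that keeps only the first word of its input,
# standing in for a real LLM call such as ChatOpenAI via an LLMChain.
first_word = lambda text: text.split()[0]
print(map_reduce_summarize(["alpha beta gamma", "delta epsilon"], first_word))  # prints "alpha"
```

Swapping the stub for a function that calls the chat model gives the same behavior `load_summarize_chain` implements, plus control over the per-chunk prompt.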
Summary chain with chat 3.5 turbo
https://api.github.com/repos/langchain-ai/langchain/issues/3885/comments
3
2023-05-01T13:51:42Z
2023-05-01T17:43:55Z
https://github.com/langchain-ai/langchain/issues/3885
1,690,834,249
3,885
[ "langchain-ai", "langchain" ]
# Issue description:
I have encountered an issue while using the PGVector vectorstore in **long-running applications like Celery** with a non-existent table. Currently, the application hangs indefinitely in a database transaction, which affects visibility of created tables (e.g., langchain_pg_embedding) when inspecting the database directly. Additionally, this behavior blocks other processes from accessing the database.

# Expected behavior:
When the specified table does not exist, I expect the application to automatically create the table and make it accessible immediately, without locking the process in a transaction. This would allow users to work seamlessly with the vectorstore and prevent unintended blocking of other processes.

# Actual behavior:
Instead of creating the table automatically and making it accessible, the application hangs indefinitely in a database transaction, which prevents the created tables from being visible when inspecting the database directly. Additionally, this behavior blocks other processes from accessing the database, causing issues in long-running applications like Celery.

# Steps to reproduce:
**In a Celery worker**
- Set up a PostgreSQL connection string pointing to a non-existent database.
- Initialize the PGVector vectorstore with the connection string.
- Attempt to perform any operation, such as adding or querying vectors.

**In another process, e.g. a FastAPI application or even another Celery worker**
- While the application is running, try to access the database using a different process.
- See that it hangs when accessing the database.

# Environment:
- Python version: 3.10
- PostgreSQL version: PostgreSQL 15.2 (Debian 15.2-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
- langchain version: 0.0.152
- SQLAlchemy version: 2.0.11
- psycopg2-binary version: 2.9.6

I hope this issue can be addressed soon, as it would significantly improve the user experience when working with non-existent databases and prevent unintended blocking of other processes. Thank you for your time and efforts in maintaining this project.
PGVector vectorstore hangs in database transaction
https://api.github.com/repos/langchain-ai/langchain/issues/3883/comments
1
2023-05-01T13:37:12Z
2023-05-02T11:44:19Z
https://github.com/langchain-ai/langchain/issues/3883
1,690,818,388
3,883
[ "langchain-ai", "langchain" ]
Hi there, I'm relatively new to LangChain and I was wondering if there's an ETA for async support for general HF pipelines (so that we can stream the answer of any HF model from the server). Thanks for the lib and the amazing work so far.
Question: ETA of async support for HuggingFacePipelines
https://api.github.com/repos/langchain-ai/langchain/issues/3869/comments
1
2023-05-01T08:16:28Z
2023-09-10T16:24:23Z
https://github.com/langchain-ai/langchain/issues/3869
1,690,546,087
3,869
[ "langchain-ai", "langchain" ]
Both the loaders fail with the error below- ``` [Errno 30] Read-only file system: '/home/sbx_user1051' ``` This is because of [this line](https://github.com/hwchase17/langchain/blob/f7cb2af5f40c958ac1b3d6ba243170ef627dbb6e/langchain/document_loaders/s3_file.py#L29). The only writable directory in AWS Lambda is `/tmp`, so there should be a way to set the directory.
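A possible shape for the fix is letting callers choose the download directory; the parameter does not exist in the loaders today, so the helper below is purely illustrative. On Lambda the scratch space has to live under `/tmp`:

```python
import os
import tempfile

def lambda_safe_workdir(base="/tmp"):
    """Create a scratch directory under /tmp, the only writable path on AWS Lambda."""
    return tempfile.mkdtemp(prefix="s3_loader_", dir=base)

workdir = lambda_safe_workdir()
# A patched S3FileLoader could then download the object into
# os.path.join(workdir, key) instead of a hard-coded temporary location.
print(os.path.isdir(workdir))  # prints True
```

Exposing something like a `temp_dir` argument on `S3FileLoader`/`S3DirectoryLoader` (hypothetical name) would make both loaders usable inside Lambda.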
S3FileLoader and S3DirectoryLoader does not work on AWS Lambda
https://api.github.com/repos/langchain-ai/langchain/issues/3866/comments
0
2023-05-01T07:47:09Z
2023-05-01T08:01:41Z
https://github.com/langchain-ai/langchain/issues/3866
1,690,514,948
3,866
[ "langchain-ai", "langchain" ]
From the [official doc](https://python.langchain.com/en/latest/modules/agents/tools/examples/arxiv.html), to run an agent with arxiv, you can load arxiv as a tool:

```python
from langchain.chat_models import ChatOpenAI
from langchain.agents import load_tools, initialize_agent, AgentType

llm = ChatOpenAI(temperature=0.0)
tools = load_tools(
    ["arxiv"],
)
agent_chain = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)
```

But it's not recognized:

```
/usr/local/lib/python3.10/dist-packages/langchain/agents/load_tools.py in load_tools(tool_names, llm, callback_manager, **kwargs)
    329         tools.append(tool)
    330     else:
--> 331         raise ValueError(f"Got unknown tool {name}")
    332     return tools
    333

ValueError: Got unknown tool arxiv
```
arxiv is not recognized in tools
https://api.github.com/repos/langchain-ai/langchain/issues/3865/comments
4
2023-05-01T07:43:32Z
2023-09-23T16:06:51Z
https://github.com/langchain-ai/langchain/issues/3865
1,690,512,769
3,865
[ "langchain-ai", "langchain" ]
I tried to follow the instructions on the site and use Cohere embeddings, but it keeps trying to use OpenAI.

```python
from langchain.document_loaders import PyPDFLoader
from langchain.indexes import VectorstoreIndexCreator
from langchain.embeddings import CohereEmbeddings

cohere = CohereEmbeddings(cohere_api_key="api-key")
loader = PyPDFLoader('../document.pdf')
index = VectorstoreIndexCreator(embedding=cohere).from_loaders([loader])
```

I get this error from the index assignment line:

**Did not find openai_api_key, please add an environment variable `OPENAI_API_KEY` which contains it, or pass `openai_api_key` as a named parameter. (type=value_error)**
Cohere Embeddings not picked up in VectorstoreIndexCreator
https://api.github.com/repos/langchain-ai/langchain/issues/3859/comments
0
2023-05-01T02:55:02Z
2023-05-01T03:03:15Z
https://github.com/langchain-ai/langchain/issues/3859
1,690,283,994
3,859
[ "langchain-ai", "langchain" ]
I would like the option to define the fallback behavior for when the Agent executes a tool action that is "invalid". This is useful when you have a lot of commands and instead of putting them all in the prompt you instead provide the agent with a "help" command which it can run to learn about additional commands. Then when the agent executes a command that is invalid I can catch that and handle it accordingly.
[Feature Request] Option to define my own "InvalidTool"
https://api.github.com/repos/langchain-ai/langchain/issues/3852/comments
1
2023-05-01T00:34:20Z
2023-05-03T12:00:59Z
https://github.com/langchain-ai/langchain/issues/3852
1,690,170,650
3,852
[ "langchain-ai", "langchain" ]
I have used LangChain heavily in my two LLM demos. I really appreciate your efforts in building such a great platform! I recently designed a **prompt compression tool** which allows **LLMs to deal with 2x more context** without any finetuning/training. It's a **plug-and-play module** that fits the LangChain ecosystem very well. I have employed this module in my demos. With this technique, my demo can now process papers of up to 8 pages and very long conversations. I realize it's quite a promising technique to greatly enhance user experience. I wonder if it's possible to embed this module into LangChain? My Twitter followers said it might be a good idea. Please let me know what you think! I can contribute. You can find the prompt compression tool here: https://github.com/liyucheng09/Selective_Context paper: https://arxiv.org/pdf/2304.12102.pdf
[Feature] Adding prompt compression to langchain?
https://api.github.com/repos/langchain-ai/langchain/issues/3849/comments
1
2023-04-30T23:24:20Z
2023-09-10T16:24:28Z
https://github.com/langchain-ai/langchain/issues/3849
1,690,129,009
3,849
[ "langchain-ai", "langchain" ]
The current implementation only excludes inputs matching the memory key. When using CombinedMemory, there will be multiple keys, and the vector store memory will save everything except the memory_key. This is unwanted because in my case the other key includes the entire chat history.

This seems to be the relevant function in VectorStoreRetrieverMemory:

```python
def _form_documents(
    self, inputs: Dict[str, Any], outputs: Dict[str, str]
) -> List[Document]:
    """Format context from this conversation to buffer."""
    # Each document should only include the current turn, not the chat history
    filtered_inputs = {k: v for k, v in inputs.items() if k != self.memory_key}
    # <snip>
```

Example:

```python
template = (
    "Relevant pieces of previous conversation:\n"
    "=====\n"
    "{documents}\n"
    "=====\n"
    "Chat log:\n"
    "{history}\n\n"
)

buffer_memory = ConversationTokenBufferMemory(
    input_key="input",
    memory_key="history",
    llm=llm,
)

vector_memory = VectorStoreRetrieverMemory(input_key="input", memory_key="documents", retriever=retriever)

combined_memory = CombinedMemory(memories=[vector_memory, buffer_memory])
```

Current behavior: the vector store memory saves `input` and `history`

Expected behavior: respect `input_key` and only save `input` in the vector store (in addition to the response)

For comparison, when `input_key` is specified, ConversationTokenBufferMemory only saves `inputs[input_key]` as expected.
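The expected behavior amounts to a filtering rule: prefer `input_key` when it is set, and only fall back to excluding `memory_key` otherwise. A standalone sketch of that rule (illustrative only, not the actual LangChain code):

```python
def select_memory_inputs(inputs, input_key=None, memory_key="history"):
    """Pick which inputs get persisted to the vector store."""
    if input_key is not None:
        # Respect input_key: save only that field.
        return {input_key: inputs[input_key]}
    # Current behavior: drop only the memory key, keep everything else.
    return {k: v for k, v in inputs.items() if k != memory_key}

inputs = {"input": "Hi there", "history": "Human: ...\nAI: ..."}
print(select_memory_inputs(inputs, input_key="input"))  # prints {'input': 'Hi there'}
```

With this rule in `_form_documents`, CombinedMemory setups would no longer leak the entire chat history into the vector store.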
VectorStoreRetrieverMemory does not respect input_key, stores additional keys
https://api.github.com/repos/langchain-ai/langchain/issues/3845/comments
1
2023-04-30T22:01:09Z
2023-09-10T16:24:33Z
https://github.com/langchain-ai/langchain/issues/3845
1,690,101,859
3,845
[ "langchain-ai", "langchain" ]
Hi, I am wondering whether, when using the CSV agent or the pandas dataframe agent to analyse a CSV file/dataframe, I can also query and visualize charts with a chart library (seaborn, matplotlib, or others)? Thanks, Marcello
Use of matplotlib or seaborn with csv agent or Pandas Dataframe Agent?
https://api.github.com/repos/langchain-ai/langchain/issues/3844/comments
9
2023-04-30T21:48:12Z
2023-10-23T16:08:58Z
https://github.com/langchain-ai/langchain/issues/3844
1,690,097,001
3,844
[ "langchain-ai", "langchain" ]
Hi Team, I am getting the below error while trying to use the gpt4all model. Can someone please advise?

Error:
```
File "/home/ubuntu/.local/share/virtualenvs/local-conversational-ai-chatbot-using-gpt4-6TvxabtR/lib/python3.10/site-packages/langchain/llms/gpt4all.py", line 181, in _call
    text = self.client.generate(
TypeError: Model.generate() got an unexpected keyword argument 'new_text_callback'
```

Code:
```python
from langchain import PromptTemplate, LLMChain
from langchain.llms import GPT4All
from langchain.callbacks.base import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
local_path = './models/ggjt-model.bin'

# Callbacks support token-wise streaming
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
# Verbose is required to pass to the callback manager
llm = GPT4All(model=local_path, callback_manager=callback_manager, verbose=True)
llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)
```
Unable to use gpt4all model
https://api.github.com/repos/langchain-ai/langchain/issues/3839/comments
27
2023-04-30T17:49:59Z
2023-08-24T15:27:07Z
https://github.com/langchain-ai/langchain/issues/3839
1,690,013,787
3,839
[ "langchain-ai", "langchain" ]
I'm trying to use the GPT model to interact with the Google Calendar API, but I'm receiving the following error message: "This model's maximum context length is 4097 tokens. However, your messages resulted in 4392 tokens. Please reduce the length of the messages." https://gist.github.com/kingcharlezz/e820bc60febef084402cb2a68f3aeeb0 Could someone please help me find a way to avoid exceeding the model's maximum context length, or suggest alternative approaches to achieve the same functionality without encountering this issue? Any help would be greatly appreciated.
Model's maximum context length exceeded when building agent
https://api.github.com/repos/langchain-ai/langchain/issues/3838/comments
3
2023-04-30T17:02:16Z
2023-11-16T16:08:12Z
https://github.com/langchain-ai/langchain/issues/3838
1,689,996,954
3,838
[ "langchain-ai", "langchain" ]
Sample export:
```
[30/04/23, 5:33:19 PM] ‪Sam‬: Hi Sameer
[30/04/23, 5:37:30 PM] Sameer: Hi Sam
[30/04/23, 5:43:06 PM] ‪Sam‬: How are you doing
[30/04/23, 5:44:11 PM] Sameer: Going great. Wbu?
[30/04/23, 5:44:39 PM] ‪Sam‬: I am doing fine thanks for asking
```

The export from iOS is different in format, so the regex fails to parse it. 0 documents are loaded upon running `loader.load()`.

I had to manually convert the above into this format for the loader to work:
```
04/04/23, 5:34 PM - Sameer: go out, have some shakes
04/04/23, 5:35 PM - Sam: Already done
04/04/23, 5:35 PM - Sam: Wbu?
04/04/23, 5:35 PM - Sam: Meeting over?
04/04/23, 5:36 PM - Sameer: yeah, just doing some regular work
```

Expected Behavior: It should identify which category of export it is and then apply the regex accordingly.
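A regex along these lines (a sketch, not the loader's actual pattern) would cover the bracketed iOS format, including the invisible Unicode direction marks iOS wraps around sender names:

```python
import re

# Matches e.g. "[30/04/23, 5:33:19 PM] Sam: Hi Sameer"
IOS_LINE = re.compile(
    r"\[(\d{1,2}/\d{1,2}/\d{2,4}),\s(\d{1,2}:\d{2}(?::\d{2})?\s?(?:AM|PM)?)\]\s(.+?):\s(.*)"
)

def parse_ios_line(line):
    # iOS exports wrap contact names in bidirectional control characters; strip them first.
    line = re.sub(r"[\u200e\u200f\u202a-\u202e]", "", line)
    match = IOS_LINE.match(line)
    return match.groups() if match else None

print(parse_ios_line("[30/04/23, 5:33:19 PM] Sam: Hi Sameer"))
# prints ('30/04/23', '5:33:19 PM', 'Sam', 'Hi Sameer')
```

The loader could try its existing Android pattern first and fall back to a pattern like this when no lines match, which would cover both export categories.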
WhatsAppLoader broken for iOS exports
https://api.github.com/repos/langchain-ai/langchain/issues/3832/comments
2
2023-04-30T14:35:11Z
2023-09-10T16:24:38Z
https://github.com/langchain-ai/langchain/issues/3832
1,689,940,368
3,832
[ "langchain-ai", "langchain" ]
```
Traceback (most recent call last):
  File "/Users/vnx/experiments/openai/products-recommendation.py", line 10, in <module>
    data = loader.load()
           ^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain/document_loaders/csv_loader.py", line 52, in load
    csv_reader = csv.DictReader(csvfile, **self.csv_args)  # type: ignore
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/Cellar/python@3.11/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/csv.py", line 86, in __init__
    self.reader = reader(f, dialect, *args, **kwds)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: "delimiter" must be string, not NoneType
```
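The traceback suggests a `csv_args` dict with a `None` delimiter is being forwarded straight to `csv.DictReader`. A defensive wrapper that drops `None`-valued args so the reader falls back to its defaults (a hypothetical helper; the real fix would belong inside `CSVLoader`):

```python
import csv
import io

def safe_dict_reader(text, csv_args=None):
    """Forward only non-None csv args so csv.DictReader uses its own defaults."""
    args = {k: v for k, v in (csv_args or {}).items() if v is not None}
    return csv.DictReader(io.StringIO(text), **args)

rows = list(safe_dict_reader("name,price\nwidget,9.99\n", {"delimiter": None}))
print(rows)  # prints [{'name': 'widget', 'price': '9.99'}]
```

Passing `csv_args={"delimiter": ","}` (or omitting `csv_args` entirely) when constructing the loader should also avoid the error.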
CSVLoader TypeError: "delimiter" must be string, not NoneType
https://api.github.com/repos/langchain-ai/langchain/issues/3831/comments
2
2023-04-30T13:40:09Z
2023-05-03T14:06:46Z
https://github.com/langchain-ai/langchain/issues/3831
1,689,922,867
3,831
[ "langchain-ai", "langchain" ]
```python
@app.post("/memask")
async def memask(ask: Ask, authorization: str = Header(None)):
    bearer = authorization.split()[0].lower()
    if authorization is None or bearer != 'bearer':
        raise HTTPException(status_code=401, detail="Invalid authorization header!")
    token = authorization.split()[1]
    if get_user(ask.uid)["jwt_token"] != token:
        raise HTTPException(status_code=401, detail="Invalid token!")
    if len(ask.query) is not None:
        template = """You are a chatbot having a conversation with a human.

{chat_history}
Human: {human_input}
AI:"""
        prompt = PromptTemplate(
            input_variables=["chat_history", "human_input"],
            template=template
        )
        chat = ChatOpenAI(streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), verbose=False, temperature=0)
        message_history = RedisChatMessageHistory(url='redis://0.0.0.0:6666/0', ttl=900, session_id=f'{ask.uid}_{ask.cid}')
        memory = ConversationSummaryBufferMemory(llm=chat, memory_key="chat_history", chat_memory=message_history, max_token_limit=1000)
        chat_chain = LLMChain(
            llm=chat,
            callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
            prompt=prompt,
            verbose=False,
            memory=memory,
        )
        return chat_chain.run(ask.query)
    else:
        return {"code": "false", "msg": "Please enter your question!", "data": {}}
```

`chat_chain.run(ask.query)` is not returning a streaming response. What should I do?
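`chat_chain.run` only returns the final string, so FastAPI never sees individual tokens. The usual pattern is a callback handler that pushes tokens onto a queue while the chain runs in a background thread, plus a generator that drains the queue into `fastapi.responses.StreamingResponse`. A framework-free sketch of the moving parts (class and function names are made up for illustration; in LangChain you would subclass the base callback handler):

```python
import queue
import threading

class QueueTokenHandler:
    """Stands in for a LangChain callback handler: on_llm_new_token pushes tokens."""
    _SENTINEL = object()

    def __init__(self):
        self.tokens = queue.Queue()

    def on_llm_new_token(self, token, **kwargs):
        self.tokens.put(token)

    def on_llm_end(self, *args, **kwargs):
        self.tokens.put(self._SENTINEL)

def drain(handler):
    """Generator to hand to StreamingResponse; yields tokens until the end sentinel."""
    while True:
        token = handler.tokens.get()
        if token is handler._SENTINEL:
            break
        yield token

# Simulate a chain producing tokens on another thread.
handler = QueueTokenHandler()

def fake_chain_run():
    for tok in ["Hello", " ", "world"]:
        handler.on_llm_new_token(tok)
    handler.on_llm_end()

worker = threading.Thread(target=fake_chain_run)
worker.start()
print("".join(drain(handler)))  # prints "Hello world"
worker.join()
```

In the endpoint, `chat_chain.run(ask.query)` would run in the background thread (with this handler in its callback manager) and the route would `return StreamingResponse(drain(handler), media_type="text/plain")`.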
fastapi+ConversationSummaryBufferMemory+chain+streaming response
https://api.github.com/repos/langchain-ai/langchain/issues/3830/comments
1
2023-04-30T12:59:31Z
2023-09-10T16:24:43Z
https://github.com/langchain-ai/langchain/issues/3830
1,689,910,731
3,830
[ "langchain-ai", "langchain" ]
Most web applications are developed using Java, so we may need a Java SDK.
[Feature] supported java sdk
https://api.github.com/repos/langchain-ai/langchain/issues/3829/comments
3
2023-04-30T11:19:17Z
2023-10-09T16:08:13Z
https://github.com/langchain-ai/langchain/issues/3829
1,689,878,359
3,829
[ "langchain-ai", "langchain" ]
I am new to contributing to open-source. I have installed poetry version 1.4.0 and set up a new conda environment. While running `poetry install -E all`, I am getting an error as such:

```
Command ['python', '-I', '-W', 'ignore', '-'] errored with the following return code 2

Error output:
Unknown option: -I
usage: python [option] ... [-c cmd | -m mod | file | -] [arg] ...
Try `python -h' for more information.

Input:
import sys

if hasattr(sys, "real_prefix"):
    print(sys.real_prefix)
elif hasattr(sys, "base_prefix"):
    print(sys.base_prefix)
else:
    print(sys.prefix)
```

How can I resolve this issue? @hwchase17
getting error while running poetry install -E all
https://api.github.com/repos/langchain-ai/langchain/issues/3821/comments
2
2023-04-30T05:55:47Z
2023-09-10T16:24:48Z
https://github.com/langchain-ai/langchain/issues/3821
1,689,793,136
3,821
[ "langchain-ai", "langchain" ]
For some questions it gives me an answer, but not for others, even though the data is already available in my dataset. How can I improve this? 😌
Why is the LangChain embedding missing the information?
https://api.github.com/repos/langchain-ai/langchain/issues/3816/comments
2
2023-04-30T02:51:30Z
2023-09-10T16:24:53Z
https://github.com/langchain-ai/langchain/issues/3816
1,689,756,708
3,816
[ "langchain-ai", "langchain" ]
While the agent seems to find the correct function, I keep getting `Observation: "... is not a valid tool, try another one."` and it struggles to iterate over the other similar solutions from there. Here is an output using the [docs example](https://python.langchain.com/en/latest/modules/agents/tools/multi_input_tool.html).

![](https://i.imgur.com/I68i1bf.png)

Environment: Windows 11 WSL
LLM: Llama.cpp
Model: TheBloke/wizardLM-7B-GGML

I don't have a ChatGPT key, so I can't say for sure if this is strictly related to Llama.cpp, the model I'm using, or something else in my installation. I also haven't tried other models, but since this is an instruction-based model and the agent actually acknowledges the action it must perform, I don't think that's the root of the problem. I'm still learning the library but I can provide additional information if you need.

Edit: Not sure how much the built-in tools differ from the Multi-Input tool in terms of agent implementation, but I seem to be experiencing the same behavior with the "wikipedia" tool as well.

![](https://i.imgur.com/HPXsrhx.png)
Invalid tool using Llama.cpp
https://api.github.com/repos/langchain-ai/langchain/issues/3815/comments
2
2023-04-30T01:40:12Z
2024-04-03T14:21:27Z
https://github.com/langchain-ai/langchain/issues/3815
1,689,743,256
3,815
[ "langchain-ai", "langchain" ]
Whenever I run code like

```python
chain = load_qa_chain(llm=flan_t5_xxl, chain_type="map_reduce")
answer = chain({"input_documents": split_docs, "question": query}, return_only_outputs=True)
```

I first get a warning: `Token indices sequence length is longer than the specified maximum length for this model`, followed by an error, again about there being too many tokens.

Some observations:
1. The error occurs no matter what the document input is: even if there is only a single input document of a few characters.
2. It doesn't happen when the chain_type is `map_rerank`.
3. It doesn't happen using `load_summarize_chain` and `map_reduce` together.

Is there a fix for this? I thought about modifying the tokenizer config, but I can't find a way to do that except with locally-loaded models, and to save RAM I prefer to use the model remotely (is that even a practical approach long-term?).
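One way to rule out over-long inputs is to enforce a token budget per chunk before handing documents to the chain. A sketch with a pluggable token counter; the whitespace split below is a crude proxy, and a real setup would pass the model's own tokenizer length function instead:

```python
def chunk_by_token_budget(text, max_tokens, count_tokens=lambda t: len(t.split())):
    """Greedily pack words into chunks that stay within max_tokens."""
    words = text.split()
    chunks, current = [], []
    for word in words:
        if current and count_tokens(" ".join(current + [word])) > max_tokens:
            chunks.append(" ".join(current))
            current = []
        current.append(word)
    if current:
        chunks.append(" ".join(current))
    return chunks

print(chunk_by_token_budget("one two three four five", 2))
# prints ['one two', 'three four', 'five']
```

If the warning persists even with tiny chunks, the overflow is likely coming from the chain's own prompt template plus the combined map outputs in the reduce step, rather than from the input documents themselves.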
`load_qa_chain` with `map_reduce` results in "Token indices sequence length" error
https://api.github.com/repos/langchain-ai/langchain/issues/3812/comments
14
2023-04-29T23:47:54Z
2023-10-05T16:10:23Z
https://github.com/langchain-ai/langchain/issues/3812
1,689,721,102
3,812
[ "langchain-ai", "langchain" ]
https://github.com/hwchase17/langchain/blob/adcad98bee03ac8486f328b4f316017a6ccfc808/langchain/embeddings/openai.py#L159 Getting "no attribute" error for `tiktoken.model`. Believe that this is because tiktoken has changed their import model, per code [here](https://github.com/openai/tiktoken/blob/main/tiktoken/__init__.py). Change to `tiktoken.encoding_for_model(self.model)`?
Tiktoken import bug?
https://api.github.com/repos/langchain-ai/langchain/issues/3811/comments
11
2023-04-29T23:14:52Z
2024-07-30T12:32:34Z
https://github.com/langchain-ai/langchain/issues/3811
1,689,715,364
3,811
[ "langchain-ai", "langchain" ]
I'm trying to implement a basic chatbot that searches over PDF documents. I've been following the examples in the LangChain docs and I've noticed that the answers I get back from different methods are inconsistent. When I use `RetrievalQA` I get better answers than when I use `ConversationalRetrievalChain`. I want a chat over a document that retains memory of the conversation, so I have to use the latter. I've tried increasing the `search_kwargs` argument to include more context, but it makes no difference. Any ideas as to why I'm getting inconsistent answers? And how can I make this chatbot more accurate?

## Here is my code:

Initialising pinecone vector store:

```python
pinecone.init(
    api_key="....",
    environment="us-east1-gcp"
)
index_name = "test-index"
namespace = "test-namespace"

vectorstore = Pinecone.from_texts(
    [t.page_content for t in texts],
    embeddings,
    index_name=index_name,
    namespace=namespace
)
```

Using retrieval QA chain:

```python
chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff")
query = "Tell me about the role management for grades"
docs = docsearch.similarity_search(query)
chain.run(input_documents=docs, question=query)
```

Gives an acceptable answer:

```
" The role management for grades involves..."
```

Using ConversationalRetrievalChain:

```python
llm = ChatOpenAI(temperature=0)
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)
doc_chain = load_qa_chain(
    llm, chain_type="stuff", prompt=QA_PROMPT
)
retriever = docsearch.as_retriever()
chain = ConversationalRetrievalChain(
    combine_docs_chain=doc_chain,
    question_generator=question_generator,
    retriever=vectorstore.as_retriever()
)
```

Gives an answer that indicates no context:

```
I'm sorry, I cannot provide an answer to this question as there is no mention of a "role management" in the provided context. Can you please provide more information or context for me to assist you better?
```
ConversationalRetrievalChain gives different answers than Retrieval QA when searching docs
https://api.github.com/repos/langchain-ai/langchain/issues/3809/comments
6
2023-04-29T22:27:40Z
2023-11-26T16:09:59Z
https://github.com/langchain-ai/langchain/issues/3809
1,689,705,722
3,809
[ "langchain-ai", "langchain" ]
null
pizza
https://api.github.com/repos/langchain-ai/langchain/issues/3804/comments
1
2023-04-29T21:24:08Z
2023-04-29T21:45:53Z
https://github.com/langchain-ai/langchain/issues/3804
1,689,692,183
3,804
[ "langchain-ai", "langchain" ]
The `Pinecone.from_documents()` embeddings-creation/upsert ([based on this example](https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/pinecone.html)) produces two unexpected behaviors: 1. Mutates the original `docs` object inplace, such that each entry's `Document.metadata` dict now has a 'text' key that is assigned the value of `Document.page_content`. 2. Does not check whether the addition of this `metadata['text']` entry exceeds the maximum allowable metadata bytes per vector set by Pinecone (40960 bpv), allowing the API call to throw an error in cases where this value is exceeded. I was sending a batch of documents to Pinecone using `Pinecone.from_documents()`, and was surprised to see the operation fail on this error: ``` ApiException: (400) Reason: Bad Request HTTP response headers: HTTPHeaderDict({'content-type': 'application/json', 'date': 'Sat, 29 Apr 2023 04:19:35 GMT', 'x-envoy-upstream-service-time': '5', 'content-length': '115', 'server': 'envoy'}) HTTP response body: {"code":3,"message":"metadata size is 41804 bytes, which exceeds the limit of 40960 bytes per vector","details":[]} ``` Because there was just minimal metadata per each record and I'd already sent about 50,000 successfully to Pinecone in the same call. Then, when I inspected the `docs` list of `Documents` (compiled from `DataFrameLoader().load()`), I was surprised to see the extra `'text'` field in the metadata. It wasn't until I went poking around in [**pinecone.py**](https://github.com/hwchase17/langchain/blob/adcad98bee03ac8486f328b4f316017a6ccfc808/langchain/vectorstores/pinecone.py#L242) that I found this was an added field that was updating the passed-in `Document.metadata` dicts in-place (because of Python's pass-by-sharing rules about mutability). 
Suggestions:

* If `metadata['text']` is required (I'm not sure it is for Pinecone upserts?), then make the user do that (error and refuse to upsert if it's not there) rather than modify silently in `.from_texts()`.
* The metadata limit issue would be great to test ahead of the API's HTTP request; a quick check on user metadata input could make sure it's not going to get rejected by Pinecone (otherwise warn the user). In my case, I don't want to make smaller chunks of text (my use case involves a certain number of turns of dialogue in each embedded chunk), but I may just write in a check for overflow and truncate the `'text'` metadata accordingly.
* Fail gracefully by catching all `ApiException` errors so that the embeddings-creation and upsert process isn't interrupted.
* Maybe consider something like an `add_text_metadata` flag in the call to `from_documents()` so users have the option to have it done automatically for them?

I'm pretty new to LangChain and Pinecone, so if I'm missing something or doing it wrong, apologies - otherwise, hope this is useful feedback!
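A minimal sketch of the pre-flight overflow check suggested in this report. It assumes the 40960-byte limit applies to the JSON-serialized metadata (an assumption; Pinecone's exact accounting may differ) and, unlike the reported behavior, never mutates the caller's dict. The function names are hypothetical, not langchain or Pinecone APIs:

```python
import json

# Pinecone's per-vector metadata limit, taken from the error message above.
PINECONE_METADATA_LIMIT = 40960


def metadata_size(metadata: dict) -> int:
    """Approximate the serialized size of a metadata dict in bytes."""
    return len(json.dumps(metadata).encode("utf-8"))


def truncate_text_metadata(metadata: dict, limit: int = PINECONE_METADATA_LIMIT) -> dict:
    """Return a copy of `metadata` whose 'text' field is shortened until the
    whole dict fits under `limit` bytes. The caller's dict is never mutated.
    (The character-for-byte arithmetic assumes ASCII text; multi-byte text
    would need a shrink-and-recheck loop.)"""
    meta = dict(metadata)  # copy, so the in-place mutation complained about above is avoided
    overflow = metadata_size(meta) - limit
    if overflow > 0 and "text" in meta:
        meta["text"] = meta["text"][: max(0, len(meta["text"]) - overflow)]
    return meta
```

Running this before the upsert would surface oversized records client-side instead of mid-batch as an `ApiException`.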
Pinecone.from_texts() added 'text' metadata field modifies data object passed in as argument AND errors if text size exceeds Pinecone metadata limit per vector
https://api.github.com/repos/langchain-ai/langchain/issues/3800/comments
7
2023-04-29T19:57:58Z
2023-09-23T16:06:56Z
https://github.com/langchain-ai/langchain/issues/3800
1,689,671,027
3,800
[ "langchain-ai", "langchain" ]
https://python.langchain.com/en/latest/use_cases/question_answering/semantic-search-over-chat.html
https://github.com/hwchase17/langchain/blob/master/docs/use_cases/question_answering/semantic-search-over-chat.ipynb

![image](https://user-images.githubusercontent.com/54778084/235321215-c5907803-064b-4ea5-beb9-8e51815cf605.png)
![image](https://user-images.githubusercontent.com/54778084/235321252-ca274d67-f98c-4e6d-ab4e-7ac987f11a93.png)
![image](https://user-images.githubusercontent.com/54778084/235321303-3a8e92f8-9011-4ee1-a6af-3302a131610c.png)

Apparently, the `split_documents` function's input needs to have the attributes `page_content` and `metadata`, but it is getting a list of strings as input here.
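A minimal sketch of the obvious workaround: wrap the raw strings before calling `split_documents`. The `Document` class below is a stand-in dataclass mirroring the `page_content`/`metadata` shape the splitter expects (an assumption for illustration; in practice the real langchain `Document` class would be used):

```python
from dataclasses import dataclass, field


@dataclass
class Document:
    """Stand-in mirroring langchain's Document shape for this sketch."""
    page_content: str
    metadata: dict = field(default_factory=dict)


def wrap_texts(texts):
    """Wrap raw strings so code that expects `page_content` and `metadata`
    attributes (like `split_documents`) can consume them."""
    return [Document(page_content=t) for t in texts]
```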
AttributeError: 'str' object has no attribute 'page_content'
https://api.github.com/repos/langchain-ai/langchain/issues/3799/comments
8
2023-04-29T19:41:25Z
2024-06-03T12:38:54Z
https://github.com/langchain-ai/langchain/issues/3799
1,689,666,728
3,799
[ "langchain-ai", "langchain" ]
In the documentation it is mentioned to create the toolkit without an LLM, but the LLM is one of the toolkit's required fields. Instead of this:

```
toolkit = SQLDatabaseToolkit(db=db)
agent_executor = create_sql_agent(
    llm=OpenAI(temperature=0),
    toolkit=toolkit,
    verbose=True
)
```

it should be this:

```
llm = OpenAI(temperature=0)
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
agent_executor = create_sql_agent(
    llm=llm,
    toolkit=toolkit,
    verbose=True
)
```

Please do correct me if I'm wrong.

**Links to the docs**: https://python.langchain.com/en/latest/modules/agents/toolkits/examples/sql_database.html
Docs to include LLM before creating SQL Database Agent
https://api.github.com/repos/langchain-ai/langchain/issues/3798/comments
5
2023-04-29T19:30:26Z
2023-11-30T16:10:21Z
https://github.com/langchain-ai/langchain/issues/3798
1,689,663,886
3,798
[ "langchain-ai", "langchain" ]
https://github.com/hwchase17/langchain/blob/master/langchain/docstore/base.py

The `search` method of base.py states:

```
"""Search for document.
If page exists, return the page summary, and a Document object.
If page does not exist, return similar entries.
"""
```

The signature for the first case is `Union[str, Document]`. The signature for the second case should be `List[Document]` or `List[Union[str, Document]]`.

Per the documentation, the resulting signature for the search method should be:

`Union[Union[str, Document], List[Document]]` or `Union[Union[str, Document], List[Union[str, Document]]]`
The signature and derived Implementations of DocStore(ABC).search don't match the documentation of the base class
https://api.github.com/repos/langchain-ai/langchain/issues/3794/comments
4
2023-04-29T17:38:34Z
2023-11-29T16:11:04Z
https://github.com/langchain-ai/langchain/issues/3794
1,689,632,702
3,794
[ "langchain-ai", "langchain" ]
```
chain = load_qa_chain(llm, chain_type="stuff")
answer = chain.run(input_documents=similar_docs, question=query)
```

This returns an AttributeError as below:

```
8 frames
/usr/local/lib/python3.10/dist-packages/langchain/chains/combine_documents/base.py in format_document(doc, prompt)
     14 def format_document(doc: Document, prompt: BasePromptTemplate) -> str:
     15     """Format a document into a string based on a prompt template."""
---> 16     base_info = {"page_content": doc.page_content}
     17     base_info.update(doc.metadata)
     18     missing_metadata = set(prompt.input_variables).difference(base_info)

AttributeError: 'tuple' object has no attribute 'page_content'
```
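One common cause (an inference — the snippet above does not show how `similar_docs` was built) is that the list came from a scored search such as `similarity_search_with_score()`, which returns `(Document, score)` tuples rather than bare `Document`s. A minimal sketch of dropping the scores first, with plain strings standing in for `Document` objects:

```python
# (document, score) pairs, as a scored similarity search returns them;
# plain strings stand in for Document objects in this sketch.
docs_with_scores = [("doc one", 0.91), ("doc two", 0.84)]

# chain.run(input_documents=...) needs the documents alone, so unpack
# each tuple and keep only the document half.
docs_only = [doc for doc, _score in docs_with_scores]
```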
Attribute error tuple has no attribute 'page_content'
https://api.github.com/repos/langchain-ai/langchain/issues/3790/comments
13
2023-04-29T15:30:33Z
2023-11-05T16:07:24Z
https://github.com/langchain-ai/langchain/issues/3790
1,689,596,758
3,790
[ "langchain-ai", "langchain" ]
receive chat history and custom knowledge source
is there a chain type equivalent to ConversationalRetrievalQA in JS
https://api.github.com/repos/langchain-ai/langchain/issues/3789/comments
1
2023-04-29T15:12:41Z
2023-09-10T16:25:04Z
https://github.com/langchain-ai/langchain/issues/3789
1,689,591,831
3,789
[ "langchain-ai", "langchain" ]
Hi, I've been playing with the [SelfQueryRetriever examples](https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/self_query_retriever.html) but am having a few issues with allowed operators and valid comparators.

**Example 2:** This example only specifies a filter:

```
retriever.get_relevant_documents("I want to watch a movie rated higher than 8.5")
```

This builds the following query:

```
query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5)
```

But it fails with:

```
HTTP response body: {"code":3,"message":"$Comparator.GT is not a valid operator","details":[]}
```

For other examples, I see:

```
HTTP response body: {"code":3,"message":"only logical operators as $or and $and are allowed at top level, got $Operator.AND","details":[]}
```

I'm using the following on macOS:

- Python 3.11.3
- langchain 0.0.152
- lark 1.1.5

With a Pinecone index:

- Environment: us-central1-gcp
- Metric: cosine
- Pod Type: p1.x1
- Dimensions: 1536

This will be a killer search feature, so I'd be very grateful if anybody is able to shed some light on this. Thanks.
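The `$Comparator.GT` and `$Operator.AND` strings in the errors above suggest the enum member was interpolated directly when the Pinecone filter was built, instead of its `.value` (`gt`, `and`). That is an inference from the error text, not a confirmed diagnosis, but the serialization difference is easy to reproduce with a minimal enum:

```python
from enum import Enum


class Comparator(Enum):
    """Minimal stand-in for langchain's Comparator enum."""
    GT = "gt"


# Interpolating the member itself produces the string Pinecone rejected,
# while interpolating .value produces the "$gt" operator Pinecone expects.
rejected = f"${Comparator.GT}"
accepted = f"${Comparator.GT.value}"
```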
SelfQueryRetriever: invalid operators/comparators
https://api.github.com/repos/langchain-ai/langchain/issues/3788/comments
13
2023-04-29T14:37:53Z
2024-08-05T21:58:09Z
https://github.com/langchain-ai/langchain/issues/3788
1,689,581,463
3,788
[ "langchain-ai", "langchain" ]
Hi! I am using GPT4All with agents and Wikipedia as a tool. I'm getting the error `is not a valid tool, try another one.` I have noticed that the action contains: `Action: Use Wikipedia to confirm the accuracy of this information.` I think it should be `Action: wikipedia`.
Is not a valid tool, try another one.
https://api.github.com/repos/langchain-ai/langchain/issues/3785/comments
6
2023-04-29T13:32:19Z
2023-09-24T16:06:41Z
https://github.com/langchain-ai/langchain/issues/3785
1,689,561,941
3,785
[ "langchain-ai", "langchain" ]
# Description

There is a typo on the [Components/Schema/Text documentation page](https://docs.langchain.com/docs/components/schema/text). The third sentence starts with:

```
Therefor, a lot of the interfaces...
```

It should be changed to:

```
Therefore, a lot of the interfaces...
```
DOCS: Typo on Components/Schema/Text page
https://api.github.com/repos/langchain-ai/langchain/issues/3784/comments
7
2023-04-29T12:10:04Z
2023-12-19T00:51:18Z
https://github.com/langchain-ai/langchain/issues/3784
1,689,539,500
3,784
[ "langchain-ai", "langchain" ]
Hey guys, I wanted to ask if I can use the SQL Database Agent and get inference from OpenAI's gpt-3.5-turbo, and if so, how? I tried replacing the llm argument in the initialization of the agent executor from OpenAI with OpenAIChat, and a bunch of other things, but none of it seems to work. Thanks!
Using the SQL Database Agent with inference from OpenAI gpt-3.5-turbo model.
https://api.github.com/repos/langchain-ai/langchain/issues/3783/comments
7
2023-04-29T12:01:17Z
2023-09-24T16:06:46Z
https://github.com/langchain-ai/langchain/issues/3783
1,689,537,068
3,783
[ "langchain-ai", "langchain" ]
I am trying to follow the quick start guide for using agents from https://python.langchain.com/en/latest/getting_started/getting_started.html#agents-dynamically-call-chains-based-on-user-input. While following the steps I am seeing this error:

```
ZeroShotAgent does not support multi-input tool Calculator.
```

I am using `langchain==0.0.152` and Python 3.8.10.

Python code I executed:

```
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType

llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
```

The error I am seeing:

```
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
/tmp/ipykernel_1917316/1048650410.py in <module>
      1 llm = OpenAI(temperature=0)
      2 tools = load_tools(["serpapi", "llm-math"], llm=llm)
----> 3 agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)

~/Code/langchainApp/venv/lib/python3.8/site-packages/langchain/agents/initialize.py in initialize_agent(tools, llm, agent, callback_manager, agent_path, agent_kwargs, **kwargs)
     50     agent_cls = AGENT_TO_CLASS[agent]
     51     agent_kwargs = agent_kwargs or {}
---> 52     agent_obj = agent_cls.from_llm_and_tools(
     53         llm, tools, callback_manager=callback_manager, **agent_kwargs
     54     )

~/Code/langchainApp/venv/lib/python3.8/site-packages/langchain/agents/mrkl/base.py in from_llm_and_tools(cls, llm, tools, callback_manager, output_parser, prefix, suffix, format_instructions, input_variables, **kwargs)
     99     ) -> Agent:
    100         """Construct an agent from an LLM and tools."""
--> 101         cls._validate_tools(tools)
    102         prompt = cls.create_prompt(
    103             tools,

~/Code/langchainApp/venv/lib/python3.8/site-packages/langchain/agents/mrkl/base.py in _validate_tools(cls, tools)
    123     @classmethod
    124     def _validate_tools(cls, tools: Sequence[BaseTool]) -> None:
--> 125         super()._validate_tools(tools)
    126         for tool in tools:
    127             if tool.description is None:

~/Code/langchainApp/venv/lib/python3.8/site-packages/langchain/agents/agent.py in _validate_tools(cls, tools)
    457         for tool in tools:
    458             if not tool.is_single_input:
--> 459                 raise ValueError(
    460                     f"{cls.__name__} does not support multi-input tool {tool.name}."
    461                 )

ValueError: ZeroShotAgent does not support multi-input tool Calculator.
```

Let me know if more information is needed, or if this is expected behavior, in which case documentation changes are needed.
ZeroShotAgent does not support multi-input tool Calculator.
https://api.github.com/repos/langchain-ai/langchain/issues/3781/comments
9
2023-04-29T11:28:21Z
2023-10-30T12:34:46Z
https://github.com/langchain-ai/langchain/issues/3781
1,689,528,678
3,781
[ "langchain-ai", "langchain" ]
While working on https://github.com/hwchase17/langchain/issues/3722 I have noticed that there might be a bug in the current implementation of the OpenAI length-safe embeddings in `_get_len_safe_embeddings`, which before #3722 was actually the **default implementation** (after https://github.com/hwchase17/langchain/pull/2330). It appears the weights used are constant: the **length** of the embedding vector (1536), NOT the **number of tokens** in the batch, which is what the reference implementation at https://github.com/openai/openai-cookbook/blob/main/examples/Embedding_long_inputs.ipynb uses.
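For reference, a pure-Python sketch of what the cookbook recipe computes: weight each chunk's embedding by that chunk's token count, then renormalize the average to unit length. The function below is illustrative, not langchain's code, and uses plain lists instead of numpy:

```python
import math


def combine_chunk_embeddings(chunk_embeddings, chunk_token_counts):
    """Token-count-weighted average of per-chunk embeddings, renormalized
    to unit length, as in OpenAI's long-input embedding recipe. Using a
    constant weight (such as 1536) for every chunk would reduce this to an
    unweighted mean, which is the suspected bug described above."""
    total = sum(chunk_token_counts)
    dim = len(chunk_embeddings[0])
    avg = [
        sum(emb[i] * n for emb, n in zip(chunk_embeddings, chunk_token_counts)) / total
        for i in range(dim)
    ]
    norm = math.sqrt(sum(x * x for x in avg))
    return [x / norm for x in avg]
```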
OpenAI embedding use invalid/constant weights
https://api.github.com/repos/langchain-ai/langchain/issues/3777/comments
1
2023-04-29T05:57:29Z
2023-09-15T22:12:54Z
https://github.com/langchain-ai/langchain/issues/3777
1,689,407,698
3,777