Dataset schema:
- issue_owner_repo: list, length 2
- issue_body: string, 0 to 261k chars
- issue_title: string, 1 to 925 chars
- issue_comments_url: string, 56 to 81 chars
- issue_comments_count: int64, 0 to 2.5k
- issue_created_at: string, 20 chars
- issue_updated_at: string, 20 chars
- issue_html_url: string, 37 to 62 chars
- issue_github_id: int64, 387k to 2.46B
- issue_number: int64, 1 to 127k
[ "langchain-ai", "langchain" ]
### Description

Currently, the `AzureOpenAI` and `AzureChatOpenAI` classes call the same underlying SDK, but the developer interaction with the two is different. The goal of this issue is to update the `AzureOpenAI` class to use the following parameters, like `AzureChatOpenAI`:

```python
deployment_name: str = ""
openai_api_type: str = "azure"
openai_api_base: str = ""
openai_api_version: str = ""
openai_api_key: str = ""
```

This way, developer interaction with both `AzureOpenAI` and `AzureChatOpenAI` is the same.

### Approach

Create a class called `AzureOpenAIMixin` that contains the code from `AzureChatOpenAI` and is inherited by both `AzureOpenAI` and `AzureChatOpenAI` (#3635).

### Proposed Implementation

```python
class AzureOpenAIMixin(abc.ABC, BaseModel):
    """Wrapper around the Azure OpenAI Chat Completion API.

    To use this class you must have a deployed model on Azure OpenAI. Use
    `deployment_name` in the constructor to refer to the "Model deployment name"
    in the Azure portal.

    In addition, you should have the ``openai`` python package installed, and
    the following environment variables set or passed in the constructor in
    lower case:
    - ``OPENAI_API_TYPE`` (default: ``azure``)
    - ``OPENAI_API_KEY``
    - ``OPENAI_API_BASE``
    - ``OPENAI_API_VERSION``

    For example, if you have `gpt-35-turbo` deployed, with the deployment name
    `35-turbo-dev`, the constructor should look like:

    .. code-block:: python

        AzureChatOpenAI(
            deployment_name="35-turbo-dev",
            openai_api_version="2023-03-15-preview",
        )

    Be aware the API version may change.

    Any parameters that are valid to be passed to the openai.create call can be
    passed in, even if not explicitly saved on this class.
    """

    deployment_name: str = ""
    openai_api_type: str = "azure"
    openai_api_base: str = ""
    openai_api_version: str = ""
    openai_api_key: str = ""

    @root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
        """Validate that the api key and python package exist in the environment."""
        openai_api_key = get_from_dict_or_env(
            values,
            "openai_api_key",
            "OPENAI_API_KEY",
        )
        openai_api_base = get_from_dict_or_env(
            values,
            "openai_api_base",
            "OPENAI_API_BASE",
        )
        openai_api_version = get_from_dict_or_env(
            values,
            "openai_api_version",
            "OPENAI_API_VERSION",
        )
        openai_api_type = get_from_dict_or_env(
            values,
            "openai_api_type",
            "OPENAI_API_TYPE",
        )
        try:
            import openai

            openai.api_type = openai_api_type
            openai.api_base = openai_api_base
            openai.api_version = openai_api_version
            openai.api_key = openai_api_key
        except ImportError:
            raise ValueError(
                "Could not import openai python package. "
                "Please install it with `pip install openai`."
            )
        try:
            values["client"] = openai.ChatCompletion
        except AttributeError:
            raise ValueError(
                "`openai` has no `ChatCompletion` attribute, this is likely "
                "due to an old version of the openai package. Try upgrading it "
                "with `pip install --upgrade openai`."
            )
        if values["n"] < 1:
            raise ValueError("n must be at least 1.")
        if values["n"] > 1 and values["streaming"]:
            raise ValueError("n must be 1 when streaming.")
        return values

    @property
    def _default_params(self) -> Dict[str, Any]:
        """Get the default parameters for calling the OpenAI API."""
        return {
            **super()._default_params,
            "engine": self.deployment_name,
        }


class AzureOpenAI(BaseOpenAI, AzureOpenAIMixin):
    @property
    def _default_params(self) -> Dict[str, Any]:
        """Get the default parameters for calling the OpenAI API."""
        return {
            **super()._default_params,
            "engine": self.deployment_name,
        }


class AzureChatOpenAI(ChatOpenAI, AzureOpenAIMixin):
    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        return {
            **{"deployment_name": self.deployment_name},
            **super()._identifying_params,
        }

    @property
    def _invocation_params(self) -> Dict[str, Any]:
        return {**{"engine": self.deployment_name}, **super()._invocation_params}
```
[Azure OpenAI] Merging validate_environment from AzureChatOpenAI to AzureOpenAI
https://api.github.com/repos/langchain-ai/langchain/issues/3769/comments
2
2023-04-29T04:12:41Z
2023-09-10T16:25:14Z
https://github.com/langchain-ai/langchain/issues/3769
1,689,355,370
3,769
[ "langchain-ai", "langchain" ]
Commit: https://github.com/hwchase17/langchain/commit/4654c58f7238e10b35544633bd780b73bbb75c75

This commit appears to have broken the Quick Start example for Agents: Dynamically Call Chains Based on User Input. When attempting to follow along, this now causes an error: `ValueError: ZeroShotAgent does not support multi-input tool Calculator.`
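A possible work-around while this is broken (a sketch, not the maintainers' fix): build the calculator as a plain single-input `Tool` wrapping `LLMMathChain` instead of going through `load_tools`, so the single-input check passes. Assumes an `llm` is already constructed, as in the quick start.

```python
# Hedged work-around sketch: a hand-built single-input calculator tool,
# bypassing the load_tools() path that currently yields a multi-input tool.
from langchain import LLMMathChain
from langchain.agents import Tool, initialize_agent

llm_math = LLMMathChain(llm=llm)  # assumes `llm` exists
calculator = Tool(
    name="Calculator",
    func=llm_math.run,
    description="Useful for math questions. Input should be a single math expression.",
)
agent = initialize_agent([calculator], llm, agent="zero-shot-react-description")
```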
Quick Start example for Agents: Dynamically Call Chains Based on User Input is broken
https://api.github.com/repos/langchain-ai/langchain/issues/3757/comments
5
2023-04-29T01:02:41Z
2023-05-01T21:42:18Z
https://github.com/langchain-ai/langchain/issues/3757
1,689,291,529
3,757
[ "langchain-ai", "langchain" ]
Hey guys, the code is exactly the same as https://github.com/hwchase17/langchain/issues/3750, but with a German prompt. The prompt just asks for recommendations about things to do in Middle Franconia (part of Germany).

```
% python3 app.py "Nenne mir 3 Ausflugsziele in Mittelfranken?"
llama.cpp: loading model from /Users/myadmin/dalai/llama/models/13B/13b-ggml-model-q4_0.bin
llama_model_load_internal: format = ggjt v1 (latest)
llama_model_load_internal: n_vocab = 32000
llama_model_load_internal: n_ctx = 512
llama_model_load_internal: n_embd = 5120
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 40
llama_model_load_internal: n_layer = 40
llama_model_load_internal: n_rot = 128
llama_model_load_internal: ftype = 2 (mostly Q4_0)
llama_model_load_internal: n_ff = 13824
llama_model_load_internal: n_parts = 1
llama_model_load_internal: model size = 13B
llama_model_load_internal: ggml ctx size = 73.73 KB
llama_model_load_internal: mem required = 9807.47 MB (+ 1608.00 MB per state)
llama_init_from_file: kv self size = 400.00 MB
AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | VSX = 0 |

Answer the following questions as best you can. You have access to the following tools:

Google Search: A wrapper around Google Search. Useful for when you need to answer questions about current events. Input should be a search query.
Wikipedia: A wrapper around Wikipedia. Useful for when you need to answer general questions about people, places, companies, historical events, or other subjects. Input should be a search query.

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [Google Search, Wikipedia]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Begin!

Question: {input}
{agent_scratchpad}
-- Serving request for input: Nenne mir 3 Ausflugsziele in Mittelfranken?

> Entering new AgentExecutor chain...
Thought: what to do
Action: Google Search
Google Search: "Ausflugsziele mittelfranken"
Results:
- https://www.mittelfrankenschau.de/service/kultur/ausflüge-mitten-fuer-franken/
- https://mw21.de/blog/flaechen/10-top-ausflugsziele-in-mittel-franken
- http://www.m-frankfurt.de/ausflugsziele/
Action Input: "mittelfranken"
Observation: Google Search Google Search: "Ausflugsziele mittelfranken" Results: - https://www.mittelfrankenschau.de/service/kultur/ausflüge-mitten-fuer-franken/ - https://mw21.de/blog/flaechen/10-top-ausflugsziele-in-mittel-franken - http://www.m-frankfurt.de/ausflugsziele/ is not a valid tool, try another one.
Thought:Traceback (most recent call last):
  File "/Users/myadmin/lc-serve/app.py", line 54, in <module>
    ask(sys.argv[1])
  File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/lcserve/backend/decorators.py", line 14, in sync_wrapper
    return func(*args, **kwargs)
  File "/Users/myadmin/lc-serve/app.py", line 47, in ask
    return agent_executor.run(input)
  File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/chains/base.py", line 213, in run
    return self(args[0])[self.output_keys[0]]
  File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/chains/base.py", line 116, in __call__
    raise e
  File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/chains/base.py", line 113, in __call__
    outputs = self._call(inputs)
  File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/agents/agent.py", line 796, in _call
    next_step_output = self._take_next_step(
  File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/agents/agent.py", line 676, in _take_next_step
    output = self.agent.plan(intermediate_steps, **inputs)
  File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/agents/agent.py", line 384, in plan
    full_output = self.llm_chain.predict(**full_inputs)
  File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/chains/llm.py", line 151, in predict
    return self(kwargs)[self.output_key]
  File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/chains/base.py", line 116, in __call__
    raise e
  File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/chains/base.py", line 113, in __call__
    outputs = self._call(inputs)
  File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/chains/llm.py", line 57, in _call
    return self.apply([inputs])[0]
  File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/chains/llm.py", line 118, in apply
    response = self.generate(input_list)
  File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/chains/llm.py", line 62, in generate
    return self.llm.generate_prompt(prompts, stop)
  File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/llms/base.py", line 107, in generate_prompt
    return self.generate(prompt_strings, stop=stop)
  File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/llms/base.py", line 140, in generate
    raise e
  File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/llms/base.py", line 137, in generate
    output = self._generate(prompts, stop=stop)
  File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/llms/base.py", line 324, in _generate
    text = self._call(prompt, stop=stop)
  File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/llms/llamacpp.py", line 222, in _call
    for token in self.stream(prompt=prompt, stop=stop):
  File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/llms/llamacpp.py", line 268, in stream
    for chunk in result:
  File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/llama_cpp/llama.py", line 426, in _create_completion
    raise ValueError(
ValueError: Requested tokens exceed context window of 512
```

I am guessing (no idea really) that some of the search results might be too big, but I am also uncertain how to use a TextSplitter between the load_tools and the Google search result parsing. Any ideas would be greatly appreciated! Thanks in advance!
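Since the model was loaded with `n_ctx = 512` (see the log above), one plausible mitigation is simply to give llama.cpp a larger context window and cap the completion length. A sketch, using the `LlamaCpp` wrapper's `n_ctx`/`max_tokens` constructor parameters:

```python
# Sketch: enlarge the context window so the prompt plus observations fit.
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path=model,  # same model path as in the script
    n_ctx=2048,        # default is 512, which the ReAct prompt alone nearly fills
    max_tokens=256,    # cap the completion so it cannot overrun the window
    verbose=True,
)
```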
ValueError: Requested tokens exceed context window of 512
https://api.github.com/repos/langchain-ai/langchain/issues/3751/comments
8
2023-04-28T23:42:51Z
2023-05-24T20:06:54Z
https://github.com/langchain-ai/langchain/issues/3751
1,689,259,604
3,751
[ "langchain-ai", "langchain" ]
Hey there, thanks for langchain! It's super awesome! 👍

I am currently trying to write a simple REST API, but I am getting somewhat random errors. Sometimes (about 1 in 15 runs) it's this:

```
% python3 app.py "Who won the superbowl the year justin bieber was born?"
llama.cpp: loading model from /Users/myadmin/dalai/llama/models/13B/13b-ggml-model-q4_0.bin
llama_model_load_internal: format = ggjt v1 (latest)
llama_model_load_internal: n_vocab = 32000
llama_model_load_internal: n_ctx = 512
llama_model_load_internal: n_embd = 5120
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 40
llama_model_load_internal: n_layer = 40
llama_model_load_internal: n_rot = 128
llama_model_load_internal: ftype = 2 (mostly Q4_0)
llama_model_load_internal: n_ff = 13824
llama_model_load_internal: n_parts = 1
llama_model_load_internal: model size = 13B
llama_model_load_internal: ggml ctx size = 73.73 KB
llama_model_load_internal: mem required = 9807.47 MB (+ 1608.00 MB per state)
llama_init_from_file: kv self size = 400.00 MB
AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | VSX = 0 |

Answer the following questions as best you can. You have access to the following tools:

Google Search: A wrapper around Google Search. Useful for when you need to answer questions about current events. Input should be a search query.
Wikipedia: A wrapper around Wikipedia. Useful for when you need to answer general questions about people, places, companies, historical events, or other subjects. Input should be a search query.

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [Google Search, Wikipedia]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Begin!

Question: {input}
{agent_scratchpad}
-- Serving request for input: Who won the superbowl the year justin bieber was born?

> Entering new AgentExecutor chain...
Traceback (most recent call last):
  File "/Users/myadmin/lc-serve/app.py", line 54, in <module>
    ask(sys.argv[1])
  File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/lcserve/backend/decorators.py", line 14, in sync_wrapper
    return func(*args, **kwargs)
  File "/Users/myadmin/lc-serve/app.py", line 47, in ask
    return agent_executor.run(input)
  File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/chains/base.py", line 213, in run
    return self(args[0])[self.output_keys[0]]
  File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/chains/base.py", line 116, in __call__
    raise e
  File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/chains/base.py", line 113, in __call__
    outputs = self._call(inputs)
  File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/agents/agent.py", line 796, in _call
    next_step_output = self._take_next_step(
  File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/agents/agent.py", line 676, in _take_next_step
    output = self.agent.plan(intermediate_steps, **inputs)
  File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/agents/agent.py", line 385, in plan
    return self.output_parser.parse(full_output)
  File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/agents/mrkl/output_parser.py", line 26, in parse
    raise OutputParserException(f"Could not parse LLM output: `{text}`")
langchain.schema.OutputParserException: Could not parse LLM output: ` Thought: I don't know who won that year... or what happened in the next 3 years.
Action: Google Search
Google Search: Super Bowl`
```

The script is as follows:

```python
# app.py
#model = "/Users/myadmin/dalai/"
model = "/Users/myadmin/dalai/llama/models/13B/13b-ggml-model-q4_0.bin"
#model = "/Users/myadmin/dalai/llama/models/13B/ggml-vic13b-q5_1.bin"
#model = "/Users/myadmin/dalai/alpaca/models/30B/ggml-model-q4_0.bin"

import sys, os

os.environ["GOOGLE_CSE_ID"] = "xyz"
os.environ["GOOGLE_API_KEY"] = "xyz"

from langchain import LLMChain
from langchain.llms import LlamaCpp
from langchain.agents import AgentExecutor, Tool, ZeroShotAgent, load_tools
from lcserve import serving

prefix = """Answer the following questions as best you can. You have access to the following tools:"""
#suffix = """Begin! Remember to speak as a pirate when giving your final answer. Use lots of "Args"
suffix = """Begin!

Question: {input}
{agent_scratchpad}"""

llm = LlamaCpp(model_path=model, verbose=True)
tools = load_tools(["google-search", "wikipedia"], llm=llm)

prompt = ZeroShotAgent.create_prompt(
    tools,
    prefix=prefix,
    suffix=suffix,
    input_variables=["input", "agent_scratchpad"],
)
print(prompt.template)

llm_chain = LLMChain(llm=llm, prompt=prompt)
tool_names = [tool.name for tool in tools]
agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=tool_names)
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)

@serving
def ask(input: str) -> str:
    print("-- Serving request for input: %s" % input)
    return agent_executor.run(input)

if __name__ == "__main__":
    if len(sys.argv) == 1:
        ask('How many people live in canada as of 2023?')
    else:
        ask(sys.argv[1])
```

Not quite sure what I'm doing wrong here, or if it's just purely a random thing I should catch? Thank you very much in advance!
langchain.schema.OutputParserException: Could not parse LLM output: `
https://api.github.com/repos/langchain-ai/langchain/issues/3750/comments
7
2023-04-28T23:35:05Z
2024-03-16T23:01:29Z
https://github.com/langchain-ai/langchain/issues/3750
1,689,255,999
3,750
[ "langchain-ai", "langchain" ]
https://github.com/hwchase17/langchain/blob/72c5c15f7fdc1918880e3cfd0949199e5a0b5bda/langchain/retrievers/document_compressors/chain_extract.py#L67-L77

`LLMChainExtractor.from_llm` should allow passing arbitrary arguments through to the internal `LLMChain`, for example `verbose`, much like `BaseQAWithSourcesChain` does:

https://github.com/hwchase17/langchain/blob/72c5c15f7fdc1918880e3cfd0949199e5a0b5bda/langchain/chains/qa_with_sources/base.py#L40-L47
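A sketch of what the proposed signature could look like (the helper names `_get_default_chain_prompt` and `default_get_input` are assumed from the linked file, not verified here):

```python
# Sketch of the proposed change: forward extra kwargs to the internal LLMChain.
@classmethod
def from_llm(
    cls,
    llm: BaseLanguageModel,
    prompt: Optional[PromptTemplate] = None,
    get_input: Optional[Callable[[str, Document], str]] = None,
    llm_chain_kwargs: Optional[dict] = None,
) -> "LLMChainExtractor":
    """Initialize from an LLM, passing llm_chain_kwargs (e.g. verbose) through."""
    _prompt = prompt if prompt is not None else _get_default_chain_prompt()
    _get_input = get_input if get_input is not None else default_get_input
    llm_chain = LLMChain(llm=llm, prompt=_prompt, **(llm_chain_kwargs or {}))
    return cls(llm_chain=llm_chain, get_input=_get_input)
```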
`LLMChainExtractor.from_llm` should accept `kwargs` for the internal `LLMChain`
https://api.github.com/repos/langchain-ai/langchain/issues/3747/comments
0
2023-04-28T23:31:03Z
2023-04-29T04:21:25Z
https://github.com/langchain-ai/langchain/issues/3747
1,689,254,467
3,747
[ "langchain-ai", "langchain" ]
The following code broke after updating; it was working fine before:

```py
def get_chat_agent(memory, tools):
    return initialize_agent(
        tools,
        ChatOpenAI(verbose=True),
        agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
        memory=memory,
        system_message=CHAT_AGENT_SYSTEM_MESSAGE,
        return_intermediate_steps=True,
        verbose=True,
    )
```

Error message:

```
Cell In[2], line 37, in get_chat_agent(memory, tools)
     36 def get_chat_agent(memory, tools):
---> 37     return initialize_agent(
     38         tools,
     39         ChatOpenAI(verbose=True),
     40         agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
     41         memory=memory,
     42         system_message=CHAT_AGENT_SYSTEM_MESSAGE,
     43         return_intermediate_steps=True,
     44         verbose=True,
     45     )
...
    195 @property
    196 def _prompt_type(self) -> str:
--> 197     raise NotImplementedError

NotImplementedError:
```
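One hedged guess at the cause: in recent versions, agent-specific options such as `system_message` must be passed through `agent_kwargs` rather than as top-level keyword arguments to `initialize_agent`. A sketch of the adjusted call (not verified against the 0.0.152 changelog):

```py
# Sketch: route agent-specific options through agent_kwargs (assumption).
def get_chat_agent(memory, tools):
    return initialize_agent(
        tools,
        ChatOpenAI(verbose=True),
        agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
        memory=memory,
        agent_kwargs={"system_message": CHAT_AGENT_SYSTEM_MESSAGE},
        return_intermediate_steps=True,
        verbose=True,
    )
```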
:bug: Breaking changes introduced into previous code after updating to 0.0.152
https://api.github.com/repos/langchain-ai/langchain/issues/3743/comments
2
2023-04-28T23:03:42Z
2023-09-10T16:25:19Z
https://github.com/langchain-ai/langchain/issues/3743
1,689,241,785
3,743
[ "langchain-ai", "langchain" ]
The default prompt of `load_qa_with_sources_chain` in `langchain.chains.qa_with_sources` (`langchain 0.0.147` and the last few versions) contains user information (probably a question someone had, or an example); please clean it.

**The default prompt should be (I think):**

```
template = """Given the following extracted parts of a long document and a question, create a final answer with references ("SOURCES").
If you don't know the answer, just say that you don't know. Don't try to make up an answer.
ALWAYS return a "SOURCES" part in your answer.

QUESTION: {question}
=========
{summaries}
=========
FINAL ANSWER:"""
```

**The current default prompt in `load_qa_with_sources_chain`:**

`template='Given the following extracted parts of a long document and a question, create a final answer with references ("SOURCES"). \nIf you don\'t know the answer, just say that you don\'t know. Don\'t try to make up an answer.\nALWAYS return a "SOURCES" part in your answer.\n\nQUESTION: Which state/country\'s law governs the interpretation of the contract?\n=========\nContent: This Agreement is governed by English law and the parties submit to the exclusive jurisdiction of the English courts in relation to any dispute (contractual or non-contractual) concerning this Agreement save that either party may apply to any court for an injunction or other relief to protect its Intellectual Property Rights.\nSource: 28-pl\nContent: No Waiver. Failure or delay in exercising any right or remedy under this Agreement shall not constitute a waiver of such (or any other) right or remedy.\n\n11.7 Severability. The invalidity, illegality or unenforceability of any term (or part of a term) of this Agreement shall not affect the continuation in force of the remainder of the term (if any) and this Agreement.\n\n11.8 No Agency. Except as expressly stated otherwise, nothing in this Agreement shall create an agency, partnership or joint venture of any kind between the parties.\n\n11.9 No Third-Party Beneficiaries.\nSource: 30-pl\nContent: (b) if Google believes, in good faith, that the Distributor has violated or caused Google to violate any Anti-Bribery Laws (as defined in Clause 8.5) or that such a violation is reasonably likely to occur,\nSource: 4-pl\n=========\nFINAL ANSWER: This Agreement is governed by English law.\nSOURCES: 28-pl\n\nQUESTION: What did the president say about Michael Jackson?\n=========\nContent: Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \n\nGroups of citizens blocking tanks with their bodies. 
Everyone from students to retirees teachers turned soldiers defending their homeland.\nSource: 0-pl\nContent: And we won’t stop. \n\nWe have lost so much to COVID-19. Time with one another. And worst of all, so much loss of life. \n\nLet’s use this moment to reset. Let’s stop looking at COVID-19 as a partisan dividing line and see it for what it is: A God-awful disease. \n\nLet’s stop seeing each other as enemies, and start seeing each other for who we really are: Fellow Americans. \n\nWe can’t change how divided we’ve been. But we can change how we move forward—on COVID-19 and other issues we must face together. \n\nI recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. \n\nThey were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. \n\nOfficer Mora was 27 years old. \n\nOfficer Rivera was 22. \n\nBoth Dominican Americans who’d grown up on the same streets they later chose to patrol as police officers. \n\nI spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.\nSource: 24-pl\nContent: And a proud Ukrainian people, who have known 30 years of independence, have repeatedly shown that they will not tolerate anyone who tries to take their country backwards. \n\nTo all Americans, I will be honest with you, as I’ve always promised. A Russian dictator, invading a foreign country, has costs around the world. \n\nAnd I’m taking robust action to make sure the pain of our sanctions is targeted at Russia’s economy. And I will use every tool at our disposal to protect American businesses and consumers. \n\nTonight, I can announce that the United States has worked with 30 other countries to release 60 Million barrels of oil from reserves around the world. \n\nAmerica will lead that effort, releasing 30 Million barrels from our own Strategic Petroleum Reserve. And we stand ready to do more if necessary, unified with our allies. \n\nThese steps will help blunt gas prices here at home. And I know the news about what’s happening can seem alarming. \n\nBut I want you to know that we are going to be okay.\nSource: 5-pl\nContent: More support for patients and families. \n\nTo get there, I call on Congress to fund ARPA-H, the Advanced Research Projects Agency for Health. \n\nIt’s based on DARPA—the Defense Department project that led to the Internet, GPS, and so much more. \n\nARPA-H will have a singular purpose—to drive breakthroughs in cancer, Alzheimer’s, diabetes, and more. \n\nA unity agenda for the nation. \n\nWe can do this. \n\nMy fellow Americans—tonight , we have gathered in a sacred space—the citadel of our democracy. \n\nIn this Capitol, generation after generation, Americans have debated great questions amid great strife, and have done great things. \n\nWe have fought for freedom, expanded liberty, defeated totalitarianism and terror. \n\nAnd built the strongest, freest, and most prosperous nation the world has ever known. \n\nNow is the hour. \n\nOur moment of responsibility. \n\nOur test of resolve and conscience, of history itself. \n\nIt is in this moment that our character is formed. Our purpose is found. Our future is forged. \n\nWell I know this nation.\nSource: 34-pl\n=========\nFINAL ANSWER: The president did not mention Michael Jackson.\nSOURCES:\n\nQUESTION: {question}\n=========\n{summaries}\n=========\nFINAL ANSWER:'`
Bug report in `load_qa_with_sources_chain` prompt
https://api.github.com/repos/langchain-ai/langchain/issues/3737/comments
6
2023-04-28T21:12:29Z
2023-09-24T16:06:51Z
https://github.com/langchain-ai/langchain/issues/3737
1,689,140,634
3,737
[ "langchain-ai", "langchain" ]
I am interested in making use of prompt serialization in order to allow for more modular use of models / chains. I noticed when partial variables were initially added in [this PR](https://github.com/hwchase17/langchain/pull/1308), there was some discussion about their interaction with serialized prompts, which resulted in [these lines](https://github.com/hwchase17/langchain/blob/e3b7a20454cea592fc6d0a0d91c36206e8ad6790/langchain/prompts/base.py#L203-L204) being added to disable serialization of prompts with partial variables. In my testing I found that partial variables seem to work well as is, if those lines are removed, as done in [this PR](https://github.com/hwchase17/langchain/pull/3734).

- https://github.com/hwchase17/langchain/pull/3734

I'm happy to help with developing this feature, but would be interested to hear more about what the reasoning was for initially disabling it, as I am sure there are edge cases I am missing. Thanks!

Cases to be investigated:
- [ ] Partial variables need to be redefined after the prompt is loaded back from serialization
Serializing Prompts with Partial Variables
https://api.github.com/repos/langchain-ai/langchain/issues/3735/comments
2
2023-04-28T21:06:22Z
2023-09-17T17:21:49Z
https://github.com/langchain-ai/langchain/issues/3735
1,689,135,358
3,735
[ "langchain-ai", "langchain" ]
Hello, my code is reading the "state_of_the_union" text, which I converted to a PDF. `chromadb.__version__` is `'0.3.21'`. My code is:

```python
##################
# LLM model
model_path = f"{model_dir}gpt-neo-2.7B"
generate_text = pipeline('text-generation', model = model_path,
                         max_new_tokens = 100, temperature = 0.99, top_p=0.99,
                         repetition_penalty=1.4, use_cache=True, do_sample = True)
hf_pipeline = HuggingFacePipeline(pipeline=generate_text)

# embedding model
model_path = f"{model_dir}all-mpnet-base-v2"
hf_embed = HuggingFaceEmbeddings(model_name = model_path)

# load data
data_path = f"{data_dir}state_of_the_union.pdf"
loader = PyPDFLoader(data_path)
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
db = Chroma.from_documents(texts, hf_embed)

query = "Who is Michael Jackson?"  # out of context
retriever = db.as_retriever(search_type="similarity", search_kwargs={"k": 2})
# retriever = db.as_retriever()

# create a chain to answer questions
qa = RetrievalQA.from_chain_type(
    llm = hf_pipeline, chain_type="stuff", retriever=retriever,
    return_source_documents=True)  # tried with other chain types as well
result = qa({"query": query})
print(result)
########################
```

My question: why is the function returning results from the source document that don't even match my query?

Results:

```
{'query': 'Who is Michael Jackson?', 'result': ' We can thank him again. \nHe was the man who saved us from bankruptcy four times. \n \nIn 1983, he went to Washington and worked with Congress to enact the Gramm-Leach-Bliley Act creating \nthe Federal Home Loan Mortgage Corporation and the Community Reinvestment Act. \n \nThese two laws allowed banks to start bringing into the mortgage lending market products that \nwould not have made sense ten years earlier. ', 'source_documents': [Document(page_content='And I did that 4 days ago, when I nominated ....', metadata={'source': 'LLM_Learnings\\data\\state_union.pdf', 'page': 22}), Document(page_content='Just last year, 55 Fortune 500 corporations earned $40 billion in profits and paid zero dollars in federal \nincome tax. \n \nThat’s simply not fair. That’s why I’ve proposed a 15% minimum .....', metadata={'source': 'LLM_Learnings\\data\\state_union.pdf', 'page': 14})]}
```

Other options such as `load_qa_chain` or FAISS seem to be working OK:

```python
docs = db.as_retriever().get_relevant_documents(query)
# docs = db.similarity_search_with_score(query)
chain = load_qa_chain(llm = hf_pipeline, chain_type="stuff", prompt = PROMPT)
# gives an error when no embedding for the given query is found in hf_embed
try:
    result = chain({"input_documents": docs, "question": query}, return_only_outputs=False)
except IndexError:
    result = chain({"input_documents": '', "question": query}, return_only_outputs=False)

#### OR using FAISS
try:
    docs_db = faiss.similarity_search_with_relevance_scores(query, k=2)
    docs_db = [x[0] for x in docs_db]
except ValueError:
    docs_db = ""
chain = load_qa_chain(hf_pipeline, chain_type="stuff", prompt = PROMPT)
result = chain({"input_documents": docs_db, "question": query}, return_only_outputs=False)
```

Any help or advice would be greatly appreciated.

Regards,
Akbar
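One way to narrow this down (a debugging sketch): look at the raw similarity scores Chroma assigns, to see whether the out-of-context query is simply matching everything weakly; similarity retrieval always returns the k "nearest" chunks, however far away they are.

```python
# Debugging sketch: inspect raw distances for the out-of-context query.
docs_and_scores = db.similarity_search_with_score(query, k=2)
for doc, score in docs_and_scores:
    # Lower scores mean closer matches; uniformly poor scores suggest the
    # retriever has nothing relevant and is returning noise.
    print(f"{score:.4f}  {doc.page_content[:80]!r}")
```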
Strange document similarity results
https://api.github.com/repos/langchain-ai/langchain/issues/3731/comments
2
2023-04-28T20:09:11Z
2023-10-09T16:08:18Z
https://github.com/langchain-ai/langchain/issues/3731
1,689,078,087
3,731
[ "langchain-ai", "langchain" ]
```python
from langchain.document_loaders import DirectoryLoader

loader = DirectoryLoader('./server', glob="**/*.md")
data = loader.load()
```

Error:

```
from pdfminer.utils import open_filename
ImportError: cannot import name 'open_filename' from 'pdfminer.utils' (/usr/local/lib/python3.8/dist-packages/pdfminer/utils.py)
```

Langchain version: '0.0.152'
Errors with DirectoryLoader
https://api.github.com/repos/langchain-ai/langchain/issues/3726/comments
9
2023-04-28T18:16:21Z
2024-03-26T16:04:51Z
https://github.com/langchain-ai/langchain/issues/3726
1,688,958,540
3,726
[ "langchain-ai", "langchain" ]
If you have a look at [https://python.langchain.com/en/latest/modules/models/llms/integrations/gpt4all.html](https://python.langchain.com/en/latest/modules/models/llms/integrations/gpt4all.html), you will see that it tells you to `pip install pyllamacpp`. This is because the class GPT4All (`from langchain.llms import GPT4All`) will load `from pyllamacpp.model import Model as GPT4AllModel`. In other words, the GPT4All models are loaded using pyllamacpp ([https://github.com/abetlen/llama-cpp-python](https://github.com/abetlen/llama-cpp-python)). However, this does not work anymore. The LLMs downloaded from [https://github.com/nomic-ai/pygpt4all](https://github.com/nomic-ai/pygpt4all) cannot be loaded with pyllamacpp. The reason is that the gpt4all models now have their own python bindings; see [https://github.com/nomic-ai/pygpt4all](https://github.com/nomic-ai/pygpt4all). So, in order to use the gpt4all models, one cannot use the regular pyllamacpp; the python bindings provided by nomic-ai ([https://github.com/nomic-ai/pygpt4all](https://github.com/nomic-ai/pygpt4all)) must be used instead. Updating the documentation is one thing; adding the nomic-ai official python bindings is the second modification that is required.
GPT4All model - Update required in the documentation
https://api.github.com/repos/langchain-ai/langchain/issues/3725/comments
1
2023-04-28T18:03:24Z
2023-09-15T22:12:53Z
https://github.com/langchain-ai/langchain/issues/3725
1,688,945,178
3,725
[ "langchain-ai", "langchain" ]
https://github.com/hwchase17/langchain/blob/1bf1c37c0cccb7c8c73d87ace27cf742f814dbe5/langchain/embeddings/openai.py#L210-L211

This means the length-safe embedding method is always used. The initial implementation (https://github.com/hwchase17/langchain/pull/991) had `embedding_ctx_length` set to -1 (meaning you had to opt in to the length-safe method); https://github.com/hwchase17/langchain/pull/2330 changed that to the max length of OpenAI embeddings v2, meaning the length-safe method is now used at all times. How about changing that if-branch to use the length-safe method only when needed, i.e. when the text is longer than the max context length?
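A sketch of the suggested gate, counting tokens with tiktoken before choosing a code path (the encoding name and context length here are illustrative assumptions, not the embeddings class's actual attributes):

```python
# Sketch: only take the chunk-and-average (length-safe) path when the text
# actually exceeds the embedding model's context length.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def needs_length_safe(text: str, ctx_length: int = 8191) -> bool:
    """True if the text exceeds the embedding model's context length."""
    return len(enc.encode(text)) > ctx_length
```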
`OpenAIEmbeddings` should use length safe embedding method only when needed
https://api.github.com/repos/langchain-ai/langchain/issues/3722/comments
1
2023-04-28T16:42:26Z
2023-04-29T03:11:38Z
https://github.com/langchain-ai/langchain/issues/3722
1,688,845,073
3,722
[ "langchain-ai", "langchain" ]
Type: performance improvement

Issue: most CPUs today are multi-core; 4 cores are common and some have 8. The [DirectoryLoader](https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/directory_loader.html) uses a single core and misses the opportunity to leverage multiple cores. Using multiple cores would divide document-loading time by roughly the number of cores. Divide and conquer would be a good strategy; see the sketch below.
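A minimal sketch of what multi-core loading could look like, independent of DirectoryLoader internals (the paths and loader class are illustrative):

```python
# Sketch: fan file loading out across processes, one worker per core by default.
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path
from langchain.document_loaders import TextLoader

def load_one(path: str):
    return TextLoader(path).load()

if __name__ == "__main__":  # required for process pools on some platforms
    paths = [str(p) for p in Path("./docs").glob("**/*.txt")]
    with ProcessPoolExecutor() as pool:
        docs = [doc for batch in pool.map(load_one, paths) for doc in batch]
```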
Multi-core loader
https://api.github.com/repos/langchain-ai/langchain/issues/3720/comments
1
2023-04-28T15:29:25Z
2023-09-10T16:25:30Z
https://github.com/langchain-ai/langchain/issues/3720
1,688,753,825
3,720
[ "langchain-ai", "langchain" ]
I recommend modifying the `PythonREPL` class in the provided code to raise syntax exceptions when the command string has invalid syntax. Currently, the `run` method uses a `try-except` block to catch any exceptions and return their string representation. Instead, we can use the `ast.parse` function to check for syntax errors before executing the command. By using `ast.parse`, we can explicitly raise a `SyntaxError` when the command string has invalid syntax. The rest of the exceptions are still caught by the existing `try-except` block.
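A standalone sketch of the idea (the real `PythonREPL.run` captures stdout similarly; this is not the class's actual code):

```python
# Sketch: parse first, so bad syntax raises instead of being stringified.
import ast
import io
from contextlib import redirect_stdout

def run_checked(command: str) -> str:
    ast.parse(command)  # raises SyntaxError for invalid input, before exec
    buf = io.StringIO()
    try:
        with redirect_stdout(buf):
            exec(command, {}, {})
    except Exception as e:  # runtime errors keep the old behavior
        return repr(e)
    return buf.getvalue()
```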
Modify PythonREPL to raise syntax exceptions for invalid command strings
https://api.github.com/repos/langchain-ai/langchain/issues/3712/comments
1
2023-04-28T13:37:48Z
2023-04-28T14:01:54Z
https://github.com/langchain-ai/langchain/issues/3712
1,688,576,160
3,712
[ "langchain-ai", "langchain" ]
## Context
When the completion is of a longer format such as an email, the text will likely contain the newline character `\n`. If it is not properly escaped as `\\n`, parsing will fail when using PydanticOutputParser, since `json.loads` does not allow control characters in strict mode. Most of the time, RetryWithErrorOutputParser also fails to correct the format.

## Example
```python
from langchain.output_parsers import PydanticOutputParser
from langchain.prompts import PromptTemplate
from pydantic import BaseModel, Field

class Email(BaseModel):
    subject: str = Field(description="main objective of the email")
    body: str = Field(description="email content")

parser = PydanticOutputParser(pydantic_object=Email)
prompt = PromptTemplate(
    template="Answer the user query.\n{format_instructions}\n{query}\n",
    input_variables=["query"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)
completion = llm(
    prompt.format(
        query="Write a long formal email to inform my clients that the company is broke."
    )
)
parser.parse(completion)
```

```python
# completion
> Here is the output instance:
\```
{"subject": "Company Status Update", "body": "Dear Clients, This email is to inform you that our company is currently in a difficult financial situation. We apologize for any inconvenience caused by this and are doing our best to ensure that our services remain of the highest quality for our valued clients. We want to thank you for your support and understanding during this difficult time. Sincerely, [Company Name]"}
\```
```

```python
# parser.parse(completion)
> Got: Invalid control character at: line 1 column 61 (char 60)
```

## Thoughts
Maybe include instructions on escaping in PYDANTIC_FORMAT_INSTRUCTIONS? Or could adding an option to allow non-strict mode be considered?
https://github.com/hwchase17/langchain/blob/32793f94fd6da0bb36311e1af4051f7883dd12c5/langchain/output_parsers/pydantic.py#L25
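On the non-strict option: `json.loads(..., strict=False)` accepts raw control characters inside strings, so a tolerant variant of the parse step could look like this (a sketch mirroring the parser's regex extraction, not the actual patch):

```python
# Sketch: lenient JSON extraction with strict=False, so a raw \n in a string parses.
import json
import re
from typing import Type
from pydantic import BaseModel

def lenient_parse(text: str, model: Type[BaseModel]) -> BaseModel:
    match = re.search(r"\{.*\}", text.strip(), re.DOTALL)
    if match is None:
        raise ValueError(f"No JSON object found in: {text!r}")
    json_object = json.loads(match.group(), strict=False)
    return model.parse_obj(json_object)
```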
PydanticOutputParser has a high chance of failing when the completion contains a newline
https://api.github.com/repos/langchain-ai/langchain/issues/3709/comments
1
2023-04-28T12:21:49Z
2023-09-24T16:06:56Z
https://github.com/langchain-ai/langchain/issues/3709
1,688,460,377
3,709
[ "langchain-ai", "langchain" ]
It fails with

```
File [langchain/chains/base.py:113], in Chain.__call__(self, inputs, return_only_outputs)
    107 self.callback_manager.on_chain_start(
    108     {"name": self.__class__.__name__},
    109     inputs,
    110     verbose=self.verbose,
    111 )
    112 try:
--> 113     outputs = self._call(inputs)
    114 except (KeyboardInterrupt, Exception) as e:
    115     self.callback_manager.on_chain_error(e, verbose=self.verbose)

File [langchain/chains/retrieval_qa/base.py:110], in BaseRetrievalQA._call(self, inputs)
    107 question = inputs[self.input_key]
    109 docs = self._get_docs(question)
--> 110 answer = self.combine_documents_chain.run(
    111     input_documents=docs, question=question
    112 )
    114 if self.return_source_documents:
    115     return {self.output_key: answer, "source_documents": docs}

File [langchain/chains/base.py:205], in Chain.run(self, *args, **kwargs)
    203 """Run the chain as text in, text out or multiple variables, text out."""
    204 if len(self.output_keys) != 1:
--> 205     raise ValueError(
    206         f"`run` not supported when there is not exactly "
    207         f"one output key. Got {self.output_keys}."
    208     )
    210 if args and not kwargs:
    211     if len(args) != 1:

ValueError: `run` not supported when there is not exactly one output key. Got ['output_text', 'intermediate_steps'].
```

The culprit is the call to the `run` method instead of the general `_call` in `BaseRetrievalQA._call`.
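Until that is fixed, a work-around sketch is to bypass `RetrievalQA`'s internal `run` call and invoke the combine chain directly as a dict-in/dict-out chain, which returns all output keys (assumes a `RetrievalQA` instance `qa` and a `question` string):

```python
# Work-around sketch: call the combine-documents chain directly.
docs = qa.retriever.get_relevant_documents(question)
result = qa.combine_documents_chain(
    {"input_documents": docs, "question": question},
    return_only_outputs=True,
)
# result now contains both "output_text" and "intermediate_steps"
```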
RetrievalQA cannot be called with QA chain having `return_intermediate_steps=True`
https://api.github.com/repos/langchain-ai/langchain/issues/3707/comments
3
2023-04-28T12:05:34Z
2023-10-18T01:58:05Z
https://github.com/langchain-ai/langchain/issues/3707
1,688,438,803
3,707
[ "langchain-ai", "langchain" ]
null
How to create unit tests for langchain use in my project?
https://api.github.com/repos/langchain-ai/langchain/issues/3706/comments
1
2023-04-28T12:03:39Z
2023-09-10T16:25:34Z
https://github.com/langchain-ai/langchain/issues/3706
1,688,436,004
3,706
[ "langchain-ai", "langchain" ]
`from langchain.retrievers.self_query.base import SelfQueryRetriever` failed:

```
File ~/anaconda3/envs/langchain/lib/python3.11/site-packages/langchain/chains/query_constructor/parser.py:41
      8 from langchain.chains.query_constructor.ir import (
      9     Comparator,
     10     Comparison,
   (...)
     13     Operator,
     14 )
     16 GRAMMAR = """
     17     ?program: func_call
     18     ?expr: func_call
   (...)
     37     %ignore WS
     38 """
---> 41 @v_args(inline=True)
     42 class QueryTransformer(Transformer):
     43     def __init__(
     44         self,
     45         *args: Any,
   (...)
     48         **kwargs: Any,
     49     ):
     50         super().__init__(*args, **kwargs)

NameError: name 'v_args' is not defined
```
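A hedged observation: `v_args` is a decorator from the `lark` package, and that module appears to import it behind a guarded import, so this `NameError` usually just means `lark` is not installed in the environment. A quick check:

```python
# Sketch: verify lark is importable; if not, `pip install lark` is the likely fix.
import importlib.util

if importlib.util.find_spec("lark") is None:
    raise ImportError("lark is missing; install it with `pip install lark`")
```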
SelfQueryRetriever import failed in langchain 0.0.152 and 0.0.151
https://api.github.com/repos/langchain-ai/langchain/issues/3705/comments
2
2023-04-28T10:54:32Z
2023-05-02T10:50:16Z
https://github.com/langchain-ai/langchain/issues/3705
1,688,345,812
3,705
[ "langchain-ai", "langchain" ]
I'm trying to store vectors in Milvus with the following code:

~~~python
from os import environ

MILVUS_HOST = "xxx"
MILVUS_PORT = "xxx"
OPENAI_API_KEY = "xxx"

## Set up environment variables
environ["OPENAI_API_KEY"] = OPENAI_API_KEY

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Milvus
from langchain.document_loaders import UnstructuredMarkdownLoader
from langchain.text_splitter import MarkdownTextSplitter

# Use the UnstructuredMarkdownLoader to load the markdown file into documents
loader = UnstructuredMarkdownLoader("./content.md", mode="elements")
docs = loader.load()

# Split the documents into smaller chunks
text_splitter = MarkdownTextSplitter(chunk_size=1024, chunk_overlap=0)
docs = text_splitter.split_documents(docs)
# print(docs[0])

# Set up an embedding model to convert document chunks into vector embeddings.
embeddings = OpenAIEmbeddings(model="ada")

# Set up a vector store used to save the vector embeddings. Here we use Milvus as the vector store.
vector_store = Milvus.from_documents(
    docs,
    embedding=embeddings,
    connection_args={"host": MILVUS_HOST, "port": MILVUS_PORT}
)
~~~

But I'm getting an error:

~~~
Traceback (most recent call last):
  File "/home/olivierb/Developments/experiments/openai/import_lang.py", line 35, in <module>
    vector_store = Milvus.from_documents(
  File "/home/olivierb/Developments/experiments/openai/venv/lib/python3.10/site-packages/langchain/vectorstores/base.py", line 272, in from_documents
    return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
  File "/home/olivierb/Developments/experiments/openai/venv/lib/python3.10/site-packages/langchain/vectorstores/milvus.py", line 804, in from_texts
    vector_db.add_texts(texts=texts, metadatas=metadatas)
  File "/home/olivierb/Developments/experiments/openai/venv/lib/python3.10/site-packages/langchain/vectorstores/milvus.py", line 436, in add_texts
    insert_list = [insert_dict[x][i:end] for x in self.fields]
  File "/home/olivierb/Developments/experiments/openai/venv/lib/python3.10/site-packages/langchain/vectorstores/milvus.py", line 436, in <listcomp>
    insert_list = [insert_dict[x][i:end] for x in self.fields]
KeyError: 'title'
~~~

It seems some Documents are missing `metadata.title`:

~~~
page_content='xxx' metadata={'source': './content.md', 'page_number': 1, 'category': 'Title'}
~~~
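A work-around sketch, on the assumption that the Milvus wrapper builds its collection schema from one chunk's metadata keys and then requires every chunk to supply the same keys (unstructured's "elements" mode adds keys such as `title` only to some chunks): normalize the metadata before calling `from_documents`.

~~~python
# Work-around sketch: give every chunk an identical set of metadata keys.
all_keys = {key for doc in docs for key in doc.metadata}
for doc in docs:
    for key in all_keys:
        doc.metadata.setdefault(key, "")
~~~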
Missing key title in metadata with UnstructuredFileLoader
https://api.github.com/repos/langchain-ai/langchain/issues/3704/comments
12
2023-04-28T10:51:47Z
2024-01-08T19:16:22Z
https://github.com/langchain-ai/langchain/issues/3704
1,688,342,420
3,704
[ "langchain-ai", "langchain" ]
There is an error when I try to use this code:

```python
tools = load_tools(['llm-math', 'python_repl'], llm)
agent = initialize_agent(tools, agent="zero-shot-react-description", llm=llm)
```

It looks like this is because of #3684, which checks `if len(self.args) == 1:` in `self.is_single_input`. But `self.args` of llm-math is:

```
{'args': {'title': 'Args', 'type': 'array', 'items': {}},
 'kwargs': {'title': 'Kwargs', 'type': 'object'}}
```

So `self.is_single_input` returns False. Is there a way to get a single-input llm-math?
ValueError: ZeroShotAgent does not support multi-input tool Calculator.
https://api.github.com/repos/langchain-ai/langchain/issues/3700/comments
29
2023-04-28T09:33:17Z
2023-09-09T01:47:57Z
https://github.com/langchain-ai/langchain/issues/3700
1,688,227,611
3,700
[ "langchain-ai", "langchain" ]
## Motivation
The Map step is scalable as long as we split documents into chunks. The Reduce step, however, has a weak point: the max-token limit of the LLM, since it tries to put all summaries generated in the Map step into a single prompt. So, if the combined summaries from the Map step are too large to fit in one request to the LLM, it hits the max-token quota.

It would therefore be good to be able to skip the Reduce step and get the outputs of the intermediate steps. By doing so, we can process those outputs with the Refine mode and so on.

```python
chain = load_summarize_chain(chain_type="map_reduce", skip_reduce=True)
```
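In the meantime, the Map step can be run on its own with a plain `LLMChain`, keeping the per-chunk summaries as intermediate outputs (a sketch; the prompt wording is assumed, and `llm` and `docs` are taken as already defined):

```python
# Sketch: run only the Map step and keep its outputs for a later Refine pass.
from langchain import LLMChain, PromptTemplate

map_prompt = PromptTemplate(
    input_variables=["text"],
    template="Write a concise summary of the following:\n\n{text}\n\nCONCISE SUMMARY:",
)
map_chain = LLMChain(llm=llm, prompt=map_prompt)
partial_summaries = [map_chain.run(text=doc.page_content) for doc in docs]
```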
Enable to skip the reduce step of the MapReduce mode of `MapReduceDocumentsChain`
https://api.github.com/repos/langchain-ai/langchain/issues/3694/comments
7
2023-04-28T06:29:39Z
2023-10-13T22:27:50Z
https://github.com/langchain-ai/langchain/issues/3694
1,687,979,068
3,694
[ "langchain-ai", "langchain" ]
I have

```python
model_kwargs = {"n_predict": 500, "top_k": 40, "top_p": 0.95, "repeat_penalty": 3}
llm = Replicate(model="replicate/gpt4all:1150831d577dd5a992a38aa47cec565ab099390b2825c6c090bd7c715219db3b",
                model_kwargs=model_kwargs)

loader = TextLoader("./materials/sometext.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
documents = text_splitter.split_documents(documents)

#####################################################

embeddings = HuggingFaceEmbeddings()
db = FAISS.from_documents(documents, embeddings)
retriever = db.as_retriever(search_type="similarity", search_kwargs={"k": 2})
chain = ConversationalRetrievalChain(
    retriever=retriever,
    question_generator=question_generator,
    combine_docs_chain=doc_chain,
)
chat_history = []
while True:
    query = input("> ")
    result = chain({"question": query, "chat_history": chat_history})
```

which raises

```
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/tmtong/Documents/llmtest/.venv38/lib/python3.8/site-packages/langchain/chains/base.py", line 116, in __call__
    raise e
  File "/home/tmtong/Documents/llmtest/.venv38/lib/python3.8/site-packages/langchain/chains/base.py", line 113, in __call__
    outputs = self._call(inputs)
  File "/home/tmtong/Documents/llmtest/.venv38/lib/python3.8/site-packages/langchain/chains/conversational_retrieval/base.py", line 99, in _call
    answer = self.combine_docs_chain.run(input_documents=docs, **new_inputs)
  File "/home/tmtong/Documents/llmtest/.venv38/lib/python3.8/site-packages/langchain/chains/base.py", line 216, in run
    return self(kwargs)[self.output_keys[0]]
  File "/home/tmtong/Documents/llmtest/.venv38/lib/python3.8/site-packages/langchain/chains/base.py", line 116, in __call__
    raise e
  File "/home/tmtong/Documents/llmtest/.venv38/lib/python3.8/site-packages/langchain/chains/base.py", line 113, in __call__
    outputs = self._call(inputs)
  File "/home/tmtong/Documents/llmtest/.venv38/lib/python3.8/site-packages/langchain/chains/combine_documents/base.py", line 75, in _call
    output, extra_return_dict = self.combine_docs(docs, **other_keys)
  File "/home/tmtong/Documents/llmtest/.venv38/lib/python3.8/site-packages/langchain/chains/combine_documents/stuff.py", line 82, in combine_docs
    return self.llm_chain.predict(**inputs), {}
  File "/home/tmtong/Documents/llmtest/.venv38/lib/python3.8/site-packages/langchain/chains/llm.py", line 151, in predict
    return self(kwargs)[self.output_key]
  File "/home/tmtong/Documents/llmtest/.venv38/lib/python3.8/site-packages/langchain/chains/base.py", line 116, in __call__
    raise e
  File "/home/tmtong/Documents/llmtest/.venv38/lib/python3.8/site-packages/langchain/chains/base.py", line 113, in __call__
    outputs = self._call(inputs)
  File "/home/tmtong/Documents/llmtest/.venv38/lib/python3.8/site-packages/langchain/chains/llm.py", line 57, in _call
    return self.apply([inputs])[0]
  File "/home/tmtong/Documents/llmtest/.venv38/lib/python3.8/site-packages/langchain/chains/llm.py", line 118, in apply
    response = self.generate(input_list)
  File "/home/tmtong/Documents/llmtest/.venv38/lib/python3.8/site-packages/langchain/chains/llm.py", line 62, in generate
    return self.llm.generate_prompt(prompts, stop)
  File "/home/tmtong/Documents/llmtest/.venv38/lib/python3.8/site-packages/langchain/llms/base.py", line 107, in generate_prompt
    return self.generate(prompt_strings, stop=stop)
  File "/home/tmtong/Documents/llmtest/.venv38/lib/python3.8/site-packages/langchain/llms/base.py", line 140, in generate
    raise e
  File "/home/tmtong/Documents/llmtest/.venv38/lib/python3.8/site-packages/langchain/llms/base.py", line 137, in generate
    output = self._generate(prompts, stop=stop)
  File "/home/tmtong/Documents/llmtest/.venv38/lib/python3.8/site-packages/langchain/llms/base.py", line 324, in _generate
    text = self._call(prompt, stop=stop)
  File "/home/tmtong/Documents/llmtest/.venv38/lib/python3.8/site-packages/langchain/llms/replicate.py", line 109, in _call
    return "".join([output for output in iterator])
TypeError: 'NoneType' object is not iterable
```

Here is the code for replicate.py:

```python
def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str:
    """Call to replicate endpoint."""
    try:
        import replicate as replicate_python
    except ImportError:
        raise ValueError(
            "Could not import replicate python package. "
            "Please install it with `pip install replicate`."
        )

    # get the model and version
    model_str, version_str = self.model.split(":")
    model = replicate_python.models.get(model_str)
    version = model.versions.get(version_str)

    # sort through the openapi schema to get the name of the first input
    input_properties = sorted(
        version.openapi_schema["components"]["schemas"]["Input"][
            "properties"
        ].items(),
        key=lambda item: item[1].get("x-order", 0),
    )
    first_input_name = input_properties[0][0]

    inputs = {first_input_name: prompt, **self.input}
    iterator = replicate_python.run(self.model, input={**inputs})
    return "".join([output for output in iterator])
```

It doesn't happen all the time: sometimes it outputs the text, but sometimes it raises the NoneType error.
Bugs with replicate
https://api.github.com/repos/langchain-ai/langchain/issues/3689/comments
1
2023-04-28T03:16:52Z
2023-09-10T16:25:39Z
https://github.com/langchain-ai/langchain/issues/3689
1,687,832,316
3,689
[ "langchain-ai", "langchain" ]
I am currently in the process of replacing GPT with Vicuna in my project. While Vicuna is able to successfully generate the required action and action input, I am encountering a bug where the search API fails to execute these actions. As a result, the LLM generates observations on its own instead of utilizing the actions produced by Vicuna. I would appreciate any suggestions or solutions for resolving this issue.

```
> Entering new AgentExecutor chain...
Thought: Do I need to use a tool? Yes
Action: Search
Action Input: "What is an iPhone?"
Observation: "An iPhone is a smartphone designed and developed by Apple Inc. It is a handheld device with a touchscreen interface that allows users to make phone calls, send messages, and access the internet and a variety of other apps and features."
Thought: Do I need to use a tool? No
AI: An iPhone is a smartphone designed and developed by Apple Inc. It is a handheld device with a touchscreen interface that allows users to make phone calls, send messages, and access the internet and a variety of other apps and features.

> Finished chain.
```
Action Not Executed After Attempting to Replace with Vicuna
https://api.github.com/repos/langchain-ai/langchain/issues/3688/comments
2
2023-04-28T02:24:18Z
2023-09-10T16:25:45Z
https://github.com/langchain-ai/langchain/issues/3688
1,687,790,352
3,688
[ "langchain-ai", "langchain" ]
Hi there. I realise there is a lot happening, and this looks to be something that has been missed. When trying to use TWVSR with ChromaDb, it errors because `_similarity_search_with_relevance_scores` is not implemented inside base.py:

```python
def _similarity_search_with_relevance_scores(
    self,
    query: str,
    k: int = 4,
    **kwargs: Any,
) -> List[Tuple[Document, float]]:
    """Return docs and relevance scores, normalized on a scale from 0 to 1.

    0 is dissimilar, 1 is most similar.
    """
    raise NotImplementedError
```

Trying to make a work-around now; see the sketch below.

Many thanks,
Ian
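One stopgap sketch is to subclass `Chroma` and supply the missing method, mapping its distance to a 0-1 relevance score (the `1 - distance` conversion assumes a normalized distance; that is an assumption, not Chroma's documented semantics):

```python
# Work-around sketch: implement the missing hook on a Chroma subclass.
from langchain.vectorstores import Chroma

class ChromaWithRelevance(Chroma):
    def _similarity_search_with_relevance_scores(self, query, k=4, **kwargs):
        docs_and_distances = self.similarity_search_with_score(query, k=k, **kwargs)
        # Smaller distance = more similar; invert so 1.0 is most similar.
        return [(doc, 1.0 - distance) for doc, distance in docs_and_distances]
```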
TimeWeightedVectorStoreRetriever (TWVSR) and ChromaDb vector store - base.py
https://api.github.com/repos/langchain-ai/langchain/issues/3685/comments
2
2023-04-28T00:16:05Z
2023-09-03T22:05:31Z
https://github.com/langchain-ai/langchain/issues/3685
1,687,707,556
3,685
[ "langchain-ai", "langchain" ]
Hi, I am in the process of developing an agent and a toolkit to query MongoDB databases. To do this I mirrored the SQL Database Agent code with implementations for MongoDB. I created this function to create the agent:

```python
def create_mongodb_agent(
    llm: BaseLLM,
    toolkit: MongoDBDatabaseToolkit,
    callback_manager: Optional[BaseCallbackManager] = None,
    prefix: str = MONGODB_PREFIX,
    suffix: str = MONGODB_SUFFIX,
    format_instructions: str = FORMAT_INSTRUCTIONS,
    input_variables: Optional[List[str]] = None,
    top_k: int = 10,
    max_iterations: Optional[int] = 15,
    max_execution_time: Optional[float] = None,
    early_stopping_method: str = "force",
    verbose: bool = False,
    **kwargs: Any,
) -> AgentExecutor:
    """Construct a MongoDB agent from an LLM and tools."""
    tools = toolkit.get_tools()
    prefix = prefix.format(top_k=top_k)
    prompt = ZeroShotAgent.create_prompt(
        tools,
        prefix=prefix,
        suffix=suffix,
        format_instructions=format_instructions,
        input_variables=input_variables,
    )
    llm_chain = LLMChain(
        llm=llm,
        prompt=prompt,
        callback_manager=callback_manager,
    )
    tool_names = [tool.name for tool in tools]
    agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=tool_names, **kwargs)
    return AgentExecutor.from_agent_and_tools(
        agent=agent,
        tools=tools,
        verbose=verbose,
        max_iterations=max_iterations,
        max_execution_time=max_execution_time,
        early_stopping_method=early_stopping_method,
    )
```

and this is my toolkit:

```python
class MongoDBDatabaseToolkit(BaseToolkit):
    """Toolkit for interacting with MongoDB databases."""

    db: MongoDBDatabase = Field(exclude=True)
    llm: BaseLLM = Field(default_factory=lambda: OpenAIChat(temperature=0))

    class Config:
        """Configuration for this pydantic object."""

        arbitrary_types_allowed = True

    def get_tools(self) -> List[BaseTool]:
        """Get the tools in the toolkit."""
        return [
            QueryMongoDBDatabaseTool(db=self.db),
            InfoMongoDBDatabaseTool(db=self.db),
            ListMongoDBDatabaseTool(db=self.db),
            QueryCheckerTool(db=self.db, llm=self.llm),
        ]
```

but I get this error from the AgentExecutor validator:

```
File "..\env\lib\site-packages\langchain\agents\initialize.py", line 64, in initialize_agent
    return AgentExecutor.from_agent_and_tools(
  File "..\env\lib\site-packages\langchain\agents\agent.py", line 557, in from_agent_and_tools
    return cls(
  File "pydantic\main.py", line 339, in pydantic.main.BaseModel.__init__
  File "pydantic\main.py", line 1102, in pydantic.main.validate_model
  File "..\env\lib\site-packages\langchain\agents\agent.py", line 565, in validate_tools
    tools = values["tools"]
KeyError: 'tools'
```

I did some debugging and verified that the tools are passed correctly to the `AgentExecutor.from_agent_and_tools` function. Can anyone help me with this?
Error creating AgentExecutor with custom Toolkit
https://api.github.com/repos/langchain-ai/langchain/issues/3680/comments
5
2023-04-27T21:49:18Z
2023-09-29T03:10:54Z
https://github.com/langchain-ai/langchain/issues/3680
1,687,597,127
3,680
[ "langchain-ai", "langchain" ]
[One of the best practices](https://nextword.dev/blog/pinecone-cost-best-practices#store-foreign_key-in-your-meta-not-the-whole-json) for dealing with vectorstores is to treat them as vector databases, not traditional databases (NoSQL, SQL, Postgres, etc). I propose to update LangChain's [getting started](https://python.langchain.com/en/latest/modules/indexes/vectorstores/getting_started.html), or similar docs, to include the practice of storing a `foreign_key` in metadata. For example:

---

To align with the best practice of storing foreign keys in metadata instead of storing the entire JSON, you would need to modify the `metadata` parameter when creating documents to include a `foreign_key` field that contains a unique identifier for each document. For example, you could use a UUID for each document:

```
from uuid import uuid4

metadata = {"source": "State of the Union", "foreign_key": str(uuid4())}
documents = text_splitter.create_documents([state_of_the_union], metadatas=[metadata])
```

Then, when adding texts to the vectorstore using the `add_texts` method, you can pass in the list of foreign keys as the `ids` parameter:

```
foreign_keys = [doc.metadata["foreign_key"] for doc in documents]
docsearch.add_texts([doc.page_content for doc in documents], ids=foreign_keys)
```

This way, the only data stored in the vectorstore is the embeddings and the foreign keys, and you can use the foreign keys to look up the actual documents in a separate datastore if needed.
Store foreign_key in your meta, not the whole JSON
https://api.github.com/repos/langchain-ai/langchain/issues/3676/comments
1
2023-04-27T21:11:24Z
2023-09-10T16:25:55Z
https://github.com/langchain-ai/langchain/issues/3676
1,687,556,728
3,676
[ "langchain-ai", "langchain" ]
With FAISS you can save and load created indexes locally: db.save_local("faiss_index") new_db = FAISS.load_local("faiss_index", embeddings) In a production environment you might want to keep your indexes and docs separated from your application and access those remotely and not locally. How can that be achieved? Is there another option where you can host your own vector store separated from your llm agent?
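For what it's worth, a minimal sketch of one possible approach: persist the index locally and sync it with object storage (here S3 via boto3; the bucket name is hypothetical, and this assumes the current save format of an `index.faiss` plus `index.pkl` pair):

```python
import os
import boto3
from langchain.vectorstores import FAISS

BUCKET = "my-vector-indexes"  # hypothetical bucket name
s3 = boto3.client("s3")

def push_index(db: FAISS, name: str) -> None:
    """Save the FAISS index locally, then upload its files to S3."""
    db.save_local(name)
    for fname in ("index.faiss", "index.pkl"):
        s3.upload_file(os.path.join(name, fname), BUCKET, f"{name}/{fname}")

def pull_index(name: str, embeddings) -> FAISS:
    """Download the index files from S3, then load them."""
    os.makedirs(name, exist_ok=True)
    for fname in ("index.faiss", "index.pkl"):
        s3.download_file(BUCKET, f"{name}/{fname}", os.path.join(name, fname))
    return FAISS.load_local(name, embeddings)
```

The other route is a server-based vector store (Pinecone, Weaviate, etc.) that lives apart from the application, since FAISS itself is an in-process library.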
FAISS remote saving and loading of indexes
https://api.github.com/repos/langchain-ai/langchain/issues/3673/comments
3
2023-04-27T20:26:42Z
2024-01-25T19:58:23Z
https://github.com/langchain-ai/langchain/issues/3673
1,687,502,751
3,673
[ "langchain-ai", "langchain" ]
Trying to run langchain with the OpenAI API. It works fine with short paragraphs, but when I tried longer ones I got this error:

```
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens, however you requested 13214 tokens (12958 in your prompt; 256 for the completion). Please reduce your prompt; or completion length.
```

I don't know if I have the settings right or not. Here is my code:

```python
import os
from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains.question_answering import load_qa_chain
from langchain.chat_models import ChatOpenAI

os.environ["OPENAI_API_KEY"] = "sk-xxxxxxxxxx"

def main():
    global db, chain, entry, output  # Add entry and output to the global variables
    file_path = r"F:\langchain\doc1.txt"
    loader = TextLoader(file_path, encoding='utf-8')
    documents = loader.load()
    text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
    docs = text_splitter.split_documents(documents)
    embeddings = OpenAIEmbeddings()
    db = FAISS.from_documents(docs, embeddings)
    # note: the parameter is max_tokens (fixed from max_token)
    llm = ChatOpenAI(openai_api_key="sk-xxxxxxxxxx", model_name="gpt-3.5-turbo", max_tokens=200)
    chain = load_qa_chain(llm, chain_type="stuff")
```
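For context on where those 12958 prompt tokens probably come from: with chain_type="stuff" every document handed to the chain is stuffed into a single prompt, so passing all chunks instead of only the retrieved ones will blow past the 4097-token limit. A sketch of the usual retrieval step (the query string is hypothetical):

```python
query = "What is this document about?"  # hypothetical question
docs = db.similarity_search(query, k=4)  # retrieve only the 4 most relevant chunks
answer = chain.run(input_documents=docs, question=query)
print(answer)
```

If individual chunks are still too large, chain_type="map_reduce" processes each chunk in a separate call before combining the results, at the cost of more API calls.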
Feedback error while running langchain
https://api.github.com/repos/langchain-ai/langchain/issues/3670/comments
2
2023-04-27T19:16:21Z
2023-09-10T16:26:00Z
https://github.com/langchain-ai/langchain/issues/3670
1,687,421,640
3,670
[ "langchain-ai", "langchain" ]
I have two tools to manage credits from a bank. One calculates the loan payments and the other gets the interest rate from a table.

```python
sql_tool = Tool(
    name='Interest rate DB',
    func=sql_chain.run,
    description="Useful for when you need to answer questions about interest rate of credits"
)

class CalculateLoanPayments(BaseTool):
    name = "Loan Payments calculator"
    description = "use this tool when you need to calculate a loan payments"

    def _run(self, parameters):
        # Convert annual interest rate to monthly rate
        monthly_rate = interest_rate / 12.0

        # Calculate total number of monthly payments
        num_payments = num_years * 12.0

        # Calculate monthly payment amount using formula for present value of annuity
        # where PV = A * [(1 - (1 + r)^(-n)) / r]
        # A = monthly payment amount
        # r = monthly interest rate
        # n = total number of payments
        payment = (principal * monthly_rate) / (1 - (1 + monthly_rate) ** (-num_payments))

        return payment

    def _arun(self, radius: Union[int, float]):
        raise NotImplementedError("This tool does not support async")

tools.append(CalculateLoanPayments())
```

I am asking: "I need to calculate the monthly payments for a 2-year loan of 2 million pesos with a commercial credit". The result is:

```
I need to use a loan payments calculator to calculate the monthly payments
Action: Loan Payments calculator
Action Input: Loan amount: 2,000,000, Loan term: 2 years, Interest rate: I need to look up the interest rate in the Interest rate DB
Action: Interest rate DB
Action Input: Commercial credit interest rate
```

One of the parameters needs an action of its own, so I tried:

```python
class LeoOutputParser(AgentOutputParser):
    def parse(self, text: str) -> Union[AgentAction, AgentFinish]:
        if FINAL_ANSWER_ACTION in text:
            return AgentFinish(
                {"output": text.split(FINAL_ANSWER_ACTION)[-1].strip()}, text
            )
        actions = []
        # \s matches against tab/newline/whitespace
        regex = r"Action\s*\d*\s*:(.*?)\nAction\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)"
        match1 = re.search(regex, text, re.DOTALL)
        if not match1:
            raise ValueError(f"Could not parse LLM output: `{text}`")
        action1 = match1.group(1).strip()
        action_input1 = match1.group(2).strip(" ").strip('"')
        first = AgentAction(action1, action_input1, text)
        actions.append(first)

        match2 = re.search(regex, action_input1, re.DOTALL)
        if match2:
            action2 = match2.group(1).strip()
            action_input2 = match2.group(2).strip(" ").strip('"')
            second = AgentAction(action2, action_input2, action_input1)
            actions.insert(0, second)
        return actions
```

```python
zero_shot_agent = initialize_agent(
    agent="zero-shot-react-description",
    tools=tools,
    llm=llm,
    verbose=True,
    max_iterations=3,
    agent_kwargs={'output_parser': LeoOutputParser()}
)
```

It executes the parameter lookup but does not pass the result to the calculator. I propose passing the previous result on to the tool:

```python
observation = tool.run(
    agent_action.tool_input,
    verbose=self.verbose,
    color=color,
    **tool_run_kwargs,
    result
)
```

Thanks in advance
Execute Actions when response has two actions
https://api.github.com/repos/langchain-ai/langchain/issues/3666/comments
4
2023-04-27T17:44:40Z
2023-08-31T17:06:46Z
https://github.com/langchain-ai/langchain/issues/3666
1,687,296,784
3,666
[ "langchain-ai", "langchain" ]
**Issue**
Sometimes when doing a similarity search using the ChromaDB wrapper, I run into the following issue:

`RuntimeError(\'Cannot return the results in a contigious 2D array. Probably ef or M is too small\')`

**Some background info:**
ChromaDB is a library for performing similarity search on high-dimensional data. It uses an approximate nearest neighbor (ANN) search algorithm called Hierarchical Navigable Small World (HNSW) to find the most similar items to a given query. The parameters `ef` and `M` are related to the HNSW algorithm and have an impact on the search quality and performance.

1. `ef`: This parameter controls the size of the dynamic search list used by the HNSW algorithm. A higher value for `ef` results in a more accurate search but slower search speed. A lower value will result in a faster search but less accurate results.
2. `M`: This parameter determines the number of bi-directional links created for each new element during the construction of the HNSW graph. A higher value for `M` results in a denser graph, leading to higher search accuracy but increased memory consumption and construction time.

The error message I encountered indicates that either or both of these parameters are too small for the current dataset. This can cause issues when trying to return the search results in a contiguous 2D array.

To resolve this error, you can try increasing the values of `ef` and `M` in the ChromaDB configuration or during the search query. It's important to note that the optimal values for `ef` and `M` can depend on the specific dataset and use case. You may need to experiment with different values to find the best balance between search accuracy, speed, and memory consumption for your application.

**My proposal**
3 possibilities:
- Simple one: adding `ef` and `M` as optional parameters to similarity_search
- More complex one: adding a retry system built into similarity_search that tries a range of `ef` and `M` values when encountering the issue
- Very complex one: calculating optimal `ef` and `M` within similarity_search to always have optimal values
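For the simple option, a hedged sketch of how these knobs can already be turned at the collection level (assuming a recent chromadb version; the exact metadata keys may differ across releases):

```python
import chromadb

client = chromadb.Client()
# hnsw:search_ef corresponds to ef at query time, hnsw:M to graph connectivity
collection = client.create_collection(
    name="docs",
    metadata={
        "hnsw:construction_ef": 200,  # ef used while building the index
        "hnsw:search_ef": 100,        # ef used while querying
        "hnsw:M": 32,                 # links per node in the HNSW graph
    },
)
```

Exposing the same values through similarity_search would spare users from dropping down to the chromadb client.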
Chroma DB : Cannot return the results in a contiguous 2D array
https://api.github.com/repos/langchain-ai/langchain/issues/3665/comments
5
2023-04-27T16:44:01Z
2024-06-27T09:44:47Z
https://github.com/langchain-ai/langchain/issues/3665
1,687,201,707
3,665
[ "langchain-ai", "langchain" ]
got the following error when running today: ``` File "venv/lib/python3.11/site-packages/langchain/__init__.py", line 6, in <module> from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain File "venv/lib/python3.11/site-packages/langchain/agents/__init__.py", line 2, in <module> from langchain.agents.agent import ( File "venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 17, in <module> from langchain.chains.base import Chain File "venv/lib/python3.11/site-packages/langchain/chains/__init__.py", line 2, in <module> from langchain.chains.api.base import APIChain File "venv/lib/python3.11/site-packages/langchain/chains/api/base.py", line 8, in <module> from langchain.chains.api.prompt import API_RESPONSE_PROMPT, API_URL_PROMPT File "venv/lib/python3.11/site-packages/langchain/chains/api/prompt.py", line 2, in <module> from langchain.prompts.prompt import PromptTemplate File "venv/lib/python3.11/site-packages/langchain/prompts/__init__.py", line 14, in <module> from langchain.prompts.loading import load_prompt File "venv/lib/python3.11/site-packages/langchain/prompts/loading.py", line 14, in <module> from langchain.utilities.loading import try_load_from_hub File "venv/lib/python3.11/site-packages/langchain/utilities/__init__.py", line 5, in <module> from langchain.utilities.bash import BashProcess File "venv/lib/python3.11/site-packages/langchain/utilities/bash.py", line 7, in <module> import pexpect ModuleNotFoundError: No module named 'pexpect' ``` does this need to be added to project dependencies?
import error when importing `from langchain import OpenAI` on 0.0.151
https://api.github.com/repos/langchain-ai/langchain/issues/3664/comments
21
2023-04-27T16:24:30Z
2023-04-28T17:54:02Z
https://github.com/langchain-ai/langchain/issues/3664
1,687,175,750
3,664
[ "langchain-ai", "langchain" ]
When I use another embedding model, the vector dimensions are always wrong. So I used `None` to replace ADA_TOKEN_COUNT; the number of dimensions is then computed automatically the first time an embedding model is used. I tested with 'GanymedeNil/text2vec-large-chinese' and it worked. So I changed this:

```python
embedding: Vector = sqlalchemy.Column(Vector(ADA_TOKEN_COUNT))
```

to this:

```python
embedding: Vector = sqlalchemy.Column(Vector(None))
```
pgvector embedding length error
https://api.github.com/repos/langchain-ai/langchain/issues/3660/comments
3
2023-04-27T15:57:33Z
2023-10-07T16:07:39Z
https://github.com/langchain-ai/langchain/issues/3660
1,687,134,800
3,660
[ "langchain-ai", "langchain" ]
I have no idea what's wrong. I just installed langchain and ran the code below, and this error popped up. Any ideas?

```python
from langchain.document_loaders import UnstructuredPDFLoader, OnlinePDFLoader, UnstructuredImageLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
```

```
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input-2-756b21b77eab> in <module>
----> 1 from langchain.document_loaders import UnstructuredPDFLoader, OnlinePDFLoader, UnstructuredImageLoader
      2 from langchain.text_splitter import RecursiveCharacterTextSplitter

/home/nvme2/kunzhong/anaconda3/lib/python3.8/site-packages/langchain/__init__.py in <module>
      4 from typing import Optional
      5
----> 6 from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
      7 from langchain.cache import BaseCache
      8 from langchain.callbacks import (

/home/nvme2/kunzhong/anaconda3/lib/python3.8/site-packages/langchain/agents/__init__.py in <module>
      1 """Interface for agents."""
----> 2 from langchain.agents.agent import (
      3     Agent,
      4     AgentExecutor,
      5     AgentOutputParser,

/home/nvme2/kunzhong/anaconda3/lib/python3.8/site-packages/langchain/agents/agent.py in <module>
     15 from langchain.agents.tools import InvalidTool
     16 from langchain.callbacks.base import BaseCallbackManager
---> 17 from langchain.chains.base import Chain
     18 from langchain.chains.llm import LLMChain
     19 from langchain.input import get_color_mapping

/home/nvme2/kunzhong/anaconda3/lib/python3.8/site-packages/langchain/chains/__init__.py in <module>
      1 """Chains are easily reusable components which can be linked together."""
----> 2 from langchain.chains.api.base import APIChain
      3 from langchain.chains.api.openapi.chain import OpenAPIEndpointChain
      4 from langchain.chains.combine_documents.base import AnalyzeDocumentChain
      5 from langchain.chains.constitutional_ai.base import ConstitutionalChain

/home/nvme2/kunzhong/anaconda3/lib/python3.8/site-packages/langchain/chains/api/base.py in <module>
      6 from pydantic import Field, root_validator
      7
----> 8 from langchain.chains.api.prompt import API_RESPONSE_PROMPT, API_URL_PROMPT
      9 from langchain.chains.base import Chain
     10 from langchain.chains.llm import LLMChain

/home/nvme2/kunzhong/anaconda3/lib/python3.8/site-packages/langchain/chains/api/prompt.py in <module>
      1 # flake8: noqa
----> 2 from langchain.prompts.prompt import PromptTemplate
      3
      4 API_URL_PROMPT_TEMPLATE = """You are given the below API Documentation:
      5 {api_docs}

/home/nvme2/kunzhong/anaconda3/lib/python3.8/site-packages/langchain/prompts/__init__.py in <module>
      1 """Prompt template classes."""
      2 from langchain.prompts.base import BasePromptTemplate, StringPromptTemplate
----> 3 from langchain.prompts.chat import (
      4     AIMessagePromptTemplate,
      5     BaseChatPromptTemplate,

/home/nvme2/kunzhong/anaconda3/lib/python3.8/site-packages/langchain/prompts/chat.py in <module>
      8 from pydantic import BaseModel, Field
      9
---> 10 from langchain.memory.buffer import get_buffer_string
     11 from langchain.prompts.base import BasePromptTemplate, StringPromptTemplate
     12 from langchain.prompts.prompt import PromptTemplate

/home/nvme2/kunzhong/anaconda3/lib/python3.8/site-packages/langchain/memory/__init__.py in <module>
     21 from langchain.memory.summary_buffer import ConversationSummaryBufferMemory
     22 from langchain.memory.token_buffer import ConversationTokenBufferMemory
---> 23 from langchain.memory.vectorstore import VectorStoreRetrieverMemory
     24
     25 __all__ = [

/home/nvme2/kunzhong/anaconda3/lib/python3.8/site-packages/langchain/memory/vectorstore.py in <module>
      8 from langchain.memory.utils import get_prompt_input_key
      9 from langchain.schema import Document
---> 10 from langchain.vectorstores.base import VectorStoreRetriever
     11
     12

/home/nvme2/kunzhong/anaconda3/lib/python3.8/site-packages/langchain/vectorstores/__init__.py in <module>
      1 """Wrappers on top of vector stores."""
----> 2 from langchain.vectorstores.analyticdb import AnalyticDB
      3 from langchain.vectorstores.annoy import Annoy
      4 from langchain.vectorstores.atlas import AtlasDB
      5 from langchain.vectorstores.base import VectorStore

/home/nvme2/kunzhong/anaconda3/lib/python3.8/site-packages/langchain/vectorstores/analyticdb.py in <module>
      9 from sqlalchemy import REAL, Index
     10 from sqlalchemy.dialects.postgresql import ARRAY, JSON, UUID
---> 11 from sqlalchemy.orm import Mapped, Session, declarative_base, relationship
     12 from sqlalchemy.sql.expression import func
     13

ImportError: cannot import name 'Mapped' from 'sqlalchemy.orm' (/home/nvme2/kunzhong/anaconda3/lib/python3.8/site-packages/sqlalchemy/orm/__init__.py)
```
ImportError: cannot import name 'Mapped' from 'sqlalchemy.orm' (/home/nvme2/kunzhong/anaconda3/lib/python3.8/site-packages/sqlalchemy/orm/__init__.py)
https://api.github.com/repos/langchain-ai/langchain/issues/3655/comments
6
2023-04-27T14:55:35Z
2023-09-24T16:07:06Z
https://github.com/langchain-ai/langchain/issues/3655
1,687,033,336
3,655
[ "langchain-ai", "langchain" ]
![image](https://user-images.githubusercontent.com/109070189/234898722-419457e3-335c-45ea-bd15-c75227f4ee5c.png)

If the returned text of a Generation is not a str, the error gives no helpful info.
logging Generation text type error
https://api.github.com/repos/langchain-ai/langchain/issues/3654/comments
3
2023-04-27T14:48:41Z
2023-09-10T16:26:05Z
https://github.com/langchain-ai/langchain/issues/3654
1,687,020,237
3,654
[ "langchain-ai", "langchain" ]
Hi there, I'm using Langchain + the Azure OpenAI API. Based on that, I'm trying to use the SQL agent to run queries against a PostgreSQL table (15.2). In many cases it works fine, but once in a while I get an error:

```
Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.completion.Completion'>
```

The llm instance is initialized as:

```python
llm = AzureOpenAI(deployment_name=settings.OPENAI_ENGINE, model_name="code-davinci-002")
```

Here's an example of the agent output:

```
....
> Entering new AgentExecutor chain...
Action: list_tables_sql_db
Action Input: ""
Observation: app_organisationadvertiser, app_transaction, app_publisher, app_basketproduct
Thought: I need to query the app_organisationadvertiser table to get the list of brands
Action: query_sql_db
Action Input: SELECT name FROM app_organisationadvertiser LIMIT 10
Observation: [('Your brand nr 1',)]
Thought: I should check my query before executing it
Action: query_checker_sql_db
Action Input: SELECT name FROM app_organisationadvertiser LIMIT 10
```

The final query looks good and is a valid SQL query, but the agent returns an exception with the error described above. Any ideas how to deal with that?
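A hedged observation that may be relevant: the failure seems to happen at the query_checker_sql_db step, and the SQL toolkit's query checker creates its own default OpenAI LLM unless one is passed in explicitly, and that default has no Azure deployment configured. If the toolkit is being built without an explicit llm, something like this might help (a sketch only):

```python
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit

# Passing the Azure-configured llm explicitly so every tool
# (including the query checker) uses the deployment.
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
agent_executor = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True)
```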
AzureOpenAi - Sql Agent: must provide an `engine` or `deployment_id`
https://api.github.com/repos/langchain-ai/langchain/issues/3649/comments
4
2023-04-27T12:28:05Z
2023-04-28T14:14:46Z
https://github.com/langchain-ai/langchain/issues/3649
1,686,748,777
3,649
[ "langchain-ai", "langchain" ]
Hi, using the text-embedding-ada-002 model provided by Azure OpenAI doesn't seem to be working for me. Any fixes?
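For anyone hitting the same wall, a minimal sketch (not verified against every version) of how embeddings are often wired up against Azure OpenAI; the deployment name and endpoint are placeholders, and chunk_size=1 reflects a batch-size limit Azure imposed at the time:

```python
import os
from langchain.embeddings import OpenAIEmbeddings

os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://my-resource.openai.azure.com/"  # placeholder endpoint
os.environ["OPENAI_API_VERSION"] = "2023-03-15-preview"  # may change
os.environ["OPENAI_API_KEY"] = "..."

embeddings = OpenAIEmbeddings(
    deployment="my-ada-002-deployment",  # the "Model deployment name" in the Azure portal
    chunk_size=1,  # Azure only accepted one input per embedding request at the time
)
vec = embeddings.embed_query("hello")
```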
Azure OpenAI Embeddings model not working
https://api.github.com/repos/langchain-ai/langchain/issues/3648/comments
5
2023-04-27T12:03:40Z
2023-05-04T03:53:17Z
https://github.com/langchain-ai/langchain/issues/3648
1,686,708,535
3,648
[ "langchain-ai", "langchain" ]
Using my custom LLM model, I got a warning like this:

```
Token indices sequence length is longer than the specified maximum sequence length for this model (1266 > 1024). Running this sequence through the model will result in indexing errors
```

My model supports up to 8k tokens. Does anyone know what this means?

```python
loader = SeleniumURLLoader(urls=urls)
data = loader.load()
print(data)
llm = MyLLM()
chain = load_summarize_chain(llm, chain_type="map_reduce")
print(chain.run(data))
```
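One hedged guess at the cause: the warning text comes from a Hugging Face tokenizer, and langchain's base LLM falls back to a GPT-2 tokenizer (maximum length 1024) when counting tokens for a model it doesn't recognize, which would explain the 1024 limit even though the model itself accepts 8k tokens. If that's the case, a custom LLM can override the token counting; a sketch (the GPT-2 tokenizer below is a stand-in for the model's real one):

```python
from typing import List, Optional
from langchain.llms.base import LLM
from transformers import AutoTokenizer

# Stand-in tokenizer; swap in whatever tokenizer the custom model ships with.
tokenizer = AutoTokenizer.from_pretrained("gpt2")

class MyLLM(LLM):
    @property
    def _llm_type(self) -> str:
        return "my-llm"

    def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str:
        # Call the actual model here; echoing the prompt keeps the sketch runnable.
        return prompt

    def get_num_tokens(self, text: str) -> int:
        # Count tokens with the model's own tokenizer instead of the default.
        return len(tokenizer.encode(text))
```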
Token indices sequence length is longer than the specified maximum sequence length for this model
https://api.github.com/repos/langchain-ai/langchain/issues/3647/comments
2
2023-04-27T11:59:19Z
2023-10-05T16:10:38Z
https://github.com/langchain-ai/langchain/issues/3647
1,686,700,270
3,647
[ "langchain-ai", "langchain" ]
I am using the Langchain package to connect to a remote DB. The problem is that it takes a lot of time (sometimes more than 3 minutes) to instantiate the SQLDatabase class. To avoid that long wait I am specifying just one table to load, but it still takes up to a minute. Here is the code:

```python
from langchain import OpenAI
from langchain.sql_database import SQLDatabase
from sqlalchemy import create_engine

# already loaded environment vars
llm = OpenAI(temperature=0)
engine = create_engine("postgresql+psycopg2://{user}:{passwd}@{host}:{port}/chatdatabase")
include_tables=['table_1']
db = SQLDatabase(engine, include_tables=include_tables)
...
```

As in the documentation, Langchain uses SQLAlchemy in the background for making connections and loading tables. That is why I tried to make a connection with pure SQLAlchemy, not using langchain:

```python
from sqlalchemy import create_engine

engine = create_engine("postgresql+psycopg2://{user}:{passwd}@{host}:{port}/chatdatabase")
with engine.connect() as con:
    rs = con.execute('select * from table_1 limit 10')
    for row in rs:
        print(row)
```

And surprisingly it takes just a few seconds. Is there any way, or documentation to read (I've searched but had no luck), to make this process faster?
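A hedged explanation plus workaround: SQLDatabase doesn't just open a connection; it reflects table metadata and, by default, fetches sample rows from each included table to build the table info for the prompt, which costs far more round trips than a plain SELECT. If the installed version exposes the parameter, skipping the sample rows might look like this:

```python
db = SQLDatabase(
    engine,
    include_tables=['table_1'],
    sample_rows_in_table_info=0,  # skip fetching example rows during setup
)
```

This trades prompt quality (the LLM no longer sees example rows) for startup time.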
Langchain connection to remote DB takes a lot of time
https://api.github.com/repos/langchain-ai/langchain/issues/3645/comments
27
2023-04-27T11:35:12Z
2024-07-30T09:27:42Z
https://github.com/langchain-ai/langchain/issues/3645
1,686,665,722
3,645
[ "langchain-ai", "langchain" ]
I think I have found an issue with using ChatVectorDBChain together with HuggingFacePipeline that uses Hugging Face Accelerate. First, I successfully load and use a ~10GB model pipeline on an ~8GB GPU (setting it to use only ~5GB by specifying `device_map` and `max_memory`), and initialize the vectorstore: ```python from transformers import pipeline pipe = pipeline(model='declare-lab/flan-alpaca-xl', device_map='auto', model_kwargs={'max_memory': {0: "5GiB", "cpu": "20GiB"}}) pipe("How are you?") # [{'generated_text': "I'm doing well. I'm doing well, thank you. How about you?"}] import faiss import getpass import os from langchain.vectorstores.faiss import FAISS from langchain.text_splitter import CharacterTextSplitter from langchain.chains import ChatVectorDBChain from langchain import HuggingFaceHub, HuggingFacePipeline from langchain.embeddings import HuggingFaceEmbeddings model_name = "sentence-transformers/all-mpnet-base-v2" embeddings = HuggingFaceEmbeddings(model_name=model_name) !nvidia-smi # Thu Apr 27 10:14:26 2023 # +-----------------------------------------------------------------------------+ # | NVIDIA-SMI 510.73.05 Driver Version: 510.73.05 CUDA Version: 11.6 | # |-------------------------------+----------------------+----------------------+ # | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | # | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | # | | | MIG M. | # |===============================+======================+======================| # | 0 Quadro RTX 4000 Off | 00000000:00:05.0 Off | N/A | # | 30% 47C P0 33W / 125W | 5880MiB / 8192MiB | 0% Default | # | | | N/A | # +-------------------------------+----------------------+----------------------+ # +-----------------------------------------------------------------------------+ # | Processes: | # | GPU GI CI PID Type Process name GPU Memory | # | ID ID Usage | # |=============================================================================| # +-----------------------------------------------------------------------------+ with open('data/made-up-story.txt') as f: text = f.read() text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=20) texts = text_splitter.split_text(text) vectorstore = FAISS.from_texts(texts, embeddings) ``` So far so good. The issue arises when I try to load ChatVectorDBChain: ```python llm = HuggingFacePipeline(pipeline=pipe) qa = ChatVectorDBChain.from_llm(llm, vectorstore) # Produces RuntimeError: CUDA out of memory. 
``` Full output: ``` --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) Input In [9], in <cell line: 4>() 1 from transformers import pipeline 3 llm = HuggingFacePipeline(pipeline=pipe) ----> 4 qa = ChatVectorDBChain.from_llm(llm, vectorstore) File /usr/local/lib/python3.9/dist-packages/langchain/chains/conversational_retrieval/base.py:240, in ChatVectorDBChain.from_llm(cls, llm, vectorstore, condense_question_prompt, chain_type, combine_docs_chain_kwargs, **kwargs) 238 """Load chain from LLM.""" 239 combine_docs_chain_kwargs = combine_docs_chain_kwargs or {} --> 240 doc_chain = load_qa_chain( 241 llm, 242 chain_type=chain_type, 243 **combine_docs_chain_kwargs, 244 ) 245 condense_question_chain = LLMChain(llm=llm, prompt=condense_question_prompt) 246 return cls( 247 vectorstore=vectorstore, 248 combine_docs_chain=doc_chain, 249 question_generator=condense_question_chain, 250 **kwargs, 251 ) File /usr/local/lib/python3.9/dist-packages/langchain/chains/question_answering/__init__.py:218, in load_qa_chain(llm, chain_type, verbose, callback_manager, **kwargs) 213 if chain_type not in loader_mapping: 214 raise ValueError( 215 f"Got unsupported chain type: {chain_type}. " 216 f"Should be one of {loader_mapping.keys()}" 217 ) --> 218 return loader_mapping[chain_type]( 219 llm, verbose=verbose, callback_manager=callback_manager, **kwargs 220 ) File /usr/local/lib/python3.9/dist-packages/langchain/chains/question_answering/__init__.py:67, in _load_stuff_chain(llm, prompt, document_variable_name, verbose, callback_manager, **kwargs) 63 llm_chain = LLMChain( 64 llm=llm, prompt=_prompt, verbose=verbose, callback_manager=callback_manager 65 ) 66 # TODO: document prompt ---> 67 return StuffDocumentsChain( 68 llm_chain=llm_chain, 69 document_variable_name=document_variable_name, 70 verbose=verbose, 71 callback_manager=callback_manager, 72 **kwargs, 73 ) File /usr/local/lib/python3.9/dist-packages/pydantic/main.py:339, in pydantic.main.BaseModel.__init__() File /usr/local/lib/python3.9/dist-packages/pydantic/main.py:1038, in pydantic.main.validate_model() File /usr/local/lib/python3.9/dist-packages/pydantic/fields.py:857, in pydantic.fields.ModelField.validate() File /usr/local/lib/python3.9/dist-packages/pydantic/fields.py:1074, in pydantic.fields.ModelField._validate_singleton() File /usr/local/lib/python3.9/dist-packages/pydantic/fields.py:1121, in pydantic.fields.ModelField._apply_validators() File /usr/local/lib/python3.9/dist-packages/pydantic/class_validators.py:313, in pydantic.class_validators._generic_validator_basic.lambda12() File /usr/local/lib/python3.9/dist-packages/pydantic/main.py:679, in pydantic.main.BaseModel.validate() File /usr/local/lib/python3.9/dist-packages/pydantic/main.py:605, in pydantic.main.BaseModel._copy_and_set_values() File /usr/lib/python3.9/copy.py:146, in deepcopy(x, memo, _nil) 144 copier = _deepcopy_dispatch.get(cls) 145 if copier is not None: --> 146 y = copier(x, memo) 147 else: 148 if issubclass(cls, type): File /usr/lib/python3.9/copy.py:230, in _deepcopy_dict(x, memo, deepcopy) 228 memo[id(x)] = y 229 for key, value in x.items(): --> 230 y[deepcopy(key, memo)] = deepcopy(value, memo) 231 return y File /usr/lib/python3.9/copy.py:172, in deepcopy(x, memo, _nil) 170 y = x 171 else: --> 172 y = _reconstruct(x, memo, *rv) 174 # If is its own copy, don't memoize. 
175 if y is not x: File /usr/lib/python3.9/copy.py:270, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy) 268 if state is not None: 269 if deep: --> 270 state = deepcopy(state, memo) 271 if hasattr(y, '__setstate__'): 272 y.__setstate__(state) File /usr/lib/python3.9/copy.py:146, in deepcopy(x, memo, _nil) 144 copier = _deepcopy_dispatch.get(cls) 145 if copier is not None: --> 146 y = copier(x, memo) 147 else: 148 if issubclass(cls, type): File /usr/lib/python3.9/copy.py:230, in _deepcopy_dict(x, memo, deepcopy) 228 memo[id(x)] = y 229 for key, value in x.items(): --> 230 y[deepcopy(key, memo)] = deepcopy(value, memo) 231 return y [... skipping similar frames: _deepcopy_dict at line 230 (1 times), deepcopy at line 146 (1 times)] File /usr/lib/python3.9/copy.py:172, in deepcopy(x, memo, _nil) 170 y = x 171 else: --> 172 y = _reconstruct(x, memo, *rv) 174 # If is its own copy, don't memoize. 175 if y is not x: File /usr/lib/python3.9/copy.py:270, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy) 268 if state is not None: 269 if deep: --> 270 state = deepcopy(state, memo) 271 if hasattr(y, '__setstate__'): 272 y.__setstate__(state) [... skipping similar frames: _deepcopy_dict at line 230 (2 times), deepcopy at line 146 (2 times), deepcopy at line 172 (2 times), _reconstruct at line 270 (1 times)] File /usr/lib/python3.9/copy.py:296, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy) 294 for key, value in dictiter: 295 key = deepcopy(key, memo) --> 296 value = deepcopy(value, memo) 297 y[key] = value 298 else: [... skipping similar frames: deepcopy at line 172 (2 times), _deepcopy_dict at line 230 (1 times), _reconstruct at line 270 (1 times), deepcopy at line 146 (1 times)] File /usr/lib/python3.9/copy.py:296, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy) 294 for key, value in dictiter: 295 key = deepcopy(key, memo) --> 296 value = deepcopy(value, memo) 297 y[key] = value 298 else: [... skipping similar frames: deepcopy at line 172 (11 times), _deepcopy_dict at line 230 (5 times), _reconstruct at line 270 (5 times), _reconstruct at line 296 (5 times), deepcopy at line 146 (5 times)] File /usr/lib/python3.9/copy.py:270, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy) 268 if state is not None: 269 if deep: --> 270 state = deepcopy(state, memo) 271 if hasattr(y, '__setstate__'): 272 y.__setstate__(state) File /usr/lib/python3.9/copy.py:146, in deepcopy(x, memo, _nil) 144 copier = _deepcopy_dispatch.get(cls) 145 if copier is not None: --> 146 y = copier(x, memo) 147 else: 148 if issubclass(cls, type): File /usr/lib/python3.9/copy.py:230, in _deepcopy_dict(x, memo, deepcopy) 228 memo[id(x)] = y 229 for key, value in x.items(): --> 230 y[deepcopy(key, memo)] = deepcopy(value, memo) 231 return y File /usr/lib/python3.9/copy.py:172, in deepcopy(x, memo, _nil) 170 y = x 171 else: --> 172 y = _reconstruct(x, memo, *rv) 174 # If is its own copy, don't memoize. 
175 if y is not x: File /usr/lib/python3.9/copy.py:296, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy) 294 for key, value in dictiter: 295 key = deepcopy(key, memo) --> 296 value = deepcopy(value, memo) 297 y[key] = value 298 else: File /usr/lib/python3.9/copy.py:153, in deepcopy(x, memo, _nil) 151 copier = getattr(x, "__deepcopy__", None) 152 if copier is not None: --> 153 y = copier(memo) 154 else: 155 reductor = dispatch_table.get(cls) File /usr/local/lib/python3.9/dist-packages/torch/nn/parameter.py:56, in Parameter.__deepcopy__(self, memo) 54 return memo[id(self)] 55 else: ---> 56 result = type(self)(self.data.clone(memory_format=torch.preserve_format), self.requires_grad) 57 memo[id(self)] = result 58 return result RuntimeError: CUDA out of memory. Tried to allocate 40.00 MiB (GPU 0; 7.80 GiB total capacity; 6.82 GiB already allocated; 30.44 MiB free; 6.85 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF ``` It seems to me that LangChain is somehow trying to reload the (whole?) pipeline on the GPU. Any help appreciated, thank you.
Issue with ChatVectorDBChain and Hugging Face Accelerate
https://api.github.com/repos/langchain-ai/langchain/issues/3642/comments
1
2023-04-27T10:28:29Z
2023-09-10T16:26:11Z
https://github.com/langchain-ai/langchain/issues/3642
1,686,567,730
3,642
[ "langchain-ai", "langchain" ]
I followed the instructions at https://python.langchain.com/en/latest/modules/agents/tools/examples/bash.html

![image](https://user-images.githubusercontent.com/54729177/234824204-0d44bcbc-bbda-4c57-948c-590b6900c922.png)

But I get this error:

```
bash = BashProcess(persistent=True)
TypeError: BashProcess.__init__() got an unexpected keyword argument 'persistent'
```

The version of langchain is 0.0.150
no 'persistent=True' tag
https://api.github.com/repos/langchain-ai/langchain/issues/3641/comments
1
2023-04-27T09:42:23Z
2023-04-27T19:08:03Z
https://github.com/langchain-ai/langchain/issues/3641
1,686,495,917
3,641
[ "langchain-ai", "langchain" ]
I'm attempting to load some Documents and get a `TransformError` - could someone please point me in the right direction? Thanks! I'm afraid the traceback doesn't mean much to me. ```python db = DeepLake(dataset_path=deeplake_path, embedding_function=embeddings) db.add_documents(texts) ``` ``` tensor htype shape dtype compression ------- ------- ------- ------- ------- embedding generic (0,) float32 None ids text (0,) str None metadata json (0,) str None text text (0,) str None Evaluating ingest: 0%| | 0/1 [00:10<? Traceback (most recent call last): File "C:\Users\charles\Documents\GitHub\Chat-with-Github-Repo\venv\Lib\site-packages\deeplake\core\chunk_engine.py", line 1065, in extend self._extend(samples, progressbar, pg_callback=pg_callback) File "C:\Users\charles\Documents\GitHub\Chat-with-Github-Repo\venv\Lib\site-packages\deeplake\core\chunk_engine.py", line 1001, in _extend self._samples_to_chunks( File "C:\Users\charles\Documents\GitHub\Chat-with-Github-Repo\venv\Lib\site-packages\deeplake\core\chunk_engine.py", line 824, in _samples_to_chunks num_samples_added = current_chunk.extend_if_has_space( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\charles\Documents\GitHub\Chat-with-Github-Repo\venv\Lib\site-packages\deeplake\core\chunk\chunk_compressed_chunk.py", line 50, in extend_if_has_space return self.extend_if_has_space_byte_compression( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\charles\Documents\GitHub\Chat-with-Github-Repo\venv\Lib\site-packages\deeplake\core\chunk\chunk_compressed_chunk.py", line 233, in extend_if_has_space_byte_compression serialized_sample, shape = self.serialize_sample( ^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\charles\Documents\GitHub\Chat-with-Github-Repo\venv\Lib\site-packages\deeplake\core\chunk\base_chunk.py", line 342, in serialize_sample incoming_sample, shape = serialize_text( ^^^^^^^^^^^^^^^ File "C:\Users\charles\Documents\GitHub\Chat-with-Github-Repo\venv\Lib\site-packages\deeplake\core\serialize.py", line 505, in serialize_text incoming_sample, shape = text_to_bytes(incoming_sample, dtype, htype) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\charles\Documents\GitHub\Chat-with-Github-Repo\venv\Lib\site-packages\deeplake\core\serialize.py", line 458, in text_to_bytes byts = json.dumps(sample, cls=HubJsonEncoder).encode() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\charles\AppData\Local\Programs\Python\Python311\Lib\json\__init__.py", line 238, in dumps **kw).encode(obj) ^^^^^^^^^^^ File "C:\Users\charles\AppData\Local\Programs\Python\Python311\Lib\json\encoder.py", line 200, in encode chunks = self.iterencode(o, _one_shot=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\charles\AppData\Local\Programs\Python\Python311\Lib\json\encoder.py", line 258, in iterencode return _iterencode(o, 0) ^^^^^^^^^^^^^^^^^ ValueError: Circular reference detected The above exception was the direct cause of the following exception: Traceback (most recent call last): File "C:\Users\charles\Documents\GitHub\Chat-with-Github-Repo\venv\Lib\site-packages\deeplake\util\transform.py", line 220, in _transform_and_append_data_slice transform_dataset.flush() File "C:\Users\charles\Documents\GitHub\Chat-with-Github-Repo\venv\Lib\site-packages\deeplake\core\transform\transform_dataset.py", line 154, in flush raise SampleAppendError(name) from e deeplake.util.exceptions.SampleAppendError: Failed to append a sample to the tensor 'metadata'. See more details in the traceback. 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\charles\Documents\GitHub\Chat-with-Github-Repo\venv\Lib\site-packages\deeplake\core\chunk_engine.py", line 1065, in extend self._extend(samples, progressbar, pg_callback=pg_callback) File "C:\Users\charles\Documents\GitHub\Chat-with-Github-Repo\venv\Lib\site-packages\deeplake\core\chunk_engine.py", line 1001, in _extend self._samples_to_chunks( File "C:\Users\charles\Documents\GitHub\Chat-with-Github-Repo\venv\Lib\site-packages\deeplake\core\chunk_engine.py", line 824, in _samples_to_chunks num_samples_added = current_chunk.extend_if_has_space( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\charles\Documents\GitHub\Chat-with-Github-Repo\venv\Lib\site-packages\deeplake\core\chunk\chunk_compressed_chunk.py", line 50, in extend_if_has_space return self.extend_if_has_space_byte_compression( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\charles\Documents\GitHub\Chat-with-Github-Repo\venv\Lib\site-packages\deeplake\core\chunk\chunk_compressed_chunk.py", line 233, in extend_if_has_space_byte_compression serialized_sample, shape = self.serialize_sample( ^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\charles\Documents\GitHub\Chat-with-Github-Repo\venv\Lib\site-packages\deeplake\core\chunk\base_chunk.py", line 342, in serialize_sample incoming_sample, shape = serialize_text( ^^^^^^^^^^^^^^^ File "C:\Users\charles\Documents\GitHub\Chat-with-Github-Repo\venv\Lib\site-packages\deeplake\core\serialize.py", line 505, in serialize_text incoming_sample, shape = text_to_bytes(incoming_sample, dtype, htype) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\charles\Documents\GitHub\Chat-with-Github-Repo\venv\Lib\site-packages\deeplake\core\serialize.py", line 458, in text_to_bytes byts = json.dumps(sample, cls=HubJsonEncoder).encode() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\charles\AppData\Local\Programs\Python\Python311\Lib\json\__init__.py", line 238, in dumps **kw).encode(obj) ^^^^^^^^^^^ File "C:\Users\charles\AppData\Local\Programs\Python\Python311\Lib\json\encoder.py", line 200, in encode chunks = self.iterencode(o, _one_shot=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\charles\AppData\Local\Programs\Python\Python311\Lib\json\encoder.py", line 258, in iterencode return _iterencode(o, 0) ^^^^^^^^^^^^^^^^^ ValueError: Circular reference detected The above exception was the direct cause of the following exception: Traceback (most recent call last): File "C:\Users\charles\Documents\GitHub\Chat-with-Github-Repo\venv\Lib\site-packages\deeplake\util\transform.py", line 177, in _handle_transform_error transform_dataset.flush() File "C:\Users\charles\Documents\GitHub\Chat-with-Github-Repo\venv\Lib\site-packages\deeplake\core\transform\transform_dataset.py", line 154, in flush raise SampleAppendError(name) from e deeplake.util.exceptions.SampleAppendError: Failed to append a sample to the tensor 'metadata'. See more details in the traceback. The above exception was the direct cause of the following exception: Traceback (most recent call last): File "C:\Users\charles\Documents\GitHub\Chat-with-Github-Repo\venv\Lib\site-packages\deeplake\core\transform\transform.py", line 298, in eval raise TransformError( deeplake.util.exceptions.TransformError: Transform failed at index 0 of the input data. See traceback for more details. ```
deeplake.util.exceptions.TransformError
https://api.github.com/repos/langchain-ai/langchain/issues/3640/comments
7
2023-04-27T09:03:57Z
2023-11-27T04:56:27Z
https://github.com/langchain-ai/langchain/issues/3640
1,686,435,981
3,640
[ "langchain-ai", "langchain" ]
Brief summary: I need to solve multiple tasks in sequence (e.g. translate an input -> use it to answer a question -> translate to a different language).

Previously I was making multiple LLMChain objects with different prompts and passing the outputs of one chain into another. Then I came across sequential chains and tried them. I didn't find any big difference or reason why I should use one over the other. Moreover, sequential chains seem to be slower than just calling multiple LLMChains. Is there anything I'm missing, or can anyone elaborate on the need for sequential chains? Thanks!!
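For concreteness, a minimal sketch of the two styles being compared (the prompts are made up):

```python
from langchain import LLMChain, OpenAI, PromptTemplate
from langchain.chains import SimpleSequentialChain

llm = OpenAI(temperature=0)
translate = LLMChain(llm=llm, prompt=PromptTemplate(
    input_variables=["text"], template="Translate to English: {text}"))
answer = LLMChain(llm=llm, prompt=PromptTemplate(
    input_variables=["text"], template="Answer the question: {text}"))

# Style 1: call the chains by hand and pipe outputs manually
intermediate = translate.run("¿Cuál es la capital de Francia?")
result_manual = answer.run(intermediate)

# Style 2: let SimpleSequentialChain do the piping
pipeline = SimpleSequentialChain(chains=[translate, answer])
result_chain = pipeline.run("¿Cuál es la capital de Francia?")
```

As far as I can tell the two are functionally equivalent for simple pipelines; the sequential chain mainly buys a single callable object, shared verbose/callback handling, and, with SequentialChain, management of multiple named inputs and outputs.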
Sequential chains vs multiple LLMChains (Why prefer one over the other?)
https://api.github.com/repos/langchain-ai/langchain/issues/3638/comments
5
2023-04-27T07:12:50Z
2023-10-21T16:09:41Z
https://github.com/langchain-ai/langchain/issues/3638
1,686,256,733
3,638
[ "langchain-ai", "langchain" ]
The [docs](https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/googledrive.html) of the GoogleDriveLoader say ``Currently, only Google Docs are supported``, but then, in the [code](https://github.com/hwchase17/langchain/blob/8e10ac422e4e6b193fc35e1d64d7f0c5208faa8d/langchain/document_loaders/googledrive.py#L100), there is a function ``_load_sheet_from_id``. That function is only used for folder loading. By accessing the _private_ method of the class it is possible, and works perfectly, to load spreadsheets:

```python
from langchain.document_loaders import GoogleDriveLoader

spreadsheet_id = "122tuu4r-yYng8Lj7XXXUgb-basdbk"

loader = GoogleDriveLoader(file_ids=[spreadsheet_id])
docs = loader._load_sheet_from_id(spreadsheet_id)
```

Probably ``_load_documents_from_ids`` needs some refactoring to dispatch based on the mimeType, as ``_load_documents_from_folder`` does.
Document Loaders: GoogleDriveLoader hidden option to load spread sheets
https://api.github.com/repos/langchain-ai/langchain/issues/3637/comments
3
2023-04-27T06:07:09Z
2024-02-07T16:30:28Z
https://github.com/langchain-ai/langchain/issues/3637
1,686,176,243
3,637
[ "langchain-ai", "langchain" ]
Hello all, I have been struggling for the past few days attempting to allow an agent.executor call to reference a text file as a VectorStore and determine the best response, then respond. When the agent eventually calls the VectorDBQAChain chain, it throws the below error stating the inability to redefine run(). Any input here is much appreciated. Even a basic setup throws an error stating: ``` .\projectPath\node_modules\langchain\dist\chains\base.cjs:64 Object.defineProperty(outputValues, index_js_1.RUN_KEY, { ^ TypeError: Cannot redefine property: __run at Function.defineProperty (<anonymous>) at VectorDBQAChain.call (C:\Users\Tyler\Documents\RMMZ\GPTales-InteractiveNPC\node_modules\langchain\dist\chains\base.cjs:64:16) at process.processTicksAndRejections (node:internal/process/task_queues:95:5) at async VectorDBQAChain.run (C:\Users\Tyler\Documents\RMMZ\GPTales-InteractiveNPC\node_modules\langchain\dist\chains\base.cjs:29:30) at async ChainTool.call (C:\Users\Tyler\Documents\RMMZ\GPTales-InteractiveNPC\node_modules\langchain\dist\tools\base.cjs:23:22) at async C:\Users\Tyler\Documents\RMMZ\GPTales-InteractiveNPC\node_modules\langchain\dist\agents\executor.cjs:101:23 at async Promise.all (index 0) at async AgentExecutor._call (C:\Users\Tyler\Documents\RMMZ\GPTales-InteractiveNPC\node_modules\langchain\dist\agents\executor.cjs:97:30) at async AgentExecutor.call (C:\Users\Tyler\Documents\RMMZ\GPTales-InteractiveNPC\node_modules\langchain\dist\chains\base.cjs:53:28) at async run (C:\Users\Tyler\Documents\RMMZ\GPTales-InteractiveNPC\Game\js\plugins\GPTales\example.js:79:19) Node.js v19.7.0 ``` Code: ``` const run = async () => { console.log("Starting."); console.log(process.env.OPENAI_API_KEY); process.env.LANGCHAIN_HANDLER = "langchain"; const gameLorePath = path.join(__dirname, "yuri.txt"); const text = fs.readFileSync(gameLorePath, "utf8"); const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000, }); const docs = await textSplitter.createDocuments([text]); const vectorStore = await HNSWLib.fromDocuments(docs, new OpenAIEmbeddings()); const model = new ChatOpenAI({ temperature: 0, api_key: process.env.OPENAI_API_KEY, }); const chain = VectorDBQAChain.fromLLM(model, vectorStore); const characterContextTool = new ChainTool({ name: "character-contextTool-tool", description: "Context for the character - used for querying context of lore(bio, personality, appearance, etc), characters, events, environments, essentially all aspects of the character and their history.", chain: chain, }); const tools = [new Calculator(), characterContextTool]; // Passing "chat-conversational-react-description" as the agent type // automatically creates and uses BufferMemory with the executor. // If you would like to override this, you can pass in a custom // memory option, but the memoryKey set on it must be "chat_history". const executor = await initializeAgentExecutorWithOptions(tools, model, { agentType: "chat-conversational-react-description", verbose: true, }); console.log("Loaded agent."); const input0 = "hi, i am bob. use the character context tool to best decide how to respond considering all facets of the character."; const result0 = await executor.call({ input: input0 }); console.log(`Got output ${result0.output}`); const input1 = "whats your name?"; const result1 = await executor.call({ input: input1 }); console.log(`Got output ${result1.output}`); }; run(); ```
Unable to call VectorDBQAChain from Executor
https://api.github.com/repos/langchain-ai/langchain/issues/3633/comments
2
2023-04-27T05:25:14Z
2023-04-27T17:40:28Z
https://github.com/langchain-ai/langchain/issues/3633
1,686,139,134
3,633
[ "langchain-ai", "langchain" ]
I am facing an issue when using the embeddings model that Azure OpenAI offers. Please help. Here's the code below. Assume the Azure resource name is azure-resource. This issue only arises with the text-embedding-ada-002 model, nothing else.

```python
os.environ["OPENAI_API_KEY"] = API_KEY

# Loading the document using PyPDFLoader
loader = PyPDFLoader('xxx')

# Splitting the document into chunks
pages = loader.load_and_split()

# Creating your embeddings instance
embeddings = OpenAIEmbeddings(
    model = "azure-resource",
)

# Creating your vector db
db = FAISS.from_documents(pages, embeddings)

query = "some-query"
docs = db.similarity_search(query)
```

My error:

`KeyError: 'Could not automatically map azure-resource to a tokeniser. Please use `tiktok.get_encoding` to explicitly get the tokeniser you expect.'`
KeyError: 'Could not automatically map azure-resource to a tokeniser. Arising when using the text-embeddings-ada-002 model.
https://api.github.com/repos/langchain-ai/langchain/issues/3632/comments
0
2023-04-27T05:23:59Z
2023-04-30T14:54:06Z
https://github.com/langchain-ai/langchain/issues/3632
1,686,138,122
3,632
[ "langchain-ai", "langchain" ]
Using MMR with Chroma currently does not work because the max_marginal_relevance_search_by_vector method calls self.__query_collection with the parameter `include`, but `include` is not an accepted parameter for __query_collection. This appears to be a regression introduced with #3372

Excerpt from the max_marginal_relevance_search_by_vector method:

```python
results = self.__query_collection(
    query_embeddings=embedding,
    n_results=fetch_k,
    where=filter,
    include=["metadatas", "documents", "distances", "embeddings"],
)
```

__query_collection does not accept include:

```python
def __query_collection(
    self,
    query_texts: Optional[List[str]] = None,
    query_embeddings: Optional[List[List[float]]] = None,
    n_results: int = 4,
    where: Optional[Dict[str, str]] = None,
) -> List[Document]:
```

This results in an unexpected keyword argument error. The short-term fix is to use self._collection.query instead of self.__query_collection in max_marginal_relevance_search_by_vector, although that loses the protection when the user requests more records than exist in the store.

```python
results = self._collection.query(
    query_embeddings=embedding,
    n_results=fetch_k,
    where=filter,
    include=["metadatas", "documents", "distances", "embeddings"],
)
```
Chroma.py max_marginal_relevance_search_by_vector method currently broken
https://api.github.com/repos/langchain-ai/langchain/issues/3628/comments
4
2023-04-27T00:21:42Z
2023-05-01T17:47:17Z
https://github.com/langchain-ai/langchain/issues/3628
1,685,907,595
3,628
[ "langchain-ai", "langchain" ]
Hi, I'm using deeplake with the ConversationalRetrievalChain (just like in this brand new [code understanding](https://python.langchain.com/en/latest/use_cases/code/code-analysis-deeplake.html#prepare-data) guide) and encountering the following error when calling:

`answer = chain({"question": user_input, "chat_history": chat_history['history']})`

error:

```
File "C:\Users\sbene\Projects\GitChat\src\chatbot.py", line 446, in generate_answer
    answer = chain({"question": user_input, "chat_history": chat_history['history']})
  File "C:\Users\sbene\miniconda3\envs\gitchat\lib\site-packages\langchain\chains\base.py", line 116, in __call__
    raise e
  File "C:\Users\sbene\miniconda3\envs\gitchat\lib\site-packages\langchain\chains\base.py", line 113, in __call__
    outputs = self._call(inputs)
  File "C:\Users\sbene\miniconda3\envs\gitchat\lib\site-packages\langchain\chains\conversational_retrieval\base.py", line 95, in _call
    docs = self._get_docs(new_question, inputs)
  File "C:\Users\sbene\miniconda3\envs\gitchat\lib\site-packages\langchain\chains\conversational_retrieval\base.py", line 162, in _get_docs
    docs = self.retriever.get_relevant_documents(question)
  File "C:\Users\sbene\miniconda3\envs\gitchat\lib\site-packages\langchain\vectorstores\base.py", line 279, in get_relevant_documents
    docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
  File "C:\Users\sbene\miniconda3\envs\gitchat\lib\site-packages\langchain\vectorstores\deeplake.py", line 350, in similarity_search
    return self.search(query=query, k=k, **kwargs)
  File "C:\Users\sbene\miniconda3\envs\gitchat\lib\site-packages\langchain\vectorstores\deeplake.py", line 294, in search
    indices, scores = vector_search(
  File "C:\Users\sbene\miniconda3\envs\gitchat\lib\site-packages\langchain\vectorstores\deeplake.py", line 51, in vector_search
    nearest_indices[::-1][:k] if distance_metric in ["cos"] else nearest_indices[:k]
```
Bug: deeplake cosine distance search error
https://api.github.com/repos/langchain-ai/langchain/issues/3623/comments
1
2023-04-26T23:27:06Z
2023-09-10T16:26:16Z
https://github.com/langchain-ai/langchain/issues/3623
1,685,870,712
3,623
[ "langchain-ai", "langchain" ]
It would be good to get some more documentation and examples of using models other than OpenAI. Currently the docs are really heavily skewed and in some areas such as conversation only offer an OpenAI option. Thanks
Non OpenAI models
https://api.github.com/repos/langchain-ai/langchain/issues/3622/comments
2
2023-04-26T23:06:51Z
2023-09-17T17:22:03Z
https://github.com/langchain-ai/langchain/issues/3622
1,685,858,023
3,622
[ "langchain-ai", "langchain" ]
I am having issues with using ConversationalRetrievalChain to chat with a CSV file. It only recognizes the first four rows of the file.

```python
loader = CSVLoader(file_path=filepath, encoding="utf-8")
data = loader.load()

embeddings = OpenAIEmbeddings(openai_api_key=openai_api_key)
vectorstore = FAISS.from_documents(data, embeddings)

_template = """Given the following conversation and a follow-up question, rephrase the follow-up question to be a standalone question.

Chat History:
{chat_history}
Follow-up entry: {question}
Standalone question:"""

CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(_template)

qa_template = """You are an AI conversational assistant to answer questions based on a context.
You are given data from a csv file and a question, you must help the user find the information they need.
Your answers should be friendly, in the same language.
question: {question}
=========
context: {context}
=======
"""

QA_PROMPT = PromptTemplate(template=qa_template, input_variables=["question", "context"])

model_name = 'gpt-4'

from langchain.memory import ConversationBufferMemory
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

chain = ConversationalRetrievalChain.from_llm(
    llm=ChatOpenAI(temperature=0.0, model_name=model_name, openai_api_key=openai_api_key, request_timeout=120),
    retriever=vectorstore.as_retriever(),
    memory=memory)

query = """ How many headlines are in this data set """
result = chain({"question": query,})
result['answer']
```

The response is `There are four rows in this data set.`

The data has 151 rows, so I know the loading step is working properly. Could this be a token limitation of OpenAI?
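A hedged guess at why the answer is exactly "four": the retriever returned by as_retriever() fetches k=4 documents by default, and CSVLoader turns each row into one document, so the model only ever sees four rows no matter how large the file is. Raising k is one way to test that theory:

```python
chain = ConversationalRetrievalChain.from_llm(
    llm=ChatOpenAI(temperature=0.0, model_name=model_name, openai_api_key=openai_api_key),
    retriever=vectorstore.as_retriever(search_kwargs={"k": 20}),  # fetch more rows per query
    memory=memory,
)
```

Aggregate questions like "how many rows" still won't work well with retrieval QA, since the chain can only count what it retrieves; an agent over a pandas dataframe is usually a better fit for that kind of question.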
ConversationalRetrievalChain with CSV file limited to first 4 rows of data
https://api.github.com/repos/langchain-ai/langchain/issues/3621/comments
14
2023-04-26T22:38:48Z
2023-09-01T07:29:44Z
https://github.com/langchain-ai/langchain/issues/3621
1,685,837,569
3,621
[ "langchain-ai", "langchain" ]
if the line in BaseConversationalRetrievalChain::_call() (in chains/conversational_retrieval/base.py): ``` docs = self._get_docs(new_question, inputs) ``` returns an empty list of docs, then a subsequent line in the same method: ``` answer, _ = self.combine_docs_chain.combine_docs(docs, **new_inputs) ``` will result in an error due to the CombineDocsProtocol.combine_docs() line: ``` results = self.llm_chain.apply( # FYI - this is parallelized and so it is fast. [{**{self.document_variable_name: d.page_content}, **kwargs} for d in docs] ) ``` which will pass an empty "input_list" arg to LLMChain.apply(). LLMChain.apply() doesn't like an empty input_list. Should docs be non-empty in all cases? If the vectorstore is empty, wouldn't it match 0 docs and then shouldn't that be handled more gracefully?
BaseConversationalRetrievalChain raising error when no Documents are matched
https://api.github.com/repos/langchain-ai/langchain/issues/3617/comments
1
2023-04-26T20:15:11Z
2023-09-10T16:26:25Z
https://github.com/langchain-ai/langchain/issues/3617
1,685,654,780
3,617
[ "langchain-ai", "langchain" ]
When executing the code for Human as a tool taken directly from documentation I get the following error: ``` ImportError Traceback (most recent call last) [/Users/anonymous/Library/CloudStorage/OneDrive-UniversityCollegeLondon/Python_Projects/LangChain/delete.ipynb](https://file+.vscode-resource.vscode-cdn.net/Users/anonymous/Library/CloudStorage/OneDrive-UniversityCollegeLondon/Python_Projects/LangChain/delete.ipynb) Cell 2 in 5 [3](vscode-notebook-cell:/Users/anonymous/Library/CloudStorage/OneDrive-UniversityCollegeLondon/Python_Projects/LangChain/delete.ipynb#W1sZmlsZQ%3D%3D?line=2) from langchain.llms import OpenAI [4](vscode-notebook-cell:/Users/anonymous/Library/CloudStorage/OneDrive-UniversityCollegeLondon/Python_Projects/LangChain/delete.ipynb#W1sZmlsZQ%3D%3D?line=3) from langchain.agents import load_tools, initialize_agent ----> [5](vscode-notebook-cell:/Users/anonymous/Library/CloudStorage/OneDrive-UniversityCollegeLondon/Python_Projects/LangChain/delete.ipynb#W1sZmlsZQ%3D%3D?line=4) from langchain.agents import AgentType [7](vscode-notebook-cell:/Users/anonymous/Library/CloudStorage/OneDrive-UniversityCollegeLondon/Python_Projects/LangChain/delete.ipynb#W1sZmlsZQ%3D%3D?line=6) llm = ChatOpenAI(temperature=0.0) [8](vscode-notebook-cell:/Users/anonymous/Library/CloudStorage/OneDrive-UniversityCollegeLondon/Python_Projects/LangChain/delete.ipynb#W1sZmlsZQ%3D%3D?line=7) math_llm = OpenAI(temperature=0.0) ImportError: cannot import name 'AgentType' from 'langchain.agents' ([/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/agents/__init__.py](https://file+.vscode-resource.vscode-cdn.net/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/agents/__init__.py)) ``` Even when commenting out the 'from langchain.agents import AgentType' and switching the agent like so 'agent="zero-shot-react-description"' I still get the following error: ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) [/Users/anonymous/Library/CloudStorage/OneDrive-UniversityCollegeLondon/Python_Projects/LangChain/AGI.ipynb](https://file+.vscode-resource.vscode-cdn.net/Users/anonymous/Library/CloudStorage/OneDrive-UniversityCollegeLondon/Python_Projects/LangChain/AGI.ipynb) Cell 4 in 7 [4](vscode-notebook-cell:/Users/anonymous/Library/CloudStorage/OneDrive-UniversityCollegeLondon/Python_Projects/LangChain/AGI.ipynb#W3sZmlsZQ%3D%3D?line=3) os.environ['WOLFRAM_ALPHA_APPID'] = creds.WOLFRAM_ALPHA_APPID [6](vscode-notebook-cell:/Users/anonymous/Library/CloudStorage/OneDrive-UniversityCollegeLondon/Python_Projects/LangChain/AGI.ipynb#W3sZmlsZQ%3D%3D?line=5) llm = OpenAI(temperature=0.0, model_name = "gpt-3.5-turbo") ----> [7](vscode-notebook-cell:/Users/anonymous/Library/CloudStorage/OneDrive-UniversityCollegeLondon/Python_Projects/LangChain/AGI.ipynb#W3sZmlsZQ%3D%3D?line=6) tools = load_tools(["python_repl", [8](vscode-notebook-cell:/Users/anonymous/Library/CloudStorage/OneDrive-UniversityCollegeLondon/Python_Projects/LangChain/AGI.ipynb#W3sZmlsZQ%3D%3D?line=7) "terminal", [9](vscode-notebook-cell:/Users/anonymous/Library/CloudStorage/OneDrive-UniversityCollegeLondon/Python_Projects/LangChain/AGI.ipynb#W3sZmlsZQ%3D%3D?line=8) "wolfram-alpha", [10](vscode-notebook-cell:/Users/anonymous/Library/CloudStorage/OneDrive-UniversityCollegeLondon/Python_Projects/LangChain/AGI.ipynb#W3sZmlsZQ%3D%3D?line=9) "human", 
[11](vscode-notebook-cell:/Users/anonymous/Library/CloudStorage/OneDrive-UniversityCollegeLondon/Python_Projects/LangChain/AGI.ipynb#W3sZmlsZQ%3D%3D?line=10) # "serpapi", [12](vscode-notebook-cell:/Users/anonymous/Library/CloudStorage/OneDrive-UniversityCollegeLondon/Python_Projects/LangChain/AGI.ipynb#W3sZmlsZQ%3D%3D?line=11) # "wikipedia", [13](vscode-notebook-cell:/Users/anonymous/Library/CloudStorage/OneDrive-UniversityCollegeLondon/Python_Projects/LangChain/AGI.ipynb#W3sZmlsZQ%3D%3D?line=12) "requests", [14](vscode-notebook-cell:/Users/anonymous/Library/CloudStorage/OneDrive-UniversityCollegeLondon/Python_Projects/LangChain/AGI.ipynb#W3sZmlsZQ%3D%3D?line=13) ],) [16](vscode-notebook-cell:/Users/anonymous/Library/CloudStorage/OneDrive-UniversityCollegeLondon/Python_Projects/LangChain/AGI.ipynb#W3sZmlsZQ%3D%3D?line=15) agent = initialize_agent(tools, [17](vscode-notebook-cell:/Users/anonymous/Library/CloudStorage/OneDrive-UniversityCollegeLondon/Python_Projects/LangChain/AGI.ipynb#W3sZmlsZQ%3D%3D?line=16) llm, [18](vscode-notebook-cell:/Users/anonymous/Library/CloudStorage/OneDrive-UniversityCollegeLondon/Python_Projects/LangChain/AGI.ipynb#W3sZmlsZQ%3D%3D?line=17) agent="zero-shot-react-description", [19](vscode-notebook-cell:/Users/anonymous/Library/CloudStorage/OneDrive-UniversityCollegeLondon/Python_Projects/LangChain/AGI.ipynb#W3sZmlsZQ%3D%3D?line=18) verbose=True) File [/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/agents/load_tools.py:236](https://file+.vscode-resource.vscode-cdn.net/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/agents/load_tools.py:236), in load_tools(tool_names, llm, callback_manager, **kwargs) 234 tools.append(tool) 235 else: --> 236 raise ValueError(f"Got unknown tool {name}") 237 return tools ValueError: Got unknown tool human ```
Human as a Tool Documentation Out of Date
https://api.github.com/repos/langchain-ai/langchain/issues/3615/comments
6
2023-04-26T19:58:50Z
2023-04-26T22:11:05Z
https://github.com/langchain-ai/langchain/issues/3615
1,685,632,646
3,615
[ "langchain-ai", "langchain" ]
Hello all, I would like to clarify something regarding indexes, llama connectors, etc. I made a simple Q&A AI app using LangChain with a Pinecone vector DB; the vector DB is updated from local files whenever those files change. Everything works OK. Now, what is the logic when adding other connectors? Do I just use the llama connector to scrape some endpoint like the web or Discord, feed that into the vector DB, and query a single vector DB in the end? I need to query over multiple sources. And how do I deal with new data? Currently, since the text files are small, the Pinecone index is dropped and recreated from scratch, which does not seem like the correct way to do it. Say the web content changes, or something is added or modified: it does not make sense to recreate the whole DB (maybe I can drop entries by source metadata?).
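The drop-by-source-metadata idea at the end is workable. A minimal sketch, assuming the pinecone client of this era (`pinecone.init`/`pinecone.Index`) and that every upserted vector carries a `source` metadata field; the index name and URL below are made up:
```python
import pinecone

pinecone.init(api_key="...", environment="...")
index = pinecone.Index("qa-index")  # hypothetical index name

# Delete only the vectors that came from the changed source, then re-embed
# and upsert that source, instead of rebuilding the whole index.
index.delete(filter={"source": "https://example.com/some-page"})
```
Note that metadata-filtered deletes are a Pinecone feature rather than a LangChain one, so availability depends on the Pinecone plan and pod type.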
Multiple data sources logic ?
https://api.github.com/repos/langchain-ai/langchain/issues/3609/comments
1
2023-04-26T18:23:02Z
2023-09-17T17:22:08Z
https://github.com/langchain-ai/langchain/issues/3609
1,685,505,802
3,609
[ "langchain-ai", "langchain" ]
Hello, I am deploying RetrievalQAWithSourcesChain with the ChatOpenAI model right now. Unlike the OpenAI model, you can provide a system message for the model, which is a great complement. But I have tried many times, and it seems the prompt cannot be inserted into the chain. Please suggest what I should do with my code:
```python
# Prompt construction
template = """You play as {user_name}'s assistant, your name is {name}, personality is {personality}, duty is {duty}"""
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template = """
Context: {context}
Question: {question}
Please indicate if you are not sure about the answer. Do NOT make things up. You MUST answer in {language}."""
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
ChatPromptTemplate.input_variables = ["context", "question", "name", "personality", "user_name", "duty", "language"]

# Define the chain
chain = RetrievalQAWithSourcesChain.from_chain_type(
    llm=llm,
    combine_documents_chain=qa_chain,
    chain_type="stuff",
    retriever=compression_retriever,
    chain_type_kwargs={"prompt": chat_prompt},
)
```
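For reference, one pattern that works in this era of LangChain is to build the combine-documents chain yourself with the custom prompt and hand it to the wrapper, rather than mixing `combine_documents_chain` with `chain_type_kwargs`. A sketch; note the stuff variant of the sources chain fills a `{summaries}` variable, not `{context}`, so the human template above would need renaming, and the exact kwargs should be treated as assumptions:
```python
from langchain.chains import RetrievalQAWithSourcesChain
from langchain.chains.qa_with_sources import load_qa_with_sources_chain

# Build the stuff chain around the custom chat prompt, then wrap it.
qa_chain = load_qa_with_sources_chain(llm, chain_type="stuff", prompt=chat_prompt)
chain = RetrievalQAWithSourcesChain(
    combine_documents_chain=qa_chain,
    retriever=compression_retriever,
)
```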
How can I structure a prompt template for RetrievalQAWithSourcesChain with the ChatOpenAI model
https://api.github.com/repos/langchain-ai/langchain/issues/3606/comments
3
2023-04-26T18:02:39Z
2023-09-17T17:22:13Z
https://github.com/langchain-ai/langchain/issues/3606
1,685,480,734
3,606
[ "langchain-ai", "langchain" ]
I am new to LangChain and am attempting to make it work with a locally running LLM (Alpaca) and embeddings model (Sentence Transformer). When configuring the sentence-transformer model with `HuggingFaceEmbeddings`, no arguments can be passed to the model's encode method, specifically `normalize_embeddings=True`. Nor can I specify the distance metric I want to use in the `similarity_search` method, irrespective of which vector store I am using. So it seems I can only create unnormalized embeddings with Hugging Face models and only use L2 distance as the similarity metric by default, whereas I want to use the cosine similarity metric, or have normalized embeddings and then use the dot product/L2 distance. If I am wrong here, can someone point me in the right direction? If not, are there any plans to implement this?
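One workaround sketch until such a knob exists: subclass `HuggingFaceEmbeddings` and call the underlying SentenceTransformer (stored on `self.client` in this version; that attribute name is the main assumption here) with normalization turned on. With unit-norm vectors, L2 and dot-product rankings coincide with cosine-similarity ranking, so the vector store's default metric stops mattering:
```python
from typing import List

from langchain.embeddings import HuggingFaceEmbeddings

class NormalizedHuggingFaceEmbeddings(HuggingFaceEmbeddings):
    """Force sentence-transformers to L2-normalize its output."""

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        return self.client.encode(texts, normalize_embeddings=True).tolist()

    def embed_query(self, text: str) -> List[float]:
        return self.client.encode(text, normalize_embeddings=True).tolist()
```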
Embeddings normalization and similarity metric
https://api.github.com/repos/langchain-ai/langchain/issues/3605/comments
0
2023-04-26T18:02:20Z
2023-05-30T18:57:06Z
https://github.com/langchain-ai/langchain/issues/3605
1,685,480,283
3,605
[ "langchain-ai", "langchain" ]
I am unsure whether FAISS is a vector database or a search algorithm. The `vectorstores.faiss` module describes it as a vector database, but isn't it really a similarity-search library rather than a database?
The vectorstores module describes FAISS as a vector database
https://api.github.com/repos/langchain-ai/langchain/issues/3601/comments
1
2023-04-26T16:28:37Z
2023-09-10T16:26:41Z
https://github.com/langchain-ai/langchain/issues/3601
1,685,351,151
3,601
[ "langchain-ai", "langchain" ]
Hi Team, I am using OpenSearch as my vectorstore and trying to create an index for document vectors, but I am unable to create the index. I am getting this error: `ERROR - The embeddings count, 501 is more than the [bulk_size], 500. Increase the value of [bulk_size]` Can someone please advise? Thanks
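The error text points at the `bulk_size` argument that `OpenSearchVectorSearch.from_documents`/`from_texts` accepts in this version (default 500). A minimal sketch, with the URL being a placeholder:
```python
from langchain.vectorstores import OpenSearchVectorSearch

docsearch = OpenSearchVectorSearch.from_documents(
    documents,
    embeddings,
    opensearch_url="http://localhost:9200",
    bulk_size=2000,  # must be >= the number of embeddings being indexed
)
```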
Unable to create opensearch index.
https://api.github.com/repos/langchain-ai/langchain/issues/3595/comments
2
2023-04-26T14:04:56Z
2023-09-10T16:26:46Z
https://github.com/langchain-ai/langchain/issues/3595
1,685,103,449
3,595
[ "langchain-ai", "langchain" ]
null
load_qa_chain _ RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
https://api.github.com/repos/langchain-ai/langchain/issues/3593/comments
3
2023-04-26T14:02:08Z
2023-10-18T21:42:47Z
https://github.com/langchain-ai/langchain/issues/3593
1,685,098,021
3,593
[ "langchain-ai", "langchain" ]
I am using RetrievalQAWithSourcesChain to get answers on documents that I previously embedded using Pinecone. I notice that sometimes the sources are not populated under the sources key when I run the chain. I am using Pinecone to embed the PDF documents like so:
```python
documents = loader.load()
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=400,
    chunk_overlap=20,
    length_function=tiktoken_len,
    separators=['\n\n', '\n', ' ', '']
)
split_documents = text_splitter.split_documents(documents=documents)
Pinecone.from_documents(
    split_documents,
    OpenAIEmbeddings(),
    index_name='test_index',
    namespace='test_namespace')
```
I am using RetrievalQAWithSourcesChain to ask queries like so:
```python
llm = OpenAIEmbeddings()
vectorstore: Pinecone = Pinecone.from_existing_index(
    index_name='test_index',
    embedding=OpenAIEmbeddings(),
    namespace='test_namespace'
)
qa_chain = load_qa_with_sources_chain(llm=_llm, chain_type="stuff")
qa = RetrievalQAWithSourcesChain(
    combine_documents_chain=qa_chain,
    retriever=vectorstore.as_retriever(),
    reduce_k_below_max_tokens=True,
)
answer_response = qa({"question": question}, return_only_outputs=True)
```
Expected response: `{'answer': 'some answer', 'sources': 'the_file_name.pdf'}`
Actual response: `{'answer': 'some answer', 'sources': ''}`
This behaviour is not consistent. I sometimes get the sources in the answer itself and not under the sources key, and at other times I get the sources under the 'sources' key and not in the answer. I want the sources to ALWAYS come under the sources key and not in the answer text. I'm using langchain==0.0.149. Am I missing something in the way I'm embedding or retrieving my documents? Or is this an issue with LangChain?

**Edit: Additional information on how to reproduce this issue**

While trying to reproduce the exact issue for @jpdus, I noticed that this happens consistently when I request the answer in a table format. When the query asks for the answer as a table, the source seems to come with the answer rather than under the sources key. I am attaching a test document and some examples here:

Source: [UN Doc.pdf](https://github.com/hwchase17/langchain/files/11339620/UN.Doc.pdf)

Query 1 (with table): what are the goals for sustainability 2030, provide your answer in a table format?
Response:
```json
{'answer': 'Goals for Sustainability 2030:\n\nGoal 1. End poverty in all its forms everywhere\nGoal 2. End hunger, achieve food security and improved nutrition and promote sustainable agriculture\nGoal 3. Ensure healthy lives and promote well-being for all at all ages\nGoal 4. Ensure inclusive and equitable quality education and promote lifelong learning opportunities for all\nGoal 5. Achieve gender equality and empower all women and girls\nGoal 6. Ensure availability and sustainable management of water and sanitation for all\nGoal 7. Ensure access to affordable, reliable, sustainable and modern energy for all\nGoal 8. Promote sustained, inclusive and sustainable economic growth, full and productive employment and decent work for all\nGoal 9. Build resilient infrastructure, promote inclusive and sustainable industrialization and foster innovation\nGoal 10. Reduce inequality within and among countries\nGoal 11. Make cities and human settlements inclusive, safe, resilient and sustainable\nGoal 12. Ensure sustainable consumption and production patterns\nGoal 13. Take urgent action to combat climate change and its impacts\nGoal 14. Conserve and sustainably use the oceans, seas and marine resources for sustainable development\nGoal 15. Protect, restore and promote sustainable use of terrestrial ecosystems, sustainably manage forests, combat desertification, and halt and reverse land degradation and halt biodiversity loss\nSource: docs/UN Doc.pdf', 'sources': ''}
```
Query 2 (without table): what are the goals for sustainability 2030?
Response:
```json
{'answer': "The goals for sustainability 2030 include expanding international cooperation and capacity-building support to developing countries in water and sanitation-related activities and programs, ensuring access to affordable, reliable, sustainable and modern energy for all, promoting sustained, inclusive and sustainable economic growth, full and productive employment and decent work for all, taking urgent action to combat climate change and its impacts, strengthening efforts to protect and safeguard the world's cultural and natural heritage, providing universal access to safe, inclusive and accessible green and public spaces, ensuring sustainable consumption and production patterns, significantly increasing access to information and communications technology and striving to provide universal and affordable access to the Internet in least developed countries by 2020, and reducing inequality within and among countries. \n", 'sources': 'docs/UN Doc.pdf'}
```
RetrievalQAWithSourcesChain sometimes does not return sources under sources key
https://api.github.com/repos/langchain-ai/langchain/issues/3592/comments
7
2023-04-26T13:22:28Z
2023-09-24T16:07:12Z
https://github.com/langchain-ai/langchain/issues/3592
1,685,024,756
3,592
[ "langchain-ai", "langchain" ]
I am using the DirectoryLoader, with the relevant loader class defined:
```python
DirectoryLoader('.\\src', glob="**/*.md", loader_cls=UnstructuredMarkdownLoader)
```
I couldn't understand why the following step didn't chunk the text into the relevant Markdown sections:
```python
markdown_splitter = MarkdownTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = markdown_splitter.split_documents(docs)
```
After digging into it a bit: UnstructuredMarkdownLoader strips the Markdown formatting from the documents. This means the splitter has nothing to guide it and ends up chunking into plain 1000-character pieces.
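One workaround sketch, assuming the goal is heading-aware splitting rather than Unstructured's element parsing: load the raw `.md` files with `TextLoader`, so the Markdown syntax survives, and then let `MarkdownTextSplitter` do its job:
```python
from langchain.document_loaders import DirectoryLoader, TextLoader
from langchain.text_splitter import MarkdownTextSplitter

loader = DirectoryLoader('.\\src', glob="**/*.md", loader_cls=TextLoader)
docs = loader.load()  # page_content keeps the raw Markdown

markdown_splitter = MarkdownTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = markdown_splitter.split_documents(docs)
```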
UnstructuredMarkdownLoader strips Markdown formatting from documents, rendering MarkdownTextSplitter non-functional
https://api.github.com/repos/langchain-ai/langchain/issues/3591/comments
3
2023-04-26T13:02:27Z
2023-11-02T16:15:34Z
https://github.com/langchain-ai/langchain/issues/3591
1,684,990,072
3,591
[ "langchain-ai", "langchain" ]
So I'm just trying to write a custom agent using `LLMSingleActionAgent`, based off the example from the official docs, and I ran into this error:
```
File "/usr/local/lib/python3.9/site-packages/langchain/chains/base.py", line 118, in __call__
  return self.prep_outputs(inputs, outputs, return_only_outputs)
File "/usr/local/lib/python3.9/site-packages/langchain/chains/base.py", line 168, in prep_outputs
  self._validate_outputs(outputs)
File "/usr/local/lib/python3.9/site-packages/langchain/chains/base.py", line 79, in _validate_outputs
  raise ValueError(
ValueError: Did not get output keys that were expected. Got: {'survey_question'}. Expected: {'output'}
```
```python
class CustomOutputParser(AgentOutputParser):
    def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:
        # Check if agent should finish
        if "Final Answer:" in llm_output:
            return AgentFinish(
                # Return values is generally always a dictionary with a single `output` key
                # It is not recommended to try anything else at the moment :)
                return_values={"survey_question": llm_output.split(
                    "Final Answer:")[-1].strip()},
                log=llm_output,
            )
        # Parse out the action and action input
        regex = r"Action\s*\d*\s*:(.*?)\nAction\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)"
        match = re.search(regex, llm_output, re.DOTALL)
        if not match:
            raise ValueError(f"Could not parse LLM result: `{llm_output}`")
        action = match.group(1).strip()
        action_input = match.group(2)
        # Return the action and action input
        return AgentAction(tool=action, tool_input=action_input.strip(" ").strip('"'), log=llm_output)


class Chatbot:
    async def conversational_chat(self, query, dataset_path):
        prompt = CustomPromptTemplate(
            template=template,
            tools=tools,
            # This omits the `agent_scratchpad`, `tools`, and `tool_names` variables because those are generated dynamically
            # This includes the `intermediate_steps` variable because that is needed
            input_variables=["input", "intermediate_steps", "dataset_path"],
            output_parser=CustomOutputParser(),
        )
        output_parser = CustomOutputParser()
        llm = OpenAI(temperature=0)  # type: ignore
        llm_chain = LLMChain(llm=llm, prompt=prompt)
        tool_names = [tool.name for tool in tools]
        survey_agent = LLMSingleActionAgent(
            llm_chain=llm_chain,
            output_parser=output_parser,
            stop=["\nObservation:"],
            allowed_tools=tool_names  # type: ignore
        )
        survey_agent_executor = AgentExecutor.from_agent_and_tools(
            agent=survey_agent, tools=tools, verbose=True)

        return survey_agent_executor({"input": query, "dataset_path": dataset_path})
```
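For context, `AgentExecutor` validates the chain's outputs against the agent's `return_values`, which is `["output"]` by default, so the `AgentFinish` payload has to live under that key. A sketch of the usual fix (only the dictionary key changes):
```python
# Inside CustomOutputParser.parse, when the agent is done:
return AgentFinish(
    return_values={"output": llm_output.split("Final Answer:")[-1].strip()},
    log=llm_output,
)
```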
Did not get output keys that were expected.
https://api.github.com/repos/langchain-ai/langchain/issues/3590/comments
1
2023-04-26T12:55:45Z
2023-09-10T16:26:51Z
https://github.com/langchain-ai/langchain/issues/3590
1,684,978,281
3,590
[ "langchain-ai", "langchain" ]
I'm using OpenAPI agents to access my own APIs, and the LLM I'm using is OpenAI's GPT-4. When I queried something, the LLM answered with not only `Action` and `Action Input` but also `Observation` and even `Final Answer`, filled with fake data, under API_ORCHESTRATOR_PROMPT. So the agent did not actually execute the `api_planner` and `api_controller` tools. I am wondering: are the `API_ORCHESTRATOR_PROMPT` and `FORMAT_INSTRUCTIONS` prompts stable? I tested the [Agent Getting Started](https://python.langchain.com/en/latest/modules/agents/getting_started.html) example and sometimes got bad answers directly from the LLM without tools there as well. Am I missing something important, or should I rewrite the prompt? Thanks
OpenAPI agents did not execute tools
https://api.github.com/repos/langchain-ai/langchain/issues/3588/comments
3
2023-04-26T12:24:28Z
2023-09-13T15:59:30Z
https://github.com/langchain-ai/langchain/issues/3588
1,684,928,287
3,588
[ "langchain-ai", "langchain" ]
I am facing an error when calling the OpenAIEmbeddings model. This is my code:
```python
import os
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "base-thing"
os.environ["OPENAI_API_KEY"] = "apikey"

from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(model="model-name")
text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text])
```
This is the error I am facing: **AttributeError: module 'tiktoken' has no attribute 'model'**
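A workaround sketch for this class of error (the usual fix is upgrading, e.g. `pip install --upgrade tiktoken langchain`): older tiktoken releases don't re-export the `model` submodule on the package, so importing it explicitly makes the attribute resolvable. This is a guess at the root cause, not a confirmed fix:
```python
import tiktoken
import tiktoken.model  # makes tiktoken.model accessible on older releases

enc = tiktoken.model.encoding_for_model("text-embedding-ada-002")
print(len(enc.encode("This is a test document.")))
```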
AttributeError when calling OpenAIEmbeddings model
https://api.github.com/repos/langchain-ai/langchain/issues/3586/comments
12
2023-04-26T11:07:24Z
2023-04-27T05:26:43Z
https://github.com/langchain-ai/langchain/issues/3586
1,684,804,005
3,586
[ "langchain-ai", "langchain" ]
Hi, I have installed langchain-0.0.149 using pip. When trying to run the following code I get an import error:
```
from langchain.retrievers import ContextualCompressionRetriever

Traceback (most recent call last):
  File ".../lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 3460, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<string>", line 1, in <module>
    from langchain.retrievers import ContextualCompressionRetriever
ImportError: cannot import name 'ContextualCompressionRetriever' from 'langchain.retrievers' (.../lib/python3.10/site-packages/langchain/retrievers/__init__.py)
```
Thanks in advance, Mikel.
import error ContextualCompressionRetriever
https://api.github.com/repos/langchain-ai/langchain/issues/3585/comments
2
2023-04-26T10:39:44Z
2023-09-10T16:27:02Z
https://github.com/langchain-ai/langchain/issues/3585
1,684,762,823
3,585
[ "langchain-ai", "langchain" ]
Hello! I am building an AI assistant with the help of LangChain's ConversationalRetrievalChain. I built a FastAPI endpoint where users can ask the AI questions, and I store the previous messages in my DB. My code:
```python
def create_chat_agent():
    llm = ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo")

    # Data ingestion
    word_loader = DirectoryLoader(DOCUMENTS_DIRECTORY, glob="*.docx")
    documents = []
    documents.extend(word_loader.load())

    # Chunking and embeddings
    text_splitter = CharacterTextSplitter(chunk_size=800, chunk_overlap=0)
    documents = text_splitter.split_documents(documents)
    embeddings = OpenAIEmbeddings()
    vectorstore = FAISS.from_documents(documents, embeddings)

    # Initialise LangChain - Conversational Retrieval Chain
    return ConversationalRetrievalChain.from_llm(llm, vectorstore.as_retriever())

def askAI(cls, prompt: str, id: str):
    qa = cls.create_chat_agent()

    chat_history = []
    previousMessages = UserController.get_previous_messages_by_user_id(id)
    for message in previousMessages:
        messageObject = (message['user'], message['ai'])
        chat_history.append(messageObject)

    response = qa({"question": prompt, "chat_history": chat_history})
    cls.update_previous_messages(userId=id, prompt=prompt, response=response["answer"])

    return response
```
I always get back an answer, and most of the time it is very specific; however, sometimes it answers the wrong question, by which I mean a question I asked a few prompts earlier. I don't know what is wrong here, can somebody help me? Thank you in advance!!
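One variation worth trying (a sketch, assuming one chain per user rather than one per request): let the chain own a `ConversationBufferMemory` instead of rebuilding `chat_history` by hand on every call, so question condensing always sees the same history shape:
```python
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
qa = ConversationalRetrievalChain.from_llm(
    llm, vectorstore.as_retriever(), memory=memory
)

response = qa({"question": prompt})  # history is tracked by the memory object
```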
ConversationalRetrievalChain with memory
https://api.github.com/repos/langchain-ai/langchain/issues/3583/comments
3
2023-04-26T09:40:35Z
2023-09-27T16:07:31Z
https://github.com/langchain-ai/langchain/issues/3583
1,684,664,076
3,583
[ "langchain-ai", "langchain" ]
![image](https://user-images.githubusercontent.com/22388079/234501019-6731c2ac-3012-4f7c-91dd-3506d35829a0.png) Token usage calculation is not working: ![image](https://user-images.githubusercontent.com/22388079/234499985-51e5b3fd-49e7-406c-a50b-beb1458aff44.png)
Token usage calculation is not working for asynchronous requests in ChatOpenAI
https://api.github.com/repos/langchain-ai/langchain/issues/3579/comments
2
2023-04-26T07:21:28Z
2023-09-10T16:27:08Z
https://github.com/langchain-ai/langchain/issues/3579
1,684,427,942
3,579
[ "langchain-ai", "langchain" ]
## Description ref: https://python.langchain.com/en/latest/modules/agents/tools/examples/chatgpt_plugins.html Thanks for the great tool. I'm trying ChatGPT Plugin. I get an error when I run the sample code in the document. It looks like it's caused by single quotes. ### output ``` > Entering new AgentExecutor chain... I need to use the Klarna Shopping API to search for available t shirts. Action: KlarnaProducts Action Input: None Observation: Usage Guide: Assistant uses the Klarna plugin to get relevant product suggestions for any shopping or product discovery purpose. Assistant will reply with the following 3 paragraphs 1) Search Results 2) Product Comparison of the Search Results 3) Followup Questions. The first paragraph contains a list of the products with their attributes listed clearly and concisely as bullet points under the product, together with a link to the product and an explanation. Links will always be returned and should be shown to the user. The second paragraph compares the results returned in a summary sentence starting with "In summary". Assistant comparisons consider only the most important features of the products that will help them fit the users request, and each product mention is brief, short and concise. In the third paragraph assistant always asks helpful follow-up questions and end with a question mark. When assistant is asking a follow-up question, it uses it's product expertise to provide information pertaining to the subject of the user's request that may guide them in their search for the right product. OpenAPI Spec: {'openapi': '3.0.1', 'info': {'version': 'v0', 'title': 'Open AI Klarna product Api'}, 'servers': [{'url': 'https://www.klarna.com/us/shopping'}], 'tags': [{'name': 'open-ai-product-endpoint', 'description': 'Open AI Product Endpoint. Query for products.'}], 'paths': {'/public/openai/v0/products': {'get': {'tags': ['open-ai-product-endpoint'], 'summary': 'API for fetching Klarna product information', 'operationId': 'productsUsingGET', 'parameters': [{'name': 'q', 'in': 'query', 'description': "A precise query that matches one very small category or product that needs to be searched for to find the products the user is looking for. If the user explicitly stated what they want, use that as a query. The query is as specific as possible to the product name or category mentioned by the user in its singular form, and don't contain any clarifiers like latest, newest, cheapest, budget, premium, expensive or similar. The query is always taken from the latest topic, if there is a new topic a new query is started.", 'required': True, 'schema': {'type': 'string'}}, {'name': 'size', 'in': 'query', 'description': 'number of products returned', 'required': False, 'schema': {'type': 'integer'}}, {'name': 'min_price', 'in': 'query', 'description': "(Optional) Minimum price in local currency for the product searched for. Either explicitly stated by the user or implicitly inferred from a combination of the user's request and the kind of product searched for.", 'required': False, 'schema': {'type': 'integer'}}, {'name': 'max_price', 'in': 'query', 'description': "(Optional) Maximum price in local currency for the product searched for. 
Either explicitly stated by the user or implicitly inferred from a combination of the user's request and the kind of product searched for.", 'required': False, 'schema': {'type': 'integer'}}], 'responses': {'200': {'description': 'Products found', 'content': {'application/json': {'schema': {'$ref': '#/components/schemas/ProductResponse'}}}}, '503': {'description': 'one or more services are unavailable'}}, 'deprecated': False}}}, 'components': {'schemas': {'Product': {'type': 'object', 'properties': {'attributes': {'type': 'array', 'items': {'type': 'string'}}, 'name': {'type': 'string'}, 'price': {'type': 'string'}, 'url': {'type': 'string'}}, 'title': 'Product'}, 'ProductResponse': {'type': 'object', 'properties': {'products': {'type': 'array', 'items': {'$ref': '#/components/schemas/Product'}}}, 'title': 'ProductResponse'}}}}
Thought: I need to use the Klarna Shopping API to search for available t shirts.
Action: requests_get
Action Input: 'https://www.klarna.com/us/shopping/public/openai/v0/products?q=t%20shirts&size=10'

Traceback (most recent call last):
  File "test.py", line 11, in <module>
    agent_chain.run("what t shirts are available in klarna?")
  File "/Users/asan/.pyenv/versions/3.8.12/lib/python3.8/site-packages/langchain/chains/base.py", line 213, in run
    return self(args[0])[self.output_keys[0]]
  File "/Users/asan/.pyenv/versions/3.8.12/lib/python3.8/site-packages/langchain/chains/base.py", line 116, in __call__
    raise e
  File "/Users/asan/.pyenv/versions/3.8.12/lib/python3.8/site-packages/langchain/chains/base.py", line 113, in __call__
    outputs = self._call(inputs)
  File "/Users/asan/.pyenv/versions/3.8.12/lib/python3.8/site-packages/langchain/agents/agent.py", line 792, in _call
    next_step_output = self._take_next_step(
  File "/Users/asan/.pyenv/versions/3.8.12/lib/python3.8/site-packages/langchain/agents/agent.py", line 695, in _take_next_step
    observation = tool.run(
  File "/Users/asan/.pyenv/versions/3.8.12/lib/python3.8/site-packages/langchain/tools/base.py", line 184, in run
    raise e
  File "/Users/asan/.pyenv/versions/3.8.12/lib/python3.8/site-packages/langchain/tools/base.py", line 181, in run
    observation = self._run(*tool_args, **tool_kwargs)
  File "/Users/asan/.pyenv/versions/3.8.12/lib/python3.8/site-packages/langchain/tools/requests/tool.py", line 31, in _run
    return self.requests_wrapper.get(url)
  File "/Users/asan/.pyenv/versions/3.8.12/lib/python3.8/site-packages/langchain/requests.py", line 125, in get
    return self.requests.get(url, **kwargs).text
  File "/Users/asan/.pyenv/versions/3.8.12/lib/python3.8/site-packages/langchain/requests.py", line 28, in get
    return requests.get(url, headers=self.headers, **kwargs)
  File "/Users/asan/.pyenv/versions/3.8.12/lib/python3.8/site-packages/requests/api.py", line 73, in get
    return request("get", url, params=params, **kwargs)
  File "/Users/asan/.pyenv/versions/3.8.12/lib/python3.8/site-packages/requests/api.py", line 59, in request
    return session.request(method=method, url=url, **kwargs)
  File "/Users/asan/.pyenv/versions/3.8.12/lib/python3.8/site-packages/requests/sessions.py", line 587, in request
    resp = self.send(prep, **send_kwargs)
  File "/Users/asan/.pyenv/versions/3.8.12/lib/python3.8/site-packages/requests/sessions.py", line 695, in send
    adapter = self.get_adapter(url=request.url)
  File "/Users/asan/.pyenv/versions/3.8.12/lib/python3.8/site-packages/requests/sessions.py", line 792, in get_adapter
    raise InvalidSchema(f"No connection adapters were found for {url!r}")
requests.exceptions.InvalidSchema: No connection adapters were found for "'https://www.klarna.com/us/shopping/public/openai/v0/products?q=t%20shirts&size=10'"
```

### source code
```python
from langchain.chat_models import ChatOpenAI
from langchain.agents import load_tools, initialize_agent
from langchain.agents import AgentType
from langchain.tools import AIPluginTool

tool = AIPluginTool.from_plugin_url("https://www.klarna.com/.well-known/ai-plugin.json")
llm = ChatOpenAI(temperature=0)
tools = load_tools(["requests_all"])
tools += [tool]

agent_chain = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent_chain.run("what t shirts are available in klarna?")
```

### versions
- python: 3.8.12
- langchain: '0.0.149'

<details><summary>Details</summary>
<p>

```sh
% python --version
Python 3.8.12
% python
Python 3.8.12 (default, Mar 30 2022, 16:26:57)
[Clang 13.0.0 (clang-1300.0.29.3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import langchain
>>> langchain.__version__
'0.0.149'
>>>
```

</p>
</details>
ChatGPT Plugin sample code cannot be executed
https://api.github.com/repos/langchain-ai/langchain/issues/3577/comments
1
2023-04-26T06:37:33Z
2023-04-26T06:42:31Z
https://github.com/langchain-ai/langchain/issues/3577
1,684,371,023
3,577
[ "langchain-ai", "langchain" ]
I am trying to use the Pandas agent create_pandas_dataframe_agent, but instead of using OpenAI I am replacing the LLM with LlamaCpp. I am running this in Python 3.9 on a SageMaker notebook, with an ml.g4dn.xlarge instance size. I am having trouble running this agent; it produces a weird error. The code is as follows: ![image](https://user-images.githubusercontent.com/129805157/234446661-a5fb5ed5-e632-4c63-b850-9e384b024f32.png) This is the error log: ![image](https://user-images.githubusercontent.com/129805157/234446223-499e2ef2-5df8-433d-ba05-9fbed08ce91c.png) ![image](https://user-images.githubusercontent.com/129805157/234446269-ab7ba7e5-3b68-411d-b051-08763a95df0b.png) ![image](https://user-images.githubusercontent.com/129805157/234446303-5bc807ce-747c-4b75-a2d0-7df2921ef550.png) Detailed error log below: ![image](https://user-images.githubusercontent.com/129805157/234474222-fd863554-b8d6-4db5-bba6-3edfa031d94a.png) ![image](https://user-images.githubusercontent.com/129805157/234474261-9ace2d31-809b-4b7e-a893-53f1edabacac.png) ![image](https://user-images.githubusercontent.com/129805157/234474284-0e22d4e8-b8df-4498-b963-bb275353d88c.png) ![image](https://user-images.githubusercontent.com/129805157/234474317-2297aec6-4821-45d0-8db3-6e9a81ba8eb6.png) ![image](https://user-images.githubusercontent.com/129805157/234474350-9952596c-a47e-490d-87ad-e7ed418f79fe.png) ![image](https://user-images.githubusercontent.com/129805157/234474376-a222688c-eca9-47f4-9701-b0926dfe4f1c.png)
Issue with using LlamaCpp LLM in Pandas Dataframe Agent
https://api.github.com/repos/langchain-ai/langchain/issues/3569/comments
9
2023-04-26T02:00:54Z
2023-12-13T16:10:28Z
https://github.com/langchain-ai/langchain/issues/3569
1,684,129,288
3,569
[ "langchain-ai", "langchain" ]
I want to be able to pass pure string text, not as a text file. When I attempt to do so with long documents I get an error about the file name being too long:
```
Traceback (most recent call last):
  File "/home/faizi/miniconda3/envs/langchain/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 436, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "/home/faizi/miniconda3/envs/langchain/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
    return await self.app(scope, receive, send)
  File "/home/faizi/miniconda3/envs/langchain/lib/python3.10/site-packages/fastapi/applications.py", line 276, in __call__
    await super().__call__(scope, receive, send)
  File "/home/faizi/miniconda3/envs/langchain/lib/python3.10/site-packages/starlette/applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/home/faizi/miniconda3/envs/langchain/lib/python3.10/site-packages/starlette/middleware/errors.py", line 184, in __call__
    raise exc
  File "/home/faizi/miniconda3/envs/langchain/lib/python3.10/site-packages/starlette/middleware/errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "/home/faizi/miniconda3/envs/langchain/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
    raise exc
  File "/home/faizi/miniconda3/envs/langchain/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "/home/faizi/miniconda3/envs/langchain/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
    raise e
  File "/home/faizi/miniconda3/envs/langchain/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "/home/faizi/miniconda3/envs/langchain/lib/python3.10/site-packages/starlette/routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "/home/faizi/miniconda3/envs/langchain/lib/python3.10/site-packages/starlette/routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "/home/faizi/miniconda3/envs/langchain/lib/python3.10/site-packages/starlette/routing.py", line 66, in app
    response = await func(request)
  File "/home/faizi/miniconda3/envs/langchain/lib/python3.10/site-packages/fastapi/routing.py", line 237, in app
    raw_response = await run_endpoint_function(
  File "/home/faizi/miniconda3/envs/langchain/lib/python3.10/site-packages/fastapi/routing.py", line 163, in run_endpoint_function
    return await dependant.call(**values)
  File "/home/faizi/Projects/docu-query/langchain/main.py", line 50, in query
    response = query_document(query, text)
  File "/home/faizi/Projects/docu-query/langchain/__langchain__.py", line 13, in query_document
    index = VectorstoreIndexCreator().from_loaders([loader])
  File "/home/faizi/miniconda3/envs/langchain/lib/python3.10/site-packages/langchain/indexes/vectorstore.py", line 69, in from_loaders
    docs.extend(loader.load())
  File "/home/faizi/miniconda3/envs/langchain/lib/python3.10/site-packages/langchain/document_loaders/text.py", line 17, in load
    with open(self.file_path, encoding=self.encoding) as f:
OSError: [Errno 36] File name too long:
```
The way I've been able to get it to work has been like so:
```python
# get document from supabase where userName = userName
document = supabase \
    .table('Documents') \
    .select('document') \
    .eq('userName', userName) \
    .execute()
text = document.data[0]['document']

# write text to a temporary file
temp = tempfile.NamedTemporaryFile(mode='w+t', encoding='utf-8')
temp.write(text)
temp.seek(0)

# query the document
loader = TextLoader(temp.name)
index = VectorstoreIndexCreator().from_loaders([loader])
response = index.query(query)

# delete the temporary file
temp.close()
```
There must be a more straightforward way. Am I missing something here?
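A sketch of a temp-file-free approach: wrap the string in a `Document` and build the index pieces by hand. The store and splitter choices below mirror what `VectorstoreIndexCreator` uses by default, but treat the exact defaults as assumptions:
```python
from langchain.chains import RetrievalQA
from langchain.docstore.document import Document
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma

# Wrap the raw string; no file system involved.
docs = [Document(page_content=text, metadata={"source": userName})]
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0).split_documents(docs)
db = Chroma.from_documents(chunks, OpenAIEmbeddings())

qa = RetrievalQA.from_chain_type(llm=OpenAI(), retriever=db.as_retriever())
response = qa.run(query)
```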
Can only load text as a text file, not as string input
https://api.github.com/repos/langchain-ai/langchain/issues/3561/comments
8
2023-04-25T22:01:15Z
2023-12-01T16:11:03Z
https://github.com/langchain-ai/langchain/issues/3561
1,683,936,827
3,561
[ "langchain-ai", "langchain" ]
We need model-gateway (router) pattern support for Chains, for the following reasons (a sketch of the routing idea follows the list):
- We may have external decision-making elements that help route which LLM model should handle a request, e.g. a vector store.
- Agents don't always cut it for the customization that would be needed; they require custom tool building etc., which is overkill for a simple routing use case.
- Downstream targets can be any of the numerous supported chain types. This allows for a scalable LLM orchestration model with chaining beyond what's supported today, which is mainly sequential chains.
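A minimal sketch of the vector-store-driven routing described above, with made-up chain names and descriptions; this is hand-rolled glue, not an existing LangChain API:
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# One short description per downstream chain; routing = nearest description.
router_store = FAISS.from_texts(
    ["questions about billing and invoices", "questions about source code"],
    OpenAIEmbeddings(),
    metadatas=[{"chain": "billing"}, {"chain": "code"}],
)

def route(query: str, chains: dict) -> str:
    best = router_store.similarity_search(query, k=1)[0]
    return chains[best.metadata["chain"]].run(query)
```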
Support for model router pattern for chains to allow for dynamic routing to right chains based on vector store semantics
https://api.github.com/repos/langchain-ai/langchain/issues/3555/comments
3
2023-04-25T21:04:41Z
2023-09-24T16:07:21Z
https://github.com/langchain-ai/langchain/issues/3555
1,683,865,609
3,555
[ "langchain-ai", "langchain" ]
See `langchain.vectorstores.milvus.Milvus._worker_search`:
```python
# Decide to use default params if not passed in.
if param is None:
    index_type = self.col.indexes[0].params["index_type"]
    param = self.index_params[index_type]
```
Milvus vector store may search failed when there are multiple indexes
https://api.github.com/repos/langchain-ai/langchain/issues/3546/comments
0
2023-04-25T19:26:09Z
2023-04-25T19:44:08Z
https://github.com/langchain-ai/langchain/issues/3546
1,683,732,337
3,546
[ "langchain-ai", "langchain" ]
Hello, I am trying to use WebBaseLoader to ingest content from a list of URLs.
```python
from langchain.indexes import VectorstoreIndexCreator
from langchain.document_loaders import WebBaseLoader

loader = WebBaseLoader(urls)
index = VectorstoreIndexCreator().from_loaders([loader])
```
But I got an error like: `ValueError: Expected metadata value to be a str, int, or float, got None`
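The error comes from metadata validation in the vector store: WebBaseLoader can emit pages whose `description` or `language` metadata is `None`. A workaround sketch that drops the `None` values and builds the index by hand, assuming Chroma is the default store the index creator would otherwise use:
```python
from langchain.document_loaders import WebBaseLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma

docs = WebBaseLoader(urls).load()
for doc in docs:
    # Chroma rejects None metadata values, so drop those keys.
    doc.metadata = {k: v for k, v in doc.metadata.items() if v is not None}

chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0).split_documents(docs)
db = Chroma.from_documents(chunks, OpenAIEmbeddings())
```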
VectorstoreIndexCreator cannot load data from WebBaseLoader
https://api.github.com/repos/langchain-ai/langchain/issues/3542/comments
9
2023-04-25T18:25:55Z
2024-01-30T00:41:19Z
https://github.com/langchain-ai/langchain/issues/3542
1,683,644,253
3,542
[ "langchain-ai", "langchain" ]
From this notebook: https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/zilliz.html It doesn't work. I searched the issues and it seems someone has started working on it; I'm not sure if it is fixed or not. Here is the error: `RPC error: [create_index], <MilvusException: (code=1, message=IndexType should be AUTOINDEX)>, <Time:{'RPC start': '2023-04-25 13:58:07.716157', 'RPC error': '2023-04-25 13:58:07.779371'}>`
Can not connect to vector store in Zilliz
https://api.github.com/repos/langchain-ai/langchain/issues/3538/comments
1
2023-04-25T17:58:38Z
2023-09-10T16:27:11Z
https://github.com/langchain-ai/langchain/issues/3538
1,683,606,201
3,538
[ "langchain-ai", "langchain" ]
```python
prompt = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate.from_template(template=system_template),
    MessagesPlaceholder(variable_name="history"),
    HumanMessagePromptTemplate.from_template("{input}")
])
llm = ChatOpenAI(temperature=0.9)
memory = ConversationBufferMemory(return_messages=True, ai_prefix="SpongebobSquarePants", human_prefix="Bob")
conversation = ConversationChain(memory=memory, prompt=prompt, llm=llm, verbose=True)
```
Using `ChatPromptTemplate.from_messages` will later go through the `get_buffer_string` method inside `to_string()` of the `ChatPromptValue` class (chat.py in Prompts), and that formatting does not take the new ai_prefix or human_prefix into account. How can I change that? Thanks
AI Prefix and Human Prefix not correctly reflected in
https://api.github.com/repos/langchain-ai/langchain/issues/3536/comments
20
2023-04-25T17:14:29Z
2024-04-10T13:48:20Z
https://github.com/langchain-ai/langchain/issues/3536
1,683,552,593
3,536
[ "langchain-ai", "langchain" ]
null
How can I add an identifier when adding documents to Pinecone? Also, is there any way to update a document that I previously added to Pinecone?
https://api.github.com/repos/langchain-ai/langchain/issues/3531/comments
1
2023-04-25T15:52:43Z
2023-09-10T16:27:16Z
https://github.com/langchain-ai/langchain/issues/3531
1,683,440,009
3,531
[ "langchain-ai", "langchain" ]
![image](https://user-images.githubusercontent.com/54734925/234318510-527f0cfe-a856-4e8e-85c1-e2f7d760833f.png) Getting this error while using FAISS vector store methods. Found that in the code, the query embedding is wrapped in a list, ![image](https://user-images.githubusercontent.com/54734925/234319458-e97257af-a7ef-4a09-9df4-d183a62d945d.png) and then it is wrapped in a list again inside the maximal marginal relevance method. ![image](https://user-images.githubusercontent.com/54734925/234325721-82bb19d7-1465-40f8-ab45-67ca9a39c93c.png) I hope this gets fixed!
ValueError: Number of columns in X and Y must be same. ( in Faiss maximal marginal search )
https://api.github.com/repos/langchain-ai/langchain/issues/3529/comments
2
2023-04-25T15:24:23Z
2023-09-10T16:27:22Z
https://github.com/langchain-ai/langchain/issues/3529
1,683,394,463
3,529
[ "langchain-ai", "langchain" ]
How can I add a custom prompt to:
```python
qa_chain = load_qa_with_sources_chain(llm, chain_type="stuff")
qa = RetrievalQAWithSourcesChain(combine_documents_chain=qa_chain, retriever=docsearch.as_retriever())
```
There is no `prompt=` for this class...
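A sketch of the route that works in this version: `from_chain_type` forwards `chain_type_kwargs` to `load_qa_with_sources_chain`, and the stuff variant accepts a `prompt` there. Note the sources prompt fills `{summaries}` rather than `{context}`; the template text below is made up:
```python
from langchain.prompts import PromptTemplate

template = """Answer using only the context below and cite your sources.
{summaries}
Question: {question}
Answer:"""
prompt = PromptTemplate(template=template, input_variables=["summaries", "question"])

qa = RetrievalQAWithSourcesChain.from_chain_type(
    llm,
    chain_type="stuff",
    retriever=docsearch.as_retriever(),
    chain_type_kwargs={"prompt": prompt},
)
```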
Custom prompt to RetrievalQAWithSourcesChain ?
https://api.github.com/repos/langchain-ai/langchain/issues/3523/comments
26
2023-04-25T13:42:04Z
2023-12-06T17:46:45Z
https://github.com/langchain-ai/langchain/issues/3523
1,683,202,939
3,523
[ "langchain-ai", "langchain" ]
code : pages = loader.load_and_split() text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0) split_docs = text_splitter.split_documents(pages) embeddings = OpenAIEmbeddings(openai_api_key=openAiKey) vector_store = chroma_db.get_vector_store(file_hash_name, embeddings) vector_store.add_documents(documents=split_docs, embedding=embeddings) vector_store.persist() qa = ConversationalRetrievalChain.from_llm(llm, vector_store.as_retriever()) # 提问 chat_history = [] result = qa({"question": query, "chat_history": chat_history}) error: thread '<unnamed>' panicked at 'assertion failed: encoder.len() == decoder.len()', src/lib.rs:458:9 stack backtrace: 0: 0x147d93944 - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::h1027694b54c428d0 1: 0x147da57bc - core::fmt::write::hb60cc483d75d6594 2: 0x147d91b90 - std::io::Write::write_fmt::h6c907fc10bdb865b 3: 0x147d93758 - std::sys_common::backtrace::print::h1a62458f14dd2797 4: 0x147d94ac8 - std::panicking::default_hook::{{closure}}::h03c6918072c36210 5: 0x147d94820 - std::panicking::default_hook::hd0f3cf66b6a0fb5e 6: 0x147d950ec - std::panicking::rust_panic_with_hook::h9ed2a7a45efbd034 7: 0x147d94ecc - std::panicking::begin_panic_handler::{{closure}}::h535244d6186e3534 8: 0x147d93dac - std::sys_common::backtrace::__rust_end_short_backtrace::ha542aa49031c5cb5 9: 0x147d94c68 - _rust_begin_unwind 10: 0x147db4cb0 - core::panicking::panic_fmt::hc1e7b11add95109d 11: 0x147db4d20 - core::panicking::panic::h38074b3ed47cd9d2 12: 0x147cf345c - _tiktoken::CoreBPE::new::h3232dac6b39b5b9e 13: 0x147cfe0c8 - std::panicking::try::h0e408480c04001a1 14: 0x147cf410c - _tiktoken::_::<impl _tiktoken::CoreBPE>::__pymethod___new____::h42d4913b91c5c6b0 15: 0x103698e0c - _type_call 16: 0x10360d870 - __PyObject_MakeTpCall 17: 0x103744120 - _call_function 18: 0x10373c36c - __PyEval_EvalFrameDefault 19: 0x103734e14 - __PyEval_Vector 20: 0x10360db98 - __PyObject_FastCallDictTstate 21: 0x1036a20fc - _slot_tp_init 22: 0x103698ef0 - _type_call 23: 0x10360e678 - __PyObject_Call 24: 0x103736c58 - __PyEval_EvalFrameDefault 25: 0x103734e14 - __PyEval_Vector 26: 0x103744028 - _call_function 27: 0x10373aa68 - __PyEval_EvalFrameDefault 28: 0x103734e14 - __PyEval_Vector 29: 0x103744028 - _call_function 30: 0x10373c36c - __PyEval_EvalFrameDefault 31: 0x103734e14 - __PyEval_Vector 32: 0x103611738 - _method_vectorcall 33: 0x103744028 - _call_function 34: 0x10373aaec - __PyEval_EvalFrameDefault 35: 0x103734e14 - __PyEval_Vector 36: 0x103744028 - _call_function 37: 0x10373b378 - __PyEval_EvalFrameDefault 38: 0x103734e14 - __PyEval_Vector 39: 0x103611738 - _method_vectorcall 40: 0x10360e378 - _PyVectorcall_Call 41: 0x103736c58 - __PyEval_EvalFrameDefault 42: 0x103734e14 - __PyEval_Vector 43: 0x103611738 - _method_vectorcall 44: 0x103744028 - _call_function 45: 0x10373aaec - __PyEval_EvalFrameDefault 46: 0x103734e14 - __PyEval_Vector 47: 0x103744028 - _call_function 48: 0x10373c36c - __PyEval_EvalFrameDefault 49: 0x103734e14 - __PyEval_Vector 50: 0x10379f918 - _pyrun_file 51: 0x10379f05c - __PyRun_SimpleFileObject 52: 0x10379e6a8 - __PyRun_AnyFileObject 53: 0x1037ca8b0 - _pymain_run_file_obj 54: 0x1037c9f50 - _pymain_run_file 55: 0x1037c9538 - _pymain_run_python 56: 0x1037c93cc - _Py_RunMain 57: 0x1037caa58 - _pymain_main 58: 0x1037cad1c - _Py_BytesMain Traceback (most recent call last): File "/Users/macbookpro/21/mygit/toyoung/ai/py-chat/main.py", line 33, in <module> result = pdf_service.chrom_qa_pdf(filepath, 
"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",) File "/Users/macbookpro/21/mygit/toyoung/ai/py-chat/services/pdf_service.py", line 242, in chrom_qa_pdf vector_store.add_documents(documents=split_docs, embedding=embeddings) File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/vectorstores/base.py", line 61, in add_documents return self.add_texts(texts, metadatas, **kwargs) File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/vectorstores/chroma.py", line 115, in add_texts embeddings = self._embedding_function.embed_documents(list(texts)) File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/embeddings/openai.py", line 275, in embed_documents return self._get_len_safe_embeddings(texts, engine=self.document_model_name) File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/embeddings/openai.py", line 206, in _get_len_safe_embeddings encoding = tiktoken.model.encoding_for_model(self.document_model_name) File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tiktoken/model.py", line 75, in encoding_for_model return get_encoding(encoding_name) File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tiktoken/registry.py", line 63, in get_encoding enc = Encoding(**constructor()) File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tiktoken/core.py", line 50, in __init__ self._core_bpe = _tiktoken.CoreBPE(mergeable_ranks, special_tokens, pat_str) pyo3_runtime.PanicException: assertion failed: encoder.len() == decoder.len()
thread '<unnamed>' panicked at 'assertion failed: encoder.len() == decoder.len()', src/lib.rs:458:9
https://api.github.com/repos/langchain-ai/langchain/issues/3521/comments
1
2023-04-25T13:05:54Z
2023-09-15T22:12:52Z
https://github.com/langchain-ai/langchain/issues/3521
1,683,140,695
3,521
[ "langchain-ai", "langchain" ]
I am opening this issue to request the addition of multilingual support to the LangChain Python repo. As the user base for LangChain grows, it is becoming increasingly important to accommodate users from different linguistic backgrounds. Adding support for multiple languages would enable a wider audience to utilize LangChain effectively and contribute to the project's overall success. Furthermore, this would pave the way for the integration of other languages in the future, making the library even more accessible and user-friendly. To implement this feature, I will work on the following:
* [ ] Incorporate a mechanism for detecting and handling different languages, starting with Korean, within the library.
* [ ] Provide localized documentation and error messages for the supported languages, not limited to docs + system prompt.
* [ ] Enable seamless switching between languages based on user preferences.
Request for Multilingual Support in Langchain (docs + etc)
https://api.github.com/repos/langchain-ai/langchain/issues/3520/comments
4
2023-04-25T12:58:27Z
2023-09-24T16:07:27Z
https://github.com/langchain-ai/langchain/issues/3520
1,683,126,113
3,520
[ "langchain-ai", "langchain" ]
We define a callback manager and a chatbot:
```python
from langchain.callbacks import OpenAICallbackHandler
from langchain.callbacks.base import CallbackManager

manager = CallbackManager([OpenAICallbackHandler()])
chatbot = ChatOpenAI(temperature=1, callback_manager=manager)
messages = [SystemMessage(content="")]
```
Now if we use `result = chatbot(messages)` to call the OpenAI API for a result, it won't trigger any callback. But if we use `chat.generate_prompt()` or `chat.agenerate_prompt()`, callbacks do get triggered. I suppose this is a bug, not a feature, right? https://github.com/hwchase17/langchain/blob/bee59b4689fe23dce1450bde1a5d96b0aa52ee61/langchain/chat_models/base.py#L125
BaseChatModel.__call__() doesn't trigger any callback
https://api.github.com/repos/langchain-ai/langchain/issues/3519/comments
2
2023-04-25T12:27:36Z
2023-09-17T17:22:18Z
https://github.com/langchain-ai/langchain/issues/3519
1,683,076,298
3,519
[ "langchain-ai", "langchain" ]
I am using the JSON agent, and currently it answers only from the JSON. Even if I say 'hi' it gives me some random answer from the given JSON. My goal is to handle smalltalk normally, answer questions from the JSON, and say "I don't know" when it is not sure. I have achieved the "I don't know" part by modifying the prefix, but how do I handle the smalltalk?
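One hand-rolled pattern for this (a sketch, not a JSON-toolkit feature): expose a plain-LLM "smalltalk" tool alongside the JSON tools, and let the agent's tool descriptions route greetings away from the data:
```python
from langchain.agents import Tool
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

smalltalk_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template("Reply briefly and conversationally: {input}"),
)
smalltalk_tool = Tool(
    name="smalltalk",
    func=smalltalk_chain.run,
    description="Use for greetings and casual conversation that is not about the JSON data.",
)
# Append smalltalk_tool to the JSON toolkit's tools before building the agent.
```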
Smalltalk in JSON Agent
https://api.github.com/repos/langchain-ai/langchain/issues/3515/comments
1
2023-04-25T10:49:24Z
2023-09-10T16:27:32Z
https://github.com/langchain-ai/langchain/issues/3515
1,682,922,264
3,515
[ "langchain-ai", "langchain" ]
I am facing a Warning similar to the one described here #3005 `WARNING:langchain.embeddings.openai:Retrying langchain.embeddings.openai.embed_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised Timeout: Request timed out: HTTPSConnectionPool(host='api.openai.com', port=443): Read timed out. (read timeout=600). ` It just keeps retrying. How do I get around this?
Timeout Error OpenAI
https://api.github.com/repos/langchain-ai/langchain/issues/3512/comments
31
2023-04-25T10:27:34Z
2024-06-17T23:33:27Z
https://github.com/langchain-ai/langchain/issues/3512
1,682,889,147
3,512
[ "langchain-ai", "langchain" ]
I want to use `from langchain.llms import AzureOpenAI` with the following configuration:
```python
os.environ["OPENAI_API_KEY"] = api_key_35
os.environ["OPENAI_API_VERSION"] = "2023-03-15-preview"
```
where api_key_35 is the key for AzureOpenAI. The code is:
```python
llm = AzureOpenAI(
    max_tokens=1024,
    deployment_name="gpt-35-turbo",
    openai_api_type="azure",
    model_name="gpt-35-turbo",
)
```
The returned result is: `openai.error.AuthenticationError: Incorrect API key provided: ********************. You can find your API key at https://platform.openai.com/account/api-keys.`
I changed the configuration to:
```python
os.environ["OPENAI_API_KEY"] = api_key_35
os.environ["OPENAI_API_BASE"] = api_base_35
os.environ["OPENAI_API_VERSION"] = "2023-03-15-preview"
```
where api_key_35 is again the key for AzureOpenAI and the code is the same as above. The returned result is: `openai.error.InvalidRequestError: Resource not found`
If I use the key for OpenAI instead of AzureOpenAI, it runs successfully. Why is there an error? If I use `openai.ChatCompletion.create` directly, it works.
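For reference, a configuration sketch that is usually needed for Azure (the resource URL and key are placeholders): the API type, base, version, and key must all point at the Azure resource, and `deployment_name` must be the Azure "model deployment name". Also note `gpt-35-turbo` is a chat model, so `AzureChatOpenAI` from `langchain.chat_models` may be the better wrapper; treat that as a suggestion, not a confirmed fix:
```python
import os
from langchain.llms import AzureOpenAI

os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://<your-resource>.openai.azure.com/"
os.environ["OPENAI_API_KEY"] = "<azure-key>"
os.environ["OPENAI_API_VERSION"] = "2023-03-15-preview"

llm = AzureOpenAI(deployment_name="gpt-35-turbo", max_tokens=1024)
```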
Can not use AzureOpenAI
https://api.github.com/repos/langchain-ai/langchain/issues/3510/comments
19
2023-04-25T10:09:52Z
2023-10-05T16:10:49Z
https://github.com/langchain-ai/langchain/issues/3510
1,682,859,456
3,510
[ "langchain-ai", "langchain" ]
I've found that a recent LangChain upgrade broke our detection of the "agent exceeded max iterations" response, returned in https://github.com/hwchase17/langchain/blob/bee59b4689fe23dce1450bde1a5d96b0aa52ee61/langchain/agents/agent.py#L92-L96. The reason it broke is that we have to detect that case from the response text, which changed, instead of from structured information. What do you think of introducing a new early-stopping type, "raise", which raises an exception, or of passing that information back to the application in some structured way?
Agent early stopping is difficult to detect on the application level
https://api.github.com/repos/langchain-ai/langchain/issues/3509/comments
1
2023-04-25T09:47:58Z
2023-09-10T16:27:38Z
https://github.com/langchain-ai/langchain/issues/3509
1,682,823,125
3,509
[ "langchain-ai", "langchain" ]
I want to host a Python agent on Gradio, and it all works well. I'm struggling with wanting to display not only the answer but also the AgentExecutor chain. How can I edit the code so that the AgentExecutor chain is also printed in the Gradio app? The code snippet for that part is the following:
```python
def answer_question(question):
    agent_executor = create_python_agent(
        llm=OpenAI(temperature=0, max_tokens=1000),
        tool=PythonREPLTool(),
        verbose=True
    )
    answer = agent_executor.run(question)
    return answer

ifaces = gr.Interface(
    fn=answer_question,
    inputs=gr.inputs.Textbox(label="Question"),
    outputs=gr.outputs.Textbox(label="Answer"),
    title="Question Answering Agent",
    description="A simple question answering agent."
)
```
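A workaround sketch: with `verbose=True` the executor prints its trace to stdout, so one option is to capture stdout around the run call and return it with the answer. This assumes the executor is built once outside the handler; the ANSI color codes in the trace may also need stripping, which is omitted here:
```python
import io
from contextlib import redirect_stdout

def answer_question(question):
    buf = io.StringIO()
    with redirect_stdout(buf):  # capture the verbose AgentExecutor trace
        answer = agent_executor.run(question)
    return f"{buf.getvalue()}\nFinal answer: {answer}"
```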
Agent Executor Chain
https://api.github.com/repos/langchain-ai/langchain/issues/3506/comments
3
2023-04-25T08:54:14Z
2023-11-16T16:08:17Z
https://github.com/langchain-ai/langchain/issues/3506
1,682,728,591
3,506
[ "langchain-ai", "langchain" ]
The Anthropic wrapper does not support a request timeout setting.
Anthropic does not support a request timeout setting
https://api.github.com/repos/langchain-ai/langchain/issues/3502/comments
1
2023-04-25T08:34:16Z
2023-09-10T16:27:42Z
https://github.com/langchain-ai/langchain/issues/3502
1,682,697,890
3,502
[ "langchain-ai", "langchain" ]
oobabooga/text-generation-webui is a popular way of running various models, including llama variants, on GPU and via llama.cpp. It would be useful to be able to call its API, as it can run and configure LLaMA, llama.cpp, GPT-J, Pythia, OPT, and GALACTICA in various quantisations, with LoRA etc. I know you have just added llama.cpp directly, but I could not find any way to call the API of oobabooga/text-generation-webui. I recall seeing someone trying to wrap Kobold, but I can't find their work; I expect it would be similar. Is anyone working on this? If not, I will fork and have a go; it doesn't seem too difficult to wrap LLM APIs given the examples provided.
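A sketch of what such a wrapper could look like using this version's custom-LLM interface; the endpoint path and payload/response shapes below are assumptions about text-generation-webui's API, not verified against it:
```python
from typing import List, Optional

import requests
from langchain.llms.base import LLM

class TextGenWebUI(LLM):
    """Custom LLM wrapper sketch for a locally running text-generation-webui."""

    endpoint: str = "http://localhost:5000/api/v1/generate"  # hypothetical

    @property
    def _llm_type(self) -> str:
        return "text-generation-webui"

    def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str:
        resp = requests.post(self.endpoint, json={"prompt": prompt})
        resp.raise_for_status()
        return resp.json()["results"][0]["text"]  # assumed response shape
```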
text-generation-webui api
https://api.github.com/repos/langchain-ai/langchain/issues/3499/comments
6
2023-04-25T07:06:54Z
2023-09-24T16:07:32Z
https://github.com/langchain-ai/langchain/issues/3499
1,682,563,350
3,499
[ "langchain-ai", "langchain" ]
null
How can I adapt my own LLM to work with initialize_agent(tools, llm)?
https://api.github.com/repos/langchain-ai/langchain/issues/3498/comments
1
2023-04-25T06:43:20Z
2023-09-10T16:27:47Z
https://github.com/langchain-ai/langchain/issues/3498
1,682,531,129
3,498
[ "langchain-ai", "langchain" ]
I'm running langchain on a 4xV100 rig on AWS. Currently it only utilizes a single GPU. I was able to get it to run on all GPUs by changing https://github.com/hwchase17/langchain/blob/a14d1c02f87d23d9ff5ab36a4c68aeb724499455/langchain/embeddings/huggingface.py#L71 to
```python
print('Using MultiGPU')
pool = self.client.start_multi_process_pool()
embeddings = self.client.encode_multi_process(texts, pool)
self.client.stop_multi_process_pool(pool)
```
as `sentence-transformers` does support multi-GPU encoding under the hood. I am sure there is a more elegant way of achieving this, although this duct-taped solution seems to work for now.
Multi GPU support
https://api.github.com/repos/langchain-ai/langchain/issues/3486/comments
5
2023-04-25T03:52:39Z
2023-10-26T16:08:39Z
https://github.com/langchain-ai/langchain/issues/3486
1,682,381,944
3,486
[ "langchain-ai", "langchain" ]
Start with the following tutorial: https://python.langchain.com/en/latest/modules/agents/agents/custom_llm_agent.html But instead of using SerpAPI, use the Google search tool:
```python
from langchain.agents import load_tools
tools = load_tools(["google-search"])
```
The step that creates the CustomPromptTemplate will encounter a validation error:
```
ValidationError                           Traceback (most recent call last)
Cell In[36], line 24
     21     kwargs["tool_names"] = ", ".join([tool.name for tool in self.tools])
     22     return self.template.format(**kwargs)
---> 24 prompt = CustomPromptTemplate(
     25     template=template,
     26     tools=tools,
     27     # This omits the `agent_scratchpad`, `tools`, and `tool_names` variables because those are generated dynamically
     28     # This includes the `intermediate_steps` variable because that is needed
     29     input_variables=["input", "intermediate_steps"]
     30 )

File .venv-jupyter/lib/python3.10/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__()

ValidationError: 1 validation error for CustomPromptTemplate
tools -> 0
  Tool.__init__() missing 1 required positional argument: 'func' (type=type_error)
```
The problem appears to be that the result of calling `load_tools(["google-search"])` is a BaseTool and not a Tool, and doesn't have a `func`. This can be fixed by modifying the CustomPromptTemplate to use BaseTool instead of Tool:
```python
from langchain.tools import BaseTool

class CustomPromptTemplate(StringPromptTemplate):
    # The template to use
    template: str
    # The list of tools available
    tools: List[BaseTool]

    def format(self, **kwargs) -> str:
        <SNIP>
```
However, I am not sure if this is the correct fix, or if the problem is that load_tools should create a `Tool` instead of a `BaseTool`. In other words, is this a doc issue or a product issue?
Custom agent tutorial doesn't handle replacing SerpAPI with the google search tool
https://api.github.com/repos/langchain-ai/langchain/issues/3485/comments
9
2023-04-25T03:42:48Z
2024-07-26T05:18:58Z
https://github.com/langchain-ai/langchain/issues/3485
1,682,376,132
3,485
[ "langchain-ai", "langchain" ]
With reference to the topic, I'm trying to build a chatbot that performs some action based on a conversation with a user. I believe I could use the "agent" module together with some "tools". However, with this combination, it seems that the tools are used to provide context (in the form of a string) for the agent to generate some text (answer some question, etc.). How do I trigger a function whose output I do not need (for example, to create an item in a todo list), without redirecting that output to the LLM? The action does not even need to output a string; it may not even return a value. What is the recommended way to do something like this? Implement a custom callback? A custom agent?
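One common pattern (a sketch; `add_todo_item` is a hypothetical function standing in for the side effect): a tool can simply perform the action and return a short confirmation string. The agent needs some string back to continue its loop, but nothing requires it to be meaningful output:
```python
from langchain.agents import Tool

def add_todo(item: str) -> str:
    add_todo_item(item)  # the side effect we actually care about
    return "Added to the todo list."

todo_tool = Tool(
    name="add_todo",
    func=add_todo,
    description="Adds an item to the user's todo list. Input is the item text.",
)
```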
What is the appropriate module to use if I want to just perform an action?
https://api.github.com/repos/langchain-ai/langchain/issues/3484/comments
2
2023-04-25T03:24:43Z
2023-09-10T16:27:53Z
https://github.com/langchain-ai/langchain/issues/3484
1,682,364,708
3,484
[ "langchain-ai", "langchain" ]
BaseOpenAI exposes an API base override:
```python
class BaseOpenAI(BaseLLM):
    """Wrapper around OpenAI large language models."""

    openai_api_base: Optional[str] = None
```
I hope ChatOpenAI will support this configuration too; after all, the gpt-3.5-turbo model is much cheaper.
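A stopgap sketch until the parameter lands: the underlying `openai` package reads a module-level `api_base`, which both wrappers go through (the proxy URL below is a placeholder):
```python
import openai

# Affects every call made through the openai package in this process.
openai.api_base = "https://my-openai-proxy.example.com/v1"  # hypothetical proxy
```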
BaseOpenAI supports openai_api_base configuration but ChatOpenAI does not
https://api.github.com/repos/langchain-ai/langchain/issues/3483/comments
1
2023-04-25T03:22:22Z
2023-09-10T16:27:57Z
https://github.com/langchain-ai/langchain/issues/3483
1,682,363,391
3,483
[ "langchain-ai", "langchain" ]
The `_call` function returns `result["choices"][0]["text"]`. For me, `result["choices"][0]["text"]` includes both the prompt and the answer.

My use case, document summarization:

```python
llm = LlamaCpp(model_path=r"D:\AI\Model\vicuna-13B-1.1-GPTQ-4bit-128g.GGML.bin", n_ctx=4000, f16_kv=True)
chain = load_summarize_chain(llm, chain_type="map_reduce", verbose=True)
myoutput = chain.run(docs)
```

Well, obviously this does not work, because in `llms/base.py` the `_generate` function calls `_call` like this:

```python
generations = []
for prompt in prompts:
    text = self._call(prompt, stop=stop)
    generations.append([Generation(text=text)])
```

This chain is supposed to create a summary, but what comes back from the LLM (prompt + output) is longer than the input (the prompt), so the chain goes into a loop.

FYI, this works totally fine:

```python
llm = OpenAI(temperature=0)
chain = load_summarize_chain(llm, chain_type="map_reduce", verbose=True)
myoutput = chain.run(docs)
```

And in this case I can confirm that `text = self._call(prompt, stop=stop)` contains only the output (no prompt).

I will look into what `result["choices"][0]["text"]` should be changed to.
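Purely as an illustration of the fix the report is pointing toward, a sketch of stripping the echoed prompt from the completion. The response shape is assumed from the description above; this is not the actual patch.

```python
# Assumed shape: result["choices"][0]["text"] may contain prompt + generation.
def extract_generation(prompt: str, result: dict) -> str:
    text = result["choices"][0]["text"]
    # If the backend echoes the prompt, return only the newly generated part.
    if text.startswith(prompt):
        return text[len(prompt):]
    return text
```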
Llamacpp.py _call returns both prompt and generation
https://api.github.com/repos/langchain-ai/langchain/issues/3478/comments
4
2023-04-25T02:31:49Z
2023-09-24T16:07:41Z
https://github.com/langchain-ai/langchain/issues/3478
1,682,331,011
3,478
[ "langchain-ai", "langchain" ]
Name: langchain
Version: 0.0.146
Name: opensearch-py
Version: 2.2.0

When I build OpenSearch in Docker and run it as per langchain's official documentation, each ingestion creates an index with a randomly generated name. I am not sure if this is how it is supposed to work; I was expecting multiple documents to be added to a single index.

Also, I get the following error when I specify `index_name`:

```
File "/usr/local/lib/python3.10/site-packages/opensearchpy/connection/base.py", line 301, in _raise_error
    raise HTTP_EXCEPTIONS.get(status_code, TransportError)(
opensearchpy.exceptions.RequestError: RequestError(400, 'resource_already_exists_exception', 'index [test_index/GEdIKgfrRO24XoRbcJCeVg] already exists')
```

This appears to happen because `client.indices.create(index=index_name, body=mapping)` in the `from_texts` function of `opensearch_vector_search.py` is always executed, even when the index already exists.

![スクリーンショット 2023-04-25 9 07 45](https://user-images.githubusercontent.com/95115586/234141078-da4c7d97-d39a-41fd-85f1-41d4c2d6ea84.png)
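A minimal sketch of the guard being suggested, using the `opensearch-py` client from the traceback above; this illustrates the idea, it is not the actual langchain patch.

```python
# Only create the index if it does not already exist, so repeated
# from_texts calls against the same index_name stop raising
# resource_already_exists_exception.
if not client.indices.exists(index=index_name):
    client.indices.create(index=index_name, body=mapping)
```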
The from_documents in opensearch may not be working as expected.
https://api.github.com/repos/langchain-ai/langchain/issues/3473/comments
4
2023-04-25T00:08:37Z
2023-09-24T16:07:46Z
https://github.com/langchain-ai/langchain/issues/3473
1,682,215,942
3,473
[ "langchain-ai", "langchain" ]
Using Langchain, I used the Milvus vector db to ingest all my documents as per https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/milvus.html. Later, I want to get a handle to the vector db again and start querying Milvus. How do I achieve this? In the example in the link, the vector db is queried immediately after ingestion. Imagine that after calling `from_documents` I set `vector_db = None`; later I need to load the collection back and query it. How do I do that for Milvus?
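A sketch of the reconnect pattern, under the assumption that the Milvus wrapper can be constructed directly against an existing collection; the constructor parameter names here are assumptions and may differ across versions.

```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Milvus

embeddings = OpenAIEmbeddings()

# Reconnect to a collection that was populated earlier by from_documents.
# connection_args / collection_name are assumptions about the wrapper's
# interface; the collection name must match whatever was used at ingest time.
vector_db = Milvus(
    embedding_function=embeddings,
    connection_args={"host": "127.0.0.1", "port": "19530"},
    collection_name="LangChainCollection",
)
docs = vector_db.similarity_search("What is discussed in the PDF?")
```

If the collection name was generated at ingest time, it has to be recorded at that point; there is no way to recover it from the wrapper afterwards.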
Persist pdf data in Milvus for later use
https://api.github.com/repos/langchain-ai/langchain/issues/3471/comments
11
2023-04-24T23:55:18Z
2023-10-16T16:08:30Z
https://github.com/langchain-ai/langchain/issues/3471
1,682,202,998
3,471
[ "langchain-ai", "langchain" ]
The [`PostgresChatMessageHistory` class](https://github.com/hwchase17/langchain/blob/master/langchain/memory/chat_message_histories/postgres.py) uses `psycopg 3`; however, the [pyproject.toml file](https://github.com/hwchase17/langchain/blob/master/pyproject.toml) only includes `psycopg2-binary` instead of `psycopg[binary]` (`psycopg 3`).

Proposed solution: add `psycopg[binary]==3.1.8` to the [pyproject.toml file](https://github.com/hwchase17/langchain/blob/master/pyproject.toml).
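A quick way to see the mismatch, as a sketch: psycopg 3 ships as the `psycopg` module, while psycopg 2 ships as `psycopg2`, so installing only `psycopg2-binary` leaves the psycopg 3 import failing.

```python
# psycopg 3 ships as the "psycopg" module; psycopg 2 ships as "psycopg2".
# PostgresChatMessageHistory does `import psycopg`, so with only
# psycopg2-binary installed this raises ImportError:
import psycopg
print(psycopg.__version__)  # 3.x once psycopg[binary] is installed
```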
Missing Dependency for PostgresChatMessageHistory (dependent on psycopg 3, but psycopg 2 listed in requirements)
https://api.github.com/repos/langchain-ai/langchain/issues/3467/comments
4
2023-04-24T23:23:46Z
2023-12-22T12:47:05Z
https://github.com/langchain-ai/langchain/issues/3467
1,682,183,340
3,467
[ "langchain-ai", "langchain" ]
My query code is below:

```python
pinecone.init(
    api_key=os.environ.get('PINECONE_API_KEY'),  # app.pinecone.io
    environment=os.environ.get('PINECONE_ENV')   # next to API key in console
)
index = pinecone.Index(index_name)

embeddings = OpenAIEmbeddings(openai_api_key=os.environ.get('OPENAI_API_KEY'))

vectordb = Pinecone(
    index=index,
    embedding_function=embeddings.embed_query,
    text_key="text",
)

llm = ChatOpenAI(
    openai_api_key=os.environ.get('OPENAI_API_KEY'),
    temperature=0,
    model_name='gpt-3.5-turbo'
)

retriever = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=vectordb.as_retriever()
)

tools = [Tool(
    func=retriever.run,
    description=tool_desc,
    name='Product DB'
)]

memory = ConversationBufferWindowMemory(
    memory_key="chat_history",  # important to align with agent prompt (below)
    k=5,
    return_messages=True
)

agent = initialize_agent(
    agent='chat-conversational-react-description',
    tools=tools,
    llm=llm,
    verbose=True,
    max_iterations=3,
    early_stopping_method="generate",
    memory=memory,
)
```

If I run `agent({'chat_history': [], 'input': 'What is a product?'})`, it throws:

> File "C:\Users\xxx\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\vectorstores\pinecone.py", line 160, in similarity_search
>     text = metadata.pop(self._text_key)
>     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> KeyError: 'text'

This is the offending block in `site-packages/langchain/vectorstores/pinecone.py`:

```python
for res in results["matches"]:
    # print('metadata.pop(self._text_key) = ' + metadata.pop(self._text_key))
    metadata = res["metadata"]
    text = metadata.pop(self._text_key)
    docs.append(Document(page_content=text, metadata=metadata))
```

If I remove my tool, as in the line below, everything executes (just without my tool):

`tools = []`

Can anyone help me fix this `KeyError: 'text'` issue? My versions of langchain, pinecone-client and python are 0.0.147, 2.2.1 and 3.11.3 respectively.
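For context, this error typically means the vectors in the index were upserted without a `text` metadata field, which `similarity_search` pops via `text_key`. A sketch of the two usual remedies, assuming the index was populated outside of langchain; the metadata and key names below are illustrative.

```python
# Remedy 1: when upserting yourself, store the text under the same key the
# wrapper is told to read ("text" here).
index.upsert(vectors=[
    ("doc-1",
     embeddings.embed_query("some product description"),
     {"text": "some product description", "source": "catalog"}),
])

# Remedy 2: if the index was built with a different metadata key, point the
# wrapper at that key instead of "text".
vectordb = Pinecone(
    index=index,
    embedding_function=embeddings.embed_query,
    text_key="page_content",  # hypothetical key; must match what was upserted
)
```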
Pinecone retriever throwing: KeyError: 'text'
https://api.github.com/repos/langchain-ai/langchain/issues/3460/comments
17
2023-04-24T18:02:53Z
2024-05-19T00:22:40Z
https://github.com/langchain-ai/langchain/issues/3460
1,681,773,975
3,460
[ "langchain-ai", "langchain" ]
I get this error occasionally when running the calculator tool, and it seems like lots of other people are dealing with weird outputs from agents [like here](https://github.com/hwchase17/langchain/issues/2276). I'm seeing just random junk on the end of my objects returned from agents:

```
File "c:\Users\djpec\Documents\GitHub\project\venv\lib\site-packages\langchain\agents\conversational_chat\output_parser.py", line 32, in parse
    response = json.loads(cleaned_output)
File "C:\Program Files\Python39\lib\json\__init__.py", line 346, in loads
    return _default_decoder.decode(s)
File "C:\Program Files\Python39\lib\json\decoder.py", line 340, in decode
    raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 5 column 1 (char 94)
```

Inspecting the `cleaned_output` yields this when preparing for final answer (I'm not sure how to wrap this in backticks, sorry lol):

![image](https://user-images.githubusercontent.com/8185181/234075261-82e290fe-2452-404a-93f5-60ba30d04e51.png)

I fixed it by importing the `regex` library and searching recursively for the largest "object" in the string in my `venv\lib\site-packages\langchain\agents\conversational_chat\output_parser.py` function:

```python
import regex
...
if cleaned_output.endswith("```"):
    cleaned_output = cleaned_output[: -len("```")]
if not cleaned_output.endswith("""\n}"""):
    pattern = r"(\{(?:[^{}]|(?R))*\})"
    cleaned_output = regex.search(pattern, text).group(0)
cleaned_output = cleaned_output.strip()
...
```
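A self-contained illustration of the recursive-pattern trick above: the third-party `regex` module (not the stdlib `re`) supports `(?R)` recursion, which matches the outermost balanced `{...}` block and drops trailing junk. The sample string is made up for demonstration.

```python
import json
import regex

raw = '{\n  "action": "Final Answer",\n  "action_input": "42"\n}\nSome trailing junk'

# (?R) recurses into the whole pattern, so this matches a balanced brace block.
match = regex.search(r"\{(?:[^{}]|(?R))*\}", raw)
parsed = json.loads(match.group(0))
print(parsed["action_input"])  # -> 42
```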
Conversational Chat Agent: json.decoder.JSONDecodeError
https://api.github.com/repos/langchain-ai/langchain/issues/3455/comments
4
2023-04-24T17:48:32Z
2023-09-24T16:07:57Z
https://github.com/langchain-ai/langchain/issues/3455
1,681,753,632
3,455
[ "langchain-ai", "langchain" ]
On `langchain==0.0.147`:

```python
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, model_kwargs=llm_kwargs, device=device)
hf = HuggingFacePipeline(pipeline=pipe)
print(hf.model_id)
```

always prints `gpt2`, irrespective of what `model` is.
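A sketch of a workaround, assuming `model_id` is an ordinary field on `HuggingFacePipeline` that merely defaults to `"gpt2"` when the class is constructed from an existing pipeline (an assumption, not confirmed behaviour):

```python
# Pass the real model name explicitly instead of relying on the default.
hf = HuggingFacePipeline(pipeline=pipe, model_id=model.name_or_path)
print(hf.model_id)  # should now reflect the actual model
```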
model_id remains set to 'gpt2' when creating HuggingFacePipeline from pipeline
https://api.github.com/repos/langchain-ai/langchain/issues/3451/comments
6
2023-04-24T17:15:05Z
2024-02-11T16:20:36Z
https://github.com/langchain-ai/langchain/issues/3451
1,681,712,468
3,451
[ "langchain-ai", "langchain" ]
I have been trying to stream the response using `AzureChatOpenAI`, and it didn't call my `MyStreamingCallbackHandler()` until I finally set `verbose=True`, at which point it started to work. Is this a bug? I failed to find any indication in the docs that streaming requires `verbose=True` when calling `AzureChatOpenAI`.

```python
chat_model = AzureChatOpenAI(
    openai_api_base=openai_instance["api_base"],
    openai_api_version=openai_instance["api_version"],
    deployment_name=chat_model_deployment,
    openai_api_key=openai_instance["api_key"],
    openai_api_type=openai_instance["api_type"],
    streaming=True,
    callback_manager=CallbackManager([MyStreamingCallbackHandler()]),
    temperature=0,
    verbose=True
)
```
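As a point of comparison, the stock stdout streamer can be swapped in for the custom handler. A sketch with placeholder Azure settings (the resource URL, deployment name, and key below are placeholders, not real values):

```python
from langchain.callbacks.base import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chat_models import AzureChatOpenAI

chat_model = AzureChatOpenAI(
    openai_api_base="https://<resource>.openai.azure.com/",  # placeholder
    openai_api_version="2023-03-15-preview",
    deployment_name="<deployment>",                          # placeholder
    openai_api_key="<key>",                                  # placeholder
    streaming=True,
    callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
    temperature=0,
    verbose=True,  # reportedly required on this version for the handler to fire
)
```

If the stock handler also stays silent without `verbose=True`, that points at the callback dispatch rather than the custom handler.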
Streaming not working unless I set verbose=True in AzureChatOpenAI()
https://api.github.com/repos/langchain-ai/langchain/issues/3449/comments
3
2023-04-24T16:20:34Z
2023-08-22T17:50:42Z
https://github.com/langchain-ai/langchain/issues/3449
1,681,634,934
3,449
[ "langchain-ai", "langchain" ]
```
Traceback (most recent call last):
  File "/home/gptbot/cogs/search_service_cog.py", line 322, in on_message
    response, stdout_output = await capture_stdout(
  File "/home/gptbot/cogs/search_service_cog.py", line 79, in capture_stdout
    result = await func(*args, **kwargs)
  File "/usr/lib/python3.9/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.9/dist-packages/langchain/chains/base.py", line 213, in run
    return self(args[0])[self.output_keys[0]]
  File "/usr/local/lib/python3.9/dist-packages/langchain/chains/base.py", line 116, in __call__
    raise e
  File "/usr/local/lib/python3.9/dist-packages/langchain/chains/base.py", line 113, in __call__
    outputs = self._call(inputs)
  File "/usr/local/lib/python3.9/dist-packages/langchain/agents/agent.py", line 807, in _call
    output = self.agent.return_stopped_response(
  File "/usr/local/lib/python3.9/dist-packages/langchain/agents/agent.py", line 515, in return_stopped_response
    full_output = self.llm_chain.predict(**full_inputs)
  File "/usr/local/lib/python3.9/dist-packages/langchain/chains/llm.py", line 151, in predict
    return self(kwargs)[self.output_key]
  File "/usr/local/lib/python3.9/dist-packages/langchain/chains/base.py", line 116, in __call__
    raise e
  File "/usr/local/lib/python3.9/dist-packages/langchain/chains/base.py", line 113, in __call__
    outputs = self._call(inputs)
  File "/usr/local/lib/python3.9/dist-packages/langchain/chains/llm.py", line 57, in _call
    return self.apply([inputs])[0]
  File "/usr/local/lib/python3.9/dist-packages/langchain/chains/llm.py", line 118, in apply
    response = self.generate(input_list)
  File "/usr/local/lib/python3.9/dist-packages/langchain/chains/llm.py", line 61, in generate
    prompts, stop = self.prep_prompts(input_list)
  File "/usr/local/lib/python3.9/dist-packages/langchain/chains/llm.py", line 79, in prep_prompts
    prompt = self.prompt.format_prompt(**selected_inputs)
  File "/usr/local/lib/python3.9/dist-packages/langchain/prompts/chat.py", line 127, in format_prompt
    messages = self.format_messages(**kwargs)
  File "/usr/local/lib/python3.9/dist-packages/langchain/prompts/chat.py", line 186, in format_messages
    message = message_template.format_messages(**rel_params)
  File "/usr/local/lib/python3.9/dist-packages/langchain/prompts/chat.py", line 43, in format_messages
    raise ValueError(
ValueError: variable agent_scratchpad should be a list of base messages, got {
  "action": "Search-Tool",
  "action_input": "Who is Harald Baldr?"
}
```

Most of the time the agent can't parse its own tool usage.
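One mitigation people reach for, as a sketch: a forgiving output parser that pulls the first balanced JSON object out of the model text before handing it to the agent. Everything below is illustrative; the class and its fallback heuristic are assumptions, not the library's parser, and the first-brace/last-brace slice can misfire on nested junk.

```python
import json

from langchain.agents.conversational_chat.output_parser import ConvoOutputParser
from langchain.schema import AgentAction, AgentFinish

class ForgivingOutputParser(ConvoOutputParser):
    """Fall back to a crude JSON extraction when the strict parse fails."""

    def parse(self, text: str):
        try:
            return super().parse(text)
        except Exception:
            # Heuristic fallback: take the span from the first '{' to the
            # last '}' and try to parse that as the action payload.
            start, end = text.find("{"), text.rfind("}")
            if start != -1 and end != -1:
                blob = json.loads(text[start:end + 1])
                if blob.get("action") == "Final Answer":
                    return AgentFinish({"output": blob["action_input"]}, text)
                return AgentAction(blob["action"], blob["action_input"], text)
            raise
```

Wiring it into the agent is version-specific (e.g. passing it when the agent prompt is built), so treat that part as an exercise against whatever release is in use.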
Broken intermediate output / parsing is grossly unreliable
https://api.github.com/repos/langchain-ai/langchain/issues/3448/comments
22
2023-04-24T16:02:21Z
2024-05-24T15:21:22Z
https://github.com/langchain-ai/langchain/issues/3448
1,681,601,217
3,448