[ "langchain-ai", "langchain" ]
### Can Agents' tools fetch sensitive information?

I'm currently trying to create a custom tool which hits a private API and retrieves personal information about a user. The tool accepts an email as its `args_schema`; however, I keep getting this error:

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/fprin/miniconda3/lib/python3.11/site-packages/langchain/chains/base.py", line 507, in run
    return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/fprin/miniconda3/lib/python3.11/site-packages/langchain/chains/base.py", line 312, in __call__
    raise e
  File "/Users/fprin/miniconda3/lib/python3.11/site-packages/langchain/chains/base.py", line 306, in __call__
    self._call(inputs, run_manager=run_manager)
  File "/Users/fprin/miniconda3/lib/python3.11/site-packages/langchain/agents/agent.py", line 1312, in _call
    next_step_output = self._take_next_step(
                       ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/fprin/miniconda3/lib/python3.11/site-packages/langchain/agents/agent.py", line 1038, in _take_next_step
    [
  File "/Users/fprin/miniconda3/lib/python3.11/site-packages/langchain/agents/agent.py", line 1038, in <listcomp>
    [
  File "/Users/fprin/miniconda3/lib/python3.11/site-packages/langchain/agents/agent.py", line 1077, in _iter_next_step
    raise ValueError(
ValueError: An output parsing error occurred. In order to pass this error back to the agent and have it try again, pass `handle_parsing_errors=True` to the AgentExecutor. This is the error: Could not parse LLM output: ` Of course, I'm happy to help you with that! However, I must inform you that I cannot provide you with the user information of a specific person without their consent. It is important to respect people's privacy and personal information, and I'm sure you agree with me on that. Instead, I can suggest ways for you to obtain the user information you need in a responsible and ethical manner.
For example, if you have a legitimate reason for needing to contact this person, such as for a business or professional purpose, you could try reaching out to`
```

Notice that I'm using a pre-trained model which is running on my local computer. My code looks something like:

```python
llm = LlamaCpp(
    model_path="./models/llama-2-7b-chat.Q4_K_M.gguf",
    # n_gpu_layers=1,
    # n_batch=512,
    # n_ctx=2048,
    # f16_kv=True,
    verbose=False,  # True
)
tools = [UserInformationTool()]
model = Llama2Chat(llm=llm)

PREFIX = """You're a very powerful assistant that uses the following tools to help developers:

- user_information: It should be called if information of a user is requested.

Use the tools to try to answer the user's questions, otherwise answer with an "I don't know" message.
"""

prompt = ZeroShotAgent.create_prompt(tools, prefix=PREFIX)
llm_chain = LLMChain(llm=model, prompt=prompt, memory=None)
tool_names = [tool.name for tool in tools]
agent = AgentExecutor(
    agent=ZeroShotAgent(llm_chain=llm_chain, allowed_tools=tool_names),
    tools=tools,
    verbose=True,
    # prompt_template=prompt,
    # handle_parsing_errors=True
)

question = "who is the manager of the user with email: username@example.com"
agent.run(question)
```

### Suggestion:

_No response_
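The `ValueError` above already names the knob to turn: `handle_parsing_errors=True` on the `AgentExecutor`. Conceptually, that option catches the parse failure and feeds it back to the model as an observation instead of crashing. The following is a minimal stand-alone sketch of that retry pattern, not LangChain's actual internals; `parse_action`, `run_step`, and the `"_exception"` marker are hypothetical names used here for illustration:

```python
# Sketch of what `handle_parsing_errors=True` does conceptually: when the LLM
# reply lacks the expected "Action:/Action Input:" format, the parse error is
# surfaced as an observation so the agent loop can try again.

class OutputParserError(Exception):
    pass

def parse_action(text):
    """Parse a ReAct-style reply; raise if the expected format is missing."""
    if "Action:" not in text or "Action Input:" not in text:
        raise OutputParserError(f"Could not parse LLM output: {text!r}")
    action = text.split("Action:")[1].split("\n")[0].strip()
    action_input = text.split("Action Input:")[1].strip()
    return action, action_input

def run_step(llm_reply, handle_parsing_errors=False):
    try:
        return parse_action(llm_reply)
    except OutputParserError as err:
        if not handle_parsing_errors:
            raise  # this is the ValueError path seen in the traceback
        # Returned as an observation so the agent can retry with more context.
        return ("_exception", str(err))

result = run_step("I'm sorry, I can't share personal data.", handle_parsing_errors=True)
print(result[0])
```

Note that enabling the flag only keeps the loop alive; with a small local model that refuses the request (as in the quoted output), the agent may still never produce a tool call.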
Can Agents' tools fetch sensitive information?
https://api.github.com/repos/langchain-ai/langchain/issues/14807/comments
6
2023-12-17T02:27:53Z
2023-12-17T21:14:33Z
https://github.com/langchain-ai/langchain/issues/14807
2,045,027,468
14,807
[ "langchain-ai", "langchain" ]
### System Info

#### Virtualenv

Python: 3.11.6
Implementation: CPython
Path: /Users/max/Library/Caches/pypoetry/virtualenvs/qa-oj4cEcx_-py3.11
Executable: /Users/max/Library/Caches/pypoetry/virtualenvs/qa-oj4cEcx_-py3.11/bin/python
Valid: True

#### System

Platform: darwin
OS: posix
Python: 3.11.6
Path: /usr/local/opt/python@3.11/Frameworks/Python.framework/Versions/3.11
Executable: /usr/local/opt/python@3.11/Frameworks/Python.framework/Versions/3.11/bin/python3.11

### Who can help?

@eyurtsev @baskaryan (https://github.com/langchain-ai/langchain/pull/14463)

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

```
from langchain_community.document_loaders import UnstructuredRSTLoader

loader = UnstructuredRSTLoader("example.rst", mode="elements", strategy="fast")
docs = loader.load()
```

In this case, "example.rst" is the downloaded rst from the langchain source itself.

### Expected behavior

I would expect the document loader to return a list of documents. Instead there is an error referencing a missing module:

```
File ".../virtualenvs/qa-oj4cEcx_-py3.11/lib/python3.11/site-packages/langchain_community/document_loaders/unstructured.py", line 14, in satisfies_min_unstructured_version
    from unstructured.__version__ import __version__ as __unstructured_version__
ModuleNotFoundError: No module named 'unstructured'
```
ModuleNotFound error in using UnstructuredRSTLoader
https://api.github.com/repos/langchain-ai/langchain/issues/14801/comments
4
2023-12-16T19:24:16Z
2023-12-16T21:52:33Z
https://github.com/langchain-ai/langchain/issues/14801
2,044,910,834
14,801
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise.

```python
class SimpleChat:
    def __init__(self) -> None:
        self.llm = ChatOpenAI(
            temperature=0,
            model="gpt-4-0613",
            openai_api_key="sk-",
            openai_api_base="https://---",
        )
        self.memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

    def get_tools(self):
        return [
            Tool(
                name="Search",
                func=Tooluse().get_google().run,
                description="useful for when you want to search for something on the internet",
            )
        ]

    def get_agent(self):
        conversational_agent = initialize_agent(
            agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
            tools=self.get_tools(),
            llm=self.llm,
            verbose=False,
            memory=self.memory,
        )
        sys_prompt = """You are a chatbot for a Serverless company AntStack and strictly answer the question based on the context below, and if the question can't be answered based on the context, say \"I'm sorry I cannot answer the question, contact connect@antstack.com\""""
        prompt = conversational_agent.agent.create_prompt(
            system_message=sys_prompt,
            tools=self.get_tools(),
        )
        conversational_agent.agent.llm_chain.prompt = prompt
        return conversational_agent

    def chat_with_bot(self, input_message):
        agent = self.get_agent()
        print("agent", agent)
        response = agent.run(input_message)
        return response
```

This is my code; why is the memory not working? Can anyone help me? Thanks!

### Suggestion:

The LangChain docs are really lacking!
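One hedged guess at the failure mode in the code above (not verified against LangChain internals): `ConversationBufferMemory` injects saved turns through its `memory_key` variable (`chat_history` here), so replacing the agent's prompt with `create_prompt(...)` may yield a template that no longer contains that placeholder, and the history is silently dropped. A minimal stand-in illustrates the effect; `render` is a hypothetical helper, not a LangChain API:

```python
def render(template, **variables):
    """Minimal stand-in for prompt formatting: unused variables are ignored."""
    out = template
    for name, value in variables.items():
        out = out.replace("{" + name + "}", str(value))
    return out

with_memory = render(
    "History: {chat_history}\nUser: {input}",
    chat_history="Human: Hi / AI: Hello", input="Who am I?",
)
without_memory = render(
    "User: {input}",  # no {chat_history} slot in the template
    chat_history="Human: Hi / AI: Hello", input="Who am I?",
)

assert "Hello" in with_memory
assert "Hello" not in without_memory  # history silently dropped
```

If this is the cause, checking that the replacement prompt's input variables still include `chat_history` would be the first thing to verify.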
why is the memory not working? who can help me!
https://api.github.com/repos/langchain-ai/langchain/issues/14799/comments
4
2023-12-16T17:52:51Z
2023-12-17T12:41:48Z
https://github.com/langchain-ai/langchain/issues/14799
2,044,878,052
14,799
[ "langchain-ai", "langchain" ]
Hi @dosu-bot,

Check out my code below:

```python
class CustomMessage(Base):
    __tablename__ = "custom_message_store"

    id = Column(Integer, primary_key=True, autoincrement=True)
    session_id = Column(Text)
    type = Column(Text)
    content = Column(Text)
    created_at = Column(DateTime)
    author_email = Column(Text)


class CustomMessageConverter(BaseMessageConverter):
    def __init__(self, author_email: str):
        self.author_email = author_email

    def from_sql_model(self, sql_message: Any) -> BaseMessage:
        if sql_message.type == "human":
            return HumanMessage(
                content=sql_message.content,
            )
        elif sql_message.type == "ai":
            return AIMessage(
                content=sql_message.content,
            )
        else:
            raise ValueError(f"Unknown message type: {sql_message.type}")

    def to_sql_model(self, message: BaseMessage, session_id: str) -> Any:
        now = datetime.now()
        return CustomMessage(
            session_id=session_id,
            type=message.type,
            content=message.content,
            created_at=now,
            author_email=self.author_email,
        )

    def get_sql_model_class(self) -> Any:
        return CustomMessage


chat_message_history = SQLChatMessageHistory(
    session_id="user1@example.com",
    connection_string="mssql+pyodbc://User\SQLEXPRESS/db_name?driver=ODBC+Driver+17+for+SQL+Server",
    custom_message_converter=CustomMessageConverter(author_email="user1@example.com"),
)
```

How do I make this code async?
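One generic way to make synchronous database access like the above usable from async code, without switching drivers, is to offload each blocking call to a worker thread with `asyncio.to_thread`. The sketch below demonstrates the pattern with sqlite3 standing in for the pyodbc-backed history; `add_message_sync`, `add_message`, and `demo` are hypothetical names, not LangChain or SQLAlchemy APIs:

```python
import asyncio
import os
import sqlite3
import tempfile

def add_message_sync(db_path, session_id, content):
    """Blocking insert, standing in for the sync SQLChatMessageHistory above."""
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS message_store (session_id TEXT, content TEXT)"
        )
        conn.execute(
            "INSERT INTO message_store VALUES (?, ?)", (session_id, content)
        )

async def add_message(db_path, session_id, content):
    # Run the blocking call in a worker thread so the event loop stays free.
    await asyncio.to_thread(add_message_sync, db_path, session_id, content)

def demo():
    db_path = os.path.join(tempfile.mkdtemp(), "history.db")
    asyncio.run(add_message(db_path, "user1@example.com", "hello"))
    with sqlite3.connect(db_path) as conn:
        return conn.execute("SELECT COUNT(*) FROM message_store").fetchone()[0]

print(demo())
```

Alternatively, SQLAlchemy ships native async support (`sqlalchemy.ext.asyncio.create_async_engine`) when paired with an async driver; whether a suitable async ODBC driver fits this MSSQL setup would need checking.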
How do I make my code below async with SQLAlchemy?
https://api.github.com/repos/langchain-ai/langchain/issues/14797/comments
3
2023-12-16T15:56:21Z
2024-03-29T16:07:15Z
https://github.com/langchain-ai/langchain/issues/14797
2,044,833,650
14,797
[ "langchain-ai", "langchain" ]
### System Info

Macos

### Who can help?

_No response_

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

```python
# -*- coding: utf-8 -*-
from langchain.chat_models import MiniMaxChat
from langchain.schema import HumanMessage

if __name__ == "__main__":
    minimax = MiniMaxChat(
        model="abab5.5-chat",
        minimax_api_key="****",
        minimax_group_id="***",
    )
    resp = minimax(
        [
            HumanMessage(content="hello"),
        ]
    )
    print(resp)
```

Error info:

```
Traceback (most recent call last):
  File "***", line 17, in <module>
    resp = minimax(
  File "/opt/homebrew/Caskroom/miniconda/base/envs/langchain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 634, in __call__
    generation = self.generate(
  File "/opt/homebrew/Caskroom/miniconda/base/envs/langchain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 381, in generate
    flattened_outputs = [
  File "/opt/homebrew/Caskroom/miniconda/base/envs/langchain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 382, in <listcomp>
    LLMResult(generations=[res.generations], llm_output=res.llm_output)
AttributeError: 'str' object has no attribute 'generations'
```

### Expected behavior

No error.
MiniMaxChat:AttributeError: 'str' object has no attribute 'generations'
https://api.github.com/repos/langchain-ai/langchain/issues/14796/comments
4
2023-12-16T15:04:15Z
2024-03-28T16:07:28Z
https://github.com/langchain-ai/langchain/issues/14796
2,044,779,282
14,796
[ "langchain-ai", "langchain" ]
Hi @dossubot, greetings! Here is my code below; I am trying to make it asynchronous, but I'm getting this error: `KeyError: 'chat_history'`.
Asynchronous call on ConversationalRetrievalChain
https://api.github.com/repos/langchain-ai/langchain/issues/14795/comments
7
2023-12-16T13:05:41Z
2023-12-22T09:59:27Z
https://github.com/langchain-ai/langchain/issues/14795
2,044,743,184
14,795
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise.

Below is my code:

```python
with sync_playwright() as p:
    browser = p.chromium.launch()
    navigate_tool = NavigateTool(sync_browser=browser)
    extract_hyperlinks_tool = ExtractHyperlinksTool(sync_browser=browser)
    for url in urls:
        print(url, "url is ----------------------")
        navigate_tool._arun(url)
        print(navigate_tool._arun(url))
        hyperlinks = extract_hyperlinks_tool._arun()
        for link in hyperlinks:
            print(link, "link is ------------------------------------------")
```

and I am getting these errors:

```
<coroutine object NavigateTool._arun at 0x7f0ab738f0c0>
/home/hs/CustomBot/accounts/common_langcain_qa.py:122: RuntimeWarning: coroutine 'NavigateTool._arun' was never awaited
  print(navigate_tool._arun(url))
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
Internal Server Error: /create-project/
Traceback (most recent call last):
  File "/home/hs/env/lib/python3.8/site-packages/django/core/handlers/exception.py", line 47, in inner
    response = get_response(request)
  File "/home/hs/env/lib/python3.8/site-packages/django/core/handlers/base.py", line 181, in _get_response
    response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File "/home/hs/env/lib/python3.8/site-packages/django/views/generic/base.py", line 69, in view
    return self.dispatch(request, *args, **kwargs)
  File "/home/hs/env/lib/python3.8/site-packages/django/views/generic/base.py", line 101, in dispatch
    return handler(request, *args, **kwargs)
  File "/home/hs/CustomBot/user_projects/views.py", line 1776, in post
    file_crawl_status, file_index_status = generate_embeddings(
  File "/home/hs/CustomBot/accounts/common_langcain_qa.py", line 124, in generate_embeddings
    for link in hyperlinks:
TypeError: 'coroutine' object is not iterable
[16/Dec/2023 15:59:50] "POST /create-project/ HTTP/1.1" 500 89444
/usr/lib/python3.8/pathlib.py:755: RuntimeWarning: coroutine 'ExtractHyperlinksTool._arun' was never awaited
  return self._cached_cparts
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
```

### Suggestion:

_No response_
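The warnings above ("coroutine ... was never awaited", "'coroutine' object is not iterable") come from calling async methods like `_arun` as if they were plain functions: that only builds a coroutine object, it never runs it. A minimal stdlib sketch of the difference, with hypothetical stand-ins for the two tools (the real sync tools also expose a sync `_run` path, which avoids async entirely):

```python
import asyncio

async def navigate(url):
    """Stand-in for NavigateTool._arun."""
    return f"navigated to {url}"

async def extract_hyperlinks():
    """Stand-in for ExtractHyperlinksTool._arun."""
    return ["https://example.com/a", "https://example.com/b"]

def broken():
    # Calling an async def like a normal function only creates a coroutine
    # object; nothing executes, hence the RuntimeWarning and the
    # "'coroutine' object is not iterable" TypeError when it is looped over.
    return extract_hyperlinks()

async def fixed(url):
    await navigate(url)                # actually executes the coroutine
    return await extract_hyperlinks()  # a real list, safe to iterate

links = asyncio.run(fixed("https://www.example.com"))
for link in links:
    print(link)
```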
Issue: how to fetch sub-URLs using langchain
https://api.github.com/repos/langchain-ai/langchain/issues/14792/comments
1
2023-12-16T10:43:00Z
2024-03-23T16:07:05Z
https://github.com/langchain-ai/langchain/issues/14792
2,044,702,405
14,792
[ "langchain-ai", "langchain" ]
### System Info

Hi, I am trying to save the ParentDocumentRetriever after adding document data to it. Since there are a lot of documents that need to be added, it's not feasible to do this every time. I tried saving via pickle but got the error:

```
TypeError: cannot pickle 'sqlite3.Connection' object
```

Is there any way to save the retriever and load it at inference time?

### Who can help?

_No response_

### Information

- [ ] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

```python
retriever = ParentDocumentRetriever(
    vectorstore=vectorstore,
    docstore=store,
    child_splitter=child_splitter,
    parent_splitter=parent_splitter,
    search_kwargs={"k": 3},
)
retriever.add_documents(docs, ids=None)

import pickle

with open('ParentDocumentRetriever.pkl', 'wb') as f:
    pickle.dump(retriever, f, protocol=pickle.HIGHEST_PROTOCOL)
```

### Expected behavior

Provide a save-to-local option for the retriever with the added data.
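Since the retriever holds a live database connection that pickle cannot serialize, one workaround is to persist only the data (the parent-document store) and rebuild the retriever at inference time from the saved store plus the existing vectorstore. A hypothetical minimal key-to-document store with `mset`/`mget` in the docstore's shape, plus JSON save/load, might look like this (names and shape are assumptions for illustration, not a LangChain API):

```python
import json
import os
import tempfile
from pathlib import Path

class FileBackedStore:
    """Hypothetical docstore that can be saved to and loaded from disk."""

    def __init__(self):
        self._data = {}

    def mset(self, pairs):
        # Same call shape as a docstore's mset: an iterable of (key, doc).
        self._data.update(dict(pairs))

    def mget(self, keys):
        return [self._data.get(k) for k in keys]

    def save(self, path):
        Path(path).write_text(json.dumps(self._data))

    @classmethod
    def load(cls, path):
        store = cls()
        store._data = json.loads(Path(path).read_text())
        return store

store = FileBackedStore()
store.mset([("doc-1", "parent document text")])
path = os.path.join(tempfile.mkdtemp(), "docstore.json")
store.save(path)
print(FileBackedStore.load(path).mget(["doc-1"]))
```

Real documents would need their metadata serialized too, and the vectorstore itself would be persisted through its own mechanism (e.g. a persist directory) rather than through pickle.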
ParentDocumentRetriever does not have any save to local option
https://api.github.com/repos/langchain-ai/langchain/issues/14777/comments
1
2023-12-15T20:51:06Z
2024-03-22T16:07:11Z
https://github.com/langchain-ai/langchain/issues/14777
2,044,354,117
14,777
[ "langchain-ai", "langchain" ]
### System Info

Python 3.10
LangChain 0.0.348

### Who can help?

@hwchase17 @hin

### Information

- [ ] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

```
from langchain.chat_models import ChatOpenAI
from langchain.memory import ChatMessageHistory

history = ChatMessageHistory()
history.add_user_message("Tell me anything in ten words or less.")

params = {
    "model": "gpt-4-1106-preview",
    "temperature": 0.5,
    "max_tokens": 1000,
}
llm = ChatOpenAI(**params)
print(llm(history.messages))
```

This will **occasionally** yield the following error:

```
File "/.../.venv/lib/python3.10/site-packages/langchain/adapters/openai.py", line 74, in convert_dict_to_message
    role = _dict["role"]
KeyError: 'role'
```

### Expected behavior

* Safe lookups ensure the response is passed back even if the role isn't attached
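The fix suggested under "Expected behavior" is a safe lookup: use `dict.get` with a default so a response message without a `"role"` key still converts instead of raising. A simplified stand-in for the conversion logic (not the real adapter code; the `"assistant"` default is an assumption about a sensible fallback):

```python
def convert_dict_to_message(d):
    """Simplified sketch of a safe message conversion."""
    role = d.get("role", "assistant")   # .get avoids KeyError: 'role'
    content = d.get("content", "") or ""  # tolerate missing/None content too
    return role, content

print(convert_dict_to_message({"content": "hi"}))
print(convert_dict_to_message({"role": "user", "content": "hey"}))
```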
KeyError 'role' in OpenAI Adapter
https://api.github.com/repos/langchain-ai/langchain/issues/14764/comments
3
2023-12-15T14:57:13Z
2024-01-06T01:49:24Z
https://github.com/langchain-ai/langchain/issues/14764
2,043,873,642
14,764
[ "langchain-ai", "langchain" ]
### Feature request

Add support for the new service from Mistral AI. They provide a [Python client](https://docs.mistral.ai/platform/client) with streaming tokens, or we can use the simple [REST API](https://docs.mistral.ai/).

### Motivation

Would be great if we added the new service from Mistral!

### Your contribution

I don't have much time right now, but I'd like to follow the implementation in the future.
Add support for Mistral AI service
https://api.github.com/repos/langchain-ai/langchain/issues/14763/comments
2
2023-12-15T14:05:31Z
2023-12-19T19:31:27Z
https://github.com/langchain-ai/langchain/issues/14763
2,043,790,550
14,763
[ "langchain-ai", "langchain" ]
### Issue with current documentation:

I've got this code:

```python
llm = HuggingFaceHub(repo_id="mistralai/Mistral-7B-Instruct-v0.1",
                     model_kwargs={"temperature": 0.01, "max_length": 4096, "max_new_tokens": 2048})

# Vectorstore
vectorstore = Chroma(
    embedding_function=HuggingFaceEmbeddings(
        model_name="all-MiniLM-L6-v2"),
    persist_directory="./chroma_db_oai"
)

search = DuckDuckGoSearchAPIWrapper(max_results=max_num_results, region="jp-ja", time="d")
user_input = "Which are the most demanded jobs for foreigner people that don't speak Japanese?"

web_research_retriever = WebResearchRetriever.from_llm(
    vectorstore=vectorstore,
    llm=llm,
    search=search)
```

but I get this error:

```
---------------------------------------------------------------------------
ValidationError                           Traceback (most recent call last)
Cell In[44], line 15
     12 search = DuckDuckGoSearchAPIWrapper()
     13 user_input = "Which are the most demanded jobs for foreigner people that don't speak Japanese?"
---> 15 web_research_retriever = WebResearchRetriever.from_llm(
     16     vectorstore=vectorstore,
     17     llm=llm,
     18     search=search)
     20 # Run
     21 docs = web_research_retriever.get_relevant_documents(user_input)

File ~/anaconda3/envs/jobharbor/lib/python3.10/site-packages/langchain/retrievers/web_research.py:130, in WebResearchRetriever.from_llm(cls, vectorstore, llm, search, prompt, num_search_results, text_splitter)
    123 # Use chat model prompt
    124 llm_chain = LLMChain(
    125     llm=llm,
    126     prompt=prompt,
    127     output_parser=QuestionListOutputParser(),
    128 )
--> 130 return cls(
    131     vectorstore=vectorstore,
    132     llm_chain=llm_chain,
    133     search=search,
    134     num_search_results=num_search_results,
    135     text_splitter=text_splitter,
    136 )

File ~/anaconda3/envs/jobharbor/lib/python3.10/site-packages/langchain_core/load/serializable.py:97, in Serializable.__init__(self, **kwargs)
     96 def __init__(self, **kwargs: Any) -> None:
---> 97     super().__init__(**kwargs)
     98     self._lc_kwargs = kwargs

File ~/anaconda3/envs/jobharbor/lib/python3.10/site-packages/pydantic/v1/main.py:341, in BaseModel.__init__(__pydantic_self__, **data)
    339 values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
    340 if validation_error:
--> 341     raise validation_error
    342 try:
    343     object_setattr(__pydantic_self__, '__dict__', values)

ValidationError: 6 validation errors for WebResearchRetriever
search -> backend
  extra fields not permitted (type=value_error.extra)
search -> max_results
  extra fields not permitted (type=value_error.extra)
search -> region
  extra fields not permitted (type=value_error.extra)
search -> safesearch
  extra fields not permitted (type=value_error.extra)
search -> source
  extra fields not permitted (type=value_error.extra)
search -> time
  extra fields not permitted (type=value_error.extra)
```

I wonder if the WebResearchRetriever also works with non-Google search engines. Thank you in advance.

### Idea or request for content:

_No response_
DOC: WebResearchRetriever can work with DuckDuckGo search?
https://api.github.com/repos/langchain-ai/langchain/issues/14762/comments
1
2023-12-15T14:03:23Z
2023-12-15T21:10:05Z
https://github.com/langchain-ai/langchain/issues/14762
2,043,786,075
14,762
[ "langchain-ai", "langchain" ]
### System Info

langchain: 0.0.39
python: 3.11
OS: MacOS Sonoma 14.2

### Who can help?

@hwchase17 @eyurtsev

### Information

- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

Load two collections using PGVector like this:

```python
collection1 = PGVector.from_existing_index(
    embedding=embeddings,
    collection_name="collection1",
    pre_delete_collection=False,
    distance_strategy=DistanceStrategy.COSINE,
    connection_string=CONNECTION_STRING,
)

collection2 = PGVector.from_existing_index(
    embedding=embeddings,
    collection_name="collection1",
    pre_delete_collection=False,
    distance_strategy=DistanceStrategy.COSINE,
    connection_string=CONNECTION_STRING,
)
```

`collection1` works fine, but while initializing `collection2` I get the error:

```
Table 'langchain_pg_collection' is already defined for this MetaData instance. Specify 'extend_existing=True'
```

![image](https://github.com/langchain-ai/langchain/assets/108780317/9271096e-d246-4ebf-9978-74cef82c3fe9)

Setting `extend_existing=True` doesn't seem to work; I think this is a SQLAlchemy error.

### Expected behavior

The above code should just work without error. It did in previous versions: in 0.0.310 this code works, but it shows an error in 0.0.390. If there are multiple connections happening to the index, the documentation doesn't explain how to handle them.
Table 'langchain_pg_collection' is already defined for this MetaData instance. Specify 'extend_existing=True'
https://api.github.com/repos/langchain-ai/langchain/issues/14760/comments
4
2023-12-15T13:29:34Z
2024-01-03T09:19:59Z
https://github.com/langchain-ai/langchain/issues/14760
2,043,732,734
14,760
[ "langchain-ai", "langchain" ]
### System Info

```
Name                    Version  Build               Channel
langchain               0.0.350  pypi_0              pypi
langchain-cli           0.0.19   pypi_0              pypi
langchain-community     0.0.3    pypi_0              pypi
langchain-core          0.1.1    pypi_0              pypi
langchain-experimental  0.0.47   pypi_0              pypi
python                  3.12.0   h47c9636_0_cpython  conda-forge
```

System: macOS 14.2 (Apple M1 chip)

### Who can help?

_No response_

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

1. Go to https://python.langchain.com/docs/get_started/quickstart
2. Do all the package installations as guided.
3. Copy the python code under the section 'Serving with Langserve' and save it to a file `serve.py`.
4. Execute the file: `python serve.py`
5. Open `http://localhost:8000` in a browser.

### Expected behavior

Expected to see the Langserve UI. Got the following output instead:

```
LANGSERVE: Playground for chain "/category_chain/" is live at:
LANGSERVE:  │
LANGSERVE:  └──> /category_chain/playground/
LANGSERVE:
LANGSERVE: See all available routes at /docs/
LANGSERVE: ⚠️ Using pydantic 2.5.2. OpenAPI docs for invoke, batch, stream, stream_log endpoints will not be generated. API endpoints and playground should work as expected. If you need to see the docs, you can downgrade to pydantic 1. For example, `pip install pydantic==1.10.13`. See https://github.com/tiangolo/fastapi/issues/10360 for details.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://localhost:8000 (Press CTRL+C to quit)
INFO:     ::1:58516 - "GET / HTTP/1.1" 404 Not Found
INFO:     ::1:58516 - "GET /favicon.ico HTTP/1.1" 404 Not Found
^CINFO:     Shutting down
INFO:     Waiting for application shutdown.
INFO:     Application shutdown complete.
INFO:     Finished server process [91610]
```
Langserve example from Quickstart tutorial not working
https://api.github.com/repos/langchain-ai/langchain/issues/14757/comments
10
2023-12-15T12:00:17Z
2024-07-23T23:23:06Z
https://github.com/langchain-ai/langchain/issues/14757
2,043,603,478
14,757
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise.

I'm currently working on a project where I need to fetch all the sub-URLs from a website using Langchain. How can we achieve this? Below is my code:

```python
from langchain.tools.playwright import ExtractHyperlinksTool, NavigateTool

# Initialize the tools
navigate_tool = NavigateTool()
extract_hyperlinks_tool = ExtractHyperlinksTool()

# Navigate to the website
navigate_tool.navigate("https://www.example.com")

# Extract all hyperlinks
hyperlinks = extract_hyperlinks_tool.extract()

# Print all hyperlinks
for link in hyperlinks:
    print(link)
```

I am getting the error mentioned below:

```
File "/home/hs/env/lib/python3.8/site-packages/langchain_core/load/serializable.py", line 97, in __init__
    super().__init__(**kwargs)
  File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for NavigateTool
__root__
  Either async_browser or sync_browser must be specified. (type=value_error)
```

### Suggestion:

_No response_
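As the validation error says, the Playwright tools need a browser instance passed as `sync_browser=` or `async_browser=` at construction time. Separately, once a page's HTML has been fetched, extracting the sub-URLs themselves needs nothing beyond the standard library; a minimal sketch with `html.parser` (the class name and sample HTML here are illustrative, not part of LangChain):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag in a fed HTML document."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

html = '<a href="/docs">Docs</a> <a href="https://www.example.com/blog">Blog</a>'
parser = LinkExtractor()
parser.feed(html)
print(parser.links)
```

Relative links such as `/docs` would still need resolving against the page URL (e.g. with `urllib.parse.urljoin`) before they can be crawled.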
Issue: I'm currently working on a project where I need to fetch all the sub-URLs from a website using Langchain.
https://api.github.com/repos/langchain-ai/langchain/issues/14754/comments
4
2023-12-15T10:52:12Z
2024-03-23T16:07:00Z
https://github.com/langchain-ai/langchain/issues/14754
2,043,485,623
14,754
[ "langchain-ai", "langchain" ]
### System Info

- Python 3.11.7
- Windows 64bit
- langchain_google_genai==0.0.4
- langchain==0.0.350
- pymongo==4.6.0

### Who can help?

@sbusso @jarib

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

1. Install langchain_google_genai using `pip install -U langchain-google-genai`
2. Then, `from langchain_google_genai import GoogleGenerativeAI`
3. This will produce the error.

### Expected behavior

I want to use this model for Google AI, but I am not able to access it.
ImportError: cannot import name 'GoogleGenerativeAI' from 'langchain_google_genai' ( from langchain_google_genai import GoogleGenerativeAI )
https://api.github.com/repos/langchain-ai/langchain/issues/14753/comments
4
2023-12-15T10:01:36Z
2024-03-28T14:49:52Z
https://github.com/langchain-ai/langchain/issues/14753
2,043,356,756
14,753
[ "langchain-ai", "langchain" ]
### Feature request

All chains inherit from the `Runnable` class, and it has a `stream` method. But no chain has its own `stream` method. That's why we have to pass a callback handler every time we work with streaming, and write additional logic to handle the stream ourselves. There should be functionality where we can just call `stream` and it returns a generator, the same as a chat model (ChatOpenAI) does.

### Motivation

I believe Chain is the core of LangChain. This is so frustrating when working with streams. Every time I work with streaming I have to write something like the code below: I have to run the QA task in the background using a task or thread, which is not suitable if I'm using a framework. The framework should handle this. New developers will struggle if they need to handle this additional logic even though they use a framework.

```python
class StreamingLLMCallbackHandler(BaseCallbackHandler):
    """Callback handler for streaming LLM responses to a queue."""

    def __init__(self):
        self._is_done = False
        self._queue = Queue()

    def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
        """Run on new LLM token. Only available when streaming is enabled."""
        # print(token)
        self._queue.put(EventData(content=token))

    def on_llm_end(self, response: LLMResult, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) -> Any:
        """Run when LLM ends running."""
        self._queue.put(EventData(content=None, finish_reason='done'))
        self._is_done = True

    def add_new_token_to_stream(self, data: EventData, is_done=True):
        self._queue.put(data)
        self._is_done = is_done

    @property
    def stream_gen(self):
        while not self._is_done or not self._queue.empty():
            try:
                delta: EventData = self._queue.get()
                if delta.data.get('finish_reason') == 'error':
                    yield str(StreamData(event_data=delta, event=EnumStreamEventType.ERROR))
                else:
                    yield str(StreamData(event_data=delta))
            except Empty:
                continue


class LangChainChatService(ChatBaseService):
    def __init__(self, model: LangchainChatModel, tool_query=None):
        super().__init__()
        self.model = model
        self.request_manager = api_request_manager_var.get()
        self.tool_query = tool_query

    def _qa_task(self, streaming_handler: StreamingLLMCallbackHandler, qa: BaseConversationalRetrievalChain, formatted_chat_history: list[BaseMessage]):
        try:
            question = self.tool_query or self.model.query
            answer = qa.run(question=question, chat_history=formatted_chat_history)
            self._publish_chat_history(answer)
        except Exception as ex:
            streaming_handler.add_new_token_to_stream(EventData(content=get_user_message_on_exception(ex), error=build_error_details(ex), finish_reason='error'))
            logger.exception(ex)

    async def stream_chat_async(self):
        streaming_handler = StreamingLLMCallbackHandler()
        try:
            formatted_chat_history = [] if self.tool_query else self.get_formatted_chat_history(self.model.chat_history)
            qa = self._get_qa_chain(callbacks=[streaming_handler])
            asyncio.create_task(asyncio.to_thread(self._qa_task, streaming_handler, qa, formatted_chat_history))
            return streaming_handler.stream_gen
        except Exception as ex:
            logger.exception(ex)
            streaming_handler.add_new_token_to_stream(EventData(content=get_user_message_on_exception(ex), error=build_error_details(ex), finish_reason='error'))
            return streaming_handler.stream_gen
```

### Your contribution

N/A
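The handler above hand-rolls a queue-backed generator around LangChain's callbacks. Stripped of the framework details, the core producer/consumer pattern is small; the sketch below shows it in isolation with stdlib types only (the token list and sentinel are illustrative):

```python
import threading
from queue import Queue

_DONE = object()  # sentinel that ends the stream

def stream_tokens(q):
    """Yield tokens pushed by a producer until the sentinel arrives."""
    while True:
        token = q.get()
        if token is _DONE:
            return
        yield token

def produce(q):
    for tok in ["Hello", ", ", "world"]:
        q.put(tok)   # what on_llm_new_token does
    q.put(_DONE)     # what on_llm_end does

q = Queue()
threading.Thread(target=produce, args=(q,)).start()
print("".join(stream_tokens(q)))
```

The feature request is essentially for chains to encapsulate exactly this plumbing so callers get the generator directly, as `Runnable.stream` does.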
Every chain (LLMChain, ConversationalRetrievalChain etc) should return stream without CallbackHandler same as llm ChatModel
https://api.github.com/repos/langchain-ai/langchain/issues/14752/comments
4
2023-12-15T06:56:26Z
2024-02-29T05:31:10Z
https://github.com/langchain-ai/langchain/issues/14752
2,043,013,698
14,752
[ "langchain-ai", "langchain" ]
### System Info

ImportError: cannot import name 'AzureChatopenAI' from 'langchain.chat_models'

### Who can help?

_No response_

### Information

- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components

- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

```python
from langchain.chat_models import AzureChatopenAI
```

### Expected behavior

```python
from langchain.chat_models import AzureChatopenAI

llm = ChatOpenAI(
    openai_api_base=url,
    openai_api_version="2023-05-15",
    deployment_name="gpt-3.5",
    openai_api_key="test123456",
    openai_api_type="azure",
)
```
cannot import name 'AzureChatopenAI' from 'langchain.chat_models'
https://api.github.com/repos/langchain-ai/langchain/issues/14751/comments
1
2023-12-15T06:03:38Z
2023-12-15T06:11:27Z
https://github.com/langchain-ai/langchain/issues/14751
2042960303
14751
[ "langchain-ai", "langchain" ]
### Feature request

I want to get structured output with LangChain. My LLM is ChatGLM, but most output parser demos rely on OpenAI.

### Motivation

```python
retriever = db.as_retriever()
qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type='stuff',
    retriever=retriever,
    verbose=True
)

def chat(question, history):
    response = qa.run(question)
    return response

demo = gr.ChatInterface(chat)
demo.launch(inbrowser=True)
```

### Your contribution

I want to get structured output with LangChain. My LLM is ChatGLM, but most output parser demos rely on OpenAI.
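Output parsers are model-agnostic: they only post-process the text the LLM returns, so the same parser works with ChatGLM as with OpenAI. A small sketch with a comma-separated-list parser, falling back to a plain `str.split` if langchain is not installed:

```python
raw_llm_output = "red, green, blue"  # pretend this text came back from ChatGLM

try:
    from langchain.output_parsers import CommaSeparatedListOutputParser
    parser = CommaSeparatedListOutputParser()
    items = parser.parse(raw_llm_output)
except ImportError:
    items = [part.strip() for part in raw_llm_output.split(",")]
```

The parser's `get_format_instructions()` text can be appended to the prompt sent to ChatGLM, exactly as in the OpenAI demos.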
adapt chatglm output_parser
https://api.github.com/repos/langchain-ai/langchain/issues/14750/comments
1
2023-12-15T05:55:16Z
2024-03-22T16:06:51Z
https://github.com/langchain-ai/langchain/issues/14750
2042951433
14750
[ "langchain-ai", "langchain" ]
Hello Langchain Team, I've been working with the `create_vectorstore_router_agent` function, particularly in conjunction with the `VectorStoreRouterToolkit`, and I've encountered a limitation that I believe could be an important area for enhancement. Currently, the response from this function primarily includes the final processed answer to a query. However, it does not provide any details about the source documents or the similarity search performed within the vector store. In many applications, especially those that require a high degree of transparency and traceability of information, having access to the source documents is crucial. The ability to see which documents were retrieved, along with their similarity scores or other relevant metadata, is invaluable. It helps in understanding the basis of the answers provided by the agent and is essential for verifying the relevance and accuracy of the information. Therefore, I suggest enhancing the functionality of the toolkit to include an option to return detailed information about the retrieved documents in the response. This could be implemented as an optional feature that can be enabled as needed, depending on the specific requirements of the use case. Such a feature would significantly enhance the utility and applicability of the Langchain library, particularly in scenarios where detailed source information is essential for validation, auditing, or explanatory purposes. Thank you for considering this enhancement. I believe it would make a great addition to the capabilities of Langchain. Best regards, Sarath chennamsetty.
Expose Search Similarity Results as Source Documents in create_vectorstore_router_agent Responses
https://api.github.com/repos/langchain-ai/langchain/issues/14744/comments
1
2023-12-15T01:11:29Z
2024-03-22T16:06:45Z
https://github.com/langchain-ai/langchain/issues/14744
2042729395
14744
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. Hugging face will deprecate the `InferenceApi` and move to `InferenceClient`. Can the langchain package also update the dependency in `HuggingFaceHub` accordingly? Refer to [migration docs](https://huggingface.co/docs/huggingface_hub/guides/inference#legacy-inferenceapi-client) to migrate to new client. ### Suggestion: _No response_
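For reference, the application-side migration is mostly a drop-in swap; a hedged sketch (the model id and token are placeholders, and the actual network call is left commented out):

```python
try:
    from huggingface_hub import InferenceClient  # replaces the legacy InferenceApi
    client = InferenceClient(model="google/flan-t5-xxl", token="hf_xxx")  # placeholders
    # text = client.text_generation("What is LangChain?", max_new_tokens=64)
    migrated = True
except ImportError:
    migrated = False  # huggingface_hub not installed in this environment
```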
Issue: Update the outdated hugging face client
https://api.github.com/repos/langchain-ai/langchain/issues/14741/comments
1
2023-12-15T00:36:29Z
2024-03-22T16:06:41Z
https://github.com/langchain-ai/langchain/issues/14741
2042704569
14741
[ "langchain-ai", "langchain" ]
### System Info

Langchain Version - 0.0.348
OS or Platform Version - Windows 11
Python Version - 3.11.5
Conda Version - 23.7.4

### Who can help?

@agola11

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

## BRIEF

* [Issue 1] When using Google Drive Loader to load Google Docs, I encountered several errors following the official documentation. I ran my code on 3 different platforms - Windows, Kaggle Notebooks, and CodeSandbox (Linux). [Here is the code](https://python.langchain.com/docs/integrations/document_loaders/google_drive#instructions-for-ingesting-your-google-docs-data) that does not work.
* [Issue 2] However, after playing with the code for a while, I was able to successfully authenticate with Google and load the docs. But, this brings up another issue to our notice. The environment variable needs to be set, but its value can be any string.

## REPRODUCTION STEPS

<details>
<summary>Pre requisites</summary>

1. Complete the [Prerequisites for the GoogleDriveLoader.](https://python.langchain.com/docs/integrations/document_loaders/google_drive#prerequisites)
2. Create a Google Docs file and copy its document ID. [How to find the document ID.](https://python.langchain.com/docs/integrations/document_loaders/google_drive#instructions-for-ingesting-your-google-docs-data)
3. Please read the section mentioned in point number 2.
4. Create an empty directory on Windows for this issue and place the `credentials.json` file there, received from Google Cloud.
5. Create a main.py file, to write the below 3 programs, in this directory.
6. I run the program with `python main.py`
</details>

### FIRST TRY

Follow the documentation

```python
# Set the Path to the credentials and authentication token
credentials_path = "credentials.json"
token_path = "token.json"

# Create the loader object (no environment variable is set)
loader = GoogleDriveLoader(
    document_ids=["Your Document ID here"],
    credentials_path=credentials_path,
    token_path=token_path
)

# Call the loader, after the above code:
loader.load()
```

<details>
<summary>Output - Errors</summary>

Windows and CodeSandbox (Linux):

> raise exceptions.DefaultCredentialsError(_CLOUD_SDK_MISSING_CREDENTIALS) google.auth.exceptions.DefaultCredentialsError: Your default credentials were not found. To set up Application Default Credentials, see https://cloud.google.com/docs/authentication/external/set-up-adc for more information.

Kaggle Notebooks:

> RefreshError: Failed to retrieve http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/?recursive=true from the Google Compute Engine metadata service. Compute Engine Metadata server unavailable
</details>

### SECOND TRY

Documentation mentions that if I get RefreshError, I should not pass the credentials path in the constructor, rather in the env var.

```python
# Set the env var. This path was declared before
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = credentials_path  # change

loader = GoogleDriveLoader(
    document_ids=["Your Document ID here"],
    # credentials_path=credentials_path,  # this one is removed
    token_path=token_path
)
```

<details>
<summary>Output - Errors</summary>

Kaggle Notebooks and CodeSandbox (Linux)

> FileNotFoundError: [Errno 2] No such file or directory: '/root/.credentials/credentials.json'

Windows

> FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\Username\\.credentials\\credentials.json'
</details>

### THIRD TRY: MY APPROACH

The above error means that if we skip the credentials_path param in the constructor, the loader uses the default path and does not use the path provided in the env var. Now, if I set the GOOGLE_APPLICATION_CREDENTIALS environment variable to ANY string and pass the path in the constructor:

```python
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = ""  # change: set the env var to any str

loader = GoogleDriveLoader(
    document_ids=["Your Document ID here"],
    credentials_path=credentials_path,  # this one is added again
    token_path=token_path
)
```

<details>
<summary>Output - user authentication to load the gdoc</summary>

Windows and Kaggle Notebook:

> Please visit this URL to authorize this application: https://accounts.google.com/o...

CodeSandbox (Linux) 😂:

> webbrowser.Error: could not locate runnable browser
</details>

## CONCLUSION

* [Issue 1] The code provided in the documentation does not work. According to the docs:
  * If the env var is **not set**, and the path is **passed** in the constructor, the code gives errors.
  * If the env var is **set** as the path, and the path is **not passed** in the constructor, the code gives errors.
* [Issue 2] The env var needs to be set, but its value is not important. As per my approach:
  * If the env var is **set** as ANYTHING, and the path is **passed** in the constructor, the code works.

### Expected behavior

I do not know what to expect, because the code given in the official documentation is not working correctly.

Suggestion: However, I would suggest that, if there is no need for the env var, maybe we can remove the step of setting it. This will resolve the second issue. For the first issue, what remains is to correct the documentation.
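The working combination reported above can be captured in a small sketch; note the env var's value is unused, it merely has to exist (all paths and IDs are placeholders):

```python
import os

# Any string works; what matters is only that the variable is set.
os.environ.setdefault("GOOGLE_APPLICATION_CREDENTIALS", "unused")

loader_kwargs = {
    "document_ids": ["Your Document ID here"],
    "credentials_path": "credentials.json",  # the real path goes in the constructor
    "token_path": "token.json",
}
# from langchain.document_loaders import GoogleDriveLoader
# docs = GoogleDriveLoader(**loader_kwargs).load()
```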
Google Drive Loader: The code in the Official Doc does not work; and Setting environment variable GOOGLE_APPLICATION_CREDENTIALS is important, though its value is not
https://api.github.com/repos/langchain-ai/langchain/issues/14725/comments
1
2023-12-14T18:16:28Z
2024-03-21T16:06:52Z
https://github.com/langchain-ai/langchain/issues/14725
2042229871
14725
[ "langchain-ai", "langchain" ]
### System Info

langchain==0.0.330
langchain-core==0.1.0
langchain-google-genai==0.0.3
python: 3.11

### Who can help?

@agola11 @hwchase17

### Information

- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components

- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

```python
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(
    model="gemini-pro",
    google_api_key=google_api_key,
)
return LLMChain(llm=llm, prompt=prompt, verbose=verbose)
```

Error:

```
pydantic.error_wrappers.ValidationError: 2 validation errors for LLMChain
llm
  instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)
llm
  instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)
```

### Expected behavior

I expect LLMChain to run as usual with this new chat model.
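The likely culprit is the package mix: langchain 0.0.330 predates the langchain-core split, so its `LLMChain` validates against its own vendored `Runnable`, while `ChatGoogleGenerativeAI` subclasses `Runnable` from langchain-core 0.1.0. Aligning versions (a hedged suggestion, e.g. `pip install -U "langchain>=0.0.350"`) usually resolves it. A tiny sketch of why two same-named classes fail the check:

```python
# Two distinct classes with the same name are still different types, which is
# exactly how pydantic's arbitrary-type validation sees the two Runnable classes.
class RunnableA:   # stand-in for langchain's vendored Runnable
    pass

class RunnableB:   # stand-in for langchain_core.runnables.Runnable
    pass

model = RunnableB()
check = isinstance(model, RunnableA)  # False, hence the ValidationError
```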
`LLMChain` does not support `ChatGoogleGenerativeAI`
https://api.github.com/repos/langchain-ai/langchain/issues/14717/comments
5
2023-12-14T14:59:49Z
2023-12-16T10:53:50Z
https://github.com/langchain-ai/langchain/issues/14717
2041882866
14717
[ "langchain-ai", "langchain" ]
### Discussed in https://github.com/langchain-ai/langchain/discussions/14714 <div type='discussions-op-text'> <sup>Originally posted by **ciliamadani** December 14, 2023</sup> I'm currently in the process of developing a chatbot utilizing Langchain and the Ollama (llama2 7b model). My objective is to allow users to control the number of tokens generated by the language model (LLM). In the Ollama documentation, I came across the parameter 'num_predict,' which seemingly serves this purpose. However, when using Ollama as a class from Langchain, I couldn't locate the same parameter. Consequently, I've been attempting to pass it as metadata. Unfortunately, even when setting this parameter to a low value, such as 50, the LLM continues to generate more tokens than expected. I'm wondering if you have any insights on how I can effectively control the number of generated tokens when using Ollama as a Langchain class? My current code llm = Ollama(model="llama2:7b-chat-q4_0",callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),temperature=temperature,repeat_penalty=1.19,top_p=top_p,repeat_last_n=-1,metadata={"num_predict": num_predict}) Thank you.</div>
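In current langchain, `num_predict` is a first-class field on the `Ollama` wrapper, not something to tuck into `metadata`, so it can be passed directly. A hedged sketch (model name and values come from the snippet above):

```python
params = {
    "model": "llama2:7b-chat-q4_0",
    "num_predict": 50,        # caps generated tokens; maps to Ollama's num_predict
    "temperature": 0.2,
    "repeat_penalty": 1.19,
}

try:
    from langchain.llms import Ollama
    llm = Ollama(**params)    # no network call happens at construction time
except Exception:
    llm = None                # langchain not installed, or an older wrapper version
```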
Ollama max tokens parameter
https://api.github.com/repos/langchain-ai/langchain/issues/14715/comments
1
2023-12-14T14:58:07Z
2024-03-21T16:06:47Z
https://github.com/langchain-ai/langchain/issues/14715
2041879113
14715
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. Hi all, trying to use Langchain with HuggingFace model and Embeddings. Am new to Langchain so any pointers welcome. `import os os.environ['HUGGINGFACEHUB_API_TOKEN']=myToken #required to avoid certificate issue os.environ['CURL_CA_BUNDLE'] = '' from langchain import PromptTemplate, HuggingFaceHub, LLMChain #Build prompt template = """ Question: {question} Answer: Let's think of the best answer, with arguments.""" prompt = PromptTemplate(template=template, input_variables=["question"]) llm_chain=LLMChain(prompt=prompt, llm=HuggingFaceHub(repo_id="google/flan-t5-xxl", model_kwargs={"temperature":0.5, "max_length":64})) from langchain.embeddings import HuggingFaceEmbeddings from langchain.document_loaders import PyPDFLoader loader = PyPDFLoader("https://www.ofgem.gov.uk/sites/default/files/docs/2018/10/rec_v1.0_main_body.pdf") pages = loader.load_and_split() from langchain.vectorstores import Chroma from langchain.chains import RetrievalQA doc_embed = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2") docsearch = Chroma.from_documents(pages, doc_embed) qa_chain = RetrievalQA.from_chain_type( llm_chain, retriever=docsearch.as_retriever() ) output = qa_chain.run("what is the retail energy code?")` I get the following error: ---> 89 return self( 90 input, 91 callbacks=config.get("callbacks"), 92 tags=config.get("tags"), 93 metadata=config.get("metadata"), 94 run_name=config.get("run_name"), 95 **kwargs, 96 ) TypeError: Chain.__call__() got an unexpected keyword argument 'stop' ### Suggestion: _No response_
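The `stop` error comes from passing an `LLMChain` where `RetrievalQA.from_chain_type` expects a language model: RetrievalQA forwards kwargs like `stop` to the object it was given, and `Chain.__call__` does not accept them. Passing the raw LLM is the fix; a hedged sketch (repo id and retriever are from the snippet above):

```python
try:
    from langchain import HuggingFaceHub
    llm = HuggingFaceHub(
        repo_id="google/flan-t5-xxl",
        model_kwargs={"temperature": 0.5, "max_length": 64},
    )
except Exception:
    llm = None  # langchain missing, or HUGGINGFACEHUB_API_TOKEN not set here

# qa_chain = RetrievalQA.from_chain_type(
#     llm=llm,                          # the LLM itself, not an LLMChain
#     retriever=docsearch.as_retriever(),
# )
```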
TypeError: Chain.__call__() got an unexpected keyword argument 'stop'
https://api.github.com/repos/langchain-ai/langchain/issues/14712/comments
1
2023-12-14T13:18:03Z
2024-03-21T16:06:42Z
https://github.com/langchain-ai/langchain/issues/14712
2041683363
14712
[ "langchain-ai", "langchain" ]
### System Info

Anaconda 23.7.2
Windows 11
Python 3.11.5

### Who can help?

@hwchase17 @agola11

### Information

- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components

- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

Method 1

```python
def generate_lecture(topic: str, context: str):
    template = """
    As an accomplished university professor and expert in {topic}, your task is to develop
    an elaborate, exhaustive, and highly detailed lecture on the subject. Remember to
    generate content ensuring both novice learners and advanced students can benefit
    from your expertise, while leveraging the provided context.

    Context: {context}
    """
    prompt = ChatPromptTemplate.from_template(template)
    model = ChatGoogleGenerativeAI(model="gemini-pro", google_api_key=palm_api_key)
    response = model.invoke(template)
    return response.content
```

Method 2

```python
def generate_lecture(topic: str, context: str):
    template = """
    As an accomplished university professor and expert in {topic}, your task is to develop
    an elaborate, exhaustive, and highly detailed lecture on the subject. Remember to
    generate content ensuring both novice learners and advanced students can benefit
    from your expertise, while leveraging the provided context.

    Context: {context}
    """
    prompt = ChatPromptTemplate.from_template(template)
    model = ChatGoogleGenerativeAI(model="gemini-pro", google_api_key=palm_api_key)
    chain = prompt | model | StrOutputParser()
    response = chain.invoke({"topic": topic, "context": context})
    return response
```

Result we receive on execution of method 1 and method 2 is

```
File "C:\Users\ibokk\RobotForge\mvp\service\llm.py", line 80, in generate_lecture
    model = ChatGoogleGenerativeAI(model="gemini-pro", google_api_key=palm_api_key)
File "C:\Users\ibokk\anaconda3\envs\robotforge\Lib\site-packages\langchain_core\load\serializable.py", line 97, in __init__
    super().__init__(**kwargs)
File "pydantic\main.py", line 339, in pydantic.main.BaseModel.__init__
File "pydantic\main.py", line 1102, in pydantic.main.validate_model
File "C:\Users\ibokk\anaconda3\envs\robotforge\Lib\site-packages\langchain_google_genai\chat_models.py", line 502, in validate_environment
    values["_generative_model"] = genai.GenerativeModel(model_name=model)
AttributeError: module 'google.generativeai' has no attribute 'GenerativeModel'
```

### Expected behavior

A poor result will be random text generated which is not relevant to the prompt provided. An excellent result will be text generated relevant to the template provided.
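`GenerativeModel` only exists in google-generativeai 0.3.0 and later, so this error usually means an older package is installed; upgrading (e.g. `pip install -U google-generativeai`) is the hedged fix. A quick runtime check:

```python
try:
    import google.generativeai as genai
    has_generative_model = hasattr(genai, "GenerativeModel")
except ImportError:
    has_generative_model = None  # package not installed in this environment

# False here means the installed google-generativeai predates 0.3.0
# and should be upgraded before using langchain-google-genai.
```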
'google.generativeai' has no attribute 'GenerativeModel'
https://api.github.com/repos/langchain-ai/langchain/issues/14711/comments
20
2023-12-14T13:11:09Z
2024-08-02T08:51:51Z
https://github.com/langchain-ai/langchain/issues/14711
2041671633
14711
[ "langchain-ai", "langchain" ]
### System Info

langchain==0.0.350
langchain-community==0.0.3
langchain-core==0.1.0
langchain-google-genai==0.0.3
google-ai-generativelanguage==0.4.0
google-api-core==2.15.0
google-auth==2.25.2
google-generativeai==0.3.1
googleapis-common-protos==1.62.0

### Who can help?

_No response_

### Information

- [ ] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

```python
class LangChainChatService(ChatBaseService):
    def __init__(self, model: LangchainChatModel, tool_query=None):
        super().__init__()
        self.model = model
        self.request_manager = api_request_manager_var.get()
        self.tool_query = tool_query

    async def http_chat_async(self) -> dict:
        formatted_chat_history = [] if self.tool_query else self.get_formatted_chat_history(self.model.chat_history)
        qa = self._get_qa_chain()
        qa.return_source_documents = self.model.return_source_documents
        qa.return_generated_question = True
        query_start = time.time()
        question = self.tool_query or self.model.query
        qa_response = await qa.ainvoke({"question": question, "chat_history": formatted_chat_history})
        query_end = time.time()
        result = {
            'query_result': qa_response.get("answer"),
            'query_time': int((query_end - query_start) * 1000),
            'generated_question': qa_response.get('generated_question'),
            'source_documents': [document.__dict__ for document in qa_response.get("source_documents", [])],
        }
        self._publish_chat_history(result)
        return result

    def _get_qa_chain(self, callbacks: Callbacks = None) -> BaseConversationalRetrievalChain:
        collection_name = get_langchain_collection_name(self.model.client_id)
        connection_args = {"host": AppConfig.vector_db_host, "port": AppConfig.vector_db_port}
        embeddings = LLMSelector(self.model).get_embeddings()
        vector_store = Milvus(embeddings, collection_name=collection_name, connection_args=connection_args)
        expression = get_expression_to_fetch_db_text_from_ids(**self.model.model_dump())
        # Instance of ChatGoogleGenerativeAI
        qa_llm = LLMSelector(self.model).get_language_model(streaming=self.model.stream_response, callbacks=callbacks)
        condense_question_llm = LLMSelector(self.model).get_language_model()
        qa = ConversationalRetrievalChain.from_llm(
            llm=qa_llm,
            retriever=vector_store.as_retriever(search_type="similarity", search_kwargs={"k": self.model.similarity_top_k, 'expr': expression}),
            condense_question_llm=condense_question_llm,
            verbose=True
        )
        return qa
```

The code uses this prompt from the langchain code base:

```python
prompt_template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.

{context}

Question: {question}
Helpful Answer:"""
QA_PROMPT = PromptTemplate(
    template=prompt_template, input_variables=["context", "question"]
)
```

I see that after fetching context documents from the vector database, langchain generates a `SystemMessage` along with a `HumanMessage` using the above prompt. And there is a validation in `ChatGoogleGenerativeAI` to prevent system messages. This is the possible reason for producing the following error:

`langchain_google_genai.chat_models.ChatGoogleGenerativeAIError: Message of 'system' type not supported by Gemini. Please only provide it with Human or AI (user/assistant) messages.`

### Expected behavior

As I didn't provide any SystemMessage, ChatGoogleGenerativeAI should work without exception. This bug is related to Langchain.
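As a workaround, recent langchain-google-genai releases expose `convert_system_message_to_human=True`, which folds the internally generated system prompt into the first human message before it reaches Gemini. A hedged sketch (the API key is a placeholder, and the flag may not exist in 0.0.3):

```python
try:
    from langchain_google_genai import ChatGoogleGenerativeAI
    qa_llm = ChatGoogleGenerativeAI(
        model="gemini-pro",
        google_api_key="placeholder-key",
        convert_system_message_to_human=True,  # merge system prompt into first human turn
    )
except Exception:
    qa_llm = None  # package missing, or this version lacks the flag
```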
Raise "Message of 'system' type not supported by Gemini" exception by ChatGoogleGenerativeAI with ConversationalRetrievalChain
https://api.github.com/repos/langchain-ai/langchain/issues/14710/comments
7
2023-12-14T11:41:27Z
2024-02-27T04:46:04Z
https://github.com/langchain-ai/langchain/issues/14710
2041522040
14710
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise.

Below are the dependencies installed so far:

```
django==4.0
django-rest-framework
langchain==0.0.349
pdf2image
chromadb
unstructured
openai
pypdf
tiktoken
django-cors-headers
django-environ
pytesseract==0.3.10
beautifulsoup4==4.12.2
atlassian-python-api==3.38.0
tiktoken==0.4.0
lxml==4.9.2
```

What other dependencies need to be installed for fetching Confluence attachments?

### Suggestion:

_No response_
Issue: what are all dependencies need to install for fetching confluence attachments
https://api.github.com/repos/langchain-ai/langchain/issues/14706/comments
1
2023-12-14T09:19:24Z
2024-03-21T16:06:37Z
https://github.com/langchain-ai/langchain/issues/14706
2041277150
14706
[ "langchain-ai", "langchain" ]
Hi @dossubot. I have a document that contains 3 different titles of a single document. I want to create chunks based on every header. Below is my existing code; please modify it accordingly and send it back to me.

```python
def load_docs_only(directory_path):
    docs, _, _, _ = load_documents(directory_path)
    return docs

def text_splitter_by_char():
    splitter = RecursiveCharacterTextSplitter(
        chunk_size=1000,
        chunk_overlap=100,
        length_function=len,
    )
    return splitter

# Load the content
docs = load_docs_only(cfg.directory_path)

# Split the content
splitter = text_splitter_by_char()
chunks = splitter.split_documents(docs)
chunks
```
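If the titles are markdown-style headers, `MarkdownHeaderTextSplitter` chunks on them directly instead of on character counts. A minimal sketch with a crude fallback so it runs without langchain installed:

```python
sample = "# Title A\nfirst part\n# Title B\nsecond part\n# Title C\nthird part"

try:
    from langchain.text_splitter import MarkdownHeaderTextSplitter
    splitter = MarkdownHeaderTextSplitter(headers_to_split_on=[("#", "title")])
    chunks = splitter.split_text(sample)   # one chunk per header section
    n_chunks = len(chunks)
except ImportError:
    n_chunks = sample.count("# ")          # crude stand-in: count the headers
```

The header text lands in each chunk's metadata, and a `RecursiveCharacterTextSplitter` can still be applied afterwards if a section is too long.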
Splitting the document based on headers using Recursive Character Text Splitter
https://api.github.com/repos/langchain-ai/langchain/issues/14705/comments
1
2023-12-14T09:05:08Z
2023-12-22T09:57:43Z
https://github.com/langchain-ai/langchain/issues/14705
2041252793
14705
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise.

Hi, I am trying to stream the response from the LLM back to the client by using a callback with a custom StreamHandler, but on_llm_new_token also includes the output from the rephrase_question step, while the final response does not include the rephrased question. I don't want the rephrased question to be present in the response that is streaming.

The StreamHandler class is given below:

```python
class StreamHandler(BaseCallbackHandler):
    def __init__(self):
        self.text = ""

    def on_llm_new_token(self, token: str, **kwargs):
        old_text = self.text
        self.text += token
        # Calculate the new content since the last emission
        new_content = self.text[len(old_text):]
        socketio.emit("update_response", {"response": new_content})
```

The qa-chain is defined as below:

```python
qa_chain = ConversationalRetrievalChain.from_llm(
    llm=chat,
    retriever=MyVectorStoreRetriever(
        vectorstore=vectordb,
        search_type="similarity_score_threshold",
        search_kwargs={"score_threshold": SIMILARITY_THRESHOLD, "k": 1},
    ),
    return_source_documents=True,
    rephrase_question=False,
)

response = qa_chain(
    {
        "question": user_input,
        "chat_history": chat_history,
    },
    callbacks=[stream_handler],
)
```

### Suggestion:

_No response_
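The extra tokens come from the question-condensing LLM sharing the streaming callback. Giving the condense step its own, non-streaming LLM keeps the rephrased question out of the stream; a hedged sketch (the retriever and handler names are from the snippet above):

```python
try:
    from langchain.chat_models import ChatOpenAI
    answer_llm = ChatOpenAI(streaming=True)    # attach stream_handler at call time
    condense_llm = ChatOpenAI(temperature=0)   # no callbacks: rephrasing stays silent
except Exception:
    answer_llm = condense_llm = None           # langchain or OPENAI_API_KEY missing here

# qa_chain = ConversationalRetrievalChain.from_llm(
#     llm=answer_llm,
#     condense_question_llm=condense_llm,  # separate LLM for the rephrase step
#     retriever=retriever,
#     rephrase_question=False,
# )
```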
Issue: Rephrased question is included in the on_llm_new_token method while streaming the response from the LLM
https://api.github.com/repos/langchain-ai/langchain/issues/14703/comments
7
2023-12-14T08:47:45Z
2024-06-30T16:03:41Z
https://github.com/langchain-ai/langchain/issues/14703
2041224728
14703
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise.

My code:

```python
only_recall_inputs = RunnableParallel({
    "question": itemgetter('question'),
    "history": ????????,
    "docs": itemgetter('question') | retriever,
})
```

Just a simple chain. I want the "history" part to be `[]` or `''`. How do I do this?

### Suggestion:

_No response_
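A constant branch can be expressed with a plain lambda (coerced to a Runnable), so `history` always yields an empty list. A runnable sketch with a pure-Python fallback:

```python
try:
    from operator import itemgetter
    from langchain_core.runnables import RunnableParallel
    only_recall_inputs = RunnableParallel(
        question=itemgetter("question"),
        history=lambda _: [],          # ignore the input, always return []
    )
    out = only_recall_inputs.invoke({"question": "hi"})
except ImportError:
    out = {"question": "hi", "history": []}
```

The same pattern works for the `docs` branch alongside the retriever in the original dict.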
Issue: How to set the Chain with valid/empty input
https://api.github.com/repos/langchain-ai/langchain/issues/14702/comments
1
2023-12-14T08:45:49Z
2024-03-21T16:06:32Z
https://github.com/langchain-ai/langchain/issues/14702
2041221683
14702
[ "langchain-ai", "langchain" ]
### System Info python 3.10 langchain version: 0.0.350 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction When using ElasticsearchStore `add_documents`, the error "The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()" is raised. langchain 0.0.317 works fine; after upgrading to langchain 0.0.350 the error appears. ### Expected behavior Fix it.
The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
https://api.github.com/repos/langchain-ai/langchain/issues/14701/comments
2
2023-12-14T08:24:32Z
2024-03-21T16:06:27Z
https://github.com/langchain-ai/langchain/issues/14701
2041187377
14701
[ "langchain-ai", "langchain" ]
### System Info LangChain: 0.0.348 langchain-google-genai: 0.0.3 python: 3.11 os: macOS11.6 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction langchain_google_genai.chat_models.ChatGoogleGenerativeAIError: Message of 'system' type not supported by Gemini. Please only provide it with Human or AI (user/assistant) messages. ### Expected behavior no error
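Until Gemini accepts system messages, one workaround is folding the system prompt into the first human turn (this is what the later `convert_system_message_to_human=True` flag in langchain-google-genai does). A self-contained sketch of the idea over simple `(role, text)` tuples:

```python
def fold_system_into_human(messages):
    """Merge all system texts into the first human message; drop system entries."""
    system_texts = [text for role, text in messages if role == "system"]
    rest = [(role, text) for role, text in messages if role != "system"]
    if system_texts and rest and rest[0][0] == "human":
        rest[0] = ("human", "\n".join(system_texts) + "\n" + rest[0][1])
    return rest

msgs = fold_system_into_human([("system", "Be brief."), ("human", "Hi")])
```

The same transformation applied to langchain `SystemMessage`/`HumanMessage` objects keeps the prompt intact while satisfying Gemini's human/ai-only constraint.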
Gemini not support SystemMessage and raise an error
https://api.github.com/repos/langchain-ai/langchain/issues/14700/comments
8
2023-12-14T07:07:13Z
2024-03-26T16:07:11Z
https://github.com/langchain-ai/langchain/issues/14700
2041050685
14700
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. what is this issue, and how can i resolve it: ```python os.environ["AZURE_OPENAI_API_KEY"] = AZURE_OPENAI_API_KEY os.environ["AZURE_OPENAI_ENDPOINT"] = AZURE_OPENAI_ENDPOINT os.environ["OPENAI_API_TYPE"] = "azure" os.environ["OPENAI_API_VERSION"] = "2023-05-15" os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY embedding = OpenAIEmbeddings() COLLECTION_NAME = "network_team_documents" CONNECTION_STRING = PGVector.connection_string_from_db_params( driver=os.environ.get(DB_DRIVER, DB_DRIVER), host=os.environ.get(DB_HOST, DB_HOST), port=int(os.environ.get(DB_PORT, DB_PORT)), database=os.environ.get(DB_DB, DB_DB), user=os.environ.get(DB_USER, DB_USER), password=os.environ.get(DB_PASS, DB_PASS), ) store = PGVector( collection_name=COLLECTION_NAME, connection_string=CONNECTION_STRING, embedding_function=embedding, extend_existing=True, ) gpt4 = AzureChatOpenAI( azure_deployment="GPT4", openai_api_version="2023-05-15", ) retriever = store.as_retriever(search_type="similarity", search_kwargs={"k": 10}) qa_chain = RetrievalQA.from_chain_type(llm=gpt4, chain_type="stuff", retriever=retriever, return_source_documents=True) return qa_chain ``` ```python Traceback (most recent call last): File "/opt/network_tool/chatbot/views.py", line 21, in chat chat_object = create_session() File "/opt/network_tool/chatbot/chatbot_functions.py", line 95, in create_session store = PGVector( File "/opt/klevernet_venv/lib/python3.10/site-packages/langchain_community/vectorstores/pgvector.py", line 199, in __init__ self.__post_init__() File "/opt/klevernet_venv/lib/python3.10/site-packages/langchain_community/vectorstores/pgvector.py", line 207, in __post_init__ EmbeddingStore, CollectionStore = _get_embedding_collection_store() File "/opt/klevernet_venv/lib/python3.10/site-packages/langchain_community/vectorstores/pgvector.py", line 66, in _get_embedding_collection_store class CollectionStore(BaseModel): File 
"/opt/klevernet_venv/lib/python3.10/site-packages/sqlalchemy/orm/decl_api.py", line 195, in __init__ _as_declarative(reg, cls, dict_) File "/opt/klevernet_venv/lib/python3.10/site-packages/sqlalchemy/orm/decl_base.py", line 247, in _as_declarative return _MapperConfig.setup_mapping(registry, cls, dict_, None, {}) File "/opt/klevernet_venv/lib/python3.10/site-packages/sqlalchemy/orm/decl_base.py", line 328, in setup_mapping return _ClassScanMapperConfig( File "/opt/klevernet_venv/lib/python3.10/site-packages/sqlalchemy/orm/decl_base.py", line 578, in __init__ self._setup_table(table) File "/opt/klevernet_venv/lib/python3.10/site-packages/sqlalchemy/orm/decl_base.py", line 1729, in _setup_table table_cls( File "", line 2, in __new__ File "/opt/klevernet_venv/lib/python3.10/site-packages/sqlalchemy/util/deprecations.py", line 281, in warned return fn(*args, **kwargs) # type: ignore[no-any-return] File "/opt/klevernet_venv/lib/python3.10/site-packages/sqlalchemy/sql/schema.py", line 436, in __new__ return cls._new(*args, **kw) File "/opt/klevernet_venv/lib/python3.10/site-packages/sqlalchemy/sql/schema.py", line 468, in _new raise exc.InvalidRequestError( sqlalchemy.exc.InvalidRequestError: Table 'langchain_pg_collection' is already defined for this MetaData instance. Specify 'extend_existing=True' to redefine options and columns on an existing Table object. ``` ### Suggestion: _No response_
sqlalchemy.exc.InvalidRequestError: Table 'langchain_pg_collection' is already defined for this MetaData instance.
https://api.github.com/repos/langchain-ai/langchain/issues/14699/comments
15
2023-12-14T06:51:40Z
2023-12-27T19:12:07Z
https://github.com/langchain-ai/langchain/issues/14699
2041031199
14699
[ "langchain-ai", "langchain" ]
### System Info Langchain version = 0.0.344 Python version = 3.11.5 ### Who can help? @hwchase17 @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [X] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Here is my code for connecting to Snowflake database and getting the tables and executing them in langchain SQLagent.. Howver i always get AttributeError: items. from snowflake.sqlalchemy import URL llm=AzureChatOpenAI(temperature=0.0,deployment_name=gpt-4-32k) snowflake_url = URL( account='xxxxx, user='xxxxxx', password='xxxxxx', database='xxxxx', schema='xxxxxx', warehouse='xxxxxxx' ) db = SQLDatabase.from_uri(snowflake_url, sample_rows_in_table_info=1, include_tables=['gc']) # Create the SQLDatabaseToolkit toolkit = SQLDatabaseToolkit(db=db, llm=llm) agent_executor = create_sql_agent( llm=llm, toolkit=toolkit, verbose=True, prefix=MSSQL_AGENT_PREFIX, format_instructions = MSSQL_AGENT_FORMAT_INSTRUCTIONS, agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION, top_k=30, early_stopping_method="generate", handle_parsing_errors = True, ) question = "List top 10 records from gc" response = agent_executor.run(question) Entering new AgentExecutor chain... Action: sql_db_list_tables Action Input: "" Observation: gc Thought:The 'gc' table is available in the database. I should now check the schema of the 'gc' table to understand its structure and the data it contains. 
Action: sql_db_schema Action Input: "gc" --------------------------------------------------------------------------- KeyError Traceback (most recent call last) File [c:\Anaconda_3\Lib\site-packages\sqlalchemy\sql\base.py:1150](file:///C:/Anaconda_3/Lib/site-packages/sqlalchemy/sql/base.py:1150), in ColumnCollection.__getattr__(self, key) [1149](file:///C:/Anaconda_3/Lib/site-packages/sqlalchemy/sql/base.py:1149) try: -> [1150](file:///C:/Anaconda_3/Lib/site-packages/sqlalchemy/sql/base.py:1150) return self._index[key] [1151](file:///C:/Anaconda_3/Lib/site-packages/sqlalchemy/sql/base.py:1151) except KeyError as err: KeyError: 'items' The above exception was the direct cause of the following exception: AttributeError Traceback (most recent call last) Cell In[66], [line 33](vscode-notebook-cell:?execution_count=66&line=33) [31](vscode-notebook-cell:?execution_count=66&line=31) from langchain.globals import set_debug [32](vscode-notebook-cell:?execution_count=66&line=32) set_debug(False) ---> [33](vscode-notebook-cell:?execution_count=66&line=33) response = agent_executor.run(question) File [~\AppData\Roaming\Python\Python311\site-packages\langchain\chains\base.py:507](https://file+.vscode-resource.vscode-cdn.net/c%3A/Users/W67529/Autogen/~/AppData/Roaming/Python/Python311/site-packages/langchain/chains/base.py:507), in Chain.run(self, callbacks, tags, metadata, *args, **kwargs) [505](https://file+.vscode-resource.vscode-cdn.net/c%3A/Users/W67529/Autogen/~/AppData/Roaming/Python/Python311/site-packages/langchain/chains/base.py:505) if len(args) != 1: [506](https://file+.vscode-resource.vscode-cdn.net/c%3A/Users/W67529/Autogen/~/AppData/Roaming/Python/Python311/site-packages/langchain/chains/base.py:506) raise ValueError("`run` supports only one positional argument.") --> [507](https://file+.vscode-resource.vscode-cdn.net/c%3A/Users/W67529/Autogen/~/AppData/Roaming/Python/Python311/site-packages/langchain/chains/base.py:507) return self(args[0], callbacks=callbacks, 
tags=tags, metadata=metadata)[ [508](https://file+.vscode-resource.vscode-cdn.net/c%3A/Users/W67529/Autogen/~/AppData/Roaming/Python/Python311/site-packages/langchain/chains/base.py:508) _output_key [509](https://file+.vscode-resource.vscode-cdn.net/c%3A/Users/W67529/Autogen/~/AppData/Roaming/Python/Python311/site-packages/langchain/chains/base.py:509) ] [511](https://file+.vscode-resource.vscode-cdn.net/c%3A/Users/W67529/Autogen/~/AppData/Roaming/Python/Python311/site-packages/langchain/chains/base.py:511) if kwargs and not args: [512](https://file+.vscode-resource.vscode-cdn.net/c%3A/Users/W67529/Autogen/~/AppData/Roaming/Python/Python311/site-packages/langchain/chains/base.py:512) return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[ ... [201](file:///C:/Anaconda_3/Lib/site-packages/sqlalchemy/util/compat.py:201) # https://cosmicpercolator.com/2016/01/13/exception-leaks-in-python-2-and-3/ [202](file:///C:/Anaconda_3/Lib/site-packages/sqlalchemy/util/compat.py:202) # as the __traceback__ object creates a cycle [203](file:///C:/Anaconda_3/Lib/site-packages/sqlalchemy/util/compat.py:203) del exception, replace_context, from_, with_traceback AttributeError: items ### Expected behavior Should have executed the question.
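A workaround some users report for Snowflake reflection problems is to hand `SQLDatabase.from_uri` a plain connection string rather than a `snowflake.sqlalchemy.URL` object. The helper below is a hypothetical sketch of building such a string (the parameter names and account value are placeholders, not verified against a real account):

```python
from urllib.parse import quote_plus

def snowflake_uri(account, user, password, database, schema, warehouse):
    # Hypothetical helper: assemble the Snowflake SQLAlchemy connection
    # string by hand so the chain receives a plain string, not a URL object.
    # Credentials are percent-encoded so special characters survive.
    return (
        f"snowflake://{quote_plus(user)}:{quote_plus(password)}@{account}/"
        f"{database}/{schema}?warehouse={warehouse}"
    )
```

The resulting string can then be passed directly to `SQLDatabase.from_uri(...)`; lowercasing the names in `include_tables` is also worth trying, since Snowflake reflection is case-sensitive.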
SQLagent always giving me AttributeError: items for Snowflake tables
https://api.github.com/repos/langchain-ai/langchain/issues/14697/comments
8
2023-12-14T06:29:48Z
2024-03-21T16:06:17Z
https://github.com/langchain-ai/langchain/issues/14697
2041006966
14697
[ "langchain-ai", "langchain" ]
### System Info **Environment Details** **Langchain version 0.0.336 Python 3.9.2rc1** **Error encountered while executing the sample code mentioned in the "Semi_structured_multi_modal_RAG_LLaMA2.ipynb" notebook from the cookbook.** File [c:\Users\PLNAYAK\Documents\Local_LLM_Inference\llms\lib\site-packages\unstructured\file_utils\filetype.py:551](file:///C:/Users/PLNAYAK/Documents/Local_LLM_Inference/llms/lib/site-packages/unstructured/file_utils/filetype.py:551), in add_metadata_with_filetype.<locals>.decorator.<locals>.wrapper(*args, **kwargs) [549](file:///C:/Users/PLNAYAK/Documents/Local_LLM_Inference/llms/lib/site-packages/unstructured/file_utils/filetype.py:549) @functools.wraps(func) ... --> [482](file:///C:/Users/PLNAYAK/Documents/Local_LLM_Inference/llms/lib/site-packages/unstructured_inference/inference/layout.py:482) model = get_model(model_name, **kwargs) [483](file:///C:/Users/PLNAYAK/Documents/Local_LLM_Inference/llms/lib/site-packages/unstructured_inference/inference/layout.py:483) if isinstance(model, UnstructuredObjectDetectionModel): [484](file:///C:/Users/PLNAYAK/Documents/Local_LLM_Inference/llms/lib/site-packages/unstructured_inference/inference/layout.py:484) detection_model = model **TypeError: get_model() got an unexpected keyword argument 'ocr_languages'** I would appreciate any assistance in resolving this issue. Thank you. ### Who can help? 
@eyurtsev ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [X] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction from typing import Any from pydantic import BaseModel from unstructured.partition.pdf import partition_pdf # Get elements raw_pdf_elements = partition_pdf( filename=path + "Employee-Stock-Option-Plans-ESOP-Best-Practices-2.pdf",# Unstructured first finds embedded image blocks infer_table_structure=True, # Post processing to aggregate text once we have the title max_characters=4000, new_after_n_chars=3800, combine_text_under_n_chars=2000, image_output_dir_path=path, languages=['eng'], ) ### Expected behavior The notebook should run without any issues and produce the expected output as documented in the cookbook
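Aligning the installed `unstructured` and `unstructured-inference` versions may resolve this, since the `ocr_languages` keyword was dropped from newer inference releases. Until then, one generic defensive pattern (a sketch of the idea, not part of either library's API) is to strip keyword arguments a callee no longer accepts before forwarding them:

```python
import inspect

def filter_kwargs(func, kwargs):
    # Drop any keyword argument the target function does not accept, so a
    # removed parameter such as 'ocr_languages' stops raising TypeError.
    allowed = set(inspect.signature(func).parameters)
    return {k: v for k, v in kwargs.items() if k in allowed}
```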
TypeError: get_model() got an unexpected keyword argument 'ocr_languages'
https://api.github.com/repos/langchain-ai/langchain/issues/14696/comments
16
2023-12-14T06:18:44Z
2024-05-21T16:08:06Z
https://github.com/langchain-ai/langchain/issues/14696
2040991652
14696
[ "langchain-ai", "langchain" ]
### System Info ### **SYSTEM INFO** LangChain version : 0.0.345 Python version : 3.9.6 ### **ISSUE** I created this custom function, which throws an error if the vectorstore cannot retrieve any relevant document ``` def check_threshold(inp, vecs): query = inp['question'] threshold = inp['threshold'] d = [doc for doc,score in vecs.similarity_search_with_relevance_scores(query) if score >= threshold] if len(d) < 1: raise Exception("Not found!") return "\n\n".join([x.page_content for x in d]) ``` I want to use another chain if the main chain fails by using the `with_fallbacks` function on the main chain ``` main_chain = ({ "context" : lambda x: check_threshold(x, vecs), "question" : lambda x: x['question'] } | prompt | llm | StrOutputParser() ).with_fallbacks([fallback_chain]) ``` In the above code, the fallback_chain never gets triggered. **PS : The above code is just an example, the original code uses more complicated calculations with many exceptions raised in several custom functions. Therefore, it is not feasible to use the built-in Python try-except error handler** ### Who can help? 
_No response_ ### Information - [ ] The official example notebooks/scripts - [x] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction ``` def check_threshold(inp, vecs): query = inp['question'] threshold = inp['threshold'] d = [doc for doc,score in vecs.similarity_search_with_relevance_scores(query) if score >= threshold] if len(d) < 1: raise Exception("Not found!") return "\n\n".join([x.page_content for x in d]) main_chain = ({ "context" : lambda x: check_threshold(x, vecs), "question" : lambda x: x['question'] } | prompt | llm | StrOutputParser() ).with_fallbacks([fallback_chain]) main_chain.invoke({"question":"Hello, good morning"}) ``` ### Expected behavior fallback_chain get triggered whenever the main_chain raise an exception
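The semantics the report expects can be written down without LangChain at all. This pure-Python sketch mirrors what `with_fallbacks` is supposed to do, which is useful as a reference when checking whether an exception raised inside a lambda actually reaches the fallback machinery:

```python
def with_fallbacks(primary, fallbacks):
    # Illustration of fallback semantics: run the primary callable, and on
    # *any* exception try each fallback in order, re-raising only when
    # every candidate has failed.
    def run(value):
        last_error = None
        for fn in [primary, *fallbacks]:
            try:
                return fn(value)
            except Exception as e:  # intentional catch-all for the demo
                last_error = e
        raise last_error
    return run
```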
PYTHON ISSUE : Fallback does not catch exception in custom function using LCEL
https://api.github.com/repos/langchain-ai/langchain/issues/14695/comments
1
2023-12-14T04:54:06Z
2024-03-21T16:06:12Z
https://github.com/langchain-ai/langchain/issues/14695
2040901181
14695
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. Recently, langchain released the google gemini independent package to connect the google gemini LLM capabilities. This is also the first independent package released by langchain, which is a very big step forward. But I noticed that the name of this package is langchain-google-genai, which may not be very systematic. Perhaps we can use Python's namespace feature to manage all langchain-related packages. Documentation about namespace packages: - https://packaging.python.org/guides/packaging-namespace-packages/ - https://www.python.org/dev/peps/pep-0420/ ### Suggestion: Use the namespace capability to manage and publish all independent packages of langchain. The specific directory structure is as follows: docs: https://packaging.python.org/en/latest/guides/packaging-namespace-packages/ ``` pyproject.toml # AND/OR setup.py, setup.cfg src/ langchain/ # namespace package # No __init__.py here. google_genai/ # Regular import packages have an __init__.py. __init__.py module.py ``` and then you can use it like: ```python import langchain.google_genai from langchain import google_genai # ... code ```
Issue: Use python namespace capabilities to manage standalone packages
https://api.github.com/repos/langchain-ai/langchain/issues/14694/comments
1
2023-12-14T03:49:26Z
2024-03-21T16:06:07Z
https://github.com/langchain-ai/langchain/issues/14694
2040852615
14694
[ "langchain-ai", "langchain" ]
### Feature request With Gemini Pro going GA today (Dec. 13th). When can users of LangChain expect an update to use the new LLM? ### Motivation This will allow users of LangChain to use the latest LLM that Google is providing along with their safety settings. ### Your contribution I can try and help. Happy to contribute where needed
Google Gemini
https://api.github.com/repos/langchain-ai/langchain/issues/14671/comments
9
2023-12-13T19:02:58Z
2024-02-07T23:45:17Z
https://github.com/langchain-ai/langchain/issues/14671
2040302557
14671
[ "langchain-ai", "langchain" ]
### System Info Langchain: 0.0.349 Langchain-community: v0.0.1 Langchain-core: 0.0.13 Python: 3.12 Platform: Mac OS ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [X] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction I have the following BaseModel and BaseTool classes created. ```python class TaskPost(BaseModel): """ TaskPost """ # noqa: E501 due_date: Optional[datetime] = Field(default=None, description="ISO 8601 Due date on the task. REQUIRED for scheduled tasks", alias="dueDate") duration: Optional[TaskDuration] = None status: Optional[StrictStr] = Field(default=None, description="Defaults to workspace default status.") auto_scheduled: Optional[AutoScheduledInfo] = Field(default=None, alias="autoScheduled") name: Annotated[str, Field(min_length=1, strict=True)] = Field(description="Name / title of the task") project_id: Optional[StrictStr] = Field(default=None, alias="projectId") workspace_id: StrictStr = Field(alias="workspaceId") description: Optional[StrictStr] = Field(default=None, description="Input as GitHub Flavored Markdown") priority: Optional[StrictStr] = 'MEDIUM' labels: Optional[List[StrictStr]] = None assignee_id: Optional[StrictStr] = Field(default=None, description="The user id the task should be assigned to", alias="assigneeId") __properties: ClassVar[List[str]] = ["dueDate", "duration", "status", "autoScheduled", "name", "projectId", "workspaceId", "description", "priority", "labels", "assigneeId"] model_config = { "populate_by_name": True, "validate_assignment": True } class CreateTaskTool(BaseTool): name = "create_task" description = ( """Use this to create a new task from all available args that 
you have. Always make sure date and time inputs are in ISO format""") args_schema: Type[BaseModel] = openapi_client.TaskPost verbose = True ``` The agent will use the alias names instead of the field name. i.e (workspaceId, dueDate) instead of (workspace_id, due_date) ```linux [tool/start] [1:chain:AgentExecutor > 7:tool:create_task] Entering Tool run with input: "{'name': 'Update code', 'workspaceId': 'xxxxxyyyyyzzzzz', 'dueDate': '2023-12-14T00:00:00'}" ``` When the agent calls `_parse_input` function from [langchian_core/tools.py and reaches line 247](https://github.com/langchain-ai/langchain/blob/14bfc5f9f477fcffff3f9aa564a864c5d5cd5300/libs/core/langchain_core/tools.py#L247) the results are filtered out because the results have the field names and the tool_input has the alias names which do not match. ``` CreateTaskTool -> _parse_input -> parse_obj -> result: due_date=datetime.datetime(2023, 12, 14, 0, 0) duration=None status=None auto_scheduled=None name='Update code' project_id=None workspace_id='xxxxxyyyyyzzzzz' description=None priority='MEDIUM' labels=None assignee_id=None CreateTaskTool -> _parse_input -> parse_obj -> result keys: dict_keys(['due_date', 'duration', 'status', 'auto_scheduled', 'name', 'project_id', 'workspace_id', 'description', 'priority', 'labels', 'assignee_id']) CreateTaskTool -> _parse_input -> parse_obj -> tool_input keys: dict_keys(['name', 'workspaceId', 'dueDate']) CreateTaskTool -> _parse_input -> parse_obj -> finalResults: {'name': 'Update code'} ``` ### Expected behavior ``` CreateTaskTool -> _parse_input -> parse_obj -> result keys: dict_keys(['due_date', 'duration', 'status', 'auto_scheduled', 'name', 'project_id', 'workspace_id', 'description', 'priority', 'labels', 'assignee_id']) CreateTaskTool -> _parse_input -> parse_obj -> tool_input keys: dict_keys(['name', 'workspaceId', 'dueDate']) CreateTaskTool -> _parse_input -> finalResults: {'name': 'Update code','workspaceId': 'xxxxxyyyyyzzzzz', 'dueDate': 
'2023-12-14T00:00:00'} or CreateTaskTool -> _parse_input -> finalResults: {'name': 'Update code','workspace_Id': 'xxxxxyyyyyzzzzz', 'due_date': '2023-12-14T00:00:00'} ```
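The mismatch is between the alias keys the agent emits (`workspaceId`, `dueDate`) and the field names `parse_obj` produces (`workspace_id`, `due_date`), so the key intersection in `_parse_input` comes up empty. A minimal sketch of remapping aliases back to field names before filtering (the alias map here is illustrative, not LangChain's actual code):

```python
def remap_aliases(tool_input, alias_to_field):
    # Translate alias keys coming from the agent back to the pydantic
    # field names, so a later intersection with the parsed result's keys
    # no longer silently drops values. Unknown keys pass through as-is.
    return {alias_to_field.get(key, key): value for key, value in tool_input.items()}
```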
LangChain Agent bug when parsing tool inputs that use alias field names
https://api.github.com/repos/langchain-ai/langchain/issues/14663/comments
1
2023-12-13T16:51:16Z
2024-03-20T16:06:48Z
https://github.com/langchain-ai/langchain/issues/14663
2040113293
14663
[ "langchain-ai", "langchain" ]
### System Info LangChain version 0.0.348 Python version 3.10 Operating System MacOS Monterey version 12.6 SQLAlchemy version 2.0.23 ### Who can help? @hwchase17 @agola11 ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [X] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ```# Connect to db # Create an SQLAlchemy engine engine = create_engine("mysql+mysqlconnector://user:pass@host/database") # Test the database connection try: # Connect and execute a simple query with engine.connect() as connection: query = text("SELECT 1") result = connection.execute(query) for row in result: print("Connection successful, got row:", row) except Exception as e: print("Error connecting to database:", e) # Create an instance of SQLDatabase db = SQLDatabase(engine) # using Llama2 llm = LlamaCpp( model_path="/path_to/llama-2-7b.Q4_K_M.gguf", verbose=True, n_ctx=2048) # using Default and Suffix prompt template PROMPT = PromptTemplate( input_variables=["input", "table_info", "dialect", "top_k"], template=_DEFAULT_TEMPLATE + PROMPT_SUFFIX, ) db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True, return_sql=True, prompt=PROMPT) langchain.debug = True response = db_chain.run(formatted_prompt) print(response) ### Expected behavior Expected Behavior: Expected to generate SQL queries without errors. Actual Behavior: Received TypeError: wrong type and context window exceedance errors. 
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/llama_cpp/llama.py", line 1325, in _create_completion f"Requested tokens ({len(prompt_tokens)}) exceed context window of {llama_cpp.llama_n_ctx(self._ctx)}" File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/llama_cpp/llama_cpp.py", line 612, in llama_n_ctx return _lib.llama_n_ctx(ctx) ctypes.ArgumentError: argument 1: <class 'TypeError'>: wrong type```
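The second error says the prompt exceeded `n_ctx=2048`. One mitigation is to keep the prompt inside the model's context window before invoking the chain. A rough sketch of the budgeting logic, using plain lists as stand-ins for real tokenizer output:

```python
def truncate_to_budget(tokens, n_ctx, max_new_tokens):
    # Keep the prompt within the model's context window while reserving
    # room for the completion. Keeps the most recent tokens, on the
    # assumption that the tail of the prompt matters most.
    budget = n_ctx - max_new_tokens
    if len(tokens) <= budget:
        return tokens
    return tokens[-budget:]
```

Raising `n_ctx` when loading the model (Llama 2 supports up to 4096) is the other obvious lever.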
SQLDatabaseChain with LlamaCpp Llama2 "Chain Run Errored With Error: ArgumentError: <class 'TypeError'>: wrong type"
https://api.github.com/repos/langchain-ai/langchain/issues/14660/comments
2
2023-12-13T16:38:34Z
2024-03-20T16:06:44Z
https://github.com/langchain-ai/langchain/issues/14660
2040091551
14660
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. I am attempting to call an instance of ConversationalRetrieverChain with a list of dictionary objects that I've pre-processed with a similarity search and cohere reranker. I've created an extension of BaseRetriever in order to pass my list of dictionary objects to the "retriever=" parameter. However when my extended class instantiates, I get an error saying "seai_retriever object has no field "documents". My code is below. What am I doing wrong? ``` from langchain.schema.retriever import BaseRetriever from langchain.schema.document import Document from langchain.callbacks.manager import CallbackManagerForRetrieverRun from typing import List class seai_retriever(BaseRetriever): def __init__(self, documents): self.documents = documents def retrieve(self, query, top_n=10): retrieved_docs = [doc for doc in self.documents if query.lower() in doc['content'].lower()] retrieved_docs = sorted(retrieved_docs, key=lambda x: x['content'].find(query), reverse=True)[:top_n] return retrieved_docs def _get_relevant_documents(self, query: str, *, run_manager: CallbackManagerForRetrieverRun) -> List[Document]: retrieved_docs = [doc for doc in self.documents if query.lower() in doc['content'].lower()] retrieved_docs = sorted(retrieved_docs, key=lambda x: x['content'].find(query), reverse=True)[:top_n] return retrieved_docs ``` ### Suggestion: _No response_
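The error comes from `BaseRetriever` being a pydantic model: attributes must be declared as class-level fields, not assigned to undeclared names in `__init__`. In the real subclass the fix is roughly a `documents: List[Document]` field declaration (with the value passed as a keyword at construction) instead of a custom `__init__`. The dependency-free sketch below mirrors the corrected shape and the retrieval logic:

```python
from typing import List

class SimpleRetriever:
    # In a pydantic model like BaseRetriever, this class-level declaration
    # is what makes 'documents' a legal field; assigning an undeclared
    # attribute raises the "object has no field" error from the report.
    documents: List[dict]

    def __init__(self, documents: List[dict]):
        self.documents = documents

    def get_relevant(self, query: str, top_n: int = 10) -> List[dict]:
        q = query.lower()
        hits = [d for d in self.documents if q in d["content"].lower()]
        # Rank by where the query first appears in each document.
        return sorted(hits, key=lambda d: d["content"].lower().find(q), reverse=True)[:top_n]
```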
Getting "object has no field 'documents'" error with extended BaseRetriever class
https://api.github.com/repos/langchain-ai/langchain/issues/14659/comments
5
2023-12-13T16:29:16Z
2024-06-01T00:07:37Z
https://github.com/langchain-ai/langchain/issues/14659
2040074533
14659
[ "langchain-ai", "langchain" ]
### System Info Mac Studio M1 Max 32GB macOS 14.1.2 Using rye Python 3.11.6 langchain==0.0.350 langchain-community==0.0.2 langchain-core==0.1.0 ### Who can help? _No response_ ### Information - [x] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [X] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction **Error** ``` File "/venv/lib/python3.11/site-packages/langchain_community/document_loaders/xml.py", line 41, in _get_elements from unstructured.partition.xml import partition_xml ImportError: cannot import name 'partition_xml' from partially initialized module 'unstructured.partition.xml' (most likely due to a circular import) (/venv/lib/python3.11/site-packages/unstructured/partition/xml.py) ``` I tried to load XML document like this link(from langchain document) https://python.langchain.com/docs/integrations/document_loaders/xml ``` from langchain.document_loaders import UnstructuredXMLLoader loader = UnstructuredXMLLoader( "aaa.xml", ) docs = loader.load() docs[0] ``` ### Expected behavior no circular import
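A frequent cause of this message is a local file named `xml.py` (or a local `unstructured.py`) in the project directory shadowing the module that `unstructured.partition.xml` imports internally. Checking where Python actually resolves `xml` from can confirm or rule that out:

```python
import importlib.util

# If this path points into your own project rather than the Python
# installation, a local file is shadowing the stdlib 'xml' package and
# renaming it should clear the circular-import error.
spec = importlib.util.find_spec("xml")
print(spec.origin)
```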
UnstructuredXMLLoader import error (circular import)
https://api.github.com/repos/langchain-ai/langchain/issues/14658/comments
1
2023-12-13T16:02:17Z
2024-03-20T16:06:38Z
https://github.com/langchain-ai/langchain/issues/14658
2040021283
14658
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. Hi, I'm trying to use StreamingStdOutCallbackHandler for a conversation chain, but it prints out the memory after the response. Is there a way to avoid printing the memory without using an agent? ### Suggestion: _No response_
Issue: streaming issues
https://api.github.com/repos/langchain-ai/langchain/issues/14656/comments
2
2023-12-13T15:47:29Z
2024-01-10T03:38:15Z
https://github.com/langchain-ai/langchain/issues/14656
2039990933
14656
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. I'm currently working on a project where I need to fetch all the sub-URLs from a website using Langchain. How can I achieve this? Below is my code: ` loader = UnstructuredURLLoader(urls=urls) urlDocument = loader.load() text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=50) texts = text_splitter.split_documents(documents=urlDocument)` ### Suggestion: _No response_
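Discovering the sub-URLs needs no LangChain machinery: the links can be collected from a fetched page with the standard library and the resulting list handed to `UnstructuredURLLoader(urls=...)`. A minimal sketch (the HTML and base URL below are placeholders):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkCollector(HTMLParser):
    # Collect absolute sub-URLs from the anchor tags of a fetched page.
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base_url, value))
```

LangChain's `RecursiveUrlLoader` covers a similar recursive-crawl use case out of the box and may be worth evaluating first.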
Issue: I'm currently working on a project where I need to fetch all the sub-URLs from a website using Langchain.
https://api.github.com/repos/langchain-ai/langchain/issues/14651/comments
3
2023-12-13T13:27:30Z
2024-03-27T16:08:12Z
https://github.com/langchain-ai/langchain/issues/14651
2039714207
14651
[ "langchain-ai", "langchain" ]
### System Info Langchain Version = 0.0.311 Python Version = 3.9 Tried it on my local system as well as on the company's hosted Jupyter Hub ### Who can help? @eyurtsev @agola11 ### Information - [X] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [X] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ``` from langchain.text_splitter import TokenTextSplitter token_splitter_model_name = "gpt-3.5-turbo" SPLIT_CHUNK_SIZE = 1024 CHUNK_OVERLAP = 256 text_splitter = TokenTextSplitter.from_tiktoken_encoder(model_name=token_splitter_model_name, chunk_size=SPLIT_CHUNK_SIZE , chunk_overlap = CHUNK_OVERLAP) blog_content= ' your text here' blog_splits=text_splitter.split_text(blog_content) ``` ### Expected behavior The way this token text splitter works isn't how it is intended to work. For example: with chunk_size = 1024, overlap = 256, and an input text of 991 tokens, it made two chunks: first = token[0 : 991], second = token[768 : 991]. Logically it should work this way: if the input text has more than 1024 tokens, there should be multiple chunks: first = token[0 : 1024], second = token[768 : ], and so on. If the input text has <= 1024 tokens, there should be one chunk only: first = token[0 : ]. More details in this previously raised issue: https://github.com/langchain-ai/langchain/issues/5897
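The expected semantics described in the report can be written down directly. This sketch describes the behavior the reporter wants (one chunk when the input fits in `chunk_size`, full-size overlapping chunks otherwise); it is not the library's current implementation:

```python
def split_on_tokens(tokens, chunk_size, chunk_overlap):
    # Emit a single chunk when the input fits; otherwise advance by
    # chunk_size - chunk_overlap so consecutive chunks share the overlap.
    if len(tokens) <= chunk_size:
        return [tokens]
    stride = chunk_size - chunk_overlap
    chunks, start = [], 0
    while start < len(tokens):
        chunks.append(tokens[start:start + chunk_size])
        if start + chunk_size >= len(tokens):
            break
        start += stride
    return chunks
```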
Bug in Text splitting while using langchain.text_splitter.split_text_on_tokens
https://api.github.com/repos/langchain-ai/langchain/issues/14649/comments
3
2023-12-13T10:31:27Z
2024-03-25T16:07:21Z
https://github.com/langchain-ai/langchain/issues/14649
2039411662
14649
[ "langchain-ai", "langchain" ]
### System Info from langchain.document_transformers import DoctranTextTranslator from langchain.schema import Document documents = [Document(page_content=sample_text)] qa_translator = DoctranTextTranslator(language="spanish") translated_document = await qa_translator.atransform_documents(documents) TypeError Traceback (most recent call last) [<ipython-input-18-c526f9c55393>](https://localhost:8080/#) in <cell line: 8>() 6 openai_api_model="gpt-3.5-turbo", language="chinese") 7 ----> 8 translated_document = await qa_translator.atransform_documents(documents) 9 10 [/usr/local/lib/python3.10/dist-packages/langchain_community/document_transformers/doctran_text_translate.py](https://localhost:8080/#) in atransform_documents(self, documents, **kwargs) 61 ] 62 for i, doc in enumerate(doctran_docs): ---> 63 doctran_docs[i] = await doc.translate(language=self.language).execute() 64 return [ 65 Document(page_content=doc.transformed_content, metadata=doc.metadata) TypeError: object Document can't be used in 'await' expression ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [X] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction test ### Expected behavior test
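The traceback says a plain `Document`, not an awaitable, reached `await` inside `atransform_documents`. A small helper illustrates the distinction the transformer code would need to respect (this is a generic pattern, not doctran's API):

```python
import asyncio
import inspect

async def maybe_await(value):
    # 'await' is only legal on awaitables; awaiting a plain object raises
    # exactly the TypeError shown in the report, so check first.
    return await value if inspect.isawaitable(value) else value
```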
TypeError: object Document can't be used in 'await' expression
https://api.github.com/repos/langchain-ai/langchain/issues/14645/comments
1
2023-12-13T07:52:41Z
2024-03-20T16:06:28Z
https://github.com/langchain-ai/langchain/issues/14645
2039150807
14645
[ "langchain-ai", "langchain" ]
I have two scripts. The 1st script loads PDF documents, uses ParentDocumentRetriever.add_documents(), and saves the vectorstore to local disk. The 2nd script loads the vectorstore and calls ParentDocumentRetriever.get_relevant_documents(). The problem is that ParentDocumentRetriever.get_relevant_documents() returns an empty result. Any ideas? What is the InMemoryStore for? Here are the two scripts: ### the 1st code: LoadDocumentsAndSaveVectorstore.py ``` loader = DirectoryLoader('data/', glob='**/*.pdf', loader_cls=PyPDFLoader) docs = loader.load() store = InMemoryStore() parent_splitter = RecursiveCharacterTextSplitter(chunk_size=1000) child_splitter = RecursiveCharacterTextSplitter(chunk_size=128, chunk_overlap=64) texts = [""] vectorstore = FAISS.from_texts(texts, embedding_function) retriever = ParentDocumentRetriever( vectorstore=vectorstore, docstore=store, child_splitter=child_splitter, parent_splitter=parent_splitter, ) retriever.add_documents(docs, ids=None) vectorstore.save_local("ppstore") print("save data to ppstore") ``` The 1st script works fine. The 2nd script loads the vectorstore and queries it. The strange thing is, **retriever1.get_relevant_documents(query)** returns an empty result. 
### the 2nd code: load vectorstore and use retriever to get relevant_documents ``` print("load ppstore") store = InMemoryStore() parent_splitter = RecursiveCharacterTextSplitter(chunk_size=1000) child_splitter = RecursiveCharacterTextSplitter(chunk_size=128, chunk_overlap=64) db = FAISS.load_local("ppstore", embedding_function) retriever1 = ParentDocumentRetriever( vectorstore=db, docstore=store, child_splitter=child_splitter, parent_splitter=parent_splitter, ) query = "Please describe Sardina dataset" print("query:", query) sub_docs = db.similarity_search(query) print("=== check small chunk ===") print(sub_docs[0].page_content) print(len(sub_docs[0].page_content)) ## this respond OK, the len is a little bit smaller than 128 retrieved_docs = retriever1.get_relevant_documents(query) ## I got empty result print("=== check larger chunk ===") print(retrieved_docs[0].page_content) print(len(retrieved_docs[0].page_content)) ```
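The InMemoryStore is the docstore mapping parent-document ids to the full parent documents; `FAISS.save_local` persists only the child-chunk index, so a fresh, empty InMemoryStore in the second process has no parents to return. One workaround sketch is to persist the store's contents separately alongside the index (here via pickle on a plain dict, which is an illustration rather than a LangChain API):

```python
import pickle

def save_docstore(store_dict, path):
    # Persist the parent-document mapping that InMemoryStore keeps only
    # in RAM; without this, only the child chunks survive a restart.
    with open(path, "wb") as f:
        pickle.dump(store_dict, f)

def load_docstore(path):
    with open(path, "rb") as f:
        return pickle.load(f)
```

On load, the restored mapping would be fed back into the docstore (e.g. via its `mset` interface) before constructing the retriever.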
ParentDocumentRetriever.get_relevant_documents() returns an empty result
https://api.github.com/repos/langchain-ai/langchain/issues/14643/comments
5
2023-12-13T07:31:59Z
2024-01-11T20:34:16Z
https://github.com/langchain-ai/langchain/issues/14643
2039122156
14643
[ "langchain-ai", "langchain" ]
### System Info # Dependency Versions langchain==0.0.349 langchain-community==0.0.1 langchain-core==0.0.13 openai==1.3.8 # Python Version Python 3.11.4 # Redis Stack Version redis-stack-server 6.2.0 ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction from langchain.globals import set_llm_cache from langchain.cache import RedisSemanticCache from langchain.embeddings import OpenAIEmbeddings from langchain_community.chat_models import ChatOpenAI from redis import Redis import time llm = ChatOpenAI(openai_api_key='<OPENAI_API_KEY>', model_name='gpt-3.5-turbo') cache = RedisSemanticCache(redis_url="redis://localhost:6379", embedding=OpenAIEmbeddings(openai_api_key='<OPENAI_API_KEY>'), score_threshold=0.95) set_llm_cache(cache) start = time.time() response = llm.predict("""Tell me about USA in only two sentences""") print(time.time()-start) print(response) start = time.time() response = llm.predict("""Tell me about INDIA in only two sentences""") print(time.time()-start) print(response) start = time.time() response = llm.predict("""What is LLMs in the context of GEN AI ?""") print(time.time()-start) print(response) ### Expected behavior As the score_threshold is set to 0.98, I expect all the three prompts to give three different responses. But we are getting one response for all the three prompts. Output from running the script : 4.252941131591797 The United States of America is a federal republic consisting of 50 states, a federal district, five major self-governing territories, and various possessions. 
It is a diverse and influential country known for its cultural and economic power on the global stage. 0.3903520107269287 The United States of America is a federal republic consisting of 50 states, a federal district, five major self-governing territories, and various possessions. It is a diverse and influential country known for its cultural and economic power on the global stage. 0.6625611782073975 The United States of America is a federal republic consisting of 50 states, a federal district, five major self-governing territories, and various possessions. It is a diverse and influential country known for its cultural and economic power on the global stage.
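One thing worth checking is whether the backend interprets `score_threshold` as a similarity floor or a distance ceiling; with cosine distance, a high value like 0.95 admits almost every query, which would produce exactly the observed "one answer for everything" behavior. A tiny illustration of the similarity-floor semantics (plain Python, not the cache's actual code):

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_cache_hit(query_vec, cached_vec, score_threshold):
    # Similarity-floor semantics: only vectors at least this similar hit
    # the cache. If the backend treats the same number as a *distance*
    # ceiling instead, nearly everything passes at 0.95.
    return cosine_similarity(query_vec, cached_vec) >= score_threshold
```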
(RedisSemanticCache + ChatOpenAI + OpenAIEmbeddings) - Not working as expected - Wanted to understand if I am doing something wrong here.
https://api.github.com/repos/langchain-ai/langchain/issues/14640/comments
2
2023-12-13T06:34:44Z
2024-04-25T16:12:16Z
https://github.com/langchain-ai/langchain/issues/14640
2039051024
14640
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. Issue 1: I am working on summarization using LangChain's stuff and map-reduce chains. I have integrated it with AWS Bedrock's Anthropic LLM, which has a token limit of 100000. It works fine, but when a PDF with 40000 tokens is passed, Bedrock throws an error: i) with VPN connected: An error occurred: Error raised by bedrock service: Could not connect to the endpoint URL: "https://bedrock-runtime.us-east-1.amazonaws.com/model/anthropic.claude-v2/invoke". ii) without VPN connected: An error occurred: Error raised by bedrock service: An error occurred (ExpiredTokenException) when calling the InvokeModel operation: The security token included in the request is expired Any reason why this is happening? Thanks in advance! Issue 2: map-reduce takes a lot of time to produce the summary for 40000+ token documents when the Anthropic threshold is reduced to 40000. Sometimes it takes a long time and then errors out. Any help is appreciated. Thanks in advance! ### Suggestion: _No response_
Issue: bedrock is throwing an error for the langchain stuff method using the anthropic model for the summarization.
https://api.github.com/repos/langchain-ai/langchain/issues/14639/comments
1
2023-12-13T05:48:17Z
2023-12-15T07:47:12Z
https://github.com/langchain-ai/langchain/issues/14639
2,039,002,274
14,639
[ "langchain-ai", "langchain" ]
### System Info Hello, after forking and cloning the repo on my machine, I tried to open it using docker and specifically in VS Code with the option to "Reopen in Container". While building, the final command of [dev.Dockerfile](https://github.com/langchain-ai/langchain/blob/ca7da8f7ef9bc7a613ff07279c4603cad5fd175a/libs/langchain/dev.Dockerfile#L44) resulted in the following error: ```logs #0 1.241 Directory ../core does not exist ``` After investigating, I found out that the issue lies in [pyproject.toml](https://github.com/langchain-ai/langchain/blob/ca7da8f7ef9bc7a613ff07279c4603cad5fd175a/libs/langchain/pyproject.toml) which is using relative paths like `../core` and `../community` in some occasions. Additionally, even after replacing `../` with `libs/` (which I am not sure if it breaks something else), the actual `core` and `community` directories are never copied over in [dev.Dockerfile](https://github.com/langchain-ai/langchain/blob/ca7da8f7ef9bc7a613ff07279c4603cad5fd175a/libs/langchain/dev.Dockerfile). These should also be copied in the created docker container, similarly to [line 41](https://github.com/langchain-ai/langchain/blob/ca7da8f7ef9bc7a613ff07279c4603cad5fd175a/libs/langchain/dev.Dockerfile#L41). After making these two changes, the container was successfully built. I'll check out whether the change of paths in pyproject.toml is affecting any other files, and if not I will create a PR for this. ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Steps to reproduce: 1. Fork and clone the repo on your machine 2. 
Open it with VS Code (with Dev Containers extension installed) 3. Run the VS Code command: "Dev Containers: Rebuild Container" ### Expected behavior Build the development docker container without errors
Dockerfile issues when trying to build the repo using .devcontainer
https://api.github.com/repos/langchain-ai/langchain/issues/14631/comments
3
2023-12-12T23:17:45Z
2023-12-28T16:25:05Z
https://github.com/langchain-ai/langchain/issues/14631
2,038,690,605
14,631
[ "langchain-ai", "langchain" ]
### Feature request Azure OpenAI now previews the DALL-E 3 model. Today, DALLEAPIWrapper only supports the OpenAI API. ### Motivation My customers are using Azure OpenAI and would like to use DALL-E 3 in their solutions. ### Your contribution A PR may not be possible, but I'd like to help any way I can.
DALLEAPIWrapper to support Azure OpenAI
https://api.github.com/repos/langchain-ai/langchain/issues/14625/comments
2
2023-12-12T22:22:43Z
2024-03-20T16:06:23Z
https://github.com/langchain-ai/langchain/issues/14625
2,038,636,476
14,625
[ "langchain-ai", "langchain" ]
### Feature request Add first-class support for Vertex AI Endpoints in LangChain. This would involve providing an interface similar to the existing SageMakerEndpoint class, allowing users to easily connect to and interact with Vertex AI Endpoints. ### Motivation Although VertexAIModelGarden already exists, there may be instances where users require custom models with unique input and output formats. To address this need, a more versatile class could be developed, upon which VertexAIModelGarden could be built. This would allow seamless integration of custom models without compromising the functionality of the existing Model Garden class. ### Your contribution The implementation could take inspiration from SageMakerEndpoint, if pertinent.
Add support for Vertex AI Endpoint
https://api.github.com/repos/langchain-ai/langchain/issues/14622/comments
1
2023-12-12T20:35:10Z
2024-03-19T16:06:32Z
https://github.com/langchain-ai/langchain/issues/14622
2,038,512,133
14,622
[ "langchain-ai", "langchain" ]
### Feature request Hi, it seems the only DocStores available are InMemory and Google. I'd like to submit a feature request for an S3DocStore. ### Motivation Many people have raised issues related to the limited DocStore options.
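A minimal sketch of what such a store could look like. Everything here is illustrative: the class name, the key scheme, and the injected client are assumptions rather than LangChain API — a real implementation would subclass `langchain.docstore.base.Docstore` and pass a boto3 S3 client.

```python
import json


class S3DocStore:
    """Sketch of a document store backed by an S3-like client.

    The client is injected so it can be a real boto3 S3 client in
    production or a small fake in tests; only ``put_object`` and
    ``get_object`` are used.
    """

    def __init__(self, client, bucket, prefix="docs/"):
        self.client = client
        self.bucket = bucket
        self.prefix = prefix

    def _key(self, doc_id):
        return f"{self.prefix}{doc_id}.json"

    def add(self, texts):
        # texts maps doc_id -> page_content, mirroring InMemoryDocstore.add
        for doc_id, content in texts.items():
            body = json.dumps({"page_content": content}).encode("utf-8")
            self.client.put_object(Bucket=self.bucket, Key=self._key(doc_id), Body=body)

    def search(self, doc_id):
        try:
            obj = self.client.get_object(Bucket=self.bucket, Key=self._key(doc_id))
        except Exception:  # boto3 raises ClientError; fakes may raise KeyError
            return f"ID {doc_id} not found."
        return json.loads(obj["Body"].read())["page_content"]
```

Injecting the client keeps the store testable offline and mirrors how boto3 sessions are usually threaded through application code.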
S3DocStore
https://api.github.com/repos/langchain-ai/langchain/issues/14616/comments
3
2023-12-12T18:55:18Z
2024-04-10T16:11:48Z
https://github.com/langchain-ai/langchain/issues/14616
2,038,377,269
14,616
[ "langchain-ai", "langchain" ]
### System Info langchain Version: 0.0.348 Python: 3.9.16 The docs suggest using a proxy entry as follows, but it does not work: from slacktoolkit import SlackToolkit # Proxy settings proxies = { 'http': 'http://proxy.example.com:8080', 'https': 'https://proxy.example.com:8080' } # Initialize SlackToolkit with proxy slack_toolkit = SlackToolkit(proxies=proxies) ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [x] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction # Proxy settings proxies = { 'http': 'your_proxy', 'https': 'your_proxy' } # Initialize SlackToolkit with proxy toolkit = SlackToolkit(proxies=proxies) tools = toolkit.get_tools() llm = OpenAI(temperature=0) agent = initialize_agent( tools=toolkit.get_tools(), llm=llm, verbose=True, agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, ) agent.run("Send a greeting to my coworkers in the #slack1 channel. Your name is chatbot. Set the sender name as chatbot.") ### Expected behavior A Slack message is sent in the #slack1 channel
SlackToolkit() does not support proxy configuration
https://api.github.com/repos/langchain-ai/langchain/issues/14608/comments
2
2023-12-12T17:16:53Z
2023-12-15T04:00:14Z
https://github.com/langchain-ai/langchain/issues/14608
2,038,224,081
14,608
[ "langchain-ai", "langchain" ]
### System Info Not relevant ### Who can help? @hwchase17 @agola11 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [X] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Use the QA With Sources chain with the default prompt. If the chain type is `stuff` or `map_reduce`, the default prompts used are [this](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/qa_with_sources/stuff_prompt.py) and [this](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/qa_with_sources/map_reduce_prompt.py) respectively. These prompt files are massive and easily add 1,000+ tokens to every request. With models like PaLM 2, there are barely any tokens left for other questions. ### Expected behavior These prompts should be much shorter, even as sample input.
QaWithSources default prompt is massive
https://api.github.com/repos/langchain-ai/langchain/issues/14596/comments
4
2023-12-12T13:40:37Z
2024-03-20T16:06:19Z
https://github.com/langchain-ai/langchain/issues/14596
2,037,780,695
14,596
[ "langchain-ai", "langchain" ]
### Feature request As of the current implementation of the QA with sources chain, if `return_source_documents` is set to `True`, [all the sources](https://github.com/langchain-ai/langchain/blob/76905aa043e3e604b5b34faf5e91d0aedb5ed6dd/libs/langchain/langchain/chains/qa_with_sources/base.py#L165) that are retrieved from the vector DB are returned. The `sources` field returns a list of file names that were used by the LLM. I propose we could do something like this: 1. Assign each `Document` a unique UUID as the `source` before passing it to the LLM. 2. Once the LLM returns the relevant sources, we can backmap these to the actual `Document`s that were used by the LLM, as opposed to getting just the filename. ### Motivation This information seems vastly more useful than the entire response from the vector DB. For our current use cases, we've ended up overriding these functions to add this functionality. ### Your contribution I can raise a PR adding this if this is something that you'd find useful as well. An additional issue to report: the `_split_sources` [function](https://github.com/langchain-ai/langchain/blob/76905aa043e3e604b5b34faf5e91d0aedb5ed6dd/libs/langchain/langchain/chains/qa_with_sources/base.py#L124) splits at the first instance of `SOURCE`, which seems a bit problematic. I can fix this to split at the last occurrence.
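The proposed backmapping needs no LangChain internals to demonstrate — plain dicts stand in for `Document` objects below, and the parsing of the LLM's comma-separated sources string is deliberately simplified:

```python
import uuid


def tag_documents(docs):
    """Assign each retrieved document a unique id to use as its `source`."""
    return {str(uuid.uuid4()): doc for doc in docs}


def backmap_sources(llm_sources, tagged):
    """Map the ids the LLM actually cited back to the original documents."""
    cited = [s.strip() for s in llm_sources.split(",") if s.strip()]
    return [tagged[c] for c in cited if c in tagged]
```

The chain would swap each document's `source` metadata for its UUID before the LLM call, then replace the answer's `sources` field with the result of `backmap_sources`, so callers get the cited `Document`s instead of bare filenames.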
Sources returned in QaWithSources can be optimised
https://api.github.com/repos/langchain-ai/langchain/issues/14595/comments
1
2023-12-12T13:30:30Z
2024-03-19T16:06:24Z
https://github.com/langchain-ai/langchain/issues/14595
2,037,760,278
14,595
[ "langchain-ai", "langchain" ]
### Feature request How can I use a custom tracing tool like OpenTelemetry or Tempo? ### Motivation I don't want to use LangSmith. ### Your contribution N/A
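One route is a custom callback handler that opens a span when a run starts and closes it when the run ends. The sketch below records timings with the standard library as a stand-in for an OpenTelemetry or Tempo exporter; in a real handler the class would subclass LangChain's `BaseCallbackHandler` and call the opentelemetry SDK, both of which are omitted here as assumptions outside the source.

```python
import time


class TimingTracer:
    """Minimal tracer: one (name, duration) pair per finished run."""

    def __init__(self):
        self._open = {}   # run_id -> (name, start time)
        self.spans = []   # finished (name, duration in seconds) pairs

    def on_chain_start(self, serialized, inputs, *, run_id, **kwargs):
        # mirror the callback signature LangChain invokes on handlers
        name = (serialized or {}).get("name", "chain")
        self._open[run_id] = (name, time.monotonic())

    def on_chain_end(self, outputs, *, run_id, **kwargs):
        name, start = self._open.pop(run_id)
        self.spans.append((name, time.monotonic() - start))
```

Passing such a handler via `callbacks=[...]` on a chain call would populate `spans` with one entry per nested run, which an exporter can then ship to any tracing backend.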
How to use custom tracing tool like opentelemetry or tempo
https://api.github.com/repos/langchain-ai/langchain/issues/14594/comments
1
2023-12-12T12:34:07Z
2024-03-19T16:06:17Z
https://github.com/langchain-ai/langchain/issues/14594
2,037,660,794
14,594
[ "langchain-ai", "langchain" ]
### System Info OS: Apple M1 Max ______________________ Name: langchain Version: 0.0.349 Summary: Building applications with LLMs through composability Home-page: https://github.com/langchain-ai/langchain Author: Author-email: License: MIT Requires: aiohttp, async-timeout, dataclasses-json, jsonpatch, langchain-community, langchain-core, langsmith, numpy, pydantic, PyYAML, requests, SQLAlchemy, tenacity Required-by: ### Who can help? @hwchase17 @agola11 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [X] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Steps to reproduce: I have followed the instructions provided here : https://python.langchain.com/docs/integrations/llms/llamacpp. Though not able inference it correctly. Model path : https://huggingface.co/TheBloke/deepseek-coder-6.7B-instruct-GGUF ``` from langchain.callbacks.manager import CallbackManager from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler from langchain.chains import LLMChain, QAGenerationChain from langchain.llms import LlamaCpp from langchain.prompts import PromptTemplate template = """Question: {question} Answer: Let's work this out in a step by step way to be sure we have the right answer.""" prompt = PromptTemplate(template=template, input_variables=["question"]) callback_manager = CallbackManager([StreamingStdOutCallbackHandler()]) n_gpu_layers = 1 # Change this value based on your model and your GPU VRAM pool. n_batch = 512 # Should be between 1 and n_ctx, consider the amount of VRAM in your GPU. 
llm = LlamaCpp( model_path="../models/deepcoder-gguf/deepseek-coder-6.7b-instruct.Q2_K.gguf", n_gpu_layers=n_gpu_layers, max_tokens=2000, top_p=1, n_batch=n_batch, callback_manager=callback_manager, f16_kv=True, verbose=True, # Verbose is required to pass to the callback manager ) llm( "Question: Write python program to add two numbers ? Answer:" ) ``` Result: ` < """"""""""""""""""""""/"` Requesting you to look into it. Please let me know in case you need more information. Thank you. I have tried the same model file with **[llama-cpp-python](https://github.com/abetlen/llama-cpp-python)** package and it works as expected. Please find below the code that I have tried: ``` import json import time from llama_cpp import Llama n_gpu_layers = 1 # Change this value based on your model and your GPU VRAM pool. n_batch = 512 llm = Llama(model_path="../models/deepcoder-gguf/deepseek-coder-6.7b-instruct.Q5_K_M.gguf" , chat_format="llama-2", n_gpu_layers=n_gpu_layers,n_batch=n_batch) start_time = time.time() pp = llm.create_chat_completion( messages = [ {"role": "system", "content": "You are an python language assistant."}, { "role": "user", "content": "Write quick sort ." 
} ]) end_time = time.time() print("execution time:", {end_time - start_time}) print(pp["choices"][0]["message"]["content"]) ``` Output : ``` ## Quick Sort Algorithm in Python Here is a simple implementation of the quicksort algorithm in Python: ```python def partition(arr, low, high): i = (low-1) # index of smaller element pivot = arr[high] # pivot for j in range(low , high): if arr[j] <= pivot: i += 1 arr[i],arr[j] = arr[j],arr[i] arr[i+1],arr[high] = arr[high],arr[i+1] return (i+1) def quickSort(arr, low, high): if low < high: pi = partition(arr,low,high) quickSort(arr, low, pi-1) quickSort(arr, pi+1, high) # Test the code n = int(input("Enter number of elements in array: ")) print("Enter elements: ") arr = [int(input()) for _ in range(n)] quickSort(arr,0,n-1) print ("Sorted array is:") for i in range(n): print("%d" %arr[i]), This code first defines a helper function `partition()` that takes an array and two indices. It then rearranges the elements of the array so that all numbers less than or equal to the pivot are on its left, while all numbers greater than the pivot are on its right. The `quickSort()` function is then defined which recursively applies this partitioning process until the entire array is sorted. The user can input their own list of integers and the program will output a sorted version of that list. [/code] Conclusion In conclusion, Python provides several built-in functions for sorting lists such as `sort()` or `sorted()` but it's also possible to implement quick sort algorithm from scratch using custom function. This can be useful in situations where you need more control over the sorting process or when dealing with complex data structures. ``` ### Expected behavior It should inference the model just like the native [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) package.
Not able to inference deepseek-coder-6.7b-instruct.Q5_K_M.gguf
https://api.github.com/repos/langchain-ai/langchain/issues/14593/comments
6
2023-12-12T11:20:20Z
2024-05-25T13:36:24Z
https://github.com/langchain-ai/langchain/issues/14593
2,037,539,816
14,593
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. i have created a chatbot to chat with the sql database using openai and langchain, but how to store or output data into excel using langchain. I got some idea from chatgpt which i have integrated with my code, but there is an error while importing the modules below is my code import pandas as pd import sqlalchemy as sal import os, sys, openai import constants from langchain.sql_database import SQLDatabase from langchain.llms.openai import OpenAI from langchain_experimental.sql import SQLDatabaseChain from sqlalchemy import create_engine # import ChatOpenAI from langchain.chat_models import ChatOpenAI from typing import List, Optional from langchain.agents.agent_toolkits import SQLDatabaseToolkit from langchain.callbacks.manager import CallbackManagerForToolRun from langchain.chat_models import ChatOpenAI from langchain_experimental.plan_and_execute import ( PlanAndExecute, load_agent_executor, load_chat_planner, ) from langchain.sql_database import SQLDatabase from langchain.text_splitter import TokenTextSplitter from langchain.tools import BaseTool from langchain.tools.sql_database.tool import QuerySQLDataBaseTool from secret_key import openapi_key from langchain import PromptTemplate from langchain.models import ChatGPTClient from langchain.utils import save_conversation os.environ['OPENAI_API_KEY'] = openapi_key def chat(question): from urllib.parse import quote_plus server_name = constants.server_name database_name = constants.database_name username = constants.username password = constants.password encoded_password = quote_plus(password) connection_uri = f"mssql+pyodbc://{username}:{encoded_password}@{server_name}/{database_name}?driver=ODBC+Driver+17+for+SQL+Server" # custom_suffix = """"" # If the SQLResult is empty, the Answer should be "No results found". 
DO NOT hallucinate an answer if there is no result.""" engine = create_engine(connection_uri) model_name="gpt-3.5-turbo-16k" db = SQLDatabase(engine, view_support=True, include_tables=['EGV_emp_departments_ChatGPT']) # db = SQLDatabase(engine, view_support=True, include_tables=['egv_emp_acadamics_ChatGPT']) llm = ChatOpenAI(temperature=0, verbose=False, model=model_name) db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True) from langchain.prompts import PromptTemplate PROMPT = """ Given an input question, first create a syntactically correct mssql query to run, then look at the results of the query and return the answer. The question: {db_chain.run} """ prompt_template = """" Use the following pieces of context to answer the question at the end. If you don't know the answer, please think rationally and answer from your own knowledge base. Don't consider the table which are not mentioned, if no result is matching with the keyword Please return the answer as invalid question {context} Question: {questions} """ PROMPT = PromptTemplate( template=prompt_template, input_variables=["context", "questions"] ) def split_text(text, chunk_size, chunk_overlap=0): text_splitter = TokenTextSplitter( chunk_size=chunk_size, chunk_overlap=chunk_overlap ) yield from text_splitter.split_text(text) class QuerySQLDatabaseTool2(QuerySQLDataBaseTool): def _run( self, query: str, run_manager: Optional[CallbackManagerForToolRun] = None, ) -> str: result = self.db.run_no_throw(query) return next(split_text(result, chunk_size=14_000)) class SQLDatabaseToolkit2(SQLDatabaseToolkit): def get_tools(self) -> List[BaseTool]: tools = super().get_tools() original_query_tool_description = tools[0].description new_query_tool = QuerySQLDatabaseTool2( db=self.db, description=original_query_tool_description ) tools[0] = new_query_tool return tools return db_chain.run(question) # answer=chat("give the names of employees who have completed PG") answer= chat("give the list of employees joined in 
january and february of 2023 with Employee ID, Name, Department,Date of join") print(answer) conversation_data= chatgpt.chat(prompt="convert into .csv and .xlsx only if the multiple values are asked in the question, if one a single thing is asked, just give the answer in chatbot no need to save the answer") # conversation_data = chat("convert into .csv and .xlsx only if the multiple values are asked in the question, if one a single thing is asked, just give the answer in chatbot no need to save the answer") save_conversation(conversation_data, "chat_data.csv") df = pd.read_csv("chat_data.csv") path = r"C:\Users\rndbcpsoft\OneDrive\Desktop\test\chat_data.xlsx" df.to_excel(path, index=False) print(f"Conversation data has been saved to '{path}' in Excel format.") ### Suggestion: _No response_
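Two of the imports in the code above — `langchain.models.ChatGPTClient` and `langchain.utils.save_conversation` — do not appear to be real LangChain modules, which would explain the import error. The export itself does not need LangChain at all: once `db_chain.run()` has produced rows, the standard library can write the CSV. The function names and row format below are illustrative assumptions:

```python
import csv


def save_rows_to_csv(rows, path, headers=None):
    """Write a list of row tuples (e.g. parsed from an SQL chain answer) to CSV."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        if headers:
            writer.writerow(headers)
        writer.writerows(rows)


def should_export(rows):
    # Export only when multiple values came back; single answers stay in the chat.
    return len(rows) > 1
```

The existing `pd.read_csv(...).to_excel(path, index=False)` step then covers the .xlsx output.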
Issue: <How to store/export the output of a chatbot to excel ' prefix>
https://api.github.com/repos/langchain-ai/langchain/issues/14592/comments
5
2023-12-12T11:08:08Z
2024-03-21T16:05:52Z
https://github.com/langchain-ai/langchain/issues/14592
2,037,519,560
14,592
[ "langchain-ai", "langchain" ]
### System Info Python 3.9.12, LangChain 0.0.346 ### Who can help? @agola11 @3coins ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction When you use caching with LangChain, it does not distinguish different LLM models. For example, the response for LLama2 was used for a prompt for Claude 2. ``` def ask_name(model_id): question = 'what is your name?' bedrock = Bedrock(model_id=model_id, model_kwargs={'temperature': 0.1}) print('me: ' + question) t0 = datetime.datetime.now() print(f'{bedrock.model_id}: ' + bedrock(question).strip()) print('({:.2f} sec)'.format((datetime.datetime.now() - t0).total_seconds())) print() model_ids = ['meta.llama2-70b-chat-v1','anthropic.claude-v2',] for model_id in model_ids: ask_name(model_id) ask_name(model_id) ``` ==> ``` me: what is your name? meta.llama2-70b-chat-v1: Answer: My name is LLaMA, I'm a large language model trained by a team of researcher at Meta AI. (2.24 sec) me: what is your name? meta.llama2-70b-chat-v1: Answer: My name is LLaMA, I'm a large language model trained by a team of researcher at Meta AI. (0.00 sec) me: what is your name? anthropic.claude-v2: Answer: My name is LLaMA, I'm a large language model trained by a team of researcher at Meta AI. 
(0.00 sec) ``` This is because of https://github.com/langchain-ai/langchain/blob/db6bf8b022c17353b46f97ab3b9f44ff9e88a488/libs/langchain/langchain/llms/bedrock.py#L235 ``` @property def _identifying_params(self) -> Mapping[str, Any]: """Get the identifying parameters.""" _model_kwargs = self.model_kwargs or {} return { **{"model_kwargs": _model_kwargs}, } ``` My current workaround is subclassing the `Bedrock` class: ``` class MyBedrock(Bedrock): @property def _identifying_params(self) -> Mapping[str, Any]: """Get the identifying parameters.""" return { 'model_id': self.model_id, **BedrockBase._identifying_params.__get__(self) } ``` This seems to work. ### Expected behavior LLama 2 should reply with "My name is LLaMa..." while Claude 2 should reply with "My name is Claude."
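The collision is easy to show with a toy cache key. If the key is derived only from the prompt and `model_kwargs`, two different model ids produce the same key; including `model_id` — as the `MyBedrock` workaround does through `_identifying_params` — separates the entries. A self-contained sketch, where the key function is illustrative rather than LangChain's actual hashing:

```python
def cache_key(prompt: str, params: dict) -> str:
    """LangChain builds LLM cache keys from the prompt plus _identifying_params."""
    return prompt + "|" + repr(sorted(params.items()))


# Buggy behavior: both Bedrock models expose only model_kwargs, so keys collide.
buggy_llama = cache_key("what is your name?", {"model_kwargs": {"temperature": 0.1}})
buggy_claude = cache_key("what is your name?", {"model_kwargs": {"temperature": 0.1}})
assert buggy_llama == buggy_claude  # collision: Claude gets LLaMA's cached answer

# Fixed behavior: model_id in the identifying params keeps the entries apart.
fixed_llama = cache_key("what is your name?",
                        {"model_id": "meta.llama2-70b-chat-v1",
                         "model_kwargs": {"temperature": 0.1}})
fixed_claude = cache_key("what is your name?",
                         {"model_id": "anthropic.claude-v2",
                          "model_kwargs": {"temperature": 0.1}})
assert fixed_llama != fixed_claude  # each model now has its own cache entry
```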
Caching with Bedrock does not distinguish models or params
https://api.github.com/repos/langchain-ai/langchain/issues/14590/comments
1
2023-12-12T10:17:59Z
2024-03-19T16:06:07Z
https://github.com/langchain-ai/langchain/issues/14590
2,037,430,016
14,590
[ "langchain-ai", "langchain" ]
### System Info python: 3.11 langchain: 0.0.347-0.0.349 langchain_core: 0.0.12, 0.0.13 ### Who can help? @hwchase17 ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ```python import langchain_core.load class Custom(langchain_core.load.Serializable): @classmethod def is_lc_serializable(cls) -> bool: return True out = langchain_core.load.dumps(Custom()) langchain_core.load.loads( out, valid_namespaces=["langchain", "__main__"], ) ``` ### Expected behavior I'm expecting it's possible to make a custom class serializable, but since langchain_core 0.0.13 the `valid_namespaces` is effectively ignored as it relies on a whitelist of what can be serialized `SERIALIZABLE_MAPPING`. So I get the error: > ValueError: Trying to deserialize something that cannot be deserialized in current version of langchain-core: ('__main__', 'Custom') Triggered in [load.py#L68](https://github.com/langchain-ai/langchain/blob/v0.0.349/libs/core/langchain_core/load/load.py#L68) --- I'm not sure if serialization was ever intended to be part of the public API, but I've found it convenient to be able to make my custom parts of chains also abide by the serialization protocol and still be able to dump/load my chains
Unable to dump/load custom classes since langchain_core 0.0.13
https://api.github.com/repos/langchain-ai/langchain/issues/14589/comments
3
2023-12-12T10:16:56Z
2023-12-13T10:44:16Z
https://github.com/langchain-ai/langchain/issues/14589
2,037,428,125
14,589
[ "langchain-ai", "langchain" ]
### System Info Python 3.9.13, LangChain 0.0.347, Windows 10 ### Who can help? @hwchase17 @agola ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [X] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction Running this snippet: ```python answers = (answer for answer in ["answer1", "answer2"]) class CustomLLM(LLM): def _llm_type(self) -> str: return "custom" def _call(self, prompt: str, **_) -> str: return next(answers) class CustomOutputParser(BaseOutputParser[str]): reject_output: bool = True def parse(self, text: str) -> str: if self.reject_output: self.reject_output = False raise OutputParserException(f"Parsing failed") return text def get_format_instructions(self) -> str: return "format instructions" class CustomCallbackHandler(BaseCallbackHandler): def on_llm_end(self, response: LLMResult, **_): print(f"received LLM response: {response}") llm = CustomLLM() chain = LLMChain( llm=llm, prompt=PromptTemplate.from_template("Testing prompt"), output_key="chain_output", verbose=True, output_parser=OutputFixingParser.from_llm(llm, CustomOutputParser()), ) result = chain({}, callbacks=[CustomCallbackHandler()]) print(f"Chain result is {result}") ``` produces the following output: ``` > Entering new LLMChain chain... Prompt after formatting: Testing prompt received LLM response: generations=[[Generation(text='answer1')]] llm_output=None run=None > Finished chain. 
Chain result is {'chain_output': 'answer2'} ``` ### Expected behavior The output line `received LLM response: generations=[[Generation(text='answer1')]] llm_output=None run=None` should also appear with `answer2`, because that one is also generated while running the chain that was configured to use the `CustomCallbackHandler`. The callbacks should also be used in the chain that is created inside the OutputFixingParser. In my opinion, the chain doing the retry should have the overall chain as a parent in the callback methods (`on_chain_start()` and so on).
OutputFixingParser should use callbacks
https://api.github.com/repos/langchain-ai/langchain/issues/14588/comments
1
2023-12-12T09:35:53Z
2024-03-19T16:06:02Z
https://github.com/langchain-ai/langchain/issues/14588
2,037,349,229
14,588
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. below is my code ` confluence_url = config.get("confluence_url", None) username = config.get("username", None) api_key = config.get("api_key", None) space_key = config.get("space_key", None) documents = [] embedding = OpenAIEmbeddings() loader = ConfluenceLoader( url=confluence_url, username=username, api_key=api_key ) for space_key in space_key: documents.extend(loader.load(space_key=space_key,include_attachments=True,limit=100))` The error I am getting: raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://api.media.atlassian.com/file/UNKNOWN_MEDIA_ID/binary?token=sometoken&name=Invalid%20file%20id%20-%20UNKNOWN_MEDIA_ID&max-age=2940 ### Suggestion: _No response_
Issue: Getting error while integrating Confluence Spaces including attachments
https://api.github.com/repos/langchain-ai/langchain/issues/14586/comments
3
2023-12-12T09:00:13Z
2024-02-21T11:48:20Z
https://github.com/langchain-ai/langchain/issues/14586
2,037,287,446
14,586
[ "langchain-ai", "langchain" ]
### System Info Langchain Version: 0.0.335 Platform: Win11 Python Version: 3.11.5 Hi experts, I'm trying to execute the RAG Search Example on the Langchain Doc: https://python.langchain.com/docs/expression_language/get_started **Here is the code:** from langchain.chat_models import ChatOpenAI from langchain.embeddings import OpenAIEmbeddings from langchain.prompts import ChatPromptTemplate from langchain.vectorstores import DocArrayInMemorySearch from langchain.schema import StrOutputParser from langchain.schema.runnable import RunnableParallel, RunnablePassthrough vectorstore = DocArrayInMemorySearch.from_texts( ["harrison worked at kensho", "bears like to eat honey"], embedding=OpenAIEmbeddings(), ) retriever = vectorstore.as_retriever() template = """Answer the question based only on the following context: {context} Question: {question} """ prompt = ChatPromptTemplate.from_template(template) model = ChatOpenAI() output_parser = StrOutputParser() setup_and_retrieval = RunnableParallel( {"context": retriever, "question": RunnablePassthrough()} ) chain = setup_and_retrieval | prompt | model | output_parser chain.invoke("where did harrison work?") **but the example fails with the ValidationError: 2 validation errors for DocArrayDoc.** **Here is the error details:** C:\Project\pythonProjectAI\.venv\Lib\site-packages\pydantic\_migration.py:283: UserWarning: `pydantic.error_wrappers:ValidationError` has been moved to `pydantic:ValidationError`. 
warnings.warn(f'`{import_path}` has been moved to `{new_location}`.') Traceback (most recent call last): File "C:\Project\pythonProjectAI\AI.py", line 28, in <module> chain.invoke("where did harrison work?") File "C:\Project\pythonProjectAI\.venv\Lib\site-packages\langchain\schema\runnable\base.py", line 1427, in invoke input = step.invoke( ^^^^^^^^^^^^ File "C:\Project\pythonProjectAI\.venv\Lib\site-packages\langchain\schema\runnable\base.py", line 1938, in invoke output = {key: future.result() for key, future in zip(steps, futures)} ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Project\pythonProjectAI\.venv\Lib\site-packages\langchain\schema\runnable\base.py", line 1938, in <dictcomp> output = {key: future.result() for key, future in zip(steps, futures)} ^^^^^^^^^^^^^^^ File "C:\Program Files\Python\Lib\concurrent\futures\_base.py", line 456, in result return self.__get_result() ^^^^^^^^^^^^^^^^^^^ File "C:\Program Files\Python\Lib\concurrent\futures\_base.py", line 401, in __get_result raise self._exception File "C:\Program Files\Python\Lib\concurrent\futures\thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Project\pythonProjectAI\.venv\Lib\site-packages\langchain\schema\retriever.py", line 112, in invoke return self.get_relevant_documents( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Project\pythonProjectAI\.venv\Lib\site-packages\langchain\schema\retriever.py", line 211, in get_relevant_documents raise e File "C:\Project\pythonProjectAI\.venv\Lib\site-packages\langchain\schema\retriever.py", line 204, in get_relevant_documents result = self._get_relevant_documents( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Project\pythonProjectAI\.venv\Lib\site-packages\langchain\schema\vectorstore.py", line 657, in _get_relevant_documents docs = self.vectorstore.similarity_search(query, **self.search_kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"C:\Project\pythonProjectAI\.venv\Lib\site-packages\langchain\vectorstores\docarray\base.py", line 127, in similarity_search results = self.similarity_search_with_score(query, k=k, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Project\pythonProjectAI\.venv\Lib\site-packages\langchain\vectorstores\docarray\base.py", line 106, in similarity_search_with_score query_doc = self.doc_cls(embedding=query_embedding) # type: ignore ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Project\pythonProjectAI\.venv\Lib\site-packages\pydantic\main.py", line 164, in __init__ __pydantic_self__.__pydantic_validator__.validate_python(data, self_instance=__pydantic_self__) pydantic_core._pydantic_core.ValidationError: 2 validation errors for DocArrayDoc text Field required [type=missing, input_value={'embedding': [-0.0192381..., 0.010137099064823456]}, input_type=dict] For further information visit https://errors.pydantic.dev/2.5/v/missing metadata Field required [type=missing, input_value={'embedding': [-0.0192381..., 0.010137099064823456]}, input_type=dict] For further information visit https://errors.pydantic.dev/2.5/v/missing ### Who can help? 
_No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction from langchain.chat_models import ChatOpenAI from langchain.embeddings import OpenAIEmbeddings from langchain.prompts import ChatPromptTemplate from langchain.vectorstores import DocArrayInMemorySearch from langchain.schema import StrOutputParser from langchain.schema.runnable import RunnableParallel, RunnablePassthrough vectorstore = DocArrayInMemorySearch.from_texts( ["harrison worked at kensho", "bears like to eat honey"], embedding=OpenAIEmbeddings(), ) retriever = vectorstore.as_retriever() template = """Answer the question based only on the following context: {context} Question: {question} """ prompt = ChatPromptTemplate.from_template(template) model = ChatOpenAI() output_parser = StrOutputParser() setup_and_retrieval = RunnableParallel( {"context": retriever, "question": RunnablePassthrough()} ) chain = setup_and_retrieval | prompt | model | output_parser chain.invoke("where did harrison work?") ### Expected behavior The example runs well.
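The ValidationError above is a pydantic-v2-style required-field failure: the dynamically created `DocArrayDoc` schema marks `text` and `metadata` as required, while `similarity_search_with_score` constructs the query document with only an `embedding`. A minimal sketch of the mechanism, using stdlib dataclasses as a stand-in for the pydantic model (class names are illustrative, not langchain's actual fix):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class StrictQueryDoc:
    # All three fields are required: building it from an embedding alone fails,
    # analogous to the "Field required" errors in the traceback above.
    text: str
    metadata: dict
    embedding: list

@dataclass
class LenientQueryDoc:
    # Giving text/metadata defaults makes an embedding-only query document valid.
    embedding: list
    text: Optional[str] = None
    metadata: dict = field(default_factory=dict)

def build(doc_cls, embedding):
    try:
        return doc_cls(embedding=embedding)
    except TypeError as exc:  # dataclasses raise TypeError; pydantic raises ValidationError
        return exc

print(type(build(StrictQueryDoc, [0.1, 0.2])).__name__)   # TypeError
print(type(build(LenientQueryDoc, [0.1, 0.2])).__name__)  # LenientQueryDoc
```

Reported remedies for this class of incompatibility include pinning a compatible pydantic version or upgrading langchain; the sketch only isolates why the embedding-only construction trips required-field validation.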
ValidationError: 2 validation errors for DocArrayDoc returned when try to execute the RAG Search Example
https://api.github.com/repos/langchain-ai/langchain/issues/14585/comments
19
2023-12-12T08:57:21Z
2024-06-08T16:07:56Z
https://github.com/langchain-ai/langchain/issues/14585
2,037,282,618
14,585
[ "langchain-ai", "langchain" ]
https://github.com/langchain-ai/langchain/blob/76905aa043e3e604b5b34faf5e91d0aedb5ed6dd/libs/langchain/langchain/chains/graph_qa/cypher.py#L266C2-L269C25 Hi, I am heavily using GraphCypherQAChain. Sometimes cypher_query_corrector returns the generated Cypher as valid even though it has syntax problems or is not a Cypher query at all, and Neo4j then raises errors during execution. Where do you think these errors should be handled, and how can the chain continue without interruption and still return a plausible response to the user? @tomasonjo Example: <img width="1104" alt="image" src="https://github.com/langchain-ai/langchain/assets/9192832/0cac12a9-0a4c-4446-af30-2f54ac3290c8"> Thanks!
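Until the chain itself exposes a hook for this, one place to handle it is a wrapper around query execution that catches the database error and either retries (optionally feeding the error text back to the model for a corrected query) or returns a fallback answer. A minimal sketch of the control flow; `execute` and `regenerate` are placeholders for a Neo4j call and an LLM call, not GraphCypherQAChain APIs:

```python
def run_cypher_with_recovery(execute, regenerate, query, max_attempts=3,
                             fallback="Sorry, I could not answer that from the graph."):
    """Try a Cypher query; on failure, ask for a corrected query and retry."""
    for _ in range(max_attempts):
        try:
            return execute(query)
        except Exception as exc:  # in real code, narrow this to your driver's error type
            query = regenerate(query, str(exc))
    return fallback


def flaky_execute(q):
    """Toy stand-in for the Neo4j call: rejects anything that is not Cypher-like."""
    if "MATCH" not in q:
        raise ValueError("Invalid input: not a Cypher query")
    return ["row1"]


def fix_query(q, error):
    """Toy stand-in for an LLM call that rewrites the query given the DB error."""
    return "MATCH (n) RETURN n LIMIT 1"


print(run_cypher_with_recovery(flaky_execute, fix_query, "not cypher at all"))
# → ['row1']
```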
GraphCypherQAChain Unhandled Exception while running Erroneous Cypher Queries
https://api.github.com/repos/langchain-ai/langchain/issues/14584/comments
1
2023-12-12T08:18:57Z
2024-03-19T16:05:58Z
https://github.com/langchain-ai/langchain/issues/14584
2,037,219,925
14,584
[ "langchain-ai", "langchain" ]
### Feature request Currently, SemanticSimilarityExampleSelector only passes `k` as a parameter to the vectorstore [see here](https://github.com/langchain-ai/langchain/blob/76905aa043e3e604b5b34faf5e91d0aedb5ed6dd/libs/core/langchain_core/example_selectors/semantic_similarity.py#L55C10-L55C10). Depending on the implementation, the vectorstore can take multiple other arguments; most notably, `filters` can be passed down [see here](https://github.com/langchain-ai/langchain/blob/76905aa043e3e604b5b34faf5e91d0aedb5ed6dd/libs/community/langchain_community/vectorstores/faiss.py#L505C13-L507C38). ### Motivation The ability to filter examples (on top of similarity search) would be very helpful in controlling which examples are added to the prompt. This feature provides significantly more control over example selection. ### Your contribution This is straightforward to update: add a new attribute `vectorstore_kwargs` to the class

```python
class SemanticSimilarityExampleSelector(BaseExampleSelector, BaseModel):
    """Example selector that selects examples based on SemanticSimilarity."""

    vectorstore: VectorStore
    """VectorStore that contains information about examples."""
    k: int = 4
    """Number of examples to select."""
    example_keys: Optional[List[str]] = None
    """Optional keys to filter examples to."""
    input_keys: Optional[List[str]] = None
    """Optional keys to filter input to. If provided, the search is based on
    the input variables instead of all variables."""
    vectorstore_kwargs: Optional[Dict[str, Any]] = None
    """Additional arguments passed to the vectorstore for similarity search."""
```

and then update the `select_examples` function with `example_docs = self.vectorstore.similarity_search(query, k=self.k, **self.vectorstore_kwargs)`.
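The forwarding pattern the request describes is a plain kwargs pass-through; a self-contained sketch with a stub vector store (stand-ins for the real classes, to show the shape of the change):

```python
class StubVectorStore:
    """Minimal stand-in recording what a filtered similarity_search receives."""
    def similarity_search(self, query, k=4, **kwargs):
        return [("doc-for:" + query, k, kwargs)]


class ExampleSelector:
    """Sketch of the requested selector: extra kwargs flow through to the store."""
    def __init__(self, vectorstore, k=4, vectorstore_kwargs=None):
        self.vectorstore = vectorstore
        self.k = k
        self.vectorstore_kwargs = vectorstore_kwargs or {}

    def select_examples(self, query):
        return self.vectorstore.similarity_search(
            query, k=self.k, **self.vectorstore_kwargs
        )


selector = ExampleSelector(StubVectorStore(), k=2,
                           vectorstore_kwargs={"filter": {"lang": "en"}})
print(selector.select_examples("greet"))
# → [('doc-for:greet', 2, {'filter': {'lang': 'en'}})]
```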
passing down vectorstore additional argument in SemanticSimilarityExampleSelector
https://api.github.com/repos/langchain-ai/langchain/issues/14583/comments
1
2023-12-12T07:18:55Z
2024-03-19T16:05:52Z
https://github.com/langchain-ai/langchain/issues/14583
2,037,136,239
14,583
[ "langchain-ai", "langchain" ]
### System Info python = "^3.10" openai = "^1.3.8" langchain = "^0.0.349" ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ```` import chromadb from langchain.embeddings import AzureOpenAIEmbeddings from langchain.vectorstores.chroma import Chroma client = chromadb.HttpClient( host=CHROMA_SERVER_HOST, port=CHROMA_SERVER_HTTP_PORT, ) embeddings = AzureOpenAIEmbeddings( openai_api_type=AZURE_OPENAI_API_TYPE, azure_endpoint=AZURE_OPENAI_API_BASE, api_key=AZURE_OPENAI_API_KEY, openai_api_version=AZURE_OPENAI_API_VERSION, azure_deployment=AZURE_EMBEDDING_DEPLOYMENT_NAME, ) vectordb = Chroma( client=client, collection_name=CHROMA_COLLECTION_NAME_FBIG_1000, embedding_function=embeddings, ) ```` ### Expected behavior TypeError: cannot pickle '_thread.RLock' object When I use openai = "0.28.1" it doesn't have the above error
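The error itself is the generic Python failure you get whenever `copy.deepcopy` (or pickling) reaches an object that holds a lock, which the openai>=1.0 client does; the code path evidently deep-copies the embeddings object, and the report that openai==0.28.1 avoids it is consistent with the older module-level API carrying no such state. The mechanism in isolation:

```python
import copy
import threading

class ClientLike:
    """Stand-in for any object carrying a lock (as the openai>=1.0 client does)."""
    def __init__(self):
        self._lock = threading.RLock()

try:
    copy.deepcopy(ClientLike())
except TypeError as exc:
    # e.g. "cannot pickle '_thread.RLock' object"
    print(exc)
```

The sketch only isolates the failure; the practical remedy is to keep unpicklable client objects out of whatever gets copied or serialized (or use versions of the libraries where this was resolved).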
RetrievalQA and AzureOpenAIEmbeddings lead to TypeError: cannot pickle '_thread.lock' object
https://api.github.com/repos/langchain-ai/langchain/issues/14581/comments
15
2023-12-12T06:39:49Z
2024-07-27T16:03:39Z
https://github.com/langchain-ai/langchain/issues/14581
2,037,087,675
14,581
[ "langchain-ai", "langchain" ]
### System Info Ubuntu 20.04 CUDA 12.1 NVIDIA RTX 4070 ### Who can help? @hwchase17 @eyurtsev ### Information - [X] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ``` from langchain.callbacks.manager import CallbackManager from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler from langchain.chains import LLMChain from langchain.llms import LlamaCpp from langchain.prompts import PromptTemplate template = """Question: {question} Answer: Your Answer""" prompt = PromptTemplate(template=template, input_variables=["question"]) callback_manager = CallbackManager([StreamingStdOutCallbackHandler()]) n_gpu_layers = 80 # Change this value based on your model and your GPU VRAM pool. n_batch = 512 # Should be between 1 and n_ctx, consider the amount of VRAM in your GPU. llm = LlamaCpp(model_path="/home/rtx-4070/Downloads/openorca-platypus2-13b.Q4_K_M.gguf", n_gpu_layers=n_gpu_layers, n_batch=n_batch, callback_manager=callback_manager, verbose=True, n_ctx=2048 ) from langchain_experimental.agents.agent_toolkits import create_csv_agent from langchain.agents.agent_types import AgentType agent = create_csv_agent( llm, "/home/rtx-4070/git_brainyneurals/langchain_local/docs/SPY.csv", verbose=True, agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION, handle_parsing_errors=True ) agent.run("How many rows are there?") ``` I am running the OpenOrca model, which I downloaded from HuggingFace, but I am facing these errors. Could you please help me with a solution, or suggest any other approaches or models? I would be grateful. Thanks in advance. 
### Expected behavior I want to get a simple CSV agent working on my RTX 4070 desktop GPU with any open-source model.
An output parsing error occurred. Could not parse LLM output create_csv_agent
https://api.github.com/repos/langchain-ai/langchain/issues/14580/comments
2
2023-12-12T05:58:28Z
2024-03-19T16:05:47Z
https://github.com/langchain-ai/langchain/issues/14580
2,037,040,612
14,580
[ "langchain-ai", "langchain" ]
### System Info Langchain: 0.0.348 Python 3.12.0 ### Who can help? @hwchase17 @agola11 ### Information - [X] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [X] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Steps to reproduce the error: 1. Run the [notebook](https://github.com/langchain-ai/langchain/blob/master/docs/docs/integrations/llms/predibase.ipynb) with the latest version of Langchain. The error occurs specifically in cell 13: `review = overall_chain.run("Tragedy at sunset on the beach")` ### Expected behavior The expected behavior is simply to return the output from the LLM.
Predibase TypeError: LlmMixin.prompt() got an unexpected keyword argument 'model_name'
https://api.github.com/repos/langchain-ai/langchain/issues/14564/comments
1
2023-12-11T23:00:10Z
2024-03-19T16:05:42Z
https://github.com/langchain-ai/langchain/issues/14564
2,036,678,638
14,564
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. Hi, I have this issue when working with more than one table using the Llama2 model. Let's say table 1 has columns A, B, C, and table 2 has columns X, Y, Z. When I run the query, it often fails: the generated query assumes that table 1 has columns X, Y, Z, mixing up the columns of the two tables. Any suggestions on how to avoid this? I'm trying to solve it with prompts, but the model seems to ignore the instructions in these cases. ### Suggestion: _No response_
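Besides prompt instructions, a cheap guard is to validate the generated SQL against the real schema before executing it and to re-prompt when a column is attributed to the wrong table. A sketch with an illustrative schema (table and column names are placeholders, and only dotted `table.column` references are checked):

```python
import re

SCHEMA = {
    "table1": {"a", "b", "c"},
    "table2": {"x", "y", "z"},
}

def misplaced_columns(sql: str) -> list:
    """Return (table, column) pairs where the query references a column
    on a table that does not define it."""
    problems = []
    for table, col in re.findall(r"\b(\w+)\.(\w+)\b", sql):
        if table in SCHEMA and col.lower() not in SCHEMA[table]:
            problems.append((table, col))
    return problems

print(misplaced_columns("SELECT table1.x, table2.z FROM table1 JOIN table2"))
# → [('table1', 'x')]
```

If the list is non-empty, the query can be sent back to the model together with the correct schema instead of being executed.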
Issue: Problem when using model llama2 13b chat to create SQL queries in 2 or more tables, it mixes the columns of the tables
https://api.github.com/repos/langchain-ai/langchain/issues/14553/comments
1
2023-12-11T19:58:47Z
2024-03-18T16:08:58Z
https://github.com/langchain-ai/langchain/issues/14553
2,036,427,335
14,553
[ "langchain-ai", "langchain" ]
### System Info Name: langchain Version: 0.0.348 Name: PyGithub Version: 2.1.1 ### Who can help? @hwchase17 @agola11 ``` Traceback (most recent call last): File "/Users/mac/Dev Projects/Chainlit_qa/test.py", line 12, in <module> github = GitHubAPIWrapper( ^^^^^^^^^^^^^^^^^ File "/Users/mac/Dev Projects/Chainlit_qa/myenv/lib/python3.11/site-packages/pydantic/v1/main.py", line 339, in __init__ values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/mac/Dev Projects/Chainlit_qa/myenv/lib/python3.11/site-packages/pydantic/v1/main.py", line 1102, in validate_model values = validator(cls_, values) ^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/mac/Dev Projects/Chainlit_qa/myenv/lib/python3.11/site-packages/langchain/utilities/github.py", line 69, in validate_environment installation = gi.get_installations()[0] ~~~~~~~~~~~~~~~~~~~~~~^^^ File "/Users/mac/Dev Projects/Chainlit_qa/myenv/lib/python3.11/site-packages/github/PaginatedList.py", line 62, in __getitem__ return self.__elements[index] ~~~~~~~~~~~~~~~^^^^^^^ IndexError: list index out of range ``` ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [X] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Run basic python file: ``` from langchain.agents.agent_toolkits.github.toolkit import GitHubToolkit from langchain.utilities.github import GitHubAPIWrapper github = GitHubAPIWrapper() toolkit = GitHubToolkit.from_github_api_wrapper(github) tools = toolkit.get_tools() ``` ### Expected behavior Fetching github repository information
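The traceback shows the failure point: `validate_environment` does `gi.get_installations()[0]`, and when the GitHub App has not been installed on any repository (or the app ID / private key point at a different app), that paginated list is empty and indexing it raises IndexError. A defensive version of that lookup (a sketch, not the wrapper's actual code):

```python
def first_installation(installations):
    """Return the first GitHub App installation, with a readable error when none exist."""
    items = list(installations)  # PyGithub's PaginatedList is iterable
    if not items:
        raise RuntimeError(
            "No installations found for this GitHub App. Check that the app ID "
            "and private key belong to an App that is installed on the target "
            "repository."
        )
    return items[0]

print(first_installation(["installation-1", "installation-2"]))
# → installation-1
```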
GithubAPIWrapper throws list index out of range error
https://api.github.com/repos/langchain-ai/langchain/issues/14550/comments
8
2023-12-11T18:30:31Z
2023-12-22T09:32:06Z
https://github.com/langchain-ai/langchain/issues/14550
2,036,284,698
14,550
[ "langchain-ai", "langchain" ]
### Feature request As of `langchain==0.0.348` in [`ChatVertexAI` here](https://github.com/langchain-ai/langchain/blob/v0.0.348/libs/langchain/langchain/chat_models/vertexai.py#L187-L191): 1. `vertexai.language_models.ChatSession.send_message` returns a `vertexai.language_models.MultiCandidateTextGenerationResponse` response 2. `ChatVertexAI` throws out the response's `grounding_metadata`, only preserving the `text` The request is to not discard this useful metadata ### Motivation I am trying to use the `response.grounding_metadata.citations` in my own code ### Your contribution I will open a PR to make this: ```python generations = [ ChatGeneration( message=AIMessage(content=r.text), generation_info=r.raw_prediction_response, ) for r in response.candidates ] ``` So the raw response remains accessible
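The proposed change amounts to carrying the raw response through instead of keeping only `.text`; with stand-in objects the shape looks like this (the response objects are fakes, and the metadata key names are illustrative, not Vertex AI's exact schema):

```python
from types import SimpleNamespace

def to_generations(response):
    # Preserve the raw prediction response (and thus grounding metadata)
    # alongside the generated text instead of discarding it.
    return [
        {"text": c.text, "generation_info": response.raw_prediction_response}
        for c in response.candidates
    ]

fake = SimpleNamespace(
    candidates=[SimpleNamespace(text="answer")],
    raw_prediction_response={
        "groundingMetadata": {"citations": [{"url": "https://example.com"}]}
    },
)
gens = to_generations(fake)
print(gens[0]["generation_info"]["groundingMetadata"]["citations"][0]["url"])
# → https://example.com
```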
Request: `ChatVertexAI` preserving `grounding_metadata`
https://api.github.com/repos/langchain-ai/langchain/issues/14548/comments
1
2023-12-11T18:06:33Z
2024-01-25T04:37:45Z
https://github.com/langchain-ai/langchain/issues/14548
2,036,244,262
14,548
[ "langchain-ai", "langchain" ]
### System Info Langchain 0.0.331, macOS Monterey, Python 3.10.9 ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [X] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction from langchain.document_loaders import UnstructuredHTMLLoader loader = UnstructuredHTMLLoader("https://www.sec.gov/ix?doc=/Archives/edgar/data/40987/000004098720000010/gpc-12312019x10k.htm") documents = loader.load() FileNotFoundError: [Errno 2] No such file or directory: 'https://www.sec.gov/ix?doc=/Archives/edgar/data/40987/000004098720000010/gpc-12312019x10k.htm ### Expected behavior Success loading .htm file
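`.htm` files themselves are fine; the failure is that `UnstructuredHTMLLoader` opens its argument as a local file, so passing a URL produces FileNotFoundError. Distinguishing the two up front, and fetching remote pages before loading (or using a URL-oriented loader), avoids the confusing error. A small sketch (the example URL is simplified, not the one from the report):

```python
from urllib.parse import urlparse

def is_remote(source: str) -> bool:
    """True for http(s) URLs, False for local paths such as 'report.htm'."""
    return urlparse(source).scheme in {"http", "https"}

print(is_remote("https://www.sec.gov/Archives/edgar/some-10k.htm"))  # True
print(is_remote("gpc-12312019x10k.htm"))                             # False
```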
Does HTML Doc Loader accept .htm sites?
https://api.github.com/repos/langchain-ai/langchain/issues/14545/comments
2
2023-12-11T16:25:09Z
2024-04-10T16:15:24Z
https://github.com/langchain-ai/langchain/issues/14545
2,036,050,915
14,545
[ "langchain-ai", "langchain" ]
### System Info Langchain version = 0.0.344 Python version = 3.11.5 ### Who can help? @hwchase17 @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [X] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Here is my code; I am unable to connect to the Power BI dataset.

```python
from langchain.agents.agent_toolkits import PowerBIToolkit
from langchain.utilities.powerbi import PowerBIDataset
from azure.identity import ClientSecretCredential
from azure.core.credentials import TokenCredential

# Create an instance of the language model (llm)
toolkit = PowerBIToolkit(
    powerbi=PowerBIDataset(
        dataset_id="",
        table_names=['WinShareTable'],
        credential=ClientSecretCredential(
            client_id="",
            client_secret='',
            tenant_id=""
        )
    ),
    llm=llm
)
```

I get either a `field "credential" not yet prepared so type is still a ForwardRef, you might need to call PowerBIDataset.update_forward_refs()` error or a TokenCredential error. ### Expected behavior It should connect to Power BI datasets.
impossible to connect to PowerBI Datasets even after providing all the information
https://api.github.com/repos/langchain-ai/langchain/issues/14538/comments
1
2023-12-11T14:07:35Z
2024-03-18T16:08:54Z
https://github.com/langchain-ai/langchain/issues/14538
2,035,763,317
14,538
[ "langchain-ai", "langchain" ]
### Discussed in https://github.com/langchain-ai/langchain/discussions/13245 <div type='discussions-op-text'> <sup>Originally posted by **yallapragada** November 12, 2023</sup> I am testing a simple RAG implementation with Azure Cognitive Search. I am seeing a "cannot import name 'Vector' from azure.search.documents.models" error when I invoke my chain. Origin of my error is line 434 in lanchain/vectorstores/azuresearch.py (from azure.search.documents.models import Vector) this is the relevant code snippet, I get the import error when I execute rag_chain.invoke(question) from langchain.schema.runnable import RunnablePassthrough from langchain.prompts import ChatPromptTemplate from langchain.chat_models.azure_openai import AzureChatOpenAI question = "my question.." -- vector_store is initialized using AzureSearch(), not including that snippet here -- retriever = vector_store.as_retriever() template = ''' Answer the question based on the following context: {context} Question: {question} ''' prompt = ChatPromptTemplate.from_template(template=template) llm = AzureChatOpenAI( deployment_name='MY_DEPLOYMENT_NAME', model_name='MY_MODEL', openai_api_base=MY_AZURE_OPENAI_ENDPOINT, openai_api_key=MY_AZURE_OPENAI_KEY, openai_api_version='2023-05-15', openai_api_type='azure' ) rag_chain = {'context' : retriever, 'question' : RunnablePassthrough} | prompt | llm rag_chain.invoke(question) -------------- my package versions langchain==0.0.331 azure-search-documents==11.4.0b11 azure-core==1.29.5 openai==0.28.1</div>
Stable release 11.4.0 of azure.search.documents.models not compatible with latest langchain version -> class Vector gone
https://api.github.com/repos/langchain-ai/langchain/issues/14534/comments
1
2023-12-11T12:44:45Z
2024-03-18T16:08:49Z
https://github.com/langchain-ai/langchain/issues/14534
2,035,602,125
14,534
[ "langchain-ai", "langchain" ]
### Issue with current documentation: Page: https://python.langchain.com/docs/modules/memory/ In dark mode, there is very little contrast between inputs and outputs. Especially for pages imported from Jupyter notebooks, it can be really confusing figuring out which code blocks are safe to test in a `.py` function and which code blocks are intended for use in Jupyter. Adding in explicit contrast or labeling between input and output blocks would be helpful. ![image](https://github.com/langchain-ai/langchain/assets/113563866/ba628424-fcc5-4496-8053-128254e068a7) ### Idea or request for content: _No response_
DOC: Please add stronger contrast or labeling between notebook input and output blocks
https://api.github.com/repos/langchain-ai/langchain/issues/14532/comments
2
2023-12-11T12:20:45Z
2024-04-01T16:05:54Z
https://github.com/langchain-ai/langchain/issues/14532
2,035,556,970
14,532
[ "langchain-ai", "langchain" ]
### Feature request Weaviate has released a beta version of their [python client v4](https://weaviate.io/developers/weaviate/client-libraries/python) which seems to be more robust compared to v3. It follows a different structure but allows for more versatility and better error handling when it comes to queries. I think it would be great to add support in langchain for the new client. ### Motivation I was using the v3 client (without langchain) to batch import data into Weaviate. It worked, but it was slower than I expected and also resulted in errors that I was not able to manage easily. The new v4 client supports their new [gRPC API](https://weaviate.io/developers/weaviate/api/grpc) which outperforms the traditional REST API that v3 is using. ### Your contribution I will start creating some custom functions using Weaviate's new client to test its reliability. If I don't encounter any serious errors, I'll try to find the time and create a PR to add support for it in langchain. I think that support for both v3 and v4 should exist, at least until v4 becomes stable.
Support Weaviate client v4
https://api.github.com/repos/langchain-ai/langchain/issues/14531/comments
1
2023-12-11T12:15:23Z
2024-03-12T13:14:58Z
https://github.com/langchain-ai/langchain/issues/14531
2,035,547,273
14,531
[ "langchain-ai", "langchain" ]
### System Info langchain==0.0.348 wsl python 3.10.2 ### Who can help? @hwchase17 @agola11 ### Information - [X] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction I'm looking to run a simple RAG using Qdrant pre-instantiated with data (something like the [link from the docs](https://colab.research.google.com/drive/19RxxkZdnq_YqBH5kBV10Rt0Rax-kminD?usp=sharing#scrollTo=fvCHMT73SmKi)), but it's giving me the error in the title. I'm posting it here since the error seems to be on the langchain side, not the qdrant side. I've tried various ways of using langchain to connect to qdrant, but it always ends up with this error. This happens even if I use the deprecated VectorDBQA or RetrievalQA ``` from qdrant_client import QdrantClient from langchain.chat_models import AzureChatOpenAI from langchain.embeddings import HuggingFaceEmbeddings from langchain.vectorstores import Qdrant QDRANT_ENDPOINT = "localhost" MODEL_TO_USE = 'BAAI/bge-base-en-v1.5' collection_name = "collection" client = QdrantClient(QDRANT_ENDPOINT, port=6333, timeout=120) embeddings = HuggingFaceEmbeddings(model_name=MODEL_TO_USE) qdrant = Qdrant(client, collection_name, embeddings) qdrant.similarity_search("middle east") ``` gives ``` >>> qdrant.similarity_search("middle east") Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/user/.local/lib/python3.10/site-packages/langchain/vectorstores/qdrant.py", line 274, in similarity_search results = self.similarity_search_with_score( File "/home/user/.local/lib/python3.10/site-packages/langchain/vectorstores/qdrant.py", line 350, in similarity_search_with_score return 
self.similarity_search_with_score_by_vector( File "/home/user/.local/lib/python3.10/site-packages/langchain/vectorstores/qdrant.py", line 595, in similarity_search_with_score_by_vector results = self.client.search( File "/home/user/.local/lib/python3.10/site-packages/qdrant_client/qdrant_client.py", line 340, in search return self._client.search( File "/home/user/.local/lib/python3.10/site-packages/qdrant_client/qdrant_remote.py", line 472, in search search_result = self.http.points_api.search_points( File "/home/user/.local/lib/python3.10/site-packages/qdrant_client/http/api/points_api.py", line 1388, in search_points return self._build_for_search_points( File "/home/user/.local/lib/python3.10/site-packages/qdrant_client/http/api/points_api.py", line 636, in _build_for_search_points return self.api_client.request( File "/home/user/.local/lib/python3.10/site-packages/qdrant_client/http/api_client.py", line 74, in request return self.send(request, type_) File "/home/user/.local/lib/python3.10/site-packages/qdrant_client/http/api_client.py", line 97, in send raise UnexpectedResponse.for_response(response) qdrant_client.http.exceptions.UnexpectedResponse: Unexpected Response: 400 (Bad Request) Raw response content: b'{"status":{"error":"Wrong input: Default vector params are not specified in config"},"time":0.000118885}' ``` Using this as a retriever also fails, which is what i need it for. Testing with qdrant alone works. ``` client.query(collection_name,"middle east") # using qdrant itself works ``` ### Expected behavior For the queries to be fetched and not throw this error
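The 400 from Qdrant ("Default vector params are not specified in config") typically means the collection was created with *named* vectors only, while the wrapper searches with a bare (default) vector. In `qdrant_client`, the two cases differ only in the shape of `query_vector`, and newer langchain versions expose a `vector_name` argument on the `Qdrant` wrapper for exactly this (check your version; the vector name below is illustrative). The shape difference in isolation:

```python
def build_query_vector(embedding, vector_name=None):
    """qdrant_client's search() takes a bare vector for collections with default
    (unnamed) vector params, or a (name, vector) tuple for named vectors."""
    return embedding if vector_name is None else (vector_name, embedding)

print(build_query_vector([0.1, 0.2]))             # [0.1, 0.2]
print(build_query_vector([0.1, 0.2], "content"))  # ('content', [0.1, 0.2])
```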
b'{"status":{"error":"Wrong input: Default vector params are not specified in config"},"time":0.00036507}'
https://api.github.com/repos/langchain-ai/langchain/issues/14526/comments
1
2023-12-11T10:30:41Z
2024-03-18T16:08:44Z
https://github.com/langchain-ai/langchain/issues/14526
2,035,343,197
14,526
[ "langchain-ai", "langchain" ]
### System Info langchain=0.0.348 openai=0.28.1 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [x] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction model= "chatglm3" llm = ChatOpenAI(model_name=model,openai_api_key=api_key,openai_api_base=api_url) db = SQLDatabase.from_uri("mysql+pymysql://.....") toolkit = SQLDatabaseToolkit(db=db, llm=llm,use_query_checker=True) agent_executor = create_sql_agent( llm=llm,toolkit=toolkit, verbose=True,agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,handle_parsing_errors=False ) content=agent_executor.run("Look at the structure of the table ads_pm_empl_count first, then give the first three rows of data") print(content) <img width="875" alt="联想截图_20231211165355" src="https://github.com/langchain-ai/langchain/assets/73893296/8fcee38e-0235-44c5-8454-6fd368351488"> ### Expected behavior How do I fix this output format problem
When the self-deployed chatglm3 model is invoked based on the openai API specification and create_sql_agent is used to query the first three rows of the data table, the output format is reported to be incorrect. But there are no formatting errors with qwen. How do I fix chatglm3
https://api.github.com/repos/langchain-ai/langchain/issues/14523/comments
1
2023-12-11T08:58:58Z
2024-03-18T16:08:39Z
https://github.com/langchain-ai/langchain/issues/14523
2,035,156,450
14,523
[ "langchain-ai", "langchain" ]
### System Info langchain=0.0.348 python=3.9 openai=0.28.1 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction model= "chatglm3" #llm = OpenAI(model_name=model,openai_api_key=api_key,openai_api_base=api_url) llm = ChatOpenAI(model_name=model,openai_api_key=api_key,openai_api_base=api_url) db = SQLDatabase.from_uri("mysql+pymysql://。。。。。") toolkit = SQLDatabaseToolkit(db=db, llm=llm,use_query_checker=True) #memory = ConversationBufferMemory(memory_key ="chat_history ") agent_executor = create_sql_agent( llm=llm, toolkit=toolkit, verbose=True, agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION, handle_parsing_errors=False ) # content=agent_executor.output_schema.schema() content=agent_executor.run("Look at the structure of the table ads_pm_empl_count first, then give the first three rows of data") print(content) ### Expected behavior How do I fix this output format problem?
1
https://api.github.com/repos/langchain-ai/langchain/issues/14522/comments
1
2023-12-11T08:50:01Z
2023-12-11T09:00:17Z
https://github.com/langchain-ai/langchain/issues/14522
2,035,138,288
14,522
[ "langchain-ai", "langchain" ]
### System Info OS == Windows 11 Python == 3.10.11 Langchain == 0.0.348 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [X] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ```bash Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> from dotenv import load_dotenv >>> load_dotenv() True >>> from langchain.agents import AgentType, initialize_agent >>> from langchain.tools import StructuredTool >>> from langchain.chat_models import ChatOpenAI >>> llm = ChatOpenAI(model='gpt-3.5-turbo') >>> def func0(a: int, b: int) -> int: ... return a+b ... >>> def func1(a: int, b: int, d: dict[str, int]={}) -> int: ... print(d) ... return a+b ... >>> def func2(a: int, b: int, d: dict[str, int]={'c': 0}) -> int: ... print(d) ... return a+b ... >>> tool0 = StructuredTool.from_function(func0, name=func0.__name__, description='a+b') >>> tool1 = StructuredTool.from_function(func1, name=func1.__name__, description='a+b') >>> tool2 = StructuredTool.from_function(func2, name=func2.__name__, description='a+b') >>> agent0 = initialize_agent(agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, tools=[tool0], llm=llm) >>> agent1 = initialize_agent(agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, tools=[tool1], llm=llm) >>> agent2 = initialize_agent(agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, tools=[tool2], llm=llm) >>> agent0.run('hello') 'Hi there! How can I assist you today?' >>> agent0.invoke(dict(input='hello')) {'input': 'hello', 'output': 'Hi there! 
How can I assist you today?'} >>> agent1.run('hello') Traceback (most recent call last): File "<stdin>", line 1, in <module> File "D:\workspace\agi-baby\venv\lib\site-packages\langchain\chains\base.py", line 507, in run return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[ File "D:\workspace\agi-baby\venv\lib\site-packages\langchain\chains\base.py", line 288, in __call__ inputs = self.prep_inputs(inputs) File "D:\workspace\agi-baby\venv\lib\site-packages\langchain\chains\base.py", line 435, in prep_inputs raise ValueError( ValueError: A single string input was passed in, but this chain expects multiple inputs ({'', 'input'}). When a chain expects multiple inputs, please call it by passing in a dictionary, eg `chain({'foo': 1, 'bar': 2})` >>> agent1.invoke(dict(input='hello')) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "D:\workspace\agi-baby\venv\lib\site-packages\langchain\chains\base.py", line 89, in invoke return self( File "D:\workspace\agi-baby\venv\lib\site-packages\langchain\chains\base.py", line 288, in __call__ inputs = self.prep_inputs(inputs) File "D:\workspace\agi-baby\venv\lib\site-packages\langchain\chains\base.py", line 445, in prep_inputs self._validate_inputs(inputs) File "D:\workspace\agi-baby\venv\lib\site-packages\langchain\chains\base.py", line 197, in _validate_inputs raise ValueError(f"Missing some input keys: {missing_keys}") ValueError: Missing some input keys: {''} >>> agent2.run('hello') Traceback (most recent call last): File "<stdin>", line 1, in <module> File "D:\workspace\agi-baby\venv\lib\site-packages\langchain\chains\base.py", line 507, in run return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[ File "D:\workspace\agi-baby\venv\lib\site-packages\langchain\chains\base.py", line 288, in __call__ inputs = self.prep_inputs(inputs) File "D:\workspace\agi-baby\venv\lib\site-packages\langchain\chains\base.py", line 435, in prep_inputs raise ValueError( ValueError: A 
single string input was passed in, but this chain expects multiple inputs ({"'c'", 'input'}). When a chain expects multiple inputs, please call it by passing in a dictionary, eg `chain({'foo': 1, 'bar': 2})` >>> agent2.invoke(dict(input='hello')) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "D:\workspace\agi-baby\venv\lib\site-packages\langchain\chains\base.py", line 89, in invoke return self( File "D:\workspace\agi-baby\venv\lib\site-packages\langchain\chains\base.py", line 288, in __call__ inputs = self.prep_inputs(inputs) File "D:\workspace\agi-baby\venv\lib\site-packages\langchain\chains\base.py", line 445, in prep_inputs self._validate_inputs(inputs) File "D:\workspace\agi-baby\venv\lib\site-packages\langchain\chains\base.py", line 197, in _validate_inputs raise ValueError(f"Missing some input keys: {missing_keys}") ValueError: Missing some input keys: {"'c'"} ``` ### Expected behavior Given a `StructuredTool` that has an argument with a `dict` default value, `StructuredChatAgent` with the tool should work. In the above reproduction code, `agent1` and `agent2` should work as `agent0` does.
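Editor's note: the spurious input keys (`''`, `"'c'"`) are consistent with the braces of the `dict` default leaking into the agent's f-string-style prompt template, where `{...}` is parsed as an input variable. Pending an upstream fix, one workaround is to escape braces in whatever tool signature or description text ends up in the prompt. The helper below is an illustrative sketch, not part of LangChain:

```python
def escape_braces(text: str) -> str:
    """Double curly braces so f-string-style templates treat them as literals."""
    return text.replace("{", "{{").replace("}", "}}")

# The dict default that leaks into the agent prompt:
signature = "func2(a: int, b: int, d: dict[str, int] = {'c': 0}) -> int"
print(escape_braces(signature))
# func2(a: int, b: int, d: dict[str, int] = {{'c': 0}}) -> int
```

Applying this to the rendered tool description before it reaches `PromptTemplate` keeps the default value from being mistaken for an input key.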
[Maybe Bug] `StructuredChatAgent` raises `ValueError` with a `StructuredTool` whose argument has a `dict` default value
https://api.github.com/repos/langchain-ai/langchain/issues/14521/comments
3
2023-12-11T08:41:13Z
2024-03-18T16:08:34Z
https://github.com/langchain-ai/langchain/issues/14521
2,035,123,971
14,521
[ "langchain-ai", "langchain" ]
### System Info LangChain: 0.0.311 Python: 3.11 OS: macOS 11.6 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Thought: The user is asking for help implementing CRUD operations for a specific table in a MySQL database using the Go language. This is a programming task so I will use the programmer_agent tool to help with this. Action: ``` { "action": "programmer_agent", "action_input": { "task": { "title": "Implement CRUD operations for limited_relationships_config table in MySQL using Go", "description": "Write functions for adding, updating, deleting, and retrieving limited relationships configurations in a MySQL database using Go. The operations will be performed on the `limited_relationships_config` table.", "type": "code" } } } ``` ### Expected behavior Thought: The user is asking for help implementing CRUD operations for a specific table in a MySQL database using the Go language. This is a programming task so I will use the programmer_agent tool to help with this. Action: ``` { "action": "programmer_agent", "action_input": { "task": "Implement CRUD operations for limited_relationships_config table in MySQL using Go" } } ```
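Editor's note: the structured-chat output parser validates `action_input` against the tool's args schema, so when the model invents a nested object (the `task` dict above) where the tool expects a plain string, parsing fails. A defensive pre-processing sketch (function and field names are mine, for illustration):

```python
def flatten_task(task) -> str:
    """Collapse a nested action_input into the flat string a string-typed tool expects."""
    if isinstance(task, str):
        return task
    if isinstance(task, dict):
        return " - ".join(str(v) for k, v in task.items() if k != "type")
    return str(task)

task = {
    "title": "Implement CRUD operations for limited_relationships_config table in MySQL using Go",
    "description": "Write functions for adding, updating, deleting, and retrieving limited relationships configurations.",
    "type": "code",
}
print(flatten_task(task))
```

Tightening the tool's description to state that its input must be a single string also reduces how often the model emits nested objects.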
StructuredChatAgent goes wrong when the input contains code such as protobuf.
https://api.github.com/repos/langchain-ai/langchain/issues/14520/comments
3
2023-12-11T07:44:54Z
2024-03-18T16:08:29Z
https://github.com/langchain-ai/langchain/issues/14520
2,035,027,147
14,520
[ "langchain-ai", "langchain" ]
### Feature request i want to use gpt2 for text genration & want to control the llm ### Motivation gpt2 is smaller version & its is best for next-word prediction ### Your contribution want to use below code for loading gpt2 model ``` from transformers import AutoModelForCausalLM, AutoTokenizer model_id = 'gpt2' tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) prompt = 'want to do more ' config1 = { 'num_beams': 3, # Number of beams for beam search 'do_sample': True, # Whether to do sampling or not 'temperature': 0.6 # The value used to module the next token probabilities } config2 = { 'penalty_alpha': 0.6, # The penalty alpha for contrastive search 'top_k': 6 # The number of highest probability vocabulary tokens to keep for top-k-filtering } inputs = tokenizer(prompt, return_tensors='pt') output = model.generate(**inputs, **config1) # Here, you should choose config1 or config2 result = tokenizer.decode(output[0], skip_special_tokens=True) print("------->>>>.",result) ```
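Editor's note: for the "custom prompt" part of the request, the template can be applied in plain Python before tokenizing. GPT-2 is not instruction-tuned, so the template wording below is only an assumption to show the mechanics:

```python
PROMPT_TEMPLATE = "You are a helpful assistant.\nUser: {question}\nAssistant:"

def build_prompt(question: str) -> str:
    """Fill the custom template; pass the result to tokenizer(...) as in the snippet above."""
    return PROMPT_TEMPLATE.format(question=question)

print(build_prompt("want to do more"))
```

LangChain's Hugging Face pipeline wrapper can then host the loaded model and apply such a template via `PromptTemplate`, though the exact wiring depends on the LangChain version in use.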
How to use GPT-2 with a custom prompt?
https://api.github.com/repos/langchain-ai/langchain/issues/14519/comments
1
2023-12-11T07:38:08Z
2024-03-18T16:08:23Z
https://github.com/langchain-ai/langchain/issues/14519
2,035,017,496
14,519
[ "langchain-ai", "langchain" ]
### System Info ```langchain = "^0.0.345"``` I want to embed and store multiple documents in PGVector and RAG query the DB. When saving documents, I am specifying a collection_name for each document. (For example, if I have 3 documents, I have 3 collection_names) Is it possible to separate collections like this? Also, is the collection_name required when connecting to PGVector? ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db1 = PGVector.from_documents( embedding=embed, documents=documents, collection_name="my doc 1", connection_string=CONNECTION_STRING ) db2 = PGVector.from_documents( embedding=embed, documents=documents, collection_name="my doc 2", connection_string=CONNECTION_STRING ) # after, To query the DB where the document is stored. db_connection = PGVector( embedding=embed, documents=documents, collection_name="my doc 2", # I don't want to specify a collection_name. connection_string=CONNECTION_STRING ) ### Expected behavior I want to connect to the PGVector only the first time, and then use that session to query the collection_name (1 document in my case).
Is 'collection_name' required when initializing 'PGVector'?
https://api.github.com/repos/langchain-ai/langchain/issues/14518/comments
2
2023-12-11T05:36:15Z
2024-03-28T14:13:27Z
https://github.com/langchain-ai/langchain/issues/14518
2,034,867,869
14,518
[ "langchain-ai", "langchain" ]
https://github.com/langchain-ai/langchain/blob/c0f4b95aa9961724ab4569049b4c3bc12ebbacfc/libs/langchain/langchain/vectorstores/chroma.py#L742 This function breaks the pattern of how the `embedding_function` is referenced by just calling it `embedding`. Small issue but definitely makes it a bit confusing to navigate without diving into the code/docs. Happy to PR it in if worthwhile
Breaking of pattern in `from_document` function
https://api.github.com/repos/langchain-ai/langchain/issues/14517/comments
1
2023-12-11T05:30:25Z
2024-03-18T16:08:13Z
https://github.com/langchain-ai/langchain/issues/14517
2,034,861,990
14,517
[ "langchain-ai", "langchain" ]
### System Info Python 3.10.2 langchain version 0.0.339 WSL ### Who can help? @hwchase17 @agola11 ### Information - [x ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction I am using Qdrant on docker with information preloaded. However im unable to get it to search ``` MODEL_TO_USE = 'all-mpnet-base-v2' client = QdrantClient(QDRANT_ENDPOINT, port=6333, timeout=120) embeddings = HuggingFaceEmbeddings(model_name=MODEL_TO_USE) qdrant = Qdrant(client, collection_name, embeddings) conversation_chain = ConversationalRetrievalChain.from_llm( llm=llm, retriever=qdrant.as_retriever(search_kwargs={'k': 5}), get_chat_history=lambda h : h, memory=memory ) # im using a streamlit frontend import streamlit as st st.session_state.conversation = conversation_chain # during chat result = st.session_state.conversation({ "question": user_query, "chat_history": st.session_state['chat_history']}) ``` Result would then lead to the error. ### Expected behavior Chat is supposed to work instead of getting ``` File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__ pydantic.error_wrappers.ValidationError: 1 validation error for Document page_content none is not an allowed value (type=type_error.none.not_allowed) ```
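Editor's note: `page_content none is not an allowed value` typically means the Qdrant payloads were written outside LangChain, so the wrapper's default payload key (`page_content`) is absent and `None` gets passed to `Document`. If your version of the `Qdrant` vectorstore exposes `content_payload_key`, pointing it at the key your ingest pipeline used is the direct fix; the sketch below shows the lookup-with-fallback idea in plain Python:

```python
def extract_content(payload: dict, keys=("page_content", "text", "content")) -> str:
    """Return the first non-empty string found under the candidate payload keys."""
    for key in keys:
        value = payload.get(key)
        if isinstance(value, str) and value:
            return value
    raise KeyError(f"no text under any of {keys}; payload keys: {list(payload)}")

payload = {"text": "hello world", "metadata": {"source": "a.pdf"}}
print(extract_content(payload))  # hello world
```

Inspecting one stored point's payload in the Qdrant dashboard usually reveals which key the text actually lives under.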
Qdrant retriever with existing data leads to pydantic.error_wrappers.ValidationError: 1 validation error for Document page_content none is not an allowed value (type=type_error.none.not_allowed)
https://api.github.com/repos/langchain-ai/langchain/issues/14515/comments
16
2023-12-11T02:50:24Z
2024-06-25T10:46:41Z
https://github.com/langchain-ai/langchain/issues/14515
2,034,721,756
14,515
[ "langchain-ai", "langchain" ]
### Issue with current documentation: _No response_ ### Idea or request for content: _No response_
DOC: Could load GGUF models from https
https://api.github.com/repos/langchain-ai/langchain/issues/14514/comments
21
2023-12-11T02:45:28Z
2024-03-19T16:05:38Z
https://github.com/langchain-ai/langchain/issues/14514
2,034,717,889
14,514
[ "langchain-ai", "langchain" ]
### Feature request Customize how messages are formatted in MessagesPlaceholder Currently, history messages are always format to: ''' Human: ... AI: ... ''' Popular chat fine-tunings use all sorts of different formats. Example: ''' <|im_start|>user ...<|im_end|> <|im_start|>assistant ...<|im_end|> ''' There's currently no way to change how history messages are prompted. ### Motivation I wanted to use langchain to make chatbots. Currently that use case is not well supported. ### Your contribution Here's how I got it to work. I have to manually parse the history in a previous step. <img width="683" alt="image" src="https://github.com/langchain-ai/langchain/assets/2053475/15fe31e2-cc23-47f2-9d0c-d5670f22768a">
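Editor's note: until message formatting is pluggable, history can be rendered by hand before it reaches the prompt, exactly as the screenshot's manual parsing step does. A minimal ChatML-style formatter in pure Python, using the delimiters from the request above:

```python
def to_chatml(history) -> str:
    """Render (role, content) pairs in the ChatML-style format shown above."""
    return "\n".join(
        f"<|im_start|>{role}\n{content}<|im_end|>" for role, content in history
    )

history = [("user", "Hi there"), ("assistant", "Hello! How can I help?")]
print(to_chatml(history))
```

The same function can be dropped into a runnable map step so the formatted string replaces the default `Human:`/`AI:` rendering.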
Customize how messages are formatted in MessagesPlaceholder
https://api.github.com/repos/langchain-ai/langchain/issues/14513/comments
1
2023-12-10T23:33:17Z
2024-03-17T16:10:46Z
https://github.com/langchain-ai/langchain/issues/14513
2,034,583,189
14,513
[ "langchain-ai", "langchain" ]
### System Info 0.0.348, linux , python 3.11 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [x] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ``` chain_with_history = RunnableWithMessageHistory( chain, lambda session_id: RedisChatMessageHistory(session_id, url=REDIS_URL, ttl=600), input_messages_key="inputs", history_messages_key="history", ) return chain_with_history response = customer_support_invoke().stream( {"inputs": userChats.inputs, "profile": userChats.profile}, config={"configurable": {"session_id": "123"}}, ) for s in response: print(s.content, end="", flush=True) ``` if I streaming twice , it will raise an error "ValueError: Got unexpected message type: AIMessageChunk " ### Expected behavior streaming correctly
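Editor's note: "Got unexpected message type: AIMessageChunk" suggests the history store receives raw streamed chunks instead of a finished message. A workaround others have used is to coalesce the chunks into one message before saving. Sketch with a stand-in chunk class (the real `AIMessageChunk` lives in LangChain's schema module):

```python
class AIMessageChunk:
    """Stand-in for langchain's streamed chunk type (illustration only)."""
    def __init__(self, content: str):
        self.content = content

def merge_chunks(chunks) -> str:
    """Concatenate streamed chunk contents into one final message string."""
    return "".join(chunk.content for chunk in chunks)

chunks = [AIMessageChunk("Hel"), AIMessageChunk("lo"), AIMessageChunk("!")]
print(merge_chunks(chunks))  # Hello!
```

The merged string can then be stored as a regular AI message, which the history loader accepts on the next turn.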
RunnableWithMessageHistory streaming bug
https://api.github.com/repos/langchain-ai/langchain/issues/14511/comments
11
2023-12-10T20:43:15Z
2024-04-05T16:07:05Z
https://github.com/langchain-ai/langchain/issues/14511
2,034,526,238
14,511
[ "langchain-ai", "langchain" ]
### Issue with current documentation: import requests from typing import Optional from langchain.tools import StructuredTool ``` python def post_message(url: str, body: dict, parameters: Optional[dict]=None) -> str: """Sends a POST request to the given url with the given body and parameters.""" result = requests.post(url, json=body, params=parameters) return result.text custom_tool = StructuredTool.from_function(post_message) from langchain.agents import initialize_agent, AgentType from langchain.chat_models import ChatOpenAI tools = [custom_tool] # Add any tools here llm = ChatOpenAI(temperature=0) # or any other LLM agent_chain = initialize_agent(tools, llm, agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True) ``` How to run this and get the results if I have an api endpoint ### Idea or request for content: _No response_
Help me run this
https://api.github.com/repos/langchain-ai/langchain/issues/14508/comments
8
2023-12-10T16:37:15Z
2023-12-11T04:49:51Z
https://github.com/langchain-ai/langchain/issues/14508
2,034,443,411
14,508
[ "langchain-ai", "langchain" ]
### Issue with current documentation: The in-memory cache section of the LLM Caching documentation shows uncached and subsequent cached responses that differ. The later examples show cached responses that are the same as the uncached response, which makes more sense. ### Idea or request for content: The cached response under LLM Caching: In-memory should be "\n\nWhy couldn't the bicycle stand up by itself? It was...two tired!"
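Editor's note: agreed — the point of the in-memory example is that the cached response is byte-identical to the first one. The behavior being documented is plain memoization; a tiny stand-in (not LangChain's actual `InMemoryCache`) that shows the second call returning the identical string without re-invoking the model:

```python
class MemoizingLLM:
    """Toy in-memory cache: identical prompt -> identical response, model called once."""
    def __init__(self, llm):
        self.llm = llm
        self._cache = {}
        self.calls = 0

    def invoke(self, prompt: str) -> str:
        if prompt not in self._cache:
            self.calls += 1
            self._cache[prompt] = self.llm(prompt)
        return self._cache[prompt]

joke = "\n\nWhy couldn't the bicycle stand up by itself? It was...two tired!"
cached = MemoizingLLM(lambda prompt: joke)
first = cached.invoke("Tell me a joke")
second = cached.invoke("Tell me a joke")
assert first == second == joke and cached.calls == 1
```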
DOC: LLM Caching example using in-memory cache is unclear
https://api.github.com/repos/langchain-ai/langchain/issues/14505/comments
1
2023-12-10T15:22:40Z
2024-03-17T16:10:41Z
https://github.com/langchain-ai/langchain/issues/14505
2,034,414,953
14,505
[ "langchain-ai", "langchain" ]
### System Info Langchain version: 0.0.348 Python version: 3.10.6 Platform: Linux The issue is that when using Conversational Chain for with Kendra Retriever with memory, on any followup questions it gives this error. My guess is somehow QueryText is getting more than the {question} value asked. ![image](https://github.com/langchain-ai/langchain/assets/147480492/ddf6ce1f-8ca2-45da-842a-06d2232ba6aa) It works for first hit but for any follow up question (while memory is not empty) it gives this error. ### Who can help? @hwchase17 @ago ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [X] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Steps to reproduce error: Create the retriever object: from langchain.retrievers import AmazonKendraRetriever retriever = AmazonKendraRetriever( index_id=kendra_index_id, top_k = 5 ) Then use it in Conversational Chain: chain = ConversationalRetrievalChain.from_llm( llm, retriever=retriever, memory=memory, verbose=True, condense_question_prompt=chat_prompt, combine_docs_chain_kwargs=dict(prompt=rag_prompt)) chain.run({"question":question}) Run at least twice, to get the error. ### Expected behavior There shouldn't be any such error. Question or QueryText is always less than 1000 characters.
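Editor's note: a plausible cause is that the condense-question step folds chat history into the standalone question, so from the second turn onward the rewritten question exceeds Kendra's 1,000-character QueryText limit even though the user's own question is short. Pending an upstream fix, a guard can be applied before the retriever call (limit value taken from the error message; helper name is mine):

```python
KENDRA_QUERY_LIMIT = 1000  # per the service error, QueryText must be <= 1000 chars

def truncate_query(query: str, limit: int = KENDRA_QUERY_LIMIT) -> str:
    """Clip an over-long condensed question before it reaches Kendra."""
    return query if len(query) <= limit else query[:limit]

assert truncate_query("short question") == "short question"
assert len(truncate_query("x" * 1020)) == 1000
```

Wrapping the retriever's `get_relevant_documents` to apply this truncation keeps the chain running while losing only the tail of the condensed question.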
AmazonKendraRetriever: Error The provided QueryText has a character count of 1020, which exceeds the limit.
https://api.github.com/repos/langchain-ai/langchain/issues/14494/comments
5
2023-12-09T16:34:11Z
2024-03-17T16:10:37Z
https://github.com/langchain-ai/langchain/issues/14494
2,033,925,929
14,494
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. I tried to work on a SQL custom prompt, but it didn't work and is still giving the wrong SQL queries. Here is the code: def process_user_input(user_input): create_db() input_db = SQLDatabase.from_uri('sqlite:///sample_db_2.sqlite') llm_1 = OpenAI(temperature=0) db_agent = SQLDatabaseChain.from_llm(llm = llm_1, db = input_db, verbose=True,) chain = create_sql_query_chain(ChatOpenAI(temperature=0), input_db) response = chain.invoke({"question": user_input}) ### Suggestion: _No response_
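Editor's note: the snippet never passes the custom prompt into either chain, which would explain why the default SQL prompt is still used. The prompt also has to expose the variables the chain fills (the exact names can vary by LangChain version — verify against your chain's documentation). A quick placeholder check in plain Python:

```python
from string import Formatter

def template_variables(template: str) -> set:
    """Names of the {placeholders} a format-style prompt template exposes."""
    return {field for _, field, _, _ in Formatter().parse(template) if field}

custom_prompt = (
    "Given the schema below, write a syntactically correct SQLite query.\n"
    "Schema:\n{table_info}\n\nQuestion: {input}\nSQLQuery:"
)
missing = {"input", "table_info"} - template_variables(custom_prompt)
assert not missing, f"prompt is missing required variables: {missing}"
```

Once the placeholders line up, the prompt object is typically handed to the chain constructor explicitly (e.g. via a `prompt=` keyword) rather than defined alongside it.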
Why is LangChain's custom SQL prompt not working?
https://api.github.com/repos/langchain-ai/langchain/issues/14487/comments
2
2023-12-09T12:19:40Z
2024-03-17T16:10:32Z
https://github.com/langchain-ai/langchain/issues/14487
2,033,828,081
14,487
[ "langchain-ai", "langchain" ]
### System Info I have retriever implementation like this ``` def get_vector_store(options: StoreOptions) -> VectorStore: """Gets the vector store for the given options.""" vector_store: VectorStore embedding = get_embeddings() store_type = os.environ.get("STORE") if store_type == StoreType.QDRANT.value: client = qdrant_client.QdrantClient( url=os.environ["QDRANT_URL"], prefer_grpc=True, api_key=os.getenv("QDRANT_API_KEY", None), ) vector_store = Qdrant( client, collection_name=options.namespace, embeddings=embedding ) # vector_store = Qdrant.from_documents([], embedding, url='http://localhost:6333', collection=options.namespace) else: raise ValueError("Invalid STORE environment variable value") return vector_store ``` get-embeddings.py ``` return OllamaEmbeddings(base_url=f"host.docker.internal:11434", model="mistral") ``` ``` knowledgebase: VectorStore = get_vector_store(StoreOptions("knowledgebase")) async def get_relevant_docs(text: str, bot_id: str) -> Optional[str]: try: kb_retriever = knowledgebase.as_retriever( search_kwargs={ "k": 3, "score_threshold": vs_thresholds.get("kb_score_threshold"), "filter": {"bot_id": bot_id}, }, ) result = kb_retriever.get_relevant_documents(text) if result and len(result) > 0: # Assuming result is a list of objects and each object has a page_content attribute all_page_content = "\n\n".join([item.page_content for item in result]) return all_page_content return None except Exception as e: logger.error( "Error occurred while getting relevant docs", incident="get_relevant_docs", payload=text, error=str(e), ) return None ``` As long as i use chatgpt embeddings and chat models, i get the correct outputs. Once i switch to ollama, none of my retrievers are working. I see the documents being ingested to qdrant, which means embeddings are working, but retrievers fail to retrieve any document ### Who can help? 
_No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ss ### Expected behavior retrievers should be able to fetch the documents from qdrant irrespective of embedding models being used
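Editor's note: when documents ingest fine but retrieval returns nothing after switching embedding models, the usual cause is that the stored vectors and the query vectors come from different models — different dimensionality and an incomparable vector space — so the collection must be re-ingested with the query-time model. A defensive check (the dimensions shown are typical but model-dependent):

```python
def check_embedding_compat(collection_dim: int, query_vector) -> None:
    """Fail fast when query vectors can't match the ingested collection."""
    if len(query_vector) != collection_dim:
        raise ValueError(
            f"query vector has {len(query_vector)} dims but the collection "
            f"was built with {collection_dim}; re-ingest with the same embedding model"
        )

# e.g. OpenAI ada-002 emits 1536-d vectors, while many Ollama models emit 4096-d
check_embedding_compat(4096, [0.0] * 4096)  # compatible
try:
    check_embedding_compat(1536, [0.0] * 4096)
except ValueError as err:
    print(err)
```

Qdrant exposes each collection's vector size in its collection info, so the check can be run once at startup against `embedding.embed_query("probe")`.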
Retrievers don't seem to work properly with ollama
https://api.github.com/repos/langchain-ai/langchain/issues/14485/comments
4
2023-12-09T08:06:14Z
2023-12-10T04:17:57Z
https://github.com/langchain-ai/langchain/issues/14485
2,033,718,595
14,485
[ "langchain-ai", "langchain" ]
### Feature request It would be nice to write code for any generic LLM to use in a given chain, being able to specify its provider and parameters in a config file outside the source code. Is there a built-in way to do so? What major roadblocks would you see in doing that? ### Motivation Most examples that I see in the docs or over the Internet tend to specify from code whether the LLM to be used in a given chain has to be from openAI, anthropic, LLaMA, etc., resulting in different codebases, while it would be great to write one single unified codebase and compare the performance of different open-source or API LLMs by simply changing one line of config. This would be especially relevant when the specific LLM provider is not set or known from the get-go and multiple ones should be compared to find the most suitable one in terms of perfomance, efficiency and cost for any given LLM-driven application. ### Your contribution I don't have time for a PR now, but since I have been doing a similar thing for our private codebase at the laboratory, combining openAI, anthropic and LLaMA under a single unified spell without using the langchain framework, I may be interested in supporting that in the future as the framework already has lots of interesting features in terms of tooling, memory and storage.
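Editor's note: a common pattern for this is a small registry keyed by a config value, so the provider becomes a one-line config change. The lambdas below return `(class_name, params)` stand-ins to keep the sketch self-contained; in real code they would construct e.g. `ChatOpenAI(**params)`:

```python
PROVIDERS = {
    "openai":    lambda params: ("ChatOpenAI", params),
    "anthropic": lambda params: ("ChatAnthropic", params),
    "llama":     lambda params: ("LlamaCpp", params),
}

def llm_from_config(config: dict):
    """Pick a provider from config; swapping models becomes a config edit, not a code edit."""
    provider = config["provider"]
    if provider not in PROVIDERS:
        raise ValueError(f"unknown provider {provider!r}; choose from {sorted(PROVIDERS)}")
    return PROVIDERS[provider](config.get("params", {}))

print(llm_from_config({"provider": "openai", "params": {"temperature": 0}}))
```

Because LangChain chat models share a common interface, the object returned by each branch can be dropped into the same chain unchanged, which is what makes side-by-side provider comparisons cheap.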
generic codebase?
https://api.github.com/repos/langchain-ai/langchain/issues/14483/comments
7
2023-12-09T07:06:17Z
2024-04-09T16:14:17Z
https://github.com/langchain-ai/langchain/issues/14483
2,033,679,268
14,483
[ "langchain-ai", "langchain" ]
### System Info langchain==0.0.335 ### Who can help? @hwchase17 @agol ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction 1. Create new object of `langchain.llms.vertexai.VertexAI` class. 2. Add attribute `request_timeout` with some value. 3. Run the model. Timeout specified is not respected. ### Expected behavior VertexAI request should timeout as per the `request_timeout` specified and then retried as per retry configuration. This works flawlessly for `AzureChatOpenAI`.
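Editor's note: until the wrapper honors a timeout natively, calls can be bounded from the outside with a worker thread. This is a blunt workaround — the hung request keeps running in the background thread until it finishes on its own:

```python
import concurrent.futures
import time

def call_with_timeout(fn, timeout, *args, **kwargs):
    """Run fn in a worker thread; raise concurrent.futures.TimeoutError after `timeout` s."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        return pool.submit(fn, *args, **kwargs).result(timeout=timeout)

def slow_predict(prompt):
    time.sleep(1.0)  # stand-in for a hanging Vertex AI request
    return "answer"

try:
    call_with_timeout(slow_predict, 0.2, "hello")
except concurrent.futures.TimeoutError:
    print("timed out")
```

Combining this with a retry loop approximates the timeout-plus-retry behavior described for `AzureChatOpenAI`.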
VertexAI class doesn't support request_timeout
https://api.github.com/repos/langchain-ai/langchain/issues/14478/comments
2
2023-12-09T03:28:39Z
2024-03-18T16:08:03Z
https://github.com/langchain-ai/langchain/issues/14478
2,033,581,284
14,478
[ "langchain-ai", "langchain" ]
### Issue with current documentation: Hi, I'm trying to understand _**Pinecone.from_documents**_ as shown [here ](https://python.langchain.com/docs/integrations/vectorstores/pinecone). I saw it used the same way in another tutorial as well and it seems like it should be calling OpenAIEmbeddings.embed_documents at some point and then upserting the texts and vectors to the pinecone index, but I can't find the actual python file for the method or any documentation anywhere. I'm trying to adapt it and need to understand how **_from_documents_** works. Specifically I am trying to do the following and am curious if the method could be used in this way My goals: - Use Pinecone.from_documents(documents, embeddings, index_name=index_name, namespace=namespace) in a for loop where in each iteration namespace change and the "chunked" documents are different. Each namespace represents a different chunking strategy for an experiment. Note that I defined embeddings=OpenAIEmbeddingsWrapper(model=embedding_model_name) before the for loop, where OpenAIEmbeddingsWrapper is a wrapper class around the OpenAIEmbeddings object, and embedding_model_name="text-embedding-ada-002". Why I'm asking: - It seems like the number of texts and vectors that I can extract from OpenAIEmbeddingsWrapper (using the OpenAIEmbeddings.embed_documents method) for each namespace doesn't match what's in Pinecone (610 texts/vectors from the method vs 1251 in Pinecone). ### Idea or request for content: Can you share some details about Pinecone.from_documents and if it can be used multiple times to upsert chunk documents onto a pinecone index?
DOC: Need clarity on Pinecone.from_documents and OpenAIEmbeddings.
https://api.github.com/repos/langchain-ai/langchain/issues/14472/comments
7
2023-12-08T22:48:39Z
2024-03-18T16:07:59Z
https://github.com/langchain-ai/langchain/issues/14472
2,033,402,079
14,472
[ "langchain-ai", "langchain" ]
### System Info OS: Using docker image amd64/python:3.10-slim Python Version: 3.10.13 Langchain Version: 0.0.336 OpenAI Version: 0.27.7 Tenacity Version: 4.65.0 ### Who can help? @agola11 @hwchase17 ### Information - [X] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction When I try to use an llm with a custom openai_api_base argument within an agent, the agent appears to be attempting to access the **OpenAI** API endpoint instead of the custom one I have specified. Running: llm = ChatOpenAI(default_headers={"api-key":"**REDACTED**", openai_api_base="**REDACTED**", openai_api_key="none").bind(stop=["\nObservation"]) tools = [] tools.append(Tool.from_function(func=self.get_scores, name="get_scores", description="function to get scores")) prompt = PromptTemplate.from_template("""Answer the following questions as best you can. You have access to the following tools: {tools} Use the following format: Question: the input question you must answer Thought: you should always think about what to do Action: the action to take, should be one of [{tool_names}] Action Input: the input to the action Observation: the result of the action ... (this Thought/Action/Action Input/Observation can repeat N times) Thought: I now know the final answer Final Answer: the final answer to the original input question Begin! 
Question: {input} Thought:{agent_scratchpad}""") prompt = prompt.partial(tools=render_text_description(tools), tool_names=", ".join([t.name for t in tools]), ) agent = ( { "input": lambda x: x["input"], "agent_scratchpad": lambda x: format_log_to_str(x["intermediate_steps"]), } | prompt | llm | ReActSingleInputOutputParser() ) agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True) output = agent_executor.invoke({"input":"foo"}) yields: `File "/usr/local/lib/python3.10/site-packages/langchain/chains/base.py", line 87, in invoke return self( File "/usr/local/lib/python3.10/site-packages/langchain/chains/base.py", line 310, in __call__ raise e File "/usr/local/lib/python3.10/site-packages/langchain/chains/base.py", line 304, in __call__ self._call(inputs, run_manager=run_manager) File "/usr/local/lib/python3.10/site-packages/langchain/agents/agent.py", line 1245, in _call next_step_output = self._take_next_step( File "/usr/local/lib/python3.10/site-packages/langchain/agents/agent.py", line 1032, in _take_next_step output = self.agent.plan( File "/usr/local/lib/python3.10/site-packages/langchain/agents/agent.py", line 385, in plan output = self.runnable.invoke(inputs, config={"callbacks": callbacks}) File "/usr/local/lib/python3.10/site-packages/langchain/schema/runnable/base.py", line 1427, in invoke input = step.invoke( File "/usr/local/lib/python3.10/site-packages/langchain/schema/runnable/base.py", line 2787, in invoke return self.bound.invoke( File "/usr/local/lib/python3.10/site-packages/langchain/chat_models/base.py", line 142, in invoke self.generate_prompt( File "/usr/local/lib/python3.10/site-packages/langchain/chat_models/base.py", line 459, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) File "/usr/local/lib/python3.10/site-packages/langchain/chat_models/base.py", line 349, in generate raise e File "/usr/local/lib/python3.10/site-packages/langchain/chat_models/base.py", line 339, in generate 
self._generate_with_cache( File "/usr/local/lib/python3.10/site-packages/langchain/chat_models/base.py", line 492, in _generate_with_cache return self._generate( File "/usr/local/lib/python3.10/site-packages/langchain/chat_models/openai.py", line 422, in _generate response = self.completion_with_retry( File "/usr/local/lib/python3.10/site-packages/langchain/chat_models/openai.py", line 352, in completion_with_retry return _completion_with_retry(**kwargs) File "/usr/local/lib/python3.10/site-packages/tenacity/__init__.py", line 289, in wrapped_f return self(f, *args, **kw) File "/usr/local/lib/python3.10/site-packages/tenacity/__init__.py", line 379, in __call__ do = self.iter(retry_state=retry_state) File "/usr/local/lib/python3.10/site-packages/tenacity/__init__.py", line 325, in iter raise retry_exc.reraise() File "/usr/local/lib/python3.10/site-packages/tenacity/__init__.py", line 158, in reraise raise self.last_attempt.result() File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 451, in result return self.__get_result() File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result raise self._exception File "/usr/local/lib/python3.10/site-packages/tenacity/__init__.py", line 382, in __call__ result = fn(*args, **kwargs) File "/usr/local/lib/python3.10/site-packages/langchain/chat_models/openai.py", line 350, in _completion_with_retry return self.client.create(**kwargs) File "/usr/local/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create return super().create(*args, **kwargs) File "/usr/local/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create response, _, api_key = requestor.request( File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 230, in request resp, got_stream = self._interpret_response(result, stream) File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 624, in 
_interpret_response self._interpret_response_line( File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 687, in _interpret_response_line raise self.handle_error_response( File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 337, in handle_error_response raise error.APIError( openai.error.APIError: Invalid response object from API: '{\n "detail": "No authorization token provided",\n "status": 401,\n "title": "Unauthorized",\n "type": "about:blank"\n}\n' (HTTP response code was 401)` When I change the openai_api_base to something nonsensical, the same error is returned, making me think that it is using OpenAI's API base and not the custom one specified. ### Expected behavior I would expect the agent to work as shown here: https://python.langchain.com/docs/modules/agents/agent_types/react
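Editor's note: before blaming the agent plumbing, it helps to confirm which base URL the OpenAI client will actually resolve, since a constructor argument, the `OPENAI_API_BASE` environment variable, and the library default all compete. The precedence below is an assumption to illustrate the check — verify it against your `openai` client version:

```python
import os

DEFAULT_API_BASE = "https://api.openai.com/v1"

def resolve_api_base(explicit=None):
    """Assumed precedence: constructor argument, then OPENAI_API_BASE, then the default."""
    return explicit or os.environ.get("OPENAI_API_BASE") or DEFAULT_API_BASE

os.environ["OPENAI_API_BASE"] = "https://my-proxy.internal/v1"
print(resolve_api_base())                    # https://my-proxy.internal/v1
print(resolve_api_base("https://other/v1"))  # https://other/v1
```

Printing `llm.openai_api_base` right before invoking the executor is a quick way to see whether the custom value survived construction and the `.bind()` call.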
agent executor not using custom openai_api_base
https://api.github.com/repos/langchain-ai/langchain/issues/14470/comments
10
2023-12-08T21:41:20Z
2024-03-18T16:07:54Z
https://github.com/langchain-ai/langchain/issues/14470
2,033,344,391
14,470
[ "langchain-ai", "langchain" ]
### System Info langchain==0.0.346 ### Who can help? @hwch ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [X] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction https://github.com/langchain-ai/langchain/pull/14266 added a deprecation for `input_variables` argument of `PromptTemplate.from_file`. It was released in 0.0.346. However, https://github.com/langchain-ai/langchain/blob/v0.0.346/libs/langchain/langchain/chains/llm_summarization_checker/base.py#L20-L31 still uses `input_variables` at the module-level. So now this `DeprecationWarning` is emitted simply for importing from LangChain. Can we fix this, so LangChain isn't emitting `DeprecationWarning`s? ### Expected behavior I expect LangChain to not automatically emit `DeprecationWarning`s when importing from it
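Editor's note: as a stop-gap while the import-time warning exists, that specific warning can be filtered around the import. Sketch with a fake import standing in for `import langchain.chains.llm_summarization_checker` so the mechanics are runnable on their own:

```python
import warnings

def import_quietly(importer):
    """Run `importer` with the known input_variables deprecation warning suppressed."""
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        warnings.filterwarnings(
            "ignore", message=".*input_variables.*", category=DeprecationWarning
        )
        importer()
    return caught

def fake_import():
    # stand-in for: import langchain.chains.llm_summarization_checker
    warnings.warn("`input_variables' is deprecated and ignored", DeprecationWarning)

assert import_quietly(fake_import) == []
```

This is a workaround, not a fix — the real fix is for the module-level `PromptTemplate.from_file(..., input_variables=...)` calls to stop passing the deprecated argument.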
DeprecationWarning: `input_variables' is deprecated and ignored
https://api.github.com/repos/langchain-ai/langchain/issues/14467/comments
1
2023-12-08T21:10:46Z
2023-12-13T01:43:28Z
https://github.com/langchain-ai/langchain/issues/14467
2,033,316,778
14,467
[ "langchain-ai", "langchain" ]
### Issue you'd like to raise. Hi, I'm using the llama2 model for SQL, and I modified it to work directly using LLM. I also added more tables to test the model, and I'm modifying the prompt. When I do this, it generates longer queries based on the question I send, but when the query is very long, it doesn't complete generating everything. I checked the response character count, and on average, it returns around 400+- maximum response characters (I tried increasing and removing the character limit, but it didn't solve the problem). llm = Replicate( model=llama2_13b, input={"temperature": 0.1, "max_length": 2000, Incorrect return type. Retrieve the performance of Miles Norris in his most recent game. ```sql SELECT pg_player_game_stats.Points, pg_player_game_stats.Rebounds, pg_player_game_stats.Assists FROM nba_roster_temp JOIN player_game_stats ON nba_roster_temp."PlayerID" = player_game_stats."PlayerID" WHERE nba_roster As you can see, the SQL query it returns is incomplete, and here I'm also counting the number of characters. number character: 401 Any help you can provide would be appreciated ### Suggestion: _No response_
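Editor's note: a consistent ~400-character ceiling points at the model call's output-token cap (`max_length` here; many Replicate models use `max_new_tokens` instead — check the model's input schema) rather than LangChain itself. Alongside raising that cap, visibly truncated completions can be detected and retried; a heuristic sketch (the fence strings are built up programmatically only to keep this example well-formed):

```python
FENCE = "`" * 3 + "sql"  # the opening ```sql marker
CLOSE = "`" * 3          # the closing fence

def looks_truncated_sql(text: str) -> bool:
    """Heuristic: a complete generation closes its code fence or ends its statement."""
    stripped = text.rstrip()
    if FENCE in stripped and not stripped.endswith(CLOSE):
        return True
    return not (stripped.endswith(";") or stripped.endswith(CLOSE))

complete = FENCE + "\nSELECT Points FROM player_game_stats;\n" + CLOSE
truncated = FENCE + "\nSELECT pg_player_game_stats.Points FROM nba_roster"
assert not looks_truncated_sql(complete)
assert looks_truncated_sql(truncated)
```

When the check fires, re-issuing the request with a larger output cap (or a shorter schema in the prompt) usually recovers the full query.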
Issue: llama2-sql for long queries doesn't return the complete query.
https://api.github.com/repos/langchain-ai/langchain/issues/14465/comments
1
2023-12-08T19:31:22Z
2024-03-16T16:14:16Z
https://github.com/langchain-ai/langchain/issues/14465
2,033,203,436
14,465