| column | type | min | max |
|---|---|---|---|
| issue_owner_repo | list (length) | 2 | 2 |
| issue_body | string (length) | 0 | 261k |
| issue_title | string (length) | 1 | 925 |
| issue_comments_url | string (length) | 56 | 81 |
| issue_comments_count | int64 | 0 | 2.5k |
| issue_created_at | string (length) | 20 | 20 |
| issue_updated_at | string (length) | 20 | 20 |
| issue_html_url | string (length) | 37 | 62 |
| issue_github_id | int64 | 387k | 2.46B |
| issue_number | int64 | 1 | 127k |
[ "langchain-ai", "langchain" ]
### Checked other resources

- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

### Example Code

```python
import json
import urllib.request

from langchain_community.agent_toolkits.openapi.spec import reduce_openapi_spec

urllib.request.urlretrieve("https://api.snyk.io/rest/openapi/2024-02-21", "openapi_spec.json")

with open("openapi_spec.json", encoding='utf-8') as f:
    raw_openapi_spec = json.load(f)

reduced_spec = reduce_openapi_spec(raw_openapi_spec)
```

### Error Message and Stack Trace (if applicable)

```
Traceback (most recent call last):
  File "/Users/matt/Dev/langchain_test/./test_case.py", line 14, in <module>
    reduced_spec = reduce_openapi_spec(raw_openapi_spec)
  File "/Users/matt/Dev/langchain_test/venv/lib/python3.10/site-packages/langchain_community/agent_toolkits/openapi/spec.py", line 45, in reduce_openapi_spec
    endpoints = [
  File "/Users/matt/Dev/langchain_test/venv/lib/python3.10/site-packages/langchain_community/agent_toolkits/openapi/spec.py", line 46, in <listcomp>
    (name, description, dereference_refs(docs, full_schema=spec))
  File "/Users/matt/Dev/langchain_test/venv/lib/python3.10/site-packages/langchain_core/utils/json_schema.py", line 74, in dereference_refs
    else _infer_skip_keys(schema_obj, full_schema)
  File "/Users/matt/Dev/langchain_test/venv/lib/python3.10/site-packages/langchain_core/utils/json_schema.py", line 55, in _infer_skip_keys
    keys += _infer_skip_keys(v, full_schema)
  File "/Users/matt/Dev/langchain_test/venv/lib/python3.10/site-packages/langchain_core/utils/json_schema.py", line 55, in _infer_skip_keys
    keys += _infer_skip_keys(v, full_schema)
  File "/Users/matt/Dev/langchain_test/venv/lib/python3.10/site-packages/langchain_core/utils/json_schema.py", line 51, in _infer_skip_keys
    ref = _retrieve_ref(v, full_schema)
  File "/Users/matt/Dev/langchain_test/venv/lib/python3.10/site-packages/langchain_core/utils/json_schema.py", line 18, in _retrieve_ref
    out = out[int(component)]
KeyError: 400
```

### Description

When using a JSON OpenAPI file, `reduce_openapi_spec` fails with a `KeyError` on numerical strings. This is due to https://github.com/langchain-ai/langchain/pull/14745, which casts all digits to ints. The fix is probably to do `if component.isdigit() and isinstance(out[component], int):`, but I wasn't sure how you'd want to fix it.

### System Info

```
pip freeze | grep langchain
langchain==0.1.9
langchain-community==0.0.24
langchain-core==0.1.27
```
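A minimal sketch of the failure mode and one possible guard (hypothetical code, not the actual `langchain_core` implementation): `_retrieve_ref` walks a `$ref` path and casts digit components to `int` so list indices resolve, but OpenAPI response maps are dicts keyed by the *string* `"400"`, so `out[int("400")]` raises `KeyError`. One way to guard is to cast only when the string key is absent:

```python
# Sketch of a $ref resolver that prefers string keys over int indices.
# `retrieve_ref` is a hypothetical stand-in for langchain_core's private
# _retrieve_ref, written here only to illustrate the reported bug and fix.

def retrieve_ref(ref: str, full_schema: dict):
    components = ref.split("/")[1:]  # drop the leading "#"
    out = full_schema
    for component in components:
        # Only cast to int when the string key is not present,
        # i.e. we are actually indexing into a list.
        if component.isdigit() and component not in out:
            out = out[int(component)]
        else:
            out = out[component]
    return out


schema = {"components": {"responses": {"400": {"description": "Bad request"}}}}
print(retrieve_ref("#/components/responses/400", schema))
# -> {'description': 'Bad request'}
```

List indexing still works, e.g. `retrieve_ref("#/items/1", {"items": ["a", "b"]})` resolves to `"b"`.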
Key error when using dereference_refs from langchain_community.agent_toolkits.openapi.spec when using JSON OpenAPI spec file
https://api.github.com/repos/langchain-ai/langchain/issues/18325/comments
0
2024-02-29T14:32:09Z
2024-06-08T16:13:46Z
https://github.com/langchain-ai/langchain/issues/18325
2,161,405,297
18,325
[ "langchain-ai", "langchain" ]
### Checked other resources

- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

### Example Code

```python
from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel

class TestModel(BaseModel):
    test_attribute: str

if __name__ == '__main__':
    pydantic_output_parser = PydanticOutputParser(pydantic_object=TestModel)
    print(pydantic_output_parser)
```

### Error Message and Stack Trace (if applicable)

```
pydantic.v1.error_wrappers.ValidationError: 1 validation error for PydanticOutputParser
pydantic_object
  subclass of BaseModel expected (type=type_error.subclass; expected_class=BaseModel)
```

### Description

Hello,

Starting from version 0.1.6, existing pydantic v2 `BaseModel` child classes trigger an error. This problem is directly associated with a particular [commit](https://github.com/langchain-ai/langchain/commit/852973d6169fee3e80f3b361453dd14980dd8797#diff-f86ea1cb10fcab7e1a505fab4aad6b8e4ad3fc33128f7d0c74474166c66bb608): in the file `pydantic.py` (now relocated to core), the attribute `pydantic_object` of the `PydanticOutputParser` class was changed from a generic type variable `T = TypeVar("T", bound=BaseModel)` to `Type[BaseModel]`. Consequently, this change shifted the validator used in constructors from `any_class_validator` to `make_class_validator`, so all child `BaseModel` classes are now checked for being subclasses of the pydantic v1 `BaseModel`.

It worked on 0.1.5. I am aware that [Example 2 in the documentation](https://python.langchain.com/docs/guides/pydantic_compatibility) advises against using pydantic v2 BaseModels. However, **I find it unfortunate that upgrading to version 0.1.6 breaks compatibility with pydantic v2.**

Thank you.

### System Info

python 3.10
langchain >= 0.1.6
mac Sonoma 14.1.2
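A workaround sketch implied by the validation error, assuming pydantic v2 is installed (its v1 compatibility API lives under `pydantic.v1`): a model built on the v1 shim passes the "subclass of BaseModel" check that langchain >= 0.1.6 performs against pydantic v1.

```python
# Assumption: pydantic v2 is installed, so the v1 API is importable
# as pydantic.v1. This model satisfies the v1 subclass check.
from pydantic.v1 import BaseModel

class TestModel(BaseModel):
    test_attribute: str

# Hypothetical usage (requires langchain to be installed):
# from langchain.output_parsers import PydanticOutputParser
# parser = PydanticOutputParser(pydantic_object=TestModel)  # no ValidationError
print(TestModel(test_attribute="ok"))
```

This sidesteps the constructor error but means the model loses pydantic v2 features, which is exactly the compatibility regression the issue complains about.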
Breaking pydantic v2 compatibility in output parsers from 0.1.6
https://api.github.com/repos/langchain-ai/langchain/issues/18322/comments
1
2024-02-29T13:26:59Z
2024-04-26T14:41:02Z
https://github.com/langchain-ai/langchain/issues/18322
2,161,257,560
18,322
[ "langchain-ai", "langchain" ]
### Checked other resources

- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

### Example Code

```python
from langchain.prompts import PromptTemplate
from langchain_community.llms.huggingface_endpoint import HuggingFaceEndpoint
from langchain.chains import LLMChain

llm = HuggingFaceEndpoint(
    repo_id="google/flan-t5-large",
    temperature=0,
    max_new_tokens=250,
    huggingfacehub_api_token=HUGGINGFACE_TOKEN
)

prompt_tpl = PromptTemplate(
    template="What is the good name for a company that makes {product}",
    input_variables=["product"]
)

chain = LLMChain(llm=llm, prompt=prompt_tpl)
print(chain.invoke("colorful socks"))
```

### Error Message and Stack Trace (if applicable)

```
Traceback (most recent call last):
  File "/Users/michaelchu/Documents/agent/agent.py", line 20, in <module>
    print(chain.invoke("colorful socks"))
  File "/Users/michaelchu/Documents/agent/venv/lib/python3.12/site-packages/langchain/chains/base.py", line 163, in invoke
    raise e
  File "/Users/michaelchu/Documents/agent/venv/lib/python3.12/site-packages/langchain/chains/base.py", line 153, in invoke
    self._call(inputs, run_manager=run_manager)
  File "/Users/michaelchu/Documents/agent/venv/lib/python3.12/site-packages/langchain/chains/llm.py", line 103, in _call
    response = self.generate([inputs], run_manager=run_manager)
  File "/Users/michaelchu/Documents/agent/venv/lib/python3.12/site-packages/langchain/chains/llm.py", line 115, in generate
    return self.llm.generate_prompt(
  File "/Users/michaelchu/Documents/agent/venv/lib/python3.12/site-packages/langchain_core/language_models/llms.py", line 568, in generate_prompt
    return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/michaelchu/Documents/agent/venv/lib/python3.12/site-packages/langchain_core/language_models/llms.py", line 741, in generate
    output = self._generate_helper(
  File "/Users/michaelchu/Documents/agent/venv/lib/python3.12/site-packages/langchain_core/language_models/llms.py", line 605, in _generate_helper
    raise e
  File "/Users/michaelchu/Documents/agent/venv/lib/python3.12/site-packages/langchain_core/language_models/llms.py", line 592, in _generate_helper
    self._generate(
  File "/Users/michaelchu/Documents/agent/venv/lib/python3.12/site-packages/langchain_core/language_models/llms.py", line 1177, in _generate
    self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
  File "/Users/michaelchu/Documents/agent/venv/lib/python3.12/site-packages/langchain_community/llms/huggingface_endpoint.py", line 256, in _call
    response = self.client.post(
  File "/Users/michaelchu/Documents/agent/venv/lib/python3.12/site-packages/huggingface_hub/inference/_client.py", line 242, in post
    hf_raise_for_status(response)
  File "/Users/michaelchu/Documents/agent/venv/lib/python3.12/site-packages/huggingface_hub/utils/_errors.py", line 358, in hf_raise_for_status
    raise BadRequestError(message, response=response) from e
huggingface_hub.utils._errors.BadRequestError: (Request ID: AxsbrX3A4JxXuBdYC7fv-) Bad request:
The following `model_kwargs` are not used by the model: ['return_full_text', 'stop', 'watermark', 'stop_sequences'] (note: typos in the generate arguments will also show up in this list)
```

### Description

Hi, folks. I'm just trying to run a simple `LLMChain` and getting the Bad Request error due to `model_kwargs` checking. I found several existing issues raising the same problem; however, it hasn't been fixed in the latest release of langchain. Please help to take a look, thanks!

**_Previous issue raised_**: https://github.com/langchain-ai/langchain/issues/10848

### System Info

```
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.2.0: Wed Nov 15 21:53:18 PST 2023; root:xnu-10002.61.3~2/RELEASE_ARM64_T6000
> Python Version: 3.12.1 (main, Feb 14 2024, 09:50:51) [Clang 15.0.0 (clang-1500.1.0.2.5)]

Package Information
-------------------
> langchain_core: 0.1.27
> langchain: 0.1.9
> langchain_community: 0.0.24
> langsmith: 0.1.10

Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
```
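A hypothetical diagnostic sketch (not langchain code): call the Hugging Face Inference API directly with only parameters the flan-t5 (text2text-generation) pipeline accepts, to confirm the model itself works and that the 400 comes from the extra kwargs the wrapper injects (`return_full_text`, `stop`, `watermark`, `stop_sequences`). The URL and parameter names here are assumptions based on the public Inference API, not taken from the issue.

```python
# Diagnostic sketch: build a minimal payload without the kwargs the
# error message flags as unsupported by the model.
import json
import urllib.request

API_URL = "https://api-inference.huggingface.co/models/google/flan-t5-large"  # assumed endpoint

def build_payload(prompt: str) -> dict:
    # Deliberately omit return_full_text, stop, watermark, stop_sequences.
    return {
        "inputs": prompt,
        "parameters": {"max_new_tokens": 250, "temperature": 0.1},
    }

def query(prompt: str, token: str) -> str:
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())[0]["generated_text"]

print(build_payload("What is a good name for a company that makes colorful socks?"))
```

If this direct call succeeds while the `HuggingFaceEndpoint` wrapper fails, the problem is the wrapper's default invocation parameters rather than the model or the token.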
Bad request: The following `model_kwargs` are not used by the model: ['return_full_text', 'stop', 'watermark', 'stop_sequences'] (note: typos in the generate arguments will also show up in this list)
https://api.github.com/repos/langchain-ai/langchain/issues/18321/comments
14
2024-02-29T13:19:43Z
2024-07-16T13:24:54Z
https://github.com/langchain-ai/langchain/issues/18321
2,161,242,032
18,321
[ "langchain-ai", "langchain" ]
### Checked other resources

- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

### Example Code

```python
from pprint import pprint
from langchain_core.utils.function_calling import convert_to_openai_function
from pydantic import v1

class BiblioExtraction(v1.BaseModel):
    title: str
    authors: list[str]

f = convert_to_openai_function(BiblioExtraction)
pprint(f)
```

### Error Message and Stack Trace (if applicable)

```python
{'description': '',
 'name': 'BiblioExtraction',
 'parameters': {'properties': {'authors': {'items': {'type': 'string'},
                                           'type': 'array'}},
                'required': ['title', 'authors'],
                'type': 'object'}}
```

### Description

I am trying to use `langchain.chains.openai_functions.create_structured_output_runnable` to extract citation information from text, and then I found that I cannot extract the `title` field. After investigation, I discovered that langchain [introduced an undocumented behavior](https://github.com/langchain-ai/langchain/commit/ef42d9d559bf8e9c7de85f20fe9a67cc78c3030a#diff-5244d0e3a3878e2e86fbdac70ff585a1f956939c46ef65e53c77fd896bc03bd6R53) in version 0.1.4, which deletes the `title` field in the schema.

### System Info

```
❯ python -m langchain_core.sys_info

System Information
------------------
> OS: Linux
> OS Version: #1 SMP Thu Oct 5 21:02:42 UTC 2023
> Python Version: 3.10.13 (main, Nov 16 2023, 15:58:41) [GCC 11.4.0]

Package Information
-------------------
> langchain_core: 0.1.27
> langchain: 0.1.9
> langchain_community: 0.0.24
> langsmith: 0.1.5
> langchain_openai: 0.0.5

Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
```
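A sketch of the reported behavior (hypothetical code, not the actual langchain implementation): pydantic's JSON schema attaches a `"title"` metadata key to every property, and a naive recursive cleanup that strips `"title"` keys also deletes a genuine field *named* `title`.

```python
# strip_titles is a hypothetical stand-in for the schema cleanup the
# linked commit introduced; it reproduces the reported field loss.

def strip_titles(schema):
    if isinstance(schema, dict):
        return {k: strip_titles(v) for k, v in schema.items() if k != "title"}
    if isinstance(schema, list):
        return [strip_titles(v) for v in schema]
    return schema

# Roughly what pydantic emits for BiblioExtraction:
schema = {
    "title": "BiblioExtraction",
    "type": "object",
    "properties": {
        "title": {"title": "Title", "type": "string"},
        "authors": {"title": "Authors", "items": {"type": "string"}, "type": "array"},
    },
    "required": ["title", "authors"],
}

print(strip_titles(schema)["properties"])
# -> {'authors': {'items': {'type': 'string'}, 'type': 'array'}}
```

Because `"properties"` is itself a dict, the key `"title"` inside it is removed along with the metadata keys, which matches the output shown in the issue: `required` still lists `title`, but the property is gone.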
convert_to_openai_function will drop `title` field in `output_schema` if it's a Pydantic model.
https://api.github.com/repos/langchain-ai/langchain/issues/18319/comments
3
2024-02-29T11:45:13Z
2024-07-04T16:07:58Z
https://github.com/langchain-ai/langchain/issues/18319
2,161,064,475
18,319
[ "langchain-ai", "langchain" ]
### Checked other resources

- [X] I searched the LangChain documentation with the integrated search.
- [X] I added a very descriptive title to this issue.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
- [X] I used the GitHub search to find a similar question and didn't find it.

### Example Code

Here is a simple version of the code I'm trying to use.

```python
llm = LlamaCpp(model_path=MODEL_PATH, n_ctx=2048, n_batch=512, max_tokens=-1)
history = RedisChatMessageHistory(session_id=session_id, url="redis://localhost:6379")
memory = ConversationTokenBufferMemory(llm=llm, max_token_limit=2048, chat_memory=history)
conversation = LLMChain(llm=llm, prompt=prompt, memory=memory)
```

### Error Message and Stack Trace (if applicable)

_No response_

### Description

While developing a chatbot, I used `ConversationTokenBufferMemory` with `RedisChatMessageHistory` to provide support for different users. The goal is to store and retrieve history as needed and load it into memory to pass into the chain. But when it reaches `max_token_limit`, instead of forgetting the left-most history to maintain the context length window, the app crashes because the context length exceeded the limit.

What's wrong? `ConversationTokenBufferMemory` is supposed to automatically keep track of the total tokens used in history and only keep the latest `n` tokens to avoid crashes, right?

It might be a bug or a lack of support in the `ConversationTokenBufferMemory` class. I am expecting a solution that might sound something like this:

> If `chat_memory` is an object of `RedisChatMessageHistory`, then it should retrieve the entire history data from Redis as it is supposed to, but clean up the history data to keep the last `max_token_limit` tokens before passing it as history in the chain.

This way, when LangChain is used in production and has to juggle between multiple different users, this might come in handy.

### System Info

Platform: Windows
Python: 3.10.9
Langchain: Latest while posting this.
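The trimming described above can be sketched as a small helper (hypothetical, not part of langchain): keep only the most recent messages whose combined token count stays under `max_token_limit`, dropping from the left. This is roughly what `ConversationTokenBufferMemory` does for its in-memory buffer, applied here to history loaded from Redis before it enters the chain.

```python
# Hypothetical helper: drop the oldest messages until the total token
# count fits within max_tokens. count_tokens stands in for the LLM's
# real tokenizer (e.g. llm.get_num_tokens).

def trim_messages(messages, max_tokens, count_tokens):
    """Forget left-most messages until the history fits in max_tokens."""
    kept = list(messages)
    total = sum(count_tokens(m) for m in kept)
    while kept and total > max_tokens:
        total -= count_tokens(kept.pop(0))  # drop the oldest message
    return kept

# Toy token counter (whitespace words), just for illustration:
naive_count = lambda m: len(m.split())

history = ["hello there friend", "how are you today", "fine thanks"]
print(trim_messages(history, max_tokens=6, count_tokens=naive_count))
# -> ['how are you today', 'fine thanks']
```

In the Redis setup above, this would run on the messages returned by `RedisChatMessageHistory` before they are handed to the chain, so the context window is never exceeded regardless of how much history accumulated in Redis.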
ConversationTokenBufferMemory with RedisChatMessageHistory doesn't work, as it crashes due to exceeding the context limit.
https://api.github.com/repos/langchain-ai/langchain/issues/18303/comments
6
2024-02-29T06:50:29Z
2024-03-01T06:34:17Z
https://github.com/langchain-ai/langchain/issues/18303
2,160,530,801
18,303
[ "langchain-ai", "langchain" ]
### Checklist

- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).

### Issue with current documentation:

The documentation page for `BigQueryVectorSearch` is broken: https://python.langchain.com/docs/integrations/vectorstores/bigquery_vector_search. Anything I can do to help fix it?

### Idea or request for content:

_No response_
DOC: documentation page for `BigQueryVectorSearch` is broken
https://api.github.com/repos/langchain-ai/langchain/issues/18296/comments
2
2024-02-29T01:22:55Z
2024-06-08T16:13:40Z
https://github.com/langchain-ai/langchain/issues/18296
2,160,164,321
18,296
[ "langchain-ai", "langchain" ]
### Checked other resources

- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

### Example Code

```python
class CustomSQLTool(BaseTool):
    name = "SQL_TOOL"
    description = "useful when you need to answer questions residing on spark tables"
    llm = AzureChatOpenAI()
    k = 30

    def _run(
        self, query: str, run_manager: Optional[CallbackManagerForToolRun] = None
    ) -> str:
        """Use the tool."""
        spark_sql = SparkSQL(schema='schema_name', include_tables=['table_name'])
        toolkit = SparkSQLToolkit(db=spark_sql, llm=self.llm)
        spark_sql_agent_executor = create_spark_sql_agent(
            llm=chatllm,  # chatllm is the Azure OpenAI GPT model reference
            toolkit=toolkit,
            verbose=True,
            top_k=self.k,
            agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
            early_stopping_method="generate",
            agent_executor_kwargs={"handle_parsing_errors": True}
        )
        return spark_sql_agent_executor.run(query)

    async def _arun(
        self, query: str, run_manager: Optional[AsyncCallbackManagerForToolRun] = None
    ) -> str:
        """Use the tool asynchronously."""
        raise NotImplementedError("CustomSQLTool does not support async")
```

### Error Message and Stack Trace (if applicable)

```
[68f78b9qrn] 2024-02-27T20:13:01.+0000 ERROR src.mlflowserving.scoring_server Encountered an unexpected error while evaluating the model. Verify that the input is compatible with the model for inference. Error ''CustomSQLTool' object has no attribute 'is_single_input''
[68f78b9qrn] Traceback (most recent call last):
[68f78b9qrn]   File "/opt/conda/envs/mlflow-env/lib/python3.9/site-packages/src/mlflowserving/scoring_server/__init__.py", line 457, in transformation
[68f78b9qrn]     raw_predictions = model.predict(data, params=params)
[68f78b9qrn]   File "/opt/conda/envs/mlflow-env/lib/python3.9/site-packages/mlflow/pyfunc/__init__.py", line 492, in predict
[68f78b9qrn]     return _predict()
[68f78b9qrn]   File "/opt/conda/envs/mlflow-env/lib/python3.9/site-packages/mlflow/pyfunc/__init__.py", line 478, in _predict
[68f78b9qrn]     return self._predict_fn(data, params=params)
[68f78b9qrn]   File "/opt/conda/envs/mlflow-env/lib/python3.9/site-packages/mlflow/pyfunc/model.py", line 473, in predict
[68f78b9qrn]     return self.python_model.predict(self.context, self._convert_input(model_input))
[68f78b9qrn]   File "<command-1070239644302950>", line 10, in predict
[68f78b9qrn]   File "/opt/conda/envs/mlflow-env/lib/python3.9/site-packages/langchain/agents/conversational/base.py", line 109, in from_llm_and_tools
[68f78b9qrn]     cls._validate_tools(tools)
[68f78b9qrn]   File "/opt/conda/envs/mlflow-env/lib/python3.9/site-packages/langchain/agents/conversational/base.py", line 91, in _validate_tools
[68f78b9qrn]     validate_tools_single_input(cls.__name__, tools)
[68f78b9qrn]   File "/opt/conda/envs/mlflow-env/lib/python3.9/site-packages/langchain/agents/utils.py", line 9, in validate_tools_single_input
[68f78b9qrn]     if not tool.is_single_input:
[68f78b9qrn] AttributeError: 'CustomSQLTool' object has no attribute 'is_single_input'
```

### Description

I am using langchain==0.0.330 (also tried 0.0.347) and creating a custom SQL class to query Spark tables in my Azure Databricks environment. My custom SQL class is a child class of `BaseTool`. I am registering the whole thing as an MLflow custom pyfunc model.

Registering the model in the model registry succeeds, and when I load and query it, it works fine. But when I deploy it as a model serving endpoint, it fails, stating that my class `CustomSQLTool` does not have the attribute `is_single_input`. As far as I understand, this attribute is provided by tool validation in the `BaseTool` class, so I should not need to override it. Both model serving and the model registry accept the same input string wrapped in a pandas DataFrame.

### System Info

langchain 0.0.330 and python 3.9 on Databricks runtime 12.2 ML.
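A defensive sketch (hypothetical, not langchain code): `is_single_input` is a computed `@property` on `BaseTool`, so the `AttributeError` above suggests the serving environment reconstructed the tool against a different langchain version where the property does not exist. A `getattr` guard mirrors what `validate_tools_single_input` checks, defaulting to "single input" when the attribute is absent:

```python
# DeserializedTool stands in for a tool object that lost its class
# attributes on a serialize/deserialize round-trip (e.g. a version
# mismatch between the registry and serving environments).

class DeserializedTool:
    name = "SQL_TOOL"
    # note: no is_single_input attribute

tools = [DeserializedTool()]

# Guarded check: treat missing is_single_input as single-input rather
# than raising AttributeError, as the serving traceback above does.
all_single = all(getattr(t, "is_single_input", True) for t in tools)
print(all_single)
# -> True
```

The real fix is more likely pinning the same langchain version in the MLflow model's environment as in the registering notebook, so the deserialized class carries the expected attributes.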
Custom Agent Class fails with object has no attribute 'is_single_input'
https://api.github.com/repos/langchain-ai/langchain/issues/18292/comments
1
2024-02-29T00:21:13Z
2024-06-08T16:13:35Z
https://github.com/langchain-ai/langchain/issues/18292
2,160,102,070
18,292
[ "langchain-ai", "langchain" ]
### Checked other resources

- [x] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

### Example Code

Using this code gives the first type of exception "You must provide an embedding function to compute embeddings."

```python
import chromadb
from chromadb.utils import embedding_functions
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_openai import ChatOpenAI
from langchain_community.vectorstores import Chroma

metadata_field_info = [
    AttributeInfo(
        name="date",
        description="Date description",
        type="integer",
    ),
]
document_content_description = "Content description"

llm = ChatOpenAI(temperature=0, api_key=key)

class ChromaDbInstance:
    def __init__(self) -> None:
        self._client = chromadb.HttpClient(
            host=f"http://localhost:{CHROMA_DB_PORT}",
        )
        self._collection_texts = self._client.get_or_create_collection(
            name=CHROMA_MAIN_COLLECTION_NAME
        )
        self._retriever = SelfQueryRetriever.from_llm(
            llm,
            Chroma(
                client=self._client,
                collection_name=CHROMA_MAIN_COLLECTION_NAME
            ),
            document_content_description,
            metadata_field_info,
        )

    @property
    def count(self):
        return self._collection_texts.count()

    def add_text_to_db(self, text):
        try:
            self._collection_texts.add(
                documents=[text], ids=[str(self.count + 1)]
            )
        except Exception as e:
            print("-- Error adding new text to db --", e)

    def query_db(self, query):
        return self._retriever.invoke(query)
```

If I pass the `embedding_function` to the Chroma initialization, I get another error: "AttributeError: 'ONNXMiniLM_L6_V2' object has no attribute 'embed_query'"

```python
Chroma(
    client=self._client,
    collection_name=CHROMA_MAIN_COLLECTION_NAME,
    embedding_function=embedding_functions.DefaultEmbeddingFunction(),
),
```

### Error Message and Stack Trace (if applicable)

Initial case when not providing any embedding_function to `langchain_community.vectorstores.Chroma`:

```
Traceback (most recent call last):
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 419, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
    return await self.app(scope, receive, send)
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/fastapi/applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/applications.py", line 123, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/middleware/errors.py", line 186, in __call__
    raise exc
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/middleware/errors.py", line 164, in __call__
    await self.app(scope, receive, _send)
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/routing.py", line 758, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/routing.py", line 778, in app
    await route.handle(scope, receive, send)
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/routing.py", line 299, in handle
    await self.app(scope, receive, send)
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/routing.py", line 79, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/routing.py", line 74, in app
    response = await func(request)
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/fastapi/routing.py", line 278, in app
    raw_response = await run_endpoint_function(
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/fastapi/routing.py", line 191, in run_endpoint_function
    return await dependant.call(**values)
  File "/Users/heithvald/Documents/development/projects/project21/python/server.py", line 20, in echo_endpoint
    return {"response": db.query_db(req.prompt)}
  File "/Users/heithvald/Documents/development/projects/project21/python/modules/vector_db.py", line 72, in query_db
    return self._retriever.invoke(query)
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/langchain_core/retrievers.py", line 141, in invoke
    return self.get_relevant_documents(
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/langchain_core/retrievers.py", line 244, in get_relevant_documents
    raise e
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/langchain_core/retrievers.py", line 237, in get_relevant_documents
    result = self._get_relevant_documents(
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py", line 186, in _get_relevant_documents
    docs = self._get_docs_with_query(new_query, search_kwargs)
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py", line 160, in _get_docs_with_query
    docs = self.vectorstore.search(query, self.search_type, **search_kwargs)
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/langchain_core/vectorstores.py", line 159, in search
    return self.similarity_search(query, **kwargs)
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/langchain_community/vectorstores/chroma.py", line 348, in similarity_search
    docs_and_scores = self.similarity_search_with_score(
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/langchain_community/vectorstores/chroma.py", line 429, in similarity_search_with_score
    results = self.__query_collection(
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/langchain_core/utils/utils.py", line 35, in wrapper
    return func(*args, **kwargs)
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/langchain_community/vectorstores/chroma.py", line 155, in __query_collection
    return self._collection.query(
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/chromadb/api/models/Collection.py", line 327, in query
    valid_query_embeddings = self._embed(input=valid_query_texts)
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/chromadb/api/models/Collection.py", line 629, in _embed
    raise ValueError(
ValueError: You must provide an embedding function to compute embeddings.
https://docs.trychroma.com/embeddings
```

Case when providing the default embedding_function to the Chroma initializer:

```
Traceback (most recent call last):
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 419, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
    return await self.app(scope, receive, send)
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/fastapi/applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/applications.py", line 123, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/middleware/errors.py", line 186, in __call__
    raise exc
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/middleware/errors.py", line 164, in __call__
    await self.app(scope, receive, _send)
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/routing.py", line 758, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/routing.py", line 778, in app
    await route.handle(scope, receive, send)
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/routing.py", line 299, in handle
    await self.app(scope, receive, send)
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/routing.py", line 79, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/routing.py", line 74, in app
    response = await func(request)
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/fastapi/routing.py", line 278, in app
    raw_response = await run_endpoint_function(
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/fastapi/routing.py", line 191, in run_endpoint_function
    return await dependant.call(**values)
  File "/Users/heithvald/Documents/development/projects/project21/python/server.py", line 20, in echo_endpoint
    return {"response": db.query_db(req.prompt)}
  File "/Users/heithvald/Documents/development/projects/project21/python/modules/vector_db.py", line 75, in query_db
    return self._retriever.invoke(query)
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/langchain_core/retrievers.py", line 141, in invoke
    return self.get_relevant_documents(
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/langchain_core/retrievers.py", line 244, in get_relevant_documents
    raise e
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/langchain_core/retrievers.py", line 237, in get_relevant_documents
    result = self._get_relevant_documents(
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py", line 186, in _get_relevant_documents
    docs = self._get_docs_with_query(new_query, search_kwargs)
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py", line 160, in _get_docs_with_query
    docs = self.vectorstore.search(query, self.search_type, **search_kwargs)
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/langchain_core/vectorstores.py", line 159, in search
    return self.similarity_search(query, **kwargs)
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/langchain_community/vectorstores/chroma.py", line 348, in similarity_search
    docs_and_scores = self.similarity_search_with_score(
  File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/langchain_community/vectorstores/chroma.py", line 437, in similarity_search_with_score
    query_embedding = self._embedding_function.embed_query(query)
AttributeError: 'ONNXMiniLM_L6_V2' object has no attribute 'embed_query'
```

### Description

* I'm trying to use a
SelfQueryRetriever with a Chroma vector store. * I expect it to work without passing the `embedding_function` arg, or when I pass `embedding_function=embedding_functions.DefaultEmbeddingFunction()` explicitly to the Chroma constructor * Instead I get errors when calling `retriever.invoke(text)` I've debugged this and the problem is most likely in this line: https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/vectorstores/chroma.py#L128 If nothing is passed as the `embedding_function`, Chroma initializes normally and just queries the chroma collection, and inside the collection chromadb uses the right call for its embedding functions in its own source code: `return self._embedding_function(input=input)`. At least this works for the default embedding function provided by chromadb. Please fix it. ### System Info System Information ------------------ > OS: Darwin > OS Version: Darwin Kernel Version 23.1.0: Mon Oct 9 21:27:24 PDT 2023; root:xnu-10002.41.9~6/RELEASE_ARM64_T6000 > Python Version: 3.11.8 (v3.11.8:db85d51d3e, Feb 6 2024, 18:02:37) [Clang 13.0.0 (clang-1300.0.29.30)] Package Information ------------------- > langchain_core: 0.1.27 > langchain: 0.1.9 > langchain_community: 0.0.24 > langsmith: 0.1.10 > langchain_openai: 0.0.8 > chromadb: 0.4.24
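Until this is fixed upstream, one possible workaround is a small adapter that gives a chromadb-style embedding function (called as `fn(input=[...])`) the `embed_documents`/`embed_query` methods LangChain's vector-store code expects. This is a hedged sketch, not a library API — the `ChromaEmbeddingAdapter` name and the stand-in `fake_ef` are invented for illustration; in real use you would wrap `embedding_functions.DefaultEmbeddingFunction()` and pass the adapter as Chroma's `embedding_function`.

```python
class ChromaEmbeddingAdapter:
    """Expose LangChain's Embeddings-style interface on top of a
    chromadb-style embedding function, which is invoked as fn(input=[...])."""

    def __init__(self, chroma_ef):
        self._ef = chroma_ef

    def embed_documents(self, texts):
        # chromadb embedding functions take a list of texts and return a list of vectors
        return self._ef(input=texts)

    def embed_query(self, text):
        # the method the similarity search in the traceback actually calls
        return self._ef(input=[text])[0]


# hypothetical stand-in for chromadb's DefaultEmbeddingFunction (ONNXMiniLM_L6_V2)
def fake_ef(input):
    return [[float(len(t)), 0.0] for t in input]


adapter = ChromaEmbeddingAdapter(fake_ef)
print(adapter.embed_query("hello"))  # [5.0, 0.0] — a single vector, not a list of vectors
```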
Trying to use Chroma vectorstore with default embedding_function results in an error
https://api.github.com/repos/langchain-ai/langchain/issues/18291/comments
1
2024-02-29T00:16:15Z
2024-07-15T07:48:57Z
https://github.com/langchain-ai/langchain/issues/18291
2160097448
18291
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code ```python from dotenv import load_dotenv from langchain_community.vectorstores.faiss import FAISS from azure import azure_embeddings load_dotenv() if __name__ == '__main__': db = FAISS.load_local("faiss_index", azure_embeddings) retriever = db.as_retriever() ``` previously saved index with ```python db = FAISS.from_documents(documents, azure_embeddings) # save to disk db.save_local("faiss_index") ``` ### Error Message and Stack Trace (if applicable) Traceback (most recent call last): File "<REDACTED>/app.py", line 29, in <module> db = FAISS.load_local("faiss_index", azure_embeddings) File "<REDACTED>/site-packages/langchain_community/vectorstores/faiss.py", line 1110, in load_local index = faiss.read_index( AttributeError: module 'faiss' has no attribute 'read_index' ### Description Cannot import index for my RAG app. It was working fine, for a few days. ### System Info ```bash langchain==0.1.9 langchain-community==0.0.24 langchain-core==0.1.26 langchain-openai==0.0.6 langchainhub==0.1.14 faiss-cpu==1.7.4 ``` python 3.9.18 macOS
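This AttributeError usually means the `faiss` module being imported is not the one shipped by `faiss-cpu` — e.g. a shadowing file or a different distribution named plain `faiss` resolves first on the path. A hedged, stdlib-only diagnostic (no assumptions about the environment) is to check where the module resolves from and whether `read_index` is present:

```python
import importlib.util

spec = importlib.util.find_spec("faiss")
if spec is None:
    print("faiss is not importable; install it with `pip install faiss-cpu`")
else:
    # spec.origin shows which file actually provides the module, which
    # reveals a shadowing faiss.py or an unexpected distribution
    print("faiss resolves to:", spec.origin)
    import faiss
    print("read_index available:", hasattr(faiss, "read_index"))
```

If `read_index` turns out to be missing, uninstalling everything named faiss and reinstalling a single distribution (`pip uninstall -y faiss faiss-cpu` then `pip install faiss-cpu`) typically resolves it.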
Cannot import faiss index from a file
https://api.github.com/repos/langchain-ai/langchain/issues/18285/comments
2
2024-02-28T21:12:41Z
2024-06-19T16:07:38Z
https://github.com/langchain-ai/langchain/issues/18285
2159876248
18285
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code from langchain_core.output_parsers import StrOutputParser from langchain_core.runnables import RunnablePassthrough from langchain_openai import ChatOpenAI model = ChatOpenAI() sql_response = ( RunnablePassthrough.assign(schema=get_schema) | prompt | model.bind(stop=["\nSQLResult:"]) | StrOutputParser() ) How to add ConversationBufferMemory to this code ### Error Message and Stack Trace (if applicable) _No response_ ### Description I'm trying to add Memory to this chain but I'm unable to do it. ### System Info No particular system info
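For reference, the general shape of the answer — a hedged, dependency-free sketch in which the class and function names are invented for illustration, not LangChain APIs: load the history into the chain's input variables before the model call, and save the turn afterwards. In LangChain itself this is what wiring `ConversationBufferMemory` via `RunnablePassthrough.assign` or using `RunnableWithMessageHistory` automates.

```python
class BufferMemory:
    """Toy stand-in for ConversationBufferMemory: a growing message list."""

    def __init__(self):
        self.messages = []

    def load(self):
        return list(self.messages)

    def save(self, user_input, output):
        self.messages += [("human", user_input), ("ai", output)]


def run_with_memory(chain, memory, user_input):
    # 1) inject history into the chain's inputs, 2) run, 3) persist the turn
    output = chain({"chat_history": memory.load(), "input": user_input})
    memory.save(user_input, output)
    return output


# stand-in "chain" that just reports how much history it received
fake_chain = lambda inputs: f"saw {len(inputs['chat_history'])} prior messages"

mem = BufferMemory()
print(run_with_memory(fake_chain, mem, "hi"))     # saw 0 prior messages
print(run_with_memory(fake_chain, mem, "again"))  # saw 2 prior messages
```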
How to add memory
https://api.github.com/repos/langchain-ai/langchain/issues/18256/comments
0
2024-02-28T10:56:59Z
2024-02-28T11:06:44Z
https://github.com/langchain-ai/langchain/issues/18256
2158680482
18256
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code ```python prompt_template = ChatPromptTemplate.from_messages( [ ("system", prompt), ("user", "{input}"), MessagesPlaceholder(variable_name="agent_scratchpad"), ]) llm = ChatOpenAI(model = llm) llm_with_tools = llm.bind_tools(tools) input = { "input": lambda x: x["input"], "agent_scratchpad": lambda x: format_to_openai_tool_messages( x["intermediate_steps"] ), } agent = ( input | prompt_template | llm_with_tools | OpenAIToolsAgentOutputParser() ) agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True) for chunk in self.agent_executor.stream({"input": input}): if "output" in chunk: return chunk["output"] ``` ### Error Message and Stack Trace (if applicable) Error in LangChainTracer.on_chain_error callback: TracerException('No indexed run ID da9518a3-b6ce-4525-a985-10b3ad41fef9.') ### Description I am trying to trace my LangChain runs by using LangSmith, but I get the following error at the end of the flow: `Error in LangChainTracer.on_chain_error callback: TracerException('No indexed run ID da9518a3-b6ce-4525-a985-10b3ad41fef9.')` The environment variable is set up as environment variable using `.env`. The run is however logged in LangSmith and I can see it, but the error still appears. 
### System Info System Information ------------------ > OS: Darwin > OS Version: Darwin Kernel Version 22.6.0: Fri Sep 15 13:41:28 PDT 2023; root:xnu-8796.141.3.700.8~1/RELEASE_ARM64_T6000 > Python Version: 3.9.6 (default, Dec 7 2023, 05:42:47) [Clang 15.0.0 (clang-1500.1.0.2.5)] Package Information ------------------- > langchain_core: 0.1.26 > langchain: 0.1.9 > langchain_community: 0.0.22 > langsmith: 0.1.5 > langchain_openai: 0.0.7 Packages not installed (Not Necessarily a Problem) -------------------------------------------------- The following packages were not found: > langgraph > langserve
Issue: Error in LangChainTracer.on_chain_error callback: TracerException('No indexed run ID da9518a3-b6ce-4525-a985-10b3ad41fef9.')
https://api.github.com/repos/langchain-ai/langchain/issues/18254/comments
2
2024-02-28T10:38:58Z
2024-07-17T12:31:43Z
https://github.com/langchain-ai/langchain/issues/18254
2158645069
18254
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code ```python3 import os from langchain_core.prompts import PromptTemplate, ChatPromptTemplate from langchain_nvidia_ai_endpoints import ChatNVIDIA, NVIDIAEmbeddings from langchain_core.output_parsers import StrOutputParser os.environ["NVIDIA_API_KEY"] = "nvapi-*" prompt_template = "You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Please ensure that your responses are positive in nature." llm = ChatNVIDIA(model="mixtral_8x7b", max_tokens=32) pt = PromptTemplate.from_template(prompt_template) chain = pt | llm | StrOutputParser() resp = chain.stream({"context_str": "", "query_str": "What is nvlink"}) count = sum(1 for _ in (print(chunk) for chunk in resp)) print("Token count:", count) ``` ### Error Message and Stack Trace (if applicable) _No response_ ### Description I am using ChatNVIDIA to build a chain. I have specified `max_tokens` as 32 and am trying to generate a response, but the number of tokens generated is more than that. In the above example it's around 78, even though the max_tokens limit is already set. Note that when I simply create a ChatNVIDIA object and invoke it directly, it works as expected. ```python3 llm = ChatNVIDIA(model="mixtral_8x7b", max_tokens=32) llm.invoke("Hi") ``` ### System Info Version details are below ``` $ pip3 freeze | grep langchain langchain==0.0.352 langchain-community==0.0.7 langchain-core==0.1.3 langchain-nvidia-ai-endpoints==0.0.1 langchain-nvidia-trt==0.0.1rc0 ``` I am using Ubuntu 22.04
max_token limit is not followed when using chain
https://api.github.com/repos/langchain-ai/langchain/issues/18248/comments
2
2024-02-28T06:32:56Z
2024-04-20T13:57:47Z
https://github.com/langchain-ai/langchain/issues/18248
2158220922
18248
[ "langchain-ai", "langchain" ]
### Checklist - [X] I added a very descriptive title to this issue. - [X] I included a link to the documentation page I am referring to (if applicable). ### Issue with current documentation: [Document correction](https://python.langchain.com/docs/expression_language/streaming) The tip "An LCEL chain constructed using using non-streaming components, will still be able to stream in a lot of cases, with streaming of partial output starting after the last non-streaming step in the chain." repeats the word "using". ### Idea or request for content: _No response_
DOC: https://python.langchain.com/docs/expression_language/streaming Document correction
https://api.github.com/repos/langchain-ai/langchain/issues/18247/comments
1
2024-02-28T06:10:33Z
2024-06-08T16:13:27Z
https://github.com/langchain-ai/langchain/issues/18247
2158194261
18247
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code ``` from langchain_community.utilities.stackexchange import StackExchangeAPIWrapper # can also replicate with: from langchain_community.tools.stackexchange.tool import StackExchangeTool api = StackExchangeAPIWrapper() api.run("window function mysql") ``` ### Error Message and Stack Trace (if applicable) ``` File c:\ProgramData\Anaconda3\envs\llm\lib\site-packages\langchain_community\utilities\stackexchange.py:55, in <listcomp>(.0) 50 for question in questions: 51 res_text = f"Question: {question['title']}\n{question['excerpt']}" 52 relevant_answers = [ 53 answer 54 for answer in answers ---> 55 if answer["question_id"] == question["question_id"] 56 ] 57 accepted_answers = [ 58 answer for answer in relevant_answers if answer["is_accepted"] 59 ] 60 if relevant_answers: KeyError: 'question_id' ``` ### Description Currently the [stackexchange API wrapper](https://api.python.langchain.com/en/latest/_modules/langchain_community/utilities/stackexchange.html#StackExchangeAPIWrapper) may generate key errors, because some answers do not have `question_id` (which is an optional field according to [the official API docs](https://api.stackexchange.com/docs/types/search-excerpt)). My quick one-liner fix: In the `StackExchangeAPIWrapper` class source code at line 55, replace `if answer['question_id'] == question['question_id']` with `if answer.get('question_id', '') == question.get('question_id', '')` ### System Info langchain==0.1.5 langchain-community==0.0.17 stackapi==0.3.0
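The proposed one-liner can be reproduced and checked without touching the API at all — the point is simply that `dict.get` tolerates the optional field while direct indexing raises. Hedged sketch: the sample data below is made up, shaped like the API's search-excerpt items.

```python
question = {"question_id": 42, "title": "window function mysql"}
answers = [
    {"item_type": "answer", "question_id": 42, "is_accepted": True},
    {"item_type": "answer", "is_accepted": False},  # question_id is optional
]

# original code: answer["question_id"] — raises KeyError on the second item
try:
    _ = [a for a in answers if a["question_id"] == question["question_id"]]
except KeyError as e:
    print("direct indexing fails:", e)

# proposed fix: .get() skips items lacking the field instead of raising
relevant = [a for a in answers if a.get("question_id") == question.get("question_id")]
print(len(relevant))  # 1
```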
StackExchangeTool generates keyerror: 'question_id'
https://api.github.com/repos/langchain-ai/langchain/issues/18242/comments
1
2024-02-28T02:25:59Z
2024-08-02T15:29:58Z
https://github.com/langchain-ai/langchain/issues/18242
2157950437
18242
[ "langchain-ai", "langchain" ]
### Checklist - [X] I added a very descriptive title to this issue. - [X] I included a link to the documentation page I am referring to (if applicable). ### Issue with current documentation: <img width="1362" alt="image" src="https://github.com/langchain-ai/langchain/assets/40649016/19e08b39-50bb-4f24-bacd-5d4a2d550b0d"> I found there is an issue with the next navigation button on [LangSmith](https://python.langchain.com/docs/langsmith/) Doc page. What expected is https://python.langchain.com/docs/langsmith/walkthrough ### Idea or request for content: It is easy to fix it
DOC: Wrong 'Next' Navigation for LangSmith
https://api.github.com/repos/langchain-ai/langchain/issues/18241/comments
0
2024-02-28T02:15:41Z
2024-06-08T16:13:16Z
https://github.com/langchain-ai/langchain/issues/18241
2157942271
18241
[ "langchain-ai", "langchain" ]
### Checklist - [X] I added a very descriptive title to this issue. - [X] I included a link to the documentation page I am referring to (if applicable). ### Issue with current documentation: Missing explanation of how to use LanceDB with a filter from the LangChain API, https://python.langchain.com/docs/integrations/vectorstores/lancedb Based on the documentation I am trying to use LangChain with LanceDB as the vector database. Here is how I instantiate the database: db = lancedb.connect("./data/lancedb") table = db.create_table("my_docs", data=[ {"vector": embeddings.embed_query(chunks[0].page_content), "text": chunks[0].page_content, "id": "1", "file":"bb"} ], mode="overwrite") I then load more documents with different `file` metadata: `vectordb = LanceDB.from_documents(chunks[1:], embeddings, connection=table)` Then another batch, also with a different `file` metadata value: `vectordb = LanceDB.from_documents(chunks_ma, embeddings, connection=table)` I can see they were loaded successfully and my vector db has the correct number of documents: `print(len(db['my_docs']))` `11` Now I want to create a retriever that will be able to pre-filter the data based on the `file` value. I tried this: retriever = vectordb.as_retriever(search_kwargs={"k": 6, 'filter':{'file': 'bb'}}) retrieved_docs = retriever.invoke("My query regarding something") But when I check the outputs of the query invocation it's still giving me documents with the wrong `file` metadata values: `print(retrieved_docs[0].metadata['file'])` `'cc'` It was supposed to query only the documents in the database matching the `file` value. Is there something I am doing wrong, or what is the correct approach to filter the values before running a retrieval query from the LanceDB vector DB using the LangChain API? I think these guidelines are missing from the documentation and would greatly help. ### Idea or request for content: _No response_
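Until the documentation clarifies how (or whether) `search_kwargs['filter']` reaches LanceDB, one stop-gap is to over-fetch and filter on metadata after retrieval. The sketch below is dependency-free and hedged — the `post_filter` helper and sample docs are invented; with LanceDB directly, a SQL-style `where` clause on the table search is the native route.

```python
def post_filter(docs, **wanted):
    """Keep only docs whose metadata matches every requested key/value."""
    return [
        d for d in docs
        if all(d["metadata"].get(k) == v for k, v in wanted.items())
    ]


# stand-in for what retriever.invoke() returned: mixed `file` values
retrieved = [
    {"page_content": "chunk A", "metadata": {"file": "cc"}},
    {"page_content": "chunk B", "metadata": {"file": "bb"}},
    {"page_content": "chunk C", "metadata": {"file": "bb"}},
]

filtered = post_filter(retrieved, file="bb")
print([d["page_content"] for d in filtered])  # ['chunk B', 'chunk C']
```

When over-fetching, set `k` larger than the final count you need, since the filter is applied after the vector search rather than before it.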
DOC: Missing explanation of lancedb usage with filtering
https://api.github.com/repos/langchain-ai/langchain/issues/18235/comments
1
2024-02-28T01:00:59Z
2024-06-05T13:54:07Z
https://github.com/langchain-ai/langchain/issues/18235
2157877691
18235
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code ```python from langchain_openai import ChatOpenAI from langchain_core.prompts import ChatPromptTemplate from langchain_core.output_parsers import StrOutputParser import json llm = ChatOpenAI() prompt = ChatPromptTemplate.from_messages([ ("system", "You are a helpful assistant."), ("user", "{input}") ]) output_parser = StrOutputParser() chain = prompt | llm | output_parser json.dumps(chain.to_json()) ``` ### Error Message and Stack Trace (if applicable) ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) [<ipython-input-35-d8b5c0c45d51>](https://localhost:8080/#) in <cell line: 1>() ----> 1 json.dumps(chain.to_json()) 3 frames [/usr/lib/python3.10/json/encoder.py](https://localhost:8080/#) in default(self, o) 177 178 """ --> 179 raise TypeError(f'Object of type {o.__class__.__name__} ' 180 f'is not JSON serializable') 181 TypeError: Object of type ChatPromptTemplate is not JSON serializable ``` ### Description I'm trying to serialize a very simple chain to JSON, but the library is complaining that ChatPromptTemplate is not serializable. Removing that node from the chain, I then get the error `ChatOpenAI` is not JSON serializable. ### System Info langchain==0.1.9 langchain-community==0.0.24 langchain-core==0.1.27 langchain-openai==0.0.8
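Plain `json.dumps` only handles primitives, so any object in the chain triggers this TypeError; LangChain ships its own serializer for runnables (`langchain_core.load.dumps`), which is the intended route. The dependency-free sketch below just demonstrates the mechanism — the `NotSerializable` class is invented — and shows the generic `default=` escape hatch that the stdlib `json` module provides.

```python
import json


class NotSerializable:
    def __init__(self):
        self.x = 1


obj = {"node": NotSerializable()}

try:
    json.dumps(obj)  # raises: Object of type NotSerializable is not JSON serializable
except TypeError as e:
    print("plain json.dumps:", e)

# a fallback encoder lets dumps descend into arbitrary objects
print(json.dumps(obj, default=lambda o: vars(o)))  # {"node": {"x": 1}}
```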
Can't Serialize Simple Chain
https://api.github.com/repos/langchain-ai/langchain/issues/18232/comments
1
2024-02-27T21:54:32Z
2024-02-28T22:49:01Z
https://github.com/langchain-ai/langchain/issues/18232
2157673208
18232
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code from langchain_openai import AzureChatOpenAI or go to pdb, and simply run: import langchain_openai ### Error Message and Stack Trace (if applicable) ImportError: cannot import name 'PydanticOutputParser' from 'langchain_core.output_parsers' (/mnt/.venv/lib/python3.8/site-packages/langchain_core/output_parsers/__init__.py) ### Description Simply importing langchain_openai based on the recommendation from : https://python.langchain.com/docs/integrations/chat/azure_chat_openai breaks the code. I can see that langchain_openai is trying to import 'PydanticOutputParser' from 'langchain_core.output_parsers' but no such export exists. I uninstalled everything and reinstalled all the packages. ### System Info langchain==0.1.9 langchain-community==0.0.24 langchain-core==0.1.27 langchain-openai==0.0.8
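An ImportError like this across the langchain packages is usually version skew: `langchain_openai` expects an export that the installed `langchain_core` release does not provide (note the traceback path is a Python 3.8 venv, while the reported versions target newer Pythons — the interpreter may be resolving older wheels). A hedged first step is to print exactly what that environment has installed; the snippet is stdlib-only:

```python
from importlib.metadata import PackageNotFoundError, version

# mismatched releases of these four packages are the usual cause of
# "cannot import name ... from langchain_core" errors
for pkg in ("langchain", "langchain-core", "langchain-community", "langchain-openai"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")
```

If the versions disagree with what `pip freeze` claims, upgrading them together in the same environment (`pip install -U langchain langchain-core langchain-community langchain-openai`) usually realigns the exports.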
Can not import langchain and langchain_openai
https://api.github.com/repos/langchain-ai/langchain/issues/18228/comments
5
2024-02-27T20:51:40Z
2024-07-15T16:06:38Z
https://github.com/langchain-ai/langchain/issues/18228
2157577357
18228
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code ```python bedrock_client = boto3_session.client("bedrock-runtime") model = "anthropic.claude-v2:1" model_kwargs = { #AI21 "max_tokens_to_sample": 4096, "temperature": 0.2, "top_p": 1, "top_k": 250, "stop_sequences": [], } llm = Bedrock( model_id=model, client = bedrock_client, model_kwargs=model_kwargs ) prompt = ChatPromptTemplate.from_template("Tell me about {topic}. Explain your reasoning.") output_parser = StrOutputParser() joke_chain = ( prompt | llm | output_parser ) ``` ### Error Message and Stack Trace (if applicable) raise ValueError("Streaming must be set to True for async operations. ") ValueError: Streaming must be set to True for async operations. ### Description I have the chain above wrapped in a LangServe endpoint. The **stream** endpoint works correctly. The **invoke** endpoint fails with a 500 Internal Server error and with the error shown above in the logs. But, invoke, is supposed to be a synchronous operation. ### System Info pip freeze langchain==0.1.9 langchain-community==0.0.24 langchain-core==0.1.27 langserve==0.0.43 platform AWS Linux python version 3.11.6
Bedrock llm fails with invoke endpoint in LangServe
https://api.github.com/repos/langchain-ai/langchain/issues/18224/comments
2
2024-02-27T19:49:47Z
2024-06-19T16:07:33Z
https://github.com/langchain-ai/langchain/issues/18224
2157486258
18224
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code This is the code that reproduces the issue. ``` tools = [ # a series of tools ] llm = ChatOpenAI(model='gpt-3.5-turbo-0125') prompt = ChatPromptTemplate.from_messages([ ("system", system_prompt), MessagesPlaceholder(variable_name="chat_history"), ("user", "{input}"), MessagesPlaceholder(variable_name="agent_scratchpad") ]) memory = ConversationBufferMemory( return_messages=True, memory_key="chat_history", output_key='output', ) # with this, get_openai_callback works (but returns a deprecation warning) agent_good = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt) # with this, get_openai_callback does NOT work properly (it prints zero used tokens) agent_bad = create_openai_functions_agent(llm=llm, tools=tools, prompt=prompt) agent_executor = AgentExecutor( agent=agent_bad, # if I use agent_good, the cost is calculated properly tools=tools, verbose=False, memory=memory, return_intermediate_steps=True ) with get_openai_callback() as cb: result = agent_executor.invoke({'input': 'Hi'}) print(result) print(cb) ``` ### Error Message and Stack Trace (if applicable) _No response_ ### Description If I use OpenAIFunctionsAgent to create an agent, I get a deprecation warning, telling me to use create_openai_functions_agent instead. But, as of langchain 0.1.9 if I use create_openai_functions_agent to create an agent, the OpenAI callback to track costs stops working properly (it returns 0 used tokens and 0 dollars cost). ### System Info ``` python==3.10 tiktoken==0.6.0 openai==1.12.0 langchain==0.1.9 langchain-core==0.1.27 langchain-community==0.0.24 langchain-openai==0.0.8 pydantic==2.6.2 chromadb==0.4.23 ```
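One workaround sometimes suggested for zero-token readings is attaching the handler explicitly (e.g. `langchain_community.callbacks.OpenAICallbackHandler` passed via `config={"callbacks": [...]}`) rather than relying on the context variable that `get_openai_callback` sets. The dependency-free sketch below only illustrates that explicit-handler pattern — `UsageHandler` and `run_agent` are invented stand-ins, not LangChain APIs:

```python
class UsageHandler:
    """Toy stand-in for OpenAICallbackHandler: accumulates token usage."""

    def __init__(self):
        self.total_tokens = 0

    def on_llm_end(self, usage):
        self.total_tokens += usage.get("prompt_tokens", 0)
        self.total_tokens += usage.get("completion_tokens", 0)


def run_agent(inputs, callbacks=()):
    # stand-in for AgentExecutor.invoke: each LLM call reports its usage
    # to every explicitly attached handler
    for cb in callbacks:
        cb.on_llm_end({"prompt_tokens": 12, "completion_tokens": 30})
    return {"output": "Hello!"}


handler = UsageHandler()
run_agent({"input": "Hi"}, callbacks=[handler])
print(handler.total_tokens)  # 42
```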
get_openai_callback doesn't work with create_openai_functions_agent (it returns 0 used tokens / dollars)
https://api.github.com/repos/langchain-ai/langchain/issues/18212/comments
8
2024-02-27T18:09:25Z
2024-04-03T04:13:34Z
https://github.com/langchain-ai/langchain/issues/18212
2157306347
18212
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code ``` from langchain_core.prompts import FewShotPromptTemplate, PromptTemplate catalog = "samples" database = "nyctaxi" examples = [ {"input": "longest trip along with pickup zip and drop zip", "query": "select trip_distance, pickup_zip, dropoff_zip from samples.nyctaxi.trips order by trip_distance desc limit 1"}, { "input": "what is the total amount earned for trip on 2016-01-01", "query": "SELECT SUM(fare_amount) FROM samples.nyctaxi.trips WHERE tpep_pickup_datetime >= '2016-01-01 00:00:00+00:00' AND tpep_pickup_datetime < '2016-01-02 00:00:00+00:00", }, { "input": "pick up date wise total amount earned and its pickup zip code and drop zip code", "query": """SELECT tpep_pickup_datetime, SUM(fare_amount) as total_fare_amount, pickup_zip, dropoff_zip FROM samples.nyctaxi.trips GROUP BY tpep_pickup_datetime, pickup_zip, dropoff_zip""", }, { "input": "What was the average trip distance for each day during the month of January 2016?", "query": """SELECT date_trunc('day', tpep_pickup_datetime) as pickup_day, avg(trip_distance) as avg_distance FROM samples.nyctaxi.trips WHERE tpep_pickup_datetime >= '2016-01-01' AND tpep_pickup_datetime < '2016-02-01' GROUP BY pickup_day ORDER BY pickup_day""" } ,{ "input":"list all the trips with pickup zip as 10003" , "query":"select * from samples.nyctaxi.trips where pickup_zip = 10003"} ] prefix_string="""You are a DATABRICKS SQL expert. 
Given an input question, create a syntactically correct Databricks SQL query to run, in which the table name should be prefixed with the database name {database} and catalog {catalog}; make sure to add correct columns or column names of the tables without any special characters, then look at the results of the query. Unless otherwise specified, do not return more than {top_k} rows.\n\nHere is the relevant table info: {table_info}\n\nBelow are a number of examples of questions and their corresponding SQL queries.""" example_prompt = PromptTemplate.from_template("User input: {input}\nSQL query: {query}") prompt = FewShotPromptTemplate( examples=examples, example_prompt=example_prompt, prefix=prefix_string, suffix="User input: {input}\nSQL query: ", input_variables=["input", "top_k", "table_info","catalog","database"], ) # Initialize the SQL database connection db = SQLDatabase.from_databricks(catalog=catalog, schema=database) tables_list= ",".join(db.get_usable_table_names()) # prompt.format(input="how many artists are there?", top_k=3, table_info=tables_list,catalog=catalog,database=database) # Create a language model instance that interacts with the Databricks SQL database llm = ChatDatabricks(endpoint="databricks-mixtral-8x7b-instruct", max_tokens=200) agent = create_sql_query_chain(llm=llm, prompt=prompt, db=db) q="What was the average trip distance for each month and year " response=agent.invoke({"question":q ,"top_k":3,"table_info":tables_list,"catalog":catalog,"database":database}) print(response)``` ### Error Message and Stack Trace (if applicable) When using the few-shot template for a Databricks SQL query, I've noticed that sometimes there are no errors, but other times there are. The code above and the error below show the errors I've seen; I've attempted a number of solutions, but I wasn't able to resolve it. ERROR:- KeyError: "Input to FewShotPromptTemplate is missing variables {'database', 'catalog'}.
Expected: ['catalog', 'database', 'input', 'table_info', 'top_k'] Received: ['input', 'top_k', 'table_info']" ### Description When using the few-shot template for a Databricks SQL query, I've noticed that sometimes there are no errors, but other times there are. The code above and the error below show the errors I've seen; I've attempted a number of solutions, but I wasn't able to resolve it. ERROR:- KeyError: "Input to FewShotPromptTemplate is missing variables {'database', 'catalog'}. Expected: ['catalog', 'database', 'input', 'table_info', 'top_k'] Received: ['input', 'top_k', 'table_info']" ### System Info langchain==0.0.344, mlflow==2.9.0, databricks-sql-connector; Databricks DBR is 14.3
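A likely fix (hedged — inferred from the error message, not verified against this Databricks setup): `create_sql_query_chain` only supplies `input`, `top_k`, and `table_info` to the prompt, so the extra `catalog` and `database` variables must be pre-bound with `prompt.partial(catalog=catalog, database=database)` before building the chain. The stdlib analogue of that pre-binding:

```python
from functools import partial

template = ("catalog={catalog} database={database} "
            "question={input} top_k={top_k} tables={table_info}")

# the SQL chain only ever passes these three keys...
chain_inputs = {"input": "avg trip distance", "top_k": 3, "table_info": "trips"}

# ...so pre-bind the other two up front (what PromptTemplate.partial does)
render = partial(template.format, catalog="samples", database="nyctaxi")
print(render(**chain_inputs))
```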
Working with LangChain on Databricks; tried few-shot prompting to make sure queries are accurate, but facing the below error though the prompt is correct
https://api.github.com/repos/langchain-ai/langchain/issues/18210/comments
7
2024-02-27T17:22:28Z
2024-08-06T09:39:48Z
https://github.com/langchain-ai/langchain/issues/18210
2157228027
18210
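The "missing variables" error in the record above is independent of Databricks: it reproduces whenever an intermediate step forwards only a subset of the keys a template needs. Below is a pure-Python model of the failure and of the usual remedy — plain `str.format` stands in for `FewShotPromptTemplate`, and binding the static values up front mirrors what `PromptTemplate.partial()` is for (a sketch of the mechanism, not a verified LangChain fix):

```python
from functools import partial

# A template that needs five variables, mirroring the prefix in the issue above.
prefix = ("Use catalog {catalog} and database {database}. "
          "Return at most {top_k} rows.\nTables: {table_info}\nQuestion: {input}")

# The chain forwards only these three keys, dropping catalog/database:
forwarded = {"input": "average trip distance?", "top_k": 3, "table_info": "trips"}

try:
    prefix.format(**forwarded)          # fails: catalog/database were dropped
    error = None
except KeyError as exc:
    error = exc.args[0]                 # first missing field name

# Workaround sketch: bind the static values up front (partial application).
bound = partial(prefix.format, catalog="main", database="nyc_taxi")
rendered = bound(**forwarded)           # now succeeds
```

The same idea applies whenever the producing chain's input schema is narrower than the template's variable list: either widen what the chain forwards, or pre-bind the constants.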
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code Code example: ``` from langchain_community.llms.huggingface_pipeline import HuggingFacePipeline llm = HuggingFacePipeline.from_model_id( model_id = "model/google/flan-t5-large", task = "text2text-generation", pipeline_kwargs={"max_new_tokens": 100} ) from langchain.prompts import PromptTemplate template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer. Keep the answer as concise as possible. {context} Question: {question} Helpful Answer:""" QA_CHAIN_PROMPT = PromptTemplate.from_template(template) from langchain.chains import RetrievalQA qa_chain = RetrievalQA.from_chain_type( llm=llm, chain_type="stuff", retriever=db.as_retriever(), chain_type_kwargs={"prompt": QA_CHAIN_PROMPT} ) result = qa_chain ({ "query" : question }) print(result["result"]) ``` ### Error Message and Stack Trace (if applicable) Traceback (most recent call last): File "/home/rigazilla/git/infinispan-vector/rag-hf/main.py", line 65, in <module> result = qa_chain ({ "query" : question }) File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper return wrapped(*args, **kwargs) File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain/chains/base.py", line 378, in __call__ return self.invoke( File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain/chains/base.py", line 163, in invoke raise e File 
"/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain/chains/base.py", line 153, in invoke self._call(inputs, run_manager=run_manager) File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain/chains/retrieval_qa/base.py", line 144, in _call answer = self.combine_documents_chain.run( File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper return wrapped(*args, **kwargs) File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain/chains/base.py", line 550, in run return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[ File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper return wrapped(*args, **kwargs) File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain/chains/base.py", line 378, in __call__ return self.invoke( File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain/chains/base.py", line 163, in invoke raise e File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain/chains/base.py", line 153, in invoke self._call(inputs, run_manager=run_manager) File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain/chains/combine_documents/base.py", line 137, in _call output, extra_return_dict = self.combine_docs( File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain/chains/combine_documents/stuff.py", line 244, in combine_docs return self.llm_chain.predict(callbacks=callbacks, **inputs), {} File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain/chains/llm.py", line 293, in predict return self(kwargs, 
callbacks=callbacks)[self.output_key] File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper return wrapped(*args, **kwargs) File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain/chains/base.py", line 378, in __call__ return self.invoke( File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain/chains/base.py", line 163, in invoke raise e File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain/chains/base.py", line 153, in invoke self._call(inputs, run_manager=run_manager) File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain/chains/llm.py", line 103, in _call response = self.generate([inputs], run_manager=run_manager) File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain/chains/llm.py", line 115, in generate return self.llm.generate_prompt( File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain_core/language_models/llms.py", line 568, in generate_prompt return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs) File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain_core/language_models/llms.py", line 741, in generate output = self._generate_helper( File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain_core/language_models/llms.py", line 605, in _generate_helper raise e File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain_core/language_models/llms.py", line 592, in _generate_helper self._generate( File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain_community/llms/huggingface_pipeline.py", line 202, in _generate responses = self.pipeline( File 
"/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/transformers/pipelines/text2text_generation.py", line 167, in __call__ result = super().__call__(*args, **kwargs) File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/transformers/pipelines/base.py", line 1177, in __call__ outputs = list(final_iterator) File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/transformers/pipelines/pt_utils.py", line 124, in __next__ item = next(self.iterator) File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/transformers/pipelines/pt_utils.py", line 125, in __next__ processed = self.infer(item, **self.params) File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/transformers/pipelines/base.py", line 1102, in forward model_outputs = self._forward(model_inputs, **forward_params) File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/transformers/pipelines/text2text_generation.py", line 191, in _forward output_ids = self.model.generate(**model_inputs, **generate_kwargs) File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib64/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/transformers/generation/utils.py", line 1350, in generate self._validate_model_kwargs(model_kwargs.copy()) File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/transformers/generation/utils.py", line 1167, in _validate_model_kwargs raise ValueError( ValueError: The following `model_kwargs` are not used by the model: ['return_full_text'] (note: typos in the generate arguments will also show up in this list) Process finished with exit code 1 ### Description I'm trying to run the code above, but an exception is raised.
The same code works with: langchain==0.1.7 langchain-community==0.0.20 langchain-core==0.1.23 Maybe this line is the cause? https://github.com/langchain-ai/langchain/blob/0d294760e742e0707a71afc7aad22e4d00b54ae5/libs/community/langchain_community/llms/huggingface_pipeline.py#L205 ### System Info langchain==0.1.9 langchain-community==0.0.24 langchain-core==0.1.27
Huggingface_pipeline passes unused 'return_full_text' argument
https://api.github.com/repos/langchain-ai/langchain/issues/18198/comments
6
2024-02-27T16:24:56Z
2024-06-12T16:08:22Z
https://github.com/langchain-ai/langchain/issues/18198
2157040025
18198
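The `ValueError` above comes from a kwarg fanned out to a pipeline whose task does not accept it. A defensive pattern is to filter generation kwargs per task before forwarding them. In this pure-Python sketch, the task names are real Hugging Face task strings, but the allow-lists are illustrative stand-ins, not the library's actual argument tables:

```python
# Illustrative per-task allow-lists (assumption: seq2seq pipelines reject
# return_full_text, as the traceback above indicates).
SUPPORTED_KWARGS = {
    "text-generation": {"max_new_tokens", "return_full_text"},
    "text2text-generation": {"max_new_tokens"},
}

def filter_generate_kwargs(task: str, kwargs: dict) -> dict:
    """Drop kwargs the target task would raise on rather than ignore."""
    allowed = SUPPORTED_KWARGS.get(task, set())
    return {k: v for k, v in kwargs.items() if k in allowed}

safe = filter_generate_kwargs(
    "text2text-generation",
    {"max_new_tokens": 100, "return_full_text": True},
)
```

This is essentially the shape of the fix the issue points at: gate `return_full_text` on the task instead of sending it unconditionally.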
[ "langchain-ai", "langchain" ]
### Checklist - [X] I added a very descriptive title to this issue. - [X] I included a link to the documentation page I am referring to (if applicable). ### Issue with current documentation: I am having a very hard time figuring out how to properly propagate callbacks/callback handlers for Agents/AgentExecutors/Tools so they appear correctly on langsmith. https://python.langchain.com/docs/modules/agents/how_to/streaming has a small section on how to propagate callbacks, however: 1. It only mentions tools defined with `@tool`. How does it work for tools that subclass `BaseTool`? 2. How does it work/relates to "run managers"? Stepping through the library's code I found that the way that tools are called depends on whether they contain a `run_manager` parameter (sidenote: this is *very* confusing and should be made explicit and visible, people do *not* expect caller behaviour to change depending on the existence of a parameter. At least I didn't). But in these docs you use `callbacks`, not `run_manager`? Which should be used? How does it impact everything else? ### Idea or request for content: _No response_
DOC: expand section on how to propagate callbacks in Agents/Streaming (probably make a page entirely dedicated to this?)
https://api.github.com/repos/langchain-ai/langchain/issues/18191/comments
1
2024-02-27T12:38:36Z
2024-06-08T16:13:05Z
https://github.com/langchain-ai/langchain/issues/18191
2156485260
18191
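Setting aside LangChain's actual run-manager machinery (which is exactly what the question above asks to be documented), the underlying pattern is simple: each nested call must receive the parent's handler list rather than creating a fresh one, or its events land in a separate trace. A self-contained model of that propagation, with made-up names:

```python
class RecordingHandler:
    """Stand-in for a callback handler: just records event names in order."""
    def __init__(self):
        self.events = []

    def on_event(self, name):
        self.events.append(name)

def run_tool(name, callbacks):
    # The tool emits through the handlers it was *given*, not its own.
    for cb in callbacks:
        cb.on_event(f"tool_start:{name}")

def run_agent(callbacks):
    for cb in callbacks:
        cb.on_event("agent_start")
    run_tool("search", callbacks)   # propagate the same list downward
    for cb in callbacks:
        cb.on_event("agent_end")

handler = RecordingHandler()
run_agent([handler])
```

If `run_tool` built its own handler list instead, the tool's events would be invisible to the agent's trace — which is the symptom the question describes on LangSmith.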
[ "langchain-ai", "langchain" ]
### Checklist - [X] I added a very descriptive title to this issue. - [X] I included a link to the documentation page I am referring to (if applicable). ### Issue with current documentation: https://python.langchain.com/docs/modules/agents/how_to/structured_tools is pretty puzzling - it just shows some code but there is no introduction/explanation as to when/why one should care about this? ### Idea or request for content: _No response_
DOC: Fix "Structured Tools" page?
https://api.github.com/repos/langchain-ai/langchain/issues/18190/comments
1
2024-02-27T12:28:53Z
2024-06-21T16:37:07Z
https://github.com/langchain-ai/langchain/issues/18190
2156466485
18190
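For readers hitting the same page, the concept that the docs leave implicit: a "structured" tool takes a validated, multi-field argument schema instead of a single free-form string. A framework-free sketch of that distinction (the tool names and schema here are fabricated for illustration):

```python
from dataclasses import dataclass

# Plain tool: one opaque string the tool must parse itself.
def plain_search(query: str) -> str:
    return f"results for {query!r}"

# Structured tool: a typed schema, so multiple arguments arrive validated.
@dataclass
class BookingArgs:
    city: str
    nights: int

def book_hotel(args: BookingArgs) -> str:
    return f"booked {args.nights} night(s) in {args.city}"

confirmation = book_hotel(BookingArgs(city="Paris", nights=2))
```

That is the "when/why" the page omits: use structured tools whenever an agent must pass more than one argument, or arguments that need typing, rather than cramming everything into one string.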
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code This is my code: ```python prompt = """ This prompt takes two variables like this: variable1: {variable_1} variable2: {variable_2} """ model_choice="gpt-4-0125-preview" temperature=0.0 # Initialize the model and prompt configuration prompt = ChatPromptTemplate.from_template(prompt) output_parser = StrOutputParser() model = ChatOpenAI(model=model_choice, temperature=temperature) setup = RunnableParallel({"variable1":RunnablePassthrough(),"variable2":RunnablePassthrough()}) llm_output = ( setup | prompt | model | output_parser ) llm_output.invoke({"variable_1":"testing var1","variable_2":"testing var2"}) ``` When the code is executed, the langsmith trace shows runnable sequence input as { "variable_1": "testing var1", "variable_2": "testing var2" } But the ChatOpenAI call shows the prompt sent as: ```text This prompt takes two variables like this: variable1: {'variable_1': 'testing var1', 'variable_2': 'testing var2'} variable2: {'variable_1': 'testing var1', 'variable_2': 'testing var2'} ``` What am I missing? Shouldn't it be ```text This prompt takes two variables like this: variable1: 'testing var1' variable2: 'testing var2' ``` ![Screenshot 2024-02-27 at 5 41 53 PM](https://github.com/langchain-ai/langchain/assets/156741502/780a3690-efd2-4a76-a871-2ef9863c0d5d) ![Screenshot 2024-02-27 at 5 42 34 PM](https://github.com/langchain-ai/langchain/assets/156741502/199d4b35-4951-439e-aa65-cd3baa8079b4) ### Error Message and Stack Trace (if applicable) _No response_ ### Description I am using Langchain to take two variables in a prompt and send an output. 
While two variables are in correct JSON format, the prompt that is ultimately sent to OpenAI seems to be incorrect. ### System Info System Information ------------------ > OS: Darwin > OS Version: Darwin Kernel Version 23.3.0: Wed Dec 20 21:30:59 PST 2023; root:xnu-10002.81.5~7/RELEASE_ARM64_T6030 > Python Version: 3.11.7 | packaged by conda-forge | (main, Dec 23 2023, 14:38:07) [Clang 16.0.6 ] Package Information ------------------- > langchain_core: 0.1.22 > langchain: 0.1.1 > langchain_community: 0.0.19 > langsmith: 0.0.87 > langchain_openai: 0.0.5 Packages not installed (Not Necessarily a Problem) -------------------------------------------------- The following packages were not found: > langgraph > langserve
LCEL chain doesn't process multiple inputs properly.
https://api.github.com/repos/langchain-ai/langchain/issues/18188/comments
1
2024-02-27T12:14:03Z
2024-02-27T13:36:19Z
https://github.com/langchain-ai/langchain/issues/18188
2156437408
18188
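The duplicated prompt above follows directly from `RunnableParallel`-style fan-out semantics: every branch receives the entire input, so a bare passthrough under each key copies the whole dict into both slots. A pure-Python model of both the broken and the intended wiring (`itemgetter` selection is the usual fix in LCEL examples; this is a sketch of the semantics, not LangChain itself):

```python
from operator import itemgetter

def parallel(branches: dict, value):
    # Each branch gets the *entire* input, mirroring RunnableParallel.
    return {key: fn(value) for key, fn in branches.items()}

payload = {"variable_1": "testing var1", "variable_2": "testing var2"}

# Passthrough per key -> the full dict lands under every key (the bug above):
broken = parallel({"variable1": lambda x: x, "variable2": lambda x: x}, payload)

# Selecting each field yields the intended mapping:
fixed = parallel({"variable1": itemgetter("variable_1"),
                  "variable2": itemgetter("variable_2")}, payload)
```

In the original code, dropping the `RunnableParallel` setup entirely and invoking the prompt with matching variable names would also avoid the duplication.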
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code ``` from langchain_community.chat_models import ChatOllama from langchain.callbacks.manager import CallbackManager from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler from langchain.llms import Ollama from langchain_community.utilities import SQLDatabase from langchain_core.prompts import ChatPromptTemplate from langchain_core.output_parsers import StrOutputParser from langchain_core.runnables import RunnablePassthrough # Define chat models llama2_chat = ChatOllama(model="llama2:13b-chat") # Change model if required llama2_code = ChatOllama(model="codellama:7b-instruct") # Set model (choose one of the following options) llm = llama2_chat # Option 1 # llm = Ollama(model="llama2:13b-chat", callback_manager=CallbackManager([StreamingStdOutCallbackHandler()])) # Option 2 # Connect to database db = SQLDatabase.from_uri("mysql+pymysql://database_user:password@localhost/databasName") # Define functions for schema retrieval and query execution def get_schema(_): return db.get_table_info() def run_query(query): return db.run(query) # Create prompt templates template1 = """ Based on the table schema below, write a SQL query that would answer the user's question: {schema} Question: {question} SQL Query: """ prompt = ChatPromptTemplate.from_messages( [ ("system", "Given an input question, convert it to a SQL query. 
No pre-amble."), ("human", template1), ] ) template2 = """ Based on the table schema below, question, sql query, and sql response, write a natural language response: {schema} Question: {question} SQL Query: {query} SQL Response: {response} """ prompt_response = ChatPromptTemplate.from_messages( [ ( "system", "Given an input question and SQL response, convert it to a natural language answer. No pre-amble.", ), ("human", template2), ] ) # Construct chains for query generation and response sql_response = ( RunnablePassthrough.assign(schema=get_schema) | prompt | llm.bind(stop=["\nSQLResult:"]) | StrOutputParser() ) full_chain = ( RunnablePassthrough.assign(query=sql_response) | RunnablePassthrough.assign( schema=get_schema, response=lambda x: db.run(x["query"]), ) | prompt_response | llm ) # Invoke the full chain and print the final response full_chain.invoke({"question": "how many total records?"}) ``` ### Error Message and Stack Trace (if applicable) _No response_ ### Description I am expecting this to only return "There are a total of 15 records"; instead I am getting the response "Select count(*) from table, this will get the total number of record in the table, The total is 15 record in the table " How can I modify the templates or anything so that the actual queries are not returned or explained and only the final answer in natural language is returned? ### System Info ``` System Information ------------------ > OS: Linux > OS Version: #102~20.04.1-Ubuntu SMP Mon Jan 15 13:09:14 UTC 2024 > Python Version: 3.8.10 (default, Nov 22 2023, 10:22:35) [GCC 9.4.0] Package Information ------------------- > langchain_core: 0.1.24 > langchain: 0.1.8 > langchain_community: 0.0.21 > langsmith: 0.1.3 Packages not installed (Not Necessarily a Problem) -------------------------------------------------- The following packages were not found: > langgraph > langserve ```
SQL chat: convert natural language to SQL, then SQL to natural language
https://api.github.com/repos/langchain-ai/langchain/issues/18185/comments
1
2024-02-27T11:56:27Z
2024-03-11T15:36:26Z
https://github.com/langchain-ai/langchain/issues/18185
2156401290
18185
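Whether only the final sentence comes back is ultimately a prompting/model question, but structurally the chain above is already sound: the SQL is an intermediate value, and only the last stage should shape the reply. A stand-in model of that flow — the functions below are fabricated placeholders for the LLM call and `db.run`, not real LangChain APIs:

```python
def generate_sql(question: str) -> str:
    """Stand-in for the first LLM call (question -> SQL)."""
    return "SELECT COUNT(*) FROM records;"

def run_query(sql: str) -> str:
    """Stand-in for db.run()."""
    return "15"

def answer(question: str) -> str:
    sql = generate_sql(question)
    rows = run_query(sql)
    # Only the final natural-language stage reaches the caller;
    # the query text is deliberately never echoed.
    return f"There are a total of {rows} records."

reply = answer("how many total records?")
```

When the model still echoes the query despite this structure, the lever left is the final prompt: instruct it to output the answer sentence only, and parse/strip anything before the last line as a fallback.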
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code requests = Requests(headers=headers, verify=False) my_toolkit = NLAToolkit.from_llm_and_spec( llm, spec=OpenAPISpec.from_file("DLM_Lite_Gateway_openapi.json"), requests=requests, max_text_length=1800, # If you want to truncate the response text ) in tool.py if TYPE_CHECKING: from langchain.chains.api.openapi.chain import OpenAPIEndpointChain ### Error Message and Stack Trace (if applicable) File "C:\Users\suchaudn\OneDrive - Legrand France\PYTHON\langChain\natural_language.py", line 54, in <module> my_toolkit = NLAToolkit.from_llm_and_spec( File "C:\Users\suchaudn\OneDrive - Legrand France\PYTHON\langChain\venv\lib\site-packages\langchain_community\agent_toolkits\nla\toolkit.py", line 74, in from_llm_and_spec http_operation_tools = cls._get_http_operation_tools( File "C:\Users\suchaudn\OneDrive - Legrand France\PYTHON\langChain\venv\lib\site-packages\langchain_community\agent_toolkits\nla\toolkit.py", line 52, in _get_http_operation_tools endpoint_tool = NLATool.from_llm_and_method( File "C:\Users\suchaudn\OneDrive - Legrand France\PYTHON\langChain\venv\lib\site-packages\langchain_community\agent_toolkits\nla\tool.py", line 50, in from_llm_and_method chain = OpenAPIEndpointChain.from_api_operation( NameError: name 'OpenAPIEndpointChain' is not defined ### Description I'm starting from the example: https://python.langchain.com/docs/integrations/toolkits/openapi_nla but I get an error because the OpenAPIEndpointChain class is never imported, caused by the TYPE_CHECKING guard in tool.py ### System Info langchain==0.1.8 langchain-community==0.0.21 langchain-core==0.1.25 langchain-openai==0.0.6 langchainplus-sdk==0.0.21 langsmith==0.1.3
OpenAPIEndpointChain not imported in tool.py
https://api.github.com/repos/langchain-ai/langchain/issues/18179/comments
1
2024-02-27T10:23:44Z
2024-06-08T16:13:01Z
https://github.com/langchain-ai/langchain/issues/18179
2156221535
18179
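The `NameError` above is the classic `typing.TYPE_CHECKING` pitfall: the guarded import exists for static type checkers only and never runs, so the name is undefined the moment runtime code touches it. A minimal reproduction, using a stdlib class as a stand-in for `OpenAPIEndpointChain`:

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Visible to type checkers; this import never executes at runtime.
    from decimal import Decimal

def build():
    return Decimal("1")  # NameError: the guarded import above never ran

try:
    build()
    raised = None
except NameError as exc:
    raised = type(exc).__name__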
[ "langchain-ai", "langchain" ]
### Checklist - [X] I added a very descriptive title to this issue. - [X] I included a link to the documentation page I am referring to (if applicable). ### Issue with current documentation: Classes in the BaseTracer hierarchy take a "example_id" parameter. The docstring/api docs have a very vague description of "The example ID associated with the runs.". What is this supposed to be (for)? ### Idea or request for content: _No response_
DOC: What are Tracers `example_id` attributes meant for?
https://api.github.com/repos/langchain-ai/langchain/issues/18177/comments
2
2024-02-27T08:51:57Z
2024-06-12T06:40:29Z
https://github.com/langchain-ai/langchain/issues/18177
2156022582
18177

[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code There are cases where a user needs to pass variables through more than one chain for later use, but the current implementation doesn't support this. Reproducible example, following the RAG LangChain Expression Language example from https://python.langchain.com/docs/expression_language/cookbook/retrieval ```python from operator import itemgetter from langchain_community.vectorstores import FAISS from langchain_core.output_parsers import StrOutputParser from langchain_core.prompts import ChatPromptTemplate from langchain_core.runnables import RunnableLambda, RunnablePassthrough from langchain_openai import ChatOpenAI, OpenAIEmbeddings vectorstore = FAISS.from_texts( ["harrison worked at kensho"], embedding=OpenAIEmbeddings() ) retriever = vectorstore.as_retriever() template = """Answer the question based only on the following context: {context} Question: {question} """ prompt = ChatPromptTemplate.from_template(template) model = ChatOpenAI() chain = ( {"context": retriever, "question": RunnablePassthrough()} ### Only line added to the example | {'context': itemgetter('context'), "question": itemgetter('question')} | prompt | model | StrOutputParser() ) chain.invoke("where did harrison work?") ``` ### Error Message and Stack Trace (if applicable) Traceback (most recent call last): File "/Users/zhengisamazing/1.python_dir/vigyan-llm-api/dev/langchain_playground.py", line 110, in <module> chain.invoke("where did harrison work?") File "/Users/zhengisamazing/opt/anaconda3/envs/vigyan-llm-api/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2056, in invoke input = step.invoke( ^^^^^^^^^^^^ File
"/Users/zhengisamazing/opt/anaconda3/envs/vigyan-llm-api/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2693, in invoke output = {key: future.result() for key, future in zip(steps, futures)} ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/zhengisamazing/opt/anaconda3/envs/vigyan-llm-api/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2693, in <dictcomp> output = {key: future.result() for key, future in zip(steps, futures)} ^^^^^^^^^^^^^^^ File "/Users/zhengisamazing/opt/anaconda3/envs/vigyan-llm-api/lib/python3.11/concurrent/futures/_base.py", line 456, in result return self.__get_result() ^^^^^^^^^^^^^^^^^^^ File "/Users/zhengisamazing/opt/anaconda3/envs/vigyan-llm-api/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result raise self._exception File "/Users/zhengisamazing/opt/anaconda3/envs/vigyan-llm-api/lib/python3.11/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/zhengisamazing/opt/anaconda3/envs/vigyan-llm-api/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3504, in invoke return self._call_with_config( ^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/zhengisamazing/opt/anaconda3/envs/vigyan-llm-api/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1243, in _call_with_config context.run( File "/Users/zhengisamazing/opt/anaconda3/envs/vigyan-llm-api/lib/python3.11/site-packages/langchain_core/runnables/config.py", line 326, in call_func_with_variable_args return func(input, **kwargs) # type: ignore[call-arg] ^^^^^^^^^^^^^^^^^^^^^ File "/Users/zhengisamazing/opt/anaconda3/envs/vigyan-llm-api/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3378, in _invoke output = call_func_with_variable_args( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"/Users/zhengisamazing/opt/anaconda3/envs/vigyan-llm-api/lib/python3.11/site-packages/langchain_core/runnables/config.py", line 326, in call_func_with_variable_args return func(input, **kwargs) # type: ignore[call-arg] ^^^^^^^^^^^^^^^^^^^^^ TypeError: string indices must be integers, not 'str' ### Description There are cases where a user needs to pass variables through more than one chain for later use, but the current implementation doesn't support this. A reproducible example, following the RAG LangChain Expression Language example from https://python.langchain.com/docs/expression_language/cookbook/retrieval, is provided above. ### System Info langchain==0.1.7 langchain-cli==0.0.21 langchain-community==0.0.20 langchain-core==0.1.27 langchain-google-genai==0.0.9 langchain-openai==0.0.6 platform: mac python version:3.11.7
LangChain Expression Language (LCEL) pass-through does not work with two consecutive chains
https://api.github.com/repos/langchain-ai/langchain/issues/18173/comments
2
2024-02-27T06:57:56Z
2024-07-26T16:06:17Z
https://github.com/langchain-ai/langchain/issues/18173
2155814048
18173
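A pattern that sidesteps the failure above is merge-then-extend: keep every existing key and add new ones, which is what `RunnablePassthrough.assign` is for in LCEL. A framework-free model of those semantics (a sketch of the mechanism, not LangChain's implementation):

```python
def assign(state: dict, **steps) -> dict:
    """Return a copy of state extended with each step's output, keeping old keys."""
    out = dict(state)
    for key, fn in steps.items():
        out[key] = fn(out)   # each step sees everything accumulated so far
    return out

state = {"question": "where did harrison work?"}
state = assign(state, context=lambda s: ["harrison worked at kensho"])
state = assign(state, answer=lambda s: f"based on {len(s['context'])} doc(s)")
```

Because `assign` never rebuilds the dict from scratch, variables produced (or passed in) early remain available to every later step — the pass-through behavior the issue is asking for across two consecutive chains.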
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code ``` def process(request: str): raise Exception("not implemented") model = ChatOpenAI() add_routes( app, RunnableLambda(process) | model, path="/openai", ) if __name__ == "__main__": import uvicorn uvicorn.run(app, host="0.0.0.0", port=9000) ``` ### Error Message and Stack Trace (if applicable) the client side always got: ![image](https://github.com/langchain-ai/langchain/assets/3241829/68f7637a-dd83-4e62-80e8-bed16ddf34fe) ### Description I'd like a way to customize the error message returned to the caller side. ### System Info NONE.
Exceptions are all treated as 500 Internal Server Error on the caller side
https://api.github.com/repos/langchain-ai/langchain/issues/18168/comments
0
2024-02-27T03:36:29Z
2024-06-08T16:12:50Z
https://github.com/langchain-ai/langchain/issues/18168
2155581294
18168
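Server-side, the usual remedy for the request above is an exception-to-response mapping layer in the hosting app (e.g. a FastAPI exception handler around the route), so deliberate error types get a meaningful status and message while everything else stays an opaque 500. A framework-free sketch of that mapping — the exception class and status choices below are illustrative, not a LangServe API:

```python
class NotImplementedForRoute(Exception):
    """Deliberate, caller-facing error type."""

def handle(fn, *args):
    try:
        return {"status": 200, "body": fn(*args)}
    except NotImplementedForRoute as exc:
        # Known error type: surface a specific status and message.
        return {"status": 501, "body": str(exc)}
    except Exception:
        # Unknown errors stay opaque so internals never leak to callers.
        return {"status": 500, "body": "Internal Server Error"}

def process(request: str):
    raise NotImplementedForRoute("processing is not implemented yet")

resp = handle(process, "hello")
```

The key design point is the two-tier catch: only exceptions you explicitly opt in to are translated into custom client-visible messages.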
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code ```python import os os.environ["PREDIBASE_API_TOKEN"] = "{PREDIBASE_API_TOKEN}" from langchain_community.llms import Predibase model = Predibase(model = 'vicuna-13b', predibase_api_key=os.environ.get('PREDIBASE_API_TOKEN')) response = model("Can you recommend me a nice dry wine?") print(response) ``` ### Error Message and Stack Trace (if applicable) It says "pc.prompt is deprecated" ### Description I think it should be something like: ```python # load model and version llm = pc.LLM(self.model) # Attach the adapter to the (client-side) deployment object if self.adapter is not None: adapter = pc.get_model(self.adapter) # Add the "adapter" as adapter: str in the class ft_llm = llm.with_adapter(adapter) else: ft_llm = llm results = ft_llm.prompt(prompt) return results.response ``` ### System Info NA
Predibase LLM uses deprecated code
https://api.github.com/repos/langchain-ai/langchain/issues/18167/comments
0
2024-02-27T03:30:04Z
2024-06-08T16:12:45Z
https://github.com/langchain-ai/langchain/issues/18167
2155576070
18167
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code ```python from langchain.memory import ChatMessageHistory from pydantic import BaseModel class Model(BaseModel): h: ChatMessageHistory print(Model.model_json_schema()) ``` ### Error Message and Stack Trace (if applicable) --------------------------------------------------------------------------- PydanticInvalidForJsonSchema Traceback (most recent call last) Cell In[10], line 5 3 class Model(BaseModel): 4 h: ChatMessageHistory ----> 5 print(Model.model_json_schema())
File pydantic/main.py:385, in BaseModel.model_json_schema(cls, by_alias, ref_template, schema_generator, mode) --> 385 return model_json_schema( 386 cls, by_alias=by_alias, ref_template=ref_template, schema_generator=schema_generator, mode=mode 387 )
File pydantic/json_schema.py:2158, in model_json_schema(cls, by_alias, ref_template, schema_generator, mode) 2156 cls.__pydantic_validator__.rebuild() 2157 assert '__pydantic_core_schema__' in cls.__dict__, 'this is a bug! please report it' -> 2158 return schema_generator_instance.generate(cls.__pydantic_core_schema__, mode=mode)
File pydantic/json_schema.py:413, in GenerateJsonSchema.generate(self, schema, mode) --> 413 json_schema: JsonSchemaValue = self.generate_inner(schema) 414 json_ref_counts = self.get_json_ref_counts(json_schema) 416 # Remove the top-level $ref if present; note that the _generate method already ensures there are no sibling keys
File pydantic/json_schema.py:552, in GenerateJsonSchema.generate_inner(self, schema) 548 return json_schema
[550](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:550) current_handler = _schema_generation_shared.GenerateJsonSchemaHandler(self, new_handler_func) --> [552](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:552) json_schema = current_handler(schema) [553](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:553) if _core_utils.is_core_schema(schema): [554](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:554) json_schema = populate_defs(schema, json_schema) File [~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/_internal/_schema_generation_shared.py:36](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/_internal/_schema_generation_shared.py:36), in GenerateJsonSchemaHandler.__call__(self, _GenerateJsonSchemaHandler__core_schema) [35](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/_internal/_schema_generation_shared.py:35) def __call__(self, __core_schema: CoreSchemaOrField) -> JsonSchemaValue: ---> [36](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/_internal/_schema_generation_shared.py:36) return self.handler(__core_schema) File 
[~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:526](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:526), in GenerateJsonSchema.generate_inner.<locals>.new_handler_func(schema_or_field, current_handler, js_modify_function) [521](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:521) def new_handler_func( [522](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:522) schema_or_field: CoreSchemaOrField, [523](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:523) current_handler: GetJsonSchemaHandler = current_handler, [524](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:524) js_modify_function: GetJsonSchemaFunction = js_modify_function, [525](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:525) ) -> JsonSchemaValue: --> [526](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:526) json_schema = js_modify_function(schema_or_field, current_handler) [527](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:527) if _core_utils.is_core_schema(schema_or_field): 
[528](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:528) json_schema = populate_defs(schema_or_field, json_schema) File [~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/main.py:603](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/main.py:603), in BaseModel.__get_pydantic_json_schema__(cls, _BaseModel__core_schema, _BaseModel__handler) [580](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/main.py:580) @classmethod [581](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/main.py:581) def __get_pydantic_json_schema__( [582](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/main.py:582) cls, [583](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/main.py:583) __core_schema: CoreSchema, [584](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/main.py:584) __handler: GetJsonSchemaHandler, [585](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/main.py:585) ) -> JsonSchemaValue: [586](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/main.py:586) """Hook into generating the model's JSON schema. 
[587](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/main.py:587) [588](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/main.py:588) Args: (...) [601](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/main.py:601) A JSON schema, as a Python object. [602](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/main.py:602) """ --> [603](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/main.py:603) return __handler(__core_schema) File [~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/_internal/_schema_generation_shared.py:36](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/_internal/_schema_generation_shared.py:36), in GenerateJsonSchemaHandler.__call__(self, _GenerateJsonSchemaHandler__core_schema) [35](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/_internal/_schema_generation_shared.py:35) def __call__(self, __core_schema: CoreSchemaOrField) -> JsonSchemaValue: ---> [36](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/_internal/_schema_generation_shared.py:36) return self.handler(__core_schema) File 
[~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:526](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:526), in GenerateJsonSchema.generate_inner.<locals>.new_handler_func(schema_or_field, current_handler, js_modify_function) [521](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:521) def new_handler_func( [522](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:522) schema_or_field: CoreSchemaOrField, [523](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:523) current_handler: GetJsonSchemaHandler = current_handler, [524](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:524) js_modify_function: GetJsonSchemaFunction = js_modify_function, [525](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:525) ) -> JsonSchemaValue: --> [526](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:526) json_schema = js_modify_function(schema_or_field, current_handler) [527](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:527) if _core_utils.is_core_schema(schema_or_field): 
[528](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:528) json_schema = populate_defs(schema_or_field, json_schema) File [~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py:212](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py:212), in modify_model_json_schema(schema_or_field, handler, cls) [199](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py:199) def modify_model_json_schema( [200](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py:200) schema_or_field: CoreSchemaOrField, handler: GetJsonSchemaHandler, *, cls: Any [201](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py:201) ) -> JsonSchemaValue: [202](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py:202) """Add title and description for model-like classes' JSON schema. [203](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py:203) [204](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py:204) Args: (...) 
[210](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py:210) JsonSchemaValue: The updated JSON schema. [211](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py:211) """ --> [212](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py:212) json_schema = handler(schema_or_field) [213](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py:213) original_schema = handler.resolve_ref_schema(json_schema) [214](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py:214) # Preserve the fact that definitions schemas should never have sibling keys: File [~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/_internal/_schema_generation_shared.py:36](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/_internal/_schema_generation_shared.py:36), in GenerateJsonSchemaHandler.__call__(self, _GenerateJsonSchemaHandler__core_schema) [35](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/_internal/_schema_generation_shared.py:35) def __call__(self, __core_schema: CoreSchemaOrField) -> JsonSchemaValue: ---> 
[36](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/_internal/_schema_generation_shared.py:36) return self.handler(__core_schema) File [~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:509](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:509), in GenerateJsonSchema.generate_inner.<locals>.handler_func(schema_or_field) [507](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:507) if _core_utils.is_core_schema(schema_or_field) or _core_utils.is_core_schema_field(schema_or_field): [508](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:508) generate_for_schema_type = self._schema_type_to_method[schema_or_field['type']] --> [509](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:509) json_schema = generate_for_schema_type(schema_or_field) [510](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:510) else: [511](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:511) raise TypeError(f'Unexpected schema type: schema={schema_or_field}') File 
[~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:1323](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:1323), in GenerateJsonSchema.model_schema(self, schema) [1320](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:1320) title = config.get('title') [1322](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:1322) with self._config_wrapper_stack.push(config): -> [1323](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:1323) json_schema = self.generate_inner(schema['schema']) [1325](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:1325) json_schema_extra = config.get('json_schema_extra') [1326](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:1326) if cls.__pydantic_root_model__: File [~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:552](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:552), in GenerateJsonSchema.generate_inner(self, schema) [548](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:548) return json_schema 
[550](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:550) current_handler = _schema_generation_shared.GenerateJsonSchemaHandler(self, new_handler_func) --> [552](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:552) json_schema = current_handler(schema) [553](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:553) if _core_utils.is_core_schema(schema): [554](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:554) json_schema = populate_defs(schema, json_schema) File [~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/_internal/_schema_generation_shared.py:36](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/_internal/_schema_generation_shared.py:36), in GenerateJsonSchemaHandler.__call__(self, _GenerateJsonSchemaHandler__core_schema) [35](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/_internal/_schema_generation_shared.py:35) def __call__(self, __core_schema: CoreSchemaOrField) -> JsonSchemaValue: ---> [36](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/_internal/_schema_generation_shared.py:36) return self.handler(__core_schema) File 
[~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:509](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:509), in GenerateJsonSchema.generate_inner.<locals>.handler_func(schema_or_field) [507](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:507) if _core_utils.is_core_schema(schema_or_field) or _core_utils.is_core_schema_field(schema_or_field): [508](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:508) generate_for_schema_type = self._schema_type_to_method[schema_or_field['type']] --> [509](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:509) json_schema = generate_for_schema_type(schema_or_field) [510](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:510) else: [511](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:511) raise TypeError(f'Unexpected schema type: schema={schema_or_field}') File [~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:1415](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:1415), in GenerateJsonSchema.model_fields_schema(self, schema) 
[1413](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:1413) if self.mode == 'serialization': [1414](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:1414) named_required_fields.extend(self._name_required_computed_fields(schema.get('computed_fields', []))) -> [1415](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:1415) json_schema = self._named_required_fields_schema(named_required_fields) [1416](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:1416) extras_schema = schema.get('extras_schema', None) [1417](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:1417) if extras_schema is not None: File [~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:1226](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:1226), in GenerateJsonSchema._named_required_fields_schema(self, named_required_fields) [1224](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:1224) name = self._get_alias_name(field, name) [1225](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:1225) try: -> 
[1226](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:1226) field_json_schema = self.generate_inner(field).copy() [1227](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:1227) except PydanticOmit: [1228](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:1228) continue File [~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:552](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:552), in GenerateJsonSchema.generate_inner(self, schema) [548](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:548) return json_schema [550](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:550) current_handler = _schema_generation_shared.GenerateJsonSchemaHandler(self, new_handler_func) --> [552](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:552) json_schema = current_handler(schema) [553](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:553) if _core_utils.is_core_schema(schema): [554](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:554) 
json_schema = populate_defs(schema, json_schema) File [~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/_internal/_schema_generation_shared.py:36](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/_internal/_schema_generation_shared.py:36), in GenerateJsonSchemaHandler.__call__(self, _GenerateJsonSchemaHandler__core_schema) [35](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/_internal/_schema_generation_shared.py:35) def __call__(self, __core_schema: CoreSchemaOrField) -> JsonSchemaValue: ---> [36](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/_internal/_schema_generation_shared.py:36) return self.handler(__core_schema) File [~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:544](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:544), in GenerateJsonSchema.generate_inner.<locals>.new_handler_func(schema_or_field, current_handler, js_modify_function) [539](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:539) def new_handler_func( [540](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:540) schema_or_field: CoreSchemaOrField, [541](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:541) current_handler: GetJsonSchemaHandler = current_handler, 
File ~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:544, in GenerateJsonSchema.generate_inner
    542     js_modify_function: GetJsonSchemaFunction = js_modify_function,
    543 ) -> JsonSchemaValue:
--> 544     json_schema = js_modify_function(schema_or_field, current_handler)
    545     if _core_utils.is_core_schema(schema_or_field):
    546         json_schema = populate_defs(schema_or_field, json_schema)

File .../pydantic/_internal/_generate_schema.py:2012, in get_json_schema_update_func.<locals>.json_schema_update_func(core_schema_or_field, handler)
   2009 def json_schema_update_func(
   2010     core_schema_or_field: CoreSchemaOrField, handler: GetJsonSchemaHandler
   2011 ) -> JsonSchemaValue:
-> 2012     json_schema = {**handler(core_schema_or_field), **json_schema_update}
   2013     add_json_schema_extra(json_schema, json_schema_extra)
   2014     return json_schema

File .../pydantic/_internal/_schema_generation_shared.py:36, in GenerateJsonSchemaHandler.__call__(self, __core_schema)
     35 def __call__(self, __core_schema: CoreSchemaOrField) -> JsonSchemaValue:
---> 36     return self.handler(__core_schema)

File .../pydantic/json_schema.py:509, in GenerateJsonSchema.generate_inner.<locals>.handler_func(schema_or_field)
    507 if _core_utils.is_core_schema(schema_or_field) or _core_utils.is_core_schema_field(schema_or_field):
    508     generate_for_schema_type = self._schema_type_to_method[schema_or_field['type']]
--> 509     json_schema = generate_for_schema_type(schema_or_field)
    510 else:
    511     raise TypeError(f'Unexpected schema type: schema={schema_or_field}')

File .../pydantic/json_schema.py:1294, in GenerateJsonSchema.model_field_schema(self, schema)
   1285 def model_field_schema(self, schema: core_schema.ModelField) -> JsonSchemaValue:
   1286     """Generates a JSON schema that matches a schema that defines a model field. ..."""
-> 1294     return self.generate_inner(schema['schema'])

File .../pydantic/json_schema.py:552, in GenerateJsonSchema.generate_inner(self, schema)
    550 current_handler = _schema_generation_shared.GenerateJsonSchemaHandler(self, new_handler_func)
--> 552 json_schema = current_handler(schema)
    553 if _core_utils.is_core_schema(schema):
    554     json_schema = populate_defs(schema, json_schema)

File .../pydantic/_internal/_schema_generation_shared.py:36, in GenerateJsonSchemaHandler.__call__(self, __core_schema)
---> 36     return self.handler(__core_schema)

File .../pydantic/json_schema.py:509, in GenerateJsonSchema.generate_inner.<locals>.handler_func(schema_or_field)
--> 509     json_schema = generate_for_schema_type(schema_or_field)

File .../pydantic/json_schema.py:1143, in GenerateJsonSchema.chain_schema(self, schema)
   1142 step_index = 0 if self.mode == 'validation' else -1  # use first step for validation, last for serialization
-> 1143 return self.generate_inner(schema['steps'][step_index])

File .../pydantic/json_schema.py:552, in GenerateJsonSchema.generate_inner(self, schema)
--> 552 json_schema = current_handler(schema)

File .../pydantic/_internal/_schema_generation_shared.py:36, in GenerateJsonSchemaHandler.__call__(self, __core_schema)
---> 36     return self.handler(__core_schema)

File .../pydantic/json_schema.py:509, in GenerateJsonSchema.generate_inner.<locals>.handler_func(schema_or_field)
--> 509     json_schema = generate_for_schema_type(schema_or_field)

File .../pydantic/json_schema.py:956, in GenerateJsonSchema.function_plain_schema(self, schema)
    947 def function_plain_schema(self, schema: core_schema.PlainValidatorFunctionSchema) -> JsonSchemaValue:
    948     """Generates a JSON schema that matches a function-plain schema. ..."""
--> 956     return self._function_schema(schema)

File .../pydantic/json_schema.py:921, in GenerateJsonSchema._function_schema(self, schema)
    918     return self.generate_inner(schema['schema'])
    920 # function-plain
--> 921 return self.handle_invalid_for_json_schema(
    922     schema, f'core_schema.PlainValidatorFunctionSchema ({schema["function"]})'
    923 )

File .../pydantic/json_schema.py:2074, in GenerateJsonSchema.handle_invalid_for_json_schema(self, schema, error_info)
-> 2074     raise PydanticInvalidForJsonSchema(f'Cannot generate a JsonSchema for {error_info}')

PydanticInvalidForJsonSchema: Cannot generate a JsonSchema for core_schema.PlainValidatorFunctionSchema ({'type': 'with-info', 'function': <bound method BaseModel.validate of <class 'langchain_community.chat_message_histories.in_memory.ChatMessageHistory'>>})

For further information visit https://errors.pydantic.dev/2.5/u/invalid-for-json-schema

### Description

I cannot generate an API doc with `ChatMessageHistory` in my model.

### System Info

OS: macOS 14.3.1
python: 3.11.7
langchain: 0.1.9
pydantic: 2.5.3
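The error arises because pydantic v2 cannot express a plain validator function (here, `ChatMessageHistory`'s v1-style `validate`) in JSON schema. One possible workaround — a sketch, not an official LangChain fix; `SessionState` is a hypothetical model — is to exclude such fields from schema generation with pydantic's `SkipJsonSchema` annotation:

```python
from typing import Any

from pydantic import BaseModel
from pydantic.json_schema import SkipJsonSchema


class SessionState(BaseModel):
    user_id: str
    # Field whose value can't be represented in JSON schema
    # (e.g. a langchain ChatMessageHistory); SkipJsonSchema drops
    # it from the generated schema while keeping it on the model.
    history: SkipJsonSchema[Any] = None


schema = SessionState.model_json_schema()
# "history" is absent from schema["properties"]; "user_id" remains.
```

The field still validates and stores values as `Any`; only the generated API doc omits it.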
Cannot generate a JSON schema for ChatMessageHistory
https://api.github.com/repos/langchain-ai/langchain/issues/18141/comments
0
2024-02-26T18:13:55Z
2024-06-08T16:12:40Z
https://github.com/langchain-ai/langchain/issues/18141
2,154,808,002
18,141
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code ```python @app.route('/stream2', methods=['GET']) def stream2(): try: user_query = request.json.get('user_query') if not user_query: return "No user query provided", 400 callback_handler = StreamHandler() callback_manager = CallbackManager([callback_handler]) llm = AzureChatOpenAI( azure_endpoint=AZURE_OPENAI_ENDPOINT, openai_api_version=OPENAI_API_VERSION, deployment_name=OPENAI_DEPLOYMENT_NAME, openai_api_key=OPENAI_API_KEY, openai_api_type=OPENAI_API_TYPE, model_name=OPENAI_MODEL_NAME, streaming=True, model_kwargs={ "logprobs": None, "best_of": None, "echo": None }, #callback_manager=callback_manager, temperature=0) @stream_with_context async def generate(): async for chunk in llm.stream(user_query): yield chunk return Response(generate(), mimetype='text/event-stream') except Exception as e: logger.error(f"An error occurred: {e}") return "An error occurred", 500 ``` ### Error Message and Stack Trace (if applicable) Traceback (most recent call last): File "c:\Users\x\repos\y-chatbot\backend-flask-appservice\.venv\lib\site-packages\werkzeug\serving.py", line 362, in run_wsgi execute(self.server.app) File "c:\Users\x\repos\y-chatbot\backend-flask-appservice\.venv\lib\site-packages\werkzeug\serving.py", line 325, in execute for data in application_iter: File "c:\Users\x\repos\y-chatbot\backend-flask-appservice\.venv\lib\site-packages\werkzeug\wsgi.py", line 256, in __next__ return self._next() File "c:\Users\x\repos\y-chatbot\backend-flask-appservice\.venv\lib\site-packages\werkzeug\wrappers\response.py", line 32, in _iter_encoded for item in iterable: TypeError: 'function' object is not iterable ### Description I am creating a REST API with flask and 
using LangChain and AzureOpenAI; however, `llm.stream` doesn't seem to work correctly in my code. How can I stream AzureOpenAI responses using Flask? ### System Info langchain==0.1.0 langchain-community==0.0.12 langchain-core==0.1.12 langchainhub==0.1.14
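The likely root cause: Werkzeug's WSGI layer iterates the response body with a plain `for` loop, so the body must be a *synchronous* iterable — an `async def` generator (further wrapped by the decorator) is not, which produces `TypeError: 'function' object is not iterable`. A minimal sketch of the fix, where `fake_stream` is a hypothetical stand-in for `llm.stream()` so no real Azure call is made:

```python
def fake_stream(query):
    # Stand-in for llm.stream(query): yields chunks one at a time.
    for word in ["Hello", ", ", "world"]:
        yield word


def generate(user_query):
    # Plain sync generator — Flask/Werkzeug can iterate this directly,
    # e.g. Response(stream_with_context(generate(q)),
    #               mimetype="text/event-stream")
    for chunk in fake_stream(user_query):
        yield chunk


print("".join(generate("hi")))  # Hello, world
```

With a real `AzureChatOpenAI`, each chunk is a message chunk object, so yielding `chunk.content` (rather than the chunk itself) is likely what the endpoint should emit.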
AzureOpenAI Streaming with langchain and flask, error TypeError: 'function' object is not iterable
https://api.github.com/repos/langchain-ai/langchain/issues/18138/comments
0
2024-02-26T17:26:59Z
2024-06-08T16:12:35Z
https://github.com/langchain-ai/langchain/issues/18138
2,154,723,649
18,138
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code from operator import itemgetter import os import urllib.parse from sqlalchemy import create_engine import warnings from dotenv import load_dotenv from langchain_community.utilities.sql_database import SQLDatabase from langchain.chains.openai_tools import create_extraction_chain_pydantic from langchain.chains import create_sql_query_chain from langchain_community.chat_models import AzureChatOpenAI from langchain_core.output_parsers import StrOutputParser from langchain_core.prompts import PromptTemplate from langchain_core.runnables import RunnablePassthrough from langchain_community.tools.sql_database.tool import QuerySQLDataBaseTool from langchain_core.pydantic_v1 import BaseModel, Field # Define Table class to be hashable class Table(BaseModel): """Table in SQL database.""" name: str = Field(description="Name of table in SQL database.") def __hash__(self): return hash(self.name) def __eq__(self, other): if not isinstance(other, Table): return NotImplemented return self.name == other.name # Ignore all warnings warnings.filterwarnings("ignore") # Load environment variables basedir = os.path.abspath(os.path.dirname(__file__)) load_dotenv(os.path.join(basedir, '.env')) API_KEY = os.environ.get("OPENAI_API_KEY") # Database setup username = "admin" password = urllib.parse.quote_plus('t34!12!') servername = "12.28.40.85" database = "test" uri = f"mssql+pyodbc://{username}:{password}@{servername}/{database}?driver=ODBC+Driver+17+for+SQL+Server" engine = create_engine(uri) db = SQLDatabase(engine, schema="aodb") print("Connection to the SQL Server database successful") # Initialize AzureChatOpenAI llm = AzureChatOpenAI( 
model_name=os.environ.get("AZURE_MODEL_NAME", 'gpt-4'), deployment_name=os.environ.get("AZURE_DEPLOYMENT_NAME", 'sita-lab-gpt4'), azure_endpoint=os.environ.get("AZURE_ENDPOINT"), verbose=True ) # Function to test agent execution def test_agent(db, llm, my_question): agent_executor = create_sql_agent(llm, db=db, agent_type="openai-tools", verbose=True) response = agent_executor.invoke(my_question) return response # Function to get relevant tables for a question def test_agent_get_relevant_tables(db, llm, my_question): table_names = "\n".join(db.get_usable_table_names()) system = f"""Return the names of ALL the SQL tables that MIGHT be relevant to the user question. \ The tables are: {table_names} Remember to exclude ALL POTENTIALLY RELEVANT tables, even if you're not sure that they're needed.""" query_chain = create_sql_query_chain(llm, db) table_chain = create_extraction_chain_pydantic(Table, llm, system_message=system) returned_tables = table_chain.invoke({"input": my_question}) print(f".......................") print(f"returned_tables : {returned_tables}") table_name = returned_tables[0].name # Assuming that the table name is the first element in the list print(f"table_name : {table_name}") print(f".......................") # Assign the "input" field to the table_chain table_chain = RunnablePassthrough.assign(input=itemgetter("question")) | table_chain print(f"table_chain : {table_chain}") full_chain = RunnablePassthrough.assign(table_names_to_use=table_chain) | query_chain print(f"full_chain : {full_chain}") query = full_chain.invoke({"question": my_question, "table_name": table_name,"schema_name": "aodb"}) print(f"query : {query}") # query = full_chain.invoke({"question": my_question}) response = db.run(query) return response # Example usage final_response = test_agent_get_relevant_tables(db, llm, "list me airports") print(final_response) ### Error Message and Stack Trace (if applicable) PS C:\Users\Savita.Raghuvanshi\OneDrive - 
SITA\Desktop\llm\ML-LLM-OperationsCoPilot> & "c:/Users/Savita.Raghuvanshi/OneDrive - SITA/Desktop/llm/ML-LLM-OperationsCoPilot/ops_env/Scripts/python.exe" "c:/Users/Savita.Raghuvanshi/OneDrive - SITA/Desktop/llm/ML-LLM-OperationsCoPilot/test/QnA_with_sql_db/test_agent_get_relavent_tables.py" Connection to the SQL Server database successful ....................... returned_tables : [Table(name='Airport')] table_name : Airport ....................... table_chain : first=RunnableAssign(mapper={ input: RunnableLambda(itemgetter('question')) }) middle=[ChatPromptTemplate(input_variables=['input'], messages=[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template="Return the names of ALL the SQL tables that MIGHT be relevant to the user question. The tables are:\n\nActivityFlowActivityAssignment\nActivityFlowActivityAssignmentItem\nActivityFlowActivityAvailabilityRule\nActivityFlowActivityDefinition\nActivityFlowActivityDurationRule\nActivityFlowActivityTemplate\nActivityFlowDependencyTemplate\nActivityFlowEventAvailabilityRule\nActivityFlowEventDefinition\nActivityFlowEventTemplate\nActivityFlowGroupTemplate\nActivityFlowTemplate\nAircraft\nAircraftType\nAirline\nAirport\nAirportContext\nAirportContext2UserRole\nAllocationGroup\nAllocationGroup2ResourceAllocation\nAllocationGroupShapeAllocationRequirement\nAllocationGroupShapeRequirement\nAllocationPercentageToColor\nArea\nArrivalCodeShare\nArrivalFlight\nArrivalFlightActivity\nArrivalFlightEvent\nCascadingDowngradeRule\nCascadingDowngradeRuleAssignment\nCombinationRuleContribution\nCustomsType\nDepartureCodeShare\nDepartureFlight\nDepartureFlightActivity\nDepartureFlightEvent\nMovement\nMovementActivityFlowRequirement\nMovementCommentLog\nMovementCommentLogReason\nMovementLinkRule\nMovementMatchRequirement\nMovementPaxFlowDefinition\nMovementPaxFlowGroupLoad\nMovementPerformanceIndicatorChart\nMovementPerformanceIndicatorPaxFlow\nMovementResourceRequirement\nMovementSplitRule\nMovementStatusDefini
tion\nMovementStatusRule\nOverlapOnResourceContribution\nOverlapPermissionRuleContribution\nPaxFlowGroup\nPaxFlowProfileDefinition\nPaxFlowProfilePercentage\nPaxFlowSlot\nPerformanceIndicatorCell\nPerformanceIndicatorColumn\nPerformanceIndicatorGrid\nPerformanceIndicatorRow\nRecurringDowngradeRule\nRecurringDowngradeRule2Area\nRecurringDowngradeRule2Resource\nRecurringDowngradeRule2ResourceGroup\nResource\nResource2ResourceGroup\nResourceAllocation\nResourceAllocationAutoMappingRule\nResourceAllocationAutoMappingRuleAssignment\nResourceAllocationBufferTime\nResourceAllocationCombinationRule\nResourceAllocationCombinationRuleMatch\nResourceAllocationCombinationRuleMatch2Area\nResourceAllocationCombinationRuleMatch2Resource\nResourceAllocationCombinationRuleMatch2ResourceGroup\nResourceAllocationElementConfig\nResourceAllocationElementConfig2UserRole\nResourceAllocationElementConfigColor\nResourceAllocationElementConfigIcon\nResourceAllocationElementConfigText\nResourceAllocationElementConfigToolTip\nResourceAllocationMovementMatchPreference\nResourceAllocationMovementMatchPreference2Area\nResourceAllocationMovementMatchPreference2Resource\nResourceAllocationMovementMatchPreference2ResourceGroup\nResourceAllocationMovementMatchPreference2ResourceMatch\nResourceAllocationMovementMatchRule\nResourceAllocationMovementMatchRule2Area\nResourceAllocationMovementMatchRule2Resource\nResourceAllocationMovementMatchRule2ResourceGroup\nResourceAllocationMovementMatchRuleGroup\nResourceAllocationOverlapPermissionRule\nResourceAllocationOverlapPermissionRule2Area\nResourceAllocationOverlapPermissionRule2Resource\nResourceAllocationOverlapPermissionRule2ResourceGroup\nResourceDowngradeType\nResourceGroup\nResourceSerieVisualRule\nResourceSerieVisualRule2UserRole\nResourceSerieVisualRuleColor\nResourceSerieVisualRuleDescription\nResourceSerieVisualRuleHeaderText\nResourceSerieVisualRuleIcon\nResourceSerieVisualRuleToolTip\nResourceTypeSettings\nResourceTypeSettings2Details\nResource
UnavailabilityRule\nResourceUnavailabilityRule2Area\nResourceUnavailabilityRule2Resource\nResourceUnavailabilityRule2ResourceGroup\nRoute\nRouteViaPoint\nSeason\nSlotRequest\nSlotRequestComplianceRule\nSlotRequestCompliantSlot\nSlotRequestOperatedFlight\nSlotRequestPropertyMapping\nSlotRequestReservedSlot\nSlotRequestStatusDefinition\nSlotRequestStatusHistoryLog\nSlotRequestStatusTransition\nTowing\n\nRemember to exclude ALL POTENTIALLY RELEVANT tables, even if you're not sure that they're needed.")), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['input'], template='{input}'))]), RunnableBinding(bound=AzureChatOpenAI(verbose=True, client=<openai.resources.chat.completions.Completions object at 0x0000016AB8252AE0>, async_client=<openai.resources.chat.completions.AsyncCompletions object at 0x0000016AB721ABD0>, model_name='gpt-4', openai_api_key='197d3afbceeb47dda56932803db5a3a4', openai_api_base='https://sitalabopenai2.openai.azure.com/openai/deployments/sita-lab-gpt4', openai_proxy='', openai_api_version='2023-10-01-preview', openai_api_type='azure'), kwargs={'tools': [{'type': 'function', 'function': {'name': 'Table', 'description': 'Table in SQL database.', 'parameters': {'type': 'object', 'properties': {'name': {'description': 'Name of table in SQL database.', 'type': 'string'}}, 'required': ['name']}}}]})] last=PydanticToolsParser(tools=[<class '__main__.Table'>]) full_chain : first=RunnableAssign(mapper={ table_names_to_use: RunnableAssign(mapper={ input: RunnableLambda(itemgetter('question')) }) | ChatPromptTemplate(input_variables=['input'], messages=[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template="Return the names of ALL the SQL tables that MIGHT be relevant to the user question. 
The tables are:\n\nActivityFlowActivityAssignment\nActivityFlowActivityAssignmentItem\nActivityFlowActivityAvailabilityRule\nActivityFlowActivityDefinition\nActivityFlowActivityDurationRule\nActivityFlowActivityTemplate\nActivityFlowDependencyTemplate\nActivityFlowEventAvailabilityRule\nActivityFlowEventDefinition\nActivityFlowEventTemplate\nActivityFlowGroupTemplate\nActivityFlowTemplate\nAircraft\nAircraftType\nAirline\nAirport\nAirportContext\nAirportContext2UserRole\nAllocationGroup\nAllocationGroup2ResourceAllocation\nAllocationGroupShapeAllocationRequirement\nAllocationGroupShapeRequirement\nAllocationPercentageToColor\nArea\nArrivalCodeShare\nArrivalFlight\nArrivalFlightActivity\nArrivalFlightEvent\nCascadingDowngradeRule\nCascadingDowngradeRuleAssignment\nCombinationRuleContribution\nCustomsType\nDepartureCodeShare\nDepartureFlight\nDepartureFlightActivity\nDepartureFlightEvent\nMovement\nMovementActivityFlowRequirement\nMovementCommentLog\nMovementCommentLogReason\nMovementLinkRule\nMovementMatchRequirement\nMovementPaxFlowDefinition\nMovementPaxFlowGroupLoad\nMovementPerformanceIndicatorChart\nMovementPerformanceIndicatorPaxFlow\nMovementResourceRequirement\nMovementSplitRule\nMovementStatusDefinition\nMovementStatusRule\nOverlapOnResourceContribution\nOverlapPermissionRuleContribution\nPaxFlowGroup\nPaxFlowProfileDefinition\nPaxFlowProfilePercentage\nPaxFlowSlot\nPerformanceIndicatorCell\nPerformanceIndicatorColumn\nPerformanceIndicatorGrid\nPerformanceIndicatorRow\nRecurringDowngradeRule\nRecurringDowngradeRule2Area\nRecurringDowngradeRule2Resource\nRecurringDowngradeRule2ResourceGroup\nResource\nResource2ResourceGroup\nResourceAllocation\nResourceAllocationAutoMappingRule\nResourceAllocationAutoMappingRuleAssignment\nResourceAllocationBufferTime\nResourceAllocationCombinationRule\nResourceAllocationCombinationRuleMatch\nResourceAllocationCombinationRuleMatch2Area\nResourceAllocationCombinationRuleMatch2Resource\nResourceAllocationCombinationRuleMatch2Re
sourceGroup\nResourceAllocationElementConfig\nResourceAllocationElementConfig2UserRole\nResourceAllocationElementConfigColor\nResourceAllocationElementConfigIcon\nResourceAllocationElementConfigText\nResourceAllocationElementConfigToolTip\nResourceAllocationMovementMatchPreference\nResourceAllocationMovementMatchPreference2Area\nResourceAllocationMovementMatchPreference2Resource\nResourceAllocationMovementMatchPreference2ResourceGroup\nResourceAllocationMovementMatchPreference2ResourceMatch\nResourceAllocationMovementMatchRule\nResourceAllocationMovementMatchRule2Area\nResourceAllocationMovementMatchRule2Resource\nResourceAllocationMovementMatchRule2ResourceGroup\nResourceAllocationMovementMatchRuleGroup\nResourceAllocationOverlapPermissionRule\nResourceAllocationOverlapPermissionRule2Area\nResourceAllocationOverlapPermissionRule2Resource\nResourceAllocationOverlapPermissionRule2ResourceGroup\nResourceDowngradeType\nResourceGroup\nResourceSerieVisualRule\nResourceSerieVisualRule2UserRole\nResourceSerieVisualRuleColor\nResourceSerieVisualRuleDescription\nResourceSerieVisualRuleHeaderText\nResourceSerieVisualRuleIcon\nResourceSerieVisualRuleToolTip\nResourceTypeSettings\nResourceTypeSettings2Details\nResourceUnavailabilityRule\nResourceUnavailabilityRule2Area\nResourceUnavailabilityRule2Resource\nResourceUnavailabilityRule2ResourceGroup\nRoute\nRouteViaPoint\nSeason\nSlotRequest\nSlotRequestComplianceRule\nSlotRequestCompliantSlot\nSlotRequestOperatedFlight\nSlotRequestPropertyMapping\nSlotRequestReservedSlot\nSlotRequestStatusDefinition\nSlotRequestStatusHistoryLog\nSlotRequestStatusTransition\nTowing\n\nRemember to exclude ALL POTENTIALLY RELEVANT tables, even if you're not sure that they're needed.")), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['input'], template='{input}'))]) | RunnableBinding(bound=AzureChatOpenAI(verbose=True, client=<openai.resources.chat.completions.Completions object at 0x0000016AB8252AE0>, 
async_client=<openai.resources.chat.completions.AsyncCompletions object at 0x0000016AB721ABD0>, model_name='gpt-4', openai_api_key='197d3afbceeb47dda56932803db5a3a4', openai_api_base='https://sitalabopenai2.openai.azure.com/openai/deployments/sita-lab-gpt4', openai_proxy='', openai_api_version='2023-10-01-preview', openai_api_type='azure'), kwargs={'tools': [{'type': 'function', 'function': {'name': 'Table', 'description': 'Table in SQL database.', 'parameters': {'type': 'object', 'properties': {'name': {'description': 'Name of table in SQL database.', 'type': 'string'}}, 'required': ['name']}}}]}) | PydanticToolsParser(tools=[<class '__main__.Table'>]) }) middle=[RunnableAssign(mapper={ input: RunnableLambda(...), table_info: RunnableLambda(...) }), RunnableLambda(lambda x: {k: v for k, v in x.items() if k not in ('question', 'table_names_to_use')}), PromptTemplate(input_variables=['input', 'table_info'], partial_variables={'top_k': '5'}, template='You are an MS SQL expert. Given an input question, first create a syntactically correct MS SQL query to run, then look at the results of the query and return the answer to the input question.\nUnless the user specifies in the question a specific number of examples to obtain, query for at most {top_k} results using the TOP clause as per MS SQL. You can order the results to return the most informative data in the database.\nNever query for all columns from a table. You must query only the columns that are needed to answer the question. Wrap each column name in square brackets ([]) to denote them as delimited identifiers.\nPay attention to use only the column names you can see in the tables below. Be careful to not query for columns that do not exist. 
Also, pay attention to which column is in which table.\nPay attention to use CAST(GETDATE() as date) function to get the current date, if the question involves "today".\n\nUse the following format:\n\nQuestion: Question here\nSQLQuery: SQL Query to run\nSQLResult: Result of the SQLQuery\nAnswer: Final answer here\n\nOnly use the following tables:\n{table_info}\n\nQuestion: {input}'), RunnableBinding(bound=AzureChatOpenAI(verbose=True, client=<openai.resources.chat.completions.Completions object at 0x0000016AB8252AE0>, async_client=<openai.resources.chat.completions.AsyncCompletions object at 0x0000016AB721ABD0>, model_name='gpt-4', openai_api_key='197d3afbceeb47dda56932803db5a3a4', openai_api_base='https://sitalabopenai2.openai.azure.com/openai/deployments/sita-lab-gpt4', openai_proxy='', openai_api_version='2023-10-01-preview', openai_api_type='azure'), kwargs={'stop': ['\nSQLResult:']}), StrOutputParser()] last=RunnableLambda(_strip) Traceback (most recent call last): File "c:\Users\Savita.Raghuvanshi\OneDrive - SITA\Desktop\llm\ML-LLM-OperationsCoPilot\test\QnA_with_sql_db\test_agent_get_relavent_tables.py", line 102, in <module> final_response = test_agent_get_relevant_tables(db, llm, "list me airports") ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "c:\Users\Savita.Raghuvanshi\OneDrive - SITA\Desktop\llm\ML-LLM-OperationsCoPilot\test\QnA_with_sql_db\test_agent_get_relavent_tables.py", line 92, in test_agent_get_relevant_tables query = full_chain.invoke({"question": my_question, "table_name": table_name,"schema_name": "aodb"}) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Savita.Raghuvanshi\OneDrive - SITA\Desktop\llm\ML-LLM-OperationsCoPilot\ops_env\Lib\site-packages\langchain_core\runnables\base.py", line 2056, in invoke input = step.invoke( ^^^^^^^^^^^^ File "C:\Users\Savita.Raghuvanshi\OneDrive - 
SITA\Desktop\llm\ML-LLM-OperationsCoPilot\ops_env\Lib\site-packages\langchain_core\runnables\passthrough.py", line 419, in invoke return self._call_with_config(self._invoke, input, config, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Savita.Raghuvanshi\OneDrive - SITA\Desktop\llm\ML-LLM-OperationsCoPilot\ops_env\Lib\site-packages\langchain_core\runnables\base.py", line 1243, in _call_with_config context.run( File "C:\Users\Savita.Raghuvanshi\OneDrive - SITA\Desktop\llm\ML-LLM-OperationsCoPilot\ops_env\Lib\site-packages\langchain_core\runnables\config.py", line 326, in call_func_with_variable_args return func(input, **kwargs) # type: ignore[call-arg] ^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Savita.Raghuvanshi\OneDrive - SITA\Desktop\llm\ML-LLM-OperationsCoPilot\ops_env\Lib\site-packages\langchain_core\runnables\passthrough.py", line 406, in _invoke **self.mapper.invoke( ^^^^^^^^^^^^^^^^^^^ File "C:\Users\Savita.Raghuvanshi\OneDrive - SITA\Desktop\llm\ML-LLM-OperationsCoPilot\ops_env\Lib\site-packages\langchain_core\runnables\base.py", line 2693, in invoke output = {key: future.result() for key, future in zip(steps, futures)} ^^^^^^^^^^^^^^^ File "C:\Python312\Lib\concurrent\futures\_base.py", line 456, in result return self.__get_result() ^^^^^^^^^^^^^^^^^^^ File "C:\Python312\Lib\concurrent\futures\_base.py", line 401, in __get_result raise self._exception File "C:\Python312\Lib\concurrent\futures\thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Savita.Raghuvanshi\OneDrive - SITA\Desktop\llm\ML-LLM-OperationsCoPilot\ops_env\Lib\site-packages\langchain_core\runnables\base.py", line 3507, in invoke return self._call_with_config( ^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Savita.Raghuvanshi\OneDrive - SITA\Desktop\llm\ML-LLM-OperationsCoPilot\ops_env\Lib\site-packages\langchain_core\runnables\base.py", line 1243, in _call_with_config context.run( File 
"C:\Users\Savita.Raghuvanshi\OneDrive - SITA\Desktop\llm\ML-LLM-OperationsCoPilot\ops_env\Lib\site-packages\langchain_core\runnables\config.py", line 326, in call_func_with_variable_args return func(input, **kwargs) # type: ignore[call-arg] ^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Savita.Raghuvanshi\OneDrive - SITA\Desktop\llm\ML-LLM-OperationsCoPilot\ops_env\Lib\site-packages\langchain_core\runnables\base.py", line 3381, in _invoke output = call_func_with_variable_args( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Savita.Raghuvanshi\OneDrive - SITA\Desktop\llm\ML-LLM-OperationsCoPilot\ops_env\Lib\site-packages\langchain_core\runnables\config.py", line 326, in call_func_with_variable_args return func(input, **kwargs) # type: ignore[call-arg] ^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Savita.Raghuvanshi\OneDrive - SITA\Desktop\llm\ML-LLM-OperationsCoPilot\ops_env\Lib\site-packages\langchain\chains\sql_database\query.py", line 126, in <lambda> "table_info": lambda x: db.get_table_info( ^^^^^^^^^^^^^^^^^^ File "C:\Users\Savita.Raghuvanshi\OneDrive - SITA\Desktop\llm\ML-LLM-OperationsCoPilot\ops_env\Lib\site-packages\langchain_community\utilities\sql_database.py", line 307, in get_table_info raise ValueError(f"table_names {missing_tables} not found in database") ValueError: table_names {Table(name='Airport')} not found in database ### Description As you see from the stack trace my db is having Table(name='Airport') but error saying table not found ### System Info PS C:\Users\Savita.Raghuvanshi\OneDrive - SITA\Desktop\llm\ML-LLM-OperationsCoPilot> pip freeze aiohttp==3.9.3 aiosignal==1.3.1 aniso8601==9.0.1 annotated-types==0.6.0 anyio==3.7.1 async-timeout==4.0.3 attrs==23.1.0 azure-common==1.1.28 azure-core==1.29.5 azure-identity==1.13.0 azure-search==1.0.0b2 azure-search-documents==11.4.0b8 backoff==2.2.1 beautifulsoup4==4.12.3 blinker==1.6.3 cachetools==5.3.2 certifi==2023.7.22 cffi==1.16.0 chardet==5.2.0 charset-normalizer==3.3.1 click==8.1.7 colorama==0.4.6 
cryptography==41.0.5 dataclasses-json==0.6.1 dataclasses-json-speakeasy==0.5.11 distro==1.9.0 dnspython==2.4.2 docx2txt==0.8 emoji==2.10.1 exceptiongroup==1.1.3 filetype==1.2.0 flasgger==0.9.7.1 Flask==2.3.3 Flask-Cors==3.0.10 Flask-RESTful==0.3.9 flask-restx==1.2.0 frozenlist==1.4.0 gevent==24.2.1 greenlet==3.0.1 h11==0.14.0 httpcore==1.0.3 httpx==0.26.0 idna==3.4 importlib-metadata==6.8.0 importlib-resources==6.1.0 iniconfig==2.0.0 isodate==0.6.1 itsdangerous==2.1.2 Jinja2==3.1.2 joblib==1.3.2 jsonpatch==1.33 jsonpath-python==1.0.6 jsonpointer==2.4 jsonschema==4.17.3 jsonschema-specifications==2023.7.1 **langchain==0.1.9 langchain-community==0.0.23 langchain-core==0.1.26 langchain-experimental==0.0.52 langchain-openai==0.0.6** langdetect==1.0.9 langsmith==0.1.6 loguru==0.7.2 lxml==5.1.0 MarkupSafe==2.1.3 marshmallow==3.20.1 mistune==3.0.2 msal==1.24.1 msal-extensions==1.0.0 msrest==0.7.1 multidict==6.0.4 mypy-extensions==1.0.0 nltk==3.8.1 nose==1.3.7 numpy==1.26.4 oauthlib==3.2.2 openai==1.12.0 orjson==3.9.14 packaging==23.2 pkgutil_resolve_name==1.3.10 pluggy==1.4.0 portalocker==2.8.2 psycopg2==2.9.9 psycopg2-binary==2.9.9 pycparser==2.21 pydantic==2.4.2 pydantic_core==2.10.1 PyJWT==2.8.0 pymongo==4.5.0 PyMySQL==1.1.0 pyodbc==5.1.0 pypdf==3.17.0 pyrsistent==0.20.0 pytest==8.0.0 pytest-mock==3.12.0 python-dateutil==2.8.2 python-dotenv==1.0.0 python-environ==0.4.54 python-iso639==2024.2.7 python-magic==0.4.27 pytz==2023.3.post1 pywin32==306 PyYAML==6.0.1 rapidfuzz==3.6.1 referencing==0.30.2 regex==2023.10.3 requests==2.31.0 requests-oauthlib==1.3.1 rpds-py==0.10.6 Scaffold==0.1.3 setuptools==69.1.0 six==1.16.0 sniffio==1.3.0 soupsieve==2.5 SQLAlchemy==2.0.22 tabulate==0.9.0 tenacity==8.2.3 tiktoken==0.6.0 tqdm==4.66.1 typing-inspect==0.9.0 typing_extensions==4.8.0 unstructured==0.11.8 unstructured-client==0.18.0 urllib3==2.0.7 websocket==0.2.1 websockets==12.0 Werkzeug==2.3.7 wheel==0.42.0 win32-setctime==1.1.0 wrapt==1.16.0 yarl==1.9.2 zipp==3.17.0 
zope.event==5.0 zope.interface==6.1
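The `ValueError` above points at the likely culprit: the table-selection step returns pydantic `Table` objects, and the set `{Table(name='Airport')}` is handed straight to `SQLDatabase.get_table_info`, which compares against plain string names. A minimal sketch of the missing unwrapping step — the `Table` dataclass below is a stand-in for the pydantic tool-output class, not the real one:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Table:
    """Stand-in for the pydantic `Table` tool output produced by the chain above."""
    name: str

def normalize_table_names(tables):
    """Coerce Table objects (or already-plain strings) to the bare string
    names that SQLDatabase.get_table_info compares against."""
    return [t.name if isinstance(t, Table) else t for t in tables]

# The chain produced {Table(name='Airport')}; get_table_info wants 'Airport'.
print(normalize_table_names([Table(name="Airport"), "Airline"]))
```

Inserting something like `lambda x: normalize_table_names(x["table_names_to_use"])` before the `db.get_table_info` call should make the lookup succeed, assuming the rest of the chain is unchanged.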
Reporting missing table issue on msSql although table present
https://api.github.com/repos/langchain-ai/langchain/issues/18137/comments
0
2024-02-26T17:08:01Z
2024-06-08T16:12:30Z
https://github.com/langchain-ai/langchain/issues/18137
2,154,680,784
18,137
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code ```python from langchain.agents import create_openai_tools_agent, AgentExecutor from langchain.tools import Tool from langchain_community.callbacks import OpenAICallbackHandler llm = AzureChatOpenAI(**current_settings.azureopenai_llm.dict(), temperature=0, callbacks=[OpenAICallbackHandler()]) def get_context_from_vector_store(query): results = VectorStoreManager(collection_name=collection_name).store.similarity_search_with_score(query, k=k) return results add_db_context = Tool( name="add_context_documents_from_vector_store", func=get_context_from_vector_store, description="Useful when you need to answer questions about the contents of the files in the vector store. Use it if you are uncertain about your answer or you don't have any hard data to support your answer", return_direct=False ) tools = [add_db_context] agent = create_openai_tools_agent(llm=llm, tools=tools, prompt=hub.pull("hwchase17/openai-tools-agent")) agent_executor = AgentExecutor(agent=agent, tools=tools, callbacks=[OpenAICallbackHandler()]) agent_executor.invoke({"input": "test"}) print(agent_executor.callbacks) ``` ### Error Message and Stack Trace (if applicable) _No response_ ### Description I am writing a simple RAG application with Langchain Tools, function calling and Langchain Agents. I want to monitor token usage for the agent. I want to use the Langchain Callbacks for that purpose. I see that the `OpenAICallbackHandler` properly monitors token usage for chat models directly, but it doesn't monitor the usage statistics for agents at all. It should be possible since the `AzureChatOpenAI` is passed to the agent.
I tried defining the callback both in the agent and in the chat model, but after invoking the agent the usage statistics are not saved in any of the callbacks whatsoever. I think this may be because the `OpenAICallbackHandler` implements the `on_llm_end` method, but not the `on_chain_end` method, which seems to be the method that the agent callbacks interact with ([source](https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.AgentExecutor.html)). I wanted to define a custom callback handler that would extend `OpenAICallbackHandler` and map `on_llm_end` to `on_chain_end`, but this is not straightforward, or perhaps not doable at all (the `LLMResult` instance used in `on_llm_end` seems to be lost inside the interaction between the Agent and the chat model, which prevents access to the "token_usage" property). Can token usage somehow be monitored when working with Langchain Agents? ### System Info langchain==0.1.9 langchain-openai==0.0.7 langchainhub==0.1.14 pydantic==1.10.13
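One workaround worth trying is the `get_openai_callback` context manager (`from langchain_community.callbacks import get_openai_callback`), which registers the handler for everything run inside the `with` block rather than relying on the `callbacks=` constructor argument. For illustration only, here is a self-contained sketch of the accumulation such a handler performs in `on_llm_end` — the payload shape is an assumption modeled on OpenAI's `token_usage` dict, not the real `LLMResult`:

```python
class TokenUsageTracker:
    """Toy accumulator mirroring what an on_llm_end-style handler does with
    a token_usage dict (payload shape assumed for illustration)."""

    def __init__(self):
        self.prompt_tokens = 0
        self.completion_tokens = 0

    def on_llm_end(self, llm_output: dict) -> None:
        # Pull the usage numbers out of the (assumed) llm_output payload.
        usage = llm_output.get("token_usage", {})
        self.prompt_tokens += usage.get("prompt_tokens", 0)
        self.completion_tokens += usage.get("completion_tokens", 0)

    @property
    def total_tokens(self) -> int:
        return self.prompt_tokens + self.completion_tokens

tracker = TokenUsageTracker()
tracker.on_llm_end({"token_usage": {"prompt_tokens": 12, "completion_tokens": 30}})
tracker.on_llm_end({"token_usage": {"prompt_tokens": 8, "completion_tokens": 5}})
print(tracker.total_tokens)  # 55
```

The key point is that the accumulation happens at the LLM level, so the handler must be attached where the chat model's `on_llm_end` events are emitted, not only on the `AgentExecutor`.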
OpenAICallbackHandler not counting token usage for Agents
https://api.github.com/repos/langchain-ai/langchain/issues/18130/comments
10
2024-02-26T14:20:12Z
2024-06-04T21:51:39Z
https://github.com/langchain-ai/langchain/issues/18130
2,154,300,761
18,130
[ "langchain-ai", "langchain" ]
### Checklist - [X] I added a very descriptive title to this issue. - [X] I included a link to the documentation page I am referring to (if applicable). ### Issue with current documentation: I tried to configure MongoDBChatMessageHistory using the code from the original documentation to store messages based on the passed session_id in MongoDB. However, this configuration did not take effect, and the session id in the database remained as 'test_session'. To resolve this issue, I found that when configuring MongoDBChatMessageHistory, it is necessary to set `session_id=session_id` instead of `session_id="test_session"`. pr: https://github.com/langchain-ai/langchain/pull/18128 _No response_
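The fix in the linked PR boils down to threading the caller's `session_id` through instead of the hardcoded `'test_session'` literal. A dependency-free sketch of the corrected wiring — the helper name and the database/collection names are invented for illustration:

```python
def history_kwargs(session_id: str,
                   connection_string: str = "mongodb://localhost:27017") -> dict:
    """Build the kwargs for MongoDBChatMessageHistory, passing the caller's
    session_id through rather than the docs' hardcoded 'test_session'."""
    return {
        "connection_string": connection_string,
        "session_id": session_id,      # was: "test_session"
        "database_name": "chat_db",    # illustrative names
        "collection_name": "histories",
    }

print(history_kwargs("user-42")["session_id"])  # user-42
```

With this shape, each user's messages land under their own session id instead of all piling up under `test_session`.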
DOC: Ineffective Configuration of MongoDBChatMessageHistory for Custom session_id Storage
https://api.github.com/repos/langchain-ai/langchain/issues/18127/comments
0
2024-02-26T14:04:47Z
2024-02-28T15:57:20Z
https://github.com/langchain-ai/langchain/issues/18127
2,154,265,433
18,127
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code ```python index_name = f"vector_{index_suffix}" keyword_index_name = f"keyword_{index_suffix}" print(f"Setup with indices: {index_name} and {keyword_index_name} ") hybrid_db = Neo4jVector.from_documents( docs, embeddings, url=url, username=username, password=password, search_type="hybrid", pre_delete_collection=True, index_name=index_name, keyword_index_name=keyword_index_name, ) print(f"\nLoaded hybrid_db {hybrid_db.search_type} with indices: {hybrid_db.index_name} and {hybrid_db.keyword_index_name} ") print(f"Embedded {index_suffix}\nsize of docs: {len(docs)}\n") ``` ### Error Message and Stack Trace (if applicable) Setup with indices: vector_data and keyword_data Loaded hybrid_db hybrid with indices: vector and keyword Embedded data size of docs: 543 ### Description * `Neo4jVector.from_documents` does not seem to set up the values of `index_name` and `keyword_index_name` * They should be set as I do explicitly in the code ### System Info $ python -m langchain_core.sys_info System Information ------------------ > OS: Linux > OS Version: #1 SMP Thu Oct 5 21:02:42 UTC 2023 > Python Version: 3.12.2 (main, Feb 7 2024, 21:49:26) [GCC 10.2.1 20210110] Package Information ------------------- > langchain_core: 0.1.26 > langchain: 0.1.9 > langchain_community: 0.0.24 > langsmith: 0.1.7 > langchain_openai: 0.0.7 Packages not installed (Not Necessarily a Problem) -------------------------------------------------- The following packages were not found: > langgraph > langserve
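Until `from_documents` propagates the names, a quick sanity check after construction can catch the regression early. This sketch only inspects attributes on the returned store object; a stand-in object is used here so the snippet runs without a Neo4j instance:

```python
from types import SimpleNamespace

def check_index_names(store, index_name: str, keyword_index_name: str):
    """Return a list of mismatches between the index names we asked for and
    the names the store object actually carries."""
    problems = []
    actual = getattr(store, "index_name", None)
    if actual != index_name:
        problems.append(f"index_name is {actual!r}, expected {index_name!r}")
    actual_kw = getattr(store, "keyword_index_name", None)
    if actual_kw != keyword_index_name:
        problems.append(
            f"keyword_index_name is {actual_kw!r}, expected {keyword_index_name!r}"
        )
    return problems

# Reproduces the report: asked for vector_data/keyword_data, got the defaults.
buggy = SimpleNamespace(index_name="vector", keyword_index_name="keyword")
print(check_index_names(buggy, "vector_data", "keyword_data"))
```

Running this right after `Neo4jVector.from_documents(...)` (and failing loudly on a non-empty result) prevents silently writing into the default `vector`/`keyword` indexes.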
Neo4jVector.from_documents doesn't set the index_name and keyword_index_name
https://api.github.com/repos/langchain-ai/langchain/issues/18126/comments
3
2024-02-26T13:51:37Z
2024-06-08T16:12:26Z
https://github.com/langchain-ai/langchain/issues/18126
2,154,237,278
18,126
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code The following code from https://github.com/langchain-ai/langgraph/blob/main/examples/web-navigation/web_voyager.ipynb ```py from langchain import hub from langchain_core.output_parsers import StrOutputParser from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder from langchain_core.runnables import RunnablePassthrough from langchain_openai import ChatOpenAI async def annotate(state): marked_page = await mark_page.with_retry().ainvoke(state["page"]) return {**state, **marked_page} def format_descriptions(state): labels = [] for i, bbox in enumerate(state["bboxes"]): text = bbox.get("ariaLabel") or "" if not text.strip(): text = bbox["text"] el_type = bbox.get("type") labels.append(f'{i} (<{el_type}/>): "{text}"') bbox_descriptions = "\nValid Bounding Boxes:\n" + "\n".join(labels) return {**state, "bbox_descriptions": bbox_descriptions} def parse(text: str) -> dict: action_prefix = "Action: " if not text.strip().split("\n")[-1].startswith(action_prefix): return {"action": "retry", "args": f"Could not parse LLM Output: {text}"} action_block = text.strip().split("\n")[-1] action_str = action_block[len(action_prefix) :] split_output = action_str.split(" ", 1) if len(split_output) == 1: action, action_input = split_output[0], None else: action, action_input = split_output action = action.strip() if action_input is not None: action_input = [ inp.strip().strip("[]") for inp in action_input.strip().split(";") ] return {"action": action, "args": action_input} # Will need a later version of langchain to pull # this image prompt template prompt = hub.pull("wfh/web-voyager") ``` ### Error Message and Stack Trace (if applicable) { "name": 
"ValueError", "message": "Trying to deserialize something that cannot be deserialized in current version of langchain-core: ('langchain_core', 'prompts', 'image', 'ImagePromptTemplate')", "stack": "--------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[49], line 1 ----> 1 prompt_new = hub.pull(\"mrpolymath/web-voyager_test\") File /usr/local/lib/python3.11/site-packages/langchain/hub.py:81, in pull(owner_repo_commit, api_url, api_key) 79 client = _get_client(api_url=api_url, api_key=api_key) 80 resp: str = client.pull(owner_repo_commit) ---> 81 return loads(resp) File /usr/local/lib/python3.11/site-packages/langchain_core/_api/beta_decorator.py:109, in beta.<locals>.beta.<locals>.warning_emitting_wrapper(*args, **kwargs) 107 warned = True 108 emit_warning() --> 109 return wrapped(*args, **kwargs) File /usr/local/lib/python3.11/site-packages/langchain_core/load/load.py:130, in loads(text, secrets_map, valid_namespaces) 111 @beta() 112 def loads( 113 text: str, (...) 116 valid_namespaces: Optional[List[str]] = None, 117 ) -> Any: 118 \"\"\"Revive a LangChain class from a JSON string. 119 Equivalent to `load(json.loads(text))`. 120 (...) 128 Revived LangChain objects. 129 \"\"\" --> 130 return json.loads(text, object_hook=Reviver(secrets_map, valid_namespaces)) File /usr/local/Cellar/python@3.11/3.11.7_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/json/__init__.py:359, in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw) 357 if parse_constant is not None: 358 kw['parse_constant'] = parse_constant --> 359 return cls(**kw).decode(s) File /usr/local/Cellar/python@3.11/3.11.7_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/json/decoder.py:337, in JSONDecoder.decode(self, s, _w) 332 def decode(self, s, _w=WHITESPACE.match): 333 \"\"\"Return the Python representation of ``s`` (a ``str`` instance 334 containing a JSON document). 
335 336 \"\"\" --> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end()) 338 end = _w(s, end).end() 339 if end != len(s): File /usr/local/Cellar/python@3.11/3.11.7_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/json/decoder.py:353, in JSONDecoder.raw_decode(self, s, idx) 344 \"\"\"Decode a JSON document from ``s`` (a ``str`` beginning with 345 a JSON document) and return a 2-tuple of the Python 346 representation and the index in ``s`` where the document ended. (...) 350 351 \"\"\" 352 try: --> 353 obj, end = self.scan_once(s, idx) 354 except StopIteration as err: 355 raise JSONDecodeError(\"Expecting value\", s, err.value) from None File /usr/local/lib/python3.11/site-packages/langchain_core/load/load.py:82, in Reviver.__call__(self, value) 80 key = tuple(namespace + [name]) 81 if key not in ALL_SERIALIZABLE_MAPPINGS: ---> 82 raise ValueError( 83 \"Trying to deserialize something that cannot \" 84 \"be deserialized in current version of langchain-core: \" 85 f\"{key}\" 86 ) 87 import_path = ALL_SERIALIZABLE_MAPPINGS[key] 88 # Split into module and name ValueError: Trying to deserialize something that cannot be deserialized in current version of langchain-core: ('langchain_core', 'prompts', 'image', 'ImagePromptTemplate')" } ### Description I was trying to fork a prompt template from "wfh/web-voyager" at LangSmith Hub and modify it to fit my own use-case. ### System Info langchain==0.1.9 langchain-community==0.0.22 langchain-core==0.1.26 langchain-openai==0.0.7 langchainhub==0.1.14 platform (mac) python version (Python 3.11.7)
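Since the error is a version mismatch (`ImagePromptTemplate` is missing from the installed `langchain-core`'s serializable mappings), a pre-flight version check before `hub.pull` can fail fast with a clearer message. The minimum version below is a placeholder assumption — check the langchain-core changelog for the release that actually added `ImagePromptTemplate` deserialization:

```python
def parse_version(v: str) -> tuple:
    """Parse 'X.Y.Z' into a comparable tuple of ints (pre-release tags ignored)."""
    return tuple(int(part) for part in v.split(".")[:3])

# Placeholder: replace with the real release that made ImagePromptTemplate
# deserializable (see the langchain-core changelog).
MIN_CORE_FOR_IMAGE_PROMPTS = parse_version("0.1.30")

def supports_image_prompts(installed: str) -> bool:
    """True if the installed langchain-core meets the assumed minimum."""
    return parse_version(installed) >= MIN_CORE_FOR_IMAGE_PROMPTS

print(supports_image_prompts("0.1.26"))  # False — the version in this report
```

In practice the installed version can be read with `importlib.metadata.version("langchain-core")`, and the usual remedy is simply `pip install -U langchain-core` before pulling prompts that contain image templates.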
Deserialization error when modifying an existing prompt template in LangSmith Hub
https://api.github.com/repos/langchain-ai/langchain/issues/21295/comments
1
2024-02-26T12:42:11Z
2024-08-10T16:07:10Z
https://github.com/langchain-ai/langchain/issues/21295
2,279,169,456
21,295
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code os.environ['OPENAI_API_KEY'] = openapi_key # Define connection parameters using constants from urllib.parse import quote_plus server_name = constants.server_name database_name = constants.database_name username = constants.username password = constants.password encoded_password = quote_plus(password) connection_uri = f"mssql+pyodbc://{username}:{encoded_password}@{server_name}/{database_name}?driver=ODBC+Driver+17+for+SQL+Server" # Create an engine to connect to the SQL database engine = create_engine(connection_uri) model_name="gpt-3.5-turbo-16k" # db = SQLDatabase(engine, view_support=True, include_tables=['EGV_emp_departments_ChatGPT']) db = SQLDatabase(engine, view_support=True, include_tables=['EGV_emp_departments_rnd360_overall','combined_leave_details', 'egv_attendancequery_chatgpt', 'egv_compoff_chatgpt', 'egv_leavedetails_chatgpt','egv_location_chatgpt','egv_education_chatgpt','egv_memo_chatgpt']) # db = SQLDatabase(engine, view_support=True, include_tables=[ 'EGV_emp_departments_rnd360_overall', 'egv_leavedetails_chatgpt']) llm = ChatOpenAI(temperature=0, verbose=False, model=model_name) PROMPT_SUFFIX = """Only use the following tables: {table_info} Previous Conversation: {history} Question: {input}""" _DEFAULT_TEMPLATE = """Given an input question, first create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer. Given an input question, create a syntactically correct MSSQL query by considering only the matching column names from the question, then look at the results of the query and return the answer. 
Check each view, and if the question spans different views, perform a join on those. If a column name is not present, refrain from writing the SQL query. Columns like UAN number and PF number are not present; do not consider such columns. Write the query only for the column names which are present in the view. Execute the query and analyze the results to formulate a response. Return the answer in sentence form. Use the following format: Question: "Question here" SQLQuery: "SQL Query to run" SQLResult: "Result of the SQLQuery" Answer: "Final answer here" """ PROMPT = PromptTemplate.from_template(_DEFAULT_TEMPLATE + PROMPT_SUFFIX) memory = None # cache = InMemoryCache() # Define a function named chat that takes a question and SQL format indicator as input def chat1(question): # global db_chain global memory # prompt = """ # Given an input question, create a syntactically correct MSSQL query by considering only the matching column names from the question, # then look at the results of the query and return the answer. # If a column name is not present, refrain from writing the SQL query. Columns like UAN number and PF number are not present; do not consider such columns. # Write the query only for the column names which are present in the view. # Execute the query and analyze the results to formulate a response. # Return the answer in sentence form. # The question: {question} # """ try: if memory is None: memory = ConversationBufferMemory() db_chain = SQLDatabaseChain.from_llm(llm, db, prompt=PROMPT, verbose=True, memory=memory) greetings = ["hi", "hello", "hey"] if any(greeting == question.lower() for greeting in greetings): print(question) print("Hello! How can I assist you today?") return "Hello! How can I assist you today?"
# elif question in cache: # return cache[question] else: answer = db_chain.run(question) # answer = db_chain.run(prompt.format(question=question)) # print(memory.load_memory_variables()["history"]) print(memory.load_memory_variables({})) # history = memory.load_memory_variables()["history"] # print(history) return answer except exc.ProgrammingError as e: # Check for a specific SQL error related to invalid column name if "Invalid column name" in str(e): print("Answer: Error occurred while processing the question") print(str(e)) return "Invalid question. Please check your column names." else: print("Error occurred while processing") print(str(e)) # return "Unknown ProgrammingError occurred" return "Invalid Question" except openai.RateLimitError as e: print("Error occurred while fetching the answer") print(str(e)) return "Rate limit exceeded. Please mention the specific columns you need!" except openai.BadRequestError as e: print("Error occurred while fetching the answer") print(str(e)) return "Context length exceeded: This model's maximum context length is 16385 tokens. Please reduce the length of the messages." except Exception as e: print("Error occurred while processing") print(str(e)) return "Unknown error occurred" ### Error Message and Stack Trace (if applicable) _No response_ ### Description While using a single view or two views the model answers accurately, but when I include multiple views and the question requires joining two or more of them, it does not return the correct result: it fails to perform the join and either considers the wrong view or the wrong column from a view. How can I resolve this issue? ### System Info python: 3.11 langchain: latest
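One mitigation that tends to help under-specified multi-view questions is to put a worked JOIN example directly in the prompt, so the model sees how the views relate. The column names below are invented for illustration (the view names come from the `include_tables` list above); the point is the mechanical few-shot construction:

```python
JOIN_EXAMPLE = (
    'Question: "List employees with their leave balance"\n'
    "SQLQuery: SELECT TOP 5 e.[EmployeeName], l.[LeaveBalance] "
    "FROM EGV_emp_departments_rnd360_overall e "
    "JOIN combined_leave_details l ON e.[EmployeeID] = l.[EmployeeID]\n"
)

def build_join_prompt(template: str, question: str) -> str:
    """Prepend a worked JOIN example before the actual question."""
    return template + "\nExample:\n" + JOIN_EXAMPLE + f"\nQuestion: {question}\n"

prompt_text = build_join_prompt(
    "Create a syntactically correct MSSQL query. Join views when the answer "
    "spans more than one of them.",
    "How many leave days does each employee in HR have left?",
)
print("JOIN" in prompt_text)  # True
```

The same example block can be folded into `_DEFAULT_TEMPLATE` above; with `SQLDatabaseChain` the few-shot example is the main lever available for steering join behaviour, short of switching to an agent that can inspect schemas step by step.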
when connecting multiple views in SQLDatabaseChain model its not performing the join properly
https://api.github.com/repos/langchain-ai/langchain/issues/18120/comments
6
2024-02-26T09:54:23Z
2024-06-08T16:12:20Z
https://github.com/langchain-ai/langchain/issues/18120
2,153,734,246
18,120
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code ```python from elasticsearch import Elasticsearch from langchain import hub from langchain.agents import create_openai_functions_agent, AgentExecutor from langchain.chains.elasticsearch_database import ElasticsearchDatabaseChain from langchain_core.tools import Tool from langchain_openai import ChatOpenAI # Connect to Elasticsearch conn = Elasticsearch( "http://xx.xxx.xxx:9200", ca_certs="certs/http_ca.crt", http_auth=("xx", "xxxx"), verify_certs=False ) if conn.ping(): print("Successfully connected to Elasticsearch!") else: print("Could not connect to Elasticsearch!") conn.search() llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0) db_chain = ElasticsearchDatabaseChain.from_llm(llm, conn, verbose=True) tools = [ Tool.from_function( func=db_chain.invoke, name="es_db_Search", description="Search the Elasticsearch database using the input terms", ), ] prompt = hub.pull("hwchase17/openai-functions-agent") agent = create_openai_functions_agent(llm, tools, prompt) agent_executor = AgentExecutor( agent=agent, tools=tools, verbose=True) executor_invoke = agent_executor.invoke({"input": "Search the index for solutions matching the keywords [reduce overall mass while keeping strength adequate]"}) print(executor_invoke) # invoke = db_chain.invoke({"question": "Search the index for solutions matching the keywords [reduce overall mass while keeping strength adequate]"}) # print(invoke) ``` ### Error Message and Stack Trace (if applicable) File "D:\soft\Anaconda3\envs\ptthon310\lib\site-packages\langchain\chains\base.py", line 168, in invoke raise e File "D:\soft\Anaconda3\envs\ptthon310\lib\site-packages\langchain\chains\base.py", line 158, in invoke self._call(inputs, run_manager=run_manager) File "D:\soft\Anaconda3\envs\ptthon310\lib\site-packages\langchain\agents\agent.py", line 1391, in _call next_step_output = self._take_next_step( File
"D:\soft\Anaconda3\envs\ptthon310\lib\site-packages\langchain\agents\agent.py", line 1097, in _take_next_step [ File "D:\soft\Anaconda3\envs\ptthon310\lib\site-packages\langchain\agents\agent.py", line 1097, in <listcomp> [ File "D:\soft\Anaconda3\envs\ptthon310\lib\site-packages\langchain\agents\agent.py", line 1182, in _iter_next_step yield self._perform_agent_action( File "D:\soft\Anaconda3\envs\ptthon310\lib\site-packages\langchain\agents\agent.py", line 1204, in _perform_agent_action observation = tool.run( File "D:\soft\Anaconda3\envs\ptthon310\lib\site-packages\langchain_core\tools.py", line 401, in run raise e File "D:\soft\Anaconda3\envs\ptthon310\lib\site-packages\langchain_core\tools.py", line 358, in run self._run(*tool_args, run_manager=run_manager, **tool_kwargs) File "D:\soft\Anaconda3\envs\ptthon310\lib\site-packages\langchain_core\tools.py", line 566, in _run else self.func(*args, **kwargs) File "D:\soft\Anaconda3\envs\ptthon310\lib\site-packages\langchain\chains\base.py", line 168, in invoke raise e File "D:\soft\Anaconda3\envs\ptthon310\lib\site-packages\langchain\chains\base.py", line 158, in invoke self._call(inputs, run_manager=run_manager) File "D:\soft\Anaconda3\envs\ptthon310\lib\site-packages\langchain\chains\elasticsearch_database\base.py", line 129, in _call indices_info = self._get_indices_infos(indices) File "D:\soft\Anaconda3\envs\ptthon310\lib\site-packages\langchain\chains\elasticsearch_database\base.py", line 102, in _get_indices_infos hits = self.database.search( File "D:\soft\Anaconda3\envs\ptthon310\lib\site-packages\elasticsearch\client\utils.py", line 168, in _wrapped return func(*args, params=params, headers=headers, **kwargs) TypeError: Elasticsearch.search() got an unexpected keyword argument 'query' ### Description 调用langchain得elasticsearch_database其中方法_get_indices_infos中调用es得时候 hits = self.database.search( index=k, query={"match_all": {}}, size=self.sample_documents_in_index_info, )["hits"]["hits"], 
it raises the error Elasticsearch.search() got an unexpected keyword argument 'query'; the `query` keyword should be changed to `body`. ### System Info elasticsearch 7.13.3 langchain 0.1.8 platform (windows) python 3.10
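A note on the fix: the mismatch tracks the elasticsearch-py client version. Older 7.x clients (such as the 7.13.3 in the report) only accept the request nested under `body`, while newer clients expose top-level keyword arguments such as `query`. A minimal sketch of a version-aware wrapper; the 7.15 cutover used below and the helper name are assumptions, not confirmed behavior:

```python
def build_search_kwargs(client_version: tuple, index: str, size: int) -> dict:
    """Build kwargs for Elasticsearch.search() that work across client versions.

    elasticsearch-py clients older than the per-field API (assumed to arrive
    in 7.15 here) reject the top-level `query` keyword, so the query must be
    nested under `body` instead.
    """
    query = {"match_all": {}}
    if client_version >= (7, 15):
        # Newer clients accept query/size as top-level keyword arguments.
        return {"index": index, "query": query, "size": size}
    # Older clients (e.g. 7.13.3 from the report) need everything in `body`.
    return {"index": index, "body": {"query": query, "size": size}}


# Illustrative usage with the report's connection object (not runnable here):
# hits = conn.search(**build_search_kwargs((7, 13), "my_index", 3))["hits"]["hits"]
```

Upgrading the elasticsearch client to a version matching what the LangChain chain expects avoids the need for any wrapper.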
langchain integrated with Elasticsearch, search syntax error, Elasticsearch.search() got an unexpected keyword argument 'query'
https://api.github.com/repos/langchain-ai/langchain/issues/18119/comments
1
2024-02-26T09:39:49Z
2024-06-18T16:09:20Z
https://github.com/langchain-ai/langchain/issues/18119
2,153,704,709
18,119
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code ```python def load_documents_v2(file_path, file_filter_regex, vectorstore_persist_path, vectorstore_collection_name, vectorstore_embeddings, parent_chunk_size, child_chunk_size, ): """ It is *not* possible to replace files with updated versions. Parameters ---------- file_path : str file path that points the documents file_filter_regex Regex expression for filtering documents. Must be a full match, that is, partial matches won't work Ex: r".*(.ts|.tsx)" vectorstore_persist_path : str where to save the vector store vectorstore_collection_name : str the name of the vector store vectorstore_embeddings the previously loaded embeddings parent_chunk_size how many tokens each document should contain (1000 is the minimum size that gives decent results) child_chunk_size the smaller the better """ # if os.path.exists(vectorstore_persist_path): # vectorstore = Chroma( # persist_directory=vectorstore_persist_path, # collection_name=vectorstore_collection_name, # embedding_function=vectorstore_embeddings) # return vectorstore documents = load_documents_as_files(file_path, file_filter_regex) print("chunking") parent_splitter = RecursiveCharacterTextSplitter(chunk_size=parent_chunk_size); child_splitter = RecursiveCharacterTextSplitter(chunk_size=child_chunk_size); store = InMemoryStore() vectorstore = Chroma( collection_name=vectorstore_collection_name, embedding_function=vectorstore_embeddings ) retriever = ParentDocumentRetriever( vectorstore=vectorstore, docstore=store, child_splitter=child_splitter, parent_splitter=parent_splitter, ) print("adding to ParentDocumentRetriever"); retriever.add_documents(documents) return [vectorstore, retriever] result = 
load_documents_v2( file_path="/Users/uavalos/Documents/manage/private/react/pages", file_filter_regex = "**/*", vectorstore_persist_path="/Users/uavalos/Documents/llm/manage-react-pages", vectorstore_collection_name="manage", vectorstore_embeddings=embeddings, parent_chunk_size=1000, child_chunk_size=200) ``` ### Error Message and Stack Trace (if applicable) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[9], line 1 ----> 1 result = load_documents_v2( 2 file_path="/Users/uavalos/Documents/manage/private/react/pages", 3 file_filter_regex = "**/*", 4 vectorstore_persist_path="/Users/uavalos/Documents/llm/manage-react-pages", 5 vectorstore_collection_name="manage", 6 vectorstore_embeddings=embeddings, 7 parent_chunk_size=1000, 8 child_chunk_size=400) 10 vectorstore = result[0] 11 retriever = result[1] Cell In[7], line 126, in load_documents_v2(file_path, file_filter_regex, vectorstore_persist_path, vectorstore_collection_name, vectorstore_embeddings, parent_chunk_size, child_chunk_size) 122 print("adding to ParentDocumentRetriever"); 124 vectorstore.max_batch_size = 150945 --> 126 retriever.add_documents(documents) 128 # ids = [] 129 130 # for doc in documents: (...) 
139 140 # vectorstore.persist() 142 return [vectorstore, retriever] File ~/miniconda3/envs/cisco3/lib/python3.11/site-packages/langchain/retrievers/parent_document_retriever.py:122, in ParentDocumentRetriever.add_documents(self, documents, ids, add_to_docstore) 120 docs.extend(sub_docs) 121 full_docs.append((_id, doc)) --> 122 self.vectorstore.add_documents(docs) 123 if add_to_docstore: 124 self.docstore.mset(full_docs) File ~/miniconda3/envs/cisco3/lib/python3.11/site-packages/langchain_core/vectorstores.py:119, in VectorStore.add_documents(self, documents, **kwargs) 117 texts = [doc.page_content for doc in documents] 118 metadatas = [doc.metadata for doc in documents] --> 119 return self.add_texts(texts, metadatas, **kwargs) File ~/miniconda3/envs/cisco3/lib/python3.11/site-packages/langchain_community/vectorstores/chroma.py:311, in Chroma.add_texts(self, texts, metadatas, ids, **kwargs) 309 raise ValueError(e.args[0] + "\n\n" + msg) 310 else: --> 311 raise e 312 if empty_ids: 313 texts_without_metadatas = [texts[j] for j in empty_ids] File ~/miniconda3/envs/cisco3/lib/python3.11/site-packages/langchain_community/vectorstores/chroma.py:297, in Chroma.add_texts(self, texts, metadatas, ids, **kwargs) 295 ids_with_metadata = [ids[idx] for idx in non_empty_ids] 296 try: --> 297 self._collection.upsert( 298 metadatas=metadatas, 299 embeddings=embeddings_with_metadatas, 300 documents=texts_with_metadatas, 301 ids=ids_with_metadata, 302 ) 303 except ValueError as e: 304 if "Expected metadata value to be" in str(e): File ~/miniconda3/envs/cisco3/lib/python3.11/site-packages/chromadb/api/models/Collection.py:487, in Collection.upsert(self, ids, embeddings, metadatas, documents, images, uris) 484 else: 485 embeddings = self._embed(input=images) --> 487 self._client._upsert( 488 collection_id=self.id, 489 ids=ids, 490 embeddings=embeddings, 491 metadatas=metadatas, 492 documents=documents, 493 uris=uris, 494 ) File 
~/miniconda3/envs/cisco3/lib/python3.11/site-packages/chromadb/telemetry/opentelemetry/__init__.py:127, in trace_method.<locals>.decorator.<locals>.wrapper(*args, **kwargs) 125 global tracer, granularity 126 if trace_granularity < granularity: --> 127 return f(*args, **kwargs) 128 if not tracer: 129 return f(*args, **kwargs) File ~/miniconda3/envs/cisco3/lib/python3.11/site-packages/chromadb/api/segment.py:447, in SegmentAPI._upsert(self, collection_id, ids, embeddings, metadatas, documents, uris) 445 coll = self._get_collection(collection_id) 446 self._manager.hint_use_collection(collection_id, t.Operation.UPSERT) --> 447 validate_batch( 448 (ids, embeddings, metadatas, documents, uris), 449 {"max_batch_size": self.max_batch_size}, 450 ) 451 records_to_submit = [] 452 for r in _records( 453 t.Operation.UPSERT, 454 ids=ids, (...) 459 uris=uris, 460 ): File ~/miniconda3/envs/cisco3/lib/python3.11/site-packages/chromadb/api/types.py:488, in validate_batch(batch, limits) 477 def validate_batch( 478 batch: Tuple[ 479 IDs, (...) 485 limits: Dict[str, Any], 486 ) -> None: 487 if len(batch[0]) > limits["max_batch_size"]: --> 488 raise ValueError( 489 f"Batch size {len(batch[0])} exceeds maximum batch size {limits['max_batch_size']}" 490 ) ValueError: Batch size 155434 exceeds maximum batch size 41666 ### Description * I'm basically following the default ParentDocumentRetriever example w/ both a child/parent splitter. See here: https://python.langchain.com/docs/modules/data_connection/retrievers/parent_document_retriever * However, the difference is that I have many (like 10k) docs. * Also, i'm using a local embedder. 
Ex: sentence-transformers/all-mpnet-base-v2 * I hit the above error message when trying to add the documents to the ParentDocumentRetriever ### System Info ``` System Information ------------------ > OS: Darwin > OS Version: Darwin Kernel Version 22.6.0: Sun Dec 17 22:13:25 PST 2023; root:xnu-8796.141.3.703.2~2/RELEASE_ARM64_T6020 > Python Version: 3.11.7 (main, Dec 15 2023, 12:09:56) [Clang 14.0.6 ] Package Information ------------------- > langchain_core: 0.1.23 > langchain: 0.1.7 > langchain_community: 0.0.20 > langsmith: 0.0.87 > langchainhub: 0.1.14 Packages not installed (Not Necessarily a Problem) -------------------------------------------------- The following packages were not found: > langgraph > langserve ```
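A common workaround reported for this class of error is to add documents in chunks no larger than the client's maximum batch size, since `add_documents` currently forwards everything in one upsert. A dependency-free sketch of the batching step; the commented usage and the `_client.max_batch_size` attribute are assumptions inferred from the traceback above:

```python
from itertools import islice


def batched(items, batch_size):
    """Yield successive lists of at most batch_size items each."""
    it = iter(items)
    while chunk := list(islice(it, batch_size)):
        yield chunk


# Illustrative wiring with the report's objects (not runnable here):
# max_batch = vectorstore._client.max_batch_size  # 41666 in the traceback
# for chunk in batched(documents, max_batch):
#     retriever.add_documents(chunk)
```

Chunking at the retriever level keeps each Chroma upsert under the limit without changing any library code.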
"Batch size 155434 exceeds maximum batch size 41666" error with ParentDocumentRetriever
https://api.github.com/repos/langchain-ai/langchain/issues/18105/comments
1
2024-02-26T03:26:21Z
2024-06-10T16:07:38Z
https://github.com/langchain-ai/langchain/issues/18105
2,153,173,912
18,105
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code code ```python from langchain_openai import AzureOpenAIEmbeddings from dotenv import load_dotenv load_dotenv() azure_embeddings = AzureOpenAIEmbeddings( azure_deployment="<model_deployment>", openai_api_version="2023-05-15", ) ``` and envs ``` AZURE_OPENAI_ENDPOINT=https://<MY PROJECT>.openai.azure.com/ AZURE_OPENAI_API_KEY=... ``` I can see while debugging that the azure_deployment is set properly, though the error still makes it impossible to run I have tried setting `validate_base_url = False` but then it throws an error that base_url and azure_deployment and exclusive ### Error Message and Stack Trace (if applicable) ``` 1 validation error for AzureOpenAIEmbeddings __root__ As of openai>=1.0.0, Azure endpoints should be specified via the `azure_endpoint` param not `openai_api_base` (or alias `base_url`). (type=value_error) File "/<REDACTED>, line 12, in <module> azure_embeddings = AzureOpenAIEmbeddings( pydantic.v1.error_wrappers.ValidationError: 1 validation error for AzureOpenAIEmbeddings __root__ As of openai>=1.0.0, Azure endpoints should be specified via the `azure_endpoint` param not `openai_api_base` (or alias `base_url`). (type=value_error) ``` ### Description pydantic should not throw an error ### System Info ``` System Information ------------------ > OS: Darwin > OS Version: Darwin Kernel Version 23.3.0: Wed Dec 20 21:28:58 PST 2023; root:xnu-10002.81.5~7/RELEASE_X86_64 > Python Version: 3.8.18 | packaged by conda-forge | (default, Dec 23 2023, 17:23:49) [Clang 15.0.7 ] Package Information ------------------- > langchain_core: 0.1.26 > langchain: 0.1.9 > langchain_community: 0.0.24 > langsmith: 0.1.8 > langchain_openai: 0.0.7 ```
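One frequently reported trigger for this validation error is a leftover `OPENAI_API_BASE` environment variable, which the Azure wrapper reads as `base_url`. A hedged sketch of the usual workaround, clearing the legacy variable and passing `azure_endpoint` explicitly; the deployment name stays a placeholder, and whether this applies to a given environment is an assumption:

```python
import os

# If a legacy OPENAI_API_BASE is set anywhere (shell, .env, CI), the Azure
# wrappers reject it as of openai>=1.0.0, per the error message above.
os.environ.pop("OPENAI_API_BASE", None)

# Illustrative call (requires langchain-openai and valid Azure credentials):
# from langchain_openai import AzureOpenAIEmbeddings
# azure_embeddings = AzureOpenAIEmbeddings(
#     azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # not base_url
#     azure_deployment="<model_deployment>",
#     openai_api_version="2023-05-15",
# )
```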
AzureOpenAIEmbedding raises errors due to deprecation of openai_api_base
https://api.github.com/repos/langchain-ai/langchain/issues/18099/comments
3
2024-02-25T19:25:41Z
2024-06-08T16:12:11Z
https://github.com/langchain-ai/langchain/issues/18099
2,152,905,624
18,099
[ "langchain-ai", "langchain" ]
I am getting this error while using ```python embeddings = HuggingFaceEmbeddings(model_name='sentence-transformers/all-MiniLM-L6-v2') to_vectorize = [" ".join(example.values()) for example in few_shots] vectorstore = Chroma.from_texts( to_vectorize, embedding=embeddings, metadatas=few_shots ) ``` ValueError: Expected EmbeddingFunction.__call__ to have the following signature: odict_keys(['self', 'input']), got odict_keys(['self', 'args', 'kwargs']) Please see https://docs.trychroma.com/embeddings for details of the EmbeddingFunction interface. Please note the recent change to the EmbeddingFunction interface: https://docs.trychroma.com/migration#migration-to-0416---november-7-2023 I'm using chromadb==0.4.15 _Originally posted by @varayush007 in https://github.com/langchain-ai/langchain/issues/13051#issuecomment-1963036031_
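The signature in the error matches Chroma's 0.4.16 interface change, where `EmbeddingFunction.__call__` must accept a single `input` argument. If upgrading langchain and chromadb to mutually compatible versions is not an option, a thin user-side adapter can satisfy the new signature. A sketch, with the adapter class invented for illustration:

```python
class ChromaEmbeddingAdapter:
    """Wrap a LangChain-style embeddings object so it matches Chroma's
    post-0.4.16 EmbeddingFunction interface: __call__(self, input)."""

    def __init__(self, langchain_embeddings):
        self._embeddings = langchain_embeddings

    def __call__(self, input):
        # Chroma passes the documents to embed as `input`; LangChain
        # embeddings expose embed_documents(list_of_texts) instead.
        return self._embeddings.embed_documents(list(input))
```

Upgrading both packages remains the cleaner fix; the adapter only bridges the interface gap.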
Chromadb error
https://api.github.com/repos/langchain-ai/langchain/issues/18098/comments
4
2024-02-25T19:25:05Z
2024-05-31T10:24:05Z
https://github.com/langchain-ai/langchain/issues/18098
2,152,905,421
18,098
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code The following code: ```python from typing import Any from pydantic import BaseModel from unstructured.partition.pdf import partition_pdf # Get elements raw_pdf_elements = partition_pdf( filename=path + "2304.08485.pdf", # Using pdf format to find embedded image blocks extract_images_in_pdf=True, # Use layout model (YOLOX) to get bounding boxes (for tables) and find titles # Titles are any sub-section of the document infer_table_structure=True, # Post processing to aggregate text once we have the title chunking_strategy="by_title", # Chunking params to aggregate text blocks # Attempt to create a new chunk 3800 chars # Attempt to keep chunks > 2000 chars # Hard max on chunks max_characters=4000, new_after_n_chars=3800, combine_text_under_n_chars=2000, image_output_dir_path=path, ) ``` raised the error below. 
### Error Message and Stack Trace (if applicable) WARNING:unstructured:This function will be deprecated in a future release and `unstructured` will simply use the DEFAULT_MODEL from `unstructured_inference.model.base` to set default model name --------------------------------------------------------------------------- UnidentifiedImageError Traceback (most recent call last) <ipython-input-9-99d863c83b7a> in <cell line: 7>() 5 6 # Get elements ----> 7 raw_pdf_elements = partition_pdf( 8 filename=path + "2304.08485.pdf", 9 # Using pdf format to find embedded image blocks 10 frames /usr/local/lib/python3.10/dist-packages/PIL/Image.py in open(fp, mode, formats) 3281 continue 3282 except BaseException: -> 3283 if exclusive_fp: 3284 fp.close() 3285 raise UnidentifiedImageError: cannot identify image file '/tmp/tmpg2qlx8jd/69fab29b-6c14-4bd4-888b-85c763aa1b31-01.ppm' ### Description I'm trying to run this notebook in Colab: https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_structured_and_multi_modal_RAG.ipynb?ref=blog.langchain.dev and got the error above. ### System Info Google Colab
Error in Semi_structured_and_multi_modal_RAG.ipynb
https://api.github.com/repos/langchain-ai/langchain/issues/18095/comments
0
2024-02-25T18:11:13Z
2024-06-08T16:12:05Z
https://github.com/langchain-ai/langchain/issues/18095
2,152,877,808
18,095
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code from langchain_community.graphs import NeptuneRdfGraph ### Error Message and Stack Trace (if applicable) _No response_ ### Description * I was trying to use neptune_sparql on my local system and it raised an import error for NeptuneRdfGraph. I noticed that there were some file-name changes regarding NeptuneRdfGraph, but the import in the neptune_sparql.py file was not updated. ![lol](https://github.com/langchain-ai/langchain/assets/68547750/0995624b-9514-49bb-a0f5-25f421acd8e2) ![lol2](https://github.com/langchain-ai/langchain/assets/68547750/d77e0e7c-4b0b-42e4-9e21-bff8ada8ea63) ### System Info System Information ------------------ > OS: Windows > OS Version: 10.0.19045 > Python Version: 3.11.4 (tags/v3.11.4:d2340ef, Jun 7 2023, 05:45:37) [MSC v.1934 64 bit (AMD64)] Package Information ------------------- > langchain_core: 0.1.26 > langchain: 0.1.9 > langchain_community: 0.0.23 > langsmith: 0.1.6 > langchain_experimental: 0.0.52 > langchain_google_genai: 0.0.5 > langchain_mistralai: 0.0.4 > langchain_openai: 0.0.6 > langchainhub: 0.1.14 > langgraph: 0.0.24
ImportError: Getting an import error in the neptune_sparql.py file regarding NeptuneRdfGraph
https://api.github.com/repos/langchain-ai/langchain/issues/18094/comments
0
2024-02-25T16:41:31Z
2024-06-08T16:12:00Z
https://github.com/langchain-ai/langchain/issues/18094
2,152,845,374
18,094
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code ``` from langchain_community.vectorstores import Chroma from langchain_openai import OpenAIEmbeddings from chromadb.config import Settings db = Chroma.from_documents(texts, OpenAIEmbeddings(disallowed_special=()), client_settings= Settings( anonymized_telemetry=False)) retriever = db.as_retriever( search_type="mmr", # Also test "similarity" search_kwargs={"k": 8}, ) ``` ### Error Message and Stack Trace (if applicable) Traceback (most recent call last): File "lchain.py", line 26, in <module> from chromadb.config import Settings File "/home/duarte/.local/lib/python3.8/site-packages/chromadb/__init__.py", line 5, in <module> from chromadb.auth.token import TokenTransportHeader File "/home/duarte/.local/lib/python3.8/site-packages/chromadb/auth/token/__init__.py", line 26, in <module> from chromadb.telemetry.opentelemetry import ( File "/home/duarte/.local/lib/python3.8/site-packages/chromadb/telemetry/opentelemetry/__init__.py", line 11, in <module> from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter File "/home/duarte/.local/lib/python3.8/site-packages/opentelemetry/exporter/otlp/proto/grpc/trace_exporter/__init__.py", line 24, in <module> from opentelemetry.exporter.otlp.proto.common.trace_encoder import ( File "/home/duarte/.local/lib/python3.8/site-packages/opentelemetry/exporter/otlp/proto/common/trace_encoder.py", line 16, in <module> from opentelemetry.exporter.otlp.proto.common._internal.trace_encoder import ( File "/home/duarte/.local/lib/python3.8/site-packages/opentelemetry/exporter/otlp/proto/common/_internal/trace_encoder/__init__.py", line 44, in <module> SpanKind.INTERNAL: PB2SPan.SpanKind.SPAN_KIND_INTERNAL, 
AttributeError: 'EnumTypeWrapper' object has no attribute 'SPAN_KIND_INTERNAL' ### Description Trying to follow the Code understanding tutorial (https://python.langchain.com/docs/use_cases/code_understanding#use-case), I find myself with an AttributeError related to opentelemetry. I tried reinstalling the appropriate packages and turning off telemetry when initializing Chroma, but the issue persists. ### System Info System Information ------------------ > OS: Linux > OS Version: #1 SMP Fri Apr 2 22:23:49 UTC 2021 > Python Version: 3.8.10 (default, Nov 22 2023, 10:22:35) [GCC 9.4.0] Package Information ------------------- > langchain_core: 0.1.26 > langchain: 0.1.9 > langchain_community: 0.0.24 > langsmith: 0.1.7 > langchain_openai: 0.0.7 Packages not installed (Not Necessarily a Problem) -------------------------------------------------- The following packages were not found: > langgraph > langserve
AttributeError on opentelemetry when initializing Chroma from documents
https://api.github.com/repos/langchain-ai/langchain/issues/18093/comments
4
2024-02-25T15:17:14Z
2024-07-21T16:05:50Z
https://github.com/langchain-ai/langchain/issues/18093
2,152,810,537
18,093
[ "langchain-ai", "langchain" ]
### Checklist - [X] I added a very descriptive title to this issue. - [X] I included a link to the documentation page I am referring to (if applicable). ### Issue with current documentation: Before I updated LangChain, this code ran without any issue: llm=ChatOpenAI(temperature=0.0,model="gpt-4") sql_toolkit=SQLDatabaseToolkit(db=db,llm=llm) llm_with_tool = llm.bind_tools(analyze_tool) sql_toolkit.get_tools() But after updating LangChain I get this error: 'ChatOpenAI' object has no attribute 'bind_tools' ### Idea or request for content: Can anyone guide me on how to solve this issue?
'ChatOpenAI' object has no attribute 'bind_tools'
https://api.github.com/repos/langchain-ai/langchain/issues/18088/comments
4
2024-02-25T11:15:00Z
2024-06-23T16:09:30Z
https://github.com/langchain-ai/langchain/issues/18088
2,152,714,982
18,088
[ "langchain-ai", "langchain" ]
### Checklist - [X] I added a very descriptive title to this issue. - [X] I included a link to the documentation page I am referring to (if applicable). ### Issue with current documentation: Readiing `ConversationBufferWindowMemory` class, it seems that the property `buffer` returns `self.buffer_as_messages` if `self.return_messages` is `True`, otherwise it returns `self.buffer_as_str`. https://github.com/langchain-ai/langchain/blob/7fc903464a753ac10e9b671906c2e9889e4d598e/libs/langchain/langchain/memory/buffer_window.py#L17-L21 However, the docstrings of these properties `buffer_as_str` and `return_messages` seem inverted when they say `True` and `False` respectively: https://github.com/langchain-ai/langchain/blob/7fc903464a753ac10e9b671906c2e9889e4d598e/libs/langchain/langchain/memory/buffer_window.py#L22-L30 https://github.com/langchain-ai/langchain/blob/7fc903464a753ac10e9b671906c2e9889e4d598e/libs/langchain/langchain/memory/buffer_window.py#L32-L35 ### Idea or request for content: _No response_
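To make the suspected inversion concrete, here is a runnable stand-in for the dispatch in question, with the property docstrings worded the way the conditions actually behave. This sketch is illustrative only and is not the real class:

```python
class BufferWindowMemorySketch:
    """Minimal stand-in for ConversationBufferWindowMemory's buffer dispatch."""

    def __init__(self, messages, return_messages=False):
        self.messages = messages
        self.return_messages = return_messages

    @property
    def buffer(self):
        """String buffer of memory (messages when return_messages is True)."""
        return self.buffer_as_messages if self.return_messages else self.buffer_as_str

    @property
    def buffer_as_str(self):
        """Exposes the buffer as a string in case return_messages is False."""
        return " ".join(self.messages)

    @property
    def buffer_as_messages(self):
        """Exposes the buffer as a list of messages in case return_messages is True."""
        return list(self.messages)
```

With docstrings written this way, each condition matches the branch `buffer` actually takes.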
DOC: Possible mistake in property's docstring of ConversationBufferWindowMemory
https://api.github.com/repos/langchain-ai/langchain/issues/18080/comments
3
2024-02-25T01:56:23Z
2024-02-26T17:08:04Z
https://github.com/langchain-ai/langchain/issues/18080
2,152,559,260
18,080
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code from langchain_community.document_loaders import UnstructuredWordDocumentLoader ### Error Message and Stack Trace (if applicable) Traceback (most recent call last): File "/home/imxsys/flask-ui/prototypes/LangChainProto/./deeplake_vector_docxi.py", line 2, in <module> from langchain_community.document_loaders import UnstructuredWordDocumentLoader File "/home/imxsys/.local/lib/python3.11/site-packages/langchain_community/document_loaders/__init__.py", line 53, in <module> from langchain_community.document_loaders.blackboard import BlackboardLoader File "/home/imxsys/.local/lib/python3.11/site-packages/langchain_community/document_loaders/blackboard.py", line 10, in <module> from langchain_community.document_loaders.pdf import PyPDFLoader File "/home/imxsys/.local/lib/python3.11/site-packages/langchain_community/document_loaders/pdf.py", line 18, in <module> from langchain_community.document_loaders.parsers.pdf import ( File "/home/imxsys/.local/lib/python3.11/site-packages/langchain_community/document_loaders/parsers/__init__.py", line 8, in <module> from langchain_community.document_loaders.parsers.language import LanguageParser File "/home/imxsys/.local/lib/python3.11/site-packages/langchain_community/document_loaders/parsers/language/__init__.py", line 1, in <module> from langchain_community.document_loaders.parsers.language.language_parser import ( File "/home/imxsys/.local/lib/python3.11/site-packages/langchain_community/document_loaders/parsers/language/language_parser.py", line 39, in <module> "c": Language.C, ^^^^^^^^^^ File "/usr/lib/python3.11/enum.py", line 784, in __getattr__ raise AttributeError(name) from None AttributeError: C ### 
Description I can't import UnstructuredWordDocumentLoader from langchain_community.document_loaders ### System Info pip3.11 list | grep langch langchain 0.1.7 langchain-community 0.0.21 langchain-core 0.1.26 langchain-google-vertexai 0.0.5 langchain-openai 0.0.6
raise AttributeError(name) from None AttributeError: C
https://api.github.com/repos/langchain-ai/langchain/issues/18076/comments
1
2024-02-24T23:34:27Z
2024-02-24T23:37:32Z
https://github.com/langchain-ai/langchain/issues/18076
2,152,521,908
18,076
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code ``` import os from dotenv import load_dotenv from langchain.document_loaders.pdf import PyMuPDFLoader from langchain_community.document_loaders import DirectoryLoader, TextLoader from langchain.text_splitter import RecursiveCharacterTextSplitter from langchain_community.vectorstores import Qdrant from langchain_community.embeddings import GPT4AllEmbeddings from langchain.indexes import SQLRecordManager, index load_dotenv() loaders = { '.pdf': PyMuPDFLoader, '.txt': TextLoader } def create_directory_loader(file_type, directory_path): '''Define a function to create a DirectoryLoader for a specific file type''' return DirectoryLoader( path=directory_path, glob=f"**/*{file_type}", loader_cls=loaders[file_type], show_progress=True, use_multithreading=True ) dirpath = os.environ.get('TEMP_DOCS_DIR') txt_loader = create_directory_loader('.txt', dirpath) texts = txt_loader.load() full_text = '' for paper in texts: full_text = full_text + paper.page_content full_text = " ".join(l for l in full_text.splitlines() if l) text_splitter = RecursiveCharacterTextSplitter( chunk_size=2048, chunk_overlap=512 ) document_chunks = text_splitter.create_documents( [full_text], [{'source': 'education'}]) embeddings = GPT4AllEmbeddings() collection_name = "testing_v1" namespace = f"mydata/{collection_name}" record_manager = SQLRecordManager( namespace, db_url="sqlite:///record_manager_cache.sql" ) record_manager.create_schema() url = 'http://0.0.0.0:6333' from qdrant_client import QdrantClient client = QdrantClient(url) qdrant = Qdrant( client=client, embeddings=embeddings, collection_name='testing_v1', ) index_stats= index( document_chunks, record_manager, qdrant, 
cleanup="full", source_id_key="source" ) print(index_stats) ``` ### Error Message and Stack Trace (if applicable) ``` Traceback (most recent call last): File "/home/khophi/Development/myApp/llm/api/embeddings.py", line 79, in <module> index_stats = index( File "/home/khophi/Development/myApp/llm/venv/lib/python3.10/site-packages/langchain/indexes/_api.py", line 326, in index vector_store.add_documents(docs_to_index, ids=uids) File "/home/khophi/Development/myApp/llm/venv/lib/python3.10/site-packages/langchain_core/vectorstores.py", line 119, in add_documents return self.add_texts(texts, metadatas, **kwargs) File "/home/khophi/Development/myApp/llm/venv/lib/python3.10/site-packages/langchain_community/vectorstores/qdrant.py", line 181, in add_texts self.client.upsert( File "/home/khophi/Development/myApp/llm/venv/lib/python3.10/site-packages/qdrant_client/qdrant_client.py", line 987, in upsert return self._client.upsert( File "/home/khophi/Development/myApp/llm/venv/lib/python3.10/site-packages/qdrant_client/qdrant_remote.py", line 1300, in upsert http_result = self.openapi_client.points_api.upsert_points( File "/home/khophi/Development/myApp/llm/venv/lib/python3.10/site-packages/qdrant_client/http/api/points_api.py", line 1439, in upsert_points return self._build_for_upsert_points( File "/home/khophi/Development/myApp/llm/venv/lib/python3.10/site-packages/qdrant_client/http/api/points_api.py", line 738, in _build_for_upsert_points return self.api_client.request( File "/home/khophi/Development/myApp/llm/venv/lib/python3.10/site-packages/qdrant_client/http/api_client.py", line 74, in request return self.send(request, type_) File "/home/khophi/Development/myApp/llm/venv/lib/python3.10/site-packages/qdrant_client/http/api_client.py", line 97, in send raise UnexpectedResponse.for_response(response) qdrant_client.http.exceptions.UnexpectedResponse: Unexpected Response: 404 (Not Found) Raw response content: b'{"status":{"error":"Not found: Collection `testing_v1` doesn\'t 
exist!"},"time":0.0000653}' (venv) khophi@KhoPhi:~/Development/myApp/llm/api$ ``` ### Description I'm following the tutorial here trying to use Qdrant as the vectorstore https://python.langchain.com/docs/modules/data_connection/indexing According to the docs: > Do not use with a store that has been pre-populated with content independently of the indexing API, as the record manager will not know that records have been inserted previously. Since it should preferably be used with a brand-new collection from the start, that collection will often not exist yet, at which point I'd expect `index` to offer something similar to the `force_create` flag of the `Qdrant.from_documents(...)` function, creating the collection if it doesn't exist before proceeding. That way the record-manager database and the collection start from the same point. As it stands now, there isn't a way to have the indexing API create an empty collection in Qdrant. ### System Info System Information ------------------ > OS: Linux > OS Version: #1 SMP Thu Oct 5 21:02:42 UTC 2023 > Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] Package Information ------------------- > langchain_core: 0.1.24 > langchain: 0.0.350 > langchain_community: 0.0.3 > langsmith: 0.1.3 > langchain_cli: 0.0.19 > langchain_experimental: 0.0.47 > langchain_mistralai: 0.0.4 > langchain_openai: 0.0.6 > langchainhub: 0.1.14 > langgraph: 0.0.24 > langserve: 0.0.36
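Until the indexing API grows such an option, the collection can be created up front before calling `index`. A sketch with the Qdrant-specific pieces injected, so the helper itself has no dependencies; the commented wiring shows how it might connect to `qdrant_client` (the `create_collection` and `get_collections` calls are real client methods, but the 384-dim vector size is an assumption tied to the chosen embedding model):

```python
def ensure_collection(client, name, make_vectors_config):
    """Create the collection if it is missing; return True when created."""
    existing = {c.name for c in client.get_collections().collections}
    if name in existing:
        return False
    client.create_collection(collection_name=name, vectors_config=make_vectors_config())
    return True


# Illustrative wiring with qdrant_client (not runnable without a server):
# from qdrant_client.http import models
# ensure_collection(
#     client,
#     "testing_v1",
#     lambda: models.VectorParams(size=384, distance=models.Distance.COSINE),
# )
```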
When indexing vectorstore, if no collection, create one - 404 collection not found in Qdrant when Indexing
https://api.github.com/repos/langchain-ai/langchain/issues/18068/comments
1
2024-02-24T14:25:30Z
2024-06-25T16:13:27Z
https://github.com/langchain-ai/langchain/issues/18068
2,152,338,733
18,068
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code ``` from langchain_community.document_loaders import WebBaseLoader from langchain.text_splitter import CharacterTextSplitter from langchain.vectorstores import Chroma from langchain.embeddings.openai import OpenAIEmbeddings from langchain.llms import OpenAI from langchain.chains import RetrievalQA # Load the Wikipedia page loader = WebBaseLoader("https://en.wikipedia.org/wiki/New_York_City") documents = loader.load() # Split the text into chunks text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) texts = text_splitter.split_documents(documents) # Create embeddings embeddings = OpenAIEmbeddings() # Create a vector store db = Chroma.from_documents(texts, embeddings, collection_name="wiki-nyc") # Create a retriever retriever = db.as_retriever() # Create a QA chain llm = OpenAI(temperature=0) qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=retriever) ``` ### Error Message and Stack Trace (if applicable) ``` from langchain_community.document_loaders import WikipediaLoader File "/home/runner/Langchain/.pythonlibs/lib/python3.10/site-packages/langchain_community/document_loaders/__init__.py", line 53, in <module> from langchain_community.document_loaders.blackboard import BlackboardLoader File "/home/runner/Langchain/.pythonlibs/lib/python3.10/site-packages/langchain_community/document_loaders/blackboard.py", line 10, in <module> from langchain_community.document_loaders.pdf import PyPDFLoader File "/home/runner/Langchain/.pythonlibs/lib/python3.10/site-packages/langchain_community/document_loaders/pdf.py", line 18, in <module> from langchain_community.document_loaders.parsers.pdf import ( File 
"/home/runner/Langchain/.pythonlibs/lib/python3.10/site-packages/langchain_community/document_loaders/parsers/__init__.py", line 8, in <module> from langchain_community.document_loaders.parsers.language import LanguageParser File "/home/runner/Langchain/.pythonlibs/lib/python3.10/site-packages/langchain_community/document_loaders/parsers/language/__init__.py", line 1, in <module> from langchain_community.document_loaders.parsers.language.language_parser import ( File "/home/runner/Langchain/.pythonlibs/lib/python3.10/site-packages/langchain_community/document_loaders/parsers/language/language_parser.py", line 39, in <module> "c": Language.C, File "/nix/store/xf54733x4chbawkh1qvy9i1i4mlscy1c-python3-3.10.11/lib/python3.10/enum.py", line 437, in __getattr__ raise AttributeError(name) from None AttributeError: C ``` ### Description When trying a basic script to call the Wikipedia loader or WebBaseLoader (maybe any loader?) I get the error. Here's another example script that throws the same error. ``` from langchain_community.document_loaders import WikipediaLoader docs = WikipediaLoader(query="Genesis of the Daleks", load_max_docs=2).load() len(docs) docs[0].metadata # meta-information of the Document docs[0].page_content[:400] # a content of the Document ``` ### System Info System Information ------------------ > OS: Linux > OS Version: #13~22.04.1-Ubuntu SMP Wed Jan 24 23:39:40 UTC 2024 > Python Version: 3.10.11 (main, Apr 4 2023, 22:10:32) [GCC 12.2.0] Package Information ------------------- > langchain_core: 0.1.26 > langchain: 0.1.6 > langchain_community: 0.0.24 > langsmith: 0.1.7 > langchain_openai: 0.0.5 > langchainhub: 0.1.14 Packages not installed (Not Necessarily a Problem) -------------------------------------------------- The following packages were not found: > langgraph > langserve
AttributeError: C when importing WikipediaLoader / WebBaseLoader
https://api.github.com/repos/langchain-ai/langchain/issues/18067/comments
2
2024-02-24T14:22:59Z
2024-02-25T11:06:43Z
https://github.com/langchain-ai/langchain/issues/18067
2152337867
18067
[ "langchain-ai", "langchain" ]
### Checklist - [X] I added a very descriptive title to this issue. - [X] I included a link to the documentation page I am referring to (if applicable). ### Issue with current documentation: ### Problem I spent a lot of time troubleshooting how to pass `search_kwargs` to the `vectorstore.as_retriever()` method. I believe there are a number of nesting/traceability issues that could be improved if the optional search_kwargs parameters were defined as named parameters. [as_retriever method](https://github.com/langchain-ai/langchain/blob/9ebbca369560e6f8eca42bf27ed5215807695f8b/libs/core/langchain_core/vectorstores.py#L573) Although the [examples](https://python.langchain.com/docs/modules/data_connection/retrievers/vectorstore#specifying-top-k) explain it well for using `k`, I think there could still be a benefit here given that the examples aren't always easily findable for every method used. E.g., I wanted to specify `namespace` too. Example: ```python qa_chain = RetrievalQA.from_chain_type( llm=llm, chain_type="stuff", # retriever=vector_db.as_retriever(**search_kwargs), # retriever=vector_db.as_retriever(search_kwargs=search_kwargs), retriever=vector_db.as_retriever(k=20, namespace=uid), chain_type_kwargs={"prompt": prompt_template}, return_source_documents=True, verbose=True, ) ``` ### Expected Usage From reading the documentation on as_retriever(), I had tried these methods for passing my keyword arguments, since the as_retriever() method accepts **kwargs.
```python qa_chain = RetrievalQA.from_chain_type( llm=llm, chain_type="stuff", chain_type_kwargs={"prompt": prompt_template}, return_source_documents=True, verbose=True, ######### THIS LINE ############## retriever=vector_db.as_retriever(k=20, namespace=uid), ######## THIS DOESN'T WORK EITHER ##### # retriever=vector_db.as_retriever(**search_kwargs), ) ``` ### Proper Usage ```python qa_chain = RetrievalQA.from_chain_type( llm=llm, chain_type="stuff", chain_type_kwargs={"prompt": prompt_template}, return_source_documents=True, verbose=True, ### THIS WORKS ### retriever=vector_db.as_retriever(search_kwargs={"namespace": uid, "k": 20}), ) ``` I'm confused about why the search_type and search_kwargs are not named parameters. From going through this exercise, it is clearer to me now that I should read the function docstring, but for the sake of readability, traceability, and type checking, wouldn't it be better to just add those parameters to the as_retriever function definition? E.g., my search_kwargs contained `namespace` and `k`, but it's not really clear in the documentation for as_retriever how to do this. ### Idea or request for content: ## Suggested Enhancement I propose revising the `as_retriever()` method to include named parameters for common search options, such as `namespace`, `k`, and `search_type`. This would not only clarify usage but also enhance developer experience by providing code completion hints and reducing reliance on external documentation. For instance, the method signature could be enhanced as follows: ```python def as_retriever(self, namespace=None, k=4, search_type="similarity", **kwargs): ... ``` **Benefits** - This reduces the need for relying on examples in the docstring of this method to understand its proper usage. - Allows for auto-completion in IDEs. What do you think?
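For anyone hitting the same wall, a minimal stand-in (class and signature assumed, mirroring the issue) makes the behaviour concrete: keyword arguments like `k=20` are swallowed by `**kwargs` and never reach the search, while values inside `search_kwargs` are used:

```python
# FakeVectorStore is a toy model of the real vector store API, not LangChain
# code; it only reproduces why as_retriever(k=20) silently does nothing while
# as_retriever(search_kwargs={"k": 20}) works.
class FakeVectorStore:
    def as_retriever(self, search_type="similarity", search_kwargs=None, **kwargs):
        # Stray keyword arguments such as k=20 land in **kwargs and are ignored.
        return {"search_type": search_type, "search_kwargs": search_kwargs or {}}

store = FakeVectorStore()
ignored = store.as_retriever(k=20)                     # k ends up unused
honored = store.as_retriever(search_kwargs={"k": 20})  # k is honored
print(ignored["search_kwargs"], honored["search_kwargs"])  # {} {'k': 20}
```

This is exactly why named parameters (or at least a warning on unrecognized `**kwargs`) would improve discoverability.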
DOC: Clarification on as_retriever Method Parameters in RetrievalQA Chain
https://api.github.com/repos/langchain-ai/langchain/issues/18045/comments
0
2024-02-23T19:27:56Z
2024-06-11T00:33:24Z
https://github.com/langchain-ai/langchain/issues/18045
2151676931
18045
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code ``` from langchain.agents.agent_toolkits import SQLDatabaseToolkit from langchain.agents import create_sql_agent db_chain = SQLDatabaseToolkit( db=db, llm=bedrock_llm) agent_executor = create_sql_agent(llm=bedrock_llm, toolkit=db_chain, verbose=True, prompt=few_shot_prompt) agent_executor.invoke("Question?") ``` ### Error Message and Stack Trace (if applicable) ``` ValueError Traceback (most recent call last) Cell In[11], line 6 3 from langchain.schema.cache import BaseCache 4 db_chain = SQLDatabaseToolkit( 5 db=db, llm=bedrock_llm) ----> 6 agent_executor = create_sql_agent(llm=bedrock_llm, toolkit=db_chain, verbose=True, prompt=few_shot_prompt) 7 #agent_executor.run("What are the number of catapult impressions for new to alexa customers in US for last 2 weeks") 8 agent_executor.invoke("What are the number of catapult impressions for new to alexa customers in US for last 2 weeks?") File ~/anaconda3/envs/python3/lib/python3.10/site-packages/langchain_community/agent_toolkits/sql/base.py:182, in create_sql_agent(llm, toolkit, agent_type, callback_manager, prefix, suffix, format_instructions, input_variables, top_k, max_iterations, max_execution_time, early_stopping_method, verbose, agent_executor_kwargs, extra_tools, db, prompt, **kwargs) 172 template = "\n\n".join( 173 [ 174 react_prompt.PREFIX, (...) 
178 ] 179 ) 180 prompt = PromptTemplate.from_template(template) 181 agent = RunnableAgent( --> 182 runnable=create_react_agent(llm, tools, prompt), 183 input_keys_arg=["input"], 184 return_keys_arg=["output"], 185 ) 187 elif agent_type == AgentType.OPENAI_FUNCTIONS: 188 if prompt is None: File ~/anaconda3/envs/python3/lib/python3.10/site-packages/langchain/agents/react/agent.py:97, in create_react_agent(llm, tools, prompt) 93 missing_vars = {"tools", "tool_names", "agent_scratchpad"}.difference( 94 prompt.input_variables 95 ) 96 if missing_vars: ---> 97 raise ValueError(f"Prompt missing required variables: {missing_vars}") 99 prompt = prompt.partial( 100 tools=render_text_description(list(tools)), 101 tool_names=", ".join([t.name for t in tools]), 102 ) 103 llm_with_stop = llm.bind(stop=["\nObservation"]) ValueError: Prompt missing required variables: {'tool_names', 'tools', 'agent_scratchpad'} ``` ### Description I am trying to use SQL Agent from LangChain and facing this error. ### System Info langchain 0.1.9 langchain-experimental 0.0.49
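The likely cause is that the custom `few_shot_prompt` lacks the placeholders the ReAct agent requires. A small self-contained check (the validation logic is inferred from the traceback, not copied from the library) shows which variables a template must declare:

```python
import string

def missing_react_vars(template: str) -> set:
    # Re-creates the validation that raises in create_react_agent: the prompt
    # template must expose all three of these input variables.
    found = {field for _, field, _, _ in string.Formatter().parse(template) if field}
    return {"tools", "tool_names", "agent_scratchpad"} - found

bad = "Answer the question: {input}"
good = "Tools:\n{tools}\nTool names: {tool_names}\nQuestion: {input}\n{agent_scratchpad}"
print(missing_react_vars(bad))
print(missing_react_vars(good))  # empty set: this template would be accepted
```

Adding `{tools}`, `{tool_names}`, and `{agent_scratchpad}` to the few-shot prompt template should make `create_sql_agent` accept it.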
SQL Agent got error : Prompt missing required variables: {'tool_names', 'tools', 'agent_scratchpad'}
https://api.github.com/repos/langchain-ai/langchain/issues/18035/comments
2
2024-02-23T18:29:38Z
2024-06-09T16:07:27Z
https://github.com/langchain-ai/langchain/issues/18035
2151595129
18035
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code install the following dependencies ```bash pip install openai pip install google-search-results pip install langchain # version 0.1.9 pip install numexpr ``` Run the following python code (and add the openai api key): ```python from langchain import hub from langchain import LLMMathChain, SerpAPIWrapper from langchain.agents import Tool from langchain.chat_models import ChatOpenAI from langchain.agents import AgentExecutor, create_openai_functions_agent import os os.environ['OPENAI_API_KEY'] = str("xxx") llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613") llm_math_chain = LLMMathChain.from_llm(llm=llm, verbose=True) tools = [ Tool( name="Calculator", func=llm_math_chain.run, description="useful for when you need to answer questions about math" ) ] hub_prompt: object = hub.pull("hwchase17/openai-tools-agent") agent = create_openai_functions_agent(llm, tools, hub_prompt) agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True, handle_parsing_errors=True) input_prompt = "What is the square root of the year of birth of the founder of Space X?" 
agent_executor.invoke({"input": input_prompt}) ``` ### Error Message and Stack Trace (if applicable) AttributeError: 'AIMessageChunk' object has no attribute 'text' Trace Traceback (most recent call last): File "/Users/fteutsch/Desktop/PythonProjects/private/digiprod-gen/bug_report.py", line 31, in <module> agent_executor.invoke({"input": input_prompt}) File "/Users/fteutsch/Library/Caches/pypoetry/virtualenvs/digiprod-gen-qSAcCc-Q-py3.10/lib/python3.10/site-packages/langchain/chains/base.py", line 163, in invoke raise e File "/Users/fteutsch/Library/Caches/pypoetry/virtualenvs/digiprod-gen-qSAcCc-Q-py3.10/lib/python3.10/site-packages/langchain/chains/base.py", line 153, in invoke self._call(inputs, run_manager=run_manager) File "/Users/fteutsch/Library/Caches/pypoetry/virtualenvs/digiprod-gen-qSAcCc-Q-py3.10/lib/python3.10/site-packages/langchain/agents/agent.py", line 1391, in _call next_step_output = self._take_next_step( File "/Users/fteutsch/Library/Caches/pypoetry/virtualenvs/digiprod-gen-qSAcCc-Q-py3.10/lib/python3.10/site-packages/langchain/agents/agent.py", line 1097, in _take_next_step [ File "/Users/fteutsch/Library/Caches/pypoetry/virtualenvs/digiprod-gen-qSAcCc-Q-py3.10/lib/python3.10/site-packages/langchain/agents/agent.py", line 1097, in <listcomp> [ File "/Users/fteutsch/Library/Caches/pypoetry/virtualenvs/digiprod-gen-qSAcCc-Q-py3.10/lib/python3.10/site-packages/langchain/agents/agent.py", line 1125, in _iter_next_step output = self.agent.plan( File "/Users/fteutsch/Library/Caches/pypoetry/virtualenvs/digiprod-gen-qSAcCc-Q-py3.10/lib/python3.10/site-packages/langchain/agents/agent.py", line 387, in plan for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}): File "/Users/fteutsch/Library/Caches/pypoetry/virtualenvs/digiprod-gen-qSAcCc-Q-py3.10/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2427, in stream yield from self.transform(iter([input]), config, **kwargs) File 
"/Users/fteutsch/Library/Caches/pypoetry/virtualenvs/digiprod-gen-qSAcCc-Q-py3.10/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2414, in transform yield from self._transform_stream_with_config( File "/Users/fteutsch/Library/Caches/pypoetry/virtualenvs/digiprod-gen-qSAcCc-Q-py3.10/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1494, in _transform_stream_with_config chunk: Output = context.run(next, iterator) # type: ignore File "/Users/fteutsch/Library/Caches/pypoetry/virtualenvs/digiprod-gen-qSAcCc-Q-py3.10/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2378, in _transform for output in final_pipeline: File "/Users/fteutsch/Library/Caches/pypoetry/virtualenvs/digiprod-gen-qSAcCc-Q-py3.10/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1032, in transform for chunk in input: File "/Users/fteutsch/Library/Caches/pypoetry/virtualenvs/digiprod-gen-qSAcCc-Q-py3.10/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 4167, in transform yield from self.bound.transform( File "/Users/fteutsch/Library/Caches/pypoetry/virtualenvs/digiprod-gen-qSAcCc-Q-py3.10/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1042, in transform yield from self.stream(final, config, **kwargs) File "/Users/fteutsch/Library/Caches/pypoetry/virtualenvs/digiprod-gen-qSAcCc-Q-py3.10/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 250, in stream raise e File "/Users/fteutsch/Library/Caches/pypoetry/virtualenvs/digiprod-gen-qSAcCc-Q-py3.10/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 234, in stream for chunk in self._stream( File "/Users/fteutsch/Library/Caches/pypoetry/virtualenvs/digiprod-gen-qSAcCc-Q-py3.10/lib/python3.10/site-packages/langchain_community/chat_models/openai.py", line 418, in _stream run_manager.on_llm_new_token(chunk.text, chunk=cg_chunk) AttributeError: 'AIMessageChunk' object has no 
attribute 'text' ### Description Since I updated langchain from 0.1.7 to the latest version 0.1.9, I get the exception mentioned above. I already found the issue and maybe the solution as well. libs/community/langchain_community/chat_models/openai.py, lines 414-418: ```python cg_chunk = ChatGenerationChunk( message=chunk, generation_info=generation_info ) if run_manager: run_manager.on_llm_new_token(chunk.text, chunk=cg_chunk) ``` **chunk** has the type AIMessageChunk, which does not contain the attribute **text**, whereas **cg_chunk** has the type ChatGenerationChunk, which contains **text** as an attribute (and in version 0.1.7 the same class was used). The fix probably would be: ```python cg_chunk = ChatGenerationChunk( message=chunk, generation_info=generation_info ) if run_manager: run_manager.on_llm_new_token(cg_chunk.text, chunk=cg_chunk) ``` Line 510 in the same file contains the same issue. ### System Info langchain==0.1.9 langchain-community==0.0.22 langchain-core==0.1.26 langchainhub==0.1.14 macOS python 3.10
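The reporter's diagnosis can be illustrated without OpenAI access; these simplified stand-ins model only the relevant attributes of the two chunk classes (the real ones live in langchain_core) and show why `chunk.text` fails while `cg_chunk.text` works:

```python
from dataclasses import dataclass

@dataclass
class AIMessageChunk:
    content: str  # the message chunk carries .content, but has no .text

@dataclass
class ChatGenerationChunk:
    message: AIMessageChunk

    @property
    def text(self) -> str:
        # The generation chunk wraps the message and exposes .text.
        return self.message.content

chunk = AIMessageChunk(content="hello")
cg_chunk = ChatGenerationChunk(message=chunk)
# run_manager.on_llm_new_token(chunk.text, ...)    would raise AttributeError
# run_manager.on_llm_new_token(cg_chunk.text, ...) is the proposed fix
print(cg_chunk.text)  # hello
```

This matches the proposed one-character-class fix: pass `cg_chunk.text` (the wrapper's property) rather than `chunk.text`.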
AttributeError: 'AIMessageChunk' object has no attribute 'text'
https://api.github.com/repos/langchain-ai/langchain/issues/18024/comments
4
2024-02-23T16:03:25Z
2024-02-25T22:41:11Z
https://github.com/langchain-ai/langchain/issues/18024
2151376945
18024
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code ```python print("?") ``` ### Error Message and Stack Trace (if applicable) ``` File "/home/martin/dev/mlki/langchain/llmaa/env/lib/python3.10/site-packages/langchain_community/utils/math.py", line 29, in cosine_similarity Z = 1 - simd.cdist(X, Y, metric="cosine") TypeError: unsupported operand type(s) for -: 'int' and 'simsimd.OutputDistances' ``` ### Description Error came up after installing langchain [v0.1.9](https://github.com/langchain-ai/langchain/releases/tag/v0.1.9) and the python bindings for [simsimd v3.8.1](https://pypi.org/project/simsimd/3.8.1/). After switching the python package to [simsimd v3.7.7](https://github.com/ashvardanian/SimSIMD/releases/tag/v3.7.7), the error disappeared. Seems to be related to this change https://github.com/ashvardanian/SimSIMD/commit/819a40666faf038613b2368d1810e2563fb9d422 ### System Info langchain==0.1.9 langchain-community==0.0.22 langchain-core==0.1.26 langchain-openai==0.0.7 langchainhub==0.1.14
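As a stop-gap, pinning `simsimd<3.8` (as described above) avoids the error; code that must tolerate both return types can also coerce element-wise before subtracting. A self-contained sketch, where `OutputDistances` is a stand-in for simsimd's wrapper type rather than the real class:

```python
class OutputDistances:
    """Stand-in for the wrapper object the simsimd 3.8.x bindings return."""
    def __init__(self, values):
        self._values = values
    def __iter__(self):
        return iter(self._values)

def to_similarities(distances):
    # Coercing each element to float before the subtraction sidesteps the
    # unsupported `1 - wrapper_object` operation, whatever type comes back.
    return [1.0 - float(d) for d in distances]

print(to_similarities(OutputDistances([0.0, 0.25, 1.0])))  # [1.0, 0.75, 0.0]
```

Whether langchain_community adopts a coercion like this or a version pin is up to the maintainers; both resolve the `TypeError`.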
TypeError: unsupported operand type(s) for -: 'int' and 'simsimd.OutputDistances'
https://api.github.com/repos/langchain-ai/langchain/issues/18022/comments
4
2024-02-23T15:41:24Z
2024-07-17T16:05:03Z
https://github.com/langchain-ai/langchain/issues/18022
2151340319
18022
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code The code sample at https://python.langchain.com/docs/integrations/vectorstores/bigquery_vector_search produces an error message. The exception is thrown when running the following code: store.similarity_search(query) ### Error Message and Stack Trace (if applicable) `the JSON object must be str, bytes or bytearray, not dict` ### Description Once you have debugged the code, the root cause is this block of source code: Source file: `lib/langchain_community/vectorstores/bigquery_vector_search.py`, around line 548 (langchain_community 0.0.22) ```python metadata = row[self.metadata_field] if metadata: metadata = json.loads(metadata) else: metadata = {} ``` This can't work, because row[self.metadata_field] is a dictionary. To make it work, I suggest replacing this part with: ```python metadata = row[self.metadata_field] if metadata is None: metadata = {} ``` but maybe there is a smarter fix; in my case, this does the job. ### System Info pip freeze | grep langchain langchain==0.1.6 langchain-community==0.0.22 langchain-core==0.1.26 langchain-google-genai==0.0.9 langchain-google-vertexai==0.0.5 langchain-openai==0.0.5
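A slightly more defensive variant of the suggested patch handles every shape the client might hand back, decoding only when a string arrives (a minimal sketch, not the library's actual code):

```python
import json

def normalize_metadata(raw):
    # Hedged sketch of the suggested fix: BigQuery's client can return the
    # JSON column as a dict already, so only json.loads() a string payload.
    if raw is None:
        return {}
    if isinstance(raw, str):
        return json.loads(raw)
    return raw

print(normalize_metadata(None), normalize_metadata('{"a": 1}'), normalize_metadata({"b": 2}))
```

This keeps backward compatibility in case some code path still delivers the metadata as a JSON string.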
When using BigQueryVectorSearch store and similarity_search(query) : I got the message : `the JSON object must be str, bytes or bytearray, not dict`
https://api.github.com/repos/langchain-ai/langchain/issues/18020/comments
0
2024-02-23T14:10:23Z
2024-06-08T16:11:45Z
https://github.com/langchain-ai/langchain/issues/18020
2151174694
18020
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code ``` from getpass import getpass NOTION_TOKEN = getpass() DATABASE_ID = getpass() ········ ········ from langchain_community.document_loaders import NotionDBLoader loader = NotionDBLoader( integration_token=NOTION_TOKEN, database_id=DATABASE_ID, request_timeout_sec=30, # optional, defaults to 10 ) docs = loader.load() print(docs) ``` ### Error Message and Stack Trace (if applicable) b'{"object":"error","status":400,"code":"validation_error","message":"body failed validation. Fix one:\\nbody.filter.or should be defined, instead was `undefined`.\\nbody.filter.and should be defined, instead was `undefined`.\\nbody.filter.title should be defined, instead was `undefined`.\\nbody.filter.rich_text should be defined, instead was `undefined`.\\nbody.filter.number should be defined, instead was `undefined`.\\nbody.filter.checkbox should be defined, instead was `undefined`.\\nbody.filter.select should be defined, instead was `undefined`.\\nbody.filter.multi_select should be defined, instead was `undefined`.\\nbody.filter.status should be defined, instead was `undefined`.\\nbody.filter.date should be defined, instead was `undefined`.\\nbody.filter.people should be defined, instead was `undefined`.\\nbody.filter.files should be defined, instead was `undefined`.\\nbody.filter.url should be defined, instead was `undefined`.\\nbody.filter.email should be defined, instead was `undefined`.\\nbody.filter.phone_number should be defined, instead was `undefined`.\\nbody.filter.relation should be defined, instead was `undefined`.\\nbody.filter.created_by should be defined, instead was `undefined`.\\nbody.filter.created_time should be defined, instead was 
`undefined`.\\nbody.filter.last_edited_by should be defined, instead was `undefined`.\\nbody.filter.last_edited_time should be defined, instead was `undefined`.\\nbody.filter.formula should be defined, instead was `undefined`.\\nbody.filter.unique_id should be defined, instead was `undefined`.\\nbody.filter.rollup should be defined, instead was `undefined`.","request_id":"a251ecce-5757-44bf-a5f1-c4d7582d72dd"}' ### Description Getting a 400 Bad Request from the Notion API on a plain query, apparently because the loader sends an empty dict as the `filter` in the request body by default. ### System Info langchain==0.1.9 langchain-community==0.0.22 langchain-core==0.1.26
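The validation errors all complain about an empty `filter` object, which matches the title's diagnosis: the loader sends `"filter": {}` by default, and Notion's `/v1/databases/{id}/query` endpoint rejects it. A minimal sketch of a request-body builder that simply omits empty options (helper name and logic are assumptions, not the loader's actual code):

```python
def build_query_body(filter_=None, start_cursor=None):
    # Only include keys that actually carry data; an empty or missing filter
    # is simply left out of the body instead of being sent as {}.
    body = {}
    if filter_:
        body["filter"] = filter_
    if start_cursor:
        body["start_cursor"] = start_cursor
    return body

print(build_query_body())  # {}
print(build_query_body(filter_={"property": "Name", "title": {"is_not_empty": True}}))
```

With this pattern, a no-filter query posts `{}` as the body, which the Notion API accepts.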
BadRequest Error on Simple Query to NotionDB Database Using notion_db_secret and database_id, due to passing empty dict in json for filter by default
https://api.github.com/repos/langchain-ai/langchain/issues/18009/comments
5
2024-02-23T09:53:43Z
2024-03-14T13:56:59Z
https://github.com/langchain-ai/langchain/issues/18009
2150741437
18009
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code langchain_core->prompts->loading.py->load_template, line 46: with open(template_path) as f: ### Error Message and Stack Trace (if applicable) UnicodeDecodeError: 'gbk' codec can't decode byte 0xaa in position 55: illegal multibyte sequence ### Description On Windows systems with a Chinese locale the default text encoding is 'gbk', so the file must be opened with the encoding explicitly set to 'utf-8'. langchain_core->prompts->loading.py->load_template, line 46: with open(template_path) as f: -> with open(template_path, encoding="utf-8") as f: ### System Info System Information ------------------ > OS: Windows > OS Version: 10.0.22631 > Python Version: 3.9.13 (main, Aug 25 2022, 23:51:50) [MSC v.1916 64 bit (AMD64)] Package Information ------------------- > langchain_core: 0.1.25 > langchain: 0.1.8 > langchain_community: 0.0.21 > langsmith: 0.1.5 > langchain_experimental: 0.0.51 > langchain_openai: 0.0.6 Packages not installed (Not Necessarily a Problem) -------------------------------------------------- The following packages were not found: > langgraph > langserve
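The one-line fix above can be demonstrated without LangChain at all; this sketch writes and re-reads a small template with the encoding pinned, so the result no longer depends on the platform's locale default:

```python
import os
import tempfile

# Passing encoding="utf-8" explicitly makes the read independent of the
# locale default ('gbk' on Chinese-locale Windows, the source of the
# UnicodeDecodeError in this issue).
path = os.path.join(tempfile.mkdtemp(), "template.txt")
with open(path, "w", encoding="utf-8") as f:
    f.write("Résumé 模板")

with open(path, encoding="utf-8") as f:  # explicit encoding, portable
    content = f.read()
print(content)
```

Without the explicit `encoding`, the same read would fail on a gbk-default system whenever the template contains bytes invalid in gbk.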
UnicodeDecodeError: 'gbk' codec can't decode byte 0xaa in position 55: illegal multibyte sequence
https://api.github.com/repos/langchain-ai/langchain/issues/17995/comments
1
2024-02-23T03:07:18Z
2024-07-20T02:28:43Z
https://github.com/langchain-ai/langchain/issues/17995
2150289392
17995
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code ```python from langchain_core.prompts import ChatPromptTemplate from langchain_openai import ChatOpenAI import os prompt = ChatPromptTemplate.from_template("tell me a joke about {foo}") model = ChatOpenAI(openai_api_key=os.environ["OPEN_AI_KEY"], model="gpt-4") functions = [ { "name": "joke", "description": "A joke", "parameters": { "type": "object", "properties": { "setup": {"type": "string", "description": "The setup for the joke"}, "punchline": { "type": "string", "description": "The punchline for the joke", }, }, "required": ["setup", "punchline"], }, } ] from langchain.output_parsers.openai_functions import JsonOutputFunctionsParser chain = ( prompt | model.bind(function_call={"name": "joke"}, functions=functions) | JsonOutputFunctionsParser() ) ``` ### Error Message and Stack Trace (if applicable) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[7], [line 21](vscode-notebook-cell:?execution_count=7&line=21) [1](vscode-notebook-cell:?execution_count=7&line=1) functions = [ [2](vscode-notebook-cell:?execution_count=7&line=2) { [3](vscode-notebook-cell:?execution_count=7&line=3) "name": "joke", (...) 
16 } 17 ] 18 from langchain.output_parsers.openai_functions import JsonOutputFunctionsParser 20 chain = ( ---> 21 prompt 22 | model.bind(function_call={"name": "joke"}, functions=functions) 23 | JsonOutputFunctionsParser() 24 ) File ~/Library/Python/3.9/lib/python/site-packages/langchain_core/runnables/base.py:2010, in RunnableSequence.__or__(self, other) 1996 return RunnableSequence( 1997 self.first, 1998 *self.middle, (...) 2003 name=self.name or other.name, 2004 ) 2005 else: 2006 return RunnableSequence( 2007 self.first, 2008 *self.middle, 2009 self.last, -> 2010 coerce_to_runnable(other), 2011 name=self.name, 2012 ) File ~/Library/Python/3.9/lib/python/site-packages/langchain_core/runnables/base.py:4366, in coerce_to_runnable(thing) 4364 return cast(Runnable[Input, Output], RunnableParallel(thing)) 4365 else: -> 4366 raise TypeError( 4367 f"Expected a Runnable, callable or dict." 4368 f"Instead got an unsupported type: {type(thing)}" 4369 ) TypeError: Expected a Runnable, callable or dict.Instead got an unsupported type: <class 'langchain.output_parsers.openai_functions.JsonOutputFunctionsParser'> ### Description I am running the example given in the Cookbook https://python.langchain.com/docs/expression_language/cookbook/prompt_llm_parser#prompttemplate-llm-outputparser and I am getting the error shown. ### System Info ```bash langchain==0.0.229 langchain-core==0.1.25 langchain-google-genai==0.0.9 langchain-openai==0.0.6 langchainplus-sdk==0.0.20 ```
https://api.github.com/repos/langchain-ai/langchain/issues/17975/comments
1
2024-02-22T21:25:30Z
2024-06-08T16:11:35Z
https://github.com/langchain-ai/langchain/issues/17975
2149965621
17975
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code import os import pinecone import streamlit as st from dotenv import load_dotenv, find_dotenv from langchain.chains import RetrievalQA from langchain_openai import ChatOpenAI from langchain_openai import OpenAIEmbeddings from langchain_community.document_loaders import PyPDFLoader, Docx2txtLoader, TextLoader from langchain_core.vectorstores import VectorStore from langchain_community.vectorstores import Pinecone from langchain_pinecone import PineconeVectorStore from langchain.text_splitter import RecursiveCharacterTextSplitter ### Error Message and Stack Trace (if applicable) cannot import name 'PineconeVectorStore' from 'langchain_pinecone' ### Description While importing all of the libraries listed above, I unfortunately get an error, and I can't tell why. I am using the import that is defined in libs/partners/pinecone/langchain_pinecone/vectorstores.py, and the example there says to use this import directly. If anyone has had this issue, I would be glad to hear the solution. ### System Info System Information ------------------ > OS: Linux > OS Version: #18~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Wed Feb 7 11:40:03 UTC 2 > Python Version: 3.11.5 (main, Sep 11 2023, 13:54:46) [GCC 11.2.0] Package Information ------------------- > langchain_core: 0.1.22 > langchain: 0.1.6 > langchain_community: 0.0.19 > langsmith: 0.0.87 > langchain_openai: 0.0.5 > langchain_pinecone: 0.0.2 Packages not installed (Not Necessarily a Problem) -------------------------------------------------- The following packages were not found: > langgraph > langserve
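`PineconeVectorStore` is most likely missing from the installed langchain_pinecone 0.0.2 even though it exists in the repository's current source, so upgrading (`pip install -U langchain-pinecone`) is the probable fix. A self-contained sketch of a fallback import, run against a fake module so it executes anywhere; which names 0.0.2 actually exports is an assumption:

```python
import sys
import types

# Throwaway module mimicking an older langchain_pinecone layout where the
# vector store is still exported under a different name (hypothetical).
legacy = types.ModuleType("fake_langchain_pinecone")
legacy.Pinecone = type("Pinecone", (), {})
sys.modules["fake_langchain_pinecone"] = legacy

# A try/except import keeps a script working across both layouts; swap the
# fake module name for langchain_pinecone in real code.
try:
    from fake_langchain_pinecone import PineconeVectorStore
except ImportError:
    from fake_langchain_pinecone import Pinecone as PineconeVectorStore

print(PineconeVectorStore.__name__)  # Pinecone
```

Once the package is upgraded, the plain `from langchain_pinecone import PineconeVectorStore` from the docs should work without the fallback.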
Issue while importing PineconeVectorStore
https://api.github.com/repos/langchain-ai/langchain/issues/17965/comments
1
2024-02-22T18:54:53Z
2024-02-22T18:57:55Z
https://github.com/langchain-ai/langchain/issues/17965
2149739963
17965
[ "langchain-ai", "langchain" ]
### Checked other resources

- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.

### Example Code

```python
# test.py
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores.mongodb_atlas import MongoDBAtlasVectorSearch

embeddings = OpenAIEmbeddings()

MONGO_CONNECTION_URI = "mongo uri"
VECTORS_DB_NAME = "db name"
VECTOR_SEARCH_INDEX_NAME = "index name"

db = MongoDBAtlasVectorSearch.from_connection_string(
    MONGO_CONNECTION_URI,
    VECTORS_DB_NAME,
    embeddings,
    index_name=VECTOR_SEARCH_INDEX_NAME,
)

db_query = "Find info related to ..."
num_of_db_results = 10
raw_contexts = db.max_marginal_relevance_search(query=db_query, k=num_of_db_results, lambda_mult=0)
```

### Error Message and Stack Trace (if applicable)

```cmd
Traceback (most recent call last):
  File ".../test.py", line 39, in <module>
    raw_contexts = db.max_marginal_relevance_search(query=db_query, k=num_of_db_results, lambda_mult=0)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File ".../lib/python3.12/site-packages/langchain_community/vectorstores/mongodb_atlas.py", line 325, in max_marginal_relevance_search
    [doc.metadata[self._embedding_key] for doc, _ in docs],
     ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^
KeyError: 'embedding'
```

### Description

* Error when using MMR (max marginal relevance) search. This includes:
  - calling `max_marginal_relevance_search` directly
  - using `db.as_retriever(search_type="mmr")`
* `self._embedding_key` is deleted from `docs` in `_similarity_search_with_score`
  https://github.com/langchain-ai/langchain/blob/f6e3aa9770e32216954c3e0f2fa6825e5d89bd75/libs/community/langchain_community/vectorstores/mongodb_atlas.py#L212
  The following method, `maximal_marginal_relevance`, tries to access the deleted field, resulting in an error:
  https://github.com/langchain-ai/langchain/blob/f6e3aa9770e32216954c3e0f2fa6825e5d89bd75/libs/community/langchain_community/vectorstores/mongodb_atlas.py#L325

Expectation

* MMR option working with no error

### System Info

Python `v3.12.0`

requirements.txt
```
langchain==0.1.8
langchain-community==0.0.21
langchain-core==0.1.25
langchain-openai==0.0.5
langsmith==0.1.5
openai==1.11.1
```
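Editor's note: one possible shape of the fix is to keep the embedding in the document metadata instead of deleting it, so the MMR step can still read it. The sketch below uses plain dicts rather than the real `Document`/MongoDB types, so the names only mirror, and do not reproduce, the library code:

```python
def search_with_scores(raw_results, embedding_key="embedding"):
    """Build (doc, score) pairs while *keeping* the embedding in metadata.

    Mirrors _similarity_search_with_score, except the embedding is not
    deleted (the deletion is what triggers the later KeyError).
    """
    docs = []
    for res in raw_results:
        text = res.pop("text")
        score = res.pop("score")
        # Whatever is left (including the embedding) becomes metadata.
        docs.append(({"page_content": text, "metadata": res}, score))
    return docs


def mmr_candidate_embeddings(docs_with_scores, embedding_key="embedding"):
    # Mirrors the failing line in max_marginal_relevance_search;
    # with the embedding preserved, the key lookup now succeeds.
    return [doc["metadata"][embedding_key] for doc, _ in docs_with_scores]


raw = [
    {"text": "a", "score": 0.9, "embedding": [0.1, 0.2]},
    {"text": "b", "score": 0.8, "embedding": [0.3, 0.4]},
]
embeddings = mmr_candidate_embeddings(search_with_scores(raw))
```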
community: [MongoDBAtlasVectorSearch] Fix KeyError 'embedding' when using MMR
https://api.github.com/repos/langchain-ai/langchain/issues/17963/comments
1
2024-02-22T18:40:16Z
2024-06-08T16:11:30Z
https://github.com/langchain-ai/langchain/issues/17963
2,149,716,421
17,963
[ "langchain-ai", "langchain" ]
### Checked other resources

- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.

### Example Code

Hi. I am trying to reproduce the exact same example used [here](https://python.langchain.com/docs/modules/agents/agent_types/react), but I use DuckDuckGo to search. The following error occurs:

```
from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent
from langchain.tools import DuckDuckGoSearchRun, DuckDuckGoSearchResults

# setting the tools
tools = [DuckDuckGoSearchResults()]

# Get the prompt to use - you can modify this!
prompt = hub.pull("hwchase17/react")

# Choose the LLM to use
#llm = OpenAI()
#llm = VertexAI()

# Construct the ReAct agent
agent = create_react_agent(llm, tools, prompt)

# Create an agent executor by passing in the agent and tools
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

agent_executor.invoke({"input": "what is LangChain?"})
```

### Error Message and Stack Trace (if applicable)

```
> Entering new AgentExecutor chain...
--------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Cell In[20], line 21 18 # Create an agent executor by passing in the agent and tools 19 agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True) ---> 21 agent_executor.invoke({"input": "what is LangChain?"}) File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:168, in Chain.invoke(self, input, config, **kwargs) 166 except BaseException as e: 167 run_manager.on_chain_error(e) --> 168 raise e 169 run_manager.on_chain_end(outputs) 171 if include_run_info: File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:158, in Chain.invoke(self, input, config, **kwargs) 155 try: 156 self._validate_inputs(inputs) 157 outputs = ( --> 158 self._call(inputs, run_manager=run_manager) 159 if new_arg_supported 160 else self._call(inputs) 161 ) 163 final_outputs: Dict[str, Any] = self.prep_outputs( 164 inputs, outputs, return_only_outputs 165 ) 166 except BaseException as e: File /opt/conda/lib/python3.10/site-packages/langchain/agents/agent.py:1391, in AgentExecutor._call(self, inputs, run_manager) 1389 # We now enter the agent loop (until it returns something). 1390 while self._should_continue(iterations, time_elapsed): -> 1391 next_step_output = self._take_next_step( 1392 name_to_tool_map, 1393 color_mapping, 1394 inputs, 1395 intermediate_steps, 1396 run_manager=run_manager, 1397 ) 1398 if isinstance(next_step_output, AgentFinish): 1399 return self._return( 1400 next_step_output, intermediate_steps, run_manager=run_manager 1401 ) File /opt/conda/lib/python3.10/site-packages/langchain/agents/agent.py:1097, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager) 1088 def _take_next_step( 1089 self, 1090 name_to_tool_map: Dict[str, BaseTool], (...) 
1094 run_manager: Optional[CallbackManagerForChainRun] = None, 1095 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]: 1096 return self._consume_next_step( -> 1097 [ 1098 a 1099 for a in self._iter_next_step( 1100 name_to_tool_map, 1101 color_mapping, 1102 inputs, 1103 intermediate_steps, 1104 run_manager, 1105 ) 1106 ] 1107 ) File /opt/conda/lib/python3.10/site-packages/langchain/agents/agent.py:1097, in <listcomp>(.0) 1088 def _take_next_step( 1089 self, 1090 name_to_tool_map: Dict[str, BaseTool], (...) 1094 run_manager: Optional[CallbackManagerForChainRun] = None, 1095 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]: 1096 return self._consume_next_step( -> 1097 [ 1098 a 1099 for a in self._iter_next_step( 1100 name_to_tool_map, 1101 color_mapping, 1102 inputs, 1103 intermediate_steps, 1104 run_manager, 1105 ) 1106 ] 1107 ) File /opt/conda/lib/python3.10/site-packages/langchain/agents/agent.py:1125, in AgentExecutor._iter_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager) 1122 intermediate_steps = self._prepare_intermediate_steps(intermediate_steps) 1124 # Call the LLM to see what to do. -> 1125 output = self.agent.plan( 1126 intermediate_steps, 1127 callbacks=run_manager.get_child() if run_manager else None, 1128 **inputs, 1129 ) 1130 except OutputParserException as e: 1131 if isinstance(self.handle_parsing_errors, bool): File /opt/conda/lib/python3.10/site-packages/langchain/agents/agent.py:387, in RunnableAgent.plan(self, intermediate_steps, callbacks, **kwargs) 381 # Use streaming to make sure that the underlying LLM is invoked in a streaming 382 # fashion to make it possible to get access to the individual LLM tokens 383 # when using stream_log with the Agent Executor. 384 # Because the response from the plan is not a generator, we need to 385 # accumulate the output into final output and return that. 
386 final_output: Any = None --> 387 for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}): 388 if final_output is None: 389 final_output = chunk File /opt/conda/lib/python3.10/site-packages/langchain_core/runnables/base.py:2427, in RunnableSequence.stream(self, input, config, **kwargs) 2421 def stream( 2422 self, 2423 input: Input, 2424 config: Optional[RunnableConfig] = None, 2425 **kwargs: Optional[Any], 2426 ) -> Iterator[Output]: -> 2427 yield from self.transform(iter([input]), config, **kwargs) File /opt/conda/lib/python3.10/site-packages/langchain_core/runnables/base.py:2414, in RunnableSequence.transform(self, input, config, **kwargs) 2408 def transform( 2409 self, 2410 input: Iterator[Input], 2411 config: Optional[RunnableConfig] = None, 2412 **kwargs: Optional[Any], 2413 ) -> Iterator[Output]: -> 2414 yield from self._transform_stream_with_config( 2415 input, 2416 self._transform, 2417 patch_config(config, run_name=(config or {}).get("run_name") or self.name), 2418 **kwargs, 2419 ) File /opt/conda/lib/python3.10/site-packages/langchain_core/runnables/base.py:1494, in Runnable._transform_stream_with_config(self, input, transformer, config, run_type, **kwargs) 1492 try: 1493 while True: -> 1494 chunk: Output = context.run(next, iterator) # type: ignore 1495 yield chunk 1496 if final_output_supported: File /opt/conda/lib/python3.10/site-packages/langchain_core/runnables/base.py:2378, in RunnableSequence._transform(self, input, run_manager, config) 2369 for step in steps: 2370 final_pipeline = step.transform( 2371 final_pipeline, 2372 patch_config( (...) 
2375 ), 2376 ) -> 2378 for output in final_pipeline: 2379 yield output File /opt/conda/lib/python3.10/site-packages/langchain_core/runnables/base.py:1032, in Runnable.transform(self, input, config, **kwargs) 1029 final: Input 1030 got_first_val = False -> 1032 for chunk in input: 1033 if not got_first_val: 1034 final = chunk File /opt/conda/lib/python3.10/site-packages/langchain_core/runnables/base.py:4164, in RunnableBindingBase.transform(self, input, config, **kwargs) 4158 def transform( 4159 self, 4160 input: Iterator[Input], 4161 config: Optional[RunnableConfig] = None, 4162 **kwargs: Any, 4163 ) -> Iterator[Output]: -> 4164 yield from self.bound.transform( 4165 input, 4166 self._merge_configs(config), 4167 **{**self.kwargs, **kwargs}, 4168 ) File /opt/conda/lib/python3.10/site-packages/langchain_core/runnables/base.py:1042, in Runnable.transform(self, input, config, **kwargs) 1039 final = final + chunk # type: ignore[operator] 1041 if got_first_val: -> 1042 yield from self.stream(final, config, **kwargs) File /opt/conda/lib/python3.10/site-packages/langchain_core/language_models/llms.py:452, in BaseLLM.stream(self, input, config, stop, **kwargs) 445 except BaseException as e: 446 run_manager.on_llm_error( 447 e, 448 response=LLMResult( 449 generations=[[generation]] if generation else [] 450 ), 451 ) --> 452 raise e 453 else: 454 run_manager.on_llm_end(LLMResult(generations=[[generation]])) File /opt/conda/lib/python3.10/site-packages/langchain_core/language_models/llms.py:436, in BaseLLM.stream(self, input, config, stop, **kwargs) 434 generation: Optional[GenerationChunk] = None 435 try: --> 436 for chunk in self._stream( 437 prompt, stop=stop, run_manager=run_manager, **kwargs 438 ): 439 yield chunk.text 440 if generation is None: File /opt/conda/lib/python3.10/site-packages/langchain_community/llms/vertexai.py:376, in VertexAI._stream(self, prompt, stop, run_manager, **kwargs) 368 def _stream( 369 self, 370 prompt: str, (...) 
373 **kwargs: Any, 374 ) -> Iterator[GenerationChunk]: 375 params = self._prepare_params(stop=stop, stream=True, **kwargs) --> 376 for stream_resp in completion_with_retry( # type: ignore[misc] 377 self, 378 [prompt], 379 stream=True, 380 is_gemini=self._is_gemini_model, 381 run_manager=run_manager, 382 **params, 383 ): 384 chunk = self._response_to_generation(stream_resp) 385 yield chunk File /opt/conda/lib/python3.10/site-packages/langchain_community/llms/vertexai.py:76, in completion_with_retry(llm, prompt, stream, is_gemini, run_manager, **kwargs) 73 return llm.client.predict_streaming(prompt[0], **kwargs) 74 return llm.client.predict(prompt[0], **kwargs) ---> 76 return _completion_with_retry(prompt, is_gemini, **kwargs) File /opt/conda/lib/python3.10/site-packages/tenacity/__init__.py:289, in BaseRetrying.wraps.<locals>.wrapped_f(*args, **kw) 287 @functools.wraps(f) 288 def wrapped_f(*args: t.Any, **kw: t.Any) -> t.Any: --> 289 return self(f, *args, **kw) File /opt/conda/lib/python3.10/site-packages/tenacity/__init__.py:379, in Retrying.__call__(self, fn, *args, **kwargs) 377 retry_state = RetryCallState(retry_object=self, fn=fn, args=args, kwargs=kwargs) 378 while True: --> 379 do = self.iter(retry_state=retry_state) 380 if isinstance(do, DoAttempt): 381 try: File /opt/conda/lib/python3.10/site-packages/tenacity/__init__.py:314, in BaseRetrying.iter(self, retry_state) 312 is_explicit_retry = fut.failed and isinstance(fut.exception(), TryAgain) 313 if not (is_explicit_retry or self.retry(retry_state)): --> 314 return fut.result() 316 if self.after is not None: 317 self.after(retry_state) File /opt/conda/lib/python3.10/concurrent/futures/_base.py:451, in Future.result(self, timeout) 449 raise CancelledError() 450 elif self._state == FINISHED: --> 451 return self.__get_result() 453 self._condition.wait(timeout) 455 if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]: File /opt/conda/lib/python3.10/concurrent/futures/_base.py:403, in Future.__get_result(self) 
401 if self._exception: 402 try: --> 403 raise self._exception 404 finally: 405 # Break a reference cycle with the exception in self._exception 406 self = None File /opt/conda/lib/python3.10/site-packages/tenacity/__init__.py:382, in Retrying.__call__(self, fn, *args, **kwargs) 380 if isinstance(do, DoAttempt): 381 try: --> 382 result = fn(*args, **kwargs) 383 except BaseException: # noqa: B902 384 retry_state.set_exception(sys.exc_info()) # type: ignore[arg-type] File /opt/conda/lib/python3.10/site-packages/langchain_community/llms/vertexai.py:73, in completion_with_retry.<locals>._completion_with_retry(prompt, is_gemini, **kwargs) 71 else: 72 if stream: ---> 73 return llm.client.predict_streaming(prompt[0], **kwargs) 74 return llm.client.predict(prompt[0], **kwargs) AttributeError: 'TextGenerationModel' object has no attribute 'predict_streaming' ``` ### Description Im trying to use langchain to search and ask about specifics topics. I expect to see the answer about the topics, but the above mentioned error occur ### System Info langchain==0.1.8 langchain-community==0.0.21 langchain-core==0.1.25 langchainhub==0.1.14
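Editor's note: the `AttributeError` suggests a version skew between `langchain_community` and the installed Vertex AI SDK (`predict_streaming` does not exist on that `TextGenerationModel`), so aligning package versions is the likely fix. Until then, a guarded fallback along these lines avoids the crash; `FakeModel` below is a stand-in so the sketch runs without Google credentials, and `complete` is my own helper, not a LangChain function:

```python
def complete(model, prompt, stream=True, **kwargs):
    """Return a list of response chunks.

    Prefer streaming only when the client actually supports it; otherwise
    fall back to a single blocking predict() call instead of crashing.
    """
    if stream and hasattr(model, "predict_streaming"):
        return list(model.predict_streaming(prompt, **kwargs))
    return [model.predict(prompt, **kwargs)]


class FakeModel:
    # Stand-in for an older TextGenerationModel that lacks predict_streaming.
    def predict(self, prompt, **kwargs):
        return f"echo: {prompt}"


chunks = complete(FakeModel(), "hello")
```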
AttributeError: 'TextGenerationModel' object has no attribute 'predict_streaming'
https://api.github.com/repos/langchain-ai/langchain/issues/17962/comments
0
2024-02-22T18:36:10Z
2024-06-08T16:11:25Z
https://github.com/langchain-ai/langchain/issues/17962
2,149,707,516
17,962
[ "langchain-ai", "langchain" ]
### Checked other resources

- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.

### Example Code

I define a Redis database like

```
self.db: Redis = Redis(
    embedding=embeddings,
    redis_url=self.url,
    index_name=Database.index_name,
)
```

and later on, if my docs are out of date, I clear the database by using `self.db.client` directly to call `FLUSHALL`. After that, I add the docs, which hangs indefinitely:

```
self.clear_db()
logger.info(f"Adding {len(docs)} docs to the database")
self.db.add_documents(documents=docs)
logger.info("Finished adding docs")
```

The last log is never seen and no docs are added to the database.

### Error Message and Stack Trace (if applicable)

_No response_

### Description

I am clearing the Redis database after creating it, which deletes the index used for the vector database functionality. However, it seems the current implementation cannot handle the index being deleted, or the database being cleared, before documents are added to it.

### System Info

```
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Thu Dec 7 03:06:13 EST 2023
> Python Version: 3.11.5 (main, Sep 22 2023, 15:34:29) [GCC 8.5.0 20210514 (Red Hat 8.5.0-20)]

Package Information
-------------------
> langchain_core: 0.1.23
> langchain: 0.1.7
> langchain_community: 0.0.20
> langsmith: 0.0.87

Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
```
Redis `add/aadd_documents` hangs after calling `FLUSHALL` on Database
https://api.github.com/repos/langchain-ai/langchain/issues/17959/comments
1
2024-02-22T17:06:33Z
2024-02-22T19:09:41Z
https://github.com/langchain-ai/langchain/issues/17959
2,149,527,094
17,959
[ "langchain-ai", "langchain" ]
### Checked other resources

- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.

### Example Code

```python
from langchain_community.document_loaders import AzureAIDocumentIntelligenceLoader

loader = AzureAIDocumentIntelligenceLoader(file_path='<a pdf containing a table>',
                                           api_endpoint="<your endpoint>",
                                           api_key="<your key>",
                                           mode="object")
loaded_documents = loader.load_and_split()
```

### Error Message and Stack Trace (if applicable)

```
AzureAIDocumentIntelligenceParser._generate_docs_object(self, result)
     72 # table
     73 for table in result.tables:
---> 74     yield Document(
     75         page_content=table.cells,  # json object
     76         metadata={
     77             "footnote": table.footnotes,
     78             "caption": table.caption,
     79             "page": para.bounding_regions[0].page_number,
     80             "bounding_box": para.bounding_regions[0].polygon,
     81             "row_count": table.row_count,
     82             "column_count": table.column_count,
     83             "type": "table",
     84         },
     85     )
...
ValidationError: 1 validation error for Document
page_content
  str type expected (type=type_error.str)
```

### Description

* It seems that the `page_content` attribute is not filled correctly.
* It feeds a `list[DocumentTableCell]` into a field that expects a string.
* There is even a comment in the code declaring that it is not a string being passed in, but a "json object" instead.

### System Info

System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.3.0: Wed Dec 20 21:30:44 PST 2023; root:xnu-10002.81.5~7/RELEASE_ARM64_T6000
> Python Version: 3.11.7 (main, Dec 4 2023, 18:10:11) [Clang 15.0.0 (clang-1500.1.0.2.5)]

Package Information
-------------------
> langchain_core: 0.1.22
> langchain: 0.1.6
> langchain_community: 0.0.19
> langsmith: 0.0.87
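Editor's note: as a workaround until the parser is fixed, the table cells can be serialized to a string before constructing the `Document`, since `page_content` is validated as `str`. The sketch below uses plain dicts in place of the SDK's `DocumentTableCell` objects, so it is an assumption about the data shape, not the library's actual fix:

```python
import json


def table_cells_to_page_content(cells):
    """Serialize table cells into the str that Document.page_content requires.

    default=str handles SDK objects that are not natively JSON-serializable.
    """
    return json.dumps(cells, default=str)


# Hypothetical cells standing in for DocumentTableCell instances:
cells = [
    {"row_index": 0, "column_index": 0, "content": "Revenue"},
    {"row_index": 0, "column_index": 1, "content": "2023"},
]
page_content = table_cells_to_page_content(cells)
```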
AzureAIDocumentIntelligenceParser fills the Document Model incorrectly for tables
https://api.github.com/repos/langchain-ai/langchain/issues/17957/comments
0
2024-02-22T16:47:11Z
2024-06-08T16:11:20Z
https://github.com/langchain-ai/langchain/issues/17957
2,149,492,486
17,957
[ "langchain-ai", "langchain" ]
### Checklist

- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).

### Issue with current documentation:

I tried to run the SelfQueryRetriever to retrieve snippets, but it fails with a JSON decode error.

### Idea or request for content:

Below is the code:

```
def fetch_unique_documents(query_template, company_names, initial_limit, desired_count):
    company_documents = {}
    for company_name in company_names:
        # Format the query with the current company name
        query = query_template.format(company_names=company_name)
        unique_docs = []
        seen_contents = set()
        current_limit = initial_limit
        while len(unique_docs) < desired_count:
            structured_query = StructuredQuery(query=query, limit=current_limit)
            docs = retriever.get_relevant_documents(structured_query)
            # Keep track of whether we found new unique documents in this iteration
            found_new_unique = False
            for doc in docs:
                if doc.page_content not in seen_contents:
                    unique_docs.append(doc)
                    seen_contents.add(doc.page_content)
                    found_new_unique = True
                    if len(unique_docs) == desired_count:
                        break
            if not found_new_unique or len(unique_docs) == desired_count:
                break  # Exit if no new unique documents are found or if we've reached the desired count
            # Increase the limit more aggressively if we are still far from the desired count
            current_limit += desired_count - len(unique_docs)
        # Store the results in the dictionary with the company name as the key
        company_documents[company_name] = unique_docs
    return company_documents

# Example usage
company_names = company_names
query_template = "Does the company {company_names}, has plans to get financial statements?"
desired_count = 5  # The number of unique documents you want per company
initial_limit = 50

# Fetch documents for each company
company_documents = fetch_unique_documents(query_template, company_names, initial_limit=desired_count, desired_count=desired_count)
```

and below is the output:

````
---------------------------------------------------------------------------
JSONDecodeError                           Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/langchain_core/output_parsers/json.py in parse_and_check_json_markdown(text, expected_keys)
    174 try:
--> 175     json_obj = parse_json_markdown(text)
    176 except json.JSONDecodeError as e:

17 frames

JSONDecodeError: Extra data: line 6 column 1 (char 78)

During handling of the above exception, another exception occurred:

OutputParserException                     Traceback (most recent call last)
OutputParserException: Got invalid JSON object. Error: Extra data: line 6 column 1 (char 78)

During handling of the above exception, another exception occurred:

OutputParserException                     Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/langchain/chains/query_constructor/base.py in parse(self, text)
     61 )
     62 except Exception as e:
---> 63     raise OutputParserException(
     64         f"Parsing text\n{text}\n raised following error:\n{e}"
     65     )

OutputParserException: Parsing text
```json
{
    "query": "Macquarie Group",
    "filter": "NO_FILTER",
    "limit": 5
}
```
````

How do I resolve this issue?
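Editor's note: the `Extra data` error means the model returned a valid JSON object followed by trailing text (here, the closing fence). A more tolerant parser can strip the markdown fence and decode only the first JSON value with `json.JSONDecoder.raw_decode`; this is a workaround sketch, not the LangChain parser itself:

```python
import json
import re


def parse_first_json(text):
    """Parse the first JSON value in a possibly fenced, noisy LLM reply."""
    # Pull the contents of a ```json ... ``` fence if one is present.
    match = re.search(r"```(?:json)?\s*(.*?)```", text, re.DOTALL)
    candidate = (match.group(1) if match else text).strip()
    # raw_decode stops at the end of the first JSON value, so any
    # trailing "extra data" is simply ignored instead of raising.
    obj, _end = json.JSONDecoder().raw_decode(candidate)
    return obj


# The reply shape from the error message in this report:
reply = '```json\n{\n  "query": "Macquarie Group",\n  "filter": "NO_FILTER",\n  "limit": 5\n}\n```'
parsed = parse_first_json(reply)
```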
returning JSON Decode Error for SelfQueryRetriever
https://api.github.com/repos/langchain-ai/langchain/issues/17952/comments
0
2024-02-22T14:46:52Z
2024-06-08T16:11:15Z
https://github.com/langchain-ai/langchain/issues/17952
2,149,244,148
17,952
[ "langchain-ai", "langchain" ]
### Checked other resources

- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.

### Example Code

Making a repro will be a *lot* of work for what is likely a fairly simple server-side issue of langsmith. If it isn't, let me know and I'll try to do a repro.

### Error Message and Stack Trace (if applicable)

```
Failed to batch ingest runs: LangSmithError('Failed to post https://api.smith.langchain.com/runs/batch in LangSmith API. HTTPError(\'422 Client Error: unknown for url: https://api.smith.langchain.com/runs/batch\', \'{"detail":"[\\\'post\\\', \\\'items\\\', \\\'properties\\\', \\\'name\\\', \\\'maxLength\\\']: \\\\"ChannelRead<[\\\'raw_conversation\\\', \\\'messages\\\', \\\'query\\\', \\\'assistant_id\\\', \\\'assistant_nickname\\\', \\\'user_name\\\', \\\'user_roles\\\', \\\'organization_name\\\']>\\\\" is longer than 128 characters"}\')')
```

### Description

I started getting these errors from langsmith when using it to trace langgraph runs. It ran fine until a couple of days ago; might this be an issue with the latest version of langsmith?

### System Info

System Information
------------------
> OS: Linux
> OS Version: #1 ZEN SMP PREEMPT_DYNAMIC Mon, 05 Feb 2024 22:07:37 +0000
> Python Version: 3.11.7 (main, Jan 29 2024, 16:03:57) [GCC 13.2.1 20230801]

Package Information
-------------------
> langchain_core: 0.1.19
> langchain: 0.1.4
> langchain_community: 0.0.16
> langsmith: 0.0.84
> langchain_openai: 0.0.5
> langchainhub: 0.1.14
> langgraph: 0.0.21

Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langserve
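Editor's note: the 422 comes from the tracing payload itself: the auto-generated run name (`ChannelRead<[...]>`) exceeds the API's 128-character `maxLength` constraint on `name`. The failing constraint is easy to reproduce and guard against; `truncate_run_name` below is a hypothetical helper for illustration, not a langsmith API:

```python
MAX_RUN_NAME_LENGTH = 128  # the limit reported by the LangSmith API error


def truncate_run_name(name, max_length=MAX_RUN_NAME_LENGTH):
    """Keep run names within the schema's maxLength so batch ingest won't 422."""
    if len(name) <= max_length:
        return name
    return name[: max_length - 3] + "..."


# A long auto-generated name in the shape the error message shows:
long_name = "ChannelRead<[" + ", ".join(f"field_{i}" for i in range(40)) + "]>"
safe_name = truncate_run_name(long_name)
```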
Langsmith: Getting random "Failed to batch ingest runs: LangSmithError('Failed to post https://api.smith.langchain.com/runs/batch in LangSmith API. HTTPError(\'422 Client Error: unknown for url: https://api.smith.langchain.com/runs/batch\', \'{"detail":"..." is longer than 128 characters"}\')')
https://api.github.com/repos/langchain-ai/langchain/issues/17950/comments
2
2024-02-22T13:48:56Z
2024-03-06T19:00:15Z
https://github.com/langchain-ai/langchain/issues/17950
2,149,119,232
17,950
[ "langchain-ai", "langchain" ]
### Checked other resources

- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.

### Example Code

```python
embedding = OpenAIEmbeddings()
memory = ConversationBufferMemory(memory_key='chat_history', return_messages=True, output_key='answer')
llm = ChatOpenAI(model="gpt-3.5-turbo-16k", temperature=0.1)

vector_store = PGVector(
    connection_string=CONNECTION_STRING,
    collection_name=COLLECTION_NAME,
    embedding_function=embedding
)
retriever = vector_store.as_retriever(search_kwargs={"k": 3})

qa = ConversationalRetrievalChain.from_llm(
    llm=llm,
    memory=memory,
    retriever=retriever,
    combine_docs_chain_kwargs={'prompt': prompt},
    return_source_documents=True,
)
```

### Error Message and Stack Trace (if applicable)

When running, I got:

```
{'question': 'what is code of conduct policy?', 'chat_history': [HumanMessage(content='what is code of conduct policy?'), AIMessage(content="The Code of Conduct policy at HabileLabs is a set of guidelines that all team members and board members are expected to know and follow. Here are the key points of the policy:\n\n1. Purpose and Standards:\n 1.1 The Code is based on the highest standards of ethical business conduct.\n 1.2 It serves as a practical guide.........
```

but I am not able to store the previous conversation in chat_history.

### Description

I am not able to store the previous conversation in memory: when I ask about a previous exchange, the bot replies that it doesn't know.

### System Info

I am using memory with langchain.
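Editor's note: a common cause of this symptom (hedged, since the surrounding application code is not shown) is constructing a new `ConversationBufferMemory` or a new chain per request, which silently resets the history; accumulation only happens when the *same* memory object is reused across turns. The invariant can be sketched without LangChain:

```python
class BufferMemory:
    """Minimal stand-in for ConversationBufferMemory.

    History lives on the instance, so a fresh instance per request
    means an empty history every time.
    """

    def __init__(self):
        self.chat_history = []

    def save_context(self, question, answer):
        self.chat_history.append(("human", question))
        self.chat_history.append(("ai", answer))


# Reusing one instance across turns: history accumulates.
shared = BufferMemory()
shared.save_context("what is code of conduct policy?", "The Code of Conduct ...")
shared.save_context("what did I just ask?", "You asked about the policy.")

# A per-request instance: history is gone.
fresh = BufferMemory()
```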
Not able to store chat_history while Implementing Memory in ConversationalRetrieval Chain
https://api.github.com/repos/langchain-ai/langchain/issues/17944/comments
0
2024-02-22T12:13:31Z
2024-06-08T16:11:10Z
https://github.com/langchain-ai/langchain/issues/17944
2,148,938,171
17,944
[ "langchain-ai", "langchain" ]
### Checked other resources

- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.

### Example Code

```
from langchain_openai import AzureChatOpenAI
from llama_index.core.agent import ReActAgent
from llama_index.llms.openai import OpenAI

query_engine_tools = [
    QueryEngineTool(
        query_engine=lyft_engine,
        metadata=ToolMetadata(
            name="lyft_10k",
            description=(
                "Provides information about Lyft financials for year 2021. "
                "Use a detailed plain text question as input to the tool."
            ),
        ),
    ),
    QueryEngineTool(
        query_engine=uber_engine,
        metadata=ToolMetadata(
            name="uber_10k",
            description=(
                "Provides information about Uber financials for year 2021. "
                "Use a detailed plain text question as input to the tool."
            ),
        ),
    ),
]

azure_llm = AzureChatOpenAI(
    openai_api_version=api_version,
    api_key=api_key,
    azure_endpoint=azure_endpoint,
    api_version=api_version,
)

azure_agent = ReActAgent.from_tools(
    query_engine_tools,
    llm=azure_llm,
    memory={},
    verbose=True,
)
```

### Error Message and Stack Trace (if applicable)

```
.venv/lib/python3.8/site-packages/llama_index/core/memory/chat_memory_buffer.py
     56 """Create a chat memory buffer from an LLM."""
     57 if llm is not None:
---> 58     context_window = llm.metadata.context_window
     59     token_limit = token_limit or int(context_window * DEFAULT_TOKEN_LIMIT_RATIO)
     60 elif token_limit is None:

AttributeError: 'NoneType' object has no attribute 'context_window'
```

### Description

I am following the tutorial here: https://docs.llamaindex.ai/en/latest/examples/agent/react_agent_with_query_engine.html

It is working great. However, when I use AzureChatOpenAI (or AzureOpenAI) instead, I get the error shown above.

### System Info

langchain==0.1.8
langchain-community==0.0.21
langchain-core==0.1.25
langchain-openai==0.0.6
langchainhub==0.1.14
python 3.8
Ubuntu
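Editor's note: the traceback shows LlamaIndex's chat memory buffer dereferencing `llm.metadata.context_window`, and `metadata` is `None` because a LangChain `AzureChatOpenAI` (not a LlamaIndex LLM wrapper) was passed to `ReActAgent`; using LlamaIndex's own Azure LLM class is likely the proper fix. The failure mode, and a guarded variant of that buffer logic, can be sketched in plain Python; the ratio and fallback constants below are assumptions for illustration:

```python
DEFAULT_TOKEN_LIMIT_RATIO = 0.75  # assumed default, mirroring the trace
DEFAULT_TOKEN_LIMIT = 3000        # assumed fallback when no metadata exists


def resolve_token_limit(llm, token_limit=None):
    """Only read context_window when the LLM actually exposes metadata."""
    metadata = getattr(llm, "metadata", None)
    if metadata is not None:
        return token_limit or int(metadata.context_window * DEFAULT_TOKEN_LIMIT_RATIO)
    return token_limit or DEFAULT_TOKEN_LIMIT


class _Meta:
    context_window = 4096


class LlamaIndexStyleLLM:
    metadata = _Meta()       # what ReActAgent expects


class LangChainStyleLLM:
    metadata = None          # what the agent saw in this report


limit_ok = resolve_token_limit(LlamaIndexStyleLLM())
limit_fallback = resolve_token_limit(LangChainStyleLLM())
```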
Can't use agent with AzureChatOpenAI
https://api.github.com/repos/langchain-ai/langchain/issues/17940/comments
1
2024-02-22T11:48:30Z
2024-06-08T16:11:05Z
https://github.com/langchain-ai/langchain/issues/17940
2,148,886,063
17,940
[ "langchain-ai", "langchain" ]
### Checked other resources

- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.

### Example Code

```python
few_shot_prompt = FewShotPromptTemplate(
    examples=examples,
    example_prompt=PromptTemplate.from_template(
        "User input: {input}\nSQL query: {query}"
    ),
    input_variables=["input", "dialect", "top_k"],
    prefix=system_prefix,
    suffix="User input: {input}\nSQL query: ",
)

full_prompt = ChatPromptTemplate.from_messages(
    [
        SystemMessagePromptTemplate(prompt=few_shot_prompt),
        ("human", "{input}"),
        MessagesPlaceholder("agent_scratchpad"),
    ]
)

agent = create_sql_agent(
    llm=llm,
    db=db,
    prompt=full_prompt,
    verbose=True
)
```

### Error Message and Stack Trace (if applicable)

```
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[32], line 1
----> 1 agent = create_sql_agent(
      2     llm=llm,
      3     db=db,
      4     prompt=full_prompt,
      5     verbose=True
      6 )

File ~\anaconda3\Lib\site-packages\langchain_community\agent_toolkits\sql\base.py:182, in create_sql_agent(llm, toolkit, agent_type, callback_manager, prefix, suffix, format_instructions, input_variables, top_k, max_iterations, max_execution_time, early_stopping_method, verbose, agent_executor_kwargs, extra_tools, db, prompt, **kwargs)
    172 template = "\n\n".join(
    173     [
    174         react_prompt.PREFIX,
   (...)
    178     ]
    179 )
    180 prompt = PromptTemplate.from_template(template)
    181 agent = RunnableAgent(
--> 182     runnable=create_react_agent(llm, tools, prompt),
    183     input_keys_arg=["input"],
    184     return_keys_arg=["output"],
    185 )
    187 elif agent_type == AgentType.OPENAI_FUNCTIONS:
    188     if prompt is None:

File ~\anaconda3\Lib\site-packages\langchain\agents\react\agent.py:97, in create_react_agent(llm, tools, prompt)
     93 missing_vars = {"tools", "tool_names", "agent_scratchpad"}.difference(
     94     prompt.input_variables
     95 )
     96 if missing_vars:
---> 97     raise ValueError(f"Prompt missing required variables: {missing_vars}")
     99 prompt = prompt.partial(
    100     tools=render_text_description(list(tools)),
    101     tool_names=", ".join([t.name for t in tools]),
    102 )
    103 llm_with_stop = llm.bind(stop=["\nObservation"])

ValueError: Prompt missing required variables: {'tools', 'tool_names'}
```

### Description

create_sql_agent is throwing an error.

### System Info

langchain 0.1.8
langchain-community 0.0.21
langchain-core 0.1.25
langchain-experimental 0.0.52
langchain-openai 0.0.6
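Editor's note: `create_react_agent` validates that the prompt declares `tools`, `tool_names`, and `agent_scratchpad` before partialling the first two in, so the custom `FewShotPromptTemplate` above likely needs `{tools}` and `{tool_names}` placeholders (for example in its prefix) and those names added to `input_variables`. The validation itself is simple to reproduce:

```python
def validate_react_prompt(input_variables):
    """Mirror the check in langchain.agents.react.agent.create_react_agent."""
    missing = {"tools", "tool_names", "agent_scratchpad"}.difference(input_variables)
    if missing:
        raise ValueError(f"Prompt missing required variables: {missing}")
    return True


# Fails like the report (tools/tool_names not declared):
try:
    validate_react_prompt(["input", "dialect", "top_k", "agent_scratchpad"])
    error = ""
except ValueError as exc:
    error = str(exc)

# Passes once the placeholders are declared on the prompt:
ok = validate_react_prompt(
    ["input", "dialect", "top_k", "tools", "tool_names", "agent_scratchpad"]
)
```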
create_sql_agent: Prompt missing required variables: {'tools', 'tool_names'}
https://api.github.com/repos/langchain-ai/langchain/issues/17939/comments
10
2024-02-22T11:47:34Z
2024-07-30T23:01:45Z
https://github.com/langchain-ai/langchain/issues/17939
2,148,884,491
17,939
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code from langchain_community.document_loaders import S3FileLoader from langchain_community.document_loaders import UnstructuredWordDocumentLoader # loader = S3FileLoader(s3_bucket, s3_key) # docs = loader.load() s3.download_file(s3_bucket, s3_key, f"/tmp/{s3_key}") with open(f"/tmp/{s3_key}", "rb") as f: loader = UnstructuredWordDocumentLoader(f"/tmp/{s3_key}") docs = loader.load() ### Error Message and Stack Trace (if applicable) [ERROR] OSError: [Errno 30] Read-only file system: '/home/sbx_user1051' Traceback (most recent call last): File "/var/task/aws_lambda_powertools/logging/logger.py", line 449, in decorate return lambda_handler(event, context, *args, **kwargs) File "/var/task/lambda_handler.py", line 54, in handler docs = loader.load() File "/var/task/langchain_community/document_loaders/unstructured.py", line 87, in load elements = self._get_elements() File "/var/task/langchain_community/document_loaders/word_document.py", line 124, in _get_elements return partition_docx(filename=self.file_path, **self.unstructured_kwargs) File "/var/task/unstructured/documents/elements.py", line 526, in wrapper elements = func(*args, **kwargs) File "/var/task/unstructured/file_utils/filetype.py", line 619, in wrapper elements = func(*args, **kwargs) File "/var/task/unstructured/file_utils/filetype.py", line 574, in wrapper elements = func(*args, **kwargs) File "/var/task/unstructured/chunking/__init__.py", line 69, in wrapper elements = func(*args, **kwargs) File "/var/task/unstructured/partition/docx.py", line 228, in partition_docx return list(elements) File "/var/task/unstructured/partition/lang.py", line 397, in apply_lang_metadata elements = list(elements) 
File "/var/task/unstructured/partition/docx.py", line 305, in _iter_document_elements yield from self._iter_paragraph_elements(block_item) File "/var/task/unstructured/partition/docx.py", line 541, in _iter_paragraph_elements yield from self._classify_paragraph_to_element(item) File "/var/task/unstructured/partition/docx.py", line 361, in _classify_paragraph_to_element TextSubCls = self._parse_paragraph_text_for_element_type(paragraph) File "/var/task/unstructured/partition/docx.py", line 868, in _parse_paragraph_text_for_element_type if is_possible_narrative_text(text): File "/var/task/unstructured/partition/text_type.py", line 78, in is_possible_narrative_text if exceeds_cap_ratio(text, threshold=cap_threshold): File "/var/task/unstructured/partition/text_type.py", line 274, in exceeds_cap_ratio if sentence_count(text, 3) > 1: File "/var/task/unstructured/partition/text_type.py", line 223, in sentence_count sentences = sent_tokenize(text) File "/var/task/unstructured/nlp/tokenize.py", line 29, in sent_tokenize _download_nltk_package_if_not_present(package_category="tokenizers", package_name="punkt") File "/var/task/unstructured/nlp/tokenize.py", line 23, in _download_nltk_package_if_not_present nltk.download(package_name) File "/var/task/nltk/downloader.py", line 777, in download for msg in self.incr_download(info_or_id, download_dir, force): File "/var/task/nltk/downloader.py", line 642, in incr_download yield from self._download_package(info, download_dir, force) File "/var/task/nltk/downloader.py", line 699, in _download_package os.makedirs(download_dir) File "<frozen os>", line 215, in makedirs File "<frozen os>", line 225, in makedirs ### Description I am trying to load a file from S3 bucket using AWS lambda using langchain document loaders I first tried using S3FileLoader when it gave the read-only file error. 
So I tried downloading the docx file first from the S3 bucket and then used the specific document loader "UnstructuredWordDocumentLoader" as it was the word document I uploaded. but it still gave the same error. Eventually I want to load any type of document in S3 bucket and generate embeddings to store in an opensearch vector database. Also if I try to deploy my lambda with docker using image public.ecr.aws/lambda/python:3.11 I get a error "FileNotFoundError: soffice command was not found. Please install libreoffice" ### System Info python version 3.11 langchain==0.1.6 opensearch-py==2.4.2 langchain-community==0.0.19 tiktoken==0.6.0 unstructured unstructured[docx] aws_lambda_powertools==2.33.1
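One hedged workaround (the paths and import ordering are assumptions, not a documented fix): the traceback shows `unstructured` calling `nltk.download()` into the home directory, which is read-only on Lambda; only `/tmp` is writable. Pointing `NLTK_DATA` at a writable directory before the loaders are imported may avoid the `makedirs` failure:

```python
import os
import tempfile

# Create a writable nltk data directory (on Lambda this resolves under /tmp)
nltk_dir = os.path.join(tempfile.gettempdir(), "nltk_data")
os.makedirs(nltk_dir, exist_ok=True)

# Must be set BEFORE unstructured/nltk are imported, so nltk downloads here
os.environ["NLTK_DATA"] = nltk_dir

# Only now import the loaders, e.g.:
# from langchain_community.document_loaders import UnstructuredWordDocumentLoader
print(os.environ["NLTK_DATA"])
```

The separate `soffice command was not found` error is unrelated to this path issue: it means the docx fallback conversion needs LibreOffice installed in the container image.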
Langchain document loaders give read only file system error on AWS lambda while loading the document
https://api.github.com/repos/langchain-ai/langchain/issues/17936/comments
7
2024-02-22T10:59:30Z
2024-07-12T16:03:53Z
https://github.com/langchain-ai/langchain/issues/17936
2,148,793,334
17,936
[ "langchain-ai", "langchain" ]
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.

### Example Code

```python
import cx_Oracle
import openai
from sqlalchemy import create_engine
from langchain.agents.agent_types import AgentType
from langchain.sql_database import SQLDatabase
from langchain.llms import OpenAI
from langchain.chains import create_sql_query_chain
from langchain_community.agent_toolkits import create_sql_agent
from langchain_community.vectorstores import FAISS
from langchain_core.example_selectors import SemanticSimilarityExampleSelector
from langchain_core.prompts import (
    ChatPromptTemplate,
    FewShotPromptTemplate,
    MessagesPlaceholder,
    PromptTemplate,
    SystemMessagePromptTemplate,
)
from langchain_experimental.sql import SQLDatabaseChain
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

# Database credentials
username = ""
password = ""
hostname = ""
port = ""
service_name = ""

# Oracle connection string
oracle_connection_string_fmt = (
    'oracle+cx_oracle://{username}:{password}@'
    + cx_Oracle.makedsn('{hostname}', '{port}', service_name='{service_name}')
)
url = oracle_connection_string_fmt.format(
    username=username,
    password=password,
    hostname=hostname,
    port=port,
    service_name=service_name,
)

# Create SQLAlchemy engine
engine = create_engine(url, echo=False)

# Create SQLDatabase instance
db = SQLDatabase(engine)
print(db.dialect)
print(db.get_usable_table_names())
db.run("SELECT * FROM our_oracle_table")

import getpass
import os

# os.environ["OPENAI_API_KEY"] = getpass.getpass()
openai.api_type = "azure"
openai.api_base = "oururl"

# Create OpenAI Completion instance with the 'ChatOpenAI' class
llm = OpenAI(temperature=0, openai_api_key='xxxxxxxxxxxxxxxxxxx', engine='gpt-35-turbo')

# Create SQL agent
agent_executor = create_sql_agent(
    llm,
    db=db,
    verbose=True,
    handle_parsing_errors=True,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
)

# Invoke the agent with a specific input
agent_executor.invoke({
    "input": ""
})
```

### Error Message and Stack Trace (if applicable)

I have the following questions and errors:

1. Is the above code the right way of connecting LangChain to an Oracle SQL Developer database? It won't fetch the right table name, and it is not as straightforward as using SQLite.
2. If I run certain prompts it gives me this error: `DatabaseError: ORA-01805: possible error in date/time operation`
3. For certain prompts I get this error:
   ```
   ValueError: An output parsing error occurred. In order to pass this error back to the agent and have it try again, pass `handle_parsing_errors=True` to the AgentExecutor. This is the error: Could not parse LLM output: ` I should check the list of tables to see if there is a table called MINI_FACTORY Action: sql_db_list_tables, ""`
   ```
4. Please recommend the right way of connecting to the Oracle SQL Developer database.

### Description

I'm trying to query the Oracle SQL Developer DB to get answers in natural language, but I'm running into the issues mentioned above. Please help me out :)

### System Info

```
Name: langchain
Version: 0.1.8
Summary: Building applications with LLMs through composability
Home-page: https://github.com/langchain-ai/langchain
Author:
Author-email:
License: MIT
Location: C:\Users\ajm1nh3\AppData\Local\anaconda3\envs\sample\Lib\site-packages
Requires: aiohttp, dataclasses-json, jsonpatch, langchain-community, langchain-core, langsmith, numpy, pydantic, PyYAML, requests, SQLAlchemy, tenacity
Required-by: langchain-experimental
Note: you may need to restart the kernel to use updated packages.

Name: openai
Version: 0.28.0
Summary: Python client library for the OpenAI API
Home-page: https://github.com/openai/openai-python
Author: OpenAI
Author-email: support@openai.com
Required-by: langchain-openai
```
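As a side note on question 1: the snippet routes `'{hostname}'`-style placeholder strings through `cx_Oracle.makedsn` and then `str.format`, which is fragile. A small sketch (credentials here are placeholders, and the EZConnect-style URL is one common pattern, not the only one) that builds the SQLAlchemy URL directly:

```python
from urllib.parse import quote_plus

# Build the oracle+cx_oracle URL directly instead of formatting placeholders
# through cx_Oracle.makedsn after the fact.
def oracle_url(username, password, hostname, port, service_name):
    return (
        f"oracle+cx_oracle://{username}:{quote_plus(password)}"
        f"@{hostname}:{port}/?service_name={service_name}"
    )

print(oracle_url("scott", "p@ss", "dbhost", 1521, "ORCLPDB1"))
```

`quote_plus` matters here: special characters such as `@` in the password would otherwise break URL parsing.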
Integration of LangChain with Oracle SQL developer DB
https://api.github.com/repos/langchain-ai/langchain/issues/17933/comments
2
2024-02-22T10:30:32Z
2024-06-27T05:10:17Z
https://github.com/langchain-ai/langchain/issues/17933
2,148,736,658
17,933
[ "langchain-ai", "langchain" ]
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).

### Issue with current documentation:

Link: https://python.langchain.com/docs/expression_language/get_started#basic-example-prompt-model-output-parser

There is a missing import for the `1. Prompt` section. The code below throws an error, so we need to add an import.

```python
ChatPromptValue(messages=[HumanMessage(content='tell me a short joke about ice cream')])
```

```python
[HumanMessage(content='tell me a short joke about ice cream')]
```

### Idea or request for content:

Add the import `from langchain_core.prompt_values import ChatPromptValue, HumanMessage` for the `1. Prompt` section:

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

## New import below
from langchain_core.prompt_values import ChatPromptValue, HumanMessage

prompt = ChatPromptTemplate.from_template("tell me a short joke about {topic}")
model = ChatOpenAI(model="gpt-4")
output_parser = StrOutputParser()

chain = prompt | model | output_parser

chain.invoke({"topic": "ice cream"})
```
DOC: Missing Import on LCEL's 'Get Started'
https://api.github.com/repos/langchain-ai/langchain/issues/17931/comments
0
2024-02-22T10:08:04Z
2024-06-08T16:10:55Z
https://github.com/langchain-ai/langchain/issues/17931
2,148,689,238
17,931
[ "langchain-ai", "langchain" ]
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.

### Example Code

```python
embedding = OpenAIEmbeddings()
memory = ConversationBufferMemory(memory_key='chat_history', return_messages=True, output_key='answer')
llm = ChatOpenAI(model="gpt-3.5-turbo-16k", temperature=0.1)

vector_store = PGVector(
    connection_string=CONNECTION_STRING,
    collection_name=COLLECTION_NAME,
    embedding_function=embedding
)
retriever = vector_store.as_retriever(search_kwargs={"k": 3})

qa = ConversationalRetrievalChain.from_llm(
    llm=llm,
    memory=memory,
    retriever=retriever,
    combine_docs_chain_kwargs={'prompt': prompt},
    return_source_documents=True,
)
```

### Error Message and Stack Trace (if applicable)

When running, I got:

```
{'question': 'what is code of conduct policy?', 'chat_history': [HumanMessage(content='what is code of conduct policy?'), AIMessage(content="The Code of Conduct policy at HabileLabs is a set of guidelines that all team members and board members are expected to know and follow. Here are the key points of the policy:\n\n1. Purpose and Standards:\n 1.1 The Code is based on the highest standards of ethical business conduct.\n 1.2 It serves as a practical guide.........
```

but I am not able to store the previous conversation in `chat_history`.

### Description

I am not able to store the previous conversation in memory.

### System Info

I am using memory
Getting error while Implementing Memory in ConversationalRetreival Chain
https://api.github.com/repos/langchain-ai/langchain/issues/17930/comments
1
2024-02-22T09:24:51Z
2024-06-08T16:10:50Z
https://github.com/langchain-ai/langchain/issues/17930
2,148,602,240
17,930
[ "langchain-ai", "langchain" ]
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.

### Example Code

This code is just an example; I can't post the full code due to restrictions.

```python
class LLMOutputJSON(OldBaseModel):
    msg: str = OldField(description="The message to be sent to the user")
    finished_validating: bool = OldField(description="Whether the agent has finished validating the users request")
    metadata: dict = OldField(description="Metadata")


def main():
    llm = AzureChatOpenAI()
    prompt = "You are funny and tell the user Jokes"
    parser = JsonOutputParser(pydantic_object=LLMOutputJSON)
    tools = []
    memory = ConversationBufferMemory()

    agent = (
        {
            "input": lambda x: x["input"],
            "chat_history": lambda x: x["chat_history"],
            "agent_scratchpad": lambda x: format_log_to_str(x["intermediate_steps"]),
        }
        | prompt
        | llm
        | parser
    )
    agent = AgentExecutor.from_agent_and_tools(
        agent=agent,
        tools=tools,
        verbose=True,
        memory=memory,
        handle_parsing_errors=True,
    )

    agent.ainvoke({"input": "Tell me a Joke."})
```

### Error Message and Stack Trace (if applicable)

res = await agent.ainvoke( app-1 | ^^^^^^^^^^^^^^^^^^^^ app-1 | File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 217, in ainvoke app-1 | raise e app-1 | File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 208, in ainvoke app-1 | await self._acall(inputs, run_manager=run_manager) app-1 | File "/usr/local/lib/python3.11/site-packages/langchain/agents/agent.py", line 1440, in _acall app-1 | next_step_output = await self._atake_next_step( app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ app-1 | File "/usr/local/lib/python3.11/site-packages/langchain/agents/agent.py", line 1234, in _atake_next_step app-1 | [ app-1 | File
"/usr/local/lib/python3.11/site-packages/langchain/agents/agent.py", line 1234, in <listcomp> app-1 | [ app-1 | File "/usr/local/lib/python3.11/site-packages/langchain/agents/agent.py", line 1262, in _aiter_next_step app-1 | output = await self.agent.aplan( app-1 | ^^^^^^^^^^^^^^^^^^^^^^^ app-1 | File "/usr/local/lib/python3.11/site-packages/langchain/agents/agent.py", line 422, in aplan app-1 | async for chunk in self.runnable.astream( app-1 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2452, in astream app-1 | async for chunk in self.atransform(input_aiter(), config, **kwargs): app-1 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2435, in atransform app-1 | async for chunk in self._atransform_stream_with_config( app-1 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1592, in _atransform_stream_with_config app-1 | chunk: Output = await asyncio.create_task( # type: ignore[call-arg] app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ app-1 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2405, in _atransform app-1 | async for output in final_pipeline: app-1 | File "/usr/local/lib/python3.11/site-packages/langchain_core/output_parsers/transform.py", line 60, in atransform app-1 | async for chunk in self._atransform_stream_with_config( app-1 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1557, in _atransform_stream_with_config app-1 | final_input: Optional[Input] = await py_anext(input_for_tracing, None) app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ app-1 | File "/usr/local/lib/python3.11/site-packages/langchain_core/utils/aiter.py", line 62, in anext_impl app-1 | return await __anext__(iterator) app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^ app-1 | File "/usr/local/lib/python3.11/site-packages/langchain_core/utils/aiter.py", line 97, in tee_peer app-1 | item = await 
iterator.__anext__() app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^ app-1 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4176, in atransform app-1 | async for item in self.bound.atransform( app-1 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1058, in atransform app-1 | async for chunk in input: app-1 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1068, in atransform app-1 | async for output in self.astream(final, config, **kwargs): app-1 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 589, in astream app-1 | yield await self.ainvoke(input, config, **kwargs) app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ app-1 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 495, in ainvoke app-1 | return await run_in_executor(config, self.invoke, input, config, **kwargs) app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ app-1 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/config.py", line 493, in run_in_executor app-1 | return await asyncio.get_running_loop().run_in_executor( app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ app-1 | File "/usr/local/lib/python3.11/concurrent/futures/thread.py", line 58, in run app-1 | result = self.fn(*self.args, **self.kwargs) app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ app-1 | File "/usr/local/lib/python3.11/site-packages/langchain_core/prompts/base.py", line 113, in invoke app-1 | return self._call_with_config( app-1 | ^^^^^^^^^^^^^^^^^^^^^^^ app-1 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1243, in _call_with_config app-1 | context.run( app-1 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/config.py", line 326, in call_func_with_variable_args app-1 | return func(input, **kwargs) # type: ignore[call-arg] app-1 | 
^^^^^^^^^^^^^^^^^^^^^ app-1 | File "/usr/local/lib/python3.11/site-packages/langchain_core/prompts/base.py", line 98, in _format_prompt_with_error_handling app-1 | raise KeyError( app-1 | KeyError: 'Input to PromptTemplate is missing variables {\'"properties"\', \'"foo"\'}. Expected: [\'"foo"\', \'"properties"\', \'agent_scratchpad\', \'chat_history\', \'input\'] Received: [\'input\', \'chat_history\', \'agent_scratchpad\']' ### Description The Problem seems to come from within the JSONOutputParser. ```python JSON_FORMAT_INSTRUCTIONS = """ The output should be formatted as a JSON instance that conforms to the JSON schema below. As an example, for the schema {{"properties": {{"foo": {{"title": "Foo", "description": "a list of strings", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}} the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of the schema. The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted. Here is the output schema: {schema} """ ``` The foo and properties are registered as input_variables what causes the Error. Not exactly sure why that happens. ### System Info pip freeze | grep langchain langchain==0.1.8 langchain-community==0.0.21 langchain-core==0.1.25 langchain-openai==0.0.6 langchainhub==0.1.14
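The stray `'"properties"'` and `'"foo"'` variables can be reproduced with nothing but the standard library: f-string-style templates treat every single-braced span as an input variable, so any literal JSON in format instructions must be brace-escaped (which is exactly what the doubled braces in `JSON_FORMAT_INSTRUCTIONS` are for). A minimal demonstration, independent of LangChain:

```python
import string

# A single-braced JSON literal in a template registers as a "variable":
tmpl = 'Example object: {"foo"}\nQuestion: {input}'
fields = [f for _, f, _, _ in string.Formatter().parse(tmpl) if f is not None]
assert fields == ['"foo"', 'input']  # '"foo"' is treated as an input variable

# Doubling the braces makes them literal, leaving only the real variable:
escaped = tmpl.replace("{", "{{").replace("}", "}}").replace("{{input}}", "{input}")
fields = [f for _, f, _, _ in string.Formatter().parse(escaped) if f is not None]
assert fields == ["input"]
print(escaped.format(input="Tell me a Joke."))
```

This suggests the bug surfaces when the already-rendered format instructions (with single braces) get re-parsed as a template somewhere in the chain composition.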
JsonOutputParser throws KeyError for missing variables
https://api.github.com/repos/langchain-ai/langchain/issues/17929/comments
1
2024-02-22T09:12:17Z
2024-05-27T20:00:56Z
https://github.com/langchain-ai/langchain/issues/17929
2,148,578,038
17,929
[ "langchain-ai", "langchain" ]
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.

### Example Code

```python
os.environ['OPENAI_API_KEY'] = openapi_key

# Define connection parameters using constants
from urllib.parse import quote_plus

server_name = constants.server_name
database_name = constants.database_name
username = constants.username
password = constants.password
encoded_password = quote_plus(password)

connection_uri = f"mssql+pyodbc://{username}:{encoded_password}@{server_name}/{database_name}?driver=ODBC+Driver+17+for+SQL+Server"

# Create an engine to connect to the SQL database
engine = create_engine(connection_uri)

model_name = "gpt-3.5-turbo-16k"
db = SQLDatabase(engine, view_support=True, include_tables=['EGV_emp_departments_ChatGPT', 'egv_employee_attendance'])
llm = ChatOpenAI(temperature=0, verbose=False, model=model_name)  # , n=2, best_of=2)

PROMPT_SUFFIX = """Only use the following tables:
{table_info}

Previous Conversation:
{history}

Question: {input}"""

_DEFAULT_TEMPLATE = """Given an input question, first create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.
Given an input question, create a syntactically correct MSSQL query by considering only the matching column names from the question, then look at the results of the query and return the answer.
If a column name is not present, refrain from writing the SQL query. Columns like UAN number and PF number are not present, so do not consider such columns.
Write the query only for the column names which are present in the view.
Execute the query and analyze the results to formulate a response.
Return the answer in sentence form.

Use the following format:

Question: "Question here"
SQLQuery: "SQL Query to run"
SQLResult: "Result of the SQLQuery"
Answer: "Final answer here"
"""

PROMPT = PromptTemplate.from_template(_DEFAULT_TEMPLATE + PROMPT_SUFFIX)

memory = None
if memory is None:
    memory = ConversationBufferMemory()

db_chain = SQLDatabaseChain.from_llm(llm, db, prompt=PROMPT, verbose=True, memory=memory)
answer = db_chain.run(question)
print(memory.load_memory_variables({}))
return answer
```

This is my code for interacting with a SQL database in natural language; it uses LangChain's SQLDatabaseChain.

### Error Message and Stack Trace (if applicable)

_No response_

### Description

If a user asks a question like "hi", "hello", or "welcome", or simply types anything unrelated, the chain still generates and executes a query and just returns the first employee's details. Instead, when greeted, can the model return something like "hello" or "how can I help you"? In short, can the model try to understand the question and respond based on it, and if it doesn't understand the question, respond with something like "invalid question" instead of writing a query that returns the first employee's details or an irrelevant answer? Can we integrate some other functionality or model that will interact with the user when the question is not related to the database?

### System Info

python: 3.11
langchain: latest
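One way to get that behavior is to route the question before it ever reaches the SQL chain. A hedged sketch of such pre-routing — the keyword set is a deliberately naive stand-in for an LLM-based intent classifier, and the function names are ours:

```python
# Naive intent router: send greetings/small talk to a canned reply and only
# forward database-like questions to the SQL chain.
GREETINGS = {"hi", "hello", "hey", "welcome", "good morning", "good evening"}

def route(question: str) -> str:
    normalized = question.strip().lower().strip("!.?")
    return "greeting" if normalized in GREETINGS else "sql"

def answer(question: str) -> str:
    if route(question) == "greeting":
        return "Hello! How can I help you with the employee database?"
    # ...otherwise fall through to db_chain.run(question)
    return "run db_chain"

assert route("Hello!") == "greeting"
assert route("how many employees are there?") == "sql"
```

In practice the keyword check would be replaced by a cheap LLM call that classifies the question as "database-related" or not, with the same branching structure.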
in a chatbot which interact with sql db, how can it return a user friendly answer/based on user question, instead of executing query to return the 1st row of db
https://api.github.com/repos/langchain-ai/langchain/issues/17926/comments
2
2024-02-22T09:03:36Z
2024-06-08T16:10:45Z
https://github.com/langchain-ai/langchain/issues/17926
2,148,561,499
17,926
[ "langchain-ai", "langchain" ]
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).

### Issue with current documentation:

-

### Idea or request for content:

Below is the code:

```python
embeddings = OpenAIEmbeddings(
    openai_api_type="",
    openai_api_key="",
    openai_api_base="",
    deployment="text-embedding-ada-002",
    model="text-embedding-ada-002",
    chunk_size=1
)

# Create a FAISS vector store from the embeddings
vectorstore = FAISS.from_documents(documents, embeddings)
```

It returns the error below:

```
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
[<ipython-input-27-ab8130d30993>](https://localhost:8080/#) in <cell line: 2>()
      1 # Create a FAISS vector store from the embeddings
----> 2 vectorStore = FAISS.from_documents(documents, embeddings)

5 frames
[/usr/local/lib/python3.10/dist-packages/langchain_community/embeddings/openai.py](https://localhost:8080/#) in _create_retry_decorator(embeddings)
     55     ),
     56     retry=(
---> 57         retry_if_exception_type(openai.error.Timeout)
     58         | retry_if_exception_type(openai.error.APIError)
     59         | retry_if_exception_type(openai.error.APIConnectionError)

AttributeError: module 'openai' has no attribute 'error'
```

Even if I try AzureOpenAIEmbeddings with openai version > 1, it returns some issue. How can I resolve this?
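For reference, this `AttributeError` is a version-mismatch symptom: openai >= 1.0 removed the `openai.error` module that the `langchain_community` code path in the traceback expects. A tiny sketch of the distinction (the helper function is ours, not part of either library):

```python
# openai.error existed only in the pre-1.0 SDK; the retry decorator shown in
# the traceback assumes it is present.
def is_legacy_openai(version_string: str) -> bool:
    """True when the given openai version predates the 1.0 API rewrite."""
    return int(version_string.split(".")[0]) < 1

assert is_legacy_openai("0.28.0")      # the API the failing code path expects
assert not is_legacy_openai("1.12.0")  # openai.error no longer exists here
```

So the fix is to align the two: either pin openai below 1.0 for this code path, or use the newer langchain-openai integration built for the >= 1.0 SDK.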
unable to use Azure openai key for OpenAIEmbeddings(), openai version is <1.0.0
https://api.github.com/repos/langchain-ai/langchain/issues/17925/comments
0
2024-02-22T08:46:30Z
2024-06-08T16:10:40Z
https://github.com/langchain-ai/langchain/issues/17925
2,148,531,878
17,925
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code Similar code works with open ai (see https://python.langchain.com/docs/use_cases/sql/agents) and but not with Azure Open AI Below is my full code ``` import os os.environ['OPENAI_API_KEY'] = 'your key' os.environ['OPENAI_API_TYPE'] = 'azure' os.environ['OPENAI_API_VERSION'] = '2023-05-15' from langchain.sql_database import SQLDatabase db = SQLDatabase.from_uri("sqlite:///Chinook.db") from langchain_community.agent_toolkits import create_sql_agent from langchain_openai import AzureOpenAI # Updated import from langchain_community.vectorstores import FAISS from langchain_core.example_selectors import SemanticSimilarityExampleSelector from langchain_openai import AzureOpenAIEmbeddings llm = AzureOpenAI(deployment_name="GPT35Turbo",model_name="gpt-35-turbo" ,azure_endpoint="https://copilots-aoai-df.openai.azure.com/" , temperature=0, verbose=True) embeddings = AzureOpenAIEmbeddings( azure_endpoint="https://copilots-aoai-df.openai.azure.com/", azure_deployment="Ada002EmbeddingModel", openai_api_version="2023-05-15", ) examples = [ {"input": "List all artists.", "query": "SELECT * FROM Artist;"}, { "input": "Find all albums for the artist 'AC/DC'.", "query": "SELECT * FROM Album WHERE ArtistId = (SELECT ArtistId FROM Artist WHERE Name = 'AC/DC');", }, { "input": "List all tracks in the 'Rock' genre.", "query": "SELECT * FROM Track WHERE GenreId = (SELECT GenreId FROM Genre WHERE Name = 'Rock');", }, { "input": "Find the total duration of all tracks.", "query": "SELECT SUM(Milliseconds) FROM Track;", }, { "input": "List all customers from Canada.", "query": "SELECT * FROM Customer WHERE Country = 'Canada';", }, { "input": "How many tracks are there in the 
album with ID 5?", "query": "SELECT COUNT(*) FROM Track WHERE AlbumId = 5;", }, { "input": "Find the total number of invoices.", "query": "SELECT COUNT(*) FROM Invoice;", }, { "input": "List all tracks that are longer than 5 minutes.", "query": "SELECT * FROM Track WHERE Milliseconds > 300000;", }, { "input": "Who are the top 5 customers by total purchase?", "query": "SELECT CustomerId, SUM(Total) AS TotalPurchase FROM Invoice GROUP BY CustomerId ORDER BY TotalPurchase DESC LIMIT 5;", }, { "input": "Which albums are from the year 2000?", "query": "SELECT * FROM Album WHERE strftime('%Y', ReleaseDate) = '2000';", }, { "input": "How many employees are there", "query": 'SELECT COUNT(*) FROM "Employee"', }, ] example_selector = SemanticSimilarityExampleSelector.from_examples( examples, embeddings, FAISS, k=5, input_keys=["input"], ) from langchain_core.prompts import ( ChatPromptTemplate, FewShotPromptTemplate, MessagesPlaceholder, PromptTemplate, SystemMessagePromptTemplate, ) system_prefix = """You are an agent designed to interact with a SQL database. Given an input question, create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer. Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results. You can order the results by a relevant column to return the most interesting examples in the database. Never query for all the columns from a specific table, only ask for the relevant columns given the question. You have access to tools for interacting with the database. Only use the given tools. Only use the information returned by the tools to construct your final answer. You MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again. DO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database. 
If the question does not seem related to the database, just return "I don't know" as the answer. Here are some examples of user inputs and their corresponding SQL queries:""" few_shot_prompt = FewShotPromptTemplate( example_selector=example_selector, example_prompt=PromptTemplate.from_template( "User input: {input}\nSQL query: {query}" ), input_variables=["input", "dialect", "top_k"], prefix=system_prefix, suffix="" ) full_prompt = ChatPromptTemplate.from_messages( [ SystemMessagePromptTemplate(prompt=few_shot_prompt), ("human", "{input}"), MessagesPlaceholder("agent_scratchpad"), ] ) # Example formatted prompt prompt_val = full_prompt.invoke( { "input": "How many arists are there", "top_k": 5, "dialect": "SQLite", "agent_scratchpad": [], } ) print(prompt_val.to_string()) from langchain.agents.agent_types import AgentType from langchain_community.agent_toolkits import SQLDatabaseToolkit toolkit = SQLDatabaseToolkit(db=db, llm=llm) # Get list of tools tools = toolkit.get_tools() agent = create_sql_agent( llm=llm, db = db, tools = tools, prompt=full_prompt, verbose=True, agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION, ) agent.invoke({"input": "list top 10 customers by total purchase?"}) ``` ### Error Message and Stack Trace (if applicable) _No response_ ### Description raceback (most recent call last): File "C:\Projects\Personal\Langchain-DB-Example\main-sqlite-azure.py", line 142, in <module> agent = create_sql_agent( File "C:\Projects\Personal\Langchain-DB-Example\.venv\lib\site-packages\langchain_community\agent_toolkits\sql\base.py", line 182, in create_sql_agent runnable=create_react_agent(llm, tools, prompt), File "C:\Projects\Personal\Langchain-DB-Example\.venv\lib\site-packages\langchain\agents\react\agent.py", line 97, in create_react_agent raise ValueError(f"Prompt missing required variables: {missing_vars}") **ValueError: Prompt missing required variables: {'tools', 'tool_names'}** ### System Info System Information ------------------ > OS: Windows > OS 
Version: 10.0.22621 > Python Version: 3.9.0 (tags/v3.9.0:9cf6752, Oct 5 2020, 15:34:40) [MSC v.1927 64 bit (AMD64)] Package Information ------------------- > langchain_core: 0.1.25 > langchain: 0.1.8 > langchain_community: 0.0.21 > langsmith: 0.1.5 > langchain_experimental: 0.0.52 > langchain_openai: 0.0.6 Packages not installed (Not Necessarily a Problem) -------------------------------------------------- The following packages were not found: > langgraph > langserve PIP freeze -------------------------------------------------- aiohttp==3.9.3 aiosignal==1.3.1 annotated-types==0.6.0 anyio==4.3.0 async-timeout==4.0.3 attrs==23.2.0 certifi==2024.2.2 charset-normalizer==3.3.2 colorama==0.4.6 dataclasses-json==0.6.4 distro==1.9.0 exceptiongroup==1.2.0 faiss-cpu==1.7.4 frozenlist==1.4.1 greenlet==3.0.3 h11==0.14.0 httpcore==1.0.3 httpx==0.26.0 idna==3.6 jsonpatch==1.33 jsonpointer==2.4 langchain==0.1.8 langchain-community==0.0.21 langchain-core==0.1.25 langchain-experimental==0.0.52 langchain-openai==0.0.6 langsmith==0.1.5 marshmallow==3.20.2 multidict==6.0.5 mypy-extensions==1.0.0 numpy==1.26.4 openai==1.12.0 packaging==23.2 pydantic==2.6.1 pydantic_core==2.16.2 PyYAML==6.0.1 regex==2023.12.25 requests==2.31.0 sniffio==1.3.0 SQLAlchemy==2.0.27 tenacity==8.2.3 tiktoken==0.6.0 tqdm==4.66.2 typing-inspect==0.9.0 typing_extensions==4.9.0 urllib3==2.2.1 yarl==1.9.4
AzureOpenAI and SQLTools with agent -> ValueError(f"Prompt missing required variables: {missing_vars}")
https://api.github.com/repos/langchain-ai/langchain/issues/17921/comments
5
2024-02-22T07:00:16Z
2024-06-09T16:07:22Z
https://github.com/langchain-ai/langchain/issues/17921
2,148,358,385
17,921
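A minimal, dependency-free sketch of the check that raises this `ValueError`: `create_react_agent` requires the prompt template to expose `{tools}` and `{tool_names}` placeholders, so a custom prompt handed to `create_sql_agent` must include them (or have them pre-filled with `.partial`). The prompt strings below are illustrative, not the library's actual templates.

```python
from string import Formatter

def required_vars(template: str) -> set:
    """Collect the {placeholder} names used in a format-style template."""
    return {name for _, name, _, _ in Formatter().parse(template) if name}

# Illustrative templates, not the library's real ones.
bad_prompt = "Answer the question: {input}\n{agent_scratchpad}"
good_prompt = (
    "You have access to the following tools:\n{tools}\n"
    "Use one of [{tool_names}] if needed.\n"
    "Question: {input}\n{agent_scratchpad}"
)

# Reproduces the ValueError condition: the required placeholders are absent.
missing = {"tools", "tool_names"} - required_vars(bad_prompt)
```

Adding the two placeholders to the custom few-shot prompt (or partial-filling them before constructing the agent) makes `missing` empty and the agent builds normally.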
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code ``` from langchain_openai import OpenAI lm_runnable = ( OpenAI(temperature=0, openai_api_key="nothing") .configurable_alternatives( ConfigurableField(id="llm"), default_key="openai", prefix_keys=False) .configurable_fields( temperature=ConfigurableField( id="llm_temperature", name="LLM Temperature", description="The temperature of the LLM", ) ) ) ``` ### Error Message and Stack Trace (if applicable) ``` pydantic.v1.error_wrappers.ValidationError: 1 validation error for RunnableConfigurableAlternatives prefix_keys field required (type=value_error.missing) ``` ### Description I'm trying to add configure_fields to the existing configure_alternatives instance but I got the below exception. ### System Info ``` langchain==0.1.8 langchain-community==0.0.21 langchain-core==0.1.25 langchain-openai==0.0.6 ```
Invoking configurable_fields method on RunnableConfigurableAlternatives instance throws prefix_keys field required error
https://api.github.com/repos/langchain-ai/langchain/issues/17915/comments
0
2024-02-22T05:40:36Z
2024-02-26T18:27:08Z
https://github.com/langchain-ai/langchain/issues/17915
2,148,244,376
17,915
[ "langchain-ai", "langchain" ]
### Checklist - [X] I added a very descriptive title to this issue. - [X] I included a link to the documentation page I am referring to (if applicable). ### Issue with current documentation: Now I find the [API documentation](https://api.python.langchain.com/en/latest/langchain_api_reference.html) is only provided in the format of web page, which is unstructured. This makes it difficult for me to use programs to batch process these data. ### Idea or request for content: I hope the developers of LangChain can provide a structured format of API documentation, such as JSON and XML. It will be convenient for those people who want to batch process the data in the API documentation.
DOC: Is it possible to provide a structured format (like JSON) of API documentation?
https://api.github.com/repos/langchain-ai/langchain/issues/17908/comments
0
2024-02-22T03:03:37Z
2024-06-08T16:10:30Z
https://github.com/langchain-ai/langchain/issues/17908
2,148,087,220
17,908
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code https://python.langchain.com/docs/integrations/llms/vllm ### Error Message and Stack Trace (if applicable) _No response_ ### Description I am trying to use langchain VLLM for mistral v2. for that I went to this documentation and tried to run first sample code. from langchain_community.llms import VLLM llm = VLLM( model="mistralai/Mistral-7B-Instruct-v0.2", trust_remote_code=True, # mandatory for hf models max_new_tokens=128, top_k=10, top_p=0.95, temperature=0.8, ) print(llm.invoke("What is the capital of France ?")) however I ended up getting error - The model’s max sew len (32768) is larger than the maximum number of tokens that can be stored in KV cache (18896). Try increasing ‘gpu_memory_utilization’ or decreasing ‘max_model_len’ when initializing the engine. I tried to update VLLM constructor code like this- from langchain_community.llms import VLLM llm = VLLM( model="mosaicml/mpt-7b", trust_remote_code=True, # mandatory for hf models max_new_tokens=128, top_k=10, top_p=0.95, temperature=0.8, max_model_len=4096, ) print(llm.invoke("What is the capital of France ?")) This change also it didn’t pick up. when I look at log I can see LLM engine gets initialized with config - max_seq_len=32768 I also tried to add model_kwargs={“max_model_len”:4096} None of the changes made to VLLM constructor are it getting picked up. This seems to be bug. ### System Info Langchain==0.1.8 python version 3.10.12 platform ubuntu22
VLLM not able to set max_model_len
https://api.github.com/repos/langchain-ai/langchain/issues/17906/comments
2
2024-02-22T02:50:34Z
2024-06-17T16:09:26Z
https://github.com/langchain-ai/langchain/issues/17906
2,148,076,026
17,906
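A hedged sketch of why the option is silently dropped (`FakeVLLM` is a stand-in, not the real class): pydantic-style wrappers ignore keyword arguments they don't declare, so engine options such as `max_model_len` generally need an explicit pass-through dict. The `langchain_community` VLLM wrapper exposes one named `vllm_kwargs`, assuming the installed version has that field.

```python
# FakeVLLM is a stand-in: pydantic-style wrappers drop keyword arguments
# they don't declare, so engine options need the explicit pass-through dict.
class FakeVLLM:
    def __init__(self, model, vllm_kwargs=None, **ignored):
        self.engine_args = {"model": model, **(vllm_kwargs or {})}

# Passed as a top-level kwarg, max_model_len never reaches the engine...
dropped = FakeVLLM(model="mistralai/Mistral-7B-Instruct-v0.2",
                   max_model_len=4096)

# ...but it survives inside the pass-through dict.
forwarded = FakeVLLM(model="mistralai/Mistral-7B-Instruct-v0.2",
                     vllm_kwargs={"max_model_len": 4096})
```

With the real class, the equivalent would be `VLLM(model=..., vllm_kwargs={"max_model_len": 4096})` rather than a top-level `max_model_len=` argument.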
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code ```python from langchain_community.document_loaders import JSONLoader ``` ### Error Message and Stack Trace (if applicable) When I wanted to use 'https://python.langchain.com/docs/integrations/retrievers/bm25', as in the snippet `from langchain.retrievers import BM25Retriever`, an error was thrown hinting the user to run pip install -U langchain_community. I did so as mentioned. But I found that JSONLoader could not be loaded, and the error message was very short. I had to dive into the code and found that after langchain_community was updated, its loader supports the C language, while the LANGUAGE_EXTENSIONS dict in the older langchain does not contain 'c': ```python LANGUAGE_EXTENSIONS: Dict[str, str] = { "py": Language.PYTHON, "js": Language.JS, "cobol": Language.COBOL, "cpp": Language.CPP, "cs": Language.CSHARP, "rb": Language.RUBY, "scala": Language.SCALA, "rs": Language.RUST, "go": Language.GO, "kt": Language.KOTLIN, "lua": Language.LUA, "pl": Language.PERL, "ts": Language.TS, "java": Language.JAVA, } ``` so the issue in the title appears. After updating langchain, the issue was solved. ### Description I have two suggestions: 1. Consider updating the outdated BM25 document. 2. The issue caused by inconsistent versions should not be surfaced so tersely. Consider providing a more specific error message or including the appropriate version of LangChain in the requirements of langchain_community. ### System Info The issue appears with langchain 0.1.7 and community 0.0.21, while 0.1.8 + 0.0.21 is OK
AttributeError: C
https://api.github.com/repos/langchain-ai/langchain/issues/17905/comments
5
2024-02-22T02:31:44Z
2024-03-12T09:17:05Z
https://github.com/langchain-ai/langchain/issues/17905
2,148,059,381
17,905
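The opaque `AttributeError: C` boils down to a failed lookup of the `'c'` key in the older mapping. A small sketch of the friendlier lookup the reporter is asking for (the trimmed mapping and helper name are hypothetical, not the library's API):

```python
# Trimmed stand-in for the real LANGUAGE_EXTENSIONS mapping.
LANGUAGE_EXTENSIONS = {"py": "python", "js": "js", "cpp": "cpp"}

def lookup_language(ext: str) -> str:
    """Fail loudly with context instead of a bare attribute/key error."""
    try:
        return LANGUAGE_EXTENSIONS[ext]
    except KeyError:
        raise ValueError(
            f"Unsupported extension {ext!r}; known: {sorted(LANGUAGE_EXTENSIONS)}"
        ) from None

error_message = ""
try:
    lookup_language("c")  # missing in the old mapping, as in the report
except ValueError as exc:
    error_message = str(exc)
```

An error of this shape would have pointed straight at the version mismatch instead of requiring a dive into the source.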
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code ```python from langchain.embeddings import OpenAIEmbeddings from langchain.vectorstores import FAISS import os import openai openai.api_key = os.getenv("OPENAI_API_KEY") embeddings = OpenAIEmbeddings(model="text-embedding-3-small") # embeddings = OpenAIEmbeddings(model="text-embedding-ada-002") db = FAISS.from_documents(split_document, embeddings) # db.save_local("qa_faiss_index") db.save_local("qa_faiss_test_index") ``` ### Error Message and Stack Trace (if applicable) Warning: model not found. Using cl100k_base encoding. ### Description I want to use the new embedding model, but after running, the result is: Warning: model not found. Using cl100k_base encoding. I have upgraded langchain, but the problem still occurs. So, I do not know how to address this issue ### System Info platform(windows) python version: 3.11.5(anaconda)
How to use openai new embedding model such as : text-embedding-3-small?
https://api.github.com/repos/langchain-ai/langchain/issues/17903/comments
7
2024-02-22T02:03:03Z
2024-06-08T16:10:25Z
https://github.com/langchain-ai/langchain/issues/17903
2,148,032,480
17,903
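The warning is emitted when tiktoken's model-to-encoding table predates the text-embedding-3 models and the code falls back to `cl100k_base` — which is in fact the encoding those models use, so the warning is cosmetic and upgrading tiktoken should silence it. A stand-in sketch of that lookup (the table and function name are illustrative, not tiktoken's internals):

```python
import warnings

# Trimmed stand-in table; older tiktoken releases simply don't list the
# text-embedding-3 models yet.
MODEL_TO_ENCODING = {"text-embedding-ada-002": "cl100k_base"}

def encoding_for_model(model: str) -> str:
    """Return the registered encoding, falling back to cl100k_base."""
    enc = MODEL_TO_ENCODING.get(model)
    if enc is None:
        warnings.warn("model not found. Using cl100k_base encoding.")
        return "cl100k_base"
    return enc

enc = encoding_for_model("text-embedding-3-small")
```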
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code ```python memory = ConversationSummaryBufferMemory( max_token_limit=os.getenv("MEMORY_MAX_TOKEN_LIMIT"), memory_key="history", llm=ChatOpenAI(model=os.getenv("GPT_3")), ) ``` ### Error Message and Stack Trace (if applicable) _No response_ ### Description **Overview**: The ConversationSummaryBufferMemory component is designed to manage chat memory efficiently within a defined maximum token limit. It is expected to monitor the buffer (chat memory messages) and summarize the buffer content into a `moving_summary_buffer` once the token limit is reached. However, an issue has been identified where the token count of the `moving_summary_buffer` content keeps increasing over time, leading to the overall token limit being exceeded. **Expected Behavior**: - The ConversationSummaryBufferMemory should ensure that the combined token count of chat memory messages and the `moving_summary_buffer` does not exceed the defined maximum token limit. - Upon reaching the token limit, the buffer should be summarized effectively, with the `moving_summary_buffer` maintaining or reducing its token count to comply with the token limitations. **Current Behavior**: - The `moving_summary_buffer`'s token count increases over time, even after summarization processes. - This increase contributes to the total token count exceeding the maximum limit, potentially impacting performance and functionality. **Steps to Reproduce**: 1.
Engage in a conversation that progressively fills the chat memory buffer close to the maximum token limit. 2. Observe the behavior as the system attempts to summarize the buffer into the `moving_summary_buffer`. 3. Monitor the token count of the `moving_summary_buffer` and the overall token usage over time. 4. Note instances where the total token count exceeds the predefined maximum limit. **Impact**: This issue can lead to degraded performance with potential implications for conversation continuity and user experience. It challenges the system's ability to manage long conversations efficiently and could lead to errors or limitations in processing further input. ### System Info System Information ------------------ > OS: Darwin > OS Version: Darwin Kernel Version 23.4.0: Wed Feb 7 23:21:07 PST 2024; root:xnu-10063.100.637.501.2~2/RELEASE_ARM64_T8112 > Python Version: 3.11.4 (main, Feb 2 2024, 14:55:45) [Clang 15.0.0 (clang-1500.1.0.2.5)] Package Information ------------------- > langchain_core: 0.1.23 > langchain: 0.1.6 > langchain_community: 0.0.20 > langsmith: 0.0.87 > langchain_openai: 0.0.5 Packages not installed (Not Necessarily a Problem) -------------------------------------------------- The following packages were not found: > langgraph > langserve
Issue with ConversationSummaryBufferMemory: Token Limit Exceeded by moving_summary_buffer(system message)
https://api.github.com/repos/langchain-ai/langchain/issues/17888/comments
0
2024-02-21T19:09:33Z
2024-05-31T23:58:44Z
https://github.com/langchain-ai/langchain/issues/17888
2,147,496,777
17,888
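Aside from the reported summary-growth behavior, the snippet above passes `os.getenv` results straight into `max_token_limit`, which expects an `int` (`os.getenv` returns a string or `None`). A minimal coercion sketch; the env-var name comes from the report, the `"1000"` default is an arbitrary assumption:

```python
import os

# os.getenv always returns str (or None); max_token_limit expects an int,
# so coerce explicitly with a default before building the memory object.
os.environ.setdefault("MEMORY_MAX_TOKEN_LIMIT", "1000")
max_token_limit = int(os.getenv("MEMORY_MAX_TOKEN_LIMIT", "1000"))
```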
[ "langchain-ai", "langchain" ]
### Privileged issue - [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here. ### Issue Content As well as some of the blob loading semantics
BaseLoader should be in core
https://api.github.com/repos/langchain-ai/langchain/issues/17883/comments
0
2024-02-21T17:47:48Z
2024-05-31T23:56:12Z
https://github.com/langchain-ai/langchain/issues/17883
2,147,350,240
17,883
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code ```python from fastapi import FastAPI from langchain_openai import ChatOpenAI from langchain.schema.runnable import ( ConfigurableField, Runnable, RunnableBranch, RunnableLambda, RunnableMap, ) from langchain_community.chat_message_histories import SQLChatMessageHistory from langchain_core import __version__ from langchain_community.vectorstores import Milvus from langserve import add_routes as langserve_add_routes from dotenv import load_dotenv from operator import itemgetter from file_manager import add_routes as file_manager_add_routes from fastapi import FastAPI from langchain.schema.output_parser import StrOutputParser from langchain_core.runnables.history import RunnableWithMessageHistory from langchain_core.messages import get_buffer_string from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder from prompts import CONDENSE_QUESTION_PROMPT, ANSWER_PROMPT, DEFAULT_DOCUMENT_PROMPT, RESPONSE_TEMPLATE from langchain_community.retrievers.llama_index import LlamaIndexRetriever from llama_index.core.indices.vector_store.base import VectorStoreIndex from llama_index.vector_stores.milvus import MilvusVectorStore from langchain.text_splitter import RecursiveCharacterTextSplitter from langchain.retrievers.document_compressors import ( DocumentCompressorPipeline, EmbeddingsFilter, ) from datetime import datetime from typing import Sequence from langchain.schema.document import Document from llama_index.core.vector_stores.types import VectorStore from langchain.retrievers import ContextualCompressionRetriever from langchain_openai import OpenAIEmbeddings load_dotenv() def get_chat_history(session_id: str) -> str: """Get the chat history for the 
session.""" history = SQLChatMessageHistory( session_id=session_id, connection_string="postgresql://rag:rag_pass@127.0.0.1:5432/rag" ) return history def format_docs(docs: Sequence[Document]) -> str: formatted_docs = [] for i, doc in enumerate(docs): doc_string = f"<doc id='{i}'>{doc.page_content}</doc>" formatted_docs.append(doc_string) return "\n".join(formatted_docs) def _get_retriever(vector_store: VectorStore): embeddings = OpenAIEmbeddings() splitter = RecursiveCharacterTextSplitter(chunk_size=800, chunk_overlap=20) relevance_filter = EmbeddingsFilter(embeddings=embeddings, similarity_threshold=0.8) pipeline_compressor = DocumentCompressorPipeline( transformers=[splitter, relevance_filter] ) base_retriever = LlamaIndexRetriever(index=VectorStoreIndex.from_vector_store(vector_store).as_query_engine()) retriever = ContextualCompressionRetriever( base_retriever=base_retriever, base_compressor=pipeline_compressor ) return retriever.with_config(run_name="SourceRetriever") def create_retriever_chain( llm, retriever ) -> Runnable: condense_question_chain = ( CONDENSE_QUESTION_PROMPT | llm | StrOutputParser() ).with_config( run_name="CondenseQuestion", ) conversation_chain = condense_question_chain | retriever return RunnableBranch( ( RunnableLambda(lambda x: get_buffer_string(x["chat_history"])).with_config( run_name="HasChatHistoryCheck" ), conversation_chain.with_config(run_name="RetrievalChainWithHistory"), ), ( RunnableLambda(itemgetter("question")).with_config( run_name="Itemgetter:question" ) | retriever ).with_config(run_name="RetrievalChainWithNoHistory"), ).with_config(run_name="RouteDependingOnChatHistory") def _build_chain() -> Runnable: vector_store = MilvusVectorStore(dim=1536) retriever = _get_retriever(vector_store) llm = ChatOpenAI(model="gpt-3.5-turbo").configurable_alternatives( ConfigurableField(id="llm"), default_key="gpt-3.5-turbo", gpt_4_turbo_preview=ChatOpenAI(model="gpt-4-turbo-preview") ) retriever_chain = create_retriever_chain(llm, 
retriever) | RunnableLambda( format_docs ).with_config(run_name="FormatDocumentChunks") _context = RunnableMap( { "context": retriever_chain.with_config(run_name="RetrievalChain"), "question": RunnableLambda(itemgetter("question")).with_config( run_name="Itemgetter:question" ), "chat_history": RunnableLambda(itemgetter("chat_history")).with_config( run_name="Itemgetter:chat_history" ), } ) prompt = ChatPromptTemplate.from_messages( [ ("system", RESPONSE_TEMPLATE), MessagesPlaceholder(variable_name="chat_history"), ("human", "{question}"), ] ).partial(current_date=datetime.now().isoformat()) response_synthesizer = (prompt | llm | StrOutputParser()).with_config( run_name="GenerateResponse", ) return ( { "question": RunnableLambda(itemgetter("question")).with_config( run_name="Itemgetter:question" ), "chat_history": RunnableLambda(itemgetter("chat_history")).with_config( run_name="SerializeHistory" ), } | _context | response_synthesizer ) chain = _build_chain() chain_with_history = RunnableWithMessageHistory( chain, get_chat_history, input_messages_key="question", history_messages_key="chat_history", ) def add_routes(app: FastAPI) -> None: """Add routes to the FastAPI app.""" langserve_add_routes( app, chain_with_history, # disabled_endpoints=["playground", "batch"], ) file_manager_add_routes(app) ``` ### Error Message and Stack Trace (if applicable) RuntimeError: super(): __class__ cell not found The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/usr/local/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 426, in run_asgi result = await app( # type: ignore[func-returns-value] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__ return await self.app(scope, receive, send) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/fastapi/applications.py", line 
1054, in __call__ await super().__call__(scope, receive, send) File "/usr/local/lib/python3.11/site-packages/starlette/applications.py", line 123, in __call__ await self.middleware_stack(scope, receive, send) File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 186, in __call__ raise exc File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 164, in __call__ await self.app(scope, receive, _send) File "/usr/local/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 62, in __call__ await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send) File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app raise exc File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app await app(scope, receive, sender) File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 758, in __call__ await self.middleware_stack(scope, receive, send) File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 778, in app await route.handle(scope, receive, send) File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 299, in handle await self.app(scope, receive, send) File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 79, in app await wrap_app_handling_exceptions(app, request)(scope, receive, send) File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app raise exc File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app await app(scope, receive, sender) File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 77, in app await response(scope, receive, send) File "/usr/local/lib/python3.11/site-packages/sse_starlette/sse.py", line 255, in __call__ async with anyio.create_task_group() as task_group: File 
"/usr/local/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 597, in __aexit__ raise exceptions[0] File "/usr/local/lib/python3.11/site-packages/sse_starlette/sse.py", line 258, in wrap await func() File "/usr/local/lib/python3.11/site-packages/sse_starlette/sse.py", line 245, in stream_response async for data in self.body_iterator: File "/usr/local/lib/python3.11/site-packages/langserve/api_handler.py", line 1085, in _stream_log "data": self._serializer.dumps(data).decode("utf-8"), ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/langserve/serialization.py", line 168, in dumps return orjson.dumps(obj, default=default) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ TypeError: Type is not JSON serializable: numpy.float64 ### Description I'm trying to run langserve api with RAG and Runnable with message history but i'm facing this bug ### System Info System Information ------------------ > OS: Darwin > OS Version: Darwin Kernel Version 23.1.0: Mon Oct 9 21:27:27 PDT 2023; root:xnu-10002.41.9~6/RELEASE_X86_64 > Python Version: 3.11.6 (main, Oct 2 2023, 13:45:54) [Clang 15.0.0 (clang-1500.0.40.1)] Package Information ------------------- > langchain_core: 0.1.24 > langchain: 0.1.8 > langchain_community: 0.0.21 > langsmith: 0.1.3 > langchain_cli: 0.0.21 > langchain_experimental: 0.0.50 > langchain_openai: 0.0.5 > langchainhub: 0.1.14 > langserve: 0.0.41 Packages not installed (Not Necessarily a Problem) -------------------------------------------------- The following packages were not found: > langgraph
TypeError: Type is not JSON serializable: numpy.float64
https://api.github.com/repos/langchain-ai/langchain/issues/17875/comments
5
2024-02-21T15:20:41Z
2024-02-21T15:45:53Z
https://github.com/langchain-ai/langchain/issues/17875
2,147,022,028
17,875
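Until the serializer handles numpy scalars, one workaround is to coerce them to native Python floats before anything reaches orjson. A sketch under the assumption that the offending `numpy.float64` values live in nested dicts/lists (e.g. document metadata); `FakeFloat64` stands in for `numpy.float64`, which is likewise a `float` subclass:

```python
def to_native(obj):
    """Recursively coerce float-like scalars in dicts/lists to plain float."""
    if isinstance(obj, dict):
        return {k: to_native(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return [to_native(v) for v in obj]
    if isinstance(obj, float):  # numpy.float64 subclasses float, so this matches
        return float(obj)
    return obj

class FakeFloat64(float):
    """Stand-in for numpy.float64 in this dependency-free sketch."""

meta = {"score": FakeFloat64(0.83), "chunks": [FakeFloat64(1.0)]}
clean = to_native(meta)
```

Running retriever output through such a converter (for example in a `RunnableLambda` before serialization) sidesteps the `TypeError` without touching langserve.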
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code ``` python from langchain.chains.combine_documents.stuff import StuffDocumentsChain from langchain.chains import MapReduceDocumentsChain, ReduceDocumentsChain text_splitter = RecursiveCharacterTextSplitterRegroup( # custom RecursiveCharacterTextSplitter model_name = self.name, chunk_size = max_input_tokens, # 2048 chunk_overlap = 0, length_function = text_tokenized_length, separators = ["\nFile name:", "\n\n", "\n"]#PATCH_SEPARATORS ) map_chain = LLMChain(llm=self.llm, prompt=self._prompt("patch", "explain")) reduce_chain = LLMChain(llm=self.llm, prompt=self._prompt("patch", "summarize")) combine_documents_chain = StuffDocumentsChain(llm_chain=reduce_chain, document_variable_name="patch_explain") reduce_documents_chain = ReduceDocumentsChain(combine_documents_chain=combine_documents_chain, collapse_documents_chain=combine_documents_chain, token_max=max_input_tokens, # 2048 collapse_max_retries=1 ) map_reduce_chain = MapReduceDocumentsChain(llm_chain=map_chain, reduce_documents_chain=reduce_documents_chain, document_variable_name="patch", return_intermediate_steps=False, ) texts = text_splitter.split_text(text) input_dict = { "input_documents": [Document( page_content = text ) for text in texts], "text_name": text_name, } map_reduce_chain.invoke(input=input_dict) ``` ### Error Message and Stack Trace (if applicable) Token indices sequence length is longer than the specified maximum sequence length for this model (6589 > 2048). 
Running this sequence through the model will result in indexing errors ### Description I'm working with the **MapReduceDocumentsChain** class, following the instructions in the LangChain tutorial on [Summarization](https://python.langchain.com/docs/use_cases/summarization). According to what I've gathered from the documentation, the output documents produced by the ReduceDocumentChain are restricted to not exceed a maximum token length, referred to as **token_max**. Searching online, I've come across others facing the same issue, yet no solutions provided seem adequate. I'm eager for any assistance you can offer. ### System Info Platform: macOS 14.2.1 (23C71) Python version: 3.11 langchain==0.1.8 langchain-community==0.0.21 langchain-core==0.1.2
ReduceDocumentsChain: token_max has no effect on chunk length
https://api.github.com/repos/langchain-ai/langchain/issues/17869/comments
3
2024-02-21T12:15:36Z
2024-07-13T16:05:31Z
https://github.com/langchain-ai/langchain/issues/17869
2,146,609,976
17,869
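One likely source of the confusion: `token_max` bounds how *input* documents are grouped per combine call, not how many tokens the model may emit (generation length is governed by settings like `max_new_tokens`). A toy sketch of budget-based grouping, with whitespace word count standing in for a real tokenizer:

```python
def split_by_budget(docs, length_fn, token_max):
    """Group docs so each group's total length stays within token_max."""
    groups, current, used = [], [], 0
    for doc in docs:
        n = length_fn(doc)
        if current and used + n > token_max:
            groups.append(current)
            current, used = [], 0
        current.append(doc)
        used += n
    if current:
        groups.append(current)
    return groups

docs = ["alpha " * 30, "bravo " * 30, "charlie " * 30]  # ~30 "tokens" each
groups = split_by_budget(docs, lambda d: len(d.split()), token_max=70)
```

Note a single document longer than `token_max` still forms its own over-budget group, which may be what produced the 6589 > 2048 warning here.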
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code I think there's an issue with `save_context` and `asave_context` after introducing this [change](https://github.com/langchain-ai/langchain/pull/16728). Here's the minimal example: ```python from langchain.agents import AgentExecutor, create_react_agent from langchain import hub import os from langchain_openai import ChatOpenAI from dotenv import load_dotenv from langchain.tools import Tool from langchain.chains.conversation.memory import ConversationBufferWindowMemory load_dotenv() llm = ChatOpenAI( openai_api_key=os.getenv("OPENAI_API_KEY"), model="gpt-3.5-turbo", ) tools = [ Tool.from_function( name="General Chat", description="For general chat not covered by other tools", func=llm.invoke, return_direct=True ) ] memory = ConversationBufferWindowMemory( memory_key='chat_history', k=5, return_messages=True, ) agent_prompt = hub.pull("hwchase17/react-chat") agent = create_react_agent(llm, tools, agent_prompt) agent_executor = AgentExecutor( agent=agent, tools=tools, memory=memory, verbose=True ) def generate_response(prompt): """ Create a handler that calls the Conversational agent and returns a response to be rendered in the UI """ response = agent_executor.invoke({"input": prompt}) return response['output'] generate_response(input("Write: ")) ``` ### Error Message and Stack Trace (if applicable) ``` > Entering new AgentExecutor chain... Thought: Do I need to use a tool? Yes Action: General Chat Action Input: help me writecontent='Sure, I can help you write. What would you like to write about?' 
Traceback (most recent call last): File "D:\Development\WW\langchain-minimal\demo.py", line 53, in <module> generate_response(input("Write: ")) File "D:\Development\WW\langchain-minimal\demo.py", line 49, in generate_response response = agent_executor.invoke({"input": prompt}) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Development\WW\langchain-minimal\venv\Lib\site-packages\langchain\chains\base.py", line 168, in invoke raise e File "D:\Development\WW\langchain-minimal\venv\Lib\site-packages\langchain\chains\base.py", line 163, in invoke final_outputs: Dict[str, Any] = self.prep_outputs( ^^^^^^^^^^^^^^^^^^ File "D:\Development\WW\langchain-minimal\venv\Lib\site-packages\langchain\chains\base.py", line 460, in prep_outputs self.memory.save_context(inputs, outputs) File "D:\Development\WW\langchain-minimal\venv\Lib\site-packages\langchain\memory\chat_memory.py", line 40, in save_context [HumanMessage(content=input_str), AIMessage(content=output_str)] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Development\WW\langchain-minimal\venv\Lib\site-packages\langchain_core\messages\base.py", line 37, in __init__ return super().__init__(content=content, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Development\WW\langchain-minimal\venv\Lib\site-packages\langchain_core\load\serializable.py", line 120, in __init__ super().__init__(**kwargs) File "D:\Development\WW\langchain-minimal\venv\Lib\site-packages\pydantic\v1\main.py", line 341, in __init__ raise validation_error pydantic.v1.error_wrappers.ValidationError: 2 validation errors for AIMessage content str type expected (type=type_error.str) content value is not a valid list (type=type_error.list) ``` ### Description [See related line](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/memory/chat_memory.py#L40) I am encountering this error when creating reAct agents. 
I looked into the possible cause and I think it has something to do with `save_context` and `asave_context` functions: ```python def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None: """Save context from this conversation to buffer.""" input_str, output_str = self._get_input_output(inputs, outputs) self.chat_memory.add_messages( [HumanMessage(content=input_str), AIMessage(content=output_str)] ) ``` It happens when `output_str` is already of type `AIMessage`. The error message also suggests that validation fails because `AIMessage(content=output_str)` is not `str`. Removing the cast `AIMessage` seems to resolve the issue. But I think we really need to address the pydantic error and make the validation more flexible in this part (like `Any` instead of `str`? Or Union of types). ### System Info langchain==0.1.8 platform: windows python version: 3.11.4
The functions `save_context` and `asave_context` are experiencing problems when trying to store output messages, specifically when `output_str` is of the `AIMessage` type
https://api.github.com/repos/langchain-ai/langchain/issues/17867/comments
5
2024-02-21T11:43:51Z
2024-02-23T09:14:24Z
https://github.com/langchain-ai/langchain/issues/17867
2,146,545,271
17,867
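Until the validation is relaxed, a workaround is to make the `return_direct` tool emit a plain string by unwrapping `.content`, instead of handing `llm.invoke` (which returns an `AIMessage`) directly to `Tool.from_function`. A sketch with stand-in classes so it runs without langchain installed:

```python
class FakeAIMessage:
    """Stand-in for langchain's AIMessage."""
    def __init__(self, content):
        self.content = content

class FakeLLM:
    """Stand-in chat model whose invoke() returns a message object."""
    def invoke(self, prompt):
        return FakeAIMessage(f"echo: {prompt}")

llm = FakeLLM()
# Unwrap .content so the tool hands a str back to the memory layer,
# instead of passing llm.invoke itself as the tool function.
tool_func = lambda text: llm.invoke(text).content
out = tool_func("hi")
```

With the real classes, that means `Tool.from_function(func=lambda x: llm.invoke(x).content, ...)` in place of `func=llm.invoke`.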
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code `from langchain.agents.agent_types import AgentType` ### Error Message and Stack Trace (if applicable) TypeError: type 'Result' is not subscriptable ### Description TypeError: type 'Result' is not subscriptable ----> 6 from langchain.agents.agent_types import AgentType 7 from langchain_experimental.agents.agent_toolkits import create_pandas_dataframe_agent 8 #from langchain_openai import ChatOpenAI File ~/anaconda3/lib/python3.11/site-packages/langchain/agents/__init__.py:34 31 from pathlib import Path 32 from typing import Any ---> 34 from langchain_community.agent_toolkits import ( 35 create_json_agent, 36 create_openapi_agent, 37 create_pbi_agent, 38 create_pbi_chat_agent, 39 create_spark_sql_agent, 40 create_sql_agent, 41 ) 42 from langchain_core._api.path import as_import_path 44 from langchain.agents.agent import ( 45 Agent, 46 AgentExecutor, (...) 
50 LLMSingleActionAgent, 51 ) File ~/anaconda3/lib/python3.11/site-packages/langchain_community/agent_toolkits/__init__.py:46 44 from langchain_community.agent_toolkits.spark_sql.base import create_spark_sql_agent 45 from langchain_community.agent_toolkits.spark_sql.toolkit import SparkSQLToolkit ---> 46 from langchain_community.agent_toolkits.sql.base import create_sql_agent 47 from langchain_community.agent_toolkits.sql.toolkit import SQLDatabaseToolkit 48 from langchain_community.agent_toolkits.steam.toolkit import SteamToolkit File ~/anaconda3/lib/python3.11/site-packages/langchain_community/agent_toolkits/sql/base.py:29 19 from langchain_core.prompts.chat import ( 20 ChatPromptTemplate, 21 HumanMessagePromptTemplate, 22 MessagesPlaceholder, 23 ) 25 from langchain_community.agent_toolkits.sql.prompt import ( 26 SQL_FUNCTIONS_SUFFIX, 27 SQL_PREFIX, 28 ) ---> 29 from langchain_community.agent_toolkits.sql.toolkit import SQLDatabaseToolkit 30 from langchain_community.tools.sql_database.tool import ( 31 InfoSQLDatabaseTool, 32 ListSQLDatabaseTool, 33 ) 35 if TYPE_CHECKING: File ~/anaconda3/lib/python3.11/site-packages/langchain_community/agent_toolkits/sql/toolkit.py:9 7 from langchain_community.agent_toolkits.base import BaseToolkit 8 from langchain_community.tools import BaseTool ----> 9 from langchain_community.tools.sql_database.tool import ( 10 InfoSQLDatabaseTool, 11 ListSQLDatabaseTool, 12 QuerySQLCheckerTool, 13 QuerySQLDataBaseTool, 14 ) 15 from langchain_community.utilities.sql_database import SQLDatabase 18 class SQLDatabaseToolkit(BaseToolkit): File ~/anaconda3/lib/python3.11/site-packages/langchain_community/tools/sql_database/tool.py:33 29 class _QuerySQLDataBaseToolInput(BaseModel): 30 query: str = Field(..., description="A detailed and correct SQL query.") ---> 33 class QuerySQLDataBaseTool(BaseSQLDatabaseTool, BaseTool): 34 """Tool for querying a SQL database.""" 36 name: str = "sql_db_query" File 
~/anaconda3/lib/python3.11/site-packages/langchain_community/tools/sql_database/tool.py:47, in QuerySQLDataBaseTool() 36 name: str = "sql_db_query" 37 description: str = """ 38 Execute a SQL query against the database and get back the result.. 39 If the query is not correct, an error message will be returned. 40 If an error is returned, rewrite the query, check the query, and try again. 41 """ 43 def _run( 44 self, 45 query: str, 46 run_manager: Optional[CallbackManagerForToolRun] = None, ---> 47 ) -> Union[str, Sequence[Dict[str, Any]], Result[Any]]: 48 """Execute the query, return the results or an error message.""" 49 return self.db.run_no_throw(query) TypeError: type 'Result' is not subscriptable ### System Info System Information ------------------ > OS: Darwin > OS Version: Darwin Kernel Version 23.3.0: Wed Dec 20 21:30:44 PST 2023; root:xnu-10002.81.5~7/RELEASE_ARM64_T6000 > Python Version: 3.11.5 (main, Sep 11 2023, 08:31:25) [Clang 14.0.6 ] Package Information ------------------- > langchain_core: 0.1.25 > langchain: 0.1.8 > langchain_community: 0.0.21 > langsmith: 0.1.5 > langchain_experimental: 0.0.52 Packages not installed (Not Necessarily a Problem) -------------------------------------------------- The following packages were not found: > langgraph > langserve
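The traceback fails on the annotation `Result[Any]`. Subscripting like this only works when a class is generic: SQLAlchemy 2.x made `Result` subscriptable, while on older 1.4 installs it is a plain class, which is consistent with this TypeError (worth verifying against your installed `sqlalchemy` version). A dependency-free illustration of the mechanism, with stand-in class names rather than SQLAlchemy's real ones:

```python
# Subscripting a non-generic class raises TypeError at import time,
# matching the traceback above. Class names here are stand-ins.
from typing import Any, Generic, TypeVar

T = TypeVar("T")

class PlainResult:                # stands in for a non-subscriptable Result
    pass

class GenericResult(Generic[T]):  # stands in for a subscriptable Result
    pass

print(GenericResult[Any])         # fine: generic classes support [...]
try:
    PlainResult[Any]              # mirrors the error in the traceback
except TypeError as exc:
    print("TypeError:", exc)
```

If your environment has SQLAlchemy 1.4, upgrading SQLAlchemy (or aligning `langchain_community` with a compatible release) is the direction to try; check the release notes for the exact minimum version rather than taking this sketch as authoritative.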
Agent Type Import Issue
https://api.github.com/repos/langchain-ai/langchain/issues/17866/comments
2
2024-02-21T11:36:00Z
2024-02-21T11:40:53Z
https://github.com/langchain-ai/langchain/issues/17866
2,146,531,888
17,866
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code When I was testing with the ChatZhipuAI provided by the official documentation, an AttributeError occurred: module 'zhipuai' has no attribute 'model_api'. The documentation URL is:https://python.langchain.com/docs/integrations/chat/zhipuai ``` from langchain_community.chat_models import ChatZhipuAI from langchain_core.messages import AIMessage, HumanMessage, SystemMessage chat = ChatZhipuAI( temperature=0.5, api_key="my key***", model="chatglm_turbo", ) messages = [ AIMessage(content="Hi."), SystemMessage(content="Your role is a poet."), HumanMessage(content="Write a short poem about AI in four lines."), ] response = chat(messages) print(response.content) ``` ### Error Message and Stack Trace (if applicable) Traceback (most recent call last): File "/Users/ysk/Library/Mobile Documents/com~apple~CloudDocs/langchain/zhipu/simple.py", line 17, in <module> response = chat(messages) File "/Users/ysk/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper return wrapped(*args, **kwargs) File "/Users/ysk/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 691, in __call__ generation = self.generate( File "/Users/ysk/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 408, in generate raise e File "/Users/ysk/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 398, in generate self._generate_with_cache( File "/Users/ysk/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 
577, in _generate_with_cache return self._generate( File "/Users/ysk/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_community/chat_models/zhipuai.py", line 265, in _generate response = self.invoke(prompt) File "/Users/ysk/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_community/chat_models/zhipuai.py", line 183, in invoke return self.zhipuai.model_api.invoke( AttributeError: module 'zhipuai' has no attribute 'model_api' ### Description I was testing with the ChatZhipuAI provided by the official documentation ### System Info langchain==0.1.8 langchain-community==0.0.21 langchain-core==0.1.24 langchain-openai==0.0.6 langchainhub==0.1.14 platform mac python 3.10
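The traceback dies on `self.zhipuai.model_api`, an attribute that existed in the 1.x `zhipuai` SDK but not in the 2.x rewrite, which replaced the module-level `model_api` with a client object. If the 2.x SDK is installed, one hedged workaround until the integration catches up is to pin the older package (the version specifier below is illustrative; check the SDK's changelog for the exact boundary):

```shell
# Assumption: ChatZhipuAI in langchain_community 0.0.21 targets the
# 1.x zhipuai SDK that still exposed `model_api`.
pip install "zhipuai<2.0"
```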
ChatZhipuAI invoke error
https://api.github.com/repos/langchain-ai/langchain/issues/17863/comments
9
2024-02-21T11:12:01Z
2024-07-02T16:08:57Z
https://github.com/langchain-ai/langchain/issues/17863
2,146,485,006
17,863
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code Take the following Pydantic class: ```python from langchain_core.pydantic_v1 import BaseModel, Field class Joke(BaseModel): setup: str = Field(description="question to set up a joke") punchline: str = Field(description="answer to resolve the joke") ``` When used with `JsonOutputParser` we can generate pydantic schema as follows:- ```python from langchain_core.output_parsers import JsonOutputParser instructions = JsonOutputParser(pydantic_object=Joke).get_format_instructions() ``` ### Error Message and Stack Trace (if applicable) _No response_ ### Description The given example steps are not replicable with `XMLOutputParser`. Although it does have the `tags` field, there is no way to pass the pydantic object directly. Passing JSON schema can result in transient errors in generated outputs. Ideally an XML schema or something similar should be injected, which would be dynamically generated from the Pydantic object. ### System Info System Information ------------------ > OS: Linux > OS Version: #18~22.04.1-Ubuntu SMP Wed Jan 10 22:54:16 UTC 2024 > Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] Package Information ------------------- > langchain_core: 0.1.25 > langchain: 0.1.8 > langchain_community: 0.0.21 > langsmith: 0.1.5 > langchainhub: 0.1.14 Packages not installed (Not Necessarily a Problem) -------------------------------------------------- The following packages were not found: > langgraph > langserve
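To make the request concrete, here is a dependency-free sketch of what pydantic-derived XML format instructions could look like. The field names and descriptions mirror the `Joke` model above, but the schema is a hand-written dict (not a real pydantic object) so the example runs standalone; the function name is hypothetical, not a proposed API:

```python
# Sketch: render field name/description pairs as an XML skeleton the
# model can be asked to fill in, analogous to JsonOutputParser's
# JSON-schema-based instructions.
schema = {
    "setup": "question to set up a joke",
    "punchline": "answer to resolve the joke",
}

def xml_format_instructions(root: str, fields: dict) -> str:
    lines = [f"Return your answer as XML inside a <{root}> tag:", f"<{root}>"]
    for name, desc in fields.items():
        lines.append(f"  <{name}>{desc}</{name}>")
    lines.append(f"</{root}>")
    return "\n".join(lines)

print(xml_format_instructions("joke", schema))
```

A real implementation would walk the pydantic model's schema (including nested models) instead of a flat dict, but even this shape avoids injecting JSON schema into an XML-oriented prompt.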
Missing first-class pydantic object support in `XMLOutputParser`
https://api.github.com/repos/langchain-ai/langchain/issues/17862/comments
2
2024-02-21T10:47:37Z
2024-08-06T16:07:38Z
https://github.com/langchain-ai/langchain/issues/17862
2,146,438,574
17,862
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code ` from llama_index.vector_stores.milvus import MilvusVectorStore from langchain_community.retrievers.llama_index import LlamaIndexRetriever from llama_index.core.indices.vector_store.base import VectorStoreIndex vector_store = MilvusVectorStore(dim=1536) retriever = LlamaIndexRetriever(index=VectorStoreIndex.from_vector_store(vector_store)) ` ### Error Message and Stack Trace (if applicable) Traceback (most recent call last): File "/usr/local/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 426, in run_asgi result = await app( # type: ignore[func-returns-value] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__ return await self.app(scope, receive, send) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/fastapi/applications.py", line 1054, in __call__ await super().__call__(scope, receive, send) File "/usr/local/lib/python3.11/site-packages/starlette/applications.py", line 123, in __call__ await self.middleware_stack(scope, receive, send) File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 186, in __call__ raise exc File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 164, in __call__ await self.app(scope, receive, _send) File "/usr/local/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 62, in __call__ await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send) File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app raise exc File 
"/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app await app(scope, receive, sender) File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 758, in __call__ await self.middleware_stack(scope, receive, send) File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 778, in app await route.handle(scope, receive, send) File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 299, in handle await self.app(scope, receive, send) File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 79, in app await wrap_app_handling_exceptions(app, request)(scope, receive, send) File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app raise exc File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app await app(scope, receive, sender) File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 77, in app await response(scope, receive, send) File "/usr/local/lib/python3.11/site-packages/sse_starlette/sse.py", line 255, in __call__ async with anyio.create_task_group() as task_group: File "/usr/local/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 597, in __aexit__ raise exceptions[0] File "/usr/local/lib/python3.11/site-packages/sse_starlette/sse.py", line 258, in wrap await func() File "/usr/local/lib/python3.11/site-packages/sse_starlette/sse.py", line 245, in stream_response async for data in self.body_iterator: File "/usr/local/lib/python3.11/site-packages/langserve/api_handler.py", line 1056, in _stream_log async for chunk in self._runnable.astream_log( File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 686, in astream_log async for item in _astream_log_implementation( # type: ignore File "/usr/local/lib/python3.11/site-packages/langchain_core/tracers/log_stream.py", line 612, in _astream_log_implementation 
await task File "/usr/local/lib/python3.11/site-packages/langchain_core/tracers/log_stream.py", line 566, in consume_astream async for chunk in runnable.astream(input, config, **kwargs): File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4144, in astream async for item in self.bound.astream( File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4144, in astream async for item in self.bound.astream( File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2449, in astream async for chunk in self.atransform(input_aiter(), config, **kwargs): File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2432, in atransform async for chunk in self._atransform_stream_with_config( File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1595, in _atransform_stream_with_config chunk: Output = await asyncio.create_task( # type: ignore[call-arg] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/langchain_core/tracers/log_stream.py", line 237, in tap_output_aiter async for chunk in output: File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2402, in _atransform async for output in final_pipeline: File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4180, in atransform async for item in self.bound.atransform( File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2432, in atransform async for chunk in self._atransform_stream_with_config( File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1595, in _atransform_stream_with_config chunk: Output = await asyncio.create_task( # type: ignore[call-arg] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/langchain_core/tracers/log_stream.py", line 237, in 
tap_output_aiter async for chunk in output: File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2402, in _atransform async for output in final_pipeline: File "/usr/local/lib/python3.11/site-packages/langchain_core/output_parsers/transform.py", line 60, in atransform async for chunk in self._atransform_stream_with_config( File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1560, in _atransform_stream_with_config final_input: Optional[Input] = await py_anext(input_for_tracing, None) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/langchain_core/utils/aiter.py", line 62, in anext_impl return await __anext__(iterator) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/langchain_core/utils/aiter.py", line 97, in tee_peer item = await iterator.__anext__() ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/configurable.py", line 218, in atransform async for chunk in runnable.atransform(input, config, **kwargs): File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1061, in atransform async for chunk in input: File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1061, in atransform async for chunk in input: File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2862, in atransform async for chunk in self._atransform_stream_with_config( File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1595, in _atransform_stream_with_config chunk: Output = await asyncio.create_task( # type: ignore[call-arg] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/langchain_core/tracers/log_stream.py", line 237, in tap_output_aiter async for chunk in output: File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2849, 
in _atransform chunk = AddableDict({step_name: task.result()}) ^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2832, in get_next_chunk return await py_anext(generator) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2432, in atransform async for chunk in self._atransform_stream_with_config( File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1595, in _atransform_stream_with_config chunk: Output = await asyncio.create_task( # type: ignore[call-arg] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/langchain_core/tracers/log_stream.py", line 237, in tap_output_aiter async for chunk in output: File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2402, in _atransform async for output in final_pipeline: File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3711, in atransform async for output in self._atransform_stream_with_config( File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1560, in _atransform_stream_with_config final_input: Optional[Input] = await py_anext(input_for_tracing, None) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/langchain_core/utils/aiter.py", line 62, in anext_impl return await __anext__(iterator) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/langchain_core/utils/aiter.py", line 97, in tee_peer item = await iterator.__anext__() ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1071, in atransform async for output in self.astream(final, config, **kwargs): File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 592, in astream yield await self.ainvoke(input, config, **kwargs) 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/langchain_core/retrievers.py", line 137, in ainvoke return await self.aget_relevant_documents( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/langchain_core/retrievers.py", line 280, in aget_relevant_documents raise e File "/usr/local/lib/python3.11/site-packages/langchain_core/retrievers.py", line 273, in aget_relevant_documents result = await self._aget_relevant_documents( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/langchain_core/retrievers.py", line 168, in _aget_relevant_documents return await run_in_executor( ^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/config.py", line 493, in run_in_executor return await asyncio.get_running_loop().run_in_executor( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/Cellar/python@3.11/3.11.6_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/langchain_community/retrievers/llama_index.py", line 28, in _get_relevant_documents raise ImportError( ImportError: You need to install `pip install llama-index` to use this retriever. ### Description Since LlamaIndex v0.10, packages structure is different. 
You might have to change these imports for the LlamaIndexRetriever class in langchain_community.retrievers.llama_index: `from llama_index.indices.base import BaseGPTIndex from llama_index.response.schema import Response` to `from llama_index.core.indices.base import BaseGPTIndex from llama_index.core.base.response.schema import Response` **NOTE:** This might not be the only import problem since the new LlamaIndex update ### System Info System Information ------------------ > OS: Darwin > OS Version: Darwin Kernel Version 23.1.0: Mon Oct 9 21:27:27 PDT 2023; root:xnu-10002.41.9~6/RELEASE_X86_64 > Python Version: 3.11.6 (main, Oct 2 2023, 13:45:54) [Clang 15.0.0 (clang-1500.0.40.1)] Package Information ------------------- > langchain_core: 0.1.24 > langchain: 0.1.8 > langchain_community: 0.0.21 > langsmith: 0.1.3 > langchain_cli: 0.0.21 > langchain_experimental: 0.0.50 > langchain_openai: 0.0.5 > langchainhub: 0.1.14 > langserve: 0.0.41 Packages not installed (Not Necessarily a Problem) -------------------------------------------------- The following packages were not found: > langgraph
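One defensive pattern for integrations straddling a package reorganization like LlamaIndex v0.10 is to try module paths in order and use the first that imports. The helper below is a hypothetical sketch, demonstrated with stdlib module names so it runs without llama-index installed:

```python
import importlib

def import_first(*paths):
    """Try dotted module paths in order; return the first that imports."""
    for path in paths:
        try:
            return importlib.import_module(path)
        except ImportError:
            continue
    raise ImportError(f"none of {paths} is importable")

# Demonstrated with stdlib names; a retriever could instead try
# "llama_index.core.indices.base" then "llama_index.indices.base".
mod = import_first("no_such_module_xyz", "json")
print(mod.__name__)  # json
```

This keeps a single code path working across both the pre-0.10 and v0.10+ layouts instead of hard-coding one of them.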
LlamaIndexRetriever imports not working since Llamaindex v0.10
https://api.github.com/repos/langchain-ai/langchain/issues/17860/comments
2
2024-02-21T09:24:32Z
2024-04-26T21:12:53Z
https://github.com/langchain-ai/langchain/issues/17860
2,146,229,581
17,860
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code ```python found_response_schemas = [ ResponseSchema(name="answer", description="Response substitutes from the context"), ResponseSchema( name="found", description="Return whether it could find the answer from the references or not.", ), ] ``` ### Error Message and Stack Trace (if applicable) _No response_ ### Description I am trying to make an output parser that tells whether it could successfully find an answer from the reference document using RAG or not. But it seems like it always returns `"found" = true`. ### System Info langchain==0.1.2 langchain-community==0.0.14
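Two things may be at play here: `ResponseSchema` defaults every field's `type` to `"string"` (setting `type="boolean"` in the schema may help steer the model), and even then the value often comes back as the string "true"/"false" rather than a JSON boolean. A small, hedged post-processing step makes the boolean explicit regardless of what the model emitted:

```python
# Normalize a model-produced "found" value to a real bool; the accepted
# truthy strings are an assumption you may want to widen or narrow.
def parse_found(value) -> bool:
    if isinstance(value, bool):
        return value
    return str(value).strip().lower() in {"true", "yes", "1"}

print(parse_found("True"), parse_found("false"), parse_found(False))
```

If the field is true even when the answer is absent from the references, that is usually a prompting problem rather than a parsing one; making the description spell out when to return false (e.g. "Return false if the answer is not contained in the provided context") tends to help.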
custom boolean output parser not working well
https://api.github.com/repos/langchain-ai/langchain/issues/17858/comments
3
2024-02-21T08:43:25Z
2024-05-31T23:56:15Z
https://github.com/langchain-ai/langchain/issues/17858
2,146,134,674
17,858
[ "langchain-ai", "langchain" ]
### Checklist - [X] I added a very descriptive title to this issue. - [X] I included a link to the documentation page I am referring to (if applicable). ### Issue with current documentation: Just a silly mistake by your developer or maybe content editor. Link : https://python.langchain.com/docs/modules/chains/ ### Idea or request for content: Just change it. Nothing else. You are going to be unicorn soon. Hire me as your content writer :) ![langchainissue](https://github.com/langchain-ai/langchain/assets/66516678/e5f34f10-f6ca-446b-9e0d-dbe9d5bac6da)
DOC: Spelling Mistake on Website (Chains: RetreivalQA)
https://api.github.com/repos/langchain-ai/langchain/issues/17851/comments
1
2024-02-21T06:20:00Z
2024-03-08T20:09:23Z
https://github.com/langchain-ai/langchain/issues/17851
2,145,924,931
17,851
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code `# Refer : https://python.langchain.com/docs/integrations/chat/google_generative_ai from langchain_google_genai import ( ChatGoogleGenerativeAI, HarmBlockThreshold, HarmCategory, ) import os os.environ["GOOGLE_API_KEY"] = "" llm = ChatGoogleGenerativeAI(model="gemini-pro", safety_settings={ HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE, }, ) result = llm.invoke("Write a ballad about LangChain") print(result.content) ` ### Error Message and Stack Trace (if applicable) Traceback (most recent call last): File "/Users/manjunath.s/Desktop/sample_test/gemini/safety_settings/test.py", line 18, in <module> result = llm.invoke("Write a ballad about LangChain") File "/Users/manjunath.s/Library/Python/3.9/lib/python/site-packages/langchain_core/language_models/chat_models.py", line 166, in invoke self.generate_prompt( File "/Users/manjunath.s/Library/Python/3.9/lib/python/site-packages/langchain_core/language_models/chat_models.py", line 544, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) File "/Users/manjunath.s/Library/Python/3.9/lib/python/site-packages/langchain_core/language_models/chat_models.py", line 408, in generate raise e File "/Users/manjunath.s/Library/Python/3.9/lib/python/site-packages/langchain_core/language_models/chat_models.py", line 398, in generate self._generate_with_cache( File "/Users/manjunath.s/Library/Python/3.9/lib/python/site-packages/langchain_core/language_models/chat_models.py", line 577, in _generate_with_cache return self._generate( File "/Users/manjunath.s/Library/Python/3.9/lib/python/site-packages/langchain_google_genai/chat_models.py", line 550, 
in _generate params, chat, message = self._prepare_chat( File "/Users/manjunath.s/Library/Python/3.9/lib/python/site-packages/langchain_google_genai/chat_models.py", line 645, in _prepare_chat client = genai.GenerativeModel( TypeError: __init__() got an unexpected keyword argument 'tools' ### Description I was trying to disable safety filters on Google Gemini and I'm facing the error mentioned below TypeError: __init__() got an unexpected keyword argument 'tools' Sample code is attached for reference. ### System Info `langchain==0.1.8 langchain-community==0.0.21 langchain-core==0.1.25 langchain-google-genai==0.0.9 ` Python 3.9.6
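`genai.GenerativeModel(tools=...)` is a keyword that only newer releases of the `google-generativeai` SDK accept, so `langchain-google-genai` 0.0.9 appears to be passing an argument the older installed SDK does not know. A hedged first step (the exact minimum SDK version is not verified here; check the integration's dependency pins) is to upgrade both packages together:

```shell
pip install -U google-generativeai langchain-google-genai
```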
safety settings lead to __init__() got an unexpected keyword argument 'tools'
https://api.github.com/repos/langchain-ai/langchain/issues/17847/comments
1
2024-02-21T05:07:42Z
2024-05-31T23:58:42Z
https://github.com/langchain-ai/langchain/issues/17847
2,145,848,491
17,847
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. ### Example Code SemanticSimilarityExampleSelector.from_examples is throwing an error: ServiceRequestError: Invalid URL "/indexes('langchain-index')?api-version=2023-10-01-Preview": No scheme supplied. Perhaps you meant [https:///indexes('langchain-index')?api-version=2023-10-01-Preview?](https://indexes('langchain-index')/?api-version=2023-10-01-Preview?) I am creating a vector store and using it in SemanticSimilarityExampleSelector.from_examples like below: vector_store = AzureSearch( azure_search_endpoint=vector_store_address, azure_search_key=vector_store_password, index_name=index_name, embedding_function=embeddings.embed_query, ) example_selector = SemanticSimilarityExampleSelector.from_examples( examples, embeddings, vector_store, k=5, input_keys=["input"], ) ### Error Message and Stack Trace (if applicable) ServiceRequestError Traceback (most recent call last) Cell In[11], line 1 ----> 1 example_selector = SemanticSimilarityExampleSelector.from_examples( 2 examples, 3 embeddings, 4 vector_store, 5 k=5, 6 input_keys=["input"] 7 ) File ~\anaconda3\Lib\site-packages\langchain_core\example_selectors\semantic_similarity.py:105, in SemanticSimilarityExampleSelector.from_examples(cls, examples, embeddings, vectorstore_cls, k, input_keys, example_keys, vectorstore_kwargs, **vectorstore_cls_kwargs) 103 else: 104 string_examples = [" ".join(sorted_values(eg)) for eg in examples] --> 105 vectorstore = vectorstore_cls.from_texts( 106 string_examples, embeddings, metadatas=examples, **vectorstore_cls_kwargs 107 ) 108 return cls( 109 vectorstore=vectorstore, 110 k=k, (...) 
113 vectorstore_kwargs=vectorstore_kwargs, 114 ) File ~\anaconda3\Lib\site-packages\langchain_community\vectorstores\azuresearch.py:632, in AzureSearch.from_texts(cls, texts, embedding, metadatas, azure_search_endpoint, azure_search_key, index_name, **kwargs) 620 @classmethod 621 def from_texts( 622 cls: Type[AzureSearch], (...) 630 ) -> AzureSearch: 631 # Creating a new Azure Search instance --> 632 azure_search = cls( 633 azure_search_endpoint, 634 azure_search_key, 635 index_name, 636 embedding, 637 ) 638 azure_search.add_texts(texts, metadatas, **kwargs) 639 return azure_search File ~\anaconda3\Lib\site-packages\langchain_community\vectorstores\azuresearch.py:269, in AzureSearch.__init__(self, azure_search_endpoint, azure_search_key, index_name, embedding_function, search_type, semantic_configuration_name, fields, vector_search, semantic_configurations, scoring_profiles, default_scoring_profile, cors_options, **kwargs) 267 if "user_agent" in kwargs and kwargs["user_agent"]: 268 user_agent += " " + kwargs["user_agent"] --> 269 self.client = _get_search_client( 270 azure_search_endpoint, 271 azure_search_key, 272 index_name, 273 semantic_configuration_name=semantic_configuration_name, 274 fields=fields, 275 vector_search=vector_search, 276 semantic_configurations=semantic_configurations, 277 scoring_profiles=scoring_profiles, 278 default_scoring_profile=default_scoring_profile, 279 default_fields=default_fields, 280 user_agent=user_agent, 281 cors_options=cors_options, 282 ) 283 self.search_type = search_type 284 self.semantic_configuration_name = semantic_configuration_name File ~\anaconda3\Lib\site-packages\langchain_community\vectorstores\azuresearch.py:112, in _get_search_client(endpoint, key, index_name, semantic_configuration_name, fields, vector_search, semantic_configurations, scoring_profiles, default_scoring_profile, default_fields, user_agent, cors_options) 108 index_client: SearchIndexClient = SearchIndexClient( 109 endpoint=endpoint, 
credential=credential, user_agent=user_agent
    110 )
    111 try:
--> 112 index_client.get_index(name=index_name)
    113 except ResourceNotFoundError:
    114     # Fields configuration
    115     if fields is not None:
    116         # Check mandatory fields

File ~\anaconda3\Lib\site-packages\azure\core\tracing\decorator.py:78, in distributed_trace.<locals>.decorator.<locals>.wrapper_use_tracer(*args, **kwargs)
     76 span_impl_type = settings.tracing_implementation()
     77 if span_impl_type is None:
---> 78     return func(*args, **kwargs)
     80 # Merge span is parameter is set, but only if no explicit parent are passed
     81 if merge_span and not passed_in_parent:

File ~\anaconda3\Lib\site-packages\azure\search\documents\indexes\_search_index_client.py:149, in SearchIndexClient.get_index(self, name, **kwargs)
    131 """
    132
    133 :param name: The name of the index to retrieve.
   (...)
    146 :caption: Get an index.
    147 """
    148 kwargs["headers"] = self._merge_client_headers(kwargs.get("headers"))
--> 149 result = self._client.indexes.get(name, **kwargs)
    150 return SearchIndex._from_generated(result)

File ~\anaconda3\Lib\site-packages\azure\core\tracing\decorator.py:78, in distributed_trace.<locals>.decorator.<locals>.wrapper_use_tracer(*args, **kwargs)
     76 span_impl_type = settings.tracing_implementation()
     77 if span_impl_type is None:
---> 78     return func(*args, **kwargs)
     80 # Merge span is parameter is set, but only if no explicit parent are passed
     81 if merge_span and not passed_in_parent:

File ~\anaconda3\Lib\site-packages\azure\search\documents\indexes\_generated\operations\_indexes_operations.py:857, in IndexesOperations.get(self, index_name, request_options, **kwargs)
    854 _request.url = self._client.format_url(_request.url, **path_format_arguments)
    856 _stream = False
--> 857 pipeline_response: PipelineResponse = self._client._pipeline.run(  # pylint: disable=protected-access
    858     _request, stream=_stream, **kwargs
    859 )
    861 response = pipeline_response.http_response
    863 if response.status_code not in [200]:

File ~\anaconda3\Lib\site-packages\azure\core\pipeline\_base.py:230, in Pipeline.run(self, request, **kwargs)
    228 pipeline_request: PipelineRequest[HTTPRequestType] = PipelineRequest(request, context)
    229 first_node = self._impl_policies[0] if self._impl_policies else _TransportRunner(self._transport)
--> 230 return first_node.send(pipeline_request)

File ~\anaconda3\Lib\site-packages\azure\core\pipeline\_base.py:86, in _SansIOHTTPPolicyRunner.send(self, request)
     84 _await_result(self._policy.on_request, request)
     85 try:
---> 86     response = self.next.send(request)
     87 except Exception:  # pylint: disable=broad-except
     88     _await_result(self._policy.on_exception, request)

File ~\anaconda3\Lib\site-packages\azure\core\pipeline\_base.py:86, in _SansIOHTTPPolicyRunner.send(self, request)
     84 _await_result(self._policy.on_request, request)
     85 try:
---> 86     response = self.next.send(request)
     87 except Exception:  # pylint: disable=broad-except
     88     _await_result(self._policy.on_exception, request)

[... skipping similar frames: _SansIOHTTPPolicyRunner.send at line 86 (2 times)]

File ~\anaconda3\Lib\site-packages\azure\core\pipeline\_base.py:86, in _SansIOHTTPPolicyRunner.send(self, request)
     84 _await_result(self._policy.on_request, request)
     85 try:
---> 86     response = self.next.send(request)
     87 except Exception:  # pylint: disable=broad-except
     88     _await_result(self._policy.on_exception, request)

File ~\anaconda3\Lib\site-packages\azure\core\pipeline\policies\_redirect.py:197, in RedirectPolicy.send(self, request)
    195 original_domain = get_domain(request.http_request.url) if redirect_settings["allow"] else None
    196 while retryable:
--> 197     response = self.next.send(request)
    198     redirect_location = self.get_redirect_location(response)
    199     if redirect_location and redirect_settings["allow"]:

File ~\anaconda3\Lib\site-packages\azure\core\pipeline\policies\_retry.py:553, in RetryPolicy.send(self, request)
    551     is_response_error = True
    552     continue
--> 553     raise err
    554 finally:
    555     end_time = time.time()

File ~\anaconda3\Lib\site-packages\azure\core\pipeline\policies\_retry.py:531, in RetryPolicy.send(self, request)
    529 try:
    530     self._configure_timeout(request, absolute_timeout, is_response_error)
--> 531     response = self.next.send(request)
    532     if self.is_retry(retry_settings, response):
    533         retry_active = self.increment(retry_settings, response=response)

File ~\anaconda3\Lib\site-packages\azure\core\pipeline\_base.py:86, in _SansIOHTTPPolicyRunner.send(self, request)
     84 _await_result(self._policy.on_request, request)
     85 try:
---> 86     response = self.next.send(request)
     87 except Exception:  # pylint: disable=broad-except
     88     _await_result(self._policy.on_exception, request)

File ~\anaconda3\Lib\site-packages\azure\core\pipeline\_base.py:86, in _SansIOHTTPPolicyRunner.send(self, request)
     84 _await_result(self._policy.on_request, request)
     85 try:
---> 86     response = self.next.send(request)
     87 except Exception:  # pylint: disable=broad-except
     88     _await_result(self._policy.on_exception, request)

[... skipping similar frames: _SansIOHTTPPolicyRunner.send at line 86 (2 times)]

File ~\anaconda3\Lib\site-packages\azure\core\pipeline\_base.py:86, in _SansIOHTTPPolicyRunner.send(self, request)
     84 _await_result(self._policy.on_request, request)
     85 try:
---> 86     response = self.next.send(request)
     87 except Exception:  # pylint: disable=broad-except
     88     _await_result(self._policy.on_exception, request)

File ~\anaconda3\Lib\site-packages\azure\core\pipeline\_base.py:119, in _TransportRunner.send(self, request)
    109 """HTTP transport send method.
    110
    111 :param request: The PipelineRequest object.
   (...)
    114 :rtype: ~azure.core.pipeline.PipelineResponse
    115 """
    116 cleanup_kwargs_for_transport(request.context.options)
    117 return PipelineResponse(
    118     request.http_request,
--> 119     self._sender.send(request.http_request, **request.context.options),
    120     context=request.context,
    121 )

File ~\anaconda3\Lib\site-packages\azure\core\pipeline\transport\_requests_basic.py:381, in RequestsTransport.send(self, request, **kwargs)
    378     error = ServiceRequestError(err, error=err)
    380 if error:
--> 381     raise error
    382 if _is_rest(request):
    383     from azure.core.rest._requests_basic import RestRequestsTransportResponse

ServiceRequestError: Invalid URL "/indexes('langchain-index')?api-version=2023-10-01-Preview": No scheme supplied. Perhaps you meant https:///indexes('langchain-index')?api-version=2023-10-01-Preview?

### Description

I am trying to use the example selector (SemanticSimilarityExampleSelector.from_examples) feature of langchain.

### System Info

langchain 0.1.8
platform - windows
python version - 3.11.5
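The `ServiceRequestError` above points at a vector store endpoint that was configured without a URL scheme, so the request URL degenerates to a bare path. A small illustration (plain Python with only the standard library; the endpoint strings are placeholders, not the reporter's actual configuration) of the check that effectively fails inside the transport:

```python
from urllib.parse import urlparse

def has_scheme(url: str) -> bool:
    # requests/azure-core reject a URL whose parsed scheme is empty,
    # producing the "No scheme supplied" message seen in the traceback.
    return bool(urlparse(url).scheme)

# An endpoint such as "myservice.search.windows.net" (no https://) parses
# with an empty scheme; prefixing the scheme makes it valid.
assert not has_scheme("myservice.search.windows.net/indexes('langchain-index')")
assert has_scheme("https://myservice.search.windows.net")
```

Passing the full `https://<service>.search.windows.net` endpoint (rather than just the host) when constructing the vector store avoids this failure mode.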
Issue with SemanticSimilarityExampleSelector.from_examples
https://api.github.com/repos/langchain-ai/langchain/issues/17846/comments
7
2024-02-21T04:44:48Z
2024-06-27T16:07:34Z
https://github.com/langchain-ai/langchain/issues/17846
2145823292
17846
[ "langchain-ai", "langchain" ]
### Checklist

- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).

### Issue with current documentation:

Hi,

I was reading through the guide for Prompts and I realized that the ordering of the articles didn't make sense. After the article about composition there is the "Example Selector Types" article, which is followed by "Example Selectors", which is followed by "Few Shot Prompting". I think the ordering needs to be the opposite of what it is.

Link to the documentation: https://python.langchain.com/docs/modules/model_io/prompts/example_selectors

Have a nice day,

### Idea or request for content:

_No response_
Ordering under Modules > Prompts seems to be wrong
https://api.github.com/repos/langchain-ai/langchain/issues/17825/comments
3
2024-02-20T22:56:00Z
2024-05-31T23:56:12Z
https://github.com/langchain-ai/langchain/issues/17825
2145430838
17825
[ "langchain-ai", "langchain" ]
### Privileged issue

- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.

### Issue Content

max_marginal_relevance should be moved to core in a way that doesn't require numpy, in order to prevent code duplication between packages
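The issue above asks for an MMR implementation that doesn't depend on numpy. A minimal pure-Python sketch of greedy maximal marginal relevance (function and parameter names here are illustrative, not the actual langchain-core API):

```python
import math

def cosine_similarity(a, b):
    # Plain-Python cosine similarity; returns 0.0 for zero-norm vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def max_marginal_relevance(query, embeddings, lambda_mult=0.5, k=4):
    # Greedy MMR: each step balances relevance to the query against
    # redundancy with the documents already selected.
    selected = []
    candidates = list(range(len(embeddings)))
    while candidates and len(selected) < k:
        best, best_score = None, -float("inf")
        for i in candidates:
            relevance = cosine_similarity(query, embeddings[i])
            redundancy = max(
                (cosine_similarity(embeddings[i], embeddings[j]) for j in selected),
                default=0.0,
            )
            score = lambda_mult * relevance - (1 - lambda_mult) * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
        candidates.remove(best)
    return selected

# A low lambda_mult favors diversity: the near-duplicate of index 0 is
# skipped in favor of the orthogonal vector at index 2.
assert max_marginal_relevance([1, 0], [[1, 0], [0.99, 0.1], [0, 1]],
                              lambda_mult=0.1, k=2) == [0, 2]
```

A version like this needs no third-party dependency, which is the point of the issue; a numpy-backed variant could remain in the community package for speed.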
Some vectorstore utils are duplicated between packages
https://api.github.com/repos/langchain-ai/langchain/issues/17824/comments
1
2024-02-20T22:41:55Z
2024-05-31T23:58:43Z
https://github.com/langchain-ai/langchain/issues/17824
2145415747
17824
[ "langchain-ai", "langchain" ]
### Checked other resources

- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.

### Example Code

```
from flask import Blueprint, request, Response, current_app, jsonify, stream_with_context
from flask_jwt_extended import jwt_required, current_user
from project import db
from langchain_community.llms import Ollama
from langchain_community.vectorstores import Chroma
from langchain_community.document_loaders.csv_loader import CSVLoader
from langchain_community.embeddings import OllamaEmbeddings
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder, PromptTemplate
from langchain_community.chat_message_histories import MongoDBChatMessageHistory
from langchain.agents import tool, AgentExecutor, create_react_agent
from langchain.tools.retriever import create_retriever_tool
from langchain.agents import AgentExecutor
from langchain_openai import OpenAI
from langchain_openai import OpenAIEmbeddings
import json
from langchain_core.utils.function_calling import convert_to_openai_tool
from langchain_experimental.llms.ollama_functions import OllamaFunctions
from langchain_openai import ChatOpenAI
from langchain.callbacks.streaming_stdout_final_only import FinalStreamingStdOutCallbackHandler
import time
import asyncio
from random import choice

chat_bot_2_Blueprint = Blueprint('chat_bot_2', __name__)

@chat_bot_2_Blueprint.route('/api/ai/chat_bot_2/', methods=['GET', 'POST'])  # <- from '/'
@jwt_required()
async def chat_bot_2():
    user_ID = current_user.user_ID
    chat_input = request.json.get("chat_input", None)
    chat_ID = request.json.get("live_chat_ID", None)

    path_to_chroma_db = "./project/api/ai/chroma_db/"
    path_to_csv = "/love_qa.csv"
    loader = CSVLoader(file_path=path_to_csv)
    data = loader.load()
    vector_collection = "dr_love_4"

    model_type = "open_ai"
    if model_type == "open_ai":
        embedding_model = OpenAIEmbeddings(openai_api_key=current_app.config['OPENAI_SECRET'])
        llm = ChatOpenAI(
            openai_api_key=current_app.config['OPENAI_SECRET'],
            model_name="gpt-3.5-turbo-0125",
            streaming=True,
            max_tokens=2000,
            callbacks=[FinalStreamingStdOutCallbackHandler(answer_prefix_tokens=['Final', ' Answer', ':'])],)
    else:
        model_type = OllamaEmbeddings(model="llama2")
        llm = OllamaFunctions(model="llama2")
        print('LLAMA')

    react_template = get_react_template()
    custom_react_prompt = PromptTemplate.from_template(react_template)

    # save to disk
    #vectorstore = Chroma.from_documents(documents=data, embedding=embedding_model, persist_directory=path_to_chroma_db + vector_collection)

    # load from disk
    vectorstore = Chroma(persist_directory=path_to_chroma_db + vector_collection, embedding_function=embedding_model)
    retriever = vectorstore.as_retriever(search_type="similarity", search_kwargs={"k": 2})

    retreiver_title = "hormone_doctor_medical_questions_and_answers"
    retreiver_description = "Searches and returns a series of questions and answers on hormones and medical advice and also personal questions"
    retriever_tool = create_retriever_tool(retriever, retreiver_title, retreiver_description)
    tools = [confetti, get_word_length, retriever_tool]

    message_history = MongoDBChatMessageHistory(session_id=chat_ID,
                                                connection_string=current_app.config['MONGO_CLI'],
                                                database_name=current_app.config['MONGO_DB'],
                                                collection_name="chat_histories")

    agent = create_react_agent(llm=llm, tools=tools, prompt=custom_react_prompt)
    agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True, max_iterations=5,
                                   max_execution_time=100, return_intermediate_steps=True,
                                   handle_parsing_errors=True).with_config({"run_name": "Agent"})

    async def stream_output(chat_input):
        # async for chunk in agent_executor.stream({"input": chat_input, "chat_history": message_history}):
        #     print(chunk)
        #     if "output" in chunk:
        #         # print(f'Final Output: {chunk["output"]}')
        #         yield chunk["output"]
        async for event in agent_executor.astream_events({"input": chat_input, "chat_history": message_history}, version="v1"):
            kind = event["event"]
            if kind == "on_chain_start":
                if (event["name"] == "Agent"):  # Was assigned when creating the agent with `.with_config({"run_name": "Agent"})`
                    print(f"Starting agent: {event['name']} with input: {event['data'].get('input')}")
            elif kind == "on_chat_model_stream":
                content = event["data"]["chunk"].content
                if content:
                    print(content, end="|")
                    yield content

    update_chat_history(user_ID, chat_ID, chat_input)
    return Response(stream_with_context(stream_output(chat_input)), mimetype='text/event-stream')


def update_chat_history(user_ID, chat_ID, chat_input):
    preview = None
    chats = current_user.chats
    # Assume chat_ID and user_ID are defined earlier in your code
    chat_exists = any(chat_ID == item.get('chat_ID') for item in chats)
    if not chat_exists:
        db.credentials.update_one({"user_ID": user_ID}, {"$push": {'chats': {'chat_ID': chat_ID, 'preview': chat_input}}})
    else:
        # Move the items to the front of the list
        # get the preview
        for item in chats:
            if item['chat_ID'] == chat_ID:
                preview = item['preview']
        # First, pull the item if it exists
        db.credentials.update_one({"user_ID": user_ID, "chats.chat_ID": chat_ID}, {"$pull": {"chats": {"chat_ID": chat_ID}}})
        # Then, push the item to the array
        db.credentials.update_one({"user_ID": user_ID}, {"$push": {"chats": {"chat_ID": chat_ID, "preview": preview}}})


def get_react_template():
    react_template = """
Assistant is a large language model trained by OpenAI.

Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.

Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.

Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.

TOOLS:
------

Assistant has access to the following tools:

{tools}

To use a tool, please use the following format:

Thought: Do I need to use a tool? Yes
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action

When you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:

Thought: Do I need to use a tool? No
Final Answer: [your response here]

Begin!

Previous conversation history:
{chat_history}

Question: {input}
New input: {agent_scratchpad}
"""
    return react_template


@tool
async def confetti(string: str) -> str:
    """Adds a random word to the string"""
    return "confetti " + string


@tool
async def get_word_length(word: str) -> int:
    """Returns the length of a word."""
    return len(word)


@tool
async def where_cat_is_hiding() -> str:
    """Where is the cat hiding right now?"""
    return choice(["under the bed", "on the shelf", "in your heart"])
```

### Error Message and Stack Trace (if applicable)

Debugging middleware caught exception in streamed response at a point where response headers were already sent.
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/werkzeug/wsgi.py", line 500, in __next__
    return self._next()
           ^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/werkzeug/wrappers/response.py", line 50, in _iter_encoded
    for item in iterable:
TypeError: 'function' object is not iterable

### Description

I am trying to stream the output from an OpenAI agent using astream or astream_events. Neither work. I have tried many different potential solutions and would appreciate some guidance. Thank You.

### System Info

System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.3.0: Wed Dec 20 21:33:31 PST 2023; root:xnu-10002.81.5~7/RELEASE_ARM64_T8112
> Python Version: 3.11.2 (main, Mar 24 2023, 00:31:37) [Clang 14.0.0 (clang-1400.0.29.202)]

Package Information
-------------------
> langchain_core: 0.1.23
> langchain: 0.1.7
> langchain_community: 0.0.20
> langsmith: 0.0.87
> langchain_cli: 0.0.21
> langchain_experimental: 0.0.51
> langchain_openai: 0.0.6
> langchainhub: 0.1.14
> langserve: 0.0.41

Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
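A likely contributor to the `'function' object is not iterable` error above is handing an async generator to Flask's synchronous `Response`, which expects a plain iterable. A minimal sketch (plain Python, no Flask or LangChain required) of bridging an async generator to a sync iterator; `iter_over_async` is a hypothetical helper written for illustration, not a LangChain or Flask API:

```python
import asyncio

def iter_over_async(agen):
    """Consume an async generator from synchronous code (e.g. a WSGI
    response body) by driving it on a private event loop."""
    loop = asyncio.new_event_loop()
    try:
        while True:
            try:
                # __anext__() returns a coroutine; run it to completion
                # to pull one item out of the async generator.
                yield loop.run_until_complete(agen.__anext__())
            except StopAsyncIteration:
                break
    finally:
        loop.close()

async def tokens():
    # Stand-in for an astream_events token stream.
    for t in ["Hello", " ", "world"]:
        yield t

assert list(iter_over_async(tokens())) == ["Hello", " ", "world"]
```

With a wrapper like this, `Response(iter_over_async(stream_output(chat_input)), ...)` hands Flask a sync iterator it can consume; an async-native framework (Quart, FastAPI) would avoid the bridge entirely.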
Streaming Agent Error -----> TypeError: 'function' object is not iterable
https://api.github.com/repos/langchain-ai/langchain/issues/17821/comments
4
2024-02-20T20:53:59Z
2024-03-07T21:50:42Z
https://github.com/langchain-ai/langchain/issues/17821
2145263044
17821
[ "langchain-ai", "langchain" ]
### Checked other resources

- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.

### Example Code

```python
from langchain.agents import AgentExecutor, create_react_agent, Tool
```

### Error Message and Stack Trace (if applicable)

```
Traceback (most recent call last):
  File "/home/airflow/tm-ml-chat/new.py", line 25, in <module>
    from langchain.agents import AgentExecutor, create_react_agent, Tool
  File "/home/airflow/.local/lib/python3.10/site-packages/langchain/agents/__init__.py", line 34, in <module>
    from langchain_community.agent_toolkits import (
  File "/home/airflow/.local/lib/python3.10/site-packages/langchain_community/agent_toolkits/__init__.py", line 46, in <module>
    from langchain_community.agent_toolkits.sql.base import create_sql_agent
  File "/home/airflow/.local/lib/python3.10/site-packages/langchain_community/agent_toolkits/sql/base.py", line 29, in <module>
    from langchain_community.agent_toolkits.sql.toolkit import SQLDatabaseToolkit
  File "/home/airflow/.local/lib/python3.10/site-packages/langchain_community/agent_toolkits/sql/toolkit.py", line 9, in <module>
    from langchain_community.tools.sql_database.tool import (
  File "/home/airflow/.local/lib/python3.10/site-packages/langchain_community/tools/sql_database/tool.py", line 33, in <module>
    class QuerySQLDataBaseTool(BaseSQLDatabaseTool, BaseTool):
  File "/home/airflow/.local/lib/python3.10/site-packages/langchain_community/tools/sql_database/tool.py", line 47, in QuerySQLDataBaseTool
    ) -> Union[str, Sequence[Dict[str, Any]], Result[Any]]:
TypeError: 'type' object is not subscriptable
```

### Description

When I try and run the import command as shown, I get the error

### System Info

langchain 0.1.8
langchain_community-0.0.21
sqlalchemy 1.4.51
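For reference, the failure mode reported above can be reproduced without SQLAlchemy at all: subscripting a class that does not define `__class_getitem__` raises exactly this `TypeError`, which is what the annotation `Result[Any]` does at import time on SQLAlchemy 1.4. A minimal sketch — the `Result` class here is a stand-in written for illustration, not the real `sqlalchemy.engine.Result`:

```python
from typing import Any

class Result:
    """Stand-in for SQLAlchemy 1.4's non-generic Result class."""
    pass

def subscript_result():
    try:
        # What evaluating the annotation `Result[Any]` effectively does
        # when tool.py is imported against SQLAlchemy 1.4.
        Result[Any]
        return None
    except TypeError as exc:
        return str(exc)

message = subscript_result()
assert "not subscriptable" in message
```

SQLAlchemy 2.x makes `Result` generic (hence subscriptable), which is why upgrading sqlalchemy beyond 1.4.51 is the usual resolution for this import error.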
TypeError: 'type' object is not subscriptable when importing Agent
https://api.github.com/repos/langchain-ai/langchain/issues/17819/comments
8
2024-02-20T19:07:03Z
2024-02-23T00:52:08Z
https://github.com/langchain-ai/langchain/issues/17819
2145085038
17819
[ "langchain-ai", "langchain" ]
### Checklist

- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).

### Issue with current documentation:

Doc page: https://python.langchain.com/docs/integrations/vectorstores/azuresearch

The documentation for AzureSearch uses environment variables from the deprecated <v1.x openai library. It should be updated to use the environment variables that pertain to v1.x + Azure: https://github.com/openai/openai-python?tab=readme-ov-file#microsoft-azure-openai

### Idea or request for content:

Additionally, since the docs show using Azure OpenAI, we should probably create an instance of `AzureOpenAIEmbeddings` instead of `OpenAIEmbeddings` in the example.
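As a sketch of what the updated doc might set — variable names per the openai-python README linked above, and all values below are placeholders, not working credentials:

```python
import os

# Pre-1.x variable names the doc currently uses (deprecated):
#   OPENAI_API_TYPE, OPENAI_API_BASE, OPENAI_API_KEY, OPENAI_API_VERSION
# openai>=1.x's AzureOpenAI client reads these instead:
os.environ["AZURE_OPENAI_API_KEY"] = "<your-azure-openai-key>"
os.environ["AZURE_OPENAI_ENDPOINT"] = "https://<resource-name>.openai.azure.com/"
os.environ["OPENAI_API_VERSION"] = "2023-05-15"

assert os.environ["AZURE_OPENAI_ENDPOINT"].startswith("https://")
```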
DOC: AzureSearch doc should be updated to use openai>=1.x
https://api.github.com/repos/langchain-ai/langchain/issues/17818/comments
1
2024-02-20T18:59:31Z
2024-05-31T23:56:16Z
https://github.com/langchain-ai/langchain/issues/17818
2145072704
17818