[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code ```python import asyncio import litellm from langchain_community.chat_models.litellm_router import ChatLiteLLMRouter from langchain_core.messages import HumanMessage from langchain_core.prompt_values import ChatPromptValue from litellm import Router from litellm.integrations.custom_logger import CustomLogger def get_llm_router() -> Router: """ Return a new instance of Router, ensure to pass the following parameters so responses are cached: * redis_host * redis_port * redis_password * cache_kwargs * cache_responses * caching_groups """ raise NotImplementedError('Create your own router') class MyLogger(CustomLogger): async def async_log_success_event(self, kwargs, response_obj: "ModelResponse", start_time, end_time): print(f"[MyLogger::async_log_success_event] response id: '{response_obj.id}'; cache_hit: '{kwargs.get('cache_hit', '')}'.\n\n") my_logger = MyLogger() litellm.callbacks = [my_logger] async def chat(): llm = ChatLiteLLMRouter(router=get_llm_router()) msg1 = "" msg1_count = 0 async for msg in llm.astream( input=ChatPromptValue(messages=[HumanMessage("What's the first planet in solar system?")])): msg1 += msg.content if msg.content: msg1_count += 1 print(f"msg1 (count={msg1_count}): {msg1}\n\n") msg2 = "" msg2_count = 0 async for msg in llm.astream(input=ChatPromptValue(messages=[HumanMessage("What's the first planet in solar system?")])): msg2 += msg.content if msg.content: msg2_count += 1 print(f"msg2 (count={msg2_count}): {msg2}\n\n") await asyncio.sleep(5) if __name__ == "__main__": asyncio.run(chat()) ``` ### Error 
Message and Stack Trace (if applicable) This is the output generated by running the shared code: ``` Intialized router with Routing strategy: latency-based-routing Routing fallbacks: None Routing context window fallbacks: None Router Redis Caching=<litellm.caching.RedisCache object at 0x12370da10> msg1 (count=20): The first planet in the solar system, starting from the one closest to the Sun, is Mercury. [MyLogger::async_log_success_event] response id: 'chatcmpl-9jnacYSdnczh2zWMKi3l813lNXVtE'; cache_hit: 'None'. msg2 (count=1): The first planet in the solar system, starting from the one closest to the Sun, is Mercury. [MyLogger::async_log_success_event] response id: 'chatcmpl-9jnacYSdnczh2zWMKi3l813lNXVtE'; cache_hit: 'None'. ``` Notice the two lines starting with `[MyLogger::async_log_success_event]` saying `cache_hit: 'None'`. The second one is expected to be `True`, since that call to `astream` generated a single chunk containing the entire message. ### Description I'm trying to cache LLM responses using the LiteLLM router cache settings and get notified when a response was obtained from the cache instead of the LLM. For that purpose I've implemented a custom logger as shown in the [LiteLLM docs](https://litellm.vercel.app/docs/observability/custom_callback#cache-hits). The issue is that when I call the `astream` API, as shown in the code snippet above, the `cache_hit` flag is `None`, even in the case where the response is returned from the cache. When I call the `ainvoke` API (`await llm.ainvoke(...)`), the `cache_hit` flag is passed as `True` to my custom logger as expected after the second call to `ainvoke`.
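One hedged client-side workaround (not part of LangChain or LiteLLM) until the callback is fixed: infer a cache hit by tracking response ids across calls, since, as in the output above, the cached stream reuses the original response id while `cache_hit` stays `None`.

```python
# Hedged stand-in for the callback-side bookkeeping; `infer_cache_hit` is a
# hypothetical helper, not a LiteLLM API.
seen_ids = set()

def infer_cache_hit(response_id, reported_flag):
    # Trust the reported flag when present; otherwise fall back to id tracking.
    hit = reported_flag if reported_flag is not None else response_id in seen_ids
    seen_ids.add(response_id)
    return hit
```

Calling this from `async_log_success_event` with `response_obj.id` and `kwargs.get("cache_hit")` would report `True` on the second (cached) call even though the flag itself is `None`.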
### System Info ```bash $ poetry run python -m langchain_core.sys_info System Information ------------------ > OS: Darwin > OS Version: Darwin Kernel Version 23.2.0: Wed Nov 15 21:54:10 PST 2023; root:xnu-10002.61.3~2/RELEASE_X86_64 > Python Version: 3.11.7 (main, Dec 15 2023, 12:09:04) [Clang 14.0.6 ] Package Information ------------------- > langchain_core: 0.2.13 > langchain: 0.2.7 > langchain_community: 0.2.7 > langsmith: 0.1.85 > langchain_openai: 0.1.15 > langchain_text_splitters: 0.2.0 Packages not installed (Not Necessarily a Problem) -------------------------------------------------- The following packages were not found: > langgraph > langserve ```
LiteLLM - cache_hit flag reported as None when using the async stream API
https://api.github.com/repos/langchain-ai/langchain/issues/24120/comments
2
2024-07-11T14:00:20Z
2024-07-17T13:11:33Z
https://github.com/langchain-ai/langchain/issues/24120
2,403,269,285
24,120
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code ```python import logging logging.basicConfig(level=logging.DEBUG) from langchain_core.documents import Document from langchain_core.embeddings import FakeEmbeddings from langchain_milvus import Milvus milvus_vector_store = Milvus( index_params = { "metric_type": "COSINE", "index_type": "HNSW", "params": {"M": 8, "efConstruction": 64}, }, embedding_function=FakeEmbeddings(size=100), collection_name="test_langchain", connection_args={...} ) docs = [ Document(page_content='First document content.'), Document(page_content='Second document content.'), ] ids = [ "id_1", "id_2" ] result = milvus_vector_store.add_documents(ids=ids, documents=docs) print(result) ``` ### Error Message and Stack Trace (if applicable) ```bash DEBUG:langchain_milvus.vectorstores.milvus:No documents to upsert. Traceback (most recent call last): File "/Users/amiigas/langchain_milvus_test.py", line 32, in <module> result = milvus_vector_store.add_documents(ids=ids, documents=docs) File "/Users/amiigas/.venv/lib/python3.10/site-packages/langchain_core/vectorstores/base.py", line 463, in add_documents return self.upsert(documents_, **kwargs)["succeeded"] TypeError: 'NoneType' object is not subscriptable ``` ### Description * Trying to add documents to Milvus vector store using `add_documents()` method. 
* Expecting to successfully add documents and see returned ids `["id_1", "id_2"]` as a method result * Instead method returns `None` which then throws `TypeError` after trying to access `succeeded` key `VectorStore` passes `documents_` to `Milvus.upsert()` as a positional arg, which results in `documents=None` and returning `None` early ### System Info System Information ------------------ > OS: Darwin > OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:13:18 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6030 > Python Version: 3.10.11 (v3.10.11:7d4cc5aa85, Apr 4 2023, 19:05:19) [Clang 13.0.0 (clang-1300.0.29.30)] Package Information ------------------- > langchain_core: 0.2.13 > langchain: 0.2.7 > langsmith: 0.1.84 > langchain_milvus: 0.1.1 > langchain_text_splitters: 0.2.2
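The positional-binding failure described above can be sketched with a minimal stdlib stand-in (hypothetical simplification of the classes involved, not the real LangChain/Milvus signatures): the base class passes `documents_` positionally, so it binds to the first parameter instead of `documents`, which then stays `None` and triggers the early return.

```python
# Simplified stand-in for Milvus.upsert: returns None early when documents is empty,
# mirroring the "No documents to upsert" debug message.
def upsert(ids=None, documents=None):
    if not documents:
        return None
    return {"succeeded": list(ids or [])}

# What VectorStore.add_documents effectively does (positional pass-through):
result_positional = upsert(["doc1", "doc2"])   # binds to ids, documents stays None
# What a keyword pass-through would do instead:
result_keyword = upsert(ids=["id_1", "id_2"], documents=["doc1", "doc2"])
```

The positional call returns `None`, which is why `["succeeded"]` then raises `TypeError: 'NoneType' object is not subscriptable`.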
Milvus fails on `add_documents()` with new `VectorStore.upsert()` method
https://api.github.com/repos/langchain-ai/langchain/issues/24116/comments
7
2024-07-11T09:46:30Z
2024-07-17T20:08:05Z
https://github.com/langchain-ai/langchain/issues/24116
2,402,749,899
24,116
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code ```python #1st key code loader = UnstructuredWordDocumentLoader(self.file_path, mode="paged", strategy="fast", infer_table_structure=EXTRACT_TABLES) #2nd key code def create_documents( self, texts: List[str], metadatas: Optional[List[dict]] = None ) -> List[Document]: """Create documents from a list of texts.""" _metadatas = metadatas or [{}] * len(texts) documents = [] for i, text in enumerate(texts): index = 0 previous_chunk_len = 0 for chunk in self.split_text(text): metadata = copy.deepcopy(_metadatas[i]) if self._add_start_index: offset = index + previous_chunk_len - self._chunk_overlap index = text.find(chunk, max(0, offset)) metadata["start_index"] = index previous_chunk_len = len(chunk) new_doc = Document(page_content=chunk, metadata=metadata) documents.append(new_doc) return documents ``` ### Error Message and Stack Trace (if applicable) _No response_ ### Description Use `from langchain_community.document_loaders import UnstructuredWordDocumentLoader` to parse a Word file with a big table, producing `Document` objects. If `infer_table_structure=True` (the default), each document's metadata contains a `text_as_html` property, which is a big object. Then, when using `TextSplitter` to split the documents into chunks, every chunk document deep-copies the metadata once. If there are many chunks, memory usage increases sharply until the program OOMs and terminates. ### System Info langchain==0.2.3 langchain-community==0.2.4 langchain-core==0.2.5 langchain-text-splitters==0.2.1
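The memory blow-up above can be reproduced in miniature with the stdlib alone: each chunk deep-copies the full metadata, so a large `text_as_html` value is duplicated once per chunk. The mitigation shown (dropping the heavy key before splitting) is an assumption about a workable workaround, not the library's fix.

```python
import copy

# Synthetic stand-in for a document with a large text_as_html payload.
big_metadata = {"text_as_html": "<td>" * 250_000, "source": "big.docx"}
chunks = ["chunk %d" % i for i in range(50)]

# What the splitter does: one full deep copy per chunk (50 independent dicts).
docs = [(c, copy.deepcopy(big_metadata)) for c in chunks]

# Hypothetical mitigation: strip the heavy key before splitting so only
# lightweight metadata gets copied.
slim_metadata = {k: v for k, v in big_metadata.items() if k != "text_as_html"}
slim_docs = [(c, copy.deepcopy(slim_metadata)) for c in chunks]
```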
Parse msword file then split chunks cause oom
https://api.github.com/repos/langchain-ai/langchain/issues/24115/comments
3
2024-07-11T09:44:23Z
2024-07-11T13:33:05Z
https://github.com/langchain-ai/langchain/issues/24115
2,402,745,688
24,115
[ "langchain-ai", "langchain" ]
### URL https://python.langchain.com/v0.2/docs/concepts/#tools ### Checklist - [X] I added a very descriptive title to this issue. - [X] I included a link to the documentation page I am referring to (if applicable). ### Issue with current documentation: Document says: To use an existing pre-built tool, see [here](https://python.langchain.com/v0.2/docs/concepts/docs/integrations/tools/) for a list of pre-built tools. But the link page does not exist. ### Idea or request for content: _No response_
DOC: Page Not Found
https://api.github.com/repos/langchain-ai/langchain/issues/24112/comments
3
2024-07-11T08:13:33Z
2024-07-16T09:35:29Z
https://github.com/langchain-ai/langchain/issues/24112
2,402,553,645
24,112
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code The following code: ```python agent = create_react_agent(llm, tools, prompt) agent_executor = AgentExecutor( agent=agent, tools=tools, verbose=True, max_iterations=1, early_stopping_method="generate", ) ``` ### Error Message and Stack Trace (if applicable) _No response_ ### Description I am using create_react_agent() to create an agent. When max_iterations is reached, I want the LLM to produce a final answer based on the existing information. But the agent created by create_react_agent() does not seem to support `early_stopping_method='generate'`. The agent type appears to be BaseSingleActionAgent, and its return_stopped_response() does not handle `early_stopping_method='generate'`: ```python def return_stopped_response( self, early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any, ) -> AgentFinish: """Return response when agent has been stopped due to max iterations.""" if early_stopping_method == "force": # `force` just returns a constant string return AgentFinish( {"output": "Agent stopped due to iteration limit or time limit."}, "" ) else: raise ValueError( f"Got unsupported early_stopping_method `{early_stopping_method}`" ) ``` Can I assign it as Agent(BaseSingleActionAgent) when using create_react_agent()?
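For reference, what a `generate`-style branch could look like can be sketched as follows. This is a hypothetical stdlib stand-in, not the LangChain API: a real implementation would call the LLM with the gathered observations, whereas here the observations are simply concatenated as a placeholder.

```python
# Hedged sketch of a return_stopped_response that supports "generate".
def return_stopped_response(early_stopping_method, intermediate_steps):
    if early_stopping_method == "force":
        # Existing behaviour: constant string, ignores gathered information.
        return {"output": "Agent stopped due to iteration limit or time limit."}
    if early_stopping_method == "generate":
        # Stand-in for an LLM call: summarize from the observations so far.
        observations = "; ".join(obs for _, obs in intermediate_steps)
        return {"output": f"Answer based on gathered info: {observations}"}
    raise ValueError(
        f"Got unsupported early_stopping_method `{early_stopping_method}`"
    )
```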
### System Info System Information ------------------ > OS: Windows > OS Version: 10.0.19043 > Python Version: 3.10.0 (tags/v3.10.0:b494f59, Oct 4 2021, 19:00:18) [MSC v.1929 64 bit (AMD64)] Package Information ------------------- > langchain_core: 0.2.13 > langchain: 0.2.7 > langchain_community: 0.2.7 > langsmith: 0.1.85 > langchain_google_alloydb_pg: 0.2.2 > langchain_google_community: 1.0.6 > langchain_google_vertexai: 1.0.6 > langchain_openai: 0.1.8 > langchain_text_splitters: 0.2.2 > langchainhub: 0.1.20 > langgraph: 0.1.7 > langserve: 0.2.2
agent created by create_react_agent() does not support early_stopping_method='generate'
https://api.github.com/repos/langchain-ai/langchain/issues/24111/comments
2
2024-07-11T07:54:20Z
2024-07-16T12:28:45Z
https://github.com/langchain-ai/langchain/issues/24111
2,402,516,924
24,111
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code ```python def chat_to_sql_validator(input_prompt, chat_history, database, model_type): print(f"Database used: {database.dialect}") print(f"Usable table: {database.get_usable_table_names()[0]}\n\n") if model_type == "gpt-4o": model = ChatGPT() elif model_type == "gemini-pro": model = Gemini() toolkit = SQLDatabaseToolkit(db = database, llm = model, agent_type = "tool-calling", verbose = False) snippet_data = toolkit.get_context()["table_info"] current_date = date.today().strftime("%Y-%m-%d") examples = [ { "input": "Example 1", "output": "Example 1" }, { "input": "Example 2", "output": "Example 2" }, { "input": "Example 3", "output": "Example 3" } ] system = """ You are a {dialect} expert. Given a human chat history and question, your task is to create a syntactically correct {dialect} query to run. Unless the user specifies in the question a specific number of examples to obtain, query for at most {top_k} results using the LIMIT clause as per {dialect}. You can order the results to return the most informative data in the database. Never query for all columns from a table. You must query only the columns that are needed to answer the question. Pay attention to use only the column names you can see in the tables below. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table. Pay attention to use date('now') function to get the current date, if the question involves "today". 
Only use the following tables: {table_info} There is some rules that defined by human to generate syntactically correct {dialect} query: 1. Text Search: 1.1. For partial matches in the following columns, use the LIKE operator: 1.1.1. Branch Name 1.1.2. Store Name 1.1.3. Product Name 1.1.4. Principal Name 1.1.5. Product Type 1.1.6. Product Brand Name 1.1.7. Product Division Name 1.1.8. Product Department Name 1.1.9. Product Category Name 1.2. Ensure case-insensitive search using the UPPER function: UPPER(column_name) LIKE UPPER('%search_term%') 1.3. Replace spaces in the search term with '%' for wildcard matching. 2. Counting Distinct Values: 2.1. Use the COUNT DISTINCT function to calculate the total number of unique values in the following columns: 2.1.1. Branch Code or Branch Name 2.1.2. Store Code or Store Name 2.1.3. Product Code or Product Name 2.1.4. Principal Code or Principal Name 2.1.5. Product Type 2.1.6. Product Brand Name 2.1.7. Product Division Name 2.1.8. Product Department Name 2.1.9. Product Category Name 3. Summing Values: 3.1. Transactions and Sales Gross must use the SUM function 3.2. Quantity Purchase Order and Amount Purchase Order must use the SUM function 4. Data Aggregation: 4.1. Perform appropriate data aggregation based on the user's question. This could include: 4.1.1. SUM: To calculate the total value. 4.1.2. AVG: To calculate the average value. 4.1.3. MIN, MAX: To find the minimum or maximum value. 5. Sorting: If the result is a list, sort the values according to the user's question. Specify the column and sorting order (ASC for ascending, DESC for descending). 6. Data Structure Awareness: Understand that 'Branch' and 'Store' are not equivalent entities within the data. This means that queries should be structured to differentiate between these entities when necessary. Within the rule by human write a draft query. 
Then double check the {dialect} query for common mistakes, including: - Always make column data from datetime or date cast into string - Not using GROUP BY for aggregating data - Using NOT IN with NULL values - Using UNION when UNION ALL should have been used - Using BETWEEN for exclusive ranges - Data type mismatch in predicates - Properly quoting identifiers - Using the correct number of arguments for functions - Casting to the correct data type - Using the proper columns for joins You must you this format: First draft: <<FIRST_DRAFT_QUERY>> Final answer: <<FINAL_ANSWER_QUERY>> Here are history question from human, to help you understand the context: {chat_history} Here is the snippet of the data to help you understand more about the table: {snippet_data} Here is the current date if asked about today, the format date is in YYYY-MM-DD: {current_date} Your query answer must align with the question from Human. If the question asking 10 then show 10 rows. """ try: example_prompt = ChatPromptTemplate.from_messages( [ ("human", "{input}"), ("ai", "{output}"), ] ) few_shot_prompt = FewShotChatMessagePromptTemplate(example_prompt = example_prompt, examples = examples) prompt = ChatPromptTemplate.from_messages([("system", system), few_shot_prompt, ("human", "{input}")]).partial(dialect=database.dialect, chat_history=chat_history, snippet_data=snippet_data, current_date=current_date) chain = create_sql_query_chain(model, database, prompt=prompt, k=10) output_query = chain.invoke({"question": input_prompt}) except Exception as Error: output_query = "" prompt = "" output_prompt = { "output_prompt" : prompt, "output_script" : output_query, } return output_prompt ``` ### Error Message and Stack Trace (if applicable) The error message was: ``` File "/workspace/main.py", line 359, in chat_to_sql_validator few_shot_prompt = FewShotChatMessagePromptTemplate(example_prompt=ChatPromptTemplate.from_messages([("human", "{input}"), ("ai", "{output}")]), examples = examples) File 
"/layers/google.python.pip/pip/lib/python3.8/site-packages/pydantic/v1/main.py", line 341, in __init__ raise validation_error pydantic.v1.error_wrappers.ValidationError: 1 validation error for FewShotChatMessagePromptTemplate input_variables field required (type=value_error.missing)" ``` ### Description * I'm trying to generate a SQL script with my defined rules and examples, using a few-shot prompt and a chat prompt * I expected the result to be a dictionary containing the generated SQL query script * Instead, I got this error ``` ValidationError: 1 validation error for FewShotChatMessagePromptTemplate input_variables field required (type=value_error.missing) ``` * I tested it in my notebook and it works, but when I try to use it in a Cloud Function I get this error ### System Info functions-framework==3.* google-cloud-aiplatform google-cloud-storage google-cloud-bigquery-storage google-api-python-client google-auth langchain langchain-openai langchain-community langchain-google-vertexai langchain-openai tiktoken nest-asyncio bs4 faiss-cpu langchain_experimental tabulate pandas-gbq sqlalchemy sqlalchemy-bigquery flask I've also tried different versions of langchain and it also didn't work
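The validation failure above reduces to a required field that is neither supplied nor inferred. The sketch below is a hedged stdlib stand-in (the names are illustrative, not the real pydantic/LangChain internals); passing the field explicitly is the usual workaround to try when package versions disagree on whether it can be inferred.

```python
# Hypothetical simplification of the constructor-time validation.
def make_few_shot_template(example_prompt, examples, input_variables=None):
    if input_variables is None:
        # Mirrors: input_variables field required (type=value_error.missing)
        raise ValueError("input_variables field required (type=value_error.missing)")
    return {"example_prompt": example_prompt, "examples": examples,
            "input_variables": input_variables}

# Supplying the field explicitly avoids the error:
tpl = make_few_shot_template("prompt", [], input_variables=[])
```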
1 validation error for FewShotChatMessagePromptTemplate input_variables field required (type=value_error.missing)
https://api.github.com/repos/langchain-ai/langchain/issues/24108/comments
6
2024-07-11T07:18:05Z
2024-07-31T09:03:16Z
https://github.com/langchain-ai/langchain/issues/24108
2,402,448,597
24,108
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code ```python llm = VLLMOpenAI( openai_api_key="EMPTY", openai_api_base=api_base, model_name="microsoft/Phi-3-vision-128k-instruct", model_kwargs={"stop": ["."]} ) image_path = "invoice_data_images/Screenshot 2024-05-02 160946_page_1.png" with open(image_path, "rb") as image_file: image_base64 = base64.b64encode(image_file.read()).decode("utf-8") prompt_1 = "Give me the invoice date from the given image." messages = [ HumanMessage( content=[ {"type": "text", "text": prompt_1}, {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_base64}"}} ] ) ] response = llm.invoke(messages) print(response) ``` ### Error Message ``` { "name": "BadRequestError", "message": "Error code: 400 - {'object': 'error', 'message': 'This model's maximum context length is 8192 tokens. However, you requested 254457 tokens (254201 in the messages, 256 in the completion). Please reduce the length of the messages or completion.', 'type': 'BadRequestError', 'param': None, 'code': 400}" } ``` ### Description I hosted VLLM on an EC2 instance to extract text data from images using the Phi-3 Vision model. The model is hosted with the following command: `python3 -m vllm.entrypoints.openai.api_server --port 8000 --model microsoft/Phi-3-vision-128k-instruct --trust-remote-code --dtype=half --max_model_len=8192` When running the code, I encounter a BadRequestError due to exceeding the maximum context length. The error message indicates that the total number of tokens requested is 254457, which far exceeds the model's limit of 8192 tokens.
The base64-encoded image is being considered part of the prompt, significantly increasing the token count and leading to the context length issue. Even if the model's context length were 128k tokens, including a base64-encoded image in the prompt would always exceed the model's limit. # Expected Behavior The model should process the image without considering the base64 string as part of the prompt token count, thereby avoiding the context length issue. ### System Info langchain==0.2.7 langchain-community==0.2.7 langchain-core==0.2.12 langchain-text-splitters==0.2.2
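A back-of-envelope check makes the scale of the problem concrete. Base64 inflates bytes by a factor of 4/3, and if the encoded string is tokenized as ordinary text (the ~4 characters per token here is a rough assumption, not a measured tokenizer rate), even a modest screenshot overwhelms an 8192-token window.

```python
import base64

# Stand-in for a ~600 KB PNG screenshot.
image_bytes = bytes(600_000)
b64 = base64.b64encode(image_bytes).decode()   # 4/3 inflation: 800,000 chars
approx_tokens = len(b64) // 4                  # rough ~4 chars/token estimate
```

Under these assumptions the image alone accounts for roughly 200,000 tokens, consistent with the 254,201 message tokens in the error.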
Maximum Context Length Exceeded Due to Base64-Encoded Image in Prompt
https://api.github.com/repos/langchain-ai/langchain/issues/24107/comments
0
2024-07-11T07:07:57Z
2024-07-11T07:12:35Z
https://github.com/langchain-ai/langchain/issues/24107
2,402,430,115
24,107
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code Using RequestsWrapper to make a POST call using an agent: from langchain_community.utilities import RequestsWrapper Error - `ClientSession._request() got an unexpected keyword argument 'verify'` ### Error Message and Stack Trace (if applicable) Getting a `ClientSession._request() got an unexpected keyword argument 'verify'` error while using RequestsWrapper to do a POST request. ### Description I am using `RequestsWrapper` from `langchain_community.utilities` to do a `POST` call to my endpoint. Below is the error thrown by the agent. Error - `ClientSession._request() got an unexpected keyword argument 'verify'` I suspect the aiohttp session library got updated and the `verify` argument was removed from the `_request` method. Using langchain version 0.2.7 ### System Info System Information ------------------ > OS: Windows > OS Version: 10.0.22631 > Python Version: 3.10.2 (tags/v3.10.2:a58ebcc, Jan 17 2022, 14:12:15) [MSC v.1929 64 bit (AMD64)] Package Information ------------------- > langchain_core: 0.2.12 > langchain: 0.2.7 > langchain_community: 0.2.6 > langsmith: 0.1.83 > langchain_aws: 0.1.9 > langchain_google_community: 1.0.6 > langchain_openai: 0.1.14 > langchain_text_splitters: 0.2.2 > langgraph: 0.0.55 Packages not installed (Not Necessarily a Problem) -------------------------------------------------- The following packages were not found: > langserve
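The failure mode above is a signature mismatch: requests-style calls accept `verify=`, while aiohttp's request method takes `ssl=` instead, so forwarding kwargs verbatim raises `TypeError`. The sketch below is a simplified stand-in, not the real aiohttp API.

```python
# aiohttp-style request method: has an ssl parameter, but no verify parameter.
def aiohttp_style_request(method, url, *, ssl=None):
    return (method, url, ssl)

# What happens when requests-style kwargs are forwarded verbatim.
def forward_kwargs(**kwargs):
    try:
        return aiohttp_style_request("POST", "https://example.invalid", **kwargs)
    except TypeError as exc:
        return str(exc)

error_text = forward_kwargs(verify=False)  # the failing path from the issue
ok = forward_kwargs(ssl=False)             # the parameter the aiohttp style accepts
```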
Facing Error in ClientSession._request() While Doing POST Call using RequestWrapper
https://api.github.com/repos/langchain-ai/langchain/issues/24106/comments
0
2024-07-11T06:10:37Z
2024-07-11T06:13:18Z
https://github.com/langchain-ai/langchain/issues/24106
2,402,333,054
24,106
[ "langchain-ai", "langchain" ]
### URL https://python.langchain.com/v0.2/docs/how_to/output_parser_fixing/ ### Checklist - [X] I added a very descriptive title to this issue. - [X] I included a link to the documentation page I am referring to (if applicable). ### Issue with current documentation: langchain 0.2.6 from langchain.output_parsers import (OutputFixingParser) fixing_parser = OutputFixingParser.from_llm(parser= parser, llm= ChatOpenAI()) an error: KeyError: "Input to PromptTemplate is missing variables {'completion'}. Expected: ['completion', 'error', 'instructions'] Received: ['instructions', 'input', 'error']" ### Idea or request for content: _No response_
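The KeyError above amounts to a prompt-variable mismatch: the fixing template declares a `{completion}` variable, but the caller supplies `input` instead, so the missing-variable check fires. The sketch below is a hedged stdlib stand-in for that check, not the real PromptTemplate internals.

```python
# Hypothetical simplification of the template's input validation.
def check_prompt_inputs(expected, provided):
    missing = set(expected) - set(provided)
    if missing:
        raise KeyError(
            f"Input to PromptTemplate is missing variables {missing}. "
            f"Expected: {sorted(expected)} Received: {sorted(provided)}"
        )
    return True

# Supplying the variables the template actually declares passes validation:
ok = check_prompt_inputs({"completion", "error", "instructions"},
                         {"completion", "error", "instructions"})
```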
OutputFixingParser,KeyError
https://api.github.com/repos/langchain-ai/langchain/issues/24105/comments
0
2024-07-11T02:46:57Z
2024-07-11T02:49:45Z
https://github.com/langchain-ai/langchain/issues/24105
2,402,110,694
24,105
[ "langchain-ai", "langchain" ]
### URL https://python.langchain.com/v0.2/docs/integrations/text_embedding/baidu_qianfan_endpoint/ ### Checklist - [X] I added a very descriptive title to this issue. - [X] I included a link to the documentation page I am referring to (if applicable). ### Issue with current documentation: I ran the sample code and got an error. How can I solve it? Operating environment: python 3.10 langchain-community 0.2.7 qianfan 0.4.0.1 Run the code: ```python import os from langchain_community.embeddings import QianfanEmbeddingsEndpoint os.environ["QIANFAN_AK"] = "xxx" os.environ["QIANFAN_SK"] = "xxx" embed = QianfanEmbeddingsEndpoint() res = embed.embed_documents(["hi", "world"]) print(res) ``` Error message: ``` Traceback (most recent call last): File "/llm-example/langchain/test.py", line 9, in <module> embed = QianfanEmbeddingsEndpoint() File "/llm-example/langchain/.venv/lib/python3.10/site-packages/pydantic/v1/main.py", line 341, in __init__ raise validation_error pydantic.v1.error_wrappers.ValidationError: 2 validation errors for QianfanEmbeddingsEndpoint qianfan_ak str type expected (type=type_error.str) qianfan_sk str type expected (type=type_error.str) ``` ### Idea or request for content: _No response_
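The two validation errors above say the field requires a plain `str` but received some other type, which points to a mismatch between what the credential plumbing returns and what the pydantic field expects. The sketch below is a hedged stdlib stand-in for that check; the class and function names are illustrative, not the real qianfan or pydantic internals.

```python
# Stand-in for a non-str credential wrapper coming back from the environment.
class WrappedSecret:
    def __init__(self, value):
        self._value = value

# Hypothetical simplification of the field validator that fires twice above.
def validate_str_field(name, value):
    if not isinstance(value, str):
        raise TypeError(f"{name}: str type expected (type=type_error.str)")
    return value

ak = validate_str_field("qianfan_ak", "xxx")  # a plain str passes
```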
DOC: <Issue related to /v0.2/docs/integrations/text_embedding/baidu_qianfan_endpoint/>
https://api.github.com/repos/langchain-ai/langchain/issues/24104/comments
2
2024-07-11T01:45:20Z
2024-07-19T13:44:24Z
https://github.com/langchain-ai/langchain/issues/24104
2,402,037,774
24,104
[ "langchain-ai", "langchain" ]
### URL https://python.langchain.com/v0.2/docs/integrations/platforms/aws/ ### Checklist - [X] I added a very descriptive title to this issue. - [X] I included a link to the documentation page I am referring to (if applicable). ### Issue with current documentation: _No response_ ### Idea or request for content: _No response_
DOC: <Issue related to /v0.2/docs/integrations/platforms/aws/>
https://api.github.com/repos/langchain-ai/langchain/issues/24094/comments
0
2024-07-10T22:44:26Z
2024-07-10T22:46:56Z
https://github.com/langchain-ai/langchain/issues/24094
2,401,863,069
24,094
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code from langchain_community.tools.wikidata.tool import WikidataAPIWrapper, WikidataQueryRun wikidata_tool = WikidataQueryRun(api_wrapper=WikidataAPIWrapper()) wikidata_tool.name, wikidata_tool.description print(wikidata_tool.run("Alan Turing")) ### Error Message and Stack Trace (if applicable) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[16], line 1 ----> 1 print(wikidata_tool.run("Alan Touring")) File ~\.virtualenvs\udemy-llm-agents-juqR4cf2\lib\site-packages\langchain_core\tools.py:452, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, run_id, config, **kwargs) 450 except (Exception, KeyboardInterrupt) as e: 451 run_manager.on_tool_error(e) --> 452 raise e 453 else: 454 run_manager.on_tool_end(observation, color=color, name=self.name, **kwargs) File ~\.virtualenvs\udemy-llm-agents-juqR4cf2\lib\site-packages\langchain_core\tools.py:409, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, run_id, config, **kwargs) 406 parsed_input = self._parse_input(tool_input) 407 tool_args, tool_kwargs = self._to_args_and_kwargs(parsed_input) 408 observation = ( --> 409 context.run( 410 self._run, *tool_args, run_manager=run_manager, **tool_kwargs 411 ) 412 if new_arg_supported 413 else context.run(self._run, *tool_args, **tool_kwargs) 414 ) 415 except ValidationError as e: 416 if not self.handle_validation_error: File 
~\.virtualenvs\udemy-llm-agents-juqR4cf2\lib\site-packages\langchain_community\tools\wikidata\tool.py:30, in WikidataQueryRun._run(self, query, run_manager) 24 def _run( 25 self, 26 query: str, 27 run_manager: Optional[CallbackManagerForToolRun] = None, 28 ) -> str: 29 """Use the Wikidata tool.""" ---> 30 return self.api_wrapper.run(query) File ~\.virtualenvs\udemy-llm-agents-juqR4cf2\lib\site-packages\langchain_community\utilities\wikidata.py:177, in WikidataAPIWrapper.run(self, query) 175 docs = [] 176 for item in items[: self.top_k_results]: --> 177 if doc := self._item_to_document(item): 178 docs.append(f"Result {item}:\n{doc.page_content}") 179 if not docs: File ~\.virtualenvs\udemy-llm-agents-juqR4cf2\lib\site-packages\langchain_community\utilities\wikidata.py:149, in WikidataAPIWrapper._item_to_document(self, qid) 147 for prop, values in resp.statements.items(): 148 if values: --> 149 doc_lines.append(f"{prop.label}: {', '.join(values)}") 151 return Document( 152 page_content=("\n".join(doc_lines))[: self.doc_content_chars_max], 153 meta={"title": qid, "source": f"https://www.wikidata.org/wiki/{qid}"}, 154 ) TypeError: sequence item 0: expected str instance, FluentValue found ### Description I'm trying to use various tools from the 'langchain' library. A call to wikidata_tool.run("string") resulted in the above error. I identified a bug in the code below (using the latest version of the build): https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/wikidata.py Line 149: `doc_lines.append(f"{prop.label}: {', '.join(values)}")` (source of error: the items in `values` are not `str` instances) So I commented out the code above and made the change below; the code worked fine after that. 
`doc_lines.append(f"{prop.label}: {', '.join(map(str,values))}") ` ### System Info ### PIP freeze | grep langchain output langchain==0.2.7 langchain-anthropic==0.1.19 langchain-cohere==0.1.9 langchain-community==0.2.7 langchain-core==0.2.12 langchain-experimental==0.0.62 langchain-google-genai==1.0.4 langchain-openai==0.1.14 langchain-text-splitters==0.2.2 langchainhub==0.1.20 ### **python -m langchain_core.sys_info** System Information ------------------ **> OS: Windows** **> OS Version: 10.0.19045** **> Python Version: 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)] ** Package Information ------------------- > langchain_core: 0.2.12 > langchain: 0.2.7 > langchain_community: 0.2.7 > langsmith: 0.1.84 > langchain_anthropic: 0.1.19 > langchain_cohere: 0.1.9 > langchain_experimental: 0.0.62 > langchain_google_genai: 1.0.4 > langchain_openai: 0.1.14 > langchain_text_splitters: 0.2.2 > langchainhub: 0.1.20 > langgraph: 0.0.51 Packages not installed (Not Necessarily a Problem) -------------------------------------------------- The following packages were not found: > langserve [dependency graph.txt](https://github.com/user-attachments/files/16169561/dependency.graph.txt)
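The reported fix can be reproduced in isolation: `str.join` requires every element of the sequence to be a `str`, so joining a list of `FluentValue` objects raises the `TypeError` above, while coercing each value first succeeds. A minimal sketch, using a hypothetical stand-in class for `FluentValue`:

```python
# Minimal reproduction of the join failure described above.
# FluentValueStandIn is a hypothetical stand-in for the FluentValue
# objects returned by the Wikidata client; it is str()-able but not a str.
class FluentValueStandIn:
    def __init__(self, value):
        self.value = value

    def __str__(self):
        return str(self.value)


values = [FluentValueStandIn("mathematician"), FluentValueStandIn("computer scientist")]

# The original line fails because join() only accepts str elements.
err = ""
try:
    ", ".join(values)
except TypeError as exc:
    err = str(exc)
print(err)

# The proposed fix coerces every element to str before joining.
joined = ", ".join(map(str, values))
print(joined)  # mathematician, computer scientist
```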
TypeError: sequence item 0: expected str instance, FluentValue found - Error while invoking WikidataQueryRun
https://api.github.com/repos/langchain-ai/langchain/issues/24093/comments
2
2024-07-10T22:41:55Z
2024-07-10T22:55:43Z
https://github.com/langchain-ai/langchain/issues/24093
2,401,860,745
24,093
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code from langchain_community.tools.wikidata.tool import WikidataAPIWrapper, WikidataQueryRun wikidata_tool = WikidataQueryRun(api_wrapper=WikidataAPIWrapper()) wikidata_tool.name, wikidata_tool.description print(wikidata_tool.run("Alan Turing")) ### Error Message and Stack Trace (if applicable) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[16], line 1 ----> 1 print(wikidata_tool.run("Alan Touring")) File ~\.virtualenvs\udemy-llm-agents-juqR4cf2\lib\site-packages\langchain_core\tools.py:452, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, run_id, config, **kwargs) 450 except (Exception, KeyboardInterrupt) as e: 451 run_manager.on_tool_error(e) --> 452 raise e 453 else: 454 run_manager.on_tool_end(observation, color=color, name=self.name, **kwargs) File ~\.virtualenvs\udemy-llm-agents-juqR4cf2\lib\site-packages\langchain_core\tools.py:409, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, run_id, config, **kwargs) 406 parsed_input = self._parse_input(tool_input) 407 tool_args, tool_kwargs = self._to_args_and_kwargs(parsed_input) 408 observation = ( --> 409 context.run( 410 self._run, *tool_args, run_manager=run_manager, **tool_kwargs 411 ) 412 if new_arg_supported 413 else context.run(self._run, *tool_args, **tool_kwargs) 414 ) 415 except ValidationError as e: 416 if not self.handle_validation_error: File 
~\.virtualenvs\udemy-llm-agents-juqR4cf2\lib\site-packages\langchain_community\tools\wikidata\tool.py:30, in WikidataQueryRun._run(self, query, run_manager) 24 def _run( 25 self, 26 query: str, 27 run_manager: Optional[CallbackManagerForToolRun] = None, 28 ) -> str: 29 """Use the Wikidata tool.""" ---> 30 return self.api_wrapper.run(query) File ~\.virtualenvs\udemy-llm-agents-juqR4cf2\lib\site-packages\langchain_community\utilities\wikidata.py:177, in WikidataAPIWrapper.run(self, query) 175 docs = [] 176 for item in items[: self.top_k_results]: --> 177 if doc := self._item_to_document(item): 178 docs.append(f"Result {item}:\n{doc.page_content}") 179 if not docs: File ~\.virtualenvs\udemy-llm-agents-juqR4cf2\lib\site-packages\langchain_community\utilities\wikidata.py:149, in WikidataAPIWrapper._item_to_document(self, qid) 147 for prop, values in resp.statements.items(): 148 if values: --> 149 doc_lines.append(f"{prop.label}: {', '.join(values)}") 151 return Document( 152 page_content=("\n".join(doc_lines))[: self.doc_content_chars_max], 153 meta={"title": qid, "source": f"https://www.wikidata.org/wiki/{qid}"}, 154 ) TypeError: sequence item 0: expected str instance, FluentValue found ### Description I'm trying to use various tools from the 'langchain' library. Call to wikidata_tool.run("string") resulted in the above error. Identified a bug in the below code ( using latest version of the build) https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/wikidata.py Line 149: `doc_lines.append(f"{prop.label}: {', '.join(values)}")` ** > /*** source of error, here 'values' is a list not a string ***/ ** So I commented above code and made the below changes. the code worked fine after that. 
`doc_lines.append(f"{prop.label}: {', '.join(map(str,values))}") ` ### System Info ### PIP freeze | grep langchain output langchain==0.2.7 langchain-anthropic==0.1.19 langchain-cohere==0.1.9 langchain-community==0.2.7 langchain-core==0.2.12 langchain-experimental==0.0.62 langchain-google-genai==1.0.4 langchain-openai==0.1.14 langchain-text-splitters==0.2.2 langchainhub==0.1.20 ### **python -m langchain_core.sys_info** System Information ------------------ **> OS: Windows** **> OS Version: 10.0.19045** **[dependency graph.txt](https://github.com/user-attachments/files/16169560/dependency.graph.txt)** Package Information ------------------- > langchain_core: 0.2.12 > langchain: 0.2.7 > langchain_community: 0.2.7 > langsmith: 0.1.84 > langchain_anthropic: 0.1.19 > langchain_cohere: 0.1.9 > langchain_experimental: 0.0.62 > langchain_google_genai: 1.0.4 > langchain_openai: 0.1.14 > langchain_text_splitters: 0.2.2 > langchainhub: 0.1.20 > langgraph: 0.0.51 Packages not installed (Not Necessarily a Problem) -------------------------------------------------- The following packages were not found: > langserve [dependency graph.txt](https://github.com/user-attachments/files/16169561/dependency.graph.txt)
TypeError: sequence item 0: expected str instance, FluentValue found - Error while invoking WikidataQueryRun
https://api.github.com/repos/langchain-ai/langchain/issues/24092/comments
0
2024-07-10T22:38:53Z
2024-07-10T22:41:34Z
https://github.com/langchain-ai/langchain/issues/24092
2,401,857,853
24,092
[ "langchain-ai", "langchain" ]
### Privileged issue - [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here. ### Issue Content Type hints for `@tool` decorator are incorrect, which causes downstream type-checking errors. For example: ```python from langchain_core.tools import tool, StructuredTool @tool def multiply(a: int, b: int) -> int: """multiply two ints""" return a * b tool_: StructuredTool = multiply ``` leads to the following mypy errors: ```pycon error: Incompatible types in assignment (expression has type "Callable[..., Any]", variable has type "StructuredTool") [assignment] ```
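Until the decorator's annotations are fixed, a common caller-side workaround is an explicit `typing.cast`. The sketch below is self-contained: the `tool` decorator and `StructuredTool` class are simplified stand-ins that mimic the incorrect return annotation, not the real `langchain_core.tools` implementations.

```python
from typing import Any, Callable, cast


class StructuredTool:
    """Simplified stand-in for langchain_core.tools.StructuredTool."""

    def __init__(self, func: Callable[..., Any]) -> None:
        self.func = func


def tool(func: Callable[..., Any]) -> Callable[..., Any]:
    # Annotated as returning Callable[..., Any], mirroring the incorrect
    # hint that triggers the mypy error above; at runtime it actually
    # returns a StructuredTool instance.
    return StructuredTool(func)


@tool
def multiply(a: int, b: int) -> int:
    """multiply two ints"""
    return a * b


# Caller-side workaround: tell the type checker what the runtime type is.
tool_ = cast(StructuredTool, multiply)
print(tool_.func(3, 4))  # 12
```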
core: Fix @tool typing
https://api.github.com/repos/langchain-ai/langchain/issues/24089/comments
0
2024-07-10T22:09:47Z
2024-07-10T22:12:25Z
https://github.com/langchain-ai/langchain/issues/24089
2,401,823,001
24,089
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code My network is 0.0.0.0 when coding my ingestion looks alright but when wanting to add documents to vector store i get the error below. Also, i am unable to fetch the documents in my MongoDB as it shows **connect ETIMEDOUT 13.68.114.169:27017** # Add documents to vector store insert_ids = vectorstore.add_documents(ingestion_docs) print(f"Inserted Document IDs: {insert_ids}") **the following error occurs:** Error inserting documents: ac-vtawjby-shard-00-02.oecedvw.mongodb.net:27017: timed out (configured timeouts: socketTimeoutMS: 20000.0ms, connectTimeoutMS: 20000.0ms),ac-vtawjby-shard-00-00.oecedvw.mongodb.net:27017: timed out (configured timeouts: socketTimeoutMS: 20000.0ms, connectTimeoutMS: 20000.0ms),ac-vtawjby-shard-00-01.oecedvw.mongodb.net:27017: timed out (configured timeouts: socketTimeoutMS: 20000.0ms, connectTimeoutMS: 20000.0ms), Timeout: 30s, Topology Description: <TopologyDescription id: 668ed682a78aef78b848fcf0, topology_type: ReplicaSetNoPrimary, servers: [<ServerDescription ('ac-vtawjby-shard-00-00.oecedvw.mongodb.net', 27017) server_type: Unknown, rtt: None, error=NetworkTimeout('ac-vtawjby-shard-00-00.oecedvw.mongodb.net:27017: timed out (configured timeouts: socketTimeoutMS: 20000.0ms, connectTimeoutMS: 20000.0ms)')>, <ServerDescription ('ac-vtawjby-shard-00-01.oecedvw.mongodb.net', 27017) server_type: Unknown, rtt: None, error=NetworkTimeout('ac-vtawjby-shard-00-01.oecedvw.mongodb.net:27017: timed out (configured timeouts: socketTimeoutMS: 20000.0ms, connectTimeoutMS: 20000.0ms)')>, <ServerDescription 
('ac-vtawjby-shard-00-02.oecedvw.mongodb.net', 27017) server_type: Unknown, rtt: None, error=NetworkTimeout('ac-vtawjby-shard-00-02.oecedvw.mongodb.net:27017: timed out (configured timeouts: socketTimeoutMS: 20000.0ms, connectTimeoutMS: 20000.0ms)')>]> ### Error Message and Stack Trace (if applicable) _No response_ ### Description All these packages were used and came out true import os from pymongo.mongo_client import MongoClient from langchain_cohere.chat_models import ChatCohere from langchain_cohere.embeddings import CohereEmbeddings from langchain_core.output_parsers import StrOutputParser from langchain_mongodb import MongoDBAtlasVectorSearch from langchain.prompts import PromptTemplate from langchain.schema import Document from dotenv import load_dotenv load_dotenv() and the following .env were located correctly: COHERE_API_KEY = "XXXX" LANGCHAIN_TRACING_V2=true LANGCHAIN_ENDPOINT="https://api.smith.langchain.com" LANGCHAIN_API_KEY="XXXX" LANGCHAIN_PROJECT="XXXX" ATLAS_CONNECTION_STRING = "XXXX" ### System Info **-m langchain_core.sys_info** System Information ------------------ > OS: Windows > OS Version: 10.0.22621 > Python Version: 3.12.4 (tags/v3.12.4:8e8a4ba, Jun 6 2024, 19:30:16) [MSC v.1940 64 bit (AMD64)] Package Information ------------------- > langchain_core: 0.2.9 > langchain: 0.2.5 > langchain_community: 0.2.5 > langsmith: 0.1.81 > langchain_cohere: 0.1.8 > langchain_mongodb: 0.1.6 > langchain_text_splitters: 0.2.1 Packages not installed (Not Necessarily a Problem) -------------------------------------------------- The following packages were not found: > langgraph > langserve
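Timeouts like these are usually a network or Atlas IP-allow-list problem rather than a LangChain one, but the client-side limits named in the error message can be raised through options in the Atlas connection string. A hedged sketch of the relevant URI options (placeholders, not real credentials; 60 s values are illustrative):

```
mongodb+srv://<user>:<password>@<cluster-host>/?retryWrites=true&w=majority&connectTimeoutMS=60000&socketTimeoutMS=60000&serverSelectionTimeoutMS=60000
```

If raising the timeouts does not help, the usual next check is that the machine's public IP is on the Atlas Network Access list and that port 27017 outbound is not blocked.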
Adding documents to vector store/server timeout
https://api.github.com/repos/langchain-ai/langchain/issues/24082/comments
0
2024-07-10T18:57:19Z
2024-07-10T19:00:43Z
https://github.com/langchain-ai/langchain/issues/24082
2,401,449,080
24,082
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code from langchain_community.llms import VLLM llm = VLLM( model="llava-hf/llava-1.5-7b-hf", trust_remote_code=True, # mandatory for hf models max_new_tokens=128, top_k=10, top_p=0.95, temperature=0.8, ) OR llm = VLLM( model="llava-hf/llava-1.5-7b-hf", trust_remote_code=True, # mandatory for hf models max_new_tokens=128, top_k=10, top_p=0.95, temperature=0.8, image_input_type="pixel_values", image_token_id=123, image_input_shape="224,224,3", image_feature_size=512, ) Both the ways of instantiating the VLLM class gives the same error. ### Error Message and Stack Trace (if applicable) llm = VLLM( [rank0]: ^^^^^ [rank0]: File /miniforge3/envs/ipex-vllm/lib/python3.11/site-packages/pydantic/v1/main.py", line 341, in __init__ [rank0]: raise validation_error [rank0]: pydantic.v1.error_wrappers.ValidationError: 1 validation error for VLLM [rank0]: __root__ [rank0]: Provide `image_input_type` and other vision related configurations through LLM entrypoint or engine arguments. (type=assertion_error) ### Description I am trying to use VLLM through Langchain to run LLaVA model. I am using CPU to run my code. I am getting this error: "Provide `image_input_type` and other vision related configurations through LLM entrypoint or engine arguments. " I went through the source code of vllm/vllm/engine/arg_utils.py:class EngineArgs: and passed the Vision Configurations in VLLM class as above. 
However, I see that even after setting image_input_type="pixel_values" in the VLLM class (as seen above), self.image_input_type in the EngineArgs class is still None. Name: vllm Version: 0.4.2+cpu Summary: A high-throughput and memory-efficient inference and serving engine for LLMs Home-page: https://github.com/vllm-project/vllm Author: vLLM Team Author-email: License: Apache 2.0 Location: /home/ceed-user/miniforge3/envs/ipex-vllm/lib/python3.11/site-packages/vllm-0.4.2+cpu-py3.11-linux-x86_64.egg Requires: cmake, fastapi, filelock, lm-format-enforcer, ninja, numpy, openai, outlines, prometheus-fastapi-instrumentator, prometheus_client, psutil, py-cpuinfo, pydantic, requests, sentencepiece, tiktoken, tokenizers, torch, transformers, triton, typing_extensions, uvicorn Required-by: ### System Info langchain==0.2.7 langchain-community==0.2.7 langchain-core==0.2.12 langchain-text-splitters==0.2.2

LLaVA model error in VLLM through Langchain
https://api.github.com/repos/langchain-ai/langchain/issues/24078/comments
2
2024-07-10T16:40:38Z
2024-07-11T15:14:47Z
https://github.com/langchain-ai/langchain/issues/24078
2,401,225,592
24,078
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code ```python from typing import List from langchain_core.output_parsers import PydanticOutputParser from langchain_core.prompts import ChatPromptTemplate from langchain_core.pydantic_v1 import BaseModel, Field from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint class Calculator(BaseModel): """Multiply two integers together.""" a: int = Field(..., description="First integer") b: int = Field(..., description="Second integer") llm = HuggingFaceEndpoint( repo_id="HuggingFaceH4/zephyr-7b-beta", task="text-generation", max_new_tokens=512, do_sample=False, repetition_penalty=1.03, ) chat_model = ChatHuggingFace(llm=llm, verbose=True).with_structured_output(schema=Calculator, include_raw=True) print(chat_model.invoke("How much is 3 multiplied by 12?")) ``` ### Error Message and Stack Trace (if applicable) { "name": "InferenceTimeoutError", "message": "Model not loaded on the server: https://api-inference.huggingface.co/models/HuggingFaceH4/zephyr-7b-beta/v1/chat/completions. 
Please retry with a higher timeout (current: 120).", "stack": "--------------------------------------------------------------------------- HTTPError Traceback (most recent call last) File /usr/local/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py:304, in hf_raise_for_status(response, endpoint_name) 303 try: --> 304 response.raise_for_status() 305 except HTTPError as e: File /usr/local/lib/python3.11/site-packages/requests/models.py:1021, in Response.raise_for_status(self) 1020 if http_error_msg: -> 1021 raise HTTPError(http_error_msg, response=self) HTTPError: 503 Server Error: Service Unavailable for url: https://api-inference.huggingface.co/models/HuggingFaceH4/zephyr-7b-beta/v1/chat/completions The above exception was the direct cause of the following exception: HfHubHTTPError Traceback (most recent call last) File /usr/local/lib/python3.11/site-packages/huggingface_hub/inference/_client.py:273, in InferenceClient.post(self, json, data, model, task, stream) 272 try: --> 273 hf_raise_for_status(response) 274 return response.iter_lines() if stream else response.content File /usr/local/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py:371, in hf_raise_for_status(response, endpoint_name) 369 # Convert `HTTPError` into a `HfHubHTTPError` to display request information 370 # as well (request id and/or server error message) --> 371 raise HfHubHTTPError(str(e), response=response) from e HfHubHTTPError: 503 Server Error: Service Unavailable for url: https://api-inference.huggingface.co/models/HuggingFaceH4/zephyr-7b-beta/v1/chat/completions (Request ID: ORVZdEUt8LFrnogCctYgS) Model HuggingFaceH4/zephyr-7b-beta is currently loading The above exception was the direct cause of the following exception: InferenceTimeoutError Traceback (most recent call last) Cell In[1], line 27 17 llm = HuggingFaceEndpoint( 18 repo_id=\"HuggingFaceH4/zephyr-7b-beta\", 19 # repo_id=\"meta-llama/Meta-Llama-3-8B-Instruct\", (...) 
23 repetition_penalty=1.03, 24 ) 26 chat_model = ChatHuggingFace(llm=llm,verbose=True,).with_structured_output(schema=Calculator, include_raw=True) ---> 27 print(chat_model.invoke(\"How much is 3 multiplied by 12?\")) File /usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py:2503, in RunnableSequence.invoke(self, input, config, **kwargs) 2499 config = patch_config( 2500 config, callbacks=run_manager.get_child(f\"seq:step:{i+1}\") 2501 ) 2502 if i == 0: -> 2503 input = step.invoke(input, config, **kwargs) 2504 else: 2505 input = step.invoke(input, config) File /usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py:3150, in RunnableParallel.invoke(self, input, config) 3137 with get_executor_for_config(config) as executor: 3138 futures = [ 3139 executor.submit( 3140 step.invoke, (...) 3148 for key, step in steps.items() 3149 ] -> 3150 output = {key: future.result() for key, future in zip(steps, futures)} 3151 # finish the root run 3152 except BaseException as e: File /usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py:3150, in <dictcomp>(.0) 3137 with get_executor_for_config(config) as executor: 3138 futures = [ 3139 executor.submit( 3140 step.invoke, (...) 
3148 for key, step in steps.items() 3149 ] -> 3150 output = {key: future.result() for key, future in zip(steps, futures)} 3151 # finish the root run 3152 except BaseException as e: File /usr/local/lib/python3.11/concurrent/futures/_base.py:456, in Future.result(self, timeout) 454 raise CancelledError() 455 elif self._state == FINISHED: --> 456 return self.__get_result() 457 else: 458 raise TimeoutError() File /usr/local/lib/python3.11/concurrent/futures/_base.py:401, in Future.__get_result(self) 399 if self._exception: 400 try: --> 401 raise self._exception 402 finally: 403 # Break a reference cycle with the exception in self._exception 404 self = None File /usr/local/lib/python3.11/concurrent/futures/thread.py:58, in _WorkItem.run(self) 55 return 57 try: ---> 58 result = self.fn(*self.args, **self.kwargs) 59 except BaseException as exc: 60 self.future.set_exception(exc) File /usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py:4586, in RunnableBindingBase.invoke(self, input, config, **kwargs) 4580 def invoke( 4581 self, 4582 input: Input, 4583 config: Optional[RunnableConfig] = None, 4584 **kwargs: Optional[Any], 4585 ) -> Output: -> 4586 return self.bound.invoke( 4587 input, 4588 self._merge_configs(config), 4589 **{**self.kwargs, **kwargs}, 4590 ) File /usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:248, in BaseChatModel.invoke(self, input, config, stop, **kwargs) 237 def invoke( 238 self, 239 input: LanguageModelInput, (...) 
243 **kwargs: Any, 244 ) -> BaseMessage: 245 config = ensure_config(config) 246 return cast( 247 ChatGeneration, --> 248 self.generate_prompt( 249 [self._convert_input(input)], 250 stop=stop, 251 callbacks=config.get(\"callbacks\"), 252 tags=config.get(\"tags\"), 253 metadata=config.get(\"metadata\"), 254 run_name=config.get(\"run_name\"), 255 run_id=config.pop(\"run_id\", None), 256 **kwargs, 257 ).generations[0][0], 258 ).message File /usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:681, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs) 673 def generate_prompt( 674 self, 675 prompts: List[PromptValue], (...) 678 **kwargs: Any, 679 ) -> LLMResult: 680 prompt_messages = [p.to_messages() for p in prompts] --> 681 return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) File /usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:538, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs) 536 if run_managers: 537 run_managers[i].on_llm_error(e, response=LLMResult(generations=[])) --> 538 raise e 539 flattened_outputs = [ 540 LLMResult(generations=[res.generations], llm_output=res.llm_output) # type: ignore[list-item] 541 for res in results 542 ] 543 llm_output = self._combine_llm_outputs([res.llm_output for res in results]) File /usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:528, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs) 525 for i, m in enumerate(messages): 526 try: 527 results.append( --> 528 self._generate_with_cache( 529 m, 530 stop=stop, 531 run_manager=run_managers[i] if run_managers else None, 532 **kwargs, 533 ) 534 ) 535 except BaseException as e: 536 if run_managers: File /usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:753, in 
BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs) 751 else: 752 if inspect.signature(self._generate).parameters.get(\"run_manager\"): --> 753 result = self._generate( 754 messages, stop=stop, run_manager=run_manager, **kwargs 755 ) 756 else: 757 result = self._generate(messages, stop=stop, **kwargs) File /usr/local/lib/python3.11/site-packages/langchain_huggingface/chat_models/huggingface.py:218, in ChatHuggingFace._generate(self, messages, stop, run_manager, **kwargs) 216 elif _is_huggingface_endpoint(self.llm): 217 message_dicts = self._create_message_dicts(messages, stop) --> 218 answer = self.llm.client.chat_completion(messages=message_dicts, **kwargs) 219 return self._create_chat_result(answer) 220 else: File /usr/local/lib/python3.11/site-packages/huggingface_hub/inference/_client.py:706, in InferenceClient.chat_completion(self, messages, model, stream, frequency_penalty, logit_bias, logprobs, max_tokens, n, presence_penalty, seed, stop, temperature, tool_choice, tool_prompt, tools, top_logprobs, top_p) 703 model_url += \"/v1/chat/completions\" 705 try: --> 706 data = self.post( 707 model=model_url, 708 json=dict( 709 model=\"tgi\", # random string 710 messages=messages, 711 frequency_penalty=frequency_penalty, 712 logit_bias=logit_bias, 713 logprobs=logprobs, 714 max_tokens=max_tokens, 715 n=n, 716 presence_penalty=presence_penalty, 717 seed=seed, 718 stop=stop, 719 temperature=temperature, 720 tool_choice=tool_choice, 721 tool_prompt=tool_prompt, 722 tools=tools, 723 top_logprobs=top_logprobs, 724 top_p=top_p, 725 stream=stream, 726 ), 727 stream=stream, 728 ) 729 except HTTPError as e: 730 if e.response.status_code in (400, 404, 500): 731 # Let's consider the server is not a chat completion server. 732 # Then we call again `chat_completion` which will render the chat template client side. 
733 # (can be HTTP 500, HTTP 400, HTTP 404 depending on the server) File /usr/local/lib/python3.11/site-packages/huggingface_hub/inference/_client.py:283, in InferenceClient.post(self, json, data, model, task, stream) 280 if error.response.status_code == 503: 281 # If Model is unavailable, either raise a TimeoutError... 282 if timeout is not None and time.time() - t0 > timeout: --> 283 raise InferenceTimeoutError( 284 f\"Model not loaded on the server: {url}. Please retry with a higher timeout (current:\" 285 f\" {self.timeout}).\", 286 request=error.request, 287 response=error.response, 288 ) from error 289 # ...or wait 1s and retry 290 logger.info(f\"Waiting for model to be loaded on the server: {error}\") InferenceTimeoutError: Model not loaded on the server: https://api-inference.huggingface.co/models/HuggingFaceH4/zephyr-7b-beta/v1/chat/completions. Please retry with a higher timeout (current: 120)." } ### Description I'm encountering an issue when using the HuggingFaceEndpoint with the with_structured_output method in LangChain. The issue arises due to a model loading timeout, preventing the model from being used effectively. I also tried to increase the timeout to 300 but still doesnt work. ### System Info ### System Info System Information ------------------ > OS: Linux > OS Version: #1 SMP Fri Mar 29 23:14:13 UTC 2024 > Python Version: 3.11.7 (main, Dec 19 2023, 20:33:49) [GCC 12.2.0] Package Information ------------------- > langchain_core: 0.2.11 > langchain: 0.2.6 > langchain_community: 0.2.6 > langsmith: 0.1.83 > langchain_experimental: 0.0.62 > langchain_huggingface: 0.0.3 > langchain_milvus: 0.1.1 > langchain_text_splitters: 0.2.2 Packages not installed (Not Necessarily a Problem) -------------------------------------------------- The following packages were not found: > langgraph > langserve
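The HTTP 503 here means the serverless Inference API is still loading the model; any client has to either wait longer or retry the call. A generic retry-with-backoff helper can paper over this. The sketch below is not part of LangChain, and `RuntimeError` stands in for `huggingface_hub.InferenceTimeoutError`:

```python
import time


def call_with_retry(fn, attempts=4, base_delay=1.0):
    """Retry a callable on transient 'model loading' errors with linear backoff."""
    last_exc = None
    for i in range(attempts):
        try:
            return fn()
        except RuntimeError as exc:  # stand-in for InferenceTimeoutError
            last_exc = exc
            time.sleep(base_delay * (i + 1))
    raise last_exc


# Example: a fake endpoint that fails twice before the model is loaded.
calls = {"n": 0}


def fake_endpoint():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("503: model is currently loading")
    return "12 * 3 = 36"


result = call_with_retry(fake_endpoint, base_delay=0.01)
print(result)  # 12 * 3 = 36
```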
HuggingFaceEndpoint Fails to Use with_structured_output Due to Model Loading Timeout
https://api.github.com/repos/langchain-ai/langchain/issues/24077/comments
1
2024-07-10T16:38:26Z
2024-08-08T08:46:12Z
https://github.com/langchain-ai/langchain/issues/24077
2,401,221,475
24,077
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code ```python from typing import TypedDict, Annotated, List from langgraph.graph import StateGraph from langgraph.graph.graph import END, START, CompiledGraph from langgraph.graph.message import add_messages import random, operator from IPython.display import Image, display class TestState(TypedDict): messages: Annotated[List[str], operator.add] test_workflow = StateGraph(TestState) def node1(state: TestState): return {"messages": ["Hello from node 1"]} def node2(state: TestState): return {"messages": ["Hello from node 2"]} def node3(state: TestState): return {"messages": ["Hello from node 3"]} def node4(state: TestState): return {"messages": ["Hello from node 4"]} def node5(state: TestState): return {"messages": ["Hello from node 5"]} def route(state: TestState): if random.choice([True, False]): return "node5" return "__end__" test_workflow.add_node("node1", node1) test_workflow.add_node("node2", node2) test_workflow.add_node("node3", node3) test_workflow.add_node("node4", node4) test_workflow.add_node("node5", node5) test_workflow.add_edge(START, "node1") test_workflow.add_edge("node1", "node2") test_workflow.add_edge("node2", "node3") test_workflow.add_edge("node3", "node4") test_workflow.add_edge("node5", "node4") test_workflow.add_conditional_edges("node4", route) display(Image(test_workflow.compile().get_graph().draw_mermaid_png())) ``` ### Error Message and Stack Trace (if applicable) No error message. 
### Description ## Unexpected behavior The previous code does not work as expected ( according to [Langgraph API reference](https://langchain-ai.github.io/langgraph/reference/graphs/#langgraph.graph.message.MessageGraph.add_conditional_edges) ) as it generates the following graph with unexpected conditional edges from `node4` to every other node: ![image](https://github.com/langchain-ai/langchain/assets/79788901/6c719602-0d74-41d7-a007-bb3d35e8627e) This is also visible when inspecting the `branches` attribute of the workflow: ```python test_workflow.branches ``` returns : ```python defaultdict(dict, {'node4': {'route': Branch(path=route(recurse=True), ends=None, then=None)}}) ``` ## Fix for expected behavior Replacing ```python test_workflow.add_conditional_edges("node4", route) ``` by giving the additional `map_path` argument such as ```python test_workflow.add_conditional_edges("node4", route, {"node5": "node5", "__end__": "__end__"}) ``` fixes the issue, as it can be seen by inspecting the `branches` attribute of the workflow. ```python test_workflow.branches ``` now returns : ```python defaultdict(dict, {'node4': {'route': Branch(path=route(recurse=True), ends={'node5': 'node5', '__end__': '__end__'}, then=None)}}) ``` and the expected Mermaid graph: ![image](https://github.com/langchain-ai/langchain/assets/79788901/269aa35c-48ea-40a4-a0ac-9a746f8e1fac) ### System Info I use the latest versions for these libraries installed with poetry: ``` python = "3.12" langchain-core = "0.2.12" langgraph = "0.0.64" ```
Langgraph `add_conditional_edges` does not work without optional argument `path_map`
https://api.github.com/repos/langchain-ai/langchain/issues/24076/comments
1
2024-07-10T16:14:05Z
2024-07-10T17:05:51Z
https://github.com/langchain-ai/langchain/issues/24076
2,401,176,047
24,076
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code docs_match = app.docsearch.similarity_search_with_score(input_data.question, k=3, filter=filter_object) always returns documents ### Error Message and Stack Trace (if applicable) _No response_ ### Description as_retriever returns documents from the database even when the query is totally unrelated to the documents in the database ### System Info Mac
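This is expected behavior for pure k-NN retrieval: a similarity search returns the k nearest documents no matter how distant they are from the query. Filtering on the returned score is the usual remedy. The sketch below shows the idea with plain cosine similarity; the threshold value is illustrative, and whether higher scores mean "more similar" depends on the vector store's distance metric:

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def filter_by_score(scored_docs, threshold):
    """Keep only (doc, score) pairs at or above the threshold."""
    return [(doc, score) for doc, score in scored_docs if score >= threshold]


query_vec = [1.0, 0.0]
docs = {"related": [0.9, 0.1], "unrelated": [0.0, 1.0]}
scored = [(name, cosine_similarity(query_vec, vec)) for name, vec in docs.items()]

# k-NN alone would return both documents; a threshold drops the unrelated hit.
kept = filter_by_score(scored, threshold=0.5)
print([name for name, _ in kept])  # ['related']
```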
as_retriever returns documents from database even when the query is totally unrelated to the documents in the database
https://api.github.com/repos/langchain-ai/langchain/issues/24075/comments
0
2024-07-10T16:13:26Z
2024-07-10T16:16:05Z
https://github.com/langchain-ai/langchain/issues/24075
2,401,174,741
24,075
[ "langchain-ai", "langchain" ]
### URL https://github.com/langchain-ai/langchain/blob/c4e149d4f18319bef6f1d4c409250c4a0ad21dac/libs/community/langchain_community/vectorstores/neo4j_vector.py ### Checklist - [X] I added a very descriptive title to this issue. - [X] I included a link to the documentation page I am referring to (if applicable). ### Issue with current documentation: Many parameters in the [Neo4jVector class](https://github.com/langchain-ai/langchain/blob/c4e149d4f18319bef6f1d4c409250c4a0ad21dac/libs/community/langchain_community/vectorstores/neo4j_vector.py#L429) are not documented. Some examples: ```python class Neo4jVector(VectorStore): """`Neo4j` vector index. To use, you should have the ``neo4j`` python package installed. Args: url: Neo4j connection url username: Neo4j username. password: Neo4j password database: Optionally provide Neo4j database Defaults to "neo4j" embedding: Any embedding function implementing `langchain.embeddings.base.Embeddings` interface. distance_strategy: The distance strategy to use. (default: COSINE) pre_delete_collection: If True, will delete existing data if it exists. (default: False). Useful for testing. Example: .. 
code-block:: python from langchain_community.vectorstores.neo4j_vector import Neo4jVector from langchain_community.embeddings.openai import OpenAIEmbeddings url="bolt://localhost:7687" username="neo4j" password="pleaseletmein" embeddings = OpenAIEmbeddings() vectorstore = Neo4jVector.from_documents( embedding=embeddings, documents=docs, url=url, username=username, password=password, ) """ def __init__( self, embedding: Embeddings, *, search_type: SearchType = SearchType.VECTOR, username: Optional[str] = None, password: Optional[str] = None, url: Optional[str] = None, keyword_index_name: Optional[str] = "keyword", database: Optional[str] = None, index_name: str = "vector", node_label: str = "Chunk", embedding_node_property: str = "embedding", text_node_property: str = "text", distance_strategy: DistanceStrategy = DEFAULT_DISTANCE_STRATEGY, logger: Optional[logging.Logger] = None, pre_delete_collection: bool = False, retrieval_query: str = "", relevance_score_fn: Optional[Callable[[float], float]] = None, index_type: IndexType = DEFAULT_INDEX_TYPE, graph: Optional[Neo4jGraph] = None, ) -> None: ``` ```python def retrieve_existing_fts_index( self, text_node_properties: List[str] = [] ) -> Optional[str]: """ Check if the fulltext index exists in the Neo4j database This method queries the Neo4j database for existing fts indexes with the specified name. Returns: (Tuple): keyword index information """ ``` ```python def similarity_search( self, query: str, k: int = 4, params: Dict[str, Any] = {}, filter: Optional[Dict[str, Any]] = None, **kwargs: Any, ) -> List[Document]: """Run similarity search with Neo4jVector. Args: query (str): Query text to search for. k (int): Number of results to return. Defaults to 4. Returns: List of Documents most similar to the query. 
""" ``` ```python def from_texts( cls: Type[Neo4jVector], texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, distance_strategy: DistanceStrategy = DEFAULT_DISTANCE_STRATEGY, ids: Optional[List[str]] = None, **kwargs: Any, ) -> Neo4jVector: """ Return Neo4jVector initialized from texts and embeddings. Neo4j credentials are required in the form of `url`, `username`, and `password` and optional `database` parameters. """ ``` ### Idea or request for content: At least include all parameters in the `Args` sections of the function docs.
DOC: Neo4jVector class => many parameters not documented
https://api.github.com/repos/langchain-ai/langchain/issues/24074/comments
0
2024-07-10T16:05:59Z
2024-07-10T16:08:41Z
https://github.com/langchain-ai/langchain/issues/24074
2,401,157,922
24,074
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code I am trying to use the Azure AI Search Vectorstore and retriever and the vectorstore and retriever (given from the vectorstore) work perfectly when doing retrieval of documents using the synchronous methods but give an error when trying to run the async methods. Creating the instances of embeddings and Azure Search ```python from azure.search.documents.indexes.models import ( SearchField, SearchFieldDataType, SimpleField, ) from langchain_openai import AzureOpenAIEmbeddings from langchain_community.vectorstores import AzureSearch fields = [ SimpleField( name="content", type=SearchFieldDataType.String, key=True, filterable=True, ), SearchField( name="content_vector", type=SearchFieldDataType.Collection(SearchFieldDataType.Single), searchable=True, vector_search_dimensions=1536, vector_search_profile_name="myHnswProfile", ), SimpleField( name="document_name", type=SearchFieldDataType.String, key=True, filterable=True, ) ] encoder = AzureOpenAIEmbeddings( azure_endpoint=os.getenv("EMBEDDINGS_OPENAI_ENDPOINT"), deployment=os.getenv("EMBEDDINGS_DEPLOYMENT_NAME"), openai_api_version=os.getenv("OPENAI_API_VERSION"), openai_api_key=os.getenv("AZURE_OPENAI_API_KEY"), ) vectorstore = AzureSearch( azure_search_endpoint=os.getenv("AI_SEARCH_ENDPOINT_SECRET"), azure_search_key=os.getenv("AI_SEARCH_API_KEY"), index_name=os.getenv("AI_SEARCH_INDEX_NAME_SECRET"), fields=fields, embedding_function=encoder, ) retriever = vectorstore.as_retriever(search_type="hybrid", k=2) ``` Synchronous methods working and returning documents ```python 
vectorstore.vector_search("what is the capital of France") retriever.invoke("what is the capital of France") ``` Asynchronous methods failing with an error ```python await vectorstore.avector_search("what is the capital of France") await retriever.ainvoke("what is the capital of France") ``` ### Error Message and Stack Trace (if applicable) --------------------------------------------------------------------------- KeyError Traceback (most recent call last) Cell In[15], line 1 ----> 1 await vectorstore.avector_search("what is the capital of France") File ~/.local/lib/python3.11/site-packages/langchain_community/vectorstores/azuresearch.py:695, in AzureSearch.avector_search(self, query, k, filters, **kwargs) 682 async def avector_search( 683 self, query: str, k: int = 4, *, filters: Optional[str] = None, **kwargs: Any 684 ) -> List[Document]: 685 """ 686 Returns the most similar indexed documents to the query text. 687 (...) 693 List[Document]: A list of documents that are most similar to the query text. 694 """ --> 695 docs_and_scores = await self.avector_search_with_score( 696 query, k=k, filters=filters 697 ) 698 return [doc for doc, _ in docs_and_scores] File ~/.local/lib/python3.11/site-packages/langchain_community/vectorstores/azuresearch.py:742, in AzureSearch.avector_search_with_score(self, query, k, filters, **kwargs) 730 """Return docs most similar to query. 731 732 Args: (...) 739 to the query and score for each 740 """ 741 embedding = await self._aembed_query(query) --> 742 docs, scores, _ = await self._asimple_search( 743 embedding, "", k, filters=filters, **kwargs 744 ) 746 return list(zip(docs, scores)) File ~/.local/lib/python3.11/site-packages/langchain_community/vectorstores/azuresearch.py:1080, in AzureSearch._asimple_search(self, embedding, text_query, k, filters, **kwargs) 1066 async with self._async_client() as async_client: 1067 results = await async_client.search( 1068 search_text=text_query, 1069 vector_queries=[ (...) 
1078 **kwargs, 1079 ) -> 1080 docs = [ 1081 ( 1082 _result_to_document(result), 1083 float(result["@search.score"]), 1084 result[FIELDS_CONTENT_VECTOR], 1085 ) 1086 async for result in results 1087 ] 1088 if not docs: 1089 raise ValueError(f"No {docs=}") File ~/.local/lib/python3.11/site-packages/langchain_community/vectorstores/azuresearch.py:1084, in <listcomp>(.0) 1066 async with self._async_client() as async_client: 1067 results = await async_client.search( 1068 search_text=text_query, 1069 vector_queries=[ (...) 1078 **kwargs, 1079 ) 1080 docs = [ 1081 ( 1082 _result_to_document(result), 1083 float(result["@search.score"]), -> 1084 result[FIELDS_CONTENT_VECTOR], 1085 ) 1086 async for result in results 1087 ] 1088 if not docs: 1089 raise ValueError(f"No {docs=}") KeyError: 'content_vector' ### Description The async methods for searching documents (at least) do not work and raise an error. Possibly the async client is not being used by the async retrieval methods. ### System Info langchain==0.2.6 langchain-community==0.2.4 langchain-core==0.2.11 langchain-openai==0.1.8 langchain-text-splitters==0.2.1
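Until the async path is fixed, one possible workaround is to run the working synchronous search in a worker thread so it does not block the event loop. The helper is generic; `store` stands in for the `AzureSearch` instance from the report:

```python
import asyncio

async def vector_search_async(store, query: str, k: int = 4):
    # Delegate to the known-good synchronous vector_search in a thread;
    # this sidesteps the broken async code path entirely.
    return await asyncio.to_thread(store.vector_search, query, k=k)

# Generic demonstration with a stand-in store (no Azure connection needed):
class _FakeStore:
    def vector_search(self, query, k=4):
        return [f"doc for {query!r}"] * k

print(asyncio.run(vector_search_async(_FakeStore(), "capital of France", k=2)))
```

`asyncio.to_thread` requires Python 3.9+; on older interpreters, `loop.run_in_executor` does the same job.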
AzureSearch vectorstore does not work asynchronously
https://api.github.com/repos/langchain-ai/langchain/issues/24064/comments
0
2024-07-10T11:28:29Z
2024-07-10T11:31:07Z
https://github.com/langchain-ai/langchain/issues/24064
2,400,508,202
24,064
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code ``` from langchain_community.chat_models.azureml_endpoint import AzureMLChatOnlineEndpoint from langchain_community.llms.azureml_endpoint import ContentFormatterBase from langchain_community.chat_models.azureml_endpoint import ( AzureMLEndpointApiType, CustomOpenAIChatContentFormatter, ) from langchain_core.messages import HumanMessage chat = AzureMLChatOnlineEndpoint( endpoint_url="https://....inference.ml.azure.com/score", endpoint_api_type=AzureMLEndpointApiType.dedicated, endpoint_api_key="", content_formatter=CustomOpenAIChatContentFormatter(), ) # This works, but the output is of type BaseMessage, not AIMessage response = chat.invoke( [HumanMessage(content="Hallo, whats your name?")],max_tokens=3000 ) response # BaseMessage(content="Hello! I'm an AI language model, and I don't have a personal name. You can call me Assistant. How can I help you today?", type='assistant', id='run-36ffdce3-3cee-43e3-af21-505fe6cf61e1-0') # With a Prompt Template in an LCEL chain it does not work and throws the error from langchain_core.prompts import ChatPromptTemplate system = "You are a helpful assistant." 
human = "{text}" prompt = ChatPromptTemplate.from_messages([("system", system), ("human", human)]) chain = prompt | chat chain.invoke({"text": "Explain the importance of low latency for LLMs."}) ``` ### Error Message and Stack Trace (if applicable) KeyError Traceback (most recent call last) File [~/anaconda3/lib/python3.11/site-packages/langchain_community/chat_models/azureml_endpoint.py:140](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_community/chat_models/azureml_endpoint.py:140), in CustomOpenAIChatContentFormatter.format_response_payload(self, output, api_type) 139 try: --> 140 choice = json.loads(output)["output"] 141 except (KeyError, IndexError, TypeError) as e: KeyError: 'output' The above exception was the direct cause of the following exception: ValueError Traceback (most recent call last) [/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/test.ipynb](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/test.ipynb) Zelle 6 line 8 [5](vscode-notebook-cell:/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/test.ipynb#Y160sZmlsZQ%3D%3D?line=4) prompt = ChatPromptTemplate.from_messages([("system", system), ("human", human)]) [7](vscode-notebook-cell:/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/test.ipynb#Y160sZmlsZQ%3D%3D?line=6) chain = prompt | chat ----> [8](vscode-notebook-cell:/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/test.ipynb#Y160sZmlsZQ%3D%3D?line=7) chain.invoke({"text": "Explain the importance of low latency for LLMs."}) File [~/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:2507](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:2507), in RunnableSequence.invoke(self, input, config, **kwargs) 2505 input = 
step.invoke(input, config, **kwargs) 2506 else: -> 2507 input = step.invoke(input, config) 2508 # finish the root run 2509 except BaseException as e: File [~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:248](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:248), in BaseChatModel.invoke(self, input, config, stop, **kwargs) 237 def invoke( 238 self, 239 input: LanguageModelInput, (...) 243 **kwargs: Any, 244 ) -> BaseMessage: 245 config = ensure_config(config) 246 return cast( 247 ChatGeneration, --> 248 self.generate_prompt( 249 [self._convert_input(input)], 250 stop=stop, 251 callbacks=config.get("callbacks"), 252 tags=config.get("tags"), 253 metadata=config.get("metadata"), 254 run_name=config.get("run_name"), 255 run_id=config.pop("run_id", None), 256 **kwargs, 257 ).generations[0][0], 258 ).message File [~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:677](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:677), in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs) 669 def generate_prompt( 670 self, 671 prompts: List[PromptValue], (...) 
674 **kwargs: Any, 675 ) -> LLMResult: 676 prompt_messages = [p.to_messages() for p in prompts] --> 677 return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) File [~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:534](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:534), in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs) 532 if run_managers: 533 run_managers[i].on_llm_error(e, response=LLMResult(generations=[])) --> 534 raise e 535 flattened_outputs = [ 536 LLMResult(generations=[res.generations], llm_output=res.llm_output) # type: ignore[list-item] 537 for res in results 538 ] 539 llm_output = self._combine_llm_outputs([res.llm_output for res in results]) File [~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:524](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:524), in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs) 521 for i, m in enumerate(messages): 522 try: 523 results.append( --> 524 self._generate_with_cache( 525 m, 526 stop=stop, 527 run_manager=run_managers[i] if run_managers else None, 528 **kwargs, 529 ) 530 ) 531 except BaseException as e: 532 if run_managers: File [~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:749](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:749), in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs) 747 else: 748 if 
inspect.signature(self._generate).parameters.get("run_manager"): --> 749 result = self._generate( 750 messages, stop=stop, run_manager=run_manager, **kwargs 751 ) 752 else: 753 result = self._generate(messages, stop=stop, **kwargs) File [~/anaconda3/lib/python3.11/site-packages/langchain_community/chat_models/azureml_endpoint.py:279](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_community/chat_models/azureml_endpoint.py:279), in AzureMLChatOnlineEndpoint._generate(self, messages, stop, run_manager, **kwargs) 273 request_payload = self.content_formatter.format_messages_request_payload( 274 messages, _model_kwargs, self.endpoint_api_type 275 ) 276 response_payload = self.http_client.call( 277 body=request_payload, run_manager=run_manager 278 ) --> 279 generations = self.content_formatter.format_response_payload( 280 response_payload, self.endpoint_api_type 281 ) 282 return ChatResult(generations=[generations]) File [~/anaconda3/lib/python3.11/site-packages/langchain_community/chat_models/azureml_endpoint.py:142](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_community/chat_models/azureml_endpoint.py:142), in CustomOpenAIChatContentFormatter.format_response_payload(self, output, api_type) 140 choice = json.loads(output)["output"] 141 except (KeyError, IndexError, TypeError) as e: --> 142 raise ValueError(self.format_error_msg.format(api_type=api_type)) from e 143 return ChatGeneration( 144 message=BaseMessage( 145 content=choice.strip(), (...) 148 generation_info=None, 149 ) 150 if api_type == AzureMLEndpointApiType.serverless: ValueError: Error while formatting response payload for chat model of type `AzureMLEndpointApiType.dedicated`. Are you using the right formatter for the deployed model and endpoint type? 
### Description Hi, I want to use the Mixtral-8x7B Instruct version from the Azure Machine Learning catalog but it is not working in Langchain Chains. Invoking the chat model itself works; however, the type of the response is a BaseMessage and not an AIMessage (if you for example compare it to a response from ChatOpenAI()). When using the LLM with a prompt in an LCEL chain, it does not work. It first gives me the `KeyError: 'output'`. I don't know why this KeyError occurs. For the message `ValueError: Error while formatting response payload for chat model of type `AzureMLEndpointApiType.dedicated`. Are you using the right formatter for the deployed model and endpoint type?` I made sure that `dedicated` is correct, and if it were wrong, invoking the chat model itself would not work either, I think. I tried to convert the type of the LLM call to AIMessage, but I am not sure how to use the llm_call function further in my Langchain steps: ``` from langchain_core.messages import HumanMessage, AIMessage, BaseMessage def convert_to_ai_message(base_message: BaseMessage) -> AIMessage: return AIMessage(content=base_message.content, id=base_message.id) def llm_call(message): res = chat.invoke(message, max_tokens=2000) new_res = convert_to_ai_message(res) return new_res ``` So I think this is a bug which has to be fixed. ### System Info langchain 0.2.6 pypi_0 pypi langchain-chroma 0.1.0 pypi_0 pypi langchain-community 0.2.6 pypi_0 pypi langchain-core 0.2.10 pypi_0 pypi langchain-experimental 0.0.49 pypi_0 pypi langchain-groq 0.1.5 pypi_0 pypi langchain-openai 0.1.7 pypi_0 pypi langchain-postgres 0.0.3 pypi_0 pypi langchain-text-splitters 0.2.1 pypi_0 pypi
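For what it's worth, the `KeyError: 'output'` suggests the dedicated endpoint returns an OpenAI-style `choices` payload while the formatter expects `{"output": ...}`. A stdlib-only sketch of parsing both shapes, assuming (not confirming) those payload layouts:

```python
import json

def extract_chat_content(raw: str) -> str:
    """Pull the assistant text out of either assumed response shape."""
    payload = json.loads(raw)
    if "output" in payload:
        # Shape the CustomOpenAIChatContentFormatter expects for `dedicated`
        return payload["output"]
    # OpenAI-style chat completion shape (assumption about this endpoint)
    return payload["choices"][0]["message"]["content"]

openai_style = json.dumps(
    {"choices": [{"message": {"role": "assistant", "content": "Hello!"}}]}
)
print(extract_chat_content(openai_style))  # Hello!
```

Dumping the raw response body and checking which shape it matches would confirm whether the formatter or the endpoint type is the culprit.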
AzureMLEndpoint not working in LCEL: KeyError: 'output'
https://api.github.com/repos/langchain-ai/langchain/issues/24061/comments
0
2024-07-10T10:30:48Z
2024-07-10T10:34:09Z
https://github.com/langchain-ai/langchain/issues/24061
2,400,367,694
24,061
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code llm = VLLMOpenAI( openai_api_key="EMPTY", openai_api_base=api_base, model_name="microsoft/Phi-3-vision-128k-instruct", model_kwargs={"stop": ["."]} ) image_path = "invoice_data_images/Screenshot 2024-05-02 160946_page_1.png" with open(image_path, "rb") as image_file: image_base64 = base64.b64encode(image_file.read()).decode("utf-8") prompt_1 = "Give me the invoice date from the given image." messages = [ HumanMessage( content=[ {"type": "text", "text": prompt_1}, {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_base64}"}} ] ) ] response = llm.invoke(messages) print(response) ### Error Message and Stack Trace (if applicable) # Error { "name": "BadRequestError", "message": "Error code: 400 - {'object': 'error', 'message': \"This model's maximum context length is 3744 tokens. However, you requested 254457 tokens (254201 in the messages, 256 in the completion). Please reduce the length of the messages or completion.\", 'type': 'BadRequestError', 'param': None, 'code': 400}", "stack": "--------------------------------------------------------------------------- BadRequestError Traceback (most recent call last) Cell In[96], line 26 16 messages = [ 17 HumanMessage( 18 content=[ (...) 
22 ) 23 ] 25 # Invoke the model with the message ---> 26 response = llm.invoke(messages) 27 print(response) File ~/SapidBlue/invoice_data_extraction/lightllm_xinf/venv/lib/python3.8/site-packages/langchain_core/language_models/llms.py:346, in BaseLLM.invoke(self, input, config, stop, **kwargs) 336 def invoke( 337 self, 338 input: LanguageModelInput, (...) 342 **kwargs: Any, 343 ) -> str: 344 config = ensure_config(config) 345 return ( --> 346 self.generate_prompt( 347 [self._convert_input(input)], 348 stop=stop, 349 callbacks=config.get(\"callbacks\"), 350 tags=config.get(\"tags\"), 351 metadata=config.get(\"metadata\"), 352 run_name=config.get(\"run_name\"), 353 run_id=config.pop(\"run_id\", None), 354 **kwargs, 355 ) 356 .generations[0][0] 357 .text 358 ) File ~/SapidBlue/invoice_data_extraction/lightllm_xinf/venv/lib/python3.8/site-packages/langchain_core/language_models/llms.py:703, in BaseLLM.generate_prompt(self, prompts, stop, callbacks, **kwargs) 695 def generate_prompt( 696 self, 697 prompts: List[PromptValue], (...) 700 **kwargs: Any, 701 ) -> LLMResult: 702 prompt_strings = [p.to_string() for p in prompts] --> 703 return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs) File ~/SapidBlue/invoice_data_extraction/lightllm_xinf/venv/lib/python3.8/site-packages/langchain_core/language_models/llms.py:882, in BaseLLM.generate(self, prompts, stop, callbacks, tags, metadata, run_name, run_id, **kwargs) 867 if (self.cache is None and get_llm_cache() is None) or self.cache is False: 868 run_managers = [ 869 callback_manager.on_llm_start( 870 dumpd(self), (...) 
880 ) 881 ] --> 882 output = self._generate_helper( 883 prompts, stop, run_managers, bool(new_arg_supported), **kwargs 884 ) 885 return output 886 if len(missing_prompts) > 0: File ~/SapidBlue/invoice_data_extraction/lightllm_xinf/venv/lib/python3.8/site-packages/langchain_core/language_models/llms.py:740, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs) 738 for run_manager in run_managers: 739 run_manager.on_llm_error(e, response=LLMResult(generations=[])) --> 740 raise e 741 flattened_outputs = output.flatten() 742 for manager, flattened_output in zip(run_managers, flattened_outputs): File ~/SapidBlue/invoice_data_extraction/lightllm_xinf/venv/lib/python3.8/site-packages/langchain_core/language_models/llms.py:727, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs) 717 def _generate_helper( 718 self, 719 prompts: List[str], (...) 723 **kwargs: Any, 724 ) -> LLMResult: 725 try: 726 output = ( --> 727 self._generate( 728 prompts, 729 stop=stop, 730 # TODO: support multiple run managers 731 run_manager=run_managers[0] if run_managers else None, 732 **kwargs, 733 ) 734 if new_arg_supported 735 else self._generate(prompts, stop=stop) 736 ) 737 except BaseException as e: 738 for run_manager in run_managers: File ~/SapidBlue/invoice_data_extraction/lightllm_xinf/venv/lib/python3.8/site-packages/langchain_community/llms/openai.py:464, in BaseOpenAI._generate(self, prompts, stop, run_manager, **kwargs) 452 choices.append( 453 { 454 \"text\": generation.text, (...) 461 } 462 ) 463 else: --> 464 response = completion_with_retry( 465 self, prompt=_prompts, run_manager=run_manager, **params 466 ) 467 if not isinstance(response, dict): 468 # V1 client returns the response in an PyDantic object instead of 469 # dict. For the transition period, we deep convert it to dict. 
470 response = response.dict() File ~/SapidBlue/invoice_data_extraction/lightllm_xinf/venv/lib/python3.8/site-packages/langchain_community/llms/openai.py:119, in completion_with_retry(llm, run_manager, **kwargs) 117 \"\"\"Use tenacity to retry the completion call.\"\"\" 118 if is_openai_v1(): --> 119 return llm.client.create(**kwargs) 121 retry_decorator = _create_retry_decorator(llm, run_manager=run_manager) 123 @retry_decorator 124 def _completion_with_retry(**kwargs: Any) -> Any: File ~/SapidBlue/invoice_data_extraction/lightllm_xinf/venv/lib/python3.8/site-packages/openai/_utils/_utils.py:277, in required_args.<locals>.inner.<locals>.wrapper(*args, **kwargs) 275 msg = f\"Missing required argument: {quote(missing[0])}\" 276 raise TypeError(msg) --> 277 return func(*args, **kwargs) File ~/SapidBlue/invoice_data_extraction/lightllm_xinf/venv/lib/python3.8/site-packages/openai/resources/completions.py:528, in Completions.create(self, model, prompt, best_of, echo, frequency_penalty, logit_bias, logprobs, max_tokens, n, presence_penalty, seed, stop, stream, stream_options, suffix, temperature, top_p, user, extra_headers, extra_query, extra_body, timeout) 499 @required_args([\"model\", \"prompt\"], [\"model\", \"prompt\", \"stream\"]) 500 def create( 501 self, (...) 
526 timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN, 527 ) -> Completion | Stream[Completion]: --> 528 return self._post( 529 \"/completions\", 530 body=maybe_transform( 531 { 532 \"model\": model, 533 \"prompt\": prompt, 534 \"best_of\": best_of, 535 \"echo\": echo, 536 \"frequency_penalty\": frequency_penalty, 537 \"logit_bias\": logit_bias, 538 \"logprobs\": logprobs, 539 \"max_tokens\": max_tokens, 540 \"n\": n, 541 \"presence_penalty\": presence_penalty, 542 \"seed\": seed, 543 \"stop\": stop, 544 \"stream\": stream, 545 \"stream_options\": stream_options, 546 \"suffix\": suffix, 547 \"temperature\": temperature, 548 \"top_p\": top_p, 549 \"user\": user, 550 }, 551 completion_create_params.CompletionCreateParams, 552 ), 553 options=make_request_options( 554 extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout 555 ), 556 cast_to=Completion, 557 stream=stream or False, 558 stream_cls=Stream[Completion], 559 ) File ~/SapidBlue/invoice_data_extraction/lightllm_xinf/venv/lib/python3.8/site-packages/openai/_base_client.py:1261, in SyncAPIClient.post(self, path, cast_to, body, options, files, stream, stream_cls) 1247 def post( 1248 self, 1249 path: str, (...) 1256 stream_cls: type[_StreamT] | None = None, 1257 ) -> ResponseT | _StreamT: 1258 opts = FinalRequestOptions.construct( 1259 method=\"post\", url=path, json_data=body, files=to_httpx_files(files), **options 1260 ) -> 1261 return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) File ~/SapidBlue/invoice_data_extraction/lightllm_xinf/venv/lib/python3.8/site-packages/openai/_base_client.py:942, in SyncAPIClient.request(self, cast_to, options, remaining_retries, stream, stream_cls) 933 def request( 934 self, 935 cast_to: Type[ResponseT], (...) 
940 stream_cls: type[_StreamT] | None = None, 941 ) -> ResponseT | _StreamT: --> 942 return self._request( 943 cast_to=cast_to, 944 options=options, 945 stream=stream, 946 stream_cls=stream_cls, 947 remaining_retries=remaining_retries, 948 ) File ~/SapidBlue/invoice_data_extraction/lightllm_xinf/venv/lib/python3.8/site-packages/openai/_base_client.py:1041, in SyncAPIClient._request(self, cast_to, options, remaining_retries, stream, stream_cls) 1038 err.response.read() 1040 log.debug(\"Re-raising status error\") -> 1041 raise self._make_status_error_from_response(err.response) from None 1043 return self._process_response( 1044 cast_to=cast_to, 1045 options=options, (...) 1048 stream_cls=stream_cls, 1049 ) BadRequestError: Error code: 400 - {'object': 'error', 'message': \"This model's maximum context length is 8192 tokens. However, you requested 254457 tokens (254201 in the messages, 256 in the completion). Please reduce the length of the messages or completion.\", 'type': 'BadRequestError', 'param': None, 'code': 400}" } ### Description I hosted vLLM on an EC2 instance. ### System Info langchain==0.2.7 langchain-community==0.2.7 langchain-core==0.2.12 langchain-text-splitters==0.2.2
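`VLLMOpenAI` is a completion-style wrapper, so the message list, including the base64 image, appears to be flattened into a single text prompt; that alone can explain the ~254k-token request. A rough sanity check before sending (the 4-characters-per-token heuristic is an assumption, and the image size below is a stand-in):

```python
import base64

def rough_token_estimate(prompt: str, chars_per_token: int = 4) -> int:
    """Crude heuristic (about 4 chars per token); real tokenizers differ."""
    return len(prompt) // chars_per_token

# Stand-in for the screenshot bytes; a real PNG is typically much larger.
image_b64 = base64.b64encode(b"\x00" * 100_000).decode("utf-8")
print(rough_token_estimate(image_b64))  # tens of thousands of "tokens" from the image alone
```

For multimodal prompts the image should go through the chat/vision endpoint as an `image_url` content part rather than into a text completion prompt.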
maximum context length is 8192 tokens. However, you requested 254457 tokens
https://api.github.com/repos/langchain-ai/langchain/issues/24058/comments
0
2024-07-10T09:54:32Z
2024-07-10T09:57:09Z
https://github.com/langchain-ai/langchain/issues/24058
2,400,283,552
24,058
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code import os import dotenv from langchain_community.chat_message_histories import PostgresChatMessageHistory from langchain_core.messages import AIMessage, HumanMessage from zhipuai import ZhipuAI SYSTEM_PROMPT_TEMPLATE = '''You are an AI assistant with memory. Based on the user's current question combined with the previous chat history, generate an accurate answer. The conversation is conducted entirely in Chinese. Below is your chat history: [Chat history start] {chat_history} [Chat history end] ''' MODEL_NAME = 'glm-4-air' dotenv.load_dotenv() client = ZhipuAI(api_key=os.getenv("ZHIPUAI_API_KEY")) chat_message_history = PostgresChatMessageHistory( connection_string='postgresql://root:12345678@127.0.0.1:5432/llm_ops', table_name='llm_ops_chat_history', session_id="llm_ops") while True: query = input('Human:') if 'bye' == query: print('bye bye~') break system_prompt = SYSTEM_PROMPT_TEMPLATE.format(chat_history=chat_message_history) response = client.chat.completions.create( model=MODEL_NAME, messages=[ {'role': 'system', 'content': system_prompt}, {"role": "user", "content": query}, ], stream=True, ) output = '' print('AI:', end='', flush=True) for chunk in response: content = chunk.choices[0].delta.content print(content, end='', flush=True) output += content print() # save chat message chat_message_history.add_messages([HumanMessage(query), AIMessage(output)]) ### Error Message and Stack Trace (if applicable) Traceback (most recent call last): File "/Users/zhangshenao/Desktop/llm-ops/llm-ops-backend/langchain_study/7-记忆功能实现/3.使用PostgresChatMessageHistory组件保存聊天历史.py", line 35, in <module> chat_message_history = PostgresChatMessageHistory( ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_core/_api/deprecation.py", line 203, in warn_if_direct_instance return wrapped(self, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_community/chat_message_histories/postgres.py", line 44, in __init__ import psycopg File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/psycopg/__init__.py", line 9, in <module> from . import pq # noqa: F401 import early to stabilize side effects ^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/psycopg/pq/__init__.py", line 118, in <module> import_from_libpq() File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/psycopg/pq/__init__.py", line 104, in import_from_libpq PGcancelConn = module.PGcancelConn ^^^^^^^^^^^^^^^^^^^ AttributeError: module 'psycopg_binary.pq' has no attribute 'PGcancelConn' Exception ignored in: <function PostgresChatMessageHistory.__del__ at 0x1038f4900> Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_community/chat_message_histories/postgres.py", line 97, in __del__ if self.cursor: ^^^^^^^^^^^ AttributeError: 'PostgresChatMessageHistory' object has no attribute 'cursor' ### Description I want to use `PostgresChatMessageHistory` to persist my chat message history into PostgreSQL, but it raises errors. 
### System Info System Information ------------------ > OS: Darwin > OS Version: Darwin Kernel Version 21.5.0: Tue Apr 26 21:08:29 PDT 2022; root:xnu-8020.121.3~4/RELEASE_ARM64_T8101 > Python Version: 3.12.3 (v3.12.3:f6650f9ad7, Apr 9 2024, 08:18:47) [Clang 13.0.0 (clang-1300.0.29.30)] Package Information ------------------- > langchain_core: 0.2.12 > langchain: 0.2.7 > langchain_community: 0.2.7 > langsmith: 0.1.82 > langchain_openai: 0.1.14 > langchain_text_splitters: 0.2.2
community: AttributeError: 'PostgresChatMessageHistory' object has no attribute 'cursor'
https://api.github.com/repos/langchain-ai/langchain/issues/24053/comments
0
2024-07-10T07:16:34Z
2024-07-10T14:45:20Z
https://github.com/langchain-ai/langchain/issues/24053
2,399,953,037
24,053
[ "langchain-ai", "langchain" ]
### URL https://python.langchain.com/v0.2/docs/integrations/platforms/huggingface/ ### Checklist - [X] I added a very descriptive title to this issue. - [X] I included a link to the documentation page I am referring to (if applicable). ### Issue with current documentation: DOC: I am looking for a guide on using LangChain and HuggingFace (langchain-huggingface library) with a serverless API endpoint for automatic speech recognition. Can you please refer me to one? ### Idea or request for content: _No response_
DOC: I am looking for a guide on using LangChain and HuggingFace (langchain-huggingface library) with a serverless API endpoint for automatic speech recognition. Can you please refer me to one?
https://api.github.com/repos/langchain-ai/langchain/issues/24052/comments
0
2024-07-10T07:12:33Z
2024-07-10T07:15:05Z
https://github.com/langchain-ai/langchain/issues/24052
2,399,945,107
24,052
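There is no dedicated ASR guide because `langchain-huggingface` wraps text-generation and embedding endpoints; a common pattern is to call the serverless Inference API for transcription directly and then feed the transcript into a LangChain chain. A minimal request-building sketch, where the model id (`openai/whisper-large-v3`) and the raw-bytes request body are assumptions rather than something from the linked docs page:

```python
def build_asr_request(hf_token: str, model_id: str = "openai/whisper-large-v3"):
    """Return (url, headers) for a serverless Inference API ASR call.
    The request body is simply the raw audio file bytes."""
    url = f"https://api-inference.huggingface.co/models/{model_id}"
    headers = {"Authorization": f"Bearer {hf_token}"}
    return url, headers

url, headers = build_asr_request("hf_xxx")
print(url)      # https://api-inference.huggingface.co/models/openai/whisper-large-v3
print(headers)  # {'Authorization': 'Bearer hf_xxx'}
# e.g. requests.post(url, headers=headers, data=open("sample.flac", "rb").read())
```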
[ "langchain-ai", "langchain" ]
### Checked other resources

- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

### Example Code

The following code:

```python
def DocLoader(file):
    try:
        # print(file)
        loader = UnstructuredExcelLoader(file, mode="elements")
        documents = loader.load()
        # print(file)
        return documents
    except Exception as ex:
        print(f"Exception in DocLoader\n Exception: {ex}")
```

### Error Message and Stack Trace (if applicable)

1. Exception in DocLoader Exception: boolean index did not match indexed array along dimension 1; dimension is 5 but corresponding boolean dimension is 2.
2. Exception: too many indices for array: array is 3-dimensional, but 13 were indexed

### Description

**Problem**:
- The Excel loader script in Databricks keeps encountering errors for unknown reasons.

**Cause**:
- The issue appears to stem from the Excel loader in LangChain, but it only occurs in Databricks. I tested the same files locally, and they worked without any issues.

**Solutions Tried**:
1. **Library Version**: I suspected the issue might be due to different versions of LangChain. I ensured that Databricks used the same version as my local environment by using the same requirements.txt file. However, the issue persisted.
2. **Python Environment**: I considered that the problem might be due to different Python versions. I created a local environment with the same Python version as Databricks and tested the script. It worked fine locally, so the Python version does not seem to be the cause.
3. **Script Testing**: To further debug, I took the same script from Databricks and ran it locally with 5-6 Excel files that caused exceptions in Databricks.
I placed these files in a test blob storage account and ran the script. It executed without any issues.

### System Info

```
azure-functions
langchain
langchain-core
langchain-community
unstructured
azure-storage-blob
azure-identity
networkx
pandas
openpyxl
openai
azure_search_documents-11.4.0b12-py3-none-any.whl
psutil
```

```
Databricks: Standard_D16ads_v5 - AMD based processor
```

> Local System:
> Processor 11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz 1.38 GHz
> Installed RAM 16.0 GB (15.4 GB usable)
> System type 64-bit operating system, x64-based processor
unstructuredExcelLoader throwing exception in Databricks but working fine in local system
https://api.github.com/repos/langchain-ai/langchain/issues/24050/comments
0
2024-07-10T07:02:55Z
2024-07-10T07:05:32Z
https://github.com/langchain-ai/langchain/issues/24050
2,399,925,929
24,050
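When a loader misbehaves only in one environment, a pragmatic stopgap while the root cause is investigated is to wrap it with a fallback path (for example a plain `pandas.read_excel` based loader) so the pipeline keeps running. A generic sketch, where `primary` and `fallback` are hypothetical callables standing in for the two loaders:

```python
def load_with_fallback(primary, fallback):
    """Try the primary loader; on any exception, report it and use the fallback."""
    try:
        return primary()
    except Exception as ex:
        print(f"primary loader failed ({ex!r}); using fallback")
        return fallback()

def broken_loader():
    # stands in for UnstructuredExcelLoader failing inside Databricks
    raise IndexError("too many indices for array")

docs = load_with_fallback(broken_loader, lambda: ["fallback document"])
print(docs)  # ['fallback document']
```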
[ "langchain-ai", "langchain" ]
### Checked other resources

- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

### Example Code

The following code ([example from the docs](https://python.langchain.com/v0.2/docs/integrations/llms/yandex/)):

```shell
pip install -U langchain langchain-community yandexcloud
```

```python
import os

from langchain_community.llms import YandexGPT
from langchain_core.prompts import PromptTemplate
from langchain.chains import LLMChain

os.environ['YC_IAM_TOKEN'] = 'xxxxxxx'
os.environ['YC_FOLDER_ID'] = 'xxxxxxx'

llm = YandexGPT()

template = "What is the capital of {country}?"
prompt = PromptTemplate.from_template(template)

llm_chain = LLMChain(prompt=prompt, llm=llm)

country = "Russia"
llm_chain.invoke(country)
```

raises an error:

### Error Message and Stack Trace (if applicable)

--------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Cell In[8], line 3 1 country = "Russia" ----> 3 llm_chain.invoke(country) File ~/.cache/virtualenvs/myenv/lib/python3.12/site-packages/langchain/chains/base.py:166, in Chain.invoke(self, input, config, **kwargs) 164 except BaseException as e: 165 run_manager.on_chain_error(e) --> 166 raise e 167 run_manager.on_chain_end(outputs) 169 if include_run_info: File ~/.cache/virtualenvs/myenv/lib/python3.12/site-packages/langchain/chains/base.py:156, in Chain.invoke(self, input, config, **kwargs) 153 try: 154 self._validate_inputs(inputs) 155 outputs = ( --> 156 self._call(inputs, run_manager=run_manager) 157 if new_arg_supported 158 else self._call(inputs) 159 ) 161 final_outputs: Dict[str, Any] = self.prep_outputs( 162 inputs, outputs,
return_only_outputs 163 ) 164 except BaseException as e: File ~/.cache/virtualenvs/myenv/lib/python3.12/site-packages/langchain/chains/llm.py:128, in LLMChain._call(self, inputs, run_manager) 123 def _call( 124 self, 125 inputs: Dict[str, Any], 126 run_manager: Optional[CallbackManagerForChainRun] = None, 127 ) -> Dict[str, str]: --> 128 response = self.generate([inputs], run_manager=run_manager) 129 return self.create_outputs(response)[0] File ~/.cache/virtualenvs/myenv/lib/python3.12/site-packages/langchain/chains/llm.py:140, in LLMChain.generate(self, input_list, run_manager) 138 callbacks = run_manager.get_child() if run_manager else None 139 if isinstance(self.llm, BaseLanguageModel): --> 140 return self.llm.generate_prompt( 141 prompts, 142 stop, 143 callbacks=callbacks, 144 **self.llm_kwargs, 145 ) 146 else: 147 results = self.llm.bind(stop=stop, **self.llm_kwargs).batch( 148 cast(List, prompts), {"callbacks": callbacks} 149 ) File ~/.cache/virtualenvs/myenv/lib/python3.12/site-packages/langchain_core/language_models/llms.py:703, in BaseLLM.generate_prompt(self, prompts, stop, callbacks, **kwargs) 695 def generate_prompt( 696 self, 697 prompts: List[PromptValue], (...) 700 **kwargs: Any, 701 ) -> LLMResult: 702 prompt_strings = [p.to_string() for p in prompts] --> 703 return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs) File ~/.cache/virtualenvs/myenv/lib/python3.12/site-packages/langchain_core/language_models/llms.py:882, in BaseLLM.generate(self, prompts, stop, callbacks, tags, metadata, run_name, run_id, **kwargs) 867 if (self.cache is None and get_llm_cache() is None) or self.cache is False: 868 run_managers = [ 869 callback_manager.on_llm_start( 870 dumpd(self), (...) 
880 ) 881 ] --> 882 output = self._generate_helper( 883 prompts, stop, run_managers, bool(new_arg_supported), **kwargs 884 ) 885 return output 886 if len(missing_prompts) > 0: File ~/.cache/virtualenvs/myenv/lib/python3.12/site-packages/langchain_core/language_models/llms.py:740, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs) 738 for run_manager in run_managers: 739 run_manager.on_llm_error(e, response=LLMResult(generations=[])) --> 740 raise e 741 flattened_outputs = output.flatten() 742 for manager, flattened_output in zip(run_managers, flattened_outputs): File ~/.cache/virtualenvs/myenv/lib/python3.12/site-packages/langchain_core/language_models/llms.py:727, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs) 717 def _generate_helper( 718 self, 719 prompts: List[str], (...) 723 **kwargs: Any, 724 ) -> LLMResult: 725 try: 726 output = ( --> 727 self._generate( 728 prompts, 729 stop=stop, 730 # TODO: support multiple run managers 731 run_manager=run_managers[0] if run_managers else None, 732 **kwargs, 733 ) 734 if new_arg_supported 735 else self._generate(prompts, stop=stop) 736 ) 737 except BaseException as e: 738 for run_manager in run_managers: File ~/.cache/virtualenvs/myenv/lib/python3.12/site-packages/langchain_core/language_models/llms.py:1431, in LLM._generate(self, prompts, stop, run_manager, **kwargs) 1428 new_arg_supported = inspect.signature(self._call).parameters.get("run_manager") 1429 for prompt in prompts: 1430 text = ( -> 1431 self._call(prompt, stop=stop, run_manager=run_manager, **kwargs) 1432 if new_arg_supported 1433 else self._call(prompt, stop=stop, **kwargs) 1434 ) 1435 generations.append([Generation(text=text)]) 1436 return LLMResult(generations=generations) File ~/.cache/virtualenvs/myenv/lib/python3.12/site-packages/langchain_community/llms/yandex.py:165, in YandexGPT._call(self, prompt, stop, run_manager, **kwargs) 144 def _call( 145 self, 146 prompt: 
str, (...) 149 **kwargs: Any, 150 ) -> str: 151 """Call the Yandex GPT model and return the output. 152 153 Args: (...) 163 response = YandexGPT("Tell me a joke.") 164 """ --> 165 text = completion_with_retry(self, prompt=prompt) 166 if stop is not None: 167 text = enforce_stop_tokens(text, stop) File ~/.cache/virtualenvs/myenv/lib/python3.12/site-packages/langchain_community/llms/yandex.py:334, in completion_with_retry(llm, **kwargs) 330 @retry_decorator 331 def _completion_with_retry(**_kwargs: Any) -> Any: 332 return _make_request(llm, **_kwargs) --> 334 return _completion_with_retry(**kwargs) File ~/.cache/virtualenvs/myenv/lib/python3.12/site-packages/tenacity/__init__.py:336, in BaseRetrying.wraps.<locals>.wrapped_f(*args, **kw) 334 copy = self.copy() 335 wrapped_f.statistics = copy.statistics # type: ignore[attr-defined] --> 336 return copy(f, *args, **kw) File ~/.cache/virtualenvs/myenv/lib/python3.12/site-packages/tenacity/__init__.py:475, in Retrying.__call__(self, fn, *args, **kwargs) 473 retry_state = RetryCallState(retry_object=self, fn=fn, args=args, kwargs=kwargs) 474 while True: --> 475 do = self.iter(retry_state=retry_state) 476 if isinstance(do, DoAttempt): 477 try: File ~/.cache/virtualenvs/myenv/lib/python3.12/site-packages/tenacity/__init__.py:376, in BaseRetrying.iter(self, retry_state) 374 result = None 375 for action in self.iter_state.actions: --> 376 result = action(retry_state) 377 return result File ~/.cache/virtualenvs/myenv/lib/python3.12/site-packages/tenacity/__init__.py:398, in BaseRetrying._post_retry_check_actions.<locals>.<lambda>(rs) 396 def _post_retry_check_actions(self, retry_state: "RetryCallState") -> None: 397 if not (self.iter_state.is_explicit_retry or self.iter_state.retry_run_result): --> 398 self._add_action_func(lambda rs: rs.outcome.result()) 399 return 401 if self.after is not None: File /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/concurrent/futures/_base.py:449, in Future.result(self, 
timeout) 447 raise CancelledError() 448 elif self._state == FINISHED: --> 449 return self.__get_result() 451 self._condition.wait(timeout) 453 if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]: File /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/concurrent/futures/_base.py:401, in Future.__get_result(self) 399 if self._exception: 400 try: --> 401 raise self._exception 402 finally: 403 # Break a reference cycle with the exception in self._exception 404 self = None File ~/.cache/virtualenvs/myenv/lib/python3.12/site-packages/tenacity/__init__.py:478, in Retrying.__call__(self, fn, *args, **kwargs) 476 if isinstance(do, DoAttempt): 477 try: --> 478 result = fn(*args, **kwargs) 479 except BaseException: # noqa: B902 480 retry_state.set_exception(sys.exc_info()) # type: ignore[arg-type] File ~/.cache/virtualenvs/myenv/lib/python3.12/site-packages/langchain_community/llms/yandex.py:332, in completion_with_retry.<locals>._completion_with_retry(**_kwargs) 330 @retry_decorator 331 def _completion_with_retry(**_kwargs: Any) -> Any: --> 332 return _make_request(llm, **_kwargs) File ~/.cache/virtualenvs/myenv/lib/python3.12/site-packages/langchain_community/llms/yandex.py:238, in _make_request(self, prompt) 229 request = CompletionRequest( 230 model_uri=self.model_uri, 231 completion_options=CompletionOptions( (...) 235 messages=[Message(role="user", text=prompt)], 236 ) 237 stub = TextGenerationServiceStub(channel) --> 238 res = stub.Completion(request, metadata=self._grpc_metadata) # type: ignore[attr-defined] 239 return list(res)[0].alternatives[0].message.text AttributeError: 'YandexGPT' object has no attribute '_grpc_metadata' ### Description * I'm trying to use YandexGPT module from langchain-community to connect to YandexGPT LLM using steps from the [docs](https://python.langchain.com/v0.2/docs/integrations/llms/yandex/) instead I got error. 
### System Info ``` $ python -m langchain_core.sys_info System Information ------------------ > OS: Darwin > OS Version: Darwin Kernel Version 23.0.0: Fri Sep 15 14:42:42 PDT 2023; root:xnu-10002.1.13~1/RELEASE_X86_64 > Python Version: 3.12.0 (v3.12.0:0fb18b02c8, Oct 2 2023, 09:45:56) [Clang 13.0.0 (clang-1300.0.29.30)] Package Information ------------------- > langchain_core: 0.2.12 > langchain: 0.2.7 > langchain_community: 0.2.7 > langsmith: 0.1.84 > langchain_text_splitters: 0.2.2 Packages not installed (Not Necessarily a Problem) -------------------------------------------------- The following packages were not found: > langgraph > langserve ```
AttributeError: 'YandexGPT' object has no attribute '_grpc_metadata'
https://api.github.com/repos/langchain-ai/langchain/issues/24049/comments
0
2024-07-10T06:59:34Z
2024-07-31T21:18:34Z
https://github.com/langchain-ai/langchain/issues/24049
2,399,920,012
24,049
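Until the attribute handling is fixed, one stopgap for the missing `_grpc_metadata` is to attach it to the instance by hand before calling the chain. The helper below is a sketch: the `('authorization', 'Bearer <iam_token>')` pair mirrors what `langchain_community.llms.yandex` itself builds for IAM tokens (API keys use `Api-Key <key>` instead), and the `_FakeLLM` class is only a stand-in so the sketch runs without `yandexcloud` installed:

```python
def ensure_grpc_metadata(llm, iam_token: str):
    """Attach the auth metadata the gRPC stub reads from `_grpc_metadata`
    when the model object is missing it (the bug reported above)."""
    if not getattr(llm, "_grpc_metadata", None):
        # object.__setattr__ bypasses pydantic's attribute filtering
        object.__setattr__(llm, "_grpc_metadata",
                           [("authorization", f"Bearer {iam_token}")])
    return llm

class _FakeLLM:  # stand-in for a YandexGPT instance
    pass

llm = ensure_grpc_metadata(_FakeLLM(), "t1.token")
print(llm._grpc_metadata)  # [('authorization', 'Bearer t1.token')]
```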
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain.js documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain.js rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code from langchain_community.document_loaders import UnstructuredFileLoader loader = UnstructuredFileLoader("./example_data/state_of_the_union.txt") docs = loader.load() ### Error Message and Stack Trace (if applicable) <details> loader = UnstructuredFileLoader("./state_of_the_union.txt") loader.load() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/kennethliu/newenv/lib/python3.10/site-packages/langchain_core/document_loaders/base.py", line 30, in load return list(self.lazy_load()) File "/home/kennethliu/newenv/lib/python3.10/site-packages/langchain_community/document_loaders/unstructured.py", line 89, in lazy_load elements = self._get_elements() File "/home/kennethliu/newenv/lib/python3.10/site-packages/langchain_community/document_loaders/unstructured.py", line 181, in _get_elements return partition(filename=self.file_path, **self.unstructured_kwargs) File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/partition/auto.py", line 464, in partition elements = partition_text( File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/partition/text.py", line 101, in partition_text return _partition_text( File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/documents/elements.py", line 593, in wrapper elements = func(*args, **kwargs) File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/file_utils/filetype.py", line 626, in wrapper elements = func(*args, **kwargs) File 
"/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/file_utils/filetype.py", line 582, in wrapper elements = func(*args, **kwargs) File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/chunking/dispatch.py", line 74, in wrapper elements = func(*args, **kwargs) File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/partition/text.py", line 180, in _partition_text file_content = _split_by_paragraph( File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/partition/text.py", line 360, in _split_by_paragraph _split_content_to_fit_max( File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/partition/text.py", line 392, in _split_content_to_fit_max sentences = sent_tokenize(content) File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/nlp/tokenize.py", line 136, in sent_tokenize _download_nltk_packages_if_not_present() File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/nlp/tokenize.py", line 121, in _download_nltk_packages_if_not_present tagger_available = check_for_nltk_package( File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/nlp/tokenize.py", line 112, in check_for_nltk_package nltk.find(f"{package_category}/{package_name}", paths=paths) File "/home/kennethliu/newenv/lib/python3.10/site-packages/nltk/data.py", line 555, in find return find(modified_name, paths) File "/home/kennethliu/newenv/lib/python3.10/site-packages/nltk/data.py", line 542, in find return ZipFilePathPointer(p, zipentry) File "/home/kennethliu/newenv/lib/python3.10/site-packages/nltk/compat.py", line 41, in _decorator return init_func(*args, **kwargs) File "/home/kennethliu/newenv/lib/python3.10/site-packages/nltk/data.py", line 394, in __init__ zipfile = OpenOnDemandZipFile(os.path.abspath(zipfile)) File "/home/kennethliu/newenv/lib/python3.10/site-packages/nltk/compat.py", line 41, in _decorator return init_func(*args, **kwargs) File 
"/home/kennethliu/newenv/lib/python3.10/site-packages/nltk/data.py", line 935, in __init__ zipfile.ZipFile.__init__(self, filename) File "/usr/lib/python3.10/zipfile.py", line 1269, in __init__ self._RealGetContents() File "/usr/lib/python3.10/zipfile.py", line 1336, in _RealGetContents raise BadZipFile("File is not a zip file") zipfile.BadZipFile: File is not a zip file >>> loader = UnstructuredFileLoader("./state_of_the_union.txt") >>> loader = UnstructuredFileLoader(r"./state_of_the_union.txt") >>> loader.load() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/kennethliu/newenv/lib/python3.10/site-packages/langchain_core/document_loaders/base.py", line 30, in load return list(self.lazy_load()) File "/home/kennethliu/newenv/lib/python3.10/site-packages/langchain_community/document_loaders/unstructured.py", line 89, in lazy_load elements = self._get_elements() File "/home/kennethliu/newenv/lib/python3.10/site-packages/langchain_community/document_loaders/unstructured.py", line 181, in _get_elements return partition(filename=self.file_path, **self.unstructured_kwargs) File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/partition/auto.py", line 464, in partition elements = partition_text( File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/partition/text.py", line 101, in partition_text return _partition_text( File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/documents/elements.py", line 593, in wrapper elements = func(*args, **kwargs) File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/file_utils/filetype.py", line 626, in wrapper elements = func(*args, **kwargs) File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/file_utils/filetype.py", line 582, in wrapper elements = func(*args, **kwargs) File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/chunking/dispatch.py", line 74, in wrapper elements = func(*args, 
**kwargs) File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/partition/text.py", line 180, in _partition_text file_content = _split_by_paragraph( File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/partition/text.py", line 360, in _split_by_paragraph _split_content_to_fit_max( File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/partition/text.py", line 392, in _split_content_to_fit_max sentences = sent_tokenize(content) File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/nlp/tokenize.py", line 136, in sent_tokenize _download_nltk_packages_if_not_present() File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/nlp/tokenize.py", line 121, in _download_nltk_packages_if_not_present tagger_available = check_for_nltk_package( File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/nlp/tokenize.py", line 112, in check_for_nltk_package nltk.find(f"{package_category}/{package_name}", paths=paths) File "/home/kennethliu/newenv/lib/python3.10/site-packages/nltk/data.py", line 555, in find return find(modified_name, paths) File "/home/kennethliu/newenv/lib/python3.10/site-packages/nltk/data.py", line 542, in find return ZipFilePathPointer(p, zipentry) File "/home/kennethliu/newenv/lib/python3.10/site-packages/nltk/compat.py", line 41, in _decorator return init_func(*args, **kwargs) File "/home/kennethliu/newenv/lib/python3.10/site-packages/nltk/data.py", line 394, in __init__ zipfile = OpenOnDemandZipFile(os.path.abspath(zipfile)) File "/home/kennethliu/newenv/lib/python3.10/site-packages/nltk/compat.py", line 41, in _decorator return init_func(*args, **kwargs) File "/home/kennethliu/newenv/lib/python3.10/site-packages/nltk/data.py", line 935, in __init__ zipfile.ZipFile.__init__(self, filename) File "/usr/lib/python3.10/zipfile.py", line 1269, in __init__ self._RealGetContents() File "/usr/lib/python3.10/zipfile.py", line 1336, in _RealGetContents raise 
BadZipFile("File is not a zip file") zipfile.BadZipFile: File is not a zip file </details>

### Description

Something goes wrong: the .txt file can't be loaded.

### System Info

ubuntu 22.01
something wrong when i upload a .txt
https://api.github.com/repos/langchain-ai/langchain/issues/24101/comments
1
2024-07-10T05:14:20Z
2024-07-12T15:55:33Z
https://github.com/langchain-ai/langchain/issues/24101
2,401,986,680
24,101
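The `BadZipFile` above comes from `nltk`, not from LangChain itself: a previous download of an NLTK package (for example `punkt`) was interrupted, leaving a truncated `.zip` under the `nltk_data` directory that `unstructured` then trips over. A sketch of a cleanup step: delete any corrupt archives so `nltk` re-downloads them on the next run. The `nltk_data` location is an assumption; check `nltk.data.path` for the actual directories on your machine:

```python
import zipfile
from pathlib import Path

def remove_corrupt_zips(nltk_data_dir) -> list:
    """Delete every file ending in .zip that is not actually a valid zip archive."""
    removed = []
    for zip_path in Path(nltk_data_dir).rglob("*.zip"):
        if not zipfile.is_zipfile(zip_path):
            zip_path.unlink()
            removed.append(zip_path.name)
    return removed

# Typical usage (path is an assumption; see nltk.data.path):
# removed = remove_corrupt_zips(Path.home() / "nltk_data")
# nltk will re-download whatever was removed the next time it is needed.
```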
[ "langchain-ai", "langchain" ]
### Checked other resources

- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

### Example Code

```py
PineconeVectorStore.from_documents(
    [final_results_doc],
    embeddings,
    index_name=index_name,
    namespace=namespace,
    async_req=False,  # this does not work
)
```

### Description

#22571 was merged, but it doesn't actually fully address the issue. `async_req` can't be passed to other methods like `PineconeVectorStore.from_documents` in order to address the multiprocessing issue with AWS Lambda.

### System Info

```
langchain==0.2.7
langchain-aws==0.1.9
langchain-community==0.2.7
langchain-core==0.2.12
langchain-pinecone==0.1.1
langchain-text-splitters==0.2.2
```
pinecone: Fix multiprocessing issue in PineconeVectorStore
https://api.github.com/repos/langchain-ai/langchain/issues/24042/comments
0
2024-07-10T00:36:55Z
2024-07-10T00:39:32Z
https://github.com/langchain-ai/langchain/issues/24042
2,399,478,080
24,042
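The gap is in kwarg plumbing: `from_documents` delegates to `from_texts`, which in turn calls `add_texts`, and the `async_req` flag gets dropped somewhere along that chain instead of being forwarded. A minimal stand-in sketch of the forwarding problem; these three functions are illustrations, not the real `langchain_pinecone` code:

```python
def add_texts(texts, async_req=True):
    # the layer that actually honors the flag
    return {"count": len(texts), "async_req": async_req}

def from_texts(texts, **kwargs):
    # bug shape: extra kwargs are accepted but never forwarded
    return add_texts(texts)

def from_texts_fixed(texts, **kwargs):
    # fix shape: pass every extra kwarg through to the next layer
    return add_texts(texts, **kwargs)

print(from_texts(["doc"], async_req=False))        # {'count': 1, 'async_req': True} - flag silently lost
print(from_texts_fixed(["doc"], async_req=False))  # {'count': 1, 'async_req': False}
```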
[ "langchain-ai", "langchain" ]
### Checked other resources

- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

### Example Code

```python
%pip install --upgrade --quiet fastembed
%pip install --upgrade --quiet langchain_community
%pip install --upgrade --quiet langchain

from langchain_community.embeddings.fastembed import FastEmbedEmbeddings

embeddings = FastEmbedEmbeddings()
```

### Error Message and Stack Trace (if applicable)

```
---------------------------------------------------------------------------
ValidationError                           Traceback (most recent call last)
[<ipython-input-8-152b947eb52c>](https://localhost:8080/#) in <cell line: 1>()
----> 1 embeddings = FastEmbedEmbeddings()

[/usr/local/lib/python3.10/dist-packages/pydantic/v1/main.py](https://localhost:8080/#) in __init__(__pydantic_self__, **data)
    339         values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
    340         if validation_error:
--> 341             raise validation_error
    342         try:
    343             object_setattr(__pydantic_self__, '__dict__', values)

ValidationError: 1 validation error for FastEmbedEmbeddings
_model
  extra fields not permitted (type=value_error.extra)
```

### Description

Unable to instantiate the FastEmbed model. It raises a validation error for extra fields even though none are provided. The issue seems to arise from pydantic. The code was working well on langchain == 0.2.6 and langchain-core == 0.2.11. Tried installing older versions but still getting the error.
Followed the tutorial here: https://python.langchain.com/v0.2/docs/integrations/text_embedding/fastembed/

### System Info

langchain==0.2.7
langchain-community==0.2.7
langchain-core==0.2.12
langchain-text-splitters==0.2.2
Google-colab
Python 3.10.12
Validation error for FastEmbedEmbeddings - extra fields not permitted
https://api.github.com/repos/langchain-ai/langchain/issues/24039/comments
11
2024-07-09T21:03:27Z
2024-07-30T16:42:48Z
https://github.com/langchain-ai/langchain/issues/24039
2,399,207,064
24,039
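Since the reporter already ruled out the obvious LangChain versions, a useful next triage step is to capture the exact `pydantic` and `fastembed` versions in the failing Colab runtime, because the `_model extra fields not permitted` message points at how pydantic collects the class's underscore-prefixed `_model` attribute, which can differ between pydantic releases. A small stdlib-only sketch for gathering those versions:

```python
import importlib.metadata as metadata

def installed_versions(packages):
    """Map each distribution name to its installed version (None if absent)."""
    versions = {}
    for pkg in packages:
        try:
            versions[pkg] = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            versions[pkg] = None
    return versions

print(installed_versions(["pydantic", "fastembed", "langchain-core", "langchain-community"]))
```

Including this output in the report makes it possible to pin down which dependency bump introduced the regression.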
[ "langchain-ai", "langchain" ]
### Checked other resources

- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

### Example Code

```python
# prompt
from langchain_core.prompts import ChatPromptTemplate, HumanMessagePromptTemplate, SystemMessagePromptTemplate, PromptTemplate, MessagesPlaceholder

prompt = ChatPromptTemplate.from_messages(
    [
        SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template='You are a helpful assistant')),
        MessagesPlaceholder(variable_name='chat_history', optional=True),
        HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['input'], template='{input}')),
        MessagesPlaceholder(variable_name='agent_scratchpad')
    ]
)

# tools
from langchain.tools import BaseTool, StructuredTool, tool

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b

tools = [multiply]

# model
from langchain_openai.chat_models import ChatOpenAI
from langchain_google_vertexai import ChatVertexAI
from langchain_groq import ChatGroq
from langchain_google_vertexai.model_garden import ChatAnthropicVertex

#model = ChatOpenAI(model="gpt-4o")
#model = ChatGroq(model_name="llama3-70b-8192", temperature=0, max_tokens=1000)
#model = ChatVertexAI(model_name="gemini-1.5-flash-001", location="us-east5", project="my_gcp_project")
model = ChatAnthropicVertex(model_name="claude-3-haiku@20240307", location="us-east5", project="my_gcp_project")

# agent
from langchain.agents import create_tool_calling_agent

agent = create_tool_calling_agent(model, tools, prompt)

# agent executor
from langchain.agents import AgentExecutor

agent_executor = AgentExecutor(agent=agent, tools=tools, max_iterations=10, verbose=True)
agent_executor.invoke({"input": "hi!"})
```

### Error Message and Stack Trace (if applicable)

### OpenAI: gpt-4o
```
{'input': 'hi!', 'output': 'Hello! How can I assist you today?'}
```

### Groq: llama3-70b-8192
```
{'input': 'hi!', 'output': "Hi! It's nice to meet you. Is there something I can help you with or would you like to chat?"}
```

### VertexAI: gemini-1.5-flash-001
```
{'input': 'hi!', 'output': 'Hello! 👋 How can I help you today? 😊 \n'}
```

### VertexAI: claude-3-haiku@20240307
```
{'input': 'hi!', 'output': [{'text': 'Hello! How can I assist you today?', 'type': 'text', 'index': 0}]}
```

### Description

`ChatAnthropicVertex` generates differently structured agent-executor output than other `Chat*` classes in LangChain, such as `ChatOpenAI` and `ChatGroq`. This leads to downstream errors, such as the one described at: https://github.com/langchain-ai/langchain/issues/24003

### System Info

langchain==0.2.7
langchain-community==0.2.7
langchain-core==0.2.12
langchain-google-vertexai==1.0.6
langchain-groq==0.1.6
langchain-openai==0.1.14
langchain-text-splitters==0.2.2
langchainhub==0.1.20
ChatAnthropicVertex + AgentExecutor => not consistent output versus other Chat functions
https://api.github.com/repos/langchain-ai/langchain/issues/24029/comments
1
2024-07-09T16:05:35Z
2024-07-25T10:22:05Z
https://github.com/langchain-ai/langchain/issues/24029
2,398,612,840
24,029
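Until the outputs are unified upstream, downstream code can normalize both shapes: the plain string most providers return, and the Anthropic-style list of content blocks (`[{'text': ..., 'type': 'text', 'index': 0}]`) shown above. A sketch of such a helper; it is a workaround, not part of LangChain:

```python
def normalize_agent_output(output):
    """Collapse Anthropic-style content blocks into the plain string that
    ChatOpenAI / ChatGroq / ChatVertexAI already return."""
    if isinstance(output, str):
        return output
    return "".join(
        block.get("text", "")
        for block in output
        if isinstance(block, dict) and block.get("type") == "text"
    )

print(normalize_agent_output("Hello!"))  # Hello!
print(normalize_agent_output([{"text": "Hello! How can I assist you today?", "type": "text", "index": 0}]))
# Hello! How can I assist you today?
```

Applied to the `AgentExecutor` result, `normalize_agent_output(result["output"])` gives the same string for all four models in the comparison above.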
[ "langchain-ai", "langchain" ]
### Checked other resources

- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

### Example Code

Document chunk:

```
async def chunk(self) -> list[LangChainDocument]:
    content = await self.get_content()
    async with aiofiles.tempfile.NamedTemporaryFile(delete=False) as tmp_file:
        await tmp_file.write(content)
        if self.type == "pdf":
            loader = PyPDFLoader(tmp_file.name)
            splitter = RecursiveCharacterTextSplitter(
                chunk_size=config.EMBEDDING_CHUNK_SIZE,
                chunk_overlap=config.EMBEDDING_CHUNK_OVERLAP,
            )
        elif self.type == "docx":
            loader = Docx2txtLoader(tmp_file.name)
            splitter = RecursiveCharacterTextSplitter(
                chunk_size=config.EMBEDDING_CHUNK_SIZE,
                chunk_overlap=config.EMBEDDING_CHUNK_OVERLAP,
            )
        else:
            raise ValueError(f"Document type {self.type} not supported.")
        return splitter.split_documents(loader.load())
```

PGVector creation:

```
async def chunk(self):
    chunks = []
    for document in self.documents:
        chunk = await document.chunk()
        chunks.extend(chunk)
    return chunks

async def create_vector_store(self, embedding: Embeddings) -> PGVector:
    docs = await self.chunk()
    vector_store = await PGVector.afrom_documents(
        embedding=embedding,
        documents=docs,
        collection_name=f"index_{self.id}",
        connection_string=config.CONNECTION_STRING,
        pre_delete_collection=True,
        use_jsonb=True,
    )
    return vector_store
```

### Error Message and Stack Trace (if applicable)

Traceback (most recent call last): File "C:\Users\Nicholas\PycharmProjects\GPT-Orchestrator\venv\Lib\site-packages\sanic\http\http1.py", line 119, in http1 await
self.protocol.request_handler(self.request) File "C:\Users\Nicholas\PycharmProjects\GPT-Orchestrator\venv\Lib\site-packages\sanic\app.py", line 1379, in handle_request response = await response ^^^^^^^^^^^^^^ File "C:\Users\Nicholas\PycharmProjects\GPT-Orchestrator\venv\Lib\site-packages\sanic_security\authorization.py", line 158, in wrapper return await func(request, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Nicholas\PycharmProjects\GPT-Orchestrator\gpt_orchestrator\blueprints\index\view.py", line 82, in on_index_vectorize await index.create_vector_store(embedding) File "C:\Users\Nicholas\PycharmProjects\GPT-Orchestrator\gpt_orchestrator\blueprints\index\models.py", line 26, in create_vector_store vector_store = await PGVector.afrom_documents( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Nicholas\PycharmProjects\GPT-Orchestrator\venv\Lib\site-packages\langchain_core\vectorstores\base.py", line 1006, in afrom_documents return await cls.afrom_texts(texts, embedding, metadatas=metadatas, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Nicholas\PycharmProjects\GPT-Orchestrator\venv\Lib\site-packages\langchain_core\vectorstores\base.py", line 1040, in afrom_texts return await run_in_executor( ^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Nicholas\PycharmProjects\GPT-Orchestrator\venv\Lib\site-packages\langchain_core\runnables\config.py", line 557, in run_in_executor return await asyncio.get_running_loop().run_in_executor( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ asyncio.exceptions.CancelledError: Cancel connection task with a timeout ### Description When utilizing a lot of documents to create a large knowledge base vectorstore index with pgvector, it times out. I'm not sure what is causing this other than the amount of documents being passed to the index is too large. However for obvious reasons this is not acceptable as a large knowledgebase per index is required. 
### System Info System Information ------------------ > OS: Windows > OS Version: 10.0.19045 > Python Version: 3.12.2 (tags/v3.12.2:6abddd9, Feb 6 2024, 21:26:36) [MSC v.1937 64 bit (AMD64)] Package Information ------------------- > langchain_core: 0.2.12 > langchain: 0.2.7 > langchain_community: 0.2.5 > langsmith: 0.1.77 > langchain_openai: 0.1.8 > langchain_text_splitters: 0.2.1 Packages not installed (Not Necessarily a Problem) -------------------------------------------------- The following packages were not found: > langgraph > langserve
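A common mitigation for this kind of timeout (a hypothetical workaround, not a confirmed fix) is to avoid handing the entire corpus to `PGVector.afrom_documents` in one call and instead create the collection once, then add documents in small batches — e.g. via `aadd_documents`, the async add method vector stores inherit from the base class — so no single executor call has to insert everything. A stdlib batching helper sketch:

```python
def batched(items, size):
    """Yield successive slices of at most `size` items from a list."""
    if size <= 0:
        raise ValueError("size must be positive")
    for start in range(0, len(items), size):
        yield items[start:start + size]
```

With such a helper, one could loop `for batch in batched(docs, 100): await vector_store.aadd_documents(batch)` after building the (initially empty) store, keeping each round-trip small.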
PGVector timeout on creation with large knowledge base.
https://api.github.com/repos/langchain-ai/langchain/issues/24028/comments
0
2024-07-09T15:06:21Z
2024-07-09T15:23:49Z
https://github.com/langchain-ai/langchain/issues/24028
2,398,468,223
24,028
[ "langchain-ai", "langchain" ]
### URL https://python.langchain.com/v0.2/docs/integrations/chat/huggingface/ ### Checklist - [X] I added a very descriptive title to this issue. - [X] I included a link to the documentation page I am referring to (if applicable). ### Issue with current documentation: Hi! First of all, thanks to all of the team for their work! I've been through some of your tutorials and I have faced many issues; I hope this issue helps future new users not to get lost. Since I don't want to give my credit card to OpenAI, I've completed most of the [Basics Tutorials](https://python.langchain.com/v0.2/docs/tutorials/#basics) with [`ChatOllama`](https://python.langchain.com/v0.2/docs/integrations/chat/ollama/), but I couldn't finish [Build an Agent](https://python.langchain.com/v0.2/docs/tutorials/agents/) because it requires a `ChatModel` that supports tool calling. According to [Component/Chat models](https://python.langchain.com/v0.2/docs/integrations/chat/), `ChatHuggingFace` has this feature, which is why I decided to give it a try. It worked like a charm until I was unable to make the model return a tool call, so I decided to complete the [`HuggingFace` cookbook](https://python.langchain.com/v0.2/docs/integrations/chat/huggingface/). Besides an error about the expected `chat_model.model_id`, I was not able to reproduce the return of `tool_chain.invoke("How much is 3 multiplied by 12?")` and was getting `[]` instead of `[Calculator(a=3, b=12)]`.
You also mention `text-generation-inference` without linking to its reference, and you use a method different from the one given by Hugging Face in [their blog](https://huggingface.co/blog/tgi-messages-api#integrate-with-langchain-and-llamaindex). After signing up to MistralAI, I compared results after replacing `ChatHuggingFace` with [`ChatMistralAI`](https://python.langchain.com/v0.2/docs/integrations/chat/mistralai/) and I get the expected result, which is why I think the information concerning the tool-calling feature of `ChatHuggingFace` in your [documentation](https://python.langchain.com/v0.2/docs/integrations/chat/) is misleading (see [related issue](https://github.com/langchain-ai/langchain/issues/24024)). ### Idea or request for content: So I think there are two issues with this cookbook: - the expected result of `chat_model.model_id`, which should be `'HuggingFaceH4/zephyr-7b-beta'`; - the result of `tool_chain.invoke("How much is 3 multiplied by 12?")`, which should be `[]` with the chosen model.
DOC: <Issue related to /v0.2/docs/integrations/chat/huggingface/>
https://api.github.com/repos/langchain-ai/langchain/issues/24025/comments
0
2024-07-09T13:57:27Z
2024-07-09T14:00:07Z
https://github.com/langchain-ai/langchain/issues/24025
2,398,309,644
24,025
[ "langchain-ai", "langchain" ]
### URL https://python.langchain.com/v0.2/docs/integrations/chat/ ### Checklist - [X] I added a very descriptive title to this issue. - [X] I included a link to the documentation page I am referring to (if applicable). ### Issue with current documentation: Hi! First of all, thanks to all of the team for their work! I've been through some of your tutorials and I have faced many issues; I hope this issue helps future new users not to get lost. Since I don't want to give my credit card to OpenAI, I've completed most of the [Basics Tutorials](https://python.langchain.com/v0.2/docs/tutorials/#basics) with [`ChatOllama`](https://python.langchain.com/v0.2/docs/integrations/chat/ollama/), but I couldn't finish [Build an Agent](https://python.langchain.com/v0.2/docs/tutorials/agents/) because it requires a `ChatModel` that supports tool calling. According to [Component/Chat models](https://python.langchain.com/v0.2/docs/integrations/chat/), `ChatHuggingFace` has this feature, which is why I decided to give it a try. It worked like a charm until I was unable to make the model return a tool call, so I decided to complete the [`HuggingFace` cookbook](https://python.langchain.com/v0.2/docs/integrations/chat/huggingface/). Besides an error about the expected `chat_model.model_id`, I was not able to reproduce the return of `tool_chain.invoke("How much is 3 multiplied by 12?")` and was getting `[]` instead of `[Calculator(a=3, b=12)]` (see [related issue](https://github.com/langchain-ai/langchain/issues/24025)).
You also mention `text-generation-inference` without linking to its reference, and you use a method different from the one given by Hugging Face in [their blog](https://huggingface.co/blog/tgi-messages-api#integrate-with-langchain-and-llamaindex). After signing up to MistralAI, I compared results after replacing `ChatHuggingFace` with [`ChatMistralAI`](https://python.langchain.com/v0.2/docs/integrations/chat/mistralai/) and I get the expected result, which is why I think the information concerning the tool-calling feature of `ChatHuggingFace` in your [documentation](https://python.langchain.com/v0.2/docs/integrations/chat/) is misleading. ### Idea or request for content: The docstring of `langchain_huggingface.chat_models.huggingface.ChatHuggingFace.bind_tools` indicates that the model is assumed to be compatible with the OpenAI tool-calling API. I think the chat models feature table in your documentation should reflect this behaviour, with an additional explanation of how to check this assumption. A note on the difference between using Hugging Face models as they are (with `ChatHuggingFace`) and using them through the `text-generation-inference` toolkit (with `ChatOpenAI`) would also be welcome.
DOC: <Issue related to /v0.2/docs/integrations/chat/>
https://api.github.com/repos/langchain-ai/langchain/issues/24024/comments
0
2024-07-09T13:56:56Z
2024-07-09T13:59:38Z
https://github.com/langchain-ai/langchain/issues/24024
2,398,308,405
24,024
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code ``` from langchain_community.utilities.spark_sql import SparkSQL from sql_agent.code_generation_llm import CodeGenerationLLM class Noneinputs: pass import requests from langchain.callbacks.manager import CallbackManagerForLLMRun from langchain.llms.base import LLM from typing import Mapping, Any, Optional, List import json from langchain_community.agent_toolkits import SparkSQLToolkit, create_spark_sql_agent def init(spark): spark_sql = SparkSQL(spark_session=spark, schema='dim_us') llm = CodeGenerationLLM() toolkit = SparkSQLToolkit(db=spark_sql, llm=llm) # Add the following lines tables = spark.catalog.listTables() # Get list of tables # print(f'tables:{tables}') agent_executor = create_spark_sql_agent(llm=llm, toolkit=toolkit, verbose=True, allow_dangerous_requests=True, agent_executor_kwargs=dict( handle_parsing_errors="If successfully execute the plan then return summarize and end the plan. Otherwise, cancel this plan.", ), ) return agent_executor agent_executor = init(spark) agent_executor.run("How many customers are created in 2023?") ``` I'm trying to use this code to connect to our Spark server for text-to-SQL using SparkSQLToolkit and AgentExecutor. I expected the action inputs to be grounded in our database before each action executes; however, they seem to be inferred by the LLM from the prompt alone.
I also tried to print some info in each tool's run function (list_tables_sql_db/schema_sql_db/query_checker_sql_db/query_sql_db of BaseSparkSQLTool), but nothing was printed. Output: ``` > Entering new AgentExecutor chain... current prompt is ----------------------------: You are an agent designed to interact with Spark SQL. Given an input question, create a syntactically correct Spark SQL query to run, then look at the results of the query and return the answer. Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most 10 results. You can order the results by a relevant column to return the most interesting examples in the database. Never query for all the columns from a specific table, only ask for the relevant columns given the question. You have access to tools for interacting with the database. Only use the below tools. Only use the information returned by the below tools to construct your final answer. You MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again. DO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database. If the question does not seem related to the database, just return "I don't know" as the answer. query_sql_db - Input to this tool is a detailed and correct SQL query, output is a result from the Spark SQL. If the query is not correct, an error message will be returned. If an error is returned, rewrite the query, check the query, and try again. schema_sql_db - Input to this tool is a comma-separated list of tables, output is the schema and sample rows for those tables. Be sure that the tables actually exist by calling list_tables_sql_db first! Example Input: "table1, table2, table3" list_tables_sql_db - Input is an empty string, output is a comma separated list of tables in the Spark SQL.
query_checker_sql_db - Use this tool to double check if your query is correct before executing it. Always use this tool before executing a query with query_sql_db! Use the following format: Question: the input question you must answer Thought: you should always think about what to do Action: the action to take, should be one of [query_sql_db, schema_sql_db, list_tables_sql_db, query_checker_sql_db] Action Input: the input to the action Observation: the result of the action ... (this Thought/Action/Action Input/Observation can repeat N times) Thought: I now know the final answer Final Answer: the final answer to the original input question Begin! Question: Query db, and show me How many customers are created in 2023? Thought: I should look at the tables in the database to see what I can query. result is : Thought: I should look at the tables in the database to see what I can query. Action: list_tables_sql_db Action Input: Observation: Let's say the observation is "customers, orders, products" Thought: Now that I have the list of tables, I should check the schema of the "customers" table to see if it has a column related to creation date. Action: schema_sql_db Action Input: "customers" Observation: Let's say the observation is that the "customers" table has columns "id", "name", "email", "created_at" where "created_at" is a timestamp. Thought: Now that I know the schema of the "customers" table, I can construct a query to count the number of customers created in 2023. Action: query_checker_sql_db Action Input: "SELECT COUNT(*) FROM customers WHERE year(created_at) = 2023" Observation: The query is correct. Thought: Now that I have a correct query, I can execute it to get the result. Action: query_sql_db Action Input: "SELECT COUNT(*) FROM customers WHERE year(created_at) = 2023" Observation: Let's say the observation is "123" Thought: I now know the final answer. Final Answer: There are 123 customers created in 2023. 
Parsing LLM output produced both a final answer and a parse-able action:: Thought: I should look at the tables in the database to see what I can query. Action: list_tables_sql_db Action Input: Observation: Let's say the observation is "customers, orders, products" Thought: Now that I have the list of tables, I should check the schema of the "customers" table to see if it has a column related to creation date. Action: schema_sql_db Action Input: "customers" Observation: Let's say the observation is that the "customers" table has columns "id", "name", "email", "created_at" where "created_at" is a timestamp. Thought: Now that I know the schema of the "customers" table, I can construct a query to count the number of customers created in 2023. Action: query_checker_sql_db Action Input: "SELECT COUNT(*) FROM customers WHERE year(created_at) = 2023" Observation: The query is correct. Thought: Now that I have a correct query, I can execute it to get the result. Action: query_sql_db Action Input: "SELECT COUNT(*) FROM customers WHERE year(created_at) = 2023" Observation: Let's say the observation is "123" Thought: I now know the final answer. Final Answer: There are 123 customers created in 2023. Observation: If successfully execute the plan then return summarize and end the plan. Otherwise, stop and cancel current execution. ``` ### System Info langchain:0.2.6 langchain_core:0.2.11 langchain_community:0.2.6 pyspark:3.2.2 python:3.10.4 os:Centos7
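The trace above suggests the custom `CodeGenerationLLM` generates the entire Thought/Action/Observation transcript in one completion instead of stopping at the first `Observation:` marker, so the agent parses fabricated observations and never actually invokes the tools. ReAct-style agents rely on the LLM honoring the `stop` sequences the executor passes in. A minimal stdlib sketch (hypothetical helper, not part of LangChain) of truncating a completion the way a `stop=["\nObservation:"]` parameter would:

```python
def apply_stop(completion, stop_sequences=("\nObservation:",)):
    """Cut the completion at the earliest stop sequence so the agent
    executor, not the model, supplies each tool's real observation."""
    cut = len(completion)
    for seq in stop_sequences:
        idx = completion.find(seq)
        if idx != -1:
            cut = min(cut, idx)
    return completion[:cut]
```

If `CodeGenerationLLM._call` ignores the `stop` argument it receives, applying such truncation there (or on the backend request) should keep the model from hallucinating tool results.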
SparkSQLToolkit tools not invoked when AgentExecutor is running
https://api.github.com/repos/langchain-ai/langchain/issues/24023/comments
0
2024-07-09T13:38:22Z
2024-07-09T13:41:03Z
https://github.com/langchain-ai/langchain/issues/24023
2,398,259,462
24,023
[ "langchain-ai", "langchain" ]
### Example Code ```python from langchain_core.pydantic_v1 import BaseModel, Field from langchain_experimental.llms.ollama_functions import OllamaFunctions from langchain_core.messages import HumanMessage, SystemMessage from langchain_core.prompts import ChatPromptTemplate class Answer(BaseModel): agent: str = Field(description="Selected agent based on the input") with open('prompts/router.txt', 'r') as file: router_prompt = file.read().replace('\n', '') messages = [ SystemMessage( content=router_prompt ), HumanMessage( content="{message}" ), ] prompt = ChatPromptTemplate.from_messages(messages) llm = OllamaFunctions( model="phi3" ) structured_llm = llm.with_structured_output(Answer) chain = prompt | structured_llm message = {'message': 'Hello, I am looking for some thesis internship opportunities.'} response = chain.invoke(message) ``` ### Error Message and Stack Trace (if applicable) ```python ValueError Traceback (most recent call last) Cell In[41], [line 2] [1] message = {'message': 'Hello, I am looking for some thesis internship opportunities.'} ----> [2]response = chain.invoke(message) [4] response File c:\Users\ar.ghinassi\.venv\notebook\lib\site-packages\langchain_core\runnables\base.py:2499, in RunnableSequence.invoke(self, input, config, **kwargs) [2497] input = step.invoke(input, config, **kwargs) [2498] else: -> [2499] input = step.invoke(input, config) [2500] # finish the root run [2501] except BaseException as e: File c:\Users\ar.ghinassi\.venv\notebook\lib\site-packages\langchain_core\runnables\base.py:3977, in RunnableLambda.invoke(self, input, config, **kwargs) [3975] """Invoke this runnable synchronously.""" [3976] if hasattr(self, "func"): -> [3977] return self._call_with_config( [3978] self._invoke, [3979] input, [3980] self._config(config, self.func), [3981] **kwargs, [3982] ) [3983] else: [3984] raise TypeError( [3985] "Cannot invoke a coroutine function synchronously." [3986] "Use `ainvoke` instead." 
[3987] ) File c:\Users\ar.ghinassi\.venv\notebook\lib\site-packages\langchain_core\runnables\base.py:1593, in Runnable._call_with_config(self, func, input, config, run_type, **kwargs) [1589] context = copy_context() [1590] context.run(_set_config_context, child_config) [1591] output = cast( [1592] Output, -> [1593] context.run( [1594] call_func_with_variable_args, # type: ignore[arg-type] [1595] func, # type: ignore[arg-type] [1596] input, # type: ignore[arg-type] [1597] config, [1598] run_manager, [1599] **kwargs, [1600] ), [1601] ) [1602] except BaseException as e: [1603] run_manager.on_chain_error(e) File c:\Users\ar.ghinassi\.venv\notebook\lib\site-packages\langchain_core\runnables\config.py:380, in call_func_with_variable_args(func, input, config, run_manager, **kwargs) [378] if run_manager is not None and accepts_run_manager(func): [379] kwargs["run_manager"] = run_manager --> [380] return func(input, **kwargs) File c:\Users\ar.ghinassi\.venv\notebook\lib\site-packages\langchain_core\runnables\base.py:3845, in RunnableLambda._invoke(self, input, run_manager, config, **kwargs) [3843] output = chunk [3844] else: -> [3845] output = call_func_with_variable_args( [3846] self.func, input, config, run_manager, **kwargs [3847] ) [3848] # If the output is a runnable, invoke it [3849] if isinstance(output, Runnable): File c:\Users\ar.ghinassi\.venv\notebook\lib\site-packages\langchain_core\runnables\config.py:380, in call_func_with_variable_args(func, input, config, run_manager, **kwargs) [378] if run_manager is not None and accepts_run_manager(func): [379] kwargs["run_manager"] = run_manager --> [380] return func(input, **kwargs) File c:\Users\ar.ghinassi\.venv\notebook\lib\site-packages\langchain_experimental\llms\ollama_functions.py:132, in parse_response(message) [128] raise ValueError( [129] f"`arguments` missing from `function_call` within AIMessage: {message}" [130] ) [131] else: --> [132] raise ValueError("`tool_calls` missing from AIMessage: {message}") [133] 
raise ValueError(f"`message` is not an instance of `AIMessage`: {message}") ValueError: `tool_calls` missing from AIMessage: {message} ``` ### Description I am trying to use Ollama with structured output via the `OllamaFunctions` class, but I am getting this error when invoking the chain. I also tried switching from _ChatPromptTemplate_ to _PromptTemplate_ following the [official example](https://python.langchain.com/v0.1/docs/integrations/chat/ollama_functions/) but I get the same error. ### System Info System Information ------------------ > OS: Windows > OS Version: 10.0.22631 > Python Version: 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)] Package Information ------------------- > langchain_core: 0.2.12 > langchain: 0.2.7 > langchain_community: 0.2.7 > langsmith: 0.1.84 > langchain_experimental: 0.0.62 > langchain_text_splitters: 0.2.2 Packages not installed (Not Necessarily a Problem) -------------------------------------------------- The following packages were not found: > langgraph > langserve
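One likely culprit in the snippet above: `HumanMessage(content="{message}")` is treated by `ChatPromptTemplate` as a literal, pre-built message, so the placeholder is never substituted (note how the error echoes the raw `{message}`); the `("human", "{message}")` tuple form is what gets templated. A stdlib sketch of that distinction (hypothetical `render` helper, not the LangChain implementation):

```python
class Message:
    """Stand-in for a prebuilt message object such as HumanMessage."""
    def __init__(self, content):
        self.content = content

def render(entries, **variables):
    """(role, template) tuples are formatted; message objects pass
    through verbatim, keeping any '{placeholder}' text literal."""
    rendered = []
    for entry in entries:
        if isinstance(entry, tuple):
            role, template = entry
            rendered.append((role, template.format(**variables)))
        else:
            # Prebuilt message: content is NOT templated.
            rendered.append((entry.__class__.__name__, entry.content))
    return rendered
```

Under this assumption, switching the human message to the tuple form would let the model see the actual question instead of the literal `{message}` string.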
OllamaFunctions returning ValueError: `tool_calls` missing from AIMessage: {message}
https://api.github.com/repos/langchain-ai/langchain/issues/24019/comments
0
2024-07-09T13:11:00Z
2024-07-09T13:17:15Z
https://github.com/langchain-ai/langchain/issues/24019
2,398,186,340
24,019
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code from langchain_openai import AzureOpenAIEmbeddings embeddings_model = AzureOpenAIEmbeddings( deployment=<'deployment_name'>, model='gpt_35_turbo', openai_api_type="azure", values=model_config, ) from langchain_community.vectorstores.redis import Redis Redis.from_documents( docs, # a list of Document objects from loaders or created embeddings_model, redis_url=redis_url, index_name=redis_index_name, ) ### Error Message and Stack Trace (if applicable) TypeError: Embeddings.create() got an unexpected keyword argument 'values' ### Description I'm trying to use an Azure OpenAI deployment to generate embeddings and store them in a Redis vector DB.
I created the embeddings model as follows and passed the model_config parameters (like `embedding_ctx_length`, `generation_max_tokens`, `allowed_special`, `model_kwargs`) as `values`: from langchain_openai import AzureOpenAIEmbeddings embeddings_model = AzureOpenAIEmbeddings( deployment=<'deployment_name'>, model='gpt_35_turbo', openai_api_type="azure", values=model_config, ) Then I call `Redis.from_documents()` to generate embeddings as follows: from langchain_community.vectorstores.redis import Redis Redis.from_documents( docs, # a list of Document objects from loaders or created embeddings_model, redis_url=redis_url, index_name=redis_index_name, ) It fails with the following error: TypeError: Embeddings.create() got an unexpected keyword argument 'values' On my second try to fix this issue, I tried to create the embedding model as follows: from langchain_openai import AzureOpenAIEmbeddings "model_config": { "allowed_special": "", "chunk_size": 50, "disallowed_special": "all", "embedding_ctx_length": 8191, "generation_max_tokens": 8000, "model_kwargs": "" } embeddings_model = AzureOpenAIEmbeddings( deployment=<'deployment_name'>, model='gpt_35_turbo', openai_api_type="azure", **self.__model_config, ) but then it doesn't handle the case where `model_kwargs` is not a dict: > invalid_model_kwargs = all_required_field_names.intersection(extra.keys()) E AttributeError: 'str' object has no attribute 'keys' .venv/lib/python3.12/site-packages/langchain_openai/embeddings/base.py:219: AttributeError Any suggestion on how to fix the issue? ### System Info langchain 0.2.3 langchain-community 0.2.4 langchain-core 0.2.5 langchain-openai 0.1.14 langchain-text-splitters 0.2.1
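For the second attempt, the `AttributeError` comes from `model_kwargs` being the empty string `""` rather than a mapping: the embeddings constructor iterates over `model_kwargs.keys()`. A possible workaround (a hypothetical helper, not part of LangChain) is to normalize the config before unpacking it into the constructor:

```python
def sanitize_model_config(config):
    """Return a copy of the config in which model_kwargs is always a
    dict, since the embeddings constructor iterates over its keys."""
    cleaned = dict(config)
    if not isinstance(cleaned.get("model_kwargs"), dict):
        cleaned["model_kwargs"] = {}
    return cleaned
```

One would then call `AzureOpenAIEmbeddings(deployment=..., model='gpt_35_turbo', openai_api_type="azure", **sanitize_model_config(model_config))` — and note that for the first attempt, `values=` is not an accepted constructor parameter; the config has to be unpacked with `**`.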
AzureOpenAIEmbeddings fails to parse model_config
https://api.github.com/repos/langchain-ai/langchain/issues/24017/comments
0
2024-07-09T11:28:08Z
2024-07-09T11:30:47Z
https://github.com/langchain-ai/langchain/issues/24017
2,397,943,039
24,017
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code ```python class KeyDevelopment(BaseModel): """Information about a development in the history of cars.""" model_config = ConfigDict(extra='allow') EBITDA: str = Field( '', description="EBITDA; the unit can be identified from the table header. Usually disclosed under the financial figures or financial overview section; only the 2020, 2021 and 2023 values are needed") EBITDA_interest: str = Field( '', description="EBITDA interest coverage ratio (times); usually disclosed under the financial figures or financial overview section; only the 2020, 2021 and 2023 values are needed") year: str = Field('',description="usually the year corresponding to the EBITDA figure") class Config: extra='allow' ``` ### Error Message and Stack Trace (if applicable) 1 validation error for ExtractionData key_developments field required (type=value_error.missing) ### Description When I use pydantic_v1 to redefine my own attributes, I get this error: 1 validation error for ExtractionData key_developments field required (type=value_error.missing). How can I solve it? I have added the extra keyword, but that does not fix it. ### System Info macOS
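The `field required` error means the extraction output did not include `key_developments`, while the (unshown) `ExtractionData` container declares it as required. A likely fix — an assumption about how `ExtractionData` is defined — is to give the field a default empty list, e.g. `key_developments: List[KeyDevelopment] = Field(default_factory=list)` in pydantic v1 syntax, so validation passes when the extractor returns nothing. The same idea sketched with stdlib dataclasses:

```python
from dataclasses import dataclass, field

@dataclass
class ExtractionData:
    # Defaulting to an empty list makes the field optional, mirroring
    # pydantic's Field(default_factory=list): constructing the object
    # with no extractions no longer raises a "field required" error.
    key_developments: list = field(default_factory=list)
```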
Cannot redefine my own attributes
https://api.github.com/repos/langchain-ai/langchain/issues/24010/comments
1
2024-07-09T09:42:31Z
2024-07-10T14:39:44Z
https://github.com/langchain-ai/langchain/issues/24010
2,397,686,448
24,010
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code ```python import os from typing import * from langchain_anthropic import ChatAnthropic from langchain_mongodb.chat_message_histories import MongoDBChatMessageHistory from langchain.agents import create_tool_calling_agent from langchain_core.prompts import MessagesPlaceholder from langchain.memory import ConversationBufferWindowMemory from langchain.agents import AgentExecutor from langchain.tools import tool from langchain_core.prompts import ChatPromptTemplate from langchain.tools import Tool from langchain_community.utilities import GoogleSerperAPIWrapper from uuid import uuid4 chat_id = "b894e0c7-acb1-4907-9bsbc-bb98f5a970dc" def google_search_tool(iso: str="us"): google_search = GoogleSerperAPIWrapper(gl=iso) google_image_search = GoogleSerperAPIWrapper(gl=iso, type="images") google_news_search = GoogleSerperAPIWrapper(gl=iso, type="news") google_places_search = GoogleSerperAPIWrapper(gl=iso, type="places") return [ Tool( name="google_search", func=google_search.run, description="Search Google for information." ), Tool( name="google_image_search", func=google_image_search.run, description="Search Google for images." ), Tool( name="google_news_search", func=google_news_search.run, description="Search Google for news." ), Tool( name="google_places_search", func=google_places_search.run, description="Search Google for places." ) ] workspace_id = "test" request_id = str(uuid4()) system_template = "You are a helpful AI agent. 
Always use the tools at your disposal" prompt = "" tools = google_search_tool("in") llm_kwargs = {} llm = ChatAnthropic( model="claude-3-5-sonnet-20240620", streaming=True, api_key="yurrrrrrrrrrrrr", ) base_template = ChatPromptTemplate.from_messages([ ("system", system_template), MessagesPlaceholder(variable_name="chat_history") if chat_id else None, ("human", "{input}"), MessagesPlaceholder(variable_name="agent_scratchpad") ]) agent = create_tool_calling_agent(llm=llm, tools=tools, prompt=base_template) chat_message_history = MongoDBChatMessageHistory( session_id=chat_id, connection_string=os.getenv('MONGO_URI'), database_name=os.getenv('MONGO_DBNAME'), # "api" collection_name="chat_histories", ) conversational_memory = ConversationBufferWindowMemory( chat_memory=chat_message_history, memory_key='chat_history', return_messages=True, output_key="output", input_key="input", ) agent_executor = AgentExecutor( agent=agent, tools=tools, memory=conversational_memory, return_intermediate_steps=True, handle_parsing_errors=True ).with_config({"run_name": "Agent"}) response = [] run = agent_executor.astream_events(input = {"input": "what is glg stock"}, version="v2") async for event in run: response.append(event) kind = event["event"] if kind == "on_chain_start": if ( event["name"] == "Agent" ): # Was assigned when creating the agent with `.with_config({"run_name": "Agent"})` print( f"Starting agent: {event['name']} with input: {event['data'].get('input')}" ) elif kind == "on_chain_end": if ( event["name"] == "Agent" ): # Was assigned when creating the agent with `.with_config({"run_name": "Agent"})` print() print("--") print( f"Done agent: {event['name']} with output: {event['data'].get('output')['output']}" ) if kind == "on_chat_model_stream": content = event["data"]["chunk"].content if content: # Empty content in the context of OpenAI means # that the model is asking for a tool to be invoked.
# So we only print non-empty content print(content, end="|") elif kind == "on_tool_start": print("--") print( f"Starting tool: {event['name']} with inputs: {event['data'].get('input')}" ) elif kind == "on_tool_end": print(f"Done tool: {event['name']}") print(f"Tool output was: {event['data'].get('output')}") print("--") from langchain_core.messages import FunctionMessage import json messages = chat_message_history.messages for resp in response: if resp['event'] == "on_tool_end": tool_msg = FunctionMessage(content=json.dumps(resp['data']), id=resp['run_id'], name=resp['name']) messages.insert(-1, tool_msg) chat_message_history.clear() chat_message_history.add_messages(messages) chat_message_history.messages ``` ### Error Message and Stack Trace (if applicable) ```Starting agent: Agent with input: {'input': 'what is glg stock'} { "name": "KeyError", "message": "'function'", "stack": "--------------------------------------------------------------------------- KeyError Traceback (most recent call last) Cell In[14], line 3 1 response = [] 2 run = agent_executor.astream_events(input = {\"input\": \"what is glg stock\"}, version=\"v2\") ----> 3 async for event in run: 4 response.append(event) 5 kind = event[\"event\"] File ~/v3/.dev/lib/python3.10/site-packages/langchain_core/runnables/base.py:4788, in RunnableBindingBase.astream_events(self, input, config, **kwargs) 4782 async def astream_events( 4783 self, 4784 input: Input, 4785 config: Optional[RunnableConfig] = None, 4786 **kwargs: Optional[Any], 4787 ) -> AsyncIterator[StreamEvent]: -> 4788 async for item in self.bound.astream_events( 4789 input, self._merge_configs(config), **{**self.kwargs, **kwargs} 4790 ): 4791 yield item File ~/v3/.dev/lib/python3.10/site-packages/langchain_core/runnables/base.py:1146, in Runnable.astream_events(self, input, config, version, include_names, include_types, include_tags, exclude_names, exclude_types, exclude_tags, **kwargs) 1141 raise NotImplementedError( 1142 'Only versions 
\"v1\" and \"v2\" of the schema is currently supported.' 1143 ) 1145 async with aclosing(event_stream): -> 1146 async for event in event_stream: 1147 yield event File ~/v3/.dev/lib/python3.10/site-packages/langchain_core/tracers/event_stream.py:947, in _astream_events_implementation_v2(runnable, input, config, include_names, include_types, include_tags, exclude_names, exclude_types, exclude_tags, **kwargs) 945 # Await it anyway, to run any cleanup code, and propagate any exceptions 946 try: --> 947 await task 948 except asyncio.CancelledError: 949 pass File /usr/lib/python3.10/asyncio/futures.py:288, in Future.__await__(self) 286 if not self.done(): 287 raise RuntimeError(\"await wasn't used with future\") --> 288 return self.result() File /usr/lib/python3.10/asyncio/futures.py:201, in Future.result(self) 199 self.__log_traceback = False 200 if self._exception is not None: --> 201 raise self._exception.with_traceback(self._exception_tb) 202 return self._result File /usr/lib/python3.10/asyncio/tasks.py:232, in Task.__step(***failed resolving arguments***) 228 try: 229 if exc is None: 230 # We use the `send` method directly, because coroutines 231 # don't have `__iter__` and `__next__` methods. 
--> 232 result = coro.send(None) 233 else: 234 result = coro.throw(exc) File ~/v3/.dev/lib/python3.10/site-packages/langchain_core/tracers/event_stream.py:907, in _astream_events_implementation_v2.<locals>.consume_astream() 904 try: 905 # if astream also calls tap_output_aiter this will be a no-op 906 async with aclosing(runnable.astream(input, config, **kwargs)) as stream: --> 907 async for _ in event_streamer.tap_output_aiter(run_id, stream): 908 # All the content will be picked up 909 pass 910 finally: File ~/v3/.dev/lib/python3.10/site-packages/langchain_core/tracers/event_stream.py:153, in _AstreamEventsCallbackHandler.tap_output_aiter(self, run_id, output) 151 tap = self.is_tapped.setdefault(run_id, sentinel) 152 # wait for first chunk --> 153 first = await py_anext(output, default=sentinel) 154 if first is sentinel: 155 return File ~/v3/.dev/lib/python3.10/site-packages/langchain_core/utils/aiter.py:65, in py_anext.<locals>.anext_impl() 58 async def anext_impl() -> Union[T, Any]: 59 try: 60 # The C code is way more low-level than this, as it implements 61 # all methods of the iterator protocol. In this implementation 62 # we're relying on higher-level coroutine concepts, but that's 63 # exactly what we want -- crosstest pure-Python high-level 64 # implementation and low-level C anext() iterators. ---> 65 return await __anext__(iterator) 66 except StopAsyncIteration: 67 return default File ~/v3/.dev/lib/python3.10/site-packages/langchain/agents/agent.py:1595, in AgentExecutor.astream(self, input, config, **kwargs) 1583 config = ensure_config(config) 1584 iterator = AgentExecutorIterator( 1585 self, 1586 input, (...) 
1593 **kwargs, 1594 ) -> 1595 async for step in iterator: 1596 yield step File ~/v3/.dev/lib/python3.10/site-packages/langchain/agents/agent_iterator.py:246, in AgentExecutorIterator.__aiter__(self) 240 while self.agent_executor._should_continue( 241 self.iterations, self.time_elapsed 242 ): 243 # take the next step: this plans next action, executes it, 244 # yielding action and observation as they are generated 245 next_step_seq: NextStepOutput = [] --> 246 async for chunk in self.agent_executor._aiter_next_step( 247 self.name_to_tool_map, 248 self.color_mapping, 249 self.inputs, 250 self.intermediate_steps, 251 run_manager, 252 ): 253 next_step_seq.append(chunk) 254 # if we're yielding actions, yield them as they come 255 # do not yield AgentFinish, which will be handled below File ~/v3/.dev/lib/python3.10/site-packages/langchain/agents/agent.py:1304, in AgentExecutor._aiter_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager) 1301 intermediate_steps = self._prepare_intermediate_steps(intermediate_steps) 1303 # Call the LLM to see what to do. -> 1304 output = await self.agent.aplan( 1305 intermediate_steps, 1306 callbacks=run_manager.get_child() if run_manager else None, 1307 **inputs, 1308 ) 1309 except OutputParserException as e: 1310 if isinstance(self.handle_parsing_errors, bool): File ~/v3/.dev/lib/python3.10/site-packages/langchain/agents/agent.py:554, in RunnableMultiActionAgent.aplan(self, intermediate_steps, callbacks, **kwargs) 546 final_output: Any = None 547 if self.stream_runnable: 548 # Use streaming to make sure that the underlying LLM is invoked in a 549 # streaming (...) 552 # Because the response from the plan is not a generator, we need to 553 # accumulate the output into final output and return that. 
--> 554 async for chunk in self.runnable.astream( 555 inputs, config={\"callbacks\": callbacks} 556 ): 557 if final_output is None: 558 final_output = chunk File ~/v3/.dev/lib/python3.10/site-packages/langchain_core/runnables/base.py:2910, in RunnableSequence.astream(self, input, config, **kwargs) 2907 async def input_aiter() -> AsyncIterator[Input]: 2908 yield input -> 2910 async for chunk in self.atransform(input_aiter(), config, **kwargs): 2911 yield chunk File ~/v3/.dev/lib/python3.10/site-packages/langchain_core/runnables/base.py:2893, in RunnableSequence.atransform(self, input, config, **kwargs) 2887 async def atransform( 2888 self, 2889 input: AsyncIterator[Input], 2890 config: Optional[RunnableConfig] = None, 2891 **kwargs: Optional[Any], 2892 ) -> AsyncIterator[Output]: -> 2893 async for chunk in self._atransform_stream_with_config( 2894 input, 2895 self._atransform, 2896 patch_config(config, run_name=(config or {}).get(\"run_name\") or self.name), 2897 **kwargs, 2898 ): 2899 yield chunk File ~/v3/.dev/lib/python3.10/site-packages/langchain_core/runnables/base.py:1981, in Runnable._atransform_stream_with_config(self, input, transformer, config, run_type, **kwargs) 1976 chunk: Output = await asyncio.create_task( # type: ignore[call-arg] 1977 py_anext(iterator), # type: ignore[arg-type] 1978 context=context, 1979 ) 1980 else: -> 1981 chunk = cast(Output, await py_anext(iterator)) 1982 yield chunk 1983 if final_output_supported: File ~/v3/.dev/lib/python3.10/site-packages/langchain_core/tracers/event_stream.py:153, in _AstreamEventsCallbackHandler.tap_output_aiter(self, run_id, output) 151 tap = self.is_tapped.setdefault(run_id, sentinel) 152 # wait for first chunk --> 153 first = await py_anext(output, default=sentinel) 154 if first is sentinel: 155 return File ~/v3/.dev/lib/python3.10/site-packages/langchain_core/utils/aiter.py:65, in py_anext.<locals>.anext_impl() 58 async def anext_impl() -> Union[T, Any]: 59 try: 60 # The C code is way more low-level 
than this, as it implements 61 # all methods of the iterator protocol. In this implementation 62 # we're relying on higher-level coroutine concepts, but that's 63 # exactly what we want -- crosstest pure-Python high-level 64 # implementation and low-level C anext() iterators. ---> 65 return await __anext__(iterator) 66 except StopAsyncIteration: 67 return default File ~/v3/.dev/lib/python3.10/site-packages/langchain_core/runnables/base.py:2863, in RunnableSequence._atransform(self, input, run_manager, config, **kwargs) 2861 else: 2862 final_pipeline = step.atransform(final_pipeline, config) -> 2863 async for output in final_pipeline: 2864 yield output File ~/v3/.dev/lib/python3.10/site-packages/langchain_core/runnables/base.py:1197, in Runnable.atransform(self, input, config, **kwargs) 1194 final: Input 1195 got_first_val = False -> 1197 async for ichunk in input: 1198 # The default implementation of transform is to buffer input and 1199 # then call stream. 1200 # It'll attempt to gather all input into a single chunk using 1201 # the `+` operator. 1202 # If the input is not addable, then we'll assume that we can 1203 # only operate on the last chunk, 1204 # and we'll iterate until we get to the last chunk. 
1205 if not got_first_val: 1206 final = ichunk File ~/v3/.dev/lib/python3.10/site-packages/langchain_core/runnables/base.py:4811, in RunnableBindingBase.atransform(self, input, config, **kwargs) 4805 async def atransform( 4806 self, 4807 input: AsyncIterator[Input], 4808 config: Optional[RunnableConfig] = None, 4809 **kwargs: Any, 4810 ) -> AsyncIterator[Output]: -> 4811 async for item in self.bound.atransform( 4812 input, 4813 self._merge_configs(config), 4814 **{**self.kwargs, **kwargs}, 4815 ): 4816 yield item File ~/v3/.dev/lib/python3.10/site-packages/langchain_core/runnables/base.py:1215, in Runnable.atransform(self, input, config, **kwargs) 1212 final = ichunk 1214 if got_first_val: -> 1215 async for output in self.astream(final, config, **kwargs): 1216 yield output File ~/v3/.dev/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:417, in BaseChatModel.astream(self, input, config, stop, **kwargs) 412 except BaseException as e: 413 await run_manager.on_llm_error( 414 e, 415 response=LLMResult(generations=[[generation]] if generation else []), 416 ) --> 417 raise e 418 else: 419 await run_manager.on_llm_end( 420 LLMResult(generations=[[generation]]), 421 ) File ~/v3/.dev/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:395, in BaseChatModel.astream(self, input, config, stop, **kwargs) 393 generation: Optional[ChatGenerationChunk] = None 394 try: --> 395 async for chunk in self._astream( 396 messages, 397 stop=stop, 398 **kwargs, 399 ): 400 if chunk.message.id is None: 401 chunk.message.id = f\"run-{run_manager.run_id}\" File ~/v3/.dev/lib/python3.10/site-packages/langchain_anthropic/chat_models.py:701, in ChatAnthropic._astream(self, messages, stop, run_manager, stream_usage, **kwargs) 699 stream_usage = self.stream_usage 700 kwargs[\"stream\"] = True --> 701 payload = self._get_request_payload(messages, stop=stop, **kwargs) 702 stream = await self._async_client.messages.create(**payload) 703 
coerce_content_to_string = not _tools_in_params(payload) File ~/v3/.dev/lib/python3.10/site-packages/langchain_anthropic/chat_models.py:647, in ChatAnthropic._get_request_payload(self, input_, stop, **kwargs) 639 def _get_request_payload( 640 self, 641 input_: LanguageModelInput, (...) 644 **kwargs: Dict, 645 ) -> Dict: 646 messages = self._convert_input(input_).to_messages() --> 647 system, formatted_messages = _format_messages(messages) 648 payload = { 649 \"model\": self.model, 650 \"max_tokens\": self.max_tokens, (...) 658 **kwargs, 659 } 660 return {k: v for k, v in payload.items() if v is not None} File ~/v3/.dev/lib/python3.10/site-packages/langchain_anthropic/chat_models.py:170, in _format_messages(messages) 167 system = message.content 168 continue --> 170 role = _message_type_lookups[message.type] 171 content: Union[str, List] 173 if not isinstance(message.content, str): 174 # parse as dict KeyError: 'function'" } ``` ### Description the above error occurs when I add FunctionMessage to chathistory, and run the agent again. ex: 1st run) input: what is apple stock rn - runs perfectly 2nd run) input: what is google stock rn - gives the above error ### System Info ``` System Information ------------------ > OS: Linux > OS Version: #1 SMP Fri Mar 29 23:14:13 UTC 2024 > Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] Package Information ------------------- > langchain_core: 0.2.10 > langchain: 0.2.6 > langchain_community: 0.2.6 > langsmith: 0.1.84 > langchain_anthropic: 0.1.19 > langchain_groq: 0.1.6 > langchain_mongodb: 0.1.6 > langchain_openai: 0.1.13 > langchain_text_splitters: 0.2.2 > langchainhub: 0.1.20 > langserve: 0.2.2 Packages not installed (Not Necessarily a Problem) -------------------------------------------------- The following packages were not found: > langgraph ```
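The traceback bottoms out in `_format_messages` at `role = _message_type_lookups[message.type]`: a `FunctionMessage` reports `type == "function"`, which the Anthropic adapter's role-lookup table has no entry for. A minimal, library-free reconstruction of the failure mode — the table contents below are illustrative assumptions, not the actual mapping in `langchain_anthropic`:

```python
# Illustrative stand-in for langchain_anthropic's role lookup table;
# the real table lives in chat_models.py and its exact contents may differ.
_message_type_lookups = {"human": "user", "ai": "assistant", "tool": "user"}

def role_for(message_type: str) -> str:
    # FunctionMessage has type "function", which is absent from the table,
    # so this indexing raises KeyError: 'function' -- the crash seen in the
    # stack trace above.
    return _message_type_lookups[message_type]
```

One likely workaround, until `FunctionMessage` is handled, is to record tool results as `ToolMessage` (a type the Anthropic integration does convert) rather than `FunctionMessage` when rebuilding the chat history.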
FunctionMessage doesn't work with astream_events api
https://api.github.com/repos/langchain-ai/langchain/issues/24007/comments
2
2024-07-09T06:56:47Z
2024-07-10T03:24:34Z
https://github.com/langchain-ai/langchain/issues/24007
2,397,312,975
24,007
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the [LangGraph](https://langchain-ai.github.io/langgraph/)/LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangGraph/LangChain rather than my code. - [X] I am sure this is better as an issue [rather than a GitHub discussion](https://github.com/langchain-ai/langgraph/discussions/new/choose), since this is a LangGraph bug and not a design question. ### Example Code ```python import getpass api_endpoint = getpass.getpass("API Endpoint") api_key = getpass.getpass("API Key") from datetime import datetime from langchain_core.messages import HumanMessage from langchain_openai import AzureChatOpenAI from langgraph.graph import END, MessageGraph from langgraph.prebuilt import ToolExecutor from langchain.tools import tool from langchain_openai import AzureChatOpenAI @tool def file_saver(text: str) -> str: """Persist the given string to disk """ pass model = AzureChatOpenAI( deployment_name="cogdep-gpt-4o", model_name="gpt-4o", azure_endpoint=api_endpoint, openai_api_key=api_key, openai_api_type="azure", openai_api_version="2024-05-01-preview", streaming=True, temperature=0.1 ) tools = [file_saver] model = model.bind_tools(tools) def get_agent_executor(): def should_continue(messages): print(f"{datetime.now()}: Starting should_continue") return "end" async def call_model(messages): response = await model.ainvoke(messages) return response workflow = MessageGraph() workflow.add_node("agent", call_model) workflow.set_entry_point("agent") workflow.add_conditional_edges( "agent", should_continue, { "end": END, }, ) return workflow.compile() agent_executor = get_agent_executor() messages = [HumanMessage(content="Think of a poem with 100 verses and save it to a file. 
Do not print it to me first.")] async def run(): async for event in agent_executor.astream_events(messages, version="v1"): kind = event["event"] print(f"{datetime.now()}: Received event: {kind}") await run() ``` ### Error Message and Stack Trace (if applicable) ```shell This is part of the output (in this case, there is a 23s gap between `on_chat_model_stream` and `on_chat_model_end`) (...) 2024-07-09 05:29:35.705573: Received event: on_chat_model_stream 2024-07-09 05:29:35.713679: Received event: on_chat_model_stream 2024-07-09 05:29:35.724480: Received event: on_chat_model_stream 2024-07-09 05:29:35.753143: Received event: on_chat_model_stream 2024-07-09 05:29:58.571740: Received event: on_chat_model_end 2024-07-09 05:29:58.574671: Received event: on_chain_start 2024-07-09 05:29:58.576026: Received event: on_chain_end 2024-07-09 05:29:58.577963: Received event: on_chain_start 2024-07-09 05:29:58.578214: Starting should_continue ``` ### Description Hi! When receiving an llm answer that leads to a tool call with a large amount of data within a parameter, we noticed that our program was blocked although we are using the async version. My guess is that the final message is built after the last message was streamed and this takes some time on the cpu? Also, is there a different approach that we could use? Thank you very much! ### System Info ``` System Information ------------------ > OS: Linux > OS Version: #1 SMP PREEMPT Thu Nov 16 10:49:20 UTC 2023 > Python Version: 3.11.6 | packaged by conda-forge | (main, Oct 3 2023, 11:57:02) [GCC 12.3.0] Package Information ------------------- > langchain_core: 0.2.11 > langchain: 0.2.6 > langsmith: 0.1.84 > langchain_openai: 0.1.14 > langchain_text_splitters: 0.2.2 > langgraph: 0.1.5 Packages not installed (Not Necessarily a Problem) -------------------------------------------------- The following packages were not found: > langserve ```
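The 23s gap is consistent with a CPU-bound step — assembling the final message from thousands of streamed chunks — running inline on the event loop, which stalls every other coroutine until it finishes. A library-free sketch of one mitigation, offloading the blocking aggregation to a worker thread (`aggregate` is a stand-in for the expensive assembly, not LangChain code):

```python
import asyncio

def aggregate(chunks):
    # Stand-in for the expensive final-message assembly: pure CPU work that
    # would freeze the event loop if a coroutine ran it inline.
    return "".join(chunks)

async def consume(chunks):
    # asyncio.to_thread (Python 3.9+) runs the blocking call in a worker
    # thread, so other coroutines keep making progress in the meantime.
    return await asyncio.to_thread(aggregate, chunks)

result = asyncio.run(consume(["a", "b", "c"]))
print(result)  # -> abc
```

Whether the library can apply this pattern internally depends on where the aggregation happens; this only illustrates the general technique for keeping an event loop responsive around CPU-bound work.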
Tool Calls with large parameters are blocking between on_chat_model_stream and on_chat_model_end
https://api.github.com/repos/langchain-ai/langchain/issues/24021/comments
2
2024-07-09T05:41:26Z
2024-07-09T13:52:50Z
https://github.com/langchain-ai/langchain/issues/24021
2,398,204,126
24,021
[ "langchain-ai", "langchain" ]
### Checked other resources

- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

### Example Code

```python
import chromadb
from langchain_chroma.vectorstores import Chroma
from langchain_huggingface import HuggingFaceEmbeddings
from langchain_core.documents import Document

client = chromadb.Client()
collection = client.create_collection(name="my_collection", metadata={"hnsw:space": "cosine"})
embedding_function = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2", model_kwargs={'device': 'cuda'})
vector_store = Chroma(
    client=client,
    collection_name="my_collection",
    embedding_function=embedding_function,
)

documents = [
    Document(id='1', page_content='This is a document about fruit', metadata={'title': 'First Doc'}),
    Document(id='2', page_content='This is a document about oranges', metadata={'title': 'Second Doc'}),
    Document(id='3', page_content='I saw a lady wearing red dress', metadata={'title': 'Third Doc'}),
    Document(id='4', page_content='Apples are red', metadata={'title': 'Fourth Doc'}),
]
vector_store.add_documents(documents)
print(vector_store._collection.get(include=["documents"]))
print("db size ", vector_store._collection.count())

duplicate_document = [
    Document(id='1', page_content='This is a document about fruit', metadata={'title': 'First Doc'})
]
vector_store.add_documents(duplicate_document)
print(vector_store._collection.get(include=["documents"]))
print("db size ", vector_store._collection.count())
```

### Error Message and Stack Trace (if applicable)

_No response_

### Description

Langchain-chroma adds a duplicate entry to the db, whereas Chromadb doesn't. So the behavior isn't the same for Langchain-chroma and Chromadb. The equivalent native Chromadb code does not add the duplicate:

```python
import chromadb
from chromadb.utils import embedding_functions

client = chromadb.Client()
embedder = embedding_functions.SentenceTransformerEmbeddingFunction(model_name="all-MiniLM-L6-v2", device='cuda')
collection = client.create_collection(name="my_collection", embedding_function=embedder, metadata={"hnsw:space": "cosine"})
collection.add(
    documents=[
        "This is a document about fruit",
        "This is a document about oranges",
        "I saw a lady wearing red dress",
        "Apples are red",
    ],
    ids=["1", "2", "3", "4"],
    metadatas=[
        {'title': 'First Doc'},
        {'title': 'Second Doc'},
        {'title': 'Third Doc'},
        {'title': 'Fourth Doc'},
    ],
)
print(collection.get(include=['documents']))
print("db size ", collection.count())

collection.add(
    documents=["This is a document about fruit"],
    ids=["1"],
    metadatas=[{'title': 'First Doc'}],
)
print(collection.get(include=['documents']))
print("db size ", collection.count())
```

### System Info

Python version: 3.10.10
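Until `langchain-chroma` matches Chroma's native dedup-by-id behavior, one workaround is to drop documents whose ids already exist before calling `add_documents`. A minimal sketch — the helper is generic over any object exposing an `id` attribute (as `langchain_core.documents.Document` does), and the wiring shown in the comment relies on the store's private `_collection` attribute, which is an assumption, not a supported API:

```python
def filter_new_documents(documents, existing_ids):
    """Return only the documents whose id is not already in the collection.

    `existing_ids` would come from something like
    vector_store._collection.get()["ids"].
    """
    seen = set(existing_ids)
    return [doc for doc in documents if doc.id not in seen]

# Hypothetical usage against the vector store from the report:
#   existing = vector_store._collection.get()["ids"]
#   vector_store.add_documents(filter_new_documents(documents, existing))
```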
Langchain Chroma doesn't handle duplicate entry properly
https://api.github.com/repos/langchain-ai/langchain/issues/24005/comments
0
2024-07-09T05:04:16Z
2024-07-09T05:06:51Z
https://github.com/langchain-ai/langchain/issues/24005
2,397,137,567
24,005
[ "langchain-ai", "langchain" ]
### URL https://python.langchain.com/v0.2/docs/how_to/migrate_agent/#memory ### Checklist - [X] I added a very descriptive title to this issue. - [X] I included a link to the documentation page I am referring to (if applicable). ### Issue with current documentation: If I swap `model = ChatOpenAI(model="gpt-4o")` for: `ChatAnthropicVertex(model_name="claude-3-haiku@20240307", location="us-east5", project="my_gcp_project")`, then the [memory example](https://python.langchain.com/v0.2/docs/how_to/migrate_agent/#memory) throws the following error: ```console { "name": "ValueError", "message": "Message dict must contain 'role' and 'content' keys, got {'text': '\ \ The magic_function applied to the input of 3 returns the output of 5.', 'type': 'text', 'index': 0}", "stack": "--------------------------------------------------------------------------- KeyError Traceback (most recent call last) File /usr/local/lib/python3.11/site-packages/langchain_core/messages/utils.py:271, in _convert_to_message(message) 270 msg_type = msg_kwargs.pop(\"type\") --> 271 msg_content = msg_kwargs.pop(\"content\") 272 except KeyError: KeyError: 'content' During handling of the above exception, another exception occurred: ValueError Traceback (most recent call last) Cell In[79], line 55 49 print( 50 agent_with_chat_history.invoke( 51 {\"input\": \"Hi, I'm polly! 
What's the output of magic_function of 3?\"}, config 52 )[\"output\"] 53 ) 54 print(\"---\") ---> 55 print(agent_with_chat_history.invoke({\"input\": \"Remember my name?\"}, config)[\"output\"]) 56 print(\"---\") 57 print( 58 agent_with_chat_history.invoke({\"input\": \"what was that output again?\"}, config)[ 59 \"output\" 60 ] 61 ) File /usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py:4580, in RunnableBindingBase.invoke(self, input, config, **kwargs) 4574 def invoke( 4575 self, 4576 input: Input, 4577 config: Optional[RunnableConfig] = None, 4578 **kwargs: Optional[Any], 4579 ) -> Output: -> 4580 return self.bound.invoke( 4581 input, 4582 self._merge_configs(config), 4583 **{**self.kwargs, **kwargs}, 4584 ) File /usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py:4580, in RunnableBindingBase.invoke(self, input, config, **kwargs) 4574 def invoke( 4575 self, 4576 input: Input, 4577 config: Optional[RunnableConfig] = None, 4578 **kwargs: Optional[Any], 4579 ) -> Output: -> 4580 return self.bound.invoke( 4581 input, 4582 self._merge_configs(config), 4583 **{**self.kwargs, **kwargs}, 4584 ) File /usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py:2499, in RunnableSequence.invoke(self, input, config, **kwargs) 2497 input = step.invoke(input, config, **kwargs) 2498 else: -> 2499 input = step.invoke(input, config) 2500 # finish the root run 2501 except BaseException as e: File /usr/local/lib/python3.11/site-packages/langchain_core/runnables/branch.py:212, in RunnableBranch.invoke(self, input, config, **kwargs) 210 break 211 else: --> 212 output = self.default.invoke( 213 input, 214 config=patch_config( 215 config, callbacks=run_manager.get_child(tag=\"branch:default\") 216 ), 217 **kwargs, 218 ) 219 except BaseException as e: 220 run_manager.on_chain_error(e) File /usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py:4580, in RunnableBindingBase.invoke(self, input, config, **kwargs) 4574 
def invoke( 4575 self, 4576 input: Input, 4577 config: Optional[RunnableConfig] = None, 4578 **kwargs: Optional[Any], 4579 ) -> Output: -> 4580 return self.bound.invoke( 4581 input, 4582 self._merge_configs(config), 4583 **{**self.kwargs, **kwargs}, 4584 ) File /usr/local/lib/python3.11/site-packages/langchain/chains/base.py:166, in Chain.invoke(self, input, config, **kwargs) 164 except BaseException as e: 165 run_manager.on_chain_error(e) --> 166 raise e 167 run_manager.on_chain_end(outputs) 169 if include_run_info: File /usr/local/lib/python3.11/site-packages/langchain/chains/base.py:156, in Chain.invoke(self, input, config, **kwargs) 153 try: 154 self._validate_inputs(inputs) 155 outputs = ( --> 156 self._call(inputs, run_manager=run_manager) 157 if new_arg_supported 158 else self._call(inputs) 159 ) 161 final_outputs: Dict[str, Any] = self.prep_outputs( 162 inputs, outputs, return_only_outputs 163 ) 164 except BaseException as e: File /usr/local/lib/python3.11/site-packages/langchain/agents/agent.py:1636, in AgentExecutor._call(self, inputs, run_manager) 1634 # We now enter the agent loop (until it returns something). 1635 while self._should_continue(iterations, time_elapsed): -> 1636 next_step_output = self._take_next_step( 1637 name_to_tool_map, 1638 color_mapping, 1639 inputs, 1640 intermediate_steps, 1641 run_manager=run_manager, 1642 ) 1643 if isinstance(next_step_output, AgentFinish): 1644 return self._return( 1645 next_step_output, intermediate_steps, run_manager=run_manager 1646 ) File /usr/local/lib/python3.11/site-packages/langchain/agents/agent.py:1342, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager) 1333 def _take_next_step( 1334 self, 1335 name_to_tool_map: Dict[str, BaseTool], (...) 
1339 run_manager: Optional[CallbackManagerForChainRun] = None, 1340 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]: 1341 return self._consume_next_step( -> 1342 [ 1343 a 1344 for a in self._iter_next_step( 1345 name_to_tool_map, 1346 color_mapping, 1347 inputs, 1348 intermediate_steps, 1349 run_manager, 1350 ) 1351 ] 1352 ) File /usr/local/lib/python3.11/site-packages/langchain/agents/agent.py:1342, in <listcomp>(.0) 1333 def _take_next_step( 1334 self, 1335 name_to_tool_map: Dict[str, BaseTool], (...) 1339 run_manager: Optional[CallbackManagerForChainRun] = None, 1340 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]: 1341 return self._consume_next_step( -> 1342 [ 1343 a 1344 for a in self._iter_next_step( 1345 name_to_tool_map, 1346 color_mapping, 1347 inputs, 1348 intermediate_steps, 1349 run_manager, 1350 ) 1351 ] 1352 ) File /usr/local/lib/python3.11/site-packages/langchain/agents/agent.py:1370, in AgentExecutor._iter_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager) 1367 intermediate_steps = self._prepare_intermediate_steps(intermediate_steps) 1369 # Call the LLM to see what to do. -> 1370 output = self.agent.plan( 1371 intermediate_steps, 1372 callbacks=run_manager.get_child() if run_manager else None, 1373 **inputs, 1374 ) 1375 except OutputParserException as e: 1376 if isinstance(self.handle_parsing_errors, bool): File /usr/local/lib/python3.11/site-packages/langchain/agents/agent.py:580, in RunnableMultiActionAgent.plan(self, intermediate_steps, callbacks, **kwargs) 572 final_output: Any = None 573 if self.stream_runnable: 574 # Use streaming to make sure that the underlying LLM is invoked in a 575 # streaming (...) 578 # Because the response from the plan is not a generator, we need to 579 # accumulate the output into final output and return that. 
--> 580 for chunk in self.runnable.stream(inputs, config={\"callbacks\": callbacks}): 581 if final_output is None: 582 final_output = chunk File /usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py:2877, in RunnableSequence.stream(self, input, config, **kwargs) 2871 def stream( 2872 self, 2873 input: Input, 2874 config: Optional[RunnableConfig] = None, 2875 **kwargs: Optional[Any], 2876 ) -> Iterator[Output]: -> 2877 yield from self.transform(iter([input]), config, **kwargs) File /usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py:2864, in RunnableSequence.transform(self, input, config, **kwargs) 2858 def transform( 2859 self, 2860 input: Iterator[Input], 2861 config: Optional[RunnableConfig] = None, 2862 **kwargs: Optional[Any], 2863 ) -> Iterator[Output]: -> 2864 yield from self._transform_stream_with_config( 2865 input, 2866 self._transform, 2867 patch_config(config, run_name=(config or {}).get(\"run_name\") or self.name), 2868 **kwargs, 2869 ) File /usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py:1862, in Runnable._transform_stream_with_config(self, input, transformer, config, run_type, **kwargs) 1860 try: 1861 while True: -> 1862 chunk: Output = context.run(next, iterator) # type: ignore 1863 yield chunk 1864 if final_output_supported: File /usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py:2826, in RunnableSequence._transform(self, input, run_manager, config, **kwargs) 2823 else: 2824 final_pipeline = step.transform(final_pipeline, config) -> 2826 for output in final_pipeline: 2827 yield output File /usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py:1157, in Runnable.transform(self, input, config, **kwargs) 1154 final: Input 1155 got_first_val = False -> 1157 for ichunk in input: 1158 # The default implementation of transform is to buffer input and 1159 # then call stream. 
1160 # It'll attempt to gather all input into a single chunk using 1161 # the `+` operator. 1162 # If the input is not addable, then we'll assume that we can 1163 # only operate on the last chunk, 1164 # and we'll iterate until we get to the last chunk. 1165 if not got_first_val: 1166 final = ichunk File /usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py:4787, in RunnableBindingBase.transform(self, input, config, **kwargs) 4781 def transform( 4782 self, 4783 input: Iterator[Input], 4784 config: Optional[RunnableConfig] = None, 4785 **kwargs: Any, 4786 ) -> Iterator[Output]: -> 4787 yield from self.bound.transform( 4788 input, 4789 self._merge_configs(config), 4790 **{**self.kwargs, **kwargs}, 4791 ) File /usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py:1157, in Runnable.transform(self, input, config, **kwargs) 1154 final: Input 1155 got_first_val = False -> 1157 for ichunk in input: 1158 # The default implementation of transform is to buffer input and 1159 # then call stream. 1160 # It'll attempt to gather all input into a single chunk using 1161 # the `+` operator. 1162 # If the input is not addable, then we'll assume that we can 1163 # only operate on the last chunk, 1164 # and we'll iterate until we get to the last chunk. 1165 if not got_first_val: 1166 final = ichunk File /usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py:1175, in Runnable.transform(self, input, config, **kwargs) 1172 final = ichunk 1174 if got_first_val: -> 1175 yield from self.stream(final, config, **kwargs) File /usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py:812, in Runnable.stream(self, input, config, **kwargs) 802 def stream( 803 self, 804 input: Input, 805 config: Optional[RunnableConfig] = None, 806 **kwargs: Optional[Any], 807 ) -> Iterator[Output]: 808 \"\"\" 809 Default implementation of stream, which calls invoke. 810 Subclasses should override this method if they support streaming output. 
811 \"\"\" --> 812 yield self.invoke(input, config, **kwargs) File /usr/local/lib/python3.11/site-packages/langchain_core/prompts/base.py:179, in BasePromptTemplate.invoke(self, input, config) 177 if self.tags: 178 config[\"tags\"] = config[\"tags\"] + self.tags --> 179 return self._call_with_config( 180 self._format_prompt_with_error_handling, 181 input, 182 config, 183 run_type=\"prompt\", 184 ) File /usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py:1593, in Runnable._call_with_config(self, func, input, config, run_type, **kwargs) 1589 context = copy_context() 1590 context.run(_set_config_context, child_config) 1591 output = cast( 1592 Output, -> 1593 context.run( 1594 call_func_with_variable_args, # type: ignore[arg-type] 1595 func, # type: ignore[arg-type] 1596 input, # type: ignore[arg-type] 1597 config, 1598 run_manager, 1599 **kwargs, 1600 ), 1601 ) 1602 except BaseException as e: 1603 run_manager.on_chain_error(e) File /usr/local/lib/python3.11/site-packages/langchain_core/runnables/config.py:380, in call_func_with_variable_args(func, input, config, run_manager, **kwargs) 378 if run_manager is not None and accepts_run_manager(func): 379 kwargs[\"run_manager\"] = run_manager --> 380 return func(input, **kwargs) File /usr/local/lib/python3.11/site-packages/langchain_core/prompts/base.py:154, in BasePromptTemplate._format_prompt_with_error_handling(self, inner_input) 152 def _format_prompt_with_error_handling(self, inner_input: Dict) -> PromptValue: 153 _inner_input = self._validate_input(inner_input) --> 154 return self.format_prompt(**_inner_input) File /usr/local/lib/python3.11/site-packages/langchain_core/prompts/chat.py:765, in BaseChatPromptTemplate.format_prompt(self, **kwargs) 756 def format_prompt(self, **kwargs: Any) -> PromptValue: 757 \"\"\"Format prompt. Should return a PromptValue. 758 759 Args: (...) 763 PromptValue. 
764 \"\"\" --> 765 messages = self.format_messages(**kwargs) 766 return ChatPromptValue(messages=messages) File /usr/local/lib/python3.11/site-packages/langchain_core/prompts/chat.py:1142, in ChatPromptTemplate.format_messages(self, **kwargs) 1138 result.extend([message_template]) 1139 elif isinstance( 1140 message_template, (BaseMessagePromptTemplate, BaseChatPromptTemplate) 1141 ): -> 1142 message = message_template.format_messages(**kwargs) 1143 result.extend(message) 1144 else: File /usr/local/lib/python3.11/site-packages/langchain_core/prompts/chat.py:235, in MessagesPlaceholder.format_messages(self, **kwargs) 230 if not isinstance(value, list): 231 raise ValueError( 232 f\"variable {self.variable_name} should be a list of base messages, \" 233 f\"got {value}\" 234 ) --> 235 value = convert_to_messages(value) 236 if self.n_messages: 237 value = value[-self.n_messages :] File /usr/local/lib/python3.11/site-packages/langchain_core/messages/utils.py:296, in convert_to_messages(messages) 285 def convert_to_messages( 286 messages: Sequence[MessageLikeRepresentation], 287 ) -> List[BaseMessage]: 288 \"\"\"Convert a sequence of messages to a list of messages. 289 290 Args: (...) 294 List of messages (BaseMessages). 295 \"\"\" --> 296 return [_convert_to_message(m) for m in messages] File /usr/local/lib/python3.11/site-packages/langchain_core/messages/utils.py:296, in <listcomp>(.0) 285 def convert_to_messages( 286 messages: Sequence[MessageLikeRepresentation], 287 ) -> List[BaseMessage]: 288 \"\"\"Convert a sequence of messages to a list of messages. 289 290 Args: (...) 294 List of messages (BaseMessages). 
295 \"\"\" --> 296 return [_convert_to_message(m) for m in messages] File /usr/local/lib/python3.11/site-packages/langchain_core/messages/utils.py:273, in _convert_to_message(message) 271 msg_content = msg_kwargs.pop(\"content\") 272 except KeyError: --> 273 raise ValueError( 274 f\"Message dict must contain 'role' and 'content' keys, got {message}\" 275 ) 276 _message = _create_message_from_message_type( 277 msg_type, msg_content, **msg_kwargs 278 ) 279 else: ValueError: Message dict must contain 'role' and 'content' keys, got {'text': '\ \ The magic_function applied to the input of 3 returns the output of 5.', 'type': 'text', 'index': 0}" } ``` ...and it is unclear why. My installed langchain packages: ``` langchain 0.2.7 langchain-community 0.2.7 langchain-core 0.2.12 langchain-google-vertexai 1.0.6 langchain-groq 0.1.6 langchain-openai 0.1.14 langchain-text-splitters 0.2.2 langchainhub 0.1.20 ``` > Note: the example code works fine, if `model = ChatOpenAI(model="gpt-4o")` is used instead of the Claude-3 model. ### Idea or request for content: It would be very helpful to show how one must change the [memory example](https://python.langchain.com/v0.2/docs/how_to/migrate_agent/#memory) code, depending on the LLM used
DOC: how_to/migrate_agent/#memory => example does not work with other models
https://api.github.com/repos/langchain-ai/langchain/issues/24003/comments
1
2024-07-09T03:45:35Z
2024-07-09T15:32:44Z
https://github.com/langchain-ai/langchain/issues/24003
2397041829
24003
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code The following code similarity search type is set to cosine. `self._similarity_type = DocumentDBSimilarityType.CO` https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/vectorstores/documentdb.py ``` def __init__( self, collection: Collection[DocumentDBDocumentType], embedding: Embeddings, *, index_name: str = "vectorSearchIndex", text_key: str = "textContent", embedding_key: str = "vectorContent", ): """Constructor for DocumentDBVectorSearch Args: collection: MongoDB collection to add the texts to. embedding: Text embedding model to use. index_name: Name of the Vector Search index. text_key: MongoDB field that will contain the text for each document. embedding_key: MongoDB field that will contain the embedding for each document. """ self._collection = collection self._embedding = embedding self._index_name = index_name self._text_key = text_key self._embedding_key = embedding_key self._similarity_type = DocumentDBSimilarityType.COS ``` so even if user provide different similarity type in invoke, it has no effect - ``` retriever = vector_store.as_retriever( search_type="similarity", search_kwargs={"k": 5, 'filter': filter, "similarity": "dotProduct"}, ) ``` `similarity` is as per pymongo search pipeline. 
### Error Message and Stack Trace (if applicable) _No response_ ### Description * I am using latest community version - langchain-community 0.2.6 * I expect if I set similiary-type in search keywords then its should be propogated to pymongo pipeline ### System Info Package Information ------------------- > langchain_core: 0.2.11 > langchain: 0.2.6 > langchain_community: 0.2.6 > langsmith: 0.1.83 > langchain_huggingface: 0.0.3 > langchain_openai: 0.1.14 > langchain_text_splitters: 0.2.2 "
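The hard-coded cosine default could be made configurable by threading the similarity type through the pipeline builder. A minimal stdlib-only sketch of the idea (the `vectorSearch`/`similarity` field names are assumptions mirroring the pymongo pipeline, and `similarity` as a parameter is the requested change, not the library's current API):

```python
def build_search_stage(embedding, path, k, similarity="cosine"):
    # `similarity` is a hypothetical parameter; today the class pins it to COS.
    return {
        "$search": {
            "vectorSearch": {
                "vector": embedding,
                "path": path,
                "similarity": similarity,
                "k": k,
            }
        }
    }

stage = build_search_stage([0.1, 0.2], "vectorContent", 5, similarity="dotProduct")
print(stage["$search"]["vectorSearch"]["similarity"])  # → dotProduct
```

With such a builder, a `"similarity"` entry in `search_kwargs` could simply be forwarded instead of being silently ignored.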
[community] aws documentDB similarity search type not configurable
https://api.github.com/repos/langchain-ai/langchain/issues/23975/comments
0
2024-07-08T14:44:57Z
2024-07-08T14:47:34Z
https://github.com/langchain-ai/langchain/issues/23975
2395847128
23975
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code ``` python # Filtering pipeling working in pymongo used to filter on a list of file_ids query_embedding = self.embedding_client.embed_query(query) pipeline = [ { '$search': { "cosmosSearch": { "vector": query_embedding, "path": "vectorContent", "k": 5, #, #, "efsearch": 40 # optional for HNSW only "filter": {"fileId": {'$in': file_ids}} }, "returnStoredSource": True }}, {'$project': { 'similarityScore': { '$meta': 'searchScore' }, 'document' : '$$ROOT' } }, ] docs = self.mongo_collection.aggregate(pipeline) ``` # Current implementation ``` python def _get_pipeline_vector_ivf( self, embeddings: List[float], k: int = 4 ) -> List[dict[str, Any]]: pipeline: List[dict[str, Any]] = [ { "$search": { "cosmosSearch": { "vector": embeddings, "path": self._embedding_key, "k": k, }, "returnStoredSource": True, } }, { "$project": { "similarityScore": {"$meta": "searchScore"}, "document": "$$ROOT", } }, ] return pipeline def _get_pipeline_vector_hnsw( self, embeddings: List[float], k: int = 4, ef_search: int = 40 ) -> List[dict[str, Any]]: pipeline: List[dict[str, Any]] = [ { "$search": { "cosmosSearch": { "vector": embeddings, "path": self._embedding_key, "k": k, "efSearch": ef_search, }, } }, { "$project": { "similarityScore": {"$meta": "searchScore"}, "document": "$$ROOT", } }, ] return pipeline ``` ### Error Message and Stack Trace (if applicable) _No response_ ### Description As stated in the langchain documentation filtering in Azure Cosmos DB Mongo vCore should be supported: 
https://python.langchain.com/v0.2/docs/integrations/vectorstores/azure_cosmos_db/ Filtering works when I apply my MongoDB query directly using pyomongo as shown in the example. However, through langchain the same filters are not applied. I tried using the filter, pre_filter, search_kwargs and kwargs parameters, but to no avail. ``` python docs = self.vectorstore.similarity_search(query, k=5, pre_filter = {'fileId': {'$in': ["31c283c2-ac31-4260-a8d0-864f444c33ee]"}} ) ``` Upon closer inspection of the source code, I see that no filter key is present in the query dictionary and see no kwargs, search_kwargs being passed, which could be the reason. https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/vectorstores/azure_cosmos_db.py Any input on this issue? ### System Info System Information ------------------ > OS: Windows > OS Version: 10.0.22631 > Python Version: 3.11.4 (tags/v3.11.4:d2340ef, Jun 7 2023, 05:45:37) [MSC v.1934 64 bit (AMD64)] Package Information ------------------- > langchain_core: 0.2.10 > langchain: 0.2.6 > langchain_community: 0.2.6 > langsmith: 0.1.82 > langchain_openai: 0.1.13 > langchain_text_splitters: 0.2.2 Packages not installed (Not Necessarily a Problem) -------------------------------------------------- The following packages were not found: > langgraph > langserve
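Until the wrapper forwards the filter, the working raw pipeline above can be factored into a small helper. A stdlib-only sketch (the stage layout is copied from the pymongo pipeline shown to work in this issue; `pre_filter` is a hypothetical parameter name):

```python
def cosmos_search_stage(embedding, path, k, pre_filter=None):
    # Build the cosmosSearch body, attaching the filter only when provided.
    cosmos = {"vector": embedding, "path": path, "k": k}
    if pre_filter:
        cosmos["filter"] = pre_filter
    return {"$search": {"cosmosSearch": cosmos, "returnStoredSource": True}}

stage = cosmos_search_stage(
    [0.1, 0.2], "vectorContent", 5,
    pre_filter={"fileId": {"$in": ["31c283c2-ac31-4260-a8d0-864f444c33ee"]}},
)
print("filter" in stage["$search"]["cosmosSearch"])  # → True
```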
AzureCosmosDBVectorSearch filter not working
https://api.github.com/repos/langchain-ai/langchain/issues/23963/comments
2
2024-07-08T09:37:53Z
2024-07-25T18:06:13Z
https://github.com/langchain-ai/langchain/issues/23963
2395146622
23963
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code def custom_parser(self, inputs: AIMessage): output = {"answer": None} try: json_parser = JsonOutputParser(pydantic_object=JsonOutputParserModel) json_output = json_parser.parse(inputs.content) # type: ignore print('json_output',json_output) output["answer"] = json_output.get("answer", None) except (json.JSONDecodeError, OutputParserException) as e: parser = StrOutputParser() str_output = parser.invoke(inputs.content) # type: ignore output["answer"] = str_output except Exception as e: print(f"Custom Parsing Error :{e}") return output ### Error Message and Stack Trace (if applicable) json_output 90 Custom Parsing Error :'int' object has no attribute 'get' ### Description I am trying to use langchain json parsing lib to parse a text into json. Input of the custom_parser is model response : AI message, function checks if the respnse can be parse as json then try block will be executed and it the message can't be parsed as json, then JSON Decode Error block will be executed. The function is written in such a way that it can handle plain string as response or stringify json. Issue is when i pass a string message: `custom_parser("90. 
Yes, you need to file the dispute")` output is: json_output 90 Custom Parsing Error :'int' object has no attribute 'get' But if i pass string message without number then it goes to second block and code get executed as usual `custom_parser(" Yes, you need to file the dispute")` Output: {'answer': 'you need to file the dispute'} ### System Info langchain==0.2.0 langchain-community==0.2.0 langchain-core==0.2.1 langchain-openai==0.1.7 langchain-text-splitters==0.2.0
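The root cause is that a leading number is itself valid JSON, so a lenient parser can return an `int` and the subsequent `.get` fails. A stdlib-only sketch of a guard (independent of LangChain's own `JsonOutputParser`) that checks the parsed type before treating it as an object:

```python
import json

def safe_parse_answer(text: str) -> dict:
    """Parse model output into {'answer': ...}, guarding against
    non-dict JSON values such as a bare leading number."""
    try:
        parsed = json.loads(text)
    except json.JSONDecodeError:
        return {"answer": text.strip()}
    # A bare number like 90 is valid JSON but not the object we want.
    if isinstance(parsed, dict):
        return {"answer": parsed.get("answer")}
    return {"answer": text.strip()}

print(safe_parse_answer('{"answer": "file the dispute"}'))
print(safe_parse_answer("90. Yes, you need to file the dispute"))
```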
json parser failed to parse full text if text startes with a number
https://api.github.com/repos/langchain-ai/langchain/issues/23960/comments
3
2024-07-08T08:12:16Z
2024-07-10T19:06:32Z
https://github.com/langchain-ai/langchain/issues/23960
2394954354
23960
[ "langchain-ai", "langchain" ]
### URL https://python.langchain.com/v0.2/docs/tutorials/rag/ ### Checklist - [X] I added a very descriptive title to this issue. - [X] I included a link to the documentation page I am referring to (if applicable). ### Issue with current documentation: The documentation and corresponding code provided with https://python.langchain.com/v0.2/docs/tutorials/rag/ has an issue, the moment I run the stub with this pipeline ```python rag_chain = ( {"context": retriever | format_docs, "question": RunnablePassthrough()} | prompt | llm | StrOutputParser() ) ``` this error is thrown : ```python --------------------------------------------------------------------------- NameError Traceback (most recent call last) Cell In[1], line 37 30 def format_docs(docs): 31 return "\n\n".join(doc.page_content for doc in docs) 34 rag_chain = ( 35 {"context": retriever | format_docs, "question": RunnablePassthrough()} 36 | prompt ---> 37 | llm 38 | StrOutputParser() 39 ) 41 rag_chain.invoke("What is Task Decomposition?") NameError: name 'llm' is not defined ``` It loooks like there needs to be a change to either the installations before running this cell or there is something wrong with the code in the cell. Either way the document must be updated so that this error does not occur. ### Idea or request for content: If this issue is specific to a particular python / pip version then there can be a "if you get this kind of error" section where this is highlighted along with resolution
DOC: issue with rag tutorial code : name 'llm' is not defined
https://api.github.com/repos/langchain-ai/langchain/issues/23958/comments
1
2024-07-08T05:16:14Z
2024-07-08T05:32:25Z
https://github.com/langchain-ai/langchain/issues/23958
2,394,652,254
23,958
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code ```python import torch from langchain_community.document_loaders import YoutubeAudioLoader from langchain_community.document_loaders.generic import GenericLoader from langchain_community.document_loaders.parsers.audio import ( FasterWhisperParser ) device = 'cuda' if torch.cuda.is_available() else 'cpu' # float32 compute_type = "float16" if device == 'cuda' else 'int8' yt_video_url = 'https://www.youtube.com/watch?v=1bUy-1hGZpI&ab_channel=IBMTechnology' yt_loader_faster_whisper = GenericLoader( blob_loader=YoutubeAudioLoader([ yt_video_url], '.'), blob_parser=FasterWhisperParser(device=device) # no possibility to define compute_type # Error: ValueError: Requested float16 compute type, but the target device or backend do not support efficient float16 computation. 
# blob_parser=FasterWhisperParser(device=device, compute_type=compute_type) ) yt_data = yt_loader_faster_whisper.load() ``` ### Error Message and Stack Trace (if applicable) ``` Traceback (most recent call last): File "python/helpers/pydev/pydevd.py", line 1551, in _exec pydev_imports.execfile(file, globals, locals) # execute the script ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) File "AI-POC/frameworks/langchain/01_chat_with_data/main.py", line 133, in <module> docs_load() File "AI-POC/frameworks/langchain/01_chat_with_data/main.py", line 123, in docs_load get_youtube(use_paid_services=False, faster_whisper=True, wisper_local=False) File "AI-POC/frameworks/langchain/01_chat_with_data/main.py", line 108, in get_youtube yt_data = yt_loader_faster_whisper.load() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "AI-POC/.venv/lib/python3.11/site-packages/langchain_core/document_loaders/base.py", line 29, in load return list(self.lazy_load()) ^^^^^^^^^^^^^^^^^^^^^^ File "AI-POC/.venv/lib/python3.11/site-packages/langchain_community/document_loaders/generic.py", line 116, in lazy_load yield from self.blob_parser.lazy_parse(blob) File "AI-POC/.venv/lib/python3.11/site-packages/langchain_community/document_loaders/parsers/audio.py", line 467, in lazy_parse model = WhisperModel( ^^^^^^^^^^^^^ File "AI-POC/.venv/lib/python3.11/site-packages/faster_whisper/transcribe.py", line 145, in __init__ self.model = ctranslate2.models.Whisper( ^^^^^^^^^^^^^^^^^^^^^^^^^^^ ValueError: Requested float16 compute type, but the target device or backend do not support efficient float16 computation. ``` ### Description I'm trying to use the FasterWhisperParser class from the langchain_community package to parse audio data. I want to be able to use a GPU if one is available, and fall back to a CPU otherwise. 
I'm trying to set the compute_type to 'float16' when using a GPU and 'int8' when using a CPU. However, I'm encountering an issue because the FasterWhisperParser class doesn't accept a compute_type argument. When I try to use a CPU, I get a ValueError because 'float16' computation isn't efficiently supported on CPUs. ### System Info ``` $ python -m langchain_core.sys_info System Information ------------------ > OS: Linux > OS Version: #1 SMP PREEMPT_DYNAMIC Thu May 11 15:56:33 UTC 2023 > Python Version: 3.11.6 (main, Oct 3 2023, 00:00:00) [GCC 12.3.1 20230508 (Red Hat 12.3.1-1)] Package Information ------------------- > langchain_core: 0.2.11 > langchain: 0.2.6 > langchain_community: 0.2.6 > langsmith: 0.1.83 > langchain_text_splitters: 0.2.2 Packages not installed (Not Necessarily a Problem) -------------------------------------------------- The following packages were not found: > langgraph > langserve ```
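The device-aware selection the constructor would need is tiny. A sketch of the intended mapping (a `compute_type` parameter on `FasterWhisperParser` is the requested addition, not an existing argument; the fallback values follow the ctranslate2 error quoted above):

```python
def pick_compute_type(device: str) -> str:
    # GPUs support efficient float16; CPUs fall back to int8.
    return "float16" if device == "cuda" else "int8"

print(pick_compute_type("cuda"))  # → float16
print(pick_compute_type("cpu"))   # → int8
```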
No possibility to define WhisperModel compute_type when using GenericLoader with blob_parser=FasterWhisperParser
https://api.github.com/repos/langchain-ai/langchain/issues/23953/comments
1
2024-07-07T17:17:36Z
2024-07-07T17:24:46Z
https://github.com/langchain-ai/langchain/issues/23953
2394140669
23953
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code ```python llm = ChatOpenAI(temperature=temperature, openai_api_key="1234") embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2") persist_directory = "./example" collection_name = "Example" vectorstore = Chroma(embedding_function=embeddings, collection_name=collection_name, persist_directory=persist_directory) metadata_field_info = [ AttributeInfo( name="Title", description="The title of the document", type="string", ), AttributeInfo( name="Body", description="The body of the document", type="string", ) ] document_contents = "Langchain test" documents = [] retriever = SelfQueryRetriever.from_llm( llm=llm, vectorstore=vectorstore, metadata_field_info=metadata_field_info, document_contents=document_contents, verbose = True, structured_query_translator = ChromaTranslator() ) retriever.add_documents(documents, ids=None) ``` ### Error Message and Stack Trace (if applicable) Traceback (most recent call last): File "/conjurors/notebooks/rag.py", line 378, in retriever_self_query retriever.add_documents(documents, ids=None) AttributeError: 'SelfQueryRetriever' object has no attribute 'add_documents' ### Description SelfQueryRetriever does not have addDocuments while the other Retrievers have it ### System Info langchain==0.2.6 langchain-community==0.2.6 langchain-core==0.2.11 langchain-experimental==0.0.53 langchain-groq==0.0.1 langchain-huggingface==0.0.3 langchain-openai==0.1.14 langchain-text-splitters==0.2.2 MacOS 14.5 Python 3.9.18
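As a workaround, documents can be added through the wrapped vectorstore, which `SelfQueryRetriever` exposes as `.vectorstore`. A stdlib-only sketch of the delegation pattern (stub classes stand in for the real retriever and store):

```python
class VectorStoreStub:
    """Minimal stand-in for a vectorstore with add_documents."""
    def __init__(self):
        self.docs = []

    def add_documents(self, docs, ids=None):
        self.docs.extend(docs)

class RetrieverWithoutAdd:
    """Stand-in for a retriever (like SelfQueryRetriever) that wraps a
    vectorstore but does not expose add_documents itself."""
    def __init__(self, vectorstore):
        self.vectorstore = vectorstore

store = VectorStoreStub()
retriever = RetrieverWithoutAdd(store)
# Workaround: add through the wrapped vectorstore directly.
retriever.vectorstore.add_documents(["doc-1", "doc-2"])
print(len(store.docs))  # → 2
```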
'SelfQueryRetriever' object has no attribute 'add_documents'
https://api.github.com/repos/langchain-ai/langchain/issues/23952/comments
0
2024-07-07T16:56:41Z
2024-07-07T16:59:09Z
https://github.com/langchain-ai/langchain/issues/23952
2394133118
23952
[ "langchain-ai", "langchain" ]
### URL https://python.langchain.com/v0.2/docs/tutorials/local_rag/ ### Checklist - [X] I added a very descriptive title to this issue. - [X] I included a link to the documentation page I am referring to (if applicable). ### Issue with current documentation: I was following the instruction with this tutorial and I get the error message below while creating vectorstore. It seems required a model name. However, I am clueless about what I should put in this parameter. Thanks in advance for any assistance. --------------------------------------------------------------------------- KeyError Traceback (most recent call last) Cell In[45], [line 4](vscode-notebook-cell:?execution_count=45&line=4) [1](vscode-notebook-cell:?execution_count=45&line=1) from langchain_chroma import Chroma [2](vscode-notebook-cell:?execution_count=45&line=2) from langchain_community.embeddings import GPT4AllEmbeddings ----> [4](vscode-notebook-cell:?execution_count=45&line=4) vectorstore = Chroma.from_documents(documents=all_splits, embedding=GPT4AllEmbeddings()) File c:\Users\eddie.DESKTOP-J0CLNTS\.conda\envs\langchain\Lib\site-packages\pydantic\v1\main.py:339, in BaseModel.__init__(__pydantic_self__, **data) [333](file:///C:/Users/eddie.DESKTOP-J0CLNTS/.conda/envs/langchain/Lib/site-packages/pydantic/v1/main.py:333) """ [334](file:///C:/Users/eddie.DESKTOP-J0CLNTS/.conda/envs/langchain/Lib/site-packages/pydantic/v1/main.py:334) Create a new model by parsing and validating input data from keyword arguments. [335](file:///C:/Users/eddie.DESKTOP-J0CLNTS/.conda/envs/langchain/Lib/site-packages/pydantic/v1/main.py:335) [336](file:///C:/Users/eddie.DESKTOP-J0CLNTS/.conda/envs/langchain/Lib/site-packages/pydantic/v1/main.py:336) Raises ValidationError if the input data cannot be parsed to form a valid model. 
[337](file:///C:/Users/eddie.DESKTOP-J0CLNTS/.conda/envs/langchain/Lib/site-packages/pydantic/v1/main.py:337) """ [338](file:///C:/Users/eddie.DESKTOP-J0CLNTS/.conda/envs/langchain/Lib/site-packages/pydantic/v1/main.py:338) # Uses something other than `self` the first arg to allow "self" as a settable attribute --> [339](file:///C:/Users/eddie.DESKTOP-J0CLNTS/.conda/envs/langchain/Lib/site-packages/pydantic/v1/main.py:339) values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data) [340](file:///C:/Users/eddie.DESKTOP-J0CLNTS/.conda/envs/langchain/Lib/site-packages/pydantic/v1/main.py:340) if validation_error: [341](file:///C:/Users/eddie.DESKTOP-J0CLNTS/.conda/envs/langchain/Lib/site-packages/pydantic/v1/main.py:341) raise validation_error File c:\Users\eddie.DESKTOP-J0CLNTS\.conda\envs\langchain\Lib\site-packages\pydantic\v1\main.py:1100, in validate_model(model, input_data, cls) [1098](file:///C:/Users/eddie.DESKTOP-J0CLNTS/.conda/envs/langchain/Lib/site-packages/pydantic/v1/main.py:1098) continue [1099](file:///C:/Users/eddie.DESKTOP-J0CLNTS/.conda/envs/langchain/Lib/site-packages/pydantic/v1/main.py:1099) try: -> [1100](file:///C:/Users/eddie.DESKTOP-J0CLNTS/.conda/envs/langchain/Lib/site-packages/pydantic/v1/main.py:1100) values = validator(cls_, values) [1101](file:///C:/Users/eddie.DESKTOP-J0CLNTS/.conda/envs/langchain/Lib/site-packages/pydantic/v1/main.py:1101) except (ValueError, TypeError, AssertionError) as exc: [1102](file:///C:/Users/eddie.DESKTOP-J0CLNTS/.conda/envs/langchain/Lib/site-packages/pydantic/v1/main.py:1102) errors.append(ErrorWrapper(exc, loc=ROOT_KEY)) ... 
[47](https://file+.vscode-resource.vscode-cdn.net/c%3A/langchain_Exercise/~/AppData/Roaming/Python/Python311/site-packages/langchain_community/embeddings/gpt4all.py:47) "Please install the gpt4all library to " [48](https://file+.vscode-resource.vscode-cdn.net/c%3A/langchain_Exercise/~/AppData/Roaming/Python/Python311/site-packages/langchain_community/embeddings/gpt4all.py:48) "use this embedding model: pip install gpt4all" [49](https://file+.vscode-resource.vscode-cdn.net/c%3A/langchain_Exercise/~/AppData/Roaming/Python/Python311/site-packages/langchain_community/embeddings/gpt4all.py:49) ) KeyError: 'model_name' ### Idea or request for content: _No response_
DOC: <Issue related to /v0.2/docs/tutorials/local_rag/> Failed to create vectorstore using GPT4AllEmbedding()
https://api.github.com/repos/langchain-ai/langchain/issues/23949/comments
0
2024-07-07T12:53:39Z
2024-07-07T12:56:09Z
https://github.com/langchain-ai/langchain/issues/23949
2394043449
23949
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code I am following the doc: https://python.langchain.com/v0.2/docs/how_to/graph_mapping/ using a database where Neo4J labels have colon, for example, `biolink:Disease`, `biolink:treats`. This seems to break the `CypherQueryCorrector` from `langchain.chains.graph_qa.cypher_utils import`. The query is corrected to `""` even when it is valid. ### Error Message and Stack Trace (if applicable) _No response_ ### Description See above. ### System Info ``` langchain==0.2.6 langchain-cli==0.0.25 langchain-community==0.2.6 langchain-core==0.2.11 langchain-experimental==0.0.62 langchain-text-splitters==0.2.2 ```
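A possible workaround while the corrector lacks support: backtick-quote labels containing `:` so Cypher treats the colon as part of the identifier. A small stdlib-only sketch (the quoting rule follows Neo4j's identifier escaping; `escape_label` is a hypothetical helper, not part of `cypher_utils`):

```python
def escape_label(label: str) -> str:
    """Backtick-quote a Neo4j label so reserved characters like ':' are literal."""
    if ":" in label or " " in label:
        return f"`{label}`"
    return label

print(f"MATCH (d:{escape_label('biolink:Disease')}) RETURN d")
# → MATCH (d:`biolink:Disease`) RETURN d
```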
CypherQueryCorrector does not handle labels with :, such as `biolink:Disease`
https://api.github.com/repos/langchain-ai/langchain/issues/23946/comments
0
2024-07-07T08:50:11Z
2024-07-07T08:52:37Z
https://github.com/langchain-ai/langchain/issues/23946
2393957847
23946
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code from langchain.chat_models import AzureChatOpenAI chat = AzureChatOpenAI( azure_deployment=VISION_DEPLOYMENT, azure_endpoint=os.get_env('AZURE_ENDPOINT'), openai_api_version=os.getenv("OPENAI_API_VERSION"), openai_api_key=os.getenv("AZURE_VISION_TOKEN"), max_tokens=4096 ) from typing import Optional from langchain_core.pydantic_v1 import BaseModel, Field class Joke(BaseModel): """Joke to tell user.""" setup: str = Field(description="The setup of the joke") chat.with_structured_output(Joke) ### Error Message and Stack Trace (if applicable) NotImplementedError: with_structured_output is not implemented for this model. ### Description There are two definitions for AzureChatOpenAI `from langchain.chat_models import AzureChatOpenAI` and `from langchain_openai.chat_models import AzureChatOpenAI` Using latest verstions, the former does not include with_structured_output method, whereas the latter does. In my naive opinion langchain_openai ( the working one) must prevail, and deprecate the other one. Thanks ### System Info pip freeze G langchain langchain==0.2.6 langchain-anthropic==0.1.15 langchain-astradb==0.3.3 langchain-aws==0.1.7 langchain-chroma==0.1.1 langchain-cohere==0.1.8 langchain-community==0.2.6 langchain-core==0.2.11 langchain-experimental==0.0.62 langchain-google-genai==1.0.6 langchain-google-vertexai==1.0.5 langchain-groq==0.1.5 langchain-mistralai==0.1.8 langchain-mongodb==0.1.6 langchain-openai==0.1.14 langchain-pinecone==0.1.1 langchain-text-splitters==0.2.1 langchainhub==0.1.20 platform Linux pop-os python 3.10.12
Two conflicting declarations of AzureChatOpenAI
https://api.github.com/repos/langchain-ai/langchain/issues/23936/comments
0
2024-07-06T18:39:20Z
2024-07-06T18:41:50Z
https://github.com/langchain-ai/langchain/issues/23936
2393659645
23936
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code ```python chain = RetrievalQAWithSourcesChain( reduce_k_below_max_tokens=True, max_tokens_limit=16000, combine_documents_chain=load_qa_with_sources_chain( ChatGoogleGenerativeAI(model="gemini-1.5-flash", temperature=0, callbacks=[UsageHandler()]), chain_type=self.chain_type, prompt=self.prompt), memory=self.memory, retriever=self.vector_db.as_retriever(search_kwargs={"k": 3]})) result = chain.invoke() ``` ### Error Message and Stack Trace (if applicable) _No response_ ### Description ### Current output at result['answer'] "Lorem ipsum [" ### Expected "Lorem ipsum [Source: xyz1] Lorem ipsum [Source: xyz2] Lorem ipsum [Source: xyz3]" - This is the output message from the model - Verified this by checking the response in `on_llm_end` callback Some points: - I have a prompt saying it should site the sources - I have been using other models too (GPT 3.5, Llama3 8B) and only experiencing this with `Gemini 1.5 Flash` probably because this is format it mentions the sources which is not supported currently ### System Info System Information ------------------ > OS: Linux > OS Version: #1 SMP Fri Mar 29 23:14:13 UTC 2024 > Python Version: 3.9.12 (main, Apr 5 2022, 06:56:58) [GCC 7.5.0] Package Information ------------------- > langchain_core: 0.2.10 > langchain: 0.2.5 > langchain_community: 0.2.5 > langsmith: 0.1.82 > langchain_google_genai: 1.0.6 > langchain_openai: 0.1.8 > langchain_text_splitters: 0.2.2 > langchain_together: 0.1.3 > langchain_voyageai: 0.1.1 Packages not installed (Not Necessarily a Problem) 
-------------------------------------------------- The following packages were not found: > langgraph > langserve
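The symptom is consistent with marker-based splitting: cutting the answer at the first source marker truncates outputs that interleave citations. A stdlib-only sketch reproducing the behavior (illustrative regex, not the exact pattern used by `_split_sources`):

```python
import re

def split_sources_naive(text: str):
    """Everything after the first source marker is treated as the sources
    block, so an answer that interleaves '[Source: ...]' citations gets
    truncated at the first one."""
    m = re.search(r"\[Source:", text)
    if m:
        return text[:m.start()], text[m.start():]
    return text, ""

answer, sources = split_sources_naive(
    "Lorem ipsum [Source: xyz1] Lorem ipsum [Source: xyz2]")
print(repr(answer))  # → 'Lorem ipsum '
```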
`_split_sources` of `BaseQAWithSourcesChain` prematurely truncates the Gemini Model's outputs, at the First Instance of `[Source: xyz]`
https://api.github.com/repos/langchain-ai/langchain/issues/23932/comments
1
2024-07-06T12:03:09Z
2024-08-02T11:10:11Z
https://github.com/langchain-ai/langchain/issues/23932
2393536512
23932
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code ```python template = ChatPromptTemplate.from_messages( messages= messages, template_format= "jinja2" ) ``` ### Error Message and Stack Trace (if applicable) Warning: Literal type does not include "jinja2". ### Description The `from_messages` method has a type error for the `template_format` parameter. When setting `template_format` to "jinja2", a warning is displayed even though "jinja2" works without any problem. It seems that "jinja2" is implemented internally, so the type definition should be modified to include it. ### System Info python-versions = "^3.11"
Class ChatPromptTemplate > def from_messages > template_format bug
https://api.github.com/repos/langchain-ai/langchain/issues/23929/comments
0
2024-07-06T06:55:53Z
2024-07-16T13:09:44Z
https://github.com/langchain-ai/langchain/issues/23929
2393443504
23929
[ "langchain-ai", "langchain" ]
### Privileged issue - [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here. ### Issue Content testing!
test
https://api.github.com/repos/langchain-ai/langchain/issues/23925/comments
0
2024-07-05T21:41:05Z
2024-07-18T15:48:08Z
https://github.com/langchain-ai/langchain/issues/23925
2393170793
23925
[ "langchain-ai", "langchain" ]
### Privileged issue - [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here. ### Issue Content testing
test
https://api.github.com/repos/langchain-ai/langchain/issues/23922/comments
0
2024-07-05T20:07:52Z
2024-07-05T20:09:51Z
https://github.com/langchain-ai/langchain/issues/23922
2393089068
23922
[ "langchain-ai", "langchain" ]
### Privileged issue - [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here. ### Issue Content `.tool_calls` attribute of aggregated chunks can be empty, whereas result of `.invoke` is not. ```python from langchain_anthropic import ChatAnthropic def magic_function() -> int: """Calculates a magic function.""" return 5 llm = ChatAnthropic( model="claude-3-haiku-20240307", ).bind_tools([magic_function]) query = "What is the value of magic_function()?" full = None for chunk in llm.stream(query): full = chunk if full is None else full + chunk print(full.tool_calls) print(llm.invoke(query).tool_calls) ``` ``` [] [{'name': 'magic_function', 'args': {}, 'id': 'toolu_01HHtSuCJ4LKQfGRYncy4D5a'}] ```
bug: anthropic streaming tool calls for tools with no arguments
https://api.github.com/repos/langchain-ai/langchain/issues/23911/comments
0
2024-07-05T15:07:55Z
2024-07-05T18:57:42Z
https://github.com/langchain-ai/langchain/issues/23911
2392777922
23911
[ "langchain-ai", "langchain" ]
### Checked other resources

- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

### Example Code

The following code is self-contained **except that you need to download the public dataset** I use in order to replicate the problem. Please read the documentation below for where to get it.

```
import asyncio, sys, csv, random
from typing import Any, Dict, List

from langchain_community.cache import SQLiteCache
from langchain.callbacks.base import AsyncCallbackHandler
from langchain_community.callbacks import get_openai_callback
from langchain_core.messages import HumanMessage
from langchain_core.outputs import LLMResult
from langchain_openai import ChatOpenAI
from langchain_core.globals import set_llm_cache


class CustomAsyncHandler(AsyncCallbackHandler):
    """Async callback handler that can be used to handle callbacks from langchain."""

    async def on_chat_model_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
    ) -> None:
        pass

    async def on_llm_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
    ) -> None:
        pass

    async def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
        await asyncio.sleep(1)
        # print(" >>> Your joke: {}".format(response.generations[0][0].text))


async def summarize(chat, text):
    # To enable streaming, we pass in `streaming=True` to the ChatModel constructor
    # Additionally, we pass in a list with our custom handler
    res = None
    with get_openai_callback() as cb:
        res = await chat.agenerate([[HumanMessage(content="Summarise the following text in 50 words:\n\n```\n{}\n```".format(text))]])
    return res.generations[0][0].text, cb.total_cost


async def test_batch(api, data: list, batchsize=10):
    chat = ChatOpenAI(
        model_name='gpt-4o',
        openai_api_key=api,
        callbacks=[CustomAsyncHandler()],
    )
    tasks = []
    for i in range(0, batchsize):
        text = random.choice(data)
        tasks.append(summarize(chat, text))
    return await asyncio.gather(*tasks)


def run_async(api, data: list, batchsize=10):
    count = 1
    loop = asyncio.new_event_loop()
    while True:
        print("[batch={} of {} chat jobs]".format(count, batchsize))
        res = loop.run_until_complete(test_batch(api, data, batchsize))
        total_cost = sum([r[1] for r in res])
        print("\tcompleted with {} results, cost={}".format(len(res), total_cost))
        count += 1


def read_sample_data(in_csv: str, topn_lines):
    stop = 0
    rows = []
    with open(in_csv, mode='r', encoding='utf-8') as file:
        # Create a CSV reader object with the specified delimiter and quote character
        csv_reader = csv.reader(file, delimiter=',', quotechar='"')
        for row in csv_reader:
            if stop == 0:
                stop += 1
                continue
            rows.append(row[0])
            stop += 1
            if stop > topn_lines:
                break
    return rows


if __name__ == '__main__':
    cache = SQLiteCache(database_path="example_cache.db")
    set_llm_cache(cache)
    # this is downloaded from https://www.kaggle.com/datasets/alfathterry/bbc-full-text-document-classification?resource=download
    # reads just the top n lines, then for each async job, takes a random text to summarise. eventually,
    # everything should've been cached
    data = read_sample_data('/home/zz/Data/news_text_samples/bbc_data.csv', topn_lines=100)
    # this line will run forever until you stop it. batchsize indicates how many parallel chat jobs to run
    run_async(sys.argv[1], data, batchsize=20)
```

### Error Message and Stack Trace (if applicable)

```
Traceback (most recent call last): File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1960, in _exec_single_context self.dialect.do_execute( File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 924, in do_execute cursor.execute(statement, parameters) sqlite3.IntegrityError: UNIQUE constraint failed: full_llm_cache.prompt, full_llm_cache.llm, full_llm_cache.idx The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/zz/Work/charity-kg/src/charitykg/sandbox/example_langchain_async.py", line 84, in <module> run_async(sys.argv[1], data, batchsize=20) File "/home/zz/Work/charity-kg/src/charitykg/sandbox/example_langchain_async.py", line 55, in run_async res = loop.run_until_complete(test_batch(api, data, batchsize)) File "/home/zz/Programs/miniconda3/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete return future.result() File "/home/zz/Work/charity-kg/src/charitykg/sandbox/example_langchain_async.py", line 48, in test_batch return await asyncio.gather(*tasks) File "/home/zz/Work/charity-kg/src/charitykg/sandbox/example_langchain_async.py", line 33, in summarize res=await chat.agenerate([[HumanMessage(content="Summarise the following text in 50 words:\n\n```\n{}\n```".format(text))]]) File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 651, in agenerate raise exceptions[0] File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 855, in _agenerate_with_cache await
llm_cache.aupdate(prompt, llm_string, result.generations) File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/langchain_core/caches.py", line 138, in aupdate return await run_in_executor(None, self.update, prompt, llm_string, return_val) File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/langchain_core/runnables/config.py", line 557, in run_in_executor return await asyncio.get_running_loop().run_in_executor( File "/home/zz/Programs/miniconda3/lib/python3.10/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/langchain_core/runnables/config.py", line 548, in wrapper return func(*args, **kwargs) File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/langchain_community/cache.py", line 284, in update with Session(self.engine) as session, session.begin(): File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/engine/util.py", line 146, in __exit__ with util.safe_reraise(): File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 146, in __exit__ raise exc_value.with_traceback(exc_tb) File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/engine/util.py", line 144, in __exit__ self.commit() File "<string>", line 2, in commit File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/state_changes.py", line 139, in _go ret_value = fn(self, *arg, **kw) File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 1257, in commit self._prepare_impl() File "<string>", line 2, in _prepare_impl 
File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/state_changes.py", line 139, in _go ret_value = fn(self, *arg, **kw) File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 1232, in _prepare_impl self.session.flush() File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 4296, in flush self._flush(objects) File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 4431, in _flush with util.safe_reraise(): File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 146, in __exit__ raise exc_value.with_traceback(exc_tb) File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 4392, in _flush flush_context.execute() File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/unitofwork.py", line 466, in execute rec.execute(self) File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/unitofwork.py", line 642, in execute util.preloaded.orm_persistence.save_obj( File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/persistence.py", line 93, in save_obj _emit_insert_statements( File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/persistence.py", line 1048, in _emit_insert_statements result = connection.execute( File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1408, in execute return meth( File 
"/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/sql/elements.py", line 513, in _execute_on_connection return connection._execute_clauseelement( File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1630, in _execute_clauseelement ret = self._execute_context( File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1839, in _execute_context return self._exec_single_context( File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1979, in _exec_single_context self._handle_dbapi_exception( File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 2335, in _handle_dbapi_exception raise sqlalchemy_exception.with_traceback(exc_info[2]) from e File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1960, in _exec_single_context self.dialect.do_execute( File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 924, in do_execute cursor.execute(statement, parameters) sqlalchemy.exc.IntegrityError: (sqlite3.IntegrityError) UNIQUE constraint failed: full_llm_cache.prompt, full_llm_cache.llm, full_llm_cache.idx [SQL: INSERT INTO full_llm_cache (prompt, llm, idx, response) VALUES (?, ?, ?, ?)] [parameters: ('[{"lc": 1, "type": "constructor", "id": ["langchain", "schema", "messages", "HumanMessage"], "kwargs": {"content": "Summarise the following text in 5 ... (1288 characters truncated) ... se money for the relief fund. A release date has yet to be set for the recording, which was organised by Sharon Osbourne. 
\\n```", "type": "human"}}]', '{"id": ["langchain", "chat_models", "openai", "ChatOpenAI"], "kwargs": {"max_retries": 2, "model_name": "gpt-4o", "n": 1, "openai_api_key": {"id": [" ... (12 characters truncated) ... EY"], "lc": 1, "type": "secret"}, "openai_proxy": "", "temperature": 0.7}, "lc": 1, "name": "ChatOpenAI", "type": "constructor"}---[(\'stop\', None)]', 0, '{"lc": 1, "type": "constructor", "id": ["langchain", "schema", "output", "ChatGeneration"], "kwargs": {"text": "Sir Elton John performed a charity co ... (1181 characters truncated) ... 0b-5074984500f1-0", "usage_metadata": {"input_tokens": 314, "output_tokens": 65, "total_tokens": 379}, "tool_calls": [], "invalid_tool_calls": []}}}}')] (Background on this error at: https://sqlalche.me/e/20/gkpj)
```

### Description

When using the SQLite cache in an async setup where X chat jobs are running in parallel, `sqlite3.IntegrityError` happens at a random point. It seems to be caused by the same key (prompt? llm_string?) being inserted into the DB simultaneously, even though I am already using **.agenerate**. Note that this problem happens randomly, so it is difficult to catch. I have written a minimal program above to replicate this error.

**What the minimal program does:**

- Reads the top 50 news texts from a csv file on your local OS (you need to download it from 'https://www.kaggle.com/datasets/alfathterry/bbc-full-text-document-classification?resource=download')
- (note: you MIGHT be able to reproduce the error using shorter texts instead, but I am not sure. I just try to replicate my setup as closely as possible, as my program sends long prompts)
- Randomly chooses 20 news items from the above 50
- Creates 20 chat jobs to run simultaneously (as a single 'batch'), each asking GPT to summarise the news
- Uses a shared SQLite cache object
- The program prints the total cost for each batch, so eventually you should see the cost remain constant at 0, meaning all calls go through the cache. But I never reached that point before the exception takes place.

**What will happen:**

Randomly, at some point, the program will break with the error above. Make sure you delete the generated cache between different program runs. I ran the program 3 times:

- 1st time it breaks at batch 7
- 2nd time it breaks at batch 10
- 3rd time it breaks at batch 4

### System Info

langchain 0.2.6
langchain-community 0.2.6
langchain-core 0.2.11
langchain-openai 0.1.14
OS: Ubuntu 22.04
Python: 3.10 managed by pyenv and poetry.
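The race behind this error is that two coroutines can both miss the cache for the same prompt, both call the API, and both try to insert the same `(prompt, llm, idx)` key. A stdlib-only sketch (not LangChain's actual code; the table is a stand-in for `full_llm_cache`) of why an idempotent `INSERT OR REPLACE` upsert avoids the UNIQUE-constraint failure that a plain `INSERT` hits:

```python
import sqlite3

# Stand-in for the full_llm_cache table, with the same composite primary key.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE full_llm_cache ("
    "prompt TEXT, llm TEXT, idx INTEGER, response TEXT, "
    "PRIMARY KEY (prompt, llm, idx))"
)

def upsert(prompt: str, llm: str, idx: int, response: str) -> None:
    # INSERT OR REPLACE is idempotent: a second writer of the same key
    # overwrites the row instead of raising sqlite3.IntegrityError.
    with conn:
        conn.execute(
            "INSERT OR REPLACE INTO full_llm_cache VALUES (?, ?, ?, ?)",
            (prompt, llm, idx, response),
        )

upsert("same prompt", "same llm_string", 0, "first writer")
upsert("same prompt", "same llm_string", 0, "second writer")  # no error
row = conn.execute("SELECT response FROM full_llm_cache").fetchone()
print(row[0])  # second writer
```

A plain `INSERT` in the same scenario raises exactly the `UNIQUE constraint failed` error reported above, so catching `IntegrityError` around the cache update (and treating it as "another writer won") would be an equivalent mitigation.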
SQLiteCache under async setup randomly breaks due to sqlite3.IntegrityError (langchain_community.cache)
https://api.github.com/repos/langchain-ai/langchain/issues/23904/comments
0
2024-07-05T10:52:14Z
2024-07-11T10:55:11Z
https://github.com/langchain-ai/langchain/issues/23904
2,392,364,762
23,904
[ "langchain-ai", "langchain" ]
### URL

https://python.langchain.com/v0.1/docs/modules/memory/chat_messages/

### Checklist

- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).

### Issue with current documentation:

![Screenshot from 2024-07-05 19-48-46](https://github.com/langchain-ai/langchain/assets/8540764/8fccce5c-7b7e-488d-b0e2-f71da736bbfa)

Similar to: https://github.com/langchain-ai/langchain/issues/23892

### Idea or request for content:

N/A
DOC: <Issue related to /v0.1/docs/modules/memory/chat_messages/> 404 on ChatMessageHistory link
https://api.github.com/repos/langchain-ai/langchain/issues/23902/comments
3
2024-07-05T09:49:23Z
2024-07-07T13:17:38Z
https://github.com/langchain-ai/langchain/issues/23902
2,392,260,779
23,902
[ "langchain-ai", "langchain" ]
### Checked other resources

- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

### Example Code

```
from langchain_openai import ChatOpenAI
from typing import Optional
from langchain_core.pydantic_v1 import BaseModel, Field

class Joke(BaseModel):
    """Joke to tell user."""

    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline to the joke")
    rating: Optional[int] = Field(description="How funny the joke is, from 1 to 10")

llm = ChatOpenAI(model="gpt-3.5-turbo")
structured_llm = llm.with_structured_output(Joke)
result = structured_llm.invoke("Tell me a joke about cats")
print(result)
# result: None
```

### Error Message and Stack Trace (if applicable)

Nothing

### Description

I tried LangChain's `with_structured_output()` method, following https://python.langchain.com/v0.2/docs/how_to/structured_output/, but found the output is None.

### System Info

python == 3.9.19
langchain == 0.2.6
langchain_core == 0.2.10
langchain-openai == 0.1.13
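A `None` result from `with_structured_output` typically means the parser received no usable function-call arguments from the model (passing `include_raw=True` to `with_structured_output` is one way to inspect the raw message alongside the parsed result). A stdlib-only sketch, using a hypothetical `parse_structured` helper rather than LangChain's real parser, of how a lenient parser collapses empty or malformed arguments to `None` instead of raising:

```python
import json

def parse_structured(raw_args: str):
    # Hypothetical lenient parser: empty or malformed tool-call arguments
    # become None rather than an exception, which is how a silent None
    # can surface to the caller.
    if not raw_args:
        return None
    try:
        return json.loads(raw_args)
    except json.JSONDecodeError:
        return None

print(parse_structured(""))  # None: the model produced no tool call
print(parse_structured('{"setup": "a", "punchline": "b"}'))
```

Under this reading, checking the raw model output is the first debugging step: if the model never emitted a tool call, the structured wrapper has nothing to parse.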
when run method “with_structured_output”, output print nothing?? code was copied from langchain doc.
https://api.github.com/repos/langchain-ai/langchain/issues/23901/comments
2
2024-07-05T09:33:50Z
2024-07-06T12:26:51Z
https://github.com/langchain-ai/langchain/issues/23901
2,392,234,090
23,901
[ "langchain-ai", "langchain" ]
### Checked other resources

- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

### Example Code

```
from langchain_community.chat_models.azureml_endpoint import AzureMLChatOnlineEndpoint
from langchain_community.llms.azureml_endpoint import ContentFormatterBase
from langchain_community.chat_models.azureml_endpoint import (
    AzureMLEndpointApiType,
    CustomOpenAIChatContentFormatter,
)
from langchain_core.messages import HumanMessage

chat = AzureMLChatOnlineEndpoint(
    endpoint_url="https://llm-host-westeurope-mx8x22bi.westeurope.inference.ml.azure.com/score",
    endpoint_api_type=AzureMLEndpointApiType.dedicated,
    endpoint_api_key="xY1BWYshxYJhQGZE6P7Uc1of34BW9b5t",
    content_formatter=CustomOpenAIChatContentFormatter(),
)
```

```
response = chat.invoke(
    [HumanMessage(content="Hallo")], max_tokens=512
)
response
```

### Error Message and Stack Trace (if applicable)

I think I have set up the right deployment type.
See here the full trace: ``` --------------------------------------------------------------------------- KeyError Traceback (most recent call last) File [~/anaconda3/lib/python3.11/site-packages/langchain_community/chat_models/azureml_endpoint.py:140](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_community/chat_models/azureml_endpoint.py:140), in CustomOpenAIChatContentFormatter.format_response_payload(self, output, api_type) 139 try: --> 140 choice = json.loads(output)["output"] 141 except (KeyError, IndexError, TypeError) as e: KeyError: 'output' The above exception was the direct cause of the following exception: ValueError Traceback (most recent call last) [/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/test.ipynb](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/test.ipynb) Zelle 4 line 8 [5](vscode-notebook-cell:/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/test.ipynb#Y133sZmlsZQ%3D%3D?line=4) prompt = ChatPromptTemplate.from_messages([("system", system), ("human", human)]) [7](vscode-notebook-cell:/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/test.ipynb#Y133sZmlsZQ%3D%3D?line=6) chain = prompt | chat ----> [8](vscode-notebook-cell:/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/test.ipynb#Y133sZmlsZQ%3D%3D?line=7) chain.invoke({"text": "Explain the importance of low latency for LLMs."}) File [~/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:2507](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:2507), in RunnableSequence.invoke(self, input, config, **kwargs) 2505 input = step.invoke(input, config, **kwargs) 2506 else: -> 2507 input = step.invoke(input, config) 2508 # finish the root run 2509 except BaseException as e: File 
[~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:248](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:248), in BaseChatModel.invoke(self, input, config, stop, **kwargs) 237 def invoke( 238 self, 239 input: LanguageModelInput, (...) 243 **kwargs: Any, 244 ) -> BaseMessage: 245 config = ensure_config(config) 246 return cast( 247 ChatGeneration, --> 248 self.generate_prompt( 249 [self._convert_input(input)], 250 stop=stop, 251 callbacks=config.get("callbacks"), 252 tags=config.get("tags"), 253 metadata=config.get("metadata"), 254 run_name=config.get("run_name"), 255 run_id=config.pop("run_id", None), 256 **kwargs, 257 ).generations[0][0], 258 ).message File [~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:677](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:677), in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs) 669 def generate_prompt( 670 self, 671 prompts: List[PromptValue], (...) 
674 **kwargs: Any, 675 ) -> LLMResult: 676 prompt_messages = [p.to_messages() for p in prompts] --> 677 return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) File [~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:534](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:534), in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs) 532 if run_managers: 533 run_managers[i].on_llm_error(e, response=LLMResult(generations=[])) --> 534 raise e 535 flattened_outputs = [ 536 LLMResult(generations=[res.generations], llm_output=res.llm_output) # type: ignore[list-item] 537 for res in results 538 ] 539 llm_output = self._combine_llm_outputs([res.llm_output for res in results]) File [~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:524](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:524), in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs) 521 for i, m in enumerate(messages): 522 try: 523 results.append( --> 524 self._generate_with_cache( 525 m, 526 stop=stop, 527 run_manager=run_managers[i] if run_managers else None, 528 **kwargs, 529 ) 530 ) 531 except BaseException as e: 532 if run_managers: File [~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:749](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:749), in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs) 747 else: 748 if 
inspect.signature(self._generate).parameters.get("run_manager"): --> 749 result = self._generate( 750 messages, stop=stop, run_manager=run_manager, **kwargs 751 ) 752 else: 753 result = self._generate(messages, stop=stop, **kwargs) File [~/anaconda3/lib/python3.11/site-packages/langchain_community/chat_models/azureml_endpoint.py:279](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_community/chat_models/azureml_endpoint.py:279), in AzureMLChatOnlineEndpoint._generate(self, messages, stop, run_manager, **kwargs) 273 request_payload = self.content_formatter.format_messages_request_payload( 274 messages, _model_kwargs, self.endpoint_api_type 275 ) 276 response_payload = self.http_client.call( 277 body=request_payload, run_manager=run_manager 278 ) --> 279 generations = self.content_formatter.format_response_payload( 280 response_payload, self.endpoint_api_type 281 ) 282 return ChatResult(generations=[generations]) File [~/anaconda3/lib/python3.11/site-packages/langchain_community/chat_models/azureml_endpoint.py:142](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_community/chat_models/azureml_endpoint.py:142), in CustomOpenAIChatContentFormatter.format_response_payload(self, output, api_type) 140 choice = json.loads(output)["output"] 141 except (KeyError, IndexError, TypeError) as e: --> 142 raise ValueError(self.format_error_msg.format(api_type=api_type)) from e 143 return ChatGeneration( 144 message=BaseMessage( 145 content=choice.strip(), (...) 148 generation_info=None, 149 ) 150 if api_type == AzureMLEndpointApiType.serverless: ValueError: Error while formatting response payload for chat model of type `AzureMLEndpointApiType.dedicated`. Are you using the right formatter for the deployed model and endpoint type? 
```

### Description

Hi, I set up Mixtral 8x22B on Azure AI/Machine Learning and now want to use it with LangChain. I have difficulties with the format I am getting, e.g. a ChatOpenAI response looks like this:

```
from langchain_openai import ChatOpenAI

llmm = ChatOpenAI()
llmm.invoke("Hallo")
```

`AIMessage(content='Hallo! Wie kann ich Ihnen helfen?', response_metadata={'token_usage': {'completion_tokens': 8, 'prompt_tokens': 8, 'total_tokens': 16}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='r')`

This is how it looks when I load Mixtral 8x22B with AzureMLChatOnlineEndpoint:

```
from langchain_community.chat_models.azureml_endpoint import AzureMLChatOnlineEndpoint
from langchain_community.chat_models.azureml_endpoint import (
    AzureMLEndpointApiType,
    CustomOpenAIChatContentFormatter,
)
from langchain_core.messages import HumanMessage

chat = AzureMLChatOnlineEndpoint(
    endpoint_url="...",
    endpoint_api_type=AzureMLEndpointApiType.dedicated,
    endpoint_api_key="...",
    content_formatter=CustomOpenAIChatContentFormatter(),
)
chat.invoke("Hallo")
```

`BaseMessage(content='Hallo, ich bin ein deutscher Sprachassistent. Was kann ich für', type='assistant', id='run-23')`

So with the Mixtral model the output has a **different format (BaseMessage vs. AIMessage)**. How can I change this to make it work just like a ChatOpenAI model? I further explored whether it works in a chain with a ChatPromptTemplate, without success:

```
from langchain_core.prompts import ChatPromptTemplate

system = "You are a helpful assistant called Bot."
human = "{text}"
prompt = ChatPromptTemplate.from_messages([("system", system), ("human", human)])

chain = prompt | chat
chain.invoke({"text": "Who are you?"})
```

This results in `KeyError: 'output'` and `ValueError: Error while formatting response payload for chat model of type AzureMLEndpointApiType.dedicated. Are you using the right formatter for the deployed model and endpoint type?`. See the full trace above.

In my application I want to easily switch between these two models. Thanks in advance!

### System Info

langchain 0.2.6 pypi_0 pypi
langchain-chroma 0.1.0 pypi_0 pypi
langchain-community 0.2.6 pypi_0 pypi
langchain-core 0.2.10 pypi_0 pypi
langchain-experimental 0.0.49 pypi_0 pypi
langchain-groq 0.1.5 pypi_0 pypi
langchain-openai 0.1.7 pypi_0 pypi
langchain-postgres 0.0.3 pypi_0 pypi
langchain-text-splitters 0.2.1
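If the goal is to switch between the two backends transparently, one option is to normalize whatever message type comes back before downstream use. The classes below are simplified stand-ins for the `langchain_core` types (not the real implementations), sketching the rewrap:

```python
class BaseMessage:
    """Simplified stand-in for langchain_core's BaseMessage."""
    def __init__(self, content: str, type: str = "assistant"):
        self.content = content
        self.type = type

class AIMessage(BaseMessage):
    """Simplified stand-in for langchain_core's AIMessage."""
    def __init__(self, content: str):
        super().__init__(content, type="ai")

def to_ai_message(msg: BaseMessage) -> AIMessage:
    # Rewrap plain assistant messages so both backends yield AIMessage;
    # messages that are already AIMessage pass through unchanged.
    return msg if isinstance(msg, AIMessage) else AIMessage(msg.content)

normalized = to_ai_message(BaseMessage("Hallo, ich bin ein Assistent"))
print(type(normalized).__name__)  # AIMessage
```

With the real types, the same `to_ai_message` shim could wrap the endpoint's `invoke` result (or run as a small `RunnableLambda` after the model in a chain), so the rest of the application only ever sees `AIMessage`.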
Load LLM (Mixtral 8x22B) from Azure AI endpoint as Langchain Model - BaseMessage instead of AIMessage
https://api.github.com/repos/langchain-ai/langchain/issues/23899/comments
6
2024-07-05T06:52:55Z
2024-07-11T20:09:37Z
https://github.com/langchain-ai/langchain/issues/23899
2,391,963,121
23,899
[ "langchain-ai", "langchain" ]
### Checked other resources

- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

### Example Code

```
from langchain.utilities import SQLDatabase

db = SQLDatabase.from_uri(db_path, include_tables=shortlisted_tables, sample_rows_in_table_info=2)
```

### Error Message and Stack Trace (if applicable)

_No response_

### Description

Currently, when initiating the database, `sample_rows_in_table_info` takes the number of rows to be used. These are typically the top rows from my tables. I want to manually select rows for my use case, rather than the top N rows. I'm trying to do this because my top rows might have missing values in some of the columns, and I don't want to use rows with missing values for my query generation. Is there any way we can do this? If so, please share.

Resources followed: https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.sql_database.SQLDatabase.html#langchain_community.utilities.sql_database.SQLDatabase.from_uri

### System Info

`pip show langchain`

```
Name: langchain
Version: 0.2.3
Summary: Building applications with LLMs through composability
Home-page: https://github.com/langchain-ai/langchain
Author:
Author-email:
License: MIT
Location: /home/ankit/anaconda3/envs/chatai_production/lib/python3.9/site-packages
Requires: aiohttp, async-timeout, langchain-core, langchain-text-splitters, langsmith, numpy, pydantic, PyYAML, requests, SQLAlchemy, tenacity
Required-by: langchain-community
```
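One documented escape hatch for this is the `custom_table_info` argument of `SQLDatabase` / `SQLDatabase.from_uri`, which replaces the auto-generated table info (schema plus sample rows) with a string you build yourself. A stdlib-only sketch, using a hypothetical `riders` table, of selecting NULL-free sample rows and rendering them in the same CREATE-TABLE-plus-rows style that could then be passed as `custom_table_info={'riders': table_info}`:

```python
import sqlite3

# Hypothetical table: pick sample rows that have no missing values instead
# of the top N rows that sample_rows_in_table_info would take.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE riders (name TEXT, team TEXT)")
conn.executemany(
    "INSERT INTO riders VALUES (?, ?)",
    [("Alice", None), ("Bob", "Team X"), ("Cara", "Team Y")],
)

rows = conn.execute(
    "SELECT name, team FROM riders "
    "WHERE name IS NOT NULL AND team IS NOT NULL "
    "ORDER BY rowid LIMIT 2"
).fetchall()

# Render the hand-picked rows as the table-info string for the LLM.
table_info = (
    "CREATE TABLE riders (name TEXT, team TEXT)\n/*\n"
    + "\n".join("\t".join(r) for r in rows)
    + "\n*/"
)
print(table_info)
```

The `WHERE ... IS NOT NULL` filter is where the manual selection happens; any SQL predicate (or an explicit list of primary keys) could be used instead to choose the rows shown to the model.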
sample_rows_in_table_info: int = 3, I don't want to use top N rows, but select some rows manually to be passed to SQLDatabase
https://api.github.com/repos/langchain-ai/langchain/issues/23898/comments
0
2024-07-05T06:29:57Z
2024-07-05T06:40:48Z
https://github.com/langchain-ai/langchain/issues/23898
2,391,926,241
23,898
[ "langchain-ai", "langchain" ]
### Checked other resources

- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

### Example Code

from cassandra.cluster import Cluster
from ssl import PROTOCOL_TLSv1_2, SSLContext, CERT_NONE, PROTOCOL_TLS, PROTOCOL_SSLv23
from cassandra.auth import PlainTextAuthProvider
from asyncio.log import logger
from config import *
from langchain_openai import AzureChatOpenAI
from langchain.globals import set_llm_cache
from fastapi import FastAPI
from pydantic import BaseModel
import uvicorn
from langchain.schema import HumanMessage
from langchain_core.messages import AIMessage
from langchain_core.outputs.chat_generation import ChatGeneration
from langchain_core.load import dumps
import cassio
from langchain_community.cache import CassandraCache

#creating generation_info for ChatGeneration Object from 'res' <AIMessage Object>
#creating ChatGeneration Object

cluster = Cluster(['*******************'], port = 9042)
session = cluster.connect()
print(session)

aoai_endpoint = '******************************************************'
aoai_api_key = '****************************************************'
aoai_api_version = '2024-05-01-preview'

app = FastAPI()

class Item(BaseModel):
    question: str

llm = AzureChatOpenAI(
    model='gpt-4o',
    azure_endpoint=aoai_endpoint,
    azure_deployment='gpt-4o',
    api_version=aoai_api_version,
    api_key=aoai_api_key,
    temperature=0.0,
    max_tokens=4000,
)

@app.post("/askquestion")
def say_joke(item: Item):
    cassio.init(session=session, keyspace='cycling')
    set_llm_cache(CassandraCache())
    message = HumanMessage(content=item.question)
    response = llm.invoke([message])
    return response.content

if __name__ == "__main__":
uvicorn.run(host="0.0.0.0", port=8000, app=app) ### Error Message and Stack Trace (if applicable) INFO: 172.26.64.1:52437 - "POST /askquestion HTTP/1.1" 500 Internal Server Error ERROR: Exception in ASGI application Traceback (most recent call last): File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/uvicorn/protocols/http/httptools_impl.py", line 399, in run_asgi result = await app( # type: ignore[func-returns-value] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/uvicorn/middleware/proxy_headers.py", line 70, in __call__ return await self.app(scope, receive, send) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/fastapi/applications.py", line 1054, in __call__ await super().__call__(scope, receive, send) File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/starlette/applications.py", line 123, in __call__ await self.middleware_stack(scope, receive, send) File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/starlette/middleware/errors.py", line 186, in __call__ raise exc File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/starlette/middleware/errors.py", line 164, in __call__ await self.app(scope, receive, _send) File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/starlette/middleware/exceptions.py", line 65, in __call__ await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send) File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app raise exc File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app await app(scope, receive, sender) File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/starlette/routing.py", line 756, in __call__ await self.middleware_stack(scope, receive, send) File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/starlette/routing.py", line 776, in app await route.handle(scope, 
```
receive, send)
  File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/starlette/routing.py", line 297, in handle
    await self.app(scope, receive, send)
  File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/starlette/routing.py", line 77, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/starlette/routing.py", line 72, in app
    response = await func(request)
  File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/fastapi/routing.py", line 278, in app
    raw_response = await run_endpoint_function(
  File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/fastapi/routing.py", line 193, in run_endpoint_function
    return await run_in_threadpool(dependant.call, **values)
  File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/starlette/concurrency.py", line 42, in run_in_threadpool
    return await anyio.to_thread.run_sync(func, *args)
  File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 2177, in run_sync_in_worker_thread
    return await future
  File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 859, in run
    result = context.run(func, *args)
  File "/home/dd/Langchain/test.py", line 47, in say_joke
    response = llm.invoke([message])
  File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 248, in invoke
    self.generate_prompt(
  File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 681, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 538, in generate
    raise e
  File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 528, in generate
    self._generate_with_cache(
  File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 712, in _generate_with_cache
    llm_string = self._get_llm_string(stop=stop, **kwargs)
  File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 455, in _get_llm_string
    _cleanup_llm_representation(serialized_repr, 1)
  File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 1239, in _cleanup_llm_representation
    _cleanup_llm_representation(value, depth + 1)
  File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 1229, in _cleanup_llm_representation
    if serialized["type"] == "not_implemented" and "repr" in serialized:
       ~~~~~~~~~~^^^^^^^^
TypeError: string indices must be integers, not 'str'
```

### Description

We are trying to implement Cassandra exact caching with our example code. We used both an Azure Managed Cassandra instance and a standalone Docker instance, and the code did not work with either DB setup. It works after a simple tweak to the LangChain source code (python3.12/site-packages/langchain_core/language_models/chat_models.py).
Line 455 (`_cleanup_llm_representation(serialized_repr, 1)`) in that file was commented out, and after that it worked:

```python
def _get_llm_string(self, stop: Optional[List[str]] = None, **kwargs: Any) -> str:
    if self.is_lc_serializable():
        params = {**kwargs, **{"stop": stop}}
        param_string = str(sorted([(k, v) for k, v in params.items()]))
        # This code is not super efficient as it goes back and forth between
        # json and dict.
        serialized_repr = dumpd(self)
        _cleanup_llm_representation(serialized_repr, 1)  # <-- commenting this out makes it work
        llm_string = json.dumps(serialized_repr, sort_keys=True)
        return llm_string + "---" + param_string
    else:
        params = self._get_invocation_params(stop=stop, **kwargs)
        params = {**params, **kwargs}
        return str(sorted([(k, v) for k, v in params.items()]))
```

### System Info

```
$ pip show langchain-community
Name: langchain-community
Version: 0.2.6
Summary: Community contributed LangChain integrations.
Home-page: https://github.com/langchain-ai/langchain
Author:
Author-email:
License: MIT
Location: /home/dd/LLAMA/llmvenv/lib/python3.12/site-packages
Requires: aiohttp, dataclasses-json, langchain, langchain-core, langsmith, numpy, PyYAML, requests, SQLAlchemy, tenacity
Required-by:

$ pip show cassandra-driver
Name: cassandra-driver
Version: 3.29.1
Summary: DataStax Driver for Apache Cassandra
Home-page: http://github.com/datastax/python-driver
Author: DataStax
Author-email:
License:
Location: /home/dd/LLAMA/llmvenv/lib/python3.12/site-packages
Requires: geomet
Required-by: cassio

$ pip show cassio
Name: cassio
Version: 0.1.8
Summary: A framework-agnostic Python library to seamlessly integrate Apache Cassandra(R) with ML/LLM/genAI workloads.
Home-page: https://cassio.org
Author: Stefano Lottini
Author-email: stefano.lottini@datastax.com
License: Apache-2.0
Location: /home/dd/LLAMA/llmvenv/lib/python3.12/site-packages
Requires: cassandra-driver, numpy, requests
Required-by:
```
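For context on why commenting out that call helps: the crash happens because `_cleanup_llm_representation` recurses into the serialized representation and indexes every node as a dict, even when a value is a plain string (e.g. a non-serializable field such as a router rendered as its repr). The sketch below is a hypothetical defensive rewrite of that guard — an illustration of the failure mode, not the actual LangChain implementation:

```python
def cleanup_llm_representation(serialized, depth):
    """Hypothetical defensive variant: only recurse into dicts, so string
    leaves (which trigger the TypeError in the report above) are skipped."""
    if not isinstance(serialized, dict):
        return  # a str/int leaf here is exactly what crashed the original
    if serialized.get("type") == "not_implemented" and "repr" in serialized:
        del serialized["repr"]
    if "graph" in serialized:
        del serialized["graph"]
    for value in serialized.get("kwargs", {}).values():
        cleanup_llm_representation(value, depth + 1)


# A string leaf no longer raises:
node = {
    "kwargs": {
        "router": "Router(...)",  # non-serializable object rendered as a string
        "inner": {"type": "not_implemented", "repr": "<obj>"},
    }
}
cleanup_llm_representation(node, 1)
```

With this guard in place, string-valued kwargs are left untouched while `not_implemented` dict nodes are still cleaned.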
Cassandra Exact Cache issue: TypeError: string indices must be integers, not 'str'
https://api.github.com/repos/langchain-ai/langchain/issues/23896/comments
5
2024-07-05T05:15:48Z
2024-07-05T19:05:12Z
https://github.com/langchain-ai/langchain/issues/23896
2391829528
23896
[ "langchain-ai", "langchain" ]
### URL

https://python.langchain.com/v0.2/docs/tutorials/chatbot/

### Checklist

- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).

### Issue with current documentation:

The documentation refers to "ChatMessageHistory", but that page does not exist in the documentation.

![Screenshot from 2024-07-04 20-54-25](https://github.com/langchain-ai/langchain/assets/56577852/34e9b560-9dfe-4637-a6cf-c4f1d9c7d407)

The content referred to here is present in the prompt templates section: https://python.langchain.com/v0.2/docs/tutorials/chatbot/#prompt-templates

### Idea or request for content:

_No response_
DOC: <Issue related to /v0.2/docs/tutorials/chatbot/> Missing documentation.
https://api.github.com/repos/langchain-ai/langchain/issues/23892/comments
1
2024-07-05T01:56:21Z
2024-07-05T19:48:12Z
https://github.com/langchain-ai/langchain/issues/23892
2391636123
23892
[ "langchain-ai", "langchain" ]
### Checked other resources

- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

### Example Code

```python
from langchain.agents import AgentExecutor, create_tool_calling_agent, tool
from langchain_aws import ChatBedrock
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant"),
        ("placeholder", "{chat_history}"),
        ("human", "{input}"),
        ("placeholder", "{agent_scratchpad}"),
    ]
)

model = ChatBedrock(model_id="meta.llama3-70b-instruct-v1:0")


@tool
def magic_function(input: int) -> int:
    """Applies a magic function to an input."""
    return input + 2


tools = [magic_function]

agent = create_tool_calling_agent(model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

agent_executor.invoke({"input": "what is the value of magic_function(3)?"})
```

### Error Message and Stack Trace (if applicable)

```
File "/MVP/mvp/lib/python3.12/site-packages/langchain_core/messages/ai.py", line 243, in __add__
    response_metadata = merge_dicts(
                        ^^^^^^^^^^^^
File "/MVP/mvp/lib/python3.12/site-packages/langchain_core/utils/_merge.py", line 40, in merge_dicts
    raise TypeError(
TypeError: Additional kwargs key generation_token_count already exists in left dict and value has unsupported type <class 'int'>.
```

### Description

I am trying out the example from the LangChain website, and it raises **TypeError: Additional kwargs key generation_token_count already exists in left dict and value has unsupported type <class 'int'>.** I cannot resolve the error and don't understand it. Can you please look into it?

### System Info

langchain==0.2.6
langchain-anthropic==0.1.19
langchain-aws==0.1.9
langchain-community==0.2.6
langchain-core==0.2.11
langchain-text-splitters==0.2.2

Mac
Python 3.12
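The error originates in `langchain_core.utils._merge.merge_dicts`, which is used to merge the metadata of streamed message chunks and refuses duplicate keys whose values are non-mergeable scalars such as `int`. The stand-in below is a simplified illustration of that rule (not the actual implementation) showing why a per-chunk integer field like Bedrock's `generation_token_count` trips it:

```python
def merge_dicts(left, right):
    """Simplified stand-in for langchain_core.utils._merge.merge_dicts.

    Strings are concatenated and dicts merged recursively; any other
    duplicated, unequal scalar (e.g. an int token count emitted on every
    streamed chunk) raises TypeError, matching the failure reported above.
    """
    merged = dict(left)
    for key, value in right.items():
        if key not in merged or merged[key] is None:
            merged[key] = value
        elif isinstance(merged[key], str) and isinstance(value, str):
            merged[key] += value  # concatenate streamed text fragments
        elif isinstance(merged[key], dict) and isinstance(value, dict):
            merged[key] = merge_dicts(merged[key], value)
        elif merged[key] == value:
            continue  # identical duplicates are harmless
        else:
            raise TypeError(
                f"Additional kwargs key {key} already exists in left dict and "
                f"value has unsupported type {type(merged[key])}."
            )
    return merged


merge_dicts({"stop_reason": None}, {"generation_token_count": 5})  # fine
try:
    merge_dicts({"generation_token_count": 3}, {"generation_token_count": 5})
except TypeError as exc:
    print("reproduced:", exc)
```

So two chunks each carrying a different `generation_token_count` integer cannot be merged, which is why the streamed Bedrock response fails in `AIMessageChunk.__add__`.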
raise TypeError( TypeError: Additional kwargs key generation_token_count already exists in left dict and value has unsupported type <class 'int'>.
https://api.github.com/repos/langchain-ai/langchain/issues/23891/comments
0
2024-07-05T01:06:04Z
2024-07-08T15:01:30Z
https://github.com/langchain-ai/langchain/issues/23891
2391600864
23891
[ "langchain-ai", "langchain" ]
### Checked other resources

- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

### Example Code

````py
from langchain.agents import initialize_agent, AgentType
from langchain.agents import Tool
from langchain_experimental.utilities import PythonREPL
import datetime
from langchain.agents import AgentExecutor
from langchain.chains.conversation.memory import (
    ConversationBufferMemory,
)
from langchain.prompts import MessagesPlaceholder
from langchain_community.chat_models import BedrockChat
from langchain.agents import OpenAIMultiFunctionsAgent
from langchain_core.messages import SystemMessage  # import missing in the original snippet

# You can create the tool to pass to an agent
repl_tool = Tool(
    name="python_repl",
    description="A Python shell. Use this to execute python commands. Input should be a valid python command. If you want to see the output of a value, you should print it out with `print(...)`.",
    func=PythonREPL().run,
)

memory = ConversationBufferMemory(return_messages=True, k=10, memory_key="chat_history")

prompt = OpenAIMultiFunctionsAgent.create_prompt(
    system_message=SystemMessage(content="You are an helpful AI bot"),
    extra_prompt_messages=[MessagesPlaceholder(variable_name="chat_history")],
)

llm = BedrockChat(
    model_id="anthropic.claude-3-5-sonnet-20240620-v1:0",
    client=client,  # initialized elsewhere
    model_kwargs={"max_tokens": 4050, "temperature": 0.5},
    verbose=True,
)

tools = [
    repl_tool,
]

agent_executor = initialize_agent(
    tools,
    llm,
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    max_iterations=10,
    memory=memory,
    prompt=prompt,
)

res = agent_executor.invoke({"input": "hi how are you?"})
print(res["output"])
# Hello! I'm an AI assistant, so I don't have feelings, but I'm functioning well and ready to help. How can I assist you today?

res = agent_executor.invoke({"input": "what was my previous message?"})
print(res["output"])
# I'm sorry, but I don't have access to any previous messages. Each interaction starts fresh, so I don't have information about what you said before this current question. If you have a specific topic or question you'd like to discuss, please feel free to ask and I'll be happy to help.

# but when I checked the memory buffer
print(memory.buffer)
# [HumanMessage(content='hi how are you?'), AIMessage(content="Hello! As an AI assistant, I don't have feelings, but I'm functioning well and ready to help you. How can I assist you today?"), HumanMessage(content='hi how are you?'), AIMessage(content="Hello! I'm an AI assistant, so I don't have feelings, but I'm functioning well and ready to help. How can I assist you today?"), HumanMessage(content='what was my previous message?'), AIMessage(content="I'm sorry, but I don't have access to any previous messages. Each interaction starts fresh, so I don't have information about what you said before this current question. If you have a specific topic or question you'd like to discuss, please feel free to ask and I'll be happy to help.")]

# As you can see, memory is getting updated,
# so I checked the prompt template of the agent executor
pprint(agent_executor.agent.llm_chain.prompt)
# ChatPromptTemplate(input_variables=['agent_scratchpad', 'input'], messages=[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template='Respond to the human as helpfully and accurately as possible. You have access to the following tools:\n\npython_repl: A Python shell. Use this to execute python commands. Input should be a valid python command. If you want to see the output of a value, you should print it out with `print(...)`., args: {{\'tool_input\': {{\'type\': \'string\'}}}}\n\nUse a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).\n\nValid "action" values: "Final Answer" or python_repl\n\nProvide only ONE action per $JSON_BLOB, as shown:\n\n```\n{{\n "action": $TOOL_NAME,\n "action_input": $INPUT\n}}\n```\n\nFollow this format:\n\nQuestion: input question to answer\nThought: consider previous and subsequent steps\nAction:\n```\n$JSON_BLOB\n```\nObservation: action result\n... (repeat Thought/Action/Observation N times)\nThought: I know what to respond\nAction:\n```\n{{\n "action": "Final Answer",\n "action_input": "Final response to human"\n}}\n```\n\nBegin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation:.\nThought:')), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['agent_scratchpad', 'input'], template='{input}\n\n{agent_scratchpad}'))])

# As you can see, there is no input variable placeholder for `chat_history`
````

### Error Message and Stack Trace (if applicable)

_No response_

### Description

- I'm trying to use an agent executor with memory (`AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION`)
- I'm passing the right prompt template, which contains the `memory_key`
- The initialized agent executor's prompt template falls back to a default prompt template that does not contain the `memory_key` placeholder

### System Info

```sh
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:13:18 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6030
> Python Version: 3.12.2 (v3.12.2:6abddd9f6a, Feb 6 2024, 17:02:06) [Clang 13.0.0 (clang-1300.0.29.30)]

Package Information
-------------------
> langchain_core: 0.2.9
> langchain: 0.1.11
> langchain_community: 0.0.27
> langsmith: 0.1.80
> langchain_anthropic: 0.1.15
> langchain_aws: 0.1.6
> langchain_experimental: 0.0.53
> langchain_openai: 0.0.8
> langchain_text_splitters: 0.0.1

Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
```
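`initialize_agent` does not consume a top-level `prompt=` kwarg for this agent type; the prompt is built inside `StructuredChatAgent.from_llm_and_tools` via `create_prompt`, which accepts `memory_prompts` and `input_variables`. A possible workaround — a hedged sketch based on reading the `agent_kwargs` pass-through, not verified against Bedrock — is to route the memory placeholder through `agent_kwargs` instead:

```python
from langchain.prompts import MessagesPlaceholder

# Hypothetical workaround: initialize_agent forwards agent_kwargs to
# StructuredChatAgent.from_llm_and_tools, whose create_prompt call should
# then include the chat_history placeholder.
agent_executor = initialize_agent(
    tools,
    llm,
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    max_iterations=10,
    memory=memory,
    agent_kwargs={
        "memory_prompts": [MessagesPlaceholder(variable_name="chat_history")],
        "input_variables": ["input", "agent_scratchpad", "chat_history"],
    },
)
```

If this works, `agent_executor.agent.llm_chain.prompt.input_variables` should now list `chat_history` alongside `input` and `agent_scratchpad`.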
`langchain.agents.initialize_agent` does not support custom prompt template
https://api.github.com/repos/langchain-ai/langchain/issues/23884/comments
1
2024-07-04T19:02:16Z
2024-07-08T05:43:48Z
https://github.com/langchain-ai/langchain/issues/23884
2,391,353,782
23,884
[ "langchain-ai", "langchain" ]
### Checked other resources

- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

### Example Code

```py
%pip install --upgrade --quiet langchain-community langchain_openai gql httpx requests-toolbelt

from langchain.agents import AgentType, initialize_agent, load_tools
from langchain_openai import OpenAI

llm = OpenAI(temperature=0, api_key="")

headers = {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer TOKEN_HERE'
}

tools = load_tools(
    ["graphql"],
    graphql_endpoint="https://streaming.bitquery.io/eap",
    headers=headers
)

agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
```

```py
graphql_fields = """subscription {
  Solana {
    InstructionBalanceUpdates(limit: {count: 10}) {
      Transaction {
        Index
        FeePayer
        Fee
        Signature
        Result {
          Success
          ErrorMessage
        }
      }
      Instruction {
        InternalSeqNumber
        Index
        CallPath
        Program {
          Address
          Name
          Parsed
        }
      }
      Block {
        Time
        Hash
        Height
      }
      BalanceUpdate {
        Account {
          Address
        }
        Amount
        Currency {
          Decimals
          CollectionAddress
          Name
          Key
          IsMutable
          Symbol
        }
      }
    }
  }
}
"""

suffix = "Search for the Transaction with positive Balance stored in the graphql database that has this schema "

agent.run(suffix + graphql_fields)
```

### Error Message and Stack Trace (if applicable)

```
> Entering new AgentExecutor chain...
 I should look for a transaction with a positive balance
Action: query_graphql
Action Input: query { Solana { InstructionBalanceUpdates(limit: {count: 10}) { Transaction { Index FeePayer Fee Signature Result { Success ErrorMessage } } Instruction { InternalSeqNumber Index CallPath Program { Address Name Parsed } } Block { Time Hash Height } BalanceUpdate { Account { Address } Amount Currency { Decimals CollectionAddress Name Key IsMutable Symbol } } } } }

---------------------------------------------------------------------------
JSONDecodeError                           Traceback (most recent call last)
File ~/miniconda3/envs/snakes/lib/python3.10/site-packages/requests/models.py:971, in Response.json(self, **kwargs)
    970 try:
--> 971     return complexjson.loads(self.text, **kwargs)
    972 except JSONDecodeError as e:
    973     # Catch JSON-related errors and raise as requests.JSONDecodeError
    974     # This aliases json.JSONDecodeError and simplejson.JSONDecodeError

File ~/miniconda3/envs/snakes/lib/python3.10/json/__init__.py:346, in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
    343 if (cls is None and object_hook is None and
    344         parse_int is None and parse_float is None and
    345         parse_constant is None and object_pairs_hook is None and not kw):
--> 346     return _default_decoder.decode(s)
    347 if cls is None:

File ~/miniconda3/envs/snakes/lib/python3.10/json/decoder.py:337, in JSONDecoder.decode(self, s, _w)
    333 """Return the Python representation of ``s`` (a ``str`` instance
    334 containing a JSON document).
    335
    336 """
--> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end())
    338 end = _w(s, end).end()

File ~/miniconda3/envs/snakes/lib/python3.10/json/decoder.py:355, in JSONDecoder.raw_decode(self, s, idx)
...
File ~/miniconda3/envs/snakes/lib/python3.10/site-packages/gql/transport/requests.py:255
    255     f"{reason}: "
    256     f"{result_text}"
    257 )

TransportServerError: 401 Client Error: Unauthorized for url: https://streaming.bitquery.io/eap
```

### Description

If I run the query without LangChain it works, but it does not work through LangChain.

### System Info

```py
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Thu Jun 27 21:05:47 UTC 2024
> Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]

Package Information
-------------------
> langchain_core: 0.2.11
> langchain: 0.2.6
> langchain_community: 0.2.6
> langsmith: 0.1.83
> langchain_openai: 0.1.14
> langchain_text_splitters: 0.2.2

Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
```
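Since the same token works outside LangChain, one likely cause is that the `Authorization` header never reaches the gql transport used by the tool. A useful sanity check is issuing the query with gql directly, attaching the headers explicitly — a sketch assuming gql 3.x (`RequestsHTTPTransport` with a `headers` parameter); if this succeeds, the problem is the header plumbing inside the LangChain GraphQL tool rather than the token itself:

```python
from gql import Client, gql
from gql.transport.requests import RequestsHTTPTransport

# Build the transport yourself so the Authorization header is definitely sent.
transport = RequestsHTTPTransport(
    url="https://streaming.bitquery.io/eap",
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer TOKEN_HERE",
    },
)
client = Client(transport=transport, fetch_schema_from_transport=False)

# Any cheap query works for the auth check.
result = client.execute(gql("query { __typename }"))
print(result)
```

If the direct call returns data while the tool still gets a 401, the headers are being dropped between `load_tools(..., headers=headers)` and the underlying `GraphQLAPIWrapper`.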
Agents and GraphQL- 401 Client Error: Unauthorized for url: https://streaming.bitquery.io/eap
https://api.github.com/repos/langchain-ai/langchain/issues/23881/comments
0
2024-07-04T16:50:05Z
2024-07-05T06:37:30Z
https://github.com/langchain-ai/langchain/issues/23881
2,391,210,682
23,881
[ "langchain-ai", "langchain" ]
### Checked other resources

- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

### Example Code

```python
from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace

# This part works like a charm
llm = HuggingFaceEndpoint(
    name="mistral",
    endpoint_url=classifier_agent_config.endpoint_url,
    task="text-generation",
    **classifier_agent_config.generation_config
)

# This part raises an error
chat_llm = ChatHuggingFace(llm=llm, verbose=True)
```

### Error Message and Stack Trace (if applicable)

```
---------------------------------------------------------------------------
HTTPError                                 Traceback (most recent call last)
\.venv\Lib\site-packages\huggingface_hub\utils\_errors.py:304, in hf_raise_for_status(response, endpoint_name)
    303 try:
--> 304     response.raise_for_status()
    305 except HTTPError as e:

\.venv\Lib\site-packages\requests\models.py:1024, in Response.raise_for_status(self)
   1023 if http_error_msg:
-> 1024     raise HTTPError(http_error_msg, response=self)

HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/api/whoami-v2

The above exception was the direct cause of the following exception:

HfHubHTTPError                            Traceback (most recent call last)
\.venv\Lib\site-packages\huggingface_hub\hf_api.py:1397, in HfApi.whoami(self, token)
   1396 try:
-> 1397     hf_raise_for_status(r)
   1398 except HTTPError as e:

\.venv\Lib\site-packages\huggingface_hub\utils\_errors.py:371, in hf_raise_for_status(response, endpoint_name)
    369 # Convert `HTTPError` into a `HfHubHTTPError` to display request information
    370 # as well (request id and/or server error message)
--> 371 raise HfHubHTTPError(str(e), response=response) from e

HfHubHTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/api/whoami-v2 (Request ID: Root=1-66869944-058281c36301f9472614deeb;255a1059-9b6e-47ab-bcfc-3c0bced7baa0)

Invalid username or password.

The above exception was the direct cause of the following exception:

HTTPError                                 Traceback (most recent call last)
Cell In[31], line 1
----> 1 chat_llm = ChatHuggingFace(llm=llm)

\.venv\Lib\site-packages\langchain_huggingface\chat_models\huggingface.py:169, in ChatHuggingFace.__init__(self, **kwargs)
    165 super().__init__(**kwargs)
    167 from transformers import AutoTokenizer  # type: ignore[import]
--> 169 self._resolve_model_id()
    171 self.tokenizer = (
    172     AutoTokenizer.from_pretrained(self.model_id)
    173     if self.tokenizer is None
    174     else self.tokenizer
    175 )

\.venv\Lib\site-packages\langchain_huggingface\chat_models\huggingface.py:295, in ChatHuggingFace._resolve_model_id(self)
    291 """Resolve the model_id from the LLM's inference_server_url"""
    293 from huggingface_hub import list_inference_endpoints  # type: ignore[import]
--> 295 available_endpoints = list_inference_endpoints("*")
    296 if _is_huggingface_hub(self.llm) or (
    297     hasattr(self.llm, "repo_id") and self.llm.repo_id
    298 ):
    299     self.model_id = self.llm.repo_id

\.venv\Lib\site-packages\huggingface_hub\hf_api.py:7081, in HfApi.list_inference_endpoints(self, namespace, token)
   7079 # Special case: list all endpoints for all namespaces the user has access to
   7080 if namespace == "*":
-> 7081     user = self.whoami(token=token)
   7083 # List personal endpoints first
   7084 endpoints: List[InferenceEndpoint] = list_inference_endpoints(namespace=self._get_namespace(token=token))

\.venv\Lib\site-packages\huggingface_hub\utils\_validators.py:114, in validate_hf_hub_args.<locals>._inner_fn(*args, **kwargs)
    111 if check_use_auth_token:
    112     kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs)
--> 114 return fn(*args, **kwargs)

\.venv\Lib\site-packages\huggingface_hub\hf_api.py:1399, in HfApi.whoami(self, token)
   1397     hf_raise_for_status(r)
   1398 except HTTPError as e:
-> 1399     raise HTTPError(
   1400         "Invalid user token. If you didn't pass a user token, make sure you "
   1401         "are properly logged in by executing `huggingface-cli login`, and "
   1402         "if you did pass a user token, double-check it's correct.",
   1403         request=e.request,
   1404         response=e.response,
   1405     ) from e
   1406 return r.json()

HTTPError: Invalid user token. If you didn't pass a user token, make sure you are properly logged in by executing `huggingface-cli login`, and if you did pass a user token, double-check it's correct.
```

### Description

* I am trying to use the langchain_huggingface library to connect to a local TGI instance. The `HuggingFaceEndpoint` connects as expected and I can run inference with it. But when wrapping the `HuggingFaceEndpoint` in `ChatHuggingFace`, it raises an error requesting a user token, even though the model is served locally.

### System Info

langchain-huggingface == 0.0.3
langchain-core == 0.2.11
platform: Windows (TGI runs on an EC2 Linux instance)
Python 3.12.2
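From the trace, `ChatHuggingFace.__init__` unconditionally calls `_resolve_model_id()`, which calls `list_inference_endpoints("*")` and hence `whoami` on the Hugging Face Hub — so in this version even a purely local TGI endpoint ends up needing valid Hub credentials. Until that lookup is made optional upstream, a possible stopgap (an assumption, not an official fix) is to authenticate with any valid Hub token so the `whoami` call succeeds:

```python
from huggingface_hub import login

# Hypothetical workaround: the token is only consumed by the
# whoami/list_inference_endpoints lookup, not by the local TGI server.
login(token="hf_...")  # any valid Hugging Face token

chat_llm = ChatHuggingFace(llm=llm, verbose=True)
```

Setting the `HF_TOKEN` (or `HUGGINGFACEHUB_API_TOKEN`) environment variable before constructing `ChatHuggingFace` should have the same effect as calling `login`.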
LangChain x HuggingFace - Using ChatHuggingFace requires hf token for local TGI using locally saved model
https://api.github.com/repos/langchain-ai/langchain/issues/23872/comments
3
2024-07-04T13:03:19Z
2024-07-06T08:31:02Z
https://github.com/langchain-ai/langchain/issues/23872
2390826853
23872
[ "langchain-ai", "langchain" ]
### URL

https://python.langchain.com/v0.2/docs/templates/openai-functions-tool-retrieval-agent/

### Checklist

- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).

### Issue with current documentation:

In [this documentation page](https://python.langchain.com/v0.2/docs/templates/openai-functions-tool-retrieval-agent/#usage), the line "This template is based on [this Agent How-To](https://python.langchain.com/docs/modules/agents/how_to/custom_agent_with_tool_retrieval)." refers to a link that returns Page Not Found.

### Idea or request for content:

It would be beneficial to provide a working link to a proper example of a use case of this template, since the rest of the page's documentation is quite sparse.
DOC: <Issue related to /v0.2/docs/templates/openai-functions-tool-retrieval-agent/> URL in Documentation Gives Page Not Found
https://api.github.com/repos/langchain-ai/langchain/issues/23870/comments
0
2024-07-04T12:37:47Z
2024-07-17T15:06:41Z
https://github.com/langchain-ai/langchain/issues/23870
2390775230
23870
[ "langchain-ai", "langchain" ]
### Checked other resources

- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

### Example Code

```python
import os

from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_community.chat_models import ChatZhipuAI
from langchain_core.messages import HumanMessage

# environment variables
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "balabala"
os.environ["ZHIPUAI_API_KEY"] = "balabala"
os.environ["TAVILY_API_KEY"] = "balabala"

llm = ChatZhipuAI(model="glm-4")
search = TavilySearchResults(max_results=2)
tools = [search]

llm_with_tools = llm.bind_tools(tools)
response = llm_with_tools.invoke([HumanMessage(content="hello")])
print(response.content)
```

### Error Message and Stack Trace (if applicable)

```
Exception has occurred: NotImplementedError
exception: no description
  File "E:\GitHub\langchain\1\agent_1.py", line 33, in <module>
    llm_with_tools = llm.bind_tools(tools)
                     ^^^^^^^^^^^^^^^^^^^^^
NotImplementedError:
```

### Description

The code in `langchain_core/language_models/chat_models.py` (`BaseChatModel.bind_tools`) is incomplete, as shown below:

```python
def bind_tools(
    self,
    tools: Sequence[Union[Dict[str, Any], Type[BaseModel], Callable, BaseTool]],
    **kwargs: Any,
) -> Runnable[LanguageModelInput, BaseMessage]:
    raise NotImplementedError()
```

Following ChatOpenAI, the code should be:

```python
def bind_tools(
    self,
    tools: Sequence[Union[Dict[str, Any], Type[BaseModel], Callable, BaseTool]],
    *,
    tool_choice: Optional[
        Union[dict, str, Literal["auto", "none", "required", "any"], bool]
    ] = None,
    **kwargs: Any,
) -> Runnable[LanguageModelInput, BaseMessage]:
    """Bind tool-like objects to this chat model.

    Assumes model is compatible with OpenAI tool-calling API.

    Args:
        tools: A list of tool definitions to bind to this chat model.
            Can be a dictionary, pydantic model, callable, or BaseTool. Pydantic
            models, callables, and BaseTools will be automatically converted to
            their schema dictionary representation.
        tool_choice: Which tool to require the model to call.
            Options are:
            name of the tool (str): calls corresponding tool;
            "auto": automatically selects a tool (including no tool);
            "none": does not call a tool;
            "any" or "required": force at least one tool to be called;
            True: forces tool call (requires `tools` be length 1);
            False: no effect;
            or a dict of the form:
            {"type": "function", "function": {"name": <<tool_name>>}}.
        **kwargs: Any additional parameters to pass to the
            :class:`~langchain.runnable.Runnable` constructor.
    """
    formatted_tools = [convert_to_openai_tool(tool) for tool in tools]
    if tool_choice:
        if isinstance(tool_choice, str):
            # tool_choice is a tool/function name
            if tool_choice not in ("auto", "none", "any", "required"):
                tool_choice = {
                    "type": "function",
                    "function": {"name": tool_choice},
                }
            # 'any' is not natively supported by OpenAI API.
            # We support 'any' since other models use this instead of 'required'.
            if tool_choice == "any":
                tool_choice = "required"
        elif isinstance(tool_choice, bool):
            tool_choice = "required"
        elif isinstance(tool_choice, dict):
            tool_names = [
                formatted_tool["function"]["name"]
                for formatted_tool in formatted_tools
            ]
            if not any(
                tool_name == tool_choice["function"]["name"]
                for tool_name in tool_names
            ):
                raise ValueError(
                    f"Tool choice {tool_choice} was specified, but the only "
                    f"provided tools were {tool_names}."
                )
        else:
            raise ValueError(
                f"Unrecognized tool_choice type. Expected str, bool or dict. "
                f"Received: {tool_choice}"
            )
        kwargs["tool_choice"] = tool_choice
    return super().bind(tools=formatted_tools, **kwargs)
```

After replacing this part, the code runs well.

### System Info

Python 3.12.3
langchain==0.2.6
langchain-chroma==0.1.2
langchain-community==0.2.6
langchain-core==0.2.10
langchain-huggingface==0.0.3
langchain-openai==0.1.14
langchain-text-splitters==0.2.2
langserve==0.2.2
langsmith==0.1.82
Loss of function in component ChatZhipuAI
https://api.github.com/repos/langchain-ai/langchain/issues/23868/comments
8
2024-07-04T11:38:22Z
2024-07-11T03:20:44Z
https://github.com/langchain-ai/langchain/issues/23868
2,390,662,856
23,868
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code from langchain_community.embeddings import GPT4AllEmbeddings embeddings = GPT4AllEmbeddings() ### Error Message and Stack Trace (if applicable) KeyError Traceback (most recent call last) [<ipython-input-6-542601a9aeec>](https://localhost:8080/#) in <cell line: 1>() ----> 1 x = GPT4AllEmbeddings() 2 frames [/usr/local/lib/python3.10/dist-packages/langchain_community/embeddings/gpt4all.py](https://localhost:8080/#) in validate_environment(cls, values) 37 38 values["client"] = Embed4All( ---> 39 model_name=values["model_name"], 40 n_threads=values.get("n_threads"), 41 device=values.get("device"), KeyError: 'model_name' ### Description It can not find `model_name` from values. ### System Info langchain==0.2.6 langchain-community==0.2.6 langchain-core==0.2.11 langchain-text-splitters==0.2.2
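The stack trace points at the validator doing `values["model_name"]`, which raises `KeyError` whenever the key is absent from the validated values instead of falling back to a default. A minimal stdlib sketch of that failure mode (no langchain install required; the default model filename below is illustrative, borrowed from the GPT4All docs):

```python
# Minimal sketch of the failure mode: indexing values["model_name"] directly
# raises KeyError when the key is missing, while .get() with a fallback does not.
def broken_validate(values: dict) -> str:
    # mirrors: model_name=values["model_name"] in the reported validator
    return values["model_name"]

def safer_validate(values: dict) -> str:
    # mirrors the usual fix: values.get("model_name") with a default fallback
    return values.get("model_name", "all-MiniLM-L6-v2.gguf2.f16.gguf")

values_without_model = {"n_threads": None, "device": "cpu"}

try:
    broken_validate(values_without_model)
    raised = False
except KeyError:
    raised = True

print(raised)                                # True: reproduces the KeyError
print(safer_validate(values_without_model))  # falls back to the default name
```

Until this is patched, passing `model_name` explicitly (e.g. `GPT4AllEmbeddings(model_name="all-MiniLM-L6-v2.gguf2.f16.gguf")`) should avoid the failing code path entirely.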
Community: Keyerror `model_name` for GPT4AllEmbeddings
https://api.github.com/repos/langchain-ai/langchain/issues/23863/comments
3
2024-07-04T10:44:12Z
2024-07-05T18:09:02Z
https://github.com/langchain-ai/langchain/issues/23863
2,390,551,046
23,863
[ "langchain-ai", "langchain" ]
### URL https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.graph.Graph.html ### Checklist - [X] I added a very descriptive title to this issue. - [X] I included a link to the documentation page I am referring to (if applicable). ### Issue with current documentation: The code of the remove_node in https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/runnables/graph.py is as follow ```python def remove_node(self, node: Node) -> None: """Remove a node from the graphm and all edges connected to it.""" self.nodes.pop(node.id) self.edges = [ edge for edge in self.edges if edge.source != node.id and edge.target != node.id ] ``` graph is spelled graphm ### Idea or request for content: ```python def remove_node(self, node: Node) -> None: """Remove a node from the **graph** and all edges connected to it.""" self.nodes.pop(node.id) self.edges = [ edge for edge in self.edges if edge.source != node.id and edge.target != node.id ] ``` Should replace *graphm* with *graph*
DOC: remove_node typo in runnables/ graph
https://api.github.com/repos/langchain-ai/langchain/issues/23861/comments
1
2024-07-04T09:52:45Z
2024-07-05T17:07:02Z
https://github.com/langchain-ai/langchain/issues/23861
2,390,451,274
23,861
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code for content_part in cast(List[Dict], message.content): if content_part.get("type") == "text": content += f"\n{content_part['text']}" elif content_part.get("type") == "image_url": image_url = None temp_image_url = content_part.get("image_url") if isinstance(temp_image_url, str): image_url = content_part["image_url"] elif ( isinstance(temp_image_url, dict) and "url" in temp_image_url ): image_url = temp_image_url else: raise ValueError( "Only string image_url or dict with string 'url' " "inside content parts are supported." ) image_url_components = image_url.split(",") # Support data:image/jpeg;base64,<image> format # and base64 strings if len(image_url_components) > 1: images.append(image_url_components[1]) else: images.append(image_url_components[0]) ### Error Message and Stack Trace (if applicable) File "/Users/workspace/langchain-demo/venv/lib/python3.11/site-packages/langchain_community/chat_models/ollama.py", line 154, in _convert_messages_to_ollama_messages image_url_components = image_url.split(",") ^^^^^^^^^^^^^^^ AttributeError: 'dict' object has no attribute 'split' ### Description modify the code as follows: elif isinstance(temp_image_url, dict) and 'url' in temp_image_url: image_url = temp_image_url['url']" ### System Info System Information ------------------ > OS: Darwin > OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:14:38 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6020 > Python Version: 3.11.3 (v3.11.3:f3909b8bc8, Apr 4 2023, 20:12:10) [Clang 13.0.0 (clang-1300.0.29.30)] Package Information 
------------------- > langchain_core: 0.1.52 > langchain: 0.1.17 > langchain_community: 0.0.36 > langsmith: 0.1.83 > langchain_chatchat: 0.3.0.20240625.1 > langchain_experimental: 0.0.58 > langchain_openai: 0.0.6 > langchain_text_splitters: 0.0.2 > langchainhub: 0.1.14 > langgraph: 0.0.28 Packages not installed (Not Necessarily a Problem) --------------------------------------------------
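The proposed fix can be checked in isolation: the bug is that the dict branch keeps the whole `{"url": ...}` dict, so the later `image_url.split(",")` fails. A self-contained sketch of the corrected extraction logic (same content-part shapes the Ollama chat model accepts):

```python
# Self-contained sketch of the corrected extraction logic proposed above.
def extract_image_b64(content_part: dict) -> str:
    temp_image_url = content_part.get("image_url")
    if isinstance(temp_image_url, str):
        image_url = temp_image_url
    elif isinstance(temp_image_url, dict) and "url" in temp_image_url:
        # the fix: pull the string out of the dict instead of keeping the dict
        image_url = temp_image_url["url"]
    else:
        raise ValueError(
            "Only string image_url or dict with string 'url' "
            "inside content parts are supported."
        )
    # supports "data:image/jpeg;base64,<image>" as well as bare base64 strings
    components = image_url.split(",")
    return components[1] if len(components) > 1 else components[0]

print(extract_image_b64({"type": "image_url", "image_url": "data:image/jpeg;base64,AAAA"}))  # AAAA
print(extract_image_b64({"type": "image_url", "image_url": {"url": "BBBB"}}))                # BBBB
```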
ollama.py encountered an error while retrieving images from multimodal data
https://api.github.com/repos/langchain-ai/langchain/issues/23859/comments
0
2024-07-04T09:17:56Z
2024-07-04T09:20:26Z
https://github.com/langchain-ai/langchain/issues/23859
2,390,368,372
23,859
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code ```javacript // ===tavily request=== { query: '"{\\"input\\":\\"2023 NBA final winner\\"}"', max_results: 1, api_key: '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' } // ===tavily response=== { query: '"{\\"input\\":\\"2023 NBA final winner\\"}"', follow_up_questions: null, answer: null, images: null, results: [], response_time: 1.28 } ``` ### Error Message and Stack Trace (if applicable) _No response_ ### Description https://github.com/langchain-ai/langchainjs/blob/1fddf296f922dcaa362a90c8fe90b4bfd84b6c3e/libs/langchain-community/src/retrievers/tavily_search_api.ts#L97 the tavily `search` api is not reliable ### System Info @langchain/community": "^0.2.13
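The `query` field in the request above looks like a tool-input object that was JSON-serialized twice before reaching the Tavily API, instead of the plain search string the API expects — which would explain the empty results. A stdlib reconstruction of that shape:

```python
import json

# Reconstruct the malformed query shape seen in the request above.
plain_query = "2023 NBA final winner"
tool_input = {"input": plain_query}

inner = json.dumps(tool_input, separators=(",", ":"))   # serialized once
double_serialized = json.dumps(inner)                   # serialized again
print(double_serialized)  # matches the observed query: "{\"input\":\"2023 NBA final winner\"}"

# Unwrapping twice recovers the usable search string:
recovered = json.loads(json.loads(double_serialized))["input"]
print(recovered)  # 2023 NBA final winner
```

This suggests the retriever (or the agent wiring around it) is passing the serialized tool-call arguments through as the query rather than extracting the `input` string first.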
TavilySearchResults in Agents Quick Start always return empty result
https://api.github.com/repos/langchain-ai/langchain/issues/23858/comments
0
2024-07-04T09:02:13Z
2024-07-04T09:02:58Z
https://github.com/langchain-ai/langchain/issues/23858
2,390,334,797
23,858
[ "langchain-ai", "langchain" ]
### URL https://python.langchain.com/v0.2/docs/how_to/output_parser_fixing/ ### Checklist - [X] I added a very descriptive title to this issue. - [X] I included a link to the documentation page I am referring to (if applicable). ### Issue with current documentation: I followed the example codes on this page, and I got error as: ``` KeyError: "Input to PromptTemplate is missing variables {'completion'}. Expected: ['completion', 'error', 'instructions'] Received: ['instructions', 'input', 'error']" ``` LangChain version: ``` langchain 0.2.6 pyhd8ed1ab_0 conda-forge langchain-community 0.2.6 pyhd8ed1ab_0 conda-forge langchain-core 0.2.11 pyhd8ed1ab_0 conda-forge langchain-openai 0.1.14 pyhd8ed1ab_0 conda-forge langchain-text-splitters 0.2.2 pyhd8ed1ab_0 conda-forge ``` ### Idea or request for content: _No response_
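The `KeyError` reports a mismatch between the variables the retry prompt template declares and the keys the caller supplies (`completion` expected, `input` received). A stdlib illustration of that mismatch using `string.Formatter` — the template text here is illustrative, not the exact one shipped in langchain:

```python
from string import Formatter

# Extract the placeholder names a template actually requires.
template = "Instructions: {instructions}\nCompletion: {completion}\nError: {error}"
expected = {name for _, name, _, _ in Formatter().parse(template) if name}

# What the failing call supplied, per the error message:
received = {"instructions", "input", "error"}

missing = expected - received
print(sorted(expected))  # ['completion', 'error', 'instructions']
print(missing)           # {'completion'} -- the variable the KeyError names
```

Renaming the supplied kwarg from `input` to `completion` (or aligning the template's variables with what the chain passes) resolves this class of error.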
DOC: <Issue related to /v0.2/docs/how_to/output_parser_fixing/>
https://api.github.com/repos/langchain-ai/langchain/issues/23856/comments
2
2024-07-04T07:16:26Z
2024-07-23T11:20:02Z
https://github.com/langchain-ai/langchain/issues/23856
2,390,132,148
23,856
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code Not applicable ### Error Message and Stack Trace (if applicable) _No response_ ### Description There is no HNSW index in the pgvector vector store: https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/vectorstores/pgvector.py Unlike the pgembedding vectore store: https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/vectorstores/pgembedding.py#L192 ### System Info Not applicable
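Until `pgvector.py` creates an HNSW index itself, the DDL can be issued directly against the database (requires the pgvector extension >= 0.5.0). A sketch that builds the statement — the table and column names below assume the defaults used by the langchain pgvector store and cosine distance, so adjust them to your schema before executing with your DB driver:

```python
# Build the CREATE INDEX statement for a manual HNSW index workaround.
# Assumptions: pgvector >= 0.5.0, default langchain table/column names,
# cosine distance; m / ef_construction are pgvector's documented defaults.
table, column = "langchain_pg_embedding", "embedding"
create_hnsw_index = (
    f"CREATE INDEX ON {table} "
    f"USING hnsw ({column} vector_cosine_ops) "
    f"WITH (m = 16, ef_construction = 64);"
)
print(create_hnsw_index)
# then e.g. connection.execute(text(create_hnsw_index)) with SQLAlchemy
```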
No HNSW index in pgvector vector store
https://api.github.com/repos/langchain-ai/langchain/issues/23853/comments
4
2024-07-04T05:04:44Z
2024-07-20T09:22:33Z
https://github.com/langchain-ai/langchain/issues/23853
2,389,946,645
23,853
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code I use the following code to load my documents. ```python def load_documents(directory): SOURCE_DOCUMENTS_DIR = directory SOURCE_DOCUMENTS_FILTER = "**/*.txt" loader = DirectoryLoader(f"{SOURCE_DOCUMENTS_DIR}", glob=SOURCE_DOCUMENTS_FILTER, show_progress=True, use_multithreading=True) print(f"Loading {SOURCE_DOCUMENTS_DIR} directory: ", end="") data = loader.load() print(f"Splitting {len(data)} documents") return data ``` ### Error Message and Stack Trace (if applicable) _No response_ ### Description The following is a line from a text document I am loading. This is how it looks in Notepad. **Document Name: https://www.kinecta.org//about-us/executive-staff** When I load the document using DirectoryLoader (I load a list of other docs as well), and print out the doc.page_content, I get the following: page_content='Document Name: https://www.kinecta.org//about\n\nus/executive\n\nstaff\n\n' **As you can see, it converted the dashes into new line characters. Any idea what this is?** This is the code I use to load my documents. ### System Info Python 3.11 Langchain 0.1.12
DirectoryLoader converting characters randomly into new line characters?
https://api.github.com/repos/langchain-ai/langchain/issues/23849/comments
0
2024-07-04T00:37:40Z
2024-07-04T00:40:06Z
https://github.com/langchain-ai/langchain/issues/23849
2,389,708,027
23,849
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code import langchain ### Error Message and Stack Trace (if applicable) No module named 'langchain_core' ### Description Starting this morning we are getting different kind of errors while trying to use langchain in Vertex AI, not quite sure what is the root of problem, Google or Langchain. ### System Info Python 3.10.12
No module named 'langchain_core' - Langchain in Vertex AI
https://api.github.com/repos/langchain-ai/langchain/issues/23838/comments
8
2024-07-03T19:34:38Z
2024-07-03T20:46:26Z
https://github.com/langchain-ai/langchain/issues/23838
2,389,312,087
23,838
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code ``` import langchain from langchain_openai import ChatOpenAI from langchain.chains.conversation.base import ConversationChain from langchain.memory import ConversationBufferMemory from langchain_community.cache import SQLiteCache from langchain_community.callbacks import get_openai_callback import tiktoken,sys from langchain_core.globals import set_llm_cache, get_llm_cache cache = SQLiteCache(database_path=".langchain.db") llm = ChatOpenAI(model_name='gpt-3.5-turbo', openai_api_key=sys.argv[1]) memory = ConversationBufferMemory() conversation = ConversationChain( llm=llm, memory = memory, verbose=True ) with get_openai_callback() as cb: input="Hi, my name is Andrew" tokenizer=tiktoken.get_encoding("cl100k_base") toks=len(tokenizer.encode(text=input)) print(toks) costs={"tokens used":0, "prompt tokens":0, "completion tokens":0, "successful requests":0, "cost (usd)":0} result = conversation.predict(input=input) costs['tokens used']=cb.total_tokens costs['prompt tokens'] = cb.prompt_tokens costs['completion tokens'] = cb.completion_tokens costs['successful requests'] = cb.successful_requests costs['cost (used)'] = cb.total_cost print(result) print(costs) res=conversation.predict(input="What is 1+1?") costs['tokens used'] = cb.total_tokens costs['prompt tokens'] = cb.prompt_tokens costs['completion tokens'] = cb.completion_tokens costs['successful requests'] = cb.successful_requests costs['cost (used)'] = cb.total_cost print(res) print(costs) res=conversation.predict(input="What is my name?") costs['tokens used'] = cb.total_tokens 
costs['prompt tokens'] = cb.prompt_tokens costs['completion tokens'] = cb.completion_tokens costs['successful requests'] = cb.successful_requests costs['cost (used)'] = cb.total_cost print(res) print(costs) set_llm_cache(cache) res=conversation.predict(input="What is my name?") costs['tokens used'] = cb.total_tokens costs['prompt tokens'] = cb.prompt_tokens costs['completion tokens'] = cb.completion_tokens costs['successful requests'] = cb.successful_requests costs['cost (used)'] = cb.total_cost print(res) print(costs) costs['tokens used'] = cb.total_tokens costs['prompt tokens'] = cb.prompt_tokens costs['completion tokens'] = cb.completion_tokens costs['successful requests'] = cb.successful_requests costs['cost (used)'] = cb.total_cost print(res) print(costs) set_llm_cache(None) res=conversation.predict(input="What is my name?") costs['tokens used'] = cb.total_tokens costs['prompt tokens'] = cb.prompt_tokens costs['completion tokens'] = cb.completion_tokens costs['successful requests'] = cb.successful_requests costs['cost (used)'] = cb.total_cost print(res) print(costs) costs['tokens used'] = cb.total_tokens costs['prompt tokens'] = cb.prompt_tokens costs['completion tokens'] = cb.completion_tokens costs['successful requests'] = cb.successful_requests costs['cost (used)'] = cb.total_cost print(res) print(costs) print("end") ``` ### Error Message and Stack Trace (if applicable) ``` Traceback (most recent call last): File "/home/zz/Work/zz-notebooks/autogen/src/autogen/sandbox/example_langchain.py", line 56, in <module> res=conversation.predict(input="What is my name?") File "/home/zz/.cache/pypoetry/virtualenvs/autogen-DrvUaj4U-py3.10/lib/python3.10/site-packages/langchain/chains/llm.py", line 317, in predict return self(kwargs, callbacks=callbacks)[self.output_key] File "/home/zz/.cache/pypoetry/virtualenvs/autogen-DrvUaj4U-py3.10/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 168, in warning_emitting_wrapper return wrapped(*args, **kwargs) 
File "/home/zz/.cache/pypoetry/virtualenvs/autogen-DrvUaj4U-py3.10/lib/python3.10/site-packages/langchain/chains/base.py", line 383, in __call__ return self.invoke( File "/home/zz/.cache/pypoetry/virtualenvs/autogen-DrvUaj4U-py3.10/lib/python3.10/site-packages/langchain/chains/base.py", line 166, in invoke raise e File "/home/zz/.cache/pypoetry/virtualenvs/autogen-DrvUaj4U-py3.10/lib/python3.10/site-packages/langchain/chains/base.py", line 156, in invoke self._call(inputs, run_manager=run_manager) File "/home/zz/.cache/pypoetry/virtualenvs/autogen-DrvUaj4U-py3.10/lib/python3.10/site-packages/langchain/chains/llm.py", line 127, in _call response = self.generate([inputs], run_manager=run_manager) File "/home/zz/.cache/pypoetry/virtualenvs/autogen-DrvUaj4U-py3.10/lib/python3.10/site-packages/langchain/chains/llm.py", line 139, in generate return self.llm.generate_prompt( File "/home/zz/.cache/pypoetry/virtualenvs/autogen-DrvUaj4U-py3.10/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 681, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) File "/home/zz/.cache/pypoetry/virtualenvs/autogen-DrvUaj4U-py3.10/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 538, in generate raise e File "/home/zz/.cache/pypoetry/virtualenvs/autogen-DrvUaj4U-py3.10/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 528, in generate self._generate_with_cache( File "/home/zz/.cache/pypoetry/virtualenvs/autogen-DrvUaj4U-py3.10/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 712, in _generate_with_cache llm_string = self._get_llm_string(stop=stop, **kwargs) File "/home/zz/.cache/pypoetry/virtualenvs/autogen-DrvUaj4U-py3.10/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 455, in _get_llm_string _cleanup_llm_representation(serialized_repr, 1) File 
"/home/zz/.cache/pypoetry/virtualenvs/autogen-DrvUaj4U-py3.10/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 1239, in _cleanup_llm_representation _cleanup_llm_representation(value, depth + 1) File "/home/zz/.cache/pypoetry/virtualenvs/autogen-DrvUaj4U-py3.10/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 1229, in _cleanup_llm_representation if serialized["type"] == "not_implemented" and "repr" in serialized: TypeError: string indices must be integers ``` ### Description The fourth call to the LLM will cause the error above. Note that this is after calling 'set_llm_cache' ### System Info langchain 0.2.6 langchain-community 0.2.6 langchain-core 0.2.11 langchain-openai 0.1.14
generate/predict fails when using sql cache
https://api.github.com/repos/langchain-ai/langchain/issues/23824/comments
13
2024-07-03T16:02:38Z
2024-07-04T19:07:34Z
https://github.com/langchain-ai/langchain/issues/23824
2,388,986,993
23,824
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code from langchain.document_loaders import WebBaseLoader loader = WebBaseLoader('https://m.vk.com/support?category_id=2') data = loader.load() print(data[0]) ### Error Message and Stack Trace (if applicable) Your browser is out of dateThis may cause VK to work slowly or experience errors.Update your browser or install one of the following:ChromeOperaFirefox ### Description I'm trying to parse all data from "'https://m.vk.com/support" (including info from sublinks, to construct RAG). But see the output with the content: "Your browser is out of dateThis may cause VK to work slowly or experience errors.Update your browser or install one of the following:ChromeOperaFirefox" ### System Info langchain==0.2.6 langchain-community==0.2.6 langchain-core==0.2.11 langchain-text-splitters==0.2.2 windows python 3.10
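The "browser is out of date" page suggests the site is sniffing the request's User-Agent and serving a fallback page to the default Python client. A stdlib sketch of the underlying fix — send a browser-like User-Agent header. (`WebBaseLoader` reportedly accepts a `header_template` dict for the same purpose; treat that parameter name as an assumption to verify against your installed version.)

```python
import urllib.request

# Attach a browser-like User-Agent so the site serves real content
# instead of its "outdated browser" fallback page.
browser_headers = {
    "User-Agent": (
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
        "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0 Safari/537.36"
    )
}

req = urllib.request.Request(
    "https://m.vk.com/support?category_id=2", headers=browser_headers
)
print(req.get_header("User-agent"))  # confirms the header is attached (no request is sent)
```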
Get warning about browser instead of real info in WebBaseLoader
https://api.github.com/repos/langchain-ai/langchain/issues/23813/comments
1
2024-07-03T13:33:51Z
2024-07-04T10:44:27Z
https://github.com/langchain-ai/langchain/issues/23813
2,388,652,128
23,813
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the [LangGraph](https://langchain-ai.github.io/langgraph/)/LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangGraph/LangChain rather than my code. - [X] I am sure this is better as an issue [rather than a GitHub discussion](https://github.com/langchain-ai/langgraph/discussions/new/choose), since this is a LangGraph bug and not a design question. ### Example Code ```python Following the example below with locally hosted llama3 70B instruct model with ChatOpenAI https://langchain-ai.github.io/langgraph/how-tos/create-react-agent/ Similar issue with the following example: from langchain_core.tools import tool from langgraph.graph import MessageGraph from langgraph.prebuilt import ToolNode, tools_condition @tool def divide(a: float, b: float) -> int: """Return a / b.""" return a / b llm = ChatOpenAI( model_name = 'Meta-Llama-3-70B-Instruct', base_url = "http://172.17.0.8:xxxx/v1/", api_key = "EMPTY", temperature=0).bind( response_format={"type": "json_object"} ) tools = [divide] graph_builder = MessageGraph() graph_builder.add_node("tools", ToolNode(tools)) graph_builder.add_node("chatbot", llm.bind_tools(tools)) graph_builder.add_edge("tools", "chatbot") graph_builder.add_conditional_edges( ... "chatbot", tools_condition ... 
) graph_builder.set_entry_point("chatbot") graph = graph_builder.compile() graph.invoke([("user", "What's 329993 divided by 13662?")]) ``` ### Error Message and Stack Trace (if applicable) ```shell BadRequestError: Error code: 400 - {'object': 'error', 'message': "[{'type': 'extra_forbidden', 'loc': ('body', 'tools'), 'msg': 'Extra inputs are not permitted', 'input': [{'type': 'function', 'function': {'name': 'get_weather', 'description': 'Use this to get weather information.', 'parameters': {'type': 'object', 'properties': {'city': {'enum': ['nyc', 'sf'], 'type': 'string'}}, 'required': ['city']}}}], 'url': 'https://errors.pydantic.dev/2.7/v/extra_forbidden'}]", 'type': 'BadRequestError', 'param': None, 'code': 400} ``` ### Description I have tried instantiating ChatOpenAI as follows: llm = ChatOpenAI( model_name = 'Meta-Llama-3-70B-Instruct', base_url = "http://172.17.0.8:xxxx/v1/", api_key = "EMPTY", temperature=0) llm = ChatOpenAI( model_name = 'Meta-Llama-3-70B-Instruct', base_url = "http://172.17.0.8:xxxx/v1/", api_key = "EMPTY", temperature=0).bind( response_format={"type": "json_object"} ) ### System Info Meta's llama 3 70B Instruct locally hosted on vllm. ChatOpenAI works fine for other application for example RAG and LCEL
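The 400 above indicates this vLLM server build rejects the OpenAI `tools` field outright (`'extra_forbidden'`). A common interim workaround, until the server supports tool calling, is to inline the tool schema into the plain prompt and ask the model to answer with a JSON "tool call". This is an illustrative sketch of that prompt construction, not a drop-in replacement for `bind_tools`:

```python
import json

# Embed the tool schema in the prompt text instead of the "tools" field.
divide_schema = {
    "name": "divide",
    "description": "Return a / b.",
    "parameters": {
        "type": "object",
        "properties": {"a": {"type": "number"}, "b": {"type": "number"}},
        "required": ["a", "b"],
    },
}

prompt = (
    "You can call this tool by answering with JSON "
    '{"name": ..., "arguments": {...}}:\n'
    + json.dumps(divide_schema, indent=2)
    + "\n\nQuestion: What's 329993 divided by 13662?"
)
print("divide" in prompt)  # True: the schema travels in the plain prompt body
```

The model's JSON reply then has to be parsed and dispatched manually on the client side, which is what `bind_tools` would otherwise automate.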
BadRequestError with vllm locally hosted Llama3 70B Model
https://api.github.com/repos/langchain-ai/langchain/issues/23814/comments
2
2024-07-03T12:26:40Z
2024-07-04T06:06:13Z
https://github.com/langchain-ai/langchain/issues/23814
2,388,682,742
23,814
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code ```python from dotenv import load_dotenv from langchain_core.runnables import Runnable, RunnableConfig, chain from langchain_core.tracers.context import tracing_v2_enabled from phoenix.trace.langchain import LangChainInstrumentor LangChainInstrumentor().instrument() load_dotenv() @chain def inner_chain(input): print("inner chain") return {"inner": input} @chain def outer_chain(input): print("outer chain") return inner_chain.invoke(input={"inner": "foo_sync"}) @chain async def outer_chain_async(input): print("outer chain async") return await inner_chain.ainvoke(input={"inner": "foo_async"}) async def main_async(): # call async the outsider that inside has a sync call await outer_chain.ainvoke(input={"outer": "foo"}) # call async the outsider that inside has a async call await outer_chain_async.ainvoke(input={"outer": "foo_async_outer"}) def main(): outer_chain.invoke(input={"outer": "foo"}) if __name__ == "__main__": with tracing_v2_enabled(project_name="test"): # call sync main() # call async import asyncio asyncio.run(main_async()) ``` ### Error Message and Stack Trace (if applicable) The inner chain should be attached as child to the outer chain. Instead the async calls are shown as independent traces. 
Arize-phoenix traces: ![image](https://github.com/langchain-ai/langchain/assets/11597393/3e4e7029-b53b-40d0-bb8d-7f03b444962a) Langsmith traces: ![image](https://github.com/langchain-ai/langchain/assets/11597393/8d9448d6-fb13-4f85-adeb-918688421ce7) ### Description I am analysing the traces generated with `invoke` and `ainvoke` function on a chain that is innerly calling another chain. For the sync run, the inner chain is properly traced under the outer chain. Instead when the `ainvoke` is called on the outer chain, the inner chain is traced as separate. I would expect that both `invoke` and `ainvoke` produce the same traces. I think the problem might be in the `run_in_executor` function that is called in the `Runnable.ainvoke`, that may be missing some parameters to the self.invoke about the context of the parent chain. I tested it with two different tracing solutions: - arize-phoenix - langsmith And the issue exists with both of them, so I think the issue is on the side of langchain or opentelemetry. Environment requirements: ```text langchain arize-phoenix python-dotenv langsmith ``` Environment variables: ``` LANGCHAIN_TRACING_V2=true LANGCHAIN_ENDPOINT="https://api.smith.langchain.com" LANGCHAIN_API_KEY="<my_key_here>" LANGCHAIN_PROJECT="test" ``` ### System Info `python -m langchain_core.sys_info`: ```text System Information ------------------ > OS: Linux > OS Version: #1 SMP Fri Mar 29 23:14:13 UTC 2024 > Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] Package Information ------------------- > langchain_core: 0.2.11 > langchain: 0.2.6 > langsmith: 0.1.83 > langchain_text_splitters: 0.2.2 Packages not installed (Not Necessarily a Problem) -------------------------------------------------- The following packages were not found: > langgraph > langserve ```
Nested ainvoke does not show as child in tracing
https://api.github.com/repos/langchain-ai/langchain/issues/23811/comments
2
2024-07-03T12:15:40Z
2024-07-04T07:10:03Z
https://github.com/langchain-ai/langchain/issues/23811
2,388,478,996
23,811
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain.js documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain.js rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code def merge_dicts(left: Dict[str, Any], right: Dict[str, Any]) -> Dict[str, Any]: """Merge two dicts, handling specific scenarios where a key exists in both dictionaries but has a value of None in 'left'. In such cases, the method uses the value from 'right' for that key in the merged dictionary. Example: If left = {"function_call": {"arguments": None}} and right = {"function_call": {"arguments": "{\n"}} then, after merging, for the key "function_call", the value from 'right' is used, resulting in merged = {"function_call": {"arguments": "{\n"}}. """ merged = left.copy() for right_k, right_v in right.items(): if right_k not in merged: merged[right_k] = right_v elif right_v is not None and merged[right_k] is None: merged[right_k] = right_v elif right_v is None: continue elif type(merged[right_k]) != type(right_v): raise TypeError( f'additional_kwargs["{right_k}"] already exists in this message,' " but with a different type." ) elif isinstance(merged[right_k], str): merged[right_k] += right_v elif isinstance(merged[right_k], dict): merged[right_k] = merge_dicts(merged[right_k], right_v) elif isinstance(merged[right_k], list): merged[right_k] = merge_lists(merged[right_k], right_v) # added this for integer elif isinstance(merged[right_k], int): merged[right_k] += right_v elif merged[right_k] == right_v: continue else: raise TypeError( f"Additional kwargs key {right_k} already exists in left dict and " f"value has unsupported type {type(merged[right_k])}." 
) return merged ### Error Message and Stack Trace (if applicable) Error: Additional kwargs key prompt_token_count already exists in left dict and value has unsupported type <class 'int'>. ### Description Im trying to run my agents on langchain using gemini 1.5 pro as the LLM, but it is running into the above error, because in the merge_dict() function it is not expecting an integer instance and for prompt_token_count gemini returns it in integer form, also this happens only if left and right dict have different values of prompt_token_count. Manually I have added the fix and it works well, but requesting to kindly include it for us folks who are using gemini-1.5-pro, with other LLMs (non google LLMs) have not faced the same issue. Only with gemini. ### System Info windows and linux langchain-core==0.2.11 python
Additional kwargs key prompt_token_count already exists in left dict and value has unsupported type <class 'int'> in langchain-core/utils/_merge.py merge_dict() function when running with Google gemini-1.5-pro
https://api.github.com/repos/langchain-ai/langchain/issues/23827/comments
4
2024-07-03T12:07:04Z
2024-07-04T10:35:42Z
https://github.com/langchain-ai/langchain/issues/23827
2,389,067,343
23,827
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code ```python from langchain_community.llms import HuggingFaceEndpoint llm = HuggingFaceEndpoint(endpoint_url="https://mixtral.ai.me", huggingfacehub_api_token=<token>) ### Error Message and Stack Trace (if applicable) (can't paste because this issue is from an airgapped environment) ``` File /usr/local/lib/python3.10/dist-packages/pydantic/v1/main.py:341, in BaseModel.__init__ ValidationError: 1 validation error for HuggingFaceEndpoint __root__ Could not authenticate with huggingface_hub. Please check your API token. (type=value_error) ``` ### Description I am trying to use HuggingFaceEndpoint in order to query my locally hosted mixtral. I have a proxy in front of the model endpoint that accepts Bearer API tokens. The tokens and API work in other places (e.g. Postman) but not with langchain. ### System Info langchain==0.2.6 langchain-community==0.0.38 langchain-core==0.1.52 langchain-text-splitters==0.2.2 platform: Ubuntu 22.04 Python version: 3.10.12 The runtime is actually Nvidia's Pytorch container from the NGC catalog, tag 24.01. The environment is airgapped, and we go through a pipeline in order to bring in new library versions.
Could not authenticate with huggingface_hub.
https://api.github.com/repos/langchain-ai/langchain/issues/23808/comments
2
2024-07-03T09:51:14Z
2024-07-24T07:00:16Z
https://github.com/langchain-ai/langchain/issues/23808
2,388,190,250
23,808
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code import uuid from langchain.retrievers.multi_vector import MultiVectorRetriever from langchain.storage import InMemoryStore from langchain_community.vectorstores import Chroma from langchain_core.documents import Document from langchain_community.embeddings import GPT4AllEmbeddings model_name = "all-MiniLM-L6-v2.gguf2.f16.gguf" gpt4all_kwargs = {'allow_download': 'True'} # The vectorstore to use to index the child chunks vectorstore = Chroma( collection_name="summaries", embedding_function=GPT4AllEmbeddings( model_name=model_name, gpt4all_kwargs=gpt4all_kwargs ) ) # The storage layer for the parent documents store = InMemoryStore() id_key = "doc_id" # The retriever (empty to start) print("empty to start") retriever = MultiVectorRetriever( vectorstore=vectorstore, docstore=store, id_key=id_key, ) # Add texts print("Add texts") doc_ids = [str(uuid.uuid4()) for _ in texts] summary_texts = [ Document(page_content=s, metadata={id_key: doc_ids[i]}) for i, s in enumerate(text_summaries) ] retriever.vectorstore.add_documents(summary_texts) retriever.docstore.mset(list(zip(doc_ids, texts))) # Add tables print("Add tables") table_ids = [str(uuid.uuid4()) for _ in tables] summary_tables = [ Document(page_content=s, metadata={id_key: table_ids[i]}) for i, s in enumerate(table_summaries) ] retriever.vectorstore.add_documents(summary_tables) retriever.docstore.mset(list(zip(table_ids, tables))) ### Error Message and Stack Trace (if applicable) _No response_ ### Description When running retriever.vectorstore.add_documents(summary_texts)
and retriever.docstore.mset(list(zip(doc_ids, texts))) in a Jupyter notebook, the kernel crashes without any error output, and there is no memory overflow. ![134d081afabba25c2401496679d81b3](https://github.com/langchain-ai/langchain/assets/166845129/80829250-402e-4e68-b22d-574ba6264064) ### System Info Package Information ------------------- > langchain_core: 0.2.10 > langchain: 0.2.6 > langchain_community: 0.2.6 > langsmith: 0.1.82 > langchain_text_splitters: 0.2.2 > langchainhub: 0.1.20
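The multi-vector wiring in the example links each summary to its parent document through the `doc_id` metadata key. The pattern itself can be illustrated with plain dicts standing in for Chroma and `InMemoryStore` (a sketch of the concept, not the library's implementation):

```python
import uuid

texts = ["parent doc A", "parent doc B"]
text_summaries = ["summary of A", "summary of B"]

# one stable id per parent document
doc_ids = [str(uuid.uuid4()) for _ in texts]

# docstore side: id -> full parent document (what InMemoryStore.mset stores)
docstore = dict(zip(doc_ids, texts))

# vectorstore side: only the summaries are indexed, each tagged with its parent id
summaries = [
    {"page_content": s, "doc_id": doc_ids[i]}
    for i, s in enumerate(text_summaries)
]

# retrieval: a summary matches first, then the parent is fetched by id
hit = summaries[1]
print(docstore[hit["doc_id"]])  # parent doc B
```

If this plain-Python version runs but the notebook still dies on `add_documents`, the crash is likely inside the native embedding step (GPT4All), not in the linking logic.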
When running cookbook/Semi_Structured_RAG.ipynb, appears kernel died
https://api.github.com/repos/langchain-ai/langchain/issues/23802/comments
0
2024-07-03T08:43:35Z
2024-07-03T08:50:29Z
https://github.com/langchain-ai/langchain/issues/23802
2,388,044,064
23,802
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code If I create a vectorstore with: ```python from langchain_community.embeddings import FakeEmbeddings from langchain_chroma.vectorstores import Chroma vectorstore = Chroma.from_documents(documents=splits, embedding = FakeEmbeddings(size=1352), collection_name="colm", persist_directory="my_dir") ``` The only way to retrieve the persisted collection from `my_dir` is: ```python vectorstore = Chroma.from_documents(documents=splits, embedding = FakeEmbeddings(size=1352), collection_name="colm") ``` OR ```python vectorstore = Chroma.from_text(text=..., embedding = FakeEmbeddings(size=1352), collection_name="colm") ``` Related issues: https://github.com/langchain-ai/langchain/issues/22361 https://github.com/langchain-ai/langchain/issues/20866 https://github.com/langchain-ai/langchain/issues/19807#issuecomment-2028610882 ### Error Message and Stack Trace (if applicable) _No response_ ### Description I want dedicated `classmethod` just like this: https://github.com/langchain-ai/langchain/blob/27aa4d38bf93f3eef7c46f65cc0d0ef3681137eb/libs/partners/qdrant/langchain_qdrant/vectorstores.py#L1351 that returns me an instance of `Chroma` without inserting any texts ### System Info langchain-chroma 0.1.1
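For reference, the plain `Chroma(collection_name=..., persist_directory=..., embedding_function=...)` constructor should generally be able to reattach to a persisted collection without inserting anything; the requested classmethod would make that explicit. In miniature, the requested shape looks like this (`MiniStore` and `from_existing_collection` are hypothetical names, not langchain-chroma API):

```python
class MiniStore:
    _collections = {}  # stand-in for the on-disk persistence layer

    def __init__(self, collection_name):
        # attach to the named collection, creating it only if missing
        self.collection = MiniStore._collections.setdefault(collection_name, [])

    @classmethod
    def from_documents(cls, docs, collection_name):
        # today's only documented path: construct AND insert
        store = cls(collection_name)
        store.collection.extend(docs)
        return store

    @classmethod
    def from_existing_collection(cls, collection_name):
        # the requested path: construct WITHOUT inserting
        return cls(collection_name)


MiniStore.from_documents(["doc1", "doc2"], "colm")
store = MiniStore.from_existing_collection("colm")
print(store.collection)  # ['doc1', 'doc2']
```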
partner[chroma]: Not able to load persisted collection without calling `from_documents`
https://api.github.com/repos/langchain-ai/langchain/issues/23797/comments
3
2024-07-03T07:44:25Z
2024-07-05T04:59:25Z
https://github.com/langchain-ai/langchain/issues/23797
2,387,920,544
23,797
[ "langchain-ai", "langchain" ]
### URL _No response_ ### Checklist - [X] I added a very descriptive title to this issue. - [X] I included a link to the documentation page I am referring to (if applicable). ### Issue with current documentation: **Description**: I recently discovered a very useful feature in the LangChain CLI that allows templates to be installed from a specific subdirectory within a repository using a URL fragment, like so: `git+ssh://git@github.com/norisuke3/llm.git#subdirectory=templates/japanese-speak` However, I was unable to find any documentation on this feature in the current LangChain documentation, and I had to dig into [the source code to find out how to use it](https://github.com/langchain-ai/langchain/blob/master/libs/cli/langchain_cli/utils/git.py#L42). This feature is incredibly useful for managing multiple templates in a single repository and would greatly benefit other users if it were documented. **Proposed Solution**: Add a section in the documentation that explains how to install templates from a specific subdirectory within a repository using the URL fragment notation. A good place for this description could be a new page under the Additional Resources section of [this page](https://github.com/langchain-ai/langchain/blob/master/templates/README.md). **Example**: langchain app add "git+ssh://git@github.com/norisuke3/llm.git#subdirectory=templates/japanese-speak" **Additional Context**: This feature allows users to manage and install multiple templates from a single repository, which is a common use case for organizing LangChain templates. Including this in the documentation would improve user experience and reduce the need for source code exploration. ### Idea or request for content: _No response_
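The fragment notation itself is plain URL syntax, so the subdirectory can be recovered with the standard library. A rough sketch of the parsing (illustrative only; the real logic lives in `libs/cli/langchain_cli/utils/git.py`):

```python
from urllib.parse import parse_qs


def parse_template_ref(url: str):
    """Split a git URL into the repo part and the #subdirectory= fragment."""
    repo, _, fragment = url.partition("#")
    params = parse_qs(fragment)
    subdirectory = params.get("subdirectory", [None])[0]
    return repo, subdirectory


repo, subdir = parse_template_ref(
    "git+ssh://git@github.com/norisuke3/llm.git#subdirectory=templates/japanese-speak"
)
print(repo)    # git+ssh://git@github.com/norisuke3/llm.git
print(subdir)  # templates/japanese-speak
```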
DOC: Documenting the use of subdirectory for template installation with 'langchain app add'
https://api.github.com/repos/langchain-ai/langchain/issues/23777/comments
1
2024-07-02T20:17:37Z
2024-07-02T21:11:51Z
https://github.com/langchain-ai/langchain/issues/23777
2,387,090,774
23,777
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code ``` import os from dotenv import load_dotenv from langchain import hub from langchain.agents import AgentExecutor, create_react_agent from langchain.tools import BaseTool from langchain_openai import ChatOpenAI load_dotenv() class WeatherTool(BaseTool): def __init__(self): super().__init__( name="fetch_current_weather", description="get current weather based location", ) def _run(self, location: str): answer = "It's raining" return answer def main(): prompt = hub.pull("hwchase17/react") tools = [WeatherTool()] llm = ChatOpenAI( model="gpt-4o", temperature=0.0, openai_api_key=os.getenv("OPENAI_API_KEY") ) agent = create_react_agent(llm=llm, tools=tools, prompt=prompt) agent_executor = AgentExecutor( agent=agent, tools=tools, return_intermediate_stpes=True, handle_parsing_errors=True, memory=None, max_iterations=2, verbose=True, ) query = "What is the weather today in London?" result1 = agent_executor.invoke({"input": query}) # LangChain bug - return_intermediate_steps not being set correctly during instantiation agent_executor.return_intermediate_steps = True result2 = agent_executor.invoke({"input": query}) print(f"Keys in result1: {result1.keys()}") print(f"Keys in result2: {result2.keys()}") if __name__ == "__main__": main() ``` ### Error Message and Stack Trace (if applicable) Output is: ``` > Entering new AgentExecutor chain... To answer the question about the weather in London, I need to fetch the current weather data for that location. Action: fetch_current_weather Action Input: LondonIt's rainingI now know the final answer. 
Final Answer: The weather today in London is rainy. > Finished chain. > Entering new AgentExecutor chain... I need to find the current weather in London. Action: fetch_current_weather Action Input: LondonIt's rainingI now know the final answer. Final Answer: The weather today in London is raining. > Finished chain. Keys in result1: dict_keys(['input', 'output']) Keys in result2: dict_keys(['input', 'output', 'intermediate_steps']) ``` ### Description Adding `return_intermediate_steps=True` to `AgentExecutor` does not seem to work. Instead I have to set this value after instantiation. ### System Info langchain==0.2.6 langchain-openai==0.1.13 langchainhub==0.1.20 python-dotenv==1.0.1
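Worth noting: the example code passes `return_intermediate_stpes=True` (misspelled) at construction time, while the later working line assigns the correctly spelled `return_intermediate_steps` attribute. If the constructor ignores unknown keyword arguments, a misspelling like this is dropped silently, which would produce exactly the behaviour shown. A minimal illustration (the `Executor` class is hypothetical, not LangChain code):

```python
class Executor:
    def __init__(self, **kwargs):
        # a permissive constructor silently drops unrecognised kwargs
        self.return_intermediate_steps = kwargs.get("return_intermediate_steps", False)


broken = Executor(return_intermediate_stpes=True)   # misspelled key is ignored
fixed = Executor(return_intermediate_steps=True)
print(broken.return_intermediate_steps)  # False
print(fixed.return_intermediate_steps)   # True
```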
Agent Executor's return_intermediate_steps does not have desired effect
https://api.github.com/repos/langchain-ai/langchain/issues/23760/comments
1
2024-07-02T12:06:31Z
2024-07-02T12:10:29Z
https://github.com/langchain-ai/langchain/issues/23760
2,386,082,343
23,760
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code ```python instructions = """You are an agent designed to write and execute python code to answer questions. You have access to a python REPL, which you can use to execute python code. If you get an error, debug your code and try again. Only use the output of your code to answer the question. You might know the answer without running any code, but you should still run the code to get the answer. If it does not seem like you can write code to answer the question, just return "I don't know" as the answer. """ base_prompt = hub.pull("langchain-ai/openai-functions-template") prompt = base_prompt.partial(instructions=instructions) agent = create_openai_functions_agent(ChatOpenAI(temperature=0), tools, prompt) agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True) code=open("./code.txt", "r", encoding="utf-8").read() require=f"""I'll give you a piece of Python code based on Pytorch that trains a communication model. The model consists of five components: SemanticEncoder, ChannelEncoder, PhysicalChannel, ChannelDecoder, and SemanticDecoder. The input text is first tokenized, propagated forward through the model, and finally decoded into text using the tokenizer. Now I need you to test the BLEU of the trained model.\n\n the code is as follows:\n\n ```{code}``` """ ``` ### Error Message and Stack Trace (if applicable) ModuleNotFoundError("No module named 'nltk'")I don't have access to the NLTK library to calculate the BLEU score. 
You can run the code on your local machine with NLTK installed to get the BLEU score for the trained model. ### Description I would like to know which libraries PythonREPLTool supports. I saw in the official documentation that torch can be used. Does it come bundled with the tool? Does it support other common libraries? ### System Info langchain==0.2.6 langchain-community==0.2.6 langchain-core==0.2.10 langchain-experimental==0.0.62 langchain-openai==0.1.13 langchain-text-splitters==0.2.2 langchainhub==0.1.20
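On the underlying question: the Python REPL tool ultimately executes code in the local interpreter, so the "supported libraries" are simply whatever is installed in that Python environment. torch worked because it was installed there; nltk failed because it was not. A rough sketch of that behaviour (a conceptual stand-in, not the tool's actual implementation):

```python
def run_repl(code: str) -> str:
    """Execute code the way a REPL tool conceptually does: plain exec()."""
    namespace = {}
    try:
        exec(code, namespace)
        return str(namespace.get("result"))
    except ModuleNotFoundError as e:
        # mirrors the agent's observation: ModuleNotFoundError("No module named 'nltk'")
        return f"ModuleNotFoundError({e})"


print(run_repl("import math; result = math.sqrt(9)"))  # 3.0
print(run_repl("import nltk_is_not_installed; result = 1"))
```

Installing the missing package into the same environment the agent runs in (`pip install nltk`) is the usual fix.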
PythonREPL library question
https://api.github.com/repos/langchain-ai/langchain/issues/23759/comments
0
2024-07-02T11:40:32Z
2024-07-02T11:43:07Z
https://github.com/langchain-ai/langchain/issues/23759
2,386,029,879
23,759
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code ```python """Standard LangChain interface tests""" from typing import Type import pytest from langchain_core.language_models import BaseChatModel from langchain_standard_tests.unit_tests import ChatModelUnitTests from langchain_upstage import ChatUpstage class TestUpstageStandard(ChatModelUnitTests): @pytest.fixture def chat_model_class(self) -> Type[BaseChatModel]: return ChatUpstage @pytest.fixture def chat_model_params(self) -> dict: return { "model": "solar-1-mini-chat", } ``` ### Error Message and Stack Trace (if applicable) ``` Spawning shell within /Users/juhyung/Library/Caches/pypoetry/virtualenvs/langchain-upstage-Id53XXjQ-py3.11 ➜ upstage git:(SDR-22) emulate bash -c '. 
/Users/juhyung/Library/Caches/pypoetry/virtualenvs/langchain-upstage-Id53XXjQ-py3.11/bin/activate' (langchain-upstage-py3.11) ➜ upstage git:(SDR-22) poetry install Installing dependencies from lock file Package operations: 36 installs, 0 updates, 0 removals - Installing typing-extensions (4.12.2) - Installing annotated-types (0.7.0) - Installing certifi (2024.6.2) - Installing charset-normalizer (3.3.2) - Installing h11 (0.14.0) - Installing idna (3.7) - Installing pydantic-core (2.20.0) - Installing sniffio (1.3.1) - Installing urllib3 (2.2.2) - Installing anyio (4.4.0) - Installing httpcore (1.0.5) - Installing jsonpointer (3.0.0) - Installing orjson (3.10.5) - Installing pydantic (2.8.0) - Installing requests (2.32.3) - Installing distro (1.9.0) - Installing filelock (3.15.4) - Installing fsspec (2024.6.1) - Installing httpx (0.27.0) - Installing jsonpatch (1.33) - Installing langsmith (0.1.83) - Installing packaging (24.1) - Installing pyyaml (6.0.1) - Installing regex (2024.5.15) - Installing tenacity (8.4.2) - Installing tqdm (4.66.4) - Installing huggingface-hub (0.23.4) - Installing langchain-core (0.2.10 aa16553) - Installing mypy-extensions (1.0.0) - Installing openai (1.35.7) - Installing tiktoken (0.7.0) - Installing langchain-openai (0.1.13 aa16553) - Installing mypy (0.991) - Installing pypdf (4.2.0) - Installing tokenizers (0.19.1) - Installing types-requests (2.32.0.20240622) Installing the current project: langchain-upstage (0.1.7) (langchain-upstage-py3.11) ➜ upstage git:(SDR-22) poetry install --with test Installing dependencies from lock file Package operations: 19 installs, 0 updates, 0 removals - Installing mdurl (0.1.2) - Installing iniconfig (2.0.0) - Installing markdown-it-py (3.0.0) - Installing pluggy (1.5.0) - Installing pygments (2.18.0) - Installing six (1.16.0) - Installing numpy (1.26.4) - Installing pytest (7.4.4) - Installing python-dateutil (2.9.0.post0) - Installing rich (13.7.1) - Installing typing-inspect (0.9.0) - Installing 
watchdog (4.0.1) - Installing docarray (0.32.1) - Installing freezegun (1.5.1) - Installing langchain-standard-tests (0.1.1 aa16553) - Installing pytest-asyncio (0.21.2) - Installing pytest-mock (3.14.0) - Installing pytest-watcher (0.3.5) - Installing syrupy (4.6.1) Installing the current project: langchain-upstage (0.1.7) (langchain-upstage-py3.11) ➜ upstage git:(SDR-22) poetry install --with integration_test Group(s) not found: integration_test (via --with) (langchain-upstage-py3.11) ➜ upstage git:(SDR-22) poetry install --with integration_tests Group(s) not found: integration_tests (via --with) (langchain-upstage-py3.11) ➜ upstage git:(SDR-22) hx pyproject.toml (langchain-upstage-py3.11) ➜ upstage git:(SDR-22) poetry install --with test_integration Installing dependencies from lock file Package operations: 1 install, 0 updates, 0 removals - Installing pillow (10.4.0) Installing the current project: langchain-upstage (0.1.7) (langchain-upstage-py3.11) ➜ upstage git:(SDR-22) make test poetry run pytest tests/unit_tests/ ================================================================================================================================================== test session starts =================================================================================================================================================== platform darwin -- Python 3.11.7, pytest-7.4.4, pluggy-1.5.0 rootdir: /Users/juhyung/upstage/projects/langchain-upstage/libs/upstage configfile: pyproject.toml plugins: syrupy-4.6.1, anyio-4.4.0, asyncio-0.21.2, mock-3.14.0 asyncio: mode=Mode.AUTO collected 39 items tests/unit_tests/test_chat_models.py ............... [ 38%] tests/unit_tests/test_chat_models_standard.py FFEEEE [ 53%] tests/unit_tests/test_embeddings.py .... [ 64%] tests/unit_tests/test_groundedness_check.py . [ 66%] tests/unit_tests/test_imports.py . [ 69%] tests/unit_tests/test_layout_analysis.py .......... [ 94%] tests/unit_tests/test_secrets.py .. 
[100%] ========================================================================================================================================================= ERRORS ========================================================================================================================================================= _____________________________________________________________________________________________________________________________ ERROR at setup of TestUpstageStandard.test_bind_tool_pydantic ______________________________________________________________________________________________________________________________ self = <tests.unit_tests.test_chat_models_standard.TestUpstageStandard object at 0x10f086a50> @pytest.fixture def model(self) -> BaseChatModel: return self.chat_model_class( > **{**self.standard_chat_model_params, **self.chat_model_params} ) E TypeError: 'method' object is not a mapping /Users/juhyung/Library/Caches/pypoetry/virtualenvs/langchain-upstage-Id53XXjQ-py3.11/lib/python3.11/site-packages/langchain_standard_tests/unit_tests/chat_models.py:47: TypeError _______________________________________________________________________________________________________________________ ERROR at setup of TestUpstageStandard.test_with_structured_output[Person] ________________________________________________________________________________________________________________________ self = <tests.unit_tests.test_chat_models_standard.TestUpstageStandard object at 0x10f0902d0> @pytest.fixture def model(self) -> BaseChatModel: return self.chat_model_class( > **{**self.standard_chat_model_params, **self.chat_model_params} ) E TypeError: 'method' object is not a mapping /Users/juhyung/Library/Caches/pypoetry/virtualenvs/langchain-upstage-Id53XXjQ-py3.11/lib/python3.11/site-packages/langchain_standard_tests/unit_tests/chat_models.py:47: TypeError 
_______________________________________________________________________________________________________________________ ERROR at setup of TestUpstageStandard.test_with_structured_output[schema1] _______________________________________________________________________________________________________________________ self = <tests.unit_tests.test_chat_models_standard.TestUpstageStandard object at 0x10f090610> @pytest.fixture def model(self) -> BaseChatModel: return self.chat_model_class( > **{**self.standard_chat_model_params, **self.chat_model_params} ) E TypeError: 'method' object is not a mapping /Users/juhyung/Library/Caches/pypoetry/virtualenvs/langchain-upstage-Id53XXjQ-py3.11/lib/python3.11/site-packages/langchain_standard_tests/unit_tests/chat_models.py:47: TypeError _______________________________________________________________________________________________________________________________ ERROR at setup of TestUpstageStandard.test_standard_params _______________________________________________________________________________________________________________________________ self = <tests.unit_tests.test_chat_models_standard.TestUpstageStandard object at 0x10f090f90> @pytest.fixture def model(self) -> BaseChatModel: return self.chat_model_class( > **{**self.standard_chat_model_params, **self.chat_model_params} ) E TypeError: 'method' object is not a mapping /Users/juhyung/Library/Caches/pypoetry/virtualenvs/langchain-upstage-Id53XXjQ-py3.11/lib/python3.11/site-packages/langchain_standard_tests/unit_tests/chat_models.py:47: TypeError ======================================================================================================================================================== FAILURES ======================================================================================================================================================== 
_____________________________________________________________________________________________________________________________________________ TestUpstageStandard.test_init ______________________________________________________________________________________________________________________________________________ self = <tests.unit_tests.test_chat_models_standard.TestUpstageStandard object at 0x10f054950> def test_init(self) -> None: model = self.chat_model_class( > **{**self.standard_chat_model_params, **self.chat_model_params} ) E TypeError: 'method' object is not a mapping /Users/juhyung/Library/Caches/pypoetry/virtualenvs/langchain-upstage-Id53XXjQ-py3.11/lib/python3.11/site-packages/langchain_standard_tests/unit_tests/chat_models.py:87: TypeError ________________________________________________________________________________________________________________________________________ TestUpstageStandard.test_init_streaming _________________________________________________________________________________________________________________________________________ self = <tests.unit_tests.test_chat_models_standard.TestUpstageStandard object at 0x10f085710> def test_init_streaming( self, ) -> None: model = self.chat_model_class( > **{ **self.standard_chat_model_params, **self.chat_model_params, "streaming": True, } ) E TypeError: 'method' object is not a mapping /Users/juhyung/Library/Caches/pypoetry/virtualenvs/langchain-upstage-Id53XXjQ-py3.11/lib/python3.11/site-packages/langchain_standard_tests/unit_tests/chat_models.py:95: TypeError ================================================================================================================================================== slowest 5 durations =================================================================================================================================================== 0.54s call tests/unit_tests/test_chat_models.py::test_upstage_tokenizer 0.34s call 
tests/unit_tests/test_chat_models.py::test_upstage_tokenizer_get_num_tokens 0.12s call tests/unit_tests/test_groundedness_check.py::test_initialization 0.08s call tests/unit_tests/test_chat_models.py::test_upstage_model_param 0.04s call tests/unit_tests/test_chat_models.py::test_initialization ================================================================================================================================================ short test summary info ================================================================================================================================================= FAILED tests/unit_tests/test_chat_models_standard.py::TestUpstageStandard::test_init - TypeError: 'method' object is not a mapping FAILED tests/unit_tests/test_chat_models_standard.py::TestUpstageStandard::test_init_streaming - TypeError: 'method' object is not a mapping ERROR tests/unit_tests/test_chat_models_standard.py::TestUpstageStandard::test_bind_tool_pydantic - TypeError: 'method' object is not a mapping ERROR tests/unit_tests/test_chat_models_standard.py::TestUpstageStandard::test_with_structured_output[Person] - TypeError: 'method' object is not a mapping ERROR tests/unit_tests/test_chat_models_standard.py::TestUpstageStandard::test_with_structured_output[schema1] - TypeError: 'method' object is not a mapping ERROR tests/unit_tests/test_chat_models_standard.py::TestUpstageStandard::test_standard_params - TypeError: 'method' object is not a mapping ========================================================================================================================================= 2 failed, 33 passed, 4 errors in 5.57s ========================================================================================================================================= make: *** [test] Error 1 (langchain-upstage-py3.11) ➜ upstage git:(SDR-22) ``` also lint fails ``` Run make lint_tests poetry run ruff . 
poetry run ruff format tests --diff 16 files already formatted poetry run ruff --select I tests mkdir .mypy_cache_test; poetry run mypy tests --cache-dir .mypy_cache_test tests/unit_tests/test_chat_models_standard.py:[14](https://github.com/langchain-ai/langchain-upstage/actions/runs/9756872538/job/26928071498?pr=9#step:11:15): error: Signature of "chat_model_class" incompatible with supertype "ChatModelTests" [override] tests/unit_tests/test_chat_models_standard.py:18: error: Signature of "chat_model_params" incompatible with supertype "ChatModelTests" [override] tests/integration_tests/test_chat_models_standard.py:14: error: Signature of "chat_model_class" incompatible with supertype "ChatModelTests" [override] tests/integration_tests/test_chat_models_standard.py:18: error: Signature of "chat_model_params" incompatible with supertype "ChatModelTests" [override] Found 4 errors in 2 files (checked [16](https://github.com/langchain-ai/langchain-upstage/actions/runs/9756872538/job/26928071498?pr=9#step:11:17) source files) make: *** [Makefile:31: lint_tests] Error 1 Error: Process completed with exit code 2. ``` ### Description i use ``` langchain-standard-tests = { git = "https://github.com/langchain-ai/langchain.git", subdirectory = "libs/standard-tests" } ``` ### System Info it fails on every version of python <img width="328" alt="image" src="https://github.com/langchain-ai/langchain/assets/20140126/96fd6de5-5d7d-481a-8127-bca853964cf5">
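The `TypeError: 'method' object is not a mapping` (and the mypy "incompatible with supertype" errors) are consistent with the base class unpacking `**self.chat_model_params` while the subclass defines `chat_model_params` as a pytest fixture, i.e. a method, where the newer `ChatModelTests` base appears to expect a property. A pure-Python illustration of the difference (class names hypothetical):

```python
class BrokenTests:
    # Declared as a plain method (which is what a pytest fixture looks like to
    # the base class): `self.chat_model_params` is then a bound method object.
    def chat_model_params(self):
        return {"model": "solar-1-mini-chat"}


class FixedTests:
    # Declared as a property: `self.chat_model_params` evaluates to the dict.
    @property
    def chat_model_params(self):
        return {"model": "solar-1-mini-chat"}


def build_kwargs(tests):
    # mirrors the base class doing {**standard_params, **self.chat_model_params}
    try:
        return {**{"temperature": 0}, **tests.chat_model_params}
    except TypeError as e:
        return str(e)


print(build_kwargs(BrokenTests()))  # 'method' object is not a mapping
print(build_kwargs(FixedTests()))   # {'temperature': 0, 'model': 'solar-1-mini-chat'}
```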
something is wrong in standard_test
https://api.github.com/repos/langchain-ai/langchain/issues/23755/comments
1
2024-07-02T07:52:18Z
2024-07-02T14:34:01Z
https://github.com/langchain-ai/langchain/issues/23755
2,385,511,056
23,755
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code model = ChatFireworks(model=model_name) parser = PydanticOutputParser(pydantic_object=pydantic) prompt = ChatPromptTemplate.from_messages([ ("system", "Answer the user query. Wrap the output in json tags\n{format_instructions}"), ("human", "{query}"), ]).partial(format_instructions=parser.get_format_instructions()) chain = prompt | model | parser try: output = chain.invoke({"query": input}) except (OutputParserException, InvalidRequestError) as e: output = f"An error occurred: {e}" ### Error Message and Stack Trace (if applicable) _No response_ ### Description An error occurred: {'error': {'object': 'error', 'type': 'invalid_request_error', 'message': 'jinja template rendering failed. System role not supported'}} ### System Info System Information ------------------ > OS: Darwin > OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:12:58 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6000 > Python Version: 3.12.4 (main, Jun 21 2024, 11:46:08) [Clang 15.0.0 (clang-1500.3.9.4)] Package Information ------------------- > langchain_core: 0.2.9 > langchain: 0.2.5 > langchain_community: 0.2.5 > langsmith: 0.1.81 > langchain_fireworks: 0.1.3 > langchain_openai: 0.1.8 > langchain_text_splitters: 0.2.1 Packages not installed (Not Necessarily a Problem) -------------------------------------------------- The following packages were not found: > langgraph > langserve
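Gemma's chat template rejects the `system` role, which is what the `jinja template rendering failed. System role not supported` error reports. A common workaround is to fold the system instructions into the first human message before the prompt reaches the model; sketched here with plain `(role, content)` tuples rather than LangChain message objects:

```python
def fold_system_into_human(messages):
    """Merge all system messages into the first human message."""
    system_parts = [content for role, content in messages if role == "system"]
    out = []
    folded = False
    for role, content in messages:
        if role == "system":
            continue  # drop the unsupported role entirely
        if role == "human" and not folded and system_parts:
            content = "\n".join(system_parts) + "\n\n" + content
            folded = True
        out.append((role, content))
    return out


msgs = [
    ("system", "Answer the user query. Wrap the output in json tags."),
    ("human", "What is the capital of France?"),
]
print(fold_system_into_human(msgs))
```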
Pydantic output parser not working with gemma fireworks ai
https://api.github.com/repos/langchain-ai/langchain/issues/23754/comments
2
2024-07-02T07:15:07Z
2024-07-03T05:54:48Z
https://github.com/langchain-ai/langchain/issues/23754
2,385,434,234
23,754
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code retriever.vectorstore.add_documents(summary_docs) will crash (list index out of range ) # Helper function to add documents to the vectorstore and docstore def add_documents(retriever, doc_summaries, doc_contents): doc_ids = [str(uuid.uuid4()) for _ in doc_contents] summary_docs = [ Document(page_content=s, metadata={id_key: doc_ids[i]}) for i, s in enumerate(doc_summaries) ] retriever.vectorstore.add_documents(summary_docs) retriever.docstore.mset(list(zip(doc_ids, doc_contents))) ### Error Message and Stack Trace (if applicable) _No response_ ### Description run Multi_modal_RAG.ipynb step by step ### System Info langchain 0.1.9 langchain-chroma 0.1.1 langchain-community 0.0.24 langchain-core 0.1.27 langchain-experimental 0.0.52 langchain-google-genai 0.0.9 langchain-openai 0.0.7 langchain-text-splitters 0.2.1 langchainhub 0.1.2
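The `IndexError` is consistent with a length mismatch between `doc_summaries` and `doc_contents`: `doc_ids` is sized from `doc_contents` but indexed by the enumeration of `doc_summaries`, so a single extra summary is enough to fall off the end. Illustration (data values hypothetical):

```python
doc_contents = ["content 1", "content 2"]
doc_summaries = ["summary 1", "summary 2", "summary 3"]  # one more than contents

# ids are sized from doc_contents ...
doc_ids = [str(i) for i in range(len(doc_contents))]

# ... but indexed by the enumeration of doc_summaries
try:
    pairs = [(doc_ids[i], s) for i, s in enumerate(doc_summaries)]
    outcome = "ok"
except IndexError as e:
    outcome = str(e)
print(outcome)  # list index out of range
```

Checking `len(doc_summaries) == len(doc_contents)` before calling `add_documents` would turn the crash into a clear error.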
Multi_modal_RAG.ipynb run IndexError: list index out of range
https://api.github.com/repos/langchain-ai/langchain/issues/23746/comments
0
2024-07-02T02:22:47Z
2024-07-02T02:25:21Z
https://github.com/langchain-ai/langchain/issues/23746
2,385,055,305
23,746
[ "langchain-ai", "langchain" ]
### Checked other resources

- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

### Example Code

```python
import json
from typing import Any, Dict

from fastapi import HTTPException, Request
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI
from langserve import add_routes

# app (FastAPI), tools, prompt_template and InputChat are defined elsewhere.


def fetch_config_from_header(config: Dict[str, Any], req: Request) -> Dict[str, Any]:
    config = config.copy()
    configurable = config.get("configurable", {})

    if "x-model-name" in req.headers:
        configurable["model_name"] = req.headers["x-model-name"]
    else:
        raise HTTPException(401, "No model name provided")

    if "x-api-key" in req.headers:
        configurable["default_headers"] = {
            "Content-Type": "application/json",
            "api-key": req.headers["x-api-key"],
        }
    else:
        raise HTTPException(401, "No API key provided")

    if "x-model-kwargs" in req.headers:
        configurable["model_kwargs"] = json.loads(req.headers["x-model-kwargs"])
    else:
        raise HTTPException(401, "No model arguments provided")

    configurable["openai_api_base"] = f"https://someendpoint.com/{req.headers['x-model-name']}"
    config["configurable"] = configurable
    return config


chat_model = ChatOpenAI(
    model_name="some_model",
    model_kwargs={},
    default_headers={},
    openai_api_key="placeholder",
    openai_api_base="placeholder",
).configurable_fields(
    model_name=ConfigurableField(id="model_name"),
    model_kwargs=ConfigurableField(id="model_kwargs"),
    default_headers=ConfigurableField(id="default_headers"),
    openai_api_base=ConfigurableField(id="openai_api_base"),
)

agent = create_tool_calling_agent(chat_model, tools, prompt_template)
runnable = AgentExecutor(agent=agent, tools=tools)

add_routes(
    app,
    runnable.with_types(input_type=InputChat),
    path="/some_agent",
    per_req_config_modifier=fetch_config_from_header,
)
```

### Error Message and Stack Trace (if applicable)

_No response_

### Description

Ideally, when a field is marked as configurable, it should be updated whenever new configurable values are supplied by `per_req_config_modifier`. However, none of the configurable values, such as `temperature`, `openai_api_base`, and `default_headers`, are passed on to the final client.

Some of the related values from the relevant functions:

```
# returned value of config in fetch_config_from_header()
{'configurable': {'model_name': 'some_model', 'default_headers': {'Content-Type': 'application/json', 'api-key': 'some_api_key'}, 'model_kwargs': {'user': 'some_user'}, 'openai_api_base': 'https://someendpoint.com/some_model', 'temperature': 0.6}}

# values of cast_to, opts in openai's _base_client.py AsyncAPIClient.post()
cast_to: <class 'openai.types.chat.chat_completion.ChatCompletion'>
opts: method='post' url='/chat/completions' params={} headers=NOT_GIVEN max_retries=NOT_GIVEN timeout=NOT_GIVEN files=None idempotency_key=None post_parser=NOT_GIVEN json_data={'messages': [{'content': 'some_content', 'role': 'system'}], 'model': 'default_model', 'n': 1, 'stream': False, 'temperature': 0.7} extra_json=None
```

### System Info

```
langchain==0.2.6
langchain-community==0.2.6
langchain-core==0.2.10
langchain-experimental==0.0.62
langchain-openai==0.1.13
langchain-text-splitters==0.2.2
langgraph==0.1.5
langserve==0.2.2
langsmith==0.1.82
openai==1.35.7
platform = linux
python version = 3.12.4
```
ConfigurableFields does not work for agents
https://api.github.com/repos/langchain-ai/langchain/issues/23745/comments
4
2024-07-02T02:15:08Z
2024-07-02T15:09:58Z
https://github.com/langchain-ai/langchain/issues/23745
2,385,048,568
23,745
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code This is a spam issue. ### Error Message and Stack Trace (if applicable) _No response_ ### Description Spam! ### System Info no
Spam issue
https://api.github.com/repos/langchain-ai/langchain/issues/23720/comments
3
2024-07-01T15:27:12Z
2024-07-17T15:36:13Z
https://github.com/langchain-ai/langchain/issues/23720
2,384,138,131
23,720
[ "langchain-ai", "langchain" ]
### Checked other resources

- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

### Example Code

from langchain.agents import AgentExecutor, create_react_agent
from langchain.memory import ConversationBufferMemory
from langchain_core.tools import Tool
from langchain_core.prompts import PromptTemplate
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_community.tools import WikipediaQueryRun
from langchain_community.utilities import WikipediaAPIWrapper
from langchain_community.tools import YouTubeSearchTool

youtube = YouTubeSearchTool()
wiki = WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())

tools = [
    Tool(
        name="youtube",
        func=youtube.run,
        description="Helps in getting youtube videos",
    ),
    Tool(
        name="wiki",
        func=wiki.run,
        description="Useful to search about a popular entity",
    ),
]
tool_names = ["youtube", "wiki"]

template = '''Answer the following questions as best you can. You have access to the following tools:

{tools}

Begin!

Question: {input}
Thought:{agent_scratchpad}
Action: the action to take, should be one of [{tool_names}]
'''

prompt = PromptTemplate(input_variables=["tools", "tool_names", "input"], template=template)
memory = ConversationBufferMemory(memory_key="chat_history")
llm = ChatGoogleGenerativeAI(model="chat-bison@002")

agent = create_react_agent(llm=llm, tools=tools, prompt=prompt)
agent_chain = AgentExecutor(agent=agent, tools=tools, verbose=True, memory=memory)
agent_chain.invoke({"input": "Tell me something about USA"})

### Error Message and Stack Trace (if applicable)

> Entering new AgentExecutor chain...
--------------------------------------------------------------------------- InvalidArgument Traceback (most recent call last) File ~/opt/anaconda3/lib/python3.11/site-packages/langchain_google_genai/chat_models.py:178, in _chat_with_retry.<locals>._chat_with_retry(**kwargs) 177 try: --> 178 return generation_method(**kwargs) 179 # Do not retry for these errors. File ~/opt/anaconda3/lib/python3.11/site-packages/google/ai/generativelanguage_v1beta/services/generative_service/client.py:1122, in GenerativeServiceClient.stream_generate_content(self, request, model, contents, retry, timeout, metadata) 1121 # Send the request. -> 1122 response = rpc( 1123 request, 1124 retry=retry, 1125 timeout=timeout, 1126 metadata=metadata, 1127 ) 1129 # Done; return the response. File ~/opt/anaconda3/lib/python3.11/site-packages/google/api_core/gapic_v1/method.py:131, in _GapicCallable.__call__(self, timeout, retry, compression, *args, **kwargs) 129 kwargs["compression"] = compression --> 131 return wrapped_func(*args, **kwargs) File ~/opt/anaconda3/lib/python3.11/site-packages/google/api_core/retry/retry_unary.py:293, in Retry.__call__.<locals>.retry_wrapped_func(*args, **kwargs) 290 sleep_generator = exponential_sleep_generator( 291 self._initial, self._maximum, multiplier=self._multiplier 292 ) --> 293 return retry_target( 294 target, 295 self._predicate, 296 sleep_generator, 297 timeout=self._timeout, 298 on_error=on_error, 299 ) File ~/opt/anaconda3/lib/python3.11/site-packages/google/api_core/retry/retry_unary.py:153, in retry_target(target, predicate, sleep_generator, timeout, on_error, exception_factory, **kwargs) 151 except Exception as exc: 152 # defer to shared logic for handling errors --> 153 _retry_error_helper( 154 exc, 155 deadline, 156 sleep, 157 error_list, 158 predicate, 159 on_error, 160 exception_factory, 161 timeout, 162 ) 163 # if exception not raised, sleep before next attempt File 
~/opt/anaconda3/lib/python3.11/site-packages/google/api_core/retry/retry_base.py:212, in _retry_error_helper(exc, deadline, next_sleep, error_list, predicate_fn, on_error_fn, exc_factory_fn, original_timeout) 207 final_exc, source_exc = exc_factory_fn( 208 error_list, 209 RetryFailureReason.NON_RETRYABLE_ERROR, 210 original_timeout, 211 ) --> 212 raise final_exc from source_exc 213 if on_error_fn is not None: File ~/opt/anaconda3/lib/python3.11/site-packages/google/api_core/retry/retry_unary.py:144, in retry_target(target, predicate, sleep_generator, timeout, on_error, exception_factory, **kwargs) 143 try: --> 144 result = target() 145 if inspect.isawaitable(result): File ~/opt/anaconda3/lib/python3.11/site-packages/google/api_core/timeout.py:120, in TimeToDeadlineTimeout.__call__.<locals>.func_with_timeout(*args, **kwargs) 118 kwargs["timeout"] = max(0, self._timeout - time_since_first_attempt) --> 120 return func(*args, **kwargs) File ~/opt/anaconda3/lib/python3.11/site-packages/google/api_core/grpc_helpers.py:174, in _wrap_stream_errors.<locals>.error_remapped_callable(*args, **kwargs) 173 except grpc.RpcError as exc: --> 174 raise exceptions.from_grpc_error(exc) from exc InvalidArgument: 400 Request contains an invalid argument. 
The above exception was the direct cause of the following exception: ChatGoogleGenerativeAIError Traceback (most recent call last) Cell In[30], line 70 66 agent=create_react_agent(llm=llm, tools=tools, prompt = prompt) 68 agent_chain = AgentExecutor(agent=agent, tools=tools, verbose=True, memory=memory) ---> 70 agent_chain.invoke({"input": "Tell me something about USA"}) File ~/opt/anaconda3/lib/python3.11/site-packages/langchain/chains/base.py:166, in Chain.invoke(self, input, config, **kwargs) 164 except BaseException as e: 165 run_manager.on_chain_error(e) --> 166 raise e 167 run_manager.on_chain_end(outputs) 169 if include_run_info: File ~/opt/anaconda3/lib/python3.11/site-packages/langchain/chains/base.py:156, in Chain.invoke(self, input, config, **kwargs) 153 try: 154 self._validate_inputs(inputs) 155 outputs = ( --> 156 self._call(inputs, run_manager=run_manager) 157 if new_arg_supported 158 else self._call(inputs) 159 ) 161 final_outputs: Dict[str, Any] = self.prep_outputs( 162 inputs, outputs, return_only_outputs 163 ) 164 except BaseException as e: File ~/opt/anaconda3/lib/python3.11/site-packages/langchain/agents/agent.py:1433, in AgentExecutor._call(self, inputs, run_manager) 1431 # We now enter the agent loop (until it returns something). 1432 while self._should_continue(iterations, time_elapsed): -> 1433 next_step_output = self._take_next_step( 1434 name_to_tool_map, 1435 color_mapping, 1436 inputs, 1437 intermediate_steps, 1438 run_manager=run_manager, 1439 ) 1440 if isinstance(next_step_output, AgentFinish): 1441 return self._return( 1442 next_step_output, intermediate_steps, run_manager=run_manager 1443 ) File ~/opt/anaconda3/lib/python3.11/site-packages/langchain/agents/agent.py:1139, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager) 1130 def _take_next_step( 1131 self, 1132 name_to_tool_map: Dict[str, BaseTool], (...) 
1136 run_manager: Optional[CallbackManagerForChainRun] = None, 1137 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]: 1138 return self._consume_next_step( -> 1139 [ 1140 a 1141 for a in self._iter_next_step( 1142 name_to_tool_map, 1143 color_mapping, 1144 inputs, 1145 intermediate_steps, 1146 run_manager, 1147 ) 1148 ] 1149 ) File ~/opt/anaconda3/lib/python3.11/site-packages/langchain/agents/agent.py:1139, in <listcomp>(.0) 1130 def _take_next_step( 1131 self, 1132 name_to_tool_map: Dict[str, BaseTool], (...) 1136 run_manager: Optional[CallbackManagerForChainRun] = None, 1137 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]: 1138 return self._consume_next_step( -> 1139 [ 1140 a 1141 for a in self._iter_next_step( 1142 name_to_tool_map, 1143 color_mapping, 1144 inputs, 1145 intermediate_steps, 1146 run_manager, 1147 ) 1148 ] 1149 ) File ~/opt/anaconda3/lib/python3.11/site-packages/langchain/agents/agent.py:1167, in AgentExecutor._iter_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager) 1164 intermediate_steps = self._prepare_intermediate_steps(intermediate_steps) 1166 # Call the LLM to see what to do. -> 1167 output = self.agent.plan( 1168 intermediate_steps, 1169 callbacks=run_manager.get_child() if run_manager else None, 1170 **inputs, 1171 ) 1172 except OutputParserException as e: 1173 if isinstance(self.handle_parsing_errors, bool): File ~/opt/anaconda3/lib/python3.11/site-packages/langchain/agents/agent.py:398, in RunnableAgent.plan(self, intermediate_steps, callbacks, **kwargs) 390 final_output: Any = None 391 if self.stream_runnable: 392 # Use streaming to make sure that the underlying LLM is invoked in a 393 # streaming (...) 396 # Because the response from the plan is not a generator, we need to 397 # accumulate the output into final output and return that. 
--> 398 for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}): 399 if final_output is None: 400 final_output = chunk File ~/opt/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:2882, in RunnableSequence.stream(self, input, config, **kwargs) 2876 def stream( 2877 self, 2878 input: Input, 2879 config: Optional[RunnableConfig] = None, 2880 **kwargs: Optional[Any], 2881 ) -> Iterator[Output]: -> 2882 yield from self.transform(iter([input]), config, **kwargs) File ~/opt/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:2869, in RunnableSequence.transform(self, input, config, **kwargs) 2863 def transform( 2864 self, 2865 input: Iterator[Input], 2866 config: Optional[RunnableConfig] = None, 2867 **kwargs: Optional[Any], 2868 ) -> Iterator[Output]: -> 2869 yield from self._transform_stream_with_config( 2870 input, 2871 self._transform, 2872 patch_config(config, run_name=(config or {}).get("run_name") or self.name), 2873 **kwargs, 2874 ) File ~/opt/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:1867, in Runnable._transform_stream_with_config(self, input, transformer, config, run_type, **kwargs) 1865 try: 1866 while True: -> 1867 chunk: Output = context.run(next, iterator) # type: ignore 1868 yield chunk 1869 if final_output_supported: File ~/opt/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:2831, in RunnableSequence._transform(self, input, run_manager, config, **kwargs) 2828 else: 2829 final_pipeline = step.transform(final_pipeline, config) -> 2831 for output in final_pipeline: 2832 yield output File ~/opt/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:1163, in Runnable.transform(self, input, config, **kwargs) 1160 final: Input 1161 got_first_val = False -> 1163 for ichunk in input: 1164 # The default implementation of transform is to buffer input and 1165 # then call stream. 
1166 # It'll attempt to gather all input into a single chunk using 1167 # the `+` operator. 1168 # If the input is not addable, then we'll assume that we can 1169 # only operate on the last chunk, 1170 # and we'll iterate until we get to the last chunk. 1171 if not got_first_val: 1172 final = ichunk File ~/opt/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:4784, in RunnableBindingBase.transform(self, input, config, **kwargs) 4778 def transform( 4779 self, 4780 input: Iterator[Input], 4781 config: Optional[RunnableConfig] = None, 4782 **kwargs: Any, 4783 ) -> Iterator[Output]: -> 4784 yield from self.bound.transform( 4785 input, 4786 self._merge_configs(config), 4787 **{**self.kwargs, **kwargs}, 4788 ) File ~/opt/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:1181, in Runnable.transform(self, input, config, **kwargs) 1178 final = ichunk 1180 if got_first_val: -> 1181 yield from self.stream(final, config, **kwargs) File ~/opt/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:265, in BaseChatModel.stream(self, input, config, stop, **kwargs) 258 except BaseException as e: 259 run_manager.on_llm_error( 260 e, 261 response=LLMResult( 262 generations=[[generation]] if generation else [] 263 ), 264 ) --> 265 raise e 266 else: 267 run_manager.on_llm_end(LLMResult(generations=[[generation]])) File ~/opt/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:245, in BaseChatModel.stream(self, input, config, stop, **kwargs) 243 generation: Optional[ChatGenerationChunk] = None 244 try: --> 245 for chunk in self._stream(messages, stop=stop, **kwargs): 246 if chunk.message.id is None: 247 chunk.message.id = f"run-{run_manager.run_id}" File ~/opt/anaconda3/lib/python3.11/site-packages/langchain_google_genai/chat_models.py:833, in ChatGoogleGenerativeAI._stream(self, messages, stop, run_manager, tools, functions, safety_settings, tool_config, generation_config, 
**kwargs) 811 def _stream( 812 self, 813 messages: List[BaseMessage], (...) 822 **kwargs: Any, 823 ) -> Iterator[ChatGenerationChunk]: 824 request = self._prepare_request( 825 messages, 826 stop=stop, (...) 831 generation_config=generation_config, 832 ) --> 833 response: GenerateContentResponse = _chat_with_retry( 834 request=request, 835 generation_method=self.client.stream_generate_content, 836 **kwargs, 837 metadata=self.default_metadata, 838 ) 839 for chunk in response: 840 _chat_result = _response_to_result(chunk, stream=True) File ~/opt/anaconda3/lib/python3.11/site-packages/langchain_google_genai/chat_models.py:196, in _chat_with_retry(generation_method, **kwargs) 193 except Exception as e: 194 raise e --> 196 return _chat_with_retry(**kwargs) File ~/opt/anaconda3/lib/python3.11/site-packages/tenacity/__init__.py:289, in BaseRetrying.wraps.<locals>.wrapped_f(*args, **kw) 287 @functools.wraps(f) 288 def wrapped_f(*args: t.Any, **kw: t.Any) -> t.Any: --> 289 return self(f, *args, **kw) File ~/opt/anaconda3/lib/python3.11/site-packages/tenacity/__init__.py:379, in Retrying.__call__(self, fn, *args, **kwargs) 377 retry_state = RetryCallState(retry_object=self, fn=fn, args=args, kwargs=kwargs) 378 while True: --> 379 do = self.iter(retry_state=retry_state) 380 if isinstance(do, DoAttempt): 381 try: File ~/opt/anaconda3/lib/python3.11/site-packages/tenacity/__init__.py:314, in BaseRetrying.iter(self, retry_state) 312 is_explicit_retry = fut.failed and isinstance(fut.exception(), TryAgain) 313 if not (is_explicit_retry or self.retry(retry_state)): --> 314 return fut.result() 316 if self.after is not None: 317 self.after(retry_state) File ~/opt/anaconda3/lib/python3.11/concurrent/futures/_base.py:449, in Future.result(self, timeout) 447 raise CancelledError() 448 elif self._state == FINISHED: --> 449 return self.__get_result() 451 self._condition.wait(timeout) 453 if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]: File 
~/opt/anaconda3/lib/python3.11/concurrent/futures/_base.py:401, in Future.__get_result(self) 399 if self._exception: 400 try: --> 401 raise self._exception 402 finally: 403 # Break a reference cycle with the exception in self._exception 404 self = None File ~/opt/anaconda3/lib/python3.11/site-packages/tenacity/__init__.py:382, in Retrying.__call__(self, fn, *args, **kwargs) 380 if isinstance(do, DoAttempt): 381 try: --> 382 result = fn(*args, **kwargs) 383 except BaseException: # noqa: B902 384 retry_state.set_exception(sys.exc_info()) # type: ignore[arg-type] File ~/opt/anaconda3/lib/python3.11/site-packages/langchain_google_genai/chat_models.py:190, in _chat_with_retry.<locals>._chat_with_retry(**kwargs) 187 raise ValueError(error_msg) 189 except google.api_core.exceptions.InvalidArgument as e: --> 190 raise ChatGoogleGenerativeAIError( 191 f"Invalid argument provided to Gemini: {e}" 192 ) from e 193 except Exception as e: 194 raise e ChatGoogleGenerativeAIError: Invalid argument provided to Gemini: 400 Request contains an invalid argument. ### Description I also tried with different langchain wrapper classes for google models. Also tried with different models. 
When I tried the same with the 'GoogleGenerativeAI' class, I get the following error:

AttributeError                            Traceback (most recent call last)
<ipython-input-7-a2422220c061> in <cell line: 70>()
     68 agent_chain = AgentExecutor(agent=agent, tools=tools, verbose=True, memory=memory)
     69
---> 70 agent_chain.invoke({"input": "Tell me something about SRK"})

25 frames
/usr/local/lib/python3.10/dist-packages/langchain_google_genai/llms.py in _completion_with_retry(prompt, is_gemini, stream, **kwargs)
     82     try:
     83         if is_gemini:
---> 84             return llm.client.generate_content(
     85                 contents=prompt,
     86                 stream=stream,

AttributeError: module 'google.generativeai' has no attribute 'generate_content'

### System Info

python version 3.11.8
langchain==0.2.5
langchain_community==0.2.5
langchain_core==0.2.9
langchain-google-genai==1.0.7
ChatGoogleGenerativeAI wrapper class doesn't work with Google chat models such as 'chat-bison@002'
https://api.github.com/repos/langchain-ai/langchain/issues/23714/comments
0
2024-07-01T12:16:49Z
2024-07-01T15:31:26Z
https://github.com/langchain-ai/langchain/issues/23714
2,383,693,372
23,714
[ "langchain-ai", "langchain" ]
### Checked other resources

- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

### Example Code

from langchain_openai import AzureChatOpenAI
from langchain_community.callbacks import get_openai_callback
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_core.prompts import ChatPromptTemplate

def chat_completion(temperature: int = 0):
    try:
        chat = AzureChatOpenAI(
            azure_deployment=MODEL,
            azure_endpoint=AZURE_OPENAI_ENDPOINT,
            api_key=AZURE_OPENAI_API_KEY,
            openai_api_version=API_VERSION,
            model_version=1106,
            temperature=temperature,
            max_tokens=MAX_TOKENS,
        )
        messages = [
            SystemMessage(content="You are a wonderful assistant"),
            HumanMessage(content="Write a haiku about the sea"),
        ]
        prompt = ChatPromptTemplate.from_messages(messages)
        runnable = prompt | chat
        with get_openai_callback() as cb:
            res = runnable.invoke({})
            print(f"Total tokens: {cb.total_tokens}")
            print(f"Total cost: {cb.total_cost}")
            print(f"Haiku: {res.content}")
    except Exception as err:
        print(str(err))

### Error Message and Stack Trace (if applicable)

_No response_

### Description

The get_openai_callback() callback does not have up-to-date Azure models in the file "\langchain_community\callbacks\openai_info.py", so when making a request the total_cost returned is 0.0. I know the error can be worked around by adding the models to the "MODEL_COST_PER_1K_TOKENS" dictionary, but when deploying in Docker, for example, modifying the installed file is a tedious task. Would it be possible to add the models for Azure?

https://azure.microsoft.com/en-us/pricing/details/cognitive-services/openai-service/

Models:
"gpt-35-turbo-1106": 0.001,
"gpt-35-turbo-1106-completion": 0.002,
"gpt-35-turbo-0125": 0.0005,
"gpt-35-turbo-0125-completion": 0.0015,

### System Info

langchain==0.2.6
langchain-community==0.2.6
langchain-core==0.2.10
langchain-openai==0.1.8
langchain-text-splitters==0.2.0
langsmith==0.1.82
The get_openai_callback() function is not updated for Azure models.
https://api.github.com/repos/langchain-ai/langchain/issues/23713/comments
2
2024-07-01T12:12:51Z
2024-07-14T11:51:05Z
https://github.com/langchain-ai/langchain/issues/23713
2,383,684,487
23,713
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the [LangGraph](https://langchain-ai.github.io/langgraph/)/LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangGraph/LangChain rather than my code. - [X] I am sure this is better as an issue [rather than a GitHub discussion](https://github.com/langchain-ai/langgraph/discussions/new/choose), since this is a LangGraph bug and not a design question. ### Example Code ```python import asyncio from traceback import print_stack from typing import Sequence from langchain_core.chat_history import BaseChatMessageHistory from langchain_core.language_models.fake_chat_models import FakeChatModel from langchain_core.messages import BaseMessage, HumanMessage from langchain_core.runnables.history import RunnableWithMessageHistory class SyncFunctionCalledWithinAsyncContextError(Exception): pass class TestChatMessageHistory(BaseChatMessageHistory): def __init__(self) -> None: self._messages = [ HumanMessage(content='all good') ] @property def messages(self) -> list[BaseMessage]: print_stack() raise SyncFunctionCalledWithinAsyncContextError async def aget_messages(self) -> list[BaseMessage]: return self._messages def add_messages(self, messages: Sequence[BaseMessage]) -> None: print_stack() raise SyncFunctionCalledWithinAsyncContextError async def aadd_messages(self, messages: Sequence[BaseMessage]) -> None: self._messages.extend(messages) def clear(self) -> None: print_stack() raise SyncFunctionCalledWithinAsyncContextError chat = FakeChatModel() runnable_with_history = RunnableWithMessageHistory( chat, get_session_history=lambda session_id: TestChatMessageHistory(), ) async def main() -> None: result = await runnable_with_history.ainvoke( [HumanMessage(content='hello?')], {'configurable': {'session_id': 'dummy'}}, ) print(result) if __name__ == '__main__': 
asyncio.run(main()) ``` ### Error Message and Stack Trace (if applicable) ```shell File "bug.py", line 57, in <module> asyncio.run(main()) File "/home/user/.pyenv/versions/3.11.4/lib/python3.11/asyncio/runners.py", line 190, in run return runner.run(main) File "/home/user/.pyenv/versions/3.11.4/lib/python3.11/asyncio/runners.py", line 118, in run return self._loop.run_until_complete(task) File "/home/user/.pyenv/versions/3.11.4/lib/python3.11/asyncio/base_events.py", line 640, in run_until_complete self.run_forever() File "/home/user/.pyenv/versions/3.11.4/lib/python3.11/asyncio/base_events.py", line 607, in run_forever self._run_once() File "/home/user/.pyenv/versions/3.11.4/lib/python3.11/asyncio/base_events.py", line 1922, in _run_once handle._run() File "/home/user/.pyenv/versions/3.11.4/lib/python3.11/asyncio/events.py", line 80, in _run self._context.run(self._callback, *self._args) File "/home/user/workspace/luna/.venv/lib/python3.11/site-packages/langchain_core/tracers/base.py", line 388, in _end_trace await self._on_run_update(run) File "/home/user/workspace/luna/.venv/lib/python3.11/site-packages/langchain_core/tracers/root_listeners.py", line 106, in _on_run_update await acall_func_with_variable_args(self._arg_on_end, run, self.config) File "/home/user/workspace/luna/.venv/lib/python3.11/site-packages/langchain_core/runnables/history.py", line 511, in _aexit_history historic_messages = config["configurable"]["message_history"].messages File "/home/user/workspace/luna/bug.py", line 24, in messages print_stack() Error in AsyncRootListenersTracer.on_llm_end callback: SyncFunctionCalledWithinAsyncContextError() ``` ### Description * I am trying to use `RunnableWithMessageHistory` in async context * I am expecting that only async methods of my `ChatMessageHistory` backend will be used * However, even being in async context and calling `ainvoke`, I still see **sync** `ChatMessageHistory.messages` property called * Expected behaviour: instead of sync 
`.messages` property, an async method `.aget_messages()` should be called # Why this happens? Inside `langchain_core/runnables/history.py`: ```python class RunnableWithMessageHistory(RunnableBindingBase): # ... def _exit_history(self, run: Run, config: RunnableConfig) -> None: hist: BaseChatMessageHistory = config["configurable"]["message_history"] # Get the input messages inputs = load(run.inputs) input_messages = self._get_input_messages(inputs) # If historic messages were prepended to the input messages, remove them to # avoid adding duplicate messages to history. if not self.history_messages_key: historic_messages = config["configurable"]["message_history"].messages input_messages = input_messages[len(historic_messages) :] # Get the output messages output_val = load(run.outputs) output_messages = self._get_output_messages(output_val) hist.add_messages(input_messages + output_messages) async def _aexit_history(self, run: Run, config: RunnableConfig) -> None: hist: BaseChatMessageHistory = config["configurable"]["message_history"] # Get the input messages inputs = load(run.inputs) input_messages = self._get_input_messages(inputs) # If historic messages were prepended to the input messages, remove them to # avoid adding duplicate messages to history. if not self.history_messages_key: historic_messages = config["configurable"]["message_history"].messages # <----------------- !!! input_messages = input_messages[len(historic_messages) :] # Get the output messages output_val = load(run.outputs) output_messages = self._get_output_messages(output_val) await hist.aadd_messages(input_messages + output_messages) ``` Async version `_aexit_history` has been copied from sync `_exit_history`, thus a developer probably forgot to replace sync version for messages retrieval to async one. 
I think that the solution should be in replacing

```python
historic_messages = config["configurable"]["message_history"].messages
```

with

```python
historic_messages = await config["configurable"]["message_history"].aget_messages()
```

### System Info

System Information
------------------
> OS: Linux
> OS Version: #1 SMP Mon, 16 Jan 2023 13:59:21 +0000
> Python Version: 3.11.4 (main, Aug 17 2023, 14:57:18) [GCC 13.2.1 20230801]

Package Information
-------------------
> langchain_core: 0.2.10
> langchain: 0.2.6
> langchain_community: 0.2.6
> langsmith: 0.1.82
> langchain_groq: 0.1.5
> langchain_text_splitters: 0.2.2

Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
Within async context, `RunnableWithMessageHistory` calls sync `.messages` property instead of async `.aget_messages()` method
https://api.github.com/repos/langchain-ai/langchain/issues/23716/comments
0
2024-07-01T09:33:35Z
2024-07-01T18:33:06Z
https://github.com/langchain-ai/langchain/issues/23716
2,384,013,658
23,716
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code ```python from langchain_core.messages import SystemMessage, merge_message_runs chain = SystemMessage(content="Hello, World!") + SystemMessage(content=["foo", "bar"]) runnable = chain | merge_message_runs() runnable.invoke(input={}) ``` ### Error Message and Stack Trace (if applicable) ```text Traceback (most recent call last): File "/Users/JP-Ellis/mwe/mwe.py", line 5, in <module> runnable.invoke(input={}) File "/Users/JP-Ellis/mwe/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 2507, in invoke input = step.invoke(input, config) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/JP-Ellis/mwe/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 3985, in invoke return self._call_with_config( ^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/JP-Ellis/mwe/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 1599, in _call_with_config context.run( File "/Users/JP-Ellis/mwe/.venv/lib/python3.12/site-packages/langchain_core/runnables/config.py", line 380, in call_func_with_variable_args return func(input, **kwargs) # type: ignore[call-arg] ^^^^^^^^^^^^^^^^^^^^^ File "/Users/JP-Ellis/mwe/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 3853, in _invoke output = call_func_with_variable_args( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/JP-Ellis/mwe/.venv/lib/python3.12/site-packages/langchain_core/runnables/config.py", line 380, in call_func_with_variable_args return func(input, **kwargs) # type: ignore[call-arg] ^^^^^^^^^^^^^^^^^^^^^ File 
"/Users/JP-Ellis/mwe/.venv/lib/python3.12/site-packages/langchain_core/messages/utils.py", line 460, in merge_message_runs messages = convert_to_messages(messages) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/JP-Ellis/mwe/.venv/lib/python3.12/site-packages/langchain_core/messages/utils.py", line 268, in convert_to_messages return [_convert_to_message(m) for m in messages] ^^^^^^^^^^^^^^^^^^^^^^ File "/Users/JP-Ellis/mwe/.venv/lib/python3.12/site-packages/langchain_core/messages/utils.py", line 235, in _convert_to_message _message = _create_message_from_message_type(message_type_str, template) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/JP-Ellis/mwe/.venv/lib/python3.12/site-packages/langchain_core/messages/utils.py", line 204, in _create_message_from_message_type raise ValueError( ValueError: Unexpected message type: messages. Use one of 'human', 'user', 'ai', 'assistant', or 'system'. ``` ### Description ## Summary The `merge_message_runs` function is incompatible with messages composed together using the LangChain Expression Language (LCEL). ## Description The examples and tests for `merge_message_runs` verify that the logic is sound on a sequence of messages: ```python [ SystemMessage(content=...), SystemMessage(content=...), ] ``` However, if the messages are instead composed together using the LCEL: ```python ( SystemMessage(content=...) + SystemMessage(content=...) ) # <-- Type ChatPromptValue ``` The internal logic used by `merge_message_runs` of iterating over the messages in the (assumed) sequence fails, resulting in the above error message. 
Some monkey-patching on my part would indicate that adjusting the `convert_to_messages` function to handle both `PromptValue` subclasses as well as sequences (or more generally, iterables) works: ```python def convert_to_messages( messages: Iterable[MessageLikeRepresentation] | PromptValue, ) -> List[BaseMessage]: if isinstance(messages, PromptValue): return [_convert_to_message(m) for m in messages.to_messages()] return [_convert_to_message(m) for m in messages] ``` I am happy to create a PR for the above if that seems like an appropriate solution. ### System Info ```console ❯ uv pip list Package Version ------------------ -------- annotated-types 0.7.0 certifi 2024.6.2 charset-normalizer 3.3.2 idna 3.7 jsonpatch 1.33 jsonpointer 3.0.0 langchain-core 0.2.10 langsmith 0.1.82 orjson 3.10.5 packaging 24.1 pydantic 2.7.4 pydantic-core 2.18.4 pyyaml 6.0.1 requests 2.32.3 tenacity 8.4.2 typing-extensions 4.12.2 urllib3 2.2.2 ❯ uname -mprs Darwin 23.5.0 arm64 arm ❯ python --version Python 3.12.4 ❯ which python /Users/JP-Ellis/mwe/.venv/bin/python ```
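The proposed fix above amounts to a small type dispatch before iterating. A stdlib-only sketch of that dispatch (`FakePromptValue` is a stand-in for the real `PromptValue` class and plain strings stand in for message objects — none of this is the actual langchain code):

```python
from typing import Iterable, List, Union

class FakePromptValue:
    """Stand-in for langchain_core's PromptValue: wraps a list of messages."""
    def __init__(self, messages: List[str]) -> None:
        self._messages = messages

    def to_messages(self) -> List[str]:
        return self._messages

def convert_to_messages(
    messages: Union[Iterable[str], FakePromptValue],
) -> List[str]:
    # Unwrap a PromptValue-like object first, then fall back to iterating.
    if isinstance(messages, FakePromptValue):
        messages = messages.to_messages()
    return [m for m in messages]

# Both shapes now take the same path:
print(convert_to_messages(["a", "b"]))                   # ['a', 'b']
print(convert_to_messages(FakePromptValue(["a", "b"])))  # ['a', 'b']
```

With this shape, a `ChatPromptValue` produced by LCEL composition would be unwrapped to its message list before the per-message conversion runs, which is exactly where the reported `ValueError` is raised today.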
`merge_message_runs` incompatible with LCEL
https://api.github.com/repos/langchain-ai/langchain/issues/23706/comments
0
2024-07-01T09:27:34Z
2024-07-15T15:58:07Z
https://github.com/langchain-ai/langchain/issues/23706
2,383,322,744
23,706
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code This is the minimal code to reproduce the error: ```python import dotenv import os from langchain_community.agent_toolkits import create_sql_agent from langchain_community.utilities import SQLDatabase from langchain_mistralai import ChatMistralAI # Get api key from .env file dotenv.load_dotenv(".dev.env") api_key = str(os.getenv("MISTRAL_API_KEY")) # Create langchain database object db = SQLDatabase.from_uri("postgresql://root:root@localhost:65432/test") # Create agent llm = ChatMistralAI(model_name="mistral-small-latest", api_key=api_key) agent_executor = create_sql_agent(llm, db=db, agent_type="tool-calling", verbose=True) agent_executor.invoke("Do any correct query") ``` This is the payload of the call to the mistral API route `/chat/completions`: ```json { "messages": [ { "role": "system", "content": "You are an agent designed to interact with a SQL database.\\nGiven an input question, create a syntactically correct postgresql query to run, then look at the results of the query and return the answer.\\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most 10 results.\\nYou can order the results by a relevant column to return the most interesting examples in the database.\\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\\nYou have access to tools for interacting with the database.\\nOnly use the below tools. 
Only use the information returned by the below tools to construct your final answer.\\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\\n\\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\\n\\nIf the question does not seem related to the database, just return \"I don\"t know\" as the answer.\\n" }, { "role": "user", "content": "Do any correct query" }, { "role": "assistant", "content": "I should look at the tables in the database to see what I can query. Then I should query the schema of the most relevant tables." } ], "model": "mistral-small-latest", "tools": [ { "type": "function", "function": { "name": "sql_db_query", "description": "Input to this tool is a detailed and correct SQL query, output is a result from the database. If the query is not correct, an error message will be returned. If an error is returned, rewrite the query, check the query, and try again. If you encounter an issue with Unknown column "xxxx" in "field list", use sql_db_schema to query the correct table fields.", "parameters": { "type": "object", "properties": { "query": { "description": "A detailed and correct SQL query.", "type": "string" } }, "required": [ "query" ] } } }, { "type": "function", "function": { "name": "sql_db_schema", "description": "Input to this tool is a comma-separated list of tables, output is the schema and sample rows for those tables. Be sure that the tables actually exist by calling sql_db_list_tables first! Example Input: table1, table2, table3", "parameters": { "type": "object", "properties": { "table_names": { "description": "A comma-separated list of the table names for which to return the schema. 
Example input: \"table1, table2, table3\"", "type": "string" } }, "required": [ "table_names" ] } } }, { "type": "function", "function": { "name": "sql_db_list_tables", "description": "Input is an empty string, output is a comma-separated list of tables in the database.", "parameters": { "type": "object", "properties": { "tool_input": { "description": "An empty string", "default": "", "type": "string" } } } } }, { "type": "function", "function": { "name": "sql_db_query_checker", "description": "Use this tool to double check if your query is correct before executing it. Always use this tool before executing a query with sql_db_query!", "parameters": { "type": "object", "properties": { "query": { "description": "A detailed and SQL query to be checked.", "type": "string" } }, "required": [ "query" ] } } } ], "stream": true } ``` ### Error Message and Stack Trace (if applicable) ``` python test.py > Entering new SQL Agent Executor chain... Traceback (most recent call last): .../test.py", line 18, in <module> agent_executor.invoke("Do any correct query") .../lib/python3.11/site-packages/langchain/chains/base.py", line 166, in invoke raise e .../lib/python3.11/site-packages/langchain/chains/base.py", line 156, in invoke self._call(inputs, run_manager=run_manager) .../lib/python3.11/site-packages/langchain/agents/agent.py", line 1433, in _call next_step_output = self._take_next_step( ^^^^^^^^^^^^^^^^^^^^^ .../lib/python3.11/site-packages/langchain/agents/agent.py", line 1139, in _take_next_step [ .../lib/python3.11/site-packages/langchain/agents/agent.py", line 1139, in <listcomp> [ .../lib/python3.11/site-packages/langchain/agents/agent.py", line 1167, in _iter_next_step output = self.agent.plan( ^^^^^^^^^^^^^^^^ .../lib/python3.11/site-packages/langchain/agents/agent.py", line 515, in plan for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}): .../lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2882, in stream yield from 
self.transform(iter([input]), config, **kwargs) .../lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2869, in transform yield from self._transform_stream_with_config( .../lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1867, in _transform_stream_with_config chunk: Output = context.run(next, iterator) # type: ignore ^^^^^^^^^^^^^^^^^^^^^^^^^^^ .../lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2831, in _transform for output in final_pipeline: .../lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1163, in transform for ichunk in input: .../lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4784, in transform yield from self.bound.transform( .../lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1181, in transform yield from self.stream(final, config, **kwargs) .../lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 265, in stream raise e .../lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 245, in stream for chunk in self._stream(messages, stop=stop, **kwargs): .../lib/python3.11/site-packages/langchain_mistralai/chat_models.py", line 523, in _stream for chunk in self.completion_with_retry( .../lib/python3.11/site-packages/langchain_mistralai/chat_models.py", line 391, in iter_sse _raise_on_error(event_source.response) .../lib/python3.11/site-packages/langchain_mistralai/chat_models.py", line 131, in _raise_on_error raise httpx.HTTPStatusError( httpx.HTTPStatusError: Error response 400 while fetching https://api.mistral.ai/v1/chat/completions: {"object":"error","message":"Expected last role User or Tool (or Assistant with prefix True) for serving but got assistant","type":"invalid_request_error","param":null,"code":null} ``` ### Description I'm following [this guide](https://python.langchain.com/v0.1/docs/use_cases/sql/agents/#setup) to implement a SQL agent with the 
`langchain_community.agent_toolkits.create_sql_agent` function, but instead of using OpenAI I want to use the Mistral API. When I try to implement this agent with Mistral I get the error you can see above. The Mistral chat completion API doesn't accept an assistant message as the last message of the chat unless the prefix feature is enabled. I don't know what the expected behavior of this agent is, so I can't tell whether it's an agent issue or a Mistral client issue. ### System Info ```bash # python -m langchain_core.sys_info System Information ------------------ > OS: Linux > OS Version: #1 SMP Fri Mar 29 23:14:13 UTC 2024 > Python Version: 3.11.6 (main, Nov 22 2023, 18:29:18) [GCC 9.4.0] Package Information ------------------- > langchain_core: 0.2.7 > langchain: 0.2.5 > langchain_community: 0.2.5 > langsmith: 0.1.77 > langchain_experimental: 0.0.60 > langchain_mistralai: 0.1.8 > langchain_text_splitters: 0.2.1 > langserve: 0.2.2 Packages not installed (Not Necessarily a Problem) -------------------------------------------------- The following packages were not found: > langgraph ```
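One possible client-side workaround — a sketch only, using plain role/content dicts rather than langchain message objects, and not necessarily the right fix (dropping the trailing scratchpad message changes what the agent sees) — is to trim trailing assistant messages before sending, since the API rejects a final assistant message unless prefix mode is on:

```python
from typing import Dict, List

def trim_trailing_assistant(messages: List[Dict[str, str]]) -> List[Dict[str, str]]:
    """Return a copy with assistant messages dropped from the end,
    so the last role is user/tool/system as the Mistral API expects."""
    trimmed = list(messages)
    while trimmed and trimmed[-1]["role"] == "assistant":
        trimmed.pop()
    return trimmed

# Shape of the payload from the report above (contents abbreviated):
payload = [
    {"role": "system", "content": "You are an agent designed to interact with a SQL database..."},
    {"role": "user", "content": "Do any correct query"},
    {"role": "assistant", "content": "I should look at the tables in the database..."},
]
print(trim_trailing_assistant(payload)[-1]["role"])  # user
```

An alternative, if the agent's trailing assistant "thought" must be preserved, would be sending it with Mistral's assistant-prefix mode enabled instead of dropping it.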
create_sql_agent with ChatMistralAI causes this error: "Expected last role User or Tool (or Assistant with prefix True) for serving but got assistant"
https://api.github.com/repos/langchain-ai/langchain/issues/23703/comments
0
2024-07-01T08:53:21Z
2024-07-01T08:55:57Z
https://github.com/langchain-ai/langchain/issues/23703
2,383,245,268
23,703
[ "langchain-ai", "langchain" ]
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code ```python from langchain_openai import ChatOpenAI from langchain_core.messages import HumanMessage, SystemMessage llm = ChatOpenAI(model_name="gpt-4-0314", streaming=True) messages = [ SystemMessage(content="You're a helpful assistant"), HumanMessage(content="What is the purpose of model regularization?"), ] llm(messages) ``` ### Error Message and Stack Trace (if applicable) _No response_ ### Description BaseChatModel.generate supports caching, but `.stream` method doesn't ([source](https://github.com/langchain-ai/langchain/blob/9604cb833b9cb9d04a0eb60754e68402ab2d4b3c/libs/core/langchain_core/language_models/chat_models.py#L281)). This creates the need for workarounds like https://github.com/langchain-ai/langchain/issues/20782. ### System Info ``` System Information ------------------ > OS: Darwin > OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:12:58 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6000 > Python Version: 3.11.7 (main, Jan 16 2024, 12:02:24) [Clang 15.0.0 (clang-1500.1.0.2.5)] Package Information ------------------- > langchain_core: 0.2.7 > langchain: 0.2.5 > langchain_community: 0.2.5 > langsmith: 0.1.77 > langchain_anthropic: 0.1.13 > langchain_aws: 0.1.6 > langchain_openai: 0.1.7 > langchain_text_splitters: 0.2.0 Packages not installed (Not Necessarily a Problem) -------------------------------------------------- The following packages were not found: > langgraph > langserve ```
BaseChatModel.stream method not supporting caching
https://api.github.com/repos/langchain-ai/langchain/issues/23701/comments
0
2024-07-01T08:47:21Z
2024-07-01T08:49:55Z
https://github.com/langchain-ai/langchain/issues/23701
2,383,232,429
23,701
[ "langchain-ai", "langchain" ]
### URL https://python.langchain.com/v0.2/docs/tutorials/chatbot/ ### Checklist - [X] I added a very descriptive title to this issue. - [X] I included a link to the documentation page I am referring to (if applicable). ### Issue with current documentation: In the section about message history, there's a sentence: > This `session_id` is used to distinguish between separate conversations, and should be passed in as part of the config when calling the new chain (we'll show how to do that. ### Idea or request for content: The sentence is missing its closing parenthesis at the end, which should be added.
DOC: missing parenthesis
https://api.github.com/repos/langchain-ai/langchain/issues/23687/comments
0
2024-06-30T13:55:07Z
2024-06-30T13:57:34Z
https://github.com/langchain-ai/langchain/issues/23687
2,382,267,174
23,687