| issue_owner_repo (list, length 2) | issue_body (string, nullable) | issue_title (string) | issue_comments_url (string) | issue_comments_count (int64) | issue_created_at (string) | issue_updated_at (string) | issue_html_url (string) | issue_github_id (int64) | issue_number (int64) |
|---|---|---|---|---|---|---|---|---|---|
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/toolkits/sql_database/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Hello everyone,
I think my issue is more about something missing in the docs than a bug.
Feel free to tell me if I've filed this in the wrong place.
In the documentation, there is a great disclaimer: "The query chain may generate insert/update/delete queries. When this is not expected, use a custom prompt or create a SQL users without write permissions."
However, there is no information on the minimal permissions needed for a user.
Currently, I have a script working perfectly with an admin account, but I get the following error with a user that has only:
* Read access on MyView
* Read definition
I can query the view manually, but with LangChain I get an "include_tables {MyView} not found in database" error.
Again, it works with an admin account,
even though I have the schema defined and `view_support` set to true.
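For illustration only (a toy, in-memory SQLite stand-in, not SQL Server): a view never appears in a tables-only listing, which matches the symptom of `include_tables` failing when view metadata isn't visible to the connecting user:

```python
import sqlite3

# In-memory database with one table and one view (names are made up).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, total REAL)")
conn.execute("CREATE VIEW MyView AS SELECT id, total FROM orders")

# A tables-only listing misses the view entirely...
tables = {r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")}
# ...it only shows up when view metadata is queried as well.
views = {r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'view'")}

print("MyView" in tables)  # False
print("MyView" in views)   # True
```

This is only an analogy for why a user who cannot read view definitions could see "not found in database" even though the view is directly queryable.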
### Idea or request for content:
A link under the disclaimer explaining what permissions the tables listed in `include_tables` require. | DOC: Minimal permissions needed to work with SQL Server | https://api.github.com/repos/langchain-ai/langchain/issues/24675/comments | 0 | 2024-07-25T16:00:51Z | 2024-07-25T16:03:26Z | https://github.com/langchain-ai/langchain/issues/24675 | 2,430,412,390 | 24,675 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
`agents.openai_assistant.base.OpenAIAssistantRunnable` has code like
```python
required_tool_call_ids = {
tc.id for tc in run.required_action.submit_tool_outputs.tool_calls
}
```
See https://github.com/langchain-ai/langchain/blob/langchain%3D%3D0.2.11/libs/langchain/langchain/agents/openai_assistant/base.py#L497.
`required_action` is an optional field on OpenAI's `Run` entity. See https://github.com/openai/openai-python/blob/v1.37.0/src/openai/types/beta/threads/run.py#L161.
This results in an error when `run.required_action` is `None`, which does sometimes occur.
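A defensive guard along these lines would avoid the crash — sketched here with toy dataclasses standing in for OpenAI's real `Run`/`RequiredAction` types (none of the classes below are the actual SDK classes):

```python
from dataclasses import dataclass
from typing import List, Optional

# Toy stand-ins for OpenAI's Run / RequiredAction types, for illustration only.
@dataclass
class ToolCall:
    id: str

@dataclass
class SubmitToolOutputs:
    tool_calls: List[ToolCall]

@dataclass
class RequiredAction:
    submit_tool_outputs: SubmitToolOutputs

@dataclass
class Run:
    required_action: Optional[RequiredAction] = None

def required_tool_call_ids(run: Run) -> set:
    # Guard the Optional field instead of dereferencing it unconditionally.
    if run.required_action is None:
        return set()
    return {tc.id for tc in run.required_action.submit_tool_outputs.tool_calls}

print(required_tool_call_ids(Run()))  # set() instead of AttributeError
run = Run(RequiredAction(SubmitToolOutputs([ToolCall("call_1")])))
print(required_tool_call_ids(run))    # {'call_1'}
```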
### Error Message and Stack Trace (if applicable)
AttributeError: 'NoneType' object has no attribute 'submit_tool_outputs'
```
/SITE_PACKAGES/langchain/agents/openai_assistant/base.py:497 in _parse_intermediate_steps
  495: run = self._wait_for_run(last_action.run_id, last_action.thread_id)
  496: required_tool_call_ids = {
  497:     tc.id for tc in run.required_action.submit_tool_outputs.tool_calls
  498: }
  499: tool_outputs = [
/SITE_PACKAGES/langchain_community/agents/openai_assistant/base.py:312 in invoke
  310: # Being run within AgentExecutor and there are tool outputs to submit.
  311: if self.as_agent and input.get("intermediate_steps"):
  312:     tool_outputs = self._parse_intermediate_steps(
  313:         input["intermediate_steps"]
  314:     )
/SITE_PACKAGES/langchain_community/agents/openai_assistant/base.py:347 in invoke
  345: except BaseException as e:
  346:     run_manager.on_chain_error(e)
  347:     raise e
  348: try:
  349:     response = self._get_response(run)
/SITE_PACKAGES/langchain_core/runnables/base.py:854 in stream
  852:     The output of the Runnable.
  853: """
  854: yield self.invoke(input, config, **kwargs)
  855:
  856: async def astream(
/SITE_PACKAGES/langchain/agents/agent.py:580 in plan
  578: # Because the response from the plan is not a generator, we need to
  579: # accumulate the output into final output and return that.
  580: for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
  581:     if final_output is None:
  582:         final_output = chunk
/SITE_PACKAGES/langchain/agents/agent.py:1346 in _iter_next_step
  1344:
  1345: # Call the LLM to see what to do.
  1346: output = self.agent.plan(
  1347:     intermediate_steps,
  1348:     callbacks=run_manager.get_child() if run_manager else None,
/SITE_PACKAGES/langchain/agents/agent.py:1318 in <listcomp>
  1316: ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
  1317:     return self._consume_next_step(
  1318:         [
  1319:             a
  1320:             for a in self._iter_next_step(
/SITE_PACKAGES/langchain/agents/agent.py:1318 in _take_next_step
  1316: ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
  1317:     return self._consume_next_step(
  1318:         [
  1319:             a
  1320:             for a in self._iter_next_step(
/SITE_PACKAGES/langchain/agents/agent.py:1612 in _call
  1610: # We now enter the agent loop (until it returns something).
  1611: while self._should_continue(iterations, time_elapsed):
  1612:     next_step_output = self._take_next_step(
  1613:         name_to_tool_map,
  1614:         color_mapping,
/SITE_PACKAGES/langchain/chains/base.py:156 in invoke
  154: self._validate_inputs(inputs)
  155: outputs = (
  156:     self._call(inputs, run_manager=run_manager)
  157:     if new_arg_supported
  158:     else self._call(inputs)
/SITE_PACKAGES/langchain/chains/base.py:166 in invoke
  164: except BaseException as e:
  165:     run_manager.on_chain_error(e)
  166:     raise e
  167: run_manager.on_chain_end(outputs)
  168:
/SITE_PACKAGES/langchain_core/runnables/base.py:5057 in invoke
  5055: **kwargs: Optional[Any],
  5056: ) -> Output:
  5057:     return self.bound.invoke(
  5058:         input,
  5059:         self._merge_configs(config),
PROJECT_ROOT/assistants/[openai_native_assistant.py](https://github.com/Shopximity/astrology/tree/master/PROJECT_ROOT/assistants/openai_native_assistant.py#L583):583 in _run
  581: metadata=get_contextvars()
  582: ) as manager:
  583:     result = agent_executor.invoke(run_args, config=dict(callbacks=manager))
```
### Description
`OpenAIAssistantRunnable._parse_intermediate_steps` assumes that every OpenAI `run` will have a `required_action`, but that is not correct.
### System Info
```
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:14:38 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6020
> Python Version: 3.11.7 (main, Jan 2 2024, 08:56:15) [Clang 15.0.0 (clang-1500.1.0.2.5)]
Package Information
-------------------
> langchain_core: 0.2.13
> langchain: 0.2.7
> langchain_community: 0.2.7
> langsmith: 0.1.81
> langchain_anthropic: 0.1.19
> langchain_exa: 0.1.0
> langchain_openai: 0.1.14
> langchain_text_splitters: 0.2.0
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
``` | agents.openai_assistant.base.OpenAIAssistantRunnable assumes existence of an Optional field | https://api.github.com/repos/langchain-ai/langchain/issues/24673/comments | 1 | 2024-07-25T15:46:25Z | 2024-07-25T19:43:33Z | https://github.com/langchain-ai/langchain/issues/24673 | 2,430,366,029 | 24,673 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [x] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I've tried this code on two platforms (JetBrains Datalore Online and Replit), and they both give me the same error.
```py
# -*- coding: utf-8 -*-
# API key and model name
GROQ_API_KEY = "MY_GROQ_KEY"  # I have filled this in; no problem here
llm_name = "llama3-groq-70b-8192-tool-use-preview"
# Import
from langchain_groq import ChatGroq
from langchain_core.messages import AIMessage, SystemMessage, HumanMessage
from langchain_core.chat_history import (
BaseChatMessageHistory,
InMemoryChatMessageHistory,
)
from langchain_core.runnables.history import RunnableWithMessageHistory
# Chat History Module
store = {}
# Exactly the same code as in the tutorial
def get_session_history(session_id: str) -> BaseChatMessageHistory:
if session_id not in store:
store[session_id] = InMemoryChatMessageHistory()
return store[session_id]
model = ChatGroq(
model = llm_name,
temperature = 0.5,
max_tokens = 1024,
stop_sequences = None,
api_key = GROQ_API_KEY
)
with_message_history = RunnableWithMessageHistory(model, get_session_history)
# Session ID
config = {"configurable": {"session_id": "abc"}}
model.invoke([HumanMessage(content = "Hi! My name's Kevin.")])
# Stream: this is where it fails
for chunk in with_message_history.stream(
[HumanMessage(content = "What's my name?")],
config = config,
):
print(chunk.content, end = '')
print()
print("Done!")
# Invoke: this works just as I want
response = with_message_history.invoke(
[HumanMessage(content="Hi! I'm Bob")],
config=config,
)
print(response.content)# This works
```
### Error Message and Stack Trace (if applicable)
Your name is Kevin.
Done!
Error in RootListenersTracer.on_chain_end callback: ValueError()
Error in callback coroutine: ValueError()
### Description
* I use the code from the LangChain official tutorial (https://python.langchain.com/v0.2/docs/tutorials/chatbot/#prompt-templates) with a few modifications.
* In stream mode, it outputs the correct response, but prints errors after it.
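For intuition only — a toy, library-free sketch (not LangChain internals) of why the `invoke` and `stream` paths differ for a history wrapper: on `invoke` the end-of-chain callback receives one final message, while on `stream` it must reassemble chunks before saving, and a failure in that step would match this "correct output, then a callback error" symptom:

```python
# Toy history wrapper (NOT LangChain's implementation), for illustration.
class ToyHistoryWrapper:
    def __init__(self):
        self.history = []

    def invoke(self, text):
        out = text.upper()        # stand-in for the model call
        self.history.append(out)  # one final value: easy to save
        return out

    def stream(self, text):
        chunks = []
        for ch in text.upper():   # stand-in for token streaming
            chunks.append(ch)
            yield ch
        # on_chain_end equivalent: the chunks must be rejoined here,
        # otherwise saving the output to history fails after streaming.
        self.history.append("".join(chunks))

w = ToyHistoryWrapper()
print(w.invoke("hi"))             # HI
print("".join(w.stream("ok")))    # OK
print(w.history)                  # ['HI', 'OK']
```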
### System Info
The first service I tried: (JetBrains Datalore Online)
```
System Information
------------------
> OS: Linux
> OS Version: #40~20.04.1-Ubuntu SMP Mon Apr 24 00:21:13 UTC 2023
> Python Version: 3.8.12 (default, Jun 27 2024, 14:42:59)
[GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.2.23
> langsmith: 0.1.93
> langchain_groq: 0.1.6
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
```
The second service I tried (Replit):
```
System Information
------------------
> OS: Linux
> OS Version: #26~22.04.1-Ubuntu SMP Fri Jun 14 18:48:45 UTC 2024
> Python Version: 3.10.14 (main, Mar 19 2024, 21:46:16) [GCC 13.2.0]
Package Information
-------------------
> langchain_core: 0.2.23
> langchain: 0.2.11
> langchain_community: 0.2.10
> langsmith: 0.1.93
> langchain_groq: 0.1.6
> langchain_text_splitters: 0.2.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
``` | Model with history work well on `invoke`, but not well in `stream` (many parts exactly same to official tutorial `Build a Chatbot`) | https://api.github.com/repos/langchain-ai/langchain/issues/24660/comments | 8 | 2024-07-25T09:25:28Z | 2024-08-07T12:51:34Z | https://github.com/langchain-ai/langchain/issues/24660 | 2,429,478,416 | 24,660 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
llm = build_llm(load_model_from="azure")
type(llm)  # Outputs: langchain_community.chat_models.azureml_endpoint.AzureMLChatOnlineEndpoint
llm.invoke("Hallo")  # Outputs: BaseMessage(content='Hallo! Wie kann ich Ihnen helfen?', type='assistant', id='run-f606d912-b21f-4c0c-861d-9338fa001724-0')
from backend_functions.langgraph_rag_workflow import create_workflow_app
from backend_functions.rag_functions import serialize_documents
from langchain_core.messages import HumanMessage
import json
question = "Hello, who are you?"
thread_id = "id_1"
model_type_for_astream_event = "chat_model"
chain = create_workflow_app(retriever=retriever, model=llm)
input_message = HumanMessage(content=question)
config = {
"configurable": {"thread_id": thread_id}, #for every user, a different thread_id should be selected
}
#print(f"Updated State from previous question: {chain.get_state(config).values}")
async for event in chain.astream_events(
#{"messages": [input_message]},
    {"messages": question},  # test for Azure
version="v1",
config=config
):
print(event)
if event["event"] == f"on_{model_type_for_astream_event}_start" and event.get("metadata", {}).get("langgraph_node") == "generate":
print("Stream started...")
if model_type_for_astream_event == "llm":
prompt_length = len(event["data"]["input"]["prompts"][0])
else:
prompt_length= len(event["data"]["input"]["messages"][0][0].content)
print(f'data: {json.dumps({"type": "prompt_length_characters", "content": prompt_length})}\n\n')
print(f'data: {json.dumps({"type": "prompt_length_tokens", "content": prompt_length / 4})}\n\n')
if event["event"] == f"on_{model_type_for_astream_event}_stream" and event.get("metadata", {}).get("langgraph_node") == "generate":
if model_type_for_astream_event == "llm":
chunks = event["data"]['chunk']
else:
chunks = event["data"]['chunk'].content
print(f'data: {json.dumps({"type": "chunk", "content": chunks})}\n\n')
elif event["event"] == "on_chain_end" and event.get("metadata", {}).get("langgraph_node") == "format_docs" and event["name"] == "format_docs":
retrieved_docs = event["data"]["input"]["raw_docs"]
serialized_docs = serialize_documents(retrieved_docs)
print(f'data: {{"type": "docs", "content": {serialized_docs}}}\n\n')
```
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
APIStatusError Traceback (most recent call last)
/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/test.ipynb Cell 49 line 1
     12 config = {
     13     "configurable": {"thread_id": thread_id}, #for every user, a different thread_id should be selected
     14 }
     15 #print(f"Updated State from previous question: {chain.get_state(config).values}")
---> 16 async for event in chain.astream_events(
     17     #{"messages": [input_message]},
     18     {"messages": question}, #test for Azure
     19     version="v1",
     20     config=config
     21 ):
     22     print(event)
     23     if event["event"] == f"on_{model_type_for_astream_event}_start" and event.get("metadata", {}).get("langgraph_node") == "generate":
File ~/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:1246, in Runnable.astream_events(self, input, config, version, include_names, include_types, include_tags, exclude_names, exclude_types, exclude_tags, **kwargs)
1241 raise NotImplementedError(
1242 'Only versions "v1" and "v2" of the schema is currently supported.'
1243 )
1245 async with aclosing(event_stream):
-> 1246 async for event in event_stream:
1247 yield event
File ~/anaconda3/lib/python3.11/site-packages/langchain_core/tracers/event_stream.py:778, in _astream_events_implementation_v1(runnable, input, config, include_names, include_types, include_tags, exclude_names, exclude_types, exclude_tags, **kwargs)
774 root_name = config.get("run_name", runnable.get_name())
776 # Ignoring mypy complaint about too many different union combinations
777 # This arises because many of the argument types are unions
--> 778 async for log in _astream_log_implementation( # type: ignore[misc]
779 runnable,
780 input,
781 config=config,
782 stream=stream,
783 diff=True,
784 with_streamed_output_list=True,
785 **kwargs,
786 ):
787 run_log = run_log + log
789 if not encountered_start_event:
790 # Yield the start event for the root runnable.
File ~/anaconda3/lib/python3.11/site-packages/langchain_core/tracers/log_stream.py:670, in _astream_log_implementation(runnable, input, config, stream, diff, with_streamed_output_list, **kwargs)
667 finally:
668 # Wait for the runnable to finish, if not cancelled (eg. by break)
669 try:
--> 670 await task
671 except asyncio.CancelledError:
672 pass
File ~/anaconda3/lib/python3.11/site-packages/langchain_core/tracers/log_stream.py:624, in _astream_log_implementation.<locals>.consume_astream()
621 prev_final_output: Optional[Output] = None
622 final_output: Optional[Output] = None
--> 624 async for chunk in runnable.astream(input, config, **kwargs):
625 prev_final_output = final_output
626 if final_output is None:
File ~/anaconda3/lib/python3.11/site-packages/langgraph/pregel/__init__.py:1336, in Pregel.astream(self, input, config, stream_mode, output_keys, input_keys, interrupt_before, interrupt_after, debug)
1333 del fut, task
1335 # panic on failure or timeout
-> 1336 _panic_or_proceed(done, inflight, step)
1337 # don't keep futures around in memory longer than needed
1338 del done, inflight, futures
File ~/anaconda3/lib/python3.11/site-packages/langgraph/pregel/__init__.py:1540, in _panic_or_proceed(done, inflight, step)
1538 inflight.pop().cancel()
1539 # raise the exception
-> 1540 raise exc
1542 if inflight:
1543 # if we got here means we timed out
1544 while inflight:
1545 # cancel all pending tasks
File ~/anaconda3/lib/python3.11/site-packages/langgraph/pregel/retry.py:117, in arun_with_retry(task, retry_policy, stream)
115 # run the task
116 if stream:
--> 117 async for _ in task.proc.astream(task.input, task.config):
118 pass
119 else:
File ~/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:3278, in RunnableSequence.astream(self, input, config, **kwargs)
3275 async def input_aiter() -> AsyncIterator[Input]:
3276 yield input
-> 3278 async for chunk in self.atransform(input_aiter(), config, **kwargs):
3279 yield chunk
File ~/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:3261, in RunnableSequence.atransform(self, input, config, **kwargs)
3255 async def atransform(
3256 self,
3257 input: AsyncIterator[Input],
3258 config: Optional[RunnableConfig] = None,
3259 **kwargs: Optional[Any],
3260 ) -> AsyncIterator[Output]:
-> 3261 async for chunk in self._atransform_stream_with_config(
3262 input,
3263 self._atransform,
3264 patch_config(config, run_name=(config or {}).get("run_name") or self.name),
3265 **kwargs,
3266 ):
3267 yield chunk
File ~/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:2160, in Runnable._atransform_stream_with_config(self, input, transformer, config, run_type, **kwargs)
2158 while True:
2159 if accepts_context(asyncio.create_task):
-> 2160 chunk: Output = await asyncio.create_task( # type: ignore[call-arg]
2161 py_anext(iterator), # type: ignore[arg-type]
2162 context=context,
2163 )
2164 else:
2165 chunk = cast(Output, await py_anext(iterator))
File ~/anaconda3/lib/python3.11/site-packages/langchain_core/tracers/log_stream.py:258, in LogStreamCallbackHandler.tap_output_aiter(self, run_id, output)
246 async def tap_output_aiter(
247 self, run_id: UUID, output: AsyncIterator[T]
248 ) -> AsyncIterator[T]:
249 """Tap an output async iterator to stream its values to the log.
250
251 Args:
(...)
256 T: The output value.
257 """
--> 258 async for chunk in output:
259 # root run is handled in .astream_log()
260 if run_id != self.root_id:
261 # if we can't find the run silently ignore
262 # eg. because this run wasn't included in the log
263 if key := self._key_map_by_run_id.get(run_id):
File ~/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:3231, in RunnableSequence._atransform(self, input, run_manager, config, **kwargs)
3229 else:
3230 final_pipeline = step.atransform(final_pipeline, config)
-> 3231 async for output in final_pipeline:
3232 yield output
File ~/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:1313, in Runnable.atransform(self, input, config, **kwargs)
1310 final: Input
1311 got_first_val = False
-> 1313 async for ichunk in input:
1314 # The default implementation of transform is to buffer input and
1315 # then call stream.
1316 # It'll attempt to gather all input into a single chunk using
1317 # the `+` operator.
1318 # If the input is not addable, then we'll assume that we can
1319 # only operate on the last chunk,
1320 # and we'll iterate until we get to the last chunk.
1321 if not got_first_val:
1322 final = ichunk
File ~/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:1331, in Runnable.atransform(self, input, config, **kwargs)
1328 final = ichunk
1330 if got_first_val:
-> 1331 async for output in self.astream(final, config, **kwargs):
1332 yield output
File ~/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:874, in Runnable.astream(self, input, config, **kwargs)
856 async def astream(
857 self,
858 input: Input,
859 config: Optional[RunnableConfig] = None,
860 **kwargs: Optional[Any],
861 ) -> AsyncIterator[Output]:
862 """
863 Default implementation of astream, which calls ainvoke.
864 Subclasses should override this method if they support streaming output.
(...)
872 The output of the Runnable.
873 """
--> 874 yield await self.ainvoke(input, config, **kwargs)
File ~/anaconda3/lib/python3.11/site-packages/langgraph/utils.py:117, in RunnableCallable.ainvoke(self, input, config, **kwargs)
115 kwargs["config"] = config
116 if sys.version_info >= (3, 11):
--> 117 ret = await asyncio.create_task(
118 self.afunc(input, **kwargs), context=context
119 )
120 else:
121 ret = await self.afunc(input, **kwargs)
File ~/Documents/GitHub/fastapi_rag_demo/backend_functions/langgraph_rag_workflow.py:264, in create_workflow_app.<locals>.generate(state)
262 system_message = state["system_prompt"]
263 state["prompt_length"] = len(system_message)
--> 264 response = await model.ainvoke([SystemMessage(content=system_message)] + messages)
265 state["generation"] = response
266 if isinstance(model, OllamaLLM):
File ~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:291, in BaseChatModel.ainvoke(self, input, config, stop, **kwargs)
282 async def ainvoke(
283 self,
284 input: LanguageModelInput,
(...)
288 **kwargs: Any,
289 ) -> BaseMessage:
290 config = ensure_config(config)
--> 291 llm_result = await self.agenerate_prompt(
292 [self._convert_input(input)],
293 stop=stop,
294 callbacks=config.get("callbacks"),
295 tags=config.get("tags"),
296 metadata=config.get("metadata"),
297 run_name=config.get("run_name"),
298 run_id=config.pop("run_id", None),
299 **kwargs,
300 )
301 return cast(ChatGeneration, llm_result.generations[0][0]).message
File ~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:713, in BaseChatModel.agenerate_prompt(self, prompts, stop, callbacks, **kwargs)
705 async def agenerate_prompt(
706 self,
707 prompts: List[PromptValue],
(...)
710 **kwargs: Any,
711 ) -> LLMResult:
712 prompt_messages = [p.to_messages() for p in prompts]
--> 713 return await self.agenerate(
714 prompt_messages, stop=stop, callbacks=callbacks, **kwargs
715 )
File ~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:673, in BaseChatModel.agenerate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
660 if run_managers:
661 await asyncio.gather(
662 *[
663 run_manager.on_llm_end(
(...)
671 ]
672 )
--> 673 raise exceptions[0]
674 flattened_outputs = [
675 LLMResult(generations=[res.generations], llm_output=res.llm_output) # type: ignore[list-item, union-attr]
676 for res in results
677 ]
678 llm_output = self._combine_llm_outputs([res.llm_output for res in results]) # type: ignore[union-attr]
File ~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:846, in BaseChatModel._agenerate_with_cache(self, messages, stop, run_manager, **kwargs)
827 if (
828 type(self)._astream != BaseChatModel._astream
829 or type(self)._stream != BaseChatModel._stream
(...)
843 ),
844 ):
845 chunks: List[ChatGenerationChunk] = []
--> 846 async for chunk in self._astream(messages, stop=stop, **kwargs):
847 chunk.message.response_metadata = _gen_info_and_msg_metadata(chunk)
848 if run_manager:
File ~/anaconda3/lib/python3.11/site-packages/langchain_community/chat_models/azureml_endpoint.py:386, in AzureMLChatOnlineEndpoint._astream(self, messages, stop, run_manager, **kwargs)
383 params = {"stream": True, "stop": stop, "model": None, **kwargs}
385 default_chunk_class = AIMessageChunk
--> 386 async for chunk in await async_client.chat.completions.create(
387 messages=message_dicts, **params
388 ):
389 if not isinstance(chunk, dict):
390 chunk = chunk.dict()
File ~/anaconda3/lib/python3.11/site-packages/openai/resources/chat/completions.py:1159, in AsyncCompletions.create(self, messages, model, frequency_penalty, function_call, functions, logit_bias, logprobs, max_tokens, n, presence_penalty, response_format, seed, stop, stream, temperature, tool_choice, tools, top_logprobs, top_p, user, extra_headers, extra_query, extra_body, timeout)
1128 @required_args(["messages", "model"], ["messages", "model", "stream"])
1129 async def create(
1130 self,
(...)
1157 timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
1158 ) -> ChatCompletion | AsyncStream[ChatCompletionChunk]:
-> 1159 return await self._post(
1160 "/chat/completions",
1161 body=await async_maybe_transform(
1162 {
1163 "messages": messages,
1164 "model": model,
1165 "frequency_penalty": frequency_penalty,
1166 "function_call": function_call,
1167 "functions": functions,
1168 "logit_bias": logit_bias,
1169 "logprobs": logprobs,
1170 "max_tokens": max_tokens,
1171 "n": n,
1172 "presence_penalty": presence_penalty,
1173 "response_format": response_format,
1174 "seed": seed,
1175 "stop": stop,
1176 "stream": stream,
1177 "temperature": temperature,
1178 "tool_choice": tool_choice,
1179 "tools": tools,
1180 "top_logprobs": top_logprobs,
1181 "top_p": top_p,
1182 "user": user,
1183 },
1184 completion_create_params.CompletionCreateParams,
1185 ),
1186 options=make_request_options(
1187 extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
1188 ),
1189 cast_to=ChatCompletion,
1190 stream=stream or False,
1191 stream_cls=AsyncStream[ChatCompletionChunk],
1192 )
File ~/anaconda3/lib/python3.11/site-packages/openai/_base_client.py:1790, in AsyncAPIClient.post(self, path, cast_to, body, files, options, stream, stream_cls)
1776 async def post(
1777 self,
1778 path: str,
(...)
1785 stream_cls: type[_AsyncStreamT] | None = None,
1786 ) -> ResponseT | _AsyncStreamT:
1787 opts = FinalRequestOptions.construct(
1788 method="post", url=path, json_data=body, files=await async_to_httpx_files(files), **options
1789 )
-> 1790 return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
File ~/anaconda3/lib/python3.11/site-packages/openai/_base_client.py:1493, in AsyncAPIClient.request(self, cast_to, options, stream, stream_cls, remaining_retries)
1484 async def request(
1485 self,
1486 cast_to: Type[ResponseT],
(...)
1491 remaining_retries: Optional[int] = None,
1492 ) -> ResponseT | _AsyncStreamT:
-> 1493 return await self._request(
1494 cast_to=cast_to,
1495 options=options,
1496 stream=stream,
1497 stream_cls=stream_cls,
1498 remaining_retries=remaining_retries,
1499 )
File ~/anaconda3/lib/python3.11/site-packages/openai/_base_client.py:1584, in AsyncAPIClient._request(self, cast_to, options, stream, stream_cls, remaining_retries)
1581 await err.response.aread()
1583 log.debug("Re-raising status error")
-> 1584 raise self._make_status_error_from_response(err.response) from None
1586 return await self._process_response(
1587 cast_to=cast_to,
1588 options=options,
(...)
1591 stream_cls=stream_cls,
1592 )
APIStatusError: Error code: 424 - {'detail': 'Not Found'}
### Description
Hi,
I want to use a model from Azure ML in my LangGraph pipeline. The provided code works with several model loaders such as OllamaLLM or ChatGroq. However, I get an error when I switch to an Azure model loaded with `AzureMLChatOnlineEndpoint`: general responses work, but `astream_events` does not.
When running the code with the Azure LLM I get this error: `APIStatusError: Error code: 424 - {'detail': 'Not Found'}`.
I watched the events from `astream_events` and saw that `on_chat_model_start` fires, but the very next event is `on_chat_model_end` and the generation is of type None. I tried both `model_type_for_astream_event = "chat_model"` and `model_type_for_astream_event = "llm"`.
I think this is a bug, or do I have an error in my implementation?
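To make the failure mode concrete, here is a stdlib-only sketch (no LangChain involved; `fake_astream_events` is a hypothetical stand-in, not the real API) of the event sequence I observe: a start event immediately followed by an end event whose output is None, so a consumer waiting for stream chunks never receives any.

```python
import asyncio

# Hypothetical stand-in for model.astream_events(...): reproduces the
# sequence I see with AzureMLChatOnlineEndpoint.
async def fake_astream_events():
    yield {"event": "on_chat_model_start", "data": {}}
    # no "on_chat_model_stream" events are ever emitted
    yield {"event": "on_chat_model_end", "data": {"output": None}}

async def collect_tokens(events):
    tokens = []
    async for ev in events:
        if ev["event"] == "on_chat_model_stream":
            tokens.append(ev["data"]["chunk"])
    return tokens

tokens = asyncio.run(collect_tokens(fake_astream_events()))
print(tokens)  # [] -- the consumer never gets a single chunk
```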
### System Info
langchain 0.2.7 pypi_0 pypi
langchain-chroma 0.1.0 pypi_0 pypi
langchain-community 0.2.7 pypi_0 pypi
langchain-core 0.2.23 pypi_0 pypi
langchain-experimental 0.0.63 pypi_0 pypi
langchain-groq 0.1.5 pypi_0 pypi
langchain-huggingface 0.0.3 pypi_0 pypi
langchain-ollama 0.1.0 pypi_0 pypi
langchain-openai 0.1.7 pypi_0 pypi
langchain-postgres 0.0.3 pypi_0 pypi
langchain-text-splitters 0.2.1 pypi_0 pypi | Astream Events not working for AzureMLChatOnlineEndpoint | https://api.github.com/repos/langchain-ai/langchain/issues/24659/comments | 2 | 2024-07-25T08:59:18Z | 2024-07-25T15:59:17Z | https://github.com/langchain-ai/langchain/issues/24659 | 2,429,422,432 | 24,659 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
#Step 1
```
import os
from langchain_huggingface import HuggingFaceEmbeddings
from langchain_qdrant import Qdrant, FastEmbedSparse, RetrievalMode
embeddings = HuggingFaceEmbeddings(model_name='OrdalieTech/Solon-embeddings-large-0.1', model_kwargs={"device": "cuda"})
sparse_embeddings = FastEmbedSparse(model_name="Qdrant/bm25")
vectordb = Qdrant.from_texts(
    texts=texts,
    embedding=embeddings,
    sparse_embedding=sparse_embeddings,
    sparse_vector_name="sparse-vector",
    path=os.path.join(os.getcwd(), 'manuscrits_biblissima_vectordb'),
    collection_name="manuscrits_biblissima",
    retrieval_mode=RetrievalMode.HYBRID,
)
```
#Step 2
```
import os
from langchain_huggingface import HuggingFaceEmbeddings
from langchain_qdrant import QdrantVectorStore, FastEmbedSparse, RetrievalMode

model_kwargs = {"device": "cuda"}

embeddings = HuggingFaceEmbeddings(
    model_name='OrdalieTech/Solon-embeddings-large-0.1',
    model_kwargs=model_kwargs
)

sparse_embeddings = FastEmbedSparse(
    model_name="Qdrant/bm25",
    model_kwargs=model_kwargs,
)

qdrant = QdrantVectorStore.from_existing_collection(
    collection_name="manuscrits_biblissima",
    path=os.path.join(os.getcwd(), 'manuscrits_biblissima_vectordb'),
    retrieval_mode=RetrievalMode.HYBRID,
    embedding=embeddings,
    sparse_embedding=sparse_embeddings,
    sparse_vector_name="sparse-vector"
)
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/local/eferra01/data/get_ref_llama3_70B_gguf.py", line 101, in <module>
qdrant = QdrantVectorStore.from_existing_collection(
File "/local/eferra01/miniconda3/envs/llama-cpp-env/lib/python3.9/site-packages/langchain_qdrant/qdrant.py", line 286, in from_existing_collection
return cls(
File "/local/eferra01/miniconda3/envs/llama-cpp-env/lib/python3.9/site-packages/langchain_qdrant/qdrant.py", line 87, in __init__
self._validate_collection_config(
File "/local/eferra01/miniconda3/envs/llama-cpp-env/lib/python3.9/site-packages/langchain_qdrant/qdrant.py", line 937, in _validate_collection_config
cls._validate_collection_for_sparse(
File "/local/eferra01/miniconda3/envs/llama-cpp-env/lib/python3.9/site-packages/langchain_qdrant/qdrant.py", line 1022, in _validate_collection_for_sparse
raise QdrantVectorStoreError(
langchain_qdrant.qdrant.QdrantVectorStoreError: Existing Qdrant collection manuscrits_biblissima does not contain sparse vectors named None. If you want to recreate the collection, set force_recreate parameter to True.
```
### Description
I first create a qdrant database (#Step 1).
Then, in another script, to do RAG, I try to load the database (#Step 2).
However, I have the error above.
I named the sparse vectors when creating the database (Step 1) and took care to mention this name when loading the database for the RAG, (Step 2) but it doesn't seem to have been taken into account...
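To illustrate what the traceback suggests (stdlib-only sketch; names are hypothetical stand-ins, not the langchain-qdrant internals): the collection does contain a sparse vector named `sparse-vector`, but the validator appears to be handed `None` instead of the name I passed, which is exactly the mismatch that raises the error.

```python
# The collection's sparse-vector config, as created in Step 1.
collection_sparse_vectors = {"sparse-vector": {"modifier": "idf"}}

def validate_sparse(config, requested_name):
    """Minimal stand-in for the failing validation step."""
    if requested_name not in config:
        raise ValueError(
            f"Existing collection does not contain sparse vectors named {requested_name}."
        )
    return True

# Validating with the name I passed would succeed:
assert validate_sparse(collection_sparse_vectors, "sparse-vector")

# ...but the error message says the check ran with None:
try:
    validate_sparse(collection_sparse_vectors, None)
except ValueError as e:
    print(e)  # Existing collection does not contain sparse vectors named None.
```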
### System Info
langchain-qdrant==0.1.3
OS : Linux
OS Version : Linux dgx 6.1.0-18-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.76-1 (2024-02-01) x86_64 GNU/Linux
Python Version : 3.9.19 | packaged by conda-forge | (main, Mar 20 2024, 12:50:21) [GCC 12.3.0] | sparse vectors name unknown | https://api.github.com/repos/langchain-ai/langchain/issues/24658/comments | 2 | 2024-07-25T08:20:55Z | 2024-07-25T10:54:11Z | https://github.com/langchain-ai/langchain/issues/24658 | 2,429,342,236 | 24,658
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Introduced by https://github.com/langchain-ai/langchain/commit/70761af8cfdcbe35e4719e1f358c735765efb020 - aiohttp has no `verify` parameter, yet one is passed in https://github.com/langchain-ai/langchain/blame/master/libs/community/langchain_community/utilities/requests.py (line 65 & others), causing the application to crash in an async context.
### Error Message and Stack Trace (if applicable)
### Description
See above; it can hardly be more descriptive. You need to replace `verify` with `verify_ssl`.
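For reference, a sketch of the translation the async path needs (assuming aiohttp's per-request `ssl=` parameter; `to_aiohttp_kwargs` is a hypothetical helper, not part of any library):

```python
def to_aiohttp_kwargs(requests_kwargs: dict) -> dict:
    """Translate requests-style kwargs for use with aiohttp.

    requests uses verify=...; aiohttp has no parameter of that name and
    instead takes ssl=... per request (verify_ssl=... is the deprecated
    spelling).
    """
    kwargs = dict(requests_kwargs)
    if "verify" in kwargs:
        kwargs["ssl"] = kwargs.pop("verify")  # ssl=False disables verification
    return kwargs

print(to_aiohttp_kwargs({"verify": False, "timeout": 5}))
# {'timeout': 5, 'ssl': False}
```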
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Mon, 15 Jul 2024 09:23:08 +0000
> Python Version: 3.12.4 (main, Jun 7 2024, 06:33:07) [GCC 14.1.1 20240522]
Package Information
-------------------
> langchain_core: 0.2.23
> langchain: 0.2.11
> langchain_community: 0.2.10
> langsmith: 0.1.93
> langchain_cli: 0.0.24
> langchain_cohere: 0.1.9
> langchain_experimental: 0.0.63
> langchain_mongodb: 0.1.7
> langchain_openai: 0.1.8
> langchain_text_splitters: 0.2.2
> langchainhub: 0.1.20
> langgraph: 0.1.11
> langserve: 0.2.1 | [Regression] SSL verification for requests wrapper crashes for async requests | https://api.github.com/repos/langchain-ai/langchain/issues/24654/comments | 0 | 2024-07-25T07:42:21Z | 2024-07-25T15:09:23Z | https://github.com/langchain-ai/langchain/issues/24654 | 2,429,267,518 | 24,654 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_experimental.llms.ollama_functions import OllamaFunctions
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\KALYAN\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain_experimental\llms\ollama_functions.py", line 44, in <module>
from langchain_core.utils.pydantic import is_basemodel_instance, is_basemodel_subclass
ImportError: cannot import name 'is_basemodel_instance' from 'langchain_core.utils.pydantic' (C:\Users\<Profile>\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain_core\utils\pydantic.py)
```
### Description
I'm trying to use langchain for tooling in Ollama, but I'm encountering an ImportError when attempting to initialize the Ollama Functions module. The error states that is_basemodel_instance cannot be imported from langchain_core.utils.pydantic.
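For anyone hitting the same wall before upgrading `langchain-core`: the two helpers appear to simply test against pydantic's `BaseModel`. Below is a stdlib-only sketch of a fallback shim (my assumption about what they check, not the upstream code; `BaseModel` here is a local stand-in so the snippet runs anywhere):

```python
class BaseModel:  # stand-in for pydantic.BaseModel in this sketch
    pass

def is_basemodel_subclass(cls) -> bool:
    # True only for classes deriving from BaseModel.
    return isinstance(cls, type) and issubclass(cls, BaseModel)

def is_basemodel_instance(obj) -> bool:
    # True only for instances of BaseModel subclasses.
    return isinstance(obj, BaseModel)

class Dummy(BaseModel):
    pass

print(is_basemodel_subclass(Dummy))    # True
print(is_basemodel_instance(Dummy()))  # True
print(is_basemodel_instance(Dummy))    # False: a class, not an instance
```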
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.10.0 (tags/v3.10.0:b494f59, Oct 4 2021, 19:00:18) [MSC v.1929 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.2.11
> langchain: 0.2.6
> langchain_community: 0.2.6
> langsmith: 0.1.83
> langchain_experimental: 0.0.63
> langchain_fireworks: 0.1.4
> langchain_groq: 0.1.4
> langchain_openai: 0.1.14
> langchain_text_splitters: 0.2.2
> langgraph: 0.1.5
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langserve | Unable to Initialize the Ollama Functions Module Due to ImportError in Langchain Core Utils | https://api.github.com/repos/langchain-ai/langchain/issues/24652/comments | 1 | 2024-07-25T05:09:11Z | 2024-08-08T04:20:18Z | https://github.com/langchain-ai/langchain/issues/24652 | 2,429,035,203 | 24,652 |
[
"langchain-ai",
"langchain"
] | ### URL
_No response_
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
_No response_
### Idea or request for content:
_No response_ | DOC: <Please write a comprehensive title after the 'DOC: ' prefix>AttributeError: 'RunnableSequence' object has no attribute 'predict_and_parse' | https://api.github.com/repos/langchain-ai/langchain/issues/24651/comments | 1 | 2024-07-25T04:49:14Z | 2024-07-26T01:44:14Z | https://github.com/langchain-ai/langchain/issues/24651 | 2,428,992,744 | 24,651 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
class VectorStoreCreator:
    """
    A class to create a vector store from documents.

    Methods
    -------
    create_vectorstore(documents, embed_model, collection_name):
        Creates a vector store from a set of documents using the provided embedding model.
    """

    @staticmethod
    def create_vectorstore(documents, embed_model, collection_name):
        """
        Creates a vector store from a set of documents using the provided embedding model.

        This function utilizes the Chroma library to create a vector store, which is a
        data structure that facilitates efficient similarity searches over the document
        embeddings. Optionally, a persistent directory and collection name can be specified
        for storing the vector store on disk.

        Parameters
        ----------
        documents : list
            A list of documents to be embedded and stored.
        embed_model : object
            The embedding model used to convert documents into embeddings.
        collection_name : str
            The name of the collection in which to store the embeddings.

        Returns
        -------
        object
            A Chroma vector store instance containing the document embeddings.
        """
        try:
            # Create the vector store using Chroma
            vectorstore = Chroma.from_texts(
                texts=documents,
                embedding=embed_model,
                # persist_directory=f"chroma_db_{filepath}",
                collection_name=f"{collection_name}"
            )
            logger.info("Vector store created successfully.")
            return vectorstore
        except Exception as e:
            logger.error(f"An error occurred during vector store creation: {str(e)}")
            return None

    @staticmethod
    def create_collection(file_name):
        """
        Create a sanitized collection name from the given file name.

        This method removes non-alphanumeric characters from the file name and truncates
        it to a maximum of 36 characters to form the collection name.

        Args:
            file_name (str): The name of the file from which to create the collection name.

        Returns:
            str: The sanitized and truncated collection name.

        Raises:
            Exception: If an error occurs during the collection name creation process, it logs the error.
        """
        try:
            collection_name = re.compile(r'[^a-zA-Z0-9]').sub('', file_name)[:36]
            logger.info(f"A collection name created for the filename: {file_name} as {collection_name}")
            return collection_name
        except Exception as e:
            logger.error(f"An error occurred during collection name creation: {str(e)}")

    @staticmethod
    def delete_vectorstore(collection_name):
        """
        Delete the specified vector store collection.

        This method deletes a collection in the vector store identified by the collection name.

        Args:
            collection_name (str): The name of the collection to delete.

        Returns:
            None: This method does not return a value.

        Raises:
            Exception: If an error occurs during the deletion process, it logs the error.
        """
        try:
            Chroma.delete_collection()
            return None
        except Exception as e:
            logger.error(f"An error occurred during vector store deletion: {str(e)}")
            return None
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am trying to delete a collection while using Chroma, but it's not working. Could anyone help me fix this issue?
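My current suspicion (stdlib-only sketch below; the class names are stand-ins, not the real Chroma API): `delete_collection` needs to be called on the vector store *instance* returned by `create_vectorstore`, because only the instance knows which collection it is bound to; calling it on the bare `Chroma` class has nothing to act on.

```python
class FakeChroma:
    """Toy model of a store whose instances are bound to one collection."""
    _collections = {}

    def __init__(self, collection_name):
        self.collection_name = collection_name
        FakeChroma._collections[collection_name] = ["doc1", "doc2"]

    def delete_collection(self):
        # An instance knows which collection to drop.
        FakeChroma._collections.pop(self.collection_name, None)

store = FakeChroma("manuscrits")   # like Chroma.from_texts(...)
store.delete_collection()          # works: bound to "manuscrits"
print(FakeChroma._collections)     # {}
```

So the fix I would try is keeping the object returned by `create_vectorstore` and calling `vectorstore.delete_collection()` on it, instead of `Chroma.delete_collection()`.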
### System Info
langchain==0.1.10 | Delete collection for chroma not Working. | https://api.github.com/repos/langchain-ai/langchain/issues/24650/comments | 1 | 2024-07-25T04:38:42Z | 2024-08-10T12:57:30Z | https://github.com/langchain-ai/langchain/issues/24650 | 2,428,975,672 | 24,650 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Setup:
```
from typing import Any, Dict, List, Optional
from langchain.chat_models import ChatOpenAI
from langchain_core.callbacks.base import BaseCallbackHandler, BaseCallbackManager
from langchain_core.output_parsers import StrOutputParser
from langchain.prompts import PromptTemplate
prompt = PromptTemplate(
input_variables=["question"],
template="Answer this question: {question}",
)
model = prompt | ChatOpenAI(temperature=0) | StrOutputParser()
from langchain_core.callbacks.base import AsyncCallbackHandler
class CustomCallbackHandler(BaseCallbackHandler):
    def on_chain_start(self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) -> None:
        print("chain_start")

    def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:
        print("chain_end")
```
Invoking with a list of callbacks => chain events print three times per each.
```
model.invoke("Hi", config={"callbacks": [CustomCallbackHandler()]})
# > Output:
# chain_start
# chain_start
# chain_end
# chain_start
# chain_end
# chain_end
# 'Hello! How can I assist you today?'
```
Invoking with a callback manager => chain events print only once
```
model.invoke("Hi", config={"callbacks": BaseCallbackManager([CustomCallbackHandler()])})
# > Output:
# chain_start
# chain_end
# 'Hello! How can I assist you today?'
```
### Error Message and Stack Trace (if applicable)
NA
### Description
When passing callbacks to the runnable's `.invoke` method, there are two ways to do that:
1. Pass as a list: `model.invoke("Hi", config={"callbacks": [CustomCallbackHandler()]})`
2. Pass as a callback manager: `model.invoke("Hi", config={"callbacks": BaseCallbackManager([CustomCallbackHandler()])})`
However, the behavior differs between the two: the former triggers the handler more times than the latter.
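A stdlib-only model of my working hypothesis (simplified stand-ins, not the real callback machinery): a bare list seems to be registered as *inheritable* handlers, which are re-attached to every child run of the sequence (prompt, model, parser), while `BaseCallbackManager([handler])` puts the handler in `handlers` but not in `inheritable_handlers`, so child runs never see it.

```python
def run_chain(handlers, inheritable, depth=0, events=None):
    """Fire start/end for this run, then for two child runs."""
    events = [] if events is None else events
    for _ in handlers + inheritable:
        events.append((depth, "chain_start"))
    if depth < 1:
        # Child runs only receive the *inheritable* handlers.
        run_chain([], inheritable, depth + 1, events)
        run_chain([], inheritable, depth + 1, events)
    for _ in handlers + inheritable:
        events.append((depth, "chain_end"))
    return events

as_list = run_chain(handlers=[], inheritable=["h"])     # callbacks=[handler]
as_manager = run_chain(handlers=["h"], inheritable=[])  # BaseCallbackManager([handler])

print(sum(1 for _, e in as_list if e == "chain_start"))     # 3
print(sum(1 for _, e in as_manager if e == "chain_start"))  # 1
```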
### System Info
System Information
------------------
> OS: Linux
> OS Version: #70~20.04.1-Ubuntu SMP Fri Jun 14 15:42:13 UTC 2024
> Python Version: 3.11.0rc1 (main, Aug 12 2022, 10:02:14) [GCC 11.2.0]
Package Information
-------------------
> langchain_core: 0.2.23
> langchain: 0.2.10
> langchain_community: 0.0.38
> langsmith: 0.1.93
> langchain_openai: 0.1.17
> langchain_text_splitters: 0.2.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Callbacks called different times when passed in a list or callback manager. | https://api.github.com/repos/langchain-ai/langchain/issues/24642/comments | 7 | 2024-07-25T00:55:28Z | 2024-07-30T01:33:54Z | https://github.com/langchain-ai/langchain/issues/24642 | 2,428,719,527 | 24,642 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.1/docs/get_started/introduction/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
_No response_
### Idea or request for content:
_No response_ | DOC: Page Navigation link references (href); Page's navigation links at the bottom incorrectly references the same page instead of the next. | https://api.github.com/repos/langchain-ai/langchain/issues/24627/comments | 0 | 2024-07-24T20:42:18Z | 2024-07-24T20:44:48Z | https://github.com/langchain-ai/langchain/issues/24627 | 2,428,436,331 | 24,627 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_experimental.graph_transformers.llm import create_simple_model
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(
temperature=0,
model_name="gpt-4o-mini-2024-07-18"
)
schema = create_simple_model(
node_labels = ["Person", "Organization"],
rel_types = ["KNOWS", "EMPLOYED_BY"],
llm_type = llm._llm_type # openai-chat
)
print(schema.schema_json(indent=4))
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
The `_Graph` pydantic model generated from `create_simple_model` (which `LLMGraphTransformer` uses when allowed nodes and relationships are provided) does not constrain the relationships (source and target types, relationship type), and the node and relationship properties with enums when using ChatOpenAI.
One can see this by dumping the JSON schema of the generated `_Graph` model and noting that `enum` is missing from every field except `SimpleNode.type`.
**The issue is that when calling `optional_enum_field` throughout `create_simple_model` the `llm_type` parameter is not passed in except for when creating node type. Passing it into each call fixes the issue.**
```json
{
"title": "DynamicGraph",
"description": "Represents a graph document consisting of nodes and relationships.",
"type": "object",
"properties": {
"nodes": {
"title": "Nodes",
"description": "List of nodes",
"type": "array",
"items": {
"$ref": "#/definitions/SimpleNode"
}
},
"relationships": {
"title": "Relationships",
"description": "List of relationships",
"type": "array",
"items": {
"$ref": "#/definitions/SimpleRelationship"
}
}
},
"definitions": {
"SimpleNode": {
"title": "SimpleNode",
"type": "object",
"properties": {
"id": {
"title": "Id",
"description": "Name or human-readable unique identifier.",
"type": "string"
},
"type": {
"title": "Type",
"description": "The type or label of the node.. Available options are ['Person', 'Organization']",
"enum": [
"Person",
"Organization"
],
"type": "string"
}
},
"required": [
"id",
"type"
]
},
"SimpleRelationship": {
"title": "SimpleRelationship",
"type": "object",
"properties": {
"source_node_id": {
"title": "Source Node Id",
"description": "Name or human-readable unique identifier of source node",
"type": "string"
},
"source_node_type": {
"title": "Source Node Type",
"description": "The type or label of the source node.. Available options are ['Person', 'Organization']",
"type": "string"
},
"target_node_id": {
"title": "Target Node Id",
"description": "Name or human-readable unique identifier of target node",
"type": "string"
},
"target_node_type": {
"title": "Target Node Type",
"description": "The type or label of the target node.. Available options are ['Person', 'Organization']",
"type": "string"
},
"type": {
"title": "Type",
"description": "The type of the relationship.. Available options are ['KNOWS', 'EMPLOYED_BY']",
"type": "string"
}
},
"required": [
"source_node_id",
"source_node_type",
"target_node_id",
"target_node_type",
"type"
]
}
}
}
```
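A minimal sketch of the proposed fix (a simplified stand-in for `optional_enum_field`, not the upstream implementation): once `llm_type` is threaded through each call, the `enum` constraint actually gets attached for the OpenAI chat path.

```python
def optional_enum_field(enum_values, description, llm_type=None):
    """Toy version: attach an enum only when the llm_type supports it."""
    field = {"description": description, "type": "string"}
    if enum_values and llm_type == "openai-chat":
        field["enum"] = list(enum_values)
    return field

# Current behavior for relationship fields (llm_type never forwarded):
print(optional_enum_field(["KNOWS", "EMPLOYED_BY"], "The type of the relationship."))
# {'description': 'The type of the relationship.', 'type': 'string'}

# With llm_type passed in, the constraint shows up in the schema:
print(optional_enum_field(["KNOWS", "EMPLOYED_BY"], "The type of the relationship.", "openai-chat"))
# adds 'enum': ['KNOWS', 'EMPLOYED_BY']
```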
### System Info
```bash
> pip freeze | grep langchain
langchain==0.2.10
langchain-community==0.2.9
langchain-core==0.2.22
langchain-experimental==0.0.62
langchain-openai==0.1.17
langchain-text-splitters==0.2.2
```
platform: wsl2 windows
Python 3.10.14 | graph_transformers.llm.py create_simple_model not constraining relationships with enums when using OpenAI LLM | https://api.github.com/repos/langchain-ai/langchain/issues/24615/comments | 0 | 2024-07-24T16:27:18Z | 2024-07-24T16:30:04Z | https://github.com/langchain-ai/langchain/issues/24615 | 2,428,013,260 | 24,615 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
# Defining the Bing Search Tool
from langchain_community.utilities import BingSearchAPIWrapper
from langchain_community.tools.bing_search import BingSearchResults
import os
BING_SUBSCRIPTION_KEY = os.getenv("BING_SUBSCRIPTION_KEY")
api_wrapper = BingSearchAPIWrapper(bing_subscription_key = BING_SUBSCRIPTION_KEY, bing_search_url = 'https://api.bing.microsoft.com/v7.0/search')
bing_tool = BingSearchResults(api_wrapper=api_wrapper)
# Defining the Agent elements
from langchain.agents import AgentExecutor
from langchain_openai import AzureChatOpenAI
from langchain_core.runnables import RunnablePassthrough
from langchain_core.utils.utils import convert_to_secret_str
from langchain.agents.format_scratchpad.openai_tools import (
format_to_openai_tool_messages,
)
from langchain.agents.output_parsers.openai_tools import OpenAIToolsAgentOutputParser
from langchain_core.runnables import RunnablePassthrough
from langchain_core.utils.function_calling import convert_to_openai_tool
from langchain.agents.format_scratchpad.openai_tools import (
format_to_openai_tool_messages,
)
from langchain import hub
instructions = """You are an assistant."""
base_prompt = hub.pull("langchain-ai/openai-functions-template")
prompt = base_prompt.partial(instructions=instructions)
llm = AzureChatOpenAI(
azure_deployment=os.getenv("AZURE_OPENAI_DEPLOYMENT_NAME"),
azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
api_key=convert_to_secret_str(os.getenv("AZURE_OPENAI_API_KEY")), # type: ignore
api_version=os.getenv("AZURE_OPENAI_API_VERSION"), # type: ignore
temperature=0,
)
bing_tools = [bing_tool]
bing_llm_with_tools = llm.bind(tools=[convert_to_openai_tool(tool) for tool in bing_tools])
# Defining the Agent
from langchain_core.runnables import RunnablePassthrough, RunnableSequence
bing_agent = RunnableSequence(
    RunnablePassthrough.assign(
        agent_scratchpad=lambda x: format_to_openai_tool_messages(
            x["intermediate_steps"]
        )
    ),
    # RunnablePassthrough()
    prompt,
    bing_llm_with_tools,
    OpenAIToolsAgentOutputParser(),
)
# Defining the Agent Executor
bing_agent_executor = AgentExecutor(
    agent=bing_agent,
    tools=bing_tools,
    verbose=True,
)
# Calling the Agent Executor
bing_agent_executor.invoke({"input":"tell me about the last version of angular"})
```
### Error Message and Stack Trace (if applicable)
TypeError: Object of type CallbackManagerForToolRun is not JSON serializable
```
{
"name": "TypeError",
"message": "Object of type CallbackManagerForToolRun is not JSON serializable",
"stack": "---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[31], line 1
----> 1 bing_agent_executor.invoke({"input": "tell me about the last version of angular"})
      3 print("done")
File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain/chains/base.py:166, in Chain.invoke(self, input, config, **kwargs)
164 except BaseException as e:
165 run_manager.on_chain_error(e)
--> 166 raise e
167 run_manager.on_chain_end(outputs)
169 if include_run_info:
File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain/chains/base.py:156, in Chain.invoke(self, input, config, **kwargs)
153 try:
154 self._validate_inputs(inputs)
155 outputs = (
--> 156 self._call(inputs, run_manager=run_manager)
157 if new_arg_supported
158 else self._call(inputs)
159 )
161 final_outputs: Dict[str, Any] = self.prep_outputs(
162 inputs, outputs, return_only_outputs
163 )
164 except BaseException as e:
File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain/agents/agent.py:1612, in AgentExecutor._call(self, inputs, run_manager)
1610 # We now enter the agent loop (until it returns something).
1611 while self._should_continue(iterations, time_elapsed):
-> 1612 next_step_output = self._take_next_step(
1613 name_to_tool_map,
1614 color_mapping,
1615 inputs,
1616 intermediate_steps,
1617 run_manager=run_manager,
1618 )
1619 if isinstance(next_step_output, AgentFinish):
1620 return self._return(
1621 next_step_output, intermediate_steps, run_manager=run_manager
1622 )
File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain/agents/agent.py:1318, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
1309 def _take_next_step(
1310 self,
1311 name_to_tool_map: Dict[str, BaseTool],
(...)
1315 run_manager: Optional[CallbackManagerForChainRun] = None,
1316 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
1317 return self._consume_next_step(
-> 1318 [
1319 a
1320 for a in self._iter_next_step(
1321 name_to_tool_map,
1322 color_mapping,
1323 inputs,
1324 intermediate_steps,
1325 run_manager,
1326 )
1327 ]
1328 )
File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain/agents/agent.py:1318, in <listcomp>(.0)
1309 def _take_next_step(
1310 self,
1311 name_to_tool_map: Dict[str, BaseTool],
(...)
1315 run_manager: Optional[CallbackManagerForChainRun] = None,
1316 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
1317 return self._consume_next_step(
-> 1318 [
1319 a
1320 for a in self._iter_next_step(
1321 name_to_tool_map,
1322 color_mapping,
1323 inputs,
1324 intermediate_steps,
1325 run_manager,
1326 )
1327 ]
1328 )
File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain/agents/agent.py:1346, in AgentExecutor._iter_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
1343 intermediate_steps = self._prepare_intermediate_steps(intermediate_steps)
1345 # Call the LLM to see what to do.
-> 1346 output = self.agent.plan(
1347 intermediate_steps,
1348 callbacks=run_manager.get_child() if run_manager else None,
1349 **inputs,
1350 )
1351 except OutputParserException as e:
1352 if isinstance(self.handle_parsing_errors, bool):
File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain/agents/agent.py:580, in RunnableMultiActionAgent.plan(self, intermediate_steps, callbacks, **kwargs)
572 final_output: Any = None
573 if self.stream_runnable:
574 # Use streaming to make sure that the underlying LLM is invoked in a
575 # streaming
(...)
578 # Because the response from the plan is not a generator, we need to
579 # accumulate the output into final output and return that.
--> 580 for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
581 if final_output is None:
582 final_output = chunk
File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py:3253, in RunnableSequence.stream(self, input, config, **kwargs)
3247 def stream(
3248 self,
3249 input: Input,
3250 config: Optional[RunnableConfig] = None,
3251 **kwargs: Optional[Any],
3252 ) -> Iterator[Output]:
-> 3253 yield from self.transform(iter([input]), config, **kwargs)
File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py:3240, in RunnableSequence.transform(self, input, config, **kwargs)
3234 def transform(
3235 self,
3236 input: Iterator[Input],
3237 config: Optional[RunnableConfig] = None,
3238 **kwargs: Optional[Any],
3239 ) -> Iterator[Output]:
-> 3240 yield from self._transform_stream_with_config(
3241 input,
3242 self._transform,
 3243 patch_config(config, run_name=(config or {}).get("run_name") or self.name),
3244 **kwargs,
3245 )
File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py:2053, in Runnable._transform_stream_with_config(self, input, transformer, config, run_type, **kwargs)
2051 try:
2052 while True:
-> 2053 chunk: Output = context.run(next, iterator) # type: ignore
2054 yield chunk
2055 if final_output_supported:
File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py:3202, in RunnableSequence._transform(self, input, run_manager, config, **kwargs)
3199 else:
3200 final_pipeline = step.transform(final_pipeline, config)
-> 3202 for output in final_pipeline:
3203 yield output
File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py:1271, in Runnable.transform(self, input, config, **kwargs)
1268 final: Input
1269 got_first_val = False
-> 1271 for ichunk in input:
1272 # The default implementation of transform is to buffer input and
1273 # then call stream.
1274 # It'll attempt to gather all input into a single chunk using
1275 # the `+` operator.
1276 # If the input is not addable, then we'll assume that we can
1277 # only operate on the last chunk,
1278 # and we'll iterate until we get to the last chunk.
1279 if not got_first_val:
1280 final = ichunk
File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py:5264, in RunnableBindingBase.transform(self, input, config, **kwargs)
5258 def transform(
5259 self,
5260 input: Iterator[Input],
5261 config: Optional[RunnableConfig] = None,
5262 **kwargs: Any,
5263 ) -> Iterator[Output]:
-> 5264 yield from self.bound.transform(
5265 input,
5266 self._merge_configs(config),
5267 **{**self.kwargs, **kwargs},
5268 )
File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py:1289, in Runnable.transform(self, input, config, **kwargs)
1286 final = ichunk
1288 if got_first_val:
-> 1289 yield from self.stream(final, config, **kwargs)
File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:365, in BaseChatModel.stream(self, input, config, stop, **kwargs)
358 except BaseException as e:
359 run_manager.on_llm_error(
360 e,
361 response=LLMResult(
362 generations=[[generation]] if generation else []
363 ),
364 )
--> 365 raise e
366 else:
367 run_manager.on_llm_end(LLMResult(generations=[[generation]]))
File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:345, in BaseChatModel.stream(self, input, config, stop, **kwargs)
343 generation: Optional[ChatGenerationChunk] = None
344 try:
--> 345 for chunk in self._stream(messages, stop=stop, **kwargs):
346 if chunk.message.id is None:
 347 chunk.message.id = f"run-{run_manager.run_id}"
File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain_openai/chat_models/base.py:513, in BaseChatOpenAI._stream(self, messages, stop, run_manager, **kwargs)
505 def _stream(
506 self,
507 messages: List[BaseMessage],
(...)
510 **kwargs: Any,
511 ) -> Iterator[ChatGenerationChunk]:
 512 kwargs["stream"] = True
--> 513 payload = self._get_request_payload(messages, stop=stop, **kwargs)
514 default_chunk_class: Type[BaseMessageChunk] = AIMessageChunk
515 if self.include_response_headers:
File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain_openai/chat_models/base.py:604, in BaseChatOpenAI._get_request_payload(self, input_, stop, **kwargs)
601 if stop is not None:
 602 kwargs["stop"] = stop
 603 return {
--> 604 "messages": [_convert_message_to_dict(m) for m in messages],
605 **self._default_params,
606 **kwargs,
607 }
File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain_openai/chat_models/base.py:604, in <listcomp>(.0)
601 if stop is not None:
 602 kwargs["stop"] = stop
 603 return {
--> 604 "messages": [_convert_message_to_dict(m) for m in messages],
605 **self._default_params,
606 **kwargs,
607 }
File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain_openai/chat_models/base.py:198, in _convert_message_to_dict(message)
 196 message_dict["function_call"] = message.additional_kwargs["function_call"]
 197 if message.tool_calls or message.invalid_tool_calls:
--> 198 message_dict["tool_calls"] = [
 199 _lc_tool_call_to_openai_tool_call(tc) for tc in message.tool_calls
 200 ] + [
 201 _lc_invalid_tool_call_to_openai_tool_call(tc)
 202 for tc in message.invalid_tool_calls
 203 ]
 204 elif "tool_calls" in message.additional_kwargs:
 205 message_dict["tool_calls"] = message.additional_kwargs["tool_calls"]
File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain_openai/chat_models/base.py:199, in <listcomp>(.0)
 196 message_dict["function_call"] = message.additional_kwargs["function_call"]
 197 if message.tool_calls or message.invalid_tool_calls:
 198 message_dict["tool_calls"] = [
--> 199 _lc_tool_call_to_openai_tool_call(tc) for tc in message.tool_calls
 200 ] + [
 201 _lc_invalid_tool_call_to_openai_tool_call(tc)
 202 for tc in message.invalid_tool_calls
 203 ]
 204 elif "tool_calls" in message.additional_kwargs:
 205 message_dict["tool_calls"] = message.additional_kwargs["tool_calls"]
File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain_openai/chat_models/base.py:1777, in _lc_tool_call_to_openai_tool_call(tool_call)
1771 def _lc_tool_call_to_openai_tool_call(tool_call: ToolCall) -> dict:
1772 return {
 1773 "type": "function",
 1774 "id": tool_call["id"],
 1775 "function": {
 1776 "name": tool_call["name"],
-> 1777 "arguments": json.dumps(tool_call["args"]),
1778 },
1779 }
File ~/.pyenv/versions/3.11.9/lib/python3.11/json/__init__.py:231, in dumps(obj, skipkeys, ensure_ascii, check_circular, allow_nan, cls, indent, separators, default, sort_keys, **kw)
226 # cached encoder
227 if (not skipkeys and ensure_ascii and
228 check_circular and allow_nan and
229 cls is None and indent is None and separators is None and
230 default is None and not sort_keys and not kw):
--> 231 return _default_encoder.encode(obj)
232 if cls is None:
233 cls = JSONEncoder
File ~/.pyenv/versions/3.11.9/lib/python3.11/json/encoder.py:200, in JSONEncoder.encode(self, o)
196 return encode_basestring(o)
197 # This doesn't pass the iterator directly to ''.join() because the
198 # exceptions aren't as detailed. The list call should be roughly
199 # equivalent to the PySequence_Fast that ''.join() would do.
--> 200 chunks = self.iterencode(o, _one_shot=True)
201 if not isinstance(chunks, (list, tuple)):
202 chunks = list(chunks)
File ~/.pyenv/versions/3.11.9/lib/python3.11/json/encoder.py:258, in JSONEncoder.iterencode(self, o, _one_shot)
253 else:
254 _iterencode = _make_iterencode(
255 markers, self.default, _encoder, self.indent, floatstr,
256 self.key_separator, self.item_separator, self.sort_keys,
257 self.skipkeys, _one_shot)
--> 258 return _iterencode(o, 0)
File ~/.pyenv/versions/3.11.9/lib/python3.11/json/encoder.py:180, in JSONEncoder.default(self, o)
161 def default(self, o):
 162 """Implement this method in a subclass such that it returns
 163 a serializable object for ``o``, or calls the base implementation
 164 (to raise a ``TypeError``).
 (...)
 178
 179 """
--> 180 raise TypeError(f'Object of type {o.__class__.__name__} '
181 f'is not JSON serializable')
TypeError: Object of type CallbackManagerForToolRun is not JSON serializable"
}
```
### Description
I'm trying to use the Bing Search tool in an Agent Executor.
The search tool itself works, and even the agent works on its own; the problem appears only when I use it in an Agent Executor.
The same issue occurs when using the Google Search tool from the langchain-google-community package:
```python
from langchain_google_community import GoogleSearchAPIWrapper, GoogleSearchResults
google_tool = GoogleSearchResults(api_wrapper=GoogleSearchAPIWrapper())
```
Instead, it **does not** occur with DuckDuckGo
```python
from langchain_community.tools import DuckDuckGoSearchResults
duckduckgo_tool = DuckDuckGoSearchResults()
```
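For context, the traceback bottoms out in a plain `json.dumps` call over the tool-call arguments, which fails because a `CallbackManagerForToolRun` object has ended up inside the `args` dict. Below is a minimal, hedged sketch — pure Python, no LangChain required, using a stand-in class of the same name — that reproduces the failure and shows one possible defensive filter. It illustrates the mechanism only and is not the library's fix:

```python
import json


class CallbackManagerForToolRun:
    """Stand-in for langchain's run manager: any object json can't encode."""


# Tool-call args as they reach _lc_tool_call_to_openai_tool_call in the
# traceback: the run manager has leaked in next to the real parameters.
args = {"query": "langchain", "run_manager": CallbackManagerForToolRun()}

try:
    json.dumps(args)
except TypeError as exc:
    print(exc)  # Object of type CallbackManagerForToolRun is not JSON serializable


def drop_unserializable(d: dict) -> dict:
    """Keep only the values that json.dumps can actually encode."""
    out = {}
    for key, value in d.items():
        try:
            json.dumps(value)
        except TypeError:
            continue
        out[key] = value
    return out


print(json.dumps(drop_unserializable(args)))  # {"query": "langchain"}
```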
### System Info
From `python -m langchain_core.sys_info`
```
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Fri Mar 29 23:14:13 UTC 2024
> Python Version: 3.11.9 (main, Jun 27 2024, 21:37:40) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.2.23
> langchain: 0.2.11
> langchain_community: 0.2.10
> langsmith: 0.1.93
> langchain_google_community: 1.0.7
> langchain_openai: 0.1.17
> langchain_text_splitters: 0.2.2
> langchainhub: 0.1.20
> langserve: 0.2.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
``` | Agent Executor using some specific search tools is causing an error | https://api.github.com/repos/langchain-ai/langchain/issues/24614/comments | 4 | 2024-07-24T16:10:03Z | 2024-08-04T06:27:06Z | https://github.com/langchain-ai/langchain/issues/24614 | 2,427,980,563 | 24,614 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the [LangGraph](https://langchain-ai.github.io/langgraph/)/LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangGraph/LangChain rather than my code.
- [X] I am sure this is better as an issue [rather than a GitHub discussion](https://github.com/langchain-ai/langgraph/discussions/new/choose), since this is a LangGraph bug and not a design question.
### Example Code
```python
for s in graph.stream(
{
"messages": [
HumanMessage(content="Code hello world and print it to the terminal")
]
}
):
if "__end__" not in s:
print(s)
print("----")
```
### Error Message and Stack Trace (if applicable)
```shell
TypeError('Object of type CallbackManagerForToolRun is not JSON serializable')Traceback (most recent call last):
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langgraph\pregel\__init__.py", line 946, in stream
_panic_or_proceed(done, inflight, loop.step)
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langgraph\pregel\__init__.py", line 1347, in _panic_or_proceed
raise exc
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langgraph\pregel\executor.py", line 60, in done
task.result()
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\concurrent\futures\_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\concurrent\futures\_base.py", line 401, in __get_result
raise self._exception
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\concurrent\futures\thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langgraph\pregel\retry.py", line 25, in run_with_retry
task.proc.invoke(task.input, task.config)
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\runnables\base.py", line 2873, in invoke
input = step.invoke(input, config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langgraph\utils.py", line 102, in invoke
ret = context.run(self.func, input, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\arthur.lachini\AppData\Local\Temp\ipykernel_8788\519499601.py", line 3, in agent_node
result = agent.invoke(state)
^^^^^^^^^^^^^^^^^^^
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain\chains\base.py", line 166, in invoke
raise e
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain\chains\base.py", line 156, in invoke
self._call(inputs, run_manager=run_manager)
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain\agents\agent.py", line 1612, in _call
next_step_output = self._take_next_step(
^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain\agents\agent.py", line 1318, in _take_next_step
[
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain\agents\agent.py", line 1346, in _iter_next_step
output = self.agent.plan(
^^^^^^^^^^^^^^^^
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain\agents\agent.py", line 580, in plan
for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\runnables\base.py", line 3253, in stream
yield from self.transform(iter([input]), config, **kwargs)
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\runnables\base.py", line 3240, in transform
yield from self._transform_stream_with_config(
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\runnables\base.py", line 2053, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\runnables\base.py", line 3202, in _transform
for output in final_pipeline:
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\runnables\base.py", line 1271, in transform
for ichunk in input:
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\runnables\base.py", line 5264, in transform
yield from self.bound.transform(
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\runnables\base.py", line 1289, in transform
yield from self.stream(final, config, **kwargs)
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\language_models\chat_models.py", line 365, in stream
raise e
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\language_models\chat_models.py", line 345, in stream
for chunk in self._stream(messages, stop=stop, **kwargs):
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_openai\chat_models\base.py", line 513, in _stream
payload = self._get_request_payload(messages, stop=stop, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_openai\chat_models\base.py", line 604, in _get_request_payload
"messages": [_convert_message_to_dict(m) for m in messages],
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_openai\chat_models\base.py", line 199, in _convert_message_to_dict
_lc_tool_call_to_openai_tool_call(tc) for tc in message.tool_calls
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_openai\chat_models\base.py", line 1777, in _lc_tool_call_to_openai_tool_call
"arguments": json.dumps(tool_call["args"]),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\json\__init__.py", line 231, in dumps
return _default_encoder.encode(obj)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\json\encoder.py", line 200, in encode
chunks = self.iterencode(o, _one_shot=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\json\encoder.py", line 258, in iterencode
return _iterencode(o, 0)
^^^^^^^^^^^^^^^^^
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\json\encoder.py", line 180, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type CallbackManagerForToolRun is not JSON serializable
```
### Description
I tried to replicate the tutorial on my local machine, but the coder function does not work as it is supposed to. The researcher function works just fine and can run multiple consecutive searches, but as soon as the coder agent is called, it breaks the run. I've attached screenshots of the LangSmith dashboard to provide further insight into the error.



### System Info
Windows 10
Python 3.12.4
aiohttp==3.9.5
aiosignal==1.3.1
annotated-types==0.7.0
anyio==4.4.0
asttokens==2.4.1
attrs==23.2.0
certifi==2024.7.4
charset-normalizer==3.3.2
colorama==0.4.6
comm==0.2.2
contourpy==1.2.1
cycler==0.12.1
dataclasses-json==0.6.7
debugpy==1.8.2
decorator==5.1.1
distro==1.9.0
executing==2.0.1
fonttools==4.53.1
frozenlist==1.4.1
greenlet==3.0.3
h11==0.14.0
httpcore==1.0.5
httpx==0.27.0
idna==3.7
ipykernel==6.29.5
ipython==8.26.0
jedi==0.19.1
jsonpatch==1.33
jsonpointer==3.0.0
jupyter_client==8.6.2
jupyter_core==5.7.2
kiwisolver==1.4.5
langchain==0.2.11
langchain-community==0.2.10
langchain-core==0.2.23
langchain-experimental==0.0.63
langchain-openai==0.1.17
langchain-text-splitters==0.2.2
langchainhub==0.1.20
langgraph==0.1.10
langsmith==0.1.93
marshmallow==3.21.3
matplotlib==3.9.1
matplotlib-inline==0.1.7
multidict==6.0.5
mypy-extensions==1.0.0
nest-asyncio==1.6.0
numpy==1.26.4
openai==1.37.0
orjson==3.10.6
packaging==24.1
parso==0.8.4
pillow==10.4.0
platformdirs==4.2.2
prompt_toolkit==3.0.47
psutil==6.0.0
pure_eval==0.2.3
pydantic==2.8.2
pydantic_core==2.20.1
Pygments==2.18.0
pyparsing==3.1.2
python-dateutil==2.9.0.post0
pywin32==306
PyYAML==6.0.1
pyzmq==26.0.3
regex==2024.5.15
requests==2.32.3
six==1.16.0
sniffio==1.3.1
SQLAlchemy==2.0.31
stack-data==0.6.3
tenacity==8.5.0
tiktoken==0.7.0
tornado==6.4.1
tqdm==4.66.4
traitlets==5.14.3
types-requests==2.32.0.20240712
typing-inspect==0.9.0
typing_extensions==4.12.2
urllib3==2.2.2
wcwidth==0.2.13
yarl==1.9.4 | TypeError('Object of type CallbackManagerForToolRun is not JSON serializable') on Coder agent | https://api.github.com/repos/langchain-ai/langchain/issues/24621/comments | 11 | 2024-07-24T14:35:00Z | 2024-08-07T12:30:47Z | https://github.com/langchain-ai/langchain/issues/24621 | 2,428,311,547 | 24,621 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/how_to/output_parser_fixing/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
I was running the code from the "How to use the output-fixing parser" page. After running the last line, `new_parser.parse(misformatted)`, instead of fixing the malformed output and returning the corrected result, it raises an error:
```
ValidationError: 1 validation error for Generation
text
str type expected (type=type_error.str)
```
### Idea or request for content:
_No response_ | DOC: Running Output-fixing parser example code results in an error | https://api.github.com/repos/langchain-ai/langchain/issues/24600/comments | 1 | 2024-07-24T10:04:19Z | 2024-07-25T21:58:22Z | https://github.com/langchain-ai/langchain/issues/24600 | 2,427,134,422 | 24,600 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
def dumpd(obj: Any) -> Any:
"""Return a json dict representation of an object."""
#result = json.loads(dumps(obj))
_id: List[str] = []
try:
if hasattr(obj, "__name__"):
_id = [*obj.__module__.split("."), obj.__name__]
elif hasattr(obj, "__class__"):
_id = [*obj.__class__.__module__.split("."), obj.__class__.__name__]
except Exception:
pass
result = {
"lc": 1,
"type": "not_implemented",
"id": _id,
"repr": None,
}
name = getattr(obj, "name", None)
if name:
result['name'] = name
return result
```
### Error Message and Stack Trace (if applicable)
None
### Description
`dumpd` is much too slow. For a complex chain like ours, it costs an extra 1 s per request. We replaced it with the version above, based on `to_json_not_implemented`. Please fix this properly; at the very least, use `Serializable.to_json()` when possible.
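The overhead of the serialize-then-reparse pattern (the commented-out `json.loads(dumps(obj))` line in the snippet above) can be seen with a small, self-contained benchmark. This uses plain `json` on a nested dict standing in for a serialized chain, not LangChain's actual objects:

```python
import json
import timeit

# Nested payload standing in for the dict a complex chain serializes to.
payload = {
    "lc": 1,
    "type": "constructor",
    "kwargs": {"steps": [{"lc": 1, "id": ["module", f"Step{i}"]} for i in range(500)]},
}

def round_trip(obj):
    # What the original dumpd effectively does: dict -> JSON string -> dict.
    return json.loads(json.dumps(obj))

def direct(obj):
    # Returning the dict that to_json()/to_json_not_implemented already
    # produced skips both the encode and the decode pass.
    return obj

assert round_trip(payload) == direct(payload)  # same result either way

slow = timeit.timeit(lambda: round_trip(payload), number=200)
fast = timeit.timeit(lambda: direct(payload), number=200)
print(f"round-trip: {slow:.4f}s  direct: {fast:.4f}s")
```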
In the original code, we use `Serializable.to_json()` or `to_json_not_implemented` to get a JSON dict, then dump it to a JSON string, then load that string back to get the same dict. Why? This seems quite redundant. **Just using `to_json_not_implemented` or `Serializable.to_json()` directly would be much faster.** It is not difficult to write a special `Serializable.to_json()` that returns the plain JSON dict directly | dumpd costs extra 1s per invoke | https://api.github.com/repos/langchain-ai/langchain/issues/24599/comments | 0 | 2024-07-24T08:52:39Z | 2024-07-25T07:01:08Z | https://github.com/langchain-ai/langchain/issues/24599 | 2,426,969,368 | 24,599
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_community.document_loaders import WebBaseLoader
from langchain_core.pydantic_v1 import BaseModel
from langchain_community.embeddings import QianfanEmbeddingsEndpoint
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import Chroma
import os
from langchain_community.llms import QianfanLLMEndpoint
from langchain_core.prompts import ChatPromptTemplate
from langchain.chains import create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
# 定义向量模型
embeddings = QianfanEmbeddingsEndpoint(
qianfan_ak='****',
qianfan_sk='****',
chunk_size= 16,
model="Embedding-V1"
)
### Error Message and Stack Trace (if applicable)
USER_AGENT environment variable not set, consider setting it to identify your requests.
Traceback (most recent call last):
File "C:\Users\ISSUSER\PycharmProjects\pythonProject\LangChainRetrievalChain.py", line 23, in <module>
embeddings = QianfanEmbeddingsEndpoint(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\ISSUSER\AppData\Local\Programs\Python\Python312\Lib\site-packages\pydantic\v1\main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 2 validation errors for QianfanEmbeddingsEndpoint
qianfan_ak
str type expected (type=type_error.str)
qianfan_sk
str type expected (type=type_error.str)
### Description
qianfan_ak='****' has been checked and is a valid string
qianfan_sk='****' has been checked and is a valid string
Both values are real credentials (masked here), yet pydantic still reports `str type expected` for both fields.
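One possible workaround while the constructor kwargs fail validation is to supply the credentials through the `QIANFAN_AK` / `QIANFAN_SK` environment variables, which the integration also reads. This is an untested sketch; whether it sidesteps the validation error on this exact version is an assumption:

```python
import os

# Hypothetical workaround: set the env vars the Qianfan integration reads
# instead of passing qianfan_ak/qianfan_sk to the constructor.
os.environ["QIANFAN_AK"] = "****"  # replace with your real API key
os.environ["QIANFAN_SK"] = "****"  # replace with your real secret key

# from langchain_community.embeddings import QianfanEmbeddingsEndpoint
# embeddings = QianfanEmbeddingsEndpoint(chunk_size=16, model="Embedding-V1")
print(sorted(k for k in os.environ if k.startswith("QIANFAN")))
```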
### System Info
C:\Users\ISSUSER>pip list
Package Version
------------------------ --------
aiohttp 3.9.5
aiolimiter 1.1.0
aiosignal 1.3.1
annotated-types 0.7.0
attrs 23.2.0
bce-python-sdk 0.9.17
beautifulsoup4 4.12.3
bs4 0.0.2
certifi 2024.7.4
charset-normalizer 3.3.2
click 8.1.7
colorama 0.4.6
comtypes 1.4.5
dataclasses-json 0.6.7
dill 0.3.8
diskcache 5.6.3
frozenlist 1.4.1
future 1.0.0
greenlet 3.0.3
idna 3.7
jsonpatch 1.33
jsonpointer 3.0.0
langchain 0.2.9
langchain-community 0.2.7
langchain-core 0.2.21
langchain-text-splitters 0.2.2
langsmith 0.1.92
markdown-it-py 3.0.0
marshmallow 3.21.3
mdurl 0.1.2
multidict 6.0.5
multiprocess 0.70.16
mypy-extensions 1.0.0
numpy 1.26.4
orjson 3.10.6
packaging 24.1
pip 24.1.2
prompt_toolkit 3.0.47
pycryptodome 3.20.0
pydantic 2.8.2
pydantic_core 2.20.1
Pygments 2.18.0
python-dotenv 1.0.1
PyYAML 6.0.1
qianfan 0.4.1.2
requests 2.32.3
rich 13.7.1
shellingham 1.5.4
six 1.16.0
soupsieve 2.5
SQLAlchemy 2.0.31
tenacity 8.5.0
typer 0.12.3
typing_extensions 4.12.2
typing-inspect 0.9.0
uiautomation 2.0.20
urllib3 2.2.2
validators 0.33.0
wcwidth 0.2.13
yarl 1.9.4 | QianfanEmbeddingsEndpoint error in LangChain 0.2.9 | https://api.github.com/repos/langchain-ai/langchain/issues/24590/comments | 0 | 2024-07-24T01:28:50Z | 2024-07-24T01:31:22Z | https://github.com/langchain-ai/langchain/issues/24590 | 2,426,398,316 | 24,590 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
loader = S3DirectoryLoader(bucket=s3_bucket_name, prefix=s3_prefix)
try:
documents = loader.load()
logging.info(f"size of the loaded documents {len(documents)}")
except Exception as e:
logging.info(f"error loading documents: {e}")
### Error Message and Stack Trace (if applicable)
Detected a JSON file that does not conform to the Unstructured schema. partition_json currently only processes serialized Unstructured output.
doc = loader.load()
^^^^^^^^^^^^^
File "/prj/.venv/lib/python3.12/site-packages/langchain_community/document_loaders/s3_directory.py", line 139, in load
docs.extend(loader.load())
^^^^^^^^^^^^^
File "/prj/.venv/lib/python3.12/site-packages/langchain_core/document_loaders/base.py", line 30, in load
return list(self.lazy_load())
^^^^^^^^^^^^^^^^^^^^^^
File "/prj/.venv/lib/python3.12/site-packages/langchain_community/document_loaders/unstructured.py", line 89, in lazy_load
elements = self._get_elements()
^^^^^^^^^^^^^^^^^^^^
File "/prj/.venv/lib/python3.12/site-packages/langchain_community/document_loaders/s3_file.py", line 135, in _get_elements
return partition(filename=file_path, **self.unstructured_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/prj/.venv/lib/python3.12/site-packages/unstructured/partition/auto.py", line 389, in partition
raise ValueError(
ValueError: Detected a JSON file that does not conform to the Unstructured schema. partition_json currently only processes serialized Unstructured output.
### Description
My S3 bucket has a single folder, and this folder contains JSON files.
Bucket name: "abc-bc-name"
Prefix: "output"
Each file's content is plain JSON:
{
"abc": "This is a text json file",
"source": "https://asf.test/4865422_f4866011606d84f50d10e60e0b513b7",
"correlation_id": "4865422_f4866011606d84f50d10e60e0b513b7"
}
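Because `partition_json` only accepts unstructured's own element schema, one workaround is to skip `S3DirectoryLoader` for these files and build the documents by hand after parsing the JSON yourself. The sketch below is self-contained: it uses a minimal stand-in for `langchain_core.documents.Document` and an inline record instead of a real S3 read (in practice each `raw` string would come from boto3's `get_object`), so everything beyond the three fields shown above is an assumption:

```python
import json


class Document:
    """Minimal stand-in for langchain_core.documents.Document."""

    def __init__(self, page_content: str, metadata: dict):
        self.page_content = page_content
        self.metadata = metadata


def record_to_document(raw: str) -> Document:
    """Turn one plain-JSON record (as stored in the bucket) into a Document."""
    data = json.loads(raw)
    return Document(
        page_content=data["abc"],
        metadata={
            "source": data["source"],
            "correlation_id": data["correlation_id"],
        },
    )


# Inline sample standing in for get_object(...)["Body"].read().decode().
raw = json.dumps({
    "abc": "This is a text json file",
    "source": "https://asf.test/4865422_f4866011606d84f50d10e60e0b513b7",
    "correlation_id": "4865422_f4866011606d84f50d10e60e0b513b7",
})
doc = record_to_document(raw)
print(doc.page_content)  # This is a text json file
```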
### System Info
langchain==0.2.10
langchain-cli==0.0.25
langchain-community==0.2.9
langchain-core==0.2.22
langchain-openai==0.1.17
langchain-text-splitters==0.2.2
macOS
Python 3.12.0 | Detected a JSON file that does not conform to the Unstructured schema. partition_json currently only processes serialized Unstructured output while using langchain S3DirectoryLoader | https://api.github.com/repos/langchain-ai/langchain/issues/24588/comments | 3 | 2024-07-24T00:00:20Z | 2024-08-02T23:38:10Z | https://github.com/langchain-ai/langchain/issues/24588 | 2,426,320,642 | 24,588 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
# This part works as expected
llm = HuggingFaceEndpoint(endpoint_url="http://127.0.0.1:8080")
# This part raises huggingface_hub.errors.LocalTokenNotFoundError
chat_llm = ChatHuggingFace(llm=llm)
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
.venv/lib/python3.10/site-packages/langchain_huggingface/chat_models/huggingface.py", line 320, in __init__
self._resolve_model_id()
.venv/lib/python3.10/site-packages/langchain_huggingface/chat_models/huggingface.py", line 458, in _resolve_model_id
available_endpoints = list_inference_endpoints("*")
.venv/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 7081, in list_inference_endpoints
user = self.whoami(token=token)
.venv/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
.venv/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 1390, in whoami
headers=self._build_hf_headers(
.venv/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 8448, in _build_hf_headers
return build_hf_headers(
.venv/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
.venv/lib/python3.10/site-packages/huggingface_hub/utils/_headers.py", line 124, in build_hf_headers
token_to_send = get_token_to_send(token)
.venv/lib/python3.10/site-packages/huggingface_hub/utils/_headers.py", line 158, in get_token_to_send
raise LocalTokenNotFoundError(
huggingface_hub.errors.LocalTokenNotFoundError: Token is required (`token=True`), but no token found. You need to provide a token or be logged in to Hugging Face with `huggingface-cli login` or `huggingface_hub.login`. See https://huggingface.co/settings/tokens.
### Description
- I am trying to use the `langchain_huggingface` library to connect to a TGI instance served locally. The problem is that wrapping a `HuggingFaceEndpoint` in `ChatHuggingFace` raises an error requesting a user token, which shouldn't be necessary when the model has already been downloaded and is being served locally.
- There is a similar issue #23872, but the fix mentioned there doesn't work: adding the `model_id` parameter to `ChatHuggingFace` doesn't avoid falling into the following case:
```python
class ChatHuggingFace(BaseChatModel):
"""Hugging Face LLM's as ChatModels.
...
""" # noqa: E501
...
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
from transformers import AutoTokenizer # type: ignore[import]
self._resolve_model_id() # ---> Even when providing the model_id it will enter here
self.tokenizer = (
AutoTokenizer.from_pretrained(self.model_id)
if self.tokenizer is None
else self.tokenizer
)
...
def _resolve_model_id(self) -> None:
"""Resolve the model_id from the LLM's inference_server_url"""
from huggingface_hub import list_inference_endpoints # type: ignore[import]
if _is_huggingface_hub(self.llm) or (
hasattr(self.llm, "repo_id") and self.llm.repo_id
):
self.model_id = self.llm.repo_id
return
elif _is_huggingface_textgen_inference(self.llm):
endpoint_url: Optional[str] = self.llm.inference_server_url
elif _is_huggingface_pipeline(self.llm):
self.model_id = self.llm.model_id
return
else: # This is the case we are in when _is_huggingface_endpoint() is True
endpoint_url = self.llm.endpoint_url
available_endpoints = list_inference_endpoints("*") # ---> This line raises the error if we don't provide the hf token
for endpoint in available_endpoints:
if endpoint.url == endpoint_url:
self.model_id = endpoint.repository
if not self.model_id:
raise ValueError(
"Failed to resolve model_id:"
f"Could not find model id for inference server: {endpoint_url}"
"Make sure that your Hugging Face token has access to the endpoint."
)
```
I was able to work around the issue by modifying the constructor so that a provided `model_id` is not resolved again:
```python
class ChatHuggingFace(BaseChatModel):
"""Hugging Face LLM's as ChatModels.
...
""" # noqa: E501
...
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
from transformers import AutoTokenizer # type: ignore[import]
self.model_id or self._resolve_model_id() # ---> Not a good solution: if model_id is invalid, tokenizer instantiation will fail (unless a tokenizer is provided), and the other hf_hub inference cases are never checked
self.tokenizer = (
AutoTokenizer.from_pretrained(self.model_id)
if self.tokenizer is None
else self.tokenizer
)
```
I imagine there is a better way to solve this, for example by adding logic that checks whether the `endpoint_url` is a valid IP to request, whether it is served by TGI, or simply whether it points at localhost:
```python
class ChatHuggingFace(BaseChatModel):
"""Hugging Face LLM's as ChatModels.
...
""" # noqa: E501
...
def _resolve_model_id(self) -> None:
"""Resolve the model_id from the LLM's inference_server_url"""
from huggingface_hub import list_inference_endpoints # type: ignore[import]
if _is_huggingface_hub(self.llm) or (
hasattr(self.llm, "repo_id") and self.llm.repo_id
):
self.model_id = self.llm.repo_id
return
elif _is_huggingface_textgen_inference(self.llm):
endpoint_url: Optional[str] = self.llm.inference_server_url
elif _is_huggingface_pipeline(self.llm):
self.model_id = self.llm.model_id
return
elif _is_huggingface_endpoint(self.llm): # ---> New case added to check url
... # Take the following code with a grain of salt
if is_tgi_hosted(self.llm.endpoint_url):
if not self.model_id and not self.tokenizer:
raise ValueError("You must provide valid model id or a valid tokenizer")
return
...
endpoint_url = self.llm.endpoint_url
else: # ---> New last case in which no valid huggingface interface was provided
raise TypeError("llm must be `HuggingFaceTextGenInference`, `HuggingFaceEndpoint`, `HuggingFaceHub`, or `HuggingFacePipeline`.")
available_endpoints = list_inference_endpoints("*")
for endpoint in available_endpoints:
if endpoint.url == endpoint_url:
self.model_id = endpoint.repository
if not self.model_id:
raise ValueError(
"Failed to resolve model_id:"
f"Could not find model id for inference server: {endpoint_url}"
"Make sure that your Hugging Face token has access to the endpoint."
)
```
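The `is_tgi_hosted` helper in the proposal is only a placeholder; the "simply by checking if it's localhost" branch could be sketched like this (a hypothetical helper, not part of the library):

```python
from urllib.parse import urlparse


def is_local_endpoint(endpoint_url: str) -> bool:
    """Heuristic: treat loopback hosts as locally served endpoints."""
    host = urlparse(endpoint_url).hostname
    return host in ("localhost", "127.0.0.1", "::1")
```

With something like this, `_resolve_model_id` could skip the `list_inference_endpoints` call (and therefore the token requirement) for local TGI servers.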
### System Info
System Information
------------------
> OS: Linux
> OS Version: #126-Ubuntu SMP Mon Jul 1 10:14:24 UTC 2024
> Python Version: 3.10.14 (main, Jul 18 2024, 23:22:54) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.2.22
> langchain: 0.2.10
> langchain_community: 0.2.9
> langsmith: 0.1.93
> langchain_google_community: 1.0.7
> langchain_huggingface: 0.0.3
> langchain_openai: 0.1.17
> langchain_text_splitters: 0.2.2 | langchain-huggingface: Using ChatHuggingFace requires hf token for local TGI using localhost HuggingFaceEndpoint | https://api.github.com/repos/langchain-ai/langchain/issues/24571/comments | 3 | 2024-07-23T19:49:50Z | 2024-07-24T13:41:56Z | https://github.com/langchain-ai/langchain/issues/24571 | 2,426,003,836 | 24,571 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_openai import OpenAIEmbeddings
from langchain_qdrant import QdrantVectorStore
openai_api_key = ''
qdrant_api_key = ''
qdrant_url = ''
qdrant_collection = ''
query = ''
embeddings = OpenAIEmbeddings(api_key=openai_api_key)
qdrant = QdrantVectorStore.from_existing_collection(
embedding=embeddings,
url=qdrant_url,
api_key=qdrant_api_key,
collection_name=qdrant_collection,
)
retriever = qdrant.as_retriever()
print(retriever.invoke(query)[0])
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/Users/alexanderschmidt/Projects/qdrant_issue/main.py", line 10, in <module>
qdrant = QdrantVectorStore.from_existing_collection(
File "/Users/alexanderschmidt/.local/share/virtualenvs/qdrant_issue-MiqCFk3H/lib/python3.9/site-packages/langchain_qdrant/qdrant.py", line 286, in from_existing_collection
return cls(
File "/Users/alexanderschmidt/.local/share/virtualenvs/qdrant_issue-MiqCFk3H/lib/python3.9/site-packages/langchain_qdrant/qdrant.py", line 87, in __init__
self._validate_collection_config(
File "/Users/alexanderschmidt/.local/share/virtualenvs/qdrant_issue-MiqCFk3H/lib/python3.9/site-packages/langchain_qdrant/qdrant.py", line 924, in _validate_collection_config
cls._validate_collection_for_dense(
File "/Users/alexanderschmidt/.local/share/virtualenvs/qdrant_issue-MiqCFk3H/lib/python3.9/site-packages/langchain_qdrant/qdrant.py", line 978, in _validate_collection_for_dense
vector_config = vector_config[vector_name] # type: ignore
TypeError: 'VectorParams' object is not subscriptable
### Description
I am not able to get Qdrant's `as_retriever` working and always receive the error message:
TypeError: 'VectorParams' object is not subscriptable
### System Info
❯ python -m langchain_core.sys_info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.3.0: Thu Dec 21 02:29:41 PST 2023; root:xnu-10002.81.5~11/RELEASE_ARM64_T8122
> Python Version: 3.9.6 (default, Feb 3 2024, 15:58:27)
[Clang 15.0.0 (clang-1500.3.9.4)]
Package Information
-------------------
> langchain_core: 0.2.22
> langchain: 0.2.10
> langsmith: 0.1.93
> langchain_openai: 0.1.17
> langchain_qdrant: 0.1.2
> langchain_text_splitters: 0.2.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | TypeError: 'VectorParams' object is not subscriptable | https://api.github.com/repos/langchain-ai/langchain/issues/24558/comments | 6 | 2024-07-23T15:49:15Z | 2024-07-25T13:15:59Z | https://github.com/langchain-ai/langchain/issues/24558 | 2,425,545,329 | 24,558 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_ollama import ChatOllama
MODEL_NAME = "some_local_model"
MODEL_API_BASE_URL = "http://<some_host>:11434"
# there is no possibility to supply base_url
# as it is done in `from langchain_community.llms.ollama import Ollama` package
llm = ChatOllama(model=MODEL_NAME)
```
### Error Message and Stack Trace (if applicable)
Since the underlying `ollama` client ends up using `localhost`, the API call fails with connection refused.
### Description
I am trying to use the partner package `langchain_ollama`. My Ollama server is running on another machine, and the API does not provide a way to specify the `base_url`.
The community integration (`from langchain_community.llms.ollama import Ollama`) does provide that support.
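Until `base_url` lands in the partner package, one possible workaround is the environment-variable route. This is a sketch under an assumption: that the `ollama` Python client used by `langchain_ollama` falls back to the `OLLAMA_HOST` environment variable when no explicit host is passed (the host value below is a placeholder):

```python
import os

# Hypothetical workaround: set OLLAMA_HOST before constructing ChatOllama so
# requests are redirected away from localhost. Whether the underlying client
# honors this variable in your version is an assumption worth verifying.
os.environ["OLLAMA_HOST"] = "http://192.168.1.50:11434"


def effective_host():
    # Mirrors the fallback the client is assumed to apply.
    return os.environ.get("OLLAMA_HOST", "http://127.0.0.1:11434")
```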
### System Info
langchain==0.2.10
langchain-community==0.2.9
langchain-core==0.2.22
langchain-experimental==0.0.62
langchain-ollama==0.1.0
langchain-openai==0.1.17
langchain-text-splitters==0.2.2
langchainhub==0.1.20 | ChatOllama & Ollama from langchain_ollama partner package does not provide support to pass base_url | https://api.github.com/repos/langchain-ai/langchain/issues/24555/comments | 8 | 2024-07-23T15:26:20Z | 2024-07-28T18:25:59Z | https://github.com/langchain-ai/langchain/issues/24555 | 2,425,496,515 | 24,555 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_milvus.vectorstores import Milvus
from langchain.schema import Document
from langchain_community.embeddings import OllamaEmbeddings
URI = "<mymilvusURI>"
# Initialize embedding function
embedding_function = embeddings_model = OllamaEmbeddings(
model="<model>",
base_url="<myhostedURL>"
)
# Milvus vector store initialization parameters
collection_name = "example_collection"
# Initialize the Milvus vector store
milvus_store = Milvus(
embedding_function=embedding_function,
collection_name=collection_name,
    connection_args={"uri": URI},
drop_old=True, # Set to True if you want to drop the old collection if it exists
auto_id=True
)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
There appears to be an issue with the Milvus vector store implementation where the collection is not being created during initialization. This occurs because the `_create_collection` method is never called when initializing the `Milvus` class without providing embeddings.
1. When initializing `Milvus()` without providing embeddings, the `_init` method is called from `__init__`.
2. In `_init`, the collection creation is conditional on `embeddings` being provided:
```python
if embeddings is not None:
    self._create_collection(embeddings, metadatas)
```
Am I missing something here?
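A minimal pure-Python sketch of the behavior described above (an illustration of the lazy-creation pattern, not the actual Milvus integration code): the collection only materializes once data arrives, so constructing the store with just an `embedding_function` leaves it uncreated.

```python
class LazyStore:
    """Toy model of a vector store whose collection is created lazily."""

    def __init__(self, embedding_function, texts=None):
        self.embedding_function = embedding_function
        self.collection = None
        if texts:  # mirrors: collection created only when data is provided
            self._create_collection(texts)

    def _create_collection(self, texts):
        self.collection = [self.embedding_function(t) for t in texts]

    def add_texts(self, texts):
        if self.collection is None:
            self._create_collection(texts)
        else:
            self.collection.extend(self.embedding_function(t) for t in texts)


store = LazyStore(embedding_function=len)  # no texts -> collection stays None
collection_missing_at_init = store.collection is None
store.add_texts(["hello", "world"])        # first insert triggers creation
```

If this reading is right, calling `add_texts`/`add_documents` after construction (or using `Milvus.from_texts`/`from_documents`) should be what triggers `_create_collection`.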
### System Info
linux
python 3.10.12
| Milvus Vector Store: Collection Not Created During Initialization | https://api.github.com/repos/langchain-ai/langchain/issues/24554/comments | 0 | 2024-07-23T14:16:09Z | 2024-07-23T14:18:42Z | https://github.com/langchain-ai/langchain/issues/24554 | 2,425,334,524 | 24,554 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from typing import List, Tuple
from langchain_community.vectorstores import Chroma
from langchain_core.documents import Document
from my_embeddings import my_embeddings
vectorStore = Chroma(
collection_name="products",
embedding_function=my_embeddings,
persist_directory="./database",
)
# these two functions should give the same result, but the relevance scores are different
def get_similar_docs1(sku: str, count: int) -> List[Tuple[Document, float]]:
base_query = vectorStore.get(ids=sku).get("documents")[0]
return vectorStore.similarity_search_with_relevance_scores(query=base_query, k=(count + 1))[1:]
def get_similar_docs2(sku: str, count: int) -> List[Tuple[Document, float]]:
base_vector = vectorStore.get(ids=sku, include=["embeddings"]).get("embeddings")[0]
return vectorStore.similarity_search_by_vector_with_relevance_scores(embedding=base_vector, k=(count + 1))[1:]
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am writing a function that finds the `count` most similar documents to the document with id `sku`.
I started with the first function and it works as expected. I then tried to rewrite it so that it retrieves the stored embedding vector instead of computing it again. The second function returns the same documents as the first (also in the same order), but the relevance scores are completely different. First, it seems that the most relevant result now has the lowest relevance score, and even computing `(1 - score)` does not reproduce the scores from the first function.
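For context on why the raw numbers differ: LangChain's base `VectorStore` maps distances to relevance with metric-dependent functions roughly like the ones below (sketched from langchain-core 0.2.x; treat the exact formulas as an assumption for your version). If the by-vector method returns the raw distance while the query-based method applies one of these conversions, that would explain both the inverted ordering and why a plain `1 - score` doesn't match when the collection uses the L2 metric.

```python
import math


def cosine_relevance(distance):
    # cosine distance -> relevance score (higher means more similar)
    return 1.0 - distance


def euclidean_relevance(distance):
    # normalized L2 distance (unit vectors) -> relevance in roughly [0, 1]
    return 1.0 - distance / math.sqrt(2)
```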
### System Info
System Information
------------------
> OS: Linux
> OS Version: #38-Ubuntu SMP PREEMPT_DYNAMIC Fri Jun 7 15:25:01 UTC 2024
> Python Version: 3.12.3 (main, Apr 10 2024, 05:33:47) [GCC 13.2.0]
Package Information
-------------------
> langchain_core: 0.2.13
> langchain: 0.2.7
> langchain_community: 0.2.7
> langsmith: 0.1.85
> langchain_huggingface: 0.0.3
> langchain_text_splitters: 0.2.2
> langchainhub: 0.1.20
> langgraph: 0.1.7
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langserve | Chroma - wrong relevance scores. | https://api.github.com/repos/langchain-ai/langchain/issues/24545/comments | 1 | 2024-07-23T11:24:29Z | 2024-07-23T11:46:17Z | https://github.com/langchain-ai/langchain/issues/24545 | 2,424,952,624 | 24,545 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_core.prompts import PromptTemplate
from langchain_huggingface.llms import HuggingFacePipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
### Error Message and Stack Trace (if applicable)
ImportError: cannot import name 'AutoModelForCausalLM' from partially initialized module 'transformers' (most likely due to a circular import) (~\venv2\Lib\site-packages\transformers\__init__.py)
### Description
I created a virtual environment "venv2", and after running the command `pip install langchain_huggingface`, I can't import `AutoModelForCausalLM` from transformers.
### System Info
annotated-types==0.7.0
certifi==2024.7.4
charset-normalizer==3.3.2
colorama==0.4.6
filelock==3.15.4
fsspec==2024.6.1
huggingface-hub==0.24.0
idna==3.7
intel-openmp==2021.4.0
Jinja2==3.1.4
joblib==1.4.2
jsonpatch==1.33
jsonpointer==3.0.0
langchain-core==0.2.22
langchain-huggingface==0.0.3
langsmith==0.1.93
MarkupSafe==2.1.5
mkl==2021.4.0
mpmath==1.3.0
networkx==3.3
numpy==1.26.4
orjson==3.10.6
packaging==24.1
pillow==10.4.0
pydantic==2.8.2
pydantic_core==2.20.1
PyYAML==6.0.1
regex==2024.5.15
requests==2.32.3
safetensors==0.4.3
scikit-learn==1.5.1
scipy==1.14.0
sentence-transformers==3.0.1
sympy==1.13.1
tbb==2021.13.0
tenacity==8.5.0
threadpoolctl==3.5.0
tokenizers==0.19.1
torch==2.3.1
tqdm==4.66.4
transformers==4.42.4
typing_extensions==4.12.2
urllib3==2.2.2 | ImportError: cannot import name 'AutoModelForCausalLM' from partially initialized module 'transformers' (most likely due to a circular import) | https://api.github.com/repos/langchain-ai/langchain/issues/24542/comments | 0 | 2024-07-23T09:54:16Z | 2024-07-23T09:59:00Z | https://github.com/langchain-ai/langchain/issues/24542 | 2,424,769,491 | 24,542 |
[
"langchain-ai",
"langchain"
] | ### URL
https://js.langchain.com/v0.2/docs/integrations/retrievers/vectorstore
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
With the new version of the docs (v0.2) for LangChain JS, it is getting hard to find the exact information developers are looking for. Version v0.1 was pretty handy and contained descriptions of all the retrievers and everything else, but finding that context in v0.2 is very difficult. Please update the content or the website to make it handy again.
Otherwise, the overall functionality is awesome.
### Idea or request for content:
I am mainly focused on improving the description of every aspect of the LangChain v0.2 docs.
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
import langchain_google_genai raise ImportError
### Error Message and Stack Trace (if applicable)
ImportError Traceback (most recent call last)
[<ipython-input-34-26070003cb78>](https://localhost:8080/#) in <cell line: 6>()
4 # !pip install --upgrade langchain
5 # from langchain_google_genai import GoogleGenerativeAI
----> 6 import langchain_google_genai# import GoogleGenerativeAI
7
8 # llm = ChatGoogleGenerativeAI(model="gemini-pro")
1 frames
[/usr/local/lib/python3.10/dist-packages/langchain_google_genai/__init__.py](https://localhost:8080/#) in <module>
57
58 from langchain_google_genai._enums import HarmBlockThreshold, HarmCategory
---> 59 from langchain_google_genai.chat_models import ChatGoogleGenerativeAI
60 from langchain_google_genai.embeddings import GoogleGenerativeAIEmbeddings
61 from langchain_google_genai.genai_aqa import (
[/usr/local/lib/python3.10/dist-packages/langchain_google_genai/chat_models.py](https://localhost:8080/#) in <module>
54 )
55 from langchain_core.language_models import LanguageModelInput
---> 56 from langchain_core.language_models.chat_models import BaseChatModel, LangSmithParams
57 from langchain_core.messages import (
58 AIMessage,
ImportError: cannot import name 'LangSmithParams' from 'langchain_core.language_models.chat_models' (/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/chat_models.py)
### Description
I am trying to use the `GoogleGenerativeAI` wrapper for a project of mine.
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Thu Jun 27 21:05:47 UTC 2024
> Python Version: 3.10.12 (main, Mar 22 2024, 16:50:05) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.2.22
> langchain: 0.2.10
> langchain_community: 0.0.38
> langsmith: 0.1.93
> langchain_google_genai: 1.0.8
> langchain_openai: 0.1.7
> langchain_text_splitters: 0.2.2 | ImportError: cannot import name 'LangSmithParams' from 'langchain_core.language_models.chat_models'(import langchain_google_genai) in collab environment | https://api.github.com/repos/langchain-ai/langchain/issues/24533/comments | 6 | 2024-07-23T07:19:19Z | 2024-08-05T18:12:04Z | https://github.com/langchain-ai/langchain/issues/24533 | 2,424,456,171 | 24,533 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from datetime import date
import requests
from langchain_community.utilities import SerpAPIWrapper
from langchain_core.output_parsers import StrOutputParser
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langchain import hub
from langchain.agents import create_openai_functions_agent
from langchain.agents import AgentExecutor
serpapi_api_key = "xxxxxxxxxx"
api_key = "sk-xxxxxxxxx"
api_url = "https://ai-yyds.com/v1"
llm = ChatOpenAI(base_url=api_url, api_key=api_key, model_name="gpt-4")
prompt = hub.pull("hwchase17/openai-functions-agent")
print(prompt.messages)
@tool
def search(text: str):
"""This tool is only used when real-time information needs to be searched. The search returns only the first 3 items"""
serp = SerpAPIWrapper(serpapi_api_key=serpapi_api_key)
response = serp.run(text)
print(type(response))
content = ""
if type(response) is list:
for item in response:
content += str(item["title"]) + "\n"
else:
content = response
return content
@tool
def time() -> str:
"""Return today's date and use it for any questions related to today's date.
The input should always be an empty string, and this function will always return today's date. Any mathematical operation on a date should occur outside of this function"""
return str(date.today())
@tool
def weather(city: str):
"""When you need to check the weather, you can use this tool, which returns the weather conditions for the day, tomorrow, and the day after tomorrow"""
    url = f"https://api.seniverse.com/v3/weather/daily.json?key=SrlXSW6OX9PssfOJ1&location={city}&language=zh-Hans&unit=c&start=0"  # use the requested city instead of a hard-coded location
response = requests.get(url)
data = response.json()
if not data or len(data['results']) == 0:
return None
daily = data['results'][0]["daily"]
content = ""
res = []
for day in daily:
info = {"city": city, "date": day["date"], "info": day["text_day"], "temperature_high": day["high"],
"temperature_low": day["low"]}
content += f"{city} date:{day['date']} info:{day['text_day']} maximum temperature:{day['high']} minimum temperature:{day['low']}\n"
res.append(info)
return content
tools = [time, weather, search]
agent = create_openai_functions_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
response1 = agent_executor.invoke({"input": "What's the weather like in Shanghai tomorrow"})
print(response1)
```
### Error Message and Stack Trace (if applicable)
```
File "/home/gujiachun/PycharmProjects/rainbow-robot/.venv/lib/python3.8/site-packages/langchain/chains/base.py", line 156, in invoke
self._call(inputs, run_manager=run_manager)
File "/home/gujiachun/PycharmProjects/rainbow-robot/.venv/lib/python3.8/site-packages/langchain/agents/agent.py", line 1636, in _call
next_step_output = self._take_next_step(
File "/home/gujiachun/PycharmProjects/rainbow-robot/.venv/lib/python3.8/site-packages/langchain/agents/agent.py", line 1342, in _take_next_step
[
File "/home/gujiachun/PycharmProjects/rainbow-robot/.venv/lib/python3.8/site-packages/langchain/agents/agent.py", line 1342, in <listcomp>
[
File "/home/gujiachun/PycharmProjects/rainbow-robot/.venv/lib/python3.8/site-packages/langchain/agents/agent.py", line 1370, in _iter_next_step
output = self.agent.plan(
File "/home/gujiachun/PycharmProjects/rainbow-robot/.venv/lib/python3.8/site-packages/langchain/agents/agent.py", line 463, in plan
for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
File "/home/gujiachun/PycharmProjects/rainbow-robot/.venv/lib/python3.8/site-packages/langchain_core/runnables/base.py", line 3251, in stream
yield from self.transform(iter([input]), config, **kwargs)
File "/home/gujiachun/PycharmProjects/rainbow-robot/.venv/lib/python3.8/site-packages/langchain_core/runnables/base.py", line 3238, in transform
yield from self._transform_stream_with_config(
File "/home/gujiachun/PycharmProjects/rainbow-robot/.venv/lib/python3.8/site-packages/langchain_core/runnables/base.py", line 2052, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
File "/home/gujiachun/PycharmProjects/rainbow-robot/.venv/lib/python3.8/site-packages/langchain_core/runnables/base.py", line 3200, in _transform
for output in final_pipeline:
File "/home/gujiachun/PycharmProjects/rainbow-robot/.venv/lib/python3.8/site-packages/langchain_core/runnables/base.py", line 1270, in transform
for ichunk in input:
File "/home/gujiachun/PycharmProjects/rainbow-robot/.venv/lib/python3.8/site-packages/langchain_core/runnables/base.py", line 5262, in transform
yield from self.bound.transform(
File "/home/gujiachun/PycharmProjects/rainbow-robot/.venv/lib/python3.8/site-packages/langchain_core/runnables/base.py", line 1288, in transform
yield from self.stream(final, config, **kwargs)
File "/home/gujiachun/PycharmProjects/rainbow-robot/.venv/lib/python3.8/site-packages/langchain_core/language_models/chat_models.py", line 360, in stream
raise e
File "/home/gujiachun/PycharmProjects/rainbow-robot/.venv/lib/python3.8/site-packages/langchain_core/language_models/chat_models.py", line 340, in stream
for chunk in self._stream(messages, stop=stop, **kwargs):
File "/home/gujiachun/PycharmProjects/rainbow-robot/.venv/lib/python3.8/site-packages/langchain_openai/chat_models/base.py", line 520, in _stream
response = self.client.create(**payload)
File "/home/gujiachun/PycharmProjects/rainbow-robot/.venv/lib/python3.8/site-packages/openai/_utils/_utils.py", line 277, in wrapper
return func(*args, **kwargs)
File "/home/gujiachun/PycharmProjects/rainbow-robot/.venv/lib/python3.8/site-packages/openai/resources/chat/completions.py", line 643, in create
return self._post(
File "/home/gujiachun/PycharmProjects/rainbow-robot/.venv/lib/python3.8/site-packages/openai/_base_client.py", line 1266, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File "/home/gujiachun/PycharmProjects/rainbow-robot/.venv/lib/python3.8/site-packages/openai/_base_client.py", line 942, in request
return self._request(
File "/home/gujiachun/PycharmProjects/rainbow-robot/.venv/lib/python3.8/site-packages/openai/_base_client.py", line 1046, in _request
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid value for 'content': expected a string, got null. (request id: 20240723144941715017377sn10oSMg) (request id: 2024072306494157956522013257597)", 'type': 'invalid_request_error', 'param': 'messages.[2].content', 'code': None}}
```
### Description
Executing the above code sometimes returns normally and sometimes reports an error:
openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid value for 'content': expected a string, got null. (request id: 20240723111146966761056DQSQiv7T) (request id: 2024072303114683478387128512399)", 'type': 'invalid_request_error', 'param': 'messages.[2].content', 'code': None}}
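The 400 singles out `messages.[2].content` being null. When an assistant turn carries only a `function_call`, its `content` field is `None`, and some OpenAI-compatible relays reject that even though the official API accepts it. A possible client-side workaround, sketched under that assumption (the message list below is illustrative), is to coerce `None` content to an empty string before the request goes out:

```python
def sanitize_messages(messages):
    """Replace null content with an empty string so strict relays accept it."""
    for message in messages:
        if message.get("content") is None:
            message["content"] = ""
    return messages


history = [
    {"role": "user", "content": "What's the weather like in Shanghai tomorrow"},
    {"role": "assistant", "content": None,
     "function_call": {"name": "weather", "arguments": '{"city": "Shanghai"}'}},
    {"role": "function", "name": "weather", "content": "Shanghai: sunny"},
]
history = sanitize_messages(history)
```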
### System Info
platform: Mac
python: 3.8
> langchain_core: 0.2.22
> langchain: 0.2.9
> langchain_community: 0.2.9
> langsmith: 0.1.90
> langchain_openai: 0.1.17
> langchain_text_splitters: 0.2.2
> langchainhub: 0.1.20
openai 1.35.13
| openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid value for 'content': expected a string, got null | https://api.github.com/repos/langchain-ai/langchain/issues/24531/comments | 3 | 2024-07-23T06:52:24Z | 2024-07-24T10:28:41Z | https://github.com/langchain-ai/langchain/issues/24531 | 2,424,402,189 | 24,531 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_openai import ChatOpenAI
from langchain_core.prompts import PromptTemplate
from langchain.globals import set_debug
set_debug(True)
prompt = PromptTemplate(template="user:{text}", input_variables=["text"])
model = ChatOpenAI(model="gpt-4o-mini")
chain = prompt | model
chain.invoke({"text": "hello"})
### Error Message and Stack Trace (if applicable)
[llm/start] [chain:RunnableSequence > llm:ChatOpenAI] Entering LLM run with input:
{
"prompts": [
"Human: user:hello"
]
}
### Description
Issue 1: Even when using a custom prompt, "Human: " is added to all of my prompts, which has been messing up my outputs.
Issue 2 (possible, unverified): This has me thinking that "\n AI:" is added to the prompt, which is in line with how my LLMs are reacting. For example, if I end the prompt with "\nSummary:\n", the model would sometimes repeat "summary" unless explicitly told not to.
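For what it's worth on issue 1: the `Human:` prefix in the `set_debug` output comes from LangChain's string rendering of chat messages for logging, while the payload sent to the API uses role fields instead of prefixes. A small sketch of that distinction (an illustration based on how the debug log prints `"Human: user:hello"`, not the library's actual code):

```python
def render_for_log(messages):
    # How the debug output displays chat messages as one string.
    prefixes = {"human": "Human", "ai": "AI", "system": "System"}
    return "\n".join(f"{prefixes[role]}: {text}" for role, text in messages)


def to_api_payload(messages):
    # What would actually be sent to a chat completions endpoint.
    role_map = {"human": "user", "ai": "assistant", "system": "system"}
    return [{"role": role_map[role], "content": text} for role, text in messages]


msgs = [("human", "user:hello")]
```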
### System Info
langchain==0.2.10
langchain-aws==0.1.6
langchain-community==0.2.5
langchain-core==0.2.22
langchain-experimental==0.0.61
langchain-google-genai==1.0.8
langchain-openai==0.1.8
langchain-text-splitters==0.2.1
langchain-upstage==0.1.6
langchain-weaviate==0.0.2 | "Human: " added to the prompt. | https://api.github.com/repos/langchain-ai/langchain/issues/24525/comments | 2 | 2024-07-23T01:45:40Z | 2024-07-23T23:49:40Z | https://github.com/langchain-ai/langchain/issues/24525 | 2,424,066,811 | 24,525 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
```python
from langchain_core.load import dumpd
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
prompt = ChatPromptTemplate.from_messages([("system", "foo"), MessagesPlaceholder("bar"), ("human", "baz")])
load(dumpd(MessagesPlaceholder("bar"))) # works
load(dumpd(prompt)) # doesn't work
```
raises
```python
...
File ~/langchain/libs/core/langchain_core/load/load.py:190, in load.<locals>._load(obj)
187 if isinstance(obj, dict):
188 # Need to revive leaf nodes before reviving this node
189 loaded_obj = {k: _load(v) for k, v in obj.items()}
--> 190 return reviver(loaded_obj)
191 if isinstance(obj, list):
192 return [_load(o) for o in obj]
File ~/langchain/libs/core/langchain_core/load/load.py:78, in Reviver.__call__(self, value)
71 raise KeyError(f'Missing key "{key}" in load(secrets_map)')
73 if (
74 value.get("lc", None) == 1
75 and value.get("type", None) == "not_implemented"
76 and value.get("id", None) is not None
77 ):
---> 78 raise NotImplementedError(
79 "Trying to load an object that doesn't implement "
80 f"serialization: {value}"
81 )
83 if (
84 value.get("lc", None) == 1
85 and value.get("type", None) == "constructor"
86 and value.get("id", None) is not None
87 ):
88 [*namespace, name] = value["id"]
NotImplementedError: Trying to load an object that doesn't implement serialization: {'lc': 1, 'type': 'not_implemented', 'id': ['typing', 'List'], 'repr': 'typing.List[typing.Union[langchain_core.messages.ai.AIMessage, langchain_core.messages.human.HumanMessage, langchain_core.messages.chat.ChatMessage, langchain_core.messages.system.SystemMessage, langchain_core.messages.function.FunctionMessage, langchain_core.messages.tool.ToolMessage]]'}
``` | ChatPrompTemplate with MessagesPlaceholder ser/des broken | https://api.github.com/repos/langchain-ai/langchain/issues/24513/comments | 0 | 2024-07-22T18:45:23Z | 2024-07-22T18:47:57Z | https://github.com/langchain-ai/langchain/issues/24513 | 2,423,546,409 | 24,513 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
RunnableWithMessageHistory(
    AgentExecutor(
        agent=create_tool_calling_agent(llm_with_tools, self.tools, system_prompt)
    )
).invoke(input_prompt, config={"configurable": {"session_id": session_id}})
```
### Error Message and Stack Trace (if applicable)
```
Invoking: describe with {'extension': 'fallback'}
```
### Description
We are using a set of tools, and through the tool-calling agent's system prompt we have instructed the model to invoke only tools from the given list. One of the tools is named `fallback`; for specific questions the model is supposed to use it with the following format:

```
Invoking: fallback with {'question': 'please answer the following question'}
```

Instead, the model invokes a tool we never defined and fails to respond. Does anyone know why this is happening?

```
Invoking: describe with {'extension': 'fallback'}
```
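While debugging, a defensive guard that validates the emitted tool name before execution helps surface these cases early. A minimal sketch (the guard itself is hypothetical; only `fallback` is from our real tool list):

```python
# Minimal guard: reject tool invocations whose name is not in the registered set.
REGISTERED_TOOLS = {"fallback"}  # our actual tool list is longer

def validate_tool_call(tool_name, tool_args):
    if tool_name not in REGISTERED_TOOLS:
        raise ValueError(
            f"Model invoked unknown tool {tool_name!r} with args {tool_args!r}; "
            f"expected one of {sorted(REGISTERED_TOOLS)}"
        )
    return tool_name, tool_args

validate_tool_call("fallback", {"question": "please answer the following question"})  # ok
```

This does not explain the hallucinated `describe` tool, but it turns the silent failure into an explicit error.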
### System Info
Vertex AI
Python: 3.10.12 | Tool calling agent invokes undefined tool: 'describe' | https://api.github.com/repos/langchain-ai/langchain/issues/24512/comments | 0 | 2024-07-22T18:16:04Z | 2024-07-22T18:18:40Z | https://github.com/langchain-ai/langchain/issues/24512 | 2,423,481,807 | 24,512 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I am trying to use Langchain to query Azure SQL using Azure OpenAI. The code is based on the samples provided in GitHub - [Langchain to query Azure SQL using Azure OpenAI](https://github.com/Azure-Samples/SQL-AI-samples/blob/main/AzureSQLDatabase/LangChain/dbOpenAI.ipynb). I have already tested connectivity with Azure SQL using Langchain & it works. I also tested connectivity with Azure OpenAI using Langchain & it works as well. I am using the API version as 2023-08-01-preview as per the comment that "Azure OpenAI on your own data is only supported by the 2023-08-01-preview API version." Referred this [link](https://github.com/Azure-Samples/openai/blob/main/Basic_Samples/Chat/chat_with_your_own_data.ipynb).
After I create the SQL agent and execute the invoke method, it fails, returning an internal server error with status code 500.
```python
import os
from sqlalchemy.engine.url import URL
from langchain_community.utilities import SQLDatabase
from langchain_openai.chat_models import AzureChatOpenAI
from langchain.agents.agent_types import AgentType
from langchain_community.agent_toolkits.sql.base import create_sql_agent, SQLDatabaseToolkit
from azure.identity import EnvironmentCredential, get_bearer_token_provider
from langchain.prompts.chat import ChatPromptTemplate
# Set up SQLAlchemy connection
db_config = {
'drivername': 'mssql+pyodbc',
'username': os.getenv("SQL_SERVER_USERNAME") + '@' + os.getenv("SQL_SERVER"),
'password': os.getenv("SQL_SERVER_PASSWORD"),
'host': os.getenv("SQL_SERVER_ENDPOINT"),
'port': 1433,
'database': os.getenv("SQL_SERVER_DATABASE"),
'query': {'driver': 'ODBC Driver 18 for SQL Server'}
}
db_url = URL.create(**db_config)
db = SQLDatabase.from_uri(db_url)
# Authenticate using the Service Principal
token_provider = get_bearer_token_provider(
EnvironmentCredential(),
"https://cognitiveservices.azure.com/.default"
)
# Set up Azure OpenAI
llm = AzureChatOpenAI(deployment_name="my-deployment-name-gpt-35-turbo-1106", azure_ad_token_provider = token_provider, temperature=0, max_tokens=4000)
final_prompt = ChatPromptTemplate.from_messages(
[
("system",
"""
You are a helpful AI assistant expert in querying SQL Database to find answers to user's question about SQL tables.
"""
),
("user", "{question}\n ai: "),
]
)
# Set up SQL toolkit for LangChain Agent
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
toolkit.get_tools()
# Initialize and run the Agent
agent_executor = create_sql_agent(
llm=llm,
toolkit=toolkit,
verbose=True,
agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
streaming=True,
agent_executor_kwargs={'handle_parsing_errors':True},
)
agent_executor.invoke(final_prompt.format(
question="count the rows in the titanic table."))
```
### Error Message and Stack Trace (if applicable)
```
Entering new SQL Agent Executor chain...
Traceback (most recent call last):
  File "test.py", line 62, in <module>
    agent_executor.invoke(final_prompt.format(
  File "/home/user/.local/lib/python3.8/site-packages/langchain/chains/base.py", line 166, in invoke
    raise e
  File "/home/user/.local/lib/python3.8/site-packages/langchain/chains/base.py", line 156, in invoke
    self._call(inputs, run_manager=run_manager)
  File "/home/user/.local/lib/python3.8/site-packages/langchain/agents/agent.py", line 1636, in _call
    next_step_output = self._take_next_step(
  File "/home/user/.local/lib/python3.8/site-packages/langchain/agents/agent.py", line 1342, in _take_next_step
    [
  File "/home/user/.local/lib/python3.8/site-packages/langchain/agents/agent.py", line 1342, in <listcomp>
    [
  File "/home/user/.local/lib/python3.8/site-packages/langchain/agents/agent.py", line 1370, in _iter_next_step
    output = self.agent.plan(
  File "/home/user/.local/lib/python3.8/site-packages/langchain/agents/agent.py", line 463, in plan
    for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
  File "/home/user/.local/lib/python3.8/site-packages/langchain_core/runnables/base.py", line 3251, in stream
    yield from self.transform(iter([input]), config, **kwargs)
  File "/home/user/.local/lib/python3.8/site-packages/langchain_core/runnables/base.py", line 3238, in transform
    yield from self._transform_stream_with_config(
  File "/home/user/.local/lib/python3.8/site-packages/langchain_core/runnables/base.py", line 2052, in _transform_stream_with_config
    chunk: Output = context.run(next, iterator)  # type: ignore
  File "/home/user/.local/lib/python3.8/site-packages/langchain_core/runnables/base.py", line 3200, in _transform
    for output in final_pipeline:
  File "/home/user/.local/lib/python3.8/site-packages/langchain_core/runnables/base.py", line 1270, in transform
    for ichunk in input:
  File "/home/user/.local/lib/python3.8/site-packages/langchain_core/runnables/base.py", line 5262, in transform
    yield from self.bound.transform(
  File "/home/user/.local/lib/python3.8/site-packages/langchain_core/runnables/base.py", line 1288, in transform
    yield from self.stream(final, config, **kwargs)
  File "/home/user/.local/lib/python3.8/site-packages/langchain_core/language_models/chat_models.py", line 360, in stream
    raise e
  File "/home/user/.local/lib/python3.8/site-packages/langchain_core/language_models/chat_models.py", line 340, in stream
    for chunk in self._stream(messages, stop=stop, **kwargs):
  File "/home/user/.local/lib/python3.8/site-packages/langchain_openai/chat_models/base.py", line 489, in _stream
    with self.client.create(**payload) as response:
  File "/home/user/.local/lib/python3.8/site-packages/openai/_utils/_utils.py", line 277, in wrapper
    return func(*args, **kwargs)
  File "/home/user/.local/lib/python3.8/site-packages/openai/resources/chat/completions.py", line 643, in create
    return self._post(
  File "/home/user/.local/lib/python3.8/site-packages/openai/_base_client.py", line 1266, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/home/user/.local/lib/python3.8/site-packages/openai/_base_client.py", line 942, in request
    return self._request(
  File "/home/user/.local/lib/python3.8/site-packages/openai/_base_client.py", line 1031, in _request
    return self._retry_request(
  File "/home/user/.local/lib/python3.8/site-packages/openai/_base_client.py", line 1079, in _retry_request
    return self._request(
  File "/home/user/.local/lib/python3.8/site-packages/openai/_base_client.py", line 1031, in _request
    return self._retry_request(
  File "/home/user/.local/lib/python3.8/site-packages/openai/_base_client.py", line 1079, in _retry_request
    return self._request(
  File "/home/user/.local/lib/python3.8/site-packages/openai/_base_client.py", line 1046, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.InternalServerError: Error code: 500 - {'statusCode': 500, 'message': 'Internal server error', 'activityId': 'xxx-yyy-zzz'}
```
* I am trying to use Langchain to query Azure SQL using Azure OpenAI
* The code is based on the samples provided in GitHub - [Langchain to query Azure SQL using Azure OpenAI](https://github.com/Azure-Samples/SQL-AI-samples/blob/main/AzureSQLDatabase/LangChain/dbOpenAI.ipynb)
* Expected result is the code to return response with Action, Observation & Thought in an iterative manner
* Actual result is an error: internal server error, 500. The complete error log can be seen above.
### System Info
## Langchain version
langchain==0.2.10
langchain-community==0.2.9
langchain-core==0.2.22
langchain-openai==0.1.16
langchain-text-splitters==0.2.2
## Platform
Windows 11
## Python version
Python 3.8.10 | Langchain SQL agent withAzure SQL & Azure OpenAI fails on invoke method returning Internal server error 500 | https://api.github.com/repos/langchain-ai/langchain/issues/24504/comments | 5 | 2024-07-22T16:39:45Z | 2024-08-10T12:38:06Z | https://github.com/langchain-ai/langchain/issues/24504 | 2,423,304,468 | 24,504 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I am trying to do a simple text summarization task and return the result in JSON format using the local Llama-3 8B Instruct model (GGUF version), running on CPU only. The code is as follows:
```
from langchain.chains import LLMChain
from langchain_community.llms import LlamaCpp
from langchain_core.callbacks import CallbackManager, StreamingStdOutCallbackHandler
from langchain_core.prompts import PromptTemplate
# Create the prompt
template = """
Read the article and return the "release date of Llama-3" in JSON format.
If the information is not mentioned, please do not return any answer.
Article: {text}
Answer:
"""
# Text for summarization (from https://en.wikipedia.org/wiki/Llama_(language_model))
text = """
Llama (acronym for Large Language Model Meta AI, and formerly stylized as LLaMA) is a family of autoregressive large language models (LLMs) released by Meta AI starting in February 2023. The latest version is Llama 3, released in April 2024.
Model weights for the first version of Llama were made available to the research community under a non-commercial license, and access was granted on a case-by-case basis. Unauthorized copies of the model were shared via BitTorrent. In response, Meta AI issued DMCA takedown requests against repositories sharing the link on GitHub. Subsequent versions of Llama were made accessible outside academia and released under licenses that permitted some commercial use. Llama models are trained at different parameter sizes, typically ranging between 7B and 70B. Originally, Llama was only available as a foundation model. Starting with Llama 2, Meta AI started releasing instruction fine-tuned versions alongside foundation models.
Alongside the release of Llama 3, Meta added virtual assistant features to Facebook and WhatsApp in select regions, and a standalone website. Both services use a Llama 3 model.
"""
# Set up and run Local Llama-3 model
prompt = PromptTemplate(template=template, input_variables=["text"])
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
llm = LlamaCpp(model_path="model/llama/Meta-Llama-3-8B-Instruct.Q6_K.gguf",
n_ctx=2048, callback_manager=callback_manager, verbose=True)
chain = prompt | llm
chain.invoke(text)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
By using the code, the model could be run successfully, and the output would be nice.
```
{
"release_date": "April 2024"
}
```
However, if I input more text (adding more paragraphs from the webpage (https://en.wikipedia.org/wiki/Llama_(language_model))), the output becomes garbled and the model keeps generating text like the following:
```
The release notes for LLaMA model can be found on the official website, Meta AI. Release notes are typically available after you read the answer.
LLaMA. If you cannot
it as is in. Read More
LLaMA is a "Release. Release note the "Read the article.
# Release note the "read in. Read more and more, Read the Release on "read a "a
Release in "Read the "Release
.
.
.
```
May I know if there is any solution for summarizing a long text using the local Llama-3 model?
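One workaround I am considering is map-reduce style summarization: split the article into chunks that fit the 2048-token context, summarize each chunk, then summarize the concatenated partial summaries. The splitting step alone could look like this naive character-based sketch (LangChain's token-aware splitters would be a better fit in practice):

```python
def split_text(text, chunk_size=1500, overlap=150):
    """Naive character-based splitter with overlap between consecutive chunks."""
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = split_text("x" * 4000)
# each chunk is small enough to summarize independently; the partial
# summaries are then concatenated and summarized once more (the "reduce" step)
```

The overlap keeps sentences that straddle a chunk boundary visible in both chunks.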
### System Info
langchain==0.2.10
langchain_community==0.2.9
langchain_core==0.2.22
Python version 3.10.12 | Strange output when summarizing long text using local Llama-3 model with LlamaCpp | https://api.github.com/repos/langchain-ai/langchain/issues/24490/comments | 1 | 2024-07-22T06:38:45Z | 2024-07-24T10:07:15Z | https://github.com/langchain-ai/langchain/issues/24490 | 2,422,068,301 | 24,490 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I log a LangChain agent using the `mlflow.pyfunc.PythonModel` wrapper. The context loading is defined as below (individual configurations omitted):
```python
class agentWrapper(mlflow.pyfunc.PythonModel):
# SETUP OMITTED
def _getHistory(self, session_id):
return SQLChatMessageHistory(session_id=session_id, connection_string="sqlite:///sqlite.db")
def load_context(self, context):
# 1. Configure prompt templates
self._prompt = self._build_prompt()
# 2. Configure LLM client
self._open_ai_llm = self._configureLLM(context)
# 3. Configure agent tools
self._tools = self._configure_tools(context)
# 4. Assemble the AI agent
agent = create_tool_calling_agent(
self._open_ai_llm,
self._tools,
self._prompt
)
agent_executor = AgentExecutor(
agent=agent,
tools=self._tools,
verbose=True,
max_iterations=10)
self._agent_with_chat_history = RunnableWithMessageHistory(
agent_executor,
self._getHistory,
input_messages_key="input",
history_messages_key="chat_history",
)
def predict(self, context, model_input, params):
session_id = uuid.uuid4()
if params.get('session_id'):
session_id = params['session_id']
agent_config = {
"configurable": {
"session_id": str(session_id)
}
}
raw_result = self._agent_with_chat_history.invoke({
"input" : model_input["user_query"]
}, agent_config)
unserialisable_keys = ['context', 'chat_history', 'input']
serialisable_result = {x: str(raw_result[x]) for x in raw_result if x not in unserialisable_keys}
# set return value
return serialisable_result["output"]
```
### Error Message and Stack Trace (if applicable)
```python
Error in RootListenersTracer.on_chain_end callback: ValueError('Expected str, BaseMessage, List[BaseMessage], or Tuple[BaseMessage]. Got 0 Summarise conversation history\nName: user_query, dtype: object.')
```
and then
```python
[chain:RunnableWithMessageHistory > chain:RunnableBranch] [328ms] Exiting Chain run with output:
{
"input": {
"lc": 1,
"type": "not_implemented",
"id": [
"pandas",
"core",
"series",
"Series"
],
"repr": "0 Summarise conversation history\nName: user_query, dtype: object"
},
"chat_history": [],
"output": "I'm sorry, but I don't have access to the conversation history."
}
```
### Description
When I log the agent with MlFlow and download it to the same (and two other) environments, _**the history is not being retrieved**_. I've tried SQL and `FileChatMessageHistory`, and the behaviour was the same.
I've tried moving the block with the `RunnableWithMessageHistory` initialisation to the predict function, and it didn't make any difference.
```python
_agent_with_chat_history = RunnableWithMessageHistory(
self._agent_executor,
self._getHistory,
input_messages_key="input",
history_messages_key="chat_history",
)
```
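Given the `RootListenersTracer` error earlier, `model_input["user_query"]` appears to arrive as a pandas `Series` (`Name: user_query, dtype: object`) rather than a `str` when the model is served through MLflow's pyfunc interface, so the history chain never sees a valid message. A hedged guard for `predict` could look like this (the helper name `coerce_query` is mine):

```python
def coerce_query(value):
    """Return a plain string whether value is a str or a pandas-like Series."""
    iloc = getattr(value, "iloc", None)
    if iloc is not None:  # pandas Series / DataFrame column
        return str(iloc[0])
    return str(value)

# inside predict():
# user_query = coerce_query(model_input["user_query"])
print(coerce_query("Summarise conversation history"))
```

This would explain why the local `PythonModelContext` test works (a plain dict goes in) while the loaded pyfunc model fails (MLflow converts the input to a DataFrame first).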
The `sqlite:///sqlite.db` file was created after I pulled the agent from MLflow and initialised it locally, but the agent doesn't write to the history.
HOWEVER: The wrapper **works** when I test it locally via the `PythonModelContext` loader:
```python
wrapper = aiAgentWrapper()
ctx = PythonModelContext({
"embedding_model": embedding_model_tmp_path,
"vector_db": vector_db_tmp_path,
},{
'openai_deployment_name':config["open_ai_deployment_name"],
'openai_model_temperature':config["open_ai_model_temperature"],
'openai_api_version': os.environ["OPENAI_API_VERSION"]
})
input_example = {"user_query": "Summarise our conversation "}
agent_params = {
"session_id": sessionId
}
wrapper.load_context(ctx)
wrapper.predict({}, input_example, agent_params ) # <--- THIS WORKS FINE AND HISTORY IS RETRIEVED
```
```python
model_version = mlflow.pyfunc.load_model(
model.model_uri
)
input_example = {"user_query": "Summarise our conversation "}
agent_params = {
"session_id": sessionId
}
model_version.predict(input_example, params=agent_params ) # <-- this DOESNT retrieve the history
```
### System Info
Reproduced in those environments:
- Databricks / Linux / DBR 14.3 ML LTS / python=3.10.12
- Azure ML Online Endpoint / Linux / mcr.microsoft.com/azureml/mlflow-ubuntu20.04-py38-cpu-inference:20240522.v1 / Python 3.8
- Local machine / Windows 11 / Local VENV / Python=3.10.12
Env requirements (logged with the MlFlow):
```
azure-ai-ml==1.13.0
azureml-mlflow==1.54.0
python-dotenv==1.0.1
mlflow==2.10.0 (tried with 2.14, and the result was the same)
cloudpickle==2.0.0
huggingface-hub==0.22.2
faiss-cpu==1.8.0
pandas==1.5.3
langchain==0.2.1
langchain_community==0.2.1
langchain_experimental==0.0.59
langchain_openai==0.1.8
langchain-text-splitters==0.2.0
mlflow==2.10.0
pypdf==4.2.0
sentence-transformers==2.7.0
typing-extensions==4.9.0
datasets==2.20.0
``` | RunnableWithMessageHistory doesn't work after packaging with MlFlow | https://api.github.com/repos/langchain-ai/langchain/issues/24487/comments | 0 | 2024-07-22T00:46:49Z | 2024-07-22T00:50:45Z | https://github.com/langchain-ai/langchain/issues/24487 | 2,421,689,657 | 24,487 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/how_to/custom_tools/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Using any of the code for the tools on https://python.langchain.com/v0.2/docs/how_to/custom_tools/ leads to a `TypeError`. For example, the code from https://python.langchain.com/v0.2/docs/how_to/custom_tools/#tool-decorator raises `TypeError: args_schema must be a subclass of pydantic BaseModel. Got: <class 'pydantic.v1.main.multiplySchema'>`. The same happens for the rest of the functions defined in the documentation.
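For what it's worth, the failing check appears to be an `issubclass` test between two distinct `BaseModel` classes (pydantic v2's versus the `pydantic.v1` compatibility shim). A self-contained illustration with stand-in classes (not the real pydantic ones) of why that check can never pass:

```python
# Stand-ins for the two distinct BaseModel classes involved.
class BaseModelV1:      # plays the role of pydantic.v1.BaseModel
    pass

class BaseModelV2:      # plays the role of pydantic.BaseModel (v2)
    pass

class multiplySchema(BaseModelV1):
    pass

# This mirrors the validation that raises the TypeError:
print(issubclass(multiplySchema, BaseModelV2))  # False -> the TypeError path
```

So a schema built against `pydantic.v1` can never satisfy a v2 `issubclass` check, regardless of how the tool is defined.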
### Idea or request for content:
_No response_ | DOC: <Issue related to /v0.2/docs/how_to/custom_tools/> | https://api.github.com/repos/langchain-ai/langchain/issues/24475/comments | 5 | 2024-07-20T19:26:25Z | 2024-07-22T14:21:21Z | https://github.com/langchain-ai/langchain/issues/24475 | 2,421,026,298 | 24,475 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
When I try to execute a custom pandas dataframe agent (https://python.langchain.com/v0.2/docs/integrations/toolkits/pandas/)
I encounter this error:
```
"name": "BadRequestError",
"message": "Error code: 400 - {'error': {'message': \"Invalid 'messages[0].content': string too long. Expected a string with maximum length 1048576, but got a string with length 1316712 instead.\", 'type': 'invalid_request_error', 'param': 'messages[0].content', 'code': 'string_above_max_length'}}"
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I'm expecting to run the agent.
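Since the 400 response says `messages[0].content` exceeded 1,048,576 characters, the dataframe preview injected into the prompt is presumably what blows past the limit. Down-sampling the dataframe before building the agent (e.g. passing `df.head(n)`) would be the cleaner fix; as a stopgap, a blunt character-level truncation sketch:

```python
OPENAI_MAX_MESSAGE_CHARS = 1_048_576  # hard limit quoted in the error

def truncate_for_prompt(text, limit=OPENAI_MAX_MESSAGE_CHARS, margin=48_576):
    """Blunt workaround: keep the serialized dataframe comfortably under the limit."""
    budget = limit - margin
    return text if len(text) <= budget else text[:budget]

oversized = "x" * 1_316_712  # the length quoted in the error
print(len(truncate_for_prompt(oversized)))
```

Truncation loses data, of course, so it only makes sense for previews, not for computations over the full dataframe.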
### System Info
System Information
------------------
> OS: Linux
> OS Version: #116-Ubuntu SMP Wed Apr 17 09:17:56 UTC 2024
> Python Version: 3.10.13 (main, Sep 11 2023, 13:21:10) [GCC 11.2.0]
Package Information
-------------------
> langchain_core: 0.2.22
> langchain: 0.1.20
> langchain_community: 0.0.38
> langsmith: 0.1.92
> langchain_chroma: 0.1.0
> langchain_experimental: 0.0.55
> langchain_huggingface: 0.0.3
> langchain_openai: 0.1.17
> langchain_qdrant: 0.1.1
> langchain_text_splitters: 0.0.1
> langchainhub: 0.1.15
> langgraph: 0.1.9
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langserve | Chat with pandas df string length BadRequestError | https://api.github.com/repos/langchain-ai/langchain/issues/24473/comments | 0 | 2024-07-20T18:34:40Z | 2024-07-20T18:38:25Z | https://github.com/langchain-ai/langchain/issues/24473 | 2,421,009,811 | 24,473 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I am trying to use 'gpt-4o-mini' in ChatOpenAI, code like below:
```
from langchain_openai import ChatOpenAI
OPENAI_MODEL_4oMini = "gpt-4o-mini"
chatmodel = ChatOpenAI(model=OPENAI_MODEL_4oMini, temperature=0, max_tokens=500)
```
### Error Message and Stack Trace (if applicable)
The API call succeeds, but when I review the OpenAI response metadata:

```
response_metadata={'token_usage': …, 'model_name': 'gpt-3.5-turbo-0125', }
```
### Description
The OpenAI result shows the model_name is `gpt-3.5-turbo-0125`, but I passed `gpt-4o-mini`. Why does it use GPT-3.5?
I know that if there is no `model` parameter in `ChatOpenAI` it defaults to `gpt-3.5-turbo`, but I did pass a model. If the input model is unknown, I think LangChain should throw an exception instead of silently using a different model, which may lead to different response results.
### System Info
MacOS, langchain version: 0.2.10 | Use gpt-4o-mini ChatOpenAI, but gpt-3.5-turbo-0125 used | https://api.github.com/repos/langchain-ai/langchain/issues/24461/comments | 4 | 2024-07-20T04:18:23Z | 2024-07-24T14:17:44Z | https://github.com/langchain-ai/langchain/issues/24461 | 2,420,548,548 | 24,461 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_aws import ChatBedrock
from langchain_mistralai.chat_models import ChatMistralAI
from langchain_experimental.graph_transformers import LLMGraphTransformer
from langchain.text_splitter import TokenTextSplitter
from langchain_community.document_loaders import UnstructuredURLLoader

urls = ["https://aws.amazon.com/message/061323/"]
loader = UnstructuredURLLoader(urls=urls)
raw_data = loader.load()
text_splitter = TokenTextSplitter(chunk_size=256, chunk_overlap=24)
documents = text_splitter.split_documents(raw_data)

llm = ChatBedrock(
    model_id="mistral.mistral-large-2402-v1:0",
    model_kwargs={"temperature": 0.0},
)
llm_transformer = LLMGraphTransformer(llm=llm)
graph_documents = llm_transformer.convert_to_graph_documents(documents)
graph_documents[0]
# Here is the output. Example of not working:
# GraphDocument(nodes=[], relationships=[], source=Document(metadata={'source': 'https://aws.amazon.com/......

llm2 = ChatMistralAI(model='mistral-large-latest')
llm_transformer2 = LLMGraphTransformer(llm=llm2)
graph_documents2 = llm_transformer2.convert_to_graph_documents(documents)
graph_documents2[0]
# Here is the output. Example of working:
# GraphDocument(nodes=[Node(id='Aws Lambda', type='Service'), Node(id='Northern Virginia (Us-East-1)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am trying to build a GraphRAG application using LangChain. I get the desired output (graph documents) when using `LLMGraphTransformer` with an LLM object created via `ChatMistralAI`, but with an LLM object created via `ChatBedrock` I do not. The code itself does not fail, but no entities (nodes) or relations are recognized, which means I can't use the output to create a graph database. Being able to process the data via Bedrock is an absolute must for me to proceed.
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Tue May 21 16:52:24 UTC 2024
> Python Version: 3.10.14 | packaged by conda-forge | (main, Mar 20 2024, 12:45:18) [GCC 12.3.0]
Package Information
-------------------
> langchain_core: 0.2.19
> langchain: 0.2.8
> langchain_community: 0.2.7
> langsmith: 0.1.85
> langchain_aws: 0.1.11
> langchain_experimental: 0.0.62
> langchain_mistralai: 0.1.10
> langchain_openai: 0.1.16
> langchain_text_splitters: 0.2.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | ChatBedrock not creating graph documents with LLMGraphTransformer | https://api.github.com/repos/langchain-ai/langchain/issues/24444/comments | 0 | 2024-07-19T14:24:28Z | 2024-07-19T15:18:03Z | https://github.com/langchain-ai/langchain/issues/24444 | 2,419,054,048 | 24,444 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
````python
async def inference_openai(self, user_prompt: Dict[str, str], chat_history: List[Dict[str, Any]] = []):
    jolt_prompt = ChatPromptTemplate.from_messages([
        ("system", system),
        MessagesPlaceholder("chat_history"),
        ("user", prompt)
    ])
    model_kwargs = {
        "top_p": 1.0,
        "presence_penalty": 0.0}
    question_answer_chain = jolt_prompt | ChatOpenAI(model="gpt-4o",
                                                     max_tokens=2048,
                                                     temperature=1.0,
                                                     model_kwargs=model_kwargs)
    ai_msg = await question_answer_chain.ainvoke({"input": str(user_prompt), "chat_history": chat_history})
    ai_msg = json.loads(ai_msg.content.replace("```json", "").replace("```", ""))
    return ai_msg
````
### Error Message and Stack Trace (if applicable)
Issues with no direct upgrade or patch:
✗ Server-Side Request Forgery (SSRF) [Medium Severity][https://security.snyk.io/vuln/SNYK-PYTHON-LANGCHAIN-7217837] in langchain@0.2.6
introduced by langchain@0.2.6 and 1 other path(s)
No upgrade or patch available
### Description
During the Snyk scan it raised an SSRF finding:
<img width="1004" alt="vulnerability" src="https://github.com/user-attachments/assets/033f6100-88b0-4f4e-b43a-8be73796ab2f">
vulnerabilty
### System Info
macOS Sonoma 14.5 | Server-Side Request Forgery (SSRF) | https://api.github.com/repos/langchain-ai/langchain/issues/24442/comments | 2 | 2024-07-19T14:13:11Z | 2024-07-19T19:27:16Z | https://github.com/langchain-ai/langchain/issues/24442 | 2,419,025,178 | 24,442 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
# Code for example.py
from langchain.output_parsers import RetryOutputParser
from langchain_core.output_parsers.pydantic import PydanticOutputParser
from langchain_core.pydantic_v1 import BaseModel
from langchain_openai import ChatOpenAI
from langchain_core.runnables import RunnableLambda, RunnableParallel
from langchain_core.exceptions import OutputParserException
from langchain_core.prompts import (
PromptTemplate,
)
class CustomParser(PydanticOutputParser):
def parse(self, output: str) -> dict:
raise OutputParserException("Failed to parse")
@property
def _type(self) -> str:
return "custom_parser_throw_exception"
class TestModel(BaseModel):
a: int
b: str
parser = CustomParser(pydantic_object=TestModel)
model = ChatOpenAI(temperature=0)
retry_parser = RetryOutputParser.from_llm(parser=parser, llm=model.with_structured_output(TestModel), max_retries=3)
def parse_with_prompt(args):
completion = args['completion']
if (type(completion) is TestModel):
args = args.copy()
del args['completion']
completion = completion.json(ensure_ascii=False)
args['completion'] = completion
return retry_parser.parse_with_prompt(**args)
prompt = PromptTemplate(
template="Answer the user query.\n{format_instructions}\n{query}\n",
input_variables=["query"],
partial_variables={"format_instructions": parser.get_format_instructions()},
)
completion_chain = prompt | model.with_structured_output(TestModel, include_raw=False)
main_chain = RunnableParallel(
completion=completion_chain, prompt_value=prompt
) | RunnableLambda(parse_with_prompt)
print(main_chain.invoke({"query": "who is leo di caprios gf?"}))
```
I created a custom parser inheriting from `PydanticOutputParser` to force it to throw an `OutputParserException`. The code wraps it with the `RetryOutputParser`.
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "C:\Projects\ENV\Lib\site-packages\langchain\output_parsers\retry.py", line 90, in parse_with_prompt
return self.parser.parse(completion)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Projects\src\example.py", line 18, in parse
raise OutputParserException("Failed to parse")
langchain_core.exceptions.OutputParserException: Failed to parse
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Projects\src\example.py", line 59, in <module>
print(main_chain.invoke({"query": "who is leo di caprios gf?"}))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Projects\ENV\Lib\site-packages\langchain_core\runnables\base.py", line 2824, in invoke
input = step.invoke(input, config)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Projects\ENV\Lib\site-packages\langchain_core\runnables\base.py", line 4387, in invoke
return self._call_with_config(
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Projects\ENV\Lib\site-packages\langchain_core\runnables\base.py", line 1734, in _call_with_config
context.run(
File "C:\Projects\ENV\Lib\site-packages\langchain_core\runnables\config.py", line 379, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File "C:\Projects\ENV\Lib\site-packages\langchain_core\runnables\base.py", line 4243, in _invoke
output = call_func_with_variable_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Projects\ENV\Lib\site-packages\langchain_core\runnables\config.py", line 379, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File "C:\Projects\src\example.py", line 44, in parse_with_prompt
return retry_parser.parse_with_prompt(**args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Projects\ENV\Lib\site-packages\langchain\output_parsers\retry.py", line 103, in parse_with_prompt
completion = self.retry_chain.invoke(
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Projects\ENV\Lib\site-packages\langchain_core\runnables\base.py", line 2822, in invoke
input = step.invoke(input, config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Projects\ENV\Lib\site-packages\langchain_core\prompts\base.py", line 179, in invoke
return self._call_with_config(
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Projects\ENV\Lib\site-packages\langchain_core\runnables\base.py", line 1734, in _call_with_config
context.run(
File "C:\Projects\ENV\Lib\site-packages\langchain_core\runnables\config.py", line 379, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File "C:\Projects\ENV\Lib\site-packages\langchain_core\prompts\base.py", line 153, in _format_prompt_with_error_handling
_inner_input = self._validate_input(inner_input)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Projects\ENV\Lib\site-packages\langchain_core\prompts\base.py", line 145, in _validate_input
raise KeyError(
KeyError: "Input to PromptTemplate is missing variables {'completion'}. Expected: ['completion', 'prompt'] Received: ['prompt', 'input']"
```
### Description
The `RetryOutputParser` is apparently a bit buggy, and it already demands some custom workarounds to work properly with Pydantic data (cf. [this issue](https://github.com/langchain-ai/langchain/issues/19145), from which I adapted the workaround code).
However, the bug I'm flagging here is a wrongly named prompt variable in the code.
What I expect: since the parser throws the exception, I expect the retry parser to call the LLM again with the prompt and the error message to perform the retry.
What is happening: it throws the error `KeyError: "Input to PromptTemplate is missing variables {'completion'}. Expected: ['completion', 'prompt'] Received: ['prompt', 'input']"`
Looking at the source code of `RetryOutputParser`, one can see that it is indeed passing the completion value labeled as `input`.
```python
class RetryOutputParser(BaseOutputParser[T]):
#[...]
def parse_with_prompt(self, completion: str, prompt_value: PromptValue) -> T:
"""Parse the output of an LLM call using a wrapped parser.
Args:
completion: The chain completion to parse.
prompt_value: The prompt to use to parse the completion.
Returns:
The parsed completion.
"""
retries = 0
while retries <= self.max_retries:
try:
return self.parser.parse(completion)
except OutputParserException as e:
if retries == self.max_retries:
raise e
else:
retries += 1
if self.legacy and hasattr(self.retry_chain, "run"):
completion = self.retry_chain.run(
prompt=prompt_value.to_string(),
completion=completion,
error=repr(e),
)
else:
completion = self.retry_chain.invoke(
dict(
prompt=prompt_value.to_string(),
input=completion, # <<<<<--------- WRONG NAME
)
)
raise OutputParserException("Failed to parse")
#[...]
```
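The key mismatch can be reproduced with plain `str.format`, with no LLM involved. The template string below only approximates the real retry prompt; the `KeyError` above tells us it expects exactly the keys `prompt` and `completion`:

```python
# Approximation of the retry prompt; per the KeyError, the real template
# declares exactly the variables 'prompt' and 'completion'.
template = "Prompt:\n{prompt}\nCompletion:\n{completion}\nPlease try again:"

try:
    # What parse_with_prompt effectively does today: the completion is
    # passed under the key `input`, which the template does not declare.
    template.format(prompt="who is leo di caprios gf?", input="bad output")
except KeyError as missing:
    print(missing)  # → 'completion'

# The one-word fix: pass the value under the key the template expects.
fixed = template.format(prompt="who is leo di caprios gf?", completion="bad output")
print("Completion:\nbad output" in fixed)  # → True
```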
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22621
> Python Version: 3.11.9 (tags/v3.11.9:de54cf5, Apr 2 2024, 10:12:12) [MSC v.1938 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.2.21
> langchain: 0.2.9
> langchain_community: 0.2.5
> langsmith: 0.1.90
> langchain_openai: 0.1.17
> langchain_text_splitters: 0.2.1
> langchainhub: 0.1.20 | Wrong prompt variable name in the RetryOutputParser class. "input" should be replaced by "completion" | https://api.github.com/repos/langchain-ai/langchain/issues/24440/comments | 3 | 2024-07-19T13:31:14Z | 2024-07-19T16:00:30Z | https://github.com/langchain-ai/langchain/issues/24440 | 2,418,933,473 | 24,440
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
#! /usr/bin/env python3
from langchain_community.document_loaders import PyPDFLoader
from pypdf.errors import EmptyFileError, PdfReadError, PdfStreamError
import sys
def TestOneInput(fname):
try:
loader = PyPDFLoader(fname)
loader.load_and_split()
except (EmptyFileError, PdfReadError, PdfStreamError):
pass
if __name__ == "__main__":
if len(sys.argv) < 2:
exit(1)
TestOneInput(sys.argv[1])
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/fuzz/./reproducer.py", line 19, in <module>
TestOneInput(sys.argv[1])
File "/fuzz/./reproducer.py", line 12, in TestOneInput
loader.load_and_split()
File "/usr/local/lib/python3.9/dist-packages/langchain_core/document_loaders/base.py", line 63, in load_and_split
docs = self.load()
File "/usr/local/lib/python3.9/dist-packages/langchain_core/document_loaders/base.py", line 29, in load
return list(self.lazy_load())
File "/usr/local/lib/python3.9/dist-packages/langchain_community/document_loaders/pdf.py", line 193, in lazy_load
yield from self.parser.parse(blob)
File "/usr/local/lib/python3.9/dist-packages/langchain_core/document_loaders/base.py", line 125, in parse
return list(self.lazy_parse(blob))
File "/usr/local/lib/python3.9/dist-packages/langchain_community/document_loaders/parsers/pdf.py", line 102, in lazy_parse
yield from [
File "/usr/local/lib/python3.9/dist-packages/langchain_community/document_loaders/parsers/pdf.py", line 102, in <listcomp>
yield from [
File "/usr/local/lib/python3.9/dist-packages/pypdf/_page.py", line 2277, in __iter__
for i in range(len(self)):
File "/usr/local/lib/python3.9/dist-packages/pypdf/_page.py", line 2208, in __len__
return self.length_function()
File "/usr/local/lib/python3.9/dist-packages/pypdf/_doc_common.py", line 353, in get_num_pages
self._flatten()
File "/usr/local/lib/python3.9/dist-packages/pypdf/_doc_common.py", line 1122, in _flatten
self._flatten(obj, inherit, **addt)
File "/usr/local/lib/python3.9/dist-packages/pypdf/_doc_common.py", line 1122, in _flatten
self._flatten(obj, inherit, **addt)
File "/usr/local/lib/python3.9/dist-packages/pypdf/_doc_common.py", line 1122, in _flatten
self._flatten(obj, inherit, **addt)
[Previous line repeated 980 more times]
File "/usr/local/lib/python3.9/dist-packages/pypdf/_doc_common.py", line 1119, in _flatten
obj = page.get_object()
File "/usr/local/lib/python3.9/dist-packages/pypdf/generic/_base.py", line 284, in get_object
return self.pdf.get_object(self)
File "/usr/local/lib/python3.9/dist-packages/pypdf/_reader.py", line 351, in get_object
retval = self.cache_get_indirect_object(
File "/usr/local/lib/python3.9/dist-packages/pypdf/_reader.py", line 512, in cache_get_indirect_object
return self.resolved_objects.get((generation, idnum))
RecursionError: maximum recursion depth exceeded in comparison
```
### Description
Hi!
I've been fuzzing PyPDFLoader with [sydr-fuzz](https://github.com/ispras/oss-sydr-fuzz) and found a few errors that occur when using the `load_and_split` method. One of them is shown here. Maybe issue #22892 is similar. The question is: should the user handle errors from the pypdf library, or is this a bug in langchain/pypdf?
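Until it's settled whether pypdf or LangChain should guard this, callers can treat `RecursionError` like the other pypdf errors. A defensive sketch — `parse_with_guard` is an illustrative helper, and `cyclic_flatten` merely stands in for pypdf's `_flatten` walking the PoC's self-referential page tree:

```python
import sys

def parse_with_guard(parse_fn, *args, max_depth=2000, **kwargs):
    """Run a parser, treating runaway recursion as a parse failure."""
    old_limit = sys.getrecursionlimit()
    sys.setrecursionlimit(max_depth)
    try:
        return parse_fn(*args, **kwargs)
    except RecursionError:
        return None  # malformed input, e.g. a cyclic /Pages tree
    finally:
        sys.setrecursionlimit(old_limit)

def cyclic_flatten(depth=0):
    """Stand-in for _flatten() recursing forever on a cyclic page tree."""
    return cyclic_flatten(depth + 1)

print(parse_with_guard(cyclic_flatten))   # → None
print(parse_with_guard(lambda: "pages"))  # → pages
```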
### PoC:
[crash-b26d05712a29b241ac6f9dc7fff57428ba2d1a04.pdf](https://github.com/user-attachments/files/16311638/crash-b26d05712a29b241ac6f9dc7fff57428ba2d1a04.pdf)
### System Info
System Information
------------------
> OS: Linux
> OS Version: #62~20.04.1-Ubuntu SMP Tue Nov 22 21:24:20 UTC 2022
> Python Version: 3.9.5 (default, Nov 23 2021, 15:27:38)
[GCC 9.3.0]
Package Information
-------------------
> langchain_core: 0.2.11
> langchain: 0.2.6
> langchain_community: 0.2.6
> langsmith: 0.1.83
> langchain_openai: 0.1.14
> langchain_text_splitters: 0.2.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
commit 27aa4d38bf93f3eef7c46f65cc0d0ef3681137eb
pypdf==4.2.0 | Using PyPDFLoader causes a crash | https://api.github.com/repos/langchain-ai/langchain/issues/24439/comments | 5 | 2024-07-19T12:27:13Z | 2024-07-22T00:20:50Z | https://github.com/langchain-ai/langchain/issues/24439 | 2,418,769,393 | 24,439 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferWindowMemory
from langchain.retrievers import ContextualCompressionRetriever
from langchain_cohere import CohereRerank
from langchain_community.llms import HuggingFaceHub
from langchain_core.prompts import PromptTemplate

prompt = PromptTemplate(
    template="""<|begin_of_text|><|start_header_id|>system<|end_header_id|> You are an assistant for question-answering tasks.
Use the following pieces of retrieved context to answer the question and give a response from the context given to you as truthfully as you can.
Do not add anything from yourself, and if you don't know the answer, just say that you don't know.
<|eot_id|>
<|start_header_id|>user<|end_header_id|>
Question: {question}
Context: {context}
Chat History: {chat_history}
Answer: <|eot_id|><|start_header_id|>assistant<|end_header_id|>""",
    input_variables=["question", "context", "chat_history"],
)

global memory
memory = ConversationBufferWindowMemory(
    k=4, memory_key="chat_history", return_messages=True, output_key="answer"
)

# LLMs Using API
llm = HuggingFaceHub(
    repo_id="meta-llama/Meta-Llama-3-8B-Instruct",
    huggingfacehub_api_token=api_key,
    model_kwargs={"temperature": 0.1, "max_length": 300, "max_new_tokens": 300},
)

compressor = CohereRerank()
compression_retriever = ContextualCompressionRetriever(
    base_compressor=compressor, base_retriever=retriever3
)

global chain_with_memory
# Create the custom chain
chain_with_memory = ConversationalRetrievalChain.from_llm(
    llm=llm,
    memory=memory,
    retriever=compression_retriever,
    combine_docs_chain_kwargs={"prompt": prompt},
    return_source_documents=True,
)
```
### Error Message and Stack Trace (if applicable)
llm_reponse before guardrails {'question': 'how many F grade a student can have in bachelor', 'chat_history': [], 'answer': "<|begin_of_text|><|start_header_id|>system<|end_header_id|> You are an assistant for question-answering tasks.\n Use the following pieces of retrieved context to answer the question and give response from the context given to you as truthfully as you can.\n Do not add anything from you and If you don't know the answer, just say that you don't know.\n <|eot_id|>\n <|start_header_id|>user<|end_header_id|>\n Question: how many F grade a student can have in bachelor\n Context:
### Description
I am building a RAG pipeline, and it was working fine in my local environment, but when I deploy it on a server, the prompt template is appended at the start of my LLM response. When I compare my local and server environments, the only difference is that the server runs langchain 0.2.9 plus langchain-community, while my local setup runs langchain 0.2.6. Has anyone faced the same issue or found a solution?
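For what it's worth, text-generation backends often return the prompt and the completion together unless told otherwise — for the Hugging Face Inference API that is the `return_full_text` parameter, so it may be worth checking whether the newer stack stopped passing it. As a client-side stopgap, the echoed prompt can be stripped before returning the answer (a generic sketch, not specific to any LangChain API):

```python
def strip_echoed_prompt(prompt: str, completion: str) -> str:
    """Drop the leading prompt if the backend echoed it before the answer."""
    if completion.startswith(prompt):
        return completion[len(prompt):].lstrip()
    return completion

prompt = "Question: how many F grades can a student have?\nAnswer:"
raw = prompt + " At most two, according to the regulations."
print(strip_echoed_prompt(prompt, raw))  # → At most two, according to the regulations.
```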
### System Info
langchain==0.2.9
langchain-cohere==0.1.9
langchain-community==0.2.7
langchain-core==0.2.21
langchain-experimental==0.0.62
langchain-text-splitters==0.2.2 | complete prompt is appended at the start of my response generated by llama3 | https://api.github.com/repos/langchain-ai/langchain/issues/24437/comments | 2 | 2024-07-19T11:04:58Z | 2024-08-08T18:13:53Z | https://github.com/langchain-ai/langchain/issues/24437 | 2,418,635,380 | 24,437 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from telegram import ReplyKeyboardMarkup, Update
from telegram.ext import CallbackContext

async def get_response(collection_name, user_input):
rag_chain, retriever = await get_rag_chain(embeddings=EMBEDDINGS_MODEL, collection_name=collection_name)
response = await rag_chain.ainvoke(user_input)
response = response.content
return response
async def process_user_question(update: Update, context: CallbackContext) -> int:
user_input = update.message.text
user_id = update.effective_user.id
if user_input == "Назад":
return await show_main_menu(update, context)
await update.message.reply_text("Зачекайте, будь ласка, аналізую чинне законодавство..."
"Підготовка відповіді може тривати кілька хвилин")
collection_name = context.user_data.get('collection_name', 'default_collection')
print(collection_name)
response = await get_response(collection_name=collection_name, user_input=user_input)
log_conversation(user_id=user_id, user_input=user_input, response=response)
await update.message.reply_text(
response + "\n\nВи можете задати ще одне питання або вибрати 'Назад', щоб повернутися до головного меню.",
reply_markup=ReplyKeyboardMarkup([["Назад"]], one_time_keyboard=False))
return USER_QUESTION
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
This code works, but it's not asynchronous.
The single point that takes a lot of time in every execution is this:
`response = await get_response(collection_name=collection_name, user_input=user_input)`
It blocks the system for all other users, so `ainvoke` must not be working as expected.
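One thing worth ruling out: if any step of the chain (embeddings, retriever, LLM client) only implements a synchronous path, `ainvoke` can end up doing its work on the event loop anyway. Off-loading the blocking call with `asyncio.to_thread` keeps other users responsive — a sketch where `blocking_invoke` stands in for the chain:

```python
import asyncio
import time

def blocking_invoke(question: str) -> str:
    """Stand-in for a chain step that only has a synchronous implementation."""
    time.sleep(0.2)  # e.g. a sync HTTP call inside the RAG chain
    return f"answer to {question}"

async def get_response(question: str) -> str:
    # Run the blocking call in a worker thread so the event loop stays free.
    return await asyncio.to_thread(blocking_invoke, question)

async def main() -> float:
    start = time.perf_counter()
    answers = await asyncio.gather(*(get_response(f"q{i}") for i in range(4)))
    assert len(answers) == 4
    return time.perf_counter() - start

elapsed = asyncio.run(main())
print(elapsed < 0.6)  # → True: ~0.2 s concurrently instead of ~0.8 s back-to-back
```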
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.12.2 (tags/v3.12.2:6abddd9, Feb 6 2024, 21:26:36) [MSC v.1937 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.2.1
> langchain: 0.2.1
> langchain_community: 0.2.1
> langsmith: 0.1.49
> langchain_google_genai: 1.0.5
> langchain_google_vertexai: 1.0.4
> langchain_openai: 0.1.7
> langchain_text_splitters: 0.2.0
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
| ainvoke is not asynchronous | https://api.github.com/repos/langchain-ai/langchain/issues/24433/comments | 8 | 2024-07-19T09:09:50Z | 2024-07-27T19:08:25Z | https://github.com/langchain-ai/langchain/issues/24433 | 2,418,430,359 | 24,433 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain_huggingface import ChatHuggingFace
from langchain_huggingface import HuggingFacePipeline
from langchain_core.pydantic_v1 import BaseModel, Field
class Calculator(BaseModel):
"""Multiply two integers together."""
a: int = Field(..., description="First integer")
b: int = Field(..., description="Second integer")
tools = [Calculator]
llm = HuggingFacePipeline.from_model_id(
model_id="microsoft/Phi-3-mini-4k-instruct",
task="text-generation",
device_map="auto",
pipeline_kwargs={
"max_new_tokens": 1024,
"do_sample": False,
"repetition_penalty": 1.03,
}
)
chat_model = ChatHuggingFace(llm=llm)
print(chat_model.invoke("How much is 3 multiplied by 12?"))
```
### Error Message and Stack Trace (if applicable)
Here is the output:
```
content='<|user|>\nHow much is 3 multiplied by 12?<|end|>\n<|assistant|>\n To find the product of 3 and 12, you simply multiply the two numbers together:\n\n3 × 12 = 36\n\nSo, 3 multiplied by 12 equals 36.' id='run-9270dbaa-9edd-4ca4-bb33-3dec0de34957-0'
```
### Description
Hello, according to the [documentation](https://python.langchain.com/v0.2/docs/integrations/chat/) ChatHuggingFace supports tool-calling. However, when I run the example from the documentation, it returns the LLM output rather than a function call.
### System Info
langchain==0.2.9
langchain-community==0.2.7
langchain-core==0.2.21
langchain-huggingface==0.0.3
langchain-text-splitters==0.2.2
Ubuntu 22.04.3 LTS
Python 3.10.12 | Huggingface tool-calling is not working | https://api.github.com/repos/langchain-ai/langchain/issues/24430/comments | 1 | 2024-07-19T07:49:51Z | 2024-07-19T20:06:00Z | https://github.com/langchain-ai/langchain/issues/24430 | 2,418,291,232 | 24,430 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
my code is proprietary
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/home/username/.pycharm_helpers/pydev/pydevd.py", line 1551, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "/home/username/.pycharm_helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/home/username/code/ai/myproject/examples/llm_rule_translation_and_creation.py", line 20, in <module>
sigma_agent_executor = create_sigma_agent(sigma_vectorstore=sigma_llm.sigmadb)
File "/home/username/code/ai/myproject/myproject/llm/toolkits/base.py", line 63, in create_sigma_agent
llm_with_tools = agent_llm.bind(functions=[convert_to_openai_function(t) for t in tools])
File "/home/username/code/ai/myproject/myproject/llm/toolkits/base.py", line 63, in <listcomp>
llm_with_tools = agent_llm.bind(functions=[convert_to_openai_function(t) for t in tools])
File "/home/username/.cache/pypoetry/virtualenvs/myproject-ItWCGl7B-py3.10/lib/python3.10/site-packages/langchain_core/utils/function_calling.py", line 278, in convert_to_openai_function
return cast(Dict, format_tool_to_openai_function(function))
File "/home/username/.cache/pypoetry/virtualenvs/myproject-ItWCGl7B-py3.10/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 168, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "/home/username/.cache/pypoetry/virtualenvs/myproject-ItWCGl7B-py3.10/lib/python3.10/site-packages/langchain_core/utils/function_calling.py", line 199, in format_tool_to_openai_function
if tool.tool_call_schema:
File "/home/username/.cache/pypoetry/virtualenvs/myproject-ItWCGl7B-py3.10/lib/python3.10/site-packages/langchain_core/tools.py", line 428, in tool_call_schema
return _create_subset_model(
File "/home/username/.cache/pypoetry/virtualenvs/myproject-ItWCGl7B-py3.10/lib/python3.10/site-packages/langchain_core/tools.py", line 129, in _create_subset_model
if field.required and not field.allow_none
AttributeError: 'FieldInfo' object has no attribute 'required'. Did you mean: 'is_required'?
### Description
I started seeing an AttributeError after upgrading to Pydantic v2.0 while using the latest version of LangChain. The error message is:
```
AttributeError: 'FieldInfo' object has no attribute 'required'. Did you mean: 'is_required'?
```
This issue seems related to the recent Pydantic upgrade. See the trace for more information. Downgrading Pydantic resolves the issue.
### System Info
LangChain Version: Latest
Pydantic Version: 2.20.1
Python Version: 3.10.12
Operating System: Windows with WSL Ubuntu
poetry | It Seems There's a Compatibility Issue with Pydantic v2.0: FieldInfo object has no attribute 'required' | https://api.github.com/repos/langchain-ai/langchain/issues/24427/comments | 7 | 2024-07-19T04:48:04Z | 2024-08-01T17:00:04Z | https://github.com/langchain-ai/langchain/issues/24427 | 2,417,897,185 | 24,427 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain_community.document_loaders import UnstructuredMarkdownLoader
from langchain_core.documents import Document
loader = UnstructuredMarkdownLoader("./a.md")
```
### Error Message and Stack Trace (if applicable)
C:\src\myproj>python testExample1.py
C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain\llms\__init__.py:549: LangChainDeprecationWarning: Importing LLMs from langchain is deprecated. Importing from langchain will no longer be supported as of langchain==0.2.0. Please import from langchain-community instead:
`from langchain_community.llms import OpenAI`.
To install langchain-community run `pip install -U langchain-community`.
warnings.warn(
C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\_api\deprecation.py:139: LangChainDeprecationWarning: The class `AzureOpenAI` was deprecated in LangChain 0.0.10 and will be removed in 0.3.0. An updated version of the class exists in the langchain-openai package and should be used instead. To use it run `pip install -U langchain-openai` and import as `from langchain_openai import AzureOpenAI`.
warn_deprecated(
Traceback (most recent call last):
File "C:\src\myproj\testExample1.py", line 56, in <module>
documents += loader.load()
^^^^^^^^^^^^^
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\document_loaders\base.py", line 30, in load
return list(self.lazy_load())
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_community\document_loaders\unstructured.py", line 89, in lazy_load
elements = self._get_elements()
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_community\document_loaders\email.py", line 68, in _get_elements
return partition_email(filename=self.file_path, **self.unstructured_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\unstructured\documents\elements.py", line 593, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\unstructured\file_utils\filetype.py", line 626, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\unstructured\file_utils\filetype.py", line 582, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\unstructured\chunking\dispatch.py", line 74, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\unstructured\partition\email.py", line 427, in partition_email
elements = partition_html(
^^^^^^^^^^^^^^^
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\unstructured\documents\elements.py", line 593, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\unstructured\file_utils\filetype.py", line 626, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\unstructured\file_utils\filetype.py", line 582, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\unstructured\chunking\dispatch.py", line 74, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\unstructured\partition\html\partition.py", line 107, in partition_html
document.elements,
^^^^^^^^^^^^^^^^^
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\unstructured\utils.py", line 187, in __get__
value = self._fget(obj)
^^^^^^^^^^^^^^^
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\unstructured\documents\html.py", line 76, in elements
return list(iter_elements())
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\unstructured\documents\html.py", line 71, in iter_elements
for e in self._iter_elements(self._main):
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\unstructured\documents\html.py", line 145, in _iter_elements
yield from self._process_text_tag(tag_elem)
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\unstructured\documents\html.py", line 450, in _process_text_tag
element = self._parse_tag(tag_elem, include_tail_text)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\unstructured\documents\html.py", line 409, in _parse_tag
ElementCls = self._classify_text(text, tag_elem.tag)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\unstructured\documents\html.py", line 94, in _classify_text
if tag not in HEADING_TAGS and is_possible_narrative_text(text):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\unstructured\partition\text_type.py", line 80, in is_possible_narrative_text
if exceeds_cap_ratio(text, threshold=cap_threshold):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\unstructured\partition\text_type.py", line 276, in exceeds_cap_ratio
if sentence_count(text, 3) > 1:
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\unstructured\partition\text_type.py", line 225, in sentence_count
sentences = sent_tokenize(text)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\unstructured\nlp\tokenize.py", line 136, in sent_tokenize
_download_nltk_packages_if_not_present()
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\unstructured\nlp\tokenize.py", line 130, in _download_nltk_packages_if_not_present
download_nltk_packages()
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\unstructured\nlp\tokenize.py", line 88, in download_nltk_packages
urllib.request.urlretrieve(NLTK_DATA_URL, tgz_file)
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\urllib\request.py", line 250, in urlretrieve
tfp = open(filename, 'wb')
^^^^^^^^^^^^^^^^^^^^
PermissionError: [Errno 13] Permission denied: 'C:\\Users\\feisong\\AppData\\Local\\Temp\\tmpildcyt_d'
### Description
I am trying to use LangChain to load `.md` and `.eml` files with
`UnstructuredMarkdownLoader` and
`UnstructuredEmailLoader`,
but I get the exception above.
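Two workarounds may be worth trying (both assumptions about this setup, not verified): pre-download the NLTK data once with `python -m nltk.downloader punkt averaged_perceptron_tagger` so `unstructured` skips its runtime download, or point `TMP`/`TEMP` at a directory the process can write to. The probe below checks whether the temp directory Python resolves is actually writable:

```python
import tempfile

def writable_temp_dir() -> bool:
    """Check whether the temp dir (honours TMP/TEMP on Windows) is writable."""
    try:
        with tempfile.NamedTemporaryFile(dir=tempfile.gettempdir()):
            return True
    except OSError:  # PermissionError is a subclass of OSError
        return False

print(tempfile.gettempdir(), writable_temp_dir())
```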
### System Info
langchain==0.2.8
langchain-cli==0.0.25
langchain-community==0.2.7
langchain-core==0.2.19
langchain-openai==0.1.16
langchain-text-splitters==0.2.2
Windows
Python 3.10.2 | Several unstructed loader throwing PermissionError: [Errno 13] ( unstructuredMarkdownloader , unstructruedEmailLoader .. ) | https://api.github.com/repos/langchain-ai/langchain/issues/24413/comments | 0 | 2024-07-18T19:43:34Z | 2024-07-18T20:01:42Z | https://github.com/langchain-ai/langchain/issues/24413 | 2,417,245,683 | 24,413 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.vectorstores import Neo4jVector
from langchain_huggingface import HuggingFaceEmbeddings
embeddings = HuggingFaceEmbeddings(
model_name="sentence-transformers/all-mpnet-base-v2"
)
self.existing_graph_parts = Neo4jVector.from_existing_graph(
embedding=embeddings,
url=uri,
username=username,
password=password,
node_label="part",
text_node_properties=["name"],
embedding_node_property="embedding",
)
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "D:\graph_rag.py", line 133, in <module>
graph_rag = GraphRag()
^^^^^^^^^^
File "D:\graph_rag.py", line 44, in __init__
self.existing_graph_parts = Neo4jVector.from_existing_graph(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\syh\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_community\vectorstores\neo4j_vector.py", line 1431, in from_existing_graph
text_embeddings = embedding.embed_documents([el["text"] for el in data])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\syh\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_huggingface\embeddings\huggingface.py", line 87, in embed_documents
embeddings = self.client.encode(
^^^^^^^^^^^^^^^^^^^
File "C:\Users\syh\AppData\Local\Programs\Python\Python312\Lib\site-packages\sentence_transformers\SentenceTransformer.py", line 565, in encode
if all_embeddings[0].dtype == torch.bfloat16:
~~~~~~~~~~~~~~^^^
IndexError: list index out of range
```
### Description
Sorry for my poor English!
When I ran the code the first time, it worked well.
But when I reran the code, it raised the error above.
I think it errors because all nodes already have their embeddings, so the problem occurs when running the library code below:
file: langchain_community\vectorstores\neo4j_vector.py
from line 1421
```python
while True:
fetch_query = (
f"MATCH (n:`{node_label}`) "
f"WHERE n.{embedding_node_property} IS null "
"AND any(k in $props WHERE n[k] IS NOT null) "
f"RETURN elementId(n) AS id, reduce(str='',"
"k IN $props | str + '\\n' + k + ':' + coalesce(n[k], '')) AS text "
"LIMIT 1000"
)
data = store.query(fetch_query, params={"props": text_node_properties})
text_embeddings = embedding.embed_documents([el["text"] for el in data])
```
This code fetches nodes that don't have the embedding_node_property yet. Since all nodes in my Neo4j database already have embeddings, `data` is an empty list.
Then, in the following code, index 0 is out of range for that empty list.
file: sentence_transformers\SentenceTransformer.py
from line 563
```python
elif convert_to_numpy:
if not isinstance(all_embeddings, np.ndarray):
if all_embeddings[0].dtype == torch.bfloat16:
all_embeddings = np.asarray([emb.float().numpy() for emb in all_embeddings])
else:
all_embeddings = np.asarray([emb.numpy() for emb in all_embeddings])
```
That's where the error happened.
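For illustration, a guard along these lines (a sketch with made-up helper names, not the actual library patch) would avoid calling the embedder with an empty batch:

```python
# Sketch of the missing guard: stop before embedding when the fetch query
# returns no rows (i.e. every node already has an embedding).
def embed_remaining(fetch_batch, embed_documents):
    while True:
        data = fetch_batch()            # rows still missing an embedding
        if not data:                    # nothing left -> never embed []
            break
        vectors = embed_documents([row["text"] for row in data])
        # ... write `vectors` back to the graph here ...
        if len(data) < 1000:            # last partial batch
            break

# Second run: every node is already embedded, so the fetch returns nothing
# and the embedder is never invoked.
calls = []
embed_remaining(lambda: [], lambda texts: calls.append(texts) or [])
print(len(calls))
```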
I got an answer from the bot, but I still think this is a bug that needs to be fixed!
Thanks!
### System Info
langchain==0.2.6
langchain-community==0.2.6
langchain-core==0.2.10
langchain-huggingface==0.0.3
langchain-openai==0.1.10
langchain-text-splitters==0.2.2
windows 11
python3.12 | Neo4jVector doesn't work well with HuggingFaceEmbeddings when reusing the graph | https://api.github.com/repos/langchain-ai/langchain/issues/24401/comments | 7 | 2024-07-18T14:32:34Z | 2024-08-10T22:56:05Z | https://github.com/langchain-ai/langchain/issues/24401 | 2,416,594,786 | 24,401 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain_experimental.sql import SQLDatabaseChain
from langchain_community.utilities import SQLDatabase
from langchain_openai import ChatOpenAI, OpenAI
OPENAI_API_KEY = "XXXXXX"
llm = ChatOpenAI(temperature=0, openai_api_key=OPENAI_API_KEY)
sql_uri = "sqlite:///phealth.db"
db = SQLDatabase.from_uri(sql_uri, include_tables=['nutrition','exercise','medication'],sample_rows_in_table_info=2)
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
def retrieve_from_db(query: str) -> str:
db_context = db_chain(query)
db_context = db_context['result'].strip()
return db_context
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
From SQLDatabaseChain's output I can see the following generated query and results:
```
> Entering new SQLDatabaseChain chain...
what medications i've taken today? for user 1
SQLQuery:SELECT name, dosage, dosage_unit, administration, reason, prescription, frequency, indications, interactions
FROM medication
WHERE user_id = 1 AND date(create_time) = date('now')
LIMIT 5;
SQLResult:
Answer:Ibuprofeno, 200 mg, Oral, Pain relief, fever reduction, 0, Every 4 to 6 hours, Headache, dental pain, menstrual cramps, muscle aches, or arthritis, May interact with blood thinners, blood pressure medications, and other NSAIDs
Aspirina, 325 mg, Oral, Pain relief, fever reduction, blood thinning, 0, Every 4 to 6 hours, Headache, muscle pain, arthritis, prevention of heart attacks or stroke, May interact with blood thinners, NSAIDs, and certain antidepressants
> Finished chain.
Ibuprofeno, 200 mg, Oral, Pain relief, fever reduction, 0, Every 4 to 6 hours, Headache, dental pain, menstrual cramps, muscle aches, or arthritis, May interact with blood thinners, blood pressure medications, and other NSAIDs
Aspirina, 325 mg, Oral, Pain relief, fever reduction, blood thinning, 0, Every 4 to 6 hours, Headache, muscle pain, arthritis, prevention of heart attacks or stroke, May interact with blood thinners, NSAIDs, and certain antidepressants
```
But when running the code directly on the database (SQLite) I get no results, which is correct since no records should match:
```
sqlite> SELECT name, dosage, dosage_unit, administration, reason, prescription, frequency, indications, interactions FROM medication WHERE user_id = 1 AND date(create_time) = date('now') LIMIT 5;
sqlite>
sqlite> SELECT name, date(create_time), date('now') from medication ;
Ibuprofeno|2024-07-17|2024-07-18
Aspirina|2024-07-17|2024-07-18
Normorytmin|2024-07-17|2024-07-18
Corvis|2024-07-17|2024-07-18
Duodart|2024-07-17|2024-07-18
Normorytmin|2024-07-17|2024-07-18
Corvis|2024-07-17|2024-07-18
Normorytmin|2024-07-17|2024-07-18
Corvis|2024-07-17|2024-07-18
Duodart|2024-07-17|2024-07-18
Normorytmin|2024-07-17|2024-07-18
Corvis|2024-07-17|2024-07-18
Duodart|2024-07-17|2024-07-18
Rosuvast|2024-07-17|2024-07-18
```
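For what it's worth, a self-contained check with the standard library (using a hypothetical minimal schema) confirms the generated SQL really returns no rows; since the verbose output above shows an empty SQLResult followed by a populated Answer, the result text appears to come from the LLM rather than the database:

```python
# Reproduction sketch: rows created yesterday never match date('now'),
# so the database correctly returns nothing for the generated query.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE medication (name TEXT, user_id INT, create_time TEXT)")
con.execute("INSERT INTO medication VALUES ('Ibuprofeno', 1, date('now', '-1 day'))")
rows = con.execute(
    "SELECT name FROM medication "
    "WHERE user_id = 1 AND date(create_time) = date('now')"
).fetchall()
print(rows)
```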
### System Info
langchain==0.2.7
langchain-cli==0.0.25
langchain-community==0.2.6
langchain-core==0.2.21
langchain-experimental==0.0.62
langchain-openai==0.1.17
langchain-text-splitters==0.2.2
| SQLDatabaseChain generated query returns incorrect result, and different from when the query is executed directly on the db | https://api.github.com/repos/langchain-ai/langchain/issues/24399/comments | 0 | 2024-07-18T14:16:13Z | 2024-07-18T14:18:58Z | https://github.com/langchain-ai/langchain/issues/24399 | 2,416,520,776 | 24,399 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/how_to/few_shot_examples_chat/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
I have rebuilt the [example](https://python.langchain.com/v0.2/docs/how_to/few_shot_examples_chat/) in the documentation.
Unfortunately, I get a ValidationError. I am not the only one with this problem, so I assume that something is wrong in the documentation or in LangChain.
`from langchain_community.chat_models import ChatOllama`
`model = ChatOllama(model="llama3")`
`from langchain_core.prompts import ChatPromptTemplate, FewShotChatMessagePromptTemplate`
`examples = [ {"input": "2 🦜 2", "output": "4"}, {"input": "2 🦜 3", "output": "5"},]`
`example_prompt = ChatPromptTemplate.from_messages([("human", "{input}"), ("ai", "{output}"),])`
`few_shot_prompt = FewShotChatMessagePromptTemplate(example_prompt=example_prompt, examples=examples,)`
`print(few_shot_prompt.invoke({}).to_messages())`
--------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
Cell In[4], [line 8](vscode-notebook-cell:?execution_count=4&line=8)
[1](vscode-notebook-cell:?execution_count=4&line=1) # This is a prompt template used to format each individual example.
[2](vscode-notebook-cell:?execution_count=4&line=2) example_prompt = ChatPromptTemplate.from_messages(
[3](vscode-notebook-cell:?execution_count=4&line=3) [
[4](vscode-notebook-cell:?execution_count=4&line=4) ("human", "{input}"),
[5](vscode-notebook-cell:?execution_count=4&line=5) ("ai", "{output}"),
[6](vscode-notebook-cell:?execution_count=4&line=6) ]
[7](vscode-notebook-cell:?execution_count=4&line=7) )
----> [8](vscode-notebook-cell:?execution_count=4&line=8) few_shot_prompt = FewShotChatMessagePromptTemplate(
[9](vscode-notebook-cell:?execution_count=4&line=9) example_prompt=example_prompt,
[10](vscode-notebook-cell:?execution_count=4&line=10) examples=examples,
[11](vscode-notebook-cell:?execution_count=4&line=11) )
[13](vscode-notebook-cell:?execution_count=4&line=13) print(few_shot_prompt.invoke({"Hallo"}).to_messages())
File c:\Users\\AppData\Local\miniconda3\envs\langchain\Lib\site-packages\pydantic\v1\main.py:341, in BaseModel.__init__(__pydantic_self__, **data)
[339](file:///C:/Users//AppData/Local/miniconda3/envs/langchain/Lib/site-packages/pydantic/v1/main.py:339) values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
[340](file:///C:/Users//AppData/Local/miniconda3/envs/langchain/Lib/site-packages/pydantic/v1/main.py:340) if validation_error:
--> [341](file:///C:/Users//AppData/Local/miniconda3/envs/langchain/Lib/site-packages/pydantic/v1/main.py:341) raise validation_error
[342](file:///C:/Users//AppData/Local/miniconda3/envs/langchain/Lib/site-packages/pydantic/v1/main.py:342) try:
[343](file:///C:/Users//AppData/Local/miniconda3/envs/langchain/Lib/site-packages/pydantic/v1/main.py:343) object_setattr(__pydantic_self__, '__dict__', values)
**ValidationError: 1 validation error for FewShotChatMessagePromptTemplate
input_variables
field required (type=value_error.missing)**
### Idea or request for content:
_No response_ | DOC: Missing input variables for FewShotChatMessagePromptTemplate | https://api.github.com/repos/langchain-ai/langchain/issues/24398/comments | 3 | 2024-07-18T13:35:20Z | 2024-07-21T18:06:15Z | https://github.com/langchain-ai/langchain/issues/24398 | 2,416,403,639 | 24,398 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_core.messages import HumanMessage
from langchain_core.tools import tool
from langchain_experimental.llms.ollama_functions import OllamaFunctions
@tool
def add(a: int, b: int) -> int:
"""Adds a and b."""
return a + b
@tool
def multiply(a: int, b: int) -> int:
"""Multiplies a and b."""
return a * b
tools = [add, multiply]
llm_with_tools = OllamaFunctions(model="llama3:70b", format="json").bind_tools(tools=tools)
query = "What is 3 * 12?"
messages = [HumanMessage(query)]
ai_msg = llm_with_tools.invoke(messages)
messages.append(ai_msg)
for tool_call in ai_msg.tool_calls:
selected_tool = {"add": add, "multiply": multiply}[tool_call["name"].lower()]
tool_msg = selected_tool.invoke(tool_call)
messages.append(tool_msg)
# passing messages with (Human, AI, Tool) back to model
ai_msg = llm_with_tools.invoke(messages)
messages.append(ai_msg)
print(messages)
```
### Error Message and Stack Trace (if applicable)
```
[
HumanMessage(content='What is 3 * 12?'),
AIMessage(content='', id='run-cb967bbf-778f-49b8-80d7-a54ce8b605c1-0', tool_calls=[{'name': 'multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_739326217a574817bef06eea64439d48', 'type': 'tool_call'}]),
ToolMessage(content='36', tool_call_id='call_739326217a574817bef06eea64439d48'),
AIMessage(content='', id='run-5e04e8b2-1120-44af-bb9b-13595dd221b5-0', tool_calls=[{'name': 'multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_dcbfe846caf74b4fb4ebba1d3c660ebc', 'type': 'tool_call'}])
]
```
### Description
* When using the experimental `OllamaFunctions`, passing Tool output back as described in [the documentation](https://python.langchain.com/v0.2/docs/how_to/tool_results_pass_to_model/) does not work
* The model ignores / doesn't receive the tool-related messages and thus just regenerates the first tool call
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1-NixOS SMP PREEMPT_DYNAMIC Thu Jun 27 11:49:15 UTC 2024
> Python Version: 3.11.9 (main, Apr 2 2024, 08:25:04) [GCC 13.2.0]
Package Information
-------------------
> langchain_core: 0.2.18
> langchain: 0.2.7
> langchain_community: 0.2.7
> langsmith: 0.1.85
> langchain_experimental: 0.0.62
> langchain_openai: 0.1.16
> langchain_text_splitters: 0.2.2
> langchainhub: 0.1.20
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Passing tool output back to model doesn't work for OllamaFunctions | https://api.github.com/repos/langchain-ai/langchain/issues/24396/comments | 1 | 2024-07-18T12:25:14Z | 2024-07-19T16:34:27Z | https://github.com/langchain-ai/langchain/issues/24396 | 2,416,223,973 | 24,396 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/text_embedding/baidu_qianfan_endpoint/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Unable to call the API endpoint.

### Idea or request for content:
Calling the Qianfan API directly succeeds, but using the LangChain interface raises an error.
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.llms.moonshot import Moonshot
import os

kimi_llm = Moonshot(model="moonshot-v1-8k")
output = kimi_llm.invoke("hello")
print(output)
```
### Error Message and Stack Trace (if applicable)
AttributeError: 'Moonshot' object has no attribute '_client'
### Description
Moonshot in langchain-community 0.2.7 has a problem: AttributeError: 'Moonshot' object has no attribute '_client'. When I go back to 0.2.6, it works fine!
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Tue Jun 4 14:43:51 UTC 2024
> Python Version: 3.12.4 | packaged by Anaconda, Inc. | (main, Jun 18 2024, 15:12:24) [GCC 11.2.0]
Package Information
-------------------
> langchain_core: 0.2.20
> langchain: 0.2.8
> langchain_community: 0.2.7
> langsmith: 0.1.88
> langchain-moonshot-chat: Installed. No version info available.
> langchain-prompt-chain: Installed. No version info available.
> langchain-prompt-template: Installed. No version info available.
> langchain-test: Installed. No version info available.
> langchain_text_splitters: 0.2.2
Packages not installed (Not Necessarily a Problem)
-------------------------------------------------- | Moonshot 0.2.7 has problem : AttributeError: 'Moonshot' object has no attribute '_client', When I back to 0.2.6 is OK! | https://api.github.com/repos/langchain-ai/langchain/issues/24390/comments | 3 | 2024-07-18T09:23:36Z | 2024-07-30T09:17:25Z | https://github.com/langchain-ai/langchain/issues/24390 | 2,415,836,333 | 24,390 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.embeddings import SparkLLMTextEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain_community.document_loaders import UnstructuredMarkdownLoader
import os
os.environ['TMPDIR'] = './.caches'
os.environ['TEMP'] = './.caches'
markdown_path = "./llms/doc1.md"
loader = UnstructuredMarkdownLoader(markdown_path)
documnets = loader.load()
print(loader)
```
### Error Message and Stack Trace (if applicable)
```bash
(LangChain) F:\PythonProject\LangChain>python ./llms/SparkLLMTextEmbeddings.py
Traceback (most recent call last):
File "F:\PythonProject\LangChain\llms\SparkLLMTextEmbeddings.py", line 21, in <module>
documnets = loader.load()
^^^^^^^^^^^^^
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\langchain_core\document_loaders\base.py", line 30, in load
return list(self.lazy_load())
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\langchain_community\document_loaders\unstructured.py", line 89, in lazy_load
elements = self._get_elements()
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\langchain_community\document_loaders\markdown.py", line 45, in _get_elements
return partition_md(filename=self.file_path, **self.unstructured_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\unstructured\documents\elements.py", line 593, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\unstructured\file_utils\filetype.py", line 626, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\unstructured\file_utils\filetype.py", line 582, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\unstructured\chunking\dispatch.py", line 74, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\unstructured\partition\md.py", line 110, in partition_md
return partition_html(
^^^^^^^^^^^^^^^
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\unstructured\documents\elements.py", line 593, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\unstructured\file_utils\filetype.py", line 626, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\unstructured\file_utils\filetype.py", line 582, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\unstructured\chunking\dispatch.py", line 74, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\unstructured\partition\html\partition.py", line 107, in partition_html
document.elements,
^^^^^^^^^^^^^^^^^
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\unstructured\utils.py", line 187, in __get__
value = self._fget(obj)
^^^^^^^^^^^^^^^
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\unstructured\documents\html.py", line 76, in elements
return list(iter_elements())
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\unstructured\documents\html.py", line 71, in iter_elements
for e in self._iter_elements(self._main):
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\unstructured\documents\html.py", line 145, in _iter_elements
yield from self._process_text_tag(tag_elem)
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\unstructured\documents\html.py", line 450, in _process_text_tag
element = self._parse_tag(tag_elem, include_tail_text)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\unstructured\documents\html.py", line 409, in _parse_tag
ElementCls = self._classify_text(text, tag_elem.tag)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\unstructured\documents\html.py", line 94, in _classify_text
if tag not in HEADING_TAGS and is_possible_narrative_text(text):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\unstructured\partition\text_type.py", line 80, in is_possible_narrative_text
if exceeds_cap_ratio(text, threshold=cap_threshold):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\unstructured\partition\text_type.py", line 276, in exceeds_cap_ratio
if sentence_count(text, 3) > 1:
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\unstructured\partition\text_type.py", line 225, in sentence_count
sentences = sent_tokenize(text)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\unstructured\nlp\tokenize.py", line 136, in sent_tokenize
_download_nltk_packages_if_not_present()
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\unstructured\nlp\tokenize.py", line 130, in _download_nltk_packages_if_not_present
download_nltk_packages()
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\unstructured\nlp\tokenize.py", line 88, in download_nltk_packages
urllib.request.urlretrieve(NLTK_DATA_URL, tgz_file)
File "C:\Users\asus\.conda\envs\LangChain\Lib\urllib\request.py", line 250, in urlretrieve
tfp = open(filename, 'wb')
^^^^^^^^^^^^^^^^^^^^
PermissionError: [Errno 13] Permission denied: 'F:\\PythonProject\\LangChain\\.caches\\tmp27mpsjp4'
```
### Description
I don't know where I went wrong
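From the traceback, the failure happens while unstructured downloads NLTK data into the temp directory, so a possible workaround sketch (the directory name is my assumption) is to give NLTK its own writable data directory and pre-download the packages once:

```python
# Hedged workaround: point NLTK at a user-writable data directory so the
# download helper in unstructured never writes to the system temp folder.
import os

nltk_dir = os.path.join(os.getcwd(), "nltk_data")
os.makedirs(nltk_dir, exist_ok=True)
os.environ["NLTK_DATA"] = nltk_dir  # read by nltk when it builds its search path

# One-time pre-download (needs network access; uncomment to run):
# import nltk
# nltk.download("punkt", download_dir=nltk_dir)
# nltk.download("averaged_perceptron_tagger", download_dir=nltk_dir)

print(os.path.isdir(os.environ["NLTK_DATA"]))
```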
### System Info
platform windows
Python 3.12.4 | UnstructuredMarkdownLoader PermissionError: [Errno 13] Permission denied | https://api.github.com/repos/langchain-ai/langchain/issues/24388/comments | 4 | 2024-07-18T08:24:24Z | 2024-07-22T17:17:38Z | https://github.com/langchain-ai/langchain/issues/24388 | 2,415,715,932 | 24,388 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os
import requests
import yaml
os.environ["OPENAI_API_KEY"] = "sk-REDACTED"
from langchain_community.agent_toolkits.openapi import planner
from langchain_openai.chat_models import ChatOpenAI
from langchain_community.agent_toolkits.openapi.spec import reduce_openapi_spec
from langchain.requests import RequestsWrapper
from requests.packages.urllib3.exceptions import InsecureRequestWarning
# Ignore SSL warnings
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
with open("/home/ehkim/git/testprj/code_snippet/swagger.yaml") as f:
data = yaml.load(f, Loader=yaml.FullLoader)
swagger_api_spec = reduce_openapi_spec(data)
def construct_superset_aut_headers(url=None):
import requests
url = "https://superset.mydomain.com/api/v1/security/login"
payload = {
"username": "myusername",
"password": "mypassword",
"provider": "db",
"refresh": True
}
headers = {
"Content-Type": "application/json"
}
response = requests.post(url, json=payload, headers=headers, verify=False)
data = response.json()
return {"Authorization": f"Bearer {data['access_token']}"}
from langchain.globals import set_debug
set_debug(True)
llm = ChatOpenAI(model='gpt-4o')
swagger_requests_wrapper = RequestsWrapper(headers=construct_superset_aut_headers(), verify=False)
superset_agent = planner.create_openapi_agent(
swagger_api_spec,
swagger_requests_wrapper,
llm,
allowed_operations=["GET", "POST", "PUT", "DELETE", "PATCH"],
allow_dangerous_requests=True,
agent_executor_kwargs={'handle_parsing_errors':True},
handle_parsing_errors=True
)
superset_agent.run(
"""
1. Get the dataset using the following information. (tool: requests_post, API: /api/v1/dataset/get_or_create/, database_id: 1, table_name: raw_esg_volume, response : {{'result' : {{'table_id': (dataset_id)}}}})
2. Retrieve the dataset information obtained in step 1. (tool: requests_get, API: /api/v1/dataset/dataset/{{dataset_id}}/, params: None)
3. Create a chart referencing the dataset obtained in step 2. The chart should plot the trend of total, online_news, and (total - online_news) values as a line chart. (tool: requests_post, API: /api/v1/chart/, database_id: 1)
4. Return the URL of the created chart. https://superset.mydomain.com/explore/?slice_id={{chart_id}}
When specifying the action, only write the tool name without any additional explanation.
"""
)
```
For the spec, I used the swagger file from https://superset.demo.datahubproject.io/api/v1/_openapi.
It is in JSON format, so I converted it to YAML with the code below.
```python
import json
import yaml
# read file
with open('swagger.json', 'r') as json_file:
json_data = json.load(json_file)
# write file
with open('swagger.yaml', 'w') as yaml_file:
yaml.dump(json_data, yaml_file, default_flow_style=False)
```
### Error Message and Stack Trace (if applicable)
No exception is raised because handle_parsing_errors=True is set, but the agent fails to fulfill the user's request.
Below is the agent log.
```
[chain/start] [chain:AgentExecutor] Entering Chain run with input:
{
"input": "\n 1. Get the dataset using the following information. (tool: requests_post, API: /api/v1/dataset/get_or_create/, database_id: 1, (syncopation) "
}
[chain/start] [chain:AgentExecutor > chain:LLMChain] Entering Chain run with input:
{
"input": "\n 1. Get the dataset using the following information. (tool: requests_post, API: /api/v1/dataset/get_or_create/, database_id: 1, (syncopation) ",
"agent_scratchpad": "",
"stop": [
"\nObservation:",
"\n\tObservation:"
]
}
...
(truncated)
(api_planner log)
(truncated)
(api_controller log)
...
[chain/end] [chain:AgentExecutor > tool:api_controller > chain:AgentExecutor > chain:LLMChain] [2.73s] Exiting Chain run with output:
{
"text": "Action: The action to take is to make a POST request to the `/api/v1/dataset/get_or_create/` endpoint to create or get the dataset for the table 'raw_esg_volume' in the database with ID 1.\nAction Input: \n```json\n{\n \"url\": \"https://superset.mydomain.com/api/v1/dataset/get_or_create/\",\n \"data\": {\n \"database_id\": 1,\n \"table_name\": \"raw_esg_volume\"\n },\n \"output_instructions\": \"Extract the table_id from the response\"\n}\n```"
}
[tool/start] [chain:AgentExecutor > tool:api_controller > chain:AgentExecutor > tool:invalid_tool] Entering Tool run with input:
"{'requested_tool_name': "The action to take is to make a POST request to the `/api/v1/dataset/get_or_create/` endpoint to create or get the dataset for the table 'raw_esg_volume' in the database with ID 1.", 'available_tool_names': ['requests_get', 'requests_post', 'requests_put']}"
[tool/end] [chain:AgentExecutor > tool:api_controller > chain:AgentExecutor > tool:invalid_tool] [0ms] Exiting Tool run with output:
"The action to take is to make a POST request to the `/api/v1/dataset/get_or_create/` endpoint to create or get the dataset for the table 'raw_esg_volume' in the database with ID 1. is not a valid tool, try one of [requests_get, requests_post, requests_put]."
[chain/start] [chain:AgentExecutor > tool:api_controller > chain:AgentExecutor > chain:LLMChain] Entering Chain run with input:
{
"input": "1. POST /api/v1/dataset/get_or_create/ with params {'database_id': 1, 'table_name': 'raw_esg_volume'}",
"agent_scratchpad": "Action: The action to take is to make a POST request to the `/api/v1/dataset/get_or_create/` endpoint to create or get the dataset for the table 'raw_esg_volume' in the database with ID 1.\nAction Input: \n```json\n{\n \"url\": \"https://superset.mydomain.com/api/v1/dataset/get_or_create/\",\n \"data\": {\n \"database_id\": 1,\n \"table_name\": \"raw_esg_volume\"\n },\n \"output_instructions\": \"Extract the table_id from the response\"\n}\n```\nObservation: The action to take is to make a POST request to the `/api/v1/dataset/get_or_create/` endpoint to create or get the dataset for the table 'raw_esg_volume' in the database with ID 1. is not a valid tool, try one of [requests_get, requests_post, requests_put].\nThought:",
"stop": [
"\nObservation:",
"\n\tObservation:"
]
}
[llm/start] [chain:AgentExecutor > tool:api_controller > chain:AgentExecutor > chain:LLMChain > llm:ChatOpenAI] Entering LLM run with input:
{
"prompts": [
"Human: You are an agent that gets a sequence of API calls and given their documentation, should execute them and return the final response.\nIf you cannot complete them and run into issues, you should explain the issue. If you're unable to resolve an API call, you can retry the API call. When interacting with API objects, you should extract ids for inputs to other API calls but ids and names for outputs returned to the User.\n\n\nHere is documentation on the API:\nBase url: https://superset.mydomain.com/\nEndpoints:\n== Docs for POST /api/v1/dataset/get_or_create/ == \nrequestBody:\n content:\n application/json:\n schema:\n properties:\n always_filter_main_dttm:\n default: false\n type: boolean\n database:\n type: integer\n external_url:\n nullable: true\n type: string\n is_managed_externally:\n nullable: true\n type: boolean\n normalize_columns:\n default: false\n type: boolean\n owners:\n items:\n type: integer\n type: array\n schema:\n maxLength: 250\n minLength: 0\n nullable: true\n type: string\n sql:\n nullable: true\n type: string\n table_name:\n maxLength: 250\n minLength: 1\n type: string\n required:\n - database\n - table_name\n type: object\n description: Dataset schema\n required: true\n\n== Docs for POST /api/v1/dataset/get_or_create/ == \nrequestBody:\n content:\n application/json:\n schema:\n properties:\n always_filter_main_dttm:\n default: false\n type: boolean\n database_id:\n description: ID of database table belongs to\n type: integer\n normalize_columns:\n default: false\n type: boolean\n schema:\n description: The schema the table belongs to\n maxLength: 250\n minLength: 0\n nullable: true\n type: string\n table_name:\n description: Name of table\n type: string\n template_params:\n description: Template params for the table\n type: string\n required:\n - database_id\n - table_name\n type: object\n required: true\nresponses:\n content:\n application/json:\n schema:\n properties:\n result:\n properties:\n table_id:\n type: integer\n type: 
object\n type: object\n description: The ID of the table\n\n\n\n\nHere are tools to execute requests against the API: requests_get: Use this to GET content from a website.\nInput to the tool should be a json string with 3 keys: \"url\", \"params\" and \"output_instructions\".\nThe value of \"url\" should be a string. \nThe value of \"params\" should be a dict of the needed and available parameters from the OpenAPI spec related to the endpoint. \nIf parameters are not needed, or not available, leave it empty.\nThe value of \"output_instructions\" should be instructions on what information to extract from the response, \nfor example the id(s) for a resource(s) that the GET request fetches.\n\nrequests_post: Use this when you want to POST to a website.\nInput to the tool should be a json string with 3 keys: \"url\", \"data\", and \"output_instructions\".\nThe value of \"url\" should be a string.\nThe value of \"data\" should be a dictionary of key-value pairs you want to POST to the url.\nThe value of \"output_instructions\" should be instructions on what information to extract from the response, for example the id(s) for a resource(s) that the POST request creates.\nAlways use double quotes for strings in the json string.\nrequests_put: Use this when you want to PUT to a website.\nInput to the tool should be a json string with 3 keys: \"url\", \"data\", and \"output_instructions\".\nThe value of \"url\" should be a string.\nThe value of \"data\" should be a dictionary of key-value pairs you want to PUT to the url.\nThe value of \"output_instructions\" should be instructions on what information to extract from the response, for example the id(s) for a resource(s) that the PUT request creates.\nAlways use double quotes for strings in the json string.\n\n\nStarting below, you should follow this format:\n\nPlan: the plan of API calls to execute\nThought: you should always think about what to do\nAction: the action to take, should be one of the tools [requests_get, 
requests_post, requests_put]\nAction Input: the input to the action\nObservation: the output of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I am finished executing the plan (or, I cannot finish executing the plan without knowing some other information.)\nFinal Answer: the final output from executing the plan or missing information I'd need to re-plan correctly.\n\n\nBegin!\n\nPlan: 1. POST /api/v1/dataset/get_or_create/ with params {'database_id': 1, 'table_name': 'raw_esg_volume'}\nThought:\nAction: The action to take is to make a POST request to the `/api/v1/dataset/get_or_create/` endpoint to create or get the dataset for the table 'raw_esg_volume' in the database with ID 1.\nAction Input: \n```json\n{\n \"url\": \"https://superset.mydomain.com/api/v1/dataset/get_or_create/\",\n \"data\": {\n \"database_id\": 1,\n \"table_name\": \"raw_esg_volume\"\n },\n \"output_instructions\": \"Extract the table_id from the response\"\n}\n```\nObservation: The action to take is to make a POST request to the `/api/v1/dataset/get_or_create/` endpoint to create or get the dataset for the table 'raw_esg_volume' in the database with ID 1. is not a valid tool, try one of [requests_get, requests_post, requests_put].\nThought:"
]
}
[llm/end] [chain:AgentExecutor > tool:api_controller > chain:AgentExecutor > chain:LLMChain > llm:ChatOpenAI] [4.12s] Exiting LLM run with output:
{
"generations": [
[
{
"text": "It looks like there was an error in the previous action. I will correct the action to use the appropriate tool, which is `requests_post`.\n\nPlan: 1. POST /api/v1/dataset/get_or_create/ with params {'database_id': 1, 'table_name': 'raw_esg_volume'}\nThought: Make a POST request to the `/api/v1/dataset/get_or_create/` endpoint to create or get the dataset for the table 'raw_esg_volume' in the database with ID 1.\nAction: Execute the corrected action using the `requests_post` tool.\nAction Input:\n```json\n{\n \"url\": \"https://superset.mydomain.com/api/v1/dataset/get_or_create/\",\n \"data\": {\n \"database_id\": 1,\n \"table_name\": \"raw_esg_volume\"\n },\n \"output_instructions\": \"Extract the table_id from the response\"\n}\n```",
"generation_info": {
"finish_reason": "stop",
"logprobs": null
},
"type": "ChatGeneration",
"message": {
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"schema",
"messages",
"AIMessage"
],
"kwargs": {
"content": "It looks like there was an error in the previous action. I will correct the action to use the appropriate tool, which is `requests_post`.\n\nPlan: 1. POST /api/v1/dataset/get_or_create/ with params {'database_id': 1, 'table_name': 'raw_esg_volume'}\nThought: Make a POST request to the `/api/v1/dataset/get_or_create/` endpoint to create or get the dataset for the table 'raw_esg_volume' in the database with ID 1.\nAction: Execute the corrected action using the `requests_post` tool.\nAction Input:\n```json\n{\n \"url\": \"https://superset.mydomain.com/api/v1/dataset/get_or_create/\",\n \"data\": {\n \"database_id\": 1,\n \"table_name\": \"raw_esg_volume\"\n },\n \"output_instructions\": \"Extract the table_id from the response\"\n}\n```",
"response_metadata": {
"token_usage": {
"completion_tokens": 196,
"prompt_tokens": 1296,
"total_tokens": 1492
},
"model_name": "gpt-4o",
"system_fingerprint": "fp_c4e5b6fa31",
"finish_reason": "stop",
"logprobs": null
},
"type": "ai",
"id": "run-b38b50e3-b4d1-44ef-996a-76b132d46f79-0",
"usage_metadata": {
"input_tokens": 1296,
"output_tokens": 196,
"total_tokens": 1492
},
"tool_calls": [],
"invalid_tool_calls": []
}
}
}
]
],
"llm_output": {
"token_usage": {
"completion_tokens": 196,
"prompt_tokens": 1296,
"total_tokens": 1492
},
"model_name": "gpt-4o",
"system_fingerprint": "fp_c4e5b6fa31"
},
"run": null
}
[chain/end] [chain:AgentExecutor > tool:api_controller > chain:AgentExecutor > chain:LLMChain] [4.12s] Exiting Chain run with output:
{
"text": "It looks like there was an error in the previous action. I will correct the action to use the appropriate tool, which is `requests_post`.\n\nPlan: 1. POST /api/v1/dataset/get_or_create/ with params {'database_id': 1, 'table_name': 'raw_esg_volume'}\nThought: Make a POST request to the `/api/v1/dataset/get_or_create/` endpoint to create or get the dataset for the table 'raw_esg_volume' in the database with ID 1.\nAction: Execute the corrected action using the `requests_post` tool.\nAction Input:\n```json\n{\n \"url\": \"https://superset.mydomain.com/api/v1/dataset/get_or_create/\",\n \"data\": {\n \"database_id\": 1,\n \"table_name\": \"raw_esg_volume\"\n },\n \"output_instructions\": \"Extract the table_id from the response\"\n}\n```"
}
[tool/start] [chain:AgentExecutor > tool:api_controller > chain:AgentExecutor > tool:invalid_tool] Entering Tool run with input:
"{'requested_tool_name': 'Execute the corrected action using the `requests_post` tool.', 'available_tool_names': ['requests_get', 'requests_post', 'requests_put']}"
[tool/end] [chain:AgentExecutor > tool:api_controller > chain:AgentExecutor > tool:invalid_tool] [0ms] Exiting Tool run with output:
"Execute the corrected action using the `requests_post` tool. is not a valid tool, try one of [requests_get, requests_post, requests_put]."
...
(loop)
...
```
### Description
I expected two things.
One is that all five operations passed to the planner.create_openapi_agent function would be registered with the api_controller; the other is that the model would put only the tool name in the Action field when executing the API.
However, as observed in the logs above, neither worked.
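The logs show the model emitting a whole sentence in the Action field (e.g. "Execute the corrected action using the `requests_post` tool."), which the agent then rejects as an invalid tool. As a stopgap, a tolerant parser could recover a known tool name from such a verbose Action line before the tool lookup. This is only a sketch of the idea, not the agent's actual output parser:

```python
import re


def extract_tool_name(action_text, available_tools):
    """Return the first known tool name mentioned in an Action line, else None."""
    for name in available_tools:
        # \b works here because tool names are made of word characters
        if re.search(rf"\b{re.escape(name)}\b", action_text):
            return name
    return None
```

With the line from the log, `extract_tool_name("Execute the corrected action using the `requests_post` tool.", ["requests_get", "requests_post", "requests_put"])` would recover `"requests_post"` instead of failing.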
### System Info
platform : linux
python : 3.11
$ pip freeze | grep langchain
langchain==0.2.4
langchain-community==0.2.4
langchain-core==0.2.6
langchain-openai==0.1.8
langchain-text-splitters==0.2.1 | LangChain Agent Fails to Recognize Tool Names with Descriptions and Incomplete Operation Addition | https://api.github.com/repos/langchain-ai/langchain/issues/24382/comments | 0 | 2024-07-18T05:01:28Z | 2024-07-18T06:20:53Z | https://github.com/langchain-ai/langchain/issues/24382 | 2,415,226,049 | 24,382 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os
import dotenv
from langchain_openai import OpenAIEmbeddings
dotenv.load_dotenv()
embeddings = OpenAIEmbeddings(
model='text-embedding-3-large',
dimensions=1024, # assign dimensions to 1024
openai_api_base=os.getenv('OPENAI_API_BASE')
)
text = 'This is a test document.'
vector = embeddings.embed_documents([text])
print(f'the dimension of vector is {len(vector[0])}')
```
Output:
the dimension of vector is 3072
### Error Message and Stack Trace (if applicable)
_No response_
### Description
- I'm using OpenAIEmbeddings to embed my document.
- I assign model to text-embedding-3-large and dimension to 1024
- However, the actual dimension of the returned vector is still 3072 (the default for text-embedding-3-large)
- It seems that the `dimensions` parameter is not being applied.
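Until the parameter is honored, text-embedding-3 vectors can also be shortened client-side: OpenAI documents that these models support truncating the embedding and re-normalizing it to unit length. A dependency-free sketch of that workaround:

```python
import math


def shorten_embedding(vec, dim=1024):
    """Truncate a text-embedding-3 vector to `dim` and re-normalize to unit length."""
    v = vec[:dim]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]
```

Applying this to each vector returned by `embed_documents` yields 1024-dimensional unit vectors, which is what the `dimensions=1024` argument was expected to produce.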
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 21.5.0: Tue Apr 26 21:08:29 PDT 2022; root:xnu-8020.121.3~4/RELEASE_ARM64_T8101
> Python Version: 3.12.3 (v3.12.3:f6650f9ad7, Apr 9 2024, 08:18:47) [Clang 13.0.0 (clang-1300.0.29.30)]
Package Information
-------------------
> langchain_core: 0.2.20
> langchain: 0.2.8
> langchain_community: 0.2.7
> langsmith: 0.1.82
> langchain_experimental: 0.0.62
> langchain_huggingface: 0.0.3
> langchain_openai: 0.1.14
> langchain_text_splitters: 0.2.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
| [Embedding] The dimensions parameter of OpenAIEmbeddings is not working | https://api.github.com/repos/langchain-ai/langchain/issues/24378/comments | 4 | 2024-07-18T02:11:48Z | 2024-07-19T01:10:57Z | https://github.com/langchain-ai/langchain/issues/24378 | 2,415,056,529 | 24,378 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_core.messages import AIMessageChunk
chunks = [
AIMessageChunk(content="Hello", response_metadata={'prompt_token_count': 12, 'generation_token_count': 1, 'stop_reason': None}, id='1'),
AIMessageChunk(content="!", response_metadata={'prompt_token_count': None, 'generation_token_count': 2, 'stop_reason': None}, id='1')
]
response = AIMessageChunk("")
for chunk in chunks:
response += chunk
```
### Error Message and Stack Trace (if applicable)
TypeError: Additional kwargs key generation_token_count already exists in left dict and value has unsupported type <class 'int'>.
### Description
Chunk addition is failing with streaming use cases that generate AIMessageChunk. This root cause seems to be a failure in the [merge_dict](https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/utils/_merge.py#L6) function.
```python
from langchain_aws import ChatBedrock
chat = ChatBedrock(
model_id="meta.llama3-8b-instruct-v1:0",
streaming=True
)
response = AIMessageChunk("")
for chunk in model.stream(message):
response += chunk
```
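The error message points at the metadata merge, which raises when the same key holds an `int` on both sides. A tolerant merge would sum numeric counters and skip `None` values instead of raising. Below is a stdlib sketch of that behavior — not the actual `langchain_core` implementation — using the token-count keys from the chunks above:

```python
def merge_usage(left, right):
    """Sketch of a tolerant metadata merge: sum numeric values, skip None."""
    out = dict(left)
    for k, v in right.items():
        if k not in out or out[k] is None:
            out[k] = v
        elif v is None:
            continue  # keep the existing value
        elif isinstance(v, (int, float)) and isinstance(out[k], (int, float)):
            out[k] += v  # accumulate token counts across chunks
        else:
            out[k] = v  # last-writer-wins for non-numeric values
    return out
```

Merging the two `response_metadata` dicts from the example with this helper yields `{'prompt_token_count': 12, 'generation_token_count': 3, 'stop_reason': None}` rather than a `TypeError`.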
### System Info
langchain-core = 0.2.21
### Related Issues
https://github.com/langchain-ai/langchain/issues/23891
https://github.com/langchain-ai/langchain-aws/issues/107 | AIMessageChunk merge is failing | https://api.github.com/repos/langchain-ai/langchain/issues/24377/comments | 2 | 2024-07-18T01:49:43Z | 2024-07-18T23:32:31Z | https://github.com/langchain-ai/langchain/issues/24377 | 2,415,009,829 | 24,377 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Description
When starting langserve with the code below and accessing it via `RemoteRunnable`, I encounter a `KeyError: "Input to ChatPromptTemplate is missing variables {'language'}. Expected: ['history', 'input', 'language'] Received: ['input', 'history']"`.
### Example Code
```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_core.chat_history import BaseChatMessageHistory
from langchain_community.chat_message_histories import ChatMessageHistory
from langserve import add_routes
from fastapi import FastAPI
import uvicorn
app = FastAPI()
store = {}
def get_session_history(session_id: str) -> BaseChatMessageHistory:
if session_id not in store:
store[session_id] = ChatMessageHistory()
return store[session_id]
prompt = ChatPromptTemplate.from_messages(
[
(
"system",
"You're an assistant who speaks in {language}. Respond in 20 words or fewer.",
),
MessagesPlaceholder(variable_name="history"),
("human", "{input}"),
]
)
model = ChatOpenAI(model="gpt-3.5-turbo-0125")
runnable = prompt | model
runnable_with_history = RunnableWithMessageHistory(
runnable,
get_session_history,
input_messages_key="input",
history_messages_key="history",
)
chain = runnable_with_history
add_routes(app, chain, path="/test")
uvicorn.run(app, host="0.0.0.0", port=8000)
```
### Code for RemoteRunnable:
```python
from langserve import RemoteRunnable
rr = RemoteRunnable("http://localhost:8000/test/")
rr.invoke(
{"language": "Italian", "input": "what's my name?"},
config={"configurable": {"session_id": "1"}},
)
```
This issue also occurs in the LangServe Playground where the input box for `language` does not appear. When sending the message as-is, it results in `KeyError: "Input to ChatPromptTemplate is missing variables {'language'}. Expected: ['history', 'input', 'language'] Received: ['input', 'history']"`.
### Error Message and Stack Trace (if applicable)
INFO: 127.0.0.1:57555 - "POST /test/invoke HTTP/1.1" 500 Internal Server Error
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 426, in run_asgi
result = await app( # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/fastapi/applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/starlette/applications.py", line 123, in __call__
await self.middleware_stack(scope, receive, send)
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/starlette/middleware/errors.py", line 186, in __call__
raise exc
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/starlette/middleware/errors.py", line 164, in __call__
await self.app(scope, receive, _send)
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 65, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/starlette/routing.py", line 756, in __call__
await self.middleware_stack(scope, receive, send)
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/starlette/routing.py", line 776, in app
await route.handle(scope, receive, send)
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/starlette/routing.py", line 297, in handle
await self.app(scope, receive, send)
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/starlette/routing.py", line 77, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/starlette/routing.py", line 72, in app
response = await func(request)
^^^^^^^^^^^^^^^^^^^
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/fastapi/routing.py", line 278, in app
raw_response = await run_endpoint_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/fastapi/routing.py", line 191, in run_endpoint_function
return await dependant.call(**values)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/langserve/server.py", line 530, in invoke
return await api_handler.invoke(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/langserve/api_handler.py", line 833, in invoke
output = await invoke_coro
^^^^^^^^^^^^^^^^^
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 5018, in ainvoke
return await self.bound.ainvoke(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 5018, in ainvoke
return await self.bound.ainvoke(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2862, in ainvoke
input = await step.ainvoke(input, config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/langchain_core/runnables/branch.py", line 277, in ainvoke
output = await runnable.ainvoke(
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 5018, in ainvoke
return await self.bound.ainvoke(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2860, in ainvoke
input = await step.ainvoke(input, config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/langchain_core/prompts/base.py", line 203, in ainvoke
return await self._acall_with_config(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1784, in _acall_with_config
output: Output = await asyncio.create_task(coro, context=context) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/langchain_core/prompts/base.py", line 159, in _aformat_prompt_with_error_handling
_inner_input = self._validate_input(inner_input)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/langchain_core/prompts/base.py", line 145, in _validate_input
raise KeyError(
KeyError: "Input to ChatPromptTemplate is missing variables {'language'}. Expected: ['history', 'input', 'language'] Received: ['input', 'history']"
### Conditions Under Which the Issue Does Not Occur
#### 1. Without Using LangServe:
Running the server-side code (excluding `uvicorn.run()`) in an IPython shell with the following command does **NOT** trigger the issue:
```python
chain.invoke(
{"language": "Italian", "input": "what's my name?"},
config={"configurable": {"session_id": "1"}},
)
```
#### 2. Without Using RunnableWithMessageHistory:
Modifying the server-side code as shown below and running it in the playground does **NOT** trigger the issue:
```python
# Before:
chain = runnable_with_history
# After:
chain = runnable
```
### Conclusion
The issue seems to arise from the combination of `RunnableWithMessageHistory` and LangServe.
Any assistance or guidance on resolving this issue would be greatly appreciated.
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 21.6.0: Thu Jun 8 23:57:12 PDT 2023; root:xnu-8020.240.18.701.6~1/RELEASE_X86_64
> Python Version: 3.11.6 (main, Oct 16 2023, 15:57:36) [Clang 14.0.0 (clang-1400.0.29.202)]
Package Information
-------------------
> langchain_core: 0.2.19
> langchain: 0.2.7
> langchain_community: 0.2.7
> langsmith: 0.1.85
> langchain_anthropic: 0.1.20
> langchain_chroma: 0.1.1
> langchain_cli: 0.0.25
> langchain_experimental: 0.0.61
> langchain_openai: 0.1.16
> langchain_text_splitters: 0.2.2
> langchainhub: 0.1.20
> langchainplus_sdk: 0.0.20
> langgraph: 0.1.8
> langserve: 0.2.2
> pydantic: 2.8.2 | KeyError with RunnableWithMessageHistory and LangServe: Missing Variable | https://api.github.com/repos/langchain-ai/langchain/issues/24370/comments | 0 | 2024-07-17T22:24:33Z | 2024-07-17T22:31:00Z | https://github.com/langchain-ai/langchain/issues/24370 | 2,414,721,623 | 24,370 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain_community.chat_models import ChatOllama
from langchain_core.output_parsers import StrOutputParser
from graphs.reference_graph.prompts.code_review_prompt import code_review_prompt
from graphs.reference_graph.thinkers.hallucination_grader import hallucination_grader
from langchain.agents import AgentType, initialize_agent
from langchain_community.agent_toolkits.github.toolkit import GitHubToolkit
from langchain_community.utilities.github import GitHubAPIWrapper
llm = ChatOllama(model="deepseek-coder-v2", temperature=1)
github = GitHubAPIWrapper()
toolkit = GitHubToolkit.from_github_api_wrapper(github)
tools = toolkit.get_tools()
agent = initialize_agent(
tools,
llm,
agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
verbose=True,
)
prompt_chain = code_review_prompt | agent | StrOutputParser()
```
### Error Message and Stack Trace (if applicable)
```
/Users/gvalenc/git/gvalenc/code-connoisseur/.venv/lib/python3.12/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
warnings.warn(
Traceback (most recent call last):
File "/Users/gvalenc/git/gvalenc/code-connoisseur/app/main.py", line 58, in <module>
for output in workflowApp.stream(inputs):
File "/Users/gvalenc/git/gvalenc/code-connoisseur/.venv/lib/python3.12/site-packages/langgraph/pregel/__init__.py", line 986, in stream
_panic_or_proceed(done, inflight, step)
File "/Users/gvalenc/git/gvalenc/code-connoisseur/.venv/lib/python3.12/site-packages/langgraph/pregel/__init__.py", line 1540, in _panic_or_proceed
raise exc
File "/opt/homebrew/Cellar/python@3.12/3.12.4/Frameworks/Python.framework/Versions/3.12/lib/python3.12/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/gvalenc/git/gvalenc/code-connoisseur/.venv/lib/python3.12/site-packages/langgraph/pregel/retry.py", line 72, in run_with_retry
task.proc.invoke(task.input, task.config)
File "/Users/gvalenc/git/gvalenc/code-connoisseur/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 2822, in invoke
input = step.invoke(input, config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/gvalenc/git/gvalenc/code-connoisseur/.venv/lib/python3.12/site-packages/langgraph/utils.py", line 95, in invoke
ret = context.run(self.func, input, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/gvalenc/git/gvalenc/code-connoisseur/app/graphs/reference_graph/thinkers/code_reviewer.py", line 86, in generate_code_review_node
generate_chain = getGeneratePromptChain()
^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/gvalenc/git/gvalenc/code-connoisseur/app/graphs/reference_graph/thinkers/code_reviewer.py", line 51, in getGeneratePromptChain
github = GitHubAPIWrapper()
^^^^^^^^^^^^^^^^^^
File "/Users/gvalenc/git/gvalenc/code-connoisseur/.venv/lib/python3.12/site-packages/pydantic/v1/main.py", line 339, in __init__
values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/gvalenc/git/gvalenc/code-connoisseur/.venv/lib/python3.12/site-packages/pydantic/v1/main.py", line 1100, in validate_model
values = validator(cls_, values)
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/gvalenc/git/gvalenc/code-connoisseur/.venv/lib/python3.12/site-packages/langchain_community/utilities/github.py", line 90, in validate_environment
installation = installation[0]
~~~~~~~~~~~~^^^
File "/Users/gvalenc/git/gvalenc/code-connoisseur/.venv/lib/python3.12/site-packages/github/PaginatedList.py", line 76, in __getitem__
self.__fetchToIndex(index)
File "/Users/gvalenc/git/gvalenc/code-connoisseur/.venv/lib/python3.12/site-packages/github/PaginatedList.py", line 92, in __fetchToIndex
self._grow()
File "/Users/gvalenc/git/gvalenc/code-connoisseur/.venv/lib/python3.12/site-packages/github/PaginatedList.py", line 95, in _grow
newElements = self._fetchNextPage()
^^^^^^^^^^^^^^^^^^^^^
File "/Users/gvalenc/git/gvalenc/code-connoisseur/.venv/lib/python3.12/site-packages/github/PaginatedList.py", line 244, in _fetchNextPage
headers, data = self.__requester.requestJsonAndCheck(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/gvalenc/git/gvalenc/code-connoisseur/.venv/lib/python3.12/site-packages/github/Requester.py", line 548, in requestJsonAndCheck
return self.__check(*self.requestJson(verb, url, parameters, headers, input, self.__customConnection(url)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/gvalenc/git/gvalenc/code-connoisseur/.venv/lib/python3.12/site-packages/github/Requester.py", line 609, in __check
raise self.createException(status, responseHeaders, data)
github.GithubException.GithubException: 500
```
### Description
This is a follow-up issue from this discussion: https://github.com/langchain-ai/langchain/discussions/24351
I created an app in GitHub Enterprise (GHE) and set up my env variables where I'm running my LangChain app.
```
export GITHUB_APP_ID="<app-id>"
export GITHUB_APP_PRIVATE_KEY="<path to .pem file>"
export GITHUB_REPOSITORY="<ghe-repo-url>"
```
After some debugging with Dosu and looking at the [source code for GitHubAPIWrapper](https://api.python.langchain.com/en/latest/_modules/langchain_community/utilities/github.html#GitHubAPIWrapper), it seems that the wrapper is not taking in the API URL for the GitHub Enterprise instance. Looking at the exception headers, it continues to try to hit github.com instead of my GHE instance. I can't seem to get it to do otherwise.
`_GithubException__headers: {'date': 'Wed, 17 Jul 2024 18:40:35 GMT', 'vary': 'Accept-Encoding, Accept, X-Requested-With', 'transfer-encoding': 'chunked', 'x-github-request-id': 'CED3:109409:A57DE0:1348F7E:66981022', 'server': 'github.com'}`
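PyGithub (which backs the wrapper) does accept a `base_url` argument, and GitHub Enterprise Server exposes its REST API under `/api/v3` on the instance host. A sketch of deriving that URL — the hostname here is a hypothetical placeholder — which the wrapper would need to pass through to PyGithub instead of defaulting to github.com:

```python
def ghe_api_base(hostname: str) -> str:
    """Build the REST API base URL for a GitHub Enterprise Server host."""
    # GHE Server serves its REST API under /api/v3, unlike api.github.com
    return f"https://{hostname}/api/v3"
```

For example, `ghe_api_base("ghe.example.com")` returns `"https://ghe.example.com/api/v3"`, which is the value a `base_url`-aware wrapper would hand to PyGithub.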
### System Info
```
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:12:58 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6000
> Python Version: 3.12.4 (main, Jun 6 2024, 18:26:44) [Clang 15.0.0 (clang-1500.3.9.4)]
Package Information
-------------------
> langchain_core: 0.2.19
> langchain: 0.2.5
> langchain_community: 0.2.5
> langsmith: 0.1.81
> langchain_huggingface: 0.0.3
> langchain_ibm: 0.1.7
> langchain_text_splitters: 0.2.2
> langgraph: 0.1.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langserve
``` | 500 error when using GitHubAPIWrapper with GitHub Enterprise | https://api.github.com/repos/langchain-ai/langchain/issues/24367/comments | 0 | 2024-07-17T21:27:01Z | 2024-07-17T21:29:36Z | https://github.com/langchain-ai/langchain/issues/24367 | 2,414,630,610 | 24,367 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
# import
from langchain_community.embeddings import OllamaEmbeddings
from sentence_transformers.util import cos_sim
import numpy as np
from numpy.testing import assert_almost_equal
# definitions
ollama_emb = OllamaEmbeddings(model='mxbai-embed-large')
# test on ollama
query = 'Represent this sentence for searching relevant passages: A man is eating a piece of bread'
docs = [
query,
"A man is eating food.",
"A man is eating pasta.",
"The girl is carrying a baby.",
"A man is riding a horse.",
]
r_1 = ollama_emb.embed_documents(docs)
# Calculate cosine similarity
similarities = cos_sim(r_1[0], r_1[1:])
print(similarities.numpy()[0])
print("to be compared to :\n [0.7920, 0.6369, 0.1651, 0.3621]")
try :
assert_almost_equal( similarities.numpy()[0], np.array([0.7920, 0.6369, 0.1651, 0.3621]),decimal=2)
print("TEST 1 : OLLAMA PASSED.")
except AssertionError:
print("TEST 1 : OLLAMA FAILED.")
```
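To rule out sentence-transformers as a variable in the comparison, the similarity check can be reproduced with a dependency-free cosine helper:

```python
import math


def cosine(a, b):
    """Plain cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)
```

Feeding the same `r_1` embeddings through this helper should match `cos_sim` to within floating-point error, so any deviation from the paper's numbers comes from the embeddings themselves, not the similarity code.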
### Error Message and Stack Trace (if applicable)
_No response_
### Description
The test does not pass.
It works with Ollama directly, but not with Ollama through LangChain.
Also, it works well with Llamafile under LangChain.
The issue seems to be the same as the one reported here: https://github.com/ollama/ollama/issues/4207
Why is it not fixed in LangChain?
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:13:18 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6030
> Python Version: 3.10.4 (main, Mar 31 2022, 03:37:37) [Clang 12.0.0 ]
Package Information
-------------------
> langchain_core: 0.2.20
> langchain: 0.2.8
> langchain_community: 0.2.7
> langsmith: 0.1.88
> langchain_chroma: 0.1.1
> langchain_text_splitters: 0.2.2
ollama : 0.2.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | mxbai-embed-large embedding not consistent with original paper | https://api.github.com/repos/langchain-ai/langchain/issues/24357/comments | 1 | 2024-07-17T17:30:05Z | 2024-07-24T07:45:45Z | https://github.com/langchain-ai/langchain/issues/24357 | 2,414,158,892 | 24,357 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
def _create_index(self) -> None:
    """Create a index on the collection"""
    from pymilvus import Collection, MilvusException

    if isinstance(self.col, Collection) and self._get_index() is None:
        try:
            # If no index params, use a default HNSW based one
            if self.index_params is None:
                self.index_params = {
                    "metric_type": "L2",
                    "index_type": "HNSW",
                    "params": {"M": 8, "efConstruction": 64},
                }

            try:
                self.col.create_index(
                    self._vector_field,
                    index_params=self.index_params,
                    using=self.alias,
                )

            # If default did not work, most likely on Zilliz Cloud
            except MilvusException:
                # Use AUTOINDEX based index
                self.index_params = {
                    "metric_type": "L2",
                    "index_type": "AUTOINDEX",
                    "params": {},
                }
                self.col.create_index(
                    self._vector_field,
                    index_params=self.index_params,
                    using=self.alias,
                )
            logger.debug(
                "Successfully created an index on collection: %s",
                self.collection_name,
            )

        except MilvusException as e:
            logger.error(
                "Failed to create an index on collection: %s", self.collection_name
            )
            raise e
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
We are trying to use the langchain_milvus library to create a Milvus collection with metadata. The latest version of Milvus also supports scalar indexes on non-vector columns, and we need scalar indexes for better performance when filtering data.
Currently, langchain_milvus supports adding an index only on the vector field.
We could use the metadata_schema logic to support indexing on scalar fields.
https://github.com/langchain-ai/langchain/pull/23219
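A minimal sketch of what the requested behavior could look like. `build_index_requests` is a hypothetical helper (not part of langchain-milvus), and the `INVERTED` index type assumes Milvus >= 2.4; each returned `(field, params)` pair would then be passed to `col.create_index(field, index_params=params, using=alias)`:

```python
def build_index_requests(vector_field, scalar_fields):
    """Hypothetical helper: index params for the vector field plus scalar indexes."""
    requests = [
        # Default vector index, mirroring langchain-milvus' current behavior
        (vector_field, {
            "metric_type": "L2",
            "index_type": "HNSW",
            "params": {"M": 8, "efConstruction": 64},
        }),
    ]
    for field in scalar_fields:
        # Milvus >= 2.4 supports INVERTED indexes on scalar fields,
        # which speeds up metadata filtering
        requests.append((field, {"index_type": "INVERTED"}))
    return requests

# Each pair would then be applied with:
#   self.col.create_index(field, index_params=params, using=self.alias)
```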
### System Info
[langchain-core==0.2.20](https://github.com/langchain-ai/langchain/releases/tag/langchain-core%3D%3D0.2.20)
[langchain-community==0.2.7](https://github.com/langchain-ai/langchain/releases/tag/langchain-community%3D%3D0.2.7) | Support scalar field indexing for milvus collection creation | https://api.github.com/repos/langchain-ai/langchain/issues/24343/comments | 5 | 2024-07-17T12:15:08Z | 2024-07-18T08:39:51Z | https://github.com/langchain-ai/langchain/issues/24343 | 2,413,462,272 | 24,343 |
[
"langchain-ai",
"langchain"
] | ### URL
_No response_
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
_No response_
### Idea or request for content:
_No response_ | how to specify a seed when calling the chatopenai model to ensure the stability of the output results. | https://api.github.com/repos/langchain-ai/langchain/issues/24336/comments | 0 | 2024-07-17T08:58:00Z | 2024-07-17T09:00:36Z | https://github.com/langchain-ai/langchain/issues/24336 | 2,413,053,515 | 24,336 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os
import requests
import yaml

os.environ["OPENAI_API_KEY"] = "sk-REDACTED"

from langchain_community.agent_toolkits.openapi import planner
from langchain_openai.chat_models import ChatOpenAI
from langchain_community.agent_toolkits.openapi.spec import reduce_openapi_spec
from langchain.requests import RequestsWrapper
from requests.packages.urllib3.exceptions import InsecureRequestWarning

# Ignore SSL warnings
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)

with open("/home/ehkim/git/testprj/code_snippet/swagger.yaml") as f:
    data = yaml.load(f, Loader=yaml.FullLoader)

swagger_api_spec = reduce_openapi_spec(data)

def construct_superset_aut_headers(url=None):
    import requests

    url = "https://superset.mydomain.com/api/v1/security/login"
    payload = {
        "username": "myusername",
        "password": "mypassword",
        "provider": "db",
        "refresh": True
    }
    headers = {
        "Content-Type": "application/json"
    }

    response = requests.post(url, json=payload, headers=headers, verify=False)
    data = response.json()
    return {"Authorization": f"Bearer {data['access_token']}"}

from langchain.globals import set_debug
set_debug(True)

llm = ChatOpenAI(model='gpt-4o')
swagger_requests_wrapper = RequestsWrapper(headers=construct_superset_aut_headers(), verify=False)
superset_agent = planner.create_openapi_agent(
    swagger_api_spec,
    swagger_requests_wrapper,
    llm, allow_dangerous_requests=True,
    handle_parsing_errors=True)

superset_agent.run(
    """
    1. Get the dataset using the following information. (tool: requests_post, API: /api/v1/dataset/get_or_create/, database_id: 1, table_name: raw_esg_volume, response : {{'result' : {{'table_id': (dataset_id)}}}})
    2. Retrieve the dataset information obtained in step 1. (tool: requests_get, API: /api/v1/dataset/dataset/{{dataset_id}}/, params: None)
    3. Create a chart referencing the dataset obtained in step 2. The chart should plot the trend of total, online_news, and (total - online_news) values as a line chart. (tool: requests_post, API: /api/v1/chart/, database_id: 1)
    4. Return the URL of the created chart. https://superset.mydomain.com/explore/?slice_id={{chart_id}}
    When specifying the action, only write the tool name without any additional explanation.
    """
)
```
In this file, I used the swagger.yaml file from https://superset.demo.datahubproject.io/api/v1/_openapi.
That endpoint returns JSON, so I converted it with the code below.
```python
import json
import yaml

# read file
with open('swagger.json', 'r') as json_file:
    json_data = json.load(json_file)

# write file
with open('swagger.yaml', 'w') as yaml_file:
    yaml.dump(json_data, yaml_file, default_flow_style=False)
```
### Error Message and Stack Trace (if applicable)
(myenv) ehkim@ehkim-400TEA-400SEA:~/git/testprj/code_snippet$ python openapi-agent.py
/home/ehkim/anaconda3/envs/myenv/lib/python3.12/site-packages/langchain/_api/module_import.py:92: LangChainDeprecationWarning: Importing RequestsWrapper from langchain is deprecated. Please replace deprecated imports:
>> from langchain import RequestsWrapper
with new imports of:
>> from langchain_community.utilities import RequestsWrapper
You can use the langchain cli to **automatically** upgrade many imports. Please see documentation here <https://python.langchain.com/v0.2/docs/versions/v0_2/>
warn_deprecated(
Traceback (most recent call last):
File "/home/ehkim/git/testprj/code_snippet/openapi-agent.py", line 23, in <module>
swagger_api_spec = reduce_openapi_spec(data)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ehkim/anaconda3/envs/myenv/lib/python3.12/site-packages/langchain_community/agent_toolkits/openapi/spec.py", line 53, in reduce_openapi_spec
(name, description, dereference_refs(docs, full_schema=spec))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ehkim/anaconda3/envs/myenv/lib/python3.12/site-packages/langchain_core/utils/json_schema.py", line 108, in dereference_refs
else _infer_skip_keys(schema_obj, full_schema)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ehkim/anaconda3/envs/myenv/lib/python3.12/site-packages/langchain_core/utils/json_schema.py", line 80, in _infer_skip_keys
keys += _infer_skip_keys(v, full_schema, processed_refs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ehkim/anaconda3/envs/myenv/lib/python3.12/site-packages/langchain_core/utils/json_schema.py", line 80, in _infer_skip_keys
keys += _infer_skip_keys(v, full_schema, processed_refs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ehkim/anaconda3/envs/myenv/lib/python3.12/site-packages/langchain_core/utils/json_schema.py", line 76, in _infer_skip_keys
ref = _retrieve_ref(v, full_schema)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ehkim/anaconda3/envs/myenv/lib/python3.12/site-packages/langchain_core/utils/json_schema.py", line 17, in _retrieve_ref
out = out[int(component)]
~~~^^^^^^^^^^^^^^^^
KeyError: 400
### Description
I'm trying to use the langchain library to run the OpenAPI agent and fully interpret an OpenAPI specification using the `reduce_openapi_spec` function in my script.
I expect the agent to execute normally without any errors.
Instead, `reduce_openapi_spec` raises `KeyError: 400`.
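Judging from the traceback (this is an assumption about langchain-core's internals, not its actual code), `_retrieve_ref` converts every digit-only `$ref` path component to an `int` before indexing, which fails when the component is a dict key stored as the string `"400"` (an HTTP response code in the OpenAPI spec). A stdlib sketch of a resolver that prefers the string key and only falls back to an integer list index:

```python
def retrieve_ref(ref: str, full_schema: dict):
    """Resolve a JSON-pointer-style $ref such as '#/responses/400'."""
    components = ref.split("/")[1:]  # drop the leading '#'
    out = full_schema
    for component in components:
        if isinstance(out, dict) and component in out:
            out = out[component]       # prefer the string key, e.g. "400"
        elif component.isdigit():
            out = out[int(component)]  # fall back to a list index
        else:
            raise KeyError(component)
    return out

spec = {"responses": {"400": {"description": "Bad Request"}}}
print(retrieve_ref("#/responses/400", spec))  # -> {'description': 'Bad Request'}
```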
### System Info
(myenv) ehkim@ehkim-400TEA-400SEA:~/git/testprj/code_snippet$ pip freeze | grep langchain
langchain==0.2.8
langchain-cli==0.0.21
langchain-community==0.2.7
langchain-core==0.2.20
langchain-experimental==0.0.37
langchain-google-vertexai==0.0.3
langchain-openai==0.1.16
langchain-robocorp==0.0.3
langchain-text-splitters==0.2.2
langchainhub==0.1.15 | 'KeyError: 400' occurs when using langchain_community.agent_toolkits.openapi.spec.reduce_openapi_spec. | https://api.github.com/repos/langchain-ai/langchain/issues/24335/comments | 0 | 2024-07-17T08:54:57Z | 2024-07-17T08:57:34Z | https://github.com/langchain-ai/langchain/issues/24335 | 2,413,047,320 | 24,335 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the [LangGraph](https://langchain-ai.github.io/langgraph/)/LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangGraph/LangChain rather than my code.
- [X] I am sure this is better as an issue [rather than a GitHub discussion](https://github.com/langchain-ai/langgraph/discussions/new/choose), since this is a LangGraph bug and not a design question.
### Example Code
```python
from langchain_openai import ChatOpenAI
from langchain_core.runnables import ConfigurableField
from langchain_core.pydantic_v1 import BaseModel
import os

os.environ["OPENAI_API_KEY"] = "..."

class Add(BaseModel):
    """Add two numbers"""

    a: int
    b: int

configurable_temperature = ConfigurableField(
    id="temperature",
    name="Temperature",
    description="The temperature of the model",
)

tools = [Add]
model = ChatOpenAI(temperature=0).configurable_fields(
    temperature=configurable_temperature
)

print("Model without Tools")
print("Config Specs - ", model.config_specs)
print("Config Schema Json - ", model.config_schema(include=["temperature"]).schema_json())

print("\n\nModel with Tools")
model_with_tools = model.bind_tools(tools)
print("Config Specs - ", model_with_tools.config_specs)
print("Config Schema Json - ", model_with_tools.config_schema(include=["temperature"]).schema_json())
```
### Error Message and Stack Trace (if applicable)
```shell
Model without Tools
Config Specs - [ConfigurableFieldSpec(id='temperature', annotation=<class 'float'>, name='Temperature', description='The temperature of the model', default=0.0, is_shared=False, dependencies=None)]
Config Schema Json - {"title": "RunnableConfigurableFieldsConfig", "type": "object", "properties": {"configurable": {"$ref": "#/definitions/Configurable"}}, "definitions": {"Configurable": {"title": "Configurable", "type": "object", "properties": {"temperature": {"title": "Temperature", "description": "The temperature of the model", "default": 0.0, "type": "number"}}}}}
Model with Tools
Config Specs - []
Config Schema Json - {"title": "ChatOpenAIConfig", "type": "object", "properties": {}}
```
### Description
When using the model with tools, the configurable fields are not exposed or used internally.
Am I doing something wrong? Please suggest the correct approach for setting configurable fields while using model with tool_calling.
### System Info
System Information
------------------
> OS: Linux
> OS Version: langchain-ai/langgraph#1 SMP Fri Mar 29 23:14:13 UTC 2024
> Python Version: 3.11.9 (main, Apr 19 2024, 16:48:06) [GCC 11.2.0]
Package Information
-------------------
> langchain_core: 0.2.18
> langchain: 0.2.7
> langchain_community: 0.2.7
> langsmith: 0.1.85
> langchain_chroma: 0.1.2
> langchain_cli: 0.0.25
> langchain_openai: 0.1.16
> langchain_text_splitters: 0.2.2
> langgraph: 0.1.8
> langserve: 0.2.2 | Configurable Fields Not available after bind_tools called on Runnable | https://api.github.com/repos/langchain-ai/langchain/issues/24341/comments | 3 | 2024-07-17T06:27:07Z | 2024-08-08T18:18:13Z | https://github.com/langchain-ai/langchain/issues/24341 | 2,413,346,088 | 24,341 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from typing import TypedDict
from langchain_core.runnables import RunnableParallel, RunnableLambda

class Foo(TypedDict):
    foo: str

class InputData(Foo):
    bar: str

def forward_foo(input_data: InputData):
    return input_data["foo"]

def transform_input(input_data: InputData):
    foo = input_data["foo"]
    bar = input_data["bar"]
    return {
        "transformed": foo + bar
    }

foo_runnable = RunnableLambda(forward_foo)
other_runnable = RunnableLambda(transform_input)

parallel = RunnableParallel(
    foo=foo_runnable,
    other=other_runnable,
)

repr(parallel.input_schema.validate({ "foo": "Y", "bar": "Z" }))
# 'RunnableParallel<foo,other>Input()'

# If we remove the type annotations on forward_foo and transform_input
# args, validate() gives us the right result:
# "RunnableParallel<foo,other>Input(foo='Y', bar='Z')"
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
When using `TypedDict` subclasses to annotate the arguments of a `RunnableParallel`'s children, the `RunnableParallel` schema isn't correctly inferred from the children's schemas.
The `RunnableParallel` schema is empty, i.e. `parallel.input_schema.schema()` outputs:
```
{'title': 'RunnableParallel<foo,other>Input',
'type': 'object',
'properties': {}}
```
and `parallel.input_schema.validate()` returns an empty dict for any input.
This is problematic when exposing the `RunnableParallel` chain using LangServe, because LangServe passes the endpoint input through `schema.validate()`, which essentially clears any input, as it returns an empty `dict`.
The only workarounds we have found so far are either:
* remove type annotations on the `RunnableParallel` children functions
* pipe a `RunnablePassthrough` before the `RunnableParallel` : `parallel = RunnablePassthrough() | RunnableParallel()`
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:12:58 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6000
> Python Version: 3.10.13 (main, Aug 24 2023, 12:59:26) [Clang 15.0.0 (clang-1500.1.0.2.5)]
Package Information
-------------------
> langchain_core: 0.2.20
> langchain: 0.2.8
> langchain_community: 0.2.7
> langsmith: 0.1.88
> langchain_anthropic: 0.1.20
> langchain_cli: 0.0.25
> langchain_openai: 0.1.16
> langchain_text_splitters: 0.2.2
> langchainhub: 0.1.20
> langserve: 0.2.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph | RunnableParallel input schema is empty if children runnable input schemas use TypedDict's | https://api.github.com/repos/langchain-ai/langchain/issues/24326/comments | 1 | 2024-07-17T00:30:23Z | 2024-07-17T19:07:28Z | https://github.com/langchain-ai/langchain/issues/24326 | 2,412,292,456 | 24,326 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Description and Example Code
LangChain computes token usage and cost for both OpenAI and AzureOpenAI models using `OpenAICallbackHandler`. However, that relies on both APIs returning the "complete" (versioned) name of the called model, which is not the case in Azure OpenAI.
In my subscription I have 3 deployments of gpt-3.5-turbo corresponding to `gpt-35-turbo-0613`, `gpt-35-turbo-0312` and `gpt-35-turbo-1106`, and 2 deployments of gpt-4 corresponding to `gpt-4-1106-preview` and `gpt-4-0613`. However, at inference time the model name returned is, respectively, `gpt-35-turbo` or `gpt-4` regardless of the version, so LangChain can't compute the correct cost, yet no warning is thrown. This dictionary [here](https://github.com/langchain-ai/langchain/blob/47ed7f766a5de1ee6e876be822536cd51ccb4777/libs/community/langchain_community/callbacks/openai_info.py#L10-L116) also contains entries that would never be used because of the above, e.g. [this one](https://github.com/langchain-ai/langchain/blob/47ed7f766a5de1ee6e876be822536cd51ccb4777/libs/community/langchain_community/callbacks/openai_info.py#L68).
```python
from langchain_openai import AzureChatOpenAI

llm1 = AzureChatOpenAI(
    api_version="2023-08-01-preview",
    azure_endpoint="https://YOUR_ENDPOINT.openai.azure.com/",
    api_key="YOUR_KEY",
    azure_deployment="gpt-35-turbo-0613",
    temperature=0,
)
llm2 = AzureChatOpenAI(
    api_version="2023-08-01-preview",
    azure_endpoint="https://YOUR_ENDPOINT.openai.azure.com/",
    api_key="YOUR_KEY",
    azure_deployment="gpt-35-turbo-0312",
    temperature=0,
)

messages = [
    (
        "system",
        "You are a helpful assistant that translates English to French. Translate the user sentence.",
    ),
    ("human", "I love programming."),
]

llm1.invoke(messages).response_metadata['model_name']  # gpt-35-turbo
llm2.invoke(messages).response_metadata['model_name']  # gpt-35-turbo
```
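One hedged workaround sketch, not existing LangChain functionality: since the Azure deployment name here happens to encode the version that the API response omits, the versioned model name can be reconstructed from the deployment name before the cost lookup. The mapping below is purely illustrative, because deployment names are user-chosen:

```python
# Illustrative: map user-chosen Azure deployment names to fully versioned
# OpenAI model names that a cost table could be keyed on.
DEPLOYMENT_TO_MODEL = {
    "gpt-35-turbo-0613": "gpt-3.5-turbo-0613",
    "gpt-35-turbo-1106": "gpt-3.5-turbo-1106",
    "gpt-4-0613": "gpt-4-0613",
    "gpt-4-1106-preview": "gpt-4-1106-preview",
}

def resolve_model_name(deployment: str, api_model_name: str) -> str:
    """Prefer the versioned name derived from the deployment name; fall back
    to whatever the Azure API returned (e.g. the bare 'gpt-35-turbo')."""
    return DEPLOYMENT_TO_MODEL.get(deployment, api_model_name)
```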
### System Info
Not applicable here. | OpenAI callback is deceiving when used with Azure OpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/24324/comments | 1 | 2024-07-16T22:53:49Z | 2024-07-21T08:48:15Z | https://github.com/langchain-ai/langchain/issues/24324 | 2,412,171,445 | 24,324 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/vectorstores/google_cloud_sql_pg/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The code examples generate this error:
```console
File "main.py", line 18
engine = await PostgresEngine.afrom_instance(project_id=config.google_project_id, region=config.region, instance=config.cloud_sql_connection_name, database=config.db_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SyntaxError: 'await' outside function
```
### Idea or request for content:
_No response_ | DOC: <Issue related to /v0.2/docs/integrations/vectorstores/google_cloud_sql_pg/> SyntaxError: 'await' outside function | https://api.github.com/repos/langchain-ai/langchain/issues/24319/comments | 1 | 2024-07-16T18:58:16Z | 2024-07-16T21:06:16Z | https://github.com/langchain-ai/langchain/issues/24319 | 2,411,848,751 | 24,319 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/retrievers/pinecone_hybrid_search/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
PineconeApiException Traceback (most recent call last)
Cell In[26], [line 1](vscode-notebook-cell:?execution_count=26&line=1)
----> [1](vscode-notebook-cell:?execution_count=26&line=1) result = retriever.invoke("foo")
File d:\Datascience_workspace_2023\genai-bootcamp-llmapps\venv\lib\site-packages\langchain_core\retrievers.py:222, in BaseRetriever.invoke(self, input, config, **kwargs)
[220](file:///D:/Datascience_workspace_2023/genai-bootcamp-llmapps/venv/lib/site-packages/langchain_core/retrievers.py:220) except Exception as e:
[221](file:///D:/Datascience_workspace_2023/genai-bootcamp-llmapps/venv/lib/site-packages/langchain_core/retrievers.py:221) run_manager.on_retriever_error(e)
--> [222](file:///D:/Datascience_workspace_2023/genai-bootcamp-llmapps/venv/lib/site-packages/langchain_core/retrievers.py:222) raise e
[223](file:///D:/Datascience_workspace_2023/genai-bootcamp-llmapps/venv/lib/site-packages/langchain_core/retrievers.py:223) else:
[224](file:///D:/Datascience_workspace_2023/genai-bootcamp-llmapps/venv/lib/site-packages/langchain_core/retrievers.py:224) run_manager.on_retriever_end(
[225](file:///D:/Datascience_workspace_2023/genai-bootcamp-llmapps/venv/lib/site-packages/langchain_core/retrievers.py:225) result,
[226](file:///D:/Datascience_workspace_2023/genai-bootcamp-llmapps/venv/lib/site-packages/langchain_core/retrievers.py:226) )
File d:\Datascience_workspace_2023\genai-bootcamp-llmapps\venv\lib\site-packages\langchain_core\retrievers.py:215, in BaseRetriever.invoke(self, input, config, **kwargs)
[213](file:///D:/Datascience_workspace_2023/genai-bootcamp-llmapps/venv/lib/site-packages/langchain_core/retrievers.py:213) _kwargs = kwargs if self._expects_other_args else {}
[214](file:///D:/Datascience_workspace_2023/genai-bootcamp-llmapps/venv/lib/site-packages/langchain_core/retrievers.py:214) if self._new_arg_supported:
--> [215](file:///D:/Datascience_workspace_2023/genai-bootcamp-llmapps/venv/lib/site-packages/langchain_core/retrievers.py:215) result = self._get_relevant_documents(
[216](file:///D:/Datascience_workspace_2023/genai-bootcamp-llmapps/venv/lib/site-packages/langchain_core/retrievers.py:216) input, run_manager=run_manager, **_kwargs
[217](file:///D:/Datascience_workspace_2023/genai-bootcamp-llmapps/venv/lib/site-packages/langchain_core/retrievers.py:217) )
[218](file:///D:/Datascience_workspace_2023/genai-bootcamp-llmapps/venv/lib/site-packages/langchain_core/retrievers.py:218) else:
[219](file:///D:/Datascience_workspace_2023/genai-bootcamp-llmapps/venv/lib/site-packages/langchain_core/retrievers.py:219) result = self._get_relevant_documents(input, **_kwargs)
File d:\Datascience_workspace_2023\genai-bootcamp-llmapps\venv\lib\site-packages\langchain_community\retrievers\pinecone_hybrid_search.py:167, in PineconeHybridSearchRetriever._get_relevant_documents(self, query, run_manager, **kwargs)
[165](file:///D:/Datascience_workspace_2023/genai-bootcamp-llmapps/venv/lib/site-packages/langchain_community/retrievers/pinecone_hybrid_search.py:165) sparse_vec["values"] = [float(s1) for s1 in sparse_vec["values"]]
...
PineconeApiException: (400)
Reason: Bad Request
HTTP response headers: HTTPHeaderDict({'Date': 'Tue, 16 Jul 2024 17:25:55 GMT', 'Content-Type': 'application/json', 'Content-Length': '103', 'Connection': 'keep-alive', 'x-pinecone-request-latency-ms': '1', 'x-pinecone-request-id': '3784258799918705851', 'x-envoy-upstream-service-time': '2', 'server': 'envoy'})
HTTP response body: {"code":3,"message":"Vector dimension 384 does not match the dimension of the index 1536","details":[]}
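The final error line points at the actual problem: the embedding model used in the notebook produces 384-dimensional vectors while the Pinecone index was created with dimension 1536, so the two must be aligned (recreate the index with the embedding model's dimension, or pick an embedding model matching the index). A minimal check sketch:

```python
def check_dims_match(embedding_dim: int, index_dim: int) -> None:
    # Pinecone rejects upserts/queries whose vector size differs from the
    # dimension fixed at index creation time
    if embedding_dim != index_dim:
        raise ValueError(
            f"Vector dimension {embedding_dim} does not match the dimension "
            f"of the index {index_dim}: recreate the index with dimension "
            f"{embedding_dim} or use an embedding model with {index_dim} dims."
        )
```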
### Idea or request for content:
_No response_ | DOC: <Issue related to /v0.2/docs/integrations/retrievers/pinecone_hybrid_search/> | https://api.github.com/repos/langchain-ai/langchain/issues/24317/comments | 0 | 2024-07-16T17:28:39Z | 2024-07-16T17:31:19Z | https://github.com/langchain-ai/langchain/issues/24317 | 2,411,686,931 | 24,317 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
+-----------+
| __start__ |
+-----------+
*
*
*
+--------+
| 数据分析专家 |
+--------+....
.. ...
.. ...
. ....
+---------+ ..
| 网络优化工程师 | .
+---------+ .
.. .. .
.. .. .
. .. .
+--------+ . ..
| 网络运营经理 | . ....
+--------+.... . ...
... . ...
.... . ....
.. . ..
+---------+
| __end__ |
+---------+
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
draw_ascii miscalculates node-box widths when node descriptions contain Chinese (wide, multi-byte) characters in LangGraph, producing the misaligned diagram shown above.
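A stdlib sketch of the likely root cause (an assumption about draw_ascii's internals, based on the misaligned output): `len()` counts code points, while East Asian wide characters occupy two terminal columns, so any box sized from `len()` comes out too narrow:

```python
import unicodedata

def display_width(text: str) -> int:
    # Wide ("W") and fullwidth ("F") characters take two terminal columns
    return sum(
        2 if unicodedata.east_asian_width(ch) in ("W", "F") else 1
        for ch in text
    )

label = "数据分析专家"
print(len(label), display_width(label))  # 6 code points, 12 columns
```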
### System Info
langchain==0.1.14
langchain-community==0.0.31
langchain-core==0.1.40
langchain-experimental==0.0.56
langchain-openai==0.1.1
langchain-text-splitters==0.0.1
langchainhub==0.1.15
macOS 14.3.1
Python 3.11.4 | graph_ascii multi-byte width calculation problem | https://api.github.com/repos/langchain-ai/langchain/issues/24308/comments | 0 | 2024-07-16T14:52:28Z | 2024-07-16T14:55:04Z | https://github.com/langchain-ai/langchain/issues/24308 | 2,411,371,903 | 24,308 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/text_embedding/nemo/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Could you add a link to the Nemo model so I can download the model weights? I didn't find the weights for this Nemo model on Hugging Face.
When I execute 'NV-Embed-QA-003', it gives a connection error.
### Idea or request for content:
_No response_ | DOC: <Issue related to /v0.2/docs/integrations/text_embedding/nemo/> | https://api.github.com/repos/langchain-ai/langchain/issues/24305/comments | 0 | 2024-07-16T11:50:19Z | 2024-07-16T11:52:50Z | https://github.com/langchain-ai/langchain/issues/24305 | 2,410,942,439 | 24,305 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.agents import AgentType, initialize_agent
```
### Error Message and Stack Trace (if applicable)
tests/langchain/test_langchain_model_export.py:19: in <module>
from langchain.agents import AgentType, initialize_agent
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/langchain/agents/__init__.py:36: in <module>
from langchain_core.tools import Tool, tool
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/langchain_core/tools.py:48: in <module>
from typing_extensions import Annotated, cast, get_args, get_origin
E ImportError: cannot import name 'cast' from 'typing_extensions' (/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/typing_extensions.py)
### Description
LangChain should pin `typing_extensions>=4.7.0` (instead of `>=4.2.0`) in the current dev version; otherwise we get a `cannot import name 'cast' from 'typing_extensions'` error.
### System Info
Using langchain master branch. typing_extensions==4.5.0 fails | cannot import name 'cast' from 'typing_extensions' | https://api.github.com/repos/langchain-ai/langchain/issues/24287/comments | 1 | 2024-07-16T01:14:16Z | 2024-07-17T01:21:23Z | https://github.com/langchain-ai/langchain/issues/24287 | 2,409,950,747 | 24,287 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The code is:
```python
from langchain.chat_models import AzureChatOpenAI
from config import *

chat_model = AzureChatOpenAI(
    openai_api_type=OPENAI_API_TYPE,
    openai_api_version=OPENAI_API_VERSION,
    openai_api_key=OPENAI_API_KEY,
    azure_deployment=AZURE_DEPLOYMENT,
    openai_api_base=OPENAI_API_BASE
)

messages = [
    (
        "system",
        "You are a helpful assistant that translates English to French. Translate the user sentence.",
    ),
    ("human", "I love programming."),
]

chat_model.invoke(messages)
```
Error
```sh
Traceback (most recent call last):
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/httpx/_transports/default.py", line 69, in map_httpcore_exceptions
yield
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/httpx/_transports/default.py", line 233, in handle_request
resp = self._pool.handle_request(req)
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/httpcore/_sync/connection_pool.py", line 216, in handle_request
raise exc from None
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/httpcore/_sync/connection_pool.py", line 196, in handle_request
response = connection.handle_request(
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/httpcore/_sync/connection.py", line 99, in handle_request
raise exc
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/httpcore/_sync/connection.py", line 76, in handle_request
stream = self._connect(request)
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/httpcore/_sync/connection.py", line 122, in _connect
stream = self._network_backend.connect_tcp(**kwargs)
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/httpcore/_backends/sync.py", line 205, in connect_tcp
with map_exceptions(exc_map):
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/contextlib.py", line 153, in __exit__
self.gen.throw(typ, value, traceback)
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
raise to_exc(exc) from exc
httpcore.ConnectError: [Errno -3] Temporary failure in name resolution
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/openai/_base_client.py", line 978, in _request
response = self._client.send(
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/httpx/_client.py", line 914, in send
response = self._send_handling_auth(
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/httpx/_client.py", line 942, in _send_handling_auth
response = self._send_handling_redirects(
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/httpx/_client.py", line 979, in _send_handling_redirects
response = self._send_single_request(request)
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/httpx/_client.py", line 1015, in _send_single_request
response = transport.handle_request(request)
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/httpx/_transports/default.py", line 232, in handle_request
with map_httpcore_exceptions():
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/contextlib.py", line 153, in __exit__
self.gen.throw(typ, value, traceback)
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/httpx/_transports/default.py", line 86, in map_httpcore_exceptions
raise mapped_exc(message) from exc
httpx.ConnectError: [Errno -3] Temporary failure in name resolution
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/mnt/c/D/Python-dev3/rpa-infra/response_time/execution-eproc/Guardrails/Simple Bot/config/github.py", line 19, in <module>
chat_model.invoke(messages)
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 158, in invoke
self.generate_prompt(
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 560, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 421, in generate
raise e
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 411, in generate
self._generate_with_cache(
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 632, in _generate_with_cache
result = self._generate(
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/langchain_community/chat_models/openai.py", line 441, in _generate
response = self.completion_with_retry(
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/langchain_community/chat_models/openai.py", line 356, in completion_with_retry
return self.client.create(**kwargs)
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/openai/_utils/_utils.py", line 277, in wrapper
return func(*args, **kwargs)
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/openai/resources/chat/completions.py", line 643, in create
return self._post(
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/openai/_base_client.py", line 1266, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/openai/_base_client.py", line 942, in request
return self._request(
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/openai/_base_client.py", line 1002, in _request
return self._retry_request(
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/openai/_base_client.py", line 1079, in _retry_request
return self._request(
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/openai/_base_client.py", line 1002, in _request
return self._retry_request(
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/openai/_base_client.py", line 1079, in _retry_request
return self._request(
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/openai/_base_client.py", line 1012, in _request
raise APIConnectionError(request=request) from err
openai.APIConnectionError: Connection error.
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am using LangChain to make a simple API call with AzureChatOpenAI, and it fails with the connection error above.
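Since the traceback bottoms out in DNS (`[Errno -3] Temporary failure in name resolution` is `EAI_AGAIN`), a quick standard-library check can confirm whether the Azure endpoint's hostname resolves from this machine at all — a diagnostic sketch with a hypothetical hostname:

```python
import socket

def can_resolve(host: str) -> bool:
    """DNS check for the error above: socket.gaierror with errno -3
    means the resolver itself could not produce an answer."""
    try:
        socket.getaddrinfo(host, 443)
        return True
    except socket.gaierror:
        return False

# Replace with your real Azure OpenAI endpoint host (hypothetical name here):
print(can_resolve("myworkspace.openai.azure.com"))
```

If this returns `False` for the endpoint host, the problem is network/DNS configuration (WSL resolv.conf, VPN, proxy) rather than LangChain or the OpenAI client.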
### System Info
langchain==0.1.20
langchain-community==0.0.38
langchain-core==0.1.52
langchain-text-splitters==0.0.2 | ConnectError: [Errno -3] Temporary failure in name resolution | https://api.github.com/repos/langchain-ai/langchain/issues/24276/comments | 2 | 2024-07-15T17:14:23Z | 2024-07-31T08:17:27Z | https://github.com/langchain-ai/langchain/issues/24276 | 2,409,234,877 | 24,276 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
.
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Getting
raise ValueError(
ValueError: OpenAIChat currently only supports single prompt, got .
```python
# imports assumed from the current packages (not shown in the original):
from langchain_community.document_loaders import PyPDFLoader
from langchain.chains.summarize import load_summarize_chain
from langchain_openai import AzureOpenAI

llm = AzureOpenAI(
    azure_endpoint="https://.....openai.azure.com/",
    deployment_name="....",
    model_name="...",
    openai_api_version="....",
)

def summarize(pdf):
    loader = PyPDFLoader(pdf)
    docs = loader.load_and_split()
    chain = load_summarize_chain(llm=llm, chain_type="map_reduce", verbose=False)
    summary = chain.run(docs)
    print(summary)
    print("\n")
    return summary
```
### System Info
. | raise ValueError( ValueError: OpenAIChat currently only supports single prompt, got | https://api.github.com/repos/langchain-ai/langchain/issues/24268/comments | 1 | 2024-07-15T14:07:28Z | 2024-07-22T15:50:08Z | https://github.com/langchain-ai/langchain/issues/24268 | 2,408,838,622 | 24,268 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
agent = create_structured_chat_agent(llm, agent_tools, prompt)
agent_executor = AgentExecutor(
agent=agent,
tools=agent_tools,
verbose=os.getenv("ENV", "dev") == "dev",
handle_parsing_errors='Check you output. make sure double quotes inside of "action_input" are properly escaped with a backslash. Otherwise, the JSON will not parse correctly.',
callbacks=agent_callback_manager,
return_intermediate_steps=True,
)
```
### Error Message and Stack Trace (if applicable)
An output parsing error occurred. In order to pass this error back to the agent and have it try again, pass `handle_parsing_errors=True` to the AgentExecutor.
### Description
I already searched the issues and stumbled upon [this](https://github.com/langchain-ai/langchain/issues/14580) and [this](https://github.com/langchain-ai/langchain/issues/14947), but none of them addresses the problem properly. I'm using an agent with a JSON output parser. The agent constructs JSON at each step, like:
```json
json { "action": "Final Answer", "action_input": "Final answer"}
```
It uses [ReAct](https://smith.langchain.com/hub/hwchase17/react) to construct each step's output. The problem is that whenever there is a double quote (") inside "action_input", the agent raises an ``OutputParserException``. This is somewhat expected, in the sense that the $JSON_BLOB is not valid JSON anyway; the proper fix is to escape the double quotes inside "action_input". I specifically told the agent to escape double quotes inside "action_input" in the initial prompt, but apparently it doesn't respect that, and we can't rely on the agent to always escape them. I think the better approach is to refactor the ``parse_json_markdown`` function. I did a bit of debugging: it calls ``_parse_json``, and that method should handle escaping double quotes inside "action_input" before trying to parse.
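As a sketch of that idea, a best-effort pre-processing step could escape unescaped quotes inside the `action_input` value before handing the blob to the JSON parser (hypothetical helper, not the actual `_parse_json` code):

```python
import json
import re

def escape_action_input(blob: str) -> str:
    """Best-effort repair: escape unescaped double quotes inside the
    "action_input" value so the $JSON_BLOB parses as JSON."""
    m = re.search(r'("action_input"\s*:\s*")(.*)("\s*}\s*$)', blob, re.DOTALL)
    if not m:
        return blob
    # Escape any quote not already preceded by a backslash.
    fixed = re.sub(r'(?<!\\)"', r'\\"', m.group(2))
    return blob[: m.start(2)] + fixed + blob[m.end(2) :]

blob = '{"action": "Final Answer", "action_input": "He said "hi" to me"}'
print(json.loads(escape_action_input(blob))["action_input"])
# -> He said "hi" to me
```

This only handles the common case where `action_input` is the last key in the blob, but it shows the shape a fix inside the parser could take.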
### System Info
```
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:16:51 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T8103
> Python Version: 3.12.0 (main, Oct 2 2023, 12:03:24) [Clang 15.0.0 (clang-1500.0.40.1)]
Package Information
-------------------
> langchain_core: 0.2.18
> langchain: 0.2.7
> langchain_community: 0.2.7
> langsmith: 0.1.85
> langchain_aws: 0.1.11
> langchain_experimental: 0.0.62
> langchain_openai: 0.1.16
> langchain_text_splitters: 0.2.2
> langchainhub: 0.1.20
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
``` | An output parsing error occurred. | https://api.github.com/repos/langchain-ai/langchain/issues/24266/comments | 0 | 2024-07-15T13:51:30Z | 2024-07-16T15:11:05Z | https://github.com/langchain-ai/langchain/issues/24266 | 2,408,802,870 | 24,266 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.tools.tavily_search import TavilySearchResults, TavilyAnswer
tool_with_raw = TavilySearchResults(include_raw_content=True, max_results=1)
tool_with_raw_and_answer = TavilySearchResults(include_raw_content=True, include_answer=True, max_results=1)
tool_without_raw = TavilySearchResults(include_raw_content=False, max_results=1)
r1=tool_with_raw.invoke({'query': 'how to cook a steak?'})
print(r1)
r2=tool_without_raw.invoke({'query': 'how to cook a steak?'})
print(r2)
r3=tool_with_raw_and_answer.invoke({'query': 'how to cook a steak?'})
print(r3)
```
```python
[
{
'url': 'https://www.onceuponachef.com/recipes/how-to-cook-steak-on-the-stovetop.html',
'content': 'Pan-Seared Steaks\nPan-searing is the best way to cook a steak, and it’s also the easiest!\nIngredients\nInstructions\nPair
with\nNutrition Information\nPowered by\nThis website is written and produced for informational purposes only. When I do this again I will do for 5
minutes but will turn off the heat on my cast Iron frying pan 2 minutes before and add butter and rosemary and garlic to get the steak more to our
liking.\n I got a ribeye steak, heated the pan to the top heat and did everything like you mentioned, but after three minutes the steak was burned,
on the other side the same happened. After doing some more research, I find you have to bring the steak to room temperature before you cook it and
yiu have to snip the fat around the edges to keep it from curling. 22 Quick and Easy Recipes in 30 Minutes (or less) + 5 Chef Secrets To Make You A
Better Cook!\nFind a Recipe\nHow To Cook Steak On The Stovetop\nThis post may contain affiliate links.'
}
]
>>> r2=tool_without_raw.invoke({'query': 'how to cook a steak?'})
>>> print(r2)
[
{
'url': 'https://www.onceuponachef.com/recipes/how-to-cook-steak-on-the-stovetop.html',
'content': 'Pan-Seared Steaks\nPan-searing is the best way to cook a steak, and it’s also the easiest!\nIngredients\nInstructions\nPair
with\nNutrition Information\nPowered by\nThis website is written and produced for informational purposes only. When I do this again I will do for 5
minutes but will turn off the heat on my cast Iron frying pan 2 minutes before and add butter and rosemary and garlic to get the steak more to our
liking.\n I got a ribeye steak, heated the pan to the top heat and did everything like you mentioned, but after three minutes the steak was burned,
on the other side the same happened. After doing some more research, I find you have to bring the steak to room temperature before you cook it and
yiu have to snip the fat around the edges to keep it from curling. 22 Quick and Easy Recipes in 30 Minutes (or less) + 5 Chef Secrets To Make You A
Better Cook!\nFind a Recipe\nHow To Cook Steak On The Stovetop\nThis post may contain affiliate links.'
}
]
>>> r3=tool_with_raw_and_answer.invoke({'query': 'how to cook a steak?'})
>>> print(r3)
[
{
'url': 'https://www.onceuponachef.com/recipes/how-to-cook-steak-on-the-stovetop.html',
'content': 'Pan-Seared Steaks\nPan-searing is the best way to cook a steak, and it’s also the easiest!\nIngredients\nInstructions\nPair
with\nNutrition Information\nPowered by\nThis website is written and produced for informational purposes only. When I do this again I will do for 5
minutes but will turn off the heat on my cast Iron frying pan 2 minutes before and add butter and rosemary and garlic to get the steak more to our
liking.\n I got a ribeye steak, heated the pan to the top heat and did everything like you mentioned, but after three minutes the steak was burned,
on the other side the same happened. After doing some more research, I find you have to bring the steak to room temperature before you cook it and
yiu have to snip the fat around the edges to keep it from curling. 22 Quick and Easy Recipes in 30 Minutes (or less) + 5 Chef Secrets To Make You A
Better Cook!\nFind a Recipe\nHow To Cook Steak On The Stovetop\nThis post may contain affiliate links.'
}
]
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Hello,

I cannot get all the information requested via the parameters: as the outputs above show, `include_raw_content` and `include_answer` make no difference. It seems that only `max_results` is honored. I understand that there are two classes (TavilySearchResults, TavilyAnswer), but if TavilySearchResults can be instantiated with these API options, why keep the two classes?
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.3.0: Wed Dec 20 21:28:58 PST 2023; root:xnu-10002.81.5~7/RELEASE_X86_64
> Python Version: 3.12.2 | packaged by conda-forge | (main, Feb 16 2024, 21:00:12) [Clang 16.0.6 ]
Package Information
-------------------
> langchain_core: 0.2.11
> langchain: 0.2.6
> langchain_community: 0.2.6
> langsmith: 0.1.83
> langchain_openai: 0.1.14
> langchain_text_splitters: 0.2.2
> langgraph: 0.1.5
> langserve: 0.2.2 | TavilySearch parameters don't change the output. | https://api.github.com/repos/langchain-ai/langchain/issues/24265/comments | 6 | 2024-07-15T13:41:05Z | 2024-07-18T00:13:03Z | https://github.com/langchain-ai/langchain/issues/24265 | 2,408,779,708 | 24,265 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
n/a
### Error Message and Stack Trace (if applicable)
n/a
### Description
The AzureOpenAIEmbeddings and AzureChatOpenAI classes accept an `azure_ad_token` parameter instead of an `api_key`.
However, AzureSearch in the langchain-community library does not support it. I was able to work around this by copy-pasting AzureSearch from langchain-community and making some modifications:
BearerTokenCredential.py

```python
import time

from azure.core.credentials import AccessToken, TokenCredential


class BearerTokenCredential(TokenCredential):
    def __init__(self, token):
        self._token = token

    def get_token(self, *scopes, **kwargs):
        # The AccessToken expects the token and its expiry time in seconds.
        # Here we set the expiry to an hour from now.
        expiry = int(time.time()) + 3600
        return AccessToken(self._token, expiry)
```
In AzureSearch.py
```python
def _get_search_client(
    endpoint: str,
    key: str,
    azure_ad_access_token: Optional[str],
    index_name: str,
    semantic_configuration_name: Optional[str] = None,
    fields: Optional[List[SearchField]] = None,
    vector_search: Optional[VectorSearch] = None,
    semantic_configurations: Optional[
        Union[SemanticConfiguration, List[SemanticConfiguration]]
    ] = None,
    scoring_profiles: Optional[List[ScoringProfile]] = None,
    default_scoring_profile: Optional[str] = None,
    default_fields: Optional[List[SearchField]] = None,
    user_agent: Optional[str] = "langchain",
    cors_options: Optional[CorsOptions] = None,
    async_: bool = False,
) -> Union[SearchClient, AsyncSearchClient]:
    from azure.core.credentials import AzureKeyCredential
    from azure.core.exceptions import ResourceNotFoundError
    from azure.identity import DefaultAzureCredential, InteractiveBrowserCredential
    from azure.search.documents import SearchClient
    from azure.search.documents.aio import SearchClient as AsyncSearchClient
    from azure.search.documents.indexes import SearchIndexClient
    from azure.search.documents.indexes.models import (
        ExhaustiveKnnAlgorithmConfiguration,
        ExhaustiveKnnParameters,
        HnswAlgorithmConfiguration,
        HnswParameters,
        SearchIndex,
        SemanticConfiguration,
        SemanticField,
        SemanticPrioritizedFields,
        SemanticSearch,
        VectorSearch,
        VectorSearchAlgorithmKind,
        VectorSearchAlgorithmMetric,
        VectorSearchProfile,
    )

    default_fields = default_fields or []
    if key is None:
        if azure_ad_access_token:
            credential = BearerTokenCredential(azure_ad_access_token)
        else:
            credential = DefaultAzureCredential()
    elif key.upper() == "INTERACTIVE":
        credential = InteractiveBrowserCredential()
        credential.get_token("https://search.azure.com/.default")
    else:
        credential = AzureKeyCredential(key)

    index_client: SearchIndexClient = SearchIndexClient(
        endpoint=endpoint, credential=credential, user_agent=user_agent
    )
```
Would it be possible to include this in the next version of langchain-community?
### System Info
n/a | AzureSearch vector store does not support access token authentication. FIX Suggested | https://api.github.com/repos/langchain-ai/langchain/issues/24263/comments | 2 | 2024-07-15T11:54:45Z | 2024-07-17T08:17:36Z | https://github.com/langchain-ai/langchain/issues/24263 | 2,408,547,672 | 24,263 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.prompts.prompt import PromptTemplate
from langchain.chains import GraphCypherQAChain
from langchain_openai import ChatOpenAI  # import missing from the original snippet

CYPHER_QA_TEMPLATE = """
You're an AI cook formulating Cypher statements to navigate through a recipe database.

Schema: {schema}
Examples: {examples}
Question: {question}
"""

CYPHER_GENERATION_PROMPT = PromptTemplate(
    input_variables=["schema", "examples", "question"],
    template=CYPHER_QA_TEMPLATE,
)

model = ChatOpenAI(temperature=0, model_name="gpt-4-0125-preview")

chain = GraphCypherQAChain.from_llm(
    graph=graph,
    llm=model,
    verbose=True,
    validate_cypher=True,
    cypher_prompt=CYPHER_GENERATION_PROMPT,
)

res = chain.invoke({"schema": graph.schema, "examples": examples, "question": question})
```
### Error Message and Stack Trace (if applicable)
```
> Entering new GraphCypherQAChain chain...
Traceback (most recent call last):
File "/Users/<path_to_my_project>/src/text2cypher_langchain.py", line 129, in <module>
res = chain.invoke({"schema": graph.schema,"examples" : examples,"question":question})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/<path_to_my_project>/venv/lib/python3.11/site-packages/langchain/chains/base.py", line 166, in invoke
raise e
File "/Users/<path_to_my_project>/venv/lib/python3.11/site-packages/langchain/chains/base.py", line 154, in invoke
self._validate_inputs(inputs)
File "/Users/<path_to_my_project>/venv/lib/python3.11/site-packages/langchain/chains/base.py", line 284, in _validate_inputs
raise ValueError(f"Missing some input keys: {missing_keys}")
ValueError: Missing some input keys: {'query'}
```
### Description
I'm getting a missing key error when passing custom arguments in `PromptTemplate` and `GraphCypherQAChain`.
This seems similar to #19560 now closed.
### System Info
- langchain==0.2.7
- MacOS 13.6.7 (Ventura)
- python 3.11.4 | Missing key error - Using PromptTemplate and GraphCypherQAChain. | https://api.github.com/repos/langchain-ai/langchain/issues/24260/comments | 8 | 2024-07-15T10:00:01Z | 2024-07-17T19:56:51Z | https://github.com/langchain-ai/langchain/issues/24260 | 2,408,338,395 | 24,260 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
async def run_chatbot(vectorstore, session_id, uid, chatbot_data):
    try:
        openai_api_key = os.getenv("OPENAI_API_KEY")
        if not openai_api_key:
            raise ValueError("Missing OpenAI API key in environment variables")
        print(chatbot_data['activeModel'])

        model = ChatOpenAI(
            temperature=0.5,
            model_name=chatbot_data['activeModel'],
            openai_api_key=openai_api_key,
        )

        firestore_config = {
            "collection_name": "chathistory",
            "session_id": session_id,
            "user_id": uid,
        }
        chat_history = FirestoreChatMessageHistory(**firestore_config)

        memory = ConversationBufferWindowMemory(
            chat_history=chat_history,
            memory_key="chat_history",
            input_key="question",
            output_key="text",
        )

        # retrieval qa chain
        qa = RetrievalQA.from_chain_type(
            llm=model,
            chain_type="stuff",
            retriever=vectorstore.as_retriever()
        )

        qa_tool = Tool(
            name='Knowledge Base',
            func=qa.run,
            description=(
                'use this tool when answering general knowledge queries to get '
                'more information about the topic'
            )
        )
        tools = [qa_tool]

        agent = initialize_agent(
            agent='chat-conversational-react-description',
            tools=tools,
            llm=model,
            verbose=True,
            max_iterations=3,
            early_stopping_method='generate',
            memory=memory,
        )
```
### Error Message and Stack Trace (if applicable)
variable chat_history should be a list of base messages, got
### Description
I don't know what the problem is.
### System Info
aiohttp==3.9.5
aiosignal==1.3.1
annotated-types @ file:///private/var/folders/nz/j6p8yfhx1mv_0grj5xl4650h0000gp/T/abs_1fa2djihwb/croot/annotated-types_1709542925772/work
anyio @ file:///private/var/folders/k1/30mswbxs7r1g6zwn8y4fyt500000gp/T/abs_a17a7759g2/croot/anyio_1706220182417/work
asgiref==3.8.1
async-timeout==4.0.3
attrs==23.2.0
bidict==0.23.1
blinker==1.7.0
CacheControl==0.14.0
cachetools==5.3.3
certifi==2024.2.2
cffi==1.16.0
charset-normalizer==3.3.2
click==8.1.7
cryptography==42.0.5
dataclasses-json==0.6.4
distro @ file:///private/var/folders/nz/j6p8yfhx1mv_0grj5xl4650h0000gp/T/abs_ddkyz0575y/croot/distro_1714488254309/work
exceptiongroup @ file:///private/var/folders/nz/j6p8yfhx1mv_0grj5xl4650h0000gp/T/abs_b2258scr33/croot/exceptiongroup_1706031391815/work
firebase-admin==6.5.0
Flask==3.0.3
Flask-Cors==4.0.0
Flask-SocketIO==5.3.6
frozenlist==1.4.1
google-api-core==2.18.0
google-api-python-client==2.127.0
google-auth==2.29.0
google-auth-httplib2==0.2.0
google-cloud-core==2.4.1
google-cloud-firestore==2.16.0
google-cloud-storage==2.16.0
google-crc32c==1.5.0
google-resumable-media==2.7.0
googleapis-common-protos==1.63.0
grpcio==1.62.2
grpcio-status==1.62.2
h11 @ file:///private/var/folders/k1/30mswbxs7r1g6zwn8y4fyt500000gp/T/abs_110bmw2coo/croot/h11_1706652289620/work
httpcore @ file:///private/var/folders/k1/30mswbxs7r1g6zwn8y4fyt500000gp/T/abs_fcxiho9nv7/croot/httpcore_1706728465004/work
httplib2==0.22.0
httpx @ file:///private/var/folders/k1/30mswbxs7r1g6zwn8y4fyt500000gp/T/abs_727e6zfsxn/croot/httpx_1706887102687/work
idna @ file:///private/var/folders/k1/30mswbxs7r1g6zwn8y4fyt500000gp/T/abs_a12xpo84t2/croot/idna_1714398852854/work
itsdangerous==2.2.0
Jinja2==3.1.3
jsonpatch==1.33
jsonpointer==2.4
langchain==0.1.16
langchain-community==0.0.34
langchain-core==0.2.17
langchain-google-firestore==0.2.1
langchain-openai==0.1.16
langchain-pinecone==0.1.1
langchain-text-splitters==0.0.1
langsmith==0.1.85
MarkupSafe==2.1.5
marshmallow==3.21.1
more-itertools==10.2.0
msgpack==1.0.8
multidict==6.0.5
mypy-extensions==1.0.0
numpy==1.26.4
openai==1.35.13
orjson==3.10.1
packaging==23.2
pinecone-client==3.2.2
proto-plus==1.23.0
protobuf==4.25.3
pyasn1==0.6.0
pyasn1_modules==0.4.0
pycparser==2.22
pydantic @ file:///private/var/folders/k1/30mswbxs7r1g6zwn8y4fyt500000gp/T/abs_0ai8cvgm2c/croot/pydantic_1709577986211/work
pydantic_core @ file:///private/var/folders/k1/30mswbxs7r1g6zwn8y4fyt500000gp/T/abs_06smitnu98/croot/pydantic-core_1709573985903/work
PyJWT==2.8.0
pyparsing==3.1.2
python-dotenv==1.0.1
python-engineio==4.9.0
python-socketio==5.11.2
PyYAML==6.0.1
regex==2024.5.15
requests==2.31.0
rsa==4.9
simple-websocket==1.0.0
sniffio @ file:///private/var/folders/nz/j6p8yfhx1mv_0grj5xl4650h0000gp/T/abs_1573pknjrg/croot/sniffio_1705431298885/work
SQLAlchemy==2.0.29
tenacity==8.2.3
tiktoken==0.7.0
tqdm==4.66.2
typing-inspect==0.9.0
typing_extensions @ file:///private/var/folders/nz/j6p8yfhx1mv_0grj5xl4650h0000gp/T/abs_93dg13ilv4/croot/typing_extensions_1715268840722/work
uritemplate==4.1.1
urllib3==2.2.1
Werkzeug==3.0.2
wsproto==1.2.0
yarl==1.9.4
| variable chat_history should be a list of base messages, got | https://api.github.com/repos/langchain-ai/langchain/issues/24257/comments | 2 | 2024-07-15T08:55:17Z | 2024-07-17T08:52:45Z | https://github.com/langchain-ai/langchain/issues/24257 | 2,408,209,723 | 24,257 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
this is my code:
```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import ConfigurableField
from langchain_core.tools import tool
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain.globals import set_verbose
from langchain.globals import set_debug
from langchain_openai import ChatOpenAI  # missing from the original snippet

set_debug(True)

@tool
def multiply(x: float, y: float) -> float:
    """Multiply 'x' times 'y'."""
    return x * y

@tool
def exponentiate(x: float, y: float) -> float:
    """Raise 'x' to the 'y'."""
    return x**y

@tool
def add(x: float, y: float) -> float:
    """Add 'x' and 'y'."""
    return x + y

prompt = ChatPromptTemplate.from_messages([
    ("system", "you're a helpful assistant"),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])

tools = [multiply, exponentiate, add]

llm = ChatOpenAI(model="command-r", base_url="http://localhost:11434/v1")

agent = create_tool_calling_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

agent_executor.invoke({"input": "what's 3 plus 5 raised to the 2.743. also what's 17.24 - 918.1241"})
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I see in the debug log that the tools are not used with any of the following local models: command-r, qwen2, llama3, nexusraven.
With regular OpenAI it worked.
Can't I use `create_tool_calling_agent` with Ollama models?
[This post](https://blog.langchain.dev/tool-calling-with-langchain/) suggests it should work with every model.
### System Info
system : Apple M3 Pro
libraries :
langchain==0.2.7
langchain-aws==0.1.10
langchain-cohere==0.1.9
langchain-community==0.2.7
langchain-core==0.2.18
langchain-experimental==0.0.62
langchain-openai==0.1.15
langchain-text-splitters==0.2.2
langchainhub==0.1.20
| issue using tools with ollama local models | https://api.github.com/repos/langchain-ai/langchain/issues/24255/comments | 0 | 2024-07-15T07:46:02Z | 2024-07-15T07:48:45Z | https://github.com/langchain-ai/langchain/issues/24255 | 2,408,084,105 | 24,255 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I am trying to use LLMGraphTransformer
Despite upgrading everything still facing this issue
```python
from langchain.transformers import LLMGraphTransformer
```

```
ModuleNotFoundError                       Traceback (most recent call last)
Cell In[6], line 2
      1 from langchain.llms import OpenAI
----> 2 from langchain.transformers import LLMGraphTransformer
      3 import getpass
      4 import os

ModuleNotFoundError: No module named 'langchain.transformers'
```
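For reference, `LLMGraphTransformer` is published in the separate `langchain-experimental` package rather than in `langchain` itself, which is why the import above fails. A hedged sketch of the working import, guarded so it degrades gracefully when the package is absent:

```python
import importlib.util

# `pip install langchain-experimental` provides this module:
have_experimental = importlib.util.find_spec("langchain_experimental") is not None
if have_experimental:
    from langchain_experimental.graph_transformers import LLMGraphTransformer
    print("import ok:", LLMGraphTransformer.__name__)
else:
    print("langchain-experimental is not installed")
```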
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am trying to use LLMGraphTransformer
Despite upgrading everything still facing this issue
`from langchain.transformers import LLMGraphTransformer
ModuleNotFoundError Traceback (most recent call last)
Cell In[6], line 2
1 from langchain.llms import OpenAI
----> 2 from langchain.transformers import LLMGraphTransformer
3 import getpass
4 import os
ModuleNotFoundError: No module named 'langchain.transformers'`
### System Info
I am trying to use LLMGraphTransformer
Despite upgrading everything still facing this issue
`from langchain.transformers import LLMGraphTransformer
ModuleNotFoundError Traceback (most recent call last)
Cell In[6], line 2
1 from langchain.llms import OpenAI
----> 2 from langchain.transformers import LLMGraphTransformer
3 import getpass
4 import os
ModuleNotFoundError: No module named 'langchain.transformers'` | No module named 'langchain.transformers' | https://api.github.com/repos/langchain-ai/langchain/issues/24251/comments | 3 | 2024-07-15T06:24:37Z | 2024-07-15T21:11:35Z | https://github.com/langchain-ai/langchain/issues/24251 | 2,407,953,032 | 24,251 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os

from langchain_openai import AzureOpenAIEmbeddings
os.environ["AZURE_OPENAI_API_KEY"] = get_auth_token()
os.environ["OPENAI_API_KEY"] = get_auth_token()
os.environ["AZURE_OPENAI_ENDPOINT"] = 'https://workspace.openai.azure.com/'
os.environ["OPENAI_ENDPOINT"] = 'https://workspace.openai.azure.com/'
os.environ['OPENAI_API_TYPE'] = "azure"
os.environ['OPENAI_API_VERSION']='2023-07-01-preview'
embeddings = AzureOpenAIEmbeddings(
model='text-embedding-3-small',
chunk_size=1
)
embeddings.embed_documents(['text'])
```
### Error Message and Stack Trace (if applicable)
```text
---------------------------------------------------------------------------
SSLEOFError Traceback (most recent call last)
File /anaconda/envs/nlp_min/lib/python3.10/site-packages/urllib3/connectionpool.py:670, in HTTPConnectionPool.urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
    669 # Make the request on the httplib connection object.
......
    706 elapsed = preferred_clock() - start
File /anaconda/envs/nlp_min/lib/python3.10/site-packages/requests/adapters.py:517, in HTTPAdapter.send(self, request, stream, timeout, verify, cert, proxies)
    513     raise ProxyError(e, request=request)
    515 if isinstance(e.reason, _SSLError):
    516     # This branch is for urllib3 v1.22 and later.
--> 517     raise SSLError(e, request=request)
    519 raise ConnectionError(e, request=request)
    521 except ClosedPoolError as e:
SSLError: HTTPSConnectionPool(host='openaipublic.blob.core.windows.net', port=443): Max retries exceeded with url: /encodings/cl100k_base.tiktoken (Caused by SSLError(SSLEOFError(8, '[SSL: UNEXPECTED_EOF_WHILE_READING] EOF occurred in violation of protocol (_ssl.c:1007)')))
````
### Description
I tried this code snippet along with many variations; none worked. The issue is that under the hood the function tries to access openaipublic.blob.core.windows.net, which is not allowed in our environment. Why is this trying to access an external link when all it needs to do is connect to our Azure OpenAI endpoint?
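As a possible workaround (my suggestion, not confirmed in the original report): the download comes from tiktoken fetching the `cl100k_base` encoding, and tiktoken skips the network call when the file is already cached. Pointing the `TIKTOKEN_CACHE_DIR` environment variable at a pre-seeded directory may therefore avoid the outbound request. The directory path below is an assumption for illustration; the cache file name is the SHA-1 hex digest of the blob URL.

```python
import hashlib
import os

# Hypothetical local directory that already contains the encoding file.
cache_dir = "/opt/tiktoken_cache"
os.environ["TIKTOKEN_CACHE_DIR"] = cache_dir

# tiktoken names cached downloads after the SHA-1 of the source URL,
# so a pre-seeded file must use that name.
blob_url = "https://openaipublic.blob.core.windows.net/encodings/cl100k_base.tiktoken"
cache_key = hashlib.sha1(blob_url.encode()).hexdigest()
print(os.path.join(cache_dir, cache_key))  # where tiktoken will look
```

If the file is placed there on a machine with outbound access (or copied from one), `AzureOpenAIEmbeddings` should be able to tokenize without contacting the external host.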
### System Info
langchain==0.2.7
langchain-chroma==0.1.2
langchain-community==0.0.8
langchain-core==0.2.18
langchain-openai==0.1.16
langchain-text-splitters==0.2.2 | LangChain AzureOpenAIEmbeddings is not working due to model trying to access microsoft | https://api.github.com/repos/langchain-ai/langchain/issues/24248/comments | 1 | 2024-07-15T03:20:17Z | 2024-07-17T12:52:46Z | https://github.com/langchain-ai/langchain/issues/24248 | 2,407,782,564 | 24,248 |
[
"langchain-ai",
"langchain"
] | ### URL
_No response_
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
_No response_
### Idea or request for content:
_No response_ | DOC: Knowledge-base cross-fusion: in my project I have some shared knowledge bases and some private knowledge bases, and I want to combine the private and shared knowledge bases when answering. How can this be implemented, and can they be updated later? | https://api.github.com/repos/langchain-ai/langchain/issues/24246/comments | 0 | 2024-07-15T02:31:34Z | 2024-07-15T02:31:34Z | https://github.com/langchain-ai/langchain/issues/24246 | 2,407,748,868 | 24,246
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The LangChain Pinecone store's `.from_documents` and `.add_documents` don't support `id_prefix`.
### Error Message and Stack Trace (if applicable)
How can I insert an id_prefix when upserting using LangChain's Pinecone from_documents? Or what is the alternative? id_prefix is very important when we want to delete specific vectors. #24235
### Description
How can I insert an id_prefix when upserting using LangChain's Pinecone from_documents? Or what is the alternative? id_prefix is very important when we want to delete specific vectors. #24235
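One possible alternative (my suggestion, not an official answer): `add_documents`/`add_texts` accept an explicit `ids=` list, so prefixed IDs can be built manually and later listed with the Pinecone client's prefix filter. A minimal sketch under those assumptions (the `vectorstore`/`index` usage lines are illustrative and commented out):

```python
import uuid

def make_prefixed_ids(doc_count: int, id_prefix: str) -> list[str]:
    """Build Pinecone-style prefixed IDs, e.g. 'mydoc#<uuid>'."""
    return [f"{id_prefix}#{uuid.uuid4()}" for _ in range(doc_count)]

ids = make_prefixed_ids(3, "mydoc")

# Hypothetical usage with the LangChain Pinecone store (not executed here):
# vectorstore.add_documents(documents=docs, ids=ids)
#
# ...and later, deleting everything for that file via the Pinecone client:
# for id_batch in index.list(prefix="mydoc#"):
#     index.delete(ids=id_batch)
```

Since the prefix is encoded in the ID itself, this keeps the "delete all vectors for a given source file" workflow working without `id_prefix` support in the wrapper.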
### System Info
.. | how to insert id_prefix when upserting using langchain pinecone.from_documents??? or what is the alternative? because id_prefix is very important when we want to delete specific vectors #24235 | https://api.github.com/repos/langchain-ai/langchain/issues/24239/comments | 0 | 2024-07-14T12:29:11Z | 2024-07-14T12:31:37Z | https://github.com/langchain-ai/langchain/issues/24239 | 2,407,414,207 | 24,239 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
.
### Error Message and Stack Trace (if applicable)
How can I insert an id_prefix when upserting using LangChain's Pinecone from_documents? Or what is the alternative? id_prefix is very important when searching for a specific file's vectors and then deleting those vectors.
### Description
How can I insert an id_prefix when upserting using LangChain's Pinecone from_documents? Or what is the alternative? id_prefix is very important when searching for a specific file's vectors and then deleting those vectors.
### System Info
.. | how to insert id_prefix when upserting using langchain pinecone.from_documents??? or what is the alternative? because id_prefix is very important when we want to delete specific vectors | https://api.github.com/repos/langchain-ai/langchain/issues/24235/comments | 0 | 2024-07-14T09:49:14Z | 2024-07-14T12:28:04Z | https://github.com/langchain-ai/langchain/issues/24235 | 2,407,354,608 | 24,235 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_experimental.llms.ollama_functions import OllamaFunctions

# Data model
class GradeDocuments(BaseModel):
    """Binary score for relevance check on retrieved documents."""

    binary_score: str = Field(description="Documents are relevant to the question, 'yes' or 'no'")

llm = OllamaFunctions(model="gemma:2b", format="json", temperature=0)
structured_llm_documents_grader = llm.with_structured_output(GradeDocuments)
chain = grade_prompt | structured_llm_documents_grader
chain.invoke({"question": question, "document": document.page_content})
```
### Error Message and Stack Trace (if applicable)
The following error was raised:
```
<class 'TypeError'>: Object of type ModelMetaclass is not JSON serializable
File "/Users/dman/python/chatbot_project/.venv/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4978, in invoke
return self.bound.invoke(
^^^^^^^^^^^^^^^^^^
File "/Users/dman/python/chatbot_project/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 265, in invoke
self.generate_prompt(
File "/Users/dman/python/chatbot_project/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 698, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dman/python/chatbot_project/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 555, in generate
raise e
File "/Users/dman/python/chatbot_project/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 545, in generate
self._generate_with_cache(
File "/Users/dman/python/chatbot_project/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 758, in _generate_with_cache
for chunk in self._stream(messages, stop=stop, **kwargs):
File "/Users/dman/python/chatbot_project/.venv/lib/python3.11/site-packages/langchain_community/chat_models/ollama.py", line 344, in _stream
for stream_resp in self._create_chat_stream(messages, stop, **kwargs):
File "/Users/dman/python/chatbot_project/.venv/lib/python3.11/site-packages/langchain_community/chat_models/ollama.py", line 189, in _create_chat_stream
yield from self._create_stream(
^^^^^^^^^^^^^^^^^^^^
File "/Users/dman/python/chatbot_project/.venv/lib/python3.11/site-packages/langchain_community/llms/ollama.py", line 232, in _create_stream
response = requests.post(
^^^^^^^^^^^^^^
File "/Users/dman/python/chatbot_project/.venv/lib/python3.11/site-packages/requests/api.py", line 115, in post
return request("post", url, data=data, json=json, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dman/python/chatbot_project/.venv/lib/python3.11/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dman/python/chatbot_project/.venv/lib/python3.11/site-packages/requests/sessions.py", line 575, in request
prep = self.prepare_request(req)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dman/python/chatbot_project/.venv/lib/python3.11/site-packages/requests/sessions.py", line 484, in prepare_request
p.prepare(
File "/Users/dman/python/chatbot_project/.venv/lib/python3.11/site-packages/requests/models.py", line 370, in prepare
self.prepare_body(data, files, json)
File "/Users/dman/python/chatbot_project/.venv/lib/python3.11/site-packages/requests/models.py", line 510, in prepare_body
body = complexjson.dumps(json, allow_nan=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.11/3.11.9/Frameworks/Python.fr
```
### Description
I am trying to use OllamaFunctions with with_structured_output, following an example in the user docs. However, I am seeing "Object of type ModelMetaclass is not JSON serializable".
### System Info
(server-py3.11) denniswong@macbook-pro server % pip freeze | grep langchain
langchain==0.2.7
langchain-cohere==0.1.9
langchain-community==0.2.7
langchain-core==0.2.18
langchain-experimental==0.0.62
langchain-openai==0.1.16
langchain-text-splitters==0.2.2
mac
python version 3.11.9 | OllamaFunction returns Object of type ModelMetaclass is not JSON serializable following example on documentation | https://api.github.com/repos/langchain-ai/langchain/issues/24234/comments | 0 | 2024-07-14T06:19:23Z | 2024-07-14T06:21:55Z | https://github.com/langchain-ai/langchain/issues/24234 | 2,407,287,144 | 24,234 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
There is an issue at https://chat.langchain.com/
### Description
When using IME to input Japanese prompts in the Chat LangChain (https://chat.langchain.com/), pressing the Enter key to confirm Japanese character conversion results in the prompt being prematurely sent. This issue likely affects other languages using IME as well. (The same type of issue as https://github.com/langchain-ai/langchain/issues/24231, but the solution is slightly different)
### Steps to Reproduce:
Use IME to input a Japanese prompt.
Press the Enter key to confirm character conversion.
### Expected Behavior:
The input should be correctly converted to Japanese.
The prompt should not be sent.
### Actual Behavior:
The prompt is sent prematurely while still being composed.
### Proposed Solution:
In my local environment, running the following code in the Chrome console resolves the issue. I suggest incorporating a similar solution into the Chat LangChain:
``` javascript
(function() {
'use strict';
var parent_element = document.querySelector("body");
var isComposing = false;
// Start of Japanese input
parent_element.addEventListener('compositionstart', function(event){
if (event.target.tagName === 'TEXTAREA') {
isComposing = true;
}
});
// End of Japanese input
parent_element.addEventListener('compositionend', function(event){
if (event.target.tagName === 'TEXTAREA') {
isComposing = false;
}
});
// Modified handleIMEEnter function
function handleIMEEnter(event) {
if (event.target.tagName === 'TEXTAREA') {
if (event.code == "Enter" && isComposing) {
event.stopPropagation();
}
}
}
// Register handleIMEEnter function as a keydown event listener
parent_element.addEventListener('keydown', handleIMEEnter);
})();
```
### Additional Notes:
The difference from the [IME Input Handling Issue in LangChain Chat Playground](https://github.com/langchain-ai/langchain/issues/24231) is that in Chat LangChain, a new TextArea is dynamically added for each prompt submission. Therefore, it is necessary to ensure that events from the newly added TextArea are also handled. Specifically, this is achieved by capturing and handling events that bubble up to the body element.
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 21.6.0: Thu Jun 8 23:57:12 PDT 2023; root:xnu-8020.240.18.701.6~1/RELEASE_X86_64
> Browser: Google Chrome Version 126.0.6478.127 (Official Build) (x86_64) | IME Input Handling Issue in Chat LangChain | https://api.github.com/repos/langchain-ai/langchain/issues/24233/comments | 2 | 2024-07-13T20:14:44Z | 2024-07-15T17:06:57Z | https://github.com/langchain-ai/langchain/issues/24233 | 2,407,150,660 | 24,233 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Description:
When using IME to input Japanese prompts in the LangChain Chat Playground, pressing the Enter key to confirm Japanese character conversion results in the prompt being prematurely sent. This issue likely affects other languages using IME as well.
### Example Code
```python
from fastapi import FastAPI
from langserve import add_routes
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openai import ChatOpenAI
app = FastAPI()
_prompt = ChatPromptTemplate.from_messages(
[
(
"system",
"Response to a user input in Japanese",
),
MessagesPlaceholder("chat_history"),
("human", "{text}"),
]
)
_model = ChatOpenAI(model='gpt-4o')
chain = _prompt | _model
add_routes(app,
chain,
path="/japanese-speak",
playground_type="chat",
)
if __name__ == "__main__":
import uvicorn
uvicorn.run(app, host="0.0.0.0", port=8000)
```
### Steps to Reproduce:
Use IME to input a Japanese prompt.
Press the Enter key to confirm character conversion.
### Expected Behavior:
The input should be correctly converted to Japanese.
The prompt should not be sent.
### Actual Behavior:
The prompt is sent prematurely while still being composed.
### Proposed Solution:
In my local environment, running the following code in the Chrome console resolves the issue. I suggest incorporating a similar solution into the Chat Playground:
```javascript
(function() {
'use strict';
var input_element = document.querySelector("textarea");
var isComposing = false;
// Start of Japanese input
input_element.addEventListener('compositionstart', function(){
isComposing = true;
});
// End of Japanese input
input_element.addEventListener('compositionend', function(){
isComposing = false;
});
// Modified handleIMEEnter function
function handleIMEEnter(event) {
if (event.code == "Enter" && isComposing) {
event.stopPropagation();
}
}
// Register handleIMEEnter function as a keydown event listener
input_element.addEventListener('keydown', handleIMEEnter, { capture: true });
})();
```
### Additional Notes:
The `{ capture: true }` option in the `addEventListener` call ensures that the `handleIMEEnter` function is called before the prompt submission event, preventing the prompt from being sent prematurely. In the implementation within the Chat Playground, setting the order of event listeners appropriately should eliminate the need for `{ capture: true }`.
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 21.6.0: Thu Jun 8 23:57:12 PDT 2023; root:xnu-8020.240.18.701.6~1/RELEASE_X86_64
> Python Version: 3.11.6 (main, Oct 16 2023, 15:57:36) [Clang 14.0.0 (clang-1400.0.29.202)]
Package Information
-------------------
> langchain_core: 0.2.17
> langchain: 0.2.7
> langchain_community: 0.2.7
> langsmith: 0.1.85
> langchain_anthropic: 0.1.20
> langchain_chroma: 0.1.1
> langchain_cli: 0.0.25
> langchain_experimental: 0.0.61
> langchain_openai: 0.1.16
> langchain_text_splitters: 0.2.2
> langchainhub: 0.1.20
> langchainplus_sdk: 0.0.20
> langgraph: 0.1.8
> langserve: 0.2.2
| IME Input Handling Issue in LangChain Chat Playground | https://api.github.com/repos/langchain-ai/langchain/issues/24231/comments | 0 | 2024-07-13T19:06:02Z | 2024-07-13T19:39:53Z | https://github.com/langchain-ai/langchain/issues/24231 | 2,407,101,635 | 24,231 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [ ] I am sure that this is a bug in LangChain rather than my code.
- [ ] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.chains import RetrievalQAWithSourcesChain
from langchain.prompts import (
ChatPromptTemplate,
HumanMessagePromptTemplate,
PromptTemplate,
SystemMessagePromptTemplate,
)
from langchain_openai import ChatOpenAI
from langchain_community.vectorstores import Redis
from chatbot_api import config
_INDEX_NAME = "Postmarket"
rds = Redis.from_existing_index(
embedding=config.OPEN_AI_EMBEDDINGS,
index_name=_INDEX_NAME,
schema=config.INDEX_SCHEMA,
redis_url=config.REDIS_URL,
)
_template = """Your job is to use information on the documents
to answer questions about postmarket operations. Use the following
context to answer questions. Be as detailed as possible, but don't
make up any information that's not from the context. If you don't
know an answer, say you don't know. If you refer to a document, cite
your reference.
{context}
"""
system_prompt = SystemMessagePromptTemplate(
prompt=PromptTemplate(input_variables=['context'], template=_template)
)
human_prompt = HumanMessagePromptTemplate(
prompt=PromptTemplate(input_variables=['question'], template="{question}")
)
messages = [system_prompt, human_prompt]
postmarket_prompt = ChatPromptTemplate(input_variables=['context', 'question'], messages=messages)
postmarket_chain = RetrievalQAWithSourcesChain.from_chain_type(
llm=ChatOpenAI(model=config.QA_MODEL, temperature=config.TEMPERATURE),
chain_type="stuff",
retriever=rds.as_retriever(search_type="similarity", search_kwargs={"k": 8}),
return_source_documents=True,
# chain_type_kwargs={"prompt": postmarket_prompt}, # this also doesn't work throwing ValueError -> document_variable_name summaries was not found in llm_chain input_variables: ['context', 'question']
verbose=True,
)
postmarket_chain.combine_documents_chain.llm_chain.prompt = postmarket_prompt
```
Then the `postmarket_chain` is used by the tool I defined in my LangChain agent as `func=postmarket_chain.invoke`.
### Error Message and Stack Trace (if applicable)
```
[chain/start] [chain:AgentExecutor > tool:Premarket > chain:RetrievalQAWithSourcesChain] Entering Chain run with input:
{
"question": "What are the procedures for submitting an application for a new medical device?",
"history": []
}
[chain/start] [chain:AgentExecutor > tool:Premarket > chain:RetrievalQAWithSourcesChain > chain:StuffDocumentsChain] Entering Chain run with input:
[inputs]
[chain/start] [chain:AgentExecutor > tool:Premarket > chain:RetrievalQAWithSourcesChain > chain:StuffDocumentsChain > chain:LLMChain] Entering Chain run with input:
{
"question": "What are the procedures for submitting an application for a new medical device?",
"summaries": "Content: Page 12D. Promotional Literature\nAny (I'm cutting the rest but this text is fetched from my vectorstore, I can confirm)"
}
[llm/start] [chain:AgentExecutor > tool:Premarket > chain:RetrievalQAWithSourcesChain > chain:StuffDocumentsChain > chain:LLMChain > llm:ChatOpenAI] Entering LLM run with input:
{
"prompts": [
"System: Your job is to use information on documents\nto answer questions about premarket operations. Use the following\ncontext to answer questions. Be as detailed as possible, but don't\nmake up any information that's not from the context. If you don't\nknow an answer, say you don't know. If you refer to a document, cite\nyour reference.\n{context}\n\nHuman: What are the procedures for submitting an application for a new medical device?"
]
}
[llm/end] [chain:AgentExecutor > tool:Premarket > chain:RetrievalQAWithSourcesChain > chain:StuffDocumentsChain > chain:LLMChain > llm:ChatOpenAI] [5.16s] Exiting LLM run with output:
{
"generations": [
[
{
"text": "I don't have the specific documents or guidelines available in the provided context to detail the procedures for submitting a 510(k) notification for a new medical device. Typically, this process involves preparing and submitting a premarket notification to the FDA to demonstrate that the new device is substantially equivalent to a legally marketed device (predicate device) not subject to premarket approval (PMA). The submission includes information about the device, its intended use, and comparative analyses, among other data. For detailed steps and requirements, it is best to refer directly to the relevant FDA guidelines or documents.",
"generation_info": {
"finish_reason": "stop",
"logprobs": null
},
"type": "ChatGeneration",
"message": {
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"schema",
"messages",
"AIMessage"
],
```
### Description
I have a multimodal RAG system that generates answers using texts parsed from hundreds of PDFs, retrieved from my Redis vector store. I have several chains (RetrievalQAWithSourcesChain) to find relevant contextual texts in the vector store and append them to my chatbot LLM calls. I'm having problems correctly adding context to the system prompt. The example code throws ValueError: Missing some input keys: {'context'}.
The RetrievalQAWithSourcesChain is supposed to use the Redis retriever and append the extracted texts to the {context}, I believe, but it seems it can't, or there's something else I can't see.
It surprisingly works when I use double brackets around 'context' in the prompt -> {{context}}. However, when I examine the logs of the intermediate steps, where LangChain uses the agent's tools to generate an answer, my understanding is that the context is not even passed, and the LLM just uses its own knowledge to give answers without any of the contextual info that's supposed to come from the vector store. Here are some logs below. Notice how some text returned from the vector store is included in summaries, but when StuffDocumentsChain passes that to llm:ChatOpenAI you can see it's not injected into the system prompt (scroll right to see); the context field still remains as {context} (it dropped the outer brackets).
Am I right in my assumption that the context is not being passed to the knowledge window correctly? How can I fix this? All the examples I see from other projects use single brackets around context when they include it in the system prompt. However, I could only make the code work with double brackets, and that seems like it's not injecting the context at all...
Can this be due to the index schema I used when creating the vectorstore? the schema for reference:
```
text:
- name: content
- name: source
numeric:
- name: start_index
- name: page
vector:
- name: content_vector
algorithm: HNSW
datatype: FLOAT32
dims: 384
distance_metric: COSINE
```
### System Info
langchain==0.2.7
langchain-community==0.2.7
langchain-core==0.2.16
langchain-openai==0.1.15
langchain-text-splitters==0.2.2
langchainhub==0.1.20
Python 3.12.4
OS: MacOS Sonoma 14.4.1 | Langchain RetrievalQAWithSourcesChain throwing ValueError: Missing some input keys: {'context'} | https://api.github.com/repos/langchain-ai/langchain/issues/24229/comments | 4 | 2024-07-13T14:46:32Z | 2024-08-03T23:07:59Z | https://github.com/langchain-ai/langchain/issues/24229 | 2,406,966,926 | 24,229 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Colab link: https://colab.research.google.com/drive/1BCat5tBZRcxUhjQ3vGJD3Zu1eiqYIAWz?usp=sharing
Code :
```
!pip install -qU langchain langchain-community langchain-core
!pip install -qU langchain-google-genai
!pip install -qU langchain-text-splitters tiktoken
!pip install -qU faiss-gpu
```
```python
import os
import getpass
os.environ["GOOGLE_API_KEY"] = getpass.getpass("Google API Key:")
import re
import requests
from langchain_community.document_loaders import BSHTMLLoader
# Download the content
response = requests.get("https://en.wikipedia.org/wiki/Car")
# Write it to a file
with open("car.html", "w", encoding="utf-8") as f:
f.write(response.text)
# Load it with an HTML parser
loader = BSHTMLLoader("car.html")
document = loader.load()[0]
# Clean up code
# Replace consecutive new lines with a single new line
document.page_content = re.sub("\n\n+", "\n", document.page_content)
from typing import List, Optional
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.pydantic_v1 import BaseModel, Field
class KeyDevelopment(BaseModel):
"""Information about a development in the history of cars."""
year: int = Field(
..., description="The year when there was an important historic development."
)
description: str = Field(
..., description="What happened in this year? What was the development?"
)
evidence: str = Field(
...,
description="Repeat in verbatim the sentence(s) from which the year and description information were extracted",
)
class ExtractionData(BaseModel):
"""Extracted information about key developments in the history of cars."""
key_developments: List[KeyDevelopment]
# Define a custom prompt to provide instructions and any additional context.
# 1) You can add examples into the prompt template to improve extraction quality
# 2) Introduce additional parameters to take context into account (e.g., include metadata
# about the document from which the text was extracted.)
prompt = ChatPromptTemplate.from_messages(
[
(
"system",
"You are an expert at identifying key historic development in text. "
"Only extract important historic developments. Extract nothing if no important information can be found in the text.",
),
("human", "{text}"),
]
)
from langchain_google_genai import ChatGoogleGenerativeAI
llm = ChatGoogleGenerativeAI(model="gemini-pro")
extractor = prompt | llm.with_structured_output(
schema=ExtractionData,
include_raw=False,
)
from langchain_text_splitters import TokenTextSplitter
text_splitter = TokenTextSplitter(
# Controls the size of each chunk
chunk_size=2000,
# Controls overlap between chunks
chunk_overlap=20,
)
texts = text_splitter.split_text(document.page_content)
from langchain_community.vectorstores import FAISS
from langchain_core.documents import Document
from langchain_core.runnables import RunnableLambda
from langchain_google_genai import GoogleGenerativeAIEmbeddings
from langchain_text_splitters import CharacterTextSplitter
texts = text_splitter.split_text(document.page_content)
vectorstore = FAISS.from_texts(texts, embedding=GoogleGenerativeAIEmbeddings(model="models/embedding-001"))
retriever = vectorstore.as_retriever(
search_kwargs={"k": 1}
) # Only extract from first document
rag_extractor = {
"text": retriever | (lambda docs: docs[0].page_content) # fetch content of top doc
} | extractor
results = rag_extractor.invoke("Key developments associated with cars")
```
### Error Message and Stack Trace (if applicable)
InvalidArgument Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/langchain_google_genai/chat_models.py](https://localhost:8080/#) in _chat_with_retry(**kwargs)
177 try:
--> 178 return generation_method(**kwargs)
179 # Do not retry for these errors.
25 frames
[/usr/local/lib/python3.10/dist-packages/google/ai/generativelanguage_v1beta/services/generative_service/client.py](https://localhost:8080/#) in generate_content(self, request, model, contents, retry, timeout, metadata)
826 # Send the request.
--> 827 response = rpc(
828 request,
[/usr/local/lib/python3.10/dist-packages/google/api_core/gapic_v1/method.py](https://localhost:8080/#) in __call__(self, timeout, retry, compression, *args, **kwargs)
130
--> 131 return wrapped_func(*args, **kwargs)
132
[/usr/local/lib/python3.10/dist-packages/google/api_core/retry/retry_unary.py](https://localhost:8080/#) in retry_wrapped_func(*args, **kwargs)
292 )
--> 293 return retry_target(
294 target,
[/usr/local/lib/python3.10/dist-packages/google/api_core/retry/retry_unary.py](https://localhost:8080/#) in retry_target(target, predicate, sleep_generator, timeout, on_error, exception_factory, **kwargs)
152 # defer to shared logic for handling errors
--> 153 _retry_error_helper(
154 exc,
[/usr/local/lib/python3.10/dist-packages/google/api_core/retry/retry_base.py](https://localhost:8080/#) in _retry_error_helper(exc, deadline, next_sleep, error_list, predicate_fn, on_error_fn, exc_factory_fn, original_timeout)
211 )
--> 212 raise final_exc from source_exc
213 if on_error_fn is not None:
[/usr/local/lib/python3.10/dist-packages/google/api_core/retry/retry_unary.py](https://localhost:8080/#) in retry_target(target, predicate, sleep_generator, timeout, on_error, exception_factory, **kwargs)
143 try:
--> 144 result = target()
145 if inspect.isawaitable(result):
[/usr/local/lib/python3.10/dist-packages/google/api_core/timeout.py](https://localhost:8080/#) in func_with_timeout(*args, **kwargs)
119
--> 120 return func(*args, **kwargs)
121
[/usr/local/lib/python3.10/dist-packages/google/api_core/grpc_helpers.py](https://localhost:8080/#) in error_remapped_callable(*args, **kwargs)
80 except grpc.RpcError as exc:
---> 81 raise exceptions.from_grpc_error(exc) from exc
82
InvalidArgument: 400 * GenerateContentRequest.tools[0].function_declarations[0].parameters.properties[key_developments].items: missing field.
The above exception was the direct cause of the following exception:
ChatGoogleGenerativeAIError Traceback (most recent call last)
[<ipython-input-18-49ad0989f74d>](https://localhost:8080/#) in <cell line: 1>()
----> 1 results = rag_extractor.invoke("Key developments associated with cars")
[/usr/local/lib/python3.10/dist-packages/langchain_core/runnables/base.py](https://localhost:8080/#) in invoke(self, input, config, **kwargs)
2794 input = step.invoke(input, config, **kwargs)
2795 else:
-> 2796 input = step.invoke(input, config)
2797 # finish the root run
2798 except BaseException as e:
[/usr/local/lib/python3.10/dist-packages/langchain_core/runnables/base.py](https://localhost:8080/#) in invoke(self, input, config, **kwargs)
4976 **kwargs: Optional[Any],
4977 ) -> Output:
-> 4978 return self.bound.invoke(
4979 input,
4980 self._merge_configs(config),
[/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/chat_models.py](https://localhost:8080/#) in invoke(self, input, config, stop, **kwargs)
263 return cast(
264 ChatGeneration,
--> 265 self.generate_prompt(
266 [self._convert_input(input)],
267 stop=stop,
[/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/chat_models.py](https://localhost:8080/#) in generate_prompt(self, prompts, stop, callbacks, **kwargs)
696 ) -> LLMResult:
697 prompt_messages = [p.to_messages() for p in prompts]
--> 698 return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
699
700 async def agenerate_prompt(
[/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/chat_models.py](https://localhost:8080/#) in generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
553 if run_managers:
554 run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 555 raise e
556 flattened_outputs = [
557 LLMResult(generations=[res.generations], llm_output=res.llm_output) # type: ignore[list-item]
[/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/chat_models.py](https://localhost:8080/#) in generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
543 try:
544 results.append(
--> 545 self._generate_with_cache(
546 m,
547 stop=stop,
[/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/chat_models.py](https://localhost:8080/#) in _generate_with_cache(self, messages, stop, run_manager, **kwargs)
768 else:
769 if inspect.signature(self._generate).parameters.get("run_manager"):
--> 770 result = self._generate(
771 messages, stop=stop, run_manager=run_manager, **kwargs
772 )
[/usr/local/lib/python3.10/dist-packages/langchain_google_genai/chat_models.py](https://localhost:8080/#) in _generate(self, messages, stop, run_manager, tools, functions, safety_settings, tool_config, generation_config, **kwargs)
765 generation_config=generation_config,
766 )
--> 767 response: GenerateContentResponse = _chat_with_retry(
768 request=request,
769 **kwargs,
[/usr/local/lib/python3.10/dist-packages/langchain_google_genai/chat_models.py](https://localhost:8080/#) in _chat_with_retry(generation_method, **kwargs)
194 raise e
195
--> 196 return _chat_with_retry(**kwargs)
197
198
[/usr/local/lib/python3.10/dist-packages/tenacity/__init__.py](https://localhost:8080/#) in wrapped_f(*args, **kw)
334 copy = self.copy()
335 wrapped_f.statistics = copy.statistics # type: ignore[attr-defined]
--> 336 return copy(f, *args, **kw)
337
338 def retry_with(*args: t.Any, **kwargs: t.Any) -> WrappedFn:
[/usr/local/lib/python3.10/dist-packages/tenacity/__init__.py](https://localhost:8080/#) in __call__(self, fn, *args, **kwargs)
473 retry_state = RetryCallState(retry_object=self, fn=fn, args=args, kwargs=kwargs)
474 while True:
--> 475 do = self.iter(retry_state=retry_state)
476 if isinstance(do, DoAttempt):
477 try:
[/usr/local/lib/python3.10/dist-packages/tenacity/__init__.py](https://localhost:8080/#) in iter(self, retry_state)
374 result = None
375 for action in self.iter_state.actions:
--> 376 result = action(retry_state)
377 return result
378
[/usr/local/lib/python3.10/dist-packages/tenacity/__init__.py](https://localhost:8080/#) in <lambda>(rs)
396 def _post_retry_check_actions(self, retry_state: "RetryCallState") -> None:
397 if not (self.iter_state.is_explicit_retry or self.iter_state.retry_run_result):
--> 398 self._add_action_func(lambda rs: rs.outcome.result())
399 return
400
[/usr/lib/python3.10/concurrent/futures/_base.py](https://localhost:8080/#) in result(self, timeout)
449 raise CancelledError()
450 elif self._state == FINISHED:
--> 451 return self.__get_result()
452
453 self._condition.wait(timeout)
[/usr/lib/python3.10/concurrent/futures/_base.py](https://localhost:8080/#) in __get_result(self)
401 if self._exception:
402 try:
--> 403 raise self._exception
404 finally:
405 # Break a reference cycle with the exception in self._exception
[/usr/local/lib/python3.10/dist-packages/tenacity/__init__.py](https://localhost:8080/#) in __call__(self, fn, *args, **kwargs)
476 if isinstance(do, DoAttempt):
477 try:
--> 478 result = fn(*args, **kwargs)
479 except BaseException: # noqa: B902
480 retry_state.set_exception(sys.exc_info()) # type: ignore[arg-type]
[/usr/local/lib/python3.10/dist-packages/langchain_google_genai/chat_models.py](https://localhost:8080/#) in _chat_with_retry(**kwargs)
188
189 except google.api_core.exceptions.InvalidArgument as e:
--> 190 raise ChatGoogleGenerativeAIError(
191 f"Invalid argument provided to Gemini: {e}"
192 ) from e
ChatGoogleGenerativeAIError: Invalid argument provided to Gemini: 400 * GenerateContentRequest.tools[0].function_declarations[0].parameters.properties[key_developments].items: missing field.
### Description
Hi!
Since yesterday, I have been trying to follow this official guide from the v0.2 documentation: https://python.langchain.com/v0.2/docs/how_to/extraction_long_text/
However, it doesn't work well with ChatGoogleGenerativeAI.
The Colab link is here if you want to try it: https://colab.research.google.com/drive/1BCat5tBZRcxUhjQ3vGJD3Zu1eiqYIAWz?usp=sharing
I followed the guide step by step, but it keeps raising an error about a missing field in the request.
For information, ChatGoogleGenerativeAI does support structured output: https://python.langchain.com/v0.2/docs/integrations/chat/google_generative_ai/
It's also not about my location (I have already used ChatGoogleGenerativeAI successfully elsewhere).
I have tried different things with the schema, and I came to the conclusion that I can't use a schema that nests another schema (or a List) in it, like:
```python
class ExtractionData(BaseModel):
"""Extracted information about key developments in the history of cars."""
key_developments: List[KeyDevelopment]
```
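The 400 points at the converted function declaration. In the JSON-schema-style format Gemini expects for tool calling, every `array`-typed property must carry an `items` field describing the element type; the error message suggests the declaration generated from `List[KeyDevelopment]` lacked it. A minimal sketch of that rule over plain dicts (the exact payload LangChain generates is an assumption on my part):

```python
# Hedged sketch: JSON-schema-style property specs as Gemini's function
# declarations expect them. An "array" property needs an "items" field;
# the error suggests the spec converted from List[KeyDevelopment] lacked it.
broken_props = {"key_developments": {"type": "array"}}  # no "items" -> rejected
fixed_props = {
    "key_developments": {
        "type": "array",
        "items": {"type": "object"},  # element type must be declared
    }
}

def arrays_have_items(props: dict) -> bool:
    """Return True when every array-typed property declares its items."""
    return all(
        "items" in spec
        for spec in props.values()
        if spec.get("type") == "array"
    )

print(arrays_have_items(broken_props), arrays_have_items(fixed_props))  # False True
```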
However, I can use this schema without problems:
```python
class KeyDevelopment(BaseModel):
"""Information about a development in the history of cars."""
year: int = Field(
..., description="The year when there was an important historic development."
)
description: str = Field(
..., description="What happened in this year? What was the development?"
)
evidence: str = Field(
...,
description="Repeat in verbatim the sentence(s) from which the year and description information were extracted",
)
```
(Responses produced with this schema tend to be very poor with ChatGoogleGenerativeAI, though: nonsense about 90% of the time.)
Sorry for my English, which is not perfect, and thank you for reading!
- ToyHugs
### System Info
https://colab.research.google.com/drive/1BCat5tBZRcxUhjQ3vGJD3Zu1eiqYIAWz?usp=sharing | [Google Generative AI] Structured Output doesn't work with advanced schema | https://api.github.com/repos/langchain-ai/langchain/issues/24225/comments | 1 | 2024-07-13T11:54:26Z | 2024-07-22T13:53:13Z | https://github.com/langchain-ai/langchain/issues/24225 | 2,406,868,969 | 24,225 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_community.document_loaders import NotionDBLoader
loader = NotionDBLoader(database_id='your_database_id', integration_token='your_integration_token')
documents = loader.load()
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/Users/lulu/dev/python/deeple_io/poet/main.py", line 133, in <module>
app = asyncio.run(main())
File "/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/asyncio/base_events.py", line 642, in run_until_complete
return future.result()
File "/Users/lulu/dev/python/deeple_io/poet/main.py", line 40, in main
documents = loader.load()
File "/Users/lulu/dev/python/deeple_io/poet/.venv/lib/python3.9/site-packages/langchain_community/document_loaders/notiondb.py", line 67, in load
return list(self.load_page(page_summary) for page_summary in page_summaries)
File "/Users/lulu/dev/python/deeple_io/poet/.venv/lib/python3.9/site-packages/langchain_community/document_loaders/notiondb.py", line 67, in <genexpr>
return list(self.load_page(page_summary) for page_summary in page_summaries)
File "/Users/lulu/dev/python/deeple_io/poet/.venv/lib/python3.9/site-packages/langchain_community/document_loaders/notiondb.py", line 137, in load_page
[item["name"] for item in prop_data["people"]]
File "/Users/lulu/dev/python/deeple_io/poet/.venv/lib/python3.9/site-packages/langchain_community/document_loaders/notiondb.py", line 137, in <listcomp>
[item["name"] for item in prop_data["people"]]
KeyError: 'name'
### Description
## **Problem Description:**
When attempting to load documents from NotionDB using the LangChain library, a `KeyError: 'name'` occurs.
## **Steps to Reproduce:**
1. Install the LangChain library.
2. Run the following code.
3. Observe the error.
## **Expected Behavior:**
The documents should be loaded correctly from NotionDB.
## **Actual Behavior:**
A `KeyError: 'name'` occurs.
### System Info
langchain==0.2.7
langchain-community==0.2.7
langchain-core==0.2.17
langchain-openai==0.1.16
langchain-text-splitters==0.2.2
| Issue: Document loader for Notion DB doesn't supports KeyError: 'name' | https://api.github.com/repos/langchain-ai/langchain/issues/24223/comments | 0 | 2024-07-13T09:12:21Z | 2024-08-01T13:55:41Z | https://github.com/langchain-ai/langchain/issues/24223 | 2,406,813,253 | 24,223 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code:
```Python
import os
from typing import List
import dotenv
from langchain.output_parsers import OutputFixingParser
from langchain_core.output_parsers import PydanticOutputParser
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI
dotenv.load_dotenv()
class Actor(BaseModel):
name: str = Field(description="name of an actor")
film_names: List[str] = Field(description="list of names of films they starred in")
actor_query = "Generate the filmography for a random actor."
parser = PydanticOutputParser(pydantic_object=Actor)
misformatted = "{'name': 'Tom Hanks', 'film_names': ['Forrest Gump']}"
new_parser = OutputFixingParser.from_llm(parser=parser, llm=ChatOpenAI(openai_api_base=os.getenv('OPENAI_API_BASE')))
print(new_parser.parse(misformatted))
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/Users/zhangshenao/Desktop/LLM/happy-langchain/6-输出解析/2.使用OutputFixingParser自动修复解析器.py", line 39, in <module>
print(new_parser.parse(misformatted))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain/output_parsers/fix.py", line 74, in parse
completion = self.retry_chain.invoke(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 2497, in invoke
input = step.invoke(input, config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_core/prompts/base.py", line 179, in invoke
return self._call_with_config(
^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 1593, in _call_with_config
context.run(
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_core/runnables/config.py", line 380, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_core/prompts/base.py", line 153, in _format_prompt_with_error_handling
_inner_input = self._validate_input(inner_input)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_core/prompts/base.py", line 145, in _validate_input
raise KeyError(
KeyError: "Input to PromptTemplate is missing variables {'completion'}. Expected: ['completion', 'error', 'instructions'] Received: ['instructions', 'input', 'error']"
### Description
* I am using the OutputFixingParser component according to the official documentation, but an exception occurs
* The official documentation link is: https://python.langchain.com/v0.2/docs/how_to/output_parser_fixing/
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 21.5.0: Tue Apr 26 21:08:29 PDT 2022; root:xnu-8020.121.3~4/RELEASE_ARM64_T8101
> Python Version: 3.12.3 (v3.12.3:f6650f9ad7, Apr 9 2024, 08:18:47) [Clang 13.0.0 (clang-1300.0.29.30)]
Package Information
-------------------
> langchain_core: 0.2.12
> langchain: 0.2.7
> langchain_community: 0.2.7
> langsmith: 0.1.82
> langchain_huggingface: 0.0.3
> langchain_openai: 0.1.14
> langchain_text_splitters: 0.2.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | [OutputFixingParser] I am using the OutputFixingParser component according to the official documentation, but an exception has occurred | https://api.github.com/repos/langchain-ai/langchain/issues/24219/comments | 2 | 2024-07-13T02:55:33Z | 2024-07-18T02:14:36Z | https://github.com/langchain-ai/langchain/issues/24219 | 2,406,650,388 | 24,219 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/tutorials/qa_chat_history/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
I simply tried to run the sample code in the [Agents section](https://python.langchain.com/v0.2/docs/tutorials/qa_chat_history/#agents) and it raised the following exception:
`openai/_base_client.py", line 1046, in _request
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "An assistant message with 'tool_calls' must be followed by tool messages responding to each 'tool_call_id'. The following tool_call_ids did not have response messages: call_OTNdB9zMnNa1V7U8G5Omt7Jr", 'type': 'invalid_request_error', 'param': 'messages.[2].role', 'code': None}}`
I am using the following versions of langchain, langgraph, and openai:
langchain==0.2.7
langchain-community==0.2.7
langchain-core==0.2.16
langchain-openai==0.1.14
langchain-text-splitters==0.2.2
langgraph==0.1.8
langsmith==0.1.84
openai==1.35.13
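For context, the 400 appears to come from OpenAI's message-ordering rule: every entry in an assistant message's `tool_calls` must be answered by a subsequent `tool` message carrying the matching `tool_call_id`. A rough sketch of that check over plain-dict messages (the message shapes below are my simplification of the Chat Completions format, not the tutorial's code):

```python
def unanswered_tool_calls(messages):
    """Collect tool_call ids that never receive a {"role": "tool"} response."""
    pending = set()
    for message in messages:
        if message["role"] == "assistant":
            pending |= {call["id"] for call in message.get("tool_calls", [])}
        elif message["role"] == "tool":
            pending.discard(message["tool_call_id"])
    return pending

history = [
    {"role": "user", "content": "What is Task Decomposition?"},
    {"role": "assistant", "tool_calls": [{"id": "call_OTNdB9zMnNa1V7U8G5Omt7Jr"}]},
    {"role": "user", "content": "Follow-up question"},  # tool response missing
]
print(unanswered_tool_calls(history))  # {'call_OTNdB9zMnNa1V7U8G5Omt7Jr'}
```

An empty set from this check would correspond to a history OpenAI accepts.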
### Idea or request for content:
_No response_ | DOC: openai.BadRequestError Raised when Running the "Agents" Sample Code | https://api.github.com/repos/langchain-ai/langchain/issues/24196/comments | 1 | 2024-07-12T19:03:34Z | 2024-07-15T05:39:22Z | https://github.com/langchain-ai/langchain/issues/24196 | 2,406,179,110 | 24,196 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The function `_await_for_run` inside [openai_assistant/base.py](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/agents/openai_assistant/base.py) has a `sleep` invocation, but the only `sleep` imported in that file is `time.sleep`, which is blocking.
Awaiting `asyncio.sleep` instead would be the correct solution to avoid blocking calls in an async function.
In particular this code:
```python
async def _await_for_run(self, run_id: str, thread_id: str) -> Any:
in_progress = True
while in_progress:
run = await self.async_client.beta.threads.runs.retrieve(
run_id, thread_id=thread_id
)
in_progress = run.status in ("in_progress", "queued")
if in_progress:
sleep(self.check_every_ms / 1000)
return run
```
should become:
```python
async def _await_for_run(self, run_id: str, thread_id: str) -> Any:
in_progress = True
while in_progress:
run = await self.async_client.beta.threads.runs.retrieve(
run_id, thread_id=thread_id
)
in_progress = run.status in ("in_progress", "queued")
if in_progress:
                await asyncio.sleep(self.check_every_ms / 1000)
return run
```
In addition, `asyncio` should be imported somewhere in that file.
I may open a pull request to fix this, but I would only be able to do so at the beginning of next week.
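To illustrate why this matters, here is a small self-contained demo, independent of LangChain: on a single event loop, tasks that call `time.sleep` run back to back, while tasks that `await asyncio.sleep` overlap.

```python
import asyncio
import time

async def blocking_wait(delay: float) -> None:
    time.sleep(delay)  # blocks the whole event loop

async def cooperative_wait(delay: float) -> None:
    await asyncio.sleep(delay)  # yields control to other tasks

async def timed_gather(waiter, delay: float, n_tasks: int) -> float:
    start = time.perf_counter()
    await asyncio.gather(*(waiter(delay) for _ in range(n_tasks)))
    return time.perf_counter() - start

blocking = asyncio.run(timed_gather(blocking_wait, 0.1, 3))
cooperative = asyncio.run(timed_gather(cooperative_wait, 0.1, 3))
print(f"time.sleep: ~{blocking:.2f}s, asyncio.sleep: ~{cooperative:.2f}s")
# time.sleep serializes the three waits (~0.3s); asyncio.sleep overlaps them (~0.1s)
```

In a FastAPI server the event loop also serves other requests, so each blocked interval stalls every in-flight request, not just the one polling the run status.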
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I'm trying to create a FastAPI endpoint to serve LangChain completions, and I noticed that increasing `check_every_ms` completely blocks my endpoint for the specified number of milliseconds instead of asynchronously awaiting that time.
Considering the high response times of some OpenAI models, it is not unlikely that someone increases that value to avoid useless excess traffic every second.
I include system info below, but this issue is also present in the current LangChain repo.
System Information
------------------
> OS: Linux
> OS Version: #41~22.04.2-Ubuntu SMP PREEMPT_DYNAMIC Mon Jun 3 11:32:55 UTC 2
> Python Version: 3.11.6 (main, Jul 5 2024, 16:48:21) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.2.9
> langchain: 0.2.1
> langchain_community: 0.2.1
> langsmith: 0.1.81
> langchain_cli: 0.0.25
> langchain_text_splitters: 0.2.1
> langserve: 0.2.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph | Openai Assistant async _await_for_run method is not really async | https://api.github.com/repos/langchain-ai/langchain/issues/24194/comments | 4 | 2024-07-12T18:05:47Z | 2024-07-17T19:06:05Z | https://github.com/langchain-ai/langchain/issues/24194 | 2,406,104,929 | 24,194 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```typescript
// Imports inferred for a runnable repro; paths are approximate for LangChain JS v0.2.
import { PDFLoader } from "@langchain/community/document_loaders/fs/pdf";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { ChatOllama } from "@langchain/community/chat_models/ollama";
import { OllamaEmbeddings } from "@langchain/community/embeddings/ollama";
import { ElasticVectorSearch } from "@langchain/community/vectorstores/elasticsearch";
import { pull } from "langchain/hub";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { RunnablePassthrough, RunnableSequence } from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { formatDocumentsAsString } from "langchain/util/document";
import { AIMessage, HumanMessage } from "@langchain/core/messages";

const loader = new PDFLoader("./sample-docs/layout-parser-paper-fast.pdf");
const docs = await loader.load();
const textSplitter = new RecursiveCharacterTextSplitter({
chunkSize: 1000,
chunkOverlap: 200,
});
const splits = await textSplitter.splitDocuments(docs);
const model = new ChatOllama({
model: 'mistral',
temperature: 0,
baseUrl: 'http://localhost:11433',
useMMap: true,
});
const embeddings = new OllamaEmbeddings({model:"mxbai-embed-large", baseUrl: 'http://localhost:11434', onFailedAttempt: e => {throw e}, requestOptions: {
useMMap: false,
}});
const vectorstore = await ElasticVectorSearch.fromDocuments(
splits,
embeddings,
clientArgs,
);
const retriever = vectorstore.asRetriever();
const prompt = await pull<ChatPromptTemplate>("rlm/rag-prompt");
const ragChainFromDocs = RunnableSequence.from([
{
context: retriever.pipe(formatDocumentsAsString),
question: new RunnablePassthrough(),
},
prompt,
model,
new StringOutputParser(),
]);
const stream = await ragChainFromDocs.stream(
messages.map(message =>
message.role == 'user'
? new HumanMessage(message.content)
: new AIMessage(message.content),
),
)
```
### Error Message and Stack Trace (if applicable)
DEBUG [update_slots] slot released | n_cache_tokens=211 n_ctx=512 n_past=211 n_system_tokens=0 slot_id=0 task_id=217 tid="139849943545728" timestamp=1720804402 truncated=false
DEBUG [log_server_request] request | method="POST" params={} path="/embedding" remote_addr="127.0.0.1" remote_port=36972 status=200 tid="139849797445184" timestamp=1720804402
[GIN] 2024/07/12 - 14:13:22 | 200 | 1.418235476s | 127.0.0.1 | POST "/api/embeddings"
time=2024-07-12T14:13:22.804-03:00 level=DEBUG source=sched.go:348 msg="context for request finished"
time=2024-07-12T14:13:22.804-03:00 level=DEBUG source=sched.go:281 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/home/rafheros/.ollama/models/blobs/sha256-819c2adf5ce6df2b6bd2ae4ca90d2a69f060afeb438d0c171db57daa02e39c3d duration=5m0s
time=2024-07-12T14:13:22.804-03:00 level=DEBUG source=sched.go:299 msg="after processing request finished event" modelPath=/home/rafheros/.ollama/models/blobs/sha256-819c2adf5ce6df2b6bd2ae4ca90d2a69f060afeb438d0c171db57daa02e39c3d refCount=0
time=2024-07-12T14:13:22.808-03:00 level=DEBUG source=sched.go:507 msg="evaluating already loaded" model=/home/rafheros/.ollama/models/blobs/sha256-819c2adf5ce6df2b6bd2ae4ca90d2a69f060afeb438d0c171db57daa02e39c3d
DEBUG [process_single_task] slot data | n_idle_slots=1 n_processing_slots=0 task_id=220 tid="139849943545728" timestamp=1720804402
DEBUG [process_single_task] slot data | n_idle_slots=1 n_processing_slots=0 task_id=221 tid="139849943545728" timestamp=1720804402
DEBUG [launch_slot_with_data] slot is processing task | slot_id=0 task_id=222 tid="139849943545728" timestamp=1720804402
DEBUG [update_slots] kv cache rm [p0, end) | p0=0 slot_id=0 task_id=222 tid="139849943545728" timestamp=1720804402
DEBUG [update_slots] slot released | n_cache_tokens=189 n_ctx=512 n_past=189 n_system_tokens=0 slot_id=0 task_id=222 tid="139849943545728" timestamp=1720804404 truncated=false
DEBUG [log_server_request] request | method="POST" params={} path="/embedding" remote_addr="127.0.0.1" remote_port=36976 status=200 tid="139849789052480" timestamp=1720804404
[GIN] 2024/07/12 - 14:13:24 | 200 | 1.277078941s | 127.0.0.1 | POST "/api/embeddings"
time=2024-07-12T14:13:24.084-03:00 level=DEBUG source=sched.go:348 msg="context for request finished"
time=2024-07-12T14:13:24.084-03:00 level=DEBUG source=sched.go:281 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/home/rafheros/.ollama/models/blobs/sha256-819c2adf5ce6df2b6bd2ae4ca90d2a69f060afeb438d0c171db57daa02e39c3d duration=5m0s
time=2024-07-12T14:13:24.084-03:00 level=DEBUG source=sched.go:299 msg="after processing request finished event" modelPath=/home/rafheros/.ollama/models/blobs/sha256-819c2adf5ce6df2b6bd2ae4ca90d2a69f060afeb438d0c171db57daa02e39c3d refCount=0
time=2024-07-12T14:13:24.087-03:00 level=DEBUG source=sched.go:507 msg="evaluating already loaded" model=/home/rafheros/.ollama/models/blobs/sha256-819c2adf5ce6df2b6bd2ae4ca90d2a69f060afeb438d0c171db57daa02e39c3d
DEBUG [process_single_task] slot data | n_idle_slots=1 n_processing_slots=0 task_id=225 tid="139849943545728" timestamp=1720804404
DEBUG [process_single_task] slot data | n_idle_slots=1 n_processing_slots=0 task_id=226 tid="139849943545728" timestamp=1720804404
DEBUG [launch_slot_with_data] slot is processing task | slot_id=0 task_id=227 tid="139849943545728" timestamp=1720804404
DEBUG [update_slots] kv cache rm [p0, end) | p0=0 slot_id=0 task_id=227 tid="139849943545728" timestamp=1720804404
DEBUG [update_slots] slot released | n_cache_tokens=165 n_ctx=512 n_past=165 n_system_tokens=0 slot_id=0 task_id=227 tid="139849943545728" timestamp=1720804405 truncated=false
DEBUG [log_server_request] request | method="POST" params={} path="/embedding" remote_addr="127.0.0.1" remote_port=36976 status=200 tid="139849789052480" timestamp=1720804405
[GIN] 2024/07/12 - 14:13:25 | 200 | 1.116597159s | 127.0.0.1 | POST "/api/embeddings"
time=2024-07-12T14:13:25.203-03:00 level=DEBUG source=sched.go:348 msg="context for request finished"
time=2024-07-12T14:13:25.203-03:00 level=DEBUG source=sched.go:281 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/home/rafheros/.ollama/models/blobs/sha256-819c2adf5ce6df2b6bd2ae4ca90d2a69f060afeb438d0c171db57daa02e39c3d duration=5m0s
time=2024-07-12T14:13:25.203-03:00 level=DEBUG source=sched.go:299 msg="after processing request finished event" modelPath=/home/rafheros/.ollama/models/blobs/sha256-819c2adf5ce6df2b6bd2ae4ca90d2a69f060afeb438d0c171db57daa02e39c3d refCount=0
time=2024-07-12T14:13:25.206-03:00 level=DEBUG source=sched.go:507 msg="evaluating already loaded" model=/home/rafheros/.ollama/models/blobs/sha256-819c2adf5ce6df2b6bd2ae4ca90d2a69f060afeb438d0c171db57daa02e39c3d
DEBUG [process_single_task] slot data | n_idle_slots=1 n_processing_slots=0 task_id=230 tid="139849943545728" timestamp=1720804405
DEBUG [process_single_task] slot data | n_idle_slots=1 n_processing_slots=0 task_id=231 tid="139849943545728" timestamp=1720804405
DEBUG [launch_slot_with_data] slot is processing task | slot_id=0 task_id=232 tid="139849943545728" timestamp=1720804405
DEBUG [update_slots] kv cache rm [p0, end) | p0=0 slot_id=0 task_id=232 tid="139849943545728" timestamp=1720804405
DEBUG [update_slots] slot released | n_cache_tokens=202 n_ctx=512 n_past=202 n_system_tokens=0 slot_id=0 task_id=232 tid="139849943545728" timestamp=1720804406 truncated=false
DEBUG [log_server_request] request | method="POST" params={} path="/embedding" remote_addr="127.0.0.1" remote_port=36982 status=200 tid="139849780659776" timestamp=1720804406
[GIN] 2024/07/12 - 14:13:26 | 200 | 1.398312778s | 127.0.0.1 | POST "/api/embeddings"
time=2024-07-12T14:13:26.604-03:00 level=DEBUG source=sched.go:348 msg="context for request finished"
time=2024-07-12T14:13:26.604-03:00 level=DEBUG source=sched.go:281 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/home/rafheros/.ollama/models/blobs/sha256-819c2adf5ce6df2b6bd2ae4ca90d2a69f060afeb438d0c171db57daa02e39c3d duration=5m0s
time=2024-07-12T14:13:26.604-03:00 level=DEBUG source=sched.go:299 msg="after processing request finished event" modelPath=/home/rafheros/.ollama/models/blobs/sha256-819c2adf5ce6df2b6bd2ae4ca90d2a69f060afeb438d0c171db57daa02e39c3d refCount=0
time=2024-07-12T14:13:26.607-03:00 level=DEBUG source=sched.go:507 msg="evaluating already loaded" model=/home/rafheros/.ollama/models/blobs/sha256-819c2adf5ce6df2b6bd2ae4ca90d2a69f060afeb438d0c171db57daa02e39c3d
DEBUG [process_single_task] slot data | n_idle_slots=1 n_processing_slots=0 task_id=235 tid="139849943545728" timestamp=1720804406
DEBUG [process_single_task] slot data | n_idle_slots=1 n_processing_slots=0 task_id=236 tid="139849943545728" timestamp=1720804406
DEBUG [launch_slot_with_data] slot is processing task | slot_id=0 task_id=237 tid="139849943545728" timestamp=1720804406
DEBUG [update_slots] kv cache rm [p0, end) | p0=0 slot_id=0 task_id=237 tid="139849943545728" timestamp=1720804406
DEBUG [update_slots] slot released | n_cache_tokens=187 n_ctx=512 n_past=187 n_system_tokens=0 slot_id=0 task_id=237 tid="139849943545728" timestamp=1720804407 truncated=false
DEBUG [log_server_request] request | method="POST" params={} path="/embedding" remote_addr="127.0.0.1" remote_port=33576 status=200 tid="139849935148608" timestamp=1720804407
[GIN] 2024/07/12 - 14:13:27 | 200 | 1.235134467s | 127.0.0.1 | POST "/api/embeddings"
time=2024-07-12T14:13:27.842-03:00 level=DEBUG source=sched.go:348 msg="context for request finished"
time=2024-07-12T14:13:27.842-03:00 level=DEBUG source=sched.go:281 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/home/rafheros/.ollama/models/blobs/sha256-819c2adf5ce6df2b6bd2ae4ca90d2a69f060afeb438d0c171db57daa02e39c3d duration=5m0s
time=2024-07-12T14:13:27.842-03:00 level=DEBUG source=sched.go:299 msg="after processing request finished event" modelPath=/home/rafheros/.ollama/models/blobs/sha256-819c2adf5ce6df2b6bd2ae4ca90d2a69f060afeb438d0c171db57daa02e39c3d refCount=0
time=2024-07-12T14:13:27.846-03:00 level=DEBUG source=sched.go:507 msg="evaluating already loaded" model=/home/rafheros/.ollama/models/blobs/sha256-819c2adf5ce6df2b6bd2ae4ca90d2a69f060afeb438d0c171db57daa02e39c3d
DEBUG [process_single_task] slot data | n_idle_slots=1 n_processing_slots=0 task_id=240 tid="139849943545728" timestamp=1720804407
DEBUG [process_single_task] slot data | n_idle_slots=1 n_processing_slots=0 task_id=241 tid="139849943545728" timestamp=1720804407
DEBUG [launch_slot_with_data] slot is processing task | slot_id=0 task_id=242 tid="139849943545728" timestamp=1720804407
DEBUG [update_slots] kv cache rm [p0, end) | p0=0 slot_id=0 task_id=242 tid="139849943545728" timestamp=1720804407
DEBUG [update_slots] slot released | n_cache_tokens=205 n_ctx=512 n_past=205 n_system_tokens=0 slot_id=0 task_id=242 tid="139849943545728" timestamp=1720804409 truncated=false
DEBUG [log_server_request] request | method="POST" params={} path="/embedding" remote_addr="127.0.0.1" remote_port=33576 status=200 tid="139849935148608" timestamp=1720804409
[GIN] 2024/07/12 - 14:13:29 | 200 | 1.439000676s | 127.0.0.1 | POST "/api/embeddings"
time=2024-07-12T14:13:29.284-03:00 level=DEBUG source=sched.go:348 msg="context for request finished"
time=2024-07-12T14:13:29.284-03:00 level=DEBUG source=sched.go:281 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/home/rafheros/.ollama/models/blobs/sha256-819c2adf5ce6df2b6bd2ae4ca90d2a69f060afeb438d0c171db57daa02e39c3d duration=5m0s
time=2024-07-12T14:13:29.284-03:00 level=DEBUG source=sched.go:299 msg="after processing request finished event" modelPath=/home/rafheros/.ollama/models/blobs/sha256-819c2adf5ce6df2b6bd2ae4ca90d2a69f060afeb438d0c171db57daa02e39c3d refCount=0
time=2024-07-12T14:13:29.287-03:00 level=DEBUG source=sched.go:507 msg="evaluating already loaded" model=/home/rafheros/.ollama/models/blobs/sha256-819c2adf5ce6df2b6bd2ae4ca90d2a69f060afeb438d0c171db57daa02e39c3d
DEBUG [process_single_task] slot data | n_idle_slots=1 n_processing_slots=0 task_id=245 tid="139849943545728" timestamp=1720804409
DEBUG [process_single_task] slot data | n_idle_slots=1 n_processing_slots=0 task_id=246 tid="139849943545728" timestamp=1720804409
DEBUG [launch_slot_with_data] slot is processing task | slot_id=0 task_id=247 tid="139849943545728" timestamp=1720804409
DEBUG [update_slots] kv cache rm [p0, end) | p0=0 slot_id=0 task_id=247 tid="139849943545728" timestamp=1720804409
DEBUG [update_slots] slot released | n_cache_tokens=202 n_ctx=512 n_past=202 n_system_tokens=0 slot_id=0 task_id=247 tid="139849943545728" timestamp=1720804410 truncated=false
DEBUG [log_server_request] request | method="POST" params={} path="/embedding" remote_addr="127.0.0.1" remote_port=33590 status=200 tid="139849918363200" timestamp=1720804410
[GIN] 2024/07/12 - 14:13:30 | 200 | 1.358210814s | 127.0.0.1 | POST "/api/embeddings"
time=2024-07-12T14:13:30.645-03:00 level=DEBUG source=sched.go:348 msg="context for request finished"
time=2024-07-12T14:13:30.645-03:00 level=DEBUG source=sched.go:281 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/home/rafheros/.ollama/models/blobs/sha256-819c2adf5ce6df2b6bd2ae4ca90d2a69f060afeb438d0c171db57daa02e39c3d duration=5m0s
time=2024-07-12T14:13:30.645-03:00 level=DEBUG source=sched.go:299 msg="after processing request finished event" modelPath=/home/rafheros/.ollama/models/blobs/sha256-819c2adf5ce6df2b6bd2ae4ca90d2a69f060afeb438d0c171db57daa02e39c3d refCount=0
[GIN] 2024/07/12 - 14:13:33 | 400 | 65.664µs | 127.0.0.1 | POST "/api/embeddings"
### Description
I'm trying to embed split PDF documents into a vector store, but OllamaEmbeddings always returns 400 Bad Request on its final request. This is strange behavior: counting the requests, there is always one extra final request that returns this status even when all the others return 200.
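For anyone debugging this: one possible cause — an assumption, not confirmed by the logs above — is that one of the PDF splits is empty or whitespace-only, and Ollama's `/api/embeddings` rejects that single request with 400 while all the non-empty chunks succeed. The issue uses LangChain.js, but the check is language-agnostic; sketched here in Python:

```python
# Hypothesis (not confirmed by the logs): an empty or whitespace-only chunk
# among the splits is rejected by Ollama's /api/embeddings with 400.
# Dropping such chunks before embedding rules this cause in or out.

def drop_empty_chunks(chunks: list[str]) -> list[str]:
    """Return only the chunks that contain non-whitespace text."""
    return [c for c in chunks if c and c.strip()]

# Example: three PDF splits, one of which is blank
splits = ["First page text.", "   ", "Second page text."]
clean = drop_empty_chunks(splits)
# clean == ["First page text.", "Second page text."]
```

If the count of clean chunks matches the number of 200 responses, the trailing 400 very likely corresponds to a blank split.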
### System Info
langchain v0.2.18
npm 20
wsl
next.js | OllamaEmbeddings returns error 400 Bad Request when embedding documents | https://api.github.com/repos/langchain-ai/langchain/issues/24190/comments | 1 | 2024-07-12T17:31:07Z | 2024-07-30T11:24:42Z | https://github.com/langchain-ai/langchain/issues/24190 | 2,406,041,713 | 24,190 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/how_to/structured_output/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
From:
https://python.langchain.com/v0.2/docs/how_to/structured_output/#the-with_structured_output-method
the snippet below:
```python
json_schema = {
    "title": "joke",
    "description": "Joke to tell user.",
    "type": "object",
    "properties": {
        "setup": {
            "type": "string",
            "description": "The setup of the joke",
        },
        "punchline": {
            "type": "string",
            "description": "The punchline to the joke",
        },
        "rating": {
            "type": "integer",
            "description": "How funny the joke is, from 1 to 10",
        },
    },
    "required": ["setup", "punchline"],
}
structured_llm = llm.with_structured_output(json_schema)
structured_llm.invoke("Tell me a joke about cats")
```
This returns JSON with single quotes, which causes issues for further processing. I'm using OpenAI's API, but I don't think the model is the issue: when prompted without the with_structured_output() method, it returns JSON with double quotes, but with preceding text and a ```json fence. So is there a way to get JSON output with double quotes, without the preceding text and ```json fence?
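The single quotes are not coming from the model: `with_structured_output` with a dict schema returns a Python `dict`, and printing a dict shows Python's repr, which uses single quotes. Serializing the result with the standard library's `json.dumps` yields valid double-quoted JSON (the joke values below are illustrative):

```python
import json

# with_structured_output returns a Python dict; printing it shows single
# quotes because that is Python's repr, not invalid JSON from the model.
result = {
    "setup": "Why was the cat on the computer?",
    "punchline": "To keep an eye on the mouse.",
}
print(result)              # -> {'setup': ...}  (Python repr, single quotes)
print(json.dumps(result))  # -> {"setup": ...}  (valid JSON, double quotes)
```

Any downstream consumer that needs strict JSON should receive the `json.dumps` output rather than the printed repr.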
### Idea or request for content:
Can the with_structured_output method be made to return JSON with double quotes, without any preceding text or ```json fence? | DOC: <Issue related to /v0.2/docs/how_to/structured_output/> | https://api.github.com/repos/langchain-ai/langchain/issues/24183/comments | 0 | 2024-07-12T13:53:49Z | 2024-07-12T13:56:28Z | https://github.com/langchain-ai/langchain/issues/24183 | 2,405,669,156 | 24,183
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel
from langchain_experimental.llms.ollama_functions import OllamaFunctions
class Schema(BaseModel): pass
prompt = ChatPromptTemplate.from_messages([("human", [{"image_url": "data:image/jpeg;base64,{image_url}"}])])
model = OllamaFunctions()
structured_llm = prompt | model.with_structured_output(schema=Schema)
structured_llm.invoke(dict(image_url=''))
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/Users/xyz/workspace/xyz/extraction/scratch_6.py", line 14, in <module>
structured_llm.invoke(dict(image_url=''))
File "/Users/xyz/Library/Caches/pypoetry/virtualenvs/extraction-HCVuYDLA-py3.12/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 2576, in invoke
input = step.invoke(input, config)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xyz/Library/Caches/pypoetry/virtualenvs/extraction-HCVuYDLA-py3.12/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 4657, in invoke
return self.bound.invoke(
^^^^^^^^^^^^^^^^^^
File "/Users/xyz/Library/Caches/pypoetry/virtualenvs/extraction-HCVuYDLA-py3.12/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 265, in invoke
self.generate_prompt(
File "/Users/xyz/Library/Caches/pypoetry/virtualenvs/extraction-HCVuYDLA-py3.12/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 698, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xyz/Library/Caches/pypoetry/virtualenvs/extraction-HCVuYDLA-py3.12/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 555, in generate
raise e
File "/Users/xyz/Library/Caches/pypoetry/virtualenvs/extraction-HCVuYDLA-py3.12/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 545, in generate
self._generate_with_cache(
File "/Users/xyz/Library/Caches/pypoetry/virtualenvs/extraction-HCVuYDLA-py3.12/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 770, in _generate_with_cache
result = self._generate(
^^^^^^^^^^^^^^^
File "/Users/xyz/Library/Caches/pypoetry/virtualenvs/extraction-HCVuYDLA-py3.12/lib/python3.12/site-packages/langchain_experimental/llms/ollama_functions.py", line 363, in _generate
response_message = super()._generate(
^^^^^^^^^^^^^^^^^^
File "/Users/xyz/Library/Caches/pypoetry/virtualenvs/extraction-HCVuYDLA-py3.12/lib/python3.12/site-packages/langchain_community/chat_models/ollama.py", line 286, in _generate
final_chunk = self._chat_stream_with_aggregation(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xyz/Library/Caches/pypoetry/virtualenvs/extraction-HCVuYDLA-py3.12/lib/python3.12/site-packages/langchain_community/chat_models/ollama.py", line 217, in _chat_stream_with_aggregation
for stream_resp in self._create_chat_stream(messages, stop, **kwargs):
File "/Users/xyz/Library/Caches/pypoetry/virtualenvs/extraction-HCVuYDLA-py3.12/lib/python3.12/site-packages/langchain_community/chat_models/ollama.py", line 187, in _create_chat_stream
"messages": self._convert_messages_to_ollama_messages(messages),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xyz/Library/Caches/pypoetry/virtualenvs/extraction-HCVuYDLA-py3.12/lib/python3.12/site-packages/langchain_experimental/llms/ollama_functions.py", line 315, in _convert_messages_to_ollama_messages
raise ValueError(
ValueError: Only string image_url content parts are supported.
```
### Description
I'm using LangChain to extract structured output from a base64-encoded image using multimodal models running on Ollama.
When running the example code, we get an error because `OllamaFunctions` does not support the provided message format.
If we replace the ollama `model` with an Azure GPT-4o model instead, we do not receive the error. i.e.
```python
model = AzureChatOpenAI(api_key='sk-1234',
openai_api_version="2023-12-01-preview",
azure_endpoint="https://language.openai.azure.com/")
structured_llm = prompt | model.with_structured_output(schema=Schema)
structured_llm.invoke(dict(image_url=''))
```
works as expected.
The prompt message is eventually [converted](https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/prompts/chat.py#L538) into an `ImagePromptTemplate`, which in turn constructs the unsupported dict structure.
It appears that [`ChatOllama._convert_messages_to_ollama_messages`](https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/ollama.py#L142) is trying to cope with the different formats. While the overwritten [`OllamaFunction._convert_messages_to_ollama_messages`](https://github.com/langchain-ai/langchain/blob/master/libs/experimental/langchain_experimental/llms/ollama_functions.py#L306) does not.
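A minimal sketch of the normalization that the base `ChatOllama` converter performs but the `OllamaFunctions` override skips — accepting both the plain-string and the `{"url": ...}` dict forms of an `image_url` content part. The function name is illustrative, not LangChain API:

```python
def normalize_image_url(part: dict) -> str:
    """Accept both content-part shapes and return the raw URL string.

    {"type": "image_url", "image_url": "data:..."}           -> "data:..."
    {"type": "image_url", "image_url": {"url": "data:..."}}  -> "data:..."
    """
    image_url = part["image_url"]
    if isinstance(image_url, str):
        return image_url
    if isinstance(image_url, dict) and isinstance(image_url.get("url"), str):
        return image_url["url"]
    raise ValueError(
        "Only string and {'url': str} image_url content parts are supported."
    )
```

`ImagePromptTemplate` produces the dict form, which is why the stricter string-only check in `OllamaFunctions._convert_messages_to_ollama_messages` raises.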
### System Info
```
$ python -m langchain_core.sys_info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:09:52 PDT 2024; root:xnu-10063.121.3~5/RELEASE_X86_64
> Python Version: 3.12.4 (main, Jun 6 2024, 18:26:44) [Clang 15.0.0 (clang-1500.3.9.4)]
Package Information
-------------------
> langchain_core: 0.2.16
> langchain: 0.2.7
> langchain_community: 0.2.7
> langsmith: 0.1.85
> langchain_experimental: 0.0.62
> langchain_openai: 0.1.15
> langchain_text_splitters: 0.2.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
``` | OllamaFunctions incompatible with ImagePromptTemplate | https://api.github.com/repos/langchain-ai/langchain/issues/24174/comments | 0 | 2024-07-12T08:00:09Z | 2024-07-12T08:02:48Z | https://github.com/langchain-ai/langchain/issues/24174 | 2,404,991,555 | 24,174 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
def create_selector():
try:
vectorstore = Chroma()
vectorstore.delete_collection()
selector = SemanticSimilarityExampleSelector.from_examples(
examples,
llm_embeddings,
vectorstore,
k=1,
input_keys=["input"],
)
return selector
except Exception as e:
logger.error(e)
return None
```
### Error Message and Stack Trace (if applicable)
'Collection' object has no attribute 'model_fields'
### Description
I'm trying to use the Chroma vector store in LangChain and receive the error above. The error appears when calling the `Chroma()` constructor.
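`model_fields` is a Pydantic v2 attribute, so this error usually points at a version mismatch between `chromadb`, `pydantic`, and `langchain-chroma` — an assumption here, but one worth ruling out first. A stdlib-only helper to report the installed versions:

```python
from importlib import metadata
from typing import Optional

def installed_version(package: str) -> Optional[str]:
    """Return the installed version of a distribution, or None if absent."""
    try:
        return metadata.version(package)
    except metadata.PackageNotFoundError:
        return None

# Print the versions most relevant to this error for a bug report
for pkg in ("chromadb", "langchain-chroma", "pydantic"):
    print(pkg, installed_version(pkg) or "not installed")
```

Including this output in the report makes it much easier to reproduce the mismatch.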
### System Info
OS: Ubuntu
OS Version: Ubuntu 22.04
Python Version: 3.10.12
### Packages
langchain==0.2.5
langchain-chroma==0.1.2
langchain_community==0.2.5
langchain-openai==0.1.8 | AttributeError: 'Collection' object has no attribute 'model_fields' | https://api.github.com/repos/langchain-ai/langchain/issues/24163/comments | 19 | 2024-07-12T02:41:35Z | 2024-08-02T07:25:40Z | https://github.com/langchain-ai/langchain/issues/24163 | 2,404,531,016 | 24,163 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
NA
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I'm new to open source projects and submitted my first pull request (https://github.com/langchain-ai/langchain/pull/23628) two weeks ago. Initially, it reported some linting errors, but I fixed them, and the pull request was approved. However, it has been stuck at this stage for more than two weeks. I tried updating the branch and rerunning the workflows, but the same issue persists. Could you please advise on what the problem might be?
Thank you!

### System Info
NA | unable to merge approved pull request | https://api.github.com/repos/langchain-ai/langchain/issues/24154/comments | 1 | 2024-07-11T22:33:58Z | 2024-07-12T15:02:26Z | https://github.com/langchain-ai/langchain/issues/24154 | 2,404,272,511 | 24,154 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/text_embedding/google_generative_ai/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The documentation states that I can use `GoogleGenerativeAIEmbeddings` from `langchain-google-genai`, but I got an error that it cannot be imported from the library.
Link to the documentation page:
https://python.langchain.com/v0.2/docs/integrations/text_embedding/google_generative_ai/
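For reference, `GoogleGenerativeAIEmbeddings` ships in the separate `langchain-google-genai` package, so `pip install langchain-google-genai` (as the doc page shows) must have succeeded in the same environment. A small guarded-import helper — the `require` function is illustrative, not LangChain API — makes the failure mode explicit:

```python
import importlib

def require(module: str, install_hint: str):
    """Import `module`, raising an ImportError that includes an install hint."""
    try:
        return importlib.import_module(module)
    except ImportError as exc:
        raise ImportError(
            f"Could not import {module!r}. Try: {install_hint}"
        ) from exc

# Usage (the embeddings class lives in the separate integration package):
# genai = require("langchain_google_genai", "pip install -U langchain-google-genai")
# GoogleGenerativeAIEmbeddings = genai.GoogleGenerativeAIEmbeddings
```

If the guarded import still fails, the package is most likely installed into a different interpreter than the one running the script.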
### Idea or request for content:
_No response_ | DOC: <Issue related to /v0.2/docs/integrations/text_embedding/google_generative_ai/> | https://api.github.com/repos/langchain-ai/langchain/issues/24148/comments | 1 | 2024-07-11T21:07:41Z | 2024-07-13T09:46:47Z | https://github.com/langchain-ai/langchain/issues/24148 | 2,404,128,451 | 24,148 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_aws import ChatBedrock
from pydantic import BaseModel, Field
class Joke(BaseModel):
setup: str = Field(description="The setup of the joke")
punchline: str = Field(description="The punchline to the joke")
chat = ChatBedrock(
model_id="anthropic.claude-3-haiku-20240307-v1:0",
model_kwargs={"temperature": 0.1},
region_name="my-region-name",
credentials_profile_name="my-profile-name",
streaming=True,
).bind_tools([Joke])
chat.invoke(""tell me a joke")```
### Error Message and Stack Trace (if applicable)
```shell
---------------------------------------------------------------------------
RecursionError                            Traceback (most recent call last)
Cell In[22], line 1
----> 1 chain.invoke("tell me a joke")

File ~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py:4653, in RunnableBindingBase.invoke(self, input, config, **kwargs)
   4647 def invoke(
   4648     self,
   4649     input: Input,
   4650     config: Optional[RunnableConfig] = None,
   4651     **kwargs: Optional[Any],
   4652 ) -> Output:
-> 4653     return self.bound.invoke(
   4654         input,
   4655         self._merge_configs(config),
   4656         **{**self.kwargs, **kwargs},
   4657     )

File ~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py:265, in BaseChatModel.invoke(self, input, config, stop, **kwargs)
    254 def invoke(
    255     self,
    256     input: LanguageModelInput,
   (...)
    260     **kwargs: Any,
    261 ) -> BaseMessage:
    262     config = ensure_config(config)
    263     return cast(
    264         ChatGeneration,
--> 265         self.generate_prompt(
    266             [self._convert_input(input)],
    267             stop=stop,
    268             callbacks=config.get("callbacks"),
    269             tags=config.get("tags"),
    270             metadata=config.get("metadata"),
    271             run_name=config.get("run_name"),
    272             run_id=config.pop("run_id", None),
    273             **kwargs,
    274         ).generations[0][0],
    275     ).message

File ~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py:698, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
    690 def generate_prompt(
    691     self,
    692     prompts: List[PromptValue],
   (...)
    695     **kwargs: Any,
    696 ) -> LLMResult:
    697     prompt_messages = [p.to_messages() for p in prompts]
--> 698     return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)

File ~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py:555, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
    553 if run_managers:
    554     run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 555     raise e
    556 flattened_outputs = [
    557     LLMResult(generations=[res.generations], llm_output=res.llm_output)  # type: ignore[list-item]
    558     for res in results
    559 ]
    560 llm_output = self._combine_llm_outputs([res.llm_output for res in results])

File ~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py:545, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
    542 for i, m in enumerate(messages):
    543     try:
    544         results.append(
--> 545             self._generate_with_cache(
    546                 m,
    547                 stop=stop,
    548                 run_manager=run_managers[i] if run_managers else None,
    549                 **kwargs,
    550             )
    551         )
    552     except BaseException as e:
    553         if run_managers:

File ~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py:770, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
    768 else:
    769     if inspect.signature(self._generate).parameters.get("run_manager"):
--> 770         result = self._generate(
    771             messages, stop=stop, run_manager=run_manager, **kwargs
    772         )
    773     else:
    774         result = self._generate(messages, stop=stop, **kwargs)
File ~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:521, in ChatBedrock._generate(self, messages, stop, run_manager, **kwargs)
[519](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:519) if self.streaming:
[520](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:520) response_metadata: List[Dict[str, Any]] = []
--> [521](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:521) for chunk in self._stream(messages, stop, run_manager, **kwargs):
[522](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:522) completion += chunk.text
[523](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:523) response_metadata.append(chunk.message.response_metadata)
File ~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:442, in ChatBedrock._stream(self, messages, stop, run_manager, **kwargs)
[440](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:440) if "claude-3" in self._get_model():
[441](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:441) if _tools_in_params({**kwargs}):
--> [442](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:442) result = self._generate(
[443](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:443) messages, stop=stop, run_manager=run_manager, **kwargs
[444](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:444) )
[445](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:445) message = result.generations[0].message
[446](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:446) if isinstance(message, AIMessage) and message.tool_calls is not None:
File ~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:521, in ChatBedrock._generate(self, messages, stop, run_manager, **kwargs)
[519](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:519) if self.streaming:
[520](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:520) response_metadata: List[Dict[str, Any]] = []
--> [521](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:521) for chunk in self._stream(messages, stop, run_manager, **kwargs):
[522](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:522) completion += chunk.text
[523](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:523) response_metadata.append(chunk.message.response_metadata)
File ~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:442, in ChatBedrock._stream(self, messages, stop, run_manager, **kwargs)
[440](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:440) if "claude-3" in self._get_model():
[441](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:441) if _tools_in_params({**kwargs}):
--> [442](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:442) result = self._generate(
[443](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:443) messages, stop=stop, run_manager=run_manager, **kwargs
[444](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:444) )
[445](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:445) message = result.generations[0].message
[446](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:446) if isinstance(message, AIMessage) and message.tool_calls is not None:
[... skipping similar frames: ChatBedrock._generate at line 521 (734 times), ChatBedrock._stream at line 442 (734 times)]
File ~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:521, in ChatBedrock._generate(self, messages, stop, run_manager, **kwargs)
[519](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:519) if self.streaming:
[520](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:520) response_metadata: List[Dict[str, Any]] = []
--> [521](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:521) for chunk in self._stream(messages, stop, run_manager, **kwargs):
[522](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:522) completion += chunk.text
[523](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:523) response_metadata.append(chunk.message.response_metadata)
File ~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:442, in ChatBedrock._stream(self, messages, stop, run_manager, **kwargs)
[440](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:440) if "claude-3" in self._get_model():
[441](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:441) if _tools_in_params({**kwargs}):
--> [442](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:442) result = self._generate(
[443](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:443) messages, stop=stop, run_manager=run_manager, **kwargs
[444](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:444) )
[445](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:445) message = result.generations[0].message
[446](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:446) if isinstance(message, AIMessage) and message.tool_calls is not None:
File ~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:517, in ChatBedrock._generate(self, messages, stop, run_manager, **kwargs)
[514](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:514) llm_output: Dict[str, Any] = {}
[515](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:515) tool_calls: List[Dict[str, Any]] = []
[516](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:516) provider_stop_reason_code = self.provider_stop_reason_key_map.get(
--> [517](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:517) self._get_provider(), "stop_reason"
[518](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:518) )
[519](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:519) if self.streaming:
[520](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:520) response_metadata: List[Dict[str, Any]] = []
File ~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/llms/bedrock.py:585, in BedrockBase._get_provider(self)
[583](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/llms/bedrock.py:583) if self.provider:
[584](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/llms/bedrock.py:584) return self.provider
--> [585](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/llms/bedrock.py:585) if self.model_id.startswith("arn"):
[586](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/llms/bedrock.py:586) raise ValueError(
[587](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/llms/bedrock.py:587) "Model provider should be supplied when passing a model ARN as "
[588](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/llms/bedrock.py:588) "model_id"
[589](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/llms/bedrock.py:589) )
[591](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/llms/bedrock.py:591) return self.model_id.split(".")[0]
RecursionError: maximum recursion depth exceeded while calling a Python object```
### Description
- I am trying to stream with tool calling (recently added by Anthropic).
- Setting `streaming = False` works.
- Setting `streaming = True` I get recursion error.
- The same setup works when using the `ChatAnthropic` class.
### System Info
Python 3.12.1
langchain-anthropic==0.1.19
langchain-aws==0.1.10
langchain-core==0.2.13
langchain-openai==0.1.15
langchain-qdrant==0.1.1

issue_title: `RecursionError` in `ChatBedrock` with Anthropic model, tool calling and streaming
issue_comments_url: https://api.github.com/repos/langchain-ai/langchain/issues/24136/comments
issue_comments_count: 2
issue_created_at: 2024-07-11T17:44:05Z
issue_updated_at: 2024-07-25T08:58:48Z
issue_html_url: https://github.com/langchain-ai/langchain/issues/24136
issue_github_id: 2403723399
issue_number: 24136

issue_owner_repo: ["langchain-ai", "langchain"]

### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.llms import QianfanLLMEndpoint

QianfanLLMEndpoint(qianfan_ak="xxx", qianfan_sk="xxx")
```
### Error Message and Stack Trace (if applicable)
```
pydantic.v1.error_wrappers.ValidationError: 2 validation errors for QianfanLLMEndpoint
qianfan_ak
  str type expected (type=type_error.str)
qianfan_sk
```
### Description
Passing `qianfan_ak` / `qianfan_sk` fails the pydantic validation with a `SecretStr` vs `str` type mismatch (`str type expected`).
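
For reference, pydantic v1 normally coerces a plain `str` into a `SecretStr` field, so passing string keys would be expected to validate. A minimal illustration of that expected behavior (an illustrative model, not the actual `QianfanLLMEndpoint` definition):

```python
from pydantic.v1 import BaseModel, SecretStr


class Creds(BaseModel):
    # a plain str is normally coerced into SecretStr by pydantic v1
    ak: SecretStr
    sk: SecretStr


c = Creds(ak="xxx", sk="yyy")
assert isinstance(c.ak, SecretStr)
assert c.ak.get_secret_value() == "xxx"
```

The reported error therefore points at a field being declared as `str` somewhere in the validation path while a `SecretStr` (or vice versa) is supplied, rather than at the normal coercion above.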
### System Info
langchain==0.2.7
langchain-community==0.2.7
langchain-core==0.2.13
langchain-mongodb==0.1.6
langchain-openai==0.1.15
langchain-text-splitters==0.2.2
python 3.11
mac m3

issue_title: QianfanLLMEndpoint ak/sk SecretStr ERROR
issue_comments_url: https://api.github.com/repos/langchain-ai/langchain/issues/24126/comments
issue_comments_count: 4
issue_created_at: 2024-07-11T15:30:30Z
issue_updated_at: 2024-07-26T01:45:07Z
issue_html_url: https://github.com/langchain-ai/langchain/issues/24126
issue_github_id: 2403489721
issue_number: 24126

issue_owner_repo: ["langchain-ai", "langchain"]

### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os
from langchain_anthropic import ChatAnthropic
from langchain_core.runnables import ConfigurableField, RunnableConfig
from pydantic.v1 import SecretStr
client = ChatAnthropic(
base_url=os.environ['CHAT_ANTHROPIC_BASE_URL'],
api_key=SecretStr(os.environ['CHAT_ANTHROPIC_API_KEY']),
model_name='claude-3-opus-20240229',
).configurable_fields(
model_kwargs=ConfigurableField(
id="model_kwargs",
name="Model Kwargs",
description="Keyword arguments to pass through to the chat client (e.g. user)",
),
)
configurable = {
"model_kwargs": {"metadata": {"user_id": "testuserid"}}
}
response = client.invoke("Write me a short story", config=RunnableConfig(configurable=configurable))
print(response)
```
### Error Message and Stack Trace (if applicable)
Exception: `ValidationError`
```
Traceback (most recent call last):
File "main.py", line 32, in <module>
response = client.invoke("Write me a short story", config=RunnableConfig(configurable=configurable))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/langchain_core/runnables/configurable.py", line 115, in invoke
runnable, config = self.prepare(config)
^^^^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/langchain_core/runnables/configurable.py", line 104, in prepare
runnable, config = runnable._prepare(merge_configs(runnable.config, config))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/langchain_core/runnables/configurable.py", line 415, in _prepare
self.default.__class__(**{**init_params, **configurable}),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/pydantic/v1/main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for ChatAnthropic
__root__
Found metadata supplied twice. (type=value_error)
```
### Description
- I'm trying to set up a reusable chat model where I can pass in a user on each invocation
- Anthropic expects this via a `metadata` object on the `messages.create(...)` call, as described in the Anthropic API documentation
- Since it is an extra argument to the `create()` call, I believe I should be able to pass it via `model_kwargs`
- But it seems to clash with something else (I'm guessing the `metadata` field of `BaseLanguageModel`)
Is there a way around this so that I can pass the `metadata` kwarg to the `create()` call as expected? At a glance, since it's nested under `model_kwargs`, it shouldn't clash with other params. Are they being flattened, and if so, why?
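
The clash appears to come from a duplicate-key check run when the model is re-constructed on the configurable path: keys inside `model_kwargs` are compared against the top-level field names, and `metadata` already exists as a field on the chat model (LangChain's run metadata). A hypothetical simplification of that check (names are illustrative, not the real implementation):

```python
from typing import Any, Dict


def build_extra(values: Dict[str, Any]) -> Dict[str, Any]:
    # simplified sketch of the "supplied twice" guard: any key that
    # appears both as a top-level field and inside model_kwargs raises
    extra = values.get("model_kwargs", {})
    for field_name in list(values):
        if field_name in extra:
            raise ValueError(f"Found {field_name} supplied twice.")
    return values


# `metadata` is a top-level model field, so placing it inside
# model_kwargs trips the guard during reconstruction:
try:
    build_extra({
        "metadata": None,
        "model_kwargs": {"metadata": {"user_id": "testuserid"}},
    })
except ValueError as exc:
    print(exc)  # Found metadata supplied twice.
```

A kwarg name that does not collide with a model field (e.g. a made-up `"user_tag"`) passes the same check, which is consistent with `model_kwargs` entries being merged with, not isolated from, the constructor parameters.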
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:14:38 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6020
> Python Version: 3.12.3 (main, Jul 2 2024, 11:16:56) [Clang 15.0.0 (clang-1500.3.9.4)]
Package Information
-------------------
> langchain_core: 0.2.13
> langchain: 0.2.7
> langchain_community: 0.2.7
> langsmith: 0.1.85
> langchain_anthropic: 0.1.19
> langchain_openai: 0.1.15
> langchain_text_splitters: 0.2.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve

issue_title: ChatAnthropic - Found metadata supplied twice
issue_comments_url: https://api.github.com/repos/langchain-ai/langchain/issues/24121/comments
issue_comments_count: 2
issue_created_at: 2024-07-11T14:03:02Z
issue_updated_at: 2024-07-12T12:54:48Z
issue_html_url: https://github.com/langchain-ai/langchain/issues/24121
issue_github_id: 2403275400
issue_number: 24121