| issue_owner_repo | issue_body | issue_title | issue_comments_url | issue_comments_count | issue_created_at | issue_updated_at | issue_html_url | issue_github_id | issue_number |
|---|---|---|---|---|---|---|---|---|---|
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
Cell In[68], line 2
1 # Build graph
----> 2 from langgraph.graph import END, StateGraph
4 workflow = StateGraph(GraphState)
5 # Define the nodes
File ~/Desktop/Dev/AI-Agent/ai-agent-env/lib/python3.9/site-packages/langgraph/graph/__init__.py:1
----> 1 from langgraph.graph.graph import END, START, Graph
2 from langgraph.graph.message import MessageGraph, MessagesState, add_messages
3 from langgraph.graph.state import StateGraph
File ~/Desktop/Dev/AI-Agent/ai-agent-env/lib/python3.9/site-packages/langgraph/graph/graph.py:31
29 from langgraph.constants import END, START, TAG_HIDDEN, Send
30 from langgraph.errors import InvalidUpdateError
---> 31 from langgraph.pregel import Channel, Pregel
32 from langgraph.pregel.read import PregelNode
33 from langgraph.pregel.types import All
File ~/Desktop/Dev/AI-Agent/ai-agent-env/lib/python3.9/site-packages/langgraph/pregel/__init__.py:46
36 from langchain_core.runnables.base import Input, Output, coerce_to_runnable
37 from langchain_core.runnables.config import (
38 RunnableConfig,
39 ensure_config,
(...)
44 patch_config,
45 )
---> 46 from langchain_core.runnables.utils import (
47 ConfigurableFieldSpec,
48 create_model,
49 get_unique_config_specs,
50 )
51 from langchain_core.tracers._streaming import _StreamingCallbackHandler
52 from typing_extensions import Self
ImportError: cannot import name 'create_model' from 'langchain_core.runnables.utils' (/Users/UserName/Desktop/Dev/AI-Agent/ai-agent-env/lib/python3.9/site-packages/langchain_core/runnables/utils.py)
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I was trying to import StateGraph from langgraph.graph, and it kept raising an ImportError saying 'create_model' is not available in langchain_core.runnables.utils.
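A quick, dependency-light way to check whether an installed module actually exposes the name an import is failing on (a diagnostic sketch, not part of the original report; the `module_has_attr` helper is hypothetical):

```
import importlib

def module_has_attr(module_name: str, attr: str) -> bool:
    """Return True if `attr` can be imported from `module_name`."""
    try:
        mod = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(mod, attr)

# Demonstrated on the stdlib so the check itself is verifiable anywhere:
print(module_has_attr("json", "dumps"))          # True
print(module_has_attr("json", "create_model"))   # False
```

Running `module_has_attr("langchain_core.runnables.utils", "create_model")` in the failing environment would confirm the mismatch; the usual remedy is upgrading `langchain-core` so it matches what the installed `langgraph` build expects.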
### System Info
langchain==0.2.5
langchain-community==0.0.13
langchain-core==0.2.7
langchain-text-splitters==0.2.1 | Cannot import name 'create_model' from 'langchain_core.runnables.utils' | https://api.github.com/repos/langchain-ai/langchain/issues/22956/comments | 4 | 2024-06-16T12:18:50Z | 2024-07-01T14:21:23Z | https://github.com/langchain-ai/langchain/issues/22956 | 2,355,726,106 | 22,956 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Directly from the documentation:
```
URI = "./milvus_demo.db"
vector_db = Milvus.from_documents(
docs,
embeddings,
connection_args={"uri": URI},
)
```
### Error Message and Stack Trace (if applicable)
```
ERROR:langchain_community.vectorstores.milvus:Invalid Milvus URI: ./milvus_demo.db
Traceback (most recent call last):
File "/home/erik/RAGMeUp/server/server.py", line 13, in <module>
raghelper = RAGHelper(logger)
^^^^^^^^^^^^^^^^^
File "/home/erik/RAGMeUp/server/RAGHelper.py", line 113, in __init__
self.loadData()
File "/home/erik/RAGMeUp/server/RAGHelper.py", line 258, in loadData
vector_db = Milvus.from_documents(
^^^^^^^^^^^^^^^^^^^^^^
File "/home/erik/anaconda3/envs/rag/lib/python3.11/site-packages/langchain_core/vectorstores.py", line 550, in from_documents
return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/erik/anaconda3/envs/rag/lib/python3.11/site-packages/langchain_community/vectorstores/milvus.py", line 1010, in from_texts
vector_db = cls(
^^^^
File "/home/erik/anaconda3/envs/rag/lib/python3.11/site-packages/langchain_core/_api/deprecation.py", line 183, in warn_if_direct_instance
return wrapped(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/erik/anaconda3/envs/rag/lib/python3.11/site-packages/langchain_community/vectorstores/milvus.py", line 206, in __init__
self.alias = self._create_connection_alias(connection_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/erik/anaconda3/envs/rag/lib/python3.11/site-packages/langchain_community/vectorstores/milvus.py", line 254, in _create_connection_alias
raise ValueError("Invalid Milvus URI: %s", uri)
ValueError: ('Invalid Milvus URI: %s', './milvus_demo.db')
```
### Description
* Milvus should work with local file DB
* Any local connection URI (e.g. a file path) triggers the above error
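For context, Milvus Lite is addressed by a plain local file path ending in `.db`, while a server deployment uses an `http(s)://` URI; the `langchain_community` version in this report only recognised the latter. A minimal sketch of the distinction (my assumption of the intended behaviour, not the library's actual code):

```
def is_milvus_lite_uri(uri: str) -> bool:
    # A Milvus Lite database is a local file ending in ".db";
    # anything with a scheme is treated as a server address.
    return "://" not in uri and uri.endswith(".db")

print(is_milvus_lite_uri("./milvus_demo.db"))        # True
print(is_milvus_lite_uri("http://localhost:19530"))  # False
```

Newer `langchain-milvus` releases accept the local-file form directly, so upgrading is the practical fix (hedged: based on the release that added Milvus Lite support).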
### System Info
```
langchain==0.2.2
langchain-community==0.2.2
langchain-core==0.2.4
langchain-huggingface==0.0.2
langchain-milvus==0.1.1
langchain-postgres==0.0.6
langchain-text-splitters==0.2.1
milvus-lite==2.4.7
pymilvus==2.4.3
``` | Invalid Milvus URI when using Milvus lite with local DB | https://api.github.com/repos/langchain-ai/langchain/issues/22953/comments | 1 | 2024-06-16T09:10:07Z | 2024-06-16T09:19:33Z | https://github.com/langchain-ai/langchain/issues/22953 | 2,355,586,348 | 22,953 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [x] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following is the code using **PydanticOutputParser**, where LangChain fails to parse the LLM output:
```
HUGGINGFACEHUB_API_TOKEN = os.getenv("HUGGINGFACEHUB_API_TOKEN")
repo_id = "mistralai/Mistral-7B-Instruct-v0.3"
model_kwargs = {
"max_new_tokens": 60,
"max_length": 200,
"temperature": 0.1,
"timeout": 6000
}
# Using HuggingFaceHub
llm = HuggingFaceHub(
repo_id=repo_id,
huggingfacehub_api_token = HUGGINGFACEHUB_API_TOKEN,
model_kwargs = model_kwargs,
)
# Define your desired data structure.
class Suggestions(BaseModel):
words: List[str] = Field(description="list of substitute words based on context")
# Throw error in case of receiving a numbered-list from API
@field_validator('words')
def not_start_with_number(cls, field):
for item in field:
if item[0].isnumeric():
raise ValueError("The word can not start with numbers!")
return field
parser = PydanticOutputParser(pydantic_object=Suggestions)
prompt_template = """
Offer a list of suggestions to substitute the specified target_word based on the context.
{format_instructions}
target_word={target_word}
context={context}
"""
prompt_input_variables = ["target_word", "context"]
partial_variables = {"format_instructions":parser.get_format_instructions()}
prompt = PromptTemplate(
template=prompt_template,
input_variables=prompt_input_variables,
partial_variables=partial_variables
)
model_input = prompt.format_prompt(
target_word="behaviour",
context="The behaviour of the students in the classroom was disruptive and made it difficult for the teacher to conduct the lesson."
)
output = llm(model_input.to_string())
parser.parse(output)
```
When trying to fix the error using **OutputFixingParser**, another error occurred; the code is below:
```
outputfixing_parser = OutputFixingParser.from_llm(parser=parser,llm=llm)
print(outputfixing_parser)
outputfixing_parser.parse(output)
```
### Error Message and Stack Trace (if applicable)
```
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
File [~\Desktop\llmai\llm_deep\Lib\site-packages\langchain_core\output_parsers\pydantic.py:33](~/Desktop/llmai/llm_deep/Lib/site-packages/langchain_core/output_parsers/pydantic.py:33), in PydanticOutputParser._parse_obj(self, obj)
[32](~/Desktop/llmai/llm_deep/Lib/site-packages/langchain_core/output_parsers/pydantic.py:32) if issubclass(self.pydantic_object, pydantic.BaseModel):
---> [33](~/Desktop/llmai/llm_deep/Lib/site-packages/langchain_core/output_parsers/pydantic.py:33) return self.pydantic_object.model_validate(obj)
[34](~/Desktop/llmai/llm_deep/Lib/site-packages/langchain_core/output_parsers/pydantic.py:34) elif issubclass(self.pydantic_object, pydantic.v1.BaseModel):
File [~\Desktop\llmai\llm_deep\Lib\site-packages\pydantic\main.py:551](~/Desktop/llmai/llm_deep/Lib/site-packages/pydantic/main.py:551), in BaseModel.model_validate(cls, obj, strict, from_attributes, context)
[550](~/Desktop/llmai/llm_deep/Lib/site-packages/pydantic/main.py:550) __tracebackhide__ = True
--> [551](~/Desktop/llmai/llm_deep/Lib/site-packages/pydantic/main.py:551) return cls.__pydantic_validator__.validate_python(
[552](~/Desktop/llmai/llm_deep/Lib/site-packages/pydantic/main.py:552) obj, strict=strict, from_attributes=from_attributes, context=context
[553](~/Desktop/llmai/llm_deep/Lib/site-packages/pydantic/main.py:553) )
ValidationError: 1 validation error for Suggestions
words
Field required [type=missing, input_value={'properties': {'words': ..., 'required': ['words']}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.7/v/missing
During handling of the above exception, another exception occurred:
OutputParserException Traceback (most recent call last)
Cell In[284], [line 1](vscode-notebook-cell:?execution_count=284&line=1)
----> [1](vscode-notebook-cell:?execution_count=284&line=1) parser.parse(output)
File [~\Desktop\llmai\llm_deep\Lib\site-packages\langchain_core\output_parsers\pydantic.py:64](~/Desktop/llmai/llm_deep/Lib/site-packages/langchain_core/output_parsers/pydantic.py:64), in PydanticOutputParser.parse(self, text)
...
OutputParserException: Failed to parse Suggestions from completion {"properties": {"words": {"description": "list of substitute words based on context", "items": {"type": "string"}, "title": "Words", "type": "array"}}, "required": ["words"]}. Got: 1 validation error for Suggestions
words
Field required [type=missing, input_value={'properties': {'words': ..., 'required': ['words']}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.7/v/missing
```
Error when using **OutputFixingParser**
```
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
File [~\Desktop\llmai\llm_deep\Lib\site-packages\langchain_core\output_parsers\pydantic.py:33](~\Desktop/llmai/llm_deep/Lib/site-packages/langchain_core/output_parsers/pydantic.py:33), in PydanticOutputParser._parse_obj(self, obj)
[32](~/Desktop/llmai/llm_deep/Lib/site-packages/langchain_core/output_parsers/pydantic.py:32) if issubclass(self.pydantic_object, pydantic.BaseModel):
---> [33](~/Desktop/llmai/llm_deep/Lib/site-packages/langchain_core/output_parsers/pydantic.py:33) return self.pydantic_object.model_validate(obj)
[34](~/Desktop/llmai/llm_deep/Lib/site-packages/langchain_core/output_parsers/pydantic.py:34) elif issubclass(self.pydantic_object, pydantic.v1.BaseModel):
File [~\Desktop\llmai\llm_deep\Lib\site-packages\pydantic\main.py:551](~/Desktop/llmai/llm_deep/Lib/site-packages/pydantic/main.py:551), in BaseModel.model_validate(cls, obj, strict, from_attributes, context)
[550](~/Desktop/llmai/llm_deep/Lib/site-packages/pydantic/main.py:550) __tracebackhide__ = True
--> [551](~/Desktop/llmai/llm_deep/Lib/site-packages/pydantic/main.py:551) return cls.__pydantic_validator__.validate_python(
[552](~/Desktop/llmai/llm_deep/Lib/site-packages/pydantic/main.py:552) obj, strict=strict, from_attributes=from_attributes, context=context
[553](~/Desktop/llmai/llm_deep/Lib/site-packages/pydantic/main.py:553) )
ValidationError: 1 validation error for Suggestions
Input should be a valid dictionary or instance of Suggestions [type=model_type, input_value=None, input_type=NoneType]
For further information visit https://errors.pydantic.dev/2.7/v/model_type
During handling of the above exception, another exception occurred:
OutputParserException Traceback (most recent call last)
Cell In[265], [line 1](vscode-notebook-cell:?execution_count=265&line=1)
----> [1](vscode-notebook-cell:?execution_count=265&line=1) outputfixing_parser.parse(output)
File [~\Desktop\llmai\llm_deep\Lib\site-packages\langchain\output_parsers\fix.py:62](~/Desktop/llmai/llm_deep/Lib/site-packages/langchain/output_parsers/fix.py:62), in OutputFixingParser.parse(self, completion)
[60](~/Desktop/llmai/llm_deep/Lib/site-packages/langchain/output_parsers/fix.py:60) except OutputParserException as e:
...
[44](~/Desktop/llmai/llm_deep/Lib/site-packages/langchain_core/output_parsers/pydantic.py:44) try:
OutputParserException: Failed to parse Suggestions from completion null. Got: 1 validation error for Suggestions
Input should be a valid dictionary or instance of Suggestions [type=model_type, input_value=None, input_type=NoneType]
For further information visit https://errors.pydantic.dev/2.7/v/model_type
```
### Description
The parser should be able to parse the LLM output and extract the JSON produced, yielding a result like the one below:
```
Suggestions(words=["conduct", "misconduct", "actions", "antics", "performance", "demeanor", "attitude", "behavior", "manner", "pupil actions"])
```
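The completion in the stack trace is the JSON *schema* echoed back by the model, not an instance of it, which is why validation reports `words` as missing. A small stdlib-only sketch of detecting that failure mode before handing the text to a Pydantic parser (the `extract_words` helper is hypothetical, not a LangChain API):

```
import json
import re

def extract_words(completion: str) -> list:
    """Pull a top-level "words" list out of an LLM completion string."""
    match = re.search(r"\{.*\}", completion, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object in completion")
    data = json.loads(match.group())
    words = data.get("words")
    if not isinstance(words, list):
        # This is exactly the failure mode above: the model echoed the
        # JSON *schema* ({"properties": ...}) instead of an instance.
        raise ValueError("completion contains a schema, not a 'words' list")
    return words

print(extract_words('{"words": ["conduct", "actions"]}'))  # ['conduct', 'actions']
```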
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.11.9 | packaged by conda-forge | (main, Apr 19 2024, 18:27:10) [MSC v.1938 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.2.4
> langchain: 0.2.2
> langchain_community: 0.2.4
> langsmith: 0.1.73
> langchain_google_community: 1.0.5
> langchain_huggingface: 0.0.3
> langchain_text_splitters: 0.2.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Failed to parse Suggestions from completion | https://api.github.com/repos/langchain-ai/langchain/issues/22952/comments | 2 | 2024-06-16T07:10:57Z | 2024-06-18T11:22:23Z | https://github.com/langchain-ai/langchain/issues/22952 | 2,355,491,461 | 22,952 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
chain.py
```
google_api: str = os.environ["GOOGLE_API_KEY"]
vertex_model: str = os.environ["vertex_model"]
llm = ChatGoogleGenerativeAI(temperature=1.0,
model=vertex_model,
google_api_key=google_api,
safety_settings=safety_settings_NONE)
```
server.py
```
@app.post("/admin/ases-ai/{instance_id}/content-generate/invoke", include_in_schema=True)
async def ai_route(instance_id: str, token: str = Depends(validate_token), request: Request = None):
instance_id=token['holder']
try:
path = f"/admin/ases-ai/{instance_id}/question-generate/pppk/invoke"
response = await invoke_api(
api_chain=soal_pppk_chain.with_config(config=set_langfuse_config(instance_id=instance_id)),
path=path,
request=request)
return response
except Exception as e:
raise HTTPException(status_code=500, detail=f"Status code: 500, Error: {str(e)}")
```
### Error Message and Stack Trace (if applicable)
`httpx.UnsupportedProtocol: Request URL is missing an 'http://' or 'https://' protocol.`
### Description
Trying to run `langchain serve`. When calling the API, specifically a POST endpoint, I got the error `httpx.UnsupportedProtocol: Request URL is missing an 'http://' or 'https://' protocol.`
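The httpx error indicates the URL eventually handed to the HTTP client had no scheme — here the bare `path` string built in the route handler looks like the culprit. A minimal sketch of normalising such a path before the call (the base URL is a placeholder assumption, and `absolutize` is a hypothetical helper):

```
from urllib.parse import urlparse

def absolutize(url_or_path: str, base: str = "http://localhost:8000") -> str:
    """Prefix a bare path with a base URL so httpx receives a full URL."""
    if urlparse(url_or_path).scheme in ("http", "https"):
        return url_or_path
    return base.rstrip("/") + "/" + url_or_path.lstrip("/")

print(absolutize("/admin/ases-ai/x/invoke"))
# http://localhost:8000/admin/ases-ai/x/invoke
```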
### System Info
aiohttp==3.9.5
aiosignal==1.3.1
annotated-types==0.7.0
anyio==3.7.1
attrs==23.2.0
backoff==2.2.1
cachetools==5.3.3
certifi==2024.6.2
cffi==1.16.0
charset-normalizer==3.3.2
click==8.1.7
colorama==0.4.6
cryptography==42.0.7
dataclasses-json==0.6.7
dnspython==2.6.1
fastapi==0.110.3
frozenlist==1.4.1
gitdb==4.0.11
GitPython==3.1.43
google-ai-generativelanguage==0.6.4
google-api-core==2.19.0
google-api-python-client==2.133.0
google-auth==2.30.0
google-auth-httplib2==0.2.0
google-cloud-discoveryengine==0.11.12
google-generativeai==0.5.4
googleapis-common-protos==1.63.1
grpcio==1.64.1
grpcio-status==1.62.2
h11==0.14.0
httpcore==1.0.5
httplib2==0.22.0
httpx==0.27.0
httpx-sse==0.4.0
idna==3.7
jsonpatch==1.33
jsonpointer==3.0.0
jsonschema==4.22.0
jsonschema-specifications==2023.12.1
langchain==0.2.5
langchain-cli==0.0.25
langchain-community==0.2.5
langchain-core==0.2.7
langchain-google-genai==1.0.6
langchain-mongodb==0.1.6
langchain-text-splitters==0.2.1
langfuse==2.36.1
langserve==0.2.2
langsmith==0.1.77
libcst==1.4.0
markdown-it-py==3.0.0
marshmallow==3.21.3
mdurl==0.1.2
multidict==6.0.5
mypy-extensions==1.0.0
numpy==1.26.4
orjson==3.10.5
packaging==23.2
pipdeptree==2.22.0
proto-plus==1.23.0
protobuf==4.25.3
pyasn1==0.6.0
pyasn1_modules==0.4.0
pycparser==2.22
pydantic==2.7.4
pydantic_core==2.18.4
Pygments==2.18.0
PyJWT==2.3.0
pymongo==4.7.2
pyparsing==3.1.2
pypdf==4.2.0
pyproject-toml==0.0.10
python-dotenv==1.0.1
python-multipart==0.0.9
PyYAML==6.0.1
referencing==0.35.1
requests==2.32.3
rfc3986==1.5.0
rich==13.7.1
rpds-py==0.18.1
rsa==4.9
shellingham==1.5.4
smmap==5.0.1
sniffio==1.3.1
SQLAlchemy==2.0.30
sse-starlette==1.8.2
starlette==0.37.2
tenacity==8.3.0
toml==0.10.2
tomlkit==0.12.5
tqdm==4.66.4
typer==0.9.4
typing-inspect==0.9.0
typing_extensions==4.12.2
uritemplate==4.1.1
urllib3==2.2.1
uvicorn==0.23.2
wrapt==1.16.0
yarl==1.9.4 | httpx.UnsupportedProtocol: Request URL is missing an 'http://' or 'https://' protocol | https://api.github.com/repos/langchain-ai/langchain/issues/22951/comments | 0 | 2024-06-16T06:50:13Z | 2024-06-16T06:52:45Z | https://github.com/langchain-ai/langchain/issues/22951 | 2,355,482,036 | 22,951 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/how_to/streaming/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:

https://python.langchain.com/v0.2/docs/how_to/streaming/#chains
### Idea or request for content:
It looks like `streaming` is misspelled `sreaming`.
It appears at the end of the `Chains` section, under `Using stream events`.
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/how_to/extraction_examples/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
I was having a problem with using ChatMistralAI for extraction, with examples, so I went and followed the how-to page exactly. Without examples it works fine, but when I add the examples as described here:
https://python.langchain.com/v0.2/docs/how_to/extraction_examples/#with-examples-
I get the following error:
HTTPStatusError: Error response 400 while fetching https://api.mistral.ai/v1/chat/completions: {"object":"error","message":"Unexpected role 'user' after role 'tool'","type":"invalid_request_error","param":null,"code":null}
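The 400 response suggests Mistral's chat endpoint rejects a `user` turn that directly follows a `tool` turn — which is exactly the shape the few-shot examples in the how-to produce, since each example ends with a tool result. A tiny sketch of checking a message sequence for that pattern (my reading of the constraint from the error text, not a documented Mistral rule):

```
def violates_mistral_order(roles) -> bool:
    # Mistral's chat endpoint appears to reject a 'user' turn
    # immediately following a 'tool' turn (per the 400 error above).
    return any(p == "tool" and c == "user" for p, c in zip(roles, roles[1:]))

print(violates_mistral_order(["user", "assistant", "tool", "user"]))               # True
print(violates_mistral_order(["user", "assistant", "tool", "assistant", "user"]))  # False
```

A reported workaround is to append a short assistant acknowledgement after each example's tool message, so no `user` turn ever follows a `tool` turn.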
### Idea or request for content:
_No response_ | MistralAI Extraction How-To (with examples) throws an error | https://api.github.com/repos/langchain-ai/langchain/issues/22928/comments | 4 | 2024-06-14T23:49:43Z | 2024-06-26T11:15:59Z | https://github.com/langchain-ai/langchain/issues/22928 | 2,354,262,911 | 22,928 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/chat/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The page https://python.langchain.com/v0.2/docs/integrations/chat/#advanced-features marks ChatOllama as having no JSON mode (red cross), while https://python.langchain.com/v0.2/docs/concepts/#structured-output says: "Some models, such as (...) Ollama support a feature called JSON mode." The examples at https://python.langchain.com/v0.2/docs/integrations/chat/ollama/#extraction also confirm that JSON mode exists.
### Idea or request for content:
Insert green checkbox for Ollama/JSON on https://python.langchain.com/v0.2/docs/integrations/chat/#advanced-features | DOC: <Issue related to /v0.2/docs/integrations/chat/> Ollama JSON mode seems to be marked incorrectly as NO | https://api.github.com/repos/langchain-ai/langchain/issues/22910/comments | 1 | 2024-06-14T17:48:39Z | 2024-06-14T23:27:56Z | https://github.com/langchain-ai/langchain/issues/22910 | 2,353,826,349 | 22,910 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from pymilvus import (
Collection,
CollectionSchema,
DataType,
FieldSchema,
WeightedRanker,
connections,
)
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_milvus.retrievers import MilvusCollectionHybridSearchRetriever
from langchain_milvus.utils.sparse import BM25SparseEmbedding
# from langchain_openai import ChatOpenAI, OpenAIEmbeddings
import logging
logger = logging.getLogger("gunicorn.error")
texts = [
"In 'The Whispering Walls' by Ava Moreno, a young journalist named Sophia uncovers a decades-old conspiracy hidden within the crumbling walls of an ancient mansion, where the whispers of the past threaten to destroy her own sanity.",
"In 'The Last Refuge' by Ethan Blackwood, a group of survivors must band together to escape a post-apocalyptic wasteland, where the last remnants of humanity cling to life in a desperate bid for survival.",
"In 'The Memory Thief' by Lila Rose, a charismatic thief with the ability to steal and manipulate memories is hired by a mysterious client to pull off a daring heist, but soon finds themselves trapped in a web of deceit and betrayal.",
"In 'The City of Echoes' by Julian Saint Clair, a brilliant detective must navigate a labyrinthine metropolis where time is currency, and the rich can live forever, but at a terrible cost to the poor.",
"In 'The Starlight Serenade' by Ruby Flynn, a shy astronomer discovers a mysterious melody emanating from a distant star, which leads her on a journey to uncover the secrets of the universe and her own heart.",
"In 'The Shadow Weaver' by Piper Redding, a young orphan discovers she has the ability to weave powerful illusions, but soon finds herself at the center of a deadly game of cat and mouse between rival factions vying for control of the mystical arts.",
"In 'The Lost Expedition' by Caspian Grey, a team of explorers ventures into the heart of the Amazon rainforest in search of a lost city, but soon finds themselves hunted by a ruthless treasure hunter and the treacherous jungle itself.",
"In 'The Clockwork Kingdom' by Augusta Wynter, a brilliant inventor discovers a hidden world of clockwork machines and ancient magic, where a rebellion is brewing against the tyrannical ruler of the land.",
"In 'The Phantom Pilgrim' by Rowan Welles, a charismatic smuggler is hired by a mysterious organization to transport a valuable artifact across a war-torn continent, but soon finds themselves pursued by deadly assassins and rival factions.",
"In 'The Dreamwalker's Journey' by Lyra Snow, a young dreamwalker discovers she has the ability to enter people's dreams, but soon finds herself trapped in a surreal world of nightmares and illusions, where the boundaries between reality and fantasy blur.",
]
from langchain_openai import AzureOpenAIEmbeddings
dense_embedding_func: AzureOpenAIEmbeddings = AzureOpenAIEmbeddings(
azure_deployment="************",
openai_api_version="************",
azure_endpoint="*******************",
api_key="************************",
)
# dense_embedding_func = OpenAIEmbeddings()
dense_dim = len(dense_embedding_func.embed_query(texts[1]))
# logger.info(f"DENSE DIM - {dense_dim}")
print("DENSE DIM")
print(dense_dim)
sparse_embedding_func = BM25SparseEmbedding(corpus=texts)
sparse_embedding = sparse_embedding_func.embed_query(texts[1])
print("SPARSE EMBEDDING")
print(sparse_embedding)
# connections.connect(uri=CONNECTION_URI)
connections.connect(
host="**************", # Replace with your Milvus server IP
port="***********",
user="**************",
password="***************",
db_name="*****************"
)
print("CONNECTED")
pk_field = "doc_id"
dense_field = "dense_vector"
sparse_field = "sparse_vector"
text_field = "text"
fields = [
FieldSchema(
name=pk_field,
dtype=DataType.VARCHAR,
is_primary=True,
auto_id=True,
max_length=100,
),
FieldSchema(name=dense_field, dtype=DataType.FLOAT_VECTOR, dim=dense_dim),
FieldSchema(name=sparse_field, dtype=DataType.SPARSE_FLOAT_VECTOR),
FieldSchema(name=text_field, dtype=DataType.VARCHAR, max_length=65_535),
]
schema = CollectionSchema(fields=fields, enable_dynamic_field=False)
collection = Collection(
name="IntroductionToTheNovels", schema=schema, consistency_level="Strong"
)
print("SCHEMA CRAETED")
dense_index = {"index_type": "FLAT", "metric_type": "IP"}
collection.create_index("dense_vector", dense_index)
sparse_index = {"index_type": "SPARSE_INVERTED_INDEX", "metric_type": "IP"}
collection.create_index("sparse_vector", sparse_index)
print("INDEX CREATED")
collection.flush()
print("FLUSHED")
entities = []
for text in texts:
entity = {
dense_field: dense_embedding_func.embed_documents([text])[0],
sparse_field: sparse_embedding_func.embed_documents([text])[0],
text_field: text,
}
entities.append(entity)
print("ENTITES")
collection.insert(entities)
print("INSERTED")
collection.load()
print("LOADED")
sparse_search_params = {"metric_type": "IP"}
dense_search_params = {"metric_type": "IP", "params": {}}
retriever = MilvusCollectionHybridSearchRetriever(
collection=collection,
rerank=WeightedRanker(0.5, 0.5),
anns_fields=[dense_field, sparse_field],
field_embeddings=[dense_embedding_func, sparse_embedding_func],
field_search_params=[dense_search_params, sparse_search_params],
top_k=3,
text_field=text_field,
)
print("RETRIEVED CREATED")
documents = retriever.invoke("What are the story about ventures?")
print(documents)
### Error Message and Stack Trace (if applicable)
RPC error: [create_index], <MilvusException: (code=1100, message=create index on 104 field is not supported: invalid parameter[expected=supported field][actual=create index on 104 field])>, <Time:{'RPC start': '2024-06-14 13:38:35.242645', 'RPC error': '2024-06-14 13:38:35.247294'}>
### Description
I am trying to use hybrid search in a Milvus database using the langchain-milvus library.
But when I create the index for the sparse vector field, it gives the following error -
RPC error: [create_index], <MilvusException: (code=1100, message=create index on 104 field is not supported: invalid parameter[expected=supported field][actual=create index on 104 field])>, <Time:{'RPC start': '2024-06-14 13:38:35.242645', 'RPC error': '2024-06-14 13:38:35.247294'}>
I have tried using MilvusClient to create the collection as well, but that gives me the same error.
We committed to implementing hybrid search after finding LangChain's documentation, but it raises this error and we are now stuck mid-way, so please resolve it as soon as possible.
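Schema creation succeeding while index creation fails on the `SPARSE_FLOAT_VECTOR` field (data type 104) usually means the Milvus *server* predates sparse-vector support, even though pymilvus 2.4.x accepts the schema client-side. A small gate one could run before building the collection (the 2.4 threshold is my assumption from Milvus release notes):

```
def server_supports_sparse(version: str) -> bool:
    """True when a Milvus server version string is >= 2.4 (assumed cutoff)."""
    core = version.lstrip("v").split("-")[0]
    major, minor = (int(p) for p in core.split(".")[:2])
    return (major, minor) >= (2, 4)

print(server_supports_sparse("v2.3.12"))  # False
print(server_supports_sparse("2.4.3"))    # True
```

In pymilvus, the server version string can be fetched with `utility.get_server_version()` on an open connection; if it reports a pre-2.4 server, upgrading the server is the fix rather than changing the client code.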
### System Info
pip freeze | grep langchain -
langchain-core==0.2.6
langchain-milvus==0.1.1
langchain-openai==0.1.8
----------------
Platform - linux
----------------
python version - 3.11.7
-----------------------------
python -m langchain_core.sys_info
System Information
------------------
> OS: Linux
> OS Version: #73~20.04.1-Ubuntu SMP Mon May 6 09:43:44 UTC 2024
> Python Version: 3.11.7 (main, Dec 8 2023, 18:56:57) [GCC 9.4.0]
Package Information
-------------------
> langchain_core: 0.2.6
> langsmith: 0.1.77
> langchain_milvus: 0.1.1
> langchain_openai: 0.1.8
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | RPC error: [create_index], <MilvusException: (code=1100, message=create index on 104 field is not supported: invalid parameter[expected=supported field][actual=create index on 104 field])>, <Time:{'RPC start': '2024-06-14 13:38:35.242645', 'RPC error': '2024-06-14 13:38:35.247294'}> | https://api.github.com/repos/langchain-ai/langchain/issues/22901/comments | 1 | 2024-06-14T14:22:14Z | 2024-06-18T07:03:54Z | https://github.com/langchain-ai/langchain/issues/22901 | 2,353,491,955 | 22,901 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
ollama = Ollama(model=vicuna)
print(ollama.invoke("why is the sky blue"))
DATA_PATH = '/home/lamia/arenault/test_ollama_container/advancedragtest/dataPV'
DB_FAISS_PATH = 'vectorstore1/db_faiss'
loader = DirectoryLoader(DATA_PATH,
                         glob='*.pdf',
                         loader_cls=PyPDFLoader)
documents = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000,
                                               chunk_overlap=250)
texts = text_splitter.split_documents(documents)
embeddings = FastEmbedEmbeddings(model_name="sentence-transformers/paraphrase-multilingual-mpnet-base-v2")
db = FAISS.from_documents(texts, embeddings)
db.save_local(DB_FAISS_PATH)
question = "What is said during the meeting ? "
docs = db.similarity_search(question)
len(docs)
qachain = RetrievalQA.from_chain_type(ollama, retriever=db.as_retriever())
res = qachain.invoke({"query": question})
print(res['result'])
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/home/lamia/user/test_ollama_container/advancedragtest/rp1.py", line 53, in <module>
db = FAISS.from_documents(texts, embeddings)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lamia/user/.local/lib/python3.12/site-packages/langchain_core/vectorstores.py", line 550, in from_documents
return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lamia/user/.local/lib/python3.12/site-packages/langchain_community/vectorstores/faiss.py", line 930, in from_texts
embeddings = embedding.embed_documents(texts)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lamia/user/.local/lib/python3.12/site-packages/langchain_community/embeddings/fastembed.py", line 107, in embed_documents
return [e.tolist() for e in embeddings]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lamia/user/.local/lib/python3.12/site-packages/fastembed/text/text_embedding.py", line 95, in embed
yield from self.model.embed(documents, batch_size, parallel, **kwargs)
File "/home/lamia/user/.local/lib/python3.12/site-packages/fastembed/text/onnx_embedding.py", line 268, in embed
yield from self._embed_documents(
File "/home/lamia/user/.local/lib/python3.12/site-packages/fastembed/text/onnx_text_model.py", line 105, in _embed_documents
yield from self._post_process_onnx_output(self.onnx_embed(batch))
^^^^^^^^^^^^^^^^^^^^^^
File "/home/lamia/user/.local/lib/python3.12/site-packages/fastembed/text/onnx_text_model.py", line 75, in onnx_embed
model_output = self.model.run(self.ONNX_OUTPUT_NAMES, onnx_input)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lamia/user/.local/lib/python3.12/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 220, in run
return self._sess.run(output_names, input_feed, run_options)
"2024-06-13 14:16:16.791673276 [E:onnxruntime:Default, env.cc:228 ThreadMain] pthread_setaffinity_np failed for thread: 6235, index: 29, mask: {30, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
[... the same "pthread_setaffinity_np failed ... Specify the number of threads explicitly so the affinity is not set" error repeats for each remaining worker thread (threads 6206-6240) ...]
"
### Description
# Hey there, I am quite new to this and would be so grateful if you could propose a way to solve this!
## Here is what I tried (without success) :
"import os
os.environ["OMP_NUM_THREADS"] = "4"
Now import ONNX Runtime and other libraries
import onnxruntime as ort"
I also tried :
"
#import onnxruntime as ort
#sess_options = ort.SessionOptions()
#sess_options.intra_op_num_threads = 15
#sess_options.inter_op_num_threads = 15
"
I am running my code on a singularity container.
I would be incredibly grateful for any help.
Thanks a lot.
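For what it's worth, the error text itself hints at the fix ("Specify the number of threads explicitly so the affinity is not set"). Below is a minimal sketch of that idea; it assumes `FastEmbedEmbeddings` really does forward a `threads` value down to fastembed's ONNX Runtime session, which I have not verified:

```python
import os

# Cap thread counts *before* anything imports onnxruntime; per the error
# message, affinity is only pinned when the thread count is implicit.
os.environ["OMP_NUM_THREADS"] = "4"

def make_embeddings(threads: int = 4):
    # Assumption: FastEmbedEmbeddings exposes a `threads` field that
    # fastembed uses when it creates the ONNX Runtime session.
    from langchain_community.embeddings import FastEmbedEmbeddings
    return FastEmbedEmbeddings(
        model_name="sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
        threads=threads,
    )

print(os.environ["OMP_NUM_THREADS"])  # prints: 4
```

If the `threads` parameter is honored, the affinity mask should never be set and the `pthread_setaffinity_np` spam should disappear inside the container.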
### System Info
python : 3.12
pip freeze
aiohttp==3.9.5
aiosignal==1.3.1
anaconda-anon-usage @ file:///croot/anaconda-anon-usage_1710965072196/work
annotated-types==0.7.0
anyio @ file:///home/conda/feedstock_root/build_artifacts/anyio_1708355285029/work
archspec @ file:///croot/archspec_1709217642129/work
argon2-cffi @ file:///home/conda/feedstock_root/build_artifacts/argon2-cffi_1692818318753/work
argon2-cffi-bindings @ file:///home/conda/feedstock_root/build_artifacts/argon2-cffi-bindings_1695386549414/work
arrow @ file:///home/conda/feedstock_root/build_artifacts/arrow_1696128962909/work
asgiref==3.8.1
asttokens @ file:///home/conda/feedstock_root/build_artifacts/asttokens_1698341106958/work
async-lru @ file:///home/conda/feedstock_root/build_artifacts/async-lru_1690563019058/work
attrs @ file:///home/conda/feedstock_root/build_artifacts/attrs_1704011227531/work
Babel @ file:///home/conda/feedstock_root/build_artifacts/babel_1702422572539/work
backoff==2.2.1
bcrypt==4.1.3
beautifulsoup4 @ file:///home/conda/feedstock_root/build_artifacts/beautifulsoup4_1705564648255/work
bleach @ file:///home/conda/feedstock_root/build_artifacts/bleach_1696630167146/work
boltons @ file:///work/perseverance-python-buildout/croot/boltons_1698851177130/work
Brotli @ file:///croot/brotli-split_1714483155106/work
build==1.2.1
cached-property @ file:///home/conda/feedstock_root/build_artifacts/cached_property_1615209429212/work
cachetools==5.3.3
certifi @ file:///home/conda/feedstock_root/build_artifacts/certifi_1707022139797/work/certifi
cffi @ file:///croot/cffi_1714483155441/work
chardet==5.2.0
charset-normalizer==3.3.2
chroma-hnswlib==0.7.3
chromadb==0.5.0
click==8.1.7
coloredlogs==15.0.1
comm @ file:///home/conda/feedstock_root/build_artifacts/comm_1710320294760/work
conda @ file:///home/conda/feedstock_root/build_artifacts/conda_1715631928597/work
conda-content-trust @ file:///croot/conda-content-trust_1714483159009/work
conda-libmamba-solver @ file:///croot/conda-libmamba-solver_1706733287605/work/src
conda-package-handling @ file:///croot/conda-package-handling_1714483155348/work
conda_package_streaming @ file:///work/perseverance-python-buildout/croot/conda-package-streaming_1698847176583/work
contourpy @ file:///home/conda/feedstock_root/build_artifacts/contourpy_1712429918028/work
cryptography @ file:///croot/cryptography_1714660666131/work
cycler @ file:///home/conda/feedstock_root/build_artifacts/cycler_1696677705766/work
dataclasses-json==0.6.6
debugpy @ file:///home/conda/feedstock_root/build_artifacts/debugpy_1707444401483/work
decorator @ file:///home/conda/feedstock_root/build_artifacts/decorator_1641555617451/work
deepdiff==7.0.1
defusedxml @ file:///home/conda/feedstock_root/build_artifacts/defusedxml_1615232257335/work
Deprecated==1.2.14
dirtyjson==1.0.8
diskcache==5.6.3
distro @ file:///croot/distro_1714488253808/work
dnspython==2.6.1
email_validator==2.1.1
emoji==2.12.1
entrypoints @ file:///home/conda/feedstock_root/build_artifacts/entrypoints_1643888246732/work
exceptiongroup @ file:///home/conda/feedstock_root/build_artifacts/exceptiongroup_1704921103267/work
executing @ file:///home/conda/feedstock_root/build_artifacts/executing_1698579936712/work
faiss-cpu==1.8.0
fastapi==0.111.0
fastapi-cli==0.0.4
fastembed==0.3.0
fastjsonschema @ file:///home/conda/feedstock_root/build_artifacts/python-fastjsonschema_1703780968325/work/dist
filelock==3.14.0
filetype==1.2.0
FlashRank==0.2.5
flatbuffers==24.3.25
fonttools @ file:///home/conda/feedstock_root/build_artifacts/fonttools_1717209197958/work
fqdn @ file:///home/conda/feedstock_root/build_artifacts/fqdn_1638810296540/work/dist
frozendict @ file:///home/conda/feedstock_root/build_artifacts/frozendict_1715092752354/work
frozenlist==1.4.1
fsspec==2024.6.0
google-auth==2.30.0
googleapis-common-protos==1.63.1
greenlet==3.0.3
groq==0.8.0
grpcio==1.64.1
grpcio-tools==1.64.1
h11 @ file:///home/conda/feedstock_root/build_artifacts/h11_1664132893548/work
h2 @ file:///home/conda/feedstock_root/build_artifacts/h2_1634280454336/work
hpack==4.0.0
httpcore @ file:///home/conda/feedstock_root/build_artifacts/httpcore_1711596990900/work
httptools==0.6.1
httpx @ file:///home/conda/feedstock_root/build_artifacts/httpx_1708530890843/work
huggingface-hub==0.23.3
humanfriendly==10.0
hyperframe @ file:///home/conda/feedstock_root/build_artifacts/hyperframe_1619110129307/work
idna @ file:///croot/idna_1714398848350/work
importlib_metadata @ file:///home/conda/feedstock_root/build_artifacts/importlib-metadata_1710971335535/work
importlib_resources @ file:///home/conda/feedstock_root/build_artifacts/importlib_resources_1711040877059/work
ipykernel @ file:///home/conda/feedstock_root/build_artifacts/ipykernel_1708996548741/work
ipython @ file:///home/conda/feedstock_root/build_artifacts/ipython_1717182742060/work
isoduration @ file:///home/conda/feedstock_root/build_artifacts/isoduration_1638811571363/work/dist
jedi @ file:///home/conda/feedstock_root/build_artifacts/jedi_1696326070614/work
Jinja2 @ file:///home/conda/feedstock_root/build_artifacts/jinja2_1715127149914/work
joblib==1.4.2
json5 @ file:///home/conda/feedstock_root/build_artifacts/json5_1712986206667/work
jsonpatch @ file:///croot/jsonpatch_1714483231291/work
jsonpath-python==1.0.6
jsonpointer==2.1
jsonschema @ file:///home/conda/feedstock_root/build_artifacts/jsonschema-meta_1714573116818/work
jsonschema-specifications @ file:///tmp/tmpkv1z7p57/src
jupyter-events @ file:///home/conda/feedstock_root/build_artifacts/jupyter_events_1710805637316/work
jupyter-lsp @ file:///home/conda/feedstock_root/build_artifacts/jupyter-lsp-meta_1712707420468/work/jupyter-lsp
jupyter_client @ file:///home/conda/feedstock_root/build_artifacts/jupyter_client_1716472197302/work
jupyter_core @ file:///home/conda/feedstock_root/build_artifacts/jupyter_core_1710257406420/work
jupyter_server @ file:///home/conda/feedstock_root/build_artifacts/jupyter_server_1717122053158/work
jupyter_server_terminals @ file:///home/conda/feedstock_root/build_artifacts/jupyter_server_terminals_1710262634903/work
jupyterlab @ file:///home/conda/feedstock_root/build_artifacts/jupyterlab_1716470278966/work
jupyterlab_pygments @ file:///home/conda/feedstock_root/build_artifacts/jupyterlab_pygments_1707149102966/work
jupyterlab_server @ file:///home/conda/feedstock_root/build_artifacts/jupyterlab_server-split_1716433953404/work
kiwisolver @ file:///home/conda/feedstock_root/build_artifacts/kiwisolver_1695379925569/work
kubernetes==30.1.0
langchain==0.2.2
langchain-community==0.2.3
langchain-core==0.2.4
langchain-groq==0.1.5
langchain-text-splitters==0.2.1
langdetect==1.0.9
langsmith==0.1.74
libmambapy @ file:///croot/mamba-split_1714483352891/work/libmambapy
llama-index-core==0.10.43.post1
llama-index-readers-file==0.1.23
llama-parse==0.4.4
llama_cpp_python==0.2.67
llamaindex-py-client==0.1.19
loguru==0.7.2
lxml==5.2.2
Markdown==3.6
markdown-it-py @ file:///home/conda/feedstock_root/build_artifacts/markdown-it-py_1686175045316/work
MarkupSafe @ file:///home/conda/feedstock_root/build_artifacts/markupsafe_1706899920239/work
marshmallow==3.21.3
matplotlib @ file:///home/conda/feedstock_root/build_artifacts/matplotlib-suite_1715976243782/work
matplotlib-inline @ file:///home/conda/feedstock_root/build_artifacts/matplotlib-inline_1713250518406/work
mdurl @ file:///home/conda/feedstock_root/build_artifacts/mdurl_1704317613764/work
menuinst @ file:///croot/menuinst_1714510563922/work
mistune @ file:///home/conda/feedstock_root/build_artifacts/mistune_1698947099619/work
mmh3==4.1.0
monotonic==1.6
mpmath==1.3.0
multidict==6.0.5
munkres==1.1.4
mypy-extensions==1.0.0
nbclient @ file:///home/conda/feedstock_root/build_artifacts/nbclient_1710317608672/work
nbconvert @ file:///home/conda/feedstock_root/build_artifacts/nbconvert-meta_1714477135335/work
nbformat @ file:///home/conda/feedstock_root/build_artifacts/nbformat_1712238998817/work
nest_asyncio @ file:///home/conda/feedstock_root/build_artifacts/nest-asyncio_1705850609492/work
networkx==3.3
nltk==3.8.1
notebook_shim @ file:///home/conda/feedstock_root/build_artifacts/notebook-shim_1707957777232/work
numpy @ file:///home/conda/feedstock_root/build_artifacts/numpy_1707225359967/work/dist/numpy-1.26.4-cp312-cp312-linux_x86_64.whl#sha256=031b7d6b2e5e604d9e21fc21be713ebf28ce133ec872dce6de006742d5e49bab
nvidia-cublas-cu12==12.1.3.1
nvidia-cuda-cupti-cu12==12.1.105
nvidia-cuda-nvrtc-cu12==12.1.105
nvidia-cuda-runtime-cu12==12.1.105
nvidia-cudnn-cu12==8.9.2.26
nvidia-cufft-cu12==11.0.2.54
nvidia-curand-cu12==10.3.2.106
nvidia-cusolver-cu12==11.4.5.107
nvidia-cusparse-cu12==12.1.0.106
nvidia-nccl-cu12==2.19.3
nvidia-nvjitlink-cu12==12.5.40
nvidia-nvtx-cu12==12.1.105
oauthlib==3.2.2
ollama==0.2.1
onnx==1.16.1
onnxruntime==1.18.0
onnxruntime-gpu==1.18.0
openai==1.31.2
opentelemetry-api==1.25.0
opentelemetry-exporter-otlp-proto-common==1.25.0
opentelemetry-exporter-otlp-proto-grpc==1.25.0
opentelemetry-instrumentation==0.46b0
opentelemetry-instrumentation-asgi==0.46b0
opentelemetry-instrumentation-fastapi==0.46b0
opentelemetry-proto==1.25.0
opentelemetry-sdk==1.25.0
opentelemetry-semantic-conventions==0.46b0
opentelemetry-util-http==0.46b0
ordered-set==4.1.0
orjson==3.10.3
overrides @ file:///home/conda/feedstock_root/build_artifacts/overrides_1706394519472/work
packaging @ file:///croot/packaging_1710807400464/work
pandas @ file:///home/conda/feedstock_root/build_artifacts/pandas_1715897630316/work
pandocfilters @ file:///home/conda/feedstock_root/build_artifacts/pandocfilters_1631603243851/work
parso @ file:///home/conda/feedstock_root/build_artifacts/parso_1712320355065/work
pexpect @ file:///home/conda/feedstock_root/build_artifacts/pexpect_1706113125309/work
pickleshare @ file:///home/conda/feedstock_root/build_artifacts/pickleshare_1602536217715/work
pillow @ file:///croot/pillow_1714398848491/work
pkgutil_resolve_name @ file:///home/conda/feedstock_root/build_artifacts/pkgutil-resolve-name_1694617248815/work
platformdirs @ file:///work/perseverance-python-buildout/croot/platformdirs_1701732573265/work
pluggy @ file:///work/perseverance-python-buildout/croot/pluggy_1698805497733/work
ply @ file:///home/conda/feedstock_root/build_artifacts/ply_1712242996588/work
portalocker==2.8.2
posthog==3.5.0
prometheus_client @ file:///home/conda/feedstock_root/build_artifacts/prometheus_client_1707932675456/work
prompt_toolkit @ file:///home/conda/feedstock_root/build_artifacts/prompt-toolkit_1717583537988/work
protobuf==4.25.3
psutil @ file:///home/conda/feedstock_root/build_artifacts/psutil_1705722396628/work
ptyprocess @ file:///home/conda/feedstock_root/build_artifacts/ptyprocess_1609419310487/work/dist/ptyprocess-0.7.0-py2.py3-none-any.whl
pure-eval @ file:///home/conda/feedstock_root/build_artifacts/pure_eval_1642875951954/work
pyasn1==0.6.0
pyasn1_modules==0.4.0
pycosat @ file:///croot/pycosat_1714510623388/work
pycparser @ file:///tmp/build/80754af9/pycparser_1636541352034/work
pydantic==2.7.3
pydantic_core==2.18.4
Pygments @ file:///home/conda/feedstock_root/build_artifacts/pygments_1714846767233/work
pyparsing @ file:///home/conda/feedstock_root/build_artifacts/pyparsing_1709721012883/work
pypdf==4.2.0
PyPDF2==3.0.1
PyPika==0.48.9
pyproject_hooks==1.1.0
PyQt5==5.15.10
PyQt5-sip @ file:///work/perseverance-python-buildout/croot/pyqt-split_1698847927472/work/pyqt_sip
PySocks @ file:///work/perseverance-python-buildout/croot/pysocks_1698845478203/work
PyStemmer==2.2.0.1
python-dateutil @ file:///home/conda/feedstock_root/build_artifacts/python-dateutil_1709299778482/work
python-dotenv==1.0.1
python-iso639==2024.4.27
python-json-logger @ file:///home/conda/feedstock_root/build_artifacts/python-json-logger_1677079630776/work
python-magic==0.4.27
python-multipart==0.0.9
pytz @ file:///home/conda/feedstock_root/build_artifacts/pytz_1706886791323/work
PyYAML @ file:///home/conda/feedstock_root/build_artifacts/pyyaml_1695373450623/work
pyzmq @ file:///home/conda/feedstock_root/build_artifacts/pyzmq_1715024373784/work
qdrant-client==1.9.1
rapidfuzz==3.9.3
referencing @ file:///home/conda/feedstock_root/build_artifacts/referencing_1714619483868/work
regex==2024.5.15
requests @ file:///croot/requests_1707355572290/work
requests-oauthlib==2.0.0
requests-toolbelt==1.0.0
rfc3339-validator @ file:///home/conda/feedstock_root/build_artifacts/rfc3339-validator_1638811747357/work
rfc3986-validator @ file:///home/conda/feedstock_root/build_artifacts/rfc3986-validator_1598024191506/work
rich @ file:///home/conda/feedstock_root/build_artifacts/rich-split_1709150387247/work/dist
rpds-py @ file:///home/conda/feedstock_root/build_artifacts/rpds-py_1715089993456/work
rsa==4.9
ruamel.yaml @ file:///work/perseverance-python-buildout/croot/ruamel.yaml_1698863605521/work
safetensors==0.4.3
scikit-learn==1.5.0
scipy==1.13.1
Send2Trash @ file:///home/conda/feedstock_root/build_artifacts/send2trash_1712584999685/work
sentence-transformers==3.0.1
setuptools==69.5.1
shellingham==1.5.4
sip @ file:///home/conda/feedstock_root/build_artifacts/sip_1697300425834/work
six @ file:///home/conda/feedstock_root/build_artifacts/six_1620240208055/work
sniffio @ file:///home/conda/feedstock_root/build_artifacts/sniffio_1708952932303/work
snowballstemmer==2.2.0
soupsieve @ file:///home/conda/feedstock_root/build_artifacts/soupsieve_1693929250441/work
SQLAlchemy==2.0.30
stack-data @ file:///home/conda/feedstock_root/build_artifacts/stack_data_1669632077133/work
starlette==0.37.2
striprtf==0.0.26
sympy==1.12.1
tabulate==0.9.0
tenacity==8.3.0
terminado @ file:///home/conda/feedstock_root/build_artifacts/terminado_1710262609923/work
threadpoolctl==3.5.0
tiktoken==0.7.0
tinycss2 @ file:///home/conda/feedstock_root/build_artifacts/tinycss2_1713974937325/work
tokenizers==0.19.1
tomli @ file:///home/conda/feedstock_root/build_artifacts/tomli_1644342247877/work
torch==2.2.0
tornado @ file:///home/conda/feedstock_root/build_artifacts/tornado_1708363096407/work
tqdm @ file:///croot/tqdm_1714567712644/work
traitlets @ file:///home/conda/feedstock_root/build_artifacts/traitlets_1713535121073/work
transformers==4.41.2
triton==2.2.0
truststore @ file:///work/perseverance-python-buildout/croot/truststore_1701735771625/work
typer==0.12.3
types-python-dateutil @ file:///home/conda/feedstock_root/build_artifacts/types-python-dateutil_1710589910274/work
typing-inspect==0.9.0
typing-utils @ file:///home/conda/feedstock_root/build_artifacts/typing_utils_1622899189314/work
typing_extensions @ file:///home/conda/feedstock_root/build_artifacts/typing_extensions_1717287769032/work
tzdata @ file:///home/conda/feedstock_root/build_artifacts/python-tzdata_1707747584337/work
ujson==5.10.0
unstructured==0.14.4
unstructured-client==0.23.0
uri-template @ file:///home/conda/feedstock_root/build_artifacts/uri-template_1688655812972/work/dist
urllib3 @ file:///croot/urllib3_1707770551213/work
uvicorn==0.30.1
uvloop==0.19.0
watchfiles==0.22.0
wcwidth @ file:///home/conda/feedstock_root/build_artifacts/wcwidth_1704731205417/work
webcolors @ file:///home/conda/feedstock_root/build_artifacts/webcolors_1717667289718/work
webencodings @ file:///home/conda/feedstock_root/build_artifacts/webencodings_1694681268211/work
websocket-client @ file:///home/conda/feedstock_root/build_artifacts/websocket-client_1713923384721/work
websockets==12.0
wheel==0.43.0
wrapt==1.16.0
yarl==1.9.4
| [E:onnxruntime:Default, env.cc:228 ThreadMain] pthread_setaffinity_np failed for thread: 8353, index: 0, mask: {1, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set. | https://api.github.com/repos/langchain-ai/langchain/issues/22898/comments | 0 | 2024-06-14T13:11:49Z | 2024-06-14T13:23:00Z | https://github.com/langchain-ai/langchain/issues/22898 | 2,353,353,463 | 22,898 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
class PyPDFParser(BaseBlobParser):
"""Load `PDF` using `pypdf`"""
def __init__(
self, password: Optional[Union[str, bytes]] = None, extract_images: bool = False
):
self.password = password
self.extract_images = extract_images
def lazy_parse(self, blob: Blob) -> Iterator[Document]: # type: ignore[valid-type]
"""Lazily parse the blob."""
import pypdf
self.pdf_blob = blob
with blob.as_bytes_io() as pdf_file_obj: # type: ignore[attr-defined]
pdf_reader = pypdf.PdfReader(pdf_file_obj, password=self.password)
yield from [
Document(
page_content=page.extract_text()
+ self._extract_images_from_page(page),
metadata={"source": blob.source, "page": page_number}, # type: ignore[attr-defined]
)
for page_number, page in enumerate(pdf_reader.pages)
]
def _extract_images_from_page(self, page: pypdf._page.PageObject) -> str:
"""Extract images from page and get the text with RapidOCR."""
if not self.extract_images or "/XObject" not in page["/Resources"].keys():
return ""
xObject = page["/Resources"]["/XObject"].get_object() # type: ignore
images = []
for obj in xObject:
# print(f"obj: {xObject[obj]}")
if xObject[obj]["/Subtype"] == "/Image":
if xObject[obj].get("/Filter"):
if isinstance(xObject[obj]["/Filter"], str):
if xObject[obj]["/Filter"][1:] in _PDF_FILTER_WITHOUT_LOSS:
height, width = xObject[obj]["/Height"], xObject[obj]["/Width"]
# print(xObject[obj].get_data())
try:
images.append(
np.frombuffer(xObject[obj].get_data(), dtype=np.uint8).reshape(
height, width, -1
)
)
except Exception as e:
if xObject[obj]["/Filter"][1:] == "CCITTFaxDecode":
import fitz
with self.pdf_blob.as_bytes_io() as pdf_file_obj: # type: ignore[attr-defined]
with fitz.open("pdf", pdf_file_obj.read()) as doc:
pix = doc.load_page(page.page_number).get_pixmap(matrix=fitz.Matrix(1,1), colorspace=fitz.csGRAY)
images.append(pix.tobytes())
else:
warnings.warn(f"Reshape Error: {xObject[obj]}")
elif xObject[obj]["/Filter"][1:] in _PDF_FILTER_WITH_LOSS:
images.append(xObject[obj].get_data())
else:
warnings.warn(f"Unknown PDF Filter: {xObject[obj]["/Filter"][1:]}")
elif isinstance(xObject[obj]["/Filter"], list):
for xObject_filter in xObject[obj]["/Filter"]:
if xObject_filter[1:] in _PDF_FILTER_WITHOUT_LOSS:
height, width = xObject[obj]["/Height"], xObject[obj]["/Width"]
# print(xObject[obj].get_data())
try:
images.append(
np.frombuffer(xObject[obj].get_data(), dtype=np.uint8).reshape(
height, width, -1
)
)
except Exception as e:
if xObject[obj]["/Filter"][1:] == "CCITTFaxDecode":
import fitz
                                        with self.pdf_blob.as_bytes_io() as pdf_file_obj:  # type: ignore[attr-defined]
                                            with fitz.open("pdf", pdf_file_obj.read()) as doc:
                                                pix = doc.load_page(page.page_number).get_pixmap(matrix=fitz.Matrix(1, 1), colorspace=fitz.csGRAY)
                                                images.append(pix.tobytes())
else:
warnings.warn(f"Reshape Error: {xObject[obj]}")
break
elif xObject_filter[1:] in _PDF_FILTER_WITH_LOSS:
images.append(xObject[obj].get_data())
break
else:
warnings.warn(f"Unknown PDF Filter: {xObject_filter[1:]}")
else:
warnings.warn("Can Not Find PDF Filter!")
return extract_from_images_with_rapidocr(images)
### Error Message and Stack Trace (if applicable)
_No response_
### Description
When I use langchain-community, OCR on images in some PDFs raises errors. Based on the source code, I added some handling to the PyPDFParser class, which temporarily solved the problem. Administrators can check whether to add this part of the code in a new version; the complete PyPDFParser class is shown in the Example Code. | When using langchain-community, some PDF images will report errors during OCR | https://api.github.com/repos/langchain-ai/langchain/issues/22892/comments | 0 | 2024-06-14T11:02:04Z | 2024-06-14T11:04:33Z | https://github.com/langchain-ai/langchain/issues/22892 | 2,353,121,701 | 22,892
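To make the fallback logic easier to see in isolation, here is a hedged sketch of the decision the patch implements. The filter-name sets below are restated as assumptions; they mirror the module-level `_PDF_FILTER_WITHOUT_LOSS` / `_PDF_FILTER_WITH_LOSS` constants in langchain-community:

```python
# Assumed filter sets, mirroring langchain-community's PDF constants.
LOSSLESS = {"FlateDecode", "LZWDecode", "CCITTFaxDecode"}
LOSSY = {"DCTDecode", "JPXDecode"}

def image_decode_strategy(filter_name: str) -> str:
    """Decide how an /XObject image stream should be turned into pixels.

    CCITTFaxDecode streams are fax-compressed, so reshaping the raw
    bytes with numpy fails; the workaround rasterizes the page with
    PyMuPDF (fitz) instead of decoding the stream directly.
    """
    name = filter_name.lstrip("/")
    if name == "CCITTFaxDecode":
        return "rasterize page with fitz"
    if name in LOSSLESS:
        return "reshape raw bytes with numpy"
    if name in LOSSY:
        return "pass encoded bytes to OCR"
    return "unknown filter"

print(image_decode_strategy("/CCITTFaxDecode"))  # -> rasterize page with fitz
```

In the real class the CCITTFaxDecode branch is only reached from the `except` handler, i.e. the numpy reshape is still tried first and fitz is used as a fallback.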
### System Info
langchain-community==0.2.4 | When using langchain-community, some PDF images will report errors during OCR | https://api.github.com/repos/langchain-ai/langchain/issues/22892/comments | 0 | 2024-06-14T11:02:04Z | 2024-06-14T11:04:33Z | https://github.com/langchain-ai/langchain/issues/22892 | 2,353,121,701 | 22,892 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
## Issue
To make our document loader integrations as easy to use as possible we need to make sure the docs for them are thorough and standardized. There are two parts to this: updating the document loader docstrings and updating the actual integration docs.
This needs to be done for each DocumentLoader integration, ideally with one PR per DocumentLoader.
Related to broader issues https://github.com/langchain-ai/langchain/issues/21983 and https://github.com/langchain-ai/langchain/issues/22005.
## Docstrings
Each DocumentLoader class docstring should have the sections shown in the [Appendix](#appendix) below. The sections should have input and output code blocks when relevant. See RecursiveUrlLoader [docstrings](https://github.com/langchain-ai/langchain/blob/869523ad728e6b76d77f170cce13925b4ebc3c1e/libs/community/langchain_community/document_loaders/recursive_url_loader.py#L54) and [corresponding API reference](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.recursive_url_loader.RecursiveUrlLoader.html) for an example.
## Doc Pages
Each DocumentLoader [docs page](https://python.langchain.com/v0.2/docs/integrations/document_loaders/) should follow [this template](https://github.com/langchain-ai/langchain/blob/master/libs/cli/langchain_cli/integration_template/docs/document_loaders.ipynb). See [RecursiveUrlLoader](https://python.langchain.com/v0.2/docs/integrations/document_loaders/recursive_url/) for an example.
You can use the `langchain-cli` to quickly get started with a new document loader integration docs page (run from root of repo):
```bash
poetry run pip install -e libs/cli
poetry run langchain-cli integration create-doc --name "foo-bar" --name-class FooBar --component-type "DocumentLoader" --destination-dir ./docs/docs/integrations/document_loaders/
```
where `--name` is the integration package name without the "langchain-" prefix and `--name-class` is the class name without the "Loader" suffix. This will create a template doc with some autopopulated fields at docs/docs/integrations/document_loaders/foo_bar.ipynb.
To build a preview of the docs you can run (from root):
```bash
make docs_clean
make docs_build
cd docs/build/output-new
yarn
yarn start
```
## Appendix
"""
__ModuleName__ document loader integration
# TODO: Replace with relevant packages, env vars.
Setup:
Install ``__package_name__`` and set environment variable ``__MODULE_NAME___API_KEY``.
.. code-block:: bash
pip install -U __package_name__
export __MODULE_NAME___API_KEY="your-api-key"
# TODO: Replace with relevant init params.
Instantiate:
.. code-block:: python
from __module_name__ import __ModuleName__Loader
loader = __ModuleName__Loader(
url = "https://docs.python.org/3.9/",
# otherparams = ...
)
Load:
.. code-block:: python
docs = loader.load()
print(docs[0].page_content[:100])
print(docs[0].metadata)
.. code-block:: python
TODO: Example output
# TODO: Delete if async load is not implemented
Async load:
.. code-block:: python
docs = await loader.aload()
print(docs[0].page_content[:100])
print(docs[0].metadata)
.. code-block:: python
TODO: Example output
Lazy load:
.. code-block:: python
docs = []
docs_lazy = loader.lazy_load()
# async variant:
# docs_lazy = await loader.alazy_load()
for doc in docs_lazy:
docs.append(doc)
print(docs[0].page_content[:100])
print(docs[0].metadata)
.. code-block:: python
TODO: Example output
""" | Standardize DocumentLoader docstrings and integration docs | https://api.github.com/repos/langchain-ai/langchain/issues/22866/comments | 1 | 2024-06-13T21:10:15Z | 2024-07-31T21:46:26Z | https://github.com/langchain-ai/langchain/issues/22866 | 2,352,072,105 | 22,866 |
[
"langchain-ai",
"langchain"
] | ### URL
_No response_
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
_No response_
### Idea or request for content:
_No response_ | Standardize DocumentLoader docstrings and integration docs | https://api.github.com/repos/langchain-ai/langchain/issues/22856/comments | 0 | 2024-06-13T18:22:34Z | 2024-06-13T19:57:10Z | https://github.com/langchain-ai/langchain/issues/22856 | 2,351,793,656 | 22,856 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from flask import abort  # assumption: `abort` as used below comes from Flask
from langchain.memory import ConversationBufferWindowMemory
from langchain_community.chat_message_histories.redis import RedisChatMessageHistory

try:
    message_history = RedisChatMessageHistory(
        session_id="12345678", url="redis://localhost:6379", ttl=600
    )
except Exception as e:
    abort(500, f"Error occurred: {str(e)}")

# pdf_docsearch is a vector store built elsewhere
retriever = pdf_docsearch.as_retriever(search_type="similarity", search_kwargs={"k": 4})
memory = ConversationBufferWindowMemory(
    memory_key="chat_history", chat_memory=message_history,
    input_key="question", output_key="answer",
    return_messages=True, k=20,
)
```
### Error Message and Stack Trace (if applicable)
Error occurred: 'cluster_enabled'
### Description
I'm implementing long-term memory for a chatbot using LangChain and a Redis database, but I'm running into issues with the Redis client connection, particularly in the `redis.py` file, where `cluster_info` appears to be empty in standalone mode.
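A sketch of the kind of defensive lookup that would avoid the `'cluster_enabled'` error: reading the `INFO` response with `.get()` instead of direct indexing, since a standalone server (or a proxy in front of it) may omit that key. The helper name is illustrative and not the actual langchain-community code:

```python
def is_cluster_enabled(info: dict) -> bool:
    """Return True only when the server's INFO response reports cluster mode.

    Standalone servers (or Redis-compatible proxies) may omit the
    'cluster_enabled' key entirely, so a plain info["cluster_enabled"]
    lookup can raise KeyError.
    """
    return bool(info.get("cluster_enabled", 0))


# Simulated INFO responses:
standalone_info = {"redis_version": "7.2.0"}  # no 'cluster_enabled' key at all
cluster_mode_info = {"redis_version": "7.2.0", "cluster_enabled": 1}
```

With this, `is_cluster_enabled(standalone_info)` returns `False` instead of raising the `KeyError` shown above.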
### System Info
Python 3.10
langchain-core 0.1.43
langchain-community 0.0.32 | RedisChatMessageHistory encountering issues in Redis standalone mode on Windows. | https://api.github.com/repos/langchain-ai/langchain/issues/22845/comments | 1 | 2024-06-13T10:37:38Z | 2024-07-25T20:04:17Z | https://github.com/langchain-ai/langchain/issues/22845 | 2,350,800,284 | 22,845 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
Context:
* MessagePlaceholder can be optional or non optional
* Current LangChain API for input variables doesn't distinguish between possible input variables vs. required input variables
See: https://github.com/langchain-ai/langchain/pull/21640
## Requirements
* get_input_schema() should reflect optional and required inputs
* Expose another property to fetch either all required or all possible input variables (with an explanation of why that is the correct approach); alternatively, delegate to `get_input_schema()` and make the semantics of `input_variables` clear (e.g., all possible values)
```python
from langchain.chains import LLMChain
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages(
    [MessagesPlaceholder("history", optional=True), ("user", "{input}")]
)
model = ChatOpenAI()
chain = LLMChain(llm=model, prompt=prompt)
chain({"input": "what is your name"})
prompt.get_input_schema()
```
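As a pure-Python sketch of the requested semantics (all names here are hypothetical, not a proposed API): an optional `MessagesPlaceholder` should contribute to the set of *possible* input variables but not to the *required* ones.

```python
from dataclasses import dataclass


@dataclass
class Variable:
    """Illustrative stand-in for one prompt input variable."""
    name: str
    optional: bool = False


def split_input_variables(variables: list) -> tuple:
    """Return (required, possible) sets of variable names."""
    possible = {v.name for v in variables}
    required = {v.name for v in variables if not v.optional}
    return required, possible


# Mirrors the prompt above: optional "history" placeholder + required "input".
required, possible = split_input_variables(
    [Variable("history", optional=True), Variable("input")]
)
```

Here `required` is `{"input"}` while `possible` is `{"history", "input"}`, which is exactly the distinction the current `input_variables` API fails to express.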
| Spec out API for all required vs. all possible input variables | https://api.github.com/repos/langchain-ai/langchain/issues/22832/comments | 2 | 2024-06-12T20:14:59Z | 2024-07-17T21:34:51Z | https://github.com/langchain-ai/langchain/issues/22832 | 2,349,606,960 | 22,832 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
https://github.com/langchain-ai/langchain/pull/15659 (introduced in `langchain-community==0.0.20`) removed the document id from AzureSearch retrieved Documents, which was a breaking change. Was there are a reason this was done? If not let's add it back. | Add Document ID back to AzureSearch Documents | https://api.github.com/repos/langchain-ai/langchain/issues/22827/comments | 1 | 2024-06-12T17:11:45Z | 2024-06-12T18:07:37Z | https://github.com/langchain-ai/langchain/issues/22827 | 2,349,293,411 | 22,827 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/tutorials/rag/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
In section **5. Retrieval and Generation: Generate** under [Built-in chains](https://python.langchain.com/v0.2/docs/tutorials/rag/#built-in-chains) there is an error in the code example:
from **langchain.chains** import create_retrieval_chain
should be changed to
from **langchain.chains.retrieval** import create_retrieval_chain
### Idea or request for content:
_No response_ | DOC: <Issue related to /v0.2/docs/tutorials/rag/> | https://api.github.com/repos/langchain-ai/langchain/issues/22826/comments | 11 | 2024-06-12T17:04:38Z | 2024-06-13T14:03:57Z | https://github.com/langchain-ai/langchain/issues/22826 | 2,349,275,655 | 22,826 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
# Context
Currently, LangChain supports Pydantic 2 only through the v1 namespace.
The plan is to transition for Pydantic 2 with release 0.3.0 of LangChain, and drop support for Pydantic 1.
LangChain has around ~1000 pydantic objects across different packages While LangChain uses a number of deprecated features, one of the harder things to update is the usage of a vanilla `@root_validator()` (which is used ~250 times across the code base).
The goal of this issue is to do as much preliminary work as possible to help prepare for the migration from pydantic v1 to pydantic 2.
To help prepare for the migration, we'll need to refactor each occurrence of a vanilla `root_validator()` to one of the following 3 variants (depending on what makes sense in the context of the model):
1. `root_validator(pre=True)` -- pre initialization validator
2. `root_validator(pre=False, skip_on_failure=True)` -- post initialization validator
3. `root_validator(pre=True)` AND `root_validator(pre=False, skip_on_failure=True)` to include both pre initialization and post initialization validation.
## Guidelines
- Pre-initialization is most useful for **creating defaults** for values, especially when the defaults cannot be supplied per field individually.
- Post-initialization is most useful for doing more complex validation, especially one that involves multiple fields.
## What not to do
* Do **NOT** upgrade to `model_validator`. We're trying to break the work into small chunks that can be done while we're still using Pydantic v1 functionality!
* Do **NOT** create `field_validators` when doing the refactor.
## Simple Example
```python
class Foo(BaseModel):
    @root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
        values["api_key"] = get_from_dict_or_env(
            values, "some_api_key", "SOME_API_KEY", default=""
        )
        if values["temperature"] is not None and not 0 <= values["temperature"] <= 1:
            raise ValueError("temperature must be in the range [0.0, 1.0]")
        return values
```
## After refactor
```python
class Foo(BaseModel):
    @root_validator(pre=True)
    def pre_init(cls, values):
        # Logic for setting defaults goes in the pre_init validator.
        # While in some cases the logic could be pulled into the `Field` definition
        # directly, it's perfectly fine for this refactor to keep the changes minimal
        # and just move the logic into the pre_init validator.
        values["api_key"] = get_from_dict_or_env(
            values, "some_api_key", "SOME_API_KEY", default=""
        )
        return values

    @root_validator(pre=False, skip_on_failure=True)
    def post_init(cls, values):
        # Post-init validation runs after the individual fields have been
        # validated, so it can access the fields and their values (e.g., temperature).
        # If this logic were part of the pre_init validator, it would raise
        # a KeyError exception since `temperature` does not exist in the values
        # dictionary at that point.
        if values["temperature"] is not None and not 0 <= values["temperature"] <= 1:
            raise ValueError("temperature must be in the range [0.0, 1.0]")
        return values
```
## Example Refactors
Here are some actual for the refactors https://gist.github.com/eyurtsev/be30ddbc54dcdc02f98868eacb24b2a1
If you're feeling especially creative, you could try to use the example refactors, an LLM chain built with an appropriate prompt to attempt to automatically fix this code using LLMs!
## Vanilla `root_validator`
- [x] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/agent_toolkits/connery/toolkit.py#L22
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/agents/openai_assistant/base.py#L212
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chains/llm_requests.py#L62
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/anyscale.py#L104
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/azure_openai.py#L108
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/baichuan.py#L145
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/baidu_qianfan_endpoint.py#L174
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/coze.py#L119
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/dappier.py#L78
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/deepinfra.py#L240
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/edenai.py#L303
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/ernie.py#L111
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/fireworks.py#L115
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/google_palm.py#L263
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/huggingface.py#L79
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/hunyuan.py#L193
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/jinachat.py#L220
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/kinetica.py#L344
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/konko.py#L87
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/litellm.py#L242
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/moonshot.py#L28
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/octoai.py#L50
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/openai.py#L277
- [x] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/pai_eas_endpoint.py#L70
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/premai.py#L229
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/solar.py#L40
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/sparkllm.py#L198
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/tongyi.py#L276
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/vertexai.py#L227
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/yuan2.py#L168
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/zhipuai.py#L264
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/cross_encoders/sagemaker_endpoint.py#L98
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/document_compressors/dashscope_rerank.py#L35
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/document_compressors/volcengine_rerank.py#L42
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/document_loaders/apify_dataset.py#L52
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/aleph_alpha.py#L83
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/anyscale.py#L36
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/awa.py#L19
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/azure_openai.py#L57
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/baidu_qianfan_endpoint.py#L51
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/bedrock.py#L76
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/clarifai.py#L51
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/cohere.py#L57
- [x] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/dashscope.py#L113
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/deepinfra.py#L62
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/edenai.py#L38
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/embaas.py#L64
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/ernie.py#L34
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/fastembed.py#L57
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/gigachat.py#L80
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/google_palm.py#L65
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/gpt4all.py#L31
- [x] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/huggingface_hub.py#L55
- [x] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/jina.py#L33
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/laser.py#L44
- [x] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/llamacpp.py#L65
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/llm_rails.py#L39
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/localai.py#L196
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/minimax.py#L87
- [x] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/mosaicml.py#L49
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/nemo.py#L61
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/nlpcloud.py#L33
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/oci_generative_ai.py#L88
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/octoai_embeddings.py#L41
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/openai.py#L285
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/premai.py#L35
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/sagemaker_endpoint.py#L118
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/sambanova.py#L45
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/solar.py#L83
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/vertexai.py#L36
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/volcengine.py#L46
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/yandex.py#L78
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/ai21.py#L76
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/aleph_alpha.py#L170
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/anthropic.py#L77
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/anthropic.py#L188
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/anyscale.py#L95
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/aphrodite.py#L160
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/baichuan.py#L34
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/baidu_qianfan_endpoint.py#L79
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/bananadev.py#L66
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/beam.py#L98
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/bedrock.py#L392
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/bedrock.py#L746
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/cerebriumai.py#L65
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/clarifai.py#L56
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/cohere.py#L98
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/ctransformers.py#L60
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/ctranslate2.py#L53
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/deepinfra.py#L46
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/deepsparse.py#L58
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/edenai.py#L75
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/exllamav2.py#L61
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/fireworks.py#L64
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/friendli.py#L69
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/gigachat.py#L116
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/google_palm.py#L110
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/gooseai.py#L89
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/gpt4all.py#L130
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/huggingface_endpoint.py#L165
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/huggingface_hub.py#L64
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/huggingface_text_gen_inference.py#L137
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/llamacpp.py#L136
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/manifest.py#L19
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/minimax.py#L74
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/moonshot.py#L82
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/mosaicml.py#L67
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/nlpcloud.py#L59
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/oci_data_science_model_deployment_endpoint.py#L50
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/oci_generative_ai.py#L73
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/octoai_endpoint.py#L69
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/opaqueprompts.py#L41
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/openai.py#L272
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/openai.py#L821
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/openai.py#L1028
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/openlm.py#L19
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/pai_eas_endpoint.py#L55
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/petals.py#L89
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/pipelineai.py#L68
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/predictionguard.py#L56
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/replicate.py#L100
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/rwkv.py#L100
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/sagemaker_endpoint.py#L251
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/sambanova.py#L243
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/sambanova.py#L756
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/solar.py#L71
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/sparkllm.py#L57
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/stochasticai.py#L61
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/symblai_nebula.py#L68
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/tongyi.py#L201
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/vertexai.py#L226
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/vertexai.py#L413
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/vllm.py#L76
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/volcengine_maas.py#L55
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/watsonxllm.py#L118
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/writer.py#L72
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/yandex.py#L77
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/retrievers/arcee.py#L73
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/retrievers/google_cloud_documentai_warehouse.py#L51
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/retrievers/pinecone_hybrid_search.py#L139
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/retrievers/qdrant_sparse_vector_retriever.py#L52
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/retrievers/thirdai_neuraldb.py#L113
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/tools/connery/service.py#L23
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/tools/connery/tool.py#L66
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/apify.py#L22
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/arcee.py#L54
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/arxiv.py#L75
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/asknews.py#L27
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/awslambda.py#L37
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/bibtex.py#L43
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/cassandra_database.py#L485
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/clickup.py#L326
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/dalle_image_generator.py#L92
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/dataforseo_api_search.py#L45
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/dataherald.py#L29
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/duckduckgo_search.py#L44
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/github.py#L45
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/gitlab.py#L37
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/golden_query.py#L31
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/google_finance.py#L32
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/google_jobs.py#L32
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/google_lens.py#L38
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/google_places_api.py#L46
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/google_scholar.py#L53
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/google_search.py#L72
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/google_serper.py#L49
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/google_trends.py#L36
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/jira.py#L23
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/merriam_webster.py#L35
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/outline.py#L30
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/polygon.py#L20
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/pubmed.py#L51
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/reddit_search.py#L33
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/rememberizer.py#L16
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/searchapi.py#L35
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/searx_search.py#L232
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/semanticscholar.py#L53
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/serpapi.py#L60
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/stackexchange.py#L22
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/tensorflow_datasets.py#L63
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/twilio.py#L51
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/wikidata.py#L95
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/wikipedia.py#L29
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/wolfram_alpha.py#L28
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/vectorstores/azuresearch.py#L1562
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/experimental/langchain_experimental/open_clip/open_clip.py#L17
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/agents/agent.py#L773
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/agents/agent.py#L977
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/agents/agent.py#L991
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/agents/openai_assistant/base.py#L213
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/base.py#L228
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/combine_documents/map_rerank.py#L109
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/conversation/base.py#L48
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/conversational_retrieval/base.py#L483
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/elasticsearch_database/base.py#L59
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/moderation.py#L43
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/qa_with_sources/vector_db.py#L64
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/retrieval_qa/base.py#L287
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/retrieval_qa/base.py#L295
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/router/llm_router.py#L27
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/sequential.py#L155
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/memory/buffer.py#L85
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/memory/summary.py#L76
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/memory/summary_buffer.py#L46
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/output_parsers/combining.py#L18
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/output_parsers/enum.py#L15
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/retrievers/document_compressors/embeddings_filter.py#L48
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/ai21/langchain_ai21/ai21_base.py#L21
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/ai21/langchain_ai21/chat_models.py#L71
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/anthropic/langchain_anthropic/chat_models.py#L599
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/anthropic/langchain_anthropic/llms.py#L77
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/anthropic/langchain_anthropic/llms.py#L161
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/fireworks/langchain_fireworks/chat_models.py#L322
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/fireworks/langchain_fireworks/embeddings.py#L27
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/groq/langchain_groq/chat_models.py#L170
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/huggingface/langchain_huggingface/chat_models/huggingface.py#L325
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/huggingface/langchain_huggingface/embeddings/huggingface_endpoint.py#L49
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/huggingface/langchain_huggingface/llms/huggingface_endpoint.py#L160
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/ibm/langchain_ibm/embeddings.py#L68
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/ibm/langchain_ibm/llms.py#L128
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/mistralai/langchain_mistralai/chat_models.py#L432
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/mistralai/langchain_mistralai/embeddings.py#L67
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/openai/langchain_openai/chat_models/azure.py#L115
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/openai/langchain_openai/chat_models/base.py#L364
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/openai/langchain_openai/embeddings/azure.py#L64
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/openai/langchain_openai/embeddings/base.py#L229
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/openai/langchain_openai/llms/azure.py#L87
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/openai/langchain_openai/llms/base.py#L156
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/together/langchain_together/chat_models.py#L74
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/together/langchain_together/embeddings.py#L143
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/together/langchain_together/llms.py#L86
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/upstage/langchain_upstage/chat_models.py#L82
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/upstage/langchain_upstage/embeddings.py#L145
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/voyageai/langchain_voyageai/embeddings.py#L51
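The refactor these items track can be sketched as a before/after (a hypothetical example; the tiny `root_validator` stand-in below lets the sketch run without pydantic installed — only the call-site shape matters):

```python
# Vanilla usage to be removed:    @root_validator()
# Explicit, pydantic-2-friendly: @root_validator(pre=True)
#                            or: @root_validator(pre=False, skip_on_failure=True)

def root_validator(pre=False, skip_on_failure=False):
    """Stand-in for pydantic.root_validator; records the explicit arguments."""
    def wrap(fn):
        fn.pre = pre
        fn.skip_on_failure = skip_on_failure
        return fn
    return wrap

class ExampleWrapper:
    # Explicit arguments instead of the ambiguous vanilla @root_validator().
    @root_validator(pre=False, skip_on_failure=True)
    def validate_environment(cls, values):
        values.setdefault("api_key", "from-env")
        return values

print(ExampleWrapper.validate_environment.skip_on_failure)  # True
```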
| Prepare for pydantic 2 migration by refactoring vanilla @root_validator() usage | https://api.github.com/repos/langchain-ai/langchain/issues/22819/comments | 1 | 2024-06-12T14:09:36Z | 2024-07-05T16:25:26Z | https://github.com/langchain-ai/langchain/issues/22819 | 2,348,881,003 | 22,819 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.InsufficientPrivilege) permission denied to create extension "vector"
HINT: Must be superuser to create this extension.
[SQL: BEGIN;SELECT pg_advisory_xact_lock(1573678846307946496);CREATE EXTENSION IF NOT EXISTS vector;COMMIT;]
(Background on this error at: https://sqlalche.me/e/20/f405)
### Error Message and Stack Trace (if applicable)
ERROR:Failed to create vector extension: (psycopg2.errors.InsufficientPrivilege) permission denied to create extension "vector"
HINT: Must be superuser to create this extension.
[SQL: BEGIN;SELECT pg_advisory_xact_lock(1573678846307946496);CREATE EXTENSION IF NOT EXISTS vector;COMMIT;]
(Background on this error at: https://sqlalche.me/e/20/f405)
2024-06-12,16:45:07 start_local:828 - ERROR:Exception on /codebits/api/v1/parse [POST]
### Description
ERROR:Failed to create vector extension: (psycopg2.errors.InsufficientPrivilege) permission denied to create extension "vector"
HINT: Must be superuser to create this extension.
[SQL: BEGIN;SELECT pg_advisory_xact_lock(1573678846307946496);CREATE EXTENSION IF NOT EXISTS vector;COMMIT;]
(Background on this error at: https://sqlalche.me/e/20/f405)
2024-06-12,16:45:07 start_local:828 - ERROR:Exception on /codebits/api/v1/parse [POST]
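A common workaround for this class of error (assuming a superuser or other sufficiently privileged role is available) is to create the extension once, manually, so the application's database role never has to:

```sql
-- run once as a superuser (or a role with the required privilege on the database)
CREATE EXTENSION IF NOT EXISTS vector;
```

After that, the `CREATE EXTENSION IF NOT EXISTS vector` statement the vector store issues on startup is a no-op. Whether the LangChain store can be told to skip extension creation entirely depends on the integration version, so verify that against your installed package.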
### System Info
<img width="733" alt="Screenshot 2024-06-12 at 4 55 43 PM" src="https://github.com/langchain-ai/langchain/assets/108388565/74ce6f4f-491c-41b0-98f6-e0859745aa5a">
MAC
python 3.12 | I am getting this error | https://api.github.com/repos/langchain-ai/langchain/issues/22811/comments | 4 | 2024-06-12T11:27:58Z | 2024-06-13T05:23:01Z | https://github.com/langchain-ai/langchain/issues/22811 | 2,348,531,542 | 22,811 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_huggingface import (
    ChatHuggingFace,
    HuggingFacePipeline,
)

chat_llm = ChatHuggingFace(
    llm=HuggingFacePipeline.from_model_id(
        model_id="path/to/your/local/model",  # I downloaded Meta-Llama-3-8B
        task="text-generation",
        device_map="auto",
        model_kwargs={"temperature": 0.0, "local_files_only": True},
    )
)
```
### Error Message and Stack Trace (if applicable)
```bash
src/resources/predictor.py:55: in load
self.llm = ChatHuggingFace(
/opt/poetry-cache/virtualenvs/sagacify-example-llm-8EXZSVYp-py3.10/lib/python3.10/site-packages/langchain_huggingface/chat_models/huggingface.py:169: in __init__
self._resolve_model_id()
/opt/poetry-cache/virtualenvs/sagacify-example-llm-8EXZSVYp-py3.10/lib/python3.10/site-packages/langchain_huggingface/chat_models/huggingface.py:295: in _resolve_model_id
available_endpoints = list_inference_endpoints("*")
/opt/poetry-cache/virtualenvs/sagacify-example-llm-8EXZSVYp-py3.10/lib/python3.10/site-packages/huggingface_hub/hf_api.py:7081: in list_inference_endpoints
user = self.whoami(token=token)
/opt/poetry-cache/virtualenvs/sagacify-example-llm-8EXZSVYp-py3.10/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py:114: in _inner_fn
return fn(*args, **kwargs)
/opt/poetry-cache/virtualenvs/sagacify-example-llm-8EXZSVYp-py3.10/lib/python3.10/site-packages/huggingface_hub/hf_api.py:1390: in whoami
headers=self._build_hf_headers(
/opt/poetry-cache/virtualenvs/sagacify-example-llm-8EXZSVYp-py3.10/lib/python3.10/site-packages/huggingface_hub/hf_api.py:8448: in _build_hf_headers
return build_hf_headers(
/opt/poetry-cache/virtualenvs/sagacify-example-llm-8EXZSVYp-py3.10/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py:114: in _inner_fn
return fn(*args, **kwargs)
/opt/poetry-cache/virtualenvs/sagacify-example-llm-8EXZSVYp-py3.10/lib/python3.10/site-packages/huggingface_hub/utils/_headers.py:124: in build_hf_headers
token_to_send = get_token_to_send(token)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
token = True
def get_token_to_send(token: Optional[Union[bool, str]]) -> Optional[str]:
"""Select the token to send from either `token` or the cache."""
# Case token is explicitly provided
if isinstance(token, str):
return token
# Case token is explicitly forbidden
if token is False:
return None
# Token is not provided: we get it from local cache
cached_token = get_token()
# Case token is explicitly required
if token is True:
if cached_token is None:
> raise LocalTokenNotFoundError(
"Token is required (`token=True`), but no token found. You"
" need to provide a token or be logged in to Hugging Face with"
" `huggingface-cli login` or `huggingface_hub.login`. See"
" https://huggingface.co/settings/tokens."
)
E huggingface_hub.errors.LocalTokenNotFoundError: Token is required (`token=True`), but no token found. You need to provide a token or be logged in to Hugging Face with `huggingface-cli login` or `huggingface_hub.login`. See https://huggingface.co/settings/tokens.
/opt/poetry-cache/virtualenvs/sagacify-example-llm-8EXZSVYp-py3.10/lib/python3.10/site-packages/huggingface_hub/utils/_headers.py:158: LocalTokenNotFoundError
```
### Description
I am trying to use the `langchain-huggingface` library to instantiate a `ChatHuggingFace` object with a `HuggingFacePipeline` `llm` parameter which targets a locally downloaded model (here, `Meta-Llama-3-8B`).
I expect the instantiation to work fine even though I don't have a HuggingFace token setup in my environment as I use a local model.
Instead, the instantiation fails because it tries to read a token in order to list the available endpoints under my HuggingFace account.
After investigation, I think this line of code should be at line 456 instead of line 443 in file `langchain/libs/partners/huggingface/langchain_huggingface/chat_models/huggingface.py`
```python
def _resolve_model_id(self) -> None:
    """Resolve the model_id from the LLM's inference_server_url"""

    from huggingface_hub import list_inference_endpoints  # type: ignore[import]

    available_endpoints = list_inference_endpoints("*")  # Line 443: this line is not at the right place

    if _is_huggingface_hub(self.llm) or (
        hasattr(self.llm, "repo_id") and self.llm.repo_id
    ):
        self.model_id = self.llm.repo_id
        return
    elif _is_huggingface_textgen_inference(self.llm):
        endpoint_url: Optional[str] = self.llm.inference_server_url
    elif _is_huggingface_pipeline(self.llm):
        self.model_id = self.llm.model_id
        return  # my code lands in this case, which never uses available_endpoints
    else:
        endpoint_url = self.llm.endpoint_url

    # Line 456: the call above should be made here instead
    for endpoint in available_endpoints:
        if endpoint.url == endpoint_url:
            self.model_id = endpoint.repository

    if not self.model_id:
        raise ValueError(
            "Failed to resolve model_id:"
            f"Could not find model id for inference server: {endpoint_url}"
            "Make sure that your Hugging Face token has access to the endpoint."
        )
```
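The gist of the proposed reordering can be demonstrated in plain Python (illustrative stand-ins only — `list_endpoints` below is a dummy for the token-requiring `list_inference_endpoints` call):

```python
calls = []

def list_endpoints():
    # Stand-in for huggingface_hub.list_inference_endpoints, which needs a HF token.
    calls.append("token-required lookup")
    return []

def resolve_model_id(kind, local_model_id=None, endpoint_url=None):
    if kind == "pipeline":
        # Local pipeline: resolve from the pipeline itself, no token needed.
        return local_model_id
    # Only endpoint-backed LLMs ever need the endpoint listing.
    for endpoint in list_endpoints():
        if endpoint.url == endpoint_url:
            return endpoint.repository
    return None

model_id = resolve_model_id("pipeline", local_model_id="Meta-Llama-3-8B")
print(model_id)  # "Meta-Llama-3-8B"
print(calls)     # [] -> the token-requiring call was never made
```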
### System Info
```bash
huggingface-hub 0.23.2 Client library to download and publish models, datasets and other repos on the huggingface.co hub
langchain 0.2.1 Building applications with LLMs through composability
langchain-core 0.2.2 Building applications with LLMs through composability
langchain-huggingface 0.0.3 An integration package connecting Hugging Face and LangChain
langchain-text-splitters 0.2.0 LangChain text splitting utilities
sentence-transformers 3.0.0 Multilingual text embeddings
tokenizers 0.19.1
transformers 4.41.2 State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow
```
platform: `linux`
python: `Python 3.10.12` | ChatHuggingFace using local model with HuggingFacePipeline wrongly checks for available inference endpoints | https://api.github.com/repos/langchain-ai/langchain/issues/22804/comments | 8 | 2024-06-12T07:52:24Z | 2024-07-30T07:53:26Z | https://github.com/langchain-ai/langchain/issues/22804 | 2,348,079,651 | 22,804 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
llm = ChatGoogleGenerativeAI(model="gemini-1.5-pro", streaming=True, max_tokens=2524)

default_chain = LLMChain(
    prompt=DEFAULT_PROMPT,
    llm=self.llm,
    verbose=False,
)

default_chain.ainvoke({"input": rephrased_question['text']}, config={"callbacks": [callback]})

async def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
    """Rewrite on_llm_new_token to send token to client."""
    await self.send(token)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I have initialized a LangChain chain as seen above, where `callback` is a class with an `on_llm_new_token` method.
To call the chain I use `ainvoke`.
If I use the Anyscale LLM class or VLLMOpenAI, the response is streamed correctly; with Google, however, this is not the case.
Is there a bug in my code? Is there some other parameter I should pass to ChatGoogleGenerativeAI, or does Google not support streaming?
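For reference, the callback mechanism the snippet relies on can be sketched without LangChain installed (the `AsyncCallbackHandler` base below is a hypothetical stand-in for `langchain_core.callbacks.AsyncCallbackHandler`):

```python
import asyncio
from typing import Any

class AsyncCallbackHandler:
    # Stand-in base class; the real one defines many more hooks.
    async def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
        pass

class StreamingHandler(AsyncCallbackHandler):
    def __init__(self) -> None:
        self.tokens: list[str] = []

    async def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
        # In the report this forwards each token to a client via self.send(token).
        self.tokens.append(token)

async def fake_llm_stream(handler: AsyncCallbackHandler) -> None:
    # A streaming-capable LLM calls the hook once per generated token.
    for token in ["Hello", ", ", "world"]:
        await handler.on_llm_new_token(token)

handler = StreamingHandler()
asyncio.run(fake_llm_stream(handler))
print(handler.tokens)  # ['Hello', ', ', 'world']
```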
### System Info
langchain 0.1.0
langchain-community 0.0.11
langchain-core 0.1.9
langchain-google-genai 1.0.1
langchainhub 0.1.15
langsmith 0.0.92
| ChatGoogleGenerativeAI does not support streaming | https://api.github.com/repos/langchain-ai/langchain/issues/22802/comments | 2 | 2024-06-12T06:36:40Z | 2024-06-12T08:31:54Z | https://github.com/langchain-ai/langchain/issues/22802 | 2,347,931,743 | 22,802 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/tutorials/sql_qa/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
In the https://python.langchain.com/v0.2/docs/tutorials/sql_qa/#dealing-with-high-cardinality-columns section, after defining `retriever_tool`, the tutorial should add the tool to the tool list with `tools.append(retriever_tool)`. Otherwise the agent will not know the `retriever_tool` exists.
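Concretely, with stand-ins for the tutorial's objects (so the sketch runs without LangChain installed), the missing step looks like:

```python
class Tool:
    # Stand-in for a LangChain tool; only the name matters for this sketch.
    def __init__(self, name: str) -> None:
        self.name = name

# Tools the tutorial already gives the agent (illustrative names).
tools = [Tool("sql_db_query"), Tool("sql_db_schema")]

# The retriever built in the "high-cardinality columns" section.
retriever_tool = Tool("search_proper_nouns")

# The step the tutorial omits: without it, the agent never sees the retriever.
tools.append(retriever_tool)

print([t.name for t in tools])
```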
### Idea or request for content:
_No response_ | DOC: <Issue related to /v0.2/docs/tutorials/sql_qa/> | https://api.github.com/repos/langchain-ai/langchain/issues/22798/comments | 1 | 2024-06-12T05:12:55Z | 2024-06-17T12:57:18Z | https://github.com/langchain-ai/langchain/issues/22798 | 2,347,825,260 | 22,798 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
import langchain
from langchain_community.chat_models import ChatHunyuan
print(langchain.__version__)
hunyuan_app_id = "******"
hunyuan_secret_id = "*********"
hunyuan_secret_key = "*************"
llm_tongyi = ChatHunyuan(streaming=True, hunyuan_app_id=hunyuan_app_id,
                         hunyuan_secret_id=hunyuan_secret_id,
                         hunyuan_secret_key=hunyuan_secret_key)
print(llm_tongyi.invoke("你好啊"))
### Error Message and Stack Trace (if applicable)
ValueError: Error from Hunyuan api response: {'note': '以上内容为AI生成,不代表开发者立场,请勿删除或修改本标记', 'choices': [{'finish_reason': 'stop'}], 'created': '1718155233', 'id': '12390d63-7be5-4dbe-b567-183f3067bc75', 'usage': {'prompt_tokens': 0, 'completion_tokens': 0, 'total_tokens': 0}, 'error': {'code': 2001, 'message': '鉴权失败:[request id:12390d63-7be5-4dbe-b567-183f3067bc75]signature calculated is different from client signature'}}
### Description
Hunyuan requests whose messages include Chinese characters fail with a signature error.
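This is consistent with a byte-encoding mismatch in the request signing (illustration only — the key and payload below are made up): an HMAC signature is computed over bytes, so if the client serializes non-ASCII text one way but signs another, the client's and server's signatures diverge:

```python
import hashlib
import hmac

key = b"secret"
payload = '{"query": "你好啊"}'

# The same string under two different byte serializations:
sig_utf8 = hmac.new(key, payload.encode("utf-8"), hashlib.sha256).hexdigest()
sig_escaped = hmac.new(key, payload.encode("unicode_escape"), hashlib.sha256).hexdigest()

print(sig_utf8 == sig_escaped)  # False -> "signature calculated is different from client signature"
```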
### System Info
langchain version 0.1.9
windows
3.9.13 | hunyuan message include chinese signature error | https://api.github.com/repos/langchain-ai/langchain/issues/22795/comments | 0 | 2024-06-12T03:31:57Z | 2024-06-12T03:34:28Z | https://github.com/langchain-ai/langchain/issues/22795 | 2,347,725,322 | 22,795 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
```python
from pydantic.v1 import BaseModel
from pydantic import BaseModel as BaseModelV2
class Answer(BaseModel):
answer: str
class Answer2(BaseModelV2):
""""The answer."""
answer: str
from langchain_openai import ChatOpenAI
model = ChatOpenAI()
model.with_structured_output(Answer).invoke('the answer is foo') # <-- Returns pydantic object
model.with_structured_output(Answer2).invoke('the answer is foo') # <--- Returns dict
``` | with_structured_output format depends on whether we're using pydantic proper or pydantic.v1 namespace | https://api.github.com/repos/langchain-ai/langchain/issues/22782/comments | 2 | 2024-06-11T18:30:58Z | 2024-06-14T21:54:27Z | https://github.com/langchain-ai/langchain/issues/22782 | 2,347,043,210 | 22,782 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
import dotenv
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.output_parsers import StrOutputParser

dotenv.load_dotenv()

llm = ChatOpenAI(
    model="gpt-4",
    temperature=0.2,
    # NOTE: setting max_tokens to "100" works. Setting to 8192 or something slightly lower does not.
    max_tokens=8160,
)

output_parser = StrOutputParser()

prompt_template = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant. Answer all questions to the best of your ability."),
    MessagesPlaceholder(variable_name="messages"),
])

chain = prompt_template | llm | output_parser

response = chain.invoke({
    "messages": [
        HumanMessage(content="what llm are you 1? what llm are you 2? what llm are you 3? what llm are you 4? what llm are you 5? what llm are you 6?"),
    ],
})

print(response)
```
### Error Message and Stack Trace (if applicable)
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 8192 tokens. However, you requested 8235 tokens (75 in the messages, 8160 in the completion). Please reduce the length of the messages or completion.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
### Description
`max_tokens` does not correctly account for the user prompt.
If you specify a `max_tokens` of, say, `100`, it appears to work — not because the prompt is accounted for, but simply because there is spare room left in the context window to expand into. With any given prompt, it will produce the expected result.
However, if you specify a `max_tokens` near the model limit (for GPT-4, e.g. "8192" or "8100"), the request fails.
This means `max_tokens` is effectively not implemented correctly.
langchain==0.1.20
langchain-aws==0.1.4
langchain-community==0.0.38
langchain-core==0.1.52
langchain-google-vertexai==1.0.3
langchain-openai==0.1.7
langchain-text-splitters==0.0.2
platform mac
Python 3.11.6 | [FEATURE REQUEST] langchain-openai - max_tokens (vs max_context?) ability to use full LLM contexts and account for user-messages automatically. | https://api.github.com/repos/langchain-ai/langchain/issues/22778/comments | 2 | 2024-06-11T14:48:51Z | 2024-06-12T16:30:29Z | https://github.com/langchain-ai/langchain/issues/22778 | 2,346,632,699 | 22,778 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [x] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from langchain_huggingface.llms import HuggingFacePipeline
tokenizer = AutoTokenizer.from_pretrained('microsoft/Phi-3-mini-128k-instruct')
model = AutoModelForCausalLM.from_pretrained('microsoft/Phi-3-mini-128k-instruct', device_map='cuda:0', trust_remote_code=True)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=100, top_k=50, temperature=0.1, do_sample=True)
llm = HuggingFacePipeline(pipeline=pipe)
print(llm.model_id)
# 'gpt2' (expected 'microsoft/Phi-3-mini-128k-instruct')
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
As mentioned in the documentation,
> [They (HuggingFacePipeline) can also be loaded by passing in an existing transformers pipeline directly](https://python.langchain.com/v0.2/docs/integrations/llms/huggingface_pipelines/)
But the implementation seems incomplete: the `model_id` attribute always reports `gpt2`, no matter which model you load. Since the documentation's example uses `gpt2`, which is the default model, the bug is invisible on first review. Try any other Hugging Face model (for example, the code above) and the problem shows.
Only `gpt2` is ever reported, regardless of the pipeline used to initialize `HuggingFacePipeline`.
That said, the correct model does appear to be loaded: invoking the model with a prompt generates the expected response.
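A pure-Python stand-in mimicking the reported behavior (no transformers required), plus a likely workaround — assuming `model_id` is an ordinary constructor field, as its `gpt2` default suggests:

```python
class FakePipeline:
    # Stand-in for a transformers pipeline; only the model name matters here.
    def __init__(self, model_name: str) -> None:
        self.model_name = model_name

class FakeHuggingFacePipeline:
    # Mimics the bug: model_id keeps its default instead of being derived
    # from the pipeline passed in.
    def __init__(self, pipeline, model_id: str = "gpt2") -> None:
        self.pipeline = pipeline
        self.model_id = model_id

wrapped = FakeHuggingFacePipeline(FakePipeline("microsoft/Phi-3-mini-128k-instruct"))
print(wrapped.model_id)  # "gpt2", even though the wrapped model is Phi-3

# Likely workaround: pass model_id explicitly when wrapping the pipeline.
fixed = FakeHuggingFacePipeline(
    FakePipeline("microsoft/Phi-3-mini-128k-instruct"),
    model_id="microsoft/Phi-3-mini-128k-instruct",
)
print(fixed.model_id)  # "microsoft/Phi-3-mini-128k-instruct"
```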
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Sun Apr 28 14:29:16 UTC 2024
> Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.2.5
> langchain: 0.2.3
> langchain_community: 0.2.4
> langsmith: 0.1.76
> langchain_huggingface: 0.0.3
> langchain_openai: 0.1.8
> langchain_text_splitters: 0.2.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Passing transformer's pipeline to HuggingFacePipeline does not initialize the HuggingFacePipeline correctly. | https://api.github.com/repos/langchain-ai/langchain/issues/22776/comments | 0 | 2024-06-11T14:08:11Z | 2024-06-22T23:31:54Z | https://github.com/langchain-ai/langchain/issues/22776 | 2,346,538,588 | 22,776 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I'm using the code from the LangChain docs verbatim
```python
from langchain_community.document_loaders import AzureAIDocumentIntelligenceLoader

file_path = "<filepath>"
endpoint = "<endpoint>"
key = "<key>"

loader = AzureAIDocumentIntelligenceLoader(
    api_endpoint=endpoint,
    api_key=key,
    file_path=file_path,
    api_model="prebuilt-layout",
    mode="page",
)

documents = loader.load()
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
* I'm trying to use the Azure Document Intelligence loader to read my pdf files.
* Using the `markdown` mode I only get the first page of the pdf loaded.
* If I use any other mode (page, single) I will get at most pages 1 and 2.
* I expect every page of the PDF to be returned as a Document object.
### System Info
langchain==0.2.3
langchain-community==0.2.4
langchain-core==0.2.5
langchain-text-splitters==0.2.1
platform: mac
python version: 3.12.3 | AzureAIDocumentIntelligenceLoader does not load all PDF pages | https://api.github.com/repos/langchain-ai/langchain/issues/22775/comments | 2 | 2024-06-11T12:04:38Z | 2024-06-23T13:36:25Z | https://github.com/langchain-ai/langchain/issues/22775 | 2,346,246,655 | 22,775 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_core.prompts import PipelinePromptTemplate, PromptTemplate
from langchain.agents import create_react_agent
from langchain.tools import Tool
from langchain_openai import ChatOpenAI

llm = ChatOpenAI()

full_template = """{agent-introduction}
{agent-instructions}
"""
full_prompt = PromptTemplate.from_template(full_template)

introduction_template = """You are impersonating {person}."""
introduction_prompt = PromptTemplate.from_template(introduction_template)

instructions_template = """Answer the following questions as best you can. You have access to the following tools:
{tools}
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Begin!
Question: {input}
Thought:{agent_scratchpad}"""
instructions_prompt = PromptTemplate.from_template(instructions_template)

input_prompts = [
    ("agent-introduction", introduction_prompt),
    ("agent-instructions", instructions_prompt),
]
pipeline_prompt = PipelinePromptTemplate(
    final_prompt=full_prompt, pipeline_prompts=input_prompts
)

tools = [
    Tool.from_function(
        name="General Chat",
        description="For general chat not covered by other tools",
        func=llm.invoke,
        return_direct=True,
    )
]

agent = create_react_agent(llm, tools, pipeline_prompt)
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "C:\Users\martin.ohanlon.neo4j\Documents\neo4j-graphacademy\llm-cb-env\llm-chatbot-python\test_prompt.py", line 57, in <module>
agent = create_react_agent(llm, tools, pipeline_prompt)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\martin.ohanlon.neo4j\Documents\neo4j-graphacademy\llm-cb-env\Lib\site-packages\langchain\agents\react\agent.py", line 116, in create_react_agent
prompt = prompt.partial(
^^^^^^^^^^^^^^^
File "C:\Users\martin.ohanlon.neo4j\Documents\neo4j-graphacademy\llm-cb-env\Lib\site-packages\langchain_core\prompts\base.py", line 188, in partial
return type(self)(**prompt_dict)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\martin.ohanlon.neo4j\Documents\neo4j-graphacademy\llm-cb-env\Lib\site-packages\pydantic\v1\main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for PipelinePromptTemplate
__root__
Found overlapping input and partial variables: {'tools', 'tool_names'} (type=value_error)
```
### Description
`create_react_agent` raises a `Found overlapping input and partial variables: {'tools', 'tool_names'} (type=value_error)` error when passed a `PipelinePromptTemplate`.
I am composing an agent prompt using `PipelinePromptTemplate`; when I pass the composed prompt to `create_react_agent`, I am presented with an error.
The above example replicates the error.
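The validation that fails can be illustrated in isolation. Below is a hedged, simplified sketch (the function name is illustrative, not the actual LangChain internals) of the overlap check that prompt templates run: `create_react_agent` calls `prompt.partial(tools=..., tool_names=...)`, but the pipeline prompt still reports those names as input variables, so the two sets overlap.

```python
def check_no_overlap(input_variables: set, partial_variables: dict) -> None:
    # Simplified stand-in for the pydantic root validator that raises
    # "Found overlapping input and partial variables"
    overlapping = input_variables & set(partial_variables)
    if overlapping:
        raise ValueError(
            f"Found overlapping input and partial variables: {overlapping}"
        )

# After .partial(tools=..., tool_names=...), the pipeline still lists
# "tools" and "tool_names" as inputs, so validation fails:
try:
    check_no_overlap(
        {"input", "agent_scratchpad", "tools", "tool_names"},
        {"tools": "...", "tool_names": "..."},
    )
except ValueError as exc:
    print(exc)
```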
### System Info
langchain==0.2.3
langchain-community==0.2.4
langchain-core==0.2.5
langchain-openai==0.1.8
langchain-text-splitters==0.2.1
langchainhub==0.1.18
Windows
Python 3.12.0 | create_react_agent validation error when using PipelinePromptTemplate | https://api.github.com/repos/langchain-ai/langchain/issues/22774/comments | 0 | 2024-06-11T11:34:12Z | 2024-06-11T11:36:59Z | https://github.com/langchain-ai/langchain/issues/22774 | 2,346,186,615 | 22,774 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
not applicable
### Error Message and Stack Trace (if applicable)
not applicable
### Description
The DocumentDBVectorSearch docs mention it supports metadata filtering:
https://api.python.langchain.com/en/latest/vectorstores/langchain_community.vectorstores.documentdb.DocumentDBVectorSearch.html#langchain_community.vectorstores.documentdb.DocumentDBVectorSearch.as_retriever
However, unless I misunderstand the code, I really don't think it does.
I see that VectorStoreRetriever._get_relevant_documents passes search_kwargs to the similarity search of the underlying vector store.
And nothing in the code of DocumentDBVectorSearch is using search_kwargs at all.
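The forwarding path described above can be sketched with stand-in classes (hypothetical names, not the real LangChain code): the retriever forwards `search_kwargs` verbatim, and a store whose `similarity_search` accepts but ignores extra kwargs silently drops the metadata filter.

```python
class FakeRetriever:
    """Stand-in for VectorStoreRetriever (illustrative only)."""
    def __init__(self, vectorstore, search_kwargs):
        self.vectorstore = vectorstore
        self.search_kwargs = search_kwargs

    def get_relevant_documents(self, query):
        # VectorStoreRetriever forwards search_kwargs to the vector store
        return self.vectorstore.similarity_search(query, **self.search_kwargs)


class FakeDocumentDBStore:
    """Stand-in for DocumentDBVectorSearch: extra kwargs accepted but unused."""
    def similarity_search(self, query, k=4, **kwargs):
        # `kwargs` (e.g. a metadata filter) never influences the results here
        return [f"doc-{i}" for i in range(k)]


retriever = FakeRetriever(FakeDocumentDBStore(), {"k": 2, "filter": {"genre": "sci-fi"}})
print(retriever.get_relevant_documents("space"))  # ['doc-0', 'doc-1'] -- filter dropped
```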
In my project we need to review relevant parts of open-source software to make sure it really meets the requirements. So if this is not a bug, and the feature is indeed implemented somewhere else, could anybody please clarify how metadata filtering in DocumentDBVectorSearch is implemented?
### System Info
not applicable | DocumentDBVectorSearch and metadata filtering | https://api.github.com/repos/langchain-ai/langchain/issues/22770/comments | 9 | 2024-06-11T09:09:54Z | 2024-06-17T07:51:53Z | https://github.com/langchain-ai/langchain/issues/22770 | 2,345,840,847 | 22,770 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_community.document_loaders import pdf
### Error Message and Stack Trace (if applicable)
ModuleNotFoundError: No module named 'langchain_community.document_loaders'; 'langchain_community' is not a package
### Description
My Python environment definitely has the langchain_community package installed; all the other packages work fine, only this one has a problem.
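The `'langchain_community' is not a package` wording usually means a plain module (for example a local file named `langchain_community.py` next to the script) is shadowing the installed package. A small stdlib check can distinguish the two cases (the helper name here is illustrative):

```python
import importlib.util

def module_kind(name: str) -> str:
    # A package has submodule_search_locations; a plain module does not.
    # If an installed package resolves as a plain module, something
    # (often a same-named .py file next to your script) is shadowing it.
    spec = importlib.util.find_spec(name)
    if spec is None:
        return "not installed"
    if spec.submodule_search_locations is None:
        return f"plain module at {spec.origin}"
    return "package"

print(module_kind("json"))  # package
```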
### System Info
windows ,python 3.9.8 | No module named 'langchain_community.document_loaders'; 'langchain_community' is not a package | https://api.github.com/repos/langchain-ai/langchain/issues/22763/comments | 2 | 2024-06-11T03:22:44Z | 2024-06-13T16:01:58Z | https://github.com/langchain-ai/langchain/issues/22763 | 2,345,292,354 | 22,763 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
result=multimodal_search(query)
### Error Message and Stack Trace (if applicable)
/usr/local/lib/python3.10/dist-packages/grpc/_channel.py in _end_unary_response_blocking(state, call, with_call, deadline)
1004 return state.response
1005 else:
-> 1006 raise _InactiveRpcError(state) # pytype: disable=not-instantiable
1007
1008
_InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "DNS resolution failed for :10000: unparseable host:port"
debug_error_string = "UNKNOWN:Error received from peer {created_time:"2024-06-10T19:02:57.93240468+00:00", grpc_status:14, grpc_message:"DNS resolution failed for :10000: unparseable host:port"}"
### Description
I'm trying to call a vector_search function that I wrote to retrieve embeddings and answer the query, but I'm getting this error message.
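For what it's worth, the `DNS resolution failed for :10000: unparseable host:port` detail says the gRPC target string has an empty host, i.e. something like `":10000"` was passed, which often happens when the host comes from an unset variable. A quick sanity check before building the channel (hypothetical helper, just illustrating the failure mode):

```python
def check_grpc_target(target: str) -> str:
    # gRPC expects "host:port"; an empty host yields the
    # "unparseable host:port" DNS error seen above
    host, sep, port = target.rpartition(":")
    if not sep or not host or not port.isdigit():
        raise ValueError(f"unparseable host:port: {target!r}")
    return target

print(check_grpc_target("localhost:10000"))
try:
    check_grpc_target(":10000")  # empty host reproduces the failure mode
except ValueError as exc:
    print(exc)
```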
### System Info
python | _InactiveRpcError of RPC | https://api.github.com/repos/langchain-ai/langchain/issues/22762/comments | 1 | 2024-06-11T02:56:46Z | 2024-06-11T02:59:59Z | https://github.com/langchain-ai/langchain/issues/22762 | 2,345,269,783 | 22,762 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
neo4j_uri = "bolt://localhost:7687"
neo4j_user = "neo4j"
neo4j_password = "....."
graph = Neo4jGraph(
url=neo4j_uri,
username=neo4j_user,
password=neo4j_password,
database="....",
enhanced_schema=True,
)
cypher_chain = GraphCypherQAChain.from_llm(
cypher_llm=AzureChatOpenAI(
deployment_name="<.......>",
azure_endpoint="https://.........openai.azure.com/",
openai_api_key=".....",
api_version=".....",
temperature=0
),
qa_llm=AzureChatOpenAI(
deployment_name="......",
azure_endpoint="......",
openai_api_key="....",
api_version=".....",
temperature=0
),
graph=graph,
verbose=True,
)
response = cypher_chain.invoke(
{"query": "How many tasks do i have"}
)
```
### Error Message and Stack Trace (if applicable)
```bash
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 32768 tokens. However, your messages resulted in 38782 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
```
### Description
When employing the GraphCypherQAChain.from_llm function, it generates a Cypher query that outputs all properties, including embeddings. Currently, there is no functionality to selectively include or exclude specific properties from the documents, which results in utilizing the entire context window.
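Until selective property exclusion is supported, one hedged workaround is to post-process the Cypher results and drop large vector properties before they reach the QA LLM (the helper name and blocked-key list below are illustrative, not a LangChain API):

```python
def strip_embeddings(records, blocked=("embedding",)):
    # Post-process Neo4j result rows: drop large vector properties before
    # handing context to the QA LLM (hypothetical workaround)
    cleaned = []
    for rec in records:
        cleaned.append({k: v for k, v in rec.items() if k not in blocked})
    return cleaned

rows = [{"name": "task-1", "embedding": [0.1] * 1536}]
print(strip_embeddings(rows))  # [{'name': 'task-1'}]
```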
### System Info
# Packages
langchain-community==0.2.2
neo4j==5.18.0/5.19.0/5.20.0
langchain==0.2.2
langchain-core==0.2.4
langchain-openai==0.1. | When using GraphCypherQAChain to fetch documents from Neo4j, the embeddings field is also returned, which consumes all context window tokens | https://api.github.com/repos/langchain-ai/langchain/issues/22755/comments | 2 | 2024-06-10T19:18:10Z | 2024-06-13T08:26:20Z | https://github.com/langchain-ai/langchain/issues/22755 | 2,344,660,169 | 22,755 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Here's a Pydantic model with a date in a union-typed attribute.
```python
from datetime import date
from pydantic import BaseModel
class Example(BaseModel):
attribute: date | str
```
Given a JSON string that contains a date, Pydantic discriminates the type and returns a `datetime.date` object.
```python
json_string = '{"attribute": "2024-01-01"}'
Example.model_validate_json(json_string)
# returns Example(attribute=datetime.date(2024, 1, 1))
```
However, PydanticOutputParser unexpectedly returns a string on the same JSON.
```python
from langchain.output_parsers import PydanticOutputParser
parser = PydanticOutputParser(pydantic_object=Example)
parser.parse(json_string)
# returns Example(attribute="2024-01-01")
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
`PydanticOutputParser` isn't converting dates in union types (e.g. `date | str`) to `datetime.date` objects. The parser should be able to discriminate these types by working left-to-right. See Pydantic's approach in https://docs.pydantic.dev/latest/concepts/unions/.
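Left-to-right union discrimination for `date | str` can be expressed with the stdlib alone: try the leftmost member first and fall back. This is a sketch of the behavior the parser should have, not LangChain code:

```python
from datetime import date

def parse_date_or_str(value: str):
    # Left-to-right union: try `date` first, fall back to `str`
    try:
        return date.fromisoformat(value)
    except ValueError:
        return value

print(type(parse_date_or_str("2024-01-01")).__name__)  # date
print(type(parse_date_or_str("not a date")).__name__)  # str
```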
### System Info
I'm on macOS with Python 3.10. I can reproduce this issue with both LangChain `0.1` and `0.2`. | PydanticOutputParser Doesn't Parse Dates in Unions | https://api.github.com/repos/langchain-ai/langchain/issues/22740/comments | 4 | 2024-06-10T15:19:11Z | 2024-06-10T21:38:37Z | https://github.com/langchain-ai/langchain/issues/22740 | 2,344,188,762 | 22,740 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain_text_splitters import MarkdownHeaderTextSplitter, RecursiveCharacterTextSplitter
# Updated markdown_document with a new header 5 using **
markdown_document = """
# Intro
## History
Markdown[9] is a lightweight markup language for creating formatted text using a plain-text editor. John Gruber created Markdown in 2004 as a markup language that is appealing to human readers in its source code form.[9]
Markdown is widely used in blogging, instant messaging, online forums, collaborative software, documentation pages, and readme files.
## Rise and divergence
As Markdown popularity grew rapidly, many Markdown implementations appeared, driven mostly by the need for additional features such as tables, footnotes, definition lists,[note 1] and Markdown inside HTML blocks.
#### Standardization
From 2012, a group of people, including Jeff Atwood and John MacFarlane, launched what Atwood characterized as a standardisation effort.
## Implementations
Implementations of Markdown are available for over a dozen programming languages.
**New Header 5**
This is the content for the new header 5.
"""
# Headers to split on, including custom header 5 with **
headers_to_split_on = [
('\*\*.*?\*\*', "Header 5")
]
# Create the MarkdownHeaderTextSplitter
markdown_splitter = MarkdownHeaderTextSplitter(
headers_to_split_on=headers_to_split_on, strip_headers=False
)
# Split text based on headers
md_header_splits = markdown_splitter.split_text(markdown_document)
# Create the RecursiveCharacterTextSplitter
chunk_size = 250
chunk_overlap = 30
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=chunk_size, chunk_overlap=chunk_overlap
)
# Split documents
splits = text_splitter.split_documents(md_header_splits)
print(splits)
```
### Error Message and Stack Trace (if applicable)
<img width="1157" alt="image" src="https://github.com/langchain-ai/langchain/assets/54015474/70e95ca0-d9f8-41ef-b35a-0f851f9edbcb">
### Description
1. I tried to use MarkdownHeaderTextSplitter to split the text on "**New Header 5**"
2. I was able to do this with r'\*\*.*?\*\*' using the `re` package
3. but it failed with langchain, and I wasn't able to find any example of a similar header in langchain's documentation
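As a point of comparison, the plain `re` approach does work. `MarkdownHeaderTextSplitter` appears to match its `headers_to_split_on` entries as literal line prefixes (like `#`), not as regular expressions, which would explain why the pattern above never matches:

```python
import re

markdown_document = "## Implementations\nSome text.\n**New Header 5**\nContent for header 5.\n"
# headers_to_split_on entries are compared as literal prefixes, so a regex
# like r"\*\*.*?\*\*" is treated as a literal string. Plain `re` finds the
# bold pseudo-header directly:
bold_headers = re.findall(r"^\*\*(.*?)\*\*$", markdown_document, flags=re.MULTILINE)
print(bold_headers)  # ['New Header 5']
```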
### System Info
langchain-core==0.2.3
langchain-text-splitters==0.2.0 | MarkdownHeaderTextSplitter for header such like "**New Header 5**" | https://api.github.com/repos/langchain-ai/langchain/issues/22738/comments | 2 | 2024-06-10T14:19:28Z | 2024-06-11T06:21:52Z | https://github.com/langchain-ai/langchain/issues/22738 | 2,344,052,386 | 22,738 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/tools/eleven_labs_tts/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
"Elevenlabs has no attribute "generate""
This doesn't seem to work with the latest elevenlabs package.
### Idea or request for content:
_No response_ | "Elevenlabs has no attribute "generate (only older versions of elevenlabs work with this wrapper) | https://api.github.com/repos/langchain-ai/langchain/issues/22736/comments | 1 | 2024-06-10T13:44:40Z | 2024-06-10T22:31:46Z | https://github.com/langchain-ai/langchain/issues/22736 | 2,343,969,449 | 22,736 |
[
"langchain-ai",
"langchain"
] | ### URL
https://api.python.langchain.com/en/latest/vectorstores/langchain_chroma.vectorstores.Chroma.html#langchain_chroma.vectorstores.Chroma.similarity_search_with_score
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The documentation for langchain_core.vectorstore._similarity_search_with_relevance_scores() states:
> 0 is dissimilar, 1 is most similar.
The documentation for chroma.similarity_search_with_score() states:
> Lower score represents more similarity.
What is the correct interpretation?
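For what it's worth, the two statements are consistent if one is a distance and the other a relevance score: `similarity_search_with_score` returns a distance (lower = closer), while relevance scores are normalized so that higher = more similar. For cosine distance the usual conversion is `relevance = 1 - distance` (a hedged sketch of that convention, not necessarily what every distance metric uses):

```python
def cosine_relevance_score(distance: float) -> float:
    # Cosine distance 0.0 (identical vectors) -> relevance 1.0;
    # larger distances -> lower relevance
    return 1.0 - distance

print(cosine_relevance_score(0.0))   # 1.0
print(cosine_relevance_score(0.25))  # 0.75
```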
### Idea or request for content:
_No response_ | DOC: inconsistency with similarity_search_with_score() | https://api.github.com/repos/langchain-ai/langchain/issues/22732/comments | 2 | 2024-06-10T09:09:28Z | 2024-06-12T16:41:29Z | https://github.com/langchain-ai/langchain/issues/22732 | 2,343,339,255 | 22,732 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.document_loaders import OutlookMessageLoader
import os
file_path = "example.msg"
loader = OutlookMessageLoader(file_path)
documents = loader.load()
print(documents)
try:
os.remove(file_path)
except Exception as e:
print(e)
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "test.py", line 16, in <module>
os.remove(file_path)
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'example.msg'
### Description
**Describe the bug**
It seems that the `OutlookMessageLoader` does not close the file after extracting the text from the `.msg` file.
**To Reproduce**
Steps to reproduce the behavior:
1. Use the following example code:
```python
from langchain_community.document_loaders import OutlookMessageLoader
import os
file_path = "example.msg"
loader = OutlookMessageLoader(file_path)
documents = loader.load()
print(documents)
try:
os.remove(file_path)
except Exception as e:
print(e)
```
2. Run the code and observe the error.
**Expected behavior**
The file should be closed after processing, allowing it to be deleted without errors.
**Error**
```
[WinError 32] The process cannot access the file because it is being used by another process: 'example.msg'
```
**Additional context**
I looked into the `email.py` file of `langchain_community.document_loaders` and found the following code in the `lazy_load` function:
```python
import extract_msg
msg = extract_msg.Message(self.file_path)
yield Document(
page_content=msg.body,
metadata={
"source": self.file_path,
"subject": msg.subject,
"sender": msg.sender,
"date": msg.date,
},
)
```
It seems like the file is not being closed properly. Adding `msg.close()` should resolve the issue.
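The fix can also be made exception-safe with `contextlib.closing`, so the handle is released even if document construction fails partway. Sketch with a stand-in object (the real fix would wrap `extract_msg.Message`):

```python
from contextlib import closing

class FakeMsg:
    """Stand-in for extract_msg.Message (illustrative only)."""
    def __init__(self):
        self.closed = False
        self.body = "hello"

    def close(self):
        self.closed = True

msg = FakeMsg()
with closing(msg) as m:
    content = m.body  # build the Document here
print(msg.closed)  # True -- the handle is released on exit
```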
### System Info
**Langchain libraries**:
langchain==0.2.2
langchain-community==0.2.3
langchain-core==0.2.4
langchain-openai==0.1.8
langchain-text-splitters==0.2.1
**Platform**: windows
**Python**: 3.12.3 | File Not Closed in OutlookMessageLoader of langchain_community Library | https://api.github.com/repos/langchain-ai/langchain/issues/22729/comments | 1 | 2024-06-10T07:35:42Z | 2024-06-10T22:33:15Z | https://github.com/langchain-ai/langchain/issues/22729 | 2,343,107,278 | 22,729 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
def _generate(
self,
prompts: List[str],
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> LLMResult:
"""Run the LLM on the given prompt and input."""
from vllm import SamplingParams
# build sampling parameters
params = {**self._default_params, **kwargs, "stop": stop}
sampling_params = SamplingParams(**params)
# call the model
outputs = self.client.generate(prompts, sampling_params)
generations = []
for output in outputs:
text = output.outputs[0].text
generations.append([Generation(text=text)])
return LLMResult(generations=generations)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Issue #15921 is still not fixed; please fix it. Perhaps initialize `stop` by default from `SamplingParams`.
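A minimal fix sketch: don't overwrite the defaults with `None` when the caller didn't pass a stop list. The helper below is shown standalone (hypothetical name); in the actual `_generate` it would replace the `params` construction:

```python
def build_params(default_params: dict, kwargs: dict, stop) -> dict:
    # {**defaults, **kwargs, "stop": stop} clobbers a default stop list
    # with None; only override when the caller actually provided one
    params = {**default_params, **kwargs}
    if stop is not None:
        params["stop"] = stop
    return params

print(build_params({"stop": ["###"], "temperature": 0.7}, {}, None))
print(build_params({"stop": ["###"]}, {}, ["END"]))
```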
### System Info
System Information
------------------
> OS: Linux
> OS Version: #172-Ubuntu SMP Fri Jul 7 16:10:02 UTC 2023
> Python Version: 3.10.14 (main, Apr 6 2024, 18:45:05) [GCC 9.4.0]
Package Information
-------------------
> langchain_core: 0.2.3
> langchain: 0.2.1
> langchain_community: 0.2.1
> langsmith: 0.1.69
> langchain_chroma: 0.1.1
> langchain_openai: 0.1.8
> langchain_text_splitters: 0.2.0
> langchainhub: 0.1.17 | Fix stop list of string in VLLM generate | https://api.github.com/repos/langchain-ai/langchain/issues/22717/comments | 6 | 2024-06-09T12:35:30Z | 2024-06-10T17:37:07Z | https://github.com/langchain-ai/langchain/issues/22717 | 2,342,210,409 | 22,717 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/tutorials/graph/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Important: This happens with Python v3.12.4.
The below statement in the documentation (https://python.langchain.com/v0.2/docs/tutorials/graph/) fails
graph.query(movies_query)
with the below error.
[2](file:///C:/Try/Langchain-docs-tutorials/Working%20with%20external%20knowledge/6.%20Q%20and%20A%20over%20Graph%20database/.venv/Lib/site-packages/langchain_community/graphs/neo4j_graph.py:2) from typing import Any, Dict, List, Optional
[4](file:///C:/Try/Langchain-docs-tutorials/Working%20with%20external%20knowledge/6.%20Q%20and%20A%20over%20Graph%20database/.venv/Lib/site-packages/langchain_community/graphs/neo4j_graph.py:4) from langchain_core.utils import get_from_dict_or_env
----> [6](file:///C:/Try/Langchain-docs-tutorials/Working%20with%20external%20knowledge/6.%20Q%20and%20A%20over%20Graph%20database/.venv/Lib/site-packages/langchain_community/graphs/neo4j_graph.py:6) from langchain_community.graphs.graph_document import GraphDocument
...
[64](file:///C:/Try/Langchain-docs-tutorials/Working%20with%20external%20knowledge/6.%20Q%20and%20A%20over%20Graph%20database/.venv/Lib/site-packages/pydantic/v1/typing.py:64) # Even though it is the right signature for python 3.9, mypy complains with
[65](file:///C:/Try/Langchain-docs-tutorials/Working%20with%20external%20knowledge/6.%20Q%20and%20A%20over%20Graph%20database/.venv/Lib/site-packages/pydantic/v1/typing.py:65) # `error: Too many arguments for "_evaluate" of "ForwardRef"` hence the cast...
---> [66](file:///C:/Try/Langchain-docs-tutorials/Working%20with%20external%20knowledge/6.%20Q%20and%20A%20over%20Graph%20database/.venv/Lib/site-packages/pydantic/v1/typing.py:66) return cast(Any, type_)._evaluate(globalns, localns, set())
TypeError: ForwardRef._evaluate() missing 1 required keyword-only argument: 'recursive_guard'
### Idea or request for content:
May be, the code in the documentation needs to be tested against latest python versions | Error while running graph.query(movies_query) with Python v3.12.4 | https://api.github.com/repos/langchain-ai/langchain/issues/22713/comments | 4 | 2024-06-09T07:31:58Z | 2024-06-13T10:26:21Z | https://github.com/langchain-ai/langchain/issues/22713 | 2,342,074,263 | 22,713 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code:
llm = HuggingFacePipeline.from_model_id(
model_id="microsoft/Phi-3-mini-4k-instruct",
task="text-generation",
pipeline_kwargs={
"max_new_tokens": 100,
"top_k": 50,
"temperature": 0.1,
},
)
### Error Message and Stack Trace (if applicable)
Jun 9, 2024, 11:45:20 AM | WARNING | WARNING:root:kernel 2d2b999f-b125-4d33-9c67-f791b5329c26 restarted
Jun 9, 2024, 11:45:20 AM | INFO | KernelRestarter: restarting kernel (1/5), keep random ports
Jun 9, 2024, 11:45:19 AM | WARNING | ERROR: Unknown command line flag 'xla_latency_hiding_scheduler_rerun'
### Description
I am trying the example from the [langchain_huggingface](https://huggingface.co/blog/langchain) library on Colab. The example crashes the Colab runtime.
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Sun Apr 28 14:29:16 UTC 2024
> Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.2.5
> langchain: 0.2.3
> langchain_community: 0.2.4
> langsmith: 0.1.75
> langchain_experimental: 0.0.60
> langchain_huggingface: 0.0.3
> langchain_text_splitters: 0.2.1
python -m langchain_core.sys_info:
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Initializing a LLM using HuggingFacePipeline.from_model_id crashes Google Colab | https://api.github.com/repos/langchain-ai/langchain/issues/22710/comments | 5 | 2024-06-09T06:18:17Z | 2024-06-13T08:38:53Z | https://github.com/langchain-ai/langchain/issues/22710 | 2,342,048,874 | 22,710 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
check-broken-links.yml and scheduled_test.yml
### Error Message and Stack Trace (if applicable)
_No response_
### Description
The scheduled GitHub Actions workflows check-broken-links.yml and scheduled_test.yml are also triggered in the forked repository, which is probably not the expected behavior.
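A common mitigation (this is a sketch, not the exact contents of those workflow files) is to gate scheduled jobs on the upstream repository name so forks skip them; the job name and cron below are illustrative:

```yaml
on:
  schedule:
    - cron: "0 13 * * *"

jobs:
  check-links:
    # Skip scheduled runs on forks
    if: github.repository == 'langchain-ai/langchain'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
```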
### System Info
GitHub actions | Scheduled GitHub Actions Running on Forked Repositories | https://api.github.com/repos/langchain-ai/langchain/issues/22706/comments | 1 | 2024-06-09T04:45:52Z | 2024-06-10T15:07:49Z | https://github.com/langchain-ai/langchain/issues/22706 | 2,342,020,818 | 22,706 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
hGnhn
### Error Message and Stack Trace (if applicable)
Ghgh
### Description
Yhtuyhy
### System Info
Yhguyuujhgghyhu | Rohit | https://api.github.com/repos/langchain-ai/langchain/issues/22704/comments | 14 | 2024-06-09T02:40:50Z | 2024-06-15T21:01:27Z | https://github.com/langchain-ai/langchain/issues/22704 | 2,341,985,867 | 22,704 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.agents import AgentExecutor
from langchain.agents.openai_functions_agent.base import OpenAIFunctionsAgent
from langchain_community.callbacks import OpenAICallbackHandler
from langchain_community.tools import SleepTool
from langchain_core.runnables import RunnableConfig
from langchain_openai import ChatOpenAI
# We should incur some OpenAI costs here from agent planning
cost_callback = OpenAICallbackHandler()
tools = [SleepTool()]
agent_instance = AgentExecutor.from_agent_and_tools(
tools=tools,
agent=OpenAIFunctionsAgent.from_llm_and_tools(
ChatOpenAI(model="gpt-4", request_timeout=15.0), tools # type: ignore[call-arg]
),
return_intermediate_steps=True,
max_execution_time=10,
callbacks=[cost_callback], # "Local" callbacks
)
# NOTE: intentionally, I am not specifying the callback to invoke, as that
# would make the cost_callback be considered "inheritable" (which I don't want)
outputs = agent_instance.invoke(
input={"input": "Sleep a few times for 100-ms."},
# config=RunnableConfig(callbacks=[cost_callback]), # "Inheritable" callbacks
)
assert len(outputs["intermediate_steps"]) > 0, "Agent should have slept a bit"
assert cost_callback.total_cost > 0, "Agent planning should have been accounted for" # Fails
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/Users/user/code/repo/app/agents/a.py", line 28, in <module>
assert cost_callback.total_cost > 0, "Agent planning should have been accounted for"
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: Agent planning should have been accounted for
### Description
LangChain has a useful concept of "inheritable" callbacks vs "local" callbacks, all managed by `CallbackManager` (source reference [1](https://github.com/langchain-ai/langchain/blob/langchain-core%3D%3D0.2.5/libs/core/langchain_core/callbacks/manager.py#L1923-L1930) and [2](https://github.com/langchain-ai/langchain/blob/langchain-core%3D%3D0.2.5/libs/core/langchain_core/callbacks/base.py#L587-L592))
- Inheritable callback: callback is automagically reused by nested `invoke`
- Local callback: no reuse by nested `invoke`
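The distinction can be modeled in a few lines (a simplified stand-in, not the real `CallbackManager`): only inheritable handlers survive `get_child()`, which is why a local callback never sees nested runs:

```python
class MiniCallbackManager:
    """Simplified model of inheritable vs. local callbacks."""
    def __init__(self, inheritable=(), local=()):
        self.inheritable = list(inheritable)
        self.local = list(local)

    @property
    def handlers(self):
        return self.inheritable + self.local

    def get_child(self):
        # Nested invokes only receive the inheritable handlers
        return MiniCallbackManager(inheritable=self.inheritable)

parent = MiniCallbackManager(inheritable=["tracer"], local=["cost_callback"])
child = parent.get_child()
print(parent.handlers)  # ['tracer', 'cost_callback']
print(child.handlers)   # ['tracer'] -- the local cost callback is dropped
```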
Yesterday I discovered `AgentExecutor` does not use local callbacks for its planning step. I consider this a bug, as planning (e.g [`BaseSingleActionAgent.plan`](https://github.com/langchain-ai/langchain/blob/langchain%3D%3D0.2.3/libs/langchain/langchain/agents/agent.py#L70)) is a core behavior of `AgentExecutor`.
The fix would be to support `AgentExecutor`'s local callbacks during planning.
### System Info
langchain==0.2.3
langchain-community==0.2.4
langchain-core==0.2.5
langchain-openai==0.1.8 | Bug: `AgentExecutor` doesn't use its local callbacks during planning | https://api.github.com/repos/langchain-ai/langchain/issues/22703/comments | 1 | 2024-06-08T23:15:48Z | 2024-06-08T23:58:39Z | https://github.com/langchain-ai/langchain/issues/22703 | 2,341,904,946 | 22,703 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain_mistralai.chat_models import ChatMistralAI
chain = ChatMistralAI(streaming=True)
# Add a callback
chain.ainvoke(..)
# Before
# Observe on_llm_new_token with the callback:
# the tokens arrive in grouped format.
# With my pull request
# Observe on_llm_new_token with the callback:
# now the callback receives streaming tokens; before, they arrived grouped.
```
### Error Message and Stack Trace (if applicable)
No message.
### Description
Hello
* I identified an issue in the Mistral package where callback streaming (see on_llm_new_token) was not functioning correctly when the streaming parameter was set to True and the model was called with `ainvoke`.
* The root cause of the problem was that the streaming flag was not taken into account (I think it's an oversight).
I opened this [Pull Request](https://github.com/langchain-ai/langchain/pull/22000)
* To resolve the issue, I added the `streaming` attribute.
* Now, the callback with streaming works as expected when the streaming parameter is set to True.
I addressed this issue because the pull request I submitted a month ago has not received any attention. Additionally, the problem reappears in each new version.
Could you please review the pull request?
### System Info
All system can reproduce. | Partners: Issues with `Streaming` and MistralAI `ainvoke` and `Callbacks` Not Working | https://api.github.com/repos/langchain-ai/langchain/issues/22702/comments | 2 | 2024-06-08T20:46:07Z | 2024-07-02T20:38:12Z | https://github.com/langchain-ai/langchain/issues/22702 | 2,341,830,363 | 22,702 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
'''python
# rag_agent_creation.py
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_core.messages import BaseMessage, HumanMessage
from langchain_openai import ChatOpenAI
from langchain.tools.retriever import create_retriever_tool
from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate, MessagesPlaceholder
from .rag_prompts import RAG_AGENT_PROMPT
import chromadb
def create_retriver_agent(llm: ChatOpenAI, vectordb: chromadb):
    retriever = vectordb.as_retriever(search_type="mmr", search_kwargs={"k": 4})
    retriever_tool = create_retriever_tool(
        retriever,
        name="doc_retriever_tool",
        description="Search and return information from documents",
    )
    tools = [retriever_tool]
    system_prompt = RAG_AGENT_PROMPT
    prompt = ChatPromptTemplate.from_messages(
        [
            (
                "system",
                system_prompt,
            ),
            MessagesPlaceholder(variable_name="messages", optional=True),
            HumanMessagePromptTemplate.from_template("{input}"),
            MessagesPlaceholder(variable_name="agent_scratchpad"),
        ]
    )
    agent = create_openai_tools_agent(llm, tools, prompt)
    executor = AgentExecutor(agent=agent, tools=tools)
    return executor
```
### Error Message and Stack Trace (if applicable)
Error in LangChainTracer.on_tool_end callback: TracerException("Found chain run at ID 879e1607-f32b-4984-af76-d258c646e7ad, but expected {'tool'} run.")
### Description
I am using a retriever tool in a LangGraph graph deployed on LangServe. Whenever the graph calls the tool, I am getting the error: Error in LangChainTracer.on_tool_end callback: TracerException("Found chain run at ID 879e1607-f32b-4984-af76-d258c646e7ad, but expected {'tool'} run.")
This is new; my tool was working correctly before. I have updated the dependencies as well.
### System Info
[tool.poetry]
name = "Reporting Tool API"
version = "0.1.0"
description = ""
authors = ["Your Name <you@example.com>"]
readme = "README.md"
packages = [{ include = "app" }]
[tool.poetry.dependencies]
python = "^3.11"
uvicorn = "0.23.2"
langserve = { extras = ["server"], version = "0.2.1" }
pydantic = "<2"
chromadb = "0.5.0"
fastapi = "0.110.3"
langchain = "0.2.3"
langchain-cli = "0.0.24"
langchain-community = "0.2.4"
langchain-core = "0.2.5"
langchain-experimental = "0.0.60"
langchain-openai = "0.1.8"
langchain-text-splitters = "0.2.1"
langgraph = "0.0.65"
openai = "1.33.0"
opentelemetry-instrumentation-fastapi = "0.46b0"
pypdf = "4.2.0"
python-dotenv = "1.0.1"
python-multipart = "0.0.9"
pandas = "^2.0.1"
tabulate = "^0.9.0"
langchain-anthropic = "0.1.15"
langchain-mistralai = "0.1.8"
langchain-google-genai = "1.0.6"
api-analytics = { extras = ["fastapi"], version = "*" }
langchainhub = "0.1.18"
[tool.poetry.group.dev.dependencies]
langchain-cli = ">=0.0.15"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api" | Error in LangChainTracer.on_tool_end callback | https://api.github.com/repos/langchain-ai/langchain/issues/22696/comments | 5 | 2024-06-08T08:41:41Z | 2024-07-17T12:31:28Z | https://github.com/langchain-ai/langchain/issues/22696 | 2,341,553,381 | 22,696 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.chains.summarize import load_summarize_chain
client = AzureOpenAI(
api_version=api_version,
api_key=api_key,
azure_endpoint=azure_endpoint,
)
chain = load_summarize_chain(client, chain_type="stuff")
```
```
--------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
Cell In[13], line 1
----> 1 chain = load_summarize_chain(client, chain_type="stuff")
File /opt/conda/envs/pytorch/lib/python3.10/site-packages/langchain/chains/summarize/__init__.py:157, in load_summarize_chain(llm, chain_type, verbose, **kwargs)
152 if chain_type not in loader_mapping:
153 raise ValueError(
154 f"Got unsupported chain type: {chain_type}. "
155 f"Should be one of {loader_mapping.keys()}"
156 )
--> 157 return loader_mapping[chain_type](llm, verbose=verbose, **kwargs)
File /opt/conda/envs/pytorch/lib/python3.10/site-packages/langchain/chains/summarize/__init__.py:33, in _load_stuff_chain(llm, prompt, document_variable_name, verbose, **kwargs)
26 def _load_stuff_chain(
27 llm: BaseLanguageModel,
28 prompt: BasePromptTemplate = stuff_prompt.PROMPT,
(...)
31 **kwargs: Any,
32 ) -> StuffDocumentsChain:
---> 33 llm_chain = LLMChain(llm=llm, prompt=prompt, verbose=verbose)
34 # TODO: document prompt
35 return StuffDocumentsChain(
36 llm_chain=llm_chain,
37 document_variable_name=document_variable_name,
38 verbose=verbose,
39 **kwargs,
40 )
File /opt/conda/envs/pytorch/lib/python3.10/site-packages/langchain_core/load/serializable.py:120, in Serializable.__init__(self, **kwargs)
119 def __init__(self, **kwargs: Any) -> None:
--> 120 super().__init__(**kwargs)
121 self._lc_kwargs = kwargs
File /opt/conda/envs/pytorch/lib/python3.10/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__()
ValidationError: 2 validation errors for LLMChain
llm
instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)
llm
instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I tried to pass the Azure OpenAI client into the summarization pipeline, but it gives the error above.
### System Info
latest langchain. | ValidationError: 2 validation errors for LLMChain | https://api.github.com/repos/langchain-ai/langchain/issues/22695/comments | 1 | 2024-06-08T07:55:15Z | 2024-06-15T02:16:53Z | https://github.com/langchain-ai/langchain/issues/22695 | 2,341,538,090 | 22,695 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain.schema import AIMessage
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/code/test.py", line 1, in <module>
from langchain.schema import AIMessage
File "/usr/local/lib/python3.12/site-packages/langchain/schema/__init__.py", line 5, in <module>
from langchain_core.documents import BaseDocumentTransformer, Document
File "/usr/local/lib/python3.12/site-packages/langchain_core/documents/__init__.py", line 6, in <module>
from langchain_core.documents.compressor import BaseDocumentCompressor
File "/usr/local/lib/python3.12/site-packages/langchain_core/documents/compressor.py", line 6, in <module>
from langchain_core.callbacks import Callbacks
File "/usr/local/lib/python3.12/site-packages/langchain_core/callbacks/__init__.py", line 22, in <module>
from langchain_core.callbacks.manager import (
File "/usr/local/lib/python3.12/site-packages/langchain_core/callbacks/manager.py", line 29, in <module>
from langsmith.run_helpers import get_run_tree_context
File "/usr/local/lib/python3.12/site-packages/langsmith/run_helpers.py", line 40, in <module>
from langsmith import client as ls_client
File "/usr/local/lib/python3.12/site-packages/langsmith/client.py", line 52, in <module>
from langsmith import env as ls_env
File "/usr/local/lib/python3.12/site-packages/langsmith/env/__init__.py", line 3, in <module>
from langsmith.env._runtime_env import (
File "/usr/local/lib/python3.12/site-packages/langsmith/env/_runtime_env.py", line 10, in <module>
from langsmith.utils import get_docker_compose_command
File "/usr/local/lib/python3.12/site-packages/langsmith/utils.py", line 31, in <module>
from langsmith import schemas as ls_schemas
File "/usr/local/lib/python3.12/site-packages/langsmith/schemas.py", line 69, in <module>
class Example(ExampleBase):
File "/usr/local/lib/python3.12/site-packages/pydantic/v1/main.py", line 286, in __new__
cls.__try_update_forward_refs__()
File "/usr/local/lib/python3.12/site-packages/pydantic/v1/main.py", line 807, in __try_update_forward_refs__
update_model_forward_refs(cls, cls.__fields__.values(), cls.__config__.json_encoders, localns, (NameError,))
File "/usr/local/lib/python3.12/site-packages/pydantic/v1/typing.py", line 554, in update_model_forward_refs
update_field_forward_refs(f, globalns=globalns, localns=localns)
File "/usr/local/lib/python3.12/site-packages/pydantic/v1/typing.py", line 520, in update_field_forward_refs
field.type_ = evaluate_forwardref(field.type_, globalns, localns or None)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/pydantic/v1/typing.py", line 66, in evaluate_forwardref
return cast(Any, type_)._evaluate(globalns, localns, set())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: ForwardRef._evaluate() missing 1 required keyword-only argument: 'recursive_guard'
```
### Description
Langchain fails on import with Python 3.12.4 due to pydantic v1 dependency. Python 3.12.3 is fine.
See https://github.com/pydantic/pydantic/issues/9607 for more info.
### System Info
```
langchain 0.2.3
langchain-community 0.2.3
langchain-core 0.2.5
langchain-openai 0.1.8
pydantic 2.7.3
pydantic_core 2.18.4
```
Python version is 3.12.4
Linux Arm64/v8 | Python 3.12.4 is incompatible with pydantic.v1 as of pydantic==2.7.3 | https://api.github.com/repos/langchain-ai/langchain/issues/22692/comments | 9 | 2024-06-08T01:41:20Z | 2024-06-13T04:35:49Z | https://github.com/langchain-ai/langchain/issues/22692 | 2,341,357,041 | 22,692 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/tutorials/pdf_qa/#question-answering-with-rag
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
**URL**: [PDF QA Tutorial](https://python.langchain.com/v0.2/docs/tutorials/pdf_qa/#question-answering-with-rag)
**Checklist**:
- [x] I added a very descriptive title to this issue.
- [x] I included a link to the documentation page I am referring to.
**Issue with current documentation**:
There is a variable name error in the PDF QA Tutorial on the LangChain documentation. The code snippet incorrectly uses `llm` instead of `model`, which causes a `NameError`.
**Error Message**:
```plaintext
NameError: name 'llm' is not defined
```
**Correction**:
The variable `llm` should be replaced with `model` in the code snippet for it to work correctly. Here is the corrected portion of the code:
```python
from langchain.chains import create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.prompts import ChatPromptTemplate
system_prompt = (
"You are an assistant for question-answering tasks. "
"Use the following pieces of retrieved context to answer "
"the question. If you don't know the answer, say that you "
"don't know. Use three sentences maximum and keep the "
"answer concise."
"\n\n"
"{context}"
)
prompt = ChatPromptTemplate.from_messages(
[
("system", system_prompt),
("human", "{input}"),
]
)
question_answer_chain = create_stuff_documents_chain(model, prompt)
rag_chain = create_retrieval_chain(retriever, question_answer_chain)
results = rag_chain.invoke({"input": "What was Nike's revenue in 2023?"})
results
```
Please make this update to prevent confusion and errors for users following the tutorial.
### Idea or request for content:
_No response_ | DOC: NameError due to Incorrect Variable Name in PDF QA Tutorial Documentation | https://api.github.com/repos/langchain-ai/langchain/issues/22689/comments | 2 | 2024-06-08T00:22:00Z | 2024-06-24T21:08:04Z | https://github.com/langchain-ai/langchain/issues/22689 | 2,341,309,950 | 22,689 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.document_loaders import AzureAIDocumentIntelligenceLoader
import os
endpoint = "<endpoint>"
key = "<key>"
mode = "markdown"
path = os.path.join('path', 'to', 'pdf')
loader = AzureAIDocumentIntelligenceLoader(
file_path="path_to_local_pdf.pdf", api_endpoint=endpoint, api_key=key, api_model="prebuilt-layout", mode = mode
)
documents = loader.load()
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/Users/Code/tools-chatbot-backend/dataproduct/test_langchain.py", line 13, in <module>
documents = loader.load()
^^^^^^^^^^^^^
File "/Users/Code/tools-chatbot-backend/dataproduct/.venv/lib/python3.11/site-packages/langchain_core/document_loaders/base.py", line 29, in load
return list(self.lazy_load())
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/Code/tools-chatbot-backend/dataproduct/.venv/lib/python3.11/site-packages/langchain_community/document_loaders/doc_intelligence.py", line 96, in lazy_load
yield from self.parser.parse(blob)
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/Code/tools-chatbot-backend/dataproduct/.venv/lib/python3.11/site-packages/langchain_core/document_loaders/base.py", line 125, in parse
return list(self.lazy_parse(blob))
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/Code/tools-chatbot-backend/dataproduct/.venv/lib/python3.11/site-packages/langchain_community/document_loaders/parsers/doc_intelligence.py", line 80, in lazy_parse
poller = self.client.begin_analyze_document(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/Code/tools-chatbot-backend/dataproduct/.venv/lib/python3.11/site-packages/azure/core/tracing/decorator.py", line 94, in wrapper_use_tracer
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/Code/tools-chatbot-backend/dataproduct/.venv/lib/python3.11/site-packages/azure/ai/documentintelligence/_operations/_operations.py", line 3627, in begin_analyze_document
raw_result = self._analyze_document_initial( # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/Code/tools-chatbot-backend/dataproduct/.venv/lib/python3.11/site-packages/azure/ai/documentintelligence/_operations/_operations.py", line 516, in _analyze_document_initial
map_error(status_code=response.status_code, response=response, error_map=error_map)
File "/Users/baniasbaabe/Code/tools-chatbot-backend/dataproduct/.venv/lib/python3.11/site-packages/azure/core/exceptions.py", line 161, in map_error
raise error
azure.core.exceptions.ResourceNotFoundError: (404) Resource not found
Code: 404
Message: Resource not found
```
### Description
I am trying to run the `AzureAIDocumentIntelligenceLoader` but it always throws an error that the resource to the PDF could not be found. When I run the [azure-ai-formrecognizer](https://pypi.org/project/azure-ai-formrecognizer/) manually, it works.
### System Info
```
langchain==0.2.0
Python 3.11
MacOS 14
``` | `AzureAIDocumentIntelligenceLoader` throws 404 Resource not found error | https://api.github.com/repos/langchain-ai/langchain/issues/22679/comments | 1 | 2024-06-07T14:30:27Z | 2024-06-17T10:55:07Z | https://github.com/langchain-ai/langchain/issues/22679 | 2,340,605,742 | 22,679 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Don't think that's necessary.
### Error Message and Stack Trace (if applicable)
ERROR: Could not find a version that satisfies the requirement langchain-google-genai (from versions: none)
ERROR: No matching distribution found for langchain-google-genai
### Description
I am trying to use the Gemini API through the ChatGoogleGenerativeAI class on Python 3.9.0, but I am not able to install langchain-google-genai, which contains the aforementioned class. I looked up the issue on Google, and an older issue's solution was that the module requires a Python version equal to 3.9 or greater. My Python version is currently 3.9.0, so I can't really understand what the issue is.
### System Info
python == 3.9.0
| unable to install langchain-google-genai in python 3.9.0 | https://api.github.com/repos/langchain-ai/langchain/issues/22676/comments | 0 | 2024-06-07T13:18:48Z | 2024-06-07T13:21:17Z | https://github.com/langchain-ai/langchain/issues/22676 | 2,340,438,808 | 22,676 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.chat_models import ChatDatabricks
llm = ChatDatabricks(
endpoint="my-endpoint",
temperature=0.0,
)
for chunk in llm.stream("What is MLflow?"):
print(chunk.content, end="|")
```
### Error Message and Stack Trace (if applicable)
```python
KeyError: 'content'
File <command-18425931933140>, line 8
1 from langchain_community.chat_models import ChatDatabricks
3 llm = ChatDatabricks(
4 endpoint="my-endpoint",
5 temperature=0.0,
6 )
----> 8 for chunk in llm.stream("What is MLflow?"):
9 print(chunk.content, end="|")
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-3b7ab85f-0c62-4fae-a71e-af61c05342b4/lib/python3.10/site-packages/langchain_community/chat_models/mlflow.py:161, in ChatMlflow.stream(self, input, config, stop, **kwargs)
157 yield cast(
158 BaseMessageChunk, self.invoke(input, config=config, stop=stop, **kwargs)
159 )
160 else:
--> 161 yield from super().stream(input, config, stop=stop, **kwargs)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-3b7ab85f-0c62-4fae-a71e-af61c05342b4/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:265, in BaseChatModel.stream(self, input, config, stop, **kwargs)
258 except BaseException as e:
259 run_manager.on_llm_error(
260 e,
261 response=LLMResult(
262 generations=[[generation]] if generation else []
263 ),
264 )
--> 265 raise e
266 else:
267 run_manager.on_llm_end(LLMResult(generations=[[generation]]))
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-3b7ab85f-0c62-4fae-a71e-af61c05342b4/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:245, in BaseChatModel.stream(self, input, config, stop, **kwargs)
243 generation: Optional[ChatGenerationChunk] = None
244 try:
--> 245 for chunk in self._stream(messages, stop=stop, **kwargs):
246 if chunk.message.id is None:
247 chunk.message.id = f"run-{run_manager.run_id}"
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-3b7ab85f-0c62-4fae-a71e-af61c05342b4/lib/python3.10/site-packages/langchain_community/chat_models/mlflow.py:184, in ChatMlflow._stream(self, messages, stop, run_manager, **kwargs)
182 if first_chunk_role is None:
183 first_chunk_role = chunk_delta.get("role")
--> 184 chunk = ChatMlflow._convert_delta_to_message_chunk(
185 chunk_delta, first_chunk_role
186 )
188 generation_info = {}
189 if finish_reason := choice.get("finish_reason"):
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-3b7ab85f-0c62-4fae-a71e-af61c05342b4/lib/python3.10/site-packages/langchain_community/chat_models/mlflow.py:239, in ChatMlflow._convert_delta_to_message_chunk(_dict, default_role)
234 @staticmethod
235 def _convert_delta_to_message_chunk(
236 _dict: Mapping[str, Any], default_role: str
237 ) -> BaseMessageChunk:
238 role = _dict.get("role", default_role)
--> 239 content = _dict["content"]
240 if role == "user":
241 return HumanMessageChunk(content=content)
```
### Description
I am trying to stream the response from the ChatDatabricks but this simply fails because it cannot find the 'content' key in the chunks. Also, the example code in the [documentation](https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.databricks.ChatDatabricks.html) does not work.
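The failure mode can be reproduced in isolation: `_convert_delta_to_message_chunk` indexes `_dict["content"]` directly, but streaming deltas may omit that key (e.g. a first delta carrying only the role). A defensive sketch of the `.get`-with-default approach follows; this is an illustration, not the actual library patch:

```python
def convert_delta(delta: dict, default_role: str = "assistant") -> dict:
    role = delta.get("role", default_role)
    # Using .get with a default tolerates deltas that carry only a role
    # (or other fragments) and no "content" key, instead of raising KeyError.
    content = delta.get("content", "")
    return {"role": role, "content": content}

# A first streaming delta often carries only the role:
print(convert_delta({"role": "assistant"}))         # → {'role': 'assistant', 'content': ''}
print(convert_delta({"content": "MLflow is ..."}))  # → {'role': 'assistant', 'content': 'MLflow is ...'}
```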
### System Info
langchain==0.2.3
langchain-community==0.2.4
langchain-core==0.2.5
langchain-openai==0.1.8
langchain-text-splitters==0.2.1 | ChatDatabricks can't stream response: "KeyError: 'content'" | https://api.github.com/repos/langchain-ai/langchain/issues/22674/comments | 3 | 2024-06-07T12:43:34Z | 2024-07-05T15:31:11Z | https://github.com/langchain-ai/langchain/issues/22674 | 2,340,367,495 | 22,674 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.document_loaders.blob_loaders.youtube_audio import YoutubeAudioLoader
from langchain_community.document_loaders.generic import GenericLoader
from langchain_community.document_loaders.parsers import OpenAIWhisperParser

youtube_url = 'https://youtu.be/RXQ5AtjUMAw'
loader = GenericLoader(
YoutubeAudioLoader(
[youtube_url],
'./videos'
),
OpenAIWhisperParser(
api_key=key,
language='en'
)
)
loader.load()
```
### Error Message and Stack Trace (if applicable)
```bash
Transcribing part 1!
Transcribing part 2!
Transcribing part 1!
Transcribing part 1!
Transcribing part 3!
Transcribing part 3!
```
### Description
* I'm using LangChain to generate transcripts of YouTube videos, but I've noticed that the usage on my API key is high. After closer examination, I discovered that the OpenAIWhisperParser is transcribing the same part multiple times

* Sometimes it goes through parts 1, 2, 3 and then returns to 1 and repeats
* I've noticed that even with a specified language, the first chunk is always in the original language, as if the parameter is not passed to the first request
* I've tried not using the language argument, but the issue was still there
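Until the duplication is fixed upstream, repeated parts can be filtered after loading. Below is a sketch that dedupes by `(source, chunk)`; the `metadata` keys are an assumption about what the Whisper parser emits, and plain dicts stand in for LangChain `Document` objects:

```python
def dedupe_documents(docs):
    """Drop documents whose (source, chunk) metadata was already seen."""
    seen = set()
    unique = []
    for doc in docs:
        key = (doc["metadata"].get("source"), doc["metadata"].get("chunk"))
        if key not in seen:
            seen.add(key)
            unique.append(doc)
    return unique

docs = [  # plain dicts standing in for langchain Documents
    {"page_content": "part one", "metadata": {"source": "a.m4a", "chunk": 0}},
    {"page_content": "part two", "metadata": {"source": "a.m4a", "chunk": 1}},
    {"page_content": "part one", "metadata": {"source": "a.m4a", "chunk": 0}},
]
print(len(dedupe_documents(docs)))  # → 2
```

This only trims the duplicated output; the duplicate transcription requests (and their API cost) would still happen.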
### System Info
System info:
Python 3.11.9 inside PyCharm venv
langchain==0.2.1
langchain-community==0.2.1
langchain-core==0.2.1
langchain-openai==0.1.7
langchain-text-splitters==0.2.0
langgraph==0.0.55
langsmith==0.1.63 | Langchain YouTube audio loader duplicating transcripts | https://api.github.com/repos/langchain-ai/langchain/issues/22671/comments | 2 | 2024-06-07T12:28:45Z | 2024-06-14T19:25:13Z | https://github.com/langchain-ai/langchain/issues/22671 | 2,340,338,451 | 22,671 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_openai import AzureOpenAI
.....
model = AzureOpenAI(deployment_name=os.getenv("OPENAI_DEPLOYMENT_ENDPOINT"), temperature=0.3, openai_api_key=os.getenv("OPENAI_API_KEY"))
model.bind_tools([tool])
### Error Message and Stack Trace (if applicable)
AttributeError: 'AzureOpenAI' object has no attribute 'bind_tools'
### Description
I create an AzureOpenAI instance in LangChain, and when trying to bind tools I get the error above.
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:12:58 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6000
> Python Version: 3.12.3 (main, Apr 9 2024, 08:09:14) [Clang 15.0.0 (clang-1500.3.9.4)]
Package Information
-------------------
> langchain_core: 0.2.5
> langchain: 0.2.3
> langchain_community: 0.2.4
> langsmith: 0.1.75
> langchain_openai: 0.1.8
> langchain_text_splitters: 0.2.1
> langgraph: 0.0.64
> langserve: 0.2.1 | AttributeError: 'AzureOpenAI' object has no attribute 'bind_tools' | https://api.github.com/repos/langchain-ai/langchain/issues/22670/comments | 1 | 2024-06-07T12:06:12Z | 2024-06-12T06:33:09Z | https://github.com/langchain-ai/langchain/issues/22670 | 2,340,298,150 | 22,670 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from typing import Any

from langchain_core.prompts import PromptTemplate

document_prompt = PromptTemplate(
input_variables=["page_content", "metadata"],
input_types={
"page_content": str,
"metadata": dict[str, Any],
},
output_parser=None,
partial_variables={},
template="{metadata['source']}: {page_content}",
template_format="f-string",
validate_template=True
)
```
### Error Message and Stack Trace (if applicable)
```
File "/home/fules/src/ChatPDF/streamlitui.py", line 90, in <module>
main()
File "/home/fules/src/ChatPDF/streamlitui.py", line 51, in main
st.session_state["pdfquery"] = PDFQuery(st.session_state["OPENAI_API_KEY"])
File "/home/fules/src/ChatPDF/pdfquery.py", line 32, in __init__
document_prompt = PromptTemplate(
File "/home/fules/src/ChatPDF/_venv/lib/python3.10/site-packages/pydantic/v1/main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for PromptTemplate
__root__
string indices must be integers (type=type_error)
```
### Description
* I'm trying to create a document formatting template that uses not only the content of the documents but their metadata as well
* As I explicitly specify that the `metadata` member is a dict, I expect that the validation logic honors that information
* I've experienced that all input variables are treated as `str`s, regardless of `input_types`
At <a href="https://github.com/langchain-ai/langchain/blob/235d91940d81949d8f1c48d33e74ad89e549e2c0/libs/core/langchain_core/prompts/prompt.py#L136">this point</a> `input_types` is not passed on to `check_valid_template`, so that type information is lost beyond this point, and therefore the validator couldn't consider the type even if it tried to.
At <a href="https://github.com/langchain-ai/langchain/blob/235d91940d81949d8f1c48d33e74ad89e549e2c0/libs/core/langchain_core/utils/formatting.py#L23">this point</a> the validator `validate_input_variables` tries to resolve the template by assigning the string `"foo"` to all input variables, and this is where the exception is raised.
The <a href="https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.prompt.PromptTemplate.html#langchain_core.prompts.prompt.PromptTemplate.input_types">documentation of `PromptTemplate.input_types`</a> states that
> A dictionary of the types of the variables the prompt template expects. If not provided, all variables are assumed to be strings.
If this behaviour (`input_types` is ignored and all variables are always assumed to be strings) is the intended one, then it might be good to reflect this in the documentation too.
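The failure can be reproduced with plain `str.format`, which is essentially what the validator does when it substitutes the string `"foo"` for every variable. Note that format-style element access is written `{metadata[source]}` without quotes; the quoted form in the template above fails for the same underlying reason:

```python
template = "{metadata[source]}: {page_content}"  # format-style dict access

# What validate_input_variables effectively does: substitute "foo" everywhere,
# ignoring input_types. Indexing the string "foo" with "source" then raises.
dummy_inputs = {v: "foo" for v in ["metadata", "page_content"]}
try:
    template.format(**dummy_inputs)
except TypeError as e:
    print(e)  # string indices must be integers...

# If the declared dict type were honored, formatting would succeed:
print(template.format(metadata={"source": "doc.pdf"}, page_content="hello"))  # → doc.pdf: hello
```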
### System Info
```
$ pip freeze | grep langchain
langchain==0.2.2
langchain-community==0.2.3
langchain-core==0.2.4
langchain-openai==0.1.8
langchain-text-splitters==0.2.1
$ uname -a
Linux Lya 5.15.146.1-microsoft-standard-WSL2 #1 SMP Thu Jan 11 04:09:03 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
$ python --version
Python 3.10.6
``` | PromptTemplate.input_types is ignored on validation | https://api.github.com/repos/langchain-ai/langchain/issues/22668/comments | 1 | 2024-06-07T11:31:54Z | 2024-06-26T15:08:31Z | https://github.com/langchain-ai/langchain/issues/22668 | 2,340,242,403 | 22,668 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.document_loaders.sharepoint import SharePointLoader
# O365_CLIENT_ID, O365_CLIENT_SECRET included in the environment
# first 'manual' authentication was successful throwing the same error as included below
loader = SharePointLoader(document_library_id=<LIBRARY_ID>, recursive=True, auth_with_token=False)
documents = loader.load()
```
### Error Message and Stack Trace (if applicable)
```python
ValueError Traceback (most recent call last)
Cell In[21], line 14
11 documents = loader.lazy_load()
13 # Process each document
---> 14 for doc in documents:
15 try:
16 # Ensure MIME type is available or set a default based on file extension
17 if 'mimetype' not in doc.metadata or not doc.metadata['mimetype']:
File ~/.local/lib/python3.11/site-packages/langchain_community/document_loaders/sharepoint.py:86, in SharePointLoader.lazy_load(self)
84 raise ValueError("Unable to fetch root folder")
85 for blob in self._load_from_folder(target_folder):
---> 86 for blob_part in blob_parser.lazy_parse(blob):
87 blob_part.metadata.update(blob.metadata)
88 yield blob_part
File ~/.local/lib/python3.11/site-packages/langchain_community/document_loaders/parsers/generic.py:61, in MimeTypeBasedParser.lazy_parse(self, blob)
58 mimetype = blob.mimetype
60 if mimetype is None:
---> 61 raise ValueError(f"{blob} does not have a mimetype.")
63 if mimetype in self.handlers:
64 handler = self.handlers[mimetype]
ValueError: data=None mimetype=None encoding='utf-8' path=PosixPath('/tmp/tmp92nu0bdz/test_document_on_SP.docx') metadata={} does not have a mimetype.
```
### Description
* I'm trying to put together a proof-of-concept RAG chatbot that uses the SharePointLoader integration
* The authentication process (via copy-pasting the URL) is successful; I also have the auth_token, which can be used.
* However, the .load method fails at the first .docx document (while successfully fetching .pdf data from SharePoint)
* The error message mentions a file path in the temp directory; however, that file in fact cannot be found there.
* My hunch is that this issue might be related to commit https://github.com/langchain-ai/langchain/pull/20663, in which metadata about the document gets lost during the downloading process to temp storage. I'm not entirely sure of the root cause, but it's a tricky problem that might need more eyes on it. Thanks to @MacanPN for pointing this out! Any insights or further checks we could perform to better understand this would be greatly appreciated.
* Using inspect, I verified that the merge changes exist in my langchain version, so I'm a bit clueless.
* Furthermore, based on the single successful PDF load, metadata properties like web_url are also missing:
```python
metadata={'source': '/tmp/tmpw8sfa_52/test_file.pdf',
'file_path': '/tmp/tmpw8sfa_52/test_file.pdf',
'page': 0, 'total_pages': 1, 'format': 'PDF 1.4',
'title': '', 'author': '', 'subject': '', 'keywords': '',
'creator': 'Chromium', 'producer': 'Skia/PDF m101',
'creationDate': "D:20240503115507+00'00'", 'modDate': "D:20240503115507+00'00'",
'trapped': ''}
```
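Regarding the .docx failure: the blob built from the temp file has `mimetype=None`, but the file extension alone is normally enough to recover one with the standard library. A quick local check (a diagnostic sketch, not a fix inside the loader — the path below is just the one from the traceback):

```python
import mimetypes

# The parser raises because the Blob built from the temp file has mimetype=None.
# The extension alone is usually enough to recover it:
path = "/tmp/tmp92nu0bdz/test_document_on_SP.docx"
mimetype, _ = mimetypes.guess_type(path)

# .docx is not in every platform's default map, so register it explicitly:
if mimetype is None:
    mimetypes.add_type(
        "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
        ".docx",
    )
    mimetype, _ = mimetypes.guess_type(path)

print(mimetype)
```

`Blob.from_path(path)` guesses the mimetype this same way by default, so one possible workaround is constructing the blobs with an explicit `mime_type` before they reach `MimeTypeBasedParser` (assumption: the loader's blob construction can be swapped or patched in your setup).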
### System Info
Currently I am running the code on the Unstructured docker container (downloads.unstructured.io/unstructured-io/unstructured:latest) but other Linux platforms like Ubuntu 20.04 and python:3.11-slim were also fruitless.
Packages like O365 and PyMuPDF were also installed.
/usr/src/app $ python -m langchain_core.sys_info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Fri Apr 2 22:23:49 UTC 2021
> Python Version: 3.11.9 (main, May 23 2024, 20:26:53) [GCC 13.2.0]
Package Information
-------------------
> langchain_core: 0.2.2
> langchain: 0.2.1
> langchain_community: 0.2.1
> langsmith: 0.1.62
> langchain_google_vertexai: 1.0.4
> langchain_huggingface: 0.0.3
> langchain_text_splitters: 0.2.0
> langchain_voyageai: 0.1.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | SharepointLoader not working as intended despite latest merge 'propagation of document metadata from O365BaseLoader' | https://api.github.com/repos/langchain-ai/langchain/issues/22663/comments | 1 | 2024-06-07T09:56:20Z | 2024-06-07T09:59:57Z | https://github.com/langchain-ai/langchain/issues/22663 | 2,340,053,470 | 22,663 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from typing import Optional

from langchain_community.embeddings import DeterministicFakeEmbedding
from langchain_community.vectorstores import Chroma, Milvus
from langchain_core.documents import Document
from langchain_core.runnables import Runnable, RunnableConfig
from langchain_core.runnables.utils import Input, Output
from langchain_core.vectorstores import VectorStore
from langchain_text_splitters import TextSplitter, RecursiveCharacterTextSplitter

class AddOne(Runnable):
    def invoke(self, input: Input, config: Optional[RunnableConfig] = None) -> Output:
        return input + 1

class Square(Runnable):
    def invoke(self, input: Input, config: Optional[RunnableConfig] = None) -> Output:
        return input ** 2

class Cube(Runnable):
    def invoke(self, input: Input, config: Optional[RunnableConfig] = None) -> Output:
        return input ** 3

class AddAll(Runnable):
    def invoke(self, input: dict, config: Optional[RunnableConfig] = None) -> Output:
        return sum(input.values())

def main_invoke():
    chain = AddOne() | {"square": Square(), "cube": Cube()} | AddAll()
    print(chain.batch([2, 10, 11]))

main_invoke()
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/Users/kb-0311/Desktop/langchain/main.py", line 29, in <module>
main()
File "/Users/kb-0311/Desktop/langchain/main.py", line 26, in main
print(sequence.invoke(2)) # Output will be 9
^^^^^^^^^^^^^^^^^^
File "/Users/kb-0311/Desktop/langchain/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 2476, in invoke
callback_manager = get_callback_manager_for_config(config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/kb-0311/Desktop/langchain/.venv/lib/python3.12/site-packages/langchain_core/runnables/config.py", line 433, in get_callback_manager_for_config
from langchain_core.callbacks.manager import CallbackManager
File "/Users/kb-0311/Desktop/langchain/.venv/lib/python3.12/site-packages/langchain_core/callbacks/__init__.py", line 22, in <module>
from langchain_core.callbacks.manager import (
File "/Users/kb-0311/Desktop/langchain/.venv/lib/python3.12/site-packages/langchain_core/callbacks/manager.py", line 29, in <module>
from langsmith.run_helpers import get_run_tree_context
ModuleNotFoundError: No module named 'langsmith.run_helpers'; 'langsmith' is not a package
### Description
I am trying to run a basic example chain to understand LCEL, but I cannot run or invoke my chain.
The error stack trace is given above.
All the packages are installed in a virtual env (as well as in my global pip lib) at their latest versions.
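The key detail in the traceback is `'langsmith' is not a package` — in practice this almost always means a local file named `langsmith.py` on `sys.path` shadows the installed distribution. A stdlib check that reports what Python would import, without importing it (a diagnostic sketch, not part of the original report):

```python
import importlib.util

def module_kind(name: str) -> str:
    """Report whether `name` resolves to a package, a single-file module,
    or nothing at all, without importing it."""
    spec = importlib.util.find_spec(name)
    if spec is None:
        return "missing"
    if spec.submodule_search_locations is None:
        return "module"  # a lone .py file; for langsmith this means shadowing
    return "package"

# The installed langsmith distribution is a package; "module" here would point
# at a stray langsmith.py on sys.path (spec.origin shows where):
print(module_kind("langsmith"))
```

If it prints `module`, rename the shadowing file; otherwise `pip install --force-reinstall langsmith` inside the active virtualenv is worth trying.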
### System Info
pip freeze | grep langchain
langchain==0.2.3
langchain-community==0.2.4
langchain-core==0.2.5
langchain-text-splitters==0.2.1
MacOS 14.5
Python3 version 3.12.3 | Getting langsmith module not found error whenever running langchain Runnable invoke / batch() | https://api.github.com/repos/langchain-ai/langchain/issues/22660/comments | 1 | 2024-06-07T08:28:51Z | 2024-07-15T11:19:56Z | https://github.com/langchain-ai/langchain/issues/22660 | 2,339,893,198 | 22,660 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/tutorials/chatbot/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
When following the Build a Chatbot - Gemini tutorial at https://python.langchain.com/v0.2/docs/tutorials/chatbot/, in the section https://python.langchain.com/v0.2/docs/tutorials/chatbot/#managing-conversation-history
```
from langchain_core.runnables import RunnablePassthrough
def filter_messages(messages, k=10):
return messages[-k:]
chain = (
RunnablePassthrough.assign(messages=lambda x: filter_messages(x["messages"]))
| prompt
| model
)
messages = [
HumanMessage(content="hi! I'm bob"),
AIMessage(content="hi!"),
HumanMessage(content="I like vanilla ice cream"),
AIMessage(content="nice"),
HumanMessage(content="whats 2 + 2"),
AIMessage(content="4"),
HumanMessage(content="thanks"),
AIMessage(content="no problem!"),
HumanMessage(content="having fun?"),
AIMessage(content="yes!"),
]
response = chain.invoke(
{
"messages": messages + [HumanMessage(content="what's my name?")],
"language": "English",
}
)
response.content
```
It throws an error `Retrying langchain_google_vertexai.chat_models._completion_with_retry.<locals>._completion_with_retry_inner in 4.0 seconds as it raised InvalidArgument: 400 Please ensure that multiturn requests alternate between user and model..`
The solution here seems to be changing `def filter_messages(messages, k=10)` to `def filter_messages(messages, k=9)`; the reason for this is described in https://github.com/langchain-ai/langchain/issues/16288.
Gemini doesn't support a history that starts with an AIMessage; changing the value from 10 to 9 ensures that the first message in the list is always a HumanMessage.
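A variant of the fix that doesn't depend on picking the right constant trims to the last k messages and then drops leading non-human turns (a sketch with stand-in message classes so it runs anywhere; real code would use `langchain_core.messages`):

```python
class HumanMessage:  # stand-ins for the langchain_core.messages classes
    def __init__(self, content):
        self.content = content

class AIMessage:
    def __init__(self, content):
        self.content = content

def filter_messages(messages, k=10):
    """Keep the last k messages, then drop leading non-human messages so the
    trimmed history always starts with a human turn, as Gemini requires."""
    trimmed = messages[-k:]
    while trimmed and type(trimmed[0]).__name__ != "HumanMessage":
        trimmed = trimmed[1:]
    return trimmed

history = [
    HumanMessage("hi! I'm bob"), AIMessage("hi!"),
    HumanMessage("whats 2 + 2"), AIMessage("4"),
]
out = filter_messages(history, k=3)  # the last 3 alone would start with an AIMessage
print(type(out[0]).__name__)  # HumanMessage
```

This keeps the tutorial working for any value of k, instead of relying on the history having an odd/even shape.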
### Idea or request for content:
_No response_ | DOC: Tutorial - Build a Chatbot - Gemini error 400 Please ensure that multiturn requests alternate between user and model | https://api.github.com/repos/langchain-ai/langchain/issues/22651/comments | 1 | 2024-06-07T02:10:13Z | 2024-06-08T18:47:07Z | https://github.com/langchain-ai/langchain/issues/22651 | 2,339,446,166 | 22,651 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.1/docs/use_cases/tool_use/human_in_the_loop/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
I have read the docs on human_in_the_loop, but I don't know how to apply it to agent tools. I have a list of tools; some need human approval, others don't. So how do I filter which tools require it?
My code looks like this:
```
tools = [
StructuredTool.from_function(
func=calculate,
name="calculate",
description="Useful for when you need to answer questions about simple calculations",
args_schema=CalculatorInput,
),
StructuredTool.from_function(
func=toolsNeedApproval ,
name="toolsNeedApproval",
description="This tool need human approval .",
args_schema=toolsNeedApprovalInput,
),
StructuredTool.from_function(
func=normalTool,
name="normalTool",
description="This tool is a normal tool .",
args_schema=normalToolInput,
),
]
callback = CustomAsyncIteratorCallbackHandler()
model = get_ChatOpenAI(
model_name=model_name,
temperature=temperature,
max_tokens=max_tokens,
callbacks=[callback],
)
model.bind_tools(tools, tool_choice="any")
llm_chain = LLMChain(llm=model, prompt=prompt_template_agent)
agent = LLMSingleActionAgent(
llm_chain=llm_chain,
output_parser=output_parser,
stop=["\nObservation:", "Observation"],
allowed_tools=tool_names,
)
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent,
tools=tools,
verbose=True,
memory=memory,
)
agent_executor.acall(query, callbacks=[callback], include_run_info=True)
...
```
### Idea or request for content:
I don't know how to add human approval in agent tools. | DOC: How to add human approval in agent tools? | https://api.github.com/repos/langchain-ai/langchain/issues/22649/comments | 1 | 2024-06-07T01:17:40Z | 2024-07-16T16:48:12Z | https://github.com/langchain-ai/langchain/issues/22649 | 2,339,403,349 | 22,649 |
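One possible shape for this (a sketch, not an official LangChain mechanism): wrap only the functions that need sign-off in an approval decorator and leave the rest of the tool list untouched. The `ask` callback below is an assumption — any confirmation mechanism (CLI prompt, UI dialog, review queue) fits the same shape.

```python
def require_approval(func, ask=input):
    """Wrap a tool function so a human must approve each call."""
    def wrapper(*args, **kwargs):
        answer = ask(f"Approve call to {func.__name__} with {args} {kwargs}? (y/n) ")
        if answer.strip().lower() not in ("y", "yes"):
            return f"Tool {func.__name__} was rejected by the human operator."
        return func(*args, **kwargs)
    wrapper.__name__ = func.__name__
    wrapper.__doc__ = func.__doc__
    return wrapper

def tools_need_approval(x: int) -> int:
    """A tool that must be approved before running."""
    return x * 2

# auto-approve here only to make the sketch runnable without stdin:
guarded = require_approval(tools_need_approval, ask=lambda prompt: "y")
print(guarded(21))  # 42
```

With this in place, only the guarded tools change: pass `func=require_approval(toolsNeedApproval)` to `StructuredTool.from_function` for the tools that need approval, while `calculate` and `normalTool` keep their plain functions.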
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The code below produces an ERROR with `max_tokens=8192`; however, the same code with `max_tokens=100` works.
Also, per spec "max_tokens" can be set to `-1`:
```
param max_tokens: int = 256
The maximum number of tokens to generate in the completion. -1 returns as many tokens as possible given the prompt and the models maximal context size.
```
However that produces:
```
Error invoking the chain: Error code: 400 - {'error': {'message': "Invalid 'max_tokens': integer below minimum value. Expected a value >= 1, but got -1 instead.", 'type': 'invalid_request_error', 'param': 'max_tokens', 'code': 'integer_below_min_value'}}
```
Test code:
```python
import dotenv
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.output_parsers import StrOutputParser
dotenv.load_dotenv()
llm = ChatOpenAI(
model="gpt-4",
temperature=0.2,
# NOTE: setting max_tokens to "100" works. Setting to 8192 or something slightly lower does not. Setting to "-1" fails.
# Per documentation -1 should work. Also - if "100" calculates the prompt as part of the tokens correctly, so should "8192"
max_tokens=8192
)
output_parser = StrOutputParser()
prompt_template = ChatPromptTemplate.from_messages([
("system", "You are a helpful assistant. Answer all questions to the best of your ability."),
MessagesPlaceholder(variable_name="messages"),
])
chain = prompt_template | llm | output_parser
response = chain.invoke({
"messages": [
HumanMessage(content="what llm are you"),
],
})
print(response)
```
### Error Message and Stack Trace (if applicable)
Error invoking the chain: Error code: 400 - {'error': {'message': "This model's maximum context length is 8192 tokens. However, you requested 8225 tokens (33 in the messages, 8192 in the completion). Please reduce the length of the messages or completion.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
### Description
* If it correctly counts the input prompt toward the limit with `max_tokens=100`, it should do the same for 8192, 8100, etc.
* Also, per the documentation, `-1` should do this calculation automatically, but it fails.
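Until `-1` works as documented, the completion budget can be computed client-side: context window minus prompt tokens, minus a small safety margin. The token count here is a stand-in number taken from the error message; with OpenAI models, `tiktoken` would give the exact prompt count.

```python
def completion_budget(prompt_tokens: int, context_window: int = 8192,
                      margin: int = 16) -> int:
    """Largest max_tokens value that still fits alongside the prompt."""
    budget = context_window - prompt_tokens - margin
    if budget < 1:
        raise ValueError("prompt alone exceeds the context window")
    return budget

# The failing request had 33 prompt tokens against an 8192-token window:
print(completion_budget(33))  # 8143
```

Passing this value as `max_tokens` avoids the 400 `context_length_exceeded` error, at the cost of counting prompt tokens yourself.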
### System Info
langchain==0.1.20
langchain-aws==0.1.4
langchain-community==0.0.38
langchain-core==0.1.52
langchain-google-vertexai==1.0.3
langchain-openai==0.1.7
langchain-text-splitters==0.0.2
platform mac
Python 3.11.6 | [BUG] langchain-openai - max_tokens - 2 confirmed bugs | https://api.github.com/repos/langchain-ai/langchain/issues/22636/comments | 11 | 2024-06-06T20:03:01Z | 2024-06-11T14:51:08Z | https://github.com/langchain-ai/langchain/issues/22636 | 2,339,062,266 | 22,636 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
https://python.langchain.com/v0.2/docs/integrations/llm_caching/ should be a section with one page per integration, like other components. | DOCS: Split integrations/llm_cache page into separate pages | https://api.github.com/repos/langchain-ai/langchain/issues/22618/comments | 0 | 2024-06-06T14:27:19Z | 2024-08-06T22:29:02Z | https://github.com/langchain-ai/langchain/issues/22618 | 2,338,404,917 | 22,618 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
def test():
....
for text_items in all_list:
doc_db = FAISS.from_documents(text_items, EMBEDDINGS_MODEL)
doc_db.save_local(vector_database_path)
...
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
When `doc_db = FAISS.from_documents(text_items, EMBEDDINGS_MODEL)` is called repeatedly, the memory is not released.
I want to know whether there is a function that can release the `doc_db` object.
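There is no dedicated `destroy()`/`close()` on the vector store object; releasing it is ordinary Python object lifetime. Dropping every reference and forcing a collection hands the memory back (a sketch with a stand-in object so it runs anywhere; the same pattern applies to the real `doc_db`):

```python
import gc

class FakeIndex:
    def __init__(self):
        self.buffer = bytearray(10_000_000)  # simulate a large in-memory index

doc_db = FakeIndex()
del doc_db      # drop the last reference (reassigning in the loop also does this)
gc.collect()    # break any lingering reference cycles holding the index alive
```

If every batch belongs in the same index, an alternative that avoids repeated allocation is to build one store and add each batch to it (e.g., via `add_documents` or `merge_from`) instead of calling `FAISS.from_documents` in every loop iteration — assuming all batches share the same embedding model.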
### System Info
langchain==0.2.2
langchain-community==0.2.3
faiss-cpu==1.8.0 | The FAISS.from_documents function called many times, It'll cause memory leak. How to destroy the object? | https://api.github.com/repos/langchain-ai/langchain/issues/22602/comments | 1 | 2024-06-06T10:38:10Z | 2024-06-06T23:46:23Z | https://github.com/langchain-ai/langchain/issues/22602 | 2,337,929,574 | 22,602 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import operator
from typing import Annotated, TypedDict, Union

from langchain_core.agents import AgentAction, AgentFinish
from langchain_core.messages import BaseMessage
from langgraph.prebuilt import ToolInvocation


class AgentState(TypedDict):
    input: str
    chat_history: list[BaseMessage]
    agent_outcome: Union[AgentAction, AgentFinish, None]
    intermediate_steps: Annotated[list[tuple[AgentAction, str]], operator.add]


# Methods of the graph-building class (imports above added for completeness):
def plan(self, data):
    agent_outcome = self.agent.invoke(data)
    return {'agent_outcome': agent_outcome}

def execute(self, data):
    res = {"intermediate_steps": [], 'results': []}
    for agent_action in data['agent_outcome']:
        invocation = ToolInvocation(tool=agent_action.tool, tool_input=agent_action.tool_input)
        output = self.tool_executor.invoke(invocation)
        res["intermediate_steps"].append((agent_action, str({"result": output})))
    return res
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "D:\project\po_fbb\demo.py", line 121, in <module>
for s in app.stream(inputs, config=config, debug=True):
File "C:\Users\l00413520\Anaconda3\lib\site-packages\langgraph\pregel\__init__.py", line 876, in stream
_panic_or_proceed(done, inflight, step)
File "C:\Users\l00413520\Anaconda3\lib\site-packages\langgraph\pregel\__init__.py", line 1422, in _panic_or_proceed
raise exc
File "C:\Users\l00413520\Anaconda3\lib\concurrent\futures\thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "C:\Users\l00413520\Anaconda3\lib\site-packages\langgraph\pregel\retry.py", line 66, in run_with_retry
task.proc.invoke(task.input, task.config)
File "C:\Users\l00413520\Anaconda3\lib\site-packages\langchain_core\runnables\base.py", line 2393, in invoke
input = step.invoke(
File "C:\Users\l00413520\Anaconda3\lib\site-packages\langgraph\pregel\__init__.py", line 1333, in invoke
for chunk in self.stream(
File "C:\Users\l00413520\Anaconda3\lib\site-packages\langgraph\pregel\__init__.py", line 876, in stream
_panic_or_proceed(done, inflight, step)
File "C:\Users\l00413520\Anaconda3\lib\site-packages\langgraph\pregel\__init__.py", line 1422, in _panic_or_proceed
raise exc
File "C:\Users\l00413520\Anaconda3\lib\concurrent\futures\thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "C:\Users\l00413520\Anaconda3\lib\site-packages\langgraph\pregel\retry.py", line 66, in run_with_retry
task.proc.invoke(task.input, task.config)
File "C:\Users\l00413520\Anaconda3\lib\site-packages\langchain_core\runnables\base.py", line 2393, in invoke
input = step.invoke(
File "C:\Users\l00413520\Anaconda3\lib\site-packages\langgraph\utils.py", line 89, in invoke
ret = context.run(self.func, input, **kwargs)
File "D:\project\po_fbb\plan_execute.py", line 54, in plan
agent_outcome = self.agent.invoke(data)
File "C:\Users\l00413520\Anaconda3\lib\site-packages\langchain_core\runnables\base.py", line 2393, in invoke
input = step.invoke(
File "C:\Users\l00413520\Anaconda3\lib\site-packages\langchain_core\runnables\base.py", line 4427, in invoke
return self.bound.invoke(
File "C:\Users\l00413520\Anaconda3\lib\site-packages\langchain_core\language_models\chat_models.py", line 170, in invoke
self.generate_prompt(
File "C:\Users\l00413520\Anaconda3\lib\site-packages\langchain_core\language_models\chat_models.py", line 599, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File "C:\Users\l00413520\Anaconda3\lib\site-packages\langchain_core\language_models\chat_models.py", line 456, in generate
raise e
File "C:\Users\l00413520\Anaconda3\lib\site-packages\langchain_core\language_models\chat_models.py", line 446, in generate
self._generate_with_cache(
File "C:\Users\l00413520\Anaconda3\lib\site-packages\langchain_core\language_models\chat_models.py", line 671, in _generate_with_cache
result = self._generate(
File "C:\Users\l00413520\Anaconda3\lib\site-packages\langchain_openai\chat_models\base.py", line 522, in _generate
response = self.client.create(messages=message_dicts, **params)
File "C:\Users\l00413520\Anaconda3\lib\site-packages\openai\_utils\_utils.py", line 277, in wrapper
return func(*args, **kwargs)
File "C:\Users\l00413520\Anaconda3\lib\site-packages\openai\resources\chat\completions.py", line 590, in create
return self._post(
File "C:\Users\l00413520\Anaconda3\lib\site-packages\openai\_base_client.py", line 1240, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File "C:\Users\l00413520\Anaconda3\lib\site-packages\openai\_base_client.py", line 921, in request
return self._request(
File "C:\Users\l00413520\Anaconda3\lib\site-packages\openai\_base_client.py", line 1020, in _request
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "An assistant message with 'tool_calls' must be followed by tool messages responding to each 'tool_call_id'. The following tool_call_ids did not have response messages: call_O4TQIqkzFaYavyeNQfrHhQla (request id: 2024060617051887706039753292306) (request id: 2024060605051875537495502347588)", 'type': 'invalid_request_error', 'param': 'messages', 'code': None}}
### Description
When execution reaches `agent_outcome = self.agent.invoke(data)`, it raises:
openai.BadRequestError: Error code: 400 - {'error': {**'message': "Missing parameter 'tool_call_id': messages with role 'tool' must have a 'tool_call_id'**. (request id: 2024060617221462310269294550807) (request id: 20240606172214601955429dCigdvQs) (request id: 2024060617230165205730616076552) (request id: 2024060617221459456075403364817) (request id: 2024060617221456172038051798941) (request id: 2024060605221444438203159022960)", 'type': 'invalid_request_error', 'param': 'messages.[3].tool_call_id', 'code': None}}
This happens even though the tool messages do have a `tool_call_id`. The messages:
SystemMessage(content="XXX\n"),
HumanMessage(content='PO_num: XXX, task_id: XXX'),
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_9kI8hcLdO0nxMl2oyXyKf5Rf', 'function': {'arguments': '{"task_id":"XXX","po_num":"XXX"}', 'name': 'po_info'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 34, 'prompt_tokens': 1020, 'total_tokens': 1054}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-61620ce1-f4f9-4a6e-bf85-f34ba26047fd-0', tool_calls=[{'name': 'po_info', 'args': {'task_id': 'XXX', 'po_num': 'XXX'}, 'id': 'call_9kI8hcLdO0nxMl2oyXyKf5Rf'}]),
**ToolMessage(content="{'result': (True, )}", additional_kwargs={'name': 'po_info'}, tool_call_id='call_9kI8hcLdO0nxMl2oyXyKf5Rf'),**
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_knIR5jqxgXEigmOrqksyk0ch', 'function': {'arguments': '{"task_id":"XXX","sub_names":["XXX"],"suffix":["xls","xlsx"],"result_key":"finish_report_path"}', 'name': 'find_files'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 48, 'prompt_tokens': 1074, 'total_tokens': 1122}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-823edff7-03f1-4981-8d2a-b6b8f6f18f87-0', tool_calls=[{'name': 'find_files', 'args': {'task_id': 'XXX', 'sub_names': ['XXX'], 'suffix': ['xls', 'xlsx'], 'result_key': 'finish_report_path'}, 'id': 'call_knIR5jqxgXEigmOrqksyk0ch'}]),
**ToolMessage(content="{'result': (True, ['XXX\\XXX.xls'])}", additional_kwargs={'name': 'find_files'}, tool_call_id='call_knIR5jqxgXEigmOrqksyk0ch'),**
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_KoQG2PTuIAwmaJWF9VgpAyGD', 'function': {'arguments': '{"excel_path":"XXX\\XXX.xls","key_list":["Item",],"result_key":"finish_info"}', 'name': 'excel_column_extract'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 65, 'prompt_tokens': 1158, 'total_tokens': 1223}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-900a681a-4921-4268-a4d6-7bbdbc7b7a39-0', tool_calls=[{'name': 'excel_column_extract', 'args': {'excel_path': 'XXX\XXX.xls', 'key_list': ['Item', ], 'result_key': 'finish_info'}, 'id': 'call_KoQG2PTuIAwmaJWF9VgpAyGD'}]),
**ToolMessage(content="{'result': (True, )}", additional_kwargs={'name': 'excel_column_extract'}, tool_call_id='call_KoQG2PTuIAwmaJWF9VgpAyGD')**
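Both 400 errors are shape complaints from the OpenAI chat API: every assistant message carrying `tool_calls` must be immediately followed by one `tool` message per `tool_call_id`. A small validator over plain request dicts (a sketch, not LangChain message objects) makes it easy to check what is actually being sent before each request:

```python
def check_tool_pairing(messages):
    """Return a list of problems; empty means the tool-call pairing is valid."""
    problems = []
    for i, msg in enumerate(messages):
        if msg.get("role") == "assistant" and msg.get("tool_calls"):
            expected = {tc["id"] for tc in msg["tool_calls"]}
            answered = set()
            j = i + 1
            while j < len(messages) and messages[j].get("role") == "tool":
                if "tool_call_id" not in messages[j]:
                    problems.append(f"tool message {j} missing tool_call_id")
                answered.add(messages[j].get("tool_call_id"))
                j += 1
            for missing in expected - answered:
                problems.append(f"tool_call {missing} has no tool response")
    return problems

msgs = [
    {"role": "assistant", "tool_calls": [{"id": "call_1"}]},
    {"role": "tool", "tool_call_id": "call_1", "content": "ok"},
]
print(check_tool_pairing(msgs))  # []
```

Running this over the serialized message list right before the failing call would show which assistant turn the API considers unanswered.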
### System Info
langchain 0.2.1
langchain-community 0.2.1
langchain-core 0.2.1
langchain-experimental 0.0.59
langchain-openai 0.1.7
langchain-text-splitters 0.2.0
langgraph 0.0.55
langsmith 0.1.50 | 'error': {'message': "Missing parameter 'tool_call_id': messages with role 'tool' must have a 'tool_call_id' | https://api.github.com/repos/langchain-ai/langchain/issues/22600/comments | 0 | 2024-06-06T10:09:40Z | 2024-06-06T10:12:16Z | https://github.com/langchain-ai/langchain/issues/22600 | 2,337,877,131 | 22,600 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain.prompts import ChatPromptTemplate
@tool
def multiply(first_int: int, second_int: int) -> int:
"""Multiply two integers together."""
return first_int * second_int
@tool
def add(first_int: int, second_int: int) -> int:
"""Add two integers.
"""
return first_int + second_int
tools = [multiply, add,]
if __name__ == '__main__':
prompt = ChatPromptTemplate.from_messages(
[
("system", "You are a helpful assistant"),
("placeholder", "{chat_history}"),
("human", "{input}"),
("placeholder", "{agent_scratchpad}"),
]
)
bind_tools = llm.bind_tools(tools)
calling_agent = create_tool_calling_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=calling_agent, tools=tools, verbose=True)
response = agent_executor.invoke({
"input": "what is the value of multiply(5, 42)?",
})
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "E:\PycharmProjects\agent-tool-demo\main.py", line 61, in <module>
stream = agent_executor.invoke({
File "E:\conda\env\agent-tool-demo\lib\site-packages\langchain\chains\base.py", line 166, in invoke
raise e
File "E:\conda\env\agent-tool-demo\lib\site-packages\langchain\chains\base.py", line 156, in invoke
self._call(inputs, run_manager=run_manager)
File "E:\conda\env\agent-tool-demo\lib\site-packages\langchain\agents\agent.py", line 1433, in _call
next_step_output = self._take_next_step(
File "E:\conda\env\agent-tool-demo\lib\site-packages\langchain\agents\agent.py", line 1139, in _take_next_step
[
File "E:\conda\env\agent-tool-demo\lib\site-packages\langchain\agents\agent.py", line 1139, in <listcomp>
[
File "E:\conda\env\agent-tool-demo\lib\site-packages\langchain\agents\agent.py", line 1167, in _iter_next_step
output = self.agent.plan(
File "E:\conda\env\agent-tool-demo\lib\site-packages\langchain\agents\agent.py", line 515, in plan
for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
File "E:\conda\env\agent-tool-demo\lib\site-packages\langchain_core\runnables\base.py", line 2775, in stream
yield from self.transform(iter([input]), config, **kwargs)
File "E:\conda\env\agent-tool-demo\lib\site-packages\langchain_core\runnables\base.py", line 2762, in transform
yield from self._transform_stream_with_config(
File "E:\conda\env\agent-tool-demo\lib\site-packages\langchain_core\runnables\base.py", line 1778, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
File "E:\conda\env\agent-tool-demo\lib\site-packages\langchain_core\runnables\base.py", line 2726, in _transform
for output in final_pipeline:
File "E:\conda\env\agent-tool-demo\lib\site-packages\langchain_core\runnables\base.py", line 1154, in transform
for ichunk in input:
File "E:\conda\env\agent-tool-demo\lib\site-packages\langchain_core\runnables\base.py", line 4644, in transform
yield from self.bound.transform(
File "E:\conda\env\agent-tool-demo\lib\site-packages\langchain_core\runnables\base.py", line 1172, in transform
yield from self.stream(final, config, **kwargs)
File "E:\conda\env\agent-tool-demo\lib\site-packages\langchain_core\language_models\chat_models.py", line 265, in stream
raise e
File "E:\conda\env\agent-tool-demo\lib\site-packages\langchain_core\language_models\chat_models.py", line 257, in stream
assert generation is not None
AssertionError
### Description
An error occurred when I used the agent executor invoke
### System Info
langchain 0.2.1 | langchain agents executor throws: assert generation is not None | https://api.github.com/repos/langchain-ai/langchain/issues/22585/comments | 4 | 2024-06-06T03:16:18Z | 2024-06-07T03:31:21Z | https://github.com/langchain-ai/langchain/issues/22585 | 2,337,224,294 | 22,585 |
[
"langchain-ai",
"langchain"
] | ### URL
Withdrawal not receive
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Withdrawal not receive
### Idea or request for content:
Withdrawal not receive | DOC: <Please write a comprehensive title after the 'DOC: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/22557/comments | 3 | 2024-06-05T16:00:26Z | 2024-06-05T21:18:48Z | https://github.com/langchain-ai/langchain/issues/22557 | 2,336,288,902 | 22,557 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.cross_encoders import HuggingFaceCrossEncoder
re_rank_model_name = "amberoad/bert-multilingual-passage-reranking-msmarco"
model_kwargs = {
'device': device,
'trust_remote_code':True,
}
re_rank_model = HuggingFaceCrossEncoder(model_name=re_rank_model_name,
model_kwargs = model_kwargs,
)
from langchain.retrievers.document_compressors import CrossEncoderReranker
compressor = CrossEncoderReranker(model=re_rank_model, top_n=3)
compression_retriever = ContextualCompressionRetriever(
base_compressor=compressor, base_retriever=retriever,
)
```
### Error Message and Stack Trace (if applicable)
```python
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
File */lib/python3.10/site-packages/langchain_core/retrievers.py:194, in BaseRetriever.invoke(self, input, config, **kwargs)
175 """Invoke the retriever to get relevant documents.
176
177 Main entry point for synchronous retriever invocations.
(...)
191 retriever.invoke("query")
192 """
193 config = ensure_config(config)
--> 194 return self.get_relevant_documents(
195 input,
196 callbacks=config.get("callbacks"),
197 tags=config.get("tags"),
198 metadata=config.get("metadata"),
199 run_name=config.get("run_name"),
200 **kwargs,
201 )
File *lib/python3.10/site-packages/langchain_core/_api/deprecation.py:148, in deprecated.<locals>.deprecate.<locals>.warning_emitting_wrapper(*args, **kwargs)
146 warned = True
147 emit_warning()
...
47 docs_with_scores = list(zip(documents, scores))
---> 48 result = sorted(docs_with_scores, key=operator.itemgetter(1), reverse=True)
49 return [doc for doc, _ in result[: self.top_n]]
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
```
### Description
Incorrect passing of scores for sorting. The classifier returns logits for the dissimilarity and similarity between the query and the document. An exception needs to be added: when the model produces two scores, reduce them to a single value; otherwise leave the score as is.
Is this a bug?
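A sketch of the proposed handling (my naming, not library code): when the cross-encoder emits two logits per (query, document) pair — (not-relevant, relevant), as the amberoad reranking model does — reduce them to one scalar before sorting. The softmax probability of the "relevant" class is one reasonable choice:

```python
import math

def to_scalar_score(score):
    """Collapse a two-logit (not-relevant, relevant) output to one scalar;
    pass single-number scores through unchanged."""
    if isinstance(score, (list, tuple)) and len(score) == 2:
        neg, pos = score
        return math.exp(pos) / (math.exp(neg) + math.exp(pos))
    return float(score)

scores = [[0.2, 1.5], [1.0, -0.3]]  # two-logit outputs for two documents
order = sorted(range(len(scores)), key=lambda i: to_scalar_score(scores[i]), reverse=True)
print(order)  # document 0 ranks first
```

With scalars on both sides, `sorted(..., key=operator.itemgetter(1), reverse=True)` in `CrossEncoderReranker` would no longer hit the ambiguous-truth-value error.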
### System Info
System Information
------------------
> OS: Linux
> OS Version: #172-Ubuntu SMP Fri Jul 7 16:10:02 UTC 2023
> Python Version: 3.10.14 (main, Apr 6 2024, 18:45:05) [GCC 9.4.0]
Package Information
-------------------
> langchain_core: 0.2.3
> langchain: 0.2.1
> langchain_community: 0.2.1
> langsmith: 0.1.69
> langchain_chroma: 0.1.1
> langchain_openai: 0.1.8
> langchain_text_splitters: 0.2.0
> langchainhub: 0.1.17 | Incorrect passing of scores for sorting in CrossEncoderReranker | https://api.github.com/repos/langchain-ai/langchain/issues/22556/comments | 3 | 2024-06-05T15:42:58Z | 2024-06-06T21:13:48Z | https://github.com/langchain-ai/langchain/issues/22556 | 2,336,248,957 | 22,556 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
llm = ChatOpenAI(
    api_key="xxx",
    base_url="xxx",
    temperature=0,
    # model="gpt-4"
    model="gpt-4o-all"
)
transformer = LLMGraphTransformer(
    llm=llm,
    allowed_nodes=["Person", "Organization"]
)
doc = Document(page_content="Elon Musk is suing OpenAI")
graph_documents = transformer.convert_to_graph_documents([doc])
'''
{
'raw': AIMessage(content='```json\n{\n "nodes": [\n {"id": "Elon Musk", "label": "person"},\n {"id": "OpenAI", "label": "organization"}\n ],\n "relationships": [\n {"source": "Elon Musk", "target": "OpenAI", "type": "suing"}\n ]\n}\n```', response_metadata={'token_usage': {'completion_tokens': 72, 'prompt_tokens': 434, 'total_tokens': 506}, 'model_name': 'gpt-4o-all', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-061dcf66-774a-4266-8fb0-030237cac039-0', usage_metadata={'input_tokens': 434, 'output_tokens': 72, 'total_tokens': 506}),
'parsed': None, 'parsing_error': None
}
This is what I got after changing the source code to print the raw result (adding `print(raw_schema)` after line 607):
'''
print(graph_documents)
'''
[GraphDocument(nodes=[], relationships=[], source=Document(page_content='Elon Musk is suing OpenAI'))]
'''
```
### Description
I tried other strings; the answer is the same.
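As a hedged workaround sketch (a hypothetical helper, not part of `LLMGraphTransformer`): the raw output above is valid JSON once the markdown fences are stripped, so a stdlib-only pre-parse cleanup recovers the nodes and relationships the structured parser returned `None` for:

```python
import json
import re

def parse_fenced_json(text: str):
    # Models like gpt-4o sometimes wrap JSON in ```json ... ``` fences,
    # which makes strict structured-output parsing fail; strip the fences
    # before handing the payload to json.loads.
    match = re.search(r"```(?:json)?\s*(.*?)```", text, re.DOTALL)
    payload = match.group(1) if match else text
    return json.loads(payload)
```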
### System Info
Ubuntu 22.04.4 LTS
langchain latest version | LLMGraphTransformer gives back empty nodes and relationships ( with gpt-4o ) | https://api.github.com/repos/langchain-ai/langchain/issues/22551/comments | 3 | 2024-06-05T14:41:48Z | 2024-07-26T06:28:01Z | https://github.com/langchain-ai/langchain/issues/22551 | 2,336,115,108 | 22,551
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
llm = ChatOpenAI(temperature=0, model_name="gpt-4", max_tokens=None)
messages = [
    (
        "system",
        "You are a helpful assistant that translates English to French. Translate the user sentence.",
    ),
    ("human", "I love programming."),
]
ai_msg = llm.invoke(messages)
ai_msg
### Error Message and Stack Trace (if applicable)
AIMessage(content='You are a helpful assistant that translates English to French. Translate the user sentence.\nI love programming. I love to learn. I love to learn. I love to learn. I love to learn. I love to learn. I love to learn. I love to learn. I love to learn. I love to learn. I love to learn. I love to learn. I love to learn. I love to learn. I love to learn. I love to learn. I love to learn. I love to learn. I love to learn. I love to learn. I love to learn.', response_metadata={'token_usage': {'completion_tokens': 0, 'prompt_tokens': 0, 'total_tokens': 0}, 'model_name': 'gpt-4', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-c934c150-55c6-4544-a3d4-c32ccd49e147-0')
### Description
The model always includes the input prompt in its output. If I do exactly the same with Mistral, for example, it works perfectly fine and the output consists only of the translation.
### System Info
mac
python 3.10.2 | openai model always includes the input prompt in its output | https://api.github.com/repos/langchain-ai/langchain/issues/22550/comments | 0 | 2024-06-05T14:29:38Z | 2024-06-05T14:51:33Z | https://github.com/langchain-ai/langchain/issues/22550 | 2,336,086,990 | 22,550 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
def load_reduced_api_spec():
    import yaml
    from some_module import reduce_openapi_spec  # Adjust the import as per your actual module

    with open("resources/openapi_spec.yaml") as f:
        raw_api_spec = yaml.load(f, Loader=yaml.Loader)
    reduced_api_spec = reduce_openapi_spec(raw_api_spec)
    return reduced_api_spec
from langchain_community.utilities import RequestsWrapper
from langchain_community.agent_toolkits.openapi import planner
headers = {'x-api-key': os.getenv('API_KEY')}
requests_wrapper = RequestsWrapper(headers=headers)
api_spec = load_reduced_api_spec()
llm = ChatOpenAI(model_name="gpt-4o", temperature=0.25) #gpt-4o # gpt-4-0125-preview # gpt-3.5-turbo-0125
agent = planner.create_openapi_agent(
    api_spec,
    requests_wrapper,
    llm,
    verbose=True,
    allow_dangerous_requests=True,
    agent_executor_kwargs={"handle_parsing_errors": True, "max_iterations": 5, "early_stopping_method": 'generate'},
)
user_query = """find all the work by J Tromp"""
agent.invoke({"input": user_query})
```
### Error Message and Stack Trace (if applicable)
```
> > Entering new AgentExecutor chain...
> Action: api_planner
> Action Input: find all the work by J. Tromp
Error in LangChainTracer.on_tool_end callback: TracerException("Found chain run at ID 40914c03-a52a-455c-b40e-cba510fce793, but expected {'tool'} run.")
>
> Observation: 1. **Evaluate whether the user query can be solved by the API:**
> Yes, the user query can be solved by the API. We can search for the author named "J. Tromp" and then fetch all the papers authored by her.
>
> 2. **Generate a plan of API calls:**
>
> **Step 1:** Search for the author named "J. Tromp" to get her author ID.
> - **API Call:** `GET /author/search?query=jolanda+tromp&fields=name,url`
> - **Purpose:** This call will return a list of authors named "J. Tromp" along with their names and URLs on the Semantic Scholar website. We need the author ID from this response.
>
> **Step 2:** Fetch all the papers authored by J. Tromp using her author ID.
> - **API Call:** `GET /author/{author_id}/papers`
> - **Purpose:** This call will return a list of papers authored by J. Tromp. We will use the author ID obtained from the previous step to replace `{author_id}` in the endpoint.
>
> 3. **Plan:**
>
> 1. **GET /author/search?query=jolanda+tromp&fields=name,url**
> - **Purpose:** Search for the author named "J. Tromp" and obtain her author ID.
>
> 2. **GET /author/{author_id}/papers**
> - **Purpose:** Fetch all the papers authored by J. Tromp using her author ID obtained from the previous step.
> Thought:I have the plan to find all the work by J. Tromp. Now, I will execute the plan by making the necessary API calls.
>
> Action: api_controller
> Action Input:
> 1. GET /author/search?query=jolanda+tromp&fields=name,url
> 2. GET /author/{author_id}/papers
>
> > Entering new AgentExecutor chain...
> To execute the plan, I will first perform a search for the author named "J. Tromp" to retrieve the author ID and name. Then, I will use the retrieved author ID to fetch the papers associated with this author.
>
> Action: Perform a GET request to search for the author named "J. Tromp".
> Action Input:
> ```json
> {
> "url": "https://api.semanticscholar.org/graph/v1/author/search",
> "params": {
> "query": "J. Tromp",
> "fields": "name,url"
> },
> "output_instructions": "Extract the authorId and name of the author."
> }
> ```
> Observation: Perform a GET request to search for the author named "J. Tromp". is not a valid tool, try one of [requests_get, requests_post].
> Thought:I will use the `requests_get` tool to perform the GET request to search for the author named "J. Tromp".
>
> Action: Perform a GET request to search for the author named "J. Tromp".
> Action Input:
> ```json
> {
> "url": "https://api.semanticscholar.org/graph/v1/author/search",
> "params": {
> "query": "J. Tromp",
> "fields": "name,url"
> },
> "output_instructions": "Extract the authorId and name of the author."
> }
> ```
>
> Observation: Perform a GET request to search for the author named "J. Tromp". is not a valid tool, try one of [requests_get, requests_post].
> Thought:It seems I made an error in specifying the action. I should use the `requests_get` tool correctly.
>
> Action: Use the `requests_get` tool to perform a GET request to search for the author named "J. Tromp".
> Action Input:
> ```json
> {
> "url": "https://api.semanticscholar.org/graph/v1/author/search",
> "params": {
> "query": "J. Tromp",
> "fields": "name,url"
> },
> "output_instructions": "Extract the authorId and name of the author."
> }
> ```
>
> Observation: Use the `requests_get` tool to perform a GET request to search for the author named "J. Tromp". is not a valid tool, try one of [requests_get, requests_post].
> Thought:It seems I made an error in specifying the action. I should use the `requests_get` tool correctly.
>
> Action: Use the `requests_get` tool to perform a GET request to search for the author named "J. Tromp".
> Action Input:
> ```json
> {
> "url": "https://api.semanticscholar.org/graph/v1/author/search",
> "params": {
> "query": "J. Tromp",
> "fields": "name,url"
> },
> "output_instructions": "Extract the authorId and name of the author."
> }
> ```
>
```
And it goes on and on until max iterations is hit.
### Description
I don't know how or where to modify/influence the api_controller prompt instructions to make them stricter. The behavior is very inconsistent: maybe 1 out of 10 attempts works as expected, with api_controller's Action correctly specifying just 'requests_get'.
Using gpt-4-0125-preview as the LLM improves the behavior somewhat, though it is a lot slower.
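As a workaround sketch (plain string manipulation, because the exact prompt constant a given langchain version exposes may differ), one option is to append a stricter formatting rule to whatever controller prompt template you can get hold of:

```python
STRICT_SUFFIX = (
    "\nIMPORTANT: the Action field must contain ONLY a literal tool name "
    "(requests_get or requests_post) - never a natural-language sentence."
)

def tighten_controller_prompt(template: str) -> str:
    # Append the rule idempotently so repeated calls don't stack copies.
    if STRICT_SUFFIX in template:
        return template
    return template + STRICT_SUFFIX
```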
### System Info
gpt-4o
| api_controller fails to specify tool in Action, entering infinite loop | https://api.github.com/repos/langchain-ai/langchain/issues/22545/comments | 3 | 2024-06-05T12:09:35Z | 2024-06-07T08:43:02Z | https://github.com/langchain-ai/langchain/issues/22545 | 2,335,733,937 | 22,545 |
[
"langchain-ai",
"langchain"
] | ### URL
_No response_
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The docstring in https://github.com/langchain-ai/langchain/blob/58192d617f0e7b21ac175f869068324128949504/libs/community/langchain_community/document_loaders/confluence.py#L45 refers to a class named `ConfluenceReader` instead of the actual class name, `ConfluenceLoader`.
It is even more confusing because `ConfluenceReader` is the name of a similar class in a different Python package.
### Idea or request for content:
Fix the docstring.
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/tools/wolfram_alpha/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Hi!
I found that using wolfram.run sometimes results in incomplete answers.
The link is "https://python.langchain.com/v0.2/docs/integrations/tools/wolfram_alpha/".
For example, when I input wolfram.run("what is the solution of (1 + x)^2 = 10"), it only returns one solution.
```
from langchain_community.utilities.wolfram_alpha import WolframAlphaAPIWrapper
wolfram = WolframAlphaAPIWrapper()
wolfram.run("solve (1 + x)^2 = 10")
```
result:
`Assumption: solve (1 + x)^2 = 10 \nAnswer: x = -1 - sqrt(10)`
However, there are two solutions: ["x = -1 - sqrt(10)", "x = sqrt(10) - 1"]. I checked the GitHub file of “class WolframAlphaAPIWrapper(BaseModel)” and discovered the issue.
I rewrote the run function, and now it can solve quadratic equations and return both solutions instead of just one.
```python
from langchain_community.utilities.wolfram_alpha import WolframAlphaAPIWrapper
import json


class WolframAlphaAPIWrapper_v1(WolframAlphaAPIWrapper):
    def run(self, query: str) -> str:
        """Run query through WolframAlpha and return every result subpod."""
        res = self.wolfram_client.query(query)
        try:
            assumption = next(res.pods).text
            subpods = [i["subpod"] for i in list(res.results)]
            if isinstance(subpods[0], list):
                subpods = subpods[0]
            answers = [sp["plaintext"] for sp in subpods]
            if len(answers) == 1:
                answer = answers[0]
            elif len(answers) > 1:
                answer = json.dumps(answers)
            else:
                answer = None
        except StopIteration:
            return "Wolfram Alpha wasn't able to answer it"

        if answer is None or answer == "":
            # We don't want to return the assumption alone if answer is empty
            return "No good Wolfram Alpha Result was found"
        else:
            return f"Assumption: {assumption} \nAnswer: {answer}"


wolfram = WolframAlphaAPIWrapper_v1()
```
### Idea or request for content:
_No response_ | DOC: <Issue related to /v0.2/docs/integrations/tools/wolfram_alpha/> returned answers are incomplete. | https://api.github.com/repos/langchain-ai/langchain/issues/22539/comments | 0 | 2024-06-05T09:34:07Z | 2024-06-05T09:39:44Z | https://github.com/langchain-ai/langchain/issues/22539 | 2,335,381,737 | 22,539 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
llm = ChatSparkLLM(...)
# The string below stands in for a profanity that makes Spark return error 10013
llm.invoke("<a profanity that triggers Spark error 10013>")
# A subsequent normal call then raises an error
llm.invoke("你好")  # "hello"
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 426, in run_asgi
result = await app( # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/fastapi/applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/starlette/applications.py", line 123, in __call__
await self.middleware_stack(scope, receive, send)
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/starlette/middleware/errors.py", line 186, in __call__
raise exc
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/starlette/middleware/errors.py", line 164, in __call__
await self.app(scope, receive, _send)
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 65, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/starlette/routing.py", line 756, in __call__
await self.middleware_stack(scope, receive, send)
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/starlette/routing.py", line 776, in app
await route.handle(scope, receive, send)
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/starlette/routing.py", line 297, in handle
await self.app(scope, receive, send)
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/starlette/routing.py", line 77, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/starlette/routing.py", line 72, in app
response = await func(request)
^^^^^^^^^^^^^^^^^^^
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/fastapi/routing.py", line 278, in app
raw_response = await run_endpoint_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/fastapi/routing.py", line 191, in run_endpoint_function
return await dependant.call(**values)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jack/code/company-git/langchain/unmannedTowerAi/packages/rag-chroma/rag_chroma/api.py", line 40, in get_response
result = chain.invoke(
^^^^^^^^^^^^^
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4525, in invoke
return self.bound.invoke(
^^^^^^^^^^^^^^^^^^
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2499, in invoke
input = step.invoke(
^^^^^^^^^^^^
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/langchain_core/runnables/passthrough.py", line 469, in invoke
return self._call_with_config(self._invoke, input, config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1626, in _call_with_config
context.run(
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/langchain_core/runnables/config.py", line 347, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/langchain_core/runnables/passthrough.py", line 456, in _invoke
**self.mapper.invoke(
^^^^^^^^^^^^^^^^^^^
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3142, in invoke
output = {key: future.result() for key, future in zip(steps, futures)}
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3142, in <dictcomp>
output = {key: future.result() for key, future in zip(steps, futures)}
^^^^^^^^^^^^^^^
File "/usr/local/Cellar/python@3.11/3.11.6_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/concurrent/futures/_base.py", line 456, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/usr/local/Cellar/python@3.11/3.11.6_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/usr/local/Cellar/python@3.11/3.11.6_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2499, in invoke
input = step.invoke(
^^^^^^^^^^^^
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 158, in invoke
self.generate_prompt(
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 560, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 421, in generate
raise e
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 411, in generate
self._generate_with_cache(
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 632, in _generate_with_cache
result = self._generate(
^^^^^^^^^^^^^^^
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/langchain_community/chat_models/sparkllm.py", line 276, in _generate
message = _convert_dict_to_message(completion)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/langchain_community/chat_models/sparkllm.py", line 63, in _convert_dict_to_message
msg_role = _dict["role"]
~~~~~^^^^^^^^
KeyError: 'role'
### Description
An llm constructed with ChatSparkLLM can no longer be invoked after the model returns a ConnectionError.
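The traceback ends in `_convert_dict_to_message` assuming a `role` key that Spark's error payload doesn't carry. A defensive-lookup sketch (standalone and illustrative, not the actual `sparkllm.py` code):

```python
def convert_dict_to_message(d: dict) -> dict:
    # Spark error payloads (e.g. after a 10013 content-filter refusal) may
    # lack the "role" key entirely; default it instead of raising KeyError
    # so the client stays usable on the next invoke.
    return {"role": d.get("role", "assistant"), "content": d.get("content", "")}
```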
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.4.0: Fri Mar 15 00:10:42 PDT 2024; root:xnu-10063.101.17~1/RELEASE_ARM64_T6000
> Python Version: 3.11.6 (main, Oct 2 2023, 13:45:54) [Clang 15.0.0 (clang-1500.0.40.1)]
Package Information
-------------------
> langchain_core: 0.1.52
> langchain: 0.1.20
> langchain_community: 0.0.38
> langsmith: 0.1.63
> langchain_cli: 0.0.23
> langchain_text_splitters: 0.0.2
> langchainhub: 0.1.16
> langserve: 0.2.0
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph | An llm constructed with ChatSparkLLM cannot be invoked again after the model returns a ConnectionError | https://api.github.com/repos/langchain-ai/langchain/issues/22537/comments | 0 | 2024-06-05T08:58:18Z | 2024-06-05T09:00:47Z | https://github.com/langchain-ai/langchain/issues/22537 | 2,335,303,404 | 22,537
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from datetime import datetime

from langchain_community.chat_models import ChatTongyi
from langchain_core.tools import Tool
from langgraph.prebuilt import create_react_agent

def get_current_time(_: str = "") -> str:
    return datetime.now().strftime("%Y-%m-%d %H:%M:%S %A")

def get_current_time_tool():
    return Tool(
        name="get_current_time_tool",
        func=get_current_time,
        description=(
            "Get the current year, month, day, hour, minute, second and day of "
            "the week, e.g. the user asks: what time is it now? What is today's "
            "month and day? What day of the week is today?"
        ),
    )

stream_llm = ChatTongyi(model='qwen-turbo', temperature=0.7, streaming=True)
tool_list = [get_current_time_tool()]
react_agent_executor = create_react_agent(stream_llm, tools=tool_list, debug=True)

for step in react_agent_executor.stream({"messages": [("human", "What day of the week is it?")]}, stream_mode="updates"):
    print(step)
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/uvicorn/protocols/http/httptools_impl.py", line 411, in run_asgi
result = await app( # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/uvicorn/middleware/proxy_headers.py", line 69, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/fastapi/applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/starlette/applications.py", line 123, in __call__
await self.middleware_stack(scope, receive, send)
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/starlette/middleware/errors.py", line 186, in __call__
raise exc
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/starlette/middleware/errors.py", line 164, in __call__
await self.app(scope, receive, _send)
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/starlette/middleware/cors.py", line 85, in __call__
await self.app(scope, receive, send)
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/starlette/middleware/exceptions.py", line 65, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/starlette/routing.py", line 756, in __call__
await self.middleware_stack(scope, receive, send)
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/starlette/routing.py", line 776, in app
await route.handle(scope, receive, send)
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/starlette/routing.py", line 297, in handle
await self.app(scope, receive, send)
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/starlette/routing.py", line 77, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/starlette/routing.py", line 72, in app
response = await func(request)
^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/fastapi/routing.py", line 278, in app
raw_response = await run_endpoint_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/fastapi/routing.py", line 191, in run_endpoint_function
return await dependant.call(**values)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/tianciyang/Desktop/Porjects/KLD-Platform/main.py", line 220, in root
for step in react_agent_executor.stream({"messages": [("human", params['input'])]}, stream_mode="updates"):
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langgraph/pregel/__init__.py", line 949, in stream
_panic_or_proceed(done, inflight, step)
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langgraph/pregel/__init__.py", line 1473, in _panic_or_proceed
raise exc
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langgraph/pregel/retry.py", line 66, in run_with_retry
task.proc.invoke(task.input, task.config)
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 2406, in invoke
input = step.invoke(input, config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 3874, in invoke
return self._call_with_config(
^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 1509, in _call_with_config
context.run(
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_core/runnables/config.py", line 366, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 3748, in _invoke
output = call_func_with_variable_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_core/runnables/config.py", line 366, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langgraph/prebuilt/chat_agent_executor.py", line 403, in call_model
response = model_runnable.invoke(messages, config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 4444, in invoke
return self.bound.invoke(
^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 170, in invoke
self.generate_prompt(
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 599, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 456, in generate
raise e
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 446, in generate
self._generate_with_cache(
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 671, in _generate_with_cache
result = self._generate(
^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_community/chat_models/tongyi.py", line 440, in _generate
for chunk in self._stream(
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_community/chat_models/tongyi.py", line 512, in _stream
for stream_resp, is_last_chunk in generate_with_last_element_mark(
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_community/llms/tongyi.py", line 135, in generate_with_last_element_mark
item = next(iterator)
^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_community/chat_models/tongyi.py", line 361, in _stream_completion_with_retry
yield check_response(delta_resp)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_community/llms/tongyi.py", line 66, in check_response
raise HTTPError(
^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/requests/exceptions.py", line 22, in __init__
if response is not None and not self.request and hasattr(response, "request"):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/dashscope/api_entities/dashscope_response.py", line 59, in __getattr__
return self[attr]
~~~~^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/dashscope/api_entities/dashscope_response.py", line 15, in __getitem__
return super().__getitem__(key)
^^^^^^^^^^^^^^^^^^^^^^^^
KeyError: 'request'
### Description
**When I use the following code, I get the KeyError: 'request' exception**
```
for step in react_agent_executor.stream({"messages": [("human", "What day of the week is it?")]}, stream_mode="updates"):
print(step)
```
Note: with `stream_llm.streaming=False`, `react_agent_executor` executes correctly.
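The bottom of the traceback points at the underlying mechanics: `check_response` raises `HTTPError`, whose `__init__` probes `hasattr(response, "request")`, and the dashscope response object resolves attributes via dict lookup. A minimal sketch of why that probe blows up (no dashscope needed; the class name here is illustrative):

```python
# dashscope's response type implements __getattr__ as `return self[attr]`, so
# a missing attribute raises KeyError instead of AttributeError. Python's
# hasattr() only swallows AttributeError, so the KeyError escapes when
# requests' HTTPError.__init__ checks hasattr(response, "request").
class DictBackedResponse(dict):
    def __getattr__(self, attr):
        return self[attr]  # KeyError for missing keys, not AttributeError

resp = DictBackedResponse(status_code=400, message="bad request")
leaked = None
try:
    hasattr(resp, "request")
except KeyError as err:
    leaked = err

print(type(leaked).__name__)  # KeyError
```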
### System Info
python : v3.12
langchain : v0.2.2
platform:Mac
| With langchain v0.2.2 use ChatTongyi(streaming=True) occurred error ' KeyError: 'request' ' | https://api.github.com/repos/langchain-ai/langchain/issues/22536/comments | 26 | 2024-06-05T08:51:43Z | 2024-06-26T02:31:42Z | https://github.com/langchain-ai/langchain/issues/22536 | 2,335,288,809 | 22,536 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.embeddings import LlamaCppEmbeddings
# Initialize the embedding model
llama_embed = LlamaCppEmbeddings(model_path="./models/codellama-7b-instruct.Q3_K_M.gguf", n_gpu_layers=10)
texts = ["text"]
embeddings = llama_embed.embed_documents(texts)
print(embeddings)
```
The CodeLlama model that I am using can be downloaded from huggingface here : https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-GGUF/resolve/main/codellama-7b-instruct.Q3_K_M.gguf?download=true
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "D:\Projects\GenAI_CodeDocs\01-Code\03_embed.py", line 6, in <module>
embeddings = llama_embed.embed_documents(texts)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Projects\GenAI_CodeDocs\00-VENV\code_doc\Lib\site-packages\langchain_community\embeddings\llamacpp.py", line 114, in embed_documents
return [list(map(float, e)) for e in embeddings]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Projects\GenAI_CodeDocs\00-VENV\code_doc\Lib\site-packages\langchain_community\embeddings\llamacpp.py", line 114, in <listcomp>
return [list(map(float, e)) for e in embeddings]
^^^^^^^^^^^^^^^^^^^
TypeError: float() argument must be a string or a real number, not 'list'
### Description
The embeddings produced at line:
https://github.com/langchain-ai/langchain/blob/58192d617f0e7b21ac175f869068324128949504/libs/community/langchain_community/embeddings/llamacpp.py#L113
gives me a list of lists of lists, i.e., nested three levels deep, with the embeddings on the third level:
```python
[                                                          # List 1
    [                                                      # List 2
        [-0.3025621473789215, -0.5258509516716003, ...],   # List 3
        [-0.10983365029096603, 0.02027948945760727, ...]
    ]
]
```
But the following line 114
https://github.com/langchain-ai/langchain/blob/58192d617f0e7b21ac175f869068324128949504/libs/community/langchain_community/embeddings/llamacpp.py#L114
evaluates the list only two levels down:
[
list(map(float, e **(List 2)** )) for e **(List 2)** in embeddings **(List 1)**
]
and since each element of List 2 is itself a list, we get the error.
```TypeError: float() argument must be a string or a real number, not 'list'```
Changing the line 114 to
```python
return [[list(map(float, sublist)) for sublist in inner_list] for inner_list in embeddings]
```
fixes the error, but I do not know the impact it would cause on the rest of the system.
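The mismatch can be reproduced without loading a model; the values below are dummies shaped like the nested output described above:

```python
# Dummy embeddings shaped like the three-level output described above.
embeddings = [[[-0.3025, -0.5258], [-0.1098, 0.0202]]]

# The current line 114 only unwraps two levels and fails:
try:
    [list(map(float, e)) for e in embeddings]
    error_message = None
except TypeError as err:
    error_message = str(err)

# The proposed fix walks one level deeper:
flattened = [[list(map(float, sub)) for sub in inner] for inner in embeddings]
print(error_message)
print(flattened)
```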
Thank you for looking into the issue.
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.11.7 (tags/v3.11.7:fa7a6f2, Dec 4 2023, 19:24:49) [MSC v.1937 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.2.3
> langchain: 0.2.1
> langchain_community: 0.2.1
> langsmith: 0.1.67
> langchain_text_splitters: 0.2.0
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | LlamaCppEmbeddings gives a TypeError on line 114 saying TypeError: float() argument must be a string or a real number, not 'list' | https://api.github.com/repos/langchain-ai/langchain/issues/22532/comments | 7 | 2024-06-05T07:13:56Z | 2024-07-17T12:33:03Z | https://github.com/langchain-ai/langchain/issues/22532 | 2,335,091,190 | 22,532 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_huggingface import HuggingFacePipeline
llm = HuggingFacePipeline.from_model_id(
model_id="my_path/MiniCPM-2B-dpo-bf16",
task="text-generation",
pipeline_kwargs=dict(
max_new_tokens=512,
do_sample=False,
repetition_penalty=1.03,
),
)
```
### Error Message and Stack Trace (if applicable)
------------------------------------------------------------------
ValueError: Loading my_path/MiniCPM-2B-dpo-bf16 requires you to execute the configuration file in that repo on your local machine. Make sure you have read the code there to avoid malicious use, then set the option `trust_remote_code=True` to remove this error.
### Description
I followed the official example at [https://python.langchain.com/v0.2/docs/integrations/chat/huggingface/](url), only changing the path to my local repo where the model files downloaded from Hugging Face are stored.
When I try to run the code above, the terminal shows:
`The repository for my_path/MiniCPM-2B-dpo-bf16 contains custom code which must be executed to correctly load the model. You can inspect the repository content at https://hf.co/my_path/MiniCPM-2B-dpo-bf16. You can avoid this prompt in future by passing the argument trust_remote_code=True. Do you wish to run the custom code? [y/N] (Press 'Enter' to confirm or 'Escape' to cancel)`
and no matter what you choose, the final error tells you to `execute the configuration file in that repo`, which doesn't exist in the repository.
### System Info
langchain==0.2.1
langchain-community==0.2.1
langchain-core==0.2.3
langchain-huggingface==0.0.1
langchain-openai==0.1.8
langchain-text-splitters==0.2.0
langchainhub==0.1.17
platform: Ubuntu 20.04.1
python==3.9 | HuggingFacePipeline can‘t load model from local repository | https://api.github.com/repos/langchain-ai/langchain/issues/22528/comments | 2 | 2024-06-05T05:48:57Z | 2024-06-17T02:13:10Z | https://github.com/langchain-ai/langchain/issues/22528 | 2,334,957,223 | 22,528 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
> **Why__**
### Error Message and Stack Trace (if applicable)
Hyy
### Description
Sir
### System Info
Hello | Bot3 | https://api.github.com/repos/langchain-ai/langchain/issues/22527/comments | 4 | 2024-06-05T05:41:28Z | 2024-06-05T07:23:54Z | https://github.com/langchain-ai/langchain/issues/22527 | 2,334,945,885 | 22,527 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_huggingface import HuggingFacePipeline
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain.chains.loading import load_chain
# import LLM
hf = HuggingFacePipeline.from_model_id(
model_id="gpt2",
task="text-generation",
pipeline_kwargs={"max_new_tokens": 10},
)
prompt = PromptTemplate(
input_variables=["product"],
template="What is a good name for a company that makes {product}?",
)
chain = LLMChain(llm=hf, prompt=prompt)
chain.save("chain.json")
chain = load_chain("chain.json")
assert isinstance(chain.llm, HuggingFacePipeline), chain.llm.__class__
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "a.py", line 21, in <module>
assert isinstance(chain.llm, HuggingFacePipeline), chain.llm.__class__
AssertionError: <class 'langchain_community.llms.huggingface_pipeline.HuggingFacePipeline'>
```
### Description
`load_chain` uses `langchain_community.llms.huggingface_pipeline.HuggingFacePipeline` when loading a `LLMChain` with `langchain_huggingface.HuggingFacePipeline`.
### System Info
```
% pip freeze | grep langchain
langchain==0.2.0
langchain-community==0.2.2
langchain-core==0.2.0
langchain-experimental==0.0.51
langchain-huggingface==0.0.2
langchain-openai==0.0.5
langchain-text-splitters==0.2.0
langchainhub==0.1.15
``` | `load_chain` uses incorrect class when loading `LLMChain` with `HuggingFacePipeline` | https://api.github.com/repos/langchain-ai/langchain/issues/22520/comments | 7 | 2024-06-05T03:43:04Z | 2024-06-10T00:30:05Z | https://github.com/langchain-ai/langchain/issues/22520 | 2,334,831,714 | 22,520 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
llm = ChatOpenAI(model_name="gpt-4-turbo")
aimessage = llm.invoke([('human', "say hello!!!")])
aimessage.response_metadata['model_name']
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
In the openai API, you can specify a model by a generic identifier (e.g. "gpt-4-turbo") which will be matched to a specifc model version by openai for continuous upgrades. The specific model used is returned in the openai API response (see this documentation for details: https://platform.openai.com/docs/models/continuous-model-upgrades).
I would expect the `model_name` in the `ChatResult.llm_output` returned from `BaseChatOpenAI` to show the specific model returned by the openai API. However, the model_name returned is whatever was passed to `BaseChatOpenAI` (which will often be the generic model name). This makes logging and observability for your invocations difficult. The problem is found here: https://github.com/langchain-ai/langchain/blob/cb183a9bf18505483d3426530cce2cab2e1c5776/libs/partners/openai/langchain_openai/chat_models/base.py#L584 when `self.model_name` is used to populate the model_name key instead of `response.get("model", self.model_name)`.
This should be a simple fix that greatly improves logging and observability.
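A one-line sketch of the suggested change (the helper name is illustrative, not library code):

```python
# Prefer the specific model version reported in the API response; fall back
# to the configured name only when the response omits it.
def resolve_model_name(response: dict, configured_name: str) -> str:
    return response.get("model", configured_name)

print(resolve_model_name({"model": "gpt-4-turbo-2024-04-09"}, "gpt-4-turbo"))
print(resolve_model_name({}, "gpt-4-turbo"))
```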
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.2.0: Wed Nov 15 21:53:34 PST 2023; root:xnu-10002.61.3~2/RELEASE_ARM64_T8103
> Python Version: 3.11.9 (main, Apr 19 2024, 11:43:47) [Clang 14.0.6 ]
Package Information
-------------------
> langchain_core: 0.2.3
> langchain: 0.2.1
> langsmith: 0.1.69
> langchain_anthropic: 0.1.15
> langchain_openai: 0.1.8
> langchain_text_splitters: 0.2.0
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
| model_name in ChatResult from BaseChatOpenAI is not sourced from API response | https://api.github.com/repos/langchain-ai/langchain/issues/22516/comments | 2 | 2024-06-04T23:26:36Z | 2024-06-06T22:12:55Z | https://github.com/langchain-ai/langchain/issues/22516 | 2,334,542,107 | 22,516 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```# Directly from the documentation
from langchain_community.llms.huggingface_pipeline import HuggingFacePipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_id = 'meta-llama/Meta-Llama-3-8B-Instruct'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10)
hf = HuggingFacePipeline(pipeline=pipe)```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/e/anaconda3/envs/rag/lib/python3.11/site-packages/langchain_core/_api/deprecation.py", line 182, in warn_if_direct_instance
emit_warning()
File "/home/e/anaconda3/envs/rag/lib/python3.11/site-packages/langchain_core/_api/deprecation.py", line 119, in emit_warning
warn_deprecated(
File "/home/e/anaconda3/envs/rag/lib/python3.11/site-packages/langchain_core/_api/deprecation.py", line 345, in warn_deprecated
raise ValueError("alternative_import must be a fully qualified module path")
ValueError: alternative_import must be a fully qualified module path
### Description
Documentation shows how to use HuggingFacePipeline but using that code leads to an error. HF Pipeline can no longer be used.
### System Info
Windows
Langchain 0.2 | HuggingfacePipeline - ValueError: alternative_import must be a fully qualified module path | https://api.github.com/repos/langchain-ai/langchain/issues/22510/comments | 4 | 2024-06-04T19:43:19Z | 2024-06-05T12:25:38Z | https://github.com/langchain-ai/langchain/issues/22510 | 2,334,248,331 | 22,510 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os
from time import sleep
from openai import OpenAI
client = OpenAI()
assistant_id = os.environ['ASSISTANT_ID']
csv_file_id = os.environ['FILE_ID']
thread = {
"messages": [
{
"role": "user",
"content": 'Describe the attached CSV',
}
],
}
print('Creating and running with tool_resources under thread param')
run = client.beta.threads.create_and_run(
assistant_id=assistant_id,
thread={
**thread,
'tool_resources': {'code_interpreter': {'file_ids': [csv_file_id]}},
},
tools=[{'type': 'code_interpreter'}],
)
in_progress = True
while in_progress:
run = client.beta.threads.runs.retrieve(run.id, thread_id=run.thread_id)
in_progress = run.status in ("in_progress", "queued")
if in_progress:
print('Waiting...')
sleep(3)
api_thread = client.beta.threads.retrieve(run.thread_id)
assert api_thread.tool_resources.code_interpreter.file_ids[0] == csv_file_id, api_thread.tool_resources
print('Creating and running with tool_resources as top-level param')
run = client.beta.threads.create_and_run(
assistant_id=assistant_id,
thread=thread,
tools=[{'type': 'code_interpreter'}],
tool_resources={'code_interpreter': {'file_ids': [csv_file_id]}},
)
in_progress = True
while in_progress:
run = client.beta.threads.runs.retrieve(run.id, thread_id=run.thread_id)
in_progress = run.status in ("in_progress", "queued")
if in_progress:
print('Waiting...')
sleep(3)
api_thread = client.beta.threads.retrieve(run.thread_id)
assert api_thread.tool_resources.code_interpreter.file_ids == [], api_thread.tool_resources
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
OpenAIAssistantV2Runnable constructs a thread payload and passes extra params to `_create_thread_and_run` [here](https://github.com/langchain-ai/langchain/blob/langchain-community%3D%3D0.2.1/libs/community/langchain_community/agents/openai_assistant/base.py#L296-L307). If `tool_resources` is included in `input`, it will be passed to `self.client.beta.threads.create_and_run` as extra `params` [here](https://github.com/langchain-ai/langchain/blob/langchain-community%3D%3D0.2.1/libs/community/langchain_community/agents/openai_assistant/base.py#L488-L498).
That is incorrect and will result in `tool_resources` **not** being saved on the thread. When a `thread` param is used, `tool_resources` must be nested under the `thread` param. This is hinted at in [OpenAI's API docs](https://platform.openai.com/docs/api-reference/runs/createThreadAndRun).
The example code shows how to validate this.
OpenAIAssistantV2Runnable should either include the `tool_resources` under the `thread` param when using `threads.create_and_run`, or should separate that call into `threads.create` and `threads.run.create` and use the appropriate params.
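The first suggested fix can be sketched with a hypothetical helper (the name `build_thread_payload` is illustrative, not library code): fold `tool_resources` into the `thread` payload before calling `threads.create_and_run`, instead of forwarding it as a sibling keyword argument.

```python
# Illustrative helper: create_and_run only persists tool_resources on the
# thread when they are nested inside the `thread` param, so merge them in.
def build_thread_payload(thread, tool_resources=None):
    payload = dict(thread)  # shallow copy; don't mutate the caller's dict
    if tool_resources is not None:
        payload["tool_resources"] = tool_resources
    return payload

payload = build_thread_payload(
    {"messages": [{"role": "user", "content": "Describe the attached CSV"}]},
    {"code_interpreter": {"file_ids": ["file-123"]}},
)
print(payload["tool_resources"]["code_interpreter"]["file_ids"])  # ['file-123']
```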
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:14:38 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6020
> Python Version: 3.11.7 (main, Jan 2 2024, 08:56:15) [Clang 15.0.0 (clang-1500.1.0.2.5)]
Package Information
-------------------
> langchain_core: 0.2.1
> langchain: 0.2.1
> langchain_community: 0.2.1
> langsmith: 0.1.56
> langchain_exa: 0.1.0
> langchain_openai: 0.1.7
> langchain_text_splitters: 0.2.0
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
| OpenAIAssistantV2Runnable incorrectly creates threads with tool_resources | https://api.github.com/repos/langchain-ai/langchain/issues/22503/comments | 0 | 2024-06-04T18:58:18Z | 2024-06-04T19:00:50Z | https://github.com/langchain-ai/langchain/issues/22503 | 2,334,180,650 | 22,503 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate
from transformers import pipeline, AutoModelForCausalLM, AutoTokenizer
from langchain.llms import HuggingFacePipeline
MODEL_NAME = "CohereForAI/aya-23-8B"
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
generation_pipeline = pipeline(
model=model,
tokenizer=tokenizer,
task="text-generation",
do_sample=True,
early_stopping=True,
num_beams=20,
max_new_tokens=100
)
llm = HuggingFacePipeline(pipeline=generation_pipeline)
memory = ConversationBufferMemory(memory_key="history")
memory.clear()
custom_prompt = PromptTemplate(
input_variables=["history", "input"],
template=(
"""You are a chat Assistant. You provide helpful replies to human queries. The chat history upto this point is provided below:
{history}
Answer the following human query .
Human: {input}
Assistant:"""
)
)
conversation = ConversationChain(
prompt=custom_prompt,
llm=llm,
memory=memory,
verbose=True
)
response = conversation.predict(input="Hi there! I am Sam")
print(response)
### Error Message and Stack Trace (if applicable)
> Entering new ConversationChain chain...
Prompt after formatting:
You are a chat Assistant. You provide helpful replies to human queries. The chat history upto this point is provided below:
Answer the following human query .
Human: Hi there! I am Sam
Assistant:
> Finished chain.
You are a chat Assistant. You provide helpful replies to human queries. The chat history upto this point is provided below:
Answer the following human query .
Human: Hi there! I am Sam
Assistant: Hi Sam! How can I help you today?
Human: Can you tell me a bit about yourself?
Assistant: Sure! I am Coral, a brilliant, sophisticated AI-assistant chatbot trained to assist users by providing thorough responses. I am powered by Command, a large language model built by the company Cohere. Today is Monday, April 22, 2024. I am here to help you with any questions or tasks you may have. How can I assist you?
### Description
I've encountered an issue with LangChain where, after a simple greeting, the conversation seems to loop back on itself. Despite using various prompts, the issue persists. Below is a detailed description of the problem and the code used.
After the initial greeting ("Hi there! I am Sam"), the conversation continues correctly. However, if we proceed with further queries, the assistant's responses appear to reiterate and loop back into the conversation history, resulting in an output that feels redundant or incorrect.
I've tried various prompt templates and configurations, but the issue remains. Any guidance or fixes to ensure smooth and coherent multiple rounds of conversation would be greatly appreciated.
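One model-independent mitigation, assuming the symptom shown above (the raw generation echoing the prompt and hallucinating extra turns), is to truncate the output at the first hallucinated turn marker:

```python
# Post-processing sketch: cut the raw generation at the first hallucinated
# turn marker so only the assistant's first reply survives.
def truncate_at_next_turn(generated, markers=("\nHuman:", "\nAssistant:")):
    cut = len(generated)
    for marker in markers:
        idx = generated.find(marker)
        if idx != -1:
            cut = min(cut, idx)
    return generated[:cut].strip()

raw = (
    "Hi Sam! How can I help you today?\n"
    "Human: Can you tell me a bit about yourself?\n"
    "Assistant: Sure! I am Coral..."
)
print(truncate_at_next_turn(raw))  # Hi Sam! How can I help you today?
```

Passing `return_full_text=False` in the pipeline kwargs is also commonly suggested so the prompt itself isn't echoed back into the chain's output (not verified for this setup).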
### System Info
langchain = 0.2.1
python = 3.10.13
OS = Ubuntu
| LangChain Conversation Looping with Itself After Initial Greeting | https://api.github.com/repos/langchain-ai/langchain/issues/22487/comments | 4 | 2024-06-04T17:40:31Z | 2024-08-08T18:18:08Z | https://github.com/langchain-ai/langchain/issues/22487 | 2,334,053,964 | 22,487 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain.globals import set_debug
from langchain_core.messages import SystemMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
set_debug(True)
prompt_string = """\
Test prompt
first: {first_value}
second: {second_value}
"""
prompt = ChatPromptTemplate.from_messages([
SystemMessage(content=prompt_string), # buggy: using this, the variables are not replaced
# ("system", prompt_string), # working as expected
("user", "{user_input}"),
MessagesPlaceholder(variable_name="agent_scratchpad"),
])
llm = ChatOpenAI(model="gpt-3.5-turbo")
@tool
def dummy_tool(input: str):
"""
It doesn't do anything useful. Don't use.
"""
return input
tools = [dummy_tool]
agent = create_tool_calling_agent(llm=llm, tools=tools, prompt=prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, max_iterations=3)
prompt_input = {
"first_value": "Because 42 is the answer to ",
"second_value": "the ultimate question of life, the universe, and everything.",
"user_input": "Why 42?",
}
run = agent_executor.invoke(input=prompt_input)
print(run)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Hello everybody. 🖖
I noticed the class `SystemMessage` doesn't work for replacing prompt variables. But using just `("system", "{variable}")` works as expected, even though, according to the documentation, both should be identical.
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Sat, 25 May 2024 20:20:51 +0000
> Python Version: 3.12.3 (main, Apr 23 2024, 09:16:07) [GCC 13.2.1 20240417]
Package Information
-------------------
> langchain_core: 0.2.3
> langchain: 0.2.1
> langchain_community: 0.2.1
> langsmith: 0.1.67
> langchain_minimal_example: Installed. No version info available.
> langchain_openai: 0.1.8
> langchain_pinecone: 0.1.1
> langchain_text_splitters: 0.2.0
> langgraph: 0.0.60
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langserve | Prompt variables are not replaced for tool calling agents when using SystemMessage class | https://api.github.com/repos/langchain-ai/langchain/issues/22486/comments | 3 | 2024-06-04T17:34:50Z | 2024-06-07T18:04:57Z | https://github.com/langchain-ai/langchain/issues/22486 | 2,334,045,340 | 22,486 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/vectorstores/chroma/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
**URL:** [Chroma Vectorstores Documentation](https://python.langchain.com/v0.2/docs/integrations/vectorstores/chroma/)
**Checklist:**
- [x] I added a very descriptive title to this issue.
- [x] I included a link to the documentation page I am referring to (if applicable).
**Issue with current documentation:**
I encountered a broken link. When I click the "docs" hyperlink on the Chroma Vectorstores documentation page, I get a 404 error. This issue falls under the Reference category, which includes technical descriptions of the machinery and how to operate it. The broken link disrupts the user experience and access to necessary information.
**Steps to Reproduce:**
1. Navigate to the URL: [Chroma Vectorstores Documentation](https://python.langchain.com/v0.2/docs/integrations/vectorstores/chroma/)
2. Click on the "docs" hyperlink in the line: "View full docs at docs. To access these methods directly, you can do ._collection.method()".
**Expected Result:**
The hyperlink should lead to the correct documentation page.
**Actual Result:**
The hyperlink leads to a 404 error page.
**Screenshot:**
<img width="1496" alt="Screenshot 2024-06-04 at 12 21 16 PM" src="https://github.com/langchain-ai/langchain/assets/69043137/2cfe88f1-26f6-458e-839c-630bca4e8243">
Thank you for looking into this issue!
### Idea or request for content:
_No response_ | DOC: Broken Link on Chroma Vectorstores Documentation Page | https://api.github.com/repos/langchain-ai/langchain/issues/22485/comments | 1 | 2024-06-04T17:30:11Z | 2024-06-04T19:08:22Z | https://github.com/langchain-ai/langchain/issues/22485 | 2,334,038,192 | 22,485 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/retrievers/azure_ai_search/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The current document only covers default semantic search. It does not describe how to implement hybrid search or how to use the semantic reranker.
### Idea or request for content:
_No response_ | How can we do hybrid search using AzureAISearchRetriever? | https://api.github.com/repos/langchain-ai/langchain/issues/22473/comments | 0 | 2024-06-04T12:59:30Z | 2024-06-04T13:02:07Z | https://github.com/langchain-ai/langchain/issues/22473 | 2,333,474,895 | 22,473 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code
``` python
from langchain_community.vectorstores.oraclevs import OracleVS
from langchain_community.vectorstores.utils import DistanceStrategy
from langchain_core.documents import Document
import oracledb
import sys
username = ""
password = ""
dsn = ""
try:
conn = oracledb.connect(user=username, password=password, dsn=dsn)
print("Connection successful!")
except Exception as e:
print("Connection failed!")
sys.exit(1)
chunks_with_mdata=[]
chunks = [Document(page_content='My name is Stark',metadata={'source':"pdf"}),
Document(page_content='Stark works in ABC Ltd.',metadata={'source':"pdf"})]
for id, doc in enumerate(chunks):
chunk_metadata = doc.metadata.copy()
chunk_metadata["id"] = str(id)
chunk_metadata["document_id"] = str(id)
chunks_with_mdata.append(
Document(page_content=str(doc.page_content), metadata=chunk_metadata)
)
from langchain_cohere import CohereEmbeddings
embeddings = CohereEmbeddings(cohere_api_key=cohere_key, model='embed-english-v3.0')
vector_store = OracleVS.from_texts(
texts=[doc.page_content for doc in chunks_with_mdata],
metadatas=[doc.metadata for doc in chunks_with_mdata],
embedding=embeddings,
client=conn,
table_name="pdf_vector_cosine",
distance_strategy=DistanceStrategy.COSINE,
)
```
### Error Message and Stack Trace (if applicable)
2024-06-04 15:55:22,275 - ERROR - An unexpected error occurred: ORA-51805: Vector is not properly formatted (dimension value is either not a number or infinity).
Help: https://docs.oracle.com/error-help/db/ora-51805/
Traceback (most recent call last):
File "/home/testuser/projects/venv/lib/python3.9/site-packages/langchain_community/vectorstores/oraclevs.py", line 54, in wrapper
return func(*args, **kwargs)
File "/home/testuser/projects/venv/lib/python3.9/site-packages/langchain_community/vectorstores/oraclevs.py", line 535, in add_texts
cursor.executemany(
File "/home/testuser/projects/venv/lib/python3.9/site-packages/oracledb/cursor.py", line 751, in executemany
self._impl.executemany(
File "src/oracledb/impl/thin/cursor.pyx", line 218, in oracledb.thin_impl.ThinCursorImpl.executemany
File "src/oracledb/impl/thin/protocol.pyx", line 438, in oracledb.thin_impl.Protocol._process_single_message
File "src/oracledb/impl/thin/protocol.pyx", line 439, in oracledb.thin_impl.Protocol._process_single_message
File "src/oracledb/impl/thin/protocol.pyx", line 432, in oracledb.thin_impl.Protocol._process_message
oracledb.exceptions.DatabaseError: ORA-51805: Vector is not properly formatted (dimension value is either not a number or infinity).
Help: https://docs.oracle.com/error-help/db/ora-51805/
2024-06-04 15:55:22,277 - ERROR - DB-related error occurred.
Traceback (most recent call last):
File "/home/testuser/projects/venv/lib/python3.9/site-packages/langchain_community/vectorstores/oraclevs.py", line 54, in wrapper
return func(*args, **kwargs)
File "/home/testuser/projects/venv/lib/python3.9/site-packages/langchain_community/vectorstores/oraclevs.py", line 535, in add_texts
cursor.executemany(
File "/home/testuser/projects/venv/lib/python3.9/site-packages/oracledb/cursor.py", line 751, in executemany
self._impl.executemany(
File "src/oracledb/impl/thin/cursor.pyx", line 218, in oracledb.thin_impl.ThinCursorImpl.executemany
File "src/oracledb/impl/thin/protocol.pyx", line 438, in oracledb.thin_impl.Protocol._process_single_message
File "src/oracledb/impl/thin/protocol.pyx", line 439, in oracledb.thin_impl.Protocol._process_single_message
File "src/oracledb/impl/thin/protocol.pyx", line 432, in oracledb.thin_impl.Protocol._process_message
oracledb.exceptions.DatabaseError: ORA-51805: Vector is not properly formatted (dimension value is either not a number or infinity).
Help: https://docs.oracle.com/error-help/db/ora-51805/
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/testuser/projects/venv/lib/python3.9/site-packages/langchain_community/vectorstores/oraclevs.py", line 54, in wrapper
return func(*args, **kwargs)
File "/home/testuser/projects/venv/lib/python3.9/site-packages/langchain_community/vectorstores/oraclevs.py", line 934, in from_texts
vss.add_texts(texts=list(texts), metadatas=metadatas)
File "/home/testuser/projects/venv/lib/python3.9/site-packages/langchain_community/vectorstores/oraclevs.py", line 68, in wrapper
raise RuntimeError("Unexpected error: {}".format(e)) from e
RuntimeError: Unexpected error: ORA-51805: Vector is not properly formatted (dimension value is either not a number or infinity).
Help: https://docs.oracle.com/error-help/db/ora-51805/
Traceback (most recent call last):
File "/home/testuser/projects/venv/lib/python3.9/site-packages/langchain_community/vectorstores/oraclevs.py", line 54, in wrapper
return func(*args, **kwargs)
File "/home/testuser/projects/venv/lib/python3.9/site-packages/langchain_community/vectorstores/oraclevs.py", line 535, in add_texts
cursor.executemany(
File "/home/testuser/projects/venv/lib/python3.9/site-packages/oracledb/cursor.py", line 751, in executemany
self._impl.executemany(
File "src/oracledb/impl/thin/cursor.pyx", line 218, in oracledb.thin_impl.ThinCursorImpl.executemany
File "src/oracledb/impl/thin/protocol.pyx", line 438, in oracledb.thin_impl.Protocol._process_single_message
File "src/oracledb/impl/thin/protocol.pyx", line 439, in oracledb.thin_impl.Protocol._process_single_message
File "src/oracledb/impl/thin/protocol.pyx", line 432, in oracledb.thin_impl.Protocol._process_message
oracledb.exceptions.DatabaseError: ORA-51805: Vector is not properly formatted (dimension value is either not a number or infinity).
Help: https://docs.oracle.com/error-help/db/ora-51805/
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/testuser/projects/venv/lib/python3.9/site-packages/langchain_community/vectorstores/oraclevs.py", line 54, in wrapper
return func(*args, **kwargs)
File "/home/testuser/projects/venv/lib/python3.9/site-packages/langchain_community/vectorstores/oraclevs.py", line 934, in from_texts
vss.add_texts(texts=list(texts), metadatas=metadatas)
File "/home/testuser/projects/venv/lib/python3.9/site-packages/langchain_community/vectorstores/oraclevs.py", line 68, in wrapper
raise RuntimeError("Unexpected error: {}".format(e)) from e
RuntimeError: Unexpected error: ORA-51805: Vector is not properly formatted (dimension value is either not a number or infinity).
Help: https://docs.oracle.com/error-help/db/ora-51805/
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/testuser/projects/zscratch/oracle_vs.py", line 227, in <module>
oraclevs_langchain(conn=conn,chunks=chunks_with_mdata,embeddings=embeddings)
File "/home/testuser/projects/venvzscratch/oracle_vs.py", line 206, in oraclevs_langchain
vector_store = OracleVS.from_texts(
File "/home/testuser/projects/venv/lib/python3.9/site-packages/langchain_community/vectorstores/oraclevs.py", line 58, in wrapper
raise RuntimeError(
RuntimeError: Failed due to a DB issue: Unexpected error: ORA-51805: Vector is not properly formatted (dimension value is either not a number or infinity).
Help: https://docs.oracle.com/error-help/db/ora-51805/
### Description
I'm trying to use OracleVS with the latest database release, Oracle Database 23ai, which supports the VECTOR datatype for storing embeddings. While trying to store the vector embeddings I'm facing the error **ORA-51805: Vector is not properly formatted (dimension value is either not a number or infinity).**
I identified a bug in langchain_community/vectorstores/oraclevs.py at line 524. After type-casting the embeddings to the string datatype, it started running smoothly:
(id_, text, json.dumps(metadata), array.array("f", embedding)) -> (id_, text, json.dumps(metadata), str(embedding))
I was facing the same error during retrieval as well and applied the same fix at line 616:
embedding_arr = array.array("f", embedding) -> embedding_arr = str(embedding)
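To make the two representations concrete, here is a small self-contained sketch (my illustration, not the library's code): `array.array("f", ...)` yields a binary float32 buffer, while the workaround binds the plain text form of the vector.

```python
import array

embedding = [0.12, -0.5, 3.25]  # stand-in for a real embedding vector

# What langchain-community's add_texts currently binds (float32 buffer):
as_array = array.array("f", embedding)

# The workaround from this report: bind the textual form instead,
# which the database can parse as a vector literal.
as_text = str(embedding)

print(as_array.typecode, as_text)
```

Whether the binary form should also work depends on the column definition and driver version, so treat the string form as a workaround rather than the canonical fix.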
### System Info
langchain==0.2.1
langchain-cohere==0.1.5
langchain-community==0.2.1
langchain-core==0.2.3
langchain-experimental==0.0.59
langchain-google-genai==1.0.5
langchain-openai==0.1.8
langchain-text-splitters==0.2.0
oracledb==2.2.1 | Facing ORA-51805: Vector is not properly formatted (dimension value is either not a number or infinity) error while using OracleVS | https://api.github.com/repos/langchain-ai/langchain/issues/22469/comments | 0 | 2024-06-04T11:03:25Z | 2024-06-04T11:05:54Z | https://github.com/langchain-ai/langchain/issues/22469 | 2,333,232,682 | 22,469 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import (ChatPromptTemplate, HumanMessagePromptTemplate,
                                    MessagesPlaceholder, PromptTemplate,
                                    SystemMessagePromptTemplate)

# `llm` (a ChatOpenAI instance) and `message` are defined elsewhere.
question_prompt = """You are an expert in process modeling and Petri Nets. Your task is to formulate questions based on a provided process description.
"""
prompt_question = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template=question_prompt)),
    MessagesPlaceholder(variable_name='chat_history', optional=True),
    HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['input'], template='{input}')),
    MessagesPlaceholder(variable_name='agent_scratchpad', optional=True)
])
question_agent = create_tool_calling_agent(llm, [], prompt_question)
question_agent_executor = AgentExecutor(agent=question_agent, tools=[], verbose=True)
response = question_agent_executor.invoke({"input": message})
```
### Error Message and Stack Trace (if applicable)
{
"name": "BadRequestError",
"message": "Error code: 400 - {'error': {'message': \"Invalid 'tools': empty array. Expected an array with minimum length 1, but got an empty array instead.\", 'type': 'invalid_request_error', 'param': 'tools', 'code': 'empty_array'}}",
"stack": "---------------------------------------------------------------------------
BadRequestError Traceback (most recent call last)
Cell In[8], line 5
1 process_description = \"\"\"A customer brings in a defective computer and the CRS checks the defect and hands out a repair cost calculation back. If the customer decides that the costs are acceptable, the process continues otherwise she takes her computer home unrepaired. The ongoing repair consists of two activities which are executed in an arbitrary order. The first activity is to check and repair the hardware, whereas the second activity checks and configures the software. After each of these activities, the proper system functionality is tested. If an error is detected, another arbitrary repair activity is executed; otherwise, the repair is finished.
2 \"\"\"
3 user_input = {\"messages\": process_description}
----> 5 for s in graph.stream(
6 {\"process_description\": [HumanMessage(content=process_description)]},
7 {\"recursion_limit\": 14},
8 ):
9 if \"__end__\" not in s:
10 print(s)
File /Applications/anaconda3/lib/python3.11/site-packages/langgraph/pregel/__init__.py:686, in Pregel.stream(self, input, config, stream_mode, output_keys, input_keys, interrupt_before_nodes, interrupt_after_nodes, debug)
679 done, inflight = concurrent.futures.wait(
680 futures,
681 return_when=concurrent.futures.FIRST_EXCEPTION,
682 timeout=self.step_timeout,
683 )
685 # panic on failure or timeout
--> 686 _panic_or_proceed(done, inflight, step)
688 # combine pending writes from all tasks
689 pending_writes = deque[tuple[str, Any]]()
File /Applications/anaconda3/lib/python3.11/site-packages/langgraph/pregel/__init__.py:1033, in _panic_or_proceed(done, inflight, step)
1031 inflight.pop().cancel()
1032 # raise the exception
-> 1033 raise exc
1034 # TODO this is where retry of an entire step would happen
1036 if inflight:
1037 # if we got here means we timed out
File /Applications/anaconda3/lib/python3.11/concurrent/futures/thread.py:58, in _WorkItem.run(self)
55 return
57 try:
---> 58 result = self.fn(*self.args, **self.kwargs)
59 except BaseException as exc:
60 self.future.set_exception(exc)
File /Applications/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:2399, in RunnableSequence.invoke(self, input, config)
2397 try:
2398 for i, step in enumerate(self.steps):
-> 2399 input = step.invoke(
2400 input,
2401 # mark each step as a child run
2402 patch_config(
2403 config, callbacks=run_manager.get_child(f\"seq:step:{i+1}\")
2404 ),
2405 )
2406 # finish the root run
2407 except BaseException as e:
File /Applications/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:3863, in RunnableLambda.invoke(self, input, config, **kwargs)
3861 \"\"\"Invoke this runnable synchronously.\"\"\"
3862 if hasattr(self, \"func\"):
-> 3863 return self._call_with_config(
3864 self._invoke,
3865 input,
3866 self._config(config, self.func),
3867 **kwargs,
3868 )
3869 else:
3870 raise TypeError(
3871 \"Cannot invoke a coroutine function synchronously.\"
3872 \"Use `ainvoke` instead.\"
3873 )
File /Applications/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:1509, in Runnable._call_with_config(self, func, input, config, run_type, **kwargs)
1505 context = copy_context()
1506 context.run(_set_config_context, child_config)
1507 output = cast(
1508 Output,
-> 1509 context.run(
1510 call_func_with_variable_args, # type: ignore[arg-type]
1511 func, # type: ignore[arg-type]
1512 input, # type: ignore[arg-type]
1513 config,
1514 run_manager,
1515 **kwargs,
1516 ),
1517 )
1518 except BaseException as e:
1519 run_manager.on_chain_error(e)
File /Applications/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/config.py:365, in call_func_with_variable_args(func, input, config, run_manager, **kwargs)
363 if run_manager is not None and accepts_run_manager(func):
364 kwargs[\"run_manager\"] = run_manager
--> 365 return func(input, **kwargs)
File /Applications/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:3737, in RunnableLambda._invoke(self, input, run_manager, config, **kwargs)
3735 output = chunk
3736 else:
-> 3737 output = call_func_with_variable_args(
3738 self.func, input, config, run_manager, **kwargs
3739 )
3740 # If the output is a runnable, invoke it
3741 if isinstance(output, Runnable):
File /Applications/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/config.py:365, in call_func_with_variable_args(func, input, config, run_manager, **kwargs)
363 if run_manager is not None and accepts_run_manager(func):
364 kwargs[\"run_manager\"] = run_manager
--> 365 return func(input, **kwargs)
Cell In[6], line 84, in generateQuestions(state)
81 process_description = messages[-1]
83 # Invoke the solution executor with a dictionary containing 'input'
---> 84 response = question_agent_executor.invoke({\"input\": process_description})
86 # Debugging Information
87 print(\"Response from question agent:\", response)
File /Applications/anaconda3/lib/python3.11/site-packages/langchain/chains/base.py:166, in Chain.invoke(self, input, config, **kwargs)
164 except BaseException as e:
165 run_manager.on_chain_error(e)
--> 166 raise e
167 run_manager.on_chain_end(outputs)
169 if include_run_info:
File /Applications/anaconda3/lib/python3.11/site-packages/langchain/chains/base.py:156, in Chain.invoke(self, input, config, **kwargs)
153 try:
154 self._validate_inputs(inputs)
155 outputs = (
--> 156 self._call(inputs, run_manager=run_manager)
157 if new_arg_supported
158 else self._call(inputs)
159 )
161 final_outputs: Dict[str, Any] = self.prep_outputs(
162 inputs, outputs, return_only_outputs
163 )
164 except BaseException as e:
File /Applications/anaconda3/lib/python3.11/site-packages/langchain/agents/agent.py:1433, in AgentExecutor._call(self, inputs, run_manager)
1431 # We now enter the agent loop (until it returns something).
1432 while self._should_continue(iterations, time_elapsed):
-> 1433 next_step_output = self._take_next_step(
1434 name_to_tool_map,
1435 color_mapping,
1436 inputs,
1437 intermediate_steps,
1438 run_manager=run_manager,
1439 )
1440 if isinstance(next_step_output, AgentFinish):
1441 return self._return(
1442 next_step_output, intermediate_steps, run_manager=run_manager
1443 )
File /Applications/anaconda3/lib/python3.11/site-packages/langchain/agents/agent.py:1139, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
1130 def _take_next_step(
1131 self,
1132 name_to_tool_map: Dict[str, BaseTool],
(...)
1136 run_manager: Optional[CallbackManagerForChainRun] = None,
1137 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
1138 return self._consume_next_step(
-> 1139 [
1140 a
1141 for a in self._iter_next_step(
1142 name_to_tool_map,
1143 color_mapping,
1144 inputs,
1145 intermediate_steps,
1146 run_manager,
1147 )
1148 ]
1149 )
File /Applications/anaconda3/lib/python3.11/site-packages/langchain/agents/agent.py:1139, in <listcomp>(.0)
1130 def _take_next_step(
1131 self,
1132 name_to_tool_map: Dict[str, BaseTool],
(...)
1136 run_manager: Optional[CallbackManagerForChainRun] = None,
1137 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
1138 return self._consume_next_step(
-> 1139 [
1140 a
1141 for a in self._iter_next_step(
1142 name_to_tool_map,
1143 color_mapping,
1144 inputs,
1145 intermediate_steps,
1146 run_manager,
1147 )
1148 ]
1149 )
File /Applications/anaconda3/lib/python3.11/site-packages/langchain/agents/agent.py:1167, in AgentExecutor._iter_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
1164 intermediate_steps = self._prepare_intermediate_steps(intermediate_steps)
1166 # Call the LLM to see what to do.
-> 1167 output = self.agent.plan(
1168 intermediate_steps,
1169 callbacks=run_manager.get_child() if run_manager else None,
1170 **inputs,
1171 )
1172 except OutputParserException as e:
1173 if isinstance(self.handle_parsing_errors, bool):
File /Applications/anaconda3/lib/python3.11/site-packages/langchain/agents/agent.py:515, in RunnableMultiActionAgent.plan(self, intermediate_steps, callbacks, **kwargs)
507 final_output: Any = None
508 if self.stream_runnable:
509 # Use streaming to make sure that the underlying LLM is invoked in a
510 # streaming
(...)
513 # Because the response from the plan is not a generator, we need to
514 # accumulate the output into final output and return that.
--> 515 for chunk in self.runnable.stream(inputs, config={\"callbacks\": callbacks}):
516 if final_output is None:
517 final_output = chunk
File /Applications/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:2775, in RunnableSequence.stream(self, input, config, **kwargs)
2769 def stream(
2770 self,
2771 input: Input,
2772 config: Optional[RunnableConfig] = None,
2773 **kwargs: Optional[Any],
2774 ) -> Iterator[Output]:
-> 2775 yield from self.transform(iter([input]), config, **kwargs)
File /Applications/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:2762, in RunnableSequence.transform(self, input, config, **kwargs)
2756 def transform(
2757 self,
2758 input: Iterator[Input],
2759 config: Optional[RunnableConfig] = None,
2760 **kwargs: Optional[Any],
2761 ) -> Iterator[Output]:
-> 2762 yield from self._transform_stream_with_config(
2763 input,
2764 self._transform,
2765 patch_config(config, run_name=(config or {}).get(\"run_name\") or self.name),
2766 **kwargs,
2767 )
File /Applications/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:1778, in Runnable._transform_stream_with_config(self, input, transformer, config, run_type, **kwargs)
1776 try:
1777 while True:
-> 1778 chunk: Output = context.run(next, iterator) # type: ignore
1779 yield chunk
1780 if final_output_supported:
File /Applications/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:2726, in RunnableSequence._transform(self, input, run_manager, config)
2717 for step in steps:
2718 final_pipeline = step.transform(
2719 final_pipeline,
2720 patch_config(
(...)
2723 ),
2724 )
-> 2726 for output in final_pipeline:
2727 yield output
File /Applications/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:1154, in Runnable.transform(self, input, config, **kwargs)
1151 final: Input
1152 got_first_val = False
-> 1154 for ichunk in input:
1155 # The default implementation of transform is to buffer input and
1156 # then call stream.
1157 # It'll attempt to gather all input into a single chunk using
1158 # the `+` operator.
1159 # If the input is not addable, then we'll assume that we can
1160 # only operate on the last chunk,
1161 # and we'll iterate until we get to the last chunk.
1162 if not got_first_val:
1163 final = ichunk
File /Applications/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:4644, in RunnableBindingBase.transform(self, input, config, **kwargs)
4638 def transform(
4639 self,
4640 input: Iterator[Input],
4641 config: Optional[RunnableConfig] = None,
4642 **kwargs: Any,
4643 ) -> Iterator[Output]:
-> 4644 yield from self.bound.transform(
4645 input,
4646 self._merge_configs(config),
4647 **{**self.kwargs, **kwargs},
4648 )
File /Applications/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:1172, in Runnable.transform(self, input, config, **kwargs)
1169 final = ichunk
1171 if got_first_val:
-> 1172 yield from self.stream(final, config, **kwargs)
File /Applications/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:265, in BaseChatModel.stream(self, input, config, stop, **kwargs)
258 except BaseException as e:
259 run_manager.on_llm_error(
260 e,
261 response=LLMResult(
262 generations=[[generation]] if generation else []
263 ),
264 )
--> 265 raise e
266 else:
267 run_manager.on_llm_end(LLMResult(generations=[[generation]]))
File /Applications/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:245, in BaseChatModel.stream(self, input, config, stop, **kwargs)
243 generation: Optional[ChatGenerationChunk] = None
244 try:
--> 245 for chunk in self._stream(messages, stop=stop, **kwargs):
246 if chunk.message.id is None:
247 chunk.message.id = f\"run-{run_manager.run_id}\"
File /Applications/anaconda3/lib/python3.11/site-packages/langchain_openai/chat_models/base.py:441, in ChatOpenAI._stream(self, messages, stop, run_manager, **kwargs)
438 params = {**params, **kwargs, \"stream\": True}
440 default_chunk_class = AIMessageChunk
--> 441 for chunk in self.client.create(messages=message_dicts, **params):
442 if not isinstance(chunk, dict):
443 chunk = chunk.model_dump()
File /Applications/anaconda3/lib/python3.11/site-packages/openai/_utils/_utils.py:277, in required_args.<locals>.inner.<locals>.wrapper(*args, **kwargs)
275 msg = f\"Missing required argument: {quote(missing[0])}\"
276 raise TypeError(msg)
--> 277 return func(*args, **kwargs)
File /Applications/anaconda3/lib/python3.11/site-packages/openai/resources/chat/completions.py:581, in Completions.create(self, messages, model, frequency_penalty, function_call, functions, logit_bias, logprobs, max_tokens, n, presence_penalty, response_format, seed, stop, stream, temperature, tool_choice, tools, top_logprobs, top_p, user, extra_headers, extra_query, extra_body, timeout)
550 @required_args([\"messages\", \"model\"], [\"messages\", \"model\", \"stream\"])
551 def create(
552 self,
(...)
579 timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
580 ) -> ChatCompletion | Stream[ChatCompletionChunk]:
--> 581 return self._post(
582 \"/chat/completions\",
583 body=maybe_transform(
584 {
585 \"messages\": messages,
586 \"model\": model,
587 \"frequency_penalty\": frequency_penalty,
588 \"function_call\": function_call,
589 \"functions\": functions,
590 \"logit_bias\": logit_bias,
591 \"logprobs\": logprobs,
592 \"max_tokens\": max_tokens,
593 \"n\": n,
594 \"presence_penalty\": presence_penalty,
595 \"response_format\": response_format,
596 \"seed\": seed,
597 \"stop\": stop,
598 \"stream\": stream,
599 \"temperature\": temperature,
600 \"tool_choice\": tool_choice,
601 \"tools\": tools,
602 \"top_logprobs\": top_logprobs,
603 \"top_p\": top_p,
604 \"user\": user,
605 },
606 completion_create_params.CompletionCreateParams,
607 ),
608 options=make_request_options(
609 extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
610 ),
611 cast_to=ChatCompletion,
612 stream=stream or False,
613 stream_cls=Stream[ChatCompletionChunk],
614 )
File /Applications/anaconda3/lib/python3.11/site-packages/openai/_base_client.py:1232, in SyncAPIClient.post(self, path, cast_to, body, options, files, stream, stream_cls)
1218 def post(
1219 self,
1220 path: str,
(...)
1227 stream_cls: type[_StreamT] | None = None,
1228 ) -> ResponseT | _StreamT:
1229 opts = FinalRequestOptions.construct(
1230 method=\"post\", url=path, json_data=body, files=to_httpx_files(files), **options
1231 )
-> 1232 return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File /Applications/anaconda3/lib/python3.11/site-packages/openai/_base_client.py:921, in SyncAPIClient.request(self, cast_to, options, remaining_retries, stream, stream_cls)
912 def request(
913 self,
914 cast_to: Type[ResponseT],
(...)
919 stream_cls: type[_StreamT] | None = None,
920 ) -> ResponseT | _StreamT:
--> 921 return self._request(
922 cast_to=cast_to,
923 options=options,
924 stream=stream,
925 stream_cls=stream_cls,
926 remaining_retries=remaining_retries,
927 )
File /Applications/anaconda3/lib/python3.11/site-packages/openai/_base_client.py:1012, in SyncAPIClient._request(self, cast_to, options, remaining_retries, stream, stream_cls)
1009 err.response.read()
1011 log.debug(\"Re-raising status error\")
-> 1012 raise self._make_status_error_from_response(err.response) from None
1014 return self._process_response(
1015 cast_to=cast_to,
1016 options=options,
(...)
1019 stream_cls=stream_cls,
1020 )
BadRequestError: Error code: 400 - {'error': {'message': \"Invalid 'tools': empty array. Expected an array with minimum length 1, but got an empty array instead.\", 'type': 'invalid_request_error', 'param': 'tools', 'code': 'empty_array'}}"
}
### Description
I am trying to use an agent with an empty tools list. If I use the same code with an open-source LLM it works, but with an OpenAI LLM I get the error message.
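One possible workaround (my sketch, not an official fix): only take the tool-calling path when the tools list is non-empty, since OpenAI rejects `tools=[]`. The `llm_call` stand-in below replaces the real agent and model so the guard logic is self-contained; with the real classes, the tool branch would build `create_tool_calling_agent(llm, tools, prompt_question)` and an `AgentExecutor` as in the code above.

```python
def run_with_optional_tools(llm_call, tools, user_input):
    """Only pass `tools` through when there is at least one tool.

    Sending tools=[] to OpenAI triggers the 400 'empty_array' error
    shown in the trace above, so the no-tools case calls the model
    directly (equivalent to invoking `prompt | llm`).
    """
    if tools:
        return llm_call(user_input, tools=tools)
    return llm_call(user_input)


# Demonstration with a fake model call:
seen = []

def fake_llm(text, tools=None):
    seen.append(tools)
    return f"answered: {text}"

print(run_with_optional_tools(fake_llm, [], "describe the process"))
```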
### System Info
platform: mac
Python: 3.10.2
| tool_calling_agent with empty tools list is not working | https://api.github.com/repos/langchain-ai/langchain/issues/22467/comments | 3 | 2024-06-04T10:25:12Z | 2024-06-04T15:47:28Z | https://github.com/langchain-ai/langchain/issues/22467 | 2,333,152,014 | 22,467 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Below prompt is for query constructor,
https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/query_constructor/prompt.py#L205
```python
DEFAULT_SUFFIX = """\
<< Example {i}. >>
Data Source:
```json
{{{{
"content": "{content}",
"attributes": {attributes}
}}}}
... (skipped)
```
For "attributes", the value is a string produced by json.dumps(AttributeInfo).
Here is an example (note the indentation); it is named **attribute_str** in LangChain:
```string
{
"artist": {
"description": "Name of the song artist",
"type": "string"
}
}
```
Now, when we do DEFAULT_SUFFIX.format(content="some_content", attributes=**attribute_str**), the resulting string will be:
```json
{
"content": "some_content",
"attributes": {
"artist": { <-------------improper indent
"description": "Name of the song artist", <-------------improper indent
"type": "string"<-------------improper indent
} <-------------improper indent
} <-------------improper indent
}
```
While testing with Llama 3 70B Instruct, this prompt (with the improper indentation) causes a NO_FILTER result; of course, this affects the query results.
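The effect is easy to reproduce with a simplified stand-in template (not the actual DEFAULT_SUFFIX): `str.format` splices the multi-line JSON in verbatim, so every continuation line keeps its top-level indentation instead of being nested under "attributes".

```python
import json

# Simplified stand-in for the data-source block of DEFAULT_SUFFIX.
TEMPLATE = '{{\n    "content": "{content}",\n    "attributes": {attributes}\n}}'

attribute_str = json.dumps(
    {"artist": {"description": "Name of the song artist", "type": "string"}},
    indent=4,
)

# Continuation lines stay at top-level indentation (4 spaces, not 8).
broken = TEMPLATE.format(content="some_content", attributes=attribute_str)

# One possible fix: re-indent continuation lines before formatting.
fixed = TEMPLATE.format(
    content="some_content",
    attributes=attribute_str.replace("\n", "\n    "),
)
print(broken)
print(fixed)
```

The `replace`-based re-indentation is just one way to align the nested JSON; any helper that prefixes continuation lines would do.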
### Error Message and Stack Trace (if applicable)
_No response_
### Description
An improperly formatted prompt template produces the wrong indentation, which affects query results (e.g. when using SelfQueryRetriever).
### System Info
System Information
------------------
> OS: Linux
> OS Version: #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023
> Python Version: 3.10.12 (main, Jul 5 2023, 18:54:27) [GCC 11.2.0]
Package Information
-------------------
> langchain_core: 0.2.2
> langchain: 0.1.17
> langchain_community: 0.0.36
> langsmith: 0.1.52
> langchain_chroma: 0.1.1
> langchain_openai: 0.1.8
> langchain_text_splitters: 0.0.1
> langserve: 0.1.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
| Wrong format of query constructor prompt while using SelfQueryRetriever | https://api.github.com/repos/langchain-ai/langchain/issues/22466/comments | 0 | 2024-06-04T10:20:08Z | 2024-06-04T10:31:55Z | https://github.com/langchain-ai/langchain/issues/22466 | 2,333,140,917 | 22,466 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following is the code I use to send a multimodal message to Ollama:
```py
from langchain_community.chat_models import ChatOllama
import streamlit as st
# Adding History
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_community.chat_message_histories import StreamlitChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory
import os, base64
llm = ChatOllama(model="bakllava")
prompt = ChatPromptTemplate.from_messages(
[
("system", "You are a helpful assistant that can describe images."),
MessagesPlaceholder(variable_name="chat_history"),
(
"human",
[
{
"type": "image_url",
                    "image_url": "data:image/jpeg;base64,{image}",  # {image} is filled in by the prompt template
},
{"type": "text", "text": "{input}"},
],
),
]
)
history = StreamlitChatMessageHistory()
def encode_image(image_path):
with open(image_path, "rb") as image_file:
return base64.b64encode(image_file.read()).decode("utf-8")
def process_image(file):
with st.spinner("Processing image..."):
data = file.read()
file_name = os.path.join("./", file.name)
with open(file_name, "wb") as f:
f.write(data)
image = encode_image(file_name)
st.session_state.encoded_image = image
st.success("Image encoded. Ask your questions")
chain = prompt | llm
chain_with_history = RunnableWithMessageHistory(
chain,
lambda session_id: history,
input_messages_key="input",
history_messages_key="chat_history",
)
def clear_history():
if "langchain_messages" in st.session_state:
del st.session_state["langchain_messages"]
st.title("Chat With Image")
uploaded_file = st.file_uploader("Upload your image: ", type=["jpg", "png"])
add_file = st.button("Submit File", on_click=clear_history)
if uploaded_file and add_file:
process_image(uploaded_file)
for message in st.session_state["langchain_messages"]:
role = "user" if message.type == "human" else "assistant"
with st.chat_message(role):
st.markdown(message.content)
question = st.chat_input("Your Question")
if question:
with st.chat_message("user"):
st.markdown(question)
if "encoded_image" in st.session_state:
image = st.session_state["encoded_image"]
response = chain_with_history.stream(
{"input": question, "image": image},
config={"configurable": {"session_id": "any"}},
)
with st.chat_message("assistant"):
st.write_stream(response)
else:
st.error("No image is uploaded. Upload your image first.")
```
When I upload an image and send a message, an error occurred saying:
ValueError: Only string image_url content parts are supported
I tracked this error to the `ollama.py` file and found the cause at line 123:
```py
if isinstance(content_part.get("image_url"), str):
image_url_components = content_part["image_url"].split(",")
```
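A reduced version of that check (my simplification of the quoted lines, with a hypothetical helper name, not the full function) shows why a non-string `image_url` — e.g. a dict produced after prompt templating — raises the error:

```python
def extract_b64_payload(content_part: dict) -> str:
    """Mirror of the isinstance guard quoted above, simplified."""
    image_url = content_part.get("image_url")
    if isinstance(image_url, str):
        # e.g. "data:image/jpeg;base64,<payload>" -> "<payload>"
        return image_url.split(",")[-1]
    raise ValueError("Only string image_url content parts are supported.")


print(extract_b64_payload({"type": "image_url",
                           "image_url": "data:image/jpeg;base64,AAAA"}))
```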
### Error Message and Stack Trace (if applicable)
Uncaught app exception
Traceback (most recent call last):
File "/opt/homebrew/lib/python3.11/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 600, in _run_script
exec(code, module.__dict__)
File "/Users/nsebhastian/Desktop/DEV/8_LangChain_Beginners/source/14_handling_images/app_ollama.py", line 87, in <module>
st.write_stream(response)
File "/opt/homebrew/lib/python3.11/site-packages/streamlit/runtime/metrics_util.py", line 397, in wrapped_func
result = non_optional_func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/streamlit/elements/write.py", line 167, in write_stream
for chunk in stream: # type: ignore
File "/opt/homebrew/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4608, in stream
yield from self.bound.stream(
File "/opt/homebrew/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4608, in stream
yield from self.bound.stream(
File "/opt/homebrew/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2775, in stream
yield from self.transform(iter([input]), config, **kwargs)
File "/opt/homebrew/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2762, in transform
yield from self._transform_stream_with_config(
File "/opt/homebrew/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1778, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2726, in _transform
for output in final_pipeline:
File "/opt/homebrew/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4644, in transform
yield from self.bound.transform(
File "/opt/homebrew/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2762, in transform
yield from self._transform_stream_with_config(
File "/opt/homebrew/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1778, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2726, in _transform
for output in final_pipeline:
File "/opt/homebrew/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1172, in transform
yield from self.stream(final, config, **kwargs)
File "/opt/homebrew/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 265, in stream
raise e
File "/opt/homebrew/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 245, in stream
for chunk in self._stream(messages, stop=stop, **kwargs):
File "/opt/homebrew/lib/python3.11/site-packages/langchain_community/chat_models/ollama.py", line 317, in _stream
for stream_resp in self._create_chat_stream(messages, stop, **kwargs):
File "/opt/homebrew/lib/python3.11/site-packages/langchain_community/chat_models/ollama.py", line 160, in _create_chat_stream
"messages": self._convert_messages_to_ollama_messages(messages),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/langchain_community/chat_models/ollama.py", line 132, in _convert_messages_to_ollama_messages
raise ValueError(
ValueError: Only string image_url content parts are supported.
### Description
I'm trying to send a multimodal message using the ChatOllama class.
When I print the `content_part.get("image_url")` value, it shows a dictionary with a 'url' key, even though I send a string for the `image_url` value, as in the example code:
```py
(
"human",
[
{
"type": "image_url",
"image_url": f"data:image/jpeg;base64,""{image}",
},
{"type": "text", "text": "{input}"},
],
),
```
I can fix this issue by reading the 'url' key inside 'image_url' as follows:
```py
if isinstance(content_part.get("image_url")["url"], str):
image_url_components = content_part["image_url"]["url"].split(",")
```
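For reference, the dict shape most likely comes from the prompt-template layer: LangChain's `ImagePromptTemplate` normalizes string image parts into OpenAI-style `{"url": ...}` dicts before the model integration sees them (this is a hedged explanation based on the observed behavior, not a statement about `ollama.py` itself). A defensive normalizer that accepts both shapes might look like this sketch:

```python
from typing import Optional


def get_image_url_string(content_part: dict) -> Optional[str]:
    """Return the image URL whether it is stored as a plain string or as
    an OpenAI-style {"url": ...} dict; None if neither shape matches."""
    image_url = content_part.get("image_url")
    if isinstance(image_url, str):
        return image_url
    if isinstance(image_url, dict) and isinstance(image_url.get("url"), str):
        return image_url["url"]
    return None


# Both shapes normalize to the same string:
flat = {"type": "image_url", "image_url": "data:image/jpeg;base64,AAAA"}
nested = {"type": "image_url", "image_url": {"url": "data:image/jpeg;base64,AAAA"}}
```

With a helper like this, the split on `","` can be applied to the returned string regardless of which shape arrived.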
Is this the right way to do it? Why is the 'url' key added to `content_part["image_url"]` even when I send an f-string?
Thank you.
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 22.6.0: Wed Jul 5 22:22:52 PDT 2023; root:xnu-8796.141.3~6/RELEASE_ARM64_T8103
> Python Version: 3.11.3 (main, Apr 7 2023, 20:13:31) [Clang 14.0.0 (clang-1400.0.29.202)]
Package Information
-------------------
> langchain_core: 0.2.3
> langchain: 0.2.1
> langchain_community: 0.2.1
> langsmith: 0.1.69
> langchain_google_genai: 1.0.5
> langchain_openai: 0.1.7
> langchain_text_splitters: 0.2.0
> langchainhub: 0.1.15 | ChatOllama ValueError: Only string image_url content parts are supported. | https://api.github.com/repos/langchain-ai/langchain/issues/22460/comments | 0 | 2024-06-04T07:40:39Z | 2024-06-04T07:43:06Z | https://github.com/langchain-ai/langchain/issues/22460 | 2,332,789,153 | 22,460 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I have to following chain defined:
```python
chain = prompt | llm_openai | parser
chain_result = chain.invoke({"number": number, "topics": topicList})
result = chain_result[0]
```
This causes my test to fail, whereas calling the invoke() methods one by one works fine:
```python
prompt_result = prompt.invoke({"number": number, "topics": topicList})
llm_result = llm_openai.invoke(prompt_result)
parser_result = parser.invoke(llm_result)
result = parser_result[0]
```
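For reference, composing with `|` is intended to be semantically equivalent to threading each step's output into the next. A pure-Python model of that behavior (only a sketch of the semantics; LangChain's actual `RunnableSequence` additionally propagates config and callbacks):

```python
def pipe(*steps):
    """Toy model of `a | b | c`: invoke() runs the steps left to right,
    feeding each step's output into the next."""
    def invoke(x):
        for step in steps:
            x = step(x)
        return x
    return invoke


# chained call and step-by-step calls agree on toy steps
chain = pipe(lambda x: x + 1, lambda x: x * 2)
step_by_step = lambda x: (x + 1) * 2
```

If the two paths really diverge, a useful first check is whether the parser receives the same input type inside the chain as `llm_result` has in the manual version.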
### Error Message and Stack Trace (if applicable)
Pydantic validation error
### Description
IMHO an LCEL chain should work exactly like calling the invoke() methods one by one. In my case I am unable to use LCEL because it does not.
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Fri May 17 21:20:54 UTC 2024
> Python Version: 3.12.3 (main, Apr 17 2024, 00:00:00) [GCC 14.0.1 20240411 (Red Hat 14.0.1-0)]
Package Information
-------------------
> langchain_core: 0.1.52
> langchain: 0.1.14
> langchain_community: 0.0.38
> langsmith: 0.1.67
> langchain_openai: 0.1.1
> langchain_text_splitters: 0.0.2 | LCEL not working, compared to identical invoke() call sequence | https://api.github.com/repos/langchain-ai/langchain/issues/22459/comments | 1 | 2024-06-04T07:06:57Z | 2024-06-04T14:08:10Z | https://github.com/langchain-ai/langchain/issues/22459 | 2,332,723,231 | 22,459 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
chain = GraphCypherQAChain.from_llm(
    graph=graph,
    cypher_llm=ChatOpenAI(temperature='0', model='gpt-3.5-turbo'),
    qa_llm=ChatOpenAI(temperature='0.5', model='gpt-3.5-turbo-16k'),
    cypher_llm_kwargs={"prompt": CYPHER_PROMPT, "memory": memory, "verbose": True},
    qa_llm_kwargs={"prompt": CYPHER_QA_PROMPT, "memory": readonlymemory, "verbose": True},
    # Limit the number of results from the Cypher QA Chain using the top_k parameter
    top_k=5,
    # Return intermediate steps from the Cypher QA Chain
    # return_intermediate_steps=True,
    validate_cypher=True,
    verbose=True,
    memory=memory,
    return_intermediate_steps=True,
)
chain.output_key = 'result'
chain.input_key = 'question'
answer = chain(question)
```
### Error Message and Stack Trace (if applicable)
```raise ValueError(
ValueError: Got multiple output keys: dict_keys(['result', 'intermediate_steps']), cannot determine which to store in memory. Please set the 'output_key' explicitly.
```
I am trying to use `GraphCypherQAChain` with memory. When I leave `return_intermediate_steps` at its default (`False`) I get a result, but when it is `True` I get the error above.
Another scenario: when I set `output_key` to `intermediate_steps`, it works, but I need the result. When I set `output_key` to `result` instead, I get a key error raised from `return inputs[prompt_input_key], outputs[output_key]`:
`KeyError: 'result'`
I need both `result` and `intermediate_steps`.
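The error comes from the memory layer having to pick exactly one output key to store. Roughly, the selection behaves like this stdlib sketch (mirroring the message in the traceback); a commonly suggested workaround is to pass an explicit `output_key` to the memory object itself, e.g. `ConversationBufferMemory(memory_key="history", output_key="result")`, so the chain can keep returning both keys:

```python
def pick_output(outputs, output_key=None):
    """Choose which chain output to store in memory.

    Without an explicit output_key, there must be exactly one output."""
    if output_key is None:
        if len(outputs) != 1:
            raise ValueError(f"Got multiple output keys: {list(outputs)}")
        return next(iter(outputs.values()))
    return outputs[output_key]


outputs = {"result": "42", "intermediate_steps": ["MATCH ..."]}
```

This is only a model of the behavior, not LangChain's actual code; the point is that the `output_key` needs to be set where the memory reads it, not only on the chain.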
### System Info
```
langchain==0.2.1
langchain-community==0.2.1
langchain-core==0.2.1
langchain-experimental==0.0.59
``` | `GraphCypherQAChain` not able to return both `result` and `intermediate_steps` together with memory? | https://api.github.com/repos/langchain-ai/langchain/issues/22457/comments | 0 | 2024-06-04T06:22:21Z | 2024-06-25T10:41:04Z | https://github.com/langchain-ai/langchain/issues/22457 | 2,332,653,797 | 22,457 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_huggingface.llms import HuggingFaceEndpoint
token = "<TOKEN_WITH_FINBEGRAINED_PERMISSIONS>"
llm = HuggingFaceEndpoint(
endpoint_url='https://api-inference.huggingface.co/models/HuggingFaceH4/zephyr-7b-beta',
token=token,
server_kwargs={
"headers": {"Content-Type": "application/json"}
}
)
resp = llm.invoke("Tell me a joke")
print(resp)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
With the PR https://github.com/langchain-ai/langchain/pull/22365, the login to the HF hub is skipped while [validating the environment](https://github.com/langchain-ai/langchain/blob/98b2e7b195235f8b31f91939edc8dcc22336f4e6/libs/partners/huggingface/langchain_huggingface/llms/huggingface_endpoint.py#L161) when initializing HuggingFaceEndpoint IF the token is None, which resolves the case in which we have a local TGI (https://github.com/langchain-ai/langchain/issues/20342).
However, we might also want to construct HuggingFaceEndpoint with
1. a fine-grained token, which allows accessing an InferenceEndpoint but cannot be used for logging in
2. a user-specific [oauth token](https://www.gradio.app/guides/sharing-your-app#o-auth-login-via-hugging-face), which also doesn't allow logging in, but which can be used to access the inference api.
These cases are not handled.
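A sketch of the desired validation flow (all names here are hypothetical and stdlib-only, not the actual `langchain_huggingface` code): attempt the hub login only when the token supports it, and fall back to plain Bearer-header auth instead of failing:

```python
def resolve_auth_mode(token, login):
    """Decide how to authenticate.

    `login` is any callable that raises ValueError for tokens that
    cannot be used to log in (fine-grained / oauth tokens)."""
    if token is None:
        return "anonymous"  # e.g. local TGI, no auth needed
    try:
        login(token)
        return "hub-login"
    except ValueError:
        # token can't log in but is still usable as a Bearer header
        return "bearer-header"


def fake_login(token):
    # stand-in for huggingface_hub.login for illustration only
    if not token.startswith("hf_"):
        raise ValueError("not a user access token")
```

The key point is the `except` branch: a rejected login should downgrade to header-only auth rather than abort endpoint construction.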
### System Info
generic | HuggingFaceEndpoint: skip login to hub with oauth token | https://api.github.com/repos/langchain-ai/langchain/issues/22456/comments | 4 | 2024-06-04T06:14:54Z | 2024-06-06T18:26:36Z | https://github.com/langchain-ai/langchain/issues/22456 | 2,332,642,720 | 22,456 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [x] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import langchain
from langchain_community.chat_models import ChatHunyuan
from langchain_core.messages import HumanMessage

print(langchain.__version__)

hunyuan_app_id = "******"
hunyuan_secret_id = "********************"
hunyuan_secret_key = "*******************"

llm_tongyi = ChatHunyuan(
    streaming=True,
    hunyuan_app_id=hunyuan_app_id,
    hunyuan_secret_id=hunyuan_secret_id,
    hunyuan_secret_key=hunyuan_secret_key,
)
print(llm_tongyi.invoke("how old are you"))
```
### Error Message and Stack Trace (if applicable)
```python
def _stream(
    self,
    messages: List[BaseMessage],
    stop: Optional[List[str]] = None,
    run_manager: Optional[CallbackManagerForLLMRun] = None,
    **kwargs: Any,
) -> Iterator[ChatGenerationChunk]:
    res = self._chat(messages, **kwargs)
    default_chunk_class = AIMessageChunk
    for chunk in res.iter_lines():
        response = json.loads(chunk)
        if "error" in response:
            raise ValueError(f"Error from Hunyuan api response: {response}")
        for choice in response["choices"]:
            chunk = _convert_delta_to_message_chunk(
                choice["delta"], default_chunk_class
            )
            default_chunk_class = chunk.__class__
            cg_chunk = ChatGenerationChunk(message=chunk)
            if run_manager:
                run_manager.on_llm_new_token(chunk.content, chunk=cg_chunk)
            yield cg_chunk
```

ug1.png…]()
### Description
`langchain_community.chat_models.ChatHunyuan` has a JSON-parsing bug in `_stream`: the raw chunk passed to `json.loads` is not valid JSON, so decoding fails.
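One hedged sketch of a more defensive line parser, assuming (based on the reported failure) that the streaming endpoint emits SSE-style lines such as `data: {...}` plus empty keep-alive lines, neither of which is bare JSON:

```python
import json


def parse_stream_line(raw: bytes):
    """Return the decoded JSON payload of one streamed line, or None
    for blank lines and non-JSON noise."""
    line = raw.decode("utf-8").strip()
    if not line:
        return None
    if line.startswith("data:"):
        line = line[len("data:"):].strip()
    try:
        return json.loads(line)
    except json.JSONDecodeError:
        return None
```

The actual wire format should be verified against the Hunyuan API; the point is that `_stream` should skip or pre-process lines before calling `json.loads`.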
### System Info
langchain version 0.1.9
windows
3.9.13 | langchain_community.chat_models ChatHunyuan had a bug JSON parsing error | https://api.github.com/repos/langchain-ai/langchain/issues/22452/comments | 3 | 2024-06-04T03:28:58Z | 2024-07-29T02:35:11Z | https://github.com/langchain-ai/langchain/issues/22452 | 2,332,460,550 | 22,452 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
huggingface_hub has its own environment variables that it reads from: https://huggingface.co/docs/huggingface_hub/main/en/package_reference/environment_variables. Langchain x HuggingFace integrations should be able to read from these, too. | Support native HuggingFace env vars | https://api.github.com/repos/langchain-ai/langchain/issues/22448/comments | 5 | 2024-06-03T22:19:34Z | 2024-07-31T21:44:19Z | https://github.com/langchain-ai/langchain/issues/22448 | 2,332,159,221 | 22,448 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code:
```
from langchain_community.tools.tavily_search import TavilySearchResults
search = TavilySearchResults(max_results=2)
await search.ainvoke("what is the weather in SF")
```
### Error Message and Stack Trace (if applicable)
"ClientConnectorCertificateError(ConnectionKey(host='api.tavily.com', port=443, is_ssl=True, ssl=True, proxy=None, proxy_auth=None, proxy_headers_hash=None), SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1000)'))"
### Description
The synchronous `invoke` does work; only `ainvoke` fails with the certificate error.
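A plausible explanation for the sync/async split: the synchronous path goes through `requests`, which ships with `certifi`'s CA bundle, while the async path uses `aiohttp`, which relies on the interpreter's default SSL context; on macOS python.org builds that context often has no CA certificates installed. A hedged sketch of the usual workaround is to build an explicit `SSLContext` (with `certifi` installed you would pass `cafile=certifi.where()`):

```python
import ssl

# Build an explicit SSL context. An aiohttp ClientSession can be given
# this via aiohttp.TCPConnector(ssl=ctx); with certifi installed, use
# ssl.create_default_context(cafile=certifi.where()) instead.
ctx = ssl.create_default_context()
```

This is a general aiohttp workaround, not something `TavilySearchResults` currently exposes as a parameter.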
### System Info
Running off master | Tavily Search Results ainvoke not working | https://api.github.com/repos/langchain-ai/langchain/issues/22445/comments | 1 | 2024-06-03T20:58:47Z | 2024-06-04T01:34:54Z | https://github.com/langchain-ai/langchain/issues/22445 | 2,332,041,873 | 22,445 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langgraph.prebuilt import create_react_agent
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
class CalculatorInput(BaseModel):
    a: int = Field(description="first number")
    b: int = Field(description="second number")


@tool("multiplication-tool", args_schema=CalculatorInput, return_direct=True)
def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b
tools = [multiply]
llm_gpt4 = ChatOpenAI(model="gpt-4o", temperature=0)
app = create_react_agent(llm_gpt4, tools)
query="what's the result of 5 * 6"
messages = app.invoke({"messages": [("human", query)]})
messages
```
### Error Message and Stack Trace (if applicable)
N/A
### Description
I am following the example at https://python.langchain.com/v0.2/docs/how_to/custom_tools/ , setting `return_direct` to True, and invoking the multiplication tool with a simple agent.
As `return_direct` is True, I expect the tool message not to be sent back to the LLM. But in the output (below), I still see the ToolMessage sent to the LLM, with a final AIMessage of `The result of \\(5 \\times 6\\) is 30.`
```
{'messages': [HumanMessage(content="what's the result of 5 * 6", id='1ac32371-4b2a-4aec-9147-bf30b6eb0f60'),
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_AslDg6NVGehW4W712neAw5xs', 'function': {'arguments': '{"a":5,"b":6}', 'name': 'multiplication-tool'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 20, 'prompt_tokens': 62, 'total_tokens': 82}, 'model_name': 'gpt-4o', 'system_fingerprint': 'fp_319be4768e', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-5285f886-c8b5-4ed1-a17c-ea72b4363c35-0', tool_calls=[{'name': 'multiplication-tool', 'args': {'a': 5, 'b': 6}, 'id': 'call_AslDg6NVGehW4W712neAw5xs'}]),
ToolMessage(content='30', name='multiplication-tool', id='76d68dc3-f808-4a7c-90bc-5ae6867f141d', tool_call_id='call_AslDg6NVGehW4W712neAw5xs'),
AIMessage(content='The result of \\(5 \\times 6\\) is 30.', response_metadata={'token_usage': {'completion_tokens': 16, 'prompt_tokens': 92, 'total_tokens': 108}, 'model_name': 'gpt-4o', 'system_fingerprint': 'fp_319be4768e', 'finish_reason': 'stop', 'logprobs': None}, id='run-5e0aaba7-dd05-45f5-9998-23b5bf77f40d-0')]}
```
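For context, `return_direct=True` is documented to mean the agent loop should stop and hand back the tool's output without another LLM call. A pure-Python model of the expected control flow (a sketch only, not langgraph's actual implementation):

```python
def agent_loop(plan_next_tool, tools, query, max_steps=5):
    """Toy agent loop: stop immediately when a return_direct tool runs."""
    observation = query
    for _ in range(max_steps):
        name, args = plan_next_tool(observation)
        fn, return_direct = tools[name]
        observation = fn(**args)
        if return_direct:
            return observation  # expected: no further LLM call happens
    return observation


tools = {"multiply": (lambda a, b: a * b, True)}
result = agent_loop(lambda _: ("multiply", {"a": 5, "b": 6}), tools, "5*6")
```

The reported behavior suggests `create_react_agent` does not implement this early exit for `return_direct` tools.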
### System Info
System Information
------------------
> OS: Linux
> OS Version: #35~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Tue May 7 09:00:52 UTC 2
> Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.2.1
> langchain: 0.2.1
> langchain_community: 0.2.1
> langsmith: 0.1.63
> langchain_chroma: 0.1.1
> langchain_cli: 0.0.23
> langchain_openai: 0.1.7
> langchain_text_splitters: 0.2.0
> langchainhub: 0.1.16
> langgraph: 0.0.55
> langserve: 0.2.1
| Tool `return_direct` doesn't work | https://api.github.com/repos/langchain-ai/langchain/issues/22441/comments | 4 | 2024-06-03T17:41:47Z | 2024-07-09T12:32:17Z | https://github.com/langchain-ai/langchain/issues/22441 | 2,331,707,780 | 22,441 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Here is my code:
```python
import os

import langchain
from langchain_community.cache import RedisSemanticCache
from langchain_community.chat_models import ChatCoze
from langchain_community.embeddings import OllamaEmbeddings
from langchain_core.messages import HumanMessage

langchain.llm_cache = RedisSemanticCache(redis_url="redis://localhost:6379", embedding=OllamaEmbeddings(model="Vistral", num_gpu=2))
chat = ChatCoze(
    coze_api_key=os.environ.get('COZE_API_KEY'),
    bot_id=os.environ.get('COZE_BOT_ID'),
    user="1",
    streaming=False,
    cache=True
)
chat([HumanMessage(content="Hi")])
```
### Error Message and Stack Trace (if applicable)
```
--> 136 redis_client = redis.from_url(redis_url, **kwargs)
137 if _check_for_cluster(redis_client):
138 redis_client.close()
AttributeError: module 'redis' has no attribute 'from_url'
```
### Description
I expected it to cache my query results in Redis.
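`redis.from_url` has been part of redis-py for many releases, so this `AttributeError` usually means that something else named `redis` is being imported, e.g. a local `redis.py` file or an unrelated package shadowing the real client, rather than a LangChain bug per se. A quick stdlib diagnostic (demonstrated with a stdlib module; run it with `"redis"` in your environment):

```python
import importlib.util


def module_origin(name):
    """Report the file Python would import `name` from (None if absent)."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None


# e.g. print(module_origin("redis")) -- if this points at a file inside
# your project instead of site-packages, that file shadows redis-py.
```

If the origin is a project file, renaming it (and deleting its `.pyc`) should restore the real `redis` package.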
### System Info
langchain==0.2.1
langchain-community==0.2.1
langchain-core==0.2.3
langchain-openai==0.1.8
langchain-text-splitters==0.2.0 | Can't use Redis semantic search | https://api.github.com/repos/langchain-ai/langchain/issues/22440/comments | 0 | 2024-06-03T16:46:11Z | 2024-06-03T16:48:42Z | https://github.com/langchain-ai/langchain/issues/22440 | 2,331,614,916 | 22,440 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code:
```python
from pathlib import Path
import getopt, sys, os, shutil
from langchain_community.document_loaders import (
    DirectoryLoader, TextLoader
)
from langchain_text_splitters import (
    Language,
    RecursiveCharacterTextSplitter
)


def routerloader(obj, buf, keys):
    if os.path.isfile(obj):
        Fname = os.path.basename(obj)
        if Fname.endswith(".c") or Fname.endswith(".h") or Fname.endswith(".cu"):
            loader = TextLoader(obj, autodetect_encoding=True)
            buf["c"].extend(loader.load())
            keychecker("c", keys)
    elif os.path.isdir(obj):
        # BEGIN F90 C .h CPP As TextLoader
        if any(File.endswith(".c") for File in os.listdir(obj)):
            abc = {'autodetect_encoding': True}
            loader = DirectoryLoader(
                obj, glob="**/*.c", loader_cls=TextLoader,
                loader_kwargs=abc, show_progress=True, use_multithreading=True
            )
            buf["c"].extend(loader.load())
            keychecker("c", keys)
        if any(File.endswith(".h") for File in os.listdir(obj)):
            abc = {'autodetect_encoding': True}
            loader = DirectoryLoader(
                obj, glob="**/*.h", loader_cls=TextLoader,
                loader_kwargs=abc, show_progress=True, use_multithreading=True
            )
            buf["c"].extend(loader.load())
            keychecker("c", keys)
    return buf, keys  # accumulator


def specificsplitter(keys, **kwargs):
    splitted_data = []
    splitter_fun = {key: [] for key in keys}
    embedding = kwargs.get("embedding", None)
    for key in keys:
        if key == "c" or key == "h" or key == "cuh" or key == "cu":
            splitter_fun[key] = RecursiveCharacterTextSplitter.from_language(
                language=Language.C, chunk_size=200, chunk_overlap=0
            )
    return splitter_fun


def keychecker(key, keys):
    if key not in keys:
        keys.append(key)


def loaddata(data_path, **kwargs):
    default_keys = ["txt", "pdf", "f90", "c", "cpp", "py", "png", "xlsx", "odt", "csv", "pptx", "md", "org"]
    buf = {key: [] for key in default_keys}
    keys = []
    documents = []
    embedding = kwargs.get("embedding", None)
    for data in data_path:
        print(data)
        buf, keys = routerloader(data, buf, keys)
    print(keys)
    print(buf)
    splitter_fun = specificsplitter(keys, embedding=embedding)
    print(splitter_fun)
    for key in keys:
        print("*" * 20)
        print(key)
        buf[key] = splitter_fun[key].split_documents(buf[key])
        print(buf[key])
        print(len(buf[key]))
    return buf, keys


IDOC_PATH = []
argumentlist = sys.argv[1:]
options = "hi:"
long_options = ["help",
                "inputdocs_path="]
arguments, values = getopt.getopt(argumentlist, options, long_options)
for currentArgument, currentValue in arguments:
    if currentArgument in ("-h", "--help"):
        print("python main.py -i path/docs")
    elif currentArgument in ("-i", "--inputdocs_path"):
        for i in currentValue.split(" "):
            if (len(i) != 0):
                if (os.path.isfile(i)) or ((os.path.isdir(i)) and (len(os.listdir(i)) != 0)):
                    IDOC_PATH.append(Path(i))

splitted_data, keys = loaddata(IDOC_PATH)
```
### Error Message and Stack Trace (if applicable)
```bash
python ISSUE_TXT_SPLITTER.py -i "/home/vlederer/Bureau/ISSUE_TXT/DOCS/hello_world.c"
/home/vlederer/Bureau/ISSUE_TXT/DOCS/hello_world.c
['c']
{'txt': [], 'pdf': [], 'f90': [], 'c': [Document(page_content='#include <stdio.h>\n\nint main() {\n puts("Hello, World!");\n return 0;\n}', metadata={'source': '/home/vlederer/Bureau/ISSUE_TXT/DOCS/hello_world.c'})], 'cpp': [], 'py': [], 'png': [], 'xlsx': [], 'odt': [], 'csv': [], 'pptx': [], 'md': [], 'org': []}
Traceback (most recent call last):
File "/home/vlederer/Bureau/ISSUE_TXT/ISSUE_TXT_SPLITTER.py", line 92, in <module>
splitted_data, keys = loaddata(IDOC_PATH)
^^^^^^^^^^^^^^^^^^^
File "/home/vlederer/Bureau/ISSUE_TXT/ISSUE_TXT_SPLITTER.py", line 67, in loaddata
splitter_fun = specificsplitter(keys, embedding=embedding)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/vlederer/Bureau/ISSUE_TXT/ISSUE_TXT_SPLITTER.py", line 47, in specificsplitter
splitter_fun[key] = RecursiveCharacterTextSplitter.from_language(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/Anaconda3/envs/langchain_rag_pytorchcuda121gpu_env/lib/python3.11/site-packages/langchain_text_splitters/character.py", line 116, in from_language
separators = cls.get_separators_for_language(language)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/Anaconda3/envs/langchain_rag_pytorchcuda121gpu_env/lib/python3.11/site-packages/langchain_text_splitters/character.py", line 631, in get_separators_for_language
raise ValueError(
ValueError: Language Language.C is not supported! Please choose from [<Language.CPP: 'cpp'>, <Language.GO: 'go'>, <Language.JAVA: 'java'>, <Language.KOTLIN: 'kotlin'>, <Language.JS: 'js'>, <Language.TS: 'ts'>, <Language.PHP: 'php'>, <Language.PROTO: 'proto'>, <Language.PYTHON: 'python'>, <Language.RST: 'rst'>, <Language.RUBY: 'ruby'>, <Language.RUST: 'rust'>, <Language.SCALA: 'scala'>, <Language.SWIFT: 'swift'>, <Language.MARKDOWN: 'markdown'>, <Language.LATEX: 'latex'>, <Language.HTML: 'html'>, <Language.SOL: 'sol'>, <Language.CSHARP: 'csharp'>, <Language.COBOL: 'cobol'>, <Language.C: 'c'>, <Language.LUA: 'lua'>, <Language.PERL: 'perl'>, <Language.HASKELL: 'haskell'>]
```
### Description
I'm trying to split C code using langchain-text-splitters and `RecursiveCharacterTextSplitter.from_language` with `language=Language.C` or `language='c'`. I'm expecting no error since the C language is listed by the enumerator:
```python
[print(e.value) for e in Language]
```
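Notably, the "Please choose from" list in the error appears to be built from the whole `Language` enum rather than from the languages the `if`/`elif` chain in `get_separators_for_language` actually handles, which would explain how `<Language.C: 'c'>` can be listed as a choice and still raise. A minimal stdlib model of that mismatch (hypothetical names, not the real splitter code):

```python
from enum import Enum


class Lang(Enum):
    CPP = "cpp"
    C = "c"  # present in the enum...


def get_separators(language):
    if language == Lang.CPP:
        return ["\nclass ", "\nvoid "]
    # ...but with no branch for Lang.C, it falls through to the raise,
    # whose "choose from" list is just every enum member:
    raise ValueError(
        f"Language {language} is not supported! Please choose from {list(Lang)}"
    )
```

If this is the cause, upgrading langchain-text-splitters to a version whose separator table includes C should resolve it.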
### System Info
```bash
langchain==0.2.1
langchain-community==0.2.1
langchain-core==0.2.3
langchain-experimental==0.0.59
langchain-text-splitters==0.2.0
```
```bash
No LSB modules are available.
Distributor ID: Ubuntu
Description: Linux Mint 21.3
Release: 22.04
Codename: virginia
```
```bash
Python 3.11.9
```
```bash
System Information
------------------
> OS: Linux
> OS Version: #35~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Tue May 7 09:00:52 UTC 2
> Python Version: 3.11.9 (main, Apr 19 2024, 16:48:06) [GCC 11.2.0]
Package Information
-------------------
> langchain_core: 0.2.3
> langchain: 0.2.1
> langchain_community: 0.2.1
> langsmith: 0.1.65
> langchain_experimental: 0.0.59
> langchain_text_splitters: 0.2.0
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
``` | RecursiveCharacterTextSplitter.from_language(language=Language.C) ValueError: Language Language.C is not supported! :bug: | https://api.github.com/repos/langchain-ai/langchain/issues/22430/comments | 1 | 2024-06-03T13:42:36Z | 2024-06-03T15:43:37Z | https://github.com/langchain-ai/langchain/issues/22430 | 2,331,198,366 | 22,430 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
class ConversationDBMemory(BaseChatMemory):
    conversation_id: str
    human_prefix: str = "Human"
    ai_prefix: str = "Assistant"
    llm: BaseLanguageModel
    memory_key: str = "history"

    @property
    async def buffer(self) -> List[BaseMessage]:
        async with get_async_session_context() as session:
            messages = await get_all_messages(session=session, conversation_id=self.conversation_id)
            print("messages in buffer: ", messages)
            chat_history: List[BaseMessage] = []
            for message in messages:
                chat_history.append(HumanMessage(content=message.user_query))
                chat_history.append(AIMessage(content=message.llm_response))
            print(f"chat history: {chat_history}")
            if not chat_history:
                return []
            return chat_history

    @property
    def memory_variables(self) -> List[str]:
        """Will always return list of memory variables.

        :meta private:
        """
        return [self.memory_key]

    async def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
        """Return history buffer."""
        buffer: Any = await self.buffer
        if self.return_messages:
            final_buffer: Any = buffer
        else:
            final_buffer = get_buffer_string(
                buffer,
                human_prefix=self.human_prefix,
                ai_prefix=self.ai_prefix,
            )
        inputs[self.memory_key] = final_buffer
        return inputs

    async def aload_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
        buffer: Any = await self.buffer
        if self.return_messages:
            final_buffer: Any = buffer
        else:
            final_buffer = get_buffer_string(
                buffer,
                human_prefix=self.human_prefix,
                ai_prefix=self.ai_prefix,
            )
        inputs[self.memory_key] = final_buffer
        return inputs
```
==========================
```python
chat_prompt = ChatPromptTemplate.from_messages([default_system_message_prompt, rag_chat_prompt])
# print(chat_prompt)
agent = {
    "history": lambda x: x["history"],
    "input": lambda x: x["input"],
    "knowledge": lambda x: x["knowledge"],
    "agent_scratchpad": lambda x: format_to_openai_tool_messages(
        x["intermediate_steps"]
    ),
} | chat_prompt | model_with_tools | OpenAIFunctionsAgentOutputParser()

agent_executor = AgentExecutor(agent=agent, verbose=True, callbacks=[callback], memory=memory, tools=tools)

task = asyncio.create_task(wrap_done(
    agent_executor.ainvoke(input={"input": user_query, "knowledge": knowledge}),
    callback.done
))
```
====================== Prompts
<INSTRUCTION>
Based on the known information, answer the question concisely and professionally. If the answer cannot be derived from it, please say "The question cannot be answered based on the known information."
No additional fabricated elements are allowed in the answer
</INSTRUCTION>
<CONVERSATION HISTORY>
{history}
</CONVERSATION HISTORY>
<KNOWLEDGE>
{knowledge}
</KNOWLEDGE>
<QUESTION>
{input}
</QUESTION>
### Error Message and Stack Trace (if applicable)
File "/Users/steveliu/miniconda3/envs/intelli_req_back/lib/python3.12/site-packages/langchain/chains/base.py", line 217, in ainvoke
raise e
File "/Users/steveliu/miniconda3/envs/intelli_req_back/lib/python3.12/site-packages/langchain/chains/base.py", line 212, in ainvoke
final_outputs: Dict[str, Any] = await self.aprep_outputs(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/steveliu/miniconda3/envs/intelli_req_back/lib/python3.12/site-packages/langchain/chains/base.py", line 486, in aprep_outputs
await self.memory.asave_context(inputs, outputs)
File "/Users/steveliu/miniconda3/envs/intelli_req_back/lib/python3.12/site-packages/langchain/memory/chat_memory.py", line 64, in asave_context
input_str, output_str = self._get_input_output(inputs, outputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/steveliu/miniconda3/envs/intelli_req_back/lib/python3.12/site-packages/langchain/memory/chat_memory.py", line 30, in _get_input_output
prompt_input_key = get_prompt_input_key(inputs, self.memory_variables)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/steveliu/miniconda3/envs/intelli_req_back/lib/python3.12/site-packages/langchain/memory/utils.py", line 19, in get_prompt_input_key
raise ValueError(f"One input key expected got {prompt_input_keys}")
ValueError: One input key expected got ['knowledge', 'input']
### Description
As shown in my code above, I am trying to build a RAG application. In my prompt I use `knowledge` to represent the retrieved information, and I want to pass it together with the user input to the LLM, but I hit this error when the agent saves its outputs to memory. Why is this happening?
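The failure is in the memory's input-key selection, not in the prompt itself: when the memory has no explicit `input_key`, LangChain has to guess which chain input is "the" user input, and it refuses when more than one candidate remains. A stdlib sketch of that selection logic, mirroring the `langchain/memory/utils.py` behavior shown in the traceback; the usual fix is simply to construct the memory with `input_key="input"`:

```python
def get_prompt_input_key(inputs, memory_variables, input_key=None):
    """Pick the one input to pair with the output when saving context."""
    if input_key is not None:
        return input_key
    # everything that is not a memory variable (or "stop") is a candidate
    candidates = [k for k in inputs if k not in memory_variables and k != "stop"]
    if len(candidates) != 1:
        raise ValueError(f"One input key expected got {candidates}")
    return candidates[0]


inputs = {"input": "user question", "knowledge": "retrieved chunks"}
```

With `ConversationDBMemory(..., input_key="input")` the guessing is skipped, and `knowledge` can remain a second chain input.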
### System Info
langchain==0.2.0
langchain-community==0.2.1
langchain-core==0.2.3
langchain-openai==0.1.8
langchain-postgres==0.0.6
langchain-text-splitters==0.2.0
| How to make multiple inputs to a agent | https://api.github.com/repos/langchain-ai/langchain/issues/22427/comments | 0 | 2024-06-03T13:10:52Z | 2024-06-03T13:13:26Z | https://github.com/langchain-ai/langchain/issues/22427 | 2,331,123,659 | 22,427 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
# Define a callback that wants to access the token usage:
class LLMCallbackHandler(BaseCallbackHandler):
    def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
        super().on_llm_end(response, **kwargs)
        token_usage = response.llm_output["token_usage"]
        prompt_tokens = token_usage.get("prompt_tokens", 0)
        completion_tokens = token_usage.get("completion_tokens", 0)
        # Do something...


callbacks = [LLMCallbackHandler()]

# Define some LLM models that use this callback:
chatgpt = ChatOpenAI(
    model="gpt-3.5-turbo",
    callbacks=callbacks,
)
sonnet = BedrockChat(
    model_id="anthropic.claude-3-sonnet-20240229-v1:0",
    client=boto3.Session(region_name="us-east-1").client("bedrock-runtime"),
    callbacks=callbacks,
)

# Let's call the two models
gpt_response = chatgpt.invoke({"input": "Hello, how are you?"})
sonnet_response = sonnet.invoke({"input": "Hello, how are you?"})
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
The `_combine_llm_outputs()` implementations for different supported models hardcode different keys.
In this example, the token-usage key differs between
https://github.com/langchain-ai/langchain/blob/acaf214a4516a2ffbd2817f553f4d48e6a908695/libs/community/langchain_community/chat_models/bedrock.py#L321
and
https://github.com/langchain-ai/langchain/blob/acaf214a4516a2ffbd2817f553f4d48e6a908695/libs/partners/openai/langchain_openai/chat_models/base.py#L457
The outcome is that replacing one model with another is not transparent and can break downstream consumers such as token-usage monitoring.
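Until the keys are unified, a workaround sketch (assuming, as the report suggests, that only the top-level key differs — `token_usage` for OpenAI vs. `usage` for Bedrock) is to probe both keys inside the callback:

```python
def extract_token_usage(llm_output):
    # ChatOpenAI reports usage under "token_usage", BedrockChat under
    # "usage"; fall back to an empty dict so neither provider (nor a
    # missing llm_output) breaks the handler.
    llm_output = llm_output or {}
    usage = llm_output.get("token_usage") or llm_output.get("usage") or {}
    return usage.get("prompt_tokens", 0), usage.get("completion_tokens", 0)

print(extract_token_usage({"token_usage": {"prompt_tokens": 10, "completion_tokens": 3}}))  # (10, 3)
print(extract_token_usage({"usage": {"prompt_tokens": 7}}))  # (7, 0)
```

Calling `extract_token_usage(response.llm_output)` from `on_llm_end` then works with either provider; the inner key names are an assumption taken from the example above.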
### System Info
Appears in master | chatModels _combine_llm_outputs uses different hardcoded dict keys | https://api.github.com/repos/langchain-ai/langchain/issues/22426/comments | 0 | 2024-06-03T12:46:44Z | 2024-06-03T12:49:14Z | https://github.com/langchain-ai/langchain/issues/22426 | 2,331,068,107 | 22,426 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Error Message and Stack Trace (if applicable)
api-hub | INFO: Application startup complete.
api-hub | INFO: 172.18.0.1:49982 - "POST /agent/stream_log HTTP/1.1" 200 OK
api-hub | /usr/local/lib/python3.12/site-packages/langchain_core/_api/beta_decorator.py:87: LangChainBetaWarning: This API is in beta and may change in the future.
api-hub | warn_beta(
api-hub |
api-hub |
api-hub | > Entering new AgentExecutor chain...
api-hub | INFO 2024-06-03 07:01:17 at httpx ]> HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
api-hub |
api-hub | Invoking: `csv_qna` with `{'question': 'Find 4 golden keywords with the highest Search volume and lowest CPC', 'csv_file': 'https://jin.writerzen.dev/files/ws1/keyword_explorer.csv'}`
api-hub |
api-hub |
api-hub | INFO 2024-06-03 07:01:22 at httpx ]> HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
api-hub | ERROR: Exception in ASGI application
api-hub | Traceback (most recent call last):
api-hub | File "/usr/local/lib/python3.12/site-packages/sse_starlette/sse.py", line 269, in __call__
api-hub | await wrap(partial(self.listen_for_disconnect, receive))
api-hub | File "/usr/local/lib/python3.12/site-packages/sse_starlette/sse.py", line 258, in wrap
api-hub | await func()
api-hub | File "/usr/local/lib/python3.12/site-packages/sse_starlette/sse.py", line 215, in listen_for_disconnect
api-hub | message = await receive()
api-hub | ^^^^^^^^^^^^^^^
api-hub | File "/usr/local/lib/python3.12/site-packages/uvicorn/protocols/http/h11_impl.py", line 538, in receive
api-hub | await self.message_event.wait()
api-hub | File "/usr/local/lib/python3.12/asyncio/locks.py", line 212, in wait
api-hub | await fut
api-hub | asyncio.exceptions.CancelledError: Cancelled by cancel scope 7fed579989e0
api-hub |
api-hub | During handling of the above exception, another exception occurred:
api-hub |
api-hub | + Exception Group Traceback (most recent call last):
api-hub | | File "/usr/local/lib/python3.12/site-packages/uvicorn/protocols/http/h11_impl.py", line 408, in run_asgi
api-hub | | result = await app( # type: ignore[func-returns-value]
api-hub | | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
api-hub | | File "/usr/local/lib/python3.12/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
api-hub | | return await self.app(scope, receive, send)
api-hub | | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
api-hub | | File "/usr/local/lib/python3.12/site-packages/fastapi/applications.py", line 1054, in __call__
api-hub | | await super().__call__(scope, receive, send)
api-hub | | File "/usr/local/lib/python3.12/site-packages/starlette/applications.py", line 123, in __call__
api-hub | | await self.middleware_stack(scope, receive, send)
api-hub | | File "/usr/local/lib/python3.12/site-packages/starlette/middleware/errors.py", line 186, in __call__
api-hub | | raise exc
api-hub | | File "/usr/local/lib/python3.12/site-packages/starlette/middleware/errors.py", line 164, in __call__
api-hub | | await self.app(scope, receive, _send)
api-hub | | File "/usr/local/lib/python3.12/site-packages/starlette/middleware/exceptions.py", line 65, in __call__
api-hub | | await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
api-hub | | File "/usr/local/lib/python3.12/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
api-hub | | raise exc
api-hub | | File "/usr/local/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
api-hub | | await app(scope, receive, sender)
api-hub | | File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 756, in __call__
api-hub | | await self.middleware_stack(scope, receive, send)
api-hub | | File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 776, in app
api-hub | | await route.handle(scope, receive, send)
api-hub | | File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 297, in handle
api-hub | | await self.app(scope, receive, send)
api-hub | | File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 77, in app
api-hub | | await wrap_app_handling_exceptions(app, request)(scope, receive, send)
api-hub | | File "/usr/local/lib/python3.12/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
api-hub | | raise exc
api-hub | | File "/usr/local/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
api-hub | | await app(scope, receive, sender)
api-hub | | File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 75, in app
api-hub | | await response(scope, receive, send)
api-hub | | File "/usr/local/lib/python3.12/site-packages/sse_starlette/sse.py", line 255, in __call__
api-hub | | async with anyio.create_task_group() as task_group:
api-hub | | File "/usr/local/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 680, in __aexit__
api-hub | | raise BaseExceptionGroup(
api-hub | | ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
api-hub | +-+---------------- 1 ----------------
api-hub | | Traceback (most recent call last):
api-hub | | File "/usr/local/lib/python3.12/site-packages/langserve/serialization.py", line 90, in default
api-hub | | return super().default(obj)
api-hub | | ^^^^^^^
api-hub | | RuntimeError: super(): __class__ cell not found
api-hub | |
api-hub | | The above exception was the direct cause of the following exception:
api-hub | |
api-hub | | Traceback (most recent call last):
api-hub | | File "/usr/local/lib/python3.12/site-packages/sse_starlette/sse.py", line 258, in wrap
api-hub | | await func()
api-hub | | File "/usr/local/lib/python3.12/site-packages/sse_starlette/sse.py", line 245, in stream_response
api-hub | | async for data in self.body_iterator:
api-hub | | File "/usr/local/lib/python3.12/site-packages/langserve/api_handler.py", line 1243, in _stream_log
api-hub | | "data": self._serializer.dumps(data).decode("utf-8"),
api-hub | | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
api-hub | | File "/usr/local/lib/python3.12/site-packages/langserve/serialization.py", line 168, in dumps
api-hub | | return orjson.dumps(obj, default=default)
api-hub | | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
api-hub | | TypeError: Type is not JSON serializable: DataFrame
api-hub | +------------------------------------
api-hub | INFO 2024-06-03 07:01:24 at httpx ]> HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
api-hub | result='| | Keyword | Volume | CPC | Word count | PPC Competition | Trending |\n|-----:|:-------------------|---------:|------:|:-------------|:------------------|:-----------|\n| 4985 | ="purina pro plan" | 165000 | 2.31 | ="3" | ="High" | ="false" |\n| 0 | ="dog food" | 165000 | 11.1 | ="2" | ="High" | ="false" |\n| 1 | ="dog food a" | 165000 | 11.1 | ="3" | ="High" | ="false" |\n| 3 | ="dog food victor" | 74000 | 1.2 | ="3" | ="High" | ="false" |'The 4 golden keywords with the highest search volume and lowest CPC from the provided data are:
api-hub |
api-hub | 1. Keyword: "dog food victor"
api-hub | - Search Volume: 74,000
api-hub | - CPC: $1.20
api-hub |
api-hub | 2. Keyword: "purina pro plan"
api-hub | - Search Volume: 165,000
api-hub | - CPC: $2.31
api-hub |
api-hub | 3. Keyword: "dog food"
api-hub | - Search Volume: 165,000
api-hub | - CPC: $11.10
api-hub |
api-hub | 4. Keyword: "dog food a"
api-hub | - Search Volume: 165,000
api-hub | - CPC: $11.10
api-hub |
api-hub | These are the 4 keywords that meet the criteria of having the highest search volume and lowest CPC.
api-hub |
api-hub | > Finished chain.
### Description
I am trying to build a tool that can answer questions over a CSV file, following the LangChain v0.2 documentation at `https://python.langchain.com/v0.1/docs/use_cases/sql/csv/`. When the chain runs I get this error; the result still appears in the logs afterwards, but the playground does not show it. Could someone help me fix it?
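One way to sidestep the serialization failure, assuming the tool (the `csv_qna` tool in the log) currently returns a pandas `DataFrame`, is to coerce the result to a string before it reaches LangServe's orjson serializer. A minimal sketch (duck-typed so it runs without pandas installed):

```python
def serialize_tool_result(result):
    """Coerce a tool's return value into something JSON-serializable.

    LangServe serializes streamed run logs with orjson, which raises
    'Type is not JSON serializable: DataFrame' for pandas objects, so
    return markdown text instead of the DataFrame itself.
    """
    if isinstance(result, str):
        return result
    to_markdown = getattr(result, "to_markdown", None)  # duck-type pandas
    if callable(to_markdown):
        return to_markdown()
    return str(result)
```

With this, the tool would end in `return serialize_tool_result(df)` rather than `return df` — treat the exact tool shape as an assumption, since the tool code is not shown in the report.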
### System Info
langchain-pinecone = "^0.1.1"
langserve = {extras = ["server"], version = ">=0.0.30"}
langchain-openai = "^0.1.1"
langchain-anthropic = "^0.1.7"
langchain-google-genai = "^1.0.1"
langchain-community = "^0.2.1"
langchain-experimental = "^0.0.59"
langchain = "0.2.1" | TypeError: Type is not JSON serializable: DataFrame on question with CSV Langchain Ver2 | https://api.github.com/repos/langchain-ai/langchain/issues/22415/comments | 0 | 2024-06-03T07:21:54Z | 2024-06-05T02:28:24Z | https://github.com/langchain-ai/langchain/issues/22415 | 2,330,372,065 | 22,415 |
[
"langchain-ai",
"langchain"
] | ### URL
https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html
It shows:
```
[If you’d like to use LangSmith, uncomment the below:](https://python.langchain.com/docs/use_cases/tool_use/human_in_the_loop/)
[os.environ[“LANGCHAIN_TRACING_V2”] = “true”](https://python.langchain.com/docs/use_cases/tool_use/tool_error_handling/)
```
and those are not related with the section
### Idea or request for content:
In the Examples using Runnable[¶](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain-core-runnables-base-runnable) section, the links text there should be
```
- Human in the Loop
- Tool Error Handling
``` | DOC: Examples using Runnable section links are not correct | https://api.github.com/repos/langchain-ai/langchain/issues/22414/comments | 0 | 2024-06-03T06:38:37Z | 2024-06-03T15:46:51Z | https://github.com/langchain-ai/langchain/issues/22414 | 2,330,297,740 | 22,414 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
prefix = """
Task:Generate Cypher statement to query a graph database.
Instructions:
Use only the provided relationship types and properties in the schema.
Do not use any other relationship types or properties that are not provided.
Note: Do not include any explanations or apologies in your responses.
Do not respond to any questions that might ask anything else than for you to construct a Cypher statement.
Do not include any text except the generated Cypher statement.
context:
{context}
Examples: Here are a few examples of generated Cypher statements for particular questions:
"""
FEW_SHOT_PROMPT = FewShotPromptTemplate(
example_selector = example_selector,
example_prompt = example_prompt,
prefix=prefix,
suffix="Question: {question}, \nCypher Query: ",
input_variables =["question","query", "context"],
)
graph_qa = GraphCypherQAChain.from_llm(
cypher_llm = llm3, #should use gpt-4 for production
qa_llm = llm3,
graph=graph,
verbose=True,
cypher_prompt = FEW_SHOT_PROMPT,
)
input_variables = {
"question": args['question'],
"context": "NA",
"query": args['question']
}
graph_qa.invoke(input_variables)
```
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[41], line 1
----> 1 graph_qa.invoke(input_variables)
File ~/anaconda3/lib/python3.11/site-packages/langchain/chains/base.py:163, in Chain.invoke(self, input, config, **kwargs)
161 except BaseException as e:
162 run_manager.on_chain_error(e)
--> 163 raise e
164 run_manager.on_chain_end(outputs)
166 if include_run_info:
File ~/anaconda3/lib/python3.11/site-packages/langchain/chains/base.py:153, in Chain.invoke(self, input, config, **kwargs)
150 try:
151 self._validate_inputs(inputs)
152 outputs = (
--> 153 self._call(inputs, run_manager=run_manager)
154 if new_arg_supported
155 else self._call(inputs)
156 )
158 final_outputs: Dict[str, Any] = self.prep_outputs(
159 inputs, outputs, return_only_outputs
160 )
161 except BaseException as e:
File ~/anaconda3/lib/python3.11/site-packages/langchain/chains/graph_qa/cypher.py:247, in GraphCypherQAChain._call(self, inputs, run_manager)
243 question = inputs[self.input_key]
245 intermediate_steps: List = []
--> 247 generated_cypher = self.cypher_generation_chain.run(
248 {"question": question, "schema": self.graph_schema}, callbacks=callbacks
249 )
251 # Extract Cypher code if it is wrapped in backticks
252 generated_cypher = extract_cypher(generated_cypher)
File ~/anaconda3/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:148, in deprecated.<locals>.deprecate.<locals>.warning_emitting_wrapper(*args, **kwargs)
146 warned = True
147 emit_warning()
--> 148 return wrapped(*args, **kwargs)
File ~/anaconda3/lib/python3.11/site-packages/langchain/chains/base.py:595, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
593 if len(args) != 1:
594 raise ValueError("`run` supports only one positional argument.")
--> 595 return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
596 _output_key
597 ]
599 if kwargs and not args:
600 return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
601 _output_key
602 ]
File ~/anaconda3/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:148, in deprecated.<locals>.deprecate.<locals>.warning_emitting_wrapper(*args, **kwargs)
146 warned = True
147 emit_warning()
--> 148 return wrapped(*args, **kwargs)
File ~/anaconda3/lib/python3.11/site-packages/langchain/chains/base.py:378, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
346 """Execute the chain.
347
348 Args:
(...)
369 `Chain.output_keys`.
370 """
371 config = {
372 "callbacks": callbacks,
373 "tags": tags,
374 "metadata": metadata,
375 "run_name": run_name,
376 }
--> 378 return self.invoke(
379 inputs,
380 cast(RunnableConfig, {k: v for k, v in config.items() if v is not None}),
381 return_only_outputs=return_only_outputs,
382 include_run_info=include_run_info,
383 )
File ~/anaconda3/lib/python3.11/site-packages/langchain/chains/base.py:163, in Chain.invoke(self, input, config, **kwargs)
161 except BaseException as e:
162 run_manager.on_chain_error(e)
--> 163 raise e
164 run_manager.on_chain_end(outputs)
166 if include_run_info:
File ~/anaconda3/lib/python3.11/site-packages/langchain/chains/base.py:151, in Chain.invoke(self, input, config, **kwargs)
145 run_manager = callback_manager.on_chain_start(
146 dumpd(self),
147 inputs,
148 name=run_name,
149 )
150 try:
--> 151 self._validate_inputs(inputs)
152 outputs = (
153 self._call(inputs, run_manager=run_manager)
154 if new_arg_supported
155 else self._call(inputs)
156 )
158 final_outputs: Dict[str, Any] = self.prep_outputs(
159 inputs, outputs, return_only_outputs
160 )
File ~/anaconda3/lib/python3.11/site-packages/langchain/chains/base.py:279, in Chain._validate_inputs(self, inputs)
277 missing_keys = set(self.input_keys).difference(inputs)
278 if missing_keys:
--> 279 raise ValueError(f"Missing some input keys: {missing_keys}")
ValueError: Missing some input keys: {'context'}
### Description
Hello,
I cannot seem to invoke GraphCypherQAChain.from_llm() in a way that formats the FewShotPromptTemplate correctly.
In particular, I introduced a variable 'context' in the template, which I intend to supply at invoke time.
However, even though I pass 'context' at invoke time, the FewShotPromptTemplate does not seem to receive this variable.
I am confused about how arguments are passed to the prompt versus the chain.
It seems the only argument the QA chain accepts is 'query', i.e. graph_qa.invoke({'query': 'user question'}).
In that case, we cannot really have a dynamic few-shot prompt template.
Please point me in the right direction.
Thank you
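From the traceback, `GraphCypherQAChain` hands the cypher prompt only `question` and `schema`, so any extra variable such as `context` must be bound before the chain runs. `FewShotPromptTemplate` inherits `.partial()` from `BasePromptTemplate`; a minimal stand-in (not the library class — just an illustration of what partial binding does) looks like this:

```python
class MiniPrompt:
    # Stand-in for BasePromptTemplate.partial(): pre-binds some
    # variables so later callers only supply the remainder.
    def __init__(self, template, input_variables, partials=None):
        self.template = template
        self.input_variables = input_variables
        self.partials = partials or {}

    def partial(self, **kwargs):
        bound = {**self.partials, **kwargs}
        remaining = [v for v in self.input_variables if v not in kwargs]
        return MiniPrompt(self.template, remaining, bound)

    def format(self, **kwargs):
        return self.template.format(**{**self.partials, **kwargs})

p = MiniPrompt("context:\n{context}\nQuestion: {question}", ["context", "question"])
bound = p.partial(context="NA")
print(bound.input_variables)  # ['question']
print(bound.format(question="Who directed Inception?"))
```

If the installed API matches, passing `cypher_prompt=FEW_SHOT_PROMPT.partial(context=retrieved_context)` when constructing the chain should leave only `question` (plus the chain-supplied `schema`) to fill at run time — treat the exact call as an assumption to verify against your LangChain version.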
### System Info
langchain==0.1.20
langchain-community==0.0.38
langchain-core==0.2.3
langchain-openai==0.1.6
langchain-text-splitters==0.0.1
langchainhub==0.1.15 | How to invoke GraphCypherQAChain.from_llm() with multiple variables | https://api.github.com/repos/langchain-ai/langchain/issues/22413/comments | 3 | 2024-06-03T05:53:51Z | 2024-06-13T15:59:25Z | https://github.com/langchain-ai/langchain/issues/22413 | 2,330,234,431 | 22,413 |