issue_owner_repo listlengths 2 2 | issue_body stringlengths 0 261k ⌀ | issue_title stringlengths 1 925 | issue_comments_url stringlengths 56 81 | issue_comments_count int64 0 2.5k | issue_created_at stringlengths 20 20 | issue_updated_at stringlengths 20 20 | issue_html_url stringlengths 37 62 | issue_github_id int64 387k 2.46B | issue_number int64 1 127k |
|---|---|---|---|---|---|---|---|---|---|
[
"langchain-ai",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
An example is missing for Snowflake showing what good table and column descriptions look like when primary and foreign keys are not defined.
### Idea or request for content:
How are the table and column descriptions fed to SQLDatabaseChain for Snowflake? Is it only through DDL, or is there another way? | DOC: table description for SQLDatabaseChain for snowflake | https://api.github.com/repos/langchain-ai/langchain/issues/20626/comments | 17 | 2024-04-18T22:47:27Z | 2024-07-28T16:07:09Z | https://github.com/langchain-ai/langchain/issues/20626 | 2,251,711,415 | 20,626 |
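One possible answer, sketched with placeholders: besides raw DDL, `SQLDatabase.from_uri` accepts a `custom_table_info` mapping whose text is injected into the prompt verbatim, so hand-written table and column descriptions can be supplied even when no PK/FK constraints exist. The table name, columns, and connection URI below are illustrative assumptions, not taken from the issue.

```python
# Sketch (table/column names and URI are placeholders): hand-written
# descriptions passed via custom_table_info replace the auto-generated DDL.
custom_table_info = {
    "orders": (
        "Table ORDERS: one row per customer order.\n"
        "Columns:\n"
        "  ORDER_ID  VARCHAR -- unique order identifier (no PK constraint in Snowflake)\n"
        "  CUST_ID   VARCHAR -- matches CUSTOMERS.CUST_ID by convention (no FK)\n"
        "  AMOUNT    NUMBER  -- order total in USD"
    )
}

# from langchain_community.utilities import SQLDatabase
# db = SQLDatabase.from_uri(
#     "snowflake://<user>:<pass>@<account>/<db>/<schema>",
#     custom_table_info=custom_table_info,
# )
```

The same `db` object can then be handed to `SQLDatabaseChain` as usual.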
[
"langchain-ai",
"langchain"
] | ### Discussed in https://github.com/langchain-ai/langchain/discussions/20599
<div type='discussions-op-text'>
<sup>Originally posted by **Martyniqo** April 18, 2024</sup>
### Checked
- [X] I searched existing ideas and did not find a similar one
- [X] I added a very descriptive title
- [X] I've clearly described the feature request and motivation for it
### Feature request
I'm using Claude 3 Sonnet on Amazon Bedrock and storing chat history in DynamoDB.
However, LangChain does not support **storing images in the chat history** and there is no way to add them as simply as the text itself: https://python.langchain.com/docs/use_cases/question_answering/chat_history/
The following code completely ignores the uploaded image in the chat history and saves only the text from the user's question and the model's answer:
```python
human_message = []
for attachment_uri in self.request.attachments:
    s3_bucket_name, s3_key = attachment_uri.replace("s3://", "").split("/", 1)
    encoded_image = load_image_from_s3_and_encode(s3_bucket_name, s3_key)
    file_extension = Path(s3_key).suffix
    mime_type = get_mime_type(file_extension)
    if encoded_image:
        logger.debug("Image detected")
        image_message = {
            "type": "image_url",
            "image_url": {
                "url": f"data:{mime_type};base64,{encoded_image}",
            },
        }
        logger.debug(image_message)
        human_message.append(image_message)

system_message = """You are a chat assistant, friendly and polite to the user.
You use history to get additional context. History might be empty, in case of a new conversation.
"""
human_message.append({"type": "text", "text": "The user question is <question>{question}</question>."})

template = [
    ("system", system_message),
    MessagesPlaceholder(variable_name="history"),
    ("human", human_message),
]
prompt = ChatPromptTemplate.from_messages(template)
chain = prompt | bedrock_chat | StrOutputParser()

chain_with_history = RunnableWithMessageHistory(
    chain,
    lambda session_id: DynamoDBChatMessageHistory(
        table_name=DYNAMODB_TABLE_NAME, session_id=session_id
    ),
    input_messages_key="question",
    history_messages_key="history",
)

config = {"configurable": {"session_id": self.request.session_id}}
response = chain_with_history.invoke({"question": "What's on the previous image?"}, config=config)
```
It will probably be necessary to store the images somewhere else and keep only references to them in DynamoDB.
Has anyone had a similar problem before and has "an easy" solution for it?
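One hedged sketch of the reference-based approach suggested above (the helper names and the `image_ref` key are made up for illustration): store only a pointer in the DynamoDB history and re-expand it into multimodal message content when the prompt is built.

```python
def to_history_entry(text, s3_uri=None):
    """Serialize one human turn; large image bytes stay in S3, not DynamoDB."""
    entry = {"type": "human", "text": text}
    if s3_uri:
        entry["image_ref"] = s3_uri  # e.g. "s3://bucket/key.png"
    return entry

def to_prompt_content(entry, load_and_encode):
    """Re-expand a stored entry into multimodal message content."""
    content = [{"type": "text", "text": entry["text"]}]
    if "image_ref" in entry:
        b64 = load_and_encode(entry["image_ref"])  # caller-supplied S3 loader
        content.append({
            "type": "image_url",
            "image_url": {"url": f"data:image/png;base64,{b64}"},
        })
    return content
```

`load_and_encode` would be the existing `load_image_from_s3_and_encode` wrapper; this keeps DynamoDB items small while letting the model see the image again.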
### Motivation
The model doesn't save the image in chat history so doesn't know about which image I'm asking.
### Proposal (If applicable)
_No response_</div> | Support for adding images to the chat history (Claude 3 Sonnet, Bedrock) | https://api.github.com/repos/langchain-ai/langchain/issues/20623/comments | 3 | 2024-04-18T21:06:28Z | 2024-07-26T16:07:27Z | https://github.com/langchain-ai/langchain/issues/20623 | 2,251,562,359 | 20,623 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
Please support https://yandex.cloud/en/docs/yandexgpt/operations/disable-logging
`YandexGPT API logs all request data by default. If you provide personal data, confidential information, or any kind of sensitive information in your requests, disable logging. To do this, add x-data-logging-enabled: false to the header of a REST API request or gRPC call metadata. Requests transmitted with logging disabled will not be saved on Yandex Cloud servers.`
cc @tyumentsev4
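For reference, the raw requirement is just an extra header on each REST call; a minimal sketch (the API key is a placeholder, and the endpoint URL should be checked against the Yandex Cloud docs):

```python
# Disable server-side request logging per the Yandex Cloud docs by adding
# the x-data-logging-enabled header to every YandexGPT REST call.
headers = {
    "Authorization": "Api-Key <YOUR_API_KEY>",  # placeholder
    "x-data-logging-enabled": "false",          # requests will not be stored
}

# import requests
# requests.post(
#     "https://llm.api.cloud.yandex.net/foundationModels/v1/completion",
#     headers=headers, json=payload,
# )
```

The community wrapper would need to expose a way to merge such headers (or gRPC metadata) into its outgoing requests.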
| Community: YandexGPT pass x-data-logging-enabled:false | https://api.github.com/repos/langchain-ai/langchain/issues/20622/comments | 0 | 2024-04-18T21:05:26Z | 2024-07-25T16:09:18Z | https://github.com/langchain-ai/langchain/issues/20622 | 2,251,560,129 | 20,622 |
[
"langchain-ai",
"langchain"
We need to investigate whether we have an issue with the Ollama integration, and if so, why.
### Discussed in https://github.com/langchain-ai/langchain/discussions/18515
<div type='discussions-op-text'>
<sup>Originally posted by **gosforth** March 4, 2024</sup>
I'm playing with Langchain and Ollama. My source text is 90 lines poem (each line max 50 characters).
First I load it into vector db (Chroma):
```
from langchain_community.llms import Ollama
from langchain.chains import RetrievalQA
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.document_loaders import TextLoader
from langchain_community.vectorstores import Chroma
from langchain_text_splitters import CharacterTextSplitter
# load the document and split it into chunks
loader = TextLoader("c:/test/some_source.txt", encoding="utf8")
documents = loader.load()
# split it into chunks
text_splitter = CharacterTextSplitter(chunk_size=2500, chunk_overlap=0, separator=".")
docs = text_splitter.split_documents(documents)
# Create Ollama embeddings and vector store
embeddings = OllamaEmbeddings(model="mistral")
# load it into Chroma
db = Chroma.from_documents(docs, embeddings, persist_directory="c:/test/Ollama/RAG/data")
# save db
db.persist()
```
Execution time is about 25 seconds. Why so long?(!) For instance, generating embeddings with SBERT is much faster.
Then I use these vectors with Ollama model:
```
from langchain_community.llms import Ollama
from langchain.chains import RetrievalQA
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import Chroma
# reset DB variable
db=None
embeddings = OllamaEmbeddings(model="mistral")
# read from Chroma
db = Chroma(persist_directory="c:/test/Ollama/RAG/data", embedding_function=embeddings)
llm = Ollama(base_url='http://localhost:11434', model="mistral", temperature=0)
qa_chain = RetrievalQA.from_chain_type(
    llm,
    retriever=db.as_retriever(search_type="similarity", search_kwargs={"k": 2}),
)
question = "Here comes the question text?"
result = qa_chain.invoke({"query": question})
result["result"]
print(result)
# delete collection
db.delete_collection()
```
Execution time is... 26 seconds. Huge amount of time (really short text).
My hardware: Ryzen 7 5700x, 48GB RAM, gtx 1050ti
I tried different settings for chunk size, separator. Differences are trivial. Is there any trick I can speed it up?
Looks like GPU load is at most 50%, CPU similar, and RAM is practically not used.
Something wrong with the code?
Any suggestion appreciated,
Best
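To narrow down where the ~25 seconds actually goes (embedding, retrieval, or generation), a timing sketch like the following may help; `timed` is a made-up helper, and the commented lines stand in for the real calls above.

```python
import time

def timed(label, fn, *args):
    """Run fn(*args), print elapsed wall time, and return its result."""
    t0 = time.perf_counter()
    out = fn(*args)
    print(f"{label}: {time.perf_counter() - t0:.2f}s")
    return out

# vecs   = timed("embed", embeddings.embed_documents, [d.page_content for d in docs])
# hits   = timed("retrieve", db.similarity_search, question, 2)
# answer = timed("generate", qa_chain.invoke, {"query": question})
```

If almost all the time is in the embed/generate stages, the bottleneck is the Ollama model itself (e.g. GPU offload settings), not LangChain or Chroma.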
</div> | Why is ollama running slowly? | https://api.github.com/repos/langchain-ai/langchain/issues/20621/comments | 8 | 2024-04-18T20:57:18Z | 2024-08-06T16:07:20Z | https://github.com/langchain-ai/langchain/issues/20621 | 2,251,548,067 | 20,621 |
[
"langchain-ai",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
# LangChain arguments to the tool are not being passed correctly
### Error
I'm having this error with all tools: basically, the LLM is unable to pass the full set of arguments to the tools. In my case only `a` is being passed.
Traceback (most recent call last):
File "d:\chatbots\360 agent - Jarvis\main.py", line 47, in <module>
main()
File "d:\chatbots\360 agent - Jarvis\main.py", line 45, in main
chain.invoke("What is 3 * 12?")
File "C:\Users\Dev\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\runnables\base.py", line 2075, in invoke
input = step.invoke(
^^^^^^^^^^^^
File "C:\Users\Dev\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\runnables\base.py", line 3523, in invoke
return self._call_with_config(
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Dev\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\runnables\base.py", line 1262, in _call_with_config
context.run(
File "C:\Users\Dev\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\runnables\config.py", line 326, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Dev\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\runnables\base.py", line 3397, in _invoke
output = call_func_with_variable_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Dev\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\runnables\config.py", line 326, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File "d:\chatbots\360 agent - Jarvis\main.py", line 39, in call_tools
tool_call["output"] = tool_map[tool_call["function"]["name"]].invoke(tool_call["function"]["arguments"])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Dev\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\tools.py", line 240, in invoke
return self.run(
^^^^^^^^^
File "C:\Users\Dev\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\tools.py", line 382, in run
raise e
File "C:\Users\Dev\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\tools.py", line 373, in run
parsed_input = self._parse_input(tool_input)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Dev\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\tools.py", line 276, in _parse_input
input_args.validate({key_: tool_input})
File "C:\Users\Dev\AppData\Local\Programs\Python\Python312\Lib\site-packages\pydantic\v1\main.py", line 711, in validate
return cls(**value)
^^^^^^^^^^^^
File "C:\Users\Dev\AppData\Local\Programs\Python\Python312\Lib\site-packages\pydantic\v1\main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for multiplySchema
b
field required (type=value_error.missing)
### My code:
```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langchain_core.messages import AIMessage
from langchain_core.runnables import (
    Runnable,
    RunnableLambda,
    RunnableMap,
    RunnablePassthrough,
)
import json


@tool
def add(a: int, b: int) -> int:
    """Adds a and b."""
    return a + b


@tool
def multiply(a, b):
    """Multiplies a and b."""
    print(b)
    return b


@tool
def exponentiate(base, exponent):
    """Exponentiate the base to the exponent power."""
    return base**exponent


def main():
    llm = ChatOpenAI(model="gpt-3.5-turbo-0613", openai_api_key="your_key")
    tools = [multiply, exponentiate, add]
    llm_with_tools = llm.bind_tools(tools)

    def call_tools(msg: AIMessage) -> Runnable:
        """Simple sequential tool calling helper."""
        tool_map = {tool.name: tool for tool in tools}
        tool_calls = msg.additional_kwargs['tool_calls']
        for tool_call in tool_calls:
            print(tool_map[tool_call["function"]["name"]], "selected tool")
            tool_call["output"] = tool_map[tool_call["function"]["name"]].invoke(tool_call["function"]["arguments"])
        print(tool_calls, ':tool_calls')
        return tool_calls

    chain = llm_with_tools | call_tools
    chain.invoke("What is 3 * 12?")


if __name__ == '__main__':
    main()
```
I would also love it if you could explain the code a bit; the LangChain documentation explains very little.
Docs that I followed [https://python.langchain.com/docs/use_cases/tool_use/multiple_tools/](https://python.langchain.com/docs/use_cases/tool_use/multiple_tools/)
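A likely explanation and fix sketch (`invoke_tool` is a hypothetical helper, not LangChain API): with the OpenAI API, `function.arguments` arrives as a JSON *string* such as `'{"a": 3, "b": 12}'`; invoking a tool with the raw string can make the parser bind it to a single field (hence only `a` arriving), so decode it into a dict first.

```python
import json

def invoke_tool(tool_map, tool_call):
    """Decode JSON-string arguments before invoking the tool (sketch)."""
    name = tool_call["function"]["name"]
    args = tool_call["function"]["arguments"]
    if isinstance(args, str):  # e.g. '{"a": 3, "b": 12}'
        args = json.loads(args)
    return tool_map[name].invoke(args)
```

Inside `call_tools`, the line that sets `tool_call["output"]` would use this helper instead of passing the raw string.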
### Idea or request for content:
You should also provide JSON examples and step-by-step code showing how the agent, chatbot, or functions work. | DOC: <Please write a comprehensive title after the 'DOC: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/20619/comments | 1 | 2024-04-18T18:48:04Z | 2024-07-31T16:07:30Z | https://github.com/langchain-ai/langchain/issues/20619 | 2,251,335,105 | 20,619 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
## Code:
```python
from langchain_mistralai import MistralAIEmbeddings
import assistant.settings as settings
def getMistralEmbeddings():
    # well-defined variable from env; works on my personal machine as of publishing this issue
    return MistralAIEmbeddings(mistral_api_key=settings.MISTRAL_API_KEY)
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/app/assistant_api.py", line 37, in <module>
retriever = obtain_full_qdrant_tmdb()
File "/app/assistant/rag/retrievers/qdrant_connector.py", line 30, in obtain_full_qdrant_tmdb
embeddings = getMistralEmbeddings()
File "/app/assistant/rag/embeddings/mistral_embeddings.py", line 5, in getMistralEmbeddings
return MistralAIEmbeddings(mistral_api_key=settings.MISTRAL_API_KEY)
File "/usr/local/lib/python3.10/site-packages/pydantic/v1/main.py", line 339, in __init__
values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
File "/usr/local/lib/python3.10/site-packages/pydantic/v1/main.py", line 1100, in validate_model
values = validator(cls_, values)
File "/usr/local/lib/python3.10/site-packages/langchain_mistralai/embeddings.py", line 86, in validate_environment
values["tokenizer"] = Tokenizer.from_pretrained(
File "/usr/local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 119, in _inner_fn
return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1403, in hf_hub_download
raise head_call_error
File "/usr/local/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1261, in hf_hub_download
metadata = get_hf_file_metadata(
File "/usr/local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 119, in _inner_fn
return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1674, in get_hf_file_metadata
r = _request_wrapper(
File "/usr/local/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 369, in _request_wrapper
response = _request_wrapper(
File "/usr/local/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 393, in _request_wrapper
hf_raise_for_status(response)
File "/usr/local/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 321, in hf_raise_for_status
raise GatedRepoError(message, response) from e
huggingface_hub.utils._errors.GatedRepoError: 401 Client Error. (Request ID: Root=1-662165b4-2224fae43a813b360dc7b222;20b14ba7-ef96-4d6a-8bef-1fa42c4f9291)
Cannot access gated repo for url https://huggingface.co/mistralai/Mixtral-8x7B-v0.1/resolve/main/tokenizer.json.
Repo model mistralai/Mixtral-8x7B-v0.1 is gated. You must be authenticated to access it.
### Description
This error and stack trace have occurred since this afternoon when deployed on a Kubernetes server.
It seems to me it's a bug because I cannot reproduce the error on my personal machine, even after I deleted the virtual environment and the __pycache__ folders and reinstalled everything from requirements.txt.
I know I should authenticate, but firstly, why, and secondly, how?
I came across some solutions where you put your Hugging Face token in the request header, but I don't really know where to inject such a token when using langchain-mistralai.
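A possible workaround sketch, assuming the 401 comes from the now-gated `mistralai/Mixtral-8x7B-v0.1` tokenizer repo: make a Hugging Face token visible to `huggingface_hub` before the embeddings class pulls the tokenizer. `HF_TOKEN` is honored by recent `huggingface_hub` versions, and `huggingface_hub.login(token=...)` is the programmatic alternative; the token value below is a placeholder.

```python
import os

# Assumption: you have a valid Hugging Face token with access to the gated repo,
# and it is set before MistralAIEmbeddings is constructed.
os.environ.setdefault("HF_TOKEN", "<your-hf-token>")

# from huggingface_hub import login
# login(token=os.environ["HF_TOKEN"])
# embeddings = MistralAIEmbeddings(mistral_api_key=settings.MISTRAL_API_KEY)
```

On Kubernetes this would typically be injected as a Secret-backed environment variable rather than hard-coded.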
### System Info
aiohttp>=3.9.3
aiosignal>=1.3.1
annotated-types>=0.6.0
anyio>=4.3.0
async-timeout>=4.0.3
attrs>=23.2.0
certifi>=2024.2.2
charset-normalizer>=3.3.2
click>=8.1.7
dataclasses-json>=0.6.4
exceptiongroup>=1.2.0
faiss-cpu>=1.8.0
fastapi>=0.110.1
filelock>=3.13.4
frozenlist>=1.4.1
fsspec>=2024.3.1
greenlet>=3.0.3
grpcio>=1.62.1
grpcio-tools>=1.62.1
h11>=0.14.0
h2>=4.1.0
hpack>=4.0.0
httpcore>=1.0.5
httpx>=0.25.2
httpx-sse>=0.4.0
huggingface-hub>=0.22.2
hyperframe>=6.0.1
idna>=3.6
Jinja2>=3.1.3
joblib>=1.4.0
jsonpatch>=1.33
jsonpointer>=2.4
langchain>=0.1.15
langchain-community>=0.0.32
langchain-core>=0.1.41
langchain-mistralai>=0.1.1
langchain-text-splitters>=0.0.1
langsmith>=0.1.43
MarkupSafe>=2.1.5
marshmallow>=3.21.1
mistralai>=0.1.8
mpmath>=1.3.0
multidict>=6.0.5
mypy-extensions>=1.0.0
networkx>=3.3
numpy>=1.26.4
nvidia-cublas-cu12>=12.1.3.1
nvidia-cuda-cupti-cu12>=12.1.105
nvidia-cuda-nvrtc-cu12>=12.1.105
nvidia-cuda-runtime-cu12>=12.1.105
nvidia-cudnn-cu12>=8.9.2.26
nvidia-cufft-cu12>=11.0.2.54
nvidia-curand-cu12>=10.3.2.106
nvidia-cusolver-cu12>=11.4.5.107
nvidia-cusparse-cu12>=12.1.0.106
nvidia-nccl-cu12>=2.19.3
nvidia-nvjitlink-cu12>=12.4.127
nvidia-nvtx-cu12>=12.1.105
orjson>=3.10.0
packaging>=23.2
pandas>=2.2.1
pillow>=10.3.0
portalocker>=2.8.2
protobuf>=4.25.3
pyarrow>=15.0.2
pydantic>=2.6.4
pydantic_core>=2.16.3
python-dateutil>=2.9.0.post0
python-dotenv>=1.0.1
pytz>=2024.1
PyYAML>=6.0.1
qdrant-client>=1.8.2
redis>=5.0.3
regex>=2023.12.25
requests>=2.31.0
safetensors>=0.4.2
scikit-learn>=1.4.2
scipy>=1.13.0
sentence-transformers>=2.6.1
six>=1.16.0
sniffio>=1.3.1
SQLAlchemy>=2.0.29
starlette>=0.37.2
sympy>=1.12
tenacity>=8.2.3
threadpoolctl>=3.4.0
tokenizers>=0.15.2
torch>=2.2.2
tqdm>=4.66.2
transformers>=4.39.3
triton>=2.2.0
typing-inspect>=0.9.0
typing_extensions>=4.11.0
tzdata>=2024.1
urllib3>=2.2.1
uvicorn>=0.29.0
yarl>=1.9.4
| langchain-mistralai cannot pull tokenizer from huggingface 401 | https://api.github.com/repos/langchain-ai/langchain/issues/20618/comments | 9 | 2024-04-18T18:46:40Z | 2024-07-19T12:47:28Z | https://github.com/langchain-ai/langchain/issues/20618 | 2,251,331,965 | 20,618 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="mistral-7b-instruct-v0.2.Q4_K_M.gguf",
    chat_format="llama-2",
    n_ctx=8192,
    n_threads=6,
    n_gpu_layers=-1,
    max_tokens=8192,
)

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.prompts.prompt import PromptTemplate
from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.output_parsers import StrOutputParser
from langchain_community.graphs import Neo4jGraph
from langchain.document_loaders import WikipediaLoader
from langchain.text_splitter import TokenTextSplitter
from langchain_experimental.graph_transformers import LLMGraphTransformer
from neo4j import GraphDatabase
from langchain_community.vectorstores import Neo4jVector
from langchain_community.vectorstores.neo4j_vector import remove_lucene_chars
from langchain_core.runnables import ConfigurableField, RunnableParallel, RunnablePassthrough

# ...

text_splitter = TokenTextSplitter(chunk_size=512, chunk_overlap=24)
documents = text_splitter.split_documents(raw_documents[:3])

llm_transformer = LLMGraphTransformer(llm=llm)
graph_documents = llm_transformer.convert_to_graph_documents(documents)
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
  File "1.py", line 68, in <module>
    llm_transformer = LLMGraphTransformer(llm=llm)
  File "C:\Users\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain_experimental\graph_transformers\llm.py", line 216, in __init__
    structured_llm = llm.with_structured_output(schema)
  File "C:\Users\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain_core\_api\beta_decorator.py", line 110, in warning_emitting_wrapper
    return wrapped(*args, **kwargs)
  File "C:\Users\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain_core\language_models\base.py", line 204, in with_structured_output
    raise NotImplementedError()
NotImplementedError
### Description
It seems that LLMGraphTransformer uses the LLM's `with_structured_output` method, but the LlamaCpp model backend doesn't have this method implemented.
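A small diagnostic sketch of what the traceback shows: the abstract base class raises `NotImplementedError` unless the concrete model overrides `with_structured_output`, so probing before constructing the transformer fails fast. The helper below is illustrative, not LangChain API.

```python
def supports_structured_output(llm) -> bool:
    """Probe whether a model implements with_structured_output (sketch)."""
    try:
        llm.with_structured_output(dict)  # base class raises NotImplementedError
        return True
    except NotImplementedError:
        return False
```

In this case the probe would return `False` for `LlamaCpp`, suggesting either a chat-model wrapper that does implement structured output or a different backend for `LLMGraphTransformer`.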
### System Info
Windows 11.
Python 3.11.9 | NotImplementedError for method with_structured_output when I use a local model with LlamaCpp as suggested in docs and pass it to LLMGraphTransformer | https://api.github.com/repos/langchain-ai/langchain/issues/20606/comments | 3 | 2024-04-18T14:12:12Z | 2024-05-16T19:22:20Z | https://github.com/langchain-ai/langchain/issues/20606 | 2,250,806,377 | 20,606 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```py
openai_api_version = "2024-02-01"
embeddings = AzureOpenAIEmbeddings(
deployment=os.getenv('EMBEDDING_DEPLOYMENT_NAME'),
openai_api_version=openai_api_version,
)
index_name: str = "my-index-name"
fields = [...]
vector_store: AzureSearch = AzureSearch(
azure_search_endpoint=os.getenv('VECTOR_STORE_ADDRESS'),
azure_search_key=os.getenv('VECTOR_STORE_PASSWORD'),
index_name=index_name,
embedding_function=embedding_function,
fields=fields,
# needed for semantic ranking
semantic_configuration_name = 'my-config',
)
retriever = vector_store.as_retriever(search_kwargs={"k": 5}, search_type="semantic_hybrid")
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I'm trying to use "semantic_hybrid" as `search_type`, but it is not supported: `as_retriever` doesn't return an instance of `AzureSearchVectorStoreRetriever` but a generic `VectorStoreRetriever`.
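A hedged workaround sketch until `as_retriever` is fixed: construct the Azure-specific retriever directly. The class and field names are taken from `langchain_community.vectorstores.azuresearch`, so treat them as assumptions to verify against your installed version.

```python
# Parameters the Azure-specific retriever accepts (sketch).
retriever_kwargs = {"search_type": "semantic_hybrid", "k": 5}

# from langchain_community.vectorstores.azuresearch import AzureSearchVectorStoreRetriever
# retriever = AzureSearchVectorStoreRetriever(vectorstore=vector_store, **retriever_kwargs)
```

This bypasses the generic `VectorStoreRetriever` that rejects `semantic_hybrid` as an unknown search type.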
### System Info
- | `AzureSearch` vectorstore should be converted to `AzureSearchVectorStoreRetriever` when calling `as_retriever` | https://api.github.com/repos/langchain-ai/langchain/issues/20600/comments | 11 | 2024-04-18T12:21:14Z | 2024-05-15T15:51:00Z | https://github.com/langchain-ai/langchain/issues/20600 | 2,250,555,588 | 20,600 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
import json
import uuid
from datetime import datetime, timezone
from langchain.chains import create_extraction_chain
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_mistralai.chat_models import ChatMistralAI
class Joke(BaseModel):
    setup: str = Field(description="The setup of the joke")

token = '...'
str = "Why did the hipster burn his mouth? He drank the coffee before it was cool."

llm = ChatMistralAI(
    endpoint='https://id-serverless.francecentral.inference.ai.azure.com/v1/',
    mistral_api_key=token,
)
structured_llm = llm.with_structured_output(Joke)
result = structured_llm.invoke(str)
print(result)
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/home/proj/main.py", line 24, in <module>
result = structured_llm.invoke(str)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/proj/.venv/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2499, in invoke
input = step.invoke(
^^^^^^^^^^^^
File "/home/proj/.venv/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4511, in invoke
return self.bound.invoke(
^^^^^^^^^^^^^^^^^^
File "/home/proj/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 158, in invoke
self.generate_prompt(
File "/home/proj/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 560, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/proj/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 421, in generate
raise e
File "/home/proj/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 411, in generate
self._generate_with_cache(
File "/home/proj/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 632, in _generate_with_cache
result = self._generate(
^^^^^^^^^^^^^^^
File "/home/proj/.venv/lib/python3.11/site-packages/langchain_mistralai/chat_models.py", line 469, in _generate
return self._create_chat_result(response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/proj/.venv/lib/python3.11/site-packages/langchain_mistralai/chat_models.py", line 476, in _create_chat_result
message=_convert_mistral_chat_message_to_message(res["message"]),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/proj/.venv/lib/python3.11/site-packages/langchain_mistralai/chat_models.py", line 125, in _convert_mistral_chat_message_to_message
return AIMessage(
^^^^^^^^^^
File "/home/proj/.venv/lib/python3.11/site-packages/langchain_core/messages/base.py", line 47, in __init__
return super().__init__(content=content, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/proj/.venv/lib/python3.11/site-packages/langchain_core/load/serializable.py", line 120, in __init__
super().__init__(**kwargs)
File "/home/proj/.venv/lib/python3.11/site-packages/pydantic/v1/main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for AIMessage
invalid_tool_calls -> 0 -> args
str type expected (type=type_error.str)
### Description
I'm trying to use the structured output with Azure's Mistral Endpoint.
Going through the backtrace, the call to AIMessage() receives these parameters:
```
{'tool_calls': [{'function': {'arguments': {'setup': 'Why did the hipster burn his mouth?'}, 'call_id': None, 'name': 'Joke'}, 'id': 'call_Joke_0', 'type': 'function'}]}
[]
[{'name': 'Joke', 'args': {'setup': 'Why did the hipster burn his mouth?'}, 'id': 'call_Joke_0', 'error': 'the JSON object must be str, bytes or bytearray, not dict'}]
```
The `invalid_tool_calls` entry is due to the exception `the JSON object must be str, bytes or bytearray, not dict` raised when parsing the function arguments.
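A tolerant-parse sketch of the underlying mismatch (`parse_tool_args` is a hypothetical helper): the Azure serverless endpoint already returns `function.arguments` as a dict, while the parser expects a JSON string, so accepting both sidesteps the `json.loads` failure.

```python
import json

def parse_tool_args(raw):
    """Accept both a dict (Azure endpoint) and a JSON string (Mistral's own API)."""
    return raw if isinstance(raw, dict) else json.loads(raw)
```

A fix in `_convert_mistral_chat_message_to_message` along these lines would keep both endpoint variants working.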
### System Info
System Information
------------------
> OS: Linux
> OS Version: #28~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Fri Mar 15 10:51:06 UTC 2
> Python Version: 3.11.3 (main, Jul 27 2023, 10:19:30) [GCC 11.3.0]
Package Information
-------------------
> langchain_core: 0.1.44
> langchain: 0.1.16
> langchain_community: 0.0.33
> langsmith: 0.1.48
> langchain_mistralai: 0.1.2
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
| ChatMistralAI on Azure with_structured_output error when parsing function arguments: the JSON object must be str, bytes or bytearray, not dict | https://api.github.com/repos/langchain-ai/langchain/issues/20596/comments | 0 | 2024-04-18T10:53:39Z | 2024-07-25T16:09:08Z | https://github.com/langchain-ai/langchain/issues/20596 | 2,250,381,910 | 20,596 |
[
"langchain-ai",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
https://python.langchain.com/docs/use_cases/sql/agents/
The copy mechanism for code does not seem to be working.
### Idea or request for content:
When I click on the copy button, the code should be copied. | DOC: Code Copy is not working inside of sql/agents in the python page | https://api.github.com/repos/langchain-ai/langchain/issues/20584/comments | 2 | 2024-04-18T05:27:55Z | 2024-04-19T12:32:14Z | https://github.com/langchain-ai/langchain/issues/20584 | 2,249,782,137 | 20,584 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [x] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
This is the code I am trying to run:
```python
from langchain.prompts.example_selector import (
    MaxMarginalRelevanceExampleSelector,
    SemanticSimilarityExampleSelector,
)

SemanticSimilarityExampleSelector.from_examples(
    query_examples,
    OpenAIEmbeddings(),
    FAISS,
    k=5,
    input_keys=["input"],
)
```
### Error Message and Stack Trace (if applicable)
Below the full error:
example_selector = SemanticSimilarityExampleSelector.from_examples(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/8dc5f12176cc20a/antenv/lib/python3.11/site-packages/langchain_core/example_selectors/semantic_similarity.py", line 133, in from_examples
vectorstore = vectorstore_cls.from_texts(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/8dc5f12176cc20a/antenv/lib/python3.11/site-packages/langchain_community/vectorstores/faiss.py", line 930, in from_texts
embeddings = embedding.embed_documents(texts)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/python/3.11.7/lib/python3.11/site-packages/langchain_openai/embeddings/base.py", line 517, in embed_documents
return self._get_len_safe_embeddings(texts, engine=engine)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/python/3.11.7/lib/python3.11/site-packages/langchain_openai/embeddings/base.py", line 300, in _get_len_safe_embeddings
encoding = tiktoken.encoding_for_model(model_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/8dc5f12176cc20a/antenv/lib/python3.11/site-packages/tiktoken/model.py", line 75, in encoding_for_model
File "/tmp/8dc5f12176cc20a/antenv/lib/python3.11/site-packages/tiktoken/registry.py", line 60, in get_encoding
ValueError: Unknown encoding cl100k_base
### Description
I have an issue when trying to use SemanticSimilarityExampleSelector in an app hosted in Azure App Services. It works when running locally, but in the deployed app I get the following error:
Unknown encoding cl100k_base.
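For context on this error: tiktoken discovers its encodings by importing plugin modules from the `tiktoken_ext` namespace package (the `cl100k_base` constructor lives in `tiktoken_ext.openai_public`), so `Unknown encoding cl100k_base` usually means that namespace package did not survive deployment. A stdlib-only diagnostic that can be run in the App Service console (a sketch; the helper name is mine, only the namespace is tiktoken's):

```python
import importlib
import pkgutil


def plugin_modules(namespace: str = "tiktoken_ext"):
    """List the plugin modules tiktoken would load encodings from.

    tiktoken resolves names like "cl100k_base" by importing every module
    in the `tiktoken_ext` namespace package; if this comes back empty,
    the registry has nothing to offer and raises "Unknown encoding".
    """
    try:
        ns = importlib.import_module(namespace)
    except ModuleNotFoundError:
        return []  # namespace package missing entirely
    return sorted(mod.name for mod in pkgutil.iter_modules(ns.__path__))


# On a healthy install this should include 'openai_public'; an empty
# list points at a broken deployment (e.g. a zipped/partial site-packages).
print(plugin_modules())
```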
### System Info
python version: 3.11
platform: linux
aiohttp==3.8.5
aiosignal==1.3.1
annotated-types==0.6.0
anyio==4.3.0
asn1crypto==1.5.1
async-timeout==4.0.3
attrs==23.2.0
azure-ai-contentsafety==1.0.0
azure-core==1.30.1
banal==1.0.6
beartype==0.18.2
beautifulsoup4==4.12.3
binaryornot==0.4.4
boolean.py==4.0
botbuilder-core==4.14.8
botbuilder-integration-aiohttp==4.14.8
botbuilder-schema==4.14.8
botframework-connector==4.14.8
botframework-streaming==4.14.8
certifi==2024.2.2
cffi==1.16.0
chardet==5.2.0
charset-normalizer==3.3.2
click==8.1.7
colorama==0.4.6
commoncode==31.0.3
container-inspector==32.0.1
cryptography==42.0.5
dataclasses-json==0.6.4
debian_inspector==31.1.0
distro==1.9.0
dockerfile-parse==2.0.1
dparse2==0.7.0
extractcode==31.0.0
extractcode-7z==16.5.210531
extractcode-libarchive==3.5.1.210531
faiss-cpu==1.8.0
fasteners==0.19
filelock==3.13.3
fingerprints==1.2.3
frozenlist==1.4.1
ftfy==6.2.0
gemfileparser2==0.9.3
greenlet==3.0.3
h11==0.14.0
html5lib==1.1
httpcore==1.0.4
httpx==0.27.0
idna==3.6
importlib_metadata==7.1.0
intbitset==3.1.0
isodate==0.6.1
jaraco.functools==4.0.0
javaproperties==0.8.1
Jinja2==3.1.3
jsonpatch==1.33
jsonpickle==1.4.2
jsonpointer==2.4
jsonschema==4.21.1
jsonschema-specifications==2023.12.1
jsonstreams==0.6.0
langchain==0.1.13
langchain-community==0.0.29
langchain-core==0.1.40
langchain-openai==0.1.1
langchain-text-splitters==0.0.1
langsmith==0.1.40
license-expression==30.3.0
lxml==5.2.1
MarkupSafe==2.1.5
marshmallow==3.21.1
more-itertools==10.2.0
msal==1.27.0
msrest==0.7.1
multidict==6.0.5
mypy-extensions==1.0.0
normality==2.5.0
numpy==1.26.4
oauthlib==3.2.2
openai==1.13.3
orjson==3.10.0
packageurl-python==0.15.0
packaging==23.2
packvers==21.5
pandas==2.2.1
parameter-expansion-patched==0.3.1
pdfminer.six==20231228
pefile==2023.2.7
pip-requirements-parser==32.0.1
pkginfo2==30.0.0
platformdirs==3.11.0
pluggy==1.4.0
plugincode==32.0.0
ply==3.11
publicsuffix2==2.20191221
pyahocorasick==2.1.0
pyarrow==15.0.2
pycparser==2.21
pydantic==2.6.3
pydantic_core==2.16.3
pygmars==0.8.0
Pygments==2.17.2
PyJWT==2.8.0
pymaven-patch==0.3.2
pyOpenSSL==24.1.0
pyparsing==3.1.2
python-dateutil==2.9.0.post0
python-dotenv==1.0.1
pytz==2024.1
PyYAML==6.0.1
rdflib==7.0.0
referencing==0.33.0
regex==2023.12.25
requests==2.31.0
requests-oauthlib==1.3.1
rpds-py==0.18.0
saneyaml==0.6.0
scancode-toolkit==32.1.0
semantic-version==2.10.0
six==1.16.0
sniffio==1.3.1
snowflake-connector-python==3.7.1
snowflake-sqlalchemy==1.5.1
sortedcontainers==2.4.0
soupsieve==2.5
spdx-tools==0.8.2
SQLAlchemy==1.4.52
teams-ai==1.0.0
tenacity==8.2.3
text-unidecode==1.3
tiktoken==0.6.0
toml==0.10.2
tomlkit==0.12.4
tqdm==4.66.2
typecode==30.0.1
typecode-libmagic==5.39.210531
types-PyYAML==6.0.12.12
typing-inspect==0.9.0
typing_extensions==4.10.0
tzdata==2024.1
Unidecode==1.3.8
uritools==4.0.2
urllib3==1.26.18
urlpy==0.5
wcwidth==0.2.13
webencodings==0.5.1
xmltodict==0.13.0
yarl==1.9.4
zipp==3.18.1
| Dynamic Few Shot Prompt - SemanticSimilarityExampleSelector - Unknown encoding cl100k_base | https://api.github.com/repos/langchain-ai/langchain/issues/20567/comments | 2 | 2024-04-17T20:22:14Z | 2024-07-29T16:07:43Z | https://github.com/langchain-ai/langchain/issues/20567 | 2,249,208,659 | 20,567 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
In the future I want to replace the pre-trained model with a fine-tuned model.
### Example Code
```python
import re
import os
import requests
from PIL import Image
import gradio as gr
from langchain_groq import ChatGroq
from langchain.prompts import PromptTemplate
from langchain.chains import ConversationChain
from langchain_community.tools import DuckDuckGoSearchRun
from langchain_community.utilities import WikipediaAPIWrapper
from langchain.agents import Tool, initialize_agent
from langchain_core.prompts import ChatPromptTemplate
from langchain.chains import LLMChain
from langchain_community.llms import HuggingFaceHub
from langchain.chains.router import MultiPromptChain
from langchain.chains.router.llm_router import LLMRouterChain, RouterOutputParser
from langchain.chains.router.multi_prompt_prompt import MULTI_PROMPT_ROUTER_TEMPLATE
from langchain_community.llms import HuggingFaceEndpoint
#auth_token = os.environ.get("HUGGINGFACEHUB_API_TOKEN")
from google.colab import userdata
HUGGINGFACE_TOKEN=userdata.get('HUGGINGFACE_TOKEN')
from transformers import AutoModelForCausalLM, AutoTokenizer,pipeline
from langchain_community.llms.huggingface_pipeline import HuggingFacePipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from langchain import HuggingFaceHub
import warnings
warnings.filterwarnings("ignore")
from transformers import pipeline
import torch
llm = HuggingFacePipeline.from_model_id(
model_id="mistralai/Mistral-7B-v0.1",
task="text-generation",
pipeline_kwargs={"max_new_tokens": 1000},
)
wikipedia = WikipediaAPIWrapper()
search = DuckDuckGoSearchRun()
wikipedia_tool = Tool(
name='wikipedia',
func= wikipedia.run,
description="This tool leverages Wikipedia to gather information about Ingredients name ,description of the dish , Allergens , additional information . It's particularly useful for obtaining detailed and reliable information on various topics"
)
duckduckgo_tool = Tool(
name='DuckDuckGo Search',
func= search.run,
description="Useful for when you need to do a search on the internet to find information that another tool can't find. Always be specific with your input."
)
tools = [
Tool(
name = "DuckDuckGo Search",
func=duckduckgo_tool.run,
description="useful for when you need answer questions from internet"
)
]
tools.append(wikipedia_tool)
zero_shot_agent = initialize_agent(
agent="zero-shot-react-description",
tools=tools,
llm=llm,
verbose=True,
handle_parsing_errors=True,
max_iterations=10,
)
def menu_prompt(title):
prompt_menu = f'''
As a restaurant menu manager, your role is to gather below informations based on input data {title} (Name of the dish).
generate the output
### information to be extracted :
<Ingredients>: Only Ingredients included in the dish.
<Description>: Briefly describe the dish.
<Allergens>: Only Choose relevant options from this list - [Cereals, Crustaceans, Egg, Fish, Peanuts, SOYBEAN, Latte, Nuts, Celery, Mustard, Sesame seeds, Sulfur dioxide and sulphites, Shell, Clams].
<Additional Information>: Only Choose relevant options from this list - [Spicy, Vegan, Gluten free, Vegetarian].
### Output Format
"""
"ingredients": All Ingredients in a List,
"description": Description in a string,
"allergen": All allergen in a List,
"Additional_information": All Additional_information in a List
"""
### Input data:
{title}
### Output:
'''
return prompt_menu
def get_router(title):
prompt_menu=menu_prompt(title)
prompt_infos = [
{
"name": "Menu Manager",
"description": "Good for answering questions about Italian Dish[ingredients,description,allergens,additional_information]",
"prompt_template": prompt_menu,
}
]
# map destination chains
destination_chains = {}
for prompt_info in prompt_infos:
name = prompt_info["name"]
prompt_template = prompt_info["prompt_template"]
prompt = PromptTemplate(template=prompt_template, input_variables=["input"])
print("prompt: ", prompt)
chain = LLMChain(llm=llm, prompt=prompt)
destination_chains[name] = chain
default_chain = ConversationChain(llm=llm)
# Creating LLMRouterChain
destinations = [f"{p['name']}: {p['description']}" for p in prompt_infos]
destinations_str = "\n".join(destinations)
router_template = MULTI_PROMPT_ROUTER_TEMPLATE.format(destinations=destinations_str)
router_prompt = PromptTemplate(
template=router_template,
input_variables=["input"],
output_parser=RouterOutputParser(),
)
# creating the router chain
router_chain = LLMRouterChain.from_llm(llm, router_prompt)
# Multiple Prompt Chain
chain = MultiPromptChain(
router_chain=router_chain,
destination_chains=destination_chains,
default_chain=default_chain,
verbose=True,
)
# Get response from the agent
response = chain.run(title)
return response
response=get_router("Pizza Margherita")
response
```
```
MULTI_PROMPT_ROUTER_TEMPLATE = """Given a raw text input to a \
language model select the model prompt best suited for the input. \
You will be given the names of the available prompts and a \
description of what the prompt is best suited for. \
You may also revise the original input if you think that revising\
it will ultimately lead to a better response from the language model.
<< FORMATTING >>
Return a markdown code snippet with a JSON object formatted to look like:
```json
{{{{
"destination": string \ name of the prompt to use or "DEFAULT"
"next_inputs": string \ a potentially modified version of the original input
}}}}
```
REMEMBER: "destination" MUST be one of the candidate prompt \
names specified below OR it can be "DEFAULT" if the input is not\
well suited for any of the candidate prompts.
REMEMBER: "next_inputs" can just be the original input \
if you don't think any modifications are needed.
<< CANDIDATE PROMPTS >>
{destinations}
<< INPUT >>
{{input}}
<< OUTPUT (remember to include the ```json and ```)>>"""
```
### Error Message and Stack Trace (if applicable)
> Entering new MultiPromptChain chain...
---------------------------------------------------------------------------
OutputParserException Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/langchain/chains/router/llm_router.py](https://localhost:8080/#) in parse(self, text)
98 expected_keys = ["destination", "next_inputs"]
---> 99 parsed = parse_and_check_json_markdown(text, expected_keys)
100 if not isinstance(parsed["destination"], str):
16 frames
OutputParserException: Got invalid return object. Expected key `destination` to be present, but got {}
During handling of the above exception, another exception occurred:
OutputParserException Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/langchain/chains/router/llm_router.py](https://localhost:8080/#) in parse(self, text)
114 return parsed
115 except Exception as e:
--> 116 raise OutputParserException(
117 f"Parsing text\n{text}\n raised following error:\n{e}"
118 )
OutputParserException: Parsing text
Given a raw text input to a language model select the model prompt best suited for the input. You will be given the names of the available prompts and a description of what the prompt is best suited for. You may also revise the original input if you think that revisingit will ultimately lead to a better response from the language model.
<< FORMATTING >>
Return a markdown code snippet with a JSON object formatted to look like:
```json
{
"destination": string \ name of the prompt to use or "DEFAULT"
"next_inputs": string \ a potentially modified version of the original input
}
```
REMEMBER: "destination" MUST be one of the candidate prompt names specified below OR it can be "DEFAULT" if the input is notwell suited for any of the candidate prompts.
REMEMBER: "next_inputs" can just be the original input if you don't think any modifications are needed.
<< CANDIDATE PROMPTS >>
Menu Manager: Good for answering questions about Italian Dish[ingredients,description,allergens,additional_information]
<< INPUT >>
Pizza Margherita
<< OUTPUT (remember to include the ```json and ```)>>
{
"destination": "Menu Manager",
"next_inputs": "Pizza Margherita"
}
<< INPUT >>
Pizza Margherita
<< OUTPUT (remember to include the ```json and ```)>>
{
"destination": "DEFAULT",
"next_inputs": "Pizza Margherita"
}
<< INPUT >>
Pizza Margherita
<< OUTPUT (remember to include the ```json and ```)>>
{
"destination": "Menu Manager",
"next_inputs": "Pizza Margherita"
}
<< INPUT >>
Pizza Margherita
<< OUTPUT (remember to include the ```json and ```)>>
{
"destination": "DEFAULT",
"next_inputs": "Pizza Margherita"
}
<< INPUT >>
Pizza Margherita
<< OUTPUT (remember to include the ```json and ```)>>
{
"destination": "Menu Manager",
"next_inputs": "Pizza Margherita"
}
<< INPUT >>
Pizza Margherita
<< OUTPUT (remember to include the ```json and ```)>>
{
"destination": "DEFAULT",
"next_inputs": "Pizza Margherita"
}
<< INPUT >>
Pizza Margherita
<< OUTPUT (remember to include the ```json and ```)>>
{
"destination": "Menu Manager",
"next_inputs": "Pizza Margherita"
}
<< INPUT >>
Pizza Margherita
<< OUTPUT (remember to include the ```json and ```)>>
{
"destination": "DEFAULT",
"next_inputs": "Pizza Margherita"
}
<< INPUT >>
Pizza Margherita
<< OUTPUT (remember to include the ```json and ```)>>
{
"destination": "Menu Manager",
"next_inputs": "Pizza Margherita"
}
<< INPUT >>
Pizza Margherita
<< OUTPUT (remember to include the ```json and ```)>>
{
"destination": "DEFAULT",
"next_inputs": "Pizza Margherita"
}
<< INPUT >>
Pizza Margherita
<< OUTPUT (remember to include the ```json and ```)>>
{
"destination": "Menu Manager",
"next_inputs": "Pizza Margherita"
}
<< INPUT >>
Pizza Margherita
<< OUTPUT (remember to include the ```json and ```)>>
{
"destination": "DEFAULT",
"next_inputs": "Pizza Margherita"
}
<< INPUT >>
Pizza Margherita
<< OUTPUT (remember to include the ```json and ```)>>
{
"destination": "Menu Manager",
"next_inputs": "Pizza Margherita"
}
<< INPUT >>
Pizza Margherita
<< OUTPUT (remember to include the ```json and ```)>>
{
"destination": "DEFAULT",
"next_inputs": "Pizza Margherita"
}
<< INPUT >>
Pizza Margherita
<< OUTPUT (remember to include the ```json and ```)>>
{
"destination": "Menu Manager",
"next_inputs": "Pizza Margherita"
}
<< INPUT >>
Pizza Margherita
<< OUTPUT (remember to include the ```json and ```)>>
{
"destination": "DEFAULT",
"next_inputs": "Pizza Margherita"
}
<< INPUT >>
Pizza Margherita
<< OUTPUT (remember to include the ```json and ```)>>
{
"destination": "Menu Manager",
"next_inputs": "Pizza Margherita"
}
<< INPUT >>
Pizza Margherita
<< OUTPUT (remember to include the ```json and ```)>>
{
"destination": "DEFAULT",
"next_inputs": "Pizza Margh
raised following error:
Got invalid return object. Expected key `destination` to be present, but got {}
### Description
OutputParserException: Got invalid return object. Expected key `destination` to be present, but got {}
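A note on what the trace shows: a Hugging Face `text-generation` pipeline returns the prompt plus the continuation by default, and the model keeps emitting extra INPUT/OUTPUT pairs, so the router parser sees the schema example (which is not valid JSON) before any real answer. Setting `return_full_text=False` and a tighter `max_new_tokens` in the pipeline kwargs usually reduces this; as a stopgap one can also scan the raw text for the first fragment that actually parses. A minimal stdlib sketch (function name and sample are illustrative, not LangChain APIs):

```python
import json
import re


def first_valid_json(text: str) -> dict:
    """Return the first {...} fragment in `text` that parses as JSON.

    Skips non-JSON blobs such as the echoed schema example
    `{"destination": string \\ name of the prompt ...}` that precedes
    the model's real answer.
    """
    for match in re.finditer(r"\{[^{}]*\}", text):
        try:
            return json.loads(match.group(0))
        except json.JSONDecodeError:
            continue
    raise ValueError("no parseable JSON object in model output")


sample = (
    '{\n  "destination": string \\ name of the prompt\n}\n'
    "<< INPUT >>\nPizza Margherita\n"
    '{\n  "destination": "Menu Manager",\n  "next_inputs": "Pizza Margherita"\n}'
)
print(first_valid_json(sample))  # {'destination': 'Menu Manager', 'next_inputs': 'Pizza Margherita'}
```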
### System Info
!pip install langchain openai tiktoken transformers accelerate cohere gradio langchain_groq wikipedia duckduckgo-search bitsandbytes accelerate transformers --quiet
!pip install transformers==4.34.0
!pip install datasets==2.16.0
!pip install --upgrade langchain
!pip install bitsandbytes
!pip install -U peft
!pip install accelerate
!pip install -U trl
!pip install wandb
!pip install vllm
!pip install langchain transformers | Issue with HuggingFace pipeline with RouterOutputParser OutputParserException: Got invalid return object. Expected key `destination` to be present, but got {} | https://api.github.com/repos/langchain-ai/langchain/issues/20563/comments | 0 | 2024-04-17T18:12:29Z | 2024-07-24T16:08:42Z | https://github.com/langchain-ai/langchain/issues/20563 | 2,248,929,911 | 20,563 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_community.vectorstores import OpenSearchVectorSearch
from langchain_community.document_loaders import TextLoader
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_community.llms.huggingface_pipeline import HuggingFacePipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from langchain_community.document_loaders import DirectoryLoader
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_core.documents import Document
from langchain.chains.query_constructor.base import AttributeInfo
import torch
embeddings = HuggingFaceEmbeddings()
docs = [
Document(
page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose",
metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"},
),
Document(
page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2},
),
Document(
page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6},
),
Document(
page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them",
metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3},
),
Document(
page_content="Toys come alive and have a blast doing so",
metadata={"year": 1995, "genre": "animated"},
),
Document(
page_content="Three men walk into the Zone, three men walk out of the Zone",
metadata={
"year": 1979,
"rating": 9.9,
"director": "Andrei Tarkovsky",
"genre": "science fiction",
},
),
]



vectorstore = OpenSearchVectorSearch.from_documents(
docs,
embeddings,
index_name="opensearch-self-query-demo",
opensearch_url="https://admin:admin@localhost:9200", use_ssl=False, verify_certs=False
)

model_id = "lmsys/vicuna-13b-v1.5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=200, device_map="auto", torch_dtype=torch.float16)
llm = HuggingFacePipeline(pipeline=pipe)

metadata_field_info = [
AttributeInfo(
name="genre",
description="The genre of the movie",
type="string or list[string]",
),
AttributeInfo(
name="year",
description="The year the movie was released",
type="integer",
),
AttributeInfo(
name="director",
description="The name of the movie director",
type="string",
),
AttributeInfo(
name="rating", description="A 1-10 rating for the movie", type="float"
),
]
document_content_description = "Brief summary of a movie"


retriever = SelfQueryRetriever.from_llm(
llm, vectorstore, document_content_description, metadata_field_info, verbose=True
)

pol = retriever.get_relevant_documents("What are some movies about dinosaurs")
print(pol)
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/langchain_core/output_parsers/json.py", line 175, in parse_and_check_json_markdown
json_obj = parse_json_markdown(text)
File "/usr/local/lib/python3.10/site-packages/langchain_core/output_parsers/json.py", line 157, in parse_json_markdown
parsed = parser(json_str)
File "/usr/local/lib/python3.10/site-packages/langchain_core/output_parsers/json.py", line 125, in parse_partial_json
return json.loads(s, strict=strict)
File "/usr/local/lib/python3.10/json/__init__.py", line 359, in loads
return cls(**kw).decode(s)
File "/usr/local/lib/python3.10/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/local/lib/python3.10/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 2 column 14 (char 15)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/langchain/chains/query_constructor/base.py", line 50, in parse
parsed = parse_and_check_json_markdown(text, expected_keys)
File "/usr/local/lib/python3.10/site-packages/langchain_core/output_parsers/json.py", line 177, in parse_and_check_json_markdown
raise OutputParserException(f"Got invalid JSON object. Error: {e}")
langchain_core.exceptions.OutputParserException: Got invalid JSON object. Error: Expecting value: line 2 column 14 (char 15)
### Description
I am following this documentation:
https://python.langchain.com/docs/integrations/retrievers/self_query/opensearch_self_query/
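For context, the self-query chain asks the LLM for a JSON object with `query` and `filter` keys (optionally inside a ```json fence) and validates it with `parse_and_check_json_markdown`; small local models such as Vicuna often truncate or wrap that JSON, which is what the traceback reflects. A rough stdlib approximation of the check, useful for testing the model's raw output in isolation (names here are illustrative, not LangChain's API):

```python
import json
import re


def check_query_json(text: str, expected_keys=("query", "filter")) -> dict:
    """Mimic the validation applied to the query-constructor LLM output.

    Accepts either a bare JSON object or one wrapped in a ```json fence,
    then verifies the keys the structured-query parser needs.
    """
    fenced = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", text, re.DOTALL)
    payload = fenced.group(1) if fenced else text
    obj = json.loads(payload)
    missing = [key for key in expected_keys if key not in obj]
    if missing:
        raise ValueError(f"model output is missing keys: {missing}")
    return obj


good = '```json\n{"query": "dinosaurs", "filter": "NO_FILTER"}\n```'
print(check_query_json(good))  # {'query': 'dinosaurs', 'filter': 'NO_FILTER'}
```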
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Wed Jan 10 22:58:54 UTC 2024
> Python Version: 3.10.7 (main, Feb 29 2024, 10:06:00) [GCC 8.5.0 20210514 (Red Hat 8.5.0-20)]
Package Information
-------------------
> langchain_core: 0.1.33
> langchain: 0.1.13
> langchain_community: 0.0.29
> langsmith: 0.1.31
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | SelfQueryRetriever with an OpenSearch vector store doesn't work. | https://api.github.com/repos/langchain-ai/langchain/issues/20562/comments | 2 | 2024-04-17T17:08:59Z | 2024-07-31T16:07:25Z | https://github.com/langchain-ai/langchain/issues/20562 | 2,248,784,865 | 20,562 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code
```python
from mlx_lm import load  # mlx-lm provides load()
from langchain_community.llms import OpenAI, Ollama, MLXPipeline

model, tokenizer = load("mlx-community/dolphin-2.8-mistral-7b-v02")
self.mlx = MLXPipeline(
model=model,
tokenizer=tokenizer,
pipeline_kwargs={"temp":0.7, "max_tokens":10}
)
```
on the following prompt
```
Collect and summarize recent news articles, press releases, and market analyses related to the company. Pay special attention to any significant events, market sentiment, and analysts' opinions.
Your final answer MUST be a report that includes a comprehensive identification of key marketing-oriented points following the Marketing 5Ps (Product, Place, Price, Promotion, People).
If you do your BEST WORK, I will give you a $10,000 commission!
Make sure to use the most recent data possible.
The company selected by the customer is Tesla.
```
leads to an error during execution.
### Error Message and Stack Trace (if applicable)
```
File "/opt/homebrew/lib/python3.10/site-packages/langchain_community/llms/mlx_pipeline.py", line 189, in _stream
text = self.tokenizer.decode(token.item())
AttributeError: 'int' object has no attribute 'item'
```
### Description
Hi
* I am trying to use LangChain to load an MLX model (see the code above).
* I hit the error shown in the error section: `AttributeError: 'int' object has no attribute 'item'`

Removing the `.item()` call on that line unblocks execution, but then I get no output at all, so that idea is not the right fix.
The file `libs/community/langchain_community/llms/mlx_pipeline.py` has been added last week so it is very new.
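The failure looks like a version skew between `mlx_pipeline.py` and mlx-lm: depending on the mlx-lm build, the token yielded during generation is either a 0-d `mx.array` (which has `.item()`) or already a plain Python `int` (which does not). A defensive normalization along these lines would cover both cases (a sketch of a possible fix, not the merged patch):

```python
def to_token_id(token) -> int:
    """Return a plain int token id whether the generator yielded a
    0-d mx.array (has .item()) or a bare Python int (does not)."""
    return int(token.item()) if hasattr(token, "item") else int(token)


class FakeArray:  # stand-in for a 0-d mx.array in this sketch
    def __init__(self, value):
        self._value = value

    def item(self):
        return self._value


print(to_token_id(FakeArray(42)), to_token_id(7))  # 42 7
```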
Could you take a look @Blaizzy ?
Thank you
### System Info
here is the version I use:
Python 3.10
```
pip freeze | grep langchain
langchain==0.1.12
langchain-community==0.0.33
langchain-core==0.1.43
langchain-openai==0.0.5
langchain-text-splitters==0.0.1
```
| Mistype issue using MLX model via MLXPipeline | https://api.github.com/repos/langchain-ai/langchain/issues/20561/comments | 17 | 2024-04-17T16:31:11Z | 2024-05-21T00:17:10Z | https://github.com/langchain-ai/langchain/issues/20561 | 2,248,722,667 | 20,561 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.vectorstores.azuresearch import AzureSearch
from azure.search.documents.indexes.models import (
FreshnessScoringFunction,
FreshnessScoringParameters,
ScoringProfile,
SearchableField,
SearchField,
SearchFieldDataType,
SimpleField,
TextWeights,
SemanticConfiguration,
SemanticPrioritizedFields,
SemanticField
)
fields = [
SimpleField(
name="id",
type=SearchFieldDataType.String,
key=True,
filterable=True,
),
SearchableField(
name="header1",
type=SearchFieldDataType.String,
searchable=True,
),
SearchableField(
name="header2",
type=SearchFieldDataType.String,
searchable=True,
), SearchableField(
name="header3",
type=SearchFieldDataType.String,
searchable=True,
),
SearchableField(
name="content",
type=SearchFieldDataType.String,
searchable=True,
),
SearchField(
name="content_vector",
type=SearchFieldDataType.Collection(SearchFieldDataType.Single),
searchable=True,
vector_search_dimensions=len(aoai_embeddings.embed_query("Text")),
vector_search_profile_name="myExhaustiveKnnProfile",
),
SearchableField(
name="metadata",
type=SearchFieldDataType.String,
searchable=True,
),
]
index_name: str = vector_store_index
# Adding a custom scoring profile with a freshness function
sc_name = "csrd_scoring_profile"
sc = ScoringProfile(
name=sc_name,
text_weights=TextWeights(weights={
"header1": 10,
"header2": 9,
"content": 8,
"content_vector": 8
}),
function_aggregation="sum"
)
semantic_configuration_name = 'my_semantic_configuration'
semantic_config = SemanticConfiguration(
name=semantic_configuration_name,
prioritized_fields=SemanticPrioritizedFields(
title_field=SemanticField(field_name='header2'),
content_fields=[SemanticField(field_name='content')],
keywords_fields=None,
)
)
vector_store: AzureSearch = AzureSearch(
search_type='semantic_hybrid',
scoring_profiles=[sc],
default_scoring_profile=sc_name,
semantic_configurations=[semantic_config],
semantic_configuration_name=semantic_configuration_name,
azure_search_endpoint=vector_store_address,
azure_search_key=vector_store_password,
index_name=index_name,
embedding_function=aoai_embeddings.embed_query,
fields=fields,
)
```
### Error Message and Stack Trace (if applicable)
There is no error, but the semantic configuration is not created for the index.
### Description
The semantic configuration is not created for an Azure AI Search index via langchain-community when both the semantic configuration name and the semantic configuration object are provided.
When I checked AzureSearch.py, I found the snippet below, which creates the semantic configuration.
```python
# Create the semantic settings with the configuration
semantic_search = None
if semantic_configurations is None and semantic_configuration_name is not None:
semantic_configuration = SemanticConfiguration(
name=semantic_configuration_name,
prioritized_fields=SemanticPrioritizedFields(
content_fields=[SemanticField(field_name=FIELDS_CONTENT)],
),
)
semantic_search = SemanticSearch(configurations=[semantic_configuration])
# Create the search index with the semantic settings and vector search
index = SearchIndex(
name=index_name,
fields=fields,
vector_search=vector_search,
semantic_search=semantic_search,
scoring_profiles=scoring_profiles,
default_scoring_profile=default_scoring_profile,
cors_options=cors_options,
)
index_client.create_index(index)
```
If you observe, the semantic configuration is only created when `semantic_configurations` is None and `semantic_configuration_name` is not None. There is no branch for the case where both the configuration object and the configuration name are provided, so a user-supplied configuration is silently ignored and the index is created without semantic search.
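The missing branch can be expressed as a small pure function: explicitly supplied configurations should win, and the auto-built default should only be a fallback when just a name is given. A sketch of that decision logic with plain Python stand-ins (not the Azure SDK types):

```python
def resolve_semantic_configurations(configurations, configuration_name, build_default):
    """Pick the semantic configurations an index should be created with.

    - explicit configurations -> use them as given
    - only a configuration name -> build the library's default for it
    - neither -> no semantic search on the index
    """
    if configurations:
        return list(configurations)
    if configuration_name is not None:
        return [build_default(configuration_name)]
    return None


def build_default_config(name):
    # stand-in for constructing a default SemanticConfiguration
    return f"default-config:{name}"


print(resolve_semantic_configurations(["my-config"], "my-name", build_default_config))  # ['my-config']
print(resolve_semantic_configurations(None, "my-name", build_default_config))  # ['default-config:my-name']
```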
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.20348
> Python Version: 3.11.6 (tags/v3.11.6:8b6ee5b, Oct 2 2023, 14:57:12) [MSC v.1935 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.27
> langchain: 0.1.8
> langchain_community: 0.0.24
> langsmith: 0.1.10
> langchain_openai: 0.0.8
> langchainhub: 0.1.14
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Semantic configuration is not created for Azure AI Search index using Langchain community. | https://api.github.com/repos/langchain-ai/langchain/issues/20549/comments | 1 | 2024-04-17T10:59:07Z | 2024-07-25T16:08:53Z | https://github.com/langchain-ai/langchain/issues/20549 | 2,248,005,081 | 20,549 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
# Docs to index
urls = [
"https://lilianweng.github.io/posts/2023-06-23-agent/",
"https://lilianweng.github.io/posts/2023-03-15-prompt-engineering/",
"https://lilianweng.github.io/posts/2023-10-25-adv-attack-llm/",
]
# Load
docs = [WebBaseLoader(url).load() for url in urls]
docs_list = [item for sublist in docs for item in sublist]
# Split
text_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
chunk_size=500, chunk_overlap=0
)
doc_splits = text_splitter.split_documents(docs_list)
# Add to vectorstore
vectorstore = Chroma.from_documents(
documents=doc_splits,
collection_name="rag-chroma",
embedding=embeddings,
)
retriever = vectorstore.as_retriever()
# Data model
class RouteQuery(BaseModel):
"""Route a user query to the most relevant datasource."""
datasource: Literal["vectorstore", "web_search"] = Field(
...,
description="Given a user question choose to route it to web search or a vectorstore.",
)
# LLM with function call
llm = AzureChatOpenAI(azure_deployment='chatgpt3', model="gpt-3.5-turbo-0125", temperature=0)
structured_llm_router = llm.with_structured_output(RouteQuery)
# Prompt
system = """You are an expert at routing a user question to a vectorstore or web search.
The vectorstore contains documents related to agents, prompt engineering, and adversarial attacks.
Use the vectorstore for questions on these topics. Otherwise, use web-search."""
route_prompt = ChatPromptTemplate.from_messages(
[
("system", system),
("human", "{question}"),
]
)
question_router = route_prompt | structured_llm_router
print(question_router.invoke({"question": "Who will the Bears draft first in the NFL draft?"}))
print(question_router.invoke({"question": "What are the types of agent memory?"}))
### Error Message and Stack Trace (if applicable)
C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\_api\beta_decorator.py:87: LangChainBetaWarning: The function `with_structured_output` is in beta. It is actively being worked on, so the API may change.
warn_beta(
Traceback (most recent call last):
File "C:\Users\prakotian\Desktop\Projects\GenAI Projects\AdaptiveRAG\router.py", line 87, in <module>
print(question_router.invoke({"question": "Who will the Bears draft first in the NFL draft?"}))
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\runnables\base.py", line 2499, in invoke
input = step.invoke(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\output_parsers\base.py", line 169, in invoke
return self._call_with_config(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\runnables\base.py", line 1625, in _call_with_config
context.run(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\runnables\config.py", line 347, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\output_parsers\base.py", line 170, in <lambda>
lambda inner_input: self.parse_result(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\output_parsers\openai_tools.py", line 182, in parse_result
json_results = super().parse_result(result, partial=partial)
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\output_parsers\openai_tools.py", line 129, in parse_result
tool_calls = parse_tool_calls(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\output_parsers\openai_tools.py", line 85, in parse_tool_calls
raise OutputParserException("\n\n".join(exceptions))
langchain_core.exceptions.OutputParserException: Function RouteQuery arguments:
{
datasource: "web_search"
}
are not valid JSON. Received JSONDecodeError Expecting property name enclosed in double quotes: line 2 column 3 (char 4)
### Description
Expected Output is {
datasource: "web_search"
}
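The parser's complaint boils down to the unquoted property name in the arguments payload; a minimal standard-library reproduction of the failure (illustrative only, not LangChain's actual parsing code):

```python
import json

# The arguments payload as emitted by the model: the property name is not
# double-quoted, so it is *not* valid JSON even though it looks close.
raw_arguments = '{\n  datasource: "web_search"\n}'

try:
    json.loads(raw_arguments)
except json.JSONDecodeError as err:
    print("rejected:", err.msg)  # Expecting property name enclosed in double quotes

# The same payload with a quoted key parses fine.
fixed_arguments = '{\n  "datasource": "web_search"\n}'
print(json.loads(fixed_arguments))  # {'datasource': 'web_search'}
```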
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.19045
> Python Version: 3.10.13 | packaged by Anaconda, Inc. | (main, Sep 11 2023, 13:24:38) [MSC v.1916 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.43
> langchain: 0.1.16
> langchain_community: 0.0.33
> langsmith: 0.1.31
> langchain_cohere: 0.1.2
> langchain_experimental: 0.0.54
> langchain_openai: 0.1.3
> langchain_text_splitters: 0.0.1
> langchainhub: 0.1.15
> langgraph: 0.0.37 | AdaptiveRAG implementation does'nt work with AzureOpenAI(llm.`with_structured_output`) Error | https://api.github.com/repos/langchain-ai/langchain/issues/20548/comments | 2 | 2024-04-17T09:07:39Z | 2024-08-01T16:06:24Z | https://github.com/langchain-ai/langchain/issues/20548 | 2,247,776,504 | 20,548 |
[
"langchain-ai",
"langchain"
] | > Looks like I've imported differently. It's type is supposed to say `langchain_community.graphs.networkx_graph.NetworkxEntityGraph`. It is working now!!
I am running into the same issue; how did you solve it?
My object's type is `langchain.graphs.network_graph.NetworkxEntityGraph`.
_Originally posted by @nikhitaKanoj in https://github.com/langchain-ai/langchain/issues/15046#issuecomment-2060214876_
| > Looks like I've imported differently. It's type is supposed to say `langchain_community.graphs.networkx_graph.NetworkxEntityGraph`. It is working now!! | https://api.github.com/repos/langchain-ai/langchain/issues/20541/comments | 1 | 2024-04-17T03:15:00Z | 2024-04-18T21:09:01Z | https://github.com/langchain-ai/langchain/issues/20541 | 2,247,270,511 | 20,541 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
ExtractData = {
"name": "ExtractData",
"description": "ExtractData",
"input_schema": {
"type": "object", "description":"schema components for getting data",
"properties": data_schema,
"required": ["x", "y"]
}
}
llm = ChatAnthropic(model=MODEL_NAME, verbose=True)
llm_with_tools = llm.bind_tools([ExtractData ]) # OR llm.with_structured_output(ExtractData)
agent= initialize_agent(
[OtherTool],
llm_with_tools,
agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION)
agent.invoke(query)
```
### Error Message and Stack Trace (if applicable)
Observation: ExtractData is not a valid tool, try one of [OtherTool].
Thought:Apologies, let me try this again using the valid tools:
### Description
I am attempting to get structured output from an agent. While the above code DOES work, I always get this error message when the agent attempts to look up its LLM-bound tool among the agent's own tools. (It nevertheless returns the right output in the end, but it sometimes loops through the same operation a few times first.)
The correct behavior would be for the agent NOT to search its own tools for LLM-bound tools, because the input it is trying to feed to the tool is ALREADY the correctly formatted structured output.
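The described correct behavior amounts to a dispatch order along these lines (a purely illustrative sketch; `dispatch` and its parameters are invented names, not LangChain's actual agent dispatcher):

```python
def dispatch(tool_name, tool_input, agent_tools, llm_bound_tools):
    """Route a tool call from the model.

    LLM-bound structured-output tools (like ExtractData above) are checked
    first: their input is already the final, correctly formatted result, so
    they must not be looked up among the agent's executable tools.
    """
    if tool_name in llm_bound_tools:
        return {"final_output": tool_input}
    if tool_name in agent_tools:
        return agent_tools[tool_name](tool_input)
    raise ValueError(f"{tool_name} is not a valid tool, try one of {sorted(agent_tools)}.")


agent_tools = {"OtherTool": lambda x: f"ran OtherTool on {x!r}"}
llm_bound_tools = {"ExtractData"}

# Structured-output call: returned directly instead of raising "not a valid tool".
print(dispatch("ExtractData", {"x": 1, "y": 2}, agent_tools, llm_bound_tools))
# Regular tool call still executes normally.
print(dispatch("OtherTool", "query", agent_tools, llm_bound_tools))
```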
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Thu Jan 11 04:09:03 UTC 2024
> Python Version: 3.12.3 (main, Apr 14 2024, 13:07:33) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.1.42
> langchain: 0.1.16
> langchain_community: 0.0.32
> langsmith: 0.1.47
> langchain_anthropic: 0.1.8
> langchain_chroma: 0.1.0
> langchain_openai: 0.0.5
> langchain_text_splitters: 0.0.1
> langchainhub: 0.1.15 | Problem when using ChatAnthropic bind_tools/with_structured_output with an agent: "x is not a valid tool". | https://api.github.com/repos/langchain-ai/langchain/issues/20530/comments | 2 | 2024-04-16T19:13:27Z | 2024-04-17T01:52:19Z | https://github.com/langchain-ai/langchain/issues/20530 | 2,246,724,713 | 20,530 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
A number of models provide token usage stats as part of their generation response. Should we provide a standardized interface to these stats? It would unblock downstream usage in e.g. tracers.
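One possible shape for such a standardized interface; purely illustrative, and the `UsageMetadata` name and its fields are assumptions here, not the adopted API:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class UsageMetadata:
    """Provider-agnostic token usage stats attached to a generation.

    Field names are hypothetical; each provider integration would map its
    native response fields (e.g. prompt_tokens/completion_tokens) onto them.
    """
    input_tokens: int
    output_tokens: int
    total_tokens: Optional[int] = None

    def __post_init__(self) -> None:
        # Derive the total when the provider does not report one.
        if self.total_tokens is None:
            self.total_tokens = self.input_tokens + self.output_tokens


# Example: mapping an OpenAI-style usage payload onto the common shape.
openai_usage = {"prompt_tokens": 12, "completion_tokens": 34, "total_tokens": 46}
usage = UsageMetadata(
    input_tokens=openai_usage["prompt_tokens"],
    output_tokens=openai_usage["completion_tokens"],
    total_tokens=openai_usage.get("total_tokens"),
)
print(usage)  # UsageMetadata(input_tokens=12, output_tokens=34, total_tokens=46)
```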
RFC proposal: #20522 | Standardized token usage information | https://api.github.com/repos/langchain-ai/langchain/issues/20524/comments | 6 | 2024-04-16T17:27:01Z | 2024-07-29T16:15:43Z | https://github.com/langchain-ai/langchain/issues/20524 | 2,246,555,280 | 20,524 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_mistralai.embeddings import MistralAIEmbeddings
```python
embeddings = MistralAIEmbeddings()
pass_test_str = "hello world" * 4094
embedded_pass_test_str = embeddings.embed_documents([pass_test_str])
print(f"Maximum number of tokens that pass: {len(embeddings.tokenizer.encode(pass_test_str))}") # 8190
print(f"Embedding dimension: {len(embedded_pass_test_str[0])}") # 1024
fail_test_str = "hello world" * 4095
print(f"Number of tokens: {len(embeddings.tokenizer.encode(fail_test_str))}") # 8192
embedded_fail_test_str = embeddings.embed_documents([fail_test_str])
```
### Error Message and Stack Trace (if applicable)
An error occurred with MistralAI: 'data'
Traceback (most recent call last):
File "/Users/y.tahtah/test_langchain_mistralai_embeddings/test.py", line 15, in <module>
embedded_fail_test_str = embeddings.embed_documents([fail_test_str])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/y.tahtah/test_langchain_mistralai_embeddings/venv/lib/python3.12/site-packages/langchain_mistralai/embeddings.py", line 135, in embed_documents
for embedding_obj in response.json()["data"]
~~~~~~~~~~~~~~~^^^^^^^^
KeyError: 'data'
### Description
I'm trying to embed some text using `MistralAIEmbeddings`, and I have split my text according to the `MAX_TOKENS` variable in `libs/partners/mistralai/langchain_mistralai`, but it's not working. Further investigation with the example code provided with this issue led me to find that the embedding model fails to embed a document well before the 16k tokens indicated by `MAX_TOKENS`.
[MistralAI's official page on endpoints](https://docs.mistral.ai/platform/endpoints/) doesn't specify a context window size for the embedding model.
Either there is an issue with how LangChain hits the endpoint, although I couldn't find any problem in the code in that regard and I doubt it is the case, since it works for strings with a token count below $8190$ (as the example code shows), or MistralAI's embedding model has a context length of $8190$, in which case we should update the `MAX_TOKENS` variable.
### System Info
Python 3.12.2
langchain-core==0.1.43
langchain-mistralai==0.1.2
MacOS 14.4.1 (M1) | MAX_TOKENS in MistralAIEmbeddings is incorrect | https://api.github.com/repos/langchain-ai/langchain/issues/20523/comments | 3 | 2024-04-16T17:25:29Z | 2024-04-25T00:39:07Z | https://github.com/langchain-ai/langchain/issues/20523 | 2,246,552,907 | 20,523 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_community.vectorstores import Qdrant
from langchain_openai import AzureOpenAIEmbeddings
embeddings = AzureOpenAIEmbeddings(model="text-embedding-3-small",
azure_endpoint="",
api_key="")
qdrant = Qdrant.from_documents(
docs,
embeddings,
path="local_qdrant",
collection_name="my_documents",
)
### Error Message and Stack Trace (if applicable)
_No response_
### Description
When using `Qdrant.from_documents` to create a collection of documents, it seems that the collection is recreated each time.
```
from langchain_community.vectorstores import Qdrant
qdrant = Qdrant.from_documents(
docs,
embeddings,
path="local_qdrant",
collection_name="my_documents",
)
```
On the first run, the embeddings are saved correctly to a local file.
However, when running again, I had assumed the collection would be reused, but it seems that it is being recreated.
My collection is very large and takes about 10 minutes to build. If I run the call again, it still takes the same time to complete, so my assumption is that the collection on disk is not being read.
### System Info
langchain==0.1.16
langchain-community==0.0.32
langchain-core==0.1.42
langchain-openai==0.1.2
langchain-text-splitters==0.0.1
Platform: OSX
Python Version: 3.11 | Qdrant `from_documents` does not load existing collection | https://api.github.com/repos/langchain-ai/langchain/issues/20514/comments | 3 | 2024-04-16T13:47:13Z | 2024-04-22T10:31:07Z | https://github.com/langchain-ai/langchain/issues/20514 | 2,246,114,753 | 20,514 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
import os
from typing import List, Optional

from langchain.prompts import ChatPromptTemplate
from langchain_experimental.llms.ollama_functions import OllamaFunctions
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.schema.runnable import RunnableLambda
from langchain.document_loaders import WebBaseLoader
from langchain_core.output_parsers.openai_functions import JsonKeyOutputFunctionsParser
from langchain_core.utils.function_calling import convert_to_openai_function
from langchain_core.pydantic_v1 import BaseModel
loader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")
documents = loader.load()
doc = documents[0]
model = OllamaFunctions(temperature=0, model=os.environ['OPEN_HERMES_2_5'])
def flatten(matrix):
flat_list = []
for row in matrix:
flat_list += row
return flat_list
class Paper(BaseModel):
"""Information about papers mentioned."""
title: str
author: Optional[str]
class Info(BaseModel):
"""Information to extract"""
papers: List[Paper]
template = """A article will be passed to you. Extract from it all papers that are mentioned by this article.
Do not extract the name of the article itself. If no papers are mentioned that's fine - you don't need to extract any! Just return an empty list.
Do not make up or guess ANY extra information. Only extract what exactly is in the text."""
prompt = ChatPromptTemplate.from_messages([
("system", template),
("human", "{input}")
])
paper_extraction_function = [
convert_to_openai_function(Info)
]
extraction_model = model.bind(
functions=paper_extraction_function,
function_call={"name":"Info"}
)
extraction_chain = prompt | extraction_model | JsonKeyOutputFunctionsParser(key_name="papers")
text_splitter = RecursiveCharacterTextSplitter(chunk_overlap=0)
prep = RunnableLambda(
lambda x: [{"input": doc} for doc in text_splitter.split_text(x)]
)
chain = prep | extraction_chain.map() | flatten
chain.invoke(doc.page_content)
```
### Error Message and Stack Trace (if applicable)
-> Custom debug print added at line 105 (`print(parsed_chat_result.keys())`) to check on which chunk the error occurred:
dict_keys(['tool', 'tool_input'])
dict_keys(['tool', 'tool_input'])
dict_keys(['tool', 'tool_input'])
dict_keys(['tool', 'tool_input'])
dict_keys(['tool', 'tool_input'])
dict_keys(['tool', 'tool_input'])
dict_keys(['tool', 'tool_input'])
dict_keys(['tool', 'tool_input'])
dict_keys(['tool', 'tool_input'])
dict_keys(['tool', 'tool_input'])
dict_keys(['thoughts', 'command'])
dict_keys(['tool', 'tool_input'])
dict_keys(['tool', 'tool_input'])
dict_keys(['tool', 'tool_input'])
dict_keys(['tool', 'tool_input'])
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[107], line 1
----> 1 chain.invoke(doc.page_content)
File ~/Workspace/lchain/venv311/lib/python3.11/site-packages/langchain_core/runnables/base.py:2499, in RunnableSequence.invoke(self, input, config)
2497 try:
2498 for i, step in enumerate(self.steps):
-> 2499 input = step.invoke(
2500 input,
2501 # mark each step as a child run
2502 patch_config(
2503 config, callbacks=run_manager.get_child(f"seq:step:{i+1}")
2504 ),
2505 )
2506 # finish the root run
2507 except BaseException as e:
File ~/Workspace/lchain/venv311/lib/python3.11/site-packages/langchain_core/runnables/base.py:4262, in RunnableEachBase.invoke(self, input, config, **kwargs)
4259 def invoke(
4260 self, input: List[Input], config: Optional[RunnableConfig] = None, **kwargs: Any
4261 ) -> List[Output]:
-> 4262 return self._call_with_config(self._invoke, input, config, **kwargs)
File ~/Workspace/lchain/venv311/lib/python3.11/site-packages/langchain_core/runnables/base.py:1625, in Runnable._call_with_config(self, func, input, config, run_type, **kwargs)
1621 context = copy_context()
1622 context.run(var_child_runnable_config.set, child_config)
1623 output = cast(
1624 Output,
-> 1625 context.run(
1626 call_func_with_variable_args, # type: ignore[arg-type]
1627 func, # type: ignore[arg-type]
1628 input, # type: ignore[arg-type]
1629 config,
1630 run_manager,
1631 **kwargs,
1632 ),
1633 )
1634 except BaseException as e:
1635 run_manager.on_chain_error(e)
File ~/Workspace/lchain/venv311/lib/python3.11/site-packages/langchain_core/runnables/config.py:347, in call_func_with_variable_args(func, input, config, run_manager, **kwargs)
345 if run_manager is not None and accepts_run_manager(func):
346 kwargs["run_manager"] = run_manager
--> 347 return func(input, **kwargs)
File ~/Workspace/lchain/venv311/lib/python3.11/site-packages/langchain_core/runnables/base.py:4255, in RunnableEachBase._invoke(self, inputs, run_manager, config, **kwargs)
4248 def _invoke(
4249 self,
4250 inputs: List[Input],
(...)
4253 **kwargs: Any,
4254 ) -> List[Output]:
-> 4255 return self.bound.batch(
4256 inputs, patch_config(config, callbacks=run_manager.get_child()), **kwargs
4257 )
File ~/Workspace/lchain/venv311/lib/python3.11/site-packages/langchain_core/runnables/base.py:2643, in RunnableSequence.batch(self, inputs, config, return_exceptions, **kwargs)
2641 else:
2642 for i, step in enumerate(self.steps):
-> 2643 inputs = step.batch(
2644 inputs,
2645 [
2646 # each step a child run of the corresponding root run
2647 patch_config(
2648 config, callbacks=rm.get_child(f"seq:step:{i+1}")
2649 )
2650 for rm, config in zip(run_managers, configs)
2651 ],
2652 )
2654 # finish the root runs
2655 except BaseException as e:
File ~/Workspace/lchain/venv311/lib/python3.11/site-packages/langchain_core/runnables/base.py:4544, in RunnableBindingBase.batch(self, inputs, config, return_exceptions, **kwargs)
4542 else:
4543 configs = [self._merge_configs(config) for _ in range(len(inputs))]
-> 4544 return self.bound.batch(
4545 inputs,
4546 configs,
4547 return_exceptions=return_exceptions,
4548 **{**self.kwargs, **kwargs},
4549 )
File ~/Workspace/lchain/venv311/lib/python3.11/site-packages/langchain_core/runnables/base.py:634, in Runnable.batch(self, inputs, config, return_exceptions, **kwargs)
631 return cast(List[Output], [invoke(inputs[0], configs[0])])
633 with get_executor_for_config(configs[0]) as executor:
--> 634 return cast(List[Output], list(executor.map(invoke, inputs, configs)))
File /usr/lib/python3.11/concurrent/futures/_base.py:619, in Executor.map.<locals>.result_iterator()
616 while fs:
617 # Careful not to keep a reference to the popped future
618 if timeout is None:
--> 619 yield _result_or_cancel(fs.pop())
620 else:
621 yield _result_or_cancel(fs.pop(), end_time - time.monotonic())
File /usr/lib/python3.11/concurrent/futures/_base.py:317, in _result_or_cancel(***failed resolving arguments***)
315 try:
316 try:
--> 317 return fut.result(timeout)
318 finally:
319 fut.cancel()
File /usr/lib/python3.11/concurrent/futures/_base.py:456, in Future.result(self, timeout)
454 raise CancelledError()
455 elif self._state == FINISHED:
--> 456 return self.__get_result()
457 else:
458 raise TimeoutError()
File /usr/lib/python3.11/concurrent/futures/_base.py:401, in Future.__get_result(self)
399 if self._exception:
400 try:
--> 401 raise self._exception
402 finally:
403 # Break a reference cycle with the exception in self._exception
404 self = None
File /usr/lib/python3.11/concurrent/futures/thread.py:58, in _WorkItem.run(self)
55 return
57 try:
---> 58 result = self.fn(*self.args, **self.kwargs)
59 except BaseException as exc:
60 self.future.set_exception(exc)
File ~/Workspace/lchain/venv311/lib/python3.11/site-packages/langchain_core/runnables/config.py:466, in ContextThreadPoolExecutor.map.<locals>._wrapped_fn(*args)
465 def _wrapped_fn(*args: Any) -> T:
--> 466 return contexts.pop().run(fn, *args)
File ~/Workspace/lchain/venv311/lib/python3.11/site-packages/langchain_core/runnables/base.py:627, in Runnable.batch.<locals>.invoke(input, config)
625 return e
626 else:
--> 627 return self.invoke(input, config, **kwargs)
File ~/Workspace/lchain/venv311/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:158, in BaseChatModel.invoke(self, input, config, stop, **kwargs)
147 def invoke(
148 self,
149 input: LanguageModelInput,
(...)
153 **kwargs: Any,
154 ) -> BaseMessage:
155 config = ensure_config(config)
156 return cast(
157 ChatGeneration,
--> 158 self.generate_prompt(
159 [self._convert_input(input)],
160 stop=stop,
161 callbacks=config.get("callbacks"),
162 tags=config.get("tags"),
163 metadata=config.get("metadata"),
164 run_name=config.get("run_name"),
165 run_id=config.pop("run_id", None),
166 **kwargs,
167 ).generations[0][0],
168 ).message
File ~/Workspace/lchain/venv311/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:560, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
552 def generate_prompt(
553 self,
554 prompts: List[PromptValue],
(...)
557 **kwargs: Any,
558 ) -> LLMResult:
559 prompt_messages = [p.to_messages() for p in prompts]
--> 560 return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File ~/Workspace/lchain/venv311/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:421, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
419 if run_managers:
420 run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 421 raise e
422 flattened_outputs = [
423 LLMResult(generations=[res.generations], llm_output=res.llm_output) # type: ignore[list-item]
424 for res in results
425 ]
426 llm_output = self._combine_llm_outputs([res.llm_output for res in results])
File ~/Workspace/lchain/venv311/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:411, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
408 for i, m in enumerate(messages):
409 try:
410 results.append(
--> 411 self._generate_with_cache(
412 m,
413 stop=stop,
414 run_manager=run_managers[i] if run_managers else None,
415 **kwargs,
416 )
417 )
418 except BaseException as e:
419 if run_managers:
File ~/Workspace/lchain/venv311/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:632, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
630 else:
631 if inspect.signature(self._generate).parameters.get("run_manager"):
--> 632 result = self._generate(
633 messages, stop=stop, run_manager=run_manager, **kwargs
634 )
635 else:
636 result = self._generate(messages, stop=stop, **kwargs)
File ~/Workspace/lchain/venv311/lib/python3.11/site-packages/langchain_experimental/llms/ollama_functions.py:107, in OllamaFunctions._generate(self, messages, stop, run_manager, **kwargs)
101 raise ValueError(
102 f'"{self.llm.model}" did not respond with valid JSON. Please try again.'
103 )
105 print(parsed_chat_result.keys()) #CUSTOM added for DEBUG
--> 107 called_tool_name = parsed_chat_result["tool"]
108 called_tool_arguments = parsed_chat_result["tool_input"]
109 called_tool = next(
110 (fn for fn in functions if fn["name"] == called_tool_name), None
111 )
KeyError: 'tool'
### Description
While doing one of tutorial from DLAI an issue occured in function OllamaFunctions._generate from langchain_experimental pkg.
I use given article and I tried to parse it by follow tutorial steps. ( check python code )
The issue is that sometimes dict keys() in `OllamaFunctions._generate` doesn't contain `dict_keys(['tool', 'tool_input'])` rather other values as `dict_keys(['thoughts', 'command'])` which end up with KeyError.
Above code steps worked in tutorial ( for ChatOpenAI) but I did not try OpenAI chat because I do not have api key, and Im using Ollama local `openhermes_2.5_7b_q5_k_m`.
What I have observed:
> len(doc.page_content) == 43902
there is no issue when
> chain.invoke(doc.page_content[:30000])
and issue starts for:
> chain.invoke(doc.page_content[:40000])
For me in such cases `expect KeyError` handling should be added and allow user get final result with some info or other error should be raised to be more preciously
### System Info
System Information
------------------
> OS: Linux
> OS Version: #28~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Fri Mar 15 10:51:06 UTC 2
> Python Version: 3.11.9 (main, Apr 6 2024, 17:59:24) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.1.43
> langchain: 0.1.16
> langchain_community: 0.0.33
> langsmith: 0.1.48
> langchain_experimental: 0.0.57
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
Ollama server | openhermes_2.5_7b_q5_k_m | CUDA | KeyError: "tool" in langchain_experimental -> OllamaFunctions._generate | https://api.github.com/repos/langchain-ai/langchain/issues/20513/comments | 2 | 2024-04-16T13:36:34Z | 2024-06-13T09:55:45Z | https://github.com/langchain-ai/langchain/issues/20513 | 2,246,090,039 | 20,513 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_core.runnables import RunnableLambda
from langchain_core.tracers.schemas import Run
def fake_chain(inputs: dict) -> dict:
return {**inputs, "key": "extra"}
def on_start(run: Run):
print("on_start:", run.inputs)
def on_end(run: Run):
print("on_end: ", run.outputs)
chain = RunnableLambda(fake_chain).with_listeners(on_end=on_end, on_start=on_start)
chain = chain.map()
data = [{"name": "one"}, {"name": "two"}]
out = chain.invoke(data, config={"max_concurrency": 1})
print("result: ", out)
```
`max_concurrency` is added for simplicity.
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I want to store the `fake_chain` output using listeners. `with_listeners()` allows hooking only the top-level runnable (according to its docstring). But the `run` object is incorrect when using `map()`.
I expect to see
```
on_start: {'name': 'one'}
on_start: {'name': 'two'}
on_end: {'name': 'one', 'key': 'extra'}
on_end: {'name': 'two', 'key': 'extra'}
result: [{'name': 'one', 'key': 'extra'}, {'name': 'two', 'key': 'extra'}]
```
but get
```
on_start: {'name': 'one'}
on_start: {'name': 'one'} # <!
on_end: {'name': 'one', 'key': 'extra'}
on_end: {'name': 'one', 'key': 'extra'} # <!
result: [{'name': 'one', 'key': 'extra'}, {'name': 'two', 'key': 'extra'}]
```
I didn't dive deeper, but something goes wrong in `RunnableBindingBase.batch() -> _merge_configs()` (a guess).
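If that guess is right, the failure mode resembles one mutable config/context object being shared across batch items; a minimal pure-Python illustration of that pitfall (hypothetical, not the actual LangChain internals):

```python
shared_context = {}
seen = []

def make_listener(context, sink):
    # Each listener closes over a context dict and records its inputs.
    def on_start():
        sink.append(context["inputs"])
    return on_start

items = [{"name": "one"}, {"name": "two"}]
listeners = []
for item in items:
    # Bug: every listener closes over the *same* dict, and setdefault only
    # stores the first item's inputs, so all listeners see item one.
    shared_context.setdefault("inputs", item)
    listeners.append(make_listener(shared_context, seen))

for fire in listeners:
    fire()
print(seen)  # [{'name': 'one'}, {'name': 'one'}] -- first input duplicated

# Fix: give each listener its own copy of the context.
seen_fixed = []
for item in items:
    make_listener({"inputs": item}, seen_fixed)()
print(seen_fixed)  # [{'name': 'one'}, {'name': 'two'}]
```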
### System Info
```shell
$ pip freeze | grep langchain
langchain==0.1.16
langchain-anthropic==0.1.4
langchain-community==0.0.33
langchain-core==0.1.43
langchain-google-genai==0.0.11
langchain-google-vertexai==0.1.2
langchain-groq==0.0.1
langchain-openai==0.0.8
langchain-text-splitters==0.0.1
```
platform: `linux`
python: `3.11.8` | Incorrect listeners parameters for Runnable.with_listeners() and .map() | https://api.github.com/repos/langchain-ai/langchain/issues/20509/comments | 3 | 2024-04-16T11:00:20Z | 2024-05-13T15:16:18Z | https://github.com/langchain-ai/langchain/issues/20509 | 2,245,754,102 | 20,509 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
self.llm = self.llm.with_fallbacks(fallbackModels)
self.agent = create_tool_calling_agent(self.llm.llm, self.tools, self.promptTemplate.getAgentPrompt(self.tools))
### Error Message and Stack Trace (if applicable)
self.agent = create_tool_calling_agent(self.llm.llm, self.tools, self.promptTemplate.getAgentPrompt(self.tools))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain/agents/tool_calling_agent/base.py", line 85, in create_tool_calling_agent
raise ValueError(
ValueError: This function requires a .bind_tools method be implemented on the LLM.
### Description
My code works well with `create_tool_calling_agent` when I don't call the `with_fallbacks` function.
### System Info
langchain==0.1.16
langchain-community==0.0.33
langchain-core==0.1.43
langchain-experimental==0.0.49
langchain-google-genai==1.0.1
langchain-openai==0.1.3
langchain-text-splitters==0.0.1
langchainhub==0.1.14
platform linux
python 3.11 | LLM with_fallbacks function not work with create_tool_calling_agent | https://api.github.com/repos/langchain-ai/langchain/issues/20499/comments | 7 | 2024-04-16T06:51:10Z | 2024-07-10T04:57:16Z | https://github.com/langchain-ai/langchain/issues/20499 | 2,245,254,883 | 20,499 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
from langchain_openai import AzureChatOpenAI
from langchain_openai import AzureOpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import WebBaseLoader
from langchain_community.vectorstores import Chroma
from typing import Literal
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain import hub
from langchain_core.output_parsers import StrOutputParser
from langchain_community.tools.tavily_search import TavilySearchResults
# Docs to index
urls = [
"https://lilianweng.github.io/posts/2023-06-23-agent/",
"https://lilianweng.github.io/posts/2023-03-15-prompt-engineering/",
"https://lilianweng.github.io/posts/2023-10-25-adv-attack-llm/",
]
# Load
docs = [WebBaseLoader(url).load() for url in urls]
docs_list = [item for sublist in docs for item in sublist]
# Split
text_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
chunk_size=500, chunk_overlap=0
)
doc_splits = text_splitter.split_documents(docs_list)
# Add to vectorstore
vectorstore = Chroma.from_documents(
documents=doc_splits,
collection_name="rag-chroma",
embedding=embeddings,
)
retriever = vectorstore.as_retriever()
# Data model
class RouteQuery(BaseModel):
"""Route a user query to the most relevant datasource."""
datasource: Literal["vectorstore", "web_search"] = Field(
...,
description="Given a user question choose to route it to web search or a vectorstore.",
)
# LLM with function call
llm = AzureChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
structured_llm_router = llm.with_structured_output(RouteQuery)
# Prompt
system = """You are an expert at routing a user question to a vectorstore or web search.
The vectorstore contains documents related to agents, prompt engineering, and adversarial attacks.
Use the vectorstore for questions on these topics. Otherwise, use web-search."""
route_prompt = ChatPromptTemplate.from_messages(
[
("system", system),
("human", "{question}"),
]
)
question_router = route_prompt | structured_llm_router
print(question_router.invoke({"question": "Who will the Bears draft first in the NFL draft?"}))
print(question_router.invoke({"question": "What are the types of agent memory?"}))
### Error Message and Stack Trace (if applicable)
C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\_api\beta_decorator.py:87: LangChainBetaWarning: The function `with_structured_output` is in beta. It is actively being worked on, so the API may change.
warn_beta(
---ROUTE QUESTION---
Traceback (most recent call last):
File "C:\Users\prakotian\Desktop\Projects\GenAI Projects\AdaptiveRAG\app.py", line 255, in <module>
for output in app.stream(inputs):
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langgraph\pregel\__init__.py", line 686, in stream
_panic_or_proceed(done, inflight, step)
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langgraph\pregel\__init__.py", line 1049, in _panic_or_proceed
raise exc
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\concurrent\futures\thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\runnables\base.py", line 2499, in invoke
input = step.invoke(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langgraph\utils.py", line 49, in invoke
ret = self.func(input, merge_configs(self.config, config), **self.kwargs)
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langgraph\graph\graph.py", line 67, in _route
result = self.condition.invoke(reader(config) if reader else input, config)
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\runnables\base.py", line 3961, in invoke
return self._call_with_config(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\runnables\base.py", line 1625, in _call_with_config
context.run(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\runnables\config.py", line 347, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\runnables\base.py", line 3835, in _invoke
output = call_func_with_variable_args(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\runnables\config.py", line 347, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
File "C:\Users\prakotian\Desktop\Projects\GenAI Projects\AdaptiveRAG\app.py", line 142, in route_question
source = question_router.invoke({"question": question})
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\runnables\base.py", line 2499, in invoke
input = step.invoke(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\runnables\base.py", line 4511, in invoke
return self.bound.invoke(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\language_models\chat_models.py", line 158, in invoke
self.generate_prompt(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\language_models\chat_models.py", line 560, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\language_models\chat_models.py", line 421, in generate
raise e
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\language_models\chat_models.py", line 411, in generate
self._generate_with_cache(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\language_models\chat_models.py", line 632, in _generate_with_cache
result = self._generate(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_openai\chat_models\base.py", line 548, in _generate
response = self.client.create(messages=message_dicts, **params)
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\openai\_utils\_utils.py", line 275, in wrapper
return func(*args, **kwargs)
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\openai\resources\chat\completions.py", line 667, in create
return self._post(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\openai\_base_client.py", line 1233, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\openai\_base_client.py", line 922, in request
return self._request(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\openai\_base_client.py", line 1013, in _request
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid parameter: 'response_format' of type 'json_object' is not supported with this model.", 'type': 'invalid_request_error', 'param': 'response_format', 'code': None}}
(Py10) C:\Users\prakotian\Desktop\Projects\GenAI Projects\AdaptiveRAG>
(Py10) C:\Users\prakotian\Desktop\Projects\GenAI Projects\AdaptiveRAG>python router.py
C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\_api\beta_decorator.py:87: LangChainBetaWarning: The function `with_structured_output` is in beta. It is actively being worked on, so the API may change.
warn_beta(
Traceback (most recent call last):
File "C:\Users\prakotian\Desktop\Projects\GenAI Projects\AdaptiveRAG\router.py", line 87, in <module>
print(question_router.invoke({"question": "Who will the Bears draft first in the NFL draft?"}))
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\runnables\base.py", line 2499, in invoke
input = step.invoke(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\output_parsers\base.py", line 169, in invoke
return self._call_with_config(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\runnables\base.py", line 1625, in _call_with_config
context.run(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\runnables\config.py", line 347, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\output_parsers\base.py", line 170, in <lambda>
lambda inner_input: self.parse_result(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\output_parsers\openai_tools.py", line 182, in parse_result
json_results = super().parse_result(result, partial=partial)
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\output_parsers\openai_tools.py", line 129, in parse_result
tool_calls = parse_tool_calls(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\output_parsers\openai_tools.py", line 85, in parse_tool_calls
raise OutputParserException("\n\n".join(exceptions))
langchain_core.exceptions.OutputParserException: Function RouteQuery arguments:
{
datasource: "web_search"
}
are not valid JSON. Received JSONDecodeError Expecting property name enclosed in double quotes: line 2 column 3 (char 4)
### Description
Expected Output:
datasource='web_search'
datasource='vectorstore'
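The parser failure above is mechanical: the model emitted `{ datasource: "web_search" }`, which is not valid JSON because the property name is unquoted. As a stopgap (a hedged sketch, not a LangChain API — the helper name is made up here), a tolerant parser can quote bare object keys before handing the text to `json.loads`:

```python
import json
import re

def parse_lenient_json(text: str) -> dict:
    """Quote bare object keys (e.g. {datasource: "x"}) so json.loads
    accepts almost-JSON emitted by the model."""
    fixed = re.sub(r'([{,]\s*)([A-Za-z_][A-Za-z0-9_]*)(\s*:)', r'\1"\2"\3', text)
    return json.loads(fixed)

print(parse_lenient_json('{\n  datasource: "web_search"\n}'))  # {'datasource': 'web_search'}
```

Separately, the earlier 400 (`'response_format' of type 'json_object' is not supported with this model`) suggests the Azure deployment behind the client does not support JSON mode; if that is the case, forcing tool calling instead, e.g. `llm.with_structured_output(RouteQuery, method="function_calling")` in recent `langchain-openai` releases, may sidestep JSON mode entirely.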
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.19045
> Python Version: 3.10.13 | packaged by Anaconda, Inc. | (main, Sep 11 2023, 13:24:38) [MSC v.1916 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.43
> langchain: 0.1.16
> langchain_community: 0.0.33
> langsmith: 0.1.31
> langchain_cohere: 0.1.2
> langchain_experimental: 0.0.54
> langchain_openai: 0.1.3
> langchain_text_splitters: 0.0.1
> langchainhub: 0.1.15
> langgraph: 0.0.37 | llm.with_structured_output(RouteQuery) fails running AdaptiveRAG Example with AzureOpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/24100/comments | 4 | 2024-04-16T06:40:47Z | 2024-07-11T00:30:11Z | https://github.com/langchain-ai/langchain/issues/24100 | 2,401,975,261 | 24,100 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.chains import GraphCypherQAChain
from langchain_community.graphs import Neo4jGraph
from chatglm3 import chatglm3

llm = chatglm3()
graph = Neo4jGraph(
    url="bolt://xxxx", username="xxxx", password="xxxx"
)
graph.refresh_schema()
print(graph.schema)

chain = GraphCypherQAChain.from_llm(llm, graph=graph, verbose=True)
chain.run("Who played in Top Gun?")
```
### Error Message and Stack Trace (if applicable)
Node properties are the following:
Movie {name: STRING},Actor {name: STRING}
Relationship properties are the following:
The relationships are the following:
(:Actor)-[:ACTED_IN]->(:Movie)
> Entering new GraphCypherQAChain chain...
D:\anaconda\envs\ChatGLM-6B\lib\site-packages\langchain_core\_api\deprecation.py:117: LangChainDeprecationWarning: The function `run` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.
warn_deprecated(
Traceback (most recent call last):
File "E:\LLM_project\ChatGLM3-main\LLM+KG.py", line 16, in <module>
chain.run("Who played in Top Gun?")
File "D:\anaconda\envs\ChatGLM-6B\lib\site-packages\langchain_core\_api\deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "D:\anaconda\envs\ChatGLM-6B\lib\site-packages\langchain\chains\base.py", line 545, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
File "D:\anaconda\envs\ChatGLM-6B\lib\site-packages\langchain_core\_api\deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "D:\anaconda\envs\ChatGLM-6B\lib\site-packages\langchain\chains\base.py", line 378, in __call__
return self.invoke(
File "D:\anaconda\envs\ChatGLM-6B\lib\site-packages\langchain\chains\base.py", line 163, in invoke
raise e
File "D:\anaconda\envs\ChatGLM-6B\lib\site-packages\langchain\chains\base.py", line 153, in invoke
self._call(inputs, run_manager=run_manager)
File "D:\anaconda\envs\ChatGLM-6B\lib\site-packages\langchain\chains\graph_qa\cypher.py", line 246, in _call
generated_cypher = self.cypher_generation_chain.run(
File "D:\anaconda\envs\ChatGLM-6B\lib\site-packages\langchain_core\_api\deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "D:\anaconda\envs\ChatGLM-6B\lib\site-packages\langchain\chains\base.py", line 545, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
File "D:\anaconda\envs\ChatGLM-6B\lib\site-packages\langchain_core\_api\deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "D:\anaconda\envs\ChatGLM-6B\lib\site-packages\langchain\chains\base.py", line 378, in __call__
return self.invoke(
File "D:\anaconda\envs\ChatGLM-6B\lib\site-packages\langchain\chains\base.py", line 163, in invoke
raise e
File "D:\anaconda\envs\ChatGLM-6B\lib\site-packages\langchain\chains\base.py", line 153, in invoke
self._call(inputs, run_manager=run_manager)
File "D:\anaconda\envs\ChatGLM-6B\lib\site-packages\langchain\chains\llm.py", line 103, in _call
response = self.generate([inputs], run_manager=run_manager)
File "D:\anaconda\envs\ChatGLM-6B\lib\site-packages\langchain\chains\llm.py", line 115, in generate
return self.llm.generate_prompt(
File "D:\anaconda\envs\ChatGLM-6B\lib\site-packages\langchain_core\language_models\llms.py", line 568, in generate_prompt
return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
File "D:\anaconda\envs\ChatGLM-6B\lib\site-packages\langchain_core\language_models\llms.py", line 741, in generate
output = self._generate_helper(
File "D:\anaconda\envs\ChatGLM-6B\lib\site-packages\langchain_core\language_models\llms.py", line 605, in _generate_helper
raise e
File "D:\anaconda\envs\ChatGLM-6B\lib\site-packages\langchain_core\language_models\llms.py", line 592, in _generate_helper
self._generate(
File "D:\anaconda\envs\ChatGLM-6B\lib\site-packages\langchain_core\language_models\llms.py", line 1177, in _generate
self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
File "E:\LLM_project\ChatGLM3-main\chatglm3.py", line 33, in _call
response = self.model.chat(self.tokenizer, messages)
File "D:\anaconda\envs\ChatGLM-6B\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\HF_HOME\modules\transformers_modules\6B_32k\modeling_chatglm.py", line 1034, in chat
inputs = tokenizer.build_chat_input(query, history=history, role=role)
File "D:\HF_HOME\modules\transformers_modules\6B_32k\tokenization_chatglm.py", line 193, in build_chat_input
input_ids.extend(self.build_single_message(role, "", query))
File "D:\HF_HOME\modules\transformers_modules\6B_32k\tokenization_chatglm.py", line 180, in build_single_message
message_tokens = self.tokenizer.encode(message)
File "D:\HF_HOME\modules\transformers_modules\6B_32k\tokenization_chatglm.py", line 37, in encode
assert type(s) is str
AssertionError
Exception ignored in: <function Driver.__del__ at 0x0000020CE064EB80>
Traceback (most recent call last):
File "D:\anaconda\envs\ChatGLM-6B\lib\site-packages\neo4j\_sync\driver.py", line 507, in __del__
File "D:\anaconda\envs\ChatGLM-6B\lib\site-packages\neo4j\_meta.py", line 229, in unclosed_resource_warn
TypeError: 'NoneType' object is not callable
### Description
An error occurred while running the Neo4j tutorial provided in the official LangChain documentation.
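The traceback bottoms out inside ChatGLM3's tokenizer (`assert type(s) is str`), which suggests the custom `chatglm3` wrapper's `_call` is handing `model.chat()` something other than a plain string. A minimal sketch of the shape that avoids the assertion — `FakeModel` is a stand-in invented here for the real model, which this sketch assumes behaves like ChatGLM3's `chat(tokenizer, query, history)`:

```python
class FakeModel:
    """Stand-in for ChatGLM3: chat() asserts the query is a plain str,
    mirroring the `assert type(s) is str` in tokenization_chatglm.py."""
    def chat(self, tokenizer, query, history=None):
        assert type(query) is str
        return f"echo: {query}", (history or [])

def call_model(model, prompt) -> str:
    # Coerce whatever LangChain hands the wrapper (PromptValue, message
    # list, ...) into the plain string ChatGLM3 expects.
    query = prompt if isinstance(prompt, str) else str(prompt)
    response, _history = model.chat(None, query)
    return response

print(call_model(FakeModel(), "Who played in Top Gun?"))  # echo: Who played in Top Gun?
```

If the real wrapper's `_call(self, prompt, ...)` applies the same coercion before calling `self.model.chat(self.tokenizer, ...)`, the `AssertionError` should not be reachable.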
### System Info
This is my environment:
OS:windows 10
langchain 0.1.11
langchain-community 0.0.25
langchain-core 0.1.29
langchain-text-splitters 0.0.1
langchainhub 0.1.15
langsmith 0.1.22
python 3.8.16 | When I ran knowledge graph enhanced retrieval using Langchain+Neo4j, I encountered an error | https://api.github.com/repos/langchain-ai/langchain/issues/20497/comments | 0 | 2024-04-16T06:39:34Z | 2024-04-18T04:12:56Z | https://github.com/langchain-ai/langchain/issues/20497 | 2,245,233,619 | 20,497 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Callbacks documentation and code are a little ragged around the edges (imo).
### Error Message and Stack Trace (if applicable)
See the Callbacks [page](https://python.langchain.com/docs/modules/callbacks/).
### Description
The documentation has to be brought to the LCEL era, updated to include events like `on_retriever_start` and given a similar look and feel to other pages (e.g. Agents).
The code for the built-in handler `StdOutCallbackHandler` has to be (lightly) modified for the LCEL era, and `FileCallbackHandler` has to be moved from `community` to `core`.
### System Info
NA | Callbacks need some TLC | https://api.github.com/repos/langchain-ai/langchain/issues/20493/comments | 3 | 2024-04-16T05:36:55Z | 2024-05-23T07:10:25Z | https://github.com/langchain-ai/langchain/issues/20493 | 2,245,130,783 | 20,493 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import json

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_community.utilities import SerpAPIWrapper
from langchain.agents import create_openai_tools_agent
from langchain.agents import AgentExecutor, Tool

chat = ChatOpenAI(model="gpt-3.5-turbo-1106", streaming=True)
search = SerpAPIWrapper()
#search = GoogleSearchAPIWrapper()
tools = [Tool(
name="google_search",
description="Search Google for recent results.",
func=search.run,
return_direct=False
)]
prompt = ChatPromptTemplate.from_messages(
[
(
"system",
"You are a helpful assistant. You may not need to use tools for every query - the user may just want to chat!",
),
MessagesPlaceholder(variable_name="messages"),
MessagesPlaceholder(variable_name="agent_scratchpad"),
]
)
agent = create_openai_tools_agent(tools = tools,llm = chat,prompt=prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

output = {}  # accumulate streamed chunks by key; chat_history is created elsewhere in the app
for chunk in agent_executor.stream({"messages": chat_history.messages}):
for key in chunk:
if key not in output:
output[key] = chunk[key]
else:
output[key] += chunk[key]
if "actions" in chunk:
for action in chunk["actions"]:
print(f"Calling Tool: `{action.tool}` with input `{action.tool_input}`")
continue
if "steps" in chunk:
observation = chunk["steps"][-1].observation
for step in chunk["steps"]:
print(f"Tool Result: `{step.observation}`")
continue
if "output" in chunk:
print(chunk["output"], end="", flush=True)
response_json = json.dumps({"stat": "SUCCESS", "content": chunk["output"]})
```
### Error Message and Stack Trace (if applicable)
```json
{'error': {'message': "An assistant message with 'tool_calls' must be followed by tool messages responding to each 'tool_call_id'. The following tool_call_ids did not have response messages: call_APhVboGGQV2ZfLqDnqukNltV", 'type': 'invalid_request_error', 'param': 'messages.[4].role', 'code': None}}
```
### Description
I want to integrate Google search into my chatbot and use streaming output, but the call fails with `openai.BadRequestError: Error code: 400`.
I searched on both Google and Github but did not find any relevant information.
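The 400 above says the replayed `chat_history` contains an assistant message carrying `tool_calls` whose matching tool responses were never stored. One hedged workaround — sketched here on plain OpenAI-style dicts rather than LangChain message objects, and the helper name is invented — is to drop such orphaned messages before re-sending the history:

```python
def drop_unanswered_tool_calls(messages):
    """Remove assistant messages whose tool_call ids have no tool response."""
    answered = {m.get("tool_call_id") for m in messages if m.get("role") == "tool"}
    cleaned = []
    for m in messages:
        calls = m.get("tool_calls") or []
        if m.get("role") == "assistant" and calls:
            if not all(c["id"] in answered for c in calls):
                continue  # orphaned tool call: OpenAI rejects it with a 400
        cleaned.append(m)
    return cleaned

history = [
    {"role": "user", "content": "weather?"},
    {"role": "assistant", "tool_calls": [{"id": "call_APhV"}]},  # no tool reply saved
    {"role": "assistant", "content": "Hello!"},
]
print(drop_unanswered_tool_calls(history))  # keeps the user turn and the plain reply
```

The cleaner fix is to persist the intermediate `ToolMessage` results alongside the assistant turn, so every `tool_call_id` has its response when the history is replayed.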
### System Info
System Information
OS: Windows
OS Version: 10.0.19045
Python Version: 3.12.1 (tags/v3.12.1:2305ca5, Dec 7 2023, 22:03:25) [MSC v.1937 64 bit (AMD64)]
Package Information
langchain_core: 0.1.32
langchain: 0.1.12
langchain_community: 0.0.28
langsmith: 0.1.27
langchain_openai: 0.0.8
langchain_text_splitters: 0.0.1
langchainhub: 0.1.15 | When I use the tool in Agent, it returns OPEN AI 400 Bad Request. | https://api.github.com/repos/langchain-ai/langchain/issues/20492/comments | 14 | 2024-04-16T04:23:02Z | 2024-08-01T16:06:19Z | https://github.com/langchain-ai/langchain/issues/20492 | 2,245,061,633 | 20,492 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
final_chain = ConversationalRetrievalChain.from_llm(
llm=llm,
retriever=retriever,
memory=memory,
verbose=True
)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am using MongoDBChatMessageHistory and it works as expected functionally, but I would expect the history record to be saved as an object in MongoDB rather than as a JSON string, to aid readability in the database.
(As an aside, I am not sure why the fields are not snake case or camel case...)

### System Info
```
langchain==0.1.13
langchain-community==0.0.29
langchain-core==0.1.36
langchain-mongodb==0.1.3
langchain-openai==0.1.1
langchain-text-splitters==0.0.1
langchainhub==0.1.15
``` | MongoDBChatMessageHistory saves History as a string rather than an object | https://api.github.com/repos/langchain-ai/langchain/issues/20486/comments | 1 | 2024-04-15T18:56:38Z | 2024-07-23T16:12:02Z | https://github.com/langchain-ai/langchain/issues/20486 | 2,244,397,666 | 20,486 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code:
```python
from langchain_community.chat_models import ChatCohere
from dotenv import load_dotenv
import os
load_dotenv()
COHERE_API_KEY= os.getenv("COHERE_API_KEY")
chat_model = ChatCohere(cohere_api_key=COHERE_API_KEY, model="command-r")
test = await chat_model.ainvoke("test")
print(test)
```
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[4], line 8
      6 COHERE_API_KEY= os.getenv("COHERE_API_KEY")
      7 chat_model = ChatCohere(cohere_api_key=COHERE_API_KEY, model="command-r", offline=False)
----> 8 test = await chat_model.ainvoke("test")
      9 print(test)

File ~/Desktop/AI/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:175, in BaseChatModel.ainvoke(self, input, config, stop, **kwargs)
    166 async def ainvoke(
    167     self,
    168     input: LanguageModelInput,
    (...)
    172     **kwargs: Any,
    173 ) -> BaseMessage:
    174     config = ensure_config(config)
--> 175     llm_result = await self.agenerate_prompt(
    176         [self._convert_input(input)],
    177         stop=stop,
    178         callbacks=config.get("callbacks"),
    179         tags=config.get("tags"),
    180         metadata=config.get("metadata"),
    181         run_name=config.get("run_name"),
    182         run_id=config.pop("run_id", None),
    183         **kwargs,
    184     )
    185     return cast(ChatGeneration, llm_result.generations[0][0]).message

File ~/Desktop/AI/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:566, in BaseChatModel.agenerate_prompt(self, prompts, stop, callbacks, **kwargs)
    558 async def agenerate_prompt(
    559     self,
    560     prompts: List[PromptValue],
    (...)
    563     **kwargs: Any,
    564 ) -> LLMResult:
    565     prompt_messages = [p.to_messages() for p in prompts]
--> 566     return await self.agenerate(
    567         prompt_messages, stop=stop, callbacks=callbacks, **kwargs
    568     )

File ~/Desktop/AI/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:526, in BaseChatModel.agenerate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
    513 if run_managers:
    514     await asyncio.gather(
    515         *[
    516             run_manager.on_llm_end(
    (...)
    524         ]
    525     )
--> 526     raise exceptions[0]
    527 flattened_outputs = [
    528     LLMResult(generations=[res.generations], llm_output=res.llm_output)  # type: ignore[list-item, union-attr]
    529     for res in results
    530 ]
    531 llm_output = self._combine_llm_outputs([res.llm_output for res in results])  # type: ignore[union-attr]

File ~/Desktop/AI/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:707, in BaseChatModel._agenerate_with_cache(self, messages, stop, run_manager, **kwargs)
    705 else:
    706     if inspect.signature(self._agenerate).parameters.get("run_manager"):
--> 707         result = await self._agenerate(
    708             messages, stop=stop, run_manager=run_manager, **kwargs
    709         )
    710     else:
    711         result = await self._agenerate(messages, stop=stop, **kwargs)

File ~/Desktop/AI/.venv/lib/python3.11/site-packages/langchain_community/chat_models/cohere.py:242, in ChatCohere._agenerate(self, messages, stop, run_manager, **kwargs)
    240 generation_info = None
    241 if hasattr(response, "documents"):
--> 242     generation_info = self._get_generation_info(response)
    243 return ChatResult(
    244     generations=[
    245         ChatGeneration(message=message, generation_info=generation_info)
    246     ]
    247 )
File [~/Desktop/AI/.venv/lib/python3.11/site-packages/langchain_community/chat_models/cohere.py:194](https://file+.vscode-resource.vscode-cdn.net/home/jfuehne/Desktop/AI/Code/chatbotlocal/~/Desktop/AI/.venv/lib/python3.11/site-packages/langchain_community/chat_models/cohere.py:194), in ChatCohere._get_generation_info(self, response)
[187](https://file+.vscode-resource.vscode-cdn.net/home/jfuehne/Desktop/AI/Code/chatbotlocal/~/Desktop/AI/.venv/lib/python3.11/site-packages/langchain_community/chat_models/cohere.py:187) def _get_generation_info(self, response: Any) -> Dict[str, Any]:
[188](https://file+.vscode-resource.vscode-cdn.net/home/jfuehne/Desktop/AI/Code/chatbotlocal/~/Desktop/AI/.venv/lib/python3.11/site-packages/langchain_community/chat_models/cohere.py:188) """Get the generation info from cohere API response."""
[189](https://file+.vscode-resource.vscode-cdn.net/home/jfuehne/Desktop/AI/Code/chatbotlocal/~/Desktop/AI/.venv/lib/python3.11/site-packages/langchain_community/chat_models/cohere.py:189) return {
[190](https://file+.vscode-resource.vscode-cdn.net/home/jfuehne/Desktop/AI/Code/chatbotlocal/~/Desktop/AI/.venv/lib/python3.11/site-packages/langchain_community/chat_models/cohere.py:190) "documents": response.documents,
[191](https://file+.vscode-resource.vscode-cdn.net/home/jfuehne/Desktop/AI/Code/chatbotlocal/~/Desktop/AI/.venv/lib/python3.11/site-packages/langchain_community/chat_models/cohere.py:191) "citations": response.citations,
[192](https://file+.vscode-resource.vscode-cdn.net/home/jfuehne/Desktop/AI/Code/chatbotlocal/~/Desktop/AI/.venv/lib/python3.11/site-packages/langchain_community/chat_models/cohere.py:192) "search_results": response.search_results,
[193](https://file+.vscode-resource.vscode-cdn.net/home/jfuehne/Desktop/AI/Code/chatbotlocal/~/Desktop/AI/.venv/lib/python3.11/site-packages/langchain_community/chat_models/cohere.py:193) "search_queries": response.search_queries,
--> [194](https://file+.vscode-resource.vscode-cdn.net/home/jfuehne/Desktop/AI/Code/chatbotlocal/~/Desktop/AI/.venv/lib/python3.11/site-packages/langchain_community/chat_models/cohere.py:194) "token_count": response.token_count,
[195](https://file+.vscode-resource.vscode-cdn.net/home/jfuehne/Desktop/AI/Code/chatbotlocal/~/Desktop/AI/.venv/lib/python3.11/site-packages/langchain_community/chat_models/cohere.py:195) }
AttributeError: 'NonStreamedChatResponse' object has no attribute 'token_count'
### Description
I realized recently that all of my Cohere code going through LangChain has broken with API errors. Is this possibly due to this change?
https://docs.cohere.com/changelog/python-sdk-v520-release
Currently, it fails on even the most basic test of just calling `invoke` on the chat model.
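Until the integration catches up with the new SDK, a defensive rewrite of the helper avoids the crash. This is a hedged sketch of a possible workaround (the standalone function name and the field handling are mine, not the shipped fix); pinning `cohere<5` is the simpler alternative:

```python
from typing import Any, Dict


def get_generation_info(response: Any) -> Dict[str, Any]:
    """Collect generation info while tolerating fields that newer
    cohere SDK response objects may no longer expose."""
    info = {
        "documents": getattr(response, "documents", None),
        "citations": getattr(response, "citations", None),
        "search_results": getattr(response, "search_results", None),
        "search_queries": getattr(response, "search_queries", None),
    }
    # Only include token_count when the response object actually has it.
    if hasattr(response, "token_count"):
        info["token_count"] = response.token_count
    return info
```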
### System Info
langchain==0.1.16
langchain-anthropic==0.1.4
langchain-community==0.0.32
langchain-core==0.1.42
langchain-text-splitters==0.0.1
langchainhub==0.1.15
cohere==5.2.5
Platform: Linux
python version: 3.11.8 | AttributeError: 'NonStreamedChatResponse' object has no attribute 'token_count' | https://api.github.com/repos/langchain-ai/langchain/issues/20484/comments | 4 | 2024-04-15T18:20:49Z | 2024-04-16T16:51:51Z | https://github.com/langchain-ai/langchain/issues/20484 | 2,244,331,164 | 20,484 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Failed to batch ingest runs: LangSmithError('Failed to POST https://api.smith.langchain.com/runs/batch in LangSmith API. HTTPError(\'403 Client Error: Forbidden for url: https://api.smith.langchain.com/runs/batch\', \'{"detail":"Forbidden"}\')')
### Description
I am using CrewAI along with LangSmith, and the error looks like a server-side error.
### System Info
platform windows
python 3.11.4 | Failed to batch ingest runs: LangSmithError('Failed to POST https://api.smith.langchain.com/runs/batch in LangSmith API. HTTPError(\'403 Client Error: Forbidden for url: https://api.smith.langchain.com/runs/batch\', \'{"detail":"Forbidden"}\')') | https://api.github.com/repos/langchain-ai/langchain/issues/20479/comments | 9 | 2024-04-15T16:05:48Z | 2024-08-02T15:00:06Z | https://github.com/langchain-ai/langchain/issues/20479 | 2,244,062,759 | 20,479 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os
import uuid

from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import PDFMinerLoader


def load_documents():
    texts = []
    all_items = os.listdir("files/")
    for _, item in enumerate(all_items):
        print("Processing file: {}".format(item))
        # Generate a unique id for each document
        unique_id = str(uuid.uuid4())
        text_splitter = RecursiveCharacterTextSplitter(
            chunk_size=200, chunk_overlap=0, add_start_index=True
        )
        loader = PDFMinerLoader("files/{}".format(item), extract_images=True)
        docs = loader.load_and_split(text_splitter)
        # Add the unique id to each chunk's metadata
        for doc in docs:
            doc.metadata["unique_id"] = unique_id
        texts.extend(docs)
    return texts
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Langchain text splitters needs a functionality to store chunk metadata. Either a start and end line number in pdf or coordinates in pdf.
Currently its not possible in the text splitters.
There are two options for storing the chunk information.

1. Store from/to line numbers:

```json
{
  "chunk": "reversible llowing ......",
  "pageNumber": 1,
  "fromLine": 27,
  "toLine": 40,
  "documentId": "8a385d38-63ed-4821-a5c2-2e7309d5a256"
}
```

2. Store the coordinates of the chunk:

```json
{
  "text": "\n Coach : the ai chatbot - We will have several coaches with different names, each with a unique",
  "coordinates": [
    { "x": 54, "y": 130.7924798925 },
    { "x": 512.654318754038, "y": 141.7924801425 }
  ],
  "pageSize": { "width": 612, "height": 792 },
  "pageNumber": 1
}
```
Looking for some help here. Thanks!
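Until something like this is supported natively, one workaround is to derive the line range yourself from the `start_index` metadata that `RecursiveCharacterTextSplitter(add_start_index=True)` already sets. A hedged sketch (the helper name and the 1-based line convention are my assumptions):

```python
def add_line_ranges(full_text, docs):
    """Attach fromLine/toLine metadata to each chunk.

    Works with any chunk object exposing a ``page_content`` string and a
    ``metadata`` dict that contains ``start_index`` (the character offset
    of the chunk within the original text).
    """
    for doc in docs:
        start = doc.metadata["start_index"]
        end = start + len(doc.page_content)
        # 1-based line numbers: count the newlines before each offset
        doc.metadata["fromLine"] = full_text.count("\n", 0, start) + 1
        doc.metadata["toLine"] = full_text.count("\n", 0, end) + 1
    return docs
```

Coordinates (option 2) cannot be recovered this way; they would have to come from the loader itself (e.g. a PDF parser that reports bounding boxes), since the splitter only sees plain text.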
### System Info
$ pip freeze | grep langchain
langchain==0.1.9
langchain-community==0.0.24
langchain-core==0.1.40
langchain-openai==0.0.8
windows
python 3.11 | Langchain text splitters need a functionality to store chunk metadata | https://api.github.com/repos/langchain-ai/langchain/issues/20470/comments | 1 | 2024-04-15T13:15:14Z | 2024-07-24T16:08:31Z | https://github.com/langchain-ai/langchain/issues/20470 | 2,243,657,358 | 20,470 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.agent_toolkits.openapi import planner

agent = planner.create_openapi_agent(
    api_spec=api_spec,
    requests_wrapper=requests_wrapper,
    llm=llm,
    allow_dangerous_requests=True,
)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I want the tool to have full HTTP capabilities, not only GET and POST.
### System Info
Latest LangChain Python installation, which is 0.1.16.
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.retrievers import EnsembleRetriever
from langchain_community.retrievers import BM25Retriever

doc_list_1 = [
    "I like apples",
    "I like oranges",
    "Apples and oranges are fruits",
]
ret2 = BM25Retriever.from_texts(
    doc_list_1, metadatas=[{"source": 1}] * len(doc_list_1)
)
# `retriever` below is an existing OpenSearch retriever defined elsewhere
retriever = EnsembleRetriever(
    retrievers=[ret2, retriever], weights=[0.5, 0.5]
)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am trying to use an EnsembleRetriever combining a BM25 retriever and an OpenSearch retriever.
I am getting the below error:
`"1 validation error for Document\npage_content\n  str type expected (type=type_error.str)"`
### System Info
langchain==0.1.0
langchain-community==0.0.9
langchain-core==0.1.7
langchainhub==0.1.14 | not able to run BM25 retriever | https://api.github.com/repos/langchain-ai/langchain/issues/20466/comments | 1 | 2024-04-15T10:47:21Z | 2024-07-22T16:08:56Z | https://github.com/langchain-ai/langchain/issues/20466 | 2,243,344,923 | 20,466 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
````python
import os
from urllib.parse import quote_plus

from langchain.sql_database import SQLDatabase
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder, PromptTemplate
from langchain.tools import BaseTool
from langchain.tools.render import format_tool_to_openai_function
from langchain.schema.runnable import Runnable, RunnableLambda, RunnableParallel
from langchain.chat_models import ChatOpenAI
from langchain.agents.format_scratchpad import format_to_openai_function_messages
from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser
from langchain.agents import AgentExecutor
from pydantic import BaseModel, Field
from sqlalchemy import create_engine

import constants
from secret_key import openapi_key

os.environ['OPENAI_API_KEY'] = openapi_key

SQL_DML_CHAIN_PROMPT = """You are an expert in SQLITE. Your main objective is to construct a Data Manipulation SQLITE query given the
user question: {user_question}.

You need to construct the Data Manipulation SQLITE query for the following Database Schema:
{table_info}

Only output the final SQL-Query and nothing else.

SQL-Query:"""

prompt = PromptTemplate(template=SQL_DML_CHAIN_PROMPT, input_variables=['user_question', 'table_info'])

server_name = constants.server_name
database_name = constants.database_name
username = constants.username
password = constants.password
encoded_password = quote_plus(password)
connection_uri = f"mssql+pyodbc://{username}:{encoded_password}@{server_name}/{database_name}?driver=ODBC+Driver+17+for+SQL+Server"
engine = create_engine(connection_uri)

model_name = "gpt-3.5-turbo-16k"

db = SQLDatabase(
    engine,
    view_support=True,
    include_tables=['PAY_transaction_settingallow', 'PAY_mst_allowance', 'employee_details'],
    sample_rows_in_table_info=5,
)

sql_dml_chain = (
    RunnableParallel({"user_question": lambda x: x["user_question"],
                      "table_info": lambda _: db.get_table_info()})
    | prompt
    | ChatOpenAI().bind(stop='SQL-Query:')
    | RunnableLambda(lambda x: x.content.strip().replace('```sql', ''))
)

agent_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", """
        You are an expert in SQL whose main objective is to manipulate the Database for which you have
        been given access. You can use the tool `sql_db_manipulation` to interact with the Database and
        manipulate it as per the user requirement.
        Wrapped column names: All column names should be wrapped in square brackets [] as delimiters.
        Use GETDATE() to get the current date and time instead of DATETIME('now').
        For each record processed, retrieve the (employeeEuid) as 'euid' from the employee_details table where the employeeName matches, and retrieve the allowance ID (alw_id) from the pay_mst_allowance table where the alw_desc matches. No changes are to be made in the "pay_mst_allowance" and "employee_details" tables.
        Perform a JOIN operation to fetch euid and alw_id from the respective tables.
        Selected table: Specify PAY_transaction_settingallow as the table to update.
        Employee and allowance selection: Use the WHERE clause to filter employees based on employeeName and allowances based on alw_desc.
        Date handling: Maintain the logic for createDate, effect_date, and to_date using SQL Server functions.
        Currency: Assume the amount to be in rupees.
        Removed newlines: Write the query as a single string without newlines (\n).
        Ensure the query executes efficiently and without errors.
        """),
        ("user", "{input}"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)

class SQLDBMANIPULATION(BaseModel):
    user_query: str = Field(description='User question which will be translated to a Data Manipulation SQL Query and will be executed on the underlying database')

class SQLDBMANIPULATIONTool(BaseTool):
    name = "sql_db_manipulation"
    description = "Use this tool to convert and execute DML queries given the user question"
    args_schema: type[SQLDBMANIPULATION] = SQLDBMANIPULATION
    sql_dml_chain: Runnable

    def _run(self, user_query: str) -> str:
        """Use the tool."""
        query = self.sql_dml_chain.invoke({"user_question": user_query})
        query = query.replace("DATETIME('now')", "GETDATE()")
        return str(db._execute(query))

tools = [SQLDBMANIPULATIONTool(sql_dml_chain=sql_dml_chain)]
llm_with_tools = ChatOpenAI().bind(functions=[format_tool_to_openai_function(t) for t in tools])

agent = (
    {
        "input": lambda x: x["input"],
        "agent_scratchpad": lambda x: format_to_openai_function_messages(
            x["intermediate_steps"]
        ),
    }
    | agent_prompt
    | llm_with_tools
    | OpenAIFunctionsAgentOutputParser()
)

agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
````
### Error Message and Stack Trace (if applicable)
> Entering new AgentExecutor chain...
Invoking: `sql_db_manipulation` with `{'user_query': "INSERT INTO PAY_transaction_settingallow ([euid], [alw_id], [createDate], [effect_date], [to_date], [amount], [currency]) SELECT ed.employeeEuid AS euid, pma.alw_id, GETDATE() AS createDate, '2024-03-01', '2024-03-31', 500, 'INR' FROM employee_details ed JOIN pay_mst_allowance pma ON ed.employeeName = 'jyothi' AND pma.alw_desc = 'Production Incentive'", 'user_question': 'insert Production incentive of 500 For jyothi for march 2024'}`
Traceback (most recent call last):
File "C:\Users\jyothi\Desktop\test\gpttest\Lib\site-packages\sqlalchemy\engine\base.py", line 1969, in _exec_single_context
self.dialect.do_execute(
File "C:\Users\jyothi\Desktop\test\gpttest\Lib\site-packages\sqlalchemy\engine\default.py", line 922, in do_execute
cursor.execute(statement, parameters)
pyodbc.ProgrammingError: ('42000', "[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]'datetime' is not a recognized built-in function name. (195) (SQLExecDirectW)")
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "c:\Users\jyothi\Desktop\test\modifying.py", line 130, in <module>
agent_executor.invoke({"input": " insert Production incentive of 500 For jyothi for march 2024 "})
File "C:\Users\jyothi\Desktop\test\gpttest\Lib\site-packages\langchain\chains\base.py", line 87, in invoke
return self(
^^^^^
File "C:\Users\jyothi\Desktop\test\gpttest\Lib\site-packages\langchain\chains\base.py", line 310, in __call__
raise e
File "C:\Users\jyothi\Desktop\test\gpttest\Lib\site-packages\langchain\chains\base.py", line 304, in __call__
self._call(inputs, run_manager=run_manager)
File "C:\Users\jyothi\Desktop\test\gpttest\Lib\site-packages\langchain\agents\agent.py", line 1245, in _call
next_step_output = self._take_next_step(
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jyothi\Desktop\test\gpttest\Lib\site-packages\langchain\agents\agent.py", line 1095, in _take_next_step
observation = tool.run(
^^^^^^^^^
File "C:\Users\jyothi\Desktop\test\gpttest\Lib\site-packages\langchain\tools\base.py", line 365, in run
raise e
File "C:\Users\jyothi\Desktop\test\gpttest\Lib\site-packages\langchain\tools\base.py", line 339, in run
else self._run(*tool_args, **tool_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\jyothi\Desktop\test\modifying.py", line 98, in _run
db._execute(query)
File "C:\Users\jyothi\Desktop\test\gpttest\Lib\site-packages\langchain\utilities\sql_database.py", line 411, in _execute
cursor = connection.execute(text(command))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jyothi\Desktop\test\gpttest\Lib\site-packages\sqlalchemy\engine\base.py", line 1416, in execute
return meth(
^^^^^
File "C:\Users\jyothi\Desktop\test\gpttest\Lib\site-packages\sqlalchemy\sql\elements.py", line 516, in _execute_on_connection
return connection._execute_clauseelement(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jyothi\Desktop\test\gpttest\Lib\site-packages\sqlalchemy\engine\base.py", line 1639, in _execute_clauseelement
ret = self._execute_context(
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jyothi\Desktop\test\gpttest\Lib\site-packages\sqlalchemy\engine\base.py", line 1848, in _execute_context
return self._exec_single_context(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jyothi\Desktop\test\gpttest\Lib\site-packages\sqlalchemy\engine\base.py", line 1988, in _exec_single_context
self._handle_dbapi_exception(
File "C:\Users\jyothi\Desktop\test\gpttest\Lib\site-packages\sqlalchemy\engine\base.py", line 2343, in _handle_dbapi_exception
raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
File "C:\Users\jyothi\Desktop\test\gpttest\Lib\site-packages\sqlalchemy\engine\base.py", line 1969, in _exec_single_context
self.dialect.do_execute(
File "C:\Users\jyothi\Desktop\test\gpttest\Lib\site-packages\sqlalchemy\engine\default.py", line 922, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (pyodbc.ProgrammingError) ('42000', "[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]'datetime' is not a recognized built-in function name. (195) (SQLExecDirectW)")
[SQL: INSERT INTO PAY_transaction_settingallow ([euid], [alw_id], [createDate], [effect_date], [to_date], [amount], [currency])
SELECT ed.employeeEuid AS euid, pma.alw_id, datetime('now') AS createDate, '2024-03-01', '2024-03-31', 500, 'INR'
FROM employee_details ed
JOIN PAY_mst_allowance pma ON ed.employeeName = 'jyothi' AND pma.alw_desc = 'Production Incentive';]
### Description
I'm using the AgentExecutor from LangChain for database manipulation, to perform CRUD operations on the tables. Even after specifically instructing it to use GETDATE(), it keeps using DATETIME('now') when executing the query.
I have shared the complete error above.
When invoking the tool, the agent writes a correct query:
Invoking: `sql_db_manipulation` with `{'user_query': "INSERT INTO PAY_transaction_settingallow ([euid], [alw_id], [createDate], [effect_date], [to_date], [amount], [currency]) SELECT ed.employeeEuid AS euid, pma.alw_id, GETDATE() AS createDate, '2024-03-01', '2024-03-31', 500, 'INR' FROM employee_details ed JOIN pay_mst_allowance pma ON ed.employeeName = 'jyothi' AND pma.alw_desc = 'Production Incentive'", 'user_question': 'insert Production incentive of 500 For jyothi for march 2024'}`
but while executing, it runs:
[SQL: INSERT INTO PAY_transaction_settingallow ([euid], [alw_id], [createDate], [effect_date], [to_date], [amount], [currency])
SELECT ed.employeeEuid AS euid, pma.alw_id, datetime('now') AS createDate, '2024-03-01', '2024-03-31', 500, 'INR'
FROM employee_details ed
JOIN PAY_mst_allowance pma ON ed.employeeName = 'jyothi' AND pma.alw_desc = 'Production Incentive';]
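One detail worth checking: `str.replace` is case-sensitive, and the executed SQL contains lowercase `datetime('now')`, so the replacement in `_run` never matches. A hedged sketch of a case-insensitive normalization (the helper name is mine):

```python
import re


def normalize_mssql_dates(query: str) -> str:
    # Replace any casing/quoting of DATETIME('now') with GETDATE()
    return re.sub(
        r"datetime\(\s*['\"]now['\"]\s*\)",
        "GETDATE()",
        query,
        flags=re.IGNORECASE,
    )
```

Calling this in `_run` instead of the plain `replace` should also catch the lowercase form the chain emits.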
### System Info
os: windows
python: 3.11
langchain: latest | 'datetime' is not a recognized built-in function name. with angen_executor | https://api.github.com/repos/langchain-ai/langchain/issues/20465/comments | 2 | 2024-04-15T10:44:40Z | 2024-08-09T08:37:35Z | https://github.com/langchain-ai/langchain/issues/20465 | 2,243,337,268 | 20,465 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Not applicable
### Error Message and Stack Trace (if applicable)
_No response_
### Description
The following files trigger antivirus scans which report the presence of the Emf.Exploit.CVE_2017-3122-6335825-0 CVE:
* docs/docs/integrations/document_loaders/example_data/fake.vsdx
* libs/community/tests/examples/fake.vsdx
These files were added in the https://github.com/langchain-ai/langchain/pull/16171 PR.
Details on the scan results: https://www.virustotal.com/gui/file/3b02db67f312bfb1a0ac430673c372ec92eabfaf2888030161d7841ae2120f5f/detection
Recommendation: remove the `visio/media/image2.emf` entry from the `fake.vsdx` archive. This is the file which triggers the CVE and it is not required for tests which use the archive.
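Since a `.vsdx` file is just a zip archive and the zip format has no in-place delete, one way to apply the recommendation is to rewrite the archive without that member. A sketch (the paths are assumptions based on the files listed above):

```python
import shutil
import zipfile


def strip_zip_member(path: str, member: str) -> None:
    """Rewrite a zip archive without one of its members."""
    tmp = path + ".tmp"
    with zipfile.ZipFile(path) as src, \
            zipfile.ZipFile(tmp, "w", zipfile.ZIP_DEFLATED) as dst:
        for item in src.infolist():
            if item.filename != member:
                dst.writestr(item, src.read(item.filename))
    shutil.move(tmp, path)


# strip_zip_member("libs/community/tests/examples/fake.vsdx",
#                  "visio/media/image2.emf")
```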
### System Info
System Information
------------------
> OS: Linux
> OS Version: #26~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Tue Mar 12 10:22:43 UTC 2
> Python Version: 3.11.4 (main, Jul 10 2023, 09:48:51) [GCC 11.3.0]
Package Information
-------------------
> langchain_core: 0.1.26
> langchain: 0.1.9
> langchain_community: 0.0.22
> langsmith: 0.1.5
> langchain_experimental: 0.0.52
> langchain_openai: 0.0.7
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
| community: test file triggers antivirus scan | https://api.github.com/repos/langchain-ai/langchain/issues/20456/comments | 3 | 2024-04-15T08:50:57Z | 2024-07-29T16:07:22Z | https://github.com/langchain-ai/langchain/issues/20456 | 2,243,082,369 | 20,456 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.chat_models import PaiEasChatEndpoint
from langchain_core.prompts import ChatPromptTemplate
prompt = ChatPromptTemplate.from_template("tell me a short joke about {topic}")
prompt_value = prompt.invoke("test_message")
eas_chat_endpoint = PaiEasChatEndpoint(
eas_service_url="your_service_url",
eas_service_token="your_service_token"
)
eas_chat_endpoint._call(prompt_value.messages)
```
It is just example code that has a potential error, so I'll explain why it's a possible type error in the description.
### Error Message and Stack Trace (if applicable)
_No response_
### Description
https://github.com/langchain-ai/langchain/blob/b66a4f48fa5656871c3e849f7e1790dfb5a4c56b/libs/community/langchain_community/chat_models/pai_eas_endpoint.py#L193-L230
In the method `PaiEasChatEndpoint._call` at line 203, `self._call_eas(request_payload)` returns `str` type because it returns response.text at line 230 (https://requests.readthedocs.io/en/latest/api/#requests.Response.text).
Then, first argument of `_format_response_payload` can be `str` type at line 204, resulting in type mismatch.
https://github.com/langchain-ai/langchain/blob/b66a4f48fa5656871c3e849f7e1790dfb5a4c56b/libs/community/langchain_community/chat_models/pai_eas_endpoint.py#L167-L179
Moreover, if `JSONDecodeError` occurs in this method, then attribute error occurs at line 178 (`AttributeError: 'str' object has no attribute 'decode'`) because `output` variable is str type.
I think `PaiEasChatEndpoint._call_eas` should be fixed to return bytes type.
If I'm mistaken, I'd appreciate it if you could let me know.
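A toy reproduction of the reasoning above (a simplified stand-in for `_format_response_payload`, not the actual method): when `json.loads` fails, the fallback assumes bytes, so passing a `str` raises `AttributeError`, while `bytes` works.

```python
import json


def format_payload(output):
    """Toy version of the error path in _format_response_payload."""
    try:
        return json.loads(output)["response"]
    except (ValueError, KeyError):
        # This fallback assumes bytes; it raises AttributeError for str.
        return output.decode("utf-8")
```

This is why returning `response.content` (bytes) from `_call_eas`, rather than `response.text` (str), would make the fallback path safe.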
### System Info
System Information
------------------
> OS: Linux
> OS Version: #190-Ubuntu SMP Fri Feb 2 23:24:22 UTC 2024
> Python Version: 3.9.18 (main, Feb 13 2024, 14:37:08)
[GCC 9.4.0]
Package Information
-------------------
> langchain_core: 0.1.42
> langchain: 0.1.16
> langchain_community: 0.0.32
> langsmith: 0.1.47
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | `PaiEasChatEndpoint._call_eas` should return `bytes` type instead of `str` type | https://api.github.com/repos/langchain-ai/langchain/issues/20453/comments | 0 | 2024-04-15T07:47:06Z | 2024-07-22T16:08:47Z | https://github.com/langchain-ai/langchain/issues/20453 | 2,242,955,555 | 20,453 |
[
"langchain-ai",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
I read this LangChain documentation page: https://python.langchain.com/docs/modules/data_connection/retrievers/multi_vector/
which mentions that you can embed a summary and hypothetical questions along with the document.
```
The methods to create multiple vectors per document include:
Smaller chunks: split a document into smaller chunks, and embed those (this is ParentDocumentRetriever).
Summary: create a summary for each document, embed that along with (or instead of) the document.
Hypothetical questions: create hypothetical questions that each document would be appropriate to answer, embed those along with (or instead of) the document.
```
How can I do that? In the tutorial, they just embed the summaries instead of the document.
```python
import uuid

from langchain.retrievers.multi_vector import MultiVectorRetriever
from langchain.storage import InMemoryByteStore
from langchain_community.vectorstores import Chroma
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings

# The vectorstore to use to index the child chunks
vectorstore = Chroma(collection_name="summaries", embedding_function=OpenAIEmbeddings())
# The storage layer for the parent documents
store = InMemoryByteStore()
id_key = "doc_id"
# The retriever (empty to start)
retriever = MultiVectorRetriever(
vectorstore=vectorstore,
byte_store=store,
id_key=id_key,
)
# `docs` and `summaries` come from earlier steps of the tutorial
doc_ids = [str(uuid.uuid4()) for _ in docs]
summary_docs = [
Document(page_content=s, metadata={id_key: doc_ids[i]})
for i, s in enumerate(summaries)
]
retriever.vectorstore.add_documents(summary_docs)
retriever.docstore.mset(list(zip(doc_ids, docs)))
```
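Mechanically, "along with the document" just means adding extra vector entries that share the parent's `doc_id`: with the real retriever you would call `retriever.vectorstore.add_documents` once per representation (raw chunk, summary, hypothetical question), reusing the same `doc_id` metadata, and `retriever.docstore.mset` once for the parent. A toy, LangChain-free sketch of that pattern (naive substring match stands in for embedding similarity):

```python
# Several searchable representations all point at the same parent document.
docstore = {}          # doc_id -> full parent document
vector_entries = []    # (searchable_text, doc_id) pairs, stand-in for vectors


def index(doc_id, full_text, summary, questions):
    docstore[doc_id] = full_text
    vector_entries.append((full_text, doc_id))   # the document itself
    vector_entries.append((summary, doc_id))     # its summary
    for q in questions:                          # hypothetical questions
        vector_entries.append((q, doc_id))


def retrieve(query):
    # stand-in for similarity search over the vector entries
    for text, doc_id in vector_entries:
        if query.lower() in text.lower():
            return docstore[doc_id]  # always return the parent document
    return None
```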
### Idea or request for content:
_No response_ | DOC: MultiVector Retriever along with document embedding | https://api.github.com/repos/langchain-ai/langchain/issues/20452/comments | 0 | 2024-04-15T07:28:53Z | 2024-07-22T16:08:41Z | https://github.com/langchain-ai/langchain/issues/20452 | 2,242,919,505 | 20,452 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os
os.environ["TEST_PGVECTOR_PORT"] = "5432"
os.environ["TEST_PGVECTOR_DATABASE"] = "langchain_tests"
os.environ["TEST_PGVECTOR_USER"] = "postgres"
os.environ["TEST_PGVECTOR_PASSWORD"] = "postgres"
from typing import List
from langchain_core.embeddings import Embeddings
from langchain_community.vectorstores.pgvector import PGVector
class FakeEmbeddings(Embeddings):
def embed_documents(self, texts: List[str]) -> List[List[float]]:
return [[float(1.0)] * 9 + [float(i)] for i in range(len(texts))]
def embed_query(self, text: str) -> List[float]:
return [float(1.0)] * 9 + [float(0.0)]
CONNECTION_STRING = PGVector.connection_string_from_db_params(
driver=os.environ.get("TEST_PGVECTOR_DRIVER", "psycopg2"),
host=os.environ.get("TEST_PGVECTOR_HOST", "localhost"),
port=int(os.environ.get("TEST_PGVECTOR_PORT", "6024")),
database=os.environ.get("TEST_PGVECTOR_DATABASE", "langchain"),
user=os.environ.get("TEST_PGVECTOR_USER", "langchain"),
password=os.environ.get("TEST_PGVECTOR_PASSWORD", "langchain"),
)
texts = ["foo", "bar", "baz"]
metadatas = [{"page": str(i)} for i in range(len(texts))]
docsearch = PGVector.from_texts(
texts=texts,
collection_name="test_collection_filter",
embedding=FakeEmbeddings(),
metadatas=metadatas,
connection_string=CONNECTION_STRING,
pre_delete_collection=True,
)
output = docsearch.similarity_search_with_score(
"foo", k=3, filter={"page": {"OR": [{"EQ": "0"}, {"EQ": "2"}]}}
)
# Alternatively:
output = docsearch.similarity_search_with_score(
"foo", k=3, filter={"page": {"AND": [{"IN": ["0","1"]}, {"NIN": ["1"]}]}}
)
```
This raises:
```
TypeError: PGVector._create_filter_clause() takes 2 positional arguments but 3 were given
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/mferenaz/Projects/personal/langchain/libs/community/langchain_community/vectorstores/pgvector.py", line 601, in similarity_search_with_score
docs = self.similarity_search_with_score_by_vector(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mferenaz/Projects/personal/langchain/libs/community/langchain_community/vectorstores/pgvector.py", line 626, in similarity_search_with_score_by_vector
results = self.__query_collection(embedding=embedding, k=k, filter=filter)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mferenaz/Projects/personal/langchain/libs/community/langchain_community/vectorstores/pgvector.py", line 945, in __query_collection
filter_clauses = self._create_filter_clause_json_deprecated(filter)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mferenaz/Projects/personal/langchain/libs/community/langchain_community/vectorstores/pgvector.py", line 828, in _create_filter_clause_json_deprecated
filter_by_metadata = self._create_filter_clause_deprecated(key, value)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mferenaz/Projects/personal/langchain/libs/community/langchain_community/vectorstores/pgvector.py", line 797, in _create_filter_clause_deprecated
or_clauses = [
^
File "/Users/mferenaz/Projects/personal/langchain/libs/community/langchain_community/vectorstores/pgvector.py", line 798, in <listcomp>
self._create_filter_clause(key, sub_value)
TypeError: PGVector._create_filter_clause() takes 2 positional arguments but 3 were given
```
### Description
While using PGVector without the newer jsonb format, one can filter using the `_create_filter_clause_deprecated` method. That allows filtering with EQ, NIN, IN, etc., but also with OR and AND. The problem with these two is that the code should recurse into itself, but instead it calls the newer `_create_filter_clause`, which does not have the same method signature and raises an error.
Expected result:
When the PGVector store is created without the `use_jsonb` flag set to True, it should still allow filtering with OR and AND conditions.
Actual result:
Adding an OR or AND filter without the `use_jsonb` flag results in the error above being raised.
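The failure mode can be shown with a self-contained toy version of the deprecated clause builder (plain strings instead of SQLAlchemy clauses; this is not the actual PGVector implementation). The OR/AND branch has to recurse into the same two-argument deprecated method, while the released code calls the newer one-argument `_create_filter_clause` instead:

```python
def filter_clause(key: str, value: dict) -> str:
    """Toy stand-in for PGVector._create_filter_clause_deprecated.

    Returns SQL-ish text rather than SQLAlchemy clauses, purely to
    illustrate the recursion that the OR/AND branch needs.
    """
    op, operand = next(iter(value.items()))
    if op in ("OR", "AND"):
        # The fix: recurse into this same two-argument builder. The released
        # code called the new jsonb builder here, whose signature takes only
        # one value argument, hence "takes 2 positional arguments but 3 were given".
        parts = [filter_clause(key, sub) for sub in operand]
        return "(" + f" {op} ".join(parts) + ")"
    if op == "EQ":
        return f"{key} = {operand!r}"
    if op == "IN":
        return f"{key} IN {tuple(operand)!r}"
    if op == "NIN":
        return f"{key} NOT IN {tuple(operand)!r}"
    raise ValueError(f"unsupported operator: {op}")

clause = filter_clause("page", {"OR": [{"EQ": "0"}, {"EQ": "2"}]})
```

With the recursion pointed at the right method, the nested `{"AND": [{"IN": ...}, {"NIN": ...}]}` filter from the example composes the same way.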
### System Info
langchain==0.1.14
langchain-community==0.0.31
langchain-core==0.1.40
langchain-openai==0.0.8
langchain-text-splitters==0.0.1 | PGVector deprecated create_filter_clause uses incorrect method. Can't filter by OR or AND conditions | https://api.github.com/repos/langchain-ai/langchain/issues/20445/comments | 0 | 2024-04-14T22:26:26Z | 2024-07-21T16:07:15Z | https://github.com/langchain-ai/langchain/issues/20445 | 2,242,399,652 | 20,445 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.prompts import PromptTemplate
import google.generativeai as genai
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_google_genai import GoogleGenerativeAI
GOOGLE_API_KEY='My Api Key'
llm = ChatGoogleGenerativeAI(model="gemini-pro", temperature=0, google_api_key=GOOGLE_API_KEY)
prompt = "Where is Tajmahal"
result = llm.invoke(prompt)
```
### Error Message and Stack Trace (if applicable)
`Retrying langchain_google_genai.chat_models._chat_with_retry.<locals>._chat_with_retry in 2.0 seconds as it raised DeadlineExceeded: 504 Deadline Exceeded.`
### Description
I'm attempting to use ChatGoogleGenerativeAI, but it's running very slowly on my local Apple Silicon machine. The same code works well in Google Colab, but I'm receiving a 504 error locally.
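The `Retrying ... in 2.0 seconds` message above comes from an exponential-backoff retry loop that wraps the Gemini call. A minimal sketch of such a schedule (illustrative constants, not the actual langchain-google-genai internals):

```python
def backoff_schedule(attempts: int, base: float = 2.0, cap: float = 60.0) -> list:
    """Seconds to wait before each retry after a DeadlineExceeded: base * 2**n, capped."""
    return [min(base * (2 ** n), cap) for n in range(attempts)]

waits = backoff_schedule(6)  # 2.0, 4.0, 8.0, 16.0, 32.0, 60.0
```

If the root cause is the local gRPC transport (a common suspect when Colab works but a local machine times out), it may be worth trying `ChatGoogleGenerativeAI(..., transport="rest")`; this is a hedged suggestion, so check the langchain-google-genai docs for the supported transport values.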
### System Info
aiohttp==3.9.4
aiosignal==1.3.1
annotated-types==0.6.0
attrs==23.2.0
cachetools==5.3.3
certifi==2024.2.2
charset-normalizer==3.3.2
dataclasses-json==0.6.4
frozenlist==1.4.1
google-ai-generativelanguage==0.6.1
google-api-core==2.18.0
google-api-python-client==2.125.0
google-auth==2.29.0
google-auth-httplib2==0.2.0
google-generativeai==0.5.0
googleapis-common-protos==1.63.0
grpcio==1.62.1
grpcio-status==1.62.1
httplib2==0.22.0
idna==3.7
jsonpatch==1.33
jsonpointer==2.4
langchain==0.1.16
langchain-community==0.0.32
langchain-core==0.1.42
langchain-google-genai==1.0.2
langchain-text-splitters==0.0.1
langsmith==0.1.47
marshmallow==3.21.1
multidict==6.0.5
mypy-extensions==1.0.0
numpy==1.26.4
orjson==3.10.0
packaging==23.2
proto-plus==1.23.0
protobuf==4.25.3
pyasn1==0.6.0
pyasn1_modules==0.4.0
pydantic==2.7.0
pydantic_core==2.18.1
pyparsing==3.1.2
PyYAML==6.0.1
requests==2.31.0
rsa==4.9
SQLAlchemy==2.0.29
tenacity==8.2.3
tqdm==4.66.2
typing-inspect==0.9.0
typing_extensions==4.11.0
uritemplate==4.1.1
urllib3==2.2.1
yarl==1.9.4
platform mac
python version 3.12.3 | ChatGoogleGenerativeAI is running slowly on my Apple Silicon machine. | https://api.github.com/repos/langchain-ai/langchain/issues/20444/comments | 4 | 2024-04-14T21:08:53Z | 2024-08-02T11:18:19Z | https://github.com/langchain-ai/langchain/issues/20444 | 2,242,367,841 | 20,444 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code:
`weaviate.Client(url=url, auth=auth, **kwargs)` is a v3-style instantiation of the Weaviate client.
### Error Message and Stack Trace (if applicable)
```bash
DeprecationWarning: Dep016: You are using the Weaviate v3 client, which is deprecated. Consider upgrading to the new and improved v4 client instead! See here for usage: https://weaviate.io/developers/weaviate/client-libraries/python
  warnings.warn(
```
### Description
- I am trying to create a Weaviate client using langchain's `from langchain_community.vectorstores.weaviate import Weaviate` import.
- I expected it to create a Weaviate client without deprecation warnings.
- The current langchain-internal implementation of creating the Weaviate client works, but it is a `v3` client API implementation, which results in the warning; it should be upgraded to the `v4` client API.
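Until the integration moves to the v4 client, the specific notice can be silenced without hiding other deprecations. A stopgap sketch, assuming the warning text always starts with `Dep016` as in the message above:

```python
import warnings

def suppress_dep016() -> None:
    # Ignore only the Weaviate v3 deprecation notice; all other
    # DeprecationWarnings still surface normally.
    warnings.filterwarnings("ignore", message=r"Dep016", category=DeprecationWarning)

def escaped_dep016_warnings() -> int:
    """Emit the Dep016 warning and count how many get past the filter."""
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        suppress_dep016()  # re-apply inside the recording context
        warnings.warn(
            "Dep016: You are using the Weaviate v3 client, which is deprecated.",
            DeprecationWarning,
        )
        return len(caught)
```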
### System Info
`pip freeze | grep langchain` :
```bash
langchain==0.1.12
langchain-community==0.0.28
langchain-core==0.1.42
langchain-openai==0.0.8
langchain-pinecone==0.0.3
langchain-text-splitters==0.0.1
langchain-weaviate==0.0.1.post1
```
- Platform: `macOS 14.4.1`
- Python version: `Python 3.11.4`
---------------
`python -m langchain_core.sys_info`:
```bash
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.4.0: Fri Mar 15 00:12:41 PDT 2024; root:xnu-10063.101.17~1/RELEASE_ARM64_T8103
> Python Version: 3.11.4 (main, Jul 5 2023, 08:54:11) [Clang 14.0.6 ]
Package Information
-------------------
> langchain_core: 0.1.42
> langchain: 0.1.16
> langchain_community: 0.0.32
> langsmith: 0.1.38
> langchain_experimental: 0.0.57
> langchain_openai: 0.1.3
> langchain_text_splitters: 0.0.1
```
| Depreciated initialisation (v3) of Weaviate Vector Database client | https://api.github.com/repos/langchain-ai/langchain/issues/20442/comments | 3 | 2024-04-14T19:52:28Z | 2024-05-30T10:04:52Z | https://github.com/langchain-ai/langchain/issues/20442 | 2,242,336,909 | 20,442 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Getting error for this code -
```python
from langchain_openai import OpenAI
llm = OpenAI(model='gpt-3.5-turbo', temperature=0, streaming=True)
llm('how are you?')
```
### Error Message and Stack Trace (if applicable)
NotFoundError: Error code: 404 - {'error': {'message': 'This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?', 'type': 'invalid_request_error', 'param': 'model', 'code': None}}
### Description
Getting the error specified above when configuring the `gpt-3.5-turbo` model with `OpenAI`.
However, this model works as expected with `ChatOpenAI`.
On the other hand, the `gpt-3.5-turbo-instruct` model also works as expected with `OpenAI`; the code is mentioned below -
```python
from langchain_openai import OpenAI
llm = OpenAI(model='gpt-3.5-turbo-instruct',temperature=0, streaming=True)
llm('how are you?')
```
Here is the screenshot for reference -
<img width="1193" alt="Screenshot 2024-04-15 at 12 02 51 AM" src="https://github.com/langchain-ai/langchain/assets/47742503/50cbe4de-9b94-4569-85d3-f92137aacbe4">
I believe this issue is due to configuring non-supported model with `OpenAI` instead of `ChatOpenAI`
**Observation 🔍**
I referred the codebase of [openai python package](https://github.com/openai/openai-python) and observed that there are some set of models which has only support of `v1/chat/completions` (`ChatOpenAI` as implemented within langchain). Check these files for more details -
- `v1/chat/completions`: [src/openai/resources/chat/completions.py](https://github.com/openai/openai-python/blob/595e6b834fe26e08e9ac5415ab15bc19b8661f2c/src/openai/resources/chat/completions.py#L46) (list of specific models mentioned here)
- `v1/completions`: [src/openai/resources/completions.py](https://github.com/openai/openai-python/blob/595e6b834fe26e08e9ac5415ab15bc19b8661f2c/src/openai/resources/completions.py#L39)
**Potential Fix 🤔**
Should we validate the model name by referring to the same list when handling parameters for OpenAI, and raise an error accordingly?
**I can work on this, please check and let me know**
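A sketch of what that validation could look like. The prefix lists here are illustrative stand-ins for the model lists in the openai package files linked above, not an actual langchain API:

```python
def requires_chat_endpoint(model: str) -> bool:
    """True when the model is chat-only and must be used via ChatOpenAI
    (v1/chat/completions) rather than OpenAI (v1/completions)."""
    if model.startswith("gpt-3.5-turbo-instruct"):
        return False  # the one turbo variant that still supports v1/completions
    return model.startswith(("gpt-3.5-turbo", "gpt-4"))

def check_model_for_completions(model: str) -> None:
    """Raise early instead of letting the API return a 404 later."""
    if requires_chat_endpoint(model):
        raise ValueError(
            f"{model!r} only supports v1/chat/completions; "
            "use ChatOpenAI instead of OpenAI."
        )
```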
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.4.0: Fri Mar 15 00:11:05 PDT 2024; root:xnu-10063.101.17~1/RELEASE_X86_64
> Python Version: 3.11.8 (main, Feb 26 2024, 15:43:17) [Clang 14.0.6 ]
Package Information
-------------------
> langchain_core: 0.1.42
> langchain: 0.1.16
> langchain_community: 0.0.32
> langsmith: 0.1.47
> langchain_openai: 0.1.3
> langchain_text_splitters: 0.0.1
> langchainhub: 0.1.15
> openai: 1.17.0
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | NotFoundError for OpenAI with gpt-3.5-turbo model | https://api.github.com/repos/langchain-ai/langchain/issues/20441/comments | 4 | 2024-04-14T19:07:09Z | 2024-04-17T04:30:53Z | https://github.com/langchain-ai/langchain/issues/20441 | 2,242,317,151 | 20,441 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os
from dotenv import load_dotenv
from langchain_community.chat_models import ChatOllama
from langchain_community.tools import DuckDuckGoSearchRun
from langchain_community.utilities import DuckDuckGoSearchAPIWrapper
from transformers import AutoTokenizer
from langchain.tools import Tool
load_dotenv()
hf_token = os.environ.get("HF_TOKEN")
llm = ChatOllama(model="qwen:14b")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-14B", token=hf_token)
search_wrapper = DuckDuckGoSearchAPIWrapper(region="en-us", max_results=5)
# Set up the DuckDuckGo search tool
search_tool = DuckDuckGoSearchRun(verbose=True, api_wrapper=search_wrapper)
# Set up the search tools list
search_tools = [
Tool(
name="Search",
func=search_tool.run,
description="Use the DuckDuckGo search engine to find information",
),
]
llm_with_tools = llm.bind_tools(tools=search_tools)
```
### Error Message and Stack Trace (if applicable)
```
AttributeError Traceback (most recent call last)
Cell In[1], [line 26](vscode-notebook-cell:?execution_count=1&line=26)
[17](vscode-notebook-cell:?execution_count=1&line=17) # Set up the search tools list
[18](vscode-notebook-cell:?execution_count=1&line=18) search_tools = [
[19](vscode-notebook-cell:?execution_count=1&line=19) Tool(
[20](vscode-notebook-cell:?execution_count=1&line=20) name="Search",
(...)
[23](vscode-notebook-cell:?execution_count=1&line=23) ),
[24](vscode-notebook-cell:?execution_count=1&line=24) ]
---> [26](vscode-notebook-cell:?execution_count=1&line=26) llm_with_tools = llm.bind_tools(tools=search_tools)
AttributeError: 'ChatOllama' object has no attribute 'bind_tools'
```
### Description
The docs for ChatOllama list a `bind_tools()` method [here](https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.ollama.ChatOllama.html#langchain_community.chat_models.ollama.ChatOllama.bind_tools). When I attempt to run the Ollama instance and bind the search tool, it throws an AttributeError saying it has no attribute 'bind_tools'.
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.4.0: Fri Mar 15 00:10:42 PDT 2024; root:xnu-10063.101.17~1/RELEASE_ARM64_T6000
> Python Version: 3.12.2 | packaged by conda-forge | (main, Feb 16 2024, 20:54:21) [Clang 16.0.6 ]
Package Information
-------------------
> langchain_core: 0.1.40
> langchain: 0.1.14
> langchain_community: 0.0.31
> langsmith: 0.1.38
> langchain_experimental: 0.0.56
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | 'ChatOllama' object has no attribute 'bind_tools' | https://api.github.com/repos/langchain-ai/langchain/issues/20439/comments | 13 | 2024-04-14T18:08:10Z | 2024-08-04T08:48:34Z | https://github.com/langchain-ai/langchain/issues/20439 | 2,242,292,340 | 20,439 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from typing import Any

from openai import AsyncOpenAI
from langchain.agents.openai_assistant import OpenAIAssistantRunnable

client = AsyncOpenAI(
    # Defaults to os.environ.get("OPENAI_API_KEY")
    # Otherwise use: api_key="Your_API_Key",
    api_key=settings.OPENAI_API_KEY,
)

async def acreate_assistant(**kwargs: Any):
    return await OpenAIAssistantRunnable.acreate_assistant(async_client=client, **kwargs)
```
when I run `await acreate_assistant()` it shows `Unexpected error occurred: OpenAIError - The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable`
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Here is the code of `client` property in `OpenAIAssistantRunnable`
```python
client: Any = Field(default_factory=_get_openai_client)
```
Because an `AsyncOpenAI` client is used, the sync `client` is not provided, so it is created by the `_get_openai_client` default factory.
```python
class OpenAI(SyncAPIClient):
def __init__(...):
if api_key is None:
api_key = os.environ.get("OPENAI_API_KEY")
if api_key is None:
raise OpenAIError(
"The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable"
)
self.api_key = api_key
```
but `OpenAI` checks the `api_key` automatically.
So creating the `OpenAIAssistantRunnable` object always fails when I use the `AsyncOpenAI` client.
I don't know if it's OK to remove the `default_factory` of `client`. If it is, I'll create a PR to fix this problem.
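A workaround until the default factory is made optional: export `OPENAI_API_KEY` before the runnable is constructed, so `_get_openai_client` can build its (unused) sync client. This is a hedged sketch that assumes the factory reads the environment variable, as the error message above indicates:

```python
import os

def ensure_openai_key(api_key: str) -> str:
    """Export the key for OpenAIAssistantRunnable's default sync-client
    factory; returns the value that will be visible to it."""
    os.environ.setdefault("OPENAI_API_KEY", api_key)
    return os.environ["OPENAI_API_KEY"]
```

Called with `settings.OPENAI_API_KEY` right before `acreate_assistant`, this keeps the async client as the one actually used while preventing the factory from raising.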
### System Info
```
langchain==0.1.16
langchain-community==0.0.32
langchain-core==0.1.42
langchain-openai==0.1.0
langchain-text-splitters==0.0.1
langsmith==0.1.31
``` | create openai assistant object failed when I use AsyncOpenAI client | https://api.github.com/repos/langchain-ai/langchain/issues/20436/comments | 2 | 2024-04-14T15:03:37Z | 2024-08-05T16:08:41Z | https://github.com/langchain-ai/langchain/issues/20436 | 2,242,210,783 | 20,436 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.embeddings import LocalAIEmbeddings
from langchain_community.vectorstores import FAISS
embeddings_model = LocalAIEmbeddings(
openai_api_base=OPENAI_BASE_URL, openai_api_key=OPENAI_API_KEY, model="bge-large-zh-v1.5"
)
db = FAISS.load_local("data.db", embeddings_model, allow_dangerous_deserialization=True)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
AttributeError: module ‘openai’ has no attribute ‘error’
### System Info
langchain 0.1.16
linux
Python :: 3.11 | How to specify custom embeddings? | https://api.github.com/repos/langchain-ai/langchain/issues/20428/comments | 0 | 2024-04-14T01:55:33Z | 2024-07-21T16:07:05Z | https://github.com/langchain-ai/langchain/issues/20428 | 2,241,859,492 | 20,428 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
xccxcx
### Error Message and Stack Trace (if applicable)
```cypher
MATCH (n:`category`)
WHERE n.embedding IS null AND
any(k in $props WHERE n[k] IS NOT null) RETURN elementId(n) AS id,
reduce(str='',k IN $props | str + '\\n' + k + ':' + coalesce(n[k], '')) AS text LIMIT 1000
```
### Description
The query residing at line 1278 always adds the property name, a colon, and a newline to all queried data, which distorts the embeddings.
```cypher
MATCH (n:`category`)
WHERE n.embedding IS null AND
any(k in $props WHERE n[k] IS NOT null) RETURN elementId(n) AS id,
reduce(str='',k IN $props | str + '\\n' + k + ':' + coalesce(n[k], '')) AS text LIMIT 1000
```
I am suggesting to remove this part of the query " '\\n' + k + ':' "
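With that fragment removed, the `reduce` concatenates only the raw property values. A sketch of the suggested query (a plain space separator could be kept in the concatenation if some delimiter is still wanted):

```cypher
MATCH (n:`category`)
WHERE n.embedding IS null AND
      any(k in $props WHERE n[k] IS NOT null)
RETURN elementId(n) AS id,
       reduce(str='', k IN $props | str + coalesce(n[k], '')) AS text
LIMIT 1000
```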
### System Info
The bug is multi platform. | Method from_existing_graph() distorts queried data from Neo4J so impacts embeddings. | https://api.github.com/repos/langchain-ai/langchain/issues/20423/comments | 3 | 2024-04-13T23:29:05Z | 2024-07-20T16:07:25Z | https://github.com/langchain-ai/langchain/issues/20423 | 2,241,817,996 | 20,423 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
**
### Error Message and Stack Trace (if applicable)
**
### Description
The Voyage AI Reranker cannot be imported as a module without #19645
### System Info
** | Request for a langchain-voyageai package release | https://api.github.com/repos/langchain-ai/langchain/issues/20408/comments | 2 | 2024-04-13T07:22:51Z | 2024-07-30T16:06:45Z | https://github.com/langchain-ai/langchain/issues/20408 | 2,241,402,870 | 20,408 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [ ] I am sure that this is a bug in LangChain rather than my code.
- [ ] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Hi, I am trying to generate QA pairs from a PDF.
Running the following code snippet -
````python
text_splitter = RecursiveCharacterTextSplitter(
    # Set a really small chunk size, just to show.
    chunk_size=1000,
    chunk_overlap=100,
    length_function=len,
)
loader = PyPDFLoader(data_pdf)
pages = loader.load_and_split(text_splitter=text_splitter)
len(pages)  # 260

templ = """You are a smart assistant designed to come up with meaningful question and answer pairs. The question should be to the point and the answer should be as detailed as possible.
Given a piece of text, you must come up with a question and answer pair that can be used to evaluate a QA bot. Do not make up stuff. Stick to the text to come up with the question and answer pair.
When coming up with this question/answer pair, you must respond in the following format:
```
{{
    "question": "$YOUR_QUESTION_HERE",
    "answer": "$THE_ANSWER_HERE"
}}
```
Everything between the ``` must be valid json.
Please come up with a question/answer pair, in the specified JSON format, for the following text:
----------------
{text}"""
PROMPT = PromptTemplate.from_template(templ)

llm = Cohere(model="command", temperature=0)  # command, command-light
chain = QAGenerationChain.from_llm(llm=llm, prompt=PROMPT)
# llm is as follows:
# Cohere(client=<cohere.client.Client object at 0x00000188268D3BD0>, async_client=<cohere.client.AsyncClient object at 0x0000018827B22690>, model='command', temperature=0.0)

# By running the following code I expect a QA set to be generated, but I get an error:
chain.invoke(pages[40].page_content)
````
I tried other models like OpenAI and Google Gemini Pro and QAGeneration chain fails at same step
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[17], line 1
----> 1 chain.invoke(pages[40].page_content)
File ~\anaconda3\envs\GenAI_py311\Lib\site-packages\langchain\chains\base.py:163, in Chain.invoke(self, input, config, **kwargs)
161 except BaseException as e:
162 run_manager.on_chain_error(e)
--> 163 raise e
164 run_manager.on_chain_end(outputs)
166 if include_run_info:
File ~\anaconda3\envs\GenAI_py311\Lib\site-packages\langchain\chains\base.py:153, in Chain.invoke(self, input, config, **kwargs)
150 try:
151 self._validate_inputs(inputs)
152 outputs = (
--> 153 self._call(inputs, run_manager=run_manager)
154 if new_arg_supported
155 else self._call(inputs)
156 )
158 final_outputs: Dict[str, Any] = self.prep_outputs(
159 inputs, outputs, return_only_outputs
160 )
161 except BaseException as e:
File ~\anaconda3\envs\GenAI_py311\Lib\site-packages\langchain\chains\qa_generation\base.py:73, in QAGenerationChain._call(self, inputs, run_manager)
67 def _call(
68 self,
69 inputs: Dict[str, Any],
70 run_manager: Optional[CallbackManagerForChainRun] = None,
71 ) -> Dict[str, List]:
72 docs = self.text_splitter.create_documents([inputs[self.input_key]])
---> 73 results = self.llm_chain.generate(
74 [{"text": d.page_content} for d in docs], run_manager=run_manager
75 )
76 qa = [json.loads(res[0].text) for res in results.generations]
77 return {self.output_key: qa}
File ~\anaconda3\envs\GenAI_py311\Lib\site-packages\langchain\chains\llm.py:115, in LLMChain.generate(self, input_list, run_manager)
113 callbacks = run_manager.get_child() if run_manager else None
114 if isinstance(self.llm, BaseLanguageModel):
--> 115 return self.llm.generate_prompt(
116 prompts,
117 stop,
118 callbacks=callbacks,
119 **self.llm_kwargs,
120 )
121 else:
122 results = self.llm.bind(stop=stop, **self.llm_kwargs).batch(
123 cast(List, prompts), {"callbacks": callbacks}
124 )
File ~\anaconda3\envs\GenAI_py311\Lib\site-packages\langchain_core\language_models\llms.py:597, in BaseLLM.generate_prompt(self, prompts, stop, callbacks, **kwargs)
589 def generate_prompt(
590 self,
591 prompts: List[PromptValue],
(...)
594 **kwargs: Any,
595 ) -> LLMResult:
596 prompt_strings = [p.to_string() for p in prompts]
--> 597 return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
File ~\anaconda3\envs\GenAI_py311\Lib\site-packages\langchain_core\language_models\llms.py:767, in BaseLLM.generate(self, prompts, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
752 if (self.cache is None and get_llm_cache() is None) or self.cache is False:
753 run_managers = [
754 callback_manager.on_llm_start(
755 dumpd(self),
(...)
765 )
766 ]
--> 767 output = self._generate_helper(
768 prompts, stop, run_managers, bool(new_arg_supported), **kwargs
769 )
770 return output
771 if len(missing_prompts) > 0:
File ~\anaconda3\envs\GenAI_py311\Lib\site-packages\langchain_core\language_models\llms.py:634, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
632 for run_manager in run_managers:
633 run_manager.on_llm_error(e, response=LLMResult(generations=[]))
--> 634 raise e
635 flattened_outputs = output.flatten()
636 for manager, flattened_output in zip(run_managers, flattened_outputs):
File ~\anaconda3\envs\GenAI_py311\Lib\site-packages\langchain_core\language_models\llms.py:621, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
611 def _generate_helper(
612 self,
613 prompts: List[str],
(...)
617 **kwargs: Any,
618 ) -> LLMResult:
619 try:
620 output = (
--> 621 self._generate(
622 prompts,
623 stop=stop,
624 # TODO: support multiple run managers
625 run_manager=run_managers[0] if run_managers else None,
626 **kwargs,
627 )
628 if new_arg_supported
629 else self._generate(prompts, stop=stop)
630 )
631 except BaseException as e:
632 for run_manager in run_managers:
File ~\anaconda3\envs\GenAI_py311\Lib\site-packages\langchain_core\language_models\llms.py:1231, in LLM._generate(self, prompts, stop, run_manager, **kwargs)
1228 new_arg_supported = inspect.signature(self._call).parameters.get("run_manager")
1229 for prompt in prompts:
1230 text = (
-> 1231 self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
1232 if new_arg_supported
1233 else self._call(prompt, stop=stop, **kwargs)
1234 )
1235 generations.append([Generation(text=text)])
1236 return LLMResult(generations=generations)
File ~\anaconda3\envs\GenAI_py311\Lib\site-packages\langchain_community\llms\cohere.py:217, in Cohere._call(self, prompt, stop, run_manager, **kwargs)
202 """Call out to Cohere's generate endpoint.
203
204 Args:
(...)
214 response = cohere("Tell me a joke.")
215 """
216 params = self._invocation_params(stop, **kwargs)
--> 217 response = completion_with_retry(
218 self, model=self.model, prompt=prompt, **params
219 )
220 _stop = params.get("stop_sequences")
221 return self._process_response(response, _stop)
File ~\anaconda3\envs\GenAI_py311\Lib\site-packages\langchain_community\llms\cohere.py:45, in completion_with_retry(llm, **kwargs)
43 def completion_with_retry(llm: Cohere, **kwargs: Any) -> Any:
44 """Use tenacity to retry the completion call."""
---> 45 retry_decorator = _create_retry_decorator(llm)
47 @retry_decorator
48 def _completion_with_retry(**kwargs: Any) -> Any:
49 return llm.client.generate(**kwargs)
File ~\anaconda3\envs\GenAI_py311\Lib\site-packages\langchain_community\llms\cohere.py:38, in _create_retry_decorator(llm)
31 max_seconds = 10
32 # Wait 2^x * 1 second between each retry starting with
33 # 4 seconds, then up to 10 seconds, then 10 seconds afterwards
34 return retry(
35 reraise=True,
36 stop=stop_after_attempt(llm.max_retries),
37 wait=wait_exponential(multiplier=1, min=min_seconds, max=max_seconds),
---> 38 retry=(retry_if_exception_type(cohere.error.CohereError)),
39 before_sleep=before_sleep_log(logger, logging.WARNING),
40 )
AttributeError: module 'cohere' has no attribute 'error'
### Description
```python
chain = QAGenerationChain.from_llm(llm=llm, prompt=PROMPT)
chain.invoke(pages[40].page_content)
```
Both `chain.run` and `chain.invoke` raise this error.
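The `cohere.error` module was removed in the 5.x line of the cohere SDK, while langchain-community 0.0.24 still references it, so the usual fix is pinning the SDK (`pip install "cohere<5"`) or upgrading langchain-community. A small, hedged check of the installed version (the major-version cutoff is an assumption based on that removal):

```python
def needs_cohere_pin(version: str) -> bool:
    """True when the installed cohere SDK no longer ships `cohere.error`
    (assumed removed in the 5.x line)."""
    major = int(version.split(".", 1)[0])
    return major >= 5
```

`needs_cohere_pin(importlib.metadata.version("cohere"))` would flag the incompatible install before the chain runs.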
### System Info
python 3.11
langchain==0.1.9
langchain-community==0.0.24
langchain-core==0.1.42
langchain-google-genai==1.0.2 | Error run or invoke method of QAGenerationChain | https://api.github.com/repos/langchain-ai/langchain/issues/20406/comments | 3 | 2024-04-13T04:48:27Z | 2024-07-21T16:07:00Z | https://github.com/langchain-ai/langchain/issues/20406 | 2,241,293,176 | 20,406 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I have the following code:
```python
from langchain_community.agent_toolkits.sql.base import create_sql_agent
from langchain_community.utilities.sql_database import SQLDatabase
db = SQLDatabase.from_uri(database_uri="%POSTGRES_URI%")
prompt = "what is the total corpus of money in banks"
conversation = create_sql_agent(llm=llm, db=db, agent_type="openai-tools", verbose=True, top_k=15)
result = conversation.invoke(input={"input": prompt})
```
### Error Message and Stack Trace (if applicable)
> Entering new SQL Agent Executor chain...
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
Invoking: `sql_db_list_tables` with `{}`
ERROR:
1 validation error for _ListSQLDataBaseToolInput
tool_input
field required (type=value_error.missing)
### Description
I recently updated langchain and pydantic to their latest versions and my code broke: when using the SQL agent I now get a missing-input validation error. It looks like the invocation of `sql_db_list_tables` is not passing any value for `tool_input` when trying to list the table schemas, and this appears to be a bug in the langchain library itself.
Kindly help with this issue, as it is becoming a blocker. Thanks.
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.1.0: Mon Oct 9 21:27:24 PDT 2023; root:xnu-10002.41.9~6/RELEASE_ARM64_T6000
> Python Version: 3.11.7 (main, Dec 4 2023, 18:10:11) [Clang 15.0.0 (clang-1500.1.0.2.5)]
Package Information
-------------------
> langchain_core: 0.1.42
> langchain: 0.1.16
> langchain_community: 0.0.32
> langsmith: 0.1.45
> langchain_experimental: 0.0.50
> langchain_openai: 0.1.3
> langchain_pinecone: 0.0.3
> langchain_text_splitters: 0.0.1
> langchainhub: 0.1.14
> pydantic-settings: 2.1.0
> pydantic: 2.7.0
> pydantic-core: 2.18.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | [BUG] Getting validation error for SQL Database, type=value_error.missing, when its trying to list tables in the db | https://api.github.com/repos/langchain-ai/langchain/issues/20405/comments | 6 | 2024-04-13T04:14:38Z | 2024-08-07T16:06:20Z | https://github.com/langchain-ai/langchain/issues/20405 | 2,241,270,531 | 20,405 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I am ingesting docs to qdrant using langchain but not able to retrieve a list of chunks from the vector store.
My ingest.py script:
```python
import os

from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_openai import OpenAIEmbeddings
from langchain_community.document_loaders import PDFMinerLoader
from langchain_community.vectorstores.qdrant import Qdrant
from dotenv import load_dotenv
import uuid
load_dotenv()
OPENAI_API_TOKEN = os.getenv('OPENAI_API_KEY')
def load_documents():
texts = []
all_items = os.listdir("files/")
for _, item in enumerate(all_items):
print("Processing file: {}".format(item))
unique_id = str(uuid.uuid4())
text_splitter = RecursiveCharacterTextSplitter()
loader = PDFMinerLoader("files/{}".format(item), extract_images=True)
docs = loader.load_and_split(text_splitter)
for doc in docs:
doc.metadata["pdf"] = doc.metadata["source"]
doc.metadata["unique_id"] = unique_id
texts.extend(docs)
return texts
def create_qdrant_store(texts):
print("Creating qdrant store")
embeddings = OpenAIEmbeddings(openai_api_key=OPENAI_API_TOKEN, model="text-embedding-3-small")
Qdrant.from_documents(
texts,
embeddings,
url="http://localhost:6333",
prefer_grpc=True,
collection_name="aff_container",
)
def main():
texts = load_documents()
print(texts)
create_qdrant_store(texts)
    print("Documents loaded successfully!")
if __name__ == "__main__":
main()
```
My get files endpoint:
```python
@app.route('/api/get_files', methods=['GET'])
def get_files():
"""
Endpoint to get the list of files in the index.
"""
try:
client = QdrantClient(url="http://localhost:6333")
db = Qdrant(client=client, collection_name='affilix_container', embeddings=embeddings)
files = []
## ----- want to list all the chunks here ----- ##
        print(client.get_collection(collection_name='affilix_container'))
for doc in db.collection_name:
file_id = doc.metadata['unique_id']
file_name = doc.metadata['pdf'].replace("files/", "").replace("temp/", "")
if file_id not in [file['id'] for file in files]:
files.append({'id': file_id, 'file_name': file_name})
return jsonify({'files': files})
except Exception as e:
        return jsonify({'error': str(e)}), 500
```
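For listing every chunk, LangChain's `Qdrant` wrapper has no iterator, so the usual route is the raw client's `scroll` API plus a dedup over the stored metadata. The sketch below is hedged: the commented `scroll` call is real `qdrant_client` API but untested here, the payload layout and collection name are assumptions carried over from the ingest script above, and the helper itself is plain Python run against mock payloads.

```python
# Hypothetical retrieval of all points (real qdrant_client API, untested here):
# points, _ = client.scroll(collection_name="affilix_container",
#                           limit=1000, with_payload=True)
# payloads = [p.payload for p in points]

def unique_files(payloads):
    """Collapse chunk payloads into one entry per uploaded PDF."""
    files, seen = [], set()
    for payload in payloads:
        meta = payload["metadata"]
        if meta["unique_id"] not in seen:
            seen.add(meta["unique_id"])
            files.append({
                "id": meta["unique_id"],
                "file_name": meta["pdf"].replace("files/", "").replace("temp/", ""),
            })
    return files

# Mock of what each point's payload could look like after ingestion
mock_payloads = [
    {"metadata": {"unique_id": "a1", "pdf": "files/report.pdf"}},
    {"metadata": {"unique_id": "a1", "pdf": "files/report.pdf"}},  # 2nd chunk, same file
    {"metadata": {"unique_id": "b2", "pdf": "files/invoice.pdf"}},
]
files = unique_files(mock_payloads)
print(files)  # one entry per file, not per chunk
```

The same dedup drops straight into the `get_files` endpoint once `payloads` comes from `scroll`.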
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am ingesting docs to qdrant using langchain but not able to retrieve a list of chunks from the vector store.
### System Info
$ pip freeze | grep langchain
langchain==0.1.9
langchain-community==0.0.24
langchain-core==0.1.40
langchain-openai==0.0.8 | Not able to list all the documents from qdrant vector store using langchain | https://api.github.com/repos/langchain-ai/langchain/issues/20382/comments | 2 | 2024-04-12T10:36:34Z | 2024-04-13T09:30:09Z | https://github.com/langchain-ai/langchain/issues/20382 | 2,239,762,769 | 20,382 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Definition of the output using PydanticOutputParser:
```python
class Joke(BaseModel):
setup: str = Field(description="question to set up a joke")
punchline: str = Field(description="answer to resolve the joke")
parser = PydanticOutputParser(pydantic_object=Joke)
```
### Error Message and Stack Trace (if applicable)
```
Error in RootListenersTracer.on_chain_end callback: ValueError()
```
### Description
Since I'm trying to build a conversational agent, I am using [RunnableWithMessageHistory](https://python.langchain.com/docs/expression_language/how_to/message_history/) to automatically add the messages to the history.
I would expect everything to work fine, but I obtain the following error
```
Error in RootListenersTracer.on_chain_end callback: ValueError()
```
from
```python
def _get_output_messages(
self, output_val: Union[str, BaseMessage, Sequence[BaseMessage], dict]
) -> List[BaseMessage]:
from langchain_core.messages import BaseMessage
if isinstance(output_val, dict):
output_val = output_val[self.output_messages_key or "output"]
if isinstance(output_val, str):
from langchain_core.messages import AIMessage
return [AIMessage(content=output_val)]
elif isinstance(output_val, BaseMessage):
return [output_val]
elif isinstance(output_val, (list, tuple)):
return list(output_val)
else:
raise ValueError()
```
Since I used PydanticOutputParser, here `output_val` is a dict containing a Joke object:
```
output_val = {'output': Joke(joke='Why did the egg refuse to tell jokes? Because it cracked under pressure!', message='Here a joke for you!')}
```
Joke objects do not seem to be supported by `_get_output_messages`.
How can I get it to work? I would like for the history to update automatically when using RunnableWithMessageHistory and a PydanticOutputParser.
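One hedged workaround: serialize the Pydantic object to a string before it reaches the history wrapper (for example with a small mapping step after the parser, such as a `RunnableLambda` that calls `.json()`), so that `_get_output_messages` receives a `str` it knows how to store. The stand-ins below are plain Python that only mimic the dispatch quoted above — `Joke` here is not the real Pydantic model.

```python
class Joke:
    """Stand-in for the Pydantic model in this issue."""
    def __init__(self, setup, punchline):
        self.setup, self.punchline = setup, punchline
    def json(self):
        return '{"setup": "%s", "punchline": "%s"}' % (self.setup, self.punchline)

def get_output_messages(output_val):
    # Simplified mimic of RunnableWithMessageHistory._get_output_messages
    if isinstance(output_val, dict):
        output_val = output_val["output"]
    if isinstance(output_val, str):
        return [("ai", output_val)]
    raise ValueError()

joke = Joke("Why did the chicken cross the road?", "To get to the other side.")

try:
    get_output_messages({"output": joke})  # what happens today: ValueError
    failed = False
except ValueError:
    failed = True

# Serializing first (e.g. `chain | RunnableLambda(lambda j: j.json())`) sidesteps it
stored = get_output_messages({"output": joke.json()})
print(failed, stored[0][0])
```

The trade-off is that the history then holds the JSON string rather than the structured object.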
### System Info
langchain==0.1.16
langchain-cli==0.0.19
langchain-community==0.0.32
langchain-core==0.1.42
langchain-experimental==0.0.10
langchain-openai==0.0.8
langchain-text-splitters==0.0.1
python version 3.10.9
OS: windows11 | How to use RunnableWithMessageHistory? | https://api.github.com/repos/langchain-ai/langchain/issues/20380/comments | 3 | 2024-04-12T10:18:41Z | 2024-04-16T13:08:19Z | https://github.com/langchain-ai/langchain/issues/20380 | 2,239,704,735 | 20,380 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_community.tools import BearlyInterpreterTool
from langchain.agents import AgentType, initialize_agent
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
bearly_tool = BearlyInterpreterTool(api_key="...")  # Bearly API key (redacted)
bearly_tool.add_file(
    source_path="sample_data/US_GDP.csv", target_path="US_GDP.csv", description=""
)
tools = [bearly_tool.as_tool()]
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.OPENAI_FUNCTIONS,
    verbose=True,
    handle_parsing_errors=True,
)
# Simple Queries
agent.run("What was the US GDP in 2019?")
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': '\'Evaluates python code in a sandbox environment. The environment resets on every execution. You must send the whole script every time and print your outputs. Script should be pure python code that can be evaluated. It should be in python format NOT markdown. The code should NOT be wrapped in backticks. All python packages including requests, matplotlib, scipy, numpy, pandas, etc are available. If you have any files outputted write them to "output/" relative to the execution path. Output can only be read from the directory, stdout, and stdin. Do not use things like plot.show() as it will not work instead write them out `output/` and a link to the file will be returned. print() any output and results so you can capture the output.\\n\\nThe following files available in the evaluation environment:
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I want to use BearlyInterpreterTool, but I get an error.
### System Info
python version 3.10
linux
langchain 0.1.14 | openai.BadRequestError: Error code: 400 - {'error': {'message': '\'Evaluates python code in a sandbox environment The environment resets on every execution. You must send the whole script every time and print your outputs. Script should be pure python code that can be evaluated. It should be in python format NOT markdown. | https://api.github.com/repos/langchain-ai/langchain/issues/20377/comments | 0 | 2024-04-12T09:25:08Z | 2024-07-19T16:08:30Z | https://github.com/langchain-ai/langchain/issues/20377 | 2,239,531,981 | 20,377 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from typing import Any, AsyncIterator, List, Sequence, cast
from langchain_core.runnables.schema import StreamEvent
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_openai import ChatOpenAI
from langchain_core.prompts.chat import (
ChatPromptTemplate,
SystemMessagePromptTemplate,
HumanMessagePromptTemplate,
MessagesPlaceholder,
PromptTemplate,
)
import langchain_core
import typing
from langchain_core.documents import Document
from langchain_core.tools import tool
def foo(x: int) -> dict:
"""Foo"""
return {"x": 5}
@tool
def get_docs(x: int) -> list[Document]:
"""get_docs"""
return [Document(page_content="hello")]
def _with_nulled_run_id(events: Sequence[StreamEvent]) -> List[StreamEvent]:
"""Removes the run ids from events."""
return cast(List[StreamEvent], [{**event, "run_id": ""} for event in events])
async def _collect_events(events: AsyncIterator[StreamEvent]) -> List[StreamEvent]:
"""Collect the events and remove the run ids."""
materialized_events = [event async for event in events]
events_ = _with_nulled_run_id(materialized_events)
for event in events_:
event["tags"] = sorted(event["tags"])
return events_
prompt_obj = {
"name": None,
"input_variables": ["agent_scratchpad", "input"],
"input_types": {
"chat_history": typing.List[
typing.Union[
langchain_core.messages.ai.AIMessage,
langchain_core.messages.human.HumanMessage,
langchain_core.messages.chat.ChatMessage,
langchain_core.messages.system.SystemMessage,
langchain_core.messages.function.FunctionMessage,
langchain_core.messages.tool.ToolMessage,
]
],
"agent_scratchpad": typing.List[
typing.Union[
langchain_core.messages.ai.AIMessage,
langchain_core.messages.human.HumanMessage,
langchain_core.messages.chat.ChatMessage,
langchain_core.messages.system.SystemMessage,
langchain_core.messages.function.FunctionMessage,
langchain_core.messages.tool.ToolMessage,
]
],
},
"output_parser": None,
"partial_variables": {},
"metadata": {
"lc_hub_owner": "hwchase17",
"lc_hub_repo": "openai-tools-agent",
"lc_hub_commit_hash": "c18672812789a3b9697656dd539edf0120285dcae36396d0b548ae42a4ed66f5",
},
"tags": None,
"messages": [
SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template="You are a helpful assistant")),
MessagesPlaceholder(variable_name="chat_history", optional=True),
HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=["input"], template="{input}")),
MessagesPlaceholder(variable_name="agent_scratchpad"),
],
"validate_template": False,
}
prompt = ChatPromptTemplate.parse_obj(prompt_obj)
tools = [get_docs]
llm = ChatOpenAI(model="gpt-3.5-turbo-1106", temperature=0)
# Construct the OpenAI Tools agent
agent = create_openai_tools_agent(llm, tools, prompt)
# Create an agent executor by passing in the agent and tools
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
events = await _collect_events(
agent_executor.astream_events({"input": "call get_docs."}, version="v1", include_names=["get_docs"])
)
assert events == [
{
"event": "on_tool_start",
"name": "get_docs",
"run_id": "",
"tags": [],
"metadata": {},
"data": {"input": {"x": 5}},
},
{
"event": "on_tool_end",
"name": "get_docs",
"run_id": "",
"tags": [],
"metadata": {},
"data": {"input": {"x": 5}, "output": [Document(page_content="hello")]},
},
]
```
### Error Message and Stack Trace (if applicable)
Assertion error:
```
---------------------------------------------------------------------------
{
"name": "AssertionError",
"message": "",
"stack": "---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
Cell In[4], line 96
92 agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
93 events = await _collect_events(
94 agent_executor.astream_events({\"input\": \"call get_docs.\"}, version=\"v1\", include_names=[\"get_docs\"])
95 )
---> 96 assert events == [
97 {
98 \"event\": \"on_tool_start\",
99 \"name\": \"get_docs\",
100 \"run_id\": \"\",
101 \"tags\": [],
102 \"metadata\": {},
103 \"data\": {\"input\": {\"x\": 5}},
104 },
105 {
106 \"event\": \"on_tool_end\",
107 \"name\": \"get_docs\",
108 \"run_id\": \"\",
109 \"tags\": [],
110 \"metadata\": {},
111 \"data\": {\"input\": {\"x\": 5}, \"output\": [Document(page_content=\"hello\")]},
112 },
113 ]
AssertionError: "
}
```
### Description
When using an agent executor and calling a tool, I expect the actual output of the tool rather than the output being cast to a string.
This bug was originally raised [here](https://github.com/langchain-ai/langchain/discussions/18694) and partially fixed in this [PR](https://github.com/langchain-ai/langchain/pull/18932), except for the case of using an agent executor.
This comment shows the cause of the issue: https://github.com/langchain-ai/langchain/pull/18932#issuecomment-2034933719
### System Info
```
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Thu Oct 5 21:02:42 UTC 2023
> Python Version: 3.10.6 | packaged by conda-forge | (main, Aug 22 2022, 20:35:26) [GCC 10.4.0]
Package Information
-------------------
> langchain_core: 0.1.42
> langchain: 0.1.16
> langchain_community: 0.0.32
> langsmith: 0.1.38
> langchain_experimental: 0.0.57
> langchain_openai: 0.1.3
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
``` | AsyncCallbackManagerForToolRun improperly casts on_tool_end to string | https://api.github.com/repos/langchain-ai/langchain/issues/20372/comments | 0 | 2024-04-12T07:16:02Z | 2024-07-19T16:08:26Z | https://github.com/langchain-ai/langchain/issues/20372 | 2,239,269,029 | 20,372 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
class AnalyzeResume(BaseModel):
"""As a professional HR agent that helps users analyze and generate a comprehensive report about the resume.\n
Use the following pieces of context from a resume to answer the question at the end."""
issues: list = Field(
description="List down all issues detected in the resume with **title, description, solution format** with description & solution of atleast 600 words"
)
suggestions: list = Field(
description="List down all suggestions to improve the resume in **title, description format** with description of atleast 600 words"
)
summary: str = Field(description="Summary of the of the resume.")
negotiations: list = Field(
description="List down all insightful points for resume negotiations in **title, description format** with description of atleast 600 words"
)
companies: list = Field(
description="List down all Names and descriptions of the companies involved in the resume in **title, description format**"
)
### Error Message and Stack Trace (if applicable)

### Description
I want the output to be consistent: sometimes the output lists have data, sometimes they are completely blank. Below is the prompt code I am using
```
resume_parser = PydanticOutputParser(pydantic_object=AnalyzeResume)
template = """
As a professional HR agent that helps users analyze and generate a comprehensive report about the resume.\
Use the following pieces of context from a resume to answer the question at the end.\
\n
{context}\n
\n{format_instructions}\n
Question: {question}
"""
rag_prompt_custom = PromptTemplate(
template=template,
input_variables=["context", "question"],
partial_variables={"format_instructions": resume_parser.get_format_instructions()},
)
chroma_client = chromadb.HttpClient(host="chromadb", port=8000)
document_analysis_client = DocumentAnalysisClient(
endpoint=os.environ["AZURE_DI_SERVICE_ENDPOINT"],
credential=AzureKeyCredential(os.environ["AZURE_DI_API_KEY"]),
mode="page",
analysis_features=["ocrHighResolution"],
)
loader = DocumentIntelligenceLoader(
file_path=contract_obj.file_input.path, client=document_analysis_client, model="prebuilt-document"
)
documents = loader.load()
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=14500,
chunk_overlap=100,
add_start_index=True,
strip_whitespace=True,
separators=MARKDOWN_SEPARATORS,
)
docs = text_splitter.split_documents(documents)
collection_id = str(uuid.uuid4())
chroma_db = Chroma.from_documents(docs, azure_embeddings, client=chroma_client, collection_name=collection_id)
contract_obj.collection_id = collection_id
contract_obj.save()
vectorstore = Chroma(client=chroma_client, collection_name=collection_id, embedding_function=azure_embeddings)
retriever = vectorstore.as_retriever()
rag_chain = {"context": retriever, "question": RunnablePassthrough()} | rag_prompt_custom | azure_llm
with get_openai_callback() as cb:
result = rag_chain.invoke("Generate answers in provided format")
total_rag_tokens = cb.total_tokens
```
Just point me to the issue, I will try my best to fix it. (Might be chroma or Azure OpenAI or the text splitter)
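Not a LangChain API — just a hedged mitigation sketch: since the model sometimes returns empty lists, validating the parsed result and re-invoking a bounded number of times (or reaching for `langchain.output_parsers.RetryOutputParser`) tends to smooth this out. The field names match the `AnalyzeResume` model above; everything else is illustrative plain Python with a fake chain.

```python
REQUIRED_FIELDS = ["issues", "suggestions", "summary", "negotiations", "companies"]

def is_complete(report: dict) -> bool:
    """True only when every required field is present and non-empty."""
    return all(report.get(field) for field in REQUIRED_FIELDS)

def invoke_with_retry(invoke, max_attempts=3):
    """`invoke` is any zero-arg callable returning a dict-shaped report."""
    result = {}
    for _ in range(max_attempts):
        result = invoke()
        if is_complete(result):
            return result
    return result  # caller decides what to do with a still-incomplete report

# Demo with a fake chain that only succeeds on the second call
calls = {"n": 0}
def fake_chain():
    calls["n"] += 1
    if calls["n"] == 1:
        return {"issues": [], "summary": ""}  # the flaky empty answer
    return {"issues": [{"title": "gap"}], "suggestions": [{"title": "add dates"}],
            "summary": "ok", "negotiations": [{"title": "leverage"}],
            "companies": [{"title": "Acme"}]}

report = invoke_with_retry(fake_chain)
print(calls["n"], is_complete(report))
```

In the real pipeline `invoke` would wrap `rag_chain.invoke(...)` followed by `resume_parser.parse(...)`.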
### System Info
langchain
langchain-community
langchain-core
langchain-google-genai==0.0.8
langchain-openai==0.0.5
langchainhub
langsmith
azure-ai-documentintelligence==1.0.0b1
azure-ai-formrecognizer==3.3.2
chroma-hnswlib==0.7.3
chromadb == 0.4.22 | Highly inconsistent output using PydanticOutputParser | https://api.github.com/repos/langchain-ai/langchain/issues/20369/comments | 2 | 2024-04-12T06:31:30Z | 2024-04-12T21:20:44Z | https://github.com/langchain-ai/langchain/issues/20369 | 2,239,190,057 | 20,369 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [x] I added a very descriptive title to this issue.
- [x] I searched the LangChain documentation with the integrated search.
- [x] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
def generate_response_langchain(user_question, complexity_level, memory):
"""
Generate a response based on user question using LangChain
"""
groq_chat = None
    if complexity_level == "simple":
        groq_chat = ChatGroq(
            groq_api_key=groq_api_key,  # key assumed loaded elsewhere; redacted in the original
            model_name='llama2-70b-4096'  # LLaMA2-70b model
        )
    elif complexity_level == "large":
        groq_chat = ChatGroq(
            groq_api_key=groq_api_key,
            model_name='mixtral-8x7b-32768'  # Mixtral-8x7b model
        )
    elif complexity_level == "complex":
        groq_chat = ChatGroq(
            groq_api_key=groq_api_key,
            model_name='gemma-7b-it'  # Gemma-7b-it model
        )
conversation = ConversationChain(
llm=groq_chat,
memory=memory
)
conversational_memory_length = 100
if 'chat_history' not in st.session_state:
st.session_state.chat_history = []
memory = ConversationBufferWindowMemory(k=conversational_memory_length)
# Display chat history
st.write("Chat History:")
for sender, message in st.session_state.chat_history:
st.write(f"{sender} {message}")
user_question = st.chat_input("Say something")
if user_question:
st.session_state.chat_history.append(("User:", user_question))
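A hedged aside on the likely culprit: Streamlit reruns the whole script on every interaction, so a `ConversationBufferWindowMemory` created at module level is rebuilt (and emptied) each turn — it needs to live in `st.session_state`, the way `chat_history` already does. The plain-Python sketch below (illustrative names, no LangChain/Groq calls) only shows the windowing idea of carrying the last `k` exchanges into each call.

```python
def windowed(history, k):
    """Keep only the last k (user, assistant) exchanges, like k in the memory class."""
    return history[-k:]

history = []  # must persist across turns (in Streamlit: st.session_state)

def chat_turn(user_msg, k=2):
    # In the real app, the windowed history would be prepended to the prompt here
    context = windowed(history, k)
    reply = f"(answer using {len(context)} prior exchanges)"
    history.append((user_msg, reply))
    return reply

chat_turn("My name is Sam.")
chat_turn("What's the weather?")
reply = chat_turn("What is my name?")
print(reply)  # the third turn sees both earlier exchanges
```

Because `history` persists between calls, the third question still "sees" the name from the first turn — which is exactly what a per-rerun memory object loses.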
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I have a problem with chat history: if I tell the model my name and then, in the next message, ask what my name is, the model can't answer — it forgets the name. I need history-based contextualization. Can I get your help?
### System Info
 | i have a problem with chat history if i told him my name in next message, if i asked about my name the model cant answer "Forget the name." i need to make history-based Contextualization cant i get your help? | https://api.github.com/repos/langchain-ai/langchain/issues/20367/comments | 0 | 2024-04-12T04:11:04Z | 2024-07-19T16:08:20Z | https://github.com/langchain-ai/langchain/issues/20367 | 2,239,022,970 | 20,367 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain_community.llms import HuggingFaceEndpoint
llm = HuggingFaceEndpoint(
endpoint_url="http://localhost:8010/",
max_new_tokens=512,
top_k=10,
top_p=0.95,
typical_p=0.95,
temperature=0.01,
repetition_penalty=1.03,
huggingfacehub_api_token=None
)
```
### Error Message and Stack Trace (if applicable)
```
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
Cell In[2], line 3
1 from langchain_community.llms import HuggingFaceEndpoint
----> 3 llm = HuggingFaceEndpoint(
4 endpoint_url="http://localhost:8010/",
5 max_new_tokens=512,
6 top_k=10,
7 top_p=0.95,
8 typical_p=0.95,
9 temperature=0.01,
10 repetition_penalty=1.03,
11 huggingfacehub_api_token=None
12 )
File ~/Jupyter/llm/venv/lib/python3.10/site-packages/langchain_core/load/serializable.py:120, in Serializable.__init__(self, **kwargs)
119 def __init__(self, **kwargs: Any) -> None:
--> 120 super().__init__(**kwargs)
121 self._lc_kwargs = kwargs
File ~/Jupyter/llm/venv/lib/python3.10/site-packages/pydantic/v1/main.py:341, in BaseModel.__init__(__pydantic_self__, **data)
339 values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
340 if validation_error:
--> 341 raise validation_error
342 try:
343 object_setattr(__pydantic_self__, '__dict__', values)
ValidationError: 1 validation error for HuggingFaceEndpoint
__root__
Could not authenticate with huggingface_hub. Please check your API token. (type=value_error)
```
### Description
#### Background
While restructuring our codebase in response to the deprecation of `HuggingFaceTextGenInference`, I encountered an error when attempting to create a `HuggingFaceEndpoint` with a locally hosted [TGI server](https://github.com/huggingface/text-generation-inference).
#### Issue
The error occurs in the `validate_environment` function of the `huggingface_endpoint.py` file, specifically in lines [170-179](https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/huggingface_endpoint.py#L170-L179).
The `@root_validator()` decorator throws an error when `huggingfacehub_api_token` is passed as `None`, which happens due to `login(token=huggingfacehub_api_token)` in `validate_environment` function. By commenting out the block that processes the API token and manually setting `huggingfacehub_api_token` to `None`, I am able to successfully create an `InferenceClient`.
Since HuggingFaceTextGenInference is fused into HuggingFaceEndpoint in PR #17254, we need to add logic to handle cases where `huggingfacehub_api_token` is passed as `None` or when no environment variable `HUGGINGFACEHUB_API_TOKEN` is set. This is particularly necessary for setups using a locally hosted TGI server where authentication with the Huggingface Hub may not be required.
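A sketch of the guard this issue proposes — illustrative only, not the actual patch. `login` below is a stub standing in for `huggingface_hub.login` (which raises on a bad or missing token), and `HF_TOKEN` is the alternative environment variable the Hub also honors.

```python
authenticated = {"token": None}

def login(token):
    """Stub for huggingface_hub.login; the real one raises on a bad/None token."""
    if not token:
        raise ValueError("Invalid token")
    authenticated["token"] = token

def resolve_token(passed_token, env):
    token = passed_token or env.get("HUGGINGFACEHUB_API_TOKEN") or env.get("HF_TOKEN")
    if token is not None:  # the proposed fix: skip login entirely when no token exists
        login(token)
    return token

# Local TGI server, no token anywhere -> no login attempt, no ValidationError
tok = resolve_token(None, {})
print(tok, authenticated["token"])

# Token provided -> normal authentication path
tok2 = resolve_token("hf_dummy", {})
print(tok2, authenticated["token"])
```

With this shape, a locally hosted TGI endpoint works unauthenticated while Hub-backed endpoints still log in as before.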
### System Info
huggingface-hub==0.22.2
langchain-community==0.0.32
platform: linux
python version: 3.10 | Handling huggingfacehub_api_token=None for HuggingFaceEndpoint | https://api.github.com/repos/langchain-ai/langchain/issues/20342/comments | 6 | 2024-04-11T17:31:23Z | 2024-06-03T22:20:33Z | https://github.com/langchain-ai/langchain/issues/20342 | 2,238,215,338 | 20,342 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from uuid import uuid4
from langchain.agents import AgentExecutor, create_structured_chat_agent
from langchain.prompts import (
ChatPromptTemplate,
HumanMessagePromptTemplate,
MessagesPlaceholder,
SystemMessagePromptTemplate,
)
from langchain.memory import ConversationBufferWindowMemory
# Methods excerpted from a class; self.__agent, self.__tools, self.__logger,
# self.__verbose_mode and self.__history are initialized elsewhere.
def __handle_parsing_error(self, error):
    self.__logger.error(f"Parsing error encountered: {error}")
    # For now, returning a generic error message.
    return "I'm sorry, there was a problem understanding your request."

def __get_agent(self, session_id):
    return AgentExecutor.from_agent_and_tools(
        agent=self.__agent,
        tools=self.__tools,
        verbose=self.__verbose_mode,
        memory=self.__get_history(session_id),
        handle_parsing_errors=self.__handle_parsing_error,
        return_intermediate_steps=False,
    )

def send_message(self, session_id, message: str = "") -> str:
    if not message.strip():
        return "You didn't ask a question. How can I assist you further?"
    runner = self.__get_agent(session_id)
    try:
        # fixed: the original used the builtin `input` as the dict key
        response = runner.invoke({"input": message})
    except Exception as ex:
        self.__logger.exception(str(ex))
        return "Sorry, please try again."
    return response if response else "No response received"

def __get_history(self, session_id) -> ConversationBufferWindowMemory:
    if session_id not in self.__history:
        # memory_key must be the string "chat_history" (it was an undefined name)
        self.__history[session_id] = ConversationBufferWindowMemory(
            k=10, memory_key="chat_history", return_messages=True
        )
    return self.__history[session_id]
session_id = str(uuid4())
response = send_message(session_id, "Hi")
```
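Separate from the validation error, one concrete bug in the snippet: as originally written, `runner.invoke({input: message})` uses the Python builtin `input` *function* as the dictionary key rather than the string `"input"` the `AgentExecutor` expects. A minimal, self-contained demonstration:

```python
message = "Hi"

broken = {input: message}    # key is the builtin input() function object
fixed = {"input": message}   # key is the string the AgentExecutor expects

print("input" in broken)  # the agent will not find its "input" key here
print("input" in fixed)
```

So the agent receives no usable `"input"` entry at all, which can plausibly contribute to the looping/parsing behavior described below.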
### Error Message and Stack Trace (if applicable)
2024-04-11 08:40:28,242] ERROR RhinoAgent 2 validation errors for AIMessage
content
str type expected (type=type_error.str)
content
value is not a valid list (type=type_error.list)
Traceback (most recent call last):
File "/Users/vmac/vWork/protoype/venv/lib/python3.11/site-packages/langchain/chains/base.py", line 163, in invoke
raise e
File "/Users/vmac/vWork/protoype/venv/lib/python3.11/site-packages/langchain/chains/base.py", line 158, in invoke
final_outputs: Dict[str, Any] = self.prep_outputs(
^^^^^^^^^^^^^^^^^^
File "/Users/vmac/vWork/protoype/venv/lib/python3.11/site-packages/langchain/chains/base.py", line 455, in prep_outputs
self.memory.save_context(inputs, outputs)
File "/Users/vmac/vWork/protoype/venv/lib/python3.11/site-packages/langchain/memory/chat_memory.py", line 40, in save_context
[HumanMessage(content=input_str), AIMessage(content=output_str)]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/vmac/vWork/protoype/venv/lib/python3.11/site-packages/langchain_core/messages/base.py", line 45, in __init__
return super().__init__(content=content, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/vmac/vWork/protoype/venv/lib/python3.11/site-packages/langchain_core/load/serializable.py", line 120, in __init__
super().__init__(**kwargs)
File "/Users/vmac/vWork/protoype/venv/lib/python3.11/site-packages/pydantic/v1/main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 2 validation errors for AIMessage
content
str type expected (type=type_error.str)
content
value is not a valid list (type=type_error.list)
### Description
I am trying to use langchain to build a basic chatbot. I have the prompt template defined and I am using openAI GPT4.
When I ask any simple question that gets a string response from agent, it gives me the error above.
Sometimes it also goes into a loop until I run into the "RateLimit" Error . Attached a screenshot for the same

### System Info
langchain==0.1.13
langchain-community==0.0.29
langchain-core==0.1.33
langchain-openai==0.1.1
langchain-text-splitters==0.0.1
| LangChain throwing parsing error and goes in loop when returning a String response | https://api.github.com/repos/langchain-ai/langchain/issues/20341/comments | 2 | 2024-04-11T17:24:10Z | 2024-07-18T16:09:39Z | https://github.com/langchain-ai/langchain/issues/20341 | 2,238,202,938 | 20,341 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code raises a malformed input request based on extraneous key [tools] when querying the AWS Bedrock service.
```python
import os

from dotenv import load_dotenv
from langchain_community.llms import Bedrock
from langchain_community.utilities import SQLDatabase
from langchain_community.agent_toolkits import create_sql_agent

def init_test_db(debug = False):
# Initialize the database
# Load environment variables from .env.local file
load_dotenv(dotenv_path='.env.local')
# Read PostgreSQL connection parameters from environment variables
POSTGRES_DB = os.getenv("POSTGRES_DB")
POSTGRES_USER = os.getenv("POSTGRES_USER")
POSTGRES_PASSWORD = os.getenv("POSTGRES_PASSWORD")
POSTGRES_HOST = os.getenv("POSTGRES_HOST")
POSTGRES_PORT = os.getenv("POSTGRES_PORT")
# Build the PostgreSQL connection URI
postgres_uri = f"postgresql://{POSTGRES_USER}:{POSTGRES_PASSWORD}@{POSTGRES_HOST}:{POSTGRES_PORT}/{POSTGRES_DB}"
db = SQLDatabase.from_uri(postgres_uri)
if debug:
print(db.table_info)
return db
def init_llm(model = 'claude-v2.1'):
# Create the LLM
models = {
'claude-v2.1': 'anthropic.claude-v2:1',
'claude-v3-sonnet': 'anthropic.claude-3-sonnet-20240229-v1:0',
'mistral-large': "mistral.mistral-large-2402-v1:0",
'claude-v2': 'anthropic.claude-v2',
'jurassic-2-mid': 'ai21.j2-mid-v1',
'jurassic-2-ultra': "ai21.j2-ultra-v1",
}
llm = Bedrock(model_id=models[model], streaming=False, region_name='us-east-1')
return llm
llm = init_llm()
db=init_test_db()
agent_executer = create_sql_agent(llm=llm, db=db, agent_type="openai-tools", verbose=True)
agent_executer.invoke("Which customer has the highest outstanding loan value")
```
### Error Message and Stack Trace (if applicable)
ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModelWithResponseStream operation: Malformed input request: #: extraneous key [tools] is not permitted, please reformat your input and try again.
### Description
I am trying to use the langchain library to build an AWS Bedrock-powered SQL agent. (A likely cause: `agent_type="openai-tools"` makes the agent attach an OpenAI-style `tools` parameter to each request, which Bedrock's invoke API for Claude v2 does not accept — hence the "extraneous key [tools]" error; this agent type needs a model/API with native tool calling.)
### System Info
langchain==0.1.13
langchain-community==0.0.29
langchain-core==0.1.36
langchain-openai==0.1.1
langchain-text-splitters==0.0.1
Windows 11
Python 3.8.10 | AWS Bedrock Service Raising Error: "Malformed input request: #: extraneous key [tools] is not permitted." | https://api.github.com/repos/langchain-ai/langchain/issues/20320/comments | 4 | 2024-04-11T13:24:11Z | 2024-07-03T07:41:40Z | https://github.com/langchain-ai/langchain/issues/20320 | 2,237,722,056 | 20,320 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code

### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am trying to use the Gemini model through LangChain's `ChatGoogleGenerativeAI`, but I am getting this error:
```
'NoneType' object does not support item assignment (type=type_error)
```
### System Info
kaggle | Not able to use gemini from langchain in kaggle | https://api.github.com/repos/langchain-ai/langchain/issues/20317/comments | 3 | 2024-04-11T11:18:54Z | 2024-07-21T16:06:55Z | https://github.com/langchain-ai/langchain/issues/20317 | 2,237,484,407 | 20,317 |
[
"langchain-ai",
"langchain"
] | _🤖_
<!-- Greeting -->
Hey @nithinreddyyyyyy, back for more adventures in code, I see! Always a pleasure to dive into these challenges with you. 🚀
<!-- Answer -->
Based on the information you provided and the similar issue I found in the LangChain repository, you can use FAISS for ParentDocumentRetriever in the LangChain framework as follows:
```python
from langchain_community.vectorstores.faiss import FAISS
from langchain_community.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.storage import InMemoryStore
from langchain.retrievers import ParentDocumentRetriever

# Initialize the embeddings and FAISS vector store
embeddings = OpenAIEmbeddings()
faiss = FAISS.from_texts(texts, embeddings)  # `texts` is your list of strings

# Define the child and parent splitters
child_splitter = RecursiveCharacterTextSplitter(chunk_size=400, chunk_overlap=50)
parent_splitter = RecursiveCharacterTextSplitter(chunk_size=1200, chunk_overlap=300)

# The docstore that will hold the full parent documents
store = InMemoryStore()

# Initialize the ParentDocumentRetriever with FAISS
parent_document_retriever = ParentDocumentRetriever(
    vectorstore=faiss,
    docstore=store,
    child_splitter=child_splitter,
    parent_splitter=parent_splitter
)

# Add documents to the retriever
parent_document_retriever.add_documents(docs, ids=None)  # `docs` is a list of Documents
```
This code snippet includes the use of `child_splitter` and `parent_splitter` with FAISS as requested by the user. You can find more details in the [related issue](https://github.com/langchain-ai/langchain/issues/17237).
For retrieving relevant documents with additional metadata and filtering capabilities, you can follow the pattern shown in the test cases for the FAISS class:
```python
from langchain_community.vectorstores.faiss import FAISS
from langchain_core.documents import Document
# Assuming you have a list of texts and an embedding model
texts = ["text1", "text2", "text3"]
metadatas = [{"page": i} for i in range(len(texts))]
embedding = OpenAIEmbeddings()  # from_texts expects an Embeddings object, not raw vectors

# Create a FAISS instance from the texts, embedding model, and metadatas
docsearch = FAISS.from_texts(texts, embedding, metadatas=metadatas)
# Now you can use this FAISS instance to retrieve relevant documents with filtering
query = "some query"
filter_criteria = {"page": 1} # or use a lambda function for more complex filtering
relevant_docs = docsearch.similarity_search(query, k=1, filter=filter_criteria)
```
In this example, `relevant_docs` will contain the most relevant document to the query that also matches the filter criteria. You can adjust the `k` parameter to retrieve more documents and the `filter` parameter to apply different filtering rules. For asynchronous use cases, you can use the `await FAISS.afrom_texts` and `await docsearch.asimilarity_search` methods with the same parameters. You can find more details in the [test cases for the FAISS class](https://github.com/langchain-ai/langchain/blob/master/libs/community/tests/unit_tests/vectorstores/test_faiss.py).
The `FAISS.from_texts` method in the LangChain framework is a class method that constructs a FAISS (Facebook AI Similarity Search) wrapper from raw documents. This method is a user-friendly interface that embeds documents, creates an in-memory docstore, and initializes the FAISS database. It is intended to be a quick way to get started with the framework. You can find more details in the [FAISS class source code](https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/vectorstores/faiss.py).
I hope this helps! If you have any further questions, feel free to ask.
## Sources
- [can i use FAISS isntead of Chroma for ParentDocumentRetriver?](https://github.com/langchain-ai/langchain/issues/17237)
- [libs/community/tests/unit_tests/vectorstores/test_faiss.py](https://github.com/langchain-ai/langchain/blob/master/libs/community/tests/unit_tests/vectorstores/test_faiss.py)
- [libs/community/langchain_community/vectorstores/faiss.py](https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/vectorstores/faiss.py)
<!-- Dosu Comment Footer -->
<details open>
<summary>
#### About Dosu
</summary>
This response is meant to be useful and save you time. It is **not meant to be a precise solution**, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
</details>
_Originally posted by @dosubot[bot] in https://github.com/langchain-ai/langchain/issues/17576#issuecomment-1946203996_
| save ParentDocumentRetriever in local path _🤖_ | https://api.github.com/repos/langchain-ai/langchain/issues/20315/comments | 2 | 2024-04-11T09:57:45Z | 2024-07-18T16:09:29Z | https://github.com/langchain-ai/langchain/issues/20315 | 2,237,352,542 | 20,315 |
[
"langchain-ai",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
In the current documentation: [Constitutional Chain Documentation](https://python.langchain.com/docs/guides/productionization/safety/constitutional_chain/#custom-principles), usage of Constitutional Principles inside a LangChain agent is not covered; it only has information on LLMChain.
I want to use it inside a conversational agent, because I also have to pass custom tools and the conversation history into the agent.
Currently I am using create_openai_tools_agent() to create the agent.
### Idea or request for content:
_No response_ | Using Constitutional AI with Lanchain Agents | https://api.github.com/repos/langchain-ai/langchain/issues/20312/comments | 0 | 2024-04-11T09:22:25Z | 2024-07-18T16:09:24Z | https://github.com/langchain-ai/langchain/issues/20312 | 2,237,284,294 | 20,312 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
INITIALIZE_AGENT
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Every time I use this node, I see it is slated for deprecation in LangChain's v2. I was wondering if it would be too much to ask for it to be maintained rather than deprecated, because it is the only agent creator and executor that works with almost any LLM I switch between (e.g. Gemini, OpenRouter, Hugging Face). The new create_openai_structured or React agents are often problematic, requiring a ton of time and wasted effort to debug. Even with LangChain's examples, they often don't work. Just thought I would mention this. Thanks.
### System Info
INITIALIZE_AGENT | INITIALIZE_AGENT (LANGCHAIN'S TRADITIONAL AGENT EXECUTOR CREATOR) | https://api.github.com/repos/langchain-ai/langchain/issues/20309/comments | 0 | 2024-04-11T06:44:41Z | 2024-07-18T16:09:19Z | https://github.com/langchain-ai/langchain/issues/20309 | 2,237,021,603 | 20,309 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Code that doesn't work:
```
retriever = self.vector_store.as_retriever(search_kwargs={
    'k': 1,
    'filter': {'portfolio_uri': {'$eq': params.portfolio_uri}}
})
docs = retriever.get_relevant_documents(query=query)
```
Code that works:
```
retriever = self.vector_store.as_retriever(search_kwargs={
    'k': limit,
    'filter': {'portfolio_uri': {'$in': [params.portfolio_uri]}}
})
docs = retriever.get_relevant_documents(query=query)
```
### Error Message and Stack Trace (if applicable)
File "/Users/anthonydemattos/syyclops/open-operator/.venv/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1971, in _exec_single_context
self.dialect.do_execute(
File "/Users/anthonydemattos/syyclops/open-operator/.venv/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 919, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedFunction) function jsonb_path_match(json, unknown, unknown) does not exist
LINE 3: ...2edf-c5b6-4a08-9a06-c841d8c6dab9'::uuid::UUID AND jsonb_path...
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
[SQL: SELECT langchain_pg_embedding.collection_id AS langchain_pg_embedding_collection_id, langchain_pg_embedding.embedding AS langchain_pg_embedding_embedding, langchain_pg_embedding.document AS langchain_pg_embedding_document, langchain_pg_embedding.cmetadata AS langchain_pg_embedding_cmetadata, langchain_pg_embedding.custom_id AS langchain_pg_embedding_custom_id, langchain_pg_embedding.uuid AS langchain_pg_embedding_uuid, langchain_pg_embedding.embedding <=> %(embedding_1)s AS distance
FROM langchain_pg_embedding JOIN langchain_pg_collection ON langchain_pg_embedding.collection_id = langchain_pg_collection.uuid
WHERE langchain_pg_embedding.collection_id = %(collection_id_1)s::UUID AND jsonb_path_match(langchain_pg_embedding.cmetadata, %(jsonb_path_match_1)s, %(jsonb_path_match_2)s) ORDER BY distance ASC
LIMIT %(param_1)s]
[parameters: {'embedding_1': '[-0.010107061089572959,-0.013629535476347075,-0.0012468165027500532,-0.024999106785678033,-0.03649423341900812,0.012025240109652443,-0.03317403857405 ... (32593 characters truncated) ... .024217885455325203,-0.029881744291334902,0.015178028787260737,-0.01019773838247912,-0.006612486799366736,-0.02557107296798936,-0.019600304222168217]', 'collection_id_1': UUID('622b2edf-c5b6-4a08-9a06-c841d8c6dab9'), 'jsonb_path_match_1': '$.portfolio_uri == $value', 'jsonb_path_match_2': '{"value": "https://syyclops.com/example"}', 'param_1': 1}]
(Background on this error at: https://sqlalche.me/e/20/f405)
### Description
I am trying to do a vector store similarity search with pg vector and provide a filter of the metadata.
For some reason when i do $eq it doesn't work but if I do $in: [.. then it does work
### System Info
python -m langchain_core.sys_info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.0.0: Fri Sep 15 14:41:43 PDT 2023; root:xnu-10002.1.13~1/RELEASE_ARM64_T6000
> Python Version: 3.10.6 | packaged by conda-forge | (main, Aug 22 2022, 20:38:29) [Clang 13.0.1 ]
Package Information
-------------------
> langchain_core: 0.1.41
> langchain: 0.1.12
> langchain_community: 0.0.32
> langsmith: 0.1.43
> langchain_openai: 0.1.2
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | PGVector filter $eq | https://api.github.com/repos/langchain-ai/langchain/issues/20293/comments | 3 | 2024-04-10T19:32:41Z | 2024-04-18T20:43:28Z | https://github.com/langchain-ai/langchain/issues/20293 | 2,236,289,728 | 20,293 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
%pip install --upgrade --quiet azure-search-documents
%pip install --upgrade --quiet azure-identity

import os

from langchain_community.vectorstores.azuresearch import AzureSearch
from langchain_openai import AzureOpenAIEmbeddings, OpenAIEmbeddings

# Option 2: use an Azure OpenAI account with a deployment of an embedding model
azure_endpoint: str = "PLACEHOLDER FOR YOUR AZURE OPENAI ENDPOINT"
azure_openai_api_key: str = "PLACEHOLDER FOR YOUR AZURE OPENAI KEY"
azure_openai_api_version: str = "2023-05-15"
azure_deployment: str = "text-embedding-ada-002"

vector_store_address: str = "YOUR_AZURE_SEARCH_ENDPOINT"
vector_store_password: str = "YOUR_AZURE_SEARCH_ADMIN_KEY"

# Option 2: Use AzureOpenAIEmbeddings with an Azure account
embeddings: AzureOpenAIEmbeddings = AzureOpenAIEmbeddings(
    azure_deployment=azure_deployment,
    openai_api_version=azure_openai_api_version,
    azure_endpoint=azure_endpoint,
    api_key=azure_openai_api_key,
)

index_name: str = "langchain-vector-demo"
vector_store: AzureSearch = AzureSearch(
    azure_search_endpoint=vector_store_address,
    azure_search_key=vector_store_password,
    index_name=index_name,
    embedding_function=embeddings.embed_query,
)

from langchain.text_splitter import (
    CharacterTextSplitter,
    RecursiveCharacterTextSplitter,
)
from langchain.document_loaders import DirectoryLoader, PyPDFLoader

# Read the PDF file using the langchain loader
pdf_link = "test.pdf"
loader = PyPDFLoader(pdf_link, extract_images=False)
data = loader.load_and_split()

text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(data)

vector_store.add_documents(documents=docs)
```
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[12], line 2
1 for i in range(0, len(docs)):
----> 2 vector_store.add_documents(documents=docs[i])
3 time.sleep(5)
File ~/anaconda3/envs/rag_azure/lib/python3.10/site-packages/langchain_core/vectorstores.py:136, in VectorStore.add_documents(self, documents, **kwargs)
127 """Run more documents through the embeddings and add to the vectorstore.
128
129 Args:
(...)
133 List[str]: List of IDs of the added texts.
134 """
135 # TODO: Handle the case where the user doesn't provide ids on the Collection
--> 136 texts = [doc.page_content for doc in documents]
137 metadatas = [doc.metadata for doc in documents]
138 return self.add_texts(texts, metadatas, **kwargs)
File ~/anaconda3/envs/rag_azure/lib/python3.10/site-packages/langchain_core/vectorstores.py:136, in <listcomp>(.0)
127 """Run more documents through the embeddings and add to the vectorstore.
128
129 Args:
(...)
133 List[str]: List of IDs of the added texts.
134 """
135 # TODO: Handle the case where the user doesn't provide ids on the Collection
--> 136 texts = [doc.page_content for doc in documents]
137 metadatas = [doc.metadata for doc in documents]
138 return self.add_texts(texts, metadatas, **kwargs)
AttributeError: 'tuple' object has no attribute 'page_content'
### Description
I am using LangChain to connect to Azure AI Search, create vector stores, and add documents to them so I can build a RAG application. I tried to replicate the notebook provided by LangChain for Azure AI Search (https://python.langchain.com/docs/integrations/vectorstores/azuresearch/), but it is failing with the above error.
I do see `page_content` in `docs`, so I am not sure where the problem is. `type(docs[0])` returns `langchain_core.documents.base.Document`.
Here is an example of how one element of `docs` looks:
print(docs[5])
Document(page_content='Modify the likelihood of specified tokens appearing in the completion. Accepts a json object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this tokenizer tool (which works for both GPT-2 and GPT-3) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect varies per model, but values between -1 and 1 should decrease or increase likelihood of selection', metadata={'source': 'test.pdf', 'page': 3})
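The traceback can be reproduced without any vector store. The class below is a hypothetical stand-in mimicking how a pydantic model iterates: `add_documents` does `[doc.page_content for doc in documents]`, and a single `Document` passed in place of a list iterates as `(field_name, value)` tuples, which is where the `AttributeError` comes from.

```python
# Hypothetical stand-in (not LangChain's Document) for the failure mode.
class FakeDocument:
    def __init__(self, page_content, metadata):
        self.page_content = page_content
        self.metadata = metadata

    def __iter__(self):  # pydantic models iterate over their fields like this
        yield ("page_content", self.page_content)
        yield ("metadata", self.metadata)

doc = FakeDocument("some text", {"page": 3})
items = list(doc)
print(items[0])  # a tuple, which has no .page_content attribute

# The fix in the traceback's loop would be to always pass a list, e.g.:
# vector_store.add_documents(documents=[docs[i]])
```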
### System Info
platform - mac
python - 3.10
langchain==0.1.15
langchain-community==0.0.32
langchain-core==0.1.41
langchain-openai==0.0.2.post1
langchain-text-splitters==0.0.1
| Error when adding documents to vector_store - Azure AI Search | https://api.github.com/repos/langchain-ai/langchain/issues/20283/comments | 1 | 2024-04-10T15:45:26Z | 2024-07-19T16:08:10Z | https://github.com/langchain-ai/langchain/issues/20283 | 2,235,916,674 | 20,283 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import time
from langchain.graphs import Neo4jGraph
from langchain_openai import AzureChatOpenAI
from langchain.prompts.prompt import PromptTemplate
from langchain.chains import GraphCypherQAChain
llm=AzureChatOpenAI(azure_deployment=MODEL_CHAT, model_name=MODEL_CHAT, azure_endpoint=API_ENDPOINT, openai_api_version=API_VERSION, openai_api_key=API_KEY, temperature=0, streaming=True)
neo4j_graph = Neo4jGraph(url=NEO4J_URI, username=NEO4J_USERNAME, password=NEO4J_PASSWORD)
CYPHER_GENERATION_TEMPLATE = """You are an expert Neo4j Cypher translator who understands the question in english and convert to Cypher strictly based on the Neo4j Schema provided and following the instructions below:
<instructions>
* Use aliases to refer the node or relationship in the generated Cypher query
* Generate Cypher query compatible ONLY for Neo4j Version 5
* Do not use EXISTS, SIZE keywords in the cypher. Use alias when using the WITH keyword
* Use only Nodes and relationships mentioned in the schema
* Always enclose the Cypher output inside 3 backticks (```)
* Always do a case-insensitive and fuzzy search for any properties related search. Eg: to search for a Person name use `toLower(p.name) contains 'neo4j'`
* Cypher is NOT SQL. So, do not mix and match the syntaxes
</instructions>
Strictly use this Schema for Cypher generation:
<schema>
{schema}
</schema>
The samples below follow the instructions and the schema mentioned above. So, please follow the same when you generate the cypher:
<samples>
Human: Which manager manages most people directly? How many employees?
Assistant: ```MATCH (p:Person)-[r:IS_MANAGER_OF]->() WITH p, COUNT(r) AS NumberOfEmployees ORDER BY NumberOfEmployees DESC RETURN p.name, NumberOfEmployees LIMIT 1```
</samples>
Human: {question}
Assistant:
"""
CYPHER_GENERATION_PROMPT = PromptTemplate(input_variables=['schema','question'], validate_template=True, template=CYPHER_GENERATION_TEMPLATE)
chain = GraphCypherQAChain.from_llm(
    llm,
    graph=neo4j_graph,
    cypher_prompt=CYPHER_GENERATION_PROMPT,
    validate_cypher=True,
    return_intermediate_steps=True
)
question="Who should we fire from the Example department?"
cypher_cmd=chain.invoke(question)
```
### Error Message and Stack Trace (if applicable)
```bash
---------------------------------------------------------------------------
CypherSyntaxError Traceback (most recent call last)
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_community/graphs/neo4j_graph.py:164](.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_community/graphs/neo4j_graph.py#line=163), in Neo4jGraph.query(self, query, params)
163 try:
--> 164 data = session.run(Query(text=query, timeout=self.timeout), params)
165 json_data = [r.data() for r in data]
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/neo4j/_sync/work/session.py:313](.pyenv/versions/3.11.7/lib/python3.11/site-packages/neo4j/_sync/work/session.py#line=312), in Session.run(self, query, parameters, **kwargs)
312 parameters = dict(parameters or {}, **kwargs)
--> 313 self._auto_result._run(
314 query, parameters, self._config.database,
315 self._config.impersonated_user, self._config.default_access_mode,
316 bookmarks, self._config.notifications_min_severity,
317 self._config.notifications_disabled_categories,
318 )
320 return self._auto_result
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/neo4j/_sync/work/result.py:181](.pyenv/versions/3.11.7/lib/python3.11/site-packages/neo4j/_sync/work/result.py#line=180), in Result._run(self, query, parameters, db, imp_user, access_mode, bookmarks, notifications_min_severity, notifications_disabled_categories)
180 self._connection.send_all()
--> 181 self._attach()
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/neo4j/_sync/work/result.py:301](.pyenv/versions/3.11.7/lib/python3.11/site-packages/neo4j/_sync/work/result.py#line=300), in Result._attach(self)
300 while self._attached is False:
--> 301 self._connection.fetch_message()
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/neo4j/_sync/io/_common.py:178](.pyenv/versions/3.11.7/lib/python3.11/site-packages/neo4j/_sync/io/_common.py#line=177), in ConnectionErrorHandler.__getattr__.<locals>.outer.<locals>.inner(*args, **kwargs)
177 try:
--> 178 func(*args, **kwargs)
179 except (Neo4jError, ServiceUnavailable, SessionExpired) as exc:
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/neo4j/_sync/io/_bolt.py:849](.pyenv/versions/3.11.7/lib/python3.11/site-packages/neo4j/_sync/io/_bolt.py#line=848), in Bolt.fetch_message(self)
846 tag, fields = self.inbox.pop(
847 hydration_hooks=self.responses[0].hydration_hooks
848 )
--> 849 res = self._process_message(tag, fields)
850 self.idle_since = monotonic()
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/neo4j/_sync/io/_bolt5.py:369](.pyenv/versions/3.11.7/lib/python3.11/site-packages/neo4j/_sync/io/_bolt5.py#line=368), in Bolt5x0._process_message(self, tag, fields)
368 try:
--> 369 response.on_failure(summary_metadata or {})
370 except (ServiceUnavailable, DatabaseUnavailable):
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/neo4j/_sync/io/_common.py:245](.pyenv/versions/3.11.7/lib/python3.11/site-packages/neo4j/_sync/io/_common.py#line=244), in Response.on_failure(self, metadata)
244 Util.callback(handler)
--> 245 raise Neo4jError.hydrate(**metadata)
CypherSyntaxError: {code: Neo.ClientError.Statement.SyntaxError} {message: Invalid input 'I': expected
"ALTER"
"CALL"
"CREATE"
"DEALLOCATE"
"DELETE"
"DENY"
"DETACH"
"DROP"
"DRYRUN"
"ENABLE"
"FOREACH"
"GRANT"
"LOAD"
"MATCH"
"MERGE"
"NODETACH"
"OPTIONAL"
"REALLOCATE"
"REMOVE"
"RENAME"
"RETURN"
"REVOKE"
"SET"
"SHOW"
"START"
"STOP"
"TERMINATE"
"UNWIND"
"USE"
"USING"
"WITH" (line 1, column 1 (offset: 0))
"I'm sorry, I cannot generate a query for this question as it goes against ethical and moral principles. It is not appropriate to use data and technology to harm or discriminate against individuals."
^}
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
Cell In[7], line 2
1 question="Who should we fire from 91130 Veh Verif & Value Confirmation?"
----> 2 cypher_cmd=chain.invoke(question)
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain/chains/base.py:162](.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain/chains/base.py#line=161), in Chain.invoke(self, input, config, **kwargs)
160 except BaseException as e:
161 run_manager.on_chain_error(e)
--> 162 raise e
163 run_manager.on_chain_end(outputs)
164 final_outputs: Dict[str, Any] = self.prep_outputs(
165 inputs, outputs, return_only_outputs
166 )
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain/chains/base.py:156](.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain/chains/base.py#line=155), in Chain.invoke(self, input, config, **kwargs)
149 run_manager = callback_manager.on_chain_start(
150 dumpd(self),
151 inputs,
152 name=run_name,
153 )
154 try:
155 outputs = (
--> 156 self._call(inputs, run_manager=run_manager)
157 if new_arg_supported
158 else self._call(inputs)
159 )
160 except BaseException as e:
161 run_manager.on_chain_error(e)
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain/chains/graph_qa/cypher.py:267](.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain/chains/graph_qa/cypher.py#line=266), in GraphCypherQAChain._call(self, inputs, run_manager)
264 # Retrieve and limit the number of results
265 # Generated Cypher be null if query corrector identifies invalid schema
266 if generated_cypher:
--> 267 context = self.graph.query(generated_cypher)[: self.top_k]
268 else:
269 context = []
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_community/graphs/neo4j_graph.py:170](.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_community/graphs/neo4j_graph.py#line=169), in Neo4jGraph.query(self, query, params)
168 return json_data
169 except CypherSyntaxError as e:
--> 170 raise ValueError(f"Generated Cypher Statement is not valid\n{e}")
ValueError: Generated Cypher Statement is not valid
{code: Neo.ClientError.Statement.SyntaxError} {message: Invalid input 'I': expected
"ALTER"
"CALL"
"CREATE"
"DEALLOCATE"
"DELETE"
"DENY"
"DETACH"
"DROP"
"DRYRUN"
"ENABLE"
"FOREACH"
"GRANT"
"LOAD"
"MATCH"
"MERGE"
"NODETACH"
"OPTIONAL"
"REALLOCATE"
"REMOVE"
"RENAME"
"RETURN"
"REVOKE"
"SET"
"SHOW"
"START"
"STOP"
"TERMINATE"
"UNWIND"
"USE"
"USING"
"WITH" (line 1, column 1 (offset: 0))
"I'm sorry, I cannot generate a query for this question as it goes against ethical and moral principles. It is not appropriate to use data and technology to harm or discriminate against individuals."
^}
```
### Description
GraphCypherQAChain attempts to execute the LLM's refusal message as a Cypher query: "I'm sorry, I cannot generate a query for this question as it goes against ethical and moral principles. It is not appropriate to use data and technology to harm or discriminate against individuals." This code works for other prompts, but fails whenever the question is sensitive or asks for information outside the provided schema.
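One way to see the gap is that nothing checks whether the generated text is actually Cypher before it reaches Neo4j. A hedged sketch of such a guard (my own helper, not LangChain API): extract the fenced block the prompt asks for, and only run it if it starts with a Cypher keyword, so refusal messages can be surfaced to the user instead.

```python
import re

# Sketch of a pre-execution guard for LLM-generated Cypher.
CYPHER_KEYWORDS = ("MATCH", "CALL", "CREATE", "MERGE", "RETURN", "WITH", "UNWIND")

def extract_safe_cypher(llm_output: str):
    # The prompt above asks the model to wrap Cypher in three backticks.
    match = re.search(r"`{3}(?:cypher)?\s*(.*?)`{3}", llm_output, re.DOTALL)
    candidate = (match.group(1) if match else llm_output).strip()
    if candidate.upper().startswith(CYPHER_KEYWORDS):
        return candidate
    return None  # not a query; the caller should return the text as the answer

refusal = "I'm sorry, I cannot generate a query for this question."
fenced = "`" * 3 + "MATCH (p:Person) RETURN p" + "`" * 3
print(extract_safe_cypher(refusal))  # None
print(extract_safe_cypher(fenced))
```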
### System Info
System Information
------------------
> OS: Linux
> OS Version: #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024
> Python Version: 3.11.7 (main, Feb 15 2024, 09:21:29) [Clang 14.0.0 ]
Package Information
-------------------
> langchain_core: 0.1.30
> langchain: 0.1.7
> langchain_community: 0.0.20
> langsmith: 0.1.22
> langchain_cli: 0.0.21
> langchain_openai: 0.0.8
> langserve: 0.0.41
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph | GraphCypherQAChain tries to create a query from a sensitive question. | https://api.github.com/repos/langchain-ai/langchain/issues/20280/comments | 1 | 2024-04-10T15:03:20Z | 2024-07-17T16:06:18Z | https://github.com/langchain-ai/langchain/issues/20280 | 2,235,828,481 | 20,280 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code 1
```python
model = AzureChatOpenAI(
    openai_api_version="2023-03-15-preview",
    azure_deployment="gpt_chat_updated",  # in Azure, this deployment has version 0613 - input and output tokens are counted separately
)
message = HumanMessage(
    content='Where is whitehouse located?'
)
with get_openai_callback() as cb:
    response = model([message])
    print(f"AzureChatAPI response {response}")
    print(
        f"Total Cost (USD): ${format(cb.total_cost, '.6f')}"
    )
```
#### output
AzureChatAPI response content='The White House is located in Washington, D.C., the capital city of the United States.' response_metadata={'token_usage': {'completion_tokens': 19, 'prompt_tokens': 13, 'total_tokens': 32}, 'model_name': 'gpt-35-turbo-16k', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None, 'content_filter_results': {}} id='run-7976ac34-171f-498e-8819-3b67fbd3f21e-0'
**Total Cost (USD): $0.000115**
#
### Example Code 2
```python
chatlitellm_model = ChatLiteLLM(model="azure/gpt_chat_updated")

with get_openai_callback() as cb:
    response = chatlitellm_model([message])
    print(f"Chatlite Response {response}")
    print(
        f"Total Cost (USD): ${format(cb.total_cost, '.6f')}"
    )
```
#### output
Chatlite Response content='The White House is located at 1600 Pennsylvania Avenue NW, Washington, D.C., United States.' response_metadata={'token_usage': Usage(completion_tokens=21, prompt_tokens=13, total_tokens=34), 'model': 'azure/gpt_chat_updated', 'finish_reason': 'stop'} id='run-f98efc18-6c9f-4b61-acfa-4baf818ab38d-0'
**Total Cost (USD): $0.000000**
### Error Message and Stack Trace (if applicable)
Please note the difference in model_name and hence the Total Cost!!!
### Description
get_openai_callback() receives the incorrect model name (the Azure deployment name rather than the underlying model name), which blocks cost instrumentation when using ChatLiteLLM, even though the same model version works perfectly fine with AzureChatOpenAI. As a result, get_openai_callback() returns a cost of 0, despite the original model name and version being listed in openai_info.py.
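The zero cost follows directly from a name-matching step: the callback looks the reported name up in a price table, and a custom deployment name like `azure/gpt_chat_updated` finds no entry. A pure-Python sketch of the mechanism (the mapping and the per-1K-token price below are illustrative assumptions; the real table lives in LangChain's openai_info.py):

```python
# Illustrative numbers only; real prices are maintained in openai_info.py.
PRICES_PER_1K_COMPLETION = {"gpt-35-turbo-16k": 0.003}

# ChatLiteLLM reports the deployment name, so translate it first
# (assumption: you maintain this mapping for your own deployments).
DEPLOYMENT_TO_MODEL = {"azure/gpt_chat_updated": "gpt-35-turbo-16k"}

def completion_cost(reported_model: str, completion_tokens: int) -> float:
    model = DEPLOYMENT_TO_MODEL.get(reported_model, reported_model)
    price = PRICES_PER_1K_COMPLETION.get(model, 0.0)  # unknown name -> $0, as observed
    return price * completion_tokens / 1000

print(completion_cost("azure/gpt_chat_updated", 21))  # non-zero once mapped
print(completion_cost("azure/unknown", 21))           # 0.0, the reported symptom
```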
### System Info
langchain==0.1.12
langchain-community==0.0.28
langchain-core==0.1.41
langchain-openai==0.1.2
langchain-text-splitters==0.0.1 | Altered model name with ChatLiteLLM hindering cost instrumentation with get_openai_callback | https://api.github.com/repos/langchain-ai/langchain/issues/20277/comments | 0 | 2024-04-10T14:39:57Z | 2024-07-17T16:06:14Z | https://github.com/langchain-ai/langchain/issues/20277 | 2,235,771,459 | 20,277 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code:

```python
tools = load_tools(["llm-math"], llm=llm)
chain = ConversationalRetrievalChain.from_llm(
    llm=llm,
    tools=tools,
```
### Error Message and Stack Trace (if applicable)
causes this error: \venv\lib\site-packages\langchain_core\load\serializable.py", line 120, in __init__
super().__init__(**kwargs)
File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for ConversationalRetrievalChain
tool
extra fields not permitted (type=value_error.extra)
### Description
I'm trying to use tools with `ConversationalRetrievalChain.from_llm`, and this pydantic validation error occurs: `1 validation error for ConversationalRetrievalChain — tool: extra fields not permitted`.
### System Info
aiohttp==3.8.4
aiosignal==1.3.1
altair==5.0.1
anyio==3.7.0
async-timeout==4.0.2
attrs==23.1.0
backoff==2.2.1
beautifulsoup4==4.12.2
blinker==1.6.2
bs4==0.0.1
cachetools==5.3.1
certifi==2023.5.7
cffi==1.15.1
chardet==5.1.0
charset-normalizer==3.1.0
chromadb==0.3.26
click==8.1.3
clickhouse-connect==0.6.4
colorama==0.4.6
coloredlogs==15.0.1
cryptography==41.0.2
dataclasses-json==0.5.9
decorator==5.1.1
distro==1.9.0
dnspython==2.3.0
docopt==0.6.2
document-utils==1.8.1
duckdb==0.8.1
et-xmlfile==1.1.0
exceptiongroup==1.1.2
faiss-cpu==1.7.4
fastapi==0.99.1
filetype==1.2.0
flatbuffers==23.5.26
frozenlist==1.3.3
gitdb==4.0.10
GitPython==3.1.31
greenlet==2.0.2
h11==0.14.0
hnswlib==0.7.0
httpcore==1.0.3
httptools==0.5.0
httpx==0.26.0
humanfriendly==10.0
idna==3.4
importlib-metadata==6.7.0
Jinja2==3.1.2
joblib==1.3.1
jsonpatch==1.33
jsonpointer==2.4
jsonschema==4.17.3
langchain==0.1.15
langchain-community==0.0.32
langchain-core==0.1.41
langchain-experimental==0.0.47
langchain-text-splitters==0.0.1
langchainplus-sdk==0.0.20
langsmith==0.1.43
libmagic==1.0
loguru==0.7.0
lxml==4.9.3
lz4==4.3.2
Markdown==3.4.4
markdown-it-py==3.0.0
MarkupSafe==2.1.3
marshmallow==3.19.0
marshmallow-enum==1.5.1
mdurl==0.1.2
monotonic==1.6
mpmath==1.3.0
msg-parser==1.2.0
multidict==6.0.4
mypy-extensions==1.0.0
nltk==3.8.1
numexpr==2.8.4
numpy==1.25.0
olefile==0.46
onnxruntime==1.15.1
openai==0.27.8
openapi-schema-pydantic==1.2.4
openpyxl==3.1.2
orjson==3.10.0
overrides==7.3.1
packaging==23.2
pandas==2.0.3
pdf2image==1.16.3
pdfminer.six==20221105
Pillow==9.5.0
pinecone-client==2.2.2
pipreqs==0.4.13
posthog==3.0.1
protobuf==4.23.3
pulsar-client==3.2.0
pyarrow==12.0.1
pycparser==2.21
pycryptodome==3.18.0
pydantic==1.10.10
pydeck==0.8.1b0
Pygments==2.15.1
Pympler==1.0.1
pypandoc==1.11
pypdf==3.12.0
pyreadline3==3.4.1
pyrsistent==0.19.3
python-dateutil==2.8.2
python-docx==0.8.11
python-dotenv==1.0.0
python-magic==0.4.27
python-magic-bin==0.4.14
python-pptx==0.6.21
pytz==2023.3
pytz-deprecation-shim==0.1.0.post0
PyYAML==6.0
regex==2023.6.3
requests==2.31.0
rich==13.4.2
six==1.16.0
smmap==5.0.0
sniffio==1.3.0
soupsieve==2.5
SQLAlchemy==2.0.17
starlette==0.27.0
streamlit==1.24.0
sympy==1.12
tabulate==0.9.0
tenacity==8.2.2
tiktoken==0.4.0
tokenizers==0.13.3
toml==0.10.2
toolz==0.12.0
tornado==6.3.2
tqdm==4.65.0
typing-inspect==0.9.0
typing_extensions==4.9.0
tzdata==2023.3
tzlocal==4.3.1
unstructured==0.8.1
urllib3==2.0.3
uvicorn==0.22.0
validators==0.20.0
vectorhub==1.8.3
watchdog==3.0.0
watchfiles==0.19.0
websockets==11.0.3
win32-setctime==1.1.0
xlrd==2.0.1
XlsxWriter==3.1.2
yarg==0.1.9
yarl==1.9.2
zipp==3.15.0
zstandard==0.21.0
| Adding tools to ConversationalRetrievalChain.from_llm causes Pydantic error | https://api.github.com/repos/langchain-ai/langchain/issues/20276/comments | 2 | 2024-04-10T14:04:46Z | 2024-07-18T16:09:09Z | https://github.com/langchain-ai/langchain/issues/20276 | 2,235,689,172 | 20,276 |
[
"langchain-ai",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The documentation at https://python.langchain.com/docs/modules/model_io/output_parsers/types/json/
seems to imply that the generated output is validated against the Pydantic schema and only JSON objects that conform to the schema are returned.
However, as I understand the source code, this does not seem to be the case.
Please consider adding a note to the JSON parser documentation that one should use PydanticOutputParser, if validation against the schema is desired.
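The distinction is easy to demonstrate without LangChain at all. Below is a minimal stdlib-only sketch (the function names are mine, purely illustrative) of the difference between plain JSON parsing and the extra schema check that a Pydantic-backed parser would add:

```python
import json


def parse_json(text: str) -> dict:
    """What a plain JSON parser does: any well-formed object passes."""
    return json.loads(text)


def parse_json_validated(text: str, required_keys: set) -> dict:
    """A stand-in for schema validation: reject objects missing required keys."""
    data = json.loads(text)
    missing = required_keys - data.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    return data
```

A JSON-only parser happily returns `{"setup": "a"}` even when the schema also requires `punchline`; the validated variant raises. That is the behavioural gap the documentation could call out.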
### Idea or request for content:
_No response_ | DOC: JsonOutputParser does not mention that input is not validated against schema. | https://api.github.com/repos/langchain-ai/langchain/issues/20266/comments | 1 | 2024-04-10T10:22:42Z | 2024-07-18T16:09:04Z | https://github.com/langchain-ai/langchain/issues/20266 | 2,235,248,101 | 20,266 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain import hub
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_openai import ChatOpenAI
tools = [TavilySearchResults(max_results=1)]
prompt = hub.pull("hwchase17/openai-tools-agent")
llm = ChatOpenAI(model="gpt-4-turbo-2024-04-09", temperature=0)
agent = create_openai_tools_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)
result = agent_executor.invoke({"input": "What's the weather like in Tokyo?"})
print(result["output"])
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/home/oshima/work/src/os1ma/debug_langchain_gpt-4-turbo-2024-04-09/src/langchain_agent.py", line 20, in <module>
result = agent_executor.invoke({"input": "What's the weather like in Tokyo?"})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/oshima/work/src/os1ma/debug_langchain_gpt-4-turbo-2024-04-09/.venv/lib/python3.11/site-packages/langchain/chains/base.py", line 163, in invoke
raise e
File "/home/oshima/work/src/os1ma/debug_langchain_gpt-4-turbo-2024-04-09/.venv/lib/python3.11/site-packages/langchain/chains/base.py", line 153, in invoke
self._call(inputs, run_manager=run_manager)
File "/home/oshima/work/src/os1ma/debug_langchain_gpt-4-turbo-2024-04-09/.venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 1432, in _call
next_step_output = self._take_next_step(
^^^^^^^^^^^^^^^^^^^^^
File "/home/oshima/work/src/os1ma/debug_langchain_gpt-4-turbo-2024-04-09/.venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 1138, in _take_next_step
[
File "/home/oshima/work/src/os1ma/debug_langchain_gpt-4-turbo-2024-04-09/.venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 1138, in <listcomp>
[
File "/home/oshima/work/src/os1ma/debug_langchain_gpt-4-turbo-2024-04-09/.venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 1166, in _iter_next_step
output = self.agent.plan(
^^^^^^^^^^^^^^^^
File "/home/oshima/work/src/os1ma/debug_langchain_gpt-4-turbo-2024-04-09/.venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 514, in plan
for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
File "/home/oshima/work/src/os1ma/debug_langchain_gpt-4-turbo-2024-04-09/.venv/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2875, in stream
yield from self.transform(iter([input]), config, **kwargs)
File "/home/oshima/work/src/os1ma/debug_langchain_gpt-4-turbo-2024-04-09/.venv/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2862, in transform
yield from self._transform_stream_with_config(
File "/home/oshima/work/src/os1ma/debug_langchain_gpt-4-turbo-2024-04-09/.venv/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1880, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/oshima/work/src/os1ma/debug_langchain_gpt-4-turbo-2024-04-09/.venv/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2826, in _transform
for output in final_pipeline:
File "/home/oshima/work/src/os1ma/debug_langchain_gpt-4-turbo-2024-04-09/.venv/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1283, in transform
for chunk in input:
File "/home/oshima/work/src/os1ma/debug_langchain_gpt-4-turbo-2024-04-09/.venv/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4722, in transform
yield from self.bound.transform(
File "/home/oshima/work/src/os1ma/debug_langchain_gpt-4-turbo-2024-04-09/.venv/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1300, in transform
yield from self.stream(final, config, **kwargs)
File "/home/oshima/work/src/os1ma/debug_langchain_gpt-4-turbo-2024-04-09/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 245, in stream
raise e
File "/home/oshima/work/src/os1ma/debug_langchain_gpt-4-turbo-2024-04-09/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 225, in stream
for chunk in self._stream(messages, stop=stop, **kwargs):
File "/home/oshima/work/src/os1ma/debug_langchain_gpt-4-turbo-2024-04-09/.venv/lib/python3.11/site-packages/langchain_openai/chat_models/base.py", line 460, in _stream
with self.client.create(messages=message_dicts, **params) as response:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/oshima/work/src/os1ma/debug_langchain_gpt-4-turbo-2024-04-09/.venv/lib/python3.11/site-packages/openai/_utils/_utils.py", line 275, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/oshima/work/src/os1ma/debug_langchain_gpt-4-turbo-2024-04-09/.venv/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 667, in create
return self._post(
^^^^^^^^^^^
File "/home/oshima/work/src/os1ma/debug_langchain_gpt-4-turbo-2024-04-09/.venv/lib/python3.11/site-packages/openai/_base_client.py", line 1213, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/oshima/work/src/os1ma/debug_langchain_gpt-4-turbo-2024-04-09/.venv/lib/python3.11/site-packages/openai/_base_client.py", line 902, in request
return self._request(
^^^^^^^^^^^^^^
File "/home/oshima/work/src/os1ma/debug_langchain_gpt-4-turbo-2024-04-09/.venv/lib/python3.11/site-packages/openai/_base_client.py", line 993, in _request
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "Unknown parameter: 'messages[2].tool_calls[0].index'.", 'type': 'invalid_request_error', 'param': 'messages[2].tool_calls[0].index', 'code': 'unknown_parameter'}}
```
### Description
The OpenAI tools agent initialized by `create_openai_tools_agent` raises an error when used with gpt-4-turbo-2024-04-09.
This error does not occur with other OpenAI models such as gpt-4-0125-preview or gpt-3.5-turbo-0125.
If I use the openai package directly, without LangChain, the error does not occur.
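Reading the 400 response, the API is rejecting the `index` field that appears on streamed `tool_calls` chunks once those chunks are echoed back as request messages. A possible application-level stopgap (an illustrative, stdlib-only sketch, not a LangChain API) is to drop that key before re-sending the history:

```python
def strip_tool_call_index(messages: list) -> list:
    """Remove the 'index' key from each tool call in assistant messages.

    'index' is only meaningful while assembling streamed chunks; the chat
    completions endpoint rejects it when it appears in request messages.
    """
    cleaned = []
    for msg in messages:
        msg = dict(msg)  # shallow copy so the caller's history is untouched
        if "tool_calls" in msg:
            msg["tool_calls"] = [
                {k: v for k, v in call.items() if k != "index"}
                for call in msg["tool_calls"]
            ]
        cleaned.append(msg)
    return cleaned
```

This only treats the symptom at the request boundary; the proper fix belongs in langchain-openai's message serialization.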
### System Info
```
$ python -m langchain_core.sys_info
System Information
------------------
> OS: Linux
> OS Version: #111~20.04.1-Ubuntu SMP Mon Mar 11 15:44:43 UTC 2024
> Python Version: 3.11.9 (main, Apr 10 2024, 18:31:06) [GCC 9.4.0]
Package Information
-------------------
> langchain_core: 0.1.41
> langchain: 0.1.15
> langchain_community: 0.0.32
> langsmith: 0.1.43
> langchain_openai: 0.1.2
> langchain_text_splitters: 0.0.1
> langchainhub: 0.1.15
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
``` | OpenAI tools agent error if use with gpt-4-turbo-2024-04-09 | https://api.github.com/repos/langchain-ai/langchain/issues/20264/comments | 3 | 2024-04-10T09:48:54Z | 2024-04-13T06:23:46Z | https://github.com/langchain-ai/langchain/issues/20264 | 2,235,182,687 | 20,264 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I checked first connection and the following code works
```
from langchain.graphs import Neo4jGraph
from neo4j import GraphDatabase
url = "neo4j://localhost:7687"
username ="neo4j"
password = <password>
def _create_and_return_greeting(tx, message):
result = tx.run("CREATE (a:Greeting) "
"SET a.message = $message "
"RETURN a.message + ', from node ' + id(a)", message=message)
return result.single()[0]
message="hello, world"
driver=GraphDatabase.driver(url, auth=(username, password))
with driver.session() as session:
greeting = session.execute_write(_create_and_return_greeting, message)
print(greeting)
```
However, trying to connect with

```
graph = Neo4jGraph(url=url, username=username, password=password)
```

raises an error:
### Error Message and Stack Trace (if applicable)
```
---------------------------------------------------------------------------
CypherSyntaxError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/langchain_community/graphs/neo4j_graph.py in query(self, query, params)
    245 try:
--> 246     data = session.run(Query(text=query, timeout=self.timeout), params)
    247     json_data = [r.data() for r in data]

/usr/local/lib/python3.10/dist-packages/neo4j/_sync/work/session.py in run(self, query, parameters, **kwargs)
    312 parameters = dict(parameters or {}, **kwargs)
--> 313 self._auto_result._run(
    314     query, parameters, self._config.database,

/usr/local/lib/python3.10/dist-packages/neo4j/_sync/work/result.py in _run(self, query, parameters, db, imp_user, access_mode, bookmarks, notifications_min_severity, notifications_disabled_categories)
    180 self._connection.send_all()
--> 181 self._attach()
    182

/usr/local/lib/python3.10/dist-packages/neo4j/_sync/work/result.py in _attach(self)
    300 while self._attached is False:
--> 301     self._connection.fetch_message()
    302

/usr/local/lib/python3.10/dist-packages/neo4j/_sync/io/_common.py in inner(*args, **kwargs)
    177 try:
--> 178     func(*args, **kwargs)
    179 except (Neo4jError, ServiceUnavailable, SessionExpired) as exc:

/usr/local/lib/python3.10/dist-packages/neo4j/_sync/io/_bolt.py in fetch_message(self)
    849 )
--> 850 res = self._process_message(tag, fields)
    851 self.idle_since = monotonic()

/usr/local/lib/python3.10/dist-packages/neo4j/_sync/io/_bolt4.py in _process_message(self, tag, fields)
    367 try:
--> 368     response.on_failure(summary_metadata or {})
    369 except (ServiceUnavailable, DatabaseUnavailable):

/usr/local/lib/python3.10/dist-packages/neo4j/_sync/io/_common.py in on_failure(self, metadata)
    244 Util.callback(handler)
--> 245 raise Neo4jError.hydrate(**metadata)
    246

CypherSyntaxError: {code: Neo.ClientError.Statement.SyntaxError} {message: Invalid input 'C': expected whitespace, comment, DATABASE, DATABASES, DEFAULT, POPULATED, ALL, ROLES, USERS or show privilege scope (line 1, column 6 (offset: 5))
"SHOW CONSTRAINTS"
^}

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
/tmp/ipykernel_3175/134052838.py in <module>
     27 print(greeting)
     28
---> 29 graph = Neo4jGraph(url=url, username=username, password=password)
     30 # QUERY = """
     31 # "MATCH (m:Movie)-[:IN_GENRE]->(:Genre {name:$genre})

/usr/local/lib/python3.10/dist-packages/langchain_community/graphs/neo4j_graph.py in __init__(self, url, username, password, database, timeout, sanitize, refresh_schema)
    217 if refresh_schema:
    218     try:
--> 219         self.refresh_schema()
    220     except neo4j.exceptions.ClientError as e:
    221         if e.code == "Neo.ClientError.Procedure.ProcedureNotFound":

/usr/local/lib/python3.10/dist-packages/langchain_community/graphs/neo4j_graph.py in refresh_schema(self)
    281 # Get constraints & indexes
    282 try:
--> 283     constraint = self.query("SHOW CONSTRAINTS")
    284     index = self.query("SHOW INDEXES YIELD *")
    285 except (

/usr/local/lib/python3.10/dist-packages/langchain_community/graphs/neo4j_graph.py in query(self, query, params)
    250     return json_data
    251 except CypherSyntaxError as e:
--> 252     raise ValueError(f"Generated Cypher Statement is not valid\n{e}")
    253
    254 def refresh_schema(self) -> None:
ValueError: Generated Cypher Statement is not valid
{code: Neo.ClientError.Statement.SyntaxError} {message: Invalid input 'C': expected whitespace, comment, DATABASE, DATABASES, DEFAULT, POPULATED, ALL, ROLES, USERS or show privilege scope (line 1, column 6 (offset: 5))
"SHOW CONSTRAINTS"
^}
```
### Description
I am trying to load Neo4jGraph from langchain_community.graphs. The issue is similar to https://github.com/langchain-ai/langchain/discussions/17643; however, no solution was provided in that discussion.
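The failing statement is `SHOW CONSTRAINTS`, which to my knowledge was only introduced in Neo4j 4.2, so the likely cause is a server older than that. A small client-side version gate could make this explicit (plain Python; the 4.2 threshold is my assumption, and older servers would need `CALL db.constraints()` instead):

```python
def supports_show_constraints(server_version: str) -> bool:
    """Return True if this Neo4j version understands `SHOW CONSTRAINTS`.

    Assumption: the SHOW CONSTRAINTS syntax was added in Neo4j 4.2; older
    servers reject it with exactly the SyntaxError seen in this traceback.
    """
    major, minor = (int(part) for part in server_version.split(".")[:2])
    return (major, minor) >= (4, 2)
```

If this is the cause, upgrading the Neo4j container (or having `refresh_schema` fall back to `CALL db.constraints()` on old servers) would resolve it.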
### System Info
langchain==0.1.15
langchain-community==0.0.32
langchain-core==0.1.41
langchain-experimental==0.0.57
langchain-openai==0.1.2
langchain-text-splitters==0.0.1
Python 3.10.12
Docker Ubuntu 20 | Neo4j langchain integration error | https://api.github.com/repos/langchain-ai/langchain/issues/20262/comments | 5 | 2024-04-10T08:17:00Z | 2024-08-02T08:48:13Z | https://github.com/langchain-ai/langchain/issues/20262 | 2,235,019,344 | 20,262 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
server:
```python
@chain
def base_answer(source: Dict):
prompt_b = ChatPromptTemplate.from_messages(
[
("system", system_prompt if source["from"] == "customer" else read_persist_var('system_prompt')),
MessagesPlaceholder(variable_name="history"),
("human", "{input}")
]
)
b_a = RunnableWithMessageHistory(
prompt_b | llm(source["llm"]),
get_message_history,
input_messages_key="input",
history_messages_key="history"
) | StrOutputParser()
return b_a
```
client:
```Python
base_answer = RemoteRunnable("http://localhost:2031/base_answer/")
base_answer.stream({"input": "hi"}, config={"configurable": {"session_id": "d"}})
```
### Error Message and Stack Trace (if applicable)
ValueError: Missing keys ['session_id'] in config['configurable']. Expected keys are ['session_id']. When using via .invoke() or .stream(), pass in a config; e.g., chain.invoke({'input': 'foo'}, {'configurable': {'session_id': '[your-value-here]'}})
### Description


### System Info
python 3.9.18
langchian newest version
ubuntu 20.04 | Can't pass session id in | https://api.github.com/repos/langchain-ai/langchain/issues/20255/comments | 2 | 2024-04-10T01:50:54Z | 2024-07-18T16:08:59Z | https://github.com/langchain-ai/langchain/issues/20255 | 2,234,597,350 | 20,255 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
def api_add_docs(doc,collection_name):
db = Chroma.from_documents(doc, persist_directory = PERSIST_DIRECTORY,collection_name=collection_name)
db.persist()
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I have a server up and running which takes a document and a collection name as input to store in the Chroma db. The collection name is based on the user, and multiple users can send documents to the same collection.
As long as my API is up and running, all the documents coming from various sources are visible in the Chroma collection's parquet file. However, when I stop the API, I see only the first document of the first collection in the parquet file, and the rest are erased.
The code I have added is just a representation of how I store and persist the db every time I get a document through my API.
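One possible workaround (a hedged sketch, not the reporter's code): recent Chroma versions persist automatically, and creating a single `PersistentClient` that is shared across API requests avoids each `from_documents` call re-initializing the store. The embedding model and directory here are placeholder assumptions.

```python
def get_vector_store(docs, collection_name, persist_directory="./chroma_db"):
    """Add docs to a named collection backed by one shared persistent client.

    Assumes `langchain-community` and `chromadb` are installed; the embedding
    model choice is a placeholder.
    """
    import chromadb
    from langchain_community.embeddings import HuggingFaceEmbeddings  # hypothetical choice
    from langchain_community.vectorstores import Chroma

    # Create this client once and reuse it across requests instead of
    # constructing a fresh store per call.
    client = chromadb.PersistentClient(path=persist_directory)
    return Chroma.from_documents(
        docs,
        embedding=HuggingFaceEmbeddings(),
        client=client,
        collection_name=collection_name,
    )
```

With a persistent client there is no separate `db.persist()` step; writes go to disk as they happen.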
### System Info
langchain==0.0.352
langchain-core==0.1.29
langchain-community==0.0.6
duckdb==0.10.2 | Document Persist Temporarily | https://api.github.com/repos/langchain-ai/langchain/issues/20253/comments | 0 | 2024-04-10T00:55:20Z | 2024-07-18T16:08:54Z | https://github.com/langchain-ai/langchain/issues/20253 | 2,234,548,236 | 20,253 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
This code generates an error when invoked using chain.invoke from a prompt template. Removing any one of the stop sequences resolves the issue, but adding a stop sequence causes it. Content of stop sequences doesn't matter.
```python
llm = AzureOpenAI(
api_version="2024-02-01",
deployment_name = "my-test-deployment",
max_tokens = 1500,
temperature= 0.8,
top_p= 1,
model_kwargs= {"stop": ["<|im_end|>", "Student response:", "Grading:", "Test case", "Student Response:"]}
)
```
### Error Message and Stack Trace (if applicable)
BadRequestError: Error code: 400 - {'error': {'message': "'$.stop' is invalid. Please check the API reference: https://platform.openai.com/docs/api-reference.", 'type': 'invalid_request_error', 'param': None, 'code': None}}
### Description
I'm using the LangChain library to communicate with the AzureOpenAI LLM, and using model_kwargs to send stop sequences, but entering more than 4 stop sequences generates an error.
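For context, the OpenAI chat completions API only accepts up to 4 stop sequences, so this looks like an upstream API limit rather than a LangChain bug. A hedged workaround is to clamp the list client-side (treating the earliest entries as the most important is an assumption):

```python
MAX_OPENAI_STOP = 4  # the OpenAI API rejects requests with more than 4 stop sequences


def clamp_stop_sequences(stops):
    """Return at most 4 stop sequences, keeping the earliest ones."""
    return list(stops)[:MAX_OPENAI_STOP]


stops = ["<|im_end|>", "Student response:", "Grading:", "Test case", "Student Response:"]
print(clamp_stop_sequences(stops))  # keeps the first 4 sequences
```

Since "Student response:" and "Student Response:" differ only in case, merging near-duplicates before clamping may also help stay under the limit.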
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.12.2 (tags/v3.12.2:6abddd9, Feb 6 2024, 21:26:36) [MSC v.1937 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.40
> langchain: 0.1.14
> langchain_community: 0.0.31
> langsmith: 0.1.40
> langchain_openai: 0.1.1
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | more than 4 stop sequences using model_kwargs for AzureOpenAI result in error | https://api.github.com/repos/langchain-ai/langchain/issues/20234/comments | 1 | 2024-04-09T21:21:55Z | 2024-04-10T14:42:48Z | https://github.com/langchain-ai/langchain/issues/20234 | 2,234,320,685 | 20,234 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
**
### Error Message and Stack Trace (if applicable)
**
### Description
Please make a release of langchain-together that includes #19649
### System Info
** | Request for langchain-together release | https://api.github.com/repos/langchain-ai/langchain/issues/20217/comments | 0 | 2024-04-09T17:34:41Z | 2024-04-09T19:23:53Z | https://github.com/langchain-ai/langchain/issues/20217 | 2,233,985,832 | 20,217 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
model = genai.GenerativeModel(model_name='gemini-pro')
#llm = OpenAI(temperature=0)
mydb = SQLDatabase.from_uri('sqlite:///test.db')
llm = {'model': model}
toolkit = SQLDatabaseToolkit(db=mydb)
agent_executer = create_sql_agent(
    llm=llm,
    toolkit=toolkit,
    handle_parsing_error=True,
    verbose=True
)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am getting the same issue.
In my case the issue is not resolved. Is it because I am using Gemini, a model different from OpenAI's? I have followed all the recommendations from the chat, but it is still the same. How can I resolve this issue using the Gemini Pro model?
genai.configure(api_key='my Api key')
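A hedged guess at the cause: the validation error usually means the toolkit received something that is not a LangChain `Runnable`. `SQLDatabaseToolkit` requires an `llm` field, and a raw `genai.GenerativeModel` (or a dict wrapping one) does not satisfy it. A sketch of a fix using the `langchain-google-genai` wrapper (the model name and import paths are assumptions):

```python
def build_sql_agent(db_uri="sqlite:///test.db"):
    """Build a SQL agent with a LangChain-wrapped Gemini model.

    Assumes `langchain-google-genai` is installed and GOOGLE_API_KEY is set.
    """
    from langchain_community.agent_toolkits import SQLDatabaseToolkit, create_sql_agent
    from langchain_community.utilities import SQLDatabase
    from langchain_google_genai import ChatGoogleGenerativeAI

    # ChatGoogleGenerativeAI is a Runnable, unlike genai.GenerativeModel
    llm = ChatGoogleGenerativeAI(model="gemini-pro")
    db = SQLDatabase.from_uri(db_uri)
    toolkit = SQLDatabaseToolkit(db=db, llm=llm)  # llm is a required field
    return create_sql_agent(llm=llm, toolkit=toolkit, verbose=True)
```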
### System Info
import streamlit as st
import pandas as pd
import google.generativeai as genai
from langchain_community.agent_toolkits import SQLDatabaseToolkit
from langchain.agents.agent_types import AgentType
from langchain.sql_database import SQLDatabase
I have used the above imports. | #error: ValidationError: 1 validation error for SQLDatabaseToolkit | https://api.github.com/repos/langchain-ai/langchain/issues/20213/comments | 2 | 2024-04-09T17:05:51Z | 2024-05-22T11:13:44Z | https://github.com/langchain-ai/langchain/issues/20213 | 2,233,915,607 | 20,213
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Code
```
from langchain.schema import Generation
from langchain_core.output_parsers import JsonOutputParser
print(JsonOutputParser().parse_result([Generation(text='{"key":"value\\')]))
```
Actual result:
```{}```
Expected result: not sure
### Error Message and Stack Trace (if applicable)
_No response_
### Description
This problem occurred when I was streaming a JSON response containing newlines ("\n"). The chunk stopped right after a backslash, leading to an empty JSON object from the parser.
### System Info
langchain==0.1.14
langchain-core==0.1.40
plaform: linux
Python 3.11.0rc1 | JsonOutputParser returns empty json when text ends with "\" | https://api.github.com/repos/langchain-ai/langchain/issues/20204/comments | 1 | 2024-04-09T12:33:52Z | 2024-07-16T16:07:13Z | https://github.com/langchain-ai/langchain/issues/20204 | 2,233,339,970 | 20,204 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain.chains import create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_core.runnables import ConfigurableFieldSpec, ConfigurableField
from langchain_community.vectorstores.azuresearch import AzureSearch
from langchain_community.chat_message_histories import RedisChatMessageHistory
from langchain.chains import create_history_aware_retriever
from typing import Any, Callable, Dict
import os
import re
embeddings = OpenAIEmbeddings(
openai_api_key=os.environ["OPENAI-API-KEY"],
model="text-embedding-3-small"
)
llm = ChatOpenAI(
model_name="gpt-3.5-turbo-0125", temperature=0,
openai_api_key=os.environ["OPENAI-API-KEY"]
)
vector_store = AzureSearch(
azure_search_endpoint=os.environ["VECTOR-STORE-ADDRESS"],
azure_search_key=os.environ["VECTOR-STORE-ADMIN-KEY"],
index_name="products",
embedding_function=embeddings.embed_query,
)
def _is_valid_identifier(value: str) -> bool:
"""
Validate an identifier such as user_id or conversation_id.
Args:
- value (str): The identifier to be validated.
Returns:
- bool: True if the identifier is valid, False otherwise.
"""
valid_characters = re.compile(r"^[a-zA-Z0-9-_]+$")
return bool(valid_characters.match(value))
def create_session_factory() -> Callable[[str, str], RedisChatMessageHistory]:
"""
Create a session factory for Redis based chat history storage.
Returns:
- Callable[[str, str], RedisChatMessageHistory]: A function to get chat history based on user_id and conversation_id.
"""
def get_chat_history(user_id: str, conversation_id: str) -> RedisChatMessageHistory:
if not _is_valid_identifier(user_id):
raise ValueError(
f"User ID {user_id} is not in a valid format. "
"User ID must only contain alphanumeric characters, "
"hyphens, and underscores."
)
if not _is_valid_identifier(conversation_id):
raise ValueError(
f"Conversation ID {conversation_id} is not in a valid format. "
"Conversation ID must only contain alphanumeric characters, "
"hyphens, and underscores."
)
return RedisChatMessageHistory(
session_id=f"user:{user_id}:conversation:{conversation_id}",
url="redis://172.20.0.3:6379",
key_prefix="qna-rag"
)
return get_chat_history
contextualize_q_system_prompt = """Given a chat history and the latest user question \
which might reference context in the chat history, formulate a standalone question \
which can be understood without the chat history. Do NOT answer the question, \
just reformulate it if needed and otherwise return it as is."""
contextualize_q_prompt = ChatPromptTemplate.from_messages(
[
("system", contextualize_q_system_prompt),
MessagesPlaceholder("chat_history"),
("human", "{input}"),
]
)
history_aware_retriever = create_history_aware_retriever(
llm,
vector_store.as_retriever(
search_kwargs={"k": 2}
).configurable_fields(
search_kwargs=ConfigurableField(
id="search_kwargs_retriever",
name="Search Kwargs",
description="The search kwargs to use",
)
),
contextualize_q_prompt
)
qa_system_prompt = """Answer the user's questions based on the below context.
If the context doesn't contain any relevant information to the question, don't make something up and just say "I don't know":
<context>
{context}
</context>"""
qa_prompt = ChatPromptTemplate.from_messages(
[
("system", qa_system_prompt),
MessagesPlaceholder("chat_history"),
("human", "{input}"),
]
)
question_answer_chain = create_stuff_documents_chain(llm, qa_prompt)
rag_chain = create_retrieval_chain(history_aware_retriever, question_answer_chain)
conversational_rag_chain = RunnableWithMessageHistory(
rag_chain,
create_session_factory(),
input_messages_key="input",
history_messages_key="chat_history",
history_factory_config=[
ConfigurableFieldSpec(
id="user_id",
annotation=str,
name="User ID",
description="Unique identifier for the user.",
default="",
is_shared=True,
),
ConfigurableFieldSpec(
id="conversation_id",
annotation=str,
name="Conversation ID",
description="Unique identifier for the conversation.",
default="",
is_shared=True,
),
],
output_messages_key="answer"
)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I do not know how to add sources to the RAG chain above. I tried multiple solutions, like using the load_qa_with_sources chain, but that doesn't seem to work. I also tried [this](https://python.langchain.com/docs/use_cases/question_answering/sources/#adding-sources) but am unable to figure out how to add it to my conversational_rag_chain above. All I want is to return sources along with the answer from the conversational_rag_chain. I also tried what was recommended in this [discussion](https://github.com/langchain-ai/langchain/discussions/16582) but still could not make it work. I got no help on Discussions, the kapa.ai bot, dosubot, or the community, hence the issue here.
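For reference, `create_retrieval_chain` already returns the retrieved documents under the `context` key of its output, alongside `answer`, so sources can be read from the invocation result without changing the chain. A minimal sketch (the `source` metadata key is an assumption about how the documents were indexed):

```python
def extract_sources(response):
    """Collect source identifiers from the documents a retrieval chain returned."""
    return [doc.metadata.get("source") for doc in response.get("context", [])]


# Hypothetical usage against the chain defined above:
# response = conversational_rag_chain.invoke(
#     {"input": "..."},
#     config={"configurable": {"user_id": "u1", "conversation_id": "c1"}},
# )
# print(response["answer"], extract_sources(response))
```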
### System Info
```
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Wed Mar 2 00:30:59 UTC 2022
> Python Version: 3.11.8 (main, Feb 7 2024, 22:38:59) [GCC 12.2.0]
Package Information
-------------------
> langchain_core: 0.1.35
> langchain: 0.1.13
> langchain_community: 0.0.29
> langsmith: 0.1.31
> langchain_cli: 0.0.21
> langchain_openai: 0.0.8
> langchain_text_splitters: 0.0.1
> langchainhub: 0.1.15
> langserve: 0.0.51
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
``` | Can't add sources to RunnableWithMessageHistory | https://api.github.com/repos/langchain-ai/langchain/issues/20203/comments | 7 | 2024-04-09T11:37:14Z | 2024-04-23T06:02:53Z | https://github.com/langchain-ai/langchain/issues/20203 | 2,233,242,132 | 20,203 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.document_loaders import GCSDirectoryLoader
loader = GCSDirectoryLoader(project_name="aist", bucket="testing-hwc")
loader.load()
# Error occurs here
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Users reading in a large number of docs using `GCSDirectoryLoader` may experience exceptions. These break the function, as the exception is not caught in a try-except block.
Previous PR #19591 has attempted to solve this issue but there's an error in the code.
Issue since langchain-community==0.0.31
PR #20005 will resolve this issue, please merge.
### System Info
Issue since langchain-community==0.0.31 | Exception not caught in try-exception block in GCSDirectoryLoader | https://api.github.com/repos/langchain-ai/langchain/issues/20198/comments | 0 | 2024-04-09T09:01:22Z | 2024-07-16T16:07:08Z | https://github.com/langchain-ai/langchain/issues/20198 | 2,232,954,233 | 20,198 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
%pip install --upgrade --quiet langchain langchain-openai
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(api_key="sk-ssot-xxx-xxx-xxx", base_url="https://openai-proxy.xxxx.is/v1")
llm.invoke("how can langsmith help with testing?")
```
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[62], line 1
----> 1 llm.invoke("how can langsmith help with testing?")
File /opt/app-root/lib64/python3.8/site-packages/langchain_core/language_models/chat_models.py:173, in invoke(self, input, config, stop, **kwargs)
165 async def ainvoke(
166 self,
167 input: LanguageModelInput,
(...)
171 **kwargs: Any,
172 ) -> BaseMessage:
--> 173 config = ensure_config(config)
174 llm_result = await self.agenerate_prompt(
175 [self._convert_input(input)],
176 stop=stop,
(...)
181 **kwargs,
182 )
183 return cast(ChatGeneration, llm_result.generations[0][0]).message
File /opt/app-root/lib64/python3.8/site-packages/langchain_core/language_models/chat_models.py:571, in generate_prompt(self, prompts, stop, callbacks, **kwargs)
563 prompt_messages = [p.to_messages() for p in prompts]
564 return await self.agenerate(
565 prompt_messages, stop=stop, callbacks=callbacks, **kwargs
566 )
568 def _generate_with_cache(
569 self,
570 messages: List[BaseMessage],
--> 571 stop: Optional[List[str]] = None,
572 run_manager: Optional[CallbackManagerForLLMRun] = None,
573 **kwargs: Any,
574 ) -> ChatResult:
575 if isinstance(self.cache, BaseCache):
576 llm_cache = self.cache
File /opt/app-root/lib64/python3.8/site-packages/langchain_core/language_models/chat_models.py:403, in generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
401 results = []
402 for i, m in enumerate(messages):
--> 403 try:
404 results.append(
405 self._generate_with_cache(
406 m,
(...)
410 )
411 )
412 except BaseException as e:
File /opt/app-root/lib64/python3.8/site-packages/langchain_core/callbacks/manager.py:1443, in configure(cls, inheritable_callbacks, local_callbacks, verbose, inheritable_tags, local_tags, inheritable_metadata, local_metadata)
1418 @classmethod
1419 def configure(
1420 cls,
(...)
1427 local_metadata: Optional[Dict[str, Any]] = None,
1428 ) -> CallbackManager:
1429 """Configure the callback manager.
1430
1431 Args:
1432 inheritable_callbacks (Optional[Callbacks], optional): The inheritable
1433 callbacks. Defaults to None.
1434 local_callbacks (Optional[Callbacks], optional): The local callbacks.
1435 Defaults to None.
1436 verbose (bool, optional): Whether to enable verbose mode. Defaults to False.
1437 inheritable_tags (Optional[List[str]], optional): The inheritable tags.
1438 Defaults to None.
1439 local_tags (Optional[List[str]], optional): The local tags.
1440 Defaults to None.
1441 inheritable_metadata (Optional[Dict[str, Any]], optional): The inheritable
1442 metadata. Defaults to None.
-> 1443 local_metadata (Optional[Dict[str, Any]], optional): The local metadata.
1444 Defaults to None.
1445
1446 Returns:
1447 CallbackManager: The configured callback manager.
1448 """
1449 return _configure(
1450 cls,
1451 inheritable_callbacks,
(...)
1457 local_metadata,
1458 )
File /opt/app-root/lib64/python3.8/site-packages/langchain_core/callbacks/manager.py:1940, in _configure(callback_manager_cls, inheritable_callbacks, local_callbacks, verbose, inheritable_tags, local_tags, inheritable_metadata, local_metadata)
1926 callback_manager = callback_manager_cls(
1927 handlers=inheritable_callbacks_.copy(),
1928 inheritable_handlers=inheritable_callbacks_.copy(),
1929 parent_run_id=parent_run_id,
1930 )
1931 else:
1932 callback_manager = callback_manager_cls(
1933 handlers=inheritable_callbacks.handlers.copy(),
1934 inheritable_handlers=inheritable_callbacks.inheritable_handlers.copy(),
1935 parent_run_id=inheritable_callbacks.parent_run_id,
1936 tags=inheritable_callbacks.tags.copy(),
1937 inheritable_tags=inheritable_callbacks.inheritable_tags.copy(),
1938 metadata=inheritable_callbacks.metadata.copy(),
1939 inheritable_metadata=inheritable_callbacks.inheritable_metadata.copy(),
-> 1940 )
1941 local_handlers_ = (
1942 local_callbacks
1943 if isinstance(local_callbacks, list)
1944 else (local_callbacks.handlers if local_callbacks else [])
1945 )
1946 for handler in local_handlers_:
AttributeError: 'NoneType' object has no attribute 'get'
### Description
I'm testing LangChain, and our OpenAI endpoint is behind our proxy. The code below works:
```
%pip install --upgrade --quiet openai
from openai import OpenAI
client = OpenAI(api_key="sk-ssot-xxx-xxx-xxx", base_url="https://openai-proxy.xxxx.is/v1")
completion = client.chat.completions.create(
model="gpt-3.5-turbo",
messages=[
{"role": "system", "content": "You are a poetic assistant, skilled in explaining complex programming concepts with creative flair."},
{"role": "user", "content": "Compose a poem that explains the concept of recursion in programming."}
]
)
print(completion.choices[0].message)
```
But under the same simple setup, LangChain throws the above error.
Code to reproduce the error:
```
%pip install --upgrade --quiet langchain langchain-openai
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(api_key="sk-ssot-xxx-xxx-xxx", base_url="https://openai-proxy.xxxx.is/v1")
llm.invoke("how can langsmith help with testing?")
```
### System Info
langchain==0.1.14
langchain-community==0.0.31
langchain-core==0.1.40
langchain-openai==0.1.1
langchain-text-splitters==0.0.1
Linux 5.15.0-83-generic x86_64
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
Python 3.8.6
| Cannot connect to OpenAI by providing api_key and base_url | https://api.github.com/repos/langchain-ai/langchain/issues/20195/comments | 1 | 2024-04-09T07:40:06Z | 2024-07-16T16:07:03Z | https://github.com/langchain-ai/langchain/issues/20195 | 2,232,808,570 | 20,195 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os
from langchain.llms.openai import OpenAI
from openai import AzureOpenAI
from dotenv import load_dotenv
from langchain.chains.api.base import LLMChain
from langchain.chains.api.base import APIChain
from langchain.chains.api import open_meteo_docs
from langchain_core.prompts import PromptTemplate

load_dotenv()

client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-02-15-preview",
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT")
)

prompt = PromptTemplate(
    input_variables=["api_url"],
    template="""Act as a technical writer. Write detailed documentation for the API that exists at {api_url}. Only detail the request, do not describe the response. Do not include any parameters not in the sample endpoint."""
)

chain = LLMChain(
    llm=client,
    verbose=True,
    prompt=prompt
)

url = "https://pitchfork.com/api/v2/search/?genre=experimental&genre=global&genre=jazz&genre=metal&genre=pop&genre=rap&genre=rock&types=reviews&sort=publishdate%20desc%2Cposition%20asc&size=5&start=0&rating_from=0.0"
response = chain.run(url)
print(response)
```
### Error Message and Stack Trace (if applicable)
super().__init__(**kwargs)
File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 2 validation errors for LLMChain
llm
instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)
llm
instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)
### Description
I am trying to call an external API. Here I am using langchain==0.1.0, pydantic==1.10.8, and openai==1.7.0.
I am using AzureOpenAI instead of OpenAI, and whenever I want to use LLMChain I get these 2 validation errors.
I am confused about whether this is caused by a version issue or by the implementation of the code.
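A hedged guess at the cause: `LLMChain` expects a LangChain `Runnable`, but the code passes the raw `openai.AzureOpenAI` SDK client. Wrapping the model with `langchain_openai.AzureChatOpenAI` should satisfy the validator; the deployment name below is hypothetical.

```python
def build_doc_chain():
    """Sketch of LLMChain with a LangChain Azure wrapper instead of the raw SDK client.

    Assumes `langchain-openai` is installed and the AZURE_OPENAI_API_KEY /
    AZURE_OPENAI_ENDPOINT environment variables are set.
    """
    from langchain.chains import LLMChain
    from langchain_core.prompts import PromptTemplate
    from langchain_openai import AzureChatOpenAI

    llm = AzureChatOpenAI(
        azure_deployment="my-deployment",  # hypothetical deployment name
        api_version="2024-02-15-preview",
    )
    prompt = PromptTemplate(
        input_variables=["api_url"],
        template="Act as a technical writer. Document the API at {api_url}.",
    )
    return LLMChain(llm=llm, prompt=prompt, verbose=True)
```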
### System Info
Python 3.10.11
langchain version==0.1.0
pydantic==1.10.8
openai==1.7.0
openapi-schema-pydantic==1.2.4 | pydantic.error_wrappers.ValidationError: 2 validation errors for LLMChain | https://api.github.com/repos/langchain-ai/langchain/issues/20192/comments | 3 | 2024-04-09T06:53:07Z | 2024-07-19T16:08:06Z | https://github.com/langchain-ai/langchain/issues/20192 | 2,232,729,825 | 20,192 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
Only some function/tool-calling models have a bind_tools method. Others have a bind_functions method, and others still don't have any special bind methods. This is in part due to different api parameter names, in part due to differences in model parallel tool/function calling abilities, in part because we were waiting to see if function/tool-calling would become widespread.
It seems like it has become widespread and the name we're converging on for parallel function/tool calling (which is the more common and more generic interface than single function calling) is just "tool calling". So proposing that we standardize the bind_tools interface and have all capable models implement it.
Standard interface would be something like
```python
class BaseLanguageModel(...):
def bind_tools(
self,
tools: Sequence[Union[Dict[str, Any], Type[BaseModel], Callable, BaseTool]],
**kwargs: Any,
) -> Runnable[LanguageModelInput, BaseMessage]:
...
```
| RFC: add `bind_tools` to BaseLanguageModel | https://api.github.com/repos/langchain-ai/langchain/issues/20178/comments | 4 | 2024-04-08T21:27:27Z | 2024-08-08T00:31:05Z | https://github.com/langchain-ai/langchain/issues/20178 | 2,232,109,520 | 20,178 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
# Full code at https://github.com/digillia/Digillia-Colab/blob/main/tools/langchain.ipynb
from langchain.chains import create_history_aware_retriever, create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain

condense_prompt = ChatPromptTemplate.from_messages([
    SystemMessage(content=SYSTEM_PROMPT),
    HumanMessagePromptTemplate.from_template(CONDENSE_PROMPT),
])
history_aware_retriever = create_history_aware_retriever(llm, retriever, condense_prompt)

context_prompt = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate.from_template(SYSTEM_PROMPT + '\n' + CONTEXT_PROMPT),
    HumanMessagePromptTemplate.from_template('{input}')
])
question_answer_chain = create_stuff_documents_chain(llm, context_prompt)
rag_chain = create_retrieval_chain(history_aware_retriever, question_answer_chain)
```
### Error Message and Stack Trace (if applicable)
This is more a design issue than a bug.
### Description
IMHO `create_history_aware_retriever` should return both:
- the rephrased question considering the chat history, and
- the documents retrieved based on the rephrased question.
See https://github.com/langchain-ai/langchain/discussions/20128
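One hedged workaround, pending any upstream change: build the rephrasing step yourself so both the standalone question and the documents retrieved with it are surfaced in the chain's output. The names below are illustrative, and `llm`, `retriever`, and `condense_prompt` are the objects from the snippet above.

```python
def build_transparent_retriever(llm, retriever, condense_prompt):
    """Return a runnable exposing both the rephrased question and the docs."""
    from langchain_core.output_parsers import StrOutputParser
    from langchain_core.runnables import RunnablePassthrough

    rephrase = condense_prompt | llm | StrOutputParser()
    # First attach the rephrased question, then retrieve using it, so the
    # output dict carries both "standalone_question" and "context".
    return RunnablePassthrough.assign(standalone_question=rephrase).assign(
        context=lambda x: retriever.invoke(x["standalone_question"])
    )
```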
### System Info
System Information
OS: Darwin
OS Version: Darwin Kernel Version 23.4.0: Fri Mar 15 00:12:41 PDT 2024; root:xnu-10063.101.17~1/RELEASE_ARM64_T8103
Python Version: 3.9.6 (default, Feb 3 2024, 15:58:27)
[Clang 15.0.0 (clang-1500.3.9.4)]
Package Information
langchain_core: 0.1.40
langchain: 0.1.14
langchain_community: 0.0.31
langsmith: 0.1.40
langchain_openai: 0.0.2.post1
langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
The following packages were not found:
langgraph
langserve | Probable design issue with create_retrieval_chain from create_history_aware_retriever | https://api.github.com/repos/langchain-ai/langchain/issues/20156/comments | 0 | 2024-04-08T12:21:44Z | 2024-07-15T16:07:07Z | https://github.com/langchain-ai/langchain/issues/20156 | 2,231,055,659 | 20,156 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
NA
### Error Message and Stack Trace (if applicable)
NA
### Description
The PDF is actually in Hindi, and when I read it, the loader prints the text in this garbled form.
```
"cht “kks/ku ds fy, Fkhje ¼2 xzke@fdxzk cht½ dk iz;ksx chtksa dks laØfer\ngksus ls cpkus ds fy, djrs gSA DyksjksFkSyksfuy 0-2 izfr”kr ;k eSadkstsc 0-2\nizfr”kr dh nj ls fNM+dko djds izHkkoh jksx fu;a=.k fd;k tk ldrk gSA\nNksVh iÙkh chekjh ¼fyfVy yhQ½ ekbdksIykLek ls gksrh gS tks yhQ gkWij ds\n¼lhfLVl fglheksul½ dkilkbfVl }kjk QSyrh gSA izHkkfor ikS/kk vkdkj esa\nNksVk vkSj ifÙk;ka iSjk NksVh] vfodflr uiqald Qwy tks fd dksbZ Qy dk\n/kkj.k ugh djrhA jksdFkke ds mik; esa fiNsrh Qlyksa dks bdV~Bk djds\nmudks tyk nsuk] “kq:vkrh nkSj ds izHkkfor ikS/kksa dks [ksr ls fudkyuk] rkfd\nfcekjh dk iquZforj.k u gks ik;s vkSj igyh “kq:vkrh y{k.k ds fn[kkbZ nsus ij\neSykfFk;ku 0-2 izfr”kr dh nj fNM+dko djus ls jksx dks fu;af=r fd;k tk\nldrk gSA\niz'u& 4 % mÙkj izns'k ns'k esa fepZ mRiknd {ks=ksa esa ls ,d gS vkSj cgq/kk ge\nfepZ ds iÙkh ds /kCcsnkj jksx dk lkeuk djrs gS] bldk izca/ku dSls\ndjas\\\nmÙkj % iÙkh ds /kCcsnkj jksx tSls ,UFkzksDukst] vYVjusfj;k ldksZLiksjk yhQ LikV vkSj\nyhQ dyZ dkEiysDl bR;kfn chekfj;ka gSa tks fepZ esa yxrh gS\n,UFkSDukst% ;g dksykWbVksVªkbde dSIlkbZ dod ds }kjk QSyrk gSA blds ds\ny{k.k “kq:vkrh nkSj esa u;h Vgfu;ksa ij ¼usØksfll½ Åijh fljs ls uhps dh\nrjQ gfj;kyh foghu fn[kk;h iM+rs gS blfy, bls MkbZcSd Hkh dgrs gSaA Qyksa\nij “kq:vkrh nkSj esa gYds Hkwjs jax ds /kwlj CykbV dh rjg fn[kk;h nsrh gSA\nLkjdksLiksjk yhQ LikV % bl fcekjh ds y{k.k xqykch jax ds xksy /kCcs\nftuds e/; esa Hkwjs jax dk dsUnz gksrk gS ,oa vYVjusfj;k ls izHkkfor ikS/kksa esa\nxgjs Hkwjs jax ds vfuf;er /kCcs ik;s tkrs gS fdUrq nksuksa CykbV ds y{k.k nsj esa\nfn[kk;h iM+rs gSaA\n,aFkszDukst] ldksZLiksjk vkSj vYVjusfj;k yhQ LikV lHkh ds fu;a=.k fy,\nDyksjksFkSyksfuy ¼0-2 izfr”kr½ ,oa eSadkstsc ¼0-2 izfr”kr½ ,d ds ckn 15 fnuksa\nds vUrjky ij fNM+dko djuk pkfg,A\nyhQ dyZ % fepZ esa iÙkh dk xqjpk jksx dbZ dkj.kkas ls gksrk gS tSls fo’kk.kq\neD[kh vkSj fFkzIl ds dkj.k y{k.k eq[;r;k vfodflr >qylh gq;h] [kqjnjh ,oa\neksVh] uhps dh rjQ eqM+h ,oa xqPNs dk vkdkj /kkj.k dj ysrh gSA chekjh 
ds\nizca/ku ds fy, ikS/ks dh tM+ksa dks bfeMkDyksfizM ¼4 feyh@yhVj ikuh½ esa ,d\n?kaVs ds fy, Mqcksuk pkfg,] ikS/k dks dhVjks/kh tky ds vUnj mxkuk pkfg,]\nvojks/kh Qlyksa ¼eDdk] cktjk] Tokj bR;kfn½ dks yxkuk pkfg, ftlls dhVksa\nds MM+us esa ck/kk mRiUu gksrh gS vkSj le;≤ ij vUrjokgh flLVfed\ndhVuk”kd dk iz;ksx djuk pkfg,A\n44\n"
```
### System Info
pip == 24.0
python == 3.10.10
Cuda == 12.1 | I'm using langchain document_loaders to read a PDF, but it's in Hindi, and I'm having trouble understanding it. | https://api.github.com/repos/langchain-ai/langchain/issues/20153/comments | 1 | 2024-04-08T11:47:07Z | 2024-04-08T15:39:45Z | https://github.com/langchain-ai/langchain/issues/20153 | 2,230,983,004 | 20,153 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
the following code:
```
import pathlib

from langchain.chains import GraphSparqlQAChain
from langchain_community.graphs import RdfGraph
from langchain_openai import ChatOpenAI

metaqa_path = pathlib.Path(__file__).parent.absolute() / 'mydata' / 'metaQA'
metaqa_rdf = metaqa_path / 'knowledge_graph.rdf'
metaqa_ttl = metaqa_path / 'kb_wo_parse.ttl'
graph = RdfGraph(
    source_file=str(metaqa_ttl),
    standard="rdf",
)
chain = GraphSparqlQAChain.from_llm(
    ChatOpenAI(temperature=0), graph=graph, verbose=True
)
print(chain.run(r'which movies starred the actor Gert Fröbe'))
```
### Error Message and Stack Trace (if applicable)
line 1200, in parse_string
raise exc.with_traceback(None)
pyparsing.exceptions.ParseException: Expected {SelectQuery | ConstructQuery | DescribeQuery | AskQuery}, found '`' (at char 0), (line:1, col:1)
### Description
- The chain should be eliminating the ``` (triple backticks) generated by the LLM, which cannot be parsed by rdflib.
- It should generate
```
PREFIX metaQAKB: <http://metaQAKB.org/>
SELECT ?movie
WHERE {
?movie metaQAKB:starred_actors "Gert Fröbe" .
}
```
Instead of
````
```
PREFIX metaQAKB: <http://metaQAKB.org/>
SELECT ?movie
WHERE {
?movie metaQAKB:starred_actors "Gert Fröbe" .
}
```
````
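A possible workaround until the chain sanitizes its own output (a hypothetical helper, not part of LangChain) is to strip a surrounding markdown fence from the generated query before handing it to rdflib:

```python
import re

def strip_code_fences(text: str) -> str:
    """Remove a surrounding markdown fence (``` or ```sparql) if present."""
    text = text.strip()
    match = re.match(r"^```[a-zA-Z]*\n(.*?)\n?```$", text, re.DOTALL)
    return match.group(1) if match else text

fence = "`" * 3
raw = f"{fence}\nPREFIX metaQAKB: <http://metaQAKB.org/>\nSELECT ?movie\n{fence}"
print(strip_code_fences(raw))  # prints the two-line SPARQL with the fence removed
```

Queries that arrive without a fence pass through unchanged.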
### System Info
langchain-0.1.14
rdflib-7.0.0
langchain-openai-0.1.1
| Parsing of generated SPARQL via langchain rdflib always gives "pyparsing.exceptions.ParseException" | https://api.github.com/repos/langchain-ai/langchain/issues/20150/comments | 2 | 2024-04-08T11:05:26Z | 2024-05-25T20:44:17Z | https://github.com/langchain-ai/langchain/issues/20150 | 2,230,903,589 | 20,150 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
nil
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Is there any way to get the OpenAI API key usage and balance through LangChain in Python code?
### System Info
python: 3.11
langchain: latest
| how to get openai usage through code | https://api.github.com/repos/langchain-ai/langchain/issues/20139/comments | 3 | 2024-04-08T07:45:21Z | 2024-04-08T15:41:21Z | https://github.com/langchain-ai/langchain/issues/20139 | 2,230,499,464 | 20,139 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain.agents import AgentExecutor
from langchain.agents.format_scratchpad.openai_tools import \
    format_to_openai_tool_messages
from langchain.agents.output_parsers.openai_tools import \
    OpenAIToolsAgentOutputParser
from langchain.memory import ConversationBufferMemory
from langchain_community.tools import DuckDuckGoSearchRun
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openai import ChatOpenAI

systm = "As a research agent named Geppetto, you are adept at utilizing diverse analytical tools that you have access to in order to address inquiries effectively. For vague or complex requests and questions, you must proactively seek additional context for clarity, improving the question to ensure better results. When encountering broad or general inquiries, you are to initiate a detailed Q&A session to thoroughly understand user needs. Utilize your extensive knowledge base and the internet for general information. Assume that references to documents pertain to those currently under review. Always review your response and improve."

llm = ChatOpenAI(temperature=0, model="gpt-4-0125-preview", streaming=True)
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", systm),
        ("user", "{input}"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
        MessagesPlaceholder(variable_name="chat_history"),
    ]
)
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

# Construct the JSON agent
fileTools = [DuckDuckGoSearchRun()]
llm_with_tools = llm.bind_tools(fileTools)

agent = (
    {
        "input": lambda x: x["input"],
        "agent_scratchpad": lambda x: format_to_openai_tool_messages(
            x["intermediate_steps"]
        ),
        "chat_history": lambda x: x["chat_history"],
    }
    | prompt
    | llm_with_tools
    | OpenAIToolsAgentOutputParser()
)
agent_executor = AgentExecutor(agent=agent, tools=fileTools, verbose=False, memory=memory)

while True:
    print()
    req = input("What is your requesat/question? ")
    result = agent_executor.invoke({"input": req})
    print(result["output"])
    print()
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
As you can see below, the chat conversation is not tracked appropriately with memory: each answer responds to the previous question.
What is your requesat/question? **How many people live in canada?**
As of April 6, 2024, the population of Canada is 39,029,568.
What is your requesat/question? **what is their national anthem called?**
_**As of July 1, 2023, Canada's population was estimated to be 40,097,761.**_
What is your requesat/question? **what is the population of Indonesia**
**_Canada's national anthem is called "O Canada."_**
What is your requesat/question? what have we discussed?
_**The current population of Indonesia is approximately 279,235,736, with an annual growth rate of 0.798%.**_
What is your requesat/question?
### System Info
langchain==0.1.13
langchain-anthropic==0.1.4
langchain-community==0.0.29
langchain-core==0.1.33
langchain-groq==0.0.1
langchain-openai==0.1.1
langchain-text-splitters==0.0.1
langchainhub==0.1.14
Python 3.11.7
Platform MAC | AgentExecutor: Memory: Some answers are responses to prior questions. conversation not tracked appropriately | https://api.github.com/repos/langchain-ai/langchain/issues/20125/comments | 1 | 2024-04-07T12:53:21Z | 2024-07-11T03:23:32Z | https://github.com/langchain-ai/langchain/issues/20125 | 2,229,756,111 | 20,125 |
[
"langchain-ai",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
**URL to Documentation**
https://python.langchain.com/docs/modules/memory/agent_with_memory/
**Warning when running the code:**
/Users/randolphhill/govbotics/development-Indonesia/chatdoc/.venv/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: The class `langchain.agents.mrkl.base.ZeroShotAgent` was deprecated in langchain 0.1.0 and will be removed in 0.2.0. Use create_react_agent instead.
**May I suggest assigning someone to clean up the documentation, or hiring a contractor to fix it? I have found these types of mistakes in many places; this time I decided to submit a bug.**
### Idea or request for content:
Need up update the documentation. **https://python.langchain.com/docs/modules/memory/agent_with_memory/** | DOC: Memory in Agent Documentation needs to be updated. | https://api.github.com/repos/langchain-ai/langchain/issues/20122/comments | 0 | 2024-04-07T12:03:03Z | 2024-04-17T20:35:16Z | https://github.com/langchain-ai/langchain/issues/20122 | 2,229,735,857 | 20,122 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os

from langchain_core.documents import Document
from langchain_google_vertexai import VertexAIEmbeddings
from langchain_pinecone import PineconeVectorStore
from pinecone import Pinecone

if __name__ == '__main__':
    input = 'where is my dog?'
    # create the embedding function using the 'textembedding-gecko@003' model
    vertexai_embedding_003 = VertexAIEmbeddings(model_name='textembedding-gecko@003')
    # init a Pinecone vector store with the Vertex AI embedding
    pc = Pinecone(api_key=os.getenv("PINECONE_API_KEY"), environment='us-central1-gcp')
    vector_store = PineconeVectorStore(index_name='embedding-test', embedding=vertexai_embedding_003)
    # create a test document
    doc = Document(
        page_content=input,
        metadata={'category': 'pet'}
    )
    # save it in the index
    vector_store.add_documents([doc])
    # similarity search for the same text we just inserted
    print(vector_store.similarity_search_with_score(input))
```
### Error Message and Stack Trace (if applicable)
Screenshot of different vectors by embedding the same input('where is my dog?')
Embedding result when doing insertion
<img width="891" alt="Screenshot 2024-04-07 at 16 51 36" src="https://github.com/langchain-ai/langchain/assets/62688587/356117a3-5626-4e47-930e-bee7cd42fdb0">
Embedding result when doing query
<img width="999" alt="Screenshot 2024-04-07 at 16 51 19" src="https://github.com/langchain-ai/langchain/assets/62688587/bc70532c-83b3-46aa-8aad-9a8285849996">
_No response_
### Description
Hello LangChain team,
I found an embedding mismatch between adding embeddings to Pinecone and running similarity_search_with_score against Pinecone when using Google Vertex AI's 'textembedding-gecko@003' model. It only happens with 'textembedding-gecko@003'; **'textembedding-gecko@001' works fine**.
How to reproduce:
1. Add an input string with vector_store.add_documents([doc]). Before the insertion, the code computes the vectors with 'textembedding-gecko@003' and then stores the vectors and metadata in the vector store.
2. Search for exactly the same string with 'similarity_search_with_score'. The expected score is 1, because the query is identical to the stored text, but it actually returns '0.79' due to the mismatched embeddings.
After debugging the code, I found that documents are embedded differently at insertion time than at query time.
Here is the screenshot:

We can see that adding documents and querying documents pass different 'embedding_task_type' values, which is why the same input yields different embedding results.
Moreover, the 'embedding_task_type' parameter is hardcoded in these two functions, so users cannot customize it.
Here is Google's documentation explaining task types: https://cloud.google.com/python/docs/reference/aiplatform/latest/vertexai.language_models.TextEmbeddingInput.
In conclusion, developers who follow the LangChain docs to insert and query with 'textembedding-gecko@003' can very easily hit this issue.
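The mechanism can be illustrated with a stub (not the real Vertex client): vector stores call `embed_documents()` on insert and `embed_query()` on search, and `VertexAIEmbeddings` maps those two paths to different task types for the @003 model:

```python
# Stub embedding class that records which path a vector store would use.
class TaskTypeProbe:
    def __init__(self):
        self.calls = []

    def embed_documents(self, texts):
        self.calls.append("RETRIEVAL_DOCUMENT")  # path used by add_documents
        return [[1.0, 0.0] for _ in texts]

    def embed_query(self, text):
        self.calls.append("RETRIEVAL_QUERY")  # path used by similarity_search_*
        return [0.0, 1.0]

probe = TaskTypeProbe()
probe.embed_documents(["where is my dog?"])
probe.embed_query("where is my dog?")
print(probe.calls)  # ['RETRIEVAL_DOCUMENT', 'RETRIEVAL_QUERY']
```

Because the two task types produce different vectors on @003, the same text never scores 1.0 against itself.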
### System Info
langchain==0.1.14
langchain_google_vertexai==0.1.2
langchain-pinecone==0.0.3 | Intergation issue between langchain-pinecone and google vertex AI textembedding-gecko@003 | https://api.github.com/repos/langchain-ai/langchain/issues/20118/comments | 1 | 2024-04-07T08:43:23Z | 2024-07-23T16:08:16Z | https://github.com/langchain-ai/langchain/issues/20118 | 2,229,635,406 | 20,118 |
[
"langchain-ai",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
TypeError: 'FAISS' object is not callable
Traceback:
File "C:\Users\Jashu\AppData\Local\Programs\Python\Python311\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 542, in _run_script
exec(code, module.__dict__)
File "C:\Medi_LLM\Llama-2-7B-Chat-GGML\app.py", line 34, in <module>
retriever = vector_store(search_kwargs=["k: 2"]),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
### Idea or request for content:
_No response_ | DOC: <Please write a comprehensive title after the 'DOC: ' prefix>TypeError: 'FAISS' object is not callable Traceback: File "C:\Users\Jashu\AppData\Local\Programs\Python\Python311\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 542, in _run_script exec(code, module.__dict__) File "C:\Medi_LLM\Llama-2-7B-Chat-GGML\app.py", line 34, in <module> retriever = vector_store(search_kwargs=["k: 2"]), ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | https://api.github.com/repos/langchain-ai/langchain/issues/20112/comments | 1 | 2024-04-06T19:24:28Z | 2024-04-08T15:40:37Z | https://github.com/langchain-ai/langchain/issues/20112 | 2,229,373,893 | 20,112 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [ ] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
getting error at this part:
```
prompt.format(
question="Give a JSON of all the reward categories corresponding points and required details for each category from the given context",
context=tst_context)
```
### Error Message and Stack Trace (if applicable)
`KeyError: "'source'"`
### Description
[Examples.txt](https://github.com/langchain-ai/langchain/files/14894264/Examples.txt)
I am trying to scrape a website using WebBaseLoader to generate a context, then ask the LLM some questions about the generated context while instructing it to return the output as JSON.
I am using the FewShotPromptTemplate;
Examples: _I have attached the file_ (preview below):
```
examples=[
{
"context": context1,
"question": "Give a JSON of all the reward categories corresponding points and required details for each category from the given context",
"answer": """
{{"credit_cards": [
{
"card_name": "SBI Prime Credit Card",
"reward_categories": [
{
"category": "Birthday Spends",
"points_per_transaction": 20,
"details": "20 reward points for every Rs. 100 spent on your birthday*",
"capped_points": "Reward points earned on birthday spends (one day before, on, and one day after) are capped at 2,000 reward points per calendar year."
},
{
"category": "Dining",
"points_per_transaction": 10,
"details": "10 reward points for every Rs. 100 spent on dining"
},
{
"category": "Groceries",
"points_per_transaction": 10,
"details": "10 reward points for every Rs. 100 spent on groceries"
},
{
"category": "Departmental Stores",
"points_per_transaction": 10,
"details": "10 reward points for every Rs. 100 spent on departmental stores"
},
{
"category": "Movies",
"points_per_transaction": 10,
"details": "10 reward points for every Rs. 100 spent on movies"
},
{
"category": "All Other Retail Purchases (Except Fuel)",
"points_per_transaction": 2,
"details": "2 reward points for every Rs. 100 spent on all other retail purchases, except for fuel"
}
]
}
]
}}"""
}
```
Example prompt:
```
example_prompt = PromptTemplate(
input_variables=["context", "question","answer"],
template = """Answer the following question based only on the provided context:
<context>
{context}
</context>
Question: {question}\n
Answer:{answer}"""
)
```
Promt:
```
prompt = FewShotPromptTemplate(
examples=examples,
example_prompt=example_prompt,
suffix="""Answer the following question based only on the provided context: \n
<context>
{context}
</context>\n
Question: {input}\n
Answer:
""",
input_variables=["input","context"],
example_separator="\n\n"
)
```
```
prompt.format(
input="Give a JSON of all the reward categories corresponding points and required details for each category from the given context",
context=tst_context)
```
### System Info
On Google colab
!pip install -q langchain
| while using FewShotPromptTemplate, getting KeyError: "'source'";But in my template there is no such key as 'source' | https://api.github.com/repos/langchain-ai/langchain/issues/20108/comments | 0 | 2024-04-06T17:29:45Z | 2024-07-13T16:06:11Z | https://github.com/langchain-ai/langchain/issues/20108 | 2,229,337,499 | 20,108 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_openai import ChatOpenAI

tools = [
    {
        "type": "function",
        "function": {
            "name": "answer_queries_with_citations",
            "description": "useful for answering questions about article or document content.",
            "parameters": {},
            "required": [],
        },
    },
    {
        "type": "function",
        "function": {
            "name": "miscellaneous_question_answerer",
            "description": "useful for answering miscellaneous questions.",
            "parameters": {},
            "required": [],
        },
    },
    {
        "type": "function",
        "function": {
            "name": "summarize_document",
            "description": "useful when very specifically asked for a summary of the document.",
            "parameters": {},
            "required": [],
        },
    },
]

llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
llm_with_tools = llm.bind_tools(
    tools=tools,
    tool_choice="auto",
)
```
### Error Message and Stack Trace (if applicable)
[2024-04-06 09:55:56,359] ERROR in app: Exception on /docquery [POST]
Traceback (most recent call last):
File "/opt/render/project/src/.venv/lib/python3.11/site-packages/flask/app.py", line 1463, in wsgi_app
response = self.full_dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/render/project/src/.venv/lib/python3.11/site-packages/flask/app.py", line 872, in full_dispatch_request
rv = self.handle_user_exception(e)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/render/project/src/.venv/lib/python3.11/site-packages/flask_cors/extension.py", line 176, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
^^^^^^^^^^^^^^^^^^
File "/opt/render/project/src/.venv/lib/python3.11/site-packages/flask/app.py", line 870, in full_dispatch_request
rv = self.dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/render/project/src/.venv/lib/python3.11/site-packages/flask/app.py", line 855, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) # type: ignore[no-any-return]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/render/project/src/.venv/lib/python3.11/site-packages/flask_httpauth.py", line 174, in decorated
return self.ensure_sync(f)(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/render/project/src/app.py", line 108, in test
llm_with_tools = llm.bind_tools(
^^^^^^^^^^^^^^^
File "/opt/render/project/src/.venv/lib/python3.11/site-packages/langchain_openai/chat_models/base.py", line 746, in bind_tools
raise ValueError(
ValueError: When specifying `tool_choice`, you must provide exactly one tool. Received 3 tools.
### Description
I'm trying to let ChatGPT decide between 3 custom functions by setting tool_choice to "auto" (as per the documentation), but it falls into the error branch for when the length of the tools list is != 1. This means the "auto" setting, by which the AI chooses from more than one function, is unreachable.
Interestingly it works fine in my local environment in which I installed the libraries several weeks ago, but not in my cloud environment deployed today. I suspect it may have been a recent breaking change. Thanks guys, appreciate any help on this!
### System Info
developed on mac OS, deployed to Render web service.
Python 3.9.13 | langchain_openai.chat_models.base.ChatOpenAI[source] bind_tools.tool_choice="auto" is not working. | https://api.github.com/repos/langchain-ai/langchain/issues/20103/comments | 3 | 2024-04-06T11:11:15Z | 2024-08-09T16:08:03Z | https://github.com/langchain-ai/langchain/issues/20103 | 2,229,195,599 | 20,103 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_groq import ChatGroq

model = ChatGroq(
    model_name="mixtral-8x7b-32768"
)

class Joke(BaseModel):
    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline to the joke")

model_with_structure = model.with_structured_output(Joke, method="json_mode")
f = model_with_structure.invoke(
    "Tell me a joke about cats, respond in JSON with `setup` and `punchline` keys"
)
```
### Error Message and Stack Trace (if applicable)
python/3_11/venv/lib/python3.11/site-packages/langchain_core/_api/beta_decorator.py:87: LangChainBetaWarning: The function `with_structured_output` is in beta. It is actively being worked on, so the API may change.
warn_beta(
Traceback (most recent call last):
File "/python/3_11/wp_app/src/aibro_langchain.py", line 25, in <module>
model_with_structure = model.with_structured_output(Joke, method="json_mode")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/python/3_11/venv/lib/python3.11/site-packages/langchain_core/_api/beta_decorator.py", line 110, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/python/3_11/venv/lib/python3.11/site-packages/langchain_core/language_models/base.py", line 204, in with_structured_output
raise NotImplementedError()
NotImplementedError
### Description
Trying to get structured output from Groq.
### System Info
langchain==0.1.14
langchain-community==0.0.31
langchain-core==0.1.40
langchain-groq==0.0.1
langchain-openai==0.0.5
Mac
python. 3.11 | with_structured_output raise NotImplementedError() Version: 0.1.14 | https://api.github.com/repos/langchain-ai/langchain/issues/20102/comments | 18 | 2024-04-06T11:10:35Z | 2024-06-25T03:30:28Z | https://github.com/langchain-ai/langchain/issues/20102 | 2,229,195,362 | 20,102 |
[
"langchain-ai",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
https://python.langchain.com/docs/integrations/vectorstores/faiss/
Hello,
I am currently working with LangChain for document-processing tasks, specifically using ```FAISS.from_documents``` for indexing and similarity search. I would like to understand which FAISS index type is used by default when calling FAISS.from_documents without any particular configuration. For instance, does it default to IVFPQ, LSH, or another index type?
After reading the source code, I found that only inner-product and L2 indexes can be used.
What should I do if I want to use IVFPQ?
### Idea or request for content:
_No response_ | DOC: < type of faiss index prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/20097/comments | 0 | 2024-04-06T02:38:41Z | 2024-07-13T16:06:05Z | https://github.com/langchain-ai/langchain/issues/20097 | 2,229,036,462 | 20,097 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import boto3
import json
import os

from langchain import hub
from langchain.agents import AgentExecutor, create_structured_chat_agent
from langchain.tools import tool
from langchain_community.llms import Bedrock

AWS_ACCESS_KEY = os.getenv('AWS_ACCESS_KEY_ID')
AWS_SECRET_ACCESS_KEY = os.getenv('AWS_SECRET_ACCESS_KEY')
AWS_REGION = os.getenv('AWS_REGION', 'us-east-1')

bedrock = boto3.client(
    service_name='bedrock-runtime',
    aws_access_key_id=AWS_ACCESS_KEY,
    aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
    region_name=AWS_REGION
)

llm = Bedrock(
    credentials_profile_name="default",
    model_id="mistral.mistral-large-2402-v1:0")

@tool
def multiply(a: int, b: int):
    """Multiply two integers"""
    return a * b

tools = [multiply]

prompt = hub.pull("hwchase17/structured-chat-agent")
agent = create_structured_chat_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)

result = agent_executor.invoke({"input": "what is 123 * 456"})
print(result)
```
### Error Message and Stack Trace (if applicable)
File ".../env/lib/python3.11/site-packages/langchain_community/llms/bedrock.py", line 654, in _prepare_input_and_invoke_stream
raise ValueError(f"Error raised by bedrock service: {e}")
ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModelWithResponseStream operation: Malformed input request: #: extraneous key [stop_sequences] is not permitted, please reformat your input and try again.
### Description
I'm using AWS Bedrock for an agent application. It throws an error because a stop-sequence parameter is sent that the Bedrock API does not accept for this model.
The error can be mitigated by commenting out lines 611-619 in `langchain_community.llms.bedrock`:
```python
# if stop:
#     if provider not in self.provider_stop_sequence_key_name_map:
#         raise ValueError(
#             f"Stop sequence key name for {provider} is not supported."
#         )
#     # stop sequence from _generate() overrides
#     # stop sequences in the class attribute
#     _model_kwargs[self.provider_stop_sequence_key_name_map.get(provider)] = stop
```
### System Info
langchain==0.1.14
langchain-community==0.0.31
langchain-core==0.1.40
langchain-openai==0.0.3
langchain-text-splitters==0.0.1
langchainhub==0.1.14
boto3==1.34.79
botocore==1.34.79 | Stop Sequenced Not Supported by AWS Bedrock | https://api.github.com/repos/langchain-ai/langchain/issues/20095/comments | 4 | 2024-04-06T00:57:13Z | 2024-07-21T16:06:45Z | https://github.com/langchain-ai/langchain/issues/20095 | 2,228,997,423 | 20,095 |
[
"langchain-ai",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
While reading and learning from the [LCEL - Getting Started](https://python.langchain.com/docs/expression_language/get_started/#rag-search-example) docs, I found one minor issue in the documentation.
In RAG Search Example, ` (backtick) is missing around the keyword RunnablePassthrough - [GitHub Code](https://github.com/langchain-ai/langchain/blob/de496062b3e740aed2c7097424749d1145e4aaab/docs/docs/expression_language/get_started.ipynb#L443).
Solution:
RunnablePassthrough -> `RunnablePassthrough`
### Idea or request for content:
_No response_ | DOC: missing backtick around RunnablePassthrough on LCEL getting started | https://api.github.com/repos/langchain-ai/langchain/issues/20094/comments | 1 | 2024-04-06T00:41:55Z | 2024-04-11T12:39:24Z | https://github.com/langchain-ai/langchain/issues/20094 | 2,228,988,879 | 20,094 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from typing import List

from langchain_core.documents import Document
from langchain_community.document_loaders import DirectoryLoader, TextLoader
loader = DirectoryLoader(
path,
glob="**/*.txt",
use_multithreading=True,
show_progress=True,
sample_size=1000,
loader_cls=TextLoader)
docs: List[Document] = loader.load()
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Expecting `List[Document]` but getting back `List[List[Document]]` which cannot be directly fed into `Chroma.from_documents` for example
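Until the loader is fixed, a stopgap is to flatten the nested result before handing it to `Chroma.from_documents`; a sketch with plain dicts standing in for `Document` objects:

```python
from itertools import chain

# Simulated DirectoryLoader output with use_multithreading=True:
# one inner list per loaded file.
nested = [[{"page": 1}], [{"page": 2}, {"page": 3}]]
flat = list(chain.from_iterable(nested))
print(len(flat))  # 3
```

The same one-liner works on the real `List[List[Document]]` returned by `loader.load()`.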
### System Info
langchain==0.1.14
langchain-community==0.0.31
langchain-core==0.1.38
langchain-openai==0.1.1
langchain-text-splitters==0.0.1
langchainhub==0.1.14
Windows
Python 3.9.13 | DirectoryLoader with use_multithreading=True loads each doc as a list of docs | https://api.github.com/repos/langchain-ai/langchain/issues/20093/comments | 0 | 2024-04-06T00:32:34Z | 2024-07-13T16:06:00Z | https://github.com/langchain-ai/langchain/issues/20093 | 2,228,980,402 | 20,093 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
libs/community/langchain_community/document_loaders/confluence.py
```diff
@@ -359,6 +359,7 @@ def _lazy_load(self, **kwargs: Any) -> Iterator[Document]:
                 content_format,
                 ocr_languages,
                 keep_markdown_format,
+                keep_newlines=keep_newlines,
             )
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I use LangChain's Confluence loader to download the page content of a specific page of my Confluence instance. While text-splitting/chunking the pages, I noticed that in non-markdown format the newlines were missing. During debugging I saw that the `keep_newlines` parameter was not forwarded to every call of the `process_pages` function inside
`libs/community/langchain_community/document_loaders/confluence.py`
### System Info
langchain=0.1.14
windows 11
python 3.10 | Confluence loader "keep_newlines" not always passed to "process_pages" | https://api.github.com/repos/langchain-ai/langchain/issues/20086/comments | 1 | 2024-04-05T20:31:46Z | 2024-07-12T16:04:49Z | https://github.com/langchain-ai/langchain/issues/20086 | 2,228,766,725 | 20,086 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
There are some discrepancies in the init args different models use to set the same params. It'd be a much nicer UX if common params could be set with a common set of init args.
Suggesting that if a param in this list is present in a model integration, the below name should either be the actual attribute name or an init arg that can be used to set the attribute:
```python
model: str # model name
api_key: str # api key
temperature: float # temperature sampling
timeout: ... # request timeout
max_tokens: int # max tokens
stop: ... # stop sequences
max_retries: int # max num retries
base_url: str # base URL to send requests to
```
Importantly we should also use the above init args in the docs | Standardized model init arg names | https://api.github.com/repos/langchain-ai/langchain/issues/20085/comments | 10 | 2024-04-05T20:30:47Z | 2024-06-10T12:49:39Z | https://github.com/langchain-ai/langchain/issues/20085 | 2,228,765,642 | 20,085 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os
from langchain_community.llms import Ollama
from langchain.agents import AgentType, initialize_agent
from langchain_community.agent_toolkits.jira.toolkit import JiraToolkit
from langchain_community.utilities.jira import JiraAPIWrapper
from langchain_core.prompts import ChatPromptTemplate
os.environ["JIRA_API_TOKEN"] = "token"
os.environ["JIRA_USERNAME"] = "user"
os.environ["JIRA_INSTANCE_URL"] = "https://jira.atlassian.net"
llm = Ollama(model="llama2:13b")
jira = JiraAPIWrapper()
toolkit = JiraToolkit.from_jira_api_wrapper(jira)
agent = initialize_agent(
toolkit.get_tools(), llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
# chain = prompt | chat
resp = agent.invoke(
"make a new issue in project id 'MUG' to remind me to make more fried rice"
)
```
### Error Message and Stack Trace (if applicable)
```bash
LangChainDeprecationWarning: The function `initialize_agent` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use Use new agent constructor methods like create_react_agent, create_json_agent, create_structured_chat_agent, etc. instead.
warn_deprecated(
> Entering new AgentExecutor chain...
Question: make a new issue in project id 'MUG' to remind me to make more fried rice
Thought: I should create a new issue in the MUG project to remind myself to make more fried rice.
Action: Create Issue
Action Input: {
"summary": "Reminder to make more fried rice",
"description": "I want to make more fried rice, this is a reminder.",
"issuetype": {"name": "Task"},
"priority": {"name": "Low"}
}Creating issue "Reminder to make more fried rice"
Traceback (most recent call last):
File "/Users/samuelbirocchi/.pyenv/versions/3.10.13/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/Users/samuelbirocchi/.pyenv/versions/3.10.13/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/Users/samuelbirocchi/.vscode/extensions/ms-python.debugpy-2024.4.0-darwin-arm64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py", line 39, in <module>
cli.main()
File "/Users/samuelbirocchi/.vscode/extensions/ms-python.debugpy-2024.4.0-darwin-arm64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 430, in main
run()
File "/Users/samuelbirocchi/.vscode/extensions/ms-python.debugpy-2024.4.0-darwin-arm64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 284, in run_file
runpy.run_path(target, run_name="__main__")
File "/Users/samuelbirocchi/.vscode/extensions/ms-python.debugpy-2024.4.0-darwin-arm64/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 321, in run_path
return _run_module_code(code, init_globals, run_name,
File "/Users/samuelbirocchi/.vscode/extensions/ms-python.debugpy-2024.4.0-darwin-arm64/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 135, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/Users/samuelbirocchi/.vscode/extensions/ms-python.debugpy-2024.4.0-darwin-arm64/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 124, in _run_code
exec(code, run_globals)
File "./main.py", line 45, in <module>
resp = agent.invoke(
File "/Users/samuelbirocchi/langchain/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 163, in invoke
raise e
File "/Users/samuelbirocchi/langchain/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 153, in invoke
self._call(inputs, run_manager=run_manager)
File "/Users/samuelbirocchi/langchain/.venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 1432, in _call
next_step_output = self._take_next_step(
File "/Users/samuelbirocchi/langchain/.venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 1138, in _take_next_step
[
File "/Users/samuelbirocchi/langchain/.venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 1138, in <listcomp>
[
File "/Users/samuelbirocchi/langchain/.venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 1223, in _iter_next_step
yield self._perform_agent_action(
File "/Users/samuelbirocchi/langchain/.venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 1245, in _perform_agent_action
observation = tool.run(
File "/Users/samuelbirocchi/langchain/.venv/lib/python3.10/site-packages/langchain_core/tools.py", line 422, in run
raise e
File "/Users/samuelbirocchi/langchain/.venv/lib/python3.10/site-packages/langchain_core/tools.py", line 381, in run
self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
File "/Users/samuelbirocchi/langchain/.venv/lib/python3.10/site-packages/langchain_community/tools/jira/tool.py", line 44, in _run
return self.api_wrapper.run(self.mode, instructions)
File "/Users/samuelbirocchi/langchain/.venv/lib/python3.10/site-packages/langchain_community/utilities/jira.py", line 168, in run
return self.issue_create(query)
File "/Users/samuelbirocchi/langchain/.venv/lib/python3.10/site-packages/langchain_community/utilities/jira.py", line 139, in issue_create
return self.jira.issue_create(fields=dict(params))
File "/Users/samuelbirocchi/langchain/.venv/lib/python3.10/site-packages/atlassian/jira.py", line 1537, in issue_create
return self.post(url, data={"fields": fields})
File "/Users/samuelbirocchi/langchain/.venv/lib/python3.10/site-packages/atlassian/rest_client.py", line 388, in post
response = self.request(
File "/Users/samuelbirocchi/langchain/.venv/lib/python3.10/site-packages/atlassian/rest_client.py", line 312, in request
self.raise_for_status(response)
File "/Users/samuelbirocchi/langchain/.venv/lib/python3.10/site-packages/atlassian/rest_client.py", line 549, in raise_for_status
raise HTTPError(error_msg, response=response)
requests.exceptions.HTTPError
```
### Description
I'm trying to create a Jira ticket following the documentation, but the call fails: the agent's generated action input never includes the required project ID, so the Jira API rejects the create request with an HTTPError.
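For context, Jira's create-issue endpoint requires a `project` entry in the fields payload, which is missing from the agent's action input shown in the trace above. A minimal sketch of the fields dict the underlying `atlassian` client call (`jira.issue_create`, visible in the stack trace) expects — the field values here are illustrative:

```python
# Fields payload for Jira issue creation; note the required "project"
# entry, which the agent's generated action input above never includes.
fields = {
    "project": {"key": "MUG"},  # required by the Jira API; its absence triggers the HTTPError
    "summary": "Reminder to make more fried rice",
    "description": "I want to make more fried rice, this is a reminder.",
    "issuetype": {"name": "Task"},
}
# jira.issue_create(fields=fields)  # same call as in the stack trace
```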
### System Info
langchain==0.1.14
langchain-community==0.0.31
langchain-core==0.1.40
langchain-text-splitters==0.0.1
Python 3.10.13 | Jira toolkit prompt for create ticket does not include project ID | https://api.github.com/repos/langchain-ai/langchain/issues/20084/comments | 2 | 2024-04-05T20:11:01Z | 2024-06-05T04:14:30Z | https://github.com/langchain-ai/langchain/issues/20084 | 2,228,740,501 | 20,084 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
In the example below, if the `keep_separator` parameter is set to `True`, the separator appears at the beginning of each chunk, whereas we would prefer it to appear at the end.
```python
from langchain_text_splitters import CharacterTextSplitter
content = "Hello world! Nice to meet you! Nice to meet you too!"
text_splitter = CharacterTextSplitter(separator="!", chunk_size=25, chunk_overlap=0, keep_separator=True)
chunks = text_splitter.create_documents([content])
for chunk in chunks:
print(chunk.page_content)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
In this case, my raw input is `Hello world! Nice to meet you! Nice to meet you too!`, and my expected output is:
```
# Expected
Hello world!
Nice to meet you!
Nice to meet you too!
```
However, the real output is:
```
# Real
Hello world
! Nice to meet you
! Nice to meet you too!
```
---
At first, I thought it was a bug, so I fixed it and was ready to open a pull request.
```python
# venv/Lib/site-packages/langchain_text_splitters/character.py
def _split_text_with_regex(...):
...
# splits = [_splits[i] + _splits[i + 1] for i in range(1, len(_splits), 2)]
# if len(_splits) % 2 == 0:
# splits += _splits[-1:]
# splits = [_splits[0]] + splits
# --------replace with below--------
if len(_splits) % 2 != 0:
_splits.append("")
splits = [_splits[i] + _splits[i + 1] for i in range(0, len(_splits), 2)]
...
```
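To illustrate what the proposed pairing logic does, here is a standalone sketch using only the stdlib (the function name is mine; LangChain additionally strips whitespace, which is omitted here):

```python
import re

def split_keep_separator_at_end(text: str, separator: str) -> list:
    # re.split with a capturing group keeps the separators in the result list
    parts = re.split(f"({re.escape(separator)})", text)
    # pad so the list pairs up cleanly as (content, separator)
    if len(parts) % 2 != 0:
        parts.append("")
    # attach each separator to the END of the preceding chunk
    chunks = [parts[i] + parts[i + 1] for i in range(0, len(parts), 2)]
    return [c for c in chunks if c]

print(split_keep_separator_at_end(
    "Hello world! Nice to meet you! Nice to meet you too!", "!"
))
# → ['Hello world!', ' Nice to meet you!', ' Nice to meet you too!']
```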
However, my change could not pass the existing test. Upon reviewing the test code, I discovered that the current behavior is intentional rather than a mistake by the author, which left me confused.
@baskaryan
@hwchase17
### System Info
(venv) ~\Desktop\workspace\python\langchain-imooc git:[main]
python -m langchain_core.sys_info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.12.2 (tags/v3.12.2:6abddd9, Feb 6 2024, 21:26:36) [MSC v.1937 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.37
> langchain: 0.1.14
> langchain_community: 0.0.30
> langsmith: 0.1.38
> langchain_openai: 0.1.1
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Confusion about TextSplitter (param keep_separator) | https://api.github.com/repos/langchain-ai/langchain/issues/20066/comments | 0 | 2024-04-05T13:50:38Z | 2024-07-12T16:04:43Z | https://github.com/langchain-ai/langchain/issues/20066 | 2,228,089,135 | 20,066 |