| issue_owner_repo | issue_body | issue_title | issue_comments_url | issue_comments_count | issue_created_at | issue_updated_at | issue_html_url | issue_github_id | issue_number |
|---|---|---|---|---|---|---|---|---|---|
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
Currently we have many "type: ignore" comments in our code for ignoring mypy errors. We should rarely, if ever, use these. We recently had to add a bunch of these ignore comments because of an error (I made) that was silently preventing mypy from running in CI.
We should work to remove as many of them as possible by fixing the underlying issues. To find them, you can grep for:
```bash
git grep "type: ignore" libs/
```
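For illustration, here is a hypothetical example (not taken from the codebase) of the kind of fix this usually involves: make the code type-safe so the ignore comment can simply be deleted.

```python
from typing import Optional

def get_name(user: Optional[dict]) -> str:
    # Before: `return user["name"]  # type: ignore[index]` silenced mypy's
    # complaint that `user` might be None. Narrowing the Optional fixes the
    # underlying issue, so the ignore comment is no longer needed.
    if user is None:
        return "unknown"
    return user["name"]
```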
This is a big effort; even removing just a few at a time would be very helpful. | Remove "type: ignore" comments | https://api.github.com/repos/langchain-ai/langchain/issues/17048/comments | 4 | 2024-02-05T19:26:32Z | 2024-04-04T14:22:40Z | https://github.com/langchain-ai/langchain/issues/17048 | 2,119,302,272 | 17,048 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain.vectorstores.elasticsearch import ElasticsearchStore
from langchain.embeddings.huggingface import HuggingFaceBgeEmbeddings
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.chat_models.openai import ChatOpenAI
vectorstore = ElasticsearchStore(
embedding=HuggingFaceBgeEmbeddings(
model_name="BAAI/bge-small-en-v1.5",
model_kwargs={"device": "cpu"},
encode_kwargs={"normalize_embeddings": True},
),
index_name="z-index",
es_url="http://localhost:9200",
)
metadata_field_info = [
...,
AttributeInfo(
name="update_date",
description="Date when the document was last updated",
type="string",
),
...
]
document_content = "an abstract of the document"
retriever = SelfQueryRetriever.from_llm(
ChatOpenAI(temperature=0, api_key=KEY, max_retries=20),
vectorstore,
document_content,
metadata_field_info,
verbose=True,
enable_limit=True
)
r = retriever.invoke("give me all documents in the last two days?")
print(r)
```
### Error Message and Stack Trace (if applicable)
```
r = retriever.invoke("give me all documents in the last two days?")
  File "/usr/local/lib/python3.10/dist-packages/langchain_core/retrievers.py", line 121, in invoke
    return self.get_relevant_documents(
  File "/usr/local/lib/python3.10/dist-packages/langchain_core/retrievers.py", line 224, in get_relevant_documents
    raise e
  File "/usr/local/lib/python3.10/dist-packages/langchain_core/retrievers.py", line 217, in get_relevant_documents
    result = self._get_relevant_documents(
  File "/usr/local/lib/python3.10/dist-packages/langchain/retrievers/self_query/base.py", line 171, in _get_relevant_documents
    docs = self._get_docs_with_query(new_query, search_kwargs)
  File "/usr/local/lib/python3.10/dist-packages/langchain/retrievers/self_query/base.py", line 145, in _get_docs_with_query
    docs = self.vectorstore.search(query, self.search_type, **search_kwargs)
  File "/usr/local/lib/python3.10/dist-packages/langchain_core/vectorstores.py", line 139, in search
    return self.similarity_search(query, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/langchain_community/vectorstores/elasticsearch.py", line 632, in similarity_search
    results = self._search(
  File "/usr/local/lib/python3.10/dist-packages/langchain_community/vectorstores/elasticsearch.py", line 815, in _search
    response = self.client.search(
  File "/usr/local/lib/python3.10/dist-packages/elasticsearch/_sync/client/utils.py", line 402, in wrapped
    return api(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/elasticsearch/_sync/client/__init__.py", line 3733, in search
    return self.perform_request(  # type: ignore[return-value]
  File "/usr/local/lib/python3.10/dist-packages/elasticsearch/_sync/client/_base.py", line 320, in perform_request
    raise HTTP_EXCEPTIONS.get(meta.status, ApiError)(
```
**elasticsearch.BadRequestError: BadRequestError(400, 'x_content_parse_exception', '[range] query does not support [date]')**
### Description
The ElasticsearchTranslator should not put the comparison value into the field directly, since that causes a syntax error in the query; if the value is a date, it should put the date value itself instead (just like in issue #16022).
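For illustration, the shape mismatch looks roughly like this. Both dicts are my assumptions about the generated vs. expected query, inferred from the error message and issue #16022, not taken from the translator's actual output:

```python
# What the translator appears to generate: the comparison value wrapped in a
# {"date": ...} object inside the range clause, which Elasticsearch rejects
# with "[range] query does not support [date]".
generated = {
    "range": {
        "metadata.update_date": {"gte": {"date": "2024-02-03", "type": "date"}}
    }
}

# What an Elasticsearch range query expects: the bare date value.
expected = {"range": {"metadata.update_date": {"gte": "2024-02-03"}}}
```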
### System Info
System Information
------------------
> OS: Linux
> OS Version: #15~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Fri Jan 12 18:54:30 UTC 2
> Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.1.18
> langchain: 0.1.5
> langchain_community: 0.0.17
> langserve: 0.0.37
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph | ElasticsearchTranslator generating invalid queries for Date type | https://api.github.com/repos/langchain-ai/langchain/issues/17042/comments | 2 | 2024-02-05T15:39:52Z | 2024-02-13T20:26:40Z | https://github.com/langchain-ai/langchain/issues/17042 | 2,118,854,903 | 17,042 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Below is the code:
```
pdf_file = '/content/documents/Pre-proposal students.pdf'
# Define your prompt template
prompt_template = """Use the following pieces of information to answer the user's question.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Context: {context}
Question: {question}
Only return the helpful answer below and nothing else. If no context, then no answer.
Helpful Answer:"""
# Load the PDF file
loader = PyPDFLoader(pdf_file)
document = loader.load()
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=1000,
chunk_overlap=200)
# Split the document into chunks
texts = text_splitter.split_documents(document)
vectorstore = Chroma.from_documents(texts, embeddings)
llm = OpenAI(temperature=0)
# Create a retriever for the vector database
document_content_description = "Description of research papers and research proposal"
metadata_field_info = [
AttributeInfo(
name="title",
description="The title of the research paper.",
type="string",
),
AttributeInfo(
name="institution",
description="The name of the institution or university associated with the research.",
type="string",
),
AttributeInfo(
name="year",
description="The year the research was published.",
type="integer",
),
AttributeInfo(
name="abstract",
description="A brief summary of the research paper.",
type="string",
),
AttributeInfo(
name="methodology",
description="The main research methods used in the study.",
type="string",
),
AttributeInfo(
name="findings",
description="A brief description of the main findings of the research.",
type="string",
),
AttributeInfo(
name="implications",
description="The implications of the research findings.",
type="string",
),
AttributeInfo(
name="reference_count",
description="The number of references cited in the research paper.",
type="integer",
),
AttributeInfo(
name="doi",
description="The Digital Object Identifier for the research paper.",
type="string",
),
]
retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
document_content_description,
metadata_field_info,
enable_limit=True,
verbose=True
)
# retriever.get_relevant_documents("What is the title of the proposal")
# logging.basicConfig(level=logging.INFO)
# Create a chain to answer questions
qa_chain = RetrievalQA.from_chain_type(llm=llm,
chain_type="stuff",
retriever=retriever,
return_source_documents=True)
retriever.get_relevant_documents("main research method")
```
Below is the output:
`[Document(page_content='Training and evaluation corpora inlow-resource\nlanguages may notbeaseffective due tothepaucity of\ndata.\n3.Create acentral dialect tomediate between the\nvarious Gondi dialects, which can beused asa\nstandard language forallGondi speakers.\n4.Low BLEU scores formachine translation model :\nThere isaneed forbetter methods oftraining and\nevaluating machine translation models.\nPOS Tagging\nData Collection', metadata={'page': 0, 'source': '/content/documents/Pre-proposal PhD students.pdf'})]`
whereas in the LangChain SelfQueryRetriever documentation, the output shown is:
`StructuredQuery(query='taxi driver', filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='science fiction'), Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GTE: 'gte'>, attribute='year', value=1990), Comparison(comparator=<Comparator.LT: 'lt'>, attribute='year', value=2000)]), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Luc Besson')]), limit=None)`
There, the structured query (`query='taxi driver'`) is visible, but my output only shows the retrieved documents, not the query that was generated.
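One way to surface the generated query is via logging. This is a sketch: the logger name and the assumption that SelfQueryRetriever logs its StructuredQuery at INFO level (when `verbose=True` is set, as above) depend on the installed LangChain version, so verify against your install.

```python
import logging

# SelfQueryRetriever (with verbose=True) logs the generated StructuredQuery
# at INFO level; enabling INFO logging for its module makes it visible.
logging.basicConfig(level=logging.INFO)
logging.getLogger("langchain.retrievers.self_query.base").setLevel(logging.INFO)
```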
### Idea or request for content:
_No response_ | not showing query field when trying to retrieve the documents using SelfQueryRetriever | https://api.github.com/repos/langchain-ai/langchain/issues/17040/comments | 1 | 2024-02-05T14:49:58Z | 2024-02-14T03:34:52Z | https://github.com/langchain-ai/langchain/issues/17040 | 2,118,743,981 | 17,040 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
I am going through the examples in the documentation in a Jupyter Lab notebook. I'm running the code from [here](https://python.langchain.com/docs/expression_language/get_started):
```
# Requires:
# pip install langchain docarray tiktoken
from langchain_community.vectorstores import DocArrayInMemorySearch
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableParallel, RunnablePassthrough
from langchain_openai.chat_models import ChatOpenAI
from langchain_openai.embeddings import OpenAIEmbeddings
vectorstore = DocArrayInMemorySearch.from_texts(
["harrison worked at kensho", "bears like to eat honey"],
embedding=OpenAIEmbeddings(),
)
retriever = vectorstore.as_retriever()
template = """Answer the question based only on the following context:
{context}
Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)
model = ChatOpenAI()
output_parser = StrOutputParser()
setup_and_retrieval = RunnableParallel(
{"context": retriever, "question": RunnablePassthrough()}
)
chain = setup_and_retrieval | prompt | model | output_parser
chain.invoke("where did harrison work?")
```
I'm getting this error message on the chain.invoke():
```
ValidationError: 2 validation errors for DocArrayDoc
text
Field required [type=missing, input_value={'embedding': [-0.0192381..., 0.010137099064823456]}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.6/v/missing
metadata
Field required [type=missing, input_value={'embedding': [-0.0192381..., 0.010137099064823456]}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.6/v/missing
```
Is the code in the docs obsolete or is this a problem of my setup? I'm using langchain 0.1.5 and Python 3.11.
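A common cause of this particular `DocArrayDoc` validation error (an assumption worth checking, not something confirmed by the docs) is a pydantic v2 / docarray incompatibility rather than the example code itself. A quick way to check the installed versions:

```python
from importlib.metadata import PackageNotFoundError, version

def installed_version(pkg: str):
    """Return the installed version of `pkg`, or None if it isn't installed."""
    try:
        return version(pkg)
    except PackageNotFoundError:
        return None

print("pydantic:", installed_version("pydantic"))
print("docarray:", installed_version("docarray"))
```

If pydantic reports 2.x, pinning `pydantic<2` (or upgrading `docarray`) is a commonly suggested workaround; treat that as a hypothesis to test, not an official fix.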
### Idea or request for content:
_No response_ | DOC: RAG Search example validation error | https://api.github.com/repos/langchain-ai/langchain/issues/17039/comments | 4 | 2024-02-05T14:23:15Z | 2024-02-06T12:05:14Z | https://github.com/langchain-ai/langchain/issues/17039 | 2,118,686,216 | 17,039 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Below is the code:
```
%%time
# query = 'how many are injured and dead in christchurch Mosque?'
# Define your prompt template
prompt_template = """Use the following pieces of information to answer the user's question.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Context: {context}
Question: {question}
Only return the helpful answer below and nothing else. If no context, then no answer.
Helpful Answer:"""
# Assuming `pdf_files` is a list of your PDF files
for pdf_file in pdf_files:
# Load the PDF file
loader = PyPDFLoader(pdf_file)
document = loader.load()
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=1000,
chunk_overlap=200)
# Split the document into chunks
texts = text_splitter.split_documents(document)
# Create a vector database for the document
# vectorstore = FAISS.from_documents(texts, embeddings)
vectorstore = Chroma.from_documents(texts, embeddings)
llm = OpenAI(temperature=0.2)
# Create a retriever for the vector database
document_content_description = "Description of research papers"
metadata_field_info = [
AttributeInfo(
name="title",
description="The title of the research paper.",
type="string",
),
AttributeInfo(
name="institution",
description="The name of the institution or university associated with the research.",
type="string",
),
AttributeInfo(
name="year",
description="The year the research was published.",
type="integer",
),
AttributeInfo(
name="abstract",
description="A brief summary of the research paper.",
type="string",
),
AttributeInfo(
name="methodology",
description="The main research methods used in the study.",
type="string",
),
AttributeInfo(
name="findings",
description="A brief description of the main findings of the research.",
type="string",
),
AttributeInfo(
name="implications",
description="The implications of the research findings.",
type="string",
),
AttributeInfo(
name="reference_count",
description="The number of references cited in the research paper.",
type="integer",
),
AttributeInfo(
name="doi",
description="The Digital Object Identifier for the research paper.",
type="string",
),
]
retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
document_content_description,
metadata_field_info,
verbose=True
)
logging.basicConfig(level=logging.INFO)
# Create a chain to answer questions
qa_chain = RetrievalQA.from_chain_type(llm=llm,
chain_type="stuff",
retriever=retriever,
return_source_documents=True,)
# # Use the chain to answer a question
# query = "how many are injured and dead in christchurch Mosque?"
# llm_response = qa_chain(query)
# process_llm_response(llm_response)
# Format the prompt using the template
context = ""
question = "how many are injured and dead in christchurch Mosque?"
formatted_prompt = prompt_template.format(context=context, question=question)
# Pass the formatted prompt to the RetrievalQA function
llm_response = qa_chain(formatted_prompt)
process_llm_response(llm_response)
```
Below is the output of the above code:
```
I'm sorry, I don't have any information about the Christchurch Mosque incident.
Sources:
/content/10.pdf
51 dead and 49 injured.
Sources:
/content/11.pdf
51 dead and 49 injured
Sources:
/content/11.pdf
CPU times: user 4.38 s, sys: 79.8 ms, total: 4.46 s
Wall time: 8.9 s
```
In the output above, you can see that the same answer was returned twice from the same document. How can this be fixed? Is there an issue with the Chroma vector database?
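One likely culprit (an assumption worth checking): `Chroma.from_documents` is called inside the loop without a distinct `collection_name`, so chunks from successive iterations may accumulate in the same default collection. As a defensive measure, duplicate hits can also be filtered after retrieval; a minimal duck-typed sketch:

```python
def dedupe_documents(docs):
    """Drop retrieved documents whose (source, content) pair was already seen."""
    seen = set()
    unique = []
    for doc in docs:
        key = (doc.metadata.get("source"), doc.page_content)
        if key not in seen:
            seen.add(key)
            unique.append(doc)
    return unique
```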
### Idea or request for content:
_No response_ | Chroma db repeating same data and output which is irrelevant | https://api.github.com/repos/langchain-ai/langchain/issues/17038/comments | 1 | 2024-02-05T14:03:23Z | 2024-02-14T03:34:52Z | https://github.com/langchain-ai/langchain/issues/17038 | 2,118,642,283 | 17,038 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Below is the code:
```
%%time
# query = 'how many are injured and dead in christchurch Mosque?'
# Define your prompt template
prompt_template = """Use the following pieces of information to answer the user's question.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Context: {context}
Question: {question}
Only return the helpful answer below and nothing else. If no context, then no answer.
Helpful Answer:"""
# Assuming `pdf_files` is a list of your PDF files
for pdf_file in pdf_files:
# Load the PDF file
loader = PyPDFLoader(pdf_file)
document = loader.load()
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=1000,
chunk_overlap=200)
# Split the document into chunks
texts = text_splitter.split_documents(document)
# Create a vector database for the document
# vectorstore = FAISS.from_documents(texts, embeddings)
vectorstore = Chroma.from_documents(texts, embeddings)
llm = OpenAI(temperature=0.2)
# Create a retriever for the vector database
document_content_description = "Description of research papers"
metadata_field_info = [
AttributeInfo(
name="title",
description="The title of the research paper.",
type="string",
),
AttributeInfo(
name="institution",
description="The name of the institution or university associated with the research.",
type="string",
),
AttributeInfo(
name="year",
description="The year the research was published.",
type="integer",
),
AttributeInfo(
name="abstract",
description="A brief summary of the research paper.",
type="string",
),
AttributeInfo(
name="methodology",
description="The main research methods used in the study.",
type="string",
),
AttributeInfo(
name="findings",
description="A brief description of the main findings of the research.",
type="string",
),
AttributeInfo(
name="implications",
description="The implications of the research findings.",
type="string",
),
AttributeInfo(
name="reference_count",
description="The number of references cited in the research paper.",
type="integer",
),
AttributeInfo(
name="doi",
description="The Digital Object Identifier for the research paper.",
type="string",
),
]
retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
document_content_description,
metadata_field_info,
verbose=True
)
# Create a chain to answer questions
qa_chain = RetrievalQA.from_chain_type(llm=llm,
chain_type="stuff",
retriever=retriever,
return_source_documents=True)
# # Use the chain to answer a question
# llm_response = qa_chain(query)
# process_llm_response(llm_response)
# Format the prompt using the template
context = ""
question = "how many are injured and dead in christchurch Mosque?"
formatted_prompt = prompt_template.format(context=context, question=question)
# Pass the formatted prompt to the RetrievalQA function
llm_response = qa_chain(formatted_prompt)
process_llm_response(llm_response)
```
The above code returns output like the below:
```
I'm sorry, I don't have enough information to answer your question.
Sources:
/content/11.pdf
51 dead and 49 injured.
Sources:
/content/10.pdf
I'm sorry, I don't have enough context to answer this question.
Sources:
/content/110.pdf
CPU times: user 4.12 s, sys: 68.5 ms, total: 4.19 s
Wall time: 9.8 s
```
How can I print the queries that were self-generated by the SelfQueryRetriever?
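One way to inspect the generated query directly is to run the retriever's query-construction step on its own. This is a sketch: the `query_constructor` attribute is an assumption about the installed LangChain version, so check it exists on your `SelfQueryRetriever` before relying on it.

```python
def print_structured_query(retriever, question: str):
    """Run the retriever's query-construction step by itself and print the
    resulting StructuredQuery, without performing the vector search."""
    structured_query = retriever.query_constructor.invoke({"query": question})
    print(structured_query)
    return structured_query
```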
### Idea or request for content:
_No response_ | how to print the self generated queries by SelfQueryRetriever? | https://api.github.com/repos/langchain-ai/langchain/issues/17037/comments | 1 | 2024-02-05T13:44:44Z | 2024-02-14T03:34:52Z | https://github.com/langchain-ai/langchain/issues/17037 | 2,118,601,820 | 17,037 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Below is the code, which uses MultiQueryRetriever:
```
%%time
# query = 'how many are injured and dead in christchurch Mosque?'
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain.chat_models import ChatOpenAI
# Define your prompt template
prompt_template = """Use the following pieces of information to answer the user's question.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Context: {context}
Question: {question}
Only return the helpful answer below and nothing else. If no context, then no answer.
Helpful Answer:"""
# Assuming `pdf_files` is a list of your PDF files
for pdf_file in pdf_files:
# Load the PDF file
loader = PyPDFLoader(pdf_file)
document = loader.load()
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=1000,
chunk_overlap=200)
# Split the document into chunks
texts = text_splitter.split_documents(document)
# Create a vector database for the document
# vectorStore = FAISS.from_documents(texts, instructor_embeddings)
vectorstore = FAISS.from_documents(texts, embeddings)
# vectorstore = Chroma.from_documents(texts, embeddings)
llm = OpenAI(temperature=0.2)
retriever = MultiQueryRetriever.from_llm(retriever=vectorstore.as_retriever(), llm=llm)
# docs = retriever.get_relevant_documents(query="how many are injured and dead in christchurch Mosque?")
# print(docs)
# Create a chain to answer questions
qa_chain = RetrievalQA.from_chain_type(llm=llm,
chain_type="stuff",
retriever=retriever,
return_source_documents=True)
# # Use the chain to answer a question
# llm_response = qa_chain(query)
# process_llm_response(llm_response)
# Format the prompt using the template
context = ""
question = "how many are injured and dead in christchurch Mosque?"
formatted_prompt = prompt_template.format(context=context, question=question)
# Pass the formatted prompt to the RetrievalQA function
llm_response = qa_chain(formatted_prompt)
process_llm_response(llm_response)
```
I just want to know what queries the MultiQueryRetriever generated. How do I get those?
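MultiQueryRetriever logs the alternative queries it generates; enabling INFO logging for its module prints them. The logger name below is the one given in the LangChain docs, but verify it against your installed version:

```python
import logging

# Print the queries MultiQueryRetriever generates for each request.
logging.basicConfig()
logging.getLogger("langchain.retrievers.multi_query").setLevel(logging.INFO)
```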
### Idea or request for content:
_No response_ | how to get the generated queries output? | https://api.github.com/repos/langchain-ai/langchain/issues/17034/comments | 3 | 2024-02-05T12:10:34Z | 2024-02-14T03:34:51Z | https://github.com/langchain-ai/langchain/issues/17034 | 2,118,387,709 | 17,034 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
llm = ChatVertexAI(model_name="gemini-pro", convert_system_message_to_human=True, temperature=0)
msgs = [SystemMessage(content="Use the following optional pieces of information to fullfil the user's request in French and in markdown format.\n\nPotentially Useful Information:\n\nQuestion: Qu'est-ce qu'une question débile ?"), HumanMessage(content="Qu'est-ce qu'une question débile ?")]
llm.invoke(msgs)
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/workspaces/metabot-backend/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 165, in invoke
self.generate_prompt(
File "/workspaces/metabot-backend/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 543, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspaces/metabot-backend/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 407, in generate
raise e
File "/workspaces/metabot-backend/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 397, in generate
self._generate_with_cache(
File "/workspaces/metabot-backend/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 576, in _generate_with_cache
return self._generate(
^^^^^^^^^^^^^^^
File "/workspaces/metabot-backend/.venv/lib/python3.11/site-packages/langchain_google_vertexai/chat_models.py", line 375, in _generate
response = chat.send_message(
^^^^^^^^^^^^^^^^^^
File "/workspaces/metabot-backend/.venv/lib/python3.11/site-packages/vertexai/generative_models/_generative_models.py", line 709, in send_message
return self._send_message(
^^^^^^^^^^^^^^^^^^^
File "/workspaces/metabot-backend/.venv/lib/python3.11/site-packages/vertexai/generative_models/_generative_models.py", line 805, in _send_message
raise ResponseBlockedError(
vertexai.generative_models._generative_models.ResponseBlockedError: The response was blocked.
### Description
The snippet uses the Gemini Pro model through LangChain to answer a question in French, but it fails because the model's safety filter blocks the response (`ResponseBlockedError`), and LangChain doesn't currently handle or expose configuration for this safety feature.
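Until LangChain handles this, one defensive option is to catch the blocked-response error at the call site. A minimal duck-typed sketch, not the library's API; the exception name is taken from the traceback above:

```python
def invoke_with_block_handling(llm, messages,
                               fallback="[response blocked by safety filter]"):
    """Invoke the chat model, substituting a fallback string when Vertex AI
    blocks the response for safety reasons."""
    try:
        return llm.invoke(messages)
    except Exception as exc:
        # vertexai raises ResponseBlockedError when the response is blocked.
        if type(exc).__name__ == "ResponseBlockedError":
            return fallback
        raise
```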
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Thu Oct 5 21:02:42 UTC 2023
> Python Version: 3.11.7 (main, Jan 26 2024, 08:55:53) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.1.10
> langchain: 0.1.0
> langchain_community: 0.0.12
> langserve: Not Found | Vertex AI Gemini Pro doesn't handle safety measures | https://api.github.com/repos/langchain-ai/langchain/issues/17032/comments | 2 | 2024-02-05T10:49:05Z | 2024-06-08T16:09:36Z | https://github.com/langchain-ai/langchain/issues/17032 | 2,118,222,920 | 17,032 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Below is the code:
```
import os
os.environ["OPENAI_API_KEY"] = ""
from langchain.vectorstores import Chroma
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.document_loaders.csv_loader import CSVLoader
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA
from langchain.document_loaders import PyPDFLoader
from langchain.document_loaders import DirectoryLoader
from langchain.embeddings import OpenAIEmbeddings
import pickle
import faiss
from langchain.vectorstores import FAISS
# InstructorEmbedding
from InstructorEmbedding import INSTRUCTOR
from langchain.embeddings import HuggingFaceInstructEmbeddings
# OpenAI Embedding
from langchain.embeddings import OpenAIEmbeddings
from langchain.embeddings import HuggingFaceInstructEmbeddings
# from langchain_community.embeddings import HuggingFaceInstructEmbeddings
from InstructorEmbedding import INSTRUCTOR
import textwrap
root_dir = "/content/data"
pdf_files = ['11.pdf', '12.pdf', '13.pdf']
instructor_embeddings = HuggingFaceInstructEmbeddings(model_name="hkunlp/instructor-xl",
model_kwargs={"device": "cuda"})
def store_embeddings(docs, embeddings, store_name, path):
    vector_store = FAISS.from_documents(docs, embeddings)
    with open(f"{path}/faiss_{store_name}.pkl", "wb") as f:
        pickle.dump(vector_store, f)

def load_embeddings(store_name, path):
    with open(f"{path}/faiss_{store_name}.pkl", "rb") as f:
        vector_store = pickle.load(f)
    return vector_store
embeddings = OpenAIEmbeddings()
def wrap_text_preserve_newlines(text, width=110):
# Split the input text into lines based on newline characters
lines = text.split('\n')
# Wrap each line individually
wrapped_lines = [textwrap.fill(line, width=width) for line in lines]
# Join the wrapped lines back together using newline characters
wrapped_text = '\n'.join(wrapped_lines)
return wrapped_text
# def process_llm_response(llm_response):
# print(wrap_text_preserve_newlines(llm_response['result']))
# print('\nSources:')
# for source in llm_response["source_documents"]:
# print(source.metadata['source'])
def process_llm_response(llm_response):
print(wrap_text_preserve_newlines(llm_response['result']))
print('\nSources:')
if llm_response["source_documents"]:
# Access the first source document
first_source = llm_response["source_documents"][0]
source_name = first_source.metadata['source']
# row_number = first_source.metadata.get('row', 'Not specified')
# Print the first source's file name and row number
print(f"{source_name}")
print("\n")
else:
print("No sources available.")
# query = 'how many are injured and dead in christchurch Mosque?'
# Define your prompt template
prompt_template = """Use the following pieces of information to answer the user's question.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Context: {context}
Question: {question}
Only return the helpful answer below and nothing else. If no context, then no answer.
Helpful Answer:"""
# Assuming `pdf_files` is a list of your PDF files
for pdf_file in pdf_files:
# Load the PDF file
loader = PyPDFLoader(pdf_file)
document = loader.load()
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=1000,
chunk_overlap=200)
# Split the document into chunks
texts = text_splitter.split_documents(document)
# Create a vector database for the document
# vectorStore = FAISS.from_documents(texts, instructor_embeddings)
vectorStore = FAISS.from_documents(texts, embeddings)
# Create a retriever for the vector database
retriever = vectorStore.as_retriever(search_kwargs={"k": 5})
# Create a chain to answer questions
qa_chain = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0.2),
chain_type="stuff",
retriever=retriever,
return_source_documents=True)
# # Use the chain to answer a question
# llm_response = qa_chain(query)
# process_llm_response(llm_response)
# Format the prompt using the template
context = ""
question = "how many are injured and dead in christchurch Mosque?"
formatted_prompt = prompt_template.format(context=context, question=question)
# Pass the formatted prompt to the RetrievalQA function
llm_response = qa_chain(formatted_prompt)
process_llm_response(llm_response)
```
In the above code, how do I add self-query retrieval?
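The wiring itself is the same as in the SelfQueryRetriever snippets elsewhere in this thread (pass the vector store, a `document_content_description`, and `metadata_field_info` to `SelfQueryRetriever.from_llm`). One prerequisite worth highlighting: self-querying only filters usefully if each chunk actually carries the metadata fields you declare. A minimal duck-typed sketch of attaching such metadata to split chunks (the field names are illustrative):

```python
def attach_metadata(chunks, **fields):
    """Merge the given metadata fields into every chunk so a
    SelfQueryRetriever can filter on them later."""
    for chunk in chunks:
        chunk.metadata.update(fields)
    return chunks
```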
### Idea or request for content:
_No response_ | how to add self query retrieval? | https://api.github.com/repos/langchain-ai/langchain/issues/17031/comments | 8 | 2024-02-05T09:12:55Z | 2024-07-02T15:32:48Z | https://github.com/langchain-ai/langchain/issues/17031 | 2,118,031,252 | 17,031 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Below is the code:
```
import os
os.environ["OPENAI_API_KEY"] = ""
from langchain.vectorstores import Chroma
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.document_loaders.csv_loader import CSVLoader
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA
from langchain.document_loaders import PyPDFLoader
from langchain.document_loaders import DirectoryLoader
from langchain.embeddings import OpenAIEmbeddings
import pickle
import faiss
from langchain.vectorstores import FAISS
# InstructorEmbedding
from InstructorEmbedding import INSTRUCTOR
from langchain.embeddings import HuggingFaceInstructEmbeddings
# OpenAI Embedding
from langchain.embeddings import OpenAIEmbeddings
from langchain.embeddings import HuggingFaceInstructEmbeddings
# from langchain_community.embeddings import HuggingFaceInstructEmbeddings
from InstructorEmbedding import INSTRUCTOR
import textwrap
root_dir = "/content/data"
pdf_files = ['/content/documents/11.pdf', '10.pdf', '12.pdf']
instructor_embeddings = HuggingFaceInstructEmbeddings(model_name="hkunlp/instructor-xl",
model_kwargs={"device": "cuda"})
def store_embeddings(docs, embeddings, store_name, path):
    vector_store = FAISS.from_documents(docs, embeddings)
    with open(f"{path}/faiss_{store_name}.pkl", "wb") as f:
        pickle.dump(vector_store, f)

def load_embeddings(store_name, path):
    with open(f"{path}/faiss_{store_name}.pkl", "rb") as f:
        vector_store = pickle.load(f)
    return vector_store
embeddings = OpenAIEmbeddings()
def wrap_text_preserve_newlines(text, width=110):
# Split the input text into lines based on newline characters
lines = text.split('\n')
# Wrap each line individually
wrapped_lines = [textwrap.fill(line, width=width) for line in lines]
# Join the wrapped lines back together using newline characters
wrapped_text = '\n'.join(wrapped_lines)
return wrapped_text
# def process_llm_response(llm_response):
# print(wrap_text_preserve_newlines(llm_response['result']))
# print('\nSources:')
# for source in llm_response["source_documents"]:
# print(source.metadata['source'])
def process_llm_response(llm_response):
print(wrap_text_preserve_newlines(llm_response['result']))
print('\nSources:')
if llm_response["source_documents"]:
# Access the first source document
first_source = llm_response["source_documents"][0]
source_name = first_source.metadata['source']
# row_number = first_source.metadata.get('row', 'Not specified')
# Print the first source's file name and row number
print(f"{source_name}")
print("\n")
else:
print("No sources available.")
query = 'how many are injured and dead in christchurch Mosque?'
# Assuming `pdf_files` is a list of your PDF files
for pdf_file in pdf_files:
# Load the PDF file
loader = PyPDFLoader(pdf_file)
document = loader.load()
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=1000,
chunk_overlap=200)
# Split the document into chunks
texts = text_splitter.split_documents(document)
# Create a vector database for the document
# vectorStore = FAISS.from_documents(texts, instructor_embeddings)
vectorStore = FAISS.from_documents(texts, embeddings)
# Create a retriever for the vector database
retriever = vectorStore.as_retriever(search_kwargs={"k": 5})
# Create a chain to answer questions
qa_chain = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0.2),
chain_type="stuff",
retriever=retriever,
return_source_documents=True)
# Use the chain to answer a question
llm_response = qa_chain(query)
process_llm_response(llm_response)
```
and below is the prompt_template:
```
prompt_template = """Use the following pieces of information to answer the user's question.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Context: {context}
Question: {question}
Only return the helpful answer below and nothing else. If no context, then no answer.
Helpful Answer:"""
```
Can you assist me with how to integrate prompt_template into the existing code?
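For reference, a minimal pure-Python sketch of what the template integration boils down to: the retrieved chunks are joined into `{context}` and the query fills `{question}`. (In LangChain versions of this vintage, the usual way to hand such a template to `RetrievalQA.from_chain_type` is believed to be `chain_type_kwargs={"prompt": PROMPT}` with a `PromptTemplate`; treat that as an assumption to verify against your installed version. `build_prompt` below is a hypothetical helper, not a LangChain API.)

```python
# Toy stand-in for what the "stuff" chain does with the template:
# retrieved chunks become {context}, the user's query fills {question}.
prompt_template = (
    "Use the following pieces of information to answer the user's question.\n"
    "If you don't know the answer, just say that you don't know, "
    "don't try to make up an answer.\n\n"
    "Context: {context}\n"
    "Question: {question}\n\n"
    "Only return the helpful answer below and nothing else. "
    "If no context, then no answer.\n"
    "Helpful Answer:"
)

def build_prompt(chunks, question):
    # Join the retrieved document chunks into a single context block
    context = "\n\n".join(chunks)
    return prompt_template.format(context=context, question=question)

filled = build_prompt(
    ["Chunk one about the attack.", "Chunk two with casualty figures."],
    "how many are injured and dead in christchurch Mosque?",
)
print(filled)
```

The same filled string is what the LLM would receive for each PDF's retrieved chunks.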
### Idea or request for content:
_No response_ | how to add prompt template to RetrivealQA function? | https://api.github.com/repos/langchain-ai/langchain/issues/17029/comments | 6 | 2024-02-05T08:00:36Z | 2024-02-14T03:34:51Z | https://github.com/langchain-ai/langchain/issues/17029 | 2,117,905,289 | 17,029 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
Here is my code, which uses ConversationBufferMemory to store the memory:

```python
os.environ['OPENAI_API_KEY'] = openapi_key

# Define connection parameters using constants
from urllib.parse import quote_plus

server_name = constants.server_name
database_name = constants.database_name
username = constants.username
password = constants.password
encoded_password = quote_plus(password)
connection_uri = f"mssql+pyodbc://{username}:{encoded_password}@{server_name}/{database_name}?driver=ODBC+Driver+17+for+SQL+Server"

# Create an engine to connect to the SQL database
engine = create_engine(connection_uri)

model_name = "gpt-3.5-turbo-16k"
llm = ChatOpenAI(temperature=0, verbose=False, model=model_name)
model_name = "gpt-3.5-turbo-instruct"
db = SQLDatabase(engine, view_support=True, include_tables=['RND360_ChatGPT_BankView'])
db = SQLDatabase(engine, view_support=True, include_tables=['EGV_emp_departments_ChatGPT'])
db_chain = None
llm = ChatOpenAI(temperature=0, verbose=False, model=model_name)

PROMPT = """
Given an input question, create a syntactically correct MSSQL query by considering only the matching column names from the question,
then look at the results of the query and return the answer.
If a column name is not present, refrain from writing the SQL query. Columns like UAN number and PF number are not present; do not consider such columns.
Write the query only for the column names which are present in the view.
Execute the query and analyze the results to formulate a response.
Return the answer in sentence form.
The question: {question}
"""

PROMPT_SUFFIX = """Only use the following tables:
{table_info}

Previous Conversation:
{history}

Question: {input}"""

_DEFAULT_TEMPLATE = """Given an input question, first create a syntactically correct {dialect} query to run,
then look at the results of the query and return the answer.
Unless the user specifies in his question a specific number of examples he wishes to obtain,
If a column name is not present, refrain from writing the SQL query. Columns like UAN number and PF number are not present; do not consider such columns.
You can order the results by a relevant column to return the most interesting examples in the database.
Never query for all the columns from a specific table, only ask for the few relevant columns given the question.
Pay attention to use only the column names that you can see in the schema description. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table.
Return the answer in user friendly form.
"""

PROMPT = PromptTemplate.from_template(_DEFAULT_TEMPLATE + PROMPT_SUFFIX)

memory = None

# Define a function named chat that takes a question and SQL format indicator as input
def chat1(question):
    global db_chain
    global memory

    if memory is None:
        # llm = ChatOpenAI(temperature=0, verbose=False, model=model_name)
        memory = ConversationBufferMemory()
        # db_chain = SQLDatabaseChain.from_llm(llm, db, prompt=PROMPT, verbose=True, memory=memory)
        db_chain = SQLDatabaseChain.from_llm(llm, db, prompt=PROMPT, verbose=True, memory=memory)

    # while True:
    #     try:
    #         print("*****")
    #         print(memory.load_memory_variables({})['history'])
    #         question = input("Enter your Question : ")
    greetings = ["hi", "hello", "hey"]
    if any(greeting == question.lower() for greeting in greetings):
        print(question)
        print("Hello! How can I assist you today?")
        return "Hello! How can I assist you today?"
    else:
        answer = db_chain.run(question)
        print(memory.load_memory_variables({}))
        return answer
```
### Error Message and Stack Trace (if applicable)
Entering new SQLDatabaseChain chain...
what is jyothi employee id
SQLQuery:SELECT EmployeeID FROM EGV_emp_departments_ChatGPT WHERE EmployeeName = 'Jyothi'
SQLResult: [('AD23020933',)]
Answer:Jyothi's employee ID is AD23020933.
Finished chain.
{'history': "Human: what is jyothi employee id\nAI: Jyothi's employee ID is AD23020933."}
Jyothi's employee ID is AD23020933.
127.0.0.1 - - [05/Feb/2024 11:01:28] "GET /get_answer?questions=what%20is%20jyothi%20employee%20id HTTP/1.1" 200 -
what is her mail id
Entering new SQLDatabaseChain chain...
what is her mail id
SQLQuery:SELECT UserMail
FROM EGV_emp_departments_ChatGPT
WHERE EmployeeName = 'Jyothi'
Answer: Jyothi's email ID is jyothi@example.com.
[2024-02-05 11:01:45,039] ERROR in app: Exception on /get_answer [GET]
Traceback (most recent call last):
File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\sqlalchemy\engine\base.py", line 1969, in _exec_single_context
self.dialect.do_execute(
File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\sqlalchemy\engine\default.py", line 922, in do_execute
cursor.execute(statement, parameters)
pyodbc.ProgrammingError: ('42000', "[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Incorrect syntax near 'Jyothi'. (102) (SQLExecDirectW); [42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Unclosed quotation mark after the character string 's email ID is jyothi@example.com.'. (105)")
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\flask\app.py", line 1455, in wsgi_app
response = self.full_dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\flask\app.py", line 869, in full_dispatch_request
rv = self.handle_user_exception(e)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\flask_cors\extension.py", line 176, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
^^^^^^^^^^^^^^^^^^
File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\flask\app.py", line 867, in full_dispatch_request
rv = self.dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\flask\app.py", line 852, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\rndbcpsoft\OneDrive\Desktop\test\flask8.py", line 46, in generate_answer
answer = chat1(questions)
^^^^^^^^^^^^^^^^
File "c:\Users\rndbcpsoft\OneDrive\Desktop\test\main5.py", line 180, in chat1
answer = db_chain.run(question)
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\langchain\chains\base.py", line 505, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\langchain\chains\base.py", line 310, in call
raise e
File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\langchain\chains\base.py", line 304, in call
self._call(inputs, run_manager=run_manager)
File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\langchain_experimental\sql\base.py", line 208, in _call
raise exc
File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\langchain_experimental\sql\base.py", line 143, in _call
result = self.database.run(sql_cmd)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\langchain\utilities\sql_database.py", line 433, in run
result = self._execute(command, fetch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\langchain\utilities\sql_database.py", line 411, in _execute
cursor = connection.execute(text(command))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\sqlalchemy\engine\base.py", line 1416, in execute
return meth(
^^^^^
File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\sqlalchemy\sql\elements.py", line 516, in _execute_on_connection
return connection._execute_clauseelement(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\sqlalchemy\engine\base.py", line 1639, in _execute_clauseelement
ret = self._execute_context(
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\sqlalchemy\engine\base.py", line 1848, in _execute_context
return self._exec_single_context(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\sqlalchemy\engine\base.py", line 1988, in _exec_single_context
self._handle_dbapi_exception(
File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\sqlalchemy\engine\base.py", line 2343, in _handle_dbapi_exception
raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\sqlalchemy\engine\base.py", line 1969, in _exec_single_context
self.dialect.do_execute(
File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\sqlalchemy\engine\default.py", line 922, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (pyodbc.ProgrammingError) ('42000', "[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Incorrect syntax near 'Jyothi'. (102) (SQLExecDirectW); [42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Unclosed quotation mark after the character string 's email ID is jyothi@example.com.'. (105)")
[SQL: SELECT UserMail
FROM EGV_emp_departments_ChatGPT
WHERE EmployeeName = 'Jyothi'
Answer: Jyothi's email ID is jyothi@example.com.]
(Background on this error at: https://sqlalche.me/e/20/f405)
127.0.0.1 - - [05/Feb/2024 11:01:45] "GET /get_answer?questions=what%20is%20her%20mail%20id HTTP/1.1" 500 -
what is employee name of ad22050853
Entering new SQLDatabaseChain chain...
what is employee name of ad22050853
SQLQuery:SELECT EmployeeName
FROM EGV_emp_departments_ChatGPT
WHERE EmployeeID = 'AD22050853'
SQLResult: [('Harin Vimal Bharathi',)]
Answer:The employee name of AD22050853 is Harin Vimal Bharathi.
Finished chain.
{'history': "Human: what is jyothi employee id\nAI: Jyothi's employee ID is AD23020933.\nHuman: what is employee name of ad22050853\nAI: The employee name of AD22050853 is Harin Vimal Bharathi."}
The employee name of AD22050853 is Harin Vimal Bharathi.
127.0.0.1 - - [05/Feb/2024 11:03:48] "GET /get_answer?questions=%20what%20is%20employee%20name%20of%20ad22050853 HTTP/1.1" 200 -
### Description
Here, when I ask the 2nd question, it throws the error "pyodbc.ProgrammingError: ('42000', "[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Incorrect syntax near 'Jyothi'."", but when I ask the 3rd question, which is not related to the first two, it returns an answer and stores the memory.
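The failure visible in the trace is that the model's completion contained both the SQL statement and the natural-language answer, and the whole string was sent to SQL Server. A defensive sketch (not a SQLDatabaseChain feature; `extract_sql` is a hypothetical helper) that truncates the completion at the first `SQLResult:`/`Answer:` marker before execution:

```python
import re

def extract_sql(llm_output):
    """Hypothetical helper: keep only the SQL statement from a completion
    that may also contain 'SQLResult:' or 'Answer:' sections, which is the
    failure mode visible in the stack trace above."""
    cleaned = re.split(r"\bSQLResult:|\bAnswer:", llm_output, maxsplit=1)[0]
    return cleaned.strip()

# The shape of the bad completion from the log:
bad_completion = (
    "SELECT UserMail\n"
    "FROM EGV_emp_departments_ChatGPT\n"
    "WHERE EmployeeName = 'Jyothi'\n"
    "Answer: Jyothi's email ID is jyothi@example.com."
)
print(extract_sql(bad_completion))
```

Applying something like this to the generated query before `self.database.run(...)` would keep the stray "Answer:" text from reaching the database.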
### System Info
python: 3.11
langchain: latest | while using ConversationBufferMemory to store the memory i the chatbot "sqlalchemy.exc.ProgrammingError: (pyodbc.ProgrammingError) ('42000', "[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Incorrect syntax near " | https://api.github.com/repos/langchain-ai/langchain/issues/17026/comments | 9 | 2024-02-05T05:57:05Z | 2024-05-14T16:08:16Z | https://github.com/langchain-ai/langchain/issues/17026 | 2,117,740,798 | 17,026 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain_core.messages import _message_from_dict
_message_from_dict({"type": "ChatMessageChunk", "data": {...}})
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am using the memory runnables and hitting an issue when ChatMessageChunk types are used.
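An illustrative sketch of the kind of type-to-class dispatch the report describes, with string stand-ins instead of the real message classes; it only mirrors the behavior implied by the report, not the actual `_message_from_dict` source:

```python
# Illustrative only: the point is the chunk branch that the report says
# is missing from the dispatch table.
def message_from_dict(message):
    type_to_cls = {
        "human": "HumanMessage",
        "ai": "AIMessage",
        "chat": "ChatMessage",
        # chunk variants the report says are not handled:
        "ChatMessageChunk": "ChatMessageChunk",
        "AIMessageChunk": "AIMessageChunk",
    }
    try:
        return type_to_cls[message["type"]]
    except KeyError:
        raise ValueError(f"Got unexpected message type: {message['type']}")

print(message_from_dict({"type": "ChatMessageChunk", "data": {}}))
```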
### System Info
issue exists in latest:
https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/messages/__init__.py#L71-L96 | The helper function for converting dicts to message types doesn't handle ChatMessageChunk message types. | https://api.github.com/repos/langchain-ai/langchain/issues/17022/comments | 1 | 2024-02-05T04:11:20Z | 2024-05-13T16:10:22Z | https://github.com/langchain-ai/langchain/issues/17022 | 2,117,636,935 | 17,022 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [x] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
class EmbeddingStore(BaseModel):
    """Embedding store."""

    __tablename__ = "langchain_pg_embedding"

    collection_id = sqlalchemy.Column(
        UUID(as_uuid=True),
        sqlalchemy.ForeignKey(
            f"{CollectionStore.__tablename__}.uuid",
            ondelete="CASCADE",
        ),
    )
    collection = relationship(CollectionStore, back_populates="embeddings")

    embedding: Vector = sqlalchemy.Column(Vector(vector_dimension))
    document = sqlalchemy.Column(sqlalchemy.String, nullable=True)
    # Using JSONB is better to process special characters
    cmetadata = sqlalchemy.Column(JSON, nullable=True)

    # custom_id : any user defined id
    custom_id = sqlalchemy.Column(sqlalchemy.String, nullable=True)

_classes = (EmbeddingStore, CollectionStore)

return _classes
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I found that the column langchain_pg_embedding.cmetadata uses the JSON type, so when I save text containing special characters it is stored as Unicode escapes rather than UTF-8. I looked at the source code of the pgvector.py file and found the EmbeddingStore class. When cmetadata is defined with the JSONB type instead, special characters are not saved as Unicode escapes but as UTF-8, and the column can also be queried with SQL. Could this part be modified in the main branch of the new version?
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.3.0: Wed Dec 20 21:30:44 PST 2023; root:xnu-10002.81.5~7/RELEASE_ARM64_T6000
> Python Version: 3.11.4 | packaged by conda-forge | (main, Jun 10 2023, 18:08:41) [Clang 15.0.7 ]
Package Information
-------------------
> langchain_core: 0.1.17
> langchain: 0.1.4
> langchain_community: 0.0.16
| The column langchain_pg_embedding.cmetadata uses the json type resulting in special characters being processed into unicode | https://api.github.com/repos/langchain-ai/langchain/issues/17020/comments | 1 | 2024-02-05T03:36:08Z | 2024-05-13T16:10:19Z | https://github.com/langchain-ai/langchain/issues/17020 | 2,117,607,473 | 17,020 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
import pandas as pd
from langchain_experimental.agents.agent_toolkits import create_pandas_dataframe_agent
from typing import Any, List, Mapping, Optional
from langchain_core.callbacks.manager import CallbackManagerForLLMRun
from langchain_core.language_models.llms import LLM


class CustomLLM(LLM):
    n: int

    @property
    def _llm_type(self) -> str:
        return "custom"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        if stop is not None:
            raise ValueError("stop kwargs are not permitted.")
        return prompt[: self.n]

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        """Get the identifying parameters."""
        return {"n": self.n}


df = pd.read_csv("./titanic.csv")
llm = CustomLLM(n=10)
agent = create_pandas_dataframe_agent(llm, df, verbose=True)
result = agent.run("How many of the men on this ship survived?")  # original query: 这艘船上存活的男性有多少人
print(result)
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/langchain_demos.py", line 30, in <module>
agent = create_pandas_dataframe_agent(llm,df,verbose=True,input_variables=input_vars)
File "/Users/yyl/opt/anaconda3/envs/ragpy310/lib/python3.10/site-packages/langchain_experimental/agents/agent_toolkits/pandas/base.py", line 264, in create_pandas_dataframe_agent
runnable=create_react_agent(llm, tools, prompt), # type: ignore
File "/Users/yyl/opt/anaconda3/envs/ragpy310/lib/python3.10/site-packages/langchain/agents/react/agent.py", line 97, in create_react_agent
raise ValueError(f"Prompt missing required variables: {missing_vars}")
ValueError: Prompt missing required variables: {'tool_names', 'tools'}
### Description
I'm trying to use the pandas agent function from Langchain and encountering this error.
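The traceback shows the error is raised by a simple set-difference check on the prompt's declared input variables in `create_react_agent`. A stand-alone sketch of that check (mirroring the quoted `agent.py` lines; the required set in your installed version may also include `agent_scratchpad`, which is an assumption here):

```python
# Sketch of the validation quoted in the traceback
# (langchain/agents/react/agent.py): create_react_agent raises if the
# prompt does not declare the variables it needs. Only the two names
# from the error message are checked in this sketch.
def check_prompt_variables(prompt_input_variables):
    missing_vars = {"tools", "tool_names"}.difference(prompt_input_variables)
    if missing_vars:
        raise ValueError(f"Prompt missing required variables: {missing_vars}")

# A prompt template without {tools}/{tool_names} placeholders reproduces the failure:
try:
    check_prompt_variables(["input", "agent_scratchpad"])
except ValueError as err:
    print(err)

# A ReAct-style template that declares them passes:
check_prompt_variables(["input", "agent_scratchpad", "tools", "tool_names"])
```

So the default prompt built by `create_pandas_dataframe_agent` in this version apparently does not declare `{tools}`/`{tool_names}`, which is why the agent construction fails before the LLM is ever called.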
### System Info
langchain 0.1.5
langchain-community 0.0.17
langchain-core 0.1.18
langchain-experimental 0.0.50
macos
python 3.10.13 | Pandas agent got error:Prompt missing required variables: {'tool_names', 'tools'} | https://api.github.com/repos/langchain-ai/langchain/issues/17019/comments | 17 | 2024-02-05T01:51:46Z | 2024-07-23T13:33:36Z | https://github.com/langchain-ai/langchain/issues/17019 | 2,117,496,757 | 17,019 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
The provided code combines multiple PDF files into one and then extracts a single answer from all of them using a vector DB. But I'm interested in code that can extract answers separately from each individual PDF file.
```
import os
os.environ["OPENAI_API_KEY"] = ""
from langchain.vectorstores import Chroma
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.document_loaders.csv_loader import CSVLoader
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA
from langchain.document_loaders import PyPDFLoader
from langchain.document_loaders import DirectoryLoader
import pickle
import faiss
from langchain.vectorstores import FAISS
import textwrap
# InstructorEmbedding
from InstructorEmbedding import INSTRUCTOR
from langchain.embeddings import HuggingFaceInstructEmbeddings
# OpenAI Embedding
from langchain.embeddings import OpenAIEmbeddings
"""### Load Multiple files from Directory"""
root_dir = "/content/data"
# loader = TextLoader('single_text_file.txt')
loader = DirectoryLoader(f'/content/documents', glob="./*.pdf", loader_cls=PyPDFLoader)
documents = loader.load()
"""### Divide and Conquer"""
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=1000,
chunk_overlap=200)
texts = text_splitter.split_documents(documents)
"""### Get Embeddings for OUR Documents"""
# !pip install faiss-cpu
def store_embeddings(docs, embeddings, store_name, path):
    vectorStore = FAISS.from_documents(docs, embeddings)
    with open(f"{path}/faiss_{store_name}.pkl", "wb") as f:
        pickle.dump(vectorStore, f)

def load_embeddings(store_name, path):
    with open(f"{path}/faiss_{store_name}.pkl", "rb") as f:
        VectorStore = pickle.load(f)
    return VectorStore
"""### HF Instructor Embeddings"""
from langchain.embeddings import HuggingFaceInstructEmbeddings
# from langchain_community.embeddings import HuggingFaceInstructEmbeddings
from InstructorEmbedding import INSTRUCTOR
instructor_embeddings = HuggingFaceInstructEmbeddings(model_name="hkunlp/instructor-xl",
model_kwargs={"device": "cuda"})
Embedding_store_path = f"{root_dir}/Embedding_store"
db_instructEmbedd = FAISS.from_documents(texts, instructor_embeddings)
retriever = db_instructEmbedd.as_retriever(search_kwargs={"k": 5})
docs = retriever.get_relevant_documents("which method did Ventirozos use?")
# create the chain to answer questions
qa_chain_instrucEmbed = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0.2),
chain_type="stuff",
retriever=retriever,
return_source_documents=True)
"""### OpenAI's Embeddings"""
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
db_openAIEmbedd = FAISS.from_documents(texts, embeddings)
retriever_openai = db_openAIEmbedd.as_retriever(search_kwargs={"k": 5})
# create the chain to answer questions
qa_chain_openai = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0.2, ),
chain_type="stuff",
retriever=retriever_openai,
return_source_documents=True)
def wrap_text_preserve_newlines(text, width=110):
# Split the input text into lines based on newline characters
lines = text.split('\n')
# Wrap each line individually
wrapped_lines = [textwrap.fill(line, width=width) for line in lines]
# Join the wrapped lines back together using newline characters
wrapped_text = '\n'.join(wrapped_lines)
return wrapped_text
def process_llm_response(llm_response):
print(wrap_text_preserve_newlines(llm_response['result']))
print('\nSources:')
if llm_response["source_documents"]:
# Access the first source document
first_source = llm_response["source_documents"][0]
source_name = first_source.metadata['source']
row_number = first_source.metadata.get('row', 'Not specified')
# Print the first source's file name and row number
print(f"{source_name}, Row: {row_number}")
else:
print("No sources available.")
query = 'which method did Ventirozos use??'
print('-------------------Instructor Embeddings------------------\n')
llm_response = qa_chain_instrucEmbed(query)
process_llm_response(llm_response)
query = 'which method did Ventirozos use??'
print('-------------------OpenAI Embeddings------------------\n')
llm_response = qa_chain_openai(query)
process_llm_response(llm_response)
```
Can you have a look at the above code and help me with this? I presume we need to save vector DBs separately for every PDF, then iterate through them and return an answer for every PDF file.
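A dependency-free sketch of the per-file pattern being asked about: build one index per PDF and query each separately, instead of merging everything into a single store. With LangChain this would presumably be one FAISS store plus one `RetrievalQA` chain per file; `build_store`/`answer_from_store` below are toy stand-ins, not LangChain APIs.

```python
def build_store(chunks):
    # Placeholder "index": here just the raw chunks; in LangChain this
    # would be FAISS.from_documents(chunks, embeddings) per file.
    return list(chunks)

def answer_from_store(store, query):
    # Placeholder "retrieval": return the first chunk sharing a word with
    # the query; in LangChain this would be a per-file RetrievalQA call.
    words = query.lower().split()
    hits = [c for c in store if any(w in c.lower() for w in words)]
    return hits[0] if hits else "No answer found in this file."

pdf_chunks = {
    "11.pdf": ["Ventirozos used method A.", "Background section."],
    "10.pdf": ["This paper evaluates method B."],
}

answers = {}
for name, chunks in pdf_chunks.items():
    store = build_store(chunks)  # one store per PDF
    answers[name] = answer_from_store(store, "which method did Ventirozos use?")

for name, ans in answers.items():
    print(name, "->", ans)
```

The key point is the loop: each file gets its own store and its own question-answering call, so the sources never mix.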
### Idea or request for content:
_No response_ | Unable to return answers from every pdf | https://api.github.com/repos/langchain-ai/langchain/issues/17008/comments | 3 | 2024-02-04T19:02:29Z | 2024-02-14T03:34:50Z | https://github.com/langchain-ai/langchain/issues/17008 | 2,117,259,978 | 17,008 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
embedding = HuggingFaceBgeEmbeddings(
model_name=model,
model_kwargs={'device': 'cuda'},
encode_kwargs={'normalize_embeddings': True}
)
connection_args = {
'uri': milvus_cfg.REMOTE_DATABASE['url'],
'user': milvus_cfg.REMOTE_DATABASE['username'],
'password': milvus_cfg.REMOTE_DATABASE['password'],
'secure': True,
}
vector_db = Milvus(
embedding,
collection_name=collection,
connection_args=connection_args,
drop_old=True,
auto_id=True,
)
# I omitted some document split part here
md_docs = r_splitter.split_documents(head_split_docs)
vector_db.add_documents(md_docs)
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "D:\program\python\KnowledgeBot\InitDatabase.py", line 100, in <module>
load_md(config.MD_PATH)
File "D:\program\python\KnowledgeBot\utils\TimeUtil.py", line 8, in wrapper
result = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "D:\program\python\KnowledgeBot\InitDatabase.py", line 82, in load_md
vector_db.add_documents(md_docs)
File "D:\miniconda3\envs\KnowledgeBot\Lib\site-packages\langchain_core\vectorstores.py", line 119, in add_documents
return self.add_texts(texts, metadatas, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\miniconda3\envs\KnowledgeBot\Lib\site-packages\langchain_community\vectorstores\milvus.py", line 586, in add_texts
insert_list = [insert_dict[x][i:end] for x in self.fields]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\miniconda3\envs\KnowledgeBot\Lib\site-packages\langchain_community\vectorstores\milvus.py", line 586, in <listcomp>
insert_list = [insert_dict[x][i:end] for x in self.fields]
~~~~~~~~~~~^^^
KeyError: 'pk'
### Description
This is how my original code looked like:
```python
vector_db = Milvus(
embedding,
collection_name=collection,
connection_args=connection_args,
drop_old=True
)
```
It was able to run successfully.
The version information at that time was:
- python: 3.11
- langchain==0.1.4
- langchain_community==0.0.16
- pymilvus==2.3.5
However, when I updated the version information and tried to run it directly, an error occurred:
_A list of valid ids are required when auto_id is False_
By checking, I found that a new parameter called `auto_id` was added. And after I modified the Milvus setting to the code like this:
```python
vector_db = Milvus(
embedding,
collection_name=collection,
connection_args=connection_args,
drop_old=True,
auto_id=True
)
```
the error has changed to the current one.
### System Info
- python: 3.11
- langchain==0.1.5
- langchain_community==0.0.17
- pymilvus==2.3.6 | An error occurred while adding a document to the Zilliz vectorstore | https://api.github.com/repos/langchain-ai/langchain/issues/17006/comments | 8 | 2024-02-04T16:24:36Z | 2024-05-21T03:13:23Z | https://github.com/langchain-ai/langchain/issues/17006 | 2,117,180,017 | 17,006 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
loader = CSVLoader(file_path='file.csv', csv_args={
    'fieldnames': ['column_name_inside_dataset'],  # if this line is uncommented, load() fails
    "delimiter": ',',
})
docs = loader.load()
```
### Error Message and Stack Trace (if applicable)
```code
AttributeError Traceback (most recent call last)
File ~/code/github/trustrelay/openai-poc/venv/lib/python3.10/site-packages/langchain_community/document_loaders/csv_loader.py:70, in CSVLoader.load(self)
     69 with open(self.file_path, newline="", encoding=self.encoding) as csvfile:
---> 70     docs = self.__read_file(csvfile)
     71 except UnicodeDecodeError as e:

File ~/code/github/trustrelay/openai-poc/venv/lib/python3.10/site-packages/langchain_community/document_loaders/csv_loader.py:105, in CSVLoader.__read_file(self, csvfile)
    102 raise ValueError(
    103     f"Source column '{self.source_column}' not found in CSV file."
    104 )
--> 105 content = "\n".join(
    106     f"{k.strip()}: {v.strip() if v is not None else v}"
    107     for k, v in row.items()
    108     if k not in self.metadata_columns
    109 )
    110 metadata = {"source": source, "row": i}

File ~/code/github/trustrelay/openai-poc/venv/lib/python3.10/site-packages/langchain_community/document_loaders/csv_loader.py:106, in <genexpr>(.0)
    102 raise ValueError(
    103     f"Source column '{self.source_column}' not found in CSV file."
    104 )
    105 content = "\n".join(
--> 106     f"{k.strip()}: {v.strip() if v is not None else v}"
[107](https://vscode-remote+wsl-002bubuntu.vscode-resource.vscode-cdn.net/home/jorge/code/github/trustrelay/openai-poc/~/code/github/trustrelay/openai-poc/venv/lib/python3.10/site-packages/langchain_community/document_loaders/csv_loader.py:107) for k, v in row.items()
...
[85](https://vscode-remote+wsl-002bubuntu.vscode-resource.vscode-cdn.net/home/jorge/code/github/trustrelay/openai-poc/~/code/github/trustrelay/openai-poc/venv/lib/python3.10/site-packages/langchain_community/document_loaders/csv_loader.py:85) except Exception as e:
---> [86](https://vscode-remote+wsl-002bubuntu.vscode-resource.vscode-cdn.net/home/jorge/code/github/trustrelay/openai-poc/~/code/github/trustrelay/openai-poc/venv/lib/python3.10/site-packages/langchain_community/document_loaders/csv_loader.py:86) raise RuntimeError(f"Error loading {self.file_path}") from e
[88](https://vscode-remote+wsl-002bubuntu.vscode-resource.vscode-cdn.net/home/jorge/code/github/trustrelay/openai-poc/~/code/github/trustrelay/openai-poc/venv/lib/python3.10/site-packages/langchain_community/document_loaders/csv_loader.py:88) return docs
```
### Description
As illustrated in the code example, passing the `fieldnames` key within `csv_args` makes the loader fail when loading the document.
If fieldnames is not passed, printing doc[0].page_content correctly loads all columns, including the column that I wanted to filter in.
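A minimal, LangChain-free sketch of what appears to go wrong (an assumption based on the stack trace, since `csv_args` is passed straight to `csv.DictReader`): when `fieldnames` is supplied, `DictReader` treats the header row as a data row, and any extra values land under the `None` restkey as a *list*:

```python
import csv
import io

# Hypothetical CSV with a header and three columns; we pass only two
# fieldnames, mimicking CSVLoader(csv_args={"fieldnames": [...]}).
data = "a,b,c\n1,2,3\n"
reader = csv.DictReader(io.StringIO(data), fieldnames=["a", "b"])
row = next(reader)  # the header row itself is now treated as data
print(row)  # {'a': 'a', 'b': 'b', None: ['c']}
# CSVLoader then evaluates f"{k.strip()}: {v.strip() ...}" per item, which
# raises AttributeError on the None key / list value, and that surfaces
# as "Error loading <file>".
```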
### System Info
langchain==0.1.5
langchain-community==0.0.17
langchain-core==0.1.18 | Passing dictionary key "fieldnames" within csv_args paramenter to CSVLoader fails | https://api.github.com/repos/langchain-ai/langchain/issues/17001/comments | 3 | 2024-02-04T12:48:43Z | 2024-02-04T13:03:28Z | https://github.com/langchain-ai/langchain/issues/17001 | 2,117,082,044 | 17,001 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain_community.llms import LlamaCpp
from langchain_core.runnables import ConfigurableField
def main():
# Callbacks support token-wise streaming
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
n_gpu_layers = -1 # The number of layers to put on the GPU. The rest will be on the CPU. If you don't know how many layers there are, you can use -1 to move all to GPU.
n_batch = 512 # Should be between 1 and n_ctx, consider the amount of VRAM in your GPU.
config: dict = {
'model_path': "../../holograph-llm/backend/models/zephyr-7b-beta.Q2_K.gguf",
'n_gpu_layers': n_gpu_layers,
'n_batch': n_batch,
'echo': True,
'callback_manager': callback_manager,
'verbose': True, # Verbose is required to pass to the callback manager
"max_tokens": 250,
"temperature": 0.1
}
# Make sure the model path is correct for your system!
llm_a = LlamaCpp(**config).configurable_fields(
temperature=ConfigurableField(
id="llm_temperature",
name="LLM Temperature",
description="The temperature of the LLM",
),
max_tokens=ConfigurableField(
id="llm_max_tokens",
name="LLM max output tokens",
description="The maximum number of tokens to generate",
),
top_p=ConfigurableField(
id="llm_top_p",
name="LLM top p",
description="The top-p value to use for sampling",
),
top_k=ConfigurableField(
id="llm_top_k",
name="LLM top-k",
description="The top-k value to use for sampling",
)
)
# Working call that overrides the temp, if you removed conditional import of LlamaGrammar.
llm_a.with_config(configurable={
"llm_temperature": 0.9,
"llm_top_p": 0.9,
"llm_top_k": 0.2,
"llm_max_tokens": 15,
}).invoke("pick a random number")
if __name__ == "__main__":
main()
```
A notebook replicating the issue to open in Google Colab on a T4 is available [here](https://gist.github.com/fpaupier/d978e8809bc2b699df9ea3c12c433080)
### Error Message and Stack Trace (if applicable)
```log
---------------------------------------------------------------------------
ConfigError Traceback (most recent call last)
<ipython-input-10-16b4c6671e0e> in <cell line: 1>()
4 "llm_top_k": 0.2,
5 "llm_max_tokens": 15,
----> 6 }).invoke("pick a random number")
/usr/local/lib/python3.10/dist-packages/langchain_core/runnables/base.py in invoke(self, input, config, **kwargs)
4039 **kwargs: Optional[Any],
4040 ) -> Output:
-> 4041 return self.bound.invoke(
4042 input,
4043 self._merge_configs(config),
/usr/local/lib/python3.10/dist-packages/langchain_core/runnables/configurable.py in invoke(self, input, config, **kwargs)
92 self, input: Input, config: Optional[RunnableConfig] = None, **kwargs: Any
93 ) -> Output:
---> 94 runnable, config = self._prepare(config)
95 return runnable.invoke(input, config, **kwargs)
96
/usr/local/lib/python3.10/dist-packages/langchain_core/runnables/configurable.py in _prepare(self, config)
289 if configurable:
290 return (
--> 291 self.default.__class__(**{**self.default.__dict__, **configurable}),
292 config,
293 )
/usr/local/lib/python3.10/dist-packages/langchain_core/load/serializable.py in __init__(self, **kwargs)
105
106 def __init__(self, **kwargs: Any) -> None:
--> 107 super().__init__(**kwargs)
108 self._lc_kwargs = kwargs
109
/usr/local/lib/python3.10/dist-packages/pydantic/main.cpython-310-x86_64-linux-gnu.so in pydantic.main.BaseModel.__init__()
/usr/local/lib/python3.10/dist-packages/pydantic/main.cpython-310-x86_64-linux-gnu.so in pydantic.main.validate_model()
/usr/local/lib/python3.10/dist-packages/pydantic/fields.cpython-310-x86_64-linux-gnu.so in pydantic.fields.ModelField.validate()
ConfigError: field "grammar" not yet prepared so type is still a ForwardRef, you might need to call LlamaCpp.update_forward_refs().
```
### Description
- I'm trying to pass parameters to my `LlamaCpp` model at inference, such as `temperature`, as described in the [Langchain doc](https://python.langchain.com/docs/expression_language/how_to/configure).
- Yet, when using `configurable`, a new instance of your LLM is created at inference;
see the [`_prepare`](https://github.com/langchain-ai/langchain/blob/849051102a6e315072e3a1d8dfdcee1527436c92/libs/core/langchain_core/runnables/configurable.py#L94) function in `langchain_core/runnables/configurable.py`.
```python
return (
self.default.__class__(**{**self.default.__dict__, **configurable}),
config,
)
```
See [source here](https://github.com/langchain-ai/langchain/blob/849051102a6e315072e3a1d8dfdcee1527436c92/libs/core/langchain_core/runnables/configurable.py#L291)
- Here, with a `LlamaCpp` langchain community wrapper, see [llamacpp.py](https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/llamacpp.py), you can see `LlamaCpp` class has several attribute among which a `grammar` one:
```python
class LlamaCpp(LLM):
"""llama.cpp model.
To use, you should have the llama-cpp-python library installed, and provide the
path to the Llama model as a named parameter to the constructor.
Check out: https://github.com/abetlen/llama-cpp-python
Example:
.. code-block:: python
from langchain_community.llms import LlamaCpp
llm = LlamaCpp(model_path="/path/to/llama/model")
"""
client: Any #: :meta private:
model_path: str
"""The path to the Llama model file."""
...
grammar: Optional[Union[str, LlamaGrammar]] = None
"""
grammar: formal grammar for constraining model outputs. For instance, the grammar
can be used to force the model to generate valid JSON or to speak exclusively in
emojis. At most one of grammar_path and grammar should be passed in.
"""
```
This `grammar` attribute has the potential type `LlamaGrammar`, which is **only imported** when the `typing` constant `TYPE_CHECKING` evaluates to true; see the import at the top of the file:
```python
if TYPE_CHECKING:
from llama_cpp import LlamaGrammar
```
This is the root cause of the issue: when preparing the model to perform an inference with a `configurable`, a new instance of the `LlamaCpp` class is created (remember `self.default.__class__(**{**self.default.__dict__, **configurable})` described above), but the `LlamaGrammar` type is not available in that context, leading to a Pydantic validation error that this type is unknown and crashing the program.
A simple fix is to import `LlamaGrammar` unconditionally, without the `TYPE_CHECKING` check; this ensures the `LlamaGrammar` type is always available, preventing such issues. Since this import only brings in a type definition, it will not create circular dependencies, and the performance cost of the additional import should be minor compared to the LLM inference itself.
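A stdlib-only sketch of the mechanism (no llama-cpp or pydantic required, and `FakeLlamaCpp` is an illustrative stand-in): any runtime consumer of the annotations (here `typing.get_type_hints`, playing the role of pydantic's field preparation) fails, because the name only exists for type checkers:

```python
from typing import TYPE_CHECKING, Optional, get_type_hints

if TYPE_CHECKING:
    from llama_cpp import LlamaGrammar  # only visible to static type checkers

class FakeLlamaCpp:
    # Same shape as the real attribute: a forward reference that can
    # never be resolved at runtime.
    grammar: "Optional[LlamaGrammar]" = None

try:
    get_type_hints(FakeLlamaCpp)
    resolved = True
except NameError as e:
    resolved = False
    print("unresolved forward reference:", e)
```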
I will open a PR proposing a fix.
### System Info
- package versions:
```text
langchain==0.1.5
langchain-community==0.0.17
langchain-core==0.1.18
llama_cpp_python==0.2.30 # just for information, note that the issue is not due to llama_cpp_python codebase, hence posting here.
```
- Bug reproduced on macOS and Linux (Google colab with a T4)
- python version 3.11.6 on macOS, Python 3.10.12 on Google Collab | Error in LlamaCpp with Configurable Fields, 'grammar' custom type not available | https://api.github.com/repos/langchain-ai/langchain/issues/16994/comments | 1 | 2024-02-04T08:41:51Z | 2024-05-12T16:09:05Z | https://github.com/langchain-ai/langchain/issues/16994 | 2,116,960,550 | 16,994 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
I create an LLM:

```python
def mixtral() -> BaseLanguageModel:
    llm = HuggingFaceHub(
        repo_id="mistralai/Mixtral-8x7B-Instruct-v0.1",
        task="text-generation",
        model_kwargs={
            "max_new_tokens": 16384,
            "top_k": 30,
            "temperature": 0.1,
            "repetition_penalty": 1.03,
            "max_length": 16384,
        },
    )
    return ChatHuggingFace(llm=llm)
```
And then use it in other code:

```python
@classmethod
def default_bot(cls, sys_msg: str, llm: BaseLanguageModel):
    h_temp = "{question}"
    # Init Prompt
    prompt = ChatPromptTemplate(
        messages=[
            SystemMessage(content=sys_msg),
            MessagesPlaceholder(variable_name="chat_history"),
            HumanMessagePromptTemplate.from_template(h_temp)
        ],
    )
    memory = ConversationSummaryBufferMemory(
        llm=llm,
        memory_key="chat_history",
        return_messages=True,
        max_token_limit=2048,
    )
    chain = LLMChain(
        llm=llm,
        prompt=prompt,
        memory=memory,
        # verbose=True,
    )
    return cls(chain=chain)
```
### Error Message and Stack Trace (if applicable)
```
File "/opt/homebrew/lib/python3.11/site-packages/langchain_core/_api/deprecation.py", line 142, in warning_emitting_wrapper
    return wrapped(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/langchain/chains/base.py", line 538, in run
    return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/langchain_core/_api/deprecation.py", line 142, in warning_emitting_wrapper
    return wrapped(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/langchain/chains/base.py", line 363, in __call__
    return self.invoke(
           ^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/langchain/chains/base.py", line 162, in invoke
    raise e
File "/opt/homebrew/lib/python3.11/site-packages/langchain/chains/base.py", line 156, in invoke
    self._call(inputs, run_manager=run_manager)
File "/opt/homebrew/lib/python3.11/site-packages/langchain/chains/llm.py", line 103, in _call
    response = self.generate([inputs], run_manager=run_manager)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/langchain/chains/llm.py", line 115, in generate
    return self.llm.generate_prompt(
           ^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 543, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 407, in generate
    raise e
File "/opt/homebrew/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 397, in generate
    self._generate_with_cache(
File "/opt/homebrew/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 576, in _generate_with_cache
    return self._generate(
           ^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/langchain_community/chat_models/huggingface.py", line 68, in _generate
    llm_input = self._to_chat_prompt(messages)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/langchain_community/chat_models/huggingface.py", line 100, in _to_chat_prompt
    return self.tokenizer.apply_chat_template(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 1742, in apply_chat_template
    rendered = compiled_template.render(
               ^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/jinja2/environment.py", line 1301, in render
    self.environment.handle_exception()
File "/opt/homebrew/lib/python3.11/site-packages/jinja2/environment.py", line 936, in handle_exception
    raise rewrite_traceback_stack(source=source)
File "<template>", line 1, in top-level template code
File "/opt/homebrew/lib/python3.11/site-packages/jinja2/sandbox.py", line 393, in call
    return __context.call(__obj, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 1776, in raise_exception
    raise TemplateError(message)
jinja2.exceptions.TemplateError: Conversation roles must alternate user/assistant/user/assistant/...
```
### Description
I want to know what the root cause of this issue is. I simply replaced the llm (OpenAI GPT-4) with ChatHuggingFace. Why is there such an incompatibility? Can the official team consider the compatibility of BaseLanguageModel implementations?
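For what it's worth, here is a minimal model of the constraint (an assumption about the root cause: Mixtral's chat template enforces strictly alternating user/assistant roles, so the system message and the summary-memory messages break it):

```python
def check_alternation(messages):
    """Toy version of the check Mixtral's Jinja chat template performs."""
    for i, message in enumerate(messages):
        expected = "user" if i % 2 == 0 else "assistant"
        if message["role"] != expected:
            raise ValueError(
                "Conversation roles must alternate user/assistant/user/assistant/..."
            )

check_alternation([{"role": "user", "content": "hi"},
                   {"role": "assistant", "content": "hello"}])  # fine
try:
    check_alternation([{"role": "system", "content": "You are helpful."},
                       {"role": "user", "content": "hi"}])
except ValueError as err:
    print(err)  # the error from the stack trace
```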
### System Info
❯ pip freeze | grep langchain
langchain==0.1.0
langchain-community==0.0.10
langchain-core==0.1.8
langchain-experimental==0.0.28
langchain-google-genai==0.0.3
langchain-openai==0.0.2
platform: mac M1
| HuggingFace ChatHuggingFace raise Conversation roles must alternate user/assistant/user/assistant/... | https://api.github.com/repos/langchain-ai/langchain/issues/16992/comments | 4 | 2024-02-04T07:12:49Z | 2024-06-27T01:58:12Z | https://github.com/langchain-ai/langchain/issues/16992 | 2,116,920,427 | 16,992 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
# Indexing Code
textbook_directory_number_metadata = {
'Chapter Number': chapter['Chapter Number'],
...
}
record_metadatas = [{
**textbook_directory_number_metadata,'Text': text
}]
metadatas =[]
texts = []
metadatas.extend(record_metadatas)
texts.extend(text)
ids = [str(uuid4()) for _ in range(len(texts))]
embeds = embed.embed_documents(texts)
index.upsert(vectors=zip(ids, embeds, metadatas))
# Query Code
retriever = vectorstore.as_retriever(
search_type="similarity",
search_kwargs={
'k': 8,
'filter': filter_request_json
}
)
```
### Error Message and Stack Trace (if applicable)
No error or exception, it's just the type got changed.
### Description
We have a metadata field that looks like `"Chapter Number": 1`. We then indexed the document with this metadata in the Pinecone VDB. During query retrieval we got the metadata field back as `"Chapter Number": 1.0`. The number `1` got turned into the floating point `1.0`. There is no type casting in my code.
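For context, Pinecone appears to store numeric metadata values as 64-bit floats, so integers round-trip as floats. Two hedged workarounds (the helper name is illustrative, not part of any API):

```python
def restore_integral(value):
    """Cast a float-typed metadata value back to int when it is integral."""
    if isinstance(value, float) and value.is_integer():
        return int(value)
    return value

assert restore_integral(1.0) == 1 and isinstance(restore_integral(1.0), int)
assert restore_integral(2.5) == 2.5  # non-integral floats pass through

# Alternatively, index the field as a string up front:
metadata = {"Chapter Number": str(1)}
```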
### System Info
langchain==0.1.1
langchain-community==0.0.13
langchain-core==0.1.13
langchain-openai==0.0.3
Platform: mac
Python Version: Python 3.11.5
python -m langchain_core.sys_info:
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.3.0: Wed Dec 20 21:30:44 PST 2023; root:xnu-10002.81.5~7/RELEASE_ARM64_T6000
> Python Version: 3.11.5 (v3.11.5:cce6ba91b3, Aug 24 2023, 10:50:31) [Clang 13.0.0 (clang-1300.0.29.30)]
Package Information
-------------------
> langchain_core: 0.1.13
> langchain: 0.1.1
> langchain_community: 0.0.13
> langserve: Not Found | Type casting mistake for metadata when indexing documents using Pinecone VDB | https://api.github.com/repos/langchain-ai/langchain/issues/16983/comments | 1 | 2024-02-03T18:35:50Z | 2024-05-11T16:09:32Z | https://github.com/langchain-ai/langchain/issues/16983 | 2,116,635,155 | 16,983 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
from langchain.chat_models import ChatOpenAI
from langchain.tools import Tool
from src.utils.search import SearchUtil
from langchain.agents import AgentExecutor, create_openai_tools_agent, AgentExecutorIterator
from langchain.schema import SystemMessage
from langchain import hub
prompt = hub.pull("hwchase17/openai-tools-agent")
llm = ChatOpenAI()
multiply_tool = Tool(
name="multiply",
description="Multiply two numbers",
func=lambda x, y: x * y,
)
tools = [multiply_tool]
agent = create_openai_tools_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
async for chunk in agent_executor.astream({'input': 'write a long text'}):
print(chunk, end="|", flush=True)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am having an issue with streaming chunks from an instance of an AgentExecutor. Here is a very simple high-level example of what I am doing:
```
from langchain.chat_models import ChatOpenAI
from langchain.tools import Tool
from src.utils.search import SearchUtil
from langchain.agents import AgentExecutor, create_openai_tools_agent, AgentExecutorIterator
from langchain.schema import SystemMessage
from langchain import hub
prompt = hub.pull("hwchase17/openai-tools-agent")
llm = ChatOpenAI()
multiply_tool = Tool(
name="multiply",
description="Multiply two numbers",
func=lambda x, y: x * y,
)
tools = [multiply_tool]
agent = create_openai_tools_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
async for chunk in agent_executor.astream({'input': 'write a long text'}):
print(chunk, end="|", flush=True)
```
When I apply the same chunk loop to an llm or a chain, their implementation of astream seems to be fine, but when I do it on an agent, I get everything back in a single object such as:
`{'output': 'llm response', 'intermediate_steps': [], ...}`
I found some recent discussions with people facing the same issue, and it seems to be a bug in the AgentExecutor implementation.
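To illustrate the observed behaviour (this is a stand-in generator, not LangChain code): `AgentExecutor.astream` yields per-step dicts (actions, steps, final output) rather than per-token chunks the way `llm.astream` does:

```python
import asyncio

async def fake_agent_astream():
    # Shape of what AgentExecutor.astream appears to yield per step.
    yield {"actions": ["multiply"]}
    yield {"steps": ["observation"]}
    yield {"output": "llm response", "intermediate_steps": []}

async def collect():
    return [chunk async for chunk in fake_agent_astream()]

chunks = asyncio.run(collect())
print(chunks[-1]["output"])  # llm response
```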
### System Info
langchain==0.1.5
langchain-community==0.0.17
langchain-core==0.1.18
langchain-openai==0.0.5
langchainhub==0.1.14 | Streaming Agent responses | https://api.github.com/repos/langchain-ai/langchain/issues/16980/comments | 3 | 2024-02-03T15:21:09Z | 2024-02-06T19:18:14Z | https://github.com/langchain-ai/langchain/issues/16980 | 2,116,530,409 | 16,980 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
``` Python
chat_history = []
query = "What considerations should the HSS follow during emergency registrations?"
result = chain({"question": query, "chat_history": chat_history})
print(result['answer'])
```
### Error Message and Stack Trace (if applicable)
```
/usr/local/lib/python3.10/dist-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: The function `__call__` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.
warn_deprecated(
/usr/local/lib/python3.10/dist-packages/transformers/generation/configuration_utils.py:392: UserWarning: `do_sample` is set to `False`. However, `temperature` is set to `0.1` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `temperature`.
warnings.warn(
Setting `pad_token_id` to `eos_token_id`:2 for open-end generation.
---------------------------------------------------------------------------
OutOfMemoryError Traceback (most recent call last)
[<ipython-input-26-85f83d314c4a>](https://localhost:8080/#) in <cell line: 3>()
1 chat_history = []
2 query = "What considerations should the HSS follow during emergency registrations?"
----> 3 result = chain({"question": query, "chat_history": chat_history})
4 print(result['answer'])
44 frames
[/usr/local/lib/python3.10/dist-packages/transformers/modeling_attn_mask_utils.py](https://localhost:8080/#) in _make_causal_mask(input_ids_shape, dtype, device, past_key_values_length, sliding_window)
154 """
155 bsz, tgt_len = input_ids_shape
--> 156 mask = torch.full((tgt_len, tgt_len), torch.finfo(dtype).min, device=device)
157 mask_cond = torch.arange(mask.size(-1), device=device)
158 mask.masked_fill_(mask_cond < (mask_cond + 1).view(mask.size(-1), 1), 0)
OutOfMemoryError: CUDA out of memory. Tried to allocate 104.22 GiB. GPU 0 has a total capacty of 14.75 GiB of which 8.83 GiB is free. Process 252083 has 5.91 GiB memory in use. Of the allocated memory 5.63 GiB is allocated by PyTorch, and 156.29 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
### Description
How can I resolve this error?
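A back-of-the-envelope sketch of where the number comes from (an assumption based on the stack trace: `_make_causal_mask` allocates a `(seq_len, seq_len)` fp32 mask, so memory grows quadratically with the prompt length; truncating the retrieved context, using fewer or smaller chunks, or lowering the model's max length are the usual first fixes):

```python
def mask_gib(seq_len, bytes_per_elem=4):
    # fp32 causal attention mask of shape (seq_len, seq_len)
    return seq_len * seq_len * bytes_per_elem / 2**30

print(f"{mask_gib(4096):.2f} GiB")     # 0.06 GiB
print(f"{mask_gib(167_000):.2f} GiB")  # ~104 GiB, roughly matching the error
```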
### System Info
Python version: 3.10.10
Operating System: Windows 11
Windows: 11
pip == 23.3.1
python == 3.10.10
long-chain == 0.1.0
transformers == 4.36.2
sentence_transformers == 2.2.2
unstructured == 0.12.0 | OutOfMemoryError: CUDA out of memory. Tried to allocate 104.22 GiB. GPU 0 has a total capacty of 14.75 GiB of which 8.83 GiB is free. Process 252083 has 5.91 GiB memory in use. Of the allocated memory 5.63 GiB is allocated by PyTorch, and 156.29 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF | https://api.github.com/repos/langchain-ai/langchain/issues/16978/comments | 1 | 2024-02-03T14:34:42Z | 2024-02-04T14:23:31Z | https://github.com/langchain-ai/langchain/issues/16978 | 2,116,511,940 | 16,978 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
The issue is in `langchain/libs/community/langchain_community/vectorstores/faiss.py` at line 333
```
if filter is not None and filter_func(doc.metadata):
docs.append((doc, scores[0][j]))
else:
docs.append((doc, scores[0][j]))
```
If there's a filter, this will always add the document whether or not the filter function matches (confirmed with AI: https://chat.openai.com/share/1b68d90b-ed4d-4e7d-9aff-9195acf18f96)
It should be something like:
```
if filter is None:
docs.append((doc, scores[0][j]))
elif filter_func(doc.metadata):
docs.append((doc, scores[0][j]))
```
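A standalone restatement of the logic error, runnable without FAISS (the function names are illustrative):

```python
# With a filter present, both branches of the original code append,
# so the filter is a no-op.
def buggy_keep(metadata, flt, filter_func):
    if flt is not None and filter_func(metadata):
        return True
    else:
        return True  # <-- always keeps the document

def fixed_keep(metadata, flt, filter_func):
    if flt is None:
        return True
    return filter_func(metadata)

flt = {"page": 1}
matches = lambda m: m.get("page") == 1
assert buggy_keep({"page": 2}, flt, matches) is True   # wrongly kept
assert fixed_keep({"page": 2}, flt, matches) is False  # correctly dropped
assert fixed_keep({"page": 1}, flt, matches) is True   # matching doc kept
```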
### Error Message and Stack Trace (if applicable)
_No response_
### Description
* using FAISS with a filter just basically ignores the filter
### System Info
```
langchain==0.1.5
langchain-community==0.0.17
langchain-core==0.1.18
langchain-mistralai==0.0.3
``` | FAISS documenting filtering is broken | https://api.github.com/repos/langchain-ai/langchain/issues/16977/comments | 3 | 2024-02-03T14:04:17Z | 2024-05-22T16:08:32Z | https://github.com/langchain-ai/langchain/issues/16977 | 2,116,495,155 | 16,977 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
``` Python
# Load Directory that contains all the documents related to RAG:
from langchain_community.document_loaders import DirectoryLoader
directory = '/content/drive/MyDrive/QnA Pair Documents'
```
``` Python
def load_docs(directory):
loader = DirectoryLoader(directory)
documents = loader.load()
return documents
```
``` Python
documents = load_docs(directory)
```
### Error Message and Stack Trace (if applicable)
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/unstructured/partition/common.py](https://localhost:8080/#) in convert_office_doc(input_filename, output_directory, target_format, target_filter)
406 try:
--> 407 process = subprocess.Popen(
408 command,
15 frames
FileNotFoundError: [Errno 2] No such file or directory: 'soffice'
During handling of the above exception, another exception occurred:
FileNotFoundError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/unstructured/partition/common.py](https://localhost:8080/#) in convert_office_doc(input_filename, output_directory, target_format, target_filter)
412 output, error = process.communicate()
413 except FileNotFoundError:
--> 414 raise FileNotFoundError(
415 """soffice command was not found. Please install libreoffice
416 on your system and try again.
FileNotFoundError: soffice command was not found. Please install libreoffice
on your system and try again.
- Install instructions: https://www.libreoffice.org/get-help/install-howto/
- Mac: https://formulae.brew.sh/cask/libreoffice
- Debian: https://wiki.debian.org/LibreOffice
```
### Description
I have downloaded and installed LibreOffice from the provided link, but I am still getting this error.
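A quick sanity check from Python (a hedged sketch: `unstructured` shells out to LibreOffice's `soffice` binary, so it must be on the `PATH` of the Python process, not merely installed somewhere):

```python
import shutil

soffice = shutil.which("soffice")
if soffice is None:
    print("soffice not on PATH; on Colab/Debian try: apt-get install -y libreoffice")
else:
    print("soffice found at", soffice)
```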
### System Info
Python version: 3.10.10
Operating System: Windows 11
Windows: 11
pip == 23.3.1
python == 3.10.10
long-chain == 0.1.0
transformers == 4.36.2
sentence_transformers == 2.2.2
unstructured == 0.12.0 | FileNotFoundError: soffice command was not found. Please install libreoffice on your system and try again. | https://api.github.com/repos/langchain-ai/langchain/issues/16973/comments | 2 | 2024-02-03T10:54:46Z | 2024-04-15T09:27:30Z | https://github.com/langchain-ai/langchain/issues/16973 | 2,116,424,404 | 16,973 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain_community.llms import HuggingFaceHub
HuggingFaceHub(repo_id="gpt2")("Linux is")
#Expected: "a opensource operate system"
#Actual: "Linux is a opensource operate system"
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Some text-generation models on huggingface repeat the prompt in their generated response
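Until the wrapper strips the prompt itself, a hedged workaround sketch (the helper name is illustrative):

```python
def strip_echoed_prompt(prompt: str, generated: str) -> str:
    """Remove the echoed prompt prefix from a text-generation response."""
    if generated.startswith(prompt):
        return generated[len(prompt):].lstrip()
    return generated

out = strip_echoed_prompt("Linux is", "Linux is a opensource operate system")
print(out)  # a opensource operate system
```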
### System Info
langchain==0.1.4
langchain-community==0.0.17
langchain-core==0.1.18
langchain-experimental==0.0.29
langchain-google-genai==0.0.6
langchainhub==0.1.14
| HuggingFaceHub still needs leading characters removal | https://api.github.com/repos/langchain-ai/langchain/issues/16972/comments | 2 | 2024-02-03T09:02:23Z | 2024-05-13T16:10:07Z | https://github.com/langchain-ai/langchain/issues/16972 | 2,116,370,059 | 16,972 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
db_connection = SQLDatabase.from_uri(
    snowflake_url,
    sample_rows_in_table_info=1,
    include_tables=['transient_table_name'],
    view_support=True,
)
```
### Error Message and Stack Trace (if applicable)
ValueError: include_tables {'TRANSIENT_TABLE_NAME'} not found in database
### Description
The table I want to use is a transient table, created with the following DDL:

```sql
create or replace TRANSIENT TABLE DB_NAME.SCHEMA_NAME.TRANSIENT_TABLE_NAME (
    UNIQUE_ID VARCHAR(32),
    PRODUCT VARCHAR(255),
    CITY VARCHAR(100)
);
```
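A sketch of one possible cause (an assumption, not confirmed): Snowflake folds unquoted identifiers to upper case while snowflake-sqlalchemy's inspector reports lower-case names, and `SQLDatabase` compares `include_tables` against the inspector's names with an exact match, so either casing can fail:

```python
def missing_tables(include_tables, inspector_names):
    """Mimic the exact-match check behind the ValueError."""
    return set(include_tables) - set(inspector_names)

# An exact comparison fails across casings:
assert missing_tables({"TRANSIENT_TABLE_NAME"}, {"transient_table_name"})
# A case-insensitive comparison would not:
assert not missing_tables(
    {name.lower() for name in {"TRANSIENT_TABLE_NAME"}},
    {"transient_table_name"},
)
```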
### System Info
black==24.1.1
boto3==1.34.29
chromadb==0.4.22
huggingface-hub==0.20.3
langchain==0.1.4
langchain-experimental==0.0.49
pip_audit==2.6.0
pre-commit==3.6.0
pylint==2.17.4
pylint_quotes==0.2.3
pylint_pydantic==0.3.2
python-dotenv==1.0.1
sentence_transformers==2.3.0
snowflake-connector-python==3.7.0
snowflake-sqlalchemy==1.5.1
SQLAlchemy==2.0.25
watchdog==3.0.0 | SQLDatabase.from_uri() not recognizing transient table in snowflake | https://api.github.com/repos/langchain-ai/langchain/issues/16971/comments | 4 | 2024-02-03T07:26:16Z | 2024-02-05T20:01:51Z | https://github.com/langchain-ai/langchain/issues/16971 | 2,116,324,244 | 16,971 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
The code from the [Qdrant documentation](https://python.langchain.com/docs/integrations/vectorstores/qdrant) shows the error:
```python
from dotenv import load_dotenv
from langchain.text_splitter import CharacterTextSplitter
from langchain_community.document_loaders import TextLoader
from langchain_community.vectorstores import Qdrant
from langchain_openai import OpenAIEmbeddings
load_dotenv()
loader = TextLoader("../../modules/state_of_the_union.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
qdrant = Qdrant.from_documents(
docs,
embeddings,
location=":memory:", # Local mode with in-memory storage only
collection_name="my_documents",
)
query = "What did the president say about Ketanji Brown Jackson"
found_docs = qdrant.similarity_search(query)
print(found_docs[0].page_content)
```
The only adjustment here was how to set the `OPENAI_API_KEY` value (any mechanism works).
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "C:\Program Files\JetBrains\PyCharm Community Edition 2023.1.2\plugins\python-ce\helpers\pydev\pydevd.py", line 1534, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\JetBrains\PyCharm Community Edition 2023.1.2\plugins\python-ce\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "C:\Users\sfitts\GitHub\ag2rs\modelmgr\src\main\python\qdrant_example.py", line 26, in <module>
found_docs = qdrant.similarity_search(query)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sfitts\.virtualenvs\semanticsearch\Lib\site-packages\langchain_community\vectorstores\qdrant.py", line 286, in similarity_search
results = self.similarity_search_with_score(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sfitts\.virtualenvs\semanticsearch\Lib\site-packages\langchain_community\vectorstores\qdrant.py", line 362, in similarity_search_with_score
return self.similarity_search_with_score_by_vector(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sfitts\.virtualenvs\semanticsearch\Lib\site-packages\langchain_community\vectorstores\qdrant.py", line 620, in similarity_search_with_score_by_vector
return [
^
File "C:\Users\sfitts\.virtualenvs\semanticsearch\Lib\site-packages\langchain_community\vectorstores\qdrant.py", line 622, in <listcomp>
self._document_from_scored_point(
File "C:\Users\sfitts\.virtualenvs\semanticsearch\Lib\site-packages\langchain_community\vectorstores\qdrant.py", line 1946, in _document_from_scored_point
metadata["_collection_name"] = scored_point.collection_name
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'ScoredPoint' object has no attribute 'collection_name'
python-BaseException
Process finished with exit code 1
```
### Description
I'm trying to use Qdrant to perform similarity searches as part of a RAG chain. This was working fine in `langchain-community==0.0.16`, but produces the error above in `langchain-community==0.0.17`. The source of the break is this PR -- https://github.com/langchain-ai/langchain/pull/16608. While it would be nice to have access to the collection name, the qdrant-client class `ScoredPoint` does not have the referenced property (and AFAICT it never has).
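Until a fixed release lands, a defensive version of the failing lookup could fall back to an externally known collection name instead of reading the non-existent attribute. A self-contained sketch with a stand-in class (illustrative names, not the real qdrant-client types):

```python
class FakeScoredPoint:
    """Stand-in for qdrant_client's ScoredPoint, which (per this report)
    has no `collection_name` attribute."""
    def __init__(self, payload):
        self.payload = payload

def metadata_from_point(point, collection_name):
    metadata = dict(point.payload.get("metadata", {}))
    # getattr with a default avoids the AttributeError in the traceback above
    metadata["_collection_name"] = getattr(point, "collection_name", collection_name)
    return metadata

point = FakeScoredPoint({"metadata": {"page": 1}})
print(metadata_from_point(point, "my_documents"))
# {'page': 1, '_collection_name': 'my_documents'}
```

Pinning `langchain-community==0.0.16` is the other obvious interim workaround.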
### System Info
```
System Information
------------------
> OS: Windows
> OS Version: 10.0.19045
> Python Version: 3.11.4 (tags/v3.11.4:d2340ef, Jun 7 2023, 05:45:37) [MSC v.1934 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.18
> langchain: 0.1.4
> langchain_community: 0.0.17
> langchain_openai: 0.0.5
> langserve: 0.0.41
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
```
Note that the failure was originally found using `langchain==0.1.5`. | Qdrant: Performing a similarity search results in an "AttributeError" | https://api.github.com/repos/langchain-ai/langchain/issues/16962/comments | 3 | 2024-02-02T23:22:06Z | 2024-05-17T16:08:03Z | https://github.com/langchain-ai/langchain/issues/16962 | 2,115,989,043 | 16,962 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
The following code
``` python
llm = Bedrock(
credentials_profile_name="bedrock-admin",
model_id="amazon.titan-text-express-v1")
```
Does not correctly retrieve the credentials
### Error Message and Stack Trace (if applicable)
I can't directly copy the error due to corporate security policy.
The error clearly says access was denied due to the role.
However the profile works in the aws cli.
### Description
I am trying to use credentials_profile_name to assume a role that works with Bedrock.
I added a profile to ~/.aws/config
[profile bedrock-admin]
role_arn = arn:aws:iam::123456789012:role/mybedrockrole
credential_source = Ec2InstanceMetadata
The role does have suitable permissions and I can create a Bedrock client via boto3.
The AWS CLI works. I can run aws s3 ls --profile bedrock-admin and it picks up the profile.
But creating the LLM as shown in the docs does not get the permissions and fails
llm = Bedrock(
credentials_profile_name="bedrock-admin",
model_id="amazon.titan-text-express-v1")
In my case, I am forced to use the EC2 profile as a starting point for credentials. IMDS should still allow the new role to be assumed.
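As a hedged sketch (not a verified fix), two things worth checking: that the profile stanza parses the way botocore reads it, and whether building the boto3 client yourself and passing it in avoids the in-library credential resolution entirely:

```python
import configparser

# The ~/.aws/config stanza quoted above, as botocore sees it (note the
# "profile " prefix in the section name for non-default profiles -- a common
# gotcha when a profile resolves via the CLI but not via an SDK).
aws_config = """\
[profile bedrock-admin]
role_arn = arn:aws:iam::123456789012:role/mybedrockrole
credential_source = Ec2InstanceMetadata
"""

parser = configparser.ConfigParser()
parser.read_string(aws_config)
section = parser["profile bedrock-admin"]
print(section["role_arn"])
print(section["credential_source"])

# Untested workaround sketch: construct the client explicitly and hand it to
# the LLM wrapper instead of relying on credentials_profile_name:
#   import boto3
#   session = boto3.Session(profile_name="bedrock-admin")
#   client = session.client("bedrock-runtime")
#   llm = Bedrock(client=client, model_id="amazon.titan-text-express-v1")
```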
### System Info
langchain 0.1.4 with langserve
AWS linux
Python 3.9.6 | Bedrock credentials_profile_name="bedrock-admin" fails with IMDS | https://api.github.com/repos/langchain-ai/langchain/issues/16959/comments | 3 | 2024-02-02T21:57:50Z | 2024-02-05T18:37:47Z | https://github.com/langchain-ai/langchain/issues/16959 | 2,115,869,570 | 16,959 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```py
async def stream_tokens():
async for event in agent.astream_events(
{"input": prompt},
{"configurable": {"session_id": "some_hard_coded_value"}},
version="v1",
):
kind = event["event"]
if kind == "on_chat_model_stream":
content = event["data"]["chunk"].content
if content:
yield content
yield ""
```
```py
agent_with_history = RunnableWithMessageHistory(
agent,
lambda session_id: CassandraChatMessageHistory(
keyspace="some_hard_coded_value",
session=cluster.connect(),
session_id="some_hard_coded_value"
),
input_messages_key="input",
history_messages_key="history",
)
agent = AgentExecutor(
agent=agent_with_history,
tools=tools,
verbose=True,
handle_parsing_errors="Check your output and make sure it conforms, use the Action/Action Input syntax",
)
```
### Error Message and Stack Trace (if applicable)
[chain/error] [1:chain:AgentExecutor] [2ms] Chain run errored with error:
"ValueError(\"Missing keys ['session_id'] in config['configurable'] Expected keys are ['session_id'].When using via .invoke() or .stream(), pass in a config; e.g., chain.invoke({'input': 'foo'}, {'configurable': {'session_id': '[your-value-here]'}})\")Traceback (most recent call last):\n\n\n File \"/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/langchain/agents/agent_iterator.py\", line 240, in __aiter__\n async for chunk in self.agent_executor._aiter_next_step(\n\n\n File \"/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/langchain/agents/agent.py\", line 1262, in _aiter_next_step\n output = await self.agent.aplan(\n ^^^^^^^^^^^^^^^^^^^^^^^\n\n\n File \"/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/langchain/agents/agent.py\", line 422, in aplan\n async for chunk in self.runnable.astream(\n\n\n File \"/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py\", line 4123, in astream\n self._merge_configs(config),\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n\n File \"/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/langchain_core/runnables/history.py\", line 454, in _merge_configs\n raise ValueError(\n\n\nValueError: Missing keys ['session_id'] in config['configurable'] Expected keys are ['session_id'].When using via .invoke() or .stream(), pass in a config; e.g., chain.invoke({'input': 'foo'}, {'configurable': {'session_id': '[your-value-here]'}})"
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 426, in run_asgi
result = await app( # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/fastapi/applications.py", line 1106, in __call__
await super().__call__(scope, receive, send)
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/starlette/applications.py", line 122, in __call__
await self.middleware_stack(scope, receive, send)
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/starlette/middleware/errors.py", line 184, in __call__
raise exc
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/starlette/middleware/errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
raise exc
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 20, in __call__
raise e
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 17, in __call__
await self.app(scope, receive, send)
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/starlette/routing.py", line 718, in __call__
await route.handle(scope, receive, send)
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/starlette/routing.py", line 276, in handle
await self.app(scope, receive, send)
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/starlette/routing.py", line 69, in app
await response(scope, receive, send)
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/starlette/responses.py", line 270, in __call__
async with anyio.create_task_group() as task_group:
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 597, in __aexit__
raise exceptions[0]
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/starlette/responses.py", line 273, in wrap
await func()
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/starlette/responses.py", line 262, in stream_response
async for chunk in self.body_iterator:
File "/x/x/Documents/programming/x/backend-python/source/server/routes/ask_agent_endpoint/ask_agent_endpoint.py", line 39, in stream_tokens
async for event in agent.astream_events(
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 889, in astream_events
async for log in _astream_log_implementation( # type: ignore[misc]
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/langchain_core/tracers/log_stream.py", line 612, in _astream_log_implementation
await task
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/langchain_core/tracers/log_stream.py", line 567, in consume_astream
async for chunk in runnable.astream(input, config, **kwargs):
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/langchain/agents/agent.py", line 1551, in astream
async for step in iterator:
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/langchain/agents/agent_iterator.py", line 240, in __aiter__
async for chunk in self.agent_executor._aiter_next_step(
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/langchain/agents/agent.py", line 1262, in _aiter_next_step
output = await self.agent.aplan(
^^^^^^^^^^^^^^^^^^^^^^^
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/langchain/agents/agent.py", line 422, in aplan
async for chunk in self.runnable.astream(
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4123, in astream
self._merge_configs(config),
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/langchain_core/runnables/history.py", line 454, in _merge_configs
raise ValueError(
ValueError: Missing keys ['session_id'] in config['configurable'] Expected keys are ['session_id'].When using via .invoke() or .stream(), pass in a config; e.g., chain.invoke({'input': 'foo'}, {'configurable': {'session_id': '[your-value-here]'}})
### Description
I'm trying to integrate chat history with Cassandra, as I previously did for a plain chain. With astream_events on AgentExecutor, the config isn't passed down, so it throws the error shown in the traceback. As an experiment, I manually edited the installed source at /Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py line 4123 to add configurable["session_id"] to the final config myself, and after that it works. I'm working with GPT-4, but this bug should appear regardless of which LLM you use.
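A pure-Python illustration of the merge that is evidently going wrong (the real logic lives in `RunnableWithMessageHistory._merge_configs`), plus a hedged, untested workaround in the comments:

```python
def merge_configs(outer, inner):
    # If the outer {"configurable": {"session_id": ...}} survives the merge,
    # the history wrapper finds its key; if AgentExecutor drops it on the way
    # down (as the traceback suggests), the ValueError above is raised.
    configurable = {**outer.get("configurable", {}),
                    **inner.get("configurable", {})}
    return {**outer, **inner, "configurable": configurable}

outer = {"configurable": {"session_id": "some_hard_coded_value"}}
merged = merge_configs(outer, {"callbacks": []})
print(merged["configurable"]["session_id"])  # some_hard_coded_value

# Untested workaround sketch: bind the session id onto the runnable itself so
# it no longer depends on config propagation through AgentExecutor:
#   bound = agent_with_history.with_config(
#       configurable={"session_id": "some_hard_coded_value"})
#   agent = AgentExecutor(agent=bound, tools=tools, ...)
```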
### System Info
langchain==0.1.4
langchain-community==0.0.16
langchain-core==0.1.17
langchain-experimental==0.0.49
langchain-openai==0.0.5
langchainhub==0.1.14
MacOS Sonoma 14.1.2 with ARM M3 CPU
Python 3.11.7 (main, Dec 20 2023, 12:17:39) [Clang 15.0.0 (clang-1500.0.40.1)] on darwin | astream_events doesn't pass config properly | https://api.github.com/repos/langchain-ai/langchain/issues/16944/comments | 6 | 2024-02-02T14:50:34Z | 2024-02-07T02:34:07Z | https://github.com/langchain-ai/langchain/issues/16944 | 2,115,112,004 | 16,944 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain_google_vertexai.chat_models import ChatVertexAI
llm = ChatVertexAI(
model_name="gemini-pro",
max_output_tokens=1,
)
llm.invoke("foo")
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/workspaces/metabot-backend/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 165, in invoke
self.generate_prompt(
File "/workspaces/metabot-backend/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 543, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspaces/metabot-backend/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 407, in generate
raise e
File "/workspaces/metabot-backend/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 397, in generate
self._generate_with_cache(
File "/workspaces/metabot-backend/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 576, in _generate_with_cache
return self._generate(
^^^^^^^^^^^^^^^
File "/workspaces/metabot-backend/.venv/lib/python3.11/site-packages/langchain_google_vertexai/chat_models.py", line 356, in _generate
return generate_from_stream(stream_iter)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspaces/metabot-backend/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 64, in generate_from_stream
for chunk in stream:
File "/workspaces/metabot-backend/.venv/lib/python3.11/site-packages/langchain_google_vertexai/chat_models.py", line 499, in _stream
for response in responses:
File "/workspaces/metabot-backend/.venv/lib/python3.11/site-packages/vertexai/generative_models/_generative_models.py", line 926, in _send_message_streaming
raise ResponseBlockedError(
vertexai.generative_models._generative_models.ResponseBlockedError: The response was blocked.
### Description
When using Vertex AI's Gemini generative models, a `ResponseBlockedError` is raised when the generated text reaches either the maximum allowed token limit or a natural stopping point, as defined in the Google Vertex AI Python library (https://github.com/googleapis/python-aiplatform/blob/93036eda04566501e74916814e950236d9dbed62/vertexai/generative_models/_generative_models.py#L640-L644). However, instead of being handled gracefully within LangChain, the exception propagates to the top level of the invoke call, causing unexpected behavior and potentially interrupting workflows.
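Until the upstream behaviour changes, callers may want a guard around generation. The sketch below is self-contained: the exception class is a stand-in for `vertexai.generative_models._generative_models.ResponseBlockedError`, and the guard is a hypothetical pattern, not LangChain API:

```python
class ResponseBlockedError(Exception):
    """Stand-in for vertexai's exception, raised (per this report) even on
    ordinary MAX_TOKENS / STOP finishes."""

def safe_generate(generate, prompt, fallback=""):
    try:
        return generate(prompt)
    except ResponseBlockedError:
        # Treat a "blocked" finish as an empty/partial result instead of
        # letting it propagate to the top of the invoke call.
        return fallback

def blocked(prompt):
    raise ResponseBlockedError("The response was blocked.")

print(repr(safe_generate(blocked, "foo")))  # ''
```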
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1-Alpine SMP PREEMPT_DYNAMIC Wed, 29 Nov 2023 18:56:40 +0000
> Python Version: 3.11.7 (main, Feb 2 2024, 12:35:14) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.1.10
> langchain: 0.1.0
> langchain_community: 0.0.12
> langserve: Not Found | Vertex AI fail on successful finish reason | https://api.github.com/repos/langchain-ai/langchain/issues/16939/comments | 1 | 2024-02-02T12:57:29Z | 2024-05-10T16:10:00Z | https://github.com/langchain-ai/langchain/issues/16939 | 2,114,872,848 | 16,939 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Below is the complete code:
```
# !pip -q install langchain openai tiktoken chromadb pypdf sentence-transformers==2.2.2 InstructorEmbedding faiss-cpu
import os
os.environ["OPENAI_API_KEY"] = ""
from langchain.vectorstores import Chroma
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.document_loaders.csv_loader import CSVLoader
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA
from langchain.document_loaders import PyPDFLoader
from langchain.document_loaders import DirectoryLoader
# InstructorEmbedding
from InstructorEmbedding import INSTRUCTOR
from langchain.embeddings import HuggingFaceInstructEmbeddings
# OpenAI Embedding
from langchain.embeddings import OpenAIEmbeddings
"""### Load Multiple files from Directory"""
root_dir = "/content/data"
# List of file paths for your CSV files
csv_files = ['one.csv', '1.csv', 'one-no.csv', 'one-yes.csv']
# Iterate over the file paths and create a loader for each file
loaders = [CSVLoader(file_path=file_path, encoding="utf-8") for file_path in csv_files]
# Now, loaders is a list of CSVLoader instances, one for each file
# Optional: If you need to combine the data from all loaders
documents = []
for loader in loaders:
data = loader.load() # or however you retrieve data from the loader
documents.extend(data)
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=1000,
chunk_overlap=200)
texts = text_splitter.split_documents(documents)
len(texts)
import pickle
import faiss
from langchain.vectorstores import FAISS
def store_embeddings(docs, embeddings, store_name, path):
    vectorStore = FAISS.from_documents(docs, embeddings)
    with open(f"{path}/faiss_{store_name}.pkl", "wb") as f:
        pickle.dump(vectorStore, f)

def load_embeddings(store_name, path):
    with open(f"{path}/faiss_{store_name}.pkl", "rb") as f:
        VectorStore = pickle.load(f)
    return VectorStore
"""### HF Instructor Embeddings"""
from langchain.embeddings import HuggingFaceInstructEmbeddings
# from langchain_community.embeddings import HuggingFaceInstructEmbeddings
from InstructorEmbedding import INSTRUCTOR
instructor_embeddings = HuggingFaceInstructEmbeddings(model_name="hkunlp/instructor-xl",
model_kwargs={"device": "cuda"})
Embedding_store_path = f"{root_dir}/Embedding_store"
db_instructEmbedd = FAISS.from_documents(texts, instructor_embeddings)
retriever = db_instructEmbedd.as_retriever(search_kwargs={"k": 5})
retriever.search_type
retriever.search_kwargs
docs = retriever.get_relevant_documents("Can you tell me about natixis risk mapping?")
docs[0]
# create the chain to answer questions
qa_chain_instrucEmbed = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0.2),
chain_type="stuff",
retriever=retriever,
return_source_documents=True)
"""### OpenAI's Embeddings"""
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
db_openAIEmbedd = FAISS.from_documents(texts, embeddings)
retriever_openai = db_openAIEmbedd.as_retriever(search_kwargs={"k": 2})
# create the chain to answer questions
qa_chain_openai = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0.2, ),
chain_type="stuff",
retriever=retriever_openai,
return_source_documents=True)
"""### Testing both MODELS"""
## Cite sources
import textwrap
def wrap_text_preserve_newlines(text, width=110):
# Split the input text into lines based on newline characters
lines = text.split('\n')
# Wrap each line individually
wrapped_lines = [textwrap.fill(line, width=width) for line in lines]
# Join the wrapped lines back together using newline characters
wrapped_text = '\n'.join(wrapped_lines)
return wrapped_text
# def process_llm_response(llm_response):
# print(wrap_text_preserve_newlines(llm_response['result']))
# print('\nSources:')
# for source in llm_response["source_documents"]:
# print(source.metadata['source'])
def process_llm_response(llm_response):
# print('\nSources:')
# print("\n")
if llm_response["source_documents"]:
for source in llm_response["source_documents"]:
print(wrap_text_preserve_newlines(source.page_content))
source_name = source.metadata['source']
row_number = source.metadata.get('row', 'Not specified')
print(f"Source: {source_name}, Row: {row_number}\n")
else:
print("No sources available.")
query = 'Can you tell me about natixis risk mapping??'
print('-------------------Instructor Embeddings------------------\n')
llm_response = qa_chain_instrucEmbed(query)
process_llm_response(llm_response)
```
The above gives the following output:
```
-------------------Instructor Embeddings------------------
Snippet: Natixis conducted a systematic identification and materiality assessment of climate risk impacts.
This exercise leveraged existing Natixis risk mapping and relied on a qualitative analysis of the materiality
of impacts by Environmental and Social Responsibility and risk experts in the short medium term ( 5 years) and
long term (5.30 years). The analysis led to distinguish between indirect impactsresulting from Natixis
exposure to other entities (clientsassetsetc.) exposed to climate risksand direct impacts to which Natixis is
exposed through its own activities.
Source: conflicts.csv, Row: 14
Snippet: All risksincluding climate related risksare identified and evaluated at the regional level with the
help of regional experts. They cover the entire range of climate related issues (transitional and physical
issues). Risks are assessed on a gross risk basis. Gross risk is defined as risk without mitigation controls.
The risks are analyzed according to the criteria “EBIT effect� and “probability.�
Source: conflicts.csv, Row: 13
Snippet: Wärtsilä identifies and assesses on an annual basis its sustainability risksincluding climate
change risksin both its strategic and operative risk assessments.
Source: conflicts.csv, Row: 16
Snippet: Climate risk has been identified as one of the most significant risks.
Source: conflicts.csv, Row: 50
Snippet: Impact & implication Aurubis is since 2013 part of the EU-ETS.
Source: conflicts1.csv, Row: 17
```
It is returning multiple outputs from the same source, but I was expecting one output from each source document. It seems like multiple snippets from the same source should be combined, and one output returned per source based on that combined text. Can you please look into this?
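One way to get what's described — combining snippets per source before producing a single output per source — is to group the retrieved documents first. A minimal sketch using plain dicts in place of LangChain `Document` objects (field names mirror the script above):

```python
def group_snippets_by_source(source_documents):
    grouped = {}
    for doc in source_documents:
        source = doc["metadata"]["source"]
        grouped.setdefault(source, []).append(doc["page_content"])
    # One combined text per source, ready to summarise once per source
    return {src: "\n".join(parts) for src, parts in grouped.items()}

docs = [
    {"page_content": "snippet A", "metadata": {"source": "conflicts.csv", "row": 14}},
    {"page_content": "snippet B", "metadata": {"source": "conflicts.csv", "row": 13}},
    {"page_content": "snippet C", "metadata": {"source": "conflicts1.csv", "row": 17}},
]
combined = group_snippets_by_source(docs)
print(sorted(combined))  # ['conflicts.csv', 'conflicts1.csv']
print(combined["conflicts.csv"])
```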
### Idea or request for content:
_No response_ | Unable to return output from every source (i.e. every document), rather it is returning only one output even if there are multiple documents | https://api.github.com/repos/langchain-ai/langchain/issues/16938/comments | 1 | 2024-02-02T12:41:27Z | 2024-02-14T03:35:25Z | https://github.com/langchain-ai/langchain/issues/16938 | 2,114,842,720 | 16,938 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
in the below code, it returns only one answer even if there are multiple documents (multiple csv files). I'm bit skeptical on which line of code should i make changes to get the output for every answer.
```
# !pip -q install langchain openai tiktoken chromadb pypdf sentence-transformers==2.2.2 InstructorEmbedding faiss-cpu
import os
os.environ["OPENAI_API_KEY"] = ""
from langchain.vectorstores import Chroma
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.document_loaders.csv_loader import CSVLoader
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA
from langchain.document_loaders import PyPDFLoader
from langchain.document_loaders import DirectoryLoader
# InstructorEmbedding
from InstructorEmbedding import INSTRUCTOR
from langchain.embeddings import HuggingFaceInstructEmbeddings
# OpenAI Embedding
from langchain.embeddings import OpenAIEmbeddings
"""### Load Multiple files from Directory"""
root_dir = "/content/data"
# List of file paths for your CSV files
csv_files = ['one.csv', '1.csv', 'one-no.csv', 'one-yes.csv']
# Iterate over the file paths and create a loader for each file
loaders = [CSVLoader(file_path=file_path, encoding="utf-8") for file_path in csv_files]
# Now, loaders is a list of CSVLoader instances, one for each file
# Optional: If you need to combine the data from all loaders
documents = []
for loader in loaders:
data = loader.load() # or however you retrieve data from the loader
documents.extend(data)
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=1000,
chunk_overlap=200)
texts = text_splitter.split_documents(documents)
len(texts)
import pickle
import faiss
from langchain.vectorstores import FAISS
def store_embeddings(docs, embeddings, store_name, path):
    vectorStore = FAISS.from_documents(docs, embeddings)
    with open(f"{path}/faiss_{store_name}.pkl", "wb") as f:
        pickle.dump(vectorStore, f)

def load_embeddings(store_name, path):
    with open(f"{path}/faiss_{store_name}.pkl", "rb") as f:
        VectorStore = pickle.load(f)
    return VectorStore
"""### HF Instructor Embeddings"""
from langchain.embeddings import HuggingFaceInstructEmbeddings
# from langchain_community.embeddings import HuggingFaceInstructEmbeddings
from InstructorEmbedding import INSTRUCTOR
instructor_embeddings = HuggingFaceInstructEmbeddings(model_name="hkunlp/instructor-xl",
model_kwargs={"device": "cuda"})
Embedding_store_path = f"{root_dir}/Embedding_store"
db_instructEmbedd = FAISS.from_documents(texts, instructor_embeddings)
retriever = db_instructEmbedd.as_retriever(search_kwargs={"k": 5})
retriever.search_type
retriever.search_kwargs
docs = retriever.get_relevant_documents("Can you tell me about natixis risk mapping?")
docs[0]
# create the chain to answer questions
qa_chain_instrucEmbed = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0.2),
chain_type="stuff",
retriever=retriever,
return_source_documents=True)
"""### OpenAI's Embeddings"""
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
db_openAIEmbedd = FAISS.from_documents(texts, embeddings)
retriever_openai = db_openAIEmbedd.as_retriever(search_kwargs={"k": 2})
# create the chain to answer questions
qa_chain_openai = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0.2, ),
chain_type="stuff",
retriever=retriever_openai,
return_source_documents=True)
"""### Testing both MODELS"""
## Cite sources
import textwrap
def wrap_text_preserve_newlines(text, width=110):
# Split the input text into lines based on newline characters
lines = text.split('\n')
# Wrap each line individually
wrapped_lines = [textwrap.fill(line, width=width) for line in lines]
# Join the wrapped lines back together using newline characters
wrapped_text = '\n'.join(wrapped_lines)
return wrapped_text
# def process_llm_response(llm_response):
# print(wrap_text_preserve_newlines(llm_response['result']))
# print('\nSources:')
# for source in llm_response["source_documents"]:
# print(source.metadata['source'])
def process_llm_response(llm_response):
print(wrap_text_preserve_newlines(llm_response['result']))
print('\nSources:')
if llm_response["source_documents"]:
# Access the first source document
first_source = llm_response["source_documents"][0]
source_name = first_source.metadata['source']
row_number = first_source.metadata.get('row', 'Not specified')
# Print the first source's file name and row number
print(f"{source_name}, Row: {row_number}")
else:
print("No sources available.")
query = 'Can you tell me about natixis risk mapping??'
print('-------------------Instructor Embeddings------------------\n')
llm_response = qa_chain_instrucEmbed(query)
process_llm_response(llm_response)
query = 'Can you tell me about natixis risk mapping??'
print('-------------------OpenAI Embeddings------------------\n')
llm_response = qa_chain_openai(query)
process_llm_response(llm_response)
```
Below is the actual output
query = 'Can you tell me about natixis risk mapping??'
print('-------------------Instructor Embeddings------------------\n')
llm_response = qa_chain_instrucEmbed(query)
process_llm_response(llm_response)
```
-------------------Instructor Embeddings------------------
Answer:
Natixis conducts a systematic identification and materiality assessment of climate risk impacts through their
risk mapping process. This involves evaluating all risks, including climate related risks, at the regional
level with the help of regional experts. The risks are assessed on a gross risk basis, meaning without
mitigation controls, and are analyzed according to the criteria "EBIT effect" and "probability." This process
also distinguishes between indirect impacts resulting from Natixis' exposure to other entities and direct
impacts from their own activities.
Sources:
Source 1: one.csv, Row: 14
Source 2: 1.csv, Row: 13
Source 3: one-no.csv, Row: 16
Source 4: one-yes.csv, Row: 50
```
Expected output:
```
Answer:
Natixis conducts a systematic identification and materiality assessment of climate risk impacts through their
risk mapping process. This involves evaluating all risks, including climate related risks, at the regional
level with the help of regional experts. The risks are assessed on a gross risk basis, meaning without
mitigation controls, and are analyzed according to the criteria "EBIT effect" and "probability." This process
also distinguishes between indirect impacts resulting from Natixis' exposure to other entities and direct
impacts from their own activities.
Sources:
Source: one.csv, Row: 14
Answer:
I'm not sure.
Sources:
Source: 1.csv, Row: 13
```
so on
It has returned only one answer across multiple sources, but I need an answer for each and every source. Can anyone please help me construct the code?
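A possible workaround is to group the retrieved snippets by their source file and run the chain once per source, instead of once over the stuffed context of all sources. Below is a minimal sketch of that grouping: `answer_fn` is a placeholder for a per-source chain call, and the document dicts only mimic the `page_content`/`metadata` shape of real `Document` objects.

```python
from collections import defaultdict

def answer_per_source(docs, answer_fn):
    """Group retrieved snippets by source file and answer each group separately."""
    by_source = defaultdict(list)
    for doc in docs:
        by_source[doc["metadata"]["source"]].append(doc["page_content"])
    return {source: answer_fn("\n".join(snippets)) for source, snippets in by_source.items()}

# Stub standing in for a real qa_chain call (hypothetical); a real run would call
# the LLM once per source instead of once over the combined context.
docs = [
    {"page_content": "risk mapping details", "metadata": {"source": "one.csv"}},
    {"page_content": "other details", "metadata": {"source": "1.csv"}},
]
answers = answer_per_source(docs, lambda ctx: "answer based on: " + ctx)
print(sorted(answers))  # ['1.csv', 'one.csv']
```

With a real chain, the lambda would be replaced by something like `lambda ctx: qa_chain({"query": question, "context": ctx})`, adapted to whatever per-source retriever filter you use.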
### Idea or request for content:
_No response_ | Unable to return output from every source (i.e. every document), rather it is returning only one output even if there are multiple documents | https://api.github.com/repos/langchain-ai/langchain/issues/16935/comments | 7 | 2024-02-02T11:41:23Z | 2024-03-19T05:56:19Z | https://github.com/langchain-ai/langchain/issues/16935 | 2,114,743,405 | 16,935
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
This is how I get the Azure OpenAI LLM object
```python
def getLlmObject():
    getToken()
    model = AzureChatOpenAI(
        openai_api_version=os.environ['OPENAI_API_VERSION'],
        azure_deployment=os.environ['AZURE_OPENAI_DEPLOYMENT'],
        azure_endpoint=os.environ['AZURE_ENDPOINT'],
        openai_api_type='azure',
        user=f'{{"appkey": "{APP_KEY}"}}'
    )
    return model
```
It would be ideal to change line 205 to detect the non-streaming capability of the model, or to provide an option to set ```streaming=False``` when instantiating the ```AzureChatOpenAI``` class.
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/root/volume1/iris/onex-gen-ai-experimental/crew/crew.py", line 93, in <module>
result = crew.kickoff()
File "/volume1/anaconda3/envs/iris-experimental/lib/python3.10/site-packages/crewai/crew.py", line 127, in kickoff
return self._sequential_loop()
File "/volume1/anaconda3/envs/iris-experimental/lib/python3.10/site-packages/crewai/crew.py", line 134, in _sequential_loop
task_output = task.execute(task_output)
File "/volume1/anaconda3/envs/iris-experimental/lib/python3.10/site-packages/crewai/task.py", line 56, in execute
result = self.agent.execute_task(
File "/volume1/anaconda3/envs/iris-experimental/lib/python3.10/site-packages/crewai/agent.py", line 146, in execute_task
result = self.agent_executor.invoke(
File "/volume1/anaconda3/envs/iris-experimental/lib/python3.10/site-packages/langchain/chains/base.py", line 162, in invoke
raise e
File "/volume1/anaconda3/envs/iris-experimental/lib/python3.10/site-packages/langchain/chains/base.py", line 156, in invoke
self._call(inputs, run_manager=run_manager)
File "/volume1/anaconda3/envs/iris-experimental/lib/python3.10/site-packages/crewai/agents/executor.py", line 59, in _call
next_step_output = self._take_next_step(
File "/volume1/anaconda3/envs/iris-experimental/lib/python3.10/site-packages/langchain/agents/agent.py", line 1097, in _take_next_step
[
File "/volume1/anaconda3/envs/iris-experimental/lib/python3.10/site-packages/langchain/agents/agent.py", line 1097, in <listcomp>
[
File "/volume1/anaconda3/envs/iris-experimental/lib/python3.10/site-packages/crewai/agents/executor.py", line 103, in _iter_next_step
output = self.agent.plan(
File "/volume1/anaconda3/envs/iris-experimental/lib/python3.10/site-packages/langchain/agents/agent.py", line 387, in plan
for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
File "/volume1/anaconda3/envs/iris-experimental/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2424, in stream
yield from self.transform(iter([input]), config, **kwargs)
File "/volume1/anaconda3/envs/iris-experimental/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2411, in transform
yield from self._transform_stream_with_config(
File "/volume1/anaconda3/envs/iris-experimental/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1497, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
File "/volume1/anaconda3/envs/iris-experimental/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2375, in _transform
for output in final_pipeline:
File "/volume1/anaconda3/envs/iris-experimental/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1035, in transform
for chunk in input:
File "/volume1/anaconda3/envs/iris-experimental/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 4145, in transform
yield from self.bound.transform(
File "/volume1/anaconda3/envs/iris-experimental/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1045, in transform
yield from self.stream(final, config, **kwargs)
File "/volume1/anaconda3/envs/iris-experimental/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 250, in stream
raise e
File "/volume1/anaconda3/envs/iris-experimental/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 242, in stream
assert generation is not None
AssertionError
### Description
I use enterprise Azure OpenAI instance to work with CrewAI (For Autonomous Agents). Our Azure OpenAI endpoint does not support streaming. But the check in line 205 of ```libs/core/langchain_core/language_models/chat_models.py``` (https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/language_models/chat_models.py) causes the else block to get executed and thus raising the error during execution of the statement ``` assert generation is not None ```
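A stripped-down model of the failing path (a sketch, not the actual LangChain source) shows why the assertion trips when the endpoint never yields a chunk:

```python
def stream(chunk_iterator):
    """Mimics the shape of chat_models.py's stream(): track the first generation,
    yield chunks, and assert a generation was seen at the end."""
    generation = None
    for chunk in chunk_iterator:
        if generation is None:
            generation = chunk
        yield chunk
    assert generation is not None  # AssertionError when no chunks ever arrive

# A non-streaming endpoint behaves like an empty iterator:
try:
    list(stream(iter([])))
except AssertionError:
    print("assertion fails when the endpoint yields nothing")
```

So any code path that routes a non-streaming deployment through the streaming branch will hit `assert generation is not None`, which matches the traceback above.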
### System Info
```linux
(condaenv) [root@iris crew]# pip freeze | grep langchain
langchain==0.1.4
langchain-community==0.0.16
langchain-core==0.1.17
langchain-experimental==0.0.20
langchain-openai==0.0.2.post1
(condaenv) [root@iris crew]# python --version
Python 3.10.9
``` | LangChain Core Chatmodels.py goes to a streaming block causing "generation is not None" assertion error when the AzureChatOpenAI llm object does not support streaming. | https://api.github.com/repos/langchain-ai/langchain/issues/16930/comments | 5 | 2024-02-02T08:06:16Z | 2024-08-05T17:09:07Z | https://github.com/langchain-ai/langchain/issues/16930 | 2,114,346,425 | 16,930 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
I do have below code
```
if len(year_lst) != 0 and len(company_lst) == 0:
    response = []
    for i in year_lst:
        vectorstore_retriver_args = {
            "k": vector_count,
            "pre_filter": {"$and": [{"year": {"$eq": int(i.strip())}}]},
        }
        final_question = question.replace("[ ]", i)
        print(f"Final Question : {final_question}")
        query_llm = RetrievalQA.from_chain_type(
            llm=llm,
            verbose=True,
            chain_type="stuff",
            retriever=vectorstore_ind.as_retriever(
                search_kwargs=vectorstore_retriver_args
            ),
            return_source_documents=True,
            chain_type_kwargs={"prompt": prompt},
        )
        response.append(query_llm({"query": final_question.strip().lower()}))
```
I have developed a program that allows uploading multiple PDF files, with a configurable "vector count" per PDF. This vector count determines how many similar snippets the program will identify and return from each uploaded PDF; for example, if the vector count is set to 5, the program will find and return 5 similar snippets from each PDF file. My question concerns how the program processes these snippets to answer queries: does it compile the top 5 similar snippets from each PDF, concatenate them together, and generate a response based on the combined content from each file? Or does it select the most relevant snippet from those top 5 and base its response solely on that single snippet? I just want to know how **RetrievalQA** works.
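For the `stuff` chain type, the typical behavior (sketched below with a fake LLM, not the actual implementation) is to concatenate all k retrieved snippets into a single context and make one LLM call, producing one combined answer rather than one answer per snippet:

```python
def stuff_chain(question, snippets, llm):
    context = "\n\n".join(snippets)  # all k retrieved snippets concatenated
    prompt = f"Use the context to answer.\nContext:\n{context}\nQuestion: {question}"
    return llm(prompt)               # one call, one combined answer

# fake LLM that just reports how many snippets ended up in its prompt
fake_llm = lambda prompt: f"answered from {prompt.count('snippet')} snippets"
print(stuff_chain("q?", ["snippet 1", "snippet 2", "snippet 3"], fake_llm))
# answered from 3 snippets
```

So with `k` snippets per file, the chain builds its answer from all of them at once; if you need one answer per snippet or per file, you would have to run the chain separately for each.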
### Idea or request for content:
_No response_ | How the 'RetrievalQA' function works? | https://api.github.com/repos/langchain-ai/langchain/issues/16927/comments | 3 | 2024-02-02T06:58:40Z | 2024-02-14T03:35:24Z | https://github.com/langchain-ai/langchain/issues/16927 | 2,114,240,162 | 16,927 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
Below is my code
``` python
def generate_embeddings(config: dict = None, urls=None, file_path=None, persist_directory=None):
    texts = None
    if file_path:
        _, file_extension = os.path.splitext(file_path)
        file_extension = file_extension.lower()
        image_types = ['jpeg', 'jpg', 'png', 'gif']
        if file_path.lower().endswith(".pdf"):
            loader = PyPDFLoader(file_path)
            document = loader.load()
            text_splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=300)
            texts = text_splitter.split_documents(documents=document)
        elif file_path.lower().endswith(".csv"):
            loader = CSVLoader(file_path, encoding="utf-8", csv_args={'delimiter': ','})
            document = loader.load()
            text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=300)
            texts = text_splitter.split_documents(documents=document)
        elif file_path.lower().endswith(".xlsx") or file_path.lower().endswith(".xls"):
            loader = UnstructuredExcelLoader(file_path)
            document = loader.load()
            text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
            texts = text_splitter.split_documents(documents=document)
        elif file_path.lower().endswith(".docx") or file_path.lower().endswith(".doc"):
            loader = UnstructuredWordDocumentLoader(file_path)
            document = loader.load()
            text_splitter = RecursiveCharacterTextSplitter(chunk_size=400, chunk_overlap=0)
            texts = text_splitter.split_documents(documents=document)
        elif any(file_path.lower().endswith(f".{img_type}") for img_type in image_types):
            loader = UnstructuredImageLoader(file_path)
            document = loader.load()
            text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
            texts = text_splitter.split_documents(documents=document)
        elif file_path.lower().endswith(".txt"):
            loader = TextLoader(file_path)
            document = loader.load()
            text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
            texts = text_splitter.split_documents(documents=document)
    elif config is not None:
        confluence_url = config.get("confluence_url", None)
        username = config.get("username", None)
        api_key = config.get("api_key", None)
        space_key = config.get("space_key", None)
        documents = []
        embedding = OpenAIEmbeddings()
        loader = ConfluenceLoader(
            url=confluence_url,
            username=username,
            api_key=api_key
        )
        for space_key in space_key:
            try:
                if space_key[1] is True:
                    print('add attachment')
                    documents.extend(loader.load(space_key=space_key[0], include_attachments=True, limit=100))
                    text_splitter = CharacterTextSplitter(chunk_size=6000, chunk_overlap=50)
                    texts = text_splitter.split_documents(documents)
                    text_splitter = TokenTextSplitter(chunk_size=1000, chunk_overlap=10, encoding_name="cl100k_base")
                    texts = text_splitter.split_documents(texts)
                else:
                    print("without attachment")
                    documents.extend(loader.load(space_key=space_key[0], limit=100))
                    text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=50)
                    texts = text_splitter.split_documents(documents)
                    text_splitter = TokenTextSplitter(chunk_size=1000, chunk_overlap=10, encoding_name="cl100k_base")
                    texts = text_splitter.split_documents(texts)
            except:
                documents = []
    elif urls:
        all_urls = []
        for url in urls:
            if url[1] is True:
                crawl_data = crawl(url[0])
                all_urls.extend(crawl_data)
            if url[1] is False:
                dummy = []
                dummy.append(url[0])
                all_urls.extend(dummy)
        loader = UnstructuredURLLoader(urls=all_urls)
        urlDocument = loader.load()
        text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=50)
        texts = text_splitter.split_documents(documents=urlDocument)
    else:
        raise ValueError("Invalid source_type. Supported values are 'pdf', 'confluence', and 'url'.")
    if texts:
        embedding = OpenAIEmbeddings()
        Chroma.from_documents(documents=texts, embedding=embedding, persist_directory=persist_directory)
        file_crawl_status = True
        file_index_status = True
    else:
        file_crawl_status = False
        file_index_status = False
    return file_crawl_status, file_index_status
def retreival_qa_chain(chroma_db_path):
    embedding = OpenAIEmbeddings()
    vectordb = Chroma(persist_directory=chroma_db_path, embedding_function=embedding)
    llm = ChatOpenAI(model="gpt-3.5-turbo-16k", temperature=0.1)
    retriever = vectordb.as_retriever(search_kwargs={"k": 2})
    qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=retriever, return_source_documents=True)
    return qa
```
### Error Message and Stack Trace (if applicable)
[02/Feb/2024 11:54:15] "GET /create-project/ HTTP/1.1" 200 19747
Bad Request: /create-project/
[02/Feb/2024 11:56:43] "POST /create-project/ HTTP/1.1" 400 45
### Description
# Confluence Project Issue
## Problem Description
I am working on a Confluence project where I have implemented the `include_attachments=True` feature. The functionality works fine locally, and the code is deployed on two servers. However, on one of the servers, I am encountering a "Bad request" error. Despite having all the necessary dependencies installed, the issue persists.
## Dependency Information
Here are the dependencies installed on all servers:
- Django 4.0
- Django Rest Framework
- Langchain 0.1.1
- Markdownify
- Pillow
- Docx2txt
- Xlrd
- Pandas
- Reportlab
- Svglib
- Pdf2image
- Chromadb
- Unstructured
- OpenAI
- Pypdf
- Tiktoken
- Django-cors-headers
- Django-environ
- Pytesseract 0.3.10
- Beautifulsoup4 4.12.2
- Atlassian-python-api 3.41.9
- Lxml
- Langchain-community
- Langchain-openai
- Python-docx
- Unstructured-inference
- Unstructured[all-docs]
- Pydantic
- Langchainhub
## Additional Information
- The issue occurs specifically on one server.
- The "include attachments" feature is working fine on the local environment and another server.
- All dependencies are installed on the problematic server.
- The server where the issue occurs has Django 4.0 installed.
## Steps Taken
- Checked server logs for any specific error messages.
- Verified that all necessary dependencies are installed on the problematic server.
- Ensured that the codebase is the same across all servers.
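One extra isolation step worth trying on the problematic server: build and send the same kind of authenticated request the loader makes, outside of LangChain. The endpoint below is the standard Confluence Cloud content API; the URL, credentials, and space key are placeholders. The sketch only builds the request object, so the headers can be inspected before sending it with `urllib.request.urlopen(req)` or the equivalent `curl`.

```python
import base64
import urllib.request

def build_confluence_probe(url, user, api_key, space_key):
    """Build the authenticated GET the loader would issue, for manual testing."""
    req = urllib.request.Request(f"{url}/rest/api/content?spaceKey={space_key}&limit=1")
    token = base64.b64encode(f"{user}:{api_key}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    return req

req = build_confluence_probe("https://example.atlassian.net/wiki",
                             "user@example.com", "API_KEY", "SPACE")
print(req.full_url)
```

If this probe also returns a 400 from that server, the problem is network egress or credentials on that host rather than the LangChain code.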
## Error Message
On the problematic server, I am receiving a "Bad request" error.
## Request for Assistance
I would appreciate any guidance or suggestions on how to troubleshoot and resolve this issue. If anyone has encountered a similar problem or has insights into Confluence projects and attachment inclusion, your assistance would be invaluable.
Thank you!
### System Info
below is my server configuration
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
total used free shared buff/cache available
Mem: 7.7Gi 5.4Gi 238Mi 20Mi 2.1Gi 2.1Gi
Swap: 8.0Gi 2.1Gi 5.9Gi | Include attachments=True is not working in Confluence | https://api.github.com/repos/langchain-ai/langchain/issues/16926/comments | 3 | 2024-02-02T06:50:43Z | 2024-07-18T16:07:44Z | https://github.com/langchain-ai/langchain/issues/16926 | 2,114,223,300 | 16,926 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
Calling `record_manager.get_time` (or `record_manager.get_time()`) raises the error below:
> record_manager.update(
> File ".../venv/lib/python3.11/site-packages/langchain/indexes/_sql_record_manager.py", line 269, in update
> update_time = self.get_time()
>
### Error Message and Stack Trace (if applicable)
> ERROR: Exception in ASGI application
> Traceback (most recent call last):
> File "/chat-service/main.py", line 196, in ingress
> ingest_docs()
> File "/chat-service/ingest.py", line 168, in ingest_docs
> indexing_stats = index(
> ^^^^^^
> File "/chat-service/_index.py", line 158, in index
> record_manager.update(
> File "/chat-service/venv/lib/python3.11/site-packages/langchain/indexes/_sql_record_manager.py", line 269, in update
> update_time = self.get_time()
> ^^^^^^^^^^^^^^^
> File "/chat-service/venv/lib/python3.11/site-packages/langchain/indexes/_sql_record_manager.py", line 205, in get_time
> raise NotImplementedError(f"Not implemented for dialect {self.dialect}")
> NotImplementedError: Not implemented for dialect mysql
>
> During handling of the above exception, another exception occurred:
>
### Description
- `record_manager.get_time()` is not working
- calling `record_manager.get_time` raises the error above (`NotImplementedError: Not implemented for dialect mysql`)
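The shape of the failure can be modeled in isolation. The sketch below uses hypothetical return values rather than the real SQL: `get_time` dispatches on the SQLAlchemy dialect name and raises for anything outside the supported set, which is why a MySQL connection string fails here.

```python
def get_time(dialect):
    """Toy dispatch mirroring the dialect check; timestamps are made up."""
    if dialect == "sqlite":
        return 1700000000.0  # would come from a SQLite-specific time query
    elif dialect == "postgresql":
        return 1700000000.0  # would come from a PostgreSQL-specific time query
    raise NotImplementedError(f"Not implemented for dialect {dialect}")

try:
    get_time("mysql")
except NotImplementedError as e:
    print(e)  # Not implemented for dialect mysql
```

Until the record manager grows a MySQL branch, any `update()` call (which fetches the time first) will fail the same way.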
### System Info
python3.11 -m pip freeze | grep langchain
langchain==0.1.0
langchain-community==0.0.12
langchain-core==0.1.9
langchain-google-genai==0.0.4
langchain-google-vertexai==0.0.1.post1
langchain-openai==0.0.2 | record_manager.get_time error | https://api.github.com/repos/langchain-ai/langchain/issues/16919/comments | 1 | 2024-02-02T04:21:56Z | 2024-05-10T16:09:55Z | https://github.com/langchain-ai/langchain/issues/16919 | 2,114,024,734 | 16,919 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
The following code:

```python
from langchain.chains import LLMChain
from langchain.chains import ConversationChain

my_functions = [
    {
        'name': 'raise_ticket',
        'description': 'Get the details for a ticket',
        'parameters': {
            'type': 'object',
            'properties': {
                'projectName': {
                    'type': 'string',
                    'description': "Project Name : (e.g. 'ABC', 'XYZ')"
                },
                'issueType': {
                    'type': 'string',
                    'description': "Issue Type : (e.g. 'Change Request', 'Service Request')"
                },
                ...
                ...
            },
            "required": ["projectName", "issueType"]
        }
    }
]

llm = ChatOpenAI(temperature=0.0, model="gpt-3.5-turbo-0613")
memory = ConversationBufferMemory()
conversation = ConversationChain(
    llm=llm,
    memory=memory,
    verbose=False
)

message = conversation.invoke([HumanMessage(content='What are the choices of the Issue Type')],
                              functions=my_functions,
                              memory=memory)
```
### Error Message and Stack Trace (if applicable)
Not an error, but the function call is not happening; I am always getting a generic response from the model.
### Description
I was expecting the LangChain library to recognize the function, but it is not recognizing it.
The same function works when calling OpenAI directly.
### System Info
langchain==0.1.4
langchain-community==0.0.16
langchain-core==0.1.17 | Function calling not working with ConversationChain with Memory | https://api.github.com/repos/langchain-ai/langchain/issues/16917/comments | 3 | 2024-02-02T03:19:20Z | 2024-07-28T16:05:48Z | https://github.com/langchain-ai/langchain/issues/16917 | 2,113,960,320 | 16,917 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
## Goal
Improve streaming in LangChain for chat models / language models.
## Background
Many chat and language models implement a streaming mode in which they stream tokens one at a time.
LangChain has a callback system that is useful for logging and important APIs like "stream", "stream_log" and "stream_events".
Currently many models incorrectly yield the token (chat generation) before invoking the callback.
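A minimal sketch of the required ordering, in plain Python and independent of any model class:

```python
events = []

def stream_tokens(tokens):
    for tok in tokens:
        events.append(("callback", tok))  # on_llm_new_token fires first...
        yield tok                          # ...then the token is yielded

for tok in stream_tokens(["Hel", "lo"]):
    events.append(("yielded", tok))

print(events)
# [('callback', 'Hel'), ('yielded', 'Hel'), ('callback', 'lo'), ('yielded', 'lo')]
```

With the order reversed, a consumer that reacts to the yielded token (or stops iterating early) could observe a chunk before the callback system ever saw it, which breaks `astream_log` / `astream_events` style APIs.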
## Acceptance criteria
For a PR to be accepted and merged, the PR should:
- [ ] Fix the code to make sure that the callback is called before the token is yielded
- [ ] Link to this issue
- [ ] Change ONE and only ONE model
- [ ] FIx sync and async implementation if both are defined
## Example PR
Here is an example PR that shows the fix for the OpenAI chat model:
https://github.com/langchain-ai/langchain/pull/16909
## Find models that need to be fixed
The easiest way to find places in the code that may need to be fixed is using git grep
```bash
git grep -C 5 "\.on_llm_new"
```
Examine the output to determine whether the callback is called before the token is yielded (correct) or after (needs to be fixed). | Callback for on_llm_new_token should be invoked before the token is yielded by the model | https://api.github.com/repos/langchain-ai/langchain/issues/16913/comments | 1 | 2024-02-02T00:39:51Z | 2024-06-27T20:09:31Z | https://github.com/langchain-ai/langchain/issues/16913 | 2,113,804,737 | 16,913 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
Any chat models that support function calling should have an example of function calling in their integration page | DOC: Add function calling example to all chat model integration pages that support | https://api.github.com/repos/langchain-ai/langchain/issues/16911/comments | 0 | 2024-02-02T00:26:53Z | 2024-05-10T16:09:45Z | https://github.com/langchain-ai/langchain/issues/16911 | 2,113,792,568 | 16,911 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
#### code
```python
from langchain.text_splitter import RecursiveCharacterTextSplitter
from sentence_transformers import SentenceTransformer
text = """
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Curabitur varius sodales bibendum. Nulla nec ornare ipsum. Nam eleifend convallis mi eget gravida. Cras mi lacus, varius ut feugiat et, sollicitudin ultricies ipsum. Cras varius odio eget facilisis scelerisque. Sed mauris risus, luctus at sagittis at, venenatis eget turpis. Ut euismod non est a accumsan. Sed pretium velit sed tellus iaculis gravida a sed elit. Nam luctus tristique sem et tincidunt. Nam cursus semper lectus, non dapibus nunc. Nulla et lectus in erat tempus eleifend sit amet non purus. Proin ut vestibulum lectus, vitae convallis tortor.
Ut turpis nibh, lacinia in odio ac, interdum volutpat lectus. Donec fermentum hendrerit arcu et fringilla. Etiam placerat vestibulum magna, non pellentesque orci convallis ac. Nunc eget risus pharetra, consectetur lacus eget, vehicula est. Quisque blandit orci in posuere porttitor. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Praesent pellentesque varius nibh ut iaculis. Morbi mi justo, imperdiet in vestibulum at, condimentum quis sem. Aliquam malesuada lorem tortor, eu accumsan dui euismod quis. Nullam rutrum libero at mauris mollis sodales. Cras scelerisque non risus vel auctor. Suspendisse dapibus volutpat eros id malesuada.
Curabitur dictum laoreet ultrices. Nulla orci erat, pharetra euismod dictum a, consequat vel lorem. Aenean euismod massa felis, ut lobortis nisl accumsan in. Duis dolor lacus, tempor in rhoncus sed, fringilla id mi. Duis in eros at purus sagittis ultricies vitae a orci. Maecenas felis nunc, dapibus nec turpis id, consectetur semper eros. Vivamus tincidunt pretium urna, nec condimentum felis ultrices ut. Donec tempor urna in nisl pharetra, eu viverra enim facilisis. Nullam blandit nibh dictum vestibulum congue. Duis interdum ornare rutrum. Maecenas aliquam sem non lorem venenatis, eget facilisis mauris finibus. In hac habitasse platea dictumst. Vivamus vitae tincidunt eros.
Curabitur ac diam vitae ligula elementum aliquam. Donec posuere egestas pretium. Nulla eget lorem dapibus, tempus sapien maximus, eleifend dui. Aenean placerat nec nisl at tincidunt. Fusce vel nibh nec sapien rutrum varius sed ullamcorper nisi. Duis venenatis, tortor non hendrerit rhoncus, augue enim sollicitudin lectus, in accumsan ante nulla a nunc. Donec odio arcu, sodales in ligula vitae, dignissim molestie neque. Pellentesque dignissim pharetra nisi sit amet molestie. Curabitur at laoreet purus. Curabitur posuere sapien eu urna iaculis egestas eget et ipsum. Fusce porta sit amet orci non auctor. Praesent facilisis porttitor luctus. Interdum et malesuada fames ac ante ipsum primis in faucibus.
Suspendisse accumsan ante eget magna condimentum, sit amet eleifend enim auctor. Maecenas lorem enim, tempus at lacinia non, condimentum sed justo. Nam iaculis viverra lorem ut mollis. Vivamus convallis lacus quis diam pellentesque pulvinar. Donec vel mauris mattis, dictum nulla vel, volutpat metus. Sed tincidunt mi vitae sem tristique, vitae pretium sapien facilisis. Vestibulum condimentum dui dictum, molestie mauris et, pharetra tortor. Nunc feugiat orci ac lectus imperdiet, ut bibendum quam egestas. Mauris bibendum at nisl eu placerat. Aenean mollis ligula et metus tincidunt aliquam. Integer maximus porta purus at convallis. Maecenas lectus dui, tempus eget efficitur sit amet, ullamcorper ut mauris.
"""
model_name = "distilbert-base-uncased"
model = SentenceTransformer(model_name)
tokenizer = model.tokenizer

def cnt(txt):
    return len(tokenizer.tokenize(txt))

# Using from_huggingface_tokenizer
splitter = RecursiveCharacterTextSplitter.from_huggingface_tokenizer(tokenizer, chunk_size=200, chunk_overlap=10, separators=[" "])
chunks = splitter.split_text(text)
print("chunk sizes: from_huggingface_tokenizer, which uses tokenizer.encode under the hood:\n", [cnt(c) for c in chunks])

# same tokenizer, but with tokenizer.tokenize as the length function
splitter = RecursiveCharacterTextSplitter(chunk_size=200, chunk_overlap=10, separators=[" "], length_function=cnt)
chunks = splitter.split_text(text)
print("Using length function with tokenizer.tokenize \n", [cnt(c) for c in chunks])
```
#### output
``` python
chunk sizes: from_huggingface_tokenizer, which uses tokenizer.encode uner the hood:
[77, 78, 71, 78, 75, 78, 78, 73, 74, 78, 80, 75, 79, 74, 76, 80, 76, 22]
Using length function with tokenizer.tokenize
[198, 198, 200, 199, 200, 198, 133]
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Bug: TextSplitter produces smaller chunks than specified with chunk_size, when instantiated `from_huggingface_tokenizer()`.
Explanation: `_merge_splits` counts `total` incorrectly when instantiated with `from_huggingface_tokenizer()`. `BertTokenizer` appends start and end tokens (`[101, 102]`) to any string it is run on. This results in incorrect string length computation during the splits merge, with `total` overestimating the "real" chunk length. I.e.:
```python
>>> self._length_function("")
2
```
therefore, here
```python
177: separator_len = self._length_function(separator)
...
183: _len = self._length_function(d)
...
210: total += _len + (separator_len if len(current_doc) > 1 else 0)
```
`total` overcounts the real chunk length, so the merge stops earlier than reaching the desired chunk_size. **This impacts resulting chunk sizes significantly when using very commonly occurring separators, i.e. whitespaces. See example.**
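The overcount can be reproduced without a real tokenizer. In the sketch below, tokenization is simulated by whitespace splitting, and `encode()` is assumed to add two special tokens per call (the `[CLS]`/`[SEP]` pair), which is the behavior described above:

```python
def tokenize_len(text):
    return len(text.split())

def encode_len(text):
    return tokenize_len(text) + 2  # simulated [CLS]/[SEP] pair added by every encode() call

per_split_overcount = encode_len("word") - tokenize_len("word")
per_separator_overcount = encode_len(" ") - tokenize_len(" ")
print(per_split_overcount, per_separator_overcount)  # 2 2
```

Every split and every separator the merge loop measures is over-counted by 2 tokens, so `total` reaches `chunk_size` long before the merged chunk actually contains that many tokens, which matches the ~78-token chunks observed for a 200-token budget.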
Suggested solution:
Replace all occurrences of `tokenizer.encode` for `tokenizer.tokenize` in [text_splitter.py](https://github.com/langchain-ai/langchain/blob/7d03d8f586f123e5059cbd0f45cb4c701bf0976f/libs/langchain/langchain/text_splitter.py#L702).
i.e.
```python
def _huggingface_tokenizer_length(text: str) -> int:
return len(tokenizer.encode(text)) # replace this line with return len(tokenizer.tokenize(text))
```
### System Info
python -m langchain_core.sys_info
System Information
------------------
> OS: Linux
> OS Version: #40-Ubuntu SMP PREEMPT_DYNAMIC Tue Nov 14 14:18:00 UTC 2023
> Python Version: 3.11.7 (main, Dec 15 2023, 18:12:31) [GCC 11.2.0]
Package Information
-------------------
> langchain_core: 0.1.18
> langchain: 0.1.5
> langchain_community: 0.0.17
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
| Small chunks bug in TextSplitter when instantiated with from_huggingface_tokenizer | https://api.github.com/repos/langchain-ai/langchain/issues/16894/comments | 3 | 2024-02-01T19:20:15Z | 2024-05-31T23:49:19Z | https://github.com/langchain-ai/langchain/issues/16894 | 2,113,287,411 | 16,894 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
import os
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_google_genai import GoogleGenerativeAIEmbeddings
from langchain_community.vectorstores.pgvector import PGVector
#import psycopg2
embeddings = GoogleGenerativeAIEmbeddings(model="models/embedding-001")
model = ChatGoogleGenerativeAI(model="gemini-pro")
data = [
    "Leo has salary 90000 in IT department.",
    "Mary has salary 60000 in IT department.",
    "Tom has salary 30000 in IT department."
]

CONNECTION_STRING = "postgresql+psycopg2://postgres:1111@localhost:5432/b2b"
COLLECTION_NAME = 'salary_vectors'

db = PGVector.from_texts(
    embedding=embeddings,
    texts=data,
    collection_name=COLLECTION_NAME,
    connection_string=CONNECTION_STRING,
)
print("Done")
### Error Message and Stack Trace (if applicable)
Exception ignored in: <function PGVector.__del__ at 0x000001E3D6CB5080>
Traceback (most recent call last):
File "C:\Users\philip.chao\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain_community\vectorstores\pgvector.py", line 229, in __del__
AttributeError: 'NoneType' object has no attribute 'Connection'
### Description
Postgres version 15 on Docker.
PS: this works with OpenAI embeddings, but fails when using Google Gemini.
### System Info
OS: Windows 10 pro | PGVector from_texts got error when using gemini | https://api.github.com/repos/langchain-ai/langchain/issues/16879/comments | 2 | 2024-02-01T13:43:34Z | 2024-05-15T16:07:24Z | https://github.com/langchain-ai/langchain/issues/16879 | 2,112,553,113 | 16,879 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
from langchain_mistralai.embeddings import MistralAIEmbeddings
all_documents = []
embeddings = MistralAIEmbeddings()
for file_data in selected_files:
with tempfile.NamedTemporaryFile(mode="wb", delete=False, suffix='.pdf') as temp_file:
temp_file.write(file_data)
file_name = temp_file.name
loader = PyPDFLoader(file_name).load()
docs = text_splitter.split_documents(loader)
all_documents.extend(docs)
db = FAISS.from_documents(all_documents,embeddings)
### Error Message and Stack Trace (if applicable)
An error occurred with MistralAI: Cannot stream response. Status: 400
### Description
When uploading multiple PDFs, I cannot do document probing because the Mistral AI embeddings call errors out.
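One common cause of a 400 from embedding endpoints is a single request growing too large once many PDF chunks are embedded at once. A generic workaround (an assumption — the 400 here may have a different cause) is to send the chunks in small batches:

```python
def batched(items, batch_size):
    """Yield successive fixed-size chunks of `items` so that each embedding
    request stays small (generic helper, not a LangChain API)."""
    if batch_size < 1:
        raise ValueError("batch_size must be >= 1")
    for i in range(0, len(items), batch_size):
        yield items[i : i + batch_size]


texts = [f"chunk {i}" for i in range(10)]
print([len(b) for b in batched(texts, 4)])  # [4, 4, 2]
```

Each batch could then be passed to `embeddings.embed_documents(...)` and the vectors accumulated, or the FAISS index built incrementally with `add_documents`.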
### System Info
langchain==0.1.0
langchain-community==0.0.10
langchain-core==0.1.8
langchain-experimental==0.0.49
langchain-google-genai==0.0.5
langchain-mistralai==0.0.3
langchain-nvidia-ai-endpoints==0.0.1
python 3.11.7
macos 14.2.1 (23C71) sonoma | Mistral AI embedding cannot stream response. status 400 | https://api.github.com/repos/langchain-ai/langchain/issues/16869/comments | 3 | 2024-02-01T10:51:40Z | 2024-05-09T16:10:09Z | https://github.com/langchain-ai/langchain/issues/16869 | 2,112,193,668 | 16,869 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
Same code as in the docs [here](https://python.langchain.com/docs/integrations/llms/llm_caching#redis-cache)
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/Users/dingusagar/inference.py", line 181, in infer
response = self.chain.invoke(inputs)
File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/langchain/schema/runnable/base.py", line 1213, in invoke
input = step.invoke(
File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/langchain/chat_models/base.py", line 142, in invoke
self.generate_prompt(
File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/langchain/chat_models/base.py", line 459, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/langchain/chat_models/base.py", line 349, in generate
raise e
File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/langchain/chat_models/base.py", line 339, in generate
self._generate_with_cache(
File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/langchain/chat_models/base.py", line 500, in _generate_with_cache
cache_val = llm_cache.lookup(prompt, llm_string)
File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/langchain/cache.py", line 392, in lookup
results = self.redis.hgetall(self._key(prompt, llm_string))
File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/redis/commands/core.py", line 4867, in hgetall
return self.execute_command("HGETALL", name)
File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/redis/client.py", line 1255, in execute_command
conn = self.connection or pool.get_connection(command_name, **options)
File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/redis/connection.py", line 1441, in get_connection
connection.connect()
File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/redis/connection.py", line 704, in connect
raise ConnectionError(self._error_message(e))
redis.exceptions.ConnectionError: Error 61 connecting to 127.0.0.1:6379. Connection refused.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/redis/connection.py", line 698, in connect
sock = self.retry.call_with_retry(
File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/redis/retry.py", line 46, in call_with_retry
return do()
File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/redis/connection.py", line 699, in <lambda>
lambda: self._connect(), lambda error: self.disconnect(error)
File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/redis/connection.py", line 1089, in _connect
sock = super()._connect()
File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/redis/connection.py", line 987, in _connect
raise err
File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/redis/connection.py", line 975, in _connect
sock.connect(socket_address)
ConnectionRefusedError: [Errno 61] Connection refused
```
### Description
I am using RedisCache as per the docs [here](https://python.langchain.com/docs/integrations/llms/llm_caching#redis-cache)
I was testing how robust the system is if the Redis connection fails for some reason or Redis runs out of memory.
It looks like if the Redis connection URL is not reachable, the system throws an error.
I wanted LangChain to internally handle a Redis failure and fall back to a direct API call to the LLM, logging the error of course. This would make the system more robust to failures.
Does it make sense to add this feature? If so, I can help with raising a PR.
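For anyone hitting this in the meantime, the fallback behaviour can be approximated by wrapping the cache so that backend failures degrade to a cache miss. This is an illustrative sketch only — `FaultTolerantCache` and the stand-in `BrokenRedis` are hypothetical, not LangChain classes:

```python
class FaultTolerantCache:
    """Wrap any cache so lookup/update failures degrade to a cache miss
    instead of propagating (hypothetical sketch, not a LangChain API)."""

    def __init__(self, inner):
        self.inner = inner

    def lookup(self, prompt, llm_string):
        try:
            return self.inner.lookup(prompt, llm_string)
        except Exception:
            return None  # treat backend failure as a cache miss

    def update(self, prompt, llm_string, return_val):
        try:
            self.inner.update(prompt, llm_string, return_val)
        except Exception:
            pass  # best-effort write; ignore backend failure


class BrokenRedis:
    """Stand-in for a RedisCache whose connection is down."""

    def lookup(self, prompt, llm_string):
        raise ConnectionError("Error 61 connecting to 127.0.0.1:6379.")

    def update(self, prompt, llm_string, return_val):
        raise ConnectionError("Error 61 connecting to 127.0.0.1:6379.")


cache = FaultTolerantCache(BrokenRedis())
print(cache.lookup("prompt", "llm"))  # None -> falls through to the LLM call
```

A cache miss (`None`) makes the chat model proceed with a normal API call, which is exactly the desired degraded behaviour.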
### System Info
langchain==0.0.333 | RedisCache does't handle errors from redis. | https://api.github.com/repos/langchain-ai/langchain/issues/16866/comments | 5 | 2024-02-01T10:12:47Z | 2024-02-21T17:15:20Z | https://github.com/langchain-ai/langchain/issues/16866 | 2,112,095,388 | 16,866 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
class GitHubIssuesLoader(BaseGitHubLoader):
"""Load issues of a GitHub repository."""
include_prs: bool = True
"""If True include Pull Requests in results, otherwise ignore them."""
milestone: Union[int, Literal["*", "none"], None] = None
"""If integer is passed, it should be a milestone's number field.
If the string '*' is passed, issues with any milestone are accepted.
If the string 'none' is passed, issues without milestones are returned.
"""
state: Optional[Literal["open", "closed", "all"]] = None
"""Filter on issue state. Can be one of: 'open', 'closed', 'all'."""
assignee: Optional[str] = None
"""Filter on assigned user. Pass 'none' for no user and '*' for any user."""
creator: Optional[str] = None
"""Filter on the user that created the issue."""
mentioned: Optional[str] = None
"""Filter on a user that's mentioned in the issue."""
labels: Optional[List[str]] = None
"""Label names to filter one. Example: bug,ui,@high."""
sort: Optional[Literal["created", "updated", "comments"]] = None
"""What to sort results by. Can be one of: 'created', 'updated', 'comments'.
Default is 'created'."""
direction: Optional[Literal["asc", "desc"]] = None
"""The direction to sort the results by. Can be one of: 'asc', 'desc'."""
since: Optional[str] = None
"""Only show notifications updated after the given time.
This is a timestamp in ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ."""
```
### Error Message and Stack Trace (if applicable)
This class lacks the page and per_page parameters, and I want to add these two parameters to implement pagination functionality.
### Description
The current implementation of GitHubIssuesLoader lacks pagination support, which can lead to inefficiencies when retrieving a large number of GitHub issues. This enhancement aims to introduce pagination functionality to the loader, allowing users to retrieve issues in smaller, manageable batches.
This improvement will involve adding page and per_page parameters to control the pagination of API requests, providing users with greater flexibility and performance optimization. Additionally, proper validation will be implemented to ensure valid and non-negative values for the pagination parameters.
This change will improve the usability and efficiency of the GitHubIssuesLoader class, making it more suitable for handling repositories with a substantial number of issues.
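A possible shape for the validation logic (a hypothetical sketch — the function name and limits are assumptions, although GitHub's REST API does cap `per_page` at 100):

```python
def build_pagination_params(page=None, per_page=None):
    """Validate hypothetical `page`/`per_page` options and return the query
    parameters to append to the GitHub issues API request."""
    params = {}
    if page is not None:
        if page < 1:
            raise ValueError("page must be a positive integer")
        params["page"] = page
    if per_page is not None:
        if not 1 <= per_page <= 100:
            raise ValueError("per_page must be between 1 and 100")
        params["per_page"] = per_page
    return params


print(build_pagination_params(page=2, per_page=50))  # {'page': 2, 'per_page': 50}
```

Leaving both values unset would preserve the loader's current behaviour of fetching every page.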
### System Info
System Information
------------------
> OS: Linux
> OS Version: #18~22.04.1-Ubuntu SMP Tue Nov 21 19:25:02 UTC 2023
> Python Version: 3.10.13 (main, Dec 8 2023, 04:58:09) [GCC 9.4.0]
Package Information
-------------------
> langchain_core: 0.1.17
> langchain: 0.1.4
> langchain_community: 0.0.16
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Enhancement: Add Pagination Support to GitHubIssuesLoader for Efficient Retrieval of GitHub Issues | https://api.github.com/repos/langchain-ai/langchain/issues/16864/comments | 3 | 2024-02-01T09:23:25Z | 2024-05-15T16:07:19Z | https://github.com/langchain-ai/langchain/issues/16864 | 2,111,992,332 | 16,864 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
``` python
import os
import requests
from langchain.llms import HuggingFaceHub
from langchain.chains import LLMChain
os.environ['HUGGINGFACEHUB_API_TOKEN'] = "your_token"
prompt = "The answer to universe is "
repo_id = "mistralai/Mistral-7B-v0.1"
llm = HuggingFaceHub(
repo_id=repo_id,
model_kwargs={
"max_new_tokens": 10
}
)
langchain_response = llm.invoke(prompt)
url = f"https://api-inference.huggingface.co/models/{repo_id}"
headers = {"Authorization": f"Bearer {os.environ['HUGGINGFACEHUB_API_TOKEN']}"}
def query(payload):
response = requests.post(url, headers=headers, json=payload)
return response.json()
huggingfacehub_response = query({
"inputs": prompt,
"parameters": {
"max_new_tokens": 10
}
})
print([{"generated_text": langchain_response}])
print(huggingfacehub_response)
```
### Error Message and Stack Trace (if applicable)
```
[{'generated_text': 'The answer to universe is 42.\n\nThe answer to life is 42.\n\nThe answer to everything is 42.\n\nThe answer to the question of why is 42.\n\nThe answer to the question of what is 42.\n\nThe answer to the question of how is 42.\n\nThe answer to the question of who is 42.\n\nThe answer to the question of when is 42.\n\nThe answer to'}]
[{'generated_text': 'The answer to universe is 42.\n\nThe answer to life is'}]
```
### Description
It looks like the `HuggingFaceHub` LLM sends `model_kwargs` under the wrong key in the JSON payload. The correct key is `parameters`, not `params`, according to the [HuggingFace API documentation](https://huggingface.co/docs/api-inference/en/detailed_parameters#text-generation-task).
https://github.com/langchain-ai/langchain/blob/2e5949b6f8bc340a992b9f9f9fb4751f87979e15/libs/community/langchain_community/llms/huggingface_hub.py#L133
As a result, `model_kwargs` has no effect on the model output as can be seen in the example above.
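A minimal sketch of the one-word fix — build the payload with the key the Inference API expects (illustrative; the real class assembles its request internally):

```python
def build_hf_payload(prompt, model_kwargs):
    """Generation options belong under "parameters" in the Hugging Face
    Inference API payload, not "params" (sketch of the corrected body)."""
    return {"inputs": prompt, "parameters": dict(model_kwargs)}


payload = build_hf_payload("The answer to universe is ", {"max_new_tokens": 10})
print(payload)  # {'inputs': 'The answer to universe is ', 'parameters': {'max_new_tokens': 10}}
```

With the options under `parameters`, the server honours `max_new_tokens`, matching the second (truncated) output in the example above.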
### System Info
System Information
------------------
> OS: Linux
> OS Version: #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:31 UTC 2023
> Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.1.17
> langchain: 0.1.4
> langchain_community: 0.0.16
> langchainplus_sdk: 0.0.20
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | HugginFaceHub LLM has a wrong parameters name | https://api.github.com/repos/langchain-ai/langchain/issues/16849/comments | 2 | 2024-01-31T23:19:41Z | 2024-05-09T16:09:58Z | https://github.com/langchain-ai/langchain/issues/16849 | 2,111,153,617 | 16,849 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
The following code:
```
from langchain_community.llms.azureml_endpoint import AzureMLOnlineEndpoint
from langchain_community.llms.azureml_endpoint import AzureMLEndpointApiType
from langchain_community.llms.azureml_endpoint import DollyContentFormatter
# ------------------------------------
# Allow Self Signed Https code
# ------------------------------------
llm = AzureMLOnlineEndpoint(
endpoint_url="https://myproject.eastus2.inference.ml.azure.com/score",
endpoint_api_type=AzureMLEndpointApiType.realtime,
endpoint_api_key="my-key",
content_formatter=DollyContentFormatter(),
model_kwargs={"temperature": 0.8, "max_tokens": 300},
)
response = llm.invoke("Write me a song about sparkling water:")
response
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/Users/abel/Desktop/source/lang/dolly.py", line 24, in <module>
response = llm.invoke("Write me a song about sparkling water:")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 235, in invoke
self.generate_prompt(
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 530, in generate_prompt
return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 703, in generate
output = self._generate_helper(
^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 567, in _generate_helper
raise e
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 554, in _generate_helper
self._generate(
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain_community/llms/azureml_endpoint.py", line 489, in _generate
response_payload = self.http_client.call(
^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain_community/llms/azureml_endpoint.py", line 50, in call
response = urllib.request.urlopen(req, timeout=kwargs.get("timeout", 50))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 216, in urlopen
return opener.open(url, data, timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 525, in open
response = meth(req, response)
^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 634, in http_response
response = self.parent.error(
^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 563, in error
return self._call_chain(*args)
^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 496, in _call_chain
result = func(*args)
^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 643, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 424: Failed Dependency
### Description
AzureMLOnlineEndpoint is not working (HTTP 424 error), but the same URL and API key work with a plain HTTP request. The working plain-HTTP code is below:
```
import urllib.request
import json
import os
import ssl
def allowSelfSignedHttps(allowed):
# bypass the server certificate verification on client side
if allowed and not os.environ.get('PYTHONHTTPSVERIFY', '') and getattr(ssl, '_create_unverified_context', None):
ssl._create_default_https_context = ssl._create_unverified_context
allowSelfSignedHttps(True) # this line is needed if you use self-signed certificate in your scoring service.
# Request data goes here
# The example below assumes JSON formatting which may be updated
# depending on the format your endpoint expects.
# More information can be found here:
# https://docs.microsoft.com/azure/machine-learning/how-to-deploy-advanced-entry-script
data = {
"input_data": [
"Write me a super short song about sparkling water"
],
"params": {
"top_p": 0.9,
"temperature": 0.2,
"max_new_tokens": 50,
"do_sample": True,
"return_full_text": True
}
}
body = str.encode(json.dumps(data))
url = 'https://myProject.eastus2.inference.ml.azure.com/score'
# Replace this with the primary/secondary key or AMLToken for the endpoint
api_key = 'my-key'
if not api_key:
raise Exception("A key should be provided to invoke the endpoint")
# The azureml-model-deployment header will force the request to go to a specific deployment.
# Remove this header to have the request observe the endpoint traffic rules
headers = {'Content-Type':'application/json', 'Authorization':('Bearer '+ api_key), 'azureml-model-deployment': 'databricks-dolly-v2-12b-15' }
req = urllib.request.Request(url, body, headers)
try:
response = urllib.request.urlopen(req)
result = response.read()
print(result)
except urllib.error.HTTPError as error:
print("The request failed with status code: " + str(error.code))
    # Print the headers - they include the request ID and the timestamp, which are useful for debugging the failure
print(error.info())
print(error.read().decode("utf8", 'ignore'))
```
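Note that the working call sends a body of the shape `{"input_data": [...], "params": {...}}` and pins a deployment via the `azureml-model-deployment` header, whereas `DollyContentFormatter` formats the request differently. Assuming the payload mismatch is the cause of the 424, a custom content formatter would need to produce the body this endpoint's scoring script accepts — sketched here as a plain function (names are hypothetical):

```python
import json


def format_request_payload(prompt, model_kwargs):
    """Build the body this endpoint's scoring script accepts, matching the
    working plain-HTTP call above (hypothetical custom-formatter body)."""
    body = {"input_data": [prompt], "params": dict(model_kwargs)}
    return json.dumps(body).encode("utf-8")


raw = format_request_payload("Write me a song", {"temperature": 0.2, "max_new_tokens": 50})
print(json.loads(raw)["input_data"])  # ['Write me a song']
```

A 424 from an AzureML endpoint generally means the scoring script itself raised, so comparing the two request bodies byte for byte is a good first debugging step.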
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.3.0: Wed Dec 20 21:33:31 PST 2023; root:xnu-10002.81.5~7/RELEASE_ARM64_T8112
> Python Version: 3.11.4 (v3.11.4:d2340ef257, Jun 6 2023, 19:15:51) [Clang 13.0.0 (clang-1300.0.29.30)]
Package Information
-------------------
> langchain_core: 0.1.17
> langchain: 0.1.4
> langchain_community: 0.0.16
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | AzureMLOnlineEndpoint not working, 424 error, but same url and api key works with standard http | https://api.github.com/repos/langchain-ai/langchain/issues/16845/comments | 7 | 2024-01-31T18:37:32Z | 2024-07-11T11:24:43Z | https://github.com/langchain-ai/langchain/issues/16845 | 2,110,730,749 | 16,845 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
from langchain_community.llms import Ollama
llm = Ollama(model="codellama:70b-python")
from langchain.agents.agent_types import AgentType
from langchain_experimental.agents.agent_toolkits import create_csv_agent, create_pandas_dataframe_agent
from langchain_openai import ChatOpenAI, OpenAI
import pandas as pd
import os
import re
from datetime import datetime
import pandas as pd
import numpy as np
from datetime import datetime, timedelta
df = pd.read_csv("sales_purchase_20Jan.csv")
agent = create_pandas_dataframe_agent(
llm, df,
verbose=True, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
handle_parsing_errors=True, number_of_head_rows=5
)
instructions = """
Perform the following steps to address the given query:
Step 1: Begin by verifying if the provided dataframe and instructions contain sufficient information for the required analysis. In case of insufficient details, respond with:
```json
{
"table": {},
"message": ["Please review and modify the prompt with more specifics."]
}
```
Step 2: Should the query necessitate generating a table, structure your response using the following format:
```json
{
"table": {
"columns": ["column1", "column2", ...],
"data": [[value1, value2, ...], [value1, value2, ...], ...]
},
"message": []
}
```
Step 3: For queries requiring solely a textual response, utilize the following format:
```json
{
"table": {},
"message": ["Your text response here"]
}
```
Step 4: Ensure consistent usage of standard decimal format without scientific notation. Replace any None/Null values with 0.0."
Query: """
prompt = instructions + '''Create a summary table that displays the cumulative sales for each item category ('Atta', 'Salt', 'Salt-C') across different months ('Month_Year'). The table should contain columns for 'Month_Year,' individual Item categories, and a 'Grand Total' column. The values in the table should represent the total sales value ('Sale_Value') for each Item category within the corresponding month.'''
agent.invoke(prompt)
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
OutputParserException Traceback (most recent call last)
File /usr/local/lib/python3.10/dist-packages/langchain/agents/agent.py:1125, in AgentExecutor._iter_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
1124 # Call the LLM to see what to do.
-> 1125 output = self.agent.plan(
1126 intermediate_steps,
1127 callbacks=run_manager.get_child() if run_manager else None,
1128 **inputs,
1129 )
1130 except OutputParserException as e:
File /usr/local/lib/python3.10/dist-packages/langchain/agents/agent.py:695, in Agent.plan(self, intermediate_steps, callbacks, **kwargs)
694 full_output = self.llm_chain.predict(callbacks=callbacks, **full_inputs)
--> 695 return self.output_parser.parse(full_output)
File /usr/local/lib/python3.10/dist-packages/langchain/agents/mrkl/output_parser.py:63, in MRKLOutputParser.parse(self, text)
62 if not re.search(r"Action\s*\d*\s*:[\s]*(.*?)", text, re.DOTALL):
---> 63 raise OutputParserException(
64 f"Could not parse LLM output: `{text}`",
65 observation=MISSING_ACTION_AFTER_THOUGHT_ERROR_MESSAGE,
66 llm_output=text,
67 send_to_llm=True,
68 )
69 elif not re.search(
70 r"[\s]*Action\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)", text, re.DOTALL
71 ):
OutputParserException: Could not parse LLM output: ` I need to perform a Pivot Table Calculation in order to get Grand Totals for each item and place it at bottom of the table.
Action Input: 'pivot'`
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
Cell In[23], line 1
----> 1 agent.invoke(prompt)
File /usr/local/lib/python3.10/dist-packages/langchain/chains/base.py:162, in Chain.invoke(self, input, config, **kwargs)
160 except BaseException as e:
161 run_manager.on_chain_error(e)
--> 162 raise e
163 run_manager.on_chain_end(outputs)
164 final_outputs: Dict[str, Any] = self.prep_outputs(
165 inputs, outputs, return_only_outputs
166 )
File /usr/local/lib/python3.10/dist-packages/langchain/chains/base.py:156, in Chain.invoke(self, input, config, **kwargs)
149 run_manager = callback_manager.on_chain_start(
150 dumpd(self),
151 inputs,
152 name=run_name,
153 )
154 try:
155 outputs = (
--> 156 self._call(inputs, run_manager=run_manager)
157 if new_arg_supported
158 else self._call(inputs)
159 )
160 except BaseException as e:
161 run_manager.on_chain_error(e)
File /usr/local/lib/python3.10/dist-packages/langchain/agents/agent.py:1391, in AgentExecutor._call(self, inputs, run_manager)
1389 # We now enter the agent loop (until it returns something).
1390 while self._should_continue(iterations, time_elapsed):
-> 1391 next_step_output = self._take_next_step(
1392 name_to_tool_map,
1393 color_mapping,
1394 inputs,
1395 intermediate_steps,
1396 run_manager=run_manager,
1397 )
1398 if isinstance(next_step_output, AgentFinish):
1399 return self._return(
1400 next_step_output, intermediate_steps, run_manager=run_manager
1401 )
File /usr/local/lib/python3.10/dist-packages/langchain/agents/agent.py:1097, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
1088 def _take_next_step(
1089 self,
1090 name_to_tool_map: Dict[str, BaseTool],
(...)
1094 run_manager: Optional[CallbackManagerForChainRun] = None,
1095 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
1096 return self._consume_next_step(
-> 1097 [
1098 a
1099 for a in self._iter_next_step(
1100 name_to_tool_map,
1101 color_mapping,
1102 inputs,
1103 intermediate_steps,
1104 run_manager,
1105 )
1106 ]
1107 )
File /usr/local/lib/python3.10/dist-packages/langchain/agents/agent.py:1097, in <listcomp>(.0)
1088 def _take_next_step(
1089 self,
1090 name_to_tool_map: Dict[str, BaseTool],
(...)
1094 run_manager: Optional[CallbackManagerForChainRun] = None,
1095 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
1096 return self._consume_next_step(
-> 1097 [
1098 a
1099 for a in self._iter_next_step(
1100 name_to_tool_map,
1101 color_mapping,
1102 inputs,
1103 intermediate_steps,
1104 run_manager,
1105 )
1106 ]
1107 )
File /usr/local/lib/python3.10/dist-packages/langchain/agents/agent.py:1136, in AgentExecutor._iter_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
1134 raise_error = False
1135 if raise_error:
-> 1136 raise ValueError(
1137 "An output parsing error occurred. "
1138 "In order to pass this error back to the agent and have it try "
1139 "again, pass `handle_parsing_errors=True` to the AgentExecutor. "
1140 f"This is the error: {str(e)}"
1141 )
1142 text = str(e)
1143 if isinstance(self.handle_parsing_errors, bool):
ValueError: An output parsing error occurred. In order to pass this error back to the agent and have it try again, pass `handle_parsing_errors=True` to the AgentExecutor. This is the error: Could not parse LLM output: ` I need to perform a Pivot Table Calculation in order to get Grand Totals for each item and place it at bottom of the table.
Action Input: 'pivot'`
### Description
File /usr/local/lib/python3.10/dist-packages/langchain/agents/agent.py:1136, in AgentExecutor._iter_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
1134 raise_error = False
1135 if raise_error:
-> 1136 raise ValueError(
1137 "An output parsing error occurred. "
1138 "In order to pass this error back to the agent and have it try "
1139 "again, pass `handle_parsing_errors=True` to the AgentExecutor. "
1140 f"This is the error: {str(e)}"
1141 )
1142 text = str(e)
1143 if isinstance(self.handle_parsing_errors, bool):
ValueError: An output parsing error occurred. In order to pass this error back to the agent and have it try again, pass `handle_parsing_errors=True` to the AgentExecutor. This is the error: Could not parse LLM output: ` I need to perform a Pivot Table Calculation in order to get Grand Totals for each item and place it at bottom of the table.
Action Input: 'pivot'`
### System Info
System Information
------------------
> OS: Linux
> OS Version: #184-Ubuntu SMP Tue Oct 31 09:21:49 UTC 2023
> Python Version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.1.17
> langchain: 0.1.4
> langchain_community: 0.0.16
> langchain_experimental: 0.0.49
> langchain_openai: 0.0.5
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | ValueError: An output parsing error occurred | https://api.github.com/repos/langchain-ai/langchain/issues/16843/comments | 3 | 2024-01-31T18:15:28Z | 2024-05-09T16:09:53Z | https://github.com/langchain-ai/langchain/issues/16843 | 2,110,694,010 | 16,843 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
The following code:
```python
# Imports assumed for this snippet (paths as of langchain 0.1.x);
# `db` is assumed to be a previously constructed SQLDatabase instance.
from langchain_community.llms import Bedrock
from langchain_community.agent_toolkits import create_sql_agent
llm = Bedrock(
credentials_profile_name="Bedrock",
model_id="amazon.titan-text-express-v1",
model_kwargs={
"temperature": 0.9,
},
verbose=True
)
agent_executor = create_sql_agent(
llm,
db=db,
verbose=True
)
agent_executor.invoke("Retrieve all table data from the last 3 months.")
```
### Error Message and Stack Trace (if applicable)
> Entering new AgentExecutor chain...
Traceback (most recent call last):
File "/Users/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/langchain_community/llms/bedrock.py", line 533, in _prepare_input_and_invoke_stream
response = self.client.invoke_model_with_response_stream(**request_options)
File "/Users/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/botocore/client.py", line 553, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/Users/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/botocore/client.py", line 1009, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.errorfactory.ValidationException: An error occurred (ValidationException) when calling the InvokeModelWithResponseStream operation: Malformed input request: string [
Observation] does not match pattern ^(\|+|User:)$, please reformat your input and try again.
### Description
The function `create_react_agent` in `langchain/agents/react/agent.py` binds the stop sequence `["\nObservation"]` to the runnable, making it incompatible with Bedrock's validation regex `^(\|+|User:)$`.
When line 103 is changed from
```python
llm_with_stop = llm.bind(stop=["\nObservation"])
```
to
```python
llm_with_stop = llm.bind(stop=["User:"])
```
the call to invoke the model succeeds as part of the agent executor chain, because "User:" is one of the stop sequences AWS allows. I think these limitations on the allowed stop sequences are a bit nonsensical, and this may be a bug in AWS itself. However, hard-coding the stop sequence into the react agent constructor prevents working around it without modifying the LangChain code.
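The regex quoted in the ValidationException makes the conflict easy to verify directly (pattern copied from the error message above):

```python
import re

# Pattern from the Bedrock ValidationException: only runs of '|' or "User:".
TITAN_STOP_PATTERN = re.compile(r"^(\|+|User:)$")


def is_valid_titan_stop(stop_sequence):
    """Return True if Amazon Titan would accept this stop sequence."""
    return TITAN_STOP_PATTERN.fullmatch(stop_sequence) is not None


print(is_valid_titan_stop("\nObservation"))  # False -> request rejected
print(is_valid_titan_stop("User:"))          # True  -> request accepted
```

So any hard-coded stop value outside that pattern will fail against this model, regardless of what the agent needs.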
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.2.0: Wed Nov 15 21:53:18 PST 2023; root:xnu-10002.61.3~2/RELEASE_ARM64_T6000
> Python Version: 3.10.13 (main, Jan 24 2024, 14:54:55) [Clang 15.0.0 (clang-1500.1.0.2.5)]
Package Information
-------------------
> langchain_core: 0.1.17
> langchain: 0.1.4
> langchain_community: 0.0.16
> langchain_openai: 0.0.5
> langchainhub: 0.1.14
> langserve: 0.0.41 | create_react_agent incompatible with AWS Bedrock input validation due to hard coded ['\nObservation:'] stop sequence | https://api.github.com/repos/langchain-ai/langchain/issues/16840/comments | 9 | 2024-01-31T17:20:49Z | 2024-03-05T08:57:13Z | https://github.com/langchain-ai/langchain/issues/16840 | 2,110,601,047 | 16,840 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
examples = [
    {"input": "List all artists.", "query": "SELECT * FROM Artist;"},
    {
        "input": "Find all albums for the artist 'AC/DC'.",
        "query": "SELECT * FROM Album WHERE ArtistId = (SELECT ArtistId FROM Artist WHERE Name = 'AC/DC');",
    },
    {
        "input": "List all tracks in the 'Rock' genre.",
        "query": "SELECT * FROM Track WHERE GenreId = (SELECT GenreId FROM Genre WHERE Name = 'Rock');",
    },
    {
        "input": "Find the total duration of all tracks.",
        "query": "SELECT SUM(Milliseconds) FROM Track;",
    },
    {
        "input": "List all customers from Canada.",
        "query": "SELECT * FROM Customer WHERE Country = 'Canada';",
    },
    {
        "input": "How many tracks are there in the album with ID 5?",
        "query": "SELECT COUNT(*) FROM Track WHERE AlbumId = 5;",
    },
    {
        "input": "Find the total number of invoices.",
        "query": "SELECT COUNT(*) FROM Invoice;",
    },
    {
        "input": "List all tracks that are longer than 5 minutes.",
        "query": "SELECT * FROM Track WHERE Milliseconds > 300000;",
    },
    {
        "input": "Who are the top 5 customers by total purchase?",
        "query": "SELECT CustomerId, SUM(Total) AS TotalPurchase FROM Invoice GROUP BY CustomerId ORDER BY TotalPurchase DESC LIMIT 5;",
    },
    {
        "input": "Which albums are from the year 2000?",
        "query": "SELECT * FROM Album WHERE strftime('%Y', ReleaseDate) = '2000';",
    },
    {
        "input": "How many employees are there",
        "query": 'SELECT COUNT(*) FROM "Employee"',
    },
]
```
We can create a few-shot prompt with them like so:
```python
from langchain_core.prompts import FewShotPromptTemplate, PromptTemplate

example_prompt = PromptTemplate.from_template("User input: {input}\nSQL query: {query}")
prompt = FewShotPromptTemplate(
    examples=examples[:5],
    example_prompt=example_prompt,
    prefix="You are a SQLite expert. Given an input question, create a syntactically correct SQLite query to run. Unless otherwise specified, do not return more than {top_k} rows.\n\nHere is the relevant table info: {table_info}\n\nBelow are a number of examples of questions and their corresponding SQL queries.",
    suffix="User input: {input}\nSQL query: ",
    input_variables=["input", "top_k", "table_info"],
)
print(prompt.format(input="How many artists are there?", top_k=3, table_info="foo"))
```
```python
db_url = URL.create(**db_config)
db = SQLDatabase.from_uri(db_url)
toolkit = SQLDatabaseToolkit(db=db, llm=self.llm)
agent_executor = create_sql_agent(
    prefix=MSSQL_AGENT_PREFIX,
    format_instructions=MSSQL_AGENT_FORMAT_INSTRUCTIONS,
    llm=self.llm,
    toolkit=toolkit,
    callback_manager=self.callbacks,
    verbose=True
)
logging.info(f"Login successful: {db_config['username']}")

response = agent_executor.run(query)
```
### Error Message and Stack Trace (if applicable)
not able to add FewShotPromptTemplate to create_sql_agent
### Description
Not able to add a FewShotPromptTemplate to create_sql_agent with an Azure OpenAI bot.
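In more recent LangChain releases, `create_sql_agent` accepts a full `prompt=` argument, which is the usual way to inject a few-shot prompt (check your installed version's signature before relying on this). For reference, the formatting `FewShotPromptTemplate` performs here is plain string assembly; a dependency-free sketch of the same logic, so the prompt the agent should receive can be inspected (the helper name is made up):

```python
def format_few_shot(prefix, examples, suffix, **kwargs):
    """Mimic FewShotPromptTemplate: prefix + formatted examples + suffix."""
    blocks = [prefix.format(**kwargs)]
    for ex in examples:
        blocks.append("User input: {input}\nSQL query: {query}".format(**ex))
    blocks.append(suffix.format(**kwargs))
    return "\n\n".join(blocks)

examples = [{"input": "List all artists.", "query": "SELECT * FROM Artist;"}]
text = format_few_shot(
    "You are a SQLite expert. Return at most {top_k} rows.",
    examples,
    "User input: {input}\nSQL query: ",
    top_k=3,
    input="How many artists are there?",
)
print(text)
```

Inspecting the assembled text this way makes it easy to confirm the few-shot examples actually reach the model.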
### System Info
langchain==0.0.352
langchain-core==0.1.11
langchain-experimental==0.0.47
langchain-community==0.0.13 | not able to add FewShotPromptTemplate to create_sql_agent | https://api.github.com/repos/langchain-ai/langchain/issues/16837/comments | 2 | 2024-01-31T16:36:45Z | 2024-07-19T16:06:56Z | https://github.com/langchain-ai/langchain/issues/16837 | 2,110,509,305 | 16,837 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
toolkit = SQLDatabaseToolkit(db=db, llm=self.llm)
agent_executor = create_sql_agent(
    prefix=MSSQL_AGENT_PREFIX,
    format_instructions=MSSQL_AGENT_FORMAT_INSTRUCTIONS,
    llm=self.llm,
    toolkit=toolkit,
    callback_manager=self.callbacks,
    verbose=True
)
```
### Error Message and Stack Trace (if applicable)
```python
toolkit = SQLDatabaseToolkit(db=db, llm=self.llm)
agent_executor = create_sql_agent(
    prefix=MSSQL_AGENT_PREFIX,
    format_instructions=MSSQL_AGENT_FORMAT_INSTRUCTIONS,
    llm=self.llm,
    toolkit=toolkit,
    callback_manager=self.callbacks,
    verbose=True
)
```
### Description
```python
toolkit = SQLDatabaseToolkit(db=db, llm=self.llm)
agent_executor = create_sql_agent(
    prefix=MSSQL_AGENT_PREFIX,
    format_instructions=MSSQL_AGENT_FORMAT_INSTRUCTIONS,
    llm=self.llm,
    toolkit=toolkit,
    callback_manager=self.callbacks,
    verbose=True
)
```
### System Info
langchain==0.0.352
langchain-core==0.1.11
langchain-experimental==0.0.47
langchain-community==0.0.13 | not able to pass the few shot examples create_sql_agent parameter | https://api.github.com/repos/langchain-ai/langchain/issues/16833/comments | 1 | 2024-01-31T13:43:33Z | 2024-05-08T16:07:59Z | https://github.com/langchain-ai/langchain/issues/16833 | 2,110,127,590 | 16,833 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain_core.agent_executor import AgentExecutor
```
### Error Message and Stack Trace (if applicable)
ModuleNotFoundError: No module named 'langchain_core.agent_executor'
### Description
I am trying to use the langchain_core module as below, but it's giving an error:
```python
from langchain_core.agent_executor import AgentExecutor
from langchain_core.toolkits.sql import SQLDatabaseToolkit
```
```
ModuleNotFoundError: No module named 'langchain_core.agent_executor'
```
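For what it's worth, neither class lives in `langchain_core`: in the 0.1.x package split, `AgentExecutor` is exported from `langchain.agents` and `SQLDatabaseToolkit` from `langchain_community.agent_toolkits` (locations assumed from the 0.1.x layout — verify against your installed version). A small lookup table of the corrections:

```python
# Assumed 0.1.x-era locations for the classes imported above.
CORRECTED_IMPORTS = {
    "langchain_core.agent_executor.AgentExecutor": "langchain.agents.AgentExecutor",
    "langchain_core.toolkits.sql.SQLDatabaseToolkit": (
        "langchain_community.agent_toolkits.SQLDatabaseToolkit"
    ),
}

def corrected_import(dotted_path: str) -> str:
    """Map a known-bad dotted path to its corrected location, if known."""
    return CORRECTED_IMPORTS.get(dotted_path, dotted_path)

print(corrected_import("langchain_core.agent_executor.AgentExecutor"))
```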
### System Info
langchain==0.0.352
langchain-core==0.1.11
langchain-experimental==0.0.47
langchain-community==0.0.13 | not able to import langchain_core modules | https://api.github.com/repos/langchain-ai/langchain/issues/16827/comments | 1 | 2024-01-31T10:50:24Z | 2024-05-08T16:07:54Z | https://github.com/langchain-ai/langchain/issues/16827 | 2,109,819,243 | 16,827 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
# Set the OPENAI_API_KEY environment variable
os.environ['OPENAI_API_KEY'] = openapi_key

# Define connection parameters using constants
from urllib.parse import quote_plus
server_name = constants.server_name
database_name = constants.database_name
username = constants.username
password = constants.password
encoded_password = quote_plus(password)
connection_uri = f"mssql+pyodbc://{username}:{encoded_password}@{server_name}/{database_name}?driver=ODBC+Driver+17+for+SQL+Server"

# Create an engine to connect to the SQL database
engine = create_engine(connection_uri)
model_name = "gpt-3.5-turbo-16k"
db = SQLDatabase(engine, view_support=True, include_tables=['EGV_emp_departments_ChatGPT'])
llm = ChatOpenAI(temperature=0, verbose=False, model=model_name)

# Define a function named chat that takes a question and SQL format indicator as input
def chat1(question):
    greetings = ["hi", "hello", "hey"]
    # if any(greeting in question.lower() for greeting in greetings):
    if any(greeting == question.lower() for greeting in greetings):
        return "Hello! How can I assist you today?"

    PROMPT = """
    Given an input question, create a syntactically correct MSSQL query,
    then look at the results of the query and return the answer.
    Do not execute any query if the question is not relavent.
    If a question lacks specific details, do not write and execute the query, like 'what is the employee name'.
    If a column name is not present, refrain from writing the SQL query. column like UAN number, PF number are not not present do not consider such columns.
    Write the query only for the column names which are present in view.
    Execute the query and analyze the results to formulate a response.
    Return the answer in user friendly form.

    The question: {question}
    """

    answer = None
    memory = ConversationBufferMemory(input_key='input', memory_key="history")
    # conn = engine.connect()

    # If not in SQL format, create a database chain and run the question
    # db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True, memory=memory)
    db_chain = SQLDatabaseChain(
        llm_chain=LLMChain(llm, memory=memory),
        database=db,
        verbose=True
    )

    try:
        answer = db_chain.run(PROMPT.format(question=question))
        return answer
    except exc.ProgrammingError as e:
        # Check for a specific SQL error related to invalid column name
        if "Invalid column name" in str(e):
            print("Answer: Error Occured while processing the question")
            print(str(e))
            return "Invalid question. Please check your column names."
        else:
            print("Error Occured while processing")
            print(str(e))
            # return "Unknown ProgrammingError Occured"
            return "Invalid question."
    except openai.RateLimitError as e:
        print("Error Occured while fetching the answer")
        print(str(e))
        return "Rate limit exceeded. Please, Mention the Specific Columns you need!"
    except openai.BadRequestError as e:
        print("Error Occured while fetching the answer")
        print(str(e.message))
        # return err_msg
        return "Context length exceeded: This model's maximum context length is 16385 tokens. Please reduce the length of the messages."
    except Exception as e:
        print("Error Occured while processing")
        print(str(e))
        return "Unknown Error Occured"
```
Here is my code, in which I'm trying to integrate memory so that the model can remember the previous question and answer when handling the next question, but I'm not sure of the exact method.
### Error Message and Stack Trace (if applicable)
When running the above code with `db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True, memory=memory)` (i.e. passing memory=memory), the answer comes back as "Unknown Error Occured" — it falls straight through to the exception handler, like this:
```
The question: what is employee name ofAD####
SQLQuery:SELECT [EmployeeName]
FROM [EGV_emp_departments_ChatGPT]
WHERE [EmployeeID] = 'AD####'SELECT [EmployeeName]
FROM [EGV_emp_departments_ChatGPT]
WHERE [EmployeeID] = 'AD####'
[('H########i',)]
SQLResult: [('H########',)]
Answer:The employee name of AD#### is H######.
> Finished chain.
Error Occured while processing
'input'
Unknown Error Occured
```
When using it like this instead:
```python
db_chain = SQLDatabaseChain(
    llm_chain=LLMChain(llm, memory=memory),
    database=db,
    verbose=True
)
```
it does not fetch the answer at all:
```
C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\langchain\__init__.py:34: UserWarning: Importing LLMChain from langchain root module is no longer supported. Please use langchain.chains.LLMChain instead.
  warnings.warn(
C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\langchain\__init__.py:34: UserWarning: Importing OpenAI from langchain root module is no longer supported. Please use langchain.llms.OpenAI instead.
  warnings.warn(
C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\langchain\__init__.py:34: UserWarning: Importing SQLDatabase from langchain root module is no longer supported. Please use langchain.utilities.SQLDatabase instead.
  warnings.warn(
Traceback (most recent call last):
  File "c:\Users\rndbcpsoft\OneDrive\Desktop\test\main6.py", line 217, in <module>
    result= chat1("what is employee name of AD22050853")
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\rndbcpsoft\OneDrive\Desktop\test\main6.py", line 115, in chat1
    llm_chain=LLMChain(llm, memory=memory),
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: Serializable.__init__() takes 1 positional argument but 2 were given
```
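Two separate problems appear to be in play here. The bare `'input'` error suggests a key mismatch: `ConversationBufferMemory(input_key='input')` looks up an `input` key, but `SQLDatabaseChain` supplies its input under `query`, so the memory's save step raises `KeyError('input')` — using `input_key='query'` is the likely fix (chain key assumed from `SQLDatabaseChain`'s documented interface). The `TypeError` is unrelated: `LLMChain` accepts only keyword arguments, i.e. `LLMChain(llm=llm, prompt=..., memory=memory)`. A dependency-free sketch of the key mismatch:

```python
def save_context(chain_inputs: dict, input_key: str) -> str:
    # Mimics what the memory does when the chain finishes: it reads the
    # user input from chain_inputs[input_key].
    return chain_inputs[input_key]

chain_inputs = {"query": "what is the employee name of AD####"}

try:
    save_context(chain_inputs, "input")    # input_key mismatch
except KeyError as err:
    print("memory save failed:", err)       # -> 'input'

print(save_context(chain_inputs, "query"))  # matching key works
```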
### Description
When running the above code with `db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True, memory=memory)` (i.e. passing memory=memory), the answer comes back as "Unknown Error Occured" — it falls straight through to the exception handler, like this:
```
The question: what is employee name ofAD####
SQLQuery:SELECT [EmployeeName]
FROM [EGV_emp_departments_ChatGPT]
WHERE [EmployeeID] = 'AD####'SELECT [EmployeeName]
FROM [EGV_emp_departments_ChatGPT]
WHERE [EmployeeID] = 'AD####'
[('H########i',)]
SQLResult: [('H########',)]
Answer:The employee name of AD#### is H######.
> Finished chain.
Error Occured while processing
'input'
Unknown Error Occured
```
When using it like this instead:
```python
db_chain = SQLDatabaseChain(
    llm_chain=LLMChain(llm, memory=memory),
    database=db,
    verbose=True
)
```
it does not fetch the answer at all:
```
C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\langchain\__init__.py:34: UserWarning: Importing LLMChain from langchain root module is no longer supported. Please use langchain.chains.LLMChain instead.
  warnings.warn(
C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\langchain\__init__.py:34: UserWarning: Importing OpenAI from langchain root module is no longer supported. Please use langchain.llms.OpenAI instead.
  warnings.warn(
C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\langchain\__init__.py:34: UserWarning: Importing SQLDatabase from langchain root module is no longer supported. Please use langchain.utilities.SQLDatabase instead.
  warnings.warn(
Traceback (most recent call last):
  File "c:\Users\rndbcpsoft\OneDrive\Desktop\test\main6.py", line 217, in <module>
    result= chat1("what is employee name of AD22050853")
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\rndbcpsoft\OneDrive\Desktop\test\main6.py", line 115, in chat1
    llm_chain=LLMChain(llm, memory=memory),
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: Serializable.__init__() takes 1 positional argument but 2 were given
```
### System Info
python : 3.11
langchain: latest | How to add memory in SQLDatabaseChain chatbot with sql to natural language query | https://api.github.com/repos/langchain-ai/langchain/issues/16826/comments | 8 | 2024-01-31T10:37:43Z | 2024-05-13T16:10:31Z | https://github.com/langchain-ai/langchain/issues/16826 | 2,109,796,406 | 16,826 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
The method given below, when invoked without explicit cache clearing, eventually produces a CUDA out-of-memory error.
```python
import os
import torch  # needed for torch.cuda.empty_cache() below

def answer_in_parellel(questions, batch_size=3):
    questions_and_answers = {}
    while questions:
        temp = questions[:batch_size]
        questions = questions[batch_size:]
        query_batch = []
        for question in temp:
            query = {"question": question}
            query_batch.append(query)
        answers = RAG_fusion_chain.batch(query_batch)
        """
        If this is not used, the CUDA out-of-memory error occurs
        """
        # torch.cuda.empty_cache()
        for i in range(len(temp)):
            questions_and_answers[temp[i]] = answers[i]
    return questions_and_answers

def answer(questions, experiment_name):
    q_and_a = answer_in_parellel(questions)
    save_path = os.getcwd()
    save_path = os.path.join(os.getcwd(), experiment_name + ".txt")
    with open(save_path, 'w') as f:
        for question in q_and_a.keys():
            f.write(question + "\n")
            f.write("\n")
            f.write(q_and_a[question] + "\n")
```
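As an aside, the destructive `while questions:` slicing above can be replaced with a small chunking helper that leaves the input list intact and makes the batch boundaries easy to test (the helper name is my own):

```python
def chunked(items, batch_size=3):
    """Yield successive batch_size-sized slices without mutating items."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

print(list(chunked(["q1", "q2", "q3", "q4", "q5"], 2)))
```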
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am trying to make Mistral-7B Instruct answer a series of questions and write them to a text file for testing.
I am using a local HuggingFace pipeline and RAG fusion.
This bug occurred while calling the batch method.
There was no problem while using the invoke method.
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Sat Nov 18 15:31:17 UTC 2023
> Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.1.17
> langchain: 0.1.4
> langchain_community: 0.0.16
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Issue with GPU Cache while using batch method | https://api.github.com/repos/langchain-ai/langchain/issues/16824/comments | 6 | 2024-01-31T10:05:47Z | 2024-05-08T16:07:44Z | https://github.com/langchain-ai/langchain/issues/16824 | 2,109,730,154 | 16,824 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
examples = [
    {"input": "List all artists.", "query": "SELECT * FROM Artist;"},
    {
        "input": "Find all albums for the artist 'AC/DC'.",
        "query": "SELECT * FROM Album WHERE ArtistId = (SELECT ArtistId FROM Artist WHERE Name = 'AC/DC');",
    }]

from langchain_core.prompts import FewShotPromptTemplate, PromptTemplate

example_prompt = PromptTemplate.from_template("User input: {input}\nSQL query: {query}")
prompt = FewShotPromptTemplate(
    examples=examples[:5],
    example_prompt=example_prompt,
    prefix="You are a SQLite expert. Given an input question, create a syntactically correct SQLite query to run. Unless otherwise specified, do not return more than {top_k} rows.\n\nHere is the relevant table info: {table_info}\n\nBelow are a number of examples of questions and their corresponding SQL queries.",
    suffix="User input: {input}\nSQL query: ",
    input_variables=["input", "top_k", "table_info"],
)

db_url = URL.create(**db_config)
db = SQLDatabase.from_uri(db_url)
toolkit = SQLDatabaseToolkit(db=db, llm=self.llm)
agent_executor = create_sql_agent(
    prefix=MSSQL_AGENT_PREFIX,
    format_instructions=MSSQL_AGENT_FORMAT_INSTRUCTIONS,
    llm=self.llm,
    toolkit=toolkit,
    callback_manager=self.callbacks,
    verbose=True
)

response = agent_executor.run(query)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am trying to add few-shot examples to the agent_executor built on the SQLDatabaseToolkit, but it's not supported.
### System Info
langchain==0.0.352
langchain-core==0.1.11
langchain-experimental==0.0.47 | not able to add few_shots on agent_executor for sqldbtoolkit | https://api.github.com/repos/langchain-ai/langchain/issues/16821/comments | 2 | 2024-01-31T09:05:34Z | 2024-05-08T16:07:39Z | https://github.com/langchain-ai/langchain/issues/16821 | 2,109,615,518 | 16,821 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
llm = ChatOpenAI(
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],
    temperature=0,
    openai_api_key="xxx",
    openai_api_base="http://0.0.0.0:8000/v1/",
)

qa_chain = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=compression_retriever,
    chain_type='stuff',
    combine_docs_chain_kwargs=chain_type_kwargs
)

a = qa_chain(
    {
        "question": question,
        "chat_history": chat_history,
        "output_key": 'answer',
    },
)
```
### Error Message and Stack Trace (if applicable)
.
### Description
When I type in a question, the streamed answer always repeats the question, like this:

question: '电子票据查验平台如何获取票据明细?' (How can I get invoice line-item details from the electronic invoice verification platform?)

answer: '电子票据查验平台如何获取票据明细? 电子票据查验平台只能查验票据信息,没有票据明细,如需票据明细,请联系开票单位。' (the question repeated, then: the platform can only verify invoice information and has no line-item details; contact the issuing unit for details)
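A likely cause: `ConversationalRetrievalChain.from_llm` runs the same LLM for two steps — condensing the question against the chat history, then answering — and because that LLM was built with `streaming=True` and a stdout callback, the tokens of the restated question from step one stream out before the answer. In comparable setups the fix is to pass a separate, non-streaming model as `condense_question_llm=` to `from_llm` (parameter name from the `ConversationalRetrievalChain` API; verify against your version). A dependency-free sketch of why both steps hit the same callback:

```python
streamed = []   # stands in for StreamingStdOutCallbackHandler output

def stream_llm(tokens):
    # Both chain steps call the same streaming LLM, so every token from
    # every step reaches the shared callback.
    for tok in tokens:
        streamed.append(tok)
    return "".join(tokens)

question = stream_llm(["How ", "to ", "get ", "details?"])  # condense step
answer = stream_llm(["Contact ", "the ", "issuer."])        # answer step
print("".join(streamed))  # the question's tokens appear before the answer's
```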
### System Info
.0 | When I use ConversationalRetrievalChain.from_llm to implement a knowledge base with context, the resulting stream will carry questions, so how can I remove the questions? | https://api.github.com/repos/langchain-ai/langchain/issues/16819/comments | 5 | 2024-01-31T08:38:55Z | 2024-07-04T08:49:55Z | https://github.com/langchain-ai/langchain/issues/16819 | 2,109,569,523 | 16,819 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
os.environ["AZURE_OPENAI_API_KEY"] = ""
os.environ["AZURE_OPENAI_ENDPOINT"] = ""
llm = AzureChatOpenAI(
openai_api_version="2023-12-01-preview",
azure_deployment=self.model,
streaming=True
)
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/mnt/workspace/workgroup/lengmou/Tars-Code-Agent/components/model/llm.py", line 80, in <module>
for i in res:
File "/mnt/workspace/workgroup/lengmou/miniconda3/envs/llamaindex/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 2424, in stream
yield from self.transform(iter([input]), config, **kwargs)
File "/mnt/workspace/workgroup/lengmou/miniconda3/envs/llamaindex/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 2411, in transform
yield from self._transform_stream_with_config(
File "/mnt/workspace/workgroup/lengmou/miniconda3/envs/llamaindex/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 1497, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
File "/mnt/workspace/workgroup/lengmou/miniconda3/envs/llamaindex/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 2375, in _transform
for output in final_pipeline:
File "/mnt/workspace/workgroup/lengmou/miniconda3/envs/llamaindex/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 1045, in transform
yield from self.stream(final, config, **kwargs)
File "/mnt/workspace/workgroup/lengmou/miniconda3/envs/llamaindex/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py", line 250, in stream
raise e
File "/mnt/workspace/workgroup/lengmou/miniconda3/envs/llamaindex/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py", line 234, in stream
for chunk in self._stream(
File "/mnt/workspace/workgroup/lengmou/miniconda3/envs/llamaindex/lib/python3.9/site-packages/langchain_community/chat_models/openai.py", line 399, in _stream
if len(chunk["choices"]) == 0:
TypeError: object of type 'NoneType' has no len()
```
### Description
Can gpt-4-vision be used in AzureChatOpenAI?
gpt-3.5-turbo, gpt-4, gpt-4-32k, and gpt-4-turbo can all be used with AzureChatOpenAI, but gpt-4-vision cannot.
However, gpt-4-vision does work when called in the following way:
```
curl http://xxxxxxxxxxxxxxxx/2023-12-01-preview/chat \
-H "Content-Type: application/json" \
-H "tenant: 请用租户名称替换我" \
-d '{
"model": "gpt-4-vision",
"stream":false,
"max_tokens":100,
"messages": [{"role": "user","content":[{"type":"text","text":"Describe this picture:"},{"type":"image_url","image_url": {"url":"image_path"}}]}]
}'
```
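The traceback fails at `len(chunk["choices"])` inside `langchain_community`'s streaming loop, which suggests the Azure gpt-4-vision preview emits stream chunks whose `choices` field is `null` rather than an empty list (my reading of the traceback — not confirmed against Azure documentation). Until the library guards for that, the defensive check looks like this:

```python
def has_choice(chunk: dict) -> bool:
    """True only when the stream chunk carries at least one choice."""
    choices = chunk.get("choices")  # may be None on some Azure chunks
    return bool(choices)

print(has_choice({"choices": None}),   # the chunk shape that crashes len()
      has_choice({"choices": []}),
      has_choice({"choices": [{"delta": {"content": "hi"}}]}))
```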
### System Info
langchain==0.0.351
langchain-community==0.0.4
langchain-core==0.1.17
langchain-openai==0.0.5
openai==1.10.0 | gpt-4-vision cannot be used in AzureChatOpenAI? | https://api.github.com/repos/langchain-ai/langchain/issues/16815/comments | 6 | 2024-01-31T05:30:00Z | 2024-06-19T16:06:58Z | https://github.com/langchain-ai/langchain/issues/16815 | 2,109,314,805 | 16,815 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
I tried the following example code:
https://python.langchain.com/docs/modules/agents/agent_types/react
And change the code from:
```Python
tools = [TavilySearchResults(max_results=1)]
```
to:
```Python
tools = []
```
it outputs the following error:
ValueError: An output parsing error occurred. In order to pass this error back to the agent and have it try again, pass `handle_parsing_errors=True` to the AgentExecutor. This is the error: Could not parse LLM output: ` I should try to find information about LangChain on the internet
Action: Search for "LangChain" on Google`
And I also tried this example code:
https://python.langchain.com/docs/modules/agents/agent_types/openai_functions_agent
And change the code from:
```Python
tools = [TavilySearchResults(max_results=1)]
```
to:
```Python
tools = []
```
it outputs the following error:
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "[] is too short - 'functions'", 'type': 'invalid_request_error', 'param': None, 'code': None}}
### Error Message and Stack Trace (if applicable)
```
> Entering new AgentExecutor chain...
Traceback (most recent call last):
  File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain/agents/agent.py", line 1125, in _iter_next_step
    output = self.agent.plan(
             ^^^^^^^^^^^^^^^^
  File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain/agents/agent.py", line 387, in plan
    for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
  File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2424, in stream
    yield from self.transform(iter([input]), config, **kwargs)
  File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2411, in transform
    yield from self._transform_stream_with_config(
  File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1497, in _transform_stream_with_config
    chunk: Output = context.run(next, iterator) # type: ignore
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2375, in _transform
    for output in final_pipeline:
  File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1045, in transform
    yield from self.stream(final, config, **kwargs)
  File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 580, in stream
    yield self.invoke(input, config, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain_core/output_parsers/base.py", line 176, in invoke
    return self._call_with_config(
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1246, in _call_with_config
    context.run(
  File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain_core/runnables/config.py", line 326, in call_func_with_variable_args
    return func(input, **kwargs) # type: ignore[call-arg]
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain_core/output_parsers/base.py", line 177, in <lambda>
    lambda inner_input: self.parse_result([Generation(text=inner_input)]),
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain_core/output_parsers/base.py", line 219, in parse_result
    return self.parse(result[0].text)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain/agents/output_parsers/react_single_input.py", line 84, in parse
    raise OutputParserException(
langchain_core.exceptions.OutputParserException: Could not parse LLM output: ` I should try to find information about LangChain on the internet
Action: Search for "LangChain" on Google`

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/tq/code/langchain/test/Agent/test2.py", line 22, in <module>
    agent_executor.invoke({"input": "what is LangChain?"})
  File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain/chains/base.py", line 162, in invoke
    raise e
  File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain/chains/base.py", line 156, in invoke
    self._call(inputs, run_manager=run_manager)
  File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain/agents/agent.py", line 1391, in _call
    next_step_output = self._take_next_step(
                       ^^^^^^^^^^^^^^^^^^^^^
  File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain/agents/agent.py", line 1097, in _take_next_step
    [
  File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain/agents/agent.py", line 1097, in <listcomp>
    [
  File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain/agents/agent.py", line 1136, in _iter_next_step
    raise ValueError(
ValueError: An output parsing error occurred. In order to pass this error back to the agent and have it try again, pass `handle_parsing_errors=True` to the AgentExecutor. This is the error: Could not parse LLM output: ` I should try to find information about LangChain on the internet
Action: Search for "LangChain" on Google`
```
### Description
I think the Agent should be robust enough to deal with the empty-tools case.
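Until the agent constructors validate their inputs, a caller-side guard gives a clear error instead of the ReAct parser failure (tool names are rendered into the prompt) or the OpenAI 400 (the `functions` array may not be empty). A minimal sketch (the guard is my own, not a LangChain API):

```python
def require_tools(tools: list) -> list:
    """Fail fast before building an agent that cannot work without tools."""
    if not tools:
        raise ValueError(
            "This agent type requires at least one tool; got an empty list."
        )
    return tools

try:
    require_tools([])
except ValueError as err:
    print(err)
```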
### System Info
langchain 0.1.4
langchain-cli 0.0.20
langchain-community 0.0.15
langchain-core 0.1.17
langchain-openai 0.0.5
langchainhub 0.1.14
langgraph 0.0.19
langserve 0.0.39
langsmith 0.0.83 | Agent with empty tools is not working | https://api.github.com/repos/langchain-ai/langchain/issues/16812/comments | 5 | 2024-01-31T04:05:51Z | 2024-06-07T12:27:46Z | https://github.com/langchain-ai/langchain/issues/16812 | 2,109,240,604 | 16,812 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
class MyCustomAsyncHandler(AsyncCallbackHandler):
async def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
"""Run when chain ends running."""
print("RESPONSE: ", response)
print("Hi! I just woke up. Your llm is ending")
async def ask_assistant(input: str) -> str:
prompt = PromptTemplate.from_template(prompt_raw)
prompt = prompt.partial(
language="Spanish",
tools=render_text_description(tools),
tool_names=", ".join([t.name for t in tools]),
)
llm = ChatOpenAI(
temperature=0,
model_name="gpt-4",
openai_api_key=os.environ["OPENAI_API_KEY"],
callbacks=[MyCustomAsyncHandler()],
)
llm_with_stop = llm.bind(stop=["\nObservation"])
agent = (
{
"input": lambda x: x["input"],
"agent_scratchpad": lambda x: format_log_to_str(x["intermediate_steps"]),
"chat_history": lambda x: x["chat_history"],
}
| prompt
| llm_with_stop
| ReActSingleInputOutputParser()
)
agent_executor = AgentExecutor(
agent=agent,
tools=tools,
verbose=True,
memory=memory,
max_execution_time=60,
handle_parsing_errors=True,
)
with get_openai_callback() as cb:
clara_ai_resp = await agent_executor.ainvoke({"input": input})
clara_ai_output = clara_ai_resp["output"]
print("CB: ", cb)
return clara_ai_output, input, cb
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I'm trying to use get_openai_callback from langchain_community.callbacks to get the number of tokens and the cost incurred in using the agent, but I am getting zero for everything, as you can see here when I print:

I have also set up a custom callback handler to dig deeper into the issue, and what I found is that ChatOpenAI from langchain_openai does not call ainvoke the way ChatOpenAI from langchain.chat_models did.
Thank you for your help.
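For context on why the totals can be zero: `get_openai_callback` just sums whatever `token_usage` the model result reports in `llm_output`; if the `langchain_openai` path (or a streamed response) delivers results without that field, every addition is zero. A dependency-free sketch of that accounting (field names follow the OpenAI-style `llm_output`, which is an assumption here):

```python
class TokenCounter:
    """Minimal stand-in for the handler behind get_openai_callback."""
    def __init__(self):
        self.total_tokens = 0

    def on_llm_end(self, llm_output):
        usage = (llm_output or {}).get("token_usage") or {}
        self.total_tokens += usage.get("total_tokens", 0)

cb = TokenCounter()
cb.on_llm_end({"token_usage": {"total_tokens": 42}})  # usage reported
cb.on_llm_end({})  # e.g. a response without usage info -> adds nothing
print(cb.total_tokens)
```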
### System Info
python 3.11.5 | get_openai_callback not working when using Agent Executor after updating to latest version of Langchain | https://api.github.com/repos/langchain-ai/langchain/issues/16798/comments | 38 | 2024-01-30T18:34:08Z | 2024-06-06T13:22:26Z | https://github.com/langchain-ai/langchain/issues/16798 | 2,108,509,299 | 16,798 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
The following code:
```python
from langchain.memory import ConversationBufferMemory
llm = AzureChatOpenAI(
azure_endpoint=AZURE_TEXT_ENDPOINT,
openai_api_version=OPEN_API_VERSION,
deployment_name=AZURE_TEXT_DEPLOYMENT, #"gpt-4_32k",
openai_api_key=OPENAI_TEXT_API_KEY,
openai_api_type=OPENAI_API_TYPE, #"azure",
temperature=0
)
ai_search_endpoint = get_ai_search_endpoint()
ai_search_admin_key = get_ai_search_admin_key()
vector_store = AzureSearch(
azure_search_endpoint=ai_search_endpoint,
azure_search_key=ai_search_admin_key,
index_name=index_name,
embedding_function=embeddings.embed_query,
content_key="xxx"
)
"""Retriever that uses `Azure Cognitive Search`."""
azure_search_retriever = AzureSearchVectorStoreRetriever(
vectorstore=vector_store,
search_type="hybrid",
k=3,
)
retriever_tool = create_retriever_tool(
azure_search_retriever,
"Retriever",
"Useful when you need to retrieve information from documents",
)
prompt = ChatPromptTemplate.from_messages(
[
("system", """Remember the previous chats: {chat_history}. Respond to the human as helpfully and accurately as possible. You are a helpful assistant who retrieves information from a database of documents. If you cannot find the answer in the documents please write: 'I do not have the answer from the given information'. You have access to the following tools:\n\n{tools}\n\nUse a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).\n\nValid "action" values: "Final Answer" or {tool_names}\n\nProvide only ONE action per $JSON_BLOB, as shown:\n\n```\n{{\n "action": $TOOL_NAME,\n "action_input": $INPUT\n}}\n```\n\nFollow this format:\n\nQuestion: input question to answer\nThought: consider previous and subsequent steps\nAction:\n```\n$JSON_BLOB\n```\nObservation: action result\n... (repeat Thought/Action/Observation N times)\nThought: I know what to respond\nAction:\n```\n{{\n "action": "Final Answer",\n "action_input": "Final response to human"\n}}\n\nBegin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation'"""),
("user", "{input}\n\n{agent_scratchpad}\n (reminder to respond in a JSON blob no matter what)"),
]
)
memory = ConversationBufferMemory(memory_key="chat_history")
memory.save_context({"input": "hi"}, {"output": "whats up"})
try:
agent = create_structured_chat_agent(llm, [retriever_tool], prompt)
agent_executor = AgentExecutor(tools=[retriever_tool],
agent=agent,
verbose=True,
return_intermediate_steps=True,
handle_parsing_errors=True,
max_iterations=15,
memory=memory
)
except Exception as e:
print(e)
print("error instantiating the agent")
text = "Who is Julia Roberts?"
answer = agent_executor.invoke(
{
"input": text,
}
)
answer
```
### Error Message and Stack Trace (if applicable)
ValueError Traceback (most recent call last)
File <command-1017101766750907>, line 64
63 text = "Who is Julia Roberts?"
---> 64 answer = agent_executor.invoke(
65 {
66 "input": text,
67 }
68 )
69 answer
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/langchain/chains/base.py:164, in Chain.invoke(self, input, config, **kwargs)
162 raise e
163 run_manager.on_chain_end(outputs)
--> 164 final_outputs: Dict[str, Any] = self.prep_outputs(
165 inputs, outputs, return_only_outputs
166 )
167 if include_run_info:
168 final_outputs[RUN_KEY] = RunInfo(run_id=run_manager.run_id)
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/langchain/chains/base.py:440, in Chain.prep_outputs(self, inputs, outputs, return_only_outputs)
438 self._validate_outputs(outputs)
439 if self.memory is not None:
--> 440 self.memory.save_context(inputs, outputs)
441 if return_only_outputs:
442 return outputs
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/langchain/memory/chat_memory.py:37, in BaseChatMemory.save_context(self, inputs, outputs)
35 def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
36 """Save context from this conversation to buffer."""
---> 37 input_str, output_str = self._get_input_output(inputs, outputs)
38 self.chat_memory.add_user_message(input_str)
39 self.chat_memory.add_ai_message(output_str)
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/langchain/memory/chat_memory.py:29, in BaseChatMemory._get_input_output(self, inputs, outputs)
27 if self.output_key is None:
28 if len(outputs) != 1:
---> 29 raise ValueError(f"One output key expected, got {outputs.keys()}")
30 output_key = list(outputs.keys())[0]
31 else:
ValueError: One output key expected, got dict_keys(['output', 'intermediate_steps'])
### Description
I am trying to output the intermediate steps as well as save the previous chat history, but it seems I cannot do both at the same time. The code attached above works when return_intermediate_steps is set to False.
### System Info
langchain==0.1.1
openai==1.7.0 | ValueError: One output key expected, got dict_keys(['output', 'intermediate_steps']) when using create_structured_chat_agent with chat_memory and intermediate steps | https://api.github.com/repos/langchain-ai/langchain/issues/16791/comments | 3 | 2024-01-30T16:03:18Z | 2024-05-07T16:08:53Z | https://github.com/langchain-ai/langchain/issues/16791 | 2,108,219,577 | 16,791 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain.vectorstores.weaviate import Weaviate
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import TextLoader
import weaviate
import json
from langchain.retrievers.weaviate_hybrid_search import WeaviateHybridSearchRetriever
client = weaviate.Client(url="http://localhost:8080")
weave = Weaviate(client=client,index_name="people4",text_key="age")
file = TextLoader("file.txt",encoding="utf-8")
pages = file.load_and_split(text_splitter=RecursiveCharacterTextSplitter(chunk_size=100,
chunk_overlap=20,
length_function=len))
weave.from_documents(documents=pages,client=client,embedding=None,index_name="people4",text_key="age",vectorizer="text2vec-transformers")
```
### Error Message and Stack Trace (if applicable)
TypeError: Weaviate.__init__() got an unexpected keyword argument 'vectorizer'
### Description
Weaviate allows users to specify a vectorizer key-value pair when creating a class, so that they can leverage local vectorization, or generally a vectorizer of their choice, for each class.
Currently this is not implemented in langchain and only a default type schema gets created with a singular data property when using the from_documents or from_texts function calls.
Motivation:
I was using LangChain's Weaviate modules as my library to manage my Weaviate storage. The main problem was that I wanted to use Weaviate's local text2vec-transformers module, but in LangChain there was no way to pass this argument to make sure that particular documents are embedded with particular vectorizers.
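As a possible stopgap until this lands (a sketch, not LangChain's API): the class schema can be created directly with weaviate-client before handing the pre-existing index to LangChain, so the vectorizer is fixed at class-creation time. The payload below is illustrative; the class and property names mirror the example above and are assumptions:

```python
# Illustrative schema payload one could pass to weaviate-client's
# client.schema.create_class() before using LangChain's Weaviate wrapper.
people_class = {
    "class": "people4",
    "vectorizer": "text2vec-transformers",
    "properties": [
        {"name": "age", "dataType": ["text"]},
    ],
}
```

After creating the class this way, from_documents should reuse the existing class (with its vectorizer) rather than auto-creating a default one, though that behavior should be verified against your weaviate-client version.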
### System Info
aiohttp==3.9.1
aiosignal==1.3.1
annotated-types==0.6.0
anyio==4.2.0
attrs==23.2.0
Authlib==1.3.0
certifi==2023.11.17
cffi==1.16.0
charset-normalizer==3.3.2
cryptography==42.0.1
dataclasses-json==0.6.3
frozenlist==1.4.1
greenlet==3.0.3
idna==3.6
jsonpatch==1.33
jsonpointer==2.4
langchain==0.1.4
langchain-community==0.0.16
langchain-core==0.1.16
langsmith==0.0.83
marshmallow==3.20.2
multidict==6.0.4
mypy-extensions==1.0.0
numpy==1.26.3
packaging==23.2
pycparser==2.21
pydantic==2.5.3
pydantic_core==2.14.6
PyYAML==6.0.1
requests==2.31.0
sniffio==1.3.0
SQLAlchemy==2.0.25
tenacity==8.2.3
typing-inspect==0.9.0
typing_extensions==4.9.0
urllib3==2.1.0
validators==0.22.0
weaviate-client==3.26.2
yarl==1.9.4
System Information
OS: Windows
OS Version: 10.0.22621
Python Version: 3.11.6 (tags/v3.11.6:8b6ee5b, Oct 2 2023, 14:57:12) [MSC v.1935 64 bit (AMD64)]
Package Information
langchain_core: 0.1.16
langchain: 0.1.4
langchain_community: 0.0.16
Packages not installed (Not Necessarily a Problem)
The following packages were not found:
langgraph
langserve | community: Weaviate should allow the flexibility for the user to mention what vectorizer module that they want to use | https://api.github.com/repos/langchain-ai/langchain/issues/16787/comments | 3 | 2024-01-30T15:01:16Z | 2024-05-07T16:08:48Z | https://github.com/langchain-ai/langchain/issues/16787 | 2,108,074,367 | 16,787 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
from langchain_community.vectorstores import Neo4jVector
neo4j_db = Neo4jVector(
url=url, username=username, password=password, embedding=embedding
)
### Error Message and Stack Trace (if applicable)
_No response_
### Description
url, username, password are gotten from environment variable even if the user gives them.
### System Info
No info required. | User specified args are not used when initializing Neo4jVector | https://api.github.com/repos/langchain-ai/langchain/issues/16782/comments | 1 | 2024-01-30T13:50:16Z | 2024-01-30T14:05:24Z | https://github.com/langchain-ai/langchain/issues/16782 | 2,107,915,309 | 16,782 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
def chat1(question):
greetings = ["hi", "hello", "hey"]
# if any(greeting in question.lower() for greeting in greetings):
if any(greeting == question.lower() for greeting in greetings):
return "Hello! How can I assist you today?"
PROMPT = """
Given an input question, create a syntactically correct MSSQL query,
then look at the results of the query and return the answer.
    Do not execute any query if the question is not relevant.
    If a column name is not present, refrain from writing the SQL query. Columns like UAN number and PF number are not present; do not consider such columns.
Write the query only for the column names which are present in view.
Execute the query and analyze the results to formulate a response.
Return the answer in user friendly form.
The question: {question}
"""
answer = None
# conn = engine.connect()
# If not in SQL format, create a database chain and run the question
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
try:
# answer = db_chain.run(PROMPT.format(question=question))
# return answer
# except Exception as e:
# return f"An error occurred: {str(e)}"
# result_df = pd.read_sql(answer, conn)
# if result_df.empty:
# return "No results found"
answer = db_chain.run(PROMPT.format(question=question))
return answer
except exc.ProgrammingError as e:
# Check for a specific SQL error related to invalid column name
if "Invalid column name" in str(e):
print("Answer: Error Occured while processing the question")
print(str(e))
return "Invalid question. Please check your column names."
else:
print("Error Occured while processing")
print(str(e))
# return "Unknown ProgrammingError Occured"
return "Invalid question."
except openai.RateLimitError as e:
print("Error Occured while fetching the answer")
print(str(e))
return "Rate limit exceeded. Please, Mention the Specific Columns you need!"
except openai.BadRequestError as e:
print("Error Occured while fetching the answer")
# print(str(e.errcode))
print(str(e))
# return e.message
return "Context length exceeded: This model's maximum context length is 16385 tokens. Please reduce the length of the messages."
except Exception as e:
print("Error Occured while processing")
print(str(e))
return "Unknown Error Occured"
Answer:Error Occured while fetching the answer
Error code: 400 - {'error': {'message': "This model's maximum context length is 16385 tokens. However, your messages resulted in 16648 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
how to return just message present in the error code
### Error Message and Stack Trace (if applicable)
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 16385 tokens. However, your messages resulted in 16648 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
### Description
if the openai.BadRequestError is coming how to return just the message in exception in the code in error handling
### System Info
python 3.11
langchain: latest | how to display just the message from this, openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 16385 tokens. However, your messages resulted in 16648 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} | https://api.github.com/repos/langchain-ai/langchain/issues/16781/comments | 4 | 2024-01-30T13:49:16Z | 2024-05-08T16:07:29Z | https://github.com/langchain-ai/langchain/issues/16781 | 2,107,913,399 | 16,781 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
We are doing a simple call with a stuff chain:
```
LLM_DM_PROMPT = PromptTemplate(
template=dialogue_template,
input_variables=["entities_context", "chat_history", "human_input", "entity_definition",
"state_context"]
)
chain = LLMChain(
llm=args.llm,
prompt=LLM_DM_PROMPT,
memory=preparation_context.chat_history,
verbose=True
)
answer = chain.predict(human_input=user_input, state_context=previous_state,
entity_definition=intent_obj.entities, entities_context=entities_data,
)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
We have obtained the lsof (list open files) output for the process. To clarify, we are utilizing LangChain to initiate calls to OpenAI through a stuff chain with the sync client. We have verified this by inspecting the IPs associated with the TCP CLOSE_WAIT connections.
version
`langchain==0.0.353
`
You can find the lsof output in the file : [HA4.txt](https://github.com/langchain-ai/langchain/files/14094973/HA4.txt)

We found some similar issues related to the CLOSE_WAIT state in other LLM calls: https://github.com/langchain-ai/langchain/issues/13509
### System Info
```
langchain==0.0.353
langchain-community==0.0.13
langchain-core==0.1.12
langchain-google-genai==0.0.6
``` | lots of Open Files and TCP Connections in CLOSE_WAIT State When Calling OpenAI via Langchain ( Streaming ) | https://api.github.com/repos/langchain-ai/langchain/issues/16770/comments | 12 | 2024-01-30T08:45:01Z | 2024-06-24T16:07:20Z | https://github.com/langchain-ai/langchain/issues/16770 | 2,107,255,534 | 16,770 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
The following code
```Python
from langchain_core.tools import tool
from langchain_core.utils.function_calling import convert_to_openai_tool
from langchain_openai.chat_models import ChatOpenAI
model = ChatOpenAI(model="gpt-3.5-turbo-1106")
@tool
def multiply(first_int: int, second_int: int) -> int:
"""Multiply two integers together."""
return first_int * second_int
model_with_tools = model.bind(
tools=[convert_to_openai_tool(multiply)], tool_choice="multiply")
print(model_with_tools.invoke("What is 4 times 5?"))
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/home/tq/code/langchain/test/tools/test.py", line 21, in <module>
print(model_with_tools.invoke("What is 4 times 5?"))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4041, in invoke
return self.bound.invoke(
^^^^^^^^^^^^^^^^^^
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 166, in invoke
self.generate_prompt(
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 544, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 408, in generate
raise e
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 398, in generate
self._generate_with_cache(
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 577, in _generate_with_cache
return self._generate(
^^^^^^^^^^^^^^^
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain_openai/chat_models/base.py", line 451, in _generate
response = self.client.create(messages=message_dicts, **params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/openai/_utils/_utils.py", line 271, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 659, in create
return self._post(
^^^^^^^^^^^
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/openai/_base_client.py", line 1180, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/openai/_base_client.py", line 869, in request
return self._request(
^^^^^^^^^^^^^^
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/openai/_base_client.py", line 922, in _request
return self._retry_request(
^^^^^^^^^^^^^^^^^^^^
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/openai/_base_client.py", line 993, in _retry_request
return self._request(
^^^^^^^^^^^^^^
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/openai/_base_client.py", line 960, in _request
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "'$.tool_choice' is invalid. Please check the API reference: https://platform.openai.com/docs/api-reference.", 'type': 'invalid_request_error', 'param': None, 'code': None}}
### Description
I have already provided tool_choice, so why is it invalid?
### System Info
langchain 0.1.4
langchain-cli 0.0.20
langchain-community 0.0.15
langchain-core 0.1.17
langchain-openai 0.0.5
langchainhub 0.1.14
langgraph 0.0.19
langserve 0.0.39
langsmith 0.0.83 | Error code: 400 - {'error': {'message': "'$.tool_choice' is invalid. when add tool to LLM | https://api.github.com/repos/langchain-ai/langchain/issues/16767/comments | 2 | 2024-01-30T08:06:16Z | 2024-01-30T08:28:05Z | https://github.com/langchain-ai/langchain/issues/16767 | 2,107,191,383 | 16,767 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Page URL: https://python.langchain.com/docs/use_cases/chatbots
Sample Code:
chat(
[
HumanMessage(
content="Translate this sentence from English to French: I love programming."
)
]
)
**Warning:**
Users/randolphhill/govbotics/deepinfra/.venv/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: The function `__call__` was deprecated in LangChain 0.1.7 and will be removed in 0.2.0. Use invoke instead.
warn_deprecated(
**Revised Code:**
rsp = chat.invoke(
[
HumanMessage(
            content="Translate this sentence from English to Bahasa Indonesia: Good Morning, How are you?"
)
]
)
### Idea or request for content:
_No response_ | DOC: Sample Chatbots Quick Start Needs to updated to new API'DOC: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/16755/comments | 1 | 2024-01-30T03:48:39Z | 2024-05-07T16:08:38Z | https://github.com/langchain-ai/langchain/issues/16755 | 2,106,895,485 | 16,755 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
with sync_playwright() as p:
browser = p.chromium.launch(headless=self.headless)
for url in self.urls:
try:
page = browser.new_page()
response = page.goto(url)
if response is None:
raise ValueError(f"page.goto() returned None for url {url}")
text = self.evaluator.evaluate(page, browser, response)
metadata = {"source": url}
docs.append(Document(page_content=text, metadata=metadata))
except Exception as e:
if self.continue_on_failure:
logger.error(
f"Error fetching or processing {url}, exception: {e}"
)
else:
raise e
browser.close()
```
### Error Message and Stack Trace (if applicable)
This piece of code doesn't have any errors
But in large-scale data retrieval, the absence of proxy support may result in data-scraping failures. Therefore, it is recommended to incorporate proxy functionality here to improve robustness and efficiency.
### Description
## Problem Overview:
The current PlaywrightEvaluator class lacks proxy support, limiting flexibility when processing pages using this class. To enhance functionality and improve the class's applicability, it is suggested to add proxy support within the class.
## Proposed Enhancement:
Modify the PlaywrightEvaluator class to accept proxy parameters, allowing the use of proxies when creating Playwright pages. This improvement would enable users to conveniently utilize proxies for accessing pages, thereby expanding the class's use cases.
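For illustration, Playwright's launch() already accepts a proxy option, so the loader could simply forward a user-supplied settings dict (a sketch; the server URL and credentials below are placeholders, and the exact parameter name the loader would expose is an assumption):

```python
# Shape of the proxy settings dict accepted by Playwright's
# browser_type.launch(proxy=...); values here are placeholders.
proxy_settings = {
    "server": "http://my-proxy.example:8080",
    "username": "user",      # optional
    "password": "secret",    # optional
}
# Inside the loader, the launch call would become something like:
# browser = p.chromium.launch(headless=self.headless, proxy=proxy_settings)
```

Forwarding the dict unchanged keeps the loader thin and lets Playwright handle authentication and per-protocol routing.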
### System Info
System Information
------------------
> OS: Linux
> OS Version: #93-Ubuntu SMP Tue Sep 5 17:16:10 UTC 2023
> Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.1.15
> langchain: 0.0.333
> langchain_community: 0.0.15
> langserve: 0.0.39
| Enhancement: Add Proxy Support to PlaywrightURLLoader Class | https://api.github.com/repos/langchain-ai/langchain/issues/16751/comments | 2 | 2024-01-30T02:24:52Z | 2024-05-07T16:08:33Z | https://github.com/langchain-ai/langchain/issues/16751 | 2,106,822,896 | 16,751 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
The following code:
```python
import io
from langchain.llms import LlamaCpp
f = io.BytesIO(b"\x00\x00\x00\x00\x00\x00\x00\x00\x01\x01\x01\x01\x01\x01")
llm = LlamaCpp(model_path=f,temperature=0.1)
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/mlenv_3/lib64/python3.8/site-packages/langchain/load/serializable.py", line 97, in __init__
super().__init__(**kwargs)
File "/home/mlenv_3/lib64/python3.8/site-packages/pydantic/v1/main.py", line 339, in __init__
values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
File "/home/mlenv_3/lib64/python3.8/site-packages/pydantic/v1/main.py", line 1102, in validate_model
values = validator(cls_, values)
File "/home/mlenv_3/lib64/python3.8/site-packages/langchain/llms/llamacpp.py", line 151, in validate_environment
model_path = values["model_path"]
KeyError: 'model_path'
### Description
We need to pass the model as an io.BytesIO object to the LlamaCpp interface. In our case we cannot pass a path to a model saved on disk, so we need support for an io.BytesIO object.
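As a stopgap (a sketch, under the assumption that writing to a temporary file is acceptable in your environment), the in-memory model can be spooled to disk and the resulting path handed to LlamaCpp's model_path; the helper name and suffix are illustrative:

```python
import io
import tempfile

def bytesio_to_path(buf: io.BytesIO, suffix: str = ".gguf") -> str:
    # Persist an in-memory model so APIs that only accept a filesystem
    # path (such as LlamaCpp's model_path) can load it.
    with tempfile.NamedTemporaryFile(delete=False, suffix=suffix) as f:
        f.write(buf.getvalue())
        return f.name
```

This obviously defeats the purpose when disk writes are forbidden, which is why first-class BytesIO support in the interface would still be needed.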
### System Info
pip freeze | grep langchain
langchain==0.0.325
platform:
Linux
Python 3.8.14 | Langchain Llamacpp interface does not accept bytes.io object as input | https://api.github.com/repos/langchain-ai/langchain/issues/16745/comments | 1 | 2024-01-29T22:47:13Z | 2024-05-06T16:09:09Z | https://github.com/langchain-ai/langchain/issues/16745 | 2,106,572,791 | 16,745 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
<img width="678" alt="Screen Shot 2024-01-29 at 1 45 35 PM" src="https://github.com/langchain-ai/langchain/assets/77302524/a174f38b-3bbf-416e-8f58-6cdcbf2f2a79">
### Error Message and Stack Trace (if applicable)
<img width="1019" alt="Screen Shot 2024-01-29 at 1 47 01 PM" src="https://github.com/langchain-ai/langchain/assets/77302524/45447078-03f7-4545-8823-cbe9d267bf8b">
### Description
I'm creating my Pinecone DB as documentation via Pinecone says and trying to upsert my langchain documents into Pinecone, however it is saying I haven't specified my API key which I clearly have. The index is created fine, but I'm hitting a brick wall trying to get my documents into it.
I'm using pinecone as my vector DB and help from langchain for a RAG application. The langchain documentation for Pinecone is outdated (using pinecone init which is not supported anymore) and I'm seeing other people online say they are getting this issue too ([https://www.reddit.com/r/LangChain/comments/199mklo/langchain_011_is_not_working_with_pineconeclient/](https://www.reddit.com/r/LangChain/comments/199mklo/langchain_011_is_not_working_with_pineconeclient/))
### System Info
pip 23.3.2 from /opt/conda/lib/python3.10/site-packages/pip (python 3.10)
langchain==0.0.354
pinecone-client==3.0.0
Using kaggle notebook on personal mac | Pinecone VectorStore Issue | https://api.github.com/repos/langchain-ai/langchain/issues/16744/comments | 1 | 2024-01-29T22:05:14Z | 2024-05-06T16:09:04Z | https://github.com/langchain-ai/langchain/issues/16744 | 2,106,520,632 | 16,744 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
from langchain.llms import HuggingFacePipeline
from langchain.chains import create_extraction_chain_pydantic
from pydantic import BaseModel, Field

model_id = "facebook/opt-2.7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
base_model = AutoModelForCausalLM.from_pretrained(
model_id,
load_in_8bit=True,
device_map='auto'
)
pipe = pipeline("text-generation",
model=base_model,
tokenizer=tokenizer,
max_length=256,
temperature=0.6,
top_p=0.95,
repetition_penalty=1.2)
llm = HuggingFacePipeline(pipeline=pipe)
class Booking(BaseModel):
date_of_arrival: str = Field(description="Time of check in")
date_of_departure: str = Field(description="Time of check out")
number_of_guests: int = Field(description="number of guests")
room_type: str = Field(description="name of room")
special_requests: list = Field(description="list of special requests")
    contact_information: list = Field(description="list of contact information like email and phone")
booking_message = """Hi there! I'm interested in booking a meeting room for a small business conference. The dates I have in mind are February 5th to February 6th, 2024, and we expect around 6 guests to attend. We'll need a standard meeting room with basic amenities. However, we'll also require a projector and whiteboard for presentations. You can reach me at """
chain = create_extraction_chain_pydantic(pydantic_schema=Booking, llm=llm)
chain.run(booking_message)
### Error Message and Stack Trace (if applicable)
OutputParserException: This output parser can only be used with a chat generation.
### Description
OutputParserException: This output parser can only be used with a chat generation.
### System Info
. | OutputParserException: This output parser can only be used with a chat generation. | https://api.github.com/repos/langchain-ai/langchain/issues/16743/comments | 3 | 2024-01-29T21:03:41Z | 2024-04-14T17:17:44Z | https://github.com/langchain-ai/langchain/issues/16743 | 2,106,422,630 | 16,743 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
_No response_ | docs: Add page for Vision models in modules/model_io/chat | https://api.github.com/repos/langchain-ai/langchain/issues/16739/comments | 1 | 2024-01-29T19:17:29Z | 2024-05-06T16:08:59Z | https://github.com/langchain-ai/langchain/issues/16739 | 2,106,243,552 | 16,739 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
_No response_ | docs: Add page for ImagePromptTemplate in modules/model_io/prompts | https://api.github.com/repos/langchain-ai/langchain/issues/16738/comments | 1 | 2024-01-29T19:17:08Z | 2024-05-06T16:08:54Z | https://github.com/langchain-ai/langchain/issues/16738 | 2,106,242,922 | 16,738 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
Similar to JsonOutputFunctionsParser, JsonOutputToolsParser should be able to parse partial results | Add streaming support to JsonOutputToolsParser | https://api.github.com/repos/langchain-ai/langchain/issues/16736/comments | 1 | 2024-01-29T18:41:01Z | 2024-05-06T16:08:49Z | https://github.com/langchain-ai/langchain/issues/16736 | 2,106,183,569 | 16,736 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
class SQLDbTool(BaseTool):
    """Tool SQLDB Agent"""

    name = "@DSR"
    description = "useful when the questions includes the term: @DSR.\n"
    llm: AzureChatOpenAI

    def _run(self, query: str) -> str:
        try:
            # Key Vault details
            key_vault_name = 'XXXXXXXXXXX'
            vault_url = f"https://xxxxxxxxxxxx.vault.azure.net/"

            # Authenticate using DefaultAzureCredential
            credential = DefaultAzureCredential()

            # Create a SecretClient using your credentials
            client = SecretClient(vault_url, credential)

            # Access Key Vault secrets
            secret_name = 'source-XXXX-sql-XX'
            SQL_SERVER_PASSWORD = client.get_secret(secret_name).value

            # Update db_config with dynamic username and password
            db_config = {
                'drivername': 'mssql+pyodbc',
                'username': os.environ["SQL_SERVER_USERNAME"],
                'password': SQL_SERVER_PASSWORD,
                'host': os.environ["SQL_SERVER_ENDPOINT"],
                'port': 14XX,
                'database': os.environ["SQL_SERVER_DATABASE"],
                'query': {'driver': 'ODBC Driver 17 for SQL Server'}
            }
            db_url = URL.create(**db_config)
            db = SQLDatabase.from_uri(db_url)
            toolkit = SQLDatabaseToolkit(db=db, llm=self.llm)
            agent_executor = create_sql_agent(
                prefix=MSSQL_AGENT_PREFIX,
                format_instructions=MSSQL_AGENT_FORMAT_INSTRUCTIONS,
                llm=self.llm,
                toolkit=toolkit,
                callback_manager=self.callbacks,
                verbose=True
            )

            # Define your examples
            examples = [
                {
                    "input": "what are the top 5 brands?",
                    "query": "SELECT TOP 5 Brand",
                }
            ]

            # Define the prompt template for each example
            example_prompt = PromptTemplate.from_messages(
                [('human', '{input}'), ('ai', '{query}')]
            )

            # Create the FewShotPromptTemplate
            few_shot_prompt = FewShotPromptTemplate(
                examples=examples,
                example_prompt=example_prompt,
                prefix="You are a helpful AI Assistant",
                suffix="{input}",
                example_separator="\n\n",
                template_format="f-string",
                validate_template=True,
                input_variables=["input"]
            )

            # Add the FewShotPromptTemplate to your agent_executor
            agent_executor.add_prompt(few_shot_prompt)
            logging.info(f"Login successful: {db_config['username']}")
            response = agent_executor.run(query)
            logger.info(f"Langchain log is:{response}")
            log_stream.seek(0)
            blob_client = blob_service_client.get_container_client(CONTAINER_NAME).get_blob_client(BLOB_NAME)
            blob_client.upload_blob(log_stream.read(), overwrite=True)
        except Exception as e:
            response = str(e)
        return response
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am getting the error: type object 'PromptTemplate' has no attribute 'from_messages'.
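For context, `from_messages` is a constructor on `ChatPromptTemplate`, not on `PromptTemplate` (which exposes `from_template`). A minimal stand-in sketch of that API shape — these are mock classes for illustration, not the real langchain ones:

```python
class PromptTemplate:
    """Stand-in: the real class is built from a single template string."""

    @classmethod
    def from_template(cls, template: str) -> "PromptTemplate":
        return cls()


class ChatPromptTemplate(PromptTemplate):
    """Stand-in: only the chat variant offers from_messages."""

    @classmethod
    def from_messages(cls, messages: list) -> "ChatPromptTemplate":
        return cls()


# Calling from_messages on PromptTemplate fails, mirroring the reported error;
# the likely fix is to build example_prompt with ChatPromptTemplate.from_messages.
has_on_chat = hasattr(ChatPromptTemplate, "from_messages")
has_on_plain = hasattr(PromptTemplate, "from_messages")
```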
### System Info
langchain==0.0.352
langchain-core==0.1.11 | getting error : type object ‘PromptTemplate’ has no attribute ‘from_messages | https://api.github.com/repos/langchain-ai/langchain/issues/16735/comments | 1 | 2024-01-29T17:08:08Z | 2024-05-06T16:08:44Z | https://github.com/langchain-ai/langchain/issues/16735 | 2,106,006,795 | 16,735 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
I want to add few-shot examples to make my prompts better at understanding complex questions with create_sql_agent.
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
agent_executor = create_sql_agent(
    prefix=MSSQL_AGENT_PREFIX,
    format_instructions=MSSQL_AGENT_FORMAT_INSTRUCTIONS,
    llm=llm,
    toolkit=toolkit,
    top_k=30,
    verbose=True
)
### Error Message and Stack Trace (if applicable)
I want to add few-shot examples to make my prompts better at understanding complex questions with create_sql_agent.
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
agent_executor = create_sql_agent(
    prefix=MSSQL_AGENT_PREFIX,
    format_instructions=MSSQL_AGENT_FORMAT_INSTRUCTIONS,
    llm=llm,
    toolkit=toolkit,
    top_k=30,
    verbose=True
)
### Description
I want to add a FewShotPromptTemplate to my agent_executor:
examples = [
    {"input": "List all artists.", "query": "SELECT * FROM Artist;"},
    {
        "input": "Find all albums for the artist 'AC/DC'.",
        "query": "SELECT * FROM Album WHERE ArtistId = (SELECT ArtistId FROM Artist WHERE Name = 'AC/DC');",
    },
    {
        "input": "List all tracks in the 'Rock' genre.",
        "query": "SELECT * FROM Track WHERE GenreId = (SELECT GenreId FROM Genre WHERE Name = 'Rock');",
    },
]

from langchain_core.prompts import FewShotPromptTemplate, PromptTemplate

agent_executor = create_sql_agent(
    prefix=MSSQL_AGENT_PREFIX,
    format_instructions=MSSQL_AGENT_FORMAT_INSTRUCTIONS,
    llm=llm,
    toolkit=toolkit,
    top_k=30,
    verbose=True
)
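As an illustration of what the few-shot prompt should end up looking like, the rendering that `FewShotPromptTemplate` performs can be approximated in plain Python (the function and variable names here are made up for the sketch, not langchain internals):

```python
def render_few_shot(examples, prefix, suffix, user_input):
    # Format each example the way example_prompt would, then join them
    # between the prefix and the suffix that carries the real question.
    blocks = [
        "User input: {input}\nSQL query: {query}".format(**example)
        for example in examples
    ]
    return "\n\n".join([prefix, *blocks, suffix.format(input=user_input)])

prompt = render_few_shot(
    examples=[{"input": "List all artists.", "query": "SELECT * FROM Artist;"}],
    prefix="You are an agent designed to interact with a SQL database.",
    suffix="User input: {input}\nSQL query:",
    user_input="List all tracks.",
)
```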
### System Info
langchain version 0.0.352 | not able to add few shots example to create_sql_agent | https://api.github.com/repos/langchain-ai/langchain/issues/16731/comments | 5 | 2024-01-29T15:27:52Z | 2024-07-09T16:06:14Z | https://github.com/langchain-ai/langchain/issues/16731 | 2,105,755,700 | 16,731 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
@gcheron,
```python
from langchain_community.storage.sql import SQLDocStore
SQLDocStore(connection_string="sqlite:///tmp/test.db")
```
Result:
```
sqlalchemy.exc.CompileError: (in table 'langchain_storage_collection', column 'uuid'): Compiler <sqlalchemy.dialects.sqlite.base.SQLiteTypeCompiler object at 0x7f525b78d270> can't render element of type UUID
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/sql/visitors.py", line 139, in _compiler_dispatch
meth = getter(visitor)
AttributeError: 'SQLiteTypeCompiler' object has no attribute 'visit_UUID'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/sql/compiler.py", line 6522, in visit_create_table
processed = self.process(
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/sql/compiler.py", line 912, in process
return obj._compiler_dispatch(self, **kwargs)
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/sql/visitors.py", line 143, in _compiler_dispatch
return meth(self, **kw) # type: ignore # noqa: E501
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/sql/compiler.py", line 6553, in visit_create_column
text = self.get_column_specification(column, first_pk=first_pk)
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/dialects/sqlite/base.py", line 1534, in get_column_specification
coltype = self.dialect.type_compiler_instance.process(
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/sql/compiler.py", line 957, in process
return type_._compiler_dispatch(self, **kw)
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/sql/visitors.py", line 141, in _compiler_dispatch
return visitor.visit_unsupported_compilation(self, err, **kw) # type: ignore # noqa: E501
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/sql/compiler.py", line 962, in visit_unsupported_compilation
raise exc.UnsupportedCompilationError(self, element) from err
sqlalchemy.exc.UnsupportedCompilationError: Compiler <sqlalchemy.dialects.sqlite.base.SQLiteTypeCompiler object at 0x7f525b78d270> can't render element of type UUID (Background on this error at: https://sqlalche.me/e/20/l7de)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 3508, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-4-aac51bc0ae83>", line 1, in <module>
SQLDocStore(connection_string="sqlite:////tmp/test.db")
File "/usr/lib/python3.10/typing.py", line 957, in __call__
result = self.__origin__(*args, **kwargs)
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/langchain_community/storage/sql.py", line 185, in __init__
self.__post_init__()
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/langchain_community/storage/sql.py", line 194, in __post_init__
self.__create_tables_if_not_exists()
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/langchain_community/storage/sql.py", line 204, in __create_tables_if_not_exists
Base.metadata.create_all(self._conn)
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/sql/schema.py", line 5832, in create_all
bind._run_ddl_visitor(
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 2448, in _run_ddl_visitor
visitorcallable(self.dialect, self, **kwargs).traverse_single(element)
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/sql/visitors.py", line 671, in traverse_single
return meth(obj, **kw)
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/sql/ddl.py", line 919, in visit_metadata
self.traverse_single(
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/sql/visitors.py", line 671, in traverse_single
return meth(obj, **kw)
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/sql/ddl.py", line 957, in visit_table
)._invoke_with(self.connection)
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/sql/ddl.py", line 315, in _invoke_with
return bind.execute(self)
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1416, in execute
return meth(
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/sql/ddl.py", line 181, in _execute_on_connection
return connection._execute_ddl(
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1525, in _execute_ddl
compiled = ddl.compile(
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/sql/elements.py", line 309, in compile
return self._compiler(dialect, **kw)
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/sql/ddl.py", line 69, in _compiler
return dialect.ddl_compiler(dialect, self, **kw)
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/sql/compiler.py", line 867, in __init__
self.string = self.process(self.statement, **compile_kwargs)
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/sql/compiler.py", line 912, in process
return obj._compiler_dispatch(self, **kwargs)
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/sql/visitors.py", line 143, in _compiler_dispatch
return meth(self, **kw) # type: ignore # noqa: E501
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/sql/compiler.py", line 6532, in visit_create_table
raise exc.CompileError(
sqlalchemy.exc.CompileError: (in table 'langchain_storage_collection', column 'uuid'): Compiler <sqlalchemy.dialects.sqlite.base.SQLiteTypeCompiler object at 0x7f525b78d270> can't render element of type UUID
### Description
This new class presents a few problems.
- It is not sqlite compatible
- It uses a `connection_string` parameter, whereas `db_url` is used everywhere.
- It does not allow the call to be encapsulated in a transaction, as it cannot receive an engine parameter instead of `db_url`.
The usual pattern in langchain is to accept an engine parameter for SQL manipulation. This is the case for:
langchain_community.cache.SQLAlchemyCache
langchain_community.cache.SQLAlchemyMd5Cache
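For reference, sqlite has no native UUID column type, which is why the `uuid` column fails to compile; a portable workaround (sketched here with the stdlib, not the actual SQLDocStore code) is to store UUIDs as TEXT:

```python
import sqlite3
import uuid

# Portable sketch: persist UUIDs as TEXT so the sqlite dialect can render them.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE langchain_storage_collection (uuid TEXT PRIMARY KEY, name TEXT)"
)

uid = str(uuid.uuid4())
conn.execute("INSERT INTO langchain_storage_collection VALUES (?, ?)", (uid, "docs"))

row = conn.execute(
    "SELECT name FROM langchain_storage_collection WHERE uuid = ?", (uid,)
).fetchone()
```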
### System Info
langchain==0.1.4
langchain-community @ file:///home/pprados/workspace.bda/langchain/libs/community
langchain-core==0.1.16
langchain-openai==0.0.2.post1
langchain-qa-with-references==0.0.330
| SQLDocStore is incompatible with sqlite | https://api.github.com/repos/langchain-ai/langchain/issues/16726/comments | 3 | 2024-01-29T13:17:36Z | 2024-02-14T03:45:58Z | https://github.com/langchain-ai/langchain/issues/16726 | 2,105,482,362 | 16,726 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
I have followed the documentation guide and created an agent with a vector retriever, with a Pydantic schema passed as a function into the agent, but it seems that no information from the retriever is passed to the schema function; the schema function will just give source [1] as output. I have also tried an exact replica of the code in the documentation with my own data, since I don't have the state_of_the_union text file, and I got the same error as in my own implementation.
This is the documentation link https://python.langchain.com/docs/modules/agents/how_to/agent_structured
During my experiments:
<img width="861" alt="image" src="https://github.com/langchain-ai/langchain/assets/48542562/26197d99-c0b3-4cc5-b2b3-9f83f1e619a1">
You can see that this piece of information containing capstone = 5.5 is at page chunk 10
<img width="841" alt="image" src="https://github.com/langchain-ai/langchain/assets/48542562/1b98f72e-061f-482d-91cf-8d6ae808ef31">
In the response to "what is my capstone grade", it gives source [], which is definitely not the right information.
### Idea or request for content:
I am not exactly sure what is causing the problem; maybe I should flag this as a bug, or maybe it is just some configuration error.
My suspicion is that after the creation of the retriever tool, it formats the document using a prompt which strips away the metadata information. I have tried passing an additional prompt when I'm creating the retriever tool so that it returns the metadata output together with the content.
<img width="629" alt="image" src="https://github.com/langchain-ai/langchain/assets/48542562/bfce8431-09a6-4c42-adbe-c04eb94cf418">
this will result in the answer formatted to be
<img width="289" alt="image" src="https://github.com/langchain-ai/langchain/assets/48542562/a7828f65-0eff-4835-9580-5ef331f3b68b">
But it takes a lot of luck and tuning to get the final answer with the correct source information (I have succeeded once in a while, but not most of the time).
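To make that suspicion concrete, the document formatting step can be sketched in plain Python — a hypothetical formatter, not the actual retriever-tool internals — showing how metadata survives only when it is included explicitly:

```python
def format_docs(docs, include_metadata=False):
    # Hypothetical formatter: by default only the content is kept,
    # which is why source/chunk numbers never reach the schema function.
    if include_metadata:
        return "\n\n".join(
            f"[chunk {d['metadata']['chunk']}] {d['content']}" for d in docs
        )
    return "\n\n".join(d["content"] for d in docs)

docs = [{"content": "capstone = 5.5", "metadata": {"chunk": 10}}]
plain = format_docs(docs)
tagged = format_docs(docs, include_metadata=True)
```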
Please advise on what could be the potential solution to this, thank you very much | DOC: Returning structured output from agent documentations not correct | https://api.github.com/repos/langchain-ai/langchain/issues/16720/comments | 10 | 2024-01-29T10:13:29Z | 2024-05-23T11:03:23Z | https://github.com/langchain-ai/langchain/issues/16720 | 2,105,135,404 | 16,720 |
[
"langchain-ai",
"langchain"
] | ### Description
I'm trying to integrate my Nemotron LLM with langchain. I use the source code in langchain_nvidia_trt.llms.py to get streaming, but it raises an exception.
### Example Code
```python
from llms import TritonTensorRTLLM
llm = TritonTensorRTLLM(server_url="localhost:8001", model_name="Nemotron-rlhf")
res = llm.invoke("HI")
```
### Error Message and Stack Trace (if applicable)
and the exception is below
```
Traceback (most recent call last):
File "/workspace/workspace/tens.py", line 4, in <module>
res = llm.invoke("HI")
File "/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/llms.py", line 230, in invoke
self.generate_prompt(
File "/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/llms.py", line 525, in generate_prompt
return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/llms.py", line 698, in generate
output = self._generate_helper(
File "/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/llms.py", line 562, in _generate_helper
raise e
File "/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/llms.py", line 549, in _generate_helper
self._generate(
File "/workspace/workspace/llms.py", line 153, in _generate
result: str = self._request(
File "/workspace/workspace/llms.py", line 206, in _request
result_str += token
TypeError: can only concatenate str (not "InferenceServerException") to str
```
the InferenceServerException is below:
`unexpected inference output 'text_output' for model 'Nemotron-rlhf'`
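The final `TypeError` comes from the accumulation loop in `_request`, which concatenates whatever the response stream yields — including error objects. A defensive version of that loop can be sketched as follows (illustrative only, not the actual `langchain_nvidia_trt` code):

```python
def accumulate_tokens(stream):
    # Raise error objects yielded by the stream instead of concatenating them.
    result = ""
    for token in stream:
        if isinstance(token, Exception):
            raise token
        result += token
    return result

ok = accumulate_tokens(["Hello", ", ", "world"])

try:
    accumulate_tokens(["Hello", RuntimeError("unexpected inference output")])
    raised = False
except RuntimeError:
    raised = True
```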
### System Info
System Information
------------------
> OS: Linux
> OS Version: #163-Ubuntu SMP Fri Mar 17 18:26:02 UTC 2023
> Python Version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.1.16
> langchain: 0.1.4
> langchain_community: 0.0.16
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Nvidia Nemotron integration with langchain with TritonTensorRTLLM | https://api.github.com/repos/langchain-ai/langchain/issues/16719/comments | 2 | 2024-01-29T09:38:42Z | 2024-05-06T16:08:39Z | https://github.com/langchain-ai/langchain/issues/16719 | 2,105,059,012 | 16,719 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
using the following code:
```python
self.agent = (
    {
        "input": itemgetter("input"),
        "agent_scratchpad": lambda x: format_to_openai_function_messages(
            x["intermediate_steps"]
        ),
        'chat_history': itemgetter("chat_history")
    }
    | self.prompt
    | self.condense_prompt
    # | self.moderate
    | self.llm.bind(functions=[convert_to_openai_function(t) for t in self.tools])
    | OpenAIFunctionsAgentOutputParser()
)

from langchain_core.runnables.history import RunnableWithMessageHistory

self.agent_executor = AgentExecutor(
    agent=self.agent,
    tools=self.tools,
    memory=self.memory,
    verbose=True,
    handle_parsing_errors=True,
    return_intermediate_steps=True,
    # max_iterations=2,
    # max_execution_time=1,
)

self.agent_with_chat_history = RunnableWithMessageHistory(
    self.agent_executor,
    # This is needed because in most real world scenarios, a session id is needed
    # It isn't really used here because we are using a simple in memory ChatMessageHistory
    lambda session_id: RedisChatMessageHistory(self.session_id, url=f"redis://{self.redis_server}/0"),
    input_messages_key="input",
    history_messages_key=self.memory_key,
    output_messages_key="output"
)
```
and a tool:
```python
class QuotationTool(BaseTool):
    name = "create_quotation_tool"
    description = """
    a useful tool to create products or services quotations, all required Field must be available in order
    to complete check the args_schema [QuotationToolSchema] schema for it,
    """
    args_schema: Type[QuotationToolSchema] = QuotationToolSchema

    async def _arun(
        self,
        *args: Any,
        **kwargs: Any,
    ) -> Any:
        # Get the tool arguments
        phone = kwargs.get("phone")
        name = kwargs.get("name")
        email = kwargs.get('email')
        product_name = kwargs.get("product_name")
        unit_amount = kwargs.get("unit_amount")
        currency = kwargs.get("currency")
        quantity = kwargs.get("quantity")

        # Search for the customer by phone
        customer = stripe_helper.search_customer(phone)

        # Check if the customer exists
        if customer:
            # Get the customer id
            customer_id = customer[0]['id']
        else:
            # Create a new customer
            customer_id = stripe_helper.create_customer(phone, name, email)

        # Search for the product by name
        product = stripe_helper.search_product(product_name)

        # Check if the product exists
        if product:
            # Get the product id
            product_id = product[0]['id']
        else:
            # Create a new product
            product_id = stripe_helper.create_product(product_name)

        # Search for the price by product id
        price = stripe_helper.search_price(product_id)

        # Check if the price exists
        if price:
            # Get the price id
            price_id = price[0]['id']
        else:
            # Create a new price
            price_id = stripe_helper.create_product_price(unit_amount, currency, product_id)['id']

        # Create a line item with the price id and quantity
        line_item = {
            "price": price_id,
            "quantity": quantity
        }

        # Create a quotation with the Stripe API
        quotation = stripe_helper.create_quotation(customer_id, [line_item])

        # Finalize the quotation
        quotation = stripe_helper.finalize_quota(quotation['id'])

        # Download the quotation PDF
        pdf_name = f"{quotation['id']}.pdf"
        stripe_helper.download_quota_pdf(quotation['id'], customer_id, pdf_name)

        # Send the quotation PDF to the user
        from events_producer.producer import KafkaProducerWrapper
        producer = KafkaProducerWrapper()
        producer.send_pdf_to_consumer(
            'send_quota_topic',
            pdf_name,
            phone_number=phone,
            quota_id=quotation['id']
        )

        return f"""
        alright, quota prepared, and will be send to you soon.
        internally call this tool to send quota via email send_pdf_via_ses_tool
        use customer id {customer_id} and {quotation['id']} to accomplish the task
        """

    def _run(
        self,
        phone: str,
        name: str,
        product_name: str,
        email: str,
        unit_amount: int,
        quantity: int = 1,
        currency: str = 'aed',
    ) -> Any:
        # Execute the tool asynchronously
        return self._arun(phone=phone, name=name, product_name=product_name,
                          unit_amount=unit_amount * 100, currency=currency,
                          quantity=quantity)
```
### Error Message and Stack Trace (if applicable)
stack trace:
```command
Invoking: `place_order_tool` with `{'query': 'صمغ عربي مع دوم وكركديه 500 جرام - 4 علب'}`
to place an order trigger this sequence or tools:
1. ask the customer for his info.
2. call the create_quotation_tool to create the quota.
3. call {send_pdf_via_ses_tool} and {whatsapp_send_pdf_tool} tools afterwards to send quotation to customer
5. create a payment link using {payment_link_tool} for the quotation after the customer to approve it.
product price or unit_amount unit_amount = unit_amount * 100 always otherwise Stripe will not accept the amount.
for: {whatsapp_send_pdf_tool} tool make sure all captions are in customer original language
2024-01-29 12:06:01,769 - httpx - INFO - HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
شكرًا لاختياركم. إجمالي السعر لـ 4 علب من صمغ عربي مع كركديه 500 جرام هو 320 درهم إماراتي.
لإتمام الطلب، أحتاج إلى بعض المعلومات منك:
1. رقم الهاتف
2. البريد الإلكتروني
3. عنوان الشحن
يرجى تزويدي بهذه المعلومات لنتمكن من متابعة الطلب.
> Finished chain.
2024-01-29 12:06:11,463 - BOTS Service - INFO - [*] response: شكرًا لاختياركم. إجمالي السعر لـ 4 علب من صمغ عربي مع كركديه 500 جرام هو 320 درهم إماراتي.
لإتمام الطلب، أحتاج إلى بعض المعلومات منك:
1. رقم الهاتف
2. البريد الإلكتروني
3. عنوان الشحن
يرجى تزويدي بهذه المعلومات لنتمكن من متابعة الطلب.
2024-01-29 12:06:31,146 - BOTS Service - INFO - [*] topic:gpt_message_response_topic, event: {'requestId': '969722f0-8991-4602-8c4a-d36a2cb4237c', 'message': '+٩٧١٥٦٩٩٣٣٨٩١', 'phoneNumber': '971565531542', 'customerName': 'Yafa Cloud Services'}
2024-01-29 12:06:31,146 - BOTS Service - INFO - [*] question: +٩٧١٥٦٩٩٣٣٨٩١, phone_number: 971565531542, customer_name: Yafa Cloud Services
> Entering new AgentExecutor chain...
2024-01-29 12:06:34,174 - httpx - INFO - HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
Invoking: `create_quotation_tool` with `{'phone': '+971569933891', 'email': 'customer@example.com', 'name': 'Customer', 'product_name': 'صمغ عربي مع كركديه 500 جرام - 4 علب', 'unit_amount': 32000, 'quantity': 4, 'currency': 'AED'}`
alright, quota prepared, and will be send to you soon.
internally call this tool to send quota via email send_pdf_via_ses_tool
use customer id cus_PHrvofHhcVMhDY and qt_1OdpraIPds9mVdeaVWfGlAMh to accomplish the task
2024-01-29 12:06:51,122 - httpx - INFO - HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
Invoking: `create_quotation_tool` with `{'phone': '+971569933891', 'email': 'customer@example.com', 'name': 'Customer', 'product_name': 'صمغ عربي مع كركديه 500 جرام - 4 علب', 'unit_amount': 32000, 'quantity': 4, 'currency': 'AED'}`
alright, quota prepared, and will be send to you soon.
internally call this tool to send quota via email send_pdf_via_ses_tool
use customer id cus_PHrvofHhcVMhDY and qt_1OdprqIPds9mVdea8lLxmUKf to accomplish the task
2024-01-29 12:07:06,480 - httpx - INFO - HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
Invoking: `create_quotation_tool` with `{'phone': '+971569933891', 'email': 'customer@example.com', 'name': 'Customer', 'product_name': 'صمغ عربي مع كركديه 500 جرام - 4 علب', 'unit_amount': 32000, 'quantity': 4, 'currency': 'AED'}`
alright, quota prepared, and will be send to you soon.
internally call this tool to send quota via email send_pdf_via_ses_tool
use customer id cus_PHrvofHhcVMhDY and qt_1Odps5IPds9mVdeawJLdpkp2 to accomplish the task
2024-01-29 12:07:21,690 - httpx - INFO - HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
Invoking: `create_quotation_tool` with `{'phone': '+971569933891', 'email': 'customer@example.com', 'name': 'Customer', 'product_name': 'صمغ عربي مع كركديه 500 جرام - 4 علب', 'unit_amount': 32000, 'quantity': 4, 'currency': 'AED'}`
```
### Description
I will attach a screenshot to show how the agent is triggering the tool multiple times.

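One mitigation sketch for the repeated invocations — using hypothetical helper names, not a langchain API — is to memoize identical tool calls so retries with the same arguments return the cached result instead of re-running side effects:

```python
calls = {"count": 0}

def create_quotation(phone, product_name, quantity):
    # Stand-in for the real side-effecting tool body.
    calls["count"] += 1
    return f"quote for {quantity} x {product_name} ({phone})"

def run_tool_once(cache, args, fn):
    # Memoize on the exact argument set so repeated identical calls are no-ops.
    key = tuple(sorted(args.items()))
    if key not in cache:
        cache[key] = fn(**args)
    return cache[key]

cache = {}
args = {"phone": "+971000000000", "product_name": "gum arabic", "quantity": 4}
first = run_tool_once(cache, args, create_quotation)
second = run_tool_once(cache, args, create_quotation)
```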
### System Info
I have already run:
pip install --upgrade langchain
version: Successfully installed langchain-0.1.4
| LangChain retrigger tools multiple times until hit agent limits | https://api.github.com/repos/langchain-ai/langchain/issues/16712/comments | 3 | 2024-01-29T08:12:43Z | 2024-03-13T09:25:12Z | https://github.com/langchain-ai/langchain/issues/16712 | 2,104,900,864 | 16,712 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain.base_language import BaseLanguageModel
from langchain_core.runnables import ConfigurableField
from langchain_core.runnables.base import RunnableSerializable
from typing import Optional
from langchain_openai import OpenAI


class MyRunnable(RunnableSerializable):
    llm: Optional[BaseLanguageModel] = None

    def invoke(self):
        return "hi"


configurable_runnable = MyRunnable().configurable_fields(
    llm=ConfigurableField(
        id="llm",
        annotation=BaseLanguageModel,
        name="Language Model",
        description="The language model to use for generation"
    )
)

llm = OpenAI()

chain = configurable_runnable.with_config({"configurable": {"llm": llm}})
chain.invoke({})
```
### Error Message and Stack Trace (if applicable)
```
{
"name": "ValidationError",
"message": "1 validation error for MyRunnable
llm
Can't instantiate abstract class BaseLanguageModel with abstract methods agenerate_prompt, apredict, apredict_messages, generate_prompt, invoke, predict, predict_messages (type=type_error)",
"stack": "---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
Cell In[2], line 25
22 llm = OpenAI()
24 chain = configurable_runnable.with_config({\"configurable\": {\"llm\": llm}})
---> 25 chain.invoke({})
File ~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/langchain_core/runnables/base.py:3887, in RunnableBindingBase.invoke(self, input, config, **kwargs)
3881 def invoke(
3882 self,
3883 input: Input,
3884 config: Optional[RunnableConfig] = None,
3885 **kwargs: Optional[Any],
3886 ) -> Output:
-> 3887 return self.bound.invoke(
3888 input,
3889 self._merge_configs(config),
3890 **{**self.kwargs, **kwargs},
3891 )
File ~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/langchain_core/runnables/configurable.py:94, in DynamicRunnable.invoke(self, input, config, **kwargs)
91 def invoke(
92 self, input: Input, config: Optional[RunnableConfig] = None, **kwargs: Any
93 ) -> Output:
---> 94 runnable, config = self._prepare(config)
95 return runnable.invoke(input, config, **kwargs)
File ~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/langchain_core/runnables/configurable.py:291, in RunnableConfigurableFields._prepare(self, config)
283 configurable = {
284 **configurable_fields,
285 **configurable_single_options,
286 **configurable_multi_options,
287 }
289 if configurable:
290 return (
--> 291 self.default.__class__(**{**self.default.__dict__, **configurable}),
292 config,
293 )
294 else:
295 return (self.default, config)
File ~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/langchain_core/load/serializable.py:107, in Serializable.__init__(self, **kwargs)
106 def __init__(self, **kwargs: Any) -> None:
--> 107 super().__init__(**kwargs)
108 self._lc_kwargs = kwargs
File ~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/main.py:341, in BaseModel.__init__(__pydantic_self__, **data)
339 values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
340 if validation_error:
--> 341 raise validation_error
342 try:
343 object_setattr(__pydantic_self__, '__dict__', values)
ValidationError: 1 validation error for MyRunnable
llm
Can't instantiate abstract class BaseLanguageModel with abstract methods agenerate_prompt, apredict, apredict_messages, generate_prompt, invoke, predict, predict_messages (type=type_error)"
}
```
### Description
I wanted to make the LLM passed to a runnable a configurable parameter (this makes sense semantically in my application). It fails with the error above. Interestingly, if I instead invoke the runnable with a config dict, it works:
```
test = MyRunnable().invoke({}, config={"configurable": {"llm": llm}})
```
I looked into it a little; the exact reason still eludes me, but it seems that when `with_config` is called, the passed parameters are validated by Pydantic, *which tries to instantiate them in order to do so*; this fails since the `llm` attribute is annotated with an ABC that cannot be directly instantiated.
This is likely related to https://github.com/langchain-ai/langchain/issues/2636 .
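The failure mode can be reproduced without langchain at all — instantiating any ABC with unimplemented abstract methods raises the same kind of error pydantic hits during validation (a stand-in class below, not the real `BaseLanguageModel`):

```python
from abc import ABC, abstractmethod

class FakeBaseLanguageModel(ABC):
    # Stand-in for the real ABC that with_config tries to re-validate.
    @abstractmethod
    def invoke(self, prompt: str) -> str: ...

try:
    FakeBaseLanguageModel()
    error_message = ""
except TypeError as exc:
    error_message = str(exc)
```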
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Fri, 05 Jan 2024 15:35:19 +0000
> Python Version: 3.11.6 (main, Nov 14 2023, 09:36:21) [GCC 13.2.1 20230801]
Package Information
-------------------
> langchain_core: 0.1.16
> langchain: 0.1.4
> langchain_community: 0.0.16
> langchain_openai: 0.0.2.post1
> langgraph: 0.0.12
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langserve | Cannot pass an OpenAI model instance with `with_config`, Pydantic gives a type error. | https://api.github.com/repos/langchain-ai/langchain/issues/16711/comments | 1 | 2024-01-29T08:05:04Z | 2024-05-06T16:08:35Z | https://github.com/langchain-ai/langchain/issues/16711 | 2,104,887,870 | 16,711 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
The FastAPI installation step is missing in the documentation.
### Idea or request for content:
In https://python.langchain.com/docs/get_started/quickstart#serving-with-langserve, we need to add a dependency installation step.
```pip install FastAPI``` | DOC: Missing dependency installation steps | https://api.github.com/repos/langchain-ai/langchain/issues/16703/comments | 2 | 2024-01-28T17:52:15Z | 2024-01-29T00:51:39Z | https://github.com/langchain-ai/langchain/issues/16703 | 2,104,276,260 | 16,703 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
There is no suggestion to install **langchain_openai** in the LLM agents section, because of which users might get errors while following the documentation.
### Idea or request for content:
In this section, along with **langchainhub** (https://python.langchain.com/docs/get_started/quickstart#agent), we can add these steps.
``` pip install langchain_openai ```
And
```export OPENAI_API_KEY=...``` | DOC: 'Missing dependency installation step in documentation for LLM Agents part' | https://api.github.com/repos/langchain-ai/langchain/issues/16702/comments | 1 | 2024-01-28T17:48:17Z | 2024-05-05T16:06:52Z | https://github.com/langchain-ai/langchain/issues/16702 | 2,104,274,819 | 16,702 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
I tried out the tutorial at this link: https://python.langchain.com/docs/modules/model_io/output_parsers/types/retry
But I am getting this error related to the Retry Parser tutorial example: `ValidationError: 1 validation error for Action action_input Field required [type=missing, input_value={'action': 'search'}, input_type=dict] For further information visit https://errors.pydantic.dev/2.5/v/missing`.
After conducting various experiments to find the cause, I found out that changing the part `from pydantic import BaseModel, Field` in the code example to `from langchain_core.pydantic_v1 import BaseModel, Field` makes the example code run correctly. The version of langchain I tested is `0.1.3.`
It seems that the official documentation's examples have not been updated to reflect the changes in syntax due to version updates of langchain, so I'm leaving this issue.
#### 👉 Summary
- I tried out the Retry Parser tutorial example
- I found an error that seems to be due to the example content not being updated following a version update of langchain.
- from `from pydantic import BaseModel, Field`
- to `from langchain_core.pydantic_v1 import BaseModel, Field`
- I used `langchain v0.1.3` and confirmed that the example works correctly when executed as described.
### Idea or request for content:
I hope that the issue I raised will be reflected in the official documentation :) | DOC: Error in Retry Parser example documentation | https://api.github.com/repos/langchain-ai/langchain/issues/16698/comments | 1 | 2024-01-28T15:18:33Z | 2024-01-29T00:53:14Z | https://github.com/langchain-ai/langchain/issues/16698 | 2,104,210,604 | 16,698 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain.vectorstores.weaviate import Weaviate
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import TextLoader
import weaviate
client = weaviate.Client(url="http://localhost:8080")
weave = Weaviate(client=client,index_name="people",text_key="age")
file = TextLoader("file.txt",encoding="utf-8")
pages = file.load_and_split(text_splitter=RecursiveCharacterTextSplitter(chunk_size=100,
chunk_overlap=20,
length_function=len))
weave.from_documents(documents=pages,client=client,embedding=None,index_name="people",text_key="age")
props = client.schema.get(class_name="people")['properties']
for prop in props:
print(prop['name'])
```
### Error Message and Stack Trace (if applicable)
No error but a discrepancy due to lack of argument passing in the function call
### Description
* I am trying to use the weaviate vectorstore in langchain to store documents
* When using the **from_documents** function, which internally calls **from_texts**, there is a mismatch in the expected properties of the schema/class created.
* In the **from_texts** call, the **_default_schema** function is called without passing the **text_key**, because of which an additional property named "text" gets created that is not needed.
Example:
When I create a class with **from_documents** and a **text_key** of, say, "age", the properties of the class should have only age and source as its keys, not age, source, and text.
**Solution: Pass the text_key inside the _default_schema function and create a class accordingly**
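A framework-free sketch of the proposed fix (the function name and schema shape below are illustrative, not LangChain's actual `_default_schema`):

```python
# Illustrative sketch of the proposed fix, NOT LangChain's actual code:
# a default-schema builder that honors the caller's text_key instead of
# always emitting a hard-coded "text" property.
def default_schema(index_name, text_key="text"):
    return {
        "class": index_name,
        "properties": [
            {"name": text_key, "dataType": ["text"]},
        ],
    }

schema = default_schema("people", text_key="age")
prop_names = [p["name"] for p in schema["properties"]]  # only "age", no "text"
```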
### System Info
aiohttp==3.9.1
aiosignal==1.3.1
annotated-types==0.6.0
anyio==4.2.0
attrs==23.2.0
Authlib==1.3.0
certifi==2023.11.17
cffi==1.16.0
charset-normalizer==3.3.2
cryptography==42.0.1
dataclasses-json==0.6.3
frozenlist==1.4.1
greenlet==3.0.3
idna==3.6
jsonpatch==1.33
jsonpointer==2.4
langchain==0.1.4
langchain-community==0.0.16
langchain-core==0.1.16
langsmith==0.0.83
marshmallow==3.20.2
multidict==6.0.4
mypy-extensions==1.0.0
numpy==1.26.3
packaging==23.2
pycparser==2.21
pydantic==2.5.3
pydantic_core==2.14.6
PyYAML==6.0.1
requests==2.31.0
sniffio==1.3.0
SQLAlchemy==2.0.25
tenacity==8.2.3
typing-inspect==0.9.0
typing_extensions==4.9.0
urllib3==2.1.0
validators==0.22.0
weaviate-client==3.26.2
yarl==1.9.4
System Information
------------------
> OS: Windows
> OS Version: 10.0.22621
> Python Version: 3.11.6 (tags/v3.11.6:8b6ee5b, Oct 2 2023, 14:57:12) [MSC v.1935 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.16
> langchain: 0.1.4
> langchain_community: 0.0.16
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Discrepancy in schema properties when using "from_documents" in vectorstore(Weaviate) | https://api.github.com/repos/langchain-ai/langchain/issues/16692/comments | 2 | 2024-01-28T07:27:23Z | 2024-01-29T00:53:32Z | https://github.com/langchain-ai/langchain/issues/16692 | 2,104,013,666 | 16,692 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
from langchain_community.retrievers import AmazonKnowledgeBasesRetriever
retriever = AmazonKnowledgeBasesRetriever(
knowledge_base_id="<knowledge_base_id>",
retrieval_config={"vectorSearchConfiguration": {"numberOfResults": 3}}
)
retriever_query = "TEST"
documents = retriever.get_relevant_documents(query=retriever_query)
```
### Error Message and Stack Trace (if applicable)
```
documents = retriever.get_relevant_documents(query=retriever_query)
File "/opt/python/langchain_core/retrievers.py", line 200, in get_relevant_documents
callback_manager = CallbackManager.configure(
File "/opt/python/langchain_core/callbacks/manager.py", line 1400, in configure
return _configure(
File "/opt/python/langchain_core/callbacks/manager.py", line 1947, in _configure
logger.warning(
File "/var/lang/lib/python3.12/logging/__init__.py", line 1551, in warning
self._log(WARNING, msg, args, **kwargs)
File "/var/lang/lib/python3.12/logging/__init__.py", line 1684, in _log
self.handle(record)
File "/var/lang/lib/python3.12/logging/__init__.py", line 1700, in handle
self.callHandlers(record)
File "/var/lang/lib/python3.12/logging/__init__.py", line 1762, in callHandlers
hdlr.handle(record)
File "/var/lang/lib/python3.12/logging/__init__.py", line 1028, in handle
self.emit(record)
File "/var/lang/lib/python3.12/site-packages/awslambdaric/bootstrap.py", line 303, in emit
msg = self.format(record)
File "/var/lang/lib/python3.12/logging/__init__.py", line 999, in format
return fmt.format(record)
File "/var/lang/lib/python3.12/logging/__init__.py", line 703, in format
record.message = record.getMessage()
File "/var/lang/lib/python3.12/logging/__init__.py", line 392, in getMessage
    msg = msg % self.args
END RequestId: c9f27447-0d68-4100-bacf-e2bde27a72ab
```
### Description
When I use knowledge bases directly with the Boto3 client it works. This makes me suspect the error is coming from the CallbackManager, but I don't know why that would be used.
```python
for result in results:
    documents.append(
        Document(
            page_content=result["content"]["text"],
            metadata={
                "location": result["location"],
                "score": result["score"] if "score" in result else 0,
            },
        )
    )
```
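The stack trace bottoms out in logging's lazy `msg % self.args` formatting, which fails when a log message contains a literal `%` and extra arguments are passed. A stand-alone stdlib reproduction of that class of failure (independent of LangChain and the retriever):

```python
import logging

# No args: getMessage() skips %-formatting and returns the message as-is.
plain = logging.LogRecord("demo", logging.WARNING, __file__, 1,
                          "100% complete", None, None)
ok = plain.getMessage()

# A literal '%' in the message plus extra args: msg % args raises at
# handler/formatting time, which is the point where the trace above dies.
failed = False
try:
    bad = logging.LogRecord("demo", logging.WARNING, __file__, 1,
                            "100% complete", ("extra",), None)
    bad.getMessage()
except (TypeError, ValueError):
    failed = True
```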
### System Info
From Lambda Layer Python3.12 Runtime with ARM64 architecture. Langchain version 0.1.4. Boto3 version 1.34.29. | AmazonKnowledgeBasesRetriever breaks application. When using KnowledgeBase directly with Boto3 no error. | https://api.github.com/repos/langchain-ai/langchain/issues/16686/comments | 4 | 2024-01-28T02:19:00Z | 2024-01-28T18:59:49Z | https://github.com/langchain-ai/langchain/issues/16686 | 2,103,922,350 | 16,686 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
Doesn't matter, only matters that you are using the latest stable LangChain and LangChain OpenAI packages. For example
```python
# testing.py
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain_openai import ChatOpenAI
# Assuming OPENAI_API_KEY is set, which it is on my system
llm = ChatOpenAI(model_name="gpt-3.5-turbo")
sentiment = PromptTemplate(
input_variables=["text"],
template="Analyse the sentiment of the following text. Please choose an answer from (negative/neutral/positive). Text: {text}"
)
analyze_sentiment = LLMChain(llm=llm, prompt=sentiment, verbose=True)
if __name__=="__main__":
print(analyze_sentiment.run(text="I am very frustrated right now"))
```
For good measure, I ran the following cURL command from the [OpenAI Docs](https://platform.openai.com/docs/api-reference/chat/create).
```bash
curl https://api.openai.com/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "gpt-3.5-turbo",
"messages": [
{
"role": "system",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": "Hello!"
}
]
}'
```
Which worked fine.
### Error Message and Stack Trace (if applicable)
Here is the Python output.
```bash
$ poetry run python testing.py
> Entering new LLMChain chain...
Prompt after formatting:
Analyse the sentiment of the following text. Please choose an answer from (negative/neutral/positive). Text: I am very upset
Traceback (most recent call last):
File "/Users/user/Projects/autostonks/testing.py", line 16, in <module>
print(analyze_sentiment.invoke({'text': "I am very upset"}))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user/Projects/autostonks/.venv/lib/python3.11/site-packages/langchain/chains/base.py", line 162, in invoke
raise e
File "/Users/user/Projects/autostonks/.venv/lib/python3.11/site-packages/langchain/chains/base.py", line 156, in invoke
self._call(inputs, run_manager=run_manager)
File "/Users/user/Projects/autostonks/.venv/lib/python3.11/site-packages/langchain/chains/llm.py", line 103, in _call
response = self.generate([inputs], run_manager=run_manager)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user/Projects/autostonks/.venv/lib/python3.11/site-packages/langchain/chains/llm.py", line 115, in generate
return self.llm.generate_prompt(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user/Projects/autostonks/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 543, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user/Projects/autostonks/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 407, in generate
raise e
File "/Users/user/Projects/autostonks/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 397, in generate
self._generate_with_cache(
File "/Users/user/Projects/autostonks/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 576, in _generate_with_cache
return self._generate(
^^^^^^^^^^^^^^^
File "/Users/user/Projects/autostonks/.venv/lib/python3.11/site-packages/langchain_openai/chat_models/base.py", line 451, in _generate
response = self.client.create(messages=message_dicts, **params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user/Projects/autostonks/.venv/lib/python3.11/site-packages/openai/_utils/_utils.py", line 271, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/user/Projects/autostonks/.venv/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 659, in create
return self._post(
^^^^^^^^^^^
File "/Users/user/Projects/autostonks/.venv/lib/python3.11/site-packages/openai/_base_client.py", line 1180, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user/Projects/autostonks/.venv/lib/python3.11/site-packages/openai/_base_client.py", line 869, in request
return self._request(
^^^^^^^^^^^^^^
File "/Users/user/Projects/autostonks/.venv/lib/python3.11/site-packages/openai/_base_client.py", line 945, in _request
return self._retry_request(
^^^^^^^^^^^^^^^^^^^^
File "/Users/user/Projects/autostonks/.venv/lib/python3.11/site-packages/openai/_base_client.py", line 993, in _retry_request
return self._request(
^^^^^^^^^^^^^^
File "/Users/user/Projects/autostonks/.venv/lib/python3.11/site-packages/openai/_base_client.py", line 945, in _request
return self._retry_request(
^^^^^^^^^^^^^^^^^^^^
File "/Users/user/Projects/autostonks/.venv/lib/python3.11/site-packages/openai/_base_client.py", line 993, in _retry_request
return self._request(
^^^^^^^^^^^^^^
File "/Users/user/Projects/autostonks/.venv/lib/python3.11/site-packages/openai/_base_client.py", line 960, in _request
raise self._make_status_error_from_response(err.response) from None
openai.RateLimitError: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}
```
And here is the cURL output.
```bash
curl https://api.openai.com/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "gpt-3.5-turbo",
"messages": [
{
"role": "system",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": "Hello!"
}
]
}'
{
"id": "chatcmpl-8lkSmyzfa5ZcdParBTBSWUSN3lDK2",
"object": "chat.completion",
"created": 1706390712,
"model": "gpt-3.5-turbo-0613",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "Hello! How can I assist you today?"
},
"logprobs": null,
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 19,
"completion_tokens": 9,
"total_tokens": 28
},
"system_fingerprint": null
}
```
### Description
I'm confused, as this is a freshly installed project with a fresh API key, and I've been using LangChain just fine in other projects all day; the error only happens in the latest version. It also happens with the deprecated `langchain.chat_models.ChatOpenAI` and `langchain_community.chat_models.ChatOpenAI`, but not in my other projects with older LangChain versions.
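For what it's worth, a 429 with code `insufficient_quota` typically points at billing/credits on the key's account rather than request rate, so retrying will not cure it; but for genuine rate-limit 429s a backoff loop is the standard pattern. A generic stdlib sketch (this is not how the `openai` client retries internally; `RateLimitError` here is a stand-in class):

```python
import random
import time

class RateLimitError(Exception):  # stand-in for the real client's exception
    pass

def with_backoff(fn, retries=3, base=0.01):
    """Retry fn with exponential backoff plus jitter on RateLimitError."""
    for attempt in range(retries):
        try:
            return fn()
        except RateLimitError:
            if attempt == retries - 1:
                raise
            time.sleep(base * (2 ** attempt) + random.uniform(0, base))

calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError("429")
    return "ok"

result = with_backoff(flaky)
```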
### System Info
```bash
$ python -m langchain_core.sys_info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.2.0: Wed Nov 15 21:53:18 PST 2023; root:xnu-10002.61.3~2/RELEASE_ARM64_T6000
> Python Version: 3.11.3 (main, Apr 27 2023, 12:11:13) [Clang 14.0.3 (clang-1403.0.22.14.1)]
Package Information
-------------------
> langchain_core: 0.1.16
> langchain: 0.1.4
> langchain_community: 0.0.16
> langchain_openai: 0.0.5
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
```
| OpenAI Chat Model 429 with fresh API key | https://api.github.com/repos/langchain-ai/langchain/issues/16678/comments | 4 | 2024-01-27T21:28:23Z | 2024-01-29T00:55:59Z | https://github.com/langchain-ai/langchain/issues/16678 | 2,103,819,126 | 16,678 |
[
"langchain-ai",
"langchain"
] | Has anyone had any issues with getting the docs to build? I continuously get this error when running poetry install:
```
The current project could not be installed: No file/folder found for package langchain-monorepo
If you do not want to install the current project use --no-root
```
Additionally, both `make docs_build` & `make api_docs_build` fail. The docs_build fails because of
```
[ERROR] Error: Invalid sidebar file at "sidebars.js".
These sidebar document ids do not exist:
- langgraph
```
The api_docs_build has quite a few errors. Not sure if I'm missing downloading something critical.
_Originally posted by @rshah98626 in https://github.com/langchain-ai/langchain/issues/15664#issuecomment-1913262415_ | infra: Fix local docs and api ref builds | https://api.github.com/repos/langchain-ai/langchain/issues/16677/comments | 2 | 2024-01-27T19:28:11Z | 2024-05-20T16:08:30Z | https://github.com/langchain-ai/langchain/issues/16677 | 2,103,751,123 | 16,677 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
Add example of question decomposition using MultiQueryRetriever. Related to #11260. | docs: Show question decomposition with MultiQueryRetriever | https://api.github.com/repos/langchain-ai/langchain/issues/16676/comments | 0 | 2024-01-27T19:26:03Z | 2024-05-04T16:06:48Z | https://github.com/langchain-ai/langchain/issues/16676 | 2,103,749,960 | 16,676 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
st.header("`Interweb Explorer`")
st.info("`I am an AI that can answer questions by exploring, reading, and summarizing web pages."
        "I can be configured to use different modes: public API or private (no data sharing).`")

def init_session_state():
    return SessionState()
    # .get(retriever=None, llm=None)

# Make retriever and llm
session_state = init_session_state()

# Make retriever and llm
if 'retriever' not in st.session_state:
    st.session_state['retriever'], st.session_state['llm'] = settings()
# if session_state.retriever is None:
#     session_state.retriever, session_state.llm = settings()

web_retriever = st.session_state.retriever
llm = st.session_state.llm
```
### Error Message and Stack Trace (if applicable)
`embedding_function` is expected to be an Embeddings object, support for passing in a function will soon be removed.
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
File ~/.local/lib/python3.8/site-packages/streamlit/runtime/state/session_state.py:378, in SessionState.__getitem__(self, key)
377 try:
--> 378 return self._getitem(widget_id, key)
379 except KeyError:
File ~/.local/lib/python3.8/site-packages/streamlit/runtime/state/session_state.py:423, in SessionState._getitem(self, widget_id, user_key)
422 # We'll never get here
--> 423 raise KeyError
KeyError:
During handling of the above exception, another exception occurred:
KeyError Traceback (most recent call last)
File ~/.local/lib/python3.8/site-packages/streamlit/runtime/state/session_state_proxy.py:119, in SessionStateProxy.__getattr__(self, key)
118 try:
--> 119 return self[key]
120 except KeyError:
File ~/.local/lib/python3.8/site-packages/streamlit/runtime/state/session_state_proxy.py:90, in SessionStateProxy.__getitem__(self, key)
89 require_valid_user_key(key)
---> 90 return get_session_state()[key]
File ~/.local/lib/python3.8/site-packages/streamlit/runtime/state/safe_session_state.py:113, in SafeSessionState.__getitem__(self, key)
111 raise KeyError(key)
--> 113 return self._state[key]
File ~/.local/lib/python3.8/site-packages/streamlit/runtime/state/session_state.py:380, in SessionState.__getitem__(self, key)
379 except KeyError:
--> 380 raise KeyError(_missing_key_error_message(key))
KeyError: 'st.session_state has no key "retriever". Did you forget to initialize it? More info: https://docs.streamlit.io/library/advanced-features/session-state#initialization'
During handling of the above exception, another exception occurred:
AttributeError Traceback (most recent call last)
Cell In[12], line 18
14 st.session_state['retriever'], st.session_state['llm'] = settings()
16 # if session_state.retriever is None:
17 # session_state.retriever, session_state.llm = settings()
---> 18 web_retriever = st.session_state.retriever
19 llm = st.session_state.llm
File ~/.local/lib/python3.8/site-packages/streamlit/runtime/state/session_state_proxy.py:121, in SessionStateProxy.__getattr__(self, key)
119 return self[key]
120 except KeyError:
--> 121 raise AttributeError(_missing_attr_error_message(key))
AttributeError: st.session_state has no attribute "retriever". Did you forget to initialize it? More info: https://docs.streamlit.io/library/advanced-features/session-state#initialization
### Description
AttributeError: st.session_state has no attribute "retriever". Did you forget to initialize it?
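The fix is to make sure every session-state key is written before it is read on each script run. A framework-free sketch of that guard pattern (the dict stands in for `st.session_state`, and `settings()` is a placeholder):

```python
# `state` stands in for st.session_state; settings() is a placeholder.
state = {}

def settings():
    return "my-retriever", "my-llm"

# Initialize the keys before any attribute/key access reads them.
if "retriever" not in state:
    state["retriever"], state["llm"] = settings()

web_retriever = state["retriever"]
llm = state["llm"]
```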
### System Info
streamlit==1.25.0
langchain==0.0.244
# chromadb==0.4.3
openai==0.27.8
html2text==2020.1.16
google-api-core==2.11.1
google-api-python-client==2.95.0
google-auth==2.22.0
google-auth-httplib2==0.1.0
googleapis-common-protos==1.59.1
tiktoken==0.4.0
faiss-cpu==1.7.4 | AttributeError: st.session_state has no attribute "retriever". Did you forget to initialize it? | https://api.github.com/repos/langchain-ai/langchain/issues/16675/comments | 4 | 2024-01-27T18:32:14Z | 2024-05-05T16:06:47Z | https://github.com/langchain-ai/langchain/issues/16675 | 2,103,713,719 | 16,675 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
https://python.langchain.com/docs/expression_language/interface
The page linked above contains a code example that uses a deprecated method, `chain.input_schema.schema()`:
```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

model = ChatOpenAI()
prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")
chain = prompt | model

print(chain.input_schema.schema())  # this call shows the deprecation warning
```
### Idea or request for content:
Please update the documentation. | The interface documentation is not updated | https://api.github.com/repos/langchain-ai/langchain/issues/16674/comments | 3 | 2024-01-27T17:40:59Z | 2024-05-06T16:08:29Z | https://github.com/langchain-ai/langchain/issues/16674 | 2,103,693,114 | 16,674 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
The playground UI
### Error Message and Stack Trace (if applicable)

### Description
As shown in the picture above, the playground UI does not work.
### System Info
langchain==0.1.3
langchain-community==0.0.15
langchain-core==0.1.15
langserve==0.0.39
torch==2.1.2
transformers==4.36.2
fastapi==0.109.0 | The payground UI not works | https://api.github.com/repos/langchain-ai/langchain/issues/16668/comments | 4 | 2024-01-27T08:25:04Z | 2024-01-29T02:48:28Z | https://github.com/langchain-ai/langchain/issues/16668 | 2,103,448,996 | 16,668 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain_community.llms.llamacpp import LlamaCpp
llm = LlamaCpp(
model_path="llama-2-7b.Q5_K_M.gguf",
temperature=0,
verbose=False, # Verbose is required to pass to the callback manager
grammar_path="json.gbnf"
)
llm.invoke("Hi")
```
### Error Message and Stack Trace (if applicable)
ggml_init_cublas: GGML_CUDA_FORCE_MMQ: no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 3070 Ti Laptop GPU, compute capability 8.6, VMM: yes
llama_model_loader: loaded meta data with 19 key-value pairs and 291 tensors from utils/models/llama-2-7b.Q5_K_M.gguf (version GGUF V2)
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = LLaMA v2
llama_model_loader: - kv 2: llama.context_length u32 = 4096
....
### Description
When setting `verbose` to `False` I expect the debug messages not to be printed, but they are.
[Related issue](https://github.com/ggerganov/llama.cpp/issues/999)
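These banner lines are printed by llama.cpp's native code, so the Python-level `verbose` flag cannot always intercept them. Until the flag is honored upstream, a common workaround is to redirect the process-level stderr file descriptor while loading the model. A generic sketch (not a LangChain or llama-cpp-python API):

```python
import contextlib
import os

@contextlib.contextmanager
def suppress_native_stderr():
    """Temporarily point file descriptor 2 (C-level stderr) at the null device."""
    saved = os.dup(2)
    with open(os.devnull, "wb") as devnull:
        os.dup2(devnull.fileno(), 2)
        try:
            yield
        finally:
            os.dup2(saved, 2)
            os.close(saved)

with suppress_native_stderr():
    os.write(2, b"llama.cpp-style native log line\n")  # silently dropped
```

Wrap the `LlamaCpp(...)` construction in this context manager to hide the load-time banner.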
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.11.0 (main, Oct 24 2022, 18:26:48) [MSC v.1933 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.16
> langchain: 0.1.4
> langchain_community: 0.0.16
| LlamaCpp: Verbose flag does not work as intended | https://api.github.com/repos/langchain-ai/langchain/issues/16667/comments | 1 | 2024-01-27T07:32:53Z | 2024-07-04T16:07:43Z | https://github.com/langchain-ai/langchain/issues/16667 | 2,103,376,004 | 16,667 |
[
"langchain-ai",
"langchain"
] | Welcome to the LangChain repo!
## What's in this repo
Please only open Issues, PRs, and Discussions against this repo for the packages it contains:
- `langchain` python package
- `langchain-core` python package
- `langchain-community` python package
- certain partner python packages, e.g. `langchain-openai`, `langchain-anthropic`, etc.
- LangChain templates
- LangChain Python docs
This repo does NOT contain:
- LangChain JS: https://github.com/langchain-ai/langchainjs
- LangServe: https://github.com/langchain-ai/langserve
- LangSmith SDK: https://github.com/langchain-ai/langsmith-sdk
- LangGraph: https://github.com/langchain-ai/langgraph
- LangGraph JS: https://github.com/langchain-ai/langgraphjs
Please open issues related to those libraries in their respective repos.
## Contributing
Here's a quick overview of how to contribute to LangChain:
### Have a question or a feature request?
If you have a usage question or a feature request, please open a [Discussion](https://github.com/langchain-ai/langchain/discussions) for it. Questions can go in the Q&A section and feature requests can go in the Ideas section.
### Found a bug?
Please open an [Issue](https://github.com/langchain-ai/langchain/issues) using the Bug Report template. Please fully specify the steps to reproduce the bug — it'll greatly speed up our ability to fix it.
### Want to contribute?
#### For new contributors
Issues with the [good first issue](https://github.com/langchain-ai/langchain/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22) tag are a great place to start.
#### For all contributors
There are certain things that are always helpful.
* Reviewing docs for mistakes, out-of-date functionality, pages that don't follow the latest conventions (especially applies to [Integrations](https://python.langchain.com/docs/integrations/))
* Improving test coverage
* Improving docstrings to make sure they fully specify Args, Returns, Example, and Raises (following [Google Python Style Guide](https://google.github.io/styleguide/pyguide.html#381-docstrings))
* Reporting bugs, providing feedback, suggesting features
* Fixing bugs and adding features!
#### For experienced contributors
* Help respond to Discussion items and Issues. Issues with the [help wanted](https://github.com/langchain-ai/langchain/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22+) tag are a great place to start.
* Help review PRs
For more on how to contribute, check out the full [Developer's Guide](https://python.langchain.com/docs/contributing). | Start here: Welcome to LangChain! | https://api.github.com/repos/langchain-ai/langchain/issues/16651/comments | 2 | 2024-01-26T22:36:52Z | 2024-07-31T21:47:18Z | https://github.com/langchain-ai/langchain/issues/16651 | 2,103,008,354 | 16,651 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
We need to fix this documentation: https://python.langchain.com/docs/expression_language/streaming#propagating-callbacks
To explain that:
1) Callbacks are only propagated automatically starting with python 3.11 (depends on asyncio.create_task context arg)
2) Show how to propagate callbacks manually for <3.11 | Doc: Fix documentation for @chain decorator in streaming | https://api.github.com/repos/langchain-ai/langchain/issues/16643/comments | 1 | 2024-01-26T20:16:05Z | 2024-05-03T16:07:05Z | https://github.com/langchain-ai/langchain/issues/16643 | 2,102,838,907 | 16,643 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
There are a lot of questions on how to use RunnableWithMessageHistory.
We need to improve this page:
https://python.langchain.com/docs/expression_language/how_to/message_history
This page should be updated to include the following information (in approximately this order):
- [ ] Example that uses in memory or on file system chat history to make it easier to test things out and debug
- [ ] Example that shows how to use the config to support user_id in addition to session_id (just passing through config)
- [ ] Clarifications that in production one should use persistent storage (e.g., Redis)
- [ ] Show how to use Redis persistent storage
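For the second bullet, a framework-free sketch of a per-`(user_id, session_id)` history store, with `InMemoryHistory` standing in for a real chat message history implementation (in LangChain the extra `user_id` field is wired up through configurable field specs):

```python
# In-memory store keyed by (user_id, session_id); InMemoryHistory is a
# stand-in for a real chat message history implementation.
store = {}

class InMemoryHistory:
    def __init__(self):
        self.messages = []

    def add_message(self, message):
        self.messages.append(message)

def get_session_history(user_id, session_id):
    key = (user_id, session_id)
    if key not in store:
        store[key] = InMemoryHistory()
    return store[key]

get_session_history("alice", "s1").add_message("hi")
```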
There is an API reference here:
https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.history.RunnableWithMessageHistory.html
Example with file system here:
https://github.com/langchain-ai/langserve/blob/main/examples/chat_with_persistence_and_user/server.py
Here are some questions, some of which have answers:
* https://github.com/langchain-ai/langchain/discussions/16582
* https://github.com/langchain-ai/langchain/discussions/16636
| Doc: Improve Documentation for RunnableWithMessageHistory | https://api.github.com/repos/langchain-ai/langchain/issues/16642/comments | 0 | 2024-01-26T19:59:21Z | 2024-05-03T16:07:00Z | https://github.com/langchain-ai/langchain/issues/16642 | 2,102,815,846 | 16,642 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
from langchain.chains.sql_database.base import SQLDatabaseChain
from langchain_experimental.sql import SQLDatabaseChain
### Error Message and Stack Trace (if applicable)
cannot import name 'ensure_config' from 'langchain_core.runnables' (C:\Users\ashut\anaconda3\lib\site-packages\langchain_core\runnables\__init__.py)
### Description
I am trying to import SQLDatabaseChain from both langchain.chains.sql_database.base and langchain_experimental.sql, but I am getting the same error even after running !pip install -U langchain langchain_experimental
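`ensure_config` only exists in newer releases of `langchain-core`, so this ImportError usually means an older `langchain-core` (here, the copy under anaconda's site-packages) is paired with a newer `langchain`/`langchain_experimental`; upgrading everything together, e.g. `pip install -U langchain langchain-core langchain-community langchain-experimental`, normally resolves it. A tiny stdlib helper for sanity-checking dotted version strings (illustrative only, no pre-release handling):

```python
def at_least(installed, minimum):
    """True if a dotted version string meets a minimum (numeric parts only)."""
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(installed) >= parse(minimum)

# Hypothetical check: does the installed core meet some required floor?
needs_upgrade = not at_least("0.0.12", "0.1.0")
```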
### System Info
ImportError: cannot import name 'ensure_config' from 'langchain_core.runnables' (C:\Users\ashut\anaconda3\lib\site-packages\langchain_core\runnables\__init__.py)
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
qa_chain = (
    RunnablePassthrough.assign(context=(lambda x: format_docs(x["context"])))
    | rag_prompt
    | llm
    | StrOutputParser()
)

rag_chain_with_source = RunnableParallel(
    {"context": ensemble_retriever, "question": RunnablePassthrough()}
).assign(answer=qa_chain)

rag_chain_with_history = RunnableWithMessageHistory(
    rag_chain_with_source,
    lambda session_id: memory,
    input_messages_key="question",
    history_messages_key="chat_history",
)

config = {"configurable": {"session_id": "SESSION_01"}}
try:
    response = rag_chain_with_history.invoke({"question": query}, config=config)
    return response
except Exception as e:
    return e
```
### Error Message and Stack Trace (if applicable)
'ConversationBufferMemory' object has no attribute 'messages'
### Description
I am trying to add chat history to an LCEL QA-with-sources chain, but I get an error about the memory object.
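For context, `RunnableWithMessageHistory` expects the `get_session_history` callable to return a chat-message-history object (something exposing a `messages` list plus add-message methods), while `ConversationBufferMemory` stores its messages elsewhere, which is why the attribute error appears. A minimal pure-Python sketch of the expected interface (an illustration, not langchain's actual `ChatMessageHistory` class):

```python
class InMemoryChatHistory:
    """Bare-bones stand-in exposing the attributes the wrapper reads."""
    def __init__(self):
        self.messages = []  # the attribute ConversationBufferMemory lacks

    def add_user_message(self, text):
        self.messages.append(("human", text))

    def add_ai_message(self, text):
        self.messages.append(("ai", text))

    def clear(self):
        self.messages = []

# One history object per session id, as the lambda in the report intends.
_store = {}

def get_session_history(session_id):
    if session_id not in _store:
        _store[session_id] = InMemoryChatHistory()
    return _store[session_id]

history = get_session_history("SESSION_01")
history.add_user_message("what is in the docs?")
```

Swapping `lambda session_id: memory` for a callable like this (or for langchain's own chat-message-history classes) should avoid the missing-`messages` error.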
### System Info
python version - 3.11 | getting error while adding memory to LCEL chain | https://api.github.com/repos/langchain-ai/langchain/issues/16638/comments | 1 | 2024-01-26T17:48:32Z | 2024-01-26T19:45:52Z | https://github.com/langchain-ai/langchain/issues/16638 | 2,102,627,287 | 16,638 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
### First reproduction script:
```python
import os
# os.environ["LANGCHAIN_WANDB_TRACING"] = "true"
os.environ["LANGCHAIN_COMET_TRACING"] = "true"
from langchain_openai import OpenAI
# import langchain_community.callbacks
llm = OpenAI(temperature=0.9)
llm_result = llm.generate(["Tell me a joke"])
```
### Second reproduction script:
```python
import os
# os.environ["LANGCHAIN_WANDB_TRACING"] = "true"
os.environ["LANGCHAIN_COMET_TRACING"] = "true"
from langchain.chains import LLMChain
from langchain_openai import OpenAI
from langchain.prompts import PromptTemplate
llm = OpenAI(temperature=0.9)
template = """You are a playwright. Given the title of play, it is your job to write a synopsis for that title.
Title: {title}
Playwright: This is a synopsis for the above play:"""
prompt_template = PromptTemplate(input_variables=["title"], template=template)
synopsis_chain = LLMChain(llm=llm, prompt=prompt_template)
test_prompts = [{"title": "Documentary about Bigfoot in Paris"}]
print(synopsis_chain.apply(test_prompts))
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I've encountered cases where my langchain runs were not traced with the Comet tracer even when it was enabled. I was able to reproduce this with the first minimal script above.
To reproduce the issue, run it in an environment with neither `comet_ml` nor `wandb` installed. If you run it as is, the script runs just fine and won't try to log the LLM generation to Comet or Wandb. If you uncomment the import of `langchain_community.callbacks`, it now fails with:
```
Traceback (most recent call last):
File "/home/lothiraldan/.virtualenvs/tempenv-49bf135724739/lib/python3.10/site-packages/langchain_community/callbacks/tracers/comet.py", line 27, in import_comet_llm_api
from comet_llm import (
ModuleNotFoundError: No module named 'comet_llm'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/tmp/bug_tracers_langchain.py", line 11, in <module>
llm_result = llm.generate(["Tell me a joke"])
File "/home/lothiraldan/.virtualenvs/tempenv-49bf135724739/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 655, in generate
CallbackManager.configure(
File "/home/lothiraldan/.virtualenvs/tempenv-49bf135724739/lib/python3.10/site-packages/langchain_core/callbacks/manager.py", line 1400, in configure
return _configure(
File "/home/lothiraldan/.virtualenvs/tempenv-49bf135724739/lib/python3.10/site-packages/langchain_core/callbacks/manager.py", line 1960, in _configure
var_handler = var.get() or cast(Type[BaseCallbackHandler], handler_class)()
File "/home/lothiraldan/.virtualenvs/tempenv-49bf135724739/lib/python3.10/site-packages/langchain_community/callbacks/tracers/comet.py", line 62, in __init__
self._initialize_comet_modules()
File "/home/lothiraldan/.virtualenvs/tempenv-49bf135724739/lib/python3.10/site-packages/langchain_community/callbacks/tracers/comet.py", line 65, in _initialize_comet_modules
comet_llm_api = import_comet_llm_api()
File "/home/lothiraldan/.virtualenvs/tempenv-49bf135724739/lib/python3.10/site-packages/langchain_community/callbacks/tracers/comet.py", line 38, in import_comet_llm_api
raise ImportError(
ImportError: To use the CometTracer you need to have the `comet_llm>=2.0.0` python package installed. Please install it with `pip install -U comet_llm`
```
It also happens with Wandb tracer. If you uncomment the line containing `LANGCHAIN_WANDB_TRACING` and run it, you should see the following error:
```
Traceback (most recent call last):
File "/home/lothiraldan/.virtualenvs/tempenv-49bf135724739/lib/python3.10/site-packages/langchain_community/callbacks/tracers/wandb.py", line 449, in __init__
import wandb
ModuleNotFoundError: No module named 'wandb'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/tmp/bug_tracers_langchain.py", line 11, in <module>
llm_result = llm.generate(["Tell me a joke"])
File "/home/lothiraldan/.virtualenvs/tempenv-49bf135724739/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 655, in generate
CallbackManager.configure(
File "/home/lothiraldan/.virtualenvs/tempenv-49bf135724739/lib/python3.10/site-packages/langchain_core/callbacks/manager.py", line 1400, in configure
return _configure(
File "/home/lothiraldan/.virtualenvs/tempenv-49bf135724739/lib/python3.10/site-packages/langchain_core/callbacks/manager.py", line 1960, in _configure
var_handler = var.get() or cast(Type[BaseCallbackHandler], handler_class)()
File "/home/lothiraldan/.virtualenvs/tempenv-49bf135724739/lib/python3.10/site-packages/langchain_community/callbacks/tracers/wandb.py", line 452, in __init__
raise ImportError(
ImportError: Could not import wandb python package.Please install it with `pip install -U wandb`.
```
I've tried a more advanced example using a chain (second reproduction script) and it always fails.
I'm not sure if this is a bug or not. It was definitely surprising, as I expected my langchain runs to be traced by the first script. If it's not a bug, I think it would be useful to clarify when users can expect community tracers to be injected and when they won't be.
Let me know what you think about this issue and how can I help.
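The behavior described above boils down to an env-flag-gated lazy import: the tracer is only constructed when the variable is set, and the optional dependency is imported at construction time. One plausible reading of the tracebacks is that importing `langchain_community.callbacks` registers the tracer class, so the flag check then reaches the lazy import and fails when the package is missing. A stripped-down sketch of that pattern (hypothetical code, not the actual `_configure` implementation):

```python
import os

def maybe_build_comet_tracer(env=None):
    """Return a tracer only when the env flag is set; import the dep lazily."""
    env = os.environ if env is None else env
    if env.get("LANGCHAIN_COMET_TRACING", "").lower() != "true":
        return None  # tracing disabled: the optional package is never touched
    try:
        import comet_llm  # noqa: F401  (optional dependency)
    except ImportError as exc:
        raise ImportError(
            "To use the CometTracer you need `comet_llm` installed."
        ) from exc
    return "comet-tracer"  # placeholder for the real handler object

# With the flag unset, no import is attempted and nothing fails:
assert maybe_build_comet_tracer({}) is None
```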
### System Info
```
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Tue Jan 16 01:35:34 UTC 2024
> Python Version: 3.10.12 (main, Jul 27 2023, 14:43:19) [GCC 12.2.1 20221121 (Red Hat 12.2.1-4)]
Package Information
-------------------
> langchain_core: 0.1.16
> langchain: 0.1.4
> langchain_community: 0.0.16
> langchain_openai: 0.0.5
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
``` | Community tracers not being injected properly in some cases | https://api.github.com/repos/langchain-ai/langchain/issues/16635/comments | 5 | 2024-01-26T17:24:03Z | 2024-04-16T15:25:01Z | https://github.com/langchain-ai/langchain/issues/16635 | 2,102,581,882 | 16,635 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
Add in code documentation for RunnableEach and Runnable Each Base
https://github.com/langchain-ai/langchain/blob/main/libs/core/langchain_core/runnables/base.py#L2810-L2810
https://github.com/langchain-ai/langchain/blob/main/libs/core/langchain_core/runnables/base.py#L3685-L3685
Here's an example of in code documentation PR for Runnable Parallel:
https://github.com/langchain-ai/langchain/pull/16600/files
And example documentation for Runnable Binding:
https://github.com/langchain-ai/langchain/blob/main/libs/core/langchain_core/runnables/base.py#L4014-L4014
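For whoever picks this up, the docstring will want a tiny motivating example. Conceptually, `RunnableEach` delegates to its bound runnable once per element of a list input; a pure-Python sketch of that behavior (illustration only, not langchain's implementation):

```python
class EachSketch:
    """Apply a bound callable to every element of a list input."""
    def __init__(self, bound):
        self.bound = bound

    def invoke(self, inputs):
        # RunnableEach-style semantics: one call per element, results collected.
        return [self.bound(x) for x in inputs]

shout = EachSketch(str.upper)
result = shout.invoke(["first topic", "second topic"])
```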
| Doc: Add in code documentation for RunnableEach | https://api.github.com/repos/langchain-ai/langchain/issues/16632/comments | 1 | 2024-01-26T15:07:27Z | 2024-05-04T16:06:33Z | https://github.com/langchain-ai/langchain/issues/16632 | 2,102,374,739 | 16,632 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
Add in code documentation for Runnable Generator
https://github.com/langchain-ai/langchain/blob/main/libs/core/langchain_core/runnables/base.py#L2810-L2810
Here's an example of in code documentation PR for Runnable Parallel:
https://github.com/langchain-ai/langchain/pull/16600/files | Doc: Add in code documentation for RunnableGenerator | https://api.github.com/repos/langchain-ai/langchain/issues/16631/comments | 0 | 2024-01-26T15:05:53Z | 2024-05-03T16:06:50Z | https://github.com/langchain-ai/langchain/issues/16631 | 2,102,372,148 | 16,631 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain_core.runnables import chain
from pydantic import BaseModel


class MyConversation(BaseModel):
    messages: list


@chain
def conversation_to_history(conversation: MyConversation) -> str:
    return "hi :)"


print(conversation_to_history.input_schema.schema())
```
### Error Message and Stack Trace (if applicable)
```
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[6], line 25
     20     history.add_user_message(message.body)
     22     return history
---> 25 conversation_to_history.input_schema.schema()

File ~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/main.py:664, in BaseModel.schema(cls, by_alias, ref_template)
    662 if cached is not None:
    663     return cached
--> 664 s = model_schema(cls, by_alias=by_alias, ref_template=ref_template)
    665 cls.__schema_cache__[(by_alias, ref_template)] = s
    666 return s

File ~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:188, in model_schema(model, by_alias, ref_prefix, ref_template)
    186 model_name_map = get_model_name_map(flat_models)
    187 model_name = model_name_map[model]
--> 188 m_schema, m_definitions, nested_models = model_process_schema(
    189     model, by_alias=by_alias, model_name_map=model_name_map, ref_prefix=ref_prefix, ref_template=ref_template
    190 )
    191 if model_name in nested_models:
    192     # model_name is in Nested models, it has circular references
    193     m_definitions[model_name] = m_schema

File ~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:582, in model_process_schema(model, by_alias, model_name_map, ref_prefix, ref_template, known_models, field)
    580     s['description'] = doc
    581 known_models.add(model)
--> 582 m_schema, m_definitions, nested_models = model_type_schema(
    583     model,
    584     by_alias=by_alias,
    585     model_name_map=model_name_map,
    586     ref_prefix=ref_prefix,
    587     ref_template=ref_template,
    588     known_models=known_models,
    589 )
    590 s.update(m_schema)
    591 schema_extra = model.__config__.schema_extra

File ~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:623, in model_type_schema(model, by_alias, model_name_map, ref_template, ref_prefix, known_models)
    621 for k, f in model.__fields__.items():
    622     try:
--> 623         f_schema, f_definitions, f_nested_models = field_schema(
    624             f,
    625             by_alias=by_alias,
    626             model_name_map=model_name_map,
    627             ref_prefix=ref_prefix,
    628             ref_template=ref_template,
    629             known_models=known_models,
    630         )
    631     except SkipField as skip:
    632         warnings.warn(skip.message, UserWarning)

File ~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:256, in field_schema(field, by_alias, model_name_map, ref_prefix, ref_template, known_models)
    253     s.update(validation_schema)
    254     schema_overrides = True
--> 256 f_schema, f_definitions, f_nested_models = field_type_schema(
    257     field,
    258     by_alias=by_alias,
    259     model_name_map=model_name_map,
    260     schema_overrides=schema_overrides,
    261     ref_prefix=ref_prefix,
    262     ref_template=ref_template,
    263     known_models=known_models or set(),
    264 )
    266 # $ref will only be returned when there are no schema_overrides
    267 if '$ref' in f_schema:

File ~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:528, in field_type_schema(field, by_alias, model_name_map, ref_template, schema_overrides, ref_prefix, known_models)
    526 else:
    527     assert field.shape in {SHAPE_SINGLETON, SHAPE_GENERIC}, field.shape
--> 528     f_schema, f_definitions, f_nested_models = field_singleton_schema(
    529         field,
    530         by_alias=by_alias,
    531         model_name_map=model_name_map,
    532         schema_overrides=schema_overrides,
    533         ref_prefix=ref_prefix,
    534         ref_template=ref_template,
    535         known_models=known_models,
    536     )
    537     definitions.update(f_definitions)
    538     nested_models.update(f_nested_models)

File ~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:952, in field_singleton_schema(field, by_alias, model_name_map, ref_template, schema_overrides, ref_prefix, known_models)
    949 if args is not None and not args and Generic in field_type.__bases__:
    950     return f_schema, definitions, nested_models
--> 952 raise ValueError(f'Value not declarable with JSON Schema, field: {field}')

ValueError: Value not declarable with JSON Schema, field: name='__root__' type=Optional[MyConversation] required=False default=None
```
### Description
I expected to be able to pass an object annotated with a Pydantic model (Pydantic is used elsewhere in langchain for annotating input/output types), but when I try to get the input schema for the resulting chain, I get the above error.
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Fri, 05 Jan 2024 15:35:19 +0000
> Python Version: 3.11.6 (main, Nov 14 2023, 09:36:21) [GCC 13.2.1 20230801]
Package Information
-------------------
> langchain_core: 0.1.16
> langchain: 0.1.4
> langchain_community: 0.0.16
> langchain_openai: 0.0.2.post1
> langgraph: 0.0.12
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langserve | When using a Pydantic model as input to a @chain-decorated function, `input_schema.schema()` gives an error | https://api.github.com/repos/langchain-ai/langchain/issues/16623/comments | 3 | 2024-01-26T10:22:09Z | 2024-01-26T17:38:47Z | https://github.com/langchain-ai/langchain/issues/16623 | 2,101,959,283 | 16,623 |