issue_owner_repo listlengths 2 2 | issue_body stringlengths 0 261k ⌀ | issue_title stringlengths 1 925 | issue_comments_url stringlengths 56 81 | issue_comments_count int64 0 2.5k | issue_created_at stringlengths 20 20 | issue_updated_at stringlengths 20 20 | issue_html_url stringlengths 37 62 | issue_github_id int64 387k 2.46B | issue_number int64 1 127k |
|---|---|---|---|---|---|---|---|---|---|
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
The following code produces the pydantic exception (field "config" not yet prepared so type is still a ForwardRef):
```python
from langchain.chat_models import BedrockChat
from botocore.config import Config
model = BedrockChat(
region_name="us-east-1",
model_id="anthropic.claude-v2:1",
model_kwargs=dict(temperature=0, max_tokens_to_sample=10000),
verbose=False,
streaming=False,
config=Config(
retries=dict(max_attempts=10, mode='adaptive', total_max_attempts=100)
)
)
```
### Error Message and Stack Trace (if applicable)
`pydantic.v1.errors.ConfigError: field "config" not yet prepared so type is still a ForwardRef, you might need to call BedrockChat.update_forward_refs().`
The stack trace:
```
Traceback (most recent call last):
File "/Users/atigarev/PycharmProjects/isolated_lc/test.py", line 5, in <module>
model = BedrockChat(
File "/Users/atigarev/PycharmProjects/isolated_lc/.venv/lib/python3.10/site-packages/langchain_core/load/serializable.py", line 107, in __init__
super().__init__(**kwargs)
File "/Users/atigarev/PycharmProjects/isolated_lc/.venv/lib/python3.10/site-packages/pydantic/v1/main.py", line 339, in __init__
values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
File "/Users/atigarev/PycharmProjects/isolated_lc/.venv/lib/python3.10/site-packages/pydantic/v1/main.py", line 1074, in validate_model
v_, errors_ = field.validate(value, values, loc=field.alias, cls=cls_)
File "/Users/atigarev/PycharmProjects/isolated_lc/.venv/lib/python3.10/site-packages/pydantic/v1/fields.py", line 857, in validate
raise ConfigError(
pydantic.v1.errors.ConfigError: field "config" not yet prepared so type is still a ForwardRef, you might need to call BedrockChat.update_forward_refs().
```
### Description
Passing `botocore.config.Config` to `BedrockChat` produces the pydantic exception `field "config" not yet prepared so type is still a ForwardRef` (stack trace provided in its own field).
Adding the following code at the bottom of `langchain_community/chat_models/bedrock.py` **fixes it**:
```python
from botocore.config import Config
BedrockChat.update_forward_refs()
```
Just calling `BedrockChat.update_forward_refs()` in my own code causes an error:
```
Traceback (most recent call last):
File "/Users/atigarev/PycharmProjects/isolated_lc/test.py", line 4, in <module>
BedrockChat.update_forward_refs()
File "/Users/atigarev/PycharmProjects/isolated_lc/.venv/lib/python3.10/site-packages/pydantic/v1/main.py", line 814, in update_forward_refs
update_model_forward_refs(cls, cls.__fields__.values(), cls.__config__.json_encoders, localns)
File "/Users/atigarev/PycharmProjects/isolated_lc/.venv/lib/python3.10/site-packages/pydantic/v1/typing.py", line 554, in update_model_forward_refs
update_field_forward_refs(f, globalns=globalns, localns=localns)
File "/Users/atigarev/PycharmProjects/isolated_lc/.venv/lib/python3.10/site-packages/pydantic/v1/typing.py", line 520, in update_field_forward_refs
field.type_ = evaluate_forwardref(field.type_, globalns, localns or None)
File "/Users/atigarev/PycharmProjects/isolated_lc/.venv/lib/python3.10/site-packages/pydantic/v1/typing.py", line 66, in evaluate_forwardref
return cast(Any, type_)._evaluate(globalns, localns, set())
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/typing.py", line 694, in _evaluate
eval(self.__forward_code__, globalns, localns),
File "<string>", line 1, in <module>
NameError: name 'Config' is not defined
```
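The `NameError` above is the generic forward-reference failure: resolution needs the name to be in scope where the reference is evaluated. The mechanism can be sketched with stdlib typing alone (using `typing.get_type_hints` as a stand-in for pydantic's resolution); supplying the name through a local namespace is the same idea as pydantic v1's `BedrockChat.update_forward_refs(Config=Config)`, which may work as a user-side workaround:

```python
from typing import get_type_hints

class Model:
    config: "Config"  # forward reference; 'Config' is not defined in this module

try:
    get_type_hints(Model)
except NameError as e:
    print(e)  # name 'Config' is not defined

def resolve():
    class Config:  # local stand-in for botocore.config.Config
        pass
    # Passing the name via localns mirrors pydantic v1's
    # Model.update_forward_refs(Config=Config).
    return get_type_hints(Model, localns={"Config": Config})["config"], Config

resolved, local_config = resolve()
print(resolved is local_config)  # True
```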
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 20.6.0: Thu Mar 9 20:39:26 PST 2023; root:xnu-7195.141.49.700.6~1/RELEASE_X86_64
> Python Version: 3.10.5 (v3.10.5:f377153967, Jun 6 2022, 12:36:10) [Clang 13.0.0 (clang-1300.0.29.30)]
Package Information
-------------------
> langchain_core: 0.1.22
> langchain: 0.1.6
> langchain_community: 0.0.19
> langsmith: 0.0.87
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
| Can't pass botocore.config.Config to BedrockChat (pydantic error: field "config" not yet prepared so type is still a ForwardRef) | https://api.github.com/repos/langchain-ai/langchain/issues/17420/comments | 1 | 2024-02-12T15:56:47Z | 2024-05-20T16:09:25Z | https://github.com/langchain-ai/langchain/issues/17420 | 2,130,433,068 | 17,420 |
[
"langchain-ai",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
-
### Idea or request for content:
Below is the code:
```
# Categorize documents
documents_dict = {
'amd': DirectoryLoader('/content/amd', glob="*.pdf", loader_cls=PyPDFLoader).load(),
'engie': DirectoryLoader('/content/engie', glob="*.pdf", loader_cls=PyPDFLoader).load(),
# Add more categories as needed
}
# Create a vector database and a retriever for each category
vector_stores = {}
retrievers = {}
class MyTextSplitter(RecursiveCharacterTextSplitter):
def split_documents(self, documents):
chunks = super().split_documents(documents)
for document, chunk in zip(documents, chunks):
chunk.metadata['source'] = document.metadata['source']
return chunks
text_splitter = MyTextSplitter(chunk_size=1000,
chunk_overlap=100)
for category, docs in documents_dict.items():
texts = text_splitter.split_documents(docs)
vector_store = FAISS.from_documents(texts, embeddings)
retriever = vector_store.as_retriever(search_kwargs={"k": 5})
vector_stores[category] = vector_store
retrievers[category] = retriever
# Answer a question related to 'amd'
category = 'amd'
qa_chain = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0.2),
chain_type="stuff",
retriever=retrievers[category],
return_source_documents=True)
# Format the prompt using the template
context = ""
# question = "what's the final provision of dhl?"
question = "what for it strives?"
formatted_prompt = prompt_template.format(context=context, question=question)
# Pass the formatted prompt to the RetrievalQA function
llm_response = qa_chain(formatted_prompt)
process_llm_response(llm_response)
faiss_instance = vector_stores['amd'] # Assuming this is your FAISS instance
```
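For reference, a langchain FAISS instance carries a few components whose names are internals of langchain-community (assumed from its source and subject to change): `index` (the raw faiss index), `index_to_docstore_id` (vector position to document id), and `docstore` with a private `_dict`. A self-contained sketch with stand-in objects showing how the metadata can be enumerated:

```python
# Stand-ins mirroring the attributes a langchain FAISS instance carries:
# .index_to_docstore_id (vector position -> doc id) and .docstore, which
# keeps documents in a private _dict ({doc id -> Document}).
class Document:
    def __init__(self, page_content, metadata):
        self.page_content = page_content
        self.metadata = metadata

class InMemoryDocstore:
    def __init__(self, docs):
        self._dict = docs

class FakeFAISS:
    def __init__(self):
        self.index_to_docstore_id = {0: "uuid-0", 1: "uuid-1"}
        self.docstore = InMemoryDocstore({
            "uuid-0": Document("chunk one", {"source": "/content/amd/q1.pdf"}),
            "uuid-1": Document("chunk two", {"source": "/content/amd/q2.pdf"}),
        })

faiss_instance = FakeFAISS()
metadatas = []
for position, doc_id in faiss_instance.index_to_docstore_id.items():
    doc = faiss_instance.docstore._dict[doc_id]
    metadatas.append(doc.metadata)
print(metadatas)
```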
Now, I want to extract the metadata and see what components are present in `faiss_instance`. Can you help me with the code? | not able to retrieve metadata from FAISS instance | https://api.github.com/repos/langchain-ai/langchain/issues/17419/comments | 5 | 2024-02-12T15:34:38Z | 2024-02-14T03:34:58Z | https://github.com/langchain-ai/langchain/issues/17419 | 2,130,386,858 | 17,419 |
[
"langchain-ai",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
-
### Idea or request for content:
Below is the code:
```
faiss_instance = vector_stores['amd'] # Assuming this is your FAISS instance, where amd is a category
for doc_id, document in faiss_instance.docstore.items():
print(f"ID: {doc_id}, Metadata: {document.metadata}")
```
And the error is below:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
[<ipython-input-36-8d0aa86de55f>](https://localhost:8080/#) in <cell line: 2>()
1 faiss_instance = vector_stores['amd'] # Assuming this is your FAISS instance
----> 2 for doc_id, document in faiss_instance.docstore.items():
3 print(f"ID: {doc_id}, Metadata: {document.metadata}")
AttributeError: 'InMemoryDocstore' object has no attribute 'items'
```
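The AttributeError is accurate: `InMemoryDocstore` exposes no public `items()`. In current langchain-community releases the documents appear to live in a private `_dict` attribute, so iterating that avoids the error (with the caveat that `_dict` is an internal and may change between releases). A self-contained sketch with a stand-in docstore:

```python
class Document:
    def __init__(self, page_content, metadata):
        self.page_content = page_content
        self.metadata = metadata

# Stand-in for langchain's InMemoryDocstore: documents live in _dict,
# and no public .items() is exposed -- hence the AttributeError above.
class InMemoryDocstore:
    def __init__(self, docs):
        self._dict = docs

docstore = InMemoryDocstore({"id-1": Document("text", {"source": "amd1.pdf"})})

lines = []
for doc_id, document in docstore._dict.items():  # note ._dict, not .items()
    lines.append(f"ID: {doc_id}, Metadata: {document.metadata}")
print(lines[0])
```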
How can I fix the above error and return the metadata present in FAISS for that particular category? | unable to retrieve the metadata of FAISS vector db | https://api.github.com/repos/langchain-ai/langchain/issues/17417/comments | 1 | 2024-02-12T15:21:06Z | 2024-02-14T03:34:57Z | https://github.com/langchain-ai/langchain/issues/17417 | 2,130,359,673 | 17,417 |
[
"langchain-ai",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
_No response_
### Idea or request for content:
Below is the code:
```
# Categorize documents
documents_dict = {
'cricket': DirectoryLoader('/content/cricket', glob="*.pdf", loader_cls=PyPDFLoader).load(),
'fifa': DirectoryLoader('/content/fifa', glob="*.pdf", loader_cls=PyPDFLoader).load(),
# Add more categories as needed
}
# Create a vector database and a retriever for each category
vector_stores = {}
retrievers = {}
class MyTextSplitter(RecursiveCharacterTextSplitter):
def split_documents(self, documents):
chunks = super().split_documents(documents)
for document, chunk in zip(documents, chunks):
chunk.metadata['source'] = document.metadata['source']
return chunks
text_splitter = MyTextSplitter(chunk_size=1000,
chunk_overlap=100)
for category, docs in documents_dict.items():
texts = text_splitter.split_documents(docs)
vector_store = FAISS.from_documents(texts, embeddings)
retriever = vector_store.as_retriever(search_kwargs={"k": 5})
vector_stores[category] = vector_store
retrievers[category] = retriever
# Answer a question related to 'Cricket'
category = 'cricket'
qa_chain = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0.2),
chain_type="stuff",
retriever=retrievers[category],
return_source_documents=True)
# Format the prompt using the template
context = ""
# question = "what's the final provision of dhl?"
question = "what for it strives?"
formatted_prompt = prompt_template.format(context=context, question=question)
# Pass the formatted prompt to the RetrievalQA function
llm_response = qa_chain(formatted_prompt)
process_llm_response(llm_response)
```
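A note on persistence: keeping every store only in memory means re-embedding the documents on each run, so persisting each store and reloading it later is the usual approach. With langchain's FAISS wrapper the analogous calls are assumed (from its documentation) to be `vector_store.save_local(folder)` and `FAISS.load_local(folder, embeddings)`. The pattern, sketched with a stdlib-only toy store so it runs anywhere:

```python
import math
import os
import pickle
import tempfile

# Toy vector store illustrating the save/load pattern; with langchain's
# FAISS wrapper the analogous calls are vector_store.save_local(folder)
# and FAISS.load_local(folder, embeddings) (names assumed from its docs).
class ToyVectorStore:
    def __init__(self, records):
        self.records = records  # list of (vector, text, metadata) tuples

    def save_local(self, path):
        with open(path, "wb") as f:
            pickle.dump(self.records, f)

    @classmethod
    def load_local(cls, path):
        with open(path, "rb") as f:
            return cls(pickle.load(f))

    def similarity_search(self, query_vector, k=1):
        # nearest neighbours by Euclidean distance
        return sorted(self.records, key=lambda r: math.dist(r[0], query_vector))[:k]

store = ToyVectorStore([
    ([0.0, 1.0], "cricket chunk", {"source": "cricket1.pdf"}),
    ([1.0, 0.0], "fifa chunk", {"source": "fifa1.pdf"}),
])
path = os.path.join(tempfile.mkdtemp(), "cricket_store.pkl")
store.save_local(path)                      # persist once
reloaded = ToyVectorStore.load_local(path)  # fast reload on later runs
best = reloaded.similarity_search([0.1, 0.9], k=1)[0]
print(best[1])  # cricket chunk
```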
In the above code, we're not storing the vector databases on local storage; instead, they are kept in memory. Rebuilding them in memory on every run might take longer than quickly loading them from a saved vector database, correct? If that's the case, could you also provide the code for this approach? | How to load data into vector db and use it retrieval and QA? | https://api.github.com/repos/langchain-ai/langchain/issues/17412/comments | 5 | 2024-02-12T13:58:31Z | 2024-02-14T03:34:57Z | https://github.com/langchain-ai/langchain/issues/17412 | 2,130,198,517 | 17,412 |
[
"langchain-ai",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Below is the code:
```
# loader = TextLoader('single_text_file.txt')
loader = DirectoryLoader(f'/content/files', glob="./*.pdf", loader_cls=PyPDFLoader)
documents = loader.load()
# Define your prompt template
prompt_template = """Use the following pieces of information to answer the user's question.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Context: {context}
Question: {question}
Only return the helpful answer below and nothing else. If no context, then no answer.
Helpful Answer:"""
unique_sources = set()
for doc in documents:
source = doc.metadata['source']
unique_sources.add(source)
num_unique_sources = len(unique_sources)
# print("Number of unique sources:", num_unique_sources)
# print()
# # print("Unique source names:")
# for source_name in unique_sources:
# print(source_name)
class MyTextSplitter(RecursiveCharacterTextSplitter):
def split_documents(self, documents):
chunks = super().split_documents(documents)
for document, chunk in zip(documents, chunks):
chunk.metadata['source'] = document.metadata['source']
return chunks
text_splitter = MyTextSplitter(chunk_size=1000,
chunk_overlap=100)
texts = text_splitter.split_documents(documents)
# Create a vector database for the document
# vectorStore = FAISS.from_documents(texts, instructor_embeddings)
vectorStore = FAISS.from_documents(texts, embeddings)
# Create a retriever for the vector database
retriever = vectorStore.as_retriever(search_kwargs={"k": 5})
# Create a chain to answer questions
qa_chain = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0.2),
chain_type="stuff",
retriever=retriever,
return_source_documents=True)
# # Use the chain to answer a questions
# llm_response = qa_chain(query)
# process_llm_response(llm_response)
# Format the prompt using the template
context = ""
question = "can you return the objective"
formatted_prompt = prompt_template.format(context=context, question=question)
# Pass the formatted prompt to the RetrievalQA function
llm_response = qa_chain(formatted_prompt)
process_llm_response(llm_response)
```
In the code mentioned above, a single vector database (vectorDB) is created for all the files located in the files folder. Now, I'm interested in creating multiple vector databases for multiple sets of files (say, a vectorDB related to cricket that holds cricket files, another related to football that holds football files, etc.) and would like assistance in extracting the details or metadata of these vector databases. Could you provide help with the code for this?
The answer you gave is below:
Based on your request, you want to create multiple vector databases for different categories of files and extract their metadata. Here's how you can modify your code to achieve this:
First, you need to categorize your documents based on their content. You can create a dictionary where the keys are the categories (like 'Cricket', 'Football', etc.) and the values are lists of documents related to that category.
```
# Categorize documents
documents_dict = {
'Cricket': [],
'Football': [],
# Add more categories as needed
}
for doc in documents:
source = doc.metadata['source']
documents_dict[source].append(doc)
```
Next, for each category, you can create a vector database and a retriever:
```
# Create a vector database and a retriever for each category
vector_stores = {}
retrievers = {}
for category, docs in documents_dict.items():
texts = text_splitter.split_documents(docs)
vector_store = FAISS.from_documents(texts, embeddings)
retriever = vector_store.as_retriever(search_kwargs={"k": 5})
vector_stores[category] = vector_store
retrievers[category] = retriever
```
Now, vector_stores is a dictionary where the keys are the categories and the values are the corresponding vector databases. Similarly, retrievers is a dictionary where the keys are the categories and the values are the corresponding retrievers.
To answer a question related to a specific category, you can use the corresponding retriever:
```
# Answer a question related to 'Cricket'
category = 'Cricket'
qa_chain = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0.2),
chain_type="stuff",
retriever=retrievers[category],
return_source_documents=True)
```
```
# Use the chain to answer a questions
llm_response = qa_chain(query)
process_llm_response(llm_response)
```
To extract the metadata of a vector database, you can use the get_all_metadatas method:
```
# Extract the metadata of the 'Cricket' vector database
metadata = vector_stores['Cricket'].get_all_metadatas()
```
But I'm a bit confused. You categorized the documents and then said to list the documents related to each category. Is the below how I should insert them?
```
# Categorize documents
documents_dict = {
'Cricket': ['cricket1.pdf', 'cricket2.pdf'],
'Football': ['fifa1.pdf', 'fifa2.pdf'],
# Add more categories as needed
}
for doc in documents:
source = doc.metadata['source']
documents_dict[source].append(doc)
```
And again you're looping over documents, whereas documents holds the data for all the PDF files. How will it know which category each one belongs to? Can you please return the complete code so it's easier to follow? I also want to save the vectorDB locally instead of keeping it in memory.
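One way to resolve the confusion: the quoted loop keys `documents_dict` by the raw source path, so keys like `'Cricket'` are never used and a `KeyError` is likely. Assuming the files live in per-category folders such as `/content/cricket/`, the category can be derived from the parent folder name — a stdlib sketch with hypothetical paths:

```python
import os

# Hypothetical documents with the metadata shape DirectoryLoader produces;
# the folder layout (/content/<category>/...) is an assumption.
documents = [
    {"metadata": {"source": "/content/cricket/cricket1.pdf"}},
    {"metadata": {"source": "/content/cricket/cricket2.pdf"}},
    {"metadata": {"source": "/content/fifa/fifa1.pdf"}},
]

def category_for(source_path):
    # parent folder name, e.g. "/content/cricket/cricket1.pdf" -> "cricket"
    return os.path.basename(os.path.dirname(source_path))

documents_dict = {}
for doc in documents:
    documents_dict.setdefault(category_for(doc["metadata"]["source"]), []).append(doc)

print(sorted(documents_dict))          # ['cricket', 'fifa']
print(len(documents_dict["cricket"]))  # 2
```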
### Idea or request for content:
_No response_ | How to create multiple vectorDB's for multiple files, then extract the details/metadata of it in FAISS and Chroma? | https://api.github.com/repos/langchain-ai/langchain/issues/17410/comments | 5 | 2024-02-12T12:59:45Z | 2024-02-14T03:34:57Z | https://github.com/langchain-ai/langchain/issues/17410 | 2,130,090,684 | 17,410 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain_openai import OpenAIEmbeddings
embedding_model = os.environ.get("EMBEDDING_MODEL")
print(embedding_model)
embedding_dimension = os.environ.get("EMBEDDING_DIMENSION")
print(embedding_dimension)
# the langchain way
embeddings_model_lg = OpenAIEmbeddings(api_key=OPENAI_API_KEY, model=embedding_model, deployment=embedding_model, dimensions=int(embedding_dimension))
vectorstore = SupabaseVectorStore(
client=supabase,
embedding=embeddings_model_lg,
table_name="documents",
query_name="match_documents",
)
retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
document_content_description,
metadata_field_info,
verbose=True
)
# %%
# specify a relevant query
query = "How does tree help the boy make the crown? return results with relevance scores"
embedded_query = embeddings_model_lg.embed_query(query)
response = retriever.get_relevant_documents(query)
```
and in my .env
```bash
EMBEDDING_DIMENSION=256
# edit this based on your model preference, e.g. text-embedding-3-small, text-embedding-ada-002
EMBEDDING_MODEL=text-embedding-3-large
```
### Error Message and Stack Trace (if applicable)
```bash
2024-02-12 21:49:08,618:WARNING - Warning: model not found. Using cl100k_base encoding.
2024-02-12 21:49:09,055:INFO - HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
2024-02-12 21:49:10,285:INFO - HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
2024-02-12 21:49:10,295:INFO - Generated Query: query='tree help boy crown' filter=None limit=None
2024-02-12 21:49:10,296:WARNING - Warning: model not found. Using cl100k_base encoding.
2024-02-12 21:49:10,584:INFO - HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
2024-02-12 21:49:11,104:INFO - HTTP Request: POST https://lhbeoisvtsilsquybifs.supabase.co/rest/v1/rpc/match_documents?limit=4 "HTTP/1.1 200 OK"
```
it's a warning.
### Description
I want it to use the model I designated. Can I change the default in base.py?
```python
.
.
.
client: Any = Field(default=None, exclude=True) #: :meta private:
async_client: Any = Field(default=None, exclude=True) #: :meta private:
model: str = "text-embedding-ada-002"
dimensions: Optional[int] = None
"""The number of dimensions the resulting o...
```
I can't believe the results are actually correct, but this is a tiny children's book, so it could have been a fluke.
```bash
[Document(page_content='Once there was a tree.... and she loved a little boy. And everyday the boy would come and he would gather her leaves and make them into crowns and play king of the forest. He would climb up her trunk and swing from her branches and eat apples. And they would play hide-and-go-seek.'), Document(page_content='And the tree was happy. But time went by. And the boy grew older. And the tree was often alone. Then one day the boy came to the tree and the tree said, "Come, Boy, come and climb up my trunk and swing from my branches and eat apples and play in my shade and be happy.'), ...
```
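For what it's worth, the warning appears to come from the token-counting path: langchain asks tiktoken for the model's encoding and falls back to `cl100k_base` when the installed tiktoken release does not know the model name (`text-embedding-3-large` postdates some tiktoken versions). Upgrading tiktoken, or setting the `tiktoken_model_name` field on `OpenAIEmbeddings` to a name the installed version knows, should silence it. The fallback pattern, sketched with a stand-in model table:

```python
# Stand-in for an older tiktoken model table that predates the
# text-embedding-3-* models; the real lookup is tiktoken.encoding_for_model.
KNOWN_MODELS = {"text-embedding-ada-002": "cl100k_base"}

warnings = []

def encoding_for_model(model_name):
    try:
        return KNOWN_MODELS[model_name]
    except KeyError:
        warnings.append("Warning: model not found. Using cl100k_base encoding.")
        return "cl100k_base"

enc = encoding_for_model("text-embedding-3-large")
print(enc)            # cl100k_base
print(len(warnings))  # 1
```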
### System Info
```bash
(langchain) nyck33@nyck33-lenovo:/media/nyck33/65DA61B605B0A8C1/projects/langchain-deeplearning-ai-tutorial$ pip freeze | grep langchain
langchain==0.1.5
langchain-community==0.0.19
langchain-core==0.1.21
langchain-openai==0.0.5
``` | OpenAIEmbeddings model argument does not work | https://api.github.com/repos/langchain-ai/langchain/issues/17409/comments | 4 | 2024-02-12T12:59:44Z | 2024-04-06T09:39:41Z | https://github.com/langchain-ai/langchain/issues/17409 | 2,130,090,661 | 17,409 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain.chains.qa_generation.base import QAGenerationChain
from langchain.evaluation.qa.generate_chain import QAGenerateChain
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
# Initialize the language model and QAGenerationChain
llm = ChatOpenAI(temperature=0.0, model=llm_model) # Replace
embeddings_model_lg = OpenAIEmbeddings(api_key=OPENAI_API_KEY, model=embedding_model, deployment=embedding_model, dimensions=int(embedding_dimension))
.
.
.
### load vectorstore already on Supabase
vectorstore = SupabaseVectorStore(
client=supabase,
embedding=embeddings_model_lg,
table_name="documents",
query_name="match_page_sections",
)
# %%
print(vectorstore.embeddings)
# %% [markdown]
### Create our self-querying retrieval model
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_openai import OpenAI
# %% define the metadata in the pgvector table
# want descriptions of the metadata fields
metadata_field_info = []
document_content_description = "Ordered segments of the book 'The Giving Tree' by Shel Silverstein"
retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
document_content_description,
metadata_field_info,
verbose=True
)
# %%
# specify a relevant query
query = "How does tree help the boy make the crown?"
embedded_query = embeddings_model_lg.embed_query(query)
response = retriever.get_relevant_documents(embedded_query)
# %%
# try using openai embeddings and calling methods on vectorstore
# https://api.python.langchain.com/en/latest/vectorstores/langchain_community.vectorstores.supabase.SupabaseVectorStore.html#
query_embedding_openai = get_embeddings(query)
results = vectorstore.similarity_search_with_relevance_scores(query, k=5)
```
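For context on the errors reported below: `SupabaseVectorStore` ends up issuing a Postgres RPC named after `query_name` with a `query_embedding` argument, and PGRST202 means no function with that name/signature exists in the database — so `match_page_sections` presumably needs to be created server-side (or `query_name` pointed back at an existing `match_documents`). A simplified stub of the mismatch (the real client behaviour is assumed, not reproduced):

```python
# Simplified stand-in for the PostgREST RPC dispatch: langchain's
# SupabaseVectorStore calls rpc(query_name, {"query_embedding": ...}),
# and PGRST202 comes back when no matching Postgres function exists.
class StubPostgrest:
    def __init__(self, defined_functions):
        self.defined_functions = set(defined_functions)

    def rpc(self, name, params):
        if name not in self.defined_functions:
            raise LookupError(
                f"PGRST202: Could not find the function public.{name}"
                f"({', '.join(params)}) in the schema cache"
            )
        return [{"content": "...", "similarity": 0.87}]

client = StubPostgrest(defined_functions=["match_documents"])
ok = client.rpc("match_documents", {"query_embedding": [0.1, 0.2]})
print(ok[0]["similarity"])

try:
    client.rpc("match_page_sections", {"query_embedding": [0.1, 0.2]})
except LookupError as e:
    caught = str(e)
print(caught)
```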
Both of those tries at the end throw errors like the ones below.
### Error Message and Stack Trace (if applicable)
```bash
{
"name": "APIError",
"message": "{'code': 'PGRST202', 'details': 'Searched for the function public.match_page_sections with parameter query_embedding or with a single unnamed json/jsonb parameter, but no matches were found in the schema cache.', 'hint': None, 'message': 'Could not find the function public.match_page_sections(query_embedding) in the schema cache'}",
"stack": "---------------------------------------------------------------------------
APIError Traceback (most recent call last)
File /media/nyck33/65DA61B605B0A8C1/projects/langchain-deeplearning-ai-tutorial/supabase/giving_tree_supabase_query.py:5
3 query = \"How does tree help the boy make the crown?\"
4 embedded_query = embeddings_model_lg.embed_query(query)
----> 5 response = retriever.get_relevant_documents(embedded_query)
File ~/miniconda3/envs/langchain/lib/python3.10/site-packages/langchain_core/retrievers.py:224, in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)
222 except Exception as e:
223 run_manager.on_retriever_error(e)
--> 224 raise e
225 else:
226 run_manager.on_retriever_end(
227 result,
228 )
File ~/miniconda3/envs/langchain/lib/python3.10/site-packages/langchain_core/retrievers.py:217, in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)
215 _kwargs = kwargs if self._expects_other_args else {}
216 if self._new_arg_supported:
--> 217 result = self._get_relevant_documents(
218 query, run_manager=run_manager, **_kwargs
219 )
220 else:
221 result = self._get_relevant_documents(query, **_kwargs)
File ~/miniconda3/envs/langchain/lib/python3.10/site-packages/langchain/retrievers/self_query/base.py:174, in SelfQueryRetriever._get_relevant_documents(self, query, run_manager)
172 logger.info(f\"Generated Query: {structured_query}\")
173 new_query, search_kwargs = self._prepare_query(query, structured_query)
--> 174 docs = self._get_docs_with_query(new_query, search_kwargs)
175 return docs
File ~/miniconda3/envs/langchain/lib/python3.10/site-packages/langchain/retrievers/self_query/base.py:148, in SelfQueryRetriever._get_docs_with_query(self, query, search_kwargs)
145 def _get_docs_with_query(
146 self, query: str, search_kwargs: Dict[str, Any]
147 ) -> List[Document]:
--> 148 docs = self.vectorstore.search(query, self.search_type, **search_kwargs)
149 return docs
File ~/miniconda3/envs/langchain/lib/python3.10/site-packages/langchain_core/vectorstores.py:139, in VectorStore.search(self, query, search_type, **kwargs)
137 \"\"\"Return docs most similar to query using specified search type.\"\"\"
138 if search_type == \"similarity\":
--> 139 return self.similarity_search(query, **kwargs)
140 elif search_type == \"mmr\":
141 return self.max_marginal_relevance_search(query, **kwargs)
File ~/miniconda3/envs/langchain/lib/python3.10/site-packages/langchain_community/vectorstores/supabase.py:182, in SupabaseVectorStore.similarity_search(self, query, k, filter, **kwargs)
174 def similarity_search(
175 self,
176 query: str,
(...)
179 **kwargs: Any,
180 ) -> List[Document]:
181 vector = self._embedding.embed_query(query)
--> 182 return self.similarity_search_by_vector(vector, k=k, filter=filter, **kwargs)
File ~/miniconda3/envs/langchain/lib/python3.10/site-packages/langchain_community/vectorstores/supabase.py:191, in SupabaseVectorStore.similarity_search_by_vector(self, embedding, k, filter, **kwargs)
184 def similarity_search_by_vector(
185 self,
186 embedding: List[float],
(...)
189 **kwargs: Any,
190 ) -> List[Document]:
--> 191 result = self.similarity_search_by_vector_with_relevance_scores(
192 embedding, k=k, filter=filter, **kwargs
193 )
195 documents = [doc for doc, _ in result]
197 return documents
File ~/miniconda3/envs/langchain/lib/python3.10/site-packages/langchain_community/vectorstores/supabase.py:237, in SupabaseVectorStore.similarity_search_by_vector_with_relevance_scores(self, query, k, filter, postgrest_filter, score_threshold)
231 query_builder.params = query_builder.params.set(
232 \"and\", f\"({postgrest_filter})\"
233 )
235 query_builder.params = query_builder.params.set(\"limit\", k)
--> 237 res = query_builder.execute()
239 match_result = [
240 (
241 Document(
(...)
248 if search.get(\"content\")
249 ]
251 if score_threshold is not None:
File ~/miniconda3/envs/langchain/lib/python3.10/site-packages/postgrest/_sync/request_builder.py:119, in SyncSingleRequestBuilder.execute(self)
117 return SingleAPIResponse[_ReturnT].from_http_request_response(r)
118 else:
--> 119 raise APIError(r.json())
120 except ValidationError as e:
121 raise APIError(r.json()) from e
APIError: {'code': 'PGRST202', 'details': 'Searched for the function public.match_page_sections with parameter query_embedding or with a single unnamed json/jsonb parameter, but no matches were found in the schema cache.', 'hint': None, 'message': 'Could not find the function public.match_page_sections(query_embedding) in the schema cache'}"
}
```
and
```bash
---------------------------------------------------------------------------
APIError                                  Traceback (most recent call last)
File /media/nyck33/65DA61B605B0A8C1/projects/langchain-deeplearning-ai-tutorial/supabase/giving_tree_supabase_query.py:7
      1 # %%
      2 # try using openai embeddings and calling methods on vectorstore
      3 # https://api.python.langchain.com/en/latest/vectorstores/langchain_community.vectorstores.supabase.SupabaseVectorStore.html#
      5 query_embedding_openai = get_embeddings(query)
----> 7 results = vectorstore.similarity_search_with_relevance_scores(query, k=5)

File ~/miniconda3/envs/langchain/lib/python3.10/site-packages/langchain_community/vectorstores/supabase.py:207, in SupabaseVectorStore.similarity_search_with_relevance_scores(self, query, k, filter, **kwargs)
    199 def similarity_search_with_relevance_scores(
    200     self,
    201     query: str,
   (...)
    204     **kwargs: Any,
    205 ) -> List[Tuple[Document, float]]:
    206     vector = self._embedding.embed_query(query)
--> 207     return self.similarity_search_by_vector_with_relevance_scores(
    208         vector, k=k, filter=filter, **kwargs
    209     )

File ~/miniconda3/envs/langchain/lib/python3.10/site-packages/langchain_community/vectorstores/supabase.py:237, in SupabaseVectorStore.similarity_search_by_vector_with_relevance_scores(self, query, k, filter, postgrest_filter, score_threshold)
    231     query_builder.params = query_builder.params.set(
    232         "and", f"({postgrest_filter})"
    233     )
    235 query_builder.params = query_builder.params.set("limit", k)
--> 237 res = query_builder.execute()
[239](https://file+.vscode-resource.vscode-cdn.net/media/nyck33/65DA61B605B0A8C1/projects/langchain-deeplearning-ai-tutorial/~/miniconda3/envs/langchain/lib/python3.10/site-packages/langchain_community/vectorstores/supabase.py:239) match_result = [
[240](https://file+.vscode-resource.vscode-cdn.net/media/nyck33/65DA61B605B0A8C1/projects/langchain-deeplearning-ai-tutorial/~/miniconda3/envs/langchain/lib/python3.10/site-packages/langchain_community/vectorstores/supabase.py:240) (
[241](https://file+.vscode-resource.vscode-cdn.net/media/nyck33/65DA61B605B0A8C1/projects/langchain-deeplearning-ai-tutorial/~/miniconda3/envs/langchain/lib/python3.10/site-packages/langchain_community/vectorstores/supabase.py:241) Document((...)
[248](https://file+.vscode-resource.vscode-cdn.net/media/nyck33/65DA61B605B0A8C1/projects/langchain-deeplearning-ai-tutorial/~/miniconda3/envs/langchain/lib/python3.10/site-packages/langchain_community/vectorstores/supabase.py:248) if search.get("content")
[249](https://file+.vscode-resource.vscode-cdn.net/media/nyck33/65DA61B605B0A8C1/projects/langchain-deeplearning-ai-tutorial/~/miniconda3/envs/langchain/lib/python3.10/site-packages/langchain_community/vectorstores/supabase.py:249) ]
[251](https://file+.vscode-resource.vscode-cdn.net/media/nyck33/65DA61B605B0A8C1/projects/langchain-deeplearning-ai-tutorial/~/miniconda3/envs/langchain/lib/python3.10/site-packages/langchain_community/vectorstores/supabase.py:251) if score_threshold is not None:
File [~/miniconda3/envs/langchain/lib/python3.10/site-packages/postgrest/_sync/request_builder.py:119](https://file+.vscode-resource.vscode-cdn.net/media/nyck33/65DA61B605B0A8C1/projects/langchain-deeplearning-ai-tutorial/~/miniconda3/envs/langchain/lib/python3.10/site-packages/postgrest/_sync/request_builder.py:119), in SyncSingleRequestBuilder.execute(self)
[117](https://file+.vscode-resource.vscode-cdn.net/media/nyck33/65DA61B605B0A8C1/projects/langchain-deeplearning-ai-tutorial/~/miniconda3/envs/langchain/lib/python3.10/site-packages/postgrest/_sync/request_builder.py:117) return SingleAPIResponse[_ReturnT].from_http_request_response(r)
[118](https://file+.vscode-resource.vscode-cdn.net/media/nyck33/65DA61B605B0A8C1/projects/langchain-deeplearning-ai-tutorial/~/miniconda3/envs/langchain/lib/python3.10/site-packages/postgrest/_sync/request_builder.py:118) else:
--> [119](https://file+.vscode-resource.vscode-cdn.net/media/nyck33/65DA61B605B0A8C1/projects/langchain-deeplearning-ai-tutorial/~/miniconda3/envs/langchain/lib/python3.10/site-packages/postgrest/_sync/request_builder.py:119) raise APIError(r.json())
[120](https://file+.vscode-resource.vscode-cdn.net/media/nyck33/65DA61B605B0A8C1/projects/langchain-deeplearning-ai-tutorial/~/miniconda3/envs/langchain/lib/python3.10/site-packages/postgrest/_sync/request_builder.py:120) except ValidationError as e:
[121](https://file+.vscode-resource.vscode-cdn.net/media/nyck33/65DA61B605B0A8C1/projects/langchain-deeplearning-ai-tutorial/~/miniconda3/envs/langchain/lib/python3.10/site-packages/postgrest/_sync/request_builder.py:121) raise APIError(r.json()) from e
APIError: {'code': 'PGRST202', 'details': 'Searched for the function public.match_page_sections with parameter query_embedding or with a single unnamed json/jsonb parameter, but no matches were found in the schema cache.', 'hint': None, 'message': 'Could not find the function public.match_page_sections(query_embedding) in the schema cache'}
```
but it is there:

That is the `public` schema, and my other code for the OpenAI retrieval plugin works (it's their code from their repo, which looks like the following):
```python
async def _query(self, queries: List[QueryWithEmbedding]) -> List[QueryResult]:
    """
    Takes in a list of queries with embeddings and filters and returns a list of query results with matching document chunks and scores.
    """
    query_results: List[QueryResult] = []
    for query in queries:
        # get the top 3 documents with the highest cosine similarity using rpc function in the database called "match_page_sections"
        params = {
            "in_embedding": query.embedding,
        }
        if query.top_k:
            params["in_match_count"] = query.top_k
        if query.filter:
            if query.filter.document_id:
                params["in_document_id"] = query.filter.document_id
            if query.filter.source:
                params["in_source"] = query.filter.source.value
            if query.filter.source_id:
                params["in_source_id"] = query.filter.source_id
            if query.filter.author:
                params["in_author"] = query.filter.author
            if query.filter.start_date:
                params["in_start_date"] = datetime.fromtimestamp(
                    to_unix_timestamp(query.filter.start_date)
                )
            if query.filter.end_date:
                params["in_end_date"] = datetime.fromtimestamp(
                    to_unix_timestamp(query.filter.end_date)
                )
        try:
            logger.debug(f"RPC params: {params}")
            data = await self.client.rpc("match_page_sections", params=params)
            results: List[DocumentChunkWithScore] = []
            for row in data:
                document_chunk = DocumentChunkWithScore(...
```
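One likely cause (an assumption based on the error text, not confirmed here): PostgREST resolves RPC functions by their *named* parameters, and the trace shows LangChain's `SupabaseVectorStore` invoking the function with a parameter literally named `query_embedding`, while `match_page_sections` above is declared with `in_`-prefixed parameters. A tiny illustration of the mismatch that produces PGRST202:

```python
# Parameter name LangChain sends to the RPC (per the APIError text above)
langchain_rpc_params = {"query_embedding": [0.1, 0.2]}

# Parameter names the retrieval-plugin function above declares
plugin_function_params = {"in_embedding": [0.1, 0.2], "in_match_count": 3}

# PostgREST matches RPC overloads by named parameters, so this non-empty
# difference is exactly what "Could not find the function ..." reports:
missing = set(langchain_rpc_params) - set(plugin_function_params)
print(missing)  # {'query_embedding'}
```

In other words, the function exists, but not with the parameter name the wrapper expects; either the SQL function's parameter would need to be named `query_embedding`, or a function with that signature added.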
### Description
I want to use `retriever.get_relevant_documents("What are some movies about dinosaurs")` from https://python.langchain.com/docs/integrations/retrievers/self_query/supabase_self_query
### System Info
```bash
(langchain) nyck33@nyck33-lenovo:~$ pip freeze | grep langchain
langchain==0.1.5
langchain-community==0.0.19
langchain-core==0.1.21
langchain-openai==0.0.5
```
on Ubuntu 23.04, conda environment | APIERROR: PGRST202, can't find function on Supabase pgvector database | https://api.github.com/repos/langchain-ai/langchain/issues/17407/comments | 1 | 2024-02-12T12:22:07Z | 2024-05-20T16:09:19Z | https://github.com/langchain-ai/langchain/issues/17407 | 2,130,026,799 | 17,407 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
os.environ['OPENAI_API_KEY'] = openapi_key

# Define connection parameters using constants
from urllib.parse import quote_plus

server_name = constants.server_name
database_name = constants.database_name
username = constants.username
password = constants.password
encoded_password = quote_plus(password)
connection_uri = f"mssql+pyodbc://{username}:{encoded_password}@{server_name}/{database_name}?driver=ODBC+Driver+17+for+SQL+Server"

# Create an engine to connect to the SQL database
engine = create_engine(connection_uri)

model_name = "gpt-3.5-turbo-16k"
db = SQLDatabase(engine, view_support=True, include_tables=['EGV_emp_departments_ChatGPT'])
llm = ChatOpenAI(temperature=0, verbose=False, model=model_name)

PROMPT_SUFFIX = """Only use the following tables:
{table_info}
Previous Conversation:
{history}
Question: {input}"""

_DEFAULT_TEMPLATE = """Given an input question, first create a syntactically correct {dialect} query to run,
then look at the results of the query and return the answer.
Given an input question, create a syntactically correct MSSQL query by considering only the matching column names from the question,
then look at the results of the query and return the answer.
If a column name is not present, refrain from writing the SQL query. Columns like UAN number and PF number are not present, so do not consider such columns.
Write the query only for the column names which are present in the view.
Execute the query and analyze the results to formulate a response.
Return the answer in sentence form.
Use the following format:
Question: "Question here"
SQLQuery: "SQL Query to run"
SQLResult: "Result of the SQLQuery"
Answer: "Final answer here"
"""

PROMPT = PromptTemplate.from_template(_DEFAULT_TEMPLATE + PROMPT_SUFFIX)

memory = None

# Define a function named chat that takes a question and SQL format indicator as input
def chat1(question):
    # global db_chain
    global memory
    # prompt = """
    # Given an input question, create a syntactically correct MSSQL query by considering only the matching column names from the question,
    # then look at the results of the query and return the answer.
    # If a column name is not present, refrain from writing the SQL query.
    # Write the query only for the column names which are present in view.
    # Execute the query and analyze the results to formulate a response.
    # Return the answer in sentence form.
    # The question: {question}
    # """
    try:
        if memory is None:
            memory = ConversationBufferMemory()
        db_chain = SQLDatabaseChain.from_llm(llm, db, prompt=PROMPT, verbose=True, memory=memory)
        greetings = ["hi", "hello", "hey"]
        if any(greeting == question.lower() for greeting in greetings):
            print(question)
            print("Hello! How can I assist you today?")
            return "Hello! How can I assist you today?"
        else:
            answer = db_chain.run(question)
            # answer = db_chain.run(prompt.format(question=question))
            # print(memory.load_memory_variables()["history"])
            print(memory.load_memory_variables({}))
            return answer
    except exc.ProgrammingError as e:
        # Check for a specific SQL error related to invalid column name
        if "Invalid column name" in str(e):
            print("Answer: Error occurred while processing the question")
            print(str(e))
            return "Invalid question. Please check your column names."
        else:
            print("Error occurred while processing")
            print(str(e))
            return "Unknown ProgrammingError occurred"
    except openai.RateLimitError as e:
        print("Error occurred while fetching the answer")
        print(str(e))
        return "Rate limit exceeded. Please mention the specific columns you need!"
    except openai.BadRequestError as e:
        print("Error occurred while fetching the answer")
        print(str(e))
        return "Context length exceeded: This model's maximum context length is 16385 tokens. Please reduce the length of the messages."
    except Exception as e:
        print("Error occurred while processing")
        print(str(e))
        return "Unknown error occurred"
```
### Error Message and Stack Trace (if applicable)
So far I'm using flask-caching for the cache; instead, I would like to use the LLM cache. Here is the code:

```python
app = Flask(__name__)
CORS(app)  # Enable CORS if needed
cache = Cache(app, config={'CACHE_TYPE': 'SimpleCache'})
app.secret_key = uuid.uuid4().hex

# previous_question = []
# filename = "details"
csv_file = ""
pdf_file = ""

# This function will be used to get answers from the chatbot
# def get_chatbot_answer(questions):
#     return chat(questions)  # Call your chat function here

@app.route('/')
def index():
    # return {"message": "welcome to home page"}
    return render_template('chatbot5.html')

@cache.memoize(timeout=3600)
def store_chat_history(question, answer):
    return {'question': question, 'answer': answer, 'timestamp': datetime}

@app.route('/get_previous_questions', methods=['GET'])
def get_previous_question():
    previous_questions = cache.get('previous_questions') or []
    return jsonify(previous_questions)

@app.route('/get_answer', methods=['GET'])
# @token_required
def generate_answer():
    question = request.args.get('questions')
    answer = chat1(question)
    store_chat_history(question, answer)
    previous_questions = cache.get('previous_questions') or []
    previous_questions.append({'question': question, 'answer': answer, 'timestamp': datetime.now()})
    cache.set('previous_questions', previous_questions, timeout=3600)
    return {'answer': answer}
```
### Description
1. While using flask-caching, an answer is only retrieved from the cache when the question is exactly the same.
2. When I ask memory-based questions — e.g. (1) "what is xyz's employee id", (2) "what is their mail id", (3) "what is xyz1's employee id", (4) "what is their mail id" — the 4th question returns the cached answer for the 2nd question, even though the 4th depends on the 3rd.

So, can I use the LLM cache instead? Will it solve the problems above?
And can you show how to integrate LLM caching into the above code together with memory?
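One caveat worth noting up front: LangChain's LLM cache (e.g. `set_llm_cache(InMemoryCache())` from `langchain.globals`) is also an exact-match cache keyed on the full prompt string, so on its own it will not resolve follow-up questions either — memory changes the prompt, which changes the cache key. A minimal pure-Python sketch of why any exact-match cache behaves this way (`fake_llm` and `cached_answer` are stand-ins, not a real API):

```python
# Exact-match caching: a hit requires a byte-identical prompt.
cache = {}

def cached_answer(prompt, compute):
    if prompt not in cache:          # miss unless the prompt matches exactly
        cache[prompt] = compute(prompt)
    return cache[prompt]

calls = []
def fake_llm(prompt):                # stand-in for the real chain/LLM call
    calls.append(prompt)
    return f"answer to: {prompt}"

cached_answer("what is xyz employee id", fake_llm)
cached_answer("what is xyz employee id", fake_llm)   # cache hit, no LLM call
cached_answer("what is their mail id", fake_llm)     # different wording -> miss
print(len(calls))  # 2
```

So caching and conversation memory solve different problems: the cache avoids repeated identical calls, while memory supplies the history that makes "their" in question 4 refer to the right person.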
### System Info
python: 3.11
langchain: latest | How does conversational buffer memory and llm caching can be used together? | https://api.github.com/repos/langchain-ai/langchain/issues/17402/comments | 13 | 2024-02-12T10:03:33Z | 2024-02-14T01:50:46Z | https://github.com/langchain-ai/langchain/issues/17402 | 2,129,783,031 | 17,402 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
def chat_langchain(new_project_qa, query, not_uuid):
    result = new_project_qa.invoke(query)
    print(result, "***********************************************")
    relevant_document = result['source_documents']
    if relevant_document:
        source = relevant_document[0].metadata.get('source', '')
        # Check if the file extension is ".pdf"
        file_extension = os.path.splitext(source)[1]
        if file_extension.lower() == ".pdf":
            source = os.path.basename(source)
        # Retrieve the UserExperience instance using the provided not_uuid
        user_experience_inst = UserExperience.objects.get(not_uuid=not_uuid)
        bot_ending = user_experience_inst.bot_ending_msg if user_experience_inst.bot_ending_msg is not None else ""
        # Create the list_json dictionary
        if bot_ending != '':
            list_json = {
                'bot_message': result['result'] + '\n\n' + str(bot_ending),
                "citation": source
            }
        else:
            list_json = {
                'bot_message': result['result'] + str(bot_ending),
                "citation": source
            }
    else:
        # Handle the case when relevant_document is empty
        list_json = {
            'bot_message': result['result'],
            'citation': ''
        }
    # Return the list_json dictionary
    return list_json
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
The provided code works for accessing data in a database (ChromaDB), but I need to retrieve the metadata, page content, and source from pgvector.
How can I achieve this?
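For reference, the `source_documents` returned by `RetrievalQA` are plain `Document` objects regardless of the vector store, so the same `metadata` access pattern should work with PGVector, provided the metadata (including `source`) was stored at ingestion time. A stand-in sketch of the access pattern — the `Document` class here is a local mock, not the langchain import:

```python
class Document:
    """Stand-in for langchain_core.documents.Document (page_content + metadata)."""
    def __init__(self, page_content, metadata):
        self.page_content = page_content
        self.metadata = metadata

# What a PGVector similarity hit looks like once wrapped as a Document:
doc = Document("some chunk text", {"source": "manual.pdf", "page": 3})

# The exact access pattern used in chat_langchain above:
source = doc.metadata.get("source", "")
print(source)  # manual.pdf
```

If `source` comes back empty from PGVector, the likely cause is that the documents were ingested without that metadata key, not a difference in the retrieval API.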
### System Info
I am using pgvector database | How to get source/Metadata in Pgvector? | https://api.github.com/repos/langchain-ai/langchain/issues/17400/comments | 1 | 2024-02-12T09:50:00Z | 2024-02-14T01:48:56Z | https://github.com/langchain-ai/langchain/issues/17400 | 2,129,758,298 | 17,400 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path=".../llama-2-13b-chat.Q4_0.gguf",
    n_gpu_layers=30,
    n_batch=1024,
    f16_kv=True,
    grammar_path=".../response.gbnf",
)
```
### Error Message and Stack Trace (if applicable)
llama_model_loader: loaded meta data with 19 key-value pairs and 363 tensors from /static/llamacpp_model/llama-2-13b-chat.Q4_0.gguf (version GGUF V2)
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = LLaMA v2
llama_model_loader: - kv 2: llama.context_length u32 = 4096
llama_model_loader: - kv 3: llama.embedding_length u32 = 5120
llama_model_loader: - kv 4: llama.block_count u32 = 40
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 13824
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 7: llama.attention.head_count u32 = 40
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 40
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 2
llama_model_loader: - kv 11: tokenizer.ggml.model str = llama
llama_model_loader: - kv 12: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv 13: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 15: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 16: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 17: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 18: general.quantization_version u32 = 2
llama_model_loader: - type f32: 81 tensors
llama_model_loader: - type q4_0: 281 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format = GGUF V2
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 4096
llm_load_print_meta: n_embd = 5120
llm_load_print_meta: n_head = 40
llm_load_print_meta: n_head_kv = 40
llm_load_print_meta: n_layer = 40
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 5120
llm_load_print_meta: n_embd_v_gqa = 5120
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff = 13824
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 4096
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: model type = 13B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 13.02 B
llm_load_print_meta: model size = 6.86 GiB (4.53 BPW)
llm_load_print_meta: general.name = LLaMA v2
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.28 MiB
ggml_backend_metal_buffer_from_ptr: allocated buffer, size = 6595.61 MiB, ( 6595.69 / 12288.02)
llm_load_tensors: offloading 30 repeating layers to GPU
llm_load_tensors: offloaded 30/41 layers to GPU
llm_load_tensors: CPU buffer size = 7023.90 MiB
llm_load_tensors: Metal buffer size = 6595.60 MiB
...................................................................................................
llama_new_context_with_model: n_ctx = 512
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M3 Pro
ggml_metal_init: picking default device: Apple M3 Pro
ggml_metal_init: default.metallib not found, loading from source
ggml_metal_init: GGML_METAL_PATH_RESOURCES = nil
ggml_metal_init: loading '.venv/lib/python3.11/site-packages/llama_cpp/ggml-metal.metal'
ggml_metal_init: GPU name: Apple M3 Pro
ggml_metal_init: GPU family: MTLGPUFamilyApple9 (1009)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3 (5001)
ggml_metal_init: simdgroup reduction support = true
ggml_metal_init: simdgroup matrix mul. support = true
ggml_metal_init: hasUnifiedMemory = true
ggml_metal_init: recommendedMaxWorkingSetSize = 12884.92 MB
llama_kv_cache_init: CPU KV buffer size = 100.00 MiB
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 300.00 MiB, ( 6901.56 / 12288.02)
llama_kv_cache_init: Metal KV buffer size = 300.00 MiB
llama_new_context_with_model: KV self size = 400.00 MiB, K (f16): 200.00 MiB, V (f16): 200.00 MiB
llama_new_context_with_model: CPU input buffer size = 11.01 MiB
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 0.02 MiB, ( 6901.58 / 12288.02)
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 93.52 MiB, ( 6995.08 / 12288.02)
llama_new_context_with_model: Metal compute buffer size = 93.50 MiB
llama_new_context_with_model: CPU compute buffer size = 81.40 MiB
llama_new_context_with_model: graph splits (measure): 5
AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 |
Model metadata: {'general.quantization_version': '2', 'tokenizer.ggml.unknown_token_id': '0', 'tokenizer.ggml.eos_token_id': '2', 'tokenizer.ggml.bos_token_id': '1', 'tokenizer.ggml.model': 'llama', 'llama.attention.head_count_kv': '40', 'llama.context_length': '4096', 'llama.attention.head_count': '40', 'llama.rope.dimension_count': '128', 'general.file_type': '2', 'llama.feed_forward_length': '13824', 'llama.embedding_length': '5120', 'llama.block_count': '40', 'general.architecture': 'llama', 'llama.attention.layer_norm_rms_epsilon': '0.000010', 'general.name': 'LLaMA v2'}
from_string grammar:
space ::= space_1
space_1 ::= [ ] |
boolean ::= boolean_3 space
boolean_3 ::= [t] [r] [u] [e] | [f] [a] [l] [s] [e]
string ::= ["] string_7 ["] space
string_5 ::= [^"\] | [\] string_6
string_6 ::= ["\/bfnrt] | [u] [0-9a-fA-F] [0-9a-fA-F] [0-9a-fA-F] [0-9a-fA-F]
string_7 ::= string_5 string_7 |
root ::= [{] space ["] [f] [a] [v] [o] [r] [a] [b] [l] [e] ["] space [:] space boolean [,] space ["] [n] [a] [m] [e] ["] space [:] space string [}] space
ggml_metal_free: deallocating
Exception ignored in: <function LlamaGrammar.__del__ at 0x1636acfe0>
Traceback (most recent call last):
File ".venv/lib/python3.11/site-packages/llama_cpp/llama_grammar.py", line 50, in __del__
AttributeError: 'NoneType' object has no attribute 'llama_grammar_free'
### Description
I'm trying to use a LlamaCpp model through LangChain, providing a grammar path.
The inference process works and the grammar is correctly applied to the output.
After the model is deallocated, `LlamaGrammar.__del__` is called to delete the grammar object, but I get `'NoneType' object has no attribute 'llama_grammar_free'` because the model was already deallocated.
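One hedged observation (an assumption, not a confirmed fix): the log shows "Exception ignored in: ... `__del__`", which typically means the finalizer ran during interpreter shutdown, after llama_cpp's module globals were already torn down. Dropping the object reference explicitly while the interpreter is still running lets finalizers execute normally, as this stand-in sketch illustrates:

```python
import gc

class FakeModel:
    """Stand-in for an object with a native-resource finalizer, like LlamaGrammar."""
    freed = False
    def __del__(self):
        # In the real case this would call llama_cpp.llama_grammar_free(...)
        FakeModel.freed = True

m = FakeModel()
del m          # explicit release while the runtime is intact
gc.collect()   # make collection deterministic on non-refcounting runtimes
print(FakeModel.freed)  # True
```

Applied to the issue, explicitly `del`-ing the `LlamaCpp` instance before the program exits may avoid the late `__del__`; the underlying ordering problem in the wrapper's finalizer would still need a library-side fix.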
### System Info
langchain==0.1.4
langchain-community==0.0.16
langchain-core==0.1.17 | LlamaCpp error when a model that was built using a grammar_path is deallocated | https://api.github.com/repos/langchain-ai/langchain/issues/17399/comments | 1 | 2024-02-12T09:43:52Z | 2024-05-20T16:09:14Z | https://github.com/langchain-ai/langchain/issues/17399 | 2,129,747,771 | 17,399 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
def retreival_qa_chain(chroma_db_path):
    embedding = OpenAIEmbeddings()
    vectordb = PGVector.from_documents(
        embedding=embedding,
        collection_name=COLLECTION_NAME,
        connection_string=CONNECTION_STRING,
    )
    vector_store = PGVector(
        connection_string=CONNECTION_STRING,
        collection_name=COLLECTION_NAME,
        embedding_function=embedding,
    )
    retriever = vector_store.as_retriever()
    qa = RetrievalQA.from_chain_type(
        llm=OpenAI(),
        chain_type="stuff",
        retriever=retriever,
        return_source_documents=True,
    )
    return qa

def create_global_qa_chain():
    chroma_db_path = "chroma-databases"
    folders = os.listdir(chroma_db_path)
    qa_chains = {}
    for index, folder in enumerate(folders):
        folder_path = f"{chroma_db_path}/{folder}"
        project = retreival_qa_chain(folder_path)
        qa_chains[folder] = project
    return qa_chains

qa_chains = create_global_qa_chain()
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
How can I store the QA object returned from `retreival_qa_chain` in a folder for question answering with pgvector? In ChromaDB it is stored automatically by specifying a persist directory — how can I do the same here?
Like in ChromaDB, where we store it in a persist directory and do question answering by calling it like this:
chat_qa = qa_chains[formatted_project_name]
not_uuid=formatted_project_name
query = request.POST.get('message', None)
custom_message=generate_custom_prompt(chat_qa,query,name,not_uuid)
project_instance = ProjectName.objects.get(not_uuid=not_uuid)
# try:
chat_response = chat_langchain(chat_qa, custom_message,not_uuid)
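Unlike Chroma, pgvector has no persist directory: the embeddings already live in the Postgres tables behind `COLLECTION_NAME`, so there is nothing to write to a folder. The chain object itself can simply be rebuilt (or cached in a dict) from the same connection string and collection name. A pure-Python sketch of that lazy-registry pattern — `build_chain` is a stand-in for re-creating the `PGVector` store and `RetrievalQA` chain:

```python
_chains = {}
built = []

def build_chain(collection_name):
    # Stand-in for: PGVector(connection_string=..., collection_name=collection_name, ...)
    # followed by RetrievalQA.from_chain_type(...). Data persists in Postgres.
    built.append(collection_name)
    return f"qa-chain-for-{collection_name}"

def get_chain(collection_name):
    if collection_name not in _chains:   # build once per collection, then reuse
        _chains[collection_name] = build_chain(collection_name)
    return _chains[collection_name]

chain = get_chain("project_a")
chain2 = get_chain("project_a")          # second call reuses the cached chain
print(chain, len(built))  # qa-chain-for-project_a 1
```

This replaces the `os.listdir("chroma-databases")` scan: instead of discovering folders on disk, you key the registry by collection name (or query pgvector's `langchain_pg_collection` table for the known collections).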
### System Info
I am postgres for storing embeddings of it | How to store QA object in a folder. | https://api.github.com/repos/langchain-ai/langchain/issues/17394/comments | 1 | 2024-02-12T07:26:58Z | 2024-02-14T01:49:02Z | https://github.com/langchain-ai/langchain/issues/17394 | 2,129,559,523 | 17,394 |
[
"langchain-ai",
"langchain"
] | Regarding the discrepancy between the official documentation and my experience, I have checked the versions of langchain and langchain core as suggested:
```python
python -m langchain_core.sys_info
```
The output is as follows:
```python
langchain_core: 0.1.13
langchain: 0.0.340
langchain_community: 0.0.13
langserve: Not Found
```
Additionally, I would like to clarify that the import statement **`from langchain.schema.agent import AgentFinish`** was derived from the demo in the **`OpenAIAssistantRunnable`** class of langchain. I utilized this import statement to resolve the issue of mismatched types encountered previously.
In summary, there appears to be an inconsistency between the package imported in the official documentation example and the package used in the demo of the **`OpenAIAssistantRunnable`** class in langchain.
```python
Example using custom tools and custom execution:
.. code-block:: python
from langchain_experimental.openai_assistant import OpenAIAssistantRunnable
from langchain.agents import AgentExecutor
from langchain.schema.agent import AgentFinish
from langchain.tools import E2BDataAnalysisTool
tools = [E2BDataAnalysisTool(api_key="...")]
agent = OpenAIAssistantRunnable.create_assistant(
name="langchain assistant e2b tool",
instructions="You are a personal math tutor. Write and run code to answer math questions.",
tools=tools,
model="gpt-4-1106-preview",
as_agent=True
)
def execute_agent(agent, tools, input):
tool_map = {tool.name: tool for tool in tools}
response = agent.invoke(input)
while not isinstance(response, AgentFinish):
tool_outputs = []
for action in response:
tool_output = tool_map[action.tool].invoke(action.tool_input)
tool_outputs.append({"output": tool_output, "tool_call_id": action.tool_call_id})
response = agent.invoke(
{
"tool_outputs": tool_outputs,
"run_id": action.run_id,
"thread_id": action.thread_id
}
)
return response
response = execute_agent(agent, tools, {"content": "What's 10 - 4 raised to the 2.7"})
next_response = execute_agent(agent, tools, {"content": "now add 17.241", "thread_id": response.thread_id})
```
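The control flow of `execute_agent` above — invoke repeatedly, feeding tool outputs back, until the response is an `AgentFinish` — can be exercised in miniature with stubs (everything below is a stand-in, not langchain code). Note that the `isinstance` check only behaves correctly when both sides refer to the same `AgentFinish` class object, which is exactly why the import path mismatch described in this issue matters:

```python
class AgentFinish:
    """Stand-in for langchain_core.agents.AgentFinish."""
    def __init__(self, output):
        self.output = output

class FakeAgent:
    """Stand-in agent: two tool-call turns, then a finish."""
    def __init__(self):
        self.turns = 0
    def invoke(self, payload):
        self.turns += 1
        if self.turns < 3:
            return ["tool-call"]        # agent asks for a tool invocation
        return AgentFinish("done")      # agent is finished

agent = FakeAgent()
response = agent.invoke({"content": "question"})
while not isinstance(response, AgentFinish):
    # in the real loop, tool outputs would be computed and passed back here
    response = agent.invoke({"tool_outputs": []})
print(response.output, agent.turns)  # done 3
```

If the docs example and the installed package export `AgentFinish` from different modules that are not re-exports of the same class, this loop never terminates — the symptom the original report describes.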
_Originally posted by @WindChaserInTheSunset in https://github.com/langchain-ai/langchain/issues/17367#issuecomment-1937756296_
| Regarding the discrepancy between the official documentation and my experience, I have checked the versions of langchain and langchain core as suggested: | https://api.github.com/repos/langchain-ai/langchain/issues/17392/comments | 1 | 2024-02-12T07:21:34Z | 2024-05-20T16:09:09Z | https://github.com/langchain-ai/langchain/issues/17392 | 2,129,553,655 | 17,392 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
I need to do additional work based on the primary-key field of the returned similar documents, but the `pk` field has been removed from `output_fields`. In the `similarity_search_with_score_by_vector` method, `output_fields` is set from `self.fields`, but `self.fields` does not contain `pk` because it is stripped out in `_extract_fields`.
### Error Message and Stack Trace (if applicable)
_No response_
### Description
`pk` is the most important field of a collection, so it should be returned in the `Document` object's metadata along with the other fields when `similarity_search` or a similar method is called.
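A possible workaround until this is addressed (the attribute names reflect the 0.1.x Milvus wrapper source and should be treated as assumptions): append the primary-key field back onto the wrapper's `fields` list after construction, so it is requested in `output_fields` again and lands in each `Document`'s metadata. Modeled here with a stand-in class:

```python
class FakeMilvusStore:
    """Stand-in modeling langchain's Milvus wrapper attributes (assumed names)."""
    def __init__(self):
        self._primary_field = "pk"
        self.fields = ["text", "vector"]   # pk already stripped by _extract_fields

store = FakeMilvusStore()
# Workaround: re-add the primary key so similarity_search requests it again.
if store._primary_field not in store.fields:
    store.fields.append(store._primary_field)
print(store.fields)  # ['text', 'vector', 'pk']
```

With the real wrapper, the same two lines would run against the constructed `Milvus(...)` instance before issuing searches; since `fields` is a plain list attribute, no subclassing is needed.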
### System Info
langchain==0.1.4
langchain-community==0.0.16
langchain-core==0.1.16
langchain-google-genai==0.0.5
langchain-openai==0.0.2 | Milvus VectorStore not returning primary key field value (PK) in Similarity Search document | https://api.github.com/repos/langchain-ai/langchain/issues/17390/comments | 1 | 2024-02-12T06:00:59Z | 2024-02-27T04:43:59Z | https://github.com/langchain-ai/langchain/issues/17390 | 2,129,473,165 | 17,390 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
import asyncio
import os
from typing import Any, Dict, List, Optional, Sequence, Type, Union
from uuid import UUID
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain.callbacks.base import AsyncCallbackHandler
from langchain.callbacks.manager import (
AsyncCallbackManagerForToolRun,
CallbackManagerForToolRun,
)
from langchain.pydantic_v1 import BaseModel, Field
from langchain.schema import AgentAction, AgentFinish
from langchain.tools import BaseTool
from langchain_core.agents import AgentAction, AgentFinish
from langchain_core.callbacks import (
AsyncCallbackManagerForToolRun,
CallbackManagerForToolRun,
)
from langchain_core.documents import Document
from langchain_core.messages import BaseMessage, SystemMessage
from langchain_core.outputs import ChatGenerationChunk, GenerationChunk, LLMResult
from langchain_core.prompts import (
ChatPromptTemplate,
HumanMessagePromptTemplate,
MessagesPlaceholder,
PromptTemplate,
)
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.tools import BaseTool, ToolException
from langchain_openai import ChatOpenAI
from loguru import logger
from tenacity import RetryCallState
# Simulate a custom tool
class CalculatorInput(BaseModel):
a: int = Field(description="first number")
b: int = Field(description="second number")
# Define async custom tool
class CustomCalculatorTool(BaseTool):
name = "Calculator"
description = "useful for when you need to answer questions about math"
args_schema: Type[BaseModel] = CalculatorInput
handle_tool_error = True
def _run(
self, a: int, b: int, run_manager: Optional[CallbackManagerForToolRun] = None
) -> str:
"""Use the tool."""
raise NotImplementedError("Calculator does not support sync")
async def _arun(
self,
a: int,
b: int,
run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
) -> str:
"""Use the tool asynchronously."""
if a == 0:
raise ToolException("a cannot be 0")
return a * b
# Custom handler to store data from the agent's execution ==> Want to store all of the data printed to the console when using `set_debug(True)`
class MyCustomAsyncHandler(AsyncCallbackHandler):
def __init__(self):
self.chain_start_data = []
self.chain_end_data = []
self.chain_error_data = []
self.tool_start_data = []
self.tool_end_data = []
self.tool_error_data = []
self.agent_action_data = []
self.agent_finish_data = []
async def on_llm_start(
self,
serialized: Dict[str, Any],
prompts: List[str],
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
metadata: Optional[Dict[str, Any]] = None,
**kwargs: Any,
) -> None:
logger.debug(
f"on_llm_start: serialized={serialized}, prompts={prompts}, run_id={run_id}, parent_run_id={parent_run_id}, tags={tags}, metadata={metadata}, kwargs={kwargs}"
)
async def on_chat_model_start(
self,
serialized: Dict[str, Any],
messages: List[List[BaseMessage]],
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
metadata: Optional[Dict[str, Any]] = None,
**kwargs: Any,
) -> Any:
raise NotImplementedError(
f"{self.__class__.__name__} does not implement `on_chat_model_start`"
)
# Note: This method intentionally raises NotImplementedError
async def on_llm_new_token(
self,
token: str,
*,
chunk: Optional[Union[GenerationChunk, ChatGenerationChunk]] = None,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
**kwargs: Any,
) -> None:
logger.debug(
f"on_llm_new_token: token={token}, chunk={chunk}, run_id={run_id}, parent_run_id={parent_run_id}, tags={tags}, kwargs={kwargs}"
)
async def on_llm_end(
self,
response: LLMResult,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
**kwargs: Any,
) -> None:
logger.debug(
f"on_llm_end: response={response}, run_id={run_id}, parent_run_id={parent_run_id}, tags={tags}, kwargs={kwargs}"
)
async def on_llm_error(
self,
error: BaseException,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
**kwargs: Any,
) -> None:
logger.debug(
f"on_llm_error: error={error}, run_id={run_id}, parent_run_id={parent_run_id}, tags={tags}, kwargs={kwargs}"
)
async def on_chain_start(
self,
serialized: Dict[str, Any],
inputs: Dict[str, Any],
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
metadata: Optional[Dict[str, Any]] = None,
**kwargs: Any,
) -> None:
logger.debug(
f"on_chain_start: serialized={serialized}, inputs={inputs}, run_id={run_id}, parent_run_id={parent_run_id}, tags={tags}, metadata={metadata}, kwargs={kwargs}"
)
async def on_chain_end(
self,
outputs: Dict[str, Any],
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
**kwargs: Any,
) -> None:
logger.debug(
f"on_chain_end: outputs={outputs}, run_id={run_id}, parent_run_id={parent_run_id}, tags={tags}, kwargs={kwargs}"
)
async def on_chain_error(
self,
error: BaseException,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
**kwargs: Any,
) -> None:
logger.debug(
f"on_chain_error: error={error}, run_id={run_id}, parent_run_id={parent_run_id}, tags={tags}, kwargs={kwargs}"
)
async def on_tool_start(
self,
serialized: Dict[str, Any],
input_str: str,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
metadata: Optional[Dict[str, Any]] = None,
inputs: Optional[Dict[str, Any]] = None,
**kwargs: Any,
) -> None:
logger.debug(
f"on_tool_start: serialized={serialized}, input_str={input_str}, run_id={run_id}, parent_run_id={parent_run_id}, tags={tags}, metadata={metadata}, inputs={inputs}, kwargs={kwargs}"
)
async def on_tool_end(
self,
output: str,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
**kwargs: Any,
) -> None:
logger.debug(
f"on_tool_end: output={output}, run_id={run_id}, parent_run_id={parent_run_id}, tags={tags}, kwargs={kwargs}"
)
async def on_tool_error(
self,
error: BaseException,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
**kwargs: Any,
) -> None:
logger.debug(
f"on_tool_error: error={error}, run_id={run_id}, parent_run_id={parent_run_id}, tags={tags}, kwargs={kwargs}"
)
async def on_text(
self,
text: str,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
**kwargs: Any,
) -> None:
logger.debug(
f"on_text: text={text}, run_id={run_id}, parent_run_id={parent_run_id}, tags={tags}, kwargs={kwargs}"
)
async def on_retry(
self,
retry_state: RetryCallState,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
**kwargs: Any,
) -> Any:
logger.debug(
f"on_retry: retry_state={retry_state}, run_id={run_id}, parent_run_id={parent_run_id}, kwargs={kwargs}"
)
async def on_agent_action(
self,
action: AgentAction,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
**kwargs: Any,
) -> None:
logger.debug(
f"on_agent_action: action={action}, run_id={run_id}, parent_run_id={parent_run_id}, tags={tags}, kwargs={kwargs}"
)
async def on_agent_finish(
self,
finish: AgentFinish,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
**kwargs: Any,
) -> None:
logger.debug(
f"on_agent_finish: finish={finish}, run_id={run_id}, parent_run_id={parent_run_id}, tags={tags}, kwargs={kwargs}"
)
async def on_retriever_start(
self,
serialized: Dict[str, Any],
query: str,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
metadata: Optional[Dict[str, Any]] = None,
**kwargs: Any,
) -> None:
logger.debug(
f"on_retriever_start: serialized={serialized}, query={query}, run_id={run_id}, parent_run_id={parent_run_id}, tags={tags}, metadata={metadata}, kwargs={kwargs}"
)
async def on_retriever_end(
self,
documents: Sequence[Document],
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
**kwargs: Any,
) -> None:
logger.debug(
f"on_retriever_end: documents={documents}, run_id={run_id}, parent_run_id={parent_run_id}, tags={tags}, kwargs={kwargs}"
)
async def on_retriever_error(
self,
error: BaseException,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
**kwargs: Any,
) -> None:
logger.debug(
f"on_retriever_error: error={error}, run_id={run_id}, parent_run_id={parent_run_id}, tags={tags}, kwargs={kwargs}"
)
# Methods to retrieve stored data
def get_chain_start_data(self) -> List[Dict]:
return self.chain_start_data
def get_chain_end_data(self) -> List[Dict]:
return self.chain_end_data
def get_chain_error_data(self) -> List[Dict]:
return self.chain_error_data
def get_tool_start_data(self) -> List[Dict]:
return self.tool_start_data
def get_tool_end_data(self) -> List[Dict]:
return self.tool_end_data
def get_tool_error_data(self) -> List[Dict]:
return self.tool_error_data
def get_agent_action_data(self) -> List[Dict]:
return self.agent_action_data
def get_agent_finish_data(self) -> List[Dict]:
return self.agent_finish_data
OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]
# Ensure the OpenAI API key is defined and raise an error if it's not
if OPENAI_API_KEY is None:
raise ValueError("OPENAI_API_KEY environment variable is not defined")
# Create list of tools
tools = [CustomCalculatorTool()]
# Create a prompt
prompt_messages = [
    SystemMessage(content=("""You are a math expert.""")),
HumanMessagePromptTemplate(
prompt=PromptTemplate(
template="""Multiply {a} by {b}""",
input_variables=["a", "b"],
)
),
MessagesPlaceholder("agent_scratchpad"),
]
prompt = ChatPromptTemplate(messages=prompt_messages)
# Define the LLM model
llm = ChatOpenAI(openai_api_key=OPENAI_API_KEY, model="gpt-4")
# Create an Agent that can be used to call the tools we defined
agent = create_openai_tools_agent(llm, tools, prompt)
# Custom Handler
custom_handler = MyCustomAsyncHandler()
# Create the agent executor with the custom handler
agent_executor = AgentExecutor(
agent=agent,
tools=tools,
callbacks=[custom_handler],
)
# Invoke the agent executor
run_result = asyncio.run(agent_executor.ainvoke({"a": 2, "b": 3}))
```
### Description
I want to store the logs of an AgentExecutor invocation so that I can load them into my database.
These logs are useful for analyzing and keeping track of:
1. The prompt in (string form) being sent to OpenAI's API
2. The tool that was used
3. The inputs sent for the tool that was used
4. **Knowing whether the tool was used successfully or not**
When you run an AgentExecutor with a tool and implement a custom callback handler (like in the code I have provided), some of the data is missing.
When I run this code, I would expect to see logs for all of the methods defined inside `MyCustomAsyncHandler`, such as `on_chain_start`, `on_chain_end`, `on_tool_start`, `on_tool_end`, etc.
Right now, the only methods that show logs are:
1. `on_chain_start`
2. `on_agent_action`
3. `on_agent_finish`
4. `on_chain_end`
When I run this same code and set `set_debug(True)`, I see logs being printed out for the following:
1. `[chain/start]`
2. `[chain/end]`
3. `[llm/start]`
4. `[llm/end]`
5. `[tool/start]`
6. `[tool/end]`
These I would expect to be captured by the analogous methods inside my custom handler. When creating `MyCustomAsyncHandler`, I copied the methods directly from LangChain's `AsyncCallbackHandler` class to make sure they were named properly.
Is there something I am overlooking or misunderstanding?
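A likely explanation (an assumption; verify against your LangChain version) is that callbacks passed to the `AgentExecutor` constructor are scoped to that object's own runs and are not propagated to child runs such as the LLM and tool calls, whereas callbacks passed at invocation time, e.g. `agent_executor.ainvoke(inputs, config={"callbacks": [custom_handler]})`, are propagated. The plain-Python sketch below illustrates that scoping distinction; it is an analogy, not LangChain's actual implementation:

```python
# Plain-Python analogy (NOT LangChain code) for constructor-scoped vs.
# run-scoped callbacks: only run-scoped callbacks reach child runs.
class Handler:
    def __init__(self):
        self.events = []


class Runner:
    def __init__(self, callbacks=None):
        self.constructor_callbacks = callbacks or []

    def invoke(self, x, callbacks=None):
        run_callbacks = callbacks or []
        # Both kinds of callback see this object's own chain events...
        for cb in self.constructor_callbacks + run_callbacks:
            cb.events.append("chain_start")
        # ...but only run-scoped callbacks are handed down to child runs.
        self._child_tool_run(run_callbacks)
        for cb in self.constructor_callbacks + run_callbacks:
            cb.events.append("chain_end")
        return x

    def _child_tool_run(self, callbacks):
        for cb in callbacks:
            cb.events.append("tool_start")


ctor_handler, run_handler = Handler(), Handler()
Runner(callbacks=[ctor_handler]).invoke(1)    # handler set in constructor
Runner().invoke(1, callbacks=[run_handler])   # handler passed per run
print(ctor_handler.events)  # ['chain_start', 'chain_end'] -- no tool events
print(run_handler.events)   # ['chain_start', 'tool_start', 'chain_end']
```

If this scoping is indeed the cause, moving the handler from the `AgentExecutor(...)` constructor into the `ainvoke` config should make the `on_llm_*` and `on_tool_*` callbacks fire.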
### System Info
`python -m langchain_core.sys_info`
> System Information
> ------------------
> > OS: Darwin
> > OS Version: Darwin Kernel Version 23.1.0: Mon Oct 9 21:27:24 PDT 2023; root:xnu-10002.41.9~6/RELEASE_ARM64_T6000
> > Python Version: 3.11.7 (main, Dec 4 2023, 18:10:11) [Clang 15.0.0 (clang-1500.1.0.2.5)]
>
> Package Information
> -------------------
> > langchain_core: 0.1.18
> > langchain: 0.1.5
> > langchain_community: 0.0.17
> > langchain_openai: 0.0.3
>
> Packages not installed (Not Necessarily a Problem)
> --------------------------------------------------
> The following packages were not found:
>
> > langgraph
> > langserve | Custom Callback Handlers does not return data as expected during AgentExecutor runs | https://api.github.com/repos/langchain-ai/langchain/issues/17389/comments | 4 | 2024-02-12T05:43:41Z | 2024-02-12T23:22:21Z | https://github.com/langchain-ai/langchain/issues/17389 | 2,129,457,966 | 17,389 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
@staticmethod
async def generate_stream(agent, prompt):
    print("...........>", prompt)
    async for chunk in agent.astream_log(
        {"input": prompt}, include_names=["ChatOpenAI"]
    ):
        # astream_log(chunk, include_names=['ChatOpenAI'])
        yield chunk
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am trying to stream the LLM's response token by token using the `astream_log` method as shown in the documentation, but it only streams the first LLM call, where the code for the Python REPL tool is generated. It does not stream the second LLM call, which produces the final response.

Attached is the output I received when I hit the API.
### System Info
langchain version:
langchain==0.1.4
langchain-community==0.0.19
langchain-core==0.1.21
langchain-experimental==0.0.49
langchain-openai==0.0.5
system: ubuntu 20 and docker
python 3.10 | Pandas DataFreame agent streaming issue | https://api.github.com/repos/langchain-ai/langchain/issues/17388/comments | 2 | 2024-02-12T05:33:27Z | 2024-07-09T22:22:24Z | https://github.com/langchain-ai/langchain/issues/17388 | 2,129,450,263 | 17,388 |
[
"langchain-ai",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
I am on an Azure Databricks notebook and I am getting this error. I tried different versions of LangChain and had no luck; any opinion?
I can normally see that the function is available in the notebook, but it fails in the environment.
**This is the code that I have:**

```python
import langchain
from langchain.llms import get_type_to_cls_dict

try:
    from langchain.llms import get_type_to_cls_dict
    print("The function 'get_type_to_cls_dict' is available in this version of langchain.")
except ImportError as e:
    print("The function 'get_type_to_cls_dict' is NOT available. Error:", e)
```
**The function 'get_type_to_cls_dict' is available in this version of langchain.**
**The issue starts here:**

```python
model_info = mlflow.langchain.log_model(
    full_chain,
    loader_fn=get_retriever,  # Load the retriever with DATABRICKS_TOKEN env as secret (for authentication).
    artifact_path="chain",
    registered_model_name=model_name,
    pip_requirements=[
        "mlflow==" + mlflow.__version__,
        # "langchain==" + langchain.__version__,
        "langchain==0.1.4",
        "databricks-vectorsearch",
        "pydantic==2.5.2 --no-binary pydantic",
        "cloudpickle==" + cloudpickle.__version__,
    ],
    input_example=input_df,
    signature=signature,
)

model = mlflow.langchain.load_model(model_info.model_uri)
model.invoke(dialog)
```
**This is the error that needs to be fixed:**
**ImportError: cannot import name 'get_type_to_cls_dict' from 'langchain.llms.loading'** (/local_disk0/.ephemeral_nfs/envs/pythonEnv-2798e36c-7aeb-43ef-842b-9eb535594f26/lib/python3.10/site-packages/langchain/llms/loading.py)
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
File <command-3707747759080680>, line 1
----> 1 model = mlflow.langchain.load_model(model_info.model_uri)
2 model.invoke(dialog)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-2798e36c-7aeb-43ef-842b-9eb535594f26/lib/python3.10/site-packages/mlflow/langchain/__init__.py:567, in load_model(model_uri, dst_path)
547 """
548 Load a LangChain model from a local file or a run.
549
(...)
564 :return: A LangChain model instance
565 """
566 local_model_path = _download_artifact_from_uri(artifact_uri=model_uri, output_path=dst_path)
--> 567 return _load_model_from_local_fs(local_model_path)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-2798e36c-7aeb-43ef-842b-9eb535594f26/lib/python3.10/site-packages/mlflow/langchain/__init__.py:542, in _load_model_from_local_fs(local_model_path)
540 flavor_conf = _get_flavor_configuration(model_path=local_model_path, flavor_name=FLAVOR_NAME)
541 _add_code_from_conf_to_system_path(local_model_path, flavor_conf)
--> 542 return _load_model(local_model_path, flavor_conf)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-2798e36c-7aeb-43ef-842b-9eb535594f26/lib/python3.10/site-packages/mlflow/langchain/__init__.py:429, in _load_model(local_model_path, flavor_conf)
427 model_load_fn = flavor_conf.get(_MODEL_LOAD_KEY)
428 if model_load_fn == _RUNNABLE_LOAD_KEY:
--> 429 return _load_runnables(local_model_path, flavor_conf)
430 if model_load_fn == _BASE_LOAD_KEY:
431 return _load_base_lcs(local_model_path, flavor_conf)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-2798e36c-7aeb-43ef-842b-9eb535594f26/lib/python3.10/site-packages/mlflow/langchain/runnables.py:373, in _load_runnables(path, conf)
371 model_data = conf.get(_MODEL_DATA_KEY, _MODEL_DATA_YAML_FILE_NAME)
372 if model_type in (x.__name__ for x in lc_runnable_with_steps_types()):
--> 373 return _load_runnable_with_steps(os.path.join(path, model_data), model_type)
374 if (
375 model_type in (x.__name__ for x in picklable_runnable_types())
376 or model_data == _MODEL_DATA_PKL_FILE_NAME
377 ):
378 return _load_from_pickle(os.path.join(path, model_data))
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-2798e36c-7aeb-43ef-842b-9eb535594f26/lib/python3.10/site-packages/mlflow/langchain/runnables.py:120, in _load_runnable_with_steps(file_path, model_type)
118 config = steps_conf.get(step)
119 # load model from the folder of the step
--> 120 runnable = _load_model_from_path(os.path.join(steps_path, step), config)
121 steps[step] = runnable
123 if model_type == RunnableSequence.__name__:
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-2798e36c-7aeb-43ef-842b-9eb535594f26/lib/python3.10/site-packages/mlflow/langchain/runnables.py:78, in _load_model_from_path(path, model_config)
76 model_load_fn = model_config.get(_MODEL_LOAD_KEY)
77 if model_load_fn == _RUNNABLE_LOAD_KEY:
---> 78 return _load_runnables(path, model_config)
79 if model_load_fn == _BASE_LOAD_KEY:
80 return _load_base_lcs(path, model_config)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-2798e36c-7aeb-43ef-842b-9eb535594f26/lib/python3.10/site-packages/mlflow/langchain/runnables.py:373, in _load_runnables(path, conf)
371 model_data = conf.get(_MODEL_DATA_KEY, _MODEL_DATA_YAML_FILE_NAME)
372 if model_type in (x.__name__ for x in lc_runnable_with_steps_types()):
--> 373 return _load_runnable_with_steps(os.path.join(path, model_data), model_type)
374 if (
375 model_type in (x.__name__ for x in picklable_runnable_types())
376 or model_data == _MODEL_DATA_PKL_FILE_NAME
377 ):
378 return _load_from_pickle(os.path.join(path, model_data))
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-2798e36c-7aeb-43ef-842b-9eb535594f26/lib/python3.10/site-packages/mlflow/langchain/runnables.py:120, in _load_runnable_with_steps(file_path, model_type)
118 config = steps_conf.get(step)
119 # load model from the folder of the step
--> 120 runnable = _load_model_from_path(os.path.join(steps_path, step), config)
121 steps[step] = runnable
123 if model_type == RunnableSequence.__name__:
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-2798e36c-7aeb-43ef-842b-9eb535594f26/lib/python3.10/site-packages/mlflow/langchain/runnables.py:78, in _load_model_from_path(path, model_config)
76 model_load_fn = model_config.get(_MODEL_LOAD_KEY)
77 if model_load_fn == _RUNNABLE_LOAD_KEY:
---> 78 return _load_runnables(path, model_config)
79 if model_load_fn == _BASE_LOAD_KEY:
80 return _load_base_lcs(path, model_config)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-2798e36c-7aeb-43ef-842b-9eb535594f26/lib/python3.10/site-packages/mlflow/langchain/runnables.py:373, in _load_runnables(path, conf)
371 model_data = conf.get(_MODEL_DATA_KEY, _MODEL_DATA_YAML_FILE_NAME)
372 if model_type in (x.__name__ for x in lc_runnable_with_steps_types()):
--> 373 return _load_runnable_with_steps(os.path.join(path, model_data), model_type)
374 if (
375 model_type in (x.__name__ for x in picklable_runnable_types())
376 or model_data == _MODEL_DATA_PKL_FILE_NAME
377 ):
378 return _load_from_pickle(os.path.join(path, model_data))
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-2798e36c-7aeb-43ef-842b-9eb535594f26/lib/python3.10/site-packages/mlflow/langchain/runnables.py:120, in _load_runnable_with_steps(file_path, model_type)
118 config = steps_conf.get(step)
119 # load model from the folder of the step
--> 120 runnable = _load_model_from_path(os.path.join(steps_path, step), config)
121 steps[step] = runnable
123 if model_type == RunnableSequence.__name__:
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-2798e36c-7aeb-43ef-842b-9eb535594f26/lib/python3.10/site-packages/mlflow/langchain/runnables.py:82, in _load_model_from_path(path, model_config)
80 return _load_base_lcs(path, model_config)
81 if model_load_fn == _CONFIG_LOAD_KEY:
---> 82 return _load_model_from_config(path, model_config)
83 raise MlflowException(f"Unsupported model load key {model_load_fn}")
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-2798e36c-7aeb-43ef-842b-9eb535594f26/lib/python3.10/site-packages/mlflow/langchain/runnables.py:49, in _load_model_from_config(path, model_config)
47 from langchain.chains.loading import load_chain
48 from langchain.chains.loading import type_to_loader_dict as chains_type_to_loader_dict
---> 49 from langchain.llms.loading import get_type_to_cls_dict as llms_get_type_to_cls_dict
50 from langchain.llms.loading import load_llm
51 from langchain.prompts.loading import load_prompt
ImportError: cannot import name 'get_type_to_cls_dict' from 'langchain.llms.loading' (/local_disk0/.ephemeral_nfs/envs/pythonEnv-2798e36c-7aeb-43ef-842b-9eb535594f26/lib/python3.10/site-packages/langchain/llms/loading.py)
### Idea or request for content:
I need support to get the issue described above fixed.
<img width="775" alt="image" src="https://github.com/langchain-ai/langchain/assets/152225892/fcca5f6e-de18-42c7-ae12-9b48f5caea4d">
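Per the traceback, `mlflow/langchain/runnables.py` imports `get_type_to_cls_dict` from `langchain.llms.loading`, so LangChain releases that moved that symbol break model loading; pinning a LangChain version that still exposes it at that path (or upgrading MLflow) is the practical fix. For code you control yourself, a defensive resolution shim such as the sketch below can bridge symbols that move between releases. This is illustrative only, it does not change MLflow's internal import, and the LangChain paths named in the comment are candidate locations, not guarantees:

```python
import importlib


def resolve(*candidates):
    """Return the first 'module:attr' from `candidates` that can be imported."""
    for path in candidates:
        module_name, _, attr = path.partition(":")
        try:
            return getattr(importlib.import_module(module_name), attr)
        except (ImportError, AttributeError):
            continue
    raise ImportError(f"none of {candidates} could be resolved")


# Demonstrated here with stdlib modules; for the issue above one would try e.g.
# "langchain.llms.loading:get_type_to_cls_dict" and then
# "langchain_community.llms:get_type_to_cls_dict" as fallback candidates.
join = resolve("nonexistent.module:join", "os.path:join")
print(join("a", "b"))
```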
| Issue about get_type_to_cls_dict in azure databricks notebook | https://api.github.com/repos/langchain-ai/langchain/issues/17384/comments | 5 | 2024-02-12T02:38:48Z | 2024-06-11T16:08:03Z | https://github.com/langchain-ai/langchain/issues/17384 | 2,129,329,631 | 17,384 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain.llms.bedrock import Bedrock

temperature = 0.0
model_id = "anthropic.claude-v2"
model_params = {
    "max_tokens_to_sample": 2000,
    "temperature": temperature,
    "stop_sequences": ["\n\nHuman:"],
}

llm = Bedrock(
    model_id=model_id,
    client=boto3_bedrock,
    model_kwargs=model_params,
)

retriever = MultiQueryRetriever.from_llm(
    retriever=db.as_retriever(), llm=llm
)

question = "tell me about llama 2?"
docs = retriever.get_relevant_documents(query=question)
len(docs)
```
### Error Message and Stack Trace (if applicable)
--------------------------------------------------------------------------
ValidationException Traceback (most recent call last)
File /opt/conda/lib/python3.10/site-packages/langchain/embeddings/bedrock.py:121, in BedrockEmbeddings._embedding_func(self, text)
120 try:
--> 121 response = self.client.invoke_model(
122 body=body,
123 modelId=self.model_id,
124 accept="application/json",
125 contentType="application/json",
126 )
127 response_body = json.loads(response.get("body").read())
File /opt/conda/lib/python3.10/site-packages/botocore/client.py:535, in ClientCreator._create_api_method.<locals>._api_call(self, *args, **kwargs)
534 # The "self" in this scope is referring to the BaseClient.
--> 535 return self._make_api_call(operation_name, kwargs)
File /opt/conda/lib/python3.10/site-packages/botocore/client.py:980, in BaseClient._make_api_call(self, operation_name, api_params)
979 error_class = self.exceptions.from_code(error_code)
--> 980 raise error_class(parsed_response, operation_name)
981 else:
ValidationException: An error occurred (ValidationException) when calling the InvokeModel operation: Malformed input request: expected minLength: 1, actual: 0, please reformat your input and try again.
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
Cell In[17], line 3
1 question = "tell me about llama 2?"
----> 3 docs = retriever.get_relevant_documents(query=question)
4 len(docs)
File /opt/conda/lib/python3.10/site-packages/langchain/schema/retriever.py:211, in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)
209 except Exception as e:
210 run_manager.on_retriever_error(e)
--> 211 raise e
212 else:
213 run_manager.on_retriever_end(
214 result,
215 **kwargs,
216 )
File /opt/conda/lib/python3.10/site-packages/langchain/schema/retriever.py:204, in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)
202 _kwargs = kwargs if self._expects_other_args else {}
203 if self._new_arg_supported:
--> 204 result = self._get_relevant_documents(
205 query, run_manager=run_manager, **_kwargs
206 )
207 else:
208 result = self._get_relevant_documents(query, **_kwargs)
File /opt/conda/lib/python3.10/site-packages/langchain/retrievers/multi_query.py:163, in MultiQueryRetriever._get_relevant_documents(self, query, run_manager)
154 """Get relevant documents given a user query.
155
156 Args:
(...)
160 Unique union of relevant documents from all generated queries
161 """
162 queries = self.generate_queries(query, run_manager)
--> 163 documents = self.retrieve_documents(queries, run_manager)
164 return self.unique_union(documents)
File /opt/conda/lib/python3.10/site-packages/langchain/retrievers/multi_query.py:198, in MultiQueryRetriever.retrieve_documents(self, queries, run_manager)
196 documents = []
197 for query in queries:
--> 198 docs = self.retriever.get_relevant_documents(
199 query, callbacks=run_manager.get_child()
200 )
201 documents.extend(docs)
202 return documents
File /opt/conda/lib/python3.10/site-packages/langchain/schema/retriever.py:211, in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)
209 except Exception as e:
210 run_manager.on_retriever_error(e)
--> 211 raise e
212 else:
213 run_manager.on_retriever_end(
214 result,
215 **kwargs,
216 )
File /opt/conda/lib/python3.10/site-packages/langchain/schema/retriever.py:204, in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)
202 _kwargs = kwargs if self._expects_other_args else {}
203 if self._new_arg_supported:
--> 204 result = self._get_relevant_documents(
205 query, run_manager=run_manager, **_kwargs
206 )
207 else:
208 result = self._get_relevant_documents(query, **_kwargs)
File /opt/conda/lib/python3.10/site-packages/langchain/schema/vectorstore.py:585, in VectorStoreRetriever._get_relevant_documents(self, query, run_manager)
581 def _get_relevant_documents(
582 self, query: str, *, run_manager: CallbackManagerForRetrieverRun
583 ) -> List[Document]:
584 if self.search_type == "similarity":
--> 585 docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
586 elif self.search_type == "similarity_score_threshold":
587 docs_and_similarities = (
588 self.vectorstore.similarity_search_with_relevance_scores(
589 query, **self.search_kwargs
590 )
591 )
File /opt/conda/lib/python3.10/site-packages/langchain/vectorstores/pgvector.py:335, in PGVector.similarity_search(self, query, k, filter, **kwargs)
318 def similarity_search(
319 self,
320 query: str,
(...)
323 **kwargs: Any,
324 ) -> List[Document]:
325 """Run similarity search with PGVector with distance.
326
327 Args:
(...)
333 List of Documents most similar to the query.
334 """
--> 335 embedding = self.embedding_function.embed_query(text=query)
336 return self.similarity_search_by_vector(
337 embedding=embedding,
338 k=k,
339 filter=filter,
340 )
File /opt/conda/lib/python3.10/site-packages/langchain/embeddings/bedrock.py:156, in BedrockEmbeddings.embed_query(self, text)
147 def embed_query(self, text: str) -> List[float]:
148 """Compute query embeddings using a Bedrock model.
149
150 Args:
(...)
154 Embeddings for the text.
155 """
--> 156 return self._embedding_func(text)
File /opt/conda/lib/python3.10/site-packages/langchain/embeddings/bedrock.py:130, in BedrockEmbeddings._embedding_func(self, text)
128 return response_body.get("embedding")
129 except Exception as e:
--> 130 raise ValueError(f"Error raised by inference endpoint: {e}")
ValueError: Error raised by inference endpoint: An error occurred (ValidationException) when calling the InvokeModel operation: Malformed input request: expected minLength: 1, actual: 0, please reformat your input and try again.
### Description
I'm using MultiQueryRetriever with a Bedrock Claude model. I got:

`ValueError: Error raised by inference endpoint: An error occurred (ValidationException) when calling the InvokeModel operation: Malformed input request: expected minLength: 1, actual: 0, please reformat your input and try again.`

Please take a look. Thank you.
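The `expected minLength: 1, actual: 0` validation error indicates that an empty string reached the Bedrock embeddings endpoint. A plausible cause (an assumption; check the generated queries in your logs) is that the query-generation LLM's output contains a blank line, which line-based parsing of the output turns into an empty query that is then sent for embedding. A minimal stdlib sketch of that failure mode and a filtering workaround:

```python
# Hedged sketch of the likely failure mode: the LLM output is split into
# candidate queries line by line, so a blank line becomes an empty query,
# and Bedrock rejects embedding requests for empty strings
# ("expected minLength: 1, actual: 0").  The sample output is illustrative.
llm_output = "1. What is Llama 2?\n\n2. Describe the Llama 2 model."

naive_queries = llm_output.split("\n")
print(naive_queries)  # ['1. What is Llama 2?', '', '2. Describe the Llama 2 model.']

# Filtering empty lines before retrieval avoids the empty embedding request:
queries = [q.strip() for q in llm_output.split("\n") if q.strip()]
print(queries)  # ['1. What is Llama 2?', '2. Describe the Llama 2 model.']
```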
### System Info
!pip show langchain
Name: langchain
Version: 0.0.318
Summary: Building applications with LLMs through composability
Home-page: https://github.com/langchain-ai/langchain
Author:
Author-email:
License: MIT
Location: /opt/conda/lib/python3.10/site-packages
Requires: aiohttp, anyio, async-timeout, dataclasses-json, jsonpatch, langsmith, numpy, pydantic, PyYAML, requests, SQLAlchemy, tenacity
Required-by: jupyter_ai, jupyter_ai_magics | MultiQueryRetriever with bedrock claude | https://api.github.com/repos/langchain-ai/langchain/issues/17382/comments | 3 | 2024-02-11T21:38:03Z | 2024-06-24T19:21:01Z | https://github.com/langchain-ai/langchain/issues/17382 | 2,129,180,857 | 17,382 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain_google_vertexai import ChatVertexAI, VertexAI, VertexAIEmbeddings

llm_chain = LLMChain(llm=llm, prompt=prompt_template)
res = llm_chain.predict(user_prompt=user_prompt)
```
### Error Message and Stack Trace (if applicable)
The run errors with:

`[llm/error] [1:chain:LLMChain > 2:llm:VertexAI] [4.64s] LLM run errored with error: "TypeError(\"Additional kwargs key Finance already exists in left dict and value has unsupported type <class 'float'>.\")" Traceback (most recent call last):`
### Description
I'm trying to use the text-unicorn model through Vertex AI while setting the `stream` parameter to true. With every chunk generated by the LLM, the `generation_info` dict contains key-value pairs where the key is the same but the value differs between the returned generations. Accordingly, a runtime error is raised and no proper answer is returned from the LLM.
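The failure can be reproduced with a simplified stand-in for the chunk-merging logic (illustrative only, not LangChain's actual `merge_dicts` source): when two streamed chunks carry the same `generation_info` key with different float values, the merge has no rule for combining them and raises:

```python
# Simplified stand-in for the merge behaviour described above -- NOT the
# real langchain_core implementation, just an illustration of the error path.
def merge_dicts(left: dict, right: dict) -> dict:
    merged = dict(left)
    for k, v in right.items():
        if k not in merged or merged[k] is None:
            merged[k] = v
        elif merged[k] == v:
            continue
        elif isinstance(v, str):
            merged[k] += v  # strings are concatenated across chunks
        else:
            raise TypeError(
                f"Additional kwargs key {k} already exists in left dict and "
                f"value has unsupported type {type(v)}."
            )
    return merged


# Two streamed chunks whose generation_info carries the same key with
# different float scores trigger the error:
chunk1 = {"Finance": 0.1}
chunk2 = {"Finance": 0.2}
try:
    merge_dicts(chunk1, chunk2)
except TypeError as e:
    print(e)
```

String values merge by concatenation and identical values merge silently, which is why this only surfaces for per-chunk keys like safety or category scores that are floats and vary between chunks.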
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.19045
> Python Version: 3.9.13 (tags/v3.9.13:6de2ca5, May 17 2022, 16:36:42) [MSC v.1929 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.15
> langchain: 0.0.354
> langchain_community: 0.0.15
> langchain_benchmarks: 0.0.10
> langchain_experimental: 0.0.47
> langchain_google_genai: 0.0.2
> langchain_google_vertexai: 0.0.2
> langchainhub: 0.1.14
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | merge_dicts in _merge can't merge different values of instance float and raises a type error | https://api.github.com/repos/langchain-ai/langchain/issues/17376/comments | 4 | 2024-02-11T10:18:31Z | 2024-05-21T16:09:11Z | https://github.com/langchain-ai/langchain/issues/17376 | 2,128,927,696 | 17,376 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_community.agent_toolkits.amadeus.toolkit import AmadeusToolkit
from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent
llm = ChatGoogleGenerativeAI(model="gemini-pro")
toolkit = AmadeusToolkit(llm=llm)
tools = toolkit.get_tools()
prompt = hub.pull("sirux21/react")
agent = create_react_agent(llm, tools, prompt)
val = {
"input": "Find flights from NYC to LAX tomorrow",
}
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
)
agent_executor.invoke(val)
```
### Error Message and Stack Trace (if applicable)
NYC is too broad of a location, I should narrow it down to a specific airport
Action: closest_airport
Action Input: {
"location": "New York City, NY"
}content='```json\n{\n "iataCode": "JFK"\n}\n```'JFK is the closest airport to NYC, I will use that as the origin airport
Action: single_flight_search
Action Input: {
"originLocationCode": "JFK",
"destinationLocationCode": "LAX",
"departureDateTimeEarliest": "2023-06-08T00:00:00",
"departureDateTimeLatest": "2023-06-08T23:59:59"
}Traceback (most recent call last):
File "C:\Users\sirux21\Nextcloud\CodeRed-Oddysey\CodeRed-Odyssey\python\main4.py", line 24, in <module>
agent_executor.invoke(val)
File "C:\Users\sirux21\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\chains\base.py", line 162, in invoke
raise e
File "C:\Users\sirux21\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\chains\base.py", line 156, in invoke
self._call(inputs, run_manager=run_manager)
File "C:\Users\sirux21\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\agents\agent.py", line 1391, in _call
next_step_output = self._take_next_step(
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sirux21\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\agents\agent.py", line 1097, in _take_next_step
[
File "C:\Users\sirux21\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\agents\agent.py", line 1097, in <listcomp>
[
File "C:\Users\sirux21\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\agents\agent.py", line 1182, in _iter_next_step
yield self._perform_agent_action(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sirux21\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\agents\agent.py", line 1204, in _perform_agent_action
observation = tool.run(
^^^^^^^^^
File "C:\Users\sirux21\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain_core\tools.py", line 364, in run
raise e
File "C:\Users\sirux21\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain_core\tools.py", line 355, in run
parsed_input = self._parse_input(tool_input)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sirux21\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain_core\tools.py", line 258, in _parse_input
input_args.validate({key_: tool_input})
File "C:\Users\sirux21\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\pydantic\v1\main.py", line 711, in validate
return cls(**value)
^^^^^^^^^^^^
File "C:\Users\sirux21\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\pydantic\v1\main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 3 validation errors for FlightSearchSchema
destinationLocationCode
field required (type=value_error.missing)
departureDateTimeEarliest
field required (type=value_error.missing)
departureDateTimeLatest
field required (type=value_error.missing)
### Description
FlightSearchSchema is unable to parse the input
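A hedged sketch of a possible pre-processing step (the helper name is mine, not part of LangChain): the model emits the Action Input as a single JSON string, so decoding it into a dict before schema validation would give `FlightSearchSchema` the individual fields it expects instead of one opaque string:

```python
import json

# Hypothetical helper: decode a ReAct-style JSON "Action Input" string into a
# dict of keyword fields before handing it to the tool's pydantic schema.
def parse_action_input(tool_input):
    if isinstance(tool_input, str):
        try:
            return json.loads(tool_input)
        except json.JSONDecodeError:
            return {"query": tool_input}  # fall back to a single free-text field
    return tool_input

raw = '{"originLocationCode": "JFK", "destinationLocationCode": "LAX"}'
print(parse_action_input(raw)["destinationLocationCode"])  # → LAX
```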
### System Info
langchain==0.1.6
langchain-community==0.0.19
langchain-core==0.1.22
langchain-google-genai==0.0.8
langchainhub==0.1.14
windows
3.11 | Amadeus searchflight not working | https://api.github.com/repos/langchain-ai/langchain/issues/17375/comments | 5 | 2024-02-11T01:38:03Z | 2024-03-07T01:24:06Z | https://github.com/langchain-ai/langchain/issues/17375 | 2,128,789,064 | 17,375 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
import os
from langchain.schema import SystemMessage, HumanMessage
from langchain_openai import AzureChatOpenAI
from langchain.callbacks import get_openai_callback

azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
AZURE_OPENAI_API_KEY = os.getenv("AZURE_OPENAI_API_KEY")

# Create an instance of chat llm
llm = AzureChatOpenAI(
    azure_endpoint=azure_endpoint,
    api_key=AZURE_OPENAI_API_KEY,
    api_version="2023-05-15",
    azure_deployment="gpt-3.5-turbo",
    model="gpt-3.5-turbo",
)

messages = [
    SystemMessage(
        content=(
            "You are ExpertGPT, an AGI system capable of "
            "anything except answering questions about cheese. "
            "It turns out that AGI does not fathom cheese as a "
            "concept, the reason for this is a mystery."
        )
    ),
    HumanMessage(content="Tell me about parmigiano, the Italian cheese!")
]

with get_openai_callback() as cb:
    res = llm.invoke(messages)

# print the response
print(res.content)
# print the total tokens used
print(cb.total_tokens)
```
### Error Message and Stack Trace (if applicable)
HTTP Request: POST https://oxcxxxxxxx-dev.openai.azure.com/openai/deployments/gpt-3.5-turbo/chat/completions?api-version=2023-05-15 "HTTP/1.1 401 Unauthorized"
DEBUG:httpcore.http11:receive_response_body.started request=<Request [b'POST']>
receive_response_body.started request=<Request [b'POST']>
DEBUG:httpcore.http11:receive_response_body.complete
receive_response_body.complete
DEBUG:httpcore.http11:response_closed.started
response_closed.started
DEBUG:httpcore.http11:response_closed.complete
response_closed.complete
DEBUG:openai._base_client:HTTP Request: POST https://oxcxxxxxxx-dev.openai.azure.com/openai/deployments/gpt-3.5-turbo/chat/completions?api-version=2023-05-15 "401 Unauthorized"
HTTP Request: POST https://oxcxxxxxxx-dev.openai.azure.com/openai/deployments/gpt-3.5-turbo/chat/completions?api-version=2023-05-15 "401 Unauthorized"
DEBUG:openai._base_client:Encountered httpx.HTTPStatusError
Traceback (most recent call last):
File "/home/mlakka/.local/lib/python3.10/site-packages/openai/_base_client.py", line 959, in _request
response.raise_for_status()
File "/home/mlakka/.local/lib/python3.10/site-packages/httpx/_models.py", line 759, in raise_for_status
raise HTTPStatusError(message, request=request, response=self)
httpx.HTTPStatusError: Client error '401 Unauthorized' for url 'https://oxcxxxxxxx-dev.openai.azure.com/openai/deployments/gpt-3.5-turbo/chat/completions?api-version=2023-05-15'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401
Encountered httpx.HTTPStatusError
Traceback (most recent call last):
File "/home/mlakka/.local/lib/python3.10/site-packages/openai/_base_client.py", line 959, in _request
response.raise_for_status()
File "/home/mlakka/.local/lib/python3.10/site-packages/httpx/_models.py", line 759, in raise_for_status
raise HTTPStatusError(message, request=request, response=self)
httpx.HTTPStatusError: Client error '401 Unauthorized' for url 'https://oxcxxxxxxx-dev.openai.azure.com/openai/deployments/gpt-3.5-turbo/chat/completions?api-version=2023-05-15'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401
DEBUG:openai._base_client:Not retrying
Not retrying
DEBUG:openai._base_client:Re-raising status error
Re-raising status error
Error is coming from here
### Description
My key works for other (non-LangChain) calls, but with LangChain it fails with the error above. Please help. By the way, I am using Azure OpenAI.
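As a narrowing step (everything below is a local sanity check, not a fix, and the helper name is mine), it can help to first verify that the environment variables LangChain reads are actually populated and point at a real Azure OpenAI resource endpoint — a 401 with an "audience is incorrect" message often means the key/endpoint pair doesn't match:

```python
import os

# Hypothetical diagnostic: list obvious problems with the Azure OpenAI
# endpoint/key pair before involving LangChain at all.
def check_azure_env(endpoint, key):
    problems = []
    if not key:
        problems.append("AZURE_OPENAI_API_KEY is empty")
    if not endpoint:
        problems.append("AZURE_OPENAI_ENDPOINT is empty")
    elif not (endpoint.startswith("https://") and ".openai.azure.com" in endpoint):
        problems.append("endpoint does not look like an Azure OpenAI resource URL")
    return problems

print(check_azure_env(os.getenv("AZURE_OPENAI_ENDPOINT"),
                      os.getenv("AZURE_OPENAI_API_KEY")))
```

An empty list means the basics look right and the problem is more likely on the Azure side (expired key, wrong resource, or a gateway in front of it).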
AuthenticationError Traceback (most recent call last)
Cell In[6], line 32
18 messages = [
19 SystemMessage(
20 content=(
(...)
27 HumanMessage(content="Tell me about parmigiano, the Italian cheese!")
28 ]
30 with get_openai_callback() as cb:
---> 32 res = llm.invoke(messages)
34 # print the response
35 print(res.content)
File ~/.local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:166, in BaseChatModel.invoke(self, input, config, stop, **kwargs)
155 def invoke(
156 self,
157 input: LanguageModelInput,
(...)
161 **kwargs: Any,
162 ) -> BaseMessage:
163 config = ensure_config(config)
164 return cast(
165 ChatGeneration,
--> 166 self.generate_prompt(
167 [self._convert_input(input)],
168 stop=stop,
169 callbacks=config.get("callbacks"),
170 tags=config.get("tags"),
171 metadata=config.get("metadata"),
172 run_name=config.get("run_name"),
173 **kwargs,
174 ).generations[0][0],
175 ).message
File ~/.local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:544, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
536 def generate_prompt(
537 self,
538 prompts: List[PromptValue],
(...)
541 **kwargs: Any,
542 ) -> LLMResult:
543 prompt_messages = [p.to_messages() for p in prompts]
--> 544 return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File ~/.local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:408, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
406 if run_managers:
407 run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 408 raise e
409 flattened_outputs = [
410 LLMResult(generations=[res.generations], llm_output=res.llm_output)
411 for res in results
412 ]
413 llm_output = self._combine_llm_outputs([res.llm_output for res in results])
File ~/.local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:398, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
395 for i, m in enumerate(messages):
396 try:
397 results.append(
--> 398 self._generate_with_cache(
399 m,
400 stop=stop,
401 run_manager=run_managers[i] if run_managers else None,
402 **kwargs,
403 )
404 )
405 except BaseException as e:
406 if run_managers:
File ~/.local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:577, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
573 raise ValueError(
574 "Asked to cache, but no cache found at `langchain.cache`."
575 )
576 if new_arg_supported:
--> 577 return self._generate(
578 messages, stop=stop, run_manager=run_manager, **kwargs
579 )
580 else:
581 return self._generate(messages, stop=stop, **kwargs)
File ~/.local/lib/python3.10/site-packages/langchain_openai/chat_models/base.py:451, in ChatOpenAI._generate(self, messages, stop, run_manager, stream, **kwargs)
445 message_dicts, params = self._create_message_dicts(messages, stop)
446 params = {
447 **params,
448 **({"stream": stream} if stream is not None else {}),
449 **kwargs,
450 }
--> 451 response = self.client.create(messages=message_dicts, **params)
452 return self._create_chat_result(response)
File ~/.local/lib/python3.10/site-packages/openai/_utils/_utils.py:275, in required_args.<locals>.inner.<locals>.wrapper(*args, **kwargs)
273 msg = f"Missing required argument: {quote(missing[0])}"
274 raise TypeError(msg)
--> 275 return func(*args, **kwargs)
File ~/.local/lib/python3.10/site-packages/openai/resources/chat/completions.py:663, in Completions.create(self, messages, model, frequency_penalty, function_call, functions, logit_bias, logprobs, max_tokens, n, presence_penalty, response_format, seed, stop, stream, temperature, tool_choice, tools, top_logprobs, top_p, user, extra_headers, extra_query, extra_body, timeout)
611 @required_args(["messages", "model"], ["messages", "model", "stream"])
612 def create(
613 self,
(...)
661 timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
662 ) -> ChatCompletion | Stream[ChatCompletionChunk]:
--> 663 return self._post(
664 "/chat/completions",
665 body=maybe_transform(
666 {
667 "messages": messages,
668 "model": model,
669 "frequency_penalty": frequency_penalty,
670 "function_call": function_call,
671 "functions": functions,
672 "logit_bias": logit_bias,
673 "logprobs": logprobs,
674 "max_tokens": max_tokens,
675 "n": n,
676 "presence_penalty": presence_penalty,
677 "response_format": response_format,
678 "seed": seed,
679 "stop": stop,
680 "stream": stream,
681 "temperature": temperature,
682 "tool_choice": tool_choice,
683 "tools": tools,
684 "top_logprobs": top_logprobs,
685 "top_p": top_p,
686 "user": user,
687 },
688 completion_create_params.CompletionCreateParams,
689 ),
690 options=make_request_options(
691 extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
692 ),
693 cast_to=ChatCompletion,
694 stream=stream or False,
695 stream_cls=Stream[ChatCompletionChunk],
696 )
File ~/.local/lib/python3.10/site-packages/openai/_base_client.py:1200, in SyncAPIClient.post(self, path, cast_to, body, options, files, stream, stream_cls)
1186 def post(
1187 self,
1188 path: str,
(...)
1195 stream_cls: type[_StreamT] | None = None,
1196 ) -> ResponseT | _StreamT:
1197 opts = FinalRequestOptions.construct(
1198 method="post", url=path, json_data=body, files=to_httpx_files(files), **options
1199 )
-> 1200 return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File ~/.local/lib/python3.10/site-packages/openai/_base_client.py:889, in SyncAPIClient.request(self, cast_to, options, remaining_retries, stream, stream_cls)
880 def request(
881 self,
882 cast_to: Type[ResponseT],
(...)
887 stream_cls: type[_StreamT] | None = None,
888 ) -> ResponseT | _StreamT:
--> 889 return self._request(
890 cast_to=cast_to,
891 options=options,
892 stream=stream,
893 stream_cls=stream_cls,
894 remaining_retries=remaining_retries,
895 )
File ~/.local/lib/python3.10/site-packages/openai/_base_client.py:980, in SyncAPIClient._request(self, cast_to, options, remaining_retries, stream, stream_cls)
977 err.response.read()
979 log.debug("Re-raising status error")
--> 980 raise self._make_status_error_from_response(err.response) from None
982 return self._process_response(
983 cast_to=cast_to,
984 options=options,
(...)
987 stream_cls=stream_cls,
988 )
AuthenticationError: Error code: 401 - {'statusCode': 401, 'message': 'Unauthorized. Access token is missing, invalid, audience is incorrect (https://cognitiveservices.azure.com/), or have expired.'}
### System Info
openai==1.12.0
langchain==0.1.6
langchain-community==0.0.19
langchain-core==0.1.22
langchain-openai==0.0.5 | Keep getting AuthenticationError: Error code: 401 - {'statusCode': 401, 'message': 'Unauthorized. Access token is missing, invalid, audience is incorrect (https://cognitiveservices.azure.com), or have expired.'} | https://api.github.com/repos/langchain-ai/langchain/issues/17373/comments | 6 | 2024-02-10T22:50:16Z | 2024-07-22T15:25:14Z | https://github.com/langchain-ai/langchain/issues/17373 | 2,128,743,120 | 17,373 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain_community.vectorstores import Milvus
vector_db = Milvus.from_texts(
texts=str_list,
embedding=embeddings,
connection_args={
"uri": "zilliz_cloud_uri",
"token": "zilliz_cloud_token"
},
)
```
### Error Message and Stack Trace (if applicable)
E0210 23:13:44.149694000 8088408832 hpack_parser.cc:993] Error parsing 'content-type' metadata: invalid value
[__internal_register] retry:4, cost: 0.27s, reason: <_InactiveRpcError: StatusCode.UNKNOWN, Stream removed>
<img width="1440" alt="Screenshot 1402-11-21 at 23 20 28" src="https://github.com/langchain-ai/langchain/assets/69215813/22e7e225-848f-4689-a413-e2ef8e9998b2">
### Description
OS== macOS 14
pymilvus==2.3.4
langchain==0.1.3
langchain-community==0.0.15
pyarrow>=12.0.0
NOTE: pyarrow must be installed manually; otherwise the code will throw an error.
=========================
I am using the Milvus module from langchain_community as the vector database. The code seems to hit gRPC-related errors in my local environment. Moreover, within the Colab environment I am facing this error:
```python
584 end = min(i + batch_size, total_count)
585 # Convert dict to list of lists batch for insertion
--> 586 insert_list = [insert_dict[x][i:end] for x in self.fields]
587 # Insert into the collection.
588 try:
KeyError: 'year'
```
The error originates from:
File "/Users/moeinmn/anaconda3/lib/python3.11/site-packages/langchain_community/vectorstores/milvus.py", line 904, in from_texts
vector_db = cls(
^^^^
File "/Users/moeinmn/anaconda3/lib/python3.11/site-packages/langchain_community/vectorstores/milvus.py", line 179, in __init__
self.alias = self._create_connection_alias(connection_args)
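One hedged guess at the `KeyError: 'year'` (the helper below is an illustration, not LangChain code): Milvus derives a fixed field schema from the first document's metadata, so a document whose metadata lacks a key that other documents carry can break batch insertion. Normalizing all metadata dicts to a common key set before calling `from_texts` may avoid the mismatch:

```python
# Hypothetical pre-processing step: give every metadata dict the same key set
# so each insertion batch contains a value for every collection field.
def normalize_metadatas(metadatas, default=""):
    all_keys = set()
    for md in metadatas:
        all_keys.update(md)
    return [{k: md.get(k, default) for k in sorted(all_keys)} for md in metadatas]

print(normalize_metadatas([{"year": 2020}, {"author": "a"}]))
# → [{'author': '', 'year': 2020}, {'author': 'a', 'year': ''}]
```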
### System Info
langchain==0.1.3
langchain-community==0.0.15
pymilvus==2.3.4 | gRPC error with Milvus retriever | https://api.github.com/repos/langchain-ai/langchain/issues/17371/comments | 2 | 2024-02-10T20:01:16Z | 2024-06-19T16:07:03Z | https://github.com/langchain-ai/langchain/issues/17371 | 2,128,696,829 | 17,371 |
[
"langchain-ai",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
On the official documentation page for "Modules -> Agents -> Agent Types -> OpenAI assistants" (https://python.langchain.com/docs/modules/agents/agent_types/openai_assistants#using-existing-thread), there is an error in the example regarding the import statement for the AgentFinish class and its usage in the execute_agent function.
The incorrect import statement in the example is:
```python
from langchain_core.agents import AgentFinish
```
However, the correct import statement should be:
```python
from langchain.schema.agent import AgentFinish
```
The issue arises because the incorrect import statement leads to a discrepancy between the package imported and the actual type returned by agent.invoke(input). Despite the actual type being AgentFinish, the example code still enters the while loop, causing further errors when accessing the tool attribute in action.
The corrected example code should be as follows:
```python
from langchain.schema.agent import AgentFinish
def execute_agent(agent, tools, input):
tool_map = {tool.name: tool for tool in tools}
response = agent.invoke(input)
while not isinstance(response, AgentFinish):
tool_outputs = []
for action in response:
tool_output = tool_map[action.tool].invoke(action.tool_input)
tool_outputs.append({"output": tool_output, "tool_call_id": action.tool_call_id})
response = agent.invoke(
{
"tool_outputs": tool_outputs,
"run_id": action.run_id,
"thread_id": action.thread_id
}
)
return response
```
This correction ensures that the example code operates as intended, avoiding errors related to the incorrect package import and usage.
### Idea or request for content:
_No response_ | Correction Needed in OpenAI Assistants Example on Official Documentation | https://api.github.com/repos/langchain-ai/langchain/issues/17367/comments | 4 | 2024-02-10T14:33:55Z | 2024-02-11T13:33:33Z | https://github.com/langchain-ai/langchain/issues/17367 | 2,128,498,821 | 17,367 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain_community.llms import OpenLLM
server_url = "http://localhost:3000"
llm2 = OpenLLM(server_url=server_url)
llm2._generate(["hello i am "])
```
```bash
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/lustre06/project/6045755/omij/LLMs/llm_env/lib/python3.9/site-packages/langchain_core/language_models/llms.py", line 1139, in _generate
self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
File "/lustre06/project/6045755/omij/LLMs/llm_env/lib/python3.9/site-packages/langchain_community/llms/openllm.py", line 265, in _call
self._identifying_params["model_name"], **copied
File "/lustre06/project/6045755/omij/LLMs/llm_env/lib/python3.9/site-packages/langchain_community/llms/openllm.py", line 220, in _identifying_params
self.llm_kwargs.update(self._client._config())
TypeError: 'dict' object is not callable
```
### Error Message and Stack Trace (if applicable)

```python
>>> llm2
OpenLLM(server_url='http://localhost:3000', llm_kwargs={'n': 1, 'best_of': None, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'use_beam_search': False, 'ignore_eos': False, 'skip_special_tokens': True, 'max_new_tokens': 128, 'min_length': 0, 'early_stopping': False, 'num_beams': 1, 'num_beam_groups': 1, 'use_cache': True, 'temperature': 0.75, 'top_k': 15, 'top_p': 0.78, 'typical_p': 1.0, 'epsilon_cutoff': 0.0, 'eta_cutoff': 0.0, 'diversity_penalty': 0.0, 'repetition_penalty': 1.0, 'encoder_repetition_penalty': 1.0, 'length_penalty': 1.0, 'no_repeat_ngram_size': 0, 'renormalize_logits': False, 'remove_invalid_values': False, 'num_return_sequences': 1, 'output_attentions': False, 'output_hidden_states': False, 'output_scores': False, 'encoder_no_repeat_ngram_size': 0})
>>> llm2._client._config
{'n': 1, 'best_of': None, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'use_beam_search': False, 'ignore_eos': False, 'skip_special_tokens': True, 'max_new_tokens': 128, 'min_length': 0, 'early_stopping': False, 'num_beams': 1, 'num_beam_groups': 1, 'use_cache': True, 'temperature': 0.75, 'top_k': 15, 'top_p': 0.78, 'typical_p': 1.0, 'epsilon_cutoff': 0.0, 'eta_cutoff': 0.0, 'diversity_penalty': 0.0, 'repetition_penalty': 1.0, 'encoder_repetition_penalty': 1.0, 'length_penalty': 1.0, 'no_repeat_ngram_size': 0, 'renormalize_logits': False, 'remove_invalid_values': False, 'num_return_sequences': 1, 'output_attentions': False, 'output_hidden_states': False, 'output_scores': False, 'encoder_no_repeat_ngram_size': 0}
>>>
>>> llm2._generate(["hello i am "])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/lustre06/project/6045755/omij/LLMs/llm_env/lib/python3.9/site-packages/langchain_core/language_models/llms.py", line 1139, in _generate
self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
File "/lustre06/project/6045755/omij/LLMs/llm_env/lib/python3.9/site-packages/langchain_community/llms/openllm.py", line 265, in _call
self._identifying_params["model_name"], **copied
File "/lustre06/project/6045755/omij/LLMs/llm_env/lib/python3.9/site-packages/langchain_community/llms/openllm.py", line 220, in _identifying_params
self.llm_kwargs.update(self._client._config())
TypeError: 'dict' object is not callable
```
### Description
I'm trying to use the llama-2-7b-hf model hosted via OpenLLM to run LLM chains locally,
following the given example from the docs: [langchain-openllm](https://python.langchain.com/docs/integrations/llms/openllm).
After looking into the details, the LLM is initialized correctly, but there seems to be an issue with this specific block of code:
```python
@property
def _identifying_params(self) -> IdentifyingParams:
"""Get the identifying parameters."""
if self._client is not None:
self.llm_kwargs.update(self._client._config())
model_name = self._client._metadata()["model_name"]
model_id = self._client._metadata()["model_id"]
else:
```
### System Info
python -m langchain_core.sys_info
System Information
------------------
> OS: Linux
> OS Version: 1 SMP Fri Nov 17 03:31:10 UTC 2023
> Python Version: 3.9.6 (default, Jul 12 2021, 18:23:59)
[GCC 9.3.0]
Package Information
-------------------
> langchain_core: 0.1.21
> langchain: 0.1.4
> langchain_community: 0.0.19
> langsmith: 0.0.87
> langchain_example: Installed. No version info available.
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
| Bug Encountered in OpenLLM invoke method: TypeError with client's config | https://api.github.com/repos/langchain-ai/langchain/issues/17362/comments | 1 | 2024-02-10T05:40:29Z | 2024-05-18T16:07:48Z | https://github.com/langchain-ai/langchain/issues/17362 | 2,128,119,760 | 17,362 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
pgvector = PGVector(...) # Initialize PGVector with necessary parameters
ids_to_delete = [...] # List of ids to delete
pgvector.delete(ids_to_delete)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I want to fetch the IDs of a document's embeddings stored in the Postgres pgvector DB so that I can delete that particular document's embeddings.
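A hedged sketch (the table and column names are assumptions that vary across langchain versions — check your actual schema): PGVector keeps rows in a `langchain_pg_embedding` table, so the IDs belonging to one source document can be looked up with a metadata filter and then passed to `delete()`:

```python
# Hypothetical query builder: look up embedding row ids for one source document
# in PGVector's backing table (table/column names assumed; verify against your DB;
# older versions use a "uuid" column instead of "id").
def ids_for_source_query(source):
    sql = (
        "SELECT id FROM langchain_pg_embedding "
        "WHERE cmetadata->>'source' = %s"
    )
    return sql, (source,)

sql, params = ids_for_source_query("report.pdf")
print(sql)
print(params)  # → ('report.pdf',)
```

The fetched IDs could then be handed to `pgvector.delete(ids=[...])` as in the example above.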
### System Info
I am using Postgres pgvector | How to get IDs of documents stored in postgres pgvector | https://api.github.com/repos/langchain-ai/langchain/issues/17361/comments | 9 | 2024-02-10T05:21:56Z | 2024-07-19T18:59:09Z | https://github.com/langchain-ai/langchain/issues/17361 | 2,128,105,436 | 17,361 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain_community.chat_models.huggingface import ChatHuggingFace
from langchain_community.llms import HuggingFaceHub
llm2 = HuggingFaceHub(
repo_id="HuggingFaceH4/zephyr-7b-beta",
task="text-generation",
huggingfacehub_api_token=""
)
chat_model = ChatHuggingFace(llm=llm2)
from langchain import hub
from langchain.agents import AgentExecutor, load_tools
from langchain.agents.format_scratchpad import format_log_to_str
from langchain.agents.output_parsers import (
ReActJsonSingleInputOutputParser,
)
from langchain.tools.render import render_text_description
from langchain_community.utilities import SerpAPIWrapper
from langchain_community.tools import DuckDuckGoSearchRun, YouTubeSearchTool
from langchain.agents import Tool
# setup tools
search = SerpAPIWrapper(serpapi_api_key=SERPER_API_KEY)
ddg_search = DuckDuckGoSearchRun()
youtube = YouTubeSearchTool()
tools = [
Tool(
name="Search",
func=search.run,
description="Useful for answering questions about current events."
),
Tool(
name="DuckDuckGo Search",
func=ddg_search.run,
description="Useful to browse information from the Internet."
),
Tool(
name="Youtube Search",
func=youtube.run,
description="Useful for when the user explicitly asks to search on YouTube."
)
]
# setup ReAct style prompt
prompt = hub.pull("hwchase17/react-json")
prompt = prompt.partial(
tools=render_text_description(tools),
tool_names=", ".join([t.name for t in tools]),
)
# define the agent
chat_model_with_stop = chat_model.bind(stop=["\nObservation"])
agent = (
{
"input": lambda x: x["input"],
"agent_scratchpad": lambda x: format_log_to_str(x["intermediate_steps"]),
}
| prompt
| chat_model_with_stop
| ReActJsonSingleInputOutputParser()
)
# instantiate AgentExecutor
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True, handle_parsing_errors=True,max_execution_time=60)
agent_executor.invoke(
{
"input": "Who is the current holder of the speed skating world record on 500 meters? What is her current age raised to the 0.43 power?"
}
)
```
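One hedged observation before the trace below (the helper is mine, not LangChain's): HuggingFaceHub text-generation endpoints often echo the full prompt back, so the ReAct JSON parser sees the entire `<|system|>` template instead of just the model's new tokens. Stripping the echoed prompt — or passing `model_kwargs={"return_full_text": False}` if the endpoint honors that parameter — leaves only the continuation the parser can handle:

```python
# Hypothetical post-processing step: drop the echoed prompt from a
# text-generation response so only the model's continuation is parsed.
def strip_prompt_echo(generated, prompt):
    if generated.startswith(prompt):
        return generated[len(prompt):]
    return generated

out = strip_prompt_echo("<|system|>...template...Action: search",
                        "<|system|>...template...")
print(out)  # → Action: search
```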
ERROR:
```
> Entering new AgentExecutor chain...
Could not parse LLM output: <|system|>
Answer the following questions as best you can. You have access to the following tools:
search: Useful for answering questions about current events.
The way you use the tools is by specifying a json blob.
Specifically, this json should have a `action` key (with the name of the tool to use) and a `action_input` key (with the input to the tool going here).
The only values that should be in the "action" field are: search
The $JSON_BLOB should only contain a SINGLE action, do NOT return a list of multiple actions. Here is an example of a valid $JSON_BLOB:
```
{
"action": $TOOL_NAME,
"action_input": $INPUT
}
```
ALWAYS use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action:
```
$JSON_BLOB
```Invalid or incomplete responseCould not parse LLM output: <|system|>
Answer the following questions as best you can. You have access to the following tools:
search: Useful for answering questions about current events.
The way you use the tools is by specifying a json blob.
Specifically, this json should have a `action` key (with the name of the tool to use) and a `action_input` key (with the input to the tool going here).
The only values that should be in the "action" field are: search
The $JSON_BLOB should only contain a SINGLE action, do NOT return a list of multiple actions. Here is an example of a valid $JSON_BLOB:
```
{
"action": $TOOL_NAME,
"action_input": $INPUT
}
```
ALWAYS use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action:
```
$JSON_BLOB
```Invalid or incomplete response
[... the same "Could not parse LLM output: <|system|>" prompt block repeats verbatim 13 more times before the agent gives up ...]
> Finished chain.
{'input': 'Who is the current holder of the speed skating world record on 500 meters? What is her current age raised to the 0.43 power?',
'output': 'Agent stopped due to iteration limit or time limit.'}
```
### Error Message and Stack Trace (if applicable)
```
> Entering new AgentExecutor chain...
Could not parse LLM output: <|system|>
Answer the following questions as best you can. You have access to the following tools:
search: Useful for answering questions about current events.
The way you use the tools is by specifying a json blob.
Specifically, this json should have a `action` key (with the name of the tool to use) and a `action_input` key (with the input to the tool going here).
The only values that should be in the "action" field are: search
The $JSON_BLOB should only contain a SINGLE action, do NOT return a list of multiple actions. Here is an example of a valid $JSON_BLOB:
```
{
"action": $TOOL_NAME,
"action_input": $INPUT
}
```
ALWAYS use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action:
```
$JSON_BLOB
```Invalid or incomplete response
[... the same "Could not parse LLM output: <|system|>" prompt block repeats verbatim 14 more times before the agent gives up ...]
> Finished chain.
{'input': 'Who is the current holder of the speed skating world record on 500 meters? What is her current age raised to the 0.43 power?',
'output': 'Agent stopped due to iteration limit or time limit.'}
```
### Description
I am having trouble integrating Hugging Face (`ChatHuggingFace`) with a LangChain JSON agent: as the transcript above shows, the model echoes the system prompt instead of emitting a parseable `$JSON_BLOB`, so the agent loops until it hits the iteration limit.
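For context on the failure above: the agent expects each completion to contain a fenced JSON blob with `action` and `action_input` keys, and reports "Could not parse LLM output" when it does not. As a rough, dependency-free illustration of that contract (this is not LangChain's actual parser), a minimal extractor might look like:

```python
import json
import re


def extract_action_blob(text: str):
    """Return (action, action_input) from the first fenced JSON blob, or None.

    Returning None corresponds to the "Could not parse LLM output" /
    "Invalid or incomplete response" failure seen in the transcript above.
    """
    match = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", text, re.DOTALL)
    if match is None:
        return None
    try:
        blob = json.loads(match.group(1))
    except json.JSONDecodeError:
        return None
    if "action" not in blob or "action_input" not in blob:
        return None
    return blob["action"], blob["action_input"]


ok = 'Action:\n```\n{"action": "search", "action_input": "500 m speed skating record"}\n```'
bad = "<|system|>\nAnswer the following questions as best you can."
print(extract_action_blob(ok))   # ('search', '500 m speed skating record')
print(extract_action_blob(bad))  # None
```

A completion that merely echoes the system prompt, as in the logs above, never matches this shape, which is consistent with the agent stopping at its iteration limit.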
### System Info
```
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Sat Nov 18 15:31:17 UTC 2023
> Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.1.22
> langchain: 0.1.6
> langchain_community: 0.0.19
> langsmith: 0.0.87
> langchain_openai: 0.0.5
> langchainhub: 0.1.14
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
``` | $JSON_BLOB ```Invalid or incomplete response - ChatHuggingFace.. Langchain | https://api.github.com/repos/langchain-ai/langchain/issues/17356/comments | 8 | 2024-02-10T03:13:11Z | 2024-02-10T19:26:19Z | https://github.com/langchain-ai/langchain/issues/17356 | 2,128,067,649 | 17,356 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
import os, dotenv, openai
from langchain_community.document_loaders import WebBaseLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.retrievers import MultiQueryRetriever
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
#API Key
dotenv.load_dotenv()
openai.api_key = os.getenv("OPENAI_API_KEY")
#Load and split docs
documents = WebBaseLoader("https://en.wikipedia.org/wiki/New_York_City").load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size = 1000, chunk_overlap = 50)
documents = text_splitter.split_documents(documents)
vector_store = FAISS.from_documents(documents, OpenAIEmbeddings())
retriever = vector_store.as_retriever()
#MultiQueryRetriever
primary_qa_llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
advanced_retriever = MultiQueryRetriever.from_llm(retriever=retriever, llm=primary_qa_llm)
print(advanced_retriever.get_relevant_documents("Where is nyc?"))
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "C:\Users\admin\.conda\envs\qdrant\Lib\site-packages\pydantic\v1\main.py", line 522, in parse_obj
obj = dict(obj)
^^^^^^^^^
TypeError: 'int' object is not iterable
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\admin\.conda\envs\qdrant\Lib\site-packages\langchain\output_parsers\pydantic.py", line 25, in parse_result
return self.pydantic_object.parse_obj(json_object)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\.conda\envs\qdrant\Lib\site-packages\pydantic\v1\main.py", line 525, in parse_obj
raise ValidationError([ErrorWrapper(exc, loc=ROOT_KEY)], cls) from e
pydantic.v1.error_wrappers.ValidationError: 1 validation error for LineList
__root__
LineList expected dict not int (type=type_error)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "d:\Documents-Alon\MapleRAG\ragas-tutorial\ragas-debug.py", line 28, in <module>
print(advanced_retriever.get_relevant_documents("Who are you?"))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\.conda\envs\qdrant\Lib\site-packages\langchain_core\retrievers.py", line 224, in get_relevant_documents
raise e
File "C:\Users\admin\.conda\envs\qdrant\Lib\site-packages\langchain_core\retrievers.py", line 217, in get_relevant_documents
result = self._get_relevant_documents(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\.conda\envs\qdrant\Lib\site-packages\langchain\retrievers\multi_query.py", line 172, in _get_relevant_documents
queries = self.generate_queries(query, run_manager)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\.conda\envs\qdrant\Lib\site-packages\langchain\retrievers\multi_query.py", line 189, in generate_queries
response = self.llm_chain(
^^^^^^^^^^^^^^^
File "C:\Users\admin\.conda\envs\qdrant\Lib\site-packages\langchain_core\_api\deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\.conda\envs\qdrant\Lib\site-packages\langchain\chains\base.py", line 363, in __call__
return self.invoke(
^^^^^^^^^^^^
File "C:\Users\admin\.conda\envs\qdrant\Lib\site-packages\langchain\chains\base.py", line 162, in invoke
raise e
File "C:\Users\admin\.conda\envs\qdrant\Lib\site-packages\langchain\chains\base.py", line 156, in invoke
    self._call(inputs, run_manager=run_manager)
    return self.create_outputs(response)[0]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\admin\.conda\envs\qdrant\Lib\site-packages\langchain\chains\llm.py", line 258, in create_outputs
  File "C:\Users\admin\.conda\envs\qdrant\Lib\site-packages\langchain\chains\llm.py", line 261, in <listcomp>
    self.output_key: self.output_parser.parse_result(generation),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\.conda\envs\qdrant\Lib\site-packages\langchain\output_parsers\pydantic.py", line 29, in parse_result
raise OutputParserException(msg, llm_output=json_object)
langchain_core.exceptions.OutputParserException: Failed to parse LineList from completion 1. Got: 1 validation error for LineList
__root__
LineList expected dict not int (type=type_error)
```
### Description
I am trying to use MultiQueryRetriever and getting an error.
The base `retriever` works on its own, and so does `primary_qa_llm`.
I am running on Windows.
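For reference, the intended behavior of the retriever's default parser is simply to split the LLM completion into one query per line; the traceback instead shows the pydantic-based `parse_result` path choking on a completion that JSON-parses to the integer `1`. A hedged, dependency-free sketch of the intended splitting (an illustration, not the library's implementation):

```python
def split_queries(completion: str) -> list[str]:
    """Split an LLM completion into candidate queries, one non-empty line each.

    Loosely mirrors what MultiQueryRetriever's default parser intends to do
    with a multi-line completion of rephrased questions.
    """
    return [line.strip() for line in completion.strip().splitlines() if line.strip()]


completion = (
    "1. Where is New York City located?\n"
    "2. In which state is NYC?\n"
    "3. What is NYC's location?"
)
print(split_queries(completion))
```

A degenerate completion such as a bare `"1"` yields a single useless entry, and under the pydantic path in the traceback it fails validation outright.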
### System Info
```
> pip list | findstr /i "langchain"
langchain 0.1.6
langchain-community 0.0.19
langchain-core 0.1.22
langchain-openai 0.0.5
langchainhub 0.1.14
```
Platform: Windows
Python version: 3.11.7 | MultiQueryRetriever is failing | https://api.github.com/repos/langchain-ai/langchain/issues/17352/comments | 13 | 2024-02-09T23:57:05Z | 2024-05-13T10:21:47Z | https://github.com/langchain-ai/langchain/issues/17352 | 2,127,990,898 | 17,352 |
[
"langchain-ai",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
_No response_
### Idea or request for content:
Below is the code as it appears in the latest MultiQueryRetriever documentation:
```
from typing import List
from langchain.chains import LLMChain
from langchain.output_parsers import PydanticOutputParser
from langchain.prompts import PromptTemplate
from pydantic import BaseModel, Field
# Output parser will split the LLM result into a list of queries
class LineList(BaseModel):
# "lines" is the key (attribute name) of the parsed output
lines: List[str] = Field(description="Lines of text")
class LineListOutputParser(PydanticOutputParser):
def __init__(self) -> None:
super().__init__(pydantic_object=LineList)
def parse(self, text: str) -> LineList:
lines = text.strip().split("\n")
return LineList(lines=lines)
output_parser = LineListOutputParser()
QUERY_PROMPT = PromptTemplate(
input_variables=["question"],
template="""You are an AI language model assistant. Your task is to generate five
different versions of the given user question to retrieve relevant documents from a vector
database. By generating multiple perspectives on the user question, your goal is to help
the user overcome some of the limitations of the distance-based similarity search.
Provide these alternative questions separated by newlines.
Original question: {question}""",
)
llm = OpenAI(temperature=0)
# Chain
llm_chain = LLMChain(llm=llm, prompt=QUERY_PROMPT, output_parser=output_parser)
# Other inputs
question = "What are the approaches to Task Decomposition?"
# Run
retriever = MultiQueryRetriever(
retriever=vectordb.as_retriever(), llm_chain=llm_chain, parser_key="lines"
) # "lines" is the key (attribute name) of the parsed output
# Results
unique_docs = retriever.get_relevant_documents(
query="What does the course say about regression?"
)
len(unique_docs)
```
The code above raises the following error:
```
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
[<ipython-input-22-0cf7a0e69b40>](https://localhost:8080/#) in <cell line: 24>()
22
23
---> 24 output_parser = LineListOutputParser()
25
26 QUERY_PROMPT = PromptTemplate(
2 frames
[<ipython-input-22-0cf7a0e69b40>](https://localhost:8080/#) in __init__(self)
15 class LineListOutputParser(PydanticOutputParser):
16 def __init__(self) -> None:
---> 17 super().__init__(pydantic_object=LineList)
18
19 def parse(self, text: str) -> LineList:
[/usr/local/lib/python3.10/dist-packages/langchain_core/load/serializable.py](https://localhost:8080/#) in __init__(self, **kwargs)
105
106 def __init__(self, **kwargs: Any) -> None:
--> 107 super().__init__(**kwargs)
108 self._lc_kwargs = kwargs
109
[/usr/local/lib/python3.10/dist-packages/pydantic/v1/main.py](https://localhost:8080/#) in __init__(__pydantic_self__, **data)
339 values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
340 if validation_error:
--> 341 raise validation_error
342 try:
343 object_setattr(__pydantic_self__, '__dict__', values)
ValidationError: 1 validation error for LineListOutputParser
pydantic_object
subclass of BaseModel expected (type=type_error.subclass; expected_class=BaseModel)
```
Could you please take a look and provide corrected code, rather than suggesting that I go through all the files?
[
"langchain-ai",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
_No response_
### Idea or request for content:
Below is the code, taken directly from the MultiQueryRetriever LangChain documentation:
```
# Build a sample vectorDB
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import WebBaseLoader
from langchain_community.vectorstores import Chroma
# from langchain_openai import OpenAIEmbeddings
# Load blog post
loader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")
data = loader.load()
# Split
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)
splits = text_splitter.split_documents(data)
# VectorDB
embedding = OpenAIEmbeddings()
vectordb = Chroma.from_documents(documents=splits, embedding=embedding)
from langchain.retrievers.multi_query import MultiQueryRetriever
# from langchain_openai import ChatOpenAI
question = "What are the approaches to Task Decomposition?"
llm = OpenAI(temperature=0)
retriever_from_llm = MultiQueryRetriever.from_llm(
retriever=vectordb.as_retriever(), llm=llm
)
# Set logging for the queries
import logging
logging.basicConfig()
logging.getLogger("langchain.retrievers.multi_query").setLevel(logging.INFO)
unique_docs = retriever_from_llm.get_relevant_documents(query=question)
len(unique_docs)
```
Below is the error it returns:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/pydantic/v1/main.py](https://localhost:8080/#) in parse_obj(cls, obj)
521 try:
--> 522 obj = dict(obj)
523 except (TypeError, ValueError) as e:
TypeError: 'int' object is not iterable
The above exception was the direct cause of the following exception:
ValidationError Traceback (most recent call last)
14 frames
[/usr/local/lib/python3.10/dist-packages/langchain/output_parsers/pydantic.py](https://localhost:8080/#) in parse_result(self, result, partial)
24 try:
---> 25 return self.pydantic_object.parse_obj(json_object)
26 except ValidationError as e:
[/usr/local/lib/python3.10/dist-packages/pydantic/v1/main.py](https://localhost:8080/#) in parse_obj(cls, obj)
524 exc = TypeError(f'{cls.__name__} expected dict not {obj.__class__.__name__}')
--> 525 raise ValidationError([ErrorWrapper(exc, loc=ROOT_KEY)], cls) from e
526 return cls(**obj)
ValidationError: 1 validation error for LineList
__root__
LineList expected dict not int (type=type_error)
During handling of the above exception, another exception occurred:
OutputParserException Traceback (most recent call last)
[<ipython-input-73-07101c8e33b2>](https://localhost:8080/#) in <cell line: 34>()
32 logging.getLogger("langchain.retrievers.multi_query").setLevel(logging.INFO)
33
---> 34 unique_docs = retriever_from_llm.get_relevant_documents(query=question)
35 len(unique_docs)
[/usr/local/lib/python3.10/dist-packages/langchain_core/retrievers.py](https://localhost:8080/#) in get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)
222 except Exception as e:
223 run_manager.on_retriever_error(e)
--> 224 raise e
225 else:
226 run_manager.on_retriever_end(
[/usr/local/lib/python3.10/dist-packages/langchain_core/retrievers.py](https://localhost:8080/#) in get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)
215 _kwargs = kwargs if self._expects_other_args else {}
216 if self._new_arg_supported:
--> 217 result = self._get_relevant_documents(
218 query, run_manager=run_manager, **_kwargs
219 )
[/usr/local/lib/python3.10/dist-packages/langchain/retrievers/multi_query.py](https://localhost:8080/#) in _get_relevant_documents(self, query, run_manager)
170 Unique union of relevant documents from all generated queries
171 """
--> 172 queries = self.generate_queries(query, run_manager)
173 if self.include_original:
174 queries.append(query)
[/usr/local/lib/python3.10/dist-packages/langchain/retrievers/multi_query.py](https://localhost:8080/#) in generate_queries(self, question, run_manager)
187 List of LLM generated queries that are similar to the user input
188 """
--> 189 response = self.llm_chain(
190 {"question": question}, callbacks=run_manager.get_child()
191 )
[/usr/local/lib/python3.10/dist-packages/langchain_core/_api/deprecation.py](https://localhost:8080/#) in warning_emitting_wrapper(*args, **kwargs)
143 warned = True
144 emit_warning()
--> 145 return wrapped(*args, **kwargs)
146
147 async def awarning_emitting_wrapper(*args: Any, **kwargs: Any) -> Any:
[/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py](https://localhost:8080/#) in __call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
361 }
362
--> 363 return self.invoke(
364 inputs,
365 cast(RunnableConfig, {k: v for k, v in config.items() if v is not None}),
[/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py](https://localhost:8080/#) in invoke(self, input, config, **kwargs)
160 except BaseException as e:
161 run_manager.on_chain_error(e)
--> 162 raise e
163 run_manager.on_chain_end(outputs)
164 final_outputs: Dict[str, Any] = self.prep_outputs(
[/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py](https://localhost:8080/#) in invoke(self, input, config, **kwargs)
154 try:
155 outputs = (
--> 156 self._call(inputs, run_manager=run_manager)
157 if new_arg_supported
158 else self._call(inputs)
[/usr/local/lib/python3.10/dist-packages/langchain/chains/llm.py](https://localhost:8080/#) in _call(self, inputs, run_manager)
102 ) -> Dict[str, str]:
103 response = self.generate([inputs], run_manager=run_manager)
--> 104 return self.create_outputs(response)[0]
105
106 def generate(
[/usr/local/lib/python3.10/dist-packages/langchain/chains/llm.py](https://localhost:8080/#) in create_outputs(self, llm_result)
256 def create_outputs(self, llm_result: LLMResult) -> List[Dict[str, Any]]:
257 """Create outputs from response."""
--> 258 result = [
259 # Get the text of the top generated string.
260 {
[/usr/local/lib/python3.10/dist-packages/langchain/chains/llm.py](https://localhost:8080/#) in <listcomp>(.0)
259 # Get the text of the top generated string.
260 {
--> 261 self.output_key: self.output_parser.parse_result(generation),
262 "full_generation": generation,
263 }
[/usr/local/lib/python3.10/dist-packages/langchain/output_parsers/pydantic.py](https://localhost:8080/#) in parse_result(self, result, partial)
27 name = self.pydantic_object.__name__
28 msg = f"Failed to parse {name} from completion {json_object}. Got: {e}"
---> 29 raise OutputParserException(msg, llm_output=json_object)
30
31 def get_format_instructions(self) -> str:
OutputParserException: Failed to parse LineList from completion 1. Got: 1 validation error for LineList
__root__
LineList expected dict not int (type=type_error)
```
The same code was running yesterday but returns an error today; there must be an issue on the LangChain side itself. Can you have a look into it? | MultiQueryRetriever documentation code itself is not executing | https://api.github.com/repos/langchain-ai/langchain/issues/17342/comments | 7 | 2024-02-09T20:23:23Z | 2024-02-12T22:06:47Z | https://github.com/langchain-ai/langchain/issues/17342 | 2,127,799,180 | 17,342
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain.embeddings import VertexAIEmbeddings
from langchain.vectorstores import PGVector
embeddings = VertexAIEmbeddings()
vectorstore = PGVector(
collection_name=<collection_name>,
connection_string=<connection_string>,
embedding_function=embeddings,
)
vectorstore.delete(ids=[<some_id>])
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I want to target the embeddings of a particular file; that is, I want to delete one file's embeddings from the embeddings of a list of files.
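One way to make per-file deletion possible is to record the ids you pass when adding each file's chunks, then hand that id list to `delete`. Below is a plain-Python sketch of the bookkeeping only; the `store` dictionary is a stand-in for the real PGVector calls (`add_documents(..., ids=ids)` / `delete(ids=...)`), and all names here are illustrative:

```python
import uuid

class IdTracker:
    """Track which vector-store ids belong to which source file,
    so a single file's embeddings can be deleted later.
    The actual vector-store calls are stand-ins in this sketch."""

    def __init__(self):
        self.ids_by_file = {}
        self.store = {}  # id -> chunk text (stand-in for PGVector)

    def add_file(self, filename, chunks):
        ids = [str(uuid.uuid4()) for _ in chunks]
        for i, chunk in zip(ids, chunks):
            self.store[i] = chunk      # would be: vectorstore.add_documents(..., ids=ids)
        self.ids_by_file[filename] = ids
        return ids

    def delete_file(self, filename):
        for i in self.ids_by_file.pop(filename, []):
            self.store.pop(i, None)    # would be: vectorstore.delete(ids=[...])

t = IdTracker()
t.add_file("a.pdf", ["chunk1", "chunk2"])
t.add_file("b.pdf", ["chunk3"])
t.delete_file("a.pdf")
print(sorted(t.store.values()))  # ['chunk3']
```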
### System Info
All the dependencies I installed | How to delete/add particular embeddings in PG vector | https://api.github.com/repos/langchain-ai/langchain/issues/17340/comments | 3 | 2024-02-09T20:10:48Z | 2024-02-16T17:13:48Z | https://github.com/langchain-ai/langchain/issues/17340 | 2,127,785,140 | 17,340 |
[
"langchain-ai",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
_No response_
### Idea or request for content:
Below is the code I used to get the answer using MultiQueryRetriever. It was working yesterday, but it is not working now:
```
texts = text_splitter.split_documents(documents)
vectorStore = FAISS.from_documents(texts, embeddings)
llm = OpenAI(temperature=0.2)
retriever = MultiQueryRetriever.from_llm(retriever=vectorStore.as_retriever(), llm=llm)
# Set logging for the queries
import logging
logging.basicConfig()
logging.getLogger("langchain.retrievers.multi_query").setLevel(logging.INFO)
# docs = retriever.get_relevant_documents(query="how many are injured and dead in christchurch Mosque?")
# print(len(docs))
# Create a chain to answer questions
qa_chain = RetrievalQA.from_chain_type(llm=llm,
chain_type="stuff",
retriever=retriever,
return_source_documents=True)
# # Use the chain to answer a question
# llm_response = qa_chain(query)
# process_llm_response(llm_response)
# Format the prompt using the template
context = ""
# question = "does cemig recognizes the legitimacy of the trade unions?"
question = "can you return the objective of ABInBev?"
formatted_prompt = prompt_template.format(context=context, question=question)
# Pass the formatted prompt to the RetrievalQA function
llm_response = qa_chain(formatted_prompt)
process_llm_response(llm_response)
```
Below is the error:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/pydantic/v1/main.py](https://localhost:8080/#) in parse_obj(cls, obj)
521 try:
--> 522 obj = dict(obj)
523 except (TypeError, ValueError) as e:
TypeError: 'int' object is not iterable
The above exception was the direct cause of the following exception:
ValidationError Traceback (most recent call last)
15 frames
[/usr/local/lib/python3.10/dist-packages/langchain/output_parsers/pydantic.py](https://localhost:8080/#) in parse_result(self, result, partial)
24 try:
---> 25 return self.pydantic_object.parse_obj(json_object)
26 except ValidationError as e:
[/usr/local/lib/python3.10/dist-packages/pydantic/v1/main.py](https://localhost:8080/#) in parse_obj(cls, obj)
524 exc = TypeError(f'{cls.__name__} expected dict not {obj.__class__.__name__}')
--> 525 raise ValidationError([ErrorWrapper(exc, loc=ROOT_KEY)], cls) from e
526 return cls(**obj)
ValidationError: 1 validation error for LineList
__root__
LineList expected dict not int (type=type_error)
During handling of the above exception, another exception occurred:
OutputParserException Traceback (most recent call last)
[<ipython-input-32-7d1282f5e9fb>](https://localhost:8080/#) in <cell line: 40>()
38 query = "Does the company has one or more channel(s)/mechanism(s) through which individuals and communities who may be adversely impacted by the Company can raise complaints or concerns, including in relation to human rights issues?"
39 desired_count = 10 # The number of unique documents you want
---> 40 unique_documents = fetch_unique_documents(query, initial_limit=desired_count, desired_count=desired_count)
41
42 # # Print the unique documents or handle them as needed
[<ipython-input-32-7d1282f5e9fb>](https://localhost:8080/#) in fetch_unique_documents(query, initial_limit, desired_count)
6 while len(unique_docs) < desired_count:
7 retriever = MultiQueryRetriever.from_llm(retriever=vectorStore.as_retriever(), llm=llm)
----> 8 docs = retriever.get_relevant_documents(query)
9
10 # # Set logging for the queries
[/usr/local/lib/python3.10/dist-packages/langchain_core/retrievers.py](https://localhost:8080/#) in get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)
222 except Exception as e:
223 run_manager.on_retriever_error(e)
--> 224 raise e
225 else:
226 run_manager.on_retriever_end(
[/usr/local/lib/python3.10/dist-packages/langchain_core/retrievers.py](https://localhost:8080/#) in get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)
215 _kwargs = kwargs if self._expects_other_args else {}
216 if self._new_arg_supported:
--> 217 result = self._get_relevant_documents(
218 query, run_manager=run_manager, **_kwargs
219 )
[/usr/local/lib/python3.10/dist-packages/langchain/retrievers/multi_query.py](https://localhost:8080/#) in _get_relevant_documents(self, query, run_manager)
170 Unique union of relevant documents from all generated queries
171 """
--> 172 queries = self.generate_queries(query, run_manager)
173 if self.include_original:
174 queries.append(query)
[/usr/local/lib/python3.10/dist-packages/langchain/retrievers/multi_query.py](https://localhost:8080/#) in generate_queries(self, question, run_manager)
187 List of LLM generated queries that are similar to the user input
188 """
--> 189 response = self.llm_chain(
190 {"question": question}, callbacks=run_manager.get_child()
191 )
[/usr/local/lib/python3.10/dist-packages/langchain_core/_api/deprecation.py](https://localhost:8080/#) in warning_emitting_wrapper(*args, **kwargs)
143 warned = True
144 emit_warning()
--> 145 return wrapped(*args, **kwargs)
146
147 async def awarning_emitting_wrapper(*args: Any, **kwargs: Any) -> Any:
[/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py](https://localhost:8080/#) in __call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
361 }
362
--> 363 return self.invoke(
364 inputs,
365 cast(RunnableConfig, {k: v for k, v in config.items() if v is not None}),
[/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py](https://localhost:8080/#) in invoke(self, input, config, **kwargs)
160 except BaseException as e:
161 run_manager.on_chain_error(e)
--> 162 raise e
163 run_manager.on_chain_end(outputs)
164 final_outputs: Dict[str, Any] = self.prep_outputs(
[/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py](https://localhost:8080/#) in invoke(self, input, config, **kwargs)
154 try:
155 outputs = (
--> 156 self._call(inputs, run_manager=run_manager)
157 if new_arg_supported
158 else self._call(inputs)
[/usr/local/lib/python3.10/dist-packages/langchain/chains/llm.py](https://localhost:8080/#) in _call(self, inputs, run_manager)
102 ) -> Dict[str, str]:
103 response = self.generate([inputs], run_manager=run_manager)
--> 104 return self.create_outputs(response)[0]
105
106 def generate(
[/usr/local/lib/python3.10/dist-packages/langchain/chains/llm.py](https://localhost:8080/#) in create_outputs(self, llm_result)
256 def create_outputs(self, llm_result: LLMResult) -> List[Dict[str, Any]]:
257 """Create outputs from response."""
--> 258 result = [
259 # Get the text of the top generated string.
260 {
[/usr/local/lib/python3.10/dist-packages/langchain/chains/llm.py](https://localhost:8080/#) in <listcomp>(.0)
259 # Get the text of the top generated string.
260 {
--> 261 self.output_key: self.output_parser.parse_result(generation),
262 "full_generation": generation,
263 }
[/usr/local/lib/python3.10/dist-packages/langchain/output_parsers/pydantic.py](https://localhost:8080/#) in parse_result(self, result, partial)
27 name = self.pydantic_object.__name__
28 msg = f"Failed to parse {name} from completion {json_object}. Got: {e}"
---> 29 raise OutputParserException(msg, llm_output=json_object)
30
31 def get_format_instructions(self) -> str:
OutputParserException: Failed to parse LineList from completion 1. Got: 1 validation error for LineList
__root__
LineList expected dict not int (type=type_error)
```
Can you look into this and help me resolve it? | returning error like LineList expected dict not int (type=type_error) while using MultiQueryRetriever | https://api.github.com/repos/langchain-ai/langchain/issues/17339/comments | 1 | 2024-02-09T20:05:31Z | 2024-02-14T03:34:56Z | https://github.com/langchain-ai/langchain/issues/17339 | 2,127,778,995 | 17,339
[
"langchain-ai",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
_No response_
### Idea or request for content:
Below is the code I tried running:
```
retriever = MultiQueryRetriever.from_llm(retriever=vectorStore.as_retriever(), llm=llm)
docs = retriever.get_relevant_documents(query="data related to cricket?")
```
Below is the output:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/pydantic/v1/main.py](https://localhost:8080/#) in parse_obj(cls, obj)
521 try:
--> 522 obj = dict(obj)
523 except (TypeError, ValueError) as e:
TypeError: 'int' object is not iterable
The above exception was the direct cause of the following exception:
ValidationError Traceback (most recent call last)
14 frames
ValidationError: 1 validation error for LineList
__root__
LineList expected dict not int (type=type_error)
During handling of the above exception, another exception occurred:
OutputParserException Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/langchain/output_parsers/pydantic.py](https://localhost:8080/#) in parse_result(self, result, partial)
27 name = self.pydantic_object.__name__
28 msg = f"Failed to parse {name} from completion {json_object}. Got: {e}"
---> 29 raise OutputParserException(msg, llm_output=json_object)
30
31 def get_format_instructions(self) -> str:
OutputParserException: Failed to parse LineList from completion 1. Got: 1 validation error for LineList
__root__
LineList expected dict not int (type=type_error)
```
I'm facing this issue for the first time while retrieving relevant documents. Can you have a look into it? | returning an error like LineList expected dict not int (type=type_error) while using MultiQueryRetriever | https://api.github.com/repos/langchain-ai/langchain/issues/17336/comments | 4 | 2024-02-09T19:18:14Z | 2024-02-18T08:32:35Z | https://github.com/langchain-ai/langchain/issues/17336 | 2,127,720,355 | 17,336
[
"langchain-ai",
"langchain"
@eyurtsev hello. I'd like to ask a follow-up question: the `COSINE` distance strategy is producing scores > 1. From this [code](https://github.com/langchain-ai/langchain/blob/023cb59e8aaf3dfaad684b3fcf57a1c363b9abd1/libs/core/langchain_core/vectorstores.py#L184C2-L188C1), it looks like the scores returned are calculated as `1 - distance`, meaning they are similarity scores, and cosine similarity scores should be in the [-1, 1] range, yet I get scores > 1.
Is there a reason for that? Am I missing something in my implementation? Thanks.
This is my code snippet:
```
embedder = NVIDIAEmbeddings(model="nvolveqa_40k")
store = FAISS.from_documents(docs_split, embedder, distance_strategy=DistanceStrategy.COSINE)
query = "Who is the director of the Oppenheimer movie?"
docs_and_scores = store.similarity_search_with_score(query)
```
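For context on the ranges involved: cosine *similarity* lies in [-1, 1], cosine *distance* (1 - similarity) lies in [0, 2], so `score = 1 - distance` maps back to [-1, 1]. A score above 1 therefore implies a negative distance, which would suggest the index is returning raw inner products (or another metric) rather than true cosine distances. A plain-Python check of the arithmetic:

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

a, b = [1.0, 0.0], [0.6, 0.8]
sim = cosine_similarity(a, b)   # cosine similarity, always in [-1, 1]
dist = 1.0 - sim                # cosine distance, always in [0, 2]
score = 1.0 - dist              # LangChain-style relevance score == sim
print(sim, dist, score)         # 0.6 0.4 0.6
```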
_Originally posted by @rnyak in https://github.com/langchain-ai/langchain/discussions/16224#discussioncomment-8413281_ | Question about the Cosine distance strategy | https://api.github.com/repos/langchain-ai/langchain/issues/17333/comments | 5 | 2024-02-09T18:40:28Z | 2024-07-02T08:49:16Z | https://github.com/langchain-ai/langchain/issues/17333 | 2,127,672,239 | 17,333 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
import os
import sys
import langchain_community
import langchain_core
from langchain_openai import AzureOpenAIEmbeddings
from langchain_community.vectorstores.azuresearch import AzureSearch
embeddings = AzureOpenAIEmbeddings(...)
vectordb = AzureSearch(...)
retriever = vectordb.as_retriever(search_type="similarity_score_threshold", search_kwargs={'score_threshold': 0.8})
#retriever = vectordb.as_retriever() # this works !
print(type(retriever))
query="what is the capital of poland?"
print(len(retriever.vectorstore.similarity_search_with_relevance_scores(query)))
import asyncio
async def f():
await retriever.vectorstore.asimilarity_search_with_relevance_scores(query)
loop = asyncio.get_event_loop()
print(
loop.run_until_complete(f())
)
```
### Error Message and Stack Trace (if applicable)
```
File "/Users/user01/miniconda3/envs/genai/lib/python3.11/site-packages/langchain/chains/retrieval_qa/base.py", line 232, in _aget_docs
return await self.retriever.aget_relevant_documents(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user01/miniconda3/envs/genai/lib/python3.11/site-packages/langchain_core/retrievers.py", line 280, in aget_relevant_documents
raise e
File "/Users/user01/miniconda3/envs/genai/lib/python3.11/site-packages/langchain_core/retrievers.py", line 273, in aget_relevant_documents
result = await self._aget_relevant_documents(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user01/miniconda3/envs/genai/lib/python3.11/site-packages/langchain_core/vectorstores.py", line 679, in _aget_relevant_documents
await self.vectorstore.asimilarity_search_with_relevance_scores(
File "/Users/user01/miniconda3/envs/genai/lib/python3.11/site-packages/langchain_core/vectorstores.py", line 351, in asimilarity_search_with_relevance_scores
docs_and_similarities = await self._asimilarity_search_with_relevance_scores(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user01/miniconda3/envs/genai/lib/python3.11/site-packages/langchain_core/vectorstores.py", line 278, in _asimilarity_search_with_relevance_scores
relevance_score_fn = self._select_relevance_score_fn()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user01/miniconda3/envs/genai/lib/python3.11/site-packages/langchain_core/vectorstores.py", line 208, in _select_relevance_score_fn
raise NotImplementedError
```
### Description
I encountered this issue when trying to use `RetrievalQA`. I then noticed that I cannot use a retriever with `similarity_score_threshold` enabled. I checked and found that the problem occurs only with `chain.ainvoke`, not `chain.invoke`.
The code above performs `similarity_search_with_relevance_scores` fine, but the asynchronous version fails.
It might be related to https://github.com/langchain-ai/langchain/issues/13242 but for async calls.
### System Info
```
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.2.0: Wed Nov 15 21:53:18 PST 2023; root:xnu-10002.61.3~2/RELEASE_ARM64_T6000
> Python Version: 3.11.7 (main, Dec 15 2023, 12:09:56) [Clang 14.0.6 ]
Package Information
-------------------
> langchain_core: 0.1.16
> langchain: 0.1.4
> langchain_community: 0.0.16
> langchain_openai: 0.0.5
``` | asimilarity_search_with_relevance_scores returns NotImplementedError with AzureSearch | https://api.github.com/repos/langchain-ai/langchain/issues/17329/comments | 1 | 2024-02-09T17:14:56Z | 2024-05-17T16:08:48Z | https://github.com/langchain-ai/langchain/issues/17329 | 2,127,547,322 | 17,329 |
[
"langchain-ai",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Hello,
I wrote code that uses normal/self-query retrieval and does not use a RetrievalQA chain. I want to add a chain-of-thought prompt for more accurate retrieval of documents. Below is the code I used to retrieve docs:
```
llm = OpenAI(temperature=0.2)
# Create a retriever for the vector database
document_content_description = "Description of a corporate document outlining human rights commitments and implementation strategies by an organization, including ethical principles, global agreements, and operational procedures."
metadata_field_info = [
{
"name": "document_type",
"description": "The type of document, such as policy statement, modern slavery statement, human rights due diligence manual.",
"type": "string",
},
{
"name": "company_name",
"description": "The name of the company that the document pertains to.",
"type": "string",
},
{
"name": "effective_date",
"description": "The date when the document or policy became effective.",
"type": "date",
},
{
"name": "document_year",
"description": "The year of the document.",
"type": "date",
},
]
retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
document_content_description,
metadata_field_info,
use_original_query=False,
enable_limit=True,
verbose=True
)
docs = retriever.get_relevant_documents("data related to cricket")
```
How should I add a prompt template to the above to get accurate retrieval? Can you help me with the code?
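For what it's worth, a chain-of-thought prompt is usually applied at answer time rather than inside the retriever (SelfQueryRetriever builds its own internal query-construction prompt). A minimal plain-Python sketch of such a template; the wording and the `{context}`/`{question}` variable names are illustrative, and wiring it into a LangChain `PromptTemplate`/QA chain is left out of the sketch:

```python
# Hypothetical chain-of-thought prompt text; in LangChain this string
# could be wrapped in a PromptTemplate with the same input variables.
COT_TEMPLATE = """You are answering questions about corporate documents.

Think step by step:
1. Restate what the question is really asking.
2. Identify which parts of the context are relevant.
3. Reason from those parts to an answer.

Context:
{context}

Question: {question}

Step-by-step reasoning and final answer:"""

def build_prompt(context: str, question: str) -> str:
    """Fill the chain-of-thought template with retrieved context and the user question."""
    return COT_TEMPLATE.format(context=context, question=question)

print(build_prompt("BCCI is a cricket board.", "What is BCCI?"))
```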
### Idea or request for content:
_No response_ | can i use chain of thoughts prompt for document retrieval? | https://api.github.com/repos/langchain-ai/langchain/issues/17326/comments | 2 | 2024-02-09T16:32:14Z | 2024-02-09T16:41:05Z | https://github.com/langchain-ai/langchain/issues/17326 | 2,127,480,794 | 17,326 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
The following code fails during iteration if the custom_parser is explicitly wrapped in a RunnableLambda.
```python
from langchain_openai import ChatOpenAI
from langchain.pydantic_v1 import BaseModel, Field
from typing import List, Optional
from langchain_core.runnables import RunnableLambda
model = ChatOpenAI()
class UserInfo(BaseModel):
"""Information to extract from the user's input"""
name: Optional[str] = Field(description = "Name of user")
facts: List[str] = Field(description="List of facts about the user")
model_with_tools = model.bind_tools([UserInfo])
async def custom_parser(chunk_stream):
aggregated_message = None
async for chunk in chunk_stream:
if aggregated_message is None:
aggregated_message = chunk
else:
aggregated_message += chunk
yield aggregated_message.additional_kwargs['tool_calls'][0]['function']
custom_parser = RunnableLambda(custom_parser)
chain = model_with_tools | custom_parser
async for chunk in chain.astream('my name is eugene and i like cats and dogs'):
print(chunk)
```
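For reference, the aggregation pattern `custom_parser` implements (yield the running aggregate once per chunk) can be reproduced without LangChain; this plain-asyncio sketch shows the behaviour the wrapped generator is expected to have, with the chunk strings standing in for message chunks:

```python
import asyncio
from typing import AsyncIterator, List

async def chunk_stream() -> AsyncIterator[str]:
    # Stand-in for the model's streamed chunks.
    for piece in ["my ", "name ", "is ", "eugene"]:
        yield piece

async def aggregate(chunks: AsyncIterator[str]) -> AsyncIterator[str]:
    aggregated = ""
    async for chunk in chunks:
        aggregated += chunk   # same += accumulation as custom_parser
        yield aggregated      # emit the running aggregate each step

async def main() -> List[str]:
    return [state async for state in aggregate(chunk_stream())]

print(asyncio.run(main()))
# ['my ', 'my name ', 'my name is ', 'my name is eugene']
```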
Need to investigate if this is a bug or bad UX | Async generator function fails when wrapped in a Runnable Lambda and used in streaming | https://api.github.com/repos/langchain-ai/langchain/issues/17315/comments | 2 | 2024-02-09T14:46:26Z | 2024-06-01T00:08:30Z | https://github.com/langchain-ai/langchain/issues/17315 | 2,127,288,780 | 17,315 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Below is the code:
```
llm = OpenAI(temperature=0.2)
# Create a retriever for the vector database
document_content_description = "Description of a corporate document."
metadata_field_info = [
{
"name": "document_type",
"description": "The type of document, such as policy statement, modern slavery statement, human rights due diligence manual.",
"type": "string",
},
{
"name": "company_name",
"description": "The name of the company that the document pertains to.",
"type": "string",
},
]
retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
document_content_description,
metadata_field_info,
use_original_query=False,
enable_limit=True,
verbose=True
)
structured_query = StructuredQuery(
query="related to BCCI company",
# query = "data related to ABinbev",
limit=3 # Set the number of documents to retrieve
)
docs = retriever.get_relevant_documents(structured_query)
print(docs)
```
Below is the output:
```
[Document(page_content='BCCI is a cricket board that controls and manages the activities in India', metadata={'row': 0, 'source': '/content/files/19.csv'}),
Document(page_content='BBCI is a cricket board in India', metadata={'row': 36, 'source': '/content/files/23.csv'}),
Document(page_content='BCCI is a cricket board that controls and manages the activities in India.', metadata={'row': 14, 'source': '/content/files/11.csv'})]
```
In the output above, there is an issue with duplicated page_content. I need a script that fetches the top k documents in a way that avoids these duplications. For example, if we request the top 5 documents (k=5) and find that 2 of them are duplicates, the script should discard the duplicates and retrieve additional unique documents so that we still receive a total of 5 unique documents. Essentially, if duplicates are found within the initially requested top documents, the script should continue fetching the next highest-ranked document(s) until we have 5 unique page_contents. Can you provide code that accomplishes this?
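A plain-Python sketch of the requested behaviour: grow the fetch limit until `k` unique page_contents are collected. Here `fetch(limit)` is a stand-in for a retriever call returning the top-`limit` `(content, metadata)` pairs, and the function and variable names are illustrative:

```python
def fetch_top_k_unique(fetch, k, max_limit=50):
    """Fetch documents until `k` unique page_contents are collected.

    Duplicates are dropped by normalized content (case, whitespace,
    trailing period), and the limit grows until `k` unique docs exist
    or `max_limit` is reached.
    """
    limit = k
    while True:
        seen, unique = set(), []
        for content, metadata in fetch(limit):
            key = content.strip().lower().rstrip(".")
            if key not in seen:
                seen.add(key)
                unique.append((content, metadata))
        if len(unique) >= k or limit >= max_limit:
            return unique[:k]
        limit += k - len(unique)   # ask for more to cover the duplicates

docs = [
    ("BCCI is a cricket board that controls and manages the activities in India", {"row": 0}),
    ("BBCI is a cricket board in India", {"row": 36}),
    ("BCCI is a cricket board that controls and manages the activities in India.", {"row": 14}),
    ("ICC is the world governing body of cricket", {"row": 2}),
]
print([m["row"] for _, m in fetch_top_k_unique(lambda limit: docs[:limit], k=3)])
# [0, 36, 2]
```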
### Idea or request for content:
_No response_ | returning duplicates while retrieving the documents | https://api.github.com/repos/langchain-ai/langchain/issues/17313/comments | 2 | 2024-02-09T13:50:24Z | 2024-04-25T09:20:44Z | https://github.com/langchain-ai/langchain/issues/17313 | 2,127,194,884 | 17,313 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Below is the code:
```
llm = OpenAI(temperature=0.2)
# Create a retriever for the vector database
document_content_description = "Description of a corporate document."
metadata_field_info = [
{
"name": "document_type",
"description": "The type of document, such as policy statement, modern slavery statement, human rights due diligence manual.",
"type": "string",
},
{
"name": "company_name",
"description": "The name of the company that the document pertains to.",
"type": "string",
},
]
retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
document_content_description,
metadata_field_info,
use_original_query=False,
enable_limit=True,
verbose=True
)
structured_query = StructuredQuery(
query="related to BCCI company",
# query = "data related to ABinbev",
limit=3 # Set the number of documents to retrieve
)
docs = retriever.get_relevant_documents(structured_query)
print(docs)
```
Below is the output:
```
[Document(page_content='BCCI is a cricket board that controls and manages the activities in India', metadata={'row': 0, 'source': '/content/files/19.csv'}),
Document(page_content='BBCI is a cricket board in India', metadata={'row': 36, 'source': '/content/files/23.csv'}),
Document(page_content='BCCI is a cricket board that controls and manages the activities in India.', metadata={'row': 14, 'source': '/content/files/11.csv'})]
```
The 1st and 3rd outputs have the same page_content. How do I deduplicate and make sure it retrieves the top k unique documents?
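A minimal post-retrieval filter is one option: normalize each `page_content` and keep only the first occurrence (plain Python, names illustrative; to still end up with k docs you would retrieve a few more than k before filtering):

```python
def dedupe_docs(docs):
    """Keep only the first document for each normalized page_content."""
    seen = set()
    unique = []
    for content, metadata in docs:
        # normalize case, collapse whitespace, drop a trailing period
        key = " ".join(content.lower().split()).rstrip(".")
        if key not in seen:
            seen.add(key)
            unique.append((content, metadata))
    return unique

retrieved = [
    ("BCCI is a cricket board that controls and manages the activities in India", {"row": 0}),
    ("BBCI is a cricket board in India", {"row": 36}),
    ("BCCI is a cricket board that controls and manages the activities in India.", {"row": 14}),
]
print([m["row"] for _, m in dedupe_docs(retrieved)])  # [0, 36]
```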
### Idea or request for content:
_No response_ | returning duplicates while retrieving the top k documents | https://api.github.com/repos/langchain-ai/langchain/issues/17310/comments | 9 | 2024-02-09T13:10:50Z | 2024-07-01T09:04:18Z | https://github.com/langchain-ai/langchain/issues/17310 | 2,127,127,974 | 17,310 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
from langchain.chains import load_chain
chain = load_chain('lc://chains/qa_with_sources/map-reduce/chain.json')
```
### Error Message and Stack Trace (if applicable)
```
from langchain.chains import load_chain
chain = load_chain('lc://chains/qa_with_sources/map-reduce/chain.json')
Traceback (most recent call last):
File "/snap/pycharm-professional/368/plugins/python/helpers/pydev/pydevconsole.py", line 364, in runcode
coro = func()
^^^^^^
File "<input>", line 1, in <module>
File "/home/pprados/workspace.bda/rag-template/.venv/lib/python3.11/site-packages/langchain/chains/loading.py", line 591, in load_chain
if hub_result := try_load_from_hub(
^^^^^^^^^^^^^^^^^^
File "/home/pprados/workspace.bda/rag-template/.venv/lib/python3.11/site-packages/langchain_core/utils/loading.py", line 54, in try_load_from_hub
return loader(str(file), **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pprados/workspace.bda/rag-template/.venv/lib/python3.11/site-packages/langchain/chains/loading.py", line 623, in _load_chain_from_file
return load_chain_from_config(config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pprados/workspace.bda/rag-template/.venv/lib/python3.11/site-packages/langchain/chains/loading.py", line 586, in load_chain_from_config
return chain_loader(config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pprados/workspace.bda/rag-template/.venv/lib/python3.11/site-packages/langchain/chains/loading.py", line 124, in _load_map_reduce_documents_chain
reduce_documents_chain = _load_reduce_documents_chain(config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pprados/workspace.bda/rag-template/.venv/lib/python3.11/site-packages/langchain/chains/loading.py", line 178, in _load_reduce_documents_chain
return ReduceDocumentsChain(
^^^^^^^^^^^^^^^^^^^^^
File "/home/pprados/workspace.bda/rag-template/.venv/lib/python3.11/site-packages/langchain_core/load/serializable.py", line 107, in __init__
super().__init__(**kwargs)
File "/home/pprados/workspace.bda/rag-template/.venv/lib/python3.11/site-packages/pydantic/v1/main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 2 validation errors for ReduceDocumentsChain
document_variable_name
extra fields not permitted (type=value_error.extra)
return_intermediate_steps
extra fields not permitted (type=value_error.extra)
```
### Description
We want to create a chain to manipulate a list of documents.
### System Info
langchain==0.1.4
langchain-community==0.0.16
langchain-core==0.1.21
langchain-experimental==0.0.49
langchain-openai==0.0.2.post1
langchain-rag==0.1.18
Linux
Python 3.11.4 | Impossible to load chain `lc://chains/qa_with_sources/map-reduce/chain.json` | https://api.github.com/repos/langchain-ai/langchain/issues/17309/comments | 1 | 2024-02-09T12:45:06Z | 2024-05-17T16:08:43Z | https://github.com/langchain-ai/langchain/issues/17309 | 2,127,065,395 | 17,309 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain.memory import PostgresChatMessageHistory
connection_string = ""
history = PostgresChatMessageHistory(
connection_string=connection_string,
session_id="trial1",
table_name= "schema.table"
)
history.add_user_message("Msg user 1")
history.add_ai_message("Msg AI 1")
print(history.messages)
```
### Error Message and Stack Trace (if applicable)
```error
Traceback (most recent call last):
File "d:\xtransmatrix\SmartSurgn\SessionHandler\test.py", line 9, in <module>
history.add_user_message("Msg user 1")
File "D:\xtransmatrix\SmartSurgn\env\lib\site-packages\langchain\schema\chat_history.py", line 46, in add_user_message
self.add_message(HumanMessage(content=message))
File "D:\xtransmatrix\SmartSurgn\env\lib\site-packages\langchain\memory\chat_message_histories\postgres.py", line 66, in add_message
self.cursor.execute(
File "D:\xtransmatrix\SmartSurgn\env\lib\site-packages\psycopg\cursor.py", line 732, in execute
raise ex.with_traceback(None)
psycopg.errors.UndefinedTable: relation "smartsurgn.msg_history" does not exist
LINE 1: INSERT INTO "schema.table" (session_id, message) V...
^
```
### Description
We are asked to provide a table name to store the chat message history, and by default the table is created in the `public` schema. To store it in a different schema I should be able to pass the table name as `<schema_name>.<table_name>`, but when I do this it runs into the above error. Note: the table gets created in the right schema, but writing to it fails.
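The failure comes from quoting the whole string as a single identifier, so the INSERT targets a table literally named `"schema.table"`. A minimal sketch of the quoting the class would need (the `quote_qualified` helper is hypothetical, not part of LangChain): quote each dot-separated part separately.

```python
def quote_qualified(name: str) -> str:
    """Quote a possibly schema-qualified identifier for PostgreSQL.

    "smartsurgn.msg_history" -> '"smartsurgn"."msg_history"'
    instead of one identifier containing a dot.
    """
    return ".".join(
        '"{}"'.format(part.replace('"', '""')) for part in name.split(".")
    )

print(quote_qualified("smartsurgn.msg_history"))  # "smartsurgn"."msg_history"
print(quote_qualified("plain_table"))             # "plain_table"
```

Until the library handles this, a workaround is to keep the table in the default schema or set the connection's `search_path` to the target schema.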
### System Info
`langchain==0.1.5` | Chat message history with postgres failing when destination table has explicit schema | https://api.github.com/repos/langchain-ai/langchain/issues/17306/comments | 9 | 2024-02-09T10:57:56Z | 2024-08-02T11:37:54Z | https://github.com/langchain-ai/langchain/issues/17306 | 2,126,902,036 | 17,306 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
below's the code
```
llm = OpenAI(temperature=0.2)
# Create a retriever for the vector database
document_content_description = "Description of a corporate document outlining"
metadata_field_info = [
{
"name": "document_type",
"description": "The type of document, such as policy statement, modern slavery statement, human rights due diligence manual.",
"type": "string",
},
{
"name": "company_name",
"description": "The name of the company that the document pertains to.",
"type": "string",
},
{
"name": "effective_date",
"description": "The date when the document or policy became effective.",
"type": "date",
},
{
"name": "document_year",
"description": "The year of the document.",
"type": "date",
},
]
retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
document_content_description,
metadata_field_info,
use_original_query=False,
verbose=True
)
```
`docs = retriever.get_relevant_documents("Does the company has one or more")`
I don't see any option like the one below for SelfQueryRetriever:
`retriever = vectorstore.as_retriever(search_kwargs={"k": 5})`
Can you help me with how to retrieve the top k docs for SelfQueryRetriever?
### Idea or request for content:
_No response_ | how to retriever top k docs in SelfqueryRetriever? | https://api.github.com/repos/langchain-ai/langchain/issues/17301/comments | 2 | 2024-02-09T08:53:21Z | 2024-02-09T14:51:46Z | https://github.com/langchain-ai/langchain/issues/17301 | 2,126,714,532 | 17,301 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
I followed the code in this LangChain doc: https://python.langchain.com/docs/modules/agents/agent_types/structured_chat
using GPT 3.5 Turbo
The error does not show up when I use GPT4
### Error Message and Stack Trace (if applicable)
_No response_
### Description
The error I get is this:
> Entering new AgentExecutor chain...
{
"action": "Retriever",
"action_input": {
"title": "Key factors to consider when evaluating the return on investment from AI initiatives"
}
}
ValidationError: 1 validation error for RetrieverInput
query
field required (type=value_error.missing)
### System Info
openai==1.7.0
langchain==0.1.1 | Structured chat with GPT3.5 Turbo ValidationError: 1 validation error for RetrieverInput query field required (type=value_error.missing) | https://api.github.com/repos/langchain-ai/langchain/issues/17300/comments | 5 | 2024-02-09T08:25:04Z | 2024-05-17T16:08:38Z | https://github.com/langchain-ai/langchain/issues/17300 | 2,126,678,447 | 17,300 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
below's the code
```
class MyTextSplitter(RecursiveCharacterTextSplitter):
def split_documents(self, documents):
chunks = super().split_documents(documents)
for document, chunk in zip(documents, chunks):
chunk.metadata['source'] = document.metadata['source']
return chunks
text_splitter = MyTextSplitter(chunk_size=500,
chunk_overlap=50)
texts = text_splitter.split_documents(documents)
vectorstore = Chroma.from_documents(texts, embeddings)
```
I'm looking for assistance in using the MyTextSplitter function only for documents whose page_content surpasses OpenAI's maximum context length, not for all of them. For each document I want to check whether its page_content exceeds the limit allowed by OpenAI embeddings; if it does, MyTextSplitter should divide the page_content into smaller sections, and those sections should replace the original, longer page_content in the respective document. If any section contains or constitutes an answer, the sections should be reassembled into a cohesive response before being delivered. Can you provide code for me?
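The split-only-the-long-documents logic can be sketched as follows. This is a self-contained illustration: `Doc` stands in for LangChain's `Document`, a plain character slice stands in for `MyTextSplitter`, and `MAX_CHARS` is an assumed proxy for the embedding context limit (in practice you would count tokens, not characters):

```python
from collections import namedtuple

Doc = namedtuple("Doc", ["page_content", "metadata"])
MAX_CHARS = 500  # assumed stand-in for the real token limit

def split_only_long(documents, max_chars=MAX_CHARS):
    out = []
    for doc in documents:
        text = doc.page_content
        if len(text) <= max_chars:
            out.append(doc)  # short documents pass through untouched
            continue
        # replace the long document with smaller chunks carrying its metadata
        for start in range(0, len(text), max_chars):
            out.append(Doc(text[start:start + max_chars], dict(doc.metadata)))
    return out

docs = [Doc("short text", {"source": "a"}), Doc("x" * 1200, {"source": "b"})]
result = split_only_long(docs)
print(len(result))  # 4: one short doc kept, plus three chunks of the long one
```

With real documents you would call `MyTextSplitter.split_documents([doc])` in place of the character slice, then pass the resulting list to `Chroma.from_documents`.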
### Idea or request for content:
_No response_ | handle context length in chroma db | https://api.github.com/repos/langchain-ai/langchain/issues/17299/comments | 4 | 2024-02-09T07:27:16Z | 2024-02-09T14:52:09Z | https://github.com/langchain-ai/langchain/issues/17299 | 2,126,613,340 | 17,299 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
below's the code
```
class MyTextSplitter(RecursiveCharacterTextSplitter):
def split_documents(self, documents):
chunks = super().split_documents(documents)
for document, chunk in zip(documents, chunks):
chunk.metadata['source'] = document.metadata['source']
return chunks
text_splitter = MyTextSplitter(chunk_size=500,
chunk_overlap=50)
texts = text_splitter.split_documents(documents)
vectorstore = Chroma.from_documents(texts, embeddings)
```
I'm looking for assistance in using the MyTextSplitter function specifically for cases where the text surpasses OpenAI's maximum context length. For each document I want to check whether its page_content exceeds the limit allowed by OpenAI embeddings; if it does, MyTextSplitter should divide the page_content into smaller sections, and those sections should replace the original, longer page_content in the respective documents. If any section contains or constitutes an answer, the sections should be reassembled into a cohesive response before being delivered. Can you provide code for me?
### Idea or request for content:
_No response_ | handling context length in chromadb | https://api.github.com/repos/langchain-ai/langchain/issues/17298/comments | 2 | 2024-02-09T07:23:49Z | 2024-02-09T14:53:29Z | https://github.com/langchain-ai/langchain/issues/17298 | 2,126,609,550 | 17,298 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
I'm using a function which again uses RecursiveCharacterTextSplitter in below which's working
```
class MyTextSplitter(RecursiveCharacterTextSplitter):
def split_documents(self, documents):
chunks = super().split_documents(documents)
for document, chunk in zip(documents, chunks):
chunk.metadata['source'] = document.metadata['source']
return chunks
text_splitter = MyTextSplitter(chunk_size=500,
chunk_overlap=50)
texts = text_splitter.split_documents(documents)
vectorstore = Chroma.from_documents(texts, embeddings)
```
I'm looking for assistance in using the MyTextSplitter function specifically for cases where the text surpasses OpenAI's maximum context length. The objective is to divide the text into smaller segments when it's too lengthy and, if any segment contains or constitutes an answer, to reassemble those segments into a cohesive response before delivering it. Can you provide code for this?
### Idea or request for content:
_No response_ | how to handle the context lengths in ChromaDB? | https://api.github.com/repos/langchain-ai/langchain/issues/17297/comments | 1 | 2024-02-09T07:11:09Z | 2024-02-09T14:49:07Z | https://github.com/langchain-ai/langchain/issues/17297 | 2,126,595,885 | 17,297 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
below's the code i'm using to try for handling longer context lengths
```
# Instantiate the OpenAIEmbeddings class
openai = OpenAIEmbeddings(openai_api_key="sk-")
# Generate embeddings for your documents
documents = [doc for doc in documents]
# Create a Chroma vector store from the documents
vectorstore = Chroma.from_documents(documents, openai.embed_documents)
```
it is returning below error
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
[<ipython-input-81-029281717453>](https://localhost:8080/#) in <cell line: 8>()
6
7 # Create a Chroma vector store from the documents
----> 8 vectorstore = Chroma.from_documents(documents, openai.embed_documents)
2 frames
[/usr/local/lib/python3.10/dist-packages/langchain_community/vectorstores/chroma.py](https://localhost:8080/#) in from_documents(cls, documents, embedding, ids, collection_name, persist_directory, client_settings, client, collection_metadata, **kwargs)
776 texts = [doc.page_content for doc in documents]
777 metadatas = [doc.metadata for doc in documents]
--> 778 return cls.from_texts(
779 texts=texts,
780 embedding=embedding,
[/usr/local/lib/python3.10/dist-packages/langchain_community/vectorstores/chroma.py](https://localhost:8080/#) in from_texts(cls, texts, embedding, metadatas, ids, collection_name, persist_directory, client_settings, client, collection_metadata, **kwargs)
734 documents=texts,
735 ):
--> 736 chroma_collection.add_texts(
737 texts=batch[3] if batch[3] else [],
738 metadatas=batch[2] if batch[2] else None,
[/usr/local/lib/python3.10/dist-packages/langchain_community/vectorstores/chroma.py](https://localhost:8080/#) in add_texts(self, texts, metadatas, ids, **kwargs)
273 texts = list(texts)
274 if self._embedding_function is not None:
--> 275 embeddings = self._embedding_function.embed_documents(texts)
276 if metadatas:
277 # fill metadatas with empty dicts if somebody
AttributeError: 'function' object has no attribute 'embed_documents'
```
Can you assist me in dealing with the context-length issue? I don't want to use RecursiveCharacterTextSplitter because I've already chunked the data manually; I just want to send the data to ChromaDB while handling its context length.
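The `AttributeError` here is not about context length at all: `Chroma.from_documents` expects the embeddings *object* (so the fix for the snippet above is likely `Chroma.from_documents(documents, openai)`), not its bound `embed_documents` method, because Chroma calls `.embed_documents(...)` on whatever you pass. A self-contained reproduction with a stand-in class:

```python
class FakeEmbeddings:
    """Stand-in for OpenAIEmbeddings so the example runs offline."""
    def embed_documents(self, texts):
        return [[float(len(t))] for t in texts]

def from_documents(docs, embedding_function):
    # mimics the failing call inside Chroma: it invokes
    # embedding_function.embed_documents(...), and a bound method
    # has no .embed_documents attribute of its own
    return embedding_function.embed_documents(docs)

emb = FakeEmbeddings()
print(from_documents(["a", "bb"], emb))  # [[1.0], [2.0]] -- pass the object
try:
    from_documents(["a"], emb.embed_documents)  # the bug in the snippet above
except AttributeError as e:
    print(type(e).__name__)  # AttributeError
```

Passing the object also means Chroma applies the embedding model's own length handling internally.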
### Idea or request for content:
_No response_ | unable to use embed_documents function for ChromaDB | https://api.github.com/repos/langchain-ai/langchain/issues/17295/comments | 1 | 2024-02-09T06:53:08Z | 2024-02-09T14:49:50Z | https://github.com/langchain-ai/langchain/issues/17295 | 2,126,577,191 | 17,295 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
below's the FAISS code which i tried to run on for chromadb too, but its not working
```
# Instantiate the OpenAIEmbeddings class
openai = OpenAIEmbeddings(openai_api_key="")
# Generate embeddings for your documents
embeddings = openai.embed_documents([doc.page_content for doc in documents])
# Create tuples of text and corresponding embedding
text_embeddings = list(zip([doc.page_content for doc in documents], embeddings))
# Create a FAISS vector store from the embeddings
vectorStore = FAISS.from_embeddings(text_embeddings, openai)
```
below's the code for chromadb
```
# Instantiate the OpenAIEmbeddings class
openai = OpenAIEmbeddings(openai_api_key="")
# Generate embeddings for your documents
embeddings = openai.embed_documents([doc.page_content for doc in documents])
# Create tuples of text and corresponding embedding
text_embeddings = list(zip([doc.page_content for doc in documents], embeddings))
# Create a Chroma vector store from the embeddings
vectorstore = Chroma.from_embeddings(text_embeddings, openai)
```
the error is below
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
[<ipython-input-57-81938d77957d>](https://localhost:8080/#) in <cell line: 11>()
9
10 # Create a FAISS vector store from the embeddings
---> 11 vectorstore = Chroma.from_embeddings(text_embeddings, openai)
AttributeError: type object 'Chroma' has no attribute 'from_embeddings'
```
Can you help me resolve this issue I'm facing with ChromaDB?
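The error is expected: `Chroma` simply does not define a `from_embeddings` constructor the way `FAISS` does, so the usual route is to pass the embeddings object to `Chroma.from_documents(documents, openai)` and let it embed internally. A toy illustration of why the call fails (the classes here are stand-ins, not the real vector stores):

```python
class FAISSLike:
    @classmethod
    def from_embeddings(cls, text_embeddings, embedding):
        return cls()
    @classmethod
    def from_documents(cls, docs, embedding):
        return cls()

class ChromaLike:
    # no from_embeddings here, mirroring the real Chroma class
    @classmethod
    def from_documents(cls, docs, embedding):
        return cls()

print(hasattr(FAISSLike, "from_embeddings"))   # True
print(hasattr(ChromaLike, "from_embeddings"))  # False, hence the AttributeError
print(hasattr(ChromaLike, "from_documents"))   # True, use this instead
```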
### Idea or request for content:
_No response_ | unable to apply the same code on Chroma db which i've used for FAISS | https://api.github.com/repos/langchain-ai/langchain/issues/17292/comments | 6 | 2024-02-09T06:09:18Z | 2024-02-09T14:52:17Z | https://github.com/langchain-ai/langchain/issues/17292 | 2,126,535,653 | 17,292 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
llm = Bedrock(
client=client,
model_id='amazon.titan-text-express-v1',
model_kwargs={'temperature': 0.1},
endpoint_url='https://bedrock-runtime.us-east-1.amazonaws.com',
region_name='us-east-1',
verbose=True,
)
```
Is there a way I can provide the max output limit in model_kwargs? I am using the LLM in SQLDatabaseChain and seeing that the SQL command generated by the LLM gets truncated, possibly because of the default max token limit.
### Error Message and Stack Trace (if applicable)
truncates the SQL commands suddenly after 128 tokens
### Description
```
llm = Bedrock(
client=client,
model_id='amazon.titan-text-express-v1',
model_kwargs={'temperature': 0.1},
endpoint_url='https://bedrock-runtime.us-east-1.amazonaws.com',
region_name='us-east-1',
verbose=True,
)
```
Is there a way I can provide the max output limit in model_kwargs? I am using the LLM in SQLDatabaseChain and seeing that the SQL command generated by the LLM gets truncated, possibly because of the default max token limit.
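For Amazon Titan models the output cap is controlled by `maxTokenCount`. As far as I can tell, the LangChain Bedrock wrapper forwards `model_kwargs` into the request's `textGenerationConfig` for the `amazon` provider, so `model_kwargs={'temperature': 0.1, 'maxTokenCount': 4096}` should raise the limit; verify the key names against the Bedrock Titan documentation. The sketch below only mimics how such a request body is assembled, it does not call Bedrock:

```python
import json

def build_titan_body(prompt, model_kwargs):
    # approximate shape of a Titan invoke-model request:
    # everything in model_kwargs lands inside textGenerationConfig
    return json.dumps({
        "inputText": prompt,
        "textGenerationConfig": dict(model_kwargs),
    })

body = build_titan_body(
    "Write a SQL query ...",
    {"temperature": 0.1, "maxTokenCount": 4096},
)
print(json.loads(body)["textGenerationConfig"]["maxTokenCount"])  # 4096
```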
### System Info
boto3==1.34.29
chromadb==0.4.22
huggingface-hub==0.20.3
langchain==0.1.4
langchain-experimental==0.0.49
python-dotenv==1.0.1
sentence_transformers==2.3.0
snowflake-connector-python==3.7.0
snowflake-sqlalchemy==1.5.1
SQLAlchemy==1.4.51
streamlit==1.30.0
watchdog==3.0.0
| How to set max_output_token for AWS Bedrock Titan text express model? | https://api.github.com/repos/langchain-ai/langchain/issues/17287/comments | 18 | 2024-02-09T03:52:00Z | 2024-02-14T04:22:58Z | https://github.com/langchain-ai/langchain/issues/17287 | 2,126,422,842 | 17,287 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
I have created a function that creates an agent and returns the agent executor to run a query. Here's the code:
```
def agent_executor(tools: List):
try:
# Create the language model
llm = ChatOpenAI(model="gpt-4-1106-preview", temperature=0)
prompt = ChatPromptTemplate.from_messages(
[
SystemMessagePromptTemplate.from_template(
input_variables=["tool_names", "tools", "task_context"],
template=SYSTEM_PROMPT_TEMPLATE,
),
MessagesPlaceholder(variable_name="chat_history", optional=True),
HumanMessagePromptTemplate.from_template(
input_variables=["input", "chat_history", "agent_scratchpad"],
template=HUMAN_PROMPT_TEMPLATE,
),
]
)
# print(f"Prompt: {prompt}")
# Create the memory object
memory = ConversationBufferWindowMemory(
memory_key="chat_history", k=5, return_messages=True, output_key="output"
)
# Construct the JSON agent
if task_context is not None:
agent = create_agent(llm, tools, prompt)
else:
agent = create_structured_chat_agent(llm, tools, prompt)
# Create an agent executor by passing in the agent and tools
agent_executor = AgentExecutor.from_agent_and_tools(
agent=agent,
tools=tools,
verbose=True,
memory=memory,
handle_parsing_errors=True,
return_intermediate_steps=True,
max_iterations=4,
max_execution_time=100,
)
return agent_executor
except Exception as e:
print(f"Error in executing agent: {e}")
return None
```
I have created a `ConversationBufferWindowMemory` for this agent and assigned it as the memory. This is how I am running this agent:
```
query = "Reply to this email......"
tools = [create_email_reply]
agent = agent_executor(tools)
response = agent.invoke({"input": query})
return response["output"]
```
When I run this agent, sometimes the final answer is not a string and may look as follows:
Action:
```json
{
"action": "Final Answer",
"action_input": {
"email": {
"to": "customer@example.com",
"subject": "Status of Your Invoice post_713",
"body": "Dear valued customer,\n\nThank you for reaching out to us with your inquiry about the status of your invoice number post_713.\n\nI am pleased to inform you that the invoice has been successfully posted and the payment status is marked as 'Paid'. The payment was processed on November 14, 2023, with the payment reference number 740115.\n\nShould you require any further assistance or have any additional questions, please do not hesitate to contact us at assist@aexonic.com.\n\nBest regards,\n\n[Your Name]\nCustomer Service Team\nAexonic Technologies"
}
}
}
```
You can see the final answer looks like a dictionary. In this case the agent execution shows an error after the final answer. If I remove the memory, the agent executes without any error.
### Error Message and Stack Trace (if applicable)
Action:
```json
{
"action": "Final Answer",
"action_input": {
"email": {
"to": "customer@example.com",
"subject": "Status of Your Invoice post_713",
"body": "Dear valued customer,\n\nThank you for reaching out to us with your inquiry about the status of your invoice number post_713.\n\nI am pleased to inform you that the invoice has been successfully posted and the payment status is marked as 'Paid'. The payment was processed on November 14, 2023, with the payment reference number 740115.\n\nShould you require any further assistance or have any additional questions, please do not hesitate to contact us at assist@aexonic.com.\n\nBest regards,\n\n[Your Name]\nCustomer Service Team\nAexonic Technologies"
}
}
}
```
> Finished chain.
Traceback (most recent call last):
File "/Users/Cipher/AssistCX/assistcx-agent/main.py", line 80, in <module>
response = main(query)
^^^^^^^^^^^
File "/Users/Cipher/AssistCX/assistcx-agent/main.py", line 72, in main
agent_output = invoice_agent(query)
^^^^^^^^^^^^^^^^^^^^
File "/Users/Cipher/AssistCX/assistcx-agent/main.py", line 41, in invoice_agent
response = agent.invoke({"input": query})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/Cipher/AssistCX/assistcx-agent/.venv/lib/python3.11/site-packages/langchain/chains/base.py", line 164, in invoke
final_outputs: Dict[str, Any] = self.prep_outputs(
^^^^^^^^^^^^^^^^^^
File "/Users/Cipher/AssistCX/assistcx-agent/.venv/lib/python3.11/site-packages/langchain/chains/base.py", line 440, in prep_outputs
self.memory.save_context(inputs, outputs)
File "/Users/Cipher/AssistCX/assistcx-agent/.venv/lib/python3.11/site-packages/langchain/memory/chat_memory.py", line 39, in save_context
self.chat_memory.add_ai_message(output_str)
File "/Users/Cipher/AssistCX/assistcx-agent/.venv/lib/python3.11/site-packages/langchain_core/chat_history.py", line 122, in add_ai_message
self.add_message(AIMessage(content=message))
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/Cipher/AssistCX/assistcx-agent/.venv/lib/python3.11/site-packages/langchain_core/messages/base.py", line 35, in __init__
return super().__init__(content=content, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/Cipher/AssistCX/assistcx-agent/.venv/lib/python3.11/site-packages/langchain_core/load/serializable.py", line 107, in __init__
super().__init__(**kwargs)
File "/Users/Cipher/AssistCX/assistcx-agent/.venv/lib/python3.11/site-packages/pydantic/v1/main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 2 validation errors for AIMessage
content
str type expected (type=type_error.str)
content
value is not a valid list (type=type_error.list)
### Description
I am trying to create a structured chat agent with memory. When the agent's final answer is not a string, the execution fails with an error after the final answer. If I remove the memory, it works fine.
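The traceback shows the memory ultimately constructs `AIMessage(content=...)`, which only accepts a string (or list of content parts), so a dict final answer fails validation. A workaround is to coerce the output to a string before the memory saves it, for example by post-processing `response["output"]` or in a custom memory subclass. Minimal sketch (`coerce_output_to_str` is a hypothetical helper, not a LangChain API):

```python
import json

def coerce_output_to_str(output):
    """Serialise dict/list final answers so AIMessage(content=...) accepts them."""
    if isinstance(output, (dict, list)):
        return json.dumps(output, indent=2)
    return str(output)

final_answer = {"email": {"to": "customer@example.com", "subject": "Status"}}
print(type(coerce_output_to_str(final_answer)).__name__)  # str
print(coerce_output_to_str("plain text answer"))          # plain text answer
```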
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.1.0: Mon Oct 9 21:32:11 PDT 2023; root:xnu-10002.41.9~7/RELEASE_ARM64_T6030
> Python Version: 3.11.5 (main, Sep 15 2023, 16:17:37) [Clang 14.0.3 (clang-1403.0.22.14.1)]
Package Information
-------------------
> langchain_core: 0.1.21
> langchain: 0.1.5
> langchain_community: 0.0.19
> langsmith: 0.0.87
> langchain_openai: 0.0.5
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Agent executor with memory gives error after final answer | https://api.github.com/repos/langchain-ai/langchain/issues/17269/comments | 5 | 2024-02-08T22:19:52Z | 2024-06-18T11:55:39Z | https://github.com/langchain-ai/langchain/issues/17269 | 2,126,162,650 | 17,269 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
below's the code
```
def _get_len_safe_embeddings(
self, texts: List[str], *, engine: str, chunk_size: Optional[int] = None
) -> List[List[float]]:
"""
Generate length-safe embeddings for a list of texts.
This method handles tokenization and embedding generation, respecting the
set embedding context length and chunk size. It supports both tiktoken
and HuggingFace tokenizer based on the tiktoken_enabled flag.
Args:
texts (List[str]): A list of texts to embed.
engine (str): The engine or model to use for embeddings.
chunk_size (Optional[int]): The size of chunks for processing embeddings.
Returns:
List[List[float]]: A list of embeddings for each input text.
"""
tokens = []
indices = []
model_name = self.tiktoken_model_name or self.model
_chunk_size = chunk_size or self.chunk_size
# If tiktoken flag set to False
if not self.tiktoken_enabled:
try:
from transformers import AutoTokenizer
except ImportError:
raise ValueError(
"Could not import transformers python package. "
"This is needed in order to for OpenAIEmbeddings without "
"`tiktoken`. Please install it with `pip install transformers`. "
)
tokenizer = AutoTokenizer.from_pretrained(
pretrained_model_name_or_path=model_name
)
for i, text in enumerate(texts):
# Tokenize the text using HuggingFace transformers
tokenized = tokenizer.encode(text, add_special_tokens=False)
# Split tokens into chunks respecting the embedding_ctx_length
for j in range(0, len(tokenized), self.embedding_ctx_length):
token_chunk = tokenized[j : j + self.embedding_ctx_length]
tokens.append(token_chunk)
indices.append(i)
# Embed each chunk separately
batched_embeddings = []
for i in range(0, len(tokens), _chunk_size):
token_batch = tokens[i : i + _chunk_size]
response = embed_with_retry(
self,
inputs=token_batch,
**self._invocation_params,
)
if not isinstance(response, dict):
response = response.dict()
batched_embeddings.extend(r["embedding"] for r in response["data"])
# Concatenate the embeddings for each text
embeddings: List[List[float]] = [[] for _ in range(len(texts))]
for i in range(len(indices)):
embeddings[indices[i]].extend(batched_embeddings[i])
return embeddings
```
followed by below
```
# Instantiate the OpenAIEmbeddings class
openai = OpenAIEmbeddings(openai_api_key="")
# Generate embeddings for your documents
embeddings = get_len_safe_embeddings([doc.page_content for doc in documents], engine="text-embedding-ada-002")
# Create tuples of text and corresponding embedding
text_embeddings = list(zip([doc.page_content for doc in documents], embeddings))
# Create a FAISS vector store from the embeddings
vectorStore = FAISS.from_embeddings(text_embeddings, openai)
```
it has returned below issue
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[<ipython-input-31-14fb4a40f661>](https://localhost:8080/#) in <cell line: 5>()
3
4 # Generate embeddings for your documents
----> 5 embeddings = _get_len_safe_embeddings([doc.page_content for doc in documents], engine="text-embedding-ada-002")
6
7 # Create tuples of text and corresponding embedding
TypeError: _get_len_safe_embeddings() missing 1 required positional argument: 'texts'
```
Can you assist me with this code? It would be much better to resolve this issue. Can you write updated code?
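The `TypeError` arises because the pasted code is an *instance method*: called as a plain function, the first positional argument is consumed by `self`, so Python reports `texts` as missing. A minimal reproduction (the class here is a stand-in; in practice you would likely just call `openai.embed_documents(texts)`, which applies the length-safe logic internally):

```python
class Embedder:
    def _get_len_safe_embeddings(self, texts, *, engine):
        return [[0.0] for _ in texts]  # stand-in for real embeddings

try:
    # the call from the question: ["hello"] binds to `self`, not `texts`
    Embedder._get_len_safe_embeddings(["hello"], engine="ada")
except TypeError as e:
    print(e)  # ... missing 1 required positional argument: 'texts'

# correct: call on an instance so `self` is supplied automatically
print(Embedder()._get_len_safe_embeddings(["hello"], engine="ada"))  # [[0.0]]
```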
### Idea or request for content:
_No response_ | unable to run get_len_safe_embeddings function which i wrote | https://api.github.com/repos/langchain-ai/langchain/issues/17267/comments | 2 | 2024-02-08T22:15:00Z | 2024-02-09T14:50:36Z | https://github.com/langchain-ai/langchain/issues/17267 | 2,126,156,320 | 17,267 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
below's the code i tried to use for handling long context lengths
```
def _get_len_safe_embeddings(
    self, texts: List[str], *, engine: str, chunk_size: Optional[int] = None
) -> List[List[float]]:
    """
    Generate length-safe embeddings for a list of texts.

    This method handles tokenization and embedding generation, respecting the
    set embedding context length and chunk size. It supports both tiktoken
    and HuggingFace tokenizer based on the tiktoken_enabled flag.

    Args:
        texts (List[str]): A list of texts to embed.
        engine (str): The engine or model to use for embeddings.
        chunk_size (Optional[int]): The size of chunks for processing embeddings.

    Returns:
        List[List[float]]: A list of embeddings for each input text.
    """
    tokens = []
    indices = []
    model_name = self.tiktoken_model_name or self.model
    _chunk_size = chunk_size or self.chunk_size

    # If tiktoken flag set to False
    if not self.tiktoken_enabled:
        try:
            from transformers import AutoTokenizer
        except ImportError:
            raise ValueError(
                "Could not import transformers python package. "
                "This is needed in order to for OpenAIEmbeddings without "
                "`tiktoken`. Please install it with `pip install transformers`. "
            )

        tokenizer = AutoTokenizer.from_pretrained(
            pretrained_model_name_or_path=model_name
        )
        for i, text in enumerate(texts):
            # Tokenize the text using HuggingFace transformers
            tokenized = tokenizer.encode(text, add_special_tokens=False)

            # Split tokens into chunks respecting the embedding_ctx_length
            for j in range(0, len(tokenized), self.embedding_ctx_length):
                token_chunk = tokenized[j : j + self.embedding_ctx_length]
                tokens.append(token_chunk)
                indices.append(i)

    # Embed each chunk separately
    batched_embeddings = []
    for i in range(0, len(tokens), _chunk_size):
        token_batch = tokens[i : i + _chunk_size]
        response = embed_with_retry(
            self,
            inputs=token_batch,
            **self._invocation_params,
        )
        if not isinstance(response, dict):
            response = response.dict()
        batched_embeddings.extend(r["embedding"] for r in response["data"])

    # Concatenate the embeddings for each text
    embeddings: List[List[float]] = [[] for _ in range(len(texts))]
    for i in range(len(indices)):
        embeddings[indices[i]].extend(batched_embeddings[i])

    return embeddings
```
I am unable to run the above function, as it returns the error below; in addition, some of the functions it calls are not defined:
```
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
[<ipython-input-19-0e438cfc104c>](https://localhost:8080/#) in <cell line: 2>()
1 def _get_len_safe_embeddings(
----> 2 self, texts: List[str], *, engine: str, chunk_size: Optional[int] = None
3 ) -> List[List[float]]:
4 """
5 Generate length-safe embeddings for a list of texts.
NameError: name 'List' is not defined
```
followed by
```
# Instantiate the OpenAIEmbeddings class
openai = OpenAIEmbeddings(openai_api_key="")
# Generate embeddings for your documents
embeddings = openai._get_len_safe_embeddings([doc.page_content for doc in documents], engine="text-embedding-ada-002")
# Create tuples of text and corresponding embedding
text_embeddings = list(zip([doc.page_content for doc in documents], embeddings))
# Create a FAISS vector store from the embeddings
vectorStore = FAISS.from_embeddings(text_embeddings, openai)
```
Is the complete function code I gave above the same as the method I'm calling on `OpenAIEmbeddings`? If not, can you write code showing how I can use the custom function, and update the code that follows it?
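Regarding the `NameError`: it just means the `typing` names were never imported before the copied function was defined. A small sketch of the fix (illustrative only — the real method also depends on LangChain internals such as `embed_with_retry`, so calling it on an `OpenAIEmbeddings` instance is simpler than copying it out):

```python
from typing import List, Optional

def take_chunk(texts: List[str], chunk_size: Optional[int] = None) -> List[str]:
    # Once List/Optional are imported, annotations like these resolve fine.
    return texts[: chunk_size or len(texts)]

first_two = take_chunk(["a", "b", "c"], chunk_size=2)
```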
### Idea or request for content:
_No response_ | tried using _get_len_safe_embeddings function, but returning some issue | https://api.github.com/repos/langchain-ai/langchain/issues/17266/comments | 1 | 2024-02-08T22:05:40Z | 2024-02-09T14:48:53Z | https://github.com/langchain-ai/langchain/issues/17266 | 2,126,145,532 | 17266
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Below is code that loads a CSV, indexes it into FAISS, and tries to get the relevant documents. It does not use RecursiveCharacterTextSplitter for chunking, as the data is already chunked manually:
```
# List of file paths for your CSV files
csv_files = ['1.csv']
# Iterate over the file paths and create a loader for each file
loaders = [CSVLoader(file_path=file_path, encoding="utf-8") for file_path in csv_files]
# Now, loaders is a list of CSVLoader instances, one for each file
# Optional: If you need to combine the data from all loaders
documents = []
for loader in loaders:
    data = loader.load()  # or however you retrieve data from the loader
    documents.extend(data)
```
`print(documents[0])`
output is below
`Document(page_content=": 1\nUnnamed: 0: 1\nText: Human Rights Guiding Principles\n We commit to respect internationally recognized human rights as expressed in International Bill of Human Rights meaning \n the Universal Declaration of Human Rights, the International Covenant87543\nx2: 1548.48193973303\ny2: 899.030945822597\nBlock Type: LAYOUT_TEXT\nBlock ID: 54429a7486164c04b859d0a08ac75d54\npage_num: 2\nis_answer: 0", metadata={'source': '1.csv', 'row': 1})`
followed by
```
vectorStore = FAISS.from_documents(documents, embeddings)
# Create a retriever for the vector database
retriever = vectorStore.as_retriever(search_kwargs={"k": 5})
docs = retriever.get_relevant_documents("can you return the details of banpu company hrdd?")
```
I want to handle cases where a single row exceeds the OpenAI embeddings limit by splitting that row and appending it back while returning the answer, because I'm not using RecursiveCharacterTextSplitter. For example, say row 1 is shorter than the OpenAI limit, so it is sent as-is, but row 2 is longer than the OpenAI embeddings context length; in that case I want to split row 2 into multiple snippets that still point to source row 2. This should be done for every row whose length exceeds the OpenAI embeddings limit. Can you assist me with building/updating the above code?
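A sketch of that per-row splitting step (stdlib only; `max_chars` is a stand-in for the real limit, which OpenAI enforces in tokens — 8191 for text-embedding-ada-002 — so a token counter such as tiktoken would be more accurate):

```python
def split_row(row_id, text, max_chars=8000):
    """Return (row_id, snippet) pairs; short rows come back as one pair,
    oversized rows are split into several pairs under the same row id."""
    if len(text) <= max_chars:
        return [(row_id, text)]
    return [
        (row_id, text[start : start + max_chars])
        for start in range(0, len(text), max_chars)
    ]

# Each snippet keeps its source row, so answers can still cite "row 2".
pairs = split_row(2, "x" * 25, max_chars=10)
```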
### Idea or request for content:
_No response_ | how to split the page_context text which has over length than OpenAI embeddings take? | https://api.github.com/repos/langchain-ai/langchain/issues/17265/comments | 1 | 2024-02-08T21:35:16Z | 2024-02-14T03:34:55Z | https://github.com/langchain-ai/langchain/issues/17265 | 2,126,107,312 | 17,265 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Below is code that loads a CSV, indexes it into FAISS, and tries to get the relevant documents. It does not use RecursiveCharacterTextSplitter for chunking, as the data is already chunked manually:
```
# List of file paths for your CSV files
csv_files = ['1.csv']
# Iterate over the file paths and create a loader for each file
loaders = [CSVLoader(file_path=file_path, encoding="utf-8") for file_path in csv_files]
# Now, loaders is a list of CSVLoader instances, one for each file
# Optional: If you need to combine the data from all loaders
documents = []
for loader in loaders:
    data = loader.load()  # or however you retrieve data from the loader
    documents.extend(data)
```
`print(documents[0])` output is below
`Document(page_content=": 1\nUnnamed: 0: 1\nText: Human Rights Guiding Principles\n We commit to respect internationally recognized human rights as expressed in International Bill of Human Rights meaning \n the Universal Declaration of Human Rights, the International Covenant87543\nx2: 1548.48193973303\ny2: 899.030945822597\nBlock Type: LAYOUT_TEXT\nBlock ID: 54429a7486164c04b859d0a08ac75d54\npage_num: 2\nis_answer: 0", metadata={'source': '1.csv', 'row': 1})`
followed by
```
vectorStore = FAISS.from_documents(documents, embeddings)
# Create a retriever for the vector database
retriever = vectorStore.as_retriever(search_kwargs={"k": 5})
docs = retriever.get_relevant_documents("can you return the details of banpu company hrdd?")
```
I want to handle cases where a single row exceeds the OpenAI embeddings limit by splitting that row and appending it back while returning the answer, because I'm not using RecursiveCharacterTextSplitter. Can you write code for me? I just want code along the lines below.
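One way to sketch the overflow handling (stdlib only; splitting on whitespace with a word budget is an approximation — the actual OpenAI limit is counted in tokens, not words):

```python
def split_by_word_budget(text, max_words=1500):
    """Split a long row into snippets of at most max_words words each;
    a row under the budget is returned unchanged as a single snippet."""
    words = text.split()
    if len(words) <= max_words:
        return [text]
    return [
        " ".join(words[i : i + max_words])
        for i in range(0, len(words), max_words)
    ]

snippets = split_by_word_budget("one two three four five", max_words=2)
```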
### Idea or request for content:
_No response_ | how to overcome input context length of OpenAI embeddings without using RecursiveCharacterTextSplitter? | https://api.github.com/repos/langchain-ai/langchain/issues/17264/comments | 7 | 2024-02-08T20:52:57Z | 2024-02-14T03:34:55Z | https://github.com/langchain-ai/langchain/issues/17264 | 2,126,051,745 | 17,264 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Below is code that loads a CSV file and creates a `documents` variable:
```
# List of file paths for your CSV files
csv_files = ['1.csv']
# Iterate over the file paths and create a loader for each file
loaders = [CSVLoader(file_path=file_path, encoding="utf-8") for file_path in csv_files]
# Now, loaders is a list of CSVLoader instances, one for each file
# Optional: If you need to combine the data from all loaders
documents = []
for loader in loaders:
    data = loader.load()  # or however you retrieve data from the loader
    documents.extend(data)
```
Now, for `documents[1]`, below is the output:
`Document(page_content=": 1\nUnnamed: 0: 1\nText: Human Rights Guiding Principles\n We commit to respect internationally recognized human rights as expressed in International Bill of Human Rights meaning \n the Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights and the International \n Covenant on Economic, Social and Cultural Rights, and International\nx1: 149.214813858271\ny1: 209.333904087543\nx2: 1548.48193973303\ny2: 899.030945822597\nBlock Type: LAYOUT_TEXT\nBlock ID: 54429a7486164c04b859d0a08ac75d54\npage_num: 2\nis_answer: 0", metadata={'source': '1.csv', 'row': 1})`
The normal method of chunking the data and sending it to the index is below:
```
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
texts = text_splitter.split_documents(documents)
vectorStore = FAISS.from_documents(texts, embeddings)
# Create a retriever for the vector database
retriever = vectorStore.as_retriever(search_kwargs={"k": 5})
docs = retriever.get_relevant_documents("can you return the details of banpu company hrdd?")
```
Now, how do I send the `documents` data to FAISS without splitting it again, given that I've already chunked the data manually?
```
vectorStore = FAISS.from_documents(documents, embeddings)
# Create a retriever for the vector database
retriever = vectorStore.as_retriever(search_kwargs={"k": 5})
docs = retriever.get_relevant_documents("can you return the details of banpu company hrdd?")
```
But I also want to handle cases where a single row exceeds the OpenAI embeddings limit, by splitting that row and appending it back while returning the answer. Can you write code for me? I just want code along the lines below.
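A minimal sketch of that pre-pass (assumption: ~8000 characters as a rough proxy for the embedding model's token limit — the real limit is in tokens). Long rows are split; short rows pass through untouched, so no re-chunking happens:

```python
# Guard each row's length before indexing; only oversized rows get split.
def length_guard(texts, max_chars=8000):
    safe = []
    for text in texts:
        if len(text) <= max_chars:
            safe.append(text)
        else:
            safe.extend(
                text[i : i + max_chars] for i in range(0, len(text), max_chars)
            )
    return safe

# Afterwards (hypothetical, mirroring the snippet above):
#   vectorStore = FAISS.from_documents(guarded_documents, embeddings)
guarded = length_guard(["ok", "y" * 12], max_chars=5)
```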
### Idea or request for content:
_No response_ | how to index the data into FAISS without using RecursiveCharacterTextSplitter? | https://api.github.com/repos/langchain-ai/langchain/issues/17262/comments | 5 | 2024-02-08T20:26:36Z | 2024-02-14T03:34:54Z | https://github.com/langchain-ai/langchain/issues/17262 | 2,126,017,010 | 17,262 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Below is code that tries to index the data into FAISS using OpenAI embeddings:
```
import pandas as pd
from langchain_community.embeddings.openai import OpenAIEmbeddings
# Initialize OpenAIEmbeddings
openai = OpenAIEmbeddings(openai_api_key="your-openai-api-key")
# Load your CSV file
df = pd.read_csv('your_file.csv')
# Get embeddings for each row in the 'Text' column
embeddings = openai.embed_documents(df['Text'].tolist())
# Now, you can use these embeddings to index into your FAISS vector database
# Initialize FAISS
faiss = FAISS()
# Index embeddings into FAISS
faiss.add_vectors(embeddings)
```
It returned the error below:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[<ipython-input-34-f28b6473f433>](https://localhost:8080/#) in <cell line: 16>()
14 # Now, you can use these embeddings to index into your FAISS vector database
15 # Initialize FAISS
---> 16 faiss = FAISS()
17
18 # Index embeddings into FAISS
TypeError: FAISS.__init__() missing 4 required positional arguments: 'embedding_function', 'index', 'docstore', and 'index_to_docstore_id'
```
Can you please let me know how to resolve this? And also, how do I use FAISS after `faiss.add_vectors(embeddings)` to get relevant documents for a query?
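For reference, the `TypeError` is expected: FAISS's `__init__` takes an embedding function, an index, a docstore, and an id mapping, so the store is normally built through a classmethod rather than instantiated directly. A hedged sketch of the intended call (the `from_embeddings` signature is from `langchain_community` and should be verified against your installed version):

```python
# Intended usage (not run here; requires langchain_community + faiss-cpu):
#
#   from langchain_community.vectorstores import FAISS
#   text_embeddings = list(zip(df['Text'].tolist(), embeddings))
#   store = FAISS.from_embeddings(text_embeddings, openai)
#   docs = store.similarity_search("your query", k=5)
#
# The pairing step itself is plain Python:
def pair_texts_with_embeddings(texts, vectors):
    if len(texts) != len(vectors):
        raise ValueError("each text needs exactly one embedding vector")
    return list(zip(texts, vectors))

pairs = pair_texts_with_embeddings(["a", "b"], [[0.1], [0.2]])
```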
### Idea or request for content:
_No response_ | unable to directly index the data into openai embeddings without chunking | https://api.github.com/repos/langchain-ai/langchain/issues/17261/comments | 3 | 2024-02-08T20:11:00Z | 2024-02-14T03:34:54Z | https://github.com/langchain-ai/langchain/issues/17261 | 2,125,991,752 | 17,261 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Below is code that uses CSVLoader to load data that has a single column named 'Text'. All I want to do is index each row of the 'Text' column from my CSV file into the FAISS vector database without re-chunking the data. I also want to handle cases where a single row exceeds the OpenAI embeddings limit by splitting that row and appending it back while returning the answer. Below is the code I wrote:
```
# List of file paths for your CSV files
csv_files = ['1.csv']
# Iterate over the file paths and create a loader for each file
loaders = [CSVLoader(file_path=file_path, encoding="utf-8") for file_path in csv_files]
# Now, loaders is a list of CSVLoader instances, one for each file
# Optional: If you need to combine the data from all loaders
documents = []
for loader in loaders:
    data = loader.load()  # or however you retrieve data from the loader
    documents.extend(data)
print(documents[1])
```
below's the output
`Document(page_content=": 1\nUnnamed: 0: 1\nText: Human Rights Guiding Principles\n We commit to respect internationally recognized human rights as expressed in International Bill of Human Rights meaning \n the Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights and the International \n Covenant on Economic, Social and Cultural Rights, and International Labour Organization's Declaration on Fundamental \n Principles and Rights at Work. These standards are elaborated upon our WWSBC and/or Code and include:\n Freedom of association and collective bargaining\n Prevention of human trafficking, forced, bonded or compulsory labor\n Anti-child labor\n Anti-discrimination in respect to employment and occupation\n Working hours limitations and Minimum Wage Standards\n Minimum age requirements for employment\n Freedom from harassment\n Diversity, Belonging and Inclusion\n Appropriate wages and benefits\n Right to occupational health and safety\n Supply Chain Responsibility\n Privacy and freedom of expression\n Environmental stewardship\n Anti-Corruption\nx1: 149.214813858271\ny1: 209.333904087543\nx2: 1548.48193973303\ny2: 899.030945822597\nBlock Type: LAYOUT_TEXT\nBlock ID: 54429a7486164c04b859d0a08ac75d54\npage_num: 2\nis_answer: 0", metadata={'source': '1.csv', 'row': 1})`
All I want to do is index each row of the 'Text' column from my CSV file into the FAISS vector database without re-chunking the data. I also want to handle cases where a single row exceeds the OpenAI embeddings limit by splitting that row and appending it back while returning the answer. I don't want to use RecursiveCharacterTextSplitter because I've already chunked the data manually. Can you help me out with the code?
### Idea or request for content:
_No response_ | trying to index data into FAISS without using CharacterTextSplitter | https://api.github.com/repos/langchain-ai/langchain/issues/17260/comments | 3 | 2024-02-08T19:55:30Z | 2024-02-14T03:34:54Z | https://github.com/langchain-ai/langchain/issues/17260 | 2,125,966,785 | 17,260 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
from langchain_nvidia_ai_endpoints import ChatNVIDIA
llm = ChatNVIDIA(
    model="playground_mixtral_8x7b",
    temperature=0.0,
    top_p=0,
    max_tokens=500,
    seed=42,
    callbacks=callbacks,
)
### Error Message and Stack Trace (if applicable)
ValidationError Traceback (most recent call last)
Cell In[62], line 11
      5     pass
      9 from langchain_nvidia_ai_endpoints import ChatNVIDIA
---> 11 llm = ChatNVIDIA(model="playground_mixtral_8x7b",
     12     temperature=0.0,
     13     top_p=0,
     14     max_tokens=500,
     15     seed=42,
     16     callbacks = callbacks
     17 )

File /usr/local/lib/python3.10/site-packages/langchain_core/load/serializable.py:107, in Serializable.__init__(self, **kwargs)
    106 def __init__(self, **kwargs: Any) -> None:
--> 107     super().__init__(**kwargs)
    108     self._lc_kwargs = kwargs

File /usr/local/lib/python3.10/site-packages/pydantic/v1/main.py:341, in BaseModel.__init__(__pydantic_self__, **data)
    339 values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
    340 if validation_error:
--> 341     raise validation_error
    342 try:
    343     object_setattr(__pydantic_self__, '__dict__', values)
ValidationError: 1 validation error for ChatNVIDIA
temperature
ensure this value is greater than 0.0 (type=value_error.number.not_gt; limit_value=0.0)
### Description
The latest NVCF endpoint now allows the temperature to be 0. However, this wrapper does not allow this to happen, because of the pydantic default validator on the temperature field.
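A sketch of the relaxed bound (plain Python; in the actual wrapper this would live in the pydantic `Field` declaration for `temperature`, e.g. `ge=0` instead of `gt=0` — the exact field constraints here are assumptions inferred from the traceback):

```python
def validate_temperature(value):
    # NVCF now accepts temperature == 0, so the check should be inclusive
    # at the lower end ("greater than or equal to 0"), not strict.
    if not (0.0 <= value <= 1.0):
        raise ValueError("temperature must be in [0, 1]")
    return value

zero_ok = validate_temperature(0.0)
```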
### System Info
langchain==0.1.5
langchain-community==0.0.17
langchain-core==0.1.18
langchain-nvidia-ai-endpoints==0.0.1
langchain-openai==0.0.5
platform mac
Python 3.11 | Temperature for NVIDIA Cloud Function (NVCF) endpoint could not be set to 0 | https://api.github.com/repos/langchain-ai/langchain/issues/17257/comments | 2 | 2024-02-08T19:19:00Z | 2024-05-17T16:08:33Z | https://github.com/langchain-ai/langchain/issues/17257 | 2,125,908,914 | 17,257 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
I have data that has already been chunked, in CSV format under a column named 'Text'. Below are the code and the format:
`one = pd.read_csv('1.csv')[['Text']]`
below's the output
```
Text
--
AMD L
Human Rights Guiding Principles...
We commit to...
```
Now, I don't want to use RecursiveCharacterTextSplitter with chunk_size, overlap, etc. I want to send the above data directly to FAISS using the code `vectorStore = FAISS.from_documents(texts, embeddings)`, and I'm using OpenAI embeddings. Can you help me out with code that does this? I want to index every row directly as one snippet, so that the 1st row of the 'Text' column is document[0] and the 2nd row is document[1]. And what if the OpenAI embeddings input limit is exceeded? Is there a way to overcome that issue as well? If yes, how do I split such a single-row chunk again and append it back?
### Idea or request for content:
_No response_ | how to directly index the data into FAISS with the data which has been already chunked | https://api.github.com/repos/langchain-ai/langchain/issues/17256/comments | 7 | 2024-02-08T19:10:48Z | 2024-02-14T03:34:53Z | https://github.com/langchain-ai/langchain/issues/17256 | 2,125,894,227 | 17,256 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain.agents import AgentExecutor
from langchain.agents.format_scratchpad import format_log_to_str
from langchain.agents.output_parsers import ReActSingleInputOutputParser
from langchain_core.language_models import BaseLanguageModel
from langchain_core.prompts import PromptTemplate
from langchain_google_genai import GoogleGenerativeAI
def build_executor(llm: BaseLanguageModel, prompt: PromptTemplate):
llm_with_stop = llm.bind(stop=["\nObservation"])
agent = (
{
"input": lambda x: x["input"],
"agent_scratchpad": lambda x: format_log_to_str(x["intermediate_steps"]),
}
| prompt
| llm_with_stop
| ReActSingleInputOutputParser()
)
return AgentExecutor(agent=agent, tools=[])
llm = GoogleGenerativeAI(model='models/text-bison-001')
input_variables = ["input", "agent_scratchpad"]
prompt = PromptTemplate.from_file(
"path/to/agent_template.txt", input_variables=input_variables
)
prompt_template = prompt.partial(custom_prompt="")
executor = build_executor(llm, prompt_template)
print(executor.invoke(input={"input": "What are some of the pros and cons of Python as a programming language?"}))
```
This the prompt template I used -
```text
you are an AI assistant, helping a Human with a task. The Human has asked you a question.
When you have a response to say to the Human, you MUST use the format:
Thought: Do I need to use a tool? No
Final Answer: [your response here]
Begin!
Previous conversation history:
New input: {input}
{agent_scratchpad}
```
### Error Message and Stack Trace (if applicable)
```bash
Traceback (most recent call last):
File "/Users/kallie.levy/dev/repos/app-common/app_common/executor/build_executor.py", line 33, in <module>
print(executor.invoke(input={"input": "What are some of the pros and cons of Python as a programming language?"}))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain/chains/base.py", line 162, in invoke
raise e
File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain/chains/base.py", line 156, in invoke
self._call(inputs, run_manager=run_manager)
File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain/agents/agent.py", line 1376, in _call
next_step_output = self._take_next_step(
^^^^^^^^^^^^^^^^^^^^^
File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain/agents/agent.py", line 1102, in _take_next_step
[
File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain/agents/agent.py", line 1102, in <listcomp>
[
File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain/agents/agent.py", line 1130, in _iter_next_step
output = self.agent.plan(
^^^^^^^^^^^^^^^^
File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain/agents/agent.py", line 392, in plan
for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2424, in stream
yield from self.transform(iter([input]), config, **kwargs)
File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2411, in transform
yield from self._transform_stream_with_config(
File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1497, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2375, in _transform
for output in final_pipeline:
File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1035, in transform
for chunk in input:
File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4168, in transform
yield from self.bound.transform(
File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1045, in transform
yield from self.stream(final, config, **kwargs)
File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 414, in stream
raise e
File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 398, in stream
for chunk in self._stream(
File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain_google_genai/llms.py", line 225, in _stream
for stream_resp in _completion_with_retry(
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain_google_genai/llms.py", line 65, in _completion_with_retry
return _completion_with_retry(
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/tenacity/__init__.py", line 289, in wrapped_f
return self(f, *args, **kw)
^^^^^^^^^^^^^^^^^^^^
File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/tenacity/__init__.py", line 379, in __call__
do = self.iter(retry_state=retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/tenacity/__init__.py", line 314, in iter
return fut.result()
^^^^^^^^^^^^
File "/usr/local/Cellar/python@3.11/3.11.6_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/usr/local/Cellar/python@3.11/3.11.6_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/tenacity/__init__.py", line 382, in __call__
result = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain_google_genai/llms.py", line 60, in _completion_with_retry
return llm.client.generate_content(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: module 'google.generativeai' has no attribute 'generate_content'. Did you mean: 'generate_text'?
```
### Description
I'm creating an AgentExecutor with Google GenerativeAI llm, but since version `0.1.1` of `langchain`, I receive this error. If `langchain <= 0.1.0`, this script works.
### System Info
```
python==3.11.6
langchain==0.1.1
langchain-community==0.0.19
langchain-core==0.1.21
langchain-google-genai==0.0.5
langchain-google-vertexai==0.0.3
``` | Invoking agent executor with Google GenerativeAI: AttributeError: module 'google.generativeai' has no attribute 'generate_content' | https://api.github.com/repos/langchain-ai/langchain/issues/17251/comments | 3 | 2024-02-08T17:55:20Z | 2024-07-08T16:05:25Z | https://github.com/langchain-ai/langchain/issues/17251 | 2,125,771,216 | 17,251 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Below is the code used for the normal retriever, SelfQuery, MultiQuery, and ParentDocument retrievers (same template):
```
# loader = TextLoader('single_text_file.txt')
loader = DirectoryLoader(f'/content', glob="./*.pdf", loader_cls=PyPDFLoader)
documents = loader.load()
# Define your prompt template
prompt_template = """Use the following pieces of information to answer the user's question.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Context: {context}
Question: {question}
Only return the helpful answer below and nothing else. If no context, then no answer.
Helpful Answer:"""
unique_sources = set()
for doc in documents:
    source = doc.metadata['source']
    unique_sources.add(source)
num_unique_sources = len(unique_sources)
class MyTextSplitter(RecursiveCharacterTextSplitter):
    def split_documents(self, documents):
        chunks = super().split_documents(documents)
        for document, chunk in zip(documents, chunks):
            chunk.metadata['source'] = document.metadata['source']
        return chunks
text_splitter = MyTextSplitter(chunk_size=1000, chunk_overlap=100)
texts = text_splitter.split_documents(documents)
# Create a vector database for the document
# vectorStore = FAISS.from_documents(texts, instructor_embeddings)
vectorStore = FAISS.from_documents(texts, embeddings)
# Create a retriever for the vector database
retriever = vectorStore.as_retriever(search_kwargs={"k": 5})
# Create a chain to answer questions
qa_chain = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0.2),
                                       chain_type="stuff",
                                       retriever=retriever,
                                       return_source_documents=True)
# Format the prompt using the template
context = ""
# question = "what's the final provision of dhl?"
question = "can you return the objective of ABInBev?"
formatted_prompt = prompt_template.format(context=context, question=question)
# Pass the formatted prompt to the RetrievalQA function
llm_response = qa_chain(formatted_prompt)
process_llm_response(llm_response)
```
After seeing the outputs from the different retrievers, I felt that the normal retriever and the MultiQuery retriever performed well. May I know why the SelfQuery and ParentDocument retrievers are not returning better results?
### Idea or request for content:
_No response_ | regarding performances between normal retriever, SelfQuery, MultiQuery and ParentDocument Retriever | https://api.github.com/repos/langchain-ai/langchain/issues/17243/comments | 1 | 2024-02-08T15:40:43Z | 2024-02-08T16:54:50Z | https://github.com/langchain-ai/langchain/issues/17243 | 2,125,484,631 | 17243
[
"langchain-ai",
"langchain"
] | Feature request discussed in https://github.com/langchain-ai/langchain/discussions/17176
Expand `cache` to accept a cache implementation in addition to a bool value:
https://github.com/langchain-ai/langchain/blob/00a09e1b7117f3bde14a44748510fcccc95f9de5/libs/core/langchain_core/language_models/chat_models.py#L106-L106
If provided, will use the given cache.
# Acceptance Criteria
- [ ] Document the `cache` variable to explain how it can be used
- [ ] Update https://python.langchain.com/docs/modules/model_io/chat/chat_model_caching
- [ ] Include unit tests to test given functionality
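A sketch of the proposed resolution order (hypothetical names; the real change would touch `BaseChatModel` and use `langchain_core`'s `BaseCache`, for which a stand-in class is defined here):

```python
from typing import Optional, Union

class BaseCache:  # stand-in for langchain_core.caches.BaseCache
    pass

def resolve_cache(cache: Union[bool, BaseCache, None],
                  global_cache: Optional[BaseCache]):
    if isinstance(cache, BaseCache):
        return cache          # per-model cache instance wins
    if cache is False:
        return None           # caching explicitly disabled
    return global_cache       # True / None fall back to the global cache

local = BaseCache()
chosen = resolve_cache(local, global_cache=None)
```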
PR can include implementation for caching of LLMs in addition to chat models. | Enhancement: Add ability to pass local cache to chat models | https://api.github.com/repos/langchain-ai/langchain/issues/17242/comments | 1 | 2024-02-08T15:37:44Z | 2024-05-21T16:09:01Z | https://github.com/langchain-ai/langchain/issues/17242 | 2,125,476,844 | 17,242 |
[
"langchain-ai",
"langchain"
] | ### Discussed in https://github.com/langchain-ai/langchain/discussions/16446
<div type='discussions-op-text'>
<sup>Originally posted by **jason1315** January 23, 2024</sup>
In my project, I need to implement the following logic. Here is a simple example:
```python
import asyncio

from langchain_core.runnables import *
from lang_chain.llm.llms import llm


def _test(_dict):
    print("value:", _dict)
    return _dict


@chain
def my_method(_dict, **keywords):
    print(keywords)
    return RunnablePassthrough.assign(key=lambda x: keywords.get("i")) | RunnableLambda(_test)


if __name__ == '__main__':
    loop = asyncio.new_event_loop()
    my_list = ["1", "2", "3", " 4", "5"]
    head = RunnablePassthrough()
    for i in my_list:
        head = head | my_method.bind(i=i)
    stream = head.invoke({})

    # async def __stream(stream1):
    #     async for i in stream1:
    #         print(i)
    #
    # loop.run_until_complete(__stream(stream))
```
When I use the .invoke({}) method, it outputs the following results correctly:
```text
{'i': '1'}
value: {'key': '1'}
{'i': '2'}
value: {'key': '2'}
{'i': '3'}
value: {'key': '3'}
{'i': ' 4'}
value: {'key': ' 4'}
{'i': '5'}
value: {'key': '5'}
```
But if I use the astream_log({}) method, it throws an error:
```text
File "F:\py3.11\Lib\concurrent\futures\thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: RunnableLambda._atransform.<locals>.func() got an unexpected keyword argument 'i'
```
Why is it designed like this? Do I need to implement a runnable similar to the model if I want to achieve the above logic?</div> | In the astream_log() method, you cannot use the bind method with RunnableLambda. | https://api.github.com/repos/langchain-ai/langchain/issues/17241/comments | 1 | 2024-02-08T15:03:18Z | 2024-07-15T16:06:25Z | https://github.com/langchain-ai/langchain/issues/17241 | 2,125,385,400 | 17,241 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Below is the code:
```
child_splitter = RecursiveCharacterTextSplitter(chunk_size=400, chunk_overlap=50)
parent_splitter = RecursiveCharacterTextSplitter(chunk_size=1200, chunk_overlap=300)
vectorstore = Chroma(
collection_name="full_documents", embedding_function=embeddings)
store = InMemoryStore()
retriever = ParentDocumentRetriever(
vectorstore=vectorstore,
docstore=store,
child_splitter=child_splitter,
parent_splitter=parent_splitter
)
retriever.add_documents(document, ids=None)
```
The code above uses Chroma as the vector DB. Can I use FAISS as the vector store (keeping the same child_splitter and parent_splitter), like the code below?
```
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500,
chunk_overlap=50)
texts = text_splitter.split_documents(documents)
# Create a vector database for the document
# vectorStore = FAISS.from_documents(texts, instructor_embeddings)
vectorStore = FAISS.from_documents(texts, embeddings)
```
If yes, can you please help me with the code?
### Idea or request for content:
_No response_ | can i use FAISS isntead of Chroma for ParentDocumentRetriver? | https://api.github.com/repos/langchain-ai/langchain/issues/17237/comments | 5 | 2024-02-08T13:40:32Z | 2024-02-14T03:34:53Z | https://github.com/langchain-ai/langchain/issues/17237 | 2,125,216,339 | 17,237 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
The SitemapLoader doesn't fetch anything; it gives me an empty list.
Code:
```
from langchain_community.document_loaders.sitemap import SitemapLoader
sitemap_loader = SitemapLoader(web_path="https://langchain.readthedocs.io/sitemap.xml")
sitemap_loader.requests_per_second = 2
sitemap_loader.requests_kwargs = {"verify": False}
docs = sitemap_loader.load()
print(docs)
```
Output:
Fetching pages: 0it [00:00, ?it/s]
[]
### Idea or request for content:
The example of documentation doesn't work | Update documentation for sitemap loader to use correct URL | https://api.github.com/repos/langchain-ai/langchain/issues/17236/comments | 1 | 2024-02-08T12:46:55Z | 2024-02-13T00:20:34Z | https://github.com/langchain-ai/langchain/issues/17236 | 2,125,103,304 | 17,236 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Below is the code:
```
loader = DirectoryLoader(f'/content', glob="./*.pdf", loader_cls=PyPDFLoader)
documents = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500,
chunk_overlap=50)
texts = text_splitter.split_documents(documents)
vectorStore = FAISS.from_documents(texts, embeddings)
llm = OpenAI(temperature=0.2)
retriever = MultiQueryRetriever.from_llm(retriever=vectorStore.as_retriever(), llm=llm)
# Set logging for the queries
import logging
logging.basicConfig()
logging.getLogger("langchain.retrievers.multi_query").setLevel(logging.INFO)
# Create a chain to answer questions
qa_chain = RetrievalQA.from_chain_type(llm=llm,
chain_type="stuff",
retriever=retriever,
return_source_documents=True)
# Format the prompt using the template
context = ""
question = "what's the commitent no 3 and 4?"
formatted_prompt = prompt_template.format(context=context, question=question)
# Pass the formatted prompt to the RetrievalQA function
llm_response = qa_chain(formatted_prompt)
process_llm_response(llm_response)
```
The above code returns the correct output, but the wrong source document. I tried playing with the chunk and overlap sizes, and sometimes it returns the correct source document name. How can I get the correct source document name every time?
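As a debugging aid, it can help to list every source the chain returned rather than only the top one — with overlapping chunks the expected file is often present but ranked below another. `list_sources` is a hypothetical helper, not LangChain API:

```python
def list_sources(source_documents) -> list:
    """Collect the distinct 'source' metadata values from retrieved docs."""
    return sorted({doc.metadata.get("source", "?") for doc in source_documents})

# illustrative use with the chain above:
# print(list_sources(llm_response["source_documents"]))
```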
### Idea or request for content:
_No response_ | returning wrong source document name | https://api.github.com/repos/langchain-ai/langchain/issues/17233/comments | 6 | 2024-02-08T12:17:21Z | 2024-02-09T14:53:47Z | https://github.com/langchain-ai/langchain/issues/17233 | 2,125,050,901 | 17,233 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
def _execute(
self,
command: str,
fetch: Literal["all", "one"] = "all",
) -> Sequence[Dict[str, Any]]:
"""
Executes SQL command through underlying engine.
If the statement returns no rows, an empty list is returned.
"""
with self._engine.begin() as connection: # type: Connection
if self._schema is not None:
if self.dialect == "snowflake":
connection.exec_driver_sql(
"ALTER SESSION SET search_path = %s", (self._schema,)
)
elif self.dialect == "bigquery":
connection.exec_driver_sql("SET @@dataset_id=?", (self._schema,))
elif self.dialect == "mssql":
pass
elif self.dialect == "trino":
connection.exec_driver_sql("USE ?", (self._schema,))
elif self.dialect == "duckdb":
# Unclear which parameterized argument syntax duckdb supports.
# The docs for the duckdb client say they support multiple,
# but `duckdb_engine` seemed to struggle with all of them:
# https://github.com/Mause/duckdb_engine/issues/796
connection.exec_driver_sql(f"SET search_path TO {self._schema}")
elif self.dialect == "oracle":
connection.exec_driver_sql(
f"ALTER SESSION SET CURRENT_SCHEMA = {self._schema}"
)
elif self.dialect == "sqlany":
# If anybody using Sybase SQL anywhere database then it should not
# go to else condition. It should be same as mssql.
pass
elif self.dialect == "postgresql": # postgresql
connection.exec_driver_sql("SET search_path TO %s", (self._schema,))
cursor = connection.execute(text(command))
if cursor.returns_rows:
if fetch == "all":
result = [x._asdict() for x in cursor.fetchall()]
elif fetch == "one":
first_result = cursor.fetchone()
result = [] if first_result is None else [first_result._asdict()]
else:
raise ValueError("Fetch parameter must be either 'one' or 'all'")
return result
return []
```
### Error Message and Stack Trace (if applicable)
```code
SELECT * FROM metadata_sch_stg.company_datasets LIMIT 2←[0m←[36;1m←[1;3mError: (pg8000.exceptions.DatabaseError) {'S': 'ERROR', 'V': 'ERROR', 'C': '42601', 'M': 'syntax error at or near "$1"', 'P': '20', 'F': 'scan.l', 'L': '1180', 'R': 'scanner_yyerror'}
[SQL: SET search_path TO %s]
[parameters: ('metadata_sch_stg',)]
```
### Description
When attempting to set the PostgreSQL search_path using exec_driver_sql within the SQLDatabase class, an error is thrown. The relevant code snippet is as follows:
```python
elif self.dialect == "postgresql": # postgresql
connection.exec_driver_sql("SET search_path TO %s", (self._schema,))
```
This line attempts to set the search_path to the schema defined in the self._schema attribute. However, this results in a syntax error because the parameter substitution (%s) is not supported for the SET command in PostgreSQL.
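One possible fix (a sketch only — `quoted_search_path_sql` is a hypothetical helper, not part of LangChain): since PostgreSQL rejects bind parameters in `SET` statements, interpolate the schema as a properly quoted identifier instead of passing it as `%s`:

```python
def quoted_search_path_sql(schema: str) -> str:
    # Escape embedded double quotes, then wrap the identifier in quotes so
    # mixed-case or reserved-word schema names still work.
    safe = schema.replace('"', '""')
    return f'SET search_path TO "{safe}"'

# illustrative use inside SQLDatabase._execute:
# connection.exec_driver_sql(quoted_search_path_sql(self._schema))
```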
Expected Behavior:
The search_path should be set to the specified schema without errors, allowing subsequent queries to run within the context of that schema.
Actual Behavior:
A syntax error is raised, indicating an issue with the SQL syntax near the parameter substitution placeholder.
Steps to Reproduce the error:
Instantiate an SQLDatabase object with the PostgreSQL dialect.
Change the Postgres schema to any schema other than the 'public' schema.
Observe the syntax error.
### System Info
langchain==0.1.4
langchain-community==0.0.16
langchain-core==0.1.17
langchain-google-vertexai==0.0.3
langsmith==0.0.85
pg8000==1.29.8
SQLAlchemy==2.0.16
cloud-sql-python-connector==1.2.4
OS: Windows | Error when setting PostgreSQL search_path using exec_driver_sql in SQLDatabase class | https://api.github.com/repos/langchain-ai/langchain/issues/17231/comments | 2 | 2024-02-08T10:31:05Z | 2024-06-12T16:08:01Z | https://github.com/langchain-ai/langchain/issues/17231 | 2,124,824,598 | 17,231 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
https://python.langchain.com/docs/integrations/retrievers/self_query/supabase_self_query
Can anyone actually make out the SQL commands?
In the studio, jump to the [SQL editor](https://supabase.com/dashboard/project/_/sql/new) and run the following script to enable pgvector and setup your database as a vector store: ```sql – Enable the pgvector extension to work with embedding vectors create extension if not exists vector;
– Create a table to store your documents create table documents ( id uuid primary key, content text, – corresponds to Document.pageContent metadata jsonb, – corresponds to Document.metadata embedding vector (1536) – 1536 works for OpenAI embeddings, change if needed );
– Create a function to search for documents create function match_documents ( query_embedding vector (1536), filter jsonb default ‘{}’ ) returns table ( id uuid, content text, metadata jsonb, similarity float ) language plpgsql as $$ #variable_conflict use_column begin return query select id, content, metadata, 1 - (documents.embedding <=> query_embedding) as similarity from documents where metadata @> filter order by documents.embedding <=> query_embedding; end; $$; ```
That is what it looks like in Google Chrome.
### Idea or request for content:
A properly formatted version, from ChatGPT:
Here are the SQL commands you need to run in your Supabase SQL editor to set up a vector store with pgvector. These commands will create the necessary extensions, tables, and functions for your vector store:
1. **Enable pgvector Extension**:
```sql
CREATE EXTENSION IF NOT EXISTS vector;
```
2. **Create a Table for Storing Documents**:
```sql
CREATE TABLE documents (
id uuid PRIMARY KEY,
content text, -- corresponds to Document.pageContent
metadata jsonb, -- corresponds to Document.metadata
embedding vector(1536) -- 1536 dimensions for OpenAI embeddings
);
```
3. **Create a Function for Searching Documents**:
```sql
CREATE FUNCTION match_documents(
query_embedding vector(1536),
filter jsonb DEFAULT '{}'
)
RETURNS TABLE(
id uuid,
content text,
metadata jsonb,
similarity float
)
LANGUAGE plpgsql AS $$
BEGIN
RETURN QUERY SELECT
id,
content,
metadata,
1 - (documents.embedding <=> query_embedding) AS similarity
FROM documents
WHERE metadata @> filter
ORDER BY documents.embedding <=> query_embedding;
END;
$$;
```
These commands will set up your database to work with embedding vectors using pgvector, create a table to store documents with an embedding vector field, and a function to perform document searches based on these embeddings.
Make sure to carefully input these commands in the Supabase SQL editor and adjust any parameters (like the dimension size of the vector or table structure) as needed for your specific use case. For more information and detailed instructions, please refer to the [Supabase documentation](https://supabase.com/docs). | poorly formatted SQL commands for pgvector Supabase | https://api.github.com/repos/langchain-ai/langchain/issues/17225/comments | 1 | 2024-02-08T07:28:07Z | 2024-02-12T10:23:03Z | https://github.com/langchain-ai/langchain/issues/17225 | 2,124,517,651 | 17,225 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
Use the following code:
```python
from langchain_community.embeddings import BedrockEmbeddings
embeddings = BedrockEmbeddings(
region_name="us-east-1",
model_id="cohere.embed-english-v3"
)
e1 = embeddings.embed_documents(["What is the project name?"])[0]
e2 = embeddings.embed_query("What is the project name?")
print(e1==e2)
```
### Error Message and Stack Trace (if applicable)
Outputs: `True`
Should ideally be `False`.
### Description
Cohere models can generate embeddings optimized for the type of use case (document vs. query). However, this is not being leveraged in the AWS Bedrock integration.
https://github.com/langchain-ai/langchain/blob/00a09e1b7117f3bde14a44748510fcccc95f9de5/libs/community/langchain_community/embeddings/bedrock.py#L123-L131
The current workaround is to define the `input_type` in the constructor, which is not ideal (e.g. when using predefined tools) compared to the native approach of the `CohereEmbeddings` class.
https://github.com/langchain-ai/langchain/blob/00a09e1b7117f3bde14a44748510fcccc95f9de5/libs/community/langchain_community/embeddings/cohere.py#L125-L134
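For illustration, Cohere's embed API distinguishes the two cases via `input_type` (`search_document` for corpus texts, `search_query` for queries). A per-call request body could look like the sketch below — `build_cohere_body` is a hypothetical helper, not the actual `BedrockEmbeddings` implementation:

```python
import json

def build_cohere_body(texts: list, input_type: str) -> str:
    # Sketch: choose the input_type per call instead of fixing it at
    # construction time.
    assert input_type in {"search_document", "search_query"}
    return json.dumps({"texts": texts, "input_type": input_type})

doc_body = build_cohere_body(["What is the project name?"], "search_document")
query_body = build_cohere_body(["What is the project name?"], "search_query")
```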
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.11.6 (tags/v3.11.6:8b6ee5b, Oct 2 2023, 14:57:12) [MSC v.1935 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.21
> langchain: 0.1.5
> langchain_community: 0.0.19
> langsmith: 0.0.87
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Documents and queries with bedrock's cohere model should result in different embedding values | https://api.github.com/repos/langchain-ai/langchain/issues/17222/comments | 1 | 2024-02-08T04:56:58Z | 2024-05-16T16:09:05Z | https://github.com/langchain-ai/langchain/issues/17222 | 2,124,359,955 | 17,222 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```%pip install --upgrade --quiet langchain-pinecone langchain-openai langchain```
### Error Message and Stack Trace (if applicable)
ERROR: Could not find a version that satisfies the requirement langchain-pinecone (from versions: none)
ERROR: No matching distribution found for langchain-pinecone
### Description
To install the new Pinecone vector store integration, LangChain suggests installing the **langchain-pinecone** package via pip. But pip says that no such package exists. This is the [document page](https://python.langchain.com/docs/integrations/vectorstores/pinecone) I'm referring to.
### System Info
langchain==0.1.5
langchain-community==0.0.19
langchain-core==0.1.21
pinecone-client==3.0.2
Platform: linux and Google colab | The pip install for langchain-pinecone shows error. | https://api.github.com/repos/langchain-ai/langchain/issues/17221/comments | 4 | 2024-02-08T04:56:41Z | 2024-02-09T09:56:04Z | https://github.com/langchain-ai/langchain/issues/17221 | 2,124,359,758 | 17,221 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
@dosu-bot I am getting an error with SQLDatabase and SQLDatabaseChain. The generated SQL command gets truncated at a maximum of 128 output tokens, even after setting the max_string_length parameter of SQLDatabase to 32000.
Here is the code:
db_connection = SQLDatabase.from_uri(
snowflake_url,
sample_rows_in_table_info=1,
include_tables=["table1"],
view_support=True,
max_string_length=32000,
)
return_op = SQLDatabaseChain.from_llm(
llm,
db_connection,
prompt=few_shot_prompt,
verbose=True,
return_intermediate_steps=True,
)
### Error Message and Stack Trace (if applicable)
ProgrammingError: (snowflake.connector.errors.ProgrammingError) 001003 (42000): SQL compilation error: parse error line 1 at position 300 near '<EOF>. [SQL: SELECT DESTINATION_DATA, SUM(CASE WHEN LOWER(TRIM(TVENA_CANE)) = 'skpettion' THEN 1 ELSE 0 END) AS skpettion, SUM(CASE WHEN LOWER(TRIM(TVENA_CANE)) = 'tickk' THEN 1 ELSE 0 END) AS CLICKS FROM TABT_COURS_KBSPBIGN_CIDORT WHERE DIRQANE = 'buj' AND TVENA_CANE >= '2023-12-01' AND TVENA_CANE <= '2023]
### Description
The generated SQL command gets truncated at a maximum of 128 tokens.
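A hedged guess at the cause, for context: `max_string_length` appears to truncate only strings in *query results* coming back from the database, while the generated SQL is LLM output and is therefore capped by the model's maximum output tokens (often a small default). A sketch of raising that cap — `max_tokens_to_sample` is the Anthropic-on-Bedrock parameter name and is an assumption here; other providers use different names:

```python
def llm_model_kwargs(max_output_tokens: int = 2048) -> dict:
    # Raise the LLM's output-token cap so long generated SQL is not cut off.
    return {"max_tokens_to_sample": max_output_tokens}

# illustrative: llm = Bedrock(model_id="anthropic.claude-v2",
#                             model_kwargs=llm_model_kwargs(2048))
```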
### System Info
boto3==1.34.29
chromadb==0.4.22
huggingface-hub==0.20.3
langchain==0.1.4
langchain-experimental==0.0.49
python-dotenv==1.0.1
sentence_transformers==2.3.0
snowflake-connector-python==3.7.0
snowflake-sqlalchemy==1.5.1
SQLAlchemy==1.4.51
streamlit==1.30.0 | SQLDatabaseChain, SQLDatabase max_string_length not working | https://api.github.com/repos/langchain-ai/langchain/issues/17212/comments | 13 | 2024-02-08T00:03:42Z | 2024-05-21T16:08:56Z | https://github.com/langchain-ai/langchain/issues/17212 | 2,124,119,799 | 17,212 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
>>> from langchain_community.llms import Ollama
>>> from langchain.callbacks import wandb_tracing_enabled
>>> llm = Ollama(model="mistral")
>>> with wandb_tracing_enabled():
... llm.invoke("Tell me a joke")
...
wandb: Streaming LangChain activity to W&B at https://wandb.ai/<redacted>
wandb: `WandbTracer` is currently in beta.
wandb: Please report any issues to https://github.com/wandb/wandb/issues with the tag `langchain`.
# (1.) : WORKS
" Why don't scientists trust atoms?\n\nBecause they make up everything!"
>>> os.environ["LANGCHAIN_WANDB_TRACING"] = "true"
>>> llm.invoke("Tell me a joke")
" Why don't scientists trust atoms?\n\nBecause they make up everything!"
# (2.) Doesn't work
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I'm following the documentation, which says that just setting os.environ["LANGCHAIN_WANDB_TRACING"] = "true" is enough to trace LangChain with wandb. It doesn't work.
Using the context manager shows that everything is set up correctly.
I have no idea what is happening.
### System Info
```
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Thu Oct 5 21:02:42 UTC 2023
> Python Version: 3.12.1 (main, Jan 23 2024, 13:02:12) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.1.19
> langchain: 0.1.5
> langchain_community: 0.0.18
> langsmith: 0.0.86
> langchain_mistralai: 0.0.4
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
``` | os.environ["LANCHAIN_WANDB_TRACING"]="true" doesn't work | https://api.github.com/repos/langchain-ai/langchain/issues/17211/comments | 1 | 2024-02-08T00:02:59Z | 2024-05-16T16:08:54Z | https://github.com/langchain-ai/langchain/issues/17211 | 2,124,119,088 | 17,211 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
# In langchain/libs/community/langchain_community/agent_toolkits/sql/base.py
if agent_type == AgentType.ZERO_SHOT_REACT_DESCRIPTION:
    if prompt is None:
        from langchain.agents.mrkl import prompt as react_prompt

        format_instructions = (
            format_instructions or react_prompt.FORMAT_INSTRUCTIONS
        )
        template = "\n\n".join(
            [
                react_prompt.PREFIX,
                "{tools}",
                format_instructions,
                react_prompt.SUFFIX,
            ]
        )
        prompt = PromptTemplate.from_template(template)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Although it respects and uses the format_instructions argument, it completely ignores prefix and suffix in favor of the hardcoded values imported from `langchain.agents.mrkl.prompt`. This unnecessarily requires constructing the full prompt yourself.
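A hedged workaround until this is fixed: assemble the full template yourself and pass it via `prompt=`. The assembly is just a join, sketched here with a hypothetical helper (`build_react_template` is not LangChain API):

```python
def build_react_template(prefix: str, format_instructions: str, suffix: str) -> str:
    # Mirrors what create_sql_agent does internally, but with your own
    # prefix/suffix instead of the hardcoded react_prompt values.
    return "\n\n".join([prefix, "{tools}", format_instructions, suffix])

# illustrative:
# prompt = PromptTemplate.from_template(
#     build_react_template(MY_PREFIX, react_prompt.FORMAT_INSTRUCTIONS, MY_SUFFIX)
# )
template = build_react_template("My prefix", "My format instructions", "My suffix")
```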
### System Info
$ python -m langchain_core.sys_info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Tue Sep 26 19:53:57 UTC 2023
> Python Version: 3.11.7 | packaged by conda-forge | (main, Dec 23 2023, 14:43:09) [GCC 12.3.0]
Package Information
-------------------
> langchain_core: 0.1.19
> langchain: 0.1.4
> langchain_community: 0.0.17
> langsmith: 0.0.86
> langchain_experimental: 0.0.49
> langchain_openai: 0.0.2.post1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | create_sql_agent ignores custom prefix and suffix if agent_type="zero-shot-react-description" | https://api.github.com/repos/langchain-ai/langchain/issues/17210/comments | 4 | 2024-02-07T23:50:17Z | 2024-02-23T18:22:31Z | https://github.com/langchain-ai/langchain/issues/17210 | 2,124,105,690 | 17,210 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
from langchain.document_loaders import AsyncChromiumLoader,AsyncHtmlLoader
from langchain.document_transformers import BeautifulSoupTransformer
# Load HTML
loader = AsyncChromiumLoader(["https://www.tandfonline.com/doi/full/10.1080/07303084.2022.2053479"])
html = loader.load()
bs_transformer = BeautifulSoupTransformer()
docs_transformed = bs_transformer.transform_documents(html, tags_to_extract=['h1','h2',"span",'p'])
print(docs_transformed)
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I'm encountering an issue with web scraping using the provided code snippet in the langchain repository.
The code generally works well for most URLs, but there are specific cases where it fails to extract any content. For instance:
1) URL: "https://www.cdc.gov/populationhealth/well-being/features/how-right-now.htm"
When attempting to scrape this URL, no content is extracted.
Upon investigation, I found that the URL redirects to: "https://www.cdc.gov/emotional-wellbeing/features/how-right-now.htm?CDC_AA_refVal=https%3A%2F%2Fwww.cdc.gov%2Fpopulationhealth%2Fwell-being%2Ffeatures%2Fhow-right-now.htm". This redirection might be causing the issue.
2) URL: "https://onlinelibrary.wiley.com/doi/10.1111/josh.13243"
Scraping this URL returns the following content:
"'onlinelibrary.wiley.com Comprobando si la conexión del sitio es segura Enable JavaScript and cookies to continue'" ("checking if the site connection is secure"). It seems like there might be some JavaScript or cookie-based verification process causing the scraping to fail.
Steps to Reproduce:
1. Use the provided code snippet with the mentioned URLs.
2. Observe the lack of extracted content or the presence of unexpected content.
Expected Behavior:
The web scraping code should consistently extract relevant content from the provided URLs without issues.
Additional Information: It might be necessary to handle URL redirections or JavaScript-based content verification to ensure successful scraping.
Any insights or suggestions on how to improve the code to handle these scenarios would be greatly appreciated.
### System Info
pip install -q langchain-openai langchain playwright beautifulsoup4
| Web Scrapping: specific cases where it fails to extract any content | https://api.github.com/repos/langchain-ai/langchain/issues/17203/comments | 1 | 2024-02-07T22:30:41Z | 2024-05-15T16:08:04Z | https://github.com/langchain-ai/langchain/issues/17203 | 2,124,008,096 | 17,203 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
cypher_prompt = PromptTemplate.from_template(CYPHER_GENERATION_TEMPLATE)
cypher_qa = GraphCypherQAChain.from_llm(
llm,
graph=graph,
cypher_prompt=cypher_prompt,
verbose=True,
return_intermediate_steps=True,
return_direct=False
)
------ other part of the code ----
agent_prompt = hub.pull("hwchase17/react-chat")
agent = create_react_agent(llm, tools, agent_prompt)
agent_executor = AgentExecutor(
agent=agent,
tools=tools,
verbose=True,
memory=memory,
return_intermediate_steps=True
)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
The GraphCypherQAChain tool is accessible to the react-chat agent and generates good Cypher queries, but it does not return the Cypher queries to the agent.
What I expect is that with return_intermediate_steps enabled on both the tool and the agent, I can get both the Cypher queries and the agent steps. Right now I only see the agent's return_intermediate_steps; there is no Cypher query from the tool.
### System Info
langchain==0.1.5
langchain-community==0.0.18
langchain-core==0.1.19
langchain-openai==0.0.5
langchainhub==0.1.14
mac
Python 3.9.6
| Can't acces return_intermediate_steps of a tool when dealing with an agent | https://api.github.com/repos/langchain-ai/langchain/issues/17182/comments | 1 | 2024-02-07T14:47:53Z | 2024-05-15T16:07:59Z | https://github.com/langchain-ai/langchain/issues/17182 | 2,123,203,370 | 17,182 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
class MongoHandler:
...
@error_logging
def setup_vector_search(
self,
search_index_name: str,
text_key: str,
embedding_key: str
) -> None:
self._search_index_name = search_index_name
try:
self._vector_search = MongoDBAtlasVectorSearch(
collection=self._collection,
embedding=self._embedding_model,
index_name=search_index_name,
text_key=text_key,
embedding_key=embedding_key,
)
except Exception as e:
logging.error('Mongo Atlas에서 먼저 Search Index 세팅을 해주세요.')
logging.error(e)
raise e
@error_logging
def vector_search(
self,
query: str,
k: int=5,
pre_filter: dict=None,
) -> List[Document]:
assert self._vector_search, 'vector search 세팅을 먼저 해주세요.'
results = self._vector_search.similarity_search(
query=query,
k=k,
pre_filter=pre_filter
)
return results
def search_documents(company_name, question_dict, num_docs=2):
search_result = {}
for key, value in question_dict.items():
for query in value:
contents = ''
pre_filter = {"$and": [{"stock_name": company_name}, {'major_category': key}]}
search = MONGODB_COLLENTION.vector_search(
query=query,
pre_filter=pre_filter,
k=num_docs
)
search_contents = [content.page_content for content in search]
# reference_data = [content.metadata for content in search]
contents += '\n\n'.join(search_contents)
search_result[query] = contents
return search_result
def run(model="gpt-3.5-turbo-0125"):
stock_name_list = get_company_name()
company_info = {}
for company_name in tqdm(stock_name_list[:2]):
question_dict = make_questions(company_name)
search_result = search_documents(company_name, question_dict)
total_answers = get_answers(company_name, search_result, model)
company_info.update(total_answers)
return company_info
company_info_1 = run(model="gpt-3.5-turbo-1106")
for key, value in company_info_1.items():
print(f"{key}:\n{value}\n\n")
```
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[5], line 1
----> 1 company_info_1 = run(model="gpt-3.5-turbo-1106")
2 for key, value in company_info_1.items():
3 print(f"{key}:\n{value}\n\n")
Cell In[4], line 7
5 for company_name in tqdm(stock_name_list[:2]):
6 question_dict = make_questions(company_name)
----> 7 search_result = search_documents(company_name, question_dict)
8 total_answers = get_answers(company_name, search_result, model)
9 company_info.update(total_answers)
Cell In[3], line 24
22 contents = ''
23 pre_filter = {"$and": [{"stock_name": company_name}, {'major_category': key}]}
---> 24 search = MONGODB_COLLENTION.vector_search(
25 query=query,
26 pre_filter=pre_filter,
27 k=num_docs
28 )
29 search_contents = [content.page_content for content in search]
30 # reference_data = [content.metadata for content in search]
File ~/work/team_project/contents-generate-ai/baseLogger.py:210, in error_logging.<locals>.wrapper(*args, **kwargs)
208 except Exception as e:
209 logging.error(e)
--> 210 raise e
File ~/work/team_project/contents-generate-ai/baseLogger.py:206, in error_logging.<locals>.wrapper(*args, **kwargs)
204 def wrapper(*args, **kwargs):
205 try:
--> 206 return func(*args, **kwargs)
208 except Exception as e:
209 logging.error(e)
File ~/work/team_project/contents-generate-ai/src/modules/my_mongodb.py:247, in MongoHandler.vector_search(self, query, k, pre_filter)
238 @error_logging
239 def vector_search(
240 self,
(...)
243 pre_filter: dict=None,
244 ) -> List[Document]:
245 assert self._vector_search, 'vector search 세팅을 먼저 해주세요.'
--> 247 results = self._vector_search.similarity_search(
248 query=query,
249 k=k,
250 pre_filter=pre_filter
251 )
252 return results
File ~/work/team_project/contents-generate-ai/.venv/lib/python3.10/site-packages/langchain_community/vectorstores/mongodb_atlas.py:273, in MongoDBAtlasVectorSearch.similarity_search(self, query, k, pre_filter, post_filter_pipeline, **kwargs)
256 """Return MongoDB documents most similar to the given query.
257
258 Uses the vectorSearch operator available in MongoDB Atlas Search.
(...)
270 List of documents most similar to the query and their scores.
271 """
272 additional = kwargs.get("additional")
--> 273 docs_and_scores = self.similarity_search_with_score(
274 query,
275 k=k,
276 pre_filter=pre_filter,
277 post_filter_pipeline=post_filter_pipeline,
278 )
280 if additional and "similarity_score" in additional:
281 for doc, score in docs_and_scores:
File ~/work/team_project/contents-generate-ai/.venv/lib/python3.10/site-packages/langchain_community/vectorstores/mongodb_atlas.py:240, in MongoDBAtlasVectorSearch.similarity_search_with_score(self, query, k, pre_filter, post_filter_pipeline)
223 """Return MongoDB documents most similar to the given query and their scores.
224
225 Uses the vectorSearch operator available in MongoDB Atlas Search.
(...)
237 List of documents most similar to the query and their scores.
238 """
239 embedding = self._embedding.embed_query(query)
--> 240 docs = self._similarity_search_with_score(
241 embedding,
242 k=k,
243 pre_filter=pre_filter,
244 post_filter_pipeline=post_filter_pipeline,
245 )
246 return docs
File ~/work/team_project/contents-generate-ai/.venv/lib/python3.10/site-packages/langchain_community/vectorstores/mongodb_atlas.py:212, in MongoDBAtlasVectorSearch._similarity_search_with_score(self, embedding, k, pre_filter, post_filter_pipeline)
210 text = res.pop(self._text_key)
211 score = res.pop("score")
--> 212 del res["embedding"]
213 docs.append((Document(page_content=text, metadata=res), score))
214 return docs
KeyError: 'embedding'
### Description
<Situation>
1. Set up the embedding field name, "doc_embedding"
2. Run `MongoDBAtlasVectorSearch(**kwargs).similarity_search(**params)`
3. A `KeyError: 'embedding'` is raised.
I think the `del res["embedding"]` line is hardcoded.
So I suggest changing that line to `del res[self._embedding_key]`.
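A minimal sketch of that fix (the helper name is hypothetical; the real change would be inline in `_similarity_search_with_score`):

```python
def strip_embedding(res: dict, embedding_key: str) -> dict:
    # Use the configured key, not the hardcoded "embedding" literal;
    # pop(..., None) also avoids a KeyError if the field is absent.
    res.pop(embedding_key, None)
    return res

cleaned = strip_embedding(
    {"text": "chunk", "doc_embedding": [0.1, 0.2], "score": 0.9},
    "doc_embedding",
)
```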
### System Info
[tool.poetry.dependencies]
python = "^3.10"
openai = "^1.7.0"
langchain = "^0.1.0"
langchain-openai = "^0.0.2"
langchain-community = "^0.0.15" | community: [MongoDBAtlasVectorSearch] Fix KeyError 'embedding' | https://api.github.com/repos/langchain-ai/langchain/issues/17177/comments | 2 | 2024-02-07T13:31:02Z | 2024-06-08T16:09:40Z | https://github.com/langchain-ai/langchain/issues/17177 | 2,123,046,146 | 17,177 |
[
"langchain-ai",
"langchain"
] | Can we also update the pricing information for the latest OpenAI models (released 0125)?
_Originally posted by @huanvo88 in https://github.com/langchain-ai/langchain/issues/12994#issuecomment-1923687183_
| Can we also update the pricing information for the latest OpenAI models (released 0125)? | https://api.github.com/repos/langchain-ai/langchain/issues/17173/comments | 3 | 2024-02-07T12:41:17Z | 2024-07-17T16:04:53Z | https://github.com/langchain-ai/langchain/issues/17173 | 2,122,949,251 | 17,173 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
%pip install pymilvus
#Imports a PyMilvus package:
from pymilvus import (
connections,
utility,
FieldSchema,
CollectionSchema,
DataType,
Collection,
)
from langchain_openai import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.chat_models import ChatOpenAI
from langchain.schema.runnable import RunnablePassthrough
from langchain.prompts import PromptTemplate
from langchain_community.document_loaders import TextLoader,PyPDFLoader
from langchain_community.vectorstores import Milvus
connections.connect("default", host="localhost", port="19530")
import os
import openai
from dotenv import load_dotenv
load_dotenv()
os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY')
file_path="Code_of_Conduct_Policy.pdf"
loader = PyPDFLoader(file_path)
documents = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
texts = text_splitter.split_documents(documents=documents)
print(texts)
embeddings = OpenAIEmbeddings()
#Creates a collection:
fields = [
FieldSchema(name="pk", dtype=DataType.INT64, is_primary=True, auto_id=False),
FieldSchema(name="random", dtype=DataType.DOUBLE),
FieldSchema(name="embeddings", dtype=DataType.FLOAT_VECTOR, dim=8)
]
schema = CollectionSchema(fields, "hello_milvus is the simplest demo to introduce the APIs")
hello_milvus = Collection("hello_milvus", schema)
vector_db = Milvus.from_documents(
texts,
embeddings,
collection_name="testing",
connection_args={"host": "localhost", "port": "19530"},
)
```
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
Cell In[18], line 1
----> 1 vector_db = Milvus.from_documents(
      2     texts,
      3     embeddings,
      4     collection_name="testing",
      5     connection_args={"host": "localhost", "port": "19530"},
      6 )

File ~/.local/lib/python3.9/site-packages/langchain_core/vectorstores.py:508, in VectorStore.from_documents(cls, documents, embedding, **kwargs)
    506 texts = [d.page_content for d in documents]
    507 metadatas = [d.metadata for d in documents]
--> 508 return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)

File ~/.local/lib/python3.9/site-packages/langchain_community/vectorstores/milvus.py:984, in Milvus.from_texts(cls, texts, embedding, metadatas, collection_name, connection_args, consistency_level, index_params, search_params, drop_old, ids, **kwargs)
    971 auto_id = True
    973 vector_db = cls(
    974     embedding_function=embedding,
    975     collection_name=collection_name,
   (...)
    982     **kwargs,
    983 )
--> 984 vector_db.add_texts(texts=texts, metadatas=metadatas, ids=ids)
    985 return vector_db

File ~/.local/lib/python3.9/site-packages/langchain_community/vectorstores/milvus.py:586, in Milvus.add_texts(self, texts, metadatas, timeout, batch_size, ids, **kwargs)
    584 end = min(i + batch_size, total_count)
    585 # Convert dict to list of lists batch for insertion
--> 586 insert_list = [insert_dict[x][i:end] for x in self.fields]
    587 # Insert into the collection.
    588 try:

File ~/.local/lib/python3.9/site-packages/langchain_community/vectorstores/milvus.py:586, in <listcomp>(.0)
    584 end = min(i + batch_size, total_count)
    585 # Convert dict to list of lists batch for insertion
--> 586 insert_list = [insert_dict[x][i:end] for x in self.fields]
    587 # Insert into the collection.
    588 try:

KeyError: 'pk'
### Description
Every cell works, but when I run `vector_db = Milvus.from_documents(...)` it throws this error, even though my Docker container running Milvus is up.
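For what it's worth, the shape of the failure can be reproduced in isolation: if the target collection's schema contains a field (here the primary key `pk`) for which `add_texts` prepared no values, the batch-building comprehension raises exactly this error. A minimal standalone sketch (field names taken from the traceback; the dict contents are illustrative):

```python
# Mirrors milvus.py line 586: insert_list = [insert_dict[x][i:end] for x in self.fields]
fields = ["pk", "text", "vector"]  # schema fields of the pre-existing collection
insert_dict = {"text": ["chunk"], "vector": [[0.1, 0.2]]}  # no "pk" values prepared
try:
    insert_list = [insert_dict[x][0:1] for x in fields]
except KeyError as err:
    print("missing field:", err)  # -> missing field: 'pk'
```

If that reading is right, letting the wrapper create its own collection (for example a fresh `collection_name`, or the `drop_old`/`ids` parameters visible in the `from_texts` signature in the traceback) may avoid the mismatch; I have not verified this against a live Milvus.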
### System Info
pip install pymilvus
| Getting "KeyError: 'pk' " while using Milvus DB | https://api.github.com/repos/langchain-ai/langchain/issues/17172/comments | 9 | 2024-02-07T12:37:12Z | 2024-06-18T08:05:08Z | https://github.com/langchain-ai/langchain/issues/17172 | 2,122,942,016 | 17,172 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
from langchain.chains import ConversationalRetrievalChain
from langchain.llms.bedrock import Bedrock
from langchain.prompts import PromptTemplate
llm_jurassic_ultra = Bedrock(
    model_id="ai21.j2-ultra-v1",
    endpoint_url="https://bedrock.us-east-1.amazonaws.com",
    model_kwargs={"temperature": 0.7, "maxTokens": 500, "numResults": 1},
)
print('llm_jurassic_ultra:', llm_jurassic_ultra)
llm_jurassic_mid = Bedrock(
    model_id="amazon.titan-text-express-v1",
    endpoint_url="https://bedrock.us-east-1.amazonaws.com",
    model_kwargs={"temperature": 0.7, "maxTokenCount": 300, "topP": 1},
)
print('llm_jurassic_mid:', llm_jurassic_mid)
#Create template for combining chat history and follow up question into a standalone question.
question_generator_chain_template = """
Here is some chat history contained in the <chat_history> tags and a follow-up question in the <follow_up> tags:
<chat_history>
{chat_history}
</chat_history>
<follow_up>
{question}
</follow_up>
Combine the chat history and follow up question into a standalone question.
"""
question_generator_chain_prompt = PromptTemplate.from_template(question_generator_chain_template)
print('question_generator_chain_prompt:', question_generator_chain_prompt)
#Create template for asking the question of the given context.
combine_docs_chain_template = """
You are a friendly, concise chatbot. Here is some context, contained in <context> tags:
<context>
{context}
</context>
Given the context answer this question: {question}
"""
combine_docs_chain_prompt = PromptTemplate.from_template(combine_docs_chain_template)
# RetrievalQA instance with custom prompt template
qa = ConversationalRetrievalChain.from_llm(
llm=llm_jurassic_ultra,
condense_question_llm=llm_jurassic_mid,
retriever=retriever,
return_source_documents=True,
condense_question_prompt=question_generator_chain_prompt,
combine_docs_chain_kwargs={"prompt": combine_docs_chain_prompt}
)
```

More here: https://github.com/aws-samples/amazon-bedrock-kendra-lex-chatbot/blob/main/lambda/app.py
### Error Message and Stack Trace (if applicable)
```
[ERROR] ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModel operation: The requested operation is not recognized by the service.
Traceback (most recent call last):
File "/var/task/app.py", line 128, in lambda_handler
result = qa(input_variables)
File "/var/lang/lib/python3.9/site-packages/langchain/chains/base.py", line 310, in __call__
raise e
File "/var/lang/lib/python3.9/site-packages/langchain/chains/base.py", line 304, in __call__
self._call(inputs, run_manager=run_manager)
File "/var/lang/lib/python3.9/site-packages/langchain/chains/conversational_retrieval/base.py", line 159, in _call
answer = self.combine_docs_chain.run(
File "/var/lang/lib/python3.9/site-packages/langchain/chains/base.py", line 510, in run
return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
File "/var/lang/lib/python3.9/site-packages/langchain/chains/base.py", line 310, in __call__
raise e
File "/var/lang/lib/python3.9/site-packages/langchain/chains/base.py", line 304, in __call__
self._call(inputs, run_manager=run_manager)
File "/var/lang/lib/python3.9/site-packages/langchain/chains/combine_documents/base.py", line 122, in _call
output, extra_return_dict = self.combine_docs(
File "/var/lang/lib/python3.9/site-packages/langchain/chains/combine_documents/stuff.py", line 171, in combine_docs
return self.llm_chain.predict(callbacks=callbacks, **inputs), {}
File "/var/lang/lib/python3.9/site-packages/langchain/chains/llm.py", line 298, in predict
return self(kwargs, callbacks=callbacks)[self.output_key]
File "/var/lang/lib/python3.9/site-packages/langchain/chains/base.py", line 310, in __call__
raise e
File "/var/lang/lib/python3.9/site-packages/langchain/chains/base.py", line 304, in __call__
self._call(inputs, run_manager=run_manager)
File "/var/lang/lib/python3.9/site-packages/langchain/chains/llm.py", line 108, in _call
response = self.generate([inputs], run_manager=run_manager)
File "/var/lang/lib/python3.9/site-packages/langchain/chains/llm.py", line 120, in generate
return self.llm.generate_prompt(
File "/var/lang/lib/python3.9/site-packages/langchain/llms/base.py", line 507, in generate_prompt
return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
File "/var/lang/lib/python3.9/site-packages/langchain/llms/base.py", line 656, in generate
output = self._generate_helper(
File "/var/lang/lib/python3.9/site-packages/langchain/llms/base.py", line 544, in _generate_helper
raise e
File "/var/lang/lib/python3.9/site-packages/langchain/llms/base.py", line 531, in _generate_helper
self._generate(
File "/var/lang/lib/python3.9/site-packages/langchain/llms/base.py", line 1053, in _generate
self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
File "/var/lang/lib/python3.9/site-packages/langchain/llms/bedrock.py", line 427, in _call
return self._prepare_input_and_invoke(prompt=prompt, stop=stop, **kwargs)
File "/var/lang/lib/python3.9/site-packages/langchain/llms/bedrock.py", line 266, in _prepare_input_and_invoke
raise ValueError(f"Error raised by bedrock service: {e}")
```
### Description
I am trying to use LangChain to fetch a response from Amazon Bedrock and get the error described above. I checked for any access-related issues and found none.

I am working from this example: https://github.com/aws-samples/amazon-bedrock-kendra-lex-chatbot/blob/main/lambda/app.py
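One hedged observation (an assumption on my part, not verified against this setup): the `endpoint_url` in the snippet points at the Bedrock management host, while `InvokeModel` is served by a separate runtime host, and calling the wrong one can produce exactly "The requested operation is not recognized by the service". A sketch of the naming distinction (confirm the exact hostnames in the AWS docs for your region):

```python
# Illustrative endpoint names only; verify against current AWS documentation.
region = "us-east-1"
management_endpoint = f"https://bedrock.{region}.amazonaws.com"       # control-plane APIs
runtime_endpoint = f"https://bedrock-runtime.{region}.amazonaws.com"  # InvokeModel
print(runtime_endpoint)
```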
### System Info
Running on AWS Lambda. | Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModel operation. | https://api.github.com/repos/langchain-ai/langchain/issues/17170/comments | 2 | 2024-02-07T11:10:10Z | 2024-06-29T16:07:42Z | https://github.com/langchain-ai/langchain/issues/17170 | 2,122,775,441 | 17,170 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
model = ChatOpenAI(temperature=0)
class Joke(BaseModel):
    setup: str = Field(description="question to set up a joke")
    punchline: str = Field(description="answer to resolve the joke")
joke_query = "Tell me a joke."
parser = JsonOutputParser(pydantic_object=Joke)
prompt = PromptTemplate(
template="Answer the user query.\n{format_instructions}\n{query}\n",
input_variables=["query"],
partial_variables={"format_instructions": parser.get_format_instructions()}, #cause
)
chain = prompt | model | parser
chain.invoke({"query": joke_query})
openai_functions = [convert_to_openai_function(Joke)]
parser = JsonOutputFunctionsParser()
chain = prompt | model.bind(functions=openai_functions) | parser
chain.invoke({"query": "tell me a joke"})
# openai.BadRequestError: Error code: 400 - {'error': {'message': "'' does not match '^[a-zA-Z0-9_-]{1,64}$' - 'functions.0.name'", 'type': 'invalid_request_error', 'param': None, 'code': None}}
```
### Error Message and Stack Trace (if applicable)
```
# openai.BadRequestError: Error code: 400 - {'error': {'message': "'' does not match '^[a-zA-Z0-9_-]{1,64}$' - 'functions.0.name'", 'type': 'invalid_request_error', 'param': None, 'code': None}}
```
### Description
The `JsonOutputParser.get_format_instructions` method modifies the result of `pydantic_object.schema()` in place. Because the same Pydantic class can call `.schema()` multiple times, this leads to unintended issues for later callers such as `convert_to_openai_function`; the example code above illustrates the scenario.
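My reading of the failure, as a standalone sketch (a plain-dict stand-in, not the actual Pydantic or LangChain code): if `.schema()` hands back a cached dict and `get_format_instructions` deletes keys from it in place, every later caller sees the mutated dict, which is how `convert_to_openai_function` can end up with an empty function name:

```python
# Plain-dict stand-in for a cached pydantic schema (illustrative names).
_SCHEMA_CACHE = {"title": "Joke", "type": "object", "properties": {}}

def schema():
    return _SCHEMA_CACHE  # same object returned on every call, like a cache

s = schema()
s.pop("title", None)  # roughly what get_format_instructions does in place
s.pop("type", None)
print(schema())  # -> {'properties': {}}; "title" is gone for later callers
```

Copying before mutating (for example `dict(schema())` or `copy.deepcopy`) would keep the cached object intact.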
### System Info
langchain==0.1.4
langchain-anthropic==0.0.1.post1
langchain-community==0.0.16
langchain-core==0.1.16
langchain-openai==0.0.2
| Issue: JsonOutputParser's get_format_instructions() Modifying Pydantic Class Schema | https://api.github.com/repos/langchain-ai/langchain/issues/17161/comments | 1 | 2024-02-07T09:03:18Z | 2024-02-13T22:41:48Z | https://github.com/langchain-ai/langchain/issues/17161 | 2,122,518,120 | 17,161 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
Hi Team,
I am using Chroma for uploading documents and then trying to get the answer from the DB using an agent, but it generates inconsistent results: the probability of getting the correct answer is about 0.1. Please let me know how I can fix this.
```python
from langchain.chains import ChatVectorDBChain, RetrievalQA, RetrievalQAWithSourcesChain, ConversationChain
from langchain.agents import initialize_agent, Tool, load_tools, AgentExecutor, ConversationalChatAgent
from langchain.tools import BaseTool, tool
vectordb = connect_chromadb()
search_qa = RetrievalQAWithSourcesChain.from_chain_type(llm=llm, chain_type="stuff",
retriever=vectordb.as_retriever(search_type="mmr", search_kwargs={"filter": filters}), return_source_documents=True,
chain_type_kwargs=digitaleye_templates.qa_summary_kwargs, reduce_k_below_max_tokens=True)
summary_qa = RetrievalQAWithSourcesChain.from_chain_type(llm=llm, chain_type="stuff",
retriever=vectordb.as_retriever(search_type="mmr", search_kwargs={"filter": filters}),
return_source_documents=True, chain_type_kwargs=digitaleye_templates.general_summary_kwargs,
reduce_k_below_max_tokens=True)
detools = [
Tool(
name = "QA Search",
func=search_qa,
description="Useful for when you want to search a document store for the answer to a question based on facts contained in those documents.",
return_direct=True,
),
Tool(
name = "General Summary",
func=summary_qa,
description="Useful for when you want to summarize a document for the answer to a question based on facts contained in those documents.",
return_direct=True,
),
]
agent = initialize_agent(tools=detools, llm=llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
verbose=True,
agent_kwargs={
'prefix':PREFIX,
#'format_instructions':FORMAT_INSTRUCTIONS,
#'suffix':SUFFIX,
"input_variables": ["input","agent_scratchpad"],
},
#prefix=PREFIX,
#format_instructions=FORMAT_INSTRUCTIONS,
#suffix=SUFFIX,
max_iterations=3,
return_intermediate_steps=False,
early_stopping_method="generate",
handle_parsing_errors="Check you output and make sure it conforms! Do not output an action and a final answer at the same time.",
#streaming=True,
)
answer = agent.run({"input": "summarize mrileftkneereport.pdf"})
print(answer)
#here are the samples of answer which I received
answer1 = "As an AI model, I’m unable to open or read files such as “mrileftkneereport.pdf”. However, I can help you generate a
summary or answer specific questions based on the text content you provide from the report. Please copy and paste
the text from the report, and I’ll do my best to assist you."
answer2 = "As an AI, I’m unable to access or analyze specific files such as “mrileftkneereport.pdf”. However, I can guide you on
how to interpret a radiology report based on the guidelines provided.
For a general summary, you would need to identify the patient’s name, date of exam, type of exam requested, and
radiologist name from the report. The first paragraph of the summary should include the reason for the screening,
screening time, protocol, patient background, and comparison date.
The second paragraph should detail the specific findings, including the number of nodules detected, their precise
position and size, findings on lungs, heart, pleura, coronary artery calcification, mediastinum/hilum/axilla, and other
findings.
The third paragraph should provide conclusions, including the overall lung-rads category, management
recommendation and follow-up date, based on the lesion.
If there are incidental findings, these should be introduced in the fourth paragraph.
For a specific question, you would need to locate the relevant information in the report. For example, if the question is
about the size of the largest lung nodule, you would need to find the section of the report that details the nodules and
their sizes.
If the question is about the radiologist’s recommendation for managing a lung nodule, you would need to find the
section of the report that provides the conclusions and recommendations.
Remember to provide the answer in a clear and accurate manner, and provide any relevant context if necessary.
If the question contains both a general summary request and a specific question, prioritize answering the specific
question.
Avoid using legal jargon in your responses and answer what you can based on the information available in the report. If
you are unable to answer the question, explain why, such as the information not being included in the report."
answer3 = "The question appears to be the name of the file, “mrileftkneereport.pdf”, rather than a specific question or a request
for a general summary of the radiology report. Therefore, it’s not clear what information is being requested. Please
provide a specific question or request for a general summary of the radiology report."
answer4 = "The question provided is “mrileftkneeereport.pdf” which does not specify whether a general summary or a specific
detail is required from the MRI report. However, I can provide a general summary based on the information given in
the report.
Summary Report
Patient’s Name: Regina Doe
Date of Exam: Not specified
Type of Exam: MRI Left Knee
Radiologist Name: Ross Banner, MD
The MRI screening was performed due to a left knee twisting injury that Regina Doe experienced on 1/1/2013 The
screening was conducted using a standard protocol on a .3 Tesla Open MRI system, without IV contrast. There were
no
previous scans available for comparison.
The MRI findings revealed a complete mid substance disruption of the anterior cruciate ligament, with its fibers
fibrillating within the joint. This has resulted in the buckling of the posterior crucial ligament. There is also edema
relative to the medial collateral ligament, indicating a grade 1 injury. The lateral collateral ligament complex, including
the iliotibial band, biceps femoris tendon, fibular collateral ligament, and popliteus muscle and tendon, are thought to
be intact. The menisci and patella appear to be in good condition, although there is posterior meniscal capsular
junction
edema. A large suprapatellar bursal effusion and mild reactive synovitis were also noted. The osseous structures and
periarticular soft tissues were largely unremarkable, except for a deepened lateral condylar patellar sulcus of the femur.
The conclusions drawn from the MRI report include a complete full-thickness disruption of the anterior cruciate
ligament, an associated osseous contusion of the lateral condylar patellar sulcus (indicative of a pivot shift injury), and a
grade 1 MCL complex injury. No other associated injuries were identified."
```
Answer4 is correct, but why am I not getting it consistently?
Please help me on this, I will be thankful to you.
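Two hedged observations, since the `llm` construction is not shown in the snippet: agent tool selection is re-sampled from the model on every run, so non-greedy sampling alone can explain divergent traces, and the two tool descriptions are nearly identical, which gives a zero-shot agent little signal to choose between them. A sketch of both adjustments (parameter values and wording are suggestions, not taken from a working setup):

```python
# 1) Pin sampling so the agent's Thought/Action text is repeatable.
deterministic_kwargs = {"temperature": 0, "top_p": 1}
# llm = <your chat model class>(**deterministic_kwargs)  # hypothetical

# 2) Make the tool descriptions mutually exclusive; in the snippet both end
#    with "...for the answer to a question based on facts contained in
#    those documents".
qa_description = "Use ONLY to answer a specific factual question about the stored documents."
summary_description = "Use ONLY when asked to summarize an entire document, e.g. a named PDF."
print(deterministic_kwargs["temperature"], qa_description != summary_description)
```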
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am trying to get answers from chromadb vectorstore using Agent but every time it is producing inconsistent results.
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22621
> Python Version: 3.9.11 (main, Mar 30 2022, 02:45:55) [MSC v.1916 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.10
> langchain: 0.1.0
> langchain_community: 0.0.12
> langserve: Not Found | AgentExecutor giving inconsistent results | https://api.github.com/repos/langchain-ai/langchain/issues/17160/comments | 3 | 2024-02-07T07:34:57Z | 2024-02-07T13:41:56Z | https://github.com/langchain-ai/langchain/issues/17160 | 2,122,370,678 | 17,160 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
# Generate summaries of text elements
def generate_text_summaries(texts, tables, summarize_texts=False):
    """
    Summarize text elements
    texts: List of str
    tables: List of str
    summarize_texts: Bool to summarize texts
    """
    # Prompt
    prompt_text = """You are an assistant tasked with summarizing tables and text for retrieval. \
These summaries will be embedded and used to retrieve the raw text or table elements. \
Give a concise summary of the table or text that is well-optimized for retrieval. Table \
or text: {element} """
    prompt = PromptTemplate.from_template(prompt_text)
    empty_response = RunnableLambda(
        lambda x: AIMessage(content="Error processing document")
    )
    # Text summary chain
    model = VertexAI(
        temperature=0, model_name="gemini-pro", max_output_tokens=1024
    ).with_fallbacks([empty_response])
    summarize_chain = {"element": lambda x: x} | prompt | model | StrOutputParser()

    # Initialize empty summaries
    text_summaries = []
    table_summaries = []

    # Apply to text if texts are provided and summarization is requested
    if texts and summarize_texts:
        text_summaries = summarize_chain.batch(texts, {"max_concurrency": 1})
    elif texts:
        text_summaries = texts

    # Apply to tables if tables are provided
    if tables:
        table_summaries = summarize_chain.batch(tables, {"max_concurrency": 1})

    return text_summaries, table_summaries

# Get text, table summaries
text_summaries2, table_summaries = generate_text_summaries(
    texts[9:], tables, summarize_texts=True
)
```
### Error Message and Stack Trace (if applicable)
```
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
<ipython-input-6-4464722c69fb> in <cell line: 51>()
     49
     50 # Get text, table summaries
---> 51 text_summaries2, table_summaries = generate_text_summaries(
     52     texts[9:], tables, summarize_texts=True
     53 )

3 frames
/usr/local/lib/python3.10/dist-packages/pydantic/v1/main.py in __init__(__pydantic_self__, **data)
    339 values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
    340 if validation_error:
--> 341     raise validation_error
    342 try:
    343     object_setattr(__pydantic_self__, '__dict__', values)
ValidationError: 1 validation error for VertexAI
__root__
Unable to find your project. Please provide a project ID by:
- Passing a constructor argument
- Using vertexai.init()
- Setting project using 'gcloud config set project my-project'
- Setting a GCP environment variable
- To create a Google Cloud project, please follow guidance at https://developers.google.com/workspace/guides/create-project (type=value_error)
```
### Description
I am not able to connect to Vertex AI; I am new to GCP. What are the steps?
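The `ValidationError` itself lists the remedies; the environment-variable route can be sketched without any GCP SDK installed (the project id is a placeholder, and the exact variable name your client-library version reads should be confirmed in the Google Cloud docs):

```python
import os

# One of the options the error lists: "Setting a GCP environment variable".
# GOOGLE_CLOUD_PROJECT is the conventional name (an assumption here).
os.environ["GOOGLE_CLOUD_PROJECT"] = "my-gcp-project"  # placeholder id

# The other listed options, for reference (require google-cloud-aiplatform):
#   import vertexai; vertexai.init(project="my-gcp-project", location="us-central1")
#   model = VertexAI(model_name="gemini-pro", project="my-gcp-project")
print(os.environ["GOOGLE_CLOUD_PROJECT"])
```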
### System Info
```
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Sat Nov 18 15:31:17 UTC 2023
> Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.1.19
> langchain: 0.1.5
> langchain_community: 0.0.18
> langsmith: 0.0.86
> langchain_experimental: 0.0.50
> langchain_google_vertexai: 0.0.3
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
``` | ValidationError: 1 validation error for VertexAI | https://api.github.com/repos/langchain-ai/langchain/issues/17159/comments | 2 | 2024-02-07T06:32:43Z | 2024-02-07T13:42:50Z | https://github.com/langchain-ai/langchain/issues/17159 | 2,122,288,498 | 17,159 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
There is room for improvement in the import statement included in the following sample code from the Usage section.
> from langchain_community.llms import OCIGenAI
```py
from langchain_community.llms import OCIGenAI
# use default authN method API-key
llm = OCIGenAI(
model_id="MY_MODEL",
service_endpoint="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
compartment_id="MY_OCID",
)
response = llm.invoke("Tell me one fact about earth", temperature=0.7)
print(response)
```
The same applies to this import as well.
> from langchain_community.vectorstores import FAISS
```py
from langchain.schema.output_parser import StrOutputParser
from langchain.schema.runnable import RunnablePassthrough
from langchain_community.embeddings import OCIGenAIEmbeddings
from langchain_community.vectorstores import FAISS
```
### Idea or request for content:
For the first item, it would be more appropriate to write:
```py
from langchain_community.llms.oci_generative_ai import OCIGenAI
```
And for the second item:
```py
from langchain_community.vectorstores.faiss import FAISS
``` | DOC: some import statement of Oracle Cloud Infrastructure Generative AI can be improved | https://api.github.com/repos/langchain-ai/langchain/issues/17156/comments | 3 | 2024-02-07T05:00:30Z | 2024-02-13T06:38:55Z | https://github.com/langchain-ai/langchain/issues/17156 | 2,122,192,776 | 17,156 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain_community.vectorstores.milvus import Milvus
vdb = Milvus(
embedding_function=embeddings,
connection_args={
"host": "localhost",
"port": 19530,
},
auto_id=True,
)
vdb.add_texts(
texts=[
"This is a test",
"This is another test",
],
metadatas=[
{"test": "1"},
{"test": "2"},
],
)
```
### Error Message and Stack Trace (if applicable)
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[8], [line 1](vscode-notebook-cell:?execution_count=8&line=1)
----> [1](vscode-notebook-cell:?execution_count=8&line=1) vdb.add_texts(
[2](vscode-notebook-cell:?execution_count=8&line=2) texts=[
[3](vscode-notebook-cell:?execution_count=8&line=3) "This is a test",
[4](vscode-notebook-cell:?execution_count=8&line=4) "This is another test",
[5](vscode-notebook-cell:?execution_count=8&line=5) ],
[6](vscode-notebook-cell:?execution_count=8&line=6) metadatas=[
[7](vscode-notebook-cell:?execution_count=8&line=7) {"test": "1"},
[8](vscode-notebook-cell:?execution_count=8&line=8) {"test": "2"},
[9](vscode-notebook-cell:?execution_count=8&line=9) ],
[10](vscode-notebook-cell:?execution_count=8&line=10) )
File [d:\Projects\ai-notebook\.venv\Lib\site-packages\langchain_community\vectorstores\milvus.py:586](file:///D:/Projects/ai-notebook/.venv/Lib/site-packages/langchain_community/vectorstores/milvus.py:586), in Milvus.add_texts(self, texts, metadatas, timeout, batch_size, ids, **kwargs)
[584](file:///D:/Projects/ai-notebook/.venv/Lib/site-packages/langchain_community/vectorstores/milvus.py:584) end = min(i + batch_size, total_count)
[585](file:///D:/Projects/ai-notebook/.venv/Lib/site-packages/langchain_community/vectorstores/milvus.py:585) # Convert dict to list of lists batch for insertion
--> [586](file:///D:/Projects/ai-notebook/.venv/Lib/site-packages/langchain_community/vectorstores/milvus.py:586) insert_list = [insert_dict[x][i:end] for x in self.fields]
[587](file:///D:/Projects/ai-notebook/.venv/Lib/site-packages/langchain_community/vectorstores/milvus.py:587) # Insert into the collection.
[588](file:///D:/Projects/ai-notebook/.venv/Lib/site-packages/langchain_community/vectorstores/milvus.py:588) try:
File [d:\Projects\ai-notebook\.venv\Lib\site-packages\langchain_community\vectorstores\milvus.py:586](file:///D:/Projects/ai-notebook/.venv/Lib/site-packages/langchain_community/vectorstores/milvus.py:586), in <listcomp>(.0)
[584](file:///D:/Projects/ai-notebook/.venv/Lib/site-packages/langchain_community/vectorstores/milvus.py:584) end = min(i + batch_size, total_count)
[585](file:///D:/Projects/ai-notebook/.venv/Lib/site-packages/langchain_community/vectorstores/milvus.py:585) # Convert dict to list of lists batch for insertion
--> [586](file:///D:/Projects/ai-notebook/.venv/Lib/site-packages/langchain_community/vectorstores/milvus.py:586) insert_list = [insert_dict[x][i:end] for x in self.fields]
[587](file:///D:/Projects/ai-notebook/.venv/Lib/site-packages/langchain_community/vectorstores/milvus.py:587) # Insert into the collection.
[588](file:///D:/Projects/ai-notebook/.venv/Lib/site-packages/langchain_community/vectorstores/milvus.py:588) try:
KeyError: 'pk'
```
### Description
According to #16256, if auto_id=True is set, Milvus should be compatible with older versions that do not have auto_id, but it seems it is not compatible.
@jaelgu would you please take a look at this issue?
### System Info
langchain==0.1.5
langchain-community==0.0.18
langchain-core==0.1.19
langchain-openai==0.0.5 | Vectorstore Milvus set auto_id=True seems incompatible with old version | https://api.github.com/repos/langchain-ai/langchain/issues/17147/comments | 9 | 2024-02-07T01:39:21Z | 2024-07-07T17:14:57Z | https://github.com/langchain-ai/langchain/issues/17147 | 2,121,991,274 | 17,147 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain_core.runnables import RunnablePassthrough

# Prompt template
template = """Answer the question based only on the following context, which can include text and tables:
{context}
Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)

# LLM
model = ChatOpenAI(temperature=0, model="gpt-4")

# RAG pipeline
chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | model
    | StrOutputParser()
)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Using the notebook from https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_Structured_RAG.ipynb
and the original "Llama2 article" pdf from the notebook - I get **_only_** incorrect answers related to information in the tables.
<img width="1089" alt="Screenshot 2024-02-06 at 6 59 40 PM" src="https://github.com/langchain-ai/langchain/assets/38818491/68de29e1-c256-474a-99be-df32e823bd4e">
chain.invoke("What is the commonsense reasoning score of Falcon 40B ?")
'The commonsense reasoning score of Falcon 40B is 15.2.' ... Incorrect
chain.invoke("Which model has the worst Reading Comprehension and which one has the best")
'The Llama 1 model has the worst Reading Comprehension and the Falcon model has the best.' ... Incorrect
If one asks without capitalization - 'reading comprehension' - no answers are found (?!) ... Incorrect
Another table and example:
<img width="913" alt="Screenshot 2024-02-06 at 6 28 32 PM" src="https://github.com/langchain-ai/langchain/assets/38818491/6b0768de-a419-486a-a83d-742144c32379">
chain.invoke("What is the power consumption of training Llama2 34B ?")
'The power consumption of training Llama2 34B is https://github.com/langchain-ai/langchain/commit/172032014ea25f655a3efab5be5abcc2e3693037 W.' ... Incorrect
### System Info
langchain==0.1.5
langchain-community==0.0.17
langchain-core==0.1.18
langchain-openai==0.0.5
langchainhub==0.1.14
Python 3.9.12 (main, Apr 5 2022, 01:53:17)
[Clang 12.0.0 ] :: Anaconda, Inc. on darwin | Incorrect answers related to tables in the original Llama2 article used in the tutorial | https://api.github.com/repos/langchain-ai/langchain/issues/17140/comments | 1 | 2024-02-07T00:02:38Z | 2024-05-15T16:07:49Z | https://github.com/langchain-ai/langchain/issues/17140 | 2,121,894,168 | 17,140 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
pdf_loader = PyPDFLoader("docs\conda-cheatsheet.pdf", True)
pages = pdf_loader.load()
```
### Error Message and Stack Trace (if applicable)
File "C:\Users\luke8\Desktop\chatagent\app.py", line 26, in <module>
pages = pdf_loader.load()
File "D:\Anaconda3\envs\env39\lib\site-packages\langchain\document_loaders\pdf.py", line 161, in load
return list(self.lazy_load())
File "D:\Anaconda3\envs\env39\lib\site-packages\langchain\document_loaders\pdf.py", line 168, in lazy_load
yield from self.parser.parse(blob)
File "D:\Anaconda3\envs\env39\lib\site-packages\langchain\document_loaders\base.py", line 95, in parse
return list(self.lazy_parse(blob))
File "D:\Anaconda3\envs\env39\lib\site-packages\langchain\document_loaders\parsers\pdf.py", line 26, in lazy_parse
pdf_reader = pypdf.PdfReader(pdf_file_obj, password=self.password)
File "D:\Anaconda3\envs\env39\lib\site-packages\pypdf\_reader.py", line 345, in __init__
raise PdfReadError("Not encrypted file")
pypdf.errors.PdfReadError: Not encrypted file
### Description
Trying to extract images from a PDF to text, I got the error above. I also tried to follow the syntax in this [Extracting images in Langchain doc](https://python.langchain.com/docs/modules/data_connection/document_loaders/pdf#extracting-images) and got this error instead:
```
Traceback (most recent call last):
  File "C:\Users\luke8\Desktop\chatagent\app.py", line 25, in <module>
    pdf_loader = PyPDFLoader("docs\TaskWaver.pdf", extract_images=True)
TypeError: __init__() got an unexpected keyword argument 'extract_images'
```
### System Info
# packages in environment at D:\Anaconda3\envs\env39:
#
# Name Version Build Channel
aiohttp 3.9.0 py39h2bbff1b_0
aiosignal 1.3.1 pyhd8ed1ab_0 conda-forge
anyio 3.5.0 py39haa95532_0 anaconda
appdirs 1.4.4 pyhd3eb1b0_0
argon2-cffi 20.1.0 py39h2bbff1b_1 anaconda
asttokens 2.0.5 pyhd3eb1b0_0 anaconda
async-timeout 4.0.3 pyhd8ed1ab_0 conda-forge
attrs 23.1.0 py39haa95532_0 anaconda
babel 2.11.0 py39haa95532_0 anaconda
backcall 0.2.0 pyhd3eb1b0_0 anaconda
beautifulsoup4 4.12.2 py39haa95532_0 anaconda
blas 1.0 mkl
bleach 4.1.0 pyhd3eb1b0_0 anaconda
blinker 1.7.0 pypi_0 pypi
brotli 1.0.9 h2bbff1b_7
brotli-bin 1.0.9 h2bbff1b_7
brotli-python 1.0.9 py39hd77b12b_7 anaconda
ca-certificates 2023.12.12 haa95532_0
cachetools 5.3.2 pyhd8ed1ab_0 conda-forge
certifi 2024.2.2 py39haa95532_0
cffi 1.16.0 py39h2bbff1b_0 anaconda
charset-normalizer 2.0.4 pyhd3eb1b0_0 anaconda
click 8.1.7 py39haa95532_0
colorama 0.4.6 py39haa95532_0
coloredlogs 15.0.1 pypi_0 pypi
comm 0.1.2 py39haa95532_0 anaconda
contourpy 1.2.0 py39h59b6b97_0
cryptography 41.0.3 py39h89fc84f_0 anaconda
cycler 0.11.0 pyhd3eb1b0_0
dataclasses-json 0.5.7 pyhd8ed1ab_0 conda-forge
debugpy 1.6.7 py39hd77b12b_0 anaconda
decorator 5.1.1 pyhd3eb1b0_0 anaconda
defusedxml 0.7.1 pyhd3eb1b0_0 anaconda
distro 1.9.0 pypi_0 pypi
docker-pycreds 0.4.0 pyhd3eb1b0_0
entrypoints 0.4 py39haa95532_0 anaconda
et_xmlfile 1.1.0 py39haa95532_0
exceptiongroup 1.0.4 py39haa95532_0 anaconda
executing 0.8.3 pyhd3eb1b0_0 anaconda
filelock 3.13.1 py39haa95532_0
flask 3.0.2 pypi_0 pypi
flask-sqlalchemy 3.1.1 pypi_0 pypi
flatbuffers 23.5.26 pypi_0 pypi
fonttools 4.25.0 pyhd3eb1b0_0
freetype 2.12.1 ha860e81_0
frozenlist 1.4.0 py39h2bbff1b_0
gitdb 4.0.7 pyhd3eb1b0_0
gitpython 3.1.37 py39haa95532_0
gmpy2 2.1.2 py39h7f96b67_0
google-api-core 2.16.1 pyhd8ed1ab_0 conda-forge
google-auth 2.27.0 pyhca7485f_0 conda-forge
googleapis-common-protos 1.62.0 pyhd8ed1ab_0 conda-forge
greenlet 3.0.3 pypi_0 pypi
h11 0.14.0 pypi_0 pypi
httpcore 1.0.2 pypi_0 pypi
httpx 0.26.0 pypi_0 pypi
humanfriendly 10.0 py39haa95532_1
icc_rt 2022.1.0 h6049295_2
icu 58.2 vc14hc45fdbb_0 [vc14] anaconda
idna 3.4 py39haa95532_0 anaconda
importlib-metadata 7.0.1 py39haa95532_0
importlib_resources 6.1.1 py39haa95532_1
iniconfig 2.0.0 pyhd8ed1ab_0 conda-forge
intel-openmp 2023.1.0 h59b6b97_46320
ipykernel 6.25.0 py39h9909e9c_0 anaconda
ipython 8.15.0 py39haa95532_0 anaconda
ipython_genutils 0.2.0 pyhd3eb1b0_1 anaconda
ipywidgets 8.0.4 py39haa95532_0 anaconda
itsdangerous 2.1.2 pypi_0 pypi
jedi 0.18.1 py39haa95532_1 anaconda
jinja2 3.1.3 py39haa95532_0
joblib 1.2.0 py39haa95532_0
jpeg 9e h2bbff1b_1 anaconda
json5 0.9.6 pyhd3eb1b0_0 anaconda
jsonschema 4.19.2 py39haa95532_0 anaconda
jsonschema-specifications 2023.7.1 py39haa95532_0 anaconda
jupyter 1.0.0 py39haa95532_8 anaconda
jupyter_client 7.4.9 py39haa95532_0 anaconda
jupyter_console 6.6.3 py39haa95532_0 anaconda
jupyter_core 5.5.0 py39haa95532_0 anaconda
jupyter_server 1.23.4 py39haa95532_0 anaconda
jupyterlab 3.3.2 pyhd3eb1b0_0 anaconda
jupyterlab_pygments 0.2.2 py39haa95532_0 anaconda
jupyterlab_server 2.25.1 py39haa95532_0 anaconda
jupyterlab_widgets 3.0.9 py39haa95532_0 anaconda
kiwisolver 1.4.4 py39hd77b12b_0
krb5 1.20.1 h5b6d351_1 anaconda
langchain 0.0.291 pyhd8ed1ab_0 conda-forge
langsmith 0.0.86 pyhd8ed1ab_0 conda-forge
lerc 3.0 hd77b12b_0
libbrotlicommon 1.0.9 h2bbff1b_7
libbrotlidec 1.0.9 h2bbff1b_7
libbrotlienc 1.0.9 h2bbff1b_7
libclang 14.0.6 default_hb5a9fac_1 anaconda
libclang13 14.0.6 default_h8e68704_1 anaconda
libdeflate 1.17 h2bbff1b_1
libpng 1.6.39 h8cc25b3_0 anaconda
libpq 12.15 h906ac69_1 anaconda
libprotobuf 3.20.3 h23ce68f_0
libsodium 1.0.18 h62dcd97_0 anaconda
libtiff 4.5.1 hd77b12b_0
libuv 1.44.2 h2bbff1b_0
libwebp-base 1.3.2 h2bbff1b_0
lz4-c 1.9.4 h2bbff1b_0 anaconda
markupsafe 2.1.3 py39h2bbff1b_0
marshmallow 3.20.2 pyhd8ed1ab_0 conda-forge
marshmallow-enum 1.5.1 pyh9f0ad1d_3 conda-forge
matplotlib-base 3.8.0 py39h4ed8f06_0
matplotlib-inline 0.1.6 py39haa95532_0 anaconda
mistune 2.0.4 py39haa95532_0 anaconda
mkl 2023.1.0 h6b88ed4_46358
mkl-service 2.4.0 py39h2bbff1b_1
mkl_fft 1.3.8 py39h2bbff1b_0
mkl_random 1.2.4 py39h59b6b97_0
mpc 1.1.0 h7edee0f_1
mpfr 4.0.2 h62dcd97_1
mpir 3.0.0 hec2e145_1
mpmath 1.3.0 py39haa95532_0
multidict 6.0.4 py39h2bbff1b_0
munkres 1.1.4 py_0
mypy_extensions 1.0.0 pyha770c72_0 conda-forge
nbclassic 0.5.5 py39haa95532_0 anaconda
nbclient 0.8.0 py39haa95532_0 anaconda
nbconvert 7.10.0 py39haa95532_0 anaconda
nbformat 5.9.2 py39haa95532_0 anaconda
nest-asyncio 1.5.6 py39haa95532_0 anaconda
networkx 2.8.8 pyhd8ed1ab_0 conda-forge
notebook 6.5.4 py39haa95532_0 anaconda
notebook-shim 0.2.3 py39haa95532_0 anaconda
numexpr 2.8.7 py39h2cd9be0_0
numpy 1.26.3 py39h055cbcc_0
numpy-base 1.26.3 py39h65a83cf_0
onnxruntime 1.17.0 py39he3bb845_0_cpu conda-forge
openai 1.11.1 pypi_0 pypi
openapi-schema-pydantic 1.2.4 pyhd8ed1ab_0 conda-forge
opencv-python 4.9.0.80 pypi_0 pypi
openjpeg 2.4.0 h4fc8c34_0
openpyxl 3.0.10 py39h2bbff1b_0
openssl 3.0.13 h2bbff1b_0
packaging 23.1 py39haa95532_0 anaconda
pandas 2.2.0 py39h32e6231_0 conda-forge
pandas-stubs 2.1.4.231227 py39haa95532_0
pandocfilters 1.5.0 pyhd3eb1b0_0 anaconda
parso 0.8.3 pyhd3eb1b0_0 anaconda
pathtools 0.1.2 pyhd3eb1b0_1
pickleshare 0.7.5 pyhd3eb1b0_1003 anaconda
pillow 10.0.1 pypi_0 pypi
pip 23.3.1 py39haa95532_0
platformdirs 3.10.0 py39haa95532_0 anaconda
plotly 5.9.0 py39haa95532_0
pluggy 1.4.0 pyhd8ed1ab_0 conda-forge
ply 3.11 py39haa95532_0 anaconda
prometheus_client 0.14.1 py39haa95532_0 anaconda
prompt-toolkit 3.0.36 py39haa95532_0 anaconda
prompt_toolkit 3.0.36 hd3eb1b0_0 anaconda
protobuf 3.20.3 py39hcbf5309_1 conda-forge
psutil 5.9.0 py39h2bbff1b_0 anaconda
pure_eval 0.2.2 pyhd3eb1b0_0 anaconda
pyasn1 0.5.1 pyhd8ed1ab_0 conda-forge
pyasn1-modules 0.3.0 pyhd8ed1ab_0 conda-forge
pyclipper 1.3.0.post5 pypi_0 pypi
pycparser 2.21 pyhd3eb1b0_0 anaconda
pydantic 1.10.12 py39h2bbff1b_1
pygments 2.15.1 py39haa95532_1 anaconda
pyopenssl 23.2.0 py39haa95532_0 anaconda
pyparsing 3.0.9 py39haa95532_0
pypdf 4.0.1 pypi_0 pypi
pyqt 5.15.10 py39hd77b12b_0 anaconda
pyqt5-sip 12.13.0 py39h2bbff1b_0 anaconda
pyreadline3 3.4.1 py39haa95532_0
pysocks 1.7.1 py39haa95532_0 anaconda
pytest 8.0.0 pyhd8ed1ab_0 conda-forge
pytest-subtests 0.11.0 pyhd8ed1ab_0 conda-forge
python 3.9.18 h1aa4202_0
python-dateutil 2.8.2 pyhd3eb1b0_0 anaconda
python-dotenv 1.0.1 pyhd8ed1ab_0 conda-forge
python-fastjsonschema 2.16.2 py39haa95532_0 anaconda
python-flatbuffers 2.0 pyhd3eb1b0_0
python-tzdata 2023.3 pyhd3eb1b0_0
python_abi 3.9 2_cp39 conda-forge
pytorch 2.2.0 py3.9_cpu_0 pytorch
pytorch-mutex 1.0 cpu pytorch
pytz 2023.3.post1 py39haa95532_0 anaconda
pyu2f 0.1.5 pyhd8ed1ab_0 conda-forge
pywin32 305 py39h2bbff1b_0 anaconda
pywinpty 2.0.10 py39h5da7b33_0 anaconda
pyyaml 6.0.1 py39h2bbff1b_0
pyzmq 25.1.0 py39hd77b12b_0 anaconda
qt-main 5.15.2 h879a1e9_9 anaconda
qtconsole 5.5.0 py39haa95532_0 anaconda
qtpy 2.4.1 py39haa95532_0 anaconda
rapidocr-onnxruntime 1.3.11 pypi_0 pypi
referencing 0.30.2 py39haa95532_0 anaconda
requests 2.31.0 py39haa95532_0 anaconda
rpds-py 0.10.6 py39h062c2fa_0 anaconda
rsa 4.9 pyhd8ed1ab_0 conda-forge
scikit-learn 1.3.0 py39h4ed8f06_1
scipy 1.11.4 py39h309d312_0
send2trash 1.8.2 py39haa95532_0 anaconda
sentry-sdk 1.9.0 py39haa95532_0
setproctitle 1.2.2 py39h2bbff1b_1004
setuptools 68.2.2 py39haa95532_0
shapely 2.0.2 pypi_0 pypi
sip 6.7.12 py39hd77b12b_0 anaconda
six 1.16.0 pyhd3eb1b0_1 anaconda
smmap 4.0.0 pyhd3eb1b0_0
sniffio 1.2.0 py39haa95532_1 anaconda
soupsieve 2.5 py39haa95532_0 anaconda
sqlalchemy 2.0.25 pypi_0 pypi
sqlite 3.41.2 h2bbff1b_0
stack_data 0.2.0 pyhd3eb1b0_0 anaconda
stringcase 1.2.0 py_0 conda-forge
sympy 1.12 py39haa95532_0
tbb 2021.8.0 h59b6b97_0
tenacity 8.2.3 pyhd8ed1ab_0 conda-forge
terminado 0.17.1 py39haa95532_0 anaconda
threadpoolctl 2.2.0 pyh0d69192_0
tinycss2 1.2.1 py39haa95532_0 anaconda
tomli 2.0.1 py39haa95532_0 anaconda
tornado 6.3.3 py39h2bbff1b_0 anaconda
tqdm 4.66.1 pypi_0 pypi
traitlets 5.7.1 py39haa95532_0 anaconda
types-pytz 2022.4.0.0 py39haa95532_1
typing-extensions 4.9.0 py39haa95532_1
typing_extensions 4.9.0 py39haa95532_1
typing_inspect 0.9.0 pyhd8ed1ab_0 conda-forge
tzdata 2023d h04d1e81_0
ucrt 10.0.20348.0 haa95532_0
urllib3 1.26.18 py39haa95532_0 anaconda
vc 14.3 hcf57466_18 conda-forge
vc14_runtime 14.38.33130 h82b7239_18 conda-forge
vs2015_runtime 14.38.33130 hcb4865c_18 conda-forge
wandb 0.16.2 pyhd8ed1ab_0 conda-forge
wcwidth 0.2.5 pyhd3eb1b0_0 anaconda
webencodings 0.5.1 py39haa95532_1 anaconda
websocket-client 0.58.0 py39haa95532_4 anaconda
werkzeug 3.0.1 pypi_0 pypi
wheel 0.41.2 py39haa95532_0
widgetsnbextension 4.0.5 py39haa95532_0 anaconda
win_inet_pton 1.1.0 py39haa95532_0 anaconda
winpty 0.4.3 4 anaconda
xz 5.4.2 h8cc25b3_0 anaconda
yaml 0.2.5 he774522_0
yarl 1.7.2 py39hb82d6ee_2 conda-forge
zeromq 4.3.4 hd77b12b_0 anaconda
zipp 3.17.0 py39haa95532_0
zlib 1.2.13 h8cc25b3_0 anaconda
zstd 1.5.5 hd43e919_0 anaconda | Exracting images from PDF Error "pypdf.errors.PdfReadError: Not encrypted file" | https://api.github.com/repos/langchain-ai/langchain/issues/17134/comments | 3 | 2024-02-06T22:17:45Z | 2024-07-14T16:05:47Z | https://github.com/langchain-ai/langchain/issues/17134 | 2,121,762,214 | 17,134 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
import os
import pinecone
from pinecone import Pinecone, ServerlessSpec
from langchain.vectorstores import Pinecone as Pinecone2
from langchain.embeddings.openai import OpenAIEmbeddings

# We initialize pinecone
pinecone_api_key = os.getenv("PINECONE_API_KEY")
pinecone_env = os.getenv("PINECONE_ENV_KEY")
openai_key = os.getenv("OPENAI_API_KEY")
index_name = "langchain"

# We make an object to initialize Pinecone
class PineconeConnected():
    def __init__(self, index_name: str, pinecone_api_key: str, pinecone_env: str, openai_key: str):
        embeddings = OpenAIEmbeddings(openai_api_key=openai_key)
        self.pinecone = pinecone.Pinecone(api_key=pinecone_api_key, host=pinecone_env)
        # VectorStore object with the reference + Pinecone index loaded
        self.db_Pinecone = Pinecone2.from_existing_index(index_name, embeddings)

# We instantiate the object
pc1 = PineconeConnected(index_name, pinecone_api_key, pinecone_env, openai_key)
pc1.pinecone(pinecone_api_key, pinecone_env)
db_Pinecone = pc1.db_Pinecone(index_name, embeddings)
```
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[154], line 22
18 self.pinecone = pinecone.Pinecone(api_key=pinecone_api_key, host=pinecone_env) # VectorStore object with the reference + Pinecone #index loaded
19 self.db_Pinecone = Pinecone2.from_existing_index(index_name, embeddings) # VectorStore object with the reference + Pinecone # index load
---> 22 pc1=PineconeConnected(index_name, pinecone_api_key, pinecone_env ,openai_key)
23 pc1.pinecone(pinecone_api_key, pinecone_env)
24 db_Pinecone = pc1.db_Pinecone(index_name, embeddings)
Cell In[154], line 19, in PineconeConnected.__init__(self, index_name, pinecone_api_key, pinecone_env, openai_key)
17 embeddings = OpenAIEmbeddings(openai_api_key=openai_key)
18 self.pinecone = pinecone.Pinecone(api_key=pinecone_api_key, host=pinecone_env) # VectorStore object with the reference + Pinecone #index loaded
---> 19 self.db_Pinecone = Pinecone2.from_existing_index(index_name, embeddings)
File ~/.local/lib/python3.8/site-packages/langchain/vectorstores/pinecone.py:264, in Pinecone.from_existing_index(cls, index_name, embedding, text_key, namespace)
257 except ImportError:
258 raise ValueError(
259 "Could not import pinecone python package. "
260 "Please install it with `pip install pinecone-client`."
261 )
263 return cls(
--> 264 pinecone.Index(index_name), embedding.embed_query, text_key, namespace
265 )
TypeError: __init__() missing 1 required positional argument: 'host'
### Description
It seems that the `pinecone.Index` function that is called from the `from_existing_index()` function requires a `host` argument, but even when this is supplied, it is never passed on to the `Index` function.
### System Info
I'm on a Windows 11 PC, with Python version 3.8.10 and langchain version 0.0.184. I tried installing the pinecone_community library, but it broke my code, and the langchain_pinecone library could not be found.
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```bash
cd libs/langchain
make docker_tests
```
### Error Message and Stack Trace (if applicable)
```
=> ERROR [dependencies 2/2] RUN /opt/poetry/bin/poetry install --no-interaction --no-ansi --with test 2.8s
------
> [dependencies 2/2] RUN /opt/poetry/bin/poetry install --no-interaction --no-ansi --with test:
0.879 Path /core for langchain-core does not exist
0.881 Path /core for langchain-core does not exist
0.882 Path /community for langchain-community does not exist
0.884 Path /core for langchain-core does not exist
0.885 Path /community for langchain-community does not exist
0.886 Path /core for langchain-core does not exist
0.886 Path /community for langchain-community does not exist
1.038 Creating virtualenv langchain in /app/.venv
1.684 Installing dependencies from lock file
2.453 Path /community for langchain-community does not exist
2.453 Path /core for langchain-core does not exist
2.479
2.479 Path /core for langchain-core does not exist
------
Dockerfile:34
--------------------
32 |
33 | # Install the Poetry dependencies (this layer will be cached as long as the dependencies don't change)
34 | >>> RUN $POETRY_HOME/bin/poetry install --no-interaction --no-ansi --with test
35 |
36 | # Use a multi-stage build to run tests
--------------------
```
### Description
`make docker_tests` fails because poetry cannot find the `community` and `core` packages. Most likely related to https://github.com/langchain-ai/langchain/discussions/13823 and https://github.com/langchain-ai/langchain/discussions/14243
### System Info
Python Version: `3.9.18` | langchain: broken docker_tests target | https://api.github.com/repos/langchain-ai/langchain/issues/17111/comments | 1 | 2024-02-06T15:24:36Z | 2024-05-14T16:08:50Z | https://github.com/langchain-ai/langchain/issues/17111 | 2,121,053,514 | 17,111 |
[
"langchain-ai",
"langchain"
] |
#11740 new openLLM remote client is not working with langchain.llms.OpenLLM
This raises Attribute Error
```AttributeError: 'GenerationOutput' object has no attribute 'responses'```
- [x] I added a very descriptive title to this issue.
- [x] I searched the LangChain documentation with the integrated search.
- [x] I used the GitHub search to find a similar question and didn't find it.
- [x] I am sure that this is a bug in LangChain rather than my code.
There is no `responses` key in the `GenerationOutput` schema in the OpenLLM GitHub repository; instead, it has an `outputs` field.

https://github.com/bentoml/OpenLLM/blob/8ffab93d395c9030232b52ab00ed36cb713804e3/openllm-core/src/openllm_core/_schemas.py#L118
_Originally posted by @97k in https://github.com/langchain-ai/langchain/issues/11740#issuecomment-1929779365_
| OpenLLM: GenerationOutput object has no attribute 'responses' | https://api.github.com/repos/langchain-ai/langchain/issues/17108/comments | 5 | 2024-02-06T14:19:16Z | 2024-02-22T11:19:05Z | https://github.com/langchain-ai/langchain/issues/17108 | 2,120,899,339 | 17,108 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [x] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores.redis import Redis
import os
embeddings = HuggingFaceEmbeddings(
model_name="sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
    model_kwargs={'device': 'cpu'},
)
metadata = [
{
"user": "john",
"age": 18,
"job": "engineer",
"credit_score": "high",
},
{
"user": "derrick",
"age": 45,
"job": "doctor",
"credit_score": "low",
},
{
"user": "nancy",
"age": 94,
"job": "doctor",
"credit_score": "high",
},
{
"user": "tyler",
"age": 100,
"job": "engineer",
"credit_score": "high",
},
{
"user": "joe",
"age": 35,
"job": "dentist",
"credit_score": "medium",
},
]
texts = ["foo", "foo", "foo", "bar", "bar"]
rds = Redis.from_texts(
texts,
embeddings,
metadatas=metadata,
redis_url="redis://localhost:6379",
index_name="users",
)
results = rds.similarity_search("foo")
```
### Error Message and Stack Trace (if applicable)
```python
ResponseError: Error parsing vector similarity query: query vector blob size (1536) does not match index's expected size (6144).
```
### Description
I was able to successfully use Langchain and Redis vector storage with OpenAIEmbeddings, following the documentation example. However, when I tried the same basic example with different types of embeddings, it didn't work. It appears that Langchain's Redis vector store is only compatible with OpenAIEmbeddings.
### System Info
langchain==0.1.4
langchain-community==0.0.15
langchain-core==0.1.16 | Redis vector store using HuggingFaceEmbeddings | https://api.github.com/repos/langchain-ai/langchain/issues/17107/comments | 1 | 2024-02-06T14:07:20Z | 2024-05-14T16:08:46Z | https://github.com/langchain-ai/langchain/issues/17107 | 2,120,871,807 | 17,107 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain_experimental.text_splitter import SemanticChunker
...
text_splitter=SemanticChunker(embeddings)
docs = text_splitter.split_documents(documents)
-> IndexError: index -1 is out of bounds for axis 0 with size 0
```
### Error Message and Stack Trace (if applicable)
File "/usr/local/lib/python3.10/dist-packages/langchain_experimental/text_splitter.py", line 138, in create_documents
for chunk in self.split_text(text):
File "/usr/local/lib/python3.10/dist-packages/langchain_experimental/text_splitter.py", line 103, in split_text
breakpoint_distance_threshold = np.percentile(
File "/usr/local/lib/python3.10/dist-packages/numpy/lib/function_base.py", line 4283, in percentile
return _quantile_unchecked(
File "/usr/local/lib/python3.10/dist-packages/numpy/lib/function_base.py", line 4555, in _quantile_unchecked
return _ureduce(a,
File "/usr/local/lib/python3.10/dist-packages/numpy/lib/function_base.py", line 3823, in _ureduce
r = func(a, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/numpy/lib/function_base.py", line 4721, in _quantile_ureduce_func
result = _quantile(arr,
File "/usr/local/lib/python3.10/dist-packages/numpy/lib/function_base.py", line 4830, in _quantile
slices_having_nans = np.isnan(arr[-1, ...])
IndexError: index -1 is out of bounds for axis 0 with size 0
### Description
I'm trying the SemanticChunker and noticed that it fails for documents that can't be split into multiple sentences.
I guess using the SemanticChunker for such short documents does not really make sense; however, it should not fail with an obscure IndexError.
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Thu Oct 5 21:02:42 UTC 2023
> Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.1.18
> langchain: 0.1.5
> langchain_community: 0.0.17
> langchain_experimental: 0.0.50
> langchain_openai: 0.0.5
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
| SemanticChunker error with single sentence documents | https://api.github.com/repos/langchain-ai/langchain/issues/17106/comments | 1 | 2024-02-06T13:43:40Z | 2024-05-14T16:08:40Z | https://github.com/langchain-ai/langchain/issues/17106 | 2,120,816,919 | 17,106 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain.callbacks.base import BaseCallbackHandler
def get_callback(logging_path):
class CustomCallback(BaseCallbackHandler):
def on_llm_start(self, serialized, prompts, **kwargs):
with open(logging_path,'a') as f:
f.write(f"LLM START: {prompts}\n\n")
def on_chat_model_start(self, serialized, messages, **kwargs):
with open(logging_path,'a') as f:
f.write(f"CHAT START: {messages}\n\n")
def on_llm_end(self, response, **kwargs):
with open(logging_path,'a') as f:
f.write(f"LLM END: {response}\n\n")
return CustomCallback()
callback_obj = get_callback("logger.txt")
sql_db = get_database(sql_database_path)
db_chain = SQLDatabaseChain.from_llm(mistral, sql_db, verbose=True,callbacks = [callback_obj])
db_chain.invoke({
"query": "What is the best time of Lance Larson in men's 100 meter butterfly competition?"
})
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
The custom callback that I am passing when creating the SQLDatabaseChain instance is not executing.
### System Info
langchain==0.1.2
langchain-community==0.0.14
langchain-core==0.1.14
langchain-experimental==0.0.32
langchainhub==0.1.14
platform linux
python 3.9.13 | Callbacks are not working with SQLDatabaseChain | https://api.github.com/repos/langchain-ai/langchain/issues/17103/comments | 1 | 2024-02-06T12:11:48Z | 2024-02-08T03:29:25Z | https://github.com/langchain-ai/langchain/issues/17103 | 2,120,631,069 | 17,103 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(temperature=0,
openai_api_key = my_api_key,
model_name="gpt-4",
model_kwargs = {"logprobs": True,
"top_logprobs":3})
llm.invoke("Please categorize this text below into positive, negative or neutral: I had a good day")
```
```
# OUTPUT
AIMessage(content='Positive')
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
The `logprobs` argument is again settable in OpenAI according to this [official source](https://cookbook.openai.com/examples/using_logprobs) (OpenAI docs):
However, when I try to use it via LangChain, it does not exist in the output despite explicitly being set to `True` in `model_kwargs`.
If I put the `logprobs` parameter outside `model_kwargs`, it does show a warning, which gives me confidence that the place is right.
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22635
> Python Version: 3.11.1 (tags/v3.11.1:a7a450f, Dec 6 2022, 19:58:39) [MSC v.1934 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.18
> langchain: 0.1.5
> langchain_community: 0.0.17
> langchain_google_genai: 0.0.3
> langchain_openai: 0.0.5
> OpenAI: 1.11.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
EDIT: added openai package version. | ChatOpenai logprobs not reported despite being set to True in `model_kwargs` | https://api.github.com/repos/langchain-ai/langchain/issues/17101/comments | 8 | 2024-02-06T11:27:50Z | 2024-05-20T16:08:45Z | https://github.com/langchain-ai/langchain/issues/17101 | 2,120,554,426 | 17,101 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
The following code: https://github.com/langchain-ai/langchain/blob/f027696b5f068bacad96a9356ae196f5de240007/libs/community/langchain_community/vectorstores/milvus.py#L564-L573 assigns each key of `doc.metadata` to the corresponding `collection.schema` column (including the text and embedding columns). But in the preceding code, https://github.com/langchain-ai/langchain/blob/f027696b5f068bacad96a9356ae196f5de240007/libs/community/langchain_community/vectorstores/milvus.py#L550C1-L554C10, those two keys have already been assigned. Why assign them again?
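For illustration, here is a toy sketch (field names hypothetical, no Milvus dependency) of how per-document metadata can be appended column-wise into an insert dict; the text and vector columns only need to be filled once:

```python
texts = ["foo", "bar"]
vectors = [[0.1, 0.2], [0.3, 0.4]]
metadatas = [{"source": "a.txt"}, {"source": "b.txt"}]

# Fill the fixed columns once...
insert_dict = {"text": texts, "vector": vectors}

# ...then append each metadata value to its column, so every column
# ends up with exactly one entry per document.
for meta in metadatas:
    for key, value in meta.items():
        insert_dict.setdefault(key, []).append(value)

print(insert_dict)
```

If the text/vector columns were re-assigned inside the metadata loop, their lengths would no longer match the metadata columns, which is the misalignment described below.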
### Error Message and Stack Trace (if applicable)
The lengths of the value lists built from the metadata keys do not align with the lengths of the corresponding Milvus collection columns.
### Description

### System Info

| Why assign the value of milvus doc.matadatas to insert dict several times? | https://api.github.com/repos/langchain-ai/langchain/issues/17095/comments | 4 | 2024-02-06T08:43:05Z | 2024-05-14T16:08:30Z | https://github.com/langchain-ai/langchain/issues/17095 | 2,120,246,560 | 17,095 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
The following code produces duplicated examples of `sunny`. The output is correct when using FAISS.
```python
from langchain_google_vertexai import VertexAIEmbeddings
from langchain_community.vectorstores import Chroma
from langchain.prompts import FewShotPromptTemplate, PromptTemplate
from langchain.prompts.example_selector import SemanticSimilarityExampleSelector

# example_prompt was not shown in the original snippet; this is the
# standard input/output template assumed here.
example_prompt = PromptTemplate(
    input_variables=["input", "output"],
    template="Input: {input}\nOutput: {output}",
)
examples = [
{"input": "happy", "output": "sad"},
{"input": "tall", "output": "short"},
{"input": "energetic", "output": "lethargic"},
{"input": "sunny", "output": "gloomy"},
{"input": "slow", "output": "fast"},
{"input": "windy", "output": "calm"},
]
example_selector3 = SemanticSimilarityExampleSelector.from_examples(
examples,
VertexAIEmbeddings("textembedding-gecko@001"),
Chroma,
k=2,
)
similar_prompt = FewShotPromptTemplate(
example_selector=example_selector3,
example_prompt=example_prompt,
prefix="Give the antonym of every input",
suffix="Input: {adjective}\nOutput:",
input_variables=["adjective"],
)
print(similar_prompt.format(adjective="rainny"))
```
Output looks like the below
```
Give the antonym of every input
Input: sunny
Output: gloomy
Input: sunny
Output: gloomy
Input: rainny
Output:
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Below is the output using FAISS. I do not expect the outputs to be identical, but at least they should not contain duplicates. If we set k=10, it duplicates the example and returns more examples than exist in the original list.
```
Give the antonym of every input
Input: sunny
Output: gloomy
Input: windy
Output: calm
Input: rainny
Output:
```
Below is the output when k=10
```
Give the antonym of every input
Input: sunny
Output: gloomy
Input: sunny
Output: gloomy
Input: sunny
Output: gloomy
Input: sunny
Output: gloomy
Input: sunny
Output: gloomy
Input: sunny
Output: gloomy
Input: sunny
Output: gloomy
Input: sunny
Output: gloomy
Input: sunny
Output: gloomy
Input: windy
Output: calm
Input: rainny
Output:
```
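One likely cause (an assumption): re-running `from_examples` against the same persisted Chroma collection inserts the examples again, so the store accumulates duplicates. As a stopgap, the selected examples can be de-duplicated after retrieval; a minimal sketch:

```python
def dedupe(examples):
    """Drop exact-duplicate example dicts while preserving order."""
    seen, unique = set(), []
    for ex in examples:
        key = tuple(sorted(ex.items()))
        if key not in seen:
            seen.add(key)
            unique.append(ex)
    return unique

# What the selector returned above, with the duplicate:
selected = [
    {"input": "sunny", "output": "gloomy"},
    {"input": "sunny", "output": "gloomy"},
    {"input": "windy", "output": "calm"},
]
print(dedupe(selected))
```

This only hides the symptom; the vector store itself still holds duplicate embeddings until the collection is recreated.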
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Sat Nov 18 15:31:17 UTC 2023
> Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.1.18
> langchain: 0.1.5
> langchain_community: 0.0.17
> langchain_google_vertexai: 0.0.3
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | SemanticSimilarityExampleSelector with Chroma return duplicated examples | https://api.github.com/repos/langchain-ai/langchain/issues/17093/comments | 1 | 2024-02-06T08:15:50Z | 2024-05-14T16:08:25Z | https://github.com/langchain-ai/langchain/issues/17093 | 2,120,203,051 | 17,093 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Below is the code:
```
%%time
# query = 'how many are injured and dead in christchurch Mosque?'
pdf_file = '/content/documents/Neha Wadikar Resume.pdf'
# Define your prompt template
prompt_template = """Use the following pieces of information to answer the user's question.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Context: {context}
Question: {question}
Only return the helpful answer below and nothing else. If no context, then no answer.
Helpful Answer:"""
loader = PyPDFLoader(pdf_file)
document = loader.load()
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=1200,
chunk_overlap=300)
# Split the document into chunks
texts = text_splitter.split_documents(document)
# Create a vector database for the document
# vectorstore = FAISS.from_documents(texts, embeddings)
vectorstore = Chroma.from_documents(texts, embeddings)
llm = OpenAI(temperature=0)
# Create a retriever for the vector database
document_content_description = "Description of research papers, a resume and a research proposal"
metadata_field_info = [
AttributeInfo(
name="year",
description="The year the document or event occurred, reflecting release, publication, or professional milestone",
type="integer",
),
AttributeInfo(
name="role",
description="The professional role or job title of the individual described in the document",
type="string",
),
AttributeInfo(
name="sector",
description="The industry or sector focus of the professional profile or study",
type="string",
),
AttributeInfo(
name="skills",
description="A list of skills or expertise areas highlighted in the professional profile or research study",
type="string",
),
AttributeInfo(
name="achievements",
description="Key achievements or outcomes described within the document, relevant to professional profiles or research findings",
type="string",
),
AttributeInfo(
name="education",
description="Educational background information, including institutions attended",
type="string",
),
AttributeInfo(
name="volunteer_work",
description="Details of any volunteer work undertaken by the individual in the professional profile",
type="string",
),
AttributeInfo(
name="researcher",
description="The name of the researcher or author of a study",
type="string",
),
AttributeInfo(
name="supervisor",
description="The supervisor or advisor for a research project",
type="string",
),
AttributeInfo(
name="institution",
description="The institution or organization associated with the document, such as a university or research center",
type="string",
),
AttributeInfo(
name="focus",
description="The main focus or subjects of study, research, or professional expertise",
type="string",
),
AttributeInfo(
name="challenges",
description="Specific challenges addressed or faced in the context of the document",
type="string",
),
AttributeInfo(
name="proposed_solutions",
description="Solutions proposed or implemented as described within the document",
type="string",
),
AttributeInfo(
name="journal",
description="The name of the journal where a study was published",
type="string",
),
AttributeInfo(
name="authors",
description="The authors of a research study or paper",
type="string",
),
AttributeInfo(
name="keywords",
description="Key terms or concepts central to the document's content",
type="string",
),
AttributeInfo(
name="results",
description="The main results or findings of a research study, including statistical outcomes",
type="string",
),
AttributeInfo(
name="approach",
description="The approach or methodology adopted in a research study or professional project",
type="string",
),
]
retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
document_content_description,
metadata_field_info,
use_original_query=False,
verbose=True
)
# print(retriever.get_relevant_documents("What is the title of the proposal"))
# print(retriever.invoke("What are technical skills in resume"))
# logging.basicConfig(level=logging.INFO)
# Create a chain to answer questions
qa_chain = RetrievalQA.from_chain_type(llm=llm,
chain_type="stuff",
retriever=retriever,
return_source_documents=True)
# Format the prompt using the template
context = ""
question = "what's the person name, the travel experience, which countries she's been to? If yes, in which year she's been to and did she attend any conferences??"
formatted_prompt = prompt_template.format(context=context, question=question)
# Pass the formatted prompt to the RetrievalQA function
llm_response = qa_chain(formatted_prompt)
process_llm_response(llm_response)
```
If you look at the code above, I haven't defined any AttributeInfo related to travel in metadata_field_info, yet when I asked the model a travel-related question it returned "I don't know". So can you explain how exactly metadata_field_info works? And is it mandatory to create a large metadata_field_info covering all aspects of the PDF files, so that the model uses the AttributeInfo entries as a reference and returns answers from the PDF?
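Not an authoritative answer, but roughly: metadata_field_info is rendered into the prompt so the LLM can translate your question into a structured filter over those attributes only; attributes you never declare (like travel) simply are not filterable, and such questions must be answered from the retrieved page content. A toy sketch of that constraint (purely illustrative, not LangChain internals):

```python
# "declared" mirrors a few names from the metadata_field_info above.
declared = {"year", "role", "sector", "skills"}

def validate_filter(filter_):
    """Only declared attributes can appear in a self-query filter."""
    unknown = set(filter_) - declared
    if unknown:
        raise ValueError(f"undeclared attributes: {sorted(unknown)}")
    return filter_

print(validate_filter({"year": 2023}))   # fine: 'year' was declared
try:
    validate_filter({"travel": "Japan"})  # never declared, so not filterable
except ValueError as e:
    print(e)
```

So the field list does not need to cover every aspect of the documents; it only needs to cover the attributes you want to be able to filter on.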
### Idea or request for content:
_No response_ | how metadata_fields_info is helpful in SelfQueryRetrieval? | https://api.github.com/repos/langchain-ai/langchain/issues/17090/comments | 5 | 2024-02-06T06:27:28Z | 2024-02-14T03:34:53Z | https://github.com/langchain-ai/langchain/issues/17090 | 2,120,059,025 | 17,090 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Please write docs for SQLRecordManager
### Idea or request for content:
- Supported databases
- List of methods
- Description of method parameters | DOC: SQLRecordManager documentation does not exist | https://api.github.com/repos/langchain-ai/langchain/issues/17088/comments | 9 | 2024-02-06T05:04:15Z | 2024-08-09T16:07:08Z | https://github.com/langchain-ai/langchain/issues/17088 | 2,119,969,318 | 17,088 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
response = ""
async for token in chain.astream(input=input):
yield token.content
response += token.content
```
### Error Message and Stack Trace (if applicable)
TypeError: NetworkError when attempting to fetch resource.
### Description
I am using the RunnableMap to create a chain for my application related to RAG. The chain is defined as follow:
```python
context = RunnableMap(
{
"context": (
retriever_chain
),
"question": itemgetter("question"),
"chat_history": itemgetter("chat_history"),
}
)
prompt = ChatPromptTemplate.from_messages(
messages=[
("system", SYSTEM_ANSWER_QUESTION_TEMPLATE),
MessagesPlaceholder(variable_name="chat_history"),
("human", "{question}"),
]
)
response_synthesizer = prompt | llm
response_chain = context | response_synthesizer
```
I have create an endpoint with fastapi for stream response for this chain as follow:
```python
@router.post("/workspace/{workspace_id}/docschat/chat")
async def chat(
data: ChatRequest,
) -> StreamingResponse:
try:
### Few codes
async def generate_response():
input = {"question": data.message,
"chat_history": #...
}
response = ""
async for token in chain.astream(input=input):
yield token.content
response += token.content
return StreamingResponse(generate_response(), media_type="text/event-stream")
except Exception as e:
return JSONResponse(status_code=500, content={"message": "Internal server error", "error": str(e)})
```
When I hit the endpoint for the first time, I get successful streaming response. However, successive response freezes the whole api. I am getting "TypeError: NetworkError when attempting to fetch resource." when I use swagger while the api freezes. I have a doubt the async operation from first response did not complete hence causing error in the succesive chain trigger. How do I solve this issue?
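One possible cause (an assumption, not a confirmed diagnosis) is that the previous request's async generator is never finalized when a client disconnects mid-stream. Wrapping the loop so the stream is always closed is a cheap safeguard, sketched here with a stand-in for chain.astream so it runs without an LLM:

```python
import asyncio

async def fake_astream(tokens):
    """Stand-in for chain.astream(...) so the pattern runs standalone."""
    for t in tokens:
        await asyncio.sleep(0)
        yield t

async def generate_response(stream):
    try:
        async for token in stream:
            yield token
    finally:
        # Runs even if the consumer stops early (client disconnect),
        # so the underlying stream cannot block the next request.
        await stream.aclose()

async def main():
    out = []
    async for tok in generate_response(fake_astream(["a", "b", "c"])):
        out.append(tok)
    return out

print(asyncio.run(main()))  # -> ['a', 'b', 'c']
```

In the FastAPI handler the same try/finally would wrap the `chain.astream` loop inside `generate_response`.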
### System Info
"pip freeze | grep langchain"
langchain 0.1.0
langchain-community 0.0.12
langchain-core 0.1.17
langchain-experimental 0.0.49
langchain-openai 0.0.5
langchainhub 0.1.14
langchainplus-sdk 0.0.20
platform:
linux
python --version
Python 3.10.10 | NetworkError when attempting to fetch resource with chain.astream | https://api.github.com/repos/langchain-ai/langchain/issues/17087/comments | 1 | 2024-02-06T04:58:16Z | 2024-02-07T03:51:47Z | https://github.com/langchain-ai/langchain/issues/17087 | 2,119,963,524 | 17,087 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
return_op = SQLDatabaseChain.from_llm(
llm,
db_connection,
prompt=few_shot_prompt,
use_query_checker=True,
verbose=True,
return_intermediate_steps=True,
)
### Error Message and Stack Trace (if applicable)
None
### Description
Is there any parameter in SQLDatabaseChain that makes the intermediate step editable, i.e. can the SQL code generated by the LLM be edited before the SQL command is executed against the database?
SQLDatabaseChain.from_llm(
llm,
db_connection,
prompt=few_shot_prompt,
use_query_checker=True,
verbose=True,
return_intermediate_steps=True,
)
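As far as I know there is no built-in edit hook, but since `return_intermediate_steps=True` already exposes the generated SQL, one option is to generate first, let the user edit, then execute yourself. A toy sketch of that pattern with stand-in functions (nothing here is LangChain API):

```python
def review_and_execute(generated_sql, execute_fn, edit_fn=None):
    """Let a caller-supplied edit_fn rewrite the LLM-generated SQL
    before it is actually executed against the database."""
    sql = edit_fn(generated_sql) if edit_fn else generated_sql
    return execute_fn(sql)

# Stand-ins for the LLM output and the database call:
generated = "SELECT * FROM orders"
executed = []
result = review_and_execute(
    generated,
    execute_fn=lambda sql: executed.append(sql) or "ok",
    edit_fn=lambda sql: sql + " LIMIT 100",
)
print(executed)  # ['SELECT * FROM orders LIMIT 100']
```

Here `edit_fn` could just as well prompt a human for approval or corrections before the query runs.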
### System Info
boto3==1.34.29
chromadb==0.4.22
huggingface-hub==0.20.3
langchain==0.1.4
langchain-experimental==0.0.49
pip_audit==2.6.0
pre-commit==3.6.0
pylint==2.17.4
pylint_quotes==0.2.3
pylint_pydantic==0.3.2
python-dotenv==1.0.1
sentence_transformers==2.3.0
snowflake-connector-python==3.7.0
snowflake-sqlalchemy==1.5.1
SQLAlchemy==2.0.25
streamlit==1.30.0
watchdog==3.0.0 | Editable SQL in SQLDatabaseChain | https://api.github.com/repos/langchain-ai/langchain/issues/17071/comments | 3 | 2024-02-05T22:47:34Z | 2024-05-15T16:07:39Z | https://github.com/langchain-ai/langchain/issues/17071 | 2,119,604,240 | 17,071 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
The current code does not handle top-level conditions like the example below:
```python
filter = {"or": [{"rating": {"gte": 8.5}}, {"genre": "animated"}]}
retriever = vectorstore.as_retriever(search_kwargs={"filter": filter})
```
The current implementation for this example will not find results.
The current implementation does not support GTE and LTE comparators in pgvector.
### Error Message and Stack Trace (if applicable)
_No response_
### Description
* I am trying to filter documents based on metadata, including top level conditions with 'and', 'or' operators
* I am trying to create more comprehensive metadata filtering with pgvector and to create the base for pgvector self querying.
* I am trying to use GTE and LTE comparators in filter clause
* Code for pgvector can be refactored, using already defined comparators and operators from langchain.chains.query_constructor.ir
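To illustrate the requested semantics, here is a toy, dependency-free evaluator for that filter grammar (top-level "and"/"or" plus "gte"/"lte"); it is a sketch of the desired behavior, not the pgvector implementation:

```python
def passes(meta, f):
    """Toy evaluator for the filter grammar the issue asks for."""
    if "or" in f:
        return any(passes(meta, sub) for sub in f["or"])
    if "and" in f:
        return all(passes(meta, sub) for sub in f["and"])
    for key, cond in f.items():
        v = meta.get(key)
        if isinstance(cond, dict):
            if "gte" in cond and not (v is not None and v >= cond["gte"]):
                return False
            if "lte" in cond and not (v is not None and v <= cond["lte"]):
                return False
        elif v != cond:
            return False
    return True

f = {"or": [{"rating": {"gte": 8.5}}, {"genre": "animated"}]}
print(passes({"rating": 9.0}, f))     # True
print(passes({"genre": "drama"}, f))  # False
```

The current pgvector filter translation would need to emit the equivalent SQL (OR/AND over JSONB metadata comparisons) to support examples like the one above.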
### System Info
langchain==0.1.5
langchain-community==0.0.17
langchain-core==0.1.18
Python 3.10.12 | PGVector filtering improvements to support self-querying | https://api.github.com/repos/langchain-ai/langchain/issues/17064/comments | 1 | 2024-02-05T21:36:46Z | 2024-05-13T16:10:33Z | https://github.com/langchain-ai/langchain/issues/17064 | 2,119,498,678 | 17,064 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
The following code fails if the splitter is left with just one sentence.
```python
from langchain_experimental.text_splitter import SemanticChunker
from langchain.schema import Document
embeddings_model = OpenAIEmbeddings()
docs = [Document(page_content='....')]
splitter = SemanticChunker(embeddings_model)
split_docs = splitter.split_documents(docs)
```
### Error Message and Stack Trace (if applicable)
```
File "/Users/salamanderxing/Documents/gcp-chatbot/chatbot/scraper/build_db.py", line 24, in scrape_and_split_urls
split_docs = tuple(splitter.split_documents(docs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/langchain_experimental/text_splitter.py", line 159, in split_documents
return self.create_documents(texts, metadatas=metadatas)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/langchain_experimental/text_splitter.py", line 144, in create_documents
for chunk in self.split_text(text):
^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/langchain_experimental/text_splitter.py", line 103, in split_text
breakpoint_distance_threshold = np.percentile(
^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/numpy/lib/function_base.py", line 4283, in percentile
return _quantile_unchecked(
^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/numpy/lib/function_base.py", line 4555, in _quantile_unchecked
return _ureduce(a,
^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/numpy/lib/function_base.py", line 3823, in _ureduce
r = func(a, **kwargs)
^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/numpy/lib/function_base.py", line 4721, in _quantile_ureduce_func
result = _quantile(arr,
^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/numpy/lib/function_base.py", line 4830, in _quantile
slices_having_nans = np.isnan(arr[-1, ...])
~~~^^^^^^^^^
IndexError: index -1 is out of bounds for axis 0 with size 0
```
### Description
I'm simply using this text splitter, and sometimes it raises this error. I looked into it: it happens when the splitter is left with only one sentence and tries to compute np.percentile over an empty list of distances. I made a PR for this. The issue is unfortunately hard to replicate, and I could not provide the text that triggered it. However, looking at the source code, it is clear that a bug occurs when the splitter is left with only one sentence:
From https://github.com/langchain-ai/langchain/blob/af5ae24af2b32e962adf23d78e59ed505d17fff7/libs/experimental/langchain_experimental/text_splitter.py#L84
```python
def split_text(self, text: str) -> List[str]:
"""Split text into multiple components."""
# Splitting the essay on '.', '?', and '!'
single_sentences_list = re.split(r"(?<=[.?!])\s+", text)
sentences = [
{"sentence": x, "index": i} for i, x in enumerate(single_sentences_list)
]
sentences = combine_sentences(sentences)
embeddings = self.embeddings.embed_documents(
[x["combined_sentence"] for x in sentences]
)
for i, sentence in enumerate(sentences):
sentence["combined_sentence_embedding"] = embeddings[i]
distances, sentences = calculate_cosine_distances(sentences)
start_index = 0
# Create a list to hold the grouped sentences
chunks = []
breakpoint_percentile_threshold = 95
breakpoint_distance_threshold = np.percentile(
distances, breakpoint_percentile_threshold
) # If you want more chunks, lower the percentile cutoff
indices_above_thresh = [
i for i, x in enumerate(distances) if x > breakpoint_distance_threshold
] # The indices of those breakpoints on your list
# Iterate through the breakpoints to slice the sentences
for index in indices_above_thresh:
# The end index is the current breakpoint
end_index = index
# Slice the sentence_dicts from the current start index to the end index
group = sentences[start_index : end_index + 1]
combined_text = " ".join([d["sentence"] for d in group])
chunks.append(combined_text)
# Update the start index for the next group
start_index = index + 1
# The last group, if any sentences remain
if start_index < len(sentences):
combined_text = " ".join([d["sentence"] for d in sentences[start_index:]])
chunks.append(combined_text)
return chunks
```
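A minimal sketch of a guard (not the exact patch in the PR): when only one sentence remains there are no inter-sentence distances, so the percentile step should be skipped and the whole text returned as a single chunk:

```python
import re

def split_or_fallback(text, distances):
    """distances stands in for the cosine distances between adjacent
    sentences; with a single sentence that list is empty."""
    sentences = re.split(r"(?<=[.?!])\s+", text)
    if len(sentences) <= 1 or not distances:
        # Nothing to take a percentile of: avoid the IndexError by
        # returning the whole text as one chunk.
        return [text]
    # Crude stand-in for np.percentile(distances, 95):
    threshold = sorted(distances)[int(0.95 * (len(distances) - 1))]
    return [f"(would split at distances above {threshold})"]

print(split_or_fallback("Only one sentence here.", []))
# -> ['Only one sentence here.']
```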
### System Info
langchain==0.1.5
langchain-community==0.0.17
langchain-core==0.1.18
langchain-experimental==0.0.50
MacOS 14.3
Python 3.11.7 | IndexError: index -1 is out of bounds for axis 0 with size 0 in langchain_experimental | https://api.github.com/repos/langchain-ai/langchain/issues/17060/comments | 1 | 2024-02-05T21:23:13Z | 2024-02-06T19:53:11Z | https://github.com/langchain-ai/langchain/issues/17060 | 2,119,479,277 | 17,060 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
import asyncio
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.vectorstores.surrealdb import SurrealDBStore
from langchain_openai import OpenAIEmbeddings
async def main():
text = "Here is some sample text with the name test"
texts = RecursiveCharacterTextSplitter(chunk_size=3, chunk_overlap=0).split_text(
text
)
store = await SurrealDBStore.afrom_texts(
texts=texts,
# Can replace with other embeddings
# otherwise OPENAI_API_KEY required
embedding=OpenAIEmbeddings(),
dburl="ws://localhost:8000/rpc",
db_user="root",
db_pass="root",
)
retriever = store.as_retriever()
# Throws TypeError: 'NoneType' object is not a mapping
docs = await retriever.aget_relevant_documents("What is the name of this text?")
print(docs)
asyncio.run(main())
```
### Error Message and Stack Trace (if applicable)
```sh
Traceback (most recent call last):
File "/workspaces/core-service/core_service/recreate_issue.py", line 32, in <module>
asyncio.run(main())
File "/usr/local/lib/python3.12/asyncio/runners.py", line 194, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/asyncio/base_events.py", line 684, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/workspaces/core-service/core_service/recreate_issue.py", line 28, in main
docs = await retriever.aget_relevant_documents("What is the name of this text?")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/vscode/.cache/pypoetry/virtualenvs/core-service-Td9uKyt_-py3.12/lib/python3.12/site-packages/langchain_core/retrievers.py", line 280, in aget_relevant_documents
raise e
File "/home/vscode/.cache/pypoetry/virtualenvs/core-service-Td9uKyt_-py3.12/lib/python3.12/site-packages/langchain_core/retrievers.py", line 273, in aget_relevant_documents
result = await self._aget_relevant_documents(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/vscode/.cache/pypoetry/virtualenvs/core-service-Td9uKyt_-py3.12/lib/python3.12/site-packages/langchain_core/vectorstores.py", line 674, in _aget_relevant_documents
docs = await self.vectorstore.asimilarity_search(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/vscode/.cache/pypoetry/virtualenvs/core-service-Td9uKyt_-py3.12/lib/python3.12/site-packages/langchain_community/vectorstores/surrealdb.py", line 380, in asimilarity_search
return await self.asimilarity_search_by_vector(query_embedding, k, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/vscode/.cache/pypoetry/virtualenvs/core-service-Td9uKyt_-py3.12/lib/python3.12/site-packages/langchain_community/vectorstores/surrealdb.py", line 343, in asimilarity_search_by_vector
for document, _ in await self._asimilarity_search_by_vector_with_score(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/vscode/.cache/pypoetry/virtualenvs/core-service-Td9uKyt_-py3.12/lib/python3.12/site-packages/langchain_community/vectorstores/surrealdb.py", line 236, in _asimilarity_search_by_vector_with_score
metadata={"id": result["id"], **result["metadata"]},
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: 'NoneType' object is not a mapping
```
### Description
I am trying to use the `langchain_community.vectorstores.surrealdb` module. The `metadatas` parameter of `SurrealDBStore.afrom_texts` is optional in the typing for this class, but when the store is used as a retriever it errors unless metadata is provided.
This happens because a `None` check is missing here: the code tries to unpack `None` (the default when no metadatas are supplied) as a mapping:
https://github.com/langchain-ai/langchain/blob/75b6fa113462fd5736fba830ada5a4c886cf4ad5/libs/community/langchain_community/vectorstores/surrealdb.py#L236
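The fix is a one-line guard: fall back to an empty mapping before unpacking. A minimal reproduction and the guarded version (plain dicts, no SurrealDB needed):

```python
result = {"id": "documents:abc", "metadata": None}

# Buggy pattern: {**result["metadata"]} raises TypeError when
# metadata is None (the default when no metadatas were supplied).
try:
    metadata = {"id": result["id"], **result["metadata"]}
except TypeError as e:
    print("reproduced:", e)  # 'NoneType' object is not a mapping

# Guarded version: fall back to an empty mapping.
metadata = {"id": result["id"], **(result["metadata"] or {})}
print(metadata)  # {'id': 'documents:abc'}
```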
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Thu Oct 5 21:02:42 UTC 2023
> Python Version: 3.12.1 (main, Dec 19 2023, 20:23:36) [GCC 10.2.1 20210110]
Package Information
-------------------
> langchain_core: 0.1.18
> langchain: 0.1.3
> langchain_community: 0.0.15
> langchain_openai: 0.0.5 | Using SurrealDB as a vector store with no metadatas throws a TypeError | https://api.github.com/repos/langchain-ai/langchain/issues/17057/comments | 1 | 2024-02-05T20:29:47Z | 2024-04-01T00:45:15Z | https://github.com/langchain-ai/langchain/issues/17057 | 2,119,403,982 | 17,057 |