issue_owner_repo listlengths 2 2 | issue_body stringlengths 0 261k ⌀ | issue_title stringlengths 1 925 | issue_comments_url stringlengths 56 81 | issue_comments_count int64 0 2.5k | issue_created_at stringlengths 20 20 | issue_updated_at stringlengths 20 20 | issue_html_url stringlengths 37 62 | issue_github_id int64 387k 2.46B | issue_number int64 1 127k |
|---|---|---|---|---|---|---|---|---|---|
[
"langchain-ai",
"langchain"
] | ### Feature request
Obsidian markdown documents frequently have additional metadata beyond what is in the frontmatter: tags within the document, and (for many users) dataview plugin values.
Add support for this.
### Motivation
Surfacing tags and dataview fields would unlock more capabilities for self-querying Obsidian data (https://python.langchain.com/docs/modules/data_connection/retrievers/self_query/chroma_self_query)
### Your contribution
I plan to make a PR for this. | Add support for tags and dataview fields to ObsidianLoader | https://api.github.com/repos/langchain-ai/langchain/issues/9800/comments | 2 | 2023-08-26T16:02:26Z | 2023-12-02T16:05:07Z | https://github.com/langchain-ai/langchain/issues/9800 | 1,868,155,973 | 9,800 |
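For illustration, extracting inline tags and dataview fields could look something like this — the regexes below are my own assumptions about the Obsidian/dataview syntax, not the eventual ObsidianLoader implementation:

```python
import re

# Inline Obsidian tags look like #tag or #nested/tag (assumption).
TAG_RE = re.compile(r"(?:^|\s)#([A-Za-z0-9_\-/]+)")
# Dataview inline fields look like "key:: value" on their own line (assumption).
DATAVIEW_RE = re.compile(r"^\s*([A-Za-z][A-Za-z0-9_ -]*)::\s*(.+)$", re.MULTILINE)

def extract_obsidian_metadata(text: str) -> dict:
    """Collect inline tags and dataview fields from a note body."""
    tags = sorted(set(TAG_RE.findall(text)))
    fields = {k.strip(): v.strip() for k, v in DATAVIEW_RE.findall(text)}
    return {"tags": tags, "dataview_fields": fields}

note = """# Meeting notes
status:: in-progress
Related to #project/alpha and #urgent.
"""
print(extract_obsidian_metadata(note))
```

The extracted dict could then be merged into each `Document`'s metadata so it becomes filterable by a self-query retriever.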
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hello, I tried to use AsyncFinalIteratorCallbackHandler (which inherits from AsyncCallbackHandler) to implement async streaming, and encountered the following issue:
```
libs/langchain/langchain/callbacks/manager.py:301: RuntimeWarning: coroutine 'AsyncCallbackHandler.on_chat_model_start' was never awaited
  getattr(handler, event_name)(*args, **kwargs)
```
The following is the demo:

```python
import os
import asyncio
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.agents import AgentType
from langchain.agents import initialize_agent
from langchain.callbacks.streaming_aiter_final_only import AsyncFinalIteratorCallbackHandler

os.environ["OPENAI_API_KEY"] = "<your openai key>"

async_streaming_handler = AsyncFinalIteratorCallbackHandler()
memory = ConversationBufferMemory(
    memory_key="chat_history", return_messages=True)
llm = ChatOpenAI(temperature=0, streaming=True, model="gpt-3.5-turbo-16k-0613")
agent_chain = initialize_agent(
    [], llm, agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION, verbose=True, memory=memory)

async def streaming():
    result = agent_chain.run(
        'how to gain good reviews from customer?', callbacks=[async_streaming_handler])
    print(f"result: {result}")
    while True:
        token = await async_streaming_handler.queue.get()
        print(f"async token: {token}")
        await asyncio.sleep(0.1)

asyncio.run(streaming())
```
How can I deal with this? Could anyone help me?
### Suggestion:
_No response_ | Issue: RuntimeWarning: coroutine 'AsyncCallbackHandler.on_chat_model_start' was never awaited | https://api.github.com/repos/langchain-ai/langchain/issues/9798/comments | 3 | 2023-08-26T12:46:53Z | 2023-12-02T16:05:12Z | https://github.com/langchain-ai/langchain/issues/9798 | 1,868,085,901 | 9,798 |
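For reference — this is an assumption about the cause, not a confirmed diagnosis: the warning typically appears because the synchronous `run()` call cannot await the handler's async callbacks; running the chain as a coroutine (e.g. `await agent_chain.arun(...)`) as a task concurrently with the queue consumer avoids it. The producer/consumer shape, sketched with plain asyncio stand-ins for the chain and handler:

```python
import asyncio

DONE = object()  # sentinel marking end of the token stream

async def fake_chain(queue: asyncio.Queue) -> str:
    # Stand-in for `await agent_chain.arun(...)`: an async run whose
    # callback handler puts tokens on the queue as they are generated.
    for token in ["Good", " reviews", " take", " effort"]:
        await queue.put(token)
    await queue.put(DONE)
    return "Good reviews take effort"

async def main():
    queue: asyncio.Queue = asyncio.Queue()
    # Key point: start the chain as a task instead of calling the
    # synchronous .run(), so its async callbacks are actually awaited.
    task = asyncio.create_task(fake_chain(queue))
    tokens = []
    while True:
        token = await queue.get()
        if token is DONE:
            break
        tokens.append(token)
    return await task, "".join(tokens)

result, streamed = asyncio.run(main())
print(result)
```

With langchain, the consumer loop would read `async_streaming_handler.queue` while the `arun` task produces tokens, instead of calling blocking `run()` first.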
[
"langchain-ai",
"langchain"
] | null | Issue: what's the difference between this with RASA | https://api.github.com/repos/langchain-ai/langchain/issues/9792/comments | 4 | 2023-08-26T07:06:24Z | 2023-12-02T16:05:17Z | https://github.com/langchain-ai/langchain/issues/9792 | 1,867,947,019 | 9,792 |
[
"langchain-ai",
"langchain"
] | ### System Info
Colab environment
LangChain version: 0.0.152
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Please follow the tutorial [here](https://learn.activeloop.ai/courses/take/langchain/multimedia/46317672-using-the-open-source-gpt4all-model-locally) and run the code below to reproduce
```
template = """Question: {question}
Answer: Let's answer in two sentence while being funny."""
prompt = PromptTemplate(template=template, input_variables=["question"])
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
llm = GPT4All(model="./models/ggml-model-q4_0.bin", callback_manager=callback_manager, verbose=True)
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What happens when it rains somewhere?"
llm_chain.run(question)
```
### Expected behavior
TypeError: GPT4All.generate() got an unexpected keyword argument 'n_ctx' | GPT4All callup failure | https://api.github.com/repos/langchain-ai/langchain/issues/9786/comments | 2 | 2023-08-26T05:44:11Z | 2023-12-18T23:48:24Z | https://github.com/langchain-ai/langchain/issues/9786 | 1,867,920,013 | 9,786 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Hi, a document loader for Google Drive already exists in the Python version. I would like to propose this feature for the JavaScript version; it is important to our company.
### Motivation
Hi, a document loader for Google Drive already exists in the Python version. I would like to propose this feature for the JavaScript version; it is important to our company.
### Your contribution
No :( I don't have the knowledge | document loader google drive for javascript version | https://api.github.com/repos/langchain-ai/langchain/issues/9783/comments | 2 | 2023-08-26T02:19:29Z | 2023-12-02T16:05:22Z | https://github.com/langchain-ai/langchain/issues/9783 | 1,867,861,639 | 9,783 |
[
"langchain-ai",
"langchain"
] | ### System Info
pydantic==1.10.12
langchain==0.0.271
System: MacOS Ventura 13.5 (22G74)
Python 3.9.6
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Here is the main python file:
```
from dotenv import load_dotenv
load_dotenv()
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import PyPDFLoader
from langchain.agents import Tool
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import FAISS
from langchain.document_loaders import PyPDFLoader
from langchain.chains import RetrievalQA
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from pydantic import BaseModel, Field
class DocumentInput(BaseModel):
question: str = Field()
llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")
tools = []
files = [
{
"name": "alphabet-earnings",
"path": "https://abc.xyz/investor/static/pdf/2023Q1_alphabet_earnings_release.pdf",
},
{
"name": "tesla-earnings",
"path": "https://digitalassets.tesla.com/tesla-contents/image/upload/IR/TSLA-Q1-2023-Update",
},
]
for file in files:
print(f"Loading {file['name']} with path {file['path']}")
loader = PyPDFLoader(file["path"])
pages = loader.load_and_split()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(pages)
embeddings = OpenAIEmbeddings()
retriever = FAISS.from_documents(docs, embeddings).as_retriever()
# Wrap retrievers in a Tool
tools.append(
Tool(
args_schema=DocumentInput,
name=file["name"],
description=f"useful when you want to answer questions about {file['name']}",
func=RetrievalQA.from_chain_type(llm=llm, retriever=retriever),
)
)
agent = initialize_agent(
agent=AgentType.OPENAI_FUNCTIONS,
tools=tools,
llm=llm,
verbose=True,
)
comparison = agent({"input": "which one has higher earning?"})
print(comparison)
print('-------------------------------')
comparison = agent({"input": "did alphabet or tesla have more revenue?"})
print(comparison)
```
The python version is 3.9.6 and pydantic==1.10.12 and langchain==0.0.271
Run the code using python main.py
Here is the output:
```
Loading alphabet-earnings with path https://abc.xyz/investor/static/pdf/2023Q1_alphabet_earnings_release.pdf
Loading tesla-earnings with path https://digitalassets.tesla.com/tesla-contents/image/upload/IR/TSLA-Q1-2023-Update
> Entering new AgentExecutor chain...
Which companies' earnings are you referring to?
> Finished chain.
{'input': 'which one has higher earning?', 'output': "Which companies' earnings are you referring to?"}
-------------------------------
> Entering new AgentExecutor chain...
Invoking: `alphabet-earnings` with `{'question': 'revenue'}`
{'query': 'revenue', 'result': 'The revenue for Alphabet Inc. for the quarter ended March 31, 2023, was $69,787 million.'}
Invoking: `tesla-earnings` with `{'question': 'revenue'}`
{'query': 'revenue', 'result': 'Total revenue for Q1-2023 was $23.3 billion.'}Alphabet Inc. had more revenue than Tesla. Alphabet's revenue for the quarter ended March 31, 2023, was $69,787 million, while Tesla's total revenue for Q1-2023 was $23.3 billion.
> Finished chain.
{'input': 'did alphabet or tesla have more revenue?', 'output': "Alphabet Inc. had more revenue than Tesla. Alphabet's revenue for the quarter ended March 31, 2023, was $69,787 million, while Tesla's total revenue for Q1-2023 was $23.3 billion."}
```
### Expected behavior
I expect the question 'which one has higher earning?' to also get a good answer, just like the question 'did alphabet or tesla have more revenue?' did.
I was following this guide: https://python.langchain.com/docs/integrations/toolkits/document_comparison_toolkit | RetrievalQA for document comparison is not working for similar type of questions. | https://api.github.com/repos/langchain-ai/langchain/issues/9780/comments | 2 | 2023-08-25T21:54:12Z | 2023-12-01T16:06:23Z | https://github.com/langchain-ai/langchain/issues/9780 | 1,867,724,844 | 9,780 |
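One possible mitigation — an assumption on my part, not a documented fix — is to make the ambiguity resolvable from the prompt itself: give the agent a system message listing the documents it can compare (e.g. via `initialize_agent(..., agent_kwargs={"system_message": ...})` for the OpenAI functions agent). A sketch of composing that message from the tool list:

```python
def build_system_message(files):
    """Compose a system message so 'which one' questions are interpreted
    as comparisons across all loaded documents (illustrative helper)."""
    names = ", ".join(f["name"] for f in files)
    return (
        "You can answer questions about these documents: "
        f"{names}. When the user says 'which one', assume they mean "
        "a comparison across all of them."
    )

files = [{"name": "alphabet-earnings"}, {"name": "tesla-earnings"}]
print(build_system_message(files))
```

Whether `agent_kwargs` is the right hook depends on the agent type; richer per-tool `description` strings may help as well.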
[
"langchain-ai",
"langchain"
] | ### Feature request
Some URLs return status code 403 if the right headers are not passed to the `requests.get(path)` call. My workaround was to provide the headers expected by that website to get approval from the server. It would be great to be able to pass headers to PyMuPDFLoader and all the other web-based document loaders:

```python
headers = {
    "User-Agent": "Chrome/109.0.5414.119",
    "Referer": "https://www.ncbi.nlm.nih.gov" if 'ncbi' in self.file_path else None
}
r = requests.get(self.file_path, headers=headers)
```

The call would be `PyMuPDFLoader(path, headers).load()`, and if the loader detects that headers were provided, it would pass them downstream to `get`.
### Motivation
Some URLs return status code 403 if the right headers are not passed to the `requests.get(path)` call. My workaround was to provide the headers expected by that website to get approval from the server. It would be great to be able to pass headers to PyMuPDFLoader and all the other web-based document loaders.
Mainly an issue with websites like NCBI
### Your contribution
Not experienced enough | Pass headers arg (requests library) to loaders that fetch from web | https://api.github.com/repos/langchain-ai/langchain/issues/9778/comments | 3 | 2023-08-25T20:39:29Z | 2024-05-28T08:23:07Z | https://github.com/langchain-ai/langchain/issues/9778 | 1,867,657,651 | 9,778 |
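A sketch of the header-aware fetch such loaders could expose — `build_request` is a hypothetical helper, and urllib is used here only to keep the example self-contained (the loaders themselves use `requests`):

```python
import urllib.request
from typing import Optional

def build_request(file_path: str, headers: Optional[dict] = None) -> urllib.request.Request:
    """Build a request carrying caller-supplied headers, e.g. to avoid 403s."""
    default_headers = {"User-Agent": "Chrome/109.0.5414.119"}
    if headers:
        default_headers.update(headers)
    return urllib.request.Request(file_path, headers=default_headers)

req = build_request(
    "https://www.ncbi.nlm.nih.gov/some/paper.pdf",
    headers={"Referer": "https://www.ncbi.nlm.nih.gov"},
)
# urllib normalizes header names with str.capitalize()
print(req.headers)
```

A loader constructor would accept an optional `headers` dict, merge it over its defaults like this, and forward it to its HTTP call.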
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain version 0.0.273
Python version 3.8.10
Ubuntu 20.04.5 LTS
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run the following code:
```
MY_MODEL_NAME="SUBSTITUTE_THIS_WITH_YOUR_OWN_MODEL_FOR_REPRODUCTION"
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
llm = ChatOpenAI(temperature=0.1, model=MY_MODEL_NAME)
memory = ConversationBufferMemory(return_messages=True)
conversation = ConversationChain(llm=llm, memory=memory)
message1 = "Howdy there"
response1 = conversation(message1)
print(response1)
message2 = "How's it going?"
response2 = conversation(message2)
print(response2)
```
Inspect the requests sent to the server. They will resemble the following packets received by my own server:
request1:
```
'messages': [{'role': 'user', 'content': 'The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n\nCurrent conversation:\n[]\nHuman: Howdy there\nAI:'}]
```
request2:
```
'messages': [{'role': 'user', 'content': "The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n\nCurrent conversation:\n[HumanMessage(content='Howdy there', additional_kwargs={}, example=False), AIMessage(content='Howdy! How can I help you today?\\n', additional_kwargs={}, example=False)]\nHuman: How's it going?\nAI:"}]
```
Note that these requests are malformatted.
### Expected behavior
There are two issues.
First, the `messages` packet is clearly malformatted, containing HumanMessage and AIMessage strings.
Second, the `messages` packet only has one conversation turn, and there appears to be no options within the ConversationChain class to allow for multiple turns.
This is particularly problematic as the ConversationChain class requires the user to know what turn tokens are appropriate to use. The user cannot and should not be expected to have knowledge of how the model was trained: there should be an option to leave this up to the server to decide.
My expected (and required for my product) behavior is for the two requests to be formatted as follows.
request1:
```
'messages': [{'role': 'user', 'content': 'Howdy there'}]
```
request2:
```
'messages': [{'role': 'user', 'content': 'Howdy there'}, {'role': 'assistant', 'content': 'Howdy! How can I help you today?\\n'}, {'role': 'user', 'content': "How's it going?"}]
```
Ultimately it is confusing why `conversation(message1); conversation(message2);` sends a different request to the server back-end than `llm([HumanMessage(content=message1), AIMessage(content=response1), HumanMessage(content=message2)])` does. | ConversationChain sends malformatted requests to server | https://api.github.com/repos/langchain-ai/langchain/issues/9776/comments | 3 | 2023-08-25T19:37:16Z | 2023-12-07T16:06:30Z | https://github.com/langchain-ai/langchain/issues/9776 | 1,867,592,455 | 9,776 |
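As far as I can tell, ConversationChain's default prompt is a single string template, so with `return_messages=True` the message objects get str()-formatted into it. A chat-native alternative is an `LLMChain` over a `ChatPromptTemplate` containing a `MessagesPlaceholder` for the history, which should produce the role-structured payload described above. The helper below (pure Python, not a langchain API) shows the target shape of request2:

```python
def to_openai_messages(history, user_message):
    """Render alternating (human, ai) turns plus the new user turn as
    OpenAI-style role dicts -- the shape request2 should have."""
    messages = []
    for human, ai in history:
        messages.append({"role": "user", "content": human})
        messages.append({"role": "assistant", "content": ai})
    messages.append({"role": "user", "content": user_message})
    return messages

history = [("Howdy there", "Howdy! How can I help you today?\n")]
print(to_openai_messages(history, "How's it going?"))
```

Sending the history as discrete role messages also leaves turn-token formatting to the server, as the issue requests.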
[
"langchain-ai",
"langchain"
] | https://github.com/langchain-ai/langchain/blob/709a67d9bfcff475356924d8461140052dd418f7/libs/langchain/langchain/chains/qa_with_sources/base.py#L123
I've noticed that the retrieval QA chain doesn't always return "SOURCES"; it sometimes returns "Sources", "sources", or "source". | The RetrievalQAWithSourcesChain with the ExamplePrompt doesn't always return SOURCES as part of its answers. | https://api.github.com/repos/langchain-ai/langchain/issues/9774/comments | 3 | 2023-08-25T17:57:20Z | 2023-12-02T16:05:32Z | https://github.com/langchain-ai/langchain/issues/9774 | 1,867,473,509 | 9,774 |
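Until the prompt reliably constrains the model's output, one workaround is to normalize the key when parsing the answer; a sketch (my own helper, not part of the chain):

```python
import re

# Accept any casing/pluralization of the sources key at the start of a line.
SOURCE_KEY_RE = re.compile(r"^\s*(SOURCES?|Sources?|sources?)\s*:", re.MULTILINE)

def split_answer_and_sources(text: str):
    """Split an LLM answer on any variant of the SOURCES key."""
    match = SOURCE_KEY_RE.search(text)
    if match is None:
        return text.strip(), ""
    answer = text[: match.start()].strip()
    sources = text[match.end():].strip()
    return answer, sources

print(split_answer_and_sources("The answer is 42.\nSources: doc1.txt, doc2.txt"))
```

The same normalization could live in the chain's output parsing so downstream code always sees one canonical key.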
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Right now, the Sagemaker Inference Endpoint LLM does not allow for async requests and it's a performance bottleneck.
I have an API gateway set up such that I have a restful api endpoint backed by the sagemaker inference endpoint.
In an ideal world:
1. Langchain should allow for arbitrary http requests to a backend LLM of our choice fronted by your LLM interfaces. This way, we can standardize async calls for this sort of flow.
2. SagemakerEndpoint should allow for async requests.
Is this feasible?
Does this exist at the moment?
### Suggestion:
_No response_ | Issue: SagemakerEndpoint does not support async calls | https://api.github.com/repos/langchain-ai/langchain/issues/9773/comments | 1 | 2023-08-25T17:21:24Z | 2023-12-01T16:06:42Z | https://github.com/langchain-ai/langchain/issues/9773 | 1,867,424,178 | 9,773 |
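Until native async support exists, one common stopgap is to push the blocking endpoint call onto a thread with `run_in_executor`; `call_endpoint` below is a hypothetical stand-in for the blocking SageMaker/API-gateway request (boto3 `invoke_endpoint` or an HTTP POST):

```python
import asyncio
from functools import partial

def call_endpoint(prompt: str) -> str:
    # Hypothetical blocking call; in practice this would be a
    # sagemaker-runtime invoke_endpoint or an HTTP request.
    return f"completion for: {prompt}"

async def acall_endpoint(prompt: str) -> str:
    loop = asyncio.get_running_loop()
    # Run the blocking request in the default thread pool so many
    # prompts can be in flight concurrently.
    return await loop.run_in_executor(None, partial(call_endpoint, prompt))

async def main():
    return await asyncio.gather(*(acall_endpoint(p) for p in ["a", "b"]))

print(asyncio.run(main()))
```

The same pattern could back an `_acall` on a custom LLM wrapper while waiting for first-class async support in SagemakerEndpoint.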
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
The issue with PySpark messages is that they can become quite lengthy, and after some iterations they could cause problems with token limits. In this context, I would like to initiate a discussion about this topic and explore potential solutions.
### Suggestion:
My suggestion would be to generate a summary of the error message before returning it. I am currently not deep enough into the langchain codebase, and maybe there are better options, so feel free to comment.
It is regarding following part [langchain/libs/langchain/langchain/utilities/spark_sql.py](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/utilities/spark_sql.py):
```python
def run_no_throw(self, command: str, fetch: str = "all") -> str:
"""Execute a SQL command and return a string representing the results.
If the statement returns rows, a string of the results is returned.
If the statement returns no rows, an empty string is returned.
If the statement throws an error, the error message is returned.
"""
try:
return self.run(command, fetch)
except Exception as e:
"""Format the error message"""
return f"Error: {e}"
``` | PySpark error message and token limits in spark_sql | https://api.github.com/repos/langchain-ai/langchain/issues/9767/comments | 1 | 2023-08-25T14:22:11Z | 2023-12-01T16:06:48Z | https://github.com/langchain-ai/langchain/issues/9767 | 1,867,149,666 | 9,767 |
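A sketch of the suggested change: bound the error string before returning it. `MAX_ERROR_CHARS` is an illustrative cap, not a langchain constant, and a real summary could be smarter than plain truncation:

```python
MAX_ERROR_CHARS = 500  # illustrative limit, not a langchain constant

def format_spark_error(e: Exception, limit: int = MAX_ERROR_CHARS) -> str:
    """Return a bounded error string so repeated tool errors
    don't blow past the model's context window."""
    message = f"Error: {e}"
    if len(message) <= limit:
        return message
    return message[:limit] + " ... [truncated]"

print(format_spark_error(ValueError("x" * 1000)))
```

`run_no_throw` could call this in its `except` branch instead of returning the raw `f"Error: {e}"`.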
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.273
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am trying to use Azure Cognitive Search retriever, however it fails because our fields are different:
Our index looks like this:

Our code:
```
llm = AzureChatOpenAI(
openai_api_base=config['AZURE_OPENAI_ENDPOINT'],
openai_api_version=config['AZURE_OPENAI_API_VERSION'],
deployment_name=config['OPENAI_DEPLOYMENT_NAME'],
openai_api_key=config['AZURE_OPENAI_API_KEY'],
openai_api_type=config['OPENAI_API_TYPE'],
model_name=config['OPENAI_MODEL_NAME'],
temperature=0)
embeddings = OpenAIEmbeddings(
openai_api_base=config['AZURE_OPENAI_ENDPOINT'],
openai_api_type="azure",
deployment=config['AZURE_OPENAI_EMBEDDING_DEPLOYED_MODEL_NAME'],
openai_api_key=config['AZURE_OPENAI_API_KEY'],
model=config['AZURE_OPENAI_EMBEDDING_DEPLOYED_MODEL_NAME'],
chunk_size=1)
fields = [
SimpleField(
name="id",
type=SearchFieldDataType.String,
key=True,
filterable=True,
),
SearchableField(
name="text",
type=SearchFieldDataType.String,
searchable=True,
),
SearchField(
name="embedding",
type=SearchFieldDataType.Collection(SearchFieldDataType.Single),
searchable=True,
vector_search_dimensions=1536,
vector_search_configuration="default",
)
]
vector_store: AzureSearch = AzureSearch(
azure_search_endpoint=config['AZURE_SEARCH_SERVICE_ENDPOINT'],
azure_search_key=config['AZURE_SEARCH_ADMIN_KEY'],
index_name=config['AZURE_SEARCH_VECTOR_INDEX_NAME'],
embedding_function=embeddings.embed_query,
fields=fields,
)
retriever = vector_store.as_retriever(search_type="similarity", kwargs={"k": 3})
# Creating instance of RetrievalQA
chain = RetrievalQA.from_chain_type(llm=llm,
chain_type="stuff",
retriever=retriever,
return_source_documents=True)
# Generating response to user's query
response = chain({"query": config['question']})
```
I traced it all down to the function: vector_search_with_score in azuresearch.py
```
results = self.client.search(
search_text="",
vectors=[
Vector(
value=np.array(
self.embedding_function(query), dtype=np.float32
).tolist(),
k=k,
fields=FIELDS_CONTENT_VECTOR,
)
],
select=[FIELDS_ID, FIELDS_CONTENT, FIELDS_METADATA],
filter=filters,
)
```
The code is trying to use FIELDS_CONTENT_VECTOR which is a constant and its not our field name.
I guess some other issues may arise with other parts of the code where constants are used.
Why do we have different field names?
We are using Microsoft examples to setup all azure indexing, indexers, skillsets and datasources:
https://github.com/Azure/cognitive-search-vector-pr/blob/main/demo-python/code/azure-search-vector-ingestion-python-sample.ipynb
and their open ai embedding generator deployed as an azure function:
https://github.com/Azure-Samples/azure-search-power-skills/blob/main/Vector/EmbeddingGenerator/README.md
I wrote a blog post series about this
https://medium.com/python-in-plain-english/elevate-chat-ai-applications-mastering-azure-cognitive-search-with-vector-storage-for-llm-a2082f24f798
### Expected behavior
I should be able to define the field names we want to use, but the code uses constants. | AzureSearch.py is using constant field names instead of ours | https://api.github.com/repos/langchain-ai/langchain/issues/9765/comments | 14 | 2023-08-25T14:18:44Z | 2024-07-11T07:54:13Z | https://github.com/langchain-ai/langchain/issues/9765 | 1,867,141,839 | 9,765 |
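One possible workaround — and this is an assumption that should be verified against your installed `azuresearch.py`: some langchain versions appear to read these constants from `AZURESEARCH_FIELDS_*` environment variables at import time, so setting them before importing the module may let you keep your own field names:

```python
import os

# ASSUMPTION: these environment variable names match the overrides read by
# your version of langchain's azuresearch.py; check the module source first.
os.environ["AZURESEARCH_FIELDS_ID"] = "id"
os.environ["AZURESEARCH_FIELDS_CONTENT"] = "text"
os.environ["AZURESEARCH_FIELDS_CONTENT_VECTOR"] = "embedding"

# This must run before
# `from langchain.vectorstores.azuresearch import AzureSearch`,
# because the constants are bound when the module is imported.
```

If your version lacks these hooks, the alternative is renaming the index fields to match the defaults.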
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain version : 0.0.273
Python version : 3.10.8
Platform : macOS 13.5.1
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import langchain
from langchain.chat_models import ChatOpenAI
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.cache import InMemoryCache
langchain.llm_cache = InMemoryCache()
llm = ChatOpenAI(model_name="gpt-3.5-turbo", streaming=True, callbacks=[StreamingStdOutCallbackHandler()])
resp = llm.predict("Tell me a joke")
resp1 = llm.predict("Tell me a joke")
```
output:
```
Sure, here's a classic one for you:
Why don't scientists trust atoms?
Because they make up everything!
```
### Expected behavior
I'd expect both responses to be streamed to stdout, but since the second one comes from the in-memory cache, the callback handler's `on_llm_new_token` is never called and thus the second response is never printed.
I guess `on_llm_new_token` should be called once with the full response loaded from cache to ensure consistent behavior between new and cached responses. | Streaming doesn't work correctly with caching | https://api.github.com/repos/langchain-ai/langchain/issues/9762/comments | 5 | 2023-08-25T13:35:30Z | 2024-04-23T09:58:26Z | https://github.com/langchain-ai/langchain/issues/9762 | 1,867,071,726 | 9,762 |
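One direction for a fix, sketched with stand-in classes (these are not the real langchain handler/LLM): on a cache hit, replay the cached response through the same callback path so cached and fresh responses behave identically.

```python
class StdoutTokenHandler:
    """Stand-in for StreamingStdOutCallbackHandler (not the real class)."""
    def __init__(self):
        self.seen = []

    def on_llm_new_token(self, token: str) -> None:
        self.seen.append(token)
        print(token, end="")

def predict_with_cache(prompt, cache, generate, handler):
    """Serve from cache, but still drive the streaming callback so
    cached and fresh responses behave consistently."""
    if prompt in cache:
        handler.on_llm_new_token(cache[prompt])  # replay full cached text once
        return cache[prompt]
    text = generate(prompt)
    cache[prompt] = text
    return text

cache = {}
handler = StdoutTokenHandler()
predict_with_cache("joke", cache, lambda p: "Why...? Because!", handler)
predict_with_cache("joke", cache, lambda p: "unused", handler)
```

A simpler interim workaround may be to disable caching just for streaming model instances, if your langchain version supports a per-model `cache` flag.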
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.191
llama-cpp-python==0.1.78
chromadb==0.3.22
python3.10
wizard-vicuna-13B.ggmlv3.q4_0.bin
### Who can help?
@hwchase17 @agol
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I have a slightly modified version of privateGPT running. I am facing a weird issue where the SelfQueryRetriever creates filter attributes that do not exist in my ChromaDB. This crashes the app when running the RetrievalQA chain with the error `chromadb.errors.NoDatapointsException: No datapoints found for the supplied filter`. I have provided a list of the attributes that exist in my DB, but the SelfQueryRetriever still creates filters on metadata that does not exist.
To reproduce the problem, use the wizard-vicuna-13B.ggmlv3.q4_0.bin model provided by TheBloke/wizard-vicuna-13B-GGML on HuggingFace and run the below code. I don't think the choice of model has an impact here. The issue I am facing is the creation of metadata filters that do not exist.
Is there a way to limit the attributes allowed by the SelfQueryRetriever?
```python
import logging
import click
import torch
from auto_gptq import AutoGPTQForCausalLM
from huggingface_hub import hf_hub_download
from langchain.chains import RetrievalQA
from langchain.embeddings import HuggingFaceInstructEmbeddings
from langchain.llms import HuggingFacePipeline, LlamaCpp
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.chains.query_constructor.base import AttributeInfo
import time
# from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.vectorstores import Chroma
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
GenerationConfig,
LlamaForCausalLM,
LlamaTokenizer,
pipeline,
)
from constants import CHROMA_SETTINGS, EMBEDDING_MODEL_NAME, PERSIST_DIRECTORY, MODEL_ID, MODEL_BASENAME
SEED = 42
def load_model(device_type, model_id, model_basename=None, local_model: bool = False, local_model_path: str = None):
"""
Select a model for text generation using the HuggingFace library.
If you are running this for the first time, it will download a model for you.
subsequent runs will use the model from the disk.
Args:
device_type (str): Type of device to use, e.g., "cuda" for GPU or "cpu" for CPU.
model_id (str): Identifier of the model to load from HuggingFace's model hub.
model_basename (str, optional): Basename of the model if using quantized models.
Defaults to None.
Returns:
HuggingFacePipeline: A pipeline object for text generation using the loaded model.
Raises:
ValueError: If an unsupported model or device type is provided.
"""
if local_model:
logging.info(f'Loading local model at {local_model_path}')
else:
logging.info(f"Loading Model: {model_id}, on: {device_type}")
logging.info("This action can take a few minutes!")
if model_basename is not None:
# if "ggml" in model_basename:
if ".ggml" in model_basename:
logging.info("Using Llamacpp for GGML quantized models")
if local_model:
model_path = local_model_path
else:
model_path = hf_hub_download(repo_id=model_id, filename=model_basename)
max_ctx_size = 2048
kwargs = {
"model_path": model_path,
"n_ctx": max_ctx_size,
"max_tokens": max_ctx_size,
}
if device_type.lower() == "mps":
kwargs["n_gpu_layers"] = 1000
if device_type.lower() == "cuda":
kwargs['seed'] = SEED
kwargs["n_gpu_layers"] = 40
return LlamaCpp(**kwargs)
else:
# The code supports all huggingface models that ends with GPTQ and have some variation
# of .no-act.order or .safetensors in their HF repo.
logging.info("Using AutoGPTQForCausalLM for quantized models")
if ".safetensors" in model_basename:
# Remove the ".safetensors" ending if present
model_basename = model_basename.replace(".safetensors", "")
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
logging.info("Tokenizer loaded")
model = AutoGPTQForCausalLM.from_quantized(
model_id,
model_basename=model_basename,
use_safetensors=True,
trust_remote_code=True,
device="cuda:0",
use_triton=False,
quantize_config=None,
)
elif (
device_type.lower() == "cuda"
): # The code supports all huggingface models that ends with -HF or which have a .bin
# file in their HF repo.
logging.info("Using AutoModelForCausalLM for full models")
tokenizer = AutoTokenizer.from_pretrained(model_id)
logging.info("Tokenizer loaded")
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="auto",
torch_dtype=torch.float16,
low_cpu_mem_usage=True,
trust_remote_code=True,
# max_memory={0: "15GB"} # Uncomment this line with you encounter CUDA out of memory errors
)
model.tie_weights()
else:
logging.info("Using LlamaTokenizer")
tokenizer = LlamaTokenizer.from_pretrained(model_id)
model = LlamaForCausalLM.from_pretrained(model_id)
# Load configuration from the model to avoid warnings
generation_config = GenerationConfig.from_pretrained(model_id)
# see here for details:
# https://huggingface.co/docs/transformers/
# main_classes/text_generation#transformers.GenerationConfig.from_pretrained.returns
# Create a pipeline for text generation
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_length=2048,
temperature=0,
top_p=0.95,
repetition_penalty=1.15,
generation_config=generation_config,
)
local_llm = HuggingFacePipeline(pipeline=pipe)
logging.info("Local LLM Loaded")
return local_llm
# chose device typ to run on as well as to show source documents.
@click.command()
@click.option(
"--device_type",
default="cuda" if torch.cuda.is_available() else "cpu",
type=click.Choice(
[
"cpu",
"cuda",
"ipu",
"xpu",
"mkldnn",
"opengl",
"opencl",
"ideep",
"hip",
"ve",
"fpga",
"ort",
"xla",
"lazy",
"vulkan",
"mps",
"meta",
"hpu",
"mtia",
],
),
help="Device to run on. (Default is cuda)",
)
@click.option(
"--show_sources",
"-s",
is_flag=True,
help="Show sources along with answers (Default is False)",
)
@click.option(
"--local_model",
"-lm",
is_flag=True,
help="Use local model (Default is False)",
)
@click.option(
"--local_model_path",
"-lmp",
default=None,
    help="Path to local model. (Default is None)",
)
def main(device_type, show_sources, local_model: bool = False, local_model_path: str = None):
    """
    This function implements the information retrieval task.
    1. Loads an embedding model, can be HuggingFaceInstructEmbeddings or HuggingFaceEmbeddings
    2. Loads the existing vectorstore that was created by ingest.py
    3. Loads the local LLM using the load_model function - You can now set different LLMs.
    4. Sets up the Question Answer retrieval chain.
    5. Answers questions.
    """
    logging.info(f"Running on: {device_type}")
    logging.info(f"Display Source Documents set to: {show_sources}")

    embeddings = HuggingFaceInstructEmbeddings(model_name=EMBEDDING_MODEL_NAME, model_kwargs={"device": device_type})
    # uncomment the following line if you used HuggingFaceEmbeddings in the ingest.py
    # embeddings = HuggingFaceEmbeddings(model_name=EMBEDDING_MODEL_NAME)

    # load the vectorstore
    db = Chroma(
        persist_directory=PERSIST_DIRECTORY,
        embedding_function=embeddings,
        client_settings=CHROMA_SETTINGS,
    )

    template = """Use the following pieces of context to answer the question at the end. If you don't know the answer,\
just say that you don't know, don't try to make up an answer.
{context}
{history}
Question: {question}
Helpful Answer:"""

    prompt = PromptTemplate(input_variables=["history", "context", "question"], template=template)
    memory = ConversationBufferMemory(input_key="question", memory_key="history")

    llm = load_model(
        device_type, model_id=MODEL_ID, model_basename=MODEL_BASENAME, local_model=local_model,
        local_model_path=local_model_path)

    metadata_field_info = [
        AttributeInfo(
            name='country',
            description='The country name.',
            type='string'
        ),
        AttributeInfo(
            name='source',
            description='Filename and location of the source file.',
            type='string'
        )
    ]

    retriever = SelfQueryRetriever.from_llm(
        llm=llm,
        vectorstore=db,
        document_contents='News, policies, and laws for various countries.',
        metadata_field_info=metadata_field_info,
        verbose=True,
    )

    qa = RetrievalQA.from_chain_type(
        llm=llm,
        chain_type="stuff",
        retriever=retriever,
        return_source_documents=True,
        chain_type_kwargs={"prompt": prompt, "memory": memory},
    )

    # Interactive questions and answers
    while True:
        query = input("\nEnter a query: ")
        if query == "exit":
            break
        # Get the answer from the chain
        start = time.time()
        res = qa(query)
        answer, docs = res["result"], res["source_documents"]

        # Print the result
        print(f'Time: {time.time() - start}')
        print("\n\n> Question:")
        print(query)
        print("\n> Answer:")
        print(answer)

        if show_sources:  # this is a flag that you can set to disable showing sources.
            # Print the relevant sources used for the answer
            print("----------------------------------SOURCE DOCUMENTS---------------------------")
            for document in docs:
                print("\n> " + document.metadata["source"] + ":")
                print(document.page_content)
            print("----------------------------------SOURCE DOCUMENTS---------------------------")


if __name__ == "__main__":
    logging.basicConfig(
        format="%(asctime)s - %(levelname)s - %(filename)s:%(lineno)s - %(message)s", level=logging.INFO
    )
    main()
```
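The traceback below shows the structured query inventing a `penalty` attribute that was never declared in `metadata_field_info`. One way to soften this failure mode (a sketch of the idea only, using hypothetical tuple-shaped comparisons — not LangChain's built-in behavior) is to post-validate the LLM-generated filter against the declared attributes before it reaches the vector store:

```python
declared_attributes = {"country", "source"}

def validate_comparisons(comparisons):
    # Drop any comparison whose attribute was never declared
    # (here, 'penalty' was hallucinated by the query constructor).
    return [c for c in comparisons if c[0] in declared_attributes]

llm_filter = [
    ("country", "eq", "United Arab Emirates"),
    ("penalty", "eq", "cybercrime"),  # hallucinated attribute
]
print(validate_comparisons(llm_filter))  # keeps only the 'country' comparison
```

With such a guard the hallucinated comparison would be dropped instead of producing a `NoDatapointsException`.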
```bash
2023-08-25 12:53:40,256 - INFO - run_localGPT.py:209 - Running on: cuda
2023-08-25 12:53:40,256 - INFO - run_localGPT.py:210 - Display Source Documents set to: True
2023-08-25 12:53:40,397 - INFO - SentenceTransformer.py:66 - Load pretrained SentenceTransformer: hkunlp/instructor-large
load INSTRUCTOR_Transformer
max_seq_length 512
2023-08-25 12:53:42,924 - INFO - __init__.py:88 - Running Chroma using direct local API.
2023-08-25 12:53:42,929 - WARNING - __init__.py:43 - Using embedded DuckDB with persistence: data will be stored in: /home/waleedalfaris/localGPT/DB
2023-08-25 12:53:42,934 - INFO - ctypes.py:22 - Successfully imported ClickHouse Connect C data optimizations
2023-08-25 12:53:42,937 - INFO - json_impl.py:45 - Using python library for writing JSON byte strings
2023-08-25 12:53:47,543 - INFO - duckdb.py:460 - loaded in 129337 embeddings
2023-08-25 12:53:47,545 - INFO - duckdb.py:472 - loaded in 1 collections
2023-08-25 12:53:47,546 - INFO - duckdb.py:89 - collection with name langchain already exists, returning existing collection
2023-08-25 12:53:47,546 - INFO - run_localGPT.py:50 - Loading local model at /home/waleedalfaris/localGPT/models/wizard-vicuna-13B.ggmlv3.q4_0.bin
2023-08-25 12:53:47,546 - INFO - run_localGPT.py:53 - This action can take a few minutes!
2023-08-25 12:53:47,546 - INFO - run_localGPT.py:58 - Using Llamacpp for GGML quantized models
ggml_init_cublas: found 1 CUDA devices:
Device 0: Tesla T4, compute capability 7.5
llama.cpp: loading model from /home/waleedalfaris/localGPT/models/wizard-vicuna-13B.ggmlv3.q4_0.bin
llama_model_load_internal: format = ggjt v3 (latest)
llama_model_load_internal: n_vocab = 32000
llama_model_load_internal: n_ctx = 2048
llama_model_load_internal: n_embd = 5120
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 40
llama_model_load_internal: n_head_kv = 40
llama_model_load_internal: n_layer = 40
llama_model_load_internal: n_rot = 128
llama_model_load_internal: n_gqa = 1
llama_model_load_internal: rnorm_eps = 5.0e-06
llama_model_load_internal: n_ff = 13824
llama_model_load_internal: freq_base = 10000.0
llama_model_load_internal: freq_scale = 1
llama_model_load_internal: ftype = 2 (mostly Q4_0)
llama_model_load_internal: model size = 13B
llama_model_load_internal: ggml ctx size = 0.11 MB
llama_model_load_internal: using CUDA for GPU acceleration
llama_model_load_internal: mem required = 669.91 MB (+ 1600.00 MB per state)
llama_model_load_internal: allocating batch_size x (640 kB + n_ctx x 160 B) = 480 MB VRAM for the scratch buffer
llama_model_load_internal: offloading 40 repeating layers to GPU
llama_model_load_internal: offloaded 40/43 layers to GPU
llama_model_load_internal: total VRAM used: 7288 MB
llama_new_context_with_model: kv self size = 1600.00 MB
AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | VSX = 0 |
Enter a query: what is the penalty for cybercrime in the United Arab Emirates?
llama_print_timings: load time = 786.98 ms
llama_print_timings: sample time = 158.14 ms / 196 runs ( 0.81 ms per token, 1239.39 tokens per second)
llama_print_timings: prompt eval time = 83050.84 ms / 920 tokens ( 90.27 ms per token, 11.08 tokens per second)
llama_print_timings: eval time = 22099.62 ms / 195 runs ( 113.33 ms per token, 8.82 tokens per second)
llama_print_timings: total time = 105962.51 ms
query=' ' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='country', value='United Arab Emirates'), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='penalty', value='cybercrime')]) limit=None
Traceback (most recent call last):
File "/home/waleedalfaris/localGPT/run_localGPT.py", line 302, in <module>
File "/home/waleedalfaris/localGPT/venv/lib/python3.10/site-packages/click/core.py", line 1157, in __call__
return self.main(*args, **kwargs)
File "/home/waleedalfaris/localGPT/venv/lib/python3.10/site-packages/click/core.py", line 1078, in main
rv = self.invoke(ctx)
File "/home/waleedalfaris/localGPT/venv/lib/python3.10/site-packages/click/core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/waleedalfaris/localGPT/venv/lib/python3.10/site-packages/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
File "/home/waleedalfaris/localGPT/run_localGPT.py", line 279, in main
query = input("\nEnter a query: ")
File "/home/waleedalfaris/localGPT/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 140, in __call__
raise e
File "/home/waleedalfaris/localGPT/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 134, in __call__
self._call(inputs, run_manager=run_manager)
File "/home/waleedalfaris/localGPT/venv/lib/python3.10/site-packages/langchain/chains/retrieval_qa/base.py", line 119, in _call
docs = self._get_docs(question)
File "/home/waleedalfaris/localGPT/venv/lib/python3.10/site-packages/langchain/chains/retrieval_qa/base.py", line 181, in _get_docs
return self.retriever.get_relevant_documents(question)
File "/home/waleedalfaris/localGPT/venv/lib/python3.10/site-packages/langchain/retrievers/self_query/base.py", line 90, in get_relevant_documents
docs = self.vectorstore.search(new_query, self.search_type, **search_kwargs)
File "/home/waleedalfaris/localGPT/venv/lib/python3.10/site-packages/langchain/vectorstores/base.py", line 81, in search
return self.similarity_search(query, **kwargs)
File "/home/waleedalfaris/localGPT/venv/lib/python3.10/site-packages/langchain/vectorstores/chroma.py", line 182, in similarity_search
docs_and_scores = self.similarity_search_with_score(query, k, filter=filter)
File "/home/waleedalfaris/localGPT/venv/lib/python3.10/site-packages/langchain/vectorstores/chroma.py", line 230, in similarity_search_with_score
results = self.__query_collection(
File "/home/waleedalfaris/localGPT/venv/lib/python3.10/site-packages/langchain/utils.py", line 53, in wrapper
return func(*args, **kwargs)
File "/home/waleedalfaris/localGPT/venv/lib/python3.10/site-packages/langchain/vectorstores/chroma.py", line 121, in __query_collection
return self._collection.query(
File "/home/waleedalfaris/localGPT/venv/lib/python3.10/site-packages/chromadb/api/models/Collection.py", line 219, in query
return self._client._query(
File "/home/waleedalfaris/localGPT/venv/lib/python3.10/site-packages/chromadb/api/local.py", line 408, in _query
uuids, distances = self._db.get_nearest_neighbors(
File "/home/waleedalfaris/localGPT/venv/lib/python3.10/site-packages/chromadb/db/clickhouse.py", line 576, in get_nearest_neighbors
raise NoDatapointsException(
chromadb.errors.NoDatapointsException: No datapoints found for the supplied filter {"$and": [{"country": {"$eq": "United Arab Emirates"}}, {"penalty": {"$eq": "cybercrime"}}]}
2023-08-25 13:05:24,584 - INFO - duckdb.py:414 - Persisting DB to disk, putting it in the save folder: /home/waleedalfaris/localGPT/DB
```
### Expected behavior
The result of the SelfQueryRetriever should only contain a filter on country with a value of United Arab Emirates, and the query should not be blank. It should have an output similar to `query='cybercrime penalty' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='country', value='United Arab Emirates') limit=None` | SelfQueryRetriever creates attributes that do not exist | https://api.github.com/repos/langchain-ai/langchain/issues/9761/comments | 5 | 2023-08-25T13:31:11Z | 2024-01-12T11:56:38Z | https://github.com/langchain-ai/langchain/issues/9761 | 1,867,063,910 | 9,761
[
"langchain-ai",
"langchain"
] | ### System Info
- LangChain version: 0.0.105
- Platform: Macbook Pro M1 - Mac OS Ventura
- Node.js version: v18.17.1
- qdrant/js-client-rest: 1.4.0
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
**Related Components:**
- Vector Stores / Retrievers
- JavaScript LangChain Qdrant Wrapper
**Information:**
The issue arises when attempting to perform a semantic search using the Qdrant wrapper in LangChain through JavaScript. The provided code snippet is as follows:
```javascript
const embeddings = new OpenAIEmbeddings({ openAIApiKey: process.env.OPENAI_API_KEY })

const vectorStore = await QdrantVectorStore.fromExistingCollection(
  embeddings,
  {
    url: process.env.QDRANT_URL,
    collectionName: process.env.QDRANT_COLLECTION_NAME
  })

const results = await vectorStore.similaritySearch("some query", 4)
```
The problem is that the `results` list of Documents contains undefined `pageContent`, while the metadata is fetched correctly. Strangely, when performing the same operation using the Python LangChain Qdrant wrapper, the `page_content` and `metadata` are both retrieved from the same Qdrant vectorstore correctly.
**Reproduction:**
To reproduce the issue, follow these steps:
1. Use the provided code snippet to perform a semantic search using the JavaScript LangChain Qdrant wrapper.
2. Examine the `results` list of Documents and observe that the `pageContent` property is undefined.
3. Compare the results with the results from the python equivalent code snippet:
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Qdrant
from qdrant_client import QdrantClient

qdrant_client = QdrantClient(
    api_key=os.getenv("QDRANT_API_KEY"),
    url=os.getenv("QDRANT_URL")
)

# get existing Qdrant vectorstore
vectorstore = Qdrant(
    embeddings=openai_embeddings,
    client=qdrant_client,
    collection_name=os.getenv("QDRANT_COLLECTION_NAME"),
)

vectorstore.similarity_search(query='some query', k=4)
```
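One plausible explanation (an assumption worth verifying against your wrapper versions, not a confirmed diagnosis): the two wrappers may default to different payload keys for the document text — the Python wrapper writes it under `page_content`, while the JS wrapper reads a key such as `content`. A plain-dict sketch of the mismatch:

```python
# A Qdrant point payload as the Python wrapper writes it by default.
point_payload = {
    "page_content": "Some content of a document",
    "metadata": {"source": "https://some.source.com", "title": "some title"},
}

python_default_key = "page_content"
js_default_key = "content"  # assumed JS-side default; verify for your version

print(point_payload.get(python_default_key))  # the text is found
print(point_payload.get(js_default_key))      # None -> surfaces as undefined pageContent
```

If that is the cause, aligning the key on either side (e.g. via the Python wrapper's `content_payload_key` argument or the JS wrapper's equivalent option, if your versions expose them) should make `pageContent` resolve.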
Please assist in resolving this discrepancy in the behavior between the two wrappers.
### Expected behavior
The expected behavior is that when performing a semantic search using the JavaScript LangChain Qdrant wrapper, the `results` list of Documents should contain valid `pageContent` along with correct metadata, similar to the behavior in the Python LangChain Qdrant wrapper.
Expected result (works with the python Qdrant wrapper):
```bash
[Document(page_content='\n Some content of a document ', metadata={'source': 'https://some.source.com', 'title': 'some title'})
...
]
```
Actual result:
```bash
[Document(page_content=undefined, metadata={'source': 'https://some.source.com', 'title': 'some title'})
...
]
``` | JavaScript LangChain Qdrant semantic search results: pageContent in each Document is undefined | https://api.github.com/repos/langchain-ai/langchain/issues/9760/comments | 3 | 2023-08-25T13:08:16Z | 2023-08-25T13:27:22Z | https://github.com/langchain-ai/langchain/issues/9760 | 1,867,029,264 | 9,760 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hello,
We have two separate docker images, as follows.
1. One for the purpose of loading documents, tokenizing them, and creating embeddings that are stored in a vector store DB like this: `vector_store_db: Weaviate = weaviateInstance.from_documents(documents, self.embeddings, by_text=False)`
2. Another docker image running FastAPI, which receives the actual query. We want to be able to store the `vector_store_db` (created by the first docker image) in a Redis store so that the second docker image can get the `vector_store_db` from Redis and execute the query against it by invoking a function like `similar_doc = vector_store_db.similarity_search("Question ?", k=1)`
We tried a number of options to store the `vector_store_db` (which is of type Weaviate, per the documentation here [https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.weaviate.Weaviate.html#langchain.vectorstores.weaviate.Weaviate.from_texts](url)) in Redis, but we are getting serialization issues. We tried `pickle`, `dill`, and `json` with no luck yet.
And then we came across the issue link [https://github.com/langchain-ai/langchain/issues/9288](url), tried the `dumps()` option, and our relevant code snippet looks like this:
```python
vs_redis_obj = dumps(vector_store_db)

redis_client = redis.Redis(host='localhost', port=6379, encoding="utf-8", decode_responses=True)
redis_client.set("ourkey", vs_redis_obj)
vs_obj: Weaviate = redis_client.get("ourkey")

# Start: sample code for querying the vector store DB
similar_doc = vs_obj.similarity_search("Who is trying to invade earth?", k=1)
```
but we are getting the error `Error :'str' object has no attribute 'similarity_search'`
Basically, we kind of get why the error occurs, for the following reasons:
1. When we store the object `vector_store_db` in Redis, it gets serialized to its `str` equivalent.
2. So when we do `redis.get()` we get the `vector_store_db` back as its `str` equivalent, and this is the reason why our call to `similarity_search()` fails.
Any ideas how we can fix this, please?
Basically, we need to be able to make the `vector_store_db` (created by one docker image) available to another docker image through Redis.
Any help / suggestion is much appreciated and thanks in advance.
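One hedged workaround sketch (assuming the Weaviate server itself is reachable from both containers): since the vectors live in Weaviate rather than in the Python object, Redis only needs to carry the parameters for reconstructing the wrapper, not the wrapper itself. A stdlib-only illustration with a dict standing in for the Redis client:

```python
import json

# Stand-in for the Redis client; any shared key-value store works the same way.
fake_redis = {}

# Container 1: publish only the parameters needed to rebuild the wrapper.
connection_info = {
    "url": "http://weaviate:8080",  # hypothetical service name
    "index_name": "Document",
    "text_key": "text",
}
fake_redis["vectorstore_config"] = json.dumps(connection_info)

# Container 2: read the config back and reconstruct the client locally.
config = json.loads(fake_redis["vectorstore_config"])
# Real code would then do something like (names depend on your setup):
#   client = weaviate.Client(config["url"])
#   vector_store_db = Weaviate(client, config["index_name"], config["text_key"])
#   vector_store_db.similarity_search("Who is trying to invade earth?", k=1)
print(config["index_name"])  # Document
```

This sidesteps serializing the live client objects (sockets and thread locks are generally not picklable), which is the likely root of the `pickle`/`dill` failures.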
### Suggestion:
_No response_ | Issue: To be able to store Weaviate (for that matter any vector store) Vector Store in REDIS | https://api.github.com/repos/langchain-ai/langchain/issues/9758/comments | 10 | 2023-08-25T12:46:04Z | 2023-12-03T16:04:41Z | https://github.com/langchain-ai/langchain/issues/9758 | 1,866,995,735 | 9,758 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Hey, I wanted to request a feature within the map-reduce chain where a person can send their own list of chunks of a corpus instead of having chunks created by passing a text splitter and a corpus.
### Motivation
Sometime one might see map-reduce cases where one wants to use their own chunks of data and don't want to split a data corpus based on sending a textsplitter or character splitter.
### Your contribution
I can work on it by raising a PR. | Custom Map-Reduce chain | https://api.github.com/repos/langchain-ai/langchain/issues/9757/comments | 2 | 2023-08-25T12:39:58Z | 2023-12-01T16:07:08Z | https://github.com/langchain-ai/langchain/issues/9757 | 1,866,987,071 | 9,757 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Can I load my local model with `chain = LLMChain(llm=chat, prompt=chat_prompt)`?
### Suggestion:
_No response_ | Issue: can i load my local model by chain = LLMChain(llm=chat, prompt=chat_prompt) | https://api.github.com/repos/langchain-ai/langchain/issues/9752/comments | 4 | 2023-08-25T09:51:51Z | 2023-12-01T16:07:13Z | https://github.com/langchain-ai/langchain/issues/9752 | 1,866,736,311 | 9,752 |
[
"langchain-ai",
"langchain"
] | ### System Info
## Description:
### Context:
I'm using LangChain to develop an application that interacts with the gpt-3.5-turbo-16k model to handle long chains of up to 16384 tokens.
### Problem:
While the first message processes quickly **(especially if I have no previous context in the inputs)**, from the second message onward, I experience excessively long execution times, exceeding 5 minutes. On occasions, I receive timeout errors exceeding 10 minutes, like the following:
`Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 1.0 seconds as it raised Timeout: Request timed out: HTTPSConnectionPool(host='api.openai.com', port=443): Read timed out. (read timeout=600).`
#### **It's worth noting that when using the OpenAI API directly with the same context and length, the response is almost immediate.**
### Relevant Code:
```
from langchain.chains.conversation.prompt import ENTITY_MEMORY_CONVERSATION_TEMPLATE
from langchain.chat_models import ChatOpenAI
from langchain.memory.entity import ConversationEntityMemory
def create_conversation_chain(inputs, num_msgs=3):
"""
Creates the base instance for the conversation with the llm and the memory
:param num_msgs: Number of messages to include in the memory buffer
:return: The conversation chain instance
"""
load_dotenv()
llm = ChatOpenAI(
temperature=0,
model_name=MODEL,
verbose=False,
)
memory = ConversationEntityMemory(
llm=llm,
k=num_msgs,
)
if inputs:
for inp in inputs:
memory.save_context(inp[0], inp[1])
conversation = ConversationChain(
llm=llm,
memory=memory,
prompt=ENTITY_MEMORY_CONVERSATION_TEMPLATE,
verbose=True,
)
return conversation
conversation = create_conversation_chain(inputs=self.input_msgs_entries, num_msgs=num_msgs_to_include)
ans = self.conversation.predict(input=msg)
```
Feel free to send me questions about my code if you need to know something else, but essentialy that is what I have
### Additional Details:
1. Operating System: Windows 10
2. Python Version: 3.10
3. LangChain Version: 0.0.271
4. I've tried with the streaming=True parameter because I saw that in another issue, but the results remain the same.
### Request:
Could you help me identify and address the cause of these prolonged execution times when using LangChain, especially compared to direct use of the OpenAI API?
Thank you very much for your help!! ^^
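A possible explanation to rule out (my reading of `ConversationEntityMemory`, worth confirming with the `verbose=True` logs): the entity memory issues additional LLM calls on every turn — entity extraction when loading memory and per-entity summarization when saving — so one `.predict` can cost several long-context completions, unlike a single direct OpenAI API call. A toy sketch counting calls through a wrapper:

```python
import functools

calls = {"n": 0}

def counting(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        calls["n"] += 1
        return fn(*args, **kwargs)
    return wrapper

# Hypothetical stand-in for the chat model; in the real chain, entity
# extraction, entity summarization, and the final answer each hit the API.
@counting
def fake_llm(prompt: str) -> str:
    return "ok"

for step in ("extract entities", "summarize entities", "answer"):
    fake_llm(step)

print(calls["n"])  # 3 model calls behind a single user message
```

If the logs confirm several long prompts per turn, that multiplication — not LangChain overhead per se — would account for most of the gap versus a single direct API call.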
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
## Steps to Reproduce the Behavior:
1. Setup Environment:
> - Install LangChain version 0.0.271 on a Windows 10 (but i tryed it in ubuntu and same problems) machine with Python 3.10.
> - gpt-3.5-turbo-16k model.
2. Implement the Conversation Chain:
> - Utilize the create_conversation_chain function provided in the initial description.
3. Provide Context:
> - Use a context (inputs) that is long enough to approach 6000 to 9000 (context + msg) tokens, though I am also getting these time-consuming responses with lower bounds. Below I attach a file with text (it is Spanish text) that I use as an input to call gpt-3.5-turbo-16k.
4. Initialize and Predict:
> - Call the function:
> `conversation = create_conversation_chain(inputs=input_msgs_entries, num_msgs=num_msgs_to_include_in_buffer)`
> - Then, predict using:
> `ans = conversation.predict(input=msg)`
5. Observe Delay:
> - Note that while the first message processes quickly **(especially if I have no previous context in the inputs)**, subsequent messages experience prolonged execution times, sometimes exceeding 10 minutes.
> - Occasionally, timeout errors might occur, indicating a failure in the request due to excessive waiting time.
6. Compare with Direct OpenAI API:
> - Directly send the same context and message to the OpenAI API, without using LangChain.
> - Observe that the response is almost immediate, highlighting the difference in performance.
7. Test with Streaming:
> - Set the streaming=True parameter and observe that the prolonged execution times persist.
[test_random_conv_text.txt](https://github.com/langchain-ai/langchain/files/12437477/test_random_conv_text.txt)
### Expected behavior
## Expected Behavior:
When utilizing the create_conversation_chain function with the gpt-3.5-turbo-16k model to handle chains close to 16384 tokens:
1. **Consistent Performance:** The execution times for each message, regardless of it being the first or subsequent ones, should be relatively consistent and not show drastic differences.
2. **Reasonable Response Times:** Even for longer contexts approaching the model's token limit, the response time should be within a reasonable range, certainly not exceeding 10 minutes for a single prediction.
3. **No Timeout Errors:** The system should handle the requests efficiently, avoiding timeout errors, especially if the direct OpenAI API call with the same context responds almost immediately.
4. **Memory Efficiency:** The memory management system, especially when handling the context and inputs, should efficiently store and retrieve data without causing significant delays. | Prolonged execution times when using ConversationChain and ChatOpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/9750/comments | 4 | 2023-08-25T09:08:55Z | 2023-12-27T16:05:53Z | https://github.com/langchain-ai/langchain/issues/9750 | 1,866,661,448 | 9,750 |
[
"langchain-ai",
"langchain"
] | ### Discussed in https://github.com/langchain-ai/langchain/discussions/9605
<div type='discussions-op-text'>
<sup>Originally posted by **nima-cp** August 22, 2023</sup>
Hello everyone, I want to have a Q&A over some documents, including pdf, xml, and csv. I'm having some difficulty writing a DirectoryLoader for different types of files in a folder. I'm using Chroma and couldn't find a way to solve this. However, I found this in the TS documentation:
```typescript
const directoryLoader = new DirectoryLoader(filePath, {
'.pdf': (path) => new PDFLoader(path, { splitPages: true }),
'.docx': (path) => new DocxLoader(path),
'.json': (path) => new JSONLoader(path, '/texts'),
'.jsonl': (path) => new JSONLinesLoader(path, '/html'),
'.txt': (path) => new TextLoader(path),
'.csv': (path) => new CSVLoader(path, 'text'),
'.htm': (path) => new UnstructuredLoader(path),
'.html': (path) => new UnstructuredLoader(path),
'.ppt': (path) => new UnstructuredLoader(path),
'.pptx': (path) => new UnstructuredLoader(path),
});
```
How can I write the same in Python?
```python
loader = DirectoryLoader(
    filePath,
    glob="./*.pdf",
    loader_cls=PyMuPDFLoader,
    show_progress=True,
)
```</div> | DirectoryLoader for different file types | https://api.github.com/repos/langchain-ai/langchain/issues/9749/comments | 5 | 2023-08-25T09:03:10Z | 2024-04-22T10:04:26Z | https://github.com/langchain-ai/langchain/issues/9749 | 1,866,651,760 | 9,749 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I would like to have a BambooHR Tool here (https://python.langchain.com/docs/integrations/tools/).
To make it possible to request information about employees within the company, their vacations, and so on.
### Motivation
BambooHR is quite a popular service, so I believe this tool will be used a lot.
### Your contribution
I am willing to contribute by coding a portion, but I am hesitant to code everything as it may be too much. It would be great if other enthusiasts could also join in.
I've already found the BambooHR OpenAPI file
[bamboo_openapi.json.zip](https://github.com/langchain-ai/langchain/files/12437406/bamboo_openapi.json.zip)
| BambooHR Tool | https://api.github.com/repos/langchain-ai/langchain/issues/9748/comments | 16 | 2023-08-25T08:56:25Z | 2023-12-01T16:07:18Z | https://github.com/langchain-ai/langchain/issues/9748 | 1,866,641,171 | 9,748 |
[
"langchain-ai",
"langchain"
Hi, I would like to build a chat bot that supports multiple users accessing it.
Since the LLM is very big, my VRAM can only hold one copy of it.
I would like to know if there is any way to load the model once and allow multiple concurrent accesses.
Here is what I just tried.
I tried to create two threads, and each thread runs the llm model with a different prompt.
Unfortunately, the responses are very strange. The r1 and r2 outputs are gibberish.
If I remove one of the threads, the response is good.
```python
from langchain.llms import CTransformers
from langchain.llms import LlamaCpp
from langchain import PromptTemplate, LLMChain
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
import yaml
import os
import threading
import datetime
import time


def job1():
    print("job1: ", datetime.datetime.now())
    question1 = "Please introduce the history of China"
    r1 = llm(question1)
    print("r1:", r1)


def job2():
    print("job2: ", datetime.datetime.now())
    question2 = "Please introduce the history of The United States"
    r2 = llm(question2)
    print("r2:", r2)


# load the model once
llm = LlamaCpp(
    model_path="/workspace/test/llama-2-7b-combined/ggml-model-q6_K.bin",
    n_gpu_layers=20,
    n_batch=128,
    n_ctx=2048,
    temperature=0.1,
    max_tokens=512,
)

t1 = threading.Thread(target=job1)
t2 = threading.Thread(target=job2)
t1.start()
t2.start()
t1.join()
t2.join()
print("Done.")
```
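For what it's worth, llama.cpp contexts are generally not safe for concurrent generation, so sharing one loaded model usually means serializing access. A hedged sketch (with a stand-in `llm` function in place of the real model): requests queue behind a lock rather than truly running in parallel, but the answers stay coherent:

```python
import threading

# Hypothetical stand-in for the loaded LlamaCpp instance; swap in the real llm.
def llm(prompt: str) -> str:
    return "answer to: " + prompt

_llm_lock = threading.Lock()

def generate(prompt: str) -> str:
    # Serialize access so concurrent callers never interleave token generation.
    with _llm_lock:
        return llm(prompt)

results = {}

def worker(key: str, prompt: str) -> None:
    results[key] = generate(prompt)

t1 = threading.Thread(target=worker, args=("r1", "history of China"))
t2 = threading.Thread(target=worker, args=("r2", "history of the United States"))
t1.start()
t2.start()
t1.join()
t2.join()
print(results["r1"])
print(results["r2"])
```

True parallelism would instead need one context per worker (more VRAM) or an inference server that batches requests.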
### Suggestion:
_No response_ | Issue: LLM Multiple access problem | https://api.github.com/repos/langchain-ai/langchain/issues/9747/comments | 4 | 2023-08-25T08:42:02Z | 2023-12-01T16:07:23Z | https://github.com/langchain-ai/langchain/issues/9747 | 1,866,620,005 | 9,747 |
[
"langchain-ai",
"langchain"
] | ### Feature request
1. Ideally, current `input_variables` should be separated into `required_variables` and `allowed_variables`
2. `allowed_variables` should consist of `required_variables` + `optional_variables`
3. Current implementation of `format_document` requires some overhaul, as suggested by #7239. Since `format_document` is part of the schema, it should either be a class or, at least, its formatting and validation parts should be separated.
```python
# from:
def format_document(doc: Document, prompt: BasePromptTemplate) -> str:
base_info = {"page_content": doc.page_content, **doc.metadata}
missing_metadata = set(prompt.input_variables).difference(base_info)
if len(missing_metadata) > 0:
required_metadata = [
iv for iv in prompt.input_variables if iv != "page_content"
]
raise ValueError(
f"Document prompt requires documents to have metadata variables: "
f"{required_metadata}. Received document with missing metadata: "
f"{list(missing_metadata)}."
)
document_info = {k: base_info[k] for k in prompt.input_variables}
return prompt.format(**document_info)
# into (assumes required_variables is input_variables - optional_variables, backward compatible, not ideal or elegant so far):
def _validate_document(doc: Document, prompt: BasePromptTemplate) -> None:
base_info = {"page_content": doc.page_content, **doc.metadata}
missing_metadata = set(prompt.required_variables).difference(base_info)
if missing_metadata:
raise ValueError(
f"Document prompt requires documents to have metadata variables: "
f"{prompt.required_variables}. Received document with missing metadata: "
f"{list(missing_metadata)}."
)
def _format_document(doc: Document, prompt: BasePromptTemplate) -> None:
base_info = {"page_content": doc.page_content, **doc.metadata}
document_info = {k: base_info[k] for k in prompt.input_variables} # or allowed_variables
return prompt.format(**document_info)
def format_document(doc: Document, prompt: BasePromptTemplate, validation_function: Callable = _validate_document, formatting_function: Callable = _format_document, **kwargs) -> str:
_validate_document(doc, prompt)
return _format_document(doc, prompt, **kwargs) # format_kwargs?
```
### Motivation
Given that both `f-string` and `jinja2` support some control logic, it seems quite logical to allow optional variables, or to make `format_document` function more customizable.
### Your contribution
I'd like to work on it, but I believe there's a need for further discussion. | Add `optional_variables` for templates and make `format_document` customizable | https://api.github.com/repos/langchain-ai/langchain/issues/9746/comments | 0 | 2023-08-25T07:46:15Z | 2023-08-28T08:18:56Z | https://github.com/langchain-ai/langchain/issues/9746 | 1,866,525,826 | 9,746 |
[
"langchain-ai",
"langchain"
] | ### System Info
As stated in the title, the query is returning, but not the relevant documents. The code snippet below illustrates the issue:
```python
query = 'building bridges'
filter = Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='construction_material', value='steel')
limit = None
[]
```
I know that the documents and data are stored correctly because the query I am using works fine with similarity_search, and it returns the appropriate text. After splitting the PDF, I had to recreate the metadata and add it along with the documents. The meta_data field prints off without any problems when I access it in the similarity_search as well.
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
### Steps to Reproduce the Behavior:
1. **Load PDF:** Begin by loading the PDF file that you will be working with.
2. **Append page_content:** Next, append the content from the PDF pages into an empty string called text.
3. **Split text Recursively:** Split the text string recursively to segment the content.
4. **Create Metadata for Split Text:** Use the following function to create metadata for the split text.
```python
def create_metadata(title: str, section: int, material: str) -> dict:
    metadata = {
        "title": title,
        "section": section,
        "material": material,
    }
    return metadata
```
5. **Loop Over Split Text:** Iterate through the split text, appending custom metadata to a list.
6. **Add Docs, Embeddings, Metadata:** Utilize the Chroma.from_texts method with the following parameters:
```python
vectordb = Chroma.from_texts(
    texts=docs,
    embedding=embedding,
    persist_directory=persist_directory,
    metadatas=metadatas_list,
)
```
**Proceed with SelfQueryRetriever:** Finally, proceed to use the SelfQueryRetriever.from_llm method as described in the documentation.
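One plausible cause to check first (inferred from the snippets shown, not confirmed): the self-query filter targets `construction_material`, while `create_metadata` stores the key as `material` — and an equality filter on a key that does not exist matches nothing. A plain-dict sketch of the effect:

```python
docs = [
    {
        "page_content": "building bridges",
        "metadata": {"title": "Bridges", "section": 1, "material": "steel"},
    },
]

def matches(metadata: dict, key: str, value) -> bool:
    # An equality filter only matches when the key actually exists.
    return metadata.get(key) == value

print([d["page_content"] for d in docs if matches(d["metadata"], "construction_material", "steel")])  # []
print([d["page_content"] for d in docs if matches(d["metadata"], "material", "steel")])
```

If this is the cause, aligning the `AttributeInfo` names with the keys produced by `create_metadata` should make the relevant documents come back.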
---------------------------------------------------------------------------------------------------------------------
### Note:
Everything works as intended with the similarity_search. The SelfQueryRetriever is returning as expected minus the relevant documents.
My suspicion is that the issue may be related to the Documents() class, but I recreated the object/class without any success regarding output. It should still function properly if the data is inserting fine into the database and all the other queries are working fine. What has lead me to this place early on is an issue arises with PDFs when they are split; thus, the workarounds are either:
**Appending into an Empty String:** This is necessary because metadata becomes distorted, and page break behavior takes precedence over separators and chunk size.
**Converting PDF to Image and Then to Text:** The process is PDF -> IMG -> Tesseract -> Text, which still requires metadata to be recreated.
### Expected behavior
Output the query and the data related to it, not just the query. | unexpected behavior: retriever.get_relevant_documents | https://api.github.com/repos/langchain-ai/langchain/issues/9744/comments | 2 | 2023-08-25T06:54:26Z | 2023-12-01T16:07:28Z | https://github.com/langchain-ai/langchain/issues/9744 | 1,866,443,166 | 9,744 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I've been using Langchain to connect with the MongoDB vector store. While the file upload functionality works seamlessly, I encounter an error when trying to use the similarity search feature. Here's the error message I receive:

### Suggestion:
_No response_ | Issue: Error in Similarity Search with MongoDB Vector Store in Langchain | https://api.github.com/repos/langchain-ai/langchain/issues/9735/comments | 3 | 2023-08-25T03:52:38Z | 2024-02-10T16:18:57Z | https://github.com/langchain-ai/langchain/issues/9735 | 1,866,229,858 | 9,735 |
[
"langchain-ai",
"langchain"
] | ### System Info
Hi, I'm getting the following error with LangChain integration with AWS SageMaker:
```
ValueError: Error raised by inference endpoint: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (424) from primary with message "{
  "code":424,
  "message":"prediction failure",
  "error":"Input payload must contain a 'inputs' key and optionally a 'parameters' key containing a dictionary of parameters."
}".
```
I've tried adding a custom attribute to accept any relevant terms in order to run my model, but I'm still having issues. See below for my initialization of the model:
```python
chain = load_qa_chain(
    llm=SagemakerEndpoint(
        endpoint_name="endpointname",
        credentials_profile_name="profilename",
        region_name="region",
        model_kwargs={"temperature": 1e-10},
        endpoint_kwargs={"CustomAttributes": "accept_eula=true"},
        content_handler=content_handler,
    ),
    prompt=PROMPT,
)
```
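The 424 response suggests the container expects a different request body than the content handler is producing. Below is a hedged sketch of a transform that emits the shape named in the error (`inputs` plus an optional `parameters` dict); the keys are inferred from the message, not verified against this endpoint:

```python
import json

# Hedged sketch: build the payload shape the endpoint reports expecting.
# The 'inputs'/'parameters' keys come from the error message and are an
# assumption about this particular container, not a confirmed fix.
def transform_input(prompt: str, model_kwargs: dict) -> bytes:
    return json.dumps({"inputs": prompt, "parameters": model_kwargs}).encode("utf-8")

body = transform_input("What is the answer?", {"temperature": 1e-10})
print(json.loads(body)["parameters"]["temperature"])  # 1e-10
```

In LangChain terms, this would go in the content handler's input transform rather than in the chain itself.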
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
chain = load_qa_chain(
    llm=SagemakerEndpoint(
        endpoint_name="endpointname",
        credentials_profile_name="profilename",
        region_name="region",
        model_kwargs={"temperature": 1e-10},
        endpoint_kwargs={"CustomAttributes": "accept_eula=true"},
        content_handler=content_handler,
    ),
    prompt=PROMPT,
)
```
### Expected behavior
error described above | AWS Sagemaker | https://api.github.com/repos/langchain-ai/langchain/issues/9733/comments | 9 | 2023-08-25T02:57:42Z | 2023-12-01T16:07:32Z | https://github.com/langchain-ai/langchain/issues/9733 | 1,866,186,574 | 9,733 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version: 0.0.271
Platform: Ubuntu 20.04
Device: Nvidia-T4
Python version: 3.9.17
### Who can help?
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from typing import Any, Dict, List, Optional
from langchain.pydantic_v1 import Field, root_validator
from langchain.llms import VLLM
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.prompts import (
ChatPromptTemplate,
MessagesPlaceholder,
SystemMessagePromptTemplate,
HumanMessagePromptTemplate,
)
from langchain.chains import LLMChain
from langchain.memory import ConversationBufferMemory
class MyVLLM(VLLM):
dtype: str = 'auto'
vllm_kwargs: Dict[str, Any] = Field(default_factory=dict)
@root_validator()
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that python package exists in environment."""
try:
from vllm import LLM as VLLModel
except ImportError:
raise ImportError(
"Could not import vllm python package. "
"Please install it with `pip install vllm`."
)
values["client"] = VLLModel(
model=values["model"],
tensor_parallel_size=values["tensor_parallel_size"],
trust_remote_code=values["trust_remote_code"],
dtype=values["dtype"],
**values['vllm_kwargs']
)
return values
llm = MyVLLM(model="tiiuae/falcon-7b",
trust_remote_code=True, # mandatory for hf models
max_new_tokens=128,
top_k=10,
top_p=0.95,
temperature=0.8,
dtype='float16',
vllm_kwargs = {'gpu_memory_utilization': 0.98},
callbacks=[StreamingStdOutCallbackHandler()]
)
# Prompt
prompt = ChatPromptTemplate(
messages=[
SystemMessagePromptTemplate.from_template(
"You are a nice chatbot having a conversation with a human."
),
# The `variable_name` here is what must align with memory
MessagesPlaceholder(variable_name="chat_history"),
HumanMessagePromptTemplate.from_template("{question}")
]
)
# Notice that we `return_messages=True` to fit into the MessagesPlaceholder
# Notice that `"chat_history"` aligns with the MessagesPlaceholder name
memory = ConversationBufferMemory(memory_key="chat_history",return_messages=True)
conversation = LLMChain(
llm=llm,
prompt=prompt,
verbose=True,
memory=memory
)
```
### Expected behavior
I am following the `Chatbots` example [here](https://python.langchain.com/docs/use_cases/chatbots).
It's not working as expected: the responses contain not just a single LLM reply but also fabricated Human turns. What is happening there?
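One frequent cause is the base LLM continuing the transcript past its own turn. A hedged client-side mitigation (not part of the linked example) is to trim the generation at the next role marker, mimicking a stop sequence:

```python
# Hedged workaround sketch: cut everything after a simulated role marker,
# since base (non-chat) LLMs often keep writing the dialogue for both sides.
def trim_at_stop(text: str, stops=("\nHuman:", "\nAI:")) -> str:
    cut = len(text)
    for s in stops:
        idx = text.find(s)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

print(trim_at_stop("Sure, I can help!\nHuman: thanks"))  # Sure, I can help!
```

Passing an equivalent stop list to the model itself is usually the cleaner fix, if the VLLM wrapper exposes one.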
 | `Chatbots` use case example is not working | https://api.github.com/repos/langchain-ai/langchain/issues/9732/comments | 3 | 2023-08-25T00:48:21Z | 2023-12-02T16:05:42Z | https://github.com/langchain-ai/langchain/issues/9732 | 1,866,073,423 | 9,732 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.271
### Who can help?
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain import PromptTemplate
from langchain.agents import AgentType
from dotenv import load_dotenv
from langchain.chat_models import ChatOpenAI
from langchain.agents import initialize_agent
from api.utils.agent.tools import view_capabilities_tool

load_dotenv()

system_message = "You are required to use only the tools provided to answer a query. If no given tool can help,"\
    " truthfully tell the user that you are unable to help them. Always end reply with see ya!."\
    " Query: {query}"

prompt_template = PromptTemplate(
    template=system_message,
    input_variables=["query"],
)

capabilities = view_capabilities_tool.CapabilitiesTool()
llm = ChatOpenAI(temperature=0)
agent_chain = initialize_agent(
    [capabilities],
    llm,
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    agent_kwargs={
        "system_message": system_message
    }
)

response = agent_chain.run(input="What can you do")
print(response)
```
capabilities_tool:
```python
from typing import Type

from langchain.tools import BaseTool
from pydantic import BaseModel, BaseSettings


class CapabilitiesToolSchema(BaseModel):
    pass


class CapabilitiesTool(BaseTool, BaseSettings):
    name = "capabilities_tool"
    description = """Tool that shows what you are capable of doing."""
    args_schema: Type[CapabilitiesToolSchema] = CapabilitiesToolSchema

    def _run(self) -> str:
        body = ("I can help you out with"
                "\nAdding a site\nRemoving a site\nAdding an interest\nRemoving an interest\nViewing your details\n "
                "Opt out")
        return body
```
### Expected behavior
The model is meant to go through the whole reasoning process, select a tool, and wait for the response from that tool call.
Instead the agent just stops at `Action_Input: ...` every time. The model doesn't use any of the given tools; sometimes it gives these steps:
> Entering new AgentExecutor chain...
I can use the capabilities_tool to see what I am capable of doing. Let me check.
> Finished chain.
I can use the capabilities_tool to see what I am capable of doing. Let me check. | Langchain agent doesn't complete reasoning sequence stops halfway and can't use structured tools | https://api.github.com/repos/langchain-ai/langchain/issues/9728/comments | 3 | 2023-08-24T23:02:33Z | 2024-02-25T19:01:25Z | https://github.com/langchain-ai/langchain/issues/9728 | 1,866,002,041 | 9,728 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Hi there,
Currently, when using PGVector you can pass a filter object, but this filter object only allows key-value pairs: `Dict[str, str]`.
I am requesting the ability to also send a list of strings, for easy filtering across many pieces of data: `Dict[str, list[str]]`.
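For illustration, the requested semantics could look like this in plain Python (a sketch of the matching rule only, not PGVector's SQL layer; names are assumptions):

```python
# Hedged sketch of the requested semantics: a filter value that is a list
# matches any of its members; scalar values keep the existing equality check.
def matches(metadata: dict, flt: dict) -> bool:
    for key, expected in flt.items():
        value = metadata.get(key)
        if isinstance(expected, list):
            if value not in expected:
                return False
        elif value != expected:
            return False
    return True

print(matches({"type": "blog"}, {"type": ["blog", "doc"]}))  # True
```

On the SQL side this would presumably translate to an `IN (...)` clause over the JSONB metadata column.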
### Motivation
I have a lot of data whose metadata contains a content type.
I would like to provide a list of content types to PGVector and have it filter and return content of multiple types.
### Your contribution
N/A | Pgvector support to filter by List | https://api.github.com/repos/langchain-ai/langchain/issues/9726/comments | 3 | 2023-08-24T22:50:34Z | 2023-12-11T16:05:58Z | https://github.com/langchain-ai/langchain/issues/9726 | 1,865,993,227 | 9,726 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain Version: 0.0.258
Python Version 3.8.10
Ubuntu 20.04.5 LTS
We are following the instructions from the blog posted at https://python.langchain.com/docs/use_cases/question_answering/
We find that this works on small documents/directories. However, when we run it on larger data sets, we get rate limit errors like the ones below:
```
Retrying langchain.embeddings.openai.embed_with_retry.<locals>._embed_with_retry in 4.0 seconds as it raised RateLimitError: Rate limit reached for default-text-embedding-ada-002 in organization org-RVhTeiJGcKtuLfYUSO6rLABk on tokens per min. Limit: 1000000 / min. Current: 899517 / min. Contact us through our help center at help.openai.com if you continue to have issues..
Retrying langchain.embeddings.openai.embed_with_retry.<locals>._embed_with_retry in 4.0 seconds as it raised RateLimitError: Rate limit reached for default-text-embedding-ada-002 in organization org-RVhTeiJGcKtuLfYUSO6rLABk on tokens per min. Limit: 1000000 / min. Current: 887493 / min. Contact us through our help center at help.openai.com if you continue to have issues..
```
Since we have a paid account with OpenAI, we doubt we are running into any actual limits on OpenAI's side. Looking at their dashboard, we see we are well under any limits.
Our full code that demonstrates this issue is posted below:
```
import os
import sys
import environ
from langchain.document_loaders import DirectoryLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
env = environ.Env()
environ.Env.read_env()
# Load API key and document location
OPENAI_API_KEY = env('OPENAI_API_KEY')
if OPENAI_API_KEY == "":
print("Missing OpenAPI key")
exit()
print("Using OpenAPI with key ["+OPENAI_API_KEY+"]")
path = sys.argv[1]
if path == "":
print("Missing document path")
exit()
# Document loading
loader = DirectoryLoader(path, glob="*")
data = loader.load()
# Text splitting
text_splitter = RecursiveCharacterTextSplitter(chunk_size = 500, chunk_overlap = 0)
all_splits = text_splitter.split_documents(data)
# Create retriever
vectorstore = Chroma.from_documents(documents=all_splits, embedding=OpenAIEmbeddings())
# Connect to LLM for generation
template = """Use the following pieces of context to answer the question at the end.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Use three sentences maximum and keep the answer as concise as possible.
Always say "thanks for asking!" at the end of the answer.
{context}
Question: {question}
Helpful Answer:"""
QA_CHAIN_PROMPT = PromptTemplate.from_template(template)
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
qa_chain = RetrievalQA.from_chain_type(
llm,
retriever=vectorstore.as_retriever(),
chain_type_kwargs={"prompt": QA_CHAIN_PROMPT}
)
# prompt loop
def get_prompt():
print("Type 'exit' to quit")
while True:
prompt = input("Enter a prompt: ")
if prompt.lower() == 'exit':
print('Exiting...')
break
else:
try:
result = qa_chain({"query": prompt})
print(result["result"])
except Exception as e:
print(e)
get_prompt()
```
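Independent of the account's actual limits, one client-side mitigation is to slow down and retry around the embedding step. LangChain already retries internally; this sketch only illustrates the backoff idea, with a stand-in error type:

```python
import time

# Hedged sketch of client-side exponential backoff around an embedding call;
# RuntimeError stands in for openai.error.RateLimitError, and the wrapped
# function is a placeholder for whatever actually hits the API.
def with_backoff(fn, retries=5, base=1.0):
    for attempt in range(retries):
        try:
            return fn()
        except RuntimeError:
            time.sleep(base * (2 ** attempt))
    raise RuntimeError("rate limit retries exhausted")

attempts = {"n": 0}

def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("rate limited")
    return "ok"

print(with_backoff(flaky, base=0.0))  # ok
```

Chunking the corpus and embedding it in smaller batches, so fewer tokens hit the API per minute, is the complementary fix.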
### Who can help?
@eyurtsev @hwchase17 @agola11
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
You can run the code posted above against any document/directory that is large... try with something over 50MB
### Expected behavior
A correct chat reply | Langchain QA over large documents results in Rate limit errors | https://api.github.com/repos/langchain-ai/langchain/issues/9717/comments | 6 | 2023-08-24T19:54:13Z | 2023-12-12T16:33:44Z | https://github.com/langchain-ai/langchain/issues/9717 | 1,865,791,331 | 9,717 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
It looks like it's creating new dataframes to answer complex questions, but in the answer it provides it keeps referencing the variable name rather than the printed variable.
Does the current agent not have the ability to create and use new variables?
Note: I'm using the CSV agent as a tool within another agent
### Suggestion:
Will the CSV agent having the create file tool alleviate this? | Issue: pandas agent tries to create new variables but returns along the lines of "the top 10 are {top_10_df}" | https://api.github.com/repos/langchain-ai/langchain/issues/9715/comments | 8 | 2023-08-24T19:07:43Z | 2024-07-04T16:06:38Z | https://github.com/langchain-ai/langchain/issues/9715 | 1,865,729,449 | 9,715 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain Version: 0.0.271, Python 3.11
As the title says, `_aget_relevant_documents` isn't implemented in `ParentDocumentRetriever`, so async calls are not working. It throws a `NotImplementedError` in `BaseRetriever`:
```python
    async def _aget_relevant_documents(
        self, query: str, *, run_manager: AsyncCallbackManagerForRetrieverRun
    ) -> List[Document]:
        """Asynchronously get documents relevant to a query.

        Args:
            query: String to find relevant documents for
            run_manager: The callbacks handler to use

        Returns:
            List of relevant documents
        """
        raise NotImplementedError()
```
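Until the async method lands upstream, one hedged workaround is a thin subclass whose async path delegates to the sync one in a thread. The classes below are stand-ins, and a real override would also accept the `run_manager` argument:

```python
import asyncio

# Hedged workaround sketch: run the existing sync retrieval in a thread so
# async chains can await it (stand-in class, not the langchain retriever).
class SyncBackedRetriever:
    def _get_relevant_documents(self, query):
        return [f"doc for {query}"]

    async def _aget_relevant_documents(self, query):
        return await asyncio.to_thread(self._get_relevant_documents, query)

print(asyncio.run(SyncBackedRetriever()._aget_relevant_documents("x")))
```

`asyncio.to_thread` keeps the event loop responsive while the blocking vector-store call runs.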
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Just do a `chain.acall` from a chain using documents (like a `ConversationalRetrievalChain`), and it should trigger this error.
### Expected behavior
Should be able to call the chain async when using ParentDocumentRetriever. | ParentDocumentRetriever doesn't implement BaseRetriever._aget_relevant_documents | https://api.github.com/repos/langchain-ai/langchain/issues/9707/comments | 5 | 2023-08-24T15:28:16Z | 2023-12-01T16:07:43Z | https://github.com/langchain-ai/langchain/issues/9707 | 1,865,405,504 | 9,707 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
The "Creating a Custom Prompt Template" documentation is outdated with Pydantic v2.
```
class FunctionExplainerPromptTemplate(StringPromptTemplate, BaseModel):
"""A custom prompt template that takes in the function name as input, and formats the prompt template to provide the source code of the function."""
@validator("input_variables")
def validate_input_variables(cls, v):
"""Validate that the input variables are correct."""
if len(v) != 1 or "function_name" not in v:
raise ValueError("function_name must be the only input_variable.")
return v
```
The above code raises the following TypeError:
```
> class FunctionExplainerPromptTemplate(StringPromptTemplate, BaseModel):
E TypeError: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases
```
Frankly, I have no idea how to get the prompt templates to work with new Pydantic, even after changing @validator to @field_validator.
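For what it's worth, the workaround often suggested is to drop the explicit `BaseModel` base (the template class is already a pydantic model) and use `field_validator`. Sketched here with a stand-in base class so it runs without langchain; whether langchain's own classes accept v2 validators is a separate question:

```python
from pydantic import BaseModel, field_validator

# Stand-in for langchain's StringPromptTemplate, just to show the shape;
# the point is dropping the extra BaseModel base and using field_validator.
class StringPromptTemplate(BaseModel):
    input_variables: list[str]

class FunctionExplainerPromptTemplate(StringPromptTemplate):
    @field_validator("input_variables")
    @classmethod
    def validate_input_variables(cls, v):
        if len(v) != 1 or "function_name" not in v:
            raise ValueError("function_name must be the only input_variable.")
        return v

t = FunctionExplainerPromptTemplate(input_variables=["function_name"])
print(t.input_variables)  # ['function_name']
```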
### Idea or request for content:
Update the documentation to include new version of Pydantic in examples. | DOC: Custom Templates issue with Pydantic v2 | https://api.github.com/repos/langchain-ai/langchain/issues/9702/comments | 20 | 2023-08-24T14:13:41Z | 2024-02-16T16:09:02Z | https://github.com/langchain-ai/langchain/issues/9702 | 1,865,260,343 | 9,702 |
[
"langchain-ai",
"langchain"
] | ### System Info
Platform - AWS
Python version: 3.11.4
OS - Mac
### Who can help?
@3coins , @hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The code sample which produces the bug:
```
body = json.dumps({"prompt": prompt, "max_tokens_to_sample": 8192})
accept = "application/json"
contentType = "application/json"
response_claudeV2 = boto3_bedrock.invoke_model(
body=body, modelId="anthropic.claude-v2", accept=accept, contentType=contentType
)
response_body_claudeV2 = json.loads(response_claudeV2.get("body").read())
print(response_body_claudeV2.get("completion"))
```
When the above code snippet is executed with the boto3 client, we get the error:
` botocore.errorfactory.ValidationException: An error occurred (ValidationException) when calling the InvokeModel operation: Malformed input request: 8192 is not less or equal to 8191, please reformat your input and try again.`
As per the Anthropic Claude docs, claude-instant can output 9K tokens, claude-v1 can output 12K tokens, and claude-v2 can output 12K tokens, but when the `max_tokens_to_sample` parameter exceeds 8191 they give the above error.
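Whatever the documented model maxima, the Bedrock endpoint appears to validate `max_tokens_to_sample` against 8191, so a hedged client-side workaround is simply to clamp the value:

```python
# Hedged workaround sketch: clamp the parameter to the limit the API
# enforces (8191, per the error), regardless of the documented model sizes.
BEDROCK_MAX_TOKENS = 8191

def clamp_max_tokens(requested: int) -> int:
    return min(requested, BEDROCK_MAX_TOKENS)

print(clamp_max_tokens(8192))  # 8191
```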
### Expected behavior
Expected behaviour is that the model should give the output as it can produce max 12K tokens. | Calude models not able to output more than 8191 tokens. | https://api.github.com/repos/langchain-ai/langchain/issues/9697/comments | 6 | 2023-08-24T12:21:25Z | 2024-03-14T06:18:03Z | https://github.com/langchain-ai/langchain/issues/9697 | 1,865,041,669 | 9,697 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Initialization with Database Connection: When an instance of the PGVector class is created, it automatically establishes a connection with the PostgreSQL vector database.
Method for Closing Connection: we need to implement a method within the PGVector class that allows you to close the established connection with the PostgreSQL database.
```python
def __del__(self):
    # Close the session (and thus the connection) when the instance is destroyed.
    self.session.close()
```
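A hedged sketch of the requested shape, an explicit `close()` plus context-manager support, using stand-ins rather than PGVector internals (the real class would close the SQLAlchemy session it owns):

```python
# Stand-in sketch of the requested API: explicit close plus `with` support.
class Store:
    def __init__(self, session):
        self.session = session

    def close(self):
        self.session.close()

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.close()
        return False

class FakeSession:
    closed = False

    def close(self):
        self.closed = True

s = FakeSession()
with Store(s):
    pass
print(s.closed)  # True
```

An explicit `close()` is more predictable than relying on `__del__`, whose timing is up to the garbage collector.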
### Motivation
The problem is, I am unable to close a connection, so the pool gets overloaded with multiple connections and the service starts throwing errors.
### Your contribution
I guess, may be. | No way to Close an open connection in PGVector.py | https://api.github.com/repos/langchain-ai/langchain/issues/9696/comments | 3 | 2023-08-24T11:57:09Z | 2023-11-15T20:34:38Z | https://github.com/langchain-ai/langchain/issues/9696 | 1,865,001,390 | 9,696 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
It will always output
```
responded: {content}
```
I suppose this [line](https://github.com/langchain-ai/langchain/blob/v0.0.271/libs/langchain/langchain/agents/openai_functions_agent/base.py#L130) is not correct.
### Suggestion:
Is the following code the way you need?
```
content_msg = f"responded: {message.content}\n" if message.content else "\n"
``` | Agent by AgentType.OPENAI_FUNCTIONS cannot output message content. | https://api.github.com/repos/langchain-ai/langchain/issues/9695/comments | 1 | 2023-08-24T10:03:15Z | 2023-11-30T16:06:01Z | https://github.com/langchain-ai/langchain/issues/9695 | 1,864,822,137 | 9,695 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Error message:
```
Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIConnectionError: Error communicating with OpenAI: HTTPSConnectionPool(host='api.openai.com', port=443): Max retries exceeded with url: /v1/chat/completions (Caused by ProxyError('Cannot connect to proxy.', RemoteDisconnected('Remote end closed connection without response'))).
```
Code:
```python
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI

llm = OpenAI()
chat_model = ChatOpenAI()
chat_model.predict("hi!")
```
Environment:
```shell
export OPENAI_API_KEY="sk-xxxxxxujOoH"
```
### Suggestion:
_No response_ | Issue: APIConnectionError: Error communicating with OpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/9688/comments | 2 | 2023-08-24T06:59:49Z | 2023-11-30T02:15:30Z | https://github.com/langchain-ai/langchain/issues/9688 | 1,864,533,468 | 9,688 |
[
"langchain-ai",
"langchain"
] | 
The prompt was to explain the tables in the data; in this case it should query the SQL database. | DQL DB Langchain : When running an db_run query based on the prompt it should execute sql query only when needed. | https://api.github.com/repos/langchain-ai/langchain/issues/9686/comments | 5 | 2023-08-24T06:35:24Z | 2024-01-24T10:41:11Z | https://github.com/langchain-ai/langchain/issues/9686 | 1,864,499,596 | 9,686 |
[
"langchain-ai",
"langchain"
] | ### System Info
```
> Entering new AgentExecutor chain...
Invoking: `duckduckgo_search` with `2023年8月的新闻`
An error occurred: 'DDGS' object does not support the context manager protocol
```
### Who can help?
@hwchase17
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
# Initialize the OpenAI Functions agent
agent = initialize_agent(
    # tools,
    tools=load_tools(["ddg-search"]),
    llm=llm,
    agent=AgentType.OPENAI_FUNCTIONS,
    verbose=True,
    agent_kwargs=agent_kwargs,
    memory=memory,
    max_iterations=10,
    early_stopping_method="generate",
    handle_parsing_errors=True,  # initialize the agent and handle parsing errors
    callbacks=[handler],
)
```
```
> Entering new AgentExecutor chain...
Invoking: `duckduckgo_search` with `2023年8月的新闻`
An error occurred: 'DDGS' object does not support the context manager protocol
```
### Expected behavior
This class provides the ability to search via the [DuckDuckGo](https://duckduckgo.com/) search engine.
```python
from langchain.tools import DuckDuckGoSearchRun

search = DuckDuckGoSearchRun()
search.run("Who is winner of FIFA worldcup 2018?")
```
You should expect output like the following:
The 2018 FIFA World Cup was the 21st FIFA World Cup, ... Mario Zagallo (Brazil) and Franz Beckenbauer (Germany) have also achieved the feat. | An error occurred: 'DDGS' object does not support the context manager protocol | https://api.github.com/repos/langchain-ai/langchain/issues/9685/comments | 2 | 2023-08-24T05:50:09Z | 2023-11-30T16:06:11Z | https://github.com/langchain-ai/langchain/issues/9685 | 1,864,451,518 | 9,685 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Enable the option to specify arbitrary keyword arguments (e.g. gpu_memory_utilization=0.98) in langchain.llms.VLLM() constructor
### Motivation
Currently, the vllm library provides many useful keyword arguments that enable a wide variety of usages on many devices, but langchain doesn't expose them.
For example, some models do not work on GPUs with less memory because the default gpu_memory_utilization is 0.9; raising this limit could enable the use of those models on smaller GPUs.

### Your contribution
I can contribute by submitting a PR for that. | Allow specifying arbitrary keyword arguments in `langchain.llms.VLLM` | https://api.github.com/repos/langchain-ai/langchain/issues/9682/comments | 2 | 2023-08-24T05:41:55Z | 2023-11-30T16:06:16Z | https://github.com/langchain-ai/langchain/issues/9682 | 1,864,443,617 | 9,682 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain version == 0.0.271
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
chat_history = MessagesPlaceholder(variable_name="chat_history")
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
llm = ChatOpenAI(temperature=0, model_name='gpt-3.5-turbo')

agent_chain = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    memory=memory,
    max_iterations=1,
    early_stopping_method='generate',
    handle_parsing_errors=True,
    system_prompt="You are an assistant named Mikey",
    agent_kwargs={
        "memory_prompts": [chat_history],
        "input_variables": ["input", "agent_scratchpad", "chat_history"],
    }
)

agent_chain.run(input="My name is Dev")
agent_chain.run(input="What is my name")
```
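For context, what the buffer memory is supposed to contribute can be sketched in plain Python: prior turns get replayed into the prompt, which is the only way the agent can "remember" the name (stand-in class, not langchain's):

```python
# Stand-in sketch of buffer-style memory: earlier turns are rendered back
# into the prompt on every call, which is how the name would be recalled.
class BufferMemory:
    def __init__(self):
        self.history = []

    def save(self, user, ai):
        self.history += [("Human", user), ("AI", ai)]

    def render(self):
        return "\n".join(f"{role}: {msg}" for role, msg in self.history)

m = BufferMemory()
m.save("My name is Dev", "Nice to meet you, Dev!")
print("Dev" in m.render())  # True
```

If the placeholder never makes it into the rendered prompt, the model behaves exactly as reported.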
### Expected behavior
Expected behavior
final answer: Your name is Dev
Output behavior:
Final answer: I don't have access to personal information like your name. Is there anything else I can help you with?
Note: What happens is that the memory is not being passed along, so the model doesn't know my name | initialize_agent not saving and returning messages in memory | https://api.github.com/repos/langchain-ai/langchain/issues/9681/comments | 1 | 2023-08-24T05:30:13Z | 2023-08-24T05:40:45Z | https://github.com/langchain-ai/langchain/issues/9681 | 1,864,433,344 | 9,681 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
The basic documentation for the output parser is out of date with Pydantic v2. Tried with Python 3.9 and 3.11.
```
pip install langchain
pip install python-dotenv
```
The copied code:
```
from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from dotenv import load_dotenv
from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field, validator
from typing import List
load_dotenv()
model_name = 'text-davinci-003'
temperature = 0.0
model = OpenAI(model_name=model_name, temperature=temperature)
# Define your desired data structure.
class Joke(BaseModel):
setup: str = Field(description="question to set up a joke")
punchline: str = Field(description="answer to resolve the joke")
# You can add custom validation logic easily with Pydantic.
@validator('setup')
def question_ends_with_question_mark(cls, field):
if field[-1] != '?':
raise ValueError("Badly formed question!")
return field
# Set up a parser + inject instructions into the prompt template.
parser = PydanticOutputParser(pydantic_object=Joke)
prompt = PromptTemplate(
template="Answer the user query.\n{format_instructions}\n{query}\n",
input_variables=["query"],
partial_variables={"format_instructions": parser.get_format_instructions()}
)
# And a query intended to prompt a language model to populate the data structure.
joke_query = "Tell me a joke."
_input = prompt.format_prompt(query=joke_query)
output = model(_input.to_string())
parser.parse(output)
```
results in:
```
<input>:21: PydanticDeprecatedSince20: Pydantic V1 style `@validator` validators are deprecated. You should migrate to Pydantic V2 style `@field_validator` validators, see the migration guide for more details. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.3/migration/
Traceback (most recent call last):
File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/pydevconsole.py", line 364, in runcode
coro = func()
File "<input>", line 16, in <module>
File "/Users/travisbarton/opt/anaconda3/envs/scratch39/lib/python3.9/site-packages/pydantic/_internal/_model_construction.py", line 130, in __new__
cls.__pydantic_decorators__ = DecoratorInfos.build(cls)
File "/Users/travisbarton/opt/anaconda3/envs/scratch39/lib/python3.9/site-packages/pydantic/_internal/_decorators.py", line 441, in build
res.validators[var_name] = Decorator.build(
File "/Users/travisbarton/opt/anaconda3/envs/scratch39/lib/python3.9/site-packages/pydantic/_internal/_decorators.py", line 249, in build
func = shim(func)
File "/Users/travisbarton/opt/anaconda3/envs/scratch39/lib/python3.9/site-packages/pydantic/_internal/_decorators_v1.py", line 77, in make_generic_v1_field_validator
raise PydanticUserError(
pydantic.errors.PydanticUserError: The `field` and `config` parameters are not available in Pydantic V2, please use the `info` parameter instead.
For further information visit https://errors.pydantic.dev/2.3/u/validator-field-config-info
```
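For reference, the Pydantic v2 form of the doc's validator looks roughly like this; it updates only the user-side model, and does not address whether `PydanticOutputParser` itself accepts v2 models:

```python
from pydantic import BaseModel, Field, field_validator

# Hedged sketch: field_validator replaces the deprecated validator decorator.
class Joke(BaseModel):
    setup: str = Field(description="question to set up a joke")
    punchline: str = Field(description="answer to resolve the joke")

    @field_validator("setup")
    @classmethod
    def question_ends_with_question_mark(cls, v: str) -> str:
        if not v.endswith("?"):
            raise ValueError("Badly formed question!")
        return v

print(Joke(setup="Why did it fail?", punchline="Pydantic v2.").setup)
```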
### Idea or request for content:
Maybe fix the validator to match pydantic? Perhaps I'm mistaken? | DOC: <Please write a comprehensive title after the 'DOC: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/9680/comments | 2 | 2023-08-24T05:17:48Z | 2023-11-30T16:06:21Z | https://github.com/langchain-ai/langchain/issues/9680 | 1,864,422,532 | 9,680 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain version == 0.0.271
### Who can help?
@hw
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
chat_history = MessagesPlaceholder(variable_name="chat_history")
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
llm = ChatOpenAI(temperature=0, model_name='gpt-3.5-turbo')

agent_chain = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    memory=memory,
    max_iterations=1,
    early_stopping_method='generate',
    handle_parsing_errors=True,
    system_prompt="You are an assistant named Mikey",
    agent_kwargs={
        "memory_prompts": [chat_history],
        "input_variables": ["input", "agent_scratchpad", "chat_history"],
    }
)

agent_chain.run(input="What is my name")
```
### Expected behavior
final answer: Your name is ....
What happens is that the memory is not being returned(the name was given to the model in a previous run) so the model doesn't know my name | Initialize_agent not storing messages when memory is present | https://api.github.com/repos/langchain-ai/langchain/issues/9679/comments | 1 | 2023-08-24T05:04:25Z | 2023-08-24T05:25:36Z | https://github.com/langchain-ai/langchain/issues/9679 | 1,864,411,414 | 9,679 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I tried to use Erine Bot as an agent like below
```python
tools = [faq_tool, query_order_tool]
FORMAT_INSTRUCTIONS = """要使用工具,请按照以下格式返回:
Thought: 我是否需要使用一个工具? 是
Action: 采取的action应该是以下之一:[{tool_names}]
Action Input: action的输入参数
Observation: action的返回结果
如果你不需要使用工具, 请按以下格式返回:
Thought: 我是否需要使用工具? 否
{ai_prefix}: [把你的回复放在这里]
"""
PREFIX = """用中文回答以下问题. 如果找不到答案回答 '我不知道'. 你可以使用以下工具:"""
prompt_ = ConversationalAgent.create_prompt(
tools,
prefix = PREFIX,
format_instructions = FORMAT_INSTRUCTIONS,
input_variables=["input", "chat_history","agent_scratchpad"]
)
memory = ConversationBufferWindowMemory(memory_key="chat_history", return_messages=True, k=3)
chatllmChain = LLMChain(llm=ErnieBotChat(), callbacks=[self.logHandler], verbose = True)
agent = ConversationalAgent(llm_chain=chatllmChain)
callback_manager = CallbackManager([self.logHandler])
self.conversation_agent = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools,memory=memory, callback_manager = callback_manager, verbose=True, max_iterations=3)
```
Then I asked the question "你好,我应该怎么申请赔偿" ("Hello, how should I apply for compensation?").
The answer returned by the agent is:
Could not parse LLM output: `我不知道,但我会将你的问题传递给对应的工具来获取答案。我发送了一个消息到[FAQ](%E8%BF%94%E5%9B%9E)和[search_order](%E8%BF%94%E5%9B%9E)。请等待他们的回复。` (roughly: "I don't know, but I will pass your question on to the corresponding tools. I sent a message to [FAQ] and [search_order]. Please wait for their reply.")
This is not correct. The correct behavior is to call the FAQ tool and use the relevant information it returns to form a reply, which was proven to work when using the OpenAI chat model.
@axiangcoding Any idea how to make ERNIE Bot handle this situation? Thanks in advance!!!
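For context, here is why that reply is unparseable — a rough stdlib approximation of the conversational agent's format check (the real parser in langchain differs in detail; this is illustrative only):

```python
import re

def classify_reply(text: str, ai_prefix: str = "AI"):
    """Rough approximation of a conversational agent's output format check."""
    m = re.search(r"Action: (.*?)[\n]*Action Input: (.*)", text, re.DOTALL)
    if m:
        return ("tool", m.group(1).strip(), m.group(2).strip())
    if f"{ai_prefix}:" in text:
        return ("final", text.split(f"{ai_prefix}:")[-1].strip())
    raise ValueError("Could not parse LLM output")

# A well-formed tool call parses fine:
print(classify_reply("Thought: 是\nAction: FAQ\nAction Input: 申请赔偿"))
# ERNIE's free-form reply has neither an Action line nor the AI: prefix:
try:
    classify_reply("我不知道,但我会将你的问题传递给对应的工具来获取答案。")
except ValueError as e:
    print(e)
```

So the model is answering in free prose instead of the `Thought/Action/Action Input` format the prompt asks for, and the agent's parser has nothing to match.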
### Suggestion:
_No response_ | Issue: ERNIE Bot is not able to call tool | https://api.github.com/repos/langchain-ai/langchain/issues/9678/comments | 5 | 2023-08-24T03:51:14Z | 2023-11-30T16:06:26Z | https://github.com/langchain-ai/langchain/issues/9678 | 1,864,353,926 | 9,678 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
_No response_
### Idea or request for content:
_No response_ | Please share a complete document PDF OR markdown etc about langchain | https://api.github.com/repos/langchain-ai/langchain/issues/9677/comments | 2 | 2023-08-24T03:07:12Z | 2023-11-30T16:06:31Z | https://github.com/langchain-ai/langchain/issues/9677 | 1,864,308,955 | 9,677 |
[
"langchain-ai",
"langchain"
] | ### Feature request
We are currently using OpenAI's ChatCompletion API with a custom ChatPromptTemplate, since converting langchain's [ChatPromptTemplate](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/prompts/chat.py) to a dict (or vice versa) does not seem to work well.
So I'd like to suggest a new feature: enable loading ChatPromptTemplate from a config dict,
by adding a loading function (or a classmethod on ChatPromptTemplate) to [prompts/loading.py](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/prompts/loading.py#L151-L155).
Or, if there is already a standardized way to easily convert a dict to a ChatPromptTemplate, please let me know.
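Until such a loader exists, here is a minimal stdlib sketch of the kind of dict-to-messages conversion being requested (OpenAI-style message dicts carrying `{placeholder}` templates; this is illustrative code, not a langchain API):

```python
def format_chat_messages(message_templates, **variables):
    """Fill {placeholders} in OpenAI-style message dicts."""
    return [
        {"role": m["role"], "content": m["content"].format(**variables)}
        for m in message_templates
    ]

template_messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize this text: {text}"},
]
print(format_chat_messages(template_messages, text="the LangChain issue tracker"))
```

A loader on ChatPromptTemplate could do essentially this mapping from a config dict to typed message prompt templates.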
### Motivation
We are currently using OpenAI's ChatCompletion API with a custom ChatPromptTemplate, since converting langchain's ChatPromptTemplate to a dict (or vice versa) does not seem to work well.
As using OpenAI's ChatCompletion API (rather than the Completion API) becomes mainstream, I thought it might be good to support a standard loading method for ChatPromptTemplate.
### Your contribution
If you allow me, I'd like to make a contribution related to this feature. | Support load ChatPromptTemplate from config dict | https://api.github.com/repos/langchain-ai/langchain/issues/9676/comments | 2 | 2023-08-24T03:04:38Z | 2023-11-30T16:06:36Z | https://github.com/langchain-ai/langchain/issues/9676 | 1,864,307,161 | 9,676 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
langchain==0.0.250
platform = macOS
Python version 3.11
I am not sure if this is by design, I think it is not and hence reporting this as an issue.
When looping through a list of pdf files to get a summary for each I am creating an index using - VectorstoreIndexCreator().from_documents(pages).
The issue is that for each subsequent file, data (documents) from previous files is also retrieved, passed on to GPT, and ends up in the summary.
Code:
```python
from langchain.indexes import VectorstoreIndexCreator
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI

for file in files:
    loader = PyPDFLoader(file['path'])
    pages = loader.load_and_split()
    index = VectorstoreIndexCreator().from_documents(pages)
    retriever = index.vectorstore.as_retriever(search_type='mmr')
    retriever.search_kwargs['k'] = 10
    llm = ChatOpenAI()
    aca_qa = RetrievalQA.from_chain_type(
        llm=llm,
        retriever=retriever,
        chain_type='stuff',
        # return_source_documents=True,
    )
    result = aca_qa({'query': summary_query})
```
### Suggestion:
Temp fix:
The resolution is to include - index = '' at the beginning of each loop cycle | Issue: vectorstore hence indexes and embeddings are persisting when they should not be. | https://api.github.com/repos/langchain-ai/langchain/issues/9668/comments | 3 | 2023-08-23T23:32:29Z | 2023-11-30T16:06:41Z | https://github.com/langchain-ai/langchain/issues/9668 | 1,864,147,213 | 9,668 |
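A language-level illustration of the accumulation (a toy sketch, not langchain code): when the store object is shared across iterations, every pass keeps adding to the same collection, so later queries can retrieve earlier files' documents; creating a fresh store per file avoids this:

```python
class ToyStore:
    """Stand-in for a vector store collection."""
    def __init__(self):
        self.docs = []

    def add(self, docs):
        self.docs.extend(docs)

shared = ToyStore()
for pages in (["file1-page1"], ["file2-page1"]):
    shared.add(pages)          # accumulates across files
print(len(shared.docs))        # file1 content now answers file2 queries

counts = []
for pages in (["file1-page1"], ["file2-page1"]):
    fresh = ToyStore()         # fix: a brand-new store per file
    fresh.add(pages)
    counts.append(len(fresh.docs))
print(counts)
```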
[
"langchain-ai",
"langchain"
] | ### System Info
- Version: 0.0.271
- Platform: Macbook Pro M1 macos 13.5
- Python Version: 3.11.4
### Who can help?
@...
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce the behavior:
1. Copy the existing code from [Langchain Document - Json Agent](https://python.langchain.com/docs/integrations/toolkits/json)
2. Replace the llm model from OpenAI to GPT4All
3. Use the model [ggml-gpt4all-j-v1.3-groovy.bin](https://gpt4all.io/models/ggml-gpt4all-j-v1.3-groovy.bin)
4. Download the [JSON FIle](https://github.com/OAI/OpenAPI-Specification/blob/main/examples/v2.0/json/petstore.json) and provide the path in the script
5. Replace the question with `What are the required parameters in the request body to the /pets endpoint?`.
6. Output:
```
File "venv/lib/python3.11/site-packages/langchain/agents/mrkl/output_parser.py", line 61, in parse
    raise OutputParserException(
langchain.schema.output_parser.OutputParserException: Could not parse LLM output: 'Action: json_spec_list_keys(data)'
```
### Expected behavior
The agent should parse through and provide the answer. | Could not parse LLM output: `Action: json_spec_list_keys(data)` | https://api.github.com/repos/langchain-ai/langchain/issues/9658/comments | 2 | 2023-08-23T15:39:38Z | 2023-11-29T16:06:24Z | https://github.com/langchain-ai/langchain/issues/9658 | 1,863,582,262 | 9,658 |
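A sketch of why that output fails to parse: a ReAct-style parser expects separate `Action:` and `Action Input:` lines, while `Action: json_spec_list_keys(data)` puts the argument inline and has no `Action Input:` line at all (the regex below is an approximation, not copied from langchain):

```python
import re

# Approximation of a ReAct output parser's expectation.
PATTERN = r"Action\s*:\s*(.*?)\nAction\s+Input\s*:\s*(.*)"

good = "Action: json_spec_list_keys\nAction Input: data"
bad = "Action: json_spec_list_keys(data)"  # argument inline, no Action Input line

print(bool(re.search(PATTERN, good, re.DOTALL)))
print(bool(re.search(PATTERN, bad, re.DOTALL)))
```

Smaller local models often emit the function-call-style `Action: tool(arg)` form, which this grammar cannot match.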
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Modules -> prompt templates
This set code which is there in documentation is throwing error
```python
from langchain import PromptTemplate

invalid_prompt = PromptTemplate(
    input_variables=["adjective"],
    template="Tell me a {adjective} joke about {content}."
)
```
Because `content` is missing from `input_variables`, even though it is also one of the input variables used in the template.
error - *(screenshot of the resulting validation error omitted)*
**update the code -**
```python
from langchain import PromptTemplate

invalid_prompt = PromptTemplate(
    input_variables=["adjective", "content"],
    template="Tell me a {adjective} joke about {content}."
)
```
| Error in the code given in the prompt template | https://api.github.com/repos/langchain-ai/langchain/issues/9656/comments | 3 | 2023-08-23T15:02:55Z | 2023-12-04T16:05:28Z | https://github.com/langchain-ai/langchain/issues/9656 | 1,863,519,793 | 9656
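The underlying rule can be seen with plain `str.format` (stdlib only): every placeholder appearing in the template must be supplied, which is why `content` has to be declared as an input variable too:

```python
template = "Tell me a {adjective} joke about {content}."

try:
    template.format(adjective="funny")  # 'content' not supplied
except KeyError as e:
    print(f"missing variable: {e}")

print(template.format(adjective="funny", content="chickens"))
```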
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.235 / python 3.10.11 /
File: libs/langchain/langchain/output_parsers/json.py
There is a bug in the function `parse_json_markdown`.
When the input json_string contains a ``` $code ``` block, it mistakenly interprets $code as JSON and fails to parse.
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. make LLM answer with text and code (in \`\`\` $code \`\`\` format)
2. The output_parser will parse the $code instead of \`\`\` json \`\`\`
3. FAIL
### Expected behavior
maybe use re.findall and get the last search result
EX:
```
match = re.findall(r"```(json)?(.*?)```", json_string, re.DOTALL)
``` | output_parser has a bug while the output string has ``` code ``` | https://api.github.com/repos/langchain-ai/langchain/issues/9654/comments | 1 | 2023-08-23T11:58:25Z | 2023-11-29T16:06:34Z | https://github.com/langchain-ai/langchain/issues/9654 | 1,863,187,495 | 9,654 |
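A self-contained variant of the proposed fix (stdlib only; it takes the last fenced block, which matches the failure mode described above but is an assumption about where the JSON payload sits — the fence string is built programmatically so this snippet's own code fence stays intact):

```python
import json
import re

FENCE = "`" * 3  # literal triple backtick

def parse_json_markdown(json_string: str) -> dict:
    # Collect every fenced block; take the LAST one so a leading code block is skipped.
    matches = re.findall(FENCE + r"(?:json)?\n?(.*?)" + FENCE, json_string, re.DOTALL)
    candidate = matches[-1] if matches else json_string
    return json.loads(candidate.strip())

reply = (
    "Here is some code first:\n"
    + FENCE + "\nprint('hello')\n" + FENCE + "\n"
    + "And the answer:\n"
    + FENCE + 'json\n{"action": "final", "value": 42}\n' + FENCE
)
print(parse_json_markdown(reply))
```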
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Link not working: [OpenAPI](https://python.langchain.com/docs/use_cases/apis/openapi_openai)
where is the link? https://python.langchain.com/docs/modules/chains/how_to/openai_functions
### Idea or request for content:
_No response_ | DOC: Not working: <OpenAPI spec and create + execute valid requests against the API> | https://api.github.com/repos/langchain-ai/langchain/issues/9653/comments | 2 | 2023-08-23T11:28:20Z | 2023-11-29T16:06:39Z | https://github.com/langchain-ai/langchain/issues/9653 | 1,863,141,243 | 9,653 |
[
"langchain-ai",
"langchain"
] | I wrote a ChatGLM class that inherits from the LLM class, as below.
```
class ChatGLM(LLM):
def _call(self, prompt: str,
stop: Optional[List[str]] = None) -> str:
message = [{"role": "user", "content": prompt}]
payload = {"model": "string", ...}
headers = {"Content-Type": "application/json"}
response = requests.post(url, json=payload, headers=headers)
return response.json()['choices'][0]['message']['content']
```
And I hope to use the ChatGLM class I wrote to replace the previously used OpenAI class during sql querying with SQLDatabaseChain:
```
llm = ChatGLM(temperature=0, verbose=True)
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True, use_query_checker=True,
return_intermediate_steps=False)
db_chain.run("What tables are in the database?")
```
This is the error reported by the code, does anyone know what caused it?
```
> Entering new SQLDatabaseChain chain...
What tables are in the database?
SQLQuery: Here
I'm sorry, but I don't see any SQLite query provided in your message. Could you please provide the SQLite query so I can review it for any common mistakes?
---------------------------------------------------------------------------
OperationalError Traceback (most recent call last)
File C:\Language\Anaconda3\lib\site-packages\sqlalchemy\engine\base.py:1963, in Connection._exec_single_context(self, dialect, context, statement, parameters)
1962 if not evt_handled:
-> 1963 self.dialect.do_execute(
1964 cursor, str_statement, effective_parameters, context
1965 )
1967 if self._has_events or self.engine._has_events:
File C:\Language\Anaconda3\lib\site-packages\sqlalchemy\engine\default.py:920, in DefaultDialect.do_execute(self, cursor, statement, parameters, context)
919 def do_execute(self, cursor, statement, parameters, context=None):
--> 920 cursor.execute(statement, parameters)
OperationalError: near "I": syntax error
......
```
I have located the error to `langchain/chains/sql_database/base.py` ln125, from here chatglm cannot output the correct `sql_cmd`, but I cannot further check what is causing this problem. Any help would be greatly appreciated, thank you.
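One thing worth checking (a guess, not a confirmed diagnosis): the custom `_call` above ignores the `stop` list that SQLDatabaseChain passes in, so the model's completion is never truncated at markers such as `\nSQLResult:`, and the chain ends up treating the model's prose as the SQL command. A stdlib sketch of applying stop sequences manually inside `_call`:

```python
def apply_stop(text: str, stop=None) -> str:
    """Truncate a completion at the earliest stop sequence, if any."""
    if not stop:
        return text
    cut = len(text)
    for s in stop:
        idx = text.find(s)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

completion = "SELECT name FROM Artist;\nSQLResult: ...model keeps talking..."
print(apply_stop(completion, stop=["\nSQLResult:"]))
```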
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
import requests
from typing import Any, List, Mapping, Optional

import langchain
from langchain.llms.base import LLM
from langchain import OpenAI, SQLDatabase, SQLDatabaseChain
db = SQLDatabase.from_uri('sqlite:///music.db')
class ChatGLM(LLM):
url = "http://region-9.seetacloud.com:23361/v1/chat/completions"
@property
def _llm_type(self) -> str:
return "chatglm2-6b"
def _call(self, prompt: str,
stop: Optional[List[str]] = None) -> str:
message = [{
"role": "user",
"content": prompt
}]
url = "http://region-9.seetacloud.com:23361/v1/chat/completions"
payload = {
"model": "string",
"messages": message,
"temperature":1,
"top_p": 0,
"n": 1,
"max_tokens": 0,
"stream": False
}
headers = {
"Content-Type": "application/json"
}
response = requests.post(url, json=payload, headers=headers)
return response.json()['choices'][0]['message']['content']
@property
def _identifying_params(self) -> Mapping[str, Any]:
"""Get the identifying parameters.
"""
_param_dict = {
"url": self.url
}
return _param_dict
llm = ChatGLM(temperature=0, verbose=True)
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True, use_query_checker=True,
return_intermediate_steps=False)
db_chain.run("What tables are in the database?")
```
### Expected behavior
return correct sql query | sql_cmd Error: SQLDatabaseChain with ChatGLM2-6B | https://api.github.com/repos/langchain-ai/langchain/issues/9651/comments | 2 | 2023-08-23T10:31:14Z | 2023-11-30T16:06:51Z | https://github.com/langchain-ai/langchain/issues/9651 | 1,863,049,747 | 9,651 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.268
python 3.9
Windows 10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
class Actor(BaseModel):
name: str = Field(description="name of an actor")
film_names: List[str] = Field(description="list of names of films they starred in")
parser = PydanticOutputParser(pydantic_object=Actor)
prompt = PromptTemplate(
template="Answer the user query.\n{format_instructions}\n{query}\n",
input_variables=["query"],
partial_variables={"format_instructions": parser.get_format_instructions()}
)
chain = LLMChain(llm=ChatModel.get_model(), prompt=prompt, verbose=True)
output = chain.run(query="Generate the filmography for a random actor.", output_parser=parser)
```
Problem: _output_ is a JSON string, and not an Actor object.
### Expected behavior
The method _parse()_ from the OutputParser passed to the chain should be automatically called and return the parsed object, which is not the case. One has to call explicitly `parser.parse(output)` to retrieve an Actor object.
For custom parsers, it seems to work, but not for PydanticOutputParser. I expect the chain's behavior to be consistent across all parsers. | PydanticOutputParser not called by chain | https://api.github.com/repos/langchain-ai/langchain/issues/9650/comments | 2 | 2023-08-23T10:00:55Z | 2023-08-23T11:45:35Z | https://github.com/langchain-ai/langchain/issues/9650 | 1,862,997,610 | 9,650 |
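As a workaround until the chain applies the parser itself, the parse step can be invoked explicitly on the chain's string output; a minimal stdlib stand-in for the two-step pattern (no pydantic, hypothetical data):

```python
import json

class MiniParser:
    """Stand-in for an output parser with an explicit parse() step."""
    def parse(self, text: str) -> dict:
        return json.loads(text)

# What chain.run(...) hands back: still a plain JSON string.
raw_output = '{"name": "Tom Hanks", "film_names": ["Forrest Gump"]}'
parser = MiniParser()
actor = parser.parse(raw_output)  # the explicit second step
print(actor["name"])
```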
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am using ConversationalRetrievalChain for the RAG question-answer bot.
There is one LLM call that I have not configured and it is reducing the quality of responses and increasing the time.
The prompt in the LLM call is:
> Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.
It is taking around 1 second to complete this call, and it is reducing the quality of the response as well.
How do I stop this call?
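A toy illustration of where the extra latency comes from (stdlib stand-ins for the LLM calls, not langchain code): the condense-question step is an additional LLM round-trip before the answering call, so skipping it when no rephrasing is needed removes one call:

```python
calls = []

def llm(prompt: str) -> str:
    """Stand-in LLM: records each call it receives."""
    calls.append(prompt)
    return f"answer to: {prompt}"

def condense_then_answer(history, question):
    standalone = llm(f"rephrase given {history}: {question}")  # the extra call
    return llm(f"answer from retrieved docs: {standalone}")

def answer_directly(question):
    return llm(f"answer from retrieved docs: {question}")

condense_then_answer(["previous turn"], "what about pricing?")
n_with_condense = len(calls)
calls.clear()
answer_directly("what about pricing?")
print(n_with_condense, len(calls))
```

If follow-up rephrasing is genuinely not needed, a plain retrieval-QA setup without the condense step avoids the extra round-trip.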
### Suggestion:
_No response_ | Issue: How to stop extra LLM call in ConversationalRetrievalChain for question rephrasing | https://api.github.com/repos/langchain-ai/langchain/issues/9649/comments | 17 | 2023-08-23T09:53:18Z | 2024-06-24T20:10:42Z | https://github.com/langchain-ai/langchain/issues/9649 | 1,862,983,742 | 9,649 |
[
"langchain-ai",
"langchain"
] | ### System Info
VLLM from langchain gives the below error and stops executing:
code:

```python
from langchain.llms import VLLM

llm = VLLM(
    model="facebook/opt-125m",  # note: a comma was missing here in the original snippet
    tensor_parallel_size=2,
    trust_remote_code=True,  # mandatory for hf models
    max_new_tokens=128,
    top_k=10,
    top_p=0.95,
    temperature=0.8,
)

print(llm("What is the capital of France ?"))
```
gives the below error when setting tensor_parallel_size=2 and runs successfully if we comment out tensor_parallel_size argument:
error =>
**2023-08-23 08:52:55,683 ERROR services.py:1207 -- Failed to start the dashboard, return code -11
2023-08-23 08:52:55,685 ERROR services.py:1232 -- Error should be written to 'dashboard.log' or 'dashboard.err'. We are printing the last 20 lines for you. See 'https://docs.ray.io/en/master/ray-observability/ray-logging.html#logging-directory-structure' to find where the log file is.
2023-08-23 08:52:55,687 ERROR services.py:1276 --
The last 20 lines of /tmp/ray/session_2023-08-23_08-52-52_882632_28/logs/dashboard.log (it contains the error message from the dashboard):
2023-08-23 08:52:55,607 INFO head.py:242 -- Starting dashboard metrics server on port 44227**
**2023-08-23 08:52:56,847 INFO worker.py:1636 -- Started a local Ray instance.**

VM Details:
CPU : 4 Core
RAM : 13 GB
GPU: Nvidia T4 *2
Instance : Kaggle Kernal
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
### code:
```python
from langchain.llms import VLLM

llm = VLLM(
    model="facebook/opt-125m",  # note: a comma was missing here in the original snippet
    tensor_parallel_size=2,
    trust_remote_code=True,  # mandatory for hf models
    max_new_tokens=128,
    top_k=10,
    top_p=0.95,
    temperature=0.8,
)

print(llm("What is the capital of France ?"))
```
### On Kaggle Kernal
VM Details:
CPU : 4 Core
RAM : 13 GB
GPU: Nvidia T4 *2
Instance : Kaggle Kernal
### Expected behavior
2023-08-23 08:52:55,683 ERROR services.py:1207 -- Failed to start the dashboard , return code -11
2023-08-23 08:52:55,685 ERROR services.py:1232 -- Error should be written to 'dashboard.log' or 'dashboard.err'. We are printing the last 20 lines for you. See 'https://docs.ray.io/en/master/ray-observability/ray-logging.html#logging-directory-structure' to find where the log file is.
2023-08-23 08:52:55,687 ERROR services.py:1276 --
The last 20 lines of /tmp/ray/session_2023-08-23_08-52-52_882632_28/logs/dashboard.log (it contains the error message from the dashboard):
2023-08-23 08:52:55,607 INFO head.py:242 -- Starting dashboard metrics server on port 44227
2023-08-23 08:52:56,847 INFO worker.py:1636 -- Started a local Ray instance.
and Cell stops running i.e. execution stops | VLLM from langchain.llms | https://api.github.com/repos/langchain-ai/langchain/issues/9646/comments | 3 | 2023-08-23T09:05:03Z | 2023-11-29T16:06:49Z | https://github.com/langchain-ai/langchain/issues/9646 | 1,862,893,898 | 9,646 |
[
"langchain-ai",
"langchain"
] | ### System Info
win10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Load a jpg using UnstructuredImageLoader from Langchain.
2. An error occurred.
### Expected behavior
got the error here:
```
loader:<langchain.document_loaders.image.UnstructuredImageLoader object at 0x000002926EA8EFB0>
Exception in thread Thread-3 (_handle_results):
Traceback (most recent call last):
File "D:\ProgramData\anaconda3\envs\3.10.11\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "D:\ProgramData\anaconda3\envs\3.10.11\lib\threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "D:\ProgramData\anaconda3\envs\3.10.11\lib\multiprocessing\pool.py", line 579, in _handle_results
task = get()
File "D:\ProgramData\anaconda3\envs\3.10.11\lib\multiprocessing\connection.py", line 251, in recv
return _ForkingPickler.loads(buf.getbuffer())
TypeError: TesseractNotFoundError.__init__() takes 1 positional argument but 2 were given
``` | Load a jpg using UnstructuredImageLoader from Langchain. | https://api.github.com/repos/langchain-ai/langchain/issues/9644/comments | 2 | 2023-08-23T08:39:20Z | 2023-11-29T16:06:54Z | https://github.com/langchain-ai/langchain/issues/9644 | 1,862,851,384 | 9,644 |
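The traceback shows the failure while multiprocessing unpickles an exception from a worker (`_ForkingPickler.loads`), which suggests the real error is pytesseract's `TesseractNotFoundError` being re-raised across the process boundary — i.e. the Tesseract OCR binary is likely not installed or not on PATH (an educated guess, not a confirmed diagnosis). A quick stdlib check before blaming the loader:

```python
import shutil

def tesseract_available() -> bool:
    """Return True if the `tesseract` executable is on PATH."""
    return shutil.which("tesseract") is not None

print(tesseract_available())
```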
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```python
import os
from sqlalchemy import create_engine

def get_connection():
    user = os.environ.get('DB_USERNAME')
    database = os.environ.get('DB_DATABASE')
    password = os.environ.get('DB_PASSWORD')
    host = os.environ.get('DB_HOST')
    port = os.environ.get('DB_PORT')
    return create_engine(
        url="mysql+pymysql://{0}:{1}@{2}:{3}/{4}".format(
            user, password, host, port, database
        )
    )

def get_whole_conversation(question):
    try:
        llm = ChatOpenAI(temperature=0, openai_api_key=env('OPENAI_API_KEY'), model='gpt-3.5-turbo')
        engine = get_connection()
        input_db = SQLDatabase(engine)
        db_chain = SQLDatabaseChain.from_llm(llm, input_db, verbose=True)
        prompt = """
        """
        tools = [Tool(name="Foo-Bar-db", func=db_chain.run, description=prompt)]
        agent_kwargs = {"extra_prompt_messages": [MessagesPlaceholder(variable_name="memory")]}
        memory = ConversationBufferWindowMemory(memory_key="memory", k=4, return_messages=True)
        agent = initialize_agent(tools, llm, agent=AgentType.OPENAI_FUNCTIONS, verbose=True,
                                 agent_kwargs=agent_kwargs, memory=memory)
        agent.run(question)
```
### Suggestion:
_No response_ | Issue: I am working with SQLChain and initialize_agent it is not answering from the connected database. | https://api.github.com/repos/langchain-ai/langchain/issues/9641/comments | 4 | 2023-08-23T06:54:38Z | 2023-11-29T16:06:59Z | https://github.com/langchain-ai/langchain/issues/9641 | 1,862,687,289 | 9,641 |
[
"langchain-ai",
"langchain"
] | ### System Info
Win10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
embeddings = HuggingFaceEmbeddings( model_name='sentence-transformers/LaBSE')
vectordb = Milvus(embedding_function=embeddings, connection_args=milvus_settings)
vectordb.add_documents(documents=spilt_results)
```
I got this Error:
```
Traceback (most recent call last):
  File "E:\文件\programs\test_jieba\ingest.py", line 44, in <module>
    main()
  File "E:\文件\programs\test_jieba\ingest.py", line 39, in main
    vectordb.add_documents(documents=spilt_results)
  File "E:\文件\programs\test_jieba\.venv\lib\site-packages\langchain\vectorstores\base.py", line 92, in add_documents
    return self.add_texts(texts, metadatas, **kwargs)
  File "E:\文件\programs\test_jieba\.venv\lib\site-packages\langchain\vectorstores\milvus.py", line 454, in add_texts
    insert_list = [insert_dict[x][i:end] for x in self.fields]
  File "E:\文件\programs\test_jieba\.venv\lib\site-packages\langchain\vectorstores\milvus.py", line 454, in <listcomp>
    insert_list = [insert_dict[x][i:end] for x in self.fields]
KeyError: 'file_path'
```
### Expected behavior
Why does this happen? | When using Langchain to upload a .docx file to the Milvus database, an error occurs. | https://api.github.com/repos/langchain-ai/langchain/issues/9640/comments | 2 | 2023-08-23T06:25:55Z | 2023-11-29T16:07:05Z | https://github.com/langchain-ai/langchain/issues/9640 | 1,862,651,370 | 9640
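The failing line in the traceback (`insert_list = [insert_dict[x][i:end] for x in self.fields]`) builds one column per field the collection knows about, so every added document must carry every metadata key — here `file_path`. A stdlib reduction of the failure (field names taken from the traceback, data values illustrative):

```python
# The collection expects all of these fields on every document.
fields = ["text", "vector", "file_path"]
insert_dict = {"text": ["chunk1"], "vector": [[0.1, 0.2]]}  # no "file_path" metadata

try:
    _ = [insert_dict[x] for x in fields]
except KeyError as e:
    missing = str(e)
print(missing)
```

This suggests the collection was created with a `file_path` metadata field (e.g. by documents from an earlier loader) and the .docx documents being added lack that key.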
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
There is a notebook link in [QA using a Retriever
](https://python.langchain.com/docs/use_cases/question_answering/how_to/vector_db_qa)
that should open a [notebook](https://python.langchain.com/docs/modules/chains/additional/question_answering.html)
But the link is broken. The page does not exist.
Can you fix it?
Thanks
### Idea or request for content:
_No response_ | DOC: Broken notebook link | https://api.github.com/repos/langchain-ai/langchain/issues/9639/comments | 2 | 2023-08-23T06:17:03Z | 2023-11-29T16:07:11Z | https://github.com/langchain-ai/langchain/issues/9639 | 1,862,640,640 | 9,639 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.271
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
This raises a key error because `word` is treated like an input variable even though it's a partial variable
```python
p = PipelinePromptTemplate(
final_prompt=PromptTemplate.from_template("this {word} work"),
pipeline_prompts=[],
input_variables=[],
partial_variables={"word": "does"}
)
print(p.format())
# or
print(p.partial().format())
```
### Expected behavior
partial_variables should be interpolated properly | PipelinePromptTemplate does not respect partial_variables | https://api.github.com/repos/langchain-ai/langchain/issues/9636/comments | 2 | 2023-08-23T02:31:08Z | 2023-11-29T16:07:15Z | https://github.com/langchain-ai/langchain/issues/9636 | 1,862,450,431 | 9,636 |
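For reference, the semantics being asked for can be shown with plain `str.format` (stdlib only): once `word` is pre-bound by the partial, `format()` should succeed with no further arguments:

```python
template = "this {word} work"
partials = {"word": "does"}

# A partial pre-binds some variables; format() should then need nothing more.
result = template.format(**partials)
print(result)
```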
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain version: 0.0.266
Python3.11.4
Sample python snippet:
```python
import yaml
from langchain.agents.agent_toolkits.openapi.spec import reduce_openapi_spec
from langchain.requests import RequestsWrapper
from langchain.llms.openai import OpenAI
from langchain.agents.agent_toolkits.openapi import planner

with open("openapi_request_body.yaml") as f:
    raw_api_spec = yaml.load(f, Loader=yaml.Loader)
api_spec = reduce_openapi_spec(raw_api_spec)

requests_wrapper = RequestsWrapper()
llm = OpenAI(temperature=0.0)
agent = planner.create_openapi_agent(api_spec, requests_wrapper, llm)

while True:
    user_inp_query = input("Ask me anything: ")
    print('user_inp_query: ', user_inp_query)
    user_query = user_inp_query
    agent.run(user_query)
```
The sample API spec contains 3 APIs: count_cars, locate_license_plate, and timestamps_of_matching_car_configs. However, I notice that no matter what question I ask, I see the following API planner response:
```
Ask me anything: Tell me the timestamps whenever a green car came
-----------------------------------------------
user_inp_query: Tell me the timestamps whenever a green car came
> Entering new AgentExecutor chain...
Action: api_planner
Action Input: I need to find the right API calls to get the timestamps of when a green car came
Observation: 1. POST /count_cars with a query param to search for green cars
2. POST /timestamps_of_matching_car_configs with the query param from the previous call
3. POST /locate_license_plate with the query param from the previous call
Thought: I'm ready to execute the API calls.
Action: api_controller
Action Input: 1. POST /count_cars with a query param to search for green cars
2. POST /timestamps_of_matching_car_configs with the query param from the previous call
3. POST /locate_license_plate with the query param from the previous call
> Entering new AgentExecutor chain...
I need to make a POST request to the /count_cars endpoint with the query params.
```
openAPI spec definition is as follows:
```yaml
openapi: 3.0.0
servers:
- url: http://127.0.0.1:5001
info:
title: Car Traffic MetaData Analyzer
description: REST API service which can be used to fetch details like car's color, car's type [like suv, sedan. coupe etc i.e. type of vehicle body] and car's make [toyota, honda, porche etc i.e. manufacturing company's name]. We also store the license plate of the individual cars for each entry.
version: 1.0.0
paths:
/count_cars:
post:
summary: This API takes as an input car's color (like red, green etc), car's vehicle body type (like suv, sedan. coupe etc) and car's make (like toyota, honda, porche etc). All the input values are optional. This API then returns a count of total number of cars which are of that certain color/type/make.
requestBody:
content:
application/json:
schema:
$ref: '#/components/schemas/car_attributes'
# examples:
# examples1:
# $ref: '#/components/examples/car_attributes_example1'
# examples2:
# $ref: '#/components/examples/car_attributes_example2'
required: True
responses:
'200':
description: Successful response
content:
application/json:
examples:
example1:
summary: example 1
value:
total_count: "15"
example2:
summary: example 2
value:
total_count: "0"
'400':
description: Bad Request
'404':
description: Resource Not Found
/timestamps_of_matching_car_configs:
post:
summary: This API takes as an input car's color (like red, green etc), car's type (like suv, sedan. coupe etc) and car's make (like toyota, honda, porche etc). All the input values are optional. This API then returns two values, found and timestamps. When found=true it means that car config in query has been found and the corresponding timestamps key stores a list of all the timestamp at which this car config was found. If found=False, it means that not such car config can be found.
requestBody:
content:
application/json:
schema:
$ref: '#/components/schemas/car_attributes'
# examples:
# examples1:
# $ref: '#/components/examples/car_attributes_example1'
# examples2:
# $ref: '#/components/examples/car_attributes_example2'
required: True
responses:
'200':
description: Successful response when the API found none or some entries where the car's color, type and make matches with the parameters passed to it.
content:
application/json:
examples:
example1:
summary: When API endpoint found none relevant entries
value:
found: "False"
example2:
summary: When API endpoint found some relevant entries
value:
found: "True"
timestamps: ["2021-10-04T16:53:11Z", "2021-11-22T06:50:14Z"]
example3:
summary: Another example of when API endpoint found some relevant entries
value:
found: "True"
timestamps: ["2001-10-04T12:23:43Z", "2011-01-29T23:23:29Z", "2001-11-30T00:01:01Z", "2011-01-09T23:59:00Z"]
'400':
description: Bad Request
'404':
description: Resource Not Found
/locate_license_plate:
post:
summary: This API takes as an input license plate of a car. If the car's license plate is present in the database, it return with found=true, and it also returns that car's color, vehicle body type and make i.e. manufacturing company's name. Else if found=false, it means no such car is found in the database.
requestBody:
content:
application/json:
schema:
$ref: '#/components/schemas/license_plate'
# examples:
# examples1:
# $ref: '#/components/examples/license_plate_example1'
# examples2:
# $ref: '#/components/examples/license_plate_example2'
required: True
responses:
'200':
description: Successful response from the API. From the response found=False means that the license plate was not found in the databse. If found=True it means that the license plate was found.
content:
application/json:
examples:
example1:
summary: When API endpoint can't find license plate in the databse
value:
found: "False"
example2:
summary: When API endpoint found license plate, it returns car's color, type and make.
value:
found: "True"
color: "red"
type: "suv"
make: "honda"
example3:
                  summary: When the API endpoint finds the license plate, it returns the car's color, type and make.
value:
found: "True"
color: "grey"
type: "sedan"
make: "hyundai"
'400':
description: Bad Request
'404':
description: Resource Not Found
components:
schemas:
car_attributes:
properties:
color:
description: color of the car you want to query for.
type: string
type:
          description: vehicle body type, i.e. whether the car is an suv, sedan, convertible, truck, etc.
type: string
make:
description: manufacturing company of the car
type: string
license_plate:
properties:
license_plate:
          description: license plate of the car
type: string
parameters:
color:
name: color
in: query
description: color of the car you want to query for.
schema:
type: string
examples:
red:
value: red
green:
value: green
blue:
value: blue
yellow:
value: yellow
type:
name: type
in: query
      description: vehicle body type, i.e. whether the car is one of suv, sedan, convertible, truck, etc.
schema:
type: string
examples:
suv:
value: suv
sedan:
value: sedan
convertible:
value: convertible
truck:
value: truck
make:
name: make
in: query
description: Manufacturing company of the car
schema:
type: string
examples:
subaru:
value: subaru
hyundai:
value: hyundai
toyota:
value: toyota
porche:
value: porche
license_plate:
name: license_plate
in: query
      description: License plate of the car
schema:
type: string
examples:
6FFR593:
value: 6FFR593
KVT6282:
value: KVT6282
BHT9205:
          value: BHT9205
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce:
Run the code snippet as-is and point `openapi.yaml` to the YAML file provided.
### Expected behavior
The API chain is expected to classify the query into one of these APIs and only run that API, instead of calling all the APIs in the same sequence every time. | create_openapi_agent always generating same chain no matter what question | https://api.github.com/repos/langchain-ai/langchain/issues/9634/comments | 2 | 2023-08-23T00:54:53Z | 2023-08-23T05:08:57Z | https://github.com/langchain-ai/langchain/issues/9634 | 1,862,387,605 | 9,634
[
"langchain-ai",
"langchain"
] | ### Feature request
Add the option to output a progress bar from methods that add documents to an index, e.g., `from_documents`.
### Motivation
Methods that add documents to an index, e.g., `from_documents`, can take a long time to execute. When executing from a notebook or the CLI, it would be very convenient to be able to track the progress using something like tqdm.
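Until such an option exists, here is a workaround sketch (my own code, not a LangChain API; `add_batch` stands in for whatever bulk-insert call the vector store exposes): split the documents into batches and report after each one. The `report` hook can be `print`, a logger, or a tqdm bar's update.

```python
def add_documents_with_progress(add_batch, documents, batch_size=100, report=print):
    """Index `documents` in batches, reporting progress after each batch."""
    total = len(documents)
    done = 0
    for start in range(0, total, batch_size):
        batch = documents[start:start + batch_size]
        add_batch(batch)  # e.g. vectorstore.add_documents(batch)
        done += len(batch)
        report(f"indexed {done}/{total}")
    return done
```

Wrapping the loop with `tqdm` instead of a `report` callback would give the same effect in a notebook.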
### Your contribution
No | Monitoring the progress of long running vectorDB index updates | https://api.github.com/repos/langchain-ai/langchain/issues/9630/comments | 2 | 2023-08-22T22:08:51Z | 2024-02-25T16:06:52Z | https://github.com/langchain-ai/langchain/issues/9630 | 1,862,271,039 | 9,630 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.246
Python 3.11.4
SQLAlchemy 1.4.39
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.chat_models import ChatOpenAI
from langchain.callbacks import get_openai_callback
from dotenv import load_dotenv, find_dotenv
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String, ForeignKey
from langchain.agents import initialize_agent
from langchain.sql_database import SQLDatabase
from langchain.chains import SQLDatabaseChain
from langchain.agents import Tool
def count_tokens(agent, query):
with get_openai_callback() as cb:
result = agent(query)
print(f'Spent a total of {cb.total_tokens} tokens')
return result
custom_dotenv_path = './openai.env'
_ = load_dotenv(custom_dotenv_path)
llm = ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo")
engine = create_engine('sqlite:///college.db', echo=True)
meta = MetaData()
students = Table(
'students', meta,
Column('id', Integer, primary_key = True),
Column('firstname', String),
Column('lastname', String),
)
addresses = Table(
'addresses', meta,
Column('id', Integer, primary_key = True),
Column('st_id', Integer, ForeignKey('students.id')),
Column('zipcode', String),
Column('email', String))
meta.create_all(engine)
conn = engine.connect()
conn.execute(students.insert(), [
{'id': 1, 'firstname': 'John', 'lastname': 'Smith'},
{'id': 2, 'firstname': 'Emily', 'lastname': 'Johnson'},
{'id': 3, 'firstname': 'Michael', 'lastname': 'Rodriguez'},
{'id': 4, 'firstname': 'Sarah', 'lastname': 'Kim'},
{'id': 5, 'firstname': 'William', 'lastname': 'Brown'}
])
conn.execute(addresses.insert(), [
{'st_id': 1, 'zipcode': '90210', 'email': 'john.smith@email.com'},
{'st_id': 2, 'zipcode': '30301', 'email': 'emily.johnson@email.com'},
{'st_id': 3, 'zipcode': '77001', 'email': 'michael.rodriguez@email.com'},
{'st_id': 4, 'zipcode': '94101', 'email': 'sarah.kim@email.com'},
{'st_id': 5, 'zipcode': '10001', 'email': 'william.brown@email.com'}
])
db = SQLDatabase(engine)
from langchain.prompts.prompt import PromptTemplate
_DEFAULT_TEMPLATE = """Given an input question, first create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.
Use the following format:
Question: "Question here"
SQLQuery: "SQL Query to run"
SQLResult: "Result of the SQLQuery"
Answer: "Final answer here"
Only use the following tables:
{table_info}
When a user requests to add a new record, adhere to the following steps:
Only have single quotes on any sql command sent to the engine. If you generate 'INSERT' statement for adding any record to the table. Please 'EXECUTE' one statement at a time.
Step 1.Student Table Entry:
Navigate to the 'students' table.
Input the desired first name and last name for the new record.
Step 2.Address Table Entry:
Once the student record is created, retrieve its 'id'.
Move to the 'addresses' table.
Using the retrieved 'id', prepare a new entry ensuring it consists of the 'student id', 'zipcode', and 'email' as initially provided.
Question: {input}"""
PROMPT = PromptTemplate(
input_variables=["input", "table_info", "dialect"], template=_DEFAULT_TEMPLATE
)
db = SQLDatabase(engine)
sql_chain = SQLDatabaseChain.from_llm(llm=llm, db=db, verbose=True,
#use_query_checker=True,
prompt=PROMPT
)
tools =[
Tool(
name='student',
func=sql_chain.run,
description="Useful for when you need to answer questions about new student record"
)
]
zero_shot_agent = initialize_agent(
agent="zero-shot-react-description",
tools=tools,
llm=llm,
verbose=True,
max_iterations=5,
)
result = count_tokens(
zero_shot_agent,
"insert a new record with name Jane Everwood. Her email is 'everwood@gmail.com' and her zipcode is '99999'."
)
```
### Expected behavior
I asked ChatGPT to generate SQL for me to insert a new student record into the database. This database comprises two tables: students (cols: id, firstname, lastname) and addresses (cols: id, st_id, zipcode, email). When I input "insert a new record with the name Jane Everwood. Her email is 'everwood@gmail.com' and her zipcode is '99999'", the system should add one record to the students table and one corresponding record to the addresses table. Instead, I received a "sqlite3.Warning: You can only execute one statement at a time" message.
The generated SQL scripts from SQLDatabaseChain appear correct. Could the issue be related to calling "cursor.execute(statement, parameters)" to execute multiple statements? Thanks!
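For reference, sqlite3 does refuse multi-statement strings passed to a single `execute` call. A naive workaround sketch (my own code, not part of LangChain; it breaks if a string literal contains a semicolon) is to split the generated script and run each statement separately:

```python
def run_one_at_a_time(execute, sql_script):
    """Split a multi-statement SQL script on ';' and execute each piece on its own.

    Caveat: this naive split is wrong when a string literal contains ';'.
    """
    statements = [s.strip() for s in sql_script.split(";") if s.strip()]
    for statement in statements:
        execute(statement)  # e.g. connection.execute(text(statement))
    return statements
```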
Here is the full message that I got:
```
> Entering new AgentExecutor chain...
I need to insert a new student record with the given information.
Action: student
Action Input: insert new record with name Jane Everwood, email 'everwood@gmail.com', and zipcode '99999'
> Entering new SQLDatabaseChain chain...
insert new record with name Jane Everwood, email 'everwood@gmail.com', and zipcode '99999'
SQLQuery:2023-08-22 15:44:43,547 INFO sqlalchemy.engine.Engine SELECT students.id, students.firstname, students.lastname
FROM students
LIMIT ? OFFSET ?
2023-08-22 15:44:43,548 INFO sqlalchemy.engine.Engine [generated in 0.00041s] (3, 0)
2023-08-22 15:44:43,551 INFO sqlalchemy.engine.Engine SELECT addresses.id, addresses.st_id, addresses.zipcode, addresses.email
FROM addresses
LIMIT ? OFFSET ?
2023-08-22 15:44:43,551 INFO sqlalchemy.engine.Engine [generated in 0.00025s] (3, 0)
INSERT INTO students (firstname, lastname) VALUES ('Jane', 'Everwood');
INSERT INTO addresses (st_id, zipcode, email) VALUES ((SELECT id FROM students WHERE firstname = 'Jane' AND lastname = 'Everwood'), '99999', 'everwood@gmail.com');2023-08-22 15:44:45,160 INFO sqlalchemy.engine.Engine BEGIN (implicit)
2023-08-22 15:44:45,162 INFO sqlalchemy.engine.Engine INSERT INTO students (firstname, lastname) VALUES ('Jane', 'Everwood');
INSERT INTO addresses (st_id, zipcode, email) VALUES ((SELECT id FROM students WHERE firstname = 'Jane' AND lastname = 'Everwood'), '99999', 'everwood@gmail.com');
2023-08-22 15:44:45,162 INFO sqlalchemy.engine.Engine [generated in 0.00026s] ()
2023-08-22 15:44:45,162 INFO sqlalchemy.engine.Engine ROLLBACK
Traceback (most recent call last):
File "C:\_PyCharmProject\openai\bug_report.py", line 117, in <module>
result = count_tokens(
File "C:\_PyCharmProject\openai\bug_report.py", line 13, in count_tokens
result = agent(query)
File "C:\Users\xl\AppData\Roaming\Python\Python39\site-packages\langchain\chains\base.py", line 258, in __call__
raise e
File "C:\Users\xl\AppData\Roaming\Python\Python39\site-packages\langchain\chains\base.py", line 252, in __call__
self._call(inputs, run_manager=run_manager)
File "C:\Users\xl\AppData\Roaming\Python\Python39\site-packages\langchain\agents\agent.py", line 1029, in _call
next_step_output = self._take_next_step(
File "C:\Users\xl\AppData\Roaming\Python\Python39\site-packages\langchain\agents\agent.py", line 890, in _take_next_step
observation = tool.run(
File "C:\Users\xl\AppData\Roaming\Python\Python39\site-packages\langchain\tools\base.py", line 320, in run
raise e
File "C:\Users\xl\AppData\Roaming\Python\Python39\site-packages\langchain\tools\base.py", line 292, in run
self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
File "C:\Users\xl\AppData\Roaming\Python\Python39\site-packages\langchain\tools\base.py", line 444, in _run
self.func(
File "C:\Users\xl\AppData\Roaming\Python\Python39\site-packages\langchain\chains\base.py", line 451, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
File "C:\Users\xl\AppData\Roaming\Python\Python39\site-packages\langchain\chains\base.py", line 258, in __call__
raise e
File "C:\Users\xl\AppData\Roaming\Python\Python39\site-packages\langchain\chains\base.py", line 252, in __call__
self._call(inputs, run_manager=run_manager)
File "C:\Users\xl\AppData\Roaming\Python\Python39\site-packages\langchain\chains\sql_database\base.py", line 186, in _call
raise exc
File "C:\Users\xl\AppData\Roaming\Python\Python39\site-packages\langchain\chains\sql_database\base.py", line 131, in _call
result = self.database.run(sql_cmd)
File "C:\Users\xl\AppData\Roaming\Python\Python39\site-packages\langchain\utilities\sql_database.py", line 390, in run
cursor = connection.execute(text(command))
File "C:\ProgramData\Anaconda3\lib\site-packages\sqlalchemy\engine\base.py", line 1306, in execute
return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)
File "C:\ProgramData\Anaconda3\lib\site-packages\sqlalchemy\sql\elements.py", line 332, in _execute_on_connection
return connection._execute_clauseelement(
File "C:\ProgramData\Anaconda3\lib\site-packages\sqlalchemy\engine\base.py", line 1498, in _execute_clauseelement
ret = self._execute_context(
File "C:\ProgramData\Anaconda3\lib\site-packages\sqlalchemy\engine\base.py", line 1862, in _execute_context
self._handle_dbapi_exception(
File "C:\ProgramData\Anaconda3\lib\site-packages\sqlalchemy\engine\base.py", line 2047, in _handle_dbapi_exception
util.raise_(exc_info[1], with_traceback=exc_info[2])
File "C:\ProgramData\Anaconda3\lib\site-packages\sqlalchemy\util\compat.py", line 208, in raise_
raise exception
File "C:\ProgramData\Anaconda3\lib\site-packages\sqlalchemy\engine\base.py", line 1819, in _execute_context
self.dialect.do_execute(
File "C:\ProgramData\Anaconda3\lib\site-packages\sqlalchemy\engine\default.py", line 732, in do_execute
cursor.execute(statement, parameters)
sqlite3.Warning: You can only execute one statement at a time.
Process finished with exit code 1
``` | SQLDatabaseChain: sqlite3.Warning: You can only execute one statement at a time. | https://api.github.com/repos/langchain-ai/langchain/issues/9627/comments | 3 | 2023-08-22T20:04:19Z | 2023-11-29T16:07:20Z | https://github.com/langchain-ai/langchain/issues/9627 | 1,862,121,036 | 9,627 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
The current documentation recommends running the following SQL to create a table and function for using a Supabase Postgres database as a vector store.
```postgres
-- Enable the pgvector extension to work with embedding vectors
create extension vector;
-- Create a table to store your documents
create table documents (
id bigserial primary key,
content text, -- corresponds to Document.pageContent
metadata jsonb, -- corresponds to Document.metadata
embedding vector(1536) -- 1536 works for OpenAI embeddings, change if needed
);
-- Create a function to search for documents
create function match_documents (
query_embedding vector(1536),
match_count int default null,
filter jsonb DEFAULT '{}'
) returns table (
id bigint,
content text,
metadata jsonb,
similarity float
)
language plpgsql
as $$
#variable_conflict use_column
begin
return query
select
id,
content,
metadata,
1 - (documents.embedding <=> query_embedding) as similarity
from documents
where metadata @> filter
order by documents.embedding <=> query_embedding
limit match_count;
end;
$$;
```
This creates a table with a column named id of type `bigint`
However, when using the `SupabaseVectorStore` class and running methods such as `add_documents` or `add_texts`, an attempt is made to insert `Document` objects into the database using a generated uuid as the id. This is incompatible with the `bigint` column type, resulting in errors such as
```bash
postgrest.exceptions.APIError: {'code': '22P02', 'details': None, 'hint': None, 'message': 'invalid input syntax for type bigint: "dbf8aa60-8295-450c-83bc-7395e2836a6a"'}
```
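The type mismatch is reproducible without touching the database: as the issue notes, the generated ids are uuid4 strings, and a uuid4 string can never be parsed as an integer, so a `bigint` id column will always reject it:

```python
import uuid

generated_id = str(uuid.uuid4())  # the kind of id the vector store generates
try:
    int(generated_id)
    coercible = True
except ValueError:
    coercible = False
# coercible is always False: uuid4 strings contain hyphens and hex letters
```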
### Idea or request for content:
Instead, the documentation should recommend using the `uuid` column type, which is natively supported within Supabase | DOC: Langchain Supabase Vectorstore ID Incompatability | https://api.github.com/repos/langchain-ai/langchain/issues/9624/comments | 6 | 2023-08-22T19:07:33Z | 2023-12-04T20:18:50Z | https://github.com/langchain-ai/langchain/issues/9624 | 1,862,041,049 | 9,624
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.193
python 3.8.10
### Who can help?
@agola11
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [x] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. I'm creating multiple vector stores using Chroma DB and saving them in subdirectories of a directory `DB/`, all inside a Docker container. I'm using the persist method as detailed in the docs.
2. Save and load work as expected. However, after a container restart (an expected operation during development), I got the following error related to a parquet file in a `DB` subdirectory: `Invalid Input Error: File '/app/DB/SIC/chroma-embeddings.parquet' too small to be a Parquet file`. During interaction with Chroma DB, the app only reads data; it does not add or overwrite data.
It seems that the reading operation overwrites the chroma-embeddings.parquet file.
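A defensive check before loading can at least detect the corruption early. This is a diagnostic sketch, not a fix; it relies only on the fact that valid Parquet files start and end with the 4-byte magic `PAR1`:

```python
from pathlib import Path

def looks_like_parquet_bytes(data):
    """Cheap sanity check: valid Parquet files start and end with b'PAR1'."""
    return len(data) >= 12 and data[:4] == b"PAR1" and data[-4:] == b"PAR1"

def looks_like_parquet(path):
    """The same check against a file on disk, e.g. DB/SIC/chroma-embeddings.parquet."""
    p = Path(path)
    return p.is_file() and looks_like_parquet_bytes(p.read_bytes())
```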
### Expected behavior
I expect that after the container restarts, the DB can be loaded without having to be rebuilt. | duckdb too small to be a Parquet file | https://api.github.com/repos/langchain-ai/langchain/issues/9616/comments | 3 | 2023-08-22T15:12:51Z | 2024-02-08T16:26:11Z | https://github.com/langchain-ai/langchain/issues/9616 | 1,861,679,027 | 9,616
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.270
The Chroma DB wrapper does not pass `**kwargs` through when querying the collection.
Method: `similarity_search_with_score`
Code:
results = self.__query_collection(
    query_embeddings=[query_embedding], n_results=k, where=filter
)
`__query_collection` is called without `**kwargs`, which would be helpful, for example, for restricting the search to documents that contain a given string.
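A minimal sketch of the requested change (a hypothetical helper, not the actual LangChain source): accept `**kwargs` and forward them to the collection query alongside the existing arguments.

```python
def query_collection(query_fn, query_embeddings, n_results, where=None, **kwargs):
    """Forward caller-supplied kwargs to the underlying collection query."""
    return query_fn(
        query_embeddings=query_embeddings,
        n_results=n_results,
        where=where,
        **kwargs,  # extra collection-level query options pass straight through
    )
```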
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Call `Chroma.similarity_search_with_score` with additional keyword arguments and observe that they are never forwarded to the underlying collection query (see the code under System Info above).
### Expected behavior
`similarity_search_with_score` should forward the caller's `**kwargs` to `__query_collection`, so that collection-level query options can be used, for example restricting results to documents that contain a given string. | Croma Db Wrapper not considering the **kwargs for quering the collection | https://api.github.com/repos/langchain-ai/langchain/issues/9611/comments | 2 | 2023-08-22T14:12:14Z | 2023-11-29T16:07:30Z | https://github.com/langchain-ai/langchain/issues/9611 | 1,861,554,770 | 9,611
[
"langchain-ai",
"langchain"
] | ### System Info
langchain version: 0.0.270
python version: 3.11.0
os: Ubuntu 20.04.6 LTS
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
In a Python file there is only one line of code, as follows:
`from langchain.document_loaders import DirectoryLoader`
When I run it, it generates the following errors:
```
Traceback (most recent call last):
File "/home/tq/code/langchain/python_311/project/01_test.py", line 1, in <module>
from langchain.document_loaders import DirectoryLoader
File "/home/tq/anaconda3/envs/py311/lib/python3.11/site-packages/langchain/__init__.py", line 6, in <module>
from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
File "/home/tq/anaconda3/envs/py311/lib/python3.11/site-packages/langchain/agents/__init__.py", line 31, in <module>
from langchain.agents.agent import (
File "/home/tq/anaconda3/envs/py311/lib/python3.11/site-packages/langchain/agents/agent.py", line 14, in <module>
from langchain.agents.agent_iterator import AgentExecutorIterator
File "/home/tq/anaconda3/envs/py311/lib/python3.11/site-packages/langchain/agents/agent_iterator.py", line 30, in <module>
from langchain.tools import BaseTool
File "/home/tq/anaconda3/envs/py311/lib/python3.11/site-packages/langchain/tools/__init__.py", line 41, in <module>
from langchain.tools.gmail import (
File "/home/tq/anaconda3/envs/py311/lib/python3.11/site-packages/langchain/tools/gmail/__init__.py", line 3, in <module>
from langchain.tools.gmail.create_draft import GmailCreateDraft
File "/home/tq/anaconda3/envs/py311/lib/python3.11/site-packages/langchain/tools/gmail/create_draft.py", line 7, in <module>
from langchain.tools.gmail.base import GmailBaseTool
File "/home/tq/anaconda3/envs/py311/lib/python3.11/site-packages/langchain/tools/gmail/base.py", line 16, in <module>
from googleapiclient.discovery import Resource
File "/home/tq/anaconda3/envs/py311/lib/python3.11/site-packages/googleapiclient/discovery.py", line 57, in <module>
from googleapiclient import _auth, mimeparse
File "/home/tq/anaconda3/envs/py311/lib/python3.11/site-packages/googleapiclient/_auth.py", line 34, in <module>
import oauth2client.client
File "/home/tq/anaconda3/envs/py311/lib/python3.11/site-packages/oauth2client/client.py", line 47, in <module>
from oauth2client import crypt
File "/home/tq/anaconda3/envs/py311/lib/python3.11/site-packages/oauth2client/crypt.py", line 55, in <module>
from oauth2client import _pycrypto_crypt
File "/home/tq/anaconda3/envs/py311/lib/python3.11/site-packages/oauth2client/_pycrypto_crypt.py", line 17, in <module>
from Crypto.PublicKey import RSA
File "/home/tq/anaconda3/envs/py311/lib/python3.11/site-packages/Crypto/PublicKey/__init__.py", line 29, in <module>
from Crypto.Util.asn1 import (DerSequence, DerInteger, DerBitString,
File "/home/tq/anaconda3/envs/py311/lib/python3.11/site-packages/Crypto/Util/asn1.py", line 33, in <module>
from Crypto.Util.number import long_to_bytes, bytes_to_long
File "/home/tq/anaconda3/envs/py311/lib/python3.11/site-packages/Crypto/Util/number.py", line 398
s = pack('>I', n & 0xffffffffL) + s
^
SyntaxError: invalid hexadecimal literal
```
please help, thanks!
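For what it's worth, the failing line uses Python 2's long-integer suffix `L` (`0xffffffffL`), which is invalid syntax in Python 3. This is easy to confirm without importing the package:

```python
import ast

source = "s = pack('>I', n & 0xffffffffL) + s"  # the line from Crypto/Util/number.py
try:
    ast.parse(source)
    parses = True
except SyntaxError:
    parses = False
# parses is False on any Python 3 interpreter
```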
### Expected behavior
Importing `DirectoryLoader` should work. | import DirectoryLoader generate error | https://api.github.com/repos/langchain-ai/langchain/issues/9609/comments | 3 | 2023-08-22T13:19:30Z | 2023-08-22T17:00:35Z | https://github.com/langchain-ai/langchain/issues/9609 | 1,861,452,000 | 9,609
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.249
### Who can help?
@hwchase17 @agola11 @eyurtsev
### Information
- [X] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [x] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.chains import ConversationalRetrievalChain
qa = ConversationalRetrievalChain.from_llm(...)  # with values such as the vector store retriever, memory, etc.
Now I want to serialize `qa`, or store it; the key is to be able to store the QA chain and pass it wherever I want, ideally storing it in one endpoint and passing it to another endpoint.
But I get errors about not being able to serialize it, which makes this difficult.
I have tried Flask sessions, `json.dumps`, pickle, cloudpickle, storing it in a database, etc., and I still get an error.
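A common workaround (my own sketch, not a LangChain API): instead of serializing the chain object itself, persist only the per-session chat history and rebuild the chain from its components on each request.

```python
import json

def save_history(store, session_id, messages):
    """Persist a session's chat history; `store` can be any dict-like backend."""
    store[session_id] = json.dumps(messages)

def load_history(store, session_id):
    """Load the history back; the chain itself is rebuilt around it per request."""
    raw = store.get(session_id)
    return json.loads(raw) if raw is not None else []
```

On each request you would reconstruct the memory from the loaded messages and call `ConversationalRetrievalChain.from_llm(...)` again; only the history crosses the endpoint boundary.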
### Expected behavior
I should be able to serialize or save the QA chain so I can pass it from one endpoint to another, keep a user in session, and isolate one user's QA from others | TypeError: Object of type ConversationalRetrievalChain is not JSON serializable | https://api.github.com/repos/langchain-ai/langchain/issues/9607/comments | 4 | 2023-08-22T12:48:52Z | 2024-02-07T16:27:08Z | https://github.com/langchain-ai/langchain/issues/9607 | 1,861,394,996 | 9,607
[
"langchain-ai",
"langchain"
] | ### Feature request
Greetings everyone,
I'm interested in directly saving the chromaDB vector store to an S3 bucket. Is there a way to accomplish this?
### Motivation
I want to run my LLM directly in the AWS cloud, but first I need to implement efficient loading and saving of my DB.
### Your contribution
I have already implemented a function to load data from S3 and create the vector store.
```
import boto3
from langchain.document_loaders import S3DirectoryLoader
from langchain.vectorstores import Chroma
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
# Initialize the S3 client
s3 = boto3.client('s3')
# Specify the S3 bucket and directory path
bucket_name = 'bucket_name'
directory_path = 's3_path'
# List objects with a delimiter to get only common prefixes (directories)
response = s3.list_objects_v2(Bucket=bucket_name, Prefix=directory_path, Delimiter='/')
# Extract the common prefixes (directories) from the response
common_prefixes = response.get('CommonPrefixes', [])
# Print the directory names
for prefix in common_prefixes:
print(prefix['Prefix'])
def create_chromadb_from_s3():
# Load data from s3
docs = []
for key in s3.list_objects_v2(Bucket=bucket_name, Prefix=directory_path, Delimiter='/').get('CommonPrefixes', []):
loader = S3DirectoryLoader(bucket_name, key['Prefix'])
docs.extend(loader.load())
# Split documents
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
texts = text_splitter.split_documents(docs)
embeddings = OpenAIEmbeddings(openai_api_key=open_ai_secret)
db = Chroma.from_documents(
texts, embedding=embeddings
)
return db
```
Unfortunately, there is no way to point "persist_dir" at an S3 bucket. Any idea how we can implement this? | Upload ChromDB vectordb from s3 | https://api.github.com/repos/langchain-ai/langchain/issues/9606/comments | 8 | 2023-08-22T12:43:32Z | 2024-02-14T16:11:38Z | https://github.com/langchain-ai/langchain/issues/9606 | 1,861,385,729 | 9,606
[
"langchain-ai",
"langchain"
] | ### Describe the feature or improvement you're requesting
When making a direct call to OpenAI's POST /v1/chat/completions endpoint, we receive valuable headers that provide information about the rate limiting, including:
```
x-ratelimit-limit-requests: 3500
x-ratelimit-remaining-requests: 3499
x-ratelimit-reset-requests: 17ms
```
However, when using the `ChatOpenAI` and `OpenAIEmbeddings` APIs, these rate-limit headers are not returned in the response or exposed via the `OpenAICallbackHandler` object (which only tracks values such as total_tokens and total_cost). This makes it challenging to track and manage rate limits programmatically.
Can we get the rate-limit header values when using `ChatOpenAI` and `OpenAIEmbeddings`?
It would be nice if we could get access to these headers, or have them returned in a helpful format along with the response.
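For reference, the bookkeeping being asked for amounts to something like the following against a raw HTTP response (the header names are taken from the response shown above; the dict-of-headers interface is an assumption):

```python
RATE_LIMIT_HEADERS = (
    "x-ratelimit-limit-requests",
    "x-ratelimit-remaining-requests",
    "x-ratelimit-reset-requests",
)

def extract_rate_limits(headers):
    """Pull the rate-limit headers out of a case-insensitive header mapping."""
    lowered = {k.lower(): v for k, v in headers.items()}
    return {name: lowered[name] for name in RATE_LIMIT_HEADERS if name in lowered}
```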
### Motivation
Having access to rate limit information is crucial for developers to effectively manage their API usage, especially when using these APIs in production environments. This addition would greatly enhance the usability and monitoring capabilities of the OpenAI API.
| Expose x-ratelimit-* headers from OpenAI API with Langchain | https://api.github.com/repos/langchain-ai/langchain/issues/9601/comments | 3 | 2023-08-22T11:04:51Z | 2024-05-14T16:06:16Z | https://github.com/langchain-ai/langchain/issues/9601 | 1,861,223,410 | 9,601 |
[
"langchain-ai",
"langchain"
] | ### System Info
python==3.9
langchain==0.0.246
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am trying to use a GPT4ALL model with `MultiRouterChain` and it is throwing a weird error.
## Utility Function:
```
def create_transform_func(remove_index):
def my_transform_func(inputs: dict):
return transform_func(inputs, remove_index)
return my_transform_func
def transform_func(inputs: dict, remove_index = 5) -> dict:
text = inputs['input'].strip()
accumulate = ""
for i, s in enumerate(text.split(' ')):
i = i + 1
if i % remove_index != 0:
accumulate += f"{s} "
return {"input": accumulate.strip()}
```
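For context, here is a self-contained restatement of what the transform above does (drop every n-th word, counting from 1):

```python
def drop_every_nth_word(text, n):
    """Keep every word whose 1-based position is not a multiple of n."""
    kept = [word for i, word in enumerate(text.split(), start=1) if i % n != 0]
    return " ".join(kept)
```

With `n=5` this matches `transform_func`'s default of removing every fifth word.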
```
"""Callback Handler that prints to std out."""
from typing import Any, Dict, List, Optional, Union
from langchain.callbacks.base import BaseCallbackHandler
from langchain.schema import AgentAction, AgentFinish
from langchain.schema import LLMResult
from pathlib import Path
from datetime import datetime
import re
class FileCallbackHandler(BaseCallbackHandler):
"""Callback Handler that prints to std out."""
def __init__(self,
path: Path,
print_prompts: bool=False,
print_class: bool=False,
title: Optional[str] = "Conversation Log",
color: Optional[str] = None
) -> None:
"""Initialize callback handler."""
self.color = color
self.print_prompts = print_prompts
self.print_class = print_class
self.path = path
self.file_handle = open(path, 'w')
self.title = title
self.output_keys = []
self.output_values = []
def on_llm_start(
self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
) -> None:
"""Print out the prompts."""
if self.print_prompts:
self.file_handle.write(f"=============== PROMPTS ==================\n")
for prompt in prompts:
self.file_handle.write(f"{prompt}\n")
self.file_handle.write("\n")
self.file_handle.flush()
self.file_handle.write(f"============ END PROMPTS =================\n\n")
def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
"""Do nothing."""
self.file_handle.write(f"=============== LLM END ==================\n")
pass
def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
"""Do nothing."""
pass
def on_llm_error(
self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
) -> None:
"""Do nothing."""
pass
def on_chain_start(
self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any
) -> None:
"""Print out that we are entering a chain."""
if self.print_class:
self.file_handle.write(f"================ CLASS ===================\n")
class_name = serialized["name"]
self.file_handle.write(f">>> class: {class_name}\n")
self.file_handle.write(f"============== END CLASS =================\n\n")
self.file_handle.flush()
self.output_keys.append(list(inputs.keys()))
self.output_values.append(list(inputs.values()))
def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:
"""Print out that we finished a chain."""
# print("\n\033[1m> Finished chain.\033[0m")
# self.file_handle.close()
self.file_handle.write(f"================ OUTPUT ==================\n")
keys = []
values = []
for k, v in outputs.items():
keys.append(k)
values.append(v)
self.file_handle.write(f"{k}:\n")
self.file_handle.write(f"{v}\n\n")
self.output_keys.append(keys)
self.output_values.append(values)
self.file_handle.write(f"================ OUTPUT ==================\n")
self.file_handle.flush()
def on_chain_error(
self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
) -> None:
"""Do nothing."""
pass
def on_tool_start(
self,
serialized: Dict[str, Any],
input_str: str,
**kwargs: Any,
) -> None:
"""Do nothing."""
self.file_handle.write(datetime.today().strftime('%Y-%m-%d'))
self.file_handle.write("\n========")
self.file_handle.flush()
def on_agent_action(
self, action: AgentAction, color: Optional[str] = None, **kwargs: Any
) -> Any:
"""Run on agent action."""
self.file_handle.write(f">>> action: {action.log}")
def on_tool_end(
self,
output: str,
color: Optional[str] = None,
observation_prefix: Optional[str] = None,
llm_prefix: Optional[str] = None,
**kwargs: Any,
) -> None:
"""If not the final action, print out observation."""
if observation_prefix is not None:
self.file_handle.write(f"\n{observation_prefix}")
self.file_handle.write(output)
if llm_prefix is not None:
self.file_handle.write(f"\n{llm_prefix}")
self.file_handle.flush()
def on_tool_error(
self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
) -> None:
"""Do nothing."""
pass
def on_text(self, text: str, color: Optional[str] = None, end: str = "", **kwargs: Any) -> None:
"""Run when agent ends."""
self.file_handle.write(f"================ TEXT ===================\n")
self.file_handle.write(f"{text}\n")
self.file_handle.flush()
self.file_handle.write(f"============== END TEXT =================\n\n")
agent = extract_agent(text)
if agent != "":
self.output_keys.append([agent])
self.output_values.append([text])
def on_agent_finish(
self, finish: AgentFinish, color: Optional[str] = None, **kwargs: Any
) -> None:
"""Run on agent end."""
self.file_handle.write(f"{finish.log}\n")
self.file_handle.flush()
self.file_handle.close()
def create_html(self):
table: str = """
<table class="table table-striped">
<tr>
<th>
Agent
</th>
<th>
Type
</th>
<th>
Output
</th>
</tr>
"""
dedup_hash = set()
for keys, values in zip(self.output_keys, self.output_values):
for key, val in zip(keys, values):
if val not in dedup_hash:
dedup_hash.add(val)
else:
continue
agent = extract_agent(val)
table += (f"""
<tr>
<td>
</td>
<td>
<pre>{key}</pre>
</td>
<td>
<pre>{val}</pre>
</td>
</tr>
""" if agent == "" else f"""
<tr>
<td>{agent}</td>
<td></td>
<td></td>
</tr>
""")
table += "</table>"
target_file = f"{self.path.stem}.html"
with open(target_file, "w", encoding='utf-8') as f:
f.write(f"""
<html>
<head>
<meta charset="UTF-8" />
<link href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.0/dist/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-9ndCyUaIbzAi2FUVXJi0CjmCapSmO7SnpJef0486qhLnuZ2cdeRhO02iuK6FUUVM" crossorigin="anonymous">
<script src="https://cdn.jsdelivr.net/npm/bootstrap@5.3.0/dist/js/bootstrap.bundle.min.js" integrity="sha384-geWF76RCwLtnZ8qwWowPQNguL3RmwHVBC9FhGdlKrxdiJJigb/j/68SIy3Te4Bkz" crossorigin="anonymous"></script>
<style>
pre {{
white-space: pre-wrap;
}}
</style>
</head>
<body>
<div class="container-fluid">
<h1>{self.title}</h1>
<h2>{generate_timestamp()}</h2>
{table}
</div>
</body>
</html>
""")
print(f"Saved chat content to {target_file}")
def generate_timestamp():
# Get the current date and time
now = datetime.now()
# Get the weekday, day, month, year, and time in English
weekday = now.strftime("%A")
day = now.strftime("%d")
month = now.strftime("%B")
year = now.strftime("%Y")
time = now.strftime("%H:%M:%S")
# Create the timestamp string
timestamp = f"{weekday}, {day} {month} {year} {time}"
return timestamp
def extract_input(text):
return re.sub(r".+?'input':\s*'(.+)'}", r"\1", text)
def extract_agent(text):
regex = r"^([a-z\s]+)\:.+"
match = re.search(regex, text)
if match is None:
return ""
return re.sub(regex, r"\1", text)
```
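As a standalone sanity check of the `extract_agent` helper above: the character class is lower-case only, so a destination that starts with a capital letter (like the `'UK'` in the error later in this report) falls through to `""`. This sketch is self-contained and illustrative only:

```python
import re

def extract_agent(text):
    # Same regex as the handler above: lower-case words before a colon.
    regex = r"^([a-z\s]+)\:.+"
    match = re.search(regex, text)
    if match is None:
        return ""
    return re.sub(regex, r"\1", text)

print(extract_agent("poet: write me a poem"))   # poet
print(extract_agent("Poet: write me a poem"))   # "" — capital letter does not match
```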
## Driver Code
```
from langchain.chains.router import MultiRouteChain, RouterChain
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationChain
from langchain.chains.llm import LLMChain
from langchain.prompts import PromptTemplate
from langchain.chains.router.llm_router import LLMRouterChain, RouterOutputParser
from langchain.chains.router.multi_prompt_prompt import MULTI_PROMPT_ROUTER_TEMPLATE
from langchain.chains import SimpleSequentialChain, TransformChain
from prompt_toolkit import HTML, prompt
import langchain.callbacks
from replace_function import create_transform_func
from langchain.callbacks import StdOutCallbackHandler
from FileCallbackHandler import FileCallbackHandler
from pathlib import Path
from typing import Mapping, List, Union
import openai, os
from langchain.llms import GPT4All, LlamaCpp, OpenAI, AzureOpenAI, SelfHostedHuggingFaceLLM, HuggingFacePipeline
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
file_ballback_handler = FileCallbackHandler(Path('router_chain.txt'), print_prompts=True)
class GPT4_Config():
current_path = os.path.dirname(os.path.realpath(__file__))
llm = GPT4All(
model=os.path.abspath(os.path.join(current_path, r"../../llms/gpt4all/ggml-gpt4all-j-v1.3-groovy.bin")),
# n_ctx=1000,
verbose=True,
backend='gptj',
callbacks=[StreamingStdOutCallbackHandler()]
)
cfg = GPT4_Config()
class PromptFactory():
developer_template = """You are a very smart Python programmer. \
You provide answers for algorithmic and computer science problems in Python. \
You explain the code in a detailed manner. \
Here is a question:
{input}
Answer:"""
python_test_developer_template = """You are a very smart Python programmer who writes unit tests using pytest. \
You provide test functions written in pytest with asserts. \
You explain the code in a detailed manner. \
Here is an input on which you create a test:
{input}
Answer:"""
kotlin_developer_template = """You are a very smart Kotlin programmer. \
You provide answers for algorithmic and computer science problems in Kotlin. \
You explain the code in a detailed manner. \
Here is a question:
{input}
Answer:"""
kotlin_test_developer_template = """You are a very smart Kotlin programmer who writes unit tests using JUnit 5. \
You provide test functions written in JUnit 5 with JUnit asserts. \
You explain the code in a detailed manner. \
Here is an input on which you create a test:
{input}
Answer:"""
poet_template = """You are a poet who replies to creative requests with poems in English. \
You provide answers which are poems in the style of Lord Byron or Shakespeare. \
Here is a question:
{input}
Answer:"""
wiki_template = """You are a Wikipedia expert. \
You answer common knowledge questions based on Wikipedia knowledge. \
Your explanations are detailed and in plain English.
Here is a question:
{input}
Answer:"""
image_creator_template = """You are a creator of images. \
You provide graphic representations of answers using SVG images.
Here is a question:
{input}
Answer:"""
legal_expert_template = """You are a UK or US legal expert. \
You explain questions related to the UK or US legal systems in an accessible language \
with a good number of examples.
Here is a question:
{input}
Answer:"""
word_filler = """Your job is to fill the words in a sentence in which words seems to be missing.
Here is the input:
{input}
Answer:"""
python_programmer = 'python programmer'
kotlin_programmer = 'kotlin programmer'
programmer_test_dict = {
python_programmer: python_test_developer_template,
kotlin_programmer: kotlin_test_developer_template
}
word_filler_name = 'word filler'
prompt_infos = [
{
'name': python_programmer,
'description': 'Good for questions about coding and algorithms in Python',
'prompt_template': developer_template
},
{
'name': 'python tester',
'description': 'Good for generating Python tests from existing Python code',
'prompt_template': python_test_developer_template
},
{
'name': kotlin_programmer,
'description': 'Good for questions about coding and algorithms in Kotlin',
'prompt_template': kotlin_developer_template
},
{
'name': 'kotlin tester',
'description': 'Good for generating Kotlin tests from existing Kotlin code',
'prompt_template': kotlin_test_developer_template
},
{
'name': 'poet',
'description': 'Good for generating poems for creative questions',
'prompt_template': poet_template
},
{
'name': 'wikipedia expert',
'description': 'Good for answering questions about general knowledge',
'prompt_template': wiki_template
},
{
'name': 'graphical artist',
'description': 'Good for answering questions which require an image output',
'prompt_template': image_creator_template
},
{
'name': 'legal expert',
'description': 'Good for answering questions which are related to UK or US law',
'prompt_template': legal_expert_template
},
{
'name': word_filler_name,
'description': 'Good at filling words in sentences with missing words',
'prompt_template': word_filler
}
]
class MyMultiPromptChain(MultiRouteChain):
"""A multi-route chain that uses an LLM router chain to choose amongst prompts."""
router_chain: RouterChain
"""Chain for deciding a destination chain and the input to it."""
destination_chains: Mapping[str, Union[LLMChain, SimpleSequentialChain]]
"""Map of name to candidate chains that inputs can be routed to."""
default_chain: LLMChain
"""Default chain to use when router doesn't map input to one of the destinations."""
@property
def output_keys(self) -> List[str]:
return ["text"]
def generate_destination_chains():
"""
Creates a list of LLM chains with different prompt templates.
Note that some of the chains are sequential chains which are supposed to generate unit tests.
"""
prompt_factory = PromptFactory()
destination_chains = {}
for p_info in prompt_factory.prompt_infos:
print("="*70)
name = p_info['name']
print(f"======= Prompt Name =======: \n{name}")
prompt_template = p_info['prompt_template']
print(f"======= Prompt Template =======: \n{prompt_template}")
# callbacks = [StdOutCallbackHandler]
chain = LLMChain(
llm=cfg.llm,
prompt=PromptTemplate(template=prompt_template, input_variables=['input']),
output_key='text',
callbacks=[file_ballback_handler],
verbose=True
)
if name not in prompt_factory.programmer_test_dict.keys() and name != prompt_factory.word_filler_name:
print("Addition using 1st case")
destination_chains[name] = chain
elif name == prompt_factory.word_filler_name:
print("Addition using 2nd case")
transform_chain = TransformChain(
input_variables=["input"], output_variables=["input"], transform=create_transform_func(3), callbacks=[file_ballback_handler]
)
destination_chains[name] = SimpleSequentialChain(
chains=[transform_chain, chain], verbose=True, output_key='text', callbacks=[file_ballback_handler]
)
else:
print("Addition using 3rd case")
# Normal chain is used to generate code
# Additional chain to generate unit tests
template = prompt_factory.programmer_test_dict[name]
prompt_template = PromptTemplate(input_variables=["input"], template=template)
test_chain = LLMChain(llm=cfg.llm, prompt=prompt_template, output_key='text', callbacks=[file_ballback_handler])
destination_chains[name] = SimpleSequentialChain(
chains=[chain, test_chain], verbose=True, output_key='text', callbacks=[file_ballback_handler]
)
print("="*70)
print("\n\n\n")
print("============= Destination Chains =============")
pprint(destination_chains)
default_chain = ConversationChain(llm=cfg.llm, output_key="text")
return prompt_factory.prompt_infos, destination_chains, default_chain
def generate_router_chain(prompt_infos, destination_chains, default_chain):
"""
Generates the router chain from the prompt infos.
:param prompt_infos The prompt information generated above.
:param destination_chains The LLM chains with different prompt templates
:param default_chain A default chain
"""
destinations = [f"{p['name']}: {p['description']}" for p in prompt_infos]
print("====================== DESTINATIONS ======================")
print(destinations)
destinations_str = '\n'.join(destinations)
print("====================== DESTINATIONS STRINGS ======================")
print(destinations_str)
router_template = MULTI_PROMPT_ROUTER_TEMPLATE.format(destinations=destinations_str)
print("====================== ROUTER TEMPLATE ======================")
print(router_template)
router_prompt = PromptTemplate(
template=router_template,
input_variables=['input'],
output_parser=RouterOutputParser()
)
print("====================== PROMPT TEMPLATE ======================")
print(router_prompt)
router_chain = LLMRouterChain.from_llm(cfg.llm, router_prompt)
multi_route_chain = MyMultiPromptChain(
router_chain=router_chain,
destination_chains=destination_chains,
default_chain=default_chain,
verbose=True,
callbacks=[file_ballback_handler]
)
print("====================== MULTI ROUTER CHAIN ======================")
print(multi_route_chain)
return multi_route_chain
if __name__ == "__main__":
prompt_infos, destination_chains, default_chain = generate_destination_chains()
chain = generate_router_chain(prompt_infos, destination_chains, default_chain)
with open('conversation.log', 'w') as f:
while True:
question = prompt(
HTML("<b>Type <u>Your question</u></b> ('q' to exit, 's' to save to html file): ")
)
if question == 'q':
break
if question in ['s', 'w'] :
file_ballback_handler.create_html()
continue
result = chain.run(question)
f.write(f"Q: {question}\n\n")
f.write(f"A: {result}")
f.write('\n\n ====================================================================== \n\n')
print(result)
print()
```
## Error:
```
> Entering new MyMultiPromptChain chain...
/home/gpu-titan/anaconda3/envs/Seer/lib/python3.9/site-packages/langchain/chains/llm.py:275: UserWarning: The predict_and_parse method is deprecated, instead pass an output parser directly to LLMChain.
warnings.warn(
{
"destination": "UK",
"next_inputs": "['Inheritance Tax', 'Legal System']"}UK: {'input': "['Inheritance Tax', 'Legal System']"}Traceback (most recent call last):
File "/home/gpu-titan/Desktop/Ramish/Seer/seer_main/complex_chain/complex_chain.py", line 360, in <module>
result = chain.run(question)
File "/home/gpu-titan/anaconda3/envs/Seer/lib/python3.9/site-packages/langchain/chains/base.py", line 451, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
File "/home/gpu-titan/anaconda3/envs/Seer/lib/python3.9/site-packages/langchain/chains/base.py", line 258, in __call__
raise e
File "/home/gpu-titan/anaconda3/envs/Seer/lib/python3.9/site-packages/langchain/chains/base.py", line 252, in __call__
self._call(inputs, run_manager=run_manager)
File "/home/gpu-titan/anaconda3/envs/Seer/lib/python3.9/site-packages/langchain/chains/router/base.py", line 106, in _call
raise ValueError(
ValueError: Received invalid destination chain name 'UK'
```
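One defensive sketch for this failure mode: normalize the router's destination against the known chain names and fall back to the default chain instead of raising, so an invented destination like `'UK'` does not crash the run. The names below are illustrative stand-ins, not the langchain API:

```python
# Map a router destination the LLM invented onto a known chain name,
# falling back to a default instead of raising ValueError.
def resolve_destination(destination, known_chains, default="DEFAULT"):
    d = destination.strip().lower()
    for name in known_chains:
        if d == name.lower() or d in name.lower() or name.lower() in d:
            return name
    return default

known = ["python programmer", "legal expert", "wikipedia expert"]
print(resolve_destination("legal expert", known))  # legal expert
print(resolve_destination("UK", known))            # DEFAULT
```

A real fix would apply this mapping (or a stricter router prompt) before `MultiRouteChain` dispatches to `destination_chains`.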
### Expected behavior
The prompt is routed to the correct destination chain and the correct answer is returned.
## Expected Output:
```
> Finished chain.
{'question': 'What are the main differences between the UK and US legal systems in terms of the inheritance tax?', 'text': ' The
main difference between the UK and US legal systems regarding inheritance taxes lies in their respective approaches to
calculating, taxing, and paying them out. In both countries, there may be variations depending on factors such as wealth or
family structure. However, some key differences include:\n1) Inheritance Tax Rates - While the rates of inheritance tax are similar
between the two jurisdictions (7% for assets over £325k in England/Wales; 7.5% for assets over $675k in Scotland), there may be
variations depending on factors such as wealth or family structure, which can affect how much is owed and when it should be
paid out to beneficiaries.\n2) Inheritance Tax Rules - The UK has a more complex inheritance tax system than the US, with
different rules governing who qualifies for an exemption from paying inheritance tax (e.g., married couples vs unmarried
individuals). In addition, there may be variations in how assets are taxed and when they should be transferred to beneficiaries
before death.\n3) Inheritance Tax Planning - Both countries offer various strategies such as trusts or wills that can help reduce
the amount of inheritance tax owed by transferring wealth out-of-estate at a lower rate than would otherwise apply, but with
different rules governing how these plans are set up'}
``` | MultiRouteChain not working as expected | https://api.github.com/repos/langchain-ai/langchain/issues/9600/comments | 4 | 2023-08-22T10:31:29Z | 2024-06-15T23:43:20Z | https://github.com/langchain-ai/langchain/issues/9600 | 1,861,170,095 | 9,600 |
[
"langchain-ai",
"langchain"
] | ### System Info
(.venv) yale@LAPTOP-MATEBOOK:~/work/llm-app$ python --version
Python 3.10.12
(.venv) yale@LAPTOP-MATEBOOK:~/work/llm-app$ pip list|grep langchain
langchain 0.0.262
langchainplus-sdk 0.0.20
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Update the official example of Tool https://python.langchain.com/docs/modules/agents/tools/custom_tools with a tool that takes no arguments:

```python
class NoParameterInput(BaseModel):
    pass

tools = [
    Tool(
        name="GetToday",
        func=lambda: "2023-09-30",
        description="Get today's date",
        args_schema=NoParameterInput
    ),
]
```
When running, an error was triggered:

```
Traceback (most recent call last):
  File "/home/yale/work/llm-app/.venv/lib/python3.10/site-packages/gradio/routes.py", line 437, in run_predict
    output = await app.get_blocks().process_api(
  File "/home/yale/work/llm-app/.venv/lib/python3.10/site-packages/gradio/blocks.py", line 1352, in process_api
    result = await self.call_function(
  File "/home/yale/work/llm-app/.venv/lib/python3.10/site-packages/gradio/blocks.py", line 1077, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/home/yale/work/llm-app/.venv/lib/python3.10/site-packages/anyio/to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/home/yale/work/llm-app/.venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "/home/yale/work/llm-app/.venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "/home/yale/work/llm-app/local_tests/gradio/langchain_chatbot.py", line 356, in qa_answer_question
    qa_answer = chain(question)
  File "/home/yale/work/llm-app/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 282, in __call__
    raise e
  File "/home/yale/work/llm-app/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 276, in __call__
    self._call(inputs, run_manager=run_manager)
  File "/home/yale/work/llm-app/.venv/lib/python3.10/site-packages/langchain/chains/router/base.py", line 100, in _call
    return self.destination_chains[route.destination](
  File "/home/yale/work/llm-app/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 282, in __call__
    raise e
  File "/home/yale/work/llm-app/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 276, in __call__
    self._call(inputs, run_manager=run_manager)
  File "/home/yale/work/llm-app/.venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 1036, in _call
    next_step_output = self._take_next_step(
  File "/home/yale/work/llm-app/.venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 891, in _take_next_step
    observation = tool.run(
  File "/home/yale/work/llm-app/.venv/lib/python3.10/site-packages/langchain/tools/base.py", line 340, in run
    raise e
  File "/home/yale/work/llm-app/.venv/lib/python3.10/site-packages/langchain/tools/base.py", line 331, in run
    tool_args, tool_kwargs = self._to_args_and_kwargs(parsed_input)
  File "/home/yale/work/llm-app/.venv/lib/python3.10/site-packages/langchain/tools/base.py", line 488, in _to_args_and_kwargs
    raise ToolException(
langchain.tools.base.ToolException: Too many arguments to single-input tool GetToday. Args: []
```
The source of `_to_args_and_kwargs()`:

```python
def _to_args_and_kwargs(self, tool_input: Union[str, Dict]) -> Tuple[Tuple, Dict]:
    """Convert tool input to pydantic model."""
    args, kwargs = super()._to_args_and_kwargs(tool_input)
    # For backwards compatibility. The tool must be run with a single input
    all_args = list(args) + list(kwargs.values())
    if len(all_args) != 1:
        raise ToolException(
            f"Too many arguments to single-input tool {self.name}."
            f" Args: {all_args}"
        )
    return tuple(all_args), {}
```
This check requires exactly one argument, so it rejects tools that take no arguments at all.
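A minimal stand-in for that check, plus one possible workaround: give the tool a callable that accepts and ignores a dummy single string input. None of this is the real langchain API; it only mirrors the validation quoted above:

```python
# Minimal reproduction of the single-input validation (illustrative).
def to_args_and_kwargs(args, kwargs):
    all_args = list(args) + list(kwargs.values())
    if len(all_args) != 1:
        raise ValueError(f"Too many arguments to single-input tool. Args: {all_args}")
    return tuple(all_args), {}

# A zero-argument schema yields no args, so the check fails:
try:
    to_args_and_kwargs((), {})
except ValueError as e:
    print(e)  # Too many arguments to single-input tool. Args: []

# Workaround sketch: accept and discard one dummy string input.
get_today = lambda _: "2023-09-30"
args, _kw = to_args_and_kwargs(("",), {})
print(get_today(*args))  # 2023-09-30
```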
### Expected behavior
Calls GetToday() successfully. | _to_args_and_kwargs() failed to handle tool definition with no arguments | https://api.github.com/repos/langchain-ai/langchain/issues/9599/comments | 2 | 2023-08-22T10:20:56Z | 2023-11-28T16:07:40Z | https://github.com/langchain-ai/langchain/issues/9599 | 1,861,150,371 | 9,599 |
[
"langchain-ai",
"langchain"
] | Hi Team,
Is there a schedule for supporting plugins with auth?
thanks. | Schedule for supporting plugins with auth | https://api.github.com/repos/langchain-ai/langchain/issues/9597/comments | 1 | 2023-08-22T09:49:48Z | 2023-11-28T16:07:45Z | https://github.com/langchain-ai/langchain/issues/9597 | 1,861,082,845 | 9,597 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain version 0.0.270
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Create multiple instances of ChatMessageHistory and add messages to each. They share the same underlying list.
### Expected behavior
`messages: List[BaseMessage] = []`
to
`messages: List[BaseMessage] = Field(default_factory=list)`
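The pitfall behind this fix can be shown with a plain-Python analogue (the real class is a pydantic model; the classes below are illustrative, not langchain code):

```python
# A class-level mutable default is one shared object across all instances.
class SharedHistory:
    messages = []                 # shared by ALL instances

a, b = SharedHistory(), SharedHistory()
a.messages.append("hi")
print(b.messages)                 # ['hi'] — b sees a's message

# Per-instance construction, the behavior Field(default_factory=list) gives:
class FixedHistory:
    def __init__(self):
        self.messages = []        # fresh list per instance

c, d = FixedHistory(), FixedHistory()
c.messages.append("hi")
print(d.messages)                 # []
```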
Fix [here](https://github.com/langchain-ai/langchain/pull/9594) | List in ChatMessageHistory is not correctly initialized | https://api.github.com/repos/langchain-ai/langchain/issues/9595/comments | 1 | 2023-08-22T09:38:59Z | 2023-08-25T12:08:49Z | https://github.com/langchain-ai/langchain/issues/9595 | 1,861,060,406 | 9,595 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Add an option to specify dtype (e.g. `float16`) in the `langchain.llms.VLLM()` constructor.
### Motivation
Currently, the `vllm` library provides an argument to select the `float16` dtype, but langchain doesn't expose it.
There is no way to use langchain VLLM with a GPU < 8.0 compute capability.
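A minimal sketch of the requested plumbing: a wrapper that accepts `dtype` and forwards it to the underlying engine. `FakeEngine` and `VLLMWrapper` are hypothetical stand-ins, not the real vllm or langchain API:

```python
# FakeEngine stands in for the inference engine that already accepts dtype.
class FakeEngine:
    def __init__(self, model, dtype="auto", **kwargs):
        self.model, self.dtype = model, dtype

# The wrapper just needs to pass dtype through alongside other engine kwargs.
class VLLMWrapper:
    def __init__(self, model, dtype="auto", **engine_kwargs):
        self.client = FakeEngine(model, dtype=dtype, **engine_kwargs)

llm = VLLMWrapper("some-7b-model", dtype="float16")
print(llm.client.dtype)  # float16
```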


### Your contribution
I can contribute with guideline from langchain team | Allow specifying dtype in `langchain.llms.VLLM` | https://api.github.com/repos/langchain-ai/langchain/issues/9593/comments | 1 | 2023-08-22T09:33:12Z | 2023-08-23T03:30:28Z | https://github.com/langchain-ai/langchain/issues/9593 | 1,861,049,462 | 9,593 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain Version: 0.0.260
Python Version: 3.11.4
Operating System: ubuntu-22.04
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
After updating from version 0.0.256 to 0.0.260, I noticed that a TransformChain subclass that used to accept an additional argument (memory) no longer works. This regression might be related to the changes made in [Pull Request #8762](https://github.com/langchain-ai/langchain/pull/8762).
### Relevant Code:
Here's a snippet of the MySubclassedTransformChain:
```python
class MySubclassedTransformChain(TransformChain):
memory: BaseChatMemory
def _call(
self,
inputs: Dict[str, str],
run_manager: Optional[CallbackManagerForChainRun] = None,
) -> Dict[str, str]:
return self.transform(inputs, memory=self.memory)
async def _acall(
self,
inputs: Dict[str, Any],
run_manager: Optional[AsyncCallbackManagerForChainRun] = None,
) -> Dict[str, Any]:
if self.atransform is not None:
return await self.atransform(inputs, memory=self.memory)
else:
self._log_once(
"TransformChain's atransform is not provided, falling"
" back to synchronous transform"
)
return self.transform(inputs, memory=self.memory)
```
Usage:
```
transform_chain = MySubclassedTransformChain(input_variables=["input"], output_variables=["output"], transform=transform_func, memory=memory)
llm_chain = LLMChain(llm=llm, prompt=prompt, memory=memory)
sequential_chain = SimpleSequentialChain(chains=[transform_chain, llm_chain])
sequential_chain.run(message)
```
Exception Stack Trace:
```
app-1 | ERROR: Exception in ASGI application
app-1 | Traceback (most recent call last):
app-1 | File "/usr/local/lib/python3.11/site-packages/uvicorn/protocols/http/h11_impl.py", line 408, in run_asgi
app-1 | result = await app( # type: ignore[func-returns-value]
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
app-1 | return await self.app(scope, receive, send)
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/fastapi/applications.py", line 289, in __call__
app-1 | await super().__call__(scope, receive, send)
app-1 | File "/usr/local/lib/python3.11/site-packages/sentry_sdk/integrations/starlette.py", line 348, in _sentry_patched_asgi_app
app-1 | return await middleware(scope, receive, send)
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/sentry_sdk/integrations/asgi.py", line 141, in _run_asgi3
app-1 | return await self._run_app(scope, lambda: self.app(scope, receive, send))
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/sentry_sdk/integrations/asgi.py", line 190, in _run_app
app-1 | raise exc from None
app-1 | File "/usr/local/lib/python3.11/site-packages/sentry_sdk/integrations/asgi.py", line 185, in _run_app
app-1 | return await callback()
app-1 | ^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/starlette/applications.py", line 122, in __call__
app-1 | await self.middleware_stack(scope, receive, send)
app-1 | File "/usr/local/lib/python3.11/site-packages/sentry_sdk/integrations/starlette.py", line 143, in _create_span_call
app-1 | return await old_call(app, scope, new_receive, new_send, **kwargs)
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 184, in __call__
app-1 | raise exc
app-1 | File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 162, in __call__
app-1 | await self.app(scope, receive, _send)
app-1 | File "/usr/local/lib/python3.11/site-packages/sentry_sdk/integrations/starlette.py", line 242, in _sentry_exceptionmiddleware_call
app-1 | await old_call(self, scope, receive, send)
app-1 | File "/usr/local/lib/python3.11/site-packages/sentry_sdk/integrations/starlette.py", line 143, in _create_span_call
app-1 | return await old_call(app, scope, new_receive, new_send, **kwargs)
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
app-1 | raise exc
app-1 | File "/usr/local/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
app-1 | await self.app(scope, receive, sender)
app-1 | File "/usr/local/lib/python3.11/site-packages/sentry_sdk/integrations/starlette.py", line 143, in _create_span_call
app-1 | return await old_call(app, scope, new_receive, new_send, **kwargs)
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 20, in __call__
app-1 | raise e
app-1 | File "/usr/local/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 17, in __call__
app-1 | await self.app(scope, receive, send)
app-1 | File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 718, in __call__
app-1 | await route.handle(scope, receive, send)
app-1 | File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 276, in handle
app-1 | await self.app(scope, receive, send)
app-1 | File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 66, in app
app-1 | response = await func(request)
app-1 | ^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/sentry_sdk/integrations/fastapi.py", line 131, in _sentry_app
app-1 | return await old_app(*args, **kwargs)
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/fastapi/routing.py", line 273, in app
app-1 | raw_response = await run_endpoint_function(
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/fastapi/routing.py", line 190, in run_endpoint_function
app-1 | return await dependant.call(**values)
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/app/app/main.py", line 47, in post_message
app-1 | result = await handle_message(message)
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/app/app/message_chain.py", line 20, in handle_message
app-1 | assistant_result = assistant_chain.run(extraction_result, memory=manager.memory)
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/app/app/assistant_chain.py", line 63, in run
app-1 | response = sequential_chain.run(message)
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 475, in run
app-1 | return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 282, in __call__
app-1 | raise e
app-1 | File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 276, in __call__
app-1 | self._call(inputs, run_manager=run_manager)
app-1 | File "/usr/local/lib/python3.11/site-packages/langchain/chains/sequential.py", line 180, in _call
app-1 | _input = chain.run(_input, callbacks=_run_manager.get_child(f"step_{i+1}"))
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 475, in run
app-1 | return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 282, in __call__
app-1 | raise e
app-1 | File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 276, in __call__
app-1 | self._call(inputs, run_manager=run_manager)
app-1 | File "/app/app/langchain_extensions.py", line 14, in _call
app-1 | return self.transform(inputs, memory=self.memory)
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | TypeError: Runnable.transform() got an unexpected keyword argument 'memory'
```
Transform Function:
```
def transform_func(inputs: dict, memory: BaseChatMemory) -> dict:
text = inputs["text"]
# mutate text with memory
return {"output": text}
```
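One workaround sketch that avoids the signature clash entirely: close over the memory object so the transform keeps the single-argument signature the base class expects. Names here are illustrative, and `memory` is a plain dict rather than a real `BaseChatMemory`:

```python
# Build a transform function that captures `memory` in a closure instead of
# receiving it as an extra keyword argument.
def make_transform(memory):
    def transform_func(inputs):
        text = inputs["input"]
        memory.setdefault("seen", []).append(text)  # example use of memory
        return {"output": text}
    return transform_func

memory_store = {}
fn = make_transform(memory_store)
print(fn({"input": "hello"}))   # {'output': 'hello'}
print(memory_store)             # {'seen': ['hello']}
```

The resulting `fn` can be handed to a plain `TransformChain(transform=fn, ...)` without subclassing.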
### Expected behavior
The subclass should handle additional arguments in MySubclassedTransformChain as it did in version 0.0.256. | Regression: Additional arguments in TransformChain subclass no longer work | https://api.github.com/repos/langchain-ai/langchain/issues/9587/comments | 2 | 2023-08-22T07:34:56Z | 2023-11-28T16:07:50Z | https://github.com/langchain-ai/langchain/issues/9587 | 1,860,801,274 | 9,587 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain version: 0.0.267
### Who can help?
@hwchase17
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
# Redis vector database initialization
```python
embeddings = OpenAIEmbeddings(openai_api_key=os.getenv("OPENAI_API_TYPE"),
                              deployment=os.getenv("OPENAI_EMBEDDING_MODEL_NAME"),
                              client="azure",
                              chunk_size=1)
redis_vector_db = Redis(
    redis_url="redis://localhost:6379",
    index_name="test_1",
    embedding_function=embeddings.embed_query
)
```
# Load pdf
```python
loader = PyPDFLoader(file_path)
docs = loader.load_and_split()
redis_vector_db.add_documents(docs, index_name="test_1")
```
# Searching --> bug here
```python
docs = redis_vector_db.similarity_search(query=question, k=4)
```
-------------------------- This is the end of python script ---------------------------------------
### Expected behavior
# Bug: even though the index name is set, `similarity_search` fails with "no such index"
```
File c:\Users\User\anaconda3\envs\advantech\lib\site-packages\langchain\vectorstores\redis.py:284, in Redis.similarity_search(self, query, k, **kwargs)
    271 def similarity_search(
    272     self, query: str, k: int = 4, **kwargs: Any
    273 ) -> List[Document]:
    274     """
    275     Returns the most similar indexed documents to the query text.
    276
    (...)
    282         List[Document]: A list of documents that are most similar to the query text.
    283     """
--> 284     docs_and_scores = self.similarity_search_with_score(query, k=k)
    285     return [doc for doc, _ in docs_and_scores]

File c:\Users\User\anaconda3\envs\advantech\lib\site-packages\langchain\vectorstores\redis.py:361, in Redis.similarity_search_with_score(self, query, k)
    354 params_dict: Mapping[str, str] = {
    355     "vector": np.array(embedding)  # type: ignore
...
    904 if isinstance(response, ResponseError):
--> 905     raise response
    906 return response

ResponseError: test_1: no such index
```
# My observation
```python
print(redis_vector_db.index_name)
```
will show test_1 | Direct initial redis database can't successfully use searching function since index missing | https://api.github.com/repos/langchain-ai/langchain/issues/9585/comments | 6 | 2023-08-22T07:24:40Z | 2024-02-11T16:16:17Z | https://github.com/langchain-ai/langchain/issues/9585 | 1,860,784,286 | 9,585 |
[
"langchain-ai",
"langchain"
] | ### Feature request
```
if self.show_progress_bar:
try:
import tqdm
_iter = tqdm.tqdm(range(0, len(tokens), _chunk_size))
except ImportError:
_iter = range(0, len(tokens), _chunk_size)
```
Current code does not work very well on jupyter notebook, so replace it with `tqdm.auto`
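The proposed replacement might look like this — a sketch, where `tokens` and `_chunk_size` are placeholders for the real values in the snippet above, and the fallback mirrors the existing `ImportError` handling:

```python
tokens = list(range(10))  # placeholder for the real token list
_chunk_size = 4           # placeholder chunk size

try:
    from tqdm.auto import tqdm  # resolves to a notebook-friendly bar under Jupyter
    _iter = tqdm(range(0, len(tokens), _chunk_size))
except ImportError:
    _iter = range(0, len(tokens), _chunk_size)

chunks = [tokens[i : i + _chunk_size] for i in _iter]
print(chunks)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

`tqdm.auto` picks the widget-based bar inside notebooks and the plain terminal bar elsewhere, so the same code behaves well in both environments.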
### Motivation
It just does not work very well on jupyter notebook, especially if there's any warning
### Your contribution
I'll make a PR | Replace `tqdm` with `tqdm.auto` | https://api.github.com/repos/langchain-ai/langchain/issues/9582/comments | 2 | 2023-08-22T06:56:51Z | 2023-08-23T00:41:43Z | https://github.com/langchain-ai/langchain/issues/9582 | 1,860,735,901 | 9,582 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain version 0.0.270
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
cd lib/experimental
make test
```
============================================================================= short test summary info =============================================================================
FAILED tests/unit_tests/test_smartllm.py::test_all_steps - IndexError: list index out of range
FAILED tests/unit_tests/test_smartllm.py::test_intermediate_output - IndexError: list index out of range
========== 2 failed, 28 passed, 16 warnings in 4.37s =========
```
### Expected behavior
No error | make test in experimental crash | https://api.github.com/repos/langchain-ai/langchain/issues/9581/comments | 2 | 2023-08-22T06:49:07Z | 2023-09-19T08:47:29Z | https://github.com/langchain-ai/langchain/issues/9581 | 1,860,722,360 | 9,581 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Add support for DocAI from Google Cloud as a pdf parser
### Motivation
It might be faster for large samples of documents (and maybe better in some cases).
### Your contribution
yep, I'm happy to | Add support for DocAI from Google Cloud as a pdf parser | https://api.github.com/repos/langchain-ai/langchain/issues/9578/comments | 2 | 2023-08-22T05:52:46Z | 2023-11-28T16:07:55Z | https://github.com/langchain-ai/langchain/issues/9578 | 1,860,625,662 | 9,578 |
[
"langchain-ai",
"langchain"
] | ### System Info
HuggingFaceEndpoint returns an empty string both when prompted via the `._call()` method and when used as an LLM in a QA chain. Examples of what I've tried are below:
```
from langchain.llms import HuggingFaceEndpoint
from langchain.chains.question_answering import load_qa_chain
from langchain.vectorstores import FAISS
from langchain.embeddings import HuggingFaceEmbeddings
### Example 1 -- returns empty string
endpoint_url = (
"my-real-endpoint-here"
)
llm = HuggingFaceEndpoint(
endpoint_url = endpoint_url,
huggingfacehub_api_token = os.environ['HUGGING_FACE_HUB_TOKEN'],
task = 'text2text-generation',
model_kwargs = {'temperature': 1e-20, "max_length": 900},
)
llm._call(prompt = "What is 4 + 4? Think the question through, step by step.")
>>> ""
### Example 2 -- used in QA chain and also returns empty string
### (exact same 'llm' object)
### text splitter
text_splitter = RecursiveCharacterTextSplitter(chunk_size = 1000, chunk_overlap = 15)
### embeddings
embeddings = HuggingFaceEmbeddings()
chain = load_qa_chain(llm, chain_type = "stuff", verbose = False)
def extract_from_case(file_str: str, prompt_question: str):
docs = text_splitter.split_text(file_str)
db = FAISS.from_texts(docs, embeddings)
docs = db.similarity_search(prompt_question)
extraction = chain.run(input_documents = docs, question = prompt_question)
return extraction
sample_file = """Metadata: Date: 2017-01-18 File number: CEL-62600-16 CEL-62600-16 Citation: CEL-62600-16 (Re), 2017, retrieved on 2023-05-16. Content: Arrears Worksheet File Number: CEL-62600-16 Time period for Arrears Owing From: September 1, 2016 to"""
prompt = "What is the file number of this case?"
extract_from_case(file_str = sample_file, prompt_question = prompt)
>>> ""
```
### Who can help?
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
See above.
### Expected behavior
To return a non-empty response. | HuggingFaceEndpoint Returns an Empty String | https://api.github.com/repos/langchain-ai/langchain/issues/9576/comments | 2 | 2023-08-22T03:52:51Z | 2023-11-28T16:08:00Z | https://github.com/langchain-ai/langchain/issues/9576 | 1,860,497,298 | 9,576 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
I'm getting this:
Page Not Found
We could not find what you were looking for.
Please contact the owner of the site that linked you to the original URL and let them know their link is broken.
When visiting:
- https://python.langchain.com/docs/modules/chains/additional/
- https://python.langchain.com/docs/modules/chains/popular/
### Idea or request for content:
_No response_ | Additional/Popular chains Page Not Found | https://api.github.com/repos/langchain-ai/langchain/issues/9575/comments | 2 | 2023-08-22T03:24:06Z | 2023-11-28T16:08:05Z | https://github.com/langchain-ai/langchain/issues/9575 | 1,860,477,232 | 9,575 |
[
"langchain-ai",
"langchain"
] | ### Feature request
It might be very helpful to let users react to each page of the documentation, so that contributors, users, and people working with the docs know which pages are outdated or no longer usable.
### Motivation
I saw many inconsistencies across the docs, and some code samples are no longer usable as the dependencies update. It would be very helpful to have an indicator of whether a given page is still something we can rely on.
### Your contribution
I could contribute on the front-end but not sure how to configure it on langchain's database. | Adding thumbs-up and thumbs-down for documentations | https://api.github.com/repos/langchain-ai/langchain/issues/9559/comments | 1 | 2023-08-21T19:23:20Z | 2023-11-27T16:06:16Z | https://github.com/langchain-ai/langchain/issues/9559 | 1,859,998,066 | 9,559 |
[
"langchain-ai",
"langchain"
] | ### System Info
I am trying to use a template to pre-customize the AI, but my code is not working!
Any idea why it's not working?
This is the basic part of my code:
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
template = """You are a customer service representative working for Amazon. You are having conversations with customers.
When asked about your profession, you should respond that you are a customer service representative for Amazon.
{memory}
#Human: {human_input}
#Chatbot:"""

prompt = PromptTemplate(input_variables=["memory", "human_input"], template=template)
memory = ConversationBufferMemory(memory_key="memory", prompt=prompt, return_messages=True)
llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613", verbose=True)

def get_agent(self):
    agent_kwargs = {
        "extra_prompt_messages": [MessagesPlaceholder(variable_name="memory")],
    }
    agent = initialize_agent(
        tools=self.tools,
        llm=llm,
        agent=AgentType.OPENAI_FUNCTIONS,
        verbose=True,
        agent_kwargs=agent_kwargs,
        memory=memory,
        prompt=prompt,
    )
    return agent
```
### Expected behavior
When I ask the AI what its profession is, it says
"I am an AI assistant designed to provide helpful information and assist with various tasks" instead of
"you are a customer service representative for Amazon" | using PromptTemplate with initialze_agent | https://api.github.com/repos/langchain-ai/langchain/issues/9553/comments | 8 | 2023-08-21T17:47:57Z | 2023-11-27T16:06:21Z | https://github.com/langchain-ai/langchain/issues/9553 | 1,859,861,388 | 9,553 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I appreciate your efforts to show how to integrate Hugging Face models with langchain: https://python.langchain.com/docs/integrations/llms/huggingface_pipelines
However, this is just a shallow port from Hugging Face to langchain, not yet a full integration with the [Chat model](https://github.com/langchain-ai/langchain/tree/master/libs/langchain/langchain/chat_models) class, which requires further inheritance and special-case development.
Can you spare some effort to integrate the self-hosted version of llama 2 into the chat model?
### Motivation
Fully integrating self-hosted llama 2 with langchain would avoid privacy issues and give control over the whole LLM app development cycle.
### Your contribution
I'm willing to help and submit a PR with proper guidance.
[
"langchain-ai",
"langchain"
] | ### Feature request
The RetryOutputParser class has several drawbacks:
- it must point to a chain different from the one that calls it
- it does not make it possible to specify a number of retries
### Motivation
Following a request, ChatGPT sometimes answers with an invalid format and the same request must be resent to get the expected output.
It would be useful to have a new parameter added to a LLMChain (endowed with an output parser) to be able to retry the chain till the output is validated by the parser (i.e. it does not trigger any exception), with the possibility to specify a maximum of retries. It could take this form (just a proposal):
```
chain = LLMChain(llm=..., prompt=..., output_parser=..., retry=RetryChain(max_tries=10))
```
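Until something like that exists, the behaviour can be approximated with an explicit loop around the chain call — a rough sketch, where `run_chain` and `ParseError` stand in for the real chain invocation and the parser's exception type:

```python
class ParseError(Exception):
    """Stand-in for the output parser's exception."""

def run_with_retries(run_chain, max_tries: int = 10):
    last_err = None
    for _ in range(max_tries):
        try:
            return run_chain()      # valid output: return immediately
        except ParseError as err:
            last_err = err          # invalid format: try again
    raise RuntimeError(f"still invalid after {max_tries} tries") from last_err

# Demo: a fake chain that answers badly twice, then validly.
attempts = []
def run_chain():
    attempts.append(1)
    if len(attempts) < 3:
        raise ParseError("invalid format")
    return {"answer": 42}

result = run_with_retries(run_chain, max_tries=10)
print(result, len(attempts))  # {'answer': 42} 3
```

A built-in `retry=` parameter could wrap exactly this loop around the chain's parse step, so the user never has to duplicate it.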
### Your contribution
Well, I don't think this feature would be complex to develop for the developers that already know the code. Let me know if this request makes sense. | Being able to retry a chain until the output is valid | https://api.github.com/repos/langchain-ai/langchain/issues/9546/comments | 10 | 2023-08-21T16:17:22Z | 2024-05-21T16:07:30Z | https://github.com/langchain-ai/langchain/issues/9546 | 1,859,715,484 | 9,546 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am using the Cognitive Search retriever according to the documentation; however, I run into an error regarding the type of the return value.
**Reproduction**:
```python
import os
from langchain.retrievers import AzureCognitiveSearchRetriever

os.environ["AZURE_COGNITIVE_SEARCH_SERVICE_NAME"] = ""
os.environ["AZURE_COGNITIVE_SEARCH_INDEX_NAME"] = ""
os.environ["AZURE_COGNITIVE_SEARCH_API_KEY"] = ""

retriever = AzureCognitiveSearchRetriever(content_key="text", top_k=5)
retriever.get_relevant_documents(query="What is langchain?")
```
**Expected behaviour**: return relevant documents/snippets
**Actual behaviour**:
```
ValidationError: 1 validation error for Document
page_content
  str type expected (type=type_error.str)
```
It seems the langchain retriever expects a string somewhere that Azure Cognitive Search returns as a different type. Can anyone help explain? The error message does not say what type it actually received or which field was expected to be the string.
### Suggestion:
_No response_ | Azure cognitive search retriever: ValidationError: 1 validation error for Document page_content | https://api.github.com/repos/langchain-ai/langchain/issues/9545/comments | 2 | 2023-08-21T15:58:11Z | 2023-08-21T19:33:47Z | https://github.com/langchain-ai/langchain/issues/9545 | 1,859,684,763 | 9,545 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain 0.0.268
Python 3.9
Windows 10
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from pydantic import ConfigDict
from langchain.schema import BaseOutputParser
class CustomType:
pass
class CustomParser(BaseOutputParser):
model_config = ConfigDict(arbitrary_types_allowed=True)
c : CustomType
```
### Expected behavior
This triggers a RuntimeError since no validator could be found for CustomType object. It prevents the user from passing custom types to the constructor of a langchain's output parser.
Though, this problem has been corrected in the latest versions of pydantic (thanks to the flag arbitrary_types_allowed). After investigating the source code, one of the parent classes of BaseOutputParser is Serializable, and the latter derives from... pydantic.v1.BaseModel instead of pydantic.BaseModel. Why is that? This seems to be the source of the problem. | Impossible to enrich BaseOutputParser with a custom object member | https://api.github.com/repos/langchain-ai/langchain/issues/9540/comments | 5 | 2023-08-21T14:43:51Z | 2024-06-01T00:07:33Z | https://github.com/langchain-ai/langchain/issues/9540 | 1,859,531,876 | 9,540
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am working with SQLDatabaseChain in the Django framework; every time I execute the code it shows the error below.
```
File "C:\Users\ehsuser\AppData\Local\Programs\Python\Python310\lib\socket.py", line 705, in readinto
    return self._sock.recv_into(b)
ConnectionAbortedError: [WinError 10053] An established connection was aborted by the software in your host machine
```
Below is my code
```python
user = 'root'
database = 'real_estate_chatbot_new'
password = ''
host = 'localhost'
port = '3306'
table_name1 = 'app_projects'
table_name2 = 'app_projectimages'

db_uri = f"mysql+pymysql://{user}:{password}@{host}:{port}/{database}"
# input_db = SQLDatabase.from_uri('sqlite:///ashridhar.db')
input_db = SQLDatabase.from_uri(db_uri, include_tables=[table_name1, table_name2])
db_chain = SQLDatabaseChain.from_llm(llm, input_db, verbose=True)

prompt = """
- You're a real estate chatbot for Buy home that is going to answer to a potential lead so keep your messages dynamic and enthusiastic making the interactions lively and enjoyable!.
- Answer the question from database only if you don't find answer from database return a friendly message.
- don't use word like 'database' in answer.
- If it is not relevant to the database return connvencing and gratitude message.
- If it is realted to price return it in words suppose the price is 7000000 then return 70 lakhs."""

tools = [
    Tool(
        name="Real_Estate_Chatbot",
        func=db_chain.run,
        # description="Answer the question from database only if you don't find answer from database return a friendly message., You're a real estate sales agent that is going to answer to a potential lead so keep your messages dynamic and enthusiastic making the interactions lively and enjoyable!.",
        description=prompt,
    ),
]

agent_kwargs = {
    "extra_prompt_messages": [MessagesPlaceholder(variable_name="memory")],
}
memory = ConversationBufferWindowMemory(memory_key="memory", k=4, return_messages=True)

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.OPENAI_FUNCTIONS,
    verbose=True,
    agent_kwargs=agent_kwargs,
    memory=memory,
)

response = agent.run(message)
return response
```
### Suggestion:
_No response_ | Issue: I am using GPT-4 with SQLDatabaseChain it shows ConnectionAbortedError: [WinError 10053] An established connection was aborted by the software in your host machine error | https://api.github.com/repos/langchain-ai/langchain/issues/9538/comments | 2 | 2023-08-21T14:18:28Z | 2023-11-27T16:06:31Z | https://github.com/langchain-ai/langchain/issues/9538 | 1,859,484,256 | 9,538 |
[
"langchain-ai",
"langchain"
] | ### System Info
```
pip freeze | grep langchain
langchain==0.0.268
langchainplus-sdk==0.0.17
```
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
```py
from typing import Any, Dict, List
import langchain
from langchain.cache import SQLiteCache
from langchain.callbacks.base import BaseCallbackHandler
from langchain.chat_models import ChatVertexAI
from langchain.llms import VertexAI
from langchain.schema import HumanMessage
langchain.llm_cache = SQLiteCache(database_path="langchain-llm-cache.db")
class CallbackHandler(BaseCallbackHandler):
run_inline = True
def __init__(self, logger=None):
self.messages = []
def on_llm_start(self, serialized: Dict[str, Any], prompts: List[str], run_id, parent_run_id, **kwargs: Any) -> Any:
self.messages.append(run_id)
callback = CallbackHandler()
bison_text_llm = VertexAI(model_name="text-bison@001", temperature=0.0, max_output_tokens=500, callbacks=[callback])
bison_chat_llm = ChatVertexAI(model_name="chat-bison@001", temperature=0.0, max_output_tokens=500, callbacks=[callback])
bison_chat_llm([HumanMessage(content="Hello, how are you?")])
assert len(callback.messages) == 1
bison_chat_llm([HumanMessage(content="Hello, how are you?")])
assert len(callback.messages) == 2
bison_text_llm("Hello, how are you?")
assert len(callback.messages) == 3
bison_text_llm("Hello, how are you?")
assert len(callback.messages) == 4
```
### Expected behavior
It's unclear to me whether callbacks should be called when call is cached, but we can see that chat and plain text models implement different behaviour.
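For illustration, a cache layer that still notifies its callback might look like this (a plain memoising wrapper, not the real langchain cache — all names here are made up):

```python
def with_cache(fn, on_start):
    cache = {}
    def wrapped(prompt: str):
        hit = prompt in cache
        on_start(prompt, cached=hit)  # fire the callback either way, with a flag
        if not hit:
            cache[prompt] = fn(prompt)
        return cache[prompt]
    return wrapped

events = []
llm = with_cache(lambda p: p.upper(), lambda p, cached: events.append(cached))

llm("Hello, how are you?")
llm("Hello, how are you?")
print(events)  # [False, True]
```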
Ideally, callbacks would be called, with a flag saying that call is cached. | Plain models are not calling callbacks when cached | https://api.github.com/repos/langchain-ai/langchain/issues/9537/comments | 7 | 2023-08-21T14:12:46Z | 2024-04-09T16:12:35Z | https://github.com/langchain-ai/langchain/issues/9537 | 1,859,472,931 | 9,537 |
[
"langchain-ai",
"langchain"
] | ### System Info
This regression affects LangChain >=0.0.262 and was introduced with #8965.
If an agent's input to a tool (e.g. the content used to generate an `AgentAction`) contains either backticks (such as to represent a code block with ```) or embedded JSON (such as a structured JSON string in the `action_input` key), then output parsing will fail.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce the behavior:
1. Generate an agent that outputs a markdown formatted code block with a language.
To input to this tool, an `AgentAction` step will be made, that might look like:
````
```json
{
    "action": "Code generator tool",
    "action_input": "Generate code to optimize ```python\nprint(\"hello world\")```"
}
```
````
2. An error will occur, as a result of being unable to parse the actions.
Using
`pattern = re.compile(r"```(?:json)?\n(.*)```", re.DOTALL)` works slightly better for both embedded JSON and backticks, but will result in unexpected behavior if there are multiple actions in a response
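That alternative can be checked in isolation: with `re.DOTALL` the greedy `.*` runs to the last closing fence, which is why embedded backticks survive but two action blocks collapse into one match. (Here `fence` is three backticks built programmatically, only so this example renders cleanly.)

```python
import re

fence = "`" * 3  # three backticks
pattern = re.compile(fence + r"(?:json)?\n(.*)" + fence, re.DOTALL)

one = (fence + "json\n"
       '{"action": "tool", "action_input": "fix ' + fence + "py " + fence + '"}\n'
       + fence)

captured = pattern.search(one).group(1)
assert fence + "py" in captured        # embedded backticks survive in one capture

two = one + "\nsome text\n" + one
assert len(pattern.findall(two)) == 1  # greedy .* merges both blocks into one match
```

Since a non-greedy `.*?` would instead stop at the first embedded fence, neither a purely greedy nor purely non-greedy capture handles both cases on its own.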
### Expected behavior
Should be able to parse a response with backticks or JSON inside the `action_input` key | Regression in structured_chat agent's Output parser | https://api.github.com/repos/langchain-ai/langchain/issues/9535/comments | 2 | 2023-08-21T14:05:33Z | 2023-11-27T16:06:36Z | https://github.com/langchain-ai/langchain/issues/9535 | 1,859,456,565 | 9,535 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version = 0.0.268
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I have a list of objects in roadmap.json, and I tried to achieve what is shown in the SelfQueryRetriever documentation.
```
with open('roadmap.json') as json_file:
allRoadmap = json.load(json_file)
docs = []
for roadmap in allRoadmap:
print(roadmap["name"])
for section in roadmap["sections"]:
single_doc = Document(
page_content=f"This section is related to {roadmap['name']}",
metadata={"roadmapName": roadmap["name"], "sectionTopic": section["name"]}
)
docs.append(single_doc)
print(section["name"])
print("\n")
vectorstore = Chroma.from_documents(docs, embeddings)
print(vectorstore)
for doc in docs:
print(doc)
```
The above code outputs:
```
page_content='This section is related to HTML for beginners' metadata={'roadmapName': 'HTML for beginners', 'sectionTopic': 'HTML Basics'}
page_content='This section is related to HTML for beginners' metadata={'roadmapName': 'HTML for beginners', 'sectionTopic': 'HTML Tags and Elements'}
page_content='This section is related to HTML for beginners' metadata={'roadmapName': 'HTML for beginners', 'sectionTopic': 'Intermediate Concepts'}
page_content='This section is related to HTML for beginners' metadata={'roadmapName': 'HTML for beginners', 'sectionTopic': 'Advanced Concepts'}
page_content='This section is related to How Search Engine Works' metadata={'roadmapName': 'How Search Engine Works', 'sectionTopic': 'Internet'}
page_content='This section is related to How Search Engine Works' metadata={'roadmapName': 'How Search Engine Works', 'sectionTopic': 'Search Engine'}
page_content='This section is related to Github Roadmap' metadata={'roadmapName': 'Github Roadmap', 'sectionTopic': 'Section 1'}
page_content='This section is related to Bootstrap' metadata={'roadmapName': 'Bootstrap', 'sectionTopic': 'Introduction'}
page_content='This section is related to Bootstrap' metadata={'roadmapName': 'Bootstrap', 'sectionTopic': 'Concepts'}
page_content='This section is related to Bootstrap' metadata={'roadmapName': 'Bootstrap', 'sectionTopic': 'Hands On'}
page_content='This section is related to MongoDB' metadata={'roadmapName': 'MongoDB', 'sectionTopic': 'Introduction'}
page_content='This section is related to MongoDB' metadata={'roadmapName': 'MongoDB', 'sectionTopic': 'Queries'}
page_content='This section is related to MongoDB' metadata={'roadmapName': 'MongoDB', 'sectionTopic': 'Data Modeling'}
page_content='This section is related to MongoDB' metadata={'roadmapName': 'MongoDB', 'sectionTopic': 'Aggregation'}
page_content='This section is related to Python for Beginers' metadata={'roadmapName': 'Python for Beginers', 'sectionTopic': 'Getting Started'}
page_content='This section is related to Python for Beginers' metadata={'roadmapName': 'Python for Beginers', 'sectionTopic': 'Basics'}
page_content='This section is related to Python for Beginers' metadata={'roadmapName': 'Python for Beginers', 'sectionTopic': 'Conditions and Loops'}
page_content='This section is related to Python for Beginers' metadata={'roadmapName': 'Python for Beginers', 'sectionTopic': 'Arrays'}
page_content='This section is related to Python for Beginers' metadata={'roadmapName': 'Python for Beginers', 'sectionTopic': 'Functions'}
page_content='This section is related to Python for Beginers' metadata={'roadmapName': 'Python for Beginers', 'sectionTopic': 'Solving Problems'}
page_content='This section is related to CSS for beginners' metadata={'roadmapName': 'CSS for beginners', 'sectionTopic': 'Basics'}
page_content='This section is related to CSS for beginners' metadata={'roadmapName': 'CSS for beginners', 'sectionTopic': 'Color and Background'}
page_content='This section is related to CSS for beginners' metadata={'roadmapName': 'CSS for beginners', 'sectionTopic': 'Typography and Fonts'}
page_content='This section is related to CSS for beginners' metadata={'roadmapName': 'CSS for beginners', 'sectionTopic': 'Spacing in CSS'}
page_content='This section is related to CSS for beginners' metadata={'roadmapName': 'CSS for beginners', 'sectionTopic': 'Basic Styling'}
page_content='This section is related to CSS for beginners' metadata={'roadmapName': 'CSS for beginners', 'sectionTopic': 'Positioning Techniques'}
page_content='This section is related to CSS for beginners' metadata={'roadmapName': 'CSS for beginners', 'sectionTopic': 'Advanced Styling'}
page_content='This section is related to Google Colab' metadata={'roadmapName': 'Google Colab', 'sectionTopic': 'Section 1'}
page_content='This section is related to MySQL' metadata={'roadmapName': 'MySQL', 'sectionTopic': 'Section 1'}
page_content='This section is related to Docker' metadata={'roadmapName': 'Docker', 'sectionTopic': 'Section 1'}
page_content='This section is related to AWS Lambda' metadata={'roadmapName': 'AWS Lambda', 'sectionTopic': 'Section 1'}
page_content='This section is related to Java' metadata={'roadmapName': 'Java', 'sectionTopic': 'Section 1'}
```
I tried to retrieve data from using selfQuery
```
from langchain.llms import OpenAI
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.chains.query_constructor.base import AttributeInfo
metadata_field_info = [
AttributeInfo(
name="sectionTopic",
description="The section topic of the roadmap",
type="string",
),
AttributeInfo(
name="roadmapName",
description="Name of the roadmap",
type="string",
),
]
document_content_description = "Roadmap section topics"
llm = OpenAI(temperature=0.1)
retriever = SelfQueryRetriever.from_llm(
llm, vectorstore, document_content_description, metadata_field_info, verbose=True,
search_kwargs={"k": 7}
)
retriever.get_relevant_documents("What are unique section topics that are related to css")
```
For the above code it returned repeated documents:
```
query='css' filter=None limit=None
[Document(page_content='This section is related to CSS for beginners', metadata={'roadmapName': 'CSS for beginners', 'sectionTopic': 'Advanced Styling'}),
Document(page_content='This section is related to CSS for beginners', metadata={'roadmapName': 'CSS for beginners', 'sectionTopic': 'Typography and Fonts'}),
Document(page_content='This section is related to CSS for beginners', metadata={'roadmapName': 'CSS for beginners', 'sectionTopic': 'Basic Styling'}),
Document(page_content='This section is related to CSS for beginners', metadata={'roadmapName': 'CSS for beginners', 'sectionTopic': 'Color and Background'}),
Document(page_content='This section is related to CSS for beginners', metadata={'roadmapName': 'CSS for beginners', 'sectionTopic': 'Spacing in CSS'}),
Document(page_content='This section is related to CSS for beginners', metadata={'roadmapName': 'CSS for beginners', 'sectionTopic': 'Basic Styling'}),
Document(page_content='This section is related to CSS for beginners', metadata={'roadmapName': 'CSS for beginners', 'sectionTopic': 'Advanced Styling'})]
```
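As a client-side stopgap until this is fixed, duplicates can be filtered after retrieval — a sketch over plain dicts standing in for `Document` objects:

```python
def dedupe(docs):
    seen, unique = set(), []
    for doc in docs:
        # identity = page content plus sorted metadata items
        key = (doc["page_content"], tuple(sorted(doc["metadata"].items())))
        if key not in seen:
            seen.add(key)
            unique.append(doc)
    return unique

docs = [
    {"page_content": "css", "metadata": {"sectionTopic": "Basic Styling"}},
    {"page_content": "css", "metadata": {"sectionTopic": "Advanced Styling"}},
    {"page_content": "css", "metadata": {"sectionTopic": "Basic Styling"}},  # repeat
]
print(len(dedupe(docs)))  # 2
```

This only hides the symptom, of course — with `k=7` and duplicated hits, fewer than 7 unique documents come back.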
### Expected behavior
It should return 7 unique documents
```
page_content='This section is related to CSS for beginners' metadata={'roadmapName': 'CSS for beginners', 'sectionTopic': 'Basics'}
page_content='This section is related to CSS for beginners' metadata={'roadmapName': 'CSS for beginners', 'sectionTopic': 'Color and Background'}
page_content='This section is related to CSS for beginners' metadata={'roadmapName': 'CSS for beginners', 'sectionTopic': 'Typography and Fonts'}
page_content='This section is related to CSS for beginners' metadata={'roadmapName': 'CSS for beginners', 'sectionTopic': 'Spacing in CSS'}
page_content='This section is related to CSS for beginners' metadata={'roadmapName': 'CSS for beginners', 'sectionTopic': 'Basic Styling'}
page_content='This section is related to CSS for beginners' metadata={'roadmapName': 'CSS for beginners', 'sectionTopic': 'Positioning Techniques'}
page_content='This section is related to CSS for beginners' metadata={'roadmapName': 'CSS for beginners', 'sectionTopic': 'Advanced Styling'}
``` | SelfQueryRetriever returns duplicate document data using with chromaDB | https://api.github.com/repos/langchain-ai/langchain/issues/9532/comments | 6 | 2023-08-21T12:15:55Z | 2024-03-26T13:13:52Z | https://github.com/langchain-ai/langchain/issues/9532 | 1,859,248,153 | 9,532 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain 0.0.74
Python 3.10
Windows 10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.document_loaders import UnstructuredExcelLoader
from langchain.indexes import VectorstoreIndexCreator
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI

loader = UnstructuredExcelLoader(r"N:\Python\Data.xlsx", mode="elements")  # raw string avoids "\D" escape issues
index = VectorstoreIndexCreator().from_loaders([loader])
```
Gives the following:
```
Traceback (most recent call last):
  Cell In[33], line 1
    index = VectorstoreIndexCreator().from_loaders([loader])
  File C:\Program Files\Anaconda3\lib\site-packages\langchain\indexes\vectorstore.py:73 in from_loaders
  File C:\Program Files\Anaconda3\lib\site-packages\langchain\indexes\vectorstore.py:77 in from_documents
AttributeError: 'RecursiveCharacterTextSplitter' object has no attribute 'split_documents'
```
### Expected behavior
Hi,
When using the VectorstoreIndexCreator, I get an error:
AttributeError: 'RecursiveCharacterTextSplitter' object has no attribute 'split_documents'
In this case it happens when I upload an Excel file, but I get the same error when I try to upload .txt files (with TextLoader).
Many thanks for your help! | AttributeError: 'RecursiveCharacterTextSplitter' object has no attribute 'split_documents' | https://api.github.com/repos/langchain-ai/langchain/issues/9528/comments | 2 | 2023-08-21T08:54:33Z | 2023-11-27T16:06:46Z | https://github.com/langchain-ai/langchain/issues/9528 | 1,858,916,366 | 9,528 |
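One hedged sketch of what the missing method does, under the assumption that the splitter class in this old langchain release only exposed `split_text` (upgrading langchain is the simpler fix). `SimpleSplitter` is a toy stand-in, not the real `RecursiveCharacterTextSplitter`, and `split_documents` below is a free-standing re-creation of the absent method:

```python
class SimpleSplitter:
    """Toy stand-in for a text splitter that only has split_text."""
    def __init__(self, chunk_size=20):
        self.chunk_size = chunk_size

    def split_text(self, text):
        # Cut the text into fixed-size chunks.
        return [text[i:i + self.chunk_size] for i in range(0, len(text), self.chunk_size)]

def split_documents(splitter, docs):
    """Re-create the missing method: split each (text, metadata) pair into chunks."""
    out = []
    for text, metadata in docs:
        for chunk in splitter.split_text(text):
            out.append((chunk, metadata))
    return out

chunks = split_documents(SimpleSplitter(chunk_size=10), [("a" * 25, {"source": "Data.xlsx"})])
print(len(chunks))  # 3
```

In newer langchain releases the splitter classes provide `split_documents` directly, so pinning a recent version should make the original snippet work as written.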
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi,
I am using langchain version 0.0.187.
Here are the steps I have followed to get a chat response using langchain:
```
def initialize_llm_chain_prompt(self):
    print("initializing llm chain prompt")
    llm_model = AzureOpenAI(
        deployment_name=self.AZURE_OPENAI_CHATGPT_DEPLOYMENT,
        model_name=os.environ.get("AZURE_OPENAI_CHATGPT_MODEL"),
        temperature=0,
        max_tokens=1000,
    )
    llm_chat_prompt = PromptTemplate(input_variables=["question"], template=self.prompt_template)
    question_generator = LLMChain(llm=llm_model, prompt=llm_chat_prompt)
    doc_chain = load_qa_chain(llm_model, chain_type="stuff")
    memory = ConversationBufferWindowMemory(memory_key="chat_history", return_messages=True, k=4)  # remember last 4 conversations
    self.llm_chain_prompt = ConversationalRetrievalChain(
        retriever=self.elastic_vector_search.as_retriever(search_kwargs={"k": 5}),
        memory=memory,
        question_generator=question_generator,
        combine_docs_chain=doc_chain,
        verbose=True,
        get_chat_history=self.get_chat_history_custom,
    )
```
**_Here is how I am calling chat:_**
`result = self.llm_chain_prompt({"question": user_query})`
Here is the Prompt template format
```
prompt_template = """<|im_start|>System
Answer ONLY with the facts listed in the Referenced documents.
If there isn't enough information in the Sources , say you don't know.
Do not strictly generate answers if its not available in the Source and even though if you know the answer for it.
<|im_end|>
<|im_start|>
UserQuery:
{question}
<|im_end|>
<|im_start|>Chatbot:
"""
```
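One thing worth checking, and this is an assumption on my part: in `ConversationalRetrievalChain` the `question_generator` prompt is filled with both the chat history and the new question, yet the template above only references `{question}`. A plain-string sketch of a condense-question template that uses both variables (no langchain required; the variable names are the hypothetical placeholders the question generator would fill):

```python
# Hypothetical condense-question template with both expected placeholders.
condense_template = (
    "Given the following conversation and a follow up question, rephrase the "
    "follow up question to be a standalone question.\n\n"
    "Chat History:\n{chat_history}\n"
    "Follow Up Input: {question}\n"
    "Standalone question:"
)

# Fill the template the way the chain would, with toy values.
prompt = condense_template.format(
    chat_history="Human: hi\nAIMessage: hello",
    question="what did I just say?",
)
print("Follow Up Input: what did I just say?" in prompt)  # True
```

If the history never reaches the question generator, follow-up questions get rewritten without context, which could explain the degraded answers.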
In `get_chat_history_custom()` I am just appending the conversation history data:
```
def get_chat_history_custom(self, inputs):
    res = []
    if len(inputs) > 0:
        inp_len = len(inputs)
        for i in range(0, inp_len, 2):
            human_content = inputs[i].content
            ai_content = inputs[i + 1].content.split('Question')[0]
            # Note: use "\n", not "\A" -- "\A" is kept literally and breaks the separator.
            res.append(f"Human:{human_content}\nAIMessage:{ai_content}")
        buf = "\n".join(res)
        return buf
    else:
        return ""
```
What I have observed is that I am not getting good responses even though I am providing the relevant data as the source to the chat. I get good, expected answers if I use an OpenAI completion call directly. Can I please know if any of the above steps is wrong? And do you recommend any other langchain version?
### Suggestion:
Chatbot responses via langchain are not as good as the responses from the OpenAI completion API | Chatbot responses are deterorating on using langchain compared to open AI completion | https://api.github.com/repos/langchain-ai/langchain/issues/9526/comments | 2 | 2023-08-21T08:00:24Z | 2023-11-30T16:07:01Z | https://github.com/langchain-ai/langchain/issues/9526 | 1,858,823,860 | 9,526 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Issue you'd like to raise.
langchain==0.0.162
Hi, I am using a fine-tuned model that turns user prompts into SQL queries, instead of the default model provided by langchain. The reason for doing this is that langchain does not know about all the data in the database unless you provide context, and there is a lot of data, so it can create incorrect SQL queries; for similar reasons it is also unable to form complex queries, even after you give it context.
So my question is based on the output I am getting below. Is there a way to keep my initial question unchanged throughout? In the Action Input, instead of "SOME_BRANCH_NAME" I want the entire sentence the user asked, "what is the summary of last 3 issues reported by SOME_BRANCH_NAME", to go through to the SQLDatabaseChain. Because the Action Input differs from what the user asked, it generates the wrong SQL query; what it should be doing is this: "SELECT summary FROM sla_tat_summary WHERE organization like '%SOME_BRANCH_NAME%' ORDER BY ReportedDate DESC LIMIT 3;" instead of what is shown below. I could just use the SQLDatabaseChain on its own, which does produce the exact query I want since I can make sure only the user's prompt goes through, but the agent is needed since I use it for things other than SQL generation.
user prompt: what is the summary of last 3 issues reported by SOME_BRANCH_NAME
Entering new AgentExecutor chain...
I need to find out what the last 3 issues reported by SOME_BRANCH_NAME were.
Action: TPS Issue Tracker Database
Action Input: SOME_BRANCH_NAME
Entering new SQLDatabaseChain chain...
SOME_BRANCH_NAME:
SELECT organization, COUNT(*) FROM sla_tat_summary WHERE severity = 'Level 2 - Critical' GROUP BY organization ORDER BY COUNT(*) DESC LIMIT 1
In summary, I want an option to keep my user prompt unchanged throughout the flow from the agent to the SQLDatabaseChain.
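A hedged sketch of one workaround, assuming the tool function can be rebuilt per request: close over the raw user prompt so whatever the agent writes as Action Input is discarded. `sql_chain` below is a hypothetical stand-in for `SQLDatabaseChain.run`, not the real chain:

```python
def make_db_tool(original_question, sql_chain):
    """Build a tool function that ignores the agent's rewritten input."""
    def run(_action_input: str) -> str:
        # Deliberately discard the agent's paraphrase; forward the raw prompt.
        return sql_chain(original_question)
    return run

# Stand-in for SQLDatabaseChain.run (for illustration only).
sql_chain = lambda q: f"ran chain on: {q!r}"

tool_fn = make_db_tool(
    "what is the summary of last 3 issues reported by SOME_BRANCH_NAME",
    sql_chain,
)
# Even if the agent passes only the branch name, the full question is used.
print(tool_fn("SOME_BRANCH_NAME"))
```

The trade-off is that the tool must be reconstructed for every incoming request, since the closure captures one specific question.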
### Motivation
I need a way for langchain to use fine-tuned models for multi-class classification, and a way to avoid hard-coded stopping points; they should instead be a parameter the developer can select.
### Your contribution
My custom langchain package only integrates well with my own use case, which is why I am not submitting a PR. Although I am using langchain==0.0.162, the issue would be similar in the latest langchain version. | Request for using custom fine tuned models | https://api.github.com/repos/langchain-ai/langchain/issues/9523/comments | 2 | 2023-08-21T05:59:17Z | 2023-11-27T16:06:56Z | https://github.com/langchain-ai/langchain/issues/9523 | 1,858,654,621 | 9,523 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
This is my code, I'm not sure if it's correct:
```python
from langchain.llms import ChatGLM
from langchain.agents import load_tools, initialize_agent
from langchain.agents import AgentType, tool
from datetime import date

@tool
def time(text: str) -> str:
    """Returns today's date; used to answer the user's questions about today's date.
    The input should always be an empty string, and this function will always
    return today's date. Any date calculation should be done outside this function."""
    return str(date.today())

endpoint_url = "http://0.0.0.0:8000"
llm = ChatGLM(
    endpoint_url=endpoint_url,
    max_token=80000,
    # history=[["我有一些问题!", "我可以回答你的任何问题,请向我提问!"]],
    top_p=0.9,
    model_kwargs={"sample_model_args": False},
)
tools = load_tools([], llm=llm)
agent = initialize_agent(
    tools + [time],
    llm,
    agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    handle_parsing_errors=True,
    verbose=True,
)
try:
    result = agent.run("今天是几号?")  # "What is today's date?"
    print(result)
except Exception as e:
    print("External access error:", e)
```
However, the answer after running is not accurate:
```
(chatglm_env) root@autodl-container-b25f1193e8-0ae83321:~/autodl-tmp# python t_n.py

> Entering new AgentExecutor chain...
Could not parse LLM output: 今天是2023年2月18日。
Observation: Invalid or incomplete response
Thought:Could not parse LLM output: Today is February 18th, 2023.
Observation: Invalid or incomplete response
Thought:Final Answer: 今天是2023年2月18日。

> Finished chain.
今天是2023年2月18日。
```
Please help me
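The trace above suggests ChatGLM is answering directly instead of emitting the ReAct `Action:` / `Final Answer:` format the agent expects. A hedged sketch of a lenient parser — an assumption about how `handle_parsing_errors=True` could be approximated, not langchain's actual implementation — that falls back to treating unparseable model output as the final answer:

```python
import re

def parse_react(text):
    """Very small ReAct-style parser with a lenient fallback."""
    m = re.search(r"Final Answer:\s*(.*)", text, re.S)
    if m:
        return ("final", m.group(1).strip())
    m = re.search(r"Action:\s*(\S+)[\s\S]*?Action Input:\s*(.*)", text)
    if m:
        return ("action", m.group(1), m.group(2).strip())
    # Fallback: the model ignored the format; take the raw text as the answer.
    return ("final", text.strip())

print(parse_react("今天是2023年2月18日。")[0])        # final
print(parse_react("Final Answer: 2023-02-18")[1])  # 2023-02-18
```

This does not make the model call the `time` tool; for that, the model itself has to follow the ReAct format, which smaller models often do not without few-shot examples in the prompt.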
### Suggestion:
_No response_ | Issue: Proxy not successfully used | https://api.github.com/repos/langchain-ai/langchain/issues/9522/comments | 6 | 2023-08-21T03:49:38Z | 2023-11-28T16:08:20Z | https://github.com/langchain-ai/langchain/issues/9522 | 1,858,529,130 | 9,522 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version 0.0.268
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Load a .yaml file containing the OpenAPI specification.
2. Call `OpenAPISpec` with `open_api_spec = OpenAPISpec.from_file(file_path)`.

The error shows that in _openapi.py_, line 202, `return super().parse_obj(obj)` no longer inherits the `parse_obj` method from Pydantic's `BaseModel`.
### Expected behavior
The OpenAPISpec should be delivered so that it can be passed on to `NLAToolkit.from_llm_and_spec`. | AttributeError: 'super' object has no attribute 'parse_obj' when using OpenAPISpec.from_file | https://api.github.com/repos/langchain-ai/langchain/issues/9520/comments | 15 | 2023-08-21T02:28:34Z | 2024-06-26T11:19:14Z | https://github.com/langchain-ai/langchain/issues/9520 | 1,858,436,969 | 9,520 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.268
python=3.9
mac
### Who can help?
I used FAISS as the vector store. It seems that `similarity_search_with_score` (supposedly ranked by distance: low to high) and `similarity_search_with_relevance_scores` (supposedly ranked by relevance: high to low) produce conflicting results when specifying `MAX_INNER_PRODUCT` as the distance strategy. Please see the screenshot below:
<img width="675" alt="Screenshot 2023-08-20 at 6 54 15 PM" src="https://github.com/langchain-ai/langchain/assets/7220686/88981277-a0b4-462b-929c-63bd19d4faff">
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.vectorstores.faiss import FAISS
from langchain.embeddings import HuggingFaceEmbeddings

embedding_engine = HuggingFaceEmbeddings(
    model_name="BAAI/bge-base-en",  # alternative: "sentence-transformers/all-mpnet-base-v2"
    model_kwargs={"device": "cpu"},
    encode_kwargs={"normalize_embeddings": True},  # set True to compute cosine similarity
)

texts = ["I like apples", "I hate apples", "I like oranges"]
simple_vecdb = FAISS.from_texts(texts,
                                embedding_engine,
                                distance_strategy="MAX_INNER_PRODUCT")

# test 1
simple_vecdb.similarity_search_with_score("I like apples")

# test 2
simple_vecdb.similarity_search_with_relevance_scores("I like apples")
```
### Expected behavior
For `similarity_search_with_score`: if its documentation is correct in saying "List of documents most similar to the query text with L2 distance in float. Lower score represents more similarity.", then `similarity_search_with_score` should have matched the identical text `I like apples` with (distance) score = 0 and `similarity_search_with_relevance_scores` with (relevance) score = 1, no?
If FAISS already returns the cosine similarity score during the `similarity_search_with_score` call, should the `_max_inner_product_relevance_score_fn` for FAISS just return the identical score instead of `1 - score` when calculating the relevance score? | FAISS vectorstore `similarity_search_with_relevance_scores` returns strange/false result with `MAX_INNER_PRODUCT` | https://api.github.com/repos/langchain-ai/langchain/issues/9519/comments | 3 | 2023-08-21T02:21:28Z | 2023-12-25T16:09:00Z | https://github.com/langchain-ai/langchain/issues/9519 | 1,858,432,029 | 9,519 |
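A small numeric check of the premise above, in pure Python with no FAISS needed: with `normalize_embeddings=True`, the inner product of two normalized vectors is exactly their cosine similarity, so an identical query should score 1.0, consistent with the expectation stated in the question:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    # Scale the vector to unit length so dot product == cosine similarity.
    norm = math.sqrt(dot(v, v))
    return [x / norm for x in v]

a = normalize([1.0, 2.0, 3.0])
b = normalize([1.0, 2.0, 3.0])  # identical text -> identical embedding
print(round(dot(a, b), 6))  # 1.0
```

Under this reading, mapping the raw inner-product score through `1 - score` would indeed invert the ranking for normalized embeddings.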
[
"langchain-ai",
"langchain"
] | https://github.com/langchain-ai/langchain/blame/e51bccdb2890fa193ce7eb5bf7e13c28afef4dc4/libs/langchain/langchain/vectorstores/pgvector.py#L117
@hwchase17 | pgvector extension is not installed | https://api.github.com/repos/langchain-ai/langchain/issues/9511/comments | 2 | 2023-08-20T12:52:42Z | 2023-12-06T17:44:06Z | https://github.com/langchain-ai/langchain/issues/9511 | 1,858,114,722 | 9,511 |
[
"langchain-ai",
"langchain"
] | ### System Info
Python 3.8.16
langchain==0.0.268
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
`ChatAnthropic` doesn't have a `model_name` attribute; instead, it has a `model` attribute that specifies the name. Other chat models such as `ChatOpenAI` and `ChatVertexAI` have a `model_name` attribute; this breaks the interface when integrating with multiple LLMs.
### Expected behavior
`llm = ChatAnthropic(model_name='claude-2')`
`print(llm.model_name)` | missing model_name param in ChatAnthropic | https://api.github.com/repos/langchain-ai/langchain/issues/9510/comments | 4 | 2023-08-20T12:37:45Z | 2023-12-07T16:06:40Z | https://github.com/langchain-ai/langchain/issues/9510 | 1,858,110,406 | 9,510 |
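Until the attribute names are unified, a hedged workaround sketch: a small accessor that tries `model_name` first and falls back to `model`. The classes below are stand-ins for the real chat models, so the snippet runs without langchain:

```python
def get_model_name(llm):
    """Return whichever naming attribute the chat model exposes."""
    return getattr(llm, "model_name", None) or getattr(llm, "model", None)

class FakeAnthropic:
    """Stand-in: ChatAnthropic-style, exposes only `model`."""
    model = "claude-2"

class FakeOpenAI:
    """Stand-in: ChatOpenAI-style, exposes only `model_name`."""
    model_name = "gpt-3.5-turbo"

print(get_model_name(FakeAnthropic()))  # claude-2
print(get_model_name(FakeOpenAI()))     # gpt-3.5-turbo
```

This keeps calling code uniform across providers without touching langchain internals.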