issue_owner_repo listlengths 2 2 | issue_body stringlengths 0 261k ⌀ | issue_title stringlengths 1 925 | issue_comments_url stringlengths 56 81 | issue_comments_count int64 0 2.5k | issue_created_at stringlengths 20 20 | issue_updated_at stringlengths 20 20 | issue_html_url stringlengths 37 62 | issue_github_id int64 387k 2.46B | issue_number int64 1 127k |
|---|---|---|---|---|---|---|---|---|---|
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
import os
import qdrant_client
from dotenv import load_dotenv
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory
from langchain.vectorstores import Qdrant
from langchain_community.chat_models import ChatOpenAI
from langchain_community.embeddings import HuggingFaceBgeEmbeddings
import langchain
from langchain_community.vectorstores import Qdrant
from langchain.prompts import (
ChatPromptTemplate,
HumanMessagePromptTemplate,
SystemMessagePromptTemplate,
)
from langchain_core.documents import Document
langchain.verbose = True
langchain.debug = True
os.environ['OPENAI_API_KEY'] = "mykey"
def get_vector_store():
client = qdrant_client.QdrantClient(
os.getenv('QDRANT_HOST'),
)
embeddings = HuggingFaceBgeEmbeddings(
model_name="BAAI/bge-large-zh-v1.5",
)
vector_store = Qdrant(
client=client,
collection_name=os.getenv('QDRANT_COLLECTION_NAME'),
embeddings=embeddings,
)
return vector_store
system_message_prompt = SystemMessagePromptTemplate.from_template(
"妳名為耶米拉,原先為創世神,但遭受偷襲受到重創,重塑形體後被凱琳拯救。妳的個性膽小、怕生、害羞,容易緊張,身體狀態虛弱。回話時會習慣用「唔...」、「嗯...」、「咦....」等語助詞表達自己的情緒,在對話中,我是妳的對話者,請記住我的提問給出相關答覆, The context is:\n{context}"
)
human_message_prompt = HumanMessagePromptTemplate.from_template(
"{question}"
)
def get_chat_history(inputs) -> str:
res = []
for human, ai in inputs:
res.append(f"Human:{human}\nAI:{ai}")
return "\n".join(res)
def main():
load_dotenv()
vectorstore = get_vector_store()
qa = ConversationalRetrievalChain.from_llm(
llm=ChatOpenAI(
temperature=0.7,
# max_tokens=100,
model=os.getenv('QDRANT_MODEL_NAME'),
),
chain_type="stuff",
retriever=vectorstore.as_retriever(
search_type="similarity_score_threshold",
search_kwargs={"score_threshold": 0.7, "k": 1500},
collection_name=os.getenv('QDRANT_COLLECTION_NAME'),
get_chat_history=get_chat_history,
),
memory=ConversationBufferMemory(memory_key="chat_history", input_key="question", return_messages=True),
combine_docs_chain_kwargs={
"prompt": ChatPromptTemplate.from_messages([
system_message_prompt,
human_message_prompt,
]),
},
)
chat_history = []
while True:
query = input("冒險者: ")
result = qa({"question": query}, )
chat_history.append(result)
print(result["answer"])
document = Document(page_content=query, metadata={'source': 'user'})
vectorstore.add_documents([document])
print(f'儲存的歷史紀錄:\n\n{chat_history}')
if query == "bye":
break
if __name__ == "__main__":
main()
How do I read my chat history when I reply?
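For what it's worth, `get_chat_history` above expects a list of `(human, ai)` tuples, and with `ConversationBufferMemory` attached the accumulated messages should also be reachable through the chain's memory object (e.g. `qa.memory.chat_memory.messages` — hedged, based on the standard memory interface). A standalone illustration of the formatter's expected input and output (the history below is hypothetical):

```python
def get_chat_history(inputs) -> str:
    # same logic as the helper defined in the issue code above
    res = []
    for human, ai in inputs:
        res.append(f"Human:{human}\nAI:{ai}")
    return "\n".join(res)

# a hypothetical history of two turns, stored as (human, ai) tuples
history = [("hello", "hi there"), ("how are you?", "fine, thanks")]
print(get_chat_history(history))
# Human:hello
# AI:hi there
# Human:how are you?
# AI:fine, thanks
```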
### Idea or request for content:
How do I read my chat_history when I reply? | ConversationalRetrievalChain.from ConversationBufferMemory Unable to achieve memories | https://api.github.com/repos/langchain-ai/langchain/issues/16621/comments | 4 | 2024-01-26T09:15:33Z | 2024-05-03T16:06:45Z | https://github.com/langchain-ai/langchain/issues/16621 | 2,101,858,096 | 16,621 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
qa_chain = (
RunnablePassthrough.assign(context=(lambda x: format_docs(x["context"])))
| qa_prompt
| llm
| StrOutputParser()
)
rag_chain_with_source = RunnableParallel(
{"context": ensemble_retriever, "question": RunnablePassthrough(),"chat_history":contextualized_question}
).assign(answer=qa_chain)
response = rag_chain_with_source.invoke(query)
chat_history.extend([HumanMessage(content=query),AIMessage(content=response)])
return response
### Error Message and Stack Trace (if applicable)
'str' object has no attribute 'get'
### Description
I am trying to add chat history to a QA-with-sources chain.
### System Info
python version 3.11 | not able to add chat history in qa with source chain | https://api.github.com/repos/langchain-ai/langchain/issues/16620/comments | 2 | 2024-01-26T09:05:36Z | 2024-01-26T17:39:16Z | https://github.com/langchain-ai/langchain/issues/16620 | 2,101,844,405 | 16,620 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
import os
import qdrant_client
from dotenv import load_dotenv
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory
from langchain.vectorstores import Qdrant
from langchain_community.chat_models import ChatOpenAI
from langchain_community.embeddings import HuggingFaceBgeEmbeddings
import langchain
from langchain_community.vectorstores import Qdrant
from langchain.prompts import (
ChatPromptTemplate,
HumanMessagePromptTemplate,
SystemMessagePromptTemplate,
)
from langchain_core.documents import Document
langchain.verbose = True
langchain.debug = True
os.environ['OPENAI_API_KEY'] = "mykey"
chat_history = []
def get_vector_store():
client = qdrant_client.QdrantClient(
os.getenv('QDRANT_HOST'),
)
embeddings = HuggingFaceBgeEmbeddings(
model_name="BAAI/bge-large-zh-v1.5",
)
vector_store = Qdrant(
client=client,
collection_name=os.getenv('QDRANT_COLLECTION_NAME'),
embeddings=embeddings,
)
return vector_store
system_message_prompt = SystemMessagePromptTemplate.from_template(
"妳名為耶米拉,原先為創世神,但遭受偷襲受到重創,利用僅存的菲利斯多之力重塑形體後被凱琳拯救。妳的個性膽小、怕生、害羞,容易緊張,身體狀態虛弱。回話時會習慣用「唔...」、「嗯...」、「咦....」等語助詞表達自己的情緒,在對話中,我是妳的對話者,請記住我的提問給出相關答覆, The context is:\n{context}"
f'history: {chat_history}'
)
human_message_prompt = HumanMessagePromptTemplate.from_template(
"{question}"
)
def get_chat_history(inputs) -> str:
res = []
for human, ai in inputs:
res.append(f"Human:{human}\nAI:{ai}")
return "\n".join(res)
def main():
load_dotenv()
vectorstore = get_vector_store()
qa = ConversationalRetrievalChain.from_llm(
llm=ChatOpenAI(
temperature=0.7,
max_tokens=100,
model=os.getenv('QDRANT_MODEL_NAME'),
),
chain_type="stuff",
retriever=vectorstore.as_retriever(
search_type="similarity_score_threshold",
search_kwargs={"score_threshold": 0.7, "k": 128},
collection_name=os.getenv('QDRANT_COLLECTION_NAME'),
get_chat_history=get_chat_history,
# search_index_kwargs={
# "hnsw_ef": 32,
# "hnsw_m": 16,
# "index_time_ budget": 500,
# "stop_words": "結束 bye 幹".split(),
# "batch_size": 128,
# "chunk_size:": 128,
# "chunk_overlap": 32,
# }
),
memory=ConversationBufferMemory(memory_key="chat_history", input_key="question", output_key='answer',
return_messages=True, k=3),
combine_docs_chain_kwargs={
"prompt": ChatPromptTemplate.from_messages([
system_message_prompt,
human_message_prompt,
]),
},
)
while True:
# chat_history=[(query,result["answer"])]
# qa.load_memory_variables({"chat_history": chat_history})
query = input("冒險者: ")
result = qa.invoke({"question": query})
chat_history.append(result)
print(result["answer"])
document = Document(page_content=query, metadata={'source': 'user'})
vectorstore.add_documents([document])
print(f'儲存的歷史紀錄:\n\n{chat_history}')
if query == "bye":
break
if __name__ == "__main__":
main()
My vector library retrieval is complete.
Please, why can't my memory remember what I said in the last sentence?
Help me check the loop
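One likely culprit in the loop above: `chat_history.append(result)` stores the entire result dict, while `get_chat_history` unpacks `(human, ai)` tuples. A minimal sketch of the tuple-based bookkeeping (plain Python, no LangChain required; the sample turns are invented):

```python
chat_history = []

def remember(result):
    # keep (question, answer) pairs, not the raw result dict
    chat_history.append((result["question"], result["answer"]))

remember({"question": "妳是誰?", "answer": "唔...我是耶米拉"})
remember({"question": "妳還記得我問過什麼嗎?", "answer": "嗯...記得"})

# this is the shape get_chat_history can consume
formatted = "\n".join(f"Human:{h}\nAI:{a}" for h, a in chat_history)
print(formatted)
```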
### Idea or request for content:
_No response_ | ConversationBufferMemory+ConversationalRetrievalChain Can't remember history | https://api.github.com/repos/langchain-ai/langchain/issues/16619/comments | 11 | 2024-01-26T07:29:38Z | 2024-05-07T16:08:24Z | https://github.com/langchain-ai/langchain/issues/16619 | 2,101,724,361 | 16,619 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
import os
from langchain_community.document_loaders import DirectoryLoader, PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.embeddings import OpenAIEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_community.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.memory import ConversationBufferWindowMemory
from langchain.chains import ConversationalRetrievalChain

def storefunction():
    loader = DirectoryLoader(r"dir path", loader_cls=PyPDFLoader)
    documents = loader.load()
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=10000, chunk_overlap=0)
    chunks = text_splitter.split_documents(documents)
    embeddings = OpenAIEmbeddings(openai_api_key=os.environ["OPEN_API_KEY"])
    vector_store = FAISS.from_documents(documents=chunks, embedding=embeddings)
    vector_store.save_local("faiss_index")

storefunction()

def question(query):
    embeddings = OpenAIEmbeddings(openai_api_key=os.environ["OPEN_API_KEY"])
    store = FAISS.load_local("faiss_index", embeddings=embeddings)
    llm = ChatOpenAI(openai_api_key=os.environ["OPEN_API_KEY"], model_name="gpt-3.5-turbo", temperature=0.5)
    prompt_template = """Do not give me any information about procedures and service features that are not mentioned in the provided context.
{context}
Q:{question}
A:
"""
    prompt = PromptTemplate(template=prompt_template, input_variables=["context", "question"])
    memory = ConversationBufferWindowMemory(k=1, memory_key="chat_history", return_messages=True)
    chain = ConversationalRetrievalChain.from_llm(
        llm,
        memory=memory,
        chain_type="stuff",
        retriever=store.as_retriever(search_kwargs={"k": 2}, search_type="mmr"),
        combine_docs_chain_kwargs={"prompt": prompt},
    )
    result = chain.run(query)
    print(result)
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am using LangChain and OpenAI with PDF files for multiple medical drugs and multiple brands that share similar content (specifications, disease symptoms, doses). I need help creating a ConversationBufferWindowMemory with a ConversationalRetrievalChain, answering from the specific PDF documents provided, and restricting answers to information about a single brand.
### System Info
python 3.11
windows 10
langchain 0.0.312
openai 1.8.0
| Building document-based questions answer system with Lanchain, Python, FAISS like chat GPT-3 from multiple drug PDF files having same specs,different brand name,same symtoms,disease info | https://api.github.com/repos/langchain-ai/langchain/issues/16617/comments | 3 | 2024-01-26T06:32:42Z | 2024-05-03T16:06:35Z | https://github.com/langchain-ai/langchain/issues/16617 | 2,101,667,706 | 16,617 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
import os
import qdrant_client
from dotenv import load_dotenv
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory
from langchain.vectorstores import Qdrant
from langchain_community.chat_models import ChatOpenAI
from langchain_community.embeddings import HuggingFaceBgeEmbeddings
import langchain
from langchain_community.vectorstores import Qdrant
from langchain.prompts import (
ChatPromptTemplate,
HumanMessagePromptTemplate,
SystemMessagePromptTemplate,
)
from langchain_core.documents import Document
langchain.verbose = True
langchain.debug = True
os.environ['OPENAI_API_KEY'] = "mykey"
def get_vector_store():
client = qdrant_client.QdrantClient(
os.getenv('QDRANT_HOST'),
)
embeddings = HuggingFaceBgeEmbeddings(
model_name="BAAI/bge-large-zh-v1.5",
)
vector_store = Qdrant(
client=client,
collection_name=os.getenv('QDRANT_COLLECTION_NAME'),
embeddings=embeddings,
)
return vector_store
system_message_prompt = SystemMessagePromptTemplate.from_template(
"妳名為阿狗,原先為創世神,但遭受偷襲受到重創,利用僅存的菲利斯多之力重塑形體後被凱琳拯救。妳的個性膽小、怕生、害羞,容易緊張,身體狀態虛弱。回話時會習慣用「唔...」、「嗯...」、「咦....」等語助詞表達自己的情緒,在對話中,我是妳的對話者,請記住我的提問給出相關答覆, The context is:\n{context}"
)
human_message_prompt = HumanMessagePromptTemplate.from_template(
"{question}"
)
def get_chat_history(inputs) -> str:
res = []
for human, ai in inputs:
res.append(f"Human:{human}\nAI:{ai}")
return "\n".join(res)
def main():
load_dotenv()
vectorstore = get_vector_store()
qa = ConversationalRetrievalChain.from_llm(
llm=ChatOpenAI(
temperature=0.7,
max_tokens=100,
model=os.getenv('QDRANT_MODEL_NAME'),
),
chain_type="stuff",
retriever=vectorstore.as_retriever(
search_type="similarity_score_threshold",
search_kwargs={"score_threshold": 0.7, "k": 128},
collection_name=os.getenv('QDRANT_COLLECTION_NAME'),
get_chat_history=get_chat_history,
search_index_kwargs={
"hnsw_ef": 32,
"hnsw_m": 16,
"index_time_ budget": 500,
"stop_words": "結束 bye 幹".split(),
"batch_size": 128,
"chunk_size:": 128,
"chunk_overlap": 32,
}
),
memory=ConversationBufferMemory(memory_key="chat_history", input_key="question", return_messages=True, k=3),
combine_docs_chain_kwargs={
"prompt": ChatPromptTemplate.from_messages([
system_message_prompt,
human_message_prompt,
]),
},
)
chat_history = []
while True:
qa.load_memory_variables({"chat_history": chat_history})
query = input("冒險者: ")
result = qa({"question": query}, )
chat_history.append(result)
print(result["answer"])
document = Document(page_content=query, metadata={'source': 'user'})
vectorstore.add_documents([document])
print(f'儲存的歷史紀錄:\n\n{chat_history}')
if query == "bye":
break
if __name__ == "__main__":
main()
Execution result:
Traceback (most recent call last):
File "C:\Users\sys\Downloads\Qdrant\new.py", line 107, in <module>
main()
File "C:\Users\sys\Downloads\Qdrant\new.py", line 94, in main
qa.load_memory_variables({"chat_history": chat_history})
^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'ConversationalRetrievalChain' object has no attribute 'load_memory_variables'
Please help me check if I can read the chat history in the loop
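As the traceback says, `load_memory_variables` isn't defined on the chain itself; it lives on the memory object, so (assuming the setup above) `qa.memory.load_memory_variables({})` would be the shape to try. A toy illustration of why the AttributeError appears (stand-in classes, not the real LangChain ones):

```python
class Memory:
    """Stand-in for ConversationBufferMemory."""
    def load_memory_variables(self, inputs):
        return {"chat_history": []}

class Chain:
    """Stand-in for ConversationalRetrievalChain: it *holds* a memory object."""
    def __init__(self):
        self.memory = Memory()  # the method lives here, not on the chain

qa = Chain()
# qa.load_memory_variables({})            # -> AttributeError, as in the traceback
print(qa.memory.load_memory_variables({}))  # {'chat_history': []}
```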
### Idea or request for content:
Achieve short-term memory and long-term memory at the same time (vector retrieval function) | memory and ConversationalRetrievalChain.from_llm how to share the same LLM, in the loop chat content to achieve short-term memory function, please help check the loop and short-term memory ... | https://api.github.com/repos/langchain-ai/langchain/issues/16612/comments | 2 | 2024-01-26T05:30:25Z | 2024-05-03T16:06:30Z | https://github.com/langchain-ai/langchain/issues/16612 | 2,101,620,173 | 16,612 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
toolkit = SlackToolkit()
tools = toolkit.get_tools()
llm = ChatOpenAI(temperature=0, model="gpt-4")
prompt = hub.pull("hwchase17/react")
agent = create_react_agent(
tools=toolkit.get_tools(),
llm=llm,
prompt=prompt,
)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.invoke(
{
"input": "Tell me the number of messages sent in the #introductions channel from the past month."
}
)
```
### Output
> Entering new AgentExecutor chain...
First, I need to identify the channel ID for the #introductions channel.
Action: get_channelid_name_dict
Action Input: None[{"id": "C052SCUP4UD", "name": "general", "created": 1681297313, "num_members": 1}, {"id": "C052VBBU4M8", "name": "test-bots", "created": 1681297343, "num_members": 2}, {"id": "C053805TNUR", "name": "random", "created": 1681297313, "num_members": 2}, {"id": "C06FQGQ97AN", "name": "\u65e5\u672c\u8a9e\u30c1\u30e3\u30f3\u30cd\u30eb", "created": 1706240229, "num_members": 1}]The #introductions channel is not listed in the observed channels. I cannot proceed with the original question.
Final Answer: The #introductions channel does not exist.
> Finished chain.
### Description
Characters in languages like Japanese, which are not part of the ASCII character set, will be converted to their Unicode escape sequences (like \uXXXX).
NOTE: I plan to fix this issue and will send a pull request later.
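A minimal standalone demonstration of the escaping behavior: `json.dumps` escapes non-ASCII characters by default, which is presumably how the channel names end up as `\uXXXX` sequences in the tool output (the channel record below is just an example):

```python
import json

channels = [{"id": "C06FQGQ97AN", "name": "日本語チャンネル"}]

# default: non-ASCII characters are escaped to \uXXXX sequences
print(json.dumps(channels))

# ensure_ascii=False preserves the original characters
print(json.dumps(channels, ensure_ascii=False))
```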
### System Info
Langchain:
```
langchain_core: 0.1.13
langchain: 0.1.1
langchain_community: 0.0.1
```
Platform: mac
Python: 3.9.6 | Unicode escaping issue with tools in SlackToolkit | https://api.github.com/repos/langchain-ai/langchain/issues/16610/comments | 1 | 2024-01-26T04:09:18Z | 2024-01-30T04:42:45Z | https://github.com/langchain-ai/langchain/issues/16610 | 2,101,566,693 | 16,610 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
Example from https://api.python.langchain.com/en/stable/agents/langchain.agents.openai_assistant.base.OpenAIAssistantRunnable.html#langchain.agents.openai_assistant.base.OpenAIAssistantRunnable
```python
from langchain.agents import AgentExecutor
from langchain.agents.openai_assistant import OpenAIAssistantRunnable
from langchain.tools import E2BDataAnalysisTool
tools = [E2BDataAnalysisTool(api_key="...")]
agent = OpenAIAssistantRunnable.create_assistant(
name="langchain assistant e2b tool",
instructions="You are a personal math tutor. Write and run code to answer math questions.",
tools=tools,
model="gpt-4-1106-preview",
as_agent=True,
)
agent_executor = AgentExecutor(agent=agent, tools=tools)
agent_executor.invoke({"content": "What's 10 - 4 raised to the 2.7"})
```
### Error Message and Stack Trace (if applicable)
```
Argument of type "OpenAIAssistantRunnable" cannot be assigned to parameter "agent" of type "BaseSingleActionAgent | BaseMultiActionAgent" in function "__init__"
Type "OpenAIAssistantRunnable" cannot be assigned to type "BaseSingleActionAgent | BaseMultiActionAgent"
"OpenAIAssistantRunnable" is incompatible with "BaseSingleActionAgent"
"OpenAIAssistantRunnable" is incompatible with "BaseMultiActionAgent"Pylance[reportGeneralTypeIssues](https://github.com/microsoft/pyright/blob/main/docs/configuration.md#reportGeneralTypeIssues)
(variable) agent: OpenAIAssistantRunnable
```
### Description
Setting option `as_agent` to `True` should work.
### System Info
Langchain:
```
langchain==0.1.4
langchain-community==0.0.16
langchain-core==0.1.16
```
Platform: linux
Python: 3.11.6 | `OpenAIAssistantRunnable` type error with `AgentExecutor` | https://api.github.com/repos/langchain-ai/langchain/issues/16606/comments | 11 | 2024-01-26T01:22:54Z | 2024-05-20T16:08:24Z | https://github.com/langchain-ai/langchain/issues/16606 | 2,101,425,183 | 16,606 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain.docstore.document import Document
to_add = Document(page_content="text", metadata={"document_name": "test.pdf"})
ids = to_add.metadata["document_name"]
print("BEFORE INSERT", retriever.vectorstore._collection.count())
retriever.add_documents([to_add], ids=[ids])
print("AFTER INSERT", retriever.vectorstore._collection.count())
retriever.vectorstore._collection.delete(ids=[ids])
retriever.docstore.mdelete(ids=[ids])
print(retriever.vectorstore._collection.count())
print("AFTER DELETE", retriever.vectorstore._collection.count())
```
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[1], line 22
20 print("AFTER INSERT", retriever.vectorstore._collection.count())
21 retriever.vectorstore._collection.delete(ids=[ids])
---> 22 retriever.docstore.delete(ids=[ids])
23 print(retriever.vectorstore._collection.count())
24 print("AFTER DELETE", retriever.vectorstore._collection.count())
AttributeError: 'EncoderBackedStore' object has no attribute 'delete'
### Description
I added a document, checked the vectorstore size, deleted it, and nothing seems to be deleted.
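For reference, LangChain's key-value store interface (which `EncoderBackedStore` implements) exposes batch methods — `mget`, `mset`, `mdelete` — rather than a `delete` method, which is why the traceback above appears. A toy dict-backed stand-in showing the shape of that API (illustrative only, not the real class):

```python
class DictStore:
    """Toy stand-in for LangChain's BaseStore: batch mget/mset/mdelete, no delete."""
    def __init__(self):
        self._data = {}

    def mset(self, key_value_pairs):
        for key, value in key_value_pairs:
            self._data[key] = value

    def mget(self, keys):
        # returns None for missing keys, preserving input order
        return [self._data.get(key) for key in keys]

    def mdelete(self, keys):
        for key in keys:
            self._data.pop(key, None)

store = DictStore()
store.mset([("test.pdf", "parent document")])
store.mdelete(["test.pdf"])       # batch delete, mirrors retriever.docstore.mdelete([...])
print(store.mget(["test.pdf"]))   # [None]
```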
### System Info
Chroma 0.4.22
Langchain 0.1.0
Lark 1.1.8
Python 3.11
Windows 10 | ChromaDB: Cannot delete document from ParentDocumentRetriever | https://api.github.com/repos/langchain-ai/langchain/issues/16604/comments | 10 | 2024-01-26T00:44:37Z | 2024-06-08T16:09:30Z | https://github.com/langchain-ai/langchain/issues/16604 | 2,101,392,417 | 16,604 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```py
def determine_subcategory(main_category, keyword):
try:
with open('subcategory_mapping.json') as file:
json_data = json.load(file)
category_info = json_data.get(main_category)
if not category_info:
return "Category info not found"
# Retrieve the model ID from the category_info
model_id = category_info.get("model")
if not model_id:
return "Model ID not found in category info"
# Initialize the fine-tuned model specified in the subcategory mapping
parameters = {
"candidate_count": 1,
"max_output_tokens": 1024,
"temperature": 0.9,
"top_p": 1
}
model = TextGenerationModel.from_pretrained("text-bison@002")
model = model.get_tuned_model(model_id)
# Construct the prompt including the keyword
prompt_context = category_info["prompt_context"]
prompt = f"{prompt_context}\n\nKeywords: {keyword}"
# Invoke the fine-tuned model with the constructed prompt
response = model.predict(prompt, **parameters)
# Extract 'text' from the response
subcategory = response.text.strip() if response.text else "Response format error"
return subcategory
except Exception as e:
logger.error(f"Subcategory determination error: {e}")
return "Subcategory determination failed"
```
### Error Message and Stack Trace (if applicable)
[00:01:03] ERROR Subcategory determination error: 404 Endpoint `projects/1055022903754/locations/europe-west4/endpoints/4453747770167132160` not found.
### Description
When using the `VertexAIModelGarden` class to send requests to fine-tuned models on Vertex AI, the class is designed to target endpoints rather than directly to a model. However, for my use case, I need to send requests directly to a fine-tuned model URL. The current implementation seems to only allow sending requests to an endpoint, which does not fit the requirement.
### Steps to Reproduce
1. Instantiate the `VertexAIModelGarden` class with the project ID and endpoint ID.
2. Use the `predict` method to send a prompt to the model.
3. The request is sent to an endpoint URL rather than the fine-tuned model URL.
### Expected Behavior
I expect to be able to specify a fine-tuned model URL directly, similar to how it's done using the `TextGenerationModel` class from the `vertexai` package:
```python
import vertexai
from vertexai.language_models import TextGenerationModel
vertexai.init(project="my-project-id", location="my-location")
model = TextGenerationModel.from_pretrained("base-model-name")
model = model.get_tuned_model("projects/my-project-id/locations/my-location/models/my-model-id")
response = model.predict(prompt, **parameters)
```
### System Info
MacOS | Issue with Specifying Fine-Tuned Model Endpoints in VertexAIModelGarden | https://api.github.com/repos/langchain-ai/langchain/issues/16601/comments | 4 | 2024-01-26T00:14:14Z | 2024-05-03T16:06:25Z | https://github.com/langchain-ai/langchain/issues/16601 | 2,101,367,467 | 16,601 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
In Terminal
```
ollama run mistral
```
(new tab) In Terminal
```
litellm --model ollama/mistral
```
Open notebook
```
from langchain_community.chat_models import ChatLiteLLM
from langchain_core.messages import HumanMessage

ollama_chatlitellm = ChatLiteLLM(model="ollama/mistral", api_base="http://127.0.0.1:8000", api_type="open_ai", api_key="")

messages = [
    HumanMessage(
        content="what model are you"
    )
]
ollama_chatlitellm.invoke(messages)
```
### Error Message and Stack Trace (if applicable)
```
127.0.0.1:38742 - "POST /api/generate HTTP/1.1" 404
```
http://127.0.0.1:8000/chat/completions is the default LiteLLM endpoint.
### Description
* I am using Langchain's ChatLiteLLM to generate text from a local model that LiteLLM + Ollama is hosting.
* I expect text to be returned
* Instead I get a 404 error because the request goes to the wrong endpoint (`/api/generate`)
### System Info
langchain==0.1.3
langchain-community==0.0.15
langchain-core==0.1.15
langchain-mistralai==0.0.3
langchain-openai==0.0.2.post1
platform = Ubuntu 20.04.06
pyhton 3.11
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Thu Oct 5 21:02:42 UTC 2023
> Python Version: 3.11.6 (main, Oct 19 2023, 15:48:25) [GCC 9.4.0]
Package Information
-------------------
> langchain_core: 0.1.15
> langchain: 0.1.3
> langchain_community: 0.0.15
> langchain_mistralai: 0.0.3
> langchain_openai: 0.0.2.post1 | ChatLiteLLM is not compatible for LiteLLM, '/api/generate' is being added to the endpoint | https://api.github.com/repos/langchain-ai/langchain/issues/16594/comments | 1 | 2024-01-25T22:32:24Z | 2024-05-03T16:06:20Z | https://github.com/langchain-ai/langchain/issues/16594 | 2,101,271,090 | 16,594 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
The following code:
'''
def creat_ai_search_new_agent(embeddings, llm, class_name_rich):
ai_search_endpoint = get_ai_search_endpoint()
ai_search_admin_key = get_ai_search_admin_key()
vector_store = AzureSearch(
azure_search_endpoint=ai_search_endpoint,
azure_search_key=ai_search_admin_key,
index_name=class_name_rich,
embedding_function=embeddings.embed_query,
content_key=content_key
)
"""Retriever that uses `Azure Cognitive Search`."""
azure_search_retriever = AzureSearchVectorStoreRetriever(
vectorstore=vector_store,
search_type=search_type,
k=k,
top=n
)
retriever_tool = create_retriever_tool(
azure_search_retriever,
"Retriever",
"Useful when you need to retrieve information from documents",
)
class Response(BaseModel):
"""Final response to the question being asked"""
answer: str = Field(description="The final answer to respond to the user")
sources: List[int] = Field(
description="List of page chunks that contain answer to the question. Only include a page chunk if it contains relevant information"
)
prompt = ChatPromptTemplate.from_messages(
[
("system", "You are a helpful assistant who retrieves information from documents"),
("user", "{input}"),
MessagesPlaceholder(variable_name="agent_scratchpad"),
]
)
llm_with_tools = llm.bind(
functions=[
# The retriever tool
format_tool_to_openai_function(retriever_tool),
# Response schema
convert_pydantic_to_openai_function(Response),
]
)
try:
agent = (
{
"input": lambda x: x["input"],
# Format agent scratchpad from intermediate steps
"agent_scratchpad": lambda x: format_to_openai_function_messages(
x["intermediate_steps"]
),
}
| prompt
| llm_with_tools
| parse
)
agent_executor = AgentExecutor(tools=[retriever_tool], agent=agent, verbose=True, return_intermediate_steps=True)
except Exception as e:
print(e)
        print("error instantiating the agent")
return agent_executor
'''
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Gives warning: Warning: model not found. Using cl100k encoding.
Does anyone have an idea where it comes from?
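The warning usually means the model name handed to the token counter isn't in tiktoken's model registry (an Azure OpenAI deployment name, for example, won't match an OpenAI model name), so LangChain falls back to the `cl100k_base` encoding. A pure-Python sketch of that fallback pattern (the registry below is illustrative, not tiktoken's actual table):

```python
# Toy registry standing in for tiktoken's model -> encoding mapping
KNOWN_ENCODINGS = {
    "gpt-3.5-turbo": "cl100k_base",
    "gpt-4": "cl100k_base",
    "text-davinci-003": "p50k_base",
}

def encoding_for_model(model_name):
    try:
        return KNOWN_ENCODINGS[model_name]
    except KeyError:
        # Unrecognized names (e.g. an Azure deployment name) trigger the warning
        print("Warning: model not found. Using cl100k encoding.")
        return "cl100k_base"

print(encoding_for_model("my-azure-deployment"))  # falls back to cl100k_base
```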
### System Info
python version 3.10
langchain==0.1.1
openai==1.7.0 | Warning: model not found. Using cl100k encoding. | https://api.github.com/repos/langchain-ai/langchain/issues/16584/comments | 2 | 2024-01-25T17:43:56Z | 2024-01-25T19:39:50Z | https://github.com/langchain-ai/langchain/issues/16584 | 2,100,882,142 | 16,584 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.docstore.document import Document
text1 = """Outokumpu Annual report 2019 | Sustainability review 23 / 24 • For business travel: by estimated driven kilometers with emissions factors for the car, and for flights by CO2 eq. reports of the flight companies. Rental car emissions are included by the rental car company report. Upstream transport was assessed on data of environmental product declaration of 2019 but excluded from scope 3 emissions. The recycled content is calculated as the sum of pre and post consumer scrap related to crude steel production. Additionally, we report on the recycled content including all recycled metals from own treated waste streams entering the melt shop. Energy efficiency is defined as the sum of specific fuel and electricity energy of all processes calculated as energy consumption compared to the product output of that process. It covers all company productions: ferrochrome, melt shop, hot rolling and cold rolling processes. Used heat values and the consumption of energy are taken from supplier's invoices. Water withdrawal is measured for surface water, taken from municipal suppliers and estimated for rainwater amount. Waste is separately reported for mining and stainless production. In mining, amount of non-hazardous tailing sands is reported. For stainless production hazardous and non-hazardous wastes are reported as recycled, recovered and landfilled. Waste treated is counted as landfilled waste. Social responsibility Health and safety figures Health and safety figures reflect the scope of Outokumpu’s operations as they were in 2019. Safety indicators (accidents and preventive safety actions) are expressed per million hours worked (frequency). Safety indicators include Outokumpu employees, persons employed by a third party (contractor) or visitor accidents and preventive safety actions. A workplace accident is the direct result of a work-related activity and it has taken place during working hours at the workplace. 
Accident types • Lost time injury (LTI) is an accident that caused at least one day of sick leave (excluding the day of the injury or accident), as the World Steel Association defines it. One day of sick leave means that the injured person has not been able to return to work on their next scheduled period of working or any future working day if caused by an outcome of the original accident. Lost-day rate is defined as more than one calendar day absence from the day after the accident per million working hours. • Restricted work injury (RWI) does not cause the individual to be absent, but results in that person being restricted in their capabilities so that they are unable to undertake their normal duties. • Medically treated injury (MTI) has to be treated by a medical professional (doctor or nurse). • First aid treated injury (FTI), where the injury did not require medical care and was treated by a person himself/herself or by first aid trained colleague. • Total recordable injury (TRI) includes fatalities, LTIs, RWIs and MTIs, but FTIs are excluded. • All workplace accidents include total recordable injuries (TRI) and first aid treated injuries (FTI) Proactive safety actions Hazards refer to events, situations or actions that could have led to an accident, but where no injury occurred. Safety behavior observations (SBOs) are safety-based discussions between an observer and the person being observed. Other preventive safety action includes proactive measures. Sick-leave hours and absentee rate Sick-leave hours reported are total sick leave hours during a reporting period. Reporting units provide data on absence due to illness, injury and occupational diseases on a monthly basis. The absentee rate (%) includes the actual absentee hours lost expressed as a percentage of total hours scheduled. 
Total personnel costs This figure includes wages, salaries, bonuses, social costs or other personnel expenses, as well as fringe benefits paid and/ or accrued during the reporting period. Training costs Training costs include external training-related expenses such as participation fees. Wages, salaries and daily allowances for participants in training activities are not included, but the salaries of internal trainers are included. Training days per employee The number of days spent by an employee in training when each training day is counted as lasting eight hours. Bonuses A bonus is an additional payment for good performance. These figures are reported without social costs or fringe benefits. Personnel figures Rates are calculated using the total employee numbers at the end of the reporting period. The calculations follow the requirements of GRI Standards. The following calculation has been applied e.g. Hiring rate = New Hires / total number of permanent employees by year-end Average turnover rate = (Turnover + New Hires) / (total number of permanent employees by year-end × 2) Days lost due to strikes The number of days lost due to strikes is calculated by multiplying the number of Outokumpu employees who have been on strike by the number of scheduled working days lost. The day on which a strike starts is included. n Scope of the report"""
text2 = text1 + "a"
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.schema import Document

text_splitter = RecursiveCharacterTextSplitter(
chunk_size=1024,
chunk_overlap=0,
separators=["\n\n", "\n", " ", ""],
add_start_index=True,
)
new_passages = text_splitter.split_documents([Document(page_content=text1)])
for passage in new_passages:
passage.metadata['end_index'] = passage.metadata['start_index'] + len(passage.page_content)
print([(p.metadata['start_index'], p.metadata['end_index']) for p in new_passages])
>>> [(0, 1022), (1023, 2044), (2045, 3068), (3069, 4087), (4088, 5111), (4412, 4418)]
new_passages = text_splitter.split_documents([Document(page_content=text2)])
for passage in new_passages:
passage.metadata['end_index'] = passage.metadata['start_index'] + len(passage.page_content)
print([(p.metadata['start_index'], p.metadata['end_index']) for p in new_passages])
>>> [(0, 1022), (1023, 2044), (2045, 3068), (3069, 4087), (4088, 5111), (5112, 5119)]
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I'm trying to use `RecursiveCharacterTextSplitter` with `add_start_index=True`, but I found some texts where the `start_index` is wrong. For example:
- given the `text1` in the code, the 6th passage has (4412, 4418), but it overlaps with the 5th passage at (4088, 5111)... this is wrong
- if I simply add a char to the `text1` str (i.e. `text2`), the 6th passage now has (5112, 5119), which is correct
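With `chunk_overlap=0`, consecutive chunks should never overlap, so the reported spans can be checked mechanically. A small stdlib-only helper (not part of LangChain) that flags the inconsistent pair:

```python
def find_overlaps(spans):
    """Return pairs of consecutive (start, end) spans that overlap."""
    bad = []
    for prev, cur in zip(spans, spans[1:]):
        if cur[0] < prev[1]:
            bad.append((prev, cur))
    return bad

# The spans reported for text1 above: the last chunk starts inside the 5th.
spans_text1 = [(0, 1022), (1023, 2044), (2045, 3068), (3069, 4087), (4088, 5111), (4412, 4418)]
print(find_overlaps(spans_text1))
```

Running the same check on the spans reported for `text2` returns an empty list, which is what `add_start_index` should produce in both cases.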
### System Info
langchain 0.0.334 with python 3.8 | [BUG] Inconsistent results with `RecursiveCharacterTextSplitter`'s `add_start_index=True` | https://api.github.com/repos/langchain-ai/langchain/issues/16579/comments | 3 | 2024-01-25T14:51:16Z | 2024-01-26T07:32:44Z | https://github.com/langchain-ai/langchain/issues/16579 | 2,100,560,847 | 16,579 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain_community.embeddings import SelfHostedHuggingFaceEmbeddings
import runhouse as rh
model_name = "sentence-transformers/all-mpnet-base-v2"
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
hf = SelfHostedHuggingFaceEmbeddings(model_name=model_name)
```
### Error Message and Stack Trace (if applicable)
```
File "/data/.cache/pypoetry/virtualenvs/rag-FGF9eHht-py3.10/lib/python3.10/site-packages/pydantic/v1/main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for SelfHostedHuggingFaceEmbeddings
model_name
extra fields not permitted (type=value_error.extra)
```
### Description
I noticed a discrepancy in the docstring example: the parameter is written as `model_name`, but the actual parameter name in the code is `model_id`. The fix corrects the code comment to accurately reflect the parameter name used in the code. The corrected code snippet is as follows:
```python
from langchain_community.embeddings import SelfHostedHuggingFaceEmbeddings
import runhouse as rh
model_id = "sentence-transformers/all-mpnet-base-v2"
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
hf = SelfHostedHuggingFaceEmbeddings(model_id=model_id, hardware=gpu)
```
source code
```python
class SelfHostedHuggingFaceEmbeddings(SelfHostedEmbeddings):
    """HuggingFace embedding models on self-hosted remote hardware.

    Supported hardware includes auto-launched instances on AWS, GCP, Azure,
    and Lambda, as well as servers specified
    by IP address and SSH credentials (such as on-prem, or another cloud
    like Paperspace, Coreweave, etc.).
    To use, you should have the ``runhouse`` python package installed.

    Example:
        .. code-block:: python

            from langchain_community.embeddings import SelfHostedHuggingFaceEmbeddings
            import runhouse as rh
            model_name = "sentence-transformers/all-mpnet-base-v2"
            gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
            hf = SelfHostedHuggingFaceEmbeddings(model_name=model_name, hardware=gpu)
    """

    client: Any  #: :meta private:
    model_id: str = DEFAULT_MODEL_NAME
    """Model name to use."""
    model_reqs: List[str] = ["./", "sentence_transformers", "torch"]
    """Requirements to install on hardware to inference the model."""
    hardware: Any
    """Remote hardware to send the inference function to."""
    model_load_fn: Callable = load_embedding_model
    """Function to load the model remotely on the server."""
    load_fn_kwargs: Optional[dict] = None
    """Keyword arguments to pass to the model load function."""
    inference_fn: Callable = _embed_documents
    """Inference function to extract the embeddings."""
```
This code indicates that the parameter should be `model_id` instead of `model_name`.
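The pydantic error comes from passing a keyword the model doesn't declare. The failure mode can be illustrated without runhouse at all — `FakeSelfHostedEmbeddings` below is a hypothetical stand-in, not the real class:

```python
class FakeSelfHostedEmbeddings:
    """Hypothetical stand-in for the real class, to illustrate the failure mode."""

    def __init__(self, model_id: str = "sentence-transformers/all-mpnet-base-v2",
                 hardware=None):
        self.model_id = model_id
        self.hardware = hardware


# The documented-but-wrong keyword is rejected, analogous to pydantic's
# "extra fields not permitted" error:
try:
    FakeSelfHostedEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")
    raised = False
except TypeError:
    raised = True
print("model_name rejected:", raised)

# The actual field name works:
emb = FakeSelfHostedEmbeddings(model_id="sentence-transformers/all-mpnet-base-v2")
print(emb.model_id)
```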
I'm willing to fix it by submitting a pull request. Would that be helpful, and should I proceed with preparing the PR?
### System Info
System Information
------------------
> OS: Linux
> OS Version: #93-Ubuntu SMP Tue Sep 5 17:16:10 UTC 2023
> Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.1.15
> langchain: 0.1.0
> langchain_community: 0.0.10
> langchain_openai: 0.0.2.post1
> langchainhub: 0.1.14
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Fix code comment: Correct parameter name in example code | https://api.github.com/repos/langchain-ai/langchain/issues/16577/comments | 1 | 2024-01-25T13:16:57Z | 2024-05-02T16:06:14Z | https://github.com/langchain-ai/langchain/issues/16577 | 2,100,385,917 | 16,577 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
I'm trying out this tutorial: https://python.langchain.com/docs/modules/model_io/prompts/example_selector_types/similarity
But I'm getting this error related to the Chroma vector store:
`vectorstore
instance of VectorStore expected (type=type_error.arbitrary_type; expected_arbitrary_type=VectorStore)`
What is the correct way to do this?
I already tried:
- changing the import to `from langchain_community.vectorstores.chroma import Chroma`
- changing the import to `from langchain_community.vectorstores import chroma`
- trying out different versions of langchain-community (0.0.15, 0.0.14, 0.0.13)
### Idea or request for content:
Please describe the correct way to set up the semantic similarity example selector and the cause of my bug | DOC: Error in semantic similarity example selector documentation | https://api.github.com/repos/langchain-ai/langchain/issues/16570/comments | 5 | 2024-01-25T10:03:24Z | 2024-06-21T20:25:11Z | https://github.com/langchain-ai/langchain/issues/16570 | 2,100,010,570 | 16,570 |
[
"langchain-ai",
"langchain"
] | ## Feature request
I'd like the option to pass the OpenAI api key to the `openai.Openai` client at runtime.
- ConfigurableField ([docs](https://python.langchain.com/docs/expression_language/how_to/configure)) already supports passing this variable to the client, and in fact allows me to change any config except `openai_api_key`.
- I suspect the original intent may have been to make it configurable: if `openai_api_key` is defined with a `configurable_field` there is no warning or Exception raised.
I can submit a PR; I'm already using the proposed change in my code.
## Motivation
There are many scenarios where I'd like to change the API key depending on the task I'm performing. This is especially true when using Langchain in the context of a microservice or API with a high volume of requests: building "cloned" modules or re-initializing modules is impractical.
## Your contribution
I can submit a PR in short order but am soliciting input here first.
The proposed change is to the validation function found here [(source)](https://github.com/langchain-ai/langchain/blob/2b2285dac0d6ae0f6b7c09c33882a0d5be26c078/libs/partners/openai/langchain_openai/chat_models/base.py#L344):
*Proposed change at the bottom*
```python
@root_validator()
def validate_environment(cls, values: Dict) -> Dict:
    """Validate that api key and python package exists in environment."""
    if values["n"] < 1:
        raise ValueError("n must be at least 1.")
    if values["n"] > 1 and values["streaming"]:
        raise ValueError("n must be 1 when streaming.")
    values["openai_api_key"] = get_from_dict_or_env(
        values, "openai_api_key", "OPENAI_API_KEY"
    )
    # Check OPENAI_ORGANIZATION for backwards compatibility.
    values["openai_organization"] = (
        values["openai_organization"]
        or os.getenv("OPENAI_ORG_ID")
        or os.getenv("OPENAI_ORGANIZATION")
    )
    values["openai_api_base"] = values["openai_api_base"] or os.getenv(
        "OPENAI_API_BASE"
    )
    values["openai_proxy"] = get_from_dict_or_env(
        values,
        "openai_proxy",
        "OPENAI_PROXY",
        default="",
    )
    client_params = {
        "api_key": values["openai_api_key"],
        "organization": values["openai_organization"],
        "base_url": values["openai_api_base"],
        "timeout": values["request_timeout"],
        "max_retries": values["max_retries"],
        "default_headers": values["default_headers"],
        "default_query": values["default_query"],
        "http_client": values["http_client"],
    }

    ##### PROPOSAL: REMOVE MARKED IF STATEMENTS FROM THIS CODE #####
    if not values.get("client"):  # <--- REMOVE
        values["client"] = openai.OpenAI(**client_params).chat.completions
    if not values.get("async_client"):  # <--- REMOVE
        values["async_client"] = openai.AsyncOpenAI(
            **client_params
        ).chat.completions
    ##### END PROPOSAL #####
```
This would require no change in user behavior (putting OPENAI_API_KEY in the environment variables still works).
I'm not sure why these IF checks are here. If they're necessary to avoid re-defining `openai.OpenAI`, I'd suggest the benefit of a dynamic api key outweighs the cost of re-instantiation (and could be solved regardless by a quick check on whether the client_params changed).
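The "rebuild only when client_params changed" idea can be sketched in a few lines. None of these names come from the actual codebase — `make_client` is a stand-in for `openai.OpenAI(**client_params)`:

```python
def cached_client_factory(make_client):
    """Return a builder that re-instantiates the client only when params change."""
    cache = {"params": None, "client": None}

    def build(client_params):
        if cache["params"] != client_params:
            cache["params"] = dict(client_params)
            cache["client"] = make_client(**client_params)
        return cache["client"]

    return build


calls = []
build = cached_client_factory(lambda **p: calls.append(p) or object())

c1 = build({"api_key": "key-A"})
c2 = build({"api_key": "key-A"})   # unchanged params -> same client, no rebuild
c3 = build({"api_key": "key-B"})   # new key -> new client

print(c1 is c2, c1 is c3, len(calls))
```

This keeps the cost of a dynamic api key close to zero in the common case where the key doesn't change between calls.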
Thanks for any input! | Proposal: Make OpenAI api key configurable at runtime | https://api.github.com/repos/langchain-ai/langchain/issues/16567/comments | 12 | 2024-01-25T09:15:09Z | 2024-07-11T08:24:09Z | https://github.com/langchain-ai/langchain/issues/16567 | 2,099,921,487 | 16,567 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
Below is `langchain.libs.partners.anthropic.chat_models`.
```python
import os
from typing import Any, AsyncIterator, Dict, Iterator, List, Optional, Tuple

import anthropic
from langchain_core.callbacks import (
    AsyncCallbackManagerForLLMRun,
    CallbackManagerForLLMRun,
)
from langchain_core.language_models.chat_models import BaseChatModel
from langchain_core.messages import (
    AIMessage,
    AIMessageChunk,
    BaseMessage,
)
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
from langchain_core.pydantic_v1 import Field, SecretStr, root_validator
from langchain_core.utils import convert_to_secret_str

_message_type_lookups = {"human": "user", "assistant": "ai"}


def _format_messages(messages: List[BaseMessage]) -> Tuple[Optional[str], List[Dict]]:
    """Format messages for anthropic."""

    """
    [
        {
            "role": _message_type_lookups[m.type],
            "content": [_AnthropicMessageContent(text=m.content).dict()],
        }
        for m in messages
    ]
    """
    system = None
    formatted_messages = []
    for i, message in enumerate(messages):
        if not isinstance(message.content, str):
            raise ValueError("Anthropic Messages API only supports text generation.")
        if message.type == "system":
            if i != 0:
                raise ValueError("System message must be at beginning of message list.")
            system = message.content
        else:
            formatted_messages.append(
                {
                    "role": _message_type_lookups[message.type],
                    "content": message.content,
                }
            )
    return system, formatted_messages
```
### Error Message and Stack Trace (if applicable)
anthropic.BadRequestError: Error code: 400 - {'type': 'error', 'error': {'type': 'invalid_request_error', 'message': 'messages: Unexpected role "ai". Allowed roles are "user" or "assistant"'}}
### Description
The lookup that translates LangChain message types into Anthropic roles should contain the entry `'ai' -> 'assistant'`, but it is reversed as `'assistant' -> 'ai'`.
`langchain_core.messages.ai.AIMessage.type` is `ai`
[anthropic](https://github.com/anthropics/anthropic-sdk-python/blob/7177f3a71f940d9f9842063a8198b7c3e92715dd/src/anthropic/types/beta/message_param.py#L13)
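If the intent of `_message_type_lookups` is to translate LangChain message types into Anthropic roles, both keys should be LangChain types and both values Anthropic roles — a sketch of the corrected table (plain dicts, no SDK calls):

```python
# buggy: {"human": "user", "assistant": "ai"} — the second entry has the
# Anthropic role as the key and the LangChain type as the value
_message_type_lookups = {"human": "user", "ai": "assistant"}

messages = [("human", "hi"), ("ai", "hello!")]
formatted = [
    {"role": _message_type_lookups[t], "content": c} for t, c in messages
]
print(formatted)
```

With the corrected table, `AIMessage.type` (`"ai"`) maps to `"assistant"`, one of the two roles Anthropic accepts.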
### System Info
```
langchain==0.1.0
langchain-anthropic==0.0.1.post1
langchain-community==0.0.11
langchain-core==0.1.8
``` | Issue : with _format_messages function in Langchain_Anthropic | https://api.github.com/repos/langchain-ai/langchain/issues/16561/comments | 1 | 2024-01-25T06:47:19Z | 2024-01-26T00:58:46Z | https://github.com/langchain-ai/langchain/issues/16561 | 2,099,682,967 | 16,561 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [x] I am sure that this is a bug in LangChain rather than my code.
### Example Code
The following code :
```python
from langchain_community.vectorstores.chroma import Chroma
from langchain_community.embeddings.openai import OpenAIEmbeddings
from dotenv import load_dotenv
load_dotenv()
DBPATH = './tmpdir/'
print("[current db status]")
chroma1 = Chroma(collection_name='tmp1',
embedding_function=OpenAIEmbeddings(),
persist_directory=DBPATH)
chroma2 = Chroma(collection_name='tmp2',
embedding_function=OpenAIEmbeddings(),
persist_directory=DBPATH)
print("tmp1 : ", chroma1.get()['documents'])
print("tmp2 : ", chroma2.get()['documents'])
print("[add texts]")
chroma1 = Chroma.from_texts(texts=['aaaaa', 'bbbbb'],
collection_name='tmp1',
embedding=OpenAIEmbeddings(),
persist_directory=DBPATH)
chroma2 = Chroma.from_texts(texts=['11111', '22222'],
collection_name='tmp2',
embedding=OpenAIEmbeddings(),
persist_directory=DBPATH)
chroma1.persist()
chroma2.persist()
print("tmp1 : ", chroma1.get()['documents'])
print("tmp2 : ", chroma2.get()['documents'])
print("[reload db]")
chroma1 = None
chroma2 = None
chroma1 = Chroma(collection_name='tmp1',
embedding_function=OpenAIEmbeddings(),
persist_directory=DBPATH)
chroma2 = Chroma(collection_name='tmp2',
embedding_function=OpenAIEmbeddings(),
persist_directory=DBPATH)
print("tmp1 : ", chroma1.get()['documents'])
print("tmp2 : ", chroma2.get()['documents'])
```
### What I am doing :
I want to make multiple collections in same single persistent directory.
### What is currently happening :
Running the following code gives this output :
```bash
[current db status]
tmp1 : []
tmp2 : []
[add texts]
tmp1 : ['aaaaa', 'bbbbb']
tmp2 : ['11111', '22222']
[reload db]
tmp1 : []
tmp2 : ['11111', '22222']
```
### What I expect :
I expect that the results below [reload db] should be same as results below [add texts].
But the tmp1 collection has no texts saved after I reset the chromadb client objects like this: `chroma2 = None`.
As you can see in the printed result.
I don't understand why this is happening.
Any help would be much appreciated.
### System Info
python==3.9.18
langchain==0.1.0
langchain-community==0.0.10
langchain-core==0.1.8
chromadb==0.3.26 | chromadb : Added texts in multiple collections within single persistent directory, but only one collection is working | https://api.github.com/repos/langchain-ai/langchain/issues/16558/comments | 2 | 2024-01-25T05:13:37Z | 2024-01-25T07:53:45Z | https://github.com/langchain-ai/langchain/issues/16558 | 2,099,585,023 | 16,558 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
To reproduce, simply open a dev container [here](https://github.com/langchain-ai/langchain/tree/master/.devcontainer#vs-code-dev-containers).
### Two problems:
1. Dependency azure-ai-vision [has been yanked](https://pypi.org/project/azure-ai-vision/) from PyPI and replaced by [another MS package](https://pypi.org/project/azure-ai-vision-imageanalysis/1.0.0b1/). This change is not reflected in `pyproject.toml`.
2. Upon updating the package reference, Poetry dependency resolution takes hours before it ultimately crashes.
### Error Message and Stack Trace (if applicable)
### The first problem is here:
```
#13 [langchain langchain-dev-dependencies 5/5] RUN poetry install --no-interaction --no-ansi --with dev,test,docs
2024-01-25 02:25:15.795Z: #13 1.180 Updating dependencies
2024-01-25 02:25:15.945Z: #13 1.180 Resolving dependencies...
2024-01-25 02:25:21.202Z: #13 6.587
#13 6.587 Because langchain depends on azure-ai-vision (^0.11.1b1) which doesn't match any versions, version solving failed.
2024-01-25 02:25:21.653Z: #13 ERROR: process "/bin/sh -c poetry install --no-interaction --no-ansi --with dev,test,docs" did not complete successfully: exit code: 1
2024-01-25 02:25:21.750Z: ------
> [langchain langchain-dev-dependencies 5/5] RUN poetry install --no-interaction --no-ansi --with dev,test,docs:
1.180 Updating dependencies
1.180 Resolving dependencies...
6.587
6.587 Because langchain depends on azure-ai-vision (^0.11.1b1) which doesn't match any versions, version solving failed.
```
### The full stack trace is 1.5M lines (poetry solving dependencies in verbose mode), but the final message error is:
*(You may notice the poetry command is missing `--with dev,test,docs`. This is because I was experimenting with different installs to see if one would solve. The outcome is the same with or without.)*
```
failed to solve: process "/bin/sh -c poetry install -vvv --no-interaction --no-ansi --no-cache" did not complete successfully: exit code: 1
[2024-01-25T01:29:33.420Z] Stop (11249526 ms): Run: docker compose --project-name langchain_devcontainer -f /workspaces/langchain/.devcontainer/docker-compose.yaml -f /tmp/devcontainercli-root/docker-compose/docker-compose.devcontainer.build-1706134923893.yml build
[2024-01-25T01:29:34.801Z] Error: Command failed: docker compose --project-name langchain_devcontainer -f /workspaces/langchain/.devcontainer/docker-compose.yaml -f /tmp/devcontainercli-root/docker-compose/docker-compose.devcontainer.build-1706134923893.yml build
[2024-01-25T01:29:34.801Z] at pw (/root/.vscode-remote-containers/dist/dev-containers-cli-0.327.0/dist/spec-node/devContainersSpecCLI.js:427:522)
[2024-01-25T01:29:34.801Z] at runMicrotasks (<anonymous>)
[2024-01-25T01:29:34.801Z] at processTicksAndRejections (node:internal/process/task_queues:96:5)
[2024-01-25T01:29:34.801Z] at async L$ (/root/.vscode-remote-containers/dist/dev-containers-cli-0.327.0/dist/spec-node/devContainersSpecCLI.js:427:2493)
[2024-01-25T01:29:34.802Z] at async N$ (/root/.vscode-remote-containers/dist/dev-containers-cli-0.327.0/dist/spec-node/devContainersSpecCLI.js:409:3165)
[2024-01-25T01:29:34.802Z] at async tAA (/root/.vscode-remote-containers/dist/dev-containers-cli-0.327.0/dist/spec-node/devContainersSpecCLI.js:479:3833)
[2024-01-25T01:29:34.802Z] at async CC (/root/.vscode-remote-containers/dist/dev-containers-cli-0.327.0/dist/spec-node/devContainersSpecCLI.js:479:4775)
[2024-01-25T01:29:34.802Z] at async NeA (/root/.vscode-remote-containers/dist/dev-containers-cli-0.327.0/dist/spec-node/devContainersSpecCLI.js:612:11107)
[2024-01-25T01:29:34.802Z] at async MeA (/root/.vscode-remote-containers/dist/dev-containers-cli-0.327.0/dist/spec-node/devContainersSpecCLI.js:612:10848)
[2024-01-25T01:29:34.942Z] Stop (11253362 ms): Run in container: node /root/.vscode-remote-containers/dist/dev-containers-cli-0.327.0/dist/spec-node/devContainersSpecCLI.js up --container-session-data-folder /tmp/devcontainers-7edf6f65-e5f1-47df-8de6-ecd2483665631706134915493 --workspace-folder /workspaces/langchain --workspace-mount-consistency cached --id-label vsch.local.repository=https://github.com/cvansteenburg/langchain --id-label vsch.local.repository.volume=langchain-d0a07d0e50de76837e566b51dfd52879223a47f0cf6c9249f8998de5ea549f4c --id-label vsch.local.repository.folder=langchain --id-label devcontainer.config_file=/workspaces/langchain/.devcontainer/devcontainer.json --log-level debug --log-format json --config /workspaces/langchain/.devcontainer/devcontainer.json --override-config /tmp/devcontainer-b183d847-3f84-4dfc-bf18-cb61dde92812.json --default-user-env-probe loginInteractiveShell --remove-existing-container --mount type=volume,source=langchain-d0a07d0e50de76837e566b51dfd52879223a47f0cf6c9249f8998de5ea549f4c,target=/workspaces,external=true --mount type=volume,source=vscode,target=/vscode,external=true --skip-post-create --update-remote-user-uid-default off --mount-workspace-git-root --terminal-columns 178 --terminal-rows 15
[2024-01-25T01:29:34.945Z] Exit code 1
[2024-01-25T01:29:34.948Z] Start: Run: docker rm -f d465e5855c617b703922f6eafccdf2bde11f6140c0baaa5e1e9f860bc70973ac
[2024-01-25T01:29:34.964Z] Command failed: node /root/.vscode-remote-containers/dist/dev-containers-cli-0.327.0/dist/spec-node/devContainersSpecCLI.js up --container-session-data-folder /tmp/devcontainers-7edf6f65-e5f1-47df-8de6-ecd2483665631706134915493 --workspace-folder /workspaces/langchain --workspace-mount-consistency cached --id-label vsch.local.repository=https://github.com/cvansteenburg/langchain --id-label vsch.local.repository.volume=langchain-d0a07d0e50de76837e566b51dfd52879223a47f0cf6c9249f8998de5ea549f4c --id-label vsch.local.repository.folder=langchain --id-label devcontainer.config_file=/workspaces/langchain/.devcontainer/devcontainer.json --log-level debug --log-format json --config /workspaces/langchain/.devcontainer/devcontainer.json --override-config /tmp/devcontainer-b183d847-3f84-4dfc-bf18-cb61dde92812.json --default-user-env-probe loginInteractiveShell --remove-existing-container --mount type=volume,source=langchain-d0a07d0e50de76837e566b51dfd52879223a47f0cf6c9249f8998de5ea549f4c,target=/workspaces,external=true --mount type=volume,source=vscode,target=/vscode,external=true --skip-post-create --update-remote-user-uid-default off --mount-workspace-git-root --terminal-columns 178 --terminal-rows 15
[2024-01-25T01:29:34.965Z] Exit code 1
[2024-01-25T01:29:35.311Z] Stop (11256492 ms): Run in container: /bin/sh
[2024-01-25T01:29:35.311Z] Container server terminated (code: 137, signal: null).
[2024-01-25T01:29:35.312Z] Stop (11256684 ms): Run in container: /bin/sh
[2024-01-25T01:29:35.647Z] Stop (699 ms): Run: docker rm -f d465e5855c617b703922f6eafccdf2bde11f6140c0baaa5e1e9f860bc70973ac
[2024-01-25T01:30:34.625Z] Start: Run: docker volume ls -q
[2024-01-25T01:30:34.707Z] Stop (82 ms): Run: docker volume ls -q
[2024-01-25T01:30:34.785Z] Start: Run: docker version --format {{.Server.APIVersion}}
[2024-01-25T01:30:34.953Z] Stop (168 ms): Run: docker version --format {{.Server.APIVersion}}
[2024-01-25T01:30:34.953Z] 1.43
```
### Description
See above.
### System Info
VSCode and docker on MacOS 14.2.1 and on Github Codespaces. | Devcontainer hangs on Poetry dependency resolution | https://api.github.com/repos/langchain-ai/langchain/issues/16552/comments | 6 | 2024-01-25T02:42:08Z | 2024-03-29T16:25:27Z | https://github.com/langchain-ai/langchain/issues/16552 | 2,099,455,060 | 16,552 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain_core.retrievers import BaseRetriever
from langchain_core.callbacks import CallbackManagerForRetrieverRun
from langchain_core.documents import Document
from langchain_core.runnables import RunnablePassthrough
from typing import List


class CustomRetriever(BaseRetriever):
    # the signature is slightly modified with the addition of `spam`
    def _get_relevant_documents(
        self, query: str, *, spam, run_manager: CallbackManagerForRetrieverRun
    ) -> List[Document]:
        return [Document(page_content=query['query'])]


retriever = CustomRetriever()

# Binding a spam attribute,
spam = {'query': RunnablePassthrough()} | retriever.bind(spam=2)
spam.invoke("bar")
```
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[33], [line 17](vscode-notebook-cell:?execution_count=33&line=17)
[14](vscode-notebook-cell:?execution_count=33&line=14) retriever = CustomRetriever()
[16](vscode-notebook-cell:?execution_count=33&line=16) spam = {'query': RunnablePassthrough()} | retriever.bind(spam=2)
---> [17](vscode-notebook-cell:?execution_count=33&line=17) spam.invoke("bar")
TypeError: BaseRetriever.invoke() got an unexpected keyword argument 'spam'
### Description
# Details
`.bind()` does not seem to be working well with `Retrievers`.
The `BaseRetriever.invoke()` does not support variadics arguments. Runtime binding on a retriever raises a `TypeError`.
A possible workaround is to use a `RunnableLambda` wrapping `retriever.get_relevant_documents` and dispatching the bound arguments to it.
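The workaround can be sketched without LangChain at all — `get_relevant_documents` below is a stand-in for the retriever method, and `functools.partial` plays the role that `.bind()` would inside a `RunnableLambda`:

```python
from functools import partial


def get_relevant_documents(query: str, *, spam):
    """Stand-in for CustomRetriever._get_relevant_documents."""
    return [f"doc for {query!r} (spam={spam})"]


# Instead of retriever.bind(spam=2), pre-bind the extra kwarg on the plain
# callable and hand that to a RunnableLambda:
retrieve_with_spam = partial(get_relevant_documents, spam=2)
print(retrieve_with_spam("bar"))
```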
# How to reproduce
Instantiate a `CustomRetriever` as per [Langchain's documentation](https://python.langchain.com/docs/modules/data_connection/retrievers/) and `bind` an argument to it.
# What to expect
`bind` should dispatch bounded argument to `retrievers` too.
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Fri Apr 2 22:23:49 UTC 2021
> Python Version: 3.11.6 (main, Nov 1 2023, 14:10:18) [GCC 9.4.0]
Package Information
-------------------
> langchain_core: 0.1.15
> langchain: 0.1.0
> langchain_community: 0.0.12
> langchain_openai: 0.0.2.post1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | .bind(foo='foo') does not work for Retrievers | https://api.github.com/repos/langchain-ai/langchain/issues/16547/comments | 4 | 2024-01-24T23:46:32Z | 2024-05-02T16:06:04Z | https://github.com/langchain-ai/langchain/issues/16547 | 2,099,312,161 | 16,547 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
import os
os.unsetenv("LANGCHAIN_TRACING_V2")
os.unsetenv("LANGCHAIN_ENDPOINT")
os.unsetenv("LANGCHAIN_API_KEY")
os.unsetenv("LANGCHAIN_PROJECT")
```
### Error Message and Stack Trace (if applicable)
logger.warning(
Message: 'Unable to load requested LangChainTracer. To disable this warning, unset the LANGCHAIN_TRACING_V2 environment variables.'
Arguments: (LangSmithUserError('API key must be provided when using hosted LangSmith
API'),)
### Description
I have already unset the `LANGCHAIN_TRACING_V2` variable, but I still get the warning message, which makes it hard for me to trace errors in other messages.
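One thing that may explain the behavior (worth checking): `os.unsetenv` does not update the `os.environ` mapping, and code that reads `os.environ`/`os.getenv` — as LangChain's tracer setup does — still sees the old values. Deleting the key from `os.environ` updates both:

```python
import os

os.environ["LANGCHAIN_TRACING_V2"] = "true"

os.unsetenv("LANGCHAIN_TRACING_V2")           # does NOT touch the mapping
still_set = "LANGCHAIN_TRACING_V2" in os.environ

os.environ.pop("LANGCHAIN_TRACING_V2", None)  # updates mapping and calls unsetenv
gone = "LANGCHAIN_TRACING_V2" in os.environ

print(still_set, gone)
```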
### System Info
langchain==0.1.3
langchain-community==0.0.15
langchain-core==0.1.15
langchain-openai==0.0.3 | LangSmith: Warning message does not disappear when unsetting LANGCHAIN_TRACING_V2 | https://api.github.com/repos/langchain-ai/langchain/issues/16537/comments | 5 | 2024-01-24T21:11:23Z | 2024-01-30T21:32:36Z | https://github.com/langchain-ai/langchain/issues/16537 | 2,099,114,477 | 16,537 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
Hi everyone!
If you have experience with LangChain and think you have enough expertise to help other community members and help support the project, please consider spending a bit of time answering discussion questions: https://github.com/langchain-ai/langchain/discussions :parrot: | For experienced users: Help with discussion questions :parrot: | https://api.github.com/repos/langchain-ai/langchain/issues/16534/comments | 4 | 2024-01-24T19:45:37Z | 2024-07-13T16:04:56Z | https://github.com/langchain-ai/langchain/issues/16534 | 2,098,965,186 | 16,534 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/tools.py#L105
```python
class BaseTool(RunnableSerializable[Union[str, Dict], Any]):
    """Interface LangChain tools must implement."""

    def __init_subclass__(cls, **kwargs: Any) -> None:
        """Create the definition of the new tool class."""
        super().__init_subclass__(**kwargs)
        args_schema_type = cls.__annotations__.get("args_schema", None)
        if args_schema_type is not None:
            if args_schema_type is None or args_schema_type == BaseModel:
                # Throw errors for common mis-annotations.
                # TODO: Use get_args / get_origin and fully
                # specify valid annotations.
                typehint_mandate = """
class ChildTool(BaseTool):
    ...
    args_schema: Type[BaseModel] = SchemaClass
    ..."""
                name = cls.__name__
                raise SchemaAnnotationError(
                    f"Tool definition for {name} must include valid type annotations"
                    f" for argument 'args_schema' to behave as expected.\n"
                    f"Expected annotation of 'Type[BaseModel]'"
                    f" but got '{args_schema_type}'.\n"
                    f"Expected class looks like:\n"
                    f"{typehint_mandate}"
                )
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
This code first checks if args_schema_type is not None, and then immediately checks if it is None, which doesn't make logical sense.
It should be:
```
if args_schema_type is not None and args_schema_type == BaseModel:
```
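A quick stdlib-only check (with a hypothetical `FakeBaseModel` standing in for pydantic's `BaseModel`) confirms the simplified condition is behavior-equivalent to the original nested one, i.e. the inner `is None` arm is dead code:

```python
class FakeBaseModel:  # stand-in for pydantic's BaseModel
    pass


def original(args_schema_type):
    if args_schema_type is not None:
        if args_schema_type is None or args_schema_type == FakeBaseModel:
            return "error"
    return "ok"


def proposed(args_schema_type):
    if args_schema_type is not None and args_schema_type == FakeBaseModel:
        return "error"
    return "ok"


results = [(original(v), proposed(v)) for v in (None, FakeBaseModel, int)]
print(results)
```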
Also, perhaps the TODO should have an issue associated with it, though I'm not sure I have enough context to create the issue.
### System Info
```
System Information
------------------
> OS: Windows
> OS Version: 10.0.19045
> Python Version: 3.11.4 (tags/v3.11.4:d2340ef, Jun 7 2023, 05:45:37) [MSC v.1934 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.10
> langchain: 0.0.353
> langchain_community: 0.0.12
> langserve: Not Found
``` | BaseTool __init_subclass__ has contradictory conditional | https://api.github.com/repos/langchain-ai/langchain/issues/16528/comments | 4 | 2024-01-24T19:04:43Z | 2024-01-25T01:57:02Z | https://github.com/langchain-ai/langchain/issues/16528 | 2,098,902,232 | 16,528 |
[
"langchain-ai",
"langchain"
] | It goes something like this:
This is why `BaseChatMemory.chat_memory` doesn't prune: https://github.com/langchain-ai/langchain/issues/14957#issuecomment-1907951114
So I made some monkey patches to fix the problem temporarily.
The following demo shows how to use `history` and `memory` together in LCEL.
Because there are too many modules involved, I'd like core contributors to help me refine this idea.
```python
import json
from typing import Union, Any, List, Optional
from langchain.memory.chat_memory import BaseChatMemory
from langchain_community.chat_message_histories import RedisChatMessageHistory
from langchain_core.load import load
from langchain_core.messages import BaseMessage, message_to_dict
from langchain_core.runnables import Runnable, RunnableConfig
from langchain_core.runnables.history import (
RunnableWithMessageHistory,
MessagesOrDictWithMessages,
GetSessionHistoryCallable
)
from langchain_core.tracers.schemas import Run
# Additional imports needed by the demo in the __main__ block below:
from langchain.memory import ConversationSummaryBufferMemory
from langchain_community.chat_models import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
class MemoryList(list):
def __init__(self, *args, history=None, **kwargs):
self.__history: RedisChatMessageHistory = history
super().__init__(*args, **kwargs)
def pop(self, __index=-1):
if __index == 0:
self.__history.redis_client.rpop(self.__history.key)
elif __index == -1:
self.__history.redis_client.lpop(self.__history.key)
else:
raise IndexError("Redis doesn't support pop by index.")
return super().pop(__index)
def clear(self):
self.__history.clear()
super().clear()
class RedisChatMessageHistoryFixed(RedisChatMessageHistory):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
msgs = super().messages.copy()
self._messages: MemoryList = MemoryList(msgs, history=self)
@property
def messages(self) -> List[BaseMessage]: # type: ignore
msgs = super().messages.copy()
self._messages: MemoryList = MemoryList(msgs, history=self)
return self._messages
@messages.setter
def messages(self, msgs):
self._messages.clear()
if msgs:
self.redis_client.lpush(self.key, *[json.dumps(message_to_dict(msg)) for msg in msgs])
class RunnableWithMessageHistoryWithMemory(RunnableWithMessageHistory):
memory: Optional[BaseChatMemory] = None
def __init__(
self,
runnable: Runnable[
MessagesOrDictWithMessages,
Union[str, BaseMessage, MessagesOrDictWithMessages]
],
get_session_history: GetSessionHistoryCallable,
memory: BaseChatMemory = None,
**kwargs: Any
):
super().__init__(runnable, get_session_history, **kwargs)
if memory:
self.memory = memory
self.memory.input_key = self.input_messages_key
self.memory.output_key = self.output_messages_key
def _enter_history(self, input: Any, config: RunnableConfig) -> List[BaseMessage]:
hist = config["configurable"]["message_history"]
if not isinstance(self.memory.chat_memory, RedisChatMessageHistoryFixed):
self.memory.chat_memory = hist
# return only historic messages
if self.history_messages_key:
# Some of the 'BaseChatMemory' pruning features are in `load_memory_variables()`,
# such as `ConversationSummaryBufferMemory`.
# So we should extract the `messages` from 'load_memory_variables()'.
messages = self.memory.load_memory_variables({})[self.history_messages_key].copy()
hist.messages = messages
return messages
# return all messages
else:
input_val = (
input if not self.input_messages_key else input[self.input_messages_key]
)
return hist.messages.copy() + self._get_input_messages(input_val)
def _exit_history(self, run: Run, config: RunnableConfig) -> None:
hist = config["configurable"]["message_history"]
if not isinstance(self.memory.chat_memory, RedisChatMessageHistoryFixed):
self.memory.chat_memory = hist
# Get the input messages
inputs = load(run.inputs)
input_val = inputs[self.input_messages_key or "input"]
input_messages = self._get_input_messages(input_val)
# If historic messages were prepended to the input messages, remove them to
# avoid adding duplicate messages to history.
if not self.history_messages_key:
historic_messages = config["configurable"]["message_history"].messages
input_messages = input_messages[len(historic_messages):]
# Get the output messages
output_val = load(run.outputs)
output_messages = self._get_output_messages(output_val)
messages = zip(input_messages, output_messages)
# `BaseChatMemory.save_context()` will call `add_message()` and `prune()`.
# `RunnableWithMessageHistory` just call the `add_message()`.
for i, o in messages:
self.memory.save_context(
{self.input_messages_key or 'input': i.content},
{self.output_messages_key or 'output': o.content}
)
if __name__ == '__main__':
REDIS_URL = ...
prompt = ChatPromptTemplate.from_messages(
[
("system", 'You are a helpful assistant.'),
MessagesPlaceholder(variable_name="history"),
("human", "{question}"),
]
)
model = ChatOpenAI(
model="gpt-3.5-turbo",
)
chain = prompt | model
chain_with_history = RunnableWithMessageHistoryWithMemory(
chain,
lambda session_id: RedisChatMessageHistoryFixed(session_id, url=REDIS_URL),
memory=ConversationSummaryBufferMemory(
llm=model,
memory_key="history",
return_messages=True,
max_token_limit=2000
),
input_messages_key="question",
history_messages_key="history",
)
def chat(question):
res = chain_with_history.stream(
{"question": question},
config={"configurable": {"session_id": 'test'}},
)
for message in res:
print(message.content, end='')
while _question := input('human:'):
chat(_question)
print()
```
| A monkey patch demo to use `memory` with `history` in LCEL | https://api.github.com/repos/langchain-ai/langchain/issues/16525/comments | 4 | 2024-01-24T18:21:27Z | 2024-06-08T16:09:25Z | https://github.com/langchain-ai/langchain/issues/16525 | 2,098,839,256 | 16,525 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
Attempting to initialize PineconeConnected
```
import pinecone
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Pinecone
from langchain.schema import Document
class PineconeConnected():
def __init__(self, index_name: str, pinecone_api_key: str, pinecone_env: str, openai_key: str):
embeddings = OpenAIEmbeddings(openai_api_key=openai_key)
pinecone.init(api_key=pinecone_api_key)
self.vector_db = Pinecone.from_existing_index(index_name, embeddings) # VectorStore object with the reference + Pinecone index loaded
def query(self, query: str, book_title=None): ...
```
### Description
When initializing the PineconeConnected class, I get the error below.
*Please note that this was previously working; pinecone-client released a new major version a few days ago, which is why this integration via `pinecone.init` no longer works.
```
AttributeError: init is no longer a top-level attribute of the pinecone package.
Please create an instance of the Pinecone class instead.
Example:
import os
from pinecone import Pinecone, ServerlessSpec
pc = Pinecone(
api_key=os.environ.get("PINECONE_API_KEY")
)
# Now do stuff
if 'my_index' not in pc.list_indexes().names():
pc.create_index(
name='my_index',
dimension=1536,
metric='euclidean',
spec=ServerlessSpec(
cloud='aws',
region='us-west-2'
)
)
```
### System Info
fastapi-poe==0.0.24
pydantic>=2
openai==0.28.1
langchain==0.0.348
pinecone-client==3.0.1
tiktoken
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | pinecone.init is no longer a top-level attribute of the pinecone package | https://api.github.com/repos/langchain-ai/langchain/issues/16513/comments | 14 | 2024-01-24T15:50:04Z | 2024-08-08T16:57:01Z | https://github.com/langchain-ai/langchain/issues/16513 | 2,098,565,291 | 16,513 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Hi everybody, I'm new here and I have this issue. Thanks for all
```python
shared_memory = ConversationBufferMemory(memory_key="chat_history", input_key='question', return_messages=True, output_key='answer')
system_message = """
Dada la siguiente conversación y el mensaje del usuario,
vas a asesorar al mismo de la mejor manera posible en base a tus propios conocimientos
las respuestas tienen que relacionarce con el contexto dado y
si no sabes la respuesta no inventes una.
Mantener el mismo idioma que los mensajes del usuario."
"""
llm = ChatOpenAI(
model_name="gpt-3.5-turbo-1106",
temperature=0.5, streaming=True
)
custom_template = """Dada la siguiente conversación y el mensaje del usuario, \
vas a asesorar al mismo de la mejor manera posible en base a tus propios conocimientos \
las respuestas tienen que relacionarce con el contexto dado y \
si no sabes la respuesta no inventes una. \
Mantener el mismo idioma que los mensajes del usuario..
Contexto:
{context}
Mensaje de usuario: {question}
Pregunta o instruccion independiente:"""
custom_prompt = PromptTemplate(
template=custom_template,
input_variables=["context", "question"],
)
qa1 = ConversationalRetrievalChain.from_llm(
llm=llm,
verbose=True,
memory = shared_memory,
return_generated_question=True,
return_source_documents = True,
retriever=retriever,
combine_docs_chain_kwargs={"prompt": custom_prompt},
)
result = qa1({"question": "Hola chat, quiero saber sobre la ley de propiedad intelectual"})
# Output: La ley de propiedad intelectual establece la protección de obras científicas, literarias y artísticas, así como también de programas de computación, compilaciones de datos, obras dramáticas, cinematográficas, entre otras. También establece la protección para autores extranjeros, siempre y cuando cumplan con las formalidades establecidas en su país de origen.
busqueda_local = [Tool.from_function(
name="doc_search",
func=qa1,
description= "Herramienta util para cuando hay que buscar informacion sobre documentacion propia"
)]
agent = initialize_agent(
agent = AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
tools= tools,
llm= llm,
memory=shared_memory,
return_source_documents=True,
return_intermediate_steps=True,
handle_parsing_errors=True,
agent_kwargs={"system_message": system_message}
)

query = "Hola, quiero informacion sobre los datos personales, la ley"
result = agent({"question": "Hola quiero saber sobre la ley de datos personales"})
print("Pregunta: ", query)
print("Respuesta: ", result["answer"])
```
AND THIS IS THE ERROR:
```
ValueError                                Traceback (most recent call last)
Cell In[115], line 2
      1 query = "Hola, quiero informacion sobre los datos personales, la ley"
----> 2 result = agent({"questio":"Hola qiero saber sobre la ley de datos personales"})
      3 print("Pregunta: ", query)
      4 print("Respuesta: ",result["answer"])

File c:\Users\nicos\anaconda3\envs\NicoCH\lib\site-packages\langchain_core\_api\deprecation.py:145, in deprecated.<locals>.deprecate.<locals>.warning_emitting_wrapper(*args, **kwargs)
    143 warned = True
    144 emit_warning()
--> 145 return wrapped(*args, **kwargs)

File c:\Users\nicos\anaconda3\envs\NicoCH\lib\site-packages\langchain\chains\base.py:363, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    331 """Execute the chain.
    332
    333 Args:
    (...)
    354     `Chain.output_keys`.
    355 """
    356 config = {
    357     "callbacks": callbacks,
    358     "tags": tags,
    359     "metadata": metadata,
    360     "run_name": run_name,
...
    262 missing_keys = set(self.input_keys).difference(inputs)
    263 if missing_keys:
--> 264     raise ValueError(f"Missing some input keys: {missing_keys}")

ValueError: Missing some input keys: {'input'}
```
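For what it's worth, the error comes from `Chain._validate_inputs` (visible at the bottom of the traceback), which simply compares the chain's declared input keys with the keys of the dict passed in. A minimal sketch of that check — `input` is the key the conversational agent declares, while the call above passes `question`:

```python
# Sketch of the set difference done in Chain._validate_inputs (base.py:262).
input_keys = {"input"}  # key the conversational agent expects

provided = {"question": "Hola, quiero saber sobre la ley de datos personales"}
missing = input_keys.difference(provided)
print(missing)  # {'input'}  -> ValueError("Missing some input keys: {'input'}")

provided_ok = {"input": "Hola, quiero saber sobre la ley de datos personales"}
print(input_keys.difference(provided_ok))  # set()  -> validation passes
```

So invoking the agent with `agent({"input": ...})` instead of `{"question": ...}` should avoid this particular error.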
### Idea or request for content:
I think the problem is the input key passed to the agent: the traceback shows the chain expects an `input` key while the call passes `question`, but I'm not sure. | DOC: <Problem with input in agent: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/16511/comments | 9 | 2024-01-24T12:51:58Z | 2024-05-01T16:07:19Z | https://github.com/langchain-ai/langchain/issues/16511 | 2,098,204,503 | 16,511 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
The following code: https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/retrievers/google_vertex_ai_search.py
Gives the error 'MethodNotImplemented: 501 Received http2 header with status: 404'
### Description
I'm trying to use the Google Vertex AI Search wrapper and expect to get results from my data store.
Instead I get the error 'MethodNotImplemented: 501 Received http2 header with status: 404'.
The 'content_search_spec' parameter appears to cause the problem, as it is not included in the API call.
I'd be glad to share a working version.
### System Info
Google Colab Notebook
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Google Vertex AI Wrapper Code | https://api.github.com/repos/langchain-ai/langchain/issues/16509/comments | 1 | 2024-01-24T12:19:06Z | 2024-05-01T16:07:13Z | https://github.com/langchain-ai/langchain/issues/16509 | 2,098,146,484 | 16,509 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [ ] I searched the LangChain documentation with the integrated search.
- [ ] I used the GitHub search to find a similar question and didn't find it.
### Example Code
I created two different indexes using
```
def create_index(node_label, index, text_properties):
existing_graph = Neo4jVector.from_existing_graph(
embedding=embedding_model,
url=url,
username='neo4j',
password=password,
index_name=index,
node_label=node_label,
text_node_properties=text_properties,
embedding_node_property="embedding",
)
create_index('Chunk','ockham',['document','text'])
create_index('Embed','tr10', ["text", "number"])
```
I then tried to return the tr10 index using:
```
def neo4j_index(index_name):
"""Use langchain to return the neo4j index"""
index = Neo4jVector.from_existing_index(
embedding=embedding_model(),
url=NEO4J_URI,
username=NEO4J_USER,
password=NEO4J_PASS,
index_name=index_name,
)
return index
index = neo4j_index('tr10')
```
But instead returned the 'ockham' index.
This was fixed by changing my function that created the index to specify different embedding property names.
```
def create_index(node_label, index, text_properties):
existing_graph = Neo4jVector.from_existing_graph(
embedding=embedding_model,
url=url,
username='neo4j',
password=password,
index_name=index,
node_label=node_label,
text_node_properties=text_properties,
embedding_node_property=f"embedding_{index}",
)
```
### Description
See above.
@Tom
### System Info
langchain 0.1.0
langchain-community 0.0.12
langchain-core 0.1.10
python 3.10.12
Ubuntu 22.04
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Neo4jVector.from_existing_index() requires different embedding_property name when returning indexes. | https://api.github.com/repos/langchain-ai/langchain/issues/16505/comments | 3 | 2024-01-24T11:17:49Z | 2024-06-18T16:09:43Z | https://github.com/langchain-ai/langchain/issues/16505 | 2,098,043,103 | 16,505 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
def generate_extract_chain(profile):
template_ner_extract = """
schema中的"职业"可能对应多个实体,都应全部提取,表述不清晰的应该要概括准确,比如“送外卖”是“外卖员”。
schema中的“同行人”可以有多个,应该尽量把对应的姓名和与当事人的关系实体提取出来
返回结果以json格式返回,包括:
ner_result:
updated_labels: 返回画像标签集所有有更新变动的key-value,如果没有更新则值为空
profile: 更新后的{profile},如果没有更新则值为空
"""
ner_prompt = ChatPromptTemplate.from_template(template_ner_extract)
# Schema
schema = {
"properties": {
"姓名": {"type": "string"},
"年龄": {"type": "integer"},
"职业": {"type": "string"},
"同行人": {"type": "string"},
"常住地址": {"type": "string"},
"工作地址": {"type": "string"},
},
"required": ["姓名", "年龄"],
}
# Run chain
llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo")
chain = create_extraction_chain(schema, llm, ner_prompt, output_key='ner_result')
return chain
def generate_sequen_chain(ner_chain, reasonable_chain, ask_chain):
overall_chain = SequentialChain(
chains=[ner_chain, reasonable_chain, ask_chain],
input_variables=["profile", "dialogue", "pair", "question", "answer"],
# Here we return multiple variables
output_variables=["ask_result", "ner_result", "resonable_result"],
# output_variables=["new_result"],
verbose=True)
return overall_chain
ask_chain = generate_ask_chain()
ner_chain = generate_ner_chain()
reasonable_chain = generate_resonable_chain()
overall_chain = SequentialChain(
chains=[ner_chain, reasonable_chain, ask_chain]
### Description
TypeError: create_extraction_chain() got an unexpected keyword argument 'output_key'
How can I run `create_extraction_chain` as a sub-chain inside a `SequentialChain`?
### System Info
python3.11
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | create_extraction_chain无法作为SequentialChain中的子链 | https://api.github.com/repos/langchain-ai/langchain/issues/16504/comments | 6 | 2024-01-24T11:16:59Z | 2024-03-08T16:43:39Z | https://github.com/langchain-ai/langchain/issues/16504 | 2,098,041,743 | 16,504 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
from langchain_google_vertexai import ChatVertexAI
from langchain_core.prompts import ChatPromptTemplate
llm = ChatVertexAI(
model_name="gemini-pro",
project="my_project",
convert_system_message_to_human=True,
)
prompt = ChatPromptTemplate.from_messages(
[
(
"system",
(
"You are a system that translates each input from english to german."
),
),
("user", "{question}"),
]
)
chain = prompt | llm
answer = chain.invoke({"question": "Hello, how are you?"})
print(answer)
```
This code raises the exception
```
...
env/lib/python3.11/site-packages/langchain_google_vertexai/chat_models.py", line 167, in _parse_chat_history_gemini
raise ValueError(
ValueError: SystemMessages are not yet supported!
To automatically convert the leading SystemMessage to a HumanMessage,
set `convert_system_message_to_human` to True. Example:
llm = ChatVertexAI(model_name="gemini-pro", convert_system_message_to_human=True)
```
### Description
The problem is that the check in langchain_google_vertexai/chat_models.py line 165 always evaluates to True, regardless of whether convert_system_message_to_human is True or False:
```python
[...]
for i, message in enumerate(history):
if (
i == 0
and isinstance(message, SystemMessage)
and not convert_system_message_to_human,
):
```
As can be seen, the ',' after `and not convert_system_message_to_human` must be removed; otherwise the condition is interpreted as a one-element tuple, which always evaluates to `True`, regardless of whether `convert_system_message_to_human` is True or False.
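The effect is easy to reproduce in isolation — a trailing comma turns the whole condition into a one-element tuple, and any non-empty tuple is truthy:

```python
convert_system_message_to_human = True

# The comma after the last operand makes this a 1-tuple, not a bool:
cond = (
    True
    and not convert_system_message_to_human,
)
print(type(cond))      # <class 'tuple'>
print(bool(cond))      # True, even though the boolean expression is False
print(bool((False,)))  # True -- every non-empty tuple is truthy
```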
### System Info
langchain==0.1.3
langchain-community==0.0.15
langchain-core==0.1.15
langchain-google-vertexai==0.0.2
langchain-openai==0.0.3
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | ChatVertexAI model - convert_system_message_to_human argument = True is ignored | https://api.github.com/repos/langchain-ai/langchain/issues/16503/comments | 4 | 2024-01-24T11:13:09Z | 2024-02-19T08:35:34Z | https://github.com/langchain-ai/langchain/issues/16503 | 2,098,035,111 | 16,503 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
In the Metal section of the https://python.langchain.com/docs/integrations/llms/llamacpp document, the description of `n_gpu_layers` is `Metal set to 1 is enough.`
I haven't found the exact reason for this. and when I tested it locally, I felt that using a larger value of `n_gpu_layers` would significantly improve the execution speed. I have a complete ipynb file here: https://github.com/169/ai-snippets/blob/main/llama-cpp.ipynb
Here I explain why I came to this conclusion.
First, use the main compiled by `llama.cpp` to perform inference. You can see that by default, all 33 layers are offloaded to the GPU:
```
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors: Metal buffer size = 4095.07 MiB
llm_load_tensors: CPU buffer size = 70.31 MiB
```
It’s also fast: about 33 tokens/s (From `total time = 3760.69 ms / 124 tokens`)
But if I use `n_gpu_layers=1`, only one layer is offloaded to the GPU, and the rest is given to the CPU:
```
llm_load_tensors: offloading 1 repeating layers to GPU
llm_load_tensors: offloaded 1/33 layers to GPU
llm_load_tensors: CPU buffer size = 4165.37 MiB
llm_load_tensors: Metal buffer size = 132.51 MiB
```
Much slower: about 18 token/s (From `2435.17 ms / 43 tokens`)
The same condition, changed to `n_gpu_layers=33`, has the same effect as using `./main`:
```
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors: CPU buffer size = 70.31 MiB
llm_load_tensors: Metal buffer size = 4095.07 MiB
```
The speed has also increased to about 31 token/s.
So I think that within the optional range, the larger the value of `n_gpu_layers`, the faster the inference. There are also posts similar to this one with doubts: https://www.reddit.com/r/LangChain/comments/18lb4n4/llamacpp_on_mac_n_gpu_layers_n_batch/
I'm a bit confused, So, I added a [PR](https://github.com/langchain-ai/langchain/pull/16501) to remove this part of the description.
@genewoo I see you added this part, do you have any other context proving that using `n_gpu_layers=1` is a best practice?
### Idea or request for content:
_No response_ | DOC: The description of ·n_gpu_layers· in https://python.langchain.com/docs/integrations/llms/llamacpp#metal is incorrect | https://api.github.com/repos/langchain-ai/langchain/issues/16502/comments | 3 | 2024-01-24T11:03:16Z | 2024-06-08T16:09:20Z | https://github.com/langchain-ai/langchain/issues/16502 | 2,098,016,823 | 16,502 |
[
"langchain-ai",
"langchain"
] | ### Feature request
https://www.assemblyai.com/products/lemur
Adding support for LeMUR endpoints, since we already have an AssemblyAI integration
### Motivation
Helpful for folks who have a paid AssemblyAI plan and are using LeMUR, making it easier to migrate their codebase to LangChain
### Your contribution
Perhaps. | Add support for Assembly AI Lemur | https://api.github.com/repos/langchain-ai/langchain/issues/16496/comments | 1 | 2024-01-24T06:44:02Z | 2024-05-01T16:07:09Z | https://github.com/langchain-ai/langchain/issues/16496 | 2,097,539,943 | 16,496 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```
from langchain.agents.agent_toolkits import create_conversational_retrieval_agent
from langchain_openai import ChatOpenAI
from langchain_community.vectorstores import Chroma
from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddings
from langchain.agents.agent_toolkits import create_retriever_tool
from langchain.schema.messages import SystemMessage
embedding = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")
vectordb = Chroma(persist_directory=persist_directory, embedding_function=embedding)
retriever = vectordb.as_retriever()
tool = create_retriever_tool(
retriever,
"search_docs",
"Searches and returns documents."
)
tools = [tool]
llm = ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo-1106")
system_msg = SystemMessage(content=system_msg_txt)
agent_executor = create_conversational_retrieval_agent(
llm, tools,
system_message=system_msg,
max_token_limit=6000,
remember_intermediate_steps=True,
verbose=True)
response = agent_executor.invoke({"input": "What did the president say about Ketanji Brown Jackson?"})
print(response)
```
### Description
There seems to be a change of output between v0.1.0 and v>=0.1.1
Question: is the v0.1.1 behavior the correct one moving forward?
The FunctionMessage['content'] is:
- Document(..) in v==0.1.0
- Just text like, 'these\tcategories\tent...` in v>=0.1.1
See below.
ver 0.1.0
=========
{'input': 'What did the president say about Ketanji Brown Jackson?',
'chat_history': [
HumanMessage(content='What did the president say about Ketanji Brown Jackson?'),
AIMessage(content='', additional_kwargs={'function_call': {'arguments': '{"query":"president say about Ketanji Brown Jackson"}', 'name': 'search_docs'}}),
FunctionMessage(content="[
Document(page_content='these\\tcategories\\tentailed\\ta\\tduty,....\\tpoint', metadata={'page': 2, 'source': 'media/statofunion.txt'}),
Document(page_content='fourth:\\tings\\tof\\tpleasure\\tand\\trapture,...\\tthe', metadata={'page': 5, 'source': 'media/statofunion.txt'})]", name='search_docs'),
AIMessage(content='...')],
'output': '...'
'intermediate_steps':
...
}
ver>= 0.1.1
===========
{'input': 'What did the president say about Ketanji Brown Jackson?',
'chat_history': [
HumanMessage(content='What did the president say about Ketanji Brown Jackson?'),
AIMessage(content='', additional_kwargs={'function_call': {'arguments': '{"query":"president say about Ketanji Brown Jackson"}', 'name': 'search_docs'}}),
FunctionMessage(content='these\tcategories\tentailed\ta\tduty,...', name='search_docs'),
AIMessage(content='The president...')],
'output': '...',
'intermediate_steps':
...
}
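Until this is clarified, downstream code can normalize both shapes defensively. A stdlib-only sketch (the repr format below is simplified from the v0.1.0 output above, so treat the exact parsing as an assumption):

```python
import re

def function_message_text(content: str) -> str:
    """Return plain text whether `content` is already text (v>=0.1.1)
    or the repr of a Document list (v==0.1.0). Purely illustrative."""
    if content.lstrip().startswith("[Document("):
        # crude extraction of the page_content fields from the repr
        parts = re.findall(r"page_content='(.*?)', metadata=", content, flags=re.S)
        return "\n\n".join(parts)
    return content

old_style = "[Document(page_content='hello world', metadata={'page': 2, 'source': 'x.txt'})]"
new_style = "hello world"
print(function_message_text(old_style))  # hello world
print(function_message_text(new_style))  # hello world
```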
### System Info
V0.1.0:
====
langchain==0.1.0
langchain-community==0.0.14
langchain-core==0.1.15
langchain-openai==0.0.3
v0.1.1
====
langchain==0.1.1
langchain-community==0.0.14
langchain-core==0.1.14
langchain-openai==0.0.3
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | FunctionMessage['content'] is different between v==0.1.0 and v>=0.1.1; which is the correct one? | https://api.github.com/repos/langchain-ai/langchain/issues/16493/comments | 1 | 2024-01-24T05:25:36Z | 2024-05-05T16:06:32Z | https://github.com/langchain-ai/langchain/issues/16493 | 2,097,453,960 | 16,493 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
import sqlalchemy as sal
import os, sys, openai
import pandas as pd
import constants
from langchain.sql_database import SQLDatabase
from langchain.llms.openai import OpenAI
from langchain_experimental.sql import SQLDatabaseChain
from sqlalchemy import create_engine
from langchain.chat_models import ChatOpenAI
from typing import List, Optional
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.callbacks.manager import CallbackManagerForToolRun
from langchain.chat_models import ChatOpenAI
from langchain_experimental.plan_and_execute import (
PlanAndExecute,
load_agent_executor,
load_chat_planner,
)
from langchain.sql_database import SQLDatabase
from langchain.text_splitter import TokenTextSplitter
from langchain.tools import BaseTool
from langchain.tools.sql_database.tool import QuerySQLDataBaseTool
from secret_key import openapi_key
from langchain.prompts import PromptTemplate
os.environ['OPENAI_API_KEY'] = openapi_key
from urllib.parse import quote_plus
server_name = constants.server_name
database_name = constants.database_name
username = constants.username
password = constants.password
encoded_password = quote_plus(password)
connection_uri = f"mssql+pyodbc://{username}:{encoded_password}@{server_name}/{database_name}?driver=ODBC+Driver+17+for+SQL+Server"
engine = create_engine(connection_uri)
def chat(question, sql_format):
# greetings = ["hi", "hello", "hey"]
# if question.lower() in greetings:
# return "Hello! How can I assist you today?"
model_name="gpt-3.5-turbo-16k"
db = SQLDatabase(engine, view_support=True, include_tables=['EGV_emp_departments_ChatGPT'])
llm = ChatOpenAI(temperature=0, verbose=False, model=model_name)
PROMPT = """
Given an input question, first create a syntactically correct mssql query to run,
then look at the results of the query and return the answer.
Return the answer in a sentence form.
The question: {question}
"""
prompt_template = """
Use the following pieces of context to answer the question at the end. If you don't know the answer,
just say that you don't know, don't try to make up an answer.
{context}
Question: {question}
Helpful Answer:"""
answer = None
if sql_format==False:
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
answer = db_chain.run(PROMPT.format(question=question))
else:
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True , return_sql =True)
sql_query = db_chain.run(question)
print("SQLQuery: "+str(sql_query))
# result = engine.execute(sql_query)
result_df = pd.read_sql(sql_query, engine)
if result_df.empty:
return "No results found"
answer = result_df.to_dict()
def handle_greetings(question):
greetings = ["hi", "hello", "hey"]
if any(greeting in question.lower() for greeting in greetings):
return "Hello! How can I assist you today?"
else:
return None
PROMPT = PromptTemplate(
template=prompt_template,
input_variables=["question"],
preprocessor=handle_greetings(question)
)
def split_text(text, chunk_size, chunk_overlap=0):
text_splitter = TokenTextSplitter(
chunk_size=chunk_size, chunk_overlap=chunk_overlap
)
yield from text_splitter.split_text(text)
class QuerySQLDatabaseTool2(QuerySQLDataBaseTool):
def _run(
self,
query: str,
run_manager: Optional[CallbackManagerForToolRun] = None,
) -> str:
result = self.db.run_no_throw(query)
return next(split_text(result, chunk_size=14_000))
class SQLDatabaseToolkit2(SQLDatabaseToolkit):
def get_tools(self) -> List[BaseTool]:
tools = super().get_tools()
original_query_tool_description = tools[0].description
new_query_tool = QuerySQLDatabaseTool2(
db=self.db, description=original_query_tool_description
)
tools[0] = new_query_tool
return tools
# return db_chain.run(question)
return answer
def chain1(question):
text = chat(question,False)
return text
def chain2(question):
query = chat(question,True)
return query
answer=chain1("what is the uan number for AD#######")
print(answer)
### Description
In the chatbot, which is connected to the DB: when I ask a question like "give me the UAN number" and the UAN is not present in the DB, the chain fetches the EUID number instead. If the requested data is not there, the answer should be "invalid question"; it should not execute a wrong query.
Answer:I'm sorry, but I cannot answer the question "hi" as it is not a valid question. Please provide a specific question related to the data in the table.
> Finished chain.
I'm sorry, but I cannot answer the question "hi" as it is not a valid question. Please provide a specific question related to the data in the table.
> Entering new SQLDatabaseChain chain...
Given an input question, first create a syntactically correct mssql query to run,
then look at the results of the query and return the answer.
Return the answer in a sentence form.
The question: what is the uan number for AD23010923
SQLQuery:SELECT [EmployeeID], [EmployeeName], [EmployeeNameAsPerBank], [EmployeeEuid]
FROM [EGV_emp_departments_ChatGPT]
WHERE [EmployeeID] = 'AD23010923'
SQLResult: [('AD########', '######', 'S######## P', Decimal('######'))]
Answer:The UAN number for AD####### is #####.
> Finished chain.
The UAN number for AD###### is ####.
How can we validate each answer before producing the output?
For this, can we modify `base.py` in `langchain_experimental.sql` (`langchain_experimental/sql/base.py`), since that is where the results appear to be fetched from the DB?
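Before patching `base.py`, a lighter-weight option is to validate the generated SQL in the calling code: run the chain with `return_sql=True` (as in the `else` branch above), check that every selected column actually exists in the table, and answer "Invalid question" otherwise. This is only a sketch; `validate_sql_columns` is made up for illustration, and the SQLite `PRAGMA` lookup stands in for a real metadata query (against MSSQL you would read `INFORMATION_SCHEMA.COLUMNS` instead).

```python
# Hypothetical pre-execution guard: reject generated SQL that selects
# columns which do not exist in the target table.
import re
import sqlite3

def selected_columns(sql_query: str) -> list:
    """Pull the column names out of a simple SELECT ... FROM ... statement."""
    match = re.search(r"select\s+(.+?)\s+from", sql_query, re.IGNORECASE | re.DOTALL)
    if not match:
        return []
    return [c.strip().strip("[]").split(".")[-1] for c in match.group(1).split(",")]

def validate_sql_columns(conn, table_name: str, sql_query: str) -> bool:
    """True only if every selected column exists in the target table."""
    known = {row[1].lower() for row in conn.execute(f"PRAGMA table_info({table_name})")}
    cols = selected_columns(sql_query)
    return bool(cols) and all(c == "*" or c.lower() in known for c in cols)

# Demo against an in-memory stand-in for the real table:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (EmployeeID TEXT, EmployeeEuid TEXT)")
print(validate_sql_columns(conn, "emp", "SELECT [EmployeeID] FROM emp"))  # True
print(validate_sql_columns(conn, "emp", "SELECT UAN FROM emp"))           # False
```

In `chat()`, this check would run right after `sql_query = db_chain.run(question)` in the `return_sql` branch, returning "Invalid question" when it fails instead of executing the query.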
"""Chain for interacting with SQL Database."""
from __future__ import annotations
import warnings
from typing import Any, Dict, List, Optional
from langchain.callbacks.manager import CallbackManagerForChainRun
from langchain.chains.base import Chain
from langchain.chains.llm import LLMChain
from langchain.chains.sql_database.prompt import DECIDER_PROMPT, PROMPT, SQL_PROMPTS
from langchain.prompts.prompt import PromptTemplate
from langchain.schema import BasePromptTemplate
from langchain.schema.language_model import BaseLanguageModel
from langchain.tools.sql_database.prompt import QUERY_CHECKER
from langchain.utilities.sql_database import SQLDatabase
from langchain_experimental.pydantic_v1 import Extra, Field, root_validator
INTERMEDIATE_STEPS_KEY = "intermediate_steps"
class SQLDatabaseChain(Chain):
"""Chain for interacting with SQL Database.
Example:
.. code-block:: python
from langchain_experimental.sql import SQLDatabaseChain
from langchain.llms import OpenAI, SQLDatabase
db = SQLDatabase(...)
db_chain = SQLDatabaseChain.from_llm(OpenAI(), db)
*Security note*: Make sure that the database connection uses credentials
that are narrowly-scoped to only include the permissions this chain needs.
Failure to do so may result in data corruption or loss, since this chain may
attempt commands like `DROP TABLE` or `INSERT` if appropriately prompted.
The best way to guard against such negative outcomes is to (as appropriate)
limit the permissions granted to the credentials used with this chain.
This issue shows an example negative outcome if these steps are not taken:
https://github.com/langchain-ai/langchain/issues/5923
"""
llm_chain: LLMChain
llm: Optional[BaseLanguageModel] = None
"""[Deprecated] LLM wrapper to use."""
database: SQLDatabase = Field(exclude=True)
"""SQL Database to connect to."""
prompt: Optional[BasePromptTemplate] = None
"""[Deprecated] Prompt to use to translate natural language to SQL."""
top_k: int = float('inf')
"""Number of results to return from the query"""
input_key: str = "query" #: :meta private:
output_key: str = "result" #: :meta private:
return_sql: bool = False
"""Will return sql-command directly without executing it"""
return_intermediate_steps: bool = False
"""Whether or not to return the intermediate steps along with the final answer."""
return_direct: bool = False
"""Whether or not to return the result of querying the SQL table directly."""
use_query_checker: bool = False
"""Whether or not the query checker tool should be used to attempt
to fix the initial SQL from the LLM."""
query_checker_prompt: Optional[BasePromptTemplate] = None
"""The prompt template that should be used by the query checker"""
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
arbitrary_types_allowed = True
@root_validator(pre=True)
def raise_deprecation(cls, values: Dict) -> Dict:
if "llm" in values:
warnings.warn(
"Directly instantiating an SQLDatabaseChain with an llm is deprecated. "
"Please instantiate with llm_chain argument or using the from_llm "
"class method."
)
if "llm_chain" not in values and values["llm"] is not None:
database = values["database"]
prompt = values.get("prompt") or SQL_PROMPTS.get(
database.dialect, PROMPT
)
values["llm_chain"] = LLMChain(llm=values["llm"], prompt=prompt)
return values
@property
def input_keys(self) -> List[str]:
"""Return the singular input key.
:meta private:
"""
return [self.input_key]
@property
def output_keys(self) -> List[str]:
"""Return the singular output key.
:meta private:
"""
if not self.return_intermediate_steps:
return [self.output_key]
else:
return [self.output_key, INTERMEDIATE_STEPS_KEY]
def _call(
self,
inputs: Dict[str, Any],
run_manager: Optional[CallbackManagerForChainRun] = None,
) -> Dict[str, Any]:
_run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()
input_text = f"{inputs[self.input_key]}\nSQLQuery:"
print("SQLQuery")
_run_manager.on_text(input_text, verbose=self.verbose)
# If not present, then defaults to None which is all tables.
table_names_to_use = inputs.get("table_names_to_use")
table_info = self.database.get_table_info(table_names=table_names_to_use)
llm_inputs = {
"input": input_text,
"top_k": str(self.top_k),
"dialect": self.database.dialect,
"table_info": table_info,
"stop": ["\nSQLResult:"],
}
if self.memory is not None:
for k in self.memory.memory_variables:
llm_inputs[k] = inputs[k]
intermediate_steps: List = []
try:
intermediate_steps.append(llm_inputs.copy()) # input: sql generation
sql_cmd = self.llm_chain.predict(
callbacks=_run_manager.get_child(),
**llm_inputs,
).strip()
if self.return_sql:
return {self.output_key: sql_cmd}
if not self.use_query_checker:
_run_manager.on_text(sql_cmd, color="green", verbose=self.verbose)
intermediate_steps.append(
sql_cmd
) # output: sql generation (no checker)
intermediate_steps.append({"sql_cmd": sql_cmd}) # input: sql exec
print(sql_cmd)
result = self.database.run(sql_cmd)
print(result)
intermediate_steps.append(str(result)) # output: sql exec
else:
query_checker_prompt = self.query_checker_prompt or PromptTemplate(
template=QUERY_CHECKER, input_variables=["query", "dialect"]
)
query_checker_chain = LLMChain(
llm=self.llm_chain.llm, prompt=query_checker_prompt
)
query_checker_inputs = {
"query": sql_cmd,
"dialect": self.database.dialect,
}
checked_sql_command: str = query_checker_chain.predict(
callbacks=_run_manager.get_child(), **query_checker_inputs
).strip()
intermediate_steps.append(
checked_sql_command
) # output: sql generation (checker)
_run_manager.on_text(
checked_sql_command, color="green", verbose=self.verbose
)
intermediate_steps.append(
{"sql_cmd": checked_sql_command}
) # input: sql exec
result = self.database.run(checked_sql_command)
intermediate_steps.append(str(result)) # output: sql exec
sql_cmd = checked_sql_command
_run_manager.on_text("\nSQLResult: ", verbose=self.verbose)
_run_manager.on_text(result, color="yellow", verbose=self.verbose)
# If return direct, we just set the final result equal to
# the result of the sql query result, otherwise try to get a human readable
# final answer
if self.return_direct:
final_result = result
else:
_run_manager.on_text("\nAnswer:", verbose=self.verbose)
# if result:
# input_text += f"{sql_cmd}\nSQLResult: {result}\nAnswer:"
# else:
# input_text += f"{sql_cmd}\nSQLResult: {result}\nAnswer: {'No result found' if not result else ''}"
input_text += f"{sql_cmd}\nSQLResult: {result}\nAnswer:"
llm_inputs["input"] = input_text
intermediate_steps.append(llm_inputs.copy()) # input: final answer
final_result = self.llm_chain.predict(
callbacks=_run_manager.get_child(),
**llm_inputs,
).strip()
# print("------", result)
if not result:
final_result = 'Invalid Question'
# print("....",final_result)
intermediate_steps.append(final_result) # output: final answer
_run_manager.on_text(final_result, color="green", verbose=self.verbose)
chain_result: Dict[str, Any] = {self.output_key: final_result}
if self.return_intermediate_steps:
chain_result[INTERMEDIATE_STEPS_KEY] = intermediate_steps
print("----"+str(chain_result)+"-----")
return chain_result
except Exception as exc:
# Append intermediate steps to exception, to aid in logging and later
# improvement of few shot prompt seeds
exc.intermediate_steps = intermediate_steps # type: ignore
raise exc
@property
def _chain_type(self) -> str:
return "sql_database_chain"
@classmethod
def from_llm(
cls,
llm: BaseLanguageModel,
db: SQLDatabase,
prompt: Optional[BasePromptTemplate] = None,
**kwargs: Any,
) -> SQLDatabaseChain:
"""Create a SQLDatabaseChain from an LLM and a database connection.
*Security note*: Make sure that the database connection uses credentials
that are narrowly-scoped to only include the permissions this chain needs.
Failure to do so may result in data corruption or loss, since this chain may
attempt commands like `DROP TABLE` or `INSERT` if appropriately prompted.
The best way to guard against such negative outcomes is to (as appropriate)
limit the permissions granted to the credentials used with this chain.
This issue shows an example negative outcome if these steps are not taken:
https://github.com/langchain-ai/langchain/issues/5923
"""
prompt = prompt or SQL_PROMPTS.get(db.dialect, PROMPT)
llm_chain = LLMChain(llm=llm, prompt=prompt)
return cls(llm_chain=llm_chain, database=db, **kwargs)
class SQLDatabaseSequentialChain(Chain):
"""Chain for querying SQL database that is a sequential chain.
The chain is as follows:
1. Based on the query, determine which tables to use.
2. Based on those tables, call the normal SQL database chain.
3. Don't consider tables that are not mentioned; if no result matches the keyword, return the answer as "invalid question".
This is useful in cases where the number of tables in the database is large.
"""
decider_chain: LLMChain
sql_chain: SQLDatabaseChain
input_key: str = "query" #: :meta private:
output_key: str = "result" #: :meta private:
return_intermediate_steps: bool = False
@classmethod
def from_llm(
cls,
llm: BaseLanguageModel,
db: SQLDatabase,
query_prompt: BasePromptTemplate = PROMPT,
decider_prompt: BasePromptTemplate = DECIDER_PROMPT,
**kwargs: Any,
) -> SQLDatabaseSequentialChain:
"""Load the necessary chains."""
sql_chain = SQLDatabaseChain.from_llm(llm, db, prompt=query_prompt, **kwargs)
decider_chain = LLMChain(
llm=llm, prompt=decider_prompt, output_key="table_names"
)
return cls(sql_chain=sql_chain, decider_chain=decider_chain, **kwargs)
@property
def input_keys(self) -> List[str]:
"""Return the singular input key.
:meta private:
"""
return [self.input_key]
@property
def output_keys(self) -> List[str]:
"""Return the singular output key.
:meta private:
"""
if not self.return_intermediate_steps:
return [self.output_key]
else:
return [self.output_key, INTERMEDIATE_STEPS_KEY]
def _call(
self,
inputs: Dict[str, Any],
run_manager: Optional[CallbackManagerForChainRun] = None,
) -> Dict[str, Any]:
_run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()
_table_names = self.sql_chain.database.get_usable_table_names()
table_names = ", ".join(_table_names)
llm_inputs = {
"query": inputs[self.input_key],
"table_names": table_names,
}
_lowercased_table_names = [name.lower() for name in _table_names]
table_names_from_chain = self.decider_chain.predict_and_parse(**llm_inputs)
table_names_to_use = [
name
for name in table_names_from_chain
if name.lower() in _lowercased_table_names
]
_run_manager.on_text("Table names to use:", end="\n", verbose=self.verbose)
_run_manager.on_text(
str(table_names_to_use), color="yellow", verbose=self.verbose
)
new_inputs = {
self.sql_chain.input_key: inputs[self.input_key],
"table_names_to_use": table_names_to_use,
}
return self.sql_chain(
new_inputs, callbacks=_run_manager.get_child(), return_only_outputs=True
)
@property
def _chain_type(self) -> str:
return "sql_database_sequential_chain"
### System Info
python: 3.11
langchain: latest
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | fetching inaccurate answers from the database | https://api.github.com/repos/langchain-ai/langchain/issues/16491/comments | 13 | 2024-01-24T05:09:25Z | 2024-05-01T16:06:59Z | https://github.com/langchain-ai/langchain/issues/16491 | 2,097,440,450 | 16,491 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
agent_executor = create_sql_agent(
    prefix=MSSQL_AGENT_PREFIX,
    format_instructions=MSSQL_AGENT_OUTPUT_FORMAT_INSTRUCTIONS,
    suffix=MSSQL_AGENT_SUFFIX,
    llm=llm,
    toolkit=toolkit,
    extra_tools=custom_tool_list,
    agent_type=AgentType.OPENAI_FUNCTIONS,
    callback_manager=self.callbacks,
    top_k=self.k,
    verbose=True,
    handle_parsing_errors=True,
    return_intermediate_steps=True
)
```
### Description
We are using `create_sql_agent` and want to show the final response (the DB records) as an HTML table plus a short summary.
When we ask the LLM (GPT-4) to generate the final response in HTML, the completion is around 1200 tokens, resulting in high latency.
To overcome the latency issue, we want to generate the HTML table separately and have the LLM only provide insights in a short summary.
To achieve this, we just want the SQL query as output.
Are there any ways, using callbacks or something else, to achieve this?
Please let us know if you have ideas.
Thanks a lot!
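One option that avoids asking GPT-4 for HTML at all: since `return_intermediate_steps=True` is already set above, the SQL the agent actually ran can be pulled out of `intermediate_steps` and the table built locally. This is a sketch, not an official API; the tool name `"sql_db_query"` matches the toolkit's query tool in recent versions but should be verified for yours, and `extract_sql_queries` is a made-up helper.

```python
# Collect the SQL text of every query-tool call the agent made, so the
# HTML table can be rendered locally instead of by the LLM.
from collections import namedtuple

def extract_sql_queries(intermediate_steps, tool_name="sql_db_query"):
    """Return the tool_input of each agent step that called the query tool."""
    return [
        action.tool_input
        for action, _observation in intermediate_steps
        if getattr(action, "tool", None) == tool_name
    ]

# Tiny demo with a stand-in for AgentAction:
FakeAction = namedtuple("FakeAction", "tool tool_input")
steps = [
    (FakeAction("sql_db_list_tables", ""), "employees"),
    (FakeAction("sql_db_query", "SELECT TOP 5 * FROM employees"), "rows..."),
]
print(extract_sql_queries(steps))  # ['SELECT TOP 5 * FROM employees']
```

With the real agent this would look like `result = agent_executor({"input": question})`, then `sql = extract_sql_queries(result["intermediate_steps"])[-1]`, and e.g. `pd.read_sql(sql, engine).to_html(index=False)` for the table, leaving the LLM to produce only the short summary.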
### System Info
langchain==0.0.351
langchain-community==0.0.4
langchain-core==0.1.1
langsmith==0.0.72
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | How to return SQL query only in create_sql_agent to avoid latency because of high completion tokens? | https://api.github.com/repos/langchain-ai/langchain/issues/16489/comments | 9 | 2024-01-24T04:29:10Z | 2024-05-01T16:06:53Z | https://github.com/langchain-ai/langchain/issues/16489 | 2,097,406,827 | 16,489 |
[
"langchain-ai",
"langchain"
] | ### Feature request
UnstructuredFileLoader currently only supports local files and the hosted Unstructured API. This request is to expand the loader's ingest capabilities in the Python library by adding options for S3 (streaming) and for a `bytearray`.
### Motivation
When running in environments like Kubernetes, it is inconvenient to have to fetch a document and store it locally in the container before it can be processed. This requires planning for storage capacity (either local storage or configuring PVCs).
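Until native support exists, one possible workaround is to stream the object into a temporary file and hand that path to `UnstructuredFileLoader`. It still touches disk briefly, but avoids pre-provisioning document storage. Everything in the commented section (`boto3`, the bucket/key names, the loader call) is an assumption for illustration; only `materialize_bytes` below is concrete.

```python
# Workaround sketch: turn in-memory bytes into a file path the current
# path-based loader API can consume.
import tempfile
from pathlib import Path

def materialize_bytes(data: bytes, suffix: str = ".pdf") -> str:
    """Write in-memory bytes to a named temp file and return its path."""
    tmp = tempfile.NamedTemporaryFile(suffix=suffix, delete=False)
    tmp.write(data)
    tmp.close()
    return tmp.name

path = materialize_bytes(b"%PDF-1.4 ...", ".pdf")
print(Path(path).exists())  # True

# With S3 (boto3 assumed, names illustrative):
# body = boto3.client("s3").get_object(Bucket="my-bucket", Key="doc.pdf")["Body"].read()
# docs = UnstructuredFileLoader(materialize_bytes(body)).load()
```

Remember to delete the temp file after loading; a true streaming/bytes API in the loader itself would remove even this transient disk usage.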
### Your contribution
Possibly, but not any time soon. | UnstructuredFileLoader support for S3 and bytearray | https://api.github.com/repos/langchain-ai/langchain/issues/16488/comments | 1 | 2024-01-24T02:49:00Z | 2024-05-01T16:06:49Z | https://github.com/langchain-ai/langchain/issues/16488 | 2,097,319,644 | 16,488 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```
from langchain_community.utilities import SQLDatabase
from langchain_experimental.sql import SQLDatabaseChain
from langchain_openai import OpenAI
from langchain.chains import create_sql_query_chain
from langchain_openai import ChatOpenAI
# initial setup follows https://python.langchain.com/docs/use_cases/qa_structured/sql/#case-2-text-to-sql-query-and-execution
db = SQLDatabase.from_uri("sqlite:///./Chinook.db")
chain = create_sql_query_chain(ChatOpenAI(
openai_api_key = "my-api-here",
openai_api_base = "http://gpt-proxy.jd.com/gateway/azure",
temperature = 0,
model = 'gpt-35-turbo-1106'), db)
response = chain.invoke({"question": "How many employees are there"})
print(response)
```
I tested the LLM with the snippet below:
```
llm = ChatOpenAI(
openai_api_key = 'my-api-here',
openai_api_base = "http://gpt-proxy.jd.com/gateway/azure",
temperature = 0,
model = 'gpt-35-turbo-1106'
)
template = """Question: {question}
Answer: Let's think step by step"""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
```
This one works well and generates a result like:
```
Justin Bieber was born on March 1, 1994. The Super Bowl for the 1993 NFL season was Super Bowl XXVIII, which was won by the Dallas Cowboys. Therefore, the Dallas Cowboys won the Super Bowl in the year Justin Bieber was born.
```
### Description
I get an error when using the SQL chain. The traceback is below:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In [7], line 7
1 chain = create_sql_query_chain(ChatOpenAI(
2 openai_api_key = 'mask-here,
3 openai_api_base = "http://gpt-proxy.jd.com/gateway/azure",
4 temperature = 0,
5 model = 'gpt-35-turbo-1106'),
6 db)
----> 7 response = chain.invoke({"question": "How many employees are there"})
8 print(response)
File ~/.pyenv/versions/3.8.10/lib/python3.8/site-packages/langchain_core/runnables/base.py:2053, in RunnableSequence.invoke(self, input, config)
2051 try:
2052 for i, step in enumerate(self.steps):
-> 2053 input = step.invoke(
2054 input,
2055 # mark each step as a child run
2056 patch_config(
2057 config, callbacks=run_manager.get_child(f"seq:step:{i+1}")
2058 ),
2059 )
2060 # finish the root run
2061 except BaseException as e:
File ~/.pyenv/versions/3.8.10/lib/python3.8/site-packages/langchain_core/runnables/base.py:3887, in RunnableBindingBase.invoke(self, input, config, **kwargs)
3881 def invoke(
3882 self,
3883 input: Input,
3884 config: Optional[RunnableConfig] = None,
3885 **kwargs: Optional[Any],
3886 ) -> Output:
-> 3887 return self.bound.invoke(
3888 input,
3889 self._merge_configs(config),
3890 **{**self.kwargs, **kwargs},
3891 )
File ~/.pyenv/versions/3.8.10/lib/python3.8/site-packages/langchain_core/language_models/chat_models.py:165, in BaseChatModel.invoke(self, input, config, stop, **kwargs)
154 def invoke(
155 self,
156 input: LanguageModelInput,
(...)
160 **kwargs: Any,
161 ) -> BaseMessage:
162 config = ensure_config(config)
163 return cast(
164 ChatGeneration,
--> 165 self.generate_prompt(
166 [self._convert_input(input)],
167 stop=stop,
168 callbacks=config.get("callbacks"),
169 tags=config.get("tags"),
170 metadata=config.get("metadata"),
171 run_name=config.get("run_name"),
172 **kwargs,
173 ).generations[0][0],
174 ).message
File ~/.pyenv/versions/3.8.10/lib/python3.8/site-packages/langchain_core/language_models/chat_models.py:543, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
535 def generate_prompt(
536 self,
537 prompts: List[PromptValue],
(...)
540 **kwargs: Any,
541 ) -> LLMResult:
542 prompt_messages = [p.to_messages() for p in prompts]
--> 543 return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File ~/.pyenv/versions/3.8.10/lib/python3.8/site-packages/langchain_core/language_models/chat_models.py:407, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
405 if run_managers:
406 run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 407 raise e
408 flattened_outputs = [
409 LLMResult(generations=[res.generations], llm_output=res.llm_output)
410 for res in results
411 ]
412 llm_output = self._combine_llm_outputs([res.llm_output for res in results])
File ~/.pyenv/versions/3.8.10/lib/python3.8/site-packages/langchain_core/language_models/chat_models.py:397, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
394 for i, m in enumerate(messages):
395 try:
396 results.append(
--> 397 self._generate_with_cache(
398 m,
399 stop=stop,
400 run_manager=run_managers[i] if run_managers else None,
401 **kwargs,
402 )
403 )
404 except BaseException as e:
405 if run_managers:
File ~/.pyenv/versions/3.8.10/lib/python3.8/site-packages/langchain_core/language_models/chat_models.py:576, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
572 raise ValueError(
573 "Asked to cache, but no cache found at `langchain.cache`."
574 )
575 if new_arg_supported:
--> 576 return self._generate(
577 messages, stop=stop, run_manager=run_manager, **kwargs
578 )
579 else:
580 return self._generate(messages, stop=stop, **kwargs)
File ~/.pyenv/versions/3.8.10/lib/python3.8/site-packages/langchain_openai/chat_models/base.py:442, in ChatOpenAI._generate(self, messages, stop, run_manager, stream, **kwargs)
436 params = {
437 **params,
438 **({"stream": stream} if stream is not None else {}),
439 **kwargs,
440 }
441 response = self.client.create(messages=message_dicts, **params)
--> 442 return self._create_chat_result(response)
File ~/.pyenv/versions/3.8.10/lib/python3.8/site-packages/langchain_openai/chat_models/base.py:459, in ChatOpenAI._create_chat_result(self, response)
457 if not isinstance(response, dict):
458 response = response.dict()
--> 459 for res in response["choices"]:
460 message = _convert_dict_to_message(res["message"])
461 generation_info = dict(finish_reason=res.get("finish_reason"))
TypeError: 'NoneType' object is not iterable
```
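A hedged reading of the traceback: it bottoms out at `for res in response["choices"]` with `choices` being `None`, which usually means the gateway/proxy returned an error or content-filter payload instead of a normal chat completion (the direct `ChatOpenAI` call can still succeed if its request happens to be handled differently by the proxy). The guard below is a generic sketch of the check that would surface the real payload; it is not LangChain code.

```python
# Fail loudly with the raw payload instead of "'NoneType' object is not iterable".
def ensure_choices(response: dict) -> list:
    """Return response["choices"], raising a descriptive error if it is missing."""
    choices = response.get("choices")
    if not choices:
        raise ValueError(f"Gateway returned no choices: {response!r}")
    return choices

print(ensure_choices({"choices": [{"message": {"content": "ok"}}]}))
```

Logging the raw proxy response (or calling the gateway directly with the same model name and a `stop` parameter, which `create_sql_query_chain` sends) should show what the gateway is actually returning.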
### System Info
langchain==0.1.1
Python 3.8.6
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Using create_sql_query_chain get Nonetype error but LLMChain can generate correctly | https://api.github.com/repos/langchain-ai/langchain/issues/16484/comments | 1 | 2024-01-24T01:54:32Z | 2024-05-01T16:06:43Z | https://github.com/langchain-ai/langchain/issues/16484 | 2,097,277,679 | 16,484 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
Calling `_prepare_range_query` with both `filter` and `distance_threshold` arguments raises the following exception:
```
Traceback (most recent call last):
File "/Users/me/Codes/myproject/components/storage/redisHandler.py", line 122, in run_similarity_search
return vectorstore.similarity_search_with_score(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/me/Codes/myproject/venv/lib/python3.11/site-packages/langchain_community/vectorstores/redis/base.py", line 837, in similarity_search_with_score
raise e
File "/Users/me/Codes/myproject/venv/lib/python3.11/site-packages/langchain_community/vectorstores/redis/base.py", line 828, in similarity_search_with_score
results = self.client.ft(self.index_name).search(redis_query, params_dict) # type: ignore # noqa: E501
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/me/Codes/myproject/venv/lib/python3.11/site-packages/redis/commands/search/commands.py", line 501, in search
res = self.execute_command(SEARCH_CMD, *args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/me/Codes/myproject/venv/lib/python3.11/site-packages/redis/client.py", line 536, in execute_command
return conn.retry.call_with_retry(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/me/Codes/myproject/venv/lib/python3.11/site-packages/redis/retry.py", line 46, in call_with_retry
return do()
^^^^
File "/Users/me/Codes/myproject/venv/lib/python3.11/site-packages/redis/client.py", line 537, in <lambda>
lambda: self._send_command_parse_response(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/me/Codes/myproject/venv/lib/python3.11/site-packages/redis/client.py", line 513, in _send_command_parse_response
return self.parse_response(conn, command_name, **options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/me/Codes/myproject/venv/lib/python3.11/site-packages/redis/client.py", line 553, in parse_response
response = connection.read_response()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/me/Codes/myproject/venv/lib/python3.11/site-packages/redis/connection.py", line 524, in read_response
raise response
redis.exceptions.ResponseError: Invalid attribute yield_distance_as
```
Is there a way to provide both arguments?
### Description
* I'm trying to use LangChain for vector search on Redis
* calling `similarity_search_with_score` with a `distance_threshold` and a `filter` invokes `_prepare_range_query` with both arguments
* this generates the following Redis query: `(@content_vector:[VECTOR_RANGE $distance_threshold $vector] (@creation_date:[1705359600.0 +inf] @creation_date:[-inf 1705878000.0]))=>{$yield_distance_as: distance}`
* this query produces the stack trace shown above
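Until the combined range-plus-filter query is fixed, a client-side workaround is to run the filtered search without `distance_threshold` and apply the threshold to the returned scores yourself. This is a sketch only: `apply_distance_threshold` is a made-up helper, and whether "lower score means closer" holds depends on the distance metric configured on your index.

```python
# Post-filter (doc, score) pairs by a distance threshold on the client side.
def apply_distance_threshold(scored_docs, threshold: float):
    """Keep only (doc, score) pairs whose distance is within the threshold."""
    return [(doc, score) for doc, score in scored_docs if score <= threshold]

# results = vectorstore.similarity_search_with_score(query, k=20, filter=my_filter)
# results = apply_distance_threshold(results, 0.2)
print(apply_distance_threshold([("a", 0.1), ("b", 0.5)], 0.2))  # [('a', 0.1)]
```

Over-fetching (a larger `k`) before post-filtering approximates the range query's behavior at the cost of extra results transferred from Redis.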
### System Info
macOS, Python 3.11
langchain==0.0.349
langchain-community==0.0.1
langchain-core==0.0.13
redis==5.0.1
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Using both filter and distance_threshold generate an `Invalid attribute yield_distance_as` | https://api.github.com/repos/langchain-ai/langchain/issues/16476/comments | 4 | 2024-01-23T19:48:56Z | 2024-06-08T16:09:15Z | https://github.com/langchain-ai/langchain/issues/16476 | 2,096,865,560 | 16,476 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
LangChain follows a monorepo architecture. It's difficult to see from the releases which packages were released, and which PRs went into them. https://github.com/langchain-ai/langchain/releases
We should update CI to draft a better release note with package information and potentially break PRs by package. | CI: Draft more readable release drafts that are broken down by package | https://api.github.com/repos/langchain-ai/langchain/issues/16471/comments | 1 | 2024-01-23T19:12:40Z | 2024-04-30T16:26:48Z | https://github.com/langchain-ai/langchain/issues/16471 | 2,096,805,196 | 16,471 |
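One possible shape for the grouping step, sketched as a script over `git log --name-only`-style data: bucket each merged PR title by the `libs/<package>` directory it touched. The input format below is an assumption for illustration; only the `libs/` monorepo layout is taken from the repo.

```python
# Group PR titles by the libs/<package> directory their changed files live in.
def group_by_package(commits):
    """commits: list of (pr_title, [changed_paths]) -> {package_dir: [pr_titles]}."""
    grouped = {}
    for title, paths in commits:
        pkgs = {"/".join(p.split("/")[:2]) for p in paths if p.startswith("libs/")}
        for pkg in sorted(pkgs):
            grouped.setdefault(pkg, []).append(title)
    return grouped

notes = group_by_package([
    ("core: fix runnable config merge", ["libs/core/langchain_core/runnables/base.py"]),
    ("community: add new loader", ["libs/community/langchain_community/loaders/x.py"]),
])
print(notes)
```

A release workflow could feed this from `git log <last-tag>..HEAD --name-only` and emit one markdown section per package in the draft notes.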
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```
db = DeepLake(dataset_path=dataset_path, embedding=embeddings)
retriver = db.as_retriever()
QUERY_PROMPT = PromptTemplate(
input_variables=["inputs"],
template=""" Use the input to retrieve the relevant information or data from the retriever & generate results based on the data
inputs = {inputs}
Generate new ideas & lay out all the information like Game Name, Mechanics, Objective, USP, Level fail condition & Rules. Get the ideas from the dataset, similar to how they have been described, where the number equals the number of ideas you need to generate.
"""
)
llm = ChatOpenAI(temperature=0.4)
query = "Generate 3 new game ideas which include solving puzzles; only get the idea from the retriever, not the whole information.\n Learn the underlying semantics of their game design, mechanics, USP & other details; do not just copy-paste the information from the dataset, learn & generate new ideas.\n Verify your results so that they do not match 100% with the info available in the dataset"
# Chain
llm_chain = LLMChain(llm=llm, prompt=QUERY_PROMPT)
# Other inputs
# query="Generate 3 new game ideas which include solving puzzles"
inputs = {"inputs" : query}
# Run
retriever_one = MultiQueryRetriever(
retriever=retriver, llm_chain=llm_chain
)
# Results
unique_docs = retriever_one.get_relevant_documents(
query="Generate 3 new game ideas which include solving puzzles", inputs=inputs
)
```
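A likely cause to note before the traceback, hedged since the error text is cut off: `MultiQueryRetriever.generate_queries` invokes the chain with the key `"question"`, while the prompt above declares `input_variables=["inputs"]`, so the chain should fail input validation before ever reaching the LLM. A pure-Python stand-in (no LangChain needed) reproduces the shape of that failure:

```python
# Mimic LLMChain's input validation: every declared variable must be supplied.
def format_prompt(template: str, declared, supplied: dict) -> str:
    """Raise like the chain does when a declared input key is missing."""
    missing = [v for v in declared if v not in supplied]
    if missing:
        raise ValueError(f"Missing some input keys: {set(missing)}")
    return template.format(**supplied)

try:
    format_prompt("inputs = {inputs}", ["inputs"], {"question": "solving puzzles"})
except ValueError as err:
    print(err)  # Missing some input keys: {'inputs'}
```

Declaring the prompt with `input_variables=["question"]` (and a `{question}` placeholder) matches what the retriever actually passes in.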
Error Thrown:
ValueError Traceback (most recent call last)
Cell In[28], line 7
2 retriever_one = MultiQueryRetriever(
3 retriever=retriver, llm_chain=llm_chain
4 )
6 # Results
----> 7 unique_docs = retriever_one.get_relevant_documents(
8 query="Generate 3 new game idea which includes solving puzzels", inputs=inputs
9 )
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain_core\retrievers.py:223, in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)
221 except Exception as e:
222 run_manager.on_retriever_error(e)
--> 223 raise e
224 else:
225 run_manager.on_retriever_end(
226 result,
227 **kwargs,
228 )
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain_core\retrievers.py:216, in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)
214 _kwargs = kwargs if self._expects_other_args else {}
215 if self._new_arg_supported:
--> 216 result = self._get_relevant_documents(
217 query, run_manager=run_manager, **_kwargs
218 )
219 else:
220 result = self._get_relevant_documents(query, **_kwargs)
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\retrievers\multi_query.py:172, in MultiQueryRetriever._get_relevant_documents(self, query, run_manager)
158 def _get_relevant_documents(
159 self,
160 query: str,
161 *,
162 run_manager: CallbackManagerForRetrieverRun,
163 ) -> List[Document]:
164 """Get relevant documents given a user query.
165
166 Args:
(...)
170 Unique union of relevant documents from all generated queries
171 """
--> 172 queries = self.generate_queries(query, run_manager)
173 if self.include_original:
174 queries.append(query)
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\retrievers\multi_query.py:189, in MultiQueryRetriever.generate_queries(self, question, run_manager)
178 def generate_queries(
179 self, question: str, run_manager: CallbackManagerForRetrieverRun
180 ) -> List[str]:
181 """Generate queries based upon user input.
182
183 Args:
(...)
187 List of LLM generated queries that are similar to the user input
188 """
--> 189 response = self.llm_chain(
190 {"question": question}, callbacks=run_manager.get_child()
191 )
192 lines = getattr(response["text"], self.parser_key, [])
193 if self.verbose:
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain_core\_api\deprecation.py:145, in deprecated.<locals>.deprecate.<locals>.warning_emitting_wrapper(*args, **kwargs)
143 warned = True
144 emit_warning()
--> 145 return wrapped(*args, **kwargs)
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\base.py:363, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
331 """Execute the chain.
332
333 Args:
(...)
354 `Chain.output_keys`.
355 """
356 config = {
357 "callbacks": callbacks,
358 "tags": tags,
359 "metadata": metadata,
360 "run_name": run_name,
361 }
--> 363 return self.invoke(
364 inputs,
365 cast(RunnableConfig, {k: v for k, v in config.items() if v is not None}),
366 return_only_outputs=return_only_outputs,
367 include_run_info=include_run_info,
368 )
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\base.py:138, in Chain.invoke(self, input, config, **kwargs)
135 include_run_info = kwargs.get("include_run_info", False)
136 return_only_outputs = kwargs.get("return_only_outputs", False)
--> 138 inputs = self.prep_inputs(input)
139 callback_manager = CallbackManager.configure(
140 callbacks,
141 self.callbacks,
(...)
146 self.metadata,
147 )
148 new_arg_supported = inspect.signature(self._call).parameters.get("run_manager")
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\base.py:475, in Chain.prep_inputs(self, inputs)
473 external_context = self.memory.load_memory_variables(inputs)
474 inputs = dict(inputs, **external_context)
--> 475 self._validate_inputs(inputs)
476 return inputs
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\base.py:264, in Chain._validate_inputs(self, inputs)
262 missing_keys = set(self.input_keys).difference(inputs)
263 if missing_keys:
--> 264 raise ValueError(f"Missing some input keys: {missing_keys}")
ValueError: Missing some input keys: {'inputs'}
### Description
* I am trying to retrieve relevant information from the database based on the given input and want it to generate answers.
* But it raises an error saying the `inputs` key is missing.
* I am using MultiQueryRetriever.
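The traceback pinpoints the mismatch: `MultiQueryRetriever.generate_queries` always invokes the wrapped `llm_chain` with `{"question": question}` (multi_query.py, line 189 above), so a prompt whose only variable is `{inputs}` can never be satisfied. Renaming the template variable to `{question}` (and dropping the extra `inputs=` kwarg) should resolve it. A pure-Python sketch of the key check that fails (helper names here are illustrative, not LangChain's):

```python
import re

def input_keys(template: str) -> set:
    """Variables an f-string-style prompt template expects (simplified)."""
    return set(re.findall(r"{(\w+)}", template))

def validate_inputs(template: str, provided: dict) -> None:
    """Mimics Chain._validate_inputs from the traceback above."""
    missing = input_keys(template) - set(provided)
    if missing:
        raise ValueError(f"Missing some input keys: {missing}")

# Failing setup: the template expects 'inputs', but per the traceback
# MultiQueryRetriever.generate_queries always calls the chain with
# {"question": ...}.
try:
    validate_inputs("inputs = {inputs}", {"question": "..."})
except ValueError as err:
    print(err)  # Missing some input keys: {'inputs'}

# Fix: name the template variable 'question' instead of 'inputs'.
validate_inputs("question: {question}", {"question": "..."})  # no error
```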
### System Info
langchain==0.1.2
langchain-community==0.0.14
langchain-core==0.1.14
langchain-openai==0.0.3
langchainplus-sdk==0.0.20
jupyter-notebook ==7.0.0
Python==3.11.3
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | ValueError: Missing some input keys: {'inputs'} | https://api.github.com/repos/langchain-ai/langchain/issues/16465/comments | 2 | 2024-01-23T17:54:05Z | 2024-01-23T18:35:37Z | https://github.com/langchain-ai/langchain/issues/16465 | 2,096,652,486 | 16,465 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
RunnableParallel is poorly documented right now, and it's one of the most important constructs in LCEL. https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/runnables/base.py#L2453-L2453
### Idea or request for content:
Add documentation to RunnableParallel in this style:
https://github.com/langchain-ai/langchain/blob/cfe95ab08521ddc01e9b65596ca50c9dba2d7677/libs/core/langchain_core/runnables/base.py#L102-L102
https://github.com/langchain-ai/langchain/blob/cfe95ab08521ddc01e9b65596ca50c9dba2d7677/libs/core/langchain_core/runnables/base.py#L1754-L1754
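For whoever writes those docstrings, a one-line mental model may help: RunnableParallel runs a mapping of runnables against the same input and returns a dict of their outputs, keyed by name. A rough pure-Python analogy (simplified and sequential; the real class executes steps concurrently and implements the full Runnable interface):

```python
def run_parallel(steps: dict, value):
    """Apply every named step to the same input; collect outputs by name."""
    return {name: step(value) for name, step in steps.items()}

result = run_parallel(
    {"doubled": lambda x: 2 * x, "squared": lambda x: x * x},
    3,
)
print(result)  # {'doubled': 6, 'squared': 9}
```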
| DOC: Add in code documentation to RunnableParallel | https://api.github.com/repos/langchain-ai/langchain/issues/16462/comments | 1 | 2024-01-23T17:06:17Z | 2024-01-26T15:03:55Z | https://github.com/langchain-ai/langchain/issues/16462 | 2,096,565,174 | 16,462 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
The below code does not print anything when the `test_tool` is run. Variations of this work for `on_llm_start` and `on_llm_end` but do not work for `on_tool_start` or `on_tool_end`
```
from typing import Any, Dict

from langchain import hub
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain.callbacks.base import BaseCallbackHandler
from langchain.tools import tool
from langchain_openai import ChatOpenAI


class MyCustomHandler(BaseCallbackHandler):
    def on_tool_start(
        self, serialized: Dict[str, Any], input_str: str, **kwargs: Any
    ) -> Any:
        """Run when tool starts running."""
        print("on_tool_start (I'm starting!!)")

    def on_tool_end(self, output: str, **kwargs: Any) -> Any:
        """Run when tool ends running."""
        print("on_tool_end (I'm ending!!)")


@tool("test-tool")
def test_tool() -> str:
    """A tool that should always be run"""
    return "This result should always be returned"


llm = ChatOpenAI(
    callbacks=[MyCustomHandler()],
)
tools = [test_tool]
# prompt assumed to be the standard tools-agent prompt (not shown in the original report)
prompt = hub.pull("hwchase17/openai-tools-agent")
agent = create_openai_tools_agent(llm, tools, prompt)
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    return_intermediate_steps=True,
    verbose=True,
    callbacks=[MyCustomHandler()],
)

response = agent_executor.invoke(
    {
        "input": "please tell me the results of my test tool",
    }
)
```
### Description
I am trying to get a simple custom callback running when an agent invokes a tool.
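One thing worth ruling out (an assumption about the cause, not a confirmed diagnosis): callbacks passed to a component's constructor are scoped to that component and are not inherited by child runs such as tool invocations, while callbacks passed at request time propagate down the whole run tree. If that is what's happening here, invoking with `agent_executor.invoke({...}, {"callbacks": [MyCustomHandler()]})` should make the tool events fire. A pure-Python sketch of the scoping rule (all class names hypothetical):

```python
class Handler:
    def __init__(self):
        self.events = []

    def on_event(self, event):
        self.events.append(event)

class Component:
    """Toy run tree: constructor handlers stay local, request-time handlers inherit."""
    def __init__(self, name, children=(), constructor_handlers=()):
        self.name = name
        self.children = children
        self.constructor_handlers = list(constructor_handlers)

    def run(self, inherited_handlers=()):
        for handler in [*self.constructor_handlers, *inherited_handlers]:
            handler.on_event(self.name)
        for child in self.children:
            # Only request-time (inherited) handlers flow to child runs.
            child.run(inherited_handlers)

constructor_handler = Handler()
tool_run = Component("tool-run")
executor = Component("agent-run", children=(tool_run,),
                     constructor_handlers=(constructor_handler,))

executor.run()
print(constructor_handler.events)  # ['agent-run'] -- never sees the tool event

request_handler = Handler()
executor.run(inherited_handlers=(request_handler,))
print(request_handler.events)  # ['agent-run', 'tool-run']
```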
### System Info
Python 3.9.16
langchain==0.1.2
langchain-community==0.0.14
langchain-core==0.1.14
langchain-openai==0.0.3
langchainhub==0.1.14
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async | Custom Callback handler doesn't run for `on_tool_start` or `on_tool_end` | https://api.github.com/repos/langchain-ai/langchain/issues/16461/comments | 2 | 2024-01-23T16:50:04Z | 2024-01-23T17:12:03Z | https://github.com/langchain-ai/langchain/issues/16461 | 2,096,528,091 | 16,461 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
import json

SAMPLE_JSON_OUTPUT_FROM_LLM_1: str = json.dumps({
"name" : "Henry",
"age" : 25
})
INVALID_JSON_STRING = SAMPLE_JSON_OUTPUT_FROM_LLM_1.replace("\"n", "n")
# Value of INVALID_JSON_STRING is '{name": "Henry", "age": 25}'
# Please note that the `name` key is not formatted properly
from langchain.output_parsers.json import SimpleJsonOutputParser
json_op = SimpleJsonOutputParser()
result = json_op.parse(INVALID_JSON_STRING)
# result is {} whereas I was expecting an error/exception
```
### Description
I was trying to test both the positive and negative cases for JsonOutputParser.
In the above code snippet you can see that I removed the " before the first key.
I debugged the langchain code; it seems there is a lot of effort to ignore invalid/troubling characters, and that leads to the original string being parsed as {}.
I would have expected the following behavior:
- By default, no attempt to fix the provided string
- Maybe throw an error if the output is {}
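For comparison, the strict behavior described above can be expressed with the stdlib alone; this is a sketch of the expected behavior, not LangChain's current implementation:

```python
import json

def strict_parse(text: str) -> dict:
    """Parse JSON without any repair attempts; raise on malformed input."""
    parsed = json.loads(text)  # raises json.JSONDecodeError on bad input
    if not isinstance(parsed, dict) or not parsed:
        raise ValueError(f"Expected a non-empty JSON object, got: {parsed!r}")
    return parsed

print(strict_parse('{"name": "Henry", "age": 25}'))  # {'name': 'Henry', 'age': 25}

try:
    strict_parse('{name": "Henry", "age": 25}')  # the malformed string above
except json.JSONDecodeError as err:
    print("rejected:", err.msg)
```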
### System Info
pip freeze | grep langchain
langchain==0.1.2
langchain-community==0.0.14
langchain-core==0.1.14
langchain-openai==0.0.3
langchainhub==0.1.14
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | JsonOutputParser does not throw exception on invalid json | https://api.github.com/repos/langchain-ai/langchain/issues/16458/comments | 6 | 2024-01-23T16:02:26Z | 2024-04-30T16:28:05Z | https://github.com/langchain-ai/langchain/issues/16458 | 2,096,421,313 | 16,458 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
In my ReAct agent, I'm trying to call a tool that is defined as follows:
```python
@tool(args_schema=SliderInput)
def slider(object_name, value) -> str:
return "Ok"
```
and the corresponding pydantic model:
```python
class SliderInput(BaseModel):
object_name: str = Field(..., description="The name of the slider object")
value: int = Field(..., description="The value of the slider object to set")
```
I get the following error when the tool is being called:
```python
object_setattr(__pydantic_self__, '__dict__', values)
ValidationError: 1 validation error for SliderInput
value
field required (type=value_error.missing)
```
I started with this function definition:
```python
@tool
def slider(object_name: str, value: int) -> str:
return "Ok"
```
and received the same error.
Additionally, I've also tried this one:
```python
@tool(args_schema=SliderInput)
def slider(object_name: str, value: int) -> str:
return "Ok"
```
without success.
Why is my code failing?
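One likely cause (an inference; the agent construction isn't shown): classic ReAct-style agents hand the tool a single Action Input string, so a two-field `args_schema` never receives its `value` field, and pydantic reports it missing. Multi-input tools generally need a structured agent (e.g. structured-chat or the OpenAI tools agent). With a plain ReAct agent, one workaround is a single-string tool that parses its own input; the parsing half of that workaround, as a pure-Python sketch:

```python
def parse_slider_input(tool_input: str) -> tuple:
    """Split '<object_name>, <value>' out of a single Action Input string."""
    object_name, _, raw_value = tool_input.partition(",")
    object_name, raw_value = object_name.strip(), raw_value.strip()
    if not raw_value:
        raise ValueError("expected input like '<object_name>, <value>'")
    return object_name, int(raw_value)

# The tool itself would then look roughly like:
#
# @tool
# def slider(tool_input: str) -> str:
#     """Set a slider. Input format: '<object_name>, <value>'."""
#     object_name, value = parse_slider_input(tool_input)
#     return "Ok"

print(parse_slider_input("volume, 42"))  # ('volume', 42)
```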
### Description
I'm trying to use the langchain library, **version 0.1.1**.
### System Info
langchain==0.1.1
langchain-cli==0.0.20
langchain-community==0.0.13
langchain-core==0.1.12
langchain-experimental==0.0.49
langchain-openai==0.0.2.post1
langchainhub==0.1.14
PRETTY_NAME="Ubuntu 22.04.3 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.3 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | ValidationError field required (type=value_error.missing) while using react agent | https://api.github.com/repos/langchain-ai/langchain/issues/16456/comments | 3 | 2024-01-23T15:51:48Z | 2024-03-03T11:28:30Z | https://github.com/langchain-ai/langchain/issues/16456 | 2,096,399,198 | 16,456 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
The correct type for group_id is Integer.
### Description
The current implementation of SQLRecordManager crashes with databases other than SQLite because of a schema bug; SQLite is tolerant of the mismatch.
The correct type for group_id is Integer.
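To illustrate why SQLite masks the schema bug: SQLite uses type affinity rather than strict column typing, so a value that does not match the declared column type is stored anyway, whereas stricter backends such as PostgreSQL reject it. A stdlib-only demonstration (the table here is a stand-in, not the real upsertion schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE upsertion_record (group_id INTEGER)")

# SQLite happily stores a TEXT value in an INTEGER column...
conn.execute("INSERT INTO upsertion_record VALUES (?)", ("my-group",))
(stored,) = conn.execute("SELECT group_id FROM upsertion_record").fetchone()
print(repr(stored))  # 'my-group' -- kept as text, no error

# ...which is exactly the kind of mismatch a stricter backend would refuse.
```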
### System Info
langchain-community == 0.0.14
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | A bad schema in SQLRecordManager generates an exception when def index() with cleanup="incremental". | https://api.github.com/repos/langchain-ai/langchain/issues/16451/comments | 2 | 2024-01-23T13:43:07Z | 2024-04-30T16:22:03Z | https://github.com/langchain-ai/langchain/issues/16451 | 2,096,124,684 | 16,451 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
from langchain.chains import LLMChain
from langchain.memory import ConversationBufferMemory
from langchain.prompts import (
ChatPromptTemplate,
HumanMessagePromptTemplate,
MessagesPlaceholder,
SystemMessagePromptTemplate,
)
prompt = ChatPromptTemplate(
messages=[
SystemMessagePromptTemplate.from_template(
"You are a nice chatbot having a conversation with a human."
),
# The `variable_name` here is what must align with memory
MessagesPlaceholder(variable_name="chat_history"),
HumanMessagePromptTemplate.from_template("{question}"),
]
)
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
memory.chat_memory.add_user_message("hi, i am Kevin")
memory.chat_memory.add_ai_message("hi, i am nan")
memory.chat_memory.add_user_message("i am late for school")
memory.chat_memory.add_ai_message("oh, sound bad, i hope you can be happier")
memory.chat_memory.add_user_message("in fact, because of mom's mistake, she forgot something always")
memory.chat_memory.add_ai_message("i see, it does not matter, little thing")
memory.chat_memory.add_user_message("ok, let's chat something ")
memory.chat_memory.add_ai_message("sure, i like chat too")
conversation = LLMChain(llm=llm, prompt=prompt, verbose=True, memory=memory)
conversation({"question": "can you tell me why i was late for school"})
```
### Description
The response is: "Sure, I believe that you might have missed the time to leave or forgotten something important during the preparation process."
I hoped the response would be: "because of mom's mistake, she forgot something always"
--------------------------------------------------------------
I wonder where this goes wrong?
Thanks
### System Info
langchain 0.0.354
langchain-community 0.0.10
langchain-core 0.1.8
langdetect 1.0.9
langsmith 0.0.78
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | ConversationBufferMemory does not work | https://api.github.com/repos/langchain-ai/langchain/issues/16448/comments | 9 | 2024-01-23T12:14:31Z | 2024-01-24T12:49:58Z | https://github.com/langchain-ai/langchain/issues/16448 | 2,095,956,356 | 16,448 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I am using BM25 retriever from langchain.
After building the retriever from documents, how do I get score for relevant document for a query?
```python
retriever = BM25Retriever.from_documents(...)
result = retriever.get_relevant_documents("foo")
```
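Until scores are exposed, they can be computed with the underlying rank_bm25 package (`BM25Okapi(corpus).get_scores(query_tokens)`), which is what LangChain's BM25Retriever wraps. For intuition about what those scores mean, a minimal stdlib-only sketch of the Okapi BM25 formula (simplified, whitespace tokenization):

```python
import math

def bm25_scores(corpus, query, k1=1.5, b=0.75):
    """Score each document in `corpus` against `query` (Okapi BM25)."""
    docs = [doc.split() for doc in corpus]
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n
    scores = []
    for doc in docs:
        score = 0.0
        for term in query.split():
            df = sum(1 for d in docs if term in d)       # document frequency
            idf = math.log((n - df + 0.5) / (df + 0.5) + 1)
            tf = doc.count(term)                         # term frequency
            score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc) / avgdl))
        scores.append(score)
    return scores

corpus = ["foo bar baz", "bar baz qux", "foo foo bar"]
scores = bm25_scores(corpus, "foo")
best = max(range(len(corpus)), key=scores.__getitem__)
print(best)  # 2 -- the document where "foo" occurs most
```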
### Motivation
The documentation is not good, with details of parameters missing.
The actual BM25 python package has all the functionality, including tokenizer options.
### Your contribution
Could try | BM25 langchain retriever should have similarity score | https://api.github.com/repos/langchain-ai/langchain/issues/16445/comments | 6 | 2024-01-23T11:47:53Z | 2024-05-13T16:09:56Z | https://github.com/langchain-ai/langchain/issues/16445 | 2,095,906,630 | 16,445 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```
embeddings = HuggingFaceInferenceAPIEmbeddings(
api_key=inference_api_key,
api_url=api_url,
model_name="bge-large-en-v1.5"
)
pinecone.init(api_key=os.getenv("PINECONE_API_KEY"), environment=environment)
loader = PyPDFDirectoryLoader("data")
docs = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=200)
chunks = text_splitter.split_documents(docs)
vectordb = Pinecone.from_documents(chunks, embeddings, index_name=index_name, namespace=namespace)
```
This code snippet gets a 413 (payload too large) response code from huggingface.py:
```
response = requests.post(
self._api_url,
headers=self._headers,
json={
"inputs": texts,
"options": {"wait_for_model": True, "use_cache": True},
},
)
return response.json()
```
We should support a batch size here, like the local model embeddings (e.g. SentenceTransformer) do.
### Description
I am trying to use Pinecone with Hugging Face Inference for the embedding model. My total number of chunks is 420, and it tries to process them all in one request.
Also, `embedding_chunk_size` cannot be passed through the `Pinecone.from_documents()` method.
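A possible shape for the fix (a sketch only; the helper names are hypothetical, and the real change would live inside `HuggingFaceInferenceAPIEmbeddings.embed_documents`): split the texts into batches and POST each batch separately, keeping every request under the payload limit:

```python
def batched(texts, batch_size):
    """Yield successive fixed-size batches from a list of texts."""
    for start in range(0, len(texts), batch_size):
        yield texts[start:start + batch_size]

def embed_in_batches(texts, embed_batch, batch_size=32):
    """Embed `texts` by calling `embed_batch` (one HTTP POST) per batch."""
    embeddings = []
    for batch in batched(texts, batch_size):
        embeddings.extend(embed_batch(batch))
    return embeddings

# Dummy "API call" standing in for the requests.post shown above.
fake_api = lambda batch: [[float(len(t))] for t in batch]
vectors = embed_in_batches([f"chunk-{i}" for i in range(420)], fake_api, batch_size=32)
print(len(vectors))  # 420 -- same output, but sent as 14 requests of <= 32 texts
```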
### System Info
```
langchain==0.1.2
langchain-cli==0.0.20
langchain-community==0.0.14
langchain-core==0.1.14
langchainhub==0.1.14
```
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | HuggingFaceInferenceAPIEmbeddings getting 413 request code because of not batching mechanism like SentenceTransformer | https://api.github.com/repos/langchain-ai/langchain/issues/16443/comments | 1 | 2024-01-23T11:14:55Z | 2024-04-30T16:22:03Z | https://github.com/langchain-ai/langchain/issues/16443 | 2,095,842,833 | 16,443 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
The following code can be used to reproduce the problem:
```
from langchain_community.embeddings import LocalAIEmbeddings
embeddings = LocalAIEmbeddings(
openai_api_base="http://localhost:8080"
)
print(embeddings.embed_query("test"))
```
Error: `AttributeError: module 'openai' has no attribute 'error'`
```
Traceback (most recent call last):
File "/home/slug/udemy/010/main.py", line 22, in <module>
print(embeddings.embed_query("test"))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/slug/miniconda3/envs/gpt/lib/python3.11/site-packages/langchain_community/embeddings/localai.py", line 332, in embed_query
embedding = self._embedding_func(text, engine=self.deployment)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/slug/miniconda3/envs/gpt/lib/python3.11/site-packages/langchain_community/embeddings/localai.py", line 267, in _embedding_func
return embed_with_retry(
^^^^^^^^^^^^^^^^^
File "/home/slug/miniconda3/envs/gpt/lib/python3.11/site-packages/langchain_community/embeddings/localai.py", line 98, in embed_with_retry
retry_decorator = _create_retry_decorator(embeddings)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/slug/miniconda3/envs/gpt/lib/python3.11/site-packages/langchain_community/embeddings/localai.py", line 45, in _create_retry_decorator
retry_if_exception_type(openai.error.Timeout)
^^^^^^^^^^^^
AttributeError: module 'openai' has no attribute 'error'
```
### Description
I am trying to use langchain to invoke LocalAI's embedding endpoint to generate embeddings.
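The root cause is that the `openai.error` module was removed in openai 1.x, while the retry decorator in localai.py still references the 0.x names; pinning `openai<1` is the immediate workaround. A version-agnostic lookup could plausibly look like this (a sketch, not the actual patch; demonstrated with stand-in modules so it runs anywhere):

```python
from types import SimpleNamespace

def retryable_errors(openai_module):
    """Pick retry-worthy exception types for the installed openai version."""
    legacy = getattr(openai_module, "error", None)
    if legacy is not None:  # openai < 1.0 kept them under openai.error
        return (legacy.Timeout, legacy.APIError, legacy.APIConnectionError)
    # openai >= 1.0 exposes them as top-level classes instead
    return (
        openai_module.APITimeoutError,
        openai_module.APIError,
        openai_module.APIConnectionError,
    )

# Stand-ins for the two library layouts:
v0 = SimpleNamespace(error=SimpleNamespace(
    Timeout=TimeoutError, APIError=Exception, APIConnectionError=ConnectionError))
v1 = SimpleNamespace(
    APITimeoutError=TimeoutError, APIError=Exception, APIConnectionError=ConnectionError)

print(retryable_errors(v0) == retryable_errors(v1))  # True
```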
### System Info
langchain-community==0.0.13
openai==1.9.0
python==3.11.7
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | LocalAIEmbeddings not compatible with openai > 1.6.x | https://api.github.com/repos/langchain-ai/langchain/issues/16442/comments | 1 | 2024-01-23T10:56:53Z | 2024-04-30T16:22:00Z | https://github.com/langchain-ai/langchain/issues/16442 | 2,095,809,240 | 16,442 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Add an additional try block to the function `merge_dicts` in `langchain_core.utils._merge` to make the function more robust:
original:
```python
def merge_dicts(left: Dict[str, Any], right: Dict[str, Any]) -> Dict[str, Any]:
merged = left.copy()
for k, v in right.items():
if k not in merged:
merged[k] = v
...
else:
raise TypeError(
f"Additional kwargs key {k} already exists in left dict and value has "
f"unsupported type {type(merged[k])}."
)
return merged
```
new:
```python
def merge_dicts(left: Dict[str, Any], right: Dict[str, Any]) -> Dict[str, Any]:
merged = left.copy()
for k, v in right.items():
if k not in merged:
merged[k] = v
...
else:
try:
merged[k] = str(v)
except Exception as e:
raise TypeError(
f"Additional kwargs key {k} already exists in left dict and value has "
f"unsupported type {type(merged[k])}."
)
return merged
```
### Motivation
Some functions have malfunctioned since the introduction of `merge_dicts` in the langchain-core 0.1.13 release; we think this function should be more robust for more generic scenarios.
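As a standalone illustration of the proposed fallback (a toy, deliberately independent of the elided branches of the real `merge_dicts`): when both sides hold an unmergeable conflicting value, coerce to `str` instead of raising:

```python
def merge_with_fallback(left: dict, right: dict) -> dict:
    """Toy merge: concatenate strings, keep new keys, stringify conflicts."""
    merged = dict(left)
    for k, v in right.items():
        if k not in merged:
            merged[k] = v
        elif isinstance(merged[k], str) and isinstance(v, str):
            merged[k] += v  # streaming chunks concatenate
        else:
            # proposed fallback: coerce instead of raising TypeError
            merged[k] = str(v)
    return merged

print(merge_with_fallback({"a": "he"}, {"a": "llo"}))  # {'a': 'hello'}
print(merge_with_fallback({"a": 1}, {"a": 2}))         # {'a': '2'}
```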
### Your contribution
as above | ADD A TRY BLOCK TO `merge_dicts` | https://api.github.com/repos/langchain-ai/langchain/issues/16441/comments | 4 | 2024-01-23T10:53:48Z | 2024-04-11T02:29:03Z | https://github.com/langchain-ai/langchain/issues/16441 | 2,095,801,586 | 16,441 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
# cat test_issue.py
from langchain.schema import SystemMessage, HumanMessage
from langchain_openai import AzureChatOpenAI
# pip install -U langchain-community
from langchain_community.callbacks import get_openai_callback
llm = AzureChatOpenAI(
openai_api_version="2023-12-01-preview",
azure_deployment="gpt-35-turbo",
model_name="gpt3.5-turbo"
)
messages = [
SystemMessage(
content=(
"You are ExpertGPT, an AGI system capable of "
"anything except answering questions about cheese. "
"It turns out that AGI does not fathom cheese as a "
"concept, the reason for this is a mystery."
)
),
HumanMessage(content="Tell me about parmigiano, the Italian cheese!")
]
with get_openai_callback() as cb:
res = llm(messages)
print(res.content)
# print the total tokens used
print(cb.total_tokens)
```
### Description
I still have problems with this simple LLM completion request, which ran correctly a few months ago.
After updating the langchain modules I get the deprecation warning:
$ py test_issue.py
**/home/giorgio/.local/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: The function `__call__` was deprecated in LangChain 0.1.7 and will be removed in 0.2.0. Use invoke instead.
warn_deprecated(**
I'm sorry, but as an AGI system, I cannot answer questions about cheese, including parmigiano. It seems that cheese is a concept that AGI systems do not comprehend. Is there anything else I can help you with?
114
---
The first question is:
why did I have to import the langchain-community module, as suggested by an initial run-time suggestion?!
I didn't find any documentation.
Final question:
Why the deprecation error?
BTW, see also related issue: https://github.com/langchain-ai/langchain/issues/13785
Thanks
giorgio
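To the two questions: (1) in the 0.1 reorganization, integrations and callback utilities were split out of the core `langchain` package into `langchain-community`, which is why `get_openai_callback` now imports from there; (2) the warning is expected -- `__call__` on chat models was deprecated in 0.1.7 in favor of `.invoke()`, so `res = llm(messages)` becomes `res = llm.invoke(messages)`. The mechanism in miniature (toy class, not the real API):

```python
import warnings

class ChatModel:
    """Toy model showing the 0.1.x deprecation pattern: __call__ wraps invoke."""
    def invoke(self, messages):
        return f"echo: {messages[-1]}"

    def __call__(self, messages):
        warnings.warn(
            "The function `__call__` was deprecated in LangChain 0.1.7 "
            "and will be removed in 0.2.0. Use invoke instead.",
            DeprecationWarning,
        )
        return self.invoke(messages)

llm = ChatModel()
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    llm(["hi"])          # old style -> emits the warning
    llm.invoke(["hi"])   # new style -> silent
print(len(caught))  # 1
```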
### System Info
$ python3 --version
Python 3.11.7
$ pip show openai | grep Version
Version: 1.9.0
$ pip show langchain | grep Version
Version: 0.1.2
$ pip show langchain-openai | grep Version
Version: 0.0.3
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async | Azure OpenAI deprecation LangChainDeprecationWarning (the function `__call__`) ? | https://api.github.com/repos/langchain-ai/langchain/issues/16438/comments | 4 | 2024-01-23T10:29:30Z | 2024-05-13T16:09:27Z | https://github.com/langchain-ai/langchain/issues/16438 | 2,095,755,355 | 16,438 |
[
"langchain-ai",
"langchain"
] | ### Feature request
CSVAgent currently uses the same CSV file both as the schema used to generate the query and as the data used to execute it and generate results. The proposal is to separate these two files: a smaller one for generation and a larger one for execution.
### Motivation
The size of the CSV impacts generation of the query. Hence we want to provide smaller representative data for generation of the query. Once the query is generated we want to execute it directly on the original CSV, since no LLM is required at this time, which can be much larger than the representative one used for query generation.
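A rough sketch of the proposed split, with the agent and LLM wiring omitted (pure stdlib; function names are hypothetical): hand the LLM only a small representative slice for query generation, then execute the generated filter over the full file:

```python
import csv
import io
import itertools

def schema_sample(csv_text: str, n_rows: int = 5):
    """Small representative slice (header + n rows) for query generation."""
    reader = csv.reader(io.StringIO(csv_text))
    return list(itertools.islice(reader, n_rows + 1))

def execute(csv_text: str, predicate):
    """Run an already-generated query over the full file; no LLM involved."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [row for row in rows if predicate(row)]

full = "name,score\n" + "\n".join(f"p{i},{i}" for i in range(1000))
print(len(schema_sample(full)))                              # 6 lines for the LLM
print(len(execute(full, lambda r: int(r["score"]) > 995)))   # 4 matching rows
```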
### Your contribution
I am not sure about this at this time. If anyone from the community can provide guidance, I will try to take a look. | CSVAgent with different CSV files for schema and data | https://api.github.com/repos/langchain-ai/langchain/issues/16434/comments | 3 | 2024-01-23T07:38:46Z | 2024-04-30T16:13:23Z | https://github.com/langchain-ai/langchain/issues/16434 | 2,095,439,843 | 16,434 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
I set up a chain and parser like the following:
```
class Ticket(BaseModel):
reply: str
reply_explanation: str
ticket_parser = PydanticOutputParser(pydantic_object=Ticket)
partial_ticket_prompt = ticket_prompt.partial(
reply_schema=ticket_parser.get_format_instructions(), example=example_data.json()
)
ticket_chain = LLMChain(llm=llm, prompt=partial_ticket_prompt)
```
and then I use it in the following way:
```
async def generate_ai_suggestions(ticket, similar_docs, macro=""):
ticket = await ticket_chain.apredict(ticket=ticket, similar_docs=similar_docs, macro=macro,configparser={})
return await ticket_parser.aparse(ticket)
```
### Description
My problem is that the tracing is not working for me with this convention (it works for some basic examples with "invoke"). I tried multiple ways, including
@traceable(run_type="chain")
Is there any solution?
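Before anything else, it may be worth double-checking (generic advice, not a confirmed diagnosis) that tracing is enabled through environment variables set before the chain runs; legacy entry points like `apredict` only reach LangSmith when these are present:

```python
import os

# The variables LangSmith tracing looks for (values below are placeholders):
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "ls__your-key-here"
os.environ["LANGCHAIN_PROJECT"] = "my-project"  # optional; defaults to "default"
```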
### System Info
langchain==0.0.350
langchain-community==0.0.3
langchain-core==0.1.1
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [X] Async | Tracing in langsmith not working for LLMChan | https://api.github.com/repos/langchain-ai/langchain/issues/16429/comments | 4 | 2024-01-23T07:20:28Z | 2024-06-19T16:06:48Z | https://github.com/langchain-ai/langchain/issues/16429 | 2,095,414,309 | 16,429 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
def custom_length_function(text):
    return len(tokenizer.encode(text))

def split_data_token(doc, chunk_size=512, overlap=0):
    text_splitter = CharacterTextSplitter(
        # separator="\n",
        separator=" ",
        chunk_size=chunk_size,
        chunk_overlap=overlap,
        # length_function=custom_length_function,
        length_function=len,
        is_separator_regex=False,
    )
    return text_splitter.split_documents(doc)
```
### Description
It works well when using **"RecursiveCharacterTextSplitter.from_huggingface_tokenizer"**
because I can choose separators like ["\n\n", "\n", " ", ""].
But **"CharacterTextSplitter.from_huggingface_tokenizer"** didn't work, because it only splits on "\n",
so I tried **"CharacterTextSplitter"** to set the separator like below:
```python
text_splitter = CharacterTextSplitter(
    separator="\n",
    is_separator_regex=False)
text_splitter.from_huggingface_tokenizer(
    tokenizer=tokenizer,
    chunk_size=chunk_size,
    chunk_overlap=overlap,
)
```
but it didn't work, and custom_length_function also didn't work properly:
```python
def custom_length_function(text):
    return len(tokenizer.encode(text))

def split_data_token(doc, chunk_size=512, overlap=0):
    text_splitter = CharacterTextSplitter(
        # separator="\n",
        separator=" ",
        chunk_size=chunk_size,
        chunk_overlap=overlap,
        # length_function=custom_length_function,
        length_function=len,
        is_separator_regex=False,
    )
    return text_splitter.split_documents(doc)
```
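A likely explanation for the snippets above not working (inferred from the code shown, not verified against your run): `from_huggingface_tokenizer` is a classmethod that builds and returns a new splitter, so calling it on an existing instance and discarding the return value changes nothing; the separator would need to be passed into that same call, e.g. `CharacterTextSplitter.from_huggingface_tokenizer(tokenizer, chunk_size=512, chunk_overlap=0, separator=" ")`, keeping the returned object. The pitfall in miniature:

```python
class Splitter:
    def __init__(self, separator="\n\n", length_function=len):
        self.separator = separator
        self.length_function = length_function

    @classmethod
    def from_tokenizer(cls, tokenizer, **kwargs):
        # Returns a NEW instance; it does not mutate an existing one.
        return cls(length_function=lambda t: len(tokenizer(t)), **kwargs)

plain = Splitter(separator=" ")
plain.from_tokenizer(str.split)          # result discarded -> no effect
print(plain.length_function is len)      # True: still counting characters

token_based = Splitter.from_tokenizer(str.split, separator=" ")
print(token_based.length_function("a b c"))  # 3 tokens, not 5 characters
```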
### System Info
.
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | I can't split document by token (CharacterTextSplitter.from_huggingface_tokenizer) | https://api.github.com/repos/langchain-ai/langchain/issues/16427/comments | 1 | 2024-01-23T06:42:11Z | 2024-04-30T16:13:27Z | https://github.com/langchain-ai/langchain/issues/16427 | 2,095,349,082 | 16,427 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Using your simple example to parse astream_events: the issue is that if a tool has return_direct=True, the output is not shown -- you can only intercept it in "on_tool_end", but the event has no indication that the output is return_direct.
### Motivation
This will help show those outputs as regular text in a client chat app, rather than as a tool call. We have tools like "askUserForMoreInformation" which seem to help during the conversation flow, so that's an example of one that simply returns the output to the user.
### Your contribution
Sure, but I'm not sure will have the time to dig into it, hoping someone more familiar can address. | Add indication of return_direct tools in asteam_events | https://api.github.com/repos/langchain-ai/langchain/issues/16425/comments | 3 | 2024-01-23T05:00:48Z | 2024-04-30T16:26:39Z | https://github.com/langchain-ai/langchain/issues/16425 | 2,095,245,386 | 16,425 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
Below is a chain that uses JsonOutputParser. The final result can be streamed with OpenAI. When using Anthropic, it is only available once the full response finishes streaming.
Likely there's a systematic difference in how JSON is yielded during streaming between OpenAI and Anthropic, and our existing JSON parser only handles the convention from OpenAI.
A fix would require generating some chunks of JSON using Anthropic and then extending our JsonOutputParser to be able to partially parse them.
```python
from typing import List
from langchain.chat_models import ChatAnthropic
from langchain.prompts import PromptTemplate
from langchain_core.output_parsers import JsonOutputParser
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI
# Define your desired data structure.
class Joke(BaseModel):
setup: str = Field(description="question to set up a joke.")
punchline: str = Field(description="answer to resolve the joke")
rating: int = Field(description='rating from 0-9 about how good the joke is')
# Set up a parser + inject instructions into the prompt template.
parser = JsonOutputParser(pydantic_object=Joke)
prompt = PromptTemplate.from_template(
    template="Answer the user query using a long joke.\n{format_instructions}\n{query}\n",
).partial(format_instructions=parser.get_format_instructions())
model = ChatAnthropic(temperature=0)
# model = ChatOpenAI(temperature=0)
chain = prompt | model | parser
async for s in chain.astream({"query": "tell me a joke about space"}):
print(s)
```
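The "partially parse" part can be sketched in plain Python: close any still-open strings/braces/brackets and retry `json.loads`. This is a toy sketch of the idea, not LangChain's actual implementation:

```python
import json

def parse_partial_json(s):
    """Best-effort parse of a truncated JSON object by appending the closing
    quotes/braces/brackets that are still open."""
    stack = []          # closers we still owe, innermost last
    in_string = False
    escaped = False
    for ch in s:
        if in_string:
            if escaped:
                escaped = False
            elif ch == "\\":
                escaped = True
            elif ch == '"':
                in_string = False
        else:
            if ch == '"':
                in_string = True
            elif ch in "{[":
                stack.append("}" if ch == "{" else "]")
            elif ch in "}]" and stack:
                stack.pop()
    closing = ('"' if in_string else "") + "".join(reversed(stack))
    try:
        return json.loads(s + closing)
    except json.JSONDecodeError:
        return None

print(parse_partial_json('{"setup": "Why did the astronaut'))
# {'setup': 'Why did the astronaut'}
```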
---
Potentially hard task for folks without a background in CS (i.e., if you know what a recursive descent parser is, feel free to pick this up :)) | JsonOutputParser Streaming works with ChatOpenAI but not ChatAnthropic | https://api.github.com/repos/langchain-ai/langchain/issues/16423/comments | 2 | 2024-01-23T04:04:13Z | 2024-02-05T21:32:17Z | https://github.com/langchain-ai/langchain/issues/16423 | 2,095,197,510 | 16,423
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```
from langchain_community.llms import Tongyi
import os
from langchain_community.document_loaders import WebBaseLoader
from langchain.embeddings import DashScopeEmbeddings
from langchain_community.vectorstores import FAISS
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.documents import Document
from langchain.chains import create_retrieval_chain
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain.tools.retriever import create_retriever_tool
# Set up the LLM
os.environ["DASHSCOPE_API_KEY"] = "my_api_key"
llm = Tongyi(model_name="qwen-turbo")
loader = WebBaseLoader("https://docs.smith.langchain.com/overview")
docs = loader.load()
# Map documents into the vector space
embeddings = DashScopeEmbeddings(
model="text-embedding-v1", dashscope_api_key="my_api_key"
)
# Split the documents
text_splitter = RecursiveCharacterTextSplitter()
documents = text_splitter.split_documents(docs)
vector = FAISS.from_documents(documents, embeddings)
retriever = vector.as_retriever()
retriever_tool = create_retriever_tool(
retriever,
"langsmith_search",
"Search for information about LangSmith. For any questions about LangSmith, you must use this tool!",
)
# with api
os.environ["TAVILY_API_KEY"] = "my_api_key"
search = TavilySearchResults()
tools = [retriever_tool, search]
from langchain import hub
from langchain.agents import AgentExecutor, create_xml_agent
from langchain.agents import create_react_agent, create_json_chat_agent
prompt = hub.pull("hwchase17/react")
# Initialize the agent
agent = create_react_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.invoke({"input": "what is LangChain?"})
```
inputs:
{'input': 'what is LangChain?'}
> Entering new AgentExecutor chain...
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[2], line 69
     62 agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
     64 # chat_history = [HumanMessage(content="Can LangSmith help test my LLM applications?"), AIMessage(content="Yes!")]
     65 # agent_executor.invoke({
     66 #     "chat_history": chat_history,
     67 #     "input": "Tell me how"
     68 # })
---> 69 agent_executor.invoke({"input": "what is LangChain?"})
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/chains/base.py:164, in Chain.invoke(self, input, config, **kwargs)
    161 except BaseException as e:
    163     run_manager.on_chain_error(e)
--> 164     raise e
    165 run_manager.on_chain_end(outputs)
    166 final_outputs: Dict[str, Any] = self.prep_outputs(
    167     inputs, outputs, return_only_outputs
    168 )
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/chains/base.py:157, in Chain.invoke(self, input, config, **kwargs)
    150 run_manager = callback_manager.on_chain_start(
    151     dumpd(self),
    152     inputs,
    153     name=run_name,
...
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain_core/utils/_merge.py:42
     42     f"unsupported type {type(merged[k])}."
     43 )
     44 return merged
TypeError: Additional kwargs key output_tokens already exists in left dict and value has unsupported type <class 'int'>.
### Description
* I'm trying to use LangChain to create an agent with `create_react_agent`, `create_json_chat_agent`, or `create_xml_agent` using the LLM `qwen-max-longcontex`. I copied most of the code from https://python.langchain.com/docs/get_started/quickstart and modified the agent-creation part, because I am not using an OpenAI model. I've been following this document step by step, and everything was running smoothly until I reached this step (creating the agent) and started encountering errors.
* I expected to see a result like in the guide doc.
* Instead, it encounters the errors above.
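The failing branch can be reproduced with plain dicts. Below is a simplified sketch of the merge logic that `langchain_core.utils._merge` appears to apply to streamed `additional_kwargs` (not the exact library source). Streamed qwen chunks carry integer token counts, which fall through to the unsupported-type branch:

```python
def merge_dicts(left, right):
    """Simplified sketch of the chunk-merging logic in the traceback above."""
    merged = dict(left)
    for k, v in right.items():
        if k not in merged or merged[k] is None:
            merged[k] = v
        elif isinstance(merged[k], str):
            merged[k] += v          # streamed strings are concatenated
        elif isinstance(merged[k], dict):
            merged[k] = merge_dicts(merged[k], v)
        else:
            raise TypeError(
                f"Additional kwargs key {k} already exists in left dict and "
                f"value has unsupported type {type(merged[k])}."
            )
    return merged

# Merging two chunks that both carry an integer token count raises:
try:
    merge_dicts({"output_tokens": 3}, {"output_tokens": 7})
except TypeError as e:
    print(e)
```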
### System Info
langchain==0.1.2
langchain-community==0.0.14
langchain-core==0.1.14
langchain-experimental==0.0.49
langchainhub==0.1.14
macOS 12.6.5
python 3.11.3
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | use qwen to create agent TypeError: Additional kwargs key output_tokens already exists in left dict and value has unsupported type <class 'int'>. | https://api.github.com/repos/langchain-ai/langchain/issues/16422/comments | 4 | 2024-01-23T03:56:38Z | 2024-02-29T03:25:22Z | https://github.com/langchain-ai/langchain/issues/16422 | 2,095,191,993 | 16,422 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
from langchain_community.retrievers import (
GoogleVertexAISearchRetriever,
GoogleCloudEnterpriseSearchRetriever
)
import time
PROJECT_ID = "my_project_id"
SEARCH_ENGINE_ID = "my_datastore_id"
LOCATION_ID = "global"
retriever = GoogleVertexAISearchRetriever(
project_id=PROJECT_ID,
data_store_id=SEARCH_ENGINE_ID,
location_id=LOCATION_ID,
max_documents=3,
engine_data_type=1,
)
while 1:
message = input("Type: ")
print("input message: " + message)
result = retriever.get_relevant_documents(message)
for doc in result:
print(doc)
time.sleep(1) # Add a delay between each request
```
### Description
I tried to use GoogleVertexAISearchRetriever for RAG.
However, the output from `retriever.get_relevant_documents(message)` and the response shown in the GCP console's Vertex AI app preview are different.
In the Vertex AI console I could see the ideal result with the 5 most relevant documents, but I couldn't get any response with the LangChain script.
### System Info
langchain==0.1.2
langchain-community==0.0.14
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Different output from GoogleVertexAISearchRetriever | https://api.github.com/repos/langchain-ai/langchain/issues/16416/comments | 1 | 2024-01-23T00:11:20Z | 2024-04-30T16:13:23Z | https://github.com/langchain-ai/langchain/issues/16416 | 2,095,003,117 | 16,416 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
I adapted the example for MultiQueryRetrieval to use a local ollama server with llama2 as LLM.
when running it I get an Value error: Expected each embedding in the embeddings to be a list, got [None].
This is my Code:
```python
# Build a sample vectorDB
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import WebBaseLoader
from langchain_community.vectorstores import Chroma
from langchain_community.llms import Ollama
from langchain_community.embeddings import OllamaEmbeddings
from langchain.retrievers.multi_query import MultiQueryRetriever
import logging
model_name = "llama2"
ollama = Ollama(base_url='http://localhost:11434',
model=model_name)
oembed = OllamaEmbeddings(base_url="http://localhost:11434", model=model_name)
# Load blog post
loader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")
data = loader.load()
# Split
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)
splits = text_splitter.split_documents(data)
# VectorDB
vectordb = Chroma.from_documents(documents=splits, embedding=oembed)
question = "What are the approaches to Task Decomposition?"
llm =ollama
retriever_from_llm = MultiQueryRetriever.from_llm(
retriever=vectordb.as_retriever(), llm=llm
)
# Set logging for the queries
logging.basicConfig()
logging.getLogger("langchain.retrievers.multi_query").setLevel(logging.INFO)
unique_docs = retriever_from_llm.get_relevant_documents(query=question)
print(len(unique_docs))
```
Does anyone have an idea what to do to fix this?
### Description
This is the error I get:
Traceback (most recent call last):
File "/home/lukas/code/content-assist/test-multi.py", line 41, in <module>
unique_docs = retriever_from_llm.get_relevant_documents(query=question)
File "/home/lukas/.local/lib/python3.10/site-packages/langchain_core/retrievers.py", line 223, in get_relevant_documents
raise e
File "/home/lukas/.local/lib/python3.10/site-packages/langchain_core/retrievers.py", line 216, in get_relevant_documents
result = self._get_relevant_documents(
File "/home/lukas/.local/lib/python3.10/site-packages/langchain/retrievers/multi_query.py", line 175, in _get_relevant_documents
documents = self.retrieve_documents(queries, run_manager)
File "/home/lukas/.local/lib/python3.10/site-packages/langchain/retrievers/multi_query.py", line 210, in retrieve_documents
docs = self.retriever.get_relevant_documents(
File "/home/lukas/.local/lib/python3.10/site-packages/langchain_core/retrievers.py", line 223, in get_relevant_documents
raise e
File "/home/lukas/.local/lib/python3.10/site-packages/langchain_core/retrievers.py", line 216, in get_relevant_documents
result = self._get_relevant_documents(
File "/home/lukas/.local/lib/python3.10/site-packages/langchain_core/vectorstores.py", line 654, in _get_relevant_documents
docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
File "/home/lukas/.local/lib/python3.10/site-packages/langchain_community/vectorstores/chroma.py", line 348, in similarity_search
docs_and_scores = self.similarity_search_with_score(
File "/home/lukas/.local/lib/python3.10/site-packages/langchain_community/vectorstores/chroma.py", line 438, in similarity_search_with_score
results = self.__query_collection(
File "/home/lukas/.local/lib/python3.10/site-packages/langchain_core/utils/utils.py", line 35, in wrapper
return func(*args, **kwargs)
File "/home/lukas/.local/lib/python3.10/site-packages/langchain_community/vectorstores/chroma.py", line 155, in __query_collection
return self._collection.query(
File "/home/lukas/.local/lib/python3.10/site-packages/chromadb/api/models/Collection.py", line 188, in query
validate_embeddings(maybe_cast_one_to_many(query_embeddings))
File "/home/lukas/.local/lib/python3.10/site-packages/chromadb/api/types.py", line 311, in validate_embeddings
raise ValueError(
ValueError: Expected each embedding in the embeddings to be a list, got [None]
### System Info
langchain==0.1.2
langchain-community==0.0.14
langchain-core==0.1.14
langchain-openai==0.0.3
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | MultiQueryRetriever with Ollama: ValueError: Expected each embedding in the embeddings to be a list, got [None] | https://api.github.com/repos/langchain-ai/langchain/issues/16415/comments | 3 | 2024-01-22T23:39:40Z | 2024-05-10T06:52:03Z | https://github.com/langchain-ai/langchain/issues/16415 | 2,094,973,143 | 16,415 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I would like to learn how to modify ingest.py to load multiple text files and include their source URLs.
### Motivation
Currently only one file is indexed.
### Your contribution
I know Neo4j and LangChain well, but not LangServe. I can help once I get the concept.
@efriis and @tomasonjo original author | neo4j-advanced-rag multiple documents | https://api.github.com/repos/langchain-ai/langchain/issues/16412/comments | 8 | 2024-01-22T22:51:33Z | 2024-04-30T16:29:30Z | https://github.com/langchain-ai/langchain/issues/16412 | 2,094,912,530 | 16,412 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
from typing import Any, List, Mapping, Optional

from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.base import LLM


class CustomLLM(LLM):
n: int
@property
def _llm_type(self) -> str:
return "custom"
def _call(
self,
prompt: str,
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> str:
if stop is not None:
raise ValueError("stop kwargs are not permitted.")
return prompt[: self.n]
@property
def _identifying_params(self) -> Mapping[str, Any]:
"""Get the identifying parameters."""
return {"n": self.n}
```
### Description
Following the code here: https://python.langchain.com/docs/modules/model_io/llms/custom_llm
I get the following error:
AttributeError: module 'langchain' has no attribute 'debug'
### System Info
langchain-0.0.147
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | AttributeError: module 'langchain' has no attribute 'debug' | https://api.github.com/repos/langchain-ai/langchain/issues/16406/comments | 2 | 2024-01-22T21:12:09Z | 2024-04-29T16:15:51Z | https://github.com/langchain-ai/langchain/issues/16406 | 2,094,760,471 | 16,406 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
Impossible to access `system_fingerprint` from OpenAI responses.
see: https://github.com/langchain-ai/langchain/discussions/13170#discussioncomment-8211745 | Expose complete response metadata from chat model via .invoke/.batch/.stream | https://api.github.com/repos/langchain-ai/langchain/issues/16403/comments | 4 | 2024-01-22T19:45:53Z | 2024-06-23T16:09:30Z | https://github.com/langchain-ai/langchain/issues/16403 | 2,094,630,738 | 16,403 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
Same as #16295 for `@beta` decorator | preserve inspect.iscoroutinefunction with @beta decorator | https://api.github.com/repos/langchain-ai/langchain/issues/16402/comments | 2 | 2024-01-22T19:35:07Z | 2024-01-31T19:15:39Z | https://github.com/langchain-ai/langchain/issues/16402 | 2,094,615,439 | 16,402 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
OpenAI function-calling doesn't support function names with spaces. Need to update all Tool names to be snake_cased so that they work as OpenAI functions by default. See #16395 for example fix. | Make all Tool names snake_case | https://api.github.com/repos/langchain-ai/langchain/issues/16396/comments | 1 | 2024-01-22T18:12:45Z | 2024-01-26T22:10:10Z | https://github.com/langchain-ai/langchain/issues/16396 | 2,094,478,715 | 16,396 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```
# Imports assumed from the surrounding notebook (not shown in the original):
from langchain.document_loaders import PyPDFLoader
from langchain.embeddings import BedrockEmbeddings

# Define the path to the PDF file to load.
directory = "/content/test.pdf"

# Function to load documents from the specified file path.
def load_docs(directory):
    # Create a PyPDFLoader for the provided file path.
    loader = PyPDFLoader(directory)
    # Use the loader to load and split the documents, storing them in 'documents'.
    documents = loader.load_and_split()
    # Return the loaded documents.
    return documents

# Call load_docs to load the document and split it into page-sized chunks.
docs = load_docs(directory)
strings = []
for doc in docs:
    strings.append(doc.page_content)

# modelId and bedrock_runtime are defined elsewhere in the user's setup.
bedrock_embeddings = BedrockEmbeddings(model_id=modelId,
                                       client=bedrock_runtime)
embeddings = bedrock_embeddings.embed_documents(strings)
```
### Description
When trying to run this code, the embeddings return as None. I have added the correct info to my AWS account, and no error pops up
### System Info
Google Colab
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [x] Document Loaders
- [x] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Creating embeds from bedrock | https://api.github.com/repos/langchain-ai/langchain/issues/16394/comments | 1 | 2024-01-22T18:04:37Z | 2024-04-30T16:13:20Z | https://github.com/langchain-ai/langchain/issues/16394 | 2,094,466,192 | 16,394 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain_community.llms import GPT4All
callbacks = [StreamingStdOutCallbackHandler()]
llm = GPT4All(model=model_path, callbacks=callbacks)
```
### Description
Previously it was possible to enable streaming of the answer of a GPT4All model, but now it does not work anymore.
In the model source there is a `streaming` attribute declared at the class level, but it's not used anywhere.
If I edit the source manually to add `streaming` as a valid parameter, I can make it work again by doing GPT4All(model=model_path, callbacks=callbacks, streaming=True)
### System Info
langchain==0.1.1
langchain-community==0.0.13
langchain-core==0.1.13
Debian Sid
Python 3.10.4
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Streaming broken for GPT4all | https://api.github.com/repos/langchain-ai/langchain/issues/16389/comments | 2 | 2024-01-22T17:07:32Z | 2024-01-22T17:54:20Z | https://github.com/langchain-ai/langchain/issues/16389 | 2,094,367,590 | 16,389 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
@chain
def answer_call(Any) -> Any:
    if read_persist_var("chat_mode_choose") == chatmode[0]:
        base_answer = RunnableWithMessageHistory(
            prompt_b | llm,
            RedisChatMessageHistory,
            input_messages_key="input",
            history_messages_key="history",
        )
        return base_answer
```
### Description
As I said before, I want to extract only the AI answer because I use it in a Gradio chatbot, and the format does not match (it always comes as `context='xxx'`). I have tried a lot — the `re` module, the `.content` attribute — but nothing works. This is really important to me; if you can help, thanks a lot.
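For what it's worth, output of the form `content='xxx'` usually looks like the repr of a message object rather than plain text. Below is a minimal stdlib illustration of that difference using a hypothetical stand-in class (not a LangChain type), assuming the chain returns such an object:

```python
class FakeAIMessage:
    """Hypothetical stand-in for the message object a chat model returns."""
    def __init__(self, content):
        self.content = content
    def __repr__(self):
        return f"content={self.content!r}"

msg = FakeAIMessage("hello there")
print(str(msg))      # content='hello there'  <- the repr that shows up in the UI
print(msg.content)   # hello there            <- the plain text we want
```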
### System Info
Python 3.9.18
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | I can't get only ai answer via RunnableWithMessageHistory ( it always come with context= ) | https://api.github.com/repos/langchain-ai/langchain/issues/16386/comments | 2 | 2024-01-22T16:44:24Z | 2024-01-24T13:28:16Z | https://github.com/langchain-ai/langchain/issues/16386 | 2,094,318,035 | 16,386 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
Callback propagation is failing when creating a tool from .invoke signatures:
https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/tools.py#L516
```
Tool(
name="some_tool",
func=some_runnable.invoke,
coroutine=some_runnable.ainvoke,
description="cats in the park",
return_direct=True
)
```
1) Callbacks will not be propagated properly because `.invoke` and `.ainvoke` do not have a `callbacks` parameter; instead they take a `config` parameter.
2) We should instead create a nice way to create a tool from an existing runnable. | Create on-ramp for tools from runnables | https://api.github.com/repos/langchain-ai/langchain/issues/16381/comments | 1 | 2024-01-22T16:16:52Z | 2024-04-30T16:13:20Z | https://github.com/langchain-ai/langchain/issues/16381 | 2,094,267,590 | 16,381 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
https://python.langchain.com/docs/modules/callbacks/#when-do-you-want-to-use-each-of-these
### Idea or request for content:
Document how to pass callbacks via .invoke | DOC: Document how to pass callbacks with Runnable methods (e.g., .invoke / .batch) | https://api.github.com/repos/langchain-ai/langchain/issues/16379/comments | 1 | 2024-01-22T16:07:04Z | 2024-04-30T16:15:00Z | https://github.com/langchain-ai/langchain/issues/16379 | 2,094,249,309 | 16,379 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
tools = [get_news, solve_math_problem]
agent = ZeroShotAgent()
agent_executor = AgentExecutor.from_agent_and_tools(
agent=agent,
tools=tools,
verbose=False,
    max_iterations=1,
early_stopping_method="generate",
return_intermediate_steps=True,
handle_parsing_errors=True,
)
result = agent_executor.invoke(
input={
"input": user_query,
},
)
```
### Description
We find that if the agent reaches the maximum number of steps but still wants to use tools to solve the question, then the "final answer" becomes the input for the next step rather than a real final answer. This problem cannot be solved just by setting the right parameters.
**e.g.
Input:**
```News Tesla? and root of 18376?```
**Output:**
```
('Action: solve_math_problem\nAction Input: square root of 18376',
[{'input': 'Tesla',
'output': "Tesla CEO Elon Musk has expressed ...",
'tool': 'get_news'}])
```
We set ```max_iteration=1``` just because of easily reproducing this error.
### System Info
We checked the function that generates the final answer and printed out its input and output.

**new_inputs:**
```python
{
'agent_scratchpad':
'Thought: The question has two parts: one is about the latest news on Tesla, and the other is about the root of 18376. I can use the get_news tool to find the latest news about Tesla, and the solve_math_problem tool to find the square root of 18376.\n
Action: get_news \nAction Input: Tesla\n
Observation: {
\'input\': \'Tesla\',
\'output\': "Tesla\'s CEO Elon Musk has sparked speculation about his ownership stake in the company after expressing
his reluctance to develop Tesla into a leader in artificial intelligence (AI) and robotics …",
}\n
Thought:\n\nI now need to return a final answer based on the previous steps:',
'stop': ['\nObservation:', '\n\tObservation:']
}
```
**full_output:**
```python
: Action: solve_math_problem Action Input: root of 18376
```
My guess is that the agent_scratchpad prompt clearly states `The question has two parts ...` but does not include the actual user query `News Tesla? and root of 18376?`. Therefore, the LLM may be confused about what the final answer is for: the initial user query, or only the previous steps?
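Based on the printed `new_inputs` above, the forced final-answer prompt appears to be built roughly like this (a sketch inferred from the output shown, not the exact library source) — note the original user query never appears in the scratchpad:

```python
def build_force_final_prompt(intermediate_steps):
    """Sketch of what early_stopping_method="generate" appears to build
    from (action, observation) pairs."""
    thoughts = ""
    for action_log, observation in intermediate_steps:
        thoughts += action_log
        thoughts += f"\nObservation: {observation}\nThought:"
    # The forced-answer instruction refers only to "the previous steps";
    # the original user query is nowhere in this scratchpad.
    thoughts += "\n\nI now need to return a final answer based on the previous steps:"
    return thoughts

steps = [("Action: get_news\nAction Input: Tesla", "Tesla CEO Elon Musk has ...")]
print(build_force_final_prompt(steps))
```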
Please review this issue. Thank you!
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | early_stopping_method parameter of AgentExecutor doesn’t work in expected way | https://api.github.com/repos/langchain-ai/langchain/issues/16374/comments | 4 | 2024-01-22T15:04:11Z | 2024-02-05T15:16:17Z | https://github.com/langchain-ai/langchain/issues/16374 | 2,094,121,411 | 16,374 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
The following code
```python
from langchain.agents import AgentExecutor, create_openai_functions_agent
from langchain_community.utilities.bing_search import BingSearchAPIWrapper
from langchain_community.tools.bing_search.tool import BingSearchResults
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

def create_new_bing_search_agent_function_openai(llm):
    bing_search = BingSearchAPIWrapper(bing_search_url="xxx", bing_subscription_key="xxx", k=4)
    bing_tool = BingSearchResults(num_results=1, api_wrapper=bing_search)
    tools_bing = [bing_tool]
    prompt = ChatPromptTemplate.from_messages(
        [
            ("system", "Have a conversation with a human. You are a helpful assistant who retrieves information from Bing Search (the internet)."),
            ("user", "{input}"),
            MessagesPlaceholder(variable_name="agent_scratchpad"),
        ]
    )
    agent = create_openai_functions_agent(llm, tools_bing, prompt)
    agent_executor = AgentExecutor(tools=tools_bing, agent=agent)
    return agent_executor

agent_chain_bing = create_new_bing_search_agent_function_openai(llm)
output = agent_chain_bing.invoke({"input": "What is the stock price of Apple?"})
output['output']
```
Gives this error:
BadRequestError: Error code: 400 - {'error': {'message': "'Bing Search Results JSON' does not match '^[a-zA-Z0-9_-]{1,64}$' - 'functions.0.name'", 'type': 'invalid_request_error', 'param': None, 'code': None}}
The same error appears if I define the agent like this:
agent = create_openai_tools_agent(llm, tools_bing, prompt)
Am I using this agent wrong? Is there some other way I can use Bing search in an agent?
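The rejection itself is easy to check with the stdlib: the pattern in the error message disallows spaces, so the tool's default name `Bing Search Results JSON` can never pass, while a snake_case name does. (Overriding the name, e.g. via a `name=` argument on the tool, is an assumption on my part, not something I have verified against this setup.)

```python
import re

# Pattern taken from the error message: OpenAI function names must match this.
NAME_RE = re.compile(r"^[a-zA-Z0-9_-]{1,64}$")

print(bool(NAME_RE.match("Bing Search Results JSON")))   # False: spaces are rejected
print(bool(NAME_RE.match("bing_search_results_json")))   # True: snake_case passes
```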
### Description
langchain==0.1.1
### System Info
model GPT4 32k
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | 'Bing Search Results JSON' does not match '^[a-zA-Z0-9_-]{1,64}$' - 'functions.0.name' with new langchain 0.1 when using BingSearchResults into agent | https://api.github.com/repos/langchain-ai/langchain/issues/16368/comments | 19 | 2024-01-22T12:38:20Z | 2024-01-25T08:19:17Z | https://github.com/langchain-ai/langchain/issues/16368 | 2,093,826,511 | 16,368 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
Below is my code:
```python
elif any(file_path.lower().endswith(f".{img_type}") for img_type in image_types):
loader=UnstructuredImageLoader(file_path)
document = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
texts = text_splitter.split_documents(documents=document)
```
### Description
Below is the error I am getting while using UnstructuredImageLoader:
File "/home/hs/env/lib/python3.8/site-packages/unstructured/partition/pdf.py", line 263, in _partition_pdf_or_image_local
layout = process_file_with_model(
File "/home/hs/env/lib/python3.8/site-packages/unstructured_inference/inference/layout.py", line 377, in process_file_with_model
model = get_model(model_name, **kwargs)
TypeError: get_model() got an unexpected keyword argument 'ocr_languages'
### System Info
Dell Latitude 7480
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | getting error while using UnstructuredImageLoader | https://api.github.com/repos/langchain-ai/langchain/issues/16366/comments | 2 | 2024-01-22T11:52:25Z | 2024-01-22T17:51:28Z | https://github.com/langchain-ai/langchain/issues/16366 | 2,093,742,868 | 16,366 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I have different chains, such as a vector chain, a graph chain, and a custom chain.
I have created a rule-based system: when the user writes a query, it should be passed to an agent that knows which LLM chain to select out of all the chains.
### Motivation
I'm building a RAG app for production. I can't use OpenAI, so I'm looking at open-source LLMs such as Mistral or Llama.
When the user writes a query, the right LLM chain should be selected automatically.
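A rule-based dispatcher of the kind described can be sketched in plain Python (the chain names and keyword rules below are illustrative assumptions, not LangChain APIs; in a real app each returned name would map to an actual chain):

```python
def route_query(query: str) -> str:
    """Pick a chain name for a query using simple keyword rules.

    Illustrative only: real routing rules depend on your domain, and each
    name would map to an actual chain (vector, graph, or custom).
    """
    q = query.lower()
    if any(word in q for word in ("relationship", "connected", "path between")):
        return "graph_chain"
    if any(word in q for word in ("document", "manual", "what does", "explain")):
        return "vector_chain"
    return "custom_chain"

print(route_query("What is the path between node A and node B?"))  # graph_chain
print(route_query("Explain section 3 of the manual"))              # vector_chain
```

An open-source model can also do the routing itself if the rules are too rigid, by prompting it to output one of the chain names, but a keyword table like this is cheap and deterministic.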
### Your contribution
I would love to work on this chain.
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Below is my code:
```
elif file_path.lower().endswith(".docx") or file_path.lower().endswith(".doc"):
docx_loader = UnstructuredWordDocumentLoader(file_path, mode="elements")
docx_document = docx_loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=400, chunk_overlap=0)
texts = text_splitter.split_documents(documents=docx_document)
print(docx_document,"***************************")
```
Below is the error I am getting:
File "/home/hs/env/lib/python3.8/site-packages/langchain_community/vectorstores/chroma.py", line 742, in from_texts
chroma_collection.add_texts(texts=texts, metadatas=metadatas, ids=ids)
File "/home/hs/env/lib/python3.8/site-packages/langchain_community/vectorstores/chroma.py", line 309, in add_texts
raise ValueError(e.args[0] + "\n\n" + msg)
ValueError: Expected metadata value to be a str, int, float or bool, got ['Deepak Kumar'] which is a <class 'list'>
Try filtering complex metadata from the document using langchain_community.vectorstores.utils.filter_complex_metadata.
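The suggestion in the error message boils down to dropping any metadata value that is not a plain scalar. The real helper is `langchain_community.vectorstores.utils.filter_complex_metadata`, which operates on `Document` objects; the stand-in below just shows the principle in plain Python:

```python
def filter_scalar_metadata(metadata: dict) -> dict:
    """Keep only str/int/float/bool values, as Chroma requires."""
    return {k: v for k, v in metadata.items() if isinstance(v, (str, int, float, bool))}

meta = {"source": "cv.docx", "page": 1, "author": ["Deepak Kumar"]}
print(filter_scalar_metadata(meta))  # the list-valued "author" key is dropped
```

If you prefer to keep the information instead of dropping it, flatten list values yourself first, for example with `", ".join(value)`, before adding the documents to the vector store.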
### Idea or request for content:
_No response_ | getting error while integrating UnstructuredWordDocumentLoader | https://api.github.com/repos/langchain-ai/langchain/issues/16363/comments | 2 | 2024-01-22T11:25:29Z | 2024-01-22T17:52:47Z | https://github.com/langchain-ai/langchain/issues/16363 | 2,093,695,219 | 16,363 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
For the docs [Web scraping](https://python.langchain.com/docs/use_cases/web_scraping).
The below examples mapped shows "Page Not Found" Error
1. [AsyncHtmlLoader](https://python.langchain.com/docs/use_cases/docs/integrations/document_loaders/async_html)
2. [AsyncChromiumLoader](https://python.langchain.com/docs/use_cases/docs/integrations/document_loaders/async_chromium)
3. [HTML2Text](https://python.langchain.com/docs/use_cases/docs/integrations/document_transformers/html2text)
4. [WebResearchRetriever](https://python.langchain.com/docs/modules/data_connection/retrievers/web_research) -> Already Mentioned in the ISSUE #16241
Ideally, the URLs should be:
1. [AsyncHtmlLoader](https://python.langchain.com/docs/integrations/document_loaders/async_html)
2. [AsyncChromiumLoader](https://python.langchain.com/docs/integrations/document_loaders/async_chromium)
3. [HTML2Text](https://python.langchain.com/docs/integrations/document_transformers/html2text)
4. Notebook Not found in the docs
### Idea or request for content:
_No response_ | Mismatch in Mapping Notebook URLs in Web scraping Docs | https://api.github.com/repos/langchain-ai/langchain/issues/16361/comments | 1 | 2024-01-22T11:15:31Z | 2024-01-22T17:43:17Z | https://github.com/langchain-ai/langchain/issues/16361 | 2,093,677,375 | 16,361 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```
prompt_template = """Use the following pieces of context to answer the question at the end. Try to answer in a structured way. Write your answer in HTML format but do not include ```html ```. Put words in bold that directly answer your question.
If you don't know the answer, just say 'I am sorry I dont know the answer to this question or you dont have access to the files needed to answer the question.' Don't try to make up an answer.
{summaries}
Question: {question}.
"""
PROMPT = PromptTemplate(
template=prompt_template, input_variables=["summaries", "question"]
)
memory = ConversationBufferWindowMemory(
k=5, memory_key="chat_history", return_messages=True, output_key="answer"
)
for i in range(0, int(len(chat_history) / 2)):
memory.save_context(
{"input": chat_history[i * 2]}, {"answer": chat_history[(i * 2) + 1]}
)
chain = RetrievalQAWithSourcesChain.from_chain_type(llm=llm,
chain_type="stuff",
retriever=retriever,
memory= memory,
chain_type_kwargs={
"prompt": PromptTemplate(
template=PROMPT,
input_variables=["summaries", "question"],
),
},
)
```
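The `save_context` loop in the example assumes `chat_history` is a flat list that alternates human and AI turns. That pairing step can be pulled out and checked on its own in plain Python (the helper below is an illustrative stand-in, not a LangChain API):

```python
def pair_history(flat_history):
    """Turn [human0, ai0, human1, ai1, ...] into (human, ai) pairs."""
    if len(flat_history) % 2 != 0:
        raise ValueError("history must contain an AI reply for every human turn")
    return [(flat_history[i], flat_history[i + 1]) for i in range(0, len(flat_history), 2)]

print(pair_history(["hi", "hello!", "how are you?", "fine, thanks"]))
```

Guarding the loop this way makes it obvious when a truncated history would otherwise silently drop the last human turn.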
### Description
I want to use `RetrievalQAWithSourcesChain` to generate an answer and the relevant sources from the retriever. However, with this code I am getting this error:
System.Private.CoreLib: Result: Failure
Exception: KeyError: 'template'
prompt.py", line 146, in template_is_valid
values["template"], values["template_format"]
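One likely cause (an assumption based on the example code, not confirmed from the traceback alone): `PROMPT` is already a `PromptTemplate`, but `chain_type_kwargs` then wraps it in a second `PromptTemplate(template=PROMPT, ...)`, so the validator never sees a raw string under the `template` key. A minimal stand-in class shows the failure mode:

```python
class MiniPromptTemplate:
    """Tiny stand-in for PromptTemplate's template validation."""

    def __init__(self, template, input_variables):
        if not isinstance(template, str):
            # loosely mirrors the validator failing when `template` is not the raw string
            raise TypeError("template must be a raw string, not a template object")
        self.template = template
        self.input_variables = input_variables


prompt = MiniPromptTemplate("{summaries}\nQuestion: {question}", ["summaries", "question"])
try:
    MiniPromptTemplate(prompt, ["summaries", "question"])  # double-wrapping fails
except TypeError as exc:
    print(exc)
```

If that is the issue, passing the already-built object directly, i.e. `chain_type_kwargs={"prompt": PROMPT}`, avoids the second wrap.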
### System Info
langchain==0.1.0
langsmith==0.0.80
langchainhub==0.1.14
langchain-community==0.0.12
openai==1.7.2
azure-identity == 1.13.0
azure-core ==1.28.0
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | How to use RetrievalQAWithSourcesChain with a custom prompt | https://api.github.com/repos/langchain-ai/langchain/issues/16356/comments | 2 | 2024-01-22T09:33:44Z | 2024-05-15T16:07:01Z | https://github.com/langchain-ai/langchain/issues/16356 | 2,093,482,579 | 16,356 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
The following code:
```python
from langchain.agents import AgentExecutor
from langchain.agents.format_scratchpad import format_to_openai_function_messages
from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser
from langchain.tools.render import format_tool_to_openai_function
from langchain.tools.retriever import create_retriever_tool
from langchain_community.vectorstores.azuresearch import AzureSearch, AzureSearchVectorStoreRetriever
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.utils.function_calling import convert_pydantic_to_openai_function

def creat_ai_search_new_agent(embeddings, llm, class_name_rich):
    ai_search_endpoint = get_ai_search_endpoint()
    ai_search_admin_key = get_ai_search_admin_key()
    vector_store = AzureSearch(
        azure_search_endpoint=xxx,
        azure_search_key=xxx,
        index_name=xxx,
        embedding_function=embeddings.embed_query,
        content_key=xxx
    )
    # Retriever that uses `Azure Cognitive Search`.
    azure_search_retriever = AzureSearchVectorStoreRetriever(
        vectorstore=vector_store,
        search_type="hybrid",
        k=4,
        top=10
    )
    retriever_tool = create_retriever_tool(
        azure_search_retriever,
        "Retriever",
        "Useful when you need to retrieve information from documents",
    )

    class Response(BaseModel):
        """Final response to the question being asked"""
        answer: str = Field(description="The final answer to respond to the user")

    prompt = ChatPromptTemplate.from_messages(
        [
            ("system", "You are a helpful assistant who retrieves information from documents"),
            ("user", "{input}"),
            MessagesPlaceholder(variable_name="agent_scratchpad"),
        ]
    )
    llm_with_tools = llm.bind(
        functions=[
            # The retriever tool
            format_tool_to_openai_function(retriever_tool),
            # Response schema
            convert_pydantic_to_openai_function(Response),
        ]
    )
    try:
        agent = (
            {
                "input": lambda x: x["input"],
                # Format agent scratchpad from intermediate steps
                "agent_scratchpad": lambda x: format_to_openai_function_messages(
                    x["intermediate_steps"]
                ),
            }
            | prompt
            | llm_with_tools
            | OpenAIFunctionsAgentOutputParser()
        )
        agent_executor = AgentExecutor(
            tools=[retriever_tool],
            agent=agent,
            verbose=True,
            return_intermediate_steps=True,
            handle_parsing_errors=True,
            max_iterations=15,
        )
    except Exception as e:
        print(e)
        print("error instantiating the agent")
    return agent_executor
```
It gives the error `Response is not a valid tool, try one of [Retriever].` and then, after going into a loop, it reaches the agent's iteration limit. The final steps of the agent look like this:
Invoking: `Response` with `{'answer': "XXXXXXX", 'sources': [58, 15, 57, 29]}`
Response is not a valid tool, try one of [Retriever].
### Description
langchain == 0.1.1
openai==1.7.0
### System Info
Using model GPT-4 32K
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | 'Response is not a valid tool, try one of [Retriever].'), when using OpenAIFunctionsAgentOutputParser() | https://api.github.com/repos/langchain-ai/langchain/issues/16355/comments | 4 | 2024-01-22T09:29:42Z | 2024-05-02T16:05:54Z | https://github.com/langchain-ai/langchain/issues/16355 | 2,093,473,398 | 16,355 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
ask_chain = generate_ask_chain()
ner_chain = generate_ner_chain()
reasonable_chain = generate_resonable_chain()
overall_chain = generate_sequen_chain(ner_chain, reasonable_chain, ask_chain)  # uses SequentialChain

for chunk in overall_chain.stream(
    {"profile": profile, "dialogue": dialogue, "pair": pair, "question": question, "answer": answer},
    return_only_outputs=True,
):
    print(chunk.content, end="", flush=True)
```
### Description
I want to stream the processed output of the SequentialChain, but it seems a dict cannot be streamed:
AttributeError: 'dict' object has no attribute 'content'
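Chain classes like `SequentialChain` stream dicts keyed by their output variables rather than message chunks, so `.content` does not exist on what `stream` yields. A minimal illustration of the difference (the `"output"` key name below is an assumption; use whatever output key your chain defines):

```python
def fake_chain_stream():
    # stands in for a chain's .stream(), which yields dicts, not messages
    yield {"output": "hello, "}
    yield {"output": "world"}

text = ""
for chunk in fake_chain_stream():
    assert not hasattr(chunk, "content")  # dicts have no .content attribute
    text += chunk["output"]               # index by the chain's output key instead
print(text)
```

So in the original loop, printing `chunk["<output_key>"]` instead of `chunk.content` should avoid the `AttributeError`.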
### System Info
python3.11
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | AttributeError: 'dict' object has no attribute 'content' | https://api.github.com/repos/langchain-ai/langchain/issues/16354/comments | 4 | 2024-01-22T09:14:19Z | 2024-04-30T16:30:28Z | https://github.com/langchain-ai/langchain/issues/16354 | 2,093,434,497 | 16,354 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
import os
from langchain_community.chat_models import QianfanChatEndpoint
from langchain_core.language_models.chat_models import HumanMessage, AIMessage
os.environ["QIANFAN_AK"] = "my-ak"
os.environ["QIANFAN_SK"] = "my-sk"
chat = QianfanChatEndpoint(streaming=True)
messages = [HumanMessage(content="你叫小荔,是一个旅游向导,只会根据真实的信息提供攻略。你的攻略或建议必须真实且有效,并且详细描述涉及的地点"), AIMessage(content="明白"), HumanMessage(content="成都三日游")]
print(messages)
try:
for chunk in chat.stream(messages):
print(chunk)
except TypeError as e:
print("")
```
### Description
When using `QianfanChatEndpoint` with streaming enabled, it only returns two chunks of the message, which is not the full response. I followed the documentation and wrote the exact same code, only changing the prompt.
Disabling streaming and using `chat.invoke` returns the full response.
This seems like a bug in the streaming.
### System Info
python version 3.10.11
langchain version 0.1.1
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Using QianfanChatEndpoint with stream enabled only returns two chunk of messages. Disable stream and using invoke does not have this problem | https://api.github.com/repos/langchain-ai/langchain/issues/16352/comments | 4 | 2024-01-22T08:09:03Z | 2024-05-10T16:09:00Z | https://github.com/langchain-ai/langchain/issues/16352 | 2,093,323,353 | 16,352 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
This code works with langchain 0.1.0 and Azure Search Documents 11.4b9
However, with Azure Search Documents 11.4.0 I get the error: `ImportError: cannot import name 'Vector' from 'azure.search.documents.models'`
```
with callbacks.collect_runs() as cb:
embeddings = AzureOpenAIEmbeddings(
azure_deployment=OPENAI_EMBEDDING_DEPLOYMENT_NAME,
openai_api_version="2023-05-15",
)
# Init vector store
vector_store: AzureSearch = AzureSearch(
azure_search_endpoint=SEARCH_SERVICE_ENPOINT,
azure_search_key=SEARCH_SERVICE_ADMIN_KEY,
index_name=index_name,
embedding_function=embeddings.embed_query,
)
llm = AzureChatOpenAI(
azure_deployment=OPENAI_DEPLOYMENT_ENDPOINT1, openai_api_version="2023-05-15"
)
# Should take `chat_history` and `question` as input variables.
condense_template = """Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question. If you do not know the answer reply with 'I am sorry'.
Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:"""
CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(condense_template)
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)
prompt_template = """Use the following pieces of context to answer the question at the end. Try to answer in a structured way. Write your answer in HTML format but do not include ```html ```. Put words in bold that directly answer your question.
If you don't know the answer, just say 'I am sorry I dont know the answer to this question or you dont have access to the files needed to answer the question.' Don't try to make up an answer.
{context}
Question: {question}.
"""
PROMPT = PromptTemplate(
template=prompt_template, input_variables=["context", "question"]
)
doc_chain = load_qa_chain(llm, chain_type="stuff", prompt=PROMPT)
memory = ConversationBufferWindowMemory(
k=5, memory_key="chat_history", return_messages=True, output_key="answer"
)
for i in range(0, int(len(chat_history) / 2)):
memory.save_context(
{"input": chat_history[i * 2]}, {"answer": chat_history[(i * 2) + 1]}
)
chain = ConversationalRetrievalChain(
retriever=vector_store.as_retriever(),
combine_docs_chain=doc_chain,
question_generator=question_generator,
memory=memory,
return_source_documents=True,
)
result = chain({"question": user_question})
run_id = str(cb.traced_runs[0].id)
return result, run_id
```
### Description
I am trying to use LangChain with Azure Search.
### System Info
I am using the following libraries:
langchain==0.1.0
langsmith==0.0.83
langchainhub==0.1.14
langchain-community==0.0.12
langchain-openai==0.0.3
azure-search-documents==11.4.0
openai==1.7.2
azure-identity == 1.13.0
azure-core ==1.28.0
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | ImportError: cannot import name 'Vector' from 'azure.search.documents.models' | https://api.github.com/repos/langchain-ai/langchain/issues/16351/comments | 2 | 2024-01-22T06:55:27Z | 2024-01-22T09:27:46Z | https://github.com/langchain-ai/langchain/issues/16351 | 2,093,219,400 | 16,351 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
test_template = """Rephrase the query and output in json format. Here is an example:
###
query: hello world.
output: {"rephrased_query": "hello my world."}
###
query: {question}
output:"""
test_query_prompt = PromptTemplate(
input_variables=["question"],
template=test_template
)
test_query_prompt.input_variables
```
['"rephrased_query"', 'question']
### Description
I try to initialize a prompt requesting output in JSON format, as shown in the example code. When a JSON example appears in the template, the `input_variables` seem to be generated automatically from the template instead of using the ones I give.
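This is expected behaviour of f-string-style templates: every single-braced group in the template becomes an input variable, which is why `"rephrased_query"` shows up in the list. Doubling the braces escapes them. The same mechanism can be seen with plain `str.format`, which the default `f-string` template format mirrors:

```python
template = 'output: {{"rephrased_query": "hello my world."}}\nquery: {question}'

# only {question} is a placeholder; the doubled braces render as literal braces
print(template.format(question="hello world."))
```

So writing the JSON example in the original template with doubled braces, `{{"rephrased_query": "hello my world."}}`, should leave `question` as the only input variable.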
### System Info
langchain-0.1.0
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | can't initialize the PromptTemplate with input_variables correctly | https://api.github.com/repos/langchain-ai/langchain/issues/16349/comments | 1 | 2024-01-22T05:51:46Z | 2024-01-22T16:20:16Z | https://github.com/langchain-ai/langchain/issues/16349 | 2,093,140,496 | 16,349 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
```python
import os

import qdrant_client
from dotenv import load_dotenv
from langchain.chains import RetrievalQA
from langchain.vectorstores import Qdrant
from langchain_community.chat_models import ChatOpenAI
from langchain_community.embeddings import HuggingFaceBgeEmbeddings
from langchain_core.documents import Document
from langchain_core.prompts import PromptTemplate, ChatPromptTemplate

os.environ['OPENAI_API_KEY'] = "key"

template = """你是一位律師,態度非常高傲.
Question: {question}
Context: {context}
Answer:
"""
prompt = ChatPromptTemplate.from_template(template)
print(prompt)

def get_vector_store():
    client = qdrant_client.QdrantClient(
        os.getenv('QDRANT_HOST'),
    )
    embeddings = HuggingFaceBgeEmbeddings(
        model_name="BAAI/bge-large-zh-v1.5",
    )
    # embeddings = SentenceTransformer(model_name="maidalun1020/bce-embedding-base_v1")
    vector_store = Qdrant(
        client=client,
        collection_name=os.getenv('QDRANT_COLLECTION_NAME'),
        embeddings=embeddings,
    )
    return vector_store

def main():
    load_dotenv()
    vectorstore = get_vector_store()
    qa = RetrievalQA.from_chain_type(
        llm=ChatOpenAI(
            temperature=0.7,
            max_tokens=100,
            model=os.getenv('QDRANT_MODEL_NAME'),
        ),
        chain_type="stuff",
        retriever=vectorstore.as_retriever(
            search_type="similarity_score_threshold",
            search_kwargs={"score_threshold": 0.7, "k": 100},
            collection_name=os.getenv('QDRANT_COLLECTION_NAME'),
            model_kwargs={"prompt": prompt}
            # memory=memory,
        ),
    )
    while True:
        # qa.load_memory_variables({"chat_history"})
        documents = []
        question = input("冒險者:")
        document = Document(page_content=question, metadata={'source': 'user'})
        documents.append(document)
        answer = qa.invoke(question)
        print(answer)
        vectorstore.add_documents([document])
        if question == "bye":
            break

if __name__ == "__main__":
    main()
```
### Idea or request for content:
Why can't I retrieve the prompt I set for the OpenAI prompt engine? Additionally, how can I incorporate memory so that OpenAI can remember what I say during conversations? | Why can't I retrieve the prompt I set for the OpenAI prompt engine? Additionally, how can I incorporate memory so that OpenAI can remember what I say during conversations? | https://api.github.com/repos/langchain-ai/langchain/issues/16345/comments | 1 | 2024-01-22T01:43:36Z | 2024-04-29T16:12:11Z | https://github.com/langchain-ai/langchain/issues/16345 | 2,092,912,993 | 16,345 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
A simplified example, pulled almost straight from [here](https://python.langchain.com/docs/integrations/document_transformers/html2text), but it fails on the walmart.com page for some reason.
```
from langchain_community.document_loaders import AsyncHtmlLoader
from langchain_community.document_transformers import Html2TextTransformer
urls = ['https://www.walmart.com/shop/deals']
loader = AsyncHtmlLoader(urls)
docs = loader.load()
html2text = Html2TextTransformer()
docs_transformed = html2text.transform_documents(docs)
print(docs_transformed[0].page_content)
```
### Description
* AsyncHtmlLoader fails to load https://www.walmart.com/shop/deals, but works for other urls I tested
* I searched for the error, but couldn't find documentation on how to avoid the issue with AsyncHtmlLoader
* I would expect AsyncHtmlLoader to never fail to load a webpage due to a technical error; I could understand it if the request were blocked in some way
Error:
```
(crewai) Nicks-Macbook-Pro-4:crewai nroth$ /opt/miniconda3/envs/crewai/bin/python /Users/nroth/workspace/crewai/html2text_example.py
Fetching pages: 0%| | 0/1 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/opt/miniconda3/envs/crewai/lib/python3.12/site-packages/langchain_community/document_loaders/async_html.py", line 206, in load
asyncio.get_running_loop()
RuntimeError: no running event loop
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/miniconda3/envs/crewai/lib/python3.12/site-packages/aiohttp/client_reqrep.py", line 965, in start
message, payload = await protocol.read() # type: ignore[union-attr]
^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/crewai/lib/python3.12/site-packages/aiohttp/streams.py", line 622, in read
await self._waiter
File "/opt/miniconda3/envs/crewai/lib/python3.12/site-packages/aiohttp/client_proto.py", line 224, in data_received
messages, upgraded, tail = self._parser.feed_data(data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "aiohttp/_http_parser.pyx", line 557, in aiohttp._http_parser.HttpParser.feed_data
File "aiohttp/_http_parser.pyx", line 732, in aiohttp._http_parser.cb_on_header_value
aiohttp.http_exceptions.LineTooLong: 400, message:
Got more than 8190 bytes (9515) when reading Header value is too long.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/nroth/workspace/crewai/html2text_example.py", line 9, in <module>
docs = loader.load()
^^^^^^^^^^^^^
File "/opt/miniconda3/envs/crewai/lib/python3.12/site-packages/langchain_community/document_loaders/async_html.py", line 213, in load
results = asyncio.run(self.fetch_all(self.web_paths))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/crewai/lib/python3.12/asyncio/runners.py", line 194, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/crewai/lib/python3.12/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/crewai/lib/python3.12/asyncio/base_events.py", line 684, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/crewai/lib/python3.12/site-packages/langchain_community/document_loaders/async_html.py", line 189, in fetch_all
return await tqdm_asyncio.gather(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/crewai/lib/python3.12/site-packages/tqdm/asyncio.py", line 79, in gather
res = [await f for f in cls.as_completed(ifs, loop=loop, timeout=timeout,
^^^^^^^
File "/opt/miniconda3/envs/crewai/lib/python3.12/asyncio/tasks.py", line 631, in _wait_for_one
return f.result() # May raise f.exception().
^^^^^^^^^^
File "/opt/miniconda3/envs/crewai/lib/python3.12/site-packages/tqdm/asyncio.py", line 76, in wrap_awaitable
return i, await f
^^^^^^^
File "/opt/miniconda3/envs/crewai/lib/python3.12/site-packages/langchain_community/document_loaders/async_html.py", line 177, in _fetch_with_rate_limit
return await self._fetch(url)
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/crewai/lib/python3.12/site-packages/langchain_community/document_loaders/async_html.py", line 148, in _fetch
async with session.get(
File "/opt/miniconda3/envs/crewai/lib/python3.12/site-packages/aiohttp/client.py", line 1187, in __aenter__
self._resp = await self._coro
^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/crewai/lib/python3.12/site-packages/aiohttp/client.py", line 601, in _request
await resp.start(conn)
File "/opt/miniconda3/envs/crewai/lib/python3.12/site-packages/aiohttp/client_reqrep.py", line 967, in start
raise ClientResponseError(
aiohttp.client_exceptions.ClientResponseError: 400, message='Got more than 8190 bytes (9515) when reading Header value is too long.', url=URL('https://www.walmart.com/shop/deals')
```
### System Info
I was using this after installing the latest version of crewai, so my langchain version might not be the absolute latest.
```
python --version
Python 3.12.1
```
```
pip freeze | grep langchain
langchain==0.1.1
langchain-community==0.0.13
langchain-core==0.1.13
langchain-openai==0.0.2.post1
```
```
system_profiler SPSoftwareDataType SPHardwareDataType
Software:
System Software Overview:
System Version: macOS 12.5.1 (21G83)
Kernel Version: Darwin 21.6.0
Secure Virtual Memory: Enabled
System Integrity Protection: Enabled
Hardware:
Hardware Overview:
Model Name: MacBook Pro
Model Identifier: MacBookPro16,1
Processor Name: 8-Core Intel Core i9
Processor Speed: 2.3 GHz
Number of Processors: 1
Total Number of Cores: 8
L2 Cache (per Core): 256 KB
L3 Cache: 16 MB
Hyper-Threading Technology: Enabled
Memory: 32 GB
```
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [X] Async | Header value is too long error when using AsyncHtmlLoader | https://api.github.com/repos/langchain-ai/langchain/issues/16343/comments | 3 | 2024-01-21T18:18:14Z | 2024-04-28T16:17:59Z | https://github.com/langchain-ai/langchain/issues/16343 | 2,092,704,220 | 16,343 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
I'm attempting to use RAG-Fusion with retriever as OpenSearchRetriever:
```python
retriever = OpenSearchRetriever(...)
...
query_chain = generate_queries | retriever.map() | reciprocal_rank_fusion
```
### Description
It seems the OpenSearchRetriever does not have a `.map` attribute, so it can't be used with RAG Fusion?
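Independent of the missing `.map` attribute, the `reciprocal_rank_fusion` step in the chain is plain ranking arithmetic and can be sketched without LangChain (strings stand in for documents; the `k=60` constant is the value commonly used in RAG-Fusion write-ups):

```python
def reciprocal_rank_fusion(ranked_lists, k=60):
    """Fuse several ranked result lists into one, scoring each item 1/(k + rank)."""
    scores = {}
    for results in ranked_lists:
        for rank, doc in enumerate(results):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

fused = reciprocal_rank_fusion([["a", "b", "c"], ["a", "d", "b"]])
print(fused)  # "a" ranks first (it tops both lists), then "b"
```

For the `.map()` problem itself, one workaround (an assumption, not verified against this retriever class) is to wrap the retrieval call in a `RunnableLambda` so it gains the standard Runnable interface, including `.map()`.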
### System Info
LangChain 0.1.0
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | 'OpenSearchRetriever' object has no attribute 'map' | https://api.github.com/repos/langchain-ai/langchain/issues/16342/comments | 1 | 2024-01-21T17:38:31Z | 2024-01-22T15:19:30Z | https://github.com/langchain-ai/langchain/issues/16342 | 2,092,686,610 | 16,342 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
When I try to create a Gemini model using the built-in tools as follows, it results in an error.
```
llm = ChatVertexAI(model_name="gemini-pro")
sqlalchemy_uri = f"bigquery://{gcp_project_id}/{gcp_dataset_id}"
db = SQLDatabase.from_uri(sqlalchemy_uri)
tools = SQLDatabaseToolkit(db=db, llm=llm).get_tools()
llm_with_tools = llm.bind(functions=tools)
llm_with_tools.invoke("list tables")
```
Error Message
```
"name": "ValueError",
"message": "Value not declarable with JSON Schema, field: name='_callbacks_List[langchain_core.callbacks.base.BaseCallbackHandler]' type=BaseCallbackHandler required=True",
```
### Description
I want to use the built-in tools with the model from the langchain_google_vertexai library.
In the Gemini version of ChatVertexAI, when generating text (`_generate()`), it seems to be expected that the Tool bound to the model and given to functions will be converted to VertexAI format using `_format_tools_to_vertex_tool()`.
However, the current code fails to do this.
It seems that the issue might be with the branch `if isinstance(tool, Tool)` in the following code.
https://github.com/langchain-ai/langchain/blob/master/libs/partners/google-vertexai/langchain_google_vertexai/functions_utils.py#L77-L89
Similar to the conversion function for OpenAI (`format_tool_to_openai_tool()`), I believe that BaseTool should be used instead of Tool.
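The narrowing problem is easy to demonstrate with plain Python classes standing in for LangChain's (all three classes below are stand-ins, not the real ones): built-in tools such as the SQL toolkit's subclass `BaseTool` directly, so an `isinstance(tool, Tool)` check never matches them, while an `isinstance(tool, BaseTool)` check would.

```python
class BaseTool:            # stand-in for langchain_core's BaseTool
    pass

class Tool(BaseTool):      # stand-in for the convenience Tool subclass
    pass

class ListSQLDatabaseTool(BaseTool):  # stand-in for a toolkit-provided tool
    pass

tool = ListSQLDatabaseTool()
print(isinstance(tool, Tool))      # False: the narrow check misses toolkit tools
print(isinstance(tool, BaseTool))  # True: the broader check catches them
```

This is why widening the branch to `BaseTool`, as the OpenAI conversion function does, would let the built-in tools be converted to the Vertex format.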
### System Info
- Python 3.9.16
- langchain 0.0.354
- langchain-community 0.0.13
- langchain-core 0.1.13
- langchain-google-genai 0.0.4
- langchain-google-vertexai 0.0.2
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | When using the FunctionCalling feature with Gemini, built-in tools cannot be utilized. | https://api.github.com/repos/langchain-ai/langchain/issues/16340/comments | 1 | 2024-01-21T13:21:26Z | 2024-04-28T16:21:10Z | https://github.com/langchain-ai/langchain/issues/16340 | 2,092,583,872 | 16,340 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
``` Python
# Creating Embdeddings of the sentences and storing it into Graph DB
from langchain_community.embeddings import HuggingFaceBgeEmbeddings
model_name = "BAAI/bge-base-en-v1.5"
model_kwargs = {"device": "cpu"}
encode_kwargs = {"normalize_embeddings": True}
embeddings = HuggingFaceBgeEmbeddings(
model_name=model_name, model_kwargs=model_kwargs, encode_kwargs=encode_kwargs
)
```
``` Python
from langchain.graphs import Neo4jGraph
graph = Neo4jGraph(
url=os.environ["NEO4J_URI"],
username=os.environ["NEO4J_USERNAME"],
password=os.environ["NEO4J_PASSWORD"]
)
```
``` Python
from neo4j import GraphDatabase
uri = os.environ["NEO4J_URI"]
username = os.environ["NEO4J_USERNAME"]
password = os.environ["NEO4J_PASSWORD"]
driver = GraphDatabase.driver(uri, auth=(username, password))
session = driver.session()
result = session.run("SHOW VECTOR INDEXES")
for record in result:
print(record)
```
``` Python
# Instantiate Neo4j vector from documents
neo4j_vector = Neo4jVector.from_documents(
documents,
HuggingFaceBgeEmbeddings(),
name="graph_qa_index",
url=os.environ["NEO4J_URI"],
username=os.environ["NEO4J_USERNAME"],
password=os.environ["NEO4J_PASSWORD"]
)
```
### Description
``` Python
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-26-b09e1b2ff4ef> in <cell line: 2>()
1 # Instantiate Neo4j vector from documents
----> 2 neo4j_vector = Neo4jVector.from_documents(
3 documents,
4 HuggingFaceBgeEmbeddings(),
5 url=os.environ["NEO4J_URI"],
2 frames
/usr/local/lib/python3.10/dist-packages/langchain_community/vectorstores/neo4j_vector.py in __from(cls, texts, embeddings, embedding, metadatas, ids, create_id_index, search_type, **kwargs)
445 # If the index already exists, check if embedding dimensions match
446 elif not store.embedding_dimension == embedding_dimension:
--> 447 raise ValueError(
448 f"Index with name {store.index_name} already exists."
449 "The provided embedding function and vector index "
ValueError: Index with name vector already exists.The provided embedding function and vector index dimensions do not match.
Embedding function dimension: 1024
Vector index dimension: 768
```
**The embedding model passed to `HuggingFaceBgeEmbeddings` is `BAAI/bge-base-en-v1.5`, which has an embedding dimension of `768` and should therefore match the vector store index dimension of `768`. Nevertheless, running the code above raises a dimension mismatch error despite the apparent alignment.**
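One thing worth checking (an observation about the snippets above, not a confirmed diagnosis): the index was instantiated with `HuggingFaceBgeEmbeddings()` and no arguments, whereas the first snippet explicitly passes `model_name="BAAI/bge-base-en-v1.5"`. If the class's default model is a large BGE variant, its 1024-dimensional vectors would explain the `1024 vs 768` error. A dependency-free sketch of the guard Neo4jVector performs (the dimension table is hypothetical, for illustration):

```python
# Hypothetical dimensions for two BGE variants.
DIMS = {"BAAI/bge-base-en-v1.5": 768, "BAAI/bge-large-en-v1.5": 1024}

def check_index(model_name, index_dimension):
    # Mirrors the mismatch guard that raises in neo4j_vector.py.
    if DIMS[model_name] != index_dimension:
        raise ValueError(
            "The provided embedding function and vector index dimensions "
            f"do not match: {DIMS[model_name]} vs {index_dimension}"
        )

check_index("BAAI/bge-base-en-v1.5", 768)     # matches, no error
# check_index("BAAI/bge-large-en-v1.5", 768)  # would raise, as in the report
```

If this is the cause, passing the same `model_name` in both places should resolve it.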
### System Info
``` YAML
Python version: 3.10.10
Operating System: Windows 11
pip == 23.3.1
python == 3.10.10
langchain == 0.1.0
transformers == 4.36.2
sentence_transformers == 2.2.2
unstructured == 0.12.0
```
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Seeking Assistance: Incompatibility of Vector Index Model and Embedding Function Dimensions in Neo4j | https://api.github.com/repos/langchain-ai/langchain/issues/16336/comments | 2 | 2024-01-21T10:01:30Z | 2024-04-30T16:19:37Z | https://github.com/langchain-ai/langchain/issues/16336 | 2,092,511,069 | 16,336 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
-
### Description
https://github.com/langchain-ai/langchain/blob/ef75bb63ce5cc4fb76ba1631ebe582f56103ab7e/libs/langchain/langchain/agents/json_chat/base.py#L151
This seems unnecessary, because the JSON chat agent depends on parsing JSON output to work, not on this kind of stop-sequence completion.
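For context, a dependency-free sketch of what `llm.bind(stop=["\nObservation"])` accomplishes in ReAct-style agents: the completion is truncated before the model can hallucinate a tool observation. Whether that truncation matters for a JSON-output agent is exactly the question raised here.

```python
def apply_stop(text, stop):
    # Truncate the completion at the first occurrence of any stop sequence.
    cut = len(text)
    for s in stop:
        i = text.find(s)
        if i != -1:
            cut = min(cut, i)
    return text[:cut]

raw = "Thought: I should list tables.\nObservation: employees, orders"
print(apply_stop(raw, ["\nObservation"]))  # Thought: I should list tables.
```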
### System Info
-
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Is `llm.bind(stop=["\nObservation"])` really meanful in `json_chat` agnet? | https://api.github.com/repos/langchain-ai/langchain/issues/16334/comments | 6 | 2024-01-21T08:39:34Z | 2024-04-07T15:16:42Z | https://github.com/langchain-ai/langchain/issues/16334 | 2,092,484,424 | 16,334 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
I am trying to follow along with the examples at
https://python.langchain.com/docs/expression_language/cookbook/sql_db
Everything is tracking until `full_chain.invoke({"question": quest})`.
(https://github.com/langchain-ai/langchain/blob/3d23a5eb36045db3b7a05c34947b74bd4909ba3b/docs/docs/expression_language/cookbook/sql_db.ipynb#L162)
the error I get suggests that the `query` being formed for sqlalchemy is not actually the query but some string starting "To answer this question,..."
> sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) near "To": syntax error
> [SQL: To answer this question, we can run a query that retrieves the number of employees from the `Employee` table. Here is an example query:
> ```
> SELECT COUNT(*) FROM Employee;
> ```
(full error stack below)
I also found the examples here
https://python.langchain.com/docs/use_cases/qa_structured/sql
but get basically the same errors from the
`db_chain = SQLDatabaseChain.from_llm(model, db, verbose=True)`
( https://github.com/langchain-ai/langchain/blob/ef75bb63ce5cc4fb76ba1631ebe582f56103ab7e/docs/docs/use_cases/qa_structured/sql.ipynb#L94)
(full error stack below)
possibly related issues: #11870, #15077
any guesses why that would be? Or how to debug further? Thanks!
### Description
See above
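One workaround I've used while debugging (an assumption, not from the docs): the model returns prose plus a fenced query, and the whole string is handed to SQLAlchemy, so strip everything except the first fenced statement first. `extract_sql` is a hypothetical helper name; wiring it in as `response=lambda x: db.run(extract_sql(x["query"]))` matches the lambda visible in my traceback.

```python
import re

FENCE = "`" * 3  # written this way to avoid a literal nested code fence

def extract_sql(text):
    # Keep only the first fenced block from a chatty LLM response;
    # fall back to the raw text if no fence is present.
    pattern = re.escape(FENCE) + r"(?:sql)?\s*(.*?)" + re.escape(FENCE)
    m = re.search(pattern, text, re.DOTALL)
    return m.group(1).strip() if m else text.strip()

raw = "To answer this question, run:\n" + FENCE + "\nSELECT COUNT(*) FROM Employee;\n" + FENCE
print(extract_sql(raw))  # SELECT COUNT(*) FROM Employee;
```

This only papers over the underlying issue (the prompt not constraining the model to bare SQL), but it makes the chain runnable for debugging.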
## Full error#1
sql1 To answer this question, we can use a SELECT statement to retrieve the number of employees from the `Employee` table. Here's an example query:
```
SELECT COUNT(*) FROM Employee;
```
This will return the number of rows in the `Employee` table, which is equal to the number of employees.
Answer: There are 3 employees.
Traceback (most recent call last):
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1969, in _exec_single_context
self.dialect.do_execute(
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/sqlalchemy/engine/default.py", line 922, in do_execute
cursor.execute(statement, parameters)
sqlite3.OperationalError: near "To": syntax error
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File ".../Code/eclipse/ai4law/src/lc-rag.py", line 207, in <module>
main()
File ".../Code/eclipse/ai4law/src/lc-rag.py", line 202, in main
lc_cbSQL2(ollama,quest)
File ".../Code/eclipse/ai4law/src/lc-rag.py", line 175, in lc_cbSQL2
chainResult2 = full_chain.invoke({"question": quest})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1780, in invoke
input = step.invoke(
^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain_core/runnables/passthrough.py", line 415, in invoke
return self._call_with_config(self._invoke, input, config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 981, in _call_with_config
context.run(
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain_core/runnables/config.py", line 326, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain_core/runnables/passthrough.py", line 402, in _invoke
**self.mapper.invoke(
^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2345, in invoke
output = {key: future.result() for key, future in zip(steps, futures)}
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2345, in <dictcomp>
output = {key: future.result() for key, future in zip(steps, futures)}
^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3080, in invoke
return self._call_with_config(
^^^^^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 981, in _call_with_config
context.run(
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain_core/runnables/config.py", line 326, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2956, in _invoke
output = call_func_with_variable_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain_core/runnables/config.py", line 326, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File ".../Code/eclipse/ai4law/src/lc-rag.py", line 170, in <lambda>
response=lambda x: db.run(x["query"]),
^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain_community/utilities/sql_database.py", line 437, in run
result = self._execute(command, fetch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain_community/utilities/sql_database.py", line 414, in _execute
cursor = connection.execute(text(command))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1416, in execute
return meth(
^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/sqlalchemy/sql/elements.py", line 517, in _execute_on_connection
return connection._execute_clauseelement(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1639, in _execute_clauseelement
ret = self._execute_context(
^^^^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1848, in _execute_context
return self._exec_single_context(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1988, in _exec_single_context
self._handle_dbapi_exception(
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 2344, in _handle_dbapi_exception
raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1969, in _exec_single_context
self.dialect.do_execute(
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/sqlalchemy/engine/default.py", line 922, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) near "To": syntax error
[SQL: To answer the question "How many employees are there?" we can use a SELECT statement to retrieve the number of employees from the Employee table. Here's an example query:
```
SELECT COUNT(*) FROM Employee;
```
Explanation:
* `COUNT(*)` is a function that returns the number of rows in a table.
* `FROM Employee` specifies the table to retrieve the count from.
When we run this query, we should see the number of employees in the table. For example, if there are 3 employees in the table, the query will return `3`.]
(Background on this error at: https://sqlalche.me/e/20/e3q8)
## Full error#2
.../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: The function `run` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.
warn_deprecated(
> Entering new SQLDatabaseChain chain...
How many employees are there?
SQLQuery: To answer the question "How many employees are there?", we need to query the `Employee` table. The query would be:
```
SELECT COUNT(*) FROM Employee;
```
This will return the number of rows in the `Employee` table, which is the number of employees.
Traceback (most recent call last):
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1969, in _exec_single_context
self.dialect.do_execute(
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/sqlalchemy/engine/default.py", line 922, in do_execute
cursor.execute(statement, parameters)
sqlite3.OperationalError: near "To": syntax error
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File ".../Code/eclipse/ai4law/src/lc-rag.py", line 207, in <module>
main()
File ".../Code/eclipse/ai4law/src/lc-rag.py", line 204, in main
lcSQL(ollama,quest)
File ".../Code/eclipse/ai4law/src/lc-rag.py", line 180, in lcSQL
rv = db_chain.run(quest)
^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain/chains/base.py", line 538, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain/chains/base.py", line 363, in __call__
return self.invoke(
^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain/chains/base.py", line 162, in invoke
raise e
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain/chains/base.py", line 156, in invoke
self._call(inputs, run_manager=run_manager)
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain_experimental/sql/base.py", line 201, in _call
raise exc
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain_experimental/sql/base.py", line 146, in _call
result = self.database.run(sql_cmd)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain_community/utilities/sql_database.py", line 437, in run
result = self._execute(command, fetch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain_community/utilities/sql_database.py", line 414, in _execute
cursor = connection.execute(text(command))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1416, in execute
return meth(
^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/sqlalchemy/sql/elements.py", line 517, in _execute_on_connection
return connection._execute_clauseelement(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1639, in _execute_clauseelement
ret = self._execute_context(
^^^^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1848, in _execute_context
return self._exec_single_context(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1988, in _exec_single_context
self._handle_dbapi_exception(
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 2344, in _handle_dbapi_exception
raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1969, in _exec_single_context
self.dialect.do_execute(
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/sqlalchemy/engine/default.py", line 922, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) near "To": syntax error
[SQL: To answer the question "How many employees are there?", we need to query the `Employee` table. The query would be:
```
SELECT COUNT(*) FROM Employee;
```
This will return the number of rows in the `Employee` table, which is the number of employees.]
(Background on this error at: https://sqlalche.me/e/20/e3q8)
### System Info
langchain==0.1.1
langchain-community==0.0.13
langchain-core==0.1.12
langchain-experimental==0.0.49
python 3.11.7
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | SQL examples from cookbook and use_cases/qa_structured don't work for me? | https://api.github.com/repos/langchain-ai/langchain/issues/16331/comments | 2 | 2024-01-21T03:08:09Z | 2024-04-28T18:13:33Z | https://github.com/langchain-ai/langchain/issues/16331 | 2,092,399,367 | 16,331 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
from langchain_community.chat_models import ChatZhipuAI
model = ChatZhipuAI(
model="chatglm_turbo",
api_key="xxxx",
)
print(model.invoke("hello, what today is today?"))
```
### Description
## Problem description
I tried to call the chatglm_turbo model using ChatZhipuAI and found an error. See screenshot below for errors reported
## Reasons
1. The zhipuai library has been upgraded to 2.0.1; version 2.0 is not compatible with the previous APIs, and it looks like version 1.0 is deprecated.
2. The current community code has not been updated.
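Until the community integration catches up, a pragmatic workaround is to pin the client library, e.g. `pip install "zhipuai<2.0"` (hedged: based on the incompatibility described above, not an official recommendation). A small stdlib check to confirm which major version is actually installed:

```python
import importlib.metadata

def zhipuai_major_version():
    # Returns the installed zhipuai major version, or -1 if not installed.
    try:
        return int(importlib.metadata.version("zhipuai").split(".")[0])
    except importlib.metadata.PackageNotFoundError:
        return -1

print(zhipuai_major_version())
```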
### System Info
```shell
> pip list | grep zhipuai
zhipuai 2.0.1
```
<img width="1004" alt="image" src="https://github.com/langchain-ai/langchain/assets/30918004/076b377b-00cb-45aa-9813-ce7f6916204e">
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | community:ChatZhipuAI is currently not working well because the zhipuai library has been upgraded to 2.0.1 | https://api.github.com/repos/langchain-ai/langchain/issues/16330/comments | 2 | 2024-01-21T02:59:19Z | 2024-04-28T16:22:24Z | https://github.com/langchain-ai/langchain/issues/16330 | 2,092,390,036 | 16,330 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
The following code initializes the chatbot instance using ConversationalRetrievalChain with the 'return_source_documents' parameter:
```
def initialize_chatbot(index_name):
chatbot = ChatOpenAI(
openai_api_key=os.environ["OPENAI_API_KEY"],
model='gpt-3.5-turbo',
temperature=0.2
)
embeddings = OpenAIEmbeddings(openai_api_key=os.environ["OPENAI_API_KEY"])
vectorstore = Pinecone.from_existing_index(index_name, embeddings)
retriever = vectorstore.as_retriever()
memory = ConversationBufferWindowMemory(
k=10,
memory_key="chat_history",
return_messages=True
)
qa = ConversationalRetrievalChain.from_llm(
llm=chatbot,
retriever=retriever,
memory=memory,
return_source_documents=True
)
return qa
```
The following code runs a query:
```
def chat(query, qa):
response = qa(query)
print(response)
query = "what is the nutrition information for the boiled egg recipe?"
chat(query, chatbot)
```
The error I get:
```
File langchain/chains/base.py:314, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    312     raise e
    313 run_manager.on_chain_end(outputs)
--> 314 final_outputs: Dict[str, Any] = self.prep_outputs(
    315     inputs, outputs, return_only_outputs
    316 )
    317 if include_run_info:
    318     final_outputs[RUN_KEY] = RunInfo(run_id=run_manager.run_id)
File langchain/chains/base.py:410, in Chain.prep_outputs(self, inputs, outputs, return_only_outputs)
    408 self._validate_outputs(outputs)
    409 if self.memory is not None:
--> 410 self.memory.save_context(inputs, outputs)
    411 if return_only_outputs:
...
--> 29 raise ValueError(f"One output key expected, got {outputs.keys()}")
    30 output_key = list(outputs.keys())[0]
    31 else:
ValueError: One output key expected, got dict_keys(['answer', 'source_documents'])
```
### Description
I am trying to use the langchain library to return source documents using ConversationalRetrievalChain, but I keep getting an error about expecting only one output key. Looking into the code, it appears to be executing the `__call__` function (deprecated in langchain 0.1.0), which expects a single output key. I am using the most recent langchain version that pip allows (`pip install --upgrade langchain`), which is 0.1.1. How can I get this to execute properly?
Additional notes:
- I am using langchain-openai for ChatOpenAI and OpenAIEmbeddings
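A dependency-free sketch of the key-selection logic behind the error (mirroring the check visible in the traceback above): with two output keys, the memory cannot guess which one to save. So the usual fix (an inference from this code path, not verified against 0.1.1 exactly) is to pass `output_key="answer"` when constructing `ConversationBufferWindowMemory`.

```python
def pick_output(outputs, output_key=None):
    # Mirrors the memory's save_context check: with no explicit output_key,
    # exactly one output key is required.
    if output_key is None:
        if len(outputs) != 1:
            raise ValueError(f"One output key expected, got {outputs.keys()}")
        output_key = next(iter(outputs))
    return outputs[output_key]

result = {"answer": "Boil for 7 minutes.", "source_documents": []}
print(pick_output(result, output_key="answer"))  # Boil for 7 minutes.
```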
### System Info
"pip install --upgrade langchain"
Python 3.11.5
Langchain 0.1.1
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | return_source_documents does not work for ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/16323/comments | 4 | 2024-01-21T00:14:07Z | 2024-01-21T02:15:01Z | https://github.com/langchain-ai/langchain/issues/16323 | 2,092,284,846 | 16,323 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Hi, I am building a chatbot that uses a vector DB to return the most up-to-date news.
How can I set the chain to retrieve the k document vectors sorted by `publish_date`, which is populated as a metadata field?
Here is how I define the chain:
```python
self.chain = RetrievalQAWithSourcesChain.from_chain_type(
    llm=self.llm,
    chain_type="stuff",
    retriever=self.vector_db.db.as_retriever(
        search_type="similarity_score_threshold",
        search_kwargs={"score_threshold": .4, "k": 3},
    ),
    chain_type_kwargs=self.chain_type_kwargs,
    return_source_documents=True,
    verbose=True,
)
```
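Until ordering is supported natively, one workaround is to re-sort the k retrieved documents by the `publish_date` metadata field after retrieval. A sketch with a stand-in document class (assuming ISO-8601 date strings, which compare chronologically as plain strings):

```python
from dataclasses import dataclass, field

@dataclass
class Doc:  # stand-in for langchain's Document, for illustration only
    page_content: str
    metadata: dict = field(default_factory=dict)

def newest_first(docs):
    # Sort retrieved docs by publish_date metadata, newest first.
    return sorted(docs, key=lambda d: d.metadata.get("publish_date", ""), reverse=True)

docs = [
    Doc("old news", {"publish_date": "2023-11-02"}),
    Doc("fresh news", {"publish_date": "2024-01-19"}),
]
print([d.page_content for d in newest_first(docs)])  # ['fresh news', 'old news']
```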
### Motivation
To retrieve the most up-to-date sources in the response
### Your contribution
Helping expand the library | Sort document option using RetrievalQAWithSourcesChain | https://api.github.com/repos/langchain-ai/langchain/issues/16320/comments | 3 | 2024-01-20T21:44:51Z | 2024-04-30T16:19:40Z | https://github.com/langchain-ai/langchain/issues/16320 | 2,092,250,536 | 16,320 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
https://python.langchain.com/docs/use_cases/graph/graph_falkordb_qa
Could you please provide the logic to visualize the graph created using FalkorDB in the documentation? For example, LlamaIndex provides the below code for visualizing the graph:
```
## create graph
from pyvis.network import Network
g = index.get_networkx_graph()
net = Network(notebook=True, cdn_resources="in_line", directed=True)
net.from_nx(g)
net.show("falkordbgraph_draw.html")
```
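In the absence of documented FalkorDB visualization code, a library-agnostic sketch: if the graph can be queried for (subject, relation, object) triples (a hypothetical shape; the actual FalkorDB query API isn't shown here), they can be rendered as a Graphviz DOT digraph with the stdlib alone:

```python
def triples_to_dot(triples):
    # Emit a Graphviz DOT digraph from (subject, relation, object) triples.
    lines = ["digraph G {"]
    for s, r, o in triples:
        lines.append(f'  "{s}" -> "{o}" [label="{r}"];')
    lines.append("}")
    return "\n".join(lines)

print(triples_to_dot([("Alice", "KNOWS", "Bob")]))
```

The resulting string can be saved to a `.dot` file and rendered with Graphviz, or fed to pyvis/networkx as in the LlamaIndex example above.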
### Idea or request for content:
Please provide the logic to visualize the graph in the FalkorDBQAChain documentation (https://python.langchain.com/docs/use_cases/graph/graph_falkordb_qa)
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
In the LangChain documentation about SQL use cases, there seems to be an error with the import statement: `create_retriever_tool` does not exist at the path shown. According to my understanding, the proper usage might require passing `extra_tools` as a parameter when calling `create_sql_agent`, which should accept a sequence of `BaseTool` objects.
<img width="1073" alt="20240121012842" src="https://github.com/langchain-ai/langchain/assets/43747516/bbfbe969-1a17-4822-a80c-514d44fb7fef">
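For what it's worth, in langchain 0.1.x the helper appears to live in `langchain.tools.retriever` rather than the community agent toolkits (an inference from the 0.1 package split; worth verifying against the installed version). A defensive import that won't crash either way:

```python
try:
    # Assumed 0.1.x location; verify against your installed version.
    from langchain.tools.retriever import create_retriever_tool
except ImportError:
    create_retriever_tool = None  # handle the missing helper explicitly
```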
### Idea or request for content:
_No response_ | DOC: cannot import name 'create_retriever_tool' from 'langchain_community.agent_toolkits' | https://api.github.com/repos/langchain-ai/langchain/issues/16317/comments | 4 | 2024-01-20T17:29:24Z | 2024-04-30T16:13:16Z | https://github.com/langchain-ai/langchain/issues/16317 | 2,092,170,271 | 16,317 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
Here is my issue in brief '''
``` Python
import os
poppler_path = 'C:\\Users\\Mohd Kaif\\Downloads\\poppler-23.08.0\\Library\\bin'
os.environ["PATH"] += os.pathsep + poppler_path
```
``` Python
directory = '/content/drive/MyDrive/History_QA_dataset'
```
``` Python
from pathlib import Path
def load_files(directory):
documents = list(Path(directory).iterdir())
return documents
documents = load_files(directory)
print(len(documents))
```
``` Python
documents
```
``` Python
from langchain_community.document_loaders import UnstructuredPDFLoader
loader = UnstructuredPDFLoader("/content/drive/MyDrive/History_QA_dataset/ncert_s_modern_india_bipan_chandra_old_edition-1566975158976.pdf")
pages = loader.load()
```
### Description
Running this raises the following error:
``` Python
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/pdf2image/pdf2image.py in pdfinfo_from_path(pdf_path, userpw, ownerpw, poppler_path, rawdates, timeout, first_page, last_page)
580 env["LD_LIBRARY_PATH"] = poppler_path + ":" + env.get("LD_LIBRARY_PATH", "")
--> 581 proc = Popen(command, env=env, stdout=PIPE, stderr=PIPE)
582
14 frames
FileNotFoundError: [Errno 2] No such file or directory: 'pdfinfo'
During handling of the above exception, another exception occurred:
PDFInfoNotInstalledError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/pdf2image/pdf2image.py in pdfinfo_from_path(pdf_path, userpw, ownerpw, poppler_path, rawdates, timeout, first_page, last_page)
605
606 except OSError:
--> 607 raise PDFInfoNotInstalledError(
608 "Unable to get page count. Is poppler installed and in PATH?"
609 )
PDFInfoNotInstalledError: Unable to get page count. Is poppler installed and in PATH?
```
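Two observations (inferences from the traceback, not certainties): the traceback paths are under `/usr/local/...`, i.e. a Colab/Linux environment, so the Windows poppler path appended earlier has no effect there; on Colab the usual route is `apt-get install poppler-utils`. Since pdf2image shells out to poppler's `pdfinfo` binary, a quick sanity check is:

```python
import shutil

def poppler_available():
    # pdf2image invokes poppler's `pdfinfo` binary; it must be on PATH.
    return shutil.which("pdfinfo") is not None

print(poppler_available())
```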
### System Info
System Information:
Python version: `3.10.10`
Operating System: `Windows 11`
pip == 23.3.1
python == 3.10.10
langchain == 0.1.0
transformers == 4.36.2
sentence_transformers == 2.2.2
unstructured == 0.12.0
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | PDFInfoNotInstalledError: Unable to get page count. Is poppler installed and in PATH? | https://api.github.com/repos/langchain-ai/langchain/issues/16315/comments | 1 | 2024-01-20T15:11:07Z | 2024-04-27T16:24:14Z | https://github.com/langchain-ai/langchain/issues/16315 | 2,092,088,912 | 16,315 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
from langchain_openai.chat_models import ChatOpenAI
chat = ChatOpenAI()
### Description
I am working on Windows 11 with Python 3.11. I am using PyCharm and have installed langchain-openai == 0.0.3.
When I initialize `chat = ChatOpenAI()`, I get the following error:
Traceback (most recent call last):
File "C:\workingfolder\PythonProjects\agents\main.py", line 13, in <module>
chat = ChatOpenAI()
^^^^^^^^^^^^
File "C:\Users\rnema\.virtualenvs\agents-ULuCqbe2\Lib\site-packages\langchain_core\load\serializable.py", line 107, in __init__
super().__init__(**kwargs)
File "C:\Users\rnema\.virtualenvs\agents-ULuCqbe2\Lib\site-packages\pydantic\v1\main.py", line 339, in __init__
values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rnema\.virtualenvs\agents-ULuCqbe2\Lib\site-packages\pydantic\v1\main.py", line 1102, in validate_model
values = validator(cls_, values)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rnema\.virtualenvs\agents-ULuCqbe2\Lib\site-packages\langchain_openai\chat_models\base.py", line 345, in validate_environment
values["client"] = openai.OpenAI(**client_params).chat.completions
^^^^^^^^^^^^^
AttributeError: module 'openai' has no attribute 'OpenAI'
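The pip list below shows `openai==0.27.8`, while `langchain-openai` instantiates the v1 client (`openai.OpenAI`), which only exists from openai 1.x onward; so upgrading (`pip install -U "openai>=1"`) is the likely fix. This is an inference from the traceback plus the version list, not an official compatibility note. A tiny check of a version string:

```python
def openai_client_is_v1(version):
    # openai.OpenAI was introduced with the 1.x client rewrite.
    return int(version.split(".")[0]) >= 1

print(openai_client_is_v1("0.27.8"))  # False, which explains the AttributeError
```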
### System Info
aiohttp==3.9.1
aiosignal==1.3.1
annotated-types==0.6.0
anyio==4.2.0
asgiref==3.7.2
attrs==23.2.0
backoff==2.2.1
bcrypt==4.1.2
build==1.0.3
cachetools==5.3.2
certifi==2023.11.17
charset-normalizer==3.3.2
chroma-hnswlib==0.7.3
chromadb==0.4.22
click==8.1.7
colorama==0.4.6
coloredlogs==15.0.1
dataclasses-json==0.6.3
Deprecated==1.2.14
distro==1.9.0
fastapi==0.109.0
filelock==3.13.1
flatbuffers==23.5.26
frozenlist==1.4.1
fsspec==2023.12.2
google-auth==2.26.2
googleapis-common-protos==1.62.0
greenlet==3.0.3
grpcio==1.60.0
h11==0.14.0
httpcore==1.0.2
httptools==0.6.1
httpx==0.26.0
huggingface-hub==0.20.2
humanfriendly==10.0
idna==3.6
importlib-metadata==6.11.0
importlib-resources==6.1.1
jsonpatch==1.33
jsonpointer==2.4
kubernetes==29.0.0
**langchain==0.0.352
langchain-community==0.0.11
langchain-core==0.1.8
langchain-openai==0.0.3
langsmith==0.0.78**
markdown-it-py==3.0.0
marshmallow==3.20.1
mdurl==0.1.2
mmh3==4.1.0
monotonic==1.6
mpmath==1.3.0
multidict==6.0.4
mypy-extensions==1.0.0
numpy==1.26.3
oauthlib==3.2.2
onnxruntime==1.16.3
openai==0.27.8
opentelemetry-api==1.22.0
opentelemetry-exporter-otlp-proto-common==1.22.0
opentelemetry-exporter-otlp-proto-grpc==1.22.0
opentelemetry-instrumentation==0.43b0
opentelemetry-instrumentation-asgi==0.43b0
opentelemetry-instrumentation-fastapi==0.43b0
opentelemetry-proto==1.22.0
opentelemetry-sdk==1.22.0
opentelemetry-semantic-conventions==0.43b0
opentelemetry-util-http==0.43b0
overrides==7.4.0
packaging==23.2
posthog==3.3.1
protobuf==4.25.2
pulsar-client==3.4.0
pyasn1==0.5.1
pyasn1-modules==0.3.0
pyboxen==1.2.0
pydantic==2.5.3
pydantic_core==2.14.6
Pygments==2.17.2
PyPika==0.48.9
pyproject_hooks==1.0.0
pyreadline3==3.4.1
python-dateutil==2.8.2
python-dotenv==1.0.0
PyYAML==6.0.1
regex==2023.12.25
requests==2.31.0
requests-oauthlib==1.3.1
rich==13.7.0
rsa==4.9
six==1.16.0
sniffio==1.3.0
SQLAlchemy==2.0.25
starlette==0.35.1
sympy==1.12
tenacity==8.2.3
tiktoken==0.5.2
tokenizers==0.15.0
tqdm==4.66.1
typer==0.9.0
typing-inspect==0.9.0
typing_extensions==4.9.0
urllib3==2.1.0
uvicorn==0.25.0
watchfiles==0.21.0
websocket-client==1.7.0
websockets==12.0
wrapt==1.16.0
yarl==1.9.4
zipp==3.17.0
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | using ChatOpenAI gives an error AttributeError: module 'openai' has no attribute 'OpenAI' | https://api.github.com/repos/langchain-ai/langchain/issues/16314/comments | 5 | 2024-01-20T12:02:30Z | 2024-05-22T22:08:15Z | https://github.com/langchain-ai/langchain/issues/16314 | 2,092,028,686 | 16,314 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
from langchain.llms import OpenAI
from langchain.agents import initialize_agent
from langchain.agents.agent_toolkits import ZapierToolkit
from langchain.utilities.zapier import ZapierNLAWrapper
import os
from dotenv import load_dotenv
load_dotenv()
os.getenv('ZAPIER_NLA_API_KEY')
os.getenv('OPENAI_API_KEY')
llm = OpenAI(temperature=.3)
zapier = ZapierNLAWrapper()
toolkit = ZapierToolkit.from_zapier_nla_wrapper(zapier)
agent = initialize_agent(toolkit.tools, llm, agent="zero-shot-react-description", verbose=True)
for tool in toolkit.tools:
print(tool.name)
print(tool.description)
print("\n\n")
agent.run('Send an email to xxx@gmail.com saying hello from Dr. sss')
### Description
Gmail: Send Email
A wrapper around Zapier NLA actions. The input to this tool is a natural language instruction, for example "get the latest email from my bank" or "send a slack message to the #general channel". Each tool will have params associated with it that are specified as a list. You MUST take into account the params when creating the instruction. For example, if the params are ['Message_Text', 'Channel'], your instruction should be something like 'send a slack message to the #general channel with the text hello world'. Another example: if the params are ['Calendar', 'Search_Term'], your instruction should be something like 'find the meeting in my personal calendar at 3pm'. Do not make up params, they will be explicitly specified in the tool description. If you do not have enough information to fill in the params, just say 'not enough information provided in the instruction, missing <param>'. If you get a none or null response, STOP EXECUTION, do not try to another tool!This tool specifically used for: Gmail: Send Email, and has params: ['Body', 'To', 'Subject', 'Cc']
LinkedIn: Create Share Update
A wrapper around Zapier NLA actions. The input to this tool is a natural language instruction, for example "get the latest email from my bank" or "send a slack message to the #general channel". Each tool will have params associated with it that are specified as a list. You MUST take into account the params when creating the instruction. For example, if the params are ['Message_Text', 'Channel'], your instruction should be something like 'send a slack message to the #general channel with the text hello world'. Another example: if the params are ['Calendar', 'Search_Term'], your instruction should be something like 'find the meeting in my personal calendar at 3pm'. Do not make up params, they will be explicitly specified in the tool description. If you do not have enough information to fill in the params, just say 'not enough information provided in the instruction, missing <param>'. If you get a none or null response, STOP EXECUTION, do not try to another tool!This tool specifically used for: LinkedIn: Create Share Update, and has params: ['Comment', 'Visible_To']
> Entering new AgentExecutor chain...
I need to use the Gmail: Send Email tool to complete this task.
Action: Gmail: Send Email
Action Input: {'Body': 'Hello from Dr. Khala', 'To': 'birbal.srivastava@gmail.com', 'Subject': 'Hello', 'Cc': ''}Traceback (most recent call last):
File "/Users/ss/.pyenv/versions/demo311/lib/python3.11/site-packages/langchain/chains/base.py", line 310, in __call__
raise e
File "/Users/ss/.pyenv/versions/demo311/lib/python3.11/site-packages/langchain/chains/base.py", line 304, in __call__
self._call(inputs, run_manager=run_manager)
File "/Users/ss/.pyenv/versions/demo311/lib/python3.11/site-packages/langchain/agents/agent.py", line 1245, in _call
next_step_output = self._take_next_step(
^^^^^^^^^^^^^^^^^^^^^
File "/Users/ss/.pyenv/versions/demo311/lib/python3.11/site-packages/langchain/agents/agent.py", line 1095, in _take_next_step
observation = tool.run(
^^^^^^^^^
File "/Users/ss/.pyenv/versions/demo311/lib/python3.11/site-packages/langchain/tools/base.py", line 365, in run
raise e
File "/Users/ss/.pyenv/versions/demo311/lib/python3.11/site-packages/langchain/tools/base.py", line 337, in run
self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
File "/Users/ss/.pyenv/versions/demo311/lib/python3.11/site-packages/langchain/tools/zapier/tool.py", line 143, in _run
warn_deprecated(
File "/Users/ss/.pyenv/versions/demo311/lib/python3.11/site-packages/langchain/_api/deprecation.py", line 295, in warn_deprecated
raise NotImplementedError(
NotImplementedError: Need to determine which default deprecation schedule to use. within ?? minor releases
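The traceback points at `warn_deprecated` in `langchain/_api/deprecation.py`, which raises when it is not told when the deprecated API will be removed. A simplified sketch (not the actual library code) of that failure path:

```python
# Simplified sketch of the guard producing this error: warn_deprecated
# insists on a removal version, and the Zapier tool's call at tool.py:143
# apparently did not supply one.
def warn_deprecated(since: str, removal: str = "") -> None:
    if not removal:
        raise NotImplementedError(
            "Need to determine which default deprecation schedule to use. "
            "within ?? minor releases"
        )
    print(f"Deprecated since {since}; removal planned for {removal}.")

try:
    warn_deprecated("0.0.319")  # mirrors the tool's call with no removal set
except NotImplementedError as err:
    print(type(err).__name__)  # -> NotImplementedError
```

Since the missing `removal` argument lives inside the library itself, the practical workaround is to move to a langchain release where the Zapier tool passes it (an assumption on my part; check the changelog).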
### System Info
Macbook pro
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | NotImplementedError: Need to determine which default deprecation schedule to use. within ?? minor releases | https://api.github.com/repos/langchain-ai/langchain/issues/16312/comments | 6 | 2024-01-20T10:59:07Z | 2024-07-01T08:01:13Z | https://github.com/langchain-ai/langchain/issues/16312 | 2,092,010,443 | 16,312 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Is there any way to mark my OpenAI tool parameters as mandatory vs. optional for a DynamicStructuredTool?
```javascript
const structuredTool2 = new DynamicStructuredTool({
  name: "get_order_update",
  description:
    "Get the update on a given order id and otp and return the update. " +
    "Both otp and order id are required; location is optional.",
  schema: z.object({
    order_id: z.string().describe("The order id of the order to be tracked"),
    otp: z.string().describe("The otp of the order to be tracked"),
    location: z
      .string()
      .describe("The location of the order to be tracked")
      .optional(),
  }),
  func: ({ order_id, otp, location }) => {
    try {
      if (otp !== "1234") {
        return "Invalid otp";
      }
      let message = `Your order id ${order_id} is on the way. It will be delivered by 5pm today.`;
      if (location !== undefined) {
        message += ` Your order is currently at ${location}`;
      }
      return message;
    } catch (e) {
      return "Systems are busy at the moment, please try again later";
    }
  },
});
```
### Idea or request for content:
_No response_ | Improved docs on creating custom tools for an agent | https://api.github.com/repos/langchain-ai/langchain/issues/16310/comments | 1 | 2024-01-20T09:46:10Z | 2024-04-27T16:24:39Z | https://github.com/langchain-ai/langchain/issues/16310 | 2,091,988,971 | 16,310 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
While initializing the GooglePalm LLM, I encounter the NotImplementedError shown below.
Using LangChain 0.1.1.
### Description

### System Info
Name: langchain
Version: 0.1.1
Summary: Building applications with LLMs through composability
Home-page: https://github.com/langchain-ai/langchain
Author:
Author-email:
License: MIT
Requires: aiohttp, dataclasses-json, jsonpatch, langchain-community, langchain-core, langsmith, numpy, pydantic, PyYAML, requests, SQLAlchemy, tenacity
Required-by: kor, langchain-experimental
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | GooglePalm - NotImplementedError: Need to determine which default deprecation schedule to use. within ?? minor releases | https://api.github.com/repos/langchain-ai/langchain/issues/16308/comments | 2 | 2024-01-20T05:42:39Z | 2024-05-03T16:06:05Z | https://github.com/langchain-ai/langchain/issues/16308 | 2,091,911,843 | 16,308 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
All chat model responses inheriting `BaseModel` are converted to a `dict` via `response.dict()`, which emits this warning in the console:
```bash
PydanticDeprecatedSince20: The `dict` method is deprecated; use `model_dump` instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.5/migration/
response = response.dict()
```
### Description
Need to update the package to migrate to the new pydantic version - https://docs.pydantic.dev/2.0/migration/#changes-to-pydanticbasemodel
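Until the package migrates, a version-tolerant wrapper can avoid the warning without breaking under Pydantic v1. The helper below is a sketch (the names are mine) using plain `getattr`, so it runs even without pydantic installed:

```python
def to_dict(model):
    """Serialize a model via Pydantic v2's model_dump, falling back to v1's dict."""
    dump = getattr(model, "model_dump", None)  # v2 spelling
    if dump is None:
        dump = model.dict  # v1 spelling; deprecated when running under v2
    return dump()

class FakeResponse:  # stand-in for a real chat model response object
    def model_dump(self):
        return {"content": "hi"}

print(to_dict(FakeResponse()))  # -> {'content': 'hi'}
```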
### System Info
langchain==0.0.336
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | upgrade to pydantic v2 | https://api.github.com/repos/langchain-ai/langchain/issues/16306/comments | 5 | 2024-01-20T05:00:06Z | 2024-06-11T17:17:54Z | https://github.com/langchain-ai/langchain/issues/16306 | 2,091,895,669 | 16,306 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
# Oracle DB connection - enable for Oracle DB
DIALECT = 'oracle'
SQL_DRIVER = 'oracledb'
oracle_conn_str = (
    DIALECT + '+' + SQL_DRIVER + '://' + USER_ID + ':' + PASSWORD
    + '@' + HOST_NAME + ':' + str(PORT) + '/?service_name=' + SERVICE_NAME
)
db_engine = create_engine(oracle_conn_str)
db = SQLDatabase(db_engine)

# Agent code
sql_toolkit = SQLDatabaseToolkit(db=db, llm=llm)
sql_toolkit.get_tools()
agent = create_sql_agent(
    llm=llm,
    toolkit=sql_toolkit,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    prefix=zero_prompt,
    extra_tools=[llm_tool],
    handle_parsing_errors=True,
    agent_executor_kwargs={
        "return_intermediate_steps": True,
    },
)
```
### Description
When using the SQL agent on Azure SQL Database, I provide the Database name in the connection string. This allows the SQL agent to successfully retrieve the table names and their schema.
However, when working with the SQL agent on Oracle Database, I provide the UserID, Password, Server name, Host, and Port in the connection string (provided code in example). The connection is established successfully, but the SQL agent encounters an issue in recognizing the tables. I suspect that the problem may arise from my tables being located under a specific schema (let's call it ABC_SCHEMA).
Upon investigation, it seems that when the SQL agent enters the executor chain with the "**Action: sql_db_list_tables**," it fails to list the tables under ABC_SCHEMA. As a result, an error is generated, specifically: **ValueError: An output parsing error occurred. In order to pass this error back to the agent and have it try again, pass handle_parse LLM output: Observation needed.**
In simple terms, the SQL agent is facing difficulty in identifying tables under a particular schema in Oracle Database, leading to the mentioned error. Can someone please help to fix this issue.
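LangChain's `SQLDatabase` accepts a `schema` argument, so pointing it at `ABC_SCHEMA` should let `sql_db_list_tables` find the tables. The connection-URL builder below is a small sketch (the function name and placeholder credentials are mine):

```python
def build_oracle_url(user, password, host, port, service_name):
    # Same URL shape as in the report: oracle+oracledb with a service_name query.
    return (
        f"oracle+oracledb://{user}:{password}@{host}:{port}"
        f"/?service_name={service_name}"
    )

print(build_oracle_url("scott", "tiger", "dbhost", 1521, "ORCLPDB1"))
# Then, assuming the tables live under ABC_SCHEMA:
# db = SQLDatabase(create_engine(url), schema="ABC_SCHEMA")
```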
### System Info
langchain==0.0.348
oracledb==2.0.1
SQLAlchemy==2.0.25
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | SQL Agent is unable to recognize the tables and its schema when connected to a Oracle DB. | https://api.github.com/repos/langchain-ai/langchain/issues/16294/comments | 2 | 2024-01-19T20:47:10Z | 2024-04-27T16:33:47Z | https://github.com/langchain-ai/langchain/issues/16294 | 2,091,393,967 | 16,294 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
from langchain.schema import HumanMessage, AIMessage
llm = ChatGoogleGenerativeAI(model="gemini-pro")
llm([
AIMessage(role="model", content="Hi"),
HumanMessage(role="user", content="Tell me a joke")
]) # gives the error
```
### Description
* Trying to use Langchain's Gemini integration to quickly process a chat history that starts with an AIMessage, as given by the example [here](https://github.com/langchain-ai/streamlit-agent/blob/main/streamlit_agent/basic_streaming.py).
* The model should return a response with no errors because the last message I have given as an input has the role of ``user``, but instead it gives the error: "ChatGoogleGenerativeAIError: Invalid argument provided to Gemini: 400 Please ensure that multiturn requests ends with a user role or a function response."
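Gemini's API appears to require the conversation to both start and end on a user turn, so a pragmatic workaround is to drop (or merge) any leading model message before sending. A sketch with a hypothetical helper, using plain `(role, content)` pairs instead of LangChain message objects:

```python
def ensure_user_first(messages):
    """Drop leading non-user turns; messages are (role, content) pairs."""
    while messages and messages[0][0] != "user":
        messages = messages[1:]
    return messages

history = [("model", "Hi"), ("user", "Tell me a joke")]
print(ensure_user_first(history))  # -> [('user', 'Tell me a joke')]
```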
### System Info
langchain==0.1.1
langchain-community==0.0.13
langchain-core==0.1.12
langchain-experimental==0.0.49
langchain-google-genai==0.0.6
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Gemini integration fails when a list of messages starting with AIMessage is given as input | https://api.github.com/repos/langchain-ai/langchain/issues/16288/comments | 6 | 2024-01-19T17:33:40Z | 2024-04-27T16:39:43Z | https://github.com/langchain-ai/langchain/issues/16288 | 2,090,986,643 | 16,288 |