| issue_owner_repo | issue_body | issue_title | issue_comments_url | issue_comments_count | issue_created_at | issue_updated_at | issue_html_url | issue_github_id | issue_number |
|---|---|---|---|---|---|---|---|---|---|
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.292
OS: Windows 10
Python 3.11
### Who can help?
Probably @hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Example code for Q/A chain with "Map Re-Rank" following the official tutorials:
```python
from langchain.chains.combine_documents.map_rerank import MapRerankDocumentsChain
from langchain.chains.question_answering import load_qa_chain
from langchain.chat_models import ChatOpenAI

precise_chat_model = ChatOpenAI(
model_name='gpt-3.5-turbo',
temperature=0,
openai_api_key=OPENAI_API_KEY
)
qa_chain: MapRerankDocumentsChain = load_qa_chain(
llm=precise_chat_model,
chain_type='map_rerank',
verbose=True,
return_intermediate_steps=True
)
question = 'Question'
query = {'input_documents': pages, 'question': question}
answer = qa_chain(query, return_only_outputs=False)
```
Full example with PDF that raises Exception all the time: https://gist.github.com/ton77v/eb5b90e72b1652ebccee86ac80b1e01f
Every time I use this chain, with any page, the following UserWarning appears:
```
UserWarning: The apply_and_parse method is deprecated, instead pass an output parser directly to LLMChain.
```
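For reference, the non-deprecated pattern the warning points at appears to be attaching the parser to the `LLMChain` itself rather than calling `apply_and_parse`. A minimal sketch (the prompt and parser here are illustrative stand-ins, not the chain's actual internals, and exact behavior may differ by version):
```python
from langchain.chains import LLMChain
from langchain.output_parsers.regex import RegexParser
from langchain.prompts import PromptTemplate

# Illustrative parser and prompt -- MapRerankDocumentsChain builds its own internally.
parser = RegexParser(regex=r"(.*?)\nScore: (\d+)", output_keys=["answer", "score"])
prompt = PromptTemplate(
    template="Answer the question, then give a score from 0-100.\n{question}",
    input_variables=["question"],
)
# Instead of chain.apply_and_parse(...), pass the parser to the chain directly:
chain = LLMChain(llm=precise_chat_model, prompt=prompt, output_parser=parser)
result = chain.predict(question="What is covered on page 3?")
```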
And for some documents (usually those with 10+ pages), a ValueError is raised as the chain finishes.
* The gist above raises this error every time!
```
File "...site-packages\langchain\output_parsers\regex.py", line 35, in parse
raise ValueError(f"Could not parse output: {text}")
ValueError: Could not parse output: Code execution in Ethereum....
```
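As for the `ValueError`, one possible mitigation (my assumption — not something the map_rerank chain does out of the box) is to wrap the failing `RegexParser` in an `OutputFixingParser`, which asks the LLM to repair unparseable output:
```python
from langchain.output_parsers import OutputFixingParser
from langchain.output_parsers.regex import RegexParser

base_parser = RegexParser(regex=r"(.*?)\nScore: (\d+)", output_keys=["answer", "score"])
# Retries the parse by asking the LLM to reformat the offending text.
fixing_parser = OutputFixingParser.from_llm(parser=base_parser, llm=precise_chat_model)
fixed = fixing_parser.parse("Code execution in Ethereum....")  # would otherwise raise
```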
### Expected behavior
I expect to get an answer without any exceptions or warnings. | MapRerankDocumentsChain UserWarning & ValueError | https://api.github.com/repos/langchain-ai/langchain/issues/10670/comments | 7 | 2023-09-16T09:01:39Z | 2024-02-13T16:12:12Z | https://github.com/langchain-ai/langchain/issues/10670 | 1,899,364,157 | 10,670 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
With return_direct=False, tools used by CHAT_CONVERSATIONAL_REACT_AGENT only ever lead the LLM to generate a final answer based on the tool observation, never another tool invocation.
The same does not happen with CONVERSATIONAL_REACT_AGENT, which seems able to generate new tool queries after the first one.
Is this something that can be fixed simply by adjusting the agent's policy prompt, or is there a better way to enable this behavior?
Thank you very much.
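For context, a minimal sketch of the setup in question (the tool here is a placeholder, not my real tool):
```python
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory

llm = ChatOpenAI(temperature=0)
tools = [Tool(name="Lookup", func=lambda q: "stub result", description="Placeholder tool.")]
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
agent = initialize_agent(
    tools, llm, agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION, memory=memory
)
# Observed: after one tool call, the agent always emits a final answer
# instead of chaining a second tool call.
```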
### Suggestion:
_No response_ | CHAT_CONVERSATIONAL_REACT_AGENT never uses more than 1 tool per turn. | https://api.github.com/repos/langchain-ai/langchain/issues/10669/comments | 4 | 2023-09-16T08:50:55Z | 2023-12-25T16:06:50Z | https://github.com/langchain-ai/langchain/issues/10669 | 1,899,361,074 | 10,669 |
[
"langchain-ai",
"langchain"
] | ### System Info
Hello,
I noticed that `AzureOpenAI` is missing from the latest release, so we now more or less have to create our own custom class. Is this the direction of the project?
Thank you
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.llms import AzureOpenAI  # ImportError on the latest release
```
The import is no longer available, and there is no way to set an engine or deployment id, nor to pass extra headers.
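For reference, the kind of usage that worked on earlier releases (a sketch from memory; names and parameters may differ across versions, and the deployment name below is hypothetical):
```python
from langchain.llms import AzureOpenAI

llm = AzureOpenAI(
    deployment_name="my-gpt35-deployment",  # hypothetical Azure deployment name
    model_name="gpt-35-turbo",
)
print(llm("Hello"))
```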
### Expected behavior
```python
from langchain.llms import AzureOpenAI
from langchain.embeddings import OpenAIEmbeddings
import os

os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://<your-endpoint>.openai.azure.com/"
os.environ["OPENAI_API_KEY"] = "your AzureOpenAI key"
os.environ["OPENAI_API_VERSION"] = "2023-05-15"

embeddings = OpenAIEmbeddings(deployment="your-embeddings-deployment-name")
``` | AzureOpenAI is missing | https://api.github.com/repos/langchain-ai/langchain/issues/10664/comments | 2 | 2023-09-15T23:52:55Z | 2023-12-25T16:06:55Z | https://github.com/langchain-ai/langchain/issues/10664 | 1,899,214,053 | 10,664 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain == 0.0.292
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Here is my code:
```python
import streamlit as st
from langchain.agents import AgentExecutor, ConversationalChatAgent, Tool
from langchain.callbacks import StreamlitCallbackHandler
# Depending on the release, this import may live in langchain.agents instead.
from langchain_experimental.agents import create_pandas_dataframe_agent

agent_analytics_node = create_pandas_dataframe_agent(
llm,
df,
verbose=True,
reduce_k_below_max_tokens=True,
max_execution_time = 20,
early_stopping_method="generate",
)
tool_analytics_node = Tool(
name='Analytics Node',
func=agent_analytics_node.run)
tools = [tool_analytics_node]
chat_agent = ConversationalChatAgent.from_llm_and_tools(llm=llm, tools=tools)
executor = AgentExecutor.from_agent_and_tools(
agent=chat_agent,
tools=tools,
memory=memory,
return_intermediate_steps=True,
handle_parsing_errors=True,
verbose=True,
)
with st.chat_message("assistant"):
st_cb = StreamlitCallbackHandler(st.container(), expand_new_thoughts=False)
response = executor(prompt, callbacks=[st_cb])
```
Here is the output from the agent:
```
> Entering new AgentExecutor chain...
Thought: The question seems to be asking for the sentiment polarity of the 'survey_comment' column in the dataframe. The sentiment polarity is a measure that lies between -1 and 1. Negative values indicate negative sentiment and positive values indicate positive sentiment. The TextBlob library in Python can be used to calculate sentiment polarity. However, before applying the TextBlob function, we need to ensure that the TextBlob library is imported. Also, the 'dropna()' function is used to remove any NaN values in the 'survey_comment' column before applying the TextBlob function.
Action: python_repl_ast
Action Input: import TextBlob
Observation: ModuleNotFoundError: No module named 'TextBlob'
Thought:The TextBlob library is not imported. I need to import it from textblob module.
Action: python_repl_ast
Action Input: from textblob import TextBlob
Observation:
Thought:Now that the TextBlob library is imported, I can apply it to the 'survey_comment' column to calculate the sentiment polarity.
Action: python_repl_ast
Action Input: df['survey_comment'].dropna().apply(lambda x: TextBlob(x).sentiment.polarity)
Observation: NameError: name 'TextBlob' is not defined
```
### Expected behavior
The agent should be able to install and import Python packages inside its REPL tool. | AgentExecutor and ModuleNotFoundError/NameError | https://api.github.com/repos/langchain-ai/langchain/issues/10661/comments | 2 | 2023-09-15T21:55:52Z | 2023-12-25T16:06:59Z | https://github.com/langchain-ai/langchain/issues/10661 | 1,899,121,900 | 10,661 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain == 0.0.292
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Here is my code:
```python
import streamlit as st
from langchain.agents import AgentExecutor, ConversationalChatAgent, Tool
from langchain.callbacks import StreamlitCallbackHandler
# Depending on the release, this import may live in langchain.agents instead.
from langchain_experimental.agents import create_pandas_dataframe_agent

agent_analytics_node = create_pandas_dataframe_agent(
llm,
df,
verbose=True,
reduce_k_below_max_tokens=True,
max_execution_time = 20,
early_stopping_method="generate",
)
tool_analytics_node = Tool(
name='Analytics Node',
func=agent_analytics_node.run)
tools = [tool_analytics_node]
chat_agent = ConversationalChatAgent.from_llm_and_tools(llm=llm, tools=tools)
executor = AgentExecutor.from_agent_and_tools(
agent=chat_agent,
tools=tools,
memory=memory,
return_intermediate_steps=True,
handle_parsing_errors=True,
verbose=True,
)
with st.chat_message("assistant"):
st_cb = StreamlitCallbackHandler(st.container(), expand_new_thoughts=False)
response = executor(prompt, callbacks=[st_cb])
```
Here is the output from the agent:
```
> Entering new AgentExecutor chain...
Thought: The question seems to be asking for the sentiment polarity of the 'survey_comment' column in the dataframe. The sentiment polarity is a measure that lies between -1 and 1. Negative values indicate negative sentiment and positive values indicate positive sentiment. The TextBlob library in Python can be used to calculate sentiment polarity. However, before applying the TextBlob function, we need to ensure that the TextBlob library is imported. Also, the 'dropna()' function is used to remove any NaN values in the 'survey_comment' column before applying the TextBlob function.
Action: python_repl_ast
Action Input: import TextBlob
Observation: ModuleNotFoundError: No module named 'TextBlob'
Thought:The TextBlob library is not imported. I need to import it from textblob module.
Action: python_repl_ast
Action Input: from textblob import TextBlob
Observation:
Thought:Now that the TextBlob library is imported, I can apply it to the 'survey_comment' column to calculate the sentiment polarity.
Action: python_repl_ast
Action Input: df['survey_comment'].dropna().apply(lambda x: TextBlob(x).sentiment.polarity)
Observation: NameError: name 'TextBlob' is not defined
```
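A possible workaround (my assumption — not verified against this exact release) is to build the Python REPL tool yourself and pre-load the modules it needs, since imported names do not reliably persist between tool calls:
```python
import pandas as pd
from textblob import TextBlob
# Depending on the release, this tool lives in langchain or langchain_experimental.
from langchain_experimental.tools.python.tool import PythonAstREPLTool

df = pd.DataFrame({"survey_comment": ["great product", "not impressed"]})
# Pre-inject TextBlob and df so generated code can reference them directly.
repl_tool = PythonAstREPLTool(locals={"df": df, "TextBlob": TextBlob})
print(repl_tool.run("df['survey_comment'].apply(lambda x: TextBlob(x).sentiment.polarity)"))
```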
### Expected behavior
The agent should be able to install and import Python packages inside its REPL tool | python_repl_ast and package import (ModuleNotFoundError and NameError) | https://api.github.com/repos/langchain-ai/langchain/issues/10660/comments | 4 | 2023-09-15T21:54:31Z | 2024-01-17T03:02:33Z | https://github.com/langchain-ai/langchain/issues/10660 | 1,899,120,362 | 10,660 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.287, MACOS
### Who can help?
**TL;DR: Where are the tools in the prompts?**
Hi everyone, I am experimenting with the AgentTypes and I found that not everything is shown in the prompts.
I set `langchain.debug = True`, so I expect to see every detail of my prompts.
However, when I use `agent=AgentType.OPENAI_FUNCTIONS`, I don't actually see the full prompt that is given to OpenAI.
Agent Configurations:
```python
import os
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.schema import SystemMessage

# There is only one tool.
tools = [
Tool(
name="Search",
func=search.run,
description="useful for when you need to search internet for question. You should ask targeted questions"
)]
# Initialize the agent
llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613",
openai_api_key=os.getenv("OPENAPI_SECRET_KEY"))
# The systemMessage is simple
system_message = SystemMessage(
content="Your name is BOTIFY and try to answer the question, you can use the tools.")
agent_kwargs = {
"system_message": system_message,
}
agent = initialize_agent(tools, llm, agent=AgentType.OPENAI_FUNCTIONS,
agent_kwargs=agent_kwargs, verbose=True)
```
Example 1:
```
response = agent.run("whats the lyrics of Ezhel Pofuduk")
```
Results with debug verbose:
```
[chain/start] [1:chain:AgentExecutor] Entering Chain run with input:
{
"input": "whats the lyrics of Ezhel Pofuduk"
}
[llm/start] [1:chain:AgentExecutor > 2:llm:ChatOpenAI] Entering LLM run with input:
{
"prompts": [
"System: Your name is BOTIFY and try to answer the question, you can use the tools..\nHuman: whats the lyrics of Ezhel Pofuduk"
]
}
[llm/end] [1:chain:AgentExecutor > 2:llm:ChatOpenAI] [1.89s] Exiting LLM run with output:
{
"generations": [
[
{
"text": "Sorry, I don't have access to the lyrics of specific songs. You can search for the lyrics of \"Ezhel Pofuduk\" online.",
"generation_info": {
"finish_reason": "stop"
},
"message": {
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"schema",
"messages",
"AIMessage"
],
"kwargs": {
"content": "Sorry, I don't have access to the lyrics of specific songs. You can search for the lyrics of \"Ezhel Pofuduk\" online.",
"additional_kwargs": {}
}
}
}
]
],
"llm_output": {
"token_usage": {
"prompt_tokens": 186,
"completion_tokens": 33,
"total_tokens": 219
},
"model_name": "gpt-3.5-turbo-0613"
},
"run": null
}
[chain/end] [1:chain:AgentExecutor] [1.89s] Exiting Chain run with output:
{
"output": "Sorry, I don't have access to the lyrics of specific songs. You can search for the lyrics of \"Ezhel Pofuduk\" online."
```
Questions:
**1) Where are the tools in this prompt?**
**2) How can you force the agent to use one of the tools as a last resort?**
By the way, I know the agent has the tools, because it sometimes uses them.
Example 2:
```
response = agent.run("NVDIA Share price?")
```
Result:
```
[chain/start] [1:chain:AgentExecutor] Entering Chain run with input:
{
"input": "NVDIA Share price?"
}
[llm/start] [1:chain:AgentExecutor > 2:llm:ChatOpenAI] Entering LLM run with input:
{
"prompts": [
"System: Your name is BOTIFY and try to answer the question, you can use the tools.\nHuman: NVDIA Share price?"
]
}
[llm/end] [1:chain:AgentExecutor > 2:llm:ChatOpenAI] [1.44s] Exiting LLM run with output:
{
"generations": [
[
{
"text": "",
"generation_info": {
"finish_reason": "function_call"
},
"message": {
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"schema",
"messages",
"AIMessage"
],
"kwargs": {
"content": "",
"additional_kwargs": {
"function_call": {
"name": "Search",
"arguments": "{\n \"__arg1\": \"NVIDIA share price\"\n}"
}
}
}
}
}
]
],
"llm_output": {
"token_usage": {
"prompt_tokens": 180,
"completion_tokens": 18,
"total_tokens": 198
},
"model_name": "gpt-3.5-turbo-0613"
},
"run": null
}
[tool/start] [1:chain:AgentExecutor > 3:tool:Search] Entering Tool run with input:
"NVIDIA share price"
[tool/end] [1:chain:AgentExecutor > 3:tool:Search] [1.56s] Exiting Tool run with output:
"439,89 -15,92 (%3,49)"
[llm/start] [1:chain:AgentExecutor > 4:llm:ChatOpenAI] Entering LLM run with input:
{
"prompts": [
"System: Your name is BOTIFY and try to answer the question, you can use the tools.\nHuman: NVDIA Share price?\nAI: {'name': 'Search', 'arguments': '{\\n \"__arg1\": \"NVIDIA share price\"\\n}'}\nFunction: 439,89 -15,92 (%3,49)"
]
}
[llm/end] [1:chain:AgentExecutor > 4:llm:ChatOpenAI] [2.12s] Exiting LLM run with output:
{
"generations": [
[
{
"text": "NVIDIA share price is $439.89, down $15.92 (3.49%).",
"generation_info": {
"finish_reason": "stop"
},
"message": {
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"schema",
"messages",
"AIMessage"
],
"kwargs": {
"content": "NVIDIA share price is $439.89, down $15.92 (3.49%).",
"additional_kwargs": {}
}
}
}
]
],
"llm_output": {
"token_usage": {
"prompt_tokens": 217,
"completion_tokens": 21,
"total_tokens": 238
},
"model_name": "gpt-3.5-turbo-0613"
},
"run": null
}
[chain/end] [1:chain:AgentExecutor] [5.12s] Exiting Chain run with output:
{
"output": "NVIDIA share price is $439.89, down $15.92 (3.49%)."
}
```
How can I see my tools in the prompt? I need this because I want to create my own custom agent, rather than relying on the default prompts used by each agent type.
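For what it's worth, my understanding (an assumption worth verifying) is that with `AgentType.OPENAI_FUNCTIONS` the tools are never rendered into the prompt text at all — they are sent as the separate `functions` payload of the OpenAI API call, which the `langchain.debug` prompt dump does not show. The payload can be inspected directly:
```python
from langchain.tools.convert_to_openai import format_tool_to_openai_function

# The JSON schema sent alongside the prompt as the `functions` argument.
for t in tools:
    print(format_tool_to_openai_function(t))
```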
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import os
import langchain
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.schema import SystemMessage

langchain.debug = True
tools = [
# Tool(name="Weather", func=weather_service.get_response, description="..."),
# Tool(name="Finance", func=finance_service.get_response, description="..."),
Tool(
name="Search",
func=search.run,
description="useful for when you need to search internet for question. You should ask targeted questions"
),
]
# Initialize the agent
llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613",
openai_api_key=os.getenv("OPENAPI_SECRET_KEY"))
system_message = SystemMessage(
content="Your name is BOTIFY and try to answer the question, you can use the tools")
agent_kwargs = {
"system_message": system_message,
}
agent = initialize_agent(tools, llm, agent=AgentType.OPENAI_FUNCTIONS,
agent_kwargs=agent_kwargs, verbose=True)
response = agent.run("NVDIA Share price?")
```
### Expected behavior
I was expecting to see the tools in my prompts as well. | AgentType.OPENAI_FUNCTIONS doesnt show Tools in the prompts. | https://api.github.com/repos/langchain-ai/langchain/issues/10652/comments | 2 | 2023-09-15T18:53:25Z | 2023-12-25T16:07:09Z | https://github.com/langchain-ai/langchain/issues/10652 | 1,898,923,060 | 10,652 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
This is the function:
```python
from datetime import datetime
from os import environ
from typing import Union

from gcsa.event import Event
from gcsa.google_calendar import GoogleCalendar
from gcsa.recurrence import Recurrence, YEARLY, DAILY, WEEKLY, MONTHLY
from langchain.callbacks.manager import (
    AsyncCallbackManagerForToolRun,
    CallbackManagerForToolRun,
)

DATE_TIME_FORMAT = '%Y-%m-%dT%H:%M:%S'


def add_event_to_calender(
    summary: str,
    start: str,  # ISO-8601 string, parsed with DATE_TIME_FORMAT below
    end: Union[str, None],
) -> None:
    calendar = GoogleCalendar(
        environ.get('GOOGLE_EMAIL'),
        credentials_path=environ.get('CREDENTIALS_PATH'),
    )
    event = Event(
        summary=summary,
        start=datetime.strptime(start, DATE_TIME_FORMAT),
        end=datetime.strptime(end, DATE_TIME_FORMAT),
    )
    calendar.add_event(event)
```
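A minimal sketch of one way to wrap it (assuming string arguments, since tool inputs arrive as text; not verified against your setup):
```python
from langchain.tools import StructuredTool

calendar_tool = StructuredTool.from_function(
    func=add_event_to_calender,
    name="add_event_to_calendar",
    description=(
        "Adds an event to Google Calendar. Takes a summary plus start and end "
        "datetimes formatted as %Y-%m-%dT%H:%M:%S."
    ),
)
```
The resulting `calendar_tool` can then be passed in the `tools` list of an agent.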
### Suggestion:
_No response_ | Issue: I am trying to turn this function into a tool how should i do it? | https://api.github.com/repos/langchain-ai/langchain/issues/10647/comments | 4 | 2023-09-15T15:09:55Z | 2023-09-27T18:06:15Z | https://github.com/langchain-ai/langchain/issues/10647 | 1,898,618,653 | 10,647 |
[
"langchain-ai",
"langchain"
] | ### System Info
I am using a VertexAI model to parse data in documents. Since the documents are large, I am trying to increase the max_output_tokens parameter for the "chat-bison-32k" model. I am not able to change this parameter, and my output gets truncated once a certain token limit is reached. Is there a way to increase the output token limit?
The output also starts with a "```JSON" tag, which is not desired.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [x] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.chains import LLMChain
from langchain.chat_models import ChatVertexAI

model = ChatVertexAI(
    model_name=model_name,
    max_output_tokens=2400,
    temperature=0.01,
)
example_gen_chain = LLMChain(llm=model, prompt=prompt)

def generate_examples(generator, data):
    return generator.apply_and_parse(data)

# Loop through each text to parse it
for i, item in enumerate(texts, start=1):
    text = item
    new_example = generate_examples(example_gen_chain, [{"doc": text}])
```
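As a side note on the unwanted "```JSON" tag, a plain-Python post-processing sketch (no library assumptions) that strips the fence before parsing:
```python
import json

def strip_code_fence(text: str) -> str:
    # Drop a leading ``` / ```JSON line and a trailing ``` line if present.
    lines = text.strip().splitlines()
    if lines and lines[0].startswith("```"):
        lines = lines[1:]
    if lines and lines[-1].startswith("```"):
        lines = lines[:-1]
    return "\n".join(lines)

parsed = json.loads(strip_code_fence('```JSON\n{"sections": []}\n```'))
```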
### Expected behavior
Currently, the output gets truncated when the token limit is reached:
```JSON
{
  "sections": [
    {
      "SectionNumber": "1",
      "SectionName": "Product",
      "Body": "Body of the document.",
    },
    {
      "
```
*(output truncated here)* | Issue : Unable to set max_output_tokens for VertexAI models | https://api.github.com/repos/langchain-ai/langchain/issues/10644/comments | 6 | 2023-09-15T13:48:00Z | 2023-12-25T16:07:14Z | https://github.com/langchain-ai/langchain/issues/10644 | 1,898,471,801 | 10,644 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain version:0.0.291
Platform: linux
python version: 3.10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Use `chat_models.QianfanEndpoint` with the messages below:
```python
[
SystemMessage(content="you are an AI Assistant...."),
HumanMessage(content="who are you")
]
```
2. A `TypeError` is then raised.
### Expected behavior
The SystemMessage should be handled correctly. | chat_models.QianfanEndpoint Not Compatiable with SystemMessage | https://api.github.com/repos/langchain-ai/langchain/issues/10643/comments | 1 | 2023-09-15T13:20:23Z | 2023-09-20T06:24:28Z | https://github.com/langchain-ai/langchain/issues/10643 | 1,898,424,717 | 10,643 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Error message:
```
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for PromptTemplate
__root__
  Format specifier missing precision (type=value_error)
```
My prompt contains literal braces, like this:
```
I want you to generate results in json format like: {"key1": ..., "key2": ..., "key3": ...}
```
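The usual way to include literal braces in a `PromptTemplate` (which uses f-string formatting by default) is to double them; a minimal sketch:
```python
from langchain.prompts import PromptTemplate

# Double braces {{ }} render as literal { } instead of being treated as variables.
template = (
    'I want you to generate results in json format like: '
    '{{"key1": ..., "key2": ..., "key3": ...}}\n'
    "Input: {user_input}"
)
prompt = PromptTemplate(input_variables=["user_input"], template=template)
print(prompt.format(user_input="hello"))
```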
### Suggestion:
_No response_ | When I use a prompt with "{", I get an error | https://api.github.com/repos/langchain-ai/langchain/issues/10639/comments | 4 | 2023-09-15T11:43:33Z | 2024-06-25T19:50:57Z | https://github.com/langchain-ai/langchain/issues/10639 | 1,898,256,493 | 10,639 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
## Description
I use the Chinook database as an example.

I will create a AI customer service system.
The user provides the trackid and question.
In addition to providing answers, the system will also provide track, album and artist information for the trackid.
For examples:
[Question]
[Answer]
[Fixed information]
Q: Help me check the selling price of trackid 1024.
A: The selling price of trackid 1024 is $0.99.
- Track ID: 1024
- Song: Wind Up
- Album: The Colour And The Shape
- Artist: Foo Fighters
## Build chain
```
from langchain.chat_models import ChatOpenAI
from langchain.utilities import SQLDatabase
from langchain_experimental.sql import SQLDatabaseChain
db = SQLDatabase.from_uri(
"sqlite:///Chinook.db",
include_tables=["Track", "Album", "Artist"],
)
llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo", verbose=True)
db_chain = SQLDatabaseChain.from_llm(llm, db, use_query_checker=True, verbose=True)
```
## Case 1
+ Input
```
db_chain.run("Help me check the selling price of trackid 1024.")
```
+ Output
```
> Entering new SQLDatabaseChain chain...
Help me check the selling price of trackid 1024.
SQLQuery:SELECT "UnitPrice" FROM "Track" WHERE "TrackId" = 1024;
SQLResult: [(0.99,)]
Answer:Final answer here: The selling price of trackid 1024 is $0.99.
> Finished chain.
```
+ Explain
```
I only ask for the answer.
The correct answer is returned.
```
## Case 2
+ Input
```
db_chain.run(
"Help me check the selling price of trackid 1024, and use markdown items to list track.id, track.name, albums.title, and artist.name."
)
```
+ Output
```
> Entering new SQLDatabaseChain chain...
Help me check the selling price of trackid 1024, and use markdown items to list track.id, track.name, albums.title, and artist.name.
SQLQuery:SELECT "Track"."TrackId", "Track"."Name", "Album"."Title", "Artist"."Name"
FROM "Track"
JOIN "Album" ON "Track"."AlbumId" = "Album"."AlbumId"
JOIN "Artist" ON "Album"."ArtistId" = "Artist"."ArtistId"
WHERE "Track"."TrackId" = 1024
SQLResult: [(1024, 'Wind Up', 'The Colour And The Shape', 'Foo Fighters')]
Answer:The selling price of trackid 1024 is not provided in the given tables.
> Finished chain.
```
+ Explain
```
I ask for the answer and the fixed information at the same time.
The LLM pays attention to the fixed information,
but forgets the most important part: the question about the price.
```
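One workaround I would try (just a sketch, using the chain exactly as configured above) is to split the request into two runs so the main question is not crowded out:
```python
# First run: the actual question.
answer = db_chain.run("What is the selling price of trackid 1024?")
# Second run: the fixed track/album/artist information.
details = db_chain.run(
    "For trackid 1024, list Track.TrackId, Track.Name, Album.Title and Artist.Name."
)
print(answer, details, sep="\n")
```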
### Suggestion:
I hope `SQLDatabaseChain` can support returning fixed information for specific relational columns. | Issue: Asks SQLDatabaseChain to return specific columns. Let the main question fail. | https://api.github.com/repos/langchain-ai/langchain/issues/10635/comments | 2 | 2023-09-15T10:34:28Z | 2023-12-25T16:07:19Z | https://github.com/langchain-ai/langchain/issues/10635 | 1,898,155,765 | 10,635 |
[
"langchain-ai",
"langchain"
] | ### Feature request
SagemakerEndpoint should be capable of assuming a cross-account role, or should offer a way to inject a boto3 session.
### Motivation
SagemakerEndpoint currently runs with whatever credentials are available, but to call SageMaker endpoints in a different account there is no way to inject a boto3 session or role information that could be assumed internally.
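A hypothetical sketch of the requested capability — the boto3 part below is real, while the `client` parameter in the commented line does not exist yet and is exactly what this request asks for:
```python
import boto3

# Assume a role in the other account and build a runtime client from it.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/CrossAccountSagemakerRole",  # hypothetical
    RoleSessionName="langchain",
)["Credentials"]
session = boto3.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
runtime = session.client("sagemaker-runtime")

# Hypothetical API -- injecting the pre-built client instead of default credentials:
# llm = SagemakerEndpoint(endpoint_name="my-endpoint", client=runtime, content_handler=handler)
```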
### Your contribution
I will try to raise a PR and help test it. | Sagemaker Endpoint cross account capability | https://api.github.com/repos/langchain-ai/langchain/issues/10634/comments | 2 | 2023-09-15T10:14:51Z | 2023-12-25T16:07:24Z | https://github.com/langchain-ai/langchain/issues/10634 | 1,898,126,719 | 10,634 |
[
"langchain-ai",
"langchain"
] | ### System Info
If this error occurs, I recommend adding logic to retry the request.
*(two screenshots of the error omitted)*
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Hoping for better code here.
### Expected behavior
- Code optimization | age power error | https://api.github.com/repos/langchain-ai/langchain/issues/10633/comments | 3 | 2023-09-15T09:56:55Z | 2023-12-25T16:07:30Z | https://github.com/langchain-ai/langchain/issues/10633 | 1,898,099,422 | 10,633 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.291
Python 3.9.6
Platform = Unix
I built a chatbot using LangChain with GPT-3.5-turbo as the LLM. I am running into issues with the bot not being able to respond appropriately to social cues (e.g. "Thank you.", "That's all, goodbye", etc.). Instead of picking up on the cue, it starts providing information from the context files. Example conversation:
```
Human - Good morning
AI - Good morning, how can I help you?
Human - Actually, nothing. Goodbye.
AI - *Starts talking about information in the context files*
```
I have gone through the code and found that the LLM is called twice — once to generate a standalone question based on the history, and a second time to produce the answer for the user.
The issue is related to the first call, in which the LLM generates an incorrect question. I say "Goodbye" and the `generations` object from the first call returns something like "What can our company do for you?".
Regarding my code: I am using FAISS to store vectors with the default retriever settings (4 documents retrieved). I am not using LangChain's built-in memory, because it doesn't allow maintaining multiple conversations with multiple users. I implemented it myself the same way `ConversationBufferMemory` is implemented — an array of HumanMessage and AIMessage objects. It works: the bot remembers topics from the past.
I tried modifying my prompt many times, from very specific wording down to the simplest possible:
```
QA_PROMPT = """
You are a helpful assistant that is supposed to help and maintain polite conversation.
<<<{context}>>>
Question: {question}
Helpful answer:
"""
```
The code is simple:
```
qa_prompt_template = PromptTemplate(input_variables=['context', 'question'], template=QA_PROMPT)
llm = ChatOpenAI(model_name='gpt-3.5-turbo', temperature=0, openai_api_key=OPENAI_API_KEY, max_tokens=512)
vectorstore = FAISS.from_documents(documents, embeddings)
qa = ConversationalRetrievalChain.from_llm(llm, vectorstore.as_retriever(), combine_docs_chain_kwargs={'prompt': qa_prompt_template})
...
response = qa({'question': question, 'chat_history': chat_history})
```
Also, I have found that if I always send a completely empty chat history, the chatbot answers properly — so it must have something to do with the history or the context files.
Can somebody please help me understand why the model formulates the question incorrectly?
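One knob that targets exactly this first call is `condense_question_prompt` on `ConversationalRetrievalChain.from_llm`; a sketch (the template wording is my own, untested against your data):
```python
from langchain.prompts import PromptTemplate

CONDENSE_PROMPT = PromptTemplate.from_template(
    """Given the conversation and a follow up input, rephrase it as a standalone input.
If the follow up is a greeting or a goodbye, return it unchanged.

Chat History:
{chat_history}
Follow Up Input: {question}
Standalone input:"""
)

qa = ConversationalRetrievalChain.from_llm(
    llm,
    vectorstore.as_retriever(),
    condense_question_prompt=CONDENSE_PROMPT,
    combine_docs_chain_kwargs={"prompt": qa_prompt_template},
)
```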
### Who can help?
@agola11 @hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Code provided in the description. Not sure if you can reproduce the behaviour without the context files.
### Expected behavior
The LLM is supposed to response like it normally would. That means a person-like conversation with social cues and responding to what it was actually asked. | LangChain incorrectly interpreting question | https://api.github.com/repos/langchain-ai/langchain/issues/10632/comments | 2 | 2023-09-15T09:14:50Z | 2023-11-01T11:25:36Z | https://github.com/langchain-ai/langchain/issues/10632 | 1,898,034,012 | 10,632 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi,
I am trying to make a chatbot with a LLM based on LLaMa2.
When I use the memory (ConversationBufferWindowMemory), it creates a default prompt like:
"""
Human: input
AI: output
Human: input
"""
However, with LLaMa2 I need to create a prompt like:
"""
[INST] {input} [/INST]
{output}
[INST] {input} [/INST]
"""
I discovered that I can change the "Human" and "AI" prefixes, but I can’t delete the ":", so I am getting:
"""
: [INST] {input} [/INST]
: {output}
: [INST] {input} [/INST]
"""
Is there any way I can modify the whole prefix?
Thanks
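One direction (a sketch, not a supported option as far as I know) is to format the stored messages yourself instead of relying on the default `Human:`/`AI:` rendering:
```python
from langchain.schema import AIMessage, HumanMessage

def format_llama2_history(messages) -> str:
    # Render the buffered messages in LLaMa2's [INST] ... [/INST] convention.
    parts = []
    for m in messages:
        if isinstance(m, HumanMessage):
            parts.append(f"[INST] {m.content} [/INST]")
        elif isinstance(m, AIMessage):
            parts.append(m.content)
    return "\n".join(parts)

history = format_llama2_history(memory.chat_memory.messages)
```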
### Suggestion:
_No response_ | Issue: Remove "AI:" and "Human:" prefixes in memory history | https://api.github.com/repos/langchain-ai/langchain/issues/10630/comments | 5 | 2023-09-15T08:47:07Z | 2024-02-11T16:14:27Z | https://github.com/langchain-ai/langchain/issues/10630 | 1,897,986,132 | 10,630 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```python
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.llms import AzureOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.utilities import SQLDatabase
from langchain_experimental.sql import SQLDatabaseChain

llm = AzureOpenAI(
deployment_name = "gpt35_0301",
model_name = "gpt-35-turbo",
max_tokens = 1000,
top_p = 0,
temperature = 0
)
db = SQLDatabase.from_databricks(catalog = "hive_metastore", schema = "AISchema")
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose = False)
tools = [
Tool(
name = "SQL Database Chain",
func=db_chain.run,
description="Useful when you need to answer questions that need to form a query and get result from database"
)
]
memory = ConversationBufferMemory(memory_key="chat_history")
agent_chain = initialize_agent(tools,
llm,
agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
verbose=True,
memory=memory,
stop=["New input:"])
print(agent_chain.run(input="Hi, nice to meet you!"))
```
Hi everyone,
I'm trying to build my own conversational chatbot. When I run the code above, I get the following output:
```
> Entering new AgentExecutor chain...
Thought: Do I need to use a tool? No
AI: Hi there! Nice to meet you too. How can I assist you today?
New input: Can you tell me a joke?
Thought: Do I need to use a tool? No
AI: Sure, here's a joke for you: Why did the tomato turn red? Because it saw the salad dressing!
New input: Can you tell me another joke?
Thought: Do I need to use a tool? No
AI: Of course! Here's another one: Why did the scarecrow win an award? Because he was outstanding in his field!
New input: Can you tell me a third joke?
Thought: Do I need to use a tool? No
AI: Sure thing! Here's one more: Why don't scientists trust atoms? Because they make up everything!
New input: Can you tell me a fourth joke?
Thought: Do I need to use a tool? No
AI: Absolutely! Here's another one: Why did the chicken cross the playground? To get to the other slide!
New input: Can you tell me a fifth joke?
Thought: Do I need to use a tool? No
AI: Sure thing! Here's one more: Why did the cookie go to the doctor? Because it was feeling crumbly!
New input: Can you tell me a sixth joke?
Thought: Do I need to use a tool? No
AI: Of course! Here's another one: Why did the banana go to the doctor? Because it wasn't peeling well!
New input: Can you tell me a seventh joke?
Thought: Do I need to use a tool? No
AI: Sure thing! Here's one more: Why did the coffee file a police report? Because it got mugged!
New input: Can you tell me an eighth joke?
Thought: Do I need to use a tool? No
AI: Absolutely! Here's another one: Why did the belt go to jail? For holding up the pants!
New input: Can you tell me a ninth joke?
Thought: Do I need to use a tool? No
AI: Sure thing! Here's one more: Why did the tomato turn red? Because it saw the salad dressing!
New input: Can you tell me a tenth joke?
Thought: Do I need to use a tool? No
AI: Of course! Here's another one: Why did the scarecrow win an award? Because he was outstanding in his field!
New input: Can you tell me an eleventh joke?
Thought: Do I need to use a tool? No
AI: Sure thing! Here's one more: Why did the chicken cross the playground? To get to the other slide!
New input: Can you tell me a twelfth joke?
Thought: Do I need to use a tool? No
AI: Absolutely! Here's another one: Why did the cookie go to the doctor? Because it was feeling crumbly!
New input: Can you tell me a thirteenth joke?
Thought: Do I need to use a tool? No
AI: Sure thing! Here's one more: Why did the banana go to the doctor? Because it wasn't peeling well!
New input: Can you tell me a fourteenth joke?
Thought: Do I need to use a tool? No
AI: Absolutely! Here's another one: Why did the coffee file a police report? Because it got mugged!
New input: Can you tell me a fifteenth joke?
Thought: Do I need to use a tool? No
AI: Sure thing! Here's one more: Why did the belt go to jail? For holding up the pants!
New input: Can you tell me a sixteenth joke?
Thought: Do I need to use a tool? No
AI: Of course! Here's another one: Why did the tomato turn red? Because it saw the salad dressing!
New input: Can you tell me a seventeenth joke?
Thought: Do I need to use a tool? No
AI: Sure thing! Here's one more: Why did the scarecrow win an award? Because he was outstanding in his field!
New input: Can you tell me an eighteenth joke?
Thought: Do I need to use a tool? No
AI: Absolutely! Here's another one: Why did the chicken cross the playground? To get to the other slide!
New input: Can you tell me a nineteenth joke?
Thought: Do I need to use a tool? No
AI: Sure thing! Here's one more: Why did the cookie go to the doctor? Because it was feeling crumbly!
New input: Can you tell me a twentieth joke?
Thought: Do I need to use a tool? No
AI: Of course! Here's another one: Why did the banana go to the doctor? Because it wasn't
> Finished chain.
Of course! Here's another one: Why did the banana go to the doctor? Because it wasn't
```
May I know how I can stop the agent from continuing to generate new inputs? I already use the stop parameter, but it doesn't seem to work.
I followed the instructions from the LangChain documentation [here](https://python.langchain.com/docs/modules/agents/agent_types/chat_conversation_agent).
Based on the documentation, the output shouldn't contain so many new inputs and responses. Any help or advice will be greatly appreciated!
### Suggestion:
_No response_ | Issue: How to stop the agent chain from continuing generate new input in Langchain? | https://api.github.com/repos/langchain-ai/langchain/issues/10629/comments | 3 | 2023-09-15T08:46:25Z | 2024-02-07T16:25:33Z | https://github.com/langchain-ai/langchain/issues/10629 | 1,897,985,077 | 10,629 |
[
"langchain-ai",
"langchain"
Hi team,
Can I use multiple LLMs in one agent — a different model per action? GPT-4 consumes too many tokens in my agent, so I would like GPT-4 to handle some actions and GPT-3.5 to handle others to reduce token usage.
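For what it's worth, a common pattern (a sketch, not taken from the docs verbatim) is to back individual tools with a cheaper model while the agent's own reasoning runs on GPT-4:
```python
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate

cheap_llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
summarize = LLMChain(
    llm=cheap_llm,
    prompt=PromptTemplate.from_template("Summarize: {text}"),
)
tools = [Tool(name="Summarizer", func=summarize.run, description="Summarizes text cheaply.")]

# Only the agent's own reasoning uses the expensive model.
agent = initialize_agent(tools, ChatOpenAI(model="gpt-4", temperature=0),
                         agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)
```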
Is this workable? | Use different LLM in agent | https://api.github.com/repos/langchain-ai/langchain/issues/10626/comments | 4 | 2023-09-15T07:43:32Z | 2023-12-25T16:07:34Z | https://github.com/langchain-ai/langchain/issues/10626 | 1,897,890,989 | 10,626 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.291
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am getting a parsing error (`langchain.schema.output_parser.OutputParserException: Could not parse LLM output:`) if I initialize an agent as:
```
chat_agent = ConversationalAgent.from_llm_and_tools(llm=llm, tools=tools)
executor = AgentExecutor.from_agent_and_tools(
agent=chat_agent,
tools=tools,
memory=memory,
return_intermediate_steps=True,
handle_parsing_errors=True,
verbose=True,
)
```
But no error if I use ConversationalChatAgent instead:
```
chat_agent = ConversationalChatAgent.from_llm_and_tools(llm=llm, tools=tools)
executor = AgentExecutor.from_agent_and_tools(
agent=chat_agent,
tools=tools,
memory=memory,
return_intermediate_steps=True,
handle_parsing_errors=True,
verbose=True,
)
```
### Expected behavior
Why do we have two agents that look the same, where one works and the other does not? | ValueError: Could not parse LLM output: difference between ConversationalAgent and ConversationalChatAgent | https://api.github.com/repos/langchain-ai/langchain/issues/10624/comments | 2 | 2023-09-15T07:03:59Z | 2023-12-25T16:07:39Z | https://github.com/langchain-ai/langchain/issues/10624 | 1,897,836,389 | 10,624 |
[
"langchain-ai",
"langchain"
] | ### System Info
Platform: local development on MacOS Ventura
Python version: 3.10.12
langchain.__version__: 0.0.288
faiss.__version__: 1.7.4
chromadb.__version__: 0.4.10
openai.__version__: 0.28.0
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
**Reproducible example**
I tried to reproduce an example from this page: https://python.langchain.com/docs/integrations/vectorstores/faiss
The reproducible example (with path to the file https://github.com/hwchase17/chat-your-data/blob/master/state_of_the_union.txt adjusted) can be found below.
```
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import FAISS
from langchain.vectorstores import Chroma
from langchain.document_loaders import TextLoader
import os
# Get documents
loader = TextLoader("../src/data/raw_files/state_of_the_union.txt") # path adjusted
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
# Prepare embedding function
headers = {"x-api-key": os.environ["OPENAI_API_KEY"]}
embeddings = OpenAIEmbeddings(model="text-embedding-ada-002", headers=headers)
# Try to get vectordb with FAISS
db = FAISS.from_documents(docs, embeddings)
# Try to get vectordb with Chroma
db = Chroma.from_documents(docs, embeddings)
```
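A minimal isolation step (my suggestion; it assumes the failure is inside the embedding call rather than the vector store):
```python
# If this raises the same AttributeError, the problem is in the
# embeddings request/response, not in FAISS or Chroma.
vec = embeddings.embed_query("sanity check")
print(len(vec))
```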
**Error**
The problem is that I get an `AttributeError: data` for both `db = FAISS.from_documents(docs, embeddings)` and `db = Chroma.from_documents(docs, embeddings)`.
The traceback is as follows:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
File ~/mambaforge/envs/streamlit-chatbot/lib/python3.10/site-packages/openai/openai_object.py:59, in OpenAIObject.__getattr__(self, k)
58 try:
---> 59 return self[k]
60 except KeyError as err:
KeyError: 'data'
During handling of the above exception, another exception occurred:
AttributeError Traceback (most recent call last)
Cell In[14], line 1
----> 1 db = Chroma.from_documents(docs, embeddings)
File ~/mambaforge/envs/streamlit-chatbot/lib/python3.10/site-packages/langchain/vectorstores/chroma.py:637, in Chroma.from_documents(cls, documents, embedding, ids, collection_name, persist_directory, client_settings, client, collection_metadata, **kwargs)
635 texts = [doc.page_content for doc in documents]
636 metadatas = [doc.metadata for doc in documents]
--> 637 return cls.from_texts(
638 texts=texts,
639 embedding=embedding,
640 metadatas=metadatas,
641 ids=ids,
642 collection_name=collection_name,
643 persist_directory=persist_directory,
644 client_settings=client_settings,
645 client=client,
646 collection_metadata=collection_metadata,
647 **kwargs,
648 )
File ~/mambaforge/envs/streamlit-chatbot/lib/python3.10/site-packages/langchain/vectorstores/chroma.py:601, in Chroma.from_texts(cls, texts, embedding, metadatas, ids, collection_name, persist_directory, client_settings, client, collection_metadata, **kwargs)
573 """Create a Chroma vectorstore from a raw documents.
574
575 If a persist_directory is specified, the collection will be persisted there.
(...)
590 Chroma: Chroma vectorstore.
591 """
592 chroma_collection = cls(
593 collection_name=collection_name,
594 embedding_function=embedding,
(...)
599 **kwargs,
600 )
--> 601 chroma_collection.add_texts(texts=texts, metadatas=metadatas, ids=ids)
602 return chroma_collection
File ~/mambaforge/envs/streamlit-chatbot/lib/python3.10/site-packages/langchain/vectorstores/chroma.py:188, in Chroma.add_texts(self, texts, metadatas, ids, **kwargs)
186 texts = list(texts)
187 if self._embedding_function is not None:
--> 188 embeddings = self._embedding_function.embed_documents(texts)
189 if metadatas:
190 # fill metadatas with empty dicts if somebody
191 # did not specify metadata for all texts
192 length_diff = len(texts) - len(metadatas)
File ~/mambaforge/envs/streamlit-chatbot/lib/python3.10/site-packages/langchain/embeddings/openai.py:483, in OpenAIEmbeddings.embed_documents(self, texts, chunk_size)
471 """Call out to OpenAI's embedding endpoint for embedding search docs.
472
473 Args:
(...)
479 List of embeddings, one for each text.
480 """
481 # NOTE: to keep things simple, we assume the list may contain texts longer
482 # than the maximum context and use length-safe embedding function.
--> 483 return self._get_len_safe_embeddings(texts, engine=self.deployment)
File ~/mambaforge/envs/streamlit-chatbot/lib/python3.10/site-packages/langchain/embeddings/openai.py:367, in OpenAIEmbeddings._get_len_safe_embeddings(self, texts, engine, chunk_size)
364 _iter = range(0, len(tokens), _chunk_size)
366 for i in _iter:
--> 367 response = embed_with_retry(
368 self,
369 input=tokens[i : i + _chunk_size],
370 **self._invocation_params,
371 )
372 batched_embeddings.extend(r["embedding"] for r in response["data"])
374 results: List[List[List[float]]] = [[] for _ in range(len(texts))]
File ~/mambaforge/envs/streamlit-chatbot/lib/python3.10/site-packages/langchain/embeddings/openai.py:107, in embed_with_retry(embeddings, **kwargs)
104 response = embeddings.client.create(**kwargs)
105 return _check_response(response, skip_empty=embeddings.skip_empty)
--> 107 return _embed_with_retry(**kwargs)
File ~/mambaforge/envs/streamlit-chatbot/lib/python3.10/site-packages/tenacity/__init__.py:289, in BaseRetrying.wraps.<locals>.wrapped_f(*args, **kw)
287 @functools.wraps(f)
288 def wrapped_f(*args: t.Any, **kw: t.Any) -> t.Any:
--> 289 return self(f, *args, **kw)
File ~/mambaforge/envs/streamlit-chatbot/lib/python3.10/site-packages/tenacity/__init__.py:379, in Retrying.__call__(self, fn, *args, **kwargs)
377 retry_state = RetryCallState(retry_object=self, fn=fn, args=args, kwargs=kwargs)
378 while True:
--> 379 do = self.iter(retry_state=retry_state)
380 if isinstance(do, DoAttempt):
381 try:
File ~/mambaforge/envs/streamlit-chatbot/lib/python3.10/site-packages/tenacity/__init__.py:314, in BaseRetrying.iter(self, retry_state)
312 is_explicit_retry = fut.failed and isinstance(fut.exception(), TryAgain)
313 if not (is_explicit_retry or self.retry(retry_state)):
--> 314 return fut.result()
316 if self.after is not None:
317 self.after(retry_state)
File ~/mambaforge/envs/streamlit-chatbot/lib/python3.10/concurrent/futures/_base.py:451, in Future.result(self, timeout)
449 raise CancelledError()
450 elif self._state == FINISHED:
--> 451 return self.__get_result()
453 self._condition.wait(timeout)
455 if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:
File ~/mambaforge/envs/streamlit-chatbot/lib/python3.10/concurrent/futures/_base.py:403, in Future.__get_result(self)
401 if self._exception:
402 try:
--> 403 raise self._exception
404 finally:
405 # Break a reference cycle with the exception in self._exception
406 self = None
File ~/mambaforge/envs/streamlit-chatbot/lib/python3.10/site-packages/tenacity/__init__.py:382, in Retrying.__call__(self, fn, *args, **kwargs)
380 if isinstance(do, DoAttempt):
381 try:
--> 382 result = fn(*args, **kwargs)
383 except BaseException: # noqa: B902
384 retry_state.set_exception(sys.exc_info()) # type: ignore[arg-type]
File ~/mambaforge/envs/streamlit-chatbot/lib/python3.10/site-packages/langchain/embeddings/openai.py:104, in embed_with_retry.<locals>._embed_with_retry(**kwargs)
102 @retry_decorator
103 def _embed_with_retry(**kwargs: Any) -> Any:
--> 104 response = embeddings.client.create(**kwargs)
105 return _check_response(response, skip_empty=embeddings.skip_empty)
File ~/mambaforge/envs/streamlit-chatbot/lib/python3.10/site-packages/openai/api_resources/embedding.py:38, in Embedding.create(cls, *args, **kwargs)
35 # If a user specifies base64, we'll just return the encoded string.
36 # This is only for the default case.
37 if not user_provided_encoding_format:
---> 38 for data in response.data:
39
40 # If an engine isn't using this optimization, don't do anything
41 if type(data["embedding"]) == str:
42 assert_has_numpy()
File ~/mambaforge/envs/streamlit-chatbot/lib/python3.10/site-packages/openai/openai_object.py:61, in OpenAIObject.__getattr__(self, k)
59 return self[k]
60 except KeyError as err:
---> 61 raise AttributeError(*err.args)
AttributeError: data
```
### Expected behavior
The function should complete without an error. | FAISS.from_documents(docs, embeddings) and Chroma.from_documents(docs, embeddings) result in `AttributeError: data`. | https://api.github.com/repos/langchain-ai/langchain/issues/10622/comments | 14 | 2023-09-15T06:36:52Z | 2024-06-21T16:37:56Z | https://github.com/langchain-ai/langchain/issues/10622 | 1,897,803,767 | 10,622 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
- Python Version: [Python 3.8]
**Issue:** When I use ConversationBufferMemory, the responses come back out of context. When I remove the memory functionality from my code, it works fine.
**CODE:**
```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from langchain.llms import OpenAI
import pinecone
from langchain.vectorstores import Pinecone
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.chains.conversation.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate
from langchain.chains.question_answering import load_qa_chain


class MemoryConfig:
    def __init__(self):
        self.template = """You are a chatbot having a conversation with a human. If you don't know the answer, you will respond with "I don't know."

{context}

{chat_history}
Human: {human_input}
Chatbot: """
        self.prompt = PromptTemplate(
            input_variables=["chat_history", "human_input", "context"], template=self.template
        )


app_settings = MemoryConfig()
app = FastAPI()
user_sessions = {}


class ExportRequest(BaseModel):
    query: str
    categoryName: str


@app.post("/chat")
def chat(request: ExportRequest):
    query = request.query
    categoryName = request.categoryName
    index_name = categoryName
    openai_api_key = "sk-xxxx"
    PINECONE_API_KEY = "xxxxx"
    PINECONE_API_ENV = "us-west4-gcp-free"
    pinecone.init(api_key=PINECONE_API_KEY, environment=PINECONE_API_ENV)
    embeddings = OpenAIEmbeddings(openai_api_key=openai_api_key)
    vectorstore = Pinecone.from_existing_index(index_name, embeddings)
    memory = ConversationBufferMemory(memory_key="chat_history", input_key="human_input", return_messages=False)

    # Get or create a session for the user
    user_session = user_sessions.get(categoryName, {})
    print("user_session 1", user_session)
    if "chat_history" in user_session:
        for entry in user_session["chat_history"]:
            user_message = entry["human_input"]
            ai_message = entry["chatbot_response"]
            memory.chat_memory.add_user_message(user_message)
            memory.chat_memory.add_ai_message(ai_message)

    # Initialize the conversation history for this session
    if "chat_history" not in user_session:
        user_session["chat_history"] = []

    # Load the conversation history from the session
    chat_history = user_session["chat_history"]

    chain = load_qa_chain(
        OpenAI(temperature=0, openai_api_key=openai_api_key), chain_type="stuff", memory=memory, prompt=app_settings.prompt
    )
    try:
        docs = vectorstore.similarity_search(query)
        output = chain.run(input_documents=docs, human_input=query)
        # Append the latest user input and chatbot response to the conversation history
        chat_history.append({"human_input": query, "chatbot_response": output})
    except Exception as e:
        return HTTPException(status_code=400, detail="An error occurred: " + str(e))

    # Save the updated conversation history in the session
    user_session["chat_history"] = chat_history
    user_sessions[categoryName] = user_session
    # memory.clear()
    return {"status": 200, "data": {"result": output, "MEMORY": memory}}


if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
```
**Query: 1**
{
"query": "Who is the PM of India?",
"categoryName": "langchaintest"
}
_"result": " I don't know",_
**Query: 2 (Hit API 2nd time with same question)**
{
"query": "Who is the PM of India?",
"categoryName": "langchaintest"
}
_"result": " The Prime Minister of India is Narendra Modi.",_
**NOTE:** my Pincone DB doesn't have any context related to _"The Prime Minister of India is Narendra Modi."_
I want to response only those query which exist in pinecone db.
Please let me know if there's any additional information or troubleshooting steps needed. Thank you for your attention to this matter.
### Suggestion:
_No response_ | Issue: Issue with ConversationBufferMemory in FastAPI code | https://api.github.com/repos/langchain-ai/langchain/issues/10621/comments | 6 | 2023-09-15T06:27:51Z | 2023-12-25T16:07:45Z | https://github.com/langchain-ai/langchain/issues/10621 | 1,897,793,321 | 10,621 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain-0.0.291
python3.9
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
import os
import openai
openai.api_type = "azure"
openai.api_base = os.getenv("OPENAI_API_BASE")
openai.api_version = "version"
openai.api_key = os.getenv("OPENAI_API_KEY")
DEPLOYMENT_NAME = 'deployment-name'
from langchain.chat_models import AzureChatOpenAI
llm = AzureChatOpenAI(
openai_api_base=os.getenv("OPENAI_API_BASE"),
openai_api_version="version",
deployment_name=DEPLOYMENT_NAME,
openai_api_key=os.getenv("OPENAI_API_KEY"),
openai_api_type="azure",
temperature=0.0
)
result = llm("Father of computer")
print(result)
```
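For reference, chat models expect a list of messages rather than a bare string; a sketch of the call that should work (assuming the deployment itself is configured correctly):
```python
from langchain.schema import HumanMessage

# AzureChatOpenAI is a chat model: pass messages, not a raw string.
result = llm([HumanMessage(content="Father of computer")])
print(result.content)
```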
### Expected behavior
I expect to get the answer. | TypeError: Got unknown type F | https://api.github.com/repos/langchain-ai/langchain/issues/10618/comments | 3 | 2023-09-15T04:09:53Z | 2023-09-15T09:06:10Z | https://github.com/langchain-ai/langchain/issues/10618 | 1,897,674,474 | 10,618 |
[
"langchain-ai",
"langchain"
] | ### Feature request
It would be great to see **thought instruction** implemented as an alternative to chain-of-thought (CoT) prompting.
**Thought instruction** is proposed as an alternative to chain of thought (CoT) prompting for a more nuanced approach to software development. It involves explicitly addressing specific problem-solving thoughts in instructions, akin to solving subtasks in a sequential manner. The method includes role swapping to inquire about unimplemented methods or explain feedback messages caused by bugs. This process fosters a clearer understanding of the existing code and identifies specific gaps that need addressing. By doing so, **thought instruction** aims to mitigate code hallucinations and enable a more accurate, context-aware approach to code completion, resulting in more reliable and comprehensive code outputs.
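A loose, hypothetical sketch (my own wording, not taken from the ChatDev paper) of how the two steps described above might be expressed as prompts:
```python
from langchain.prompts import PromptTemplate

# Step 1 (role swap): the model, acting as a reviewer, names unimplemented methods.
review_prompt = PromptTemplate.from_template(
    "You are a code reviewer. List every unimplemented or placeholder method in:\n{code}"
)
# Step 2 (thought instruction): complete exactly those gaps, one subtask at a time.
complete_prompt = PromptTemplate.from_template(
    "Implement only the following methods, one at a time, without changing other code:\n"
    "{gaps}\n\nCode:\n{code}"
)
```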
### Motivation
See ChatDev ([source code](https://github.com/OpenBMB/ChatDev/tree/main) and [paper](https://arxiv.org/pdf/2307.07924)) for inspiration.
### Your contribution
Idea | Thought Instruction (Alternative to CoT) | https://api.github.com/repos/langchain-ai/langchain/issues/10610/comments | 1 | 2023-09-15T00:34:54Z | 2024-01-25T14:17:25Z | https://github.com/langchain-ai/langchain/issues/10610 | 1,897,522,792 | 10,610 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi,
I'm trying to figure out how to add a system message to my chain, to tell it, for example, "Your name is XX".
I've tried a lot of things and gone through many resolved issues and docs, but nothing has worked... Any help will be appreciated. Here is the code of my chain.ts:
```ts
import { pinecone } from "@/utils/pinecone-client";
import { PineconeStore } from "langchain/vectorstores/pinecone";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { ConversationalRetrievalQAChain } from "langchain/chains";
import { ChatOpenAI } from "langchain/chat_models/openai";

async function initChain() {
  const model = new ChatOpenAI({
    modelName: "gpt-3.5-turbo",
    temperature: 0,
  });

  const pineconeIndex = pinecone.Index("canada");
  const vectorStore = await PineconeStore.fromExistingIndex(
    new OpenAIEmbeddings({}),
    {
      pineconeIndex: pineconeIndex,
      textKey: "text",
    },
  );

  return ConversationalRetrievalQAChain.fromLLM(
    model,
    vectorStore.asRetriever(),
    { returnSourceDocuments: true },
  );
}

export const chain = await initChain();
```
### Suggestion:
_No response_ | Issue: I cannot seem to find how to make a System role message in my chain. | https://api.github.com/repos/langchain-ai/langchain/issues/10608/comments | 2 | 2023-09-14T23:49:19Z | 2023-12-25T16:07:49Z | https://github.com/langchain-ai/langchain/issues/10608 | 1,897,492,962 | 10,608 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
As noted here: #10462 and #6819
I've realized I'm thousands of miles away from having the skills to fix this and make a PR (I'm not a Pro Dev) in my attempt to update the `SelfQueryRetriever`. However, I think this will be a great learning opportunity, with help from someone who knows what they're doing (@agola11).
After taking a close look at the `SelfQueryRetriever` [source](https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/self_query/base.html#SelfQueryRetriever), I noticed that what needs to be updated is this part from the `_get_relevant_documents` function:
```
structured_query = cast(
StructuredQuery,
self.llm_chain.predict_and_parse(
callbacks=run_manager.get_child(), **inputs
),
)
```
I even ran `SelfQueryRetriever` (in my ignorance) with just `self.llm_chain.predict` to see what it did, but I got the JSON as the output and the vectorstore complaining it was expecting a tuple:
```
in RedisTranslator.visit_structured_query(self, structured_query)
91 def visit_structured_query(
92 self, structured_query: StructuredQuery
93 ) -> Tuple[str, dict]:
---> 94 if structured_query.filter is None:
95 kwargs = {}
96 else:
AttributeError: 'str' object has no attribute 'filter'
```
I also took a look at the `predict_and_parse` method in the `LLMChain` [source](https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm.html#LLMChain.predict_and_parse). And here's where I knew I was biting way more than I could (ever) chew.
### Suggestion:
Can someone please guide me to replace and update the `_get_relevant_documents` function?
I think I need to find a way to convert the JSON to the required tuple, but I can't figure out how. Am I on the right track? | Issue: Help fixing "predict_and_parse" deprecation from SelfQueryRetriever | https://api.github.com/repos/langchain-ai/langchain/issues/10606/comments | 3 | 2023-09-14T22:37:24Z | 2023-12-25T16:07:54Z | https://github.com/langchain-ai/langchain/issues/10606 | 1,897,437,974 | 10,606 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain 0.0.288
Windows 11
Python 3.11
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Called using the code below, where `model_n_ctx` is set to 1024:

```python
llm = LlamaCpp(model_path=model_path, max_tokens=model_n_ctx, n_batch=model_n_batch,
               callbacks=callbacks, verbose=model_verbose, echo=True)
```

When executing inference, I get an error that the input tokens exceed 512.
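For what it's worth, `max_tokens` only caps how many new tokens are generated; the context window is a separate `n_ctx` field on `LlamaCpp` (default 512). A sketch of an explicit override, reusing the variables above:

```python
from langchain.llms import LlamaCpp

# n_ctx sets the context window (defaults to 512 if omitted);
# max_tokens caps the number of generated tokens only.
llm = LlamaCpp(
    model_path=model_path,
    n_ctx=1024,
    max_tokens=256,
    n_batch=model_n_batch,
    callbacks=callbacks,
    verbose=model_verbose,
)
```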
### Expected behavior
Passing the context-size override during invocation should take effect, so prompts longer than 512 tokens are accepted. Instead, the context window stays at the default of 512 and inference fails with an "inputs token exceed 512" error. | Llama - n_ctx defaults to 512 even if overide passed during invocation | https://api.github.com/repos/langchain-ai/langchain/issues/10590/comments | 2 | 2023-09-14T17:17:33Z | 2023-12-21T16:05:59Z | https://github.com/langchain-ai/langchain/issues/10590 | 1,896,999,033 | 10,590
[
"langchain-ai",
"langchain"
] | ### Feature request
Add integration for [Document AI](https://cloud.google.com/document-ai/docs/overview) from Google Cloud for intelligent document processing.
### Motivation
This product offers Optical Character Recognition, specialized processors from specific document types, and built in Generative AI processing for Document Summarization and entity extraction.
### Your contribution
I can implement this myself, I mostly want to understand where and how this could fit into the library.
Should it be a document transformer? An LLM? An output parser? A Retriever? Document AI does all of these in some capacity.
Document AI is designed as a platform that non-ML engineers can use to extract information from documents, and I could see several features being useful to Langchain (Like Document OCR to extract text and fields before sending it to an LLM) or using the Document AI Processors with Generative AI directly for the summarization/q&a output. | Add Google Cloud Document AI integration | https://api.github.com/repos/langchain-ai/langchain/issues/10589/comments | 2 | 2023-09-14T16:57:14Z | 2023-10-09T15:05:54Z | https://github.com/langchain-ai/langchain/issues/10589 | 1,896,971,125 | 10,589 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.288
python==3.10
This bug is reproducible on older langchain versions (e.g. 0.0.240) and on different OSes (Windows, Debian).
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.tools.python.tool import PythonAstREPLTool
query = """
import pandas as pd
import random
import string
def generate_random_text():
return ''.join(random.choices(string.ascii_letters + string.digits, k=128))
df = pd.DataFrame({
'Column1': [generate_random_text() for _ in range(1000)],
'Column2': [generate_random_text() for _ in range(1000)],
'Column3': [generate_random_text() for _ in range(1000)]
})
df
"""
ast_repl = PythonAstREPLTool()
ast_repl(query)
>>> "NameError: name 'generate_random_text' is not defined"
```
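A plausible explanation (an assumption, not verified against the tool's internals): the code is executed with separate `globals` and `locals` dicts, so `def` statements land in locals while the list comprehensions resolve names through globals, hence the `NameError`. A workaround sketch that shares a single namespace:

```python
from langchain.tools.python.tool import PythonAstREPLTool

ns = {}  # one shared namespace so function definitions stay visible
ast_repl = PythonAstREPLTool(globals=ns, locals=ns)
ast_repl(query)
```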
### Expected behavior
I expect it to return a df. | PythonAstREPLTool won't execute code with functions/lambdas | https://api.github.com/repos/langchain-ai/langchain/issues/10583/comments | 3 | 2023-09-14T14:22:45Z | 2023-12-25T16:08:00Z | https://github.com/langchain-ai/langchain/issues/10583 | 1,896,687,080 | 10,583 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am building an agent and need a tool that can give the agent access to the current datetime.
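A minimal sketch of such a tool (the names here are illustrative):

```python
from datetime import datetime

from langchain.agents import Tool

current_datetime_tool = Tool(
    name="current_datetime",
    func=lambda _: datetime.now().isoformat(),  # the input string is ignored
    description="Useful for finding out the current date and time.",
)
```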
### Suggestion:
_No response_ | Issue: an agent that can get the current time | https://api.github.com/repos/langchain-ai/langchain/issues/10582/comments | 5 | 2023-09-14T14:14:19Z | 2023-09-27T17:30:33Z | https://github.com/langchain-ai/langchain/issues/10582 | 1,896,670,788 | 10,582 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain = 0.0.288
python = 3.8.0
### Who can help?
@hwchase17
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I'm sorry, I'm not very familiar with this field, but the description of this function doesn't seem to match its actual operation.
```python
def similarity_search_with_score(
    self, query: str, k: int = 4, **kwargs: Any
) -> List[Tuple[Document, float]]:
    """
    Return list of documents most similar to the query
    text and cosine distance in float for each.
    Lower score represents more similarity.
    """
    if self._embedding is None:
        raise ValueError(
            "_embedding cannot be None for similarity_search_with_score"
        )
    content: Dict[str, Any] = {"concepts": [query]}
    if kwargs.get("search_distance"):
        content["certainty"] = kwargs.get("search_distance")
    query_obj = self._client.query.get(self._index_name, self._query_attrs)
    if kwargs.get("where_filter"):
        query_obj = query_obj.with_where(kwargs.get("where_filter"))
    embedded_query = self._embedding.embed_query(query)
    if not self._by_text:
        vector = {"vector": embedded_query}
        result = (
            query_obj.with_near_vector(vector)
            .with_limit(k)
            .with_additional("vector")
            .do()
        )
    else:
        result = (
            query_obj.with_near_text(content)
            .with_limit(k)
            .with_additional("vector")
            .do()
        )
    if "errors" in result:
        raise ValueError(f"Error during query: {result['errors']}")
    docs_and_scores = []
    for res in result["data"]["Get"][self._index_name]:
        text = res.pop(self._text_key)
        score = np.dot(res["_additional"]["vector"], embedded_query)
        docs_and_scores.append((Document(page_content=text, metadata=res), score))
    return docs_and_scores
```
### Expected behavior
`score = np.dot(res["_additional"]["vector"], embedded_query)`
As you can see, the description mentions that the `score` corresponds to `cosine distance`, but the `code` seems to only calculate the `dot product`. Am I missing something?
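One observation that may explain the gap (an assumption: the embedding vectors are unit-normalized): for unit vectors the dot product equals cosine *similarity* (higher = more similar), while cosine *distance* is 1 - similarity (lower = more similar). A quick numeric check:

```python
import numpy as np

a = np.array([0.6, 0.8])  # both vectors have length 1
b = np.array([0.8, 0.6])

similarity = np.dot(a, b)   # cosine similarity = 0.96 (higher = closer)
distance = 1 - similarity   # cosine distance  = 0.04 (lower  = closer)
```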
And here is a definition from Weaviate:
https://weaviate.io/blog/distance-metrics-in-vector-search#cosine-similarity

Thanks for your kind help!
| Is score return from similarity_search_with_score in Weaviate is really cosine distance? | https://api.github.com/repos/langchain-ai/langchain/issues/10581/comments | 4 | 2023-09-14T14:14:17Z | 2023-12-25T16:08:05Z | https://github.com/langchain-ai/langchain/issues/10581 | 1,896,670,748 | 10,581 |
[
"langchain-ai",
"langchain"
] | ### System Info
'OS_NAME': 'DEBIAN_10'
Langchain version : '0.0.288'
python : 3.10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
This issue only occurs when region != "global"; the retriever works well when region is set to "global".

Steps to reproduce:
1. Create an Enterprise Search app with region = 'us'

2. Import langchain version 0.0.288
```python
import langchain
from langchain.retrievers import GoogleCloudEnterpriseSearchRetriever

PROJECT_ID = "<PROJECT_ID>"  # Set to your Project ID
SEARCH_ENGINE_ID = "<SEARCH_ENGINE_ID>"  # Set to your data store ID

retriever = GoogleCloudEnterpriseSearchRetriever(
    project_id=PROJECT_ID,
    search_engine_id=SEARCH_ENGINE_ID,
    max_documents=3,
    location_id="us",
)

retriever.get_relevant_documents("What is capital of India?")
```
3. The code errors out with the error below:
```
---------------------------------------------------------------------------
_InactiveRpcError Traceback (most recent call last)
File /opt/conda/envs/python310/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:72, in _wrap_unary_errors.<locals>.error_remapped_callable(*args, **kwargs)
71 try:
---> 72 return callable_(*args, **kwargs)
73 except grpc.RpcError as exc:
File /opt/conda/envs/python310/lib/python3.10/site-packages/grpc/_channel.py:1161, in _UnaryUnaryMultiCallable.__call__(self, request, timeout, metadata, credentials, wait_for_ready, compression)
1155 (
1156 state,
1157 call,
1158 ) = self._blocking(
1159 request, timeout, metadata, credentials, wait_for_ready, compression
1160 )
-> 1161 return _end_unary_response_blocking(state, call, False, None)
File /opt/conda/envs/python310/lib/python3.10/site-packages/grpc/_channel.py:1004, in _end_unary_response_blocking(state, call, with_call, deadline)
1003 else:
-> 1004 raise _InactiveRpcError(state)
_InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.NOT_FOUND
details = "DataStore projects/PROJECT_ID/locations/us/collections/default_collection/dataStores/SEARCH_ENGINE_ID not found."
debug_error_string = "UNKNOWN:Error received from peer ipv4:172.253.120.95:443 {created_time:"2023-09-14T12:55:00.327037809+00:00", grpc_status:5, grpc_message:"DataStore projects/PROJECT_ID/locations/us/collections/default_collection/dataStores/SEARCH_ENGINE_ID not found."}"
>
The above exception was the direct cause of the following exception:
NotFound Traceback (most recent call last)
Cell In[365], line 1
----> 1 retriever.get_relevant_documents("What is capital of India?")
File ~/.local/lib/python3.10/site-packages/langchain/schema/retriever.py:208, in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, **kwargs)
206 except Exception as e:
207 run_manager.on_retriever_error(e)
--> 208 raise e
209 else:
210 run_manager.on_retriever_end(
211 result,
212 **kwargs,
213 )
File ~/.local/lib/python3.10/site-packages/langchain/schema/retriever.py:201, in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, **kwargs)
199 _kwargs = kwargs if self._expects_other_args else {}
200 if self._new_arg_supported:
--> 201 result = self._get_relevant_documents(
202 query, run_manager=run_manager, **_kwargs
203 )
204 else:
205 result = self._get_relevant_documents(query, **_kwargs)
File ~/.local/lib/python3.10/site-packages/langchain/retrievers/google_cloud_enterprise_search.py:254, in GoogleCloudEnterpriseSearchRetriever._get_relevant_documents(self, query, run_manager)
251 search_request = self._create_search_request(query)
253 try:
--> 254 response = self._client.search(search_request)
255 except InvalidArgument as e:
256 raise type(e)(
257 e.message + " This might be due to engine_data_type not set correctly."
258 )
File /opt/conda/envs/python310/lib/python3.10/site-packages/google/cloud/discoveryengine_v1beta/services/search_service/client.py:577, in SearchServiceClient.search(self, request, retry, timeout, metadata)
570 metadata = tuple(metadata) + (
571 gapic_v1.routing_header.to_grpc_metadata(
572 (("serving_config", request.serving_config),)
573 ),
574 )
576 # Send the request.
--> 577 response = rpc(
578 request,
579 retry=retry,
580 timeout=timeout,
581 metadata=metadata,
582 )
584 # This method is paged; wrap the response in a pager, which provides
585 # an `__iter__` convenience method.
586 response = pagers.SearchPager(
587 method=rpc,
588 request=request,
589 response=response,
590 metadata=metadata,
591 )
File /opt/conda/envs/python310/lib/python3.10/site-packages/google/api_core/gapic_v1/method.py:113, in _GapicCallable.__call__(self, timeout, retry, *args, **kwargs)
110 metadata.extend(self._metadata)
111 kwargs["metadata"] = metadata
--> 113 return wrapped_func(*args, **kwargs)
File /opt/conda/envs/python310/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:74, in _wrap_unary_errors.<locals>.error_remapped_callable(*args, **kwargs)
72 return callable_(*args, **kwargs)
73 except grpc.RpcError as exc:
---> 74 raise exceptions.from_grpc_error(exc) from exc
NotFound: 404 DataStore projects/PROJECT_ID/locations/us/collections/default_collection/dataStores/SEARCH_ENGINE_ID not found.
```
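A possible cause (an assumption based on the 404, not a confirmed diagnosis): for non-global locations, the underlying Discovery Engine client may need to be pointed at a regional API endpoint, something like:

```python
from google.api_core.client_options import ClientOptions
from google.cloud import discoveryengine_v1beta

# Hypothetical workaround: route requests to the regional endpoint that
# matches location_id ("us" -> "us-discoveryengine.googleapis.com").
client = discoveryengine_v1beta.SearchServiceClient(
    client_options=ClientOptions(api_endpoint="us-discoveryengine.googleapis.com")
)
```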
### Expected behavior
Code should return three relevant documents from Enterprise Search | GoogleCloudEnterpriseSearchRetriever fails to where location is "us" | https://api.github.com/repos/langchain-ai/langchain/issues/10580/comments | 3 | 2023-09-14T13:11:37Z | 2023-11-01T05:59:16Z | https://github.com/langchain-ai/langchain/issues/10580 | 1,896,548,466 | 10,580 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain version 0.0.285
langsmith version 0.0.28
Python version 3.11.2
### Who can help?
@hwchase17
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [x] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I followed the tutorial in the RAG cookbook, "With Memory and returning source documents".

Steps for changing behaviour:

1. Use `FAISS.from_documents` and save to files:

```python
embeddings = OpenAIEmbeddings()
vectorstore = FAISS.from_documents(documents, embeddings)
vectorstore.save_local("data/faiss_index")
```

2. Load from file:

```python
embeddings = OpenAIEmbeddings()
vectorstore = FAISS.load_local("data/faiss_index", embeddings)
retriever = vectorstore.as_retriever()
```

3. The error "Document' object has no attribute '_lc_kwargs'" is raised at step 5:

```python
final_inputs = {
    "context": lambda x: _combine_documents(x["docs"]),
    "question": itemgetter("question"),
}
```
Here is the screenshot when using LangSmith:
<img width="543" alt="RAG" src="https://github.com/langchain-ai/langchain/assets/105797032/038b1b98-4dde-47a1-a79b-4061014c05a2">
### Expected behavior
Expected behaviour is that the LLM returns a result without raising an error. | Document' object has no attribute '_lc_kwargs | https://api.github.com/repos/langchain-ai/langchain/issues/10579/comments | 5 | 2023-09-14T13:04:51Z | 2024-01-30T00:55:18Z | https://github.com/langchain-ai/langchain/issues/10579 | 1,896,536,091 | 10,579
[
"langchain-ai",
"langchain"
] | ### System Info
I have a question-and-answer-over-docs chatbot application that uses RetrievalQAWithSourcesChain and ChatPromptTemplate. In langchain version 0.0.238 it used to return sources, but this seems to be broken in the releases since then.
Python version: Python 3.11.4
LangChain version: 0.0.287
Example response with missing sources:
> Entering new RetrievalQAWithSourcesChain chain...
> Finished chain.
{'question': 'what is sql injection', 'answer': 'SQL injection is a web security vulnerability that allows an attacker to interfere with the queries that an application makes to its database. By manipulating the input data, an attacker can execute their own malicious SQL queries, which can lead to unauthorized access, data theft, or modification of the database. This vulnerability can be exploited to view sensitive data, modify or delete data, or even take control of the database server. SQL injection is a serious issue that can result in high-profile data breaches and compromises of user accounts. It is important for developers to implement proper input validation and parameterized queries to prevent SQL injection attacks.\n\n', 'sources': ''}
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import pickle
import gradio as gr
from langchain.vectorstores import FAISS
from langchain.chains import RetrievalQAWithSourcesChain
from langchain.chat_models import PromptLayerChatOpenAI
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)

pkl_file_path = "faiss_store.pkl"
event = {"question": "what is sql injection"}

system_template = """
Use the provided articles delimited by triple quotes to answer questions. If the answer cannot be found in the articles, write "I could not find an answer."
If you don't know the answer, just say "Hmm..., I'm not sure.", don't try to make up an answer.
ALWAYS return a "SOURCES" part in your answer.
The "SOURCES" part should be a reference to the source of the document from which you got your answer.
Example of your response should be:
The answer is foo
SOURCES:
1. abc
2. xyz
Begin!
----------------
{summaries}
"""


def get_chain(store: FAISS, prompt_template: ChatPromptTemplate):
    return RetrievalQAWithSourcesChain.from_chain_type(
        PromptLayerChatOpenAI(
            pl_tags=["burpbot"],
            temperature=0,
        ),
        chain_type="stuff",
        retriever=store.as_retriever(),
        chain_type_kwargs={"prompt": prompt_template},
        reduce_k_below_max_tokens=True,
        verbose=True,
    )


def create_prompt_template() -> ChatPromptTemplate:
    return ChatPromptTemplate.from_messages(
        [
            SystemMessagePromptTemplate.from_template(system_template),
            HumanMessagePromptTemplate.from_template("{question}"),
        ]
    )


def load_remote_faiss_store() -> FAISS:
    with open(pkl_file_path, "rb") as f:
        return pickle.load(f)


def main() -> dict:
    prompt_template = create_prompt_template()
    store: FAISS = load_remote_faiss_store()
    chain = get_chain(store, prompt_template)
    result = chain(event)
    print(result)
```
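One diagnostic worth trying (an approximation of the chain's answer/sources split, not its exact internals): check whether the raw completion actually contains a `SOURCES:` marker that can be split on:

```python
import re

completion = "The answer is foo\nSOURCES:\n1. abc\n2. xyz"

match = re.search(r"SOURCES?:\s*", completion)
if match:
    answer, sources = completion[: match.start()], completion[match.end():]
else:
    answer, sources = completion, ""
print(repr(answer), repr(sources))
```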
### Expected behavior
expected output:
>{'question': 'what is sql injection', 'answer': 'SQL injection is a web security vulnerability that allows an attacker to interfere with the queries that an application makes to its database. By manipulating the input data, an attacker can execute their own malicious SQL queries, which can lead to unauthorized access, data theft, or modification of the database. This vulnerability can be exploited to view sensitive data, modify or delete data, or even take control of the database server. SQL injection is a serious issue that can result in high-profile data breaches and compromises of user accounts. It is important for developers to implement proper input validation and parameterized queries to prevent SQL injection attacks.\n\n', 'sources': 'https://example.net/web-security/sql-injection'}
| The RetrievalQAWithSourcesChain doesn't return SOURCES. | https://api.github.com/repos/langchain-ai/langchain/issues/10575/comments | 5 | 2023-09-14T10:01:45Z | 2024-02-17T16:07:23Z | https://github.com/langchain-ai/langchain/issues/10575 | 1,896,207,622 | 10,575 |
[
"langchain-ai",
"langchain"
] | ### System Info
```python
def parse(self, text: str) -> Union[AgentAction, AgentFinish]:
    if f"{self.ai_prefix}:" in text:
        return AgentFinish(
            {"output": text.split(f"{self.ai_prefix}:")[-1].strip()}, text
        )
    regex = r"Action: (.*?)[\n]*Action Input: (.*)"
    match = re.search(regex, text)
    if not match:
        raise OutputParserException(f"Could not parse LLM output: `{text}`")
    action = match.group(1)
    action_input = match.group(2)
    return AgentAction(action.strip(), action_input.strip(" ").strip('"'), text)
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Initialize an agent.
2. Ask the agent a simple question that it can solve without using any tools.
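One mitigation that may help (a sketch assuming `initialize_agent` is in use; `tools` and `llm` stand in for your own objects):

```python
from langchain.agents import AgentType, initialize_agent

# handle_parsing_errors lets the executor recover from unparseable
# LLM output instead of raising OutputParserException.
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    handle_parsing_errors=True,
)
```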
### Expected behavior
The agent should answer the question directly without raising an error. | agent got "No I need to use a tool? No" response from llmm,which CANNOT be parsed | https://api.github.com/repos/langchain-ai/langchain/issues/10572/comments | 2 | 2023-09-14T05:33:02Z | 2023-12-15T05:47:20Z | https://github.com/langchain-ai/langchain/issues/10572 | 1,895,728,355 | 10,572
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
https://huggingface.co/spaces/JavaFXpert/Chat-GPT-LangChain
<img width="399" alt="WX20230914-113935@2x" src="https://github.com/langchain-ai/langchain/assets/34183928/1d61724a-152f-4ad2-8197-0dfc0fd44f98">
### Idea or request for content:
_No response_ | Link in the Readme is invalid. | https://api.github.com/repos/langchain-ai/langchain/issues/10569/comments | 3 | 2023-09-14T03:40:51Z | 2023-12-27T16:05:23Z | https://github.com/langchain-ai/langchain/issues/10569 | 1,895,600,673 | 10,569 |
[
"langchain-ai",
"langchain"
] | ### System Info
Unresolved reference 'QianfanLLMEndpoint'
Name: langchain
Version: 0.0.288
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.llms.baidu_qianfan_endpoint import QianfanLLMEndpoint
```
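If the integration simply isn't shipped in the installed release yet (an assumption), a quick check is:

```python
# Print the installed version and any Qianfan-related exports.
import langchain
import langchain.llms as llms

print(langchain.__version__)
print([name for name in dir(llms) if "ianfan" in name])
```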
### Expected behavior
I hope to use the Qianfan model, but I can't import it; even though I have updated my langchain, it still doesn't work. | Can not use QianfanLLMEndpoint | https://api.github.com/repos/langchain-ai/langchain/issues/10567/comments | 6 | 2023-09-14T02:32:37Z | 2023-12-26T16:05:47Z | https://github.com/langchain-ai/langchain/issues/10567 | 1,895,548,600 | 10,567
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Attempting to make a Google Calendar agent; however, the agent keeps passing a string for a field that should be a datetime object.

The prompt:
```python
prefix = """Date format: datetime(2023, 5, 2, 10, 0, 0)
Based on this event description: "Joey birthday tomorrow at 7 pm",
output a json of the following parameters:
Today's datetime on UTC time datetime(2023, 5, 2, 10, 0, 0), it's Tuesday and timezone
of the user is -5, take into account the timezone of the user and today's date.
1. summary
2. start
3. end
4. location
5. description
6. user_timezone
event_summary:
{{
"summary": "Joey birthday",
"start": "datetime(2023, 5, 3, 19, 0, 0)",
"end": "datetime(2023, 5, 3, 20, 0, 0)",
"location": "",
"description": "",
"user_timezone": "America/New_York"
}}
Date format: datetime(YYYY, MM, DD, hh, mm, ss)
Based on this event description: "Create a meeting for 5 pm on Saturday with Joey",
output a json of the following parameters:
Today's datetime on UTC time datetime(2023, 5, 4, 10, 0, 0), it's Thursday and timezone
of the user is -5, take into account the timezone of the user and today's date.
1. summary
2. start
3. end
4. location
5. description
6. user_timezone
event_summary:
{{
"summary": "Meeting with Joey",
"start": "datetime(2023, 5, 6, 17, 0, 0)",
"end": "datetime(2023, 5, 6, 18, 0, 0)",
"location": "",
"description": "",
"user_timezone": "America/New_York"
}}
"""
```
The tool:
```python
class CalnederEventTool(BaseTool):
    """A tool used to create events on google calendar."""

    name = "custom_search"
    description = "a tool used to create events on google calendar"

    def _run(
        self,
        summary: str,
        start: datetime,
        end: Union[datetime, None],
        recurrence: Optional[str] = None,  # Changed from Optional[Recurrence] to Optional[str]
        run_manager: Optional['CallbackManagerForToolRun'] = None,
    ) -> str:
        GOOGLE_EMAIL = environ.get('GOOGLE_EMAIL')
        CREDENTIALS_PATH = environ.get('CREDENTIALS_PATH')
        calendar = GoogleCalendar(
            GOOGLE_EMAIL,
            credentials_path=CREDENTIALS_PATH
        )
        event = Event(summary=summary, start=start, end=end)
        calendar.add_event(event)

    async def _arun(
        self, query: str, run_manager: Optional[AsyncCallbackManagerForToolRun] = None
    ) -> str:
        """Use the tool asynchronously."""
        raise NotImplementedError("custom_search does not support async")
```
````
> Entering new AgentExecutor chain...
Action:
```
{
"action": "custom_search",
"action_input": {
"summary": "Going to the bar",
"start": "2020-09-01T17:00:00",
"end": "2020-09-01T18:00:00",
"recurrence": ""
}
}
```
````
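A possible workaround (a sketch, not a confirmed fix): since the agent emits JSON, datetimes arrive as ISO 8601 strings, so the tool can coerce them itself before use:

```python
from datetime import datetime
from typing import Union

def _coerce_datetime(value: Union[str, datetime]) -> datetime:
    # The agent's JSON action input carries datetimes as ISO 8601 strings;
    # parse them back into datetime objects.
    if isinstance(value, str):
        return datetime.fromisoformat(value)
    return value

start = _coerce_datetime("2020-09-01T17:00:00")
```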
### Suggestion:
_No response_ | Issue: Agent keeps using the wrong type | https://api.github.com/repos/langchain-ai/langchain/issues/10566/comments | 10 | 2023-09-14T02:22:18Z | 2023-09-27T20:09:53Z | https://github.com/langchain-ai/langchain/issues/10566 | 1,895,540,285 | 10,566 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```python
chain = MultiRetrievalQAChain.from_retrievers(OpenAI(), retriever_infos, verbose=True)
```

I'm receiving this error when I try to call the above (I'm following this doc: https://python.langchain.com/docs/use_cases/question_answering/how_to/multi_retrieval_qa_router):
```
ValidationError Traceback (most recent call last)
Cell In[7], line 1
----> 1 chain = MultiRetrievalQAChain.from_retrievers(OpenAI(), retriever_infos)
File ~/anaconda3/lib/python3.11/site-packages/langchain/chains/router/multi_retrieval_qa.py:66, in MultiRetrievalQAChain.from_retrievers(cls, llm, retriever_infos, default_retriever, default_prompt, default_chain, **kwargs)
64 prompt = r_info.get("prompt")
65 retriever = r_info["retriever"]
---> 66 chain = RetrievalQA.from_llm(llm, prompt=prompt, retriever=retriever)
67 name = r_info["name"]
68 destination_chains[name] = chain
File ~/anaconda3/lib/python3.11/site-packages/langchain/chains/retrieval_qa/base.py:84, in BaseRetrievalQA.from_llm(cls, llm, prompt, callbacks, **kwargs)
74 document_prompt = PromptTemplate(
75 input_variables=["page_content"], template="Context:\n{page_content}"
76 )
77 combine_documents_chain = StuffDocumentsChain(
78 llm_chain=llm_chain,
79 document_variable_name="context",
80 document_prompt=document_prompt,
81 callbacks=callbacks,
82 )
---> 84 return cls(
85 combine_documents_chain=combine_documents_chain,
86 callbacks=callbacks,
87 **kwargs,
88 )
File ~/anaconda3/lib/python3.11/site-packages/langchain/load/serializable.py:75, in Serializable.__init__(self, **kwargs)
74 def __init__(self, **kwargs: Any) -> None:
---> 75 super().__init__(**kwargs)
76 self._lc_kwargs = kwargs
File ~/anaconda3/lib/python3.11/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for RetrievalQA
retriever
Can't instantiate abstract class BaseRetriever with abstract method _get_relevant_documents (type=type_error)
```
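The message hints (an assumption from the traceback alone) that a `retriever` value is not a concrete, up-to-date retriever instance; in recent versions each entry should carry something like `vectorstore.as_retriever()`:

```python
# Each "retriever" should be an instantiated retriever, e.g.:
retriever_infos = [
    {
        "name": "state of the union",
        "description": "Good for answering questions about the state of the union",
        "retriever": vectorstore.as_retriever(),  # an instance, not a class
    },
]
```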
### Suggestion:
_No response_ | Issue: Dynamically select from multiple retrievers | https://api.github.com/repos/langchain-ai/langchain/issues/10561/comments | 1 | 2023-09-13T21:34:13Z | 2023-09-13T22:29:56Z | https://github.com/langchain-ai/langchain/issues/10561 | 1,895,297,545 | 10,561 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Update RetrievalQA chain so that custom prompts can accept parameters other than input_documents and question.
Current functionality is limited by the call to StuffDocumentsChain:

```python
answer = self.combine_documents_chain.run(
    input_documents=docs, question=question, callbacks=_run_manager.get_child()
)
```

Any additional parameters required aren't passed, including chat history.

A two-line code update is required:

```python
inputs['input_documents'] = docs
answer = self.combine_documents_chain.run(
    inputs, callbacks=_run_manager.get_child()
)
```
### Motivation
Improve the flexibility of the RetrievalQA chain by enabling the system message to be customised and chat history to be passed, so GPT can refer back to previous answers, customise the language of the answer in the QA chain, etc.
### Your contribution
Will submit PR with above change in-line with contributing guidelines. | RetrievalQA custom prompt to accept prompts other than context and question e.g. language for use in Sequential Chain | https://api.github.com/repos/langchain-ai/langchain/issues/10557/comments | 3 | 2023-09-13T18:59:17Z | 2024-02-11T16:14:37Z | https://github.com/langchain-ai/langchain/issues/10557 | 1,895,083,788 | 10,557 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain 0.0.281
Platform: Centos
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Hi,

I have two vector stores:

```python
splitter = RecursiveCharacterTextSplitter(chunk_size=400, chunk_overlap=50)
splits_1 = splitter.split_documents(docs_1)
splits_2 = splitter.split_documents(docs_2)

store1 = Chroma.from_documents(documents=splits_1, embedding=HuggingFaceEmbeddings())
store2 = Chroma.from_documents(documents=splits_2, embedding=HuggingFaceEmbeddings())
```

Then I use store2 to do a similarity search, and it returns results from splits_1, which is very weird. Can someone please help?
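A likely explanation (an assumption worth verifying): both calls default to the same Chroma collection, so the two stores end up sharing documents. Giving each its own `collection_name` keeps them separate:

```python
store1 = Chroma.from_documents(documents=splits_1,
                               embedding=HuggingFaceEmbeddings(),
                               collection_name="store1")
store2 = Chroma.from_documents(documents=splits_2,
                               embedding=HuggingFaceEmbeddings(),
                               collection_name="store2")
```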
Thanks
Tom
### Expected behavior
Each vector store should search only within its own documents. | LangChain's Chroma similarity_search return results from other db | https://api.github.com/repos/langchain-ai/langchain/issues/10555/comments | 7 | 2023-09-13T17:42:19Z | 2024-05-17T16:06:33Z | https://github.com/langchain-ai/langchain/issues/10555 | 1,894,977,351 | 10,555
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```python
import pandas as pd
import pandas_gpt

df = pd.read_csv('aisc-shapes-database-v16.0.csv', index_col=0, header=0, usecols=["A:F"],
                 names=["Type", "EDI_Std_Nomenclature", "AISC_Manual_Label", "T_F", "W", "Area"])

df.ask('what is the area of W12X12?')
```
I need help getting this file to load with pandas.read_csv.
[aisc-shapes-database-v16.0.csv](https://github.com/langchain-ai/langchain/files/12600284/aisc-shapes-database-v16.0.csv)
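Given the UnicodeDecodeError reported below, one possible fix (an assumption: byte 0x96 is an en dash in Windows-1252, so the file is likely cp1252-encoded rather than UTF-8) is to pass an explicit encoding:

```python
import pandas as pd

# Try the Windows-1252 codec instead of the default UTF-8.
df = pd.read_csv('aisc-shapes-database-v16.0.csv', encoding='cp1252')
```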
### Suggestion:
This is my error message:

```
File "c:\Users\camer\import pandas as pd.py", line 4, in <module>
df = pd.read_csv('aisc-shapes-database-v16.0.csv', index_col=0, header=0, usecols = ["A:F"], names = [
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\camer\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\io\parsers\readers.py", line 948, in read_csv
return _read(filepath_or_buffer, kwds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\camer\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\io\parsers\readers.py", line 611, in _read
parser = TextFileReader(filepath_or_buffer, **kwds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\camer\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\io\parsers\readers.py", line 1448, in __init__
self._engine = self._make_engine(f, self.engine)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\camer\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\io\parsers\readers.py", line 1723, in _make_engine
return mapping[engine](f, **self.options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\camer\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\io\parsers\c_parser_wrapper.py", line 93, in __init__
self._reader = parsers.TextReader(src, **kwds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "parsers.pyx", line 579, in pandas._libs.parsers.TextReader.__cinit__
File "parsers.pyx", line 668, in pandas._libs.parsers.TextReader._get_header
File "parsers.pyx", line 879, in pandas._libs.parsers.TextReader._tokenize_rows
File "parsers.pyx", line 890, in pandas._libs.parsers.TextReader._check_tokenize_status
File "parsers.pyx", line 2050, in pandas._libs.parsers.raise_parser_error
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x96 in position 703: invalid start byte
```
 | Issue: Parsing issue | https://api.github.com/repos/langchain-ai/langchain/issues/10554/comments | 2 | 2023-09-13T17:23:30Z | 2023-12-20T16:05:06Z | https://github.com/langchain-ai/langchain/issues/10554 | 1,894,953,189 | 10,554
[
"langchain-ai",
"langchain"
] | I've been searching for a large-context LLM with a relatively low parameter count suitable for local execution on multiple T4 GPUs or a single A100. My primary goal is to summarize extensive financial reports. While I came across FinGPT v1, it seems it isn't hosted on HuggingFace.
However, I did find chatglm-6b, which serves as the foundation for FinGPT v1. This model is accessible on HuggingFace, but I'm facing issues loading it.
Here's a snippet that successfully loads and uses the model outside Langchain:
```python
from transformers import AutoModel, AutoTokenizer

model_name = "THUDM/chatglm2-6b"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
# model = AutoModel.from_pretrained(model_name, trust_remote_code=True).cuda()
# Adjust as needed; currently only 4/8-bit quantization is supported:
# model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True).quantize(4).cuda()

import torch

has_cuda = torch.cuda.is_available()
# has_cuda = False  # force CPU

if has_cuda:
    # model = AutoModel.from_pretrained("THUDM/chatglm2-6b-int4", trust_remote_code=True).half().cuda()  # 3.92
    model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True).quantize(4).cuda()
else:
    model = AutoModel.from_pretrained("THUDM/chatglm2-6b-int4", trust_remote_code=True).half()  # float()

response, history = model.chat(tokenizer, f"Summarize this in a few words: {a}", history=[])
```
But when I try the following, to use it in Langchain:

```python
from langchain.llms import HuggingFacePipeline

llm = HuggingFacePipeline.from_model_id(
    model_id="THUDM/chatglm-6b",
    task="text-generation",
    model_kwargs={"temperature": 0, "max_length": 64},
)
```
I encounter this error:
```
ValueError: Tokenizer class ChatGLMTokenizer does not exist or is not currently imported.
```
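One thing that may be worth trying (an assumption about how `from_model_id` forwards kwargs to the tokenizer and model): passing `trust_remote_code` through `model_kwargs`, since the custom `ChatGLMTokenizer` only loads when remote code is enabled:

```python
llm = HuggingFacePipeline.from_model_id(
    model_id="THUDM/chatglm-6b",
    task="text-generation",
    model_kwargs={"trust_remote_code": True, "temperature": 0, "max_length": 64},
)
```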
From what I gather, the ChatGLM model cannot be passed directly to HuggingFace's pipeline. While the Langchain documentation does mention using ChatGLM as a local model, it seems to primarily focus on using it via an API endpoint:
```python
endpoint_url = "http://127.0.0.1:8000"

# direct access endpoint in a proxied environment
# os.environ['NO_PROXY'] = '127.0.0.1'

llm = ChatGLM(
    endpoint_url=endpoint_url,
    max_token=80000,
    history=[["我将从美国到中国来旅游,出行前希望了解中国的城市", "欢迎问我任何问题。"]],
    top_p=0.9,
    model_kwargs={"sample_model_args": False},
)
```
Would anyone have insights on how to correctly load ChatGLM for tasks within Langchain?
### Suggestion:
_No response_ | Issue: Can I load THUDM/chatglm-6b? | https://api.github.com/repos/langchain-ai/langchain/issues/10553/comments | 5 | 2023-09-13T16:32:12Z | 2024-02-17T16:07:28Z | https://github.com/langchain-ai/langchain/issues/10553 | 1,894,880,317 | 10,553 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
The `index` [API Reference document](https://api.python.langchain.com/en/latest/indexes/langchain.indexes._api.index.html) that is linked in the [Indexing documentation](https://python.langchain.com/docs/modules/data_connection/indexing#quickstart) returns a 404 error.
### Idea or request for content:
Please include detailed documentation for `index`, covering how to use `SQLRecordManager` correctly. | DOC: inexistent documentation for index | https://api.github.com/repos/langchain-ai/langchain/issues/10552/comments | 2 | 2023-09-13T16:04:16Z | 2023-12-20T16:05:11Z | https://github.com/langchain-ai/langchain/issues/10552 | 1,894,837,821 | 10,552
[
"langchain-ai",
"langchain"
] | ### System Info
langchain_version: "0.0.287"
library: "langchain"
library_version: "0.0.287"
platform: "Linux-6.1.0-12-amd64-x86_64-with-glibc2.36"
py_implementation: "CPython"
runtime: "python"
runtime_version: "3.11.2"
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
After following the [indexing instructions](https://python.langchain.com/docs/modules/data_connection/indexing), `index` stores the documents in a Redis vectorstore, but it does so outside the vectorstore's index.
```python
import os, time, json, openai
from langchain.vectorstores.redis import Redis
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.indexes import SQLRecordManager, index
from langchain.schema import Document
from datetime import datetime
from pathlib import Path

openai.api_key = os.environ['OPENAI_API_KEY']

VECTORS_INDEX_NAME = 'Vectors'
COLLECTION_NAME = 'DocsDB'
NAMESPACE = f"Redis/{COLLECTION_NAME}"
REDIS_URL = "redis://10.0.1.21:6379"

embeddings = OpenAIEmbeddings()

record_manager = SQLRecordManager(NAMESPACE, db_url="sqlite:///cache_Redis.sql")
record_manager.create_schema()

rds_vectorstore = Redis.from_existing_index(
    embeddings,
    index_name=VECTORS_INDEX_NAME,
    redis_url=REDIS_URL,
    schema='Redis_schema.yaml'
)

index(
    document,
    record_manager,
    rds_vectorstore,
    cleanup="full",  # None: for first document load; "incremental": for following documents
    source_id_key="title",
)
```
When exploring the Redis vectorstore, all `documents` were loaded outside the specified `VECTORS_INDEX_NAME`.

When `documents` are loaded into the vectorstore without the `RecordManager` `index`, they are created inside the specified `VECTORS_INDEX_NAME`, using the following code:
```python
rds = Redis.from_documents(
    document,
    embeddings,
    index_name=VECTORS_INDEX_NAME,
    redis_url=REDIS_URL,
    index_schema='Redis_schema.yaml'
)
```
### Expected behavior
`Documents` loaded into a Redis vectorstore using `index` `RecordManager` should be created inside the vectorstore's index. | SQLRecordManager index adds documents outside existing Redis vectorstore index | https://api.github.com/repos/langchain-ai/langchain/issues/10551/comments | 3 | 2023-09-13T15:58:42Z | 2024-01-30T00:46:01Z | https://github.com/langchain-ai/langchain/issues/10551 | 1,894,829,054 | 10,551 |
[
"langchain-ai",
"langchain"
] | ### Feature request
The MMR search_type is not implemented for the Google Vertex AI Matching Engine vector store (Matching Engine's new name is Vector Search).

I am getting the error `NotImplementedError`.

Below is the code that I used:

```python
retriever = me.as_retriever(
    search_type="mmr",
    search_kwargs={
        "k": 10,
        "search_distance": 0.6,
        "fetch_k": 15,
        "lambda_mult": 0.7,
    },
)
```
Please implement it as soon as possible, and could the team also provide an ETA?
### Motivation
I am working with a client that uses only Google Vertex AI components for creating LLM chatbot agents over various unstructured document types. We are not getting optimal results with the default `search_type="similarity"`, and we understand that results can improve a lot with MMR search. Hence we kindly request the team to add the `search_type="mmr"` feature.
### Your contribution
Can provide feedback on the new feature performance | MMR search_type not implemented for Google Vertex AI Matching Engine Vector Store (new name of Matching Engine- Vector Search) | https://api.github.com/repos/langchain-ai/langchain/issues/10550/comments | 1 | 2023-09-13T15:58:41Z | 2024-03-16T16:04:41Z | https://github.com/langchain-ai/langchain/issues/10550 | 1,894,829,007 | 10,550 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
The code for my model for sentiment analysis (this works; the problem is in the next part of my code):

```python
from datasets import load_dataset, Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer, sample_dataset
from transformers import pipeline
import pandas as pd
import langchain

# df = pd.read_csv("C:/Users/sanja/OneDrive/Desktop/Trillo InternShip/train.csv", encoding='ISO-8859-1')
df = pd.read_csv("C:/Users/sanja/OneDrive/Desktop/Trillo InternShip/train-utf-8.csv")

# Create a mapping from string labels to integer labels
label_mapping = {"negative": 0, "neutral": 1, "positive": 2}  # Customize this mapping as needed

# Apply the mapping to the "sentiment" column
df['label'] = df['label'].map(label_mapping)

# Specify the columns for text (input) and label (output)
text_column = "selected_text"
label_column = "label"

# Assuming you have already preprocessed and tokenized your text data
dataset = Dataset.from_pandas(df)

num_samples_per_class = 8

# Simulate the few-shot regime by sampling 8 examples per class
train_dataset = sample_dataset(dataset, label_column=label_column, num_samples=num_samples_per_class)
eval_dataset = dataset  # Assuming you want to evaluate on the same DataFrame

# Load a SetFit model from Hub
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

# Create trainer
trainer1 = SetFitTrainer(
    model=model,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss_class=CosineSimilarityLoss,
    metric="accuracy",
    batch_size=16,
    num_iterations=20,  # The number of text pairs to generate for contrastive learning
    num_epochs=1,  # The number of epochs to use for contrastive learning
    column_mapping={text_column: "text", label_column: "label"}  # Map dataset columns to text/label expected by trainer
)

# Train and evaluate
trainer1.train()
metrics = trainer1.evaluate()

# Pushing model to hub
trainer1.push_to_hub("Sanjay1234/Trillo-Project")
```
But here I get a problem when I do the transformation:

```python
from langchain.chains import TransformChain, LLMChain, SimpleSequentialChain
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer, sample_dataset
from transformers import pipeline

def transform_func(text):
    shortened_text = "\n\n".join(text.split("\n\n")[:3])
    return shortened_text

transform_chain = TransformChain(
    input_variables=["text"], output_variables=["output_text"], transform=transform_func
)

# I get a problem here
llm_chain = LLMChain(
    llm={"llm": "Sanjay1234/Trillo-Project"},  # Provide the llm parameter as a dictionary
    prompt={"prompt": "Summarize this text:"}
)

sequential_chain = SimpleSequentialChain(chains=[transform_chain, llm_chain])

text = "This is a long text. I want to transform it to only the first 3 paragraphs."
transformed_text = sequential_chain.run(text)
print(transformed_text)
```
I get the following error:

```
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
Cell In[26], line 1
----> 1 llm_chain = LLMChain(
      2     llm={"llm": "Sanjay1234/Trillo-Project"},  # Provide the llm parameter as a dictionary
      3     prompt={"prompt": "Summarize this text:"}
      4 )
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\langchain\load\serializable.py:75, in Serializable.__init__(self, **kwargs)
     74 def __init__(self, **kwargs: Any) -> None:
---> 75     super().__init__(**kwargs)
     76     self._lc_kwargs = kwargs
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\pydantic\main.py:341, in pydantic.main.BaseModel.__init__()
ValidationError: 2 validation errors for LLMChain
prompt
  Can't instantiate abstract class BasePromptTemplate with abstract methods format, format_prompt (type=type_error)
llm
  Can't instantiate abstract class BaseLanguageModel with abstract methods agenerate_prompt, apredict, apredict_messages, generate_prompt, invoke, predict, predict_messages (type=type_error)
```
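The validation error says `llm` and `prompt` cannot be plain dicts. A sketch of a valid construction (an assumption about the intent; note that a SetFit classifier is not a text-generation LLM, so some generative model has to play the `llm` role, and the `flan-t5-base` choice here is purely illustrative):

```python
from langchain.chains import LLMChain
from langchain.llms import HuggingFaceHub
from langchain.prompts import PromptTemplate

# LLMChain expects real objects: a language-model instance and a PromptTemplate.
prompt = PromptTemplate.from_template("Summarize this text:\n\n{text}")
llm = HuggingFaceHub(repo_id="google/flan-t5-base")  # illustrative generative model
llm_chain = LLMChain(llm=llm, prompt=prompt)
```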
### Suggestion:
_No response_ | Issue: Not sure whether my transformation using the model I created was correct, as I am getting an error. | https://api.github.com/repos/langchain-ai/langchain/issues/10549/comments | 5 | 2023-09-13T15:50:31Z | 2023-12-20T16:05:16Z | https://github.com/langchain-ai/langchain/issues/10549 | 1,894,815,399 | 10,549 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```python
# imports
import langchain
import os
from apikey import apikey
import openai
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain import OpenAI
from langchain.document_loaders import UnstructuredFileLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.chains import RetrievalQA
import streamlit as st
from langchain.document_loaders import TextLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.text_splitter import CharacterTextSplitter
import nltk

nltk.download("punkt")

# loading file
loader = UnstructuredFileLoader("aisc-shapes-database-v16.0.csv", "aisc-shapes-database-v16.0_a1085.pdf")
documents = loader.load()
len(documents)

text_splitter = CharacterTextSplitter(chunk_size=1000000000, chunk_overlap=0)
text = text_splitter.split_documents(documents)

embeddings = OpenAIEmbeddings()  # (openai_api_key = os.environ['OPENAI_API_KEY'])
doc_search = Chroma.from_documents(text, embeddings)

chain = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=doc_search.as_retriever(search_kwargs={"k": 1}))

query = "What is the area of wide flange W44X408"
result = chain.run(query)
print(result)

model.save('CIVEN-GPT')
```
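One possible culprit (an assumption about the reported failure): `UnstructuredFileLoader` expects a single file path, so loading the CSV and the PDF might be done with separate loaders and combined:

```python
from langchain.document_loaders import CSVLoader, UnstructuredFileLoader

csv_docs = CSVLoader("aisc-shapes-database-v16.0.csv").load()
pdf_docs = UnstructuredFileLoader("aisc-shapes-database-v16.0_a1085.pdf").load()
documents = csv_docs + pdf_docs  # feed both sets of documents downstream
```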
### Suggestion:
It runs without the .csv file, so I'm assuming the file is the problem; however, I would like to be able to include the data in the file. | Issue: Want to get this to run, im suspecting that the csv. file is causing the problem | https://api.github.com/repos/langchain-ai/langchain/issues/10544/comments | 2 | 2023-09-13T14:15:08Z | 2023-12-20T16:05:20Z | https://github.com/langchain-ai/langchain/issues/10544 | 1,894,630,727 | 10,544
[
"langchain-ai",
"langchain"
] | ### Feature request
I want to intercept the input prompt and the output of a chain, so I added a custom callback to the chain (derived from _BaseCallbackHandler_), but the input prompt seems quite tricky to retrieve.
The _on_chain_start_ method has the information hidden in the "serialized" variable, but accessing it is quite cumbersome. I let you judge by yourself:
```python
def on_chain_start(self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) -> Any:
    """Run when chain starts running."""
    print(serialized["kwargs"]["prompt"]["kwargs"]["messages"][0]["kwargs"]["prompt"]["kwargs"]["template"])
```
Note that the format of _serialized_ changes from time to time, for a reason I don't know, and it doesn't seem to be documented. This makes it unusable. Moreover, the "template" value is not the final prompt passed to the LLM after the variables have been substituted.
As for the _on_text_ method, it contains a formatted and colored text:
> Prompt after formatting:
> Human: prompt in green
Are there simpler ways to retrieve the input prompt from a callback handler?
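One approach that may already work (a sketch: `on_llm_start` receives the fully formatted prompts, so intercepting at the LLM level sidesteps `serialized` entirely):

```python
from typing import Any, Dict, List

from langchain.callbacks.base import BaseCallbackHandler

class PromptLogger(BaseCallbackHandler):
    def on_llm_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
    ) -> None:
        # prompts holds the final strings sent to the LLM, variables filled in.
        for p in prompts:
            print(p)
```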
### Motivation
Showing both input and output could help debugging and it may be desirable to customize the outputs given by the _verbose_ mode.
### Your contribution
Maybe simply add the input message in the parameters of the _on_chain_start_ method, regardless of the way it has been generated. | Get input prompt in a callback handler | https://api.github.com/repos/langchain-ai/langchain/issues/10542/comments | 3 | 2023-09-13T13:42:49Z | 2024-05-07T16:04:58Z | https://github.com/langchain-ai/langchain/issues/10542 | 1,894,566,864 | 10,542 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
While trying to load a GPTQ model through a HuggingFace Pipeline and then run an agent on it, the inference time is really slow.
```python
# Load configuration from the model to avoid warnings
generation_config = GenerationConfig.from_pretrained(model_name_or_path)

# Create a pipeline for text generation
pipe = pipeline(
    task="text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=1024,
    do_sample=True,
    repetition_penalty=1.15,
    generation_config=generation_config,
    use_cache=False
)

local_llm = HuggingFacePipeline(pipeline=pipe)
logging.info("Local LLM Loaded")
```
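One setting that may matter (an assumption: nothing else in the setup is the bottleneck): `use_cache=False` disables the key/value cache, which makes autoregressive generation dramatically slower, so re-enabling it is worth a try, reusing the objects defined above:

```python
pipe = pipeline(
    task="text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=1024,
    do_sample=True,
    repetition_penalty=1.15,
    generation_config=generation_config,
    use_cache=True,  # keep the KV cache; recomputing it every step is very slow
)
```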
The model is loaded on the GPU:

However the inference is really slow. I am waiting around 10 minutes for one iteration to complete.
```python
agent = create_csv_agent(
    local_llm,
    "titanic.csv",
    verbose=True,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    handle_parsing_errors=True,
)

agent.run("What is the total number of rows in titanic.csv")
```
Also, I get an error message: `Observation: len() is not a valid tool, try one of [python_repl_ast].` How can I enable all tools so that the agent can use them?
### Suggestion:
No suggestion, require help. | Issue: Agents using GPTQ models from huggingface is really slow. | https://api.github.com/repos/langchain-ai/langchain/issues/10541/comments | 2 | 2023-09-13T13:26:28Z | 2023-12-20T16:05:26Z | https://github.com/langchain-ai/langchain/issues/10541 | 1,894,532,433 | 10,541 |
[
"langchain-ai",
"langchain"
] | ### Feature request
New features to support Baidu's Qianfan
### Motivation
I believe that Baidu's recently launched LLM platform, Qianfan, which offers a range of APIs, will soon become widely adopted. It would be beneficial to consider incorporating features that facilitate seamless integration between Langchain and Qianfan, making it easier for developers to build applications.
### Your contribution
https://github.com/langchain-ai/langchain/pull/10496 | Will langchain be able to support Baidu Qianfan in the future? | https://api.github.com/repos/langchain-ai/langchain/issues/10539/comments | 2 | 2023-09-13T12:55:51Z | 2023-09-28T01:19:12Z | https://github.com/langchain-ai/langchain/issues/10539 | 1,894,471,329 | 10,539 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain 0.0.287
In output_parsers there is a `SimpleJsonOutputParser` defined (json.py). This looks very reasonable for easily getting answers back in a structured format. However, the class does not work, as it does not implement the method `get_format_instructions`, and thus calling it raises a `NotImplementedError`. In addition, there is no documentation and the class is not imported into the `__init__.py` of the directory.
Is this intended behavior? I am OK to submit a small patch; for my use case the class comes in very handy and has less complexity than the approach via Pydantic.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce:
1. `from langchain.output_parsers.json import SimpleJsonOutputParser`
2. `output_parser = SimpleJsonOutputParser()`
3. `format_instructions = output_parser.get_format_instructions()`
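A minimal patch sketch (an assumption about the intended behaviour, mirroring how other parsers supply instructions):

```python
from langchain.output_parsers.json import SimpleJsonOutputParser

# Hypothetical subclass supplying the missing format instructions.
class JsonWithInstructions(SimpleJsonOutputParser):
    def get_format_instructions(self) -> str:
        return "Return the answer as a valid JSON object."
```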
### Expected behavior
SimpleJsonOutputParser works like any other output parser. | SimpleJsonOutputParser not working | https://api.github.com/repos/langchain-ai/langchain/issues/10538/comments | 2 | 2023-09-13T12:50:53Z | 2023-12-20T16:05:31Z | https://github.com/langchain-ai/langchain/issues/10538 | 1,894,462,413 | 10,538 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Is there an agent toolkit for Google Calendar?
### Suggestion:
_No response_ | Issue: google calendar agent | https://api.github.com/repos/langchain-ai/langchain/issues/10536/comments | 1 | 2023-09-13T11:46:22Z | 2023-09-14T02:37:27Z | https://github.com/langchain-ai/langchain/issues/10536 | 1,894,354,179 | 10,536 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
1. I downloaded the original LangSmith walkthrough notebook and modified it to run an AzureOpenAI LLM instead of OpenAI.
2. After a successful run of the first example, I went to LangSmith, selected the first LLM call, and opened it in the Playground.
3. I filled in the OpenAI key and hit 'Start'.

Here is the error I get:

```
Error: Invalid namespace: $ -> {"id":["langchain","chat_models","azure_openai","AzureChatOpenAI"],"lc":1,"type":"constructor","kwargs":{"temperature":0,"openai_api_key":{"id":["xxx"],"lc":1,"type":"secret"},"deployment_name":"chat-gpt","openai_api_base":"yyy","openai_api_type":"azure","openai_api_version":"2023-03-15-preview"}}
```

I have played with different ways of setting OPEN_API_KEY, but none of them works; the same error is consistently displayed.

So is it a bug, or does Azure OpenAI not work in the Playground by design?
### Suggestion:
_No response_ | Issue: Is LangSmith playground compatible with Azure OpenAI? | https://api.github.com/repos/langchain-ai/langchain/issues/10533/comments | 14 | 2023-09-13T10:48:45Z | 2024-02-07T17:12:48Z | https://github.com/langchain-ai/langchain/issues/10533 | 1,894,264,876 | 10,533 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain deployment on SageMaker
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [x] LLMs/Chat Models
- [x] Embedding Models
- [x] Prompts / Prompt Templates / Prompt Selectors
- [x] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [x] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain import SagemakerEndpoint
from langchain.llms.sagemaker_endpoint import LLMContentHandler
from typing import Dict
import json


class HFContentHandler(LLMContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes:
        input_dict = {
            "input": {
                "question": prompt,
                "context": model_kwargs
            }
        }
        input_str = json.dumps(input_dict)
        return input_str.encode('utf-8')

    def transform_output(self, output: bytes) -> str:
        response_json = output.read().decode('utf-8')
        res = json.loads(response_json)
        # Stripping away the input prompt from the returned response
        ans = res[0]['generated_text'][self.len_prompt:]
        ans = ans[:ans.rfind("Human")].strip()
        return ans


# Example parameters
parameters = {
    'do_sample': True,
    'top_p': 0.3,
    'max_new_tokens': 1024,
    'temperature': 0.6,
    'watermark': True
}

llm = SagemakerEndpoint(
    endpoint_name="huggingface-pytorch-inference-**********",
    region_name="us-east-1",
    model_kwargs=parameters,
    content_handler=HFContentHandler(),
)

from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationChain

memory = ConversationBufferMemory()

# Creating a chain with buffer memory to keep track of conversations
chain = ConversationChain(llm=llm, memory=memory)

chain.predict({"input": {"question": "this is test", "context": "this is answer"}})
```
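A possible fix (an assumption: `ConversationChain` declares a single `input` key that must be passed as a keyword argument, so extra context belongs in the content handler or the prompt rather than in a nested input dict):

```python
# Pass "input" as a keyword with a plain string value:
response = chain.predict(input="this is test")
```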
### Expected behavior
There is an error in the content handler; please help to correct it.
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[87], line 8
5 # Creating a chain with buffer memory to keep track of conversations
6 chain = ConversationChain(llm=llm, memory=memory)
----> 8 chain.predict({"input": {"question": "this is test", "context": "this is answer"}})
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/langchain/chains/llm.py:255, in LLMChain.predict(self, callbacks, **kwargs)
240 def predict(self, callbacks: Callbacks = None, **kwargs: Any) -> str:
241 """Format prompt with kwargs and pass to LLM.
242
243 Args:
(...)
253 completion = llm.predict(adjective="funny")
254 """
--> 255 return self(kwargs, callbacks=callbacks)[self.output_key]
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/langchain/chains/base.py:268, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
232 def __call__(
233 self,
234 inputs: Union[Dict[str, Any], Any],
(...)
241 include_run_info: bool = False,
242 ) -> Dict[str, Any]:
243 """Execute the chain.
244
245 Args:
(...)
266 `Chain.output_keys`.
267 """
--> 268 inputs = self.prep_inputs(inputs)
269 callback_manager = CallbackManager.configure(
270 callbacks,
271 self.callbacks,
(...)
276 self.metadata,
277 )
278 new_arg_supported = inspect.signature(self._call).parameters.get("run_manager")
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/langchain/chains/base.py:425, in Chain.prep_inputs(self, inputs)
423 external_context = self.memory.load_memory_variables(inputs)
424 inputs = dict(inputs, **external_context)
--> 425 self._validate_inputs(inputs)
426 return inputs
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/langchain/chains/base.py:179, in Chain._validate_inputs(self, inputs)
177 missing_keys = set(self.input_keys).difference(inputs)
178 if missing_keys:
--> 179 raise ValueError(f"Missing some input keys: {missing_keys}")
```
ValueError: Missing some input keys: {'input'} | ValueError: Missing some input keys: {'input'} | https://api.github.com/repos/langchain-ai/langchain/issues/10531/comments | 7 | 2023-09-13T09:36:13Z | 2024-05-22T16:07:17Z | https://github.com/langchain-ai/langchain/issues/10531 | 1,894,137,855 | 10,531 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
https://python.langchain.com/docs/integrations/text_embedding/sentence_transformers is unreachable.
### Suggestion:
_No response_ | Can not access to url: https://python.langchain.com/docs/integrations/text_embedding/sentence_transformers | https://api.github.com/repos/langchain-ai/langchain/issues/10530/comments | 3 | 2023-09-13T09:30:31Z | 2023-12-25T16:08:20Z | https://github.com/langchain-ai/langchain/issues/10530 | 1,894,127,953 | 10,530 |
[
"langchain-ai",
"langchain"
] | hi team,
In a LangChain agent, are there any recommendations for compressing the content? I'm hoping to reduce token usage.
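One hedged option is to compress older turns instead of replaying them verbatim; `ConversationSummaryBufferMemory` summarizes messages once a token budget is exceeded. The `llm`, `tools`, and limit below are illustrative:
```python
from langchain.agents import AgentType, initialize_agent
from langchain.memory import ConversationSummaryBufferMemory

# Older messages are folded into a running summary once the buffer exceeds
# max_token_limit, which caps how much history is replayed each turn.
memory = ConversationSummaryBufferMemory(
    llm=llm, max_token_limit=1000,
    memory_key="chat_history", return_messages=True,
)
agent = initialize_agent(
    tools, llm,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    memory=memory,
)
```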
Setting max token was not working to reduce the token usage. | compress content when using gpt-4 | https://api.github.com/repos/langchain-ai/langchain/issues/10529/comments | 2 | 2023-09-13T09:20:29Z | 2023-12-20T16:05:41Z | https://github.com/langchain-ai/langchain/issues/10529 | 1,894,107,226 | 10,529 |
[
"langchain-ai",
"langchain"
] | ### Feature request
As of today, if a tool crashes, the whole agent or chain crashes. From a user's point of view, it is acceptable for a specific tool to be unavailable.
### Motivation
The user experience should be maintained even if a dependency is broken. Plus, catching tool errors by default can improve the software's reliability.
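For context, per-tool opt-in handling already exists via the `handle_tool_error` flag on `BaseTool`; this request is about making that behaviour the default. A minimal sketch of the current opt-in approach (the tool name and message are illustrative):
```python
from langchain.tools import Tool
from langchain.tools.base import ToolException

def flaky_search(query: str) -> str:
    raise ToolException("search backend is down")  # simulated dependency failure

search_tool = Tool(
    name="search",  # hypothetical tool name
    func=flaky_search,
    description="Search the web.",
    # When set, the ToolException message is returned to the agent as the
    # observation instead of crashing the whole run.
    handle_tool_error=True,
)
```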
### Relates
- https://github.com/langchain-ai/langchain/issues/8348 | Handle by default `ToolException` | https://api.github.com/repos/langchain-ai/langchain/issues/10528/comments | 2 | 2023-09-13T09:05:35Z | 2024-02-06T16:30:01Z | https://github.com/langchain-ai/langchain/issues/10528 | 1,894,078,446 | 10,528 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Is there any way in LangChain to fetch documents from multiple vectorstores and then combine them to answer the question?
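One pattern that may help here is LangChain's `MergerRetriever`, which fans a query out to several retrievers and merges the results before they reach a QA chain. A minimal sketch, assuming two vectorstores and an `llm` already exist (names are illustrative):
```python
from langchain.retrievers import MergerRetriever
from langchain.chains import RetrievalQA

# vectorstore_a and vectorstore_b are assumed to be pre-built stores.
merged = MergerRetriever(retrievers=[
    vectorstore_a.as_retriever(search_kwargs={"k": 3}),
    vectorstore_b.as_retriever(search_kwargs={"k": 3}),
])

qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=merged)
```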
### Suggestion:
_No response_ | Issue: How to retrieve and search from multiple collections or directories? | https://api.github.com/repos/langchain-ai/langchain/issues/10526/comments | 2 | 2023-09-13T07:03:24Z | 2023-12-20T16:05:46Z | https://github.com/langchain-ai/langchain/issues/10526 | 1,893,881,183 | 10,526 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain version = 0.0.281
python = 3.11
opensearch-py = 2.3.0
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am trying to do metadata-based filtering alongside the query execution using `OpenSearchVectorSearch.similarity_search()`. But when I use `metadata_field` and `metadata_filter`, the search doesn't seem to take them into account and still returns results outside of those filters.
Here is my code:
```python
response = es.similarity_search(
    query="<sample query text>",
    k=4,
    metadata_field="title",
    metadata_filter={"match": {"title": "<sample doc title>"}},
)
```
Here `es` is the `OpenSearchVectorSearch` object for `index1`
The output structure is like this:
`[Document(page_content = ' ', metadata={'vector_field' : [], 'text' : ' ', 'metadata' : {'source' : ' ', 'title' : ' ' }})]`
Here the title I see is not the title I specified in my query.
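As a hedged aside: in recent LangChain versions, filtering for OpenSearch approximate k-NN search is usually passed via the `boolean_filter` keyword rather than `metadata_filter`, and metadata fields are nested under `metadata.*` in the default document layout. A sketch of that alternative (the field path assumes the default nesting):
```python
# Sketch only: boolean_filter is the documented filter hook for approximate
# k-NN search; "metadata.title" assumes the default metadata nesting.
response = es.similarity_search(
    query="<sample query text>",
    k=4,
    boolean_filter={"match": {"metadata.title": "<sample doc title>"}},
)
```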
Steps to reproduce:
1. Create an Opensearch index with multiple documents.
2. Run similarity_search() query with a metadata_field and/or metadata_filter
### Expected behavior
The query should be run against the specified `metadata_field` and `metadata_filter` and in output, I should only see the correct document name I specified in `metadata_field` and `metadata_filter` | Opensearch metadata_field and metadata_filter not working | https://api.github.com/repos/langchain-ai/langchain/issues/10524/comments | 7 | 2023-09-13T05:51:57Z | 2024-04-23T19:05:30Z | https://github.com/langchain-ai/langchain/issues/10524 | 1,893,792,079 | 10,524 |
[
"langchain-ai",
"langchain"
] | ### Feature request
An option for conversational chains to limit their context to a set number of chat messages
### Motivation
I am in the process of building a document analysis tool using LangChain, but when the chat chain becomes too long, I get an error stating that the OpenAI token limit has been reached, because the context keeps growing. Is there some way I could limit the context to only a certain number of messages instead of including all of them?
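For what it's worth, this is roughly what `ConversationBufferWindowMemory` already provides: it keeps only the last `k` exchanges in the prompt. A minimal sketch, assuming an `llm` is configured elsewhere:
```python
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferWindowMemory

# Keep only the last 5 human/AI exchanges in the context window.
memory = ConversationBufferWindowMemory(k=5)
chain = ConversationChain(llm=llm, memory=memory)
```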
### Your contribution
No, I am very new to using LangChain and am having a hard time understanding the codebase, so I am afraid there is nothing I could do to help. | only use past x messages | https://api.github.com/repos/langchain-ai/langchain/issues/10521/comments | 2 | 2023-09-13T02:42:51Z | 2023-12-20T16:05:51Z | https://github.com/langchain-ai/langchain/issues/10521 | 1,893,622,488 | 10,521
[
"langchain-ai",
"langchain"
] | hi team,
I am using Azure OpenAI gpt-4-32k as the LLM in LangChain. I implemented an OpenAI plugin via an agent, but the cost is increasing at an incredible rate. I think the agent asks the GPT-4 model to understand the plugin's OpenAPI JSON, which makes the token usage grow. Any recommendations to reduce the token usage in the agent?
thanks | Reduce azure openai token usage | https://api.github.com/repos/langchain-ai/langchain/issues/10520/comments | 2 | 2023-09-13T01:23:29Z | 2023-12-20T16:05:56Z | https://github.com/langchain-ai/langchain/issues/10520 | 1,893,566,283 | 10,520 |
[
"langchain-ai",
"langchain"
] | ### Feature request
BaseStringMessagePromptTemplate.from_template supports the template_format variable, while BaseStringMessagePromptTemplate.from_template_file does not.
### Motivation
All supported template formats (including Jinja2) should be supported by all template loaders equally.
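Until this lands, one workaround is to read the file yourself and route through `from_template`, which does accept `template_format` (the file path below is illustrative):
```python
from pathlib import Path
from langchain.prompts.chat import HumanMessagePromptTemplate

# from_template accepts template_format, so load the file manually first.
template_text = Path("prompts/greeting.j2").read_text()
prompt = HumanMessagePromptTemplate.from_template(template_text, template_format="jinja2")
```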
### Your contribution
I'm not experienced enough with the Langchain codebase to submit PRs at this time. | Support jinja2 template format when using ChatPromptTemplate.from_template_file | https://api.github.com/repos/langchain-ai/langchain/issues/10519/comments | 7 | 2023-09-13T01:10:23Z | 2024-02-09T16:21:28Z | https://github.com/langchain-ai/langchain/issues/10519 | 1,893,555,356 | 10,519 |
[
"langchain-ai",
"langchain"
] | ### System Info
Device name LAPTOP-3BD5HR1V
Processor AMD Ryzen 5 3500U with Radeon Vega Mobile Gfx 2.10 GHz
Installed RAM 20.0 GB (17.9 GB usable)
Device ID F8ACB5C8-80FB-46C6-AE6D-33AD019A5728
Product ID 00325-82110-59554-AAOEM
System type 64-bit operating system, x64-based processor
Pen and touch No pen or touch input is available for this display
Edition Windows 11 Home
Version 22H2
Installed on 10/5/2022
OS build 22621.2134
Serial number PF2WCKPH
Experience Windows Feature Experience Pack 1000.22659.1000.0
Python 3.11.2
langchain 0.0.272
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I have created a Python CLI tool called 'dir-diary' that uses Chain.run to make API calls. The tool is built on `click`. When I run the tool from a terminal window with a Python virtual environment activated, the tool works okay. It also appears to work okay from both Linux-based and Windows-based Github Actions runners. But when I run it from a vanilla Windows terminal on my own machine, langchain fails to authenticate with Azure DevOps after several retries.
There's a whole lot of text returned with the error. The most helpful bits are:
.APIError: HTTP code 203 from API.
'Microsoft Internet Explorer's Enhanced Security Configuration is currently enabled on your environment. This enhanced level of security prevents our web integration experiences from displaying or performing correctly. To continue with your operation please disable this configuration or contact your administrator'
'Unable to complete authentication for user due to looping logins'
```
Traceback (most recent call last):
  File "C:\Users\chris\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\api_requestor.py", line 755, in _interpret_response_line
    data = json.loads(rbody)
           ^^^^^^^^^^^^^^^^^
  File "C:\Users\chris\AppData\Local\Programs\Python\Python311\Lib\json\__init__.py", line 335, in loads
    raise JSONDecodeError("Unexpected UTF-8 BOM (decode using utf-8-sig)",
json.decoder.JSONDecodeError: Unexpected UTF-8 BOM (decode using utf-8-sig): line 1 column 1 (char 0)
```
Steps to reproduce:
I haven't fully figured out the secret to reproducing this yet. Obviously, if it works on a Windows runner, then it's not really a Windows problem. There must be something problematic about my local setup that I can't identify. FWIW, here are my steps:
1. run command `pip install -U dir-diary` from a Windows terminal
2. go to any code project folder
3. run command `summarize`
I have tried running as administrator and turning down the Internet security level through Internet Options in Control Panel, but neither of those solutions fixed the problem.
### Expected behavior
It's supposed to successfully query the API to summarize the project folder. | APIError: HTTP code 203 from API when running from a Click CLI app on a local Windows terminal | https://api.github.com/repos/langchain-ai/langchain/issues/10511/comments | 2 | 2023-09-12T20:33:16Z | 2023-12-19T00:47:23Z | https://github.com/langchain-ai/langchain/issues/10511 | 1,893,219,675 | 10,511 |
[
"langchain-ai",
"langchain"
] | ### System Info
Python 3.10.12
Google Colab
Elasticsearch Cloud 8.9.2
Langchain - latest
### Who can help?
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps:
1. Load list of documents
2. Setup ElasticsearchStore of Langchain, with appropriate ES cloud credentials
3. Successfully create index with custom embedding model (HF embedding model, deployed on colab)
4. Deploy ELSER model and run it (with default model id).
5. Try creating index with SparseVectorRetrievalStrategy (ELSER) over the same list of documents.
6. Tried to change the timeout, but it didn't affect the outcome (see the connection sketch after this list).
7. NOTE: It does start uploading docs and docs count is increasing, but it stops after about 10 sec. I tried to run the ELSER model on 3 nodes, but nothing changed.
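A hedged sketch for the timeout itself: `ElasticsearchStore` accepts a pre-configured client through `es_connection`, which is one way to raise the per-request timeout that the bulk upload appears to be hitting. All credentials below are placeholders, and the keyword names assume recent releases:
```python
from elasticsearch import Elasticsearch
from langchain.vectorstores import ElasticsearchStore

# A client with a generous timeout; ELSER inference during ingest is slow.
es_client = Elasticsearch(
    cloud_id="<cloud-id>",                 # placeholder
    basic_auth=("elastic", "<password>"),  # placeholder
    request_timeout=120,
    retry_on_timeout=True,
)

elastic_elser_search = ElasticsearchStore.from_documents(
    documents=split_texts,
    index_name="search-tmd-elser",
    es_connection=es_client,
    strategy=ElasticsearchStore.SparseVectorRetrievalStrategy(),
)
```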
### Expected behavior
```
WARNING:elastic_transport.node_pool:Node <Urllib3HttpNode(https://-------.us-central1.gcp.cloud.es.io:443)> has failed for 1 times in a row, putting on 1 second timeout
---------------------------------------------------------------------------
ConnectionTimeout                         Traceback (most recent call last)
<ipython-input-92> in <cell line: 1>()
----> 1 elastic_elser_search = ElasticsearchStore.from_documents(
      2     documents=split_texts,
      3     es_cloud_id="cloudid",
      4     index_name="search-tmd-elser",
      5     es_user="elastic",

10 frames
/usr/local/lib/python3.10/dist-packages/elastic_transport/_node/_http_urllib3.py in perform_request(self, method, target, body, headers, request_timeout)
    197                 exception=err,
    198             )
--> 199             raise err from None
    200
    201         meta = ApiResponseMeta(
```
ConnectionTimeout: Connection timed out | Elasticsearch ELSER Timeout | https://api.github.com/repos/langchain-ai/langchain/issues/10506/comments | 5 | 2023-09-12T19:32:37Z | 2024-01-30T00:41:10Z | https://github.com/langchain-ai/langchain/issues/10506 | 1,893,131,951 | 10,506 |
[
"langchain-ai",
"langchain"
] | ### Feature request
## Description
Currently, the SQLDatabaseChain class is designed to optionally return intermediate steps taken during the SQL command generation and execution. These intermediate steps are helpful in understanding the processing flow, especially during debugging or for logging purposes. However, these intermediate steps do not store the SQL results obtained at various steps, which could offer deeper insights and can aid in further optimizations or analyses.
This feature request proposes to enhance the SQLDatabaseChain class to save SQL results from intermediate steps into a dictionary, akin to how SQL commands are currently stored. This would not only facilitate a more comprehensive view of each step but also potentially help in identifying and fixing issues or optimizing the process further.
### Motivation
#### Insightful Debugging:
Storing SQL results in intermediate steps will facilitate deeper insights during debugging, helping to pinpoint the exact step where a potential issue might be occurring.
#### Enhanced Logging:
Logging the SQL results at each step can help in creating a more detailed log, which can be instrumental in analyzing the performance and identifying optimization opportunities.
#### Improved Analysis and Optimization:
With the SQL results available at each step, it becomes feasible to analyze the results at different stages, which can be used to further optimize the SQL queries or the process flow.
### Your contribution
I propose to contribute to implementing this feature by:
#### Code Adaptation:
Modifying the _call_ method in the SQLDatabaseChain class to include SQL results in the intermediate steps dictionary, similar to how sql_cmd is currently being saved.
#### Testing:
Developing appropriate unit tests to ensure the correct functioning of the new feature, and that it does not break the existing functionality.
#### Documentation:
Updating the documentation to include details of the new feature, illustrating how to use it and how it can benefit the users.
#### Optimization:
Once implemented, analyzing the stored results to propose further optimizations or enhancements to the Langchain project.
## Proposed Changes
In the _call_ method within the SQLDatabaseChain class:
Amend the intermediate steps dictionary to include a new key, say sql_result, where the SQL results at different stages would be saved.
During the SQL execution step, save the SQL result into the sql_result key in the intermediate steps dictionary, similar to how sql_cmd is being saved currently.
```python
if not self.use_query_checker:
    ...
    intermediate_steps.append({"sql_cmd": sql_cmd, "sql_result": str(result)})  # Save sql result here
else:
    ...
    intermediate_steps.append({"sql_cmd": checked_sql_command, "sql_result": str(result)})  # Save sql result here
```
I believe that this contribution would be a valuable addition to the Langchain project, and I am eager to collaborate with the team to make it a reality.
Looking forward to hearing your thoughts on this proposal. | Enhance SQLDatabaseChain with SQL Results in Intermediate Steps Dictionary | https://api.github.com/repos/langchain-ai/langchain/issues/10500/comments | 2 | 2023-09-12T15:11:12Z | 2023-12-19T00:47:27Z | https://github.com/langchain-ai/langchain/issues/10500 | 1,892,729,571 | 10,500 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
When I run the code, I don't get any errors, but I also don't get any output in the terminal or output area. Can you help?

### Suggestion:
_No response_ | Issue: <Please write a comprehensive title after the 'Issue: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/10497/comments | 4 | 2023-09-12T14:15:52Z | 2023-12-19T00:47:33Z | https://github.com/langchain-ai/langchain/issues/10497 | 1,892,622,763 | 10,497 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version = 0.0.286
Python=3.8.8
MacOs
I am working on a **ReAct agent with Memory and Tools** that should stop and ask a human for input.
I worked off this article in the documentation: https://python.langchain.com/docs/modules/memory/agent_with_memory
On Jupyter Notebook it works well when the agent stops and picks up the "Observation" from the human.
Now I am trying to bring this over to Streamlit and am struggling with having the agent wait for the observation.
As one can see in the video, the output is brought over into the right Streamlit container, yet the agent doesn't stop to get the human feedback.
I am using a custom output parser and the recommended StreamlitCallbackHandler.
https://github.com/langchain-ai/langchain/assets/416379/ed57834a-2a72-4938-b901-519f0748dd95
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
My output parser looks like this:
```python
class CustomOutputParser(AgentOutputParser):
    def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:
        # Check if agent should finish
        print(llm_output)
        if "Final Answer:" in llm_output:
            print("Agent should finish")
            return AgentFinish(
                # Return values is generally always a dictionary with a single `output` key
                # It is not recommended to try anything else at the moment :)
                return_values={"output": llm_output.split(
                    "Final Answer:")[-1].strip()},
                log=llm_output,
            )
        # Parse out the action and action input
        regex = r"Action\s*\d*\s*:(.*?)\nAction\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)"
        match = re.search(regex, llm_output, re.DOTALL)
        if not match:
            print("Parsing Action Input")
            return AgentFinish(
                # Return values is generally always a dictionary with a single `output` key
                # It is not recommended to try anything else at the moment :)
                return_values={"output": llm_output},
                log=llm_output,
            )
            # raise ValueError(f"Could not parse LLM output: `{llm_output}`")
        action = match.group(1).strip()
        action_input = match.group(2)
        # This can't be AgentFinish because otherwise the agent stops working.
        return AgentAction(tool=action, tool_input=action_input.strip(" ").strip('"'), log=llm_output)
```
### Expected behavior
The agent should wait for Streamlit to create a chat input and use it as the feedback for the "human" tool | Observation: Human is not a valid tool, try one of [human, Search, Calculator] | https://api.github.com/repos/langchain-ai/langchain/issues/10494/comments | 3 | 2023-09-12T13:57:04Z | 2023-12-19T00:47:38Z | https://github.com/langchain-ai/langchain/issues/10494 | 1,892,585,572 | 10,494
[
"langchain-ai",
"langchain"
] | ### System Info
Using LangChain 0.0.276
Python 3.11.4
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Construct a FlareChain instance like this and run it:
```
myllm = ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo-16k")

flare = FlareChain.from_llm(
    llm=myllm,
    retriever=vectorstore.as_retriever(),
    max_generation_len=164,
    min_prob=0.3,
)
result = flare.run(querytext)
```
When I inspect during debugging, the specified LLM model was set on `flare.question_generator_chain.llm.model_name` but NOT `flare.response_chain.llm.model_name`,
which is still the default value.
### Expected behavior
I'm expecting `flare.response_chain.llm.model_name` to return `gpt-3.5-turbo-16k`, not `text-davinci-003` | FlareChain's response_chain not picking up specified LLM model | https://api.github.com/repos/langchain-ai/langchain/issues/10493/comments | 9 | 2023-09-12T13:09:18Z | 2024-01-15T16:57:52Z | https://github.com/langchain-ai/langchain/issues/10493 | 1,892,491,559 | 10,493 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am getting this error when using LangChain vectorstore similarity search on my local machine: `pinecone.core.client.exceptions.ApiTypeError: Invalid type for variable 'namespace'. Required value type is str and passed type was NoneType at ['namespace']`. But it is working fine on Google Colab.
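One hedged workaround is to pass the namespace explicitly so it is never `None`; newer Pinecone client versions appear stricter about this. The index name below is a placeholder:
```python
from langchain.vectorstores import Pinecone
from langchain.embeddings import OpenAIEmbeddings

# Passing namespace="" (or a real namespace) avoids the NoneType validation error.
vectorstore = Pinecone.from_existing_index(
    index_name="my-index",          # placeholder
    embedding=OpenAIEmbeddings(),
    namespace="",
)
docs = vectorstore.similarity_search("query text", k=4, namespace="")
```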
### Suggestion:
_No response_ | Issue: pinecone.core.client.exceptions.ApiTypeError: Invalid type for variable 'namespace'. Required value type is str and passed type was NoneType at ['namespace'] | https://api.github.com/repos/langchain-ai/langchain/issues/10489/comments | 2 | 2023-09-12T11:02:56Z | 2023-09-13T06:20:43Z | https://github.com/langchain-ai/langchain/issues/10489 | 1,892,270,542 | 10,489 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain:0.0.286
python:3.10.10
redis:5.0.0b4
### Who can help?
@hwc
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
rds = Redis.from_texts(
    texts,
    embeddings,
    metadatas=metadata,
    redis_url="XXXXX",
    index_name="XXXX",
)
```
The following exception occurred:
AttributeError: 'RedisCluster' object has no attribute 'module_list'
The version of my redis package is 5.0.0b4.
An error occurred in the following code:
`langchain\Lib\site-packages\langchain\utilities\redis.py`
```python
def check_redis_module_exist(client: RedisType, required_modules: List[dict]) -> None:
    """Check if the correct Redis modules are installed."""
    installed_modules = client.module_list()  # -> fails here for RedisCluster
```
### Expected behavior
redis init success | Redis vector init error | https://api.github.com/repos/langchain-ai/langchain/issues/10487/comments | 14 | 2023-09-12T09:59:40Z | 2023-12-26T16:06:02Z | https://github.com/langchain-ai/langchain/issues/10487 | 1,892,138,220 | 10,487 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Hey guys!
Thanks for the great tool you've developed.
Llama now supports device selection, and so does GPT4All:
https://docs.gpt4all.io/gpt4all_python.html#gpt4all.gpt4all.GPT4All.__init__
Can you guys please add the device property to the file: "langchain/llms/gpt4all.py"
LN 96:
```python
device: Optional[str] = Field("cpu", alias="device")
"""Device name: cpu, gpu, nvidia, intel, amd or DeviceName."""
```
Model Init:
```python
values["client"] = GPT4AllModel(
    model_name,
    model_path=model_path or None,
    model_type=values["backend"],
    allow_download=values["allow_download"],
    device=values["device"],
)
```
### Motivation
Necessity to use the device on GPU powered machines.
### Your contribution
None.. :( | Add device to GPT4All | https://api.github.com/repos/langchain-ai/langchain/issues/10486/comments | 0 | 2023-09-12T09:02:19Z | 2023-10-04T00:37:32Z | https://github.com/langchain-ai/langchain/issues/10486 | 1,892,030,554 | 10,486 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
LangChain is still using the deprecated huggingface_hub `InferenceApi` in the latest version; `InferenceApi` will be removed in version '0.19.0'.
```
/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_deprecation.py:127: FutureWarning: '__init__' (from 'huggingface_hub.inference_api') is deprecated and will be removed from version '0.19.0'. `InferenceApi` client is deprecated in favor of the more feature-complete `InferenceClient`. Check out this guide to learn how to convert your script to use it: https://huggingface.co/docs/huggingface_hub/guides/inference#legacy-inferenceapi-client.
warnings.warn(warning_message, FutureWarning)
```
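For reference, a hedged sketch of the replacement client (the model id and token are placeholders):
```python
from huggingface_hub import InferenceClient

client = InferenceClient(model="HuggingFaceH4/zephyr-7b-beta", token="hf_...")  # placeholders
output = client.text_generation(
    "Explain retrieval augmented generation in one sentence.",
    max_new_tokens=64,
)
print(output)
```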
### Suggestion:
It is recommended to use the new `InferenceClient` in huggingface_hub. | Issue: Use huggingface_hub InferenceClient instead of InferenceAPI | https://api.github.com/repos/langchain-ai/langchain/issues/10483/comments | 3 | 2023-09-12T08:37:39Z | 2024-03-29T16:06:25Z | https://github.com/langchain-ai/langchain/issues/10483 | 1,891,974,960 | 10,483
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi Team,
I have a fixed Elasticsearch version, 7.6, which I cannot upgrade. Could you please share some details about which version of LangChain supports this version?
The problem I have faced with the latest LangChain: similarity search (or normal search) fails because k-NN is not available, raising "Unexpected keyword argument called 'knn'".
If possible, please share sample code to connect to an existing Elasticsearch instance and create an index, converting the Elasticsearch data into the LangChain-supported document format (see the sketch below).
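As a hedged sketch only: the older `ElasticVectorSearch` class uses script-score queries rather than the 8.x `knn` search API, so it is the usual route for 7.x clusters. Connection details are placeholders:
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import ElasticVectorSearch

embeddings = OpenAIEmbeddings()

# ElasticVectorSearch relies on script_score, which works on Elasticsearch 7.x.
db = ElasticVectorSearch.from_texts(
    texts=["first document", "second document"],           # your ES data, re-indexed
    embedding=embeddings,
    elasticsearch_url="http://user:pass@localhost:9200",   # placeholder
    index_name="langchain-demo",
)
docs = db.similarity_search("query text", k=4)
```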
### Suggestion:
_No response_ | Issue: Which version of langchain supports the elasticsearch 7.6 | https://api.github.com/repos/langchain-ai/langchain/issues/10481/comments | 22 | 2023-09-12T07:49:46Z | 2024-03-26T16:05:36Z | https://github.com/langchain-ai/langchain/issues/10481 | 1,891,889,704 | 10,481 |
[
"langchain-ai",
"langchain"
] | ### System Info
python == 3.11
langchain == 0.0.286
windows 10
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import numpy as np
import pandas as pd

from langchain.chat_models import AzureChatOpenAI
from langchain.agents.agent_types import AgentType
from langchain.agents import create_pandas_dataframe_agent

llm = AzureChatOpenAI(
    deployment_name="gpt-4",
    model_name="gpt-4",
    openai_api_key='...',
    openai_api_version="2023-08-01-preview",
    openai_api_base='...',
    temperature=0,
)

df = pd.DataFrame({
    'Feature1': np.random.rand(1000000),
    'Feature2': np.random.rand(1000000),
    'Class': np.random.choice(['Class1', 'Class2', 'Class3'], 1000000),
})

agent = create_pandas_dataframe_agent(
    llm,
    df,
    verbose=False,
    agent_type=AgentType.OPENAI_FUNCTIONS,
    reduce_k_below_max_tokens=True,
    max_execution_time=1,
)

agent.run('print 100 first rows in dataframe')
```
### Expected behavior
The `max_execution_time` is set to 1, indicating that the query should run for one second before stopping. However, it currently runs for approximately 10 seconds before stopping. This is a simple example, but with my actual dataframe (which contains a lot of textual data), the agent runs for around one minute before I receive the results. At the same time, if the query doesn't request a large amount of output from the model, the agent does stop in one second. For instance, if my query is `agent.run('give some examples of delays mentioned?')`, no results are returned because `max_execution_time` is 1 and the call needs roughly three seconds to produce the output. This troubleshooting therefore indicates that there's an issue with `max_execution_time` when the requested output is too lengthy. | max_execution_time does not work for some queries in create_pandas_dataframe_agent | https://api.github.com/repos/langchain-ai/langchain/issues/10479/comments | 3 | 2023-09-12T07:24:52Z | 2023-12-19T00:47:52Z | https://github.com/langchain-ai/langchain/issues/10479 | 1,891,850,817 | 10,479
[
"langchain-ai",
"langchain"
] | ### System Info
```
@router.post('/web-page')
def web_page_embedding(model: WebPageEmbedding):
    try:
        data = download_page(model.page)
        return {'success': True}
    except Exception as e:
        return Response(str(e))

def download_page(url: str):
    loader = AsyncChromiumLoader(urls=[url])
    docs = loader.load()
    return docs
```
I am trying to download the page content using the above FastAPI code. But I am facing this `NotImplementedError` error
```
Task exception was never retrieved
future: <Task finished name='Task-6' coro=<Connection.run() done, defined at E:\Projects\abcd\venv\Lib\site-packages\playwright\_impl\_connection.py:264> exception=NotImplementedError()>
Traceback (most recent call last):
File "E:\Projects\abcd\venv\Lib\site-packages\playwright\_impl\_connection.py", line 271, in run
await self._transport.connect()
File "E:\Projects\abcd\venv\Lib\site-packages\playwright\_impl\_transport.py", line 135, in connect
raise exc
File "E:\Projects\abcd\venv\Lib\site-packages\playwright\_impl\_transport.py", line 123, in connect
self._proc = await asyncio.create_subprocess_exec(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\hasan\AppData\Local\Programs\Python\Python311\Lib\asyncio\subprocess.py", line 218, in create_subprocess_exec
transport, protocol = await loop.subprocess_exec(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\hasan\AppData\Local\Programs\Python\Python311\Lib\asyncio\base_events.py", line 1694, in subprocess_exec
transport = await self._make_subprocess_transport(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\hasan\AppData\Local\Programs\Python\Python311\Lib\asyncio\base_events.py", line 502, in _make_subprocess_transport
raise NotImplementedError
NotImplementedError
```
I have also tried async/await, directly calling the loader's async method, and this also does not work:
```
@router.post('/web-page-1')
async def web_page_embedding_async(model: WebPageEmbedding):
    try:
        data = await download_page_async(model.page)
        return {'success': True}
    except Exception as e:
        return Response(str(e))

async def download_page_async(url: str):
    loader = AsyncChromiumLoader(urls=[url])
    # docs = loader.load()
    docs = await loader.ascrape_playwright(url)
    return docs
```
But if I try to download the page in a plain Python script, it works as expected (both async and non-async):
```
if __name__ == '__main__':
    try:
        url = 'https://python.langchain.com/docs/integrations/document_loaders/async_chromium'
        # d = download_page(url)  # working
        d = asyncio.run(download_page_async(url))  # also working
        print(len(d))
    except Exception as e:
        print(e)
```
Packages:
- langchain==0.0.284
- playwright==1.37.0
- fastapi==0.103.1
- uvicorn==0.23.2
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
Please run the code
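A hedged note on the likely cause: on Windows, the server's default `SelectorEventLoop` does not implement subprocess support, which is exactly what Playwright needs to launch Chromium, hence the `NotImplementedError`. One commonly cited workaround (not verified on every uvicorn version) is to force the Proactor policy before the server starts; the module path below is illustrative:
```python
import sys
import asyncio

import uvicorn

# SelectorEventLoop on Windows cannot spawn subprocesses, which Playwright requires.
if sys.platform == "win32":
    asyncio.set_event_loop_policy(asyncio.WindowsProactorEventLoopPolicy())

if __name__ == "__main__":
    uvicorn.run("main:app", host="127.0.0.1", port=8000)  # module path is illustrative
```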
### Expected behavior
Loader should work in FastAPI environment | AsyncChromiumLoader not working with FastAPI | https://api.github.com/repos/langchain-ai/langchain/issues/10475/comments | 10 | 2023-09-12T05:03:16Z | 2024-04-04T15:35:52Z | https://github.com/langchain-ai/langchain/issues/10475 | 1,891,676,241 | 10,475 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain version: 0.0.286
Python version: 3.11.2
Platform: MacOS Ventura 13.5.1 M1 chip
Weaviate 1.21.2 as vectorstore
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When following LangChain's documentation for [ Weaviate Self-Query Retriever](https://python.langchain.com/docs/modules/data_connection/retrievers/self_query/weaviate_self_query),
I get the following Warning:
```
/opt/homebrew/lib/python3.11/site-packages/langchain/chains/llm.py:278: UserWarning: The predict_and_parse method is deprecated, instead pass an output parser directly to LLMChain.
warnings.warn(
```
and the following errors
```
ValueError: Received disallowed comparator gte. Allowed comparators are [<Comparator.EQ: 'eq'>]
...
... stack trace
...
File "/opt/homebrew/lib/python3.11/site-packages/langchain/chains/query_constructor/base.py", line 52, in parse
raise OutputParserException(
langchain.schema.output_parser.OutputParserException: Parsing text
``json
{
    "query": "natural disasters",
    "filter": "and(gte(\"published_at\", \"2022-10-01\"), lte(\"published_at\", \"2022-10-07\"))"
}
``
raised following error:
Received disallowed comparator gte. Allowed comparators are [<Comparator.EQ: 'eq'>]
```
The following code led to the errors
```
import os, openai, weaviate, logging
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Weaviate
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.weaviate import WeaviateTranslator

openai.api_key = os.environ['OPENAI_API_KEY']
embeddings = OpenAIEmbeddings()

client = weaviate.Client(
    url=WEAVIATE_URL,
    additional_headers={
        "X-OpenAI-Api-Key": openai.api_key
    }
)

weaviate = Weaviate(
    client=client,
    index_name=INDEX_NAME,
    text_key="article_body"
)

metadata_field_info = [  # Shortened for brevity
    AttributeInfo(
        name="published_at",
        description="Date article was published",
        type="date",
    ),
    AttributeInfo(
        name="weblink",
        description="The URL where the document was taken from.",
        type="string",
    ),
    AttributeInfo(
        name="keywords",
        description="A list of keywords from the piece of text.",
        type="string",
    ),
]

logging.basicConfig()
logging.getLogger('langchain.retrievers.self_query').setLevel(logging.INFO)

document_content_description = "News articles"
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
    llm,
    weaviate,
    document_content_description,
    metadata_field_info,
    enable_limit=True,
    verbose=True,
)

returned_docs_selfq = retriever.get_relevant_documents(question)
```
### Expected behavior
No warnings or errors, or documentation stating what output parser replicates the existing functionality. Specifically picking up date range filters from user queries | Error when using Self Query Retriever with Weaviate | https://api.github.com/repos/langchain-ai/langchain/issues/10474/comments | 2 | 2023-09-12T04:46:10Z | 2023-12-19T00:47:57Z | https://github.com/langchain-ai/langchain/issues/10474 | 1,891,662,372 | 10,474 |
[
"langchain-ai",
"langchain"
] | Currently, there is no support for agents that have both:
1) Conversational history
2) Structured tool chat (functions with multiple inputs/parameters)
#3700 mentions this as well, but it was not resolved: `AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION` is zero-shot and essentially has [no memory](https://stackoverflow.com/questions/76906469/langchain-zero-shot-react-agent-uses-memory-or-not). In the LangChain docs for [structured tool chat](https://python.langchain.com/docs/modules/agents/agent_types/structured_chat), the agent gets a sense of memory by creating one massive input prompt. Still, this agent performed much worse, as #3700 mentions, and other agents do not support multi-input tools, even after creating [custom tools](https://python.langchain.com/docs/modules/agents/tools/custom_tools).
MY SOLUTION:
1) Use ConversationBufferMemory to keep track of chat history.
2) Convert these messages to a format OpenAI wants for their API.
3) Use the OpenAI chat completion endpoint, which has support for function calling
Usage: `chatgpt_function_response(user_prompt)`
- Dynamo db and session id stuff comes from the [docs](https://python.langchain.com/docs/integrations/memory/dynamodb_chat_message_history)
- `memory.py` handles getting the chat history for a particular session (can be interpreted as a user). We use ConversationBufferMemory as we usually would and add a helper method to convert the ConversationBufferMemory to a [format that OpenAI wants](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_call_functions_with_chat_models.ipynb)
- `core.py` handles the main functionality with a user prompt. We add the user's prompt to the message history, and get the message history in the OpenAI format. We use the chat completion endpoint as normal, and add the function response call to the message history as an AI message.
- `functions.py` is also how we would normally use the chat completions API, also described [here](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_call_functions_with_chat_models.ipynb)
`memory.py`
```
import logging
from typing import List

import boto3
from langchain.memory import ConversationBufferMemory
from langchain.memory.chat_message_histories import DynamoDBChatMessageHistory
from langchain.schema.messages import SystemMessage
from langchain.adapters.openai import convert_message_to_dict

TABLE_NAME = "your table name"

# if using dynamodb
session = boto3.session.Session(
    aws_access_key_id="",
    aws_secret_access_key="",
    region_name="",
)

def get_memory(session_id: str):
    """Get a conversation buffer with chat history saved to dynamodb

    Returns:
        ConversationBufferMemory: A memory object with chat history saved to dynamodb
    """
    # Define the necessary components with the dynamodb endpoint
    message_history = DynamoDBChatMessageHistory(
        table_name=TABLE_NAME,
        session_id=session_id,
        boto3_session=session,
    )
    # if you want to add a system prompt
    if len(message_history.messages) == 0:
        message_history.add_message(SystemMessage(content="whatever system prompt"))
    memory = ConversationBufferMemory(
        memory_key="chat_history", chat_memory=message_history, return_messages=True
    )
    logging.info(f"Memory: {memory}")
    return memory

def convert_message_buffer_to_openai(memory: ConversationBufferMemory) -> List[dict]:
    """Convert a message buffer to a list of messages that OpenAI can understand

    Args:
        memory (ConversationBufferMemory): A memory object with chat history saved to dynamodb

    Returns:
        List[dict]: A list of messages that OpenAI can understand
    """
    messages = []
    for message in memory.buffer_as_messages:
        messages.append(convert_message_to_dict(message))
    return messages
```
`core.py`
```
def _handle_function_call(response: dict) -> str:
    response_message = response["message"]
    function_name = response_message["function_call"]["name"]
    function_to_call = function_names[function_name]
    function_args = json.loads(response_message["function_call"]["arguments"])
    function_response = function_to_call(**function_args)
    return function_response

def chatgpt_response(prompt, model=MODEL, session_id: str = SESSION_ID) -> str:
    memory = get_memory(session_id)
    memory.chat_memory.add_user_message(prompt)
    messages = convert_message_buffer_to_openai(memory)
    logging.info(f"Memory: {messages}")
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
    )
    answer = response["choices"][0]["message"]["content"]
    memory.chat_memory.add_ai_message(answer)
    return answer

def chatgpt_function_response(
    prompt: str,
    functions=function_descriptions,
    model=MODEL,
    session_id: str = SESSION_ID,
) -> str:
    memory = get_memory(session_id)
    memory.chat_memory.add_user_message(prompt)
    messages = convert_message_buffer_to_openai(memory)
    logging.info(f"Memory for function response: {messages}")
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        functions=functions,
    )["choices"][0]
    if response["finish_reason"] == "function_call":
        answer = _handle_function_call(response)
    else:
        answer = response["message"]["content"]
    memory.chat_memory.add_ai_message(answer)
    return answer
```
`functions.py`
```
def create_reminder(
    task: str, days: int, hours: int, minutes: int
) -> str:
    return 'whatever'

function_names = {
    "create_reminder": create_reminder,
}

function_descriptions = [
    {
        "name": "create_reminder",
        "description": "This function handles the logic for creating a reminder for a "
        "generic task at a given date and time.",
        "parameters": {
            "type": "object",
            "properties": {
                "task": {
                    "type": "string",
                    "description": "The task to be reminded of, such as 'clean the "
                    "house'",
                },
                "days": {
                    "type": "integer",
                    "description": "The number of days from now to be reminded",
                },
                "hours": {
                    "type": "integer",
                    "description": "The number of hours from now to be reminded",
                },
                "minutes": {
                    "type": "integer",
                    "description": "The number of minutes from now to be reminded",
                },
            },
            "required": ["task", "days", "hours", "minutes"],
        },
    },
]
``` | How to add structured tools / functions with multiple inputs | https://api.github.com/repos/langchain-ai/langchain/issues/10473/comments | 11 | 2023-09-12T04:31:04Z | 2024-03-18T16:05:29Z | https://github.com/langchain-ai/langchain/issues/10473 | 1,891,650,757 | 10,473 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain version: 0.0.279
### Who can help?
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The key issue causing the error is the import statement of **BaseModel**. In the official example, the package is imported as **from pydantic import BaseModel, Field**, but in the langchain source code at _langchain\chains\openai_functions\qa_with_structure.py_, it's imported as **from langchain.pydantic_v1 import BaseModel, Field**. The inconsistency between these two package names results in an error when executing create_qa_with_structure_chain().
Below is an error example.
```python
import os
from typing import List

from langchain import PromptTemplate
from langchain.chains.openai_functions import create_qa_with_structure_chain
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.schema import SystemMessage, HumanMessage
from pydantic import BaseModel, Field

os.environ["OPENAI_API_KEY"] = "xxxx"
llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")

class CustomResponseSchema(BaseModel):
    """An answer to the question being asked, with sources."""

    answer: str = Field(..., description="Answer to the question that was asked")
    countries_referenced: List[str] = Field(
        ..., description="All of the countries mentioned in the sources"
    )
    sources: List[str] = Field(
        ..., description="List of sources used to answer the question"
    )

doc_prompt = PromptTemplate(
    template="Content: {page_content}\nSource: {source}",
    input_variables=["page_content", "source"],
)
prompt_messages = [
    SystemMessage(
        content=(
            "You are a world class algorithm to answer "
            "questions in a specific format."
        )
    ),
    HumanMessage(content="Answer question using the following context"),
    HumanMessagePromptTemplate.from_template("{context}"),
    HumanMessagePromptTemplate.from_template("Question: {question}"),
    HumanMessage(
        content="Tips: Make sure to answer in the correct format. Return all of the countries mentioned in the "
        "sources in uppercase characters. "
    ),
]
chain_prompt = ChatPromptTemplate(messages=prompt_messages)

qa_chain_pydantic = create_qa_with_structure_chain(
    llm, CustomResponseSchema, output_parser="pydantic", prompt=chain_prompt
)
query = "What did he say about russia"
qa_chain_pydantic.run({"question": query, "context": query})
```
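A hedged sketch of the immediate workaround implied by the error: define the schema class with the same pydantic that langchain uses internally, so the subclass check inside `create_qa_with_structure_chain` passes. Only the import line changes:
```python
# Use langchain's vendored pydantic for the schema class instead of
# `from pydantic import BaseModel, Field`.
from langchain.pydantic_v1 import BaseModel, Field
```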
### Expected behavior
It is hoped that the package names can be standardized | The exception 'Must provide a pydantic class for schema when output_parser is 'pydantic'.' is caused by the inconsistent package name of BaseModel | https://api.github.com/repos/langchain-ai/langchain/issues/10472/comments | 2 | 2023-09-12T03:35:02Z | 2023-12-19T00:48:02Z | https://github.com/langchain-ai/langchain/issues/10472 | 1,891,606,072 | 10,472 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Currently, Unstructured loaders allow users to process elements when loading the document. This is done by applying user-specified `post_processors` to each element. These post processing functions are str -> str callables.
When using Unstructured loaders, allow element processing using `(Element) -> Element` or `(Element) -> str` callables.
### Motivation
A user using `UnstructuredPDFLoader` wants to take advantage of the inferred table structure when processing elements. They can't use the `post_processors` argument to access `element.metadata.text_as_html` because the input to each `post_processors` callable is a string:
>I'm finding that the mode='elements' option already does str(element) to every element, so I can't really use element.metadata.text_as_html
They evaluated this workaround:
```
from typing import Callable

from langchain.docstore.document import Document
from langchain.document_loaders import UnstructuredPDFLoader
import unstructured.documents.elements as elmt

class CustomPDFLoader(UnstructuredPDFLoader):
    def __init__(
        self,
        *args,
        pre_processors: list[Callable[[elmt.Element], str]] | None,
        **kwargs,
    ) -> None:
        super().__init__(*args, **kwargs)
        self.pre_processors = pre_processors

    def _pre_process_elements(self, elements: list[elmt.Element]) -> None:
        for element in elements:
            for cleaner in self.pre_processors:
                element.text = cleaner(element)

    def load(self) -> list[Document]:
        if self.mode != "single":
            raise ValueError(f"mode of {self.mode} not supported.")
        elements = self._get_elements()
        self._pre_process_elements(elements)
        metadata = self._get_metadata()
        text = "\n\n".join([str(el) for el in elements])
        docs = [Document(page_content=text, metadata=metadata)]
        return docs
```
The intent is for the `_pre_process_elements` method above to replace the call to `_post_process_elements` in the second line of the [original load function](https://github.com/langchain-ai/langchain/blob/737b75d278a0eef8b3b9002feadba69ffe50e1b1/libs/langchain/langchain/document_loaders/unstructured.py#L87). Using this workaround would require copying the rest of the `load` method's code in the subclass, too.
### Your contribution
The team at Unstructured can investigate this request and submit a PR if needed. | Make entire element accessible for processing when loading with Unstructured loaders | https://api.github.com/repos/langchain-ai/langchain/issues/10471/comments | 1 | 2023-09-12T02:02:20Z | 2023-12-19T00:48:07Z | https://github.com/langchain-ai/langchain/issues/10471 | 1,891,540,586 | 10,471 |
[
"langchain-ai",
"langchain"
] | ### System Info
It looks like BedrockChat was removed from the chat_models/__init__.py when ChatKonko was added in this commit: https://github.com/langchain-ai/langchain/pull/10267/commits/280c1e465c4b89c6313fcc2c0679e3756b8566f9#diff-04148cb9262d722a69b81a119e1f8120515532263a1807239f60f00d9ff2a755
I'm guessing this was accidental, because the BedrockChat class definitions still exist.
@agola11 @hwchase17
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
.
### Expected behavior
I expect `from langchain.chat_models import BedrockChat` to work | BedrockChat model mistakenly removed in latest version? | https://api.github.com/repos/langchain-ai/langchain/issues/10468/comments | 4 | 2023-09-12T00:32:49Z | 2023-10-03T19:51:12Z | https://github.com/langchain-ai/langchain/issues/10468 | 1,891,477,538 | 10,468 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi. I have a vectorstore that holds embeddings of document chunks, built with FAISS. The metadata for each chunk includes 'document_id', 'chunk_id', and 'source'.
Now I want to run a summarizer to extract a summary for each document_id and attach it as new metadata on each of that document's chunks.
How can I do it?
The only way I've found is to process everything all over again, extracting the summary as a new pipeline step, but that's not ideal (see the sketch below for an in-place alternative).
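A hedged sketch of updating metadata in place: LangChain's FAISS wrapper keeps its documents in an in-memory docstore, so the chunks can be mutated and the index saved again. Note that `_dict` is a private attribute, and `summarize(...)` is a placeholder for whatever summarization chain is used:
```python
from collections import defaultdict

# Group chunks by document_id using the FAISS in-memory docstore (private API).
by_doc = defaultdict(list)
for doc in vector_db.docstore._dict.values():
    by_doc[doc.metadata["document_id"]].append(doc)

for document_id, chunks in by_doc.items():
    summary = summarize(chunks)  # placeholder summarizer
    for chunk in chunks:
        chunk.metadata["summary"] = summary  # embeddings stay untouched

vector_db.save_local("faiss_index_with_summaries")
```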
### Suggestion:
_No response_ | Issue: Add new metadata to document_ids already saved in vectorstore (FAISS) | https://api.github.com/repos/langchain-ai/langchain/issues/10463/comments | 3 | 2023-09-11T20:55:58Z | 2023-12-19T00:48:13Z | https://github.com/langchain-ai/langchain/issues/10463 | 1,891,252,685 | 10,463 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain version: 0.0.286
Python version: 3.11.2
Platform: x86_64 Debian 12.2.0-14
Weaviate 1.21.2 as vectorstore
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When following LangChain's documentation for[ Weaviate Self-Query Retriever](https://python.langchain.com/docs/modules/data_connection/retrievers/self_query/weaviate_self_query),
I get the following Warning:
```
/langchain/chains/llm.py:278: UserWarning: The predict_and_parse method is deprecated, instead pass an output parser directly to LLMChain.
warnings.warn(
```
The following code led to the warning, although retrieving documents as expected:
```
import os, openai, weaviate, logging
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Weaviate
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.weaviate import WeaviateTranslator

openai.api_key = os.environ['OPENAI_API_KEY']
embeddings = OpenAIEmbeddings()

client = weaviate.Client(
    url=WEAVIATE_URL,
    additional_headers={
        "X-OpenAI-Api-Key": openai.api_key
    }
)

weaviate = Weaviate(
    client=client,
    index_name=INDEX_NAME,
    text_key="text",
    by_text=False,
    embedding=embeddings,
)

metadata_field_info = [  # Shortened for brevity
    AttributeInfo(
        name="text",
        description="This is the main content of text.",
        type="string",
    ),
    AttributeInfo(
        name="source",
        description="The URL where the document was taken from.",
        type="string",
    ),
    AttributeInfo(
        name="keywords",
        description="A list of keywords from the piece of text.",
        type="string",
    ),
]

logging.basicConfig()
logging.getLogger('langchain.retrievers.self_query').setLevel(logging.INFO)

document_content_description = "Collection of Laws and Code documents, including the Labor Code and related Laws."
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
    llm,
    weaviate,
    document_content_description,
    metadata_field_info,
    enable_limit=True,
    verbose=True,
)

returned_docs_selfq = retriever.get_relevant_documents(question)
```
### Expected behavior
No Warnings and/or updated documentation instructing how to pass the output parser to LLMChain | User Warning when using Self Query Retriever with Weaviate | https://api.github.com/repos/langchain-ai/langchain/issues/10462/comments | 2 | 2023-09-11T20:04:55Z | 2023-12-18T23:45:57Z | https://github.com/langchain-ai/langchain/issues/10462 | 1,891,181,081 | 10,462 |
[
"langchain-ai",
"langchain"
] | I am trying to trace my LangChain runs using LangChain's native tracing support on localhost. I created a session named agent_workflow and tried to receive the runs in it, but it didn't work.
The problem is that whenever I run the RetrievalQA chain, it gives me the following error:
`Error in LangChainTracerV1.on_chain_end callback: Unknown run type: retriever`
This is the code snippet specifying the problem:
```
os.environ["LANGCHAIN_TRACING"] = "true"
os.environ["LANGCHAIN_SESSION"] = "agent_workflow"
embed = OpenAIEmbeddings(
model=self.embedding_model_name
)
vectorStore = Chroma.from_documents(texts,embed)
def retrieval(self,question):
qa = RetrievalQA.from_chain_type(
llm,
chain_type="stuff",
retriever= vectorStore.as_retriever(k=1),
verbose=True,
chain_type_kwargs={
"verbose":True,
"prompt":prompt,
"memory": memory,
}
)
with get_openai_callback() as cb:
response = qa.run({"query":question})
return qa.run({"query":question})
```
How can I solve this? I saw a tutorial where it worked with initialized_agent instead of RetrievalQA but don't know whether this is the case or not.
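One hedged suggestion: the V1 tracer predates the `retriever` run type, so upgrading LangChain and switching to the V2 tracing environment variables may avoid the callback error entirely. A sketch of the V2 setup (the API key is a placeholder, and V2 targets LangSmith rather than a local tracing server):
```python
import os

# V2 tracing (LangSmith) understands retriever runs; the V1 tracer does not.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "ls__..."         # placeholder
os.environ["LANGCHAIN_PROJECT"] = "agent_workflow"
```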
| Issue: Error in LangChainTracerV1.on_chain_end callback: Unknown run type: | https://api.github.com/repos/langchain-ai/langchain/issues/10460/comments | 5 | 2023-09-11T19:17:54Z | 2023-12-20T16:06:11Z | https://github.com/langchain-ai/langchain/issues/10460 | 1,891,117,462 | 10,460 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
The following raises a `ValidationException: An error occurred (ValidationException) when calling the InvokeModel operation: Malformed input request: expected maximum item count: 1, found: 2, please reformat your input and try again.`:
```
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate
from langchain.llms.bedrock import Bedrock

llm = Bedrock(
    client=bedrock_client,
    model_id="ai21.j2-ultra",
    model_kwargs={
        "temperature": 0.9,
        "maxTokens": 500,
        "topP": 1,
        "stopSequences": ["\\n\\nHuman:", "\n\nAI:"],
    },
)

prompt_template = PromptTemplate(
    template="{history}Human:I want to know how to write a story.\nAssistant: What genre do you want to write the story in?\n\nHuman: {input}",
    input_variables=['history', 'input'],
)

conversation = ConversationChain(
    llm=llm, verbose=True, memory=ConversationBufferMemory(), prompt=prompt_template
)

conversation.predict(input="I want to write a horror story.")
```
This code works when only one stop sequence is passed.
The issue seems to be coming from within the Bedrock `invoke_model` call as I tried the same thing in Bedrock playground and received the same error.
### Suggestion:
Bedrock team needs to be contacted for this one. | Issue: Cannot pass more than one stop sequence to AI21 Bedrock model | https://api.github.com/repos/langchain-ai/langchain/issues/10456/comments | 2 | 2023-09-11T17:07:23Z | 2023-12-18T23:46:09Z | https://github.com/langchain-ai/langchain/issues/10456 | 1,890,923,908 | 10,456 |
[
"langchain-ai",
"langchain"
] | ### Feature request
While the other model parameters for Anthropic are exposed as class variables on the `_AnthropicCommon` class, `stop_sequences` is not, so stop sequences can only be sent via the `stop` argument of the `generate` call; `generate` then manually adds them to the parameters before calling Anthropic.
I suggest adding `stop` as a class-level parameter so it can be supplied when creating e.g. the `ChatAnthropic` class, like:
```
ChatAnthropic(
anthropic_api_key=api_token,
model=model,
temperature=temperature,
top_k=top_k,
top_p=top_p,
default_request_timeout=default_request_timeout,
max_tokens_to_sample=max_tokens_to_sample,
verbose=verbose,
stop=stop_sequences,
)
```
The changes required are adding the class variable to the `_AnthropicCommon` class and updating the `_default_params` property like so:
```
@property
def _default_params(self) -> Mapping[str, Any]:
"""Get the default parameters for calling Anthropic API."""
d = {
"max_tokens_to_sample": self.max_tokens_to_sample,
"model": self.model,
}
if self.temperature is not None:
d["temperature"] = self.temperature
...
if self.stop_sequences is not None:
d["stop_sequences"] = self.stop_sequences
```
This would enable adding stop sequences directly to the model call by supplying them when the chat-model object is created, while keeping the current ability to also pass them in the `generate` call for `ConversationChain` if the user so desires (although it is unclear when a user would pass `stop` at generate time once it is available as a class variable). It is especially useful because `ConversationalRetrievalChain` does not expose `stop` in its own call, so this addition would also keep behaviour consistent across the different chains for a model.
With `ConversationalRetrievalChain`, the LLM would then already carry the stop sequences, which you currently cannot pass the way you can for `ConversationChain`:
```
ConversationalRetrievalChain.from_llm(
llm=llm,
retriever=knowledge_base.retriever,
chain_type=chain_type,
verbose=verbose,
memory=conversation_memory,
return_source_documents=True
)
```
I would be happy to create a PR for this; I just wanted to get some feedback and see whether anyone has counter-points to this suggestion.
### Motivation
Using stop sequences for `ChatAnthropic` with `ConversationChain` and `ConversationRetrievalChain` causes issues.
### Your contribution
Yes, I'd be happy to create a PR for this. | stop sequences as a parameter for ChatAnthropic cannot be added | https://api.github.com/repos/langchain-ai/langchain/issues/10455/comments | 2 | 2023-09-11T16:42:36Z | 2023-12-19T00:48:23Z | https://github.com/langchain-ai/langchain/issues/10455 | 1,890,888,136 | 10,455 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain uses the `max_elements` parameter to build the hnsw index, but as of pg_embedding version 0.3.2 that parameter no longer exists.
The error is:
`Failed to create HNSW extension or index: (psycopg2.errors.InvalidParameterValue) unrecognized parameter "maxelements"`
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Create a Neon database in their cloud, following their example.
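A minimal sketch of the failing call, assuming the usual `PGEmbedding` setup (the connection string is a placeholder):
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import PGEmbedding

store = PGEmbedding.from_texts(
    texts=["hello world"],
    embedding=OpenAIEmbeddings(),
    connection_string=NEON_CONNECTION_STRING,  # placeholder for the Neon DSN
)
# fails on pg_embedding >= 0.3.2, which no longer accepts maxelements
store.create_hnsw_index(max_elements=10000, dims=1536, m=8, ef_construction=16, ef_search=16)
```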
### Expected behavior
PGEmbedding.from_embeddings.create_hnsw_index should run migration without errors | hnsw in Postgres via Neon extention return error | https://api.github.com/repos/langchain-ai/langchain/issues/10454/comments | 2 | 2023-09-11T16:27:02Z | 2023-12-18T23:46:18Z | https://github.com/langchain-ai/langchain/issues/10454 | 1,890,863,082 | 10,454 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi, I am trying to use RedisChatMessageHistory, but I get this error:
Error 97 connecting to localhost:6379. Address family not supported by protocol
However, another URL is defined:
```
REDIS_URL = "redis://default:mypassword@redis-17697.c304.europe-west1-2.gce.cloud.redislabs.com:17697/0"
history = RedisChatMessageHistory(session_id='2', url=REDIS_URL, key_prefix='LILOK')
```
The Redis server is external, the VPC is disabled for the Lambda.
**Full error:**
```
[ERROR] ConnectionError: Error 97 connecting to localhost:6379. Address family not supported by protocol.
Traceback (most recent call last):
File "/var/task/lambda_function.py", line 45, in lambda_handler
history.add_user_message(text)
File "/opt/python/langchain/schema/chat_history.py", line 46, in add_user_message
self.add_message(HumanMessage(content=message))
File "/opt/python/langchain/memory/chat_message_histories/redis.py", line 56, in add_message
self.redis_client.lpush(self.key, json.dumps(_message_to_dict(message)))
File "/opt/python/redis/commands/core.py", line 2734, in lpush
return self.execute_command("LPUSH", name, *values)
File "/opt/python/redis/client.py", line 505, in execute_command
conn = self.connection or pool.get_connection(command_name, **options)
File "/opt/python/redis/connection.py", line 1073, in get_connection
connection.connect()
File "/opt/python/redis/connection.py", line 265, in connect
raise ConnectionError(self._error_message(e))
```
**Full code:**
```
import os
import json
import requests
from langchain.memory import RedisChatMessageHistory
from langchain import OpenAI
from langchain.chains.conversation.memory import ConversationBufferMemory
from langchain.chains import ConversationChain
TELEGRAM_TOKEN = 'mytoken'
TELEGRAM_URL = f"https://api.telegram.org/bot{TELEGRAM_TOKEN}/"
def lambda_handler(event, context):
    REDIS_URL = "redis://default:mypassword@redis-17697.c304.europe-west1-2.gce.cloud.redislabs.com:17697/0"
history = RedisChatMessageHistory(session_id='2', url=REDIS_URL, key_prefix='LILOK')
llm = OpenAI(model_name='text-davinci-003',
temperature=0,
max_tokens = 256)
memory = ConversationBufferMemory()
conversation = ConversationChain(
llm=llm,
verbose=True,
memory=memory
)
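    # NOTE: this second call re-creates the history without a url, so it falls
    # back to the default redis://localhost:6379; the traceback points there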
history = RedisChatMessageHistory("foo")
# Log the received event for debugging
print("Received event: ", json.dumps(event, indent=4))
message = json.loads(event['body'])
# Check if 'message' key exists in the event
if 'message' in message:
chat_id = message['message']['chat']['id']
text = message['message'].get('text', '')
if text == '/start':
send_telegram_message(chat_id, "Hi!")
else:
history.add_user_message(text)
result = conversation.predict(input=history.messages)
history.add_ai_message(result)
send_telegram_message(chat_id, result)
else:
print("No 'message' key found in the received event")
return {
'statusCode': 400,
'body': json.dumps("Bad Request: No 'message' key")
}
return {
'statusCode': 200
}
def send_telegram_message(chat_id, message):
url = TELEGRAM_URL + f"sendMessage?chat_id={chat_id}&text={message}"
requests.get(url)
```
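My own guess (unverified): the second `RedisChatMessageHistory("foo")` call has no `url`, so it presumably falls back to the default `redis://localhost:6379`. Dropping it, or passing the URL again, might be the fix:
```python
history = RedisChatMessageHistory(session_id="foo", url=REDIS_URL, key_prefix="LILOK")
```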
Please advise
### Suggestion:
_No response_ | Error 97 connecting to localhost:6379. Address family not supported by protocol | https://api.github.com/repos/langchain-ai/langchain/issues/10453/comments | 4 | 2023-09-11T16:06:18Z | 2023-09-11T23:23:39Z | https://github.com/langchain-ai/langchain/issues/10453 | 1,890,830,662 | 10,453 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain: 0.0.285
Platform: OSX Ventura (apple silicon)
Python version: 3.11
### Who can help?
@gregnr since it looks like you added the [Supabase example code](https://python.langchain.com/docs/modules/data_connection/retrievers/self_query/supabase_self_query)
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create a fresh conda env with Python 3.11
2. Install JupyterLab and create a notebook
3. Follow the steps in the [Supabase example code](https://python.langchain.com/docs/modules/data_connection/retrievers/self_query/supabase_self_query) tutorial
On the step to:
```
vectorstore = SupabaseVectorStore.from_documents(
docs,
embeddings,
client=supabase,
table_name="documents",
query_name="match_documents"
)
```
it fails with error `JSONDecodeError: Expecting value: line 1 column 1 (char 0)`:
<details>
<summary>Stacktrace</summary>
```
---------------------------------------------------------------------------
JSONDecodeError Traceback (most recent call last)
Cell In[10], line 1
----> 1 vectorstore = SupabaseVectorStore.from_documents(
2 docs,
3 embeddings,
4 client=supabase,
5 table_name="documents",
6 query_name="match_documents"
7 )
File /opt/miniconda3/envs/self-query-experiment/lib/python3.11/site-packages/langchain/vectorstores/base.py:417, in VectorStore.from_documents(cls, documents, embedding, **kwargs)
415 texts = [d.page_content for d in documents]
416 metadatas = [d.metadata for d in documents]
--> 417 return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
File /opt/miniconda3/envs/self-query-experiment/lib/python3.11/site-packages/langchain/vectorstores/supabase.py:147, in SupabaseVectorStore.from_texts(cls, texts, embedding, metadatas, client, table_name, query_name, ids, **kwargs)
145 ids = [str(uuid.uuid4()) for _ in texts]
146 docs = cls._texts_to_documents(texts, metadatas)
--> 147 cls._add_vectors(client, table_name, embeddings, docs, ids)
149 return cls(
150 client=client,
151 embedding=embedding,
152 table_name=table_name,
153 query_name=query_name,
154 )
File /opt/miniconda3/envs/self-query-experiment/lib/python3.11/site-packages/langchain/vectorstores/supabase.py:323, in SupabaseVectorStore._add_vectors(client, table_name, vectors, documents, ids)
320 for i in range(0, len(rows), chunk_size):
321 chunk = rows[i : i + chunk_size]
--> 323 result = client.from_(table_name).upsert(chunk).execute() # type: ignore
325 if len(result.data) == 0:
326 raise Exception("Error inserting: No rows added")
File /opt/miniconda3/envs/self-query-experiment/lib/python3.11/site-packages/postgrest/_sync/request_builder.py:62, in SyncQueryRequestBuilder.execute(self)
53 r = self.session.request(
54 self.http_method,
55 self.path,
(...)
58 headers=self.headers,
59 )
61 try:
---> 62 return APIResponse.from_http_request_response(r)
63 except ValidationError as e:
64 raise APIError(r.json()) from e
File /opt/miniconda3/envs/self-query-experiment/lib/python3.11/site-packages/postgrest/base_request_builder.py:154, in APIResponse.from_http_request_response(cls, request_response)
150 @classmethod
151 def from_http_request_response(
152 cls: Type[APIResponse], request_response: RequestResponse
153 ) -> APIResponse:
--> 154 data = request_response.json()
155 count = cls._get_count_from_http_request_response(request_response)
156 return cls(data=data, count=count)
File /opt/miniconda3/envs/self-query-experiment/lib/python3.11/site-packages/httpx/_models.py:756, in Response.json(self, **kwargs)
754 if encoding is not None:
755 return jsonlib.loads(self.content.decode(encoding), **kwargs)
--> 756 return jsonlib.loads(self.text, **kwargs)
File /opt/miniconda3/envs/self-query-experiment/lib/python3.11/json/__init__.py:346, in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
341 s = s.decode(detect_encoding(s), 'surrogatepass')
343 if (cls is None and object_hook is None and
344 parse_int is None and parse_float is None and
345 parse_constant is None and object_pairs_hook is None and not kw):
--> 346 return _default_decoder.decode(s)
347 if cls is None:
348 cls = JSONDecoder
File /opt/miniconda3/envs/self-query-experiment/lib/python3.11/json/decoder.py:337, in JSONDecoder.decode(self, s, _w)
332 def decode(self, s, _w=WHITESPACE.match):
333 """Return the Python representation of ``s`` (a ``str`` instance
334 containing a JSON document).
335
336 """
--> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end())
338 end = _w(s, end).end()
339 if end != len(s):
File /opt/miniconda3/envs/self-query-experiment/lib/python3.11/json/decoder.py:355, in JSONDecoder.raw_decode(self, s, idx)
353 obj, end = self.scan_once(s, idx)
354 except StopIteration as err:
--> 355 raise JSONDecodeError("Expecting value", s, err.value) from None
356 return obj, end
JSONDecodeError: Expecting value: line 1 column 1 (char 0)
```
</details>
It appears that Supabase is returning a 201 response code with an empty body. The postgrest library then tries to parse the JSON with `data = request_response.json()`, which fails because the body is empty.
Are there some extra headers that should be added to the Supabase client to tell it to return a response body?
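One workaround I am considering, in case that is the cause (assumptions on my side: PostgREST only echoes inserted rows back when asked for the representation, and supabase-py exposes its underlying httpx session):
```python
# hypothetical: ask PostgREST to include the inserted rows in the response body
supabase.postgrest.session.headers.update({"Prefer": "return=representation"})
```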
### Expected behavior
No error when invoking `SupabaseVectorStore.from_documents()` | Error creating Supabase vector store when running self-query example code | https://api.github.com/repos/langchain-ai/langchain/issues/10447/comments | 6 | 2023-09-11T14:21:18Z | 2023-09-12T07:04:17Z | https://github.com/langchain-ai/langchain/issues/10447 | 1,890,633,505 | 10,447 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Similarly to `memory=ConversationSummaryBufferMemory(llm=llm, max_token_limit=n)` passed to `initialize_agent`, there should be a way to pass a `ConversationSummaryBufferMemory`-like object that summarizes the agent's `intermediate_steps` whenever the `agent_scratchpad` built from them exceeds `n` tokens.
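A rough sketch of the idea (the function and tool names here are hypothetical; only `AgentAction`, `get_num_tokens`, and `predict` are assumed from LangChain):
```python
from typing import List, Tuple

from langchain.chat_models import ChatOpenAI
from langchain.schema import AgentAction

llm = ChatOpenAI(temperature=0)

def summarize_steps(
    steps: List[Tuple[AgentAction, str]], max_tokens: int = 1000
) -> List[Tuple[AgentAction, str]]:
    """Collapse older steps into one summarized pseudo-step once the scratchpad gets too big."""
    scratchpad = "\n".join(f"{action.log}\nObservation: {obs}" for action, obs in steps)
    if llm.get_num_tokens(scratchpad) <= max_tokens:
        return steps
    summary = llm.predict(f"Condense these agent steps, keeping key facts:\n{scratchpad}")
    stub = AgentAction(tool="_summary", tool_input="", log=summary)
    return [(stub, "")] + steps[-1:]  # keep the most recent step verbatim
```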
### Motivation
Agents can run out of the context window when solving a complex problem with tools.
### Your contribution
I can't commit to anything for now. | Summarize agent_scratchpad when it exceeds n tokens | https://api.github.com/repos/langchain-ai/langchain/issues/10446/comments | 12 | 2023-09-11T14:16:50Z | 2024-04-01T20:03:40Z | https://github.com/langchain-ai/langchain/issues/10446 | 1,890,624,612 | 10,446 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
It does not list `tiktoken` as a dependency, and while trying to run the code that creates the vector store via `SupabaseVectorStore.from_documents()`, I got this error:
```
ImportError: Could not import tiktoken python package. This is needed in order to for OpenAIEmbeddings. Please install it with `pip install tiktoken`.
```
### Idea or request for content:
Add `tiktoken` as a listed dependency, i.e. tell readers to `pip install tiktoken`.
cc @gregnr | DOC: Supabase Vector self-querying | https://api.github.com/repos/langchain-ai/langchain/issues/10444/comments | 2 | 2023-09-11T13:20:54Z | 2023-09-12T07:01:13Z | https://github.com/langchain-ai/langchain/issues/10444 | 1,890,500,153 | 10,444 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am getting the following error after a period of inactivity. However, the issue resolves itself when I restart the server and run the same query.
Retrying langchain.llms.openai.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised Timeout: Request timed out: HTTPSConnectionPool(host='api.openai.com', port=443): Read timed out. (read timeout=600).
How can I fix this issue?
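One knob I have not tried yet, in case it matters: the OpenAI wrapper accepts a `request_timeout`, so failing fast and retrying might beat waiting out the 600 s read timeout (assumption: this maps straight onto the underlying `openai` client's timeout).
```python
from langchain.llms import OpenAI

llm = OpenAI(request_timeout=60, max_retries=6)  # fail fast, then let the retry logic kick in
```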
### Suggestion:
_No response_ | Issue: Request timeout | https://api.github.com/repos/langchain-ai/langchain/issues/10443/comments | 3 | 2023-09-11T12:12:49Z | 2024-02-11T16:14:56Z | https://github.com/langchain-ai/langchain/issues/10443 | 1,890,374,565 | 10,443 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain version: 0.0.285
Python version: 3.11.2
Platform: x86_64 Debian 12.2.0-14
Weaviate 1.21.2 as vectorstore
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Following the instructions [here](https://python.langchain.com/docs/modules/data_connection/indexing#quickstart),
`from langchain.indexes import SQLRecordManager, index` returns the following warning:
```
/lib/python3.11/site-packages/langchain/indexes/_sql_record_manager.py:38: MovedIn20Warning: The ``declarative_base()`` function is now available as sqlalchemy.orm.declarative_base(). (deprecated since: 2.0) (Background on SQLAlchemy 2.0 at: https://sqlalche.me/e/b8d9)
Base = declarative_base()
```
LangChain's [indexes documentation](https://api.python.langchain.com/en/latest/api_reference.html#module-langchain.indexes) doesn't include `SQLRecordManager`. Additionally, the `RecordManager` [documentation](https://api.python.langchain.com/en/latest/indexes/langchain.indexes.base.RecordManager.html#langchain-indexes-base-recordmanager) doesn't mention it can be used with SQLite.
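The warning text itself suggests the fix; presumably (untested on my side) `_sql_record_manager.py` just needs the new import location:
```python
# langchain/indexes/_sql_record_manager.py
from sqlalchemy.orm import declarative_base  # moved here in SQLAlchemy 1.4/2.0

Base = declarative_base()
```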
### Expected behavior
No warnings. | Warning using SQLRecordManager | https://api.github.com/repos/langchain-ai/langchain/issues/10439/comments | 2 | 2023-09-11T08:32:27Z | 2024-02-02T04:10:52Z | https://github.com/langchain-ai/langchain/issues/10439 | 1,889,969,284 | 10,439 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
The current [Weaviate documentation](https://python.langchain.com/docs/integrations/providers/weaviate) in LangChain doesn't include instructions for setting up Weaviate's schema to integrate it properly with LangChain. Adding them would prevent future issues like #10424.
### Idea or request for content:
Include in the documentation a reference to [Weaviate Auto-Schema](https://weaviate.io/developers/weaviate/config-refs/schema#auto-schema), explaining this is the default behavior when a `Document` is loaded to a Weaviate vectorstore. Also, give examples of how the Schema JSON file can be adjusted to work without problems with LangChain. | DOC: Include instructions for Weaviate Schema Configuration | https://api.github.com/repos/langchain-ai/langchain/issues/10438/comments | 2 | 2023-09-11T08:03:25Z | 2023-12-25T16:08:34Z | https://github.com/langchain-ai/langchain/issues/10438 | 1,889,911,805 | 10,438 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hello,
I am using LangChain's BabyAGI and need to create a custom tool.
Inside this custom tool's function logic I need to perform some operations based on a file. How can I do that?
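A minimal sketch of the kind of tool I mean (the file path and logic are placeholders):
```python
from langchain.agents import Tool

def process_file(query: str) -> str:
    # placeholder logic: read a local file and return the part relevant to the query
    with open("data.txt") as f:
        content = f.read()
    return content[:500]

file_tool = Tool(
    name="file_ops",
    func=process_file,
    description="Reads data.txt and returns content relevant to the query.",
)
```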
### Suggestion:
_No response_ | Issue: babyagi agent custom tool file operation usage | https://api.github.com/repos/langchain-ai/langchain/issues/10437/comments | 3 | 2023-09-11T06:42:00Z | 2023-12-25T16:08:40Z | https://github.com/langchain-ai/langchain/issues/10437 | 1,889,781,015 | 10,437 |
[
"langchain-ai",
"langchain"
] | ### System Info
- langchain v0.0.285
- transformers v4.32.1
- Windows10 Pro (virtual machine, running on a Server with several virtual machines!)
- 32 - 100GB Ram
- AMD Epyc
- 2x Nvidia RTX4090
- Python 3.10
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Hey guys,
I think there is a problem with "HuggingFaceInstructEmbeddings".
When using:
```
embeddings = HuggingFaceInstructEmbeddings(model_name="hkunlp/instructor-xl", cache_folder="testing")
vectorstore = FAISS.from_texts(texts=text_chunks, embedding=embeddings)
```
or
```
embeddings = HuggingFaceInstructEmbeddings(model_name="hkunlp/instructor-xl")
vectorstore = FAISS.from_texts(texts=text_chunks, embedding=embeddings)
```
or
```
embeddings = HuggingFaceInstructEmbeddings(model_name="intfloat/multilingual-e5-large", model_kwargs={"device": "cuda:0"})
db = Chroma.from_documents(documents=texts, embedding=embeddings, collection_name="snakes", persist_directory="db")
```
In my opinion, the problem always occurs on the 2nd line of each example, when `embedding=embeddings` is used: shortly after printing "512 tokens used" (or similar text), the complete server breaks down and switches off.
Sometimes the system can run the task but prints errors like "can't find the HUGGINGFACEHUB_API_TOKEN". But if I run the code again (without having changed anything), **_the server_** (not only my virtual machine) switches off :(
We can't find any error message in the Windows system logs, and none on the server.
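One debugging idea I have not tried yet (assumption: the crash is related to GPU passthrough on the VM): pin the model to CPU and see whether the server still goes down.
```python
embeddings = HuggingFaceInstructEmbeddings(
    model_name="hkunlp/instructor-xl",
    model_kwargs={"device": "cpu"},  # bypass the RTX 4090s entirely
)
```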
### Expected behavior
Running the code. Maybe the problem is caused by running it on virtual machines?
I don't know, but always switching off the whole server is a big Problem for our company - i hope you can help me :) | Use "HuggingFaceInstructEmbeddings" --> powering down the whole Server with all running VMs :( | https://api.github.com/repos/langchain-ai/langchain/issues/10436/comments | 8 | 2023-09-11T04:58:51Z | 2023-09-13T17:35:43Z | https://github.com/langchain-ai/langchain/issues/10436 | 1,889,662,620 | 10,436 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Hi
Currently, the `min_seconds` and `max_seconds` of `create_base_retry_decorator` are hard-coded values. Can you please make these parameters configurable, so that we can pass them from `AzureChatOpenAI` similar to `max_retries`?
e.g.:
```python
llm = AzureChatOpenAI(
    deployment_name=deployment_name,
    model_name=model_name,
    max_tokens=max_tokens,
    temperature=0,
    max_retries=7,
    min_seconds=20,
    max_seconds=60,
)
```
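For context, a sketch of what the decorator could expose, based on how tenacity's `wait_exponential` works (the exact signature in LangChain may differ):
```python
from tenacity import retry, stop_after_attempt, wait_exponential

def create_base_retry_decorator(max_retries: int = 6, min_seconds: float = 4, max_seconds: float = 10):
    # sketch: surface the wait bounds instead of hard-coding them
    return retry(
        reraise=True,
        stop=stop_after_attempt(max_retries),
        wait=wait_exponential(multiplier=1, min=min_seconds, max=max_seconds),
    )
```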
### Motivation
Setting these values will help with RateLimitError. Currently, these parameters can only be changed by editing the library files, which is impractical to set up in all deployed environments.
### Your contribution
NA | keep min_seconds and max_seconds of create_base_retry_decorator configurable | https://api.github.com/repos/langchain-ai/langchain/issues/10435/comments | 3 | 2023-09-11T04:56:47Z | 2024-02-20T16:08:26Z | https://github.com/langchain-ai/langchain/issues/10435 | 1,889,660,615 | 10,435 |
[
"langchain-ai",
"langchain"
] | ### System Info
I am using OpenAIFunctionsAgent with langchain 0.0.285; a tool-input parsing error occurs frequently when input is provided:
Could not parse tool input: {'name': 'AI_tool', 'arguments': 'What is a pre-trained chatbot?'} because the arguments is not valid JSON.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.agents import AgentExecutor, OpenAIFunctionsAgent
from langchain.agents.agent_toolkits import create_retriever_tool
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferWindowMemory
from langchain.prompts import MessagesPlaceholder
from langchain.schema import SystemMessage

retriever = db.as_retriever()  # Milvus

tool = create_retriever_tool(
    retriever,
    "document_search_tool",
    "useful for answering questions related to XXXXXXXX."
)
tool_sales = create_retriever_tool(
    retriever,
    "sales_tool",
    "useful for answering questions related to buying or subscribing XXXXXXXX."
)
tool_support = create_retriever_tool(  # defined but not in the tools list below
    retriever,
    "support_tool",
    "useful for when you need to answer questions related to support humans on XXXXXXXX."
)
tools = [tool, tool_sales]

llm = ChatOpenAI(model_name="gpt-3.5-turbo-0613", temperature=0.3)

system_message = SystemMessage(
    content=(
        "You are a digital team member of XXXXXXXX Organization, specialising in XXXXXXXX."
        "Always respond and act as an office manager of XXXXXXXX, never referring to the XXXXXXXX "
        "as an external or separate entity. "
        "* Please answer questions directly from the context, and strive for brevity, keeping answers under 30 words."
        "* Convey information in a manner that's both professional and empathetic, embodying the values of XXXXXXXX."
    )
)

prompt = OpenAIFunctionsAgent.create_prompt(
    system_message=system_message,
    extra_prompt_messages=[MessagesPlaceholder(variable_name="chat_history")]
)
agent = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt, verbose=True)
memory = ConversationBufferWindowMemory(memory_key="chat_history", return_messages=True, k=6)
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    memory=memory,
    verbose=True,
)

result = agent_executor({"input": question, "chat_history": chat_history})  # call the executor, not the bare agent
answer = str(result["output"])
print(answer)
```
### Expected behavior
i need to remove the error | Could not parse tool input: {'name': 'AI_tool', 'arguments': 'What is a pre-trained chatbot?'} because the arguments is not valid JSON. | https://api.github.com/repos/langchain-ai/langchain/issues/10433/comments | 5 | 2023-09-11T03:44:09Z | 2024-01-03T09:32:01Z | https://github.com/langchain-ai/langchain/issues/10433 | 1,889,593,320 | 10,433 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Could you add an implementation of BaseChatModel using CTransformers?
### Motivation
I prefer to use a local model instead of an API. The LLM itself works, but I need the chat-model wrapper for it.
### Your contribution
My failed attempt
```
from typing import Any, List, Optional

from ctransformers import AutoModelForCausalLM, LLM
from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.chat_models.base import SimpleChatModel
from langchain.schema import BaseMessage, HumanMessage
from pydantic import BaseModel


class CTransformersChatModel(SimpleChatModel, BaseModel):
    ctransformers_model: Optional[LLM] = None

    def __init__(self, model_path: str, model_type: Optional[str] = "llama", **kwargs: Any):
        super().__init__(**kwargs)
        # pass model_type through; it was silently ignored before
        self.ctransformers_model = AutoModelForCausalLM.from_pretrained(model_path, model_type=model_type)

    def _call(
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        # Concatenate the human messages into a single string prompt
        prompt = " ".join(m.content for m in messages if isinstance(m, HumanMessage))
        # ctransformers' LLM.__call__ does not accept run_manager, so don't forward it
        return self.ctransformers_model(prompt, stop=stop, **kwargs)

    @property
    def _llm_type(self) -> str:
        """Return type of chat model."""
        return "ctransformers_chat_model"
``` | BaseChatModel implementation using CTransformers | https://api.github.com/repos/langchain-ai/langchain/issues/10427/comments | 2 | 2023-09-10T21:14:33Z | 2023-12-18T23:46:32Z | https://github.com/langchain-ai/langchain/issues/10427 | 1,889,328,945 | 10,427 |
[
"langchain-ai",
"langchain"
] | ### System Info
As far as I tried, this reproduced in many versions, including the latest `langchain==0.0.285`
### Who can help?
@agola11 @hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Using the following code
```python
from langchain.agents.agent_toolkits import (
    create_conversational_retrieval_agent,
    create_retriever_tool,
)
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import PyPDFLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

llm = ChatOpenAI(model_name="gpt-4", temperature=0, verbose=True)  # sometimes with streaming=True

# example of one tool that's being used; tool1 and tool2 below are built the same way
loader = PyPDFLoader(insurance_file)
pages = loader.load_and_split()
faiss_index = FAISS.from_documents(pages, OpenAIEmbeddings())
health_insurance_retriever = faiss_index.as_retriever()
tool = create_retriever_tool(health_insurance_retriever, "health_insurance_plan",
                             "XXX Description")

agent_executor = create_conversational_retrieval_agent(
    llm, [tool1, tool2], verbose=True, system_message="...")
agent_executor("Some question that requires usage of retrieval tools")
```
The result is often (statistically speaking, but it reproduces pretty frequently) returned with placeholder references such as the following:
```
I'm sorry to hear that you're experiencing back pain. Let's look into your health insurance plan to see what coverage you have for this issue.
[Assistant to=functions.health_insurance_plan]
{
"__arg1": "back pain"
}
...
[Assistant to=functions.point_solutions]
{
"__arg1": "back pain"
}
```
### Expected behavior
Chain using the retrieval tools to actually query the vector store, instead of returning the placeholders
Thank you for your help! | Conversational Retrieval Agent returning partial output | https://api.github.com/repos/langchain-ai/langchain/issues/10425/comments | 4 | 2023-09-10T16:57:30Z | 2024-03-11T16:16:51Z | https://github.com/langchain-ai/langchain/issues/10425 | 1,889,238,903 | 10,425 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain version: 0.0.276
Python version: 3.11.2
Platform: x86_64 Debian 12.2.0-14
Weaviate as vectorstore
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
import os, openai, weaviate
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Weaviate
from langchain.retrievers.weaviate_hybrid_search import WeaviateHybridSearchRetriever
openai.api_key = os.environ['OPENAI_API_KEY']
embeddings = OpenAIEmbeddings()
INDEX_NAME = 'LaborIA_VectorsDB'
client = weaviate.Client(
url = "http://10.0.1.21:8085",
additional_headers = {
"X-OpenAI-Api-Key": openai.api_key
}
)
vectorstore = Weaviate(  # renamed from "weaviate", which shadowed the imported module
client = client,
index_name = INDEX_NAME,
text_key = "text",
by_text = False,
embedding = embeddings,
)
hyb_weav_retriever = WeaviateHybridSearchRetriever(
client=client,
index_name=INDEX_NAME,
text_key="text",
attributes=[],
create_schema_if_missing=True,
)
returned_docs_hybrid = hyb_weav_retriever.get_relevant_documents(question, score=True)
```
This returns the following trace:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
File <timed exec>:1
File ~/AI Project/jupyternbook/lib/python3.11/site-packages/langchain/schema/retriever.py:208, in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, **kwargs)
206 except Exception as e:
207 run_manager.on_retriever_error(e)
--> 208 raise e
209 else:
210 run_manager.on_retriever_end(
211 result,
212 **kwargs,
213 )
File ~/AI Project/jupyternbook/lib/python3.11/site-packages/langchain/schema/retriever.py:201, in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, **kwargs)
199 _kwargs = kwargs if self._expects_other_args else {}
200 if self._new_arg_supported:
--> 201 result = self._get_relevant_documents(
202 query, run_manager=run_manager, **_kwargs
203 )
204 else:
205 result = self._get_relevant_documents(query, **_kwargs)
File ~/AI Project/jupyternbook/lib/python3.11/site-packages/langchain/retrievers/weaviate_hybrid_search.py:113, in WeaviateHybridSearchRetriever._get_relevant_documents(self, query, run_manager, where_filter, score)
111 result = query_obj.with_hybrid(query, alpha=self.alpha).with_limit(self.k).do()
112 if "errors" in result:
--> 113 raise ValueError(f"Error during query: {result['errors']}")
115 docs = []
117 for res in result["data"]["Get"][self.index_name]:
ValueError: Error during query: [{'locations': [{'column': 6, 'line': 1}], 'message': 'get vector input from modules provider: VectorFromInput was called without vectorizer', 'path': ['Get', 'LaborIA_VectorsDB']}]
```
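If it helps: my reading of `VectorFromInput was called without vectorizer` (an assumption on my part) is that the `LaborIA_VectorsDB` class was auto-created without a vectorizer module, so Weaviate cannot vectorize the hybrid query server-side. A class definition along these lines might avoid it:
```python
client.schema.create_class({
    "class": INDEX_NAME,
    "vectorizer": "text2vec-openai",  # assumption: lets Weaviate vectorize hybrid queries itself
    "properties": [{"name": "text", "dataType": ["text"]}],
})
```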
### Expected behavior
Returned relevant documents. | Weaviate Hybrid Search Returns Error | https://api.github.com/repos/langchain-ai/langchain/issues/10424/comments | 5 | 2023-09-10T16:55:14Z | 2024-01-26T00:43:23Z | https://github.com/langchain-ai/langchain/issues/10424 | 1,889,237,256 | 10,424 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Provide a parameter that determines whether to extract images from the PDF, and add support for it.
### Motivation
A PDF may contain several images with abundant information, but reading the code, there seems to be no support for extracting them.
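For illustration, a rough sketch of what the loader could do internally, using PyMuPDF (purely an assumption about the implementation):
```python
import fitz  # PyMuPDF

def extract_images(pdf_path: str) -> list:
    """Return the raw bytes of every image embedded in the PDF."""
    images = []
    with fitz.open(pdf_path) as doc:
        for page in doc:
            for xref, *_ in page.get_images(full=True):
                images.append(doc.extract_image(xref)["image"])
    return images
```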
### Your contribution
I'd like to add the feature if it is really lacking. | Is there a support for extracting images from pdf? | https://api.github.com/repos/langchain-ai/langchain/issues/10423/comments | 3 | 2023-09-10T16:41:55Z | 2024-07-03T16:04:21Z | https://github.com/langchain-ai/langchain/issues/10423 | 1,889,225,613 | 10,423 |