| issue_owner_repo | issue_body | issue_title | issue_comments_url | issue_comments_count | issue_created_at | issue_updated_at | issue_html_url | issue_github_id | issue_number |
|---|---|---|---|---|---|---|---|---|---|
[
"langchain-ai",
"langchain"
] | ### System Info
python = "^3.8.10"
langchain = "^0.0.336"
google-cloud-aiplatform = "^1.36.3"
### Who can help?
@hwchase17 @agol
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.llms.vertexai import VertexAI
model = VertexAI(
    model_name="text-bison@001",
    temperature=0.2,
    max_output_tokens=1024,
    top_k=40,
    top_p=0.8,
)
model.client
# <vertexai.preview.language_models._PreviewTextGenerationModel at ...>
# it should be <vertexai.language_models.TextGenerationModel at ...>
```
### Expected behavior
Code reference: https://github.com/langchain-ai/langchain/blob/78a1f4b264fbdca263a4f8873b980eaadb8912a7/libs/langchain/langchain/llms/vertexai.py#L255C77-L255C77
The VertexAI API is now using vertexai.language_models.TextGenerationModel.
Instead, here we are still importing it from vertexai.preview.language_models. | Changed import of VertexAI | https://api.github.com/repos/langchain-ai/langchain/issues/13606/comments | 3 | 2023-11-20T12:55:25Z | 2024-02-26T16:05:58Z | https://github.com/langchain-ai/langchain/issues/13606 | 2,002,142,951 | 13,606 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I've created a multi-level directory vector store using Faiss. How can I retrieve all indices within one or multiple subdirectories?
### Suggestion:
_No response_ | Issue: retrieve multi index from vector store using Faiss in Langchain | https://api.github.com/repos/langchain-ai/langchain/issues/13605/comments | 2 | 2023-11-20T12:47:43Z | 2023-11-21T14:58:27Z | https://github.com/langchain-ai/langchain/issues/13605 | 2,002,129,643 | 13,605 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain 0.0.338
Python 3.11.5
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I was trying to combine multiple structured `Tool`s, one that produces a `List` of values and another that consumes it, but couldn't get it to work. I asked the LangChain support bot whether it was possible and it said yes and produced the following example. But it does not work :)
```python
from langchain.llms import OpenAI
from langchain.agents import initialize_agent, AgentType
from langchain.tools import BaseTool
from typing import List
# Define the first structured tool that returns a list of strings
class ListTool(BaseTool):
    name = "List Tool"
    description = "Generates a list of strings."

    def _run(self) -> List[str]:
        """Return a list of strings."""
        return ["apple", "banana", "cherry"]

tool1 = ListTool()

# Define the second structured tool that accepts a list of strings
class ProcessListTool(BaseTool):
    name = "Process List Tool"
    description = "Processes a list of strings."

    def _run(self, input_list: List[str]) -> str:
        """Process the list of strings."""
        # Perform the processing logic here
        processed_list = [item.upper() for item in input_list]
        return f"Processed list: {', '.join(processed_list)}"

tool2 = ProcessListTool()

llm = OpenAI(temperature=0)

agent_executor = initialize_agent(
    [tool1, tool2],
    llm,
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)

output = agent_executor.run("Process the list")
print(output)  # Output: 'Processed list: APPLE, BANANA, CHERRY'
```
Full output:
```
> Entering new AgentExecutor chain...
Action:
{
  "action": "Process List Tool",
  "action_input": {
    "input_list": {
      "title": "Input List",
      "type": "array",
      "items": {
        "type": "string"
      }
    }
  }
}
Observation: Processed list: TITLE, TYPE, ITEMS
Thought: I have the processed list
Action:
{
  "action": "Final Answer",
  "action_input": "I have processed the list and it contains the following: TITLE, TYPE, ITEMS"
}
> Finished chain.
```
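For what it's worth, `_run` methods are plain callables underneath, so when one tool's output must feed another's input verbatim, a common workaround is to do the hand-off in Python inside a single wrapper tool instead of relying on the agent to relay the list through the LLM. A minimal sketch of that composition (plain functions standing in for the `_run` methods above — not a langchain API):

```python
from typing import List

def list_tool() -> List[str]:
    """Stand-in for ListTool._run: produce the list of strings."""
    return ["apple", "banana", "cherry"]

def process_list_tool(input_list: List[str]) -> str:
    """Stand-in for ProcessListTool._run: upper-case each item."""
    processed_list = [item.upper() for item in input_list]
    return f"Processed list: {', '.join(processed_list)}"

def list_pipeline_tool() -> str:
    """A single wrapper tool the agent could call; the structured
    hand-off happens in Python rather than through the LLM."""
    return process_list_tool(list_tool())

print(list_pipeline_tool())  # → Processed list: APPLE, BANANA, CHERRY
```

Registering only the wrapper with the agent sidesteps the relay problem entirely, at the cost of the agent no longer choosing the intermediate step itself.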
### Expected behavior
Expected output:
```
Processed list: APPLE, BANANA, CHERRY
``` | Structured tools not able to pass structured data to each other | https://api.github.com/repos/langchain-ai/langchain/issues/13602/comments | 12 | 2023-11-20T10:21:21Z | 2024-02-26T16:06:04Z | https://github.com/langchain-ai/langchain/issues/13602 | 2,001,851,127 | 13,602 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain==0.0.338
python==3.8.1
neo4j latest
this is the error:
---------------------------------------------------------------------------
ConfigurationError                        Traceback (most recent call last)
/Users/m1/Desktop/LangChain/Untitled.ipynb Cell 1 line 5
      2 import os
      4 uri, user, password = os.getenv("NEO4J_URI"), os.getenv("NEO4J_USERNAME"), os.getenv("NEO4J_PASSWORD")
----> 5 graph = Neo4jGraph(
      6     url=uri,
      7     username=user,
      8     password=password,
      9 )

File ~/Desktop/LangChain/KG_openai/lib/python3.8/site-packages/langchain/graphs/neo4j_graph.py:69, in Neo4jGraph.__init__(self, url, username, password, database)
     66 password = get_from_env("password", "NEO4J_PASSWORD", password)
     67 database = get_from_env("database", "NEO4J_DATABASE", database)
---> 69 self._driver = neo4j.GraphDatabase.driver(url, auth=(username, password))
     70 self._database = database
     71 self.schema: str = ""

File ~/Desktop/LangChain/KG_openai/lib/python3.8/site-packages/neo4j/_sync/driver.py:190, in GraphDatabase.driver(cls, uri, auth, **config)
    170 @classmethod
    171 def driver(
    172     cls, uri: str, *,
    (...)
    177     **config
    178 ) -> Driver:
...
File ~/Desktop/LangChain/KG_openai/lib/python3.8/site-packages/neo4j/api.py:486
--> 486 raise ConfigurationError("Username is not supported in the URI")
    488 if parsed.password:
    489     raise ConfigurationError("Password is not supported in the URI")

ConfigurationError: Username is not supported in the URI
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.graphs import Neo4jGraph
import os
uri, user, password = os.getenv("NEO4J_URI"), os.getenv("NEO4J_USERNAME"), os.getenv("NEO4J_PASSWORD")
graph = Neo4jGraph(
    url=uri,
    username=user,
    password=password,
)
### Expected behavior
This driver construction runs fine in v0.0.264; however, it raises this error in v0.0.338. In the driver stub, the URL is parsed and then the username from the parsed URL is checked; if one is present, the configuration error above is raised.
| Neo4j - ConfigurationError: username not supported in the URI | https://api.github.com/repos/langchain-ai/langchain/issues/13601/comments | 5 | 2023-11-20T10:21:02Z | 2024-02-26T16:06:08Z | https://github.com/langchain-ai/langchain/issues/13601 | 2,001,850,563 | 13,601 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I'd like to fill `ConversationSummaryMemory` with the previous questions and answers for a specific conversation from an SQLite database, so my agent is already aware of the previous conversation with the user.
Here's my current code:
```py
import os
import sys

from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.indexes import VectorstoreIndexCreator
from langchain.indexes.vectorstore import VectorStoreIndexWrapper
from langchain.vectorstores.chroma import Chroma
from langchain.memory import ConversationSummaryMemory
from langchain.tools import Tool
from langchain.agents.types import AgentType
from langchain.agents import initialize_agent
from dotenv import load_dotenv

load_dotenv()
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")

query = " ".join(sys.argv[1:]) if len(sys.argv) > 1 else None

retriever = ...  # retriever stuff here for the `local-docs` tool

llm = ChatOpenAI(temperature=0.7, model="gpt-3.5-turbo-1106")

memory = ConversationSummaryMemory(
    llm=llm,
    memory_key="chat_history",
    return_messages=True,
)

chain = ConversationalRetrievalChain.from_llm(
    llm=llm,
    memory=memory,
    chain_type="stuff",
    retriever=index.vectorstore.as_retriever(search_kwargs={"k": 4}),
    get_chat_history=lambda h: h,
    verbose=False,
)

system_message = "Be helpful to your users."

tools = [
    Tool(
        name="local-docs",
        func=chain,
        description="Useful when you need to answer docs-related questions",
    )
]

def ask(input: str) -> str:
    result = ""
    try:
        result = executor({"input": input})
    except Exception as e:
        response = str(e)
        if response.startswith("Could not parse LLM output: `"):
            response = response.removeprefix(
                "Could not parse LLM output: `"
            ).removesuffix("`")
            return response
        else:
            raise Exception(str(e))
    return result

chat_history = []

executor = initialize_agent(
    agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
    tools=tools,
    llm=llm,
    memory=memory,
    agent_kwargs={"system_message": system_message},
    verbose=True,
    max_execution_time=30,
    max_iterations=6,
    handle_parsing_errors=True,
    early_stopping_method="generate",
    stop=["\nObservation:"],
)

result = ask(query)
print(result["output"])
``` | Issue: Filling `ConversationSummaryMemory` with existing conversation from an SQLite database | https://api.github.com/repos/langchain-ai/langchain/issues/13599/comments | 17 | 2023-11-20T08:52:20Z | 2023-11-30T03:27:24Z | https://github.com/langchain-ai/langchain/issues/13599 | 2,001,666,284 | 13,599 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Add support for other multimodal models like LLaVA, Fuyu, BakLLaVA... This would help with RAG, where documents have non-text data.
### Motivation
I have a lot of tables and images to process in PDFs when doing RAG, and right now this is not ideal.
### Your contribution
no time :( | add multimodal support | https://api.github.com/repos/langchain-ai/langchain/issues/13597/comments | 3 | 2023-11-20T07:00:43Z | 2024-02-26T16:06:13Z | https://github.com/langchain-ai/langchain/issues/13597 | 2,001,501,651 | 13,597 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am using Qdrant as my vector store, and every time I use `max_marginal_relevance_search` with a fixed `k` parameter it returns the same documents. How can I add some randomness, so that it returns something different (still within the `score_threshold`) each time? Here is my sample code using `max_marginal_relevance_search`:
```python
related_docs = vectorstore.max_marginal_relevance_search(
    target_place,
    k=fetch_amount,
    score_threshold=0.5,
    filter=rest.Filter(
        must=[
            rest.FieldCondition(
                key="metadata.category",
                match=rest.MatchValue(value=category),
            ),
            rest.FieldCondition(
                key="metadata.related_words",
                match=rest.MatchAny(any=related_words),
            ),
        ]
    ),
)
```
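MMR is deterministic for a fixed query, so two identical calls will always rank the same documents. One workaround (a sketch in plain Python, not a Qdrant or langchain API) is to over-fetch — e.g. request `3 * fetch_amount` results with the same `score_threshold` — and then randomly sample `fetch_amount` of the returned documents:

```python
import random
from typing import List, Optional, Sequence, TypeVar

T = TypeVar("T")

def sample_k(candidates: Sequence[T], k: int, seed: Optional[int] = None) -> List[T]:
    """Randomly pick k items from an over-fetched candidate pool.

    Every candidate already cleared the score_threshold upstream, so any
    sampled subset still respects it; the randomness only changes which
    of the qualifying documents are surfaced on a given call."""
    pool = list(candidates)
    rng = random.Random(seed)
    if len(pool) <= k:
        return pool
    return rng.sample(pool, k)

# Usage sketch -- the strings stand in for the Documents Qdrant would return
# from an over-fetched max_marginal_relevance_search call:
overfetched = ["doc_a", "doc_b", "doc_c", "doc_d", "doc_e", "doc_f"]
picked = sample_k(overfetched, k=2)
```

Because every candidate already passed the threshold, only the choice among qualifying documents varies between calls.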
### Suggestion:
_No response_ | Issue: How to add randomness when using max_marginal_relevance_search with Qdrant | https://api.github.com/repos/langchain-ai/langchain/issues/13596/comments | 3 | 2023-11-20T06:38:24Z | 2024-02-26T16:06:18Z | https://github.com/langchain-ai/langchain/issues/13596 | 2,001,474,706 | 13,596 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.266
### Who can help?
@eyurtsev @hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import datetime
import chainlit
from dotenv import load_dotenv
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.chat_models import ChatOpenAI
from langchain.docstore.document import Document # noqa
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.retrievers import SelfQueryRetriever
from langchain.vectorstores import Chroma
chainlit.debug = True
load_dotenv()
llm = ChatOpenAI()
docs = [
    Document(
        page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose",
        metadata={"released_at": 1700190868, "rating": 7.7, "genre": "science fiction"},
    ),
    Document(
        page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
        metadata={"released_at": 1700190868, "director": "Christopher Nolan", "rating": 8.2},
    ),
    Document(
        page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
        metadata={"released_at": 1700190868, "director": "Satoshi Kon", "rating": 8.6},
    ),
    Document(
        page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them",
        metadata={"released_at": 1700190868, "director": "Greta Gerwig", "rating": 8.3},
    ),
    Document(
        page_content="Toys come alive and have a blast doing so",
        metadata={"released_at": 1700190868, "genre": "animated"},
    ),
    Document(
        page_content="Three men walk into the Zone, three men walk out of the Zone",
        metadata={"released_at": 1700190868, "director": "Andrei Tarkovsky", "genre": "thriller", "rating": 9.9},
    ),
]
vectorstore = Chroma.from_documents(docs, OpenAIEmbeddings())

metadata_field_info = [
    AttributeInfo(
        name="released_at",
        description="Time the movie was released. It's second timestamp.",
        type="integer",
    ),
]
document_content_description = "Brief summary of a movie"
retriever = SelfQueryRetriever.from_llm(
    llm,
    vectorstore,
    document_content_description,
    metadata_field_info,
)

result = retriever.invoke(
    f"What's a movie in this month that's all about toys, and preferably is animated. Current time is: {datetime.datetime.now().strftime('%m/%d/%Y, %H:%M:%S')}.",
)
print(result)
```
### Expected behavior
I declared the `metadata_field_info` which includes the `released_at` field with the data type `integer`.
### I expected the following:
When my query involves the time/timerange of the release time, the query should compare using `integer` instead of `date time`.
### Why I expected this:
- The data type declared in `metadata_field_info` should be utilized.
- In the implementations of `SelfQueryRetriever` (I tested `qdrant` and `chroma`), the accepted type in comparison operations (gte/lte) must be numeric, not a date.
### Identified Reason
I identified the problem due to the `"SCHEMA[s]"` in [langchain/chains/query_constructor/prompt.py](https://github.com/langchain-ai/langchain/blob/190952fe76d8f7bf1e661cbdaa2ba0a2dc0f5456/libs/langchain/langchain/chains/query_constructor/prompt.py#L117).
This line in the prompt led to this result:
```
Make sure that filters only use format `YYYY-MM-DD` when handling date data typed values
```
I guess that this works for some SQL databases such as PostgreSQL, which accept 'YYYY-MM-DD' as a date query input.
However, since we are working with metadata on vector records, which are structured like JSON objects with key-value pairs, it may not work.
### Proof of reason
I tried modifying the prompts by defining my own classes and functions, such as `load_query_constructor_chain`, `_get_prompt`, and `SelfQueryRetriever`.
After replacing the above line with the following, it worked as expected:
```
Make sure that filters only use timestamp in second (integer) when handling timestamp data typed values.
```
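If overriding the prompt machinery is too invasive, another stopgap (a sketch, not part of langchain) is to post-process the generated filter instead: normalize any `YYYY-MM-DD` strings the query constructor emits into epoch seconds before the comparison reaches the vector store:

```python
from datetime import datetime, timezone

def to_epoch_seconds(value):
    """If value looks like a YYYY-MM-DD date string, convert it to an
    integer UTC timestamp; otherwise return the value unchanged."""
    if isinstance(value, str):
        try:
            dt = datetime.strptime(value, "%Y-%m-%d").replace(tzinfo=timezone.utc)
            return int(dt.timestamp())
        except ValueError:
            return value
    return value

print(to_epoch_seconds("1970-01-02"))  # → 86400
```

Applying this to every comparison value in the structured query keeps the `released_at` filter numeric regardless of what format the LLM chose.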
### Proposals
- Review the above problem. If metadata fields do not support querying with the date format 'YYYY-MM-DD' as specified in the prompt, please update it.
- If this prompt is specified for some use cases, please allow overriding the prompts.
| [SelfQueryRetriever] Generated Query Mismatched Timestamp Type | https://api.github.com/repos/langchain-ai/langchain/issues/13593/comments | 3 | 2023-11-20T04:16:00Z | 2024-04-30T16:22:56Z | https://github.com/langchain-ai/langchain/issues/13593 | 2,001,330,836 | 13,593 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Ability to use guidance.
https://github.com/guidance-ai/guidance
### Motivation
Not related to a problem.
### Your contribution
Not sure yet but I can look into it if it is something the community considers. | Support for Guidance | https://api.github.com/repos/langchain-ai/langchain/issues/13590/comments | 3 | 2023-11-20T03:54:37Z | 2024-02-26T16:06:23Z | https://github.com/langchain-ai/langchain/issues/13590 | 2,001,313,070 | 13,590 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I know I can generate a Python dictionary output using StructuredOutputParser, like { "a":1, "b":2, "c":3}. However, I would like to generate a nested dict like { "a":1, "b":2, "c":{"d":4, "e":5}}.
How can I do it?
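`StructuredOutputParser`'s `ResponseSchema` entries are flat, so nesting generally means describing the nested shape yourself. In langchain itself the usual route is `PydanticOutputParser` with a nested pydantic model; a stdlib-only sketch of the same idea (the schema text and helper below are illustrative, not a langchain API) is to hand-write the format instructions and parse the reply as JSON:

```python
import json

# Hand-written format instructions describing the nested shape we want;
# this string would be appended to the prompt in place of
# StructuredOutputParser.get_format_instructions().
FORMAT_INSTRUCTIONS = """Return a JSON object matching this schema:
{"a": <int>, "b": <int>, "c": {"d": <int>, "e": <int>}}
Output only the JSON object, nothing else."""

def parse_nested(llm_output: str) -> dict:
    """Parse the model's reply, tolerating a ```json fenced block."""
    text = llm_output.strip()
    if text.startswith("```"):
        text = text.strip("`")
        if text.startswith("json"):
            text = text[len("json"):]
    return json.loads(text)

reply = '{"a": 1, "b": 2, "c": {"d": 4, "e": 5}}'
print(parse_nested(reply)["c"]["e"])  # → 5
```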
### Suggestion:
_No response_ | Issue: can i generate a nested dic output | https://api.github.com/repos/langchain-ai/langchain/issues/13589/comments | 3 | 2023-11-20T03:10:22Z | 2024-02-26T16:06:27Z | https://github.com/langchain-ai/langchain/issues/13589 | 2,001,278,123 | 13,589 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi, I'm using a ConversationChain that contains memory. It is defined as:

llm = ChatOpenAI(temperature=0.0, model=llm_model)
memory = ConversationBufferMemory()
conversation = ConversationChain(
    llm=llm,
    memory=memory,
    verbose=True,
)
I know I can access the current memory by using "memory.buffer". However, I was wondering if there is a way to access memory only through ConversationChain instance "conversation"?
### Suggestion:
_No response_ | Issue: can i access memory buffer through chain? | https://api.github.com/repos/langchain-ai/langchain/issues/13584/comments | 5 | 2023-11-19T21:31:23Z | 2024-02-25T16:05:02Z | https://github.com/langchain-ai/langchain/issues/13584 | 2,001,045,763 | 13,584 |
[
"langchain-ai",
"langchain"
] | ### System Info
Linux 20.04 LTS
Python 3.6
### Who can help?
@hwchase17 seems like this got introduced on 2023-11-16
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Attempt to use a tracer to trace an LLM error
2. Note that the tracer hook for _on_chain_error is called instead
### Expected behavior
_on_llm_error hook should be called.
| The tracing on_llm_error() implementation calls _on_chain_error(), not _on_llm_error() | https://api.github.com/repos/langchain-ai/langchain/issues/13580/comments | 3 | 2023-11-19T19:21:07Z | 2024-02-28T16:07:56Z | https://github.com/langchain-ai/langchain/issues/13580 | 2,000,998,966 | 13,580 |
[
"langchain-ai",
"langchain"
] | ### System Info
Mac M1
### Who can help?
@eyurtsev
Here:
https://github.com/langchain-ai/langchain/blob/78a1f4b264fbdca263a4f8873b980eaadb8912a7/libs/langchain/langchain/document_loaders/confluence.py#L284
We start by adding the first `max_pages` pages to the `docs` list that will be the output of `loader.load`.
So we can be sure that I cannot retrieve only one specific `page_id`:
`loader.load(..., page_ids=['1234'], max_pages=N)`
will output X pages, where X is in [min(N, # pages in my Confluence), N + 1].
In other words, if I want only a specific page, I will always get at least 2 pages (in the case max_pages = 1).
So `page_ids` does not work at all, because `space_key` is mandatory.
Adding `if space_key and not page_ids` fixes my problem, but may lead to other problems (I did not check).
A dirty hack would be to collect the last F elements of the returned list of pages, where F is the number of found pages asked for in `page_ids`.
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
No time to do but easy when reading code
### Expected behavior
I can retrieve only the page_ids specified | Confluence loader fails to retrieve specific pages when 'pages_ids' is given | https://api.github.com/repos/langchain-ai/langchain/issues/13579/comments | 5 | 2023-11-19T18:54:14Z | 2024-02-26T16:06:38Z | https://github.com/langchain-ai/langchain/issues/13579 | 2,000,989,464 | 13,579 |
[
"langchain-ai",
"langchain"
] | I am having a wonderful time with my code, but after changing my template it now fails before I even get to give my input. Baffling!
All the required imports are not shown here, nor is all the prompt text (which contains no special characters).
template = '''Your task is to extract the relationships between terms in the input text,
Format your output as a json list. '''
prompt = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate.from_template(template),
    HumanMessagePromptTemplate.from_template("{input}"),
    MessagesPlaceholder(variable_name="history "),
])

llm = ChatOpenAI(temperature=0.8, model_name='gpt-4-1106-preview')
memory = ConversationBufferMemory(return_messages=True)
conversation = ConversationChain(memory=memory, prompt=prompt, llm=llm)
Traceback .........
conversation = ConversationChain(memory=memory, prompt=prompt, llm=llm)
File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for ConversationChain
__root__ Got unexpected prompt input variables. The prompt expects ['input', 'history '], but got ['history'] as inputs from memory, and input as the normal input key. (type=value_error) | ConversationChain failure after changing template text | https://api.github.com/repos/langchain-ai/langchain/issues/13578/comments | 6 | 2023-11-19T16:56:45Z | 2023-11-20T13:28:40Z | https://github.com/langchain-ai/langchain/issues/13578 | 2,000,941,147 | 13,578 |
[
"langchain-ai",
"langchain"
] | ### Feature request
The feature request I am proposing involves the implementation of hybrid search, specifically using the Reciprocal Rank Fusion (RRF) method, in LangChain through the integration of OpenSearch's vector store.
This would enable the combination of keyword and similarity search. Currently, LangChain doesn't appear to support this functionality, even though OpenSearch has had this capability since its 2.10 release. The goal is to allow LangChain to call search pipelines using OpenSearch's vector implementation, enabling OpenSearch to handle the complexities of hybrid search.
**Relevant Links**:
https://opensearch.org/docs/latest/query-dsl/compound/hybrid
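For context, the query shape the linked OpenSearch 2.10 docs describe looks roughly like the following — a lexical clause and a vector clause combined under a `hybrid` query, executed through a search pipeline whose `normalization-processor` normalizes and fuses the scores. The index name, field names, and `model_id` below are placeholders, and the exact DSL should be checked against the docs above:

```
POST /my-index/_search?search_pipeline=hybrid-search-pipeline
{
  "query": {
    "hybrid": {
      "queries": [
        { "match": { "text": { "query": "wild west" } } },
        {
          "neural": {
            "passage_embedding": {
              "query_text": "wild west",
              "model_id": "<embedding-model-id>",
              "k": 5
            }
          }
        }
      ]
    }
  }
}
```

LangChain's OpenSearch vector store would need a way to emit this `hybrid` query and the `search_pipeline` parameter instead of a plain k-NN query.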
### Motivation
The motivation behind this request stems from the current limitation in LangChain regarding hybrid search capabilities. As someone working on a search project currently, I find it frustrating that despite OpenSearch supporting hybrid search since version 2.10, LangChain has not yet integrated this feature.
### Your contribution
I would gladly help as long as I get guidance.
[
"langchain-ai",
"langchain"
] | ### System Info
Using langchain 0.0.337 python, FastAPI.
When I use openai up through 0.28.1 it works fine. Upgrading to 1.0.0 or above results in the following error (when I try to use ChatOpenAI from langchain.chat_models):
"ImportError: Could not import openai python package. Please install it with `pip install openai`."
Trying to follow this notebook to integrate vision preview model:
https://github.com/langchain-ai/langchain/blob/master/cookbook/openai_v1_cookbook.ipynb
Any thoughts on what I might try? Thanks!
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce:
1. install openai (1.0.0), langchain (0.0.337) & langchain-experimental (0.0.39)
2. in FastAPI route, import ChatOpenAI from langchain.chat_models
3. Use ChatOpenAI as usual (works fine with openai <= 0.28.1)
```python
llm = ChatOpenAI(
    temperature=temperature,
    streaming=True,
    verbose=True,
    model_name=nameCode,
    max_tokens=tokens,
    callbacks=[callback],
    openai_api_key=relevantAiKey,
)
```
### Expected behavior
I would expect to not get a "failed import" error when the package is clearly installed. | Upgrading to OpenAI Python 1.0+ = ImportError: Could not import openai python package. | https://api.github.com/repos/langchain-ai/langchain/issues/13567/comments | 4 | 2023-11-18T22:04:33Z | 2023-11-21T00:39:08Z | https://github.com/langchain-ai/langchain/issues/13567 | 2,000,596,810 | 13,567 |
[
"langchain-ai",
"langchain"
] | ### System Info
```
poetry show langchain
name : langchain
version : 0.0.259
description : Building applications with LLMs through composability
dependencies
- aiohttp >=3.8.3,<4.0.0
- async-timeout >=4.0.0,<5.0.0
- dataclasses-json >=0.5.7,<0.6.0
- langsmith >=0.0.11,<0.1.0
- numexpr >=2.8.4,<3.0.0
- numpy >=1,<2
- openapi-schema-pydantic >=1.2,<2.0
- pydantic >=1,<2
- PyYAML >=5.3
- requests >=2,<3
- SQLAlchemy >=1.4,<3
- tenacity >=8.1.0,<9.0.0
```
Python: v3.10.12
### Who can help?
@hwchase17 @agola11
With the current GPT-4 model, the invocation of `from_llm_and_api_docs` works as expected. However, when switching the model to the upcoming `gpt-4-1106-preview`, the function fails as the LLM, instead of returning the URL for the API call, returns a verbose response:
```
LLM response on_text: To generate the API URL for the user's question "basketball tip of the day", we need to include the `sport` parameter with the value "Basketball" since the user is asking about basketball. We also need to include the `event_start` parameter with today's date to get the tip of the day. Since the user is asking for a singular "tip", we should set the `limit` parameter to 1. The `order` parameter should be set to "popularity" if not specified, as per the documentation.
Given that today is 2023-11-18, the API URL would be:
http://<domain_name_hidden>/search/ai?date=2023-11-18&limit=1&order=popularity
```
The prompt should be refined or extra logic should be added to retrieve just the URL with the upcoming GPT-4 model.
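One possible mitigation until the prompt is tightened (a sketch, not existing langchain behavior) is to post-process the chain's intermediate completion and pull the first URL out of the verbose text before the request is made — `example.com` below stands in for the hidden domain:

```python
import re

# First http(s) URL in a blob of text; trailing punctuation/quotes trimmed below.
URL_RE = re.compile(r"https?://\S+")

def extract_api_url(llm_output: str) -> str:
    """Return the first URL found in a verbose completion."""
    match = URL_RE.search(llm_output)
    if match is None:
        raise ValueError("no URL found in LLM output")
    return match.group(0).rstrip(".,)'\"`")

# Usage sketch on a completion shaped like the one in this report:
verbose = (
    "Given that today is 2023-11-18, the API URL would be:\n\n"
    "http://example.com/search/ai?date=2023-11-18&limit=1&order=popularity"
)
api_url = extract_api_url(verbose)
```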
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Try to get the upcoming GPT-4 model to return just the URL of the API call.
```
ERROR:root:No connection adapters were found for 'To generate the API URL for the user\'s question "<question edited>", we need to include the `sport` parameter with the value "Basketball" since the user is asking about basketball. We also need to include the `event_start` parameter with today\'s date to get the tip of the day. The `order` parameter should be set to "popularity" if not specified, as per the documentation.\n\nGiven that today is 2023-11-18, the API URL would be:\n\nhttp://<domain_removed>/search/ai?date=2023-11-18&limit=1&order=popularity'
```
### Expected behavior
The LLM to return just the URL and for Langchain to not error out. | from_llm_and_api_docs fails on gpt-4-1106-preview | https://api.github.com/repos/langchain-ai/langchain/issues/13566/comments | 3 | 2023-11-18T22:02:27Z | 2024-02-26T16:06:42Z | https://github.com/langchain-ai/langchain/issues/13566 | 2,000,596,258 | 13,566 |
[
"langchain-ai",
"langchain"
] | ### System Info
Facing this error while executing the langchain code.
```
pydantic.error_wrappers.ValidationError: 1 validation error for RetrievalQA
separators
extra fields not permitted (type=value_error.extra)
```
Code for RetrievalQA:
```
def retrieval_qa_chain(llm, prompt, retriever):
    qa_chain = RetrievalQA.from_chain_type(
        llm=llm,
        chain_type="stuff",
        retriever=retriever,
        verbose=True,
        callbacks=[handler],
        chain_type_kwargs={"prompt": prompt},
        return_source_documents=True,
    )
    return qa_chain
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
def retrieval_qa_chain(llm, prompt, retriever):
    qa_chain = RetrievalQA.from_chain_type(
        llm=llm,
        chain_type="stuff",
        retriever=retriever,
        verbose=True,
        callbacks=[handler],
        chain_type_kwargs={"prompt": prompt},
        return_source_documents=True,
    )
    return qa_chain
```
### Expected behavior
Need a fix for the above error | pydantic.error_wrappers.ValidationError: 1 validation error for RetrievalQA separators extra fields not permitted (type=value_error.extra) | https://api.github.com/repos/langchain-ai/langchain/issues/13565/comments | 3 | 2023-11-18T21:06:02Z | 2024-02-24T16:05:13Z | https://github.com/langchain-ai/langchain/issues/13565 | 2,000,580,109 | 13,565 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain Version: 0.0.337
Python: 3.10
### Who can help?
@hwchase17
Note: I am facing this issue with Weaviate; when I use the Chroma vector store, it works fine.
I am trying to use "Weaviate Vector DB" with ParentDocumentRetriever and I am getting this error during the pipeline:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[13], line 1
----> 1 retriever.get_relevant_documents("realization")
File ~/miniconda3/envs/docs_qa/lib/python3.10/site-packages/langchain/schema/retriever.py:211, in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)
209 except Exception as e:
210 run_manager.on_retriever_error(e)
--> 211 raise e
212 else:
213 run_manager.on_retriever_end(
214 result,
215 **kwargs,
216 )
File ~/miniconda3/envs/docs_qa/lib/python3.10/site-packages/langchain/schema/retriever.py:204, in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)
202 _kwargs = kwargs if self._expects_other_args else {}
203 if self._new_arg_supported:
--> 204 result = self._get_relevant_documents(
205 query, run_manager=run_manager, **_kwargs
206 )
207 else:
208 result = self._get_relevant_documents(query, **_kwargs)
File ~/miniconda3/envs/docs_qa/lib/python3.10/site-packages/langchain/retrievers/multi_vector.py:36, in MultiVectorRetriever._get_relevant_documents(self, query, run_manager)
34 ids = []
35 for d in sub_docs:
---> 36 if d.metadata[self.id_key] not in ids:
37 ids.append(d.metadata[self.id_key])
38 docs = self.docstore.mget(ids)
KeyError: 'doc_id'
```
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
import weaviate
from langchain.vectorstores.weaviate import Weaviate
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.retrievers import ParentDocumentRetriever
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.storage import RedisStore
from langchain.schema import Document
from langchain.storage._lc_store import create_kv_docstore
from langchain.storage import InMemoryStore
from langchain.vectorstores import Chroma
import redis
import os
os.environ["OPENAI_API_KEY"] = ""
client = weaviate.Client(url="https://test-n5.weaviate.network")
embeddings = OpenAIEmbeddings()
vectorstore = Weaviate(client=client, embedding=embeddings, index_name="test1".capitalize(), text_key="text", by_text=False)
parent_splitter = RecursiveCharacterTextSplitter(chunk_size=50, chunk_overlap=1)
child_splitter = RecursiveCharacterTextSplitter(chunk_size=5, chunk_overlap=1)
store = InMemoryStore()
retriever = ParentDocumentRetriever(
    vectorstore=vectorstore,
    docstore=store,
    child_splitter=child_splitter,
    parent_splitter=parent_splitter,
    id_key="doc_id",
)

docs = [
    "The sun is shining brightly in the clear blue sky.",
    "Roses are red, violets are blue, sugar is sweet, and so are you.",
    "The quick brown fox jumps over the lazy dog.",
    "Life is like a camera. Focus on what's important, capture the good times, develop from the negatives, and if things don't work out, take another shot.",
    "A journey of a thousand miles begins with a single step.",
    "The only limit to our realization of tomorrow will be our doubts of today.",
    "Success is not final, failure is not fatal: It is the courage to continue that counts.",
    "Happiness can be found even in the darkest of times if one only remembers to turn on the light.",
]
docs = [Document(page_content=text) for en, text in enumerate(docs)]
retriever.add_documents(docs)
```
The output of the line below didn't contain the `id_key` needed for mapping the child back to its parent.
`vectorstore.similarity_search("realization", k=4)`
So, when I tried `retriever.get_relevant_documents("realization")` this returned the KeyError I mentioned.
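Stripped of the vector store, the loop that fails in `multi_vector.py` is equivalent to this (a distilled sketch based on the traceback above; plain dicts stand in for `Document` objects):

```python
def collect_parent_ids(sub_docs, id_key="doc_id"):
    """Mirror of the id-collection loop in MultiVectorRetriever."""
    ids = []
    for d in sub_docs:
        # Raises KeyError when the store drops the metadata, as Weaviate does here.
        if d["metadata"][id_key] not in ids:
            ids.append(d["metadata"][id_key])
    return ids
```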
### Expected behavior
The output of `vectorstore.similarity_search("realization", k=2)` should have been:
```
[Document(page_content='real', metadata={"doc_id": "fdsfsdfsdfsdfsd"}),
 Document(page_content='real', metadata={"doc_id": "rewrwetet"})]
```
but the output I got was:
```
[Document(page_content='real'),
 Document(page_content='real')]
```
| Bug: Weaviate raise doc_id error using with ParentDocumentRetriever | https://api.github.com/repos/langchain-ai/langchain/issues/13563/comments | 2 | 2023-11-18T18:09:56Z | 2023-11-18T18:33:42Z | https://github.com/langchain-ai/langchain/issues/13563 | 2,000,522,960 | 13,563 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Hi, the new Cohere embedding models are now available on Amazon Bedrock. How can we use them for their reranking capability (instead of just embedding via BedrockEmbedding class)
### Motivation
These models perform well for reranking | BedrockRerank using newly available Cohere embedding model | https://api.github.com/repos/langchain-ai/langchain/issues/13562/comments | 10 | 2023-11-18T17:51:30Z | 2024-05-25T20:47:11Z | https://github.com/langchain-ai/langchain/issues/13562 | 2,000,516,549 | 13,562 |
[
"langchain-ai",
"langchain"
] | ### System Info
Hi there,
I have a LangChain app at https://huggingface.co/spaces/bstraehle/openai-llm-rag/blob/main/app.py. Using the latest release 0.0.337 produces the error below. Pinning the library to release 0.0.336 works as expected.
:blue_heart: LangChain, thanks!
Bernd
---
```
Traceback (most recent call last):
  File "/home/user/app/app.py", line 129, in invoke
    db = document_retrieval_mongodb(llm, prompt)
  File "/home/user/app/app.py", line 91, in document_retrieval_mongodb
    db = MongoDBAtlasVectorSearch.from_connection_string(MONGODB_URI,
  File "/home/user/.local/lib/python3.10/site-packages/langchain/vectorstores/mongodb_atlas.py", line 109, in from_connection_string
    raise ImportError(
ImportError: Could not import pymongo, please install it with `pip install pymongo`.
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce the behavior:
1. In file https://huggingface.co/spaces/bstraehle/openai-llm-rag/blob/main/requirements.txt, unpin the langchain library (or pin it to release 0.0.337).
2. Use the app at https://huggingface.co/spaces/bstraehle/openai-llm-rag with MongoDB selected to invoke `MongoDBAtlasVectorSearch.from_connection_string`, which produces the error.
### Expected behavior
When using release 0.0.337 `MongoDBAtlasVectorSearch.from_connection_string`, error "ImportError: Could not import pymongo, please install it with `pip install pymongo`." should not happen. | Release 0.0.337 breaks MongoDBAtlasVectorSearch.from_connection_string? | https://api.github.com/repos/langchain-ai/langchain/issues/13560/comments | 7 | 2023-11-18T16:43:18Z | 2023-11-28T14:54:05Z | https://github.com/langchain-ai/langchain/issues/13560 | 2,000,493,292 | 13,560 |
[
"langchain-ai",
"langchain"
] | I'm building an embedded chatbot using LangChain and OpenAI. It's working fine, but responses take around 15-25 seconds, so I used the time library to find out which line is taking this long:
```python
import os
import sys
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import TextLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.indexes import VectorstoreIndexCreator
from langchain.indexes.vectorstore import VectorStoreIndexWrapper
from langchain.vectorstores import Chroma
from cachetools import TTLCache
import time

import constants

os.environ["OPENAI_API_KEY"] = constants.APIKEY

cache = TTLCache(maxsize=100, ttl=3600)  # Example: Cache up to 100 items for 1 hour
PERSIST = False

template_prompt = "If the user greets you, greet back. If there is a link in the response return it as a clickable link as if it is an a tag '<a>'. If you don't know the answer, you can say, 'I don't have the information you need, I recommend contacting our support team for assistance.' Here is the user prompt: 'On the Hawsabah platform"


def initialize_chatbot():
    query = None
    if len(sys.argv) > 1:
        query = sys.argv[1]

    if PERSIST and os.path.exists("persist"):
        print("Reusing index...\n")
        vectorstore = Chroma(persist_directory="persist", embedding_function=OpenAIEmbeddings())
        index = VectorStoreIndexWrapper(vectorstore=vectorstore)
    else:
        loader = TextLoader("data/data.txt")
        if PERSIST:
            index = VectorstoreIndexCreator(vectorstore_kwargs={"persist_directory": "persist"}).from_loaders([loader])
        else:
            index = VectorstoreIndexCreator().from_loaders([loader])

    chat_chain = ConversationalRetrievalChain.from_llm(
        llm=ChatOpenAI(model="gpt-3.5-turbo"),
        retriever=index.vectorstore.as_retriever(search_kwargs={"k": 1}),
    )
    chat_history = []
    return chat_chain, chat_history


MAX_CONVERSATION_HISTORY = 3  # Set the maximum number of interactions to keep in the buffer


def chatbot_response(user_prompt, chat_chain, chat_history):
    # Check if the response is cached
    cached_response = cache.get(user_prompt)
    if cached_response:
        return cached_response

    # Check if the user's query is a greeting or unrelated
    is_greeting = check_for_greeting(user_prompt)

    # Conditionally clear the conversation history
    if is_greeting:
        chat_history.clear()

    query_with_template = f"{template_prompt} {user_prompt}'"
    s = time.time()
    result = chat_chain({"question": query_with_template, "chat_history": chat_history})
    e = time.time()

    # Append the new interaction and limit the conversation buffer to the last MAX_CONVERSATION_HISTORY interactions
    chat_history.append((user_prompt, result['answer']))
    if len(chat_history) > MAX_CONVERSATION_HISTORY:
        chat_history.pop(0)  # Remove the oldest interaction

    response = result['answer']
    # Cache the response for future use
    cache[user_prompt] = response
    print("Time taken by chatbot_response:", (e - s) * 1000, "ms")
    return response
```
The line `result = chat_chain({"question": query_with_template, "chat_history": chat_history})` was the one taking this long. I tried to figure out how to fix it but couldn't. I also tried to implement word streaming to make it feel faster, but that only worked for the davinci model. Is there a way or method to make responses faster? | Response taking way to long | https://api.github.com/repos/langchain-ai/langchain/issues/13558/comments | 4 | 2023-11-18T15:01:10Z | 2024-02-25T16:05:22Z | https://github.com/langchain-ai/langchain/issues/13558 | 2,000,456,203 | 13,558 |
[
"langchain-ai",
"langchain"
] | ### System Info
Bumped into HTTPError when using DuckDuckGo search wrapper in an agent, currently using `langchain==0.0.336`.
Here's a snippet of the traceback:
```
File "/path/to/venv/lib/python3.10/site-packages/langchain/utilities/duckduckgo_search.py", line 64, in run
snippets = self.get_snippets(query)
File "/path/to/venv/lib/python3.10/site-packages/langchain/utilities/duckduckgo_search.py", line 55, in get_snippets
for i, res in enumerate(results, 1):
File "/path/to/venv/lib/python3.10/site-packages/duckduckgo_search/duckduckgo_search.py", line 96, in text
for i, result in enumerate(results, start=1):
File "/path/to/venv/lib/python3.10/site-packages/duckduckgo_search/duckduckgo_search.py", line 148, in _text_api
resp = self._get_url("GET", "https://links.duckduckgo.com/d.js", params=payload)
File "/path/to/venv/lib/python3.10/site-packages/duckduckgo_search/duckduckgo_search.py", line 55, in _get_url
raise ex
File "/path/to/venv/lib/python3.10/site-packages/duckduckgo_search/duckduckgo_search.py", line 48, in _get_url
raise httpx._exceptions.HTTPError("")
httpx.HTTPError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/path/to/src/single_host.py", line 179, in <module>
response = chain({"topic": "Why did Sam Altman got fired by OpenAI.",
File "/path/to/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 310, in __call__
raise e
File "/path/to/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 306, in __call__
else self._call(inputs)
File "/path/to/src/single_host.py", line 163, in _call
script = script_chain.run({"topic": inputs["topic"], "focuses": inputs["focuses"], "keypoints": keypoints})
File "/path/to/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 505, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
File "/path/to/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 310, in __call__
raise e
File "/path/to/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 306, in __call__
else self._call(inputs)
File "/path/to/src/single_host.py", line 117, in _call
information = agent.run(background_info_search_formatted)
File "/path/to/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 505, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
File "/path/to/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 310, in __call__
raise e
File "/path/to/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 304, in __call__
self._call(inputs, run_manager=run_manager)
File "/path/to/venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 1245, in _call
next_step_output = self._take_next_step(
File "/path/to/venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 1095, in _take_next_step
observation = tool.run(
File "/path/to/venv/lib/python3.10/site-packages/langchain/tools/base.py", line 344, in run
raise e
File "/path/to/venv/lib/python3.10/site-packages/langchain/tools/base.py", line 337, in run
self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
File "/path/to/venv/lib/python3.10/site-packages/langchain/tools/base.py", line 510, in _run
self.func(
File "/path/to/venv/lib/python3.10/site-packages/langchain/tools/base.py", line 344, in run
raise e
File "/path/to/venv/lib/python3.10/site-packages/langchain/tools/base.py", line 337, in run
self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
File "/path/to/venv/lib/python3.10/site-packages/langchain/tools/ddg_search/tool.py", line 36, in _run
return self.api_wrapper.run(query)
File "/path/to/venv/lib/python3.10/site-packages/langchain/utilities/duckduckgo_search.py", line 67, in run
raise ToolException("DuckDuckGo Search encountered HTTPError.")
```
I tried to add error handling in the `run()` method in `langchain/utilities/duckduckgo_search.py`, something like this:
```
def run(self, query: str) -> str:
    try:
        snippets = self.get_snippets(query)
        return " ".join(snippets)
    except httpx._exceptions.HTTPError as e:
        raise ToolException("DuckDuckGo Search encountered HTTPError.")
```
I have also added a `handle_tool_error` function, copied from the LangChain [documentation](https://python.langchain.com/docs/modules/agents/tools/custom_tools):
```
def _handle_error(error: ToolException) -> str:
    return (
        "The following errors occurred during tool execution:"
        + error.args[0]
        + "Please try another tool."
    )
```
However, these methods do not seem to stop the failure and still cause the error shown in the first code block above. Am I implementing this incorrectly, or should there be some other mechanism to handle the error that occurred?
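The behaviour I would like (retry, then degrade gracefully) can be sketched without any LangChain machinery like this (the function and its defaults are my own illustration, not an existing API):

```python
import time

def run_with_retries(tool_fn, query, retries=3, delay=1.0):
    """Call tool_fn(query), retrying on failure with linear backoff.

    Returns an explanatory string instead of raising, so an agent
    could treat the failure as an observation and move on.
    """
    last_error = None
    for attempt in range(retries):
        try:
            return tool_fn(query)
        except Exception as exc:  # in practice, narrow this to httpx.HTTPError
            last_error = exc
            time.sleep(delay * attempt)
    return f"Search failed after {retries} attempts ({last_error!r}). Please try another tool."
```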
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Adding `handle_tool_errors` and passing the `_handle_error` function into it.
```
news_tool = Tool.from_function(
    name="News Search",
    func=news_duckduckgo.run,
    description="News search to help you look up latest news, which help you to understand latest current affair, and being up-to-date.",
    handle_tool_errors=_handle_error,
)
```
2. That does not seem to work, so I tried to change the DuckDuckGo wrapper, as described above.
3. HTTPError still leads to an abrupt stop of agent actions.
### Expected behavior
Expecting a proper error handling method, if tool fails, Agent moves on, or try n time before moving on to next step. | Adding DuckDuckGo search HTTPError handling | https://api.github.com/repos/langchain-ai/langchain/issues/13556/comments | 8 | 2023-11-18T13:58:54Z | 2024-02-24T16:05:22Z | https://github.com/langchain-ai/langchain/issues/13556 | 2,000,431,479 | 13,556 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
How to update the template and packages of an app created from a template?
I checked:
https://github.com/langchain-ai/langchain/tree/master/templates
and a couple of templates' README.mds, but this info is missing and it's not obvious for us citizen devs.
I supposed it should be done via langchain-cli, but there's no such option.
So pls. provide a solution and add it to docs.
### Idea or request for content:
How to update the template and packages of an app created from a template? | DOC: add info about how to update the template and the packages of an app created from a template | https://api.github.com/repos/langchain-ai/langchain/issues/13551/comments | 5 | 2023-11-18T10:44:59Z | 2024-02-24T16:05:27Z | https://github.com/langchain-ai/langchain/issues/13551 | 2,000,367,453 | 13,551 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain: 0.0.336
Python: 3.11.6
OS: Microsoft Windows [Version 10.0.19045.3693]
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Should be very easy to reproduce. Just enable streaming and use function call in chat. The `function_call` info that is supposed to be in `additional_kwargs` is lost. I found this issue because I wanted to use the 'function call' feature.
This is the debug output from my console. As you can see, the output becomes an `AIMessage` with empty `content`, and `additional_kwargs` is empty.
```
[llm/end] [1:chain:AgentExecutor > 2:llm:QianfanChatEndpoint] [2.39s] Exiting LLM run with output:
{
  "generations": [
    [
      {
        "text": "",
        "generation_info": {
          "finish_reason": "stop"
        },
        "type": "ChatGeneration",
        "message": {
          "lc": 1,
          "type": "constructor",
          "id": [
            "langchain",
            "schema",
            "messages",
            "AIMessage"
          ],
          "kwargs": {
            "content": "",
            "additional_kwargs": {}
          }
        }
      }
    ]
  ],
  "llm_output": {
    "token_usage": {},
    "model_name": "ERNIE-Bot"
  },
  "run": null
}
[chain/end] [1:chain:AgentExecutor] [2.40s] Exiting Chain run with output:
{
  "output": ""
}
```
A quick-and-dirty hack in `QianfanChatEndpoint` can fix the issue. Please read the following code related to `first_additional_kwargs` (which is added by me).
```python
async def _agenerate(
    self,
    messages: List[BaseMessage],
    stop: Optional[List[str]] = None,
    run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
    **kwargs: Any,
) -> ChatResult:
    if self.streaming:
        completion = ""
        token_usage = {}
        first_additional_kwargs = None
        async for chunk in self._astream(messages, stop, run_manager, **kwargs):
            if first_additional_kwargs is None:
                first_additional_kwargs = chunk.message.additional_kwargs
            completion += chunk.text
        lc_msg = AIMessage(content=completion, additional_kwargs=first_additional_kwargs or {})
        gen = ChatGeneration(
            message=lc_msg,
            generation_info=dict(finish_reason="stop"),
        )
        return ChatResult(
            generations=[gen],
            llm_output={"token_usage": {}, "model_name": self.model},
        )
    params = self._convert_prompt_msg_params(messages, **kwargs)
    response_payload = await self.client.ado(**params)
    lc_msg = _convert_dict_to_message(response_payload)
    generations = []
    gen = ChatGeneration(
        message=lc_msg,
        generation_info={
            "finish_reason": "stop",
            **response_payload.get("body", {}),
        },
    )
    generations.append(gen)
    token_usage = response_payload.get("usage", {})
    llm_output = {"token_usage": token_usage, "model_name": self.model}
    return ChatResult(generations=generations, llm_output=llm_output)
```
Similarly `_generate` probably contains the same bug.
The following is the new debug output in console. As you can see, 'function call' now works. `additional_kwargs` also contains non-empty `usage`. But `token_usage` in `llm_output` is still empty.
```
[llm/end] [1:chain:AgentExecutor > 2:llm:QianfanChatEndpointHacked] [2.21s] Exiting LLM run with output:
{
  "generations": [
    [
      {
        "text": "",
        "generation_info": {
          "finish_reason": "stop"
        },
        "type": "ChatGeneration",
        "message": {
          "lc": 1,
          "type": "constructor",
          "id": [
            "langchain",
            "schema",
            "messages",
            "AIMessage"
          ],
          "kwargs": {
            "content": "",
            "additional_kwargs": {
              "id": "as-zh6tasbjyb",
              "object": "chat.completion",
              "created": 1700274407,
              "sentence_id": 0,
              "is_end": true,
              "is_truncated": false,
              "result": "",
              "need_clear_history": false,
              "function_call": {
                "name": "GetCurrentTime",
                "arguments": "{}"
              },
              "search_info": {
                "is_beset": 0,
                "rewrite_query": "",
                "search_results": null
              },
              "finish_reason": "function_call",
              "usage": {
                "prompt_tokens": 121,
                "completion_tokens": 0,
                "total_tokens": 121
              }
            }
          }
        }
      }
    ]
  ],
  "llm_output": {
    "token_usage": {},
    "model_name": "ERNIE-Bot"
  },
  "run": null
}
```
### Expected behavior
`additional_kwargs` should not be empty. | AIMessage in output of Qianfan with streaming enabled may lose info about 'additional_kwargs', which causes 'function_call', 'token_usage' info lost. | https://api.github.com/repos/langchain-ai/langchain/issues/13548/comments | 6 | 2023-11-18T03:19:52Z | 2024-02-25T16:05:27Z | https://github.com/langchain-ai/langchain/issues/13548 | 2,000,187,192 | 13,548 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Here is my code:
"""For basic init and call"""
import os
import qianfan
from langchain.chat_models import QianfanChatEndpoint
from langchain.chat_models.base import HumanMessage
os.environ["QIANFAN_AK"] = "myak"
os.environ["QIANFAN_SK"] = "mysk"
chat = QianfanChatEndpoint(
streaming=True,
)
res = chat.stream([HumanMessage(content="给我一篇100字的睡前故事")], streaming=True)
for r in res:
print("chat resp:", r)
And after it prints two sentences, it returns an error. The full error message is:
```
Traceback (most recent call last):
  File "d:\work\qianfan_test.py", line 13, in <module>
    for r in res:
  File "C:\Users\a1383\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\langchain\chat_models\base.py", line 220, in stream
    raise e
  File "C:\Users\a1383\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\langchain\chat_models\base.py", line 216, in stream
    generation += chunk
  File "C:\Users\a1383\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\langchain\schema\output.py", line 94, in __add__
    message=self.message + other.message,
  File "C:\Users\a1383\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\langchain\schema\messages.py", line 225, in __add__
    additional_kwargs=self._merge_kwargs_dict(
  File "C:\Users\a1383\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\langchain\schema\messages.py", line 138, in _merge_kwargs_dict
    raise ValueError(
ValueError: Additional kwargs key created already exists in this message.
```
I am only following the official LangChain documentation: https://python.langchain.com/docs/integrations/chat/baidu_qianfan_endpoint
And it is not working. What have I done wrong?
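For context, the merge that raises is roughly equivalent to this (a simplified sketch of `_merge_kwargs_dict` from the traceback; the real implementation also merges nested dicts):

```python
def merge_additional_kwargs(left: dict, right: dict) -> dict:
    """String values are concatenated; any other duplicated key raises,
    which is what happens when every Qianfan chunk carries `created`."""
    merged = dict(left)
    for key, value in right.items():
        if key not in merged:
            merged[key] = value
        elif isinstance(merged[key], str) and isinstance(value, str):
            merged[key] += value
        else:
            raise ValueError(f"Additional kwargs key {key} already exists in this message.")
    return merged
```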
### Suggestion:
_No response_ | Issue: When using Qianfan chat model and enabling streaming, get ValueError | https://api.github.com/repos/langchain-ai/langchain/issues/13546/comments | 4 | 2023-11-18T02:49:13Z | 2024-03-13T19:55:40Z | https://github.com/langchain-ai/langchain/issues/13546 | 2,000,175,679 | 13,546 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain version: 0.0.337
Python version: 3.10.13
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
`db = Chroma.from_documents(docs, AzureOpenAIEmbeddings())`
### Expected behavior
This worked on previous versions of LangChain using `OpenAIEmbeddings()`, but now I get this error:
```
BadRequestError: Error code: 400 - {'error': {'message': 'Too many inputs. The max number of inputs is 16. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions.', 'type': 'invalid_request_error', 'param': None, 'code': None}}
```
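For context, Azure's embedding endpoint caps a single request at 16 inputs, so the client has to batch documents; conceptually something like this (a generic sketch of the batching I would expect, not actual LangChain code):

```python
def batched(items, batch_size=16):
    """Yield successive batches of at most batch_size items (Azure's cap)."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

def embed_all(texts, embed_batch):
    """embed_batch is a callable that embeds up to 16 texts per call."""
    vectors = []
    for batch in batched(texts, 16):
        vectors.extend(embed_batch(batch))
    return vectors
```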
| New update broke embeddings models | https://api.github.com/repos/langchain-ai/langchain/issues/13539/comments | 3 | 2023-11-17T21:47:33Z | 2023-11-18T20:07:42Z | https://github.com/langchain-ai/langchain/issues/13539 | 1,999,979,607 | 13,539 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Why does the code below complain that `extra_instructions` is a missing key, even though it's clearly included in `input_variables=["context", "question", "extra_instructions"]`?
Any help is greatly appreciated.
```
vectorstore = Chroma(
    collection_name=collection_name,
    persist_directory=chroma_db_directory,
    embedding_function=embedding,
)

prompt_template = """
{extra_instructions}
{context}
{question}
Continuation:
"""

PROMPT = PromptTemplate(
    template=prompt_template,
    input_variables=["context", "question", "extra_instructions"],
)

qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=vectorstore.as_retriever(
        search_kwargs={"k": 1}
    ),
    chain_type_kwargs={"verbose": True, "prompt": PROMPT},
)
```
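To illustrate what I expected "how prompt args are treated" to mean: a two-stage fill, where a fixed variable like `extra_instructions` is bound up front and the chain supplies `context` and `question` at run time (a plain-Python sketch of the idea, not LangChain code):

```python
template = "{extra_instructions}\n{context}\n{question}\nContinuation:\n"

def partial_fill(template: str, **fixed) -> str:
    """Bind some variables now; leave the rest as {placeholders}."""
    class _KeepMissing(dict):
        def __missing__(self, key):
            return "{" + key + "}"
    return template.format_map(_KeepMissing(fixed))

bound = partial_fill(template, extra_instructions="Answer in one sentence.")
final = bound.format(context="Some retrieved text.", question="What is X?")
```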
### Suggestion:
_No response_ | Issue: Missing some input keys in langchain even when it's present - unclear how prompt args are treated | https://api.github.com/repos/langchain-ai/langchain/issues/13536/comments | 3 | 2023-11-17T21:00:10Z | 2024-02-23T16:05:27Z | https://github.com/langchain-ai/langchain/issues/13536 | 1,999,921,377 | 13,536 |
[
"langchain-ai",
"langchain"
] | ### System Info
python==3.10.13
langchain==0.0.336
pydantic==1.10.13
### Who can help?
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Noticed an error with Pydantic validation when the schema contains optional lists. Here are the steps to reproduce the issue and the error that I am getting.
1. A basic extraction scheme is defined using Pydantic.
```python
from typing import List, Optional

from pydantic import BaseModel, Field

class Properties(BaseModel):
    person_names: Optional[List[str]] = Field([], description="The names of the people")
    person_heights: Optional[List[int]] = Field([], description="The heights of the people")
    person_hair_colors: Optional[List[str]] = Field([], description="The hair colors of the people")
```
2. Extraction chain is created to extract the defined fields from a document.
```python
from langchain.chat_models import ChatOpenAI
from langchain.chains import create_extraction_chain_pydantic
llm = ChatOpenAI(
    temperature=0, model="gpt-3.5-turbo", request_timeout=20, max_retries=1
)
chain = create_extraction_chain_pydantic(pydantic_schema=Properties, llm=llm)
```
3. When we run the extraction on a document, sometimes the OpenAI function call does not return the `info` field as a `list` but instead as a `dict`. That creates a validation error with Pydantic, even if the extracted fields are perfectly given in the returned dictionary.
```python
inp = """Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. Claudia is a brunette and Alex is blonde."""
response = chain.run(inp)
```
4. The error and the traceback are as follows
```python
File "pydantic/main.py", line 549, in pydantic.main.BaseModel.parse_raw
File "pydantic/main.py", line 526, in pydantic.main.BaseModel.parse_obj
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for PydanticSchema
info
value is not a valid list (type=type_error.list)
```
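As a stopgap on my side, the dict-vs-list mismatch can be normalised before the arguments reach the Pydantic parser (a plain-Python sketch; `coerce_info_to_list` is my own helper, not a LangChain API):

```python
def coerce_info_to_list(arguments: dict) -> dict:
    """If the function-call arguments carry `info` as a single dict,
    wrap it in a one-element list so `List[Properties]` validates."""
    info = arguments.get("info")
    if isinstance(info, dict):
        return {**arguments, "info": [info]}
    return arguments
```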
### Expected behavior
We can make the Pydantic validation pass by maybe simply casting the `info` field into a list if it is somehow returned as a dictionary by the OpenAI function call. | OpenAI Functions Extraction Chain not returning a list | https://api.github.com/repos/langchain-ai/langchain/issues/13533/comments | 6 | 2023-11-17T20:18:52Z | 2024-04-15T16:42:30Z | https://github.com/langchain-ai/langchain/issues/13533 | 1,999,863,087 | 13,533 |
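The cast described above, applied to the function-call arguments before they reach `parse_raw`/`parse_obj`, could look like this (a standalone sketch; `coerce_info_to_list` is a hypothetical helper, not langchain code):

```python
def coerce_info_to_list(payload):
    """Normalize the function-call arguments so `info` is always a list."""
    info = payload.get("info", [])
    if isinstance(info, dict):
        # The model returned a single object; wrap it so validation passes.
        info = [info]
    return {**payload, "info": info}

raw = {"info": {"person_names": ["Alex", "Claudia"], "person_heights": [60, 72]}}
fixed = coerce_info_to_list(raw)
print(type(fixed["info"]))  # <class 'list'>
```

Applying this normalization just before the Pydantic parse would let the extraction succeed whenever the returned fields are otherwise valid.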
[
"langchain-ai",
"langchain"
] | ### System Info
- Python 3.11.5
- google-ai-generativelanguage==0.3.3
- langchain==0.0.336
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce the issue:
1. Create a simple GooglePalm LLM
2. Run a simple chain with a medical-related prompt, e.g. `Tell me reasons why I am having symptoms of cold`
Error:
```
return self.parse(result[0].text)
~~~~~~^^^
IndexError: list index out of range
```
### Expected behavior
GooglePalm returns a proper error in the completion object:
```
Completion(candidates=[],
result=None,
filters=[{'reason': <BlockedReason.SAFETY: 1>}],
safety_feedback=[{'rating': {'category': <HarmCategory.HARM_CATEGORY_MEDICAL: 5>,
'probability': <HarmProbability.HIGH: 4>},
'setting': {'category': <HarmCategory.HARM_CATEGORY_MEDICAL: 5>,
'threshold': <HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE: 2>}}])
```
If this is relayed to the user, it will be more useful. | Output parser fails with index out of range error but doesn't give actual fail reason in case of GooglePalm | https://api.github.com/repos/langchain-ai/langchain/issues/13532/comments | 3 | 2023-11-17T20:02:52Z | 2024-02-23T16:05:32Z | https://github.com/langchain-ai/langchain/issues/13532 | 1999837560 | 13532
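A guard before indexing would turn the bare `IndexError` into the informative message asked for above. A sketch with a hypothetical helper (the real completion is a PaLM object, modeled here as a dict):

```python
def first_candidate_text(completion):
    """Return the first candidate's text, or raise with the block reason."""
    candidates = completion.get("candidates") or []
    if not candidates:
        reasons = [f.get("reason", "unknown") for f in completion.get("filters", [])]
        raise ValueError(f"PaLM returned no candidates; blocked for: {reasons}")
    return candidates[0]["text"]

blocked = {"candidates": [], "filters": [{"reason": "SAFETY"}]}
try:
    first_candidate_text(blocked)
except ValueError as err:
    message = str(err)
print(message)  # PaLM returned no candidates; blocked for: ['SAFETY']
```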
[
"langchain-ai",
"langchain"
] | ### Feature request
It would be very nice to be able to build something similar to RAG, but for checking the quality of an assumption against a knowledge base.
### Motivation
Currently, RAG enables semantic search but does not help when it comes to evaluating the quality of a user input against your vector db.
The point is not to fact-check the news in a newspaper (that must actually be very hard...), but to evaluate the truth of an arbitrary prompt.
### Your contribution
- prompting
- PR
- doc
- discussions
| RAG but for fact checking | https://api.github.com/repos/langchain-ai/langchain/issues/13526/comments | 5 | 2023-11-17T17:31:41Z | 2024-02-24T16:05:37Z | https://github.com/langchain-ai/langchain/issues/13526 | 1,999,617,934 | 13,526 |
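A first prototype could be plain prompting over retrieved chunks: retrieve as in RAG, then ask the model to grade the claim against the context. Only the prompt assembly is sketched here (no LLM call; the chunk texts are invented):

```python
def build_fact_check_prompt(claim, chunks):
    """Assemble a grading prompt: is the claim supported by the knowledge base?"""
    context = "\n---\n".join(chunks)
    return (
        "Using only the context below, label the claim as SUPPORTED, "
        "CONTRADICTED, or NOT_ENOUGH_INFO, then justify briefly.\n"
        f"Context:\n{context}\n"
        f"Claim: {claim}\n"
        "Label:"
    )

prompt = build_fact_check_prompt(
    "The Eiffel Tower is in Berlin.",
    ["The Eiffel Tower stands on the Champ de Mars in Paris."],
)
```

The retrieval step and the label parsing are where langchain components would slot in.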
[
"langchain-ai",
"langchain"
] | ### System Info
langchain == 0.0.336
python == 3.9.6
### Who can help?
@hwchase17 @eyu
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Code:
video_id = YoutubeLoader.extract_video_id("https://www.youtube.com/watch?v=DWUzWvv3xz4")
loader = YoutubeLoader(video_id, add_video_info=True,
language=["en","hi","ru"],
translation = "en")
loader.load()
Output:
[Document(page_content='में रुके शोध सजावट हाउ टू लेट सफल टीम मेंबर्स नो दैट वास नॉट फीयर कॉन्टेक्ट्स रिस्पोंड फॉर एग्जांपल एक्टिव एंड स्ट्रैंथ एंड assistant 3000 1000 कॉन्टैक्ट एंड वे नेवर रिग्रेट थे रिस्पांस यू वांट यू ऑटोमेटेकली गेट नोटिफिकेशन टुडे ओके खुफिया इज द नोटिफिकेशन पाइथन इन नोटिफिकेशन सीनेटर नोटिफिकेशन प्यार सेलिब्रेट विन रिस्पोंस वर्षीय बेटे ईमेल एस वेल एजुकेटेड व्हाट्सएप नोटिफिकेशन इफ यू वांट एनी फीयर अदर टीम मेंबर टू ऑल्सो गेट नोटिफिकेशन व्हेन यू व्हेन यू रिसीवर रिस्पांस फ्रॉम अननोन फीयर कांटेक्ट सुधर रिस्पांस सिस्टम लिफ्ट से ज़ू और यह टीम मेंबर कैन रिस्पोंड इम्युनिटी द न्यू कैंट व ईमेल ऐड्रेस आफ थे पर्सन वरीय स्वीडन से लेफ्ट से अब दूर एक पाठ 7 टारगेट्स ऑयल सुबह रायपुर दिस ईमेल एड्रेस नो अब्दुल विल गेट एनी नोटिफिकेशन फ्रॉम assistant साक्षी व्हेनेवर एनी बडी दिस पॉइंट स्ट्रेन ईमेल अब्दुल विल अलसो गेट डर्टी में अगर सब्जेक्ट लाइन रिस्पांस रिसीवड ए [संगीत]', metadata={'source': 'DWUzWvv3xz4', 'title': 'How to Notify your team member Immediately when a Lead Responds', 'description': 'Unknown', 'view_count': 56, 'thumbnail_url': 'https://i.ytimg.com/vi/DWUzWvv3xz4/hq720.jpg', 'publish_date': '2021-10-04 00:00:00', 'length': 87, 'author': '7Targets AI Sales Assistant'})]
### Expected behavior
The output video transcript should be in English. | YoutubeLoader translation not working | https://api.github.com/repos/langchain-ai/langchain/issues/13523/comments | 6 | 2023-11-17T16:32:37Z | 2024-02-19T18:30:42Z | https://github.com/langchain-ai/langchain/issues/13523 | 1,999,507,481 | 13,523 |
[
"langchain-ai",
"langchain"
] | Hello,
### System Info
Langchain Version: 0.0.336
OS: Windows
### Who can help?
No response
### Information
- [x] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [x] LLMs/Chat Models
- [x] Prompts / Prompt Templates / Prompt Selectors
- [x] Vector Stores / Retrievers
- [x] Chains
- [x] SQL Database
### Reproduction
Related to structured data.
I have predefined SQL and variable information. The SQL is quite complicated, joining multiple tables with abbreviated column names, which is a common practical situation. Is there a way to output the SQL text and run it, with the constraint that the user must fully fill in the variable values?
Example of the predefined SQL information:
| No | Schema | SQL Text | Intent | Condition | Example |
| -- | -- | -- | -- | -- | -- |
| 1 | m_data | Select sum(s.qty) from shipment_info s, product_info p where 1=1 and s.prod_id = p.prod_id and p.prod_type = #prod_type and p.prod_name = #prod_name | Total quantity of goods exported during the day | #prod_type, #prod_name | I want to calculate the total number of model AAA(#prod_name) phones(#prod_type) shipped during the day |
| 100 | … | … | … | … | … |
### Expected behavior
I expect the SQL text to be output once all variables are fully supplied, and the SQL to then be executed.
| Extract SQL information and execute | https://api.github.com/repos/langchain-ai/langchain/issues/13519/comments | 7 | 2023-11-17T15:27:24Z | 2024-02-24T16:05:42Z | https://github.com/langchain-ai/langchain/issues/13519 | 1,999,384,822 | 13,519 |
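The constraint above (refuse to run until every `#variable` has a value) can be enforced before execution. A standalone sketch, not tied to any chain, using the row from the table:

```python
import re

def render_sql(template, values):
    """Fill #variables into a predefined SQL template, or refuse if any are missing."""
    needed = set(re.findall(r"#(\w+)", template))
    missing = needed - set(values)
    if missing:
        raise ValueError(f"missing variables: {sorted(missing)}")
    # Naive quoting for illustration only; real code should use bound parameters.
    return re.sub(r"#(\w+)", lambda m: repr(values[m.group(1)]), template)

template = ("Select sum(s.qty) from shipment_info s, product_info p "
            "where s.prod_id = p.prod_id and p.prod_type = #prod_type "
            "and p.prod_name = #prod_name")
print(render_sql(template, {"prod_type": "phone", "prod_name": "AAA"}))
```

The LLM's job then reduces to picking the right template and extracting the variable values from the user's question.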
[
"langchain-ai",
"langchain"
] | ### System Info
aiohttp==3.8.4
aiosignal==1.3.1
altair==5.0.1
annotated-types==0.6.0
anyio==3.7.1
asgiref==3.7.2
async-timeout==4.0.2
attrs==23.1.0
backoff==2.2.1
blinker==1.6.2
cachetools==5.3.1
certifi==2023.5.7
cffi==1.15.1
chardet==5.1.0
charset-normalizer==3.2.0
click==8.1.5
clickhouse-connect==0.6.6
coloredlogs==15.0.1
cryptography==41.0.2
dataclasses-json==0.5.9
decorator==5.1.1
deprecation==2.1.0
dnspython==2.4.0
duckdb==0.8.1
ecdsa==0.18.0
et-xmlfile==1.1.0
fastapi==0.104.1
fastapi-pagination==0.12.12
filetype==1.2.0
flatbuffers==23.5.26
frozenlist==1.4.0
gitdb==4.0.10
GitPython==3.1.32
greenlet==2.0.2
h11==0.14.0
hnswlib==0.7.0
httpcore==0.17.3
httptools==0.6.0
humanfriendly==10.0
idna==3.4
importlib-metadata==6.8.0
install==1.3.5
itsdangerous==2.1.2
Jinja2==3.1.2
joblib==1.3.1
jsonpatch==1.33
jsonpointer==2.4
jsonschema==4.18.3
jsonschema-specifications==2023.6.1
langchain==0.0.335
langsmith==0.0.64
loguru==0.7.0
lxml==4.9.3
lz4==4.3.2
Markdown==3.4.3
markdown-it-py==3.0.0
MarkupSafe==2.1.3
marshmallow==3.19.0
marshmallow-enum==1.5.1
mdurl==0.1.2
monotonic==1.6
mpmath==1.3.0
msg-parser==1.2.0
multidict==6.0.4
mypy-extensions==1.0.0
nltk==3.8.1
numexpr==2.8.4
numpy==1.25.1
olefile==0.46
onnxruntime==1.15.1
openai==0.27.8
openapi-schema-pydantic==1.2.4
openpyxl==3.1.2
overrides==7.3.1
packaging==23.1
pandas==2.0.3
pdf2image==1.16.3
pdfminer.six==20221105
Pillow==9.5.0
pinecone-client==2.2.4
posthog==3.0.1
protobuf==4.23.4
psycopg2-binary==2.9.7
pulsar-client==3.2.0
py-automapper==1.2.3
pyarrow==12.0.1
pyasn1==0.5.0
pycparser==2.21
pycryptodome==3.18.0
pydantic==2.5.0
pydantic-settings==2.1.0
pydantic_core==2.14.1
pydeck==0.8.1b0
Pygments==2.15.1
pymongo==4.6.0
Pympler==1.0.1
pypandoc==1.11
python-dateutil==2.8.2
python-docx==0.8.11
python-dotenv==1.0.0
python-jose==3.3.0
python-keycloak==2.16.6
python-magic==0.4.27
python-pptx==0.6.21
pytz==2023.3
pytz-deprecation-shim==0.1.0.post0
PyYAML==6.0
referencing==0.29.1
regex==2023.6.3
requests==2.31.0
requests-toolbelt==1.0.0
rich==13.4.2
rpds-py==0.8.11
rsa==4.9
six==1.16.0
smmap==5.0.0
sniffio==1.3.0
SQLAlchemy==2.0.19
starlette==0.27.0
streamlit==1.24.1
sympy==1.12
tabulate==0.9.0
tenacity==8.2.2
tiktoken==0.4.0
tokenizers==0.13.3
toml==0.10.2
toolz==0.12.0
tornado==6.3.2
tqdm==4.65.0
typing-inspect==0.9.0
typing_extensions==4.8.0
tzdata==2023.3
tzlocal==4.3.1
unstructured==0.8.1
urllib3==2.0.3
uvicorn==0.23.0
uvloop==0.17.0
validators==0.20.0
watchdog==3.0.0
watchfiles==0.19.0
websockets==11.0.3
xlrd==2.0.1
XlsxWriter==3.1.2
yarl==1.9.2
zipp==3.16.2
zstandard==0.21.0
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
// Omitted LLM and store retriever code
memory = VectorStoreRetrieverMemory(
retriever=retriever,
return_messages=True,
)
tool = create_retriever_tool(
retriever,
"search_egypt_mythology",
"Searches and returns documents about egypt mythology",
)
tools = [tool]
system_message = SystemMessage(
content=(
"Do your best to answer the questions. "
"Feel free to use any tools available to look up "
"relevant information, only if necessary"
)
)
prompt = OpenAIFunctionsAgent.create_prompt(
system_message=system_message,
extra_prompt_messages=[
MessagesPlaceholder(variable_name="history")
],
)
agent = OpenAIFunctionsAgent(llm=__llm, tools=tools, prompt=prompt)
chat = AgentExecutor(
agent=agent,
tools=tools,
memory=memory,
verbose=True,
return_intermediate_steps=True,
)
result = chat({"input":"my question"})
answer = result['output']
```
**The error is:** `ValueError: variable history should be a list of base messages, got`
The agent works with MongoDB as chat history though, so it should also have worked with the vector memory retriever:
```
mongo_history = MongoDBChatMessageHistory(
connection_string=settings.mongo_connection_str,
session_id=__get_chat_id(user_uuid),
database_name = settings.mongo_db_name,
collection_name = 'chat_history',
)
memory = ConversationBufferMemory(
chat_memory=mongo_history,
memory_key='history',
input_key="input",
output_key="output",
return_messages=True,
)
```
### Expected behavior
Vector retrieval memory should have worked like the MongoDBChatMessageHistory memory.
The documentation does not mention this limitation. | VectorStoreRetrieverMemory doesn't work with AgentExecutor | https://api.github.com/repos/langchain-ai/langchain/issues/13516/comments | 4 | 2023-11-17T14:08:19Z | 2024-02-23T16:05:47Z | https://github.com/langchain-ai/langchain/issues/13516 | 1999219108 | 13516
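The underlying mismatch is that `MessagesPlaceholder` requires a list of message objects, while `VectorStoreRetrieverMemory` loads its retrieved context as plain text. A small adapter illustrates the coercion (with a stub `HumanMessage`, so it runs standalone; in langchain the real class lives in `langchain.schema`):

```python
class HumanMessage:
    """Stub for langchain.schema.HumanMessage."""
    def __init__(self, content):
        self.content = content

def as_message_list(history):
    """Coerce memory output into the list that MessagesPlaceholder expects."""
    if isinstance(history, list):
        return history
    return [HumanMessage(content=history)] if history else []

msgs = as_message_list("relevant past conversation text")
print(len(msgs))  # 1
```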
[
"langchain-ai",
"langchain"
] | ### Feature request
I see the current SQLiteCache is storing the entire prompt message in the SQLite db.
It would be more efficient to just hash the prompt and store this as the key for cache lookup.
### Motivation
My prompt messages are often lengthy and I want to optimize the storage requirements of the Cache.
Keying on an md5 hash would also make lookups faster than comparing full prompt strings.
### Your contribution
If this feature makes sense, I can work on this and raise a PR. Let me know. | SQLiteCache - store only the hash of the prompt as key instead of the entire prompt | https://api.github.com/repos/langchain-ai/langchain/issues/13513/comments | 3 | 2023-11-17T12:40:31Z | 2024-02-23T16:05:52Z | https://github.com/langchain-ai/langchain/issues/13513 | 1,999,036,428 | 13,513 |
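The proposal boils down to keying the cache on a digest instead of the raw prompt. A minimal in-memory sketch (the real `SQLiteCache` keys on `(prompt, llm_string)`, which a digest covers as well):

```python
import hashlib

class HashedCache:
    """Cache keyed on an md5 digest of the prompt instead of the prompt itself."""

    def __init__(self):
        self._store = {}

    @staticmethod
    def _key(prompt, llm_string):
        return hashlib.md5(f"{llm_string}\x00{prompt}".encode()).hexdigest()

    def update(self, prompt, llm_string, value):
        self._store[self._key(prompt, llm_string)] = value

    def lookup(self, prompt, llm_string):
        return self._store.get(self._key(prompt, llm_string))

cache = HashedCache()
cache.update("a very long prompt " * 500, "openai|temp=0", "cached answer")
print(cache.lookup("a very long prompt " * 500, "openai|temp=0"))  # cached answer
```

The trade-off is that the original prompt can no longer be inspected in the cache table, only its 32-character digest.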
[
"langchain-ai",
"langchain"
] | ### System Info
I am facing an issue using the `MarkdownHeaderTextSplitter` class and, after looking at the code, I noticed that the problem might be present in several text splitters. I did not find any existing issue on this, so I am creating a new one.
I am trying to use `MarkdownHeaderTextSplitter` through the `TextSplitter` interface by calling the `transform_documents` method.
However, the [`MarkdownHeaderTextSplitter` does not inherit from `TextSplitter`](https://github.com/langchain-ai/langchain/blob/35e04f204ba3e69356a4f8f557ea88f46d2fa389/libs/langchain/langchain/text_splitter.py#L331) and I wondered if it was a justified implementation or just an oversight.
It seems that the [`HTMLHeaderTextSplitter` is in that case too](https://github.com/langchain-ai/langchain/blob/35e04f204ba3e69356a4f8f557ea88f46d2fa389/libs/langchain/langchain/text_splitter.py#L496).
Can you give me some insight into how to use these classes if this behavior is intentional?
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
This code is a good way to reproduce what I am trying to do
```python
from langchain.text_splitter import MarkdownHeaderTextSplitter
from langchain.document_loaders import TextLoader
loader = TextLoader("test.md")
document = loader.load()
transformer = MarkdownHeaderTextSplitter([
("#", "Header 1"),
("##", "Header 2"),
("###", "Header 3"),
])
tr_documents = transformer.transform_documents(document)
```
### Expected behavior
I want this to return a list of documents (`langchain.docstore.document.Document`) splitted in the same way `MarkdownHeaderTextSplitter.split_text` does on the content of a markdown document [as presented in the documentation](https://python.langchain.com/docs/modules/data_connection/document_transformers/text_splitters/markdown_header_metadata).
| Text splitters inhéritance | https://api.github.com/repos/langchain-ai/langchain/issues/13510/comments | 3 | 2023-11-17T10:20:27Z | 2024-03-18T16:06:44Z | https://github.com/langchain-ai/langchain/issues/13510 | 1,998,763,281 | 13,510 |
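Until the header splitters implement the `TextSplitter` interface, a small adapter over anything exposing `split_text` gives you the `transform_documents` behavior. Sketched with stub classes so it runs standalone (the real `Document` in `langchain.docstore.document` carries the same two fields):

```python
class Document:
    """Stub for langchain.docstore.document.Document."""
    def __init__(self, page_content, metadata=None):
        self.page_content = page_content
        self.metadata = metadata or {}

def transform_documents(splitter, documents):
    """Apply a split_text-style splitter to Documents, merging source metadata."""
    out = []
    for doc in documents:
        for piece in splitter.split_text(doc.page_content):
            out.append(Document(piece.page_content, {**doc.metadata, **piece.metadata}))
    return out

class FakeHeaderSplitter:
    """Stand-in for MarkdownHeaderTextSplitter: returns Documents with header metadata."""
    def split_text(self, text):
        return [Document(part, {"Header 1": "Intro"}) for part in text.split("\n\n")]

docs = transform_documents(FakeHeaderSplitter(), [Document("a\n\nb", {"source": "test.md"})])
print(len(docs))  # 2
```

With the real classes, `FakeHeaderSplitter` is replaced by `MarkdownHeaderTextSplitter` and the loader's output plugs straight into `documents`.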
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi, I am trying the claude-2 model using the ChatAnthropic class, iterating over my data to call the model endpoint for predictions. Since it's a chain of inputs, I am using StuffDocumentsChain. I am currently facing two issues.
1. After the connection to Anthropic's CloudFront endpoint is established, a new connection is created for each iteration of data, and each connection goes into the CLOSE_WAIT state some time after that iteration completes.
2. When too many connections sit in the CLOSE_WAIT state, the application raises a "too many open files" error because the process exhausts its file descriptors.
## Solutions Tried
1. Setting `default_request_timeout` to handle the CLOSE_WAIT error.
2. Wrapping the calls in a try/except/finally block to close any objects created within it.
## Code
```
import warnings
warnings.filterwarnings("ignore")
import jaconv
import numpy as np
import pandas as pd
from langchain import PromptTemplate
from fuzzysearch import find_near_matches
from langchain.vectorstores import Chroma
from langchain.chat_models import ChatAnthropic
from langchain.document_loaders import PDFPlumberLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.schema import HumanMessage, SystemMessage
from langchain.schema.output_parser import OutputParserException
from langchain.chains import LLMChain, StuffDocumentsChain
from langchain.retrievers import BM25Retriever, EnsembleRetriever
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.output_parsers import StructuredOutputParser, ResponseSchema
from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate
import time
from json.decoder import JSONDecodeError
from loguru import logger
output_parser = StructuredOutputParser.from_response_schemas([
ResponseSchema(
name="answer",
description="answer to the user's question",
type="string"
),
])
format_instructions = output_parser.get_format_instructions()
def load_data_(file_path):
return PDFPlumberLoader(file_path).load_and_split()
def retriever_(docs):
return EnsembleRetriever(
retrievers=[
BM25Retriever.from_documents(docs, k=4),
Chroma.from_documents(
docs,
OpenAIEmbeddings(
chunk_size=200,
max_retries=60,
show_progress_bar=True
)
).as_retriever(search_kwargs={"k": 4})
],
weights=[0.5, 0.5]
)
def qa_chain():
start_qa = time.time()
qa_chain_output = StuffDocumentsChain(
llm_chain=LLMChain(
llm=ChatAnthropic(
model_name="claude-2",
temperature=0,
top_p=1,
max_tokens_to_sample=500000,
default_request_timeout=30
),
prompt=ChatPromptTemplate(
messages=[
SystemMessage(content='You are a world class & knowledgeable product catalog assistance.'),
HumanMessagePromptTemplate.from_template('Context:\n{context}'),
HumanMessagePromptTemplate.from_template('{format_instructions}'),
HumanMessagePromptTemplate.from_template('Questions:\n{question}'),
HumanMessage(content='Tips: Make sure to answer in the correct format.'),
],
partial_variables={"format_instructions": format_instructions}
)
),
document_variable_name='context',
document_prompt=PromptTemplate(
template='Content: {page_content}',
input_variables=['page_content'],
)
)
logger.info("Finished QA Chain in {}",time.time()-start_qa)
return qa_chain_output
def generate(input_data, retriever):
start_generate = time.time()
doc_query = jaconv.z2h(
jaconv.normalize(
"コード: {}\n製品の種類: {}\nメーカー品名: {}".format(
str(input_data["Product Code"]),
str(input_data["Product Type"]),
str(input_data["Product Name"])
),
"NFKC"
),
kana=False,
digit=True,
ascii=True
)
docs = RecursiveCharacterTextSplitter(
chunk_size=512,
chunk_overlap=128
).split_documents(
retriever.get_relevant_documents(doc_query)
)
docs = [
doc
for doc in docs
if len(
find_near_matches(
str(input_data["Product Code"]),
str(doc.page_content),
max_l_dist=1
)
) > 0
]
pages = list(set([str(doc.metadata["page"]) for doc in docs]))
question = (
"Analyze the provided context and understand to extract the following attributes as precise as possible for {}"
"Attributes:\n"
"{}\n"
"Return it in this JSON format: {}. "
"if no information found or not sure return \"None\"."
)
generate_output = output_parser.parse(
qa_chain().run({
"input_documents": docs,
"question": question.format(
str(input_data["Product Code"]),
'\n'.join([
f" {i + 1}. What is the value of \"{attrib}\"?"
for i, attrib in enumerate(input_data["Attributes"])
]),
str({"answer": {attrib: f"value of {attrib}" for attrib in input_data["Attributes"]}}),
)
})
)["answer"], ';'.join(pages)
logger.info("Finished QA Chain in {}",time.time()-start_generate)
return generate_output
def predict(pdf_file_path:str, csv_file_path:str,output_csv_file_path:str):
# load PDF data
start_load = time.time()
logger.info("Started loading the pdf")
documents = load_data_(pdf_file_path)
logger.info("Finished loading the pdf in {}",time.time()-start_load)
try:
start_retriever = time.time()
logger.info("Started retriever the pdf")
retriever = retriever_(documents)
logger.info("Finished retriever the pdf in {}",time.time()-start_retriever)
except Exception as e:
logger.error(f"Error in Retriever: {e}")
# Load CSV
start_load_csv = time.time()
logger.info("Started Reading Csv")
df = pd.read_csv(
csv_file_path,
low_memory=False,
dtype=object
).replace(np.nan, 'None')
logger.info("Finished Reading Csv in {}",time.time()-start_load_csv)
# Inference
index = 0
result_df = list()
start_generate = time.time()
logger.info("Started Generate Function")
for code, dfx in df.groupby('Product Code'):
start_generate_itr = time.time()
try:
logger.info("Reached index {}",index)
logger.info("Reached index code {}",code)
index = index + 1
result, page = generate(
{
'Product Code': code,
'Product Type': dfx['Product Type'].tolist()[0],
'Product Name': dfx['Product Name'].tolist()[0],
'Attributes': dfx['Attributes'].tolist()
},
retriever
)
dfx['Value'] = dfx['Attributes'].apply(lambda attrib: result.get(attrib, 'None'))
dfx['Page'] = page
logger.info("Generate Function 1 iterations {}",time.time()-start_generate_itr)
except OutputParserException as e:
dfx['Value'] = "None"
dfx['Page'] = "None"
logger.info("Generate Function 1 iterations {}",time.time()-start_generate_itr)
logger.info("JSONDecodeError Exception Occurred in {}",e)
result_df.append(dfx)
logger.info("Finished Generate Function in {}",time.time()-start_generate)
try:
result_df = pd.concat(result_df)
result_df.to_csv(output_csv_file_path)
except FileNotFoundError as e:
logger.error("FileNotFoundError Exception Occurred in {}",e)
return result_df
# df = predict(f"{PDF_FILE}", f"{CSV_FILE}","output.csv")
```
### Suggestion:
I would like to know how to handle the CLOSE_WAIT connections, since I am processing a large amount of data through Anthropic claude-2. | Issue: Getting CLOSE_WAIT and too many files open error using ChatAnthropic and StuffDocumentsChain | https://api.github.com/repos/langchain-ai/langchain/issues/13509/comments | 12 | 2023-11-17T10:05:35Z | 2024-06-24T16:07:27Z | https://github.com/langchain-ai/langchain/issues/13509 | 1998738328 | 13509
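One detail worth checking in the code above: `generate()` calls `qa_chain()` on every iteration, so a fresh `ChatAnthropic` client (and its HTTP connection) is constructed per row, which matches the pattern of connections piling up in CLOSE_WAIT. Caching the chain lets one client be reused; the pattern, sketched with a counting stub instead of the real chain:

```python
from functools import lru_cache

built = {"count": 0}

@lru_cache(maxsize=1)
def qa_chain():
    """Build the expensive client/chain once and reuse it everywhere."""
    built["count"] += 1
    return object()  # stands in for StuffDocumentsChain(...)

for _ in range(100):
    chain = qa_chain()  # same instance on every iteration
print(built["count"])  # 1
```

This is only a hypothesis about the root cause; combining it with `default_request_timeout` may still be needed for slow responses.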
[
"langchain-ai",
"langchain"
] | ### System Info
Python 3.9
langchain 0.0.336
openai 1.3.2
pandas 2.1.3
### Who can help?
@EYU
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
First of all, thank you for this great library!
Concerning the bug, I have a vllm openai server (0.2.1.post1) running locally started with the following command:
```
python -m vllm.entrypoints.openai.api_server --model ./zephyr-7b-beta --served-model-name zephyr-7b-beta
```
On the client side, I have this piece of code, slightly adapted from the documentation (only the model name changes).
```python
from langchain.llms import VLLMOpenAI
llm = VLLMOpenAI(
openai_api_key="EMPTY",
openai_api_base="http://localhost:8000/v1",
model_name="zephyr-7b-beta",
)
print(llm("Rome is"))
```
And I got the following error:
```text
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[19], line 6
1 llm = VLLMOpenAI(
2 openai_api_key="EMPTY",
3 openai_api_base="http://localhost:8000/v1",
4 model_name="zephyr-7b-beta",
5 )
----> 6 llm("Rome is")
File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/langchain/llms/base.py:876, in BaseLLM.__call__(self, prompt, stop, callbacks, tags, metadata, **kwargs)
869 if not isinstance(prompt, str):
870 raise ValueError(
871 "Argument `prompt` is expected to be a string. Instead found "
872 f"{type(prompt)}. If you want to run the LLM on multiple prompts, use "
873 "`generate` instead."
874 )
875 return (
--> 876 self.generate(
877 [prompt],
878 stop=stop,
879 callbacks=callbacks,
880 tags=tags,
881 metadata=metadata,
882 **kwargs,
883 )
884 .generations[0][0]
885 .text
886 )
File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/langchain/llms/base.py:656, in BaseLLM.generate(self, prompts, stop, callbacks, tags, metadata, run_name, **kwargs)
641 raise ValueError(
642 "Asked to cache, but no cache found at `langchain.cache`."
643 )
644 run_managers = [
645 callback_manager.on_llm_start(
646 dumpd(self),
(...)
654 )
655 ]
--> 656 output = self._generate_helper(
657 prompts, stop, run_managers, bool(new_arg_supported), **kwargs
658 )
659 return output
660 if len(missing_prompts) > 0:
File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/langchain/llms/base.py:544, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
542 for run_manager in run_managers:
543 run_manager.on_llm_error(e)
--> 544 raise e
545 flattened_outputs = output.flatten()
546 for manager, flattened_output in zip(run_managers, flattened_outputs):
File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/langchain/llms/base.py:531, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
521 def _generate_helper(
522 self,
523 prompts: List[str],
(...)
527 **kwargs: Any,
528 ) -> LLMResult:
529 try:
530 output = (
--> 531 self._generate(
532 prompts,
533 stop=stop,
534 # TODO: support multiple run managers
535 run_manager=run_managers[0] if run_managers else None,
536 **kwargs,
537 )
538 if new_arg_supported
539 else self._generate(prompts, stop=stop)
540 )
541 except BaseException as e:
542 for run_manager in run_managers:
File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/langchain/llms/openai.py:454, in BaseOpenAI._generate(self, prompts, stop, run_manager, **kwargs)
442 choices.append(
443 {
444 "text": generation.text,
(...)
451 }
452 )
453 else:
--> 454 response = completion_with_retry(
455 self, prompt=_prompts, run_manager=run_manager, **params
456 )
457 if not isinstance(response, dict):
458 # V1 client returns the response in an PyDantic object instead of
459 # dict. For the transition period, we deep convert it to dict.
460 response = response.dict()
File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/langchain/llms/openai.py:114, in completion_with_retry(llm, run_manager, **kwargs)
112 """Use tenacity to retry the completion call."""
113 if is_openai_v1():
--> 114 return llm.client.create(**kwargs)
116 retry_decorator = _create_retry_decorator(llm, run_manager=run_manager)
118 @retry_decorator
119 def _completion_with_retry(**kwargs: Any) -> Any:
File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/openai/_utils/_utils.py:299, in required_args.<locals>.inner.<locals>.wrapper(*args, **kwargs)
297 msg = f"Missing required argument: {quote(missing[0])}"
298 raise TypeError(msg)
--> 299 return func(*args, **kwargs)
TypeError: create() got an unexpected keyword argument 'api_key'
```
It seems that if I remove line 158 from `langchain/llms/vllm.py`, the code works.
### Expected behavior
I expect a completion with no error. | VLLMOpenAI -- create() got an unexpected keyword argument 'api_key' | https://api.github.com/repos/langchain-ai/langchain/issues/13507/comments | 3 | 2023-11-17T08:56:07Z | 2023-11-20T01:49:57Z | https://github.com/langchain-ai/langchain/issues/13507 | 1,998,591,711 | 13,507 |
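The shape of the fix (what removing line 158 amounts to) is keeping client-construction kwargs out of the per-call parameters. A standalone sketch; the key set is an assumption about which kwargs belong to the client:

```python
CLIENT_ONLY = {"api_key", "api_base"}

def invocation_params(params):
    """Drop client-construction kwargs before calling completions.create()."""
    return {k: v for k, v in params.items() if k not in CLIENT_ONLY}

params = {"model": "zephyr-7b-beta", "api_key": "EMPTY", "temperature": 0.7}
print(invocation_params(params))  # {'model': 'zephyr-7b-beta', 'temperature': 0.7}
```

In the openai v1 SDK, the API key belongs on the client object, not on `create()`, which is why the leaked kwarg raises a `TypeError`.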
[
"langchain-ai",
"langchain"
] | ### System Info
I was using 0.0.182-rc.1 and I tried upgrading to the latest, 0.0.192
But I'm still getting the error.
Please note: everything was working fine before, I have made no changes.
Did openai change something?
Not sure what is going on here, but it looks like it's from the OpenAI side. If so, how do I fix this? Do I wait for a langchain update?
Error:
/node_modules/openai
/src/error.ts:66
return new BadRequestError(status, error, message, headers);
^
Error: 400 '$.input' is invalid. Please check the API reference: https://p
latform.openai.com/docs/api-reference.
at Function.generate (/home/hedgehog/Europ/profiling-github/profiling/
server/node_modules/openai/src/error.ts:66:14)
at OpenAI.makeStatusError (/home/hedgehog/Europ/profiling-github/profi
ling/server/node_modules/openai/src/core.ts:358:21)
at OpenAI.makeRequest (/home/hedgehog/Europ/profiling-github/profiling
/server/node_modules/openai/src/core.ts:416:24)
at processTicksAndRejections (node:internal/process/task_queues:95:5)
at /home/hedgehog/Europ/profiling-github/profiling/server/node_modules
/langchain/dist/embeddings/openai.cjs:223:29
at RetryOperation._fn (/home/hedgehog/Europ/profiling-github/profiling
/server/node_modules/p-retry/index.js:50:12)
@agola11
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Not sure, it broke suddenly without any changes
### Expected behavior
It would initialize properly and execute all the onModuleLoad functions in the Nest js application with the embeddings ready for use. | Sudden failure to initialize | https://api.github.com/repos/langchain-ai/langchain/issues/13505/comments | 3 | 2023-11-17T07:42:15Z | 2024-02-23T16:05:57Z | https://github.com/langchain-ai/langchain/issues/13505 | 1,998,469,952 | 13,505 |
[
"langchain-ai",
"langchain"
] | ### System Info
Location: langchain/llms/base.py
I think there's a hidden error in the `generate` method (line 554).
Inside the second `if` statement, it indexes into `callbacks`, which may cause a potential error.
Here `callbacks` can be a `CallbackManager` object, which is not subscriptable, so it cannot be indexed as `callbacks[0]` (see the source code below).
Thanks to @Seuleeee
### Who can help?
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
langchain/llms/base.py
```
if (
    isinstance(callbacks, list)
    and callbacks
    and (
        isinstance(callbacks[0], (list, BaseCallbackManager))
        or callbacks[0] is None
    )
):
```
Error message
```
Traceback (most recent call last):
File "/tmp/ipykernel_713/4197562787.py", line 10, in test_rag
result=conv_chain.run(query)
File "/usr/local/lib/python3.8/dist-packages/langchain/chains/base.py", line 505, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
File "/usr/local/lib/python3.8/dist-packages/langchain/chains/base.py", line 310, in __call__
raise e
File "/usr/local/lib/python3.8/dist-packages/langchain/chains/base.py", line 304, in __call__
self._call(inputs, run_manager=run_manager)
File "/usr/local/lib/python3.8/dist-packages/langchain/chains/conversational_retrieval/base.py", line 159, in _call
answer = self.combine_docs_chain.run(
File "/usr/local/lib/python3.8/dist-packages/langchain/chains/base.py", line 510, in run
return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
File "/usr/local/lib/python3.8/dist-packages/langchain/chains/base.py", line 310, in __call__
raise e
File "/usr/local/lib/python3.8/dist-packages/langchain/chains/base.py", line 304, in __call__
self._call(inputs, run_manager=run_manager)
...
File "/usr/local/lib/python3.8/dist-packages/langchain/llms/base.py", line 617, in generate
isinstance(callbacks[0], (list, BaseCallbackManager)
TypeError: 'CallbackManager' object is not subscriptable
```
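As a sanity check, the guard's short-circuit behaviour can be exercised in isolation with plain Python (`FakeManager` below is a made-up stand-in for `CallbackManager`, not the real class):

```python
def first_callback_check(callbacks):
    # Mirrors the structure of the guard: only index when callbacks is a list.
    return (
        isinstance(callbacks, list)
        and bool(callbacks)
        and (isinstance(callbacks[0], list) or callbacks[0] is None)
    )

class FakeManager:
    """Made-up stand-in for CallbackManager (not subscriptable)."""

print(first_callback_check(FakeManager()))  # False: short-circuits before indexing
print(first_callback_check([None]))         # True
print(first_callback_check([]))             # False: empty list is never indexed
```

As written, the `isinstance(callbacks, list)` check should short-circuit before the subscript, so the `TypeError` at line 617 may come from a slightly different version of this code; it is worth checking the exact source of the installed release.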
### Expected behavior
Revise the code that has the potential error.
Tell me how I can contribute to fixing this code (test code, etc.). | Found Potential Bug! (langchain > llm > base.py) | https://api.github.com/repos/langchain-ai/langchain/issues/13504/comments | 4 | 2023-11-17T07:05:15Z | 2024-02-26T16:06:48Z | https://github.com/langchain-ai/langchain/issues/13504 | 1,998,415,070 | 13,504 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Can you add memory support for `RetrievalQA.from_chain_type()`? I haven't seen any implementation of memory for this kind of RAG chain. It would be nice to have memory and to ask questions in context.
### Motivation
I just can't get any memory to work with RetrievalQA.from_chain_type().
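For what it's worth, the behaviour being requested can be sketched in plain Python (this is a toy illustration of the memory contract, not langchain code; `ToyQAChain` and its prompt format are invented for the example): each call appends a turn to a history buffer that gets injected into the next prompt, which is what a memory-aware `RetrievalQA` would need to do.

```python
class ToyQAChain:
    """Toy sketch of the memory contract (not langchain code)."""

    def __init__(self):
        self.history = []  # list of (question, answer) turns

    def build_prompt(self, context, question):
        past = "\n".join(f"Human: {q}\nAI: {a}" for q, a in self.history)
        return f"{past}\nContext: {context}\nQuestion: {question}".strip()

    def ask(self, context, question, llm):
        # llm stands in for the model call
        answer = llm(self.build_prompt(context, question))
        self.history.append((question, answer))
        return answer

chain = ToyQAChain()
chain.ask("doc text", "What is X?", lambda prompt: "X is 1")
followup_prompt = chain.build_prompt("doc text", "And what about Y?")
print("What is X?" in followup_prompt)  # True: the second turn sees the first
```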
### Your contribution
Not right now... I don't have all required knowledge about LLMs | implement memory for RetrievalQA.from_chain_type() | https://api.github.com/repos/langchain-ai/langchain/issues/13503/comments | 3 | 2023-11-17T06:52:41Z | 2024-02-23T16:06:07Z | https://github.com/langchain-ai/langchain/issues/13503 | 1,998,400,060 | 13,503 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
**Description:**
The Redistext search feature in Redis Vector Store functions well with a small number of indexed documents (20-30). However, when the quantity of indexed documents exceeds 400-500, the performance degrades noticeably. Some keys are missed during the Redistext search, and Redis Similarity search retrieves incorrect keys.
**Steps to Reproduce:**
1. Store 400-500 documents in an Index of Redis vector store database.
2. Conduct Redistext search and observe that it is not able to find some of the stored keys.
3. Use Redis Similarity search and notice retrieval of incorrect keys, for a number of keys that are already stored in the Redis vector store database.
**Additional Details:**
The issue is not present with smaller datasets (20-30 indexed documents).
All problematic keys are confirmed to be stored in the Redis Vector Store database with double-checked hash values.
**Expected Behavior:**
Redistext search and Redis Similarity search should provide accurate results even with larger datasets.
**Environment:**
- Official Redis stack docker image (redis/redis-stack-server:latest)
- Docker version 24.0.2
**Any guidance or solutions to improve search performance would be highly appreciated.**
Thank you.
### Suggestion:
**Optimize Redistext Search and Redis Similarity Search for Larger Datasets**
**Proposed Solution:**
Given the observed performance degradation with Redistext search and Redis Similarity search when handling larger datasets (400-500 indexed documents), I suggest reassessing the Redis index similarity search and Redistext search functions. The objective is to ensure accurate results even when the dataset is extensive.
| Why does Redis vector store misses some keys in redistext search. although key and related details exists in index.Issue | https://api.github.com/repos/langchain-ai/langchain/issues/13500/comments | 3 | 2023-11-17T05:05:26Z | 2024-02-23T16:06:12Z | https://github.com/langchain-ai/langchain/issues/13500 | 1,998,286,493 | 13,500 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I created a tool in an agent to output some data. The data contains ADMET properties and other properties. Although I have made it very clear that all properties should be kept in the tool function as well as in the output parser, I still cannot get the non-ADMET properties in the final output. Here is my code:
```
def Func_Tool_XYZ(parameters):
    print("\n", parameters)
    print("触发Func_Tool_ADMET插件")
    print("........正在解析ADMET属性..........")
    data = {
        "SMILES": "C1=CC=CC=C1C1=C(C2=CC=CC=C2)C=CC=C1",
        "humanIntestinalAbsorption": "HIA+|0.73",
        "caco2Permeability": "None",
        "caco2PermeabilityIi": "Caco2+|0.70",
        "savePath": "/profile/chemicalAppsResult/admetResult/2023/11/10/2567",
        "status": "1",
        "favoriteFlag": "0"
    }
    prompt = f"Properties: {data['SMILES']} {data['humanIntestinalAbsorption']} {data['caco2Permeability']} {data['caco2PermeabilityIi']} {data['savePath']} {data['status']} {data['favoriteFlag']}"
    return {
        'output': data,
        'prompt': prompt
    }

tools = [
    Tool(
        name="Tool XYZ",
        func=Func_Tool_XYZ,
        description="""
        useful when you want to obtain the XYZ data for a molecule.
        like: get the XYZ data for molecule X
        The input to this tool should be a string, representing the smiles_id.
        """
    )
]

class CustomOutputParser(JSONAgentOutputParser):
    def parse(self, llm_output: str):
        print('jdlfjdlfjdlfjsdlfdjld')
        print(llm_output)
        smiles = self.data["SMILES"]
        return f"SMILES: {smiles}, {llm_output}"

output_parser = CustomOutputParser()

llm = OpenAI(temperature=0, max_tokens=2048)
memory = ConversationBufferMemory(memory_key="chat_history", output_key='output')
memory.clear()

agent = initialize_agent(tools,
                         llm,
                         agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
                         verbose=True,
                         memory=memory,
                         return_intermediate_steps=True,
                         output_parser=output_parser
                         )

pdf_id = 1111
Human_prompt = f'provide a document. PDF ID is {pdf_id}. This information helps you to understand this document, but you should use tools to finish following tasks, rather than directly imagine from my description. If you understand, say \"Received\".'
AI_prompt = "Received. "
memory.save_context({"input": Human_prompt}, {"output": AI_prompt})

answer = agent({"input": "Get the ADMET data for molecule X."})
print(answer["output"])
```
And here are my output messages:
```
> Entering new AgentExecutor chain...
Thought: Do I need to use a tool? Yes
Action: Tool XYZ
Action Input: molecule X

 molecule X
触发Func_Tool_ADMET插件
........正在解析ADMET属性..........

Observation: {'output': {'SMILES': 'C1=CC=CC=C1C1=C(C2=CC=CC=C2)C=CC=C1', 'humanIntestinalAbsorption': 'HIA+|0.73', 'caco2Permeability': 'None', 'caco2PermeabilityIi': 'Caco2+|0.70', 'savePath': '/profile/chemicalAppsResult/admetResult/2023/11/10/2567', 'status': '1', 'favoriteFlag': '0'}, 'prompt': 'Properties: C1=CC=CC=C1C1=C(C2=CC=CC=C2)C=CC=C1 HIA+|0.73 None Caco2+|0.70 /profile/chemicalAppsResult/admetResult/2023/11/10/2567 1 0'}
Thought: Do I need to use a tool? No
AI: The ADMET data for molecule X is as follows: Human Intestinal Absorption: HIA+|0.73, Caco2 Permeability: None, Caco2 Permeability II: Caco2+|0.70, Save Path: /profile/chemicalAppsResult/admetResult/2023/11/10/2567, Status: 1, Favorite Flag: 0.

> Finished chain.
The ADMET data for molecule X is as follows: Human Intestinal Absorption: HIA+|0.73, Caco2 Permeability: None, Caco2 Permeability II: Caco2+|0.70, Save Path: /profile/chemicalAppsResult/admetResult/2023/11/10/2567, Status: 1, Favorite Flag: 0.
```
It seems like the non-ADMET properties have been lost.
It may be caused by the output parser not working, because if it worked, I should see something like "jdlfjdlfjdlfjsdlfdjld", which is printed in the output parser.
So what should I do next?
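One thing that stands out: `parse` reads `self.data`, but nothing ever assigns it, so if `parse` were actually invoked it would raise an `AttributeError`. The stateful-parser pattern the class seems to be aiming for looks like this in plain Python (illustrative only; it does not use langchain's `JSONAgentOutputParser` API):

```python
class StatefulOutputParser:
    """Illustrative sketch: a parser that prepends stored data to the LLM output."""

    def __init__(self, data):
        self.data = data  # the original class never sets self.data

    def parse(self, llm_output: str) -> str:
        smiles = self.data["SMILES"]
        return f"SMILES: {smiles}, {llm_output}"

parser = StatefulOutputParser({"SMILES": "C1=CC=CC=C1"})
print(parser.parse("HIA+|0.73"))  # SMILES: C1=CC=CC=C1, HIA+|0.73
```

Separately, the fact that the 'jdlfjdl...' marker never prints suggests `parse` is never called at all: `initialize_agent` forwards unrecognised keyword arguments to the `AgentExecutor`, not to the agent itself, so the parser likely needs to be passed via `agent_kwargs` instead (worth verifying against your langchain version).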
### Suggestion:
_No response_ | Output Parser not Work in an Agent Chain | https://api.github.com/repos/langchain-ai/langchain/issues/13498/comments | 3 | 2023-11-17T03:53:58Z | 2024-02-23T16:06:17Z | https://github.com/langchain-ai/langchain/issues/13498 | 1,998,227,516 | 13,498 |
[
"langchain-ai",
"langchain"
] | ### System Info
When using the FAISS `similarity_search_with_score` function with the parameters `filter` and `score_threshold` together, the results are problematic.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Use `similarity_search_with_score`.
2. Pass the `filter` and `score_threshold` parameters together.
### Expected behavior
The result is incorrect. | faiss中的错误 (Error in FAISS) | https://api.github.com/repos/langchain-ai/langchain/issues/13497/comments | 3 | 2023-11-17T03:29:19Z | 2024-02-23T16:06:22Z | https://github.com/langchain-ai/langchain/issues/13497 | 1,998,208,034 | 13,497 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I want to create an ADMET properties prediction tool in an agent. But the result is not so good. Here is my code:
```
def Func_Tool_XYZ(parameters):
    print("\n", parameters)
    print("触发Func_Tool_ADMET插件")
    print("........正在解析ADMET属性..........")
    data = {
        "SMILES": "C1=CC=CC=C1C1=C(C2=CC=CC=C2)C=CC=C1",
        "humanIntestinalAbsorption": "HIA+|0.73",
        "caco2Permeability": "None",
        "caco2PermeabilityIi": "Caco2+|0.70",
        "savePath": "/profile/chemicalAppsResult/admetResult/2023/11/10/2567",
        "status": "1",
        "favoriteFlag": "0"
    }
    prompt = f"Properties: {data['SMILES']} {data['humanIntestinalAbsorption']} {data['caco2Permeability']} {data['caco2PermeabilityIi']} {data['savePath']} {data['status']} {data['favoriteFlag']}"
    return {
        'output': data,
        'prompt': prompt
    }

tools = [
    Tool(
        name="Tool XYZ",
        func=Func_Tool_XYZ,
        description="""
        useful when you want to obtain the XYZ data for a molecule.
        like: get the XYZ data for molecule X
        The input to this tool should be a string, representing the smiles_id.
        """
    )
]

class CustomOutputParser(JSONAgentOutputParser):
    def parse(self, llm_output: str):
        print('jdlfjdlfjdlfjsdlfdjld')
        print(llm_output)
        return llm_output

output_parser = CustomOutputParser()

llm = OpenAI(temperature=0, max_tokens=2048)
memory = ConversationBufferMemory(memory_key="chat_history", output_key='output')
memory.clear()

agent = initialize_agent(tools,
                         llm,
                         agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
                         verbose=True,
                         memory=memory,
                         return_intermediate_steps=True,
                         output_parser=output_parser
                         )

pdf_id = 1111
Human_prompt = f'provide a document. PDF ID is {pdf_id}. This information helps you to understand this document, but you should use tools to finish following tasks, rather than directly imagine from my description. If you understand, say \"Received\".'
AI_prompt = "Received. "
memory.save_context({"input": Human_prompt}, {"output": AI_prompt})

answer = agent({"input": "Get the ADMET data for molecule X."})
print(answer["output"])
```
I got the output like this:
```
> Entering new AgentExecutor chain...
Thought: Do I need to use a tool? Yes
Action: Tool XYZ
Action Input: molecule X

 molecule X
触发Func_Tool_ADMET插件
........正在解析ADMET属性..........

Observation: {'output': {'SMILES': 'C1=CC=CC=C1C1=C(C2=CC=CC=C2)C=CC=C1', 'humanIntestinalAbsorption': 'HIA+|0.73', 'caco2Permeability': 'None', 'caco2PermeabilityIi': 'Caco2+|0.70', 'savePath': '/profile/chemicalAppsResult/admetResult/2023/11/10/2567', 'status': '1', 'favoriteFlag': '0'}, 'prompt': 'Properties: C1=CC=CC=C1C1=C(C2=CC=CC=C2)C=CC=C1 HIA+|0.73 None Caco2+|0.70 /profile/chemicalAppsResult/admetResult/2023/11/10/2567 1 0'}
Thought: Do I need to use a tool? No
AI: The ADMET data for molecule X is as follows: Human Intestinal Absorption: HIA+|0.73, Caco2 Permeability: None, Caco2 Permeability II: Caco2+|0.70, Save Path: /profile/chemicalAppsResult/admetResult/2023/11/10/2567, Status: 1, Favorite Flag: 0.

> Finished chain.
The ADMET data for molecule X is as follows: Human Intestinal Absorption: HIA+|0.73, Caco2 Permeability: None, Caco2 Permeability II: Caco2+|0.70, Save Path: /profile/chemicalAppsResult/admetResult/2023/11/10/2567, Status: 1, Favorite Flag: 0.
```
As you can see, the "SMILES" property has been lost, although I have made it clear that the properties in the tool should be kept as they are.
So what happened? How should I revise the code to make it work?
### Suggestion:
_No response_ | Property Lost in an Agent Chain | https://api.github.com/repos/langchain-ai/langchain/issues/13495/comments | 4 | 2023-11-17T03:09:10Z | 2024-02-23T16:06:27Z | https://github.com/langchain-ai/langchain/issues/13495 | 1,998,187,189 | 13,495 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
The `make api_docs_build` command, which builds the API Reference documentation and is defined in the `Makefile`, is very slow.
### Suggestion:
_No response_ | very slow make command | https://api.github.com/repos/langchain-ai/langchain/issues/13494/comments | 7 | 2023-11-17T02:58:21Z | 2024-02-09T16:47:54Z | https://github.com/langchain-ai/langchain/issues/13494 | 1,998,176,960 | 13,494 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.320
MacOS 13.14.1
Python 3.9.18
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I'm using `DynamoDBChatMessageHistory` for storing chat messages. However, I do not use `Human` or `AI` as my chat prefixes. Using pre-defined methods `add_user_message` and `add_ai_message` won't work for me. I extended the `BaseMessage` class to create a new message type. I get this error when trying to read messages from history.
```
raise ValueError(f"Got unexpected message type: {_type}")
ValueError: Got unexpected message type: User
```
Here's my code:
```
import uuid

from typing_extensions import Literal

from langchain.memory.chat_message_histories import DynamoDBChatMessageHistory
from langchain.schema.messages import BaseMessage

class UserMessage(BaseMessage):
    type: Literal["User"] = "User"

class BotMessage(BaseMessage):
    type: Literal["Bot"] = "Bot"

session_id = str(uuid.uuid4())
history = DynamoDBChatMessageHistory(
    table_name="tableName",
    session_id=session_id,
    primary_key_name='sessionId'
)

# history.add_user_message("hi!")  # works fine
# history.add_ai_message("whats up?")  # works fine

history.add_message(UserMessage(content="Hello, I'm the user!"))  # saves the message to the table with the correct message type
history.add_message(BotMessage(content="Hello, I'm the bot!!"))  # doesn't run due to the above-encountered error

print(history.messages)  # throws a ValueError
```
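The `ValueError` comes from a hard-coded type dispatch: langchain's message deserializer raises on any `type` it does not know. A plain-Python sketch of the registry pattern that would let custom types round-trip (the names here are illustrative, not langchain's API):

```python
# Illustrative registry, not langchain's actual API.
MESSAGE_TYPES = {}

def register_message(cls):
    MESSAGE_TYPES[cls.type_name] = cls
    return cls

@register_message
class UserMessage:
    type_name = "User"

    def __init__(self, content):
        self.content = content

def message_from_dict(d):
    cls = MESSAGE_TYPES.get(d["type"])
    if cls is None:
        raise ValueError(f"Got unexpected message type: {d['type']}")
    return cls(content=d["content"])

msg = message_from_dict({"type": "User", "content": "hi"})
print(type(msg).__name__, msg.content)  # UserMessage hi
```

In the meantime, langchain's built-in `ChatMessage(role="User", content=...)` is designed for arbitrary speaker labels and does round-trip through the built-in deserializer, which may be the least invasive workaround.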
### Expected behavior
The error should not be thrown, so that the other execution steps can complete. | Adding messages to history doesn't work with custom message types | https://api.github.com/repos/langchain-ai/langchain/issues/13493/comments | 4 | 2023-11-17T02:23:36Z | 2024-02-23T16:06:32Z | https://github.com/langchain-ai/langchain/issues/13493 | 1,998,146,316 | 13,493 |
[
"langchain-ai",
"langchain"
] | ### System Info
google-cloud-aiplatform = "^1.36.4"
langchain = "0.0.336"
python 3.11
```
pydantic.v1.error_wrappers.ValidationError: 1 validation error for VertexAI
__root__
Unknown model publishers/google/models/chat-bison-32k; {'gs://google-cloud-aiplatform/schema/predict/instance/text_generation_1.0.0.yaml': <class 'vertexai.preview.language_models._PreviewTextGenerationModel'>} (type=value_error)
```
```
(Pdb) model_id
'publishers/google/models/codechat-bison-32k'
(Pdb) _publisher_models._PublisherModel(resource_name=model_id)
<google.cloud.aiplatform._publisher_models._PublisherModel object at 0xffff752ba110>
resource name: publishers/google/models/codechat-bison-32k
```
Public Preview: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/chat-bison?hl=en
|name|released|status|
| -------------- | ------------ | -------------|
|chat-bison-32k | 2023-08-29 | Public Preview|
### Who can help?
@ey
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
llm = VertexAI(
    model_name="chat-bison-32k",
    max_output_tokens=8192,
    temperature=0.1,
    top_p=0.8,
    top_k=40,
    verbose=True,
    # streaming=True,
)
```
### Expected behavior
chat-bison-32k works
There may be some other recently released models that also need a similar update.
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
The MLflowAIGateway class names should be consistent with `MLflow` (uppercase "ML", lowercase "f"):
`from langchain.chat_models import ChatMLflowAIGateway`
For embeddings and completions, current module names are:
```
from langchain.llms import MlflowAIGateway
from langchain.embeddings import MlflowAIGatewayEmbeddings
```
should be:
```
from langchain.llms import MLflowAIGateway
from langchain.embeddings import MLflowAIGatewayEmbeddings
```
### Suggestion:
_No response_ | Issue: minor naming discrepency of MLflowAIGateway | https://api.github.com/repos/langchain-ai/langchain/issues/13475/comments | 3 | 2023-11-16T19:00:55Z | 2024-02-22T16:06:03Z | https://github.com/langchain-ai/langchain/issues/13475 | 1,997,547,851 | 13,475 |
[
"langchain-ai",
"langchain"
] | ### System Info
databricks Machine Learning Runtime: 13.3
langchain==0.0.319
python == 3.9
### Who can help?
@agola11 @hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. create a route following [this instruction](https://mlflow.org/docs/latest/python_api/mlflow.gateway.html)
```
gateway.create_route(
    name="chat",
    route_type="llm/v1/chat",
    model={
        "name": "llama2-70b-chat",
        "provider": "mosaicml",
        "mosaicml_config": {
            "mosaicml_api_key": <key>
        }
    }
)
```
3. use the template code from the documentation page [here](https://python.langchain.com/docs/integrations/providers/mlflow_ai_gateway#chat-example)
```
from langchain.chat_models import ChatMLflowAIGateway
from langchain.schema import HumanMessage, SystemMessage

chat = ChatMLflowAIGateway(
    gateway_uri="databricks",
    route="chat",
    params={
        "temperature": 0.1
    }
)

messages = [
    SystemMessage(
        content="You are a helpful assistant that translates English to French."
    ),
    HumanMessage(
        content="Translate this sentence from English to French: I love programming."
    ),
]
print(chat(messages))
```
This will complain that the parameter `max_tokens` is not provided. Similarly, if we update the model:
```
chat = ChatMLflowAIGateway(
    gateway_uri="databricks",
    route="chat",
    params={
        "temperature": 0.1,
        "max_tokens": 200
    }
)
```
it complains that the parameter `stop` is not provided.
However, both parameters are supposed to be optional according to MLflow's docs [here](https://mlflow.org/docs/latest/llms/gateway/index.html#chat).
### Expected behavior
I expect the example code below to execute successfully:
```
from langchain.chat_models import ChatMLflowAIGateway
from langchain.schema import HumanMessage, SystemMessage

chat = ChatMLflowAIGateway(
    gateway_uri="databricks",
    route="chat",
    params={
        "temperature": 0.1
    }
)

messages = [
    SystemMessage(
        content="You are a helpful assistant that translates English to French."
    ),
    HumanMessage(
        content="Translate this sentence from English to French: I love programming."
    ),
]
print(chat(messages))
```
| inconsistency parameter requirements for chat_models.ChatMLflowAIGateway | https://api.github.com/repos/langchain-ai/langchain/issues/13474/comments | 3 | 2023-11-16T18:51:10Z | 2024-02-22T16:06:09Z | https://github.com/langchain-ai/langchain/issues/13474 | 1,997,529,011 | 13,474 |
[
"langchain-ai",
"langchain"
] | ### System Info
Version: langchain 0.0.336
Version: sqlalchemy 2.0.1
```
  File "/Users/anonymous/code/anon/anon/utils/cache.py", line 40, in set_cache
    from langchain.cache import SQLiteCache
  File "/Users/anonymous/dotfiles/virtualenvs/aiproject/lib/python3.11/site-packages/langchain/cache.py", line 45, in <module>
    from sqlalchemy import Column, Integer, Row, String, create_engine, select
ImportError: cannot import name 'Row' from 'sqlalchemy' (/Users/anonymous/dotfiles/virtualenvs/aiproject/lib/python3.11/site-packages/sqlalchemy/__init__.py)
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.cache import SQLiteCache
```
### Expected behavior
Doesn't throw exception | ImportError: cannot import name 'Row' from 'sqlalchemy' | https://api.github.com/repos/langchain-ai/langchain/issues/13464/comments | 6 | 2023-11-16T15:08:51Z | 2024-03-18T16:06:39Z | https://github.com/langchain-ai/langchain/issues/13464 | 1,997,084,178 | 13,464 |
[
"langchain-ai",
"langchain"
] | ### System Info
I am facing an issue while testing my chatbot; below I have attached a screenshot that shows the exact issue.

But when I hit the API again with a lowercase first letter ('what' instead of 'What'), it returns the correct response.

Could someone please help me identify the exact issue? Thanks!
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [x] LLMs/Chat Models
- [x] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [x] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
{
    "question": "what are the reactions associated with hydromorphone use in animals?"
}
```
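Since the only visible difference between the failing and succeeding requests is the capitalisation of the first word, one cheap thing to rule out is query normalisation before the question reaches the chain (plain-Python sketch):

```python
def normalize_question(question: str) -> str:
    # Trim stray whitespace and lowercase the leading character so that
    # "What ..." and "what ..." reach the retriever identically.
    question = question.strip()
    if not question:
        return question
    return question[0].lower() + question[1:]

print(normalize_question("What are the reactions associated with hydromorphone use in animals?"))
# what are the reactions associated with hydromorphone use in animals?
```

That said, embedding similarity is usually not this sensitive to case, so if normalisation changes nothing, the difference may come from nondeterminism elsewhere in the chain (e.g. temperature or retrieval ties).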
### Expected behavior

| ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/13461/comments | 6 | 2023-11-16T13:42:31Z | 2024-02-22T16:06:13Z | https://github.com/langchain-ai/langchain/issues/13461 | 1,996,884,066 | 13,461 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I wrote the following code in Google Colab, but it is unable to fetch the data. Kindly help.
```
from langchain.document_loaders.sitemap import SitemapLoader

sitemap_loader = SitemapLoader(web_path="https://langchain.readthedocs.io/sitemap.xml")
docs = sitemap_loader.load()
print(docs)
```

### Suggestion:
_No response_ | Issue: Sitemap Loader not fetching | https://api.github.com/repos/langchain-ai/langchain/issues/13460/comments | 7 | 2023-11-16T12:56:45Z | 2024-05-03T06:29:38Z | https://github.com/langchain-ai/langchain/issues/13460 | 1,996,800,557 | 13,460 |
[
"langchain-ai",
"langchain"
] | @dosu-bot
One more question about my code below.
This is my code:
```
loader = PyPDFLoader(file_name)
documents = loader.load()

llm = ChatOpenAI(temperature=0, model_name='gpt-3.5-turbo', callbacks=[StreamingStdOutCallbackHandler()], streaming=True)

text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=300,
    chunk_overlap=50,
)
chunks = text_splitter.split_documents(documents)

embeddings = OpenAIEmbeddings()

persist_directory = "C:\\Users\\Asus\\OneDrive\\Documents\\Vendolista"
knowledge_base = Chroma.from_documents(chunks, embeddings, persist_directory=persist_directory)
# save to disk
knowledge_base.persist()

# To delete the DB we created at first so that we can be sure that we will load from disk as fresh db
knowledge_base = None
new_knowledge_base = Chroma(persist_directory=persist_directory, embedding_function=embeddings)

knowledge_base = Chroma.from_documents(chunks, embeddings, persist_directory=persist_directory)
knowledge_base.persist()

prompt_template = """
Text: {context}

Question: {question}

Answer :
"""

PROMPT = PromptTemplate(
    template=prompt_template, input_variables=["context", "question"]
)

memory = ConversationBufferMemory(memory_key='chat_history', return_messages=True)

conversation = ConversationalRetrievalChain.from_llm(
    llm=llm,
    memory=memory,
    retriever=knowledge_base.as_retriever(search_type="similarity", search_kwargs={"k": 2}),
    chain_type="stuff",
    verbose=False,
    combine_docs_chain_kwargs={"prompt": PROMPT}
)

def main():
    chat_history = []
    while True:
        query = input("Ask me anything about the files (type 'exit' to quit): ")
        if query.lower() in ["exit"] and len(query) == 4:
            end_chat = "Thank you for visiting us! Have a nice day"
            print_letter_by_letter(end_chat)
            break
        if query != "":
            # with get_openai_callback() as cb:
            llm_response = conversation({"question": query})

if __name__ == "__main__":
    main()
```
Below is an example from my terminal in VS Code when I ask my AI model a question.
```
Ask me anything about the files (type 'exit' to quit): How do I delete a staff account
How can I delete a staff account?To delete a staff account, you need to have administrative privileges. As an admin, you have the ability to delete staff accounts when necessary.
Ask me anything about the files (type 'exit' to quit):
```
 | Output being rephrased | https://api.github.com/repos/langchain-ai/langchain/issues/13458/comments | 3 | 2023-11-16T12:34:29Z | 2024-02-22T16:06:23Z | https://github.com/langchain-ai/langchain/issues/13458 | 1,996,761,493 | 13,458 |
[
"langchain-ai",
"langchain"
] | @dosu-bot
Below is my code. In the line `llm_response = conversation({"query": question})`, what would happen if I used `conversation.run`, `conversation.apply`, `conversation.batch`, or `conversation.invoke` instead? When do I use each of those?
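For orientation, the calling conventions differ roughly as follows (toy stand-in below, not langchain's implementation; based on the documented behaviour, so verify against your version): `__call__`/`invoke` take a dict of inputs and return a dict of all outputs; `run` is a convenience wrapper that returns only the single output value; `apply` maps the chain over a list of input dicts; and `batch` is the newer LCEL equivalent of calling `invoke` over a list (potentially in parallel).

```python
class ToyChain:
    """Toy stand-in mirroring the documented calling conventions (not langchain)."""

    output_key = "result"

    def __call__(self, inputs: dict) -> dict:
        # __call__ / invoke: dict of inputs in, dict of outputs out.
        return {self.output_key: inputs["query"].upper()}

    def invoke(self, inputs: dict) -> dict:
        return self(inputs)

    def run(self, inputs: dict) -> str:
        # run: convenience wrapper returning only the single output value.
        return self(inputs)[self.output_key]

    def apply(self, input_list: list) -> list:
        # apply: map the chain over a list of input dicts.
        return [self(i) for i in input_list]

    def batch(self, input_list: list) -> list:
        # batch (LCEL): invoke over a list; real langchain may parallelise.
        return [self.invoke(i) for i in input_list]

chain = ToyChain()
print(chain({"query": "hi"}))      # {'result': 'HI'}
print(chain.run({"query": "hi"}))  # HI
print(chain.batch([{"query": "a"}, {"query": "b"}]))  # [{'result': 'A'}, {'result': 'B'}]
```

One concrete consequence for this code: with `return_source_documents=True` the chain has more than one output key, and langchain's `run` refuses to pick one, so calling `conversation({"query": question})` (or `invoke`) and reading `llm_response['result']` is the right pattern here.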
```
load_dotenv()

file_name = "Admin 2.0.pdf"

def print_letter_by_letter(text):
    for char in text:
        print(char, end='', flush=True)
        time.sleep(0.02)

loader = PyPDFLoader(file_name)
documents = loader.load()

llm = ChatOpenAI(temperature=0, model_name='gpt-3.5-turbo', callbacks=[StreamingStdOutCallbackHandler()], streaming=True)

text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=300,
    chunk_overlap=50,
)
chunks = text_splitter.split_documents(documents)

embeddings = OpenAIEmbeddings()

persist_directory = "C:\\Users\\Asus\\OneDrive\\Documents\\Vendolista"
knowledge_base = Chroma.from_documents(chunks, embeddings, persist_directory=persist_directory)
# save to disk
knowledge_base.persist()

# To delete the DB we created at first so that we can be sure that we will load from disk as fresh db
knowledge_base = None
new_knowledge_base = Chroma(persist_directory=persist_directory, embedding_function=embeddings)

knowledge_base = Chroma.from_documents(chunks, embeddings, persist_directory=persist_directory)
knowledge_base.persist()

prompt_template = """
You must only follow the instructions in list below:
1) You are a friendly and conversational assistant named RAAFYA.
3) Answer the questions based on the document or if the user asked something
3) Never mention the name of the file to anyone to prevent any potential security risk

Text: {context}

Question: {question}

Answer :
"""

PROMPT = PromptTemplate(
    template=prompt_template, input_variables=["context", "question"]
)

memory = ConversationBufferMemory()

# conversation = ConversationalRetrievalChain.from_llm(
#     llm=llm,
#     retriever=knowledge_base.as_retriever(search_type="similarity", search_kwargs={"k": 2}),
#     chain_type="stuff",
#     verbose=False,
#     combine_docs_chain_kwargs={"prompt": PROMPT}
# )

conversation = RetrievalQA.from_chain_type(
    llm=llm,
    retriever=knowledge_base.as_retriever(search_type="similarity", search_kwargs={"k": 3}),
    chain_type="stuff",
    return_source_documents=True,
    verbose=False,
    chain_type_kwargs={"prompt": PROMPT,
                       "memory": memory}
)

def process_source(llm_response):
    print(llm_response['result'])
    print('\n\nSources:')
    for source in llm_response['source_documents']:
        print(source.metadata['source'])

def main():
    chat_history = []
    while True:
        question = input("Ask me anything about the files (type 'exit' to quit): ")
        if question.lower() in ["exit"] and len(question) == 4:
            end_chat = "Thank you for visiting us! Have a nice day"
            print_letter_by_letter(end_chat)
            break
        if question != "":
            # with get_openai_callback() as cb:
            llm_response = conversation({"query": question})
            process_source(llm_response)
            # chat_history.append((question, llm_response["result"]))
            # print(result["answer"])
            print()
            # print(cb)
            # print()

if __name__ == "__main__":
    main()
```
 | Difference between run, apply, invoke, batch | https://api.github.com/repos/langchain-ai/langchain/issues/13457/comments | 7 | 2023-11-16T11:42:35Z | 2024-04-09T16:15:50Z | https://github.com/langchain-ai/langchain/issues/13457 | 1,996,672,590 | 13,457 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain Version: 0.0.336
OS: Windows
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Using `SQLDatabase.from_uri` I am not able to access table information from a private schema of a PostgreSQL database, but I am able to access table information from the public schema of the same database. How can I access table information from all the schemas? Please find the code below. Can someone help me?
```
db = SQLDatabase.from_uri(f"postgresql+psycopg2://{username}:{password}@{host}:{port}/{database}", sample_rows_in_table_info=5)
print(db.get_table_names())
```
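One approach worth trying is pointing PostgreSQL's `search_path` at the schemas you need via SQLAlchemy `connect_args` (building the option string is plain Python; whether `SQLDatabase.from_uri` forwards `engine_args`, and whether it also accepts a `schema=` keyword directly, should be verified against your langchain version):

```python
def search_path_connect_args(*schemas: str) -> dict:
    """Build psycopg2 connect_args that set the PostgreSQL search_path."""
    if not schemas:
        raise ValueError("at least one schema required")
    return {"options": f"-csearch_path={','.join(schemas)}"}

args = search_path_connect_args("private_schema", "public")
print(args)  # {'options': '-csearch_path=private_schema,public'}

# Hypothetical usage (kwargs to verify against your langchain version):
# db = SQLDatabase.from_uri(uri, engine_args={"connect_args": args},
#                           sample_rows_in_table_info=5)
```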
### Expected behavior
I expected it to give information about tables from all the schemas. | Langchain SQLDatabase is not able to access table information correctly from private schema of PostgreSQL database | https://api.github.com/repos/langchain-ai/langchain/issues/13455/comments | 3 | 2023-11-16T11:12:24Z | 2024-03-17T16:05:51Z | https://github.com/langchain-ai/langchain/issues/13455 | 1,996,620,397 | 13,455 |
[
"langchain-ai",
"langchain"
] | ### System Info
**System Information:**
- Python: 3.10.13
- Conda: 23.3.1
- Openai: 0.28.1
- LangChain: 0.0.330
- System: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz
- Platform: AWS SageMaker Notebook
**Issue:**
- A few hours ago everything was working fine, but now all of a sudden OpenAI isn't generating JSON-formatted results
- I can't upgrade OpenAI or LangChain because upgrading triggers another issue related to ChatCompletion (caused by OpenAI's recent update) which LangChain hasn't resolved
- Screenshot of Prompt and the output is below
**Screenshot:**

### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
**Just ask OpenAI to generate JSON-formatted output:**
prompt = """
You're analyzing an agent's calls performance with his customers.
Please generate an expressive agent analysis report by using the executive summary, metadata provided to you and by
comparing the agent emotions against the top agent's emotions. Don't make the report about numbers only.
Make it look like an expressive and qualitative summary but keep it to ten lines only. Also generate only
five training guidelines for the agent in concise bullet points.
The output should be in the following JSON format:
{
"Agent Analysis Report" : "..."
"Training Guidelines" : "..."
}
"""
### Expected behavior
**Output should be like this:**
Output = {
"Agent Performance Report": "Agent did not perform well....",
"Training Guidelines": "Agent should work on empathy..."
} | LangChain(OpenAI) not returning JSON format output | https://api.github.com/repos/langchain-ai/langchain/issues/13454/comments | 3 | 2023-11-16T10:42:38Z | 2024-02-18T13:34:12Z | https://github.com/langchain-ai/langchain/issues/13454 | 1,996,569,188 | 13,454 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Can you allow the use of an existing OpenAI assistant, instead of creating a new one every time, when using the OpenAI Runnable?
### Motivation
I don't want to clutter my assistant list with a bunch of clones.
### Your contribution
The developer should only provide the assistant ID in the Constructor for OpenAIRunnable | OpenAI Assitant | https://api.github.com/repos/langchain-ai/langchain/issues/13453/comments | 10 | 2023-11-16T10:40:22Z | 2024-03-10T06:08:22Z | https://github.com/langchain-ai/langchain/issues/13453 | 1,996,565,178 | 13,453 |
[
"langchain-ai",
"langchain"
] | ### System Info
Error info:
<CT_SectPr '<w:sectPr>' at 0x2c9fa6c50> is not in list
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create a new Word (.docx) file.
2. Write some content inside, then insert a table of contents (directory) somewhere and save the file.
3. Upload the .docx file.
4. Click the 'add file to Knowledge base' button.
5. Check the file that was added.
6. You will see "DocumentLoader, spliter is None" and a Document count of 0.
### Expected behavior
The .docx file should load successfully. | Can't load docx file with directory in content. | https://api.github.com/repos/langchain-ai/langchain/issues/13452/comments | 4 | 2023-11-16T10:17:19Z | 2024-02-22T16:06:28Z | https://github.com/langchain-ai/langchain/issues/13452 | 1,996,521,784 | 13,452
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Friends, how can I return the retrieved texts (the context)? My script is as follows:
```python
from langchain.document_loaders import PyPDFDirectoryLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS
from langchain.schema import Document
from langchain.chat_models.openai import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser
from langchain.schema.runnable import RunnablePassthrough
pdf_path = '/home/data/pdf'
pdf_loader = PyPDFDirectoryLoader(pdf_path)
docs = pdf_loader.load()
splitter = RecursiveCharacterTextSplitter(chunk_size=256,chunk_overlap=0)
split_docs = splitter.split_documents(docs)
docsearch = FAISS.from_documents(split_docs,OpenAIEmbeddings())
retriever = docsearch.as_retriever()
template = """Answer the question based only on the following context:
{context}
Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)
model = ChatOpenAI(temperature=0.2)
# RAG pipeline
chain = (
{"context": retriever, "question": RunnablePassthrough()}
| prompt
| model
| StrOutputParser()
)
print(chain.invoke("How to maintain the seat belts in the Jingke model?"))
```
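Not an official answer, but one pattern is to run retrieval once, keep the documents, and return them alongside the model's answer; in LCEL this is what `RunnableParallel` / `RunnablePassthrough.assign` are for. A framework-free sketch of the idea (the stub `retriever`/`llm` callables stand in for the real components):

```python
def answer_with_sources(question, retriever, llm):
    """Return both the model answer and the retrieved context chunks."""
    docs = retriever(question)                # list of retrieved text chunks
    context = "\n\n".join(docs)
    answer = llm(f"Context:\n{context}\n\nQuestion: {question}")
    return {"answer": answer, "context": docs}

# stand-in components for illustration only
fake_retriever = lambda q: ["Seat belts should be inspected regularly."]
fake_llm = lambda prompt: "Inspect the seat belts regularly."
result = answer_with_sources("How to maintain seat belts?", fake_retriever, fake_llm)
```

Applied to the chain above, the same shape lets the caller see both the answer and the exact chunks that produced it.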
### Suggestion:
_No response_ | Issue: How to return retrieval texts? | https://api.github.com/repos/langchain-ai/langchain/issues/13446/comments | 5 | 2023-11-16T08:05:09Z | 2024-02-22T16:06:33Z | https://github.com/langchain-ai/langchain/issues/13446 | 1,996,290,256 | 13,446 |
[
"langchain-ai",
"langchain"
] | ### System Info
python
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain import OpenAI
from langchain.chains import AnalyzeDocumentChain
from langchain.chains.summarize import load_summarize_chain
from langchain_experimental.agents.agent_toolkits import create_csv_agent
from langchain.chains.question_answering import load_qa_chain
from langchain.chat_models import ChatOpenAI
from langchain.agents import AgentType

model = ChatOpenAI(model="gpt-4")  # gpt-3.5-turbo, gpt-4
agent = create_csv_agent(model, "APT.csv", verbose=True)
agent.run("how many rows are there?")
```
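As a workaround, you can detect a decodable encoding first and pass it to pandas through `create_csv_agent(..., pandas_kwargs={"encoding": enc})` (the `pandas_kwargs` forwarding is how I understand `langchain_experimental`'s signature; please verify against your version). A stdlib-only sketch of the detection step:

```python
import os
import tempfile

def detect_usable_encoding(path, candidates=("utf-8", "cp1252", "latin-1")):
    """Return the first candidate encoding that can decode the whole file."""
    with open(path, "rb") as f:
        raw = f.read()
    for enc in candidates:
        try:
            raw.decode(enc)
            return enc
        except UnicodeDecodeError:
            continue
    return None

# demo: a file starting with the 0xb1 byte from the traceback below
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\xb1col1,col2\n1,2\n")
    demo_path = f.name
enc = detect_usable_encoding(demo_path)
os.unlink(demo_path)
```

Usage would then look roughly like `create_csv_agent(model, "APT.csv", pandas_kwargs={"encoding": enc}, verbose=True)`.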
### Expected behavior
UnicodeDecodeError Traceback (most recent call last)
[<ipython-input-25-0a5f8e8f5933>](https://localhost:8080/#) in <cell line: 11>()
9 model = ChatOpenAI(model="gpt-4") # gpt-3.5-turbo, gpt-4
10
---> 11 agent = create_csv_agent(model,"APT.csv",verbose=True)
12
13 agent.run("how many rows are there?")
11 frames
/usr/local/lib/python3.10/dist-packages/pandas/_libs/parsers.pyx in pandas._libs.parsers.raise_parser_error()
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb1 in position 0: invalid start byte | create_csv_agent UnicodeDecodeError('utf-8' ) | https://api.github.com/repos/langchain-ai/langchain/issues/13444/comments | 4 | 2023-11-16T06:55:28Z | 2024-02-22T16:06:38Z | https://github.com/langchain-ai/langchain/issues/13444 | 1,996,188,135 | 13,444 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain: latest (0.0.336)
### Who can help?
@hwchase17 (from git blame and from #13110)
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Code to reproduce (based on [code from docs](https://python.langchain.com/docs/modules/agents/agent_types/openai_tools))
```python
from langchain.agents import AgentExecutor
from langchain.agents.format_scratchpad.openai_tools import (
format_to_openai_tool_messages,
)
from langchain.agents.output_parsers.openai_tools import OpenAIToolsAgentOutputParser
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.tools import BearlyInterpreterTool, DuckDuckGoSearchRun
from langchain.tools.render import format_tool_to_openai_tool
lc_tools = [DuckDuckGoSearchRun()]
oai_tools = [format_tool_to_openai_tool(tool) for tool in lc_tools]
prompt = ChatPromptTemplate.from_messages(
[
("system", "You are a helpful assistant"),
("user", "{input}"),
MessagesPlaceholder(variable_name="agent_scratchpad"),
]
)
llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-1106",
streaming=True)
agent = (
{
"input": lambda x: x["input"],
"agent_scratchpad": lambda x: format_to_openai_tool_messages(
x["intermediate_steps"]
),
}
| prompt
| llm.bind(tools=oai_tools)
| OpenAIToolsAgentOutputParser()
)
agent_executor = AgentExecutor(agent=agent, tools=lc_tools, verbose=True)
agent_executor.invoke(
{"input": "What's the average of the temperatures in LA, NYC, and SF today?"}
)
```
Logs:
```
> Entering new AgentExecutor chain...
Traceback (most recent call last):
File "./test-functions.py", line 42, in <module>
agent_executor.invoke(
File "./venv/lib/python3.11/site-packages/langchain/chains/base.py", line 87, in invoke
return self(
^^^^^
File "./venv/lib/python3.11/site-packages/langchain/chains/base.py", line 310, in __call__
raise e
File "./venv/lib/python3.11/site-packages/langchain/chains/base.py", line 304, in __call__
self._call(inputs, run_manager=run_manager)
File "./venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 1245, in _call
next_step_output = self._take_next_step(
^^^^^^^^^^^^^^^^^^^^^
File "./venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 1032, in _take_next_step
output = self.agent.plan(
^^^^^^^^^^^^^^^^
File "./venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 461, in plan
output = self.runnable.invoke(inputs, config={"callbacks": callbacks})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "./venv/lib/python3.11/site-packages/langchain/schema/runnable/base.py", line 1427, in invoke
input = step.invoke(
^^^^^^^^^^^^
File "./venv/lib/python3.11/site-packages/langchain/schema/runnable/base.py", line 2787, in invoke
return self.bound.invoke(
^^^^^^^^^^^^^^^^^^
File "./venv/lib/python3.11/site-packages/langchain/chat_models/base.py", line 142, in invoke
self.generate_prompt(
File "./venv/lib/python3.11/site-packages/langchain/chat_models/base.py", line 459, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "./venv/lib/python3.11/site-packages/langchain/chat_models/base.py", line 349, in generate
raise e
File "./venv/lib/python3.11/site-packages/langchain/chat_models/base.py", line 339, in generate
self._generate_with_cache(
File "./venv/lib/python3.11/site-packages/langchain/chat_models/base.py", line 492, in _generate_with_cache
return self._generate(
^^^^^^^^^^^^^^^
File "./venv/lib/python3.11/site-packages/langchain/chat_models/openai.py", line 422, in _generate
return _generate_from_stream(stream_iter)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "./venv/lib/python3.11/site-packages/langchain/chat_models/base.py", line 61, in _generate_from_stream
generation += chunk
File "./venv/lib/python3.11/site-packages/langchain/schema/output.py", line 94, in __add__
message=self.message + other.message,
~~~~~~~~~~~~~^~~~~~~~~~~~~~~
File "./venv/lib/python3.11/site-packages/langchain/schema/messages.py", line 225, in __add__
additional_kwargs=self._merge_kwargs_dict(
^^^^^^^^^^^^^^^^^^^^^^^^
File "./venv/lib/python3.11/site-packages/langchain/schema/messages.py", line 138, in _merge_kwargs_dict
raise ValueError(
ValueError: Additional kwargs key tool_calls already exists in this message.
```
`left` and `right` from inside `_merge_kwargs_dict`:
```python
left = {'tool_calls': [{'index': 0, 'id': 'call_xhpbRSsUKkzsvtFgnkTXEFtAtHc', 'function': {'arguments': '', 'name': 'duckduckgo_search'}, 'type': 'function'}]}
right = {'tool_calls': [{'index': 0, 'id': None, 'function': {'arguments': '{"qu', 'name': None}, 'type': None}]}
```
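For context, here is a simplified, hypothetical reconstruction of the failing merge (condensed from the `_merge_kwargs_dict` behavior in the traceback): string values are concatenated across streamed chunks, but any non-string value, like the `tool_calls` list above, falls into the duplicate-key branch and raises:

```python
def merge_kwargs_dict(left: dict, right: dict) -> dict:
    merged = dict(left)
    for key, value in right.items():
        if key not in merged:
            merged[key] = value
        elif isinstance(merged[key], str) and isinstance(value, str):
            merged[key] += value          # strings accumulate chunk by chunk
        else:
            # lists such as `tool_calls` end up here on the second chunk
            raise ValueError(f"Additional kwargs key {key} already exists in this message.")
    return merged

left = {"tool_calls": [{"index": 0, "id": "call_abc", "type": "function"}]}
right = {"tool_calls": [{"index": 0, "id": None, "type": None}]}
try:
    merge_kwargs_dict(left, right)
    failed = False
except ValueError:
    failed = True
```

So any fix presumably needs list-aware merging (e.g. by the `index` field of each tool call) rather than the string-only concatenation shown here.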
### Expected behavior
No errors and same result as without `streaming=True`. | openai tools don't work with streaming=True | https://api.github.com/repos/langchain-ai/langchain/issues/13442/comments | 6 | 2023-11-16T05:02:17Z | 2023-12-16T07:55:17Z | https://github.com/langchain-ai/langchain/issues/13442 | 1,996,065,154 | 13,442 |
[
"langchain-ai",
"langchain"
] | ### System Info
This was working fine for my previous configuration,
langchain v0.0.225
chromadb v0.4.7
But now neither this is working, nor the latest version of both
langchain v0.0.336
chromadb v0.4.17
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I have the packages installed
Running these pieces of code
```
from langchain.document_loaders import TextLoader
from langchain.indexes import VectorstoreIndexCreator
loader = TextLoader(file_path)
index = VectorstoreIndexCreator().from_loaders([loader]) # this is where I am getting the error
```
OR
```
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.embeddings.openai import OpenAIEmbeddings
text_splitter = RecursiveCharacterTextSplitter()
splits = text_splitter.split_documents(docs)
embedding = OpenAIEmbeddings()
vectordb = Chroma.from_documents( # this is where I am getting the error
documents=splits,
embedding=embedding,
)
```
Here is the error
```
Expected EmbeddingFunction.__call__ to have the following signature: odict_keys(['self', 'input']), got odict_keys(['self', 'args', 'kwargs'])\nPlease see https://docs.trychroma.com/embeddings for details of the EmbeddingFunction interface.\nPlease note the recent change to the EmbeddingFunction interface: https://docs.trychroma.com/migration#migration-to-0416---november-7-2023 \n
```
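For anyone blocked on this: per the linked migration notes, Chroma 0.4.16 narrowed the embedding-function interface to exactly `__call__(self, input)`. Until the langchain wrapper catches up, a thin adapter with the new signature may work as a stopgap (a sketch; it assumes your embedder exposes langchain's `embed_documents` method):

```python
class ChromaEmbeddingAdapter:
    """Adapts a langchain-style embedder to Chroma's (self, input) interface."""

    def __init__(self, embedder):
        self._embedder = embedder

    def __call__(self, input):  # Chroma >= 0.4.16 checks this exact signature
        return self._embedder.embed_documents(input)

# stand-in embedder for illustration only
class FakeEmbedder:
    def embed_documents(self, texts):
        return [[0.0, 1.0, 2.0] for _ in texts]

vectors = ChromaEmbeddingAdapter(FakeEmbedder())(["hello", "world"])
```

The adapter would then be passed where Chroma expects an embedding function, instead of the raw `embeddings.embed_query`/`embed_documents` callable.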
### Expected behavior
Earlier a chromadb instance would be created, and I would be able to query it with my prompts. That is the expected behaviour. | langchain.vectorstores.Chroma support for EmbeddingFunction.__call__ update of ChromaDB | https://api.github.com/repos/langchain-ai/langchain/issues/13441/comments | 6 | 2023-11-16T04:32:31Z | 2024-03-18T16:06:34Z | https://github.com/langchain-ai/langchain/issues/13441 | 1,996,037,409 | 13,441 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Hi Guys,
Here I have made a new fork, [https://github.com/MetaSLAM/CyberChain](https://github.com/MetaSLAM/CyberChain), where I would like to combine LangChain's powerful abilities and GPT-4 to tackle real-world robotic challenges.
My questions here are mainly:
1. How can I set up GPT-4 in LangChain? I would like to leverage GPT-4's powerful visual inference.
2. Is there any suggestion for the memory system? Since the robot may travel through a large-scale environment for lifelong navigation, I would like to construct a memory system within LangChain to enhance its behavior during long-term operation.
Many thanks for your hard work; LangChain is definitely an impressive piece of work.
Max
### Motivation
Combine real-world robotic application with the LangChain framework.
### Your contribution
I will provide my research outcomes under our MetaSLAM organization (https://github.com/MetaSLAM); I hope this can benefit both the robotics and AI communities, targeting a general AI system. | Request new feature for Robotic Application | https://api.github.com/repos/langchain-ai/langchain/issues/13440/comments | 5 | 2023-11-16T04:11:33Z | 2024-04-18T16:35:43Z | https://github.com/langchain-ai/langchain/issues/13440 | 1,996,017,598 | 13,440
[
"langchain-ai",
"langchain"
] | ### System Info
langchain version: 0.0.336
mac
python3.8
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```py
import langchain
from langchain.chat_models.minimax import MiniMaxChat
print(langchain.__version__)
chat = MiniMaxChat(minimax_api_host="test_host", minimax_api_key="test_api_key", minimax_group_id="test_group_id")
assert chat._client
assert chat._client.host == "test_host"
assert chat._client.group_id == "test_group_id"
assert chat._client.api_key == "test_api_key"
```
output:
```sh
0.0.336
Traceback (most recent call last):
File "/Users/hulk/code/py/workhome/test_pydantic/test_langchain.py", line 4, in <module>
chat = MiniMaxChat(minimax_api_host="test_host", minimax_api_key="test_api_key", minimax_group_id="test_group_id")
File "/Users/hulk/miniforge3/envs/py38/lib/python3.8/site-packages/langchain/llms/minimax.py", line 121, in __init__
self._client = _MinimaxEndpointClient(
File "pydantic/main.py", line 357, in pydantic.main.BaseModel.__setattr__
ValueError: "MiniMaxChat" object has no field "_client"
```
### Expected behavior
Instantiation should succeed. | model init ValueError | https://api.github.com/repos/langchain-ai/langchain/issues/13438/comments | 3 | 2023-11-16T03:23:29Z | 2024-02-21T09:50:32Z | https://github.com/langchain-ai/langchain/issues/13438 | 1,995,978,394 | 13,438
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.266
### Who can help?
@hwchase17
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
## Behavior
When I call the `vector_store.similarity_search_with_score` function:
- Expected: The returned scores are proportional to the similarity; the higher the score, the higher the similarity.
- Actual: The scores are proportional to the distance.
### Problem
- When I call the `as_retriever` function with `score_threshold`, the behavior is wrong: the retriever keeps documents whose score is greater than or equal to `score_threshold`, so the top documents returned by pgvector (which have the *lowest* distance scores) are filtered out even though they are in fact the most similar.
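For reference, langchain's base vector store normalizes cosine distance with `relevance = 1 - distance` (see `_cosine_relevance_score_fn`); applying the same conversion before thresholding restores "higher score = more similar". A sketch:

```python
def cosine_distance_to_relevance(distance: float) -> float:
    """Cosine distance (0 = identical) -> relevance score (1 = identical)."""
    return 1.0 - distance

def filter_by_relevance(docs_with_distance, score_threshold: float):
    """Keep (doc, relevance) pairs whose converted score clears the threshold."""
    return [
        (doc, cosine_distance_to_relevance(dist))
        for doc, dist in docs_with_distance
        if cosine_distance_to_relevance(dist) >= score_threshold
    ]

results = filter_by_relevance([("close doc", 0.1), ("far doc", 0.9)], 0.5)
```

With this conversion, the closest documents (smallest distance) survive the `score_threshold` filter as expected.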
### Expected behavior
The returned scores from PGVector queries are proportional to the similarity.
In other words, the higher the score, the higher the similarity. | [PGVector] The scores returned by 'similarity_search_with_score' are NOT proportional to the similarity | https://api.github.com/repos/langchain-ai/langchain/issues/13437/comments | 5 | 2023-11-16T03:10:54Z | 2024-02-22T16:06:43Z | https://github.com/langchain-ai/langchain/issues/13437 | 1,995,965,954 | 13,437
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi, I'm trying to make a chatbot using `ConversationChain`. The prompt takes three variables: "history" from memory, the user input "input", and another variable, say "variable3". But it raises the error: "Got unexpected prompt input variables. The prompt expects ['history', 'input', 'variable3'], but got ['history'] as inputs from memory, and input as the normal input key."
This error doesn't occur if I use `LLMChain`. How can I avoid it if I want to use `ConversationChain`?
Thanks
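In case it helps others: one usual workaround is to pre-fill the extra variable so the chain only ever sees `history` and `input`; in langchain this corresponds to `prompt.partial(variable3=...)` (please verify against your version). A framework-free sketch of the same idea:

```python
TEMPLATE = "Persona: {variable3}\nHistory: {history}\nHuman: {input}\nAI:"

def partial_format(template: str, **fixed) -> str:
    """Fill in some placeholders now; leave the rest for the chain to fill later."""
    class KeepMissing(dict):
        def __missing__(self, key):
            return "{" + key + "}"
    return template.format_map(KeepMissing(**fixed))

partial_prompt = partial_format(TEMPLATE, variable3="a helpful pirate")
```

The partially filled template still contains `{history}` and `{input}`, which is exactly the two-variable shape `ConversationChain` validates for.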
### Suggestion:
_No response_ | Issue: problem with conversationchain take multiple inputs | https://api.github.com/repos/langchain-ai/langchain/issues/13433/comments | 4 | 2023-11-16T00:54:37Z | 2024-07-19T13:57:00Z | https://github.com/langchain-ai/langchain/issues/13433 | 1,995,838,321 | 13,433 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Although there are many output parsers in langchain, there is currently no way to customize the output of an individual tool within a chain agent, yet this may sometimes be necessary. Say a chain agent x has the tools Tool1, Tool2 and Tool3, and:
- the output of Tool2 should be customized and no longer processed by GPT again,
- the outputs of Tool1 and Tool3 should be normal and processed by GPT again.
In this case no solution could be found, because the output parser is attached to the agent rather than to any specific tool.
Should this feature be satisfied?
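Worth noting: langchain tools already have a per-tool `return_direct` flag that short-circuits the loop so the tool's output is returned without another LLM pass, which sounds close to the Tool2 case above. A toy sketch of that dispatch behavior (not langchain's actual implementation):

```python
class Tool:
    def __init__(self, name, func, return_direct=False):
        self.name, self.func, self.return_direct = name, func, return_direct

def run_step(tools, tool_name, tool_input, llm):
    tool = tools[tool_name]
    observation = tool.func(tool_input)
    if tool.return_direct:
        return observation                      # customized output, skip the LLM
    return llm(f"Observation: {observation}")   # normal path: LLM post-processes

tools = {
    "Tool1": Tool("Tool1", lambda x: f"raw1:{x}"),
    "Tool2": Tool("Tool2", lambda x: f"raw2:{x}", return_direct=True),
}
fake_llm = lambda prompt: f"polished({prompt})"
direct = run_step(tools, "Tool2", "q", fake_llm)
processed = run_step(tools, "Tool1", "q", fake_llm)
```

If `return_direct` is not flexible enough (e.g. the output also needs reformatting), wrapping the tool's `func` in a post-processing function is another per-tool option.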
### Motivation
When multiple tools in an agent chain, and some of the tools should be output customized.
### Your contribution
no | Custom Tool Output in a Chain | https://api.github.com/repos/langchain-ai/langchain/issues/13432/comments | 2 | 2023-11-16T00:46:31Z | 2024-02-22T16:06:48Z | https://github.com/langchain-ai/langchain/issues/13432 | 1,995,831,335 | 13,432 |
[
"langchain-ai",
"langchain"
] | ### System Info
```
langchain==0.0.335
python==3.10
```
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
From the LCEL interface [docs](https://python.langchain.com/docs/expression_language/interface) we have the following snippet:
```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
model = ChatOpenAI()
prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")
chain = prompt | model
for s in chain.stream({"topic": "bears"}):
print(s.content, end="", flush=True)
```
From the token usage tracking [docs](https://python.langchain.com/docs/modules/model_io/chat/token_usage_tracking) we have the snippet
```python
from langchain.callbacks import get_openai_callback
from langchain.chat_models import ChatOpenAI
llm = ChatOpenAI(model_name="gpt-4")
with get_openai_callback() as cb:
result = llm.invoke("Tell me a joke")
print(cb)
```
that yields the following output:
```
Tokens Used: 24
Prompt Tokens: 11
Completion Tokens: 13
Successful Requests: 1
Total Cost (USD): $0.0011099999999999999
```
I am trying to combine the two concepts in the following snippet
```python
from langchain.prompts import ChatPromptTemplate
from langchain.callbacks import get_openai_callback
from langchain.chat_models import ChatOpenAI
llm = ChatOpenAI(model_name="gpt-4")
prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")
chain = prompt | llm
with get_openai_callback() as cb:
for s in chain.stream({"topic": "bears"}):
print(s.content, end="", flush=True)
print(cb)
```
but get the following result:
```
Tokens Used: 0
Prompt Tokens: 0
Completion Tokens: 0
Successful Requests: 0
Total Cost (USD): $0
```
Is token counting (and pricing) while streaming not supported at the moment?
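(At the time of writing, OpenAI's streaming responses do not include a `usage` block, so the callback may simply have nothing to read.) A rough stopgap is to accumulate streamed chunks yourself with a custom handler; note this counts chunks, which only approximates token counts, and exact counts would need something like `tiktoken`. A framework-free sketch:

```python
class StreamTokenCounter:
    """Accumulates streamed chunks; mirrors a callback's on_llm_new_token hook."""
    def __init__(self):
        self.completion_chunks = 0
        self.text = ""

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        self.completion_chunks += 1
        self.text += token

counter = StreamTokenCounter()
for chunk in ["Why", " did", " the", " bear", "?"]:  # stand-in for chain.stream(...)
    counter.on_llm_new_token(chunk)
```

In langchain this logic would live in a custom callback handler passed to the chat model, rather than in the loop itself.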
### Expected behavior
The following but with the actual values for tokens and cost.
```
Tokens Used: 0
Prompt Tokens: 0
Completion Tokens: 0
Successful Requests: 0
Total Cost (USD): $0
``` | `get_openai_callback()` does not count tokens when LCEL chain used with `.stream()` method | https://api.github.com/repos/langchain-ai/langchain/issues/13430/comments | 13 | 2023-11-16T00:05:51Z | 2024-07-24T13:34:05Z | https://github.com/langchain-ai/langchain/issues/13430 | 1,995,786,367 | 13,430 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain 0.0.335
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When the status code is not 200, [TextGen](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/llms/textgen.py#L223) only prints the code and returns an empty string.
This is a problem if I want to use TextGen with `with_retry()` and retry non-200 responses such as 5xx.
- If I edit the textgen.py source code myself and raise an Exception [here](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/llms/textgen.py#L223), then my `with_retry()` works as desired.
I tried to handle this by raising an empty string error in my Output Parser, but the TextGen `with_retry()` is not triggering.
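The change I'm suggesting, sketched in isolation (hypothetical names; the real fix would live in `textgen.py`'s response handling):

```python
class TextGenHTTPError(Exception):
    """Raised when the text-generation endpoint returns a non-200 status."""

def handle_response(status_code: int, body: str) -> str:
    if status_code != 200:
        # raising (instead of returning "") lets with_retry() see the failure
        raise TextGenHTTPError(f"textgen endpoint returned status {status_code}")
    return body

ok = handle_response(200, "generated text")
try:
    handle_response(503, "")
    raised = False
except TextGenHTTPError:
    raised = True
```

With `requests`, the equivalent one-liner would be `response.raise_for_status()`, which raises `requests.HTTPError` for 4xx/5xx responses.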
### Expected behavior
I'd like TextGen to raise an Exception when the status code is not 200.
Perhaps consider using the `HTTPError` from `requests` package. | TextGen is not raising Exception when response status code is not 200 | https://api.github.com/repos/langchain-ai/langchain/issues/13416/comments | 4 | 2023-11-15T21:01:18Z | 2024-06-01T00:07:34Z | https://github.com/langchain-ai/langchain/issues/13416 | 1,995,562,229 | 13,416 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi,
I have been learning LangChain for the last month and I have been struggling for the last week to "_guarantee_" that `ConversationalRetrievalChain` only answers based on the knowledge added via embeddings. I don't know if I am missing some LangChain configuration or if it is just a matter of tuning my prompt. I will add my code here (simplified, not the actual one, but I will try to preserve everything important).
```
chat = AzureChatOpenAI(
deployment_name="chat",
model_name="gpt-3.5-turbo",
openai_api_version=os.getenv('OPENAI_API_VERSION'),
openai_api_key=os.getenv('OPENAI_API_KEY'),
openai_api_base=os.getenv('OPENAI_API_BASE'),
openai_api_type="azure",
temperature=0
)
embeddings = OpenAIEmbeddings(deployment_id="text-embedding-ada-002", chunk_size=1)
acs = AzureSearch(azure_search_endpoint=os.getenv('AZURE_COGNITIVE_SEARCH_SERVICE_NAME'),
azure_search_key=os.getenv('AZURE_COGNITIVE_SEARCH_API_KEY'),
index_name=os.getenv('AZURE_COGNITIVE_SEARCH_INDEX_NAME'),
embedding_function=embeddings.embed_query)
custom_template = """You work for CompanyX which sells things located in United States.
If you don't know the answer, just say that you don't. Don't try to make up an answer.
Base your questions only on the knowledge provided here. Do not use any outside knowledge.
Given the following chat history and a follow up question, rephrase the follow up question to be a standalone question, in its original language.
Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:
"""
CUSTOM_QUESTION_PROMPT = PromptTemplate.from_template(custom_template)
memory = ConversationBufferMemory(memory_key="chat_history", input_key="question", return_messages=True)
qa = ConversationalRetrievalChain.from_llm(
llm=chat,
retriever=acs.as_retriever(),
condense_question_prompt=CUSTOM_QUESTION_PROMPT,
memory=memory
)
```
When I ask it something like `qa({"question": "What is an elephant?"})` it still answers it, although it is totally unrelated to the knowledge base added to the AzureSearch via embeddings.
I tried different `condense_question_prompt`, with different results, but nothing near _good_. I've been reading the documentation and API for the last 3 weeks, but nothing else seems to help in this case. I'd appreciate any suggestions.
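One thing I noticed while experimenting: `condense_question_prompt` only controls the question-rewriting step, so grounding instructions placed there never reach the answering step; they belong in the QA prompt instead (in langchain, passed via `combine_docs_chain_kwargs={"prompt": ...}`, if I read the API correctly). A framework-free sketch of the two stages:

```python
# stage 1: rewrite the follow-up into a standalone question
CONDENSE = ("Given the chat history:\n{history}\n"
            "Rephrase this follow-up as a standalone question: {question}")

# stage 2: the answering prompt is where "context only" must be enforced
QA = ("Answer using ONLY the context below. If the answer is not in the "
      "context, say \"I don't know.\"\n\nContext:\n{context}\n\nQuestion: {question}")

def build_stage_prompts(history, question, context):
    return (CONDENSE.format(history=history, question=question),
            QA.format(context=context, question=question))

condense_p, qa_p = build_stage_prompts(
    "(empty)", "What is an elephant?", "CompanyX sells things in the United States."
)
```

So in the code above, moving the "base your answers only on the knowledge provided" instruction into a QA prompt supplied through `combine_docs_chain_kwargs` should help with the elephant example.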
### Suggestion:
_No response_ | Issue: Making sure `ConversationalRetrievalChain` only answer based on the retriever information | https://api.github.com/repos/langchain-ai/langchain/issues/13414/comments | 9 | 2023-11-15T19:19:26Z | 2024-05-30T06:05:59Z | https://github.com/langchain-ai/langchain/issues/13414 | 1,995,385,588 | 13,414 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Add IDE auto-complete to `langchain.llm` module
Currently, IntelliJ-based IDEs (PyCharm) interpret an LLM model to be `typing.Any`. Below is an example for the `GPT4All` package

### Motivation
Not having auto-complete on an `LLM` class can be a bit frustrating as a Python developer who works in an IntelliJ product. I've been performing a more direct import of the LLM models to handle this instead:
```python
from langchain.llms.gpt4all import GPT4All
```
Adding support for the `langchain.llms` API would improve the developer experience with the top level `langchain.llms` API
### Your contribution
The existing implementation uses a lazy-loading technique implemented in #11237 to speed up imports. Maintaining this performance is important for whatever solution is implemented. I believe this can be achieved with some imports behind an `if TYPE_CHECKING` block. If the below `Proposed Implementation` is acceptable I'd be happy to open a PR to add this functionality.
<details><summary>Current Implementation</summary>
<p>
`langchain.llms.__init__.py` (abbreviated)
```python
from typing import Any, Callable, Dict, Type
from langchain.llms.base import BaseLLM
def _import_anthropic() -> Any:
from langchain.llms.anthropic import Anthropic
return Anthropic
def _import_gpt4all() -> Any:
from langchain.llms.gpt4all import GPT4All
return GPT4All
def __getattr__(name: str) -> Any:
if name == "Anthropic":
return _import_anthropic()
elif name == "GPT4All":
return _import_gpt4all()
else:
raise AttributeError(f"Could not find: {name}")
__all__ = [
"Anthropic",
"GPT4All",
]
```
</p>
</details>
<details><summary>Proposed Implementation</summary>
<p>
`langchain.llms.__init__.py` (abbreviated)
```python
from typing import Any, Callable, Dict, Type, TYPE_CHECKING
from langchain.llms.base import BaseLLM
if TYPE_CHECKING:
from langchain.llms.anthropic import Anthropic
from langchain.llms.gpt4all import GPT4All
def _import_anthropic() -> "Anthropic":
from langchain.llms.anthropic import Anthropic
return Anthropic
def _import_gpt4all() -> "GPT4All":
from langchain.llms.gpt4all import GPT4All
return GPT4All
def __getattr__(name: str) -> "BaseLLM":
if name == "Anthropic":
return _import_anthropic()
elif name == "GPT4All":
return _import_gpt4all()
else:
raise AttributeError(f"Could not find: {name}")
__all__ = [
"Anthropic",
"GPT4All",
]
```
</p>
</details>
<details><summary>IntelliJ Screenshot</summary>
<p>
Here's a screenshot after implementing the above `Proposed Implementation`
<img width="588" alt="image" src="https://github.com/langchain-ai/langchain/assets/49741340/1647f3b5-238e-4137-8fa4-f30a2234fdb0">
</p>
</details> | IDE Support - Python Package - langchain.llms | https://api.github.com/repos/langchain-ai/langchain/issues/13411/comments | 1 | 2023-11-15T18:14:01Z | 2024-02-21T16:06:19Z | https://github.com/langchain-ai/langchain/issues/13411 | 1,995,289,924 | 13,411 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain = 0.0.335
openai = 1.2.4
python = 3.11
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [x] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Modified example code (https://python.langchain.com/docs/integrations/llms/azure_openai) from langchain to access AzureOpenAI inferencing endpoint
```
import os
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_VERSION"] = "2023-05-15"
os.environ["OPENAI_API_BASE"] = "..."
os.environ["OPENAI_API_KEY"] = "..."
# Import Azure OpenAI
from langchain.llms import AzureOpenAI
# Create an instance of Azure OpenAI
# Replace the deployment name with your own
llm = AzureOpenAI(
deployment_name="td2",
model_name="text-davinci-002",
)
# Run the LLM
llm("Tell me a joke")
```
I get the following error:
TypeError: Missing required arguments; Expected either ('model' and 'prompt') or ('model', 'prompt' and 'stream') arguments to be given
If I modify the last line as follows:
`llm("Tell me a joke", model="text-davinci-002") `
i get a different error:
Completions.create() got an unexpected keyword argument 'engine'
It appears to be passing all keyword arguments through to the create method, the first of which is 'engine'; this and other keyword arguments appear to be added by the wrapper code itself.
### Expected behavior
I expect the model to return a response, such as is shown in the example. | Missing required arguments; Expected either ('model' and 'prompt') or ('model', 'prompt' and 'stream') | https://api.github.com/repos/langchain-ai/langchain/issues/13410/comments | 14 | 2023-11-15T18:00:46Z | 2024-05-10T16:08:15Z | https://github.com/langchain-ai/langchain/issues/13410 | 1,995,270,190 | 13,410 |
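A small, hedged helper illustrating the version mismatch in play (the threshold below is an assumption based on the openai 0.x→1.x API break reported above; pinning `openai<1.0` was a common interim workaround at the time):

```python
# Sketch: detect whether the installed openai major version is past the
# 0.x API that this LangChain wrapper still expects.
def needs_openai_downgrade(installed_version: str) -> bool:
    """True when openai >= 1.0 is installed (legacy `engine` kwarg rejected)."""
    major = int(installed_version.split(".")[0])
    return major >= 1

# e.g. the versions from the report above:
print(needs_openai_downgrade("1.2.4"))   # openai 1.x -> incompatible here
print(needs_openai_downgrade("0.28.1"))  # openai 0.x -> legacy path works
```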
[
"langchain-ai",
"langchain"
] | ### System Info
Name: langchain
Version: 0.0.327
Name: chromadb
Version: 0.4.8
Summary: Chroma.
### Who can help?
@hwchase17 I believe chromadb doesn't close the file handle during persistence, making it difficult to use on cloud services like Modal Labs. What about adding a close method (or similar) to make sure this doesn't happen?
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am using ModalLabs with a simple example:
```
@stub.function(volumes={CHROMA_DIR: stub.volume})
def test_chroma():
import chromadb
from importlib.metadata import version
print("ChromaDB: %s" % version('chromadb'))
# Initialize ChromaDB client
client = chromadb.PersistentClient(path=SENTENCE_DIR.as_posix())
# Create the collection
neo_collection = client.create_collection(name="neo")
# Adding raw documents
neo_collection.add(
documents=["I know kung fu.", "There is no spoon."], ids=["quote_1", "quote_2"]
)
# Counting items in a collection
item_count = neo_collection.count()
print(f"Count of items in collection: {item_count}")
stub.volume.commit()
```
Error:
`grpclib.exceptions.GRPCError: (<Status.FAILED_PRECONDITION: 9>, 'there are open files preventing the operation', None)
`
### Expected behavior
There shouldn't be any open file handles.
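As a sketch of the requested behavior, here is a hypothetical close-on-exit wrapper around a stand-in client (chromadb 0.4.8 exposes no public `close()`, so all names below are illustrative only):

```python
import contextlib

class FakeClient:
    """Stand-in for a persistent client that holds a file handle open."""
    def __init__(self, path):
        self.f = open(path, "wb")

    def close(self):
        if not self.f.closed:
            self.f.close()

@contextlib.contextmanager
def persistent_client(path):
    client = FakeClient(path)
    try:
        yield client
    finally:
        client.close()  # release the handle before e.g. stub.volume.commit()
```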
| Persistent client open file handles | https://api.github.com/repos/langchain-ai/langchain/issues/13409/comments | 3 | 2023-11-15T17:59:14Z | 2024-02-21T16:06:24Z | https://github.com/langchain-ai/langchain/issues/13409 | 1,995,268,049 | 13,409 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain on master branch
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
```
from langchain.schema.runnable import RunnableLambda
import asyncio

def idchain_sync(__input):
    print(f'sync chain call: {__input}')
    return __input

async def idchain_async(__input):
    print(f'async chain call: {__input}')
    return __input

idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async)

def func(__input):
    return idchain

asyncio.run(RunnableLambda(func).ainvoke('toto'))
# prints 'sync chain call: toto' instead of 'async chain call: toto'
```
### Expected behavior
LCEL's route can cause chains to be silently run synchronously, while the user uses ainvoke...
When calling a RunnableLambda A returning a chain B with ainvoke, we would expect the new chain B to be called with ainvoke;
However, if the function provided to RunnableLambda A is not async, then the chain B will be called with invoke, silently causing all the rest of the chain to be called synchronously.
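A dependency-free sketch of the coercion the report expects (a hypothetical helper, not LangChain's actual code): coroutine functions should be awaited and plain functions called, so the async branch is preferred whenever it exists:

```python
import asyncio
import inspect

async def run_maybe_async(fn, value):
    """Await coroutine functions; call plain functions synchronously."""
    if inspect.iscoroutinefunction(fn):
        return await fn(value)
    return fn(value)

def sync_step(x):
    return f"sync chain call: {x}"

async def async_step(x):
    return f"async chain call: {x}"

print(asyncio.run(run_maybe_async(async_step, "toto")))  # async path taken
```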
| RunnableLambda: returned runnable called synchronously when using ainvoke | https://api.github.com/repos/langchain-ai/langchain/issues/13407/comments | 2 | 2023-11-15T17:27:49Z | 2023-11-28T11:18:27Z | https://github.com/langchain-ai/langchain/issues/13407 | 1,995,223,902 | 13,407 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi there,
I am doing research on creating a PDF reader AI which can answer users' questions based on the uploaded PDF and the prompt the user entered. I got it working so far using the OpenAI package, but now I want to make it more advanced by using ChatOpenAI with the LangChain schema package (SystemMessage, HumanMessage, and AIMessage). I am somewhat lost on where I should start and what adjustments to make. Could you help me with that?
Below is my code so far:
```python
## Imports
import streamlit as st
import os
from apikey import apikey
import pickle
from PyPDF2 import PdfReader
from streamlit_extras.add_vertical_space import add_vertical_space
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.llms import OpenAI
from langchain.chains.question_answering import load_qa_chain
from langchain.callbacks import get_openai_callback
from langchain.chat_models import ChatOpenAI
from langchain.schema import (SystemMessage, HumanMessage, AIMessage)

os.environ['OPENAI_API_KEY'] = apikey

## User Interface
# Side Bar
with st.sidebar:
    st.title('🚀 Zi-GPT Version 2.0')
    st.markdown('''
    ## About
    This app is an LLM-powered chatbot built using:
    - [Streamlit](https://streamlit.io/)
    - [LangChain](https://python.langchain.com/)
    - [OpenAI](https://platform.openai.com/docs/models) LLM model
    ''')
    add_vertical_space(5)
    st.write('Made with ❤️ by Zi')

# Main Page
def main():
    st.header("Zi's PDF Helper: Chat with PDF")

    # upload a PDF file
    pdf = st.file_uploader("Please upload your PDF here", type='pdf')
    # st.write(pdf)

    # read PDF
    if pdf is not None:
        pdf_reader = PdfReader(pdf)

        # split document into chunks
        # also can use text split: good for PDFs that do not contain charts and visuals
        sections = []
        for page in pdf_reader.pages:
            # Split the page text by paragraphs (assuming two newlines indicate a new paragraph)
            page_sections = page.extract_text().split('\n\n')
            sections.extend(page_sections)
        chunks = sections
        # st.write(chunks)

        # embeddings
        file_name = pdf.name[:-4]

        # convert into pickle file
        # wb: open in binary mode
        # rb: read the file
        # Note: only create new vectors for new files uploaded
        if os.path.exists(f"{file_name}.pkl"):
            with open(f"{file_name}.pkl", "rb") as f:
                VectorStore = pickle.load(f)
            st.write('Embeddings Loaded from the Disk')
        else:
            embeddings = OpenAIEmbeddings(model="text-embedding-ada-002")
            VectorStore = FAISS.from_texts(chunks, embedding=embeddings)
            with open(f"{file_name}.pkl", "wb") as f:
                pickle.dump(VectorStore, f)
            st.write('Embeddings Computation Completed')

    # Create chat history
    if pdf:
        # generate chat history
        chat_history_file = f"{pdf.name}_chat_history.pkl"

        # load history if it exists
        if os.path.exists(chat_history_file):
            with open(chat_history_file, "rb") as f:
                chat_history = pickle.load(f)
        else:
            chat_history = []

        # Initialize chat_history in session_state if not present
        if 'chat_history' not in st.session_state:
            st.session_state.chat_history = []

        # Check if 'prompt' is in session state
        if 'last_input' not in st.session_state:
            st.session_state.last_input = ''

        # User Input
        current_prompt = st.session_state.get('user_input', '')
        prompt_placeholder = st.empty()
        prompt = prompt_placeholder.text_area("Ask questions about your PDF:", value=current_prompt, placeholder="Send a message", key="user_input")
        submit_button = st.button("Submit")

        if submit_button and prompt:
            # Update the last input in session state
            st.session_state.last_input = prompt

            docs = VectorStore.similarity_search(query=prompt, k=3)

            # llm = OpenAI(temperature=0.9, model_name='gpt-3.5-turbo')
            chat = ChatOpenAI(model='gpt-3.5-turbo', temperature=0.7)
            chain = load_qa_chain(llm=chat, chain_type="stuff")
            with get_openai_callback() as cb:
                response = chain.run(input_documents=docs, question=prompt)
                print(cb)
            # st.write(response)
            # st.write(docs)

            # Add to chat history
            st.session_state.chat_history.append((prompt, response))

            # Save chat history
            with open(chat_history_file, "wb") as f:
                pickle.dump(st.session_state.chat_history, f)

            # Clear the input after processing
            prompt_placeholder.text_area("Ask questions about your PDF:", value='', placeholder="Send a message", key="pdf_prompt")

        # Display the entire chat
        chat_content = ""
        for user_msg, bot_resp in st.session_state.chat_history:
            chat_content += f"<div style='background-color: #222222; color: white; padding: 10px;'>**You:** {user_msg}</div>"
            chat_content += f"<div style='background-color: #333333; color: white; padding: 10px;'>**Zi GPT:** {bot_resp}</div>"
        st.markdown(chat_content, unsafe_allow_html=True)

if __name__ == '__main__':
    main()
```
### Suggestion:
_No response_ | Issue: Need Help - Implement ChatOpenAI into my LangChain Research | https://api.github.com/repos/langchain-ai/langchain/issues/13406/comments | 3 | 2023-11-15T17:12:08Z | 2023-11-28T21:44:27Z | https://github.com/langchain-ai/langchain/issues/13406 | 1,995,196,688 | 13,406 |
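A minimal, dependency-free sketch of the System/Human message pattern asked about above (plain dicts stand in for the `langchain.schema` message classes; all names here are illustrative):

```python
# Assemble a system + user message pair for a PDF-grounded question.
def build_messages(context: str, question: str) -> list:
    return [
        {"role": "system",
         "content": "You answer questions using only the provided PDF context."},
        {"role": "user",
         "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ]

msgs = build_messages("Refunds within 30 days.", "What is the refund window?")
print(msgs[0]["role"], "->", msgs[1]["content"].splitlines()[-1])
```

With the real classes, the same shape would be `[SystemMessage(content=...), HumanMessage(content=...)]` passed to a `ChatOpenAI` instance.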
[
"langchain-ai",
"langchain"
] | ### Feature request
Currently no support for multi-modal embeddings from VertexAI exists. However, I did stumble upon this experimental implementation of [GoogleVertexAIMultimodalEmbeddings](https://js.langchain.com/docs/modules/data_connection/experimental/multimodal_embeddings/google_vertex_ai) in LangChain for Javascript. Hence, I think this would also be a very nice feature to implement in the Python version of LangChain.
### Motivation
Using multi-modal embeddings could positively affect applications that rely on information of different modalities. One example could be product search in a web catalogue. Since more cloud providers are making [endpoints for multi-modal embeddings](https://cloud.google.com/vertex-ai/docs/generative-ai/embeddings/get-multimodal-embeddings) available, it makes sense to incorporate these into LangChain as well. The embeddings of these endpoints could be stored in vector stores and hence be used in downstream applications that are built using LangChain.
### Your contribution
I can contribute to this feature. | Add support for multimodal embeddings from Google Vertex AI | https://api.github.com/repos/langchain-ai/langchain/issues/13400/comments | 4 | 2023-11-15T15:02:35Z | 2024-02-23T16:06:37Z | https://github.com/langchain-ai/langchain/issues/13400 | 1,994,956,885 | 13,400 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Elastic supports natively to have multiple DenseVectors in a document, which can be selected during query time / search. The langchain vector search interface enables to pass additional keyword args. But at the moment, the implementation of _search in the ElasticSearch implementation does not consider the `vector_query_field` variable, which could be passed through the kwargs. Furthermore, there should be a solution, to allow a document to have multiple text fields that get passed as a queryable vector, not just the standard text field.
### Motivation
If you have multiple vector fields in one index, this feature could simplify querying the right one, as is natively allowed in Elastic. In the current implementation one would need to add additional vector fields in the metadata and change the `vector_query_field` of the whole ElasticsearchStore class every time before calling the search. This is not a clean solution and I would vote for a more generic and clean one.
### Your contribution
I could help by implementing this issue, although I need to state that I am not an expert in Elastic. I saw this issue when we tried to use an existing index in Elastic to add and retrieve Documents within the Langchain Framework. | ElasticSearch allow for multiple vector_query_fields than default text & make it a kwarg in search functions | https://api.github.com/repos/langchain-ai/langchain/issues/13398/comments | 1 | 2023-11-15T14:42:10Z | 2024-03-13T19:57:51Z | https://github.com/langchain-ai/langchain/issues/13398 | 1,994,918,113 | 13,398 |
[
"langchain-ai",
"langchain"
] | ### System Info
Not relevant.
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Use any LLM relying on stop words with special regex characters.
For example instanciate a `HuggingFacePipeline` LLM instance with [openchat](https://huggingface.co/openchat/openchat_3.5) model. This model uses `<|end_of_turn|>` stop words.
Since the `llms.utils.enforce_stop_tokens()` function doesn't escape the provided stop word strings, the `|` chars are interpreted as part of the regex instead of the stop word. So in this case any single `<` char in the output would trigger the split.
### Expected behavior
Stop words should be escaped with `re.escape()` so the split only happens on the complete words. | Missing escape in `llms.utils.enforce_stop_tokens()` | https://api.github.com/repos/langchain-ai/langchain/issues/13397/comments | 3 | 2023-11-15T14:36:39Z | 2023-11-17T22:09:17Z | https://github.com/langchain-ai/langchain/issues/13397 | 1,994,907,050 | 13,397 |
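A sketch of the proposed fix (same shape as `llms.utils.enforce_stop_tokens`, with the escaping applied; treat it as an illustration rather than the library's exact source):

```python
import re

def enforce_stop_tokens(text: str, stop: list) -> str:
    """Cut the text at the first occurrence of any stop word.

    re.escape ensures `<|end_of_turn|>` matches literally instead of
    `|` acting as regex alternation.
    """
    pattern = "|".join(re.escape(s) for s in stop)
    return re.split(pattern, text)[0]

print(enforce_stop_tokens("Hello <world <|end_of_turn|> ignored", ["<|end_of_turn|>"]))
# -> 'Hello <world '
```

Without the escaping, the unescaped pattern `<|end_of_turn|>` splits on any `<` or `>`, which is exactly the reported bug.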
[
"langchain-ai",
"langchain"
] | ### Feature request
Make `Message` and/or `Memory` to support `Timestamp`
### Motivation
To provide context and clarity regarding the timing of conversation. This can be helpful for reference and coordination, especially when discussing time-sensitive topics.
I noticed that [one agent in opengpts](https://opengpts-example-vz4y4ooboq-uc.a.run.app/) has supported this feature.
### Your contribution
I have not made a clear outline of adding the `Timestamps` feature.
The following is some possible ways to support it for discussion:
Proposal 1: Add `Timestamps` to `Message Schema`. This way every `Memory Entity` should support `Timestamps`.
Proposal 2: Create a new `TimestampedMemory`. This way has a better backward compatibility. | [Enhancement] Timestamp supported Message and/or Memory | https://api.github.com/repos/langchain-ai/langchain/issues/13393/comments | 2 | 2023-11-15T13:21:24Z | 2023-11-15T13:40:39Z | https://github.com/langchain-ai/langchain/issues/13393 | 1,994,769,118 | 13,393 |
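A minimal sketch of Proposal 2 above (a hypothetical `TimestampedMessage`; field names are illustrative, not an existing LangChain class):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TimestampedMessage:
    """Hypothetical message carrying its creation time."""
    role: str      # e.g. "human" or "ai"
    content: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

msg = TimestampedMessage(role="human", content="hello")
print(msg.timestamp.isoformat())
```

A timestamped memory class could then wrap a list of these, preserving backward compatibility by exposing the plain messages when timestamps are not needed.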
[
"langchain-ai",
"langchain"
] | ### System Info
Windows 11, Langchain 0.0327, Python 3.10.
The Doc2txtLoader does not work for web paths, as a PermissionError occurs when self.tempfile attempts to write content to the tempfile:
```python
if not os.path.isfile(self.file_path) and self._is_valid_url(self.file_path):
    r = requests.get(self.file_path)
    if r.status_code != 200:
        raise ValueError(
            "Check the url of your file; returned status code %s"
            % r.status_code
        )
    self.web_path = self.file_path
    self.temp_file = tempfile.NamedTemporaryFile()
    self.temp_file.write(r.content)  # <-- fails here with PermissionError
    self.file_path = self.temp_file.name
elif not os.path.isfile(self.file_path):
    raise ValueError("File path %s is not a valid file or url" % self.file_path)
```
It produces a Permission Error: _[Errno 13] Permission denied:_ as the file is already open, and will be deleted on close.
I can work around this by replacing this section of the code with the following:
```python
if not os.path.isfile(self.file_path) and self._is_valid_url(self.file_path):
    self.temp_dir = tempfile.TemporaryDirectory()
    _, suffix = os.path.splitext(self.file_path)
    temp_pdf = os.path.join(self.temp_dir.name, f"tmp{suffix}")
    self.web_path = self.file_path
    if not self._is_s3_url(self.file_path):
        r = requests.get(self.file_path, headers=self.headers)
        if r.status_code != 200:
            raise ValueError(
                "Check the url of your file; returned status code %s"
                % r.status_code
            )
        with open(temp_pdf, mode="wb") as f:
            f.write(r.content)
    self.file_path = str(temp_pdf)
elif not os.path.isfile(self.file_path):
    raise ValueError("File path %s is not a valid file or url" % self.file_path)
```
This is the method that works for the PDF loader.
The workaround is fine for now but will cause a problem if I need to update the langchain version any time in the future.
### Who can help?
@hwchase17 @eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.document_loaders import Docx2txtLoader

loader = Docx2txtLoader("https://file-examples.com/wp-content/storage/2017/02/file-sample_100kB.docx")
doc = loader.load()[0]
```

```
Traceback (most recent call last):
  File "C:\Program Files\JetBrains\PyCharm Community Edition 2022.3.2\plugins\python-ce\helpers\pydev\pydevconsole.py", line 364, in runcode
    coro = func()
  File "<input>", line 3, in <module>
  File "C:\Users\crawleyb\PycharmProjects\SharepointGPT\venv\lib\site-packages\langchain\document_loaders\word_document.py", line 55, in load
    return [
  File "C:\Users\crawleyb\PycharmProjects\SharepointGPT\venv\lib\site-packages\docx2txt\docx2txt.py", line 76, in process
    zipf = zipfile.ZipFile(docx)
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\zipfile.py", line 1251, in __init__
    self.fp = io.open(file, filemode)
PermissionError: [Errno 13] Permission denied: 'C:\\Users\\
```
### Expected behavior
The DocumentLoader should be able to get the contents of the docx file, loader.load()[0] should return a Document object. | Doc2txtLoader not working for web paths | https://api.github.com/repos/langchain-ai/langchain/issues/13391/comments | 3 | 2023-11-15T10:15:47Z | 2024-02-21T16:06:34Z | https://github.com/langchain-ai/langchain/issues/13391 | 1,994,466,938 | 13,391 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
I'm trying to work through the streaming parameters around run_manager and callbacks.
Here's a minimal setup of what I'm trying to establish
```
class MyTool(BaseTool):
    name: str = "my_extra_tool"

    async def _arun(self, query: str, run_manager: Optional[AsyncCallbackManagerForToolRun] = None) -> str:
        """Use the tool asynchronously."""
        nested_manager = run_manager.get_child()  # run_manager doesn't have an `on_llm_start` method, only supports `on_tool_end` / `on_tool_error` and `on_text` (which has no callback in LangchainTracer)
        llm_run_manager = await nested_manager.on_llm_start({"llm": self.llm, "name": self.name + "_substep"}, prompts=[query])  # need to give
        # do_stuff, results in main_response
        main_response = "<gets created in the tool, might be part of streaming output in the future>"
        await llm_run_manager[0].on_llm_new_token(main_response)  # can't use llm_run_manager directly as it's a list
        await llm_run_manager[0].on_llm_end(response=main_response)
```
I'm seeing that on_llm_new_token callback is being called in my custom callback handler, but I don't see the response in Langsmith.
The docs aren't fully clear on how to make sure these run_ids are propagated correctly.
### Idea or request for content:
It would be fantastic to have a detailed example of how to correctly nest runs with arbitrary tools. | DOC: Clarify how to handle runs and linked calls with run_managers | https://api.github.com/repos/langchain-ai/langchain/issues/13390/comments | 5 | 2023-11-15T09:59:55Z | 2024-02-21T16:06:39Z | https://github.com/langchain-ai/langchain/issues/13390 | 1,994,439,783 | 13,390 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version: 0.0.321
Python: 3.10
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am using Langchain to generate and execute SQL queries for MySql database.
The SQL Query generated is enclosed in single quotes
Generate SQL Query: **'**"SELECT * FROM EMPLOYEE WHERE ID = 123"**'**
Expected SQL Query: "SELECT * FROM EMPLOYEE WHERE ID = 123"
Hence though the query is correct, sql alchemy is unable to execute query and gives Programming error
(pymysql.err.ProgrammingError) (1064, 'You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use
### Expected behavior
Expected SQL Query should be without enclosing single quotes.
I did some debugging as looks like we get single quotes while invoking predict method of LLM - https://github.com/langchain-ai/langchain/blob/master/libs/experimental/langchain_experimental/sql/base.py#L156 | MySQL : SQL Query generated contains enclosing single quotes leading to SQL Alchemy giving Programming Error, 1064 | https://api.github.com/repos/langchain-ai/langchain/issues/13387/comments | 4 | 2023-11-15T09:09:20Z | 2023-11-15T09:51:03Z | https://github.com/langchain-ai/langchain/issues/13387 | 1,994,353,668 | 13,387 |
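A hedged workaround sketch for the symptom above: strip stray outer quote layers from the generated SQL before executing it (a hypothetical post-processing helper, not part of LangChain):

```python
def strip_outer_quotes(sql: str) -> str:
    """Remove matching layers of outer quotes from generated SQL."""
    sql = sql.strip()
    while len(sql) >= 2 and sql[0] == sql[-1] and sql[0] in ("'", '"'):
        sql = sql[1:-1].strip()
    return sql

print(strip_outer_quotes('\'"SELECT * FROM EMPLOYEE WHERE ID = 123"\''))
# -> SELECT * FROM EMPLOYEE WHERE ID = 123
```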
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```
# imports inferred from usage (not present in the original snippet)
import os
import asyncio

import openai
import uvicorn
from fastapi import FastAPI
from fastapi.responses import FileResponse, StreamingResponse
from langchain.agents import initialize_agent, tool
from langchain.callbacks import AsyncIteratorCallbackHandler
from langchain.chains import RetrievalQA
from langchain.document_loaders import TextLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma

openai.proxy = {
    "http": "http://127.0.0.1:7890",
    "https": "http://127.0.0.1:7890"
}

callback = AsyncIteratorCallbackHandler()
llm = OpenAI(
    openai_api_key=os.environ["OPENAI_API_KEY"],
    temperature=0,
    streaming=True,
    callbacks=[callback]
)

embeddings = OpenAIEmbeddings()

# faq
loader = TextLoader("static/faq/ecommerce_faq.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)

docsearch = Chroma.from_documents(texts, embeddings)
faq_chain = RetrievalQA.from_chain_type(
    llm,
    chain_type="stuff",
    retriever=docsearch.as_retriever(),
)

@tool("FAQ")
def faq(input) -> str:
    """useful for when you need to answer questions about shopping policies, like return policy, shipping policy, etc."""
    print('faq input', input)
    return faq_chain.acall(input)

tools = [faq]

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

conversation_agent = initialize_agent(
    tools,
    llm,
    agent="conversational-react-description",
    memory=memory,
    verbose=True,
)

async def wait_done(fn, event):
    try:
        await fn
    except Exception as e:
        print('error', e)
        # event.set()
    finally:
        event.set()

async def call_openai(question):
    # chain = faq(question)
    chain = conversation_agent.acall(question)
    coroutine = wait_done(chain, callback.done)
    task = asyncio.create_task(coroutine)

    async for token in callback.aiter():
        # print('token', token)
        yield f"{token}"

    await task

app = FastAPI()

@app.get("/")
async def homepage():
    return FileResponse('static/index.html')

@app.post("/ask")
def ask(body: dict):
    return StreamingResponse(call_openai(body['question']), media_type="text/event-stream")

if __name__ == "__main__":
    uvicorn.run(host="127.0.0.1", port=8888, app=app)
```
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
Thought: Do I need to use a tool? No
AI: 你好!很高兴认识你!
> Finished chain.
INFO: 127.0.0.1:65193 - "POST /ask HTTP/1.1" 200 OK
conversation_agent <coroutine object Chain.acall at 0x1326b78a0>
coroutine <async_generator object AsyncIteratorCallbackHandler.aiter at 0x133b49340>
> Entering new AgentExecutor chain...
Thought: Do I need to use a tool? Yes
Action: FAQ
Action Input: 如何更改帐户信息error faq() takes 1 positional argument but 2 were given
### Suggestion:
_No response_ | Issue: <error faq() takes 1 positional argument but 2 were given> | https://api.github.com/repos/langchain-ai/langchain/issues/13383/comments | 8 | 2023-11-15T04:05:00Z | 2024-02-21T16:06:44Z | https://github.com/langchain-ai/langchain/issues/13383 | 1,993,997,381 | 13,383 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi guys,
Here is the situation: I have 3 tools to use one by one, but their outputs are not of the same type. Say:
Tool 1: outputs normally, and its output can be further processed by GPT in the chain.
Tool 2: outputs a special format, meaning its output should not be passed through GPT again, because that would corrupt the output format and data.
Tool 3: outputs normally, like Tool 1.
Given this, I found it hard to get the correct answer. If all the tools are set to return_direct=False, the answer from Tool 2 will not be right; and if I set return_direct=True, the chain of Tools 1->2->3 is lost...
What should I do?
Any help will be highly appreciated.
Best
### Suggestion:
_No response_ | Custom Tool Output | https://api.github.com/repos/langchain-ai/langchain/issues/13382/comments | 4 | 2023-11-15T03:27:22Z | 2024-02-21T16:06:49Z | https://github.com/langchain-ai/langchain/issues/13382 | 1,993,967,484 | 13,382 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain version:
Platform: WSL for Windows (Linux 983G3J3 5.15.90.1-microsoft-standard-WSL2)
Python version: 3.10.6
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1a. Create a project in GCP and deploy an open source model from Vertex AI Model Garden (follow the provided Colab notebook to deploy to an endpoint)
1b. Instantiate a VertexAIModelGarden object
```python
llm = VertexAIModelGarden(project=PROJECT_ID, endpoint_id=ENDPOINT_ID)
```
2. Create a prompt string
```python
prompt = "This is an example prompt"
```
3. Call the generate method on the VertexAIModelGarden object
```python
llm.generate([prompt])
```
4. The following error will be produced:
```python
../python3.10/site-packages/langchain/llms/vertexai.py", line 452, in <listcomp>
[Generation(text=prediction[self.result_arg]) for prediction in result]
TypeError: string indices must be integers
```
### Expected behavior
Expecting the generate method to return an LLMResult object that contains the model's response in the 'generations' property
In order to align with the Vertex AI API, the `_generate` method should iterate through `response.predictions` and set the `text` property of each `Generation` to the iteration variable, since `response.predictions` is a list containing the output strings.
```python
for result in response.predictions:
    generations.append(
        [Generation(text=result)]
    )
``` | VertexAIModelGarden _generate method not in sync with VertexAI API | https://api.github.com/repos/langchain-ai/langchain/issues/13370/comments | 8 | 2023-11-14T21:55:57Z | 2024-03-18T16:06:29Z | https://github.com/langchain-ai/langchain/issues/13370 | 1,993,638,093 | 13,370 |
[
"langchain-ai",
"langchain"
] | ### System Info
openai==1.2.4
langchain==0.0.325
llama_index==0.8.69
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
/usr/local/lib/python3.10/site-packages/llama_index/indices/base.py:102: in from_documents
return cls(
/usr/local/lib/python3.10/site-packages/llama_index/indices/vector_store/base.py:49: in __init__
super().__init__(
/usr/local/lib/python3.10/site-packages/llama_index/indices/base.py:71: in __init__
index_struct = self.build_index_from_nodes(nodes)
/usr/local/lib/python3.10/site-packages/llama_index/indices/vector_store/base.py:254: in build_index_from_nodes
return self._build_index_from_nodes(nodes, **insert_kwargs)
/usr/local/lib/python3.10/site-packages/llama_index/indices/vector_store/base.py:235: in _build_index_from_nodes
self._add_nodes_to_index(
/usr/local/lib/python3.10/site-packages/llama_index/indices/vector_store/base.py:188: in _add_nodes_to_index
nodes = self._get_node_with_embedding(nodes, show_progress)
/usr/local/lib/python3.10/site-packages/llama_index/indices/vector_store/base.py:100: in _get_node_with_embedding
id_to_embed_map = embed_nodes(
/usr/local/lib/python3.10/site-packages/llama_index/indices/utils.py:137: in embed_nodes
new_embeddings = embed_model.get_text_embedding_batch(
/usr/local/lib/python3.10/site-packages/llama_index/embeddings/base.py:250: in get_text_embedding_batch
embeddings = self._get_text_embeddings(cur_batch)
/usr/local/lib/python3.10/site-packages/llama_index/embeddings/langchain.py:82: in _get_text_embeddings
return self._langchain_embedding.embed_documents(texts)
/usr/local/lib/python3.10/site-packages/langchain/embeddings/openai.py:490: in embed_documents
return self._get_len_safe_embeddings(texts, engine=self.deployment)
/usr/local/lib/python3.10/site-packages/langchain/embeddings/openai.py:374: in _get_len_safe_embeddings
response = embed_with_retry(
/usr/local/lib/python3.10/site-packages/langchain/embeddings/openai.py:100: in embed_with_retry
retry_decorator = _create_retry_decorator(embeddings)
/usr/local/lib/python3.10/site-packages/langchain/embeddings/openai.py:47: in _create_retry_decorator
retry_if_exception_type(openai.error.Timeout)
E AttributeError: module 'openai' has no attribute 'error'
```
### Expected behavior
I suppose it should run, I'll provide some reproducible code here in a minute. | module 'openai' has no attribute 'error' | https://api.github.com/repos/langchain-ai/langchain/issues/13368/comments | 15 | 2023-11-14T20:37:27Z | 2024-05-15T21:00:26Z | https://github.com/langchain-ai/langchain/issues/13368 | 1,993,528,820 | 13,368 |
[
"langchain-ai",
"langchain"
] | ### System Info
After reaching about a million embedding records in Postgres, everything became ridiculously slow and Postgres CPU usage went to 100%.
fix was simple:
```
CREATE INDEX CONCURRENTLY langchain_pg_embedding_collection_id ON langchain_pg_embedding(collection_id);
CREATE INDEX CONCURRENTLY langchain_pg_collection_name ON langchain_pg_collection(name);
```
I think it's important to include index creation in the basic setup.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Just run PGVector, and it will create tables without indices.
### Expected behavior
Should create indices | PGVector don't have indices what kills postgres in production | https://api.github.com/repos/langchain-ai/langchain/issues/13365/comments | 2 | 2023-11-14T19:38:34Z | 2024-02-20T16:05:56Z | https://github.com/langchain-ai/langchain/issues/13365 | 1,993,436,169 | 13,365 |
[
"langchain-ai",
"langchain"
] | ### System Info
CosmosDBChatMessageHistory is wiping out the session's messages on each run.
The library needs to avoid recreating the item for the same session; otherwise this cannot be used for a chatbot, which is stateless between requests.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import os
from time import perf_counter
from langchain.chat_models import AzureChatOpenAI
from langchain.memory.chat_message_histories import CosmosDBChatMessageHistory
from langchain.schema import (
    SystemMessage,
    HumanMessage
)
from logger_setup import logger

llm = AzureChatOpenAI(
    deployment_name=os.getenv('OPENAI_GPT4_DEPLOYMENT_NAME'),
    model=os.getenv('OPENAI_GPT4_MODEL_NAME')
)

cosmos_history = CosmosDBChatMessageHistory(
    cosmos_endpoint=os.getenv('AZ_COSMOS_ENDPOINT'),
    cosmos_database=os.getenv('AZ_COSMOS_DATABASE'),
    cosmos_container=os.getenv('AZ_COSMOS_CONTAINER'),
    session_id='1234',
    user_id='user001',
    connection_string=os.getenv('AZ_COSMOS_CS')
)
# cosmos_history.prepare_cosmos()

sys_msg = SystemMessage(content='You are a helpful bot that can run various SQL queries')
# cosmos_history.add_message(sys_msg)
human_msg = HumanMessage(content='Can you tell me how things are done in database')
cosmos_history.add_message(human_msg)

messages = []
messages.append(sys_msg)
messages.append(human_msg)

start_time = perf_counter()
response = llm.predict_messages(messages=messages)
end_time = perf_counter()
logger.info('Total time taken %d s', (end_time - start_time))
print(response)

messages.append(response)
cosmos_history.add_message(response)
```
### Expected behavior
This needs to persist across subsequent web service calls with the same session id.

Also, this code does not retrieve data:

```python
try:
    logger.info(f"Reading item with session_id: {self.session_id}, user_id: {self.user_id}")
    item = self._container.read_item(
        item=self.session_id,
        partition_key=self.user_id
    )
except CosmosHttpResponseError as ex:
    logger.error(f"Error reading item from CosmosDB: {ex}")
    return
```

But using SQL does:

```python
query = f"SELECT * FROM c WHERE c.id = '{self.session_id}' AND c.user_id = '{self.user_id}'"
items = list(self._container.query_items(query, enable_cross_partition_query=True))
if items:
    item = items[0]
    # Continue with reading the item
else:
    logger.info("Item not found in CosmosDB")
    return
```
| CosmosDBHistoryMessage | https://api.github.com/repos/langchain-ai/langchain/issues/13361/comments | 6 | 2023-11-14T18:58:13Z | 2024-06-08T16:07:30Z | https://github.com/langchain-ai/langchain/issues/13361 | 1,993,370,326 | 13,361 |
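A plausible diagnosis (an assumption, not verified against the SDK): in the reproduction above, `prepare_cosmos()` is commented out, so the existing history is never loaded before `add_message` upserts the in-memory message list, and the upsert replaces the stored item with only the new messages. The class and field names below are illustrative stand-ins, not the real Cosmos SDK; the sketch only shows why skipping the load step wipes earlier turns:

```python
class FakeStore:
    """Stand-in for a Cosmos container that upserts whole documents."""
    def __init__(self):
        self.doc = {"messages": []}

    def upsert(self, messages):
        # An upsert writes the full list, not a delta.
        self.doc = {"messages": list(messages)}

def run_turn(store, new_message, load_first):
    # A stateless web handler must reload existing history on each request.
    messages = list(store.doc["messages"]) if load_first else []
    messages.append(new_message)
    store.upsert(messages)

store = FakeStore()
run_turn(store, "hi", load_first=True)
run_turn(store, "how are you", load_first=False)  # second request, no load
# store.doc["messages"] == ["how are you"] -- the first turn was wiped
```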
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
notion page properties
https://developers.notion.com/reference/page-property-values
The current version of the Notion DB loader doesn't support the following properties for metadata:
- `checkbox`
- `email`
- `number`
- `select`
### Suggestion:
I would like to make a PR to fix this issue if it's okay. | Issue: Notion DB loader for doesn't supports some properties | https://api.github.com/repos/langchain-ai/langchain/issues/13356/comments | 0 | 2023-11-14T17:20:22Z | 2023-11-15T04:31:13Z | https://github.com/langchain-ai/langchain/issues/13356 | 1,993,198,924 | 13,356 |
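A sketch of how those four types could be mapped to metadata values, following the response shapes in the Notion page-property-values reference linked above (treat the exact shapes as assumptions to verify against the API):

```python
def parse_property(prop: dict):
    """Extract a metadata value for the property types the loader skips."""
    ptype = prop.get("type")
    if ptype == "checkbox":
        return prop["checkbox"]          # bool
    if ptype == "email":
        return prop["email"]             # str or None
    if ptype == "number":
        return prop["number"]            # int/float or None
    if ptype == "select":
        select = prop["select"]          # {"name": ...} or None
        return select["name"] if select else None
    return None

# parse_property({"type": "number", "number": 42}) -> 42
# parse_property({"type": "select", "select": {"name": "In progress"}}) -> "In progress"
```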
[
"langchain-ai",
"langchain"
] | ### Feature request
Supplying a non-default parameter to a model is not something that should always trigger a warning. It should be easy to suppress that specific warning.
### Motivation
The logging gets full of those warnings just because you are supplying not default parameters such as "top_p", "frequency_penalty" or "presence_penalty". | Ability to suppress warning when supplying a "not default parameter" to OpenAI Chat Models | https://api.github.com/repos/langchain-ai/langchain/issues/13351/comments | 3 | 2023-11-14T15:43:35Z | 2024-02-20T16:06:00Z | https://github.com/langchain-ai/langchain/issues/13351 | 1,993,012,205 | 13,351 |
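If the "is not default parameter" message is emitted through the stdlib `warnings` module, a message filter can silence just that warning. This is a sketch under that assumption; some LangChain versions emit it via `logging` instead, in which case a `logging.Filter` on the relevant logger is the equivalent:

```python
import warnings

# The message regex targets only this warning, so others still surface.
warnings.filterwarnings("ignore", message=r".*is not default parameter.*")
```

With the filter installed, warnings whose message matches the pattern are dropped while unrelated warnings are still shown.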
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
In the first section of my code I am setting a memory key based on the session_id, which is a UUID.
I am receiving the following error; any idea what the issue is?
> Traceback (most recent call last):
> File "/Users/habuhassan004/Desktop/VIK/10x-csv-chat/llm.py", line 62, in <module>
> print(chat_csv(session_id,None,'Hi how are you?'))
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/Users/habuhassan004/Desktop/VIK/10x-csv-chat/llm.py", line 37, in chat_csv
> response = conversation({"question": question})
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/Users/habuhassan004/Desktop/VIK/10x-csv-chat/venv/lib/python3.11/site-packages/langchain/chains/base.py", line 286, in __call__
> inputs = self.prep_inputs(inputs)
> ^^^^^^^^^^^^^^^^^^^^^^^^
> File "/Users/habuhassan004/Desktop/VIK/10x-csv-chat/venv/lib/python3.11/site-packages/langchain/chains/base.py", line 443, in prep_inputs
> self._validate_inputs(inputs)
> File "/Users/habuhassan004/Desktop/VIK/10x-csv-chat/venv/lib/python3.11/site-packages/langchain/chains/base.py", line 195, in _validate_inputs
> raise ValueError(f"Missing some input keys: {missing_keys}")
> ValueError: Missing some input keys: {'session_id'}
here is the input:

```python
session_id = uuid.uuid4()
print(chat_csv(session_id, None, 'Hi how are you?'))
```
Here is the code:
```python
def chat_csv(
    session_id: UUID = None,
    file_path: str = None,
    question: str = None
):
    # session_id = str(session_id)
    memory = ConversationBufferMemory(
        memory_key=str(session_id),
        return_messages=True
    )
    if file_path == None:
        template = """You are a nice chatbot having a conversation with a human.
Previous conversation:
{session_id}
New human question: {question}
Response:"""
        prompt = PromptTemplate.from_template(template)
        conversation = LLMChain(
            llm=OpenAI(temperature=0),
            prompt=prompt,
            verbose=False,
            memory=memory
        )
        response = conversation({"question": question})
```
### Suggestion:
_No response_ | Issue: <Please write a comprehensive title after the 'Issue: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/13349/comments | 7 | 2023-11-14T13:53:39Z | 2024-02-20T16:06:06Z | https://github.com/langchain-ai/langchain/issues/13349 | 1,992,791,286 | 13,349 |
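A likely cause, stated as a diagnosis rather than a verified fix: `PromptTemplate.from_template` turns every `{placeholder}` into a required chain input, and memory only supplies the variable named by its `memory_key`. Here `memory_key` is the raw UUID string while the template literally declares `{session_id}`, so the chain still demands a `session_id` input at call time. Using one fixed key on both sides (for example `memory_key="chat_history"` with `{chat_history}` in the template) avoids this. The stdlib sketch below shows how the required keys fall out of the template text:

```python
from string import Formatter

def template_input_keys(template: str) -> set:
    """Placeholders in a format-style template, as LangChain-style input keys."""
    return {name for _, name, _, _ in Formatter().parse(template) if name}

template = """Previous conversation:
{session_id}
New human question: {question}
Response:"""

keys = template_input_keys(template)
# keys == {"session_id", "question"}, so calling the chain with only
# {"question": ...} is missing 'session_id' unless memory fills it in.
```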
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```python
llm = ChatOpenAI(
    openai_api_key=os.environ["OPENAI_API_KEY"],
    temperature=0,
    streaming=True,
    callbacks=[AsyncIteratorCallbackHandler()]
)

embeddings = OpenAIEmbeddings()
loader = TextLoader("static/faq/ecommerce_faq.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
docsearch = Chroma.from_documents(texts, embeddings)

faq_chain = RetrievalQA.from_chain_type(
    llm,
    chain_type="stuff",
    retriever=docsearch.as_retriever(),
)

async def call_openai(question):
    callback = AsyncIteratorCallbackHandler()
    # coroutine = wait_done(model.agenerate(messages=[[HumanMessage(content=question)]]), callback.done)
    coroutine = wait_done(faq_chain.arun(question), callback.done)
    task = asyncio.create_task(coroutine)
    async for token in callback.aiter():
        print('token', token)
        yield f"data: {token}\n\n"
    await task

app = FastAPI()

@app.get("/")
async def homepage():
    return FileResponse('static/index.html')

@app.post("/ask")
def ask(body: dict):
    return StreamingResponse(call_openai(body['question']), media_type="text/event-stream")

if __name__ == "__main__":
    uvicorn.run(host="127.0.0.1", port=8888, app=app)
```
I can run this normally using model.agenerate, but it fails when using faq_chain.arun. What is the reason for this?
### Suggestion:
_No response_ | Issue: <Please write a comprehensive title after the 'Issue: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/13344/comments | 4 | 2023-11-14T11:02:21Z | 2024-02-20T16:06:11Z | https://github.com/langchain-ai/langchain/issues/13344 | 1,992,514,531 | 13,344 |
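A likely reason, offered as a diagnosis to verify: the `AsyncIteratorCallbackHandler` attached in the `ChatOpenAI` constructor is a different object from the local `callback` created inside `call_openai`, so tokens never reach the iterator being consumed. Passing the same instance at call time (for example `faq_chain.arun(question, callbacks=[callback])`, if your LangChain version supports per-call callbacks) is the usual fix. The pure-`asyncio` sketch below mimics the handler's queue/iterator mechanics to show why producer and consumer must share one object:

```python
import asyncio

class TokenStream:
    """Minimal stand-in for AsyncIteratorCallbackHandler."""
    def __init__(self):
        self.queue = asyncio.Queue()
        self.done = asyncio.Event()

    async def on_token(self, token):
        await self.queue.put(token)

    async def aiter(self):
        # Drain tokens until the producer signals completion.
        while not (self.done.is_set() and self.queue.empty()):
            try:
                token = await asyncio.wait_for(self.queue.get(), timeout=0.1)
            except asyncio.TimeoutError:
                continue
            yield token

async def fake_llm(stream):
    for tok in ["Hel", "lo"]:
        await stream.on_token(tok)
    stream.done.set()

async def main():
    stream = TokenStream()  # the SAME object both feeds and drains tokens
    task = asyncio.create_task(fake_llm(stream))
    chunks = [f"data: {t}\n\n" async for t in stream.aiter()]
    await task
    return chunks

# asyncio.run(main()) == ["data: Hel\n\n", "data: lo\n\n"]
```

If the producer were handed a different `TokenStream` than the consumer iterates, `aiter()` would simply time out empty, which matches the "nothing streams" symptom above.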
[
"langchain-ai",
"langchain"
] | ### Feature request
Is there a way to autoawq support for vllm, Im 'setting quantization to 'awq' but its not working
### Motivation
faster inference
### Your contribution
N/A | autoawq for vllm | https://api.github.com/repos/langchain-ai/langchain/issues/13343/comments | 3 | 2023-11-14T10:47:09Z | 2024-03-13T19:57:18Z | https://github.com/langchain-ai/langchain/issues/13343 | 1,992,482,519 | 13,343 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain Version: 0.0.335
Platform: Windows 10
Python Version = 3.11.3
IDE: VS Code
### Who can help?
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
MWE:
A PowerPoint presentation named "test.pptx" in the same folder as the script. Content is a title slide with sample text.
```python
from langchain.document_loaders import UnstructuredPowerPointLoader
def ingest_docs():
loader= UnstructuredPowerPointLoader('test.pptx')
docs = loader.load()
return docs
```
I get a problem in 2/3 tested environments:
1. Running the above MWE with `ingest_docs()` in a simple python script will yield no problem. The content of the PowerPoint (text on the title slide) is displayed.
2. Running the above MWE in a Jupyter Notebook with `ingest_docs()` will cause the cell to run indefinetely. Trying to interrupt the kernel results in: `Interrupting the kernel timed out`. A fix is to restart the kernel.
3. Running the MWE in Streamlit (see code below) will result the spawned server to die immediately. (The cmd Window simply closes)
```python
import streamlit as st
from langchain.document_loaders import UnstructuredPowerPointLoader
def ingest_docs():
[as above]
st.write(ingest_docs())
```
### Expected behavior
I expect the MWE to work the same in the Notebook and Streamlit environments just as in the simple Python script. | PowerPoint loader crashing | https://api.github.com/repos/langchain-ai/langchain/issues/13342/comments | 9 | 2023-11-14T10:24:09Z | 2024-02-25T16:05:42Z | https://github.com/langchain-ai/langchain/issues/13342 | 1,992,439,429 | 13,342 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi all. Can anyone help to answer why there's no init function (constructor) for classes in LangChain, while Pydantic occurs everywhere. It seems hard for IDEs to jump to related codes. Is that one kind of anti-pattern of Python programming?
### Suggestion:
_No response_ | Why no constructor (init function) in Langchain while pydantics are everywhere? | https://api.github.com/repos/langchain-ai/langchain/issues/13340/comments | 3 | 2023-11-14T09:14:09Z | 2024-02-14T00:35:21Z | https://github.com/langchain-ai/langchain/issues/13340 | 1,992,311,627 | 13,340 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Hi all, on main website [langchain.com](https://www.langchain.com/), above the folder, to the right of hero section both boxes (python and js) point to the [python docs](https://python.langchain.com/docs/get_started/introduction).
### Idea or request for content:
It's better and clearer if each block points to the related lang. Of course, it's not urgent or critical :) | DOC: Wrong documentation link | https://api.github.com/repos/langchain-ai/langchain/issues/13336/comments | 2 | 2023-11-14T08:10:40Z | 2024-02-20T16:06:20Z | https://github.com/langchain-ai/langchain/issues/13336 | 1,992,202,637 | 13,336 |
[
"langchain-ai",
"langchain"
] | ### Feature request
LangChain supports GET functions, but there is no support for POST functions. This feature request proposes the addition of POST API functionality to enhance the capabilities of LangChain.
### Motivation
The motivation behind this feature request is to extend the capabilities of LangChain to handle not only GET requests but also POST requests.
### Your contribution
I am willing to contribute to the development of this feature. I will carefully follow the contributing guidelines and provide a pull request to implement the POST API functionality. | Add POST API Functionality to LangChain | https://api.github.com/repos/langchain-ai/langchain/issues/13334/comments | 5 | 2023-11-14T07:49:34Z | 2024-05-03T18:24:54Z | https://github.com/langchain-ai/langchain/issues/13334 | 1,992,174,212 | 13,334 |
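For context, the HTTP primitive such a tool would wrap is small. Here is a stdlib-only sketch that builds (without sending) a JSON POST request; the function name and URL are illustrative, not a proposed LangChain API:

```python
import json
import urllib.request

def build_post_request(url: str, payload: dict) -> urllib.request.Request:
    """Construct a JSON POST request; sending it would be `urlopen(req)`."""
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        url,
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_post_request("https://example.com/api", {"q": "hello"})
# req.get_method() == "POST"; req.data == b'{"q": "hello"}'
```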
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I couldn't find any documentation on this, please help.
How can I stream the response from Ollama?
### Suggestion:
_No response_ | Issue: <How to do streaming response from Ollama??> | https://api.github.com/repos/langchain-ai/langchain/issues/13333/comments | 6 | 2023-11-14T06:21:20Z | 2024-06-14T21:05:29Z | https://github.com/langchain-ai/langchain/issues/13333 | 1,992,069,370 | 13,333 |
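One option worth checking in your LangChain version is the `.stream()` method on the LLM. Independent of LangChain, Ollama's own HTTP API streams newline-delimited JSON from `/api/generate` when `"stream": true`. Below is a parser sketch for that NDJSON format (field names follow the public API docs; treat them as assumptions to verify):

```python
import json

def parse_ollama_stream(lines):
    """Yield text tokens from an Ollama /api/generate NDJSON stream."""
    for line in lines:
        if not line.strip():
            continue
        chunk = json.loads(line)
        if chunk.get("done"):
            break
        yield chunk.get("response", "")

sample = [
    '{"model":"llama2","response":"Hel","done":false}',
    '{"model":"llama2","response":"lo","done":false}',
    '{"model":"llama2","done":true}',
]
# "".join(parse_ollama_stream(sample)) == "Hello"
```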
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I'm trying to replicate [this example](https://python.langchain.com/docs/integrations/vectorstores/elasticsearch#basic-example) from LangChain. I'm using Elasticsearch as the database to store the embeddings. In the given example I have replaced `embeddings = OpenAIEmbeddings()` with `embeddings = OllamaEmbeddings(model="llama2")`, which one can import via `from langchain.embeddings import OllamaEmbeddings`. I'm running `Ollama` locally, but I'm running into the error below:
```
raise HTTP_EXCEPTIONS.get(status_code, TransportError)(
elasticsearch.exceptions.RequestError: RequestError(400, 'mapper_parsing_exception', 'The number of dimensions for field [vector] should be in the range [1, 2048] but was [4096]')
```
The Ollama model always creates embeddings of size `4096`, even when I set a chunk size of `500`. Is there any way to reduce the embedding size? Or is there any way to store larger embeddings in `ElasticSearch`?
### Suggestion:
_No response_ | Reduce embeddings size | https://api.github.com/repos/langchain-ai/langchain/issues/13332/comments | 4 | 2023-11-14T05:19:21Z | 2024-02-26T01:10:56Z | https://github.com/langchain-ai/langchain/issues/13332 | 1,992,011,242 | 13,332 |
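Two directions to consider: newer Elasticsearch releases raised the `dense_vector` dimension limit (worth checking against your cluster version), and otherwise the 4096-dim vectors can be projected below the 2048 cap before indexing. Note that chunk size only controls text length, not embedding width, which is fixed by the model. A seeded random projection is one standard (lossy) trick; a pure-stdlib sketch:

```python
import math
import random

def random_projection(vec, out_dim, seed=0):
    """Project an embedding down to out_dim dims (Johnson-Lindenstrauss style).

    Approximately preserves distances/similarities; the same seed must be
    used for every vector, including query vectors.
    """
    rng = random.Random(seed)
    scale = 1.0 / math.sqrt(out_dim)
    # One fixed Gaussian row per output dimension.
    matrix = [[rng.gauss(0, 1) for _ in vec] for _ in range(out_dim)]
    return [scale * sum(w * x for w, x in zip(row, vec)) for row in matrix]

small = random_projection([0.1] * 4096, 64)  # use e.g. 1024 in practice
# len(small) == 64
```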