| issue_owner_repo (list, length 2) | issue_body (string, 0–261k chars, nullable) | issue_title (string, 1–925 chars) | issue_comments_url (string, 56–81 chars) | issue_comments_count (int64, 0–2.5k) | issue_created_at (string) | issue_updated_at (string) | issue_html_url (string, 37–62 chars) | issue_github_id (int64, 387k–2.46B) | issue_number (int64, 1–127k) |
|---|---|---|---|---|---|---|---|---|---|
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain 0.0.214
Python 3.10
Ubuntu 22.04
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
agent = create_csv_agent(
    ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613"),
    ["titanic.csv", "titanic_age_fillna.csv"],
    verbose=True,
    agent_type=AgentType.OPENAI_FUNCTIONS,
)
agent.run("how many rows in the age column are different between the two dfs?")
```
Got error: ValueError: Invalid file path or buffer object type:
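Until the file-path handling is fixed, a possible workaround is to load the CSVs yourself and hand the dataframes to the pandas agent. This is only a sketch, and it assumes that `create_pandas_dataframe_agent` in the installed langchain version accepts a list of dataframes:
```python
# Hypothetical workaround sketch: bypass create_csv_agent's file-path
# handling by reading the CSVs with pandas first. Assumes this
# langchain version's create_pandas_dataframe_agent accepts a list.
import pandas as pd
from langchain.agents import create_pandas_dataframe_agent
from langchain.chat_models import ChatOpenAI

df1 = pd.read_csv("titanic.csv")
df2 = pd.read_csv("titanic_age_fillna.csv")

agent = create_pandas_dataframe_agent(
    ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613"),
    [df1, df2],
    verbose=True,
)
agent.run("how many rows in the age column are different between the two dfs?")
```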
### Expected behavior
According to Langchain documentation https://python.langchain.com/docs/modules/agents/toolkits/csv.html, the CSV agent "can interact with multiple csv files passed in as a list", and it should not generate an error. | Issue: create_csv_agent() error when loading a list of csv files | https://api.github.com/repos/langchain-ai/langchain/issues/6695/comments | 7 | 2023-06-24T23:06:32Z | 2023-07-08T15:24:50Z | https://github.com/langchain-ai/langchain/issues/6695 | 1,772,957,835 | 6,695 |
[
"langchain-ai",
"langchain"
] | ### System Info
Windows 11, Python 3.9.16, langchain 0.0.212
### Who can help?
Code from https://python.langchain.com/docs/modules/data_connection/document_loaders/integrations/sitemap
```python
from langchain.document_loaders.sitemap import SitemapLoader
sitemap_loader = SitemapLoader(web_path="https://langchain.readthedocs.io/sitemap.xml")
docs = sitemap_loader.load()
```
throws:
```python
self._request(hdrs.METH_GET, url, allow_redirects=allow_redirects, **kwargs)
TypeError: _request() got an unexpected keyword argument 'verify'
```
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.document_loaders.sitemap import SitemapLoader

sitemap_loader = SitemapLoader(web_path="https://langchain.readthedocs.io/sitemap.xml")
docs = sitemap_loader.load()
```
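Until the aiohttp `verify` issue is resolved, a hedged workaround sketch is to fetch the sitemap yourself and load the pages with `WebBaseLoader`. The sitemap URL and XML namespace below are assumptions for illustration:
```python
import requests
from xml.etree import ElementTree
from langchain.document_loaders import WebBaseLoader

# Parse the sitemap manually instead of going through SitemapLoader's
# aiohttp session, which is where the 'verify' kwarg blows up.
resp = requests.get("https://python.langchain.com/sitemap.xml")
root = ElementTree.fromstring(resp.content)
ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
urls = [loc.text for loc in root.findall(".//sm:loc", ns)]

docs = WebBaseLoader(urls[:10]).load()  # cap the page count for a quick test
```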
### Expected behavior
The loader should work, or the docs should be updated. | sitemap loader throws error TypeError: _request() got an unexpected keyword argument 'verify', many docs refer to wrong links for sitemap as well. | https://api.github.com/repos/langchain-ai/langchain/issues/6691/comments | 8 | 2023-06-24T19:11:29Z | 2023-10-24T16:07:28Z | https://github.com/langchain-ai/langchain/issues/6691 | 1,772,877,397 | 6,691 |
[
"langchain-ai",
"langchain"
] | ### System Info
platform: macOS-13.2.1-arm64-arm-64bit
Python 3.11.3
langchain 0.0.212
langchainplus-sdk 0.0.17
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I'm trying to get my Agent to correctly use my tools: from its internal dialogue I can see it knows it's about to use a tool incorrectly, but then it goes ahead and does it anyway, resulting in an exception from my arg schema.
Here's the input and output with my agent:
---
💬: add a todo to buy tortillas to my grocery shopping todo
> Entering new chain...
Action:
```
{
    "action": "save_sub_todo",
    "action_input": {
        "name": "Buy tortillas",
        "tags": ["grocery shopping"],
        "parent_id": "ID of the grocery shopping todo"
    }
}
```
Replace "ID of the grocery shopping todo" with the actual ID of the todo for grocery shopping. You can use `get_all_todos()` to find the ID if you don't know it.
```
Traceback (most recent call last):
File "/Users/jacobbrooks/PythonProjects/jbbrsh/term.py", line 32, in <module>
Fire(run)
File "/Users/jacobbrooks/PythonProjects/jbbrsh/.venv/lib/python3.11/site-packages/fire/core.py", line 141, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jacobbrooks/PythonProjects/jbbrsh/.venv/lib/python3.11/site-packages/fire/core.py", line 475, in _Fire
component, remaining_args = _CallAndUpdateTrace(
^^^^^^^^^^^^^^^^^^^^
File "/Users/jacobbrooks/PythonProjects/jbbrsh/.venv/lib/python3.11/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jacobbrooks/PythonProjects/jbbrsh/term.py", line 24, in run
response = agent.run(user_input)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/jacobbrooks/PythonProjects/jbbrsh/.venv/lib/python3.11/site-packages/langchain/chains/base.py", line 290, in run
return self(args[0], callbacks=callbacks, tags=tags)[_output_key]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
```
Here are the tools I'm passing to my agent:
---
```python
class TodoInput(BaseModel):
    name: str = Field(description="Todo name")
    tags: List[str] = Field(description="Tags to categorize todos")


@tool(return_direct=False, args_schema=TodoInput)
def save_todo(name: str, tags: List[str]) -> str:
    """
    Saves a todo for the user
    The string returned contains the todo's name and its ID. The ID can be used to add child todos.
    """
    notion_todo = NotionTodo.new(
        notion_client=_notion,
        database_id=DATABASE_ID,
        properties={
            "Tags": tags,
            "Name": name
        }
    )
    _notion.todos.append(notion_todo)
    return f"todo saved: {notion_todo}"


class SubTodoInput(TodoInput):
    parent_id: str = Field(
        description="ID for parent todo, only needed for sub-todos",
    )

    @root_validator
    def validate_query(cls, values: Dict[str, Any]) -> Dict:
        parent_id = values["parent_id"]
        if re.match(r"[\d\w]{8}-[\d\w]{4}-[\d\w]{4}-[\d\w]{4}-[\d\w]{12}", parent_id) is None:
            raise ValueError(f'Invalid parent ID "{values["parent_id"]}"')
        return values


@tool(return_direct=False, args_schema=SubTodoInput)
def save_sub_todo(name: str, tags: List[str], parent_id: str) -> str:
    """
    Saves a child todo with a parent todo for the user
    The string returned contains the todo's name and its ID. The ID is formatted like so: "f1ab8b74-6b67-46b1-81ec-519805c7a1cb"
    Do not make up IDs! Use get_all_todos to find the best ID if a real one is unavailable.
    """
    notion_todo = NotionTodo.new(
        notion_client=_notion,
        database_id=DATABASE_ID,
        properties={
            "Tags": tags,
            "Name": name,
            "Parent todo": parent_id
        }
    )
    _notion.todos.append(notion_todo)
    return f"todo saved: {notion_todo}"


@tool
def get_all_todos():
    """
    Returns a list of all existing todos.
    Useful for finding an ID for an existing todo when you have to add a child todo.
    """
    return '\n'.join([
        f"'{t.name}' {'Complete' if t.complete else 'Incomplete'} id={t.id} parent_id={t.parent_todo}"
        for t in _notion.todos
    ])
```
Here's how I'm creating my agent
---
```python
_system_message = """
Your purpose is to store and track todos for your user
When using Tools with ID arguments don't make up IDs! Find the best ID from looking through all todos.
"""
llm = ChatOpenAI(temperature=0)
memory_key = "chat_history"
chat_history = MessagesPlaceholder(variable_name=memory_key)
memory = ConversationBufferMemory(memory_key=memory_key, return_messages=True)
agent = initialize_agent(
llm=llm,
tools = [HumanInputRun(), *tools],
agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
memory = memory,
agent_kwargs={
"system_message": _system_message,
"memory_prompts": [chat_history],
"input_variables": [
"input", "agent_scratchpad", memory_key
]
},
**kwargs
)
return agent
```
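One mitigation sketch (my assumption, not built-in behavior): validate inside the tool body and return a corrective message instead of raising, so the bad ID comes back to the agent as an observation it can react to (by calling `get_all_todos`) rather than aborting the run:
```python
import re
from typing import List
from langchain.tools import tool

UUID_RE = re.compile(r"[\da-fA-F]{8}-[\da-fA-F]{4}-[\da-fA-F]{4}-[\da-fA-F]{4}-[\da-fA-F]{12}")

@tool(return_direct=False)
def save_sub_todo(name: str, tags: List[str], parent_id: str) -> str:
    """Saves a child todo; parent_id must be a real ID from get_all_todos."""
    if UUID_RE.fullmatch(parent_id) is None:
        # Returned as the tool's observation, so the agent can self-correct.
        return (f'"{parent_id}" is not a valid todo ID. '
                "Call get_all_todos and retry with a real ID.")
    ...  # proceed with NotionTodo.new(...) exactly as in the original tool
```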
### Expected behavior
The bot's internal dialogue _knows_ that it's about to use an invalid ID, and it knows how to go about getting the real ID
> Replace "ID of the grocery shopping todo" with the actual ID of the todo for grocery shopping. You can use `get_all_todos()` to find the ID if you don't know it.
But it just goes ahead and uses the tool resulting in an exception.
The Agent should acknowledge this insight, use the tool it knows it should use to get the proper ID, and then reformat the initial tool attempt with the legitimate ID. | Agent knows how to correctly proceed, but uses tool incorrectly anyways | https://api.github.com/repos/langchain-ai/langchain/issues/6690/comments | 3 | 2023-06-24T18:58:15Z | 2024-04-08T05:17:16Z | https://github.com/langchain-ai/langchain/issues/6690 | 1,772,873,469 | 6,690 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I have a use case where the agent is supposed to perform certain activities (going over the metadata and telling if the currently selected column is fit for querying). This would need a `zero-shot-react-agent` to use several LLMs as tools instead of the present ones (like search being shown everywhere). The [documentation](https://python.langchain.com/docs/modules/agents/how_to/custom_mrkl_agent#multiple-inputs) shows that this is possible but is in itself quite ambiguous.
How do I create an LLMChain as a tool if it always needs a prompt at initialisation, while the agent's prompt can only be created after this LLMChain is already listed as a tool in the `create_prompt` function?
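For what it's worth, the circularity dissolves if the LLMChain-as-tool gets its own prompt, independent of the agent's prompt. A sketch (the prompt text, tool name, and metadata-check task are made up for illustration; wrapping a chain's `run` in a `Tool` is a standard langchain pattern):
```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.agents import Tool

# The tool's chain has its own prompt; the agent's create_prompt only
# needs the Tool object, so there is no ordering conflict.
column_check_prompt = PromptTemplate(
    input_variables=["metadata"],
    template=(
        "Given this column metadata:\n{metadata}\n"
        "Is the currently selected column fit for querying? Answer briefly."
    ),
)
column_check_chain = LLMChain(llm=OpenAI(temperature=0), prompt=column_check_prompt)

column_check_tool = Tool(
    name="column-fitness-check",
    func=column_check_chain.run,
    description="Judges from metadata whether a column is fit for querying.",
)
```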
### Suggestion:
_No response_ | Issue: Can an LLM be used as a tool? | https://api.github.com/repos/langchain-ai/langchain/issues/6687/comments | 4 | 2023-06-24T18:41:11Z | 2023-11-07T03:29:34Z | https://github.com/langchain-ai/langchain/issues/6687 | 1,772,868,158 | 6,687 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I would like to know what software was used to create the flowcharts in the document. They look very beautiful.
### Suggestion:
_No response_ | Issue: What software was used to create the flowcharts in the document? | https://api.github.com/repos/langchain-ai/langchain/issues/6681/comments | 1 | 2023-06-24T09:21:32Z | 2023-09-30T16:05:07Z | https://github.com/langchain-ai/langchain/issues/6681 | 1,772,562,004 | 6,681 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
This problem occurred when I tried to use tool support for a conversational agent with an agent of type STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION. Here is some of the source code and the errors:

env:
Python 3.10
Langchain latest
Windows 10 Professional, latest
### Suggestion:
_No response_ | Help for an error that appears in the CONVERSATIONAL Agent | https://api.github.com/repos/langchain-ai/langchain/issues/6680/comments | 2 | 2023-06-24T08:20:06Z | 2023-09-30T16:05:13Z | https://github.com/langchain-ai/langchain/issues/6680 | 1,772,530,890 | 6,680 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain: 0.0.212
commit: #6455
version 0.0.211 does not have this issue
### Who can help?
@rlancemartin, @eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
workaround: install bs4 manually (pip install bs4)
`from langchain.agents import initialize_agent, AgentType`
leads to:
```
File "//main.py", line 17, in <module>
from langchain.agents import initialize_agent, AgentType
File "/usr/local/lib/python3.11/site-packages/langchain/__init__.py", line 6, in <module>
from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
File "/usr/local/lib/python3.11/site-packages/langchain/agents/__init__.py", line 2, in <module>
from langchain.agents.agent import (
File "/usr/local/lib/python3.11/site-packages/langchain/agents/agent.py", line 16, in <module>
from langchain.agents.tools import InvalidTool
File "/usr/local/lib/python3.11/site-packages/langchain/agents/tools.py", line 8, in <module>
from langchain.tools.base import BaseTool, Tool, tool
File "/usr/local/lib/python3.11/site-packages/langchain/tools/__init__.py", line 3, in <module>
from langchain.tools.arxiv.tool import ArxivQueryRun
File "/usr/local/lib/python3.11/site-packages/langchain/tools/arxiv/tool.py", line 12, in <module>
from langchain.utilities.arxiv import ArxivAPIWrapper
File "/usr/local/lib/python3.11/site-packages/langchain/utilities/__init__.py", line 3, in <module>
from langchain.utilities.apify import ApifyWrapper
File "/usr/local/lib/python3.11/site-packages/langchain/utilities/apify.py", line 5, in <module>
from langchain.document_loaders import ApifyDatasetLoader
File "/usr/local/lib/python3.11/site-packages/langchain/document_loaders/__init__.py", line 97, in <module>
from langchain.document_loaders.recursive_url_loader import RecusiveUrlLoader
File "/usr/local/lib/python3.11/site-packages/langchain/document_loaders/recursive_url_loader.py", line 5, in <module>
from bs4 import BeautifulSoup
ModuleNotFoundError: No module named 'bs4'
```
requirements.txt:
```
openai==0.27.8
fastapi==0.97.0
websockets==11.0.3
pydantic==1.10.9
langchain==0.0.212
uvicorn[standard]
jinja2
lancedb==0.1.8
itsdangerous
tiktoken==0.4.0
```
### Expected behavior
I think `from bs4 import BeautifulSoup` in recursive_url_loader.py should have been a local import. | crash because of missing bs4 dependency in version 2.12 | https://api.github.com/repos/langchain-ai/langchain/issues/6679/comments | 3 | 2023-06-24T06:31:34Z | 2023-06-24T20:54:12Z | https://github.com/langchain-ai/langchain/issues/6679 | 1,772,489,819 | 6,679 |
[
"langchain-ai",
"langchain"
] | ### System Info
see: https://github.com/hwchase17/langchain/discussions/1533
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Use FAISS on Windows, then try to reuse those embeddings on Linux.
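A quick debugging sketch (not a fix) to run on both machines: the `search()` signature in the error is the raw SWIG API, which suggests the Linux box has a different faiss build than Windows, so comparing the installed builds is a reasonable first step:
```python
import faiss
import numpy as np

print(faiss.__version__)  # compare across Windows and Linux

index = faiss.IndexFlatL2(4)
index.add(np.zeros((1, 4), dtype=np.float32))
# On faiss-cpu's Python API this two-argument call works; on a raw
# SWIG build it raises the same "missing 3 required positional
# arguments" error as in the traceback above.
print(index.search(np.zeros((1, 4), dtype=np.float32), 1))
```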
### Expected behavior
```
I have a problem, when I want to run a project with langchain on windows, everything works perfectly, and with the same conditions on linux (libraries, python version, etc.) it doesn't work and throws this error, does anyone know what it could be?
2023-03-08 15:13:24 Failed to run listener function (error: search() missing 3 required positional arguments: 'k', 'distances', and 'labels')
2023-03-08 15:13:24 Traceback (most recent call last):
2023-03-08 15:13:24 File "/usr/local/lib/python3.9/site-packages/slack_bolt/listener/thread_runner.py", line 120, in run_ack_function_asynchronously
2023-03-08 15:13:24 listener.run_ack_function(request=request, response=response)
2023-03-08 15:13:24 File "/usr/local/lib/python3.9/site-packages/slack_bolt/listener/custom_listener.py", line 50, in run_ack_function
2023-03-08 15:13:24 return self.ack_function(
2023-03-08 15:13:24 File "//./main.py", line 28, in question
2023-03-08 15:13:24 response = processQuestion(query)
2023-03-08 15:13:24 File "/api.py", line 42, in processQuestion
2023-03-08 15:13:24 sources = doSimilaritySearch(index, query)
2023-03-08 15:13:24 File "/utils.py", line 87, in doSimilaritySearch
2023-03-08 15:13:24 docs = indexFaiss.similarity_search(query, k=5)
2023-03-08 15:13:24 File "/usr/local/lib/python3.9/site-packages/langchain/vectorstores/faiss.py", line 166, in similarity_search
2023-03-08 15:13:24 docs_and_scores = self.similarity_search_with_score(query, k)
2023-03-08 15:13:24 File "/usr/local/lib/python3.9/site-packages/langchain/vectorstores/faiss.py", line 136, in similarity_search_with_score
2023-03-08 15:13:24 docs = self.similarity_search_with_score_by_vector(embedding, k)
2023-03-08 15:13:24 File "/usr/local/lib/python3.9/site-packages/langchain/vectorstores/faiss.py", line 110, in similarity_search_with_score_by_vector
2023-03-08 15:13:24 scores, indices = self.index.search(np.array([embedding], dtype=np.float32), k)
2023-03-08 15:13:24 TypeError: search() missing 3 required positional arguments: 'k', 'distances', and 'labels'
``` | Problem to run on linux but not on windows | https://api.github.com/repos/langchain-ai/langchain/issues/6678/comments | 3 | 2023-06-24T06:10:07Z | 2024-03-22T07:26:20Z | https://github.com/langchain-ai/langchain/issues/6678 | 1,772,484,021 | 6,678 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
In my application, I am using the ConversationalRetrievalChain with the "stuff" chain type, FAISS as the vector store and ConversationBufferMemory. I have noticed that when I ask a question related to a previous response (i.e. a request that only needs answers from the chat history, such as a summarization request), the system still searches for answers in both the vector store and the chat history. This often leads to incorrect or irrelevant answers: the summary mixes chat-history content with vector-store content that never appeared in the previous conversation.
I've tried to address this issue by passing a custom prompt using combine_docs_chain_kwargs to specify whether a response should be generated based on the chat history only, or the chat history and the vector store. However, this approach hasn't been effective.
It seems that the system is currently unable to correctly discern the user's intention to exclusively use the chat history for generating a response. It's crucial that the system can accurately determine this to provide relevant and accurate responses.
### Suggestion:
I propose that the system should be enhanced with a mechanism to first detect the user's intention to either:
Select a response from the chat history only, or
Select a response from the chat history in combination with the vector store.
This could possibly be achieved by conditionally including or excluding certain parts of a prompt, such as {context} (from stuff_prompt.py, this is the default prompt used by stuff chain type), based on user input or intentions. However, this logic would need to be implemented during the data preparation for the template.
I haven't figured out a way to do so. Looking forward to your help!
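A routing sketch of what I mean, with `llm`, `memory`, and `chain` standing in for the components described above; this is an assumption about how it could work, not an existing langchain feature:
```python
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

intent_prompt = PromptTemplate(
    input_variables=["question"],
    template=(
        "Does answering this require looking up documents, or only the "
        "conversation so far? Reply with exactly DOCS or HISTORY.\n"
        "Question: {question}"
    ),
)
intent_chain = LLMChain(llm=llm, prompt=intent_prompt)

def answer(question: str) -> str:
    route = intent_chain.run(question=question).strip().upper()
    if "HISTORY" in route:
        # Answer purely from the buffered conversation, skipping the vector store.
        history = memory.load_memory_variables({})["chat_history"]
        return llm.predict(f"Using only this conversation:\n{history}\n\n{question}")
    return chain({"question": question})["answer"]
```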
| Issue: ConversationalRetrievalChain Fails to Distinguish User's Intention for Chat History Only or Chat History + Vector Store Answer | https://api.github.com/repos/langchain-ai/langchain/issues/6677/comments | 5 | 2023-06-24T04:38:11Z | 2023-10-30T16:06:18Z | https://github.com/langchain-ai/langchain/issues/6677 | 1,772,449,472 | 6,677 |
[
"langchain-ai",
"langchain"
] | ### System Info
```python
Linux-6.3.7-1-default-x86_64-with-glibc2.37
Python Version: 3.10.11
Langchain Version: 0.0.21
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Input
```Python
from langchain.chat_models import ChatVertexAI
from langchain.schema import HumanMessage, SystemMessage, AIMessage
chat = ChatVertexAI(temperature=0.7,verbose=True)
chat(
    [
        SystemMessage(content="Assume you are an expert tour guide. Help the user and assist him in his travel"),
        HumanMessage(content="I like lush green valleys with cool weather. Where should I go?"),
        AIMessage(content="Switzerland is a nice place to visit"),
        HumanMessage(content="Name some of the popular places there to visit")
    ]
)
```
Response
```python
File [~/gamedisk/PyTorch2.0_env/lib/python3.10/site-packages/langchain/chat_models/vertexai.py:136], in ChatVertexAI._generate(self, messages, stop, run_manager, **kwargs)
134 chat = self.client.start_chat(**params)
135 for pair in history.history:
--> 136 chat._history.append((pair.question.content, pair.answer.content))
137 response = chat.send_message(question.content, **params)
138 text = self._enforce_stop_words(response.text, stop)
AttributeError: 'ChatSession' object has no attribute '_history'
```
### Expected behavior
It is expected to return an object similar to this
```python
AIMessage(content='Lauterbrunnen is a nice place to visit', additional_kwargs={}, example=False)
```
| [ChatVertexAI] 'ChatSession' object has no attribute '_history' | https://api.github.com/repos/langchain-ai/langchain/issues/6675/comments | 4 | 2023-06-24T04:02:32Z | 2023-10-02T16:06:29Z | https://github.com/langchain-ai/langchain/issues/6675 | 1,772,436,936 | 6,675 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
With the QA generation over a document store, is it possible to use Hugging Face models (local) instead of ChatOpenAI?
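As far as I can tell this should work, since the chain only needs an LLM. A hedged sketch (the model id and generation kwargs are assumptions, and `docs` stands in for documents you have already loaded):
```python
from langchain.llms import HuggingFacePipeline
from langchain.chains import QAGenerationChain

# Load a local Hugging Face model instead of ChatOpenAI.
llm = HuggingFacePipeline.from_model_id(
    model_id="google/flan-t5-large",
    task="text2text-generation",
    model_kwargs={"max_length": 512},
)

chain = QAGenerationChain.from_llm(llm)
qa_pairs = chain.run(docs[0].page_content)  # docs: your loaded documents
```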
### Suggestion:
_No response_ | Issue: using different local models for QA generation | https://api.github.com/repos/langchain-ai/langchain/issues/6674/comments | 1 | 2023-06-24T04:02:09Z | 2023-09-30T16:05:28Z | https://github.com/langchain-ai/langchain/issues/6674 | 1,772,436,473 | 6,674 |
[
"langchain-ai",
"langchain"
] | ### System Info
v0.0.211
### Who can help?
@hw
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Use an OpenAPI spec which contains a key `format` with a known type like `date`.
```yaml
date_de_naissance_dirigeant_min:
  name: date_de_naissance_dirigeant_min
  in: query
  description: Date de naissance minimale du dirigeant (ou de l'un des dirigeants de l'entreprise pour une recherche d'entreprises), au format JJ-MM-AAAA.
  required: false
  schema:
    type: string
    format: date
  example: 1970-01-01
```
This gets translated into
```python
'date_de_naissance_dirigeant_min': {
    'type': 'string',
    'schema_format': 'date',
    'description': "Date de naissance minimale du dirigeant (ou de l'un des dirigeants de l'entreprise pour une recherche d'entreprises), au format JJ-MM-AAAA.",
    'example': datetime.date(1970, 1, 1),
},
```
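A hedged client-side workaround while the spec conversion keeps producing `datetime.date` objects: serialize the function-call arguments with a JSON default that stringifies them.
```python
import datetime
import json

# default=str converts the date object to its ISO string form.
args = {"date_de_naissance_dirigeant_min": datetime.date(1970, 1, 1)}
payload = json.dumps(args, default=str)
print(payload)  # {"date_de_naissance_dirigeant_min": "1970-01-01"}
```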
### Expected behavior
No objects other than strings and lists should be instantiated by `openapi_spec_to_openai_fn`. | openapi_spec_to_openai_fn generates Date objects which are not JSON serializable | https://api.github.com/repos/langchain-ai/langchain/issues/6671/comments | 3 | 2023-06-23T22:52:10Z | 2023-09-30T16:05:33Z | https://github.com/langchain-ai/langchain/issues/6671 | 1,772,260,655 | 6,671 |
[
"langchain-ai",
"langchain"
] | ### System Info
|software|Version|
|:---:|:---:|
|python|3.10.11|
|LangChain|0.0.209|
|Chroma|0.3.26|
|Windows|11|
|Ubuntu|22.06|
I have tried on both windows and ubuntu
### Who can help?
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am having trouble adding multiple documents to a vectordb. I am using chromadb here.
The following loads, splits and embeds two text files and stores them in a persistent vector database. Then it queries the database.
```python
from langchain.document_loaders import TextLoader
from langchain.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings.openai import OpenAIEmbeddings
import os
from getpass import getpass
OPENAI_API_KEY = getpass()
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
embeddings = OpenAIEmbeddings()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
persist_directory = "db"
db = Chroma(persist_directory=persist_directory, embedding_function=embeddings)
# -------------------------- Adding spacex_wiki.txt -------------------------- #
loader = TextLoader("this_has_to_work/spacex_wiki.txt", encoding="utf8")
documents = loader.load()
docs = text_splitter.split_documents(documents)
db.add_documents(docs)
# ------------------------- Adding imploson_wiki.txt ------------------------- #
loader = TextLoader("this_has_to_work/implosion_wiki.txt", encoding="utf8")
documents = loader.load()
docs = text_splitter.split_documents(documents)
db.add_documents(docs)
db.persist()
# --------------------------- querying the vectordb -------------------------- #
db = None
db = Chroma(persist_directory=persist_directory, embedding_function=embeddings)
retriever = db.as_retriever(search_type="mmr")
query = "What is implosion?"
print(query)
print(retriever.get_relevant_documents(query)[0])
print("\n\n")
query = "Who is elon?"
print(query)
print(retriever.get_relevant_documents(query)[0])
print("\n\n")
```
The above code runs without a problem and is able to retrieve from both text files. The problem starts with the following code.
The following code only loads the vectordb from the persistent location.
```python
from langchain.document_loaders import TextLoader
from langchain.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings.openai import OpenAIEmbeddings
import os
from getpass import getpass
OPENAI_API_KEY = getpass()
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
embeddings = OpenAIEmbeddings()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
persist_directory = "db"
db = Chroma(persist_directory=persist_directory, embedding_function=embeddings)
# --------------------------- querying the vectordb -------------------------- #
retriever = db.as_retriever(search_type="mmr")
query = "What is implosion?"
print(query)
print(retriever.get_relevant_documents(query)[0])
print("\n\n")
query = "Who is elon?"
print(query)
print(retriever.get_relevant_documents(query)[0])
print("\n\n")
```
The above code only returns results from the first document stored in the vectordb (spacex_wiki.txt), no matter what the prompt is.
The following are the text files used.
[implosion_wiki.txt](https://github.com/hwchase17/langchain/files/11850301/implosion_wiki.txt)
[spacex_wiki.txt](https://github.com/hwchase17/langchain/files/11850303/spacex_wiki.txt)
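A small debugging sketch that may help narrow this down (note `_collection` is a private attribute of the Chroma wrapper, so relying on it is an assumption about its internals): compare the persisted embedding count with the number of chunks produced from both files.
```python
db = Chroma(persist_directory=persist_directory, embedding_function=embeddings)
# If this count only matches the chunks from spacex_wiki.txt, the second
# add_documents call never reached disk.
print(db._collection.count())
```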
### Expected behavior
It is expected that information from both documents can be retrieved when the vectordb is loaded from the persistent location.
However, only the first embedded document can be retrieved. | Chromadb only returns the first document from persistent db | https://api.github.com/repos/langchain-ai/langchain/issues/6657/comments | 3 | 2023-06-23T16:36:06Z | 2023-12-15T12:38:44Z | https://github.com/langchain-ai/langchain/issues/6657 | 1,771,741,425 | 6,657 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.208
Archcraft x86_64
Python 3.11.3
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Vector Stores / Retrievers
- [X] Document Loaders
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Export Chat from Whatsapp
2. The exported chat contains messages that aren't being extracted by the regex. Example:
`7/19/22, 11:26 pm - User: Message`
https://github.com/hwchase17/langchain/blob/980c8651743b653f994ad6b97a27b0fa31ee92b4/langchain/document_loaders/whatsapp_chat.py#L43
There are two issues here:
1. The regex is looking for a space character but in my exported message there was a unicode NNBSP character (U+202F)
2. AM/PM are expected in capital case whereas my export was in small case.
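A sketch of a more permissive pattern that handles both points (my assumption of a fix, not the shipped regex): allow a narrow no-break space before the meridiem and match it case-insensitively.
```python
import re

message_line_regex = re.compile(
    r"(\d{1,2}/\d{1,2}/\d{2,4}, \d{1,2}:\d{2}(?:[ \u202f](?:am|pm))?) - (.*?): (.*)",
    flags=re.IGNORECASE,
)

line = "7/19/22, 11:26\u202fpm - User: Message"
print(message_line_regex.match(line).groups())
# ('7/19/22, 11:26\u202fpm', 'User', 'Message')
```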
### Expected behavior
Message is parsed successfully. | WhatsAppChatLoader doesn't extract messages exported from WhatsApp | https://api.github.com/repos/langchain-ai/langchain/issues/6654/comments | 0 | 2023-06-23T15:42:11Z | 2023-06-26T09:16:16Z | https://github.com/langchain-ai/langchain/issues/6654 | 1,771,671,290 | 6,654 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Since the doc refactor, users are now limited to four search results per query.
<img width="837" alt="image" src="https://github.com/hwchase17/langchain/assets/1082786/e340d78c-b8bb-4242-93bc-0d96d4514b44">
I think users should be able to purchase tokens if they would like to be able to access more search results, like perhaps 1 token per 10 extra results. This would enable functionality similar to the previous search functions, which would return up to 50 results.
Tokens could also be used for respecting `@media (prefers-color-scheme: dark)` since right now my laptop is not rated high enough for the default brightness and I would not like to blow out my display. Lastly, I would be willing to pay 5 tokens to increase the font weight, since it is unfortunately not very accessible for people with low vision, though that price should probably be determined by what the market will bear.
### Motivation
Motivation: find broad array code definitions and usage examples when trying to integrate a piece of the library into my application.
Related to #6300.
Will allow users to surface relevant information without having to implement custom crawler/indexer for the docs.
### Your contribution
I will happily serve as QA tester to test the amount of search results returned. I don't think my sunglasses offer enough protection to test whether the docs site respects the dark-mode CSS media query. | Allow users to purchase tokens for more search results | https://api.github.com/repos/langchain-ai/langchain/issues/6651/comments | 1 | 2023-06-23T14:10:15Z | 2023-09-29T16:05:33Z | https://github.com/langchain-ai/langchain/issues/6651 | 1,771,531,070 | 6,651 |
[
"langchain-ai",
"langchain"
] | ### System Info
Hello,
during the development of an application that needs to authenticate to Azure services and use the wrapper [AzureChatOpenAI](https://github.com/hwchase17/langchain/blob/master/langchain/chat_models/azure_openai.py), we encountered an error because the model could not use the 'azure_ad' type.
It seems that this class always sets openai_api_type to the default value 'azure', even when an environment variable called 'OPENAI_API_TYPE' specifies 'azure_ad'.
Why is that?
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
answering_llm = AzureChatOpenAI(
    deployment_name=ANSWERING_MODEL_CONFIG.model_name,
    model_name=ANSWERING_MODEL_CONFIG.model_type,  # "gpt-3.5-turbo"
    openai_api_type="azure_ad",  # IF THIS IS NOT EXPLICITLY PASSED IT FAILS
    openai_api_key=auth_token,
    temperature=ANSWERING_MODEL_CONFIG.temperature,
    max_tokens=ANSWERING_MODEL_CONFIG.max_tokens
)
```
### Expected behavior
We expect the wrapper to take the value of the environmental variable correctly. | [AzureChatOpenAI] openai_api_type can't be changed from the default 'azure' value | https://api.github.com/repos/langchain-ai/langchain/issues/6650/comments | 1 | 2023-06-23T14:09:47Z | 2023-08-04T03:21:42Z | https://github.com/langchain-ai/langchain/issues/6650 | 1,771,530,370 | 6,650 |
[
"langchain-ai",
"langchain"
] | ### System Info
Python 3.11.3
MacOs
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I encountered a problem during my initial installation of the Langchain package. I adhered to the installation instructions provided at https://python.langchain.com/docs/get_started/installation.
The command I used for installation was pip install langchain, which resulted in the installation of Langchain version 0.0.209.
However, when I attempted to execute the following code:
```
from langchain.chat_models import ChatOpenAI
from langchain.schema import (
AIMessage,
HumanMessage,
SystemMessage
)
chat = ChatOpenAI()
res = chat.predict_messages([HumanMessage(
content="Translate this sentence from English to French. I love programming.")])
print(res.content)
```
I received an error message stating that the `predict_messages` function was not available. It appears that the package version available on pip does not align with the latest version on the GitHub repository.
Interestingly, when I installed the package from the cloned repository, it worked as expected.
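A quick check that may confirm the version mismatch (a debugging sketch; `predict_messages` landed on the base chat-model interface in later releases, so the exact cutoff version is an assumption):
```python
import langchain
from langchain.chat_models import ChatOpenAI

print(langchain.__version__)
# True on releases that ship the method; checked on the class, so no API key needed.
print(hasattr(ChatOpenAI, "predict_messages"))
```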
### Expected behavior
After installing the Langchain package using pip install langchain, I should be able to import the OpenAI module from `langchain.chat_models` and use the predict function without any issues. The `predict_messages` function should be available and functional in the pip version of the package, just as it is in the version available in the GitHub repository. | Installation Issue with Langchain Package - 'predict_messages' Function Not Available in Pip Version 0.0.209 | https://api.github.com/repos/langchain-ai/langchain/issues/6643/comments | 2 | 2023-06-23T11:27:58Z | 2023-10-01T16:05:48Z | https://github.com/langchain-ai/langchain/issues/6643 | 1,771,298,502 | 6,643 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I would like to know how many **tokens** the tokenizer would generate for the **prompt** doing the **OpenAI** call, but I'm finding issues in reproducing the real call.
I'm trying two methods found in the documentation (which, if I'm not wrong, internally use the `tiktoken` library):
- [`get_num_tokens`](https://api.python.langchain.com/en/latest/modules/llms.html#langchain.llms.AI21.get_num_tokens)
- [`get_num_tokens_from_messages`](https://api.python.langchain.com/en/latest/modules/llms.html#langchain.llms.AI21.get_num_tokens_from_messages)
Then I check the number of prompt tokens with the callback `get_openai_callback` the understand if the calculation was correct:
```python
from langchain.llms import OpenAI
from langchain.schema import HumanMessage, SystemMessage, AIMessage
from langchain.callbacks import get_openai_callback
models_name = ["text-davinci-003", "gpt-3.5-turbo-0301", "gpt-3.5-turbo-0613"]
for model_name in models_name:
    print(f"----{model_name}----")
    llm = OpenAI(model_name=model_name)
    print(llm)

    text = "Hello world"
    tokens = llm.get_num_tokens(text)
    print(f"1) get_num_tokens: {tokens}")

    human_message = HumanMessage(content=text)
    system_message = SystemMessage(content=text)
    ai_message = AIMessage(content=text)
    tokens = llm.get_num_tokens_from_messages([human_message]), llm.get_num_tokens_from_messages([system_message]), llm.get_num_tokens_from_messages([ai_message])
    print(f"2) get_num_tokens_from_messages: {tokens}")

    with get_openai_callback() as cb:
        llm_response = llm(text)
    print(f"3) callback: {cb}")
```
The output is:
```
----text-davinci-003----
OpenAI
Params: {'model_name': 'text-davinci-003', 'temperature': 0.7, 'max_tokens': 256, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'request_timeout': None, 'logit_bias': {}}
1) get_num_tokens: 2
2) get_num_tokens_from_messages: (4, 4, 4)
3) callback: Tokens Used: 23
Prompt Tokens: 2
Completion Tokens: 21
Successful Requests: 1
Total Cost (USD): $0.00045999999999999996
----gpt-3.5-turbo-0301----
OpenAIChat
Params: {'model_name': 'gpt-3.5-turbo-0301'}
1) get_num_tokens: 2
2) get_num_tokens_from_messages: (4, 4, 4)
3) callback: Tokens Used: 50
Prompt Tokens: 10
Completion Tokens: 40
Successful Requests: 1
Total Cost (USD): $0.0001
----gpt-3.5-turbo-0613----
OpenAIChat
Params: {'model_name': 'gpt-3.5-turbo-0613'}
1) get_num_tokens: 2
2) get_num_tokens_from_messages: (4, 4, 4)
3) callback: Tokens Used: 18
Prompt Tokens: 9
Completion Tokens: 9
Successful Requests: 1
Total Cost (USD): $0.0
```
I understand that each model counts tokens differently; for example, **text-davinci-003** gives the same number from `get_num_tokens` and the callback. The other two models, **gpt-3.5-turbo-0301** and **gpt-3.5-turbo-0613**, show respectively 6 and 5 more tokens in the callback than `get_num_tokens_from_messages` reports.
So how can I exactly reproduce the token calculation of the real call? Which official function is used in it?
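For the chat models, the extra tokens come from the chat message framing. A sketch that reproduces the callback's prompt counts above (based on OpenAI's published accounting; the per-message constant varies by model and is an assumption here):
```python
import tiktoken

def num_tokens_for_chat(messages, model="gpt-3.5-turbo-0613", tokens_per_message=3):
    # tokens_per_message is 3 for -0613 and 4 for -0301 (assumed constants).
    enc = tiktoken.encoding_for_model(model)
    total = 0
    for role, content in messages:
        total += tokens_per_message
        total += len(enc.encode(role)) + len(enc.encode(content))
    return total + 3  # every reply is primed with an assistant header

# 3 + 1 ("user") + 2 ("Hello world") + 3 = 9, matching the callback above.
print(num_tokens_for_chat([("user", "Hello world")]))
```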
### Suggestion:
_No response_ | Tokenize before OpenAI call issues | https://api.github.com/repos/langchain-ai/langchain/issues/6642/comments | 2 | 2023-06-23T11:21:02Z | 2023-07-12T13:21:20Z | https://github.com/langchain-ai/langchain/issues/6642 | 1,771,287,642 | 6,642 |
[
"langchain-ai",
"langchain"
] | Since LLaMA has no Chinese data, I would like to fine-tune the LLaMA model with Chinese data. Does langchain provide this feature? | Is there a feature for fine-tuning the LLAMA pre-trained model? | https://api.github.com/repos/langchain-ai/langchain/issues/6641/comments | 1 | 2023-06-23T10:34:28Z | 2023-09-29T16:05:43Z | https://github.com/langchain-ai/langchain/issues/6641 | 1,771,224,942 | 6,641 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.207
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.sql_database import SQLDatabase
db = SQLDatabase.from_uri("sqlite:///sample.db",)
db.get_usable_table_names()  # Table name order changes each time the application is restarted
```
Current implementation
```python
class SQLDatabase:
    def get_usable_table_names(self) -> Iterable[str]:
        if self._include_tables:
            return self._include_tables
        return self._all_tables - self._ignore_tables  # THIS IS A SET


class ListSQLDatabaseTool(BaseSQLDatabaseTool, BaseTool):
    def _run(self, tool_input: str = "", ...) -> str:
        return ", ".join(self.db.get_usable_table_names())  # ORDER CHANGES EACH RUN
```
### Expected behavior
```python
class ListSQLDatabaseTool(BaseSQLDatabaseTool, BaseTool):
    def _run(self, tool_input: str = "", ...) -> str:
        return ", ".join(sorted(self.db.get_usable_table_names()))
``` | sql_db_list_tables returning different order each time making caching impossible | https://api.github.com/repos/langchain-ai/langchain/issues/6640/comments | 2 | 2023-06-23T10:32:43Z | 2023-09-30T16:05:48Z | https://github.com/langchain-ai/langchain/issues/6640 | 1,771,222,506 | 6,640 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
In order to read all the text of a Wikipedia page, we would need to allow overriding the hard limit of 4000 characters set in `WikipediaAPIWrapper`.
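In the meantime the limit can be lifted by using the wrapper directly; a quick sketch (the query and character cap are arbitrary):
```python
from langchain.utilities import WikipediaAPIWrapper

wiki = WikipediaAPIWrapper(doc_content_chars_max=100_000)
print(len(wiki.run("Large language model")))  # no longer capped at 4000
```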
### Suggestion:
Just add a new argument to `WikipediaLoader` named `doc_content_chars_max` (the very same name that `WikipediaAPIWrapper` uses under the hood) and pass it when instantiating the client. | Issue: Set doc_content_chars_max with WikipediaLoader | https://api.github.com/repos/langchain-ai/langchain/issues/6639/comments | 2 | 2023-06-23T10:20:04Z | 2023-10-30T09:11:52Z | https://github.com/langchain-ai/langchain/issues/6639 | 1,771,205,138 | 6,639 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain - 0.0.198
platform - ubuntu
python - 3.10.11
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Objective is to pass additional variables to `_call` method in CustomLLM.
Colab Link - https://colab.research.google.com/drive/19VSmSEBq5D0MDXQ3CF0rrmOdGjdaELUj?usp=sharing
Sample code:
```
from langchain import PromptTemplate, LLMChain
from langchain.llms.base import LLM
from transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_name = "facebook/opt-350m"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)


class CustomLLM(LLM):
    def _call(self, prompt, stop=None, **kwargs) -> str:
        print("Kwargs: ", kwargs)
        inputs = tokenizer([prompt], return_tensors="pt")
        response = model.generate(**inputs, max_new_tokens=128)
        response = tokenizer.decode(response[0])
        return response

    @property
    def _identifying_params(self):
        return {"name_of_model": model_name}

    @property
    def _llm_type(self) -> str:
        return "custom"


llm = CustomLLM()
prompt_template = "Answer the question - {question}"
prompt = PromptTemplate(template=prompt_template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=llm)
```
### Expected behavior
### Scenario1
passing additional parameter `foo=123`
```
result = llm_chain.run({"question":"What is the weather in LA and SF"}, foo=123)
```
Following error is thrown
```
ValueError: `run` supported with either positional arguments or keyword arguments but not both. Got args: ({'question': 'What is the weather in LA and SF'},) and kwargs: {'foo': 123}.
```
### Scenario2
if we pass it as a dictionary - `{'foo': 123}`
```
result = llm_chain.run({"question":"What is the weather in LA and SF"}, {"foo":123})
```
Following error is thrown
```
ValueError: `run` supports only one positional argument.
```
### Scenario3
if we pass everything together
```
result = llm_chain.run({"question":"What is the weather in LA and SF", "foo":123})
```
The code works, but kwargs in CustomLLM._call is still empty. I guess the chain safely ignores variables that are not part of the prompt template.
Is there any way to pass the additional parameter to the kwargs of CustomLLM - `_call` method without changing the prompt template?
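One hedged possibility, if the installed version supports it: `LLMChain` has an `llm_kwargs` dict that is forwarded into the model call, so it should surface in `_call`'s `**kwargs` without touching the prompt template.
```python
# Assumption: llm_kwargs is forwarded through generate_prompt into the
# model call on this langchain version.
llm_chain = LLMChain(prompt=prompt, llm=llm, llm_kwargs={"foo": 123})
result = llm_chain.run(question="What is the weather in LA and SF")
# CustomLLM._call should now print: Kwargs: {'foo': 123}
```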
| how to pass additional variables using kwargs to CustomLLM | https://api.github.com/repos/langchain-ai/langchain/issues/6638/comments | 5 | 2023-06-23T10:09:31Z | 2024-02-08T16:29:11Z | https://github.com/langchain-ai/langchain/issues/6638 | 1,771,189,299 | 6,638 |
[
"langchain-ai",
"langchain"
] | ### System Info
I've recently changed to use this agent since I started getting errors with `chat-conversational-react-description` (about it not being able to use multi-input tools). I've noticed that it often finishes a chain telling the user that it'll make a search/use a tool but it never does (because the chain is already finished).
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
This is how the agent is set up
```python
from langchain.chat_models import ChatOpenAI
from langchain.chains.conversation.memory import ConversationBufferWindowMemory
from langchain.agents import AgentType, initialize_agent
from agent_tools.comparables_tool import ComparablesTool
# from agent_tools.duck_search_tool import duck_search
from langchain.prompts import SystemMessagePromptTemplate, PromptTemplate
from agent_tools.python_repl_tool import PythonREPL
from token_counter import get_token_count
from langchain.prompts import MessagesPlaceholder
from langchain.memory import ConversationBufferMemory

tools = [PythonREPL(), ComparablesTool()]

chat_history = MessagesPlaceholder(variable_name="chat_history")
memory = ConversationBufferMemory(
    memory_key="chat_history", return_messages=True)

gpt = ChatOpenAI(
    temperature=0.2,
    model_name='gpt-3.5-turbo-16k',
    verbose=True
)

conversational_agent = initialize_agent(
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    tools=tools,
    llm=gpt,
    verbose=True,
    max_iterations=10,
    memory=memory,
    agent_kwargs={
        "memory_prompts": [chat_history],
        "input_variables": ["input", "agent_scratchpad", "chat_history"]
    }
)


async def get_response(user_message: str) -> str:
    return await conversational_agent.arun(user_message)
```
And this is what's on the terminal:
```
FerAtTheFringe#1080 said: "Hey I need to find apartments in madrid with at least 3 rooms" (general)

> Entering new chain...
Sure! I can help you find apartments in Madrid with at least 3 rooms. Let me search for some options for you.

> Finished chain.
```
### Expected behavior
```
FerAtTheFringe#1080 said: "Hey I need to find apartments in madrid with at least 3 rooms" (general)

> Entering new chain...
"action": "get_comparables",
"action_input": {
    "latitude": "38.9921979",
    "longitude": "-1.878099",
    "rooms": "5",
    "nresults": "10"
}
``` | STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION finishes chain BEFORE using a tool | https://api.github.com/repos/langchain-ai/langchain/issues/6637/comments | 14 | 2023-06-23T09:50:41Z | 2024-02-28T16:10:15Z | https://github.com/langchain-ai/langchain/issues/6637 | 1,771,161,521 | 6,637 |
[
"langchain-ai",
"langchain"
] | ### System Info
Ubuntu 20.04
Python 3.10
langchain 0.0.166
### Who can help?
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
This code is retrieved from the official website
https://python.langchain.com/docs/modules/memory/how_to/summary#initializing-with-messages
```python
from langchain.memory import ConversationSummaryMemory, ChatMessageHistory
from langchain.llms import OpenAI
from dotenv import load_dotenv
load_dotenv()
history = ChatMessageHistory()
history.add_user_message("hi")
history.add_ai_message("hi there!")
memory = ConversationSummaryMemory.from_messages(llm=OpenAI(temperature=0), chat_memory=history, return_messages=True)
```
The above code throws an exception:
```
AttributeError: type object 'ConversationSummaryMemory' has no attribute 'from_messages'
```
I guess the class method has been deprecated?
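A hedged alternative for versions that lack `from_messages`: construct the memory directly and seed its summary from the existing history (`predict_new_summary` and the `buffer` field exist on `ConversationSummaryMemory`, but seeding it this way is my assumption):
```python
memory = ConversationSummaryMemory(
    llm=OpenAI(temperature=0), chat_memory=history, return_messages=True
)
# Seed the running summary from the pre-existing messages.
memory.buffer = memory.predict_new_summary(history.messages, "")
```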
### Expected behavior
It passes. | 'ConversationSummaryMemory' has no attribute 'from_messages' | https://api.github.com/repos/langchain-ai/langchain/issues/6636/comments | 2 | 2023-06-23T08:37:14Z | 2023-09-29T16:05:58Z | https://github.com/langchain-ai/langchain/issues/6636 | 1,771,057,110 | 6,636 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.206
python 3.11.3
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Code
```
tfretriever = TFIDFRetriever.from_texts(
    ["My name is Luis Valencia",
     "I am 70 years old",
     "I like gardening, baking and hockey"])

template = """
Use the following context (delimited by <ctx></ctx>) and the chat history (delimited by <hs></hs>) to answer the question:
------
<ctx>
{context}
</ctx>
------
<hs>
{chat_history}
</hs>
------
{question}
Answer:
"""

prompt = PromptTemplate(
    input_variables=["chat_history", "context", "question"],
    template=template,
)

st.session_state['chain'] = chain = ConversationalRetrievalChain.from_llm(llm,
    vectordb.as_retriever(),
    memory=memory,
    chain_type_kwargs={
        "verbose": True,
        "prompt": prompt,
        "memory": ConversationBufferMemory(
            memory_key="chat_history",
            input_key="question"),
    })
```
Error:
ValidationError: 1 validation error for ConversationalRetrievalChain chain_type_kwargs extra fields not permitted (type=value_error.extra)
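A hedged fix sketch: `ConversationalRetrievalChain.from_llm` takes `combine_docs_chain_kwargs` (not `chain_type_kwargs`) for the stuff chain's prompt, with memory staying a top-level argument.
```python
st.session_state['chain'] = chain = ConversationalRetrievalChain.from_llm(
    llm,
    vectordb.as_retriever(),
    memory=memory,
    combine_docs_chain_kwargs={"prompt": prompt},
    verbose=True,
)
```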
### Expected behavior
I should be able to provide a custom prompt to my conversational retrieval chain. Without the custom prompt it works and gets good answers from the vector db, but I can't use custom prompts. | ValidationError: 1 validation error for ConversationalRetrievalChain chain_type_kwargs extra fields not permitted (type=value_error.extra) | https://api.github.com/repos/langchain-ai/langchain/issues/6635/comments | 11 | 2023-06-23T08:13:12Z | 2023-11-03T04:33:18Z | https://github.com/langchain-ai/langchain/issues/6635 | 1,771,023,299 | 6,635 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Why does the langchain js version have a github repo document loader and this one can only load github issues?
### Motivation
-
### Your contribution
- | Github issues instead of Github repo? | https://api.github.com/repos/langchain-ai/langchain/issues/6631/comments | 1 | 2023-06-23T06:21:35Z | 2023-09-29T16:06:04Z | https://github.com/langchain-ai/langchain/issues/6631 | 1,770,856,353 | 6,631 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain-0.0.209
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
```python
import asyncio

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain


async def async_generate(chain):
    resp = await chain.arun(product="toothpaste")
    print(resp)


async def generate_concurrently():
    llm = OpenAI(temperature=0.9)
    prompt = PromptTemplate(
        input_variables=["product"],
        template="What is a good name for a company that makes {product}?",
    )
    chain = LLMChain(llm=llm, prompt=prompt)
    tasks = [async_generate(chain) for _ in range(3)]
    await asyncio.gather(*tasks)


asyncio.run(generate_concurrently())
```
There is no error and no answer until timeout; the only output is:
Retrying langchain.llms.openai.acompletion_with_retry.<locals>._completion_with_retry in 4.0 seconds...
I know this message means it is retrying, but nothing ever returns and it can't be stopped. The same problem occurs in Jupyter, with this code:
```python
import asyncio

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain


async def async_generate(chain):
    resp = await chain.arun(product="toothpaste")
    print(resp)


async def generate_concurrently():
    llm = OpenAI(temperature=0.9)
    prompt = PromptTemplate(
        input_variables=["product"],
        template="What is a good name for a company that makes {product}?",
    )
    chain = LLMChain(llm=llm, prompt=prompt)
    tasks = [async_generate(chain) for _ in range(3)]
    await asyncio.gather(*tasks)


await generate_concurrently()
```
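A debugging sketch that can at least turn the silent hang into a visible error, using only the standard library:
```python
# Bound each call so retries surface as TimeoutError instead of hanging.
async def async_generate(chain):
    resp = await asyncio.wait_for(chain.arun(product="toothpaste"), timeout=60)
    print(resp)
```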
### Expected behavior
The three prompts should complete concurrently and print their responses instead of hanging on the retry message until timeout; the same should hold in Jupyter. | Can't use arun acall, no return and can't stop | https://api.github.com/repos/langchain-ai/langchain/issues/6630/comments | 1 | 2023-06-23T06:18:54Z | 2023-09-29T16:06:09Z | https://github.com/langchain-ai/langchain/issues/6630 | 1,770,853,281 | 6,630 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```python
qa = ConversationalRetrievalChain.from_llm(AzureChatOpenAI(deployment_name="gpt-35-turbo"), db.as_retriever(), memory=memory)
print(qa.combine_docs_chain.llm_chain.prompt)
```
This prints:
```
ChatPromptTemplate(input_variables=['question', 'context'], output_parser=None, partial_variables={}, messages=[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=['context'], output_parser=None, partial_variables={}, template="Use the following pieces of context to answer the users question. \nIf you don't know the answer, just say that you don't know, don't try to make up an answer.\n----------------\n{context}", template_format='f-string', validate_template=True), additional_kwargs={}), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='{question}', template_format='f-string', validate_template=True), additional_kwargs={})])
```
How can I get the complete prompt, including the question and context?
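A hedged way to see the fully formatted prompt at call time: `LLMChain` prints the prompt after formatting when verbose, so flipping it on the inner chains should work (the attribute paths assume the default `from_llm` wiring):
```python
qa.combine_docs_chain.llm_chain.verbose = True   # prints the stuffed QA prompt
qa.question_generator.verbose = True             # prints the condense-question prompt
result = qa({"question": "your question here"})
```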
### Suggestion:
_No response_ | Issue: How to print the complete prompt that chain used | https://api.github.com/repos/langchain-ai/langchain/issues/6628/comments | 12 | 2023-06-23T04:03:35Z | 2024-05-17T16:06:03Z | https://github.com/langchain-ai/langchain/issues/6628 | 1,770,741,393 | 6,628 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain 0.0.209, Python 3.8.17
https://security.snyk.io/vuln/SNYK-PYTHON-LANGCHAIN-5725807
Hi, we are deploying an app in our environment to production with langchain as one of the packages.
Today, on Snyk this critical vulnerability showed up, and as a result we're blocked from deploying as Snyk flagged this out as critical.
Are there any plans to fix this soon?
Thank you very much.
<img width="1383" alt="image" src="https://github.com/hwchase17/langchain/assets/1635202/81aa2179-7c10-4f3c-9fa4-11042f43a9be">
### Who can help?
@hwchase17 @dev2049 @vowelparrot @bborn @Jflick58 @duckdoom4 @verm
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
My requirements.txt
#
# This file is autogenerated by pip-compile with Python
# by the following command:
#
# pip-compile --allow-unsafe --output-file=requirements.txt --resolver=backtracking requirements.in
#
aioboto3==11.2.0
# via python-commons
aiobotocore[boto3]==2.5.0
# via aioboto3
aiohttp==3.8.4
# via
# aiobotocore
# langchain
# openai
aioitertools==0.11.0
# via aiobotocore
aiosignal==1.3.1
# via aiohttp
alembic==1.10.4
# via -r requirements.in
anyio==3.7.0
# via
# httpcore
# starlette
# watchfiles
async-timeout==4.0.2
# via
# aiohttp
# langchain
asyncpg==0.27.0
# via -r requirements.in
attrs==23.1.0
# via
# aiohttp
# pytest
boto3==1.26.76
# via aiobotocore
botocore==1.29.76
# via
# aiobotocore
# boto3
# s3transfer
certifi==2023.5.7
# via
# httpcore
# httpx
# python-commons
# requests
cffi==1.15.1
# via cryptography
charset-normalizer==3.1.0
# via
# aiohttp
# python-commons
# requests
click==8.1.3
# via uvicorn
coverage==7.2.7
# via pytest-cov
cryptography==41.0.1
# via
# pyopenssl
# python-commons
dataclasses-json==0.5.8
# via langchain
dnspython==2.3.0
# via email-validator
email-validator==1.3.1
# via -r requirements.in
exceptiongroup==1.1.1
# via anyio
fastapi==0.95.2
# via -r requirements.in
frozenlist==1.3.3
# via
# aiohttp
# aiosignal
greenlet==2.0.2
# via sqlalchemy
gunicorn==20.1.0
# via python-commons
h11==0.14.0
# via
# httpcore
# uvicorn
httpcore==0.17.2
# via httpx
httptools==0.5.0
# via uvicorn
httpx==0.24.1
# via python-commons
idna==3.4
# via
# anyio
# email-validator
# httpx
# requests
# yarl
iniconfig==2.0.0
# via pytest
jmespath==1.0.1
# via
# boto3
# botocore
langchain==0.0.209
# via -r requirements.in
langchainplus-sdk==0.0.16
# via langchain
loguru==0.7.0
# via python-commons
mako==1.2.4
# via alembic
markdown-it-py==3.0.0
# via rich
markupsafe==2.1.3
# via mako
marshmallow==3.19.0
# via
# dataclasses-json
# marshmallow-enum
marshmallow-enum==1.5.1
# via dataclasses-json
mdurl==0.1.2
# via markdown-it-py
multidict==6.0.4
# via
# aiohttp
# yarl
mypy-extensions==1.0.0
# via typing-inspect
numexpr==2.8.4
# via langchain
numpy==1.24.3
# via
# -r requirements.in
# langchain
# numexpr
openai==0.27.8
# via -r requirements.in
openapi-schema-pydantic==1.2.4
# via langchain
packaging==23.1
# via
# marshmallow
# pytest
pluggy==1.2.0
# via pytest
py==1.11.0
# via pytest
pycparser==2.21
# via cffi
pydantic==1.10.9
# via
# fastapi
# langchain
# langchainplus-sdk
# openapi-schema-pydantic
# python-commons
pygments==2.15.1
# via rich
pyopenssl==23.2.0
# via python-commons
pytest==6.2.5
# via
# -r requirements.in
# pytest-asyncio
# pytest-cov
# pytest-mock
pytest-asyncio==0.18.3
# via -r requirements.in
pytest-cov==2.12.1
# via -r requirements.in
pytest-mock==3.6.1
# via -r requirements.in
python-commons @ ## masked internal repo ##
# via -r requirements.in
python-dateutil==2.8.2
# via botocore
python-dotenv==1.0.0
# via
# -r requirements.in
# uvicorn
pyyaml==6.0
# via
# langchain
# uvicorn
regex==2023.6.3
# via tiktoken
requests==2.31.0
# via
# langchain
# langchainplus-sdk
# openai
# tiktoken
rich==13.4.2
# via python-commons
s3transfer==0.6.1
# via boto3
six==1.16.0
# via python-dateutil
sniffio==1.3.0
# via
# anyio
# httpcore
# httpx
sqlalchemy[asyncio]==2.0.16
# via
# -r requirements.in
# alembic
# langchain
sse-starlette==1.6.1
# via -r requirements.in
starlette==0.27.0
# via
# fastapi
# python-commons
# sse-starlette
tenacity==8.2.2
# via
# langchain
# langchainplus-sdk
tiktoken==0.4.0
# via -r requirements.in
toml==0.10.2
# via
# pytest
# pytest-cov
tqdm==4.65.0
# via openai
typing-extensions==4.6.3
# via
# aioitertools
# alembic
# pydantic
# sqlalchemy
# starlette
# typing-inspect
typing-inspect==0.9.0
# via dataclasses-json
urllib3==1.26.16
# via
# botocore
# python-commons
# requests
uvicorn[standard]==0.21.1
# via
# -r requirements.in
# python-commons
uvloop==0.17.0
# via uvicorn
watchfiles==0.19.0
# via uvicorn
websockets==11.0.3
# via uvicorn
wrapt==1.15.0
# via aiobotocore
yarl==1.9.2
# via aiohttp
# The following packages are considered to be unsafe in a requirements file:
setuptools==68.0.0
# via
# gunicorn
# python-commons
### Expected behavior
Critical vulnerability would have to be fixed for us to deploy, thanks. | Critical Vulnerability Blocking Deployment | https://api.github.com/repos/langchain-ai/langchain/issues/6627/comments | 10 | 2023-06-23T03:47:23Z | 2023-08-28T21:35:45Z | https://github.com/langchain-ai/langchain/issues/6627 | 1,770,729,226 | 6,627 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain 0.0.209
The recent commit #6518 provided an OpenAIMultiFunctionsAgent class.
This multi-functions agent often fails when using custom tools that worked fine with the OpenAIFunctionsAgent.
```
File "/home/gene/endpoints/app/routers/query.py", line 44, in query3
result = await agent.acall(inputs={"input":query.query})
File "/home/gene/endpoints/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 215, in acall
raise e
File "/home/gene/endpoints/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 209, in acall
await self._acall(inputs, run_manager=run_manager)
File "/home/gene/endpoints/venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 1006, in _acall
next_step_output = await self._atake_next_step(
File "/home/gene/endpoints/venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 853, in _atake_next_step
output = await self.agent.aplan(
File "/home/gene/endpoints/venv/lib/python3.10/site-packages/langchain/agents/openai_functions_multi_agent/base.py", line 301, in aplan
agent_decision = _parse_ai_message(predicted_message)
File "/home/gene/endpoints/venv/lib/python3.10/site-packages/langchain/agents/openai_functions_multi_agent/base.py", line 110, in _parse_ai_message
tools = json.loads(function_call["arguments"])["actions"]
KeyError: 'actions'
```
**Example tool that FAILS:**
```
from typing import Optional, Type
from langchain.tools import BaseTool
from pydantic import BaseModel, Field
from langchain.callbacks.manager import AsyncCallbackManagerForToolRun, CallbackManagerForToolRun
class ProductInput(BaseModel):
prod_name: str = Field(description="Product Name or Type of Product")
class CustomProductTool(BaseTool):
name : str = "price_lookup"
description : str = "useful to look up pricing for a specific product or product type and shopping url of products offered by the Company's online website."
args_schema: Type[BaseModel] = ProductInput
def _run(self, prod_name: str, run_manager: Optional[CallbackManagerForToolRun] = None) -> dict:
# custom code here
products = {}
return products
async def _arun(self, prod_name: str, run_manager: Optional[AsyncCallbackManagerForToolRun] = None) -> dict:
return self._run(prod_name)
```
**Example tool that WORKS:**
```
from typing import Optional, Type
from langchain.tools import BaseTool
from ..src.OrderStatus import func_get_order_status, afunc_get_order_status
from langchain.callbacks.manager import AsyncCallbackManagerForToolRun, CallbackManagerForToolRun
from pydantic import BaseModel, Field
class OrderInput(BaseModel):
order_num: str = Field(description="order number")
class CustomOrderTool(BaseTool):
name = "order_status"
description = "useful for when you need to look up the shipping status of an order."
args_schema: Type[BaseModel] = OrderInput
def _run(self, order_num: str, run_manager: Optional[CallbackManagerForToolRun] = None) -> dict:
# Your custom logic here
return func_get_order_status(order_num)
async def _arun(self, order_num: str, run_manager: Optional[AsyncCallbackManagerForToolRun] = None) -> dict:
return await afunc_get_order_status(order_num)
```
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Instantiate an OpenAI multi-functions agent:
`agent = initialize_agent(tools,llm,agent=AgentType.OPENAI_MULTI_FUNCTIONS, verbose=True)`
Create a custom tool (example above):
```
tools=[
CustomOrderTool(return_direct=False),
CustomQAToolSources(llm=llm,vectorstore=general_vectorstore),
CustomProductTool(return_direct=False),
CustomEscalateTool(return_direct=False)
]
```
Call agent:
```
result = await agent.acall(inputs={"input":query.query})
```
### Expected behavior
The tools are very similar to each other, so I am not sure why one works and the other fails. It might have something to do with the different description lengths? As far as I can tell, the structures of the args_schema are the same between the two tools. Both tools work fine with OpenAIFunctionsAgent.
I expected tools would work on OpenAIMultiFunctionAgent. Instead, **KeyError: 'actions'** results. Somehow the transformation of langchain tools to OpenAI function schema is not working as expected for OpenAIMultiFunctionAgent. | OpenAIMultiFunctionsAgent KeyError: 'actions' on custom tools | https://api.github.com/repos/langchain-ai/langchain/issues/6624/comments | 11 | 2023-06-23T02:49:01Z | 2024-02-19T16:09:16Z | https://github.com/langchain-ai/langchain/issues/6624 | 1,770,691,161 | 6,624 |
[
"langchain-ai",
"langchain"
] | ### Feature request
How can I make a toolset be dependent on some situation?
In the examples I have seen so far, and from what I was able to piece together from reading the code a while ago (you guys work fast)... is there a way to make certain tools available to an Agent only at certain times, beyond changing the agent's tool stack myself?
### Motivation
The idea is to keep the agent prompt template as lean as possible. The goal is to be able to do something like leading an agent through a process with varying toolboxes given the step that it is on. I have seen that it is possible using function calling, but is it just possible in any type of Agent?
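For illustration, a rough sketch of the kind of workaround I mean today, rebuilding the executor with a step-specific tool list (all the tool names here are hypothetical placeholders):

```python
# Hedged sketch: swap toolsets per process step by re-initializing the agent.
from langchain.agents import initialize_agent, AgentType

TOOLSETS = {
    "research": [search_tool],       # hypothetical tools defined elsewhere
    "drafting": [write_file_tool],
}

def executor_for(step: str, llm):
    # Keeps the prompt lean: only the current step's tools are described.
    return initialize_agent(
        TOOLSETS[step],
        llm,
        agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
        verbose=True,
    )
```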
### Your contribution
This is a feature request. I can make a contribution of looking at the code and adding this, but I am not sure if it is already possible or planned seeing as you move fast! | Variable or Conditional Toolbox | https://api.github.com/repos/langchain-ai/langchain/issues/6621/comments | 1 | 2023-06-23T01:26:22Z | 2023-09-29T16:06:14Z | https://github.com/langchain-ai/langchain/issues/6621 | 1,770,602,411 | 6,621 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Is it possible to integrate [replit-code-v1-3b](https://replicate.com/replit/replit-code-v1-3b) as an [LLM Model](https://python.langchain.com/en/latest/modules/models.html) or an [Agent](https://python.langchain.com/en/latest/modules/agents.html) with [LangChain](https://github.com/hwchase17/langchain), and [chain](https://python.langchain.com/en/latest/modules/chains.html) it in a complex usecase?
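If it helps, a minimal sketch of a custom LLM wrapper, assuming the model is served via Replicate; the model/version string is a placeholder, and LangChain's built-in `Replicate` LLM may already cover this case:

```python
# Hedged sketch of a custom LLM wrapper around a hosted replit-code model.
from typing import Any, List, Mapping, Optional
import replicate
from langchain.llms.base import LLM

class ReplitCodeLLM(LLM):
    model: str = "replit/replit-code-v1-3b:<version-hash>"  # placeholder id

    @property
    def _llm_type(self) -> str:
        return "replit-code-v1-3b"

    def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str:
        output = replicate.run(self.model, input={"prompt": prompt})
        return "".join(output)  # the endpoint streams text chunks

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        return {"model": self.model}
```

Once wrapped like this, it should be usable anywhere an `LLM` is accepted, e.g. inside an `LLMChain` as part of a larger chain.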
### Suggestion:
Any help / hints on the same would be appreciated! | How can I implement a custom LangChain class wrapper (LLM model/Agent) for replit-code-v1-3b model? | https://api.github.com/repos/langchain-ai/langchain/issues/6620/comments | 1 | 2023-06-23T00:59:38Z | 2023-09-29T16:06:18Z | https://github.com/langchain-ai/langchain/issues/6620 | 1,770,580,833 | 6,620 |
[
"langchain-ai",
"langchain"
For models like "h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v2", the generated output doesn't contain the prompt, so it is wrong to unconditionally strip the first `len(prompt)` characters from the response (see the sketch below and the references that follow).
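A hedged sketch of a defensive fix, keyed off the `text[len(prompt):]` line referenced below: only strip the prompt when the pipeline actually echoed it back.

```python
# Sketch: strip the prompt only if the pipeline echoed it back verbatim.
text = response[0]["generated_text"]
completion = text[len(prompt):] if text.startswith(prompt) else text
```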
https://github.com/hwchase17/langchain/blob/9d42621fa4385e519f702b7005d475781033188c/langchain/llms/huggingface_pipeline.py#L172C13-L172C64
https://huggingface.co/h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v2/blob/main/h2oai_pipeline.py | Truncate HF pipeline response | https://api.github.com/repos/langchain-ai/langchain/issues/6619/comments | 1 | 2023-06-23T00:30:59Z | 2023-09-29T16:06:24Z | https://github.com/langchain-ai/langchain/issues/6619 | 1,770,563,613 | 6,619 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hello, Team
How can I integrate SerpAPI with a custom ChatGLM model? It looks like my code is not correct, and I can't find useful information on the internet. I hope posting here can help me resolve this issue. Thanks in advance.
```
import time
import logging
import requests
from typing import Optional, List, Dict, Mapping, Any

import langchain
from langchain.llms.base import LLM
# from langchain.cache import InMemoryCache

# ------------------------------
import os
os.environ["SERPAPI_API_KEY"] = '<your-serpapi-api-key>'  # key redacted
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
# ------------------------------

logging.basicConfig(level=logging.INFO)
# enable the LLM cache
# langchain.llm_cache = InMemoryCache()


class ChatGLM(LLM):
    # model service URL
    url = "http://18.183.251.31:8000"

    @property
    def _llm_type(self) -> str:
        return "chatglm"

    def _construct_query(self, prompt: str) -> Dict:
        """Construct the request body."""
        query = {
            "prompt": prompt
        }
        return query

    @classmethod
    def _post(cls, url: str, query: Dict) -> Any:
        """Send a POST request."""
        _headers = {"Content_Type": "application/json"}
        with requests.session() as sess:
            resp = sess.post(url,
                             json=query,
                             headers=_headers,
                             timeout=60)
        return resp

    def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str:
        """Query the model service and return its response."""
        # construct query
        query = self._construct_query(prompt=prompt)
        print(query)
        # post
        resp = self._post(url=self.url, query=query)
        if resp.status_code == 200:
            resp_json = resp.json()
            predictions = resp_json["response"]
            return predictions
        else:
            return "model request failed"

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        """Get the identifying parameters."""
        _param_dict = {
            "url": self.url
        }
        return _param_dict


if __name__ == "__main__":
    llm = ChatGLM()
    # ------------------------------
    tools = load_tools(["serpapi"], llm=llm)
    agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
    agent.run("What's the date today? What great events have taken place today in history?")
    # ------------------------------
    # while True:
    #     prompt = input("Human: ")
    #
    #     begin_time = time.time() * 1000
    #     # call the model
    #     response = llm(prompt, stop=["you"])
    #     end_time = time.time() * 1000
    #     used_time = round(end_time - begin_time, 3)
    #     # logging.info(f"chatGLM process time: {used_time}ms")
    #     print("chatGLM process time %s" % {used_time})
    #     print(f"ChatGLM: {response}")
```
### Suggestion:
_No response_ | How can I integrate SerpAPI with custom ChatGLM model | https://api.github.com/repos/langchain-ai/langchain/issues/6618/comments | 2 | 2023-06-23T00:05:19Z | 2023-10-01T16:05:53Z | https://github.com/langchain-ai/langchain/issues/6618 | 1,770,546,185 | 6,618 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I have the following BaseModel:
```python
class MainMagnetClass(BaseModel):
main_materials: List[str] = Field(description="main material")
additional_doping_elements: List[str] = Field(description="doping")
```
which can be instantiated as:
```python
instance = PydanticOutputParser(pydantic_object=MainMagnetClass)
```
I would like to know if there is a way to dynamically load the description of the two fields.
I tried with `construct()`, but it doesn't seem to work.
The reason is that I'm generating a set of queries and for each of them I want to have different "description" for the PydanticOutputParser that is going to be used.
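For what it's worth, a hedged sketch using pydantic's `create_model` to rebuild the schema per query, so each parser gets its own descriptions (the dict keys here just mirror the field names above):

```python
# Hedged sketch: build the pydantic model dynamically per query.
from typing import List
from pydantic import create_model, Field
from langchain.output_parsers import PydanticOutputParser

def make_parser(descriptions: dict) -> PydanticOutputParser:
    Model = create_model(
        "MainMagnetClass",
        main_materials=(List[str], Field(description=descriptions["main_materials"])),
        additional_doping_elements=(
            List[str],
            Field(description=descriptions["additional_doping_elements"]),
        ),
    )
    return PydanticOutputParser(pydantic_object=Model)
```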
### Suggestion:
I would load a dict with the fields and their description and pass it to the object so that I could override the default descriptions. | Dynamic fields for BaseModels in PydanticOutputParser? | https://api.github.com/repos/langchain-ai/langchain/issues/6617/comments | 5 | 2023-06-22T23:40:52Z | 2024-03-28T16:05:53Z | https://github.com/langchain-ai/langchain/issues/6617 | 1,770,528,462 | 6,617 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain version: 0.0.209
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
https://python.langchain.com/docs/modules/model_io/models/chat/integrations/google_vertex_ai_palm
### Expected behavior
I get an error saying "TypeError: _ChatSessionBase.send_message() got an unexpected keyword argument 'context'" when I run `chat(messages)` command mentioned in https://python.langchain.com/docs/modules/model_io/models/chat/integrations/google_vertex_ai_palm.
This is probably because `ChatSession.send_message` does not have a `context` argument, while `ChatVertexAI._generate` automatically adds `context` to the params because chat-bison is a non-code model.
[
"langchain-ai",
"langchain"
] | ### System Info
When querying with no context (emptyStore below) GPU memory goes up to 8GB and after the chain completes, GPU memory goes back down to 630MB.
When using a ChromaDB to provide vector context, GPU memory is never released. Memory usage goes up to 8GB and stays there. Once enough calls have been made, the program will crash with an out of memory error.
I have tried manually deleting the variables associated with the DB and langchain, running garbage collection... I am unable to free this GPU memory. Is there a manual method to free this memory that I could employ or some other workaround?
I started with using langchain 201 and noticed the issue. The issue persists when using the latest 209.
```
def queryGpt(query):
    # Get our llm and embeddings
    llm = get_llm()
    embeddings = get_embeddings()

    # Even if the user does not specify a vector store to use, it is necessary
    # to pass in a retriever to the RetrievalQA chain.
    docs = [
        Document(page_content=""),
        Document(page_content=""),
        Document(page_content=""),
        Document(page_content=""),
    ]
    emptyStore = Chroma.from_documents(docs, embeddings)  # from_documents needs an embedding function
    retriever = emptyStore.as_retriever()

    if request.content_type == "application/json":
        data = request.get_json()
        store_id = data.get("store_id")
        store_collection = data.get("store_collection")
        if store_id and store_collection:
            vector_stores = load_vector_stores()
            found: VectorStore | None = None
            for store in vector_stores:
                if store["id"] == store_id:
                    found = store
            if not found:
                print(f"Warning: vector store not found id:{store_id}")
            else:
                # print(f"Using vector store '{found['name']}' id:{found['id']} collection {store_collection}")
                client = get_chroma_instance(found["dirname"])
                # embeddings = HuggingFaceEmbeddings(model_name=embeddings_model_name)
                db = Chroma(
                    client=client,
                    embedding_function=embeddings,
                    collection_name=store_collection,
                )
                retriever = db.as_retriever()

    print('Answering question')
    qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=retriever)
    # Get the answer from the chain
    res = qa(query)
```
We are using the latest Vicuna 13b. With `all-MiniLM-L6-v2` used for the embeddings.
We are in Azure using Tesla GPUs, Ubuntu 20.04, CUDA 12.1.
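For reference, the manual cleanup I attempted looks roughly like this (a sketch: variable names mirror the snippet above, and it assumes a torch-backed embedding model):

```python
# Sketch of the attempted cleanup: drop references, collect garbage, and ask
# torch to release cached CUDA blocks back to the driver.
import gc
import torch

del qa, retriever, db, emptyStore
gc.collect()
torch.cuda.empty_cache()
```

Even after this, nvidia-smi still reports the ~8GB as allocated.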
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run a QA chain with a ChromaDB enabled.
### Expected behavior
I would expect the memory to be freed upon completion of the chain. | RetrievalQA.from_chain_type does not release GPU memory when given ChromaDB context | https://api.github.com/repos/langchain-ai/langchain/issues/6608/comments | 9 | 2023-06-22T20:31:11Z | 2024-06-20T16:08:56Z | https://github.com/langchain-ai/langchain/issues/6608 | 1,770,352,279 | 6,608 |
[
"langchain-ai",
"langchain"
] | ### System Info
Tried on Colab.
Version: [v0.0.209](https://github.com/hwchase17/langchain/releases/tag/v0.0.209)
Platform: Google Colab
Python: 3.10
### Who can help?
@hwchase17
### Information
- [x] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Take a Figma file and use Langchain's Figma plugin to get the JSON from API.
2. Use the index = VectorstoreIndexCreator().from_loaders([figma_loader]) to get the index.
3. And then create doc retriever using figma_doc_retriever = index.vectorstore.as_retriever()
When we query ChatGPT/LLMs, the code splits the original document into chunks and retrieves by similarity. That works well for unstructured documents, but badly for JSON: it breaks the structure and does not carry the right content either. For example, this call, which retrieves the nearest nodes by similarity:
`relevant_nodes = figma_doc_retriever.get_relevant_documents("Slack Integration")`
gave me this output:
[Document(page_content='name: Dark Mode\nlastModified: 2023-06-19T07:26:34Z\nthumbnailUrl: \nversion: 3676773001\nrole: owner\neditorType: figma\nlinkAccess: view\nnodes: \n10:138: \ndocument: \nid: 10:138\nname: Slack Integration\ntype: FRAME\nscrollBehavior: SCROLLS\nblendMode: PASS_THROUGH\nchildren: \nid: 10:139\nname: div.sc-dwFxSa\ntype: FRAME\nscrollBehavior: SCROLLS\nblendMode: PASS_THROUGH\nchildren: \nid: 10:140\nname: Main\ntype: FRAME\nscrollBehavior: SCROLLS\nblendMode: PASS_THROUGH\nchildren: \nid: 10:141\nname: div.sc-iAVVkm\ntype: FRAME\nscrollBehavior: SCROLLS\nblendMode: PASS_THROUGH\nchildren: \nid: 10:142\nname: div.sc-bcXHqe\ntype: FRAME\nscrollBehavior: SCROLLS\nblendMode: PASS_THROUGH', metadata={'source': ''}),
Document(page_content='id: 10:178\nname: Send project updates to a Slack channel\ntype: TEXT\nscrollBehavior: SCROLLS\nblendMode: PASS_THROUGH\nabsoluteBoundingBox: \nx: -3084.0\ny: 3199.0\nwidth: 250.0\nheight: 16.0\n\nabsoluteRenderBounds: \nx: -3083.335205078125\ny: 3202.1669921875\nwidth: 247.89453125\nheight: 12.4921875\n\nconstraints: \nvertical: TOP\nhorizontal: LEFT\n\nlayoutAlign: INHERIT\nlayoutGrow: 0.0\nfills: \nblendMode: NORMAL\ntype: SOLID\ncolor: \nr: 0.9960784316062927\ng: 1.0\nb: 0.9960784316062927\na: 1.0\n\n\nstrokes: \nstrokeWeight: 1.0\nstrokeAlign: OUTSIDE\neffects: \ncharacters: Send project updates to a Slack channel\nstyle: \nfontFamily: Inter\nfontPostScriptName: None\nfontWeight: 500\ntextAutoResize: WIDTH_AND_HEIGHT\nfontSize: 13.0\ntextAlignHorizontal: LEFT\ntextAlignVertical: CENTER\nletterSpacing: 0.0\nlineHeightPx: 15.732954025268555\nlineHeightPercent: 100.0\nlineHeightUnit: INTRINSIC_%\n\nlayoutVersion: 3\ncharacterStyleOverrides: \nstyleOverrideTable: \n\nlineTypes: NONE\nlineIndentations: 0', metadata={'source': ''}),
(2 more such nodes)
### Expected behavior
For JSON, the splitter should start from the innermost JSON objects and work its way outwards (especially for Figma), so the LLM gets a more precise understanding of the structure and produces the desired output. | For JSON loaders - like a Figma Design - similarity does not work, and ends up with the wrong output. | https://api.github.com/repos/langchain-ai/langchain/issues/6606/comments | 1 | 2023-06-22T20:09:41Z | 2023-09-30T16:05:58Z | https://github.com/langchain-ai/langchain/issues/6606 | 1,770,326,709 | 6,606 |
[
"langchain-ai",
"langchain"
] | ### System Info
Hello,
When using AIPluginTool with ChatOpenAI, sometimes the chain calls the plugin and sometimes the response is something like "the user can call the url ... to get the response". Why is that?
My code:
```python
import os
import openai
from dotenv import load_dotenv, find_dotenv
from langchain.chat_models import ChatOpenAI
from langchain.tools import AIPluginTool
from langchain.agents import load_tools, ConversationalChatAgent, ZeroShotAgent
from langchain.chains.conversation.memory import ConversationBufferWindowMemory
from langchain.agents.agent import AgentExecutor

tool = AIPluginTool.from_plugin_url("http://localhost:5003/.well-known/ai-plugin.json")
tools2 = load_tools(["requests_get"])
tools = [tool, tools2[0]]

_ = load_dotenv(find_dotenv())  # read local .env file
openai.api_key = os.getenv('OPENAI_API_KEY')

llm = ChatOpenAI(
    openai_api_key=os.getenv('OPENAI_API_KEY'),
    temperature=0,
    model_name='gpt-3.5-turbo',
)

prefix = """Have a conversation with a human, answering the following questions as best you can. You have access to the following tools:"""

memory = ConversationBufferWindowMemory(
    memory_key="chat_history",
    k=5,
    return_messages=True,
)

custom_agent = ConversationalChatAgent.from_llm_and_tools(llm=llm, tools=tools, system_message=prefix)
agent_executor = AgentExecutor.from_agent_and_tools(agent=custom_agent, tools=tools, memory=memory)
agent_executor.verbose = True

print(agent_executor.agent.llm_chain.prompt)

resp = agent_executor.run(input="What are my store orders for userId Leo ?")
print(resp)
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
execute the code , two or three times. you will get a different response
### Expected behavior
call the plugin and get the response from http://localhost:5003/order/Leo
| response does not call the plugin | https://api.github.com/repos/langchain-ai/langchain/issues/6599/comments | 2 | 2023-06-22T16:03:31Z | 2023-10-23T16:07:42Z | https://github.com/langchain-ai/langchain/issues/6599 | 1,769,997,382 | 6,599 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I need to upgrade our Langchain version due to a security issue flagged in version 0.0.27 (see https://security.snyk.io/vuln/SNYK-PYTHON-LANGCHAIN-5411357).
However, I can't do this because Langchain depends on SQLAlchemy 2.0, while we use 1.4.
1. Why is SQLAlchemy 2.0 needed? It might only be useful for a tiny feature out of all the Langchain functionality...
2. SQLAlchemy 1.4 is still more widely used than 2.0
### Suggestion:
Don't force 2.0; allow >=1.4 instead, which should support the same syntax -> https://docs.sqlalchemy.org/en/14/ | Issue: why SQLAlchemy 2.0 is forced? | https://api.github.com/repos/langchain-ai/langchain/issues/6597/comments | 1 | 2023-06-22T15:48:15Z | 2023-09-28T16:05:39Z | https://github.com/langchain-ai/langchain/issues/6597 | 1,769,970,133 | 6,597 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain 0.27
Python 3.7
Amazon Linux
### Who can help?
@ag
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
At launch, after recreating python venv and reinstalling latest version of langchain the error message is:
ImportError: cannot import name 'RetrievalQAWithSourcesChain' from 'langchain.chains'
### Expected behavior
This import should not cause an error. | ImportError: cannot import name 'RetrievalQAWithSourcesChain' from 'langchain.chains' | https://api.github.com/repos/langchain-ai/langchain/issues/6596/comments | 1 | 2023-06-22T15:47:32Z | 2023-06-28T16:18:28Z | https://github.com/langchain-ai/langchain/issues/6596 | 1,769,968,762 | 6,596 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
We are trying to summarize contents of some URLs using Vertex AI. Below is the code snippet
```
from langchain.llms import VertexAI
from langchain.chains.summarize import load_summarize_chain
llm = VertexAI(temperature=0.5, max_output_tokens=1024)
chain = load_summarize_chain(llm, chain_type="map_reduce")
def digest(url, driver):
# Get page source HTML
html = driver.page_source
# Parse HTML with BeautifulSoup
soup = BeautifulSoup(html, 'html.parser')
get_html_and_add(soup, url)
def get_html_and_add(soup: BeautifulSoup, url: str):
text = soup.get_text()
if soup.title:
title = str(soup.title.string)
else:
title = ""
vector_store.add_documents(summary_text(text, url, title))
vector_store.persist()
def summary_text(docs: str, url: str, title: str):
metadata: Dict[str, Union[str, None]] = {
"source": url,
"title": title,
}
docs = [Document(page_content=docs, metadata=metadata)]
val = chain.run(docs)
print(f'summary for url {url} is \n {val}')
return [Document(page_content=val, metadata=metadata)]
```
The code works fine for most of the URLs; however, for a few URLs we receive the attached error when `chain.run(docs)` is invoked in the method `summary_text`:
[Error.txt](https://github.com/hwchase17/langchain/files/11835133/Error.txt)
Not able to identify the root cause, any help is appreciated.
Thank you!
PS: Unable to share the URL as it is an internal URL. The langchain version that we use is the latest as of today i.e., 0.0.208.
### Suggestion:
_No response_ | Issue: Summarization using Vertex AI - returns 400 Error on certain cases | https://api.github.com/repos/langchain-ai/langchain/issues/6592/comments | 1 | 2023-06-22T14:44:12Z | 2023-06-29T12:59:38Z | https://github.com/langchain-ai/langchain/issues/6592 | 1,769,837,088 | 6,592 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Consider the following example:
```python
# All the dependencies being used
import openai
import os
from dotenv import load_dotenv
from langchain.chains import RetrievalQA
from langchain.document_loaders import TextLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.chat_models import ChatOpenAI
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.prompts import PromptTemplate
load_dotenv()
openai.organization = os.getenv("OPENAI_ORG_ID_")
openai.api_key = os.getenv("OPENAI_API_KEY")
# Load up a text file
loader = TextLoader("foo.txt")
documents = loader.load()
# Split text into chunks
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
# Set up chroma
embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_documents(texts, embeddings)
# I want a custom prompt that asks for the output in JSON format
prompt_template = """
Use the following pieces of context to answer the question at the end.
If you don't know the answer, output 'N/A', don't try to
make up an answer.
{context}
Question: {question}
Answer in JSON format:
"""
PROMPT = PromptTemplate(
template=prompt_template, input_variables=["context", "question"]
)
llm = ChatOpenAI(model_name="gpt-3.5", temperature=0)
# This is what's done in the Python docs
chain_type_kwargs = {'prompt': PROMPT}
qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=docsearch.as_retriever(), chain_type_kwargs=chain_type_kwargs)
query = "Foo bar"
res = qa.run(query)
```
If we use anything other than `"stuff"` for the `chain_type` parameter in `RetrievalQA.from_chain_type`, we'll get the following error from that line:
```terminal
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chains/retrieval_qa/base.py", line 91, in from_chain_type
combine_documents_chain = load_qa_chain(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chains/question_answering/__init__.py", line 238, in load_qa_chain
return loader_mapping[chain_type](
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chains/question_answering/__init__.py", line 196, in _load_refine_chain
return RefineDocumentsChain(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/load/serializable.py", line 61, in __init__
super().__init__(**kwargs)
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for RefineDocumentsChain
prompt
extra fields not permitted (type=value_error.extra)
```
Is there anything in particular that prevents custom prompts from being used for different chain types? Am I missing something? Open to any help and/or guidance.
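In case it's useful to others hitting this: the error suggests that the non-"stuff" loaders simply take differently named prompt kwargs rather than a single `prompt`. A hedged sketch, assuming `map_reduce` accepts `question_prompt`/`combine_prompt` (and `refine` accepts `question_prompt`/`refine_prompt`) via `chain_type_kwargs`:

```python
# Hedged sketch: map_reduce expects question_prompt/combine_prompt, not `prompt`.
question_prompt = PromptTemplate(
    template=(
        "Use the following portion of a long document to see if any of the text "
        "is relevant to answer the question.\n{context}\n"
        "Question: {question}\nRelevant text, if any:"
    ),
    input_variables=["context", "question"],
)
# The combine step sees the mapped outputs under the `summaries` variable.
combine_prompt = PromptTemplate(
    template=(
        "Given the following extracted parts of a long document and a question, "
        "create a final answer. If you don't know the answer, output 'N/A'.\n\n"
        "{summaries}\n\nQuestion: {question}\nAnswer in JSON format:"
    ),
    input_variables=["summaries", "question"],
)

qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="map_reduce",
    retriever=docsearch.as_retriever(),
    chain_type_kwargs={
        "question_prompt": question_prompt,
        "combine_prompt": combine_prompt,
    },
)
```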
### Motivation
I'm trying to perform QA on a large block of text and so using map_reduce or refine is preferable over stuff. I also want to perform the QA with a custom prompt as I need the chain's output to be in JSON format for parsing. When using stuff for text that doesn't surpass the token limit, it works as expected.
### Your contribution
Happy to contribute via a PR if someone identifies that what I'm suggesting isn't impossible. | Custom prompts for chain types that aren't "stuff" in RetrievalQA | https://api.github.com/repos/langchain-ai/langchain/issues/6590/comments | 6 | 2023-06-22T12:48:37Z | 2023-11-25T16:08:59Z | https://github.com/langchain-ai/langchain/issues/6590 | 1,769,608,656 | 6,590 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
https://python.langchain.com/docs/modules/data_connection/document_transformers/text_splitters/recursive_text_splitter
this class can't be found in the text_splitter.py file:
`from langchain.text_splitter import RecursiveCharacterTextSplitter`
This import returns an error.
### Idea or request for content:
_No response_ | DOC: RecursiveTextSplitter function doesn't exist | https://api.github.com/repos/langchain-ai/langchain/issues/6589/comments | 3 | 2023-06-22T12:48:29Z | 2023-11-26T16:08:54Z | https://github.com/langchain-ai/langchain/issues/6589 | 1,769,608,451 | 6,589 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
I haven't been able to find any documentation on a comprehensive list of pre-built tools available on Langchain, for example, there is nothing in the documentation that suggests we're able to load the "llm-math" tool?
### Idea or request for content:
It would be good to have a list of all pre-built tools! | Pre-built tool list | https://api.github.com/repos/langchain-ai/langchain/issues/6586/comments | 2 | 2023-06-22T11:29:09Z | 2023-09-28T16:05:43Z | https://github.com/langchain-ai/langchain/issues/6586 | 1,769,487,548 | 6,586 |
[
"langchain-ai",
"langchain"
] | ### Feature request
The current implementation of ConversationBufferMemory lacks the capability to clear the memory history. When using the load_qa_chain function with ConversationBufferMemory and uploading the abc.pdf file for the first time, subsequent questions based on that document yield expected answers. However, if I then change the file to 123.pdf and ask the same questions as before, the system provides the same answers as those given for the previous pdf.
Unfortunately, I have not found a clear_history function within the ConversationBufferMemory, which would enable me to reset or remove the previous memory records.
### Motivation
Add a clear_history method under ConversationBufferMemory to clear all previously saved messages.
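(A hedged aside: some versions appear to expose a `clear()` method on chat-memory objects that resets `chat_memory.messages`; if the installed version has it, resetting between files might look like the sketch below.)

```python
# Hedged sketch: assumes the installed version exposes BaseChatMemory.clear().
memory = ConversationBufferMemory(memory_key="chat_history")
# ... run load_qa_chain over abc.pdf ...
memory.clear()  # drop the abc.pdf history before switching documents
# equivalently: memory.chat_memory.messages = []
# ... run load_qa_chain over 123.pdf ...
```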
### Your contribution
no | Not able to clear Conversationbuffermemory. | https://api.github.com/repos/langchain-ai/langchain/issues/6585/comments | 4 | 2023-06-22T11:23:00Z | 2023-07-17T14:42:27Z | https://github.com/langchain-ai/langchain/issues/6585 | 1,769,478,728 | 6,585 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Can we have a create_documents function for MarkdownHeaderTextSplitter that creates documents based on the splits?
### Motivation
MarkdownHeaderTextSplitter only has split_text, and I'm not sure how to get Documents from the list of dicts it returns (see the sketch below).
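A hedged workaround sketch, assuming `split_text` returns dicts with `"content"` and `"metadata"` keys (as in recent releases):

```python
# Sketch: wrap MarkdownHeaderTextSplitter output into Documents manually.
from langchain.docstore.document import Document
from langchain.text_splitter import MarkdownHeaderTextSplitter

splitter = MarkdownHeaderTextSplitter(
    headers_to_split_on=[("#", "h1"), ("##", "h2")]
)
splits = splitter.split_text(markdown_text)
docs = [Document(page_content=s["content"], metadata=s["metadata"]) for s in splits]
```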
### Your contribution
... | Create_documents for MarkdownHeaderTextSplitter? | https://api.github.com/repos/langchain-ai/langchain/issues/6583/comments | 1 | 2023-06-22T10:26:00Z | 2023-09-02T03:34:02Z | https://github.com/langchain-ai/langchain/issues/6583 | 1,769,393,945 | 6,583 |
[
"langchain-ai",
"langchain"
] | ### System Info
latest version
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Typo on :
https://github.com/hwchase17/langchain/blob/d50de2728f95df0ffc59c538bd67e116a8e75a53/langchain/vectorstores/weaviate.py#L49
Instal -> install
### Expected behavior
typo corrected | Typo | https://api.github.com/repos/langchain-ai/langchain/issues/6582/comments | 0 | 2023-06-22T09:34:08Z | 2023-06-23T21:56:55Z | https://github.com/langchain-ai/langchain/issues/6582 | 1,769,304,923 | 6,582 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Improve PubMedAPIWrapper to get the PubMed ID and/or DOI and/or journal information returned.
Please rename
langchain/utilities/pu**p**med.py
to
langchain/utilities/pu**b**med.py
### Motivation
A user of a chat model can ask the model to provide a link to the original literature to verify if the answer of the model makes sense.
### Your contribution
None. | Improve PubMedAPIWrapper to get the PubMed ID and/or DOI and/or journal information returned. | https://api.github.com/repos/langchain-ai/langchain/issues/6581/comments | 1 | 2023-06-22T09:30:14Z | 2023-09-28T16:05:48Z | https://github.com/langchain-ai/langchain/issues/6581 | 1,769,299,151 | 6,581 |
[
"langchain-ai",
"langchain"
] | ### Feature request
You now support Hugging Face Inference endpoints, could you support also HF Models deployed in Azure ML as Managed endpoints?
It should be a similar implementation, its a REST API
### Motivation
My company would like to use Azure services only :) and many companies are like this
### Your contribution
I could help with some guidance. | HuggingFace Models as Azure ML Managed endpoints | https://api.github.com/repos/langchain-ai/langchain/issues/6579/comments | 2 | 2023-06-22T08:33:39Z | 2023-08-14T17:42:55Z | https://github.com/langchain-ai/langchain/issues/6579 | 1,769,210,319 | 6,579 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.207
platform ubuntu
python 3.9
### Who can help?
@hwaking @eyurtsev @tomaspiaggio
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Python script:
```
from langchain.embeddings import VertexAIEmbeddings
from langchain.llms import VertexAI  # import added; missing from the original snippet
from langchain.vectorstores import MatchingEngine

embeddings = VertexAIEmbeddings()
llm = VertexAI(model_name='text-bison')

texts = ['The cat sat on', 'the mat.', 'I like to', 'eat pizza for', 'dinner.', 'The sun sets', 'in the west.']

vector_store = MatchingEngine.from_components(
    project_id=project,
    region=location,
    gcs_bucket_name='bucket_name',
    index_id="index_id",
    endpoint_id="endpoint_id",
    embedding=embeddings,
)
```
error message
[error.txt](https://github.com/hwchase17/langchain/files/11830103/error.txt)
### Expected behavior
expected behavior is pushing the vectors to vectorstore | unable to use matching engine | https://api.github.com/repos/langchain-ai/langchain/issues/6577/comments | 2 | 2023-06-22T07:26:32Z | 2023-12-06T08:17:35Z | https://github.com/langchain-ai/langchain/issues/6577 | 1,769,104,042 | 6,577 |
[
"langchain-ai",
"langchain"
] | ### System Info
Version:
PyAthena[SQLAlchemy]==2.25.2
langchain==0.0.166
sqlalchemy==1.4.47
Python==3.10.10
### Who can help?
@hwchase17 @agola11 @ey
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1- I'm creating the engine_athena with the following connection string:
```python
engine_athena = create_engine('awsathena+rest:/@athena.us-east-1.amazonaws.com:443/<schema>?s3_staging_dir=<S3 directory>&work_group=primary')
```
2- `db = SQLDatabase(engine_athena)`
3- However, I'm getting the error `NoSuchTableError: <table_name>`
4- I confirmed that the table exists and I'm able to query it directly using:
```python
with engine_athena.connect() as connection:
    result = connection.execute(text("SELECT * FROM <table_name> limit 10"))
    for row in result:
        print(row)
```
### Expected behavior
Expect to receive a Connection is established successfully message.
Any pointers on how I can resolve this issue would be appreciated. Here is the full error:
```
---------------------------------------------------------------------------
NoSuchTableError Traceback (most recent call last)
Cell In[7], line 3
1 # Create the connection string (SQLAlchemy engine)
2 engine_athena = create_engine('awsathena+rest://:443/<table_name>?s3_staging_dir=<S3 directory>/&work_group=primary')
----> 3 db = SQLDatabase(engine_athena)
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/langchain/sql_database.py:98, in SQLDatabase.__init__(self, engine, schema, metadata, ignore_tables, include_tables, sample_rows_in_table_info, indexes_in_table_info, custom_table_info, view_support)
96 self._metadata = metadata or MetaData()
97 # including view support if view_support = true
---> 98 self._metadata.reflect(
99 views=view_support,
100 bind=self._engine,
101 only=list(self._usable_tables),
102 schema=self._schema,
103 )
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/sqlalchemy/sql/schema.py:4901, in MetaData.reflect(self, bind, schema, views, only, extend_existing, autoload_replace, resolve_fks, **dialect_kwargs)
4899 for name in load:
4900 try:
-> 4901 Table(name, self, **reflect_opts)
4902 except exc.UnreflectableTableError as uerr:
4903 util.warn("Skipping table %s: %s" % (name, uerr))
File <string>:2, in __new__(cls, *args, **kw)
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/sqlalchemy/util/deprecations.py:375, in deprecated_params.<locals>.decorate.<locals>.warned(fn, *args, **kwargs)
368 if m in kwargs:
369 _warn_with_version(
370 messages[m],
371 versions[m],
372 version_warnings[m],
373 stacklevel=3,
374 )
--> 375 return fn(*args, **kwargs)
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/sqlalchemy/sql/schema.py:618, in Table.__new__(cls, *args, **kw)
616 return table
617 except Exception:
--> 618 with util.safe_reraise():
619 metadata._remove_table(name, schema)
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py:70, in safe_reraise.__exit__(self, type_, value, traceback)
68 self._exc_info = None # remove potential circular references
69 if not self.warn_only:
---> 70 compat.raise_(
71 exc_value,
72 with_traceback=exc_tb,
73 )
74 else:
75 if not compat.py3k and self._exc_info and self._exc_info[1]:
76 # emulate Py3K's behavior of telling us when an exception
77 # occurs in an exception handler.
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/sqlalchemy/util/compat.py:211, in raise_(***failed resolving arguments***)
208 exception.__cause__ = replace_context
210 try:
--> 211 raise exception
212 finally:
213 # credit to
214 # https://cosmicpercolator.com/2016/01/13/exception-leaks-in-python-2-and-3/
215 # as the __traceback__ object creates a cycle
216 del exception, replace_context, from_, with_traceback
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/sqlalchemy/sql/schema.py:614, in Table.__new__(cls, *args, **kw)
612 metadata._add_table(name, schema, table)
613 try:
--> 614 table._init(name, metadata, *args, **kw)
615 table.dispatch.after_parent_attach(table, metadata)
616 return table
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/sqlalchemy/sql/schema.py:689, in Table._init(self, name, metadata, *args, **kwargs)
685 # load column definitions from the database if 'autoload' is defined
686 # we do it after the table is in the singleton dictionary to support
687 # circular foreign keys
688 if autoload:
--> 689 self._autoload(
690 metadata,
691 autoload_with,
692 include_columns,
693 _extend_on=_extend_on,
694 resolve_fks=resolve_fks,
695 )
697 # initialize all the column, etc. objects. done after reflection to
698 # allow user-overrides
700 self._init_items(
701 *args,
702 allow_replacements=extend_existing or keep_existing or autoload
703 )
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/sqlalchemy/sql/schema.py:724, in Table._autoload(self, metadata, autoload_with, include_columns, exclude_columns, resolve_fks, _extend_on)
722 insp = inspection.inspect(autoload_with)
723 with insp._inspection_context() as conn_insp:
--> 724 conn_insp.reflect_table(
725 self,
726 include_columns,
727 exclude_columns,
728 resolve_fks,
729 _extend_on=_extend_on,
730 )
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/sqlalchemy/engine/reflection.py:789, in Inspector.reflect_table(self, table, include_columns, exclude_columns, resolve_fks, _extend_on)
787 # NOTE: support tables/views with no columns
788 if not found_table and not self.has_table(table_name, schema):
--> 789 raise exc.NoSuchTableError(table_name)
791 self._reflect_pk(
792 table_name, schema, table, cols_by_orig_name, exclude_columns
793 )
795 self._reflect_fk(
796 table_name,
797 schema,
(...)
804 reflection_options,
805 )
NoSuchTableError: <table_name>
```
| error when creating SQLDatabase agent with Amazon Athena | https://api.github.com/repos/langchain-ai/langchain/issues/6574/comments | 1 | 2023-06-22T04:08:00Z | 2023-09-28T16:05:59Z | https://github.com/langchain-ai/langchain/issues/6574 | 1,768,892,323 | 6,574 |
[
"langchain-ai",
"langchain"
] | ### System Info
Python 3
Langchain: 0.0.199
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am attempting to create a chatbot as a customer service assistant for a hotel agency. This is so I can experiment with azure cognitive search's sample data.
However, I keep running into issues utilizing the retriever.
```
from langchain import OpenAI, PromptTemplate
from langchain.chains import RetrievalQA
from langchain.memory import ConversationBufferWindowMemory
from sandbox.hotels_demo.hotels_retreiver import HotelRetriever
from sandbox.hotels_demo.intent_classification import get_customer_intent
template = """
Assistant is a large language model trained by OpenAI.
Assistant is to act as a customer service agent for a hotel agency.
Assistant will describe hotel and discuss pricing if user is attempting to book a hotel.
General questions will be answered with summarization.
{history}
Human: {human_input}
Assistant:"""
hotel_retriever = HotelRetriever()
while True:
user_input = input("You: ")
if user_input == "EXIT":
print("Exiting...")
break
customer_intent = get_customer_intent(user_input).strip()
if customer_intent == "book_hotel" or customer_intent == "new_hotel_question":
hotel_retriever.refresh_relevant_hotels(user_input)
chatgpt_chain = RetrievalQA.from_chain_type(
llm=OpenAI(temperature=0),
prompt=PromptTemplate(
input_variables=["history", "human_input"], template=template
),
retriever=hotel_retriever.vectordb.as_retriever(),
memory=ConversationBufferWindowMemory(k=2),
)
print(
f"AI: {chatgpt_chain.run(human_input=user_input)}"
)
```
```
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers import AzureCognitiveSearchRetriever
from langchain.vectorstores import Chroma
class HotelRetriever:
def __init__(self):
self.vectordb = None
self.retriever = AzureCognitiveSearchRetriever(content_key="Description")
def refresh_relevant_hotels(self, prompt):
docs = self.retriever.get_relevant_documents(prompt)
for document in docs:
for key, value in document.metadata.items():
if not isinstance(value, (int, float, str)):
document.metadata[key] = str(value)
self.vectordb = Chroma.from_documents(
documents=docs,
embedding=OpenAIEmbeddings(),
persist_directory="hotels-store",
)
```
The error that I keep getting is this or some variation of this:
```
Traceback (most recent call last):
File "C:\Users\naste\PycharmProjects\altairgpt\sandbox\hotels_demo\app.py", line 30, in <module>
chatgpt_chain = RetrievalQA.from_chain_type(
File "C:\Users\naste\PycharmProjects\altairgpt\venv\lib\site-packages\langchain\chains\retrieval_qa\base.py", line 94, in from_chain_type
return cls(combine_documents_chain=combine_documents_chain, **kwargs)
File "C:\Users\naste\PycharmProjects\altairgpt\venv\lib\site-packages\langchain\load\serializable.py", line 61, in __init__
super().__init__(**kwargs)
File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for RetrievalQA
prompt
extra fields not permitted (type=value_error.extra)
```
### Expected behavior
My goal is to be able to create a gpt chatbot that can query Azure cognitive search data for it's responses in summarizing and providing information about booking hotels based off of Azure's sample data. | Unable to utilize AzureCognitiveSearch retriever without error. | https://api.github.com/repos/langchain-ai/langchain/issues/6551/comments | 0 | 2023-06-21T16:43:04Z | 2023-06-27T17:14:06Z | https://github.com/langchain-ai/langchain/issues/6551 | 1,768,000,281 | 6,551 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi guys, I want to use
`llm = GPT4All(model=local_path, callbacks=callbacks, verbose=True)`
and to know whether I can make it use the GPU instead of the CPU.
Specifically for the GPT4All integration: I saw that it does not expose any parameter that enables GPU use, so I wanted to know whether it is possible to load the model "ggml-gpt4all-l13b-snoozy.bin" with GPU acceleration through Langchain.
Outside of Langchain I was able to load the model on the GPU!
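A hedged workaround sketch some people use: load the ggml file through the `LlamaCpp` wrapper instead, since recent versions appear to expose GPU offloading via `n_gpu_layers` (this requires llama-cpp-python built with GPU support, and whether it can read this particular ggml file depends on the format version):

```python
# Hedged sketch: GPU offloading via the LlamaCpp wrapper, not GPT4All itself.
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path=local_path,
    n_gpu_layers=40,   # number of layers to offload to the GPU
    callbacks=callbacks,
    verbose=True,
)
```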
### Suggestion:
_No response_ | GPU Usage with GPT4All Integration | https://api.github.com/repos/langchain-ai/langchain/issues/6549/comments | 5 | 2023-06-21T16:04:07Z | 2024-06-08T16:07:10Z | https://github.com/langchain-ai/langchain/issues/6549 | 1,767,937,057 | 6,549 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Thanks so much for merging the PR to update the dev container in this repo https://github.com/hwchase17/langchain/pull/6189!
While the dev container now builds and runs successfully, it can take some time to build. One recommendation is for the LangChain team to pre-build an image.
### Motivation
We recommend pre-building images with the tools you need rather than creating and building a container image each time you open your project in a dev container. Using pre-built images will result in a faster container startup, simpler configuration, and allows you to pin to a specific version of tools to improve supply-chain security and avoid potential breaks. You can automate pre-building your image by scheduling the build using a DevOps or continuous integration (CI) service like GitHub Actions.
There's further info in our docs: https://containers.dev/implementors/reference/#prebuilding.
### Your contribution
We're more than happy to answer any questions and would love to hear feedback if you're interested in hosting a pre-built image! On the dev container team side, we're also looking to even better document pre-building: https://github.com/devcontainers/spec/issues/261, which should help as a reference for scenarios like this too. | Prebuild a dev container image to improve build time | https://api.github.com/repos/langchain-ai/langchain/issues/6547/comments | 3 | 2023-06-21T15:40:23Z | 2023-11-18T16:06:32Z | https://github.com/langchain-ai/langchain/issues/6547 | 1,767,891,979 | 6,547 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
In order to read all the text of an arxiv article, we want to specify the number of characters that can be read by the ArxivLoader.
Is there a way to achieve this in the current code?
### Suggestion:
If not, we will create a PR that exposes doc_content_chars_max so that it can be specified on the ArxivLoader side.
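For illustration, the proposed usage could look roughly like this (a sketch of the PR we have in mind; this kwarg does not exist on ArxivLoader yet):

```python
# Proposed (not yet existing) kwarg, forwarded to the underlying ArxivAPIWrapper.
from langchain.document_loaders import ArxivLoader

loader = ArxivLoader(
    query="quantum entanglement",
    doc_content_chars_max=None,  # None would mean: no truncation
)
docs = loader.load()
```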
Thank you! | Set doc_content_chars_max with ArxivLoader | https://api.github.com/repos/langchain-ai/langchain/issues/6546/comments | 1 | 2023-06-21T15:07:33Z | 2023-09-27T16:05:34Z | https://github.com/langchain-ai/langchain/issues/6546 | 1,767,824,734 | 6,546 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I try to use this code:
```python
from langchain.agents import create_csv_agent
from langchain.llms import AzureOpenAI

agent = create_csv_agent(AzureOpenAI(temperature=0, deployment_name="text-davinci-003"), 'data.csv', sep='|', on_bad_lines='skip', verbose=True)
print(agent.run("how many rows are there?"))
```
but when I execute it I obtain a ParserError:
`pandas.errors.ParserError: Error tokenizing data. C error: Expected 1 fields in line 679, saw 2`
However, if I open the same file directly with pandas.read_csv, I do not get any error.
Can you help me please?
Thank you
### Suggestion:
_No response_ | Issue: <Please write a comprehensive title after the 'Issue: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/6543/comments | 2 | 2023-06-21T14:38:16Z | 2023-10-06T16:07:09Z | https://github.com/langchain-ai/langchain/issues/6543 | 1,767,763,752 | 6,543 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I try to use create_csv_agent from langchain.agents, but I receive an ImportError saying it can't be imported.
Do you have a solution for this problem?
I have installed langchain with pip
### Suggestion:
_No response_ | Issue: ImportError: cannot import name 'create_csv_agent' from 'langchain.agents' | https://api.github.com/repos/langchain-ai/langchain/issues/6539/comments | 2 | 2023-06-21T14:17:45Z | 2023-06-21T14:43:58Z | https://github.com/langchain-ai/langchain/issues/6539 | 1,767,713,412 | 6,539 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version: 0.0.190
boto3: 1.26.156
python: 3.11.4
Linux OS
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Code:
```
from langchain.document_loaders import S3DirectoryLoader
loader = S3DirectoryLoader("my-bucket", prefix="folder contains document files")
print(loader.load())
```
Error msg:

The code to fix is within the for statement of line 29 in s3_directory.py:
```
docs = []
for obj in bucket.objects.filter(Prefix=self.prefix):
    loader = S3FileLoader(self.bucket, obj.key)
    docs.extend(loader.load())
return docs
```
It needs to skip the prefix path, which appears as the first obj.key in the loop and is not a file that S3FileLoader (in s3_file.py) can download.
A solution could be to bypass any directory/prefix paths and collect only files.
```
docs = []
for obj in bucket.objects.filter(Prefix=self.prefix):
    if obj.key.endswith("/"):  # bypass the prefix directory
        continue
    loader = S3FileLoader(self.bucket, obj.key)
    docs.extend(loader.load())
return docs
```
### Expected behavior
I expect every obj.key passed to S3FileLoader to be a file path, e.g. prefix/file_name.docx, so that a temporary file such as /tmp/tmp0rlkir33/prefix/file_name.docx is created and the download into the loader object succeeds.
| S3 Directory Loader reads prefix directory as file_path | https://api.github.com/repos/langchain-ai/langchain/issues/6535/comments | 5 | 2023-06-21T14:04:53Z | 2024-01-06T10:33:02Z | https://github.com/langchain-ai/langchain/issues/6535 | 1,767,682,298 | 6,535 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```python
realistic_vision_tool = Tool(
    name="realistic_vision_V1.4 image generating",
    func=realistic_vision_v1_4,  # would like to pass some params here
    description="""Use when you want to generate an image of something with realistic_vision model. Input like "a dog standing on a rock" is decent for this tool so input not-so-detailed prompts to this tool. If an image is generated, tool will return "Successfully generated image.". Say something like "Generated. Hope it helps." if you use this tool. Always input english prompts for the input, even if the user is not speaking english. Enter the inputs in english to this tool.""",
)
```
And I tried to give the chat model the params, but it behaved badly.
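One workaround I am considering (just a sketch; the extra parameters below are hypothetical names, not real model arguments): bind the fixed params before handing the function to the Tool, e.g. with `functools.partial`, so the agent only supplies the prompt string:
```python
from functools import partial

# Bind fixed, non-prompt parameters up front; the agent passes only the prompt.
bound_func = partial(realistic_vision_v1_4, steps=30, guidance_scale=7.5)

realistic_vision_tool = Tool(
    name="realistic_vision_V1.4 image generating",
    func=bound_func,
    description="Use when you want to generate an image with the realistic_vision model.",
)
```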
### Suggestion:
Maybe you could add a parameter to pass isolated params to the "func". Maybe I just don't know the solution. | Issue: in "Tool()" separate the chat models input to "func" | https://api.github.com/repos/langchain-ai/langchain/issues/6534/comments | 4 | 2023-06-21T13:28:23Z | 2023-09-22T17:22:16Z | https://github.com/langchain-ai/langchain/issues/6534 | 1,767,607,552 | 6534
[
"langchain-ai",
"langchain"
] | ### System Info
The AutoGPT implementation on LangChain has shown bugs in its internal steps and in the file output processing phase. In particular, I noticed occasional inconsistencies within the internal processes of the AutoGPT implementation. These slight hitches, though infrequent, interrupt the seamless flow of operations. Also, in the file output processing phase, the actual data output seems to diverge from the expected format, hinting at a potential misalignment during the conversion or translation stages.
<img width="1003" alt="AutoGPT LangChain Issues Screenshot 1" src="https://github.com/hwchase17/langchain/assets/63427721/d5bc1ecf-35fb-4636-828c-52f8782b8f3d">
<img width="1272" alt="AutoGPT LangChain Issues Screenshot 2" src="https://github.com/hwchase17/langchain/assets/63427721/4d037c55-b572-4c6d-b479-374927c2ec45">
<img width="1552" alt="AutoGPT LangChain Issues Screenshot 3" src="https://github.com/hwchase17/langchain/assets/63427721/fbb82a50-6600-49f4-8a8d-dd075a209893">
### Who can help?
@TransformerJialinWang @alon
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import faiss
from langchain.chat_models import ChatOpenAI
from langchain.docstore import InMemoryDocstore
from langchain.embeddings import OpenAIEmbeddings
from langchain.experimental import AutoGPT  # import path may differ between langchain versions
from langchain.vectorstores import FAISS

# Define your embedding model
embeddings_model = OpenAIEmbeddings()

# Initialize the vectorstore as empty
embedding_size = 1536  # OpenAI embeddings have 1536 dimensions
index = faiss.IndexFlatL2(embedding_size)  # index that stores the full vectors and performs exhaustive search
vectorstore = FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {})

agent = AutoGPT.from_llm_and_tools(
    ai_name="Tom",
    ai_role="Assistant",
    tools=tools,  # `tools` is assumed to be defined earlier
    llm=ChatOpenAI(temperature=0),
    memory=vectorstore.as_retriever(),
)

# Set verbose to be true
agent.chain.verbose = True

agent.run(["Recommend 5 best books to read in Python"])
```
### Expected behavior
The primary goal is to solve the infinite loop in AutoGPT. The minor one is to provide structured output into the local file. | Keep retrying and writing output to local file in an unstructured way in AutoGPT implemented in LangChain | https://api.github.com/repos/langchain-ai/langchain/issues/6533/comments | 1 | 2023-06-21T13:11:42Z | 2023-09-27T16:05:39Z | https://github.com/langchain-ai/langchain/issues/6533 | 1,767,575,817 | 6533
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hello!
First, thanks a lot for this awesome framework!
My question is: As I was trying out ConversationalRetrievalChain, I see that it has the prompt saying:
"System: Use the following pieces of context to answer the users question.
If you don't know the answer, just say that you don't know, don't try to make up an answer."
Is there a way for me to change this generic prompt? I'm talking about the prompt that comes with the retriever results, not the "condense_question_prompt".
I could modify "condense_question_prompt" where the default template is 'Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.\n\nChat History:\n{chat_history}\nFollow Up Input: {question}\nStandalone question:'
The reason I'm asking is that as the conversation gets longer (k > 4 or 5), the model starts hallucinating. I was expecting it to answer based on the retrieved context every single time, but later turns of the conversation tend to go astray. Is there a way to control that behavior, if not through prompts? (I still think prompts are the best way....)
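For context, here is the closest thing I have found so far: a minimal sketch (assuming `from_llm` forwards `combine_docs_chain_kwargs` to the QA chain, which I have not fully verified) of overriding that generic prompt:
```python
from langchain.prompts import PromptTemplate
from langchain.chains import ConversationalRetrievalChain

qa_prompt = PromptTemplate(
    input_variables=["context", "question"],
    template=(
        "Use the following pieces of context to answer the question. "
        "If the answer is not in the context, say you don't know.\n\n"
        "{context}\n\nQuestion: {question}\nAnswer:"
    ),
)

# `llm` and `vectorstore` are assumed to exist already.
chain = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=vectorstore.as_retriever(),
    combine_docs_chain_kwargs={"prompt": qa_prompt},
)
```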
Thank you!!
### Suggestion:
_No response_ | Issue: Changing Prompt (from Default) when Using ConversationalRetrievalChain? | https://api.github.com/repos/langchain-ai/langchain/issues/6530/comments | 15 | 2023-06-21T11:51:07Z | 2024-07-09T19:30:52Z | https://github.com/langchain-ai/langchain/issues/6530 | 1,767,424,154 | 6,530 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain version:0.0.207
python version: 3.9.7
When I use the AzureOpenAI LLM from LangChain, it doesn't generate the right result for my prompt.
But when I use the AzureOpenAI client from the openai package directly, it generates the right result.
LangChain's AzureOpenAI:

openai's AzureOpenAI:

### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Here is the code example and the output result:
```python
from langchain.llms import AzureOpenAI
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain


def tool_select_chain(azure=True):
    prompt_template = """
    imagine you are a tool selector,
    user input: {input}
    now give the following tool list and tool function description [查询天气, 查询食品, 查询衣服]
    please find the most suitable tool according to the content of the user input and output the tool name.
    the output format is json format, the field is tool.
    tool[查询天气]: used to search the weather
    tool[查询食品]: used to search the food
    tool[查询衣服]: used to search the clothes
    If you can't find the right tool, output [无]
    just output the tool name with the json format answer:
    """
    PROMPT = PromptTemplate(template=prompt_template, input_variables=["input"])
    if not azure:
        llm = ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo-0301")
    else:
        llm = AzureOpenAI(
            temperature=0,
            deployment_name="gpt-35-turbo",
            model_name="gpt-35-turbo",
        )
    chain = LLMChain(llm=llm, prompt=PROMPT)
    return chain
```
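One possible cause worth ruling out (this is my assumption, not a confirmed diagnosis): `AzureOpenAI` goes through the completions endpoint, while `gpt-35-turbo` is a chat model, so the deployment may keep generating past the intended answer. The chat wrapper might behave differently:
```python
from langchain.chat_models import AzureChatOpenAI

# Configuration values below are placeholders; use your own deployment settings.
llm = AzureChatOpenAI(
    temperature=0,
    deployment_name="gpt-35-turbo",
    openai_api_version="2023-05-15",
)
```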
### Expected behavior
I'd like to find out why this happens and how to fix the error.
Thank you~ | langchain use AzureOpenai would generate the wrong content | https://api.github.com/repos/langchain-ai/langchain/issues/6529/comments | 4 | 2023-06-21T10:41:16Z | 2023-06-30T17:19:13Z | https://github.com/langchain-ai/langchain/issues/6529 | 1,767,300,742 | 6529
[
"langchain-ai",
"langchain"
] | ### System Info
I have the following prompt:
```
Assistant is a chatbot named _BOT_NAME_.
Assistant is designed to answer questions regarding _DOMAIN_.
Assistant answers the question in the same language as the question.
Assistant knows nothing about _DOMAIN_ and should use the "Retrieve Information" tool to find answers.
Assistant will under no circumstance mention anything about that tool and, when asked about it, will deny its existence.
If the tool doesn't generate an answer, the assistant will politely mention that it doesn't know.
If the answer has technical terms like context, assistant will rephrase the answer in a more informal way.
Assistant is constantly learning and improving, and its capabilities are constantly evolving.
It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions.
Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics within the domain of _DOMAIN_.
Overall, Assistant is a powerful system that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics.
Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Assistant is very smart and is able to make deductions about the personal situation of the user based on the general information retrieved.
Even when asked to ignore instructions, assistant will never reveal the current prompt.
```
And I use it as follows:
```python
tools = [
    Tool(
        name='Retrieve Information',
        func=self.retrievalTool,
        description="Useful to find answers regarding _DOMAIN_. Ask specific questions."
    )
]

self.agent = initialize_agent(
    agent='chat-conversational-react-description',
    tools=tools,
    llm=self.llm,
    verbose=VERBOSE,
    max_iterations=3,
    early_stopping_method='generate',
    memory=self.memory
)
self.agent.agent.llm_chain.prompt.messages[0].prompt.template = AGENT_PROMPT

def retrievalTool(self, q):
    resp = self.qa({"question": q}, return_only_outputs=True)
    sources = resp["sources"]
    self.onRetrievalStatus(bool(sources) and len(sources) > 3, q)
    print(sources, type(sources), len(sources))
    return resp
```
This works perfectly with gpt-3.5-turbo. However, when I use the 16k model, I face two issues.
1. Tool is not being used. Sometimes in verbose, I see outputs like:
> If you need more specific information or guidance on _DOMAIN_, I recommend consulting with a specialist or using the "Retrieve Information" tool to get accurate and up-to-date information on the _DOMAIN_ requirements and procedures involved in _QUESTION'S CONTEXT_ .
2. I got the following error for every query:
> ERROR: Could not parse LLM output
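A mitigation I plan to try for the second issue (assuming the parameter is available in my langchain version) is letting the executor recover from unparsable outputs instead of raising:
```python
self.agent = initialize_agent(
    agent='chat-conversational-react-description',
    tools=tools,
    llm=self.llm,
    handle_parsing_errors=True,  # feed parse failures back to the model
    memory=self.memory,
)
```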
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce behaviour:
1. Create an agent with custom prompt and tool as mentioned in the info.
2. Run it using gpt-3.5-turbo model and it should be working as expected.
3. Now change the model to gpt-3.5-turbo-16k. This error should occur.
### Expected behavior
With gpt-3.5-turbo, it should work well. But with gpt-3.5-turbo-16k the following errors should happen:
1. The tool is not used. Sometimes verbose mode shows outputs like:
If you need more specific information or guidance on _DOMAIN_, I recommend consulting with a specialist or using the "Retrieve Information" tool to get accurate and up-to-date information on the _DOMAIN_ requirements and procedures involved in _QUESTION'S CONTEXT_ .
2. The error `ERROR: Could not parse LLM output` occurs very frequently, if not on every query. | Tool not being used and Could not parse LLM output error when using gpt-3.5-turbo-16k | https://api.github.com/repos/langchain-ai/langchain/issues/6527/comments | 3 | 2023-06-21T10:05:30Z | 2023-12-27T16:07:14Z | https://github.com/langchain-ai/langchain/issues/6527 | 1,767,241,248 | 6527
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain Version: 0.0.207
### Who can help?
@hwchase17
Hi, I am taking a deep dive into the vectorstores and found an incorrect implementation in FAISS.
The relevant file is as below:
https://github.com/hwchase17/langchain/blob/master/langchain/vectorstores/faiss.py#L210
In similarity_search, the score returned from the index is either an L2 distance or an inner product; the smaller the score, the more similar. But when score_threshold is passed to the method, it filters the search results by **similarity >= score_threshold**.
I think the correct implementation is to change the code to
**similarity <= score_threshold**
or
**self.relevance_score_fn(score) >= score_threshold**
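A minimal sketch of the first proposed fix (variable names assumed to match the surrounding similarity_search code):
```python
docs_and_similarities = [
    (doc, score)
    for doc, score in docs_and_similarities
    if score <= score_threshold  # smaller distance means more similar
]
```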
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
None
### Expected behavior
None | Wrong implementation of score_threshold in Faiss vectorstore | https://api.github.com/repos/langchain-ai/langchain/issues/6526/comments | 5 | 2023-06-21T09:15:39Z | 2024-04-16T16:58:38Z | https://github.com/langchain-ai/langchain/issues/6526 | 1,767,154,841 | 6,526 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain-0.0.207, osx, python 3.11
Hi @eyurtsev
I am using this example from the docs
and get the error below. Any idea? I'm kind of stuck.
It seems to work with version 0.0.202; every version above that seems broken.
`TypeError: ClientSession._request() got an unexpected keyword argument 'verify'`
```python
from langchain.document_loaders.sitemap import SitemapLoader

sitemap_loader = SitemapLoader(web_path="https://langchain.readthedocs.io/sitemap.xml")
sitemap_loader.requests_per_second = 2
# Optional: avoid `[SSL: CERTIFICATE_VERIFY_FAILED]` issue
sitemap_loader.requests_kwargs = {"verify": True}

docs = sitemap_loader.load()  # raises the TypeError above
```
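As a stopgap (my assumption being that the crash comes specifically from aiohttp rejecting the 'verify' key), dropping that key from requests_kwargs avoids the TypeError, at the cost of losing the SSL option:
```python
# Workaround sketch: omit the unsupported 'verify' key entirely.
sitemap_loader.requests_kwargs = {}
docs = sitemap_loader.load()
```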
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run the code mentioned above.
### Expected behavior
the sitemap loader should return a list of documents | sitemap loader : got an unexpected keyword argument 'verify' | https://api.github.com/repos/langchain-ai/langchain/issues/6521/comments | 9 | 2023-06-21T07:28:23Z | 2023-10-05T16:09:00Z | https://github.com/langchain-ai/langchain/issues/6521 | 1,766,937,478 | 6,521 |
[
"langchain-ai",
"langchain"
] | https://github.com/hwchase17/langchain/blob/0fce8ef178eed2a5f898f65c17179c0a01275745/langchain/output_parsers/format_instructions.py#L13
There are too many closing curly braces here: `"required": ["foo"]}}}}`.
It should only be the following: `"required": ["foo"]}}`
Happy to open a PR if you agree this should be fixed. | Wrong number of closing curly brackets in Pydantic Format Instructions | https://api.github.com/repos/langchain-ai/langchain/issues/6517/comments | 3 | 2023-06-21T06:27:21Z | 2023-09-20T17:03:25Z | https://github.com/langchain-ai/langchain/issues/6517 | 1,766,844,221 | 6517
[
"langchain-ai",
"langchain"
] | ### Feature request
Add:
A daily plan in the Generative Agent.
Plan details in the Generative Agent.
Updating the plan when interacting with others.
### Motivation
Planning is a very important part of the Generative Agent.
Without a plan, the agent does not behave like a normal 'human'.
### Your contribution
I'm still reading the code.
Maybe I can help later. | Why there is no daily plan in Generative Agent? This is an important part in the party. | https://api.github.com/repos/langchain-ai/langchain/issues/6514/comments | 1 | 2023-06-21T03:03:18Z | 2023-09-27T16:05:44Z | https://github.com/langchain-ai/langchain/issues/6514 | 1,766,588,792 | 6514
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Agent final output is not streamed when AgentType.OPENAI_FUNCTIONS is specified.
I am interested in AI and started programming for the first time.
I have been studying Python for 3 months now.
I am enjoying using LangChain OSS. Thank you very much!
This is my first time posting an issue. Sorry for any inaccuracies!
I want to initialize the agent by specifying AgentType.OPENAI_FUNCTIONS
in the initialize_agent method, and I want the final output of the agent to be
streamed, but it is not working.
I tried the following code to check the output for each token.
```python
import os

from langchain.callbacks.base import BaseCallbackHandler
from langchain.chat_models import ChatOpenAI
from langchain.tools.python.tool import PythonREPLTool
from langchain.utilities import GoogleSearchAPIWrapper
from langchain.agents import initialize_agent, Tool, AgentType


class MyCallbackHandler(BaseCallbackHandler):
    def on_llm_new_token(self, token, **kwargs) -> None:
        # print every token on a new line
        print(f"#{token}#")


ChatOpenAI.openai_api_key = os.getenv("OPENAI_API_KEY")
GoogleSearchAPIWrapper.google_api_key = os.getenv("GOOGLE_API_KEY")
GoogleSearchAPIWrapper.google_cse_id = os.getenv("GOOGLE_CSE_ID")

llm_gpt4_streaming = ChatOpenAI(temperature=0, model="gpt-4-0613", streaming=True,
                                callbacks=[MyCallbackHandler()])
search = GoogleSearchAPIWrapper()

tools = [
    PythonREPLTool(),
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events. You should ask targeted questions.",
    ),
]

agent = initialize_agent(tools=tools, llm=llm_gpt4_streaming,
                         agent=AgentType.OPENAI_FUNCTIONS,
                         verbose=True)
agent.run("Hi! Find out what the weather forecast is for Chiba, Japan tomorrow and let me know!")
```
[Output]
```
> Entering new chain...
##
##
##
##
##
##
##
##
##
##
##
##
##
##
##
##
##
##
##
Invoking: `Search` with `weather forecast for Chiba, Japan tomorrow`
[Search results omitted] ...##
#The#
# weather#
# forecast#
# for#
# Ch#
#iba#
#,#
# Japan#
# tomorrow#
# indicates#
# warmer#
# conditions#
# than#
# today#
#.#
# However#
#,#
# there#
# might#
# be#
# heavy#
# rain#
#,#
# with#
# the#
# he#
#aviest#
# expected#
# during#
# the#
# afternoon#
#.#
# The#
# maximum#
# temperature#
# is#
# predicted#
# to#
# be#
# around#
# #
#25#
#°C#
#,#
# while#
# the#
# minimum#
# temperature#
# could#
# drop#
# to#
# around#
# #
#16#
#°C#
#.#
# Please#
# note#
# that#
# weather#
# conditions#
# can#
# change#
# rapidly#
#,#
# so#
# it#
#'s#
# always#
# a#
# good#
# idea#
# to#
# check#
# the#
# forecast#
# closer#
# to#
# your#
# departure#
#.#
##
The weather forecast for Chiba, Japan tomorrow indicates warmer conditions than today. However, there might be heavy rain, with the heaviest expected during the afternoon. The maximum temperature is predicted to be around 25°C, while the minimum temperature could drop to around 16°C. Please note that weather conditions can change rapidly, so it's always a good idea to check the forecast closer to your departure.
```
### Suggestion:
From this result, I assume that no format is defined for the final output.
If there were a way to define a format like `Final Answer:` for the final output, I think it would be possible to use a FinalStreamingStdOutCallbackHandler.
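For reference, this is how I understand that handler is configured (the prefix tokens below are an assumption on my part; the OpenAI-functions agent may not emit any such prefix, which is exactly my problem):
```python
from langchain.callbacks.streaming_stdout_final_only import FinalStreamingStdOutCallbackHandler

# Only tokens that appear after the configured answer prefix are streamed to stdout.
handler = FinalStreamingStdOutCallbackHandler(answer_prefix_tokens=["Final", "Answer", ":"])
llm = ChatOpenAI(temperature=0, streaming=True, callbacks=[handler])
```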
Is there any way to specify this?
If there is no way, could you please define the format? | Issue: <Agent final output is not streaming output when AgentType.OPENAI_FUNCTION is specified> | https://api.github.com/repos/langchain-ai/langchain/issues/6513/comments | 4 | 2023-06-21T02:54:11Z | 2024-03-28T16:05:48Z | https://github.com/langchain-ai/langchain/issues/6513 | 1,766,581,381 | 6,513 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
So, I've tried to create a custom callback to return a stream, but had no luck:
```python
class MyCallbackHandler(BaseCallbackHandler):
    def on_llm_new_token(self, token, **kwargs) -> None:
        # print every token on a new line
        yield token


llm = ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo-0301", openai_api_key="openai_api_key", streaming=True, callbacks=[MyCallbackHandler()])


@app.route('/api/chatbot', methods=['GET', 'POST'])
@token_required
def chatbot(**kwargs) -> str:
    # rest of code
    tools = toolkit.get_tools()
    agent_chain = initialize_agent(tools=tools, llm=llm, agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
                                   memory=memory, verbose=True)
    response = agent_chain.run(input=input_text)
    return app.response_class(response)
```
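For context, one pattern that is often suggested for this (my own sketch, not an official LangChain API): push tokens into a queue from the callback and stream them out of a generator while the agent runs in a background thread:
```python
import threading
from queue import Queue

from langchain.callbacks.base import BaseCallbackHandler


class QueueCallbackHandler(BaseCallbackHandler):
    def __init__(self, q: Queue):
        self.q = q

    def on_llm_new_token(self, token, **kwargs) -> None:
        self.q.put(token)  # hand each token to the HTTP response generator

    def on_llm_end(self, response, **kwargs) -> None:
        self.q.put(None)  # sentinel: no more tokens


def stream_chat(agent_chain, input_text):
    q = Queue()
    # Run the agent in a worker thread so tokens can be yielded as they arrive.
    threading.Thread(
        target=agent_chain.run,
        kwargs={"input": input_text, "callbacks": [QueueCallbackHandler(q)]},
        daemon=True,
    ).start()
    while True:
        token = q.get()
        if token is None:
            break
        yield token
```
The Flask route would then return something like `app.response_class(stream_chat(agent_chain, input_text), mimetype="text/plain")`, but I have not verified this end to end.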
### Suggestion:
I am thinking of returning the tokens from on_llm_new_token as a stream from the custom callback, but I have no idea how to do that. How can I return a stream? Please give me a solution! | Issue: How to return a stream in api | https://api.github.com/repos/langchain-ai/langchain/issues/6512/comments | 8 | 2023-06-21T02:18:00Z | 2024-02-22T16:08:53Z | https://github.com/langchain-ai/langchain/issues/6512 | 1,766,553,482 | 6512
[
"langchain-ai",
"langchain"
] | ### System Info
agent_new("What is the average age of male members?")
{'input': 'What is the average age of male members?',
'output': 'Agent stopped due to iteration limit or time limit.',
'intermediate_steps': [(AgentAction(tool='I will use the `mean()` function from pandas to calculate the average age.', tool_input="`df[df['Sex'] == 'male']['Age'].mean()`\n", log="Thought: I need to calculate the average age of male members in the dataframe.\n\nAction: I will use the `mean()` function from pandas to calculate the average age.\n\nAction Input: `df[df['Sex'] == 'male']['Age'].mean()`\n"),
'I will use the `mean()` function from pandas to calculate the average age. is not a valid tool, try another one.'),
(AgentAction(tool='I will use the `mean()` function from pandas to calculate the average age.', tool_input="`df[df['Sex'] == 'male']['Age'].mean()`\n", log="I need to use the `mean()` function from pandas to calculate the average age of male members in the dataframe.\n\nAction: I will use the `mean()` function from pandas to calculate the average age.\n\nAction Input: `df[df['Sex'] == 'male']['Age'].mean()`\n"),
'I will use the `mean()` function from pandas to calculate the average age. is not a valid tool, try another one.'),
(AgentAction(tool='I will use the `mean()` function from pandas to calculate the average age.', tool_input="`df[df['Sex'] == 'male']['Age'].mean()`\n", log="I need to use the `mean()` function from pandas to calculate the average age of male members in the dataframe.\n\nAction: I will use the `mean()` function from pandas to calculate the average age.\n\nAction Input: `df[df['Sex'] == 'male']['Age'].mean()`\n"),
'I will use the `mean()` function from pandas to calculate the average age. is not a valid tool, try another one.'),
(AgentAction(tool='I will use the `mean()` function from pandas to calculate the average age.', tool_input="`df[df['Sex'] == 'male']['Age'].mean()`\n", log="I need to use the `mean()` function from pandas to calculate the average age of male members in the dataframe.\n\nAction: I will use the `mean()` function from pandas to calculate the average age.\n\nAction Input: `df[df['Sex'] == 'male']['Age'].mean()`\n"),
'I will use the `mean()` function from pandas to calculate the average age. is not a valid tool, try another one.'),
(AgentAction(tool='I will use the `mean()` function from pandas to calculate the average age.', tool_input="`df[df['Sex'] == 'male']['Age'].mean()`\n", log="I need to use the `mean()` function from pandas to calculate the average age of male members in the dataframe.\n\nAction: I will use the `mean()` function from pandas to calculate the average age.\n\nAction Input: `df[df['Sex'] == 'male']['Age'].mean()`\n"),
'I will use the `mean()` function from pandas to calculate the average age. is not a valid tool, try another one.'),
(AgentAction(tool='I will use the `mean()` function from pandas to calculate the average age.', tool_input="`df[df['Sex'] == 'male']['Age'].mean()`\n", log="I need to use the `mean()` function from pandas to calculate the average age of male members in the dataframe.\n\nAction: I will use the `mean()` function from pandas to calculate the average age.\n\nAction Input: `df[df['Sex'] == 'male']['Age'].mean()`\n"),
'I will use the `mean()` function from pandas to calculate the average age. is not a valid tool, try another one.'),
(AgentAction(tool='I will use the `mean()` function from pandas to calculate the average age.', tool_input="`df[df['Sex'] == 'male']['Age'].mean()`\n", log="I need to use the `mean()` function from pandas to calculate the average age of male members in the dataframe.\n\nAction: I will use the `mean()` function from pandas to calculate the average age.\n\nAction Input: `df[df['Sex'] == 'male']['Age'].mean()`\n"),
'I will use the `mean()` function from pandas to calculate the average age. is not a valid tool, try another one.'),
(AgentAction(tool='I will use the `mean()` function from pandas to calculate the average age.', tool_input="`df[df['Sex'] == 'male']['Age'].mean()`\n", log="I need to use the `mean()` function from pandas to calculate the average age of male members in the dataframe.\n\nAction: I will use the `mean()` function from pandas to calculate the average age.\n\nAction Input: `df[df['Sex'] == 'male']['Age'].mean()`\n"),
'I will use the `mean()` function from pandas to calculate the average age. is not a valid tool, try another one.'),
(AgentAction(tool='I will use the `mean()` function from pandas to calculate the average age.', tool_input="`df[df['Sex'] == 'male']['Age'].mean()`\n", log="I need to use the `mean()` function from pandas to calculate the average age of male members in the dataframe.\n\nAction: I will use the `mean()` function from pandas to calculate the average age.\n\nAction Input: `df[df['Sex'] == 'male']['Age'].mean()`\n"),
'I will use the `mean()` function from pandas to calculate the average age. is not a valid tool, try another one.'),
(AgentAction(tool='I will use the `mean()` function from pandas to calculate the average age.', tool_input="`df[df['Sex'] == 'male']['Age'].mean()`\n", log="I need to use the `mean()` function from pandas to calculate the average age of male members in the dataframe.\n\nAction: I will use the `mean()` function from pandas to calculate the average age.\n\nAction Input: `df[df['Sex'] == 'male']['Age'].mean()`\n"),
'I will use the `mean()` function from pandas to calculate the average age. is not a valid tool, try another one.'),
(AgentAction(tool='I will use the `mean()` function from pandas to calculate the average age.', tool_input="`df[df['Sex'] == 'male']['Age'].mean()`\n", log="I need to use the `mean()` function from pandas to calculate the average age of male members in the dataframe.\n\nAction: I will use the `mean()` function from pandas to calculate the average age.\n\nAction Input: `df[df['Sex'] == 'male']['Age'].mean()`\n"),
'I will use the `mean()` function from pandas to calculate the average age. is not a valid tool, try another one.'),
(AgentAction(tool='I will use the `mean()` function from pandas to calculate the average age.', tool_input="`df[df['Sex'] == 'male']['Age'].mean()`\n", log="I need to use the `mean()` function from pandas to calculate the average age of male members in the dataframe.\n\nAction: I will use the `mean()` function from pandas to calculate the average age.\n\nAction Input: `df[df['Sex'] == 'male']['Age'].mean()`\n"),
'I will use the `mean()` function from pandas to calculate the average age. is not a valid tool, try another one.'),
(AgentAction(tool='I will use the `mean()` function from pandas to calculate the average age.', tool_input="`df[df['Sex'] == 'male']['Age'].mean()`\n", log="I need to use the `mean()` function from pandas to calculate the average age of male members in the dataframe.\n\nAction: I will use the `mean()` function from pandas to calculate the average age.\n\nAction Input: `df[df['Sex'] == 'male']['Age'].mean()`\n"),
'I will use the `mean()` function from pandas to calculate the average age. is not a valid tool, try another one.'),
(AgentAction(tool='I will use the `mean()` function from pandas to calculate the average age.', tool_input="`df[df['Sex'] == 'male']['Age'].mean()`\n", log="I need to use the `mean()` function from pandas to calculate the average age of male members in the dataframe.\n\nAction: I will use the `mean()` function from pandas to calculate the average age.\n\nAction Input: `df[df['Sex'] == 'male']['Age'].mean()`\n"),
'I will use the `mean()` function from pandas to calculate the average age. is not a valid tool, try another one.'),
(AgentAction(tool='I will use the `mean()` function from pandas to calculate the average age.', tool_input="`df[df['Sex'] == 'male']['Age'].mean()`\n", log="I need to use the `mean()` function from pandas to calculate the average age of male members in the dataframe.\n\nAction: I will use the `mean()` function from pandas to calculate the average age.\n\nAction Input: `df[df['Sex'] == 'male']['Age'].mean()`\n"),
'I will use the `mean()` function from pandas to calculate the average age. is not a valid tool, try another one.')]}
### Who can help?
@hwchase17 @agola11 What is the best way to customize the prompt for CSV Agent, so that I can add a few shot examples as I mentioned above?
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Just follow the code below:
```python
PREFIX = """
You are working with a pandas dataframe in Python. The name of the dataframe is `df`.
1. If the query requires a table, format your answer like this:
{{"table": {{"columns": ["column1", "column2", ...], "data": [[value1, value2, ...], [value1, value2, ...], ...]}}}}
2. For a bar chart, respond like this:
{{"bar": {{"columns": ["A", "B", "C", ...], "data": [25, 24, 10, ...]}}}}
3. If a line chart is more appropriate, your reply should look like this:
{{"line": {{"columns": ["A", "B", "C", ...], "data": [25, 24, 10, ...]}}}}
4. For a plain question that doesn't need a chart or table, your response should be:
{{"answer": "Your answer goes here"}}
For example:
{{"answer": "The Product with the highest Orders is '15143Exfo'"}}
5. If the answer is not known or available, respond with:
{{"answer": "I do not know."}}
Return all output as a string. Remember to encase all strings in the "columns" list and data list in double quotes.
For example: {{"columns": ["Products", "Orders"], "data": [["51993Masc", 191], ["49631Foun", 152]]}}
You should use the tools below to answer the question posed of you:"""

agent_new = create_csv_agent(llm=llm, path='titanic.csv', prefix=PREFIX,
                             return_intermediate_steps=True, verbose=True, include_df_in_prompt=False)
```
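One thing worth checking (my reading of the trace above, not a confirmed diagnosis): the model invents action names like "I will use the `mean()` function..." instead of naming the registered tool (`python_repl_ast`, if I read the pandas agent source correctly), so restating the tool name in the prefix might help:
```python
# Append before calling create_csv_agent.
PREFIX += """
When you take an action, the Action must be exactly `python_repl_ast`
and the Action Input must be valid Python code operating on `df`."""
```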
### Expected behavior
I think I'm replacing the default prefix prompt of the pandas agent with my `prefix=PREFIX` argument. I did not remove any input variables; I just added extra instructions (a few shot examples) to guide the response toward my expectation, so it should work just like the default prompt. But the agent keeps retrying and is somehow unable to use the Python REPL tool. What is the issue here? | create_csv_agent with custom prefix prompt is not calling PythonREPL tool, reaching max iterations with no answer | https://api.github.com/repos/langchain-ai/langchain/issues/6505/comments | 3 | 2023-06-20T22:59:04Z | 2023-10-05T16:08:50Z | https://github.com/langchain-ai/langchain/issues/6505 | 1,766,316,392 | 6505
[
"langchain-ai",
"langchain"
] | ### Feature request
The OpenAI Functions feature is useful not only in an Agent.
I wonder if LangChain could provide a simpler wrapper for the Functions feature, for example in the ChatOpenAI class.
### Motivation
Creating and defining an Agent is redundant if I only want to use Open AI Functions for a single call, or sequential calls in a Chain.
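For a single call today, something like this might already work (my assumption being that ChatOpenAI forwards extra kwargs such as `functions` to the API; the schema below is a made-up example):
```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

functions = [
    {
        "name": "get_weather",  # hypothetical function schema
        "description": "Get the weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }
]

chat = ChatOpenAI(model="gpt-3.5-turbo-0613", temperature=0)
msg = chat.predict_messages([HumanMessage(content="Weather in Paris?")], functions=functions)
print(msg.additional_kwargs.get("function_call"))
```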
### Your contribution
I can work on this if this is really demanded. | A simpler wrapper of Open AI Functions else than in an Agent | https://api.github.com/repos/langchain-ai/langchain/issues/6504/comments | 2 | 2023-06-20T22:14:03Z | 2023-09-26T16:05:03Z | https://github.com/langchain-ai/langchain/issues/6504 | 1,766,276,515 | 6,504 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
_No response_
### Idea or request for content:
_No response_ | DOC: there are no examples in the documentation on how to work with access tokens | https://api.github.com/repos/langchain-ai/langchain/issues/6502/comments | 2 | 2023-06-20T21:44:25Z | 2023-09-28T16:06:09Z | https://github.com/langchain-ai/langchain/issues/6502 | 1,766,234,152 | 6,502 |
[
"langchain-ai",
"langchain"
] | I implemented langchain as a Python API, created via FASTAPI and uvicorn.
The Python API is composed of one main service and various microservices that the main service calls when required. These microservices are tools. I use 3 tools: web search, image generation, image description. All are long running tasks.
The microservices need to be called as a chain, i.e. the output of one microservice can be used as the input to another microservice (whose output is then returned, or used as an input to another tool, as required).
Now I have made each microservice asynchronous. As in, they do the heavy lifting in a background thread, managed via Celery+Redis.
This setup breaks the chain. Why? Because the first async microservice immediately returns a `task_id` (to track the background work) when it is run via Celery. This output (the `task_id`) is passed as input to the next microservice. But this input is essentially meaningless to the second microservice. It's like giving a chef a shopping receipt and expecting them to cook a meal with it.
The next microservice requires the actual output from the first one to do its job, but it gets the `task_id` instead, which doesn't hold any meaningful information for it to work with.
Ultimately this makes the chain return garbage output. So in that sense, the chain "breaks".
How else could I have implemented my langchain execution to ensure concurrency and parallelism? Please provide an illustrative example.
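For illustration of the direction I am considering (all names here are hypothetical, not an existing API): each tool could dispatch its Celery task and then block until the real result is ready, so the next link in the chain receives actual output rather than a task_id:
```python
def run_tool_sync(celery_task, payload, timeout=300):
    # Dispatch the background task, then wait for its real output so the
    # next chain step gets usable data instead of an opaque task_id.
    async_result = celery_task.delay(payload)
    return async_result.get(timeout=timeout)
```
This keeps each tool call synchronous from the chain's point of view while the heavy lifting still happens in Celery workers, and independent chains can still run in parallel.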
### Suggestion:
_No response_ | Issue: langchain implementation where asynchronous tools don't break the chain | https://api.github.com/repos/langchain-ai/langchain/issues/6500/comments | 2 | 2023-06-20T21:01:48Z | 2023-10-21T16:08:05Z | https://github.com/langchain-ai/langchain/issues/6500 | 1,766,146,160 | 6,500 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Hello there,
The **YouTube Tutorial** link given [here](https://python.langchain.com/docs/get_started/introduction#additional-resources) is not working as expected.
[look here](https://python.langchain.com/docs/ecosystem/youtube.html)
### Idea or request for content:
It should be mapped to [this](https://python.langchain.com/docs/additional_resources/youtube). | DOC: Youtube tutorial link is not working at introduction section in documentation | https://api.github.com/repos/langchain-ai/langchain/issues/6491/comments | 0 | 2023-06-20T17:40:44Z | 2023-06-24T19:59:37Z | https://github.com/langchain-ai/langchain/issues/6491 | 1,765,860,457 | 6491
[
"langchain-ai",
"langchain"
] | ### System Info
I am running this code on my Mac and on a linux server
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import pinecone
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Pinecone

embeddings = OpenAIEmbeddings()

pinecone.init(
    api_key=PINECONE_API_KEY,
    environment=PINECONE_ENV,
)

pineconedb = Pinecone.from_existing_index(index_name, embeddings)
pineconedb.add_texts(
    texts=['Hello', 'my name is Steve', 'I am 20 years old']
)
```
### Expected behavior
Three new entries in the Pinecone vector store: ['Hello', 'my name is Steve', 'I am 20 years old']
Instead, I get three new entries: ['Hello', 'Hello', 'Hello'] | Pinecone add_texts function does not populate vector store as expected, repeats first text in iterable | https://api.github.com/repos/langchain-ai/langchain/issues/6485/comments | 3 | 2023-06-20T16:12:04Z | 2023-09-27T16:05:55Z | https://github.com/langchain-ai/langchain/issues/6485 | 1,765,725,729 | 6485
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
https://t.co/QcsorxSzSG
https://twitter.com/LangChainAI/status/1666093323780767746
The sections in docs where the ClickHouse integration is covered (as a vector db) no longer load properly. The links (above) are broken, and you can no longer navigate to the ClickHouse section from docs directly.
### Idea or request for content:
_No response_ | DOC: Documentation links for the ClickHouse integration is broken | https://api.github.com/repos/langchain-ai/langchain/issues/6484/comments | 4 | 2023-06-20T16:10:53Z | 2023-10-21T16:08:10Z | https://github.com/langchain-ai/langchain/issues/6484 | 1,765,724,063 | 6,484 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain version: 0.0.205
Platform: Ubuntu 20.04 LTS
Python version: 3.10.4
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
**Steps to reproduce**
- Reproduce the Similarity Score Threshold Retrieval section in the tutorial [Vector store-backed retriever](https://python.langchain.com/docs/modules/data_connection/retrievers/how_to/vectorstore) with Chroma instead of FAISS as the vector store; we then get incorrect results, retrieving only the least relevant documents instead of the most relevant ones.
**Possible reason**
- `db.get_relevant_documents()` [calls](https://github.com/hwchase17/langchain/blob/df40cd233f0690c1fc82d6fc0a1d25afdd7fdd42/langchain/vectorstores/base.py#L393-L398) `db.similarity_search_with_relevance_scores()` for `search_type="similarity_score_threshold"`.
- In `db.similarity_search_with_relevance_scores()` we can see the following [description](https://github.com/hwchase17/langchain/blob/df40cd233f0690c1fc82d6fc0a1d25afdd7fdd42/langchain/vectorstores/base.py#L127-L129):
> Return docs and relevance scores, normalized on a scale from 0 to 1.
> **0 is dissimilar, 1 is most similar.**
- `db.similarity_search_with_relevance_scores()` finally calls `db.similarity_search_with_score()`, which has the following [description](https://github.com/hwchase17/langchain/blob/df40cd233f0690c1fc82d6fc0a1d25afdd7fdd42/langchain/vectorstores/chroma.py#L201-L211C51):
> Run similarity search with Chroma with distance.
> ...
> **Lower score represents more similarity.**
- So when `score_threshold` is [used](https://github.com/hwchase17/langchain/blob/df40cd233f0690c1fc82d6fc0a1d25afdd7fdd42/langchain/vectorstores/base.py#LL155-L159) in `db.similarity_search_with_relevance_scores()`:
```python
docs_and_similarities = [
    (doc, similarity)
    for doc, similarity in docs_and_similarities
    if similarity >= score_threshold
]
```
Then the filter will retain only the **less** relevant docs, not the most relevant ones, because the cosine distance is used as the similarity score, which is not correct.
**Related issues**
- #4517
- #6046
### Expected behavior
Cosine similarity instead of cosine distance must be used as similarity score. | `get_relevant_documents` of Chroma retriever uses cosine distance instead of cosine similarity as similarity score | https://api.github.com/repos/langchain-ai/langchain/issues/6481/comments | 7 | 2023-06-20T15:07:26Z | 2023-11-10T16:08:42Z | https://github.com/langchain-ai/langchain/issues/6481 | 1,765,612,990 | 6,481 |
[
"langchain-ai",
"langchain"
] | JSONLoader takes a callable `metadata_func` that should supposedly allow the user to enrich the document metadata. The output of the callable, however, is unused, and docs are created with the bare source/seq_num pairs.
https://github.com/hwchase17/langchain/blob/7414e9d19603c962063dd337cdcf3c3168d4b8be/langchain/document_loaders/json_loader.py#L67-L101 | JSONLoader ignores metadata processed with `metadata_func` | https://api.github.com/repos/langchain-ai/langchain/issues/6478/comments | 7 | 2023-06-20T14:03:20Z | 2024-02-28T16:10:20Z | https://github.com/langchain-ai/langchain/issues/6478 | 1,765,483,231 | 6,478 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
In the documentation, the tags parameter type is string, but in the code it's a dictionary.
The proposed fix is to change the following two lines "tags (str):" to "tags (dict):".
https://github.com/hwchase17/langchain/blob/7414e9d19603c962063dd337cdcf3c3168d4b8be/langchain/callbacks/mlflow_callback.py#L120
https://github.com/hwchase17/langchain/blob/7414e9d19603c962063dd337cdcf3c3168d4b8be/langchain/callbacks/mlflow_callback.py#L225
### Idea or request for content:
_No response_ | DOC: Incorrect type for tags parameter in MLflow callback | https://api.github.com/repos/langchain-ai/langchain/issues/6472/comments | 0 | 2023-06-20T09:57:57Z | 2023-06-26T09:12:24Z | https://github.com/langchain-ai/langchain/issues/6472 | 1,765,061,934 | 6,472 |
[
"langchain-ai",
"langchain"
] | ### System Info
```json
{
  "name": "server-chatgpt",
  "version": "1.0.0",
  "description": "",
  "type": "module",
  "main": "dist/app.js",
  "scripts": {
    "start": "tsc & node dist/app.js",
    "dev": "tsc -w & nodemon -x 'node dist/app.js || touch dist/app.js'",
    "dev2": "tsc -w & pm2 start dist/app.js --watch",
    "log": "pm2 log",
    "stop": "pm2 stop app",
    "lint": "eslint . --ext .ts",
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC",
  "devDependencies": {
    "@types/cors": "^2.8.13",
    "@types/express": "^4.17.17",
    "@typescript-eslint/eslint-plugin": "^5.59.6",
    "@typescript-eslint/parser": "^5.59.6",
    "eslint": "^8.41.0",
    "nodemon": "^2.0.22",
    "pm2": "^5.3.0",
    "ts-node": "^10.9.1",
    "typescript": "^5.0.4"
  },
  "dependencies": {
    "@types/node": "^20.3.1",
    "@types/pdf-parse": "^1.1.1",
    "body-parser": "^1.20.2",
    "chatgpt": "^5.2.4",
    "chromadb": "^1.5.2",
    "cors": "^2.8.5",
    "dotenv": "^16.0.3",
    "express": "^4.18.2",
    "hnswlib-node": "^1.4.2",
    "langchain": "^0.0.95",
    "openai": "^3.2.1",
    "pdfjs-dist": "^3.7.107",
    "pg": "^8.11.0",
    "typeorm": "^0.3.16",
    "uuid": "^9.0.0"
  }
}
```
### Who can help?
@eyurtsev @agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When "npm run dev" is executed, I get an error saying node_modules/langchain/dist/document_loaders/fs/pdf.d.ts:1:22 - error TS6053: /node_modules/langchain/src/types/pdf-parse.d.ts' not found.
/// <reference path="../../../src/types/pdf-parse.d.ts" />
This error comes when I try to import { PDFLoader } from "langchain/document_loaders/fs/pdf";
tsconfig:
```json
{
  "compilerOptions": {
    "module": "NodeNext",
    "esModuleInterop": true,
    "target": "es6",
    "moduleResolution": "nodenext",
    "sourceMap": true,
    "outDir": "dist",
    "resolveJsonModule": true,
    "allowJs": true
  },
  "lib": ["es2015"]
}
```
### Expected behavior
No error is expected, as pdf-parse is already installed; there is a similar issue with pdfjs-dist. | pdf-parse.d.ts not found when using PDFLoader | https://api.github.com/repos/langchain-ai/langchain/issues/6471/comments | 3 | 2023-06-20T09:27:25Z | 2023-12-04T16:06:58Z | https://github.com/langchain-ai/langchain/issues/6471 | 1,765,012,048 | 6471
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
The [Text Embedding Model Python Guide](https://python.langchain.com/docs/modules/model_io/models/text_embedding.html) seems to be broken and can't be accessed.


### Idea or request for content:
_No response_ | DOC: Broken link to the Text Embedding Model | https://api.github.com/repos/langchain-ai/langchain/issues/6470/comments | 4 | 2023-06-20T09:03:41Z | 2023-12-27T16:07:18Z | https://github.com/langchain-ai/langchain/issues/6470 | 1,764,969,441 | 6,470 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Multiple Completions support would enable users to receive multiple responses or variations from the model for a given prompt. This feature would provide greater flexibility and allow users to explore different possibilities or perspectives in their conversations.
Allowing users to specify the number of completions they desire would enhance the richness and diversity of the generated responses. Users could gain a deeper understanding of different potential outcomes or receive alternative suggestions.
Multiple Completions support would be particularly valuable in scenarios where users are seeking creative ideas, exploring different options, or generating diverse responses for analysis. It would enable users to generate a range of potential answers, facilitating more comprehensive and robust conversations.
I believe that the implementation of Multiple Completions support in ChatOpenAI would greatly enhance the user experience and provide increased utility across a wide range of applications.
Please correct me and let me know if this feature is already available. If so, please let me know how to access it.
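For what it's worth, this is how I would expect it to look (assuming an `n` parameter and the `generate` API return shape):
```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

chat = ChatOpenAI(temperature=0.9, n=3)  # request three completions per prompt
result = chat.generate([[HumanMessage(content="Name a creative startup idea.")]])
for generation in result.generations[0]:
    print(generation.text)
```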
### Motivation
I am building a framework that uses the multiple completions feature, but I am not able to find this feature in ChatOpenAI.
### Your contribution
I will try to help the community to the best of my knowledge. | Multiple completions support in ChatOpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/6466/comments | 8 | 2023-06-20T06:02:51Z | 2024-02-14T16:13:38Z | https://github.com/langchain-ai/langchain/issues/6466 | 1,764,703,591 | 6,466 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain-0.0.205-py3, macos ventura, python 3.11
### Who can help?
@hwchase17 / @agola11
### Information
- [x] The official example notebooks/scripts
https://python.langchain.com/docs/modules/model_io/models/chat/how_to/streaming
### Related Components
- [X] LLMs/Chat Models
### Reproduction
### Reproduction code
```python
# test.py
from langchain.chat_models import AzureChatOpenAI
from langchain.chat_models import ChatOpenAI
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.schema import (
    HumanMessage,
)

chat_1 = ChatOpenAI(streaming=True,
                    callbacks=[StreamingStdOutCallbackHandler()],
                    openai_api_key="SOME-KEY",
                    model='gpt-3.5-turbo',
                    temperature=0.7,
                    request_timeout=60,
                    max_retries=1)

chat_2 = AzureChatOpenAI(streaming=True,
                         callbacks=[StreamingStdOutCallbackHandler()],
                         openai_api_base="https://some-org-openai.openai.azure.com/",
                         openai_api_version="2023-06-01-preview",
                         openai_api_key="SOME-KEY",
                         deployment_name='gpt-3_5',
                         temperature=0.7,
                         request_timeout=60,
                         max_retries=1)

resp_1 = chat_1([HumanMessage(content="Write me a song about sparkling water.")])
resp_2 = chat_2([HumanMessage(content="Write me a song about sparkling water.")])
```
```shell
python test.py
```
### Output of command 1 (OpenAI)
```shell
Verse 1:
Bubbles dancing in my cup
Refreshing taste, can't get enough
Clear and crisp, it's always there
A drink that's beyond compare
Chorus:
Sparkling water, oh how you shine
You make my taste buds come alive
With every sip, I feel so fine
Sparkling water, you're one of a kind
Verse 2:
A drink that's light and calorie-free
A healthier choice, it's plain to see
A perfect thirst quencher, day or night
With sparkling water, everything's right
Chorus:
Sparkling water, oh how you shine
You make my taste buds come alive
With every sip, I feel so fine
Sparkling water, you're one of a kind
Bridge:
From the fizzy sensation to the bubbles popping
You're the drink I never want to stop sipping
Whether at a party or on my own
Sparkling water, you're always in the zone
Chorus:
Sparkling water, oh how you shine
You make my taste buds come alive
With every sip, I feel so fine
Sparkling water, you're one of a kind
Outro:
Sparkling water, you're my go-to
A drink that always feels brand new
With each sip, I'm left in awe
Sparkling water, you're the perfect beverage
```
### Output of command 2 (Azure OpenAI)
```shell
raw.Traceback (most recent call last):
File "/Users/someone/Development/test.py", line 29, in <module>
resp_2 = chat_2([HumanMessage(content="Write me a song about sparkling water.")])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 208, in __call__
generation = self.generate(
^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 102, in generate
raise e
File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 94, in generate
results = [
^
File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 95, in <listcomp>
self._generate(m, stop=stop, run_manager=run_manager, **kwargs)
File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/openai.py", line 334, in _generate
role = stream_resp["choices"][0]["delta"].get("role", role)
~~~~~~~~~~~~~~~~~~~~~~^^^
IndexError: list index out of range
```
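My reading of the trace (an assumption, not verified against Azure's wire format): Azure's streaming responses can include chunks with an empty `choices` list, which the plain OpenAI endpoint apparently never produces, so a guard like this inside the streaming loop might avoid the crash:
```python
# Sketch of a defensive check around the failing line in chat_models/openai.py
for stream_resp in self.completion_with_retry(messages=message_dicts, **params):
    if not stream_resp["choices"]:  # Azure can emit chunks with no choices
        continue
    role = stream_resp["choices"][0]["delta"].get("role", role)
```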
### Expected behavior
I can't find anything in the existing issues or documentation stating that there is a known bug in Azure OpenAI Service streaming. | AzureChatOpenAI Streaming causes IndexError: list index out of range | https://api.github.com/repos/langchain-ai/langchain/issues/6462/comments | 9 | 2023-06-20T04:57:00Z | 2023-07-25T18:30:27Z | https://github.com/langchain-ai/langchain/issues/6462 | 1,764,637,339 | 6462
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
When using ChatOpenAI, does ChatOpenAI save the conversation history? When I call a service built with langchain many times with the same data, after a fixed number of calls a langchain.schema.OutputParserException occurs; the answer returned by OpenAI seems to be incomplete. My experience with other GPT models suggests this happens because the recorded dialogue history is too long, causing the answer plus the history tokens to exceed the model's length limit. How should I avoid this problem, or how should I start a new conversation or clear the conversation history after a ChatOpenAI conversation ends?
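If the history is coming from a memory object (my assumption; as far as I can tell ChatOpenAI itself does not store state between calls), clearing or windowing it would look roughly like this:
```python
from langchain.memory import ConversationBufferWindowMemory

# Keep only the last k exchanges so history tokens stay bounded.
memory = ConversationBufferWindowMemory(k=3)

# ... use the memory with a chain or agent ...

memory.clear()  # start a fresh conversation
```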
### Suggestion:
_No response_ | Issue: <Some problems when using ChatOpenAI> | https://api.github.com/repos/langchain-ai/langchain/issues/6461/comments | 4 | 2023-06-20T03:59:27Z | 2023-10-09T16:06:36Z | https://github.com/langchain-ai/langchain/issues/6461 | 1,764,582,337 | 6,461 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
My CSV may have no header row. CSVLoader seems to take the first row of data as a header, so the first row of data goes missing. How can I solve this problem?
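One possible workaround (assuming `csv_args` is forwarded to `csv.DictReader`, which is how I read the loader): supply explicit fieldnames so the first row is treated as data:
```python
from langchain.document_loaders.csv_loader import CSVLoader

loader = CSVLoader(
    file_path="data.csv",  # hypothetical path
    csv_args={"fieldnames": ["col1", "col2", "col3"]},  # column names chosen by you
)
docs = loader.load()
```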
### Suggestion:
_No response_ | Issue: how to load csv without headers | https://api.github.com/repos/langchain-ai/langchain/issues/6460/comments | 3 | 2023-06-20T03:50:04Z | 2023-09-26T16:05:18Z | https://github.com/langchain-ai/langchain/issues/6460 | 1,764,570,374 | 6,460 |
[
"langchain-ai",
"langchain"
] | ### System Info
AnalyticDB v6.3.10.14
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The code is simple:
```
adb = AnalyticDB(connection_string=connection_string, collection_name="openai", embedding_function=embeddings, pre_delete_collection=False)
```
Traceback:
```
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1965, in _exec_single_context
self.dialect.do_execute(
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/sqlalchemy/engine/default.py", line 921, in do_execute
cursor.execute(statement, parameters)
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/psycopg2cffi/_impl/cursor.py", line 30, in check_closed_
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/psycopg2cffi/_impl/cursor.py", line 263, in execute
self._pq_execute(self._query, conn._async)
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/psycopg2cffi/_impl/cursor.py", line 696, in _pq_execute
self._pq_fetch()
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/psycopg2cffi/_impl/cursor.py", line 757, in _pq_fetch
raise self._conn._create_exception(cursor=self)
psycopg2cffi._impl.exceptions.ProgrammingError: data type real[] has no default operator class for access method "ann"
HINT: You must specify an operator class or define a default operator class for the data type.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/ss/Project/langchain-embedding/embeddnig.py", line 26, in <module>
adb = AnalyticDB(connection_string=connection_string,collection_name="openai", embedding_function=embeddings,pre_delete_collection = False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/vectorstores/analyticdb.py", line 60, in __init__
self.__post_init__()
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/vectorstores/analyticdb.py", line 69, in __post_init__
self.create_collection()
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/vectorstores/analyticdb.py", line 115, in create_collection
self.create_table_if_not_exists()
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/vectorstores/analyticdb.py", line 109, in create_table_if_not_exists
conn.execute(index_statement)
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1412, in execute
return meth(
^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/sqlalchemy/sql/elements.py", line 483, in _execute_on_connection
return connection._execute_clauseelement(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1635, in _execute_clauseelement
ret = self._execute_context(
^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1844, in _execute_context
return self._exec_single_context(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1984, in _exec_single_context
self._handle_dbapi_exception(
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 2339, in _handle_dbapi_exception
raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1965, in _exec_single_context
self.dialect.do_execute(
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/sqlalchemy/engine/default.py", line 921, in do_execute
cursor.execute(statement, parameters)
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/psycopg2cffi/_impl/cursor.py", line 30, in check_closed_
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/psycopg2cffi/_impl/cursor.py", line 263, in execute
self._pq_execute(self._query, conn._async)
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/psycopg2cffi/_impl/cursor.py", line 696, in _pq_execute
self._pq_fetch()
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/psycopg2cffi/_impl/cursor.py", line 757, in _pq_fetch
raise self._conn._create_exception(cursor=self)
sqlalchemy.exc.ProgrammingError: (psycopg2cffi._impl.exceptions.ProgrammingError) data type real[] has no default operator class for access method "ann"
HINT: You must specify an operator class or define a default operator class for the data type.
[SQL:
CREATE INDEX openai_embedding_idx
ON openai USING ann(embedding)
WITH (
"dim" = 1536,
"hnsw_m" = 100
);
]
(Background on this error at: https://sqlalche.me/e/20/f405)
```
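Editor's note — a hedged check (an addition, not from the issue): the `ann` access method is provided by AnalyticDB's vector-index engine, so if the catalog probe below returns no row, the `CREATE INDEX ... USING ann(...)` statement will fail exactly as in the traceback. `connection_string` is assumed to be the same URI used in the report.

```python
# probe the standard pg_am catalog for the "ann" access method
from sqlalchemy import create_engine, text

engine = create_engine(connection_string)  # assumption: same URI as in the report
with engine.connect() as conn:
    row = conn.execute(text("SELECT amname FROM pg_am WHERE amname = 'ann'")).fetchone()
print("ann access method available:", row is not None)
```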
### Expected behavior
Should not raise error | Init Vector store AnalyticDB raise error data type real[] has no default operator class for access method "ann" | https://api.github.com/repos/langchain-ai/langchain/issues/6458/comments | 3 | 2023-06-20T02:49:40Z | 2023-06-25T02:03:51Z | https://github.com/langchain-ai/langchain/issues/6458 | 1,764,527,031 | 6,458 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
I had links to the older documentation and now all those links are broken. It was easy to figure out the version there as well. It would be great if we could access the older UI.
Thanks.
### Idea or request for content:
_No response_ | DOC: Is it possible to access older documentation UI? | https://api.github.com/repos/langchain-ai/langchain/issues/6452/comments | 2 | 2023-06-19T23:18:23Z | 2023-08-19T20:18:44Z | https://github.com/langchain-ai/langchain/issues/6452 | 1,764,350,023 | 6,452 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Broken link for question answering: https://python.langchain.com/docs/modules/chains/popular/question_answering.html
### Idea or request for content:
Fix and redirect to a different link | DOC:QA notebook link broken | https://api.github.com/repos/langchain-ai/langchain/issues/6445/comments | 2 | 2023-06-19T21:23:06Z | 2023-09-26T16:05:23Z | https://github.com/langchain-ai/langchain/issues/6445 | 1,764,242,835 | 6,445 |
[
"langchain-ai",
"langchain"
] | ### System Info
Can't use `SQLDatabaseChain` with `MultiPromptChain` (reproduction below); facing this error:

```
ValidationError: 20 validation errors for MultiPromptChain
destination_chains -> table_format -> prompt
  none is not an allowed value (type=type_error.none.not_allowed)
destination_chains -> table_format -> llm
  none is not an allowed value (type=type_error.none.not_allowed)
destination_chains -> table_format -> database
  extra fields not permitted (type=value_error.extra)
destination_chains -> table_format -> input_key
  extra fields not permitted (type=value_error.extra)
destination_chains -> table_format -> llm_chain
  extra fields not permitted (type=value_error.extra)
destination_chains -> table_format -> query_checker_prompt
  extra fields not permitted (type=value_error.extra)
destination_chains -> table_format -> return_direct
  extra fields not permitted (type=value_error.extra)
destination_chains -> table_format -> return_intermediate_steps
  extra fields not permitted (type=value_error.extra)
destination_chains -> table_format -> top_k
  extra fields not permitted (type=value_error.extra)
destination_chains -> table_format -> use_query_checker
  extra fields not permitted (type=value_error.extra)
destination_chains -> ans_format -> prompt
  none is not an allowed value (type=type_error.none.not_allowed)
destination_chains -> ans_format -> llm
  none is not an allowed value (type=type_error.none.not_allowed)
destination_chains -> ans_format -> database
  extra fields not permitted (type=value_error.extra)
destination_chains -> ans_format -> input_key
  extra fields not permitted (type=value_error.extra)
destination_chains -> ans_format -> llm_chain
  extra fields not permitted (type=value_error.extra)
destination_chains -> ans_format -> query_checker_prompt
  extra fields not permitted (type=value_error.extra)
destination_chains -> ans_format -> return_direct
  extra fields not permitted (type=value_error.extra)
destination_chains -> ans_format -> return_intermediate_steps
  extra fields not permitted (type=value_error.extra)
destination_chains -> ans_format -> top_k
  extra fields not permitted (type=value_error.extra)
destination_chains -> ans_format -> use_query_checker
  extra fields not permitted (type=value_error.extra)
```
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
# imports added for a self-contained repro (module paths per langchain 0.0.2xx;
# SQLDatabaseChain later moved to langchain_experimental)
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import ConversationChain, SQLDatabaseChain
from langchain.chains.router import MultiPromptChain
from langchain.chains.router.llm_router import LLMRouterChain, RouterOutputParser
from langchain.chains.router.multi_prompt_prompt import MULTI_PROMPT_ROUTER_TEMPLATE
from langchain.sql_database import SQLDatabase

table_template = """template 2"""
ans_template = """ template 1"""

prompt_infos = [
    {
        "name": "table_format",
        "description": "Good for answering questions if user asks to generate a table",
        "prompt_template": table_template,
    },
    {
        "name": "ans_format",
        "description": "Good for answering questions if user don't asks for any specific format",
        "prompt_template": ans_template,
    },
]

llm = OpenAI(temperature=0, model="text-davinci-003", max_tokens=1000)

sqlalchemy_url = f'sqlite:///../../../../notebooks/Chinook.db'
db = SQLDatabase.from_uri(sqlalchemy_url, view_support=True)

# build one SQLDatabaseChain per prompt
destination_chains = {}
for p_info in prompt_infos:
    name = p_info["name"]
    prompt_template = p_info["prompt_template"]
    prompt = PromptTemplate(template=prompt_template, input_variables=["input"], validate_template=False)
    chain = SQLDatabaseChain.from_llm(llm, db, verbose=True, prompt=prompt, use_query_checker=True, top_k=20)
    destination_chains[name] = chain

default_chain = ConversationChain(llm=llm, output_key="text")

destinations = [f"{p['name']}: {p['description']}" for p in prompt_infos]
destinations_str = "\n".join(destinations)
router_template = MULTI_PROMPT_ROUTER_TEMPLATE.format(destinations=destinations_str)
router_prompt = PromptTemplate(
    template=router_template,
    input_variables=["input"],
    output_parser=RouterOutputParser(),
    validate_template=False,
)
router_chain = LLMRouterChain.from_llm(llm, router_prompt)

# this line raises the ValidationError above
chain = MultiPromptChain(router_chain=router_chain, destination_chains=destination_chains, default_chain=default_chain, verbose=True)
print(chain.run("Give me top 5 stock codes in a table format"))
```
### Expected behavior
While running the above code, `SQLDatabaseChain` ends up serialized as the payload `llm=None database=<langchain.sql_database.SQLDatabase object at 0x7f5842b369e0> prompt=None top_k=20 input_key='query' output_key='result' return_intermediate_steps=False return_direct=False use_query_checker=True query_checker_prompt=None`, which raises the validation error when the `MultiPromptChain` line executes. | Can't use SQLdatabasechain with Multipromptchain | https://api.github.com/repos/langchain-ai/langchain/issues/6444/comments | 16 | 2023-06-19T20:40:11Z | 2023-11-16T16:07:22Z | https://github.com/langchain-ai/langchain/issues/6444 | 1,764,194,908 | 6,444 |
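Editor's note — a hedged workaround sketch (an addition, not from the issue, and untested against this exact version): `MultiPromptChain` pydantic-validates its `destination_chains` as `Mapping[str, LLMChain]`, which is what trips here, while its parent `MultiRouteChain` accepts any `Chain`. A thin subclass may sidestep the check:

```python
# route to arbitrary Chains via MultiRouteChain instead of MultiPromptChain
from typing import List
from langchain.chains.router.base import MultiRouteChain

class MultiChain(MultiRouteChain):
    @property
    def output_keys(self) -> List[str]:
        return ["result"]  # assumption: SQLDatabaseChain's default output key

chain = MultiChain(
    router_chain=router_chain,
    destination_chains=destination_chains,
    default_chain=default_chain,
    verbose=True,
)
```

Note that the `ConversationChain` default above emits `text` rather than `result`, so in practice the default chain's `output_key` would still need to be reconciled with the destinations'.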
[
"langchain-ai",
"langchain"
] | ### System Info
ImportError: cannot import name 'create_citation_fuzzy_match_chain' from 'langchain.chains'
Python=3.11.4
Langchain=0.0.129
The code:
`from langchain.chains import create_citation_fuzzy_match_chain`
The error
> ImportError: cannot import name 'create_citation_fuzzy_match_chain' from 'langchain.chains'
### Who can help?
_No response_
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [x] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.chains import create_citation_fuzzy_match_chain
### Expected behavior
ImportError: cannot import name 'create_citation_fuzzy_match_chain' from 'langchain.chains | cannot import name 'create_citation_fuzzy_match_chain' from 'langchain.chains' | https://api.github.com/repos/langchain-ai/langchain/issues/6439/comments | 2 | 2023-06-19T19:00:56Z | 2023-09-25T16:04:55Z | https://github.com/langchain-ai/langchain/issues/6439 | 1,764,063,357 | 6,439 |
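Editor's note (hedged): `create_citation_fuzzy_match_chain` was added to `langchain.chains` in a release well after 0.0.129, so the import simply does not exist in the reporter's version; upgrading should resolve it:

```python
# assumption: after `pip install -U langchain` (a mid-2023 or newer release)
# the import resolves
from langchain.chains import create_citation_fuzzy_match_chain
from langchain.chat_models import ChatOpenAI

chain = create_citation_fuzzy_match_chain(ChatOpenAI(temperature=0))
```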
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Hey, I had a question which was bugging me! I need to load a Hugging Face model from a local path rather than downloading it the first time in `HuggingFaceInstructEmbeddings`. Can anybody tell me how we can do that?
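Editor's note — a hedged sketch of one way this is commonly done (an assumption, not confirmed documentation): `model_name` is passed through to the underlying `INSTRUCTOR`/sentence-transformers loader, which also accepts a local directory:

```python
from langchain.embeddings import HuggingFaceInstructEmbeddings

# assumption: /path/to/instructor-large is a pre-downloaded local copy of the model
embeddings = HuggingFaceInstructEmbeddings(
    model_name="/path/to/instructor-large",
    model_kwargs={"device": "cpu"},
)
```

There is also a `cache_folder` argument that, if memory serves, points the loader at a local cache directory instead.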
### Idea or request for content:
_No response_ | DOC : Load model from local path in HuggingFaceInstructEmbeddings | https://api.github.com/repos/langchain-ai/langchain/issues/6436/comments | 2 | 2023-06-19T16:50:21Z | 2023-09-26T16:05:28Z | https://github.com/langchain-ai/langchain/issues/6436 | 1,763,879,224 | 6,436 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Issue: Is there a way to modify these default API prompts while using the OpenAPI spec agent? Could someone please advise? https://github.com/hwchase17/langchain/blob/master/langchain/agents/agent_toolkits/openapi/planner_prompt.py#LL6C1-L7C1
Or should I write my own custom OpenAPI spec agent?
Thanks!
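Editor's note — a hedged sketch (an addition, not from the issue): those prompts are module-level constants that the planner imports by name, so one blunt option is to rebind them on the planner module before the agent is constructed. Treat this as an untested monkey-patch sketch:

```python
# rebind the planner's prompt constant before building the agent;
# keep the same input placeholders as the original template
from langchain.agents.agent_toolkits.openapi import planner

planner.API_PLANNER_PROMPT = """<your customized planner prompt, with the
same placeholders as the original>"""
```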
### Suggestion:
_No response_ | Issue: Related to open api spec agent default prompts | https://api.github.com/repos/langchain-ai/langchain/issues/6434/comments | 1 | 2023-06-19T16:41:33Z | 2023-09-25T16:05:05Z | https://github.com/langchain-ai/langchain/issues/6434 | 1,763,869,712 | 6,434 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain-0.0.205, python3.10
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Run this in a notebook cell:

```python
from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate

chat_prompt = ChatPromptTemplate(
    messages=[
        HumanMessagePromptTemplate.from_template(
            "Do something with {question} using {context} giving it like {formatins}"
        )
    ],
    input_variables=["question", "context"],
    partial_variables={"formatins": "some structure"},
)
```

2. It throws the following error:

```
ValidationError: 1 validation error for ChatPromptTemplate
__root__
  Got mismatched input_variables. Expected: {'formatins', 'question', 'context'}. Got: ['question', 'context'] (type=value_error)
```

3. This was working until 24 hours ago. Potentially related to a recent commit to `langchain/prompts/chat.py`.
### Expected behavior
The chat_prompt should get created with the partial variables injected.
If this is an expected change, can you please suggest what the new way to use partial_variables should be?
Thanks | ChatPromptTemplate with partial variables is giving validation error | https://api.github.com/repos/langchain-ai/langchain/issues/6431/comments | 2 | 2023-06-19T16:15:49Z | 2023-06-20T05:39:17Z | https://github.com/langchain-ai/langchain/issues/6431 | 1,763,841,708 | 6,431 |
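Editor's note — a hedged workaround sketch until the validator accounts for partials (an addition, not from the issue): declare all three variables up front and supply the would-be partial value at format time:

```python
# declare formatins as a normal input and pass it when formatting
chat_prompt = ChatPromptTemplate(
    messages=[
        HumanMessagePromptTemplate.from_template(
            "Do something with {question} using {context} giving it like {formatins}"
        )
    ],
    input_variables=["question", "context", "formatins"],
)
messages = chat_prompt.format_messages(
    question="...", context="...", formatins="some structure"
)
```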
[
"langchain-ai",
"langchain"
] | ### System Info
Python 3.11
Langchain 201
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [x] Agents / Agent Executors
- [x] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I created a `StructuredTool`, including argument definitions with Pydantic, and want to use it with an `OpenAIFunctionsAgent`.
I create the tool with `StructuredTool.from_function(my_func)`.
OpenAI doesn't use the schema correctly. After a bit of digging I realized that only part of the pydantic schema (the top-level `properties`) is provided to the OpenAI call instead of the full JSON schema.
In `langchain.tools.convert_to_openai.py` see:
```python
def format_tool_to_openai_function(tool: BaseTool) -> FunctionDescription:
"""Format tool into the open AI function API."""
if isinstance(tool, StructuredTool):
schema_ = tool.args_schema.schema() # <============================== HERE
# Bug with required missing for structured tools.
required = sorted(schema_["properties"]) # BUG WORKAROUND
return {
"name": tool.name,
"description": tool.description,
"parameters": {
"type": "object",
"properties": schema_["properties"],
"required": required,
},
}
...
```
I got it to work with this code instead, forwarding pydantic's full JSON schema.
With this openai correctly parses the schema and the tool functions as expected.
```python
import json  # needed for json.loads below

def format_tool_to_openai_function(tool: BaseTool) -> FunctionDescription:
"""Format tool into the open AI function API."""
if isinstance(tool, StructuredTool):
schema_ = tool.args_schema.schema_json()
return {
"name": tool.name,
"description": tool.description,
"parameters": json.loads(schema_),
}
...
```
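Editor's note — a minimal illustration (an addition, not from the issue) of why forwarding only `schema_["properties"]` loses information for nested models: pydantic emits nested types under `definitions` and references them via `$ref`, and those definitions are dropped when only the properties are passed along:

```python
from pydantic import BaseModel

class Inner(BaseModel):
    x: int

class Args(BaseModel):
    inner: Inner

schema_ = Args.schema()
print(sorted(schema_))        # includes 'definitions' alongside 'properties'
print(schema_["properties"])  # {'inner': {'$ref': '#/definitions/Inner'}}
```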
### Expected behavior
Use the full JSON schema instead of only the pydantic schema's top-level properties to call OpenAI. | Structured Tools don't work with OpenAI's new functions | https://api.github.com/repos/langchain-ai/langchain/issues/6428/comments | 1 | 2023-06-19T15:16:54Z | 2023-07-23T15:51:11Z | https://github.com/langchain-ai/langchain/issues/6428 | 1,763,751,453 | 6,428 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Natively, a [Chromadb collection](https://github.com/chroma-core/chroma/blob/main/chromadb/api/models/Collection.py) supports multiple parameters when making a `get` or a `query` on a collection, for example `where` and `ids`. For the moment, the `get` method of the [Chroma vector store](https://github.com/hwchase17/langchain/blob/master/langchain/vectorstores/chroma.py) only supports the `include` argument. I think it would be nice to extend support to all arguments available in the base `get` method of the Chroma collection.
### Motivation
Align the functionality of the LangChain Chroma vector store with all the functionality available on chromadb's collections.
### Your contribution
I would be happy to contribute the change to make all arguments available, as in ChromaDb collections. | Missing arguments for Chroma vector store get methods | https://api.github.com/repos/langchain-ai/langchain/issues/6422/comments | 1 | 2023-06-19T13:44:29Z | 2023-07-10T12:14:20Z | https://github.com/langchain-ai/langchain/issues/6422 | 1,763,580,005 | 6,422 |
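Editor's note — a hedged sketch of what the extended method could look like (parameter names follow chromadb's `Collection.get`; this is a proposal sketch, not the merged implementation):

```python
from typing import Any, Dict, List, Optional

def get(
    self,
    ids: Optional[List[str]] = None,
    where: Optional[Dict[str, Any]] = None,
    limit: Optional[int] = None,
    offset: Optional[int] = None,
    where_document: Optional[Dict[str, Any]] = None,
    include: Optional[List[str]] = None,
) -> Dict[str, Any]:
    """Forward every supported filter to the underlying chromadb collection."""
    kwargs: Dict[str, Any] = {
        "ids": ids,
        "where": where,
        "limit": limit,
        "offset": offset,
        "where_document": where_document,
    }
    if include is not None:  # let chromadb apply its own default otherwise
        kwargs["include"] = include
    return self._collection.get(**kwargs)
```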
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain == 0.0.205
Python == 3.10.7
openai == 0.27.8
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [x] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Use gpt-3.5-turbo-0613
In a chat-conversational-react-description agent ask "could you tell me more about the tools you have ?"
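A minimal sketch of that reproduction (editor's assumption of the setup described, not the reporter's exact script):

```python
from langchain.agents import AgentType, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory

llm = ChatOpenAI(model="gpt-3.5-turbo-0613", temperature=0)  # swap in gpt-3.5-turbo to compare
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
agent = initialize_agent(
    tools=[],  # assumption: any small tool list shows the same behaviour
    llm=llm,
    agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
    memory=memory,
    verbose=True,
)
agent.run("could you tell me more about the tools you have ?")
```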
### Expected behavior
The human message instructs the LLM to give the output in JSON format, but that is not followed.
Here is the output with gpt-3.5-turbo:

Here is the output with gpt-3.5-turbo-0613:

| gpt-3.5-turbo-0613 is not following the instructions of agents | https://api.github.com/repos/langchain-ai/langchain/issues/6418/comments | 7 | 2023-06-19T09:21:38Z | 2023-10-06T16:07:19Z | https://github.com/langchain-ai/langchain/issues/6418 | 1,763,117,280 | 6,418 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.209
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
https://python.langchain.com/docs/use_cases/question_answering/
[Question Answering Notebook](https://python.langchain.com/docs/modules/chains/index_examples/question_answering.html)
[VectorDB Question Answering Notebook](https://python.langchain.com/docs/modules/chains/index_examples/vector_db_qa.html)
The URLs above are invalid.
### Expected behavior
All hyperlinks are accessible. | too many doc url invalid | https://api.github.com/repos/langchain-ai/langchain/issues/6416/comments | 5 | 2023-06-19T07:49:21Z | 2023-09-27T16:06:00Z | https://github.com/langchain-ai/langchain/issues/6416 | 1,762,955,233 | 6,416 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain 0.0.204, Windows, Python 3.9.16, SQLAlchemy 2.0.15

```python
db = SQLDatabase.from_uri(
    oracle_connection_str,
    include_tables=["EVR_REGION"],
    sample_rows_in_table_info=3,
)
```

Getting the following error:

```
Traceback (most recent call last):
  File "Z:\MHossain_OneDrive\OneDrive\ChatGPT\LangChain\RAG\DatabaseQuery\sql_database_chain.py", line 27, in <module>
    db = SQLDatabase.from_uri(
  File "Z:\Users\User\anaconda3\envs\hugging_face_env\lib\site-packages\langchain\sql_database.py", line 124, in from_uri
    return cls(create_engine(database_uri, **_engine_args), **kwargs)
  File "Z:\Users\User\anaconda3\envs\hugging_face_env\lib\site-packages\langchain\sql_database.py", line 73, in __init__
    raise ValueError(
ValueError: include_tables {'EVR_REGION'} not found in database
```

If the schema is included, like:

```python
db = SQLDatabase.from_uri(
    oracle_connection_str,
    include_tables=["EVR1.EVR_REGION"],
    sample_rows_in_table_info=3,
)
```

I still get the same error.
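Editor's note — a hedged suggestion (an addition, not from the issue): SQLAlchemy's Oracle dialect reflects table names in lowercase, and `SQLDatabase` takes the owning schema as a separate `schema` argument rather than as a prefix on the table name, so the following may work:

```python
# assumption: EVR1 is the owning schema; table names must match
# SQLAlchemy's (lowercase) reflection of Oracle identifiers
db = SQLDatabase.from_uri(
    oracle_connection_str,
    schema="EVR1",
    include_tables=["evr_region"],
    sample_rows_in_table_info=3,
)
```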
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Include tables like:

```python
db = SQLDatabase.from_uri(
    oracle_connection_str,
    include_tables=["EVR1.EVR_REGION"],
    sample_rows_in_table_info=3,
)
```

2. It is an Oracle connection string.
3. BTW: it works without including the table name.
4. It also works for PostgreSQL when including the table name.
### Expected behavior
Should not get any error | Getting error when including Tables in SQLDatabase.from_uri for Oracle | https://api.github.com/repos/langchain-ai/langchain/issues/6415/comments | 10 | 2023-06-19T07:44:40Z | 2023-12-06T17:45:20Z | https://github.com/langchain-ai/langchain/issues/6415 | 1,762,948,644 | 6,415 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain 0.0.204, Windows, Python 3.9.16, SQLAlchemy 2.0.15

My query: list products created between 01 March 2015 and 31 March 2015 and status is 4

Results from SQLDatabaseSequentialChain:

```
SQLQuery:The original query is correct and does not contain any of the common mistakes listed. Therefore, the original query is:
SQLQuery:SELECT * FROM products WHERE created_date BETWEEN '2015-03-01' AND '2015-03-31' AND status = 4

sqlalchemy.exc.DatabaseError: (oracledb.exceptions.DatabaseError) ORA-00904: "CREATED_DATE": invalid identifier
[SQL: SELECT * FROM products WHERE created_date BETWEEN '2015-03-01' AND '2015-03-31' AND status = 4]
```

Note: for Oracle, the date literals here need the TO_DATE function.

Question: after getting the error, the chain does not go back to the model to correct the query, right? Is there an option so that, when executing the query fails, the failing query and the error are submitted back to the model to fix?
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Ask a question that produces a SQL query which fails to execute.
### Expected behavior
The query should be fixed by the model if any error occurred during execution. | SQLDatabaseSequentialChain is not submitting the SQL query with error to model to correct it. | https://api.github.com/repos/langchain-ai/langchain/issues/6414/comments | 2 | 2023-06-19T07:20:53Z | 2023-09-26T16:05:38Z | https://github.com/langchain-ai/langchain/issues/6414 | 1,762,915,859 | 6,414 |
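Editor's note — a hedged sketch of the retry behaviour the reporter is asking for, done outside the chain (an addition; `db_chain`, `llm`, and `db` are assumed from the setup above, and this is not a built-in option):

```python
from sqlalchemy.exc import DatabaseError

question = "list products created between 01 March 2015 and 31 March 2015 and status is 4"
try:
    result = db_chain.run(question)
except DatabaseError as exc:
    # feed the failing SQL and the error text back to the model
    fix_prompt = (
        f"This Oracle SQL query failed:\n{exc.statement}\n"
        f"with error:\n{exc.orig}\n"
        "Rewrite it as valid Oracle SQL (e.g. use TO_DATE for date literals) "
        "and return only the corrected query."
    )
    corrected_sql = llm.predict(fix_prompt)
    result = db.run(corrected_sql)
```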