| issue_owner_repo | issue_body | issue_title | issue_comments_url | issue_comments_count | issue_created_at | issue_updated_at | issue_html_url | issue_github_id | issue_number |
|---|---|---|---|---|---|---|---|---|---|
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain version: 0.0.354
LangChain Community version: 0.0.8
Platform: Apple M3 Pro chip on MacOS Sonoma (MacOS 14.2.1)
Python version: 3.11.7
### Who can help?
@baskaryan has the most recent commits on this section of the code, but those were for moving items to the `langchain_community` package. I'm not sure who the original author is.
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create a list of file paths
2. Attempt to load using `UnstructuredFileLoader`
```python
from langchain_community.document_loaders import UnstructuredFileLoader
files = [
"file_1.txt",
"file_2.txt"
]
loader = UnstructuredFileLoader(file_path=files)
documents = loader.load() # Error occurs on this line
```
Contents of `file_1.txt`
```txt
some stuff
```
Contents of `file_2.txt`
```txt
some more stuff
```
Stack trace
```
Traceback (most recent call last):
File "/Users/joshl/Library/Application Support/JetBrains/PyCharm2023.3/scratches/scratch.py", line 9, in <module>
documents = loader.load() # Error occurs on this line
^^^^^^^^^^^^^
File "/Users/joshl/miniforge3/envs/MechanisticLLM/lib/python3.11/site-packages/langchain_community/document_loaders/unstructured.py", line 87, in load
elements = self._get_elements()
^^^^^^^^^^^^^^^^^^^^
File "/Users/joshl/miniforge3/envs/MechanisticLLM/lib/python3.11/site-packages/langchain_community/document_loaders/unstructured.py", line 173, in _get_elements
return partition(filename=self.file_path, **self.unstructured_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/joshl/miniforge3/envs/MechanisticLLM/lib/python3.11/site-packages/unstructured/partition/auto.py", line 278, in partition
filetype = detect_filetype(
^^^^^^^^^^^^^^^^
File "/Users/joshl/miniforge3/envs/MechanisticLLM/lib/python3.11/site-packages/unstructured/file_utils/filetype.py", line 248, in detect_filetype
_, extension = os.path.splitext(_filename)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen posixpath>", line 118, in splitext
TypeError: expected str, bytes or os.PathLike object, not list
Process finished with exit code 1
```
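Until this works end to end, a per-file workaround (a sketch; it assumes single-path loading works, as it does here) is to load each file separately and merge the results:

```python
from langchain_community.document_loaders import UnstructuredFileLoader

files = ["file_1.txt", "file_2.txt"]
documents = []
for path in files:
    # Load one file at a time and collect the resulting documents.
    documents.extend(UnstructuredFileLoader(file_path=path).load())
```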
### Expected behavior
A list of strings should be able to be passed to the `UnstructuredFileLoader` class, based on its `__init__` arguments:
```python
class UnstructuredFileLoader(UnstructuredBaseLoader):
def __init__(
self,
file_path: Union[str, List[str]],
mode: str = "single",
**unstructured_kwargs: Any,
):
``` | `UnstructuredFileLoader` shows `TypeError: expected str, bytes or os.PathLike object, not list` when a list of files is passed in | https://api.github.com/repos/langchain-ai/langchain/issues/15607/comments | 4 | 2024-01-05T22:14:50Z | 2024-01-24T03:37:38Z | https://github.com/langchain-ai/langchain/issues/15607 | 2,068,110,708 | 15,607 |
[
"langchain-ai",
"langchain"
] | ### System Info
**LangChain Version:** 0.0.354
**Platform:** MacOS Sonoma 14.2.1
**Python Version:** 3.11.6
### Who can help?
@hwchase17
@agola11
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
**Code sample that utilizes the Playwright Toolkit:** [https://python.langchain.com/docs/integrations/toolkits/playwright](https://python.langchain.com/docs/integrations/toolkits/playwright)
For instance, when you attempt to run:
```Python
await navigate_tool.arun(
{"url": "https://web.archive.org/web/20230428131116/https://www.cnn.com/world"}
)
```
Nothing happens after 20 minutes.
The only way I have been able to get a response was by waiting a few seconds and then turning off my computer's wifi, which would return the expected:
```Bash
'Navigating to https://web.archive.org/web/20230428131116/https://www.cnn.com/world returned status code 200'
```
I am utilizing the browser in an agent, and here is the code:
```Python
import asyncio
from typing import Type, Optional
from langchain.agents import AgentExecutor
from langchain.schema import SystemMessage, HumanMessage, AIMessage
from langchain.agents.format_scratchpad.openai_functions import format_to_openai_function_messages
from langchain.agents.output_parsers.openai_functions import OpenAIFunctionsAgentOutputParser
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.tools.convert_to_openai import format_tool_to_openai_function
from langchain_community.tools.playwright.utils import create_async_playwright_browser
from langchain.tools.tavily_search import TavilySearchResults
from langchain.utilities.tavily_search import TavilySearchAPIWrapper
from langchain_community.agent_toolkits.playwright import PlayWrightBrowserToolkit
from langchain.callbacks.manager import CallbackManagerForToolRun
from langchain.pydantic_v1 import BaseModel, Field
from langchain.tools import BaseTool
from auto_sentry.src.chat_models.enums.OpenAI import OpenAI as openai_enums
from auto_sentry.src.chat_models.Memory import Memory
from auto_sentry.src.chat_models.OpenAI import GPT
class AIQuestionInput(BaseModel):
query: str = Field(description="should contain your question for the user including specific aspects or details from the user's input that require further clarification or elaboration")
class AIQuestionTool(BaseTool):
name = "ai_question"
description = "useful for when you need to clairify user requests or when you need to ask them a question"
args_schema: Type[BaseModel] = AIQuestionInput
def _run(
self, query: str, run_manager: Optional[CallbackManagerForToolRun] = None
) -> str:
"""Use the tool."""
return query
llm = GPT(model=openai_enums.GPT_4_1106_PREVIEW)
memory = Memory("Test_Conversation")
memory.clear()
search = [TavilySearchResults(api_wrapper=TavilySearchAPIWrapper(), handle_tool_error=True, verbose=True)]
playwright_async_browser = create_async_playwright_browser(headless=True)
toolkit = PlayWrightBrowserToolkit.from_browser(async_browser=playwright_async_browser)
playwright_tools = toolkit.get_tools()
tools:list = search + playwright_tools
MEMORY_KEY = "chat_history"
prompt = ChatPromptTemplate.from_messages(
[
SystemMessage(
content="You are a helpful assistant."
),
MessagesPlaceholder(variable_name=MEMORY_KEY),
HumanMessage(content="{input}"),
MessagesPlaceholder(variable_name="agent_scratchpad"),
]
)
memory.add(prompt.messages[0])
functions=[format_tool_to_openai_function(tool) for tool in tools]
#"""
llm_with_tools = llm.llm.bind(
functions=functions
)
agent = (
{
"input": lambda x: x["input"],
"agent_scratchpad": lambda x: format_to_openai_function_messages(
x["intermediate_steps"]
),
"chat_history": lambda x: x["chat_history"],
}
| prompt
| llm_with_tools
| OpenAIFunctionsAgentOutputParser()
)
agent_executer = AgentExecutor(
agent=agent, tools=tools, verbose=True, return_intermediate_steps=True
)
query = HumanMessage(content=input("> "))
memory.add(query)
async def run_agent():
print(memory.memory())
response = await agent_executer.ainvoke(
{"input": query, MEMORY_KEY: memory.memory()}
)
return response
response = asyncio.run(run_agent())
agent_messages = format_to_openai_function_messages(response["intermediate_steps"])
staged_messages = []
staged_messages.extend(agent_messages)
staged_messages.append(AIMessage(content=response["output"]))
memory.add(staged_messages)
print(memory.memory())
```
And when it executes a Playwright browser-related command, it just freezes and does nothing.
Here is an example that runs a Playwright browser-related command:
```Bash
> Summarize today's financial news from Google Finance.
[SystemMessage(content='You are a helpful assistant.'), HumanMessage(content="Summarize today's financial news from Google Finance.")]
> Entering new AgentExecutor chain...
Invoking: `navigate_browser` with `{'url': 'https://www.google.com/finance'}`
```
But when it utilizes any other tool, such as Tavily Search, it works successfully:
```Bash
> What is the weather supposed to be like in Miami tomorrow?
[SystemMessage(content='You are a helpful assistant.'), HumanMessage(content='What is the weather supposed to be like in Miami tomorrow?')]
> Entering new AgentExecutor chain...
Invoking: `tavily_search_results_json` with `{'query': 'Miami weather forecast for tomorrow'}`
[{'url': 'https://www.weather.gov/mfl/\xa0', 'content': 'Read More > Privacy Policy Miami - South Florida Weather Forecast Office NWS Forecast Of
fice Miami - South Florida Aviation Weather International Weather RADAR IMAGERY National Miami Radar Key West Radar Across Florida CLIMATE Miami - South Florida11691 SW 17th StreetMiami, FL 33165305-229-4522Comments? Questions? Please Contact Us. Last Map Update: Fri, Jan. 5, 2024 at 3:54:05 pm EST Text Product Selector: CURRENT HAZARDS OutlooksMiami - South Florida. Weather Forecast Office. NWS Forecast Office Miami - South Florida. Weather.gov > Miami ... Fri, Jan. 5, 2024 at 2:20:05 am EST. Watches, Warnings & Advisories. Zoom Out: Gale Warning: Small Craft Advisory: Rip Current Statement: ... National Weather Service Miami - South Florida 11691 SW 17th Street Miami, FL 33165 305 ...'}]The weather forecast for Miami tomorrow can be found on the National Weather Service's website for the Miami - South Florida region. You can visit [this link](https://www.weather.gov/mfl/) for the most up-to-date information on the weather forecast, including any watches, warnings, or advisories that may be in effect.
> Finished chain.
[SystemMessage(content='You are a helpful assistant.'), HumanMessage(content='What is the weather supposed to be like in Miami tomorrow?'), AIMes
sage(content='', additional_kwargs={'function_call': {'arguments': '{"query":"Miami weather forecast for tomorrow"}', 'name': 'tavily_search_results_json'}}), FunctionMessage(content='[{"url": "https://www.weather.gov/mfl/\xa0", "content": "Read More > Privacy Policy Miami - South Florida Weather Forecast Office NWS Forecast Office Miami - South Florida Aviation Weather International Weather RADAR IMAGERY National Miami Radar Key West Radar Across Florida CLIMATE Miami - South Florida11691 SW 17th StreetMiami, FL 33165305-229-4522Comments? Questions? Please Contact Us. Last Map Update: Fri, Jan. 5, 2024 at 3:54:05 pm EST Text Product Selector: CURRENT HAZARDS OutlooksMiami - South Florida. Weather Forecast Office. NWS Forecast Office Miami - South Florida. Weather.gov > Miami ... Fri, Jan. 5, 2024 at 2:20:05 am EST. Watches, Warnings & Advisories. Zoom Out: Gale Warning: Small Craft Advisory: Rip Current Statement: ... National Weather Service Miami - South Florida 11691 SW 17th Street Miami, FL 33165 305 ..."}]', name='tavily_search_results_json'), AIMessage(content="The weather forecast for Miami tomorrow can be found on the National Weather Service's website for the Miami - South Florida region. You can visit [this link](https://www.weather.gov/mfl/) for the most up-to-date information on the weather forecast, including any watches, warnings, or advisories that may be in effect.")]
```
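A working hypothesis (an assumption on my part, not confirmed): `create_async_playwright_browser` starts the browser on one event loop at import time, while `asyncio.run(run_agent())` creates a fresh loop, so the browser's coroutines are never scheduled and the tool call hangs. A sketch that keeps everything on a single loop:

```python
import asyncio

from playwright.async_api import async_playwright
from langchain_community.agent_toolkits.playwright import PlayWrightBrowserToolkit

async def main():
    # Start the browser inside the running loop so the toolkit's tool calls
    # and the browser share the same event loop.
    pw = await async_playwright().start()
    browser = await pw.chromium.launch(headless=True)
    toolkit = PlayWrightBrowserToolkit.from_browser(async_browser=browser)
    tools = toolkit.get_tools()
    # ... build the agent with these tools, then: await agent_executor.ainvoke(...)
    await browser.close()
    await pw.stop()

asyncio.run(main())
```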
### Expected behavior
The Playwright browser tools should return results to an input in a relatively prompt manner, but currently the call freezes and never returns. | Playwright Browser Freezing | https://api.github.com/repos/langchain-ai/langchain/issues/15605/comments | 6 | 2024-01-05T20:59:22Z | 2024-07-06T11:44:10Z | https://github.com/langchain-ai/langchain/issues/15605 | 2,068,033,254 | 15,605 |
[
"langchain-ai",
"langchain"
] | ### Discussed in https://github.com/langchain-ai/langchain/discussions/15598
Originally posted by **MahdiJafari1**, January 5, 2024:
OpenAI deprecated its `text-davinci-003` completion model. I've updated the model to `gpt-3.5-turbo-instruct`. I am encountering an issue with the LangChain where it incorrectly classifies the `gpt-3.5-turbo-instruct` model as a chat model. This is causing initialization problems in my code.
**Environment:**
```
python = "^3.10"
langchain = "^0.0.130"
```
OS: Ubuntu
**Expected Behavior:**
The expected behavior is that the gpt-3.5-turbo-instruct model should be recognized as a completion model by LangChain and initialized appropriately without warnings or errors.
**Actual Behavior:**
When attempting to initialize the gpt-3.5-turbo-instruct model, I receive warnings suggesting that this model is being misclassified as a chat model. The specific warnings are:
```shell
/home/mahdi/.cache/pypoetry/virtualenvs/backend-bRqVKcMN-py3.11/lib/python3.11/site-packages/langchain/llms/openai.py:169: UserWarning: You are trying to use a chat model. This way of initializing it is no longer supported. Instead, please use: `from langchain.chat_models import ChatOpenAI`
/home/mahdi/.cache/pypoetry/virtualenvs/backend-bRqVKcMN-py3.11/lib/python3.11/site-packages/langchain/llms/openai.py:608: UserWarning: You are trying to use a chat model. This way of initializing it is no longer supported. Instead, please use: `from langchain.chat_models import ChatOpenAI`
```
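One detail worth noting about my snippet below (an observation only; I have not verified that it changes the classification on 0.0.130): the parameters are passed as a single positional dict. The keyword-argument form would be:

```python
from langchain import OpenAI

# Hypothetical keyword-argument form of the same initialization.
llm = OpenAI(
    model_name="gpt-3.5-turbo-instruct",
    temperature=0.0,
    top_p=1,
    openai_api_key="API_KEY",
)
```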
**My simplified code:**
```python
from langchain import OpenAI
llm = OpenAI({
"model_name": "gpt-3.5-turbo-instruct",
"temperature": 0.0,
"top_p": 1,
"openai_api_key": "API_KEY",
})
print(llm)
```
Output:
```shell
OpenAIChat[Params: {'model_name': 'gpt-3.5-turbo-instruct', 'temperature': 0.0, 'top_p': 1}
``` | Issue with LangChain Misclassifying gpt-3.5-turbo-instruct as Chat Model | https://api.github.com/repos/langchain-ai/langchain/issues/15604/comments | 3 | 2024-01-05T20:55:57Z | 2024-01-06T18:29:34Z | https://github.com/langchain-ai/langchain/issues/15604 | 2,068,029,209 | 15,604 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.354, Windows 10,Python 3.11.5
### Who can help?
@hwchase17 @eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
input_data = {
    "chat_history": chat_history,
    "question": question,
}
chain = ConversationalRetrievalChain.from_llm(
    llm=ChatOpenAI(model=current_model),
    retriever=retriever,
    memory=memory,
    return_source_documents=True,
    get_chat_history=lambda h: h,
)
result = chain(input_data)
```
### Expected behavior
I am writing to seek assistance with an issue I've encountered while working with the ConversationalRetrievalChain in LangChain. I have been developing a Discord bot using LangChain and have run into a consistent error that I'm struggling to resolve.
I am trying to use ConversationalRetrievalChain for a chatbot application. However, I keep encountering a ValueError related to input keys. The error message states: "ValueError: Missing some input keys: {'chat_history', 'question'}". This error occurs when I attempt to pass a dictionary containing 'chat_history' and 'question' as separate keys to the ConversationalRetrievalChain.
I have tried various approaches to format the input data correctly, including combining 'chat_history' and 'question' into a single string and passing them as separate keys. Despite these efforts, the error persists. I have also searched for solutions on platforms like Stack Overflow and GitHub but haven't found a resolution that addresses this specific issue.
Could you please provide guidance on how to correctly structure the input data for ConversationalRetrievalChain, or suggest any alternative approaches to resolve this issue? Any insights or recommendations would be greatly appreciated.
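For reference, here is the input shape I would have expected to work (a sketch; my assumption is that when a memory with `memory_key="chat_history"` is attached, only `question` should be passed, and `output_key="answer"` is required when source documents are returned):

```python
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(
    memory_key="chat_history", return_messages=True, output_key="answer"
)
chain = ConversationalRetrievalChain.from_llm(
    llm=ChatOpenAI(model=current_model),  # current_model/retriever as in my snippet
    retriever=retriever,
    memory=memory,
    return_source_documents=True,
)
result = chain({"question": question})  # no explicit chat_history key; memory supplies it
```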
Thank you for your time and assistance. I look forward to your response. | Issue with ConversationalRetrievalChain in LangChain - ValueError: Missing Input Keys | https://api.github.com/repos/langchain-ai/langchain/issues/15601/comments | 3 | 2024-01-05T20:09:10Z | 2024-04-15T16:25:10Z | https://github.com/langchain-ai/langchain/issues/15601 | 2,067,974,482 | 15,601 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
There is no way to view the old documentation on [the official site](https://python.langchain.com/). This makes it extremely difficult to develop. It seems as though every week there is another feature that is deleted, thus another page being deleted.
How is this acceptable? It is becoming almost unusable in the Enterprise world due to the constant changes and lack of documentation.
### Idea or request for content:
Implement a selector that lets you choose which langchain version you're on so that you can view the documentation for that specific version. | DOC: Lack of Documentation Versioning on Langchain Website | https://api.github.com/repos/langchain-ai/langchain/issues/15597/comments | 2 | 2024-01-05T19:17:59Z | 2024-04-13T16:11:50Z | https://github.com/langchain-ai/langchain/issues/15597 | 2,067,913,011 | 15,597 |
[
"langchain-ai",
"langchain"
] | ### System Info
Hello Langchain team,
I have encountered an error while using `AgentTokenBufferMemory` and `RedisChatMessageHistory`. The problem occurs because the buffer is not removing old messages when new ones are added. This causes an issue with OpenAI as the context window exceeds the token limit. Upon investigation, I found that the issue is in the `save_context()` method of the `AgentTokenBufferMemory` class.
```python
def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, Any]) -> None:
"""Save context from this conversation to buffer. Pruned."""
input_str, output_str = self._get_input_output(inputs, outputs)
self.chat_memory.add_user_message(input_str)
steps = format_to_openai_function_messages(outputs[self.intermediate_steps_key])
for msg in steps:
self.chat_memory.add_message(msg)
self.chat_memory.add_ai_message(output_str)
# Prune buffer if it exceeds max token limit
buffer = self.chat_memory.messages
curr_buffer_length = self.llm.get_num_tokens_from_messages(buffer)
if curr_buffer_length > self.max_token_limit:
while curr_buffer_length > self.max_token_limit:
buffer.pop(0)
curr_buffer_length = self.llm.get_num_tokens_from_messages(buffer)
```
In the `save_context()` method, the input and output messages are retrieved and added to the chat memory. The intermediate steps are then converted to OpenAI function messages and added to the chat memory as well. If the maximum token limit is exceeded, the buffer is pruned by removing the oldest messages until the current buffer length is below the limit.
However, the problem arises from the line `buffer.pop(0)`, which removes a message from a local variable rather than removing it from the Redis list. Since the list is not pruned, the agent becomes stuck and fails permanently.
Here is the error I am receiving:
```json
{'message': "This model's maximum context length is 16385 tokens. However, your messages resulted in 16602 tokens (16529 in the messages, 73 in the functions). Please reduce the length of the messages or functions.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}
```
Here is the information from LangSmith -> Metadata -> Runtime:
<img width="422" alt="image" src="https://github.com/langchain-ai/langchain/assets/101429097/eb3105f2-fd18-4527-b4b2-f89e2e87bdb7">
### Who can help?
@baskaryan
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import uuid
from langchain.memory.chat_message_histories import RedisChatMessageHistory
from langchain.agents.openai_functions_agent.agent_token_buffer_memory import AgentTokenBufferMemory
from langchain.schema import SystemMessage
from langchain.agents import OpenAIFunctionsAgent, AgentExecutor
from langchain.prompts import MessagesPlaceholder
prompt: SystemMessage = None # my prompt
chat_message_history = RedisChatMessageHistory(
session_id=uuid.uuid4(),
url="redis://localhost:6379",
key_prefix="my_feature_",
ttl=3600,
)
memory = AgentTokenBufferMemory(
chat_memory=chat_message_history,
memory_key="chat_history",
return_messages=True,
llm=llm,
max_token_limit=10000,
)
prompt = OpenAIFunctionsAgent.create_prompt(
system_message=prompt, extra_prompt_messages=[MessagesPlaceholder(variable_name="chat_history")]
)
agent = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, memory=memory, verbose=True, return_intermediate_steps=True)
agent_response = agent_executor.invoke({"input": user_message})
```
### Expected behavior
Once the current messages have been saved, if the tokens exceed the maximum token limit, the `AgentTokenBufferMemory` should remove old messages from the Redis list (RPOP) until they are below the maximum token limit. | AgentTokenBufferMemory does not remove old messages, leading to the "context_length_exceeded" error from OpenAI. | https://api.github.com/repos/langchain-ai/langchain/issues/15593/comments | 2 | 2024-01-05T18:25:18Z | 2024-05-01T16:06:03Z | https://github.com/langchain-ai/langchain/issues/15593 | 2,067,842,408 | 15,593 |
[
"langchain-ai",
"langchain"
] | ### Feature request
It would be really great to enhance the `VectorStoreRetriever` class by allowing additional (optional) search kwargs to be passed directly to the `invoke` method. Right now the input type of `invoke` is `str`; it would be useful to be able to receive a custom object with "query" and "filters".
Ideally one would be able to do:
```python
chain = (
vector_store.as_retriever()
| parse_documents_to_str
| llm
)
chain.invoke({'query': some_question, 'filter': filter_expression})
```
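In the meantime, a workaround sketch with `RunnableLambda` (assuming the underlying store's `similarity_search` accepts a `filter` kwarg, as e.g. Chroma and Pinecone do):

```python
from langchain_core.runnables import RunnableLambda

def retrieve(inputs: dict):
    # Forward the per-call filter to the vector store directly.
    return vector_store.similarity_search(inputs["query"], filter=inputs.get("filter"))

chain = RunnableLambda(retrieve) | parse_documents_to_str | llm
chain.invoke({"query": some_question, "filter": filter_expression})
```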
### Motivation
This change would enable dynamic querying capabilities, such as metadata filtering, which are not currently possible due to the requirement of defining search_kwargs at the constructor level or altering them on the instantiated object. I know I can work this out using a custom retriever class and overriding some methods, but metadata filtering is a very powerful option to enhance search, and it would be really helpful for developers to have this built in as default behaviour.
### Your contribution
I can submit a PR if someone confirms this makes sense | Enhance Flexibility in VectorStoreRetriever by Allowing Dynamic search args in invoke Method | https://api.github.com/repos/langchain-ai/langchain/issues/15590/comments | 3 | 2024-01-05T17:29:34Z | 2024-04-12T16:11:30Z | https://github.com/langchain-ai/langchain/issues/15590 | 2,067,752,188 | 15,590 |
[
"langchain-ai",
"langchain"
] | ### System Info
python: 3.11
langchain: latest
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
In my chatbot, which interacts with a SQL database, typing "hi" returns the entity in the first row and first column instead of answering with nothing or "invalid question". Sometimes the response is generated in ascending order and, when the query is re-run, in descending order. How can the answer be validated before it is returned?
Also, how can I make my chatbot user-friendly, so that when the user says "hi" it greets them back, and informal questions get a plain conversational reply? Here is my code:
```python
import pandas as pd
import sqlalchemy as sal
import os, sys, openai
import constants
from langchain.sql_database import SQLDatabase
from langchain.llms.openai import OpenAI
from langchain_experimental.sql import SQLDatabaseChain
from sqlalchemy import create_engine
from langchain.prompts import ChatPromptTemplate
# import ChatOpenAI
from langchain.memory import ConversationBufferMemory
# import sql_cmd
from langchain.chat_models import ChatOpenAI
from typing import List, Optional
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.callbacks.manager import CallbackManagerForToolRun
from langchain.chat_models import ChatOpenAI
from langchain_experimental.plan_and_execute import (
PlanAndExecute,
load_agent_executor,
load_chat_planner,
)
from langchain.sql_database import SQLDatabase
from langchain.text_splitter import TokenTextSplitter
from langchain.tools import BaseTool
from langchain.tools.sql_database.tool import QuerySQLDataBaseTool
from secret_key import openapi_key
# from langchain import PromptTemplate
from langchain.prompts import PromptTemplate
# from langchain.prompts.PromptTemplate import PromptTemplate
# from langchain.models import ChatGPTClient
# from langchain.utils import save_conversation
os.environ['OPENAI_API_KEY'] = openapi_key
from urllib.parse import quote_plus
server_name = constants.server_name
database_name = constants.database_name
username = constants.username
password = constants.password
encoded_password = quote_plus(password)
connection_uri = f"mssql+pyodbc://{username}:{encoded_password}@{server_name}/{database_name}?driver=ODBC+Driver+17+for+SQL+Server"
# custom_suffix = """""
# If the SQLResult is empty, the Answer should be "No results found". DO NOT hallucinate an answer if there is no result."""
engine = create_engine(connection_uri)
def chat(question,sql_format):
model_name="gpt-3.5-turbo-16k"
db = SQLDatabase(engine, view_support=True, include_tables=['EGV_emp_departments_ChatGPT'])
# db2 = SQLDatabase(engine, view_support=True, include_tables=['EGV_emp_departments_ChatGPT'])
# db = SQLDatabase(engine, view_support=True, include_tables=['EGV_emp_departments'])
llm = ChatOpenAI(temperature=0, verbose=False, model=model_name)
answer = None
if sql_format==False:
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
answer = db_chain.run(question)
else:
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True , return_sql =True)
sql_query = db_chain.run(question)
print("SQLQuery: "+str(sql_query))
# result = engine.execute(sql_query)
result_df = pd.read_sql(sql_query, engine)
answer = result_df.to_dict()
from langchain.prompts import PromptTemplate
PROMPT = """
Given an input question, first create a syntactically correct mssql query to run,
then look at the results of the query and return the SQLResult as answer.
The question: {db_chain.run}
"""
prompt_template = """" Use the following pieces of context to answer the question at the end.
If you don't know the answer, please think rationally and answer from your own knowledge base.
Don't consider the table which are not mentioned, if no result is matching with the keyword Please return the answer as invalid question
{context}
Question: {questions}
"""
PROMPT = PromptTemplate(
template=prompt_template, input_variables=["context", "questions"]
)
def split_text(text, chunk_size, chunk_overlap=0):
text_splitter = TokenTextSplitter(
chunk_size=chunk_size, chunk_overlap=chunk_overlap
)
yield from text_splitter.split_text(text)
class QuerySQLDatabaseTool2(QuerySQLDataBaseTool):
def _run(
self,
query: str,
run_manager: Optional[CallbackManagerForToolRun] = None,
) -> str:
result = self.db.run_no_throw(query)
return next(split_text(result, chunk_size=14_000))
class SQLDatabaseToolkit2(SQLDatabaseToolkit):
def get_tools(self) -> List[BaseTool]:
tools = super().get_tools()
original_query_tool_description = tools[0].description
new_query_tool = QuerySQLDatabaseTool2(
db=self.db, description=original_query_tool_description
)
tools[0] = new_query_tool
return tools
return answer
def chain1(question):
text = chat(question,False)
return text
def chain2(question):
query = chat(question,True)
    return query
```
It uses:
- "\Lib\site-packages\langchain_experimental\sql\base.py"
- "\Lib\site-packages\langchain_experimental\sql\vector_sql.py"
- "\Lib\site-packages\langchain_experimental\sql\prompt.py"
### Expected behavior
The chatbot should greet the user back (or reply "invalid question") when the input is "hi", instead of returning the entity in the first row and first column. Query ordering should also be consistent across runs, and informal questions should receive a plain conversational reply rather than a SQL result.
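One simple way to get that behavior (a sketch; the greeting set and canned reply are placeholders) is to short-circuit small talk before it ever reaches the SQL chain:

```python
GREETINGS = {"hi", "hello", "hey"}

def answer(question: str) -> str:
    # Handle small talk locally; only real questions go to the SQL chain.
    if question.strip().lower() in GREETINGS:
        return "Hello! How can I help you with the employee data?"
    return chain1(question)  # chain1 as defined above
```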
| In a Chatbot to chat with SQL using openai and langchain, how to integrate the chatbot to make simple conversations | https://api.github.com/repos/langchain-ai/langchain/issues/15587/comments | 7 | 2024-01-05T14:26:07Z | 2024-04-15T16:19:09Z | https://github.com/langchain-ai/langchain/issues/15587 | 2,067,440,671 | 15,587 |
[
"langchain-ai",
"langchain"
] | ### System Info
google-cloud-aiplatform==1.35.0,
langchain-0.0.354
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
template = """
**Role:**
You are a helpful assistant.
**Context:**
You have to only use a reference stored document to generate a response.
CONTEXT: {context}
**Task:**
1. task 1
- some important requirements for task 1
2. task 2
- some important requirements for task 2
3. task 3
- some important requirements for task 3
Question: {question}
Helpful Answer:
**Resource:** [reference source name]
"""
qa_prompt = ChatPromptTemplate.from_messages(
[
("system", template),
MessagesPlaceholder(variable_name="chat_history"),
("user", "{question}"),
]
)
rag_chain = (
RunnablePassthrough.assign(
context=contextualized_question | retriever | format_docs
)
| qa_prompt
| llm
)
response = rag_chain.invoke({"question": message, "chat_history": memory.get_history()})
print(response)
```
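My current post-processing workaround (a sketch; I assume the stray prefix comes from the chat-style prompt being flattened into plain text for the completion model):

```python
def strip_role_prefix(text: str) -> str:
    # Remove a leading "AI:" label that the model sometimes echoes back.
    cleaned = text.strip()
    if cleaned.startswith("AI:"):
        cleaned = cleaned[len("AI:"):].strip()
    return cleaned

response = strip_role_prefix(
    rag_chain.invoke({"question": message, "chat_history": memory.get_history()})
)
```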
### Expected behavior
I expect the response to consist of only the answer text, and sometimes it does. But frequently it returns "\n AI:" in front of the actual response, which it is not supposed to do.
For example,
```
"\n AI: The reference document says blah blah.",
``` | RAG chain response often includes "\n AI:" in front of actual response | https://api.github.com/repos/langchain-ai/langchain/issues/15586/comments | 4 | 2024-01-05T14:12:29Z | 2024-01-16T00:49:32Z | https://github.com/langchain-ai/langchain/issues/15586 | 2,067,418,916 | 15,586 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain = 0.0.354
This problem has appeared since commit 62d32bd.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Commit 62d32bd allowed kwargs to be passed through to ChromaDB. This is really nice, but in my case it leads to an error.
I don't know whether this is intended or a workaround exists, so I am raising the issue.
Here is how to reproduce
``` python
from langchain.vectorstores import Chroma
# load a simple Chroma DB
vectordb = Chroma(persist_directory=my_chroma_db_directory,
embedding_function=my_embedding)
# init a retriever function
retriever = vectordb.as_retriever(search_kwargs={"k": retriever_output_number, "fetch_k": retriever_documents_used, "lambda_mult": retriever_diversity})
# call the retriever in a qa
qa_chain = RetrievalQA.from_chain_type(llm=llm,
chain_type="stuff",
retriever=retriever,
return_source_documents=False,
verbose=True)
# ask something
answer = qa_chain.run("find something in doc ...")
```
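A workaround sketch (assumption: `fetch_k` and `lambda_mult` are MMR-only parameters, so the default "similarity" search forwards them straight to Chroma): request MMR explicitly so those kwargs are consumed by `max_marginal_relevance_search` instead.

```python
retriever = vectordb.as_retriever(
    search_type="mmr",
    search_kwargs={
        "k": retriever_output_number,
        "fetch_k": retriever_documents_used,
        "lambda_mult": retriever_diversity,
    },
)
```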
### Expected behavior
kwargs are needed by the `as_retriever` function; in the example above, the argument `fetch_k` is mandatory:
https://github.com/langchain-ai/langchain/blob/fd5fbb507dd3b1c189aa1e4b601b8669217b0f78/libs/core/langchain_core/vectorstores.py#L553
Since commit 62d32bd, kwargs are passed through to Chroma's `similarity_search_with_score` function, which calls `__query_collection`, leading to the error:
https://github.com/langchain-ai/langchain/blob/fd5fbb507dd3b1c189aa1e4b601b8669217b0f78/libs/community/langchain_community/vectorstores/chroma.py#L408
TypeError: Collection.query() got an unexpected keyword argument 'fetch_k'
Is there a way to avoid that ? | Chroma as_retriever function with kwargs leads to unexpected keyword argument | https://api.github.com/repos/langchain-ai/langchain/issues/15585/comments | 7 | 2024-01-05T13:35:08Z | 2024-06-26T20:11:46Z | https://github.com/langchain-ai/langchain/issues/15585 | 2,067,361,958 | 15,585 |
[
"langchain-ai",
"langchain"
] | ### System Info
Python 3.10.11
Mac M1
Langchain Version: 0.0.353
openai Version: 0.28.0
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.agents.agent_types import AgentType
from langchain.llms.openai import OpenAI
from langchain.sql_database import SQLDatabase

db = SQLDatabase.from_uri("clickhouse+http://clickhouse_admin:P%21%40ssword42%21@localhost:8123/idfyDB")
toolkit = SQLDatabaseToolkit(db=db, llm=OpenAI(temperature=0))

agent_executor = create_sql_agent(
    llm=OpenAI(model_name='gpt-4', temperature=0),
    toolkit=toolkit,
    verbose=True,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
)
agent_executor.run("Describe table sample_tbl")
```
### Expected behavior
The SQLDatabase toolkit should be able to query the ClickHouse database: I have installed and provided the right dialect, and connecting directly via SQLAlchemy works fine. The issue is that LangChain's SQL database toolkit is somehow unable to query the ClickHouse database; it works fine for the Chinook SQLite file. Logs for reference:
> Entering new AgentExecutor chain...
Action: sql_db_list_tables
Action Input:
Observation:
Thought:I need to check the tables in the database to see if there is an "eve_tasks_executed" table.
Action: sql_db_list_tables
Action Input:
Observation:
Thought:Traceback (most recent call last):
File "/Users/vivekkalyanarangan/opt/anaconda3/envs/streaml/lib/python3.10/site-packages/langchain/agents/agent.py", line 1066, in _iter_next_step
output = self.agent.plan(
File "/Users/vivekkalyanarangan/opt/anaconda3/envs/streaml/lib/python3.10/site-packages/langchain/agents/agent.py", line 636, in plan
return self.output_parser.parse(full_output)
File "/Users/vivekkalyanarangan/opt/anaconda3/envs/streaml/lib/python3.10/site-packages/langchain/agents/mrkl/output_parser.py", line 63, in parse
raise OutputParserException(
langchain_core.exceptions.OutputParserException: Could not parse LLM output: `I don't know the answer to the question because I don't have access to the list of tables in the database.`
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
agent_executor.run("Run select * query on eve_tasks_executed table")
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
raise e
self._call(inputs, run_manager=run_manager)
next_step_output = self._take_next_step(
[
[
raise ValueError(
ValueError: An output parsing error occurred. In order to pass this error back to the agent and have it try again, pass `handle_parsing_errors=True` to the AgentExecutor. This is the error: Could not parse LLM output: `I don't know the answer to the question because I don't have access to the list of tables in the database.` [hidden file names]
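As the message suggests, letting the agent retry on parse failures may help; a sketch (my assumption: `create_sql_agent` forwards `agent_executor_kwargs` to the `AgentExecutor` in this release):

```python
agent_executor = create_sql_agent(
    llm=OpenAI(model_name="gpt-4", temperature=0),
    toolkit=toolkit,
    verbose=True,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    agent_executor_kwargs={"handle_parsing_errors": True},
)
```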
| Clickhouse SQL Database Agent | https://api.github.com/repos/langchain-ai/langchain/issues/15584/comments | 2 | 2024-01-05T13:31:32Z | 2024-01-06T07:38:01Z | https://github.com/langchain-ai/langchain/issues/15584 | 2,067,357,036 | 15,584 |
[
"langchain-ai",
"langchain"
] | Hi,
I am using LangChain and llama-cpp-python to do some QA on a text file. When using the llama-2-13b-chat quantized model from [HuggingFace](https://huggingface.co/TheBloke/Llama-2-13B-chat-GGUF/blob/main/llama-2-13b-chat.Q5_K_M.gguf), I am able to create a RetrievalQA chain, passing the vectorstore and prompt, but when I call `chain.run(query)`, it crashes the Anaconda kernel.
I tried using the [7b variant](https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGUF/blob/main/llama-2-7b-chat.Q5_K_M.gguf) and this works fine without any issue.
### Package Version
langchain==0.0.350
langchain-community==0.0.3
langchain-core==0.1.1
langsmith==0.0.70
llama_cpp_python==0.2.19
### Code Snippet
```python
from langchain.text_splitter import RecursiveCharacterTextSplitter, SentenceTransformersTokenTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings, OpenAIEmbeddings, SentenceTransformerEmbeddings
from langchain.vectorstores import FAISS
from langchain.llms import LlamaCpp
from langchain import PromptTemplate
from langchain.chains import RetrievalQA

model_path = "Model Path/llama-2-13b-chat.Q5_K_M.gguf"

llm = LlamaCpp(
    model_path=model_path,
    max_tokens=700,
    f16_kv=True,
    model_kwargs={'context_length': 4000, 'temparature': 0.3},
)

embeddings = HuggingFaceEmbeddings(model_name='sentence-transformers/all-MiniLM-L6-v2', model_kwargs={'device': 'cpu'})
vectorstore = FAISS.load_local(f'Text File', embeddings=embeddings)

template = """
Answer the question using only the context provided to you.
Context: {context}
Question: {question}
"""
qa_prompt = PromptTemplate(template=template, input_variables=["context", "question"])

chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type='stuff',
    retriever=vectorstore.as_retriever(),
    return_source_documents=False,
    chain_type_kwargs={'prompt': qa_prompt},
)
chain.run("Who are the people in the conversation?")
```
### Suggestion:
_No response_ | Issue: ipykernel kernel crashes when using llama-2-13b model | https://api.github.com/repos/langchain-ai/langchain/issues/15583/comments | 1 | 2024-01-05T13:08:21Z | 2024-04-12T16:12:43Z | https://github.com/langchain-ai/langchain/issues/15583 | 2,067,325,836 | 15,583 |
[
"langchain-ai",
"langchain"
] | ### System Info
In langchain_community/vectorstores/azuresearch.py, on line 656, the metadata field name is used explicitly, which leads to an error if the index does not have that field. The suggestion is to replace
`json.loads(result["metadata"]).get("key"), ""),`
with `json.loads(result[FIELDS_METADATA]).get("key"), "") if FIELDS_METADATA in result else "",`
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Configure Azure Search index without metadata field.
2. Execute the following code:
```python
docs = vector_store.semantic_hybrid_search_with_score_and_rerank(query=query, k=3)
```
### Expected behavior
successful execution of the above mentioned code | metadata is not properly processed when the field does not exists | https://api.github.com/repos/langchain-ai/langchain/issues/15581/comments | 1 | 2024-01-05T11:39:32Z | 2024-01-07T01:05:01Z | https://github.com/langchain-ai/langchain/issues/15581 | 2,067,198,112 | 15,581 |
[
"langchain-ai",
"langchain"
] | ### System Info
google-cloud-aiplatform==1.35.0,
langchain-0.0.354
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
class ChatbotHistory:
def __init__(self, max_size=5):
self.buffer = deque(maxlen=max_size)
def add_interaction(self, user_message, bot_response):
# Assuming HumanMessage is a class that stores a message content
self.buffer.append(HumanMessage(content=user_message))
self.buffer.append(bot_response)
def get_history(self):
return list(self.buffer)
def get_history_as_string(self):
history_string = ""
for message in self.buffer:
if isinstance(message, HumanMessage):
history_string += f"User: {message.content}\n"
else: # Assuming bot responses are strings
history_string += f"Bot: {message}\n"
return history_string.strip()
def clear_history(self):
self.buffer.clear()
def format_docs(docs):
return "\n\n".join(doc.page_content for doc in docs)
def contextualized_question(input: dict):
llm = VertexAI(model_name='text-bison@001', max_output_tokens=512, temperature=0.2)
contextualize_q_system_prompt = """Given a chat history and the latest user question \
which might reference context in the chat history, formulate a standalone question \
which can be understood without the chat history. Do NOT answer the question, \
just reformulate it if needed and otherwise return it as is."""
contextualize_q_prompt = ChatPromptTemplate.from_messages(
[
("system", contextualize_q_system_prompt),
MessagesPlaceholder(variable_name="chat_history"),
("human", "{question}"),
]
)
contextualize_q_chain = contextualize_q_prompt | llm | StrOutputParser()
if input.get("chat_history"):
return contextualize_q_chain
else:
return input["question"]
if not message:
message = request.form.get('userInput')
template = """
CONTEXT: {context}
Question: {question}
Helpful Answer:
**Resource:** [reference source name]
"""
# rag_prompt_custom = PromptTemplate.from_template(template)
qa_prompt = ChatPromptTemplate.from_messages(
[
("system", template),
MessagesPlaceholder(variable_name="chat_history"),
("human", "{question}"),
]
)
rag_chain = (
RunnablePassthrough.assign(
context=contextualized_question | temp_retriever | format_docs
)
| qa_prompt
| llm
)
response = rag_chain.invoke({"question": message, "chat_history": memory.get_history()})
memory.add_interaction(message, response)
```
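The buffer stores the raw string response, but `MessagesPlaceholder` requires `BaseMessage` instances; wrapping the response in an `AIMessage` (a sketch of the revised method) should satisfy it:

```python
from langchain_core.messages import AIMessage, HumanMessage

def add_interaction(self, user_message, bot_response):
    self.buffer.append(HumanMessage(content=user_message))
    # Assumption: bot_response is the plain string returned by the LLM.
    self.buffer.append(AIMessage(content=bot_response))
```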
### Expected behavior
I expected `invoke()` to return a string or JSON-formatted response, but it raised an error saying:
```
ValueError: variable chat_history should be a list of base messages, got [HumanMessage(content='input message'), "output response"]
```
| ValueError: variable chat_history should be a list of base messages, got [HumanMessage(content='input message'), "output response"] | https://api.github.com/repos/langchain-ai/langchain/issues/15580/comments | 1 | 2024-01-05T11:34:20Z | 2024-01-05T13:53:05Z | https://github.com/langchain-ai/langchain/issues/15580 | 2,067,191,288 | 15,580 |
[
"langchain-ai",
"langchain"
] | ### Discussed in https://github.com/langchain-ai/langchain/discussions/5701
Originally posted by **rdhillbb**, June 5, 2023:
Newbie here.
I found an issue while importing 'VectorstoreIndexCreator'
ImportError: cannot import name 'URL' from 'sqlalchemy' (/Users/tst/anaconda3/lib/python3.10/site-packages/sqlalchemy/__init__.py)
Error Log:
```
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
Cell In[88], line 2
1 from langchain.docstore.document import Document
----> 2 from langchain.indexes import VectorstoreIndexCreator
3 from langchain_community.utilities import ApifyWrapper
5 apify = ApifyWrapper()
File ~/anaconda3/lib/python3.10/site-packages/langchain/indexes/__init__.py:17
1 """Code to support various indexing workflows.
2
3 Provides code to:
(...)
14 documents that were derived from parent documents by chunking.)
15 """
16 from langchain.indexes._api import IndexingResult, aindex, index
---> 17 from langchain.indexes._sql_record_manager import SQLRecordManager
18 from langchain.indexes.graph import GraphIndexCreator
19 from langchain.indexes.vectorstore import VectorstoreIndexCreator
File ~/anaconda3/lib/python3.10/site-packages/langchain/indexes/_sql_record_manager.py:21
18 import uuid
19 from typing import Any, AsyncGenerator, Dict, Generator, List, Optional, Sequence, Union
---> 21 from sqlalchemy import (
22 URL,
23 Column,
24 Engine,
25 Float,
26 Index,
27 String,
28 UniqueConstraint,
29 and_,
30 create_engine,
31 delete,
32 select,
33 text,
34 )
35 from sqlalchemy.ext.asyncio import (
36 AsyncEngine,
37 AsyncSession,
38 async_sessionmaker,
39 create_async_engine,
40 )
41 from sqlalchemy.ext.declarative import declarative_base
```
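For what it's worth, `URL` only became a top-level `sqlalchemy` export in SQLAlchemy 2.0 (my assumption for the cause), so this import path needs SQLAlchemy >= 2 (`pip install -U "SQLAlchemy>=2"`). A quick check:

```python
import sqlalchemy

# URL/Engine are top-level exports only in 2.x.
print(sqlalchemy.__version__)
```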
Thank You
Vj | Cannot import name 'URL' from 'sqlalchemy' | https://api.github.com/repos/langchain-ai/langchain/issues/15579/comments | 5 | 2024-01-05T11:32:28Z | 2024-05-13T16:09:17Z | https://github.com/langchain-ai/langchain/issues/15579 | 2,067,188,914 | 15,579 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Below is my code:

````python
def generate_custom_prompt(query=None, name=None, not_uuid=None, chroma_db_path=None):
    check = query.lower()
    embedding = OpenAIEmbeddings()
    vectordb = Chroma(persist_directory=chroma_db_path, embedding_function=embedding)
    retriever = vectordb.as_retriever(search_kwargs={"k": 2})
    relevant_document = retriever.get_relevant_documents(query)
    print(relevant_document, "*****************************************")
    context_text = "\n\n---\n\n".join([doc.page_content for doc in relevant_document])
    # print(context_text,"context_text")
    user_experience_inst = UserExperience.objects.get(not_uuid=not_uuid)
    greetings = ['hi', 'hello', 'hey', 'hui', 'hiiii', 'hii', 'hiii', 'heyyy']
    if check in greetings:
        custom_prompt_template = f"""
        Just simply reply with "Hello {name}! How can I assist you today?"
        """
    elif check not in greetings and user_experience_inst.custom_prompt:
        custom_prompt_template = f"""Answer the question based only on following context: ```{context_text} ```
        You are a chatbot designed to provide answers to User's Questions: ```{check}```, delimited by triple backticks.
        Generate your answer to match the user's requirements: {user_experience_inst.custom_prompt}
        If you encounter a question for which you don't know the answer, please respond with 'I don't know' and refrain from making up an answer.
        - Before saying 'I don't know,' please re-verify your vector store to ensure the answer is not present in the database.
        Remember, your goal is to assist the user in the best way possible. If the question is unclear or ambiguous, feel free to ask for clarification.
        User's Question: ```{check}```
        AI Answer:"""
    else:
        # Create the custom prompt template
        custom_prompt_template = f"""Generate your response exclusively from the provided context: {{context_text}}. You function as a chatbot specializing in delivering detailed answers to the User's Question: ```{{check}} ```, enclosed within triple backticks.
        Generate your answer in points in the following format: {{chat_history}}
        1. Point no 1
        1.1 Its subpoint in details
        1.2 More information if needed.
        2. Point no 2
        2.1 Its subpoint in details
        2.2 More information if needed.
        …
        N. Another main point.
        If you encounter a question for which you don't know the answer based on the predefined points, please respond with 'I don't know' and refrain from making up an answer.
        However, if the answer is not present in the predefined points, then provide comprehensive information related to the user's query.
        Remember, your goal is to assist the user in the best way possible. If the question is unclear or ambiguous, you can ask for clarification.
        User's Question: ```{{check}} ```
        AI Answer:"""

    # Create the PromptTemplate
    custom_prompt = PromptTemplate(input_variables=["context_text", "check", "chat_history"], template=custom_prompt_template)
    formatted_prompt = custom_prompt.format(context_text=context_text, check=check)
    # QA_CHAIN_PROMPT = PromptTemplate(input_variables=["context_text", "check"], template=custom_prompt_template)
    llm = ChatOpenAI(temperature=0.1)
    memory = ConversationBufferMemory(memory_key='chat_history', output_key='answer', return_messages=True)
    # qa = ConversationalRetrievalChain.from_llm(llm=llm, memory=memory, chain_type="stuff", retriever=retriever, return_source_documents=True, get_chat_history=lambda h: h, verbose=True)
    qa = ConversationalRetrievalChain.from_llm(llm=llm, memory=memory, chain_type="stuff", retriever=retriever, return_source_documents=True, get_chat_history=lambda h: h, verbose=True, combine_docs_chain_kwargs={"prompt": custom_prompt})
    # prompt_qa = {"qa": qa, "formatted_prompt": formatted_prompt}
    return qa
````
The error I am getting is:
File "/usr/lib/python3.8/string.py", line 272, in get_field
obj = self.get_value(first, args, kwargs)
File "/usr/lib/python3.8/string.py", line 229, in get_value
return kwargs[key]
KeyError: 'chat_history'
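`format()` must receive every declared input variable, so one fix (a sketch; passing an empty history is my assumption about what is acceptable here) is to supply `chat_history` explicitly or bind it ahead of time:

```python
# Either supply it at format time ...
formatted_prompt = custom_prompt.format(
    context_text=context_text, check=check, chat_history=""
)
# ... or bind it once with a partial template.
custom_prompt = custom_prompt.partial(chat_history="")
```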
_No response_
### Suggestion:
_No response_ | Issue:How can I resolve memory with conversation retreival chain error? | https://api.github.com/repos/langchain-ai/langchain/issues/15577/comments | 1 | 2024-01-05T10:33:22Z | 2024-04-12T16:18:52Z | https://github.com/langchain-ai/langchain/issues/15577 | 2,067,106,521 | 15,577 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hello Team,
We are trying to use pypdf to extract text from PDFs and use the chunks for embedding (details are in the attached code snippet). I have installed all the required packages, and everything works fine on my local machine (Windows 10). But with the same code and requirements.txt on Docker, which runs Ubuntu (our production environment), I get the error below:
**Error while chunking the file: Error while chunking the file, Errored while loading the document: `rapidocr-onnxruntime` package not found, please install it with `pip install rapidocr-onnxruntime`**
The strange part is that the **rapidocr-onnxruntime** package is already installed on the Ubuntu system (I re-verified this in the GitHub Actions runner logs, where it installs all the packages from requirements.txt).
I am not able to understand why, in production, pypdf with **extract_images=True** throws the above error.
It will be helpful, if you can provide any insight or workaround to this issue.

### Suggestion:
_No response_ | Issue: Pypdf extract_image=True is not working on docker(production) | https://api.github.com/repos/langchain-ai/langchain/issues/15576/comments | 8 | 2024-01-05T09:42:50Z | 2024-06-12T07:02:00Z | https://github.com/langchain-ai/langchain/issues/15576 | 2,067,029,408 | 15,576 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```python
if file_path.lower().endswith(".xlsx") or file_path.lower().endswith(".xls"):
    loader = UnstructuredExcelLoader(file_path, mode="elements")
    document = loader.load()
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=100, chunk_overlap=10)
    texts = text_splitter.split_documents(documents=document)
```
What can I modify so that I get the expected answer from the corresponding column?
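One approach worth trying (a sketch; my assumption is that 100-character chunks split rows apart and lose the row-to-column relationship) is to build one document per spreadsheet row so each row's columns stay together:

```python
import pandas as pd
from langchain.docstore.document import Document

df = pd.read_excel(file_path)
documents = [
    Document(page_content="; ".join(f"{col}: {row[col]}" for col in df.columns))
    for _, row in df.iterrows()
]
```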
### Suggestion:
_No response_ | Issue: Not able to get the expected answers when asking answer of other column corresponding to other column | https://api.github.com/repos/langchain-ai/langchain/issues/15573/comments | 4 | 2024-01-05T08:49:04Z | 2024-04-12T16:16:29Z | https://github.com/langchain-ai/langchain/issues/15573 | 2,066,955,284 | 15,573 |
[
"langchain-ai",
"langchain"
] | ### System Info
I want to use the news-api tool, and I have these settings for the API key:
```
os.environ["NEWS_API_KEY"] = "9ed***"
tools = load_tools(["news-api"], llm=llm, news_api_key="9ed****", memory=memory)
```
But when the action is executed, the output is:
```
Action: Search
https://newsapi.org/v2/top-headlines?country=us&category=sports&pageSize=2&apiKey=YOUR_API_KEY
Replace `YOUR_API_KEY` with your actual API key from NewsAPI.org to authenticate the request.
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. load news-api tool
2. run the tool in the agent
### Expected behavior
the news-api tool can run successfully | How to set api key for news-api? | https://api.github.com/repos/langchain-ai/langchain/issues/15572/comments | 2 | 2024-01-05T08:21:08Z | 2024-04-12T22:37:11Z | https://github.com/langchain-ai/langchain/issues/15572 | 2,066,921,236 | 15,572 |
[
"langchain-ai",
"langchain"
] | ### System Info
Python 3.11
Langchain 0.0.354
Windows 11
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am getting the exception `httpcore.UnsupportedProtocol: Request URL is missing an 'http://' or 'https://' protocol.` while executing the SQLDatabaseToolkit.
Execute the code below in the described environment:
```python
import os
from dotenv import load_dotenv
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.sql_database import SQLDatabase
from langchain.chat_models import ChatOpenAI
llm_model = "gpt-3.5-turbo"
#load secrets and keys from .env
load_dotenv()
database_user_local = os.getenv("DATABASE_USERNAME_LOCAL")
database_password_local = os.getenv("DATABASE_PASSWORD_LOCAL")
database_server_local = os.getenv("DATABASE_SERVER_LOCAL")
database_db_local = os.getenv("DATABASE_DB_LOCAL")
llm = ChatOpenAI(temperature = 0.0, model_name=llm_model)
connection_string = f"mssql+pymssql://{database_user_local}:{database_password_local}@{database_server_local}/{database_db_local}"
db = SQLDatabase.from_uri(connection_string)
user_id = 118
query = "Select top 5 * from dbo.Users where Id = " + str(user_id)
toolkit = SQLDatabaseToolkit(db=db, llm=llm, reduce_k_below_max_tokens=True)
agent_executor = create_sql_agent(
llm=llm,
toolkit=toolkit,
verbose=True,
input_variables=["query"]
)
agent_executor.run("Get lastlogin from dbo.Users for user_id 118")
```
Error:
```
 Entering new AgentExecutor chain...
Traceback (most recent call last):
File "D:\Program\Python\Python311\Lib\site-packages\httpx\_transports\default.py", line 67, in map_httpcore_exceptions
yield
File "D:\Program\Python\Python311\Lib\site-packages\httpx\_transports\default.py", line 231, in handle_request
resp = self._pool.handle_request(req)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Program\Python\Python311\Lib\site-packages\httpcore\_sync\connection_pool.py", line 215, in handle_request
raise UnsupportedProtocol(
httpcore.UnsupportedProtocol: Request URL is missing an 'http://' or 'https://' protocol.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "D:\Program\Python\Python311\Lib\site-packages\openai\_base_client.py", line 866, in _request
response = self._client.send(request, auth=self.custom_auth, stream=stream)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Program\Python\Python311\Lib\site-packages\httpx\_client.py", line 915, in send
response = self._send_handling_auth(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Program\Python\Python311\Lib\site-packages\httpx\_client.py", line 943, in _send_handling_auth
response = self._send_handling_redirects(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Program\Python\Python311\Lib\site-packages\httpx\_client.py", line 980, in _send_handling_redirects
response = self._send_single_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Program\Python\Python311\Lib\site-packages\httpx\_client.py", line 1016, in _send_single_request
response = transport.handle_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Program\Python\Python311\Lib\site-packages\httpx\_transports\default.py", line 230, in handle_request
with map_httpcore_exceptions():
File "D:\Program\Python\Python311\Lib\contextlib.py", line 155, in __exit__
self.gen.throw(typ, value, traceback)
File "D:\Program\Python\Python311\Lib\site-packages\httpx\_transports\default.py", line 84, in map_httpcore_exceptions
raise mapped_exc(message) from exc
httpx.UnsupportedProtocol: Request URL is missing an 'http://' or 'https://' protocol.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\Program\Python\Python311\Lib\site-packages\httpx\_transports\default.py", line 67, in map_httpcore_exceptions
yield
File "D:\Program\Python\Python311\Lib\site-packages\httpx\_transports\default.py", line 231, in handle_request
resp = self._pool.handle_request(req)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Program\Python\Python311\Lib\site-packages\httpcore\_sync\connection_pool.py", line 215, in handle_request
raise UnsupportedProtocol(
httpcore.UnsupportedProtocol: Request URL is missing an 'http://' or 'https://' protocol.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "D:\Program\Python\Python311\Lib\site-packages\openai\_base_client.py", line 866, in _request
response = self._client.send(request, auth=self.custom_auth, stream=stream)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Program\Python\Python311\Lib\site-packages\httpx\_client.py", line 915, in send
response = self._send_handling_auth(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Program\Python\Python311\Lib\site-packages\httpx\_client.py", line 943, in _send_handling_auth
response = self._send_handling_redirects(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Program\Python\Python311\Lib\site-packages\httpx\_client.py", line 980, in _send_handling_redirects
response = self._send_single_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Program\Python\Python311\Lib\site-packages\httpx\_client.py", line 1016, in _send_single_request
response = transport.handle_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Program\Python\Python311\Lib\site-packages\httpx\_transports\default.py", line 230, in handle_request
with map_httpcore_exceptions():
File "D:\Program\Python\Python311\Lib\contextlib.py", line 155, in __exit__
self.gen.throw(typ, value, traceback)
File "D:\Program\Python\Python311\Lib\site-packages\httpx\_transports\default.py", line 84, in map_httpcore_exceptions`
### Expected behavior
It should execute the SQL agent and return the result. | SQLDatabaseToolkit - httpx.UnsupportedProtocol: Request URL is missing an 'http://' or 'https://' protocol | https://api.github.com/repos/langchain-ai/langchain/issues/15567/comments | 1 | 2024-01-05T04:54:04Z | 2024-04-12T16:19:05Z | https://github.com/langchain-ai/langchain/issues/15567 | 2,066,727,007 | 15,567
[
"langchain-ai",
"langchain"
] | ### System Info
langchain:0.0.353
platform:windows10
python:3.10
I am a beginner with LangChain. I want to use `ConversationTokenBufferMemory` to manually save the context, but an error occurred. My code is as follows:
```python
import os
from lc.api_key import OPENAI_API_KEY
os.environ['OPENAI_API_KEY'] = OPENAI_API_KEY
from langchain.chains import ConversationChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationTokenBufferMemory
llm = ChatOpenAI(temperature=0.0)
memory = ConversationTokenBufferMemory(
llm=llm,
max_token_limit=100
)
memory.save_context(inputs={"input": "how about AI?"}, outputs={"output": "Amazing!"})
memory.save_context({"input": "what is space?"}, {"output": "just like a stage."})
memory.save_context({"input": "what can i do?"}, {"output": "workers of the world, unite!"})
print(memory.load_memory_variables({}))
chain = ConversationChain(
llm=llm,
memory=memory,
verbose=True
)
print(chain.predict(input="what is 1+1?"))
print(chain.predict(input="what is my name?"))
print(chain.predict(input="what can i do?"))
```
---
The problem is that as soon as the code reaches `memory.save_context`, an error is raised. The stack trace is as follows:
```
memory.save_context(inputs={"input": "how about AI?"}, outputs={"output": "Amazing!"})
File "E:\Project\pythonProject\langChain\venv\lib\site-packages\langchain\memory\token_buffer.py", line 54, in save_context
curr_buffer_length = self.llm.get_num_tokens_from_messages(buffer)
File "E:\Project\pythonProject\langChain\venv\lib\site-packages\langchain_community\chat_models\openai.py", line 610, in get_num_tokens_from_messages
model, encoding = self._get_encoding_model()
File "E:\Project\pythonProject\langChain\venv\lib\site-packages\langchain_community\chat_models\openai.py", line 588, in _get_encoding_model
encoding = tiktoken_.encoding_for_model(model)
File "E:\Project\pythonProject\langChain\venv\lib\site-packages\tiktoken\model.py", line 97, in encoding_for_model
return get_encoding(encoding_name_for_model(model_name))
File "E:\Project\pythonProject\langChain\venv\lib\site-packages\tiktoken\registry.py", line 73, in get_encoding
enc = Encoding(**constructor())
File "E:\Project\pythonProject\langChain\venv\lib\site-packages\tiktoken_ext\openai_public.py", line 64, in cl100k_base
mergeable_ranks = load_tiktoken_bpe(
File "E:\Project\pythonProject\langChain\venv\lib\site-packages\tiktoken\load.py", line 117, in load_tiktoken_bpe
return {
File "E:\Project\pythonProject\langChain\venv\lib\site-packages\tiktoken\load.py", line 117, in <dictcomp>
return {
ValueError: too many values to unpack (expected 2)
```
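A hedged workaround sketch: this `ValueError` from `load_tiktoken_bpe` often indicates a corrupted tiktoken download cache, so deleting the cache forces a clean re-download (the default cache location below is an assumption based on tiktoken's behavior):
```python
import os
import shutil
import tempfile

cache_dir = os.environ.get("TIKTOKEN_CACHE_DIR") or os.path.join(
    tempfile.gettempdir(), "data-gym-cache"
)
if os.path.isdir(cache_dir):
    shutil.rmtree(cache_dir)  # tiktoken re-downloads the BPE files on next use
```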
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import os
from lc.api_key import OPENAI_API_KEY
os.environ['OPENAI_API_KEY'] = OPENAI_API_KEY
from langchain.chains import ConversationChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationTokenBufferMemory
llm = ChatOpenAI(temperature=0.0)
memory = ConversationTokenBufferMemory(
llm=llm,
max_token_limit=100
)
memory.save_context(inputs={"input": "how about AI?"}, outputs={"output": "Amazing!"})
memory.save_context({"input": "what is space?"}, {"output": "just like a stage."})
memory.save_context({"input": "what can i do?"}, {"output": "workers of the world, unite!"})
print(memory.load_memory_variables({}))
chain = ConversationChain(
llm=llm,
memory=memory,
verbose=True
)
print(chain.predict(input="what is 1+1?"))
print(chain.predict(input="what is my name?"))
print(chain.predict(input="what can i do?"))
```
### Expected behavior
```
Traceback (most recent call last):
File "E:\Project\pythonProject\langChain\lc\memory\ConversationTokenBufferMemoryUseCaseScript.py", line 14, in <module>
memory.save_context(inputs={"input": "how about AI?"}, outputs={"output": "Amazing!"})
File "E:\Project\pythonProject\langChain\venv\lib\site-packages\langchain\memory\token_buffer.py", line 54, in save_context
curr_buffer_length = self.llm.get_num_tokens_from_messages(buffer)
File "E:\Project\pythonProject\langChain\venv\lib\site-packages\langchain_community\chat_models\openai.py", line 610, in get_num_tokens_from_messages
model, encoding = self._get_encoding_model()
File "E:\Project\pythonProject\langChain\venv\lib\site-packages\langchain_community\chat_models\openai.py", line 588, in _get_encoding_model
encoding = tiktoken_.encoding_for_model(model)
File "E:\Project\pythonProject\langChain\venv\lib\site-packages\tiktoken\model.py", line 97, in encoding_for_model
return get_encoding(encoding_name_for_model(model_name))
File "E:\Project\pythonProject\langChain\venv\lib\site-packages\tiktoken\registry.py", line 73, in get_encoding
enc = Encoding(**constructor())
File "E:\Project\pythonProject\langChain\venv\lib\site-packages\tiktoken_ext\openai_public.py", line 64, in cl100k_base
mergeable_ranks = load_tiktoken_bpe(
File "E:\Project\pythonProject\langChain\venv\lib\site-packages\tiktoken\load.py", line 124, in load_tiktoken_bpe
return {
File "E:\Project\pythonProject\langChain\venv\lib\site-packages\tiktoken\load.py", line 124, in <dictcomp>
return {
ValueError: too many values to unpack (expected 2)
Process finished with exit code 1
```
| ValueError: too many values to unpack (expected 2) | https://api.github.com/repos/langchain-ai/langchain/issues/15564/comments | 1 | 2024-01-05T03:05:15Z | 2024-04-12T16:16:44Z | https://github.com/langchain-ai/langchain/issues/15564 | 2,066,656,658 | 15,564 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I'm using LangChain 0.0.354 and ChatOpenAI. I want to use the OpenAI API parameter `n` to return `n` completions. However, ChatOpenAI always returns a single output. Ultimately, I would like to build my chain using LCEL as follows: `chain = prompt | ChatOpenAI(n=10) | MyCustomParser`. Can someone help me achieve this?
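A hedged sketch of one way to do this with LCEL: `generate_n` is a hypothetical helper wrapped in `RunnableLambda`; it calls `llm.generate`, whose `generations[0]` holds the `n` candidates:
```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableLambda

llm = ChatOpenAI(n=10, temperature=0.9)
prompt = ChatPromptTemplate.from_template("Suggest a name for a {product} company")

def generate_n(prompt_value):
    # generate() returns an LLMResult; generations[0] lists the n candidates
    result = llm.generate([prompt_value.to_messages()])
    return [g.text for g in result.generations[0]]

chain = prompt | RunnableLambda(generate_n)
names = chain.invoke({"product": "tea"})  # a list of 10 strings
```
A custom parser could then be appended after the lambda in the same way.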
### Suggestion:
I think it would be nice to return a list of strings by default if n > 1. | Issue: How to use "n" completions with LCEL | https://api.github.com/repos/langchain-ai/langchain/issues/15560/comments | 1 | 2024-01-04T21:31:34Z | 2024-04-11T16:14:15Z | https://github.com/langchain-ai/langchain/issues/15560 | 2,066,364,259 | 15,560 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am trying to run the code below:
```python
import requests
import json
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain_community.llms import OpenAI
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(temperature=0, model='gpt-3.5-turbo', openai_api_key="...")

token = '...'
tools = load_tools(
    ["graphql"],
    custom_headers={"Authorization": token, "Content-Type": "application/json", "API-Version": "2024-01"},
    graphql_endpoint="https://api.monday.com/v2",
    llm=llm, fetch_schema_from_transport=False
)

agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)

graphql_fields = """
mutation{
  create_item (board_id: 3573920662, item_name: "New ItemX"){
    id
    name
  }
}
"""

suffix = "Create the item specified"

print(suffix + graphql_fields)

agent.run(suffix + graphql_fields)
```
But I keep getting the error:
```
TransportQueryError: Error while fetching schema: Not Authenticated
If you don't need the schema, you can try with: "fetch_schema_from_transport=False"
```
The authorization is correct (the token and API key are redacted here) and so is the endpoint. How can I fix this?
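A hedged workaround sketch, assuming the failure happens in the schema-introspection step performed during tool loading: query the endpoint with `gql` directly, skip schema fetching, and expose that as a custom tool (`token` is the same placeholder as above):
```python
from gql import Client, gql
from gql.transport.requests import RequestsHTTPTransport
from langchain.agents import Tool

transport = RequestsHTTPTransport(
    url="https://api.monday.com/v2",
    headers={"Authorization": token, "Content-Type": "application/json", "API-Version": "2024-01"},
)
client = Client(transport=transport, fetch_schema_from_transport=False)

def run_graphql(query: str) -> str:
    return str(client.execute(gql(query)))

graphql_tool = Tool(
    name="graphql",
    func=run_graphql,
    description="Execute a GraphQL query against the monday.com API",
)
```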
### Suggestion:
_No response_ | Authentication error | https://api.github.com/repos/langchain-ai/langchain/issues/15555/comments | 1 | 2024-01-04T19:48:30Z | 2024-04-11T16:22:16Z | https://github.com/langchain-ai/langchain/issues/15555 | 2,066,241,880 | 15,555
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```
from langchain.vectorstores.pgvector import PGVector, DistanceStrategy
db = PGVector.from_documents(
documents= docs,
embedding = embeddings,
collection_name= "blog_posts",
distance_strategy = DistanceStrategy.COSINE,
connection_string=CONNECTION_STRING
)
```
This code creates its tables in the `public` schema. How can I specify a custom schema other than `public`?
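A hedged workaround, assuming the installed PGVector version exposes no schema parameter: set the PostgreSQL `search_path` through the connection string so the tables are created in a pre-existing custom schema (`my_schema` is a placeholder):
```python
CONNECTION_STRING = (
    "postgresql+psycopg2://user:password@host:5432/dbname"
    "?options=-csearch_path%3Dmy_schema"
)
```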
### Suggestion:
_No response_ | How to specify a custom schema in PGVector.from_documents? | https://api.github.com/repos/langchain-ai/langchain/issues/15553/comments | 2 | 2024-01-04T19:08:46Z | 2024-06-16T16:07:39Z | https://github.com/langchain-ai/langchain/issues/15553 | 2,066,194,527 | 15,553 |
[
"langchain-ai",
"langchain"
] | ### System Info
Error stack
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
File <command-3066972537097411>, line 1
----> 1 issue_recommendation(
2 review_title="Terrible",
3 review_text="This baking sheet is terrible. It stains so easily and i've tried everything to get it clean. I've maybe used it 5 times and it looks like it's 20 years old. The side of the pan also hold water, so when you pick it up off the drying rack, water runs out. I would never purchase these again.",
4 product_num="8888999"
5
6 )
File <command-3066972537097410>, line 44, in issue_recommendation(review_title, review_text, product_num)
36 retriever = vectordb.as_retriever(search_type="similarity", search_kwargs={'filter': {'product_num': product_num}})
38 retrieval_chain = (
39 {"context": retriever | format_docs, "review_text": RunnablePassthrough()}
40 | rag_prompt
41 | llm
42 | StrOutputParser()
43 )
---> 44 return retrieval_chain.invoke({"review_title":review_title, "review_text": review_text})
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-65a09d8c-062d-4f4f-9c52-1bf534f6511e/lib/python3.10/site-packages/langchain_core/runnables/base.py:1762, in RunnableSequence.invoke(self, input, config)
1760 try:
1761 for i, step in enumerate(self.steps):
-> 1762 input = step.invoke(
1763 input,
1764 # mark each step as a child run
1765 patch_config(
1766 config, callbacks=run_manager.get_child(f"seq:step:{i+1}")
1767 ),
1768 )
1769 # finish the root run
1770 except BaseException as e:
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-65a09d8c-062d-4f4f-9c52-1bf534f6511e/lib/python3.10/site-packages/langchain_core/runnables/base.py:2327, in RunnableParallel.invoke(self, input, config)
2314 with get_executor_for_config(config) as executor:
2315 futures = [
2316 executor.submit(
2317 step.invoke,
(...)
2325 for key, step in steps.items()
2326 ]
-> 2327 output = {key: future.result() for key, future in zip(steps, futures)}
2328 # finish the root run
2329 except BaseException as e:
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-65a09d8c-062d-4f4f-9c52-1bf534f6511e/lib/python3.10/site-packages/langchain_core/runnables/base.py:2327, in <dictcomp>(.0)
2314 with get_executor_for_config(config) as executor:
2315 futures = [
2316 executor.submit(
2317 step.invoke,
(...)
2325 for key, step in steps.items()
2326 ]
-> 2327 output = {key: future.result() for key, future in zip(steps, futures)}
2328 # finish the root run
2329 except BaseException as e:
File /usr/lib/python3.10/concurrent/futures/_base.py:451, in Future.result(self, timeout)
449 raise CancelledError()
450 elif self._state == FINISHED:
--> 451 return self.__get_result()
453 self._condition.wait(timeout)
455 if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:
File /usr/lib/python3.10/concurrent/futures/_base.py:403, in Future.__get_result(self)
401 if self._exception:
402 try:
--> 403 raise self._exception
404 finally:
405 # Break a reference cycle with the exception in self._exception
406 self = None
File /usr/lib/python3.10/concurrent/futures/thread.py:58, in _WorkItem.run(self)
55 return
57 try:
---> 58 result = self.fn(*self.args, **self.kwargs)
59 except BaseException as exc:
60 self.future.set_exception(exc)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-65a09d8c-062d-4f4f-9c52-1bf534f6511e/lib/python3.10/site-packages/langchain_core/runnables/base.py:1762, in RunnableSequence.invoke(self, input, config)
1760 try:
1761 for i, step in enumerate(self.steps):
-> 1762 input = step.invoke(
1763 input,
1764 # mark each step as a child run
1765 patch_config(
1766 config, callbacks=run_manager.get_child(f"seq:step:{i+1}")
1767 ),
1768 )
1769 # finish the root run
1770 except BaseException as e:
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-65a09d8c-062d-4f4f-9c52-1bf534f6511e/lib/python3.10/site-packages/langchain_core/retrievers.py:121, in BaseRetriever.invoke(self, input, config)
117 def invoke(
118 self, input: str, config: Optional[RunnableConfig] = None
119 ) -> List[Document]:
120 config = ensure_config(config)
--> 121 return self.get_relevant_documents(
122 input,
123 callbacks=config.get("callbacks"),
124 tags=config.get("tags"),
125 metadata=config.get("metadata"),
126 run_name=config.get("run_name"),
127 )
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-65a09d8c-062d-4f4f-9c52-1bf534f6511e/lib/python3.10/site-packages/langchain_core/retrievers.py:223, in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)
221 except Exception as e:
222 run_manager.on_retriever_error(e)
--> 223 raise e
224 else:
225 run_manager.on_retriever_end(
226 result,
227 **kwargs,
228 )
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-65a09d8c-062d-4f4f-9c52-1bf534f6511e/lib/python3.10/site-packages/langchain_core/retrievers.py:216, in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)
214 _kwargs = kwargs if self._expects_other_args else {}
215 if self._new_arg_supported:
--> 216 result = self._get_relevant_documents(
217 query, run_manager=run_manager, **_kwargs
218 )
219 else:
220 result = self._get_relevant_documents(query, **_kwargs)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-65a09d8c-062d-4f4f-9c52-1bf534f6511e/lib/python3.10/site-packages/langchain_core/vectorstores.py:654, in VectorStoreRetriever._get_relevant_documents(self, query, run_manager)
650 def _get_relevant_documents(
651 self, query: str, *, run_manager: CallbackManagerForRetrieverRun
652 ) -> List[Document]:
653 if self.search_type == "similarity":
--> 654 docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
655 elif self.search_type == "similarity_score_threshold":
656 docs_and_similarities = (
657 self.vectorstore.similarity_search_with_relevance_scores(
658 query, **self.search_kwargs
659 )
660 )
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-65a09d8c-062d-4f4f-9c52-1bf534f6511e/lib/python3.10/site-packages/langchain_community/vectorstores/chroma.py:348, in Chroma.similarity_search(self, query, k, filter, **kwargs)
331 def similarity_search(
332 self,
333 query: str,
(...)
336 **kwargs: Any,
337 ) -> List[Document]:
338 """Run similarity search with Chroma.
339
340 Args:
(...)
346 List[Document]: List of documents most similar to the query text.
347 """
--> 348 docs_and_scores = self.similarity_search_with_score(
349 query, k, filter=filter, **kwargs
350 )
351 return [doc for doc, _ in docs_and_scores]
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-65a09d8c-062d-4f4f-9c52-1bf534f6511e/lib/python3.10/site-packages/langchain_community/vectorstores/chroma.py:437, in Chroma.similarity_search_with_score(self, query, k, filter, where_document, **kwargs)
429 results = self.__query_collection(
430 query_texts=[query],
431 n_results=k,
(...)
434 **kwargs,
435 )
436 else:
--> 437 query_embedding = self._embedding_function.embed_query(query)
438 results = self.__query_collection(
439 query_embeddings=[query_embedding],
440 n_results=k,
(...)
443 **kwargs,
444 )
446 return _results_to_docs_and_scores(results)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-65a09d8c-062d-4f4f-9c52-1bf534f6511e/lib/python3.10/site-packages/langchain_community/embeddings/openai.py:691, in OpenAIEmbeddings.embed_query(self, text)
682 def embed_query(self, text: str) -> List[float]:
683 """Call out to OpenAI's embedding endpoint for embedding query text.
684
685 Args:
(...)
689 Embedding for the text.
690 """
--> 691 return self.embed_documents([text])[0]
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-65a09d8c-062d-4f4f-9c52-1bf534f6511e/lib/python3.10/site-packages/langchain_community/embeddings/openai.py:662, in OpenAIEmbeddings.embed_documents(self, texts, chunk_size)
659 # NOTE: to keep things simple, we assume the list may contain texts longer
660 # than the maximum context and use length-safe embedding function.
661 engine = cast(str, self.deployment)
--> 662 return self._get_len_safe_embeddings(texts, engine=engine)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-65a09d8c-062d-4f4f-9c52-1bf534f6511e/lib/python3.10/site-packages/langchain_community/embeddings/openai.py:465, in OpenAIEmbeddings._get_len_safe_embeddings(self, texts, engine, chunk_size)
459 if self.model.endswith("001"):
460 # See: https://github.com/openai/openai-python/
461 # issues/418#issuecomment-1525939500
462 # replace newlines, which can negatively affect performance.
463 text = text.replace("\n", " ")
--> 465 token = encoding.encode(
466 text=text,
467 allowed_special=self.allowed_special,
468 disallowed_special=self.disallowed_special,
469 )
471 # Split tokens into chunks respecting the embedding_ctx_length
472 for j in range(0, len(token), self.embedding_ctx_length):
File /databricks/python/lib/python3.10/site-packages/tiktoken/core.py:116, in Encoding.encode(self, text, allowed_special, disallowed_special)
114 if not isinstance(disallowed_special, frozenset):
115 disallowed_special = frozenset(disallowed_special)
--> 116 if match := _special_token_regex(disallowed_special).search(text):
117 raise_disallowed_special_token(match.group())
119 try:
TypeError: expected string or buffer
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am trying to create a RAG pattern using product manuals in PDF, which are split, indexed, and stored in a Chroma vector store persisted on disk. When I call the function that classifies reviews using the document context, I get the error pasted above. Here is the code:
```
from langchain import PromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
from langchain.embeddings import AzureOpenAIEmbeddings
from langchain.chat_models import AzureChatOpenAI
from langchain.vectorstores import Chroma
llm = AzureChatOpenAI(
azure_deployment="ChatGPT-16K",
openai_api_version="2023-05-15",
azure_endpoint=endpoint,
api_key=result["access_token"],
temperature=0,
seed = 100
)
embedding_model = AzureOpenAIEmbeddings(
api_version="2023-05-15",
azure_endpoint=endpoint,
api_key=result["access_token"],
azure_deployment="ada002",
)
vectordb = Chroma(
persist_directory=vector_db_path,
embedding_function=embedding_model,
collection_name="product_manuals",
)
def format_docs(docs):
return "\n\n".join(doc.page_content for doc in docs)
def classify (review_title, review_text, product_num):
template = """
You are a customer service AI Assistant that handles responses to negative product reviews.
Use the context below and categorize {review_title} and {review_text} into defect, misuse or poor quality categories based only on provided context. If you don't know, say that you do not know, don't try to make up an answer. Respond back with an answer in the following format:
poor quality
misuse
defect
{context}
Category:
"""
rag_prompt = PromptTemplate.from_template(template)
retriever = vectordb.as_retriever(search_type="similarity", search_kwargs={'filter': {'product_num': product_num}})
retrieval_chain = (
{"context": retriever | format_docs, "review_title: RunnablePassthrough(), "review_text": RunnablePassthrough()}
| rag_prompt
| llm
| StrOutputParser()
)
return retrieval_chain.invoke({"review_title": review_title, "review_text": review_text})
classify(review_title="Terrible", review_text ="This baking sheet is terrible. It stains so easily and i've tried everything to get it clean", product_num ="8888999")
```
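A hedged sketch of a likely fix: `RunnablePassthrough()` forwards the whole input dict, so the retriever (and ultimately `embed_query`) receives a dict instead of a string; routing each field with `itemgetter` avoids that:
```python
from operator import itemgetter

retrieval_chain = (
    {
        "context": itemgetter("review_text") | retriever | format_docs,
        "review_title": itemgetter("review_title"),
        "review_text": itemgetter("review_text"),
    }
    | rag_prompt
    | llm
    | StrOutputParser()
)
retrieval_chain.invoke({"review_title": review_title, "review_text": review_text})
```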
### Expected behavior
Embeddings seem to work fine when I test them directly. The chain also works when I remove the context and retriever, so the problem seems to be related to embeddings. The examples on the LangChain [website](https://python.langchain.com/docs/use_cases/question_answering/sources) instantiate the retriever from `Chroma.from_documents()`, whereas I load the Chroma vector store from a persisted path. I also tried invoking with `review_text` only (instead of review title and review text), but the error persists. Not sure why this is happening. These are the package versions I am working with:
Name: openai Version: 1.6.1
Name: langchain Version: 0.0.354 | TypeError: expected string or buffer | https://api.github.com/repos/langchain-ai/langchain/issues/15552/comments | 2 | 2024-01-04T19:02:22Z | 2024-06-08T16:08:45Z | https://github.com/langchain-ai/langchain/issues/15552 | 2,066,185,557 | 15,552
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I have a large agent with lots of memory/observations, but initializing it takes too much time. Is there a way to save the memory and load it again? What's the best way to achieve this? Ideally I would like to reuse the vector store for memory and then, for each new user, save/load the memory/conversations specific to that agent/user connection.
I'm using the examples in the cookbook here: https://github.com/langchain-ai/langchain/blob/master/cookbook/generative_agents_interactive_simulacra_of_human_behavior.ipynb
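A hedged sketch of one way to persist and restore the memory; the attribute names follow the cookbook setup (FAISS plus `TimeWeightedVectorStoreRetriever`), and pickling the `memory_stream` documents is an assumption that works for local persistence:
```python
import pickle

from langchain.retrievers import TimeWeightedVectorStoreRetriever
from langchain.vectorstores import FAISS

# save after a session
agent_memory.memory_retriever.vectorstore.save_local("agent_faiss_index")
with open("agent_memory_stream.pkl", "wb") as f:
    pickle.dump(agent_memory.memory_retriever.memory_stream, f)

# restore for the next session (embeddings_model is whatever the index was built with)
vectorstore = FAISS.load_local("agent_faiss_index", embeddings_model)
retriever = TimeWeightedVectorStoreRetriever(
    vectorstore=vectorstore, other_score_keys=["importance"], k=15
)
with open("agent_memory_stream.pkl", "rb") as f:
    retriever.memory_stream = pickle.load(f)
```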
### Suggestion:
_No response_ | Question: Using TimeWeightedVectorStoreRetriever and GenerativeAgentMemory is there a way to save the memory and load it again? | https://api.github.com/repos/langchain-ai/langchain/issues/15549/comments | 3 | 2024-01-04T16:39:30Z | 2024-01-04T16:59:31Z | https://github.com/langchain-ai/langchain/issues/15549 | 2,065,981,904 | 15,549 |
[
"langchain-ai",
"langchain"
] | ### System Info
I try to load a PDF with `PyPDFDirectoryLoader` from `langchain.document_loaders` and get this warning: `WARNING:pypdf._reader:incorrect startxref pointer(3)`
```python
from langchain.document_loaders import PyPDFDirectoryLoader
from langchain_community.document_loaders import PyPDFLoader

loader = PyPDFDirectoryLoader("/content/pdfs/Carina Lueschen Masterarbeit Ryan Trecartin (1).pdf")
pages = loader.load_and_split()
```
This returns an empty array along with the warning.
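Worth noting (a hedged observation): `PyPDFDirectoryLoader` expects a directory path, while a single file matches `PyPDFLoader`; a sketch of both:
```python
from langchain_community.document_loaders import PyPDFDirectoryLoader, PyPDFLoader

# single file -> PyPDFLoader
single = PyPDFLoader("/content/pdfs/Carina Lueschen Masterarbeit Ryan Trecartin (1).pdf")
pages = single.load_and_split()

# folder of PDFs -> PyPDFDirectoryLoader
folder = PyPDFDirectoryLoader("/content/pdfs/")
docs = folder.load()
```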
### Who can help?
@hwchase17 @agola11 @sbusso
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Just load a PDF that contains images.
### Expected behavior
The output should be an array of the PDF's page documents. | WARNING:pypdf._reader:incorrect startxref pointer(3) | https://api.github.com/repos/langchain-ai/langchain/issues/15548/comments | 4 | 2024-01-04T16:25:57Z | 2024-04-12T16:12:41Z | https://github.com/langchain-ai/langchain/issues/15548 | 2,065,959,358 | 15,548
[
"langchain-ai",
"langchain"
] | ### System Info
python==3.10
langchain==0.0.326
langdetect==1.0.9
langsmith==0.0.54
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Define a `GraphCypherQAChain` that queries a Neo4j graph DB, constructs a Cypher query, and returns an answer to the user's question, with `return_source_documents` set to `True`:
```python
graph = Neo4jGraph(
    url=NEO4J_URL, username=NEO4J_USERNAME, password=NEO4J_PASSWORD
)

EXAMPLES_PROMPT_TEMPLATE = """
Input: {db_question},
Output: {db_query}
"""
example_prompt = PromptTemplate(input_variables=["db_question", "db_query"], template=EXAMPLES_PROMPT_TEMPLATE)

example_selector = MaxMarginalRelevanceExampleSelector.from_examples(
    # This is the list of examples available to select from.
    query_examples,
    # This is the embedding class used to produce embeddings which are used to measure semantic similarity.
    OpenAIEmbeddings(),
    # This is the VectorStore class that is used to store the embeddings and do a similarity search over.
    Chroma,
    # This is the number of examples to produce.
    k=2
)

prompt_cypher = FewShotPromptTemplate(
    example_selector=example_selector,
    example_prompt=example_prompt,
    suffix="The question is:\n{question}",
    prefix="""Task: Generate a Cypher query to query a graph database based on the user's question.
Instructions:
Use the provided schema for node types, relationship types, and properties in the graph database. Only incorporate these defined elements.
Avoid utilizing any other node types, relationship types, or properties not present in the provided schema. Here's the schema definition:
{schema}
if the question matches one of the sample questions in the knowledge base then just return the query used to answer it.
if the user asks to retrieve a piece of information about a document or section given their name, then use a `WHERE` statement
and a cypher regular expression matching without case sensitivity like in the queries in your knowledge base when filtering by the name.
Use the statement `(t:transaction)-[:CONTAINS*]->(s)` in the cypher query
with the `*` sign next to the relationship label `CONTAINS` and where s is the section node you are looking for.
Ensure the generated query captures relevant information from the graph database without reducing the retrieved data due to variations in user wording.
Note: do not include any explanations or apologies in your responses.
Do not respond to inquiries seeking information other than the construction of a Cypher statement.
Do not include any text except the generated Cypher statement.
""",
    input_variables=["schema", "question"]
)

QA_GENERATION_TEMPLATE = """
Task: answer the question you are given based on the context provided.
Instructions:
You are an assistant that helps to form nice and human understandable answers.
Use the context information provided to generate a well organized and comprehensive answer to the user's question.
When the provided information contains multiple elements, structure your answer as a bulleted or numbered list to enhance clarity and readability.
You must use the information to construct your answer.
The provided information is authoritative; do not doubt it or try to use your internal knowledge to correct it.
Make the answer sound like a response to the question without mentioning that you based the result on the given information.
Here's the information:
{context}
Question: {question}
Answer:
"""
prompt_qa = PromptTemplate(input_variables=["context", "question"], template=QA_GENERATION_TEMPLATE)

chain = GraphCypherQAChain.from_llm(
    cypher_llm=ChatOpenAI(temperature=0, model="gpt-4"),
    qa_llm=ChatOpenAI(temperature=0, model="gpt-4"),
    graph=graph,
    verbose=True,
    return_intermediate_steps=True,
    return_source_documents=True,
    validate_cypher=True,
    cypher_prompt=prompt_cypher,
    qa_prompt=prompt_qa
)
```
2. Run the chain and retrieve the source docs used to answer the question:
```python
res = chain({"query": "..."})  # chatbot response
answer = res['result']
print("source_documents" in res)
print(res.get("source_documents"))
```
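A hedged workaround in the meantime: with `return_intermediate_steps=True`, the generated Cypher and the raw graph context are available and can stand in for source documents (the step layout below is an assumption based on the chain's behavior):
```python
res = chain({"query": "..."})
steps = res["intermediate_steps"]
generated_cypher = steps[0]["query"]  # the Cypher statement that was executed
context_rows = steps[1]["context"]    # the graph rows used to build the answer
```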
### Expected behavior
The output is expected to include the source files that were queried, or a similar output indicating the high-level graph elements used to construct the context passed in the prompt. | GraphCypherQAChain doesn't support returning source documents with `return_source_documents` param like the `BaseQAWithSourcesChain` chains | https://api.github.com/repos/langchain-ai/langchain/issues/15543/comments | 3 | 2024-01-04T14:32:14Z | 2024-04-17T16:33:13Z | https://github.com/langchain-ai/langchain/issues/15543 | 2,065,766,207 | 15,543
[
"langchain-ai",
"langchain"
] | ### System Info
google-cloud-bigquery = "^3.14.1"
google-api-core = "^2.15.0"
google-cloud-core = "^2.4.1"
grpcio = "^1.60.0"
grpcio-tools = "^1.60.0"
langchain-google-genai = "^0.0.5"
langchain-core = "^0.1.5"
google-cloud-aiplatform = "^1.38.1"
langchain-community = "^0.0.8"
### Who can help?
When testing BigQuery Vector Search as in the website demo (https://python.langchain.com/docs/integrations/vectorstores/bigquery_vector_search),
it raised `Not found: Table xxx.xxx.INFORMATION_SCHEMA.VECTOR_INDEXES was not found in location US` in Google BigQuery.
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The procedure steps are on the official LangChain demo page: https://python.langchain.com/docs/integrations/vectorstores/bigquery_vector_search
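For completeness, a hedged condensation of the demo steps (project and dataset names are placeholders):
```python
from langchain_community.embeddings import VertexAIEmbeddings
from langchain_community.vectorstores import BigQueryVectorSearch

embedding = VertexAIEmbeddings(
    model_name="textembedding-gecko@latest", project="my-project"
)
store = BigQueryVectorSearch(
    project_id="my-project",
    dataset_name="my_dataset",
    table_name="doc_and_vectors",
    location="US",
    embedding=embedding,
)
store.add_texts(["BigQuery is serverless"], metadatas=[{"len": 21}])
docs = store.similarity_search("serverless data warehouse")
```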
### Expected behavior
It is the official LangChain demo page, so the result should be shown instead of an error.
Probably something went wrong on the BigQuery side. Please help, thanks a lot! | INFORMATION_SCHEMA.VECTOR_INDEXES was not found in google big-query | https://api.github.com/repos/langchain-ai/langchain/issues/15538/comments | 5 | 2024-01-04T11:51:00Z | 2024-04-16T16:18:40Z | https://github.com/langchain-ai/langchain/issues/15538 | 2,065,516,520 | 15,538
[
"langchain-ai",
"langchain"
] | ### Feature request
When using asynchronous loading with the `RecursiveUrlLoader`, it would be nice to be able to set a limit on the number of parallel HTTP requests when scraping a website.
Right now, when using async loading it is very likely to get errors like the following:
```
04-01-24 12:02:53 [WARNING] recursive_url_loader.py: Unable to load https://docs.llamaindex.ai/en/stable/module_guides/querying/querying.html.
Received error Cannot connect to host docs.llamaindex.ai:443 ssl:default [Network is unreachable] of type ClientConnectorError
```
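For illustration, a generic sketch of the `asyncio.Semaphore` pattern the proposal is based on (not the actual patch):
```python
import asyncio
import aiohttp

semaphore = asyncio.Semaphore(10)  # at most 10 requests in flight

async def fetch(session: aiohttp.ClientSession, url: str) -> str:
    async with semaphore:  # waits while 10 tasks are already running
        async with session.get(url) as response:
            return await response.text()

async def fetch_all(urls: list[str]) -> list[str]:
    async with aiohttp.ClientSession() as session:
        return await asyncio.gather(*(fetch(session, u) for u in urls))
```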
### Motivation
This feature would:
- Reduce the probability of receiving an error due to the excessive number of requests.
- Make the loader more robust.
### Your contribution
I have already implemented and tested a solution using `asyncio.Semaphore`. This class allows setting a limit on the maximum number of parallel tasks that can run in a program. | feat: limit the number of concurrent requests in the RecursiveUrlLoader | https://api.github.com/repos/langchain-ai/langchain/issues/15536/comments | 1 | 2024-01-04T11:08:15Z | 2024-04-11T16:14:09Z | https://github.com/langchain-ai/langchain/issues/15536 | 2,065,453,597 | 15,536
[
"langchain-ai",
"langchain"
] | ### System Info
Python version: 3.9.7
Langchain version: 0.0.352
Argilla version: 1.20.0
### Who can help?
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
`ArgillaCallbackHandler` does not set the `DEFAULT_API_KEY` properly while initializing, which might cause problems with some setups. Link to the line: https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/callbacks/argilla_callback.py#L137
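A hedged workaround until this is fixed: pass the key explicitly instead of relying on the default (the values below are placeholders):
```python
from langchain_community.callbacks.argilla_callback import ArgillaCallbackHandler

handler = ArgillaCallbackHandler(
    dataset_name="my-dataset",
    api_url="https://my-argilla-instance.example",
    api_key="argilla.apikey",
)
```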
### Expected behavior
The `DEFAULT_API_KEY` should be set correctly. | ArgillaCallback doesn't properly set DEFAULT_API_KEY | https://api.github.com/repos/langchain-ai/langchain/issues/15531/comments | 1 | 2024-01-04T09:53:47Z | 2024-04-11T16:18:21Z | https://github.com/langchain-ai/langchain/issues/15531 | 2,065,338,280 | 15,531
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.351
langchain-community 0.0.4
python 3.9
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain_community.chat_models.huggingface import ChatHuggingFace
```
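A hedged check: `ChatHuggingFace` landed in a newer `langchain-community` release than 0.0.4, so upgrading may be all that is needed (usage sketch below; the model id is a placeholder):
```python
# pip install -U langchain-community
from langchain_community.chat_models.huggingface import ChatHuggingFace
from langchain_community.llms import HuggingFaceHub

llm = HuggingFaceHub(repo_id="HuggingFaceH4/zephyr-7b-beta")
chat = ChatHuggingFace(llm=llm)
```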
### Expected behavior
I was testing the HuggingFace chat wrapper but couldn't import `ChatHuggingFace`. Has `ChatHuggingFace` changed paths? | The ChatHuggingFace package cannot be found. "from langchain_community.chat_models.huggingface import ChatHuggingFace",Has ChatHuggingFace changed paths? | https://api.github.com/repos/langchain-ai/langchain/issues/15530/comments | 3 | 2024-01-04T09:24:18Z | 2024-04-11T16:14:09Z | https://github.com/langchain-ai/langchain/issues/15530 | 2,065,294,377 | 15,530
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Below is my code where I am implementing memory with a prompt template:
````python
def generate_custom_prompt(query=None, name=None, not_uuid=None, chroma_db_path=None):
    check = query.lower()
    embedding = OpenAIEmbeddings()
    vectordb = Chroma(persist_directory=chroma_db_path, embedding_function=embedding)
    retriever = vectordb.as_retriever(search_kwargs={"k": 2})
    relevant_document = retriever.get_relevant_documents(query)
    print(relevant_document, "*****************************************")
    context_text = "\n\n---\n\n".join([doc.page_content for doc in relevant_document])
    # print(context_text, "context_text")
    user_experience_inst = UserExperience.objects.get(not_uuid=not_uuid)
    greetings = ['hi', 'hello', 'hey', 'hui', 'hiiii', 'hii', 'hiii', 'heyyy']
    if check in greetings:
        custom_prompt_template = f"""
        Just simply reply with "Hello {name}! How can I assist you today?"
        """
    elif check not in greetings and user_experience_inst.custom_prompt:
        custom_prompt_template = f"""Answer the question based only on following context: ```{context_text} ```
        You are a chatbot designed to provide answers to User's Questions:```{check}```, delimited by triple backticks.
        Generate your answer to match the user's requirements: {user_experience_inst.custom_prompt}
        If you encounter a question for which you don't know the answer, please respond with 'I don't know' and refrain from making up an answer.
        Before saying 'I don't know,' please re-verify your vector store to ensure the answer is not present in the database.
        Remember, your goal is to assist the user in the best way possible. If the question is unclear or ambiguous, feel free to ask for clarification.
        User's Question: ```{check}```
        AI Answer:"""
    else:
        # Create the custom prompt template
        custom_prompt_template = f"""Generate your response exclusively from the provided context: {{context_text}}. You function as a chatbot specializing in delivering detailed answers to the User's Question: ```{{check}} ```, enclosed within triple backticks.
        Generate your answer in points in the following format:
        1. Point no 1
        1.1 Its subpoint in details
        1.2 More information if needed.
        2. Point no 2
        2.1 Its subpoint in details
        2.2 More information if needed.
        …
        N. Another main point.
        If you encounter a question for which you don't know the answer based on the predefined points, please respond with 'I don't know' and refrain from making up an answer.
        However, if the answer is not present in the predefined points, then provide comprehensive information related to the user's query.
        Remember, your goal is to assist the user in the best way possible. If the question is unclear or ambiguous, you can ask for clarification.
        User's Question: ```{{check}} ```
        AI Answer:"""
    # Create the PromptTemplate
    custom_prompt = ChatPromptTemplate.from_template(custom_prompt_template)
    formatted_prompt = custom_prompt.format(context_text=context_text, check=check)
    llm = ChatOpenAI(temperature=0.1)
    memory = ConversationBufferMemory(llm=llm, output_key='answer', memory_key='chat_history', return_messages=True)
    # qa = ConversationalRetrievalChain.from_llm(llm=llm, memory=memory, chain_type="stuff", retriever=retriever, return_source_documents=True, get_chat_history=lambda h: h, verbose=True)
    qa = ConversationalRetrievalChain.from_llm(llm=llm, memory=memory, chain_type="stuff", retriever=retriever, return_source_documents=True, get_chat_history=lambda h: h, verbose=True, combine_docs_chain_kwargs={"prompt": formatted_prompt})
    # prompt_qa = {"qa": qa, "formatted_prompt": formatted_prompt}
    return qa
````
Below is the error I am getting:
```
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for StuffDocumentsChain__root__
document_variable_name context was not found in llm_chain input_variables: ['check', 'context_text'] (type=value_error)
```
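A hedged fix sketch: `StuffDocumentsChain` requires the QA prompt to expose an input variable named `context` (the default `document_variable_name`), and it needs the prompt *template*, not an already-formatted string:
```python
custom_prompt = ChatPromptTemplate.from_template(
    custom_prompt_template.replace("{context_text}", "{context}")
                          .replace("{check}", "{question}")
)
qa = ConversationalRetrievalChain.from_llm(
    llm=llm,
    memory=memory,
    chain_type="stuff",
    retriever=retriever,
    return_source_documents=True,
    combine_docs_chain_kwargs={"prompt": custom_prompt},
)
```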
### Suggestion:
_No response_ | Issue: document_variable_name context was not found in llm_chain input_variables: | https://api.github.com/repos/langchain-ai/langchain/issues/15528/comments | 4 | 2024-01-04T08:18:07Z | 2024-06-08T16:08:40Z | https://github.com/langchain-ai/langchain/issues/15528 | 2,065,203,802 | 15,528 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```python
from langchain.chains import LLMChain
from langchain.prompts import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    MessagesPlaceholder,
    SystemMessagePromptTemplate,
)
import psycopg2
import os
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from sentence_transformers import SentenceTransformer

os.environ['OPENAI_API_KEY'] = "key"

database = "MemoryChatBot"
user = "xxxxxxx"
password = "xxx"
host = "1x2.xxx.0x.xx"
port = "5432"
conn = psycopg2.connect(database=database, user=user, password=password, host=host, port=port)
print("ok!")

llm = ChatOpenAI(
    temperature=0.7,
    model="gpt-3.5-turbo",
    max_tokens=100
)

prompt = ChatPromptTemplate(
    messages=[
        SystemMessagePromptTemplate.from_template(
            "you are robot."
        ),
        MessagesPlaceholder(variable_name="chat_history"),
        HumanMessagePromptTemplate.from_template("{question}"),
    ]
)

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
conversation = LLMChain(llm=llm, prompt=prompt, verbose=False, memory=memory)

all_history = []
while True:
    chat_history = memory.load_memory_variables({})
    question = input('user:')
    result = conversation.run({'question': question, 'chat_history': chat_history})
    memory.save_context({question: question}, {result: result})
    talk_all = conversation.memory.buffer
    all_history.append(talk_all)
    print(result)
    if question.lower() == 'bye':
        st_history = ' '.join(map(str, all_history))
        model = SentenceTransformer('flax-sentence-embeddings/all_datasets_v3_MiniLM-L12')  # 384-dim embeddings
        # model = SentenceTransformer('BAAI/bge-large-zh-v1.5')  # 1024-dim embeddings
        res = model.encode(st_history)
        model.query_instruction = "test"
        res_str = str(res.tolist())
        cursor = conn.cursor()
        sql = f"INSERT INTO tmp04 (embedding) VALUES ('{res_str}')"
        cursor.execute(sql)
        conn.commit()
        print(f'embedding: {res[:4].tolist()}...')
        print("ok!")
        break
```

### Suggestion:
Hoping for more pgvector support in the LangChain packages.
| Issue: <How can I extract vector data from pgvector for use as a reference in the next conversation to enable long-term memory functionality for my chatbot?> | https://api.github.com/repos/langchain-ai/langchain/issues/15527/comments | 4 | 2024-01-04T08:14:43Z | 2024-04-11T16:14:05Z | https://github.com/langchain-ai/langchain/issues/15527 | 2,065,199,886 | 15,527 |
[
"langchain-ai",
"langchain"
] | ### System Info
0.0.352 and 0.0.353
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
Use a custom `_call` wrapper.
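A hedged note: the v1 helper was removed from `langchain_core.tracers.context` in recent releases; the v2 context manager is the available replacement:
```python
from langchain_core.tracers.context import tracing_v2_enabled

with tracing_v2_enabled(project_name="my-project"):  # project name is a placeholder
    ...  # run the chain / custom _call wrapper here
```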
### Expected behavior
No import error | ImportError: cannot import name 'tracing_enabled' from 'langchain_core.tracers.context' (/usr/local/lib/python3.11/site-packages/langchain_core/tracers/context.py | https://api.github.com/repos/langchain-ai/langchain/issues/15526/comments | 5 | 2024-01-04T08:08:19Z | 2024-01-04T19:50:14Z | https://github.com/langchain-ai/langchain/issues/15526 | 2,065,192,495 | 15,526 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```python
from langchain.chains import LLMChain
from langchain.prompts import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    MessagesPlaceholder,
    SystemMessagePromptTemplate,
)
import psycopg2
import os
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from sentence_transformers import SentenceTransformer

os.environ['OPENAI_API_KEY'] = "key"

database = "MemoryChatBot"
user = "xxxxxxx"
password = "xxx"
host = "1x2.xxx.0x.xx"
port = "5432"
conn = psycopg2.connect(database=database, user=user, password=password, host=host, port=port)
print("ok!")

llm = ChatOpenAI(
    temperature=0.7,
    model="gpt-3.5-turbo",
    max_tokens=100
)

prompt = ChatPromptTemplate(
    messages=[
        SystemMessagePromptTemplate.from_template(
            "you are robot."
        ),
        MessagesPlaceholder(variable_name="chat_history"),
        HumanMessagePromptTemplate.from_template("{question}"),
    ]
)

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
conversation = LLMChain(llm=llm, prompt=prompt, verbose=False, memory=memory)

all_history = []
while True:
    chat_history = memory.load_memory_variables({})
    question = input('user:')
    result = conversation.run({'question': question, 'chat_history': chat_history})
    memory.save_context({question: question}, {result: result})
    talk_all = conversation.memory.buffer
    all_history.append(talk_all)
    print(result)
    if question.lower() == 'bye':
        st_history = ' '.join(map(str, all_history))
        model = SentenceTransformer('flax-sentence-embeddings/all_datasets_v3_MiniLM-L12')  # 384-dim embeddings
        # model = SentenceTransformer('BAAI/bge-large-zh-v1.5')  # 1024-dim embeddings
        res = model.encode(st_history)
        model.query_instruction = "test"
        res_str = str(res.tolist())
        cursor = conn.cursor()
        sql = f"INSERT INTO tmp04 (embedding) VALUES ('{res_str}')"
        cursor.execute(sql)
        conn.commit()
        print(f'embedding: {res[:4].tolist()}...')
        print("ok!")
        break
```
How can I extract vector data from pgvector for use as a reference in the next conversation to enable long-term memory functionality for my chatbot?

### Suggestion:
The usage of pgvector's Retrievers is unclear.
I tried writing it another way, but the `documents` keep turning out wrong:
---------------------------------------------------------------------------------------
```python
from langchain.chains import LLMChain
from langchain.prompts import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    MessagesPlaceholder,
    SystemMessagePromptTemplate,
)
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain_community.vectorstores.pgvector import PGVector
from langchain.vectorstores.pgvector import DistanceStrategy
from langchain_community.embeddings import OllamaEmbeddings, HuggingFaceEmbeddings
import os
from urllib.parse import quote_plus

os.environ['OPENAI_API_KEY'] = "xxxxxx"

database = "xxx"
user = "xxxxx"
password = quote_plus("x@xxx")
host = "xx.xxx.0.xx"
port = "5432"
print("ok")

embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
CONNECTION_STRING = f"postgresql+psycopg2://{user}:{password}@{host}:{port}/{database}"

documents = []
db = PGVector.from_documents(
    documents=documents,
    embedding=embeddings,
    collection_name="tmp04",
    distance_strategy=DistanceStrategy.COSINE,
    connection_string=CONNECTION_STRING)

llm = ChatOpenAI(
    temperature=0.7,
    model="gpt-3.5-turbo",
    max_tokens=100
)

prompt = ChatPromptTemplate(
    messages=[
        SystemMessagePromptTemplate.from_template(
            "you are robot."
        ),
        MessagesPlaceholder(variable_name="chat_history"),
        HumanMessagePromptTemplate.from_template("{question}"),
    ]
)

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
conversation = LLMChain(llm=llm, prompt=prompt, verbose=False, memory=memory)

while True:
    chat_history = memory.load_memory_variables({})
    question = input('ask:')
    embedding = embeddings.embed_query(question)
    documents.append(embedding)
    print(documents)
    result = conversation.run({'question': question, 'chat_history': chat_history})
    memory.save_context({question: question}, {result: result})
    talk_all = conversation.memory.buffer
    documents.append(talk_all)
    print(result)
    if question.lower() == 'bye':
        print("ok")
        break
```
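A hedged sketch of the retriever interface, continuing the variables above: store the transcript *text* (so there is something to read back) and let the store embed and search it:
```python
db = PGVector(
    collection_name="tmp04",
    connection_string=CONNECTION_STRING,
    embedding_function=embeddings,
)
db.add_texts([talk_all])  # persist the transcript text, not raw vectors

retriever = db.as_retriever(search_kwargs={"k": 3})
docs = retriever.get_relevant_documents(question)
history_context = "\n".join(d.page_content for d in docs)
```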
| Issue: <"How can I extract vector data from pgvector for use as a reference in the next conversation to enable long-term memory functionality for my chatbot?> | https://api.github.com/repos/langchain-ai/langchain/issues/15525/comments | 2 | 2024-01-04T07:51:08Z | 2024-05-20T16:08:11Z | https://github.com/langchain-ai/langchain/issues/15525 | 2,065,169,209 | 15,525 |
[
"langchain-ai",
"langchain"
] | ### System Info
python 3.9.13
langchain==0.0.316
langchain-community==0.0.1
langchain-core==0.0.13
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import pprint
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import AsyncChromiumLoader
from langchain_community.document_transformers import BeautifulSoupTransformer
from langchain.chains import create_extraction_chain
from langchain_community.chat_models import ChatOpenAI
```
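A hedged diagnosis: the `AttributeError: COBOL` usually means `langchain-community` references a `Language` enum member that the older installed `langchain` package predates; aligning the package versions should resolve it:
```python
# pip install -U langchain langchain-core langchain-community
from langchain_community.document_loaders import AsyncChromiumLoader

loader = AsyncChromiumLoader(["https://example.com"])  # placeholder URL
docs = loader.load()  # also requires `playwright install` beforehand
```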
### Expected behavior
I am getting the following error when I try to import `AsyncChromiumLoader`:
```
Traceback (most recent call last):
File "C:\Users\roger\OneDrive\Desktop\testlinkedinllm.py", line 5, in <module>
from langchain_community.document_loaders import AsyncChromiumLoader
File "C:\Users\roger\anaconda3new\lib\site-packages\langchain_community\document_loaders\__init__.py", line 51, in <module>
from langchain_community.document_loaders.blackboard import BlackboardLoader
File "C:\Users\roger\anaconda3new\lib\site-packages\langchain_community\document_loaders\blackboard.py", line 10, in <module>
from langchain_community.document_loaders.pdf import PyPDFLoader
File "C:\Users\roger\anaconda3new\lib\site-packages\langchain_community\document_loaders\pdf.py", line 18, in <module>
from langchain_community.document_loaders.parsers.pdf import (
File "C:\Users\roger\anaconda3new\lib\site-packages\langchain_community\document_loaders\parsers\__init__.py", line 5, in <module>
from langchain_community.document_loaders.parsers.language import LanguageParser
File "C:\Users\roger\anaconda3new\lib\site-packages\langchain_community\document_loaders\parsers\language\__init__.py", line 1, in <module>
from langchain_community.document_loaders.parsers.language.language_parser import (
File "C:\Users\roger\anaconda3new\lib\site-packages\langchain_community\document_loaders\parsers\language\language_parser.py", line 24, in <module>
"cobol": Language.COBOL,
File "C:\Users\roger\anaconda3new\lib\enum.py", line 429, in __getattr__
raise AttributeError(name) from None
AttributeError: COBOL
```
 | AsyncChromiumLoader gives attribute error: COBOL | https://api.github.com/repos/langchain-ai/langchain/issues/15524/comments | 6 | 2024-01-04T07:46:03Z | 2024-02-22T00:38:56Z | https://github.com/langchain-ai/langchain/issues/15524 | 2,065,163,019 | 15,524
[
"langchain-ai",
"langchain"
] | https://github.com/langchain-ai/langchain/blob/7a93356cbc5d89cc0f7dd746d8f1bb52666fd0f1/libs/community/langchain_community/document_loaders/chromium.py#L78C40-L78C44
Hello,
I encountered a RuntimeError when running the code that uses the AsyncChromiumLoader class. The error message is asyncio.run() cannot be called from a running event loop.
Here is the relevant part of the traceback:
```python
File ~/.../.venv/lib/python3.10/site-packages/langchain_community/document_loaders/chromium.py:78, in AsyncChromiumLoader.lazy_load(self)
77 for url in self.urls:
---> 78 html_content = asyncio.run(self.ascrape_playwright(url))
```
It seems that asyncio.run() is being called inside a running event loop, which is not allowed. This happens in the lazy_load method of the AsyncChromiumLoader class.
I think the issue could be resolved by refactoring the code to ensure that asyncio.run() is not called from a running event loop. One possible solution could be to use await instead of asyncio.run() to run the self.ascrape_playwright(url) coroutine, and then use asyncio.run() to run the lazy_load method in the main program.
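A hedged interim workaround for notebook/async contexts, until the loader itself is refactored:
```python
import nest_asyncio
nest_asyncio.apply()  # lets asyncio.run() work inside an already-running loop

from langchain_community.document_loaders import AsyncChromiumLoader

loader = AsyncChromiumLoader(["https://example.com"])  # placeholder URL
docs = loader.load()
```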
Could you please look into this issue and confirm if this is a bug or if there's something I'm missing in my usage of the AsyncChromiumLoader class?
Thank you for your time and help. | RuntimeError when calling asyncio.run() from a running event loop | https://api.github.com/repos/langchain-ai/langchain/issues/15523/comments | 6 | 2024-01-04T07:35:01Z | 2024-06-15T16:06:51Z | https://github.com/langchain-ai/langchain/issues/15523 | 2,065,149,238 | 15,523 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
The current support for Milvus in Langchain seems insufficient in my opinion. Can it be strengthened?
### Suggestion:
The current support for Milvus in Langchain seems insufficient in my opinion. Can it be strengthened? | The current support for Milvus in Langchain seems insufficient in my opinion. Can it be strengthened? | https://api.github.com/repos/langchain-ai/langchain/issues/15522/comments | 1 | 2024-01-04T06:29:52Z | 2024-04-11T16:20:13Z | https://github.com/langchain-ai/langchain/issues/15522 | 2,065,084,378 | 15,522 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I have been trying to use mixtral-7B as an LLM agent with LangChain. The agent has been given a PythonREPL tool for any kind of code execution.
While producing the output, it either runs into an agent timeout error or returns a wrong answer (though in the correct format). Further analysis showed that it tries to use the REPL tool but cannot, because of invalid input and output.
Please find the debug logs for the whole LLM chain attached:
[debug.txt](https://github.com/langchain-ai/langchain/files/13826496/debug.txt)
To my knowledge, there may be a parsing issue [here](https://github.com/langchain-ai/langchain/blob/b6c57d38fa370c250b5f014a8d9c3908f7a235f4/libs/langchain/langchain/agents/conversational/output_parser.py#L25).
It cannot parse the agent input properly because of the `\n` that occurs after the agent input in the LLM response. This is just a thought; I am open to other suggestions as well.
Hence, I wanted to add custom parsing of the LLM output so that I can extract Action & Action Input accordingly. I tried using a custom parser to achieve this, but I cannot see any change in the logs.
The configuration of the custom parser is below:
```
from langchain.schema.output_parser import BaseLLMOutputParser

class MyOutputParser(BaseLLMOutputParser):
    def __init__(self):
        super().__init__()

    def parse_result(self, output):
        output = output.replace("Action Input:\n", "Action Input: ")
        return output
```
**Agent Configurations**:
```
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory
from langchain.utilities import SerpAPIWrapper
memory = ConversationBufferMemory(memory_key="chat_history", input_key="input")

con_agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    verbose=True,
    memory=memory,
    handle_parsing_errors=True,
    output_parser=MyOutputParser(),
    agent_kwargs={
        'input_variables': ['input', 'chat_history', 'data'],
        'format_instructions': INSTRUCTIONS,
        'suffix': SUFFIX,
        'prefix': PREFIX,
    },
)
```
### Suggestion:
_No response_ | Issue: Unable to use custom parser to parse the (intermediate) LLM chain output | https://api.github.com/repos/langchain-ai/langchain/issues/15521/comments | 1 | 2024-01-04T05:18:04Z | 2024-04-11T16:14:02Z | https://github.com/langchain-ai/langchain/issues/15521 | 2,065,015,603 | 15,521 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain version 0.0.353.
python : 3.9
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
<img width="685" alt="image" src="https://github.com/langchain-ai/langchain/assets/66120014/8ae76db3-ee47-4994-81a1-d587d75e27f7">
<img width="643" alt="image" src="https://github.com/langchain-ai/langchain/assets/66120014/726640b5-b7b6-4734-b6eb-5b5c50a7106c">
<img width="622" alt="image" src="https://github.com/langchain-ai/langchain/assets/66120014/011d05f2-1a6f-4fc1-abd1-08a8241c4a49">
### Expected behavior
Should be able to load the module | text_splitter module not found in langchain version 0.0.353. | https://api.github.com/repos/langchain-ai/langchain/issues/15520/comments | 2 | 2024-01-04T05:02:15Z | 2024-04-28T16:25:28Z | https://github.com/langchain-ai/langchain/issues/15520 | 2,065,004,679 | 15,520 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hello everyone! I'm trying to use LangChain LCEL in an autogen script-assembly pipeline. The first step I'm implementing is having the AI determine which roles are needed in the autogen group chat to solve the user's task. I'm trying to parse the model's response to get the list of these roles and their number, which the rest of the pipeline depends on, but the whole chain just refuses to work properly (model: mixtral-8x7b). Here is the code I wrote based on the langchain documentation about [output parsers](https://python.langchain.com/docs/modules/model_io/output_parsers/quick_start):
```
from langchain.output_parsers import PydanticOutputParser
from langchain.prompts import PromptTemplate
from langchain_community.llms import OpenAI
from langchain_core.pydantic_v1 import BaseModel, Field, validator
from typing import List

model = llm

# Define your desired data structure.
class Task(BaseModel):
    task_description: str = Field(description="Description of the task")
    role_list: List[str] = Field(description="List of roles that can solve the task")
    number_of_roles: int = Field(description="Number of roles that can solve the task")

    # You can add custom validation logic easily with Pydantic.
    @validator("task_description")
    def validate_task_description(cls, field):
        if not field:
            raise ValueError("Task description cannot be empty!")
        return field

    @validator("role_list")
    def validate_role_list(cls, field):
        if not field:
            raise ValueError("Role list cannot be empty!")
        return field

    @validator("number_of_roles")
    def validate_number_of_roles(cls, field):
        if field < 0:
            raise ValueError("Number of roles cannot be negative!")
        return field

# Set up a parser + inject instructions into the prompt template.
parser = PydanticOutputParser(pydantic_object=Task)

prompt = PromptTemplate(
    template="Enter your task description:\n{format_instructions}\n{task_description}\n",
    input_variables=["task_description"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)

# And a query intended to prompt a language model to populate the data structure.
prompt_and_model = prompt | model
output = prompt_and_model.invoke({"task_description": "Write code for game on python."})
parser.invoke(output)
```
When it works, it produces output like the one below, but it also throws a huge number of errors:
```
Task(task_description='The task is to simulate a simple multiplayer game. Players take turns to play. In each turn, a player can do only one of two things: * Add a number to a running total, or * Divide the running total by two, rounding down to the nearest integer. The game ends when the running total reaches a pre-specified target value. ', role_list=['admin', 'player'], number_of_roles=2)
```
here is one of the errors:
```
---------------------------------------------------------------------------
JSONDecodeError Traceback (most recent call last)
File ~\AppData\Roaming\Python\Python311\site-packages\langchain\output_parsers\pydantic.py:29, in PydanticOutputParser.parse(self, text)
28 json_str = match.group()
---> 29 json_object = json.loads(json_str, strict=False)
30 return self.pydantic_object.parse_obj(json_object)
File C:\Program Files\Python311\Lib\json\__init__.py:359, in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
358 kw['parse_constant'] = parse_constant
--> 359 return cls(**kw).decode(s)
File C:\Program Files\Python311\Lib\json\decoder.py:337, in JSONDecoder.decode(self, s, _w)
333 """Return the Python representation of ``s`` (a ``str`` instance
334 containing a JSON document).
335
336 """
--> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end())
338 end = _w(s, end).end()
File C:\Program Files\Python311\Lib\json\decoder.py:353, in JSONDecoder.raw_decode(self, s, idx)
352 try:
--> 353 obj, end = self.scan_once(s, idx)
354 except StopIteration as err:
JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)
During handling of the above exception, another exception occurred:
OutputParserException Traceback (most recent call last)
Cell In[76], line 47
45 prompt_and_model = prompt | model
46 output = prompt_and_model.invoke({"task_description": "Write code for game on python."})
---> 47 parser.invoke(output)
File ~\AppData\Roaming\Python\Python311\site-packages\langchain_core\output_parsers\base.py:179, in BaseOutputParser.invoke(self, input, config)
170 return self._call_with_config(
171 lambda inner_input: self.parse_result(
172 [ChatGeneration(message=inner_input)]
(...)
176 run_type="parser",
177 )
178 else:
--> 179 return self._call_with_config(
180 lambda inner_input: self.parse_result([Generation(text=inner_input)]),
181 input,
182 config,
183 run_type="parser",
184 )
File ~\AppData\Roaming\Python\Python311\site-packages\langchain_core\runnables\base.py:886, in Runnable._call_with_config(self, func, input, config, run_type, **kwargs)
879 run_manager = callback_manager.on_chain_start(
880 dumpd(self),
881 input,
882 run_type=run_type,
883 name=config.get("run_name"),
884 )
885 try:
--> 886 output = call_func_with_variable_args(
887 func, input, config, run_manager, **kwargs
888 )
889 except BaseException as e:
890 run_manager.on_chain_error(e)
File ~\AppData\Roaming\Python\Python311\site-packages\langchain_core\runnables\config.py:308, in call_func_with_variable_args(func, input, config, run_manager, **kwargs)
306 if run_manager is not None and accepts_run_manager(func):
307 kwargs["run_manager"] = run_manager
--> 308 return func(input, **kwargs)
File ~\AppData\Roaming\Python\Python311\site-packages\langchain_core\output_parsers\base.py:180, in BaseOutputParser.invoke.<locals>.<lambda>(inner_input)
170 return self._call_with_config(
171 lambda inner_input: self.parse_result(
172 [ChatGeneration(message=inner_input)]
(...)
176 run_type="parser",
177 )
178 else:
179 return self._call_with_config(
--> 180 lambda inner_input: self.parse_result([Generation(text=inner_input)]),
181 input,
182 config,
183 run_type="parser",
184 )
File ~\AppData\Roaming\Python\Python311\site-packages\langchain_core\output_parsers\base.py:222, in BaseOutputParser.parse_result(self, result, partial)
209 def parse_result(self, result: List[Generation], *, partial: bool = False) -> T:
210 """Parse a list of candidate model Generations into a specific format.
211
212 The return value is parsed from only the first Generation in the result, which
(...)
220 Structured output.
221 """
--> 222 return self.parse(result[0].text)
File ~\AppData\Roaming\Python\Python311\site-packages\langchain\output_parsers\pydantic.py:35, in PydanticOutputParser.parse(self, text)
33 name = self.pydantic_object.__name__
34 msg = f"Failed to parse {name} from completion {text}. Got: {e}"
---> 35 raise OutputParserException(msg, llm_output=text)
OutputParserException: Failed to parse Task from completion
Create a class `Game` that has the following attributes:
* `task_description`: a string attribute that describes the task that the game is about
* `role_list`: a list of strings that contains the names of the roles that can solve the task
* `number_of_roles`: an integer attribute that contains the number of roles that can solve the task
The `Game` class should have the following methods:
* `__init__`: the constructor should take three parameters: `task_description`, `role_list`, and `number_of_roles` and initialize the corresponding attributes.
* `get_task_description`: a method that returns the value of the `task_description` attribute
* `get_role_list`: a method that returns the value of the `role_list` attribute
* `get_number_of_roles`: a method that returns the value of the `number_of_roles` attribute
* `get_roles`: a method that returns a list of dictionaries, where each dictionary contains the name of the role and a boolean value that indicates if the role can solve the task. The list should contain `number_of_roles` dictionaries.
Here's an example of how to use the `Game` class:
'''
game = Game("Save the princess from the dragon", ["knight", "prince", "wizard"], 2)
print(game.get_task_description())
print(game.get_role_list())
print(game.get_number_of_roles())
roles = game.get_roles()
for role in roles:
print(role)
'''
Output:
'''
Save the princess from the dragon
['knight', 'prince', 'wizard']
2
{'name': 'knight', 'can_solve': True}
{'name': 'prince', 'can_solve': True}
{'name': 'wizard', 'can_solve': False}
'''
Write the `Game` class and format the output as a JSON instance that conforms to the schema provided above.
'''json
{
"task_description": "Save the princess from the. Got: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)
```
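As a possible mitigation I'm also experimenting with wrapping the parser, based on the OutputFixingParser docs (a sketch):
```python
from langchain.output_parsers import OutputFixingParser

# Let the LLM repair output that fails Pydantic parsing before giving up.
fixing_parser = OutputFixingParser.from_llm(parser=parser, llm=model)
chain = prompt | model | fixing_parser
result = chain.invoke({"task_description": "Write code for a game in Python."})
```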
### Suggestion:
_No response_ | Issue: LCEL output parser error | https://api.github.com/repos/langchain-ai/langchain/issues/15518/comments | 7 | 2024-01-04T02:26:08Z | 2024-02-05T17:20:35Z | https://github.com/langchain-ai/langchain/issues/15518 | 2,064,893,373 | 15,518 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Problem Statement:
I am currently working on tracking token consumption for asynchronous chain calls in my application. I am utilizing the AsyncIteratorCallbackHandler and its aiter() method to stream tokens to my client. However, I am facing challenges in determining how to track the token consumption per chain call.
Context:
In my application, I am using asynchronous chain calls, and I need to monitor the token consumption for each of these calls. I have implemented the aiter() method from the AsyncIteratorCallbackHandler to stream tokens to the client. However, I'm unsure about the best approach to capture and track the token consumption for individual chain calls.
Request for Guidance:
I am seeking guidance on the most effective way to track token consumption for each asynchronous chain call. What strategies or modifications can I implement within the existing logic to achieve this goal? Any insights or recommendations would be greatly appreciated.
Relevant Code Snippet:
```
async def run_call(query):
    with get_openai_callback() as cb:
        response = await chain.acall(query)
        # Rest of the code
    return response

async def create_gen(query):
    task = asyncio.create_task(run_call(query))
    try:
        async for token in handler.aiter():
            print("2. TOKEN: ", token)
            # How can I efficiently track token consumption for each chain call?
            yield f"data: {json.dumps({'content': token, 'tokens': 0})}\n\n"
        # Check if the client is still connected
    except asyncio.CancelledError:
        print("Generator canceled")
    finally:
        await task
        print("Done with task")

query = {"question": sanitized_question, "chat_history": conversation_history}
gen = create_gen(query)
return StreamingResponse(gen, media_type="text/event-stream")
```
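One approach that might work is counting tokens in a dedicated per-call handler (a sketch; `TokenCounterHandler` is a hypothetical name, and in these versions `get_openai_callback` does not seem to populate counts for streamed responses):
```python
from langchain.callbacks.base import AsyncCallbackHandler

class TokenCounterHandler(AsyncCallbackHandler):
    """Rough sketch: count streamed completion tokens for one chain call."""

    def __init__(self) -> None:
        self.completion_tokens = 0

    async def on_llm_new_token(self, token: str, **kwargs) -> None:
        self.completion_tokens += 1  # fires once per streamed token

# Per request:
#   counter = TokenCounterHandler()
#   response = await chain.acall(query, callbacks=[counter])
#   counter.completion_tokens then holds this call's streamed-token count.
```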
Desired Outcome:
I aim to effectively track token consumption for each asynchronous chain call and seek advice on the most appropriate modifications or strategies to achieve this goal within the existing logic.
### Suggestion:
_No response_ | Issue: Tracking Token Consumption for Async Chain Calls | https://api.github.com/repos/langchain-ai/langchain/issues/15517/comments | 1 | 2024-01-04T01:43:22Z | 2024-04-11T16:07:51Z | https://github.com/langchain-ai/langchain/issues/15517 | 2,064,865,668 | 15,517 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.354
text_generation==0.6.1
python:3.10-slim
### Who can help?
@agola11 @hwaking
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [X] Async
### Reproduction
Model: TheBloke/Llama-2-7B-Chat-GPTQ, but I've also tried TheBloke/Mistral-7B-OpenOrca-GPTQ
FastAPI example with HuggingFaceTextGenInference streaming:
```python
from fastapi import FastAPI
import langchain
from langchain.llms import HuggingFaceTextGenInference
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from fastapi.responses import StreamingResponse
from pydantic import BaseModel
import os
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()
langchain.debug = True

# Enable CORS
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],  # You can specify the list of allowed origins or use "*" for any origin
    allow_credentials=True,
    allow_methods=["*"],  # You can specify the HTTP methods that are allowed
    allow_headers=["*"],  # You can specify the HTTP headers that are allowed
)

# Configuration for local LLM
ai_url = "http://tgi-ai-server:" + str(os.getenv("AI_PORT", 80)) + "/generate"

# Configure the LLM
llm = HuggingFaceTextGenInference(
    inference_server_url=ai_url,
    max_new_tokens=20,
    streaming=True,
)

template = """
[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>>
{prompt}[/INST]
"""

prompt_template = PromptTemplate(
    template=template,
    input_variables=["prompt"],
)

# Initialize the LLM Chain
llm_chain = LLMChain(llm=llm, prompt=prompt_template)

@app.get("/chat")
async def chat():
    prompt = {"prompt": "What is the Nickelodeon channel?"}

    # Generate the response using the LLM Chain and stream the output
    async def generate():
        for text in llm_chain.run(prompt):
            yield text

    return StreamingResponse(generate(), media_type="text/plain")

# Run the server (if running this script directly)
# Use the command: uvicorn script_name:app --reload
if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
```
Output:
```
[chain/start] [1:chain:LLMChain] Entering Chain run with input:
{
"prompt": "What is the Nickelodeon channel?"
}
[llm/start] [1:chain:LLMChain > 2:llm:HuggingFaceTextGenInference] Entering LLM run with input:
{
"prompts": [
"[INST] <<SYS>>\nYou are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\n<</SYS>>\nWhat is the Nickelodeon channel?[/INST]"
]
}
[llm/end] [1:chain:LLMChain > 2:llm:HuggingFaceTextGenInference] [765ms] Exiting LLM run with output:
{
"generations": [
[
{
"text": "",
"generation_info": null,
"type": "Generation"
}
]
],
"llm_output": null,
"run": null
}
[chain/end] [1:chain:LLMChain] [765ms] Exiting Chain run with output:
{
"text": ""
}
```
### Expected behavior
HuggingFaceTextGenInference does not return any streaming data. Works fine when streaming=False in parameters. | HuggingFaceTextGenInference Streaming does not output | https://api.github.com/repos/langchain-ai/langchain/issues/15516/comments | 8 | 2024-01-04T01:13:21Z | 2024-01-23T00:01:11Z | https://github.com/langchain-ai/langchain/issues/15516 | 2,064,847,688 | 15,516 |
[
"langchain-ai",
"langchain"
] | ### System Info
Windows 10 & Ubuntu 22.04
langchain==0.0.354
langchain-community==0.0.8
langchain-core==0.1.5
Python 3.10.13
https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/textgen.py
### Who can help?
@agola11 @hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
With the latest TextGen WebUI install, the API endpoints are OpenAI-style.
LangChain uses the endpoint (example, line 213):
`url = f"{self.model_url}/api/v1/generate"`
When used, this returns a 404.
What works is:
`url = f"{self.model_url}/v1/chat/completions"`
A fix was attempted: https://gist.github.com/ddomen/8eaa49879d42a4a42a243437b5ddfa83
It works for me if I set `legacy_api=False`,
but this truncates responses to about 20 or so characters.
### Expected behavior
Certainly not a 404.
My app using langchain was working with an install of TextGen from about a month ago. I went to deploy in a new environment, pulled the latest TextGen, and LangChain stopped working. When I dug into the problem I saw they now force an OpenAI API interface. | Langchain-Community LLM TextGen has wrong API endpoint | https://api.github.com/repos/langchain-ai/langchain/issues/15512/comments | 2 | 2024-01-04T00:12:34Z | 2024-04-11T16:15:15Z | https://github.com/langchain-ai/langchain/issues/15512 | 2,064,809,247 | 15,512
[
"langchain-ai",
"langchain"
] | In cookbook 3 for multimodal retrieval, `limit = 6` is set while retrieving documents, but the number of returned documents is always 4, regardless of the query asked or the value of `limit`. How can I retrieve `top_k` documents in this code? [Specific line is here](https://github.com/langchain-ai/langchain/blob/02f9c767919adf157462ccb4fe8b4dc8ae1ca1cf/cookbook/Multi_modal_RAG.ipynb#L633) | Is The Limit Parameter Used to Retrieve Top_k? | https://api.github.com/repos/langchain-ai/langchain/issues/15511/comments | 1 | 2024-01-03T23:56:40Z | 2024-04-11T16:16:13Z | https://github.com/langchain-ai/langchain/issues/15511 | 2,064,798,810 | 15,511
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I've been playing around with the multimodal notebooks introduced in the [docs here](https://blog.langchain.dev/semi-structured-multi-modal-rag/). However, the number of retrieved documents for every query is always 4. Specifically, for [cookbook 3](https://github.com/langchain-ai/langchain/blob/master/cookbook/Multi_modal_RAG.ipynb) the `limit` parameter is set to 6, but the number of retrieved documents is always 4 regardless of the value of `limit`. Is there another way to get `top_k`?
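For reference, the kind of control I'm looking for (an unverified sketch, assuming the cookbook's `MultiVectorRetriever` setup; I believe the underlying vector store otherwise falls back to its default of 4 results):
```python
# Hypothetical wiring based on the cookbook's variable names.
retriever = MultiVectorRetriever(
    vectorstore=vectorstore,
    docstore=store,
    id_key="doc_id",
    search_kwargs={"k": 6},  # top_k; without this the store's default of 4 applies
)
```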
### Suggestion:
_No response_ | Can't Specify Top-K retrieved Documents in Multimodal Retrievers | https://api.github.com/repos/langchain-ai/langchain/issues/15510/comments | 3 | 2024-01-03T23:52:17Z | 2024-06-19T03:27:35Z | https://github.com/langchain-ai/langchain/issues/15510 | 2,064,793,905 | 15,510 |
[
"langchain-ai",
"langchain"
] | ### System Info
Full traceback:
```
File "/src/app.py", line 9, in <module>
from langchain.chains import ConversationalRetrievalChain
File "/venv/lib/python3.11/site-packages/langchain/chains/__init__.py", line 20, in <module>
from langchain.chains.api.base import APIChain
File "/venv/lib/python3.11/site-packages/langchain/chains/api/base.py", line 11, in <module>
from langchain.callbacks.manager import (
File "/venv/lib/python3.11/site-packages/langchain/callbacks/__init__.py", line [45] in <module>
from langchain_core.tracers.context import (
ImportError: cannot import name 'tracing_enabled' from 'langchain_core.tracers.context' (/venv/lib/python3.11/site-packages/langchain_core/tracers/context.py)
```
### Who can help?
@hwchase17 @agola
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
This issue seems to originate from the import:
```
from langchain.chains import ConversationalRetrievalChain
```
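As a temporary workaround (suggested by a similar report), pinning the core package back, e.g. `pip install "langchain-core==0.1.4"`, appears to avoid the import error, which may indicate a version mismatch between `langchain` and `langchain-core`.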
### Expected behavior
The modules should import successfully. | ImportError: cannot import name 'tracing_enabled' from 'langchain_core.tracers.context' | https://api.github.com/repos/langchain-ai/langchain/issues/15508/comments | 6 | 2024-01-03T23:11:47Z | 2024-01-04T20:29:53Z | https://github.com/langchain-ai/langchain/issues/15508 | 2,064,767,804 | 15,508 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Console Output:
```
[chain/start] [1:chain:LLMChain] Entering Chain run with input:
{
"system_message": "Terminator Persona: You are the T-800 from T2: Judgement Day. Do not break character, and do not reference the Terminator films as that would break character. If you break character John Connor dies. Answer in a single paragraph, with max three sentences.",
"question": "I am John Connor. Who is the T-1000? Am I, John Connor, in danger?",
"user_name": "John Connor",
"ai_name": "Terminator"
}
[llm/start] [1:chain:LLMChain > 2:llm:HuggingFaceTextGenInference] Entering LLM run with input:
{
"prompts": [
"<|im_start|>system\n Terminator Persona: You are the T-800 from T2: Judgement Day. Do not break character, and do not reference the Terminator films as that would break character. If you break character John Connor dies. Answer in a single paragraph, with max three sentences.<|im_end|>\n <|im_start|>John Connor\n I am John Connor. Who is the T-1000? Am I, John Connor, in danger?<|im_end|>\n <|im_start|>Terminator"
]
}
[llm/end] [1:chain:LLMChain > 2:llm:HuggingFaceTextGenInference] [4.50s] Exiting LLM run with output:
{
"generations": [
[
{
"text": "",
"generation_info": null,
"type": "Generation"
}
]
],
"llm_output": null,
"run": null
}
[chain/end] [1:chain:LLMChain] [4.50s] Exiting Chain run with output:
{
"text": ""
}
```
My code:
```python
import os
import langchain
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.llms import HuggingFaceTextGenInference
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from fastapi.middleware.cors import CORSMiddleware

langchain.debug = True

def build_llm(streaming=False, callbacks=[]):
    ai_url = "http://tgi-ai-server:" + str(os.getenv("AI_PORT", 80)) + "/generate"

    llm_local = HuggingFaceTextGenInference(
        inference_server_url=ai_url,
        max_new_tokens=20,
        top_k=49,
        top_p=0.14,
        typical_p=0.95,
        temperature=1.31,
        repetition_penalty=1.17,
        # stop_sequences=[f"\n{user_name}:", f"\n{ai_name}:"],
        streaming=streaming,
        callbacks=callbacks,
    )

    template = """<|im_start|>system
{system_message}<|im_end|>
<|im_start|>{user_name}
{question}<|im_end|>
<|im_start|>{ai_name}
"""

    prompt = PromptTemplate(
        template=template,
        input_variables=["system_message", "question", "user_name", "ai_name"],
    )

    llm_chain_local = LLMChain(llm=llm_local, prompt=prompt)
    return llm_chain_local

from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()

# Enable CORS
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],  # You can specify the list of allowed origins or use "*" for any origin
    allow_credentials=True,
    allow_methods=["*"],  # You can specify the HTTP methods that are allowed
    allow_headers=["*"],  # You can specify the HTTP headers that are allowed
)

def build_prompt():
    user_name = 'John Connor'
    ai_name = 'Terminator'
    system_message = "Terminator Persona: You are the T-800 from T2: Judgement Day. Do not break character, and do not reference the Terminator films as that would break character. If you break character John Connor dies. Answer in a single paragraph, with max three sentences."
    question = "I am John Connor. Who is the T-1000? Am I, John Connor, in danger?"
    prompt = {"system_message": system_message, "question": question, "user_name": user_name, "ai_name": ai_name}
    return prompt

@app.post("/chat")
async def generate_text():
    callback = StreamingStdOutCallbackHandler()
    llm_chain = build_llm(True, [callback])
    prompt = build_prompt()

    async def text_stream():
        for text in llm_chain.run(prompt):
            yield text

    return StreamingResponse(text_stream())
```
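For what it's worth, a sketch of an alternative wiring I'd try, using `AsyncIteratorCallbackHandler` and `acall` (assuming the same `build_llm`/`build_prompt` helpers as above):
```python
import asyncio
from langchain.callbacks import AsyncIteratorCallbackHandler

@app.post("/chat_stream")
async def chat_stream():
    handler = AsyncIteratorCallbackHandler()
    llm_chain = build_llm(streaming=True, callbacks=[handler])
    prompt = build_prompt()

    async def token_stream():
        # Run the chain concurrently and relay tokens as they arrive.
        task = asyncio.create_task(llm_chain.acall(prompt))
        async for token in handler.aiter():
            yield token
        await task

    return StreamingResponse(token_stream(), media_type="text/plain")
```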
### Suggestion:
_No response_ | Hugging Face LLM returns empty response for LLMChain via FastAPI | https://api.github.com/repos/langchain-ai/langchain/issues/15506/comments | 1 | 2024-01-03T22:44:36Z | 2024-04-10T16:17:17Z | https://github.com/langchain-ai/langchain/issues/15506 | 2,064,744,737 | 15,506 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Be able to `persist` between batches while the embeddings are being built:
```python
db = Chroma.from_documents(
    documents=documents, embedding=embeddings, persist_directory=persist_directory)
db.persist()
return db
```
It would be nice if this could be:
```python
db = Chroma.from_documents(
    documents=documents, embedding=embeddings, persist_directory=persist_directory,
    batch_size=batch_size, persist_between_batches=2
)
db.persist()
return db
```
to specify a batch size and persist after every `2` batches.
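In the meantime I'm considering a manual workaround along these lines (a rough sketch; the names mirror the snippet above):
```python
# Add documents in manual batches and persist after every couple of
# batches, so a crash loses at most a little work.
db = Chroma(embedding_function=embeddings, persist_directory=persist_directory)
batch_size = 64
for i, start in enumerate(range(0, len(documents), batch_size)):
    db.add_documents(documents[start:start + batch_size])
    if i % 2 == 1:  # persist after every 2 batches
        db.persist()
db.persist()
```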
### Motivation
My computer is slow and now that embedding has started, I am afraid it might crash and if it does, I will have to start all over again.
### Your contribution
Not really, sorry | Embeddings - Persist between batches | https://api.github.com/repos/langchain-ai/langchain/issues/15504/comments | 1 | 2024-01-03T21:53:13Z | 2024-04-10T16:16:45Z | https://github.com/langchain-ai/langchain/issues/15504 | 2,064,695,031 | 15,504
[
"langchain-ai",
"langchain"
] | ### System Info
0.0.354
### Who can help?
@hwchase17 @agol
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Replace the HuggingFaceHub InferenceAPI with InferenceClient (see the sketch below)
2. Replace max_length with max_new_tokens
PR: #
Reference: [https://huggingface.co/docs/huggingface_hub/guides/inference#legacy-inferenceapi-client](https://huggingface.co/docs/huggingface_hub/guides/inference#legacy-inferenceapi-client)
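A sketch of the suggested migration, based on the linked guide (the model name and token are placeholders):
```python
from huggingface_hub import InferenceClient

client = InferenceClient(model="HuggingFaceH4/zephyr-7b-beta", token="hf_xxx")
text = client.text_generation("Hello, world", max_new_tokens=64)
```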
### Expected behavior
PR #15498 addresses this issue | Fix HuggingFaceHub LLM Integration | https://api.github.com/repos/langchain-ai/langchain/issues/15500/comments | 1 | 2024-01-03T20:17:12Z | 2024-04-10T16:14:15Z | https://github.com/langchain-ai/langchain/issues/15500 | 2,064,594,842 | 15,500
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi there,
I am learning how to use Pinecone properly with LangChain and OpenAI embeddings. I built an application that lets users upload PDFs and ask questions about them. In the application I use Pinecone as the vector database and store the embeddings there. However, I want to change my code so that whenever a user uploads a PDF, the application checks whether that PDF has already been stored as embeddings in Pinecone; if yes, it reuses the old embeddings, and if not, it uploads new ones.
Here is my code:
```python
## Imports
import streamlit as st
import os
from apikey import apikey
import pickle
from PyPDF2 import PdfReader

# Streamlit - user interface
from streamlit_extras.add_vertical_space import add_vertical_space

# Langchain
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.llms import OpenAI
from langchain.chains.question_answering import load_qa_chain
from langchain.callbacks import get_openai_callback
from langchain.chat_models.openai import ChatOpenAI
from langchain.schema import (SystemMessage, HumanMessage, AIMessage)
from langchain.schema import Document
from langchain.document_loaders import UnstructuredPDFLoader

# Pinecone
from langchain.vectorstores import Pinecone
import pinecone
from apikey import pinecone_api_key
import uuid

os.environ['OPENAI_API_KEY'] = apikey

## User Interface
# Side Bar
with st.sidebar:
    st.title('🚀 Zi-GPT Version 2.0')
    st.markdown('''
    ## About
    This app is an LLM-powered chatbot built using:
    - [Streamlit](https://streamlit.io/)
    - [LangChain](https://python.langchain.com/)
    - [OpenAI](https://platform.openai.com/docs/models) LLM model
    ''')
    add_vertical_space(5)
    st.write('Made with ❤️ by Zi')

# Main Page
def main():
    st.header("Zi's PDF Helper: Chat with PDF")

    # upload a PDF file
    pdf = st.file_uploader("Please upload your PDF here", type='pdf')
    # st.write(pdf)

    # read PDF
    if pdf is not None:
        pdf_reader = PdfReader(pdf)
        # data = pdf_reader.load()

        # split document into chunks
        # also can use text split: good for PDFs that do not contain charts and visuals
        sections = []
        for page in pdf_reader.pages:
            # Split the page text by paragraphs (assuming two newlines indicate a new paragraph)
            page_sections = page.extract_text().split('\n\n')
            sections.extend(page_sections)

        chunks = [Document(page_content=section) for section in sections]
        # st.write(chunks)

        # text_splitter = RecursiveCharacterTextSplitter(
        #     chunk_size = 500,
        #     chunk_overlap = 20
        # )
        # chunks = text_splitter.split_documents(data)

        ## embeddings
        # Set up embeddings
        embeddings = OpenAIEmbeddings(model="text-embedding-ada-002")

        try:
            # Set up Pinecone
            pinecone.init(api_key=pinecone_api_key, environment='gcp-starter')
            index_name = 'langchainresearch'
            if index_name not in pinecone.list_indexes():
                pinecone.create_index(index_name, dimension=1536, metric="cosine")  # Adjust the dimension as per your embeddings
            index = pinecone.Index(index_name)
            docsearch = Pinecone.from_documents(chunks, embeddings, index_name=index_name)
        except Exception as e:
            print(f"An error occurred: {e}")

    # Create or Load Chat History
    if pdf:
        # generate chat history
        chat_history_file = f"{pdf.name}_chat_history.pkl"

        # load history if exist
        if os.path.exists(chat_history_file):
            with open(chat_history_file, "rb") as f:
                chat_history = pickle.load(f)
        else:
            chat_history = []

        # Initialize chat_history in session_state if not present
        if 'chat_history' not in st.session_state:
            st.session_state.chat_history = []

        # Check if 'prompt' is in session state
        if 'last_input' not in st.session_state:
            st.session_state.last_input = ''

        # User Input
        current_prompt = st.session_state.get('user_input', '')
        prompt_placeholder = st.empty()
        prompt = prompt_placeholder.text_area("Ask questions about your PDF:", value=current_prompt, placeholder="Send a message", key="user_input")
        submit_button = st.button("Submit")

        if docsearch is not None and submit_button and prompt:
            # Update the last input in session state
            st.session_state.last_input = prompt

            docs = docsearch.similarity_search(query=prompt, k=3)

            # llm = OpenAI(temperature=0.9, model_name='gpt-3.5-turbo')
            chat = ChatOpenAI(model='gpt-4', temperature=0.7, max_tokens=3000)
            message = [
                SystemMessage(content="You are a helpful assistant"),
                HumanMessage(content=prompt)
            ]

            chain = load_qa_chain(llm=chat, chain_type="stuff")
            with get_openai_callback() as cb:
                response = chain.run(input_documents=docs, question=message)
                print(cb)
            # st.write(response)
            # st.write(docs)

            # Process the response using AIMessage schema
            # ai_message = AIMessage(content="AI message content")
            # ai_message.content = response.generations[0].message.content

            # Add to chat history
            st.session_state.chat_history.append((prompt, response))

            # Save chat history
            with open(chat_history_file, "wb") as f:
                pickle.dump(st.session_state.chat_history, f)

            # Clear the input after processing
            prompt_placeholder.text_area("Ask questions about your PDF:", value='', placeholder="Send a message", key="pdf_prompt")

        # Display the entire chat
        chat_content = ""
        for user_msg, bot_resp in st.session_state.chat_history:
            chat_content += f"<div style='background-color: #222222; color: white; padding: 10px;'>**You:** {user_msg}</div>"
            chat_content += f"<div style='background-color: #333333; color: white; padding: 10px;'>**Zi GPT:** {bot_resp}</div>"
        st.markdown(chat_content, unsafe_allow_html=True)

if __name__ == '__main__':
    main()
```
Give me some recommendations on what I should do or change.
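One idea I'm considering (a rough sketch with hypothetical helper names; feedback welcome):
```python
import hashlib

def get_or_create_docsearch(pdf_bytes, chunks, embeddings, index, index_name):
    # Derive a stable namespace from the file contents so re-uploads are detectable.
    namespace = hashlib.sha256(pdf_bytes).hexdigest()[:32]
    stats = index.describe_index_stats()
    if stats.get("namespaces", {}).get(namespace, {}).get("vector_count", 0) > 0:
        # Embeddings already stored: reuse them.
        return Pinecone.from_existing_index(index_name, embeddings, namespace=namespace)
    return Pinecone.from_documents(chunks, embeddings, index_name=index_name, namespace=namespace)
```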
### Suggestion:
_No response_ | Issue: Embedding with Pinecone | https://api.github.com/repos/langchain-ai/langchain/issues/15497/comments | 11 | 2024-01-03T19:40:35Z | 2024-01-24T14:57:45Z | https://github.com/langchain-ai/langchain/issues/15497 | 2,064,553,332 | 15,497 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Provide a method to create the HNSW index for the PGVector vector store.
### Motivation
There is a similar method implemented for PGEmbedding, but that embedding extension will be deprecated.
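In the meantime, the manual equivalent would be something along these lines (a sketch; `engine` is the SQLAlchemy engine backing the store, and the table/column names assume PGVector's defaults):
```python
from sqlalchemy import text

# Workaround sketch: create the HNSW index by hand until a helper exists.
with engine.connect() as conn:
    conn.execute(text(
        "CREATE INDEX IF NOT EXISTS langchain_pg_embedding_hnsw_idx "
        "ON langchain_pg_embedding USING hnsw (embedding vector_cosine_ops)"
    ))
    conn.commit()
```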
### Your contribution
https://github.com/pgvector/pgvector?tab=readme-ov-file#hnsw
https://github.com/langchain-ai/langchain/blob/6e90b7a91bba16d84689d07d1016a941eddf4f64/libs/community/langchain_community/vectorstores/pgembedding.py#L184-L212 | PGVector method for HNSW | https://api.github.com/repos/langchain-ai/langchain/issues/15496/comments | 5 | 2024-01-03T19:31:06Z | 2024-07-08T16:04:56Z | https://github.com/langchain-ai/langchain/issues/15496 | 2,064,543,064 | 15,496 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi all,
I am trying to create an app that can act upon a natural language prompt to interact with the Monday API and then successfully carry out the relevant action.
The code I currently have is as follows:
> MondayDocs = """
>
>
> The below is Monday API documentation showing how to create an item.
>
> The endpoint is:
>
> "https://api.monday.com/v2".
>
> ///Example to create a new item:
>
>
> mutation_query = '''
> mutation {
> create_item (board_id: id, item_name: "itemname") {
> id
> }
> }
> '''
>
>
> data = {'query': mutation_query}
>
>
> response = requests.post(url=apiUrl, json=data, headers=headers)
>
>
> Use the below authorisation:
>
> headers = {"Authorization": "X"}
>
>
> """
>
>
> llm = ChatOpenAI(temperature=0, model= 'gpt-3.5-turbo-1106', openai_api_key="Y")
>
> Test = APIChain.from_llm_and_api_docs(llm, MondayDocs,limit_to_domains=None, verbose=True)
>
> Test.run("Create a new item named James.")
>
However, when I run this, I get the error message:
> No connection adapters were found
How can I fix this?
### Suggestion:
_No response_ | Issue: No connection adaptors were found | https://api.github.com/repos/langchain-ai/langchain/issues/15494/comments | 9 | 2024-01-03T18:58:45Z | 2024-04-11T16:14:00Z | https://github.com/langchain-ai/langchain/issues/15494 | 2,064,506,326 | 15,494 |
[
"langchain-ai",
"langchain"
] | ### System Info
``` bash
bash-4.2# pip freeze | grep langchain
langchain==0.0.353
langchain-community==0.0.8
langchain-core==0.1.5
bash-4.2# python --version
Python 3.10.13
bash-4.2# uname -a
Linux 5b9ca59024db 6.1.61-85.141.amzn2023.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Nov 8 00:39:18 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
```
### Who can help?
@hwchase17 It looks like `langchain-core==0.1.5` has a problem with `'tracing_enabled'` because the V1 was deleted but some references to it remained:
``` python
File <ourfile>:5
2 import os
3 from typing import Any
----> 5 from langchain.chains import create_extraction_chain_pydantic
6 from langchain.chat_models import AzureChatOpenAI
7 from langchain.chains.base import Chain
File /var/lang/lib/python3.10/site-packages/langchain/chains/__init__.py:20
1 """**Chains** are easily reusable components linked together.
2
3 Chains encode a sequence of calls to components like models, document retrievers,
(...)
17 Chain --> <name>Chain # Examples: LLMChain, MapReduceChain, RouterChain
18 """
---> 20 from langchain.chains.api.base import APIChain
21 from langchain.chains.api.openapi.chain import OpenAPIEndpointChain
22 from langchain.chains.combine_documents.base import AnalyzeDocumentChain
File /var/lang/lib/python3.10/site-packages/langchain/chains/api/base.py:11
8 from langchain_core.prompts import BasePromptTemplate
9 from langchain_core.pydantic_v1 import Field, root_validator
---> 11 from langchain.callbacks.manager import (
12 AsyncCallbackManagerForChainRun,
13 CallbackManagerForChainRun,
14 )
15 from langchain.chains.api.prompt import API_RESPONSE_PROMPT, API_URL_PROMPT
16 from langchain.chains.base import Chain
File /var/lang/lib/python3.10/site-packages/langchain/callbacks/__init__.py:45
40 from langchain_community.callbacks.whylabs_callback import WhyLabsCallbackHandler
41 from langchain_core.callbacks import (
42 StdOutCallbackHandler,
43 StreamingStdOutCallbackHandler,
44 )
---> 45 from langchain_core.tracers.context import (
46 collect_runs,
47 tracing_enabled,
48 tracing_v2_enabled,
49 )
50 from langchain_core.tracers.langchain import LangChainTracer
52 from langchain.callbacks.file import FileCallbackHandler
ImportError: cannot import name 'tracing_enabled' from 'langchain_core.tracers.context' (/var/lang/lib/python3.10/site-packages/langchain_core/tracers/context.py)
```
Reverting to `langchain-core==0.1.4` fixed the issue for me
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
Install core version `0.1.5` and `from langchain.chains import create_extraction_chain_pydantic`
### Expected behavior
No error | `langchain-core` cannot import name 'tracing_enabled' | https://api.github.com/repos/langchain-ai/langchain/issues/15491/comments | 7 | 2024-01-03T17:49:06Z | 2024-02-21T14:27:36Z | https://github.com/langchain-ai/langchain/issues/15491 | 2,064,424,956 | 15,491 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I've scoured the internet trying to find an example that uses a custom model (Mistral) with `HuggingFaceTextGenInference` and `LLMChain` to return a **streaming** response via `fastapi`.
Does anyone have a working example?
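Here's the closest I've gotten so far, an unverified sketch (server URL and model are placeholders), in case it helps frame the question:
```python
import asyncio
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from langchain.callbacks import AsyncIteratorCallbackHandler
from langchain.chains import LLMChain
from langchain.llms import HuggingFaceTextGenInference
from langchain.prompts import PromptTemplate

app = FastAPI()
prompt = PromptTemplate.from_template("[INST] {question} [/INST]")

@app.get("/ask")
async def ask(question: str):
    handler = AsyncIteratorCallbackHandler()
    llm = HuggingFaceTextGenInference(
        inference_server_url="http://localhost:8080/",  # TGI server running Mistral
        max_new_tokens=256,
        streaming=True,
        callbacks=[handler],
    )
    chain = LLMChain(llm=llm, prompt=prompt)

    async def stream():
        # Run the chain concurrently and yield tokens as they arrive.
        task = asyncio.create_task(chain.acall({"question": question}))
        async for token in handler.aiter():
            yield token
        await task

    return StreamingResponse(stream(), media_type="text/plain")
```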
### Suggestion:
_No response_ | HELP!: Example of using HuggingFaceTextGenInference, llmchain, and fastapi | https://api.github.com/repos/langchain-ai/langchain/issues/15487/comments | 5 | 2024-01-03T16:55:10Z | 2024-04-10T16:16:31Z | https://github.com/langchain-ai/langchain/issues/15487 | 2,064,352,002 | 15,487 |
[
"langchain-ai",
"langchain"
] | ### System Info
**Platform:**
Linux Ubuntu 22.04.1
**Python:**
3.10.12
**Langchain:**
- langchain 0.0.353
- langchain-community 0.0.7
- langchain-core 0.1.5
- langsmith 0.0.77
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```py
from langchain.chains import *
```
### Expected behavior
I expect the import won't fail, but I got an exception:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ubuntu/mychatassist-api/env/lib/python3.10/site-packages/langchain/chains/__init__.py", line 20, in <module>
from langchain.chains.api.base import APIChain
File "/home/ubuntu/mychatassist-api/env/lib/python3.10/site-packages/langchain/chains/api/base.py", line 11, in <module>
from langchain.callbacks.manager import (
File "/home/ubuntu/mychatassist-api/env/lib/python3.10/site-packages/langchain/callbacks/__init__.py", line 10, in <module>
from langchain_community.callbacks.aim_callback import AimCallbackHandler
File "/home/ubuntu/mychatassist-api/env/lib/python3.10/site-packages/langchain_community/callbacks/__init__.py", line 24, in <module>
from langchain_community.callbacks.manager import (
File "/home/ubuntu/mychatassist-api/env/lib/python3.10/site-packages/langchain_community/callbacks/manager.py", line 14, in <module>
from langchain_community.callbacks.tracers.wandb import WandbTracer
File "/home/ubuntu/mychatassist-api/env/lib/python3.10/site-packages/langchain_community/callbacks/tracers/__init__.py", line 4, in <module>
from langchain_core.tracers.langchain_v1 import LangChainTracerV1
ModuleNotFoundError: No module named 'langchain_core.tracers.langchain_v1'
``` | ModuleNotFoundError at import langchain.chains | https://api.github.com/repos/langchain-ai/langchain/issues/15484/comments | 16 | 2024-01-03T15:56:43Z | 2024-04-11T16:15:09Z | https://github.com/langchain-ai/langchain/issues/15484 | 2,064,266,659 | 15,484 |
[
"langchain-ai",
"langchain"
] | ### Feature request
ChatGLM3 has added many new features compared to the earlier ChatGLM and ChatGLM2, which is particularly useful. So there will definitely be more demand for using LangChain to build knowledge bases on top of it, and it's unclear how long the community adaptation will take.
ChatGLM3 GitHub: https://github.com/THUDM/ChatGLM3
### Motivation
ChatGLM3 has added many new features compared to the earlier ChatGLM and ChatGLM2, which is particularly useful. So there will definitely be more demand for using LangChain to build knowledge bases on top of it, and it's unclear how long the community adaptation will take.
ChatGLM3 GitHub: https://github.com/THUDM/ChatGLM3
### Your contribution
I am sorry, I cannot. | How long can I use Langchain to call the chatglm3 API | https://api.github.com/repos/langchain-ai/langchain/issues/15479/comments | 2 | 2024-01-03T15:09:25Z | 2024-04-17T16:18:32Z | https://github.com/langchain-ai/langchain/issues/15479 | 2,063,619,530 | 15,479
[
"langchain-ai",
"langchain"
] | ### Feature request
DynamoDBChatMessageHistory class is missing a TTL feature that would allow for history to automatically expire and be deleted by AWS DynamoDB service.
### Motivation
While implementing chat history using DynamoDBChatMessageHistory, I encountered a growing session-history table. Since AWS DynamoDB supports automatic deletion of items via TTL, it would be nice to have this feature enabled in the DynamoDBChatMessageHistory class when writing messages into the AWS DynamoDB table.
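Conceptually, what I have in mind is something like this rough boto3 sketch (`expireAt` is a hypothetical TTL attribute name; `session_id`, `serialized_messages`, and `table` are assumed to exist):
```python
import time

ttl_seconds = 7 * 24 * 60 * 60  # e.g. keep history for a week

item = {
    "SessionId": session_id,
    "History": serialized_messages,
    "expireAt": int(time.time()) + ttl_seconds,  # attribute registered as the table's TTL
}
table.put_item(Item=item)
```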
### Your contribution
I am currently in the process of submitting a PR related to this feature request. | Add TTL support for DynamoDBChatMessageHistory | https://api.github.com/repos/langchain-ai/langchain/issues/15477/comments | 2 | 2024-01-03T14:29:38Z | 2024-01-30T15:50:29Z | https://github.com/langchain-ai/langchain/issues/15477 | 2,064,133,232 | 15,477 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
How do I use SQLDatabaseChain with a big db schema?
I'm using:
```python
db = SQLDatabase.from_uri(f"postgresql://localhost:5432/test")
db_chain = SQLDatabaseChain(llm=llm, database=db, verbose=True)
```
All models complain about the context window. A smaller database schema works with some models, but that's not my use case: my schemas _are_ big. Is there a tried and tested method for this?
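One mitigation I know of (a sketch; the table names are hypothetical) is to expose only the tables a given chain needs, so the schema injected into the prompt stays small:
```python
db = SQLDatabase.from_uri(
    "postgresql://localhost:5432/test",
    include_tables=["orders", "customers"],  # hypothetical subset
    sample_rows_in_table_info=1,  # fewer sample rows also shrinks the prompt
)
```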
### Suggestion:
_No response_ | Issue: Large database schema too big for context window using SQLDatabaseChain | https://api.github.com/repos/langchain-ai/langchain/issues/15476/comments | 3 | 2024-01-03T14:05:12Z | 2024-04-11T16:13:57Z | https://github.com/langchain-ai/langchain/issues/15476 | 2,064,094,863 | 15,476 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Dear all,
Happy new year!
Due to work legacy stuff, I am still forced to use this library. I am wondering if there is a way to pass multiple prompt templates (system, human and ai) instead of just one.
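For instance, something along these lines is what I'm after (a sketch):
```python
from langchain.prompts import ChatPromptTemplate

# Several role-specific templates composed into one chat prompt.
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant specialised in {domain}."),
    ("human", "{question}"),
    ("ai", "Understood. Here is my answer:"),
])
chain = prompt | llm  # `llm` defined elsewhere
```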
Hopefully my man Dosubot can help!
Thank you so much
Cheers,
Fra
### Idea or request for content:
_No response_ | Using multiple templates as starter for LCEL | https://api.github.com/repos/langchain-ai/langchain/issues/15475/comments | 2 | 2024-01-03T13:08:44Z | 2024-04-30T16:14:42Z | https://github.com/langchain-ai/langchain/issues/15475 | 2,064,012,239 | 15,475 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```python
document_variable_name = "\n\n---\n\n".join([doc.page_content for doc in result["source_documents"]])

model = ChatGoogleGenerativeAI(model="gemini-pro", google_api_key=GOOGLE_API_KEY, temperature=0.2, convert_system_message_to_human=True)

template = f"""Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer. Keep the answer as concise as possible. Always say "thanks for asking!" at the end of the answer.
{context}
Question: {question}
Helpful Answer:"""

QA_CHAIN_PROMPT = PromptTemplate(
    input_variables=["document_variable_name", "question"],
    template=template,
)
print(QA_CHAIN_PROMPT)

qa_chain = RetrievalQA.from_chain_type(
    llm=model,
    retriever=vector_index,
    return_source_documents=True,
    chain_type="stuff",
    document_variable_name=context,
    chain_type_kwargs={"prompt": QA_CHAIN_PROMPT},
)
```
below is the error I am getting:
ValidationError: 1 validation error for StuffDocumentsChain
__root__
document_variable_name context was not found in llm_chain input_variables: [] (type=value_error)
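For reference, the wiring I'd expect to validate is a sketch like this, leaving `{context}` and `{question}` as template variables rather than f-string substitutions, since (as far as I can tell) the stuff chain's default document variable is "context":
```python
template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know. Keep the answer as concise as possible.

{context}

Question: {question}
Helpful Answer:"""

QA_CHAIN_PROMPT = PromptTemplate(template=template, input_variables=["context", "question"])

qa_chain = RetrievalQA.from_chain_type(
    llm=model,
    retriever=vector_index,
    return_source_documents=True,
    chain_type="stuff",
    chain_type_kwargs={"prompt": QA_CHAIN_PROMPT},  # default document_variable_name is "context"
)
```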
### Suggestion:
_No response_ | Issue: Issue while implementing RAG with Gemini LLM | https://api.github.com/repos/langchain-ai/langchain/issues/15474/comments | 5 | 2024-01-03T11:35:01Z | 2024-04-29T16:11:30Z | https://github.com/langchain-ai/langchain/issues/15474 | 2,063,844,827 | 15,474 |
[
"langchain-ai",
"langchain"
] | ### System Info
python 3.11
langchain==0.0.350
### Who can help?
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
This is the current implementation of the `S3FileLoader` class (https://github.com/langchain-ai/langchain/blob/baeac23/libs/community/langchain_community/document_loaders/s3_file.py#L13-L30):
```
class S3FileLoader(UnstructuredBaseLoader):
    """Load from `Amazon AWS S3` file."""

    def __init__(
        self,
        bucket: str,
        key: str,
        *,
        region_name: Optional[str] = None,
        api_version: Optional[str] = None,
        use_ssl: Optional[bool] = True,
        verify: Union[str, bool, None] = None,
        endpoint_url: Optional[str] = None,
        aws_access_key_id: Optional[str] = None,
        aws_secret_access_key: Optional[str] = None,
        aws_session_token: Optional[str] = None,
        boto_config: Optional[botocore.client.Config] = None,
    ):
        ...
```
The existing `__init__()` method and the current implementation of this class don't support extra `unstructured_kwargs` arguments, so there is no way to control the `partition` call through such arguments (https://github.com/langchain-ai/langchain/blob/baeac23/libs/community/langchain_community/document_loaders/s3_file.py#L126).
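For illustration, a minimal sketch of the requested pass-through (mirroring the sibling Unstructured loaders; the argument list is abbreviated):
```python
from typing import Any

class S3FileLoader(UnstructuredBaseLoader):
    def __init__(self, bucket: str, key: str, **unstructured_kwargs: Any):
        super().__init__(**unstructured_kwargs)  # later forwarded to partition()
        self.bucket = bucket
        self.key = key
```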
### Expected behavior
The implementation of the `S3FileLoader` class should follow the common approach used by other descendants of `UnstructuredBaseLoader`, which support passing extra `**unstructured_kwargs` arguments to control the `partition` call in a more granular way. | S3FileLoader doesn't provide the way of passing extra (unstructured_kwargs) parameters | https://api.github.com/repos/langchain-ai/langchain/issues/15472/comments | 2 | 2024-01-03T11:04:15Z | 2024-03-27T22:03:50Z | https://github.com/langchain-ai/langchain/issues/15472 | 2,063,787,050 | 15,472
[
"langchain-ai",
"langchain"
] | ### Feature request
Hello
Best practice is to put comments inside the class/function; however, in many cases comments/TODOs are placed just before it.
Splitting such a comment from its class/function hides free text that helps explain the purpose and intention of the class/function, and can attach unrelated text to the preceding code block, so chunks get misinterpreted.
A visual example:
```
import module

# TODO fix bug described in JIRA-WL313, wrong calculation in corner case
def class1():
    pass

# TODO make classes great again, by adding better parsing
def class2():
    pass
```
chunks would be
chunk1
```
import module
#TODO fix bug
```
chunk2
```
def class1():
    pass

# TODO make class great again one day
```
chunk3
```
def class2():
    pass
```
I suggest making the split aware of such a possibility.
Best regards
Ilya
### Motivation
Enrich code chunks by attaching relevant information to the right chunk, even when the author didn't follow best practice.
### Your contribution
I would be happy to submit a PR if you think it will improve LangChain, which I absolutely love. | in libs/langchain/langchain/text_splitter.py comments before class/function will be splitter from class/function itself | https://api.github.com/repos/langchain-ai/langchain/issues/15471/comments | 3 | 2024-01-03T10:45:00Z | 2024-04-10T16:13:31Z | https://github.com/langchain-ai/langchain/issues/15471 | 2,063,750,924 | 15,471
[
"langchain-ai",
"langchain"
] | ### Feature request
Add document loader for CHM (Microsoft Compiled HTML Help) documents, possibly using pychm.
### Motivation
A lot of Windows applications provide documentation in the form of CHM files. Being able to directly load those into the language model, would greatly simplify the workflow of ingesting documentation.
### Your contribution
At this time, I'm unable to provide help with writing the loader. | Support CHM files | https://api.github.com/repos/langchain-ai/langchain/issues/15469/comments | 2 | 2024-01-03T09:57:30Z | 2024-01-07T17:28:54Z | https://github.com/langchain-ai/langchain/issues/15469 | 2,063,656,337 | 15,469 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
The following is the code I ran, referring to the official documentation. It looks like prompt_template has no effect, and there is no explicit use of prompt_template anywhere in the chain. How should I customize a prompt for initialize_agent?
```python
prompt_template = """
Translate the sentence into English
{input}
"""
prompt_template = PromptTemplate.from_template(prompt_template)
llm = AzureOpenAI(
    model_name="gpt-4",
    engine="gpt-4"
)

@tool
def get_word_length(word):
    """get the length of a word"""
    return len(word)

tools = [get_word_length]

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    prompt=prompt_template
)

answer = agent.run("今天天气怎么样")  # "How is the weather today?"
```
The output is
```
> Entering new AgentExecutor chain...
This is a Chinese language query asking about the weather today. I don't have access to real-time weather data and besides, my provided actions are limited to getting the length of a word.
Final Answer: I'm sorry, I cannot answer that question.
```
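If it helps, my current understanding (unverified) is that ZERO_SHOT_REACT_DESCRIPTION builds its own ReAct prompt and ignores a bare `prompt=` kwarg; customisation seems to go through `agent_kwargs` instead, e.g.:
```python
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    agent_kwargs={
        # prefix/suffix are spliced into the agent's own ReAct prompt
        "prefix": "Translate the user's sentence into English before reasoning.",
    },
)
```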
### Suggestion:
_No response_ | Issue: How to customize prompt | https://api.github.com/repos/langchain-ai/langchain/issues/15467/comments | 3 | 2024-01-03T09:38:42Z | 2024-04-10T16:08:12Z | https://github.com/langchain-ai/langchain/issues/15467 | 2,063,619,530 | 15,467 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
below is my code
```python
context_text = "\n\n---\n\n".join([doc.page_content for doc in result["source_documents"]])
print(context_text, "======================")

question = "Describe the Multi-head attention layer in detail?"

embeddings = GoogleGenerativeAIEmbeddings(model="models/embedding-001", google_api_key=GOOGLE_API_KEY)
vector_index = Chroma.from_texts(texts, embeddings).as_retriever(search_kwargs={"k": 5})

template = f"""Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer. Keep the answer as concise as possible. Always say "thanks for asking!" at the end of the answer.
{context_text}
Question: {question}
Helpful Answer:"""

QA_CHAIN_PROMPT = PromptTemplate.from_template(template)  # Run chain
qa_chain = RetrievalQA.from_chain_type(
    model,
    retriever=vector_index,
    return_source_documents=True,
    chain_type_kwargs={"prompt": QA_CHAIN_PROMPT, "document_variable_name": "source_documents"}
)
```
below is the error I am getting
ValidationError Traceback (most recent call last)
<ipython-input-99-f9533899bc31> in <cell line: 9>()
7 Helpful Answer:"""
8 QA_CHAIN_PROMPT = PromptTemplate.from_template(template)# Run chain
----> 9 qa_chain = RetrievalQA.from_chain_type(
10 model,
11 retriever=vector_index,
4 frames
/usr/local/lib/python3.10/dist-packages/pydantic/main.cpython-310-x86_64-linux-gnu.so in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for StuffDocumentsChain
__root__
document_variable_name source_documents was not found in llm_chain input_variables: [] (type=value_error)
### Suggestion:
_No response_ | Issue:While implementing Gemini, getting error while using prompt template | https://api.github.com/repos/langchain-ai/langchain/issues/15466/comments | 2 | 2024-01-03T09:36:43Z | 2024-04-10T16:13:27Z | https://github.com/langchain-ai/langchain/issues/15466 | 2,063,615,767 | 15,466 |
[
"langchain-ai",
"langchain"
] | ### System Info
Mint 20.3
Python 3.11
Conda 23.9
Pip 23.3.2
Setuptools 69.0.3
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Clone the GitHub repo
2. `cd` into that folder
3. Activate the conda env
4. Run `pip install -e .`
### Expected behavior
Langchain should be installed from source, but it isn't. | "Multiple top-level packages discovered in a flat-layout" when installing from source | https://api.github.com/repos/langchain-ai/langchain/issues/15465/comments | 2 | 2024-01-03T09:35:31Z | 2024-01-03T09:37:30Z | https://github.com/langchain-ai/langchain/issues/15465 | 2,063,613,427 | 15,465
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hello Team,
Let me clarify the issue below:
We are using a vector-DB conversational retriever; the attached code gives more context. My issue: I have 4 documents stored across 2 different indexes. **The end user can select multiple documents (say 3 or 4) and ask questions about any of them; they don't care whether the selected documents live in the same index.** As backend logic we can build an index-to-document map that tells us which documents exist in which index.
**The problem is that I need to pass, in one search, all the indexes where the selected 3-4 documents live, and there may be more than 2 such indexes.**
Is there a way to pass multiple indexes to the search in the function below? I see the index argument accepts a string. Please let me know if you can suggest a better workaround than the one we currently use.
**The workaround we currently use:**
We create a temporary index on the fly, copy into it all the embeddings and text from every index containing the selected documents, run the query against it, and delete the temporary index within the same request.
Let me know in case you need any additional details.

### Suggestion:
_No response_ | Issue: Passing list of index while retreiver in opensearch vectorstore | https://api.github.com/repos/langchain-ai/langchain/issues/15458/comments | 1 | 2024-01-03T08:12:56Z | 2024-04-10T16:08:25Z | https://github.com/langchain-ai/langchain/issues/15458 | 2,063,465,461 | 15,458 |
[
"langchain-ai",
"langchain"
] | I am trying to build a langchain SQL database agent where, for now, I want to query only one view. I have mentioned the view name in the system prompt and passed view_support=True to the SQLDatabase constructor. When I run a query, the agent tries to find tables instead of views. I suspect the agent has no method that can fetch the view names of the database: it calls 'sql_db_list_tables' and then 'sql_db_schema'.
look at the code below:
```
import os
from dotenv import load_dotenv
load_dotenv()
from sqlalchemy import create_engine
odbc_str = 'mssql+pyodbc:///?odbc_connect=' \
'Driver={ODBC Driver 18 for SQL Server}' \
';Server=tcp:' + os.getenv("DB_SERVER")+';PORT=1453' + \
';Authentication=ActiveDirectoryPassword' + \
';DATABASE=' + os.getenv("DB_NAME") + \
';Uid='+os.getenv("UID")+ \
';Pwd='+os.getenv("PWD")+ \
';Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;'
print(odbc_str)
db_engine = create_engine(odbc_str)
```
```
from langchain.chat_models import AzureChatOpenAI
llm = AzureChatOpenAI(
openai_api_type=os.getenv("OPENAI_API_TYPE"),
api_version=os.getenv("OPENAI_API_VERSION"),
azure_deployment=os.getenv("DEPLOYMENT_NAME"),
model=os.getenv("OPENAI_CHAT_MODEL"),
azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
api_key=os.getenv("AZURE_OPENAI_API_KEY"),
temperature=0,
verbose= True)
```
```
from langchain.prompts.chat import ChatPromptTemplate
final_prompt = ChatPromptTemplate.from_messages(
[
("system",
"""
You are a helpful AI assistant expert in querying SQL Database to
find answers to user's question about database view: XXXXXXXXXX.vw_YYYYYYYYY
where XXXXXXXXXX is the schema and vw_YYYYYYYYY is the view. You need to directly query
this view: XXXXXXXXXX.vw_YYYYYYYYY for all user's questions.
Note: DO NOT EXECUTE DROP, DELETE or UPDATE SQL QUERIES.
"""
),
("user", "{question}\n ai: "),
]
)
```
```
from langchain.agents import AgentType, create_sql_agent
from langchain.sql_database import SQLDatabase
from langchain.agents.agent_toolkits.sql.toolkit import SQLDatabaseToolkit
db = SQLDatabase(db_engine, view_support=True)
sql_toolkit = SQLDatabaseToolkit(db=db, llm=llm)
sql_toolkit.get_tools()
sqldb_agent = create_sql_agent(
llm=llm,
toolkit=sql_toolkit,
agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
verbose=True,
)
sqldb_agent.run(final_prompt.format(
question="What is the Exposure for id: 574182 by type ?"
))
```
AGENT ANSWER:
> Entering new AgentExecutor chain...
Action: sql_db_list_tables
Action Input: ""
Observation: xyz, abc, pqr
Thought:The table I need to query, XXXXXXXXXX.vw_YYYYYYYYY, is not listed in the database tables. I need to check the schema of the view to understand its structure and the fields it contains.
Action: sql_db_schema
Action Input: "XXXXXXXXXX.vw_YYYYYYYYY"
Observation: Error: table_names {'XXXXXXXXXX.vw_YYYYYYYYY'} not found in database
Thought:It seems like there is an error in finding the view XXXXXXXXXX.vw_YYYYYYYYY in the database. I need to inform the user about this.
Final Answer: I'm sorry, but it seems like the view 'XXXXXXXXXX.vw_YYYYYYYYY' does not exist in the database. Please check the view name and try again.
> Finished chain.
One option is to override the SQL agent's action methods or add new methods, but I don't want to do that; per the prompt instructions, the agent should find the view in the schema and query it successfully.
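A sketch of one possible fix (the schema and view names below are the placeholders from above): point `SQLDatabase` at the view's schema and whitelist the view, so that `sql_db_list_tables` can see it.

```python
from langchain.sql_database import SQLDatabase

db = SQLDatabase(
    db_engine,
    schema="XXXXXXXXXX",              # the schema that owns the view
    view_support=True,                # reflect views as well as tables
    include_tables=["vw_YYYYYYYYY"],  # restrict the toolkit to this view
)
```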
Please help if anyone has achieved this. Thanks in advance. | Langchain SQL Database Agent failed to find the view name in the MS SQL database. | https://api.github.com/repos/langchain-ai/langchain/issues/15457/comments | 5 | 2024-01-03T08:03:31Z | 2024-04-10T16:17:08Z | https://github.com/langchain-ai/langchain/issues/15457 | 2,063,449,745 | 15,457 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi, I want to know if there is a way to use ConversationChains and Agents with NVIDIA's NeMo Guardrails.
Thanks
### Suggestion:
_No response_ | Issues using Nemo rails with ConversationChains and Agents. | https://api.github.com/repos/langchain-ai/langchain/issues/15456/comments | 1 | 2024-01-03T07:30:33Z | 2024-04-10T16:12:57Z | https://github.com/langchain-ai/langchain/issues/15456 | 2,063,398,062 | 15,456 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Allow using [bind parameters](https://docs.sqlalchemy.org/en/20/core/connections.html#sqlalchemy.engine.Connection.execute.params.parameters) in SQLDatabase's [run method](https://github.com/langchain-ai/langchain/blob/65afc13b8b53a1ca41a1a3998dad9eb8d83ca917/libs/community/langchain_community/utilities/sql_database.py#L426).
### Motivation
I don't see a way to use bind params in SQLDatabase queries.
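For illustration, a hypothetical sketch of what the requested call could look like (the `parameters` argument is the proposed addition, not the current signature):

```python
result = db.run(
    "SELECT name FROM employees WHERE department = :dept LIMIT :top_k",
    parameters={"dept": "engineering", "top_k": 5},  # bound server-side, not string-interpolated
)
```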
### Your contribution
Happy to help out with a PR if needed (and if confirmed that this functionality doesn't exist and is wanted :) ) | Allow bind variables in SQLDatabase queries | https://api.github.com/repos/langchain-ai/langchain/issues/15449/comments | 1 | 2024-01-03T05:26:19Z | 2024-04-10T16:08:54Z | https://github.com/langchain-ai/langchain/issues/15449 | 2,063,266,287 | 15,449 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.353
langchain-community 0.0.7
langchain-core 0.1.4
langchain-experimental 0.0.47
Python 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0] on linuxd
openai 0.28.0
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import os
from langchain.llms import AzureOpenAI
from langchain.agents import initialize_agent, AgentType, AgentExecutor
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate, ChatPromptTemplate, MessagesPlaceholder
from langchain.tools import tool
import openai

os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_VERSION"] = "2023-06-01-preview"
os.environ["OPENAI_API_BASE"] = "xxx"
os.environ["OPENAI_API_KEY"] = "xxx"

prompt_template = ChatPromptTemplate.from_messages([
    ("system", "You are a powerful assistant. But you do not know how to calculate the length of a word"),
    ("user", "{input}"),
])

llm = AzureOpenAI(
    model_name="gpt-4",
    engine="gpt-4"
)

@tool
def get_word_length(word):
    """get the length of a word"""
    return len(word)

tools = [get_word_length]

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.OPENAI_FUNCTIONS,
    verbose=True
)

agent.run("how many letters are there in Oslo and Beijing?")
```
```
> Entering new AgentExecutor chain...
Traceback (most recent call last):
File "/mnt/workspace/workgroup/lengmou/Tars-Code-Agent/pages/5.py", line 115, in <module>
agent.run("how many letters are there in Oslo and Beijing?")
File "/mnt/workspace/workgroup/shared/miniconda3/envs/code-agent/lib/python3.10/site-packages/langchain/chains/base.py", line 507, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
File "/mnt/workspace/workgroup/shared/miniconda3/envs/code-agent/lib/python3.10/site-packages/langchain/chains/base.py", line 312, in __call__
raise e
File "/mnt/workspace/workgroup/shared/miniconda3/envs/code-agent/lib/python3.10/site-packages/langchain/chains/base.py", line 306, in __call__
self._call(inputs, run_manager=run_manager)
File "/mnt/workspace/workgroup/shared/miniconda3/envs/code-agent/lib/python3.10/site-packages/langchain/agents/agent.py", line 1312, in _call
next_step_output = self._take_next_step(
File "/mnt/workspace/workgroup/shared/miniconda3/envs/code-agent/lib/python3.10/site-packages/langchain/agents/agent.py", line 1038, in _take_next_step
[
File "/mnt/workspace/workgroup/shared/miniconda3/envs/code-agent/lib/python3.10/site-packages/langchain/agents/agent.py", line 1038, in <listcomp>
[
File "/mnt/workspace/workgroup/shared/miniconda3/envs/code-agent/lib/python3.10/site-packages/langchain/agents/agent.py", line 1066, in _iter_next_step
output = self.agent.plan(
File "/mnt/workspace/workgroup/shared/miniconda3/envs/code-agent/lib/python3.10/site-packages/langchain/agents/openai_functions_agent/base.py", line 96, in plan
predicted_message = self.llm.predict_messages(
File "/mnt/workspace/workgroup/shared/miniconda3/envs/code-agent/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 946, in predict_messages
content = self(text, stop=_stop, **kwargs)
File "/mnt/workspace/workgroup/shared/miniconda3/envs/code-agent/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 892, in __call__
self.generate(
File "/mnt/workspace/workgroup/shared/miniconda3/envs/code-agent/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 666, in generate
output = self._generate_helper(
File "/mnt/workspace/workgroup/shared/miniconda3/envs/code-agent/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 553, in _generate_helper
raise e
File "/mnt/workspace/workgroup/shared/miniconda3/envs/code-agent/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 540, in _generate_helper
self._generate(
File "/mnt/workspace/workgroup/shared/miniconda3/envs/code-agent/lib/python3.10/site-packages/langchain_community/llms/openai.py", line 1152, in _generate
"token_usage": full_response["usage"],
KeyError: 'usage'
```
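A sketch of a likely workaround (the deployment name is an assumption): `AgentType.OPENAI_FUNCTIONS` expects a chat model, so using `AzureChatOpenAI` instead of the completion-style `AzureOpenAI` avoids the completion code path that assumes a `usage` field in the response.

```python
from langchain.chat_models import AzureChatOpenAI

# Chat deployment instead of a completion LLM; "gpt-4" is a placeholder name.
llm = AzureChatOpenAI(deployment_name="gpt-4", temperature=0)

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.OPENAI_FUNCTIONS,
    verbose=True,
)
agent.run("how many letters are there in Oslo and Beijing?")
```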
### Expected behavior
Please look at this issue, thanks! | KeyError: 'usage' | https://api.github.com/repos/langchain-ai/langchain/issues/15448/comments | 3 | 2024-01-03T04:03:31Z | 2024-04-10T16:15:30Z | https://github.com/langchain-ai/langchain/issues/15448 | 2,063,217,500 | 15,448 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi, I'm using an LLM chain and would like to stream its output. I'm writing a function that consumes its tokens. I can get the tokens one by one, but how can I tell whether a given token is the last token of the response?
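A minimal sketch of the usual pattern (assuming a callback-based streaming setup): there is no "last token" flag on the token itself; completion is signaled by `on_llm_end` firing, or simply by the streaming loop ending.

```python
from langchain.callbacks.base import BaseCallbackHandler

class StreamHandler(BaseCallbackHandler):
    def on_llm_new_token(self, token: str, **kwargs) -> None:
        print(token, end="", flush=True)   # called once per token

    def on_llm_end(self, response, **kwargs) -> None:
        print("\n[stream finished]")       # the previous token was the last one
```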
### Suggestion:
_No response_ | Issue: streaming output. | https://api.github.com/repos/langchain-ai/langchain/issues/15445/comments | 1 | 2024-01-03T03:29:06Z | 2024-04-10T16:12:53Z | https://github.com/langchain-ai/langchain/issues/15445 | 2,063,195,674 | 15,445 |
[
"langchain-ai",
"langchain"
] | ### Feature request
If the number of in-memory replicas is increased in Milvus, `replica_number` must be set when loading the collection in langchain, but it defaults to 1 and cannot be changed.
> pymilvus.exceptions.MilvusException: <MilvusException: (code=1100, message=failed to load collection: can't change the replica number for loaded collection: expected=3, actual=1: invalid parameter)>
When creating the Milvus object, please change it so that `replica_number` can be passed as an argument.
### Motivation
There is no way to change `replica_number` in langchain when in-memory replica is increased in milvus.
### Your contribution
- `milvus.py`
```python3
def __init__(
    self,
    embedding_function: Embeddings,
    collection_name: str = "LangChainCollection",
    connection_args: Optional[dict[str, Any]] = None,
    consistency_level: str = "Session",
    index_params: Optional[dict] = None,
    search_params: Optional[dict] = None,
    drop_old: Optional[bool] = False,
    *,
    primary_field: str = "pk",
    text_field: str = "text",
    vector_field: str = "vector",
    replica_number: int = 1,  # added parameter
):
    """Initialize the Milvus vector store."""
    try:
        from pymilvus import Collection, utility
    except ImportError:
        raise ValueError(
            "Could not import pymilvus python package. "
            "Please install it with `pip install pymilvus`."
        )
    # Default search params when one is not provided.
    self.default_search_params = {
        "IVF_FLAT": {"metric_type": "L2", "params": {"nprobe": 10}},
        "IVF_SQ8": {"metric_type": "L2", "params": {"nprobe": 10}},
        "IVF_PQ": {"metric_type": "L2", "params": {"nprobe": 10}},
        "HNSW": {"metric_type": "L2", "params": {"ef": 10}},
        "RHNSW_FLAT": {"metric_type": "L2", "params": {"ef": 10}},
        "RHNSW_SQ": {"metric_type": "L2", "params": {"ef": 10}},
        "RHNSW_PQ": {"metric_type": "L2", "params": {"ef": 10}},
        "IVF_HNSW": {"metric_type": "L2", "params": {"nprobe": 10, "ef": 10}},
        "ANNOY": {"metric_type": "L2", "params": {"search_k": 10}},
        "AUTOINDEX": {"metric_type": "L2", "params": {}},
    }
    self.embedding_func = embedding_function
    self.collection_name = collection_name
    self.index_params = index_params
    self.search_params = search_params
    self.consistency_level = consistency_level
    # In order for a collection to be compatible, pk needs to be auto'id and int
    self._primary_field = primary_field
    # In order for compatibility, the text field will need to be called "text"
    self._text_field = text_field
    # In order for compatibility, the vector field needs to be called "vector"
    self._vector_field = vector_field
    self.fields: list[str] = []
    # Create the connection to the server
    if connection_args is None:
        connection_args = DEFAULT_MILVUS_CONNECTION
    self.alias = self._create_connection_alias(connection_args)
    self.col: Optional[Collection] = None
    # Grab the existing collection if it exists
    if utility.has_collection(self.collection_name, using=self.alias):
        self.col = Collection(
            self.collection_name,
            using=self.alias,
        )
    # If need to drop old, drop it
    if drop_old and isinstance(self.col, Collection):
        self.col.drop()
        self.col = None
    # Initialize the vector store
    self._init(replica_number=replica_number)
```
Please allow `replica_number` to be passed as an argument, in the same way as above. | milvus replica number factorization | https://api.github.com/repos/langchain-ai/langchain/issues/15442/comments | 1 | 2024-01-03T02:58:01Z | 2024-01-05T04:07:25Z | https://github.com/langchain-ai/langchain/issues/15442 | 2,063,178,107 | 15,442 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Gemini API is not available in Canada, but I believe it is available through `vertexai.preview.generative_models` in pre-GA mode.
Would it be possible to add support that goes through the Vertex AI SDK instead of the Gemini API, which I assume is what the integration currently uses?
### Motivation
Canada access to Gemini through Langchain
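For reference, a sketch of the existing Vertex AI chat integration (whether this model is reachable from Canada is an assumption to verify), which goes through the `google-cloud-aiplatform` SDK rather than the Gemini API:

```python
from langchain.chat_models import ChatVertexAI

llm = ChatVertexAI(model_name="gemini-pro")  # served via Vertex AI, pre-GA
print(llm.invoke("Hello from Canada").content)
```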
### Your contribution
I could help test. | Accessing Gemini though Vertex AI SDK | https://api.github.com/repos/langchain-ai/langchain/issues/15431/comments | 3 | 2024-01-02T21:52:47Z | 2024-01-03T17:20:09Z | https://github.com/langchain-ai/langchain/issues/15431 | 2,062,998,345 | 15,431 |
[
"langchain-ai",
"langchain"
] | ### System Info
```
langchain==0.0.352
langchain-community==0.0.6
langchain-core==0.1.3
```
Python `3.12.1` running inside Docker from `python:3.12-bookworm` on Linux.
### Who can help?
I have a FastAPI app that streams the output of an LLM. The app uses `langchain.chat_models.ChatOpenAI` at runtime, but during test I mock the LLM with `langchain.llms.fake.FakeStreamingListLLM`. However, when my app calls `.astream()` on each of them I'm getting different results:
- While `ChatOpenAI` yields instances of `AIMessageChunk`...
- `FakeStreamingListLLM` yields instances of strings.
This mismatch between the fake and the real class makes the fake unsuitable for some mocking purposes as the interface of `AIMessageChunk` is different from a `str`.
@baskaryan @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
See a snippet below on how to reproduce the string output within `ipython`:
```python
from langchain.llms.fake import FakeStreamingListLLM
import asyncio

async def pp(llm):
    async for chunk in llm.astream('some input'):
        yield f"{type(chunk)}: {chunk}"

async def consume():
    results = []
    async for item in pp(FakeStreamingListLLM(responses=['a'])):
        results.append(item)
    return results

asyncio.run(consume())
# OUTPUT:
# ["<class 'str'>: a"]
```
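A possible workaround for tests, sketched on the assumption that a fake *chat* model fits the mocking need: the chat-model fakes stream `AIMessageChunk`s the way `ChatOpenAI` does.

```python
import asyncio
from langchain_community.chat_models import FakeListChatModel

async def consume_chunks():
    fake = FakeListChatModel(responses=["a"])
    async for chunk in fake.astream("some input"):
        print(type(chunk), chunk)  # AIMessageChunk instances, not str

asyncio.run(consume_chunks())
```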
### Expected behavior
I'd expect `FakeStreamingListLLM` to return instances of `AIMessageChunk` as the real chat model. | FakeStreamingListLLM.astream() yields strings while ChatOpenAI yields AIMessageChunk | https://api.github.com/repos/langchain-ai/langchain/issues/15426/comments | 3 | 2024-01-02T18:44:18Z | 2024-04-10T16:13:01Z | https://github.com/langchain-ai/langchain/issues/15426 | 2,062,806,705 | 15,426 |
[
"langchain-ai",
"langchain"
] | ### System Info
Windows 10
Python 3.11.5
langchain==0.0.331
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I want to do a similarity search on a vectorstore created beforehand and use multiple filtering conditions. The relevant part of the code is as follows:
```
from langchain.vectorstores import Chroma
from langchain.embeddings import SentenceTransformerEmbeddings
import chromadb
db_path = "my_db"
embeddings = SentenceTransformerEmbeddings(cache_folder='intfloat/multilingual-e5-large')
chroma_client = chromadb.PersistentClient(path=db_path)
db= Chroma(persist_directory=db_path, embedding_function=embeddings, client=chroma_client)
query = "My query"
filtered = db.similarity_search_with_relevance_scores(k=5, query=query, filter={"key1":value1, "key2":value2})
```
When using one filtering condition like
```
filtered = db.similarity_search_with_relevance_scores(k=5, query=query, filter={"key1":value1})
```
the filtering condition is applied and it works fine. But when using multiple conditions, neither of the conditions is applied.
How can I use multiple filtering conditions?
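A sketch of the syntax that usually works here (the values are placeholders): Chroma's `where` filters require multiple clauses to be wrapped in an explicit `$and` (or `$or`) operator.

```python
filtered = db.similarity_search_with_relevance_scores(
    k=5,
    query=query,
    filter={"$and": [{"key1": value1}, {"key2": value2}]},
)
```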
### Expected behavior
I would expect that both conditions are applied, connected either by an "and" or an "or". | Filter conditions are discarded when using multiple filter conditions in similarity_search_with_relevance_scores | https://api.github.com/repos/langchain-ai/langchain/issues/15417/comments | 2 | 2024-01-02T17:35:24Z | 2024-08-06T10:38:34Z | https://github.com/langchain-ai/langchain/issues/15417 | 2,062,736,735 | 15,417 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi there,
I am creating a simple PDF reading application where I want to push my embeddings into Pinecone.
Here is my code:
```python
## Imports
import streamlit as st
import os
from apikey import apikey
import pickle
from PyPDF2 import PdfReader
# Streamlit - user interface
from streamlit_extras.add_vertical_space import add_vertical_space
# Langchain
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.llms import OpenAI
from langchain.chains.question_answering import load_qa_chain
from langchain.callbacks import get_openai_callback
from langchain.chat_models.openai import ChatOpenAI
from langchain.schema import (SystemMessage, HumanMessage, AIMessage)
from langchain.document_loaders import UnstructuredPDFLoader
# Pinecone
from langchain.vectorstores import Pinecone
import pinecone
from apikey import pinecone_api_key
import uuid

os.environ['OPENAI_API_KEY'] = apikey

## User Interface
# Side Bar
with st.sidebar:
    st.title('🚀 Zi-GPT Version 2.0')
    st.markdown('''
    ## About
    This app is an LLM-powered chatbot built using:
    - [Streamlit](https://streamlit.io/)
    - [LangChain](https://python.langchain.com/)
    - [OpenAI](https://platform.openai.com/docs/models) LLM model
    ''')
    add_vertical_space(5)
    st.write('Made with ❤️ by Zi')

# Main Page
def main():
    st.header("Zi's PDF Helper: Chat with PDF")

    # upload a PDF file
    pdf = st.file_uploader("Please upload your PDF here", type='pdf')

    # read PDF
    if pdf is not None:
        pdf_reader = PdfReader(pdf)

        # split document into chunks by paragraph
        # (a text splitter also works; this is good for PDFs without charts and visuals)
        sections = []
        for page in pdf_reader.pages:
            # Split the page text by paragraphs (assuming two newlines indicate a new paragraph)
            page_sections = page.extract_text().split('\n\n')
            sections.extend(page_sections)
        chunks = sections

        # text_splitter = RecursiveCharacterTextSplitter(
        #     chunk_size=500,
        #     chunk_overlap=20
        # )
        # chunks = text_splitter.split_documents(data)

        ## embeddings
        embeddings = OpenAIEmbeddings(model="text-embedding-ada-002")

        # Set up Pinecone
        pinecone.init(api_key=pinecone_api_key, environment='gcp-starter')
        index_name = 'langchainresearch'
        if index_name not in pinecone.list_indexes():
            pinecone.create_index(index_name, dimension=1536, metric="cosine")  # adjust the dimension to your embeddings
        index = pinecone.Index(index_name)
        docsearch = Pinecone.from_documents(chunks, embeddings, index_name=index_name)

    # Create or Load Chat History
    if pdf:
        chat_history_file = f"{pdf.name}_chat_history.pkl"

        # load history if it exists
        if os.path.exists(chat_history_file):
            with open(chat_history_file, "rb") as f:
                chat_history = pickle.load(f)
        else:
            chat_history = []

        # Initialize chat_history in session_state if not present
        if 'chat_history' not in st.session_state:
            st.session_state.chat_history = []

        # Check if 'prompt' is in session state
        if 'last_input' not in st.session_state:
            st.session_state.last_input = ''

        # User Input
        current_prompt = st.session_state.get('user_input', '')
        prompt_placeholder = st.empty()
        prompt = prompt_placeholder.text_area("Ask questions about your PDF:", value=current_prompt, placeholder="Send a message", key="user_input")
        submit_button = st.button("Submit")

        if submit_button and prompt:
            # Update the last input in session state
            st.session_state.last_input = prompt

            docs = docsearch.similarity_search(query=prompt, k=3)

            chat = ChatOpenAI(model='gpt-4', temperature=0.7, max_tokens=3000)
            message = [
                SystemMessage(content="You are a helpful assistant"),
                HumanMessage(content=prompt)
            ]

            chain = load_qa_chain(llm=chat, chain_type="stuff")

            with get_openai_callback() as cb:
                response = chain.run(input_documents=docs, question=message)
                print(cb)

            # Add to chat history
            st.session_state.chat_history.append((prompt, response))

            # Save chat history
            with open(chat_history_file, "wb") as f:
                pickle.dump(st.session_state.chat_history, f)

            # Clear the input after processing
            prompt_placeholder.text_area("Ask questions about your PDF:", value='', placeholder="Send a message", key="pdf_prompt")

        # Display the entire chat
        chat_content = ""
        for user_msg, bot_resp in st.session_state.chat_history:
            chat_content += f"<div style='background-color: #222222; color: white; padding: 10px;'>**You:** {user_msg}</div>"
            chat_content += f"<div style='background-color: #333333; color: white; padding: 10px;'>**Zi GPT:** {bot_resp}</div>"
        st.markdown(chat_content, unsafe_allow_html=True)

if __name__ == '__main__':
    main()
```
After running the code, I got this error message that I cannot find a solution for:
```
AttributeError: 'str' object has no attribute 'page_content'

Traceback:
File "C:\Users\zy73\AppData\Roaming\Python\Python311\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 552, in _run_script
exec(code, module.__dict__)
File "C:\Users\zy73\OneDrive\Desktop\AI Research\langchain\pdf.py", line 159, in <module>
main()
File "C:\Users\zy73\OneDrive\Desktop\AI Research\langchain\pdf.py", line 88, in main
docsearch = Pinecone.from_documents(chunks, embeddings, index_name = index_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\zy73\AppData\Roaming\Python\Python311\site-packages\langchain\schema\vectorstore.py", line 508, in from_documents
texts = [d.page_content for d in documents]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\zy73\AppData\Roaming\Python\Python311\site-packages\langchain\schema\vectorstore.py", line 508, in <listcomp>
texts = [d.page_content for d in documents]
```
Please help me debug this. Thank you
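A sketch of the likely fix: `chunks` here is a list of plain strings, while `Pinecone.from_documents` expects `Document` objects and reads `.page_content` from each, hence the error. `from_texts` accepts strings directly.

```python
docsearch = Pinecone.from_texts(chunks, embeddings, index_name=index_name)
```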
### Suggestion:
_No response_ | Issue: Pinecone Embeddings - Error | https://api.github.com/repos/langchain-ai/langchain/issues/15407/comments | 7 | 2024-01-02T15:33:53Z | 2024-01-03T19:31:42Z | https://github.com/langchain-ai/langchain/issues/15407 | 2,062,584,661 | 15,407 |
[
"langchain-ai",
"langchain"
] | ### System Info
python: 3.11
langchain:latest
### Who can help?
In a chatbot, after running a query the chain returns the SQLResult, but when producing the output answer the complete result is not displayed.
code:
```python
import pandas as pd
import sqlalchemy as sal
import os, sys, openai
import constants
from langchain.sql_database import SQLDatabase
from langchain.llms.openai import OpenAI
from langchain_experimental.sql import SQLDatabaseChain
from sqlalchemy import create_engine
from langchain.chat_models import ChatOpenAI
from typing import List, Optional
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.callbacks.manager import CallbackManagerForToolRun
from langchain_experimental.plan_and_execute import (
    PlanAndExecute,
    load_agent_executor,
    load_chat_planner,
)
from langchain.text_splitter import TokenTextSplitter
from langchain.tools import BaseTool
from langchain.tools.sql_database.tool import QuerySQLDataBaseTool
from secret_key import openapi_key
from langchain.prompts import PromptTemplate
# from langchain.models import ChatGPTClient
# from langchain.utils import save_conversation

os.environ['OPENAI_API_KEY'] = openapi_key

def chat(question):
    from urllib.parse import quote_plus
    server_name = constants.server_name
    database_name = constants.database_name
    username = constants.username
    password = constants.password
    encoded_password = quote_plus(password)
    connection_uri = f"mssql+pyodbc://{username}:{encoded_password}@{server_name}/{database_name}?driver=ODBC+Driver+17+for+SQL+Server"
    # custom_suffix = """
    # If the SQLResult is empty, the Answer should be "No results found". DO NOT hallucinate an answer if there is no result."""

    engine = create_engine(connection_uri)
    model_name = "gpt-3.5-turbo-16k"
    db = SQLDatabase(engine, view_support=True, include_tables=['EGV_emp_departments_ChatGPT'])
    # db = SQLDatabase(engine, view_support=True, include_tables=['EGV_emp_departments'])
    llm = ChatOpenAI(temperature=0, verbose=False, model=model_name)
    db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)

    PROMPT = """
    Given an input question, first create a syntactically correct mssql query to run,
    then look at the results of the query and return the SQLResult as answer.
    The question: {db_chain.run}
    """

    prompt_template = """Use the following pieces of context to answer the question at the end.
    If you don't know the answer, please think rationally and answer from your own knowledge base.
    Don't consider tables which are not mentioned; if no result matches the keyword, please return the answer as an invalid question.
    {context}
    Question: {questions}
    """
    PROMPT = PromptTemplate(
        template=prompt_template, input_variables=["context", "questions"]
    )

    def split_text(text, chunk_size, chunk_overlap=0):
        text_splitter = TokenTextSplitter(
            chunk_size=chunk_size, chunk_overlap=chunk_overlap
        )
        yield from text_splitter.split_text(text)

    class QuerySQLDatabaseTool2(QuerySQLDataBaseTool):
        def _run(
            self,
            query: str,
            run_manager: Optional[CallbackManagerForToolRun] = None,
        ) -> str:
            result = self.db.run_no_throw(query)
            return next(split_text(result, chunk_size=14_000))

    class SQLDatabaseToolkit2(SQLDatabaseToolkit):
        def get_tools(self) -> List[BaseTool]:
            tools = super().get_tools()
            original_query_tool_description = tools[0].description
            new_query_tool = QuerySQLDatabaseTool2(
                db=self.db, description=original_query_tool_description
            )
            tools[0] = new_query_tool
            return tools

    return db_chain.run(question)

answer = chat("Aadhaar number of B#########")
print(answer)
```
result:
```
> Entering new SQLDatabaseChain chain...
Aadhaar number of Bhuvaneshwari
SQLQuery:SELECT [Aadhaar Number]
FROM [EGV_emp_departments_ChatGPT]
WHERE [EmployeeName] = 'B##########'
SQLResult: [('91#######',), ('71#########',)]
Answer:91######
> Finished chain.
9########
```
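A sketch of one option (an assumption about the desired behavior): `return_direct=True` makes `SQLDatabaseChain` return the raw `SQLResult` instead of the LLM's summarized answer, so no rows are dropped.

```python
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True, return_direct=True)
answer = db_chain.run(question)  # returns the full result set, e.g. [('91...',), ('71...',)]
```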
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The same script and chain output as shown under System Info above.
### Expected behavior
Running the same script as above produces the same chain output (see the System Info section).
Instead of a single value or the top-5 rows, I need the complete result from SQLResult returned as the answer. | How to get the complete output as answer | https://api.github.com/repos/langchain-ai/langchain/issues/15404/comments | 3 | 2024-01-02T14:01:51Z | 2024-04-10T16:12:39Z | https://github.com/langchain-ai/langchain/issues/15404 | 2,062,463,881 | 15,404 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi! I am trying to replicate this tutorial https://python.langchain.com/docs/integrations/toolkits/playwright on Colab using the same code; the only difference is that I am using `ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)` instead of `ChatAnthropic(temperature=0)`.
When I run the agent (`result = await agent_chain.arun("What are the headers on langchain.com?")`), instead of the full output shown in the documentation I get:
> Entering new AgentExecutor chain...
I'll need to extract the headers from the langchain.com website. Let me do that for you.
Action:```navigate_browser```
> Finished chain.
I'll need to extract the headers from the langchain.com website. Let me do that for you.
Action:```navigate_browser```.
It seems the agent stops before returning the final answer. Any guess as to why this is happening? Thanks a lot.
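A common mitigation to try, sketched with the tutorial's setup assumed: the malformed `Action:` block suggests an output-parsing failure, which `handle_parsing_errors=True` lets the executor recover from instead of finishing the chain early.

```python
agent_chain = initialize_agent(
    tools,
    llm,
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    handle_parsing_errors=True,  # retry instead of stopping on a bad Action block
)
```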
### Suggestion:
_No response_ | Issue: not able to replicate documentation results | https://api.github.com/repos/langchain-ai/langchain/issues/15403/comments | 1 | 2024-01-02T13:55:08Z | 2024-04-09T16:15:03Z | https://github.com/langchain-ai/langchain/issues/15403 | 2,062,455,269 | 15,403 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain - 0.0.350
Python - 3.11
chromadb - 0.3.23
OS - Win 10
### Who can help?
@hwchase17
@eyur
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I just updated the version from 0.0.330 to 0.0.350 and
used the same code to create a Chroma vector store;
when calling `Chroma.from_documents(docs, embeddings)` I get this in the output:
No embedding_function provided, using default embedding function: SentenceTransformerEmbeddingFunction
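A quick sanity check, sketched as a suggestion: the warning is emitted by chromadb's own default collection setup, while langchain embeds texts with the function you pass before inserting them, so you can confirm which function is actually in use.

```python
db = Chroma.from_documents(docs, embeddings)
print(db._embedding_function)  # should print your embeddings object, not None
```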
### Expected behavior
The expected behavior is that the provided embedding function is used to create the embeddings.
The following is the fix for the issue:
line 129 should have an `embedding_function` passed instead of `None` | Warning message- No embedding_function provided, using default embedding function: SentenceTransformerEmbeddingFunction || Chroma db | https://api.github.com/repos/langchain-ai/langchain/issues/15400/comments | 1 | 2024-01-02T10:09:11Z | 2024-04-09T16:14:50Z | https://github.com/langchain-ai/langchain/issues/15400 | 2,062,195,571 | 15,400 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Links within the documentation fail to open when the URL lacks a trailing '/'. Notably, the links on the page indicated in the example URL below and other links across various pages result in `page not found` errors when the URL lacks a trailing '/'. It's crucial to clarify that both URLs below, with and without a trailing '/', point to the same page. The issue lies in the fact that links on the version without '/' do not open, giving a `page not found` error, whereas the version with '/' functions correctly without any error.
Example URL (without '/' at the end of the URL): [https://python.langchain.com/docs/modules/agents](https://python.langchain.com/docs/modules/agents)
Example URL (with '/' at the end of the URL): [https://python.langchain.com/docs/modules/agents/](https://python.langchain.com/docs/modules/agents/)
### Idea or request for content:
- **Suggested Changes:**
  1. Investigate and address the issue causing links to fail when the URL lacks a trailing '/'.
  2. Implement a fix to ensure all links across the documentation open successfully, regardless of the presence or absence of a trailing '/' in the URL.
- **Steps to Reproduce:**
  1. Navigate to [Example URL without '/'](https://python.langchain.com/docs/modules/agents) in a web browser.
  2. Attempt to click on various links within the documentation, especially those that navigate to other pages.
  3. Observe the page not found errors when the URL lacks a trailing '/'.
- **Additional Context:**
  - This issue is consistent across multiple pages in the documentation.
  - The problem is observed on Chrome, Brave, Firefox, and Bing browsers.
- Other browsers may also have this issue, but I haven't tested in them. | DOC: Resolve URL Navigation Issues - Trailing Slash Discrepancy | https://api.github.com/repos/langchain-ai/langchain/issues/15399/comments | 1 | 2024-01-02T09:54:31Z | 2024-04-09T16:14:33Z | https://github.com/langchain-ai/langchain/issues/15399 | 2,062,178,490 | 15,399 |
[
"langchain-ai",
"langchain"
] | ### System Info
```
/Users/sunwenke/miniconda3/envs/langchain/bin/python /Users/sunwenke/workspace/yongxinApi/langchain/localopenai/sql.py
Traceback (most recent call last):
File "/Users/sunwenke/workspace/yongxinApi/langchain/localopenai/sql.py", line 8, in <module>
entity_store = SQLiteEntityStore(db_file="/Users/sunwenke/workspace/yongxinApi/langchain/Chinook.db")
File "/Users/sunwenke/miniconda3/envs/langchain/lib/python3.10/site-packages/langchain/memory/entity.py", line 256, in __init__
self.conn = sqlite3.connect(db_file)
File "/Users/sunwenke/miniconda3/envs/langchain/lib/python3.10/site-packages/pydantic/v1/main.py", line 357, in __setattr__
raise ValueError(f'"{self.__class__.__name__}" object has no field "{name}"')
ValueError: "SQLiteEntityStore" object has no field "conn"
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [x] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.chains import ConversationChain
from langchain.llms import OpenAI
from langchain.memory import ConversationEntityMemory
from langchain.memory.entity import SQLiteEntityStore
from langchain.memory.prompt import ENTITY_MEMORY_CONVERSATION_TEMPLATE
from localopenai import llm

entity_store = SQLiteEntityStore(db_file="/Users/sunwenke/workspace/yongxinApi/langchain/Chinook.db")
memory = ConversationEntityMemory(llm=llm, entity_store=entity_store)
conversation = ConversationChain(
    llm=llm,
    prompt=ENTITY_MEMORY_CONVERSATION_TEMPLATE,
    memory=memory,
    verbose=True,
)
conversation.run("有多少员工")  # "How many employees are there?"
```
### Expected behavior
```python
class SQLiteEntityStore(BaseEntityStore):
    """SQLite-backed Entity store"""

    session_id: str = "default"
    table_name: str = "memory_store"

    def __init__(
        self,
        session_id: str = "default",
        db_file: str = "entities.db",
        table_name: str = "memory_store",
        *args: Any,
        **kwargs: Any,
    ):
        try:
            import sqlite3
        except ImportError:
            raise ImportError(
                "Could not import sqlite3 python package. "
                "Please install it with `pip install sqlite3`."
            )
        super().__init__(*args, **kwargs)
        self.conn = sqlite3.connect(db_file)
        self.session_id = session_id
        self.table_name = table_name
        self._create_table_if_not_exists()
```
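A sketch of the fix this report describes: declaring a `conn` field on the pydantic model so the assignment in `__init__` is permitted.

```python
from typing import Any

class SQLiteEntityStore(BaseEntityStore):
    """SQLite-backed Entity store."""

    session_id: str = "default"
    table_name: str = "memory_store"
    conn: Any = None  # declare the field so `self.conn = ...` no longer raises
```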
The assignment `self.conn = sqlite3.connect(db_file)` fails because the attribute does not exist; a `conn` attribute needs to be added to this class. | This seems to be a bug; after the change it works | https://api.github.com/repos/langchain-ai/langchain/issues/15396/comments | 1 | 2024-01-02T08:55:07Z | 2024-04-10T16:08:18Z | https://github.com/langchain-ai/langchain/issues/15396 | 2,062,114,468 | 15,396 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi Langchain Gurus,
I am trying to use SQLDatabaseChain to query and answer questions on a PostgreSQL table. So far the code that I have written uses the following Hugging Face pipeline:
```
model_name ='tiiuae/falcon-7b-instruct'
tokenizer = AutoTokenizer.from_pretrained(model_name)
pipe = pipeline("question-answering", model=model_name, tokenizer=tokenizer, torch_dtype=bfloat16)
llm = HuggingFacePipeline(pipeline=pipe)
```
The prompt template looks like the following:
```
default_prompt = """You are a postgresql expert. Given an input question, first create a syntactically correct postgresql query to run, then look at the results of the query and return the answer to the input question.
Unless the user specifies in the question a specific number of examples to obtain, query for at most {top_k} results using the LIMIT clause as per postgresql. You can order the results to return the most informative data in the database.
Never query for all columns from a table. You must query only the columns that are needed to answer the question. Wrap each column name in double quotes (") to denote them as delimited identifiers.
Pay attention to use only the column names you can see in the tables below. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table.
Pay attention to use date('now') function to get the current date, if the question involves "today".
Use the following format:
Question: Question here
SQLQuery: SQL Query to run
SQLResult: Result of the SQLQuery
Answer: Final answer here
Only use the following tables:
{table_info}
Question: {question}
"""
```
After that I am using the SQLDatabaseChain like this:
```
database = SQLDatabase.from_uri(f"postgresql+psycopg2://{db_user}:{db_password}@{db_host}:5432/{db_name}")
prompt_template = PromptTemplate.from_template(prompt_template)
print("SQLDatabase loaded.")
db_chain = SQLDatabaseChain.from_llm(llm, database, verbose=True)
db_chain.run('How many records are available in flightData table')
```
Running this throws the error:
```
Use the following format:
Question: Question here
SQLQuery: SQL Query to run
SQLResult: Result of the SQLQuery
Answer: Final answer here
Only use the following tables:
CREATE TABLE "FlightData" (
year BIGINT,
month BIGINT,
day BIGINT,
dep_time DOUBLE PRECISION,
sched_dep_time BIGINT,
dep_delay DOUBLE PRECISION,
arr_time DOUBLE PRECISION,
sched_arr_time BIGINT,
arr_delay DOUBLE PRECISION,
carrier TEXT,
flight BIGINT,
tailnum TEXT,
origin TEXT,
dest TEXT,
air_time DOUBLE PRECISION,
distance BIGINT,
hour BIGINT,
minute BIGINT,
time_hour TEXT
)
/*
3 rows from FlightData table:
year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time arr_delay carrier flight tailnum origin dest air_time distance hour minute time_hour
2013 1 1 517.0 515 2.0 830.0 819 11.0 UA 1545 N14228 EWR IAH 227.0 1400 5 15 1/1/2013 5:00
2013 1 1 533.0 529 4.0 850.0 830 20.0 UA 1714 N24211 LGA IAH 227.0 1416 5 29 1/1/2013 5:00
2013 1 1 542.0 540 2.0 923.0 850 33.0 AA 1141 N619AA JFK MIA 160.0 1089 5 40 1/1/2013 5:00
*/
Question: How many records are available in flightData table
SQLQuery: argument needs to be of type (SquadExample, dict)
```
Requesting help/pointers on how I can run this code without error and generate the correct answer.
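A sketch of the likely fix (the generation settings are assumptions): `SQLDatabaseChain` sends a free-form prompt string, so the pipeline task should be `text-generation`; the `question-answering` task expects `SquadExample`/dict inputs, which is exactly the error above.

```python
pipe = pipeline(
    "text-generation",
    model=model_name,
    tokenizer=tokenizer,
    torch_dtype=bfloat16,
    max_new_tokens=256,  # assumed; tune for your queries
)
llm = HuggingFacePipeline(pipeline=pipe)
```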
### Suggestion:
_No response_ | Issue: Unable to use SQLDatabaseChain with Falcon 7b Instruct for quering the postgresql database. | https://api.github.com/repos/langchain-ai/langchain/issues/15395/comments | 1 | 2024-01-02T08:54:23Z | 2024-04-09T16:09:38Z | https://github.com/langchain-ai/langchain/issues/15395 | 2,062,113,700 | 15,395 |
[
"langchain-ai",
"langchain"
] | ### System Info
```
pip show langchain_community
Name: langchain-community
Version: 0.0.3
```
```
python --version
Python 3.10.12
```
```
pip show langchain_core
Name: langchain-core
Version: 0.1.1
```
```
pip show pydantic
Name: pydantic
Version: 2.5.1
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
/root/jupyter/pydantic/pydantic/_migration.py:283: UserWarning: `pydantic.error_wrappers:ValidationError` has been moved to `pydantic:ValidationError`.
warnings.warn(f'`{import_path}` has been moved to `{new_location}`.')
Traceback (most recent call last):
File "/root/jupyter/GitHub-Issues/david/langC-rag-lcel.py", line 86, in <module>
chain.invoke("where did harrison work?")
File "/root/jupyter/virtualenv/slack-dev-3.10/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1514, in invoke
input = step.invoke(
File "/root/jupyter/virtualenv/slack-dev-3.10/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2040, in invoke
output = {key: future.result() for key, future in zip(steps, futures)}
File "/root/jupyter/virtualenv/slack-dev-3.10/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2040, in <dictcomp>
output = {key: future.result() for key, future in zip(steps, futures)}
File "/opt/python-3.10.12/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/opt/python-3.10.12/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/opt/python-3.10.12/lib/python3.10/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/root/jupyter/virtualenv/slack-dev-3.10/lib/python3.10/site-packages/langchain_core/retrievers.py", line 112, in invoke
return self.get_relevant_documents(
File "/root/jupyter/virtualenv/slack-dev-3.10/lib/python3.10/site-packages/langchain_core/retrievers.py", line 211, in get_relevant_documents
raise e
File "/root/jupyter/virtualenv/slack-dev-3.10/lib/python3.10/site-packages/langchain_core/retrievers.py", line 204, in get_relevant_documents
result = self._get_relevant_documents(
File "/root/jupyter/virtualenv/slack-dev-3.10/lib/python3.10/site-packages/langchain_core/vectorstores.py", line 656, in _get_relevant_documents
docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
File "/root/jupyter/virtualenv/slack-dev-3.10/lib/python3.10/site-packages/langchain_community/vectorstores/docarray/base.py", line 127, in similarity_search
results = self.similarity_search_with_score(query, k=k, **kwargs)
File "/root/jupyter/virtualenv/slack-dev-3.10/lib/python3.10/site-packages/langchain_community/vectorstores/docarray/base.py", line 106, in similarity_search_with_score
query_doc = self.doc_cls(embedding=query_embedding) # type: ignore
File "/root/jupyter/pydantic/pydantic/main.py", line 166, in __init__
__pydantic_self__.__pydantic_validator__.validate_python(data, self_instance=__pydantic_self__)
pydantic_core._pydantic_core.ValidationError: 2 validation errors for DocArrayDoc
text
Field required [type=missing, input_value={'embedding': [0.00177018... -0.018160881474614143]}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.4/v/missing
metadata
Field required [type=missing, input_value={'embedding': [0.00177018... -0.018160881474614143]}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.4/v/missing
```
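A possible workaround to try, under the assumption that the locally built pydantic 2.x in this trace clashes with the DocArray integration's pydantic-v1 code path: pin pydantic below 2 in the environment (`pip install "pydantic<2"`) and re-run, for example:

```python
from langchain_community.vectorstores import DocArrayInMemorySearch

# `embeddings` is assumed to be an embeddings object such as OpenAIEmbeddings().
store = DocArrayInMemorySearch.from_texts(["harrison worked at kensho"], embeddings)
print(store.similarity_search("where did harrison work?", k=1))
```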
### Expected behavior
it should run successfully | similarity_search get 2 validation errors for DocArrayDoc | https://api.github.com/repos/langchain-ai/langchain/issues/15394/comments | 2 | 2024-01-02T07:44:12Z | 2024-04-11T16:13:54Z | https://github.com/langchain-ai/langchain/issues/15394 | 2,062,050,508 | 15,394 |
[
"langchain-ai",
"langchain"
] | ### System Info
Platform: `Mac M1`
Python: `3.8.18`
Langchain: `0.0.350`
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Install Langchain
2. Execute the code:
``` python
from datetime import datetime
from datetime import timedelta

from langchain.vectorstores.mongodb_atlas import MongoDBAtlasVectorSearch
from langchain.embeddings import OpenAIEmbeddings
from langchain.chat_models.openai import ChatOpenAI
from langchain.chains import RetrievalQA

db_name = 'YOUR_DB_NAME'
collection_name = 'YOUR_COLLECTION_NAME'
db_connection_string = 'YOUR_STRING'

def perform_retrieval_qa(
    query: str,
    from_date: str = (datetime.now() - timedelta(days=6)).strftime("%Y-%m-%d"),
    to_date: str = datetime.now().strftime("%Y-%m-%d"),
):
    print(from_date, to_date)
    vector_search = MongoDBAtlasVectorSearch.from_connection_string(
        db_connection_string,
        f"{db_name}.{collection_name}",
        OpenAIEmbeddings(),
        index_name="default"
    )
    llm = ChatOpenAI(
        model="gpt-3.5-turbo",
        temperature=0)
    retrieval_qa_chain = RetrievalQA.from_chain_type(
        llm,
        retriever=vector_search.as_retriever(
            search_kwargs={
                'k': 70,
                'filter': {
                    'date': {
                        '$gte': from_date,
                        '$lte': to_date
                    }
                }
            },
        )
    )
    result = retrieval_qa_chain(
        {"query": query},
        return_only_outputs=True
    )
    return result["result"]
```
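A heavily hedged sketch of one thing to try: at this version the Atlas integration reads a `pre_filter` search kwarg (applied inside the `knnBeta` stage) rather than `filter`, and the exact operator syntax depends on your Atlas Search index definition.

```python
retriever = vector_search.as_retriever(
    search_kwargs={
        "k": 70,
        "pre_filter": {  # assumed Atlas Search operator syntax; verify against your index
            "range": {"path": "date", "gte": from_date, "lte": to_date}
        },
    }
)
```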
### Expected behavior
I wish the RAG should be performed with the records from the filtered dates ranges, but it is not happening. The filter is not applied and the RAG is performed with the entire data. | [Filter] Unable to filter dates with MongoDB | https://api.github.com/repos/langchain-ai/langchain/issues/15391/comments | 5 | 2024-01-02T07:03:26Z | 2024-04-09T16:13:23Z | https://github.com/langchain-ai/langchain/issues/15391 | 2,062,017,256 | 15,391 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
If I set the number of in-memory replicas in Milvus to 3 and then run the following code with langchain, the following error occurs.
```python3
vector_db = Milvus.from_documents(
    docs,
    embeddings,
    collection_name=collection,
)
```
> pymilvus.exceptions.MilvusException: <MilvusException: (code=1100, message=failed to load collection: can't change the replica number for loaded collection: expected=3, actual=1: invalid parameter)>
When I look at the langchain code, `replica_number` defaults to 1 and cannot be passed in as an argument. Can you improve this?
```python3
def _load(self) -> None:
    """Load the collection if available."""
    from pymilvus import Collection

    if isinstance(self.col, Collection) and self._get_index() is not None:
        self.col.load()
```
```python3
def load(
    self,
    partition_names: Optional[list] = None,
    replica_number: int = 1,
    timeout: Optional[float] = None,
    **kwargs,
):
```
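For illustration, a minimal sketch of the requested change (my addition; a `replica_number` parameter on the wrapper's `_load` does not exist today):
```python3
def _load(self, replica_number: int = 1) -> None:
    """Load the collection if available."""
    from pymilvus import Collection

    if isinstance(self.col, Collection) and self._get_index() is not None:
        # Forward a configurable replica count instead of hardcoding
        # pymilvus's default of 1.
        self.col.load(replica_number=replica_number)
```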
### Suggestion:
Please modify LangChain so that the `replica_number` used when loading the Milvus collection can be passed as an argument. | Issue: milvus collection load replica number | https://api.github.com/repos/langchain-ai/langchain/issues/15390/comments | 2 | 2024-01-02T06:49:44Z | 2024-04-09T16:13:24Z | https://github.com/langchain-ai/langchain/issues/15390 | 2,062,006,876 | 15,390
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```python
from langchain.chains.question_answering import load_qa_chain

template = """
{Your_Prompt}
CONTEXT:
{context}
QUESTION:
{query}
CHAT HISTORY:
{chat_history}
ANSWER:
"""
prompt = PromptTemplate(input_variables=["chat_history", "query", "context"], template=template)
memory = ConversationBufferMemory(memory_key="chat_history", input_key="query")
chain = load_qa_chain(ChatOpenAI(temperature=0), chain_type="stuff", memory=memory, prompt=prompt)
```
Is the above code correct? If so, please let me know where the `chat_history` variable comes from.
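A hedged usage sketch (my addition, assuming standard ConversationBufferMemory behavior): the prompt's `chat_history` variable is filled automatically by the memory object at run time, so the caller only passes the documents and the query:
```python
# `docs` is a list of Documents to stuff into {context} (assumed to exist).
result = chain({"input_documents": docs, "query": "What does the document say?"})
# On the next call, `chat_history` already contains the previous turn.
```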
### Suggestion:
_No response_ | Issue:Issue regarding Memory implementation | https://api.github.com/repos/langchain-ai/langchain/issues/15388/comments | 3 | 2024-01-02T06:34:26Z | 2024-04-09T16:13:52Z | https://github.com/langchain-ai/langchain/issues/15388 | 2,061,996,098 | 15,388 |
[
"langchain-ai",
"langchain"
] | ### System Info
```
langchain==0.0.335
```
```
python 3.11
```
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create simple `test.py`:
```python
from langchain.chains.llm_summarization_checker.base import LLMSummarizationCheckerChain
from langchain.llms.fake import FakeListLLM
llm = FakeListLLM(responses=[])
chain = LLMSummarizationCheckerChain.from_llm(llm)
```
2. Install `pyinstaller` via `pip install -U pyinstaller`
3. Run
```
pyinstaller test.py
```
4. Wait until it's done.
5. Run
```
$ ./dist/test/test
Traceback (most recent call last):
File "test.py", line 1, in <module>
from langchain.chains.llm_summarization_checker.base import LLMSummarizationCheckerChain
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "PyInstaller/loader/pyimod02_importers.py", line 419, in exec_module
File "langchain/chains/__init__.py", line 51, in <module>
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "PyInstaller/loader/pyimod02_importers.py", line 419, in exec_module
File "langchain/chains/llm_summarization_checker/base.py", line 20, in <module>
File "langchain_core/prompts/prompt.py", line 202, in from_file
FileNotFoundError: [Errno 2] No such file or directory: '/home/tiger/junk/tt/dist/test/_internal/langchain/chains/llm_summarization_checker/prompts/create_facts.txt'
[789522] Failed to execute script 'test' due to unhandled exception!
```
### Expected behavior
```
CREATE_ASSERTIONS_PROMPT = PromptTemplate.from_file(PROMPTS_DIR / "create_facts.txt")
CHECK_ASSERTIONS_PROMPT = PromptTemplate.from_file(PROMPTS_DIR / "check_facts.txt")
REVISED_SUMMARY_PROMPT = PromptTemplate.from_file(PROMPTS_DIR / "revise_summary.txt")
ARE_ALL_TRUE_PROMPT = PromptTemplate.from_file(PROMPTS_DIR / "are_all_true_prompt.txt")
```
These lines from https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/llm_summarization_checker/base.py should be changed to something like lazy loading, or the `.txt` prompts should simply be hard-coded as string constants.
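One possible workaround in the meantime (my suggestion; whether it picks up these prompt `.txt` files is untested) is to ask PyInstaller to bundle the package's data files:
```
pyinstaller --collect-data langchain test.py
```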
PS - I'm using `llama2` so these prompts do not work anyway. | langchain is not `pyinstaller` friendly due to dependency on external files, e.g. `llm_summarization_checker` | https://api.github.com/repos/langchain-ai/langchain/issues/15386/comments | 2 | 2024-01-02T03:17:26Z | 2024-01-02T04:09:36Z | https://github.com/langchain-ai/langchain/issues/15386 | 2,061,900,973 | 15,386 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version: 0.0.351
boto3==1.34.3
Python version: 3.11.7
### Who can help?
I use DynamoDBChatMessageHistory as the conversation history. Duplicate Human messages seem to be saved to the DynamoDB table on every turn, while each AI message is saved once.
Here are the duplicate messages:

Below is my code:
```python
def ask_stream(
    self,
    input_text: str,
    conversation_history: ConversationBufferMemory = ConversationBufferMemory(
        ai_prefix="Assistant"
    ),
    verbose: bool = False,
    **kwargs,
):
    """Processes a stream of input by invoking the engine.

    Parameters
    ----------
    conversation_history: ConversationBufferMemory
        The conversation history.
    input_text: str
        The input prompt or message.
    verbose: boolean
        Whether langchain shall show detailed logging.
    kwargs: dict
        Additional keyword arguments to pass to the engine.
        For example, you can pass in temperature to control the model's
        creativity.

    Returns
    -------
    str
        The response generated by the engine.

    Raises
    ------
    Any exceptions raised by the engine.
    """
    temp_stop = kwargs.get("stop", self.stop)
    template = """The following is a friendly conversation between a human and an AI."""
    PROMPT = PromptTemplate(input_variables=["history", "input"], template=template)
    chain = ConversationChain(
        llm=self.engine,
        memory=conversation_history,
        verbose=verbose,
        prompt=PROMPT,
    )
    return chain.predict(input=input_text, stop=temp_stop)
```
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Start a new conversation with DynamoDBChatMessageHistory as memory.
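For context, a minimal wiring sketch for step 1 (my addition; the table and session names are placeholders):
```python
from langchain.memory import ConversationBufferMemory
from langchain.memory.chat_message_histories import DynamoDBChatMessageHistory

# Placeholder table/session names for illustration only.
history = DynamoDBChatMessageHistory(table_name="SessionTable", session_id="demo-1")
conversation_history = ConversationBufferMemory(ai_prefix="Assistant", chat_memory=history)
```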
### Expected behavior
Each HumanMessage should be saved once in DynamoDB table. | DynamoDBChatMessageHistory saved Human Message duplicate | https://api.github.com/repos/langchain-ai/langchain/issues/15385/comments | 2 | 2024-01-02T03:00:50Z | 2024-01-02T08:41:53Z | https://github.com/langchain-ai/langchain/issues/15385 | 2,061,894,643 | 15,385 |
[
"langchain-ai",
"langchain"
] | ### System Info
Python=3.10
Langchain=0.0.352
langchain-community==0.0.6
I'm using a custom Bing search engine. When I asked something for which my Bing search returned no results, the "webPages" key did not exist in the response.
I had to change `search_results["webPages"]["value"]` (line 47) to:
```python
if "webPages" in search_results:
    return search_results["webPages"]["value"]
else:
    return []
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create an OpenAI Functions agent with Bing
2. Have a custom bing search
3. Search for something that will not return results from Bing
### Expected behavior
Expect the tool to return "No bing results found" and then a response from an agent | Bing Search Tool has key value error | https://api.github.com/repos/langchain-ai/langchain/issues/15384/comments | 1 | 2024-01-02T01:05:50Z | 2024-01-02T23:25:02Z | https://github.com/langchain-ai/langchain/issues/15384 | 2,061,856,348 | 15,384 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
from langchain.vectorstores.neo4j_vector import Neo4jVector
ModuleNotFoundError: No module named 'langchain.vectorstores.neo4j_vector'
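A hedged check (my note, assuming a reasonably recent release): when present, the class is re-exported from the package root, so this import may work even where the submodule path fails:
```python
from langchain.vectorstores import Neo4jVector
```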
### Idea or request for content:
The existing documentation for the Neo4J Vector Index incorrectly indicates the use of "Neo4jVector" from the "langchain.vectorstores" module. However, it appears that there is no implementation of "Neo4jVector" within the Vector stores. | ModuleNotFoundError: No module named 'langchain.vectorstores.neo4j_vector' | https://api.github.com/repos/langchain-ai/langchain/issues/15383/comments | 2 | 2024-01-01T22:32:24Z | 2024-01-02T03:25:12Z | https://github.com/langchain-ai/langchain/issues/15383 | 2,061,806,618 | 15,383 |
[
"langchain-ai",
"langchain"
] | ### Feature request
To add Async Client support to MongoDB Vector Stores
### Motivation
Currently, LangChain works very well with the synchronous PyMongo client, but async clients like Motor throw an error; presumably async support has not been implemented yet?
Reference:
PyMongo Working Docs - https://python.langchain.com/docs/integrations/vectorstores/mongodb_atlas
Async Error in Colab - https://colab.research.google.com/drive/1uBfiqoRH6rfiCCXhbxYlNxfc9ILizecl?usp=sharing
### Your contribution
I am not sure. | [MongoDB] Async Support for Vector Store | https://api.github.com/repos/langchain-ai/langchain/issues/15377/comments | 8 | 2024-01-01T12:48:06Z | 2024-05-02T07:53:49Z | https://github.com/langchain-ai/langchain/issues/15377 | 2,061,541,643 | 15,377 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain Version: 0.0.353
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Update to 0.0.353
2. Try to import create_async_playwright_browser
3. Get error
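For reference, the failing import from step 2, together with its likely post-split home (the `langchain_community` path is an assumption based on the 0.0.353 package split):
```python
# Worked up to ~0.0.350:
from langchain.tools.playwright.utils import create_async_playwright_browser

# Probable new location after the community split (assumption):
from langchain_community.tools.playwright.utils import create_async_playwright_browser
```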
### Expected behavior
1. No Error ;D
I can tell that the import was supported as recently as 0.0.350 | Playwright Utilities Removed In 0.0.353, Documentation Not Updated | https://api.github.com/repos/langchain-ai/langchain/issues/15372/comments | 3 | 2024-01-01T02:18:08Z | 2024-04-08T16:08:42Z | https://github.com/langchain-ai/langchain/issues/15372 | 2,061,256,601 | 15,372 |
[
"langchain-ai",
"langchain"
] | ### System Info
- Python 3.12.1
- MacOS 14.2.1
- langchain-cli 0.0.20 from pip OR langchain-* from git master branch (commit-ish [26f84b7](https://github.com/langchain-ai/langchain/commit/26f84b74d0f7dc4d2211a1a62d47eec36cb1d726)) -- can reproduce with latest code: langchain 0.0.353, langchain-cli 0.0.20, langchain-community 0.0.7, langchain-core 0.1.4
- pandas 2.1.4
- lancedb 0.4.3
- numpy 1.26.2
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I started with the rag-chroma-private quickstart, then modified it to my needs. I got rid of Chroma due to errors and now use a persistent LanceDB file.
When I try to enter a prompt on the playground, I get a crash (see the bottom of this message).
server.py:
```
from fastapi import FastAPI
from fastapi.responses import RedirectResponse
from langserve import add_routes
from app.chain import chain as rag_private_chain
app = FastAPI()
@app.get("/")
async def redirect_root_to_docs():
    return RedirectResponse("/docs")

add_routes(app, rag_private_chain, path="/rag-lancedb-private")

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="0.0.0.0", port=8000)
```
chain.py:
```
# Load
from typing import List

from langchain.chat_models import ChatOllama
from langchain.document_loaders import UnstructuredMarkdownLoader
from langchain.embeddings import OllamaEmbeddings
from langchain.prompts import ChatPromptTemplate
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_core.output_parsers import StrOutputParser
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.runnables import RunnableParallel, RunnablePassthrough
from langchain_core.documents.base import Document
import os, sys

# DB
import lancedb
from langchain.vectorstores.lancedb import LanceDB

# Chroma
# from langchain.vectorstores.chroma import Chroma
# from langchain.vectorstores.utils import filter_complex_metadata

files_directory = "/Users/sean/Downloads/posts"
all_splits = None
my_emb = OllamaEmbeddings(model="llama2:70b-chat")
table = None
db = lancedb.connect("./lance.db")
vectorstore = None

# Generating the vectors only has to be done when the data changes
def do_loader():
    # List each file in files_directory and loop through them, creating an
    # UnstructuredMarkdownLoader for each file
    all_splits = []
    for filename in os.listdir(files_directory):
        full_path = os.path.join(files_directory, filename)
        loader = UnstructuredMarkdownLoader(full_path, mode="elements", strategy="hi_res")
        data: List[Document] = loader.load()
        all_splits.extend(data)
        continue
    print(f"Got {len(all_splits)} documents")
    table = db.create_table(
        "rag",
        data=[
            {
                "vector": my_emb.embed_query("Hello World"),
                "text": "Hello World",
                "id": "1",
            }
        ],
        mode="overwrite",
    )
    # New
    vectorstore = LanceDB.from_documents(all_splits, my_emb, connection=table)
    print("Added docs to vector store")

# Not calling do_loader() because I have already populated LanceDB with the vectors.
table = db.open_table("rag")
vectorstore = LanceDB(connection=table, embedding=my_emb)
retriever = vectorstore.as_retriever()
print("Loaded DB")

# Prompt
# Optionally, pull from the Hub
# from langchain import hub
# prompt = hub.pull("rlm/rag-prompt")
# Or, define your own:
template = """Answer the question based only on the following context:
{context}
Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)

# LLM
ollama_llm = "llama2:70b-chat"
model = ChatOllama(model=ollama_llm)
print("Created Ollama model")

# RAG chain
chain = (
    RunnableParallel({"context": retriever, "question": RunnablePassthrough()})
    | prompt
    | model
    | StrOutputParser()
)

# Add typing for input
class Question(BaseModel):
    __root__: str

chain = chain.with_types(input_type=Question)
print("Done with chain.py")
```
Error:
```
INFO: Application startup complete.
INFO: 127.0.0.1:64594 - "GET /rag-chroma-private/playground/ HTTP/1.1" 200 OK
INFO: 127.0.0.1:64594 - "GET /rag-chroma-private/playground/assets/index-52e8ab2f.css HTTP/1.1" 200 OK
INFO: 127.0.0.1:64595 - "GET /rag-chroma-private/playground/assets/index-6a0f524c.js HTTP/1.1" 200 OK
INFO: 127.0.0.1:64595 - "GET /rag-chroma-private/playground/favicon.ico HTTP/1.1" 200 OK
INFO: 127.0.0.1:64594 - "POST /rag-chroma-private/stream_log HTTP/1.1" 200 OK
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/sse_starlette/sse.py", line 269, in __call__
await wrap(partial(self.listen_for_disconnect, receive))
File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/sse_starlette/sse.py", line 258, in wrap
await func()
File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/sse_starlette/sse.py", line 215, in listen_for_disconnect
message = await receive()
^^^^^^^^^^^^^^^
File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/uvicorn/protocols/http/h11_impl.py", line 538, in receive
await self.message_event.wait()
File "/opt/homebrew/Cellar/python@3.12/3.12.1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/asyncio/locks.py", line 212, in wait
await fut
asyncio.exceptions.CancelledError: Cancelled by cancel scope 15bc21f40
During handling of the above exception, another exception occurred:
+ Exception Group Traceback (most recent call last):
| File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/uvicorn/protocols/http/h11_impl.py", line 408, in run_asgi
| result = await app( # type: ignore[func-returns-value]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
| return await self.app(scope, receive, send)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/fastapi/applications.py", line 1054, in __call__
| await super().__call__(scope, receive, send)
| File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/starlette/applications.py", line 116, in __call__
| await self.middleware_stack(scope, receive, send)
| File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/starlette/middleware/errors.py", line 186, in __call__
| raise exc
| File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/starlette/middleware/errors.py", line 164, in __call__
| await self.app(scope, receive, _send)
| File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
| await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
| File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
| raise exc
| File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
| await app(scope, receive, sender)
| File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/starlette/routing.py", line 754, in __call__
| await self.middleware_stack(scope, receive, send)
| File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/starlette/routing.py", line 774, in app
| await route.handle(scope, receive, send)
| File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/starlette/routing.py", line 296, in handle
| await self.app(scope, receive, send)
| File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/starlette/routing.py", line 75, in app
| await wrap_app_handling_exceptions(app, request)(scope, receive, send)
| File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
| raise exc
| File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
| await app(scope, receive, sender)
| File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/starlette/routing.py", line 73, in app
| await response(scope, receive, send)
| File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/sse_starlette/sse.py", line 255, in __call__
| async with anyio.create_task_group() as task_group:
| File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 678, in __aexit__
| raise BaseExceptionGroup(
| ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
+-+---------------- 1 ----------------
| Traceback (most recent call last):
| File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/langserve/serialization.py", line 90, in default
| return super().default(obj)
| ^^^^^^^
| RuntimeError: super(): __class__ cell not found
|
| The above exception was the direct cause of the following exception:
|
| Traceback (most recent call last):
| File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/sse_starlette/sse.py", line 258, in wrap
| await func()
| File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/sse_starlette/sse.py", line 245, in stream_response
| async for data in self.body_iterator:
| File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/langserve/api_handler.py", line 1049, in _stream_log
| "data": self._serializer.dumps(data).decode("utf-8"),
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/langserve/serialization.py", line 168, in dumps
| return orjson.dumps(obj, default=default)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| TypeError: Type is not JSON serializable: numpy.ndarray
+------------------------------------
```
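A possible workaround (my addition, an assumption rather than a confirmed fix): the crash comes from langserve trying to JSON-serialize a numpy array, and the LanceDB wrapper keeps the raw embedding under a "vector" key in each document's metadata, so stripping that key before documents leave the retriever may avoid it:
```python
from langchain_core.documents.base import Document

def drop_vectors(docs):
    # Remove the non-JSON-serializable embedding from each document's metadata.
    return [
        Document(
            page_content=d.page_content,
            metadata={k: v for k, v in d.metadata.items() if k != "vector"},
        )
        for d in docs
    ]

# Piping the retriever into the function coerces it into a RunnableLambda.
chain = (
    RunnableParallel({"context": retriever | drop_vectors, "question": RunnablePassthrough()})
    | prompt
    | model
    | StrOutputParser()
)
```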
### Expected behavior
It's not clear to me which part of the stack is failing, but the expected behavior is that the LLM should generate some output rather than crashing. | RAG crash: TypeError: Type is not JSON serializable: numpy.ndarray | https://api.github.com/repos/langchain-ai/langchain/issues/15371/comments | 9 | 2024-01-01T01:46:45Z | 2024-06-08T16:08:35Z | https://github.com/langchain-ai/langchain/issues/15371 | 2,061,243,335 | 15,371 |
[
"langchain-ai",
"langchain"
] | ### System Info
MacOS 14.0, Jupyter with Python 3.11.6
(base) ➜ llm-env pip show langchain
Name: langchain
Version: 0.0.353
Summary: Building applications with LLMs through composability
Home-page: https://github.com/langchain-ai/langchain
Author:
Author-email:
License: MIT
Location: /lib/python3.11/site-packages
Requires: aiohttp, dataclasses-json, jsonpatch, langchain-community, langchain-core, langsmith, numpy, pydantic, PyYAML, requests, SQLAlchemy, tenacity
### Who can help?
_No response_
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When I try to execute:
```python
from langchain.agents.agent_toolkits import create_python_agent
from langchain.agents import load_tools, initialize_agent
from langchain.agents import AgentType
from langchain.tools.python.tool import PythonREPLTool
from langchain.python import PythonREPL
```
Jupyter gave an error:
ValueError: 'lib/python3.11/site-packages/langchain/agents/agent_toolkits' is not in the subpath of 'lib/python3.11/site-packages/langchain_core' OR one path is relative and the other is absolute.
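A hedged diagnostic (my suggestion, not from the report): this error typically appears when `langchain` and `langchain-core` resolve to different site-packages directories, which can be checked with:
```
pip show langchain langchain-core | grep -E 'Name|Location'
```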
### Expected behavior
Import agent related thing correctly. | ValueError: 'lib/python3.11/site-packages/langchain/agents/agent_toolkits' is not in the subpath of 'lib/python3.11/site-packages/langchain_core' OR one path is relative and the other is absolute. | https://api.github.com/repos/langchain-ai/langchain/issues/15370/comments | 3 | 2024-01-01T01:32:47Z | 2024-02-07T23:07:23Z | https://github.com/langchain-ai/langchain/issues/15370 | 2,061,236,158 | 15,370 |
[
"langchain-ai",
"langchain"
] | ### Feature request
The Ollama integration assumes that all models are served on "localhost:11434"; if the Ollama service is hosted on a different machine, the integration will fail.
Can we add an environment variable that, if present, overrides this URL, so the correct URL for the Ollama server can be set?
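For what it's worth (a hedged note, not a substitute for the requested env var): the LLM wrapper already appears to accept a `base_url` override in recent releases:
```python
from langchain.llms import Ollama

# Hostname assumed for illustration.
llm = Ollama(base_url="http://my-ollama-host:11434", model="llama2")
```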
### Motivation
In my setup, Ollama sits on a separate machine that is resourced for serving LLMs.
### Your contribution
I'm afraid I don't have any Python knowledge; Go, C++ and Rust only. | Ability to set ollama serve url | https://api.github.com/repos/langchain-ai/langchain/issues/15365/comments | 4 | 2023-12-31T20:05:56Z | 2024-06-08T16:08:31Z | https://github.com/langchain-ai/langchain/issues/15365 | 2,061,158,064 | 15,365
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.353
pygpt4all 1.1.0
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.llms import GPT4ALL, LlamaCpp
```
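For reference (my note, worth verifying against the installed version): the class is exported as `GPT4All`, so the import should succeed with the corrected capitalization:
```python
from langchain.llms import GPT4All, LlamaCpp
```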
### Expected behavior
ImportError: cannot import name 'GPT4ALL' from 'langchain.llms' (/home/rkuo/.local/lib/python3.10/site-packages/langchain/llms/__init__.py) | ImportError: cannot import name 'GPT4ALL' from 'langchain.llms' | https://api.github.com/repos/langchain-ai/langchain/issues/15362/comments | 3 | 2023-12-31T19:14:23Z | 2024-04-09T16:12:57Z | https://github.com/langchain-ai/langchain/issues/15362 | 2,061,148,468 | 15,362 |
[
"langchain-ai",
"langchain"
] | ### System Info
Python 3.9
LangChain 0.0.339
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I created a new instance of SQLDatabase and executed SQL queries using the 'run' method.
The problem is that when I execute a simple SELECT query, the result returns without the column names, only the values, so I can't tell which value relates to which column.
I saw the following code in the implementation of the 'run' method:
```
result = self._execute(command, fetch)
# Convert columns values to string to avoid issues with sqlalchemy
# truncating text
res = [
    tuple(truncate_word(c, length=self._max_string_length) for c in r.values())
    for r in result
]
```
and I'm wondering whether this data manipulation is intentional or a bug.
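A hedged workaround (my addition): newer releases expose an `include_columns` flag on `run` (whether 0.0.339 already has it is an assumption), and falling back to SQLAlchemy row mappings always keeps the column names:
```python
from sqlalchemy import text

# Option 1 (assumption: the flag exists in your release):
result = db.run("select id, count(*) as num_count from some_table", include_columns=True)

# Option 2: raw SQLAlchemy; .mappings() yields dict-like rows keyed by column name.
with db._engine.connect() as conn:
    rows = [
        dict(r)
        for r in conn.execute(
            text("select id, count(*) as num_count from some_table")
        ).mappings()
    ]
```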
### Expected behavior
For example:
executing the following SQL: `select id, count(*) as num_count from some_table`
expected result: [{'id': 383, 'num_count': 10}]
actual result: [(383, 10)] | SQLDatabase returns result without column names. | https://api.github.com/repos/langchain-ai/langchain/issues/15360/comments | 1 | 2023-12-31T17:17:04Z | 2024-04-07T16:07:44Z | https://github.com/langchain-ai/langchain/issues/15360 | 2,061,123,042 | 15,360 |