| issue_owner_repo | issue_body | issue_title | issue_comments_url | issue_comments_count | issue_created_at | issue_updated_at | issue_html_url | issue_github_id | issue_number |
|---|---|---|---|---|---|---|---|---|---|
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hello!
I am currently using Langchain to interface with a [custom LLM](https://python.langchain.com/docs/modules/model_io/models/llms/how_to/custom_llm) asynchronously.
To do so, I am overriding the original `LLM` class as explained in the tutorial as such:
```
class LLMInterface(LLM):
    @property
    def _llm_type(self) -> str:
        return "Ruggero's LLM"

    async def send_request(self, payload):
        return await some_function_that_queries_custom_llm(payload)

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
    ) -> str:
        if stop is not None:
            raise ValueError("stop kwargs are not permitted.")
        payload = json.dumps({"messages": [{"role": "user", "content": prompt}]})
        future = self.run_in_executor(self.send_request, payload)
        raw_response = future.result()  # get the actual result
        print(raw_response)
        # Parse JSON response
        response_json = json.loads(raw_response)
        # Extract content
        for message in response_json["messages"]:
            if message["role"] == "bot":
                content = message["content"]
                return content
        # Return empty string if no bot message found
        return ""
```
Unfortunately I get:
```
159 reductor = getattr(x, "__reduce_ex__", None)
160 if reductor is not None:
--> 161 rv = reductor(4)
162 else:
163 reductor = getattr(x, "__reduce__", None)
TypeError: cannot pickle '_queue.SimpleQueue' object
```
Even if I use an executor, I get a similar problem:
```
def run_in_executor(self, coro_func: Any, *args: Any) -> Any:
    future_result = Future()

    def wrapper():
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        try:
            result = loop.run_until_complete(coro_func(*args))
            future_result.set_result(result)
        except Exception as e:
            future_result.set_exception(e)
        finally:
            loop.run_until_complete(loop.shutdown_asyncgens())
            loop.close()

    Thread(target=wrapper).start()
    return future_result
```
In my code I use Langchain as such:
```
llm_chain = ConversationChain(llm=llm, memory=memory)
output = llm_chain.run(self.llm_input)
```
What is the appropriate way to interface with a custom LLM that is queried asynchronously?
I know that Langchain supports [async calls](https://blog.langchain.dev/async-api/), but I was not able to make it work.
Thank you!
### Suggestion:
Providing a
```
async def _async_call(
    self,
    prompt: str,
    stop: Optional[List[str]] = None,
    run_manager: Optional[CallbackManagerForLLMRun] = None,
) -> str:
```
method would be ideal.
Any alternative workaround would be fine too.
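For anyone hitting this in the meantime, the sync-to-async bridge itself can be kept very small. A stdlib-only sketch (`query_llm` is a hypothetical stand-in for the real async request; this assumes `_call` is never invoked from inside an already-running event loop):

```python
import asyncio

async def query_llm(payload: str) -> str:
    # Stand-in for the real async request to the custom LLM.
    await asyncio.sleep(0)
    return payload.upper()

def call_sync(payload: str) -> str:
    # asyncio.run spins up a fresh event loop, drives the coroutine to
    # completion, and tears the loop down again. It raises RuntimeError
    # if a loop is already running in this thread, so it is only safe
    # to call from plain synchronous code.
    return asyncio.run(query_llm(payload))

print(call_sync("ping"))
```

`asyncio.run` refusing to nest inside a running loop is exactly the situation where an async hook on the LLM class would be needed instead.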
Thank you! | Issue: How to create async calls to custom LLMs | https://api.github.com/repos/langchain-ai/langchain/issues/6932/comments | 2 | 2023-06-29T17:06:20Z | 2023-06-29T22:20:21Z | https://github.com/langchain-ai/langchain/issues/6932 | 1,781,213,657 | 6,932 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I would like to ask how we should deal with multiple destination chains when the chains expect different input variables.
For example, in the tutorial for MultiPromptChain, I would like math questions to be directed to the PalChain instead of the standard LLMChain. With the initial LLMRouterChain, the router prompt uses `input` as the input variable; however, once it has decided that the input `what is 2+2` is a math question and should be routed to the PalChain, I am presented with the error
```
[/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py](https://localhost:8080/#) in _validate_inputs(self, inputs)
101 missing_keys = set(self.input_keys).difference(inputs)
102 if missing_keys:
--> 103 raise ValueError(f"Missing some input keys: {missing_keys}")
104
105 def _validate_outputs(self, outputs: Dict[str, Any]) -> None:
ValueError: Missing some input keys: {'question'}
```
Manually changing the MATH_PROMPT that PalChain uses from `{question}` to `{input}` works, but I would like to know how I can specify the input variable that the destination chain expects when setting up the `destination_chains` mapping here:
```
chain = MultiPromptChain(
    router_chain=router_chain,
    destination_chains=destination_chains,
    default_chain=default_chain,
    verbose=True,
)
```
I've been at it for two nights, so I am seeking help. Thanks!
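One workaround, pending a built-in option, is a thin adapter that renames keys before they reach the destination chain (plain dicts here for illustration; in LangChain this remapping could be wrapped in a `TransformChain` placed in front of PalChain):

```python
def remap_inputs(inputs: dict, mapping: dict) -> dict:
    # mapping maps the destination chain's expected key to the router's key,
    # e.g. {"question": "input"} feeds the router's "input" value to a
    # destination chain that expects "question".
    return {dest_key: inputs[src_key] for dest_key, src_key in mapping.items()}

print(remap_inputs({"input": "what is 2+2"}, {"question": "input"}))
```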
### Suggestion:
_No response_ | Issue: How to handle RouterChain when 1 or more destination chain(s) which is expecting a different input variable? | https://api.github.com/repos/langchain-ai/langchain/issues/6931/comments | 8 | 2023-06-29T16:45:54Z | 2023-08-03T08:30:01Z | https://github.com/langchain-ai/langchain/issues/6931 | 1,781,186,169 | 6,931 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
PRs wait a very long time for review (for any feedback).
- Some sort of triage would be helpful.
- Some sort of PR closure process could be helpful. If something is wrong with a PR, this should be explained/discussed/addressed. Not providing **any feedback** is demoralizing for contributors.
Thanks!
PS: Take a look at the PR list and check the PRs that passed all checks but have no answer/feedback. There are hundreds of such PRs.
### Suggestion:
_No response_ | PR-s wait the review forever | https://api.github.com/repos/langchain-ai/langchain/issues/6930/comments | 7 | 2023-06-29T16:19:06Z | 2023-11-19T22:54:48Z | https://github.com/langchain-ai/langchain/issues/6930 | 1,781,148,858 | 6,930 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Please add support to invoke [Amazon SageMaker Asynchronous Endpoints](https://docs.aws.amazon.com/sagemaker/latest/dg/async-inference.html).
This would require some code changes to the _call method of the SageMaker Endpoint class. Therefore, there is a need to either create a new class, called SagemakerAsyncEndpoint, or introduce logic in the SagemakerEndpoint class to check whether the endpoint is async or not.
### Motivation
They are a great way to run models on expensive GPU instances while keeping costs under control, especially during the POC phase.
### Your contribution
Submitting a PR. | Support for Amazon SageMaker Asynchronous Endpoints | https://api.github.com/repos/langchain-ai/langchain/issues/6928/comments | 4 | 2023-06-29T15:31:14Z | 2024-01-30T00:46:57Z | https://github.com/langchain-ai/langchain/issues/6928 | 1,781,073,650 | 6,928 |
[
"langchain-ai",
"langchain"
] | ### System Info
* Langchain-0.0.215
* Python3.8.6
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The chain type in `RetrievalQA.from_chain_type()`:
* `stuff`: works successfully
* `refine`: does not work unless the correct naming or parameters are used; see #6912
* `map_rerank`: does not work
* `map_reduce`: does not work
The error for each failing type looks like:
code:
```python
prompt_template = """
Use the following pieces of context to answer the question. If you don't know the answer, leave it blank; don't try to make up an answer.
{context}
Question: {question}
Answer in JSON representations
"""
QA_PROMPT = PromptTemplate(
    template=prompt_template,
    input_variables=['context', 'question']
)
chain_type_kwargs = {
    'prompt': QA_PROMPT,
    'verbose': True
}

docs = PyMuPDFLoader('file.pdf').load()
splitter = RecursiveCharacterTextSplitter(
    chunk_size=chunk_size,
    chunk_overlap=chunk_overlap
)
docs = splitter.split_documents(docs)

embeddings = OpenAIEmbeddings()
db = Chroma.from_documents(
    documents=docs,
    embedding=embeddings
)
qa_chain = RetrievalQA.from_chain_type(
    llm=OpenAI(temperature=0.2),
    chain_type='map_rerank',
    retriever=db.as_retriever(),
    chain_type_kwargs=chain_type_kwargs
)
```
result:
```
ValidationError Traceback (most recent call last)
[c:\Users\JunXiang\AppData\Local\Programs\Python\Python38\lib\site-packages\langchain\chains\retrieval_qa\base.py](file:///C:/Users/JunXiang/AppData/Local/Programs/Python/Python38/lib/site-packages/langchain/chains/retrieval_qa/base.py) in from_chain_type(cls, llm, chain_type, chain_type_kwargs, **kwargs)
89 """Load chain from chain type."""
90 _chain_type_kwargs = chain_type_kwargs or {}
---> 91 combine_documents_chain = load_qa_chain(
92 llm, chain_type=chain_type, **_chain_type_kwargs
93 )
[c:\Users\JunXiang\AppData\Local\Programs\Python\Python38\lib\site-packages\langchain\chains\question_answering\__init__.py](file:///C:/Users/JunXiang/AppData/Local/Programs/Python/Python38/lib/site-packages/langchain/chains/question_answering/__init__.py) in load_qa_chain(llm, chain_type, verbose, callback_manager, **kwargs)
236 f"Should be one of {loader_mapping.keys()}"
...
[c:\Users\JunXiang\AppData\Local\Programs\Python\Python38\lib\site-packages\pydantic\main.cp38-win_amd64.pyd](file:///C:/Users/JunXiang/AppData/Local/Programs/Python/Python38/lib/site-packages/pydantic/main.cp38-win_amd64.pyd) in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for MapRerankDocumentsChain
__root__
Output parser of llm_chain should be a RegexParser, got None (type=value_error)
```
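For context on the error: `map_rerank` asks the model for an answer per document plus a confidence score, then keeps the highest-scoring one, so the prompt's `output_parser` must be a `RegexParser` that splits the two fields. A plain-`re` sketch of such a split (the exact keys and prompt format LangChain expects may differ; this only illustrates the shape):

```python
import re

# A RegexParser-style pattern: everything up to a trailing "Score:" line is
# the answer, the digits after it are the confidence used for reranking.
pattern = re.compile(r"(?P<answer>.*?)\s*Score:\s*(?P<score>\d+)", re.DOTALL)

m = pattern.search("Paris is the capital of France.\nScore: 95")
print(m.group("answer"), "|", m.group("score"))
```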
### Expected behavior
The chain does not crash when I try to run it. | ValidationError: 1 validation error for MapRerankDocumentsChain | https://api.github.com/repos/langchain-ai/langchain/issues/6926/comments | 11 | 2023-06-29T15:15:32Z | 2024-06-01T00:07:43Z | https://github.com/langchain-ai/langchain/issues/6926 | 1,781,037,761 | 6,926 |
[
"langchain-ai",
"langchain"
] | ### Feature request
A colleague and I would like to implement an iterator version of the AgentExecutor. That is, we'd like to expose each intermediate step of the plan/action/observation loop. This should look something like:
```python3
inputs = ...  # complex question that requires tool usage, multi-step
for step in agent_executor.iter(inputs=inputs):
    do_stuff_with(step)
```
### Motivation
This would be useful for a few applications. By hooking into agent steps we could:
- Route different thoughts/actions/observations conditionally to:
  - different frontend components
  - different postprocessing/analysis logic
  - other agents
We could also "pause" agent execution and intervene.
### Your contribution
Yes, we have a PR ~ready. | Iterator version of AgentExecutor | https://api.github.com/repos/langchain-ai/langchain/issues/6925/comments | 2 | 2023-06-29T15:13:26Z | 2023-07-25T16:33:49Z | https://github.com/langchain-ai/langchain/issues/6925 | 1,781,033,687 | 6,925 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Hello, I'd like to ask about supporting the [BingNewsAPI](https://www.microsoft.com/en-us/bing/apis/bing-news-search-api), whose API and response format differ from the Bing Web Search API, so I cannot use the existing [BingSearchAPIWrapper](https://python.langchain.com/docs/modules/agents/tools/integrations/bing_search)'s `results` or `run` methods.
I know that the existing BingSearchAPIWrapper lets you set a separate host URL, but as the response format is different, there is a parsing error when calling the `results` method.
### Motivation
I cannot use the existing [BingSearchAPIWrapper](https://python.langchain.com/docs/modules/agents/tools/integrations/bing_search)'s `results` or `run` methods with the [BingNewsAPI](https://www.microsoft.com/en-us/bing/apis/bing-news-search-api).
### Your contribution
If you allow me, I'd like to create a PR for BingNewsSearchAPIWrapper utilities.
[
"langchain-ai",
"langchain"
] | ### System Info
I am trying to get an agent to describe the contents of a particular URL for me; however, **my agent does not execute past the first step**. The attached image shows what my code looks like and where it freezes as it runs.
<img width="683" alt="freezing_agent" src="https://github.com/hwchase17/langchain/assets/97087128/b3888b6e-f016-479e-ac1a-3036b7f16d97">
Thanks <3 !
(also posted as discussion, was not sure where to add)
### Who can help?
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from config import SERPAPI_API_KEY, OPENAI_API_KEY
from langchain.agents import AgentType
from langchain.chat_models import ChatOpenAI
from langchain.agents import initialize_agent
from langchain.agents.agent_toolkits import PlayWrightBrowserToolkit
from langchain.tools.playwright.utils import create_async_playwright_browser
import asyncio

async_browser = create_async_playwright_browser()
browser_toolkit = PlayWrightBrowserToolkit.from_browser(async_browser=async_browser)
tools = browser_toolkit.get_tools()

gpt_model = "gpt-4"
llm = ChatOpenAI(
    temperature=0,
    model_name=gpt_model,
    openai_api_key=OPENAI_API_KEY
)

agent_chain = initialize_agent(tools, llm, agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)

async def main():
    response = await agent_chain.arun(input="Browse to https://www.theverge.com/2023/3/14/23638033/openai-gpt-4-chatgpt-multimodal-deep-learning and describe it to me.")
    return response

result = asyncio.run(main())
print(result)
```
### Expected behavior
Enter chain, navigate to browser, read content in browser, return description.
Based on: https://python.langchain.com/docs/modules/agents/agent_types/structured_chat.html | Structured Agent Search Not Working | https://api.github.com/repos/langchain-ai/langchain/issues/6923/comments | 5 | 2023-06-29T14:25:29Z | 2024-03-13T19:55:50Z | https://github.com/langchain-ai/langchain/issues/6923 | 1,780,945,041 | 6,923 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Hey,
Love your product, and just wanted to give some constructive feedback: the new layout and organization strategy on "https://python.langchain.com/docs/get_started/introduction.html" is significantly harder to navigate.
### Idea or request for content:
_No response_ | DOC: | https://api.github.com/repos/langchain-ai/langchain/issues/6921/comments | 4 | 2023-06-29T13:42:48Z | 2023-09-28T18:26:50Z | https://github.com/langchain-ai/langchain/issues/6921 | 1,780,846,938 | 6,921 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain 0.0.200
Python 3.11.4
Windows 10
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Use with `load_qa_chain`
Error in console:
```
Error in on_chain_start callback: 'name'
```
Log file content is missing "Entering new chain"
### Expected behavior
1. No error in console
2. "Entering new chain" is present in log file
Fix: access dict with fallback like StdOutCallbackHandler does:
```python
class_name = serialized.get("name", "")
``` | FileCallbackHandler doesn't log entering new chain | https://api.github.com/repos/langchain-ai/langchain/issues/6920/comments | 3 | 2023-06-29T13:15:39Z | 2023-09-28T17:27:55Z | https://github.com/langchain-ai/langchain/issues/6920 | 1,780,800,182 | 6,920 |
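The difference between the two accesses is easy to demonstrate with plain dicts (handler shape simplified for illustration; `chain_banner` is a made-up helper, not LangChain API):

```python
def chain_banner(serialized: dict) -> str:
    # dict.get with a default never raises KeyError, unlike serialized["name"].
    class_name = serialized.get("name", "")
    return f"Entering new {class_name} chain..."

print(chain_banner({"name": "LLMChain"}))
print(chain_banner({"id": ["langchain"]}))  # no "name" key: no crash
```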
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain 0.0.200
Python 3.11.4
Windows 10
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
Use the handler with a prompt containing characters that aren't represented on the platform charset (e.g. cp1252)
Error appears in console:
```
[manager.py:207] Error in on_text callback: 'charmap' codec can't encode character '\u2010' in position 776: character maps to <undefined>
```
Target logfile only has "entering new chain" and "finished chain" lines.
### Expected behavior
1. No error
2. Log file has usual output
Fix: in constructor: ```self.file = cast(TextIO, open(file_path, "a", encoding="utf-8"))``` | FileCallbackHandler should open file in UTF-8 encoding | https://api.github.com/repos/langchain-ai/langchain/issues/6919/comments | 4 | 2023-06-29T13:10:04Z | 2023-09-28T17:29:58Z | https://github.com/langchain-ai/langchain/issues/6919 | 1,780,791,122 | 6,919 |
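A stdlib-only check of the fix (temp file path chosen here for illustration): with an explicit encoding, writing a character outside cp1252 round-trips instead of raising `UnicodeEncodeError` on platforms where cp1252 is the default:

```python
import os
import tempfile

text = "soft\u2010hyphen"  # U+2010 has no mapping in cp1252

path = os.path.join(tempfile.mkdtemp(), "agent.log")
# Passing encoding explicitly makes the write independent of the platform
# default charset, which is what FileCallbackHandler relies on today.
with open(path, "a", encoding="utf-8") as f:
    f.write(text)

with open(path, encoding="utf-8") as f:
    print(f.read())
```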
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I want to create a chain to make query against my database. Also I want to add memory to this chain.
Example of dialogue I want to see:
Query: Who is the owner of the website with domain domain.com?
Answer: Boba Bobovich
Query: Tell me his email
Answer: Boba Bobovich's email is boba@boba.com
I have this code:
```
import os
from langchain import OpenAI, SQLDatabase, SQLDatabaseChain, PromptTemplate
from langchain.memory import ConversationBufferMemory
memory = ConversationBufferMemory()
db = SQLDatabase.from_uri(os.getenv("DB_URI"))
llm = OpenAI(temperature=0, verbose=True)
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True, memory=memory)
db_chain.run("Who is owner of the website with domain https://damon.name")
db_chain.run("Tell me his email")
print(memory.load_memory_variables({}))
```
It gives:
```
> Entering new chain...
Who is owner of the website with domain https://damon.name
SQLQuery:SELECT first_name, last_name FROM owners JOIN websites ON owners.id = websites.owner_id WHERE domain = 'https://damon.name' LIMIT 5;
SQLResult: [('Geo', 'Mertz')]
Answer:Geo Mertz is the owner of the website with domain https://damon.name.
> Finished chain.
> Entering new chain...
Tell me his email
SQLQuery:SELECT email FROM owners WHERE first_name = 'Westley' AND last_name = 'Waters'
SQLResult: [('Ken70@hotmail.com',)]
Answer:Westley Waters' email is Ken70@hotmail.com.
> Finished chain.
{'history': "Human: Who is owner of the website with domain https://damon.name\nAI: Geo Mertz is the owner of the website with domain https://damon.name.\nHuman: Tell me his email\nAI: Westley Waters' email is Ken70@hotmail.com."}
```
Well, it saves the context to memory, but the chain doesn't use it to give a proper answer (wrong email). How can I fix it?
Also, I don't want to use an agent, because I want to manage to do this with a simple chain first. Tell me if it's impossible with a simple chain.
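Until the chain itself consumes the memory, one blunt workaround is to fold prior turns into the question string before passing it to `db_chain.run` (pure string handling, no LangChain API involved; the turn format below is arbitrary):

```python
def with_history(history, question):
    # history is a list of (human, ai) turns; prepending them lets the LLM
    # resolve references like "his email" even though the SQL prompt has
    # no dedicated history slot.
    turns = "\n".join(f"Human: {q}\nAI: {a}" for q, a in history)
    return f"Given this conversation:\n{turns}\nNow answer: {question}"

print(with_history(
    [("Who owns damon.name?", "Geo Mertz owns it.")],
    "Tell me his email",
))
```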
### Suggestion:
_No response_ | How to add memory to SQLDatabaseChain? | https://api.github.com/repos/langchain-ai/langchain/issues/6918/comments | 43 | 2023-06-29T13:02:50Z | 2024-07-13T03:00:41Z | https://github.com/langchain-ai/langchain/issues/6918 | 1,780,780,200 | 6,918 |
[
"langchain-ai",
"langchain"
] | ### User based chat history
How do I implement user-based chat history management and thread management? Please give any suggestions regarding this.
### Suggestion:
_No response_ | Issue: User based chat history | https://api.github.com/repos/langchain-ai/langchain/issues/6917/comments | 3 | 2023-06-29T12:37:35Z | 2023-11-13T16:08:20Z | https://github.com/langchain-ai/langchain/issues/6917 | 1,780,741,688 | 6,917 |
[
"langchain-ai",
"langchain"
] | ### System Info
OS: `Ubuntu 22.04`
Langchain Version: `0.0.219`
Poetry Version: `1.5.1`
Python Version: `3.10.6`
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I would like to contribute to the project, I followed instructions for [contributing](https://github.com/hwchase17/langchain/blob/8502117f62fc4caa53d504ccc9d4e6a512006e7f/.github/CONTRIBUTING.md) and have installed poetry.
After setting up the virtual env, the command: `poetry install -E all` runs successfully :heavy_check_mark:
However when trying to do a `make test` I get the following error:
```
Traceback (most recent call last):
File "[redacted]/langchain/.venv/bin/make", line 5, in <module>
from scripts.proto import main
ModuleNotFoundError: No module named 'scripts'
```
I have the virtual env activated and the command `make` is using the one installed in the virtual env. (see above error)
I'm assuming I'm missing a dependency, but it's not obvious which one; can you help please?
### Expected behavior
`make` should run properly | make - ModuleNotFoundError: No module named 'scripts' | https://api.github.com/repos/langchain-ai/langchain/issues/6915/comments | 5 | 2023-06-29T11:40:35Z | 2023-11-25T16:08:49Z | https://github.com/langchain-ai/langchain/issues/6915 | 1,780,653,651 | 6,915 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain-0.0.219
python-3.9
macos-13.4.1 (22F82)
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
1. Use an async-style request in streaming mode:
```python
ChatOpenAI(streaming=True).agenerate([...])
```
2. An error occurs if the OpenAI SDK raises `openai.error.APIError`:

3. `acompletion_with_retry` is not effective, because the exception is thrown by `"openai/api_requestor.py", line 763, in _interpret_response_line`, which runs after the request
in: tenacity/_asyncio.py

### Expected behavior
Catch the exception and retry. | Stream Mode does not recognize openai.error | https://api.github.com/repos/langchain-ai/langchain/issues/6907/comments | 1 | 2023-06-29T09:36:28Z | 2023-10-06T16:06:39Z | https://github.com/langchain-ai/langchain/issues/6907 | 1,780,466,588 | 6,907 |
[
"langchain-ai",
"langchain"
] | ### System Info
Using Google Colab, Python 3.10.12. Installing latest version of langchain as of today (v0.0.219, I believe).
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When running the following function to create a chain on a collection of text files, I get the following error:
```
def get_company_qa_chain(company_text_dir):
    loader = DirectoryLoader(company_text_dir, glob="*.txt")
    documents = loader.load()
    text_splitter = RecursiveCharacterTextSplitter(
        chunk_size=1000,
        chunk_overlap=100,
        length_function=len,
    )
    docs = text_splitter.split_documents(documents)
    embeddings = OpenAIEmbeddings()  # default model="text-embedding-ada-002"
    docsearch = Chroma.from_documents(docs, embeddings)

    model_name = "gpt-3.5-turbo"
    temperature = 0
    chat = ChatOpenAI(model_name=model_name, temperature=temperature)

    template = """You are a helpful utility for answering questions. Communicate in English only."""
    system_message_prompt = SystemMessagePromptTemplate.from_template(template)
    human_template = """Use the following context to answer the question that follows. If you don't know the answer,
say you don't know, don't try to make up an answer:
{context}
Question: {question}
"""
    human_message_prompt = HumanMessagePromptTemplate.from_template(
        human_template, input_variables=["context", "question"]
    )
    chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
    chain_type_kwargs = {"prompt": chat_prompt}

    qa = RetrievalQA.from_chain_type(
        llm=chat,
        chain_type="stuff",
        retriever=docsearch.as_retriever(),
        return_source_documents=True,
        chain_type_kwargs=chain_type_kwargs,
    )
    return qa
```
```
/usr/local/lib/python3.10/dist-packages/pandas/_libs/parsers.pyx in pandas._libs.parsers.raise_parser_error()
ParserError: Error tokenizing data. C error: Expected 2 fields in line 4, saw 4
```
I have isolated the error to a particular text file, which was produced by pymupdf. There are no clearly strange characters in the file, and everything has been cleaned using the following function before writing to the text file:
```
def clean_str(text):
    text = unicodedata.normalize('NFKC', text)
    text = text.replace("{", "").replace("}", "").replace("[", "").replace("]", "")
    return text
```
### Expected behavior
I would expect the code to handle this error and skip the problematic file. I would also be totally happy with a way of cleaning the input files to ensure this doesn't happen again. It was quite a bit of work to hunt down the problem file manually.
[
"langchain-ai",
"langchain"
] | ### System Info
```
llm = OpenAI(temperature=0)
chain = LLMChain(llm=llm, prompt=load_prompt("coolapk_prompt.json"))
```
This fails inside `config = json.load(f)` with:
```
UnicodeDecodeError: 'gbk' codec can't decode byte 0xaf in position 108: illegal multibyte sequence
```
### Who can help?
llm = OpenAI(temperature=0)
chain = LLMChain(llm=llm, prompt=load_prompt("coolapk_prompt.json"))
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
llm = OpenAI(temperature=0)
chain = LLMChain(llm=llm, prompt=load_prompt("coolapk_prompt.json"))
### Expected behavior
Add the ability to set the encoding for JSON files.
| load_prompt Unable to set encoding for JSON files | https://api.github.com/repos/langchain-ai/langchain/issues/6900/comments | 5 | 2023-06-29T06:48:33Z | 2024-05-01T03:47:10Z | https://github.com/langchain-ai/langchain/issues/6900 | 1,780,233,553 | 6,900 |
[
"langchain-ai",
"langchain"
] | ### System Info
python 3.9
langchain 0.0.199
### Who can help?
@hwchase17 @agola11 @eyu
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The regex such as https://github.com/hwchase17/langchain/blob/6370808d41eda0d056375015fda9284e9f01280c/langchain/agents/mrkl/output_parser.py#L18
will cause a ReDoS (regular expression denial of service) when the model outputs something like the payload below:
`'Action' + '\n'*n + ':' + '\t'*n + '\t0Input:' + '\t'*n + '\nAction' + '\t'*n + 'Input' + '\t'*n + '\t0Input:ActionInput:'`
full code:
```
import re
n=780
llm_output='Action' + '\n'*n + ':' + '\t'*n + '\t0Input:' + '\t'*n + '\nAction' + '\t'*n + 'Input' + '\t'*n + '\t0Input:ActionInput:'
regex = (
r"Action\s*\d*\s*:[\s]*(.*?)[\s]*Action\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)"
)
match = re.search(regex, llm_output, re.DOTALL)
action = match.group(1).strip()
print(action)
action_input = match.group(2)
```
This input makes the regex extremely slow to match.
### Expected behavior
```
Protection
There are several things you can do to protect yourself from ReDoS attacks.
1. Look at safer alternative libraries such as Facebook’s [pyre2](https://pypi.org/project/pyre2/), which is a python wrapper around Google’s C++ Regex Library, [re2](https://github.com/google/re2/).
2. Always double check all regex you add to your application and never blindly trust regex patterns you find online.
3. Utilize SAST and fuzzing tools to test your own code, and check out [Ochrona](https://ochrona.dev/) to make sure your dependencies are not vulnerable to ReDoS attacks.
4. If possible, limit the length of your input to avoid longer than necessary strings.
```
from https://medium.com/ochrona/python-dos-prevention-the-redos-attack-7267a8fa2d5c
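As a backtracking-free alternative, the two fields can also be pulled out with plain string operations, which run in linear time and cannot be driven into catastrophic backtracking (this simplified sketch ignores the optional digits, e.g. `Action 2:`, that the original pattern tolerates):

```python
def parse_action(llm_output: str):
    # Find the "Action Input:" marker first, then recover the action name
    # from the text before it. str.find/str.split are O(n), so no input
    # can blow up the matching time the way nested regex quantifiers can.
    marker = "Action Input:"
    idx = llm_output.find(marker)
    if idx == -1 or "Action" not in llm_output[:idx]:
        return None
    action = llm_output[:idx].split("Action", 1)[1].lstrip(" \t\n:").strip()
    return action, llm_output[idx + len(marker):].strip()

print(parse_action("Action: Search\nAction Input: langchain docs"))
```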
You can also check your regex online at https://devina.io/redos-checker. | The ReDOS Attack for the regex in langchain | https://api.github.com/repos/langchain-ai/langchain/issues/6898/comments | 1 | 2023-06-29T06:18:22Z | 2023-10-05T16:06:45Z | https://github.com/langchain-ai/langchain/issues/6898 | 1,780,203,422 | 6,898 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.217
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The following code fails:
```python
def create_sqlite_db_file(db_dir):
    # Connect to SQLite database (or create it if it doesn't exist)
    conn = sqlite3.connect(db_dir)
    # Create a cursor
    c = conn.cursor()
    # Create a dummy table
    c.execute('''
        CREATE TABLE IF NOT EXISTS employees(
            id INTEGER PRIMARY KEY,
            name TEXT,
            salary REAL,
            department TEXT,
            position TEXT,
            hireDate TEXT);
    ''')
    # Insert dummy data into the table
    c.execute('''
        INSERT INTO employees (name, salary, department, position, hireDate)
        VALUES ('John Doe', 80000, 'IT', 'Engineer', '2023-06-26');
    ''')
    # Commit the transaction
    conn.commit()
    # Close the connection
    conn.close()


def test_log_and_load_sql_database_chain(tmp_path):
    # Create the SQLDatabaseChain
    db_file_path = tmp_path / "my_database.db"
    sqlite_uri = f"sqlite:///{db_file_path}"
    llm = OpenAI(temperature=0)
    create_sqlite_db_file(db_file_path)
    db = SQLDatabase.from_uri(sqlite_uri)
    db_chain = SQLDatabaseChain.from_llm(llm, db)
    db_chain.save('/path/to/test_chain.yaml')

    from langchain.chains import load_chain
    loaded_chain = load_chain('/path/to/test_chain.yaml', database=db)
```
Error:
```
def load_llm_from_config(config: dict) -> BaseLLM:
"""Load LLM from Config Dict."""
> if "_type" not in config:
E TypeError: argument of type 'NoneType' is not iterable
```
### Expected behavior
No error should occur. | SQLDatabaseChain cannot be loaded | https://api.github.com/repos/langchain-ai/langchain/issues/6889/comments | 3 | 2023-06-28T21:29:40Z | 2023-12-27T16:06:48Z | https://github.com/langchain-ai/langchain/issues/6889 | 1,779,800,725 | 6,889 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain-0.0.218
Python 3.10.10
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Cannot import MultiQueryRetriever; when I run it, it says:
```
Traceback (most recent call last):
  File "C:\langchain\JETPACK-FLOW-MULTIQUERY.py", line 14, in <module>
    from langchain.retrievers.multi_query import MultiQueryRetriever
  File "C:\Users\roman\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\retrievers\__init__.py", line 21, in <module>
    from langchain.retrievers.self_query.base import SelfQueryRetriever
  File "C:\Users\roman\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\retrievers\self_query\base.py", line 9, in <module>
    from langchain.chains.query_constructor.base import load_query_constructor_chain
  File "C:\Users\roman\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\chains\query_constructor\base.py", line 14, in <module>
    from langchain.chains.query_constructor.parser import get_parser
  File "C:\Users\roman\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\chains\query_constructor\parser.py", line 9, in <module>
    raise ValueError(
ValueError: Lark should be at least version 1.1.5, got 0.12.0
```
The same error was reported in this bug report:
https://github.com/hwchase17/langchain/discussions/6068
Their solution was simply to install langchain version 0.0.201, but that does not help me, because MultiQueryRetriever does not exist in that version.
I am using: langchain-0.0.218
### Expected behavior
It should not crash when I try to run it. | MultiQueryReceiver does not work | https://api.github.com/repos/langchain-ai/langchain/issues/6885/comments | 0 | 2023-06-28T20:24:26Z | 2023-06-28T20:37:18Z | https://github.com/langchain-ai/langchain/issues/6885 | 1,779,701,054 | 6,885
[
"langchain-ai",
"langchain"
] | ### Feature request
Several memory storage options have a "buffer" type which includes a pruning function that allows users to specify a max_tokens_limit which stops token overflow issues. I'm requesting this be included with the VectorStoreRetrieverMemory as well.
### Motivation
Running into token overflow issues is annoying and a pruning option is a simple way to get around this when using vector store based memory.
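The pruning behavior being requested could look like this (function and parameter names are illustrative, not the LangChain API): keep retrieved documents in order until a token budget is exhausted, the same way the buffer memories prune.

```python
def prune_to_token_limit(docs, count_tokens, max_tokens_limit=2000):
    """Keep leading docs until the budget is spent; drop the rest."""
    kept, total = [], 0
    for doc in docs:
        n = count_tokens(doc)
        if total + n > max_tokens_limit:
            break
        kept.append(doc)
        total += n
    return kept
```

`count_tokens` would be whatever tokenizer the LLM wrapper already exposes.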
### Your contribution
I'm happy to attempt a solution on this given I'm able to use my company computer this weekend. Otherwise I would appreciate any support people are able to give. | Include pruning on VectorStoreRetrieverMemory | https://api.github.com/repos/langchain-ai/langchain/issues/6884/comments | 0 | 2023-06-28T20:23:47Z | 2023-06-28T20:49:16Z | https://github.com/langchain-ai/langchain/issues/6884 | 1,779,699,848 | 6,884 |
[
"langchain-ai",
"langchain"
] | ### System Info
Massive client changes as of this pr:
https://github.com/anthropics/anthropic-sdk-python/commit/8d1d6af6527a3ef80b74dc3a466166ab7df057df
Installing the latest anthropic client will cause langchain Anthropic LLM utilities to fail.
Looks like api_url no longer exists (they've moved to base_url).
```
???
E pydantic.error_wrappers.ValidationError: 1 validation error for ChatAnthropic
E __root__
E __init__() got an unexpected keyword argument 'api_url' (type=type_error)
```
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [x] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Install latest version of anthropic.
2. Attempt to run Anthropic LLM and it will produce a runtime failure about api_url which no longer exists in the Anthropic client initializer.
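Until LangChain catches up with the renamed kwarg, a shim along these lines (hypothetical, not part of either library) could translate old call sites before constructing the new client:

```python
def translate_client_kwargs(kwargs: dict) -> dict:
    """Map the old 'api_url' kwarg to the renamed 'base_url'."""
    kwargs = dict(kwargs)  # don't mutate the caller's dict
    if "api_url" in kwargs:
        kwargs.setdefault("base_url", kwargs.pop("api_url"))
    return kwargs
```

An existing `base_url` wins over a stale `api_url`, so callers already on the new name are unaffected.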
### Expected behavior
Should work with latest release of Anthropic client. | Incompatibility with latest Anthropic Client | https://api.github.com/repos/langchain-ai/langchain/issues/6883/comments | 5 | 2023-06-28T19:50:00Z | 2023-10-17T16:06:25Z | https://github.com/langchain-ai/langchain/issues/6883 | 1,779,642,812 | 6,883 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Add a parameter to ConversationalRetrievalChain to skip the condense question prompt procedure.
### Motivation
Currently, when using ConversationalRetrievalChain (with the from_llm() function), we have to run the input through a LLMChain with a default "condense_question_prompt" which condenses the chat history and the input to make a standalone question out of it.
This could be a good idea, but in my test cases (in French) it performs very poorly when switching between topics. It mixes things up and hallucinates a new question which is nowhere near the original question.
The only solution I can think of is to send the actual chat history in the prompt, but there is no way to skip the condense-question step. For now, a workaround could be to provide a custom condense prompt that asks the LLM to return the question unchanged, but that would still call the LLM twice for no reason...
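The control flow being requested, sketched with plain callables (illustrative only; `skip_condense` is the proposed flag, not an existing parameter):

```python
def answer(question, chat_history, condense, retrieve, generate, skip_condense=False):
    # With skip_condense=True the user's question reaches the retriever
    # verbatim and the extra condense LLM call is never made.
    standalone = question if skip_condense else condense(question, chat_history)
    docs = retrieve(standalone)
    return generate(question, docs, chat_history)
```

The key point is that the condense callable is simply never invoked when skipping, rather than invoked with a do-nothing prompt.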
### Your contribution
Since I am not sure why this isn't already an option, I'd assume it's for a good reason so I'm asking here before thinking of implementing anything. | Allow skipping "condense_question_prompt" when using ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/6879/comments | 9 | 2023-06-28T16:46:24Z | 2024-03-16T16:07:47Z | https://github.com/langchain-ai/langchain/issues/6879 | 1,779,327,153 | 6,879 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Can someone please create and integrate an in-memory vector store based purely on NumPy?
I don't like that ChromaDB pulls in so many dependencies when I just want something really simple.
### Motivation
I saw https://github.com/jdagdelen/hyperDB and it works well, so something like it should be in langchain.
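A minimal sketch of what such a store could look like (cosine similarity over unit-normalized rows; this is the idea only, not the langchain `VectorStore` interface):

```python
import numpy as np

class NumpyVectorStore:
    """Tiny in-memory cosine-similarity store backed by a single ndarray."""

    def __init__(self, dim: int):
        self.vectors = np.empty((0, dim))
        self.texts: list[str] = []

    def add(self, text: str, vector) -> None:
        v = np.asarray(vector, dtype=float)
        # Normalize on insert so search reduces to a dot product.
        self.vectors = np.vstack([self.vectors, v / np.linalg.norm(v)])
        self.texts.append(text)

    def search(self, query, k: int = 4):
        q = np.asarray(query, dtype=float)
        q = q / np.linalg.norm(q)
        sims = self.vectors @ q  # cosine similarity, rows are unit norm
        top = np.argsort(-sims)[:k]
        return [(self.texts[i], float(sims[i])) for i in top]
```

Embeddings would come from whatever embedding function the caller already uses; only NumPy is required for storage and search.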
### Your contribution
I don't have the time right now, but I wanted to share the idea. | In memory vector store as python object based on purely numpy | https://api.github.com/repos/langchain-ai/langchain/issues/6877/comments | 1 | 2023-06-28T16:17:22Z | 2023-10-05T16:06:50Z | https://github.com/langchain-ai/langchain/issues/6877 | 1,779,281,098 | 6,877
[
"langchain-ai",
"langchain"
] | ### System Info
0.0.218
OSX
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create or find a sitemap with url(s) that run into an infinite redirect loop.
2. Run `SitemapLoader` with that.
3. Observe it crashing
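The desired behavior is essentially this wrapper (a sketch with a generic `fetch` callable, not the SitemapLoader internals): failures are logged and skipped when the flag is set, re-raised otherwise.

```python
def load_urls(urls, fetch, continue_on_failure=True):
    """Fetch each URL; optionally log-and-skip failures instead of raising."""
    docs = []
    for url in urls:
        try:
            docs.append(fetch(url))
        except Exception as exc:
            if not continue_on_failure:
                raise
            print(f"Skipping {url}: {exc}")
    return docs
```

An infinite-redirect URL would then cost one skipped entry rather than the whole sitemap crawl.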
### Expected behavior
The loader to continue on such errors, or have a config flag to say so, like `UnstructuredURLLoader`'s `continue_on_failure`. | I think the SitemapLoader should be as robust with default options as the UnstructuredURLLoader, which has an option continue_on_failure set to true. | https://api.github.com/repos/langchain-ai/langchain/issues/6875/comments | 1 | 2023-06-28T15:16:30Z | 2023-10-05T16:06:55Z | https://github.com/langchain-ai/langchain/issues/6875 | 1,779,179,472 | 6,875 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```python
embeddings = OpenAIEmbeddings(model='text-embedding-ada-002', deployment='XXXXX', chunk_size=1)
db = Chroma.from_documents(texts, embeddings)
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True, chat_memory=ChatMessageHistory(messages=[]))
model = ConversationalRetrievalChain.from_llm(llm=AzureOpenAI(model_name='gpt-35-turbo', deployment_name='XXXXX', temperature=0.7, openai_api_base=os.environ['OPENAI_API_BASE'], openai_api_key=os.environ['OPENAI_API_KEY']), retriever=db.as_retriever(), memory=memory)
```
I'm using ConversationalRetrievalChain to query from embeddings and also using memory to store chat history.
I've written a Flask API to fetch results from the model.
I'm trying to store each user's memory in a Redis store, so that chat histories from different users don't get mixed up.
For that I'm trying the code below, but I'm unable to store the memory in Redis.
```python
r = redis.Redis(host='redis-1XXX.c3XXX.ap-south-1-1.ec2.cloud.redislabs.com', port=17506, db=0, password='XXXX')
memory = {}
memory_key = f"memory_{employee_code}"
messages = r.get(memory_key)
messages = json.loads(messages)
memory[memory_key] = ConversationBufferMemory(memory_key="chat_history", return_messages=True, chat_memory=ChatMessageHistory(messages=messages))
model = ConversationalRetrievalChain.from_llm(llm=AzureOpenAI(model_name='gpt-35-turbo', deployment_name='XXXXXX', temperature=0.7, openai_api_base=os.environ['OPENAI_API_BASE'], openai_api_key=os.environ['OPENAI_API_KEY']), retriever=db.as_retriever(), memory=memory[memory_key])
res = model.run(query)
messages.append({query: res})
r.set(memory_key, json.dumps(messages))
```
This is not working. Can anyone help me on this?
Thanks in advance.
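One likely problem in the snippet above: `ChatMessageHistory(messages=...)` expects message objects, while `json.loads` yields plain dicts. LangChain ships `messages_to_dict` / `messages_from_dict` (in `langchain.schema`, to my understanding of these versions) for exactly that conversion. The serialization round-trip itself is simple; here a plain dict stands in for the Redis client:

```python
import json

def save_history(store, key, message_dicts):
    # message_dicts: e.g. the output of langchain's messages_to_dict(...)
    store[key] = json.dumps(message_dicts)

def load_history(store, key):
    # Feed the result to messages_from_dict(...) before building
    # ChatMessageHistory; a missing key means an empty history.
    raw = store.get(key)
    return json.loads(raw) if raw else []
```

There may also be a built-in `RedisChatMessageHistory` in `langchain.memory` that handles this per-user keying for you; worth checking before rolling your own.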
### Suggestion:
_No response_ | Issue: Unable to store user specific chat history in redis. Using ConversationalRetrievalChain along with ConversationBufferMemory | https://api.github.com/repos/langchain-ai/langchain/issues/6872/comments | 5 | 2023-06-28T13:57:42Z | 2024-02-01T09:29:07Z | https://github.com/langchain-ai/langchain/issues/6872 | 1,779,010,819 | 6,872 |
[
"langchain-ai",
"langchain"
] | ### System Info
While trying to run example from https://python.langchain.com/docs/modules/chains/popular/sqlite I get the following error:
````
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[42], line 1
----> 1 db_chain.run("What is the expected assisted goals by Erling Haaland")
File ~/.local/lib/python3.10/site-packages/langchain/chains/base.py:273, in Chain.run(self, callbacks, tags, *args, **kwargs)
271 if len(args) != 1:
272 raise ValueError("`run` supports only one positional argument.")
--> 273 return self(args[0], callbacks=callbacks, tags=tags)[_output_key]
275 if kwargs and not args:
276 return self(kwargs, callbacks=callbacks, tags=tags)[_output_key]
File ~/.local/lib/python3.10/site-packages/langchain/chains/base.py:149, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, include_run_info)
147 except (KeyboardInterrupt, Exception) as e:
148 run_manager.on_chain_error(e)
--> 149 raise e
150 run_manager.on_chain_end(outputs)
151 final_outputs: Dict[str, Any] = self.prep_outputs(
152 inputs, outputs, return_only_outputs
153 )
File ~/.local/lib/python3.10/site-packages/langchain/chains/base.py:143, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, include_run_info)
137 run_manager = callback_manager.on_chain_start(
138 dumpd(self),
139 inputs,
140 )
141 try:
142 outputs = (
--> 143 self._call(inputs, run_manager=run_manager)
144 if new_arg_supported
145 else self._call(inputs)
146 )
147 except (KeyboardInterrupt, Exception) as e:
148 run_manager.on_chain_error(e)
File ~/.local/lib/python3.10/site-packages/langchain/chains/sql_database/base.py:105, in SQLDatabaseChain._call(self, inputs, run_manager)
103 # If not present, then defaults to None which is all tables.
104 table_names_to_use = inputs.get("table_names_to_use")
--> 105 table_info = self.database.get_table_info(table_names=table_names_to_use)
106 llm_inputs = {
107 "input": input_text,
108 "top_k": str(self.top_k),
(...)
111 "stop": ["\nSQLResult:"],
112 }
113 intermediate_steps: List = []
File ~/.local/lib/python3.10/site-packages/langchain/sql_database.py:289, in SQLDatabase.get_table_info(self, table_names)
287 table_info += f"\n{self._get_table_indexes(table)}\n"
288 if self._sample_rows_in_table_info:
--> 289 table_info += f"\n{self._get_sample_rows(table)}\n"
290 if has_extra_info:
291 table_info += "*/"
File ~/.local/lib/python3.10/site-packages/langchain/sql_database.py:311, in SQLDatabase._get_sample_rows(self, table)
308 try:
309 # get the sample rows
310 with self._engine.connect() as connection:
--> 311 sample_rows_result = connection.execute(command) # type: ignore
312 # shorten values in the sample rows
313 sample_rows = list(
314 map(lambda ls: [str(i)[:100] for i in ls], sample_rows_result)
315 )
File /opt/conda/lib/python3.10/site-packages/sqlalchemy/engine/base.py:1413, in Connection.execute(self, statement, parameters, execution_options)
1411 raise exc.ObjectNotExecutableError(statement) from err
1412 else:
-> 1413 return meth(
1414 self,
1415 distilled_parameters,
1416 execution_options or NO_OPTIONS,
1417 )
File /opt/conda/lib/python3.10/site-packages/sqlalchemy/sql/elements.py:483, in ClauseElement._execute_on_connection(self, connection, distilled_params, execution_options)
481 if TYPE_CHECKING:
482 assert isinstance(self, Executable)
--> 483 return connection._execute_clauseelement(
484 self, distilled_params, execution_options
485 )
486 else:
487 raise exc.ObjectNotExecutableError(self)
File /opt/conda/lib/python3.10/site-packages/sqlalchemy/engine/base.py:1629, in Connection._execute_clauseelement(self, elem, distilled_parameters, execution_options)
1621 schema_translate_map = execution_options.get(
1622 "schema_translate_map", None
1623 )
1625 compiled_cache: Optional[CompiledCacheType] = execution_options.get(
1626 "compiled_cache", self.engine._compiled_cache
1627 )
-> 1629 compiled_sql, extracted_params, cache_hit = elem._compile_w_cache(
1630 dialect=dialect,
1631 compiled_cache=compiled_cache,
1632 column_keys=keys,
1633 for_executemany=for_executemany,
1634 schema_translate_map=schema_translate_map,
1635 linting=self.dialect.compiler_linting | compiler.WARN_LINTING,
1636 )
1637 ret = self._execute_context(
1638 dialect,
1639 dialect.execution_ctx_cls._init_compiled,
(...)
1647 cache_hit=cache_hit,
1648 )
1649 if has_events:
File /opt/conda/lib/python3.10/site-packages/sqlalchemy/sql/elements.py:684, in ClauseElement._compile_w_cache(self, dialect, compiled_cache, column_keys, for_executemany, schema_translate_map, **kw)
682 else:
683 extracted_params = None
--> 684 compiled_sql = self._compiler(
685 dialect,
686 cache_key=elem_cache_key,
687 column_keys=column_keys,
688 for_executemany=for_executemany,
689 schema_translate_map=schema_translate_map,
690 **kw,
691 )
693 if not dialect._supports_statement_cache:
694 cache_hit = dialect.NO_DIALECT_SUPPORT
File /opt/conda/lib/python3.10/site-packages/sqlalchemy/sql/elements.py:288, in CompilerElement._compiler(self, dialect, **kw)
286 if TYPE_CHECKING:
287 assert isinstance(self, ClauseElement)
--> 288 return dialect.statement_compiler(dialect, self, **kw)
File ~/.local/lib/python3.10/site-packages/pybigquery/sqlalchemy_bigquery.py:137, in BigQueryCompiler.__init__(self, dialect, statement, column_keys, inline, **kwargs)
135 if isinstance(statement, Column):
136 kwargs['compile_kwargs'] = util.immutabledict({'include_table': False})
--> 137 super(BigQueryCompiler, self).__init__(dialect, statement, column_keys, inline, **kwargs)
TypeError: SQLCompiler.__init__() got multiple values for argument 'cache_key'
````
I use a VertexAI model (MODEL_TEXT_BISON_001) as the LLM.
Some essential library versions:
langchain == 0.0.206
SQLAlchemy == 2.0.11
ipython == 8.12.1
python == 3.10.10
google-cloud-bigquery == 3.10.0
google-cloud-bigquery-storage == 2.16.2
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
db = SQLDatabase.from_uri(f"bigquery://{project_id}/{dataset}")
toolkit = SQLDatabaseToolkit(llm=llm, db=db)
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
db_chain.run("What is the sport that generates highest revenue")
```
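A likely cause (an assumption from the traceback, not confirmed): the `pybigquery` dialect in the stack is the deprecated package, which predates the SQLAlchemy 2.x compiler signature; `sqlalchemy-bigquery` is its maintained replacement. A small stdlib sketch to check which dialect and versions are actually installed:

```python
from importlib.metadata import PackageNotFoundError, version

def installed_version(package: str):
    """Return the installed version string, or None if absent."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

for pkg in ("pybigquery", "sqlalchemy-bigquery", "sqlalchemy"):
    print(pkg, "->", installed_version(pkg) or "not installed")
```

If `pybigquery` shows up alongside SQLAlchemy 2.x, swapping it for `sqlalchemy-bigquery` (or pinning SQLAlchemy to a version `pybigquery` supports) is worth trying.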
### Expected behavior
When running the db_chain, I expect to get an answer from the bigquery database | SQLDatabaseChain - Questions/error | https://api.github.com/repos/langchain-ai/langchain/issues/6870/comments | 19 | 2023-06-28T13:24:59Z | 2024-08-02T23:29:36Z | https://github.com/langchain-ai/langchain/issues/6870 | 1,778,943,766 | 6,870 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
How can I track token usage for OpenAIEmbeddings? It seems that get_openai_callback always returns 0.
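As far as I can tell, the OpenAI callback instruments completion/chat calls, so embedding requests show up as zero; counting the tokens yourself is a workaround. In the sketch below, `encode` stands in for a real tokenizer such as `tiktoken.encoding_for_model("text-embedding-ada-002").encode` (assumed available in your environment):

```python
def count_embedding_tokens(texts, encode):
    """Sum token counts over all strings sent to the embedding endpoint."""
    return sum(len(encode(text)) for text in texts)
```

The total can then be multiplied by the per-token embedding price to estimate cost.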
### Suggestion:
_No response_ | how to track token usage for OpenAIEmbeddings? it seems that get_openai_callback always return 0. | https://api.github.com/repos/langchain-ai/langchain/issues/6869/comments | 3 | 2023-06-28T13:14:02Z | 2023-10-06T16:06:44Z | https://github.com/langchain-ai/langchain/issues/6869 | 1,778,919,115 | 6,869 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Is there any way to store the word2vec/glove/fasttext based embeddings in the vector database using langchain
```
pages= "page content"
embeddings = OpenAIEmbeddings()
persist_directory = 'db'
vectordb = Chroma.from_documents(documents=pages, embedding=embeddings, persist_directory=persist_directory)
```
shall use **word2vec/glove/fasttext** embeddings instead of **OpenAIEmbeddings()** in the above code?
if possible then what is the syntax for that?
### Motivation
For using native embedding formats
### Your contribution
For using native embedding formats like word2vec/glove/fasttext in langchain | Word2vec/Glove/FastText embedding support | https://api.github.com/repos/langchain-ai/langchain/issues/6868/comments | 2 | 2023-06-28T12:26:22Z | 2024-01-30T00:45:35Z | https://github.com/langchain-ai/langchain/issues/6868 | 1,778,835,710 | 6,868 |
[
"langchain-ai",
"langchain"
] | ### Feature request
PGVector lacks Upsert and deletion capabilities.
I have to implement this functionality myself.
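For reference, the semantics I'm after, sketched with an in-memory dict standing in for the pgvector-backed table (on Postgres this maps to `INSERT ... ON CONFLICT (id) DO UPDATE` for upsert and `DELETE ... WHERE id = ANY(%s)` for deletion):

```python
class VectorTable:
    """In-memory stand-in for the pgvector table; illustrative only."""

    def __init__(self):
        self.rows = {}

    def upsert(self, doc_id, embedding, metadata=None):
        # Overwrites an existing row instead of inserting a duplicate.
        self.rows[doc_id] = {"embedding": embedding, "metadata": metadata or {}}

    def delete(self, doc_ids):
        # Deleting an unknown id is a no-op, matching SQL DELETE semantics.
        for doc_id in doc_ids:
            self.rows.pop(doc_id, None)
```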
### Motivation
I want to use PGVector because it's easy to set up and I don't have to deal with dedicated vector DB providers.
### Your contribution
If you deem this useful, I will try to propose a pull request. | PGVector is Lacking Basic Features | https://api.github.com/repos/langchain-ai/langchain/issues/6866/comments | 7 | 2023-06-28T11:32:05Z | 2024-01-25T11:42:00Z | https://github.com/langchain-ai/langchain/issues/6866 | 1,778,756,225 | 6,866
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```python
db_chain = SQLDatabaseChain(llm=llm, database=db, prompt=PROMPT, verbose=True, return_intermediate_steps=True, top_k=3)
result = db_chain("Make a list of those taking the exam")
```
`result['result']` is incomplete. Why? Is it hitting a token limit?
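Truncated output like this is usually the completion hitting the model's maximum output tokens (e.g. the `max_tokens` setting on the LLM), not a chain bug; raising that limit or requesting fewer rows via `top_k` typically helps. A crude heuristic to flag probably-truncated answers (a sketch, not part of LangChain):

```python
def looks_truncated(text: str) -> bool:
    """Heuristic: a completed answer usually ends with closing punctuation."""
    text = text.rstrip()
    return bool(text) and text[-1] not in ".!?\"')]"
```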
### Suggestion:
_No response_ | sqldatabasechain result incomplete | https://api.github.com/repos/langchain-ai/langchain/issues/6861/comments | 8 | 2023-06-28T08:26:32Z | 2023-12-08T16:06:35Z | https://github.com/langchain-ai/langchain/issues/6861 | 1,778,460,142 | 6,861 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.217
python=3.10
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Hi! I am working on a 'question answering' use case. I am loading PDF docs which I am storing in the Chroma vectorDB along with the instructor embeddings. I use the following command to do that:
```
vectordb = Chroma.from_documents(documents=texts,
embedding=embedding,
persist_directory='db')
```
Here I am storing the vectordb on my local machine in the 'db' folder.
When I use this vectordb as retriever and then use RetrievalQA to ask questions I get 'X' answers.
After storing the vectordb on my local, the next time I directly load the db from the directory:
`vectordb = Chroma(persist_directory='db', embedding_function=embedding)`
When I use this vectordb as retriever and then use RetrievalQA to ask the same questions, I get different answers.
I hope I was able to explain the issue properly.
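Two common causes worth ruling out (assumptions, since I can't see the full script): older Chroma versions need an explicit `vectordb.persist()` before the data is durable, and calling `Chroma.from_documents` again over an existing `persist_directory` adds the documents a second time, so retrieval draws from duplicates and the top-k context shifts. Deduplicating by content hash before (re)ingesting is a cheap guard:

```python
import hashlib

def dedupe_texts(texts):
    """Drop exact duplicate strings, preserving first-seen order."""
    seen, unique = set(), []
    for text in texts:
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(text)
    return unique
```

Comparing `vectordb._collection.count()` (or the equivalent count API) before and after reload would confirm whether duplication is happening.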
### Expected behavior
My understanding is that I should get the same answer after loading the vectordb from my local. Why am I getting different answers? Is this an issue with langchain or am I doing this incorrectly? Can you please help me understand? | Different results when loading Chroma() vs Chroma.from_documents | https://api.github.com/repos/langchain-ai/langchain/issues/6854/comments | 6 | 2023-06-28T04:06:56Z | 2023-12-14T09:25:45Z | https://github.com/langchain-ai/langchain/issues/6854 | 1,778,137,863 | 6,854 |
[
"langchain-ai",
"langchain"
] | ### System Info
TL;DR
The error is reported in the error reproduction section.
Here's a guess at the solution:
HuggingFaceTextGenInference [docs](https://python.langchain.com/docs/modules/model_io/models/llms/integrations/huggingface_textgen_inference) and [code](https://github.com/hwchase17/langchain/blob/master/langchain/llms/huggingface_text_gen_inference.py#L77-L90) don't yet support [huggingface's native max_length generation kwarg](https://huggingface.co/docs/transformers/main_classes/text_generation#transformers.GenerationConfig.max_length)
I'm guessing that adding max_length below max_new_tokens in places like [here](https://github.com/hwchase17/langchain/blob/master/langchain/llms/huggingface_text_gen_inference.py#L142) would provide the desired behavior? Searching for max_length shows other places where the addition may be required.
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The code snippet below works for local models
```
pipe = pipeline("text-generation", model=hf_llm, tokenizer=tokenizer, max_new_tokens=200)
llm = HuggingFacePipeline(pipeline=pipe)
chain = RetrievalQAWithSourcesChain.from_chain_type(
llm, chain_type="stuff", retriever=db.as_retriever()
)
chain(
{"question": "What did the president say about Justice Breyer"},
return_only_outputs=True,
)
```
However, when replacing the llm definition with this snippet
```
llm = HuggingFaceTextGenInference(
inference_server_url="http://hf-inference-server:80/",
max_new_tokens=256,
top_k=10,
top_p=0.95,
typical_p=0.95,
temperature=0.01,
repetition_penalty=1.03,
)
```
Yields this error
```
ValidationError: Input validation error: `inputs` tokens + `max_new_tokens` must be <= 1512. Given: 2342 `inputs`
tokens and 512 `max_new_tokens`
```
The code snippet that fails here works on its own when used like this:
`generated_text = llm("<|prompter|>What is the capital of Hungary?<|endoftext|><|assistant|>")`
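Until the wrapper exposes `max_length`, truncating the prompt client-side keeps `inputs + max_new_tokens` under the server limit. A sketch over token lists (the tokenization itself is model-specific and assumed done elsewhere):

```python
def fit_prompt_tokens(tokens, server_limit, max_new_tokens):
    """Keep the most recent tokens so inputs + max_new_tokens <= server_limit."""
    budget = server_limit - max_new_tokens
    if budget <= 0:
        return []
    return tokens[-budget:]
```

Keeping the tail of the prompt (rather than the head) preserves the question in retrieval-QA prompts, though it may drop earlier context documents.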
### Expected behavior
Expecting a text based answer with no error. | max_length support for HuggingFaceTextGenInference | https://api.github.com/repos/langchain-ai/langchain/issues/6851/comments | 6 | 2023-06-28T01:07:18Z | 2023-12-13T16:38:12Z | https://github.com/langchain-ai/langchain/issues/6851 | 1,777,979,636 | 6,851 |
[
"langchain-ai",
"langchain"
] | ### System Info
❯ pip list |grep unstructured
unstructured 0.7.9
❯ pip list |grep langchain
langchain 0.0.215
langchainplus-sdk 0.0.17
### Who can help?
_No response_
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.document_loaders import UnstructuredFileLoader
loader = UnstructuredFileLoader("../modules/tk.txt")
document = loader.load()
```
Error:
```
UnpicklingError Traceback (most recent call last)
Cell In[11], line 3
1 from langchain.document_loaders import UnstructuredFileLoader
2 loader = UnstructuredFileLoader("../modules/tk.txt")
----> 3 document = loader.load()
File ~/micromamba/envs/openai/lib/python3.11/site-packages/langchain/document_loaders/unstructured.py:71, in UnstructuredBaseLoader.load(self)
69 def load(self) -> List[Document]:
70 """Load file."""
---> 71 elements = self._get_elements()
72 if self.mode == "elements":
73 docs: List[Document] = list()
File ~/micromamba/envs/openai/lib/python3.11/site-packages/langchain/document_loaders/unstructured.py:133, in UnstructuredFileLoader._get_elements(self)
130 def _get_elements(self) -> List:
131 from unstructured.partition.auto import partition
--> 133 return partition(filename=self.file_path, **self.unstructured_kwargs)
File ~/micromamba/envs/openai/lib/python3.11/site-packages/unstructured/partition/auto.py:193, in partition(filename, content_type, file, file_filename, url, include_page_breaks, strategy, encoding, paragraph_grouper, headers, ssl_verify, ocr_languages, pdf_infer_table_structure, xml_keep_tags, data_source_metadata, **kwargs)
183 elements = partition_image(
184 filename=filename, # type: ignore
185 file=file, # type: ignore
(...)
190 **kwargs,
191 )
192 elif filetype == FileType.TXT:
--> 193 elements = partition_text(
194 filename=filename,
195 file=file,
196 encoding=encoding,
197 paragraph_grouper=paragraph_grouper,
198 **kwargs,
199 )
200 elif filetype == FileType.RTF:
201 elements = partition_rtf(
202 filename=filename,
203 file=file,
204 include_page_breaks=include_page_breaks,
205 **kwargs,
206 )
File ~/micromamba/envs/openai/lib/python3.11/site-packages/unstructured/documents/elements.py:118, in process_metadata.<locals>.decorator.<locals>.wrapper(*args, **kwargs)
116 @wraps(func)
117 def wrapper(*args, **kwargs):
--> 118 elements = func(*args, **kwargs)
119 sig = inspect.signature(func)
120 params = dict(**dict(zip(sig.parameters, args)), **kwargs)
File ~/micromamba/envs/openai/lib/python3.11/site-packages/unstructured/file_utils/filetype.py:493, in add_metadata_with_filetype.<locals>.decorator.<locals>.wrapper(*args, **kwargs)
491 @wraps(func)
492 def wrapper(*args, **kwargs):
--> 493 elements = func(*args, **kwargs)
494 sig = inspect.signature(func)
495 params = dict(**dict(zip(sig.parameters, args)), **kwargs)
File ~/micromamba/envs/openai/lib/python3.11/site-packages/unstructured/partition/text.py:92, in partition_text(filename, file, text, encoding, paragraph_grouper, metadata_filename, include_metadata, **kwargs)
89 ctext = ctext.strip()
91 if ctext:
---> 92 element = element_from_text(ctext)
93 element.metadata = metadata
94 elements.append(element)
File ~/micromamba/envs/openai/lib/python3.11/site-packages/unstructured/partition/text.py:104, in element_from_text(text)
102 elif is_us_city_state_zip(text):
103 return Address(text=text)
--> 104 elif is_possible_narrative_text(text):
105 return NarrativeText(text=text)
106 elif is_possible_title(text):
File ~/micromamba/envs/openai/lib/python3.11/site-packages/unstructured/partition/text_type.py:86, in is_possible_narrative_text(text, cap_threshold, non_alpha_threshold, language, language_checks)
83 if under_non_alpha_ratio(text, threshold=non_alpha_threshold):
84 return False
---> 86 if (sentence_count(text, 3) < 2) and (not contains_verb(text)) and language == "en":
87 trace_logger.detail(f"Not narrative. Text does not contain a verb:\n\n{text}") # type: ignore # noqa: E501
88 return False
File ~/micromamba/envs/openai/lib/python3.11/site-packages/unstructured/partition/text_type.py:189, in contains_verb(text)
186 if text.isupper():
187 text = text.lower()
--> 189 pos_tags = pos_tag(text)
190 return any(tag in POS_VERB_TAGS for _, tag in pos_tags)
File ~/micromamba/envs/openai/lib/python3.11/site-packages/unstructured/nlp/tokenize.py:57, in pos_tag(text)
55 for sentence in sentences:
56 tokens = _word_tokenize(sentence)
---> 57 parts_of_speech.extend(_pos_tag(tokens))
58 return parts_of_speech
File ~/micromamba/envs/openai/lib/python3.11/site-packages/nltk/tag/__init__.py:165, in pos_tag(tokens, tagset, lang)
140 def pos_tag(tokens, tagset=None, lang="eng"):
141 """
142 Use NLTK's currently recommended part of speech tagger to
143 tag the given list of tokens.
(...)
163 :rtype: list(tuple(str, str))
164 """
--> 165 tagger = _get_tagger(lang)
166 return _pos_tag(tokens, tagset, tagger, lang)
File ~/micromamba/envs/openai/lib/python3.11/site-packages/nltk/tag/__init__.py:107, in _get_tagger(lang)
105 tagger.load(ap_russian_model_loc)
106 else:
--> 107 tagger = PerceptronTagger()
108 return tagger
File ~/micromamba/envs/openai/lib/python3.11/site-packages/nltk/tag/perceptron.py:169, in PerceptronTagger.__init__(self, load)
165 if load:
166 AP_MODEL_LOC = "file:" + str(
167 find("taggers/averaged_perceptron_tagger/" + PICKLE)
168 )
--> 169 self.load(AP_MODEL_LOC)
File ~/micromamba/envs/openai/lib/python3.11/site-packages/nltk/tag/perceptron.py:252, in PerceptronTagger.load(self, loc)
246 def load(self, loc):
247 """
248 :param loc: Load a pickled model at location.
249 :type loc: str
250 """
--> 252 self.model.weights, self.tagdict, self.classes = load(loc)
253 self.model.classes = self.classes
File ~/micromamba/envs/openai/lib/python3.11/site-packages/nltk/data.py:755, in load(resource_url, format, cache, verbose, logic_parser, fstruct_reader, encoding)
753 resource_val = opened_resource.read()
754 elif format == "pickle":
--> 755 resource_val = pickle.load(opened_resource)
756 elif format == "json":
757 import json
UnpicklingError: pickle data was truncated
```
how to fix it
### Expected behavior
no | UnpicklingError: pickle data was truncated | https://api.github.com/repos/langchain-ai/langchain/issues/6850/comments | 2 | 2023-06-27T23:10:44Z | 2023-10-05T16:07:05Z | https://github.com/langchain-ai/langchain/issues/6850 | 1,777,879,256 | 6,850 |
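The `pickle data was truncated` error usually means the `averaged_perceptron_tagger` file under `nltk_data` is incomplete, typically from an interrupted download; deleting it and re-running `nltk.download("averaged_perceptron_tagger")` is the common fix. A small sketch of checking whether a pickled model file on disk is intact before re-downloading (the helper name is illustrative):

```python
import pickle

def pickle_is_complete(path: str) -> bool:
    """Return True if the file at `path` unpickles fully (illustrative helper)."""
    try:
        with open(path, "rb") as f:
            pickle.load(f)
        return True
    except (pickle.UnpicklingError, EOFError):
        # Truncated or corrupt pickle data on disk.
        return False
```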
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I tried to install langchain[llms] with pip on Windows 11. The installation did not throw any errors. But trying to import langchain in a python script gives the following error:
from numexpr.interpreter import MAX_THREADS, use_vml, __BLOCK_SIZE1__
ImportError: DLL load failed while importing interpreter: The specified module could not be found.
pip list shows that numexpr=2.8.4 is installed.
Uninstalling and reinstalling numexpr did not help.
Since I had another machine where langchain was working, I was able to pinpoint the difference: that machine had Visual Studio installed with C compilation capabilities.
After installing Visual Studio on the second machine, uninstalling and reinstalling numexpr fixed it.
But since the documentation does not explain that Visual Studio is required on Windows, I was confused and it took me a while to figure this out. It is also odd that the installation reported no errors but still didn't work; maybe someone can look into that.
Please delete the issue if this is not the right place.
### Suggestion:
_No response_ | Issue: Installing langchain[llms] is really difficult | https://api.github.com/repos/langchain-ai/langchain/issues/6848/comments | 2 | 2023-06-27T23:01:43Z | 2023-10-05T16:07:28Z | https://github.com/langchain-ai/langchain/issues/6848 | 1,777,867,466 | 6,848 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Have callbacks as an argument for BaseConversationalRetrievalChain._get_docs method and BaseRetriever.get_relevant_documents
### Motivation
I am using a custom retriever which has multiple intermediate steps, and I would like to store info from some of these steps for debugging and subsequent analyses. This information is specific to the request, and it cannot be stored at an individual document level.
### Your contribution
I think this can be addressed by having an option to pass callbacks to the BaseConversationalRetrievalChain._get_docs method and BaseRetriever.get_relevant_documents. BaseCallbackHandler may have to be modified too to extend a new class RetrieverManagerMixin which can contain methods like on_retriever_start and on_retriever_end. | Callbacks for retriever | https://api.github.com/repos/langchain-ai/langchain/issues/6846/comments | 1 | 2023-06-27T22:15:16Z | 2023-10-05T16:08:00Z | https://github.com/langchain-ai/langchain/issues/6846 | 1,777,829,041 | 6,846 |
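A minimal sketch of what the proposed hooks could look like. Note that `on_retriever_start`/`on_retriever_end` and both classes below are this issue's proposal rendered as plain Python, not an existing LangChain API:

```python
# Sketch of the proposed RetrieverManagerMixin-style hooks. A handler
# records per-request intermediate info that a custom retriever emits.
class RecordingRetrieverCallback:
    def __init__(self):
        self.events = []

    def on_retriever_start(self, query, **kwargs):
        self.events.append(("start", query))

    def on_retriever_end(self, documents, **kwargs):
        self.events.append(("end", len(documents)))


class MyRetriever:
    """Stand-in for a custom retriever with intermediate steps."""

    def get_relevant_documents(self, query, callbacks=None):
        for cb in callbacks or []:
            cb.on_retriever_start(query)
        docs = [f"doc about {query}"]  # placeholder retrieval step
        for cb in callbacks or []:
            cb.on_retriever_end(docs)
        return docs
```

With this shape, `BaseConversationalRetrievalChain._get_docs` would only need to forward its `callbacks` argument down to the retriever.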
[
"langchain-ai",
"langchain"
] | ### Feature request
Currently, `langchain 0.0.217 depends on pydantic<2 and >=1`. Pydantic v2 is re-written in Rust and is between 5-50x faster than v1 depending on the use case. Given how much LangChain relies on Pydantic for both modeling and functional components, and given that FastAPI is now supporting (in beta) Pydantic v2, it'd be great to see LangChain handle a user-specified installation of Pydantic above v2.
The following is an example of what happens when a user specifies installing Pydantic above v2.
```bash
The conflict is caused by:
The user requested pydantic==2.0b2
fastapi 0.100.0b1 depends on pydantic!=1.8, !=1.8.1, <3.0.0 and >=1.7.4
inflect 6.0.4 depends on pydantic>=1.9.1
langchain 0.0.217 depends on pydantic<2 and >=1
```
### Motivation
Pydantic v2 is re-written in Rust and is between 5-50x faster than v1 depending on the use case. Given how much LangChain relies on Pydantic for both modeling and functional components, and given that FastAPI is now supporting (in beta) Pydantic v2, it'd be great to see LangChain handle a user-specified installation of Pydantic above v2.
### Your contribution
Yes! I'm currently opening just an issue to document my request, and because I'm fairly backlogged. But I have contributed to LangChain in the past and would love to write a pull request to facilitate this in full. | Support for Pydantic v2 | https://api.github.com/repos/langchain-ai/langchain/issues/6841/comments | 30 | 2023-06-27T20:24:25Z | 2023-08-17T21:20:44Z | https://github.com/langchain-ai/langchain/issues/6841 | 1,777,698,608 | 6,841 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain git+https://github.com/hwchase17/langchain@8392ca602c03d3ae660d05981154f17ee0ad438e
Archcraft x86_64
Python 3.11.3
### Who can help?
@eyurtsev @dev2049
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Export the chat from WhatsApp, of a conversation with media and deleted messages.
2. The exported chat contains placeholder lines for deleted messages and for media omitted during the export. For example: `6/29/23, 12:16 am - User 4: This message was deleted` and `4/20/23, 9:42 am - User 3: <Media omitted>`.
3. Currently these messages are also processed and stored in the index.
### Expected behavior
We can avoid embedding these messages in the index. | WhatsappChatLoader doesn't ignore deleted messages and omitted media | https://api.github.com/repos/langchain-ai/langchain/issues/6838/comments | 1 | 2023-06-27T19:11:54Z | 2023-06-28T02:21:59Z | https://github.com/langchain-ai/langchain/issues/6838 | 1,777,599,060 | 6,838 |
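One way this could work is a post-load filter applied before indexing; the marker strings below match the examples in this issue (localized exports use different markers), and the function names are illustrative:

```python
# Drop WhatsApp export lines that carry no content worth embedding.
# Marker strings are taken from this issue's examples; other locales
# and WhatsApp versions emit different placeholder text.
SKIP_MARKERS = ("<Media omitted>", "This message was deleted")

def keep_message(line: str) -> bool:
    return not any(marker in line for marker in SKIP_MARKERS)

def filter_chat(lines):
    return [line for line in lines if keep_message(line)]
```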
[
"langchain-ai",
"langchain"
] | ### System Info
python - 3.9
langchain - 0.0.213
OS - Mac Monterey
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Code Snippet
```
from langchain import (
LLMMathChain,
OpenAI,
SerpAPIWrapper,
SQLDatabase,
SQLDatabaseChain,
)
from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType
from langchain.chat_models import ChatOpenAI
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA, ConversationChain
from langchain.agents import ZeroShotAgent, Tool, AgentExecutor
from langchain import OpenAI, LLMChain
from langchain.utilities import GoogleSearchAPIWrapper
from langchain import OpenAI, LLMMathChain, SerpAPIWrapper
from langchain.agents import initialize_agent, Tool
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory, ConversationSummaryMemory, ConversationBufferWindowMemory
import time
from chainlit import AskUserMessage, Message, on_chat_start
import random
from langchain.document_loaders import TextLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter, RecursiveCharacterTextSplitter
from langchain.vectorstores import Milvus
from langchain.docstore.document import Document
from langchain.document_loaders.csv_loader import CSVLoader
from langchain.document_loaders import WebBaseLoader
from langchain.document_loaders import UnstructuredHTMLLoader
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from pathlib import Path
from langchain.document_loaders import WebBaseLoader
from langchain.prompts import PromptTemplate
import json
import os
llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo", openai_api_key=openai_api_key)
llm_math_chain = LLMMathChain.from_llm(llm=llm, verbose=True)
conversational_memory = ConversationBufferWindowMemory(
memory_key='chat_history',
k=50,
return_messages=True,
input_key="question",
output_key='output'
)
# load will vector db
relevant_parts = []
for p in Path(".").absolute().parts:
relevant_parts.append(p)
if relevant_parts[-3:] == ["langchain", "docs", "modules"]:
break
def get_meta(data:str):
split_data = [item.split(":") for item in data.split("\n")]
# Creating a dictionary from the split data
result = {}
for item in split_data:
if len(item) > 1:
key = item[0].strip()
value = item[1].strip()
result[key] = value
# Converting the dictionary to JSON format
json_data = json.dumps(result)
return json_data
template = """You're an customer care representative
You have the following products in your store based on the customer question. Answer politely to customer questions on any questions on product sold in the wesbite and provide product details. Send the product webpage links for each recommendation
{context}
"chat_history": {chat_history}
Question: {question}
Answer:"""
products_path = "ecommerce__20200101_20200131__10k_data_10.csv"
loader = CSVLoader(products_path, csv_args={
'delimiter': ',',
'quotechar': '"',
})
documents = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=4000, chunk_overlap=0, length_function = len)
sources = text_splitter.split_documents(documents)
source_chunks = []
splitter = RecursiveCharacterTextSplitter(chunk_size=3000, chunk_overlap=100, length_function = len)
for source in sources:
for chunk in splitter.split_text(source.page_content):
chunk_metadata = json_meta = json.loads(get_meta(chunk))
source_chunks.append(Document(page_content=chunk, metadata=chunk_metadata))
embeddings = OpenAIEmbeddings(openai_api_key=openai_api_key, model="ada")
products_db = Chroma.from_documents(source_chunks, embeddings, collection_name="products")
prompt = PromptTemplate(template=template, input_variables=["context", "question", "chat_history"])
product_inquery_agent = RetrievalQA.from_chain_type(
llm=llm,
chain_type="stuff",
retriever=products_db.as_retriever(search_kwargs={"k": 4}),
chain_type_kwargs = {"prompt": prompt, "verbose": True, "memory": conversational_memory},
)
tools = [
Tool(
name="Product inquiries System",
func=product_inquery_agent.run,
description="useful for getting information about products, features, and specifications to make informed purchase decisions. Input should be a fully formed question.",
return_direct=True,
),
]
prefix = """Have a conversation with a human as a customer representative agent, answering the following questions as best you can in a polite and friendly manner. You have access to the following tools:"""
suffix = """Begin!"
{chat_history}
Question: {input}
{agent_scratchpad}"""
prompt = ZeroShotAgent.create_prompt(
tools,
prefix=prefix,
suffix=suffix,
input_variables=["input", "chat_history", "agent_scratchpad"],
)
llm_chain = LLMChain(llm=OpenAI(temperature=0, openai_api_key=openai_api_key), prompt=prompt)
agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True)
agent_chain = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True, memory=conversational_memory)
```
### Expected behavior
query = "what products are available in your website ?"
response = agent_chain.run(query)
print(response)
The response should print the captured output, but instead it throws a `KeyError` because the expected output key is not found:
``` > Entering new chain...
Thought: I need to find out what products are available
Action: Product inquiries System
Action Input: What products are available in your website?
> Entering new chain...
> Entering new chain...
Prompt after formatting:
You're an customer care representative
You have the following products in your store based on the customer question. Answer politely to customer questions on any questions on product sold in the wesbite and provide product details. Send the product webpage links for each recommendation
Uniq Id: cc2083338a16c3fe2f7895289d2e98fe
Product Name: ARTSCAPE Etched Glass 24" x 36" Window Film, 24-by-36-Inch
Brand Name:
Asin:
Category: Home & Kitchen | Home Décor | Window Treatments | Window Stickers & Films | Window Films
Upc Ean Code:
List Price:
Selling Price: $12.99
Quantity:
Model Number: 01-0121
About Product: Make sure this fits by entering your model number. | The visual effect of textured glass and stained glass | Creates privacy / Provides UV protection | No adhesives / Applies easily | Patterns repeat to cover any size window | Made in USA
Product Specification: ProductDimensions:72x36x0inches|ItemWeight:11.2ounces|ShippingWeight:12.2ounces(Viewshippingratesandpolicies)|Manufacturer:Artscape|ASIN:B000Q3PRYA|DomesticShipping:ItemcanbeshippedwithinU.S.|InternationalShipping:ThisitemcanbeshippedtoselectcountriesoutsideoftheU.S.LearnMore|ShippingAdvisory:Thisitemmustbeshippedseparatelyfromotheritemsinyourorder.Additionalshippingchargeswillnotapply.|Itemmodelnumber:01-0121
Technical Details: show up to 2 reviews by default Product Description Artscape window films create the look of stained and etched glass. These thin, translucent films provide privacy while still allowing natural light to enter the room. Artscape films are easily applied to any smooth glass surface. They do not use adhesives and are easily removed if needed. They can be trimmed or combined to fit any size window. The images have a repeating pattern left to right and top to bottom and can be used either vertically or horizontally. These films provide UV protection and are the perfect decorative accent for windows that require continued privacy. Artscape patented products are all made in the USA. From the Manufacturer Artscape window films create the look of stained and etched glass. These thin, translucent films provide privacy while still allowing natural light to enter the room. Artscape films are easily applied to any smooth glass surface. They do not use adhesives and are easily removed if needed. They can be trimmed or combined to fit any size window. The images have a repeating pattern left to right and top to bottom and can be used either vertically or horizontally. These films provide UV protection and are the perfect decorative accent for windows that require continued privacy. Artscape patented products are all made in the USA. | 12.2 ounces (View shipping rates and policies)
Shipping Weight: 12.2 ounces
Product Dimensions:
Image: https://images-na.ssl-images-amazon.com/images/I/51iAe3LF7FL.jpg|https://images-na.ssl-images-amazon.com/images/I/417B20oEehL.jpg|https://images-na.ssl-images-amazon.com/images/I/41J3bDg633L.jpg|https://images-na.ssl-images-amazon.com/images/I/51hhnuGWwdL.jpg|https://images-na.ssl-images-amazon.com/images/G/01/x-locale/common/transparent-pixel.jpg
Variants:
Sku:
Product Url: https://www.amazon.com/ARTSCAPE-Etched-Glass-Window-Film/dp/B000Q3PRYA
Stock:
Product Details:
Dimensions:
Color:
Ingredients:
Stock:
Product Details:
Dimensions:
Color:
Ingredients:
Direction To Use:
Is Amazon Seller: Y
Size Quantity Variant:
Product Description:
Uniq Id: f8c32a45e507a177992973cf0d46d20c
Product Name: Terra by Battat – 4 Dinosaur Toys, Medium – Dinosaurs for Kids & Collectors, Scientifically Accurate & Designed by A Paleo-Artist; Age 3+ (4 Pc)
Brand Name:
Asin:
Category:
Upc Ean Code:
List Price:
Selling Price: $18.66
Quantity:
Model Number: AN4054Z
About Product: Make sure this fits by entering your model number. | 4 medium-sized dinosaurs for kids, with lifelike pose, accurate ratio, and exquisitely detailed paint | Includes: Parasaurolophus walkeri, Stegosaurus ungulatus, Pachyrhinosaurus, and euoplocephalus tutus toy dinos | Museum Quality: classic toy dinosaurs designed by an internationally renowned paleo-artist | Educational toy: dinosaur toys for kids spark curiosity about paleontology, science, and natural History | Dimensions: Dimensions: miniature figurines measure 6.25-8.5 (L) 1.5-2.25 (W) 2.25-3.5 (H) inches approximately toys 6.25-8.5 (L) 1.5-2.25 (W) 2.25-3.5 (H) inches approximately | No batteries required, just imagination! | Earth-friendly recyclable packaging | Age: suggested for ages 3+ | Collect them all! Discover the entire Terra by Battat family of animal toy figurines and dinosaur playsets!
Product Specification: ProductDimensions:8.7x3.9x3.4inches|ItemWeight:15.8ounces|ShippingWeight:1.4pounds(Viewshippingratesandpolicies)|ASIN:B07PF1R8LS|Itemmodelnumber:AN4054Z|Manufacturerrecommendedage:36months-10years
Technical Details: Go to your orders and start the return Select the ship method Ship it! | Go to your orders and start the return Select the ship method Ship it! | show up to 2 reviews by default Look out! Here come 4 incredible dinosaur toys that look like the real thing! Sought after by both kids and collectors, These scientifically precise and very collectable dinosaur replicas also feature bright, beautiful colors and look amazing in any dinosaur playset. And, they're made with high quality material, so these dinos will never go extinct! The intimidating-looking, and instantly recognizable Stegosaurus was the largest of plate-backed prehistoric herbivores. Scientists have suggested that the Stegosaurus’ iconic plates were for defense against predators, but this is unlikely as the plates were quite thin. Others have said the Stegosaurus’ spiked tail may have been used as a weapon. Terra’s Stegosaurus toy accurately recreates the formidable size and terrifying presence of this iconic predator. In particular, the large, heavily built frame and rounded backs of the mighty Stegosaurus are faithfully depicted in the dinosaur replica. This highly collectable playset also includes a toy Parasaurolophus, Stegosaurus, Pachyrhinosaurus, and euoplocephalus tutus. Collect your own miniature versions of 4 powerful and iconic dinosaurs from Terra by Battat! | 1.4 pounds (View shipping rates and policies)
Shipping Weight: 1.4 pounds
Product Dimensions:
Uniq Id: e04b990e95bf73bbe6a3fa09785d7cd0
Product Name: Woodstock- Collage 500 pc Puzzle
Brand Name:
Asin:
Category: Toys & Games | Puzzles | Jigsaw Puzzles
Upc Ean Code:
List Price:
Selling Price: $17.49
Quantity:
Model Number: 62151
About Product: Make sure this fits by entering your model number. | Puzzle has 500 pieces | Completed puzzle measure 14 x 19 | 100% officially licensed merchandise | Great for fans & puzzlers alike
Product Specification: ProductDimensions:1.9x8x10inches|ItemWeight:13.4ounces|ShippingWeight:13.4ounces(Viewshippingratesandpolicies)|ASIN:B07MX21WWX|Itemmodelnumber:62151|Manufacturerrecommendedage:14yearsandup
Technical Details: show up to 2 reviews by default 100% Officially licensed merchandise; complete puzzle measures 14 x 19 in. | 13.4 ounces (View shipping rates and policies)
Shipping Weight: 13.4 ounces
Product Dimensions:
Image: https://images-na.ssl-images-amazon.com/images/I/61plo8Xv4vL.jpg|https://images-na.ssl-images-amazon.com/images/G/01/x-locale/common/transparent-pixel.jpg
Variants:
Sku:
Product Url: https://www.amazon.com/Woodstock-Collage-500-pc-Puzzle/dp/B07MX21WWX
Stock:
Product Details:
Dimensions:
Color:
Ingredients:
Direction To Use:
Is Amazon Seller: Y
Size Quantity Variant:
Product Description:
"chat_history": []
Question: What products are available in your website?
Answer:
> Finished chain.
> Finished chain.
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[42], line 2
1 query = "what products are available in your website ?"
----> 2 response = agent_chain.run(query)
3 print(response)
File /opt/miniconda3/envs/discord_dev/lib/python3.9/site-packages/langchain/chains/base.py:290, in Chain.run(self, callbacks, tags, *args, **kwargs)
288 if len(args) != 1:
289 raise ValueError("`run` supports only one positional argument.")
--> 290 return self(args[0], callbacks=callbacks, tags=tags)[_output_key]
292 if kwargs and not args:
293 return self(kwargs, callbacks=callbacks, tags=tags)[_output_key]
File /opt/miniconda3/envs/discord_dev/lib/python3.9/site-packages/langchain/chains/base.py:166, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, include_run_info)
164 except (KeyboardInterrupt, Exception) as e:
165 run_manager.on_chain_error(e)
--> 166 raise e
167 run_manager.on_chain_end(outputs)
168 final_outputs: Dict[str, Any] = self.prep_outputs(
169 inputs, outputs, return_only_outputs
170 )
File /opt/miniconda3/envs/discord_dev/lib/python3.9/site-packages/langchain/chains/base.py:160, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, include_run_info)
154 run_manager = callback_manager.on_chain_start(
155 dumpd(self),
156 inputs,
157 )
158 try:
159 outputs = (
--> 160 self._call(inputs, run_manager=run_manager)
161 if new_arg_supported
162 else self._call(inputs)
163 )
164 except (KeyboardInterrupt, Exception) as e:
165 run_manager.on_chain_error(e)
File /opt/miniconda3/envs/discord_dev/lib/python3.9/site-packages/langchain/agents/agent.py:987, in AgentExecutor._call(self, inputs, run_manager)
985 # We now enter the agent loop (until it returns something).
986 while self._should_continue(iterations, time_elapsed):
--> 987 next_step_output = self._take_next_step(
988 name_to_tool_map,
989 color_mapping,
990 inputs,
991 intermediate_steps,
992 run_manager=run_manager,
993 )
994 if isinstance(next_step_output, AgentFinish):
995 return self._return(
996 next_step_output, intermediate_steps, run_manager=run_manager
997 )
File /opt/miniconda3/envs/discord_dev/lib/python3.9/site-packages/langchain/agents/agent.py:850, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
848 tool_run_kwargs["llm_prefix"] = ""
849 # We then call the tool on the tool input to get an observation
--> 850 observation = tool.run(
851 agent_action.tool_input,
852 verbose=self.verbose,
853 color=color,
854 callbacks=run_manager.get_child() if run_manager else None,
855 **tool_run_kwargs,
856 )
857 else:
858 tool_run_kwargs = self.agent.tool_run_logging_kwargs()
File /opt/miniconda3/envs/discord_dev/lib/python3.9/site-packages/langchain/tools/base.py:299, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, **kwargs)
297 except (Exception, KeyboardInterrupt) as e:
298 run_manager.on_tool_error(e)
--> 299 raise e
300 else:
301 run_manager.on_tool_end(
302 str(observation), color=color, name=self.name, **kwargs
303 )
File /opt/miniconda3/envs/discord_dev/lib/python3.9/site-packages/langchain/tools/base.py:271, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, **kwargs)
268 try:
269 tool_args, tool_kwargs = self._to_args_and_kwargs(parsed_input)
270 observation = (
--> 271 self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
272 if new_arg_supported
273 else self._run(*tool_args, **tool_kwargs)
274 )
275 except ToolException as e:
276 if not self.handle_tool_error:
File /opt/miniconda3/envs/discord_dev/lib/python3.9/site-packages/langchain/tools/base.py:414, in Tool._run(self, run_manager, *args, **kwargs)
411 """Use the tool."""
412 new_argument_supported = signature(self.func).parameters.get("callbacks")
413 return (
--> 414 self.func(
415 *args,
416 callbacks=run_manager.get_child() if run_manager else None,
417 **kwargs,
418 )
419 if new_argument_supported
420 else self.func(*args, **kwargs)
421 )
File /opt/miniconda3/envs/discord_dev/lib/python3.9/site-packages/langchain/chains/base.py:290, in Chain.run(self, callbacks, tags, *args, **kwargs)
288 if len(args) != 1:
289 raise ValueError("`run` supports only one positional argument.")
--> 290 return self(args[0], callbacks=callbacks, tags=tags)[_output_key]
292 if kwargs and not args:
293 return self(kwargs, callbacks=callbacks, tags=tags)[_output_key]
File /opt/miniconda3/envs/discord_dev/lib/python3.9/site-packages/langchain/chains/base.py:166, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, include_run_info)
164 except (KeyboardInterrupt, Exception) as e:
165 run_manager.on_chain_error(e)
--> 166 raise e
167 run_manager.on_chain_end(outputs)
168 final_outputs: Dict[str, Any] = self.prep_outputs(
169 inputs, outputs, return_only_outputs
170 )
File /opt/miniconda3/envs/discord_dev/lib/python3.9/site-packages/langchain/chains/base.py:160, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, include_run_info)
154 run_manager = callback_manager.on_chain_start(
155 dumpd(self),
156 inputs,
157 )
158 try:
159 outputs = (
--> 160 self._call(inputs, run_manager=run_manager)
161 if new_arg_supported
162 else self._call(inputs)
163 )
164 except (KeyboardInterrupt, Exception) as e:
165 run_manager.on_chain_error(e)
File /opt/miniconda3/envs/discord_dev/lib/python3.9/site-packages/langchain/chains/retrieval_qa/base.py:120, in BaseRetrievalQA._call(self, inputs, run_manager)
117 question = inputs[self.input_key]
119 docs = self._get_docs(question)
--> 120 answer = self.combine_documents_chain.run(
121 input_documents=docs, question=question, callbacks=_run_manager.get_child()
122 )
124 if self.return_source_documents:
125 return {self.output_key: answer, "source_documents": docs}
File /opt/miniconda3/envs/discord_dev/lib/python3.9/site-packages/langchain/chains/base.py:293, in Chain.run(self, callbacks, tags, *args, **kwargs)
290 return self(args[0], callbacks=callbacks, tags=tags)[_output_key]
292 if kwargs and not args:
--> 293 return self(kwargs, callbacks=callbacks, tags=tags)[_output_key]
295 if not kwargs and not args:
296 raise ValueError(
297 "`run` supported with either positional arguments or keyword arguments,"
298 " but none were provided."
299 )
File /opt/miniconda3/envs/discord_dev/lib/python3.9/site-packages/langchain/chains/base.py:168, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, include_run_info)
166 raise e
167 run_manager.on_chain_end(outputs)
--> 168 final_outputs: Dict[str, Any] = self.prep_outputs(
169 inputs, outputs, return_only_outputs
170 )
171 if include_run_info:
172 final_outputs[RUN_KEY] = RunInfo(run_id=run_manager.run_id)
File /opt/miniconda3/envs/discord_dev/lib/python3.9/site-packages/langchain/chains/base.py:233, in Chain.prep_outputs(self, inputs, outputs, return_only_outputs)
231 self._validate_outputs(outputs)
232 if self.memory is not None:
--> 233 self.memory.save_context(inputs, outputs)
234 if return_only_outputs:
235 return outputs
File /opt/miniconda3/envs/discord_dev/lib/python3.9/site-packages/langchain/memory/chat_memory.py:34, in BaseChatMemory.save_context(self, inputs, outputs)
32 def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
33 """Save context from this conversation to buffer."""
---> 34 input_str, output_str = self._get_input_output(inputs, outputs)
35 self.chat_memory.add_user_message(input_str)
36 self.chat_memory.add_ai_message(output_str)
File /opt/miniconda3/envs/discord_dev/lib/python3.9/site-packages/langchain/memory/chat_memory.py:30, in BaseChatMemory._get_input_output(self, inputs, outputs)
28 else:
29 output_key = self.output_key
---> 30 return inputs[prompt_input_key], outputs[output_key]
KeyError: 'output' ``` | agent with memory unable to execute and throwing a output key error | https://api.github.com/repos/langchain-ai/langchain/issues/6837/comments | 5 | 2023-06-27T18:45:58Z | 2024-06-24T07:06:27Z | https://github.com/langchain-ai/langchain/issues/6837 | 1,777,551,770 | 6,837 |
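The failure is in the last frame: `BaseChatMemory` looks up `outputs[output_key]` with the memory's configured `output_key` (`'output'` here), but `RetrievalQA` returns its answer under `'result'` by default, so sharing one memory object between the agent and the retrieval chain mismatches keys. A simplified, dependency-free mirror of that final lookup (names follow the traceback; this is a sketch, not LangChain's code):

```python
# Simplified mirror of BaseChatMemory._get_input_output's final lookup:
# the memory reads inputs/outputs by its own configured key names.
def get_input_output(inputs, outputs, input_key, output_key):
    return inputs[input_key], outputs[output_key]
```

Pointing the memory's `output_key` at the key the chain actually emits, or giving each chain its own memory, avoids the mismatch.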
[
"langchain-ai",
"langchain"
] | ### Large observation handling limit.
Hey langchain community,
I have a tool that takes a database query as input and executes it against the database, similar to what `QuerySQLDataBaseTool` does. The problem is that the size of the query output is out of my control: it can be large enough that the agent exceeds the token limit.
The solutions I have tried:
1. Do pagination:
- Chunk the large output, summarize each chunk according to the target question
- Combine all chunks' summarization, which is much smaller than the original output.
Problems:
- Even though the summarization is guided by the target question, it still loses information.
- The pagination can be slow.
2. Vectorization:
- Chunk the large output
- Embed each chunk and put them into a Vector DB.
- Do a similarity search based on the target question, and take number of chunks within the token limit.
Problems:
- The embedding take times, so it can be slow for a single thought.
- The output of the query is semantic continuously as a a whole, the chunks can break the semantic meaning.
Does anyone have a solution for this problem? I would appreciate any ideas!
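For option 2's selection step, a dependency-free sketch of "rank chunks against the question and keep as many as fit a token budget" (a toy word-overlap score stands in for embedding similarity; sizes and names are illustrative):

```python
# Toy sketch of option 2's selection step: rank chunks against the
# question and keep as many as fit within a token budget. A word-overlap
# score stands in for real embedding similarity.
def chunk(text, size=200):
    return [text[i:i + size] for i in range(0, len(text), size)]

def score(question, chunk_text):
    q = set(question.lower().split())
    return len(q & set(chunk_text.lower().split()))

def select_chunks(question, text, token_budget=400, size=200):
    ranked = sorted(chunk(text, size), key=lambda c: score(question, c), reverse=True)
    picked, used = [], 0
    for c in ranked:
        cost = len(c.split())  # crude token count
        if used + cost > token_budget:
            break
        picked.append(c)
        used += cost
    return picked
```

In practice the scoring function would be the vector-store similarity search, and `token_budget` whatever headroom the model's context window leaves after the prompt.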
### Suggestion:
_No response_ | Issue: Large observation handling limit | https://api.github.com/repos/langchain-ai/langchain/issues/6836/comments | 6 | 2023-06-27T17:57:22Z | 2023-11-08T16:08:35Z | https://github.com/langchain-ai/langchain/issues/6836 | 1,777,470,667 | 6,836 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version: 0.0.208
OS version: macOS 13.4
Python version: 3.10.12
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Queries without an `order by` clause aren't guaranteed to have any particular order.
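A likely fix is to make the SELECT deterministic by ordering on the serial primary key. A sketch of the idea (the `message_store` table and `id` column names are assumptions about the default schema, not the actual LangChain patch):

```python
# Hypothetical shape of the query PostgresChatMessageHistory could issue.
# Ordering by the serial primary key makes retrieval order deterministic.
table_name = "message_store"
query = (
    f"SELECT message FROM {table_name} "
    "WHERE session_id = %(session_id)s "
    "ORDER BY id;"
)
print(query)
```

Without the `ORDER BY`, PostgreSQL is free to return rows in any order, e.g. after vacuuming or parallel scans.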
### Expected behavior
Chat history should be in order. | PostgresChatMessageHistory message order isn't guaranteed | https://api.github.com/repos/langchain-ai/langchain/issues/6829/comments | 1 | 2023-06-27T15:22:50Z | 2023-06-30T17:13:58Z | https://github.com/langchain-ai/langchain/issues/6829 | 1,777,227,042 | 6,829 |
[
"langchain-ai",
"langchain"
] | ### System Info
Hello everyone,
I am currently utilizing the OpenAIFunctions agent along with some custom tools that I've developed. I'm trying to incorporate a custom property named `source_documents` into one of these tools. My intention is to assign a value to this property within the tool and subsequently utilize this value outside of the tool.
To illustrate, this is how I invoke my custom tool: `tools = [JamesAllenRetrievalSearchTool(source_documents), OrderTrackingTool()]`. Here, `source_documents `is the property that I wish to update within the `JamesAllenRetrievalSearchTool `class.
I attempted to create a constructor within the custom tool and pass the desired variable for updating, but unfortunately, this approach was unsuccessful. If anyone has knowledge of a solution to my problem, I would greatly appreciate your assistance.
Thank you!
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [x] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Here is my custom tool class:
```python
class JamesAllenRetrivalSearchTool(BaseTool):
    name = "jamesallen-search"
    description = "Use this tool as the primary source of context information. Always search for the answers using this tool first, don't make up answers yourself"
    return_direct = True
    args_schema: Type[BaseModel] = JamesAllenRetrivalSearchInput

    def _run(self, question: str, run_manager: Optional[CallbackManagerForToolRun] = None) -> str:
        knowledge_base_retrival_chain = RetrivalQAChain(prompt)
        result = knowledge_base_retrival_chain.run(question)
        # **HERE -> I need to update the returned source_documents from the chain outside the custom tool class**
        # source_documents = result["source_documents"]
        # self.retrival_source_docs = source_documents
        return result["result"]

    async def _arun(self, question: str, run_manager: Optional[AsyncCallbackManagerForToolRun] = None) -> str:
        """Use the tool asynchronously."""
        raise NotImplementedError("Not implemented")
```
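LangChain tools are pydantic models, so an attribute that isn't declared as a field usually can't just be assigned inside `_run`; the usual fix is to declare the field on the class (e.g. `source_documents: list = Field(default_factory=list)`) and assign to it there. The shape of that pattern, shown with a plain dataclass stand-in so it runs without LangChain (the class and helper names here are illustrative, not LangChain APIs):

```python
from dataclasses import dataclass, field

@dataclass
class RetrievalTool:
    # Declared, mutable state that survives the call and is readable from
    # outside the tool -- the same idea as declaring a pydantic field on a
    # BaseTool subclass.
    source_documents: list = field(default_factory=list)

    def run(self, question: str) -> str:
        # Stand-in for the retrieval chain call.
        result = {"result": f"answer to: {question}",
                  "source_documents": ["doc1", "doc2"]}
        self.source_documents = result["source_documents"]
        return result["result"]

tool = RetrievalTool()
answer = tool.run("What is the return policy?")
print(tool.source_documents)  # accessible after the call
```

After the agent runs, the caller that built the tool instance can read `tool.source_documents` directly.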
### Expected behavior
When I run the tool, I expect the answer from the chain to be returned and the source documents to be updated. | Custom tool class not working with extra properties | https://api.github.com/repos/langchain-ai/langchain/issues/6828/comments | 5 | 2023-06-27T14:54:25Z | 2024-07-29T22:20:46Z | https://github.com/langchain-ai/langchain/issues/6828 | 1,777,167,767 | 6,828
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version: 0.0.27 (installed via `pip install langchain`)
Python v: 3.8
OS: Windows 11
When I try to run `from langchain.llms import GPT4All`,
I am getting the error that says there is no Gpt4All module.
When I check the installed library, it does not exist. However, when I check the GitHub repo, it exists in the latest version as of today.
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.llms import GPT4All
model_path = r"C:\Users\suat.atan\AppData\Local\nomic.ai\GPT4All\ggml-gpt4all-j-v1.3-groovy.bin"
llm = GPT4All(model= model_path)
llm("Where is Paris?")
```
### Expected behavior
At least the first line of the code should work.
This code should answer my prompt | Import Error for Gpt4All | https://api.github.com/repos/langchain-ai/langchain/issues/6825/comments | 6 | 2023-06-27T13:31:27Z | 2023-11-28T16:10:20Z | https://github.com/langchain-ai/langchain/issues/6825 | 1,776,978,548 | 6,825 |
[
"langchain-ai",
"langchain"
] | ### System Info
Most recent version of langchain, 0.0.216.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Causes a Bazel build failure due to a small typo in the `office365/__init__ .py` filename (filenames cannot contain spaces).
### Expected behavior
Should rename `__init__ .py` to `__init__.py`.
[
"langchain-ai",
"langchain"
] | ### System Info
langchain v0.0.216, Python 3.11.3 on WSL2
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Follow the first example at https://python.langchain.com/docs/modules/chains/foundational/router
### Expected behavior
[This line](https://github.com/hwchase17/langchain/blob/v0.0.216/langchain/chains/llm.py#L275) gets triggered:
> The predict_and_parse method is deprecated, instead pass an output parser directly to LLMChain.
As suggested by the error, we can make the following code changes to pass the output parser directly to LLMChain by changing [this line](https://github.com/hwchase17/langchain/blob/v0.0.216/langchain/chains/router/llm_router.py#L83) to this:
```python
llm_chain = LLMChain(llm=llm, prompt=prompt, output_parser=prompt.output_parser)
```
And calling `LLMChain.__call__` instead of `LLMChain.predict_and_parse` by changing [these lines](https://github.com/hwchase17/langchain/blob/v0.0.216/langchain/chains/router/llm_router.py#L58-L61) to this:
```python
cast(
Dict[str, Any],
self.llm_chain(inputs, callbacks=callbacks),
)
```
Unfortunately, while this avoids the warning, it creates a new error:
> ValueError: Missing some output keys: {'destination', 'next_inputs'}
because LLMChain currently [assumes the existence of a single `self.output_key`](https://github.com/hwchase17/langchain/blob/v0.0.216/langchain/chains/llm.py#L220) and produces this as output:
> {'text': {'destination': 'physics', 'next_inputs': {'input': 'What is black body radiation?'}}}
Even modifying that function to return the keys if the parsed output is a dict triggers the same error, but for the missing key of "text" instead. `predict_and_parse` avoids this fate by skipping output validation entirely.
It appears changes may have to be a bit more involved here if LLMRouterChain is to keep using LLMChain. | LLMRouterChain uses deprecated predict_and_parse method | https://api.github.com/repos/langchain-ai/langchain/issues/6819/comments | 21 | 2023-06-27T11:45:08Z | 2024-02-29T01:21:01Z | https://github.com/langchain-ai/langchain/issues/6819 | 1,776,735,480 | 6,819 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I am proposing an enhancement for the `langchain` implementation of `qdrant`. As of the current version, `langchain` only supports single text searches. My feature proposal involves extending `langchain` to integrate the [search_batch](https://github.com/qdrant/qdrant-client/blob/master/qdrant_client/qdrant_client.py#L171) method from the `qdrant` client. This would allow us to conduct batch searches, increasing efficiency, and potentially speeding up the process for large volumes of text.
### Motivation
This feature request is born out of the need for more efficient text searches when dealing with large data sets. Currently, using `langchain` for search functionality becomes cumbersome and time-consuming due to the lack of batch search capabilities. Running single text searches one after another restricts the speed of operations and is not scalable when dealing with large text corpuses.
Given the increasing size and complexity of modern text datasets and applications, it is more pertinent than ever to have a robust and efficient method of search that can handle bulk operations. With this enhancement, we can perform multiple text searches simultaneously, thus saving considerable time and computing resources.
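A rough sketch of what a batch wrapper could look like on the LangChain side. This is illustrative only — the exact `search_batch` request shape is an assumption about the qdrant-client API, so the client is mocked with a stand-in so the sketch runs standalone:

```python
class FakeQdrantClient:
    # Stand-in for qdrant_client.QdrantClient.search_batch, which takes a
    # list of search requests and returns one result list per request.
    def search_batch(self, collection_name, requests):
        return [[f"hit-for-{r['query']}"] for r in requests]

def similarity_search_batch(client, collection_name, queries, limit=4):
    """Proposed batch counterpart to similarity_search (hypothetical name)."""
    requests = [{"query": q, "limit": limit} for q in queries]
    return client.search_batch(collection_name, requests)

print(similarity_search_batch(FakeQdrantClient(), "docs", ["a", "b"]))
# [['hit-for-a'], ['hit-for-b']]
```

The key point is that one round trip carries all the queries, instead of one round trip per query.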
### Your contribution
While I would love to contribute, I simply do not have the time right now, so I, therefore, hope that the great community, currently building `langchain` sees some sense in the above-mentioned paragraphs. | Batch search for Qdrant database | https://api.github.com/repos/langchain-ai/langchain/issues/6818/comments | 1 | 2023-06-27T11:37:00Z | 2023-10-05T16:08:11Z | https://github.com/langchain-ai/langchain/issues/6818 | 1,776,722,095 | 6,818 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.200 in Debian 11
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Pydantic Model:
```py
class SortCondition(BaseModel):
    field: str = Field(description="Field name")
    order: str = Field(description="Sort order", enum=["desc", "asc"])


class RecordQueryCondition(BaseModel):
    datasheet_id: str = Field(description="The ID of the datasheet to retrieve records from.")
    filter_condition: Optional[Dict[str, str]] = Field(
        description="""
        Find records that meet specific conditions.
        This object should contain a key-value pair where the key is the field name and the value is the lookup value. For instance: {"title": "test"}.
        """
    )
    sort_condition: Optional[List[SortCondition]] = Field(
        min_items=1,
        description="Sort returned records by specific field"
    )
    maxRecords_condition: Optional[int] = Field(
        description="Limit the number of returned values."
    )
```
OpenAI return parameters:
```json
{
"datasheet_id": "dsti6VpNpuKQpHVSnh",
"sort_condition": [
{
"field": "Timestamp",
"direction": "desc" # error key!
}
],
"maxRecords_condition": 1
}
```
So, LangChain raises an error:
ValidationError: 1 validation error for RecordQueryCondition
sort_condition -> 0 -> order
field required (type=value_error.missing)
This is the source code: https://github.com/xukecheng/APITable-LLMs-Enhancement-Experiments/blob/main/apitable_langchain_function.ipynb
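One defensive workaround (my own suggestion, not a LangChain feature) is to normalize the model's argument keys before pydantic validation, mapping known synonyms such as `direction` back to the declared field name:

```python
KEY_ALIASES = {"direction": "order"}  # synonyms the model tends to emit

def normalize_keys(obj):
    """Recursively rename aliased keys in the function-call arguments."""
    if isinstance(obj, dict):
        return {KEY_ALIASES.get(k, k): normalize_keys(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [normalize_keys(v) for v in obj]
    return obj

args = {"sort_condition": [{"field": "Timestamp", "direction": "desc"}]}
print(normalize_keys(args))
# {'sort_condition': [{'field': 'Timestamp', 'order': 'desc'}]}
```

This shim can sit between `json.loads` of the function-call arguments and the `RecordQueryCondition(**args)` validation step.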
### Expected behavior
I use the OpenAI API to define functions, so it will work properly: https://github.com/xukecheng/APITable-LLMs-Enhancement-Experiments/blob/main/apitable_openai_function.ipynb | Issue: The parameters passed by the OpenAI function agent seem to have a problem. | https://api.github.com/repos/langchain-ai/langchain/issues/6814/comments | 2 | 2023-06-27T10:01:30Z | 2023-07-04T07:42:15Z | https://github.com/langchain-ai/langchain/issues/6814 | 1,776,544,976 | 6,814 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi,
I was trying to get `PythonREPL `or in this case `PythonAstREPL `to work with the `OpenAIMultiFunctionsAgent` reliably, because I came across the same problem as mentioned in this issue: https://github.com/hwchase17/langchain/issues/6364.
I applied the mentioned fix, which worked very well for fixing the REPL tool, but sadly it also broke the usage of any other tools. The agent repeatedly reports `tool_selection is not a valid tool, try another one.`
This is my code to create the agent and tools, as well as to apply the fix (I'm using Chainlit for the UI):
```
@cl.langchain_factory(use_async=False)
def factory():
# Initialize the OpenAI language model
model = llms[str(use_model)]
llm = ChatOpenAI(
temperature=0,
model=model,
streaming=True,
client="openai",
# callbacks=[cl.ChainlitCallbackHandler()]
)
# Initialize the SerpAPIWrapper for search functionality
search = SerpAPIWrapper(search_engine="google")
# Define a list of tools offered by the agent
tools = [
Tool(
name="Search",
func=search.run,
description="Useful when you need to answer questions about current events or if you have to search the web. You should ask targeted questions like for google."
),
CustomPythonAstREPLTool(),
WriteFileTool(),
ReadFileTool(),
CopyFileTool(),
MoveFileTool(),
DeleteFileTool(),
FileSearchTool(),
ListDirectoryTool(),
ShellTool(),
HumanInputRun(),
]
memory = ConversationTokenBufferMemory(
memory_key="memory",
return_messages=True,
max_token_limit=2000,
llm=llm
)
# needed for memory with openai functions agent
agent_kwargs = {
"extra_prompt_messages": [MessagesPlaceholder(variable_name="memory")],
}
prompt = OpenAIFunctionsAgent.create_prompt(
extra_prompt_messages=[MessagesPlaceholder(variable_name="memory")],
),
print("Prompt: ", prompt[0])
cust_agent = CustomOpenAIMultiFunctionsAgent(
tools=tools,
llm=llm,
prompt=prompt[0],
# kwargs=agent_kwargs,
# return_intermediate_steps=True,
)
mrkl = AgentExecutor.from_agent_and_tools(
agent=cust_agent,
tools=tools,
memory=memory,
# kwargs=agent_kwargs,
# return_intermediate_steps=True,
)
return mrkl
# ----- Custom classes and functions ----- #
class CustomPythonAstREPLTool(PythonAstREPLTool):
name = "python"
description = (
"A Python shell. Use this to execute python commands. "
"The input must be an object as follows: "
"{'__arg1': 'a valid python command.'} "
"When using this tool, sometimes output is abbreviated - "
"Make sure it does not look abbreviated before using it in your answer. "
"Don't add comments to your python code."
)
def _parse_ai_message(message: BaseMessage) -> Union[AgentAction, AgentFinish]:
"""Parse an AI message."""
if not isinstance(message, AIMessage):
raise TypeError(f"Expected an AI message got {type(message)}")
function_call = message.additional_kwargs.get("function_call", {})
if function_call:
function_call = message.additional_kwargs["function_call"]
function_name = function_call["name"]
try:
_tool_input = json.loads(function_call["arguments"])
except JSONDecodeError:
print(
f"Could not parse tool input: {function_call} because "
f"the `arguments` is not valid JSON."
)
_tool_input = function_call["arguments"]
# HACK HACK HACK:
# The code that encodes tool input into Open AI uses a special variable
# name called `__arg1` to handle old style tools that do not expose a
# schema and expect a single string argument as an input.
# We unpack the argument here if it exists.
# Open AI does not support passing in a JSON array as an argument.
if "__arg1" in _tool_input:
tool_input = _tool_input["__arg1"]
else:
tool_input = _tool_input
content_msg = "responded: {content}\n" if message.content else "\n"
return _FunctionsAgentAction(
tool=function_name,
tool_input=tool_input,
log=f"\nInvoking: `{function_name}` with `{tool_input}`\n{content_msg}\n",
message_log=[message],
)
return AgentFinish(return_values={"output": message.content}, log=message.content)
class CustomOpenAIMultiFunctionsAgent(OpenAIMultiFunctionsAgent):
def plan(
self,
intermediate_steps: List[Tuple[AgentAction, str]],
callbacks: Callbacks = None,
**kwargs: Any,
) -> Union[AgentAction, AgentFinish]:
"""Given input, decided what to do.
Args:
intermediate_steps: Steps the LLM has taken to date, along with observations
**kwargs: User inputs.
Returns:
Action specifying what tool to use.
"""
user_input = kwargs["input"]
agent_scratchpad = _format_intermediate_steps(intermediate_steps)
memory = kwargs["memory"]
prompt = self.prompt.format_prompt(
input=user_input, agent_scratchpad=agent_scratchpad, memory=memory
)
messages = prompt.to_messages()
predicted_message = self.llm.predict_messages(
messages, functions=self.functions, callbacks=callbacks
)
agent_decision = _parse_ai_message(predicted_message)
return agent_decision
```
And these are the console outputs with `langchain.debug = True`:
```
Prompt: input_variables=['memory', 'agent_scratchpad', 'input'] output_parser=None partial_variables={}
messages=[SystemMessage(content='You are a helpful AI assistant.', additional_kwargs={}), MessagesPlaceholder(variable_name='memory'), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['input'], output_parser=None, partial_variables={}, template='{input}', template_format='f-string', validate_template=True), additional_kwargs={}), MessagesPlaceholder(variable_name='agent_scratchpad')]
[chain/start] [1:chain:AgentExecutor] Entering Chain run with input:
{
"input": "write a text file to my desktop",
"memory": []
}
[llm/start] [1:chain:AgentExecutor > 2:llm:ChatOpenAI] Entering LLM run with input:
{
"prompts": [
"System: You are a helpful AI assistant.\nHuman: write a text file to my desktop"
]
}
[llm/end] [1:chain:AgentExecutor > 2:llm:ChatOpenAI] [3.00s] Exiting LLM run with output:
{
"generations": [
[
{
"text": "",
"generation_info": null,
"message": {
"content": "",
"additional_kwargs": {
"function_call": {
"name": "tool_selection",
"arguments": "{\n \"actions\": [\n {\n \"action_name\": \"write_file\",\n \"action\": {\n \"file_path\": \"~/Desktop/my_file.txt\",\n \"text\": \"This is the content of my file.\"\n }\n }\n ]\n}"
}
},
"example": false
}
}
]
],
"llm_output": null,
"run": null
}
[tool/start] [1:chain:AgentExecutor > 3:tool:invalid_tool] Entering Tool run with input:
"tool_selection"
[tool/end] [1:chain:AgentExecutor > 3:tool:invalid_tool] [0.0ms] Exiting Tool run with output:
"tool_selection is not a valid tool, try another one."
[llm/start] [1:chain:AgentExecutor > 4:llm:ChatOpenAI] Entering LLM run with input:
{
"prompts": [
"System: You are a helpful AI assistant.\nHuman: write a text file to my desktop\nAI: {'name': 'tool_selection', 'arguments': '{\\n \"actions\": [\\n {\\n \"action_name\": \"write_file\",\\n \"action\": {\\n \"file_path\": \"~/Desktop/my_file.txt\",\\n \"text\": \"This is the content of my file.\"\\n }\\n }\\n ]\\n}'}\nFunction: tool_selection is not a valid tool, try another one."
]
}
[llm/end] [1:chain:AgentExecutor > 4:llm:ChatOpenAI] [2.42s] Exiting LLM run with output:
{
"generations": [
[
{
"text": "",
"generation_info": null,
"message": {
"content": "",
"additional_kwargs": {
"function_call": {
"name": "tool_selection",
"arguments": "{\n \"actions\": [\n {\n \"action_name\": \"write_file\",\n \"action\": {\n \"file_path\": \"~/Desktop/my_file.txt\",\n \"text\": \"This is the content of my file.\"\n }\n }\n ]\n}"
}
},
"example": false
}
}
]
],
"llm_output": null,
"run": null
}
[tool/start] [1:chain:AgentExecutor > 5:tool:invalid_tool] Entering Tool run with input:
"tool_selection"
[tool/end] [1:chain:AgentExecutor > 5:tool:invalid_tool] [0.0ms] Exiting Tool run with output:
"tool_selection is not a valid tool, try another one."
[llm/start] [1:chain:AgentExecutor > 6:llm:ChatOpenAI] Entering LLM run with input:
{
"prompts": [
"System: You are a helpful AI assistant.\nHuman: write a text file to my desktop\nAI: {'name': 'tool_selection', 'arguments': '{\\n \"actions\": [\\n {\\n \"action_name\": \"write_file\",\\n \"action\": {\\n \"file_path\": \"~/Desktop/my_file.txt\",\\n \"text\": \"This is the content of my file.\"\\n }\\n }\\n ]\\n}'}\nFunction: tool_selection is not a valid tool, try another one.\nAI: {'name': 'tool_selection', 'arguments': '{\\n \"actions\": [\\n {\\n \"action_name\": \"write_file\",\\n \"action\": {\\n \"file_path\": \"~/Desktop/my_file.txt\",\\n \"text\": \"This is the content of my file.\"\\n }\\n }\\n ]\\n}'}\nFunction: tool_selection is not a valid tool, try another one."
]
}
[llm/end] [1:chain:AgentExecutor > 6:llm:ChatOpenAI] [2.35s] Exiting LLM run with output:
{
"generations": [
[
{
"text": "",
"generation_info": null,
"message": {
"content": "",
"additional_kwargs": {
"function_call": {
"name": "tool_selection",
"arguments": "{\n \"actions\": [\n {\n \"action_name\": \"write_file\",\n \"action\": {\n \"file_path\": \"~/Desktop/my_file.txt\",\n \"text\": \"This is the content of my file.\"\n }\n }\n ]\n}"
}
},
"example": false
}
}
]
],
"llm_output": null,
"run": null
}
2023-06-27 11:07:24 - Error in ChainlitCallbackHandler.on_tool_start callback: Task stopped by user
[chain/error] [1:chain:AgentExecutor] [7.86s] Chain run errored with error:
"InterruptedError('Task stopped by user')"
```
Langchain Plus run: https://www.langchain.plus/public/b6c08e7e-bdb0-4792-a291-545e055ad966/r
### Suggestion:
_No response_ | Issue[Bug]: OpenAIMultiFunctionsAgent stuck - 'tool_selection is not a valid tool' | https://api.github.com/repos/langchain-ai/langchain/issues/6813/comments | 1 | 2023-06-27T09:09:28Z | 2023-10-05T16:07:36Z | https://github.com/langchain-ai/langchain/issues/6813 | 1,776,446,588 | 6,813 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.215 and langchain 0.0.216
python 3.9
chromadb 0.3.21
### Who can help?
@agola11 @hw
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.vectorstores import Chroma
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.llms import OpenAI
from langchain.chains import VectorDBQA
from langchain.document_loaders import TextLoader
from langchain.embeddings import HuggingFaceEmbeddings, HuggingFaceInstructEmbeddings
loader = TextLoader('state_of_the_union.txt')
documents = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
persist_directory = 'db'
embedding = HuggingFaceInstructEmbeddings(model_name="hkunlp/instructor-large")
vectordb = Chroma.from_documents(documents=documents, embedding=embedding, persist_directory=persist_directory)
vectordb.persist()
vectordb = None
[state_of_the_union.txt](https://github.com/hwchase17/langchain/files/11879392/state_of_the_union.txt)
The detail error information is attached as follows,
[error_info.txt](https://github.com/hwchase17/langchain/files/11879458/error_info.txt)
I don't know why there is an error: "AttributeError: 'Collection' object has no attribute 'upsert'".
And when I downgrade the langchain version to 0.0.177, everything goes back to normal.
### Expected behavior
The document could be stored locally for the further retrieval. | The latest version langchain encountered errors when saving Chroma locally, "error "AttributeError: 'Collection' object has no attribute 'upsert'"" | https://api.github.com/repos/langchain-ai/langchain/issues/6811/comments | 3 | 2023-06-27T08:25:25Z | 2024-02-16T17:31:05Z | https://github.com/langchain-ai/langchain/issues/6811 | 1,776,368,368 | 6,811 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Would you please consider supporting kwargs in GoogleSearchApiWrapper's `run`/`results` calls
(https://python.langchain.com/docs/modules/agents/tools/integrations/google_search)
for extra filtering on search? For example, I'd like to add the "cr" option to the CSE search, but it seems that I cannot pass any options to the `run`/`results` methods, although the internal function `_google_search_results` supports passing extra options to the search engine.
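The rough shape of the proposed change — illustrative only, the class and method names below merely mirror the wrapper and are not actual LangChain code — is to forward `**kwargs` from the public methods down to the internal search call:

```python
class SearchWrapperSketch:
    def _raw_search(self, query: str, **kwargs):
        # Stand-in for _google_search_results, which already accepts
        # extra CSE parameters such as cr= or gl=.
        return {"q": query, **kwargs}

    def results(self, query: str, num_results: int = 10, **kwargs):
        # Forwarding kwargs lets callers pass e.g. cr="countryUK".
        return self._raw_search(query, num=num_results, **kwargs)

print(SearchWrapperSketch().results("langchain", cr="countryUK"))
# {'q': 'langchain', 'num': 10, 'cr': 'countryUK'}
```

The change is backward compatible: existing callers pass no extra kwargs and get the current behavior.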
### Motivation
I'd like to add the "cr" option to the CSE search, but it seems that I cannot pass any options to the `run`/`results` methods, although the internal function `_google_search_results` supports passing extra options to the search engine.
### Your contribution
If you allow me, I'd like to make pr for this change. | Support kwargs on GoogleSearchApiWrapper run / result | https://api.github.com/repos/langchain-ai/langchain/issues/6810/comments | 1 | 2023-06-27T07:52:22Z | 2023-08-31T04:06:32Z | https://github.com/langchain-ai/langchain/issues/6810 | 1,776,309,539 | 6,810 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I have a sentence, and I'd like to extract entities from it. For each entity, I'd like to run a custom tool for validation. Is this possible via agents? I've looked through the documentation but couldn't find any related topics.
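With or without an agent, the underlying loop is simple: one extraction step, then one validation call per entity. A plain-Python sketch — `extract_entities` and `validate_entity` are hypothetical stand-ins for an LLM extraction chain and a custom tool:

```python
def extract_entities(sentence: str) -> list:
    # Stand-in for an LLM extraction chain; here, a naive
    # capitalized-word heuristic.
    return [w.strip(".,") for w in sentence.split() if w[:1].isupper()]

def validate_entity(entity: str) -> bool:
    # Stand-in for a custom validation tool.
    return len(entity) > 2

sentence = "Alice met Bob in Paris."
results = {e: validate_entity(e) for e in extract_entities(sentence)}
print(results)  # {'Alice': True, 'Bob': True, 'Paris': True}
```

An agent is only needed if the model itself should decide when to call the validation tool; a fixed loop like this is otherwise enough.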
### Suggestion:
_No response_ | Issue: How to iterate using agents | https://api.github.com/repos/langchain-ai/langchain/issues/6809/comments | 1 | 2023-06-27T07:49:29Z | 2023-10-05T16:07:41Z | https://github.com/langchain-ai/langchain/issues/6809 | 1,776,305,294 | 6,809 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain version: 0.0.215
python version: Python 3.8.8
### Who can help?
@hwchase17 @agola11 @ey
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I take the example from https://python.langchain.com/docs/modules/chains/additional/question_answering#the-map_reduce-chain . I ignore the retrieval part and inject the whole document into `load_qa_chain` with set `chain_type="map_reduce"`:
```
from langchain.document_loaders import TextLoader
from langchain.chains.question_answering import load_qa_chain
from langchain.llms import OpenAI
loader = TextLoader("state_of_the_union.txt")
documents = loader.load()
chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_reduce")
query = "What did the president say about Justice Breyer"
chain({"input_documents": documents, "question": query}, return_only_outputs=True)
```
comes error below:
```
InvalidRequestError: This model's maximum context length is 4097 tokens, however you requested 9640 tokens (9384 in your prompt; 256 for the completion). Please reduce your prompt; or completion length.
```
when `documents` is a long document, setting `chain_type="map_reduce"` does not seem to work. Why, and how can I solve it?
Thanks a lot!
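A likely explanation: `TextLoader` returns the whole file as a single `Document`, and map_reduce runs the map step once per document, so the one oversized document still overflows the context window. The fix is to split before calling the chain. A minimal pure-Python illustration of the pre-split step (LangChain's `CharacterTextSplitter` / `RecursiveCharacterTextSplitter` do this properly, with overlap and separator awareness):

```python
def split_text(text: str, chunk_size: int = 1000) -> list:
    # Naive fixed-size split; each chunk then fits within the per-call
    # context window that map_reduce's map step operates on.
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

docs = split_text("word " * 5000, chunk_size=1000)
print(len(docs), max(len(d) for d in docs))  # 25 1000
```

In the original snippet, passing `texts` (the split documents) instead of `documents` to the chain should avoid the context-length error.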
### Expected behavior
load_qa_chain with the `chain_type="map_reduce"` setting should be able to process a long document directly, shouldn't it? | load_qa_chain with chain_type="map_reduce" can not process long document | https://api.github.com/repos/langchain-ai/langchain/issues/6805/comments | 3 | 2023-06-27T06:25:37Z | 2023-10-05T16:09:37Z | https://github.com/langchain-ai/langchain/issues/6805 | 1,776,178,595 | 6,805
[
"langchain-ai",
"langchain"
] | ### System Info
Python 3.10, Langchain > v0.0.212
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Enable caching with langchain.llm_cache = RedisCache
### Expected behavior
This works with version <= v0.0.212 | ValueError: RedisCache only supports caching of normal LLM generations, got <class 'langchain.schema.ChatGeneration'> | https://api.github.com/repos/langchain-ai/langchain/issues/6803/comments | 1 | 2023-06-27T05:41:52Z | 2023-06-29T12:08:37Z | https://github.com/langchain-ai/langchain/issues/6803 | 1,776,131,453 | 6,803 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Have SOURCES info in map_rerank's answer similar to the information available for 'map_reduce' and 'stuff' chain_type options.
### Motivation
Standardization of output
Indicate answer source when map-rerank is used with ConversationalRetrievalChain
### Your contribution
https://github.com/hwchase17/langchain/pull/6794 | Source info in map_rerank's answer | https://api.github.com/repos/langchain-ai/langchain/issues/6795/comments | 1 | 2023-06-27T01:33:01Z | 2023-10-05T16:07:51Z | https://github.com/langchain-ai/langchain/issues/6795 | 1,775,936,801 | 6,795 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain 0.0.216, OS X 11.6, Python 3.11.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Setup OpenAIEmbeddings method with Azure arguments
2. Split text with a splitter like RecursiveCharacterTextSplitter
3. Use text and embedding function in chroma.from_texts
```python
import openai
import os
from dotenv import load_dotenv, find_dotenv
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma
_ = load_dotenv(find_dotenv())
API_KEY = os.environ.get('STAGE_API_KEY')
API_VERSION = os.environ.get('API_VERSION')
RESOURCE_ENDPOINT = os.environ.get('RESOURCE_ENDPOINT')
openai.api_type = "azure"
openai.api_key = API_KEY
openai.api_base = RESOURCE_ENDPOINT
openai.api_version = API_VERSION
openai.log = "debug"
sample_text = 'This metabolite causes atherosclerosis in the liver[55]. Strengths and limitations This is the first thorough bibliometric analysis of nutrition and gut microbiota research conducted on a global level.'
embed_deployment_id = 'text-embedding-ada-002'
embed_model = 'text-embedding-ada-002'
persist_directory = "./storage_openai_chunks" # will be created if not existing
embeddings = OpenAIEmbeddings(
deployment=embed_deployment_id,
model=embed_model,
openai_api_key=API_KEY,
openai_api_base=RESOURCE_ENDPOINT,
openai_api_type="azure",
openai_api_version=API_VERSION,
)
text_splitter = RecursiveCharacterTextSplitter(chunk_size=40, chunk_overlap=10)
texts = text_splitter.split_text(sample_text)
vectordb = Chroma.from_texts(collection_name='test40',
texts=texts,
embedding=embeddings,
persist_directory=persist_directory)
vectordb.persist()
print(vectordb.get())
message='Request to OpenAI API' method=post path=https://***/openai/deployments/text-embedding-ada-002/embeddings?api-version=2023-05-15
api_version=2023-05-15 data='{"input": [[2028, 28168, 635, 11384, 264, 91882, 91711], [258, 279, 26587, 58, 2131, 948, 32937, 82, 323], [438, 9669, 1115, 374, 279, 1176], [1820, 1176, 17879, 44615, 24264], [35584, 315, 26677, 323, 18340], [438, 18340, 53499, 6217, 3495, 13375], [444, 55015, 389, 264, 3728, 2237, 13]], "encoding_format": "base64"}' message='Post details'
message='OpenAI API response' path=https://***/openai/deployments/text-embedding-ada-002/embeddings?api-version=2023-05-15 processing_ms=None request_id=None response_code=400
body='{\n "error": "/input/6 expected type: String, found: JSONArray\\n/input/5 expected type: String, found: JSONArray\\n/input/4 expected type: String, found: JSONArray\\n/input/3 expected type: String, found: JSONArray\\n/input/2 expected type: String, found: JSONArray\\n/input/1 expected type: String, found: JSONArray\\n/input/0 expected type: String, found: JSONArray\\n/input expected: null, found: JSONArray\\n/input expected type: String, found: JSONArray"\n}' headers="{'Date': 'Tue, 27 Jun 2023 00:08:56 GMT', 'Content-Type': 'application/json; charset=UTF-8', 'Content-Length': '454', 'Connection': 'keep-alive', 'Strict-Transport-Security': 'max-age=16070400; includeSubDomains', 'Set-Cookie': 'TS01bd4155=0179bf738063e38fbf3fffb70b7f9705fd626c2df1126f29599084aa69d137b77c61d6377a118a5ebe5a1f1f9f314c22a777a0e861; Path=/; Domain=.***', 'Vary': 'Accept-Encoding'}" message='API response body'
Traceback (most recent call last):
File "/Users/A/dev/python/openai/langchain_embed_issue.py", line 39, in <module>
vectordb = Chroma.from_texts(collection_name='test40',
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/A/anaconda3/envs/openai1/lib/python3.11/site-packages/langchain/vectorstores/chroma.py", line 403, in from_texts
chroma_collection.add_texts(texts=texts, metadatas=metadatas, ids=ids)
File "/Users/A/anaconda3/envs/openai1/lib/python3.11/site-packages/langchain/vectorstores/chroma.py", line 148, in add_texts
embeddings = self._embedding_function.embed_documents(list(texts))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/A/anaconda3/envs/openai1/lib/python3.11/site-packages/langchain/embeddings/openai.py", line 465, in embed_documents
return self._get_len_safe_embeddings(texts, engine=self.deployment)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/A/anaconda3/envs/openai1/lib/python3.11/site-packages/langchain/embeddings/openai.py", line 302, in _get_len_safe_embeddings
response = embed_with_retry(
^^^^^^^^^^^^^^^^^
File "/Users/A/anaconda3/envs/openai1/lib/python3.11/site-packages/langchain/embeddings/openai.py", line 97, in embed_with_retry
return _embed_with_retry(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/A/anaconda3/envs/openai1/lib/python3.11/site-packages/tenacity/__init__.py", line 289, in wrapped_f
return self(f, *args, **kw)
^^^^^^^^^^^^^^^^^^^^
File "/Users/A/anaconda3/envs/openai1/lib/python3.11/site-packages/tenacity/__init__.py", line 379, in __call__
do = self.iter(retry_state=retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/A/anaconda3/envs/openai1/lib/python3.11/site-packages/tenacity/__init__.py", line 314, in iter
return fut.result()
^^^^^^^^^^^^
File "/Users/A/anaconda3/envs/openai1/lib/python3.11/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/Users/A/anaconda3/envs/openai1/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/Users/A/anaconda3/envs/openai1/lib/python3.11/site-packages/tenacity/__init__.py", line 382, in __call__
result = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/Users/A/anaconda3/envs/openai1/lib/python3.11/site-packages/langchain/embeddings/openai.py", line 95, in _embed_with_retry
return embeddings.client.create(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/A/anaconda3/envs/openai1/lib/python3.11/site-packages/openai/api_resources/embedding.py", line 33, in create
response = super().create(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/A/anaconda3/envs/openai1/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
^^^^^^^^^^^^^^^^^^
File "/Users/A/anaconda3/envs/openai1/lib/python3.11/site-packages/openai/api_requestor.py", line 298, in request
resp, got_stream = self._interpret_response(result, stream)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/A/anaconda3/envs/openai1/lib/python3.11/site-packages/openai/api_requestor.py", line 700, in _interpret_response
self._interpret_response_line(
File "/Users/A/anaconda3/envs/openai1/lib/python3.11/site-packages/openai/api_requestor.py", line 763, in _interpret_response_line
raise self.handle_error_response(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/A/anaconda3/envs/openai1/lib/python3.11/site-packages/openai/api_requestor.py", line 418, in handle_error_response
error_code=error_data.get("code"),
^^^^^^^^^^^^^^
AttributeError: 'str' object has no attribute 'get'
Process finished with exit code 1
```
### Expected behavior
OpenAIEmbeddings should return embeddings instead of an error.
Azure currently accepts only `str` input (in contrast to OpenAI, which accepts tokens or strings), so the request is rejected because OpenAIEmbeddings sends only tokens. The Azure embedding API docs confirm this: the request body's `input` parameter is of type string: https://learn.microsoft.com/en-us/azure/cognitive-services/openai/reference#embeddings
Second, after modifying openai.py to send strings, Azure complains that it currently accepts one input at a time--in other words, it doesn't accept batches of strings (or even tokens if it accepted tokens). So the for loop increment was modified to send one decoded batch of tokens (in other words, the original str chunk) at a time.
Modifying embeddings/openai.py with:
```python
# Original code, for reference:
# batched_embeddings = []
# _chunk_size = chunk_size or self.chunk_size
# for i in range(0, len(tokens), _chunk_size):
#     response = embed_with_retry(
#         self,
#         input=tokens[i : i + _chunk_size],
#         **self._invocation_params,
#     )
#     batched_embeddings += [r["embedding"] for r in response["data"]]

batched_embeddings = []
_chunk_size = chunk_size or self.chunk_size if 'azure' not in self.openai_api_type else 1
for i in range(0, len(tokens), _chunk_size):
    embed_input = encoding.decode(tokens[i]) if 'azure' in self.openai_api_type else tokens[i : i + _chunk_size]
    response = embed_with_retry(
        self,
        input=embed_input,
        **self._invocation_params,
    )
    batched_embeddings += [r["embedding"] for r in response["data"]]
```
and re-running the code:
```text
# same code
...
message='Request to OpenAI API' method=post path=https://***/openai/deployments/text-embedding-ada-002/embeddings?api-version=2023-05-15
api_version=2023-05-15 data='{"input": "This metabolite causes atherosclerosis", "encoding_format": "base64"}' message='Post details'
message='OpenAI API response' path=https://***/openai/deployments/text-embedding-ada-002/embeddings?api-version=2023-05-15 processing_ms=27.0109 request_id=47bee143-cb00-4782-8560-f267ee839af4 response_code=200
body='{\n "object": "list",\n "data": [\n {\n "object": "embedding",\n "index": 0,\n "embedding": "5zPWu+V2e7w75Ia7HeCavKhhE71NQhA865WYvE+Y9DuB8ce8Xak7uhgQgble4z48H8L4uyePnzu2XVq8ucg+u7ZdWj28ofq7Jzd6PMFMkbvQiIq8nbuwPFJMLTxGe5i83c2lPIXQsjzPToc8taB/vZlZ7ryVjwM8jsiLPIvLfrywnBG9RjLEO2XkuTpOMz
... (removed for brevity)
/gP7uzTTC8RZf5PMOULTv2D4C7caQfvR60EbyqjZ48yqxUuzHeLzhSFJW8qDu5uwcj7zyeDnO8UMKvPNLEezxNixm6X7U3vBeDqzumrI08jzQqPDZObLzZS2c843itO9a+y7w+mJG8gChjPAIHqLqEeLg6ysUTvfqaizzT2yo77Di/u3A3azyziva8ct9VvI80Kry1n5U7ipJvvHy2FjuAQSK9"\n }\n ],\n "model": "ada",\n "usage": {\n "prompt_tokens": 7,\n "total_tokens": 7\n }\n}\n' headers="{'Date': 'Tue, 27 Jun 2023 00:20:13 GMT', 'Content-Type': 'application/json', 'Content-Length': '8395', 'Connection': 'keep-alive', 'x-ms-region': 'East US', 'apim-request-id': 'b932333d-1eb9-415a-a84b-da1c5f95433b', 'x-content-type-options': 'nosniff, nosniff', 'openai-processing-ms': '26.8461', 'access-control-allow-origin': '*', 'x-request-id': '0677d084-2449-486c-9bff-b6ef07df004f', 'x-ms-client-request-id': 'b932333d-1eb9-415a-a84b-da1c5f95433b', 'strict-transport-security': 'max-age=31536000; includeSubDomains; preload, max-age=16070400; includeSubDomains', 'X-Frame-Options': 'SAMEORIGIN', 'X-XSS-Protection': '1; mode=block'}" message='API response body'
{'ids': ['60336172-1480-11ee-b223-acde48001122', '6033621c-1480-11ee-b223-acde48001122', '60336280-1480-11ee-b223-acde48001122', '603362b2-1480-11ee-b223-acde48001122', '603362da-1480-11ee-b223-acde48001122', '603362f8-1480-11ee-b223-acde48001122', '60336370-1480-11ee-b223-acde48001122'], 'embeddings': None, 'documents': ['This metabolite causes atherosclerosis', 'in the liver[55]. Strengths and', 'and limitations This is the first', 'the first thorough bibliometric', 'analysis of nutrition and gut', 'and gut microbiota research conducted', 'conducted on a global level.'], 'metadatas': [None, None, None, None, None, None, None]}
```
Also made the following change to openai.py a few lines later, although this is untested:
```python
batched_embeddings = []
_chunk_size = chunk_size or self.chunk_size if 'azure' not in self.openai_api_type else 1
# azure only accepts str input, currently one list element at a time
for i in range(0, len(tokens), _chunk_size):
    embed_input = encoding.decode(tokens[i]) if 'azure' in self.openai_api_type else tokens[i : i + _chunk_size]
    response = await async_embed_with_retry(
        self,
        input=embed_input,
        **self._invocation_params,
    )
    batched_embeddings += [r["embedding"] for r in response["data"]]
```
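Both patches above reduce to the same control-flow change: keep batched token input for openai.com, but send one decoded string per request when the API type is Azure. A stripped-down sketch of that flow, with stub `decode`/`embed` functions standing in for the tokenizer and the endpoint (names here are illustrative, not the library internals):

```python
# Stub tokenizer: pretend tokens are just words.
def decode(token_chunk):
    return " ".join(token_chunk)

# Stub endpoint: the Azure variant in this scenario accepts one string only.
def embed(payload):
    assert isinstance(payload, str), "Azure variant expects a single string"
    return [[float(len(payload))]]  # dummy one-dimensional "embedding"

tokens = [["This", "metabolite"], ["causes", "atherosclerosis"]]
api_type = "azure"

# Chunk size 1 for Azure, a larger batch otherwise.
chunk_size = 1 if "azure" in api_type else 16

embeddings = []
for i in range(0, len(tokens), chunk_size):
    if "azure" in api_type:
        payload = decode(tokens[i])            # one decoded string per request
    else:
        payload = tokens[i : i + chunk_size]   # a batch of token lists
    embeddings += embed(payload)

print(len(embeddings))  # one embedding per chunk
```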
| Azure rejects tokens sent by OpenAIEmbeddings, expects strings | https://api.github.com/repos/langchain-ai/langchain/issues/6793/comments | 2 | 2023-06-27T01:01:39Z | 2024-05-28T14:17:44Z | https://github.com/langchain-ai/langchain/issues/6793 | 1,775,913,567 | 6,793 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Right now, only the `text-bison` model is supported by Google PaLM. When I tried `code-bison`, it threw the error below:
```
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
Cell In[39], line 1
----> 1 llm = VertexAI(model_name='code-bison')
File /opt/conda/envs/python310/lib/python3.10/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for VertexAI
__root__
Unknown model publishers/google/models/code-bison; {'gs://google-cloud-aiplatform/schema/predict/instance/text_generation_1.0.0.yaml': <class 'vertexai.language_models._language_models._PreviewTextGenerationModel'>} (type=value_error)
```
Can we get support for the other models as well?
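For context, the ValidationError above comes from a model-name lookup: at the time, only the text-generation schema is registered with the VertexAI wrapper, so anything other than `text-bison` fails the check. A simplified, illustrative sketch of that pattern (hypothetical names, not the actual library code):

```python
# Hypothetical registry: only the text model is known to the wrapper.
REGISTERED_MODELS = {
    "text-bison": "_PreviewTextGenerationModel",
}

def resolve_model(model_name: str) -> str:
    """Mimic the schema lookup that raises for unregistered models."""
    if model_name not in REGISTERED_MODELS:
        raise ValueError(
            f"Unknown model publishers/google/models/{model_name}; "
            f"registered: {sorted(REGISTERED_MODELS)}"
        )
    return REGISTERED_MODELS[model_name]

print(resolve_model("text-bison"))  # resolves fine

try:
    resolve_model("code-bison")     # mirrors the reported failure
except ValueError as err:
    print(err)
```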
### Motivation
Support for many other PaLM models
### Your contribution
I can update and test the code for different other models | Request to Support other VertexAI's LLM model Support | https://api.github.com/repos/langchain-ai/langchain/issues/6779/comments | 10 | 2023-06-26T19:48:47Z | 2024-05-17T10:39:46Z | https://github.com/langchain-ai/langchain/issues/6779 | 1,775,480,124 | 6,779 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.216 (I have had this issue since I started with LangChain at 0.0.198)
python 3.10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [x] Callbacks/Tracing
- [ ] Async
### Reproduction
I run this function with
```json
{
"prompt": "some prompt",
"temperature": 0.2
}
```
```python3
from typing import Any
from langchain.chains import RetrievalQAWithSourcesChain
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.prompts import PromptTemplate
from app.config.api_keys import ApiKeys
from app.repositories.pgvector_repository import PGVectorRepository
def completions_service(data) -> dict[str, Any]:
    pgvector = PGVectorRepository().instance

    tech_template = """Some prompt info (redacted)
{summaries}
Q: {question}
A: """
    prompt_template = PromptTemplate(
        template=tech_template, input_variables=["summaries", "question"]
    )
    qa = RetrievalQAWithSourcesChain.from_chain_type(
        llm=ChatOpenAI(
            temperature=data['temperature'],
            model_name="gpt-3.5-turbo",
            openai_api_key=ApiKeys.openai,
        ),
        chain_type="stuff",
        retriever=pgvector.as_retriever(),
        chain_type_kwargs={"prompt": prompt_template},
        return_source_documents=True,
        verbose=True,
    )
    output = qa({"question": data['prompt']})
    return output
```
I get this in the command line
```bash
> Entering new chain...
> Finished chain.
```
For some reason, there is a double space in the "Entering" line (as if something was meant to be printed there), and then nothing until "Finished chain". I have set `verbose=True`, but no luck.
### Expected behavior
I would expect to see the embeddings, the full prompt being sent to openai etc. but I get nothing. | Verbose flag not outputting anything other than Entering chain and Finished chain | https://api.github.com/repos/langchain-ai/langchain/issues/6778/comments | 2 | 2023-06-26T19:31:15Z | 2023-07-09T13:23:46Z | https://github.com/langchain-ai/langchain/issues/6778 | 1,775,453,879 | 6,778 |
[
"langchain-ai",
"langchain"
] | ### System Info
Windows 10
Name: langchain
Version: 0.0.208
Summary: Building applications with LLMs through composability
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from pydantic import BaseModel, Field
from langchain.chat_models import ChatOpenAI, AzureChatOpenAI
from langchain.agents import Tool
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import FAISS
from langchain.document_loaders import PyPDFLoader
from langchain.chains import RetrievalQA
# Here a proper Azure OpenAI service needs to be defined
OPENAI_API_KEY=" "
OPENAI_DEPLOYMENT_ENDPOINT="https://gptkbopenai.openai.azure.com/"
OPENAI_DEPLOYMENT_NAME = "gptkbopenai"
OPENAI_MODEL_NAME = "GPT4"
#OPENAI_EMBEDDING_DEPLOYMENT_NAME = os.getenv("OPENAI_EMBEDDING_DEPLOYMENT_NAME")
OPENAI_EMBEDDING_MODEL_NAME = "text-embedding-ada-002"
OPENAI_DEPLOYMENT_VERSION = "2023-03-15-preview"
OPENAI_API_TYPE = "azure"
OPENAI_API_BASE = "https://gptkbopenai.openai.azure.com/"
OPENAI_EMBEDDING_DEPLOYMENT_NAME = "text-embedding-ada-002"
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.chat_models import AzureChatOpenAI
import os
os.environ["OPENAI_API_TYPE"] = OPENAI_API_TYPE
os.environ["OPENAI_API_BASE"] = OPENAI_API_BASE
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
os.environ["OPENAI_API_VERSION"] = "2023-03-15-preview"
import faiss
# Code adapted from Langchain document comparison toolkit
class DocumentInput(BaseModel):
    question: str = Field()

tools = []

files = [
    {
        "name": "belfast",
        "path": "C:\\Users\\625050\\OneDrive - Clifford Chance LLP\\Documents\\Projects\\ChatGPT\\LeaseTest\\Belfast.pdf",
    },
    {
        "name": "bournemouth",
        "path": "C:\\Users\\625050\\OneDrive - Clifford Chance LLP\\Documents\\Projects\\ChatGPT\\LeaseTest\\Bournemouth.pdf",
    },
]
llm = AzureChatOpenAI(deployment_name="GPT4")

for file in files:
    loader = PyPDFLoader(file["path"])
    pages = loader.load_and_split()
    text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
    docs = text_splitter.split_documents(pages)
    print(docs[0])
    embeddings = OpenAIEmbeddings(deployment=OPENAI_EMBEDDING_DEPLOYMENT_NAME, chunk_size=1)
    # embeddings = OpenAIEmbeddings(model='text-embedding-ada-002',
    #     deployment=OPENAI_DEPLOYMENT_NAME,
    #     openai_api_base=OPENAI_DEPLOYMENT_ENDPOINT,
    #     openai_api_type='azure',
    #     openai_api_key=OPENAI_API_KEY,
    #     chunk_size=1,
    # )
    retriever = FAISS.from_documents(docs, embeddings).as_retriever()

    # Wrap retrievers in a Tool
    tools.append(
        Tool(
            args_schema=DocumentInput,
            name=file["name"],
            description=f"useful when you want to answer questions about {file['name']}",
            func=RetrievalQA.from_chain_type(llm=llm, retriever=retriever),
        )
    )

agent = initialize_agent(
    agent=AgentType.OPENAI_FUNCTIONS,
    tools=tools,
    llm=llm,
    verbose=True,
)

agent({"input": "Who are the landlords?"})
```

Error:

```text
Entering new chain...
/openai/deployments/GPT4/chat/completions?api-version=2023-03-15-preview None False None None
---------------------------------------------------------------------------
InvalidRequestError Traceback (most recent call last)
Cell In[5], line 15
1 #llm = AzureChatOpenAI()#model_kwargs = {'deployment': "GPT4"},
2 #model_name=OPENAI_MODEL_NAME,
3 #openai_api_base=OPENAI_DEPLOYMENT_ENDPOINT,
4 #openai_api_version=OPENAI_DEPLOYMENT_VERSION,
5 #openai_api_key=OPENAI_API_KEY
6 #)
8 agent = initialize_agent(
9 agent=AgentType.OPENAI_FUNCTIONS,
10 tools=tools,
11 llm=llm,
12 verbose=True,
13 )
---> 15 agent({"input": "Who are the landlords?"})
File c:\Users\625050\Anaconda3\envs\DD\lib\site-packages\langchain\chains\base.py:166, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, include_run_info)
164 except (KeyboardInterrupt, Exception) as e:
165 run_manager.on_chain_error(e)
--> 166 raise e
167 run_manager.on_chain_end(outputs)
168 final_outputs: Dict[str, Any] = self.prep_outputs(
169 inputs, outputs, return_only_outputs
170 )
File c:\Users\625050\Anaconda3\envs\DD\lib\site-packages\langchain\chains\base.py:160, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, include_run_info)
154 run_manager = callback_manager.on_chain_start(
155 dumpd(self),
156 inputs,
157 )
158 try:
159 outputs = (
--> 160 self._call(inputs, run_manager=run_manager)
161 if new_arg_supported
162 else self._call(inputs)
163 )
164 except (KeyboardInterrupt, Exception) as e:
165 run_manager.on_chain_error(e)
File c:\Users\625050\Anaconda3\envs\DD\lib\site-packages\langchain\agents\agent.py:957, in AgentExecutor._call(self, inputs, run_manager)
955 # We now enter the agent loop (until it returns something).
956 while self._should_continue(iterations, time_elapsed):
--> 957 next_step_output = self._take_next_step(
958 name_to_tool_map,
959 color_mapping,
960 inputs,
961 intermediate_steps,
962 run_manager=run_manager,
963 )
964 if isinstance(next_step_output, AgentFinish):
965 return self._return(
966 next_step_output, intermediate_steps, run_manager=run_manager
967 )
File c:\Users\625050\Anaconda3\envs\DD\lib\site-packages\langchain\agents\agent.py:762, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
756 """Take a single step in the thought-action-observation loop.
757
758 Override this to take control of how the agent makes and acts on choices.
759 """
760 try:
761 # Call the LLM to see what to do.
--> 762 output = self.agent.plan(
763 intermediate_steps,
764 callbacks=run_manager.get_child() if run_manager else None,
765 **inputs,
766 )
767 except OutputParserException as e:
768 if isinstance(self.handle_parsing_errors, bool):
File c:\Users\625050\Anaconda3\envs\DD\lib\site-packages\langchain\agents\openai_functions_agent\base.py:209, in OpenAIFunctionsAgent.plan(self, intermediate_steps, callbacks, **kwargs)
207 prompt = self.prompt.format_prompt(**full_inputs)
208 messages = prompt.to_messages()
--> 209 predicted_message = self.llm.predict_messages(
210 messages, functions=self.functions, callbacks=callbacks
211 )
212 agent_decision = _parse_ai_message(predicted_message)
213 return agent_decision
File c:\Users\625050\Anaconda3\envs\DD\lib\site-packages\langchain\chat_models\base.py:258, in BaseChatModel.predict_messages(self, messages, stop, **kwargs)
256 else:
257 _stop = list(stop)
--> 258 return self(messages, stop=_stop, **kwargs)
File c:\Users\625050\Anaconda3\envs\DD\lib\site-packages\langchain\chat_models\base.py:208, in BaseChatModel.__call__(self, messages, stop, callbacks, **kwargs)
201 def __call__(
202 self,
203 messages: List[BaseMessage],
(...)
206 **kwargs: Any,
207 ) -> BaseMessage:
--> 208 generation = self.generate(
209 [messages], stop=stop, callbacks=callbacks, **kwargs
210 ).generations[0][0]
211 if isinstance(generation, ChatGeneration):
212 return generation.message
File c:\Users\625050\Anaconda3\envs\DD\lib\site-packages\langchain\chat_models\base.py:102, in BaseChatModel.generate(self, messages, stop, callbacks, tags, **kwargs)
100 except (KeyboardInterrupt, Exception) as e:
101 run_manager.on_llm_error(e)
--> 102 raise e
103 llm_output = self._combine_llm_outputs([res.llm_output for res in results])
104 generations = [res.generations for res in results]
File c:\Users\625050\Anaconda3\envs\DD\lib\site-packages\langchain\chat_models\base.py:94, in BaseChatModel.generate(self, messages, stop, callbacks, tags, **kwargs)
90 new_arg_supported = inspect.signature(self._generate).parameters.get(
91 "run_manager"
92 )
93 try:
---> 94 results = [
95 self._generate(m, stop=stop, run_manager=run_manager, **kwargs)
96 if new_arg_supported
97 else self._generate(m, stop=stop)
98 for m in messages
99 ]
100 except (KeyboardInterrupt, Exception) as e:
101 run_manager.on_llm_error(e)
File c:\Users\625050\Anaconda3\envs\DD\lib\site-packages\langchain\chat_models\base.py:95, in <listcomp>(.0)
90 new_arg_supported = inspect.signature(self._generate).parameters.get(
91 "run_manager"
92 )
93 try:
94 results = [
---> 95 self._generate(m, stop=stop, run_manager=run_manager, **kwargs)
96 if new_arg_supported
97 else self._generate(m, stop=stop)
98 for m in messages
99 ]
100 except (KeyboardInterrupt, Exception) as e:
101 run_manager.on_llm_error(e)
File c:\Users\625050\Anaconda3\envs\DD\lib\site-packages\langchain\chat_models\openai.py:359, in ChatOpenAI._generate(self, messages, stop, run_manager, **kwargs)
351 message = _convert_dict_to_message(
352 {
353 "content": inner_completion,
(...)
356 }
357 )
358 return ChatResult(generations=[ChatGeneration(message=message)])
--> 359 response = self.completion_with_retry(messages=message_dicts, **params)
360 return self._create_chat_result(response)
File c:\Users\625050\Anaconda3\envs\DD\lib\site-packages\langchain\chat_models\openai.py:307, in ChatOpenAI.completion_with_retry(self, **kwargs)
303 @retry_decorator
304 def _completion_with_retry(**kwargs: Any) -> Any:
305 return self.client.create(**kwargs)
--> 307 return _completion_with_retry(**kwargs)
File c:\Users\625050\Anaconda3\envs\DD\lib\site-packages\tenacity\__init__.py:289, in BaseRetrying.wraps.<locals>.wrapped_f(*args, **kw)
287 @functools.wraps(f)
288 def wrapped_f(*args: t.Any, **kw: t.Any) -> t.Any:
--> 289 return self(f, *args, **kw)
File c:\Users\625050\Anaconda3\envs\DD\lib\site-packages\tenacity\__init__.py:379, in Retrying.__call__(self, fn, *args, **kwargs)
377 retry_state = RetryCallState(retry_object=self, fn=fn, args=args, kwargs=kwargs)
378 while True:
--> 379 do = self.iter(retry_state=retry_state)
380 if isinstance(do, DoAttempt):
381 try:
File c:\Users\625050\Anaconda3\envs\DD\lib\site-packages\tenacity\__init__.py:314, in BaseRetrying.iter(self, retry_state)
312 is_explicit_retry = fut.failed and isinstance(fut.exception(), TryAgain)
313 if not (is_explicit_retry or self.retry(retry_state)):
--> 314 return fut.result()
316 if self.after is not None:
317 self.after(retry_state)
File c:\Users\625050\Anaconda3\envs\DD\lib\concurrent\futures\_base.py:451, in Future.result(self, timeout)
449 raise CancelledError()
450 elif self._state == FINISHED:
--> 451 return self.__get_result()
453 self._condition.wait(timeout)
455 if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:
File c:\Users\625050\Anaconda3\envs\DD\lib\concurrent\futures\_base.py:403, in Future.__get_result(self)
401 if self._exception:
402 try:
--> 403 raise self._exception
404 finally:
405 # Break a reference cycle with the exception in self._exception
406 self = None
File c:\Users\625050\Anaconda3\envs\DD\lib\site-packages\tenacity\__init__.py:382, in Retrying.__call__(self, fn, *args, **kwargs)
380 if isinstance(do, DoAttempt):
381 try:
--> 382 result = fn(*args, **kwargs)
383 except BaseException: # noqa: B902
384 retry_state.set_exception(sys.exc_info()) # type: ignore[arg-type]
File c:\Users\625050\Anaconda3\envs\DD\lib\site-packages\langchain\chat_models\openai.py:305, in ChatOpenAI.completion_with_retry.<locals>._completion_with_retry(**kwargs)
303 @retry_decorator
304 def _completion_with_retry(**kwargs: Any) -> Any:
--> 305 return self.client.create(**kwargs)
File c:\Users\625050\Anaconda3\envs\DD\lib\site-packages\openai\api_resources\chat_completion.py:25, in ChatCompletion.create(cls, *args, **kwargs)
23 while True:
24 try:
---> 25 return super().create(*args, **kwargs)
26 except TryAgain as e:
27 if timeout is not None and time.time() > start + timeout:
File c:\Users\625050\Anaconda3\envs\DD\lib\site-packages\openai\api_resources\abstract\engine_api_resource.py:154, in EngineAPIResource.create(cls, api_key, api_base, api_type, request_id, api_version, organization, **params)
138 (
139 deployment_id,
140 engine,
(...)
150 api_key, api_base, api_type, api_version, organization, **params
151 )
153 print(url, headers, stream, request_id, request_timeout)
--> 154 response, a, api_key = requestor.request(
155 "post",
156 url,
157 params=params,
158 headers=headers,
159 stream=stream,
160 request_id=request_id,
161 request_timeout=request_timeout,
162 )
164 print(response, a, api_key)
166 if stream:
167 # must be an iterator
File c:\Users\625050\Anaconda3\envs\DD\lib\site-packages\openai\api_requestor.py:230, in APIRequestor.request(self, method, url, params, headers, files, stream, request_id, request_timeout)
209 def request(
210 self,
211 method,
(...)
218 request_timeout: Optional[Union[float, Tuple[float, float]]] = None,
219 ) -> Tuple[Union[OpenAIResponse, Iterator[OpenAIResponse]], bool, str]:
220 result = self.request_raw(
221 method.lower(),
222 url,
(...)
228 request_timeout=request_timeout,
229 )
--> 230 resp, got_stream = self._interpret_response(result, stream)
231 return resp, got_stream, self.api_key
File c:\Users\625050\Anaconda3\envs\DD\lib\site-packages\openai\api_requestor.py:624, in APIRequestor._interpret_response(self, result, stream)
616 return (
617 self._interpret_response_line(
618 line, result.status_code, result.headers, stream=True
619 )
620 for line in parse_stream(result.iter_lines())
621 ), True
622 else:
623 return (
--> 624 self._interpret_response_line(
625 result.content.decode("utf-8"),
626 result.status_code,
627 result.headers,
628 stream=False,
629 ),
630 False,
631 )
File c:\Users\625050\Anaconda3\envs\DD\lib\site-packages\openai\api_requestor.py:687, in APIRequestor._interpret_response_line(self, rbody, rcode, rheaders, stream)
685 stream_error = stream and "error" in resp.data
686 if stream_error or not 200 <= rcode < 300:
--> 687 raise self.handle_error_response(
688 rbody, rcode, resp.data, rheaders, stream_error=stream_error
689 )
690 return resp
InvalidRequestError: Unrecognized request argument supplied: functions
```

### Expected behavior
An answer should be generated and no error should be thrown. | Functions might not be supported through Azure OpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/6777/comments | 11 | 2023-06-26T19:28:21Z | 2024-02-21T22:18:31Z | https://github.com/langchain-ai/langchain/issues/6777 | 1,775,450,165 | 6,777 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain 0.0.214
Python 3.11.1
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create a `SequentialChain` that contains 2 `LLMChain`s, and add a memory to the first one.
2. When running, you'll get a validation error:
```
Missing required input keys: {'chat_history'}, only had {'human_input'} (type=value_error)
```
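The validation that fails here tracks which variables are available as the sequence runs: the inner chain's memory makes `chat_history` one of its required inputs, but the outer `SequentialChain` never has that key to hand. A toy version of the check (simplified; the real one is a pydantic root validator):

```python
def validate_sequential(chains, initial_inputs):
    """Raise if any chain in the sequence needs a key nobody provides."""
    known = set(initial_inputs)
    for chain in chains:
        missing = set(chain["input_keys"]) - known
        if missing:
            raise ValueError(
                f"Missing required input keys: {missing}, only had {known}"
            )
        known |= set(chain["output_keys"])

# Memory on the first chain adds 'chat_history' to its required inputs.
first = {"input_keys": ["human_input", "chat_history"], "output_keys": ["draft"]}
second = {"input_keys": ["draft"], "output_keys": ["final"]}

try:
    validate_sequential([first, second], ["human_input"])
except ValueError as err:
    print(err)  # mirrors the reported error
```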
### Expected behavior
You should be able to add memory to one chain, not just the Sequential Chain | Can't use memory for an internal LLMChain inside a SequentialChain | https://api.github.com/repos/langchain-ai/langchain/issues/6768/comments | 0 | 2023-06-26T16:09:11Z | 2023-07-13T06:47:46Z | https://github.com/langchain-ai/langchain/issues/6768 | 1,775,129,370 | 6,768 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain: 0.0.215
python: 3.10.11
OS: Ubuntu 18.04
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I'm implementing two custom tools that return the column names and the unique values in a given column. Their definitions are as follows:
```
def get_unique_values(column_name):
    all_values = metadata[column_name]
    return all_values


def get_column_names():
    return COLUMN_NAMES


tools = [
    Tool(
        name="Get all column names",
        func=get_column_names,
        description="Useful for getting the names of all available columns. This doesn't have any arguments as input and simply invoking it would return the list of columns",
    ),
    Tool(
        name="Get distinct values of a column",
        func=get_unique_values,
        description="Useful for getting distinct values of a particular column. Knowing the distinct values is important to decide if a particular column should be considered in a given context or not. The input to this function should be a string representing the column name whose unique values are needed to be found out. For example, `gender` would be the input if you wanted to know what unique values are in `gender` column.",
    ),
]
```
I'm using a custom agent as well defined as follows: (The definition is quite straightforward and taken from the official docs)
```
prompt = CustomPromptTemplate(
    template=template,
    tools=tools,
    input_variables=["input", "intermediate_steps"],
)
llm_chain = LLMChain(llm=llm, prompt=prompt)
tool_names = [tool.name for tool in tools]
agent = LLMSingleActionAgent(
    llm_chain=llm_chain,
    output_parser=output_parser,
    stop=["\nObservation:"],
    allowed_tools=tool_names,
)
agent_executor = AgentExecutor.from_agent_and_tools(
    agent=agent, tools=tools, verbose=True
)
```
When I provide a prompt using `agent_executor.run()` I get the error pasted below.
Surprisingly, if I define the agent executor as follows, I don't get this error. Rather, it doesn't follow my prompt template (since no LLMChain is used here) and gets stuck in meaningless back-and-forth actions.
agent_executor = initialize_agent(
tools,
llm,
agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
verbose=True,
)
```
Error trace
```
> Entering new chain...
Traceback (most recent call last):
File "/home/user/proj/agent_exp.py", line 245, in <module>
agent_executor.run(
File "/home/user/anaconda3/envs/proj/lib/python3.10/site-packages/langchain/chains/base.py", line 290, in run
return self(args[0], callbacks=callbacks, tags=tags)[_output_key]
File "/home/user/anaconda3/envs/proj/lib/python3.10/site-packages/langchain/chains/base.py", line 166, in __call__
raise e
File "/home/user/anaconda3/envs/proj/lib/python3.10/site-packages/langchain/chains/base.py", line 160, in __call__
self._call(inputs, run_manager=run_manager)
File "/home/user/anaconda3/envs/proj/lib/python3.10/site-packages/langchain/agents/agent.py", line 987, in _call
next_step_output = self._take_next_step(
File "/home/user/anaconda3/envs/proj/lib/python3.10/site-packages/langchain/agents/agent.py", line 803, in _take_next_step
raise e
File "/home/user/anaconda3/envs/proj/lib/python3.10/site-packages/langchain/agents/agent.py", line 792, in _take_next_step
output = self.agent.plan(
File "/home/user/anaconda3/envs/proj/lib/python3.10/site-packages/langchain/agents/agent.py", line 345, in plan
return self.output_parser.parse(output)
File "/home/user/proj/agent_exp.py", line 102, in parse
raise OutputParserException(f"Could not parse LLM output: `{llm_output}`")
langchain.schema.OutputParserException: Could not parse LLM output: `Thought: I need to get all the column names
Action: Get all column names`
```
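For reference, assuming the custom `parse` follows the regex from the official custom-agent example (the reporter's parser is not shown in full above), the failure is mechanical: the model emitted an `Action:` line with no `Action Input:` line, so the regex never matches and the exception is raised. A standalone sketch:

```python
import re

# Assumed shape of the reporter's parser: the pattern used in the official
# custom-agent example requires BOTH an "Action:" and an "Action Input:" line.
ACTION_RE = re.compile(r"Action\s*:\s*(.*?)\nAction\s*Input\s*:\s*(.*)", re.DOTALL)

def parse_action(llm_output: str):
    """Return (tool, tool_input), or None where the real parser raises."""
    match = ACTION_RE.search(llm_output)
    if match is None:
        return None  # this is where OutputParserException is raised
    return match.group(1).strip(), match.group(2).strip()

# The exact output from the traceback: an Action line with no Action Input.
bad = "Thought: I need to get all the column names\nAction: Get all column names"
good = bad + "\nAction Input: none"

print(parse_action(bad))   # None
print(parse_action(good))  # ('Get all column names', 'none')
```

Prompting the model to always emit an `Action Input:` line (even `none`), or handling the no-match case as a fallback `AgentFinish`, both avoid the crash.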
### Expected behavior
The agent should have called these functions appropriately and returned a list of columns that have the string datatype | Unknown parsing error on custom tool + custom agent implementation | https://api.github.com/repos/langchain-ai/langchain/issues/6767/comments | 5 | 2023-06-26T15:20:23Z | 2023-06-26T17:15:43Z | https://github.com/langchain-ai/langchain/issues/6767 | 1,775,037,115 | 6,767 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Qdrant does not support a `vector_size` parameter, even though it is a very common and frequently used option. I hope it can be supported.
langchain/vectorstores/qdrant.py
```
# Just do a single quick embedding to get vector size
partial_embeddings = embedding.embed_documents(texts[:1])
vector_size = len(partial_embeddings[0])
```
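A minimal sketch of the API shape being requested: an explicit `vector_size` that short-circuits the probe embedding (hypothetical helper, not LangChain code; `embed_one` stands in for `embedding.embed_documents(texts[:1])[0]`):

```python
def resolve_vector_size(embed_one, vector_size=None):
    """Hypothetical helper: honour an explicit vector_size and only fall
    back to the probe embedding (the current behaviour) when it is absent."""
    if vector_size is not None:
        return vector_size              # caller knows the size: no model call
    return len(embed_one("probe"))      # today's path: one throwaway embedding

fake_embed = lambda text: [0.0] * 384   # stands in for a 384-dim embedding model
print(resolve_vector_size(fake_embed))        # 384  (probe path)
print(resolve_vector_size(fake_embed, 1536))  # 1536 (explicit, no embed call)
```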
### Suggestion:
I hope it can be supported. | Issue: Qdrant does not support the vector_size parameter, which is a very common and frequently used parameter. I hope it can be supported. | https://api.github.com/repos/langchain-ai/langchain/issues/6766/comments | 5 | 2023-06-26T15:12:51Z | 2023-10-18T16:06:58Z | https://github.com/langchain-ai/langchain/issues/6766 | 1,775,023,349 | 6,766 |
[
"langchain-ai",
"langchain"
] | ### System Info
Ubuntu 22.04.2
Python 3.10.11
langchain 0.0.215
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from pathlib import Path
from langchain.chains.router.embedding_router import EmbeddingRouterChain
from langchain.embeddings import LlamaCppEmbeddings
from langchain.vectorstores import Weaviate
model_dir = Path.home() / 'models'
llama_path = model_dir / 'llama-7b.ggmlv3.q4_0.bin'
encoder = LlamaCppEmbeddings(model_path=str(llama_path))
encoder.client.verbose = False
names_and_descriptions = [
("physics", ["for questions about physics"]),
("math", ["for questions about math"]),
]
router_chain = EmbeddingRouterChain.from_names_and_descriptions(names_and_descriptions, Weaviate, encoder,
weaviate_url='http://localhost:8080',
routing_keys=["input"])
```
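The requested behaviour amounts to forwarding trailing keyword arguments through to the vectorstore constructor. A standalone sketch of that signature, using a fake store class to show the kwargs arriving (hypothetical, not the current LangChain API):

```python
class FakeStore:
    captured = None

    @classmethod
    def from_texts(cls, texts, embeddings, metadatas=None, **kwargs):
        cls.captured = kwargs  # prove the extra kwargs reached the store
        return cls()

def from_names_and_descriptions(pairs, vectorstore_cls, embeddings, **vs_kwargs):
    """Sketch of the requested signature: trailing **kwargs forwarded to
    the vectorstore (hypothetical, not the current API)."""
    texts = [d for _, descriptions in pairs for d in descriptions]
    return vectorstore_cls.from_texts(texts, embeddings, **vs_kwargs)

from_names_and_descriptions(
    [("physics", ["for questions about physics"])],
    FakeStore,
    embeddings=None,
    weaviate_url="http://localhost:8080",
)
print(FakeStore.captured)  # {'weaviate_url': 'http://localhost:8080'}
```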
### Expected behavior
I expect to be able to configure the underlying vectorizer with kwargs passed into the `from_names_and_descriptions` e.g. `weaviate_url` | `EmbeddingRouterChain.from_names_and_descriptions` doesn't accept vectorstore kwargs | https://api.github.com/repos/langchain-ai/langchain/issues/6764/comments | 4 | 2023-06-26T14:58:12Z | 2023-10-02T16:05:14Z | https://github.com/langchain-ai/langchain/issues/6764 | 1,774,991,189 | 6,764 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain == 0.0.205
python == 3.10.11
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
```python
from fastapi import FastAPI, Depends, Request, Response
from typing import Any, Dict, List, Generator
import asyncio
from langchain.llms import OpenAI
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.schema import LLMResult, HumanMessage, SystemMessage
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import StreamingResponse
from html import escape

app = FastAPI()

# Add the CORS middleware
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_methods=["*"],
    allow_headers=["*"],
    allow_credentials=True,
)

q = asyncio.Queue()
stop_item = "###finish###"


class StreamingStdOutCallbackHandlerYield(StreamingStdOutCallbackHandler):
    async def on_llm_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
    ) -> None:
        """Run when LLM starts running."""
        # Clear the queue at the start
        while not q.empty():
            await q.get()

    async def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
        """Run on new LLM token. Only available when streaming is enabled."""
        await q.put(token)

    async def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
        """Run when LLM ends running."""
        await q.put(stop_item)


async def generate_llm_stream(prompt: str) -> Generator[str, None, None]:
    llm = OpenAI(temperature=0.5, streaming=True, callbacks=[StreamingStdOutCallbackHandlerYield()])
    result = await llm.agenerate([prompt])
    while True:
        item = await q.get()
        if item == stop_item:
            break
        yield item


@app.get("/generate-song", status_code=200)
async def generate_song() -> Response:
    prompt = "Write me a song about sparkling water."

    async def event_stream() -> Generator[str, None, None]:
        async for item in generate_llm_stream(prompt):
            escaped_chunk = escape(item).replace("\n", "<br>").replace(" ", "&nbsp;")
            yield f"data:{escaped_chunk}\n\n"

    return StreamingResponse(event_stream(), media_type="text/event-stream")


if __name__ == "__main__":
    import uvicorn
    uvicorn.run("easy:app", host="0.0.0.0", port=8000, reload=True)
```
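Two hedged observations on the report: the traceback below is a plain network failure (`Connect call failed` reaching `api.openai.com:443`), so the resolved address and any proxy/DNS settings are worth checking before the streaming logic; and note that `generate_llm_stream` awaits `llm.agenerate(...)` to completion before draining the queue, so tokens would only be yielded after generation finishes. The queue-with-sentinel pattern itself can be verified with the stdlib alone:

```python
import asyncio

STOP = "###finish###"

async def fake_llm(queue):
    # Stands in for on_llm_new_token / on_llm_end pushing into the queue.
    for token in ["Sparkling", " water", " song"]:
        await queue.put(token)
    await queue.put(STOP)

async def consume(queue):
    out = []
    while True:
        item = await queue.get()
        if item == STOP:
            return out
        out.append(item)

async def main():
    q = asyncio.Queue()
    producer = asyncio.create_task(fake_llm(q))  # produce concurrently
    tokens = await consume(q)                    # drain until the sentinel
    await producer
    return tokens

print(asyncio.run(main()))  # ['Sparkling', ' water', ' song']
```

Running the producer as a concurrent task (rather than awaiting it to completion first) is what lets the consumer see tokens as they arrive.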
### Expected behavior
```
Retrying langchain.llms.openai.acompletion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIConnectionError: Error communicating with OpenAI.
Retrying langchain.llms.openai.acompletion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIConnectionError: Error communicating with OpenAI.
Retrying langchain.llms.openai.acompletion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIConnectionError: Error communicating with OpenAI.
Retrying langchain.llms.openai.acompletion_with_retry.<locals>._completion_with_retry in 8.0 seconds as it raised APIConnectionError: Error communicating with OpenAI.
Retrying langchain.llms.openai.acompletion_with_retry.<locals>._completion_with_retry in 10.0 seconds as it raised APIConnectionError: Error communicating with OpenAI.
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\aiohttp\connector.py", line 980, in _wrap_create_connection
return await self._loop.create_connection(*args, **kwargs) # type: ignore[return-value] # noqa
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\asyncio\base_events.py", line 1076, in create_connection
raise exceptions[0]
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\asyncio\base_events.py", line 1060, in create_connection
sock = await self._connect_sock(
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\asyncio\base_events.py", line 969, in _connect_sock
await self.sock_connect(sock, address)
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\asyncio\selector_events.py", line 501, in sock_connect
return await fut
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\asyncio\selector_events.py", line 541, in _sock_connect_cb
raise OSError(err, f'Connect call failed {address}')
TimeoutError: [Errno 10060] Connect call failed ('108.160.166.253', 443)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\openai\api_requestor.py", line 592, in arequest_raw
result = await session.request(**request_kwargs)
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\aiohttp\client.py", line 536, in _request
conn = await self._connector.connect(
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\aiohttp\connector.py", line 540, in connect
proto = await self._create_connection(req, traces, timeout)
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\aiohttp\connector.py", line 901, in _create_connection
_, proto = await self._create_direct_connection(req, traces, timeout)
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\aiohttp\connector.py", line 1206, in _create_direct_connection
raise last_exc
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\aiohttp\connector.py", line 1175, in _create_direct_connection
transp, proto = await self._wrap_create_connection(
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\aiohttp\connector.py", line 988, in _wrap_create_connection
raise client_error(req.connection_key, exc) from exc
aiohttp.client_exceptions.ClientConnectorError: Cannot connect to host api.openai.com:443 ssl:default [Connect call failed ('108.160.166.253', 443)]
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\uvicorn\protocols\http\httptools_impl.py", line 435, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 78, in __call__
return await self.app(scope, receive, send)
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\fastapi\applications.py", line 282, in __call__
await super().__call__(scope, receive, send)
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\starlette\applications.py", line 122, in __call__
await self.middleware_stack(scope, receive, send)
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\starlette\middleware\errors.py", line 184, in __call__
raise exc
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\starlette\middleware\errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\starlette\middleware\cors.py", line 91, in __call__
await self.simple_response(scope, receive, send, request_headers=headers)
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\starlette\middleware\cors.py", line 146, in simple_response
await self.app(scope, receive, send)
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
raise exc
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 20, in __call__
raise e
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 17, in __call__
await self.app(scope, receive, send)
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\starlette\routing.py", line 718, in __call__
await route.handle(scope, receive, send)
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\starlette\routing.py", line 276, in handle
await self.app(scope, receive, send)
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\starlette\routing.py", line 69, in app
await response(scope, receive, send)
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\starlette\responses.py", line 270, in __call__
async with anyio.create_task_group() as task_group:
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\anyio\_backends\_asyncio.py", line 662, in __aexit__
raise exceptions[0]
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\starlette\responses.py", line 273, in wrap
await func()
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\starlette\responses.py", line 262, in stream_response
async for chunk in self.body_iterator:
File "C:\AI\openai\easy.py", line 61, in event_stream
async for item in generate_llm_stream(prompt):
File "C:\AI\openai\easy.py", line 47, in generate_llm_stream
result = await llm.agenerate([prompt])
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\langchain\llms\base.py", line 287, in agenerate
raise e
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\langchain\llms\base.py", line 279, in agenerate
await self._agenerate(
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\langchain\llms\openai.py", line 355, in _agenerate
async for stream_resp in await acompletion_with_retry(
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\langchain\llms\openai.py", line 120, in acompletion_with_retry
return await _completion_with_retry(**kwargs)
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\tenacity\_asyncio.py", line 88, in async_wrapped
return await fn(*args, **kwargs)
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\tenacity\_asyncio.py", line 47, in __call__
do = self.iter(retry_state=retry_state)
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\tenacity\__init__.py", line 325, in iter
raise retry_exc.reraise()
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\tenacity\__init__.py", line 158, in reraise
raise self.last_attempt.result()
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\concurrent\futures\_base.py", line 451, in result
return self.__get_result()
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\concurrent\futures\_base.py", line 403, in __get_result
raise self._exception
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\tenacity\_asyncio.py", line 50, in __call__
result = await fn(*args, **kwargs)
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\langchain\llms\openai.py", line 118, in _completion_with_retry
return await llm.client.acreate(**kwargs)
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\openai\api_resources\completion.py", line 45, in acreate
return await super().acreate(*args, **kwargs)
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 217, in acreate
response, _, api_key = await requestor.arequest(
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\openai\api_requestor.py", line 304, in arequest
result = await self.arequest_raw(
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\openai\api_requestor.py", line 609, in arequest_raw
raise error.APIConnectionError("Error communicating with OpenAI") from e
openai.error.APIConnectionError: Error communicating with OpenAI``` | Stream a response from LangChain's OpenAI with python fastapi | https://api.github.com/repos/langchain-ai/langchain/issues/6762/comments | 1 | 2023-06-26T14:40:55Z | 2023-10-02T16:05:19Z | https://github.com/langchain-ai/langchain/issues/6762 | 1,774,957,722 | 6,762 |
[
"langchain-ai",
"langchain"
] | ### System Info
Adding memory to an LLMChain with OpenAI functions enabled fails because `AIMessage`s are generated instead of `FunctionMessage`s:
```python
self.add_message(AIMessage(content=message))
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/pas/development/advanced-stack/sandbox/sublime-plugin-maker/venv/lib/python3.11/site-packages/langchain/load/serializable.py", line 74, in __init__
super().__init__(**kwargs)
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for AIMessage
content
str type expected (type=type_error.str)
```
where `AIMessage.content` is in fact a dict.
Tested version: `0.0.215`
### Who can help?
@dev2049
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create an LLMChain with both functions and memory
2. Enjoy :)
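One possible fix direction (an assumption, not a committed design): coerce non-string content to JSON before constructing the message, or emit a `FunctionMessage` whose `content` is the serialised dict. A stdlib sketch of the coercion:

```python
import json

def coerce_content(content):
    """Hypothetical fix sketch: memory could serialise non-string content
    (e.g. an OpenAI function_call dict) instead of failing validation."""
    if isinstance(content, str):
        return content
    return json.dumps(content)

function_call = {"name": "get_weather", "arguments": '{"city": "Paris"}'}
print(coerce_content(function_call))   # a JSON string, valid message content
print(coerce_content("plain answer"))  # plain answer
```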
### Expected behavior
When LLMChain is created with `functions` and memory, generate FunctionMessage and accept dict (or json dumps) | ConversationBufferMemory fails to capture OpenAI functions messages in LLMChain | https://api.github.com/repos/langchain-ai/langchain/issues/6761/comments | 12 | 2023-06-26T14:36:44Z | 2024-06-20T15:48:16Z | https://github.com/langchain-ai/langchain/issues/6761 | 1,774,947,874 | 6,761 |
[
"langchain-ai",
"langchain"
] | ### System Info
There is a lack of support for the streaming option with AzureOpenAI. As the following article shows (https://thivy.hashnode.dev/streaming-response-with-azure-openai), official API support is present on Azure's side.
Specifically, when trying to use the streaming argument with AzureOpenAI, we receive the following error (with the gpt-35-turbo model, which works when interfacing with Azure directly):
`InvalidRequestError: logprobs, best_of and echo parameters are not available on gpt-35-turbo model. Please remove the parameter and try again. For more details, see https://go.microsoft.com/fwlink/?linkid=2227346.`
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Run the following code in a notebook:
```
from langchain.callbacks.base import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chains import (
ConversationalRetrievalChain,
LLMChain
)
from langchain.chains.question_answering import load_qa_chain
from langchain.prompts.prompt import PromptTemplate
from langchain.llms import AzureOpenAI
from langchain.chat_models import AzureChatOpenAI
template = """Given the following chat history and a follow up question, rephrase the follow up input question to be a standalone question.
Or end the conversation if it seems like it's done.
Chat History:\"""
{chat_history}
\"""
Follow Up Input: \"""
{question}
\"""
Standalone question:"""
condense_question_prompt = PromptTemplate.from_template(template)
template = """You are a friendly, conversational retail shopping assistant. Use the following context including product names, descriptions, and keywords to show the shopper whats available, help find what they want, and answer any questions.
It's ok if you don't know the answer.
Context:\"""
{context}
\"""
Question:\"""
\"""
Helpful Answer:"""
qa_prompt= PromptTemplate.from_template(template)
deployment_name = "gpt-35-turbo"
llm = AzureOpenAI(deployment_name=deployment_name, temperature=0)

streaming_llm = AzureOpenAI(
    streaming=True,
    callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
    verbose=True,
    engine=deployment_name,
    temperature=0.2,
    max_tokens=150
)
# use the LLM Chain to create a question creation chain
question_generator = LLMChain(
    llm=llm,
    prompt=condense_question_prompt
)

# use the streaming LLM to create a question answering chain
doc_chain = load_qa_chain(
    llm=streaming_llm,
    # llm=llm,
    chain_type="stuff",
    prompt=qa_prompt
)

chatbot = ConversationalRetrievalChain(
    retriever=vectorstore.as_retriever(),
    combine_docs_chain=doc_chain,
    question_generator=question_generator
)

# create a chat history buffer
chat_history = []

# gather user input for the first question to kick off the bot
question = input("Hi! What are you looking for today?")

# keep the bot running in a loop to simulate a conversation
while True:
    result = chatbot(
        {"question": question, "chat_history": chat_history}
    )
    print("\n")
    chat_history.append((result["question"], result["answer"]))
    question = input()
```
Engine is "gpt-35-turbo" and the correct environment variables are provided.
2. Input anything in the input field.
3. Error Occurs.
Results in the following error:
`InvalidRequestError: logprobs, best_of and echo parameters are not available on gpt-35-turbo model. Please remove the parameter and try again. For more details, see https://go.microsoft.com/fwlink/?linkid=2227346.`
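A hedged reading of the error: the completions-style `AzureOpenAI` wrapper sends completion-only defaults (e.g. `best_of=1`) that chat-only `gpt-35-turbo` deployments reject, which is why switching to `AzureChatOpenAI` (which supports streaming) is the usual workaround. Staying on the completions wrapper would require stripping those parameters, roughly:

```python
# Parameters the Azure gpt-35-turbo deployment rejects, per the error message.
UNSUPPORTED = {"logprobs", "best_of", "echo"}

def strip_unsupported(params):
    """Sketch of the filtering a chat-only deployment needs (illustrative
    helper; not part of the langchain/openai API)."""
    return {k: v for k, v in params.items() if k not in UNSUPPORTED}

payload = {"prompt": "hi", "best_of": 1, "echo": False, "logprobs": None,
           "stream": True}
print(strip_unsupported(payload))  # {'prompt': 'hi', 'stream': True}
```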
### Expected behavior
To execute without error.
### P.S.
This is my first bug report on GitHub, so if anything is missing please let me know. | Streaming Support For AzureOpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/6760/comments | 7 | 2023-06-26T13:44:29Z | 2024-01-30T00:42:44Z | https://github.com/langchain-ai/langchain/issues/6760 | 1,774,817,963 | 6,760 |
[
"langchain-ai",
"langchain"
] | ### System Info
Windows 11,
python==3.10.5
langchain==0.0.215
openai==0.27.8
faiss-cpu==1.7.4
### Who can help?
@zeke
@sbusso
@deepblue
when run this example code:
https://python.langchain.com/docs/modules/model_io/models/chat/integrations/azure_chat_openai
I get the following error...
"InvalidRequestError: The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again."
The code works if you downgrade langchain to 0.0.132
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Follow the instructions in the notebook.
https://python.langchain.com/docs/modules/model_io/models/chat/integrations/azure_chat_openai
### Expected behavior
Get a response. | AzureChatOpenAI: InvalidRequestError | https://api.github.com/repos/langchain-ai/langchain/issues/6759/comments | 4 | 2023-06-26T13:32:27Z | 2023-10-05T16:07:56Z | https://github.com/langchain-ai/langchain/issues/6759 | 1,774,795,023 | 6,759 |
[
"langchain-ai",
"langchain"
] | https://github.com/hwchase17/langchain/blob/1742db0c3076772db652c747df1524cd07695f51/langchain/vectorstores/faiss.py#L458
this `from` construction method can set `normalize_L2` for un-normalized embeddings,
https://github.com/hwchase17/langchain/blob/1742db0c3076772db652c747df1524cd07695f51/langchain/vectorstores/faiss.py#L588
but `load_local` has no `normalize_L2` argument, so caching and loading can't work as expected.
Please add `normalize_L2` to `load_local`, or store the flag in the cache file and restore it on load.
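For context, the flag normalises each stored and query vector to unit length (so L2 distance ranks results like cosine similarity); if an index is built with `normalize_L2=True` but `load_local` cannot restore the flag, query vectors are handled inconsistently after a save/load round trip. The operation itself, as a stdlib sketch:

```python
import math

def normalize_l2(vec):
    """Scale a vector to unit L2 norm (what FAISS's normalize_L2 does)."""
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec] if norm else vec

v = [3.0, 4.0]
print(normalize_l2(v))  # [0.6, 0.8], a unit-length vector
```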
| vectorstores/faiss.py load_local can't set normalize_L2 | https://api.github.com/repos/langchain-ai/langchain/issues/6758/comments | 4 | 2023-06-26T12:26:30Z | 2023-10-19T16:06:58Z | https://github.com/langchain-ai/langchain/issues/6758 | 1,774,668,475 | 6,758 |
[
"langchain-ai",
"langchain"
] | ### System Info
- Langchain: 0.0.215
- Platform: ubuntu
- Python 3.10.12
### Who can help?
@vowelparrot
https://github.com/hwchase17/langchain/blob/d84a3bcf7ab3edf8fe1d49083e066d51c9b5f621/langchain/agents/initialize.py#L54
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Fails if agent initialized as follows:
```python
agent = initialize_agent(
agent='zero-shot-react-description',
tools=tools,
llm=llm,
verbose=True,
max_iterations=30,
memory=ConversationBufferMemory(),
handle_parsing_errors=True)
```
With
```
...
lib/python3.10/site-packages/langchain/agents/initialize.py", line 54, in initialize_agent
tags_.append(agent.value)
AttributeError: 'str' object has no attribute 'value'
```
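The immediate workaround is to pass the enum member (`AgentType.ZERO_SHOT_REACT_DESCRIPTION`) rather than the bare string. A sketch of the defensive coercion the failing line could apply instead (one-member stand-in for LangChain's real `AgentType`, which is likewise a `str`-valued enum):

```python
from enum import Enum

class AgentType(str, Enum):  # mirrors langchain's AgentType shape
    ZERO_SHOT_REACT_DESCRIPTION = "zero-shot-react-description"

def tag_for(agent):
    """Hypothetical defensive version of `tags_.append(agent.value)`:
    accept either the enum member or its plain-string value."""
    if isinstance(agent, AgentType):
        return agent.value
    return AgentType(agent).value  # coerce the string to the enum first

print(tag_for("zero-shot-react-description"))
print(tag_for(AgentType.ZERO_SHOT_REACT_DESCRIPTION))
```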
### Expected behavior
Expected to work as before where agent is specified as a string (or if this is highlighting that agent should actually be an object, it should indicate that instead of the error being shown). | Recent tags change causes AttributeError: 'str' object has no attribute 'value' on initialize_agent call | https://api.github.com/repos/langchain-ai/langchain/issues/6756/comments | 4 | 2023-06-26T11:00:29Z | 2023-06-27T02:03:29Z | https://github.com/langchain-ai/langchain/issues/6756 | 1,774,503,627 | 6,756 |
[
"langchain-ai",
"langchain"
] | ### System Info
Ubuntu 22.04.2
Python 3.10.11
langchain 0.0.215
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from pathlib import Path
from langchain import LlamaCpp
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chains import RetrievalQAWithSourcesChain
from langchain.document_loaders import UnstructuredMarkdownLoader
from langchain.embeddings import LlamaCppEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Weaviate
model_dir = Path.home() / 'models'
llama_path = model_dir / 'llama-7b.ggmlv3.q4_0.bin'
assert llama_path.exists()
encoder = LlamaCppEmbeddings(model_path=str(llama_path))
encoder.client.verbose = False
readme_path = Path(__file__).parent.parent / 'README.md'
loader = UnstructuredMarkdownLoader(str(readme_path))
data = loader.load()
text_splitter = CharacterTextSplitter(
separator="\n\n",
chunk_size=10,
chunk_overlap=2,
length_function=len,
)
texts = text_splitter.split_documents(data)
db = Weaviate.from_documents(texts, encoder, weaviate_url='http://localhost:8080', by_text=False)
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
llm = LlamaCpp(model_path=str(llama_path), temperature=0, callback_manager=callback_manager, stop=[], verbose=False)
llm.client.verbose = False
chain = RetrievalQAWithSourcesChain.from_chain_type(llm, chain_type="stuff", retriever=db.as_retriever(),
reduce_k_below_max_tokens=True, max_tokens_limit=512)
query = "How do I install this package?"
chain({"question": query})
```
### Expected behavior
When setting `max_tokens_limit`, I expect it to cap the final prompt passed to the LLM.
Seeing this error message is very confusing after checking that the question and the loaded source documents do not reach the token limit.
When `BaseQAWithSourcesChain.from_llm` is called, it uses a long `combine_prompt_template` by default, which in the case of LlamaCpp is already over the token limit
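For illustration, a toy sketch (whitespace "tokenizer", hypothetical helper names) of a truncation that budgets the template and question alongside the documents, which is the behaviour expected above; in the current implementation the limit appears to budget only the retrieved documents, not the template:

```python
def count_tokens(text):
    return len(text.split())  # toy tokenizer; real code uses the model's

def fit_docs(template, question, docs, max_tokens):
    """Drop trailing docs so template + question + docs fit the budget."""
    used = count_tokens(template) + count_tokens(question)  # fixed overhead
    kept = []
    for doc in docs:
        t = count_tokens(doc)
        if used + t > max_tokens:
            break  # a long template can exhaust the budget before any doc
        kept.append(doc)
        used += t
    return kept, used

docs = ["pip install mypkg", "clone the repo and run make", "see the wiki"]
kept, used = fit_docs("long template " * 5, "How do I install?", docs, max_tokens=25)
print(kept, used)  # first two docs kept, 23 tokens used
```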
I would expect `max_tokens_limit` to apply to the full prompt, or to receive an error message explaining that the limit was breached because of the template, and ideally an example of how to alter the template | `RetrievalQAWithSourcesChain` with `max_tokens_limit` throws error `Requested tokens exceed context window` | https://api.github.com/repos/langchain-ai/langchain/issues/6754/comments | 1 | 2023-06-26T10:47:36Z | 2023-10-02T16:05:29Z | https://github.com/langchain-ai/langchain/issues/6754 | 1,774,480,128 | 6,754 |
[
"langchain-ai",
"langchain"
] | ### System Info
Python 3.9.6
_**Requirement**_
aiohttp==3.8.4
aiosignal==1.3.1
async-timeout==4.0.2
attrs==23.1.0
certifi==2023.5.7
charset-normalizer==3.1.0
dataclasses-json==0.5.8
docopt==0.6.2
frozenlist==1.3.3
idna==3.4
langchain==0.0.215
langchainplus-sdk==0.0.17
marshmallow==3.19.0
marshmallow-enum==1.5.1
multidict==6.0.4
mypy-extensions==1.0.0
numexpr==2.8.4
numpy==1.25.0
openai==0.27.8
openapi-schema-pydantic==1.2.4
packaging==23.1
pipreqs==0.4.13
pydantic==1.10.9
PyYAML==6.0
requests==2.31.0
SQLAlchemy==2.0.17
tenacity==8.2.2
tqdm==4.65.0
typing-inspect==0.9.0
typing_extensions==4.6.3
urllib3==2.0.3
yarg==0.1.9
yarl==1.9.2
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Endpoint:
**URL**: https://api.gopluslabs.io//api/v1/token_security/{chain_id}
**arguments**:
<img width="593" alt="image" src="https://github.com/hwchase17/langchain/assets/24714804/b2d1226d-2d0b-45d8-9400-7f420d463f38">
In openapi.py: 160
<img width="596" alt="image" src="https://github.com/hwchase17/langchain/assets/24714804/06d70353-47de-4e32-90c3-7f5f6c316047">
After `_format_url` runs, the URL doesn't change; the printed result is still https://api.gopluslabs.io//api/v1/token_security/{chain_id}
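For reference, a minimal sketch of the substitution `_format_url` is expected to perform. If the parameter ends up classified as a query parameter rather than a path parameter (or the names don't match), a format call like this becomes a no-op and the placeholder survives, which matches what is printed. The doubled slash (`io//api`) also suggests the base URL ends with a trailing `/`, worth checking separately.

```python
def format_url(url: str, path_params: dict) -> str:
    """Minimal version of the placeholder substitution expected here."""
    return url.format(**path_params)

url = "https://api.gopluslabs.io/api/v1/token_security/{chain_id}"
print(format_url(url, {"chain_id": "1"}))
# https://api.gopluslabs.io/api/v1/token_security/1
```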
### Expected behavior
Expect URL and path parameters to be properly combined after formatting like:
**https://api.gopluslabs.io//api/v1/token_security/1**
| path_params and url format not work | https://api.github.com/repos/langchain-ai/langchain/issues/6753/comments | 1 | 2023-06-26T10:39:55Z | 2023-10-02T16:05:34Z | https://github.com/langchain-ai/langchain/issues/6753 | 1,774,468,457 | 6,753 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
_No response_
### Suggestion:
_No response_ | from langchain.utilities import RequestsWrapper ImportError: cannot import name 'RequestsWrapper' from 'langchain.utilities'Issue: <Please write a comprehensive title after the 'Issue: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/6752/comments | 2 | 2023-06-26T10:13:50Z | 2023-10-02T16:05:39Z | https://github.com/langchain-ai/langchain/issues/6752 | 1,774,423,676 | 6,752 |
[
"langchain-ai",
"langchain"
] | ### System Info
Ubuntu 22.04.2
Python 3.10.11
langchain 0.0.215
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from pathlib import Path
from langchain.document_loaders import UnstructuredMarkdownLoader
from langchain.embeddings import LlamaCppEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Weaviate
model_dir = Path.home() / 'models'
llama_path = model_dir / 'llama-7b.ggmlv3.q4_0.bin'
encoder = LlamaCppEmbeddings(model_path=str(llama_path))
readme_path = Path(__file__).parent.parent / 'README.md'
loader = UnstructuredMarkdownLoader(readme_path)
data = loader.load()
text_splitter = CharacterTextSplitter(
separator="\n\n",
chunk_size=10,
chunk_overlap=2,
length_function=len,
)
texts = text_splitter.split_documents(data)
db = Weaviate.from_documents(texts, encoder, weaviate_url='http://localhost:8080', by_text=False)
```
### Expected behavior
The `UnstructuredMarkdownLoader` loads the metadata as a `PosixPath` object
`Weaviate.from_documents` then throws an error because it can't post this metadata, as `PosixPath` is not serializable
If I change `loader = UnstructuredMarkdownLoader(readme_path)` to `loader = UnstructuredMarkdownLoader(str(readme_path))` then the metadata is loaded as a string, and the posting to Weaviate works
I would expect `UnstructuredMarkdownLoader` to have the same behaviour when I pass it a string or a path like object
I would expect Weaviate to handle serialising a path like object to a string
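A minimal stdlib sketch of the failure and the workaround described above (no Weaviate involved; `PurePosixPath` stands in for the path-like metadata value the loader produces):

```python
import json
from pathlib import PurePosixPath

metadata = {"source": PurePosixPath("README.md")}

# json.dumps rejects path objects, which is what the Weaviate client hits.
try:
    json.dumps(metadata)
    serializable = True
except TypeError:
    serializable = False

# Casting path-like values to str before posting avoids the error.
fixed = {k: str(v) if isinstance(v, PurePosixPath) else v for k, v in metadata.items()}
payload = json.dumps(fixed)
```

Either the loader could do this cast when storing metadata, or the vector store could do it before serializing.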
| Weaviate.from_documents throws PosixPath is not JSON serializable when documents loaded via Pathlib | https://api.github.com/repos/langchain-ai/langchain/issues/6751/comments | 2 | 2023-06-26T09:48:43Z | 2023-10-02T16:05:44Z | https://github.com/langchain-ai/langchain/issues/6751 | 1,774,373,240 | 6,751 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
The Faiss index is too large, the loading time is very long, and the experience is not good. Is there any way to optimize it?
### Suggestion:
_No response_ | Is there a way to improve the faiss index loading speed? | https://api.github.com/repos/langchain-ai/langchain/issues/6749/comments | 2 | 2023-06-26T09:17:14Z | 2023-11-02T09:53:30Z | https://github.com/langchain-ai/langchain/issues/6749 | 1,774,318,073 | 6,749 |
[
"langchain-ai",
"langchain"
] | Hello everyone.
Oddly enough, I've recently run into a problem with memory.
In the first version, I had no issues, but now it has stopped working. It's as though my agent has Alzheimer's disease.
Does anyone have any suggestions as to why it might have stopped working?
There doesn't seem to be any error message or any apparent reason. Thank you!
I already tried reinstalling chromedb.
```
def agent(tools):
llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")
agent_kwargs = {
"user name is bob": [MessagesPlaceholder(variable_name="chat_history")],
}
template = """This is a conversation between a human and a bot:
{chat_history}
Write a summary of the conversation for {input}:
"""
prompt = PromptTemplate(input_variables=["input", "chat_history"], template=template)
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
#agent_chain = initialize_agent(tools, llm, agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION, verbose=True, memory=memory)
agent_chain=initialize_agent(tools, llm, agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, prompt = prompt, verbose=True, agent_kwargs=agent_kwargs, memory=memory, max_iterations=5, early_stopping_method="generate")
return agent_chain
```
Perhaps this error is related:
```
WARNING:root:Failed to load default session, using empty session: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
Failed to load default session, using empty session: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
```
```
WARNING:root:Failed to persist run: HTTPConnectionPool(host='localhost', port=8000): Max retries exceeded with url: /chain-runs (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f1458a82770>: Failed to establish a new connection: [Errno 111] Connection refused'))
Failed to persist run: HTTPConnectionPool(host='localhost', port=8000): Max retries exceeded with url: /chain-runs (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f1458a82770>: Failed to establish a new connection: [Errno 111] Connection refused'))
I'm sorry, but I don't have access to personal information.
```
| Issue: ConversationBufferMemory stopped working | https://api.github.com/repos/langchain-ai/langchain/issues/6748/comments | 10 | 2023-06-26T09:14:44Z | 2023-10-07T16:06:05Z | https://github.com/langchain-ai/langchain/issues/6748 | 1,774,313,253 | 6,748 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I would like to perform a query on a database using natural language. However, running direct queries is not possible, and I have to do it via an API. For that, given a sentence, I'd like to extract some custom entities from it.
For example, if the sentence is: "How many more than 20 years old male users viewed a page or logged in in the last 30 days?"
The entities are:
```
<gender, equals, male>,
<age, greater than, 20>,
<event name, equals, view page>,
<event name, equals, login>,
<event timestamp, more than, 30 days>
```
The first element of each entity (triplet) comes from the list of columns
The second element is inferred from context (nature of the operator if it's a single value or array to compare with)
The third element is also inferred from the context and must belong to the chosen column (first element)
I'm not able to restrict either of these elements for the entity. I'd like an agent first to check all the columns that are available, choose one and view their unique values. Once it gets that, either choose that column (first element) and value (third element) or look again and repeat these steps.
Any help on this would be great!
### Suggestion:
_No response_ | Issue: Entity extraction using custom rules | https://api.github.com/repos/langchain-ai/langchain/issues/6747/comments | 9 | 2023-06-26T08:13:23Z | 2023-10-05T16:08:07Z | https://github.com/langchain-ai/langchain/issues/6747 | 1,774,202,161 | 6,747 |
[
"langchain-ai",
"langchain"
] |
LLM Chinese application and technology exchange group. If the QR code has expired, add WeChat ID yydsa0007 with the note "LLM application".

 | LLM Chinese Application WeChat Exchange Group | https://api.github.com/repos/langchain-ai/langchain/issues/6745/comments | 1 | 2023-06-26T07:53:56Z | 2024-01-23T15:40:33Z | https://github.com/langchain-ai/langchain/issues/6745 | 1,774,162,267 | 6,745 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain version: latest, installed via
!pip install langchain
import nest_asyncio
nest_asyncio.apply()
from langchain.document_loaders.sitemap import SitemapLoader
SitemapLoader.requests_per_second = 2
# Optional: avoid `[SSL: CERTIFICATE_VERIFY_FAILED]` issue
#SitemapLoader.requests_kwargs = {'verify':True}
loader = SitemapLoader(
"https://www.infoblox.com/sitemap_index.xml",
)
docs = loader.load() # this is where it fails; it works with version 0.0.189
TypeError Traceback (most recent call last)
[<ipython-input-5-609988fd11f7>](https://localhost:8080/#) in <cell line: 13>()
11 "https://www.infoblox.com/sitemap_index.xml",
12 )
---> 13 docs = loader.load()
2 frames
[/usr/local/lib/python3.10/dist-packages/langchain/document_loaders/web_base.py](https://localhost:8080/#) in _scrape(self, url, parser)
186 self._check_parser(parser)
187
--> 188 html_doc = self.session.get(url, verify=self.verify, **self.requests_kwargs)
189 html_doc.encoding = html_doc.apparent_encoding
190 return BeautifulSoup(html_doc.text, parser)
TypeError: requests.sessions.Session.get() got multiple values for keyword argument 'verify'
#this was working in older version
!pip install langchain==0.0.189
thanks
nick
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
follow the code above
### Expected behavior
it should run not crash | SitemapLoader is not working verify error from module | https://api.github.com/repos/langchain-ai/langchain/issues/6744/comments | 1 | 2023-06-26T07:02:41Z | 2023-10-02T16:06:04Z | https://github.com/langchain-ai/langchain/issues/6744 | 1,774,074,429 | 6,744 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain version: 0.0.214
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Same as the [Supabase docs](https://python.langchain.com/docs/modules/data_connection/vectorstores/integrations/supabase) with the exact table and function created, where the `id` column has type `bigint`.
```Python
from supabase.client import Client, create_client
from langchain.embeddings import OpenAIEmbeddings
from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import SupabaseVectorStore
loader = TextLoader("../../../state_of_the_union.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
supabase_client: Client = create_client(
supabase_url=os.getenv("SUPABASE_URL"),
supabase_key=os.getenv("SUPABASE_SERVICE_KEY"),
)
supabase_vector_store = SupabaseVectorStore.from_documents(
documents=docs,
client=supabase_client,
embedding=embeddings,
)
```
Got
> APIError: {'code': '22P02', 'details': None, 'hint': None, 'message': 'invalid input syntax for type bigint: "64f03aff-0c0e-4f24-91e2-e01fcaxxxxxx"'}
### Expected behavior
Successfully insert and embed the split docs into Supabase.
To be helpful, I believe it was introduced by [this commit](https://github.com/hwchase17/langchain/commit/be02572d586bcb33fffe89c37b81d5ba26762bec) regarding the List[str] type `ids`. But not sure if it was intended with docs not updated yet, or otherwise. | id type in SupabaseVectorStore doesn't match SQL column | https://api.github.com/repos/langchain-ai/langchain/issues/6743/comments | 9 | 2023-06-26T06:17:47Z | 2023-10-10T03:06:10Z | https://github.com/langchain-ai/langchain/issues/6743 | 1,773,971,733 | 6,743 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain-0.0.207, Windows, Python-3.9.16
Memory (VectorStoreRetrieverMemory) Settings:
dimension = 768
index = faiss.IndexFlatL2(dimension)
embeddings = HuggingFaceEmbeddings()
vectorstore = FAISS(embeddings.embed_query, index, InMemoryDocstore({}), {})
retriever = vectorstore.as_retriever(search_kwargs=dict(k=3))
memory = VectorStoreRetrieverMemory(
memory_key="chat_history",
return_docs=True,
retriever=retriever,
return_messages=True,
)
ConversationalRetrievalChain Settings:
_template = """Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.\
Make sure to avoid using any unclear pronouns.
Chat History:
{chat_history}
(You do not need to use these pieces of information if not relevant)
Follow Up Input: {question}
Standalone question:"""
CONDENSE_QUESTION_PROMPT = PromptTemplate(
input_variables=["chat_history", "question"], template=_template
)
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)
doc_chain = load_qa_with_sources_chain(
llm, chain_type="refine"
) # stuff map_reduce refine
qa = ConversationalRetrievalChain(
retriever=chroma.as_retriever(search_kwargs=dict(k=5)),
memory=memory,
combine_docs_chain=doc_chain,
question_generator=question_generator,
return_source_documents=True,
# verbose=True,
)
responses = qa({"question": user_input})
ISSUE: On the first call it gives results, BUT on the second call it throws the following error:
ValueError: Unsupported chat history format: <class 'langchain.schema.Document'>. Full chat history:
What am I doing wrong here?
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Memory (VectorStoreRetrieverMemory) Settings:
dimension = 768
index = faiss.IndexFlatL2(dimension)
embeddings = HuggingFaceEmbeddings()
vectorstore = FAISS(embeddings.embed_query, index, InMemoryDocstore({}), {})
retriever = vectorstore.as_retriever(search_kwargs=dict(k=3))
memory = VectorStoreRetrieverMemory(
memory_key="chat_history",
return_docs=True,
retriever=retriever,
return_messages=True,
)
2. ConversationalRetrievalChain Settings:
_template = """Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.\
Make sure to avoid using any unclear pronouns.
Chat History:
{chat_history}
(You do not need to use these pieces of information if not relevant)
Follow Up Input: {question}
Standalone question:"""
CONDENSE_QUESTION_PROMPT = PromptTemplate(
input_variables=["chat_history", "question"], template=_template
)
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)
doc_chain = load_qa_with_sources_chain(
llm, chain_type="refine"
) # stuff map_reduce refine
qa = ConversationalRetrievalChain(
retriever=chroma.as_retriever(search_kwargs=dict(k=5)),
memory=memory,
combine_docs_chain=doc_chain,
question_generator=question_generator,
return_source_documents=True,
# verbose=True,
)
responses = qa({"question": user_input})
ISSUE: On the first call it gives results, BUT on the second call it throws the following error:
ValueError: Unsupported chat history format: <class 'langchain.schema.Document'>. Full chat history:
What am I doing wrong here?
### Expected behavior
Chat history should be injected in chain | Getting "ValueError: Unsupported chat history format:" while using ConversationalRetrievalChain with memory type VectorStoreRetrieverMemory | https://api.github.com/repos/langchain-ai/langchain/issues/6741/comments | 8 | 2023-06-26T04:47:59Z | 2023-10-24T16:07:18Z | https://github.com/langchain-ai/langchain/issues/6741 | 1,773,840,557 | 6,741 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain Version: 0.0.74
openai Version: 0.27.8
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.llms import OpenAI
llm = OpenAI(openai_api_key=INSERT_API_KEY, temperature=0.9)
llm.predict("What would be a good company name for a company that makes colorful socks?")
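For context, `predict` appears to have been added to the LLM interface in releases newer than the 0.0.74 pinned above, so upgrading should resolve this; a version-tolerant call can also fall back to `__call__`. Below is a stdlib sketch of that pattern (`OldLLM` is a stand-in, not the real OpenAI wrapper):

```python
class OldLLM:
    """Stand-in for an LLM object that predates the .predict() method."""
    def __call__(self, prompt: str) -> str:
        return f"completion for: {prompt}"

def predict(llm, prompt: str) -> str:
    # Prefer .predict() when available; otherwise fall back to calling the LLM.
    if hasattr(llm, "predict"):
        return llm.predict(prompt)
    return llm(prompt)

result = predict(OldLLM(), "name a sock company")
```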
### Expected behavior
I was following the langchain quickstart guide, expected to see something similar to the output. | AttributeError: 'OpenAI' object has no attribute 'predict' | https://api.github.com/repos/langchain-ai/langchain/issues/6740/comments | 9 | 2023-06-26T04:01:06Z | 2023-08-31T10:13:23Z | https://github.com/langchain-ai/langchain/issues/6740 | 1,773,789,387 | 6,740 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am looking for help to continue langchain Refine chain from a saved checkpoint. None of my code got completed because of GPT API overload. Any help would be appreciated.
### Suggestion:
_No response_ | Issue: Continue from a saved checkpoint | https://api.github.com/repos/langchain-ai/langchain/issues/6733/comments | 4 | 2023-06-26T02:17:33Z | 2024-01-30T00:44:50Z | https://github.com/langchain-ai/langchain/issues/6733 | 1,773,678,248 | 6,733 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain:0.0.215
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.indexes import VectorstoreIndexCreator
from langchain.embeddings import OpenAIEmbeddings
index = VectorstoreIndexCreator(embedding=OpenAIEmbeddings('gpt2')).from_loaders([loader])
```
error:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[6], line 3
1 from langchain.indexes import VectorstoreIndexCreator
2 from langchain.embeddings import OpenAIEmbeddings
----> 3 index = VectorstoreIndexCreator(embedding=OpenAIEmbeddings('gpt2')).from_loaders([loader])
File [~/micromamba/envs/openai/lib/python3.11/site-packages/pydantic/main.py:332], in pydantic.main.BaseModel.__init__()
TypeError: __init__() takes exactly 1 positional argument (2 given)
```
### Expected behavior
no | TypeError: __init__() takes exactly 1 positional argument (2 given) | https://api.github.com/repos/langchain-ai/langchain/issues/6730/comments | 3 | 2023-06-25T23:58:49Z | 2023-06-26T23:52:15Z | https://github.com/langchain-ai/langchain/issues/6730 | 1,773,575,506 | 6,730 |
[
"langchain-ai",
"langchain"
] | ### System Info
no
### Who can help?
@hwchase17 @agola11
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [x] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
when running the following code:
```
from langchain.embeddings import OpenAIEmbeddings
embedding_model = OpenAIEmbeddings()
embeddings = embedding_model.embed_documents(
[
"Hi there!",
"Oh, hello!",
"What's your name?",
"My friends call me World",
"Hello World!"
]
)
```
such errors occurred:
```
ValueError Traceback (most recent call last)
Cell In[9], line 1
----> 1 embeddings = embedding_model.embed_documents(
2 [
3 "Hi there!",
4 "Oh, hello!",
5 "What's your name?",
6 "My friends call me World",
7 "Hello World!"
8 ]
9 )
10 len(embeddings), len(embeddings[0])
File ~/micromamba/envs/openai/lib/python3.11/site-packages/langchain/embeddings/openai.py:305, in OpenAIEmbeddings.embed_documents(self, texts, chunk_size)
293 """Call out to OpenAI's embedding endpoint for embedding search docs.
294
295 Args:
(...)
301 List of embeddings, one for each text.
302 """
303 # NOTE: to keep things simple, we assume the list may contain texts longer
304 # than the maximum context and use length-safe embedding function.
--> 305 return self._get_len_safe_embeddings(texts, engine=self.deployment)
File ~/micromamba/envs/openai/lib/python3.11/site-packages/langchain/embeddings/openai.py:225, in OpenAIEmbeddings._get_len_safe_embeddings(self, texts, engine, chunk_size)
223 tokens = []
224 indices = []
--> 225 encoding = tiktoken.get_encoding(self.model)
226 for i, text in enumerate(texts):
227 if self.model.endswith("001"):
228 # See: https://github.com/openai/openai-python/issues/418#issuecomment-1525939500
229 # replace newlines, which can negatively affect performance.
File ~/micromamba/envs/openai/lib/python3.11/site-packages/tiktoken/registry.py:60, in get_encoding(encoding_name)
57 assert ENCODING_CONSTRUCTORS is not None
59 if encoding_name not in ENCODING_CONSTRUCTORS:
---> 60 raise ValueError(f"Unknown encoding {encoding_name}")
62 constructor = ENCODING_CONSTRUCTORS[encoding_name]
63 enc = Encoding(**constructor())
ValueError: Unknown encoding text-embedding-ada-002
```
how to fix it?
### Expected behavior
no | ValueError: Unknown encoding text-embedding-ada-002 | https://api.github.com/repos/langchain-ai/langchain/issues/6726/comments | 6 | 2023-06-25T23:08:34Z | 2024-01-28T23:15:37Z | https://github.com/langchain-ai/langchain/issues/6726 | 1,773,551,290 | 6,726 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain: 0.0.215
platform: ubuntu 22.04 LTS
python: 3.10
### Who can help?
@eyurtsev :)
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
https://colab.research.google.com/drive/1vraycYUuF-BZ0UA8EoUT0hPt15HoeGPV?usp=sharing
### Expected behavior
The first result of DuckDuckGoSearch to be returned in the `get_snippets` and `results` methods of the DuckDuckGoSearchAPIWrapper. | DuckDuckGoSearchAPIWrapper Consumes results w/o returning them | https://api.github.com/repos/langchain-ai/langchain/issues/6724/comments | 2 | 2023-06-25T22:54:36Z | 2023-06-26T13:58:17Z | https://github.com/langchain-ai/langchain/issues/6724 | 1,773,540,838 | 6,724 |
[
"langchain-ai",
"langchain"
] | ### System Info
colab
### Who can help?
@hwchase17 Hi, I am trying to use context-aware text splitting and QA / chat.
The code doesn't work, starting at the vector index step:
# Build vectorstore and keep the metadata
from langchain.vectorstores import Chroma
vectorstore = Chroma.from_documents(texts=all_splits,metadatas=all_metadatas,embedding=OpenAIEmbeddings())
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am only following the notebook
### Expected behavior
it should work | https://python.langchain.com/docs/use_cases/question_answering/document-context-aware-QA code is not working, | https://api.github.com/repos/langchain-ai/langchain/issues/6723/comments | 8 | 2023-06-25T21:07:51Z | 2023-10-12T16:08:27Z | https://github.com/langchain-ai/langchain/issues/6723 | 1,773,490,075 | 6,723 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Currently, the API chain takes the result from the API and gives it directly to an LLM chain. If the result is larger than the context length of the LLM, the chain will give an error. To address this issue I propose that the chain should split the API result using a text splitter, then give the result to a combine-documents chain that answers the question.
### Motivation
I have found that certain questions given to the API chain produce results from the API that exceed the context length of the standard OpenAI model. For example:
```python
from langchain.chains import APIChain
from langchain.prompts.prompt import PromptTemplate
from langchain.llms import OpenAI
llm = OpenAI(temperature=0)
from langchain.chains.api import open_meteo_docs
chain_new = APIChain.from_llm_and_api_docs(llm, open_meteo_docs.OPEN_METEO_DOCS, verbose=True)
chain_new.run('What is the weather of [latitude:52.52, longitude:13.419998]?')
```
The LLM creates this URL: https://api.open-meteo.com/v1/forecast?latitude=52.52&longitude=13.419998&hourly=temperature_2m,weathercode,snow_depth,freezinglevel_height,visibility. The content from this URL is fairly large. This causes the prompt to have 6382 tokens and be larger than the max context length. I think the chain should be able to handle larger responses.
### Your contribution
I can try to submit a PR to implement this solution. | Handling larger responses in the API chain | https://api.github.com/repos/langchain-ai/langchain/issues/6722/comments | 4 | 2023-06-25T20:44:41Z | 2024-01-30T09:20:22Z | https://github.com/langchain-ai/langchain/issues/6722 | 1,773,482,364 | 6,722 |
[
"langchain-ai",
"langchain"
] | ### System Info
windows 11, python 3.9.16 , langchain-0.0.215
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The loaded documents succesfully get indexd in FAISS, Deeplake and Pinecone, so I guess the issue is not there. When using Chroma.from_documents the error is thrown:
```
File ~\...langchain\vectorstores\chroma.py:149 in add_texts
self._collection.upsert(
AttributeError: 'Collection' object has no attribute 'upsert'
```
Code:
```python
with open('docs.pkl', 'rb') as file:
documents= pickle.load(file)
text_splitter = RecursiveCharacterTextSplitter(chunk_size=512, chunk_overlap=200)
docs = text_splitter.split_documents(documents)
chroma_dir = base_dir + "ChromaDB" + "_" + str(chunk_size) + "_" + str(overlap)
db2 = Chroma.from_documents(docs, embeddings, persist_directory=chroma_dir)
```
Uploaded the pickle zipped, as .pkl is not supported for upload.
[docs.zip](https://github.com/hwchase17/langchain/files/11860249/docs.zip)
### Expected behavior
Index the docs | ChromaDB Chroma.from_documents error: AttributeError: 'Collection' object has no attribute 'upsert' | https://api.github.com/repos/langchain-ai/langchain/issues/6721/comments | 2 | 2023-06-25T16:40:21Z | 2023-06-25T17:48:54Z | https://github.com/langchain-ai/langchain/issues/6721 | 1,773,371,161 | 6,721 |
[
"langchain-ai",
"langchain"
Hi team,
Can I store embeddings with a tag or key? I want users with different authorization to query only specific document vectors.
I was trying to use Redis as the vectorstore. Would it be a recommended vectorstore if I want to implement this kind of authentication?
I'll embed my documents into the vectorstore, and each embedding has its own permission. I'll also have users with different permissions. A user who is granted permission on a document can query that document's vectors.
For example, embed three documents into the vectorstore, with two users having different permissions.
| Docs | key |
| :----- | :--: |
| Docs_A | A |
| Docs_B | B |
| Docs_C | C |
| User| permission |
| :----- | :--: |
| User_A | A, C |
| User_B | B |
At this use case, User_A can query the vector of Docs_A and Docs_C, and User_B just Docs_B | search specific vector by tag or key | https://api.github.com/repos/langchain-ai/langchain/issues/6720/comments | 5 | 2023-06-25T15:52:23Z | 2023-10-24T21:10:03Z | https://github.com/langchain-ai/langchain/issues/6720 | 1,773,353,594 | 6,720 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
I've made some comments on https://langchainers.hashnode.dev/getting-started-with-langchainjs
and https://langchainers.hashnode.dev/learn-how-to-integrate-language-models-llms-with-sql-databases-using-langchainjs
about some things to update to make them work with the latest LangChain JS/TS packages.
### Idea or request for content:
_No response_ | DOC: some upgrades to https://langchainers.hashnode.dev/getting-started-with-langchainjs | https://api.github.com/repos/langchain-ai/langchain/issues/6718/comments | 1 | 2023-06-25T15:05:51Z | 2023-06-25T15:19:55Z | https://github.com/langchain-ai/langchain/issues/6718 | 1,773,336,111 | 6,718 |
[
"langchain-ai",
"langchain"
] | ### System Info
All regular retrievers have an `add_documents` method that handles adding a list of documents, but this retriever only supports `add_texts`, which takes a list of strings rather than LangChain documents.
For comparison, Weaviate hybrid search is able to handle document lists:
```python
    def add_documents(self, docs: List[Document], **kwargs: Any) -> List[str]:
        """Upload documents to Weaviate."""
        from weaviate.util import get_valid_uuid

        with self._client.batch as batch:
            ids = []
            for i, doc in enumerate(docs):
                metadata = doc.metadata or {}
                data_properties = {self._text_key: doc.page_content, **metadata}
                # If the UUID of one of the objects already exists
                # then the existing object will be replaced by the new object.
                if "uuids" in kwargs:
                    _id = kwargs["uuids"][i]
                else:
                    _id = get_valid_uuid(uuid4())
                batch.add_data_object(data_properties, self._index_name, _id)
                ids.append(_id)
            return ids
```
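For illustration, a hypothetical `add_documents` wrapper for the Pinecone hybrid retriever could be as thin as the sketch below: it only unzips documents into parallel text and metadata lists and delegates to `add_texts`. This assumes `add_texts` accepts a `metadatas` keyword, which may not match the real `PineconeHybridSearchRetriever` signature; `Document` and `FakeRetriever` here are stand-ins for demonstration:

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    page_content: str
    metadata: dict = field(default_factory=dict)

def add_documents(retriever, docs):
    # Unzip LangChain-style documents into the lists add_texts expects.
    texts = [d.page_content for d in docs]
    metadatas = [d.metadata for d in docs]
    return retriever.add_texts(texts, metadatas=metadatas)

class FakeRetriever:
    """Records add_texts calls so the wrapper can be exercised without Pinecone."""
    def __init__(self):
        self.calls = []
    def add_texts(self, texts, metadatas=None):
        self.calls.append((texts, metadatas))
        return list(range(len(texts)))

r = FakeRetriever()
ids = add_documents(r, [Document("a", {"k": 1}), Document("b")])
```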
### Who can help?
@hw
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Import PineconeHybridSearch
### Expected behavior
retriever.add_documents(DocumentList) | Pinecone hybrid search is incomplete, missing add_documents method | https://api.github.com/repos/langchain-ai/langchain/issues/6716/comments | 2 | 2023-06-25T14:29:47Z | 2023-10-01T16:04:53Z | https://github.com/langchain-ai/langchain/issues/6716 | 1,773,318,895 | 6,716 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain 0.0.125, Python 3.9 within Databricks
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain import PromptTemplate, HuggingFaceHub, LLMChain
### Expected behavior
Installation without issues | cannot import name 'get_callback_manager' from 'langchain.callbacks' | https://api.github.com/repos/langchain-ai/langchain/issues/6715/comments | 2 | 2023-06-25T12:28:06Z | 2023-10-01T16:04:58Z | https://github.com/langchain-ai/langchain/issues/6715 | 1,773,252,216 | 6,715 |
[
"langchain-ai",
"langchain"
] | ### Feature request
The value of token_max in map_reduce should vary according to the model, not be fixed at 3000.

### Motivation
When I use the gpt-3.5-turbo-16k model, I often encounter this ValueError ('A single document was so long it could not be combined') due to the small token_max. I always need to set a larger value for token_max to resolve the issue.

### Your contribution
I hope to enhance the functionality by dynamically setting the value of token_max based on the model used by the current chain.

| Update the token_max value in mp_reduce.ValueError:"A single document was so long it could not be combined "、"A single document was longer than the context length," | https://api.github.com/repos/langchain-ai/langchain/issues/6714/comments | 3 | 2023-06-25T10:19:42Z | 2023-07-05T08:13:50Z | https://github.com/langchain-ai/langchain/issues/6714 | 1,773,197,314 | 6,714 |
[
"langchain-ai",
"langchain"
] | Hi there, awesome project!
https://github.com/buhe/langchain-swift is a Swift port of LangChain, for iOS or macOS apps.
Chatbots, QA bots, and Agents are done.
Current status:
- [ ] LLMs
- [x] OpenAI
- [ ] Vectorstore
- [x] Supabase
- [ ] Embedding
- [x] OpenAI
- [ ] Chain
- [x] Base
- [x] LLM
- [ ] Tools
- [x] Dummy
- [x] InvalidTool
- [ ] Serper
- [ ] Zapier
- [ ] Agent
- [x] ZeroShotAgent
- [ ] Memory
- [x] BaseMemory
- [x] BaseChatMemory
- [x] ConversationBufferWindowMemory
- [ ] Text Splitter
- [x] CharacterTextSplitter
- [ ] Document Loader
- [x] TextLoader
### Suggestion:
_No response_ | A swift langchain copy, for ios or mac apps. | https://api.github.com/repos/langchain-ai/langchain/issues/6712/comments | 1 | 2023-06-25T07:29:55Z | 2023-10-01T16:05:03Z | https://github.com/langchain-ai/langchain/issues/6712 | 1,773,116,562 | 6,712 |
[
"langchain-ai",
"langchain"
I defined some agents and some tools.
How can I get the execution order of the LLM and tools inside an agent? Should I use a callback, or something else?
If I print from a callback, are the results in order?
```
agent_orange = ChatAgent(llm_chain=orange_chain, allowed_tools=tool_names,
                         output_parser=MyChatOutputParser())
agent_chatgpt = ChatAgent(llm_chain=chatgpt_chain, allowed_tools=tool_names,
                          output_parser=MyChatOutputParser())
agent_executor = MyAgentExecutor.from_agent_and_tools(
    agent=[agent_orange, agent_chatgpt],
    tools=tools,
    verbose=True,
)
result = agent_executor.run("my query", callbacks=[my_handle])
``` | How can i get execution order inside agent? | https://api.github.com/repos/langchain-ai/langchain/issues/6711/comments | 2 | 2023-06-25T07:26:30Z | 2023-10-01T16:05:08Z | https://github.com/langchain-ai/langchain/issues/6711 | 1,773,115,263 | 6,711 |
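Callbacks are the usual way to observe this; handler methods fire synchronously, in execution order. A pure-Python sketch of the idea (no langchain import; with a real agent you would subclass `BaseCallbackHandler` and pass it via `callbacks=[handler]`, and the simulated call sequence below stands in for what the agent would drive):

```python
# A handler that records the order in which callback events fire.
class OrderRecordingHandler:
    def __init__(self):
        self.events = []

    def on_llm_start(self, *args, **kwargs):
        self.events.append("llm_start")

    def on_llm_end(self, *args, **kwargs):
        self.events.append("llm_end")

    def on_tool_start(self, *args, **kwargs):
        self.events.append("tool_start")

    def on_tool_end(self, *args, **kwargs):
        self.events.append("tool_end")

handler = OrderRecordingHandler()
# Simulated agent step: LLM decides, a tool runs, then the LLM answers.
handler.on_llm_start(); handler.on_llm_end()
handler.on_tool_start(); handler.on_tool_end()
handler.on_llm_start(); handler.on_llm_end()
print(handler.events)
```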
[
"langchain-ai",
"langchain"
] | ### System Info
Apple M1
### Who can help?
I get this error: (mach-o file, but is an incompatible architecture (have 'arm64', need 'x86_64'))
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Just run the model
### Expected behavior
It must run on Apple M1. | Error while running on Apple M1 Pro | https://api.github.com/repos/langchain-ai/langchain/issues/6709/comments | 0 | 2023-06-25T06:37:18Z | 2023-06-25T08:28:05Z | https://github.com/langchain-ai/langchain/issues/6709 | 1,773,096,079 | 6,709 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain: v0.0.214
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
https://github.com/hwchase17/langchain/blob/v0.0.214/langchain/vectorstores/base.py#L410
https://github.com/hwchase17/langchain/blob/v0.0.214/langchain/vectorstores/chroma.py#L225
https://github.com/hwchase17/langchain/blob/v0.0.214/langchain/vectorstores/chroma.py#L198
The `filter` option does not work when search_type is similarity_score_threshold
### Expected behavior
work:
```python
def _similarity_search_with_relevance_scores(
self,
query: str,
k: int = 4,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
return self.similarity_search_with_score(query, k, **kwargs)
``` | chroma func _similarity_search_with_relevance_scores missing "kwargs" parameter | https://api.github.com/repos/langchain-ai/langchain/issues/6707/comments | 2 | 2023-06-25T05:57:30Z | 2023-10-01T16:05:14Z | https://github.com/langchain-ai/langchain/issues/6707 | 1,773,077,539 | 6,707 |
[
"langchain-ai",
"langchain"
] | ### System Info
- LangChain version: 0.0.214
- tiktoken version: 0.4.0
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Executing following code raises `TypeError: expected string or buffer`.
```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage
llm = ChatOpenAI(model_name="gpt-3.5-turbo-0613", temperature=0)
functions = [
{
"name": "get_current_weather",
"description": "Get the current weather in a given location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
},
"unit": {
"type": "string",
"enum": ["celsius", "fahrenheit"]
}
},
"required": ["location"]
}
}
]
response = llm([HumanMessage(content="What is the weather like in Boston?")], functions=functions)
llm.get_num_tokens_from_messages([response])
```
`get_num_tokens_from_messages` internally converts messages to dicts with `_convert_message_to_dict` and then iterates over all key-value pairs to count the number of tokens.
The code expects each value to be a string, but when a function call is included, an exception is raised because the value contains a dictionary.
### Expected behavior
As far as I know, there is no officially documented way to calculate the exact token count consumption when using function call.
Someone on the OpenAI forum has [posted](https://community.openai.com/t/how-to-calculate-the-tokens-when-using-function-call/266573/10) a method for calculating the tokens, so perhaps that method could be adopted.
| ChatOpenAI.get_num_tokens_from_messages raises TypeError with function call response | https://api.github.com/repos/langchain-ai/langchain/issues/6706/comments | 2 | 2023-06-25T05:36:02Z | 2023-10-01T16:05:18Z | https://github.com/langchain-ai/langchain/issues/6706 | 1,773,070,930 | 6,706 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
When I use RecursiveUrlLoader, I get:
NotImplementedError: WebBaseLoader does not implement lazy_load()

### Suggestion:
_No response_ | NotImplementedError: WebBaseLoader does not implement lazy_load() | https://api.github.com/repos/langchain-ai/langchain/issues/6704/comments | 3 | 2023-06-25T05:06:40Z | 2023-10-05T16:08:30Z | https://github.com/langchain-ai/langchain/issues/6704 | 1,773,061,920 | 6,704 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I am trying to add memory to create_pandas_dataframe_agent to perform post-processing on a model that I trained using LangChain. I am using the following code at the moment.
```
from langchain.llms import OpenAI
from langchain.agents import create_pandas_dataframe_agent
import pandas as pd
df = pd.read_csv('titanic.csv')
agent = create_pandas_dataframe_agent(OpenAI(temperature=0), [df], verbose=True)
```
I tried adding memory = ConversationBufferMemory(memory_key="chat_history"), but that didn't help. I tried many other methods, but it seems that memory for create_pandas_dataframe_agent is not implemented.
### Motivation
There is a major need in pandas processing to save models as pickle files and to add new features to the studied dataset, which alters the original dataset for the next step. It seems that langchain currently doesn't support that.
### Your contribution
I can help with the implementation if necessary. | Memory seems not to be supported in create_pandas_dataframe_agent | https://api.github.com/repos/langchain-ai/langchain/issues/6699/comments | 7 | 2023-06-25T03:05:36Z | 2023-12-13T16:08:48Z | https://github.com/langchain-ai/langchain/issues/6699 | 1,773,025,734 | 6,699 |
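Until memory is supported natively, a common workaround is to keep the chat history outside the agent and prepend it to each query. A pure-Python sketch of that buffer (all names are mine; with LangChain you would feed `memory.wrap(question)` into `agent.run` and then call `memory.save`):

```python
class BufferMemory:
    """Minimal external chat-history buffer."""

    def __init__(self):
        self.history = []

    def wrap(self, query: str) -> str:
        """Prepend the accumulated history to the new query."""
        context = "\n".join(self.history)
        return f"{context}\n{query}" if context else query

    def save(self, query: str, answer: str) -> None:
        self.history.append(f"Human: {query}")
        self.history.append(f"AI: {answer}")

memory = BufferMemory()
prompt = memory.wrap("How many rows does df have?")
memory.save("How many rows does df have?", "891")
print(memory.wrap("And how many columns?"))
```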
[
"langchain-ai",
"langchain"
] |
### System Info
$ pip freeze | grep langchain
langchain==0.0.197
langchainplus-sdk==0.0.10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
For some reason:
db = Chroma.from_documents(texts, self.embeddings, persist_directory=db_path, client_settings=settings)
persist_directory=db_path has no effect ... upon db.persist() it stores into the default directory 'db' instead of using db_path.
It only works if you explicitly set Settings(persist_directory=db_path, ... ).
The probable reason is that in langchain's chroma.py, if you pass client_settings and 'persist_directory' is not part of the settings, it will not add 'persist_directory' as is done in the ELSE case, but ...:
(line 77 ++)
```
if client is not None:
self._client = client
else:
if client_settings:
self._client_settings = client_settings <<< .... here ..........
else:
self._client_settings = chromadb.config.Settings()
if persist_directory is not None:
self._client_settings = chromadb.config.Settings(
chroma_db_impl="duckdb+parquet",
persist_directory=persist_directory,
)
self._client = chromadb.Client(self._client_settings)
self._embedding_function = embedding_function
self._persist_directory = persist_directory
```
but ... chromadb.__init__.py expects 'persist_directory' in the settings (line 44), otherwise it will use the default:
```
elif setting == "duckdb+parquet":
require("persist_directory")
logger.warning(
f"Using embedded DuckDB with persistence: data will be stored in: {settings.persist_directory}"
)
```
### Expected behavior
db = Chroma.from_documents(texts, self.embeddings, persist_directory=db_path, client_settings=settings)
should use db_path instead of 'db' | Chroma.from_documents(...persist_directory=db_path) has no effect | https://api.github.com/repos/langchain-ai/langchain/issues/6696/comments | 3 | 2023-06-24T23:24:49Z | 2023-10-29T16:05:41Z | https://github.com/langchain-ai/langchain/issues/6696 | 1,772,962,572 | 6,696 |
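The fix described above can be sketched in pure Python (no chromadb import; `Settings` here is a stand-in for `chromadb.config.Settings`, and `resolve_settings` is a hypothetical helper): when client_settings is supplied but lacks a persist_directory, copy the explicitly passed one into it so chromadb's "duckdb+parquet" branch can see it.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Settings:
    chroma_db_impl: str = "duckdb+parquet"
    persist_directory: Optional[str] = None

def resolve_settings(client_settings: Optional[Settings],
                     persist_directory: Optional[str]) -> Settings:
    """Merge an explicit persist_directory into user-supplied settings."""
    if client_settings is None:
        return Settings(persist_directory=persist_directory)
    if client_settings.persist_directory is None and persist_directory:
        client_settings.persist_directory = persist_directory
    return client_settings

settings = Settings()  # user passes settings without a persist_directory
resolved = resolve_settings(settings, "/tmp/my_db")
print(resolved.persist_directory)  # /tmp/my_db
```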