| issue_owner_repo (list, length 2) | issue_body (string, 0–261k chars, nullable) | issue_title (string, 1–925 chars) | issue_comments_url (string, 56–81 chars) | issue_comments_count (int64, 0–2.5k) | issue_created_at (string, length 20) | issue_updated_at (string, length 20) | issue_html_url (string, 37–62 chars) | issue_github_id (int64, 387k–2.46B) | issue_number (int64, 1–127k) |
|---|---|---|---|---|---|---|---|---|---|
[
"langchain-ai",
"langchain"
] | Hi, I have one question: I want to use `search_distance` with `ConversationalRetrievalChain`.
Here is my code:
```
vectordbkwargs = {"search_distance": 0.9}
bot_message = qa.run({"question": history[-1][0], "chat_history": history[:-1], "vectordbkwargs": vectordbkwargs})
```
But I am getting the following error:
```
raise ValueError(f"One input key expected got {prompt_input_keys}")
ValueError: One input key expected got ['question', 'vectordbkwargs']
```
Does anybody have any idea what I am doing wrong? | Error in running search_distance with ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/3178/comments | 6 | 2023-04-19T21:10:44Z | 2023-11-22T16:09:14Z | https://github.com/langchain-ai/langchain/issues/3178 | 1,675,643,417 | 3,178 |
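A minimal, hypothetical sketch of why this error can fire: the chain appears to treat every non-memory key in the input dict as a prompt input, so an extra top-level key like `vectordbkwargs` trips the one-key check. The helper name and logic below are assumptions for illustration, not LangChain's actual code:

```python
# Hypothetical reproduction of the one-input-key validation, not LangChain code.
def resolve_prompt_input_key(inputs, memory_keys):
    # Every key not consumed by memory is treated as a prompt input.
    prompt_input_keys = [k for k in inputs if k not in memory_keys]
    if len(prompt_input_keys) != 1:
        raise ValueError(f"One input key expected got {prompt_input_keys}")
    return prompt_input_keys[0]

# "chat_history" is consumed by memory, so only "question" remains -> OK.
ok = resolve_prompt_input_key(
    {"question": "hi", "chat_history": []}, memory_keys={"chat_history"}
)

# Adding "vectordbkwargs" leaves two candidate keys -> the reported error.
try:
    resolve_prompt_input_key(
        {"question": "hi", "chat_history": [], "vectordbkwargs": {}},
        memory_keys={"chat_history"},
    )
    failed = False
except ValueError:
    failed = True
```

Under this reading, the extra key has to be consumed somewhere (for example by the retriever's own search settings) rather than passed as a second top-level input.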
[
"langchain-ai",
"langchain"
] | null | create_python_agent doesnt return intermediate step | https://api.github.com/repos/langchain-ai/langchain/issues/3177/comments | 1 | 2023-04-19T21:09:07Z | 2023-09-10T16:30:25Z | https://github.com/langchain-ai/langchain/issues/3177 | 1,675,640,827 | 3,177 |
[
"langchain-ai",
"langchain"
] | The following link describes the VectorStoreRetrieverMemory class, which would be extremely useful for referencing an external vector DB & its text/vectors:
https://python.langchain.com/en/latest/modules/memory/types/vectorstore_retriever_memory.html#create-your-the-vectorstoreretrievermemory
However, I'm following the documentation in the following link, and used the import copied from the documentation:
`from langchain.memory import VectorStoreRetrieverMemory`
Here's the error that I'm receiving:
```
ImportError: cannot import name 'VectorStoreRetrieverMemory' from 'langchain.memory' (D:\Anaconda_3\lib\site-packages\langchain\memory\__init__.py)
```
I've updated langchain as a dependency, and the issue is persisting. Is there a workaround or an issue with langchain? This is also my first github issue, so please let me know if more information is needed. | VectorStoreRetrieverMemory Not Available In Langchain.memory | https://api.github.com/repos/langchain-ai/langchain/issues/3175/comments | 2 | 2023-04-19T21:01:08Z | 2023-05-03T14:28:51Z | https://github.com/langchain-ai/langchain/issues/3175 | 1,675,628,226 | 3,175 |
[
"langchain-ai",
"langchain"
] | When I get a rate limit or API key error, I get the following:
```
File "/Users/vanessacai/workspace/chef/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 216, in run
return self(kwargs)[self.output_keys[0]]
File "/Users/vanessacai/workspace/chef/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 116, in __call__
raise e
File "/Users/vanessacai/workspace/chef/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 113, in __call__
outputs = self._call(inputs)
File "/Users/vanessacai/workspace/chef/venv/lib/python3.10/site-packages/langchain/chains/llm.py", line 57, in _call
return self.apply([inputs])[0]
File "/Users/vanessacai/workspace/chef/venv/lib/python3.10/site-packages/langchain/chains/llm.py", line 118, in apply
response = self.generate(input_list)
File "/Users/vanessacai/workspace/chef/venv/lib/python3.10/site-packages/langchain/chains/llm.py", line 62, in generate
return self.llm.generate_prompt(prompts, stop)
File "/Users/vanessacai/workspace/chef/venv/lib/python3.10/site-packages/langchain/llms/base.py", line 107, in generate_prompt
return self.generate(prompt_strings, stop=stop)
File "/Users/vanessacai/workspace/chef/venv/lib/python3.10/site-packages/langchain/llms/base.py", line 140, in generate
raise e
File "/Users/vanessacai/workspace/chef/venv/lib/python3.10/site-packages/langchain/llms/base.py", line 137, in generate
output = self._generate(prompts, stop=stop)
File "/Users/vanessacai/workspace/chef/venv/lib/python3.10/site-packages/langchain/llms/base.py", line 324, in _generate
text = self._call(prompt, stop=stop)
File "/Users/vanessacai/workspace/chef/venv/lib/python3.10/site-packages/langchain/llms/anthropic.py", line 184, in _call
return response["completion"]
KeyError: 'completion'
```
This is confusing, and could be a more informative error message. | Anthropic error handling is unclear | https://api.github.com/repos/langchain-ai/langchain/issues/3170/comments | 2 | 2023-04-19T19:43:52Z | 2023-04-20T00:51:18Z | https://github.com/langchain-ai/langchain/issues/3170 | 1,675,517,935 | 3,170 |
[
"langchain-ai",
"langchain"
] | When installing requirements via `poetry install -E all`, I get errors for debugpy and SQLAlchemy:
debugpy:
```
• Installing debugpy (1.6.7): Failed
CalledProcessError
Command '['/Users/cyzanfar/Desktop/llm/langchain/.venv/bin/python', '-m', 'pip', 'install', '--disable-pip-version-check', '--isolated', '--no-input', '--prefix', '/Users/cyzanfar/Desktop/llm/langchain/.venv', '--no-deps', '/Users/cyzanfar/Library/Caches/pypoetry/artifacts/a2/72/f2/f92a409c1ebe3f157f1f797e08448b8b58e6ac55cf7e01d26828907568/debugpy-1.6.7-cp39-cp39-macosx_11_0_x86_64.whl']' returned non-zero exit status 1.
at /opt/anaconda3/lib/python3.9/subprocess.py:528 in run
524│ # We don't call process.wait() as .__exit__ does that for us.
525│ raise
526│ retcode = process.poll()
527│ if check and retcode:
→ 528│ raise CalledProcessError(retcode, process.args,
529│ output=stdout, stderr=stderr)
530│ return CompletedProcess(process.args, retcode, stdout, stderr)
531│
532│
The following error occurred when trying to handle this error:
EnvCommandError
Command ['/Users/cyzanfar/Desktop/llm/langchain/.venv/bin/python', '-m', 'pip', 'install', '--disable-pip-version-check', '--isolated', '--no-input', '--prefix', '/Users/cyzanfar/Desktop/llm/langchain/.venv', '--no-deps', '/Users/cyzanfar/Library/Caches/pypoetry/artifacts/a2/72/f2/f92a409c1ebe3f157f1f797e08448b8b58e6ac55cf7e01d26828907568/debugpy-1.6.7-cp39-cp39-macosx_11_0_x86_64.whl'] errored with the following return code 1
Output:
ERROR: debugpy-1.6.7-cp39-cp39-macosx_11_0_x86_64.whl is not a supported wheel on this platform.
at ~/Library/Application Support/pypoetry/venv/lib/python3.9/site-packages/poetry/utils/env.py:1545 in _run
1541│ return subprocess.call(cmd, stderr=stderr, env=env, **kwargs)
1542│ else:
1543│ output = subprocess.check_output(cmd, stderr=stderr, env=env, **kwargs)
1544│ except CalledProcessError as e:
→ 1545│ raise EnvCommandError(e, input=input_)
1546│
1547│ return decode(output)
1548│
1549│ def execute(self, bin: str, *args: str, **kwargs: Any) -> int:
The following error occurred when trying to handle this error:
PoetryException
Failed to install /Users/cyzanfar/Library/Caches/pypoetry/artifacts/a2/72/f2/f92a409c1ebe3f157f1f797e08448b8b58e6ac55cf7e01d26828907568/debugpy-1.6.7-cp39-cp39-macosx_11_0_x86_64.whl
at ~/Library/Application Support/pypoetry/venv/lib/python3.9/site-packages/poetry/utils/pip.py:58 in pip_install
54│
55│ try:
56│ return environment.run_pip(*args)
57│ except EnvCommandError as e:
→ 58│ raise PoetryException(f"Failed to install {path.as_posix()}") from e
59│
```
SQLAlchemy:
```
• Installing sqlalchemy (1.4.47): Failed
CalledProcessError
Command '['/Users/cyzanfar/Desktop/llm/langchain/.venv/bin/python', '-m', 'pip', 'install', '--disable-pip-version-check', '--isolated', '--no-input', '--prefix', '/Users/cyzanfar/Desktop/llm/langchain/.venv', '--no-deps', '/Users/cyzanfar/Library/Caches/pypoetry/artifacts/ee/bd/08/6d08c28abb942c2089808a2dbc720d1ee4d8a7260724e7fc5cbaeba134/SQLAlchemy-1.4.47-cp39-cp39-macosx_11_0_x86_64.whl']' returned non-zero exit status 1.
at /opt/anaconda3/lib/python3.9/subprocess.py:528 in run
524│ # We don't call process.wait() as .__exit__ does that for us.
525│ raise
526│ retcode = process.poll()
527│ if check and retcode:
→ 528│ raise CalledProcessError(retcode, process.args,
529│ output=stdout, stderr=stderr)
530│ return CompletedProcess(process.args, retcode, stdout, stderr)
531│
532│
The following error occurred when trying to handle this error:
EnvCommandError
Command ['/Users/cyzanfar/Desktop/llm/langchain/.venv/bin/python', '-m', 'pip', 'install', '--disable-pip-version-check', '--isolated', '--no-input', '--prefix', '/Users/cyzanfar/Desktop/llm/langchain/.venv', '--no-deps', '/Users/cyzanfar/Library/Caches/pypoetry/artifacts/ee/bd/08/6d08c28abb942c2089808a2dbc720d1ee4d8a7260724e7fc5cbaeba134/SQLAlchemy-1.4.47-cp39-cp39-macosx_11_0_x86_64.whl'] errored with the following return code 1
Output:
ERROR: SQLAlchemy-1.4.47-cp39-cp39-macosx_11_0_x86_64.whl is not a supported wheel on this platform.
at ~/Library/Application Support/pypoetry/venv/lib/python3.9/site-packages/poetry/utils/env.py:1545 in _run
1541│ return subprocess.call(cmd, stderr=stderr, env=env, **kwargs)
1542│ else:
1543│ output = subprocess.check_output(cmd, stderr=stderr, env=env, **kwargs)
1544│ except CalledProcessError as e:
→ 1545│ raise EnvCommandError(e, input=input_)
1546│
1547│ return decode(output)
1548│
1549│ def execute(self, bin: str, *args: str, **kwargs: Any) -> int:
The following error occurred when trying to handle this error:
PoetryException
Failed to install /Users/cyzanfar/Library/Caches/pypoetry/artifacts/ee/bd/08/6d08c28abb942c2089808a2dbc720d1ee4d8a7260724e7fc5cbaeba134/SQLAlchemy-1.4.47-cp39-cp39-macosx_11_0_x86_64.whl
at ~/Library/Application Support/pypoetry/venv/lib/python3.9/site-packages/poetry/utils/pip.py:58 in pip_install
54│
55│ try:
56│ return environment.run_pip(*args)
57│ except EnvCommandError as e:
→ 58│ raise PoetryException(f"Failed to install {path.as_posix()}") from e
59│
``` | poetry install -E all on M1 13.2.1 | https://api.github.com/repos/langchain-ai/langchain/issues/3169/comments | 2 | 2023-04-19T19:31:31Z | 2023-09-10T16:30:30Z | https://github.com/langchain-ai/langchain/issues/3169 | 1,675,500,196 | 3,169 |
[
"langchain-ai",
"langchain"
] | I'm exploring [this great notebook](https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html) and trying to produce something similar; however, I get an error when `fetch_memories` calls `self.memory_retriever.get_relevant_documents(observation)`:
```
AttributeError: 'FAISS' object has no attribute 'similarity_search_with_relevance_scores'
```
I'm running `langchain` `0.0.144` and I've installed `faiss` using `conda install -c conda-forge faiss-cpu` (as indicated [here](https://github.com/facebookresearch/faiss/blob/main/INSTALL.md)) which installed version `1.7.2`. Also running python `3.10`.
Poking inside `faiss.py` I indeed can't find a method called `similarity_search_with_relevance_scores`, only one called `_similarity_search_with_relevance_scores`.
Which version(s) of `langchain` and `faiss` should I be running for this to work?
Any help would be greatly appreciated, thanks 🙏 | Error using TimeWeightedVectorStoreRetriever.get_relevant_documents with FAISS: 'FAISS' object has no attribute 'similarity_search_with_relevance_scores' | https://api.github.com/repos/langchain-ai/langchain/issues/3167/comments | 6 | 2023-04-19T19:21:29Z | 2023-09-18T09:55:40Z | https://github.com/langchain-ai/langchain/issues/3167 | 1,675,484,274 | 3,167 |
[
"langchain-ai",
"langchain"
] | I am trying to extend the BabyAGI-with-agents example from this repo to read a file, optimize it, and write the file back.
I added these to the tools, as in the AutoGPT example, next to search and todo:
```
WriteFileTool(),
ReadFileTool(),
```
The prompt is to "Write the optimized code into a file".
ReadFileTool is working fine...
It looks like the `text` parameter is missing. Do I need to pass something to the tool, or what am I doing wrong?
I get the following error:
```
Traceback (most recent call last):
  File "baby_agi_with_agent.py", line 136, in <module>
    baby_agi({"objective": OBJECTIVE})
  File "/home/dev/anaconda3/envs/langchain/lib/python3.8/site-packages/langchain/chains/base.py", line 116, in __call__
    raise e
  File "/home/dev/anaconda3/envs/langchain/lib/python3.8/site-packages/langchain/chains/base.py", line 113, in __call__
    outputs = self._call(inputs)
  File "/home/dev/anaconda3/envs/langchain/lib/python3.8/site-packages/langchain/experimental/autonomous_agents/baby_agi/baby_agi.py", line 130, in _call
    result = self.execute_task(objective, task["task_name"])
  File "/home/dev/anaconda3/envs/langchain/lib/python3.8/site-packages/langchain/experimental/autonomous_agents/baby_agi/baby_agi.py", line 111, in execute_task
    return self.execution_chain.run(
  File "/home/dev/anaconda3/envs/langchain/lib/python3.8/site-packages/langchain/chains/base.py", line 216, in run
    return self(kwargs)[self.output_keys[0]]
  File "/home/dev/anaconda3/envs/langchain/lib/python3.8/site-packages/langchain/chains/base.py", line 116, in __call__
    raise e
  File "/home/dev/anaconda3/envs/langchain/lib/python3.8/site-packages/langchain/chains/base.py", line 113, in __call__
    outputs = self._call(inputs)
  File "/home/dev/anaconda3/envs/langchain/lib/python3.8/site-packages/langchain/agents/agent.py", line 792, in _call
    next_step_output = self._take_next_step(
  File "/home/dev/anaconda3/envs/langchain/lib/python3.8/site-packages/langchain/agents/agent.py", line 695, in _take_next_step
    observation = tool.run(
  File "/home/dev/anaconda3/envs/langchain/lib/python3.8/site-packages/langchain/tools/base.py", line 90, in run
    self._parse_input(tool_input)
  File "/home/dev/anaconda3/envs/langchain/lib/python3.8/site-packages/langchain/tools/base.py", line 58, in _parse_input
    input_args.validate({key_: tool_input})
  File "pydantic/main.py", line 711, in pydantic.main.BaseModel.validate
  File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for WriteFileInput
text
  field required (type=value_error.missing)
```
| File cannot be written: WriteFileTool() throws validation error for WriteFileInput text | https://api.github.com/repos/langchain-ai/langchain/issues/3165/comments | 6 | 2023-04-19T18:21:47Z | 2023-09-28T16:07:55Z | https://github.com/langchain-ai/langchain/issues/3165 | 1,675,398,921 | 3,165 |
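The traceback suggests the agent supplied a single value where the tool's input schema expects both a `file_path` and a `text` field. A toy reproduction of that kind of required-field check in plain Python (the function and field handling are invented for illustration; this is not pydantic or LangChain code):

```python
# Toy stand-in for a two-field tool input schema: validation fails when the
# caller passes a single bare string instead of both required fields.
REQUIRED_FIELDS = ("file_path", "text")

def validate_write_file_input(tool_input):
    if not isinstance(tool_input, dict):
        # A bare string only covers the first field, mirroring the agent's mistake.
        tool_input = {"file_path": tool_input}
    missing = [f for f in REQUIRED_FIELDS if f not in tool_input]
    if missing:
        raise ValueError(f"{missing[0]}: field required")
    return tool_input

good = validate_write_file_input({"file_path": "out.py", "text": "print('hi')"})

try:
    validate_write_file_input("out.py")  # what a single Action Input looks like
    missing_error = None
except ValueError as e:
    missing_error = str(e)
```

If that reading is right, the fix is to make the agent pass both fields (for example a dict with `file_path` and `text`) rather than a single string.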
[
"langchain-ai",
"langchain"
] | Just wanted to flag this. Not sure if it's a me problem or if there's an issue elsewhere. Searched the issues for the same problem and couldn't find it.
 | Docs Chat Rate Limited | https://api.github.com/repos/langchain-ai/langchain/issues/3162/comments | 1 | 2023-04-19T17:07:40Z | 2023-09-10T16:30:36Z | https://github.com/langchain-ai/langchain/issues/3162 | 1,675,301,097 | 3,162 |
[
"langchain-ai",
"langchain"
] | The console output when running a tool is missing the "Observation" and "Thought" prefixes.
I noticed this when using the SQL Toolkit, but other tools are likely affected.
Here is the current INCORRECT output format:
```
> Entering new AgentExecutor chain...
Action: list_tables_sql_db
Action Input: ""invoice_items, invoices, tracks, sqlite_sequence, employees, media_types, sqlite_stat1, customers, playlists, playlist_track, albums, genres, artistsThere is a table called "employees" that I can query.
Action: schema_sql_db
Action Input: "employees"
```
Here is the expected output format:
```
> Entering new AgentExecutor chain...
Action: list_tables_sql_db
Action Input: ""
Observation: invoice_items, invoices, tracks, sqlite_sequence, employees, media_types, sqlite_stat1, customers, playlists, playlist_track, albums, genres, artists
Thought:There is a table called "employees" that I can query.
Action: schema_sql_db
Action Input: "employees"
```
Note: this appears to only affect the console output. The `agent_scratchpad` is updated correctly with the "Observation" and "Thought" prefixes. | Missing Observation and Thought prefix in output | https://api.github.com/repos/langchain-ai/langchain/issues/3157/comments | 4 | 2023-04-19T15:15:26Z | 2023-04-20T16:46:28Z | https://github.com/langchain-ai/langchain/issues/3157 | 1,675,129,012 | 3,157 |
[
"langchain-ai",
"langchain"
] | I am considering implementing a new tool to give LLMs the ability to send SMS texts using the Twilio API. @hwchase17, is this worth implementing? If so, I'll submit a PR shortly. | New Tool: Twilio | https://api.github.com/repos/langchain-ai/langchain/issues/3156/comments | 7 | 2023-04-19T14:37:14Z | 2023-09-27T16:07:56Z | https://github.com/langchain-ai/langchain/issues/3156 | 1,675,043,935 | 3,156 |
[
"langchain-ai",
"langchain"
] | To allow writing more abstract code, the `model_name` variable in `langchain.chat_models` and the `model` variable in `langchain.llms` should provide the same way of retrieving the model version. Therefore, both variables should have the same name. | [feat] the variable model_name from langchain.chat_model should have the same name as the variable model from langchain.llms | https://api.github.com/repos/langchain-ai/langchain/issues/3154/comments | 1 | 2023-04-19T13:51:42Z | 2023-09-10T16:30:41Z | https://github.com/langchain-ai/langchain/issues/3154 | 1,674,945,820 | 3,154 |
[
"langchain-ai",
"langchain"
] | I tried to merge two FAISS indices:
```
index1 = FAISS.from_documents(doc1, embeddings)
index2 = FAISS.from_documents(doc2, embeddings)
```
After I do `index1.merge_from(index2)` and then:
```
ret = index2.as_retriever()
ret.get_relevant_documents(query)
```
it returns an empty list `[]`.
If index1 is saved with `save_local`, can I add index2 to it without loading index1 into memory, perhaps via a `merge_local` (open index.faiss and index.pkl and write line by line)?
| does merging index1.merge_from(index2) dump index2? | https://api.github.com/repos/langchain-ai/langchain/issues/3152/comments | 2 | 2023-04-19T11:17:25Z | 2023-07-21T15:11:27Z | https://github.com/langchain-ai/langchain/issues/3152 | 1,674,697,052 | 3,152 |
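My reading is that a copy-style merge should leave the source index usable ("`index2` keeps its own vectors"), in which case the empty result more likely comes from querying the wrong object or an empty retriever. A toy model of the two possible merge semantics, using an invented class (not the FAISS API):

```python
# Toy model of merge semantics: merging by copy leaves the source store
# usable, whereas merging by move would drain it. Class and methods are
# invented for illustration only.
class ToyStore:
    def __init__(self, docs):
        self.docs = list(docs)

    def merge_from(self, other):
        self.docs.extend(other.docs)  # copy references; `other` keeps its docs

    def get_relevant_documents(self, query):
        return [d for d in self.docs if query in d]

index1 = ToyStore(["alpha doc", "beta doc"])
index2 = ToyStore(["gamma doc"])
index1.merge_from(index2)

merged_hits = index1.get_relevant_documents("gamma")
source_hits = index2.get_relevant_documents("gamma")  # still non-empty here
```

If the real `merge_from` behaves like the copy version, querying `index2` after the merge should still return its own documents; an empty list would then point at a different cause.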
[
"langchain-ai",
"langchain"
] | I'm talking about langchain, langchain-backend and langchain-frontend here.
https://python.langchain.com/en/latest/tracing/local_installation.html
This could give people a starting point if they want to use langchain with tracing. I'll share mine here:
```
version: '3.7'
services:
your-app-name-containing-langchain:
container_name: your-app-name-containing-langchain
image: your/image
ports:
- 5000:5000
build:
context: .
dockerfile: Dockerfile
env_file: .env
volumes:
- ./:/app
langchain-frontend:
container_name: langchain-frontend
image: notlangchain/langchainplus-frontend:latest
ports:
- 4173:4173
environment:
- BACKEND_URL=http://langchain-backend:8000
- PUBLIC_BASE_URL=http://localhost:8000
- PUBLIC_DEV_MODE=true
depends_on:
- langchain-backend
langchain-backend:
container_name: langchain-backend
image: notlangchain/langchainplus:latest
environment:
- PORT=8000
- LANGCHAIN_ENV=local
ports:
- 8000:8000
depends_on:
- langchain-db
langchain-db:
container_name: langchain-db
image: postgres:14.1
environment:
- POSTGRES_PASSWORD=postgres
- POSTGRES_USER=postgres
- POSTGRES_DB=postgres
ports:
- 5432:5432
``` | Thought about a docker-compose file to run langchain's whole ecosystem? | https://api.github.com/repos/langchain-ai/langchain/issues/3150/comments | 6 | 2023-04-19T10:33:28Z | 2023-09-24T16:09:43Z | https://github.com/langchain-ai/langchain/issues/3150 | 1,674,632,635 | 3,150 |
[
"langchain-ai",
"langchain"
] | I would like to implement the following feature.
### Description:
Currently, the sitemap loader (at `document_loaders.sitemap`) only works if the sitemap URL is passed directly as `web_path`. I propose adding a feature to improve the sitemap loader's functionality by enabling it to automatically discover sitemap URLs given any website URL (not necessarily the root URL). Typically, sitemap URLs can be found in the robots.txt or the homepage HTML as an href attribute.
The new feature would involve:
1. Checking the robots.txt file for sitemap URLs.
2. If not found in robots.txt, searching the homepage HTML for href attributes containing sitemap URLs.
If you believe this feature would be useful and beneficial for the project, please let me know, and I can submit a PR.
Best,
Pi | FEAT: Extend SitemapLoader to automatically discover sitemap URLs. | https://api.github.com/repos/langchain-ai/langchain/issues/3149/comments | 1 | 2023-04-19T10:19:24Z | 2023-09-10T16:30:46Z | https://github.com/langchain-ai/langchain/issues/3149 | 1,674,612,447 | 3,149 |
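Step 1 of the proposal can be sketched with plain string parsing; fetching the file and the HTML fallback are left out, and the helper name is an assumption, not existing LangChain code:

```python
# Sketch of sitemap discovery from a robots.txt body (strings only, no I/O).
def sitemaps_from_robots(robots_txt):
    urls = []
    for line in robots_txt.splitlines():
        # Split on the first colon only, so URLs like "https://..." stay intact.
        key, _, value = line.partition(":")
        if key.strip().lower() == "sitemap" and value.strip():
            urls.append(value.strip())
    return urls

robots = """User-agent: *
Disallow: /private
Sitemap: https://example.com/sitemap.xml
"""
found = sitemaps_from_robots(robots)
```

Matching the `Sitemap:` directive case-insensitively follows the common robots.txt convention; the real feature would additionally fetch `robots.txt` and fall back to scanning the homepage HTML.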
[
"langchain-ai",
"langchain"
] | I have a gpt-3.5-turbo model deployed on Azure OpenAI; however, I keep getting this error.
**openai.error.InvalidRequestError: The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again.**
```
index_creator = VectorstoreIndexCreator(
    embedding=OpenAIEmbeddings(
        openai_api_key=openai_api_key,
        model="gpt-3.5-turbo",
        chunk_size=1,
    )
)

indexed_document = index_creator.from_loaders([file_loader])

chain = RetrievalQA.from_chain_type(
    llm=AzureOpenAI(
        openai_api_key=openai_api_key,
        deployment_name="gpt_35_turbo",
        model_name="gpt-3.5-turbo",
    ),
    chain_type="stuff",
    retriever=indexed_document.vectorstore.as_retriever(),
    input_key="user_prompt",
    return_source_documents=True,
)

open_ai_response = chain({"user_prompt": query_param})
```
 | Unable to use gpt-3.5-turbo deployed on Azure OpenAI with langchain embeddings. | https://api.github.com/repos/langchain-ai/langchain/issues/3148/comments | 10 | 2023-04-19T10:00:10Z | 2023-08-10T14:50:58Z | https://github.com/langchain-ai/langchain/issues/3148 | 1,674,581,862 | 3,148 |
[
"langchain-ai",
"langchain"
] | Recently I got a strange error when using FAISS `similarity_search_with_score_by_vector`. The line (https://github.com/hwchase17/langchain/blob/575b717d108984676e25afd0910ccccfdaf9693d/langchain/vectorstores/faiss.py#L170) generates errors:
```
TypeError: IndexFlat.search() missing 3 required positional arguments: 'k', 'distances', and 'labels'
```
It looks like FAISS's own `search` function has different arguments (using `IndexFlat`):
```
search(self, n, x, k, distances, labels)
```
But `similarity_search_with_score_by_vector` worked one day ago. So are there any hints for this? | FAISS similarity search issue | https://api.github.com/repos/langchain-ai/langchain/issues/3147/comments | 11 | 2023-04-19T09:55:13Z | 2023-06-11T23:46:27Z | https://github.com/langchain-ai/langchain/issues/3147 | 1,674,572,859 | 3,147 |
[
"langchain-ai",
"langchain"
] | Hi, I'm trying to implement the memory stream in <[Generative Agents: Interactive Simulacra of Human Behavior](https://arxiv.org/abs/2304.03442)>.
This memory mode is built on top of a concept called `observation` which is something like `Lily is watching a movie`, `desk is idle`.
I think the most closest concept to `observation` in Langchain are entity memory and KG triple. All these three are describing a simple fact about a entity.
But what confuses me is that, in theory, a KG also foucses on entities(vertices). It seems KG can be seen as a storage form for the `entity memory`. In langchain, they differ in the prompt template for info extraction, which affects how LLM extract the info. Once the info has been extracted, the format of the text used to present it to the LLM is also nearly identical.
So my questions are, what's the positioning for entity memory and KG memory, and if I try to implement the memory stream, is it appropriate to reuse one of these two to generate `observation`?
| Implementing the memory stream in <Generative Agents: Interactive Simulacra of Human Behavior> | https://api.github.com/repos/langchain-ai/langchain/issues/3145/comments | 1 | 2023-04-19T09:28:23Z | 2023-09-10T16:30:51Z | https://github.com/langchain-ai/langchain/issues/3145 | 1,674,529,524 | 3,145 |
[
"langchain-ai",
"langchain"
] | ```
from langchain import OpenAI
from langchain.chains import RetrievalQAWithSourcesChain
from langchain.prompts import PromptTemplate
from langchain.chains.qa_with_sources import load_qa_with_sources_chain
chain = RetrievalQAWithSourcesChain.from_chain_type(OpenAI(temperature=0), chain_type="stuff", retriever=rds.as_retriever())
```
While running above codes, it outputs the following error:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[14], line 5
4 from langchain.chains.qa_with_sources import load_qa_with_sources_chain
----> 5 chain = RetrievalQAWithSourcesChain.from_chain_type(OpenAI(temperature=0), chain_type="stuff", retriever=rds.as_retriever())
File ~/opt/anaconda3/lib/python3.10/site-packages/langchain/chains/qa_with_sources/base.py:76, in BaseQAWithSourcesChain.from_chain_type(cls, llm, chain_type, chain_type_kwargs, **kwargs)
74 """Load chain from chain type."""
75 _chain_kwargs = chain_type_kwargs or {}
---> 76 combine_document_chain = load_qa_with_sources_chain(
77 llm, chain_type=chain_type, **_chain_kwargs
78 )
79 return cls(combine_documents_chain=combine_document_chain, **kwargs)
File ~/opt/anaconda3/lib/python3.10/site-packages/langchain/chains/qa_with_sources/loading.py:171, in load_qa_with_sources_chain(llm, chain_type, verbose, **kwargs)
166 raise ValueError(
167 f"Got unsupported chain type: {chain_type}. "
168 f"Should be one of {loader_mapping.keys()}"
169 )
170 _func: LoadingCallable = loader_mapping[chain_type]
--> 171 return _func(llm, verbose=verbose, **kwargs)
File ~/opt/anaconda3/lib/python3.10/site-packages/langchain/chains/qa_with_sources/loading.py:56, in _load_stuff_chain(llm, prompt, document_prompt, document_variable_name, verbose, **kwargs)
48 def _load_stuff_chain(
49 llm: BaseLanguageModel,
50 prompt: BasePromptTemplate = stuff_prompt.PROMPT,
(...)
54 **kwargs: Any,
55 ) -> StuffDocumentsChain:
---> 56 llm_chain = LLMChain(llm=llm, prompt=prompt, verbose=verbose)
57 return StuffDocumentsChain(
58 llm_chain=llm_chain,
59 document_variable_name=document_variable_name,
(...)
62 **kwargs,
63 )
File ~/opt/anaconda3/lib/python3.10/site-packages/pydantic/main.py:339, in pydantic.main.BaseModel.__init__()
File ~/opt/anaconda3/lib/python3.10/site-packages/pydantic/main.py:1076, in pydantic.main.validate_model()
File ~/opt/anaconda3/lib/python3.10/site-packages/pydantic/fields.py:867, in pydantic.fields.ModelField.validate()
File ~/opt/anaconda3/lib/python3.10/site-packages/pydantic/fields.py:1151, in pydantic.fields.ModelField._apply_validators()
File ~/opt/anaconda3/lib/python3.10/site-packages/pydantic/class_validators.py:304, in pydantic.class_validators._generic_validator_cls.lambda4()
File ~/opt/anaconda3/lib/python3.10/site-packages/langchain/chains/base.py:57, in Chain.set_verbose(cls, verbose)
52 """If verbose is None, set it.
53
54 This allows users to pass in None as verbose to access the global setting.
55 """
56 if verbose is None:
---> 57 return _get_verbosity()
58 else:
59 return verbose
File ~/opt/anaconda3/lib/python3.10/site-packages/langchain/chains/base.py:17, in _get_verbosity()
16 def _get_verbosity() -> bool:
---> 17 return langchain.verbose
AttributeError: module 'langchain' has no attribute 'verbose'
```
However, when I run with langchain==0.0.142, this error doesn't occur.
| Error occured after updating langchain version to the latest 0.0.144 for the example code using RetrievalQAWithSourcesChain | https://api.github.com/repos/langchain-ai/langchain/issues/3144/comments | 2 | 2023-04-19T09:26:44Z | 2024-05-21T06:02:37Z | https://github.com/langchain-ai/langchain/issues/3144 | 1,674,526,880 | 3,144 |
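A workaround often suggested for this class of error is to set the missing module-level attribute yourself before constructing chains (`langchain.verbose = False`). Sketched here on a stand-in module so it runs without LangChain installed; the stand-in module and helper are invented for illustration:

```python
import types

# Stand-in for the real `langchain` module; in real code you would do
# `import langchain; langchain.verbose = False` before building chains.
fake_langchain = types.ModuleType("fake_langchain")

def _get_verbosity(mod):
    return mod.verbose  # raises AttributeError when the attribute is absent

try:
    _get_verbosity(fake_langchain)
    raised = False
except AttributeError:
    raised = True  # same failure mode as the reported traceback

fake_langchain.verbose = False  # the workaround
workaround_value = _get_verbosity(fake_langchain)
```

If the attribute genuinely disappeared between releases, pinning the version that worked (0.0.142 per the report) is the other obvious workaround.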
[
"langchain-ai",
"langchain"
] | null | How to change the max_token for different requests | https://api.github.com/repos/langchain-ai/langchain/issues/3138/comments | 1 | 2023-04-19T07:05:24Z | 2023-09-10T16:31:01Z | https://github.com/langchain-ai/langchain/issues/3138 | 1,674,296,816 | 3,138 |
[
"langchain-ai",
"langchain"
] | ### Discussed in https://github.com/hwchase17/langchain/discussions/3132
<div type='discussions-op-text'>
<sup>Originally posted by **srithedesigner** April 19, 2023</sup>
We used to use the AzureOpenAI LLM from langchain.llms with the text-davinci-003 model, but after deploying GPT-4 in Azure, this error is thrown when trying to use it:
`Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.chat_completion.ChatCompletion'>`
This is how we are initializing the model:
```python
model = AzureOpenAI(
streaming=streaming,
client=openai.ChatCompletion(),
callback_manager=callback,
deployment_name= "gpt4",
model_name="gpt-4-32k",
openai_api_key=env.cloud.openai_api_key,
temperature=temperature,
max_tokens=max_tokens,
verbose=verbose,
)
```
How do we use GPT-4 with the AzureOpenAI chain? Is it currently supported, or are we initializing it wrong?
</div> | Not able to use GPT4 model with AzureOpenAI from from langchain.llms | https://api.github.com/repos/langchain-ai/langchain/issues/3137/comments | 12 | 2023-04-19T06:53:50Z | 2023-10-16T19:48:27Z | https://github.com/langchain-ai/langchain/issues/3137 | 1,674,281,383 | 3,137 |
[
"langchain-ai",
"langchain"
] | <br>
I'm trying to integrate GPT-4 on Azure OpenAI with LangChain, but when I try to use it inside `ConversationalRetrievalChain` it throws this error:<br>
```
raise error.InvalidRequestError(
openai.error.InvalidRequestError: Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.chat_completion.ChatCompletion'>
```
<br>
but if I run a standalone openai instance with the Azure OpenAI config, it works. I'm confused about whether LangChain supports GPT-4, or am I missing something? | InvalidRequestError Must provide an engine ( gpt 4 azure open ai ) | https://api.github.com/repos/langchain-ai/langchain/issues/3134/comments | 3 | 2023-04-19T05:57:43Z | 2023-09-24T16:09:53Z | https://github.com/langchain-ai/langchain/issues/3134 | 1,674,221,239 | 3,134 |
[
"langchain-ai",
"langchain"
] | Python 3.11.1
langchain==0.0.143
llama-cpp-python==0.1.34
Model works when I use Dalai. Also happens with Llama 7B.
Code here (from [langchain documentation](https://python.langchain.com/en/latest/modules/models/llms/integrations/llamacpp.html)):
```
from langchain.llms import LlamaCpp
from langchain import PromptTemplate, LLMChain
import os
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
file_path = os.path.abspath("<my path>/dalai/alpaca/models/7B/ggml-model-q4_0.bin")
llm = LlamaCpp(model_path=file_path, verbose=True)
llm_chain = LLMChain(prompt=prompt, llm=llm, verbose=True)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)
```
Output:
```
llama.cpp: loading model from <my_path>
llama.cpp: can't use mmap because tensors are not aligned; convert to new format to avoid this
llama_model_load_internal: format = 'ggml' (old version with low tokenizer quality and no mmap support)
llama_model_load_internal: n_vocab = 32000
llama_model_load_internal: n_ctx = 512
llama_model_load_internal: n_embd = 4096
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 32
llama_model_load_internal: n_layer = 32
llama_model_load_internal: n_rot = 128
llama_model_load_internal: ftype = 2 (mostly Q4_0)
llama_model_load_internal: n_ff = 11008
llama_model_load_internal: n_parts = 1
llama_model_load_internal: model size = 7B
llama_model_load_internal: ggml ctx size = 4113739.11 KB
llama_model_load_internal: mem required = 5809.32 MB (+ 2052.00 MB per state)
...................................................................................................
.
llama_init_from_file: kv self size = 512.00 MB
AVX = 1 | AVX2 = 1 | AVX512 = 1 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | VSX = 0 |
> Entering new LLMChain chain...
Prompt after formatting:
Question: What NFL team won the Super Bowl in the year Justin Bieber was born?
Answer: Let's think step by step.
llama_print_timings: load time = 906.87 ms
llama_print_timings: sample time = 173.02 ms / 123 runs ( 1.41 ms per run)
llama_print_timings: prompt eval time = 3821.62 ms / 34 tokens ( 112.40 ms per token)
llama_print_timings: eval time = 24772.41 ms / 122 runs ( 203.05 ms per run)
llama_print_timings: total time = 28788.58 ms
> Finished chain.
```
What could be causing this? It seems like the model loads properly and does something. Thanks!
| Alpaca 7B loads with LlamaCpp, no response from model | https://api.github.com/repos/langchain-ai/langchain/issues/3129/comments | 1 | 2023-04-19T04:56:54Z | 2023-04-19T06:45:04Z | https://github.com/langchain-ai/langchain/issues/3129 | 1,674,171,032 | 3,129 |
[
"langchain-ai",
"langchain"
] | I'm trying to use the WebBrowserTool and I get a forbidden (403) response on some sites. I tried adding a user agent and also using proxies, but I still get 403. I tried sending the headers directly in the tool constructor and in the axios config.
```
const headers = {
'user-agent':
'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.131 Safari/537.36',
'upgrade-insecure-requests': '1',
'accept':
'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3',
'accept-encoding': 'gzip, deflate, br',
'accept-language': 'en-US,en;q=0.9,en;q=0.8',
};
new WebBrowser({
model,
headers,
embeddings,
axiosConfig: {
headers
},
})
``` | WebBrowserTool 403 error | https://api.github.com/repos/langchain-ai/langchain/issues/3118/comments | 0 | 2023-04-18T22:48:32Z | 2023-04-18T23:24:13Z | https://github.com/langchain-ai/langchain/issues/3118 | 1,673,926,522 | 3,118 |
[
"langchain-ai",
"langchain"
] | I'm not sure if it's a typo or not, but the default prompt in [langchain](https://github.com/hwchase17/langchain/tree/master/langchain)/[langchain](https://github.com/hwchase17/langchain/tree/master/langchain)/[chains](https://github.com/hwchase17/langchain/tree/master/langchain/chains)/[summarize](https://github.com/hwchase17/langchain/tree/master/langchain/chains/summarize)/[refine_prompts.py](https://github.com/hwchase17/langchain/tree/master/langchain/chains/summarize/refine_prompts.py) seems to be missing a space or a `\n`.
```
REFINE_PROMPT_TMPL = (
"Your job is to produce a final summary\n"
"We have provided an existing summary up to a certain point: {existing_answer}\n"
"We have the opportunity to refine the existing summary"
"(only if needed) with some more context below.\n"
"------------\n"
"{text}\n"
"------------\n"
"Given the new context, refine the original summary"
"If the context isn't useful, return the original summary."
)
```
It will produce `refine the original summaryIf the context isn't useful` and `existing summary(only if needed)`
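For illustration, a possible fix just adds the missing punctuation and trailing spaces (a sketch, not an official patch):

```python
# Sketch of a fixed template: note the added period/space at the end of the
# string literals that previously ran straight into the next literal.
REFINE_PROMPT_TMPL = (
    "Your job is to produce a final summary.\n"
    "We have provided an existing summary up to a certain point: {existing_answer}\n"
    "We have the opportunity to refine the existing summary "
    "(only if needed) with some more context below.\n"
    "------------\n"
    "{text}\n"
    "------------\n"
    "Given the new context, refine the original summary. "
    "If the context isn't useful, return the original summary."
)
```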
I could probably fix it with a PR (if it's unintentional), but I'd prefer to let someone more experienced do it, as I'm not used to creating PRs in large projects like this. | Missing new lines or empty spaces in refine default prompt. | https://api.github.com/repos/langchain-ai/langchain/issues/3117/comments | 4 | 2023-04-18T22:32:58Z | 2023-08-31T14:29:51Z | https://github.com/langchain-ai/langchain/issues/3117 | 1,673,914,308 | 3,117 |
[
"langchain-ai",
"langchain"
] | I ran the following code:
```py
from langchain.chat_models import ChatOpenAI
from langchain import PromptTemplate, LLMChain
from langchain.prompts.chat import (
ChatPromptTemplate,
SystemMessagePromptTemplate,
AIMessagePromptTemplate,
HumanMessagePromptTemplate,
)
from langchain.schema import (
AIMessage,
HumanMessage,
SystemMessage
)
from langchain.callbacks.base import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
gpt_4 = ChatOpenAI(model_name="gpt-4", streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), verbose=True, temperature=0)
template="You are ChatGPT, a large language model trained by OpenAI. Follow the user's instructions carefully. Respond using markdown."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template="{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
chain = LLMChain(llm=gpt_4, prompt=prompt)
from langchain.callbacks import get_openai_callback
with get_openai_callback() as cb:
text = "How are you?"
res = chain.run(text=text)
print(cb)
```
However, when I print the callback value, it reports that I used 0 credits, even though I know I used some.
```
I'm an AI language model, so I don't have feelings or emotions like humans do. However, I'm here to help you with any questions or information you need. What can I help you with today?Tokens Used: 0
Prompt Tokens: 0
Completion Tokens: 0
Successful Requests: 0
Total Cost (USD): $0.0
```
Am I doing something wrong, or is this an issue? | get_openai_callback doesn't return the credits for ChatGPT chain | https://api.github.com/repos/langchain-ai/langchain/issues/3114/comments | 22 | 2023-04-18T21:28:20Z | 2024-02-19T08:46:12Z | https://github.com/langchain-ai/langchain/issues/3114 | 1,673,855,423 | 3,114 |
[
"langchain-ai",
"langchain"
] | langchain was installed via `pip`
```
Traceback (most recent call last):
File "/Users/oleksandrdanshyn/github.com/dalazx/langchain/test.py", line 1, in <module>
from langchain.agents import load_tools, initialize_agent, AgentType
File "/Users/oleksandrdanshyn/github.com/dalazx/langchain/langchain/__init__.py", line 6, in <module>
from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
File "/Users/oleksandrdanshyn/github.com/dalazx/langchain/langchain/agents/__init__.py", line 2, in <module>
from langchain.agents.agent import (
File "/Users/oleksandrdanshyn/github.com/dalazx/langchain/langchain/agents/agent.py", line 17, in <module>
from langchain.chains.base import Chain
File "/Users/oleksandrdanshyn/github.com/dalazx/langchain/langchain/chains/__init__.py", line 2, in <module>
from langchain.chains.api.base import APIChain
File "/Users/oleksandrdanshyn/github.com/dalazx/langchain/langchain/chains/api/base.py", line 8, in <module>
from langchain.chains.api.prompt import API_RESPONSE_PROMPT, API_URL_PROMPT
File "/Users/oleksandrdanshyn/github.com/dalazx/langchain/langchain/chains/api/prompt.py", line 2, in <module>
from langchain.prompts.prompt import PromptTemplate
File "/Users/oleksandrdanshyn/github.com/dalazx/langchain/langchain/prompts/__init__.py", line 3, in <module>
from langchain.prompts.chat import (
File "/Users/oleksandrdanshyn/github.com/dalazx/langchain/langchain/prompts/chat.py", line 10, in <module>
from langchain.memory.buffer import get_buffer_string
File "/Users/oleksandrdanshyn/github.com/dalazx/langchain/langchain/memory/__init__.py", line 11, in <module>
from langchain.memory.entity import (
File "/Users/oleksandrdanshyn/github.com/dalazx/langchain/langchain/memory/entity.py", line 8, in <module>
from langchain.chains.llm import LLMChain
File "/Users/oleksandrdanshyn/github.com/dalazx/langchain/langchain/chains/llm.py", line 11, in <module>
from langchain.prompts.prompt import PromptTemplate
File "/Users/oleksandrdanshyn/github.com/dalazx/langchain/langchain/prompts/prompt.py", line 8, in <module>
from jinja2 import Environment, meta
ModuleNotFoundError: No module named 'jinja2'
```
since `prompt.py` is one of the fundamental modules and it unconditionally imports `jinja2`, `jinja2` should probably be added to the list of required dependencies. | jinja2 is not optional | https://api.github.com/repos/langchain-ai/langchain/issues/3113/comments | 4 | 2023-04-18T21:19:47Z | 2023-09-24T16:09:58Z | https://github.com/langchain-ai/langchain/issues/3113 | 1,673,847,319 | 3,113 |
[
"langchain-ai",
"langchain"
] | Gathering more details on this... but the solution needs to include `search` or `searchlight` as a starting point. (see https://github.com/hwchase17/langchain/issues/2113) | Support more robust redis module list | https://api.github.com/repos/langchain-ai/langchain/issues/3111/comments | 1 | 2023-04-18T20:35:46Z | 2023-05-03T13:54:45Z | https://github.com/langchain-ai/langchain/issues/3111 | 1,673,798,260 | 3,111 |
[
"langchain-ai",
"langchain"
] | The ChatOpenAI LLM retries a completion if a content-moderation exception is raised by OpenAI.
Code [here](https://github.com/hwchase17/langchain/blob/d54c88aa2140f27c36fa18375f942e5b239799ee/langchain/chat_models/openai.py#L45)
#### Request: Do not retry if the exception type is 'content moderation'
In our experience, Content Moderation errors have a near 100% reproducibility, which means that the prompt fails on every retry. This means that langchain racks up unnecessary billable calls for an unfixable exception.
#### Related [Request](https://github.com/hwchase17/langchain/issues/3109) - Allow custom retry_decorator to be passed by the user at LLM definition
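For illustration, a retry helper that fails fast on moderation errors might look like this sketch (`ContentModerationError` is a hypothetical stand-in for however the moderation rejection is detected):

```python
import time

class ContentModerationError(Exception):
    """Hypothetical marker for an OpenAI content-moderation rejection."""

def call_with_retry(fn, max_attempts: int = 6, base_delay: float = 1.0):
    """Retry transient failures with backoff, but fail fast on moderation errors."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except ContentModerationError:
            # Near-100% reproducible: retrying only racks up billable calls.
            raise
        except Exception:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)
```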
| Do not retry if content moderation exception is raised by OpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/3110/comments | 1 | 2023-04-18T19:48:04Z | 2023-09-10T16:31:06Z | https://github.com/langchain-ai/langchain/issues/3110 | 1,673,736,825 | 3,110 |
[
"langchain-ai",
"langchain"
] | The retry decorator for ChatOpenAI is hardcoded [here](https://github.com/hwchase17/langchain/blob/d54c88aa2140f27c36fa18375f942e5b239799ee/langchain/chat_models/openai.py#L39)
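For illustration, a user-supplied retry decorator could be an ordinary function wrapper like this sketch; the ability to pass it to the LLM is the requested, not-yet-existing feature:

```python
def my_retry_decorator(fn):
    """A plain retry wrapper a user might supply instead of the hardcoded one:
    3 attempts, re-raising the last error."""
    def wrapper(*args, **kwargs):
        last_exc = None
        for _ in range(3):
            try:
                return fn(*args, **kwargs)
            except Exception as exc:  # broad on purpose; illustration only
                last_exc = exc
        raise last_exc
    return wrapper

# Hypothetical requested API (does not exist today):
# llm = ChatOpenAI(retry_decorator=my_retry_decorator)
```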
Allow the user to supply a custom retry_decorator. | Allow custom retry_decorator to be passed to the LLM | https://api.github.com/repos/langchain-ai/langchain/issues/3109/comments | 3 | 2023-04-18T19:45:57Z | 2023-12-14T16:09:08Z | https://github.com/langchain-ai/langchain/issues/3109 | 1,673,734,340 | 3,109 |
[
"langchain-ai",
"langchain"
] | langchain Version: 0.0.143
SHA: aad0a498ac693acd304cf66e16a6430f5c0410a8
---
In [1]: import numexpr
In [2]: numexpr.__version__
Out[2]: '2.8.4'
-----
```python
llm_math.run("what is the common denominator of 2 and 5")
```
Stack trace:
> Entering new LLMMathChain chain...
what is the common denominator of 2 and 5
```text
LCM(2, 5)
```
...numexpr.evaluate("LCM(2, 5)")...
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
File ~/src/langchain/langchain/chains/llm_math/base.py:60, in LLMMathChain._evaluate_expression(self, expression)
58 local_dict = {"pi": math.pi, "e": math.e}
59 output = str(
---> 60 numexpr.evaluate(
61 expression.strip(),
62 global_dict={}, # restrict access to globals
63 local_dict=local_dict, # add common mathematical functions
64 )
65 )
66 except Exception as e:
File ~/.pyenv/versions/3.10.2/envs/langchain_3_10/lib/python3.10/site-packages/numexpr/necompiler.py:817, in evaluate(ex, local_dict, global_dict, out, order, casting, **kwargs)
816 if expr_key not in _names_cache:
--> 817 _names_cache[expr_key] = getExprNames(ex, context)
818 names, ex_uses_vml = _names_cache[expr_key]
File ~/.pyenv/versions/3.10.2/envs/langchain_3_10/lib/python3.10/site-packages/numexpr/necompiler.py:704, in getExprNames(text, context)
703 def getExprNames(text, context):
--> 704 ex = stringToExpression(text, {}, context)
705 ast = expressionToAST(ex)
File ~/.pyenv/versions/3.10.2/envs/langchain_3_10/lib/python3.10/site-packages/numexpr/necompiler.py:289, in stringToExpression(s, types, context)
288 # now build the expression
--> 289 ex = eval(c, names)
290 if expressions.isConstant(ex):
File <expr>:1
TypeError: 'VariableNode' object is not callable
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
Cell In[11], line 1
----> 1 llm_math.run("what is the common denominator of 2 and 5")
File ~/src/langchain/langchain/chains/base.py:213, in Chain.run(self, *args, **kwargs)
211 if len(args) != 1:
212 raise ValueError("`run` supports only one positional argument.")
--> 213 return self(args[0])[self.output_keys[0]]
215 if kwargs and not args:
216 return self(kwargs)[self.output_keys[0]]
File ~/src/langchain/langchain/chains/base.py:116, in Chain.__call__(self, inputs, return_only_outputs)
114 except (KeyboardInterrupt, Exception) as e:
115 self.callback_manager.on_chain_error(e, verbose=self.verbose)
--> 116 raise e
117 self.callback_manager.on_chain_end(outputs, verbose=self.verbose)
118 return self.prep_outputs(inputs, outputs, return_only_outputs)
File ~/src/langchain/langchain/chains/base.py:113, in Chain.__call__(self, inputs, return_only_outputs)
107 self.callback_manager.on_chain_start(
108 {"name": self.__class__.__name__},
109 inputs,
110 verbose=self.verbose,
111 )
112 try:
--> 113 outputs = self._call(inputs)
114 except (KeyboardInterrupt, Exception) as e:
115 self.callback_manager.on_chain_error(e, verbose=self.verbose)
File ~/src/langchain/langchain/chains/llm_math/base.py:131, in LLMMathChain._call(self, inputs)
127 self.callback_manager.on_text(inputs[self.input_key], verbose=self.verbose)
128 llm_output = llm_executor.predict(
129 question=inputs[self.input_key], stop=["```output"]
130 )
--> 131 return self._process_llm_result(llm_output)
File ~/src/langchain/langchain/chains/llm_math/base.py:78, in LLMMathChain._process_llm_result(self, llm_output)
76 if text_match:
77 expression = text_match.group(1)
---> 78 output = self._evaluate_expression(expression)
79 self.callback_manager.on_text("\nAnswer: ", verbose=self.verbose)
80 self.callback_manager.on_text(output, color="yellow", verbose=self.verbose)
File ~/src/langchain/langchain/chains/llm_math/base.py:67, in LLMMathChain._evaluate_expression(self, expression)
59 output = str(
60 numexpr.evaluate(
61 expression.strip(),
(...)
64 )
65 )
66 except Exception as e:
---> 67 raise ValueError(f"{e}. Please try again with a valid numerical expression")
69 # Remove any leading and trailing brackets from the output
70 return re.sub(r"^\[|\]$", "", output)
ValueError: 'VariableNode' object is not callable. Please try again with a valid numerical expression
 | Encountering exceptions when using LLMMathChain on master | https://api.github.com/repos/langchain-ai/langchain/issues/3108/comments | 7 | 2023-04-18T19:44:09Z | 2023-10-02T18:52:56Z | https://github.com/langchain-ai/langchain/issues/3108 | 1,673,732,281 | 3,108 |
[
"langchain-ai",
"langchain"
] | I have been trying to add memory to my `create_pandas_dataframe_agent` agent and ran into some issues.
I created the agent like this
```python
agent = create_pandas_dataframe_agent(
llm=llm,
df=df,
prefix=prefix,
suffix=suffix,
max_iterations=4,
input_variables=["df", "chat_history", "input", "agent_scratchpad"],
)
```
and ran into
```
Traceback (most recent call last):
File "/path/projects/test/langchain/main.py", line 42, in <module>
a = agent.run("This is a test")
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/langchain/chains/base.py", line 213, in run
return self(args[0])[self.output_keys[0]]
^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/langchain/chains/base.py", line 106, in __call__
inputs = self.prep_inputs(inputs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/langchain/chains/base.py", line 185, in prep_inputs
raise ValueError(
ValueError: A single string input was passed in, but this chain expects multiple inputs ({'input', 'chat_history'}). When a chain expects multiple inputs, please call it by passing in a dictionary, eg `chain({'foo': 1, 'bar': 2})
```
I was able to fix it by modifying `create_pandas_dataframe_agent` to accept the memory object and then passing it along to the `AgentExecutor` like so:
``` python
return AgentExecutor.from_agent_and_tools(
agent=agent,
tools=tools,
verbose=verbose,
return_intermediate_steps=return_intermediate_steps,
max_iterations=max_iterations,
max_execution_time=max_execution_time,
early_stopping_method=early_stopping_method,
memory=memory,
)
```
I'm not sure what I did wrong, or if I'm misunderstanding something in general; maybe this is just the current behavior, and adding memory support would be a feature request? | Getting ConversationBufferMemory to work with create_pandas_dataframe_agent | https://api.github.com/repos/langchain-ai/langchain/issues/3106/comments | 26 | 2023-04-18T19:20:33Z | 2024-06-30T16:02:47Z | https://github.com/langchain-ai/langchain/issues/3106 | 1,673,703,559 | 3,106 |
[
"langchain-ai",
"langchain"
] | Is there any ETA on this new LLM integration? | Is Google PaLM integration in the pipeline? | https://api.github.com/repos/langchain-ai/langchain/issues/3101/comments | 4 | 2023-04-18T17:30:34Z | 2023-09-26T16:08:00Z | https://github.com/langchain-ai/langchain/issues/3101 | 1,673,559,016 | 3,101 |
[
"langchain-ai",
"langchain"
] | Model page here: https://huggingface.co/Writer/camel-5b-hf
| Add bindings for Camel model API | https://api.github.com/repos/langchain-ai/langchain/issues/3099/comments | 1 | 2023-04-18T17:18:55Z | 2023-04-21T01:07:06Z | https://github.com/langchain-ai/langchain/issues/3099 | 1,673,543,750 | 3,099 |
[
"langchain-ai",
"langchain"
] | Glad to see in #2859 @hwchase17 added a `TimeWeightedVectorStoreRetriever`.
I'm creating a game, so I want `last_accessed_at` to be able to represent things like the number of rounds, turns, and so on.
I could build it in one or two days. Is there anyone who wants to review it?
---
I'm reproducing the Generative Agent article, repo: [ofey404/WalkingShadows](https://github.com/ofey404/WalkingShadows)
And I'd like to create a more generic `TimeWeightedVectorStoreRetriever`. Currently it's based on datetime like this:
```python
expected_score = (
1.0 - time_weighted_retriever.decay_rate
) ** expected_hours_passed + vector_salience
```
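A generalized version could take the elapsed amount in whatever unit the application's clock provides; a minimal sketch (function and parameter names are hypothetical):

```python
def time_weighted_score(decay_rate: float, elapsed: float, salience: float) -> float:
    # 'elapsed' is measured in whatever unit the injected clock uses:
    # wall-clock hours, game rounds, dialogue turns, ...
    return (1.0 - decay_rate) ** elapsed + salience

# Round-based example: a memory last touched 3 rounds ago.
score = time_weighted_score(decay_rate=0.01, elapsed=3, salience=0.5)
```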
In my case `last_accessed_at` can be the number of rounds. | [feat] I want to contribute a more generic `TimeWeightedVectorStoreRetriever` | https://api.github.com/repos/langchain-ai/langchain/issues/3098/comments | 1 | 2023-04-18T17:00:09Z | 2023-09-10T16:31:11Z | https://github.com/langchain-ai/langchain/issues/3098 | 1,673,518,661 | 3,098 |
[
"langchain-ai",
"langchain"
] | Trying to import langchain 0.0.128 and up in AWS Lambda (using the Serverless Framework) fails with this:
I suspect that this is the PR that causes the issue.
Maybe the `__version__` line should be wrapped in a try/except in case the code is run in an environment where package metadata is not available, as happens with serverless-python-requirements.
https://github.com/hwchase17/langchain/pull/2221
```
[ERROR] PackageNotFoundError: No package metadata was found for langchain
Traceback (most recent call last):
File "/var/task/serverless_sdk/__init__.py", line 144, in wrapped_handler
return user_handler(event, context)
File "/var/task/s_event_webhook.py", line 25, in error_handler
raise e
File "/var/task/s_event_webhook.py", line 20, in <module>
user_handler = serverless_sdk.get_user_handler('endpoints.event_webhook.handler')
File "/var/task/serverless_sdk/__init__.py", line 56, in get_user_handler
user_module = import_module(user_module_name)
File "/var/lang/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/var/task/endpoints/event_webhook.py", line 8, in <module>
from chatlib.agent import handle_event
File "/var/task/chatlib/agent.py", line 7, in <module>
from langchain import LLMChain
File "/var/task/langchain/__init__.py", line 58, in <module>
__version__ = metadata.version(__package__)
File "/var/lang/lib/python3.10/importlib/metadata/__init__.py", line 996, in version
return distribution(distribution_name).version
File "/var/lang/lib/python3.10/importlib/metadata/__init__.py", line 969, in distribution
return Distribution.from_name(distribution_name)
File "/var/lang/lib/python3.10/importlib/metadata/__init__.py", line 548, in from_name
raise PackageNotFoundError(name)
``` | Langchain not working on Lambda from 0.0.128 | https://api.github.com/repos/langchain-ai/langchain/issues/3097/comments | 1 | 2023-04-18T16:48:58Z | 2023-04-19T00:38:20Z | https://github.com/langchain-ai/langchain/issues/3097 | 1,673,503,749 | 3,097 |
[
"langchain-ai",
"langchain"
] | I was wondering how to use the `return_intermediate_steps` flag for agent executors. This is the current approach I found to be working:
OK, I've dug a little deeper, and it seems like setting `return_intermediate_steps=True` when creating the agent with `initialize_agent` works. Only when using memory do you need to set `memory.output_key = "output"`, otherwise it will error when trying to save the context.
I had to do a minor modification in https://github.com/hwchase17/langchain/blob/894c272a562471aadc1eb48e4a2992923533dea0/langchain/memory/chat_memory.py#L32-L36
Because when using agents the outputs can be lists, which would error when saving the context.
If I modify it like this:
```python
def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
"""Save context from this conversation to buffer."""
input_str, output_str = self._get_input_output(inputs, outputs)
if not isinstance(input_str, list):
input_str = [input_str]
if not isinstance(output_str, list):
output_str = [output_str]
for input in input_str:
self.chat_memory.add_user_message(input)
for output in output_str:
self.chat_memory.add_ai_message(output)
```
Then even saving the context with memory works for the agent (you can also load an initial context from a dict):
```python
import os
from langchain.callbacks import get_openai_callback
from langchain.agents import Tool
from langchain.agents import AgentType
from langchain.memory import ConversationBufferMemory
from langchain import OpenAI
from langchain.utilities import GoogleSearchAPIWrapper
from langchain.schema import messages_from_dict
from langchain.agents import initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.callbacks.base import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.memory import ConversationBufferMemory, ConversationSummaryBufferMemory, ConversationSummaryMemory
from langchain import OpenAI, ConversationChain
from langchain.prompts.chat import (
ChatPromptTemplate,
SystemMessagePromptTemplate,
AIMessagePromptTemplate,
HumanMessagePromptTemplate,
)
from langchain.prompts import (
ChatPromptTemplate,
MessagesPlaceholder,
SystemMessagePromptTemplate,
HumanMessagePromptTemplate
)
llm = ChatOpenAI(temperature=0, openai_api_key=OPENAI_KEY, model_name="gpt-3.5-turbo",
streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]))
memory = ConversationSummaryBufferMemory(
return_messages=True, llm=llm, max_token_limit=150, memory_key="chat_history")
# set the output key so that memory doesn't error on save
memory.output_key = "output"
# You can input a previously saved agent state like this:
state = [{'type': 'ai', 'data': {
'content': 'Nice to meet you, Tim!', 'additional_kwargs': {}}}]
search = GoogleSearchAPIWrapper(
google_api_key=GOOGLE_API_KEY, google_cse_id=my_cse_id)
tools = [
Tool(
name="Current Search",
func=search.run,
description="useful for when you need to answer questions about current events or the current state of the world"
),
]
memory.chat_memory.messages = messages_from_dict(state)
agent_chain = initialize_agent(
tools, llm, agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION, verbose=True, memory=memory, return_intermediate_steps=True)
MESSAGE = "What is currently the most popular web browser?"
with get_openai_callback() as cb:
out = agent_chain(
inputs=[MESSAGE])
# The output dict will also contain an 'intermediate_steps' key.
```
Example output:
```python
out = {
...,
"intermediate_steps: [(AgentAction(tool='Current Search', tool_input='current most popular web browser', log='Do I need to use a tool? Yes\nAction: Current Search\nAction Input: current most popular web browser'), "Zooming into the internet browser market shares on different platforms, Chrome continues to dominate as the most popular browser on desktops with a market share\xa0... Feb 21, 2023 ... The most popular current browsers are Google Chrome, Apple's Safari, Microsoft Edge, and Firefox. Historically one of the large players in the\xa0... The usage share of web browsers is the portion, often expressed as a percentage, of visitors to a group of web sites that use a particular web browser. This graph shows the market share of browsers worldwide from Mar 2022 - Mar ... 56% 70% Chrome Safari Edge Firefox Samsung Internet Opera UC Browser Android\xa0... Feb 11, 2021 ... Google Chrome, then, is by far the most used browser, accounting for more than half of all web traffic, followed by Safari in a distant second\xa0... Here we examine the top five browsers in the US, in order of popularity. ... basically a pinned tab of recent sites that syncs between the desktop and\xa0... Google Chrome is the most popular and widely-used desktop web browser by far. ... This browser's current market share is slightly less than it was at this\xa0... Mar 15, 2023 ... Firefox; Google Chrome; Microsoft Edge; Apple Safari; Opera; Brave; Vivaldi; DuckDuckgo; Chromium; Epic. Comparison of Best Browser\xa0... Web browser, cookie & cache settings. HealthCare.gov is compatible with most popular web browsing software. This includes the most recent and commonly used\xa0... Browser Statistics. ❮ Home Next ❯. W3Schools' famous ... The Most Popular Browsers ... W3Schools' statistics may not be relevant to your web site.")]
...
```
This is most definitely not the right way to do this but also I'm not sure if there is a correct way yet :D
What I think would be really cool is to have something like a callback_manager also for agent actions.
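For illustration, such a hook might be a simple callable invoked per intermediate step (an entirely hypothetical API, sketched here):

```python
from typing import Callable, List, Tuple

# Hypothetical per-step hook: called with (action, observation) after each
# intermediate agent step, so a UI could render progress live.
StepCallback = Callable[[Tuple[str, str]], None]

def run_with_step_callback(
    steps: List[Tuple[str, str]], on_step: StepCallback
) -> List[Tuple[str, str]]:
    results = []
    for action, observation in steps:
        on_step((action, observation))
        results.append((action, observation))
    return results
```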
That way you could develop applications using agents with immediate feedback while the agent is executed. | Usage of `return_intermediate_step` on Agents, and agent step callbacks | https://api.github.com/repos/langchain-ai/langchain/issues/3091/comments | 4 | 2023-04-18T14:09:22Z | 2023-09-20T11:26:27Z | https://github.com/langchain-ai/langchain/issues/3091 | 1,673,220,342 | 3,091 |
[
"langchain-ai",
"langchain"
] | Got a loop when asking "What's the best BBQ in Kansas City". Adding "say cannot answer, if you don't know the answer" to the prompt did not stop the loop.
The loop only stopped after exceeding the context length.
On the other hand, when using the older OpenAI models, it worked fine. Using SerpAPIWrapper. | Agent SELF_ASK_WITH_SEARCH does not work with ChatOpenAI models | https://api.github.com/repos/langchain-ai/langchain/issues/3090/comments | 5 | 2023-04-18T13:26:25Z | 2023-10-02T16:08:52Z | https://github.com/langchain-ai/langchain/issues/3090 | 1,673,135,511 | 3,090 |
[
"langchain-ai",
"langchain"
] | After reviewing the work done on https://github.com/hwchase17/langchain/pull/2859 and its accompanying examples, I propose creating Generative Characters as a set of langchain components. These components would include Memory, Chain, and Agent Classes.
- Memory: This includes the ability to retrieve documents from VectorStore using TimeWeightedVectorStoreRetriever, calculate their score, summarize them, add memory and fetch memory.
- Chain: This involves generating reactions and dialogue responses.
- Agent: I'm not entirely sure about this one. Since the chain can generate reactions, it may be able to use tools as well.
I would like to work on this. Any suggestions or help would be greatly appreciated.
@vowelparrot | [feat] Create Memory, Chain and Agent Classes for Generative Characters | https://api.github.com/repos/langchain-ai/langchain/issues/3087/comments | 3 | 2023-04-18T12:15:46Z | 2023-09-18T16:15:33Z | https://github.com/langchain-ai/langchain/issues/3087 | 1,672,996,991 | 3,087 |
[
"langchain-ai",
"langchain"
] | Sitemap data ingestion is a super powerful tool and I love that you already have it built-in. However, sitemaps are potentially huge, covering hundreds or even thousands of sub-sites.
If one starts to crawl through the sitemap of a large website, there is little information on how the progress is going.
Therefore, I suggest adding a `tqdm` progressbar in the async web base loader to give the user some estimate.
While we're at it, we could also add a retry logic because on long runs, there are higher risk of running against anti-scraping policy and forced timeouts or disconnections.
See below screenshot for my implementation. Code change in the linked [PR](https://github.com/hwchase17/langchain/pull/3131).
 | Add tqdm progress bar to base web base loader | https://api.github.com/repos/langchain-ai/langchain/issues/3083/comments | 1 | 2023-04-18T10:58:48Z | 2023-04-23T02:19:39Z | https://github.com/langchain-ai/langchain/issues/3083 | 1,672,867,984 | 3,083 |
[
"langchain-ai",
"langchain"
] | null | Need a Simple Example or method to get stream response of ConversationChain | https://api.github.com/repos/langchain-ai/langchain/issues/3080/comments | 2 | 2023-04-18T10:04:21Z | 2023-09-10T16:31:16Z | https://github.com/langchain-ai/langchain/issues/3080 | 1,672,777,542 | 3,080 |
[
"langchain-ai",
"langchain"
] | ## Motivation
Right now, HuggingFaceEmbeddings doesn't support loading an embedding model's weights from the cache but downloading the weights every time. Fixing this would be a low hanging fruit by allowing the user to pass their cache directory.
## Suggestion
The only change has only a few lines in __init__()
```python
class HuggingFaceEmbeddings(BaseModel, Embeddings):
"""Wrapper around sentence_transformers embedding models.
To use, you should have the ``sentence_transformers`` python package installed.
Example:
.. code-block:: python
from langchain.embeddings import HuggingFaceEmbeddings
model_name = "sentence-transformers/all-mpnet-base-v2"
hf = HuggingFaceEmbeddings(model_name=model_name)
"""
client: Any #: :meta private:
model_name: str = DEFAULT_MODEL_NAME
"""Model name to use."""
def __init__(self, cache_folder=None, **kwargs: Any):
"""Initialize the sentence_transformer."""
super().__init__(**kwargs)
try:
import sentence_transformers
self.client = sentence_transformers.SentenceTransformer(model_name_or_path=self.model_name, cache_folder=cache_folder)
except ImportError:
raise ValueError(
"Could not import sentence_transformers python package. "
"Please install it with `pip install sentence_transformers`."
)
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
def embed_documents(self, texts: List[str]) -> List[List[float]]:
"""Compute doc embeddings using a HuggingFace transformer model.
Args:
texts: The list of texts to embed.
Returns:
List of embeddings, one for each text.
"""
texts = list(map(lambda x: x.replace("\n", " "), texts))
embeddings = self.client.encode(texts)
return embeddings.tolist()
def embed_query(self, text: str) -> List[float]:
"""Compute query embeddings using a HuggingFace transformer model.
Args:
text: The text to embed.
Returns:
Embeddings for the text.
"""
text = text.replace("\n", " ")
embedding = self.client.encode(text)
return embedding.tolist()
``` | Feature Request: Allow initializing HuggingFaceEmbeddings from the cached weight | https://api.github.com/repos/langchain-ai/langchain/issues/3079/comments | 9 | 2023-04-18T09:43:38Z | 2024-02-13T16:17:08Z | https://github.com/langchain-ai/langchain/issues/3079 | 1,672,736,711 | 3,079 |
[
"langchain-ai",
"langchain"
] | I'm facing a weird issue with the `ConversationBufferWindowMemory`
Running `memory.load_memory_variables({})` prints:
```
{'chat_history': [HumanMessage(content='Hi my name is Ismail', additional_kwargs={}), AIMessage(content='Hello Ismail! How can I assist you today?', additional_kwargs={})]}
```
The error I get after sending a second message to the chain is:
```
> Entering new ConversationalRetrievalChain chain...
[2023-04-18 10:34:52,512] ERROR in app: Exception on /api/v1/chat [POST]
Traceback (most recent call last):
File "/Users/homanp/Projects/ADGPT_ENV/lib/python3.9/site-packages/flask/app.py", line 2528, in wsgi_app
response = self.full_dispatch_request()
File "/Users/homanp/Projects/ADGPT_ENV/lib/python3.9/site-packages/flask/app.py", line 1825, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/Users/homanp/Projects/ADGPT_ENV/lib/python3.9/site-packages/flask/app.py", line 1823, in full_dispatch_request
rv = self.dispatch_request()
File "/Users/homanp/Projects/ADGPT_ENV/lib/python3.9/site-packages/flask/app.py", line 1799, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "/Users/homanp/Projects/ad-gpt/app.py", line 46, in chat
result = chain({"question": message, "chat_history": []})
File "/Users/homanp/Projects/ADGPT_ENV/lib/python3.9/site-packages/langchain/chains/base.py", line 116, in __call__
raise e
File "/Users/homanp/Projects/ADGPT_ENV/lib/python3.9/site-packages/langchain/chains/base.py", line 113, in __call__
outputs = self._call(inputs)
File "/Users/homanp/Projects/ADGPT_ENV/lib/python3.9/site-packages/langchain/chains/conversational_retrieval/base.py", line 71, in _call
chat_history_str = get_chat_history(inputs["chat_history"])
File "/Users/homanp/Projects/ADGPT_ENV/lib/python3.9/site-packages/langchain/chains/conversational_retrieval/base.py", line 25, in _get_chat_history
human = "Human: " + human_s
TypeError: can only concatenate str (not "tuple") to str
```
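For reference, here is a paraphrased, stand-alone sketch (not the exact langchain source) of what `_get_chat_history` expects: a list of plain `(human, ai)` string pairs. Anything that isn't a plain string, such as message objects or nested tuples, reproduces the reported `TypeError`:

```python
def get_chat_history(chat_history):
    """Paraphrased sketch of _get_chat_history: expects (human, ai) string pairs."""
    buffer = ""
    for human_s, ai_s in chat_history:
        human = "Human: " + human_s  # fails here if human_s is not a str
        ai = "Assistant: " + ai_s
        buffer += "\n" + "\n".join([human, ai])
    return buffer


ok = get_chat_history([("Hi my name is Ismail", "Hello Ismail!")])
print("Human: Hi my name is Ismail" in ok)  # True

try:
    get_chat_history([(("Hi",), ("Hello",))])  # non-string entries
except TypeError as e:
    print(e)  # can only concatenate str (not "tuple") to str
```

So when a memory configured with `return_messages=True` feeds message objects (or oddly shaped tuples) into this function, the string concatenation blows up exactly as in the traceback.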
Current implementation:
```
memory = ConversationBufferWindowMemory(memory_key='chat_history', k=2, return_messages=True)
chain = ConversationalRetrievalChain.from_llm(
    model,
    memory=memory,
    verbose=True,
    retriever=retriever,
    qa_prompt=QA_PROMPT,
    condense_question_prompt=CONDENSE_QUESTION_PROMPT,
)
``` | Error `can only concatenate str (not "tuple") to str` when using `ConversationBufferWindowMemory` | https://api.github.com/repos/langchain-ai/langchain/issues/3077/comments | 11 | 2023-04-18T08:38:57Z | 2023-11-13T16:10:00Z | https://github.com/langchain-ai/langchain/issues/3077 | 1,672,633,625 | 3,077 |
[
"langchain-ai",
"langchain"
] | I notice they use different APIs, but what's the difference between these two APIs?
Question Answering:
docs = docsearch.get_relevant_documents(query)
Question Answering with Sources:
docs = docsearch.similarity_search(query) | Difference between "Question Answering with Sources" and "Question Answering" | https://api.github.com/repos/langchain-ai/langchain/issues/3073/comments | 8 | 2023-04-18T08:05:55Z | 2023-10-12T16:10:19Z | https://github.com/langchain-ai/langchain/issues/3073 | 1,672,578,778 | 3,073 |
[
"langchain-ai",
"langchain"
] | https://github.com/hwchase17/langchain/blob/894c272a562471aadc1eb48e4a2992923533dea0/langchain/memory/summary_buffer.py#L57-L70
The ```ConversationSummaryBufferMemory``` class in ```langchain/memory/summary_buffer.py``` currently prunes chat_memory's messages using the ```List.pop()``` method (line 66). This approach works as expected for the in-memory implementation ```ChatMessageHistory```, where messages are stored as a simple List.
However, this method of pruning messages is not suitable for other implementations where ```messages``` is a list computed on each access, such as ```DynamoDBChatMessageHistory``` or ```RedisChatMessageHistory```. In these cases, the current implementation fails to prune messages as intended.
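For illustration, a minimal stand-alone sketch (hypothetical class names, not the actual langchain implementations) of why `pop()` prunes a shared in-memory list but silently does nothing when `messages` is rebuilt on every access:

```python
class InMemoryHistory:
    def __init__(self):
        self.messages = []  # a real list: mutations persist


class DBBackedHistory:
    """Stand-in for a DynamoDB/Redis-backed history."""

    def __init__(self):
        self._store = []  # pretend this lives in the database

    @property
    def messages(self):
        return list(self._store)  # a fresh list is built on each access


in_mem = InMemoryHistory()
in_mem.messages.append("hello")
in_mem.messages.pop(0)  # pruning works: same list object

db = DBBackedHistory()
db._store.append("hello")
db.messages.pop(0)  # pops from a temporary copy only

print(len(in_mem.messages), len(db.messages))  # 0 1
```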
To address this issue, we may need to modify the ```BaseChatMessageHistory``` class to provide a unified interface for pruning messages, which can then be overridden as needed by specific implementations. | Issue with ConversationSummaryBufferMemory pruning messages for non-in-memory chat message histories | https://api.github.com/repos/langchain-ai/langchain/issues/3072/comments | 6 | 2023-04-18T07:48:11Z | 2024-05-20T08:06:21Z | https://github.com/langchain-ai/langchain/issues/3072 | 1,672,549,574 | 3,072 |
[
"langchain-ai",
"langchain"
] | I'm testing out the tutorial code for Agents:
```python
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("What was the high temperature in SF yesterday in Fahrenheit? What is that number raised to the .023 power?")
```
And so far it generates the result:
```
> Entering new AgentExecutor chain...
I need to find the temperature first, then use the calculator to raise it to the .023 power.
Action: Search
Action Input: "High temperature in SF yesterday"
Observation: High: 60.8ºf @3:10 PM Low: 48.2ºf @2:05 AM Approx.
Thought: I need to convert the temperature to a number
Action: Calculator
Action Input: 60.8
```
But it raises an error and doesn't calculate 60.8^.023:
```
raise ValueError(f"unknown format from LLM: {llm_output}")
ValueError: unknown format from LLM: This is not a math problem and cannot be solved using the numexpr library.
```
What's the reason behind this error? | llm-math raising an issue | https://api.github.com/repos/langchain-ai/langchain/issues/3071/comments | 16 | 2023-04-18T06:44:50Z | 2023-11-16T05:52:22Z | https://github.com/langchain-ai/langchain/issues/3071 | 1,672,458,216 | 3,071 |
[
"langchain-ai",
"langchain"
] | https://github.com/hwchase17/langchain/blob/894c272a562471aadc1eb48e4a2992923533dea0/langchain/document_loaders/git.py#L8 | Once we clone the repo using the Git document loader, how can we auth to private repos, and how can we chunk the code files into meaningful code and create embeddings? | https://api.github.com/repos/langchain-ai/langchain/issues/3069/comments | 1 | 2023-04-18T06:08:55Z | 2023-09-10T16:31:22Z | https://github.com/langchain-ai/langchain/issues/3069 | 1,672,416,914 | 3,069 |
[
"langchain-ai",
"langchain"
] | Axios is at v 1.3.5, why does langchain set the dependency to major version 0?
It is set to "axios": "^0.26.0",
Do we want: "axios": ">=0.26.0" ?
Does the whole world need to downgrade in order to use Langchain?
Or is it just me and my setup is screwed up somehow? I don't see anyone else making noise about it, so I'm a little concerned I have something wrong with what I'm working on.

| Axios dependency forcing a downgrade on nextJS build. | https://api.github.com/repos/langchain-ai/langchain/issues/3065/comments | 5 | 2023-04-18T05:38:52Z | 2023-09-26T16:08:10Z | https://github.com/langchain-ai/langchain/issues/3065 | 1,672,386,014 | 3,065 |
[
"langchain-ai",
"langchain"
] | Hey folks. I am experimenting with OpenAPI agents and the most recent [Spotify API](https://github.com/sonallux/spotify-web-api/releases). The API defines the endpoint `/me/top/{type}`. _Type_ can be, for example, `tracks`. A GET to `/me/top/tracks` will return the top tracks for the user making the request.
The planned actions coming out of the LLM, if you ask it to list your favorite tracks, will correctly include `GET /me/top/tracks`. In [planner.py](https://github.com/hwchase17/langchain/blob/577ec92f16813565d788da03f6ce830f4657c7b0/langchain/agents/agent_toolkits/openapi/planner.py#L225) there is a validation check that will verify if the suggested endpoint exists. But, it compares `GET /me/top/tracks` with `GET /me/top/{type}`, which will cause an error: `ValueError: GET /me/top/tracks endpoint does not exist`.
A change to `reduce_openapi_spec` or `planner.py` would fix it.
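One possible shape for a template-aware check (illustrative only, not the actual `planner.py` code) is to turn `{param}` segments into a regex before comparing:

```python
import re


def matches_endpoint(template: str, path: str) -> bool:
    """True if a concrete path matches an OpenAPI path template like /me/top/{type}."""
    pattern = re.sub(r"\{[^/}]+\}", r"[^/]+", template)
    return re.fullmatch(pattern, path) is not None


print(matches_endpoint("/me/top/{type}", "/me/top/tracks"))  # True
print(matches_endpoint("/me/top/{type}", "/me/playlists"))   # False
```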
| Validation check in planner.py not working as intended? | https://api.github.com/repos/langchain-ai/langchain/issues/3064/comments | 6 | 2023-04-18T05:12:32Z | 2023-09-27T16:08:06Z | https://github.com/langchain-ai/langchain/issues/3064 | 1,672,364,479 | 3,064 |
[
"langchain-ai",
"langchain"
] | ## Problem
The current `DirectoryLoader` class relies on the python `glob` and `rglob` utilities to load the filepaths. These utilities in python don't support advanced file patterns, for example specifying files with multiple extensions. For example, consider a sample directory with these files.
```bash
- a.py
- b.js
- c.json
- d.yml
```
Currently, there is no way to load only the files with `.py` or `.yml` extension.
## Proposed Solution
### Preferred
Include the [wcmatch](https://github.com/facelessuser/wcmatch) library as a dependency that replaces the built-in glob and rglob, and supports all Unix-style options for specifying file patterns. For example, with `wcmatch`, users can pass a pattern list like `['*.py', '*.yml']` to include files with `.py` or `.yml` extension.
### Alternate
Add an `include` or `exclude` list to the `DirectoryLoader` interface, so that users can specify the file patterns to include or exclude. | DirectoryLoader doesn't support including unix file patterns | https://api.github.com/repos/langchain-ai/langchain/issues/3062/comments | 3 | 2023-04-18T05:00:24Z | 2023-09-18T16:19:42Z | https://github.com/langchain-ai/langchain/issues/3062 | 1,672,352,495 | 3,062 |
[
"langchain-ai",
"langchain"
] | Sometimes the LLM response (generated code) tends to miss the ending ticks "```", causing the text parsing to fail with `not enough values to unpack`.
I suggest replacing the `_, action, _` unpacking with a plain `text.split(...)` result and indexing into it.
Error message below
```
> Entering new AgentExecutor chain...
Traceback (most recent call last):
File "E:\open_source_contrib\langchain-venv\lib\site-packages\langchain\agents\chat\output_parser.py", line 17, in parse
_, action, _ = text.split("```")
ValueError: not enough values to unpack (expected 3, got 2)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "E:\open_source_contrib\test.py", line 67, in <module>
agent_msg = agent.run(prompt_template)
File "E:\open_source_contrib\langchain-venv\lib\site-packages\langchain\chains\base.py", line 213, in run
return self(args[0])[self.output_keys[0]]
File "E:\open_source_contrib\langchain-venv\lib\site-packages\langchain\chains\base.py", line 116, in __call__
raise e
File "E:\open_source_contrib\langchain-venv\lib\site-packages\langchain\chains\base.py", line 113, in __call__
outputs = self._call(inputs)
File "E:\open_source_contrib\langchain-venv\lib\site-packages\langchain\agents\agent.py", line 792, in _call
next_step_output = self._take_next_step(
File "E:\open_source_contrib\langchain-venv\lib\site-packages\langchain\agents\agent.py", line 672, in _take_next_step
output = self.agent.plan(intermediate_steps, **inputs)
File "E:\open_source_contrib\langchain-venv\lib\site-packages\langchain\agents\agent.py", line 385, in plan
return self.output_parser.parse(full_output)
File "E:\open_source_contrib\langchain-venv\lib\site-packages\langchain\agents\chat\output_parser.py", line 23, in parse
raise ValueError(f"Could not parse LLM output: {text}")
ValueError: Could not parse LLM output: Question: How do I put the given data into a pandas dataframe and save it into a csv file at the specified path?
Thought: I need to use the Python REPL tool to import pandas, create a dataframe with the given data, and then use the to_csv method to save it to the specified file path.
Action:
```
{
"action": "Python REPL",
"action_input": "import pandas as pd\n\n# create dataframe\ndata = {\n 'Quarter': ['Q4-2021', 'Q1-2022', 'Q2-2022', 'Q3-2022', 'Q4-2022'],\n 'EPS attributable to common stockholders, diluted (GAAP)': [1.07, 0.95, 0.76, 0.95, 1.07],\n 'EPS attributable to common stockholders, diluted (non-GAAP)': [1.19, 1.05, 0.85, 1.05, 1.19]\n}\ndf = pd.DataFrame(data)\n\n# save to csv\ndf.to_csv('E:\\\\open_source_contrib\\\\output\\\\agent_output.xlsx', index=False)"
}
(langchain-venv) PS E:\open_source_contrib>
``` | Error when parsing code from LLM response ValueError: Could not parse LLM output: | https://api.github.com/repos/langchain-ai/langchain/issues/3057/comments | 1 | 2023-04-18T04:13:20Z | 2023-04-24T04:19:22Z | https://github.com/langchain-ai/langchain/issues/3057 | 1,672,318,279 | 3,057 |
[
"langchain-ai",
"langchain"
] | Hi fellas.
Langchain is awesome. I have an agent that I created for an app and it will be interacted with via an api. The agent of course needs to run asynchronously. I can run it without any issues synchronously but with agent.arun(inputs) I cannot connect with openai. It throws the "Error communicating with OpenAI" error.
Before, I followed this notebook with my own amendments for the task at hand: https://python.langchain.com/en/latest/modules/agents/agents/custom_llm_chat_agent.html
and just changed the final part from agent.run to agent.arun according to the blog post.
But clearly this wasn't enough, and I went on YouTube and came across this video: https://youtu.be/eAikW9o1Ros
I followed what he was doing for a simple agent and customized mine by creating this function:
```python
async def async_agent_executor(inputs):
    manager = CallbackManager([StdOutCallbackHandler()])
    llm = ChatOpenAI(temperature=0, callback_manager=manager)
    llm_chain = LLMChain(llm=llm, prompt=custom_prompt, callback_manager=manager)
    async_tools = load_tools(["serpapi"], llm=llm, callback_manager=manager)
    agent = LLMSingleActionAgent(
        llm_chain=llm_chain,
        output_parser=output_parser,
        stop=["\nObservation:"],
        allowed_tools=tool_names,
        callback_manager=manager,
    )
    agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=async_tools, verbose=True, callback_manager=manager)
    return await agent_executor.arun(inputs)
```
I don't have much experience working with async either. Can you please help me understand why it can connect with OpenAI in serial execution but not in async? Thank you. | Error communicating with OpenAI when running agent in async | https://api.github.com/repos/langchain-ai/langchain/issues/3056/comments | 7 | 2023-04-18T04:11:58Z | 2023-11-20T16:07:27Z | https://github.com/langchain-ai/langchain/issues/3056 | 1,672,317,378 | 3,056 |
[
"langchain-ai",
"langchain"
] | Conservation instead of conversation. PR pending. Putting in this issue to link. | Spelling Error in ConstitutionalAI Chain Prompt | https://api.github.com/repos/langchain-ai/langchain/issues/3048/comments | 0 | 2023-04-18T03:28:10Z | 2023-04-19T02:45:06Z | https://github.com/langchain-ai/langchain/issues/3048 | 1,672,287,885 | 3,048 |
[
"langchain-ai",
"langchain"
] |
```python
retriever = PineconeHybridSearchRetriever(embeddings=embeddings, index=index, tokenizer=CharacterTextSplitter)
result = retriever.get_relevant_documents(given_str)
```
gives `TypeError: __init__() got an unexpected keyword argument 'padding'`.

With `bm25_encoder`:
```python
retriever = PineconeHybridSearchRetriever(embeddings=embeddings, index=index, tokenizer=CharacterTextSplitter, sparse_encoder=bm25_encoder)
```
it gives:
```
ValidationError: 1 validation error for PineconeHybridSearchRetriever
sparse_encoder
  extra fields not permitted (type=value_error.extra)
```
| pinecone_hybrid_search doesn't work by following the documents. | https://api.github.com/repos/langchain-ai/langchain/issues/3043/comments | 2 | 2023-04-18T01:27:31Z | 2023-09-18T16:19:47Z | https://github.com/langchain-ai/langchain/issues/3043 | 1,672,192,416 | 3,043 |
[
"langchain-ai",
"langchain"
] | The error is random, it only occurs sometimes.
`loader = YoutubeLoader.from_youtube_url(vidurl, add_video_info=True, language=lang)` | YoutubeLoader : Error: Exception while accessing title of https://youtube.com/watch?v=XXX. Please file a bug report at https://github.com/pytube/pytube | https://api.github.com/repos/langchain-ai/langchain/issues/3040/comments | 8 | 2023-04-17T23:22:49Z | 2023-09-27T16:08:12Z | https://github.com/langchain-ai/langchain/issues/3040 | 1,672,104,671 | 3,040 |
[
"langchain-ai",
"langchain"
] | When using an agent to call a tool, some situations may cause an escape: the model returns the action and the final answer at the same time, so the tool never runs. It is recommended to add appropriate prompt wording after the “Final Answer: The final answer to the original input question” line of the prompt template to avoid this situation. | Some situations cause the tool to not work | https://api.github.com/repos/langchain-ai/langchain/issues/3037/comments | 2 | 2023-04-17T21:59:53Z | 2023-09-10T16:31:37Z | https://github.com/langchain-ai/langchain/issues/3037 | 1,672,028,543 | 3,037 |
[
"langchain-ai",
"langchain"
] | I'd like to be able to run a query via SQLDatabaseSequentialChain or SQLDatabaseChain involving multiple tables living in multiple different schemas, but it seems that as it is, the code is set up to only allow and look through just the one schema provided. | Unable to use multiple schemas in SQLDatabase | https://api.github.com/repos/langchain-ai/langchain/issues/3036/comments | 18 | 2023-04-17T21:40:01Z | 2024-07-11T11:30:14Z | https://github.com/langchain-ai/langchain/issues/3036 | 1,672,009,290 | 3,036 |
[
"langchain-ai",
"langchain"
] | I'm running the openai "todo" manifest and swagger.
After 2023-04-16, I get the following error parsing the response:
```ValueError: Could not parse LLM output: `I need to check the TODO Plugin API to see if it can help me answer this question.```
Input question was
```agent_chain.run("Do I have a todo to check my mailbox?")```
The LLM response was:
```
I need to check the TODO Plugin API to see if it can help me answer this question.
Action: todo
```
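For context, a rough sketch of the parser's expectation (paraphrasing the MRKL-style regex of that era, not the exact source): both an `Action:` and an `Action Input:` line must be present, so a response truncated after `Action:` cannot match:

```python
import re

PATTERN = r"Action: (.*?)[\n]*Action Input:[\s]*(.*)"

full = "Thought...\nAction: todo\nAction Input: check my mailbox"
partial = "I need to check the TODO Plugin API...\nAction: todo"

print(re.search(PATTERN, full) is not None)     # True
print(re.search(PATTERN, partial) is not None)  # False
```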
The response doesn't have any Action Input section! Fix is incoming. | TODO action example from openai fails | https://api.github.com/repos/langchain-ai/langchain/issues/3035/comments | 1 | 2023-04-17T20:49:58Z | 2023-09-10T16:31:42Z | https://github.com/langchain-ai/langchain/issues/3035 | 1,671,945,585 | 3,035 |
[
"langchain-ai",
"langchain"
] | Greetings,
The following code:
```python
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.sql_database import SQLDatabase
from langchain.chat_models import AzureChatOpenAI
db = SQLDatabase.from_uri(connection_string2)
toolkit = SQLDatabaseToolkit(db=db)
agent_executor = create_sql_agent(
    llm=AzureChatOpenAI(deployment_name="gpt-4-32k", model_name="gpt-4-32k", temperature=0.0),
    toolkit=toolkit,
    verbose=True,
)
agent_executor.run("Tell me about this database")
```
I get the error in `query_checker_sql_db`
```
Thought:The TITLE column seems to be related to the topics in the CONTENT table. I should query this column to get the topics.
Action: query_checker_sql_db
Action Input: SELECT TOP 10 TITLE FROM CONTENTTraceback (most recent call last):
File "/code/confluence_test.py", line 57, in <module>
File "/usr/local/lib/python3.9/site-packages/langchain/chains/base.py", line 213, in run
return self(args[0])[self.output_keys[0]]
File "/usr/local/lib/python3.9/site-packages/langchain/chains/base.py", line 116, in __call__
raise e
File "/usr/local/lib/python3.9/site-packages/langchain/chains/base.py", line 113, in __call__
outputs = self._call(inputs)
File "/usr/local/lib/python3.9/site-packages/langchain/agents/agent.py", line 792, in _call
next_step_output = self._take_next_step(
File "/usr/local/lib/python3.9/site-packages/langchain/agents/agent.py", line 695, in _take_next_step
observation = tool.run(
File "/usr/local/lib/python3.9/site-packages/langchain/tools/base.py", line 73, in run
raise e
File "/usr/local/lib/python3.9/site-packages/langchain/tools/base.py", line 70, in run
observation = self._run(tool_input)
File "/code/sql_database/tool.py", line 125, in _run
return self.llm_chain.predict(query=query, dialect=self.db.dialect)
File "/usr/local/lib/python3.9/site-packages/langchain/chains/llm.py", line 151, in predict
return self(kwargs)[self.output_key]
File "/usr/local/lib/python3.9/site-packages/langchain/chains/base.py", line 116, in __call__
raise e
File "/usr/local/lib/python3.9/site-packages/langchain/chains/base.py", line 113, in __call__
outputs = self._call(inputs)
File "/usr/local/lib/python3.9/site-packages/langchain/chains/llm.py", line 57, in _call
return self.apply([inputs])[0]
File "/usr/local/lib/python3.9/site-packages/langchain/chains/llm.py", line 118, in apply
response = self.generate(input_list)
File "/usr/local/lib/python3.9/site-packages/langchain/chains/llm.py", line 62, in generate
return self.llm.generate_prompt(prompts, stop)
File "/usr/local/lib/python3.9/site-packages/langchain/llms/base.py", line 107, in generate_prompt
return self.generate(prompt_strings, stop=stop)
File "/usr/local/lib/python3.9/site-packages/langchain/llms/base.py", line 140, in generate
raise e
File "/usr/local/lib/python3.9/site-packages/langchain/llms/base.py", line 137, in generate
output = self._generate(prompts, stop=stop)
File "/usr/local/lib/python3.9/site-packages/langchain/llms/openai.py", line 290, in _generate
response = completion_with_retry(self, prompt=_prompts, **params)
File "/usr/local/lib/python3.9/site-packages/langchain/llms/openai.py", line 99, in completion_with_retry
return _completion_with_retry(**kwargs)
File "/usr/local/lib/python3.9/site-packages/tenacity/__init__.py", line 326, in wrapped_f
return self(f, *args, **kw)
File "/usr/local/lib/python3.9/site-packages/tenacity/__init__.py", line 406, in __call__
do = self.iter(retry_state=retry_state)
File "/usr/local/lib/python3.9/site-packages/tenacity/__init__.py", line 351, in iter
return fut.result()
File "/usr/local/lib/python3.9/concurrent/futures/_base.py", line 439, in result
return self.__get_result()
File "/usr/local/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result
raise self._exception
File "/usr/local/lib/python3.9/site-packages/tenacity/__init__.py", line 409, in __call__
result = fn(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/langchain/llms/openai.py", line 97, in _completion_with_retry
return llm.client.create(**kwargs)
File "/usr/local/lib/python3.9/site-packages/openai-0.27.2-py3.9.egg/openai/api_resources/completion.py", line 25, in create
File "/usr/local/lib/python3.9/site-packages/openai-0.27.2-py3.9.egg/openai/api_resources/abstract/engine_api_resource.py", line 149, in create
File "/usr/local/lib/python3.9/site-packages/openai-0.27.2-py3.9.egg/openai/api_resources/abstract/engine_api_resource.py", line 83, in __prepare_create_request
openai.error.InvalidRequestError: Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.completion.Completion'>
```
because it seems that the llm is still defaulting to `llm=OpenAI(cache=None, verbose=False...)`
as seen in these printed values
from SQLDatabaseToolkit
```
--------------------------
--------------------------
memory=None callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x7fff4f982250> verbose=False prompt=PromptTemplate(input_variables=['query', 'dialect'], output_parser=None, partial_variables={}, template='\n{query}\nDouble check the {dialect} query above for common mistakes, including:\n- Using NOT IN with NULL values\n- Using UNION when UNION ALL should have been used\n- Using BETWEEN for exclusive ranges\n- Data type mismatch in predicates\n- Properly quoting identifiers\n- Using the correct number of arguments for functions\n- Casting to the correct data type\n- Using the proper columns for joins\n\nIf there are any of the above mistakes, rewrite the query. If there are no mistakes, just reproduce the original query.', template_format='f-string', validate_template=True) llm=OpenAI(cache=None, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x7fff4f982250>, client=<class 'openai.api_resources.completion.Completion'>, model_name='text-davinci-003', temperature=0.0, max_tokens=256, top_p=1, frequency_penalty=0, presence_penalty=0, n=1, best_of=1, model_kwargs={}, openai_api_key=None, openai_api_base=None, openai_organization=None, batch_size=20, request_timeout=None, logit_bias={}, max_retries=6, streaming=False) output_key='text'
--------------------------
--------------------------
```
from create_sql_agent
```
--------------------------
--------------------------
memory=None callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x7fff4f97f7f0> verbose=False prompt=PromptTemplate(input_variables=['query', 'dialect'], output_parser=None, partial_variables={}, template='\n{query}\nDouble check the {dialect} query above for common mistakes, including:\n- Using NOT IN with NULL values\n- Using UNION when UNION ALL should have been used\n- Using BETWEEN for exclusive ranges\n- Data type mismatch in predicates\n- Properly quoting identifiers\n- Using the correct number of arguments for functions\n- Casting to the correct data type\n- Using the proper columns for joins\n\nIf there are any of the above mistakes, rewrite the query. If there are no mistakes, just reproduce the original query.', template_format='f-string', validate_template=True) llm=OpenAI(cache=None, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x7fff4f97f7f0>, client=<class 'openai.api_resources.completion.Completion'>, model_name='text-davinci-003', temperature=0.0, max_tokens=256, top_p=1, frequency_penalty=0, presence_penalty=0, n=1, best_of=1, model_kwargs={}, openai_api_key=None, openai_api_base=None, openai_organization=None, batch_size=20, request_timeout=None, logit_bias={}, max_retries=6, streaming=False) output_key='text'
--------------------------
--------------------------
```
| SQLToolKit not passing correct llm to llm_chain with AzureChatOpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/3031/comments | 2 | 2023-04-17T20:17:06Z | 2023-09-10T16:31:47Z | https://github.com/langchain-ai/langchain/issues/3031 | 1,671,905,614 | 3,031 |
[
"langchain-ai",
"langchain"
] | I have generated the Chroma DB from a single file ( basically lots of questions and answers in one text file ), sometimes when I do
```
db.similarity_search("some question", k=4)
```
And the question is too broad, it will rerun a **LOT** of results, since I'm using the result in next LLM query (prompt template) I often can hit the "maximum context length is 4097 tokens" how to deal with this ? | Limit the db.similarity_search("some question", k=4) output. | https://api.github.com/repos/langchain-ai/langchain/issues/3029/comments | 4 | 2023-04-17T20:09:20Z | 2023-04-18T11:08:20Z | https://github.com/langchain-ai/langchain/issues/3029 | 1,671,895,690 | 3,029 |
[
"langchain-ai",
"langchain"
] | Hello everyone I got this error, I already poppler_path in PATH in system. Is there anyone got the same like me?
index = VectorstoreIndexCreator().from_loaders(loaders)
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
[/usr/local/lib/python3.9/dist-packages/pdf2image/pdf2image.py](https://localhost:8080/#) in pdfinfo_from_path(pdf_path, userpw, ownerpw, poppler_path, rawdates, timeout)
567 env["LD_LIBRARY_PATH"] = poppler_path + ":" + env.get("LD_LIBRARY_PATH", "")
--> 568 proc = Popen(command, env=env, stdout=PIPE, stderr=PIPE) | Error index = VectorstoreIndexCreator().from_loaders(loaders) | https://api.github.com/repos/langchain-ai/langchain/issues/3025/comments | 1 | 2023-04-17T16:57:00Z | 2023-09-10T16:31:52Z | https://github.com/langchain-ai/langchain/issues/3025 | 1,671,601,909 | 3,025 |
[
"langchain-ai",
"langchain"
] | Hi,
when I am trying to index the documents using Chroma DB, I am getting the following error. When I looked into it, I understood it is a compatibility issue. But I couldn't find exactly which packages hnswlib is compatible with.
ImportError: /anaconda3/envs/myenv/lib/python3.9/site-packages/hnswlib.cpython-39-x86_64-linux-gnu.so: undefined symbol: _ZNSt15__exception_ptr13exception_ptr10_M_releaseEv
| Import error and undefined symbol | https://api.github.com/repos/langchain-ai/langchain/issues/3017/comments | 6 | 2023-04-17T13:51:59Z | 2023-11-16T16:08:31Z | https://github.com/langchain-ai/langchain/issues/3017 | 1,671,234,135 | 3,017 |
[
"langchain-ai",
"langchain"
] | Sometimes it is quite expensive to crawl all the URLs. Is it possible to save the Documents and reload them later?
For example:
```
loader = GitbookLoader("https://docs.gitbook.com")
page_data = loader.load()
```
Then save the page_data to gitbook.json, which could be in the format of:
```
[
{
"page_content": "...",
"metadata": {"source": "...", "title": "..."}
}
]
```
Next time, when I want to re-split the document or rebuild embeddings, I can do:
```
documents = JsonLoader("gitbook.json")
```
It would be great if both the `Save` function and `JsonLoader` can be developed. | Is there any quick way to save generated Documents and reload it? | https://api.github.com/repos/langchain-ai/langchain/issues/3016/comments | 3 | 2023-04-17T12:33:05Z | 2023-09-28T16:08:05Z | https://github.com/langchain-ai/langchain/issues/3016 | 1,671,070,412 | 3,016 |
[
"langchain-ai",
"langchain"
] | I am facing a problem when trying to use the Chroma vector store with a persisted index. I have already loaded a document, created embeddings for it, and saved those embeddings in Chroma. The script ran perfectly with LLM and also created the necessary files in the persistence directory (.chroma\index). The files include:
chroma-collections.parquet
chroma-embeddings.parquet
id_to_uuid_3508d87c-12d1-4bbe-ae7f-69a0ec3c6616.pkl
index_3508d87c-12d1-4bbe-ae7f-69a0ec3c6616.bin
index_metadata_3508d87c-12d1-4bbe-ae7f-69a0ec3c6616.pkl
uuid_to_id_3508d87c-12d1-4bbe-ae7f-69a0ec3c6616.pkl
However, when I try to initialize the Chroma instance using the persist_directory to utilize the previously saved embeddings, I encounter a NoIndexException error, stating "Index not found, please create an instance before querying".
Here is a snippet of the code I am using in a Jupyter notebook:
```
# Section 1
import os
from langchain.vectorstores import Chroma
from langchain.chat_models import ChatOpenAI
from langchain.chains.question_answering import load_qa_chain
# Load environment variables
%reload_ext dotenv
%dotenv info.env
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
# Section 2 - Initialize Chroma without an embedding function
persist_directory = '.chroma\\index'
db = Chroma(persist_directory=persist_directory)
# Section 3
# Load chat model and question answering chain
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=.5, openai_api_key=OPENAI_API_KEY)
chain = load_qa_chain(llm, chain_type="stuff")
# Section 4
# Run the chain on a sample query
query = "The Question - Can you also cite the information you give after your answer?"
docs = db.similarity_search(query)
response = chain.run(input_documents=docs, question=query)
print(response)
```
Please help me understand what might be causing this problem and suggest possible solutions. Additionally, I am curious if these pre-existing embeddings could be reused without incurring the same cost for generating Ada embeddings again, as the documents I am working with have lots of pages. Thanks in advance! | "NoIndexException: Index not found when initializing Chroma from a persisted directory" | https://api.github.com/repos/langchain-ai/langchain/issues/3011/comments | 38 | 2023-04-17T10:21:07Z | 2023-10-25T16:09:22Z | https://github.com/langchain-ai/langchain/issues/3011 | 1,670,863,970 | 3,011 |
[
"langchain-ai",
"langchain"
] | Hello,
I have an instance of Chatbot UI, configured with the ChatGPT API.
How do I integrate LangChain so that it allows me to upload documents, which ChatGPT will then have to read and use in the conversation?
The memory functionality would also be useful to integrate.
thanks in advance to all | chatbot ui integration | https://api.github.com/repos/langchain-ai/langchain/issues/3008/comments | 1 | 2023-04-17T09:24:15Z | 2023-09-10T16:31:57Z | https://github.com/langchain-ai/langchain/issues/3008 | 1,670,767,000 | 3,008 |
[
"langchain-ai",
"langchain"
] | I was trying to override the OpenAIEmbeddings class with some customized implementation and got this:
```
In [1]: from langchain.embeddings.openai import OpenAIEmbeddings
In [2]: class O(OpenAIEmbeddings):
...: pass
...:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-2-bc8ad24584c2> in <module>
----> 1 class O(OpenAIEmbeddings):
2 pass
3
~/opt/miniconda3/lib/python3.9/site-packages/pydantic/main.cpython-39-darwin.so in pydantic.main.ModelMetaclass.__new__()
~/opt/miniconda3/lib/python3.9/site-packages/pydantic/utils.cpython-39-darwin.so in pydantic.utils.smart_deepcopy()
~/opt/miniconda3/lib/python3.9/copy.py in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
~/opt/miniconda3/lib/python3.9/copy.py in _deepcopy_dict(x, memo, deepcopy)
228 memo[id(x)] = y
229 for key, value in x.items():
--> 230 y[deepcopy(key, memo)] = deepcopy(value, memo)
231 return y
232 d[dict] = _deepcopy_dict
~/opt/miniconda3/lib/python3.9/copy.py in deepcopy(x, memo, _nil)
170 y = x
171 else:
--> 172 y = _reconstruct(x, memo, *rv)
173
174 # If is its own copy, don't memoize.
~/opt/miniconda3/lib/python3.9/copy.py in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
268 if state is not None:
269 if deep:
--> 270 state = deepcopy(state, memo)
271 if hasattr(y, '__setstate__'):
272 y.__setstate__(state)
~/opt/miniconda3/lib/python3.9/copy.py in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
~/opt/miniconda3/lib/python3.9/copy.py in _deepcopy_tuple(x, memo, deepcopy)
208
209 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):
--> 210 y = [deepcopy(a, memo) for a in x]
211 # We're not going to put the tuple in the memo, but it's still important we
212 # check for it, in case the tuple contains recursive mutable structures.
~/opt/miniconda3/lib/python3.9/copy.py in <listcomp>(.0)
208
209 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):
--> 210 y = [deepcopy(a, memo) for a in x]
211 # We're not going to put the tuple in the memo, but it's still important we
212 # check for it, in case the tuple contains recursive mutable structures.
~/opt/miniconda3/lib/python3.9/copy.py in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
~/opt/miniconda3/lib/python3.9/copy.py in _deepcopy_dict(x, memo, deepcopy)
228 memo[id(x)] = y
229 for key, value in x.items():
--> 230 y[deepcopy(key, memo)] = deepcopy(value, memo)
231 return y
232 d[dict] = _deepcopy_dict
~/opt/miniconda3/lib/python3.9/copy.py in deepcopy(x, memo, _nil)
170 y = x
171 else:
--> 172 y = _reconstruct(x, memo, *rv)
173
174 # If is its own copy, don't memoize.
~/opt/miniconda3/lib/python3.9/copy.py in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
262 if deep and args:
263 args = (deepcopy(arg, memo) for arg in args)
--> 264 y = func(*args)
265 if deep:
266 memo[id(x)] = y
~/opt/miniconda3/lib/python3.9/copy.py in <genexpr>(.0)
261 deep = memo is not None
262 if deep and args:
--> 263 args = (deepcopy(arg, memo) for arg in args)
264 y = func(*args)
265 if deep:
~/opt/miniconda3/lib/python3.9/copy.py in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
~/opt/miniconda3/lib/python3.9/copy.py in _deepcopy_tuple(x, memo, deepcopy)
208
209 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):
--> 210 y = [deepcopy(a, memo) for a in x]
211 # We're not going to put the tuple in the memo, but it's still important we
212 # check for it, in case the tuple contains recursive mutable structures.
~/opt/miniconda3/lib/python3.9/copy.py in <listcomp>(.0)
208
209 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):
--> 210 y = [deepcopy(a, memo) for a in x]
211 # We're not going to put the tuple in the memo, but it's still important we
212 # check for it, in case the tuple contains recursive mutable structures.
~/opt/miniconda3/lib/python3.9/copy.py in deepcopy(x, memo, _nil)
170 y = x
171 else:
--> 172 y = _reconstruct(x, memo, *rv)
173
174 # If is its own copy, don't memoize.
~/opt/miniconda3/lib/python3.9/copy.py in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
262 if deep and args:
263 args = (deepcopy(arg, memo) for arg in args)
--> 264 y = func(*args)
265 if deep:
266 memo[id(x)] = y
~/opt/miniconda3/lib/python3.9/typing.py in inner(*args, **kwds)
266 except TypeError:
267 pass # All real errors (not unhashable args) are raised below.
--> 268 return func(*args, **kwds)
269 return inner
270
~/opt/miniconda3/lib/python3.9/typing.py in __getitem__(self, params)
901 return self.copy_with((p, _TypingEllipsis))
902 msg = "Tuple[t0, t1, ...]: each t must be a type."
--> 903 params = tuple(_type_check(p, msg) for p in params)
904 return self.copy_with(params)
905
~/opt/miniconda3/lib/python3.9/typing.py in <genexpr>(.0)
901 return self.copy_with((p, _TypingEllipsis))
902 msg = "Tuple[t0, t1, ...]: each t must be a type."
--> 903 params = tuple(_type_check(p, msg) for p in params)
904 return self.copy_with(params)
905
~/opt/miniconda3/lib/python3.9/typing.py in _type_check(arg, msg, is_argument)
155 return arg
156 if not callable(arg):
--> 157 raise TypeError(f"{msg} Got {arg!r:.100}.")
158 return arg
159
TypeError: Tuple[t0, t1, ...]: each t must be a type. Got ().
```
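If the underlying goal is just customized embedding behavior, one workaround while subclassing is broken is composition instead of inheritance. The class names below are hypothetical stand-ins, not langchain API; the point is only the wrapping pattern:

```python
class BaseEmbedder:
    """Stand-in for OpenAIEmbeddings."""

    def embed_documents(self, texts):
        return [[float(len(t))] for t in texts]


class CachingEmbedder:
    """Wraps the base embedder rather than subclassing the pydantic model."""

    def __init__(self, inner):
        self.inner = inner
        self.cache = {}

    def embed_documents(self, texts):
        missing = [t for t in texts if t not in self.cache]
        for t, vec in zip(missing, self.inner.embed_documents(missing)):
            self.cache[t] = vec
        return [self.cache[t] for t in texts]
```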
I checked the code, and not much of it seems related to tuples. Any clues about how this happened? | OpenAIEmbeddings can't be inherited | https://api.github.com/repos/langchain-ai/langchain/issues/3007/comments | 1 | 2023-04-17T09:08:50Z | 2023-09-10T16:32:02Z | https://github.com/langchain-ai/langchain/issues/3007 | 1,670,741,139 | 3,007
[
"langchain-ai",
"langchain"
] | Hello.
I am trying the Time Weighted VectorStore Retriever example,
but I get the following error:
ImportError: cannot import name 'TimeWeightedVectorStoreRetriever' from 'langchain.retrievers' (/usr/local/lib/python3.9/dist-packages/langchain/retrievers/__init__.py)
The langchain version is 0.0.141; I think this version does not yet include TimeWeightedVectorStoreRetriever. Does anyone know how to solve this problem?
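In case it helps while the import is unavailable: the docs describe the retriever's score as `semantic_similarity + (1.0 - decay_rate) ** hours_passed`, which is simple to approximate by hand. A sketch (exact LangChain behavior may differ by version):

```python
def time_weighted_score(similarity, hours_since_access, decay_rate=0.01):
    """Recency-boosted relevance: recently accessed items decay less."""
    return similarity + (1.0 - decay_rate) ** hours_since_access
```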
/usr/local/lib/python3.9/dist-packages/langchain/retrievers/ | TimeWeightedVectorStoreRetriever not found | https://api.github.com/repos/langchain-ai/langchain/issues/3006/comments | 1 | 2023-04-17T08:11:24Z | 2023-04-18T04:56:47Z | https://github.com/langchain-ai/langchain/issues/3006 | 1,670,649,685 | 3,006 |
[
"langchain-ai",
"langchain"
] | I am getting this error whenever a request takes longer than 60 seconds, even though I tried passing `timeout=120` seconds to `ChatOpenAI()`.
`Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised Timeout: Request timed out: HTTPSConnectionPool(host='api.openai.com', port=443): Read timed out. (read timeout=60).`
What is the reason for this issue and how can I rectify it?
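The `Retrying ...` lines show LangChain already retries these timeouts with backoff (via tenacity). If you need the same resilience around your own calls, the shape is a small stdlib wrapper like this sketch:

```python
import random
import time


def with_retries(fn, max_attempts=4, base_delay=1.0):
    """Retry a flaky call with exponential backoff plus a little jitter."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except TimeoutError:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))
```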
| Frequent request timed out error | https://api.github.com/repos/langchain-ai/langchain/issues/3005/comments | 38 | 2023-04-17T07:28:20Z | 2024-01-08T16:22:54Z | https://github.com/langchain-ai/langchain/issues/3005 | 1,670,584,519 | 3,005 |
[
"langchain-ai",
"langchain"
] | Hi,
I am using DirectoryLoader as the document loader, and for some CSV files I get the error below:
ValueError: Invalid file union\Book1.csv. The FileType.UNK file type is not supported in partition.
Can anyone please suggest how to fix this? I would be thankful.
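One workaround is to route CSV files to a CSV-specific loader (e.g. LangChain's `CSVLoader`) instead of letting the generic partitioner guess the file type. What such a loader does is roughly this stdlib sketch, one chunk per row with source metadata:

```python
import csv
from pathlib import Path


def load_csv_rows(directory):
    """Turn every row of every CSV under `directory` into a text chunk with source metadata."""
    docs = []
    for path in Path(directory).glob("**/*.csv"):
        with open(path, newline="", encoding="utf-8") as f:
            for i, row in enumerate(csv.DictReader(f)):
                text = "\n".join(f"{k}: {v}" for k, v in row.items())
                docs.append({"page_content": text, "metadata": {"source": str(path), "row": i}})
    return docs
```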
Thank You | ValueError: Invalid file union\Book1.csv. The FileType.UNK file type is not supported in partition. | https://api.github.com/repos/langchain-ai/langchain/issues/3002/comments | 4 | 2023-04-17T06:06:29Z | 2023-05-02T19:25:04Z | https://github.com/langchain-ai/langchain/issues/3002 | 1,670,480,210 | 3,002 |
[
"langchain-ai",
"langchain"
] | The CSV/Pandas dataframe agent actually replies to questions irrelevant to the data; this can easily be resolved by including an extra line in the prompt telling it not to reply to questions irrelevant to the dataframe. | CSV/Pandas Dataframe agent replying to questions irrelevant to data | https://api.github.com/repos/langchain-ai/langchain/issues/3000/comments | 4 | 2023-04-17T04:57:28Z | 2023-09-18T16:19:52Z | https://github.com/langchain-ai/langchain/issues/3000 | 1,670,406,409 | 3,000
[
"langchain-ai",
"langchain"
] | While trying to figure out how to save persistent memory, I've come across what I believe to be an error in the docs:
Running the example verbatim produces an error.
[Source](https://python.langchain.com/en/latest/modules/memory/types/entity_summary_memory.html#using-in-a-chain)
```
from langchain.chains import ConversationChain
from langchain.memory import ConversationEntityMemory
from langchain.memory.prompt import ENTITY_MEMORY_CONVERSATION_TEMPLATE
from pydantic import BaseModel
from typing import List, Dict, Any
conversation = ConversationChain(
    llm=llm,
    verbose=True,
    prompt=ENTITY_MEMORY_CONVERSATION_TEMPLATE,
    memory=ConversationEntityMemory(llm=llm)
)
conversation.predict(input="Deven & Sam are working on a hackathon project")
conversation.memory.store
```
conversation.memory.store does not exist.
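One thing to check: in more recent langchain versions the entity dict sits behind an `entity_store` attribute, so the docs' `conversation.memory.store` becomes `conversation.memory.entity_store.store` (verify against your installed version). Conceptually that store is just a mapping from entity names to running summaries, as in this toy sketch:

```python
class InMemoryEntityStore:
    """Toy sketch of entity memory: entity name -> accumulated summary."""

    def __init__(self):
        self.store = {}

    def update(self, entity, fact):
        previous = self.store.get(entity, "")
        self.store[entity] = (previous + " " + fact).strip()
```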
Are the docs incorrect? Happy to contribute toward a fix; just looking for some direction on what the correct implementation should be in this use case. | ConversationChain class variable missing when running example from the docs? | https://api.github.com/repos/langchain-ai/langchain/issues/2997/comments | 1 | 2023-04-17T02:08:33Z | 2023-09-10T16:32:12Z | https://github.com/langchain-ai/langchain/issues/2997 | 1,670,278,618 | 2,997
[
"langchain-ai",
"langchain"
] | Hello everyone! I am new to using langchain and I am currently facing an issue with the figma loader. I have followed the steps outlined in the documentation, but I am receiving a TypeError with the following message:
`TypeError: expected string or bytes-like object`
The error occurs when I try to create a vector store index using the code `index = VectorstoreIndexCreator().from_loaders([figma_loader])`. I have been stuck on this for a while and would really appreciate any help that I can get.
`----> 2 index = VectorstoreIndexCreator().from_loaders([figma_loader])
3 figma_doc_retriever = index.vectorstore.as_retriever()
11 frames
/usr/lib/python3.9/http/client.py in putheader(self, header, *values)
1260 values[i] = str(one_value).encode('ascii')
1261
-> 1262 if _is_illegal_header_value(values[i]):
1263 raise ValueError('Invalid header value %r' % (values[i],))
1264
TypeError: expected string or bytes-like object`
I have already checked the repository's issue tracker but haven't found any solutions that address my specific problem. I have also provided relevant code snippets and steps I have taken so far.
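For what it's worth, `expected string or bytes-like object` inside `putheader` usually means a non-string value (often `None` or an int) ended up as an HTTP header, e.g. an access token that wasn't read as a string. A defensive check along these lines can localize it; `X-Figma-Token` is Figma's API header, and the environment variable name here is only an example:

```python
import os


def figma_headers(env_var="FIGMA_ACCESS_TOKEN"):
    """Fail early, with a clear message, if the token is missing or not a string."""
    token = os.environ.get(env_var)
    if not isinstance(token, str) or not token:
        raise ValueError(f"{env_var} must be set to a non-empty string, got {token!r}")
    return {"X-Figma-Token": token}
```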
Thank you in advance for your help! I am looking forward to hearing from you soon. | Vector Store Creator from Figma loader throws an error | https://api.github.com/repos/langchain-ai/langchain/issues/2996/comments | 2 | 2023-04-17T02:02:23Z | 2023-06-22T05:40:00Z | https://github.com/langchain-ai/langchain/issues/2996 | 1,670,271,781 | 2,996 |
[
"langchain-ai",
"langchain"
] | Hello all,
I've been encountering an issue while trying to install the dependencies with the `poetry install -E all` command. I am currently working on the latest commit (a9310a3e) in my development environment. Here is the error message I receive:
```
RuntimeError
Unable to find installation candidates for torch (1.13.1)
at /opt/homebrew/Cellar/poetry/1.4.2/libexec/lib/python3.11/site-packages/poetry/installation/chooser.py:109 in choose_for
105│
106│ links.append(link)
107│
108│ if not links:
→ 109│ raise RuntimeError(f"Unable to find installation candidates for {package}")
110│
111│ # Get the best link
112│ chosen = max(links, key=lambda link: self._sort_key(package, link))
113│
```
Has anyone else experienced this issue, and if so, have you found any solutions or workarounds? Any help or suggestions would be greatly appreciated.
Thank you! | Unable to find installation candidates for torch (1.13.1) | https://api.github.com/repos/langchain-ai/langchain/issues/2991/comments | 6 | 2023-04-16T21:17:07Z | 2024-01-05T23:40:21Z | https://github.com/langchain-ai/langchain/issues/2991 | 1,670,143,578 | 2,991 |
[
"langchain-ai",
"langchain"
] | How can I access the data of a website using an API token and use that data in langchain for a custom purpose? | How to access data of a website using API token | https://api.github.com/repos/langchain-ai/langchain/issues/2990/comments | 1 | 2023-04-16T19:50:22Z | 2023-09-10T16:32:17Z | https://github.com/langchain-ai/langchain/issues/2990 | 1,670,111,690 | 2,990
[
"langchain-ai",
"langchain"
] | the code is simple
```
def load_vector_memory_from_dir(dir_path):
    from langchain.document_loaders import DirectoryLoader
    from langchain.embeddings import OpenAIEmbeddings
    from langchain.text_splitter import CharacterTextSplitter
    from langchain.vectorstores import FAISS
    loader = DirectoryLoader(dir_path)
    documents = loader.load()
    text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
    texts = text_splitter.split_documents(documents)
    return FAISS.from_documents(texts, OpenAIEmbeddings())


def get_answer_from_vector_memory(vector_memory, query):
    from langchain.agents.agent_toolkits import (
        create_vectorstore_agent,
        VectorStoreToolkit,
        VectorStoreInfo,
    )
    from langchain.chat_models import ChatOpenAI
    vectorstore_info = VectorStoreInfo(
        name="software_requirement_specification",
        description="software requirement specification and other things you want to know",
        vectorstore=vector_memory
    )
    toolkit = VectorStoreToolkit(vectorstore_info=vectorstore_info)
    agent_executor = create_vectorstore_agent(
        llm=ChatOpenAI(temperature=0),
        toolkit=toolkit,
        verbose=True
    )
    answer = agent_executor.run(query)
    return answer


def get_answer_from_vector_memory_and_web(text):
    pass


if __name__ == "__main__":
    vector_store = load_vector_memory_from_dir("../../docs")
    get_answer_from_vector_memory(vector_store, "What will be changed in the next version?")
```
got error
```openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens, however you requested 6542 tokens (6286 in your prompt; 256 for the completion). Please reduce your prompt; or completion length.```
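One mitigation that does not depend on any chain or agent argument is trimming what reaches the prompt before the agent sees it, e.g. capping retrieved chunks to an approximate token budget. The 4-characters-per-token ratio below is a rough assumption, not a real tokenizer:

```python
def fit_to_budget(chunks, max_tokens=3000, chars_per_token=4):
    """Greedily keep retrieved chunks until an approximate token budget is reached."""
    kept, used = [], 0
    for chunk in chunks:
        cost = max(1, len(chunk) // chars_per_token)
        if used + cost > max_tokens:
            break
        kept.append(chunk)
        used += cost
    return kept
```

Splitting documents into smaller chunks at ingestion time, so each retrieved piece is cheaper, attacks the same problem from the other side.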
I searched for this issue, but all the answers say you should pass an argument to RetrievalQAChain etc. to reduce the prompt length. I'm using agents (to combine tools with doc QA), so there's no such argument for me to change. | Got prompt token length error when using agent | https://api.github.com/repos/langchain-ai/langchain/issues/2988/comments | 1 | 2023-04-16T17:27:10Z | 2023-09-10T16:32:24Z | https://github.com/langchain-ai/langchain/issues/2988 | 1,670,056,700 | 2,988
[
"langchain-ai",
"langchain"
] | I'm using these code :
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.agents import initialize_agent, AgentType

llm = ChatOpenAI(temperature=0)
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
agent_chain = initialize_agent(tools, llm, agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION, verbose=True, memory=memory)
I run this line several times
agent_chain.run(input="Hi, how are you today ?")
then these errors show up:
agents/conversational/base.py in _extract_tool_and_input(self, llm_output)
83 match = re.search(regex, llm_output)
84 if not match:
---> 85 raise ValueError(f"Could not parse LLM output: `{llm_output}`") | Bug : could not parse LLM output: `{llm_output}`") when I run the same question several times | https://api.github.com/repos/langchain-ai/langchain/issues/2985/comments | 2 | 2023-04-16T16:45:42Z | 2023-09-10T16:32:28Z | https://github.com/langchain-ai/langchain/issues/2985 | 1,670,041,682 | 2,985 |
[
"langchain-ai",
"langchain"
] | Hi,
I'm running the official Docker image from Chroma and using it via the REST API (I need it in server mode for persistent storage in a production deployment).
When inserting documents (I'm loading pdfs) I'm getting
`chromadb.api.models.Collection No embedding_function provided, using default embedding function: SentenceTransformerEmbeddingFunction`
even though I'm passing OpenAIEmbeddings() as embedding parameter
```
from chromadb.config import Settings
from langchain.document_loaders import PyPDFLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

embeddings = OpenAIEmbeddings()
chroma_settings = Settings(
    chroma_api_impl="rest",
    chroma_server_host="localhost",
    chroma_server_http_port=8000,
    anonymized_telemetry=False,
)
loader = PyPDFLoader(pdf_url)
pages = loader.load_and_split()
Chroma.from_documents(
    documents=pages, embedding=embeddings, client_settings=chroma_settings
)
```
| embedding function not passed properly to Chroma | https://api.github.com/repos/langchain-ai/langchain/issues/2982/comments | 17 | 2023-04-16T15:46:43Z | 2024-05-11T14:34:48Z | https://github.com/langchain-ai/langchain/issues/2982 | 1,670,023,228 | 2,982 |
[
"langchain-ai",
"langchain"
] | This is actually half an issue, half an open disscussion topic.
Following #2898, I tried the offline [LLAMA model](https://python.langchain.com/en/latest/modules/models/llms/integrations/llamacpp.html) with the same agent, and the result is somewhat interesting.
Given the same prompt:
```
Answer the following questions as best you can. You have access to the following tools:
Google Search: A wrapper around Google Search. Useful for when you need to answer questions about current events. Input should be a search query.
Calculator: Useful for when you need to answer questions about math.
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [Google Search, Calculator]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Begin!
Question: Who is Leo DiCaprio's current girlfriend? What is her current age raised to the 0.43 power?
Thought:
```
The reply from LlamaCpp, using prompttemplate is:
```
Action: Use Google Search
Action Input: type in "Leo DiCaprio's girlfriend"
Observation: xxx
...
```
You see, the model is able to perform some "reasoning" from the prompt, and the response it generates, although not strictly consistent with what ChatGPT or GPT-4 does, is also correct in some sense. However, in its response the "Action" is **Use Google Search** rather than **Google Search**. That is not a problem in natural language, but it does pose problems when the agent uses a regex (or, more generally, a rule-based method) to select among the different tools.
I am thinking about how langchain could better support smaller, offline models (not restricted to llamacpp or GPT4All) that may not produce GPT-4-consistent, but still humanly acceptable, responses. I came up with 2 options:
1. Work on the regexes and make them generalize as much as possible to the diversity of inputs, as long as the meaning is correct. Although this might again end up in a "human engineered" dilemma.
2. Use more general methods, like those of "sentiment classification"; i.e., use the LLM itself to classify which tool to use for the next step, rather than a regex matcher.
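Option 1 could start from a forgiving matcher that maps free-form action strings onto registered tool names, along these lines (sketch):

```python
import difflib


def match_tool(action_text, tool_names):
    """Map a free-form 'Action:' string (e.g. 'Use Google Search') onto a registered tool."""
    norm = action_text.lower().strip()
    for name in tool_names:
        if name.lower() in norm:
            return name
    close = difflib.get_close_matches(norm, [n.lower() for n in tool_names], n=1, cutoff=0.6)
    if close:
        return next(n for n in tool_names if n.lower() == close[0])
    raise ValueError(f"No registered tool matches {action_text!r}")
```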
Any ideas ? | agent with LLAMA or GPT4All | https://api.github.com/repos/langchain-ai/langchain/issues/2980/comments | 6 | 2023-04-16T13:03:52Z | 2023-11-28T16:12:05Z | https://github.com/langchain-ai/langchain/issues/2980 | 1,669,962,934 | 2,980 |
[
"langchain-ai",
"langchain"
] | I trying to create embeddings of CSV file of size around 137 MB which has both numerical and text column (total of 6). using the following code
`from langchain.document_loaders.csv_loader import CSVLoader
loader = CSVLoader(file_path=path, encoding="utf-8")
data = loader.load()
embeddings = CohereEmbeddings(model="multilingual-22-12", cohere_api_key= cohere_api_key)
doc_result = embeddings.embed_documents([data])`
**above gives the following error**
`TypeError Traceback (most recent call last)
<ipython-input-13-6789a7d649fe> in <cell line: 1>()
----> 1 doc_result = embeddings.embed_documents([data])
15 frames
/usr/lib/python3.9/json/encoder.py in default(self, o)
177
178 """
--> 179 raise TypeError(f'Object of type {o.__class__.__name__} '
180 f'is not JSON serializable')
181
**TypeError: Object of type Document is not JSON serializable**`
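For context, `embed_documents` expects `List[str]`, while `loader.load()` returns `Document` objects, and `[data]` additionally nests the whole list; serializing those objects for the API request is what raises the error. The unwrap step looks like this, using a dataclass stand-in for `langchain.schema.Document`:

```python
from dataclasses import dataclass, field


@dataclass
class Document:  # stand-in for langchain.schema.Document
    page_content: str
    metadata: dict = field(default_factory=dict)


def to_texts(docs):
    """Unwrap Document objects into the plain strings embed_documents expects."""
    return [d.page_content for d in docs]
```

So the call would be `embeddings.embed_documents(to_texts(data))` rather than `embeddings.embed_documents([data])`.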
I am trying to figure out a solution, but it seems hard. Please help me in this regard; I would be very thankful. | Cohere's multilingual model does not creating embeddings of CSV | https://api.github.com/repos/langchain-ai/langchain/issues/2979/comments | 1 | 2023-04-16T12:54:00Z | 2023-09-10T16:32:33Z | https://github.com/langchain-ai/langchain/issues/2979 | 1,669,957,240 | 2,979
[
"langchain-ai",
"langchain"
] | I'm using this output parser. But when the agent is passing this to the action state, I'm getting parsing error.
<img width="1441" alt="image" src="https://user-images.githubusercontent.com/19322429/232312291-41effe19-0c95-4bb3-8528-ffe9f6571f89.png">
Here is the parser:
import re
from typing import Union

from langchain.agents import AgentOutputParser
from langchain.schema import AgentAction, AgentFinish


class CustomOutputParser(AgentOutputParser):
    def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:
        print(f"Input to parse function: {llm_output}")  # Print the input
        # Check if agent should finish
        if "Final Answer:" in llm_output:
            return AgentFinish(
                # Return values is generally always a dictionary with a single `output` key
                # It is not recommended to try anything else at the moment :)
                return_values={"output": llm_output.split("Final Answer:")[-1].strip()},
                log=llm_output,
            )
        # Parse out the action and action input
        regex = r"Action: (.*?)[\n]*Action Input:([\s\S]*?)(?=\n\n|$)"
        match = re.search(regex, llm_output, re.DOTALL)
        if not match:
            raise ValueError(f"Could not parse LLM output: `{llm_output}`")
        action = match.group(1).strip()
        action_input = match.group(2).strip()
        # Return the action and action input
        return AgentAction(tool=action, tool_input=action_input.strip(" ").strip('"'), log=llm_output) | Parsing issue when using Python client | https://api.github.com/repos/langchain-ai/langchain/issues/2978/comments | 2 | 2023-04-16T12:44:04Z | 2023-09-10T16:32:38Z | https://github.com/langchain-ai/langchain/issues/2978 | 1,669,945,651 | 2,978
[
"langchain-ai",
"langchain"
] | Are there any docs or related issues about caching Azure Chat OpenAI responses? I cannot find one. | Caching for chat based models. | https://api.github.com/repos/langchain-ai/langchain/issues/2976/comments | 8 | 2023-04-16T12:27:30Z | 2023-12-21T16:08:39Z | https://github.com/langchain-ai/langchain/issues/2976 | 1,669,937,971 | 2,976
[
"langchain-ai",
"langchain"
] | Hi,
I'm using RetrievalQA.from_chain_type to query a local index.
I'm using a custom prompt as input (query and context).
Is there a way to log or inspect the actual prompt that is sent to OpenAI API including the query, and the context?
Also
How to control the number of documents the retriever returns?
Is there an option to see the accuracy score of each doc in the source_documents returned by the query?
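On the retriever questions: conceptually the retriever just returns the k highest-scoring chunks, so both knobs live at that layer. In LangChain of this vintage, `vectorstore.as_retriever(search_kwargs={"k": 4})` controls the count and `similarity_search_with_score` exposes scores, though both are worth verifying against your version. As a toy sketch of the underlying operation:

```python
import math


def top_k_with_scores(query_vec, index, k=4):
    """Return the k most similar (score, doc) pairs from a list of (doc, vector)."""

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    scored = [(cosine(query_vec, vec), doc) for doc, vec in index]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:k]
```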
Thanks
Dror | Question: RetrievalQA.from_chain_type - logging the full prompt | https://api.github.com/repos/langchain-ai/langchain/issues/2975/comments | 7 | 2023-04-16T12:21:39Z | 2024-02-08T09:46:58Z | https://github.com/langchain-ai/langchain/issues/2975 | 1,669,936,294 | 2,975 |
[
"langchain-ai",
"langchain"
] | Consider rewriting this with PyPDF2. | pypdf has compatible problems with pdf files which contain complex encodings | https://api.github.com/repos/langchain-ai/langchain/issues/2973/comments | 2 | 2023-04-16T11:10:01Z | 2023-09-10T16:32:43Z | https://github.com/langchain-ai/langchain/issues/2973 | 1,669,870,714 | 2,973
[
"langchain-ai",
"langchain"
] | Would be really nice to have support for Googles Vertex AI Matching Engine as a Vector Store:
[Google Cloud Vector Store](https://cloud.google.com/vertex-ai/docs/matching-engine/overview?hl=en)
I'm currently building an AI application with langchain agents using Google Cloud as my backend.
So I'm trying not to use too many third-party services, to keep everything as tidy as possible.
Using a Google solution for the vector store would be a huge plus.
| Support for Vertex AI Matching Engine as a Vector Store | https://api.github.com/repos/langchain-ai/langchain/issues/2971/comments | 2 | 2023-04-16T10:34:51Z | 2023-09-25T16:08:43Z | https://github.com/langchain-ai/langchain/issues/2971 | 1,669,850,988 | 2,971 |
[
"langchain-ai",
"langchain"
] | When you set a max_iterations on a tool agent and it is down to its last iteration, it doesn't make sense for it to try to use a tool. Using a tool would require another iteration, which will be blocked.
There should be some way for the agent to realize it's out of iterations and just return a `Final answer` with whatever information it has managed to get. | Tool agents should not try to use a tool on their last iteration | https://api.github.com/repos/langchain-ai/langchain/issues/2970/comments | 6 | 2023-04-16T10:25:52Z | 2023-12-06T17:46:55Z | https://github.com/langchain-ai/langchain/issues/2970 | 1,669,841,936 | 2,970
[
"langchain-ai",
"langchain"
] | using CHAT_CONVERSATIONAL_REACT_DESCRIPTION the agent does not chain together tools - only the first iteration, then stops. The other AgenTypes work as expected.
```
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.chat_models import AzureChatOpenAI
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
llm = AzureChatOpenAI(
    temperature=0,
    deployment_name="gpt4",
    model_name="gpt-4")
tools = load_tools(["google-search", "requests_all", "llm-math", "wolfram-alpha", "wikipedia", "pal-math"], llm=llm)
agent_zero_shot = initialize_agent(
    tools,
    llm,
    agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    memory=memory,
    verbose=True)
response = agent_zero_shot.run(input="Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?")
print(response)
# returns: Eden Polani's age raised to the 0.43 power is approximately 3.55.
# agent_conversational = initialize_agent(
#     tools,
#     llm,
#     agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
#     memory=memory,  # I think it ignores this
#     verbose=True)
# response = agent_conversational.run(input="Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?")
# print(response)
# returns: Finished chain. Leonardo DiCaprio's most recent girlfriend is rumored to be Eden Polani, who is 19 years old. To calculate her age raised to the 0.43 power, I'll need to use a calculator.
```
CONVERSATIONAL_REACT_DESCRIPTION works as expected. However, given that I'm using Azure and GPT-4, I only have the chat interface. | CHAT_CONVERSATIONAL_REACT_DESCRIPTION vs CONVERSATIONAL_REACT_DESCRIPTION | https://api.github.com/repos/langchain-ai/langchain/issues/2968/comments | 15 | 2023-04-16T09:03:51Z | 2024-02-15T16:12:05Z | https://github.com/langchain-ai/langchain/issues/2968 | 1,669,796,108 | 2,968
[
"langchain-ai",
"langchain"
] | I tried creating a pandas dataframe agent (using create_dataframe_agent) with ChatOpenAI (gpt-3.5-turbo) as the LLM, but langchain isn't able to parse the LLM's output code. Of course, when I use a davinci model it works.
### This is the code:
from langchain.llms import OpenAIChat
from langchain.agents import create_csv_agent
openaichat = OpenAIChat(model_name="gpt-3.5-turbo")
agent = create_csv_agent(openaichat, 'fishfry-locations.csv', verbose=True)
x = agent.run("How many rows for church?")
### This is the output and error
> Entering new AgentExecutor chain...
Thought: We need to filter the dataframe to only include rows where the venue_type is "Church" and then count the number of rows.
Action: python_repl_ast
Action Input:
```
len(df[df['venue_type'] == 'Church'])
```
Observation: invalid syntax (<unknown>, line 1)
Thought:I need to fix the syntax error by adding a closing parenthesis at the end of the input.
Action: python_repl_ast
Action Input:
```
len(df[df['venue_type'] == 'Church'])
```
Observation: invalid syntax (<unknown>, line 1)
Thought:
> Finished chain. | ChatOpenai (gpt3-turbo) isn't compatible with create_pandas_dataframe_agent, create_csv_agent etc | https://api.github.com/repos/langchain-ai/langchain/issues/2967/comments | 3 | 2023-04-16T08:21:47Z | 2023-09-18T16:19:58Z | https://github.com/langchain-ai/langchain/issues/2967 | 1,669,768,015 | 2,967 |
[
"langchain-ai",
"langchain"
] | as you have now created a specific dialect pr https://github.com/hwchase17/langchain/pull/2748
You had better remove these lines, or make them apply only when the dialect is SQLite;
most dialects don't support this:
https://github.com/hwchase17/langchain/blob/b634489b2e8951b880c2ec467cdcf00f11830705/langchain/sql_database.py#L218-L219
P.S. I have to set this value to None once I instantiate SQLDatabase, otherwise I run into trouble. (I'm using the ibm_db_sa dialect, and it works like a charm with e.g. ChatGPT.)
I also think there are some related tickets around this.
P.S. I set my schema in the connection string,
but I still have to set it in the model (otherwise it cannot find my included tables),
though this might be another problem.
Thanks
| SQLDatabase : Remove set search_path (or rewrite it) | https://api.github.com/repos/langchain-ai/langchain/issues/2951/comments | 9 | 2023-04-15T20:09:12Z | 2023-12-06T18:20:32Z | https://github.com/langchain-ai/langchain/issues/2951 | 1,669,555,201 | 2,951 |
[
"langchain-ai",
"langchain"
] | Streaming is supported by llama-cpp-python and works in Jupyter notebooks outside langchain, but I can't get it to work with langchain. I didn't see any code for streaming in llms/llamacpp.py. I tried making calls to self.callback_manager.on_llm_new_token(), but nothing worked. | LlamaCpp model needs streaming support | https://api.github.com/repos/langchain-ai/langchain/issues/2948/comments | 2 | 2023-04-15T19:22:53Z | 2023-09-10T16:32:53Z | https://github.com/langchain-ai/langchain/issues/2948 | 1,669,542,731 | 2,948
[
"langchain-ai",
"langchain"
] | After ingesting some markdown files using a slightly modified version of the question-answering over docs example, I ran the qa.py script as it was in the example
```
# qa.py
import faiss
from langchain import OpenAI, HuggingFaceHub, LLMChain
from langchain.chains import VectorDBQAWithSourcesChain
import pickle
import argparse
parser = argparse.ArgumentParser(description='Ask a question to the notion DB.')
parser.add_argument('question', type=str, help='The question to ask the notion DB')
args = parser.parse_args()
# Load the LangChain.
index = faiss.read_index("docs.index")
with open("faiss_store.pkl", "rb") as f:
    store = pickle.load(f)
store.index = index
chain = VectorDBQAWithSourcesChain.from_llm(llm=OpenAI(temperature=0), vectorstore=store)
result = chain({"question": args.question})
print(f"Answer: {result['answer']}")
```
Only to get this cryptic error
```
Traceback (most recent call last):
File "C:\Users\ahmad\OneDrive\Desktop\Coding\LANGCHAINSSSSSS\notion-qa\qa.py", line 22, in <module>
result = chain({"question": args.question})
File "C:\Users\ahmad\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\langchain\chains\base.py", line 146, in __call__
raise e
File "C:\Users\ahmad\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\langchain\chains\base.py", line 142, in __call__
outputs = self._call(inputs)
File "C:\Users\ahmad\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\langchain\chains\qa_with_sources\base.py", line 97, in _call
answer, _ = self.combine_document_chain.combine_docs(docs, **inputs)
File "C:\Users\ahmad\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\langchain\chains\combine_documents\map_reduce.py", line 150, in combine_docs
num_tokens = length_func(result_docs, **kwargs)
File "C:\Users\ahmad\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\langchain\chains\combine_documents\stuff.py", line 77, in prompt_length
inputs = self._get_inputs(docs, **kwargs)
File "C:\Users\ahmad\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\langchain\chains\combine_documents\stuff.py", line 64, in _get_inputs
document_info = {
File "C:\Users\ahmad\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\langchain\chains\combine_documents\stuff.py", line 65, in <dictcomp>
k: base_info[k] for k in self.document_prompt.input_variables
KeyError: 'source'
```
Here is the code I used for ingesting:
```
"""This is the logic for ingesting Notion data into LangChain."""
from pathlib import Path
from langchain.text_splitter import CharacterTextSplitter
import faiss
from langchain.vectorstores import FAISS
from langchain.embeddings import OpenAIEmbeddings
import pickle
import time
from tqdm import tqdm
# Here we load in the data in the format that Notion exports it in.
folder = list(Path("Notion_DB/").glob("**/*.md"))
files = []
sources = []
for myFile in folder:
    with open(myFile, 'r', encoding='utf-8') as f:
        print(myFile.name)
        files.append(f.read())
        sources.append(myFile)
# Here we split the documents, as needed, into smaller chunks.
# We do this due to the context limits of the LLMs.
text_splitter = CharacterTextSplitter(chunk_size=800, separator="\n")
docs = []
metadatas = []
for i, f in enumerate(files):
    splits = text_splitter.split_text(f)
    docs.extend(splits)
    metadatas.extend([{"source": sources[i]}] * len(splits))
# Add each element in docs into FAISS store, keeping a delay between inserting elements so we don't exceed rate limit
store = None
for (index, chunk) in tqdm(enumerate(docs)):
    if index == 0:
        store = FAISS.from_texts([chunk], OpenAIEmbeddings())
    else:
        time.sleep(1)  # wait for a second to not exceed any rate limits
        store.add_texts([chunk])
    # print('finished with index ' + index.__str__())
print('Done yayy!')
# # Here we create a vector store from the documents and save it to disk.
faiss.write_index(store.index, "docs.index")
store.index = None
with open("faiss_store.pkl", "wb") as f:
    pickle.dump(store, f)
```
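A plausible cause of the `KeyError: 'source'` above (an assumption, not confirmed): `from_texts` and `add_texts` are called with only `[chunk]`, never with the `metadatas` list built earlier, so stored chunks have no `source` key for the chain's document prompt to format. A minimal stdlib sketch of keeping chunks and source metadata aligned (file names hypothetical):

```python
chunks = ["first chunk of page-a", "second chunk of page-a", "chunk of page-b"]
sources = ["Notion_DB/page-a.md", "Notion_DB/page-a.md", "Notion_DB/page-b.md"]

# one metadata dict per stored text; chains that format a {source}
# variable raise KeyError when this key is missing from a document
metadatas = [{"source": s} for s in sources]

assert len(chunks) == len(metadatas)
print(metadatas[0]["source"])  # Notion_DB/page-a.md
```

If that is the cause, passing `metadatas=...` alongside the texts in both the `from_texts` and `add_texts` calls (parameter name assumed) should restore the key.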
| Question Answering over Docs giving cryptic error upon query | https://api.github.com/repos/langchain-ai/langchain/issues/2944/comments | 2 | 2023-04-15T15:38:36Z | 2023-09-10T16:32:58Z | https://github.com/langchain-ai/langchain/issues/2944 | 1,669,458,405 | 2,944 |
[
"langchain-ai",
"langchain"
] | For example, let's say I have a big txt file (a WhatsApp chat export). When I'm storing it as embeddings in the vector store, I think the source_document is set to `<name_of_file>.txt`, which is fine. But what I want is to attribute a finer-grained source, like the person(s) who said a particular keyword, the datetime, and so on.
Is this currently supported in LangChain? | Is there a way we can pass in a custom source into vector store? | https://api.github.com/repos/langchain-ai/langchain/issues/2941/comments | 4 | 2023-04-15T14:12:30Z | 2023-09-10T16:33:03Z | https://github.com/langchain-ai/langchain/issues/2941 | 1,669,421,607 | 2,941
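There is no built-in for this as far as I know, but one pattern is to parse the export yourself and attach one metadata dict per message before embedding. A stdlib sketch (the WhatsApp line format below is an assumption):

```python
import re

# assumed export line format: "dd/mm/yy, hh:mm - Name: message"
LINE = re.compile(r"^(?P<ts>\d{2}/\d{2}/\d{2}, \d{2}:\d{2}) - (?P<who>[^:]+): (?P<msg>.*)$")

export = [
    "12/01/23, 10:00 - Alice: the keyword is rosebud",
    "12/01/23, 10:01 - Bob: noted",
]

docs, metadatas = [], []
for line in export:
    m = LINE.match(line)
    if m:
        docs.append(m.group("msg"))
        # finer-grained source: who said it and when
        metadatas.append({"source": m.group("who"), "datetime": m.group("ts")})

print(metadatas[0])  # {'source': 'Alice', 'datetime': '12/01/23, 10:00'}
```

These per-message dicts can then be passed wherever the vector store accepts metadata, so `source` carries the speaker rather than the file name.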
[
"langchain-ai",
"langchain"
] | In OpenAPI documentation, an endpoint might include a placeholder for a parameter:
```
GET /states/{abbr}
```
Currently, exact matches are needed with OpenAPI Planner to retrieve documentation. In the example above, `GET /states/FL` would receive a `ValueError(f"{endpoint_name} endpoint does not exist.")`.
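One possible fix direction (a sketch, not the planner's actual code): compile documented paths with `{placeholder}` segments into regexes before matching a concrete URL against them:

```python
import re

def path_to_regex(doc_path: str) -> re.Pattern:
    # /states/{abbr} -> ^/states/[^/]+$
    pattern = re.sub(r"\{[^/}]+\}", "[^/]+", doc_path)
    return re.compile("^" + pattern + "$")

endpoints = ["/states/{abbr}", "/states"]

def find_endpoint(url: str) -> str:
    for ep in endpoints:
        if path_to_regex(ep).fullmatch(url):
            return ep
    raise ValueError(f"{url} endpoint does not exist.")

print(find_endpoint("/states/FL"))  # /states/{abbr}
```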
| Allow OpenAPI planner to respect URLs with placeholders | https://api.github.com/repos/langchain-ai/langchain/issues/2938/comments | 1 | 2023-04-15T13:54:15Z | 2023-10-12T23:20:34Z | https://github.com/langchain-ai/langchain/issues/2938 | 1,669,406,711 | 2,938 |
[
"langchain-ai",
"langchain"
] | Hey, I'm adding a tool for Jira JQL. I ran into something weird and I'm not sure what's wrong or how to debug it. Could anyone help?
The Action Input for the action taken is `summary ~ "add support"`,
but the actual instruction passed into `_run` of my tool is `summary ~ "add support`, missing the closing double quote.
<img width="1740" alt="Screenshot 2023-04-15 at 9 52 11 pm" src="https://user-images.githubusercontent.com/32046231/232217825-e1f4c62f-998c-4b2d-8890-97a726bfc84d.png">
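A plausible explanation (an assumption about how the parsing is done, not a confirmed reading of the agent code): the action input is cleaned with `str.strip('"')`, which removes a trailing quote even when it belongs to the JQL itself. A stdlib sketch of the symptom and a safer unwrap:

```python
action_input = 'summary ~ "add support"'

# stripping quote characters from both ends also eats a quote that
# belongs to the query -> summary ~ "add support
stripped = action_input.strip(" ").strip('"')
print(stripped)

def unwrap(s: str) -> str:
    # only drop quotes when they wrap the whole string
    s = s.strip()
    if len(s) >= 2 and s[0] == s[-1] == '"':
        return s[1:-1]
    return s

print(unwrap(action_input))  # summary ~ "add support"
```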
| agent._extract_tool_and_input removes double quote at the end of action input. | https://api.github.com/repos/langchain-ai/langchain/issues/2936/comments | 2 | 2023-04-15T11:52:24Z | 2023-09-10T16:33:14Z | https://github.com/langchain-ai/langchain/issues/2936 | 1,669,327,179 | 2,936 |
[
"langchain-ai",
"langchain"
] | Just installed LangChain and followed the tutorials without a problem until I reached the agents part.
The following imports are not recognized:
```
from langchain.agents import initialize_agent
from langchain.agents import AgentType
```
I tried running LangChain in Python 3.7, 3.8.11, 3.9 and 3.10 because other people suggested changing versions. | ModuleNotFoundError: No module named 'langchain.agents' | https://api.github.com/repos/langchain-ai/langchain/issues/2935/comments | 1 | 2023-04-15T10:14:16Z | 2023-04-15T10:34:48Z | https://github.com/langchain-ai/langchain/issues/2935 | 1,669,296,610 | 2,935
[
"langchain-ai",
"langchain"
] | Hello Dev,
I don't see json_agent_executor executing right. For my simple requirement, it's not able to give the desired output.
I have 5 users in users.json:
```json
[
  {
    "username": "john_doe",
    "email": "john.doe@example.com"
  },
  {
    "username": "jane_doe",
    "email": "jane.doe@example.com"
  },
  {
    "username": "mark_smith",
    "email": "mark.smith@example.com"
  },
  {
    "username": "sarah_jones",
    "email": "sarah.jones@example.com"
  },
  {
    "username": "david_wilson",
    "email": "david.wilson@example.com"
  }
]
```
I am using the code below:

```python
import os
import json
from langchain.agents import (
    create_json_agent,
    AgentExecutor
)
from langchain.agents.agent_toolkits import JsonToolkit
from langchain.chains import LLMChain
from langchain.llms.openai import OpenAI
from langchain.requests import TextRequestsWrapper
from langchain.tools.json.tool import JsonSpec

with open("/content/sample_data/users.json") as f:
    data = json.load(f)

json_spec = JsonSpec(dict_=data, max_value_length=4000)
json_toolkit = JsonToolkit(spec=json_spec)

json_agent_executor = create_json_agent(
    llm=OpenAI(temperature=0),
    toolkit=json_toolkit,
    verbose=True
)

json_agent_executor.run("What is email id of sarah_jones")
```

The agent is unable to find some basic stuff. This is the output:
> Entering new AgentExecutor chain...
Action: json_spec_list_keys
Action Input: data
Observation: ['username']
Thought: I should look at the value of the username key
Action: json_spec_get_value
Action Input: data["username"]
Observation: email
Thought: I should look at the value of the email key
Action: json_spec_get_value
Action Input: data["username"]["email"]
Observation: TypeError('string indices must be integers')
Thought: I should look at the keys of the username key
Action: json_spec_list_keys
Action Input: data["username"]
Observation: ValueError('Value at path `data["username"]` is not a dict, get the value directly.')
Thought: I should look at the value of the username key
Action: json_spec_get_value
Action Input: data["username"]
Observation: email
Thought: I should look at the value of the email key
Action: json_spec_get_value
Action Input: data["username"]["email"]
Observation: TypeError('string indices must be integers')
Thought: I should look at the keys of the username key
Action: json_spec_list_keys
Action Input: data["username"]
Observation: ValueError('Value at path `data["username"]` is not a dict, get the value directly.')
Thought: I should look at the value of the username key
Action: json_spec_get_value
Action Input: data["username"]
Observation: email
Thought: I should look at the value of the email key
Action: json_spec_get_value
Action Input: data["username"]["email"]
Observation: TypeError('string indices must be integers')
Thought: I should look at the keys of the username key
Action: json_spec_list_keys
Action Input: data["username"]
Observation: ValueError('Value at path `data["username"]` is not a dict, get the value directly.')
Thought: I should look at the value of the username key
Action: json_spec_get_value
Action Input: data["username"]
Observation: email
Thought: I should look at the value of the email key
Action: json_spec_get_value
Action Input: data["username"]["email"]
Observation: TypeError('string indices must be integers')
Thought: I should look at the keys of the username key
Action: json_spec_list_keys
Action Input: data["username"]
Observation: ValueError('Value at path `data["username"]` is not a dict, get the value directly.')
Thought: I should look at the value of the username key
Action: json_spec_get_value
Action Input: data["username"]
Observation: email
Thought: I should look at the value of the email key
Action: json_spec_get_value
Action Input: data["username"]["email"]
Observation: TypeError('string indices must be integers')
Thought:
> Finished chain.
Agent stopped due to iteration limit or time limit.
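One observation (an assumption about the cause): the root of users.json is a list, while the JSON toolkit's key listing is oriented around dicts, which would explain the repeated `is not a dict` observations above. Re-keying the list before building the spec may help; a stdlib sketch:

```python
import json

users_json = """
[
  {"username": "sarah_jones", "email": "sarah.jones@example.com"},
  {"username": "david_wilson", "email": "david.wilson@example.com"}
]
"""

data = json.loads(users_json)
assert isinstance(data, list)               # the root is a list, not a dict

# re-key by username so a dict-oriented traversal has real keys
by_username = {u["username"]: u for u in data}
print(by_username["sarah_jones"]["email"])  # sarah.jones@example.com
```

Passing a dict such as `{"users": by_username}` to `JsonSpec` (usage assumed) would then give the agent real keys to traverse.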
| json_agent_executor unable to perform some basic stuff | https://api.github.com/repos/langchain-ai/langchain/issues/2931/comments | 8 | 2023-04-15T07:35:01Z | 2024-03-25T07:06:03Z | https://github.com/langchain-ai/langchain/issues/2931 | 1,669,216,123 | 2,931 |
[
"langchain-ai",
"langchain"
] | In "combine_docs" in the "MapReduceDocumentsChain" class ("langchain/chains/combine_documents/map_reduce.py"),
num_tokens defaults to 3000 and is not adjusted to the model's context size. | AnalyzeDocumentChain cannot work with "text-ada-001" model or any 2k tokens model | https://api.github.com/repos/langchain-ai/langchain/issues/2930/comments | 3 | 2023-04-15T06:34:23Z | 2023-09-10T16:33:20Z | https://github.com/langchain-ai/langchain/issues/2930 | 1,669,191,327 | 2,930
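To illustrate the mismatch: a hard-coded 3000-token budget cannot fit a model with a ~2k-token context. A sketch of deriving the budget from the model instead (the context sizes and safety margin here are illustrative assumptions):

```python
CONTEXT_WINDOW = {"text-ada-001": 2049, "text-davinci-003": 4097}

def reduce_budget(model_name: str, default: int = 3000, margin: int = 256) -> int:
    # clamp the map-reduce token budget to what the model can actually hold
    window = CONTEXT_WINDOW.get(model_name, default + margin)
    return min(default, window - margin)

print(reduce_budget("text-ada-001"))      # 1793
print(reduce_budget("text-davinci-003"))  # 3000
```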
[
"langchain-ai",
"langchain"
] | `UnicodeEncodeError Traceback (most recent call last)
Cell In[13], line 11
2 tools = [
3 Tool(
4 name="Intermediate Answer",
(...)
7 )
8 ]
10 self_ask_with_search = initialize_agent(tools, llm, agent=AgentType.SELF_ASK_WITH_SEARCH, verbose=True)
---> 11 self_ask_with_search.run("How do I get into an Ivy league college?")
File C:\Python311\Lib\site-packages\langchain\chains\base.py:213, in Chain.run(self, *args, **kwargs)
211 if len(args) != 1:
212 raise ValueError("`run` supports only one positional argument.")
--> 213 return self(args[0])[self.output_keys[0]]
215 if kwargs and not args:
216 return self(kwargs)[self.output_keys[0]]
File C:\Python311\Lib\site-packages\langchain\chains\base.py:116, in Chain.__call__(self, inputs, return_only_outputs)
114 except (KeyboardInterrupt, Exception) as e:
115 self.callback_manager.on_chain_error(e, verbose=self.verbose)
--> 116 raise e
117 self.callback_manager.on_chain_end(outputs, verbose=self.verbose)
118 return self.prep_outputs(inputs, outputs, return_only_outputs)
File C:\Python311\Lib\site-packages\langchain\chains\base.py:113, in Chain.__call__(self, inputs, return_only_outputs)
107 self.callback_manager.on_chain_start(
108 {"name": self.__class__.__name__},
109 inputs,
110 verbose=self.verbose,
111 )
112 try:
--> 113 outputs = self._call(inputs)
114 except (KeyboardInterrupt, Exception) as e:
115 self.callback_manager.on_chain_error(e, verbose=self.verbose)
File C:\Python311\Lib\site-packages\langchain\agents\agent.py:499, in _call(self, inputs)
494 """Validate that appropriate tools are passed in."""
495 pass
497 @classmethod
498 def from_llm_and_tools(
--> 499 cls,
500 llm: BaseLanguageModel,
501 tools: Sequence[BaseTool],
502 callback_manager: Optional[BaseCallbackManager] = None,
503 **kwargs: Any,
504 ) -> Agent:
505 """Construct an agent from an LLM and tools."""
506 cls._validate_tools(tools)
File C:\Python311\Lib\site-packages\langchain\agents\agent.py:409, in _take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps)
399 def plan(
400 self, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any
401 ) -> Union[AgentAction, AgentFinish]:
402 """Given input, decided what to do.
403
404 Args:
405 intermediate_steps: Steps the LLM has taken to date,
406 along with observations
407 **kwargs: User inputs.
408
--> 409 Returns:
410 Action specifying what tool to use.
411 """
412 full_inputs = self.get_full_inputs(intermediate_steps, **kwargs)
413 action = self._get_next_action(full_inputs)
File C:\Python311\Lib\site-packages\langchain\agents\agent.py:105, in plan(self, intermediate_steps, **kwargs)
97 else:
98 raise ValueError(
99 f"Got unsupported early_stopping_method `{early_stopping_method}`"
100 )
102 @classmethod
103 def from_llm_and_tools(
104 cls,
--> 105 llm: BaseLanguageModel,
106 tools: Sequence[BaseTool],
107 callback_manager: Optional[BaseCallbackManager] = None,
108 **kwargs: Any,
109 ) -> BaseSingleActionAgent:
110 raise NotImplementedError
112 @property
113 def _agent_type(self) -> str:
File C:\Python311\Lib\site-packages\langchain\agents\agent.py:71, in _get_next_action(self, full_inputs)
62 @abstractmethod
63 async def aplan(
64 self, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any
65 ) -> Union[AgentAction, AgentFinish]:
66 """Given input, decided what to do.
67
68 Args:
69 intermediate_steps: Steps the LLM has taken to date,
70 along with observations
---> 71 **kwargs: User inputs.
72
73 Returns:
74 Action specifying what tool to use.
75 """
File C:\Python311\Lib\site-packages\langchain\chains\llm.py:151, in LLMChain.predict(self, **kwargs)
137 def predict(self, **kwargs: Any) -> str:
138 """Format prompt with kwargs and pass to LLM.
139
140 Args:
(...)
149 completion = llm.predict(adjective="funny")
150 """
--> 151 return self(kwargs)[self.output_key]
File C:\Python311\Lib\site-packages\langchain\chains\base.py:116, in Chain.__call__(self, inputs, return_only_outputs)
114 except (KeyboardInterrupt, Exception) as e:
115 self.callback_manager.on_chain_error(e, verbose=self.verbose)
--> 116 raise e
117 self.callback_manager.on_chain_end(outputs, verbose=self.verbose)
118 return self.prep_outputs(inputs, outputs, return_only_outputs)
File C:\Python311\Lib\site-packages\langchain\chains\base.py:113, in Chain.__call__(self, inputs, return_only_outputs)
107 self.callback_manager.on_chain_start(
108 {"name": self.__class__.__name__},
109 inputs,
110 verbose=self.verbose,
111 )
112 try:
--> 113 outputs = self._call(inputs)
114 except (KeyboardInterrupt, Exception) as e:
115 self.callback_manager.on_chain_error(e, verbose=self.verbose)
File C:\Python311\Lib\site-packages\langchain\chains\llm.py:57, in LLMChain._call(self, inputs)
56 def _call(self, inputs: Dict[str, Any]) -> Dict[str, str]:
---> 57 return self.apply([inputs])[0]
File C:\Python311\Lib\site-packages\langchain\chains\llm.py:118, in LLMChain.apply(self, input_list)
116 def apply(self, input_list: List[Dict[str, Any]]) -> List[Dict[str, str]]:
117 """Utilize the LLM generate method for speed gains."""
--> 118 response = self.generate(input_list)
119 return self.create_outputs(response)
File C:\Python311\Lib\site-packages\langchain\chains\llm.py:62, in LLMChain.generate(self, input_list)
60 """Generate LLM result from inputs."""
61 prompts, stop = self.prep_prompts(input_list)
---> 62 return self.llm.generate_prompt(prompts, stop)
File C:\Python311\Lib\site-packages\langchain\llms\base.py:107, in BaseLLM.generate_prompt(self, prompts, stop)
103 def generate_prompt(
104 self, prompts: List[PromptValue], stop: Optional[List[str]] = None
105 ) -> LLMResult:
106 prompt_strings = [p.to_string() for p in prompts]
--> 107 return self.generate(prompt_strings, stop=stop)
File C:\Python311\Lib\site-packages\langchain\llms\base.py:140, in BaseLLM.generate(self, prompts, stop)
138 except (KeyboardInterrupt, Exception) as e:
139 self.callback_manager.on_llm_error(e, verbose=self.verbose)
--> 140 raise e
141 self.callback_manager.on_llm_end(output, verbose=self.verbose)
142 return output
File C:\Python311\Lib\site-packages\langchain\llms\base.py:137, in BaseLLM.generate(self, prompts, stop)
133 self.callback_manager.on_llm_start(
134 {"name": self.__class__.__name__}, prompts, verbose=self.verbose
135 )
136 try:
--> 137 output = self._generate(prompts, stop=stop)
138 except (KeyboardInterrupt, Exception) as e:
139 self.callback_manager.on_llm_error(e, verbose=self.verbose)
File C:\Python311\Lib\site-packages\langchain\llms\base.py:324, in LLM._generate(self, prompts, stop)
322 generations = []
323 for prompt in prompts:
--> 324 text = self._call(prompt, stop=stop)
325 generations.append([Generation(text=text)])
326 return LLMResult(generations=generations)
File C:\Python311\Lib\site-packages\langchain\llms\anthropic.py:146, in _call(self, prompt, stop)
130 def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str:
131 r"""Call out to Anthropic's completion endpoint.
132
133 Args:
134 prompt: The prompt to pass into the model.
135 stop: Optional list of stop words to use when generating.
136
137 Returns:
138 The string generated by the model.
139
140 Example:
141 .. code-block:: python
142
143 prompt = "What are the biggest risks facing humanity?"
144 prompt = f"\n\nHuman: {prompt}\n\nAssistant:"
145 response = model(prompt)
--> 146
147 """
148 stop = self._get_anthropic_stop(stop)
149 if self.streaming:
File C:\Python311\Lib\site-packages\anthropic\api.py:239, in Client.completion(self, **kwargs)
238 def completion(self, **kwargs) -> dict:
--> 239 return self._request_as_json(
240 "post",
241 "/v1/complete",
242 params=kwargs,
243 )
File C:\Python311\Lib\site-packages\anthropic\api.py:198, in Client._request_as_json(self, *args, **kwargs)
197 def _request_as_json(self, *args, **kwargs) -> dict:
--> 198 result = self._request_raw(*args, **kwargs)
199 content = result.content.decode("utf-8")
200 json_body = json.loads(content)
File C:\Python311\Lib\site-packages\anthropic\api.py:117, in Client._request_raw(self, method, path, params, headers, request_timeout)
109 def _request_raw(
110 self,
111 method: str,
(...)
115 request_timeout: Optional[Union[float, Tuple[float, float]]] = None,
116 ) -> requests.Response:
--> 117 request = self._request_params(headers, method, params, path, request_timeout)
118 result = self._session.request(
119 request.method,
120 request.url,
(...)
124 timeout=request.timeout,
125 )
127 if result.status_code != 200:
File C:\Python311\Lib\site-packages\anthropic\api.py:85, in Client._request_params(self, headers, method, params, path, request_timeout)
79 del params["disable_checks"]
80 else:
81 # NOTE: disabling_checks can lead to very poor sampling quality from our API.
82 # _Please_ read the docs on "Claude instructions when using the API" before disabling this.
83 # Also note, future versions of the API will enforce these as hard constraints automatically,
84 # so please consider these SDK-side checks as things you'll need to handle regardless.
---> 85 _validate_request(params)
86 data = None
87 if params:
File C:\Python311\Lib\site-packages\anthropic\api.py:273, in _validate_request(params)
271 if prompt.endswith(" "):
272 raise ApiException(f"Prompt must not end with a space character")
--> 273 _validate_prompt_length(params)
File C:\Python311\Lib\site-packages\anthropic\api.py:279, in _validate_prompt_length(params)
277 prompt: str = params["prompt"]
278 try:
--> 279 prompt_tokens = tokenizer.count_tokens(prompt)
280 max_tokens_to_sample: int = params["max_tokens_to_sample"]
281 token_limit = 9 * 1024
File C:\Python311\Lib\site-packages\anthropic\tokenizer.py:52, in count_tokens(text)
51 def count_tokens(text: str) -> int:
---> 52 tokenizer = get_tokenizer()
53 encoded_text = tokenizer.encode(text)
54 return len(encoded_text.ids)
File C:\Python311\Lib\site-packages\anthropic\tokenizer.py:36, in get_tokenizer()
34 if not claude_tokenizer:
35 try:
---> 36 tokenizer_data = _get_cached_tokenizer_file_as_str()
37 except httpx.HTTPError as e:
38 raise TokenizerException(f'Failed to download tokenizer: {e}')
File C:\Python311\Lib\site-packages\anthropic\tokenizer.py:26, in _get_cached_tokenizer_file_as_str()
24 response.raise_for_status()
25 with open(tokenizer_file, 'w') as f:
---> 26 f.write(response.text)
28 with open(tokenizer_file, 'r') as f:
29 return f.read()
File C:\Python311\Lib\encodings\cp1252.py:19, in IncrementalEncoder.encode(self, input, final)
18 def encode(self, input, final=False):
---> 19 return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\u0100' in position 2452: character maps to <undefined>` | Error when generating text with the Anthropic LLM | https://api.github.com/repos/langchain-ai/langchain/issues/2929/comments | 1 | 2023-04-15T05:33:09Z | 2023-08-06T18:52:23Z | https://github.com/langchain-ai/langchain/issues/2929 | 1,669,165,015 | 2,929 |
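The bottom frames suggest the tokenizer file is written with `open(tokenizer_file, 'w')` and no explicit `encoding`, so Windows defaults to cp1252, which cannot represent `'\u0100'`. A minimal reproduction, independent of the anthropic package:

```python
text = "\u0100"  # Ā, present in the tokenizer JSON but absent from cp1252

try:
    text.encode("cp1252")
except UnicodeEncodeError as exc:
    print("cp1252 fails:", exc.reason)

# an explicit utf-8 encoding round-trips fine
encoded = text.encode("utf-8")
assert encoded.decode("utf-8") == text
print(encoded)  # b'\xc4\x80'
```

Until the library passes `encoding='utf-8'` when writing the cached tokenizer, running Python in UTF-8 mode (`PYTHONUTF8=1`) on Windows is a plausible workaround.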
[
"langchain-ai",
"langchain"
] | I hope that langchain can support Dolly-v2, which was created by Databricks employees and released under a permissive license (CC-BY-SA). | Will it support Dolly-V2? | https://api.github.com/repos/langchain-ai/langchain/issues/2928/comments | 4 | 2023-04-15T05:21:27Z | 2023-05-02T17:46:43Z | https://github.com/langchain-ai/langchain/issues/2928 | 1,669,159,656 | 2,928
[
"langchain-ai",
"langchain"
] | This is the mypy response for the following code:
```
ChatOpenAI(
    model_name=args.model_name,
    temperature=args.temperature,
)
```
I see in the code that ChatOpenAI has a `client` field that the comments mark as private.
Any remediation? | Mypy: Missing named argument "client" for "ChatOpenAI" | https://api.github.com/repos/langchain-ai/langchain/issues/2925/comments | 11 | 2023-04-15T03:14:17Z | 2024-04-18T20:03:39Z | https://github.com/langchain-ai/langchain/issues/2925 | 1,669,123,006 | 2,925 |
[
"langchain-ai",
"langchain"
] | The OpenSearch documentation notes that you may use a boolean filter for ANN search: https://opensearch.org/docs/latest/search-plugins/knn/filter-search-knn/#boolean-filter-with-ann-search
It would be nice to allow passing in a boolean filter to the OpenSearch vector store `similarity_search` function. | OpenSearch: allow boolean filter search for ANN | https://api.github.com/repos/langchain-ai/langchain/issues/2924/comments | 1 | 2023-04-15T02:26:24Z | 2023-04-18T03:26:28Z | https://github.com/langchain-ai/langchain/issues/2924 | 1,669,111,473 | 2,924 |
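For reference, the boolean-filter-with-ANN request body from the linked docs has roughly this shape (a sketch; the field names come from the docs' hotel example, not from langchain):

```python
# a boolean filter clause, applied alongside the knn clause
boolean_filter = {
    "bool": {"must": [{"range": {"rating": {"gte": 8, "lte": 10}}}]}
}

query = {
    "size": 3,
    "query": {
        "bool": {
            "filter": boolean_filter,
            "must": [{"knn": {"location": {"vector": [5.0, 4.0], "k": 20}}}],
        }
    },
}
print(query["query"]["bool"]["filter"]["bool"]["must"][0]["range"]["rating"])
```

A `similarity_search(..., boolean_filter=...)` keyword (name hypothetical) could splice such a clause into the query the store generates.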
[
"langchain-ai",
"langchain"
] | Hello everyone,
I'm working on an implementation combining GPT Index and LangChain.
I'm trying to set a custom prompt where I can add additional context. [Based on the documentation](https://python.langchain.com/en/latest/reference/modules/chains.html?highlight=langchain.chains.llm.LLMChain#langchain.chains.LLMChain), I'm trying to run this code.
langchain 0.0.139
llama-index 0.5.15
```
template = """Pretend you are Steve Jobs. Answer with motivational content. Steve: How I can help you today?. Person: I want some motivation. Steve: You are amazing you can create any type of business you want.
Person: {question}?
Steve:"""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm=OpenAI(temperature=0)
print(type(llm))
llm = LLMChain(prompt=prompt, llm=llm)
print(type(llm))
memory = ConversationBufferMemory(memory_key="chat_history")
agent_chain = create_llama_chat_agent(
    toolkit,
    llm,
    memory=memory,
    verbose=True
)
```
But I'm getting the below error:
```
ValidationError: 1 validation error for LLMChain
llm
Can't instantiate abstract class BaseLanguageModel with abstract methods agenerate_prompt, generate_prompt (type=type_error)
```
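One thing that stands out (an observation, not a confirmed fix): `llm` is reassigned to the `LLMChain`, so the agent factory receives a chain where it validates for a language model. A stdlib sketch of why that validation fails; these stand-in classes are not langchain's real ones:

```python
from abc import ABC, abstractmethod

class BaseLanguageModel(ABC):            # stand-in for langchain's base class
    @abstractmethod
    def generate_prompt(self, prompt: str) -> str: ...

class FakeLLM(BaseLanguageModel):
    def generate_prompt(self, prompt: str) -> str:
        return f"echo: {prompt}"

class LLMChain:                          # a chain wraps an LLM; it is not one
    def __init__(self, llm: BaseLanguageModel):
        self.llm = llm

llm = FakeLLM()
chain = LLMChain(llm=llm)                # keep the two under distinct names

print(isinstance(llm, BaseLanguageModel))    # True
print(isinstance(chain, BaseLanguageModel))  # False -> rejected by validation
```

Keeping the model and the chain under separate names, and passing the model (not the chain) to `create_llama_chat_agent`, may resolve the ValidationError.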
Thanks for your help. | Can't instantiate langchain.chains.LLMChain to create_llama_chat_agent (Setting custom prompt) | https://api.github.com/repos/langchain-ai/langchain/issues/2922/comments | 7 | 2023-04-15T00:05:05Z | 2023-12-27T16:08:04Z | https://github.com/langchain-ai/langchain/issues/2922 | 1,669,064,432 | 2,922 |