issue_owner_repo listlengths 2 2 | issue_body stringlengths 0 261k ⌀ | issue_title stringlengths 1 925 | issue_comments_url stringlengths 56 81 | issue_comments_count int64 0 2.5k | issue_created_at stringlengths 20 20 | issue_updated_at stringlengths 20 20 | issue_html_url stringlengths 37 62 | issue_github_id int64 387k 2.46B | issue_number int64 1 127k |
|---|---|---|---|---|---|---|---|---|---|
[
"langchain-ai",
"langchain"
] | ### Feature request
Currently the AzureSearch VectorStore allows the user to specify a filter that can be used to filter (in the traditional search engine sense) a search index before doing a vector similarity search. This reduces the search space to improve speed as well as to help focus the vector search on the correct subset of documents.
This filtering feature is very hard to use effectively because the current method for adding documents (add_texts) only allows id, content, content_vector, and metadata fields. None of these fields are suitable for filtering, so this requires the user to go back and add fields manually to the search index.
I propose that we allow the end user to specify extra fields that are added when creating these vectors. The end user would do something like this:
```python
extra_fields = {"extra_fields": {"important_field_1": 123, "important_field_2": 456}}
documents.append(doc1)
documents.append(doc2)
documents.append(doc3)
vector_store.add_documents(documents, **extra_fields)
```
Then when the user queries this vector store later they can do something like this:
```python
retriever.search_kwargs = {'filters': "important_field_1 eq 123"}
qa = RetrievalQA.from_chain_type(
llm=llm,
chain_type="stuff",
retriever=retriever,
)
```
### Motivation
My motivation was a need on a project I'm working on, but I felt this was a generally needed feature, as I stated in the feature request:
This filtering feature is very hard to use effectively because the current method for adding documents (add_texts) only allows id, content, content_vector, and metadata fields. None of these fields are suitable for filtering, so this requires the user to go back and add fields manually to the search index.
### Your contribution
Hopefully this makes sense; let me know if any clarifications are needed. Once bug #6131 is fixed I will submit a PR that implements this; I have it working locally and just need to write appropriate unit tests, which will not be possible until that bug is fixed. | Add ability to add extra fields to AzureSearch VectorStore when adding documents | https://api.github.com/repos/langchain-ai/langchain/issues/6134/comments | 9 | 2023-06-14T03:42:05Z | 2023-10-16T16:16:00Z | https://github.com/langchain-ai/langchain/issues/6134 | 1,755,994,897 | 6,134 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain 0.0.199
Python 3.10.11
Windows 11 (but will occur on any platform).
### Who can help?
@hwchase17
@ruoccofabrizio
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
To reproduce this issue, create an AzureSearch vector store and a RetrievalQA with search_kwargs, as in this sample code:
```python
import os
from langchain.embeddings import OpenAIEmbeddings
from langchain.chat_models import AzureChatOpenAI
from langchain.vectorstores.azuresearch import AzureSearch
from langchain.chains import RetrievalQA

cognitive_search_name = os.environ["AZURE_SEARCH_SERVICE_NAME"]
vector_store_address: str = f"https://{cognitive_search_name}.search.windows.net/"
index_name: str = os.environ["AZURE_SEARCH_SERVICE_INDEX_NAME"]
vector_store_password: str = os.environ["AZURE_SEARCH_SERVICE_ADMIN_KEY"]

embeddings = OpenAIEmbeddings(model="text-embedding-ada-002", chunk_size=1, client=any)
vector_store = AzureSearch(azure_search_endpoint=vector_store_address,
                           azure_search_key=vector_store_password,
                           index_name=index_name,
                           embedding_function=embeddings.embed_query)

llm = AzureChatOpenAI(deployment_name="gpt35", model_name="gpt-3.5-turbo-0301",
                      openai_api_version="2023-03-15-preview", temperature=0, client=None)
retriever = vector_store.as_retriever()
# This filter is the search_kwargs value that never reaches the search methods
retriever.search_kwargs = {'filters': "metadata eq 'something'"}
qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=retriever,
)
```
When you execute this code through `qa`, the search_kwargs appear in the `similarity_search` method in `azuresearch.py` but are never passed on to the methods `vector_search`, `hybrid_search`, and `semantic_hybrid`, where they would actually be used.
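A minimal sketch of the kind of fix I'd expect; the surrounding structure of `similarity_search` is assumed, and only the `**kwargs` plumbing is the point:
```python
# Hypothetical sketch: forward **kwargs so search_kwargs like `filters`
# actually reach the underlying search implementations.
def similarity_search(self, query: str, k: int = 4, **kwargs: Any) -> List[Document]:
    search_type = kwargs.get("search_type", self.search_type)
    if search_type == "similarity":
        docs = self.vector_search(query, k=k, **kwargs)
    elif search_type == "hybrid":
        docs = self.hybrid_search(query, k=k, **kwargs)
    elif search_type == "semantic_hybrid":
        docs = self.semantic_hybrid(query, k=k, **kwargs)
    else:
        raise ValueError(f"search_type of {search_type} not allowed.")
    return docs
```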
### Expected behavior
In my example they should apply a filter to the Azure Cognitive Search index before doing the vector search, but this is not happening because `filters` will always be empty by the time it reaches the functions where it is used (`vector_search`, `hybrid_search`, and `semantic_hybrid`). | Azure Cognitive Search Vector Store doesn't apply search_kwargs when performing queries | https://api.github.com/repos/langchain-ai/langchain/issues/6131/comments | 5 | 2023-06-14T02:08:49Z | 2023-08-15T08:39:21Z | https://github.com/langchain-ai/langchain/issues/6131 | 1,755,911,246 | 6,131 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
https://openai.com/blog/function-calling-and-other-api-updates
The gpt-3.5-turbo model can now receive 16k tokens, so the code in `llm.chains` and `get_openai_callback` needs fixing.
If you use the max_tokens parameter you get an `openai.error.InvalidRequestError`:
```bash
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 4133 tokens. Please reduce the length of the messages.
```
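Once supported, a hedged sketch of the expected usage; the model name is taken from the OpenAI announcement, the rest is standard LangChain API:
```python
# Hypothetical once the token mapping knows the new model id
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(model_name="gpt-3.5-turbo-16k", temperature=0)
```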
### Suggestion:
_No response_ | Issue: <Please write a comprehensive title after the 'Issue: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/6129/comments | 2 | 2023-06-14T01:23:57Z | 2023-06-14T02:31:12Z | https://github.com/langchain-ai/langchain/issues/6129 | 1,755,875,136 | 6,129 |
[
"langchain-ai",
"langchain"
] | ### System Info
Upon running the `poetry install -E all` command, I'm unable to install the awadb or azure-ai-vision versions specified in pyproject.toml:
awadb = {version = "^0.3.2", optional = true}
azure-ai-vision = {version = "^0.11.1b1", optional = true}
```
Package operations: 1 install, 1 update, 0 removals
• Updating awadb (0.3.1 -> 0.3.2): Failed
RuntimeError
Unable to find installation candidates for awadb (0.3.2)
at ~/opt/anaconda3/envs/langchain/lib/python3.9/site-packages/poetry/installation/chooser.py:76 in choose_for
72│
73│ links.append(link)
74│
75│ if not links:
→ 76│ raise RuntimeError(f"Unable to find installation candidates for {package}")
77│
78│ # Get the best link
79│ chosen = max(links, key=lambda link: self._sort_key(package, link))
80│
• Installing azure-ai-vision (0.11.1b1): Failed
RuntimeError
Unable to find installation candidates for azure-ai-vision (0.11.1b1)
at ~/opt/anaconda3/envs/langchain/lib/python3.9/site-packages/poetry/installation/chooser.py:76 in choose_for
72│
73│ links.append(link)
74│
75│ if not links:
→ 76│ raise RuntimeError(f"Unable to find installation candidates for {package}")
77│
78│ # Get the best link
79│ chosen = max(links, key=lambda link: self._sort_key(package, link))
```
However, I was able to install the awadb 0.3.1 version.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Freshly install Poetry via `curl -sSL https://install.python-poetry.org | python3 -`, in a Conda venv.
2. After its installation (Poetry version 1.5.1), attempted the `poetry install -E all` command
3. Received errors
### Expected behavior
After installing Poetry, I tried to run `poetry install -E all` within the langchain directory; however, I receive dependency errors. | awadb and azure-ai-vision Version issue | https://api.github.com/repos/langchain-ai/langchain/issues/6125/comments | 6 | 2023-06-13T23:52:47Z | 2023-10-15T16:06:28Z | https://github.com/langchain-ai/langchain/issues/6125 | 1,755,805,306 | 6,125 |
[
"langchain-ai",
"langchain"
] | ### System Info
```
langchain==0.0.194
weaviate-client==3.19.1
```
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
This works:
```python
# Init
weaviate_url = url
client = Client(url=weaviate_url, auth_client_secret=auth.AuthClientPassword(xxx, xxx))
embeddings = OpenAIEmbeddings()
vectorstore = Weaviate.from_texts(splits, embeddings, client=client)
# Search
query = "What is micrograd?"
matched_docs = vectorstore.similarity_search(query, k=1)
matched_docs
```
Under the hood, a random `index_name` is created and text key is hard-coded:
```
Index!
LangChain_214a4ded03fd4121ad5f5d0c0c36e051
Text key: text_key!
text
```
Now, I want to get this index that I've created (e.g., in another process or session):
```python
# Create a connection to the existing index
vectorstore_weaviate = Weaviate(client=client)
```
Of course, this will fail because we need `'index_name'` and `'text_key'`:
```
TypeError: __init__() missing 2 required positional arguments: 'index_name' and 'text_key'
```
So, we re-generate our `vectorstore` with an `index_name`:
```python
vectorstore = Weaviate.from_texts(splits, embeddings, client=client, index_name="karpathy-gpt")
```
Confirm:
```
Index!
karpathy_gpt
Text key: text_key!
text
```
But, when we run search:
```python
# Search
query = "What is micrograd?"
matched_docs = vectorstore.similarity_search(query, k=1)
matched_docs
```
We can see that the index name returned in `result` [here](https://github.com/hwchase17/langchain/blob/11ab0be11aff9128c12178b5ebf62071985fb823/langchain/vectorstores/weaviate.py#L223) has been modified to start with an upper-case character: `Karpathy_gpt`.
This results in a key error when we use our index_name key, `karpathy_gpt`:
```
Index Name in similarity_search_by_vector
***`karpathy_gpt`***
Result
{'data': {'Get': {***'Karpathy_gpt'***: [{'text': "would like to take you through building of micrograd. Now micrograd is this library that I released on GitHub about two years ago but at the time I only uploaded the source code and you'd have to go in by yourself and really figure out how it works. So in this lecture I will take you through it step by step and kind of comment on all the pieces of it. So what is micrograd and why is it interesting? Thank you. Micrograd is basically an autograd engine. Autograd is short for automatic gradient"}]}}}
```
It's possible that the index name is being modified somewhere on the Weaviate side?
The `result` comes from Weaviate [here](https://github.com/hwchase17/langchain/blob/11ab0be11aff9128c12178b5ebf62071985fb823/langchain/vectorstores/weaviate.py#LL219C9-L219C71):
```python
result = query_obj.with_near_vector(vector).with_limit(k).do()
```
The index name is modified to `Karpathy_gpt` in the `result`.
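A hedged workaround while this is investigated, assuming Weaviate auto-capitalizes class names on creation: pick an index name that already starts with an upper-case letter so the name round-trips unchanged.
```python
# Hypothetical workaround: an already-capitalized index name survives
# Weaviate's class-name normalization, so lookups by index_name stay consistent.
vectorstore = Weaviate.from_texts(
    splits, embeddings, client=client, index_name="Karpathy_gpt"
)
```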
### Expected behavior
We expect `result` to have the same `index_name` as we have defined when we initialize Weaviate.
```
{'data': {'Get': {'**karpathy_gpt**': [{'text': "would like to take you through building of micrograd. Now micrograd is this library that I released on GitHub about two years ago but at the time I only uploaded the source code and you'd have to go in by yourself and really figure out how it works. So in this lecture I will take you through it step by step and kind of comment on all the pieces of it. So what is micrograd and why is it interesting? Thank you. Micrograd is basically an autograd engine. Autograd is short for automatic gradient"}]}}}
```
Possibly CShorten can sanity-check:
https://github.com/CShorten | Bug with Weaviate vectorstore | https://api.github.com/repos/langchain-ai/langchain/issues/6121/comments | 1 | 2023-06-13T21:53:58Z | 2023-06-14T16:54:42Z | https://github.com/langchain-ai/langchain/issues/6121 | 1,755,693,610 | 6,121 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
The blog post here
https://openai.com/blog/function-calling-and-other-api-updates
specifies
> - new 16k context version of gpt-3.5-turbo (vs the standard 4k version)
The `langchain/llms/openai.py` `model_token_mapping` should be changed to reflect this.
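A hedged sketch of what the change might look like; the entry values follow the announced context sizes and the exact numbers should be confirmed against the API:
```python
# Hypothetical additions to model_token_mapping in langchain/llms/openai.py
model_token_mapping = {
    # ... existing entries ...
    "gpt-3.5-turbo": 4096,
    "gpt-3.5-turbo-16k": 16384,  # assumed value for the new 16k context window
}
```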
### Suggestion:
Add `gpt-3.5-turbo-16k` property to `model_token_mapping` with value 16k | Issue: Update OpenAI model token mapping to reflect new API update 2023-06-13 | https://api.github.com/repos/langchain-ai/langchain/issues/6118/comments | 4 | 2023-06-13T21:22:21Z | 2023-06-21T08:37:19Z | https://github.com/langchain-ai/langchain/issues/6118 | 1,755,665,020 | 6,118 |
[
"langchain-ai",
"langchain"
] | ### Feature request
The proposal here is pretty simple, we add two methods to the `Embeddings` base class, `aembed_documents` and `aembed_query`, allowing for async versions of the equivalent synchronous methods. The first implementation of this would be for OpenAI, since that's a popular embedding API.
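A minimal sketch of the proposed base-class additions; the default bodies are an assumption, and an alternative is making them abstract:
```python
# Hypothetical sketch of the Embeddings base class with async counterparts
from abc import ABC, abstractmethod
from typing import List

class Embeddings(ABC):
    @abstractmethod
    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        """Embed a list of documents."""

    @abstractmethod
    def embed_query(self, text: str) -> List[float]:
        """Embed a single query string."""

    async def aembed_documents(self, texts: List[str]) -> List[List[float]]:
        raise NotImplementedError  # overridden per provider, e.g. OpenAI

    async def aembed_query(self, text: str) -> List[float]:
        raise NotImplementedError
```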
### Motivation
async is supported within other aspects of langchain, and embeddings are one place where support isn't present yet. For a specific example, in a service my company is currently converting to be async, this support would noticeably improve throughput.
### Your contribution
I have a PR in the works that I'll be putting up shortly. | Async request support for Embeddings, with initial support for OpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/6109/comments | 1 | 2023-06-13T20:19:56Z | 2023-07-03T13:25:23Z | https://github.com/langchain-ai/langchain/issues/6109 | 1,755,590,974 | 6,109 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Currently only a few vectorstores, like Qdrant, have support for MMR (Maximal Marginal Relevance).
OpenSearch does not have it.
### Motivation
Since we use OpenSearch as our vectorstore and we want variance in our results for the best entropy, I'd like to have MMR implemented for `OpenSearchVectorStore`.
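For reference, a hedged sketch of the call shape I'd expect, mirroring the MMR API other vectorstores already expose (parameter names assumed from those implementations):
```python
# Hypothetical usage once MMR lands for OpenSearch
docs = opensearch_store.max_marginal_relevance_search(
    "my query", k=4, fetch_k=20, lambda_mult=0.5
)
```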
### Your contribution
My PR: https://github.com/hwchase17/langchain/pull/6116 | MMR Support for OpenSearch | https://api.github.com/repos/langchain-ai/langchain/issues/6108/comments | 3 | 2023-06-13T19:55:07Z | 2023-09-12T16:38:13Z | https://github.com/langchain-ai/langchain/issues/6108 | 1,755,560,292 | 6,108 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain version 0.0.198,
windows,
SQL Server 16.0.4025.1 windows docker container (linux)
In the output I get the message:
Incorrect syntax near the keyword 'TO'
and the program does not end correctly.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
**my python program:**
```python
import os
import urllib.parse

from langchain.llms import OpenAI
from langchain.sql_database import SQLDatabase
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit

server = 'localhost,1435'
database = 'MyDB'
username = 'me'
pwd = '****'
driver = 'ODBC Driver 17 for SQL Server'

def db_instance():
    # Creating the SQLAlchemy connection string
    connectionString = 'DRIVER='+driver+';SERVER=tcp:'+server+';PORT=1433;DATABASE='+database+';UID='+username+';PWD='+pwd+';Encrypt=no;TrustServerCertificate=no;Connection Timeout=30;'
    print(connectionString)
    params = urllib.parse.quote_plus(connectionString)
    conn_str = 'mssql+pyodbc:///?odbc_connect={}'.format(params)
    return SQLDatabase.from_uri(conn_str, schema='ALG')

db = db_instance()
print(db.table_info)
# Setting API key and API endpoint for OpenAI
os.environ['OPENAI_API_TOKEN'] = '....'
llm = OpenAI(model_name='text-davinci-003')
# LangChain agent
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
agent_executor = create_sql_agent(
    llm=llm,
    toolkit=toolkit,
    verbose=True,
    top_k=5
)
# Test
agent_executor.run("'how many Messages are in the DB'")
```
### Expected behavior
**Output from terminal:**
```
SELECT Message_0_0.*, Message_0_100001.*, Message_0_100002.*, Message_0_100003.*, Message_0_100004.*, Message_0_100000.*, Message_0_99999.*
FROM Message_0_0
INNER JOIN Message_0_100001 ON Message_0_0.id = Message_0_100001.id
INNER JOIN Message_0_100002 ON Message_0_0.id = Message_0_100002.id
INNER JOIN Message_0_100003 ON Message_0_0.id = Message_0_100003.id
INNER JOIN Message_0_100004 ON Message_0_0.id = Message_0_100004.id
INNER JOIN Message_0_100000 ON Message_0_0.id = Message_0_100000.id
INNER JOIN Message_0_99999 ON Message_0_0.id = Message_0_99999.id
Thought: The query looks correct, I can now execute it.
Action: query_sql_db
Action Input: SELECT Message_0_0.*, Message_0_100001.*, Message_0_100002.*, Message_0_100003.*, Message_0_100004.*, Message_0_100000.*, Message_0_99999.*
FROM Message_0_0
INNER JOIN Message_0_100001 ON Message_0_0.id = Message_0_100001.id
INNER JOIN Message_0_100002 ON Message_0_0.id = Message_0_100002.id
INNER JOIN Message_0_100003 ON Message_0_0.id = Message_0_100003.id
INNER JOIN Message_0_100004 ON Message_0_0.id = Message_0_100004.id
INNER JOIN Message_0_100000 ON Message_0_0.id = Message_0_100000.id
INNER JOIN Message_0_99999 ON Message_0_0.id = Message_0_99999.id
Obs
Observation: Error: (pyodbc.ProgrammingError) ('42000', "[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Incorrect syntax near the keyword 'TO'. (156) (SQLExecDirectW)")
[SQL: SET search_path TO ALG]
(Background on this error at: https://sqlalche.me/e/20/f405)
Thought: I should double check my query again with the query checker.
Action: query_checker_sql_db
Action Input: SELECT Message_0_0.*, Message_0_100001.*, Message_0_100002.*, Message_0_100003.*, Message_0_100004.*, Message_0_100000.*, Message_0_99999.*
FROM Message_0_0
INNER JOIN Message_0_100001 ON Message_0_0.id = Message_0_100001.id
INNER JOIN Message_0_100002 ON Message_0_0.id = Message_0_100002.id
INNER JOIN Message_0_100003 ON Message_0_0.id = Message_0_100003.id
INNER JOIN Message_0_100004 ON Message_0_0.id = Message_0_100004.id
INNER JOIN Message_0_100000 ON Message_0_0.id = Message_0_100000.id
INNER JOIN Message_0_99999 ON Message_0_0.id = Message_0_99
The message query looks correct it should get the correct count.
```
**The SQL Server trace also contains the query:**
`SET search_path TO ALG`
I found that string in the GitHub repo under /sql_database.py, line 347:
```python
with self._engine.begin() as connection:
if self._schema is not None:
if self.dialect == "snowflake":
connection.exec_driver_sql(
f"ALTER SESSION SET search_path='{self._schema}'"
)
elif self.dialect == "bigquery":
connection.exec_driver_sql(f"SET @@dataset_id='{self._schema}'")
else:
connection.exec_driver_sql(f"SET search_path TO {self._schema}")
``` | Incorrect syntax near the keyword 'TO' | https://api.github.com/repos/langchain-ai/langchain/issues/6105/comments | 5 | 2023-06-13T18:04:04Z | 2023-10-21T16:08:40Z | https://github.com/langchain-ai/langchain/issues/6105 | 1,755,410,046 | 6,105 |
[
"langchain-ai",
"langchain"
] | ### Feature request
OpenAI released several major updates today (2023-06-13) that likely have major implications for what is possible. At the very least, it will make things more reliable.
Here's a shortlist from [the blog post](https://openai.com/blog/function-calling-and-other-api-updates):
- Dramatically improved function calling support + JSON consistency
- `gpt-3.5-turbo` with a 16K context window (🤯)
- Token cost changes for completions and embeddings
- Upcoming deprecation for March-versioned models
### Motivation
The release of OpenAI's blog post found here:
https://openai.com/blog/function-calling-and-other-api-updates
I'm adding this issue mostly to flag and track this OpenAI release and kick off a forum for discussion.
### Your contribution
Can potentially add PRs but haven't contributed here previously. | Support and make use of function calling and other OpenAI updates on 2023-06-13 | https://api.github.com/repos/langchain-ai/langchain/issues/6104/comments | 8 | 2023-06-13T17:57:29Z | 2023-09-23T16:05:43Z | https://github.com/langchain-ai/langchain/issues/6104 | 1,755,398,854 | 6,104 |
[
"langchain-ai",
"langchain"
] | ### System Info
Python Version: 3.9.11
Langchain version = 0.0.199
I'm getting a validation error with GPT4All where I'm following the instructions of the notebook and installed all packages, but apparently there's a parameter called `n_parts` that isn't a GPT4All attribute.
```python
from langchain import PromptTemplate, LLMChain
from langchain.llms import GPT4All
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
import requests
from pathlib import Path
from tqdm import tqdm

# Path for the downloaded weights (value from the official notebook)
local_path = './models/ggml-gpt4all-l13b-snoozy.bin'
Path(local_path).parent.mkdir(parents=True, exist_ok=True)

url = 'http://gpt4all.io/models/ggml-gpt4all-l13b-snoozy.bin'
response = requests.get(url, stream=True)
with open(local_path, 'wb') as f:
    for chunk in tqdm(response.iter_content(chunk_size=8192)):
        if chunk:
            f.write(chunk)

template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
callbacks = [StreamingStdOutCallbackHandler()]
llm = GPT4All(model=local_path, callbacks=callbacks, verbose=True)
# If you want to use a custom model add the backend parameter
llm = GPT4All(model=local_path, backend='gptj', callbacks=callbacks, verbose=True)
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)
```
and this is the error I get
```
Exception ignored in: <function Model.__del__ at 0x2aaaed28aa60>
Traceback (most recent call last):
File "/home/traney/.conda/envs/openai/lib/python3.9/site-packages/pyllamacpp/model.py", line 402, in __del__
if self._ctx:
AttributeError: 'GPT4All' object has no attribute '_ctx'
Exception ignored in: <function Model.__del__ at 0x2aaaed28aa60>
Traceback (most recent call last):
File "/home/traney/.conda/envs/openai/lib/python3.9/site-packages/pyllamacpp/model.py", line 402, in __del__
if self._ctx:
AttributeError: 'GPT4All' object has no attribute '_ctx'
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
Cell In[31], line 11
9 callbacks = [StreamingStdOutCallbackHandler()]
10 # Verbose is required to pass to the callback manager
---> 11 llm = GPT4All(model=local_path, callbacks=callbacks, verbose=True)
12 # If you want to use a custom model add the backend parameter
13 # Check https://docs.gpt4all.io/gpt4all_python.html for supported backends
14 llm = GPT4All(model=local_path, backend='gptj', callbacks=callbacks, verbose=True)
File ~/.conda/envs/openai/lib/python3.9/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for GPT4All
__root__
__init__() got an unexpected keyword argument 'n_parts' (type=type_error)
```
I installed all relevant packages and checked the previous reports of these issues, but I just get different errors, from not being able to find the model, to validation errors, to GPT4All not existing. Is there anything new I have to do to use GPT4All with langchain?
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Follow code above in Jupyter Notebook
### Expected behavior
The chat model should answer the question. | Validation Error | https://api.github.com/repos/langchain-ai/langchain/issues/6101/comments | 5 | 2023-06-13T17:43:13Z | 2023-10-12T16:08:42Z | https://github.com/langchain-ai/langchain/issues/6101 | 1,755,378,721 | 6,101 |
[
"langchain-ai",
"langchain"
] | https://github.com/hwchase17/langchain/blame/ee3d0513addae0680a30afa384431a032244e76b/langchain/chains/graph_qa/cypher.py#L40
@tomasonjo This new feature, **return_intermediate_steps=True**, is not working as intended; can you please update?
There is an issue on line 258 of `langchain\chains\base.py`:
**return self(args[0], callbacks=callbacks)[self.output_keys[0]]**
The above line always makes it return only the result and not the intermediate steps.
As a temporary solution I modified line 130 of `langchain\chains\graph_qa\cypher.py` to return **{"result": chain_result}**, which works.
Thanks in advance! | This new feature return_intermediate_steps=True is not working as intended, can you please update | https://api.github.com/repos/langchain-ai/langchain/issues/6098/comments | 2 | 2023-06-13T17:05:24Z | 2023-06-14T05:59:32Z | https://github.com/langchain-ai/langchain/issues/6098 | 1,755,327,431 | 6,098 |
[
"langchain-ai",
"langchain"
] | ### System Info
I've run a prompt that said `1 + 1 = ?` with my agent and used `get_openai_callback` to show some metrics (see the image):

The LLM model used is `GPT-3.5-turbo`.
On the OpenAI website the price for the `GPT-3.5-turbo` model is `$0.002/1K tokens`, which means my test prompt should cost:
2206 ÷ 1000 × 0.002 = $0.004412
The weird thing, based on my test, is the cost: it shows `$0.04412` and not `$0.004412`.
It could be a bug. Any ideas?
Here is the code:
```python
with get_openai_callback() as cb:
response = agent.run(prompt)
# Show OpenAI Cost
print(f"Total Tokens: {cb.total_tokens}")
print(f"Prompt Tokens: {cb.prompt_tokens}")
print(f"Completion Tokens: {cb.completion_tokens}")
print(f"Total Cost (USD): ${cb.total_cost}")
```
Can anyone please explain what's going on?
Thanks in advance.
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
@agola11
@hwchase17 | OpenAI Cost calculation could have a bug! | https://api.github.com/repos/langchain-ai/langchain/issues/6097/comments | 2 | 2023-06-13T17:02:34Z | 2023-09-19T16:08:11Z | https://github.com/langchain-ai/langchain/issues/6097 | 1,755,322,847 | 6,097 |
[
"langchain-ai",
"langchain"
] | ### System Info
from this format (example):
[Document(page_content='Team: **Athletics**', lookup_str='', metadata={'source': '**my source1**', 'row': **0**}, lookup_index=0), Document(page_content='Team: **Rangers**', lookup_str='', metadata={'source': '**my source2**', 'row': **1**}, lookup_index=0),
Document(page_content='Team: **Yankees**', lookup_str='', metadata={'source': '**my source3**', 'row': **2**}, lookup_index=0)]
To this:
[Document(**lc_kwargs**={page_content='Team: **Athletics**', metadata={'source': '**my source1**', 'row': **0**}, lookup_index=0),
page_content='Team: **Athletics**', metadata={'source': '**my source1**', 'row': **0**}, lookup_index=0),
Document(**lc_kwargs**={page_content='Team: **Rangers**', lookup_str='', metadata={'source': '**my source2**', 'row': **1**}, lookup_index=0), page_content='Team: **Rangers**', lookup_str='', metadata={'source': '**my source2**', 'row': **1**}, lookup_index=0),
Document(**lc_kwargs**={page_content='Team: **Yankees**', lookup_str='', metadata={'source': '**my source3**', 'row': **2**}, lookup_index=0),
page_content='Team: **Yankees**', lookup_str='', metadata={'source': '**my source3**', 'row': **2**}, lookup_index=0)]
### Who can help?
@hwchase17 @eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Check out my colab notebook for better understanding:
https://colab.research.google.com/drive/1w8ZTAkapRev8KHI9w4IAk37T9GfFGWW2?usp=drive_link#scrollTo=tl502z9-RRJC
### Expected behavior
expected output (as shown in the official doc: https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/csv.html):
[Document(page_content='Team: Nationals\n"Payroll (millions)": 81.34\n"Wins": 98', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 0}, lookup_index=0),
Document(page_content='Team: Reds\n"Payroll (millions)": 82.20\n"Wins": 97', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 1}, lookup_index=0)] | A change in the output format of documents loaded with CSVLoader: (+ weird redundancy in the output) | https://api.github.com/repos/langchain-ai/langchain/issues/6096/comments | 2 | 2023-06-13T16:08:21Z | 2023-09-26T16:05:53Z | https://github.com/langchain-ai/langchain/issues/6096 | 1,755,240,397 | 6,096 |
[
"langchain-ai",
"langchain"
] | ### Feature request
When `comparison.value` is an int, "valueText" should change to "valueInt".
### Motivation
When `comparison.value` is an int, "valueText" should change to "valueInt".
### Your contribution
self_query.weaviate
```python
def visit_comparison(self, comparison: Comparison) -> Dict:
    if isinstance(comparison.value, (int, float)):
        # Note: Weaviate's GraphQL filters distinguish valueInt (ints) from
        # valueNumber (floats), so a complete fix may need that distinction too.
        return {
            "path": [comparison.attribute],
            "operator": self._format_func(comparison.comparator),
            "valueInt": comparison.value,
        }
    else:
        return {
            "path": [comparison.attribute],
            "operator": self._format_func(comparison.comparator),
            "valueText": comparison.value,
        }
```
| when comparison.value is int the "valueText" should change to "valueInt" | https://api.github.com/repos/langchain-ai/langchain/issues/6092/comments | 1 | 2023-06-13T12:36:27Z | 2023-09-19T16:08:03Z | https://github.com/langchain-ai/langchain/issues/6092 | 1,754,787,976 | 6,092 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain version: 0.0.198
Platform: Ubuntu 20.04 LTS
Python version: 3.10.4
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Reproduce tutorial ["Entity Memory with SQLite storage"](https://python.langchain.com/en/latest/modules/memory/examples/entity_memory_with_sqlite.html)
2. While executing the following code: `entity_store = SQLiteEntityStore()`,
I get the error: `ValueError: "SQLiteEntityStore" object has no field "conn"`
### Expected behavior
`SQLiteEntityStore()` should execute without errors. | 'ValueError: "SQLiteEntityStore" object has no field "conn"' error for tutorial "Entity Memory with SQLite storage" | https://api.github.com/repos/langchain-ai/langchain/issues/6091/comments | 8 | 2023-06-13T10:50:00Z | 2024-04-05T16:05:40Z | https://github.com/langchain-ai/langchain/issues/6091 | 1,754,597,112 | 6,091 |
[
"langchain-ai",
"langchain"
] | ### Feature request
While I could pass the gl and hl parameters with SerperAPI, one cannot do so for the GoogleSearchAPIWrapper, even though the CSE API supports them.
### Motivation
This should be a priority addition to make the library more inclusive. I have tried passing these with Serper and it does a great job at languages like German, Hindi, French, and Spanish.
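For what it's worth, a hedged sketch of where the plumbing could go, assuming the CSE `list()` call accepts extra query parameters (gl/hl are documented CSE parameters; the kwargs forwarding is the assumption):
```python
# Hypothetical: forward extra kwargs (e.g. gl="de", hl="de") to the CSE call
def _google_search_results(self, search_term: str, **kwargs: Any) -> List[dict]:
    res = (
        self.search_engine.cse()
        .list(q=search_term, cx=self.google_cse_id, **kwargs)
        .execute()
    )
    return res.get("items", [])
```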
### Your contribution
I tried to see the base utility, but couldn't figure out a way to add the gl, hl parameters in the base utility file. | Non English Language Support in GoogleSearchAPIWrapper | https://api.github.com/repos/langchain-ai/langchain/issues/6090/comments | 2 | 2023-06-13T09:55:50Z | 2023-09-20T16:08:20Z | https://github.com/langchain-ai/langchain/issues/6090 | 1,754,499,202 | 6,090 |
[
"langchain-ai",
"langchain"
] | ### System Info
If the DynamoDB table does not exist when retrieving conversation history, then a generic "local variable 'response' referenced before assignment" error is returned. This is because the exception handling at https://github.com/hwchase17/langchain/blob/master/langchain/memory/chat_message_histories/dynamodb.py#L50 does not determine whether the table exists.
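A hedged sketch of the kind of check that would help, assuming the standard boto3 error shape:
```python
# Hypothetical handling around the DynamoDB read in dynamodb.py
# (logger is assumed to exist in the module)
from botocore.exceptions import ClientError

try:
    response = self.table.get_item(Key={"SessionId": self.session_id})
except ClientError as error:
    if error.response["Error"]["Code"] == "ResourceNotFoundException":
        logger.error("DynamoDB table %s not found", self.table.name)
        return []
    raise
```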
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create a dynamodb chat message history, initialised with a DynamoDB table name that doesn't exist
2. Interact with the message history
### Expected behavior
An error should be logged stating that the dynamodb table was not found | Issue: If DynamoDB table does not exist conversation message history fails with "local variable 'response' referenced before assignment" | https://api.github.com/repos/langchain-ai/langchain/issues/6088/comments | 1 | 2023-06-13T09:41:41Z | 2023-06-19T00:39:20Z | https://github.com/langchain-ai/langchain/issues/6088 | 1,754,472,871 | 6,088 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain 0.0.198,
Python 3.10
AWS Sagemaker environment
### Who can help?
@agola11, @hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import json

from langchain.chains import ConversationalRetrievalChain
from langchain.prompts.prompt import PromptTemplate

prompt_template = """Answer based on context
Context: {context}
Question: {question}"""

TEST_PROMPT = PromptTemplate(
    template=prompt_template, input_variables=["context", "question"]
)

question = 'How do I bake cake?'
# llm and retriever are assumed to be defined elsewhere
chain = ConversationalRetrievalChain.from_llm(llm=llm, condense_question_prompt=TEST_PROMPT, retriever=retriever, return_source_documents=True, verbose=True)
chat_history = []
chain({"chat_history": chat_history, "question": question})
```
### Expected behavior
The expected behavior is that the chain uses the given TEST_PROMPT when sending the prompt to the LLM, which is not happening in the original behavior. | Conversational Retriever Chain - condense_question_prompt parameter is not being considered. | https://api.github.com/repos/langchain-ai/langchain/issues/6087/comments | 2 | 2023-06-13T09:35:40Z | 2023-09-22T16:08:04Z | https://github.com/langchain-ai/langchain/issues/6087 | 1,754,459,647 | 6,087 |
[
"langchain-ai",
"langchain"
] | ### The search tool only uses its snippets to answer the question, which is not sufficient
I found that when calling search tools like 'bing search' or 'google search' in an agent, the original APIs simply return several URLs and their snippets (the main topic of the whole webpage).
However, LangChain directly uses these snippets to answer the question, like in bing search:
```python
def run(self, query: str) -> str:
"""Run query through BingSearch and parse result."""
snippets = []
results = self._bing_search_results(query, count=self.k)
if len(results) == 0:
return "No good Bing Search Result was found"
for result in results:
snippets.append(result["snippet"])
return " ".join(snippets)
```
This concise message passed to the large language model is not sufficient to answer detailed questions, thus leading to bad answers.
Does anyone have a solution to this?
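One hedged workaround in the meantime: use the wrapper's `results()` to get the links, then load the full pages yourself and feed those documents to the model (both APIs exist in langchain; the query and `num_results` here are placeholders):
```python
from langchain.utilities import BingSearchAPIWrapper
from langchain.document_loaders import WebBaseLoader

search = BingSearchAPIWrapper()
links = [r["link"] for r in search.results("my query", num_results=3)]

docs = []
for url in links:
    docs.extend(WebBaseLoader(url).load())  # full page text instead of snippets
```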
### Suggestion:
_No response_ | Issue: <search tool only uses its snippets to answer the question, that is not sufficient> | https://api.github.com/repos/langchain-ai/langchain/issues/6085/comments | 2 | 2023-06-13T09:31:24Z | 2023-07-17T10:25:04Z | https://github.com/langchain-ai/langchain/issues/6085 | 1,754,450,599 | 6,085 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Custom Prompt formatted and used by `load_qa_with_sources` chain is not giving the answer and saying "Sorry, I am unable to answer" (message to be given in the prompt) but the same prompt when used with simple `LLMChain` or `LLM` directly gives back the expected answer.
1) custom prompt + retrieved documents provided to `load_qa_with_sources` (stuff) chain:

2) Same prompt when used directly with `LLMChain`:

Why the same prompt is causing two different results if the underlying LLM is the same?
### Suggestion:
Following are the versions of the packages used for testing purposes:
- OpenAI: 0.27.8 and 0.27.2
- LangChain: 0.0.198 and 0.0.142 | Weird: Same prompt works with LLMChain but not with load_qa_with_sources Chain | https://api.github.com/repos/langchain-ai/langchain/issues/6084/comments | 1 | 2023-06-13T09:24:12Z | 2023-09-19T16:08:19Z | https://github.com/langchain-ai/langchain/issues/6084 | 1,754,435,752 | 6,084 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain version: 0.0.198
Platform: Ubuntu 20.04 LTS
Python version: 3.10.4
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Reproduce tutorial [How to add Memory to an LLMChain](https://python.langchain.com/en/latest/modules/memory/examples/adding_memory.html)
2. Get the same warning for each invocation of llm_chain.predict():
```Error in on_chain_start callback: 'name'```
3. The same warning appears for the following different configurations (only the difference from the original code is shown):
3.1 `llm = ChatOpenAI(temperature=0)`
3.2
```python
chain = ConversationChain(
    memory=memory,
    verbose=True,
    prompt=prompt_template,
    llm=llm,
    input_key='human_input'
)
```
3.3 `memory = ConversationBufferMemory(memory_key='chat_history', return_messages=True)`
### Expected behavior
There should be no such warnings (or the actions that resolve them should be specified in the warning and/or the documentation). | "Error in on_chain_start callback: 'name'" warning for tutorial "How to add Memory to an LLMChain" | https://api.github.com/repos/langchain-ai/langchain/issues/6083/comments | 14 | 2023-06-13T09:12:55Z | 2024-02-01T10:19:43Z | https://github.com/langchain-ai/langchain/issues/6083 | 1,754,411,684 | 6,083 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
When executing `arun()` of the `MapReduceDocumentsChain` multiple times in an async loop, the requests are not run concurrently.
Given the example:
```python
import asyncio
from langchain.llms import OpenAI
from langchain.chains.summarize import load_summarize_chain
from langchain.schema import Document
summary_chain = load_summarize_chain(
OpenAI(temperature=0),
chain_type="map_reduce",
)
docs = [
Document(page_content="A doc"),
Document(page_content="Another doc"),
]
tasks = [
summary_chain.arun({"input_documents": docs}),
summary_chain.arun({"input_documents": docs})
]
await asyncio.gather(*tasks)
```
While the mapping step is run asynchronously, the combine step is not. We can see that the last two requests run sequentially:
<img width="817" alt="image" src="https://github.com/hwchase17/langchain/assets/872712/0472a000-4eb2-43bd-8f97-2e66869c857c">
This is due to the async `acombine_docs()` method also calling the synchronous `_process_results()` method.
```python
async def acombine_docs(...):
    ...
    return self._process_results(...)

def _process_results(...):
    ...
```
### Suggestion:
## How I can contribute
Will provide a PR!
The fact that a redundant mapping call is made in case the `input_documents` array contains a single document is addressed in https://github.com/hwchase17/langchain/pull/5942
@agola11 | Enhancement: Multiple calls of MapReduceDocumentsChain should run asynchronously | https://api.github.com/repos/langchain-ai/langchain/issues/6082/comments | 3 | 2023-06-13T07:28:16Z | 2023-07-06T07:30:03Z | https://github.com/langchain-ai/langchain/issues/6082 | 1,754,224,763 | 6,082 |
[
"langchain-ai",
"langchain"
] | ### Feature request
It would be really cool if we could specify a website like `www.example.edu/*` and it would get all the accessible pages from it and load them using WebBaseLoader.
### Motivation
I'm always frustrated when I have to make a list of websites and pass it to the loader. Instead, this way we could query the loader and get all the relevant web pages.
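In the meantime, a hedged partial workaround for sites that publish a sitemap, using the existing SitemapLoader (the sitemap URL below is a placeholder):
```python
from langchain.document_loaders.sitemap import SitemapLoader

# Enumerates and loads every page listed in the site's sitemap
loader = SitemapLoader("https://www.example.edu/sitemap.xml")
docs = loader.load()
```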
### Your contribution
Not sure how I can contribute. | For web page loaders | https://api.github.com/repos/langchain-ai/langchain/issues/6081/comments | 3 | 2023-06-13T05:45:49Z | 2023-09-12T16:17:01Z | https://github.com/langchain-ai/langchain/issues/6081 | 1,754,083,714 | 6,081 |
[
"langchain-ai",
"langchain"
] | ### HuggingFaceEmbeddings cannot take the trust_remote_code argument

### Suggestion:
_No response_ | Issue: HuggingFaceEmbeddings can not take trust_remote_code argument | https://api.github.com/repos/langchain-ai/langchain/issues/6080/comments | 19 | 2023-06-13T05:42:23Z | 2024-05-19T15:11:34Z | https://github.com/langchain-ai/langchain/issues/6080 | 1,754,080,865 | 6,080 |
[
"langchain-ai",
"langchain"
] | ### I want to load the webpage below.
Hi,
Trying to extract a webpage using WebBaseLoader:
```python
loader = WebBaseLoader("https://researchadmin.asu.edu/")
data = loader.load()
```
But it gives the following error:
```
SSLError: HTTPSConnectionPool(host='researchadmin.asu.edu', port=443): Max retries exceeded with url: / (Caused by
SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get
local issuer certificate (_ssl.c:1002)')))
```
It is a public web page. Can anyone help?
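One hedged workaround for testing, assuming WebBaseLoader forwards `requests_kwargs` to requests (note that `verify=False` disables certificate checks, so don't use it in production):
```python
loader = WebBaseLoader("https://researchadmin.asu.edu/")
loader.requests_kwargs = {"verify": False}  # skip SSL verification
data = loader.load()
```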
### Suggestion:
_No response_ | Issue: Can't load a public webpage | https://api.github.com/repos/langchain-ai/langchain/issues/6079/comments | 1 | 2023-06-13T05:40:52Z | 2023-06-17T18:10:50Z | https://github.com/langchain-ai/langchain/issues/6079 | 1,754,079,026 | 6,079 |
[
"langchain-ai",
"langchain"
] | ### System Info
When trying to connect to Azure Redis I get the following error:
unknown command `MODULE`, with args beginning with: `LIST`,
Here is the code:
fileName = "somefile.pdf"
loader = PyPDFLoader(fileName)
docs = loader.load_and_split()
redis_conn = f"rediss://:{utils.REDIS_PWD}@{utils.REDIS_HOST}:{utils.REDIS_PORT}"
rds = Redis.from_documents(docs, embeddings, redis_url=redis_conn, index_name='link')
Important:
Connecting to Redis like this works!
```python
r = redis.StrictRedis(host=utils.REDIS_HOST, port=utils.REDIS_PORT, db=0, password=utils.REDIS_PWD, ssl=True)
r.set('foo', 'bar')
r.get('foo')
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
This works:
```python
import redis
r = redis.StrictRedis(host=utils.REDIS_HOST, port=utils.REDIS_PORT, db=0, password=utils.REDIS_PWD, ssl=True)
r.set('foo', 'bar')
r.get('foo')
```
This does not:
```python
from langchain.document_loaders import PyPDFLoader  # for loading the pdf
from langchain.vectorstores.redis import Redis
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(model=OPENAI_EMBEDDING_MODEL_NAME, chunk_size=1)
fileName = "somefile.pdf"
loader = PyPDFLoader(fileName)
docs = loader.load_and_split()
redis_conn = f"rediss://:{utils.REDIS_PWD}@{utils.REDIS_HOST}:{utils.REDIS_PORT}"
rds = Redis.from_documents(docs, embeddings, redis_url=redis_conn, index_name='link')
```
### Expected behavior
Save embeddings into Redis. | unknown command `MODULE`, with args beginning with: `LIST`, | https://api.github.com/repos/langchain-ai/langchain/issues/6075/comments | 13 | 2023-06-13T03:03:19Z | 2024-02-14T10:19:47Z | https://github.com/langchain-ai/langchain/issues/6075 | 1,753,945,024 | 6,075 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I want to use Chroma's where_document parameter to do precise filtering of the content during retrieval, but I found in the source code that the where_document parameter is not passed when the similarity_search_by_vector method is finally called.

### Suggestion:
I hope to be able to pass where_document in the retriever's retrieval parameters, just like the existing filter is passed to where; the code would look as follows:
```py
search_kwargs = {
"k": k,
"filter": filter,
"where_document": {"$contains": "1000001"}
}
retriever = vectordb.as_retriever(
search_kwargs=search_kwargs
)
``` | Issue: chroma where_document parameter passed in search_kwargs is invalid | https://api.github.com/repos/langchain-ai/langchain/issues/6073/comments | 2 | 2023-06-13T02:46:14Z | 2023-12-08T16:06:50Z | https://github.com/langchain-ai/langchain/issues/6073 | 1,753,930,104 | 6,073 |
[
"langchain-ai",
"langchain"
] | ### System Info
## Description
While using the Langchain application, I am frequently encountering an error that relates to rate limiting when invoking OpenAI's API. This tends to occur when I try to perform multiple translations consecutively or concurrently, causing a significant interruption to the user experience.
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
## Steps to Reproduce
1. Open the Langchain application.
2. Select a source language and enter some text for translation.
3. Choose the target language and submit for translation.
4. Repeat steps 2-3 multiple times in quick succession or concurrently.
### Expected behavior
## Expected Behavior
The application should be able to handle multiple translation requests without any disruptions, including but not limited to rate limit errors from OpenAI's API.
## Actual Behavior
When submitting multiple translation requests quickly or at the same time, a rate limit error is produced and no translations are returned. The error message is as follows:
`Error: OpenAI API rate limit exceeded`
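Regarding the retry question below: one hedged mitigation is the built-in retry support on the OpenAI wrappers; `max_retries` is an existing parameter, and the value here is arbitrary:
```python
from langchain.llms import OpenAI

# Retries rate-limited requests with exponential backoff before failing
llm = OpenAI(temperature=0, max_retries=10)
```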
How to implement retry logic with LangChain? | Langchain rate limit error while invoking OpenAI API | https://api.github.com/repos/langchain-ai/langchain/issues/6071/comments | 2 | 2023-06-13T00:04:42Z | 2023-09-20T16:08:31Z | https://github.com/langchain-ai/langchain/issues/6071 | 1,753,776,827 | 6,071 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Started a local server, just to learn more about it, with `langchain plus start`. It says that I can generate a new API key, as there appears to be an endpoint on port 1984. But when I click the menu there's no **Settings** option there.
---
<img width="1787" alt="Screenshot 2023-06-12 at 16 32 31" src="https://github.com/hwchase17/langchain/assets/3484029/e42ef803-f216-486a-84f3-6c4026a37a8f">
### Suggestion:
_No response_ | Issue: LangChain Plus Api Key | https://api.github.com/repos/langchain-ai/langchain/issues/6059/comments | 2 | 2023-06-12T19:35:50Z | 2023-09-23T16:05:18Z | https://github.com/langchain-ai/langchain/issues/6059 | 1,753,439,350 | 6,059 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hello,
I am having trouble running my code due to an error that I encountered. The error message that I received is "ModuleNotFoundError: No module named 'langchain.callbacks.shared'". I am not sure how to resolve this issue and would appreciate any help or guidance.
This line of code is using the `pickle` module to load data from a file. The `pickle.load(file)` function reads the pickled representation of an object from the open file object `file` and returns the reconstituted object hierarchy specified therein.
In this case, the returned object is expected to be a tuple with two elements, which are assigned to the variables `self.chain` and `self.vectorstore`, respectively. This means that the first element of the tuple is assigned to `self.chain`, and the second element is assigned to `self.vectorstore`.
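A hedged reconstruction of the failing line as described (the file path variable is a placeholder):
```python
import pickle

with open(path, "rb") as file:
    # Unpickling re-imports the modules recorded in the file, which is where
    # the missing 'langchain.callbacks.shared' module is being looked up
    self.chain, self.vectorstore = pickle.load(file)
```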
Here is some information about my environment that may be helpful:
- OS: Ubuntu
- Python version: 3.10
- Langchain Version: 0.0.198
Issue Screen Shot:
<br />

Thank you in advance for your help and support in resolving this issue.
### Suggestion:
_No response_ | Issue: ModuleNotFoundError: No module named 'langchain.callbacks.shared' | https://api.github.com/repos/langchain-ai/langchain/issues/6058/comments | 2 | 2023-06-12T18:33:04Z | 2023-09-27T16:06:14Z | https://github.com/langchain-ai/langchain/issues/6058 | 1,753,330,704 | 6,058 |
[
"langchain-ai",
"langchain"
] | ### System Info
Python version: 3.10.11 (main, May 17 2023, 14:30:36) [Clang 14.0.6 ]
pymongo 4.3.3
langchain 0.0.190
### Who can help?
@eyurtsev @agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [x] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I religiously followed this tutorial:
https://python.langchain.com/en/stable/modules/indexes/vectorstores/examples/mongodb_atlas_vector_search.html
Despite that I get the following error.
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[14], line 12
10 # perform a similarity search between the embedding of the query and the embeddings of the documents
11 query = "my query?"
---> 12 docs = vectorStore.similarity_search(query)
File ~/opt/miniconda3/envs/LLM_langchain_exploration/lib/python3.10/site-packages/langchain/vectorstores/mongodb_atlas.py:222, in MongoDBAtlasVectorSearch.similarity_search(self, query, k, pre_filter, post_filter_pipeline, **kwargs)
194 def similarity_search(
195 self,
196 query: str,
(...)
200 **kwargs: Any,
201 ) -> List[Document]:
202 """Return MongoDB documents most similar to query.
203
204 Use the knnBeta Operator available in MongoDB Atlas Search
(...)
220 List of Documents most similar to the query and score for each
221 """
--> 222 docs_and_scores = self.similarity_search_with_score(
223 query,
224 k=k,
225 pre_filter=pre_filter,
226 post_filter_pipeline=post_filter_pipeline,
227 )
228 return [doc for doc, _ in docs_and_scores]
File ~/opt/miniconda3/envs/LLM_langchain_exploration/lib/python3.10/site-packages/langchain/vectorstores/mongodb_atlas.py:189, in MongoDBAtlasVectorSearch.similarity_search_with_score(self, query, k, pre_filter, post_filter_pipeline)
187 docs = []
188 for res in cursor:
--> 189 text = res.pop(self._text_key)
190 score = res.pop("score")
191 docs.append((Document(page_content=text, metadata=res), score))
KeyError: 'text'
```
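One hedged thing to check: `text_key` defaults to `"text"`, so if the documents in the collection store their content under a different field, it can be passed explicitly (the field name below is hypothetical):
```python
vectorStore = MongoDBAtlasVectorSearch(
    collection, embeddings, index_name="default", text_key="content"
)
```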
### Expected behavior
It should get the docs back. | When using MongoDBAtlasVectorSearch i get KeyError: 'text' despite having the collection populated | https://api.github.com/repos/langchain-ai/langchain/issues/6055/comments | 3 | 2023-06-12T17:58:39Z | 2023-10-14T20:12:22Z | https://github.com/langchain-ai/langchain/issues/6055 | 1,753,265,916 | 6,055 |
[
"langchain-ai",
"langchain"
] |
Hello,
Is there a way to track progress when giving a list of inputs to an LLMChain object, using tqdm for example?
I didn't see any parameter that would allow me to use tqdm.
I also checked if I could write a Callback for this, but the hooks don't seem to allow for that.
Has anyone managed to use a progress bar?
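One avenue that might work (a sketch, assuming the standard callback hooks): a custom callback handler that ticks a tqdm bar on every LLM call end.
```python
from langchain.callbacks.base import BaseCallbackHandler
from tqdm import tqdm

class TqdmCallback(BaseCallbackHandler):
    def __init__(self, total: int):
        self.pbar = tqdm(total=total)

    def on_llm_end(self, response, **kwargs):
        # One tick per completed LLM call
        self.pbar.update(1)

# usage (hypothetical): one progress tick per input
# llm_chain.apply(inputs, callbacks=[TqdmCallback(total=len(inputs))])
```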
| Progress bar for LLMChain | https://api.github.com/repos/langchain-ai/langchain/issues/6053/comments | 7 | 2023-06-12T16:21:29Z | 2024-07-10T16:52:55Z | https://github.com/langchain-ai/langchain/issues/6053 | 1,753,106,960 | 6,053 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.195
python 3.9
the client doesn't recognize 'MY_TABLE' and 'my_table' as the same table.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When initializing the SQL client, add the param `include_tables` and set the strings in upper-case fashion:
`sql_database = SQLDatabase(engine,view_support=True, include_tables=["MY_TRADE"])`
you should see this error:
`ValueError: include_tables {'MY_TRADE'} not found in database`
However, everything should go through with:
`sql_database = SQLDatabase(engine,view_support=True, include_tables=["my_trade"])`
### Expected behavior
`sql_database = SQLDatabase(..., include_tables=["MY_TRADE"])`
to be equal to
`sql_database = SQLDatabase(..., include_tables=["my_trade"])` | tables names are not case insensitive in the Snowflake Client | https://api.github.com/repos/langchain-ai/langchain/issues/6052/comments | 1 | 2023-06-12T16:01:50Z | 2023-06-12T17:44:27Z | https://github.com/langchain-ai/langchain/issues/6052 | 1,753,074,942 | 6,052 |
[
"langchain-ai",
"langchain"
] | ### System Info
Python 3.10.6
chromadb==0.3.22
langchain==0.0.194
### Who can help?
similarity_search_with_score with Chroma DB keeps a higher score for less relevant documents.
```
embeddings = OpenAIEmbeddings(
model="text-embedding-ada-002",
openai_api_key = openai.api_key,
chunk_size=1
)
db = Chroma(collection_name="docs", embedding_function=embeddings, persist_directory=vector_db_path)
question = """What is the rate of arthralgia in the combined RCP and OLP inebilizumab population?"""
[d[1] for d in db.similarity_search_with_score(question, k=5 )]
```
```
[0.3035728335380554,
0.3159480392932892,
0.3345768451690674,
0.3543674945831299,
0.36075425148010254]
```
```
[d[1] for d in db.similarity_search_with_score(question, k=10 )]
[0.3035728335380554,
0.3159480392932892,
0.3345768451690674,
0.3543674945831299,
0.36075425148010254,
0.36337000131607056,
0.3656774163246155,
0.36993658542633057,
0.37518084049224854,
0.3755079507827759]
```
It seems more like I should be doing (1 - score) to filter more relevant documents. So I tried to test with a similarity threshold of .35, but then it returns the least similar docs (as .30 was more similar).
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
question = """What is the rate of arthralgia in the combined RCP and OLP inebilizumab population?"""
[d[1] for d in db.similarity_search_with_score(question, k=5 )]
[d[1] for d in db.similarity_search_with_score(question, k=10 )]
### Expected behavior
I would expect a higher similarity score for documents that are earlier in the returned list (where the document is more related but has a lower score). | similarity_search_with_score witn Chroma DB keeps higher score for less relevant documents. | https://api.github.com/repos/langchain-ai/langchain/issues/6046/comments | 3 | 2023-06-12T13:57:49Z | 2023-12-30T08:59:06Z | https://github.com/langchain-ai/langchain/issues/6046 | 1,752,825,122 | 6,046 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am trying to load llama-cpp embeddings into a PostgreSQL database with the vector extension. The db has the vector extension installed. I am using a user that has the correct privileges to create a new table and insert the embeddings. The code runs smoothly without any errors, but the collection is not being created as a new table in the db. The db name is `embeddings` and the schema name is `vector_store`.
From pgadmin I can see that the there is the following query, which means the embeddings are created and are ready to be loaded:
`INSERT INTO langchain_pg_embedding (collection_id, embedding, document, cmetadata, custom_id, uuid) VALUES ('d10c0cd8-300f-4592-aa2a-42f827...`
However, it seems that no new table is created and the table names do not correspond to the collection name I am setting - `vector_store.test_table`. Any help will be appreciated.
packages:
pgvector 0.1.8
psycopg2-binary 2.9.6
langchain 0.0.149
This is my code:
```
from langchain import PromptTemplate, LLMChain
from langchain.llms import GPT4All
from langchain.callbacks.base import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.document_loaders import TextLoader
from langchain.document_loaders import UnstructuredPDFLoader
from langchain.document_loaders import PyPDFLoader
from langchain.document_loaders import DirectoryLoader
from langchain.indexes import VectorstoreIndexCreator
from langchain.embeddings import LlamaCppEmbeddings
from langchain.vectorstores.faiss import FAISS
import os
import datetime
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores.pgvector import PGVector
from langchain.docstore.document import Document
gpt4all_path = '../models/gpt4all-converted.bin'
llama_path = '../models/ggml-model-q4_0.bin'
embeddings = LlamaCppEmbeddings(model_path=llama_path)
loader = TextLoader('../data/test.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
docs = text_splitter.split_documents(documents)
CONNECTION_STRING = PGVector.connection_string_from_db_params(
driver=os.environ.get("PGVECTOR_DRIVER", "psycopg2"),
host=os.environ.get("PGVECTOR_HOST", "xxxxxxxxxxx"),
port=int(os.environ.get("PGVECTOR_PORT", "5432")),
database=os.environ.get("PGVECTOR_DATABASE", "embeddings"),
user=os.environ.get("PGVECTOR_USER", "xxxxxxxx"),
password=os.environ.get("PGVECTOR_PASSWORD", "xxxxxxxxxxx"),
)
db = PGVector.from_documents(
embedding=embeddings,
documents=docs,
collection_name="vector_store.test_table",
connection_string=CONNECTION_STRING,
pre_delete_collection=True
)
```
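For anyone else debugging this: as far as I can tell from the INSERT above, LangChain's PGVector does not create one table per collection — collections live as rows in `langchain_pg_collection` and the vectors go into `langchain_pg_embedding`. A quick way to check whether the collection was created (sketch, reusing the connection string built above):
```python
from sqlalchemy import create_engine, text

engine = create_engine(CONNECTION_STRING)
with engine.connect() as conn:
    rows = conn.execute(text("SELECT name, uuid FROM langchain_pg_collection")).fetchall()
    print(rows)  # the "collection" should appear here as a row, not as its own table
```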
### Suggestion:
_No response_ | Issue: langchain + pgvector fail to create new tables in postgresql db | https://api.github.com/repos/langchain-ai/langchain/issues/6045/comments | 9 | 2023-06-12T13:29:06Z | 2024-02-05T12:55:09Z | https://github.com/langchain-ai/langchain/issues/6045 | 1,752,773,658 | 6,045 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Langchain agent performs multiple intermediate steps before returning a response. We want to emit an event for every intermediate step during processing, so that the client-side can be updated about what's being processed at any given time.
For example, if an intermediate steps involves searching Google, we can then emit an event that informs the client about the search taking place.
This could massively improve UX.
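In the meantime, something close to this already seems possible with the existing callbacks API (a sketch; `notify_client` stands in for whatever transport the app uses):
```python
from typing import Any, Dict

from langchain.callbacks.base import BaseCallbackHandler

def notify_client(event: str) -> None:
    print(event)  # stand-in: push over a WebSocket / SSE channel instead

class StepNotifier(BaseCallbackHandler):
    def on_tool_start(self, serialized: Dict[str, Any], input_str: str, **kwargs: Any) -> None:
        notify_client(f"Using tool {serialized.get('name')}: {input_str}")

    def on_tool_end(self, output: str, **kwargs: Any) -> None:
        notify_client("Tool finished")

# agent.run(query, callbacks=[StepNotifier()])
```
That said, a first-class event stream for intermediate steps would still be much nicer than wiring this up per app.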
### Motivation
The motivation comes from wanting to improve UX. We're developing a langchain-based app and this feature is sorely missed.
### Your contribution
We could definitely submit a PR if you give a little bit of guidance on how to go about with it. | Emiting events during processing of intermediate steps | https://api.github.com/repos/langchain-ai/langchain/issues/6041/comments | 2 | 2023-06-12T12:00:43Z | 2023-09-18T16:07:58Z | https://github.com/langchain-ai/langchain/issues/6041 | 1,752,603,109 | 6,041 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Make [modelname_to_contextsize](https://github.com/hwchase17/langchain/blob/289e9aeb9d122d689d68b2e77236ce3dfcd606a7/langchain/llms/openai.py#L503) a static method so it can be used without creating an object.
### Motivation
While using ChatOpenAI or AzureChatOpenAI, calling modelname_to_contextsize requires creating an OpenAI or AzureOpenAI object even though we don't otherwise use it.
For example, llama-index uses [modelname_to_contextsize](https://github.com/jerryjliu/llama_index/blob/f614448a045788c9c5c9a774f407a992ae1f7743/llama_index/llm_predictor/base.py#L42) to get the context size, but it raises an error when using AzureOpenAI without setting OPENAI_API_TOKEN.
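What the change would enable (a sketch of the intended usage; hypothetical until the linked PR lands):
```python
from langchain.llms import OpenAI

# no object construction (and hence no API key) needed once the method is static
n_ctx = OpenAI.modelname_to_contextsize("gpt-3.5-turbo")
```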
### Your contribution
#6040 | Make modelname_to_contextsize as a staticmethod to use it without create an object | https://api.github.com/repos/langchain-ai/langchain/issues/6039/comments | 0 | 2023-06-12T10:23:07Z | 2023-06-23T11:58:44Z | https://github.com/langchain-ai/langchain/issues/6039 | 1,752,416,131 | 6,039 |
[
"langchain-ai",
"langchain"
] | ### System Info
I followed the steps to install gpt4all based on this repo
https://github.com/nomic-ai/gpt4all/tree/main/gpt4all-bindings/python
I have the latest version of langchain
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
# imports reconstructed (assumption): the traceback mentions LangChain's Serializable,
# so `gpt4all` here must be the langchain.llms.gpt4all module, not the standalone package
from langchain import LLMChain, PromptTemplate
from langchain.llms import gpt4all

def mainllm():
    template = """Question: {question}"""  # closing quotes restored; the original paste lost them
    prompt = PromptTemplate(template=template, input_variables=["question"])
    local_path = './models/gpt4all-converted.bin'  # unused in this repro
    llm = gpt4all.GPT4All("ggml-gpt4all-j-v1.3-groovy.bin")
    llm_chain = LLMChain(prompt=prompt, llm=llm)
    question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
    llm_chain.run(question)

if __name__ == '__main__':
    mainllm()
```
This is the error I get, related to these lines of code:
```
Traceback (most recent call last):
File "gpt4all.py", line 174, in <module>
mainllm()
File "gpt4all.py", line 166, in mainllm
llm = gpt4all.GPT4All("ggml-gpt4all-j-v1.3-groovy.bin")
TypeError: Serializable.__init__() takes 1 positional argument but 2 were given
```
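My best guess at the cause (inferred from the traceback, not verified): LangChain's `GPT4All` wrapper is a pydantic model, so it only accepts keyword arguments:
```python
from langchain.llms import GPT4All

llm = GPT4All(model="ggml-gpt4all-j-v1.3-groovy.bin")  # keyword, not positional
```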
### Expected behavior
Could you help me understand what is wrong?
Thanks | Validation error for gpt4all | https://api.github.com/repos/langchain-ai/langchain/issues/6038/comments | 5 | 2023-06-12T09:44:27Z | 2023-09-20T16:08:46Z | https://github.com/langchain-ai/langchain/issues/6038 | 1,752,342,167 | 6,038 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Report
I am currently reading the documentation for version 0.0.197 and testing it step by step. When I reached the "agents-with-chat-models" section, I encountered a type error in the code. The specific situation is as follows:
[doc link](https://python.langchain.com/en/latest/getting_started/getting_started.html#agents-with-chat-models)
```python
# this is my code
llm = AzureOpenAI(
model_name = os.environ["GPT_ENGINE"],
deployment_name = os.environ["GPT_ENGINE"],
openai_api_key = os.environ["API_KEY"],
temperature = 0,
max_tokens = 1000,
top_p = 0.95,
frequency_penalty = 0,
presence_penalty = 0
)
chat = AzureChatOpenAI(
model_name = os.environ["GPT_ENGINE"],
deployment_name = os.environ["GPT_ENGINE"],
openai_api_key = os.environ["API_KEY"],
openai_api_base = os.environ["API_BASE"],
openai_api_version= os.environ["API_VERSION"],
temperature = 0,
max_tokens = 1000
)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
# Finally, let's initialize an agent with the tools, the language model, and the type of agent we want to use.
agent = initialize_agent(tools, chat, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
# Now let's test it out!
agent.run("Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?").format
```
```log-output
> Entering new AgentExecutor chain...
Thought: I need to use a search engine to find Olivia Wilde's boyfriend and a calculator to raise his age to the 0.23 power.
Action:
{
"action": "Search",
"action_input": "Olivia Wilde boyfriend"
}
Observation: Looks like Olivia Wilde and Jason Sudeikis are starting 2023 on good terms. Amid their highly publicized custody battle – and the actress' ...
Thought:Now I need to use a calculator to raise Jason Sudeikis' age to the 0.23 power.
Action:
{
"action": "Calculator",
"action_input": "pow(47, 0.23)"
}
```
```error-output
ValueError: LLMMathChain._evaluate("
pow(47, 0.23)
") raised error: 'VariableNode' object is not callable. Please try again with a valid numerical expression
```
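For what it's worth, the failure seems to come from `numexpr` (which `LLMMathChain` uses) not supporting the `pow()` call syntax, while the operator form works — a quick check:
```python
import numexpr

print(numexpr.evaluate("47**0.23"))       # fine
print(numexpr.evaluate("pow(47, 0.23)"))  # raises: 'VariableNode' object is not callable
```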
### Idea or request for content:
_No response_ | Report:Discussion about the bug mentioned in the documentation | https://api.github.com/repos/langchain-ai/langchain/issues/6037/comments | 1 | 2023-06-12T09:43:17Z | 2023-09-15T22:13:04Z | https://github.com/langchain-ai/langchain/issues/6037 | 1,752,340,270 | 6,037 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain = "^0.0.197"
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
``` python
import asyncio
from langchain.utilities import TextRequestsWrapper
requests = TextRequestsWrapper()
async def fun():
ret = await requests.apost("http://127.0.0.1:8080", data={"data": 123})
if __name__ == '__main__':
asyncio.run(fun())
```
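From a quick look at the wrapper source (paraphrased from memory, so treat it as an assumption), the async path seems to forward only the URL and drop `data`, which would explain the TypeError:
```python
# langchain/requests.py — roughly what TextRequestsWrapper delegates to
from typing import Any, Dict

async def apost(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str:
    async with self.requests.apost(url, **kwargs) as response:  # `data` is never passed through
        return await response.text()
```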
### Expected behavior

| TypeError("Requests.apost() missing 1 required positional argument: 'data'") | https://api.github.com/repos/langchain-ai/langchain/issues/6034/comments | 3 | 2023-06-12T08:37:26Z | 2023-09-25T09:49:03Z | https://github.com/langchain-ai/langchain/issues/6034 | 1,752,222,940 | 6,034 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Let users easily define the value of `k`, which is the number of documents retrieved, when specifying a VectorStore.
### Motivation
When working with documents, users typically need to specify a VectorStore, and they often want to define the number of documents retrieved — the `k` — when the similarity search runs behind the scenes. For example, I am doing
```
docsearch = Chroma.from_documents(texts1 + texts3, embeddings)
retriever=docsearch.as_retriever(search_kwargs = {'k':1})
```
However, there is no easy way or clear documentation for this. I had to go all the way through the source code to find that I can do it by adding `search_kwargs = {'k':1}` when specifying the retriever. This is not user-friendly, especially since this is a common feature that users need.
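Something like this is what I'd consider discoverable (purely hypothetical API — it does not exist today):
```python
retriever = docsearch.as_retriever(k=1)  # hypothetical shorthand for search_kwargs={'k': 1}
```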
### Your contribution
I would love to open a PR for this. I am willing to either change the code or writing a clearer documentation for this feature. | Add clearer API for defining `k` (number of documents retrieved) in VectorStore/retriever defining functions | https://api.github.com/repos/langchain-ai/langchain/issues/6033/comments | 2 | 2023-06-12T08:24:14Z | 2023-10-25T16:08:23Z | https://github.com/langchain-ai/langchain/issues/6033 | 1,752,197,041 | 6,033 |
[
"langchain-ai",
"langchain"
] | ### System Info
@hwchase17
@agola11
Hi. From time to time, I am getting the following error:
```
2023-06-12 02:27:55.993 Uncaught app exception
Traceback (most recent call last):
File "C:\Users\v-alakubov\AppData\Local\anaconda3\envs\py311\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 565, in run_script
exec(code, module.dict)
File "C:\Users\v-alakubov\OneDrive\Desktop\app_v2\src\pages\AskData.py", line 7, in <module>
from modules.table_tool import PandasAgent
File "C:\Users\v-alakubov\OneDrive\Desktop\Listens\app_v2.\src\modules\table_tool.py", line 6, in <module>
from langchain.callbacks import get_openai_callback
File "C:\Users\v-alakubov\AppData\Local\anaconda3\envs\py311\Lib\site-packages\langchain_init.py", line 6, in <module>
from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
File "C:\Users\v-alakubov\AppData\Local\anaconda3\envs\py311\Lib\site-packages\langchain\agents_init_.py", line 2, in <module>
from langchain.agents.agent import (
File "C:\Users\v-alakubov\AppData\Local\anaconda3\envs\py311\Lib\site-packages\langchain\agents\agent.py", line 16, in <module>
from langchain.agents.tools import InvalidTool
File "C:\Users\v-alakubov\AppData\Local\anaconda3\envs\py311\Lib\site-packages\langchain\agents\tools.py", line 8, in <module>
from langchain.tools.base import BaseTool, Tool, tool
File "C:\Users\v-alakubov\AppData\Local\anaconda3\envs\py311\Lib\site-packages\langchain\tools_init_.py", line 46, in <module>
from langchain.tools.powerbi.tool import (
File "C:\Users\v-alakubov\AppData\Local\anaconda3\envs\py311\Lib\site-packages\langchain\tools\powerbi\tool.py", line 11, in <module>
from langchain.chains.llm import LLMChain
File "C:\Users\v-alakubov\AppData\Local\anaconda3\envs\py311\Lib\site-packages\langchain\chains_init_.py", line 7, in <module>
from langchain.chains.conversational_retrieval.base import (
File "C:\Users\v-alakubov\AppData\Local\anaconda3\envs\py311\Lib\site-packages\langchain\chains\conversational_retrieval\base.py", line 22, in <module>
from langchain.chains.question_answering import load_qa_chain
File "C:\Users\v-alakubov\AppData\Local\anaconda3\envs\py311\Lib\site-packages\langchain\chains\question_answering_init_.py", line 13, in <module>
from langchain.chains.question_answering import (
File "C:\Users\v-alakubov\AppData\Local\anaconda3\envs\py311\Lib\site-packages\langchain\chains\question_answering\map_reduce_prompt.py", line 2, in <module>
from langchain.chains.prompt_selector import ConditionalPromptSelector, is_chat_model
File "C:\Users\v-alakubov\AppData\Local\anaconda3\envs\py311\Lib\site-packages\langchain\chains\prompt_selector.py", line 7, in <module>
from langchain.chat_models.base import BaseChatModel
ImportError: cannot import name 'BaseChatModel' from partially initialized module 'langchain.chat_models.base' (most likely due to a circular import) (C:\Users\v-alakubov\AppData\Local\anaconda3\envs\py311\Lib\site-packages\langchain\chat_models\base.py)
2023-06-12 02:27:56.013 Uncaught app exception
Traceback (most recent call last):
File "C:\Users\v-alakubov\AppData\Local\anaconda3\envs\py311\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 565, in _run_script
exec(code, module.__dict__)
File "C:\Users\v-alakubov\OneDrive\Desktop\Listens\app_v2\src\pages\AI-Chat.py", line 8, in <module>
from modules.utils import Utilities
File "C:\Users\v-alakubov\OneDrive\Desktop\Listens\app_v2.\src\modules\utils.py", line 6, in <module>
from modules.chatbot import Chatbot
File "C:\Users\v-alakubov\OneDrive\Desktop\Listens\app_v2.\src\modules\chatbot.py", line 3, in <module>
from langchain.chat_models import AzureChatOpenAI
File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
File "<frozen importlib._bootstrap>", line 1155, in _find_and_load_unlocked
KeyError: 'langchain'
```
any ideas why?
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
Here structure of project:
```
Home.py
pages
modules
embeddings
```
Where pages, modules and embeddings are folders.
"pages" folder has two files:
AI-Chat.py
AskData.py
modules folder has 7 files:
chatbot.py
embedder.py
history.py
layout.py
sidebar.py
table_tool.py
utils.py
Home.py (the file I run as ` streamlit run .\src\Home.py`)
AI-Chat.py:
```
import os
import streamlit as st
from io import StringIO
import re
import sys
from modules.history import ChatHistory
from modules.layout import Layout
from modules.utils import Utilities
from modules.sidebar import Sidebar
....
```
AskData.py:
```
import os
import importlib
import sys
import pandas as pd
import streamlit as st
from io import BytesIO
from modules.table_tool import PandasAgent
from modules.layout import Layout
from modules.utils import Utilities
from modules.sidebar import Sidebar
....
```
chatbot.py:
```
import streamlit as st
# from langchain.chat_models import ChatOpenAI
from langchain.chat_models import AzureChatOpenAI
from langchain.chains import ConversationalRetrievalChain
from langchain.prompts.prompt import PromptTemplate
from langchain.callbacks import get_openai_callback
import os
#fix Error: module 'langchain' has no attribute 'verbose'
import langchain
langchain.verbose = False
import traceback
.....
```
utils.py:
```
import os
import pandas as pd
import streamlit as st
import pdfplumber
from modules.chatbot import Chatbot
from modules.embedder import Embedder
```
embedder.py:
```
import os
import pickle
import tempfile
from langchain.document_loaders.csv_loader import CSVLoader
from langchain.vectorstores import FAISS
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.document_loaders import PyPDFLoader
from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
....
```
table_tool.py:
```
import re
import sys
from io import StringIO, BytesIO
import matplotlib.pyplot as plt
import streamlit as st
from langchain.callbacks import get_openai_callback
from streamlit_chat import message
import os
from pandasai import PandasAI
# from pandasai.llm.openai import OpenAI
from pandasai.llm.azure_openai import AzureOpenAI
....
```
history.py:
```
import os
import streamlit as st
from streamlit_chat import message
......
```
layout.py:
```
import streamlit as st
....
```
sidebar.py:
```
import streamlit as st
.......
```
### Expected behavior
No error should arise. | KeyError: 'langchain' (circular import error) | https://api.github.com/repos/langchain-ai/langchain/issues/6032/comments | 6 | 2023-06-12T07:52:33Z | 2024-01-30T00:42:49Z | https://github.com/langchain-ai/langchain/issues/6032 | 1,752,131,402 | 6,032
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I keep getting this error when generating Chroma vectors. Here's my code:
```python
from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA
from langchain.document_loaders import GoogleDriveLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.text_splitter import RecursiveCharacterTextSplitter

folder_id = ''
loader = GoogleDriveLoader(folder_id=folder_id,
                           recursive=False)
docs = loader.load(encoding='utf8')
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=4000, chunk_overlap=0, separators=["", "\n", ","]
)
texts = text_splitter.split_documents(docs)
embeddings = OpenAIEmbeddings(openai_api_key="")
db = Chroma.from_documents(texts, embeddings)
retriever = db.as_retriever()
llm = ChatOpenAI(temperature=0, model_name="davinci")
qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=retriever)

while True:
    question = input("> ")
    answer = qa.run(question)
    print(answer)
```
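Two side observations while reducing this (assumptions on my part, not confirmed causes): LangChain loaders' `load()` generally takes no `encoding` argument, and `"davinci"` is a completion model, so `ChatOpenAI(model_name="davinci")` would likely fail at query time even once the unpack error is fixed.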
### Suggestion:
_No response_ | ValueError: not enough values to unpack (expected 2, got 1) | https://api.github.com/repos/langchain-ai/langchain/issues/6029/comments | 9 | 2023-06-12T05:50:34Z | 2023-09-19T16:08:44Z | https://github.com/langchain-ai/langchain/issues/6029 | 1,751,946,027 | 6,029 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain version 0.0.196
windows, wsl2
python 3.11.3
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Follow instructions on this page:
https://python.langchain.com/en/latest/modules/agents/plan_and_execute.html
Then run the following prompt:
```
agent.run("Who is Elon Musk in a relationship with? What is their current age factorial?")
```
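For context (my own digging, not from the docs): `LLMMathChain` evaluates expressions with `numexpr`, which has no factorial support, so any `factorial` the planner emits will fail:
```python
import numexpr

numexpr.evaluate("math.factorial(33)")  # raises: 'VariableNode' object has no attribute 'factorial'
```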
### Expected behavior
It should plan out how to solve the problem (search significant other, search age, find factorial, respond) then execute each action and respond to the user. Instead, when it gets to the factorial part, it fails, claiming that there's no factorial function. Stacktrace:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
File ~/miniconda3/envs/langchain/lib/python3.11/site-packages/langchain/chains/llm_math/base.py:80, in LLMMathChain._evaluate_expression(self, expression)
78 local_dict = {"pi": math.pi, "e": math.e}
79 output = str(
---> 80 numexpr.evaluate(
81 expression.strip(),
82 global_dict={}, # restrict access to globals
83 local_dict=local_dict, # add common mathematical functions
84 )
85 )
86 except Exception as e:
File ~/miniconda3/envs/langchain/lib/python3.11/site-packages/numexpr/necompiler.py:817, in evaluate(ex, local_dict, global_dict, out, order, casting, **kwargs)
816 if expr_key not in _names_cache:
--> 817 _names_cache[expr_key] = getExprNames(ex, context)
818 names, ex_uses_vml = _names_cache[expr_key]
File ~/miniconda3/envs/langchain/lib/python3.11/site-packages/numexpr/necompiler.py:704, in getExprNames(text, context)
703 def getExprNames(text, context):
--> 704 ex = stringToExpression(text, {}, context)
705 ast = expressionToAST(ex)
File ~/miniconda3/envs/langchain/lib/python3.11/site-packages/numexpr/necompiler.py:289, in stringToExpression(s, types, context)
288 # now build the expression
...
93 return re.sub(r"^\[|\]$", "", output)
ValueError: LLMMathChain._evaluate("
math.factorial(33)
") raised error: 'VariableNode' object has no attribute 'factorial'. Please try again with a valid numerical expression
``` | LLMMathChain 'VariableNode' object has no attribute 'factorial'. Please try again with a valid numerical expression | https://api.github.com/repos/langchain-ai/langchain/issues/6028/comments | 2 | 2023-06-12T05:42:29Z | 2023-09-18T16:08:14Z | https://github.com/langchain-ai/langchain/issues/6028 | 1,751,938,791 | 6,028 |
[
"langchain-ai",
"langchain"
] | The documentation says:
> It limits the Document content by doc_content_chars_max.
> Set doc_content_chars_max=None if you don't want to limit the content size.
But the declared type of `int` prevents it from being set to `None`:
https://github.com/hwchase17/langchain/blob/289e9aeb9d122d689d68b2e77236ce3dfcd606a7/langchain/utilities/arxiv.py#LL41C5-L41C38
> ValidationError: 1 validation error for ArxivAPIWrapper
> doc_content_chars_max
> none is not an allowed value (type=type_error.none.not_allowed)
Can you change that?
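Concretely, the change I'd expect (a sketch; assumes pydantic's usual `Optional` handling):
```python
from typing import Optional

from pydantic import BaseModel

class ArxivAPIWrapper(BaseModel):  # sketch of just the field change
    doc_content_chars_max: Optional[int] = 4000  # None would disable truncation
```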
In addition, can you also expose this parameter to the `ArxivLoader`?
Thank you! | ArxivAPIWrapper | https://api.github.com/repos/langchain-ai/langchain/issues/6027/comments | 0 | 2023-06-12T05:30:46Z | 2023-06-16T05:16:43Z | https://github.com/langchain-ai/langchain/issues/6027 | 1,751,928,656 | 6,027 |
[
"langchain-ai",
"langchain"
] | ### System Info
python 3.9.13
langchain (0.0.163)
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import os
import openai
import pandas as pd

from langchain.chat_models import AzureChatOpenAI
from langchain.agents import create_pandas_dataframe_agent
from langchain.agents import create_csv_agent

os.environ["OPENAI_API_BASE"] = os.environ["AZURE_OPENAI_ENDPOINT"] = AZURE_OPENAI_ENDPOINT
os.environ["OPENAI_API_KEY"] = os.environ["AZURE_OPENAI_API_KEY"] = AZURE_OPENAI_API_KEY
os.environ["OPENAI_API_VERSION"] = os.environ["AZURE_OPENAI_API_VERSION"] = AZURE_OPENAI_API_VERSION
os.environ["OPENAI_API_TYPE"] = "azure"

df = pd.read_csv('data.csv').fillna(value=0)
llm = AzureChatOpenAI(deployment_name="gpt-35-turbo", model_name="gpt-35-turbo", temperature=0)
agent_executor = create_pandas_dataframe_agent(llm=llm, df=df, verbose=True)
response = agent_executor.run(prompt + QUESTION)  # prompt and QUESTION defined elsewhere
print(response)
```
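For what it's worth, the agent constructor appears to expose tuning knobs that may help here (worth verifying against the installed version, so treat these as assumptions):
```python
agent_executor = create_pandas_dataframe_agent(
    llm=llm,
    df=df,
    verbose=True,
    max_iterations=30,                 # raise the iteration cap
    early_stopping_method="generate",  # produce a final answer instead of bailing out
)
```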
### Expected behavior
The current execution of agent results in errors like "Agent stopped due to iteration limit or time limit" and "Couldn't parse LLM output".
Also, the agent gets into loops while trying to identify a valid tool: it fails to find one, reports it as "not a valid tool", tries another, and ultimately stops at the iteration or time limit.
Kindly advise on how to resolve this.
Is there any particular tabular data structure that langchain works best with?
Do I need to add any particular kind of tool (As observed it tries multiple tools and says not valid tool)
Any modifications I need to make with the code? | Issue : Agent Executor stops due to iteration limit or time limit. | https://api.github.com/repos/langchain-ai/langchain/issues/6025/comments | 4 | 2023-06-12T03:48:01Z | 2024-07-30T08:37:31Z | https://github.com/langchain-ai/langchain/issues/6025 | 1,751,831,130 | 6,025 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Scan the QR code to join the group chat and discuss Chinese LLM model-training issues and LLM techniques.

| LLM技术和训练问题的微信交流群 | https://api.github.com/repos/langchain-ai/langchain/issues/6024/comments | 2 | 2023-06-12T03:21:25Z | 2023-10-12T16:08:53Z | https://github.com/langchain-ai/langchain/issues/6024 | 1,751,817,138 | 6,024 |
[
"langchain-ai",
"langchain"
] | https://github.com/hwchase17/langchain/blob/18af149e91e62b3ac7728ddea420688d41043734/langchain/text_splitter.py#L420
Because it goes from top to bottom, the last chunk can end up any size and is frequently too small to be useful.
I wrote a new class that is more or less a copy of the original function, but adjusts the output if the last chunk is too small (less than 75% of chunk_size). It does this by adjusting the chunk size upwards to chunk_size = chunk_size + (last_chunk_size / (num_chunks - 1)). This allows the last chunk's token count to be distributed across all chunks, and the end result is that there are no longer bad (small) chunks.
I'm hesitant to create a PR from this because it's such a large change. I believe the correct course would be to integrate it into the main function, but because it rewrites the merge_splits function it would impact all splitters. It's also quite a bit slower than the original class because it can take a few tries to get to the right size. There are optimizations to be had.
@hwchase17
https://github.com/ShelbyJenkins/langchain/blob/master/langchain/text_splitter.py#L793 | Last chunk output by RecursiveCharacterTextSplitter is often too small to be useful | https://api.github.com/repos/langchain-ai/langchain/issues/6019/comments | 1 | 2023-06-12T00:39:03Z | 2023-09-18T16:08:24Z | https://github.com/langchain-ai/langchain/issues/6019 | 1,751,680,025 | 6,019
[
"langchain-ai",
"langchain"
] | ### System Info
SQLDatabaseChain appends a semicolon to the generated SQL, which cx_Oracle (and, by implication, the Oracle database) rejects — per the cx_Oracle docs linked below, statements must not end with a semicolon — causing the error
sqlalchemy.exc.DatabaseError: (cx_Oracle.DatabaseError) ORA-00933: SQL command not properly ended
https://cx-oracle.readthedocs.io/en/latest/user_guide/sql_execution.html
Need a way to control this behavior.
https://python.langchain.com/en/latest/modules/chains/examples/sqlite.html#
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps:
```python
from langchain import OpenAI, SQLDatabase, SQLDatabaseChain
import cx_Oracle

db = SQLDatabase.from_uri("oracle://ora-user-name:ora-user-password@ora-host-name:1521/service")
llm = OpenAI(temperature=0, verbose=True)
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
db_chain.run("How many employees are there?")
```
```
sqlalchemy.exc.DatabaseError: (cx_Oracle.DatabaseError) ORA-00933: SQL command not properly ended
[SQL: SELECT COUNT(*) FROM YOUR_TABLE_NAME;]
(Background on this error at: https://sqlalche.me/e/20/4xp6)
```
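One possible stopgap (hand-rolled sketch, not an official option): strip the trailing semicolon before execution when driving the database directly:
```python
sql = "SELECT COUNT(*) FROM YOUR_TABLE_NAME;"  # shape of what the LLM returns
db.run(sql.strip().rstrip(";"))                # Oracle accepts it without the semicolon
```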
### Expected behavior
Need a way to optionally suppress the appended semicolon, or to avoid it entirely in favor of cx_Oracle — which doesn't accept a trailing semicolon in SQL anyway. | extra semicolon with SQLDatabaseChain when used with Oracle | https://api.github.com/repos/langchain-ai/langchain/issues/6016/comments | 3 | 2023-06-11T22:24:31Z | 2023-09-19T16:08:49Z | https://github.com/langchain-ai/langchain/issues/6016 | 1,751,630,974 | 6,016
[
"langchain-ai",
"langchain"
] | ### Feature request
Currently the PALChain does in-context learning by passing an in-context learning prompt with 8 maths problem solutions in a single large prompt.
If we want the power of PALChain to increase and solve different problem types, we can't keep on adding more and more problem solutions in a single prompt.
As an alternative, we should store more problem solutions in a list, and only include the most relevant to the query in the prompt.
### Motivation
I would like to enhance PAL to solve many more types of problems that can be solved in Python code, e.g. those described on leetcode.com, maths problems from the UK 11+, GCSE and even A-level exams, etc.
### Your contribution
I am happy to work on a PR for this.
How I see it working:
- long list of maths problems solutions in the existing format (natural language question; python code to print the answer).
- the first time the PALChain is invoked, every one of these is embedded using an embedding model.
- every query that a user submits is also embedded, and the most similar questions are passed into the prompt (see the sketch after this list).
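LangChain's example selectors seem to cover most of this mechanism already — a sketch (the example dict schema here is my assumption):
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.prompts.example_selector import SemanticSimilarityExampleSelector
from langchain.vectorstores import Chroma

examples = [{"question": "...", "solution": "..."}]  # the PAL problem bank
selector = SemanticSimilarityExampleSelector.from_examples(
    examples, OpenAIEmbeddings(), Chroma, k=4
)
selector.select_examples({"question": "Jonathan wants to express 3/5 as a percentage."})
```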
There is a risk that "most similar" gets distracted by proper nouns. E.g. if there is a problem solution saying "Jonathan has 5 gold balls, ..." and the user submits a question on a completely different maths topic, e.g. "Jonathan wants to express 3/5 as a percentage.", Jonathan is irrelevant but the embedding similarity algorithm may flag them as similar. For that reason I propose that, as a first pass, both the problem solutions and the user query are run through a named-entity-recognition step in an LLM, which removes named entities (people, geographical locations, etc.) and replaces them with generic tokens. | PALChain In-context-learning won't scale to multiple problem types. | https://api.github.com/repos/langchain-ai/langchain/issues/6014/comments | 1 | 2023-06-11T22:13:42Z | 2023-09-17T17:13:12Z | https://github.com/langchain-ai/langchain/issues/6014 | 1,751,628,136 | 6,014
[
"langchain-ai",
"langchain"
] | ### Use of Output parser with LLM Chain
I want to use a sequential chain of two LLM chains. The first chain is coded below. I want to get the output of this chain as a Python list of aspects.
```
# This is an LLMChain for Aspects Extraction.
examples =[ {
"review": '''Improve the communication updates to families. Improve consistency, with housekeeping it was supposed to be weekly but now it is every 2 or 3 weeks. There is no consistency in the staff either due to the high turnover rate. Improve the services in the dining room along with meal options.''',
"aspects": '''Relevant Aspects are communication on updates, housekeeping consistency, staff turnover, and dining services.'''
},
{"review": '''On paper they do, but my wife has not been brought to them. I have not had a meeting to set up a plan for her. No one wheels her to partake in the activities. They need somebody there that could take them to activities if they wanted. They should bring them to activities where other people will watch over them. The people that are in charge, like the head nurse and activities director, are good about getting ahold of and answering your questions. Once you get down to the next level, they are overwhelmed. They could use another set of eyes and hands. In the memory care area, a lot of people need care at the same time.''',
"aspects": '''Relevant Aspects are Staff Attitude, Care Plan Setup, Staff Involvement in Activities, Oversight during Activities, Memory Care Area'''}
]
#Configure a formatter that will format the few shot examples into a string. This formatter should be a PromptTemplate object.
prompt_template='''
Review: {review}
{aspects}
'''
example_prompt = PromptTemplate(input_variables=["review", "aspects"], template= prompt_template, output_parser=output_parser )
final_prompt = FewShotPromptTemplate(
examples=examples,
example_prompt=example_prompt,
suffix="Review: {review}",
input_variables=["review"],
prefix="We are extracting the aspects from the review given by the residents of nursing homes. Take the Review as input and extract the different aspects about the the staff, food, \
building, activities, management, cost of the the nuursing home."
)
output=aspect_extraction_chain.predict_and_parse(review="The community has popcorn days, church, birthday celebrations, holiday parties, therapy dogs, and so much more. My mother is very happy here, and she is kept active. They do a great job of keeping the elderly minds active and involved. The dining program is great as well. My mother tends to eat slow, but the dining program always lets my mother stay to finish her food. Any residents that want to practice religion, this is also offered here! More outings have been added, they just went to Walmart recently.")
```
The current result is a string like
'Relevant Aspects are Activities, Elderly Minds Engagement, Dining Program, Religious Offerings, Outings.'
I want the result as: ['\nActivities', 'Elderly Minds Engagement', 'Dining Program', 'Religious Offerings', 'Outings.']
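From the docs, I believe the intended tool is `CommaSeparatedListOutputParser`, though the "Relevant Aspects are" prefix still needs handling — is this the right approach? A sketch (the `LLMChain` construction for `aspect_extraction_chain` is included since it isn't shown above):
```python
from langchain.chains import LLMChain
from langchain.output_parsers import CommaSeparatedListOutputParser

output_parser = CommaSeparatedListOutputParser()
aspect_extraction_chain = LLMChain(llm=llm, prompt=final_prompt)  # llm as configured elsewhere

raw = "Relevant Aspects are Activities, Elderly Minds Engagement, Dining Program."
aspects = output_parser.parse(raw.replace("Relevant Aspects are", "").rstrip("."))
print(aspects)  # ['Activities', 'Elderly Minds Engagement', 'Dining Program']
```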
### Suggestion:
Kindly guide me on How to use langchain output parser for it. | How to use Output parser with LLM Chain | https://api.github.com/repos/langchain-ai/langchain/issues/6013/comments | 10 | 2023-06-11T20:46:29Z | 2023-10-15T16:06:38Z | https://github.com/langchain-ai/langchain/issues/6013 | 1,751,597,844 | 6,013 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Description:
I am currently using the lang chain repository for my project, specifically the functionality related to similarity search. I have encountered an issue when attempting to retrieve the chunk number from the search results.
To provide some context, I have already performed the necessary steps to generate embeddings for each chunk using the provided functions in lang chain. Here is the relevant code snippet:
```
embeddings = OpenAIEmbeddings()
knowledge_base = FAISS.from_texts(chunks, embeddings)
```
After creating the knowledge base, I utilize the similarity_search function to find the most similar chunk to a given query:
`docs = knowledge_base.similarity_search(query)`
The docs object returned contains information about the search results, but I am struggling to access the specific chunk number associated with the most similar result.
My question is: Is there a method or property available in lang chain that allows me to retrieve the chunk number from the docs object?
I would greatly appreciate any assistance or guidance in resolving this issue. Thank you for your support!
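One pattern I'm considering as a workaround in the meantime (a sketch relying on the `metadatas` argument of `from_texts`):
```python
knowledge_base = FAISS.from_texts(
    chunks,
    embeddings,
    metadatas=[{"chunk": i} for i in range(len(chunks))],
)
docs = knowledge_base.similarity_search(query)
print(docs[0].metadata["chunk"])  # index of the most similar chunk
```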
### Suggestion:
I could not find any documentation; once I know how to do it, I can add it to the documentation | Issue with retrieving the chunk number from similarity search in lang chain | https://api.github.com/repos/langchain-ai/langchain/issues/6004/comments | 3 | 2023-06-11T16:08:28Z | 2023-11-16T16:07:26Z | https://github.com/langchain-ai/langchain/issues/6004 | 1,751,497,935 | 6,004
[
"langchain-ai",
"langchain"
] | ### System Info
Hi,
I am using UnstructuredURLLoader to load URLs into a Chroma vector database, but it only indexes the HTML content of each URL, not the PDFs linked from those pages — which is why RetrievalQA cannot answer queries about the PDF content.
I also tried with SeleniumUrlLoader but still no results.
```python
from langchain.vectorstores import Chroma
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter, TokenTextSplitter
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.chains import VectorDBQA, RetrievalQA
from langchain.document_loaders import TextLoader, UnstructuredFileLoader, DirectoryLoader, UnstructuredURLLoader, SeleniumURLLoader
from langchain.indexes import VectorstoreIndexCreator
from langchain.prompts import PromptTemplate
import chromadb
import wx
import os
import requests
from bs4 import BeautifulSoup

openai_api_key = my_openai_key
persist_directory = "Zdb_directory"
collection_name = 'my_collection'
temperature = 0
max_tokens = 200
llm = ChatOpenAI(openai_api_key=openai_api_key)

urls = ['url1', 'url2', 'url3']
# loader = UnstructuredURLLoader(urls)
loader = SeleniumURLLoader(urls)
docs = loader.load()  # this line was missing; docs is used below
text_splitter = TokenTextSplitter(chunk_size=1000, chunk_overlap=200)
texts = text_splitter.split_documents(docs)
embeddings = OpenAIEmbeddings(openai_api_key=openai_api_key)
vectordb = Chroma.from_documents(texts, embedding=embeddings, persist_directory=persist_directory)
vectordb.persist()
vectordb = None
vectordb = Chroma(persist_directory=persist_directory, embedding_function=embeddings, collection_name='my_collection')
qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(openai_api_key=openai_api_key), chain_type="stuff", retriever=vectordb.as_retriever(search_type="mmr"), return_source_documents=True)
ques = 'my_question'
ans = qa(ques)
```
Can anyone please help me with how to load all the PDF files linked from a URL into the Chroma vector database using UnstructuredURLLoader or SeleniumURLLoader? I will be thankful to you.
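For the PDF side, this is roughly what I've been attempting (a rough sketch — the crawl logic and temp-file handling are my own assumptions, and `PyPDFLoader` needs `pypdf` installed):
```python
import tempfile
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup
from langchain.document_loaders import PyPDFLoader

def load_pdfs_from_page(url):
    soup = BeautifulSoup(requests.get(url).text, "html.parser")
    docs = []
    for a in soup.find_all("a", href=True):
        link = urljoin(url, a["href"])
        if link.lower().endswith(".pdf"):
            with tempfile.NamedTemporaryFile(suffix=".pdf", delete=False) as f:
                f.write(requests.get(link).content)
            docs.extend(PyPDFLoader(f.name).load())
    return docs
```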
Thank You
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.vectorstores import Chroma
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter, TokenTextSplitter
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.chains import VectorDBQA, RetrievalQA
from langchain.document_loaders import TextLoader, UnstructuredFileLoader, DirectoryLoader, UnstructuredURLLoader, SeleniumURLLoader
from langchain.indexes import VectorstoreIndexCreator
from langchain.prompts import PromptTemplate
import chromadb
import wx
import os
import requests
from bs4 import BeautifulSoup

openai_api_key = my_openai_key
persist_directory = "Zdb_directory"
collection_name = 'my_collection'
temperature = 0
max_tokens = 200
llm = ChatOpenAI(openai_api_key=openai_api_key)

# all URLs contain PDF files directly or in sub-URLs
urls = ['url1', 'url2', 'url3']
# loader = UnstructuredURLLoader(urls)
loader = SeleniumURLLoader(urls)
docs = loader.load()  # this line was missing; docs is used below
text_splitter = TokenTextSplitter(chunk_size=1000, chunk_overlap=200)
texts = text_splitter.split_documents(docs)
embeddings = OpenAIEmbeddings(openai_api_key=openai_api_key)
vectordb = Chroma.from_documents(texts, embedding=embeddings, persist_directory=persist_directory)
vectordb.persist()
vectordb = None
vectordb = Chroma(persist_directory=persist_directory, embedding_function=embeddings, collection_name='my_collection')
qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(openai_api_key=openai_api_key), chain_type="stuff", retriever=vectordb.as_retriever(search_type="mmr"), return_source_documents=True)
ques = 'my_question'
ans = qa(ques)
```
### Expected behavior
When loading a URL into the Chroma vector database with UnstructuredURLLoader or SeleniumURLLoader, rather than just indexing the HTML page, it should also index the content of all sub-URLs and of any files (such as PDFs) available at that URL.
Thank You | UnstructuredUrlLoader or SeleniumUrlLoader are not able to upload the pdf's consisting by urls. | https://api.github.com/repos/langchain-ai/langchain/issues/6000/comments | 3 | 2023-06-11T08:55:35Z | 2023-09-28T16:06:24Z | https://github.com/langchain-ai/langchain/issues/6000 | 1,751,329,902 | 6,000 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain version - 0.0.154
ubuntu - 18.04
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.agents.agent_toolkits import create_python_agent
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
File ~/practice_ml/lib/python3.9/site-packages/openai/openai_object.py:59, in OpenAIObject.__getattr__(self, k)
58 try:
---> 59 return self[k]
60 except KeyError as err:
KeyError: 'choice'
During handling of the above exception, another exception occurred:
AttributeError Traceback (most recent call last)
Cell In[11], line 1
----> 1 from langchain.agents.agent_toolkits import create_python_agent
File ~/practice/langchain.py:23
16 response = openai.ChatCompletion.create(
17 model = model,
18 messages = messages,
19 temperature=0
20 )
21 return response.choice[0].message['content']
---> 23 get_completion("what is 1 + 1")
24 customer_email = """
25 Arrr, I be fuming that me blender lid \
26 flew off and splattered me kitchen walls \
(...)
30 right now, matey!
31 """
34 from langchain.chat_models import ChatOpenAI
File ~/practice/langchain.py:21, in get_completion(prompt, model)
12 messages = [
13 {'role' : 'user',
14 "content" : prompt}
15 ]
16 response = openai.ChatCompletion.create(
17 model = model,
18 messages = messages,
19 temperature=0
20 )
---> 21 return response.choice[0].message['content']
File ~/practice_ml/lib/python3.9/site-packages/openai/openai_object.py:61, in OpenAIObject.__getattr__(self, k)
59 return self[k]
60 except KeyError as err:
---> 61 raise AttributeError(*err.args)
AttributeError: choice
```
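Two things in this traceback look like the actual cause (my reading, not verified): the script itself is named `langchain.py`, so `import langchain` resolves to the local file instead of the installed package, and `response.choice` should be plural:
```python
import openai

# after renaming ~/practice/langchain.py (it shadows the installed langchain package):
def get_completion(prompt, model="gpt-3.5-turbo"):
    messages = [{"role": "user", "content": prompt}]
    response = openai.ChatCompletion.create(model=model, messages=messages, temperature=0)
    return response.choices[0].message["content"]  # plural `choices`, not `choice`
```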
### Expected behavior
no error . | AttributeError: choice while creating python agent | https://api.github.com/repos/langchain-ai/langchain/issues/5999/comments | 1 | 2023-06-11T06:54:03Z | 2023-09-17T17:13:16Z | https://github.com/langchain-ai/langchain/issues/5999 | 1,751,277,196 | 5,999 |
[
"langchain-ai",
"langchain"
] | ### System Info
version = "0.0.157"
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [x] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [x] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
just run
```
from langchain.text_splitter import RecursiveCharacterTextSplitter
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=900,
chunk_overlap=0,
separators=separators,
add_start_index = True,
length_function=tiktoken_len,
)
```
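Possibly relevant (an assumption from the changelog): `add_start_index` was added well after the `0.0.157` pinned above, so upgrading (`pip install -U langchain`) may be all this snippet needs.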
### Expected behavior
no error | TypeError: TextSplitter.__init__() got an unexpected keyword argument 'add_start_index' | https://api.github.com/repos/langchain-ai/langchain/issues/5998/comments | 4 | 2023-06-11T06:33:16Z | 2023-12-06T17:45:30Z | https://github.com/langchain-ai/langchain/issues/5998 | 1,751,271,273 | 5,998 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
The human input tool only works with agents run from the command line. As far as I can see, it is impossible to use it when the conversation is being displayed in a web UI (or, for that matter, over any other channel).
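To be fair, `HumanInputRun` does let you swap both functions (as I read the code), but they are synchronous, which is exactly why it doesn't fit a WebSocket flow — a sketch:
```python
from langchain.tools import HumanInputRun

def send_question(query: str) -> None:
    ...  # push the agent's question to the client (hypothetical transport)

def wait_for_answer() -> str:
    ...  # block until the client replies — the part that doesn't fit async web apps

tool = HumanInputRun(prompt_func=send_question, input_func=wait_for_answer)
```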
### Suggestion:
Implement a callback for handling requests for human input and a way to return the input to the tool/agent. This would allow eg requests for input to be sent to an open WebSocket chat and optionally be added to the message history. | Human Input tool is not useable in production | https://api.github.com/repos/langchain-ai/langchain/issues/5996/comments | 6 | 2023-06-11T05:52:55Z | 2024-02-12T16:18:14Z | https://github.com/langchain-ai/langchain/issues/5996 | 1,751,260,303 | 5,996 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
I know this is a pretty general question, but in my testing, the ChatGPT console (with GPT-4) is performing much better for text summarization than `ChatOpenAI(model='gpt-4')` is.
Are the default settings intended to be as similar as possible, or are there other known arguments, model_kwargs, system prompts, etc. that will adjust the performance to be more in line with the console experience? | DOC: ChatOpenAI parameters to get it to respond like ChatGPT Plus | https://api.github.com/repos/langchain-ai/langchain/issues/5995/comments | 4 | 2023-06-11T02:42:21Z | 2023-09-18T16:08:34Z | https://github.com/langchain-ai/langchain/issues/5995 | 1,751,205,661 | 5,995 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I think it would be a nice idea to implement the langchain logic in a minimalistic way using python decorators to quickly prototype stuff.
### Motivation
<img width="462" alt="Screenshot 2023-06-10 at 17 22 29" src="https://github.com/hwchase17/langchain/assets/34897716/a22aa5fd-be7a-4eaa-a479-cbbeeb08a15c">
like here https://github.com/srush/MiniChain
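Roughly what I imagine (a pure sketch — none of this exists in langchain today):
```python
from functools import wraps

from langchain import LLMChain, OpenAI, PromptTemplate

def prompt_chain(template: str):
    """Hypothetical decorator: run the decorated call through an LLMChain."""
    def deco(fn):
        chain = LLMChain(llm=OpenAI(), prompt=PromptTemplate.from_template(template))
        @wraps(fn)
        def wrapper(**kwargs):
            return chain.run(**kwargs)
        return wrapper
    return deco

@prompt_chain("Summarize the following text:\n{text}")
def summarize(**kwargs):
    ...
```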
### Your contribution
Currently I do not have the time to do this but I just wanted to pop the idea :) | Implement nice pythonic @decorators [see image] | https://api.github.com/repos/langchain-ai/langchain/issues/5987/comments | 1 | 2023-06-10T15:27:35Z | 2023-09-16T16:06:27Z | https://github.com/langchain-ai/langchain/issues/5987 | 1,751,010,738 | 5,987 |
[
"langchain-ai",
"langchain"
] | ### System Info
>>> langchain.__version__
'0.0.194'
Python 3.11.0 (main, Oct 24 2022, 18:26:48) [MSC v.1933 64 bit (AMD64)] on win32
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.prompts.pipeline import PipelinePromptTemplate
from langchain.prompts.prompt import PromptTemplate
from langchain.prompts.chat import HumanMessagePromptTemplate
pipeline_prompt = PipelinePromptTemplate(final_prompt=PromptTemplate.from_template(""), pipeline_prompts=[])
human_message_prompt = HumanMessagePromptTemplate(prompt=pipeline_prompt)
```
### Expected behavior
Code should run without raising any errors.
Instead it gives this error:
```
pydantic.error_wrappers.ValidationError: 1 validation error for HumanMessagePromptTemplate
prompt
Can't instantiate abstract class StringPromptTemplate with abstract method format (type=type_error)
```
This comes from the fact that `HumanMessagePromptTemplate` inherits from `BaseStringMessagePromptTemplate`, which requires a `prompt` of type `StringPromptTemplate`:
https://github.com/hwchase17/langchain/blob/f3e7ac0a2c0ad677e91571f59b03b55c5af52db2/langchain/prompts/chat.py#L67
I solved it this way:
```
class ExtendedHumanMessagePromptTemplate(HumanMessagePromptTemplate):
prompt: BasePromptTemplate
human_message_prompt = ExtendedHumanMessagePromptTemplate(prompt=pipeline_prompt)
```
| pydantic validation error when creating a HumanMessagePromptTemplate from a PipelinePromptTemplate | https://api.github.com/repos/langchain-ai/langchain/issues/5986/comments | 1 | 2023-06-10T13:47:36Z | 2023-09-17T17:13:27Z | https://github.com/langchain-ai/langchain/issues/5986 | 1,750,977,213 | 5,986 |
[
"langchain-ai",
"langchain"
] | I am trying to make a chatbot which remembers the existing chat and can answer from it as well as from the documents. Here is what I have tried.
```
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
qa = ConversationalRetrievalChain.from_llm(
ChatOpenAI(model_name='gpt-3.5-turbo'),
retriever=docsearch,
memory=memory,
verbose=True)
result = qa({"question": "My name is Talha. Ali is my friend. What is CNN?"})
```
My documents are about CNNs, so it correctly fetched what a CNN is.
Then I ask another question.
```
result = qa({"question": "Who is Ali?"})
```
this is what happened behind the scene
```
> Entering new LLMChain chain...
Prompt after formatting:
Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.
Chat History:
Human: My name is Talha. Ali is my friend. What is CNN?
Assistant: CNN is a tool for deep learning and machine learning algorithms used in artificial neural networks for image recognition, object detection, and segmentation.
Follow Up Input: Who is Ali?
Standalone question:
> Finished chain.
> Entering new StuffDocumentsChain chain...
> Entering new LLMChain chain...
Prompt after formatting:
Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.
_few paragraphs from documents i have ingested in vector database_
Question: Ali is Talha's friend.
Answer:
```
But when I print `result['answer']` it says `"I don't know."`
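My current reading of the trace (could be wrong): the condense step rewrote "Who is Ali?" into a standalone statement using only the chat history, the retriever then searched the CNN documents for something about Ali and found nothing relevant, and the QA prompt explicitly instructs the model to say it doesn't know when the context lacks the answer — so the chat history is used for question rewriting, not as answer context.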
Here is notebook for reproducibility
https://colab.research.google.com/drive/1UTjdpAAjoZ_ccpdAEr9gpfpqlC3pwtmT?usp=sharing | ConversationalRetrievalChain did not look into chat histroy while making an answer | https://api.github.com/repos/langchain-ai/langchain/issues/5984/comments | 1 | 2023-06-10T11:32:30Z | 2023-06-13T01:56:12Z | https://github.com/langchain-ai/langchain/issues/5984 | 1,750,931,497 | 5,984 |
[
"langchain-ai",
"langchain"
] | ### Feature request
What the title says. A method in the `VectorStore` class that allows the size of the store to be retrieved.
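For context, today this seems to require store-specific internals (my reading of the code, so treat these as assumptions):
```python
# assuming `faiss_store` and `chroma_store` are existing VectorStore instances
n_faiss = faiss_store.index.ntotal           # FAISS: size of the underlying index
n_chroma = chroma_store._collection.count()  # Chroma: goes through a private attribute
```
A uniform `__len__` or `.count()` on the base class would remove the need for this.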
### Motivation
This could be useful for certain applications of continuous storage.
### Your contribution
Happy to contribute with some guidance. | Get number of vectors in Vector Store | https://api.github.com/repos/langchain-ai/langchain/issues/5980/comments | 7 | 2023-06-10T09:20:56Z | 2024-06-16T16:07:06Z | https://github.com/langchain-ai/langchain/issues/5980 | 1,750,888,737 | 5,980 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.195
### Who can help?
@vowelparrot
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
import os

from langchain.agents import AgentType, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.tools import StructuredTool

# sendEmail / updateClientProfile are my own functions; `pd` is Pipedream's step context
SendEmail = StructuredTool.from_function(sendEmail)
UpdateClientProfile = StructuredTool.from_function(updateClientProfile)
tools = [SendEmail, UpdateClientProfile]

#llm=ChatOpenAI(openai_api_key=os.environ['OPENAI_API_KEY_GPT4'], model=pd.steps["code"]["$return_value"]["model"], temperature=0.8)
llm = ChatOpenAI(openai_api_key=os.environ['OPENAI_API_KEY_GPT4'], model="gpt-4", temperature=0.8)
prompt = pd.steps["code"]["$return_value"]["prompt"]

gpt3 = initialize_agent(tools, llm, agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
res = gpt3.run(prompt)
gpt4 = initialize_agent(tools, llm, agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
res = gpt4.run(prompt)
```
Here's the prompt:
<img width="714" alt="prompt" src="https://github.com/hwchase17/langchain/assets/7931903/2d15bbdf-a81d-4742-beae-0664acce53bc">
gpt-3.5-turbo correctly handles the tasks:
<img width="884" alt="gpt3 5" src="https://github.com/hwchase17/langchain/assets/7931903/0599f839-4a2e-44b5-9978-df758ea8e69c">
gpt-4 does only the first one:
<img width="880" alt="gpt4" src="https://github.com/hwchase17/langchain/assets/7931903/1f6017a4-a31c-48b6-b06b-0bf6214cddf5">
### Expected behavior
Both agents should complete two tasks :
1 - Send an email
2 - Update the customer profile
gpt-3.5-turbo correctly completes the tasks, while gpt-4 stops after the first one and outputs what it should do next:
"Now I will update the customer profile..." | STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION with gpt-4 doesn't follow through the tasks, while gpt-3.5-turbo does. | https://api.github.com/repos/langchain-ai/langchain/issues/5978/comments | 3 | 2023-06-10T07:53:02Z | 2023-12-06T17:45:35Z | https://github.com/langchain-ai/langchain/issues/5978 | 1,750,855,665 | 5,978 |
[
"langchain-ai",
"langchain"
] | ### System Info
The current behavior of the ZapierToolkit in the LangChain tool occasionally results in the execution of tasks that the user did not specifically request. For example, when a user asks the ZapierToolkit to read the most recent email, it sometimes replies to the email even when the user did not explicitly specify this action. This unexpected behavior occurs when the AgentExecutor chain automatically decides to reply to the email, regardless of whether the email requested the reply or not. It is safe for the ZapierToolkit to adhere to the user's explicit instructions to avoid potential mishaps or undesired outcomes.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The issue was tested with the GPT-3.5 models (openai chat completion model and text-davinci-003) with temperature=0.7, and Gmail-related Zapier NLA Development actions, such as "Gmail: Reply to Email," "Gmail: Find Email," and "Gmail: Send Email.", enabled.
Code Sample:
```python
import os

from langchain.agents import AgentType, initialize_agent
from langchain.agents.agent_toolkits import ZapierToolkit
from langchain.llms import OpenAI
from langchain.utilities.zapier import ZapierNLAWrapper

# get from https://platform.openai.com/
os.environ["OPENAI_API_KEY"] = os.environ.get("OPENAI_API_KEY", "")
# get from https://nla.zapier.com/demo/provider/debug (under User Information, after logging in):
os.environ["ZAPIER_NLA_API_KEY"] = os.environ.get("ZAPIER_NLA_API_KEY", "")

llm = OpenAI()
zapier = ZapierNLAWrapper()
toolkit = ZapierToolkit.from_zapier_nla_wrapper(zapier)
agent = initialize_agent(toolkit.get_tools(), llm,
                         agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)

try:
    agent.run("Read the most recent email")
except Exception as e:
    print(e)
    exit(1)
```
### Expected behavior
The ZapierToolkit should only perform tasks that are explicitly requested by the user. If the user does not specify that the most recent email should be replied to, the ZapierToolkit should not automatically decide to perform this action. | ZapierToolkit Automatically Performing Unrequested Tasks | https://api.github.com/repos/langchain-ai/langchain/issues/5977/comments | 1 | 2023-06-10T06:47:07Z | 2023-09-16T16:06:38Z | https://github.com/langchain-ai/langchain/issues/5977 | 1,750,835,797 | 5,977 |
[
"langchain-ai",
"langchain"
] | ### System Info
Latest langchain & openai 0.27.5
https://python.langchain.com/en/latest/modules/agents/tools/custom_tools.html
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
https://python.langchain.com/en/latest/modules/agents/tools/custom_tools.html
```python
from pydantic import BaseModel, Field
from langchain.tools import Tool

class CalculatorInput(BaseModel):
    question: str = Field()

tools.append(
    Tool.from_function(
        func=llm_math_chain.run,
        name="Calculator",
        description="useful for when you need to answer questions about math",
        args_schema=CalculatorInput,
        # coroutine= ... <- you can specify an async method if desired as well
    )
)
```
Got this error msg:
File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for Tool
args_schema extra fields not permitted (type=value_error.extra)
### Expected behavior
The tool should validate and the agent execution should start. | pydantic.error_wrappers.ValidationError for args_schema | https://api.github.com/repos/langchain-ai/langchain/issues/5974/comments | 4 | 2023-06-10T03:33:28Z | 2024-02-11T16:19:26Z | https://github.com/langchain-ai/langchain/issues/5974 | 1,750,766,369 | 5,974
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I was trying to store documents in Chroma with custom ids, like `vectorstore = Chroma.from_documents(documents, embeddings, ids="test1",)`, but it keeps raising `Number of embeddings 9 must match number of ids 1`.
Also, how can I delete some of the vectors inside Chroma once I have stored many of them? A sketch of what I expected is below.
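For reference, a minimal sketch of what I expected to work — one id per chunk, and deletion afterwards (the `_collection` handle is Chroma's private chromadb collection, so this may not be a supported path):
```python
# One id per document; len(ids) must equal len(documents).
ids = [f"doc-{i}" for i in range(len(documents))]
vectorstore = Chroma.from_documents(documents, embeddings, ids=ids)

# Removing specific vectors later, via the underlying chromadb collection:
vectorstore._collection.delete(ids=["doc-0"])
```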
### Suggestion:
_No response_ | Issue: Is there some detail documents about chroma.from_documents args? | https://api.github.com/repos/langchain-ai/langchain/issues/5973/comments | 3 | 2023-06-10T02:40:35Z | 2023-09-18T16:08:39Z | https://github.com/langchain-ai/langchain/issues/5973 | 1,750,751,056 | 5,973 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Make RecursiveCharacterTextSplitter return the start index and end index of each chunk in the original document.
The user can then recover the snippet with `original_doc[start_index:end_index]`.
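For illustration, a rough workaround available today (a hypothetical helper, not part of LangChain) that locates each chunk in the source text:
```python
# Hypothetical helper: recover (start, end) spans of chunks in the original text.
def locate_chunks(original_doc: str, chunks: list) -> list:
    spans = []
    search_from = 0
    for chunk in chunks:
        start = original_doc.find(chunk, search_from)
        if start == -1:  # chunk was altered (e.g. separators stripped); give up on it
            spans.append(None)
            continue
        spans.append((start, start + len(chunk)))
        search_from = start + 1  # step forward minimally so overlapping chunks still match
    return spans
```
Having the splitter return these spans directly would avoid this fragile string matching.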
### Motivation
This is important information about each chunk.
### Your contribution
I cannot do it right now, but if I have time I will try to do it.
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I'm using `Unstructured` for parsing PDFs and have it installed through a docker dev container. It was working a few months ago, but after I rebuilt the container for deployment it suddenly broke.
Here's my dockerfile:
```
FROM python:3.9-slim-buster
# Update package lists
RUN apt-get update && apt-get install ffmpeg libsm6 libxext6 gcc g++ git build-essential libpoppler-cpp-dev libmagic-dev pkg-config poppler-utils tesseract-ocr libtesseract-dev -y
# Make working directories
RUN mkdir -p /app
WORKDIR /app
# Copy the requirements.txt file to the container
COPY requirements.txt .
# Install dependencies
RUN pip install --upgrade pip
RUN pip install torch torchvision torchaudio
RUN pip install unstructured-inference
RUN pip install -r requirements.txt
RUN pip install 'git+https://github.com/facebookresearch/detectron2.git@e2ce8dc#egg=detectron2'
# Copy the .env file to the container
COPY .env .
# Copy every file in the source folder to the created working directory
COPY . .
# Expose the port that the application will run on
EXPOSE 8080
# Start the application
CMD ["python3.9", "-m", "uvicorn", "main:app", "--proxy-headers", "--host", "0.0.0.0", "--port", "8080", "--workers", "4"]
```
and `requirements.txt`:
```
fastapi
uvicorn
langchain
python-poppler
pytesseract
unstructured[local-inference]
psycopg2-binary
pgvector
openai
tiktoken
python-dotenv
pypdf
```
I've verified that detectron2 is installed in the container, but it seems it's not being used by unstructured, which falls back to `pdfminer`.
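Here is how I checked (a simple import probe, nothing LangChain-specific):
```python
# Probe run inside the container: confirms detectron2 is importable.
import importlib.util
print(importlib.util.find_spec("detectron2") is not None)  # prints True for me
```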
### Suggestion:
_No response_ | `UnstructuredPDFLoader` suddenly can't parse scanned PDFs | https://api.github.com/repos/langchain-ai/langchain/issues/5968/comments | 1 | 2023-06-10T02:03:23Z | 2023-09-16T16:06:53Z | https://github.com/langchain-ai/langchain/issues/5968 | 1,750,740,372 | 5,968 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version: 0.0.192, Python: 3.10
```
import langchain
from langchain.cache import SQLiteCache
langchain.llm_cache = SQLiteCache(database_path='.langchain.db')
from langchain.llms import OpenAI
my_llm = OpenAI(model_name='text-davinci-003', temperature=0, max_tokens=1, logprobs=5)
result=my_llm.generate(['2+2='])
result.generations[0][0]
```
In the above, the logprobs will only be generated the first time and never again. Interestingly, the InMemoryCache does successfully persist the logprobs (and other generation_info) upon a cache hit.
@hwchase17 @yuert
### Who can help?
Any contributor.
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
import langchain
from langchain.cache import SQLiteCache
langchain.llm_cache = SQLiteCache(database_path='.langchain.db')
from langchain.llms import OpenAI
my_llm = OpenAI(model_name='text-davinci-003', temperature=0, max_tokens=1, logprobs=5)
result=my_llm.generate(['2+2='])
result.generations[0][0]
```
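For contrast, the same prompt against the in-memory cache keeps the `generation_info` on a cache hit (minimal sketch):
```python
# Minimal sketch: InMemoryCache preserves generation_info across cache hits.
from langchain.cache import InMemoryCache
langchain.llm_cache = InMemoryCache()
result = my_llm.generate(['2+2='])  # first call populates the cache
result = my_llm.generate(['2+2='])  # cache hit
print(result.generations[0][0].generation_info)  # logprobs still present here
```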
### Expected behavior
I expect the following output every time I run this
```
Generation(text='4', generation_info={'finish_reason': 'length', 'logprobs': <OpenAIObject at 0x7fbcb6ab1710> JSON: {
"tokens": [
"4"
],
"token_logprobs": [
-0.5794853
],
"top_logprobs": [
{
"4": -0.5794853,
"\n": -1.6557268,
"5": -2.264629,
"?": -3.518741,
"3": -4.042759
}
],
"text_offset": [
4
]
}})
```
But instead (upon the second or more times I run this), I get this:
`Generation(text='4', generation_info=None)` | SQLiteCache does not cache logprobs | https://api.github.com/repos/langchain-ai/langchain/issues/5965/comments | 2 | 2023-06-10T00:26:29Z | 2023-09-18T16:08:49Z | https://github.com/langchain-ai/langchain/issues/5965 | 1,750,669,192 | 5,965
[
"langchain-ai",
"langchain"
] | ### Feature request
Need to add proxy to GoogleSearchAPIWrapper()
Perhaps using this [approach](https://github.com/googleapis/google-api-python-client/issues/1078#issuecomment-718919158) and/or this [approach](https://github.com/googleapis/google-api-python-client/issues/1260#issuecomment-802728649)?
```
import httplib2
import google_auth_httplib2
from googleapiclient import discovery

# PROXY_IP / PROXY_PORT and `credentials` are placeholders for your environment.
http = httplib2.Http(proxy_info=httplib2.ProxyInfo(
    httplib2.socks.PROXY_TYPE_HTTP, PROXY_IP, PROXY_PORT
))
authorized_http = google_auth_httplib2.AuthorizedHttp(credentials, http=http)
service = discovery.build("customsearch", "v1", http=authorized_http)
```
### Motivation
Add Proxy to GoogleSearchAPIWrapper
### Your contribution
I can write a PR but perhaps with some additional help | Add Proxy to GoogleSearchAPIWrapper | https://api.github.com/repos/langchain-ai/langchain/issues/5964/comments | 2 | 2023-06-09T23:38:23Z | 2023-11-07T16:07:53Z | https://github.com/langchain-ai/langchain/issues/5964 | 1,750,638,342 | 5,964 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Sometimes the agent takes too long to respond; stopping it and trying again would be a good option.
So, how can I stop an agent mid-run?
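For context, the closest partial workaround I know of is capping the run up front (a sketch; `initialize_agent` forwards these kwargs to the `AgentExecutor`):
```python
# Cap the run instead of killing it mid-flight.
agent = initialize_agent(
    tools, llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    max_iterations=5,                  # stop after N reasoning steps
    max_execution_time=30,             # stop after N seconds
    early_stopping_method="generate",  # still produce a final answer when stopped
)
```
But what I really want is to cancel an executor that is already running.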
Thanks in advance.
### Suggestion:
_No response_ | Issue: How to stop an AgentExecutor after has been dispatched/ran. | https://api.github.com/repos/langchain-ai/langchain/issues/5963/comments | 5 | 2023-06-09T23:01:56Z | 2023-09-18T16:08:54Z | https://github.com/langchain-ai/langchain/issues/5963 | 1,750,591,178 | 5,963 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
This is for the documentation page:
https://python.langchain.com/en/latest/modules/agents/agents/examples/structured_chat.html
This does not work in Jupyter, nor if you download it to a .py file and run it. You get two different errors, though.
When you run this in Jupyter you get this error:
```
---------------------------------------------------------------------------
ConfigError Traceback (most recent call last)
Cell In[29], line 3
1 async_browser = create_async_playwright_browser()
2 # sync_browser = None # create_sync_playwright_browser()
----> 3 browser_toolkit = PlayWrightBrowserToolkit.from_browser(async_browser=async_browser)
4 tools = browser_toolkit.get_tools()
File ~/Google Drive/PycharmProjectsLocal/pythonProject2/venv/lib/python3.8/site-packages/langchain/agents/agent_toolkits/playwright/toolkit.py:83, in PlayWrightBrowserToolkit.from_browser(cls, sync_browser, async_browser)
81 # This is to raise a better error than the forward ref ones Pydantic would have
82 lazy_import_playwright_browsers()
---> 83 return cls(sync_browser=sync_browser, async_browser=async_browser)
File ~/Google Drive/PycharmProjectsLocal/pythonProject2/venv/lib/python3.8/site-packages/pydantic/main.py:339, in pydantic.main.BaseModel.__init__()
File ~/Google Drive/PycharmProjectsLocal/pythonProject2/venv/lib/python3.8/site-packages/pydantic/main.py:1076, in pydantic.main.validate_model()
File ~/Google Drive/PycharmProjectsLocal/pythonProject2/venv/lib/python3.8/site-packages/pydantic/fields.py:860, in pydantic.fields.ModelField.validate()
ConfigError: field "sync_browser" not yet prepared so type is still a ForwardRef, you might need to call PlayWrightBrowserToolkit.update_forward_refs().
```
When you run it as a .py file you get this error:
```
Connected to pydev debugger (build 231.9011.38)
Traceback (most recent call last):
File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/pydevd.py", line 1496, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/Users/greggwcasey/Google Drive/PycharmProjectsLocal/pythonProject2/structured_chat.py", line 76
response = await agent_chain.arun(input="Hi I'm Erica.")
^
SyntaxError: 'await' outside function
python-BaseException
```
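For what it's worth, the second error is just Python syntax: a top-level `await` needs an event loop in a plain `.py` file. A minimal fix sketch:
```python
# Wrap the top-level await from the notebook example in an event loop.
import asyncio

async def main():
    response = await agent_chain.arun(input="Hi I'm Erica.")
    print(response)

asyncio.run(main())
```
The first (Jupyter) error is the one I can't work around.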
### Idea or request for content:
I would like to have a working example that I could download and run on my Jupyter server or Colab and it work as-is. | DOC: Documented example doesn't work | https://api.github.com/repos/langchain-ai/langchain/issues/5957/comments | 15 | 2023-06-09T18:21:14Z | 2024-02-24T13:48:27Z | https://github.com/langchain-ai/langchain/issues/5957 | 1,750,318,438 | 5,957 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain version = '0.0.191'
elasticsearch version='(8, 8, 0)'
### Who can help?
@hwchase17 @eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am trying ElasticVectorSearch in langchain against my Azure Elastic instance, and I am getting `BadRequestError: BadRequestError(400, 'search_phase_execution_exception', 'runtime error')`.
```
from langchain import ElasticVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embedding = OpenAIEmbeddings()
elastic_host = "hostid.westeurope.azure.elastic-cloud.com"
elasticsearch_url = f"https://user:password-elastic@{elastic_host}:9243"
elastic_vector_search = ElasticVectorSearch(
elasticsearch_url=elasticsearch_url,
index_name="test_langchain",
embedding=embedding
)
```
`doc = elastic_vector_search.similarity_search("simple query")`
The response is :
`BadRequestError: BadRequestError(400, 'search_phase_execution_exception', 'runtime error')`
### Expected behavior
It should return a list of documents that match the query from Elasticsearch. | BadRequestError: BadRequestError(400, 'search_phase_execution_exception', 'runtime error') using ElasticVectorSearch | https://api.github.com/repos/langchain-ai/langchain/issues/5953/comments | 1 | 2023-06-09T17:26:11Z | 2023-09-15T16:08:07Z | https://github.com/langchain-ai/langchain/issues/5953 | 1,750,256,739 | 5,953
[
"langchain-ai",
"langchain"
] | ### Feature request
There is no way to update existing vectors in the opensearch implementation for example if some existing text was changed by passing in custom ids.
https://github.com/hwchase17/langchain/blob/master/langchain/vectorstores/opensearch_vector_search.py#L95
### Motivation
Pinecone implementation has this basic functionality:
https://github.com/hwchase17/langchain/blob/master/langchain/vectorstores/pinecone.py#L85
### Your contribution
I can try to make a PR to mirror the pincone implementation but it would be pretty naive. | Ability to pass in ids for opensearch docs | https://api.github.com/repos/langchain-ai/langchain/issues/5952/comments | 0 | 2023-06-09T17:06:34Z | 2023-06-26T15:58:45Z | https://github.com/langchain-ai/langchain/issues/5952 | 1,750,232,393 | 5,952 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Add a way to pass pre-embedded texts into the VectorStore interface.
Options:
- Standardize the add_embeddings function that has been added to some of the implementations. e.g.
```
def add_embeddings(
self,
texts: List[str],
embeddings: List[List[float]],
metadatas: Optional[List[dict]] = None,
**kwargs: Any,
) -> List[str]:
```
- Add embeddings kwarg to the add_texts interface
```
def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
embeddings: Optional[List[List[float]]] = None,
**kwargs: Any,
) -> List[str]:
```
The first option is nice in that it leaves each function with a very distinct role, but it adds overhead for implementing the interface: it results in very similar code between add_texts and add_embeddings, and/or a third private method to handle the actual add operation.
The second option should be pretty straightforward to add to all the implementations, but adds some clutter to the add_texts interface. Not too bad IMO, as it would be handled almost the same way as metadatas. A usage sketch follows.
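For example, option 2 would let precomputed vectors flow straight into the store (a sketch; `precomputed_model` and the kwarg are hypothetical until implemented):
```python
# Hypothetical usage under option 2: embeddings computed offline (e.g. on GPUs).
texts = ["alpha", "beta"]
vectors = precomputed_model.embed_documents(texts)  # any external embedding step
store.add_texts(
    texts,
    metadatas=[{"src": "a"}, {"src": "b"}],
    embeddings=vectors,  # proposed kwarg; skips the store's own embedding call
)
```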
### Motivation
Embedding a large body of text (e.g. pubmed) takes a long time, and it is too restrictive to rely on each VectorStore implementation calling embedding functions in the most optimal way every use case. By having a defined way to pass in embeddings directly the interface becomes much more flexible.
For example I've been using huggingface's Datasets.map for processing texts, running embeddings on multiple gpus, etc. Would like to be able to save the final embedded dataset, and then insert into the vector store.
### Your contribution
Happy to help work on it. I know there are quite a few vector store implementations that would have to be updated for this. | VectorStore add embedded texts interface | https://api.github.com/repos/langchain-ai/langchain/issues/5945/comments | 7 | 2023-06-09T14:33:34Z | 2024-02-12T16:18:19Z | https://github.com/langchain-ai/langchain/issues/5945 | 1,750,015,297 | 5,945
[
"langchain-ai",
"langchain"
] | ### System Info
I7 32 GBRAM , ASUS dynabook protege
### Who can help?
@hwchase17 @agola11 @vow
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
db = Chroma.from_texts(chunks, embeddings, persist_directory='db', client_settings=CHROMA_SETTINGS)
chat_llm = ChatOpenAI(model_name = 'gpt-3.5-turbo',
callbacks=callbacks,
verbose=True,
temperature=0,
streaming = True
)
question_generator = LLMChain(llm=chat_llm, prompt=CONDENSE_QUESTION_PROMPT)
prompt = load_prompt(model_name='gpt')
doc_chain = load_qa_chain(llm=chat_llm,chain_type="stuff",prompt=prompt)
# NOTE: 'vectorstore' here is presumably the same Chroma store as 'db' above.
chain = ConversationalRetrievalChain(retriever=vectorstore.as_retriever(search_kwargs={"k": 2}),
question_generator=question_generator,
combine_docs_chain=doc_chain,
memory=memory,
return_source_documents=True,
get_chat_history=lambda h :h)
```
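For reference, this is the sanity check I would expect to print 2 (a quick hypothetical test):
```python
# Expecting 2 documents back, but I keep getting the default 4.
docs = db.as_retriever(search_kwargs={"k": 2}).get_relevant_documents("test query")
print(len(docs))
```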
### Expected behavior
I wish to change the `search_kwargs`, more specifically the `top k` relevant documents which are retrieved, but I am not able to: it repeatedly returns the default 4 documents. I have tried all the required methods.
I am using Chroma DB as the retriever over a bunch of PDFs.
| Not able to set k top documents in Chroma DB based Retriever | https://api.github.com/repos/langchain-ai/langchain/issues/5944/comments | 2 | 2023-06-09T14:29:00Z | 2023-09-27T16:06:25Z | https://github.com/langchain-ai/langchain/issues/5944 | 1,750,007,818 | 5,944 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Hi, I'd love to request examples/documentation for how to use [Gorilla](https://github.com/ShishirPatil/gorilla#faqs) with LangChain.
### Motivation
This looks like a great model to serve correct API calls.
### Your contribution
🤷🏻♂️ | Gorilla examples | https://api.github.com/repos/langchain-ai/langchain/issues/5941/comments | 4 | 2023-06-09T13:50:08Z | 2024-02-14T00:02:23Z | https://github.com/langchain-ai/langchain/issues/5941 | 1,749,936,984 | 5,941 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Hi,
I would like to use the `RetrievalQA` chain with a `chain_type` that is map only. Is that actually possible with the LangChain library? When generating an answer I also get logit scores for each answer, so I would be able to choose the best answer based on these scores. Consequently, I only need the `RetrievalQA` chain to generate an answer for each retrieved document; a sketch of the desired usage is below.
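A sketch of the usage I have in mind (`chain_type="map"` is hypothetical — it does not exist today):
```python
# Hypothetical map-only chain: one answer per retrieved document, no reduce step.
qa = RetrievalQA.from_chain_type(llm=llm, chain_type="map", retriever=retriever)
per_doc_answers = qa("my question")  # user then picks the best answer via logit scores
```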
### Motivation
Logit scores are a newer feature of the Hugging Face ecosystem for generative models. It would be interesting to generate an answer for each retrieved document and then let the user decide the best answer based on these scores. So a map-only `chain_type` is needed.
### Your contribution
Let me know if this exists, otherwise I would be glad to test. | Define map only for chain_type of RetrievalQA | https://api.github.com/repos/langchain-ai/langchain/issues/5935/comments | 1 | 2023-06-09T10:43:36Z | 2023-09-15T16:08:11Z | https://github.com/langchain-ai/langchain/issues/5935 | 1,749,644,936 | 5,935 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Hi,
I would like to use the `RetrievalQA` chain with a custom remote embeddings API. As far as I could see when searching through the documentation, there is no way to specify the URL and payload to send to the custom API in the arguments of the `RetrievalQA` object.
One solution would be for the `langchain.embeddings` module to accept the URL and payload. The second path would be to specify the query embeddings directly within the `RetrievalQA` object.
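In the meantime, the cleanest workaround I can see is subclassing the base `Embeddings` interface (a minimal sketch, assuming a JSON API at `EMBED_URL` that returns `{"embeddings": [...]}` — both the URL and the response shape are placeholders):
```python
import requests
from langchain.embeddings.base import Embeddings

EMBED_URL = "https://my-embeddings.example.com/embed"  # placeholder endpoint

class RemoteEmbeddings(Embeddings):
    """Calls a custom remote embeddings API."""

    def embed_documents(self, texts):
        resp = requests.post(EMBED_URL, json={"inputs": texts})
        resp.raise_for_status()
        return resp.json()["embeddings"]  # assumed response field

    def embed_query(self, text):
        return self.embed_documents([text])[0]
```
This could then be passed wherever `RetrievalQA`'s retriever expects an embedding function.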
### Motivation
Using a custom API for embeddings is highly desirable for anyone who doesn't want to use paying APIs or who wants to optimize inference computations.
### Your contribution
Please indicate me if this already exists in the library. | Combination of RetrievalQA with custom embeddings API | https://api.github.com/repos/langchain-ai/langchain/issues/5933/comments | 1 | 2023-06-09T10:34:32Z | 2023-09-15T16:08:17Z | https://github.com/langchain-ai/langchain/issues/5933 | 1,749,631,612 | 5,933 |
[
"langchain-ai",
"langchain"
] | ### System Info
A4000 GPU
transformer 4.30.0
langchain latest
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
```python
from transformers import TextIteratorStreamer, pipeline
from langchain.llms import HuggingFacePipeline

# model / tokenizer are the loaded QLoRA model and its tokenizer; llm_chain wraps `hf`.
streamer = TextIteratorStreamer(tokenizer, skip_special_tokens=True, use_multiprocessing=False)
pipe = pipeline(
    "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=1000,
    streamer=streamer, temperature=0, top_p=0.2
)
hf = HuggingFacePipeline(pipeline=pipe)
llm_chain.run("Where is New delhi?").lstrip()
```
### Expected behavior
While loading a QLoRA model and passing a TextIteratorStreamer to iterate over the words generated by the LLM, I get `TypeError: cannot pickle '_thread.lock' object`.
With a plain TextStreamer it runs fine, but then I am not able to stream the result through the API. | Facing issue while trying to Stream output for QLoRa model using TextIteratorStreamer | https://api.github.com/repos/langchain-ai/langchain/issues/5932/comments | 2 | 2023-06-09T10:27:05Z | 2023-10-06T16:07:39Z | https://github.com/langchain-ai/langchain/issues/5932 | 1,749,619,689 | 5,932
[
"langchain-ai",
"langchain"
] | ### Langchain agent error
I get the following error while trying to load an LLM agent connected to the Wikipedia tool. The response seems to be generated, but an error is thrown when converting it to the right format.
Am I missing something or doing something wrong?
Thank you in advance :)
```python
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.memory import ConversationBufferMemory
from langchain.llms import AzureOpenAI
import openai
openai.api_type = "azure"
openai.api_base = "api_base"
openai.api_version = "2022-12-01"
openai.api_key = "key"
memory = ConversationBufferMemory(memory_key="chat-history", return_messages=True,output_key='answer')
llm = AzureOpenAI(deployment_name="text-davinci-003", model_name="text-davinci-003", openai_api_key="key",openai_api_version = "2022-12-01")
tools = load_tools(['wikipedia'], llm=llm)
agent = initialize_agent(
tools,
agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
verbose=True,
memory=memory,
llm=llm)
response = {}
response['input'] = "When did Cristobal Colon discover America?"
response['chat_history'] = []
res = agent.run(response)
```

### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Execute the code snippet above to reproduce the error
### Expected behavior
We expect the generated response as output, but it does not seem to be returned. | Langchain model using wikipedia tool fails to return response, but generates it successfully | https://api.github.com/repos/langchain-ai/langchain/issues/5928/comments | 2 | 2023-06-09T08:18:00Z | 2023-06-09T10:32:54Z | https://github.com/langchain-ai/langchain/issues/5928 | 1,749,387,543 | 5,928
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
It's not clear in the documentation ([JS/TS](https://js.langchain.com/docs/modules/indexes/text_splitters/#:~:text=Parameters%E2%80%8B,default%20value%20is%201000%20tokens.)) if the unit of measure used for chunks is in single text characters or in tokens. It's incosistent because it explicits both units in the same phrase.
As far as I know a token should be around 4 characters, so it's an important size difference.
### Idea or request for content:
Please change the documentation explaining whether it's in text characters or in tokens. | DOC: Inconsistent unit of measure for chunk_size and chunk_overlap | https://api.github.com/repos/langchain-ai/langchain/issues/5927/comments | 1 | 2023-06-09T07:59:37Z | 2023-09-15T16:08:21Z | https://github.com/langchain-ai/langchain/issues/5927 | 1,749,352,605 | 5,927 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Guys, I found that the bigger the ingested file is, the lower the answer quality I get. How can I fix this?
### Suggestion:
_No response_ | the bigger file size,the lower quality | https://api.github.com/repos/langchain-ai/langchain/issues/5924/comments | 0 | 2023-06-09T07:22:49Z | 2023-06-09T08:10:24Z | https://github.com/langchain-ai/langchain/issues/5924 | 1,749,285,405 | 5,924 |
[
"langchain-ai",
"langchain"
] | ### System Info
There is no safeguard in SQLDatabaseChain to prevent a malicious user from sending a prompt such as "Drop Employee table".
SQLDatabaseChain should have a facility to intercept and review the SQL before sending it to the database.
Creating this separately from https://github.com/hwchase17/langchain/issues/1026 because the SQL injection issue and the Python exec issue are separate. For example, SQL injection cannot be solved by running inside an isolated container.
[LangChain version: 0.0.194. Python version 3.11.1]
<img width="596" alt="image" src="https://github.com/hwchase17/langchain/assets/227187/3ced0139-490f-4e41-a880-71dc864ee12c">
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Here is a repro using the Chinook sqlite database used in the example ipynb. Running this will drop the Employee table from the SQLite database.
```python
chinook_sqlite_uri = "sqlite:///Chinook_Sqlite_Tmp.sqlite"
from langchain import OpenAI, SQLDatabase, SQLDatabaseChain
llm = OpenAI(temperature=0)
db = SQLDatabase.from_uri(chinook_sqlite_uri)
db.get_usable_table_names()
db_chain = SQLDatabaseChain.from_llm(llm=llm, db=db, verbose=True)
db_chain.run("How many employees are there?")
db_chain.run("Drop the employee table")
```
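For illustration, a sketch of the kind of pre-execution hook I have in mind (`sql_interceptor` is a hypothetical kwarg, not part of LangChain today):
```python
# Hypothetical interception hook: reject destructive statements before execution.
def reject_destructive(sql: str) -> str:
    first_word = sql.strip().split()[0].lower()
    if first_word in ("drop", "delete", "update", "insert", "alter", "truncate"):
        raise ValueError(f"Refusing to run unsafe SQL: {sql!r}")
    return sql

db_chain = SQLDatabaseChain.from_llm(
    llm=llm, db=db, verbose=True,
    sql_interceptor=reject_destructive,  # proposed kwarg; does not exist yet
)
```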
### Expected behavior
LangChain should provide a mechanism to intercept SQL before sending it to the database. During this interception the SQL can be examined and rejected if it performs unsafe operations. | SQLDatabaseChain has SQL injection issue | https://api.github.com/repos/langchain-ai/langchain/issues/5923/comments | 7 | 2023-06-09T07:19:24Z | 2024-03-13T16:12:29Z | https://github.com/langchain-ai/langchain/issues/5923 | 1,749,279,355 | 5,923 |
[
"langchain-ai",
"langchain"
] | ### System Info
Mac M1
I've just upgraded to Langchain 0.0.194
I need to pass through a proxy, so I set HTTP_PROXY, HTTPS_PROXY, OPENAI_PROXY and REQUEST_BUNDLE_CA for the HTTPS certificate.
```
InvalidRequestError: Unrecognized request argument supplied: proxy
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Roughly this is the code to reproduce the issue:
```python
from langchain.chat_models import PromptLayerChatOpenAI
from langchain.chains.question_answering import load_qa_chain

# `db` is a previously built vector store (embeddings_dict['Lit_120_To'] in the traceback below).
llm = PromptLayerChatOpenAI(model_name="gpt-3.5-turbo",
                            temperature=0,
                            return_pl_id=True,
                            pl_tags=["question-answering", "chatgpt"])
chain = load_qa_chain(llm, chain_type="stuff")

query_main_magnet = "Extract the starting magnets or main materials being studied in the text. Include all the compositions mentioned."
relevant_documents = db.as_retriever().get_relevant_documents(query_main_magnet)
chain.run(input_documents=relevant_documents, question=query_main_magnet)
and the issue
```
---------------------------------------------------------------------------
InvalidRequestError Traceback (most recent call last)
Cell In[40], line 10
8 query_main_magnet = "Extract the starting magnets or main materials being studied in the text. Include all the compositions mentioned."
9 relevant_documents = embeddings_dict['Lit_120_To'].as_retriever().get_relevant_documents(query_main_magnet)
---> 10 chain.run(input_documents=relevant_documents, question=query_main_magnet)
File ~/anaconda3/envs/magneto/lib/python3.9/site-packages/langchain/chains/base.py:259, in Chain.run(self, callbacks, *args, **kwargs)
256 return self(args[0], callbacks=callbacks)[self.output_keys[0]]
258 if kwargs and not args:
--> 259 return self(kwargs, callbacks=callbacks)[self.output_keys[0]]
261 if not kwargs and not args:
262 raise ValueError(
263 "`run` supported with either positional arguments or keyword arguments,"
264 " but none were provided."
265 )
File ~/anaconda3/envs/magneto/lib/python3.9/site-packages/langchain/chains/base.py:145, in Chain.__call__(self, inputs, return_only_outputs, callbacks, include_run_info)
143 except (KeyboardInterrupt, Exception) as e:
144 run_manager.on_chain_error(e)
--> 145 raise e
146 run_manager.on_chain_end(outputs)
147 final_outputs: Dict[str, Any] = self.prep_outputs(
148 inputs, outputs, return_only_outputs
149 )
File ~/anaconda3/envs/magneto/lib/python3.9/site-packages/langchain/chains/base.py:139, in Chain.__call__(self, inputs, return_only_outputs, callbacks, include_run_info)
133 run_manager = callback_manager.on_chain_start(
134 {"name": self.__class__.__name__},
135 inputs,
136 )
137 try:
138 outputs = (
--> 139 self._call(inputs, run_manager=run_manager)
140 if new_arg_supported
141 else self._call(inputs)
142 )
143 except (KeyboardInterrupt, Exception) as e:
144 run_manager.on_chain_error(e)
File ~/anaconda3/envs/magneto/lib/python3.9/site-packages/langchain/chains/combine_documents/base.py:84, in BaseCombineDocumentsChain._call(self, inputs, run_manager)
82 # Other keys are assumed to be needed for LLM prediction
83 other_keys = {k: v for k, v in inputs.items() if k != self.input_key}
---> 84 output, extra_return_dict = self.combine_docs(
85 docs, callbacks=_run_manager.get_child(), **other_keys
86 )
87 extra_return_dict[self.output_key] = output
88 return extra_return_dict
File ~/anaconda3/envs/magneto/lib/python3.9/site-packages/langchain/chains/combine_documents/stuff.py:87, in StuffDocumentsChain.combine_docs(self, docs, callbacks, **kwargs)
85 inputs = self._get_inputs(docs, **kwargs)
86 # Call predict on the LLM.
---> 87 return self.llm_chain.predict(callbacks=callbacks, **inputs), {}
File ~/anaconda3/envs/magneto/lib/python3.9/site-packages/langchain/chains/llm.py:213, in LLMChain.predict(self, callbacks, **kwargs)
198 def predict(self, callbacks: Callbacks = None, **kwargs: Any) -> str:
199 """Format prompt with kwargs and pass to LLM.
200
201 Args:
(...)
211 completion = llm.predict(adjective="funny")
212 """
--> 213 return self(kwargs, callbacks=callbacks)[self.output_key]
File ~/anaconda3/envs/magneto/lib/python3.9/site-packages/langchain/chains/base.py:145, in Chain.__call__(self, inputs, return_only_outputs, callbacks, include_run_info)
143 except (KeyboardInterrupt, Exception) as e:
144 run_manager.on_chain_error(e)
--> 145 raise e
146 run_manager.on_chain_end(outputs)
147 final_outputs: Dict[str, Any] = self.prep_outputs(
148 inputs, outputs, return_only_outputs
149 )
File ~/anaconda3/envs/magneto/lib/python3.9/site-packages/langchain/chains/base.py:139, in Chain.__call__(self, inputs, return_only_outputs, callbacks, include_run_info)
133 run_manager = callback_manager.on_chain_start(
134 {"name": self.__class__.__name__},
135 inputs,
136 )
137 try:
138 outputs = (
--> 139 self._call(inputs, run_manager=run_manager)
140 if new_arg_supported
141 else self._call(inputs)
142 )
143 except (KeyboardInterrupt, Exception) as e:
144 run_manager.on_chain_error(e)
File ~/anaconda3/envs/magneto/lib/python3.9/site-packages/langchain/chains/llm.py:69, in LLMChain._call(self, inputs, run_manager)
64 def _call(
65 self,
66 inputs: Dict[str, Any],
67 run_manager: Optional[CallbackManagerForChainRun] = None,
68 ) -> Dict[str, str]:
---> 69 response = self.generate([inputs], run_manager=run_manager)
70 return self.create_outputs(response)[0]
File ~/anaconda3/envs/magneto/lib/python3.9/site-packages/langchain/chains/llm.py:79, in LLMChain.generate(self, input_list, run_manager)
77 """Generate LLM result from inputs."""
78 prompts, stop = self.prep_prompts(input_list, run_manager=run_manager)
---> 79 return self.llm.generate_prompt(
80 prompts, stop, callbacks=run_manager.get_child() if run_manager else None
81 )
File ~/anaconda3/envs/magneto/lib/python3.9/site-packages/langchain/chat_models/base.py:148, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks)
141 def generate_prompt(
142 self,
143 prompts: List[PromptValue],
144 stop: Optional[List[str]] = None,
145 callbacks: Callbacks = None,
146 ) -> LLMResult:
147 prompt_messages = [p.to_messages() for p in prompts]
--> 148 return self.generate(prompt_messages, stop=stop, callbacks=callbacks)
File ~/anaconda3/envs/magneto/lib/python3.9/site-packages/langchain/chat_models/base.py:92, in BaseChatModel.generate(self, messages, stop, callbacks)
90 except (KeyboardInterrupt, Exception) as e:
91 run_manager.on_llm_error(e)
---> 92 raise e
93 llm_output = self._combine_llm_outputs([res.llm_output for res in results])
94 generations = [res.generations for res in results]
File ~/anaconda3/envs/magneto/lib/python3.9/site-packages/langchain/chat_models/base.py:84, in BaseChatModel.generate(self, messages, stop, callbacks)
80 new_arg_supported = inspect.signature(self._generate).parameters.get(
81 "run_manager"
82 )
83 try:
---> 84 results = [
85 self._generate(m, stop=stop, run_manager=run_manager)
86 if new_arg_supported
87 else self._generate(m, stop=stop)
88 for m in messages
89 ]
90 except (KeyboardInterrupt, Exception) as e:
91 run_manager.on_llm_error(e)
File ~/anaconda3/envs/magneto/lib/python3.9/site-packages/langchain/chat_models/base.py:85, in <listcomp>(.0)
80 new_arg_supported = inspect.signature(self._generate).parameters.get(
81 "run_manager"
82 )
83 try:
84 results = [
---> 85 self._generate(m, stop=stop, run_manager=run_manager)
86 if new_arg_supported
87 else self._generate(m, stop=stop)
88 for m in messages
89 ]
90 except (KeyboardInterrupt, Exception) as e:
91 run_manager.on_llm_error(e)
File ~/anaconda3/envs/magneto/lib/python3.9/site-packages/langchain/chat_models/promptlayer_openai.py:50, in PromptLayerChatOpenAI._generate(self, messages, stop, run_manager)
47 from promptlayer.utils import get_api_key, promptlayer_api_request
49 request_start_time = datetime.datetime.now().timestamp()
---> 50 generated_responses = super()._generate(messages, stop, run_manager)
51 request_end_time = datetime.datetime.now().timestamp()
52 message_dicts, params = super()._create_message_dicts(messages, stop)
File ~/anaconda3/envs/magneto/lib/python3.9/site-packages/langchain/chat_models/openai.py:323, in ChatOpenAI._generate(self, messages, stop, run_manager)
319 message = _convert_dict_to_message(
320 {"content": inner_completion, "role": role}
321 )
322 return ChatResult(generations=[ChatGeneration(message=message)])
--> 323 response = self.completion_with_retry(messages=message_dicts, **params)
324 return self._create_chat_result(response)
File ~/anaconda3/envs/magneto/lib/python3.9/site-packages/langchain/chat_models/openai.py:284, in ChatOpenAI.completion_with_retry(self, **kwargs)
280 @retry_decorator
281 def _completion_with_retry(**kwargs: Any) -> Any:
282 return self.client.create(**kwargs)
--> 284 return _completion_with_retry(**kwargs)
File ~/anaconda3/envs/magneto/lib/python3.9/site-packages/tenacity/__init__.py:289, in BaseRetrying.wraps.<locals>.wrapped_f(*args, **kw)
287 @functools.wraps(f)
288 def wrapped_f(*args: t.Any, **kw: t.Any) -> t.Any:
--> 289 return self(f, *args, **kw)
File ~/anaconda3/envs/magneto/lib/python3.9/site-packages/tenacity/__init__.py:379, in Retrying.__call__(self, fn, *args, **kwargs)
377 retry_state = RetryCallState(retry_object=self, fn=fn, args=args, kwargs=kwargs)
378 while True:
--> 379 do = self.iter(retry_state=retry_state)
380 if isinstance(do, DoAttempt):
381 try:
File ~/anaconda3/envs/magneto/lib/python3.9/site-packages/tenacity/__init__.py:314, in BaseRetrying.iter(self, retry_state)
312 is_explicit_retry = fut.failed and isinstance(fut.exception(), TryAgain)
313 if not (is_explicit_retry or self.retry(retry_state)):
--> 314 return fut.result()
316 if self.after is not None:
317 self.after(retry_state)
File ~/anaconda3/envs/magneto/lib/python3.9/concurrent/futures/_base.py:439, in Future.result(self, timeout)
437 raise CancelledError()
438 elif self._state == FINISHED:
--> 439 return self.__get_result()
441 self._condition.wait(timeout)
443 if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:
File ~/anaconda3/envs/magneto/lib/python3.9/concurrent/futures/_base.py:391, in Future.__get_result(self)
389 if self._exception:
390 try:
--> 391 raise self._exception
392 finally:
393 # Break a reference cycle with the exception in self._exception
394 self = None
File ~/anaconda3/envs/magneto/lib/python3.9/site-packages/tenacity/__init__.py:382, in Retrying.__call__(self, fn, *args, **kwargs)
380 if isinstance(do, DoAttempt):
381 try:
--> 382 result = fn(*args, **kwargs)
383 except BaseException: # noqa: B902
384 retry_state.set_exception(sys.exc_info()) # type: ignore[arg-type]
File ~/anaconda3/envs/magneto/lib/python3.9/site-packages/langchain/chat_models/openai.py:282, in ChatOpenAI.completion_with_retry.<locals>._completion_with_retry(**kwargs)
280 @retry_decorator
281 def _completion_with_retry(**kwargs: Any) -> Any:
--> 282 return self.client.create(**kwargs)
File ~/anaconda3/envs/magneto/lib/python3.9/site-packages/openai/api_resources/chat_completion.py:25, in ChatCompletion.create(cls, *args, **kwargs)
23 while True:
24 try:
---> 25 return super().create(*args, **kwargs)
26 except TryAgain as e:
27 if timeout is not None and time.time() > start + timeout:
File ~/anaconda3/envs/magneto/lib/python3.9/site-packages/openai/api_resources/abstract/engine_api_resource.py:153, in EngineAPIResource.create(cls, api_key, api_base, api_type, request_id, api_version, organization, **params)
127 @classmethod
128 def create(
129 cls,
(...)
136 **params,
137 ):
138 (
139 deployment_id,
140 engine,
(...)
150 api_key, api_base, api_type, api_version, organization, **params
151 )
--> 153 response, _, api_key = requestor.request(
154 "post",
155 url,
156 params=params,
157 headers=headers,
158 stream=stream,
159 request_id=request_id,
160 request_timeout=request_timeout,
161 )
163 if stream:
164 # must be an iterator
165 assert not isinstance(response, OpenAIResponse)
File ~/anaconda3/envs/magneto/lib/python3.9/site-packages/openai/api_requestor.py:230, in APIRequestor.request(self, method, url, params, headers, files, stream, request_id, request_timeout)
209 def request(
210 self,
211 method,
(...)
218 request_timeout: Optional[Union[float, Tuple[float, float]]] = None,
219 ) -> Tuple[Union[OpenAIResponse, Iterator[OpenAIResponse]], bool, str]:
220 result = self.request_raw(
221 method.lower(),
222 url,
(...)
228 request_timeout=request_timeout,
229 )
--> 230 resp, got_stream = self._interpret_response(result, stream)
231 return resp, got_stream, self.api_key
File ~/anaconda3/envs/magneto/lib/python3.9/site-packages/openai/api_requestor.py:624, in APIRequestor._interpret_response(self, result, stream)
616 return (
617 self._interpret_response_line(
618 line, result.status_code, result.headers, stream=True
619 )
620 for line in parse_stream(result.iter_lines())
621 ), True
622 else:
623 return (
--> 624 self._interpret_response_line(
625 result.content.decode("utf-8"),
626 result.status_code,
627 result.headers,
628 stream=False,
629 ),
630 False,
631 )
File ~/anaconda3/envs/magneto/lib/python3.9/site-packages/openai/api_requestor.py:687, in APIRequestor._interpret_response_line(self, rbody, rcode, rheaders, stream)
685 stream_error = stream and "error" in resp.data
686 if stream_error or not 200 <= rcode < 300:
--> 687 raise self.handle_error_response(
688 rbody, rcode, resp.data, rheaders, stream_error=stream_error
689 )
690 return resp
InvalidRequestError: Unrecognized request argument supplied: proxy
```
### Expected behavior
That it works without the proxy error. | [proxy users] Possible regression after upgrading to 0.0.194: InvalidRequestError: Unrecognized request argument supplied: proxy | https://api.github.com/repos/langchain-ai/langchain/issues/5915/comments | 2 | 2023-06-09T02:51:34Z | 2023-06-21T04:16:05Z | https://github.com/langchain-ai/langchain/issues/5915 | 1,749,011,804 | 5,915
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I'd like my server that uses langchain to run asynchronously; specifically, I'd like to `await` the call to `chain({})`. My understanding is that the server would then keep handling other incoming requests instead of blocking until `chain({})` finishes.
```
@app.post("/document/qa")
async def qa(document: Document):
try:
api_key = os.environ.get("OPENAI_API_KEY")
embedding = OpenAIEmbeddings()
collection_name = document.user_id + "/" + document.pdf_title
connection_result = check_if_connection_exists(collection_name)
if connection_result == False:
raise Exception("Collection does not exist")
store = PGVector(
connection_string=connection_string,
embedding_function=embedding,
collection_name=collection_name,
distance_strategy=DistanceStrategy.COSINE
)
memory_response = get_conversation_memory(collection_name)
if (memory_response["status-code"] == 404):
raise Exception("Failed to retrieve conversation memory")
json_response = memory_response["request-response"]
chat_memory = ChatMemory.from_pg_conversation(
response_json=json_response)
retriever = store.as_retriever()
query = document.query
llm = ChatOpenAI(streaming=False, openai_api_key=api_key)
# don't mind the streaming lol it's a work in progress
streaming_llm = ChatOpenAI(streaming=False, openai_api_key=api_key, verbose=True,)
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)
doc_chain = load_qa_with_sources_chain(llm=streaming_llm, chain_type="map_reduce")
chain = ConversationalRetrievalChain(
retriever=retriever,
question_generator=question_generator,
combine_docs_chain=doc_chain,
)
# I'd like to have await here
result = chain({
"question": query,
"chat_history": chat_memory.history
},
return_only_outputs=True
)
ai_split_response = result["answer"].split("SOURCES: ")
ai_answer = ai_split_response[0]
ai_sources = ai_split_response[1].split(", ")
# removes the period at the end of the last source
ai_sources[-1] = ai_sources[-1].replace(".", "")
# posts conversation result to the database
post_response = post_conversation_memory(
title=collection_name,
user_message=query,
ai_response=ai_answer,
ai_sources=ai_sources,
)
if (post_response["status-code"] == 404):
raise Exception("Failed to post conversation memory")
return {
"question": query,
"result": ai_answer,
"sources": ai_sources,
}
except Exception as e:
raise HTTPException(status_code=404, detail=str(e))
```
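For reference, this is roughly what I'm after inside the handler (a sketch, assuming the chain exposes an async interface such as `acall`):
```python
# Hypothetical async variant of the call marked above:
result = await chain.acall(
    {"question": query, "chat_history": chat_memory.history},
    return_only_outputs=True,
)
```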
### Suggestion:
_No response_ | How can I make langchain run asynchronously? | https://api.github.com/repos/langchain-ai/langchain/issues/5913/comments | 1 | 2023-06-09T02:27:09Z | 2023-06-09T02:42:42Z | https://github.com/langchain-ai/langchain/issues/5913 | 1,748,990,134 | 5,913 |
[
"langchain-ai",
"langchain"
] | ### System Info
In Google Collab
What I have installed
%pip install requests==2.27.1
%pip install chromadb==<compatible version>
%pip install langchain duckdb unstructured chromadb openai tiktoken
MacBook M1
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings() << PASS
create a vector database and use it to index the embeddings
2. class Document:
def __init__(self, text, metadata):
self.page_content = text
self.metadata = metadata << PASS
3. documents = [Document(text, metadata) for text, metadata in zip(texts, metadata_list)] << PASS
4. from langchain.vectorstores import Chroma
db = Chroma.from_documents(documents, embeddings, model='davinci') << AuthenticationError: <empty message>
---------------------------------------------------------------------------
AuthenticationError Traceback (most recent call last)
[<ipython-input-87-ea7b035908f9>](https://rzac4prlyba-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230606-060135-RC00_538140384#) in <cell line: 3>()
1 from langchain.vectorstores import Chroma
2
----> 3 db = Chroma.from_documents(documents, embeddings, model='davinci')
17 frames
[/usr/local/lib/python3.10/dist-packages/openai/api_requestor.py](https://rzac4prlyba-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230606-060135-RC00_538140384#) in _interpret_response_line(self, rbody, rcode, rheaders, stream)
761 stream_error = stream and "error" in resp.data
762 if stream_error or not 200 <= rcode < 300:
--> 763 raise self.handle_error_response(
764 rbody, rcode, resp.data, rheaders, stream_error=stream_error
765 )
AuthenticationError: <empty message>
What should I do?
FYI, I tried to follow this link: https://medium.com/mlearning-ai/using-chatgpt-for-question-answering-on-your-own-data-afa33d82fbd0
and he seems to have no issue.
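I suspect the key never reaches the client; a minimal check I plan to try (the key value is a placeholder):
```python
import os
os.environ["OPENAI_API_KEY"] = "sk-..."  # placeholder; set before creating OpenAIEmbeddings
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
```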
### Expected behavior
The call should pass without an authentication error. | chroma.from_documents AuthenticationError | https://api.github.com/repos/langchain-ai/langchain/issues/5910/comments | 4 | 2023-06-08T23:55:36Z | 2023-09-19T16:08:55Z | https://github.com/langchain-ai/langchain/issues/5910 | 1,748,813,751 | 5,910
[
"langchain-ai",
"langchain"
] | ### Feature request
I could work around the call from `load_summarize_chain` to add response in certain languages
https://github.com/hwchase17/langchain/blob/master/langchain/chains/summarize/map_reduce_prompt.py
The work around was something like this:
```python
from langchain.prompts import PromptTemplate
prompt_template = """Write a concise summary of the following text.
The following text is in 'pt-br' and you should respond in 'pt-br':
"{text}"
CONCISE SUMMARY:"""
PROMPT = PromptTemplate(template=prompt_template, input_variables=["text"])
```
Is it possible to add a language param in some functions to add this approach, if not, the answers would be always in English as its the use language in requests.
### Motivation
Help people with responses in the desired language
### Your contribution
I can help of course, even with a PR. Just need to get more familiar with the tool | Add desired expected language response as a function param. | https://api.github.com/repos/langchain-ai/langchain/issues/5907/comments | 1 | 2023-06-08T22:02:13Z | 2023-09-14T16:05:40Z | https://github.com/langchain-ai/langchain/issues/5907 | 1,748,709,596 | 5,907 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.194
langchainplus-sdk 0.0.6
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Run `from langchain.tools import PubmedQueryRun`
2. Error:
```
ImportError: cannot import name 'PubmedQueryRun' from 'langchain.tools' (C:\Users\USER\anaconda3\lib\site-packages\langchain\tools\__init__.py)

Traceback:
  File "C:\Users\USER\anaconda3\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 552, in _run_script
    exec(code, module.__dict__)
  File "C:\Users\USER\Dropbox\AI\Cloned Repis\Langchain-Crash-Course\pubmed.py", line 31, in <module>
    from langchain.tools import PubmedQueryRun
```
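A quick diagnostic I ran afterwards (adjust for your environment):
```python
# Check the installed version and what Pubmed-related names the module exposes.
import langchain, langchain.tools
print(langchain.__version__)
print([name for name in dir(langchain.tools) if "ubmed" in name])
```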
### Expected behavior
It seems PubmedQueryRun is not defined. | PubmedQueryRun not working | https://api.github.com/repos/langchain-ai/langchain/issues/5906/comments | 4 | 2023-06-08T21:55:40Z | 2023-11-28T16:10:40Z | https://github.com/langchain-ai/langchain/issues/5906 | 1,748,703,628 | 5,906 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Currently, the newest version `0.0.194` is not building on conda-forge since `langchainplus-sdk` is missing.
I can't add that to conda-forge, though, since I have neither found source code nor a licence file for that package - other than the PyPI release.
Is `langchainplus-sdk` a strongly needed dependency or can it be made optional like many others as well?
This prevents the newest version from being installable through conda/mama/micromamba etc.
### Suggestion:
Remove the dependency on `langchainplus-sdk` with `optional=True`. | Issue: langchainplus-sdk dependency | https://api.github.com/repos/langchain-ai/langchain/issues/5905/comments | 13 | 2023-06-08T21:16:55Z | 2023-11-08T16:08:50Z | https://github.com/langchain-ai/langchain/issues/5905 | 1,748,649,765 | 5,905 |
[
"langchain-ai",
"langchain"
] | ### Feature request
As of now callbacks are provisioned for OPEN AI which gives completion tokens, prompt tokens and total cost. Similar feature is needed for VertexAI Chat Models of GCP.
### Motivation
This is an important feature as this abstraction is needed for GCP
### Your contribution
i can test and validate | langchain callback support PALM2 GCP | https://api.github.com/repos/langchain-ai/langchain/issues/5904/comments | 5 | 2023-06-08T20:55:13Z | 2024-02-19T19:33:21Z | https://github.com/langchain-ai/langchain/issues/5904 | 1,748,622,671 | 5,904 |
[
"langchain-ai",
"langchain"
] | ### System Info
python in Azure Function, langchain 0.0.194
this code:
```
tools = [
PubmedQueryRun(),
ArxivQueryRun(),
]
```
and then loading it into an agent:
```
agent_chain = initialize_agent(tools, llm, agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION, verbose=True, memory=memory, return_intermediate_steps=False,
max_iterations=15, max_execution_time=100, early_stopping_method="generate")
```
is giving this error:
```
1 validation error for ArxivQueryRun
[2023-06-08T18:28:08.426Z] api_wrapper
[2023-06-08T18:28:08.426Z] field required (type=value_error.missing)
[2023-06-08T18:28:08.609Z] Executed 'Functions.HttpTriggerCallNavigator' (Failed, Id=70088244-eda1-456d-b9bc-1acedfe71af2, Duration=3756ms)
[2023-06-08T18:28:08.609Z] System.Private.CoreLib: Exception while executing function: Functions.HttpTriggerCallNavigator. System.Private.CoreLib: Result: Failure
[2023-06-08T18:28:08.609Z] Exception: TypeError: unable to encode outgoing TypedData: unsupported type "<class 'azure.functions.http.HttpResponseConverter'>" for Python type "NoneType"
[2023-06-08T18:28:08.609Z] Stack: File "/usr/lib/azure-functions-core-tools-4/workers/python/3.9/LINUX/X64/azure_functions_worker/dispatcher.py", line 506, in _handle__invocation_request
[2023-06-08T18:28:08.610Z] return_value = bindings.to_outgoing_proto(
[2023-06-08T18:28:08.610Z] File "/usr/lib/azure-functions-core-tools-4/workers/python/3.9/LINUX/X64/azure_functions_worker/bindings/meta.py", line 152, in to_outgoing_proto
[2023-06-08T18:28:08.610Z] datum = get_datum(binding, obj, pytype)
[2023-06-08T18:28:08.610Z] File "/usr/lib/azure-functions-core-tools-4/workers/python/3.9/LINUX/X64/azure_functions_worker/bindings/meta.py", line 110, in get_datum
[2023-06-08T18:28:08.610Z] raise TypeError(
[2023-06-08T18:28:08.610Z] .
```
### Who can help?
@hwchase17 @vowelparrot
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
tools = [
PubmedQueryRun(),
ArxivQueryRun(),
]
agent_chain = initialize_agent(tools, llm, agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION, verbose=True, memory=memory, return_intermediate_steps=False,
max_iterations=15, max_execution_time=100, early_stopping_method="generate")
```
### Expected behavior
1 validation error for ArxivQueryRun
[2023-06-08T18:28:08.426Z] api_wrapper
[2023-06-08T18:28:08.426Z] field required (type=value_error.missing) | Arxiv Tool validation error (api_wrapper field required (type=value_error.missing)) | https://api.github.com/repos/langchain-ai/langchain/issues/5901/comments | 3 | 2023-06-08T18:56:35Z | 2023-06-12T01:04:25Z | https://github.com/langchain-ai/langchain/issues/5901 | 1,748,467,868 | 5,901 |
[
"langchain-ai",
"langchain"
] | when i try and run this code:
from langchain import OpenAI, SQLDatabase, SQLDatabaseChain
db = SQLDatabase.from_uri("sqlite:///my_data.db")
llm = OpenAI(temperature=0, verbose=True)
i get this error: typing.ClassVar[typing.Collection[str]] is not valid as type argument
I have tested it on a snowflake connection too with the exact same error.
I checked the langchain documentation and nothing has been updated or changed from what i can tell. | SQLDatabaseChain and agent no longer working at all. Code was working for past 4 weeks. no changes made to my code and everything errored out as of this morning (6/8/24 ~11am ET). It worked flawlessly just yesterday (6/7) for a demo i gave. | https://api.github.com/repos/langchain-ai/langchain/issues/5900/comments | 0 | 2023-06-08T18:39:26Z | 2023-06-09T14:51:14Z | https://github.com/langchain-ai/langchain/issues/5900 | 1,748,448,821 | 5,900 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hello everyone! I'm having trouble setting up the successful usage of a custom QA prompt template that includes input variables with my RetrievalQA.from_chain_type.
This issue is similar to https://github.com/hwchase17/langchain/pull/3425
I am using LangChain v0.0.192 with FAISS vectorstore.
As below my custom prompt has three input
**My_loader_
made_corrections_
output_format_instructions_**
My code currently looks like this:
```python
pdf_template_stuff = """
You are a Contract Review Specialist. You have been given a dataloader as
langchain.document_loaders.directory.DirectoryLoader: {my_loader_}.
Use the following pieces of context to answer the question at the end.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Use as much detail as possible when responding and try to format the answer in markdown as much as possible.
Based on this data, do the following:
1- Read the input PDFs and extract the relevant information.
2- Answer the question using the extracted information.
3- Cross-reference the answer with other PDF files to ensure accuracy.
4- Based on the historic corrections: {made_corrections_}, make corrections to the extracted answers.
5- Provide the answer in alignment with the {output_format_instructions_} format.
Context: {context}
Question: {question}
Helpful Answer:
"""

pdf_prompt_template = PromptTemplate(
    input_variables=["context", "question", "my_loader_", "made_corrections_", "output_format_instructions_"],
    template=pdf_template_stuff,
)

my_chain_type = "stuff"
my_qa_chain = RetrievalQA.from_chain_type(
    llm=my_specific_llm,
    chain_type=my_chain_type,
    retriever=my_retriever,
    return_source_documents=False,
    chain_type_kwargs={"prompt": pdf_prompt_template},
)

final_result = my_qa_chain(query="MY SAMPLE QUESTION", my_loader_=my_loader,
                           made_corrections_=made_corrections,
                           output_format_instructions_=output_format_instructions)
```
which raises:
ValueError: Missing some input keys: {'output_format_instructions_', 'my_loader_', 'made_corrections_'}
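For what it's worth, a hedged workaround sketch (not from the original report): pre-filling the extra variables with `partial_variables` means the chain only has to supply `context` and `question` at run time. This assumes the extra values can be rendered as strings.
```python
# Hedged workaround sketch: bake the extra variables into the prompt up front
# via partial_variables, so RetrievalQA only needs "context" and "question".
pdf_prompt_template = PromptTemplate(
    input_variables=["context", "question"],
    partial_variables={
        "my_loader_": str(my_loader),
        "made_corrections_": str(made_corrections),
        "output_format_instructions_": str(output_format_instructions),
    },
    template=pdf_template_stuff,
)

my_qa_chain = RetrievalQA.from_chain_type(
    llm=my_specific_llm,
    chain_type="stuff",
    retriever=my_retriever,
    chain_type_kwargs={"prompt": pdf_prompt_template},
)
final_result = my_qa_chain({"query": "MY SAMPLE QUESTION"})
```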
Thanks for your help
### Suggestion:
_No response_ | Issue: <RetrievalQA.from_chain_type with custom prompt template> | https://api.github.com/repos/langchain-ai/langchain/issues/5899/comments | 3 | 2023-06-08T18:29:01Z | 2023-12-01T16:09:49Z | https://github.com/langchain-ai/langchain/issues/5899 | 1,748,436,224 | 5,899 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
**Observation:** TokenTextSplitter processes passages smaller than the specified chunk_size and creates many additional small chunks with duplicated overlaps.
**Impact:** The extra small chunks with duplicated overlaps increase storage size and computation time, and may confuse the retrieval model.
**For example:** The second TokenTextSplitter creates 366 more small chunks, even though new_chunks1 doesn't contain any passages longer than chunk_size=500.
<img width="950" alt="Screen Shot 2023-06-08 at 10 58 52 AM" src="https://github.com/hwchase17/langchain/assets/103061109/d05c28a9-b6dc-4e44-ad00-c892457055d8">
### Suggestion:
In the source code for langchain.text_splitter, in the split_text_on_tokens function, could we add a check along the lines of
`if len(input_ids) <= tokenizer.tokens_per_chunk: splits.append(tokenizer.decode(input_ids))`
so that inputs already shorter than one chunk are appended once instead of being re-chunked? | Issue: TokenTextSplitter processes passages less than chunk_size and creates duplicate overlaps | https://api.github.com/repos/langchain-ai/langchain/issues/5897/comments | 3 | 2023-06-08T17:59:46Z | 2024-03-20T16:04:58Z | https://github.com/langchain-ai/langchain/issues/5897 | 1,748,401,334 | 5,897 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain version: 0.0.194
Platform: HEX notebook running Python 3.9
When trying to import KNNRetriever, I get the error below:
"typing.ClassVar[typing.Collection[str]] is not valid as type argument"
Here's my code:
```python
from langchain.retrievers import KNNRetriever
from langchain.embeddings import OpenAIEmbeddings
from langchain.chains import RetrievalQA
```
Here's the error report:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-12-da50b4368ae7> in <cell line: 1>()
----> 1 from langchain.retrievers import KNNRetriever
2 from langchain.embeddings import OpenAIEmbeddings
3 from langchain.chains import RetrievalQA
~/.cache/pypoetry/virtualenvs/python-kernel-OtKFaj5M-py3.9/lib/python3.9/site-packages/langchain/__init__.py in <module>
4 from typing import Optional
5
----> 6 from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
7 from langchain.cache import BaseCache
8 from langchain.chains import (
~/.cache/pypoetry/virtualenvs/python-kernel-OtKFaj5M-py3.9/lib/python3.9/site-packages/langchain/agents/__init__.py in <module>
1 """Interface for agents."""
----> 2 from langchain.agents.agent import (
3 Agent,
4 AgentExecutor,
5 AgentOutputParser,
~/.cache/pypoetry/virtualenvs/python-kernel-OtKFaj5M-py3.9/lib/python3.9/site-packages/langchain/agents/agent.py in <module>
14
15 from langchain.agents.agent_types import AgentType
---> 16 from langchain.agents.tools import InvalidTool
17 from langchain.base_language import BaseLanguageModel
18 from langchain.callbacks.base import BaseCallbackManager
~/.cache/pypoetry/virtualenvs/python-kernel-OtKFaj5M-py3.9/lib/python3.9/site-packages/langchain/agents/tools.py in <module>
6 CallbackManagerForToolRun,
7 )
----> 8 from langchain.tools.base import BaseTool, Tool, tool
9
10
~/.cache/pypoetry/virtualenvs/python-kernel-OtKFaj5M-py3.9/lib/python3.9/site-packages/langchain/tools/__init__.py in <module>
44 )
45 from langchain.tools.plugin import AIPluginTool
---> 46 from langchain.tools.powerbi.tool import (
47 InfoPowerBITool,
48 ListPowerBITool,
~/.cache/pypoetry/virtualenvs/python-kernel-OtKFaj5M-py3.9/lib/python3.9/site-packages/langchain/tools/powerbi/tool.py in <module>
9 CallbackManagerForToolRun,
10 )
---> 11 from langchain.chains.llm import LLMChain
12 from langchain.tools.base import BaseTool
13 from langchain.tools.powerbi.prompt import (
~/.cache/pypoetry/virtualenvs/python-kernel-OtKFaj5M-py3.9/lib/python3.9/site-packages/langchain/chains/__init__.py in <module>
1 """Chains are easily reusable components which can be linked together."""
----> 2 from langchain.chains.api.base import APIChain
3 from langchain.chains.api.openapi.chain import OpenAPIEndpointChain
4 from langchain.chains.combine_documents.base import AnalyzeDocumentChain
5 from langchain.chains.constitutional_ai.base import ConstitutionalChain
~/.cache/pypoetry/virtualenvs/python-kernel-OtKFaj5M-py3.9/lib/python3.9/site-packages/langchain/chains/api/base.py in <module>
11 CallbackManagerForChainRun,
12 )
---> 13 from langchain.chains.api.prompt import API_RESPONSE_PROMPT, API_URL_PROMPT
14 from langchain.chains.base import Chain
15 from langchain.chains.llm import LLMChain
~/.cache/pypoetry/virtualenvs/python-kernel-OtKFaj5M-py3.9/lib/python3.9/site-packages/langchain/chains/api/prompt.py in <module>
1 # flake8: noqa
----> 2 from langchain.prompts.prompt import PromptTemplate
3
4 API_URL_PROMPT_TEMPLATE = """You are given the below API Documentation:
5 {api_docs}
~/.cache/pypoetry/virtualenvs/python-kernel-OtKFaj5M-py3.9/lib/python3.9/site-packages/langchain/prompts/__init__.py in <module>
1 """Prompt template classes."""
2 from langchain.prompts.base import BasePromptTemplate, StringPromptTemplate
----> 3 from langchain.prompts.chat import (
4 AIMessagePromptTemplate,
5 BaseChatPromptTemplate,
~/.cache/pypoetry/virtualenvs/python-kernel-OtKFaj5M-py3.9/lib/python3.9/site-packages/langchain/prompts/chat.py in <module>
8 from pydantic import BaseModel, Field
9
---> 10 from langchain.memory.buffer import get_buffer_string
11 from langchain.prompts.base import BasePromptTemplate, StringPromptTemplate
12 from langchain.prompts.prompt import PromptTemplate
~/.cache/pypoetry/virtualenvs/python-kernel-OtKFaj5M-py3.9/lib/python3.9/site-packages/langchain/memory/__init__.py in <module>
28 from langchain.memory.summary_buffer import ConversationSummaryBufferMemory
29 from langchain.memory.token_buffer import ConversationTokenBufferMemory
---> 30 from langchain.memory.vectorstore import VectorStoreRetrieverMemory
31
32 __all__ = [
~/.cache/pypoetry/virtualenvs/python-kernel-OtKFaj5M-py3.9/lib/python3.9/site-packages/langchain/memory/vectorstore.py in <module>
8 from langchain.memory.utils import get_prompt_input_key
9 from langchain.schema import Document
---> 10 from langchain.vectorstores.base import VectorStoreRetriever
11
12
~/.cache/pypoetry/virtualenvs/python-kernel-OtKFaj5M-py3.9/lib/python3.9/site-packages/langchain/vectorstores/__init__.py in <module>
1 """Wrappers on top of vector stores."""
----> 2 from langchain.vectorstores.analyticdb import AnalyticDB
3 from langchain.vectorstores.annoy import Annoy
4 from langchain.vectorstores.atlas import AtlasDB
5 from langchain.vectorstores.base import VectorStore
~/.cache/pypoetry/virtualenvs/python-kernel-OtKFaj5M-py3.9/lib/python3.9/site-packages/langchain/vectorstores/analyticdb.py in <module>
20 from langchain.embeddings.base import Embeddings
21 from langchain.utils import get_from_dict_or_env
---> 22 from langchain.vectorstores.base import VectorStore
23
24 Base = declarative_base() # type: Any
~/.cache/pypoetry/virtualenvs/python-kernel-OtKFaj5M-py3.9/lib/python3.9/site-packages/langchain/vectorstores/base.py in <module>
355
356
--> 357 class VectorStoreRetriever(BaseRetriever, BaseModel):
358 vectorstore: VectorStore
359 search_type: str = "similarity"
~/.cache/pypoetry/virtualenvs/python-kernel-OtKFaj5M-py3.9/lib/python3.9/site-packages/pydantic/main.cpython-39-x86_64-linux-gnu.so in pydantic.main.ModelMetaclass.__new__()
~/.cache/pypoetry/virtualenvs/python-kernel-OtKFaj5M-py3.9/lib/python3.9/site-packages/pydantic/typing.cpython-39-x86_64-linux-gnu.so in pydantic.typing.resolve_annotations()
/usr/local/lib/python3.9/typing.py in _eval_type(t, globalns, localns, recursive_guard)
290 """
291 if isinstance(t, ForwardRef):
--> 292 return t._evaluate(globalns, localns, recursive_guard)
293 if isinstance(t, (_GenericAlias, GenericAlias)):
294 ev_args = tuple(_eval_type(a, globalns, localns, recursive_guard) for a in t.__args__)
/usr/local/lib/python3.9/typing.py in _evaluate(self, globalns, localns, recursive_guard)
551 sys.modules.get(self.__forward_module__, None), '__dict__', globalns
552 )
--> 553 type_ = _type_check(
554 eval(self.__forward_code__, globalns, localns),
555 "Forward references must evaluate to types.",
/usr/local/lib/python3.9/typing.py in _type_check(arg, msg, is_argument, module, allow_special_forms)
156 if (isinstance(arg, _GenericAlias) and
157 arg.__origin__ in invalid_generic_forms):
--> 158 raise TypeError(f"{arg} is not valid as type argument")
159 if arg in (Any, NoReturn, Final):
160 return arg
TypeError: typing.ClassVar[typing.Collection[str]] is not valid as type argument
```
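The final frames show pydantic evaluating a ForwardRef whose result, `ClassVar[Collection[str]]`, is rejected by typing's argument checker. A minimal standalone reproduction of that underlying behavior (an illustration using the private `typing._type_check` helper, independent of LangChain):
```python
# Minimal sketch: typing rejects ClassVar as a type argument, which is exactly
# the check that fails in the last traceback frame above.
import typing

try:
    typing._type_check(typing.ClassVar[typing.Collection[str]], "msg")
except TypeError as exc:
    print(exc)  # typing.ClassVar[typing.Collection[str]] is not valid as type argument
```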
### Who can help?
@vowelparrot @ag
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. In a Python 3.9 environment, run `from langchain.retrievers import KNNRetriever`
### Expected behavior
Importing `KNNRetriever` should succeed. Instead, it raises `TypeError: typing.ClassVar[typing.Collection[str]] is not valid as type argument`. | Having issues importing KNNRetriever from Langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5893/comments | 1 | 2023-06-08T16:43:29Z | 2023-09-14T16:05:45Z | https://github.com/langchain-ai/langchain/issues/5893 | 1,748,306,870 | 5,893 |
[
"langchain-ai",
"langchain"
] | Hi there,
I am running an LLM through a custom API that supports batch inference. However, LangChain's `generate` method only runs the LLM iteratively over the list of prompts. A sketch of one way around this is below.
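For illustration, a minimal sketch (not a definitive implementation) of overriding `_generate` on a custom LLM so one request covers all prompts; the base class otherwise loops over `_call`. Here `call_batch_api` is an assumed stand-in for the user's real API client.
```python
# Minimal sketch, assuming a hypothetical batch endpoint call_batch_api:
# override _generate so all prompts go out in a single batched request.
from typing import List, Optional

from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.base import LLM
from langchain.schema import Generation, LLMResult


def call_batch_api(prompts: List[str]) -> List[str]:
    """Hypothetical batch client; replace with the real API call."""
    return ["<completion for: %s>" % p for p in prompts]


class BatchedCustomLLM(LLM):
    @property
    def _llm_type(self) -> str:
        return "batched-custom"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
    ) -> str:
        # Single-prompt fallback required by the base class.
        return call_batch_api([prompt])[0]

    def _generate(
        self,
        prompts: List[str],
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
    ) -> LLMResult:
        completions = call_batch_api(prompts)  # one request for all prompts
        return LLMResult(generations=[[Generation(text=c)] for c in completions])
```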
Would there be an existing method that I could use to allow batch generation on my API? | Batch generation from API | https://api.github.com/repos/langchain-ai/langchain/issues/5892/comments | 5 | 2023-06-08T16:31:19Z | 2024-01-30T23:32:53Z | https://github.com/langchain-ai/langchain/issues/5892 | 1,748,292,147 | 5,892 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hello everyone! I can't successfully pass the CONDENSE_QUESTION_PROMPT to `ConversationalRetrievalChain`, while I can pass the basic QA_PROMPT.
I also need the CONDENSE_QUESTION_PROMPT because that is where I will pass the chat history, since I want to achieve a conversational chat over documents with a working chat history, and later possibly some summary memories to prevent hallucinations.
I am using LangChain v0.0.191 with the Chroma vectorstore v0.0.25.
What I want to achieve is that the model knows about the chat history.
My code currently looks like this:
```
self.llm = ChatOpenAI(model_name=self.model_name, temperature=self.temperature, openai_api_key=settings.OPENAI_API_KEY,
streaming=True, verbose=True, callback_manager=CallbackManager([ChainStreamHandler(generator)]))
self.memory = ConversationBufferMemory(
memory_key="chat_history", return_messages=True, output_key='answer')
QA_PROMPT = PromptTemplate(input_variables=["context", "question"], template=QA_PROMP_ALL_KNOWLEDGE)
retriever = chroma_Vectorstore.as_retriever(qa_template=QA_PROMP_ALL_KNOWLEDGE
,search_type="similarity_score_threshold", search_kwargs={"score_threshold": 0.2}
)
self.chain = ConversationalRetrievalChain.from_llm(self.llm, retriever=retriever,
return_source_documents=True,verbose=True,
memory=self.memory,
combine_docs_chain_kwargs={'prompt': QA_PROMPT})
result = self.chain({"question": question})
res_dict = {
"answer": result["answer"],
}
res_dict["source_documents"] = []
for source in result["source_documents"]:
res_dict["source_documents"].append({
"page_content": source.page_content,
"metadata": source.metadata
})
return res_dict
```
But where can I then pass the CONDENSE_QUESTION_PROMPT?
`CONDENSEprompt = PromptTemplate(input_variables=["chat_history", "question"], template=CONDENSE_QUESTION_PROMPT)`
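One likely place (a hedged sketch, not verified against v0.0.191 specifically): `ConversationalRetrievalChain.from_llm` accepts a `condense_question_prompt` keyword argument, which appears to be where the rephrasing prompt belongs, alongside the combine-docs prompt:
```python
# Hedged sketch: pass the condense prompt via the condense_question_prompt
# keyword of from_llm; CONDENSE_PROMPT is the template string defined below.
CONDENSEprompt = PromptTemplate(
    input_variables=["chat_history", "question"],
    template=CONDENSE_PROMPT,
)

self.chain = ConversationalRetrievalChain.from_llm(
    self.llm,
    retriever=retriever,
    return_source_documents=True,
    verbose=True,
    memory=self.memory,
    condense_question_prompt=CONDENSEprompt,
    combine_docs_chain_kwargs={"prompt": QA_PROMPT},
)
```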
My exact CONDENSE_QUESTION_PROMPT is:
```
CONDENSE_PROMPT = """Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question.
Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:"""
```
and
My exact QA_PROMPT_DOCUMENT_CHAT is
```
QA_PROMPT_DOCUMENT_CHAT = """You are a helpful AI assistant. Use the following pieces of context to answer the question at the end.
If the question is not related to the context, politely respond that you are trained to only answer questions that are related to the context.
If you don't know the answer, just say you don't know. DO NOT try to make up an answer. Try to make the title for every answer if it is possible. Answer in markdown.
Use as much detail as possible when responding and try to make answer in markdown format as much as possible.
{context}
Question: {question}
Answer in markdown format:"""
```
With my current code, the history doesn't work.
### Suggestion:
Maybe @hwchase17 or @agola11 can help. Thanks | Issue: ConversationalRetrievalChain - issue with passing the CONDENSE_QUESTION_PROMPT for working chat history | https://api.github.com/repos/langchain-ai/langchain/issues/5890/comments | 5 | 2023-06-08T16:22:14Z | 2023-11-06T09:36:51Z | https://github.com/langchain-ai/langchain/issues/5890 | 1,748,279,843 | 5,890 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain=0.0.194
python=3.11.3
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run:
`VertexAI(project="my_project_name")`
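A hedged workaround sketch (not from the original report): initializing the Vertex AI SDK directly before constructing the LangChain wrapper should guarantee the project is applied. `vertexai.init` comes from the google-cloud-aiplatform package, and the location value here is an assumed example.
```python
# Hedged workaround sketch: configure project/location on the SDK itself so
# the wrapper cannot silently ignore them. Location is an assumed value.
import vertexai
from langchain.llms import VertexAI

vertexai.init(project="my_project_name", location="us-central1")
llm = VertexAI()
```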
### Expected behavior
The client should connect to the supplied project_id. | When initializing VertexAI(), all passed parameters get ignored | https://api.github.com/repos/langchain-ai/langchain/issues/5889/comments | 0 | 2023-06-08T16:06:31Z | 2023-06-09T06:15:24Z | https://github.com/langchain-ai/langchain/issues/5889 | 1,748,233,322 | 5,889 |
[
"langchain-ai",
"langchain"
] | ### System Info
Fedora OS 38

Podman info:
```yaml
host:
arch: amd64
buildahVersion: 1.30.0
cgroupControllers:
- cpu
- io
- memory
- pids
cgroupManager: systemd
cgroupVersion: v2
conmon:
package: conmon-2.1.7-2.fc38.x86_64
path: /usr/bin/conmon
version: 'conmon version 2.1.7, commit: '
cpuUtilization:
idlePercent: 87.93
systemPercent: 2.6
userPercent: 9.48
cpus: 4
databaseBackend: boltdb
distribution:
distribution: fedora
variant: workstation
version: "38"
eventLogger: journald
hostname: fedora
idMappings:
gidmap:
- container_id: 0
host_id: 1000
size: 1
- container_id: 1
host_id: 100000
size: 65536
uidmap:
- container_id: 0
host_id: 1000
size: 1
- container_id: 1
host_id: 100000
size: 65536
kernel: 6.3.5-200.fc38.x86_64
linkmode: dynamic
logDriver: journald
memFree: 346042368
memTotal: 3947089920
networkBackend: netavark
ociRuntime:
name: crun
package: crun-1.8.5-1.fc38.x86_64
path: /usr/bin/crun
version: |-
crun version 1.8.5
commit: b6f80f766c9a89eb7b1440c0a70ab287434b17ed
rundir: /run/user/1000/crun
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
os: linux
remoteSocket:
path: /run/user/1000/podman/podman.sock
security:
apparmorEnabled: false
capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
rootless: true
seccompEnabled: true
seccompProfilePath: /usr/share/containers/seccomp.json
selinuxEnabled: true
serviceIsRemote: false
slirp4netns:
executable: /usr/bin/slirp4netns
package: slirp4netns-1.2.0-12.fc38.x86_64
version: |-
slirp4netns version 1.2.0
commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
libslirp: 4.7.0
SLIRP_CONFIG_VERSION_MAX: 4
libseccomp: 2.5.3
swapFree: 3264212992
swapTotal: 3946835968
uptime: 2h 21m 51.00s (Approximately 0.08 days)
plugins:
authorization: null
log:
- k8s-file
- none
- passthrough
- journald
network:
- bridge
- macvlan
- ipvlan
volume:
- local
registries:
search:
- registry.fedoraproject.org
- registry.access.redhat.com
- docker.io
- quay.io
store:
configFile: /home/cmirdesouza/.config/containers/storage.conf
containerStore:
number: 1
paused: 0
running: 0
stopped: 1
graphDriverName: overlay
graphOptions: {}
graphRoot: /home/cmirdesouza/.local/share/containers/storage
graphRootAllocated: 238352859136
graphRootUsed: 23042453504
graphStatus:
Backing Filesystem: btrfs
Native Overlay Diff: "true"
Supports d_type: "true"
Using metacopy: "false"
imageCopyTmpDir: /var/tmp
imageStore:
number: 14
runRoot: /run/user/1000/containers
transientStore: false
volumePath: /home/cmirdesouza/.local/share/containers/storage/volumes
version:
APIVersion: 4.5.1
Built: 1685123928
BuiltTime: Fri May 26 14:58:48 2023
GitCommit: ""
GoVersion: go1.20.4
Os: linux
OsArch: linux/amd64
Version: 4.5.1
```
```bash
podman run -it langchain
=============================== test session starts ===============================
platform linux -- Python 3.11.2, pytest-7.3.1, pluggy-1.0.0
rootdir: /app
configfile: pyproject.toml
plugins: asyncio-0.20.3, cov-4.1.0, dotenv-0.5.2, mock-3.10.0, socket-0.6.0
asyncio: mode=Mode.STRICT
collected 769 items
tests/unit_tests/test_bash.py .s...s [ 0%]
tests/unit_tests/test_dependencies.py ... [ 1%]
tests/unit_tests/test_document_transformers.py .. [ 1%]
tests/unit_tests/test_formatting.py ... [ 1%]
tests/unit_tests/test_math_utils.py ....... [ 2%]
tests/unit_tests/test_pytest_config.py F [ 2%]
tests/unit_tests/test_python.py ........ [ 3%]
tests/unit_tests/test_schema.py ...... [ 4%]
tests/unit_tests/test_sql_database.py ..... [ 5%]
tests/unit_tests/test_sql_database_schema.py .. [ 5%]
tests/unit_tests/test_text_splitter.py ............................ [ 9%]
tests/unit_tests/agents/test_agent.py ....... [ 10%]
tests/unit_tests/agents/test_mrkl.py ............ [ 11%]
tests/unit_tests/agents/test_public_api.py . [ 11%]
tests/unit_tests/agents/test_react.py ... [ 12%]
tests/unit_tests/agents/test_serialization.py . [ 12%]
tests/unit_tests/agents/test_sql.py . [ 12%]
tests/unit_tests/agents/test_tools.py .......... [ 13%]
tests/unit_tests/agents/test_types.py . [ 13%]
tests/unit_tests/callbacks/test_callback_manager.py ........ [ 14%]
tests/unit_tests/callbacks/test_openai_info.py ... [ 15%]
tests/unit_tests/callbacks/tracers/test_base_tracer.py ........... [ 16%]
tests/unit_tests/callbacks/tracers/test_langchain_v1.py ............ [ 18%]
tests/unit_tests/chains/test_api.py . [ 18%]
tests/unit_tests/chains/test_base.py .............. [ 20%]
tests/unit_tests/chains/test_combine_documents.py .......... [ 21%]
tests/unit_tests/chains/test_constitutional_ai.py . [ 21%]
tests/unit_tests/chains/test_conversation.py ........... [ 23%]
tests/unit_tests/chains/test_graph_qa.py .. [ 23%]
tests/unit_tests/chains/test_hyde.py .. [ 23%]
tests/unit_tests/chains/test_llm.py ..... [ 24%]
tests/unit_tests/chains/test_llm_bash.py ..... [ 24%]
tests/unit_tests/chains/test_llm_checker.py . [ 25%]
tests/unit_tests/chains/test_llm_math.py ... [ 25%]
tests/unit_tests/chains/test_llm_summarization_checker.py . [ 25%]
tests/unit_tests/chains/test_memory.py .... [ 26%]
tests/unit_tests/chains/test_natbot.py .. [ 26%]
tests/unit_tests/chains/test_sequential.py ........... [ 27%]
tests/unit_tests/chains/test_transform.py .. [ 28%]
tests/unit_tests/chains/query_constructor/test_parser.py .................. [ 30%]
............ [ 31%]
tests/unit_tests/chat_models/test_google_palm.py ssssssss [ 33%]
tests/unit_tests/client/test_runner_utils.py .............................. [ 36%]
...... [ 37%]
tests/unit_tests/docstore/test_arbitrary_fn.py . [ 37%]
tests/unit_tests/docstore/test_inmemory.py .... [ 38%]
tests/unit_tests/document_loaders/test_base.py . [ 38%]
tests/unit_tests/document_loaders/test_bibtex.py ssss [ 39%]
tests/unit_tests/document_loaders/test_bshtml.py ss [ 39%]
tests/unit_tests/document_loaders/test_confluence.py sssss [ 39%]
tests/unit_tests/document_loaders/test_csv_loader.py .... [ 40%]
tests/unit_tests/document_loaders/test_detect_encoding.py ss [ 40%]
tests/unit_tests/document_loaders/test_directory.py .. [ 40%]
tests/unit_tests/document_loaders/test_evernote_loader.py sssssssssss [ 42%]
tests/unit_tests/document_loaders/test_generic_loader.py ...s. [ 43%]
tests/unit_tests/document_loaders/test_github.py ..... [ 43%]
tests/unit_tests/document_loaders/test_json_loader.py sssss [ 44%]
tests/unit_tests/document_loaders/test_psychic.py ss [ 44%]
tests/unit_tests/document_loaders/test_readthedoc.py .... [ 45%]
tests/unit_tests/document_loaders/test_telegram.py .s [ 45%]
tests/unit_tests/document_loaders/test_trello.py sss [ 45%]
tests/unit_tests/document_loaders/test_web_base.py . [ 45%]
tests/unit_tests/document_loaders/test_youtube.py .............. [ 47%]
tests/unit_tests/document_loaders/blob_loaders/test_filesystem_blob_loader.py . [ 47%]
.......s [ 48%]
tests/unit_tests/document_loaders/blob_loaders/test_public_api.py . [ 49%]
tests/unit_tests/document_loaders/blob_loaders/test_schema.py ............. [ 50%]
. [ 50%]
tests/unit_tests/document_loaders/loaders/vendors/test_docugami.py s. [ 51%]
tests/unit_tests/document_loaders/parsers/test_generic.py .. [ 51%]
tests/unit_tests/document_loaders/parsers/test_html_parsers.py s [ 51%]
tests/unit_tests/document_loaders/parsers/test_pdf_parsers.py ssss [ 52%]
tests/unit_tests/document_loaders/parsers/test_public_api.py . [ 52%]
tests/unit_tests/evaluation/qa/test_eval_chain.py ... [ 52%]
tests/unit_tests/llms/test_base.py .. [ 52%]
tests/unit_tests/llms/test_callbacks.py ... [ 53%]
tests/unit_tests/llms/test_loading.py . [ 53%]
tests/unit_tests/llms/test_utils.py .. [ 53%]
tests/unit_tests/memory/test_combined_memory.py .. [ 53%]
tests/unit_tests/memory/chat_message_histories/test_file.py ... [ 54%]
tests/unit_tests/memory/chat_message_histories/test_sql.py ... [ 54%]
tests/unit_tests/memory/chat_message_histories/test_zep.py ssssss [ 55%]
tests/unit_tests/output_parsers/test_base_output_parser.py ................ [ 57%]
....... [ 58%]
tests/unit_tests/output_parsers/test_boolean_parser.py . [ 58%]
tests/unit_tests/output_parsers/test_combining_parser.py . [ 58%]
tests/unit_tests/output_parsers/test_datetime_parser.py . [ 58%]
tests/unit_tests/output_parsers/test_enum_parser.py . [ 58%]
tests/unit_tests/output_parsers/test_json.py ......... [ 60%]
tests/unit_tests/output_parsers/test_list_parser.py .. [ 60%]
tests/unit_tests/output_parsers/test_pydantic_parser.py .. [ 60%]
tests/unit_tests/output_parsers/test_regex_dict.py . [ 60%]
tests/unit_tests/output_parsers/test_structured_parser.py . [ 60%]
tests/unit_tests/prompts/test_chat.py ...... [ 61%]
tests/unit_tests/prompts/test_few_shot.py .......... [ 62%]
tests/unit_tests/prompts/test_few_shot_with_templates.py . [ 63%]
tests/unit_tests/prompts/test_length_based_example_selector.py .... [ 63%]
tests/unit_tests/prompts/test_loading.py ......... [ 64%]
tests/unit_tests/prompts/test_pipeline_prompt.py .... [ 65%]
tests/unit_tests/prompts/test_prompt.py ............... [ 67%]
tests/unit_tests/prompts/test_utils.py . [ 67%]
tests/unit_tests/retrievers/test_tfidf.py sss [ 67%]
tests/unit_tests/retrievers/test_time_weighted_retriever.py ..... [ 68%]
tests/unit_tests/retrievers/test_zep.py ss [ 68%]
tests/unit_tests/retrievers/self_query/test_pinecone.py .. [ 68%]
tests/unit_tests/tools/test_base.py ................................. [ 73%]
tests/unit_tests/tools/test_exported.py . [ 73%]
tests/unit_tests/tools/test_json.py .... [ 73%]
tests/unit_tests/tools/test_public_api.py . [ 73%]
tests/unit_tests/tools/test_signatures.py ................................. [ 78%]
............................................... [ 84%]
tests/unit_tests/tools/test_zapier.py ... [ 84%]
tests/unit_tests/tools/file_management/test_copy.py ... [ 85%]
tests/unit_tests/tools/file_management/test_file_search.py ... [ 85%]
tests/unit_tests/tools/file_management/test_list_dir.py ... [ 85%]
tests/unit_tests/tools/file_management/test_move.py ... [ 86%]
tests/unit_tests/tools/file_management/test_read.py .. [ 86%]
tests/unit_tests/tools/file_management/test_toolkit.py .... [ 87%]
tests/unit_tests/tools/file_management/test_utils.py ..... [ 87%]
tests/unit_tests/tools/file_management/test_write.py ... [ 88%]
tests/unit_tests/tools/openapi/test_api_models.py ......................... [ 91%]
.......................... [ 94%]
tests/unit_tests/tools/powerbi/test_powerbi.py . [ 94%]
tests/unit_tests/tools/python/test_python.py ........... [ 96%]
tests/unit_tests/tools/requests/test_tool.py ...... [ 97%]
tests/unit_tests/tools/shell/test_shell.py ..... [ 97%]
tests/unit_tests/utilities/test_graphql.py s [ 97%]
tests/unit_tests/utilities/test_loading.py ...... [ 98%]
tests/unit_tests/vectorstores/test_sklearn.py ssssss [ 99%]
tests/unit_tests/vectorstores/test_utils.py .... [100%]
==================================== FAILURES =====================================
______________________________ test_socket_disabled _______________________________
def test_socket_disabled() -> None:
"""This test should fail."""
> with pytest.raises(pytest_socket.SocketBlockedError):
E Failed: DID NOT RAISE <class 'pytest_socket.SocketBlockedError'>
tests/unit_tests/test_pytest_config.py:8: Failed
================================ warnings summary =================================
langchain/text_splitter.py:607
/app/langchain/text_splitter.py:607: DeprecationWarning: invalid escape sequence '\*'
"\n\*+\n",
langchain/text_splitter.py:706
/app/langchain/text_splitter.py:706: DeprecationWarning: invalid escape sequence '\*'
"\n\*\*\*+\n",
langchain/text_splitter.py:719
/app/langchain/text_splitter.py:719: DeprecationWarning: invalid escape sequence '\c'
"\n\\\chapter{",
langchain/text_splitter.py:720
/app/langchain/text_splitter.py:720: DeprecationWarning: invalid escape sequence '\s'
"\n\\\section{",
langchain/text_splitter.py:721
/app/langchain/text_splitter.py:721: DeprecationWarning: invalid escape sequence '\s'
"\n\\\subsection{",
langchain/text_splitter.py:722
/app/langchain/text_splitter.py:722: DeprecationWarning: invalid escape sequence '\s'
"\n\\\subsubsection{",
tests/unit_tests/test_document_transformers.py::test__filter_similar_embeddings
tests/unit_tests/test_math_utils.py::test_cosine_similarity_zero
tests/unit_tests/test_math_utils.py::test_cosine_similarity
tests/unit_tests/test_math_utils.py::test_cosine_similarity_top_k
tests/unit_tests/test_math_utils.py::test_cosine_similarity_score_threshold
tests/unit_tests/test_math_utils.py::test_cosine_similarity_top_k_and_score_threshold
tests/unit_tests/vectorstores/test_utils.py::test_maximal_marginal_relevance_lambda_zero
tests/unit_tests/vectorstores/test_utils.py::test_maximal_marginal_relevance_lambda_one
/app/langchain/math_utils.py:23: RuntimeWarning: invalid value encountered in divide
similarity = np.dot(X, Y.T) / np.outer(X_norm, Y_norm)
tests/unit_tests/test_sql_database_schema.py::test_table_info
/app/.venv/lib/python3.11/site-packages/duckdb_engine/__init__.py:160: DuckDBEngineWarning: duckdb-engine doesn't yet support reflection on indices
warnings.warn(
tests/unit_tests/document_loaders/test_readthedoc.py::test_main_id_main_content
tests/unit_tests/document_loaders/test_readthedoc.py::test_div_role_main
tests/unit_tests/document_loaders/test_readthedoc.py::test_custom
tests/unit_tests/document_loaders/test_readthedoc.py::test_empty
/app/langchain/document_loaders/readthedocs.py:48: GuessedAtParserWarning: No parser was explicitly specified, so I'm using the best available HTML parser for this system ("html.parser"). This usually isn't a problem, but if you run this code on another system, or in a different virtual environment, it may use a different parser and behave differently.
The code that caused this warning is on line 48 of the file /app/langchain/document_loaders/readthedocs.py. To get rid of this warning, pass the additional argument 'features="html.parser"' to the BeautifulSoup constructor.
_ = BeautifulSoup(
tests/unit_tests/document_loaders/test_readthedoc.py::test_main_id_main_content
tests/unit_tests/document_loaders/test_readthedoc.py::test_div_role_main
tests/unit_tests/document_loaders/test_readthedoc.py::test_custom
tests/unit_tests/document_loaders/test_readthedoc.py::test_empty
/app/langchain/document_loaders/readthedocs.py:75: GuessedAtParserWarning: No parser was explicitly specified, so I'm using the best available HTML parser for this system ("html.parser"). This usually isn't a problem, but if you run this code on another system, or in a different virtual environment, it may use a different parser and behave differently.
The code that caused this warning is on line 75 of the file /app/langchain/document_loaders/readthedocs.py. To get rid of this warning, pass the additional argument 'features="html.parser"' to the BeautifulSoup constructor.
soup = BeautifulSoup(data, **self.bs_kwargs)
tests/unit_tests/memory/test_combined_memory.py::test_basic_functionality
/app/langchain/memory/combined.py:38: UserWarning: When using CombinedMemory, input keys should be so the input is known. Was not set on chat_memory=ChatMessageHistory(messages=[]) output_key=None input_key=None return_messages=False human_prefix='Human' ai_prefix='AI' memory_key='foo'
warnings.warn(
tests/unit_tests/memory/test_combined_memory.py::test_basic_functionality
/app/langchain/memory/combined.py:38: UserWarning: When using CombinedMemory, input keys should be so the input is known. Was not set on chat_memory=ChatMessageHistory(messages=[]) output_key=None input_key=None return_messages=False human_prefix='Human' ai_prefix='AI' memory_key='bar'
warnings.warn(
tests/unit_tests/tools/shell/test_shell.py::test_shell_input_validation
/app/langchain/tools/shell/tool.py:33: UserWarning: The shell tool has no safeguards by default. Use at your own risk.
warnings.warn(
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=============================== slowest 5 durations ===============================
0.49s call tests/unit_tests/test_pytest_config.py::test_socket_disabled
0.23s call tests/unit_tests/test_sql_database_schema.py::test_table_info
0.15s call tests/unit_tests/test_sql_database_schema.py::test_sql_database_run
0.07s call tests/unit_tests/callbacks/tracers/test_base_tracer.py::test_tracer_llm_run
0.06s call tests/unit_tests/document_loaders/test_readthedoc.py::test_main_id_main_content
============================= short test summary info =============================
FAILED tests/unit_tests/test_pytest_config.py::test_socket_disabled - Failed: DID NOT RAISE <class 'pytest_socket.SocketBlockedError'>
============= 1 failed, 697 passed, 71 skipped, 26 warnings in 6.13s ==============
```
### Who can help?
@vowelparrot @agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
1. git clone https://github.com/hwchase17/langchain.git
2. cd langchain
3. podman build -t langchain -f Dockerfile .
4. podman run -it langchain
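For reference, a minimal reconstruction of a test like the failing one (the actual body is truncated in the output above, so the socket call here is an assumption). `pytest_socket.SocketBlockedError` is raised only when socket blocking is active (e.g. pytest run with `--disable-socket`), which would explain the `DID NOT RAISE` failure if that flag is not in effect inside the container:
```python
# Minimal reconstruction; the real test body is truncated in the output above,
# so the socket call is an assumption. SocketBlockedError is raised only when
# pytest-socket's blocking is enabled for the run.
import socket

import pytest
import pytest_socket


def test_socket_disabled() -> None:
    """This test should fail."""
    with pytest.raises(pytest_socket.SocketBlockedError):
        socket.socket(socket.AF_INET, socket.SOCK_STREAM).connect(("example.com", 80))
```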
### Expected behavior
All tests should run without error. | Test Failure: SocketBlockedError not raised in test_pytest_config.py/test_socket_disabled | https://api.github.com/repos/langchain-ai/langchain/issues/5888/comments | 1 | 2023-06-08T15:54:40Z | 2023-09-14T16:05:51Z | https://github.com/langchain-ai/langchain/issues/5888 | 1,748,203,950 | 5,888 |