issue_owner_repo listlengths 2 2 | issue_body stringlengths 0 261k ⌀ | issue_title stringlengths 1 925 | issue_comments_url stringlengths 56 81 | issue_comments_count int64 0 2.5k | issue_created_at stringlengths 20 20 | issue_updated_at stringlengths 20 20 | issue_html_url stringlengths 37 62 | issue_github_id int64 387k 2.46B | issue_number int64 1 127k |
|---|---|---|---|---|---|---|---|---|---|
[
"langchain-ai",
"langchain"
] | ### System Info
PyCharm on macOS
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
A change in the template string will cause the SSL connection or the server response to hang. Code here:
```python
from langchain import PromptTemplate, LLMChain
from langchain.llms import HuggingFaceHub

question = "Who won the FIFA World Cup in the year 1994? "
template = """Question: {question}
Answer: Let's think step by step."""

def execute_llm(repo_id: str, template: str, question: str):
    prompt = PromptTemplate(template=template, input_variables=["question"])
    llm = HuggingFaceHub(repo_id=repo_id, model_kwargs={"temperature": 0, "max_length": 64})
    llm_chain = LLMChain(prompt=prompt, llm=llm)
    print(llm_chain.run(question))

execute_llm(repo_id="google/flan-t5-xl", template=template, question=question)
```
Upon executing `llm_chain.run(question)`, the code calls the HuggingFaceHub service and gets stuck at the following line in `ssl.py`, inside the `SSLSocket` class: `self._sslobj.read(len, buffer)`. `SSLSocket` holds a `_sslobj` object and defines a method `read(self, len=1024, buffer=None)` that internally calls `_sslobj`'s `read` method.
The problem is that any modification to the template (adding spaces, newline characters, or changing the string) causes `self._sslobj.read(len, buffer)` to block until it times out. Since it's impossible to step into that call for debugging, I'm unable to determine the specific cause.
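To isolate whether the hang is in LangChain or in the underlying HTTP call, one diagnostic (a sketch; it assumes a valid `HUGGINGFACEHUB_API_TOKEN` in the environment) is to hit the Hub inference endpoint directly with `huggingface_hub`:
```python
import os

from huggingface_hub import InferenceApi

# Call the Hosted Inference API directly, bypassing LangChain, so the blocking
# SSL read can be attributed to either the HTTP layer or the chain logic.
api = InferenceApi(repo_id="google/flan-t5-xl", token=os.environ["HUGGINGFACEHUB_API_TOKEN"])
prompt = """Question: Who won the FIFA World Cup in the year 1994?
Answer: Let's think step by step."""
print(api(inputs=prompt))
```
If this direct call also blocks after editing the prompt, the issue is below LangChain; if it returns promptly, the problem is likely in how the chain formats or retries the request.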
### Expected behavior
Could you explain what might be causing this issue? It seems to be related to the SSL network connection, not the content of the template string. However, it's puzzling that the change in the template string is causing the SSL connection or the server response to hang. This behavior is not entirely reasonable and may be a bug or a problem caused by other factors. | change in the template string is causing the SSL connection or the server response to hang | https://api.github.com/repos/langchain-ai/langchain/issues/5360/comments | 1 | 2023-05-28T02:31:02Z | 2023-09-10T16:11:01Z | https://github.com/langchain-ai/langchain/issues/5360 | 1,729,102,111 | 5,360 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
When running multi_modal_output_agent.ipynb: https://python.langchain.com/en/latest/use_cases/agents/multi_modal_output_agent.html, I get ConfigError: field "steamship" not yet prepared so type is still a ForwardRef, you might need to call SteamshipImageGenerationTool.update_forward_refs().
```python
tools = [
    SteamshipImageGenerationTool(model_name="dall-e")
]
```
```
---------------------------------------------------------------------------
ConfigError Traceback (most recent call last)
Cell In[7], line 2
      1 tools = [
----> 2     SteamshipImageGenerationTool(model_name= "dall-e")
      3 ]
File ~/miniconda3/envs/langchain/lib/python3.11/site-packages/pydantic/main.py:339, in pydantic.main.BaseModel.__init__()
File ~/miniconda3/envs/langchain/lib/python3.11/site-packages/pydantic/main.py:1076, in pydantic.main.validate_model()
File ~/miniconda3/envs/langchain/lib/python3.11/site-packages/pydantic/fields.py:860, in pydantic.fields.ModelField.validate()
ConfigError: field "steamship" not yet prepared so type is still a ForwardRef, you might need to call SteamshipImageGenerationTool.update_forward_refs().
```
After I call SteamshipImageGenerationTool.update_forward_refs(), I get another error.
```python
SteamshipImageGenerationTool.update_forward_refs()
```
```
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
Cell In[10], line 1
----> 1 SteamshipImageGenerationTool.update_forward_refs()
File ~/miniconda3/envs/langchain/lib/python3.11/site-packages/pydantic/main.py:815, in pydantic.main.BaseModel.update_forward_refs()
File ~/miniconda3/envs/langchain/lib/python3.11/site-packages/pydantic/typing.py:562, in pydantic.typing.update_model_forward_refs()
File ~/miniconda3/envs/langchain/lib/python3.11/site-packages/pydantic/typing.py:528, in pydantic.typing.update_field_forward_refs()
File ~/miniconda3/envs/langchain/lib/python3.11/site-packages/pydantic/typing.py:66, in pydantic.typing.evaluate_forwardref()
File ~/miniconda3/envs/langchain/lib/python3.11/typing.py:864, in ForwardRef._evaluate(self, globalns, localns, recursive_guard)
    859 if self.__forward_module__ is not None:
    860     globalns = getattr(
    861         sys.modules.get(self.__forward_module__, None), '__dict__', globalns
    862     )
    863 type_ = _type_check(
--> 864     eval(self.__forward_code__, globalns, localns),
    865     "Forward references must evaluate to types.",
    866     is_argument=self.__forward_is_argument__,
    867     allow_special_forms=self.__forward_is_class__,
    868 )
    869 self.__forward_value__ = _eval_type(
    870     type_, globalns, localns, recursive_guard | {self.__forward_arg__}
    871 )
    872 self.__forward_evaluated__ = True
File <string>:1
NameError: name 'Steamship' is not defined
```
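A possible workaround (an assumption on my part, based on how pydantic resolves forward references: `update_forward_refs()` accepts the unresolved names as keyword arguments) is to import `Steamship` and pass it in explicitly:
```python
from steamship import Steamship  # assumes the steamship package is installed

# Supply the missing name to pydantic so the ForwardRef can be evaluated.
SteamshipImageGenerationTool.update_forward_refs(Steamship=Steamship)
tools = [SteamshipImageGenerationTool(model_name="dall-e")]
```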
### Idea or request for content:
_No response_ | DOC: SteamshipImageGenerationTool returns Config Error in multi_modal_output_agent.ipynb | https://api.github.com/repos/langchain-ai/langchain/issues/5358/comments | 7 | 2023-05-27T22:42:18Z | 2024-02-15T05:48:15Z | https://github.com/langchain-ai/langchain/issues/5358 | 1,728,991,454 | 5,358 |
[
"langchain-ai",
"langchain"
] | ### Chat agent reliability fix: put format instructions and other important information in a human message
There have been a few issues raised specifically around agent reliability when using chat models. @emilsedgh brought up in the JS Discord that OpenAI's 3.5 turbo model is documented as "not pay[ing] strong attention to the system message, and therefore important instructions are often better placed in a user message.":
https://platform.openai.com/docs/guides/chat/instructing-chat-models
The `ConversationalChatAgent` is implemented this way:
https://github.com/hwchase17/langchain/blob/master/langchain/agents/conversational_chat/base.py#L90
But the base `ChatAgent` and the `StructuredChatAgent` are not:
https://github.com/hwchase17/langchain/blob/master/langchain/agents/chat/base.py#L81
https://github.com/hwchase17/langchain/blob/master/langchain/agents/structured_chat/base.py#L88
Need to do a little bit more experimenting, but moving things into the human message may help with reliability issues.
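To make the proposal concrete, here is a rough sketch of the restructuring (the prompt strings are placeholders, not the agents' actual templates):
```python
from langchain.prompts.chat import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    SystemMessagePromptTemplate,
)

PREFIX = "Answer the following questions as best you can."  # placeholder
FORMAT_INSTRUCTIONS = "Respond with a JSON blob with 'action' and 'action_input' keys."  # placeholder

# Current ChatAgent/StructuredChatAgent style: format instructions live in the system message.
current = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate.from_template(PREFIX + "\n\n" + FORMAT_INSTRUCTIONS),
    HumanMessagePromptTemplate.from_template("{input}"),
])

# Proposed style: format instructions travel with the human message,
# which gpt-3.5-turbo reportedly attends to more strongly.
proposed = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate.from_template(PREFIX),
    HumanMessagePromptTemplate.from_template(FORMAT_INSTRUCTIONS + "\n\n{input}"),
])
```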
### Suggestion:
_No response_ | Issue: Chat agents should put format instructions and other important information in a human message | https://api.github.com/repos/langchain-ai/langchain/issues/5353/comments | 0 | 2023-05-27T20:44:27Z | 2023-05-27T21:21:10Z | https://github.com/langchain-ai/langchain/issues/5353 | 1,728,932,672 | 5,353 |
[
"langchain-ai",
"langchain"
] | ### Feature request
[Improving Factuality and Reasoning in Language Models through Multiagent Debate](https://arxiv.org/pdf/2305.14325.pdf) - looks very promising
### Motivation
This method is orthogonal to other methods like CoT. It appears to be beneficial in almost any case where we need the highest-quality answer.
### Your contribution
I can help with testing. Not sure about quick implementation. | implement `Multiagent Debate` | https://api.github.com/repos/langchain-ai/langchain/issues/5348/comments | 5 | 2023-05-27T19:13:41Z | 2023-12-11T17:05:44Z | https://github.com/langchain-ai/langchain/issues/5348 | 1,728,907,870 | 5,348 |
[
"langchain-ai",
"langchain"
] | ### System Info
Running latest versions of langchain, openai, openlm, python 3.10, mac M1, trying this example I saw on Twitter
https://python.langchain.com/en/latest/modules/models/llms/integrations/openlm.html?highlight=openlm
```python
import os

from langchain.llms import OpenLM
from langchain.llms import OpenAI
from langchain.llms import openai  # note: this is the langchain.llms.openai module, not the openai package
from langchain import PromptTemplate, LLMChain

openai.api_key = os.getenv("OPENAI_API_KEY")
question = "What is the capital of France?"
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm1 = OpenAI()
llm2 = OpenLM(model="text-davinci-003")
llm_chain1 = LLMChain(prompt=prompt, llm=llm1)
llm_chain2 = LLMChain(prompt=prompt, llm=llm2)
result1 = llm_chain1.run(question)
result2 = llm_chain2.run(question)
```
`result1` runs fine; `result2` raises `ValueError: OPENAI_API_KEY is not set or passed as an argument`.
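One thing that may be worth ruling out (an assumption on my part about how `openlm` discovers credentials: it may read the key from the process environment rather than from the module attribute set above) is exporting the variable explicitly before constructing the model:
```python
import os

# Hypothetical check: make sure the key really is in the process environment,
# since `openai.api_key = ...` above sets a module attribute, not an env var.
os.environ["OPENAI_API_KEY"] = "sk-..."  # your actual key
llm2 = OpenLM(model="text-davinci-003")
```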
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import os

from langchain.llms import OpenLM
from langchain.llms import OpenAI
from langchain.llms import openai  # note: this is the langchain.llms.openai module, not the openai package
from langchain import PromptTemplate, LLMChain

openai.api_key = os.getenv("OPENAI_API_KEY")
question = "What is the capital of France?"
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm1 = OpenAI()
llm2 = OpenLM(model="text-davinci-003")
llm_chain1 = LLMChain(prompt=prompt, llm=llm1)
llm_chain2 = LLMChain(prompt=prompt, llm=llm2)
result1 = llm_chain1.run(question)
result2 = llm_chain2.run(question)
```
### Expected behavior
Run without error | Using OpenLM example giving error: "ValueError: OPENAI_API_KEY is not set or passed as an argument" | https://api.github.com/repos/langchain-ai/langchain/issues/5347/comments | 2 | 2023-05-27T18:13:32Z | 2023-09-12T16:12:17Z | https://github.com/langchain-ai/langchain/issues/5347 | 1,728,877,787 | 5,347 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Extend `langchain/vectorstores/elastic_vector_search.py` to support kNN indexing and searching.
The high-level objectives will be:
1. Allow for the [creation of an index with the correct mapping](https://www.elastic.co/guide/en/elasticsearch/reference/current/dense-vector.html#index-vectors-knn-search) to store documents including dense_vectors so they can be used for kNN search
2. Store embeddings in elasticsearch in [dense_vector](https://www.elastic.co/guide/en/elasticsearch/reference/current/dense-vector.html) field type
3. Perform [kNN search](https://www.elastic.co/guide/en/elasticsearch/reference/current/knn-search.html)
4. Perform [Hybrid](https://www.elastic.co/guide/en/elasticsearch/reference/current/knn-search.html#_combine_approximate_knn_with_other_features) BM25 (query) + kNN search
### Motivation
Elasticsearch supports approximate k-nearest-neighbor search with dense vectors. The current module only supports script-score / exact-match vector search.
### Your contribution
I will work on the code and create the pull request | Extend elastic_vector_search.py to allow for kNN indexing/searching | https://api.github.com/repos/langchain-ai/langchain/issues/5346/comments | 4 | 2023-05-27T17:12:57Z | 2023-06-02T19:49:19Z | https://github.com/langchain-ai/langchain/issues/5346 | 1,728,842,882 | 5,346 |
[
"langchain-ai",
"langchain"
] | ---------------------------------------------------------------------------
InvalidRequestError Traceback (most recent call last)
[<ipython-input-33-f877209e86e7>](https://localhost:8080/#) in <cell line: 1>()
----> 1 flare.run(query)
20 frames
[/usr/local/lib/python3.10/dist-packages/openai/api_resources/abstract/engine_api_resource.py](https://localhost:8080/#) in __prepare_create_request(cls, api_key, api_base, api_type, api_version, organization, **params)
81 if typed_api_type in (util.ApiType.AZURE, util.ApiType.AZURE_AD):
82 if deployment_id is None and engine is None:
---> 83 raise error.InvalidRequestError(
84 "Must provide an 'engine' or 'deployment_id' parameter to create a %s"
85 % cls,
InvalidRequestError: Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.completion.Completion'>
``` | FLARE | Azure open Ai doesn't work | https://api.github.com/repos/langchain-ai/langchain/issues/5345/comments | 3 | 2023-05-27T16:08:32Z | 2023-09-15T16:10:19Z | https://github.com/langchain-ai/langchain/issues/5345 | 1,728,810,122 | 5,345 |
[
"langchain-ai",
"langchain"
] | ### System Info
I followed the official documentation (https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/url.html), but I can't get any data through loader.load(): the output is always []. The language I use is Python and the langchain version is 0.0.181. I tried reinstalling the dependencies, but it didn't help. Hopefully this can be solved soon.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.document_loaders import UnstructuredURLLoader

urls = [
    "https://www.understandingwar.org/backgrounder/russian-offensive-campaign-assessment-february-8-2023",
    "https://www.understandingwar.org/backgrounder/russian-offensive-campaign-assessment-february-9-2023"
]
loader = UnstructuredURLLoader(urls=urls)
data = loader.load()
```
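One diagnostic that may explain the empty result (assuming the loader's `continue_on_failure` flag behaves as in the current source, where per-URL errors are logged and the URL is skipped) is to make failures loud:
```python
# With continue_on_failure=False, a fetch/parse error raises instead of being
# swallowed, which should reveal why `data` comes back empty.
loader = UnstructuredURLLoader(urls=urls, continue_on_failure=False)
data = loader.load()
```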
### Expected behavior
`data` should not be empty. | UnstructuredURLLoader can't load data from url | https://api.github.com/repos/langchain-ai/langchain/issues/5342/comments | 24 | 2023-05-27T13:55:09Z | 2024-04-06T08:47:27Z | https://github.com/langchain-ai/langchain/issues/5342 | 1,728,738,044 | 5,342 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I've been playing around with OpenAI GPT-4 and ran into situation when response generation might take quite some time - say 5 minutes.
I switched over to streaming, but often I can immediately see the response is not what I want, and therefore I'd like to cancel the request.
Now here is the part that is unclear to me:
**is there an official way to cancel request in Python's version of LangChain?** I have found this [described](https://js.langchain.com/docs/modules/models/chat/additional_functionality#cancelling-requests) in JS/TS version of the framework, however scanning docs, sources and issues yields nothing for this repo.
For now I simply terminate the process, which works well enough for something like Jupyter notebooks, but is quite problematic for, say, a web application.
Besides termination, it's also unclear whether I incur unwanted costs for the abandoned request.
Should some sort of feature parity be made with JS LangChain?
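For reference, the closest thing to a workaround I have found is plain asyncio task cancellation around the async API (a sketch; note this only stops consuming the response client-side, and I don't know whether OpenAI stops billing for an abandoned stream):
```python
import asyncio

from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

async def main() -> None:
    llm = ChatOpenAI(streaming=True)
    task = asyncio.create_task(
        llm.agenerate([[HumanMessage(content="Write a very long essay about GPT-4.")]])
    )
    await asyncio.sleep(5)  # the user decides the answer is not what they want
    task.cancel()           # cancels the task, stopping client-side consumption
    try:
        await task
    except asyncio.CancelledError:
        print("request abandoned")

asyncio.run(main())
```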
### Motivation
Provide a documented way to cancel long-running requests
### Your contribution
At this point I have capacity only to test out potential implementation. May work on the implementation in later weeks. | Implement a way to abort / cancel request | https://api.github.com/repos/langchain-ai/langchain/issues/5340/comments | 23 | 2023-05-27T12:07:10Z | 2024-08-01T16:05:23Z | https://github.com/langchain-ai/langchain/issues/5340 | 1,728,681,110 | 5,340 |
[
"langchain-ai",
"langchain"
] | ### Feature request
MongoDB Atlas is a fully managed DBaaS powered by the MongoDB database. It also embeds Lucene (collocated with the mongod process) for full-text search; this is known as Atlas Search. The PR should allow LangChain users to use the MongoDB Atlas Vector Search feature, where you can store your embeddings in MongoDB documents and create a Lucene vector index to perform a KNN search.
### Motivation
There is currently no way in Langchain to connect to MongoDB Atlas and perform a KNN search.
### Your contribution
I am submitting a PR for this issue soon. | Add MongoDBAtlasVectorSearch vectorstore | https://api.github.com/repos/langchain-ai/langchain/issues/5337/comments | 0 | 2023-05-27T11:41:39Z | 2023-05-30T14:59:03Z | https://github.com/langchain-ai/langchain/issues/5337 | 1,728,669,494 | 5,337 |
[
"langchain-ai",
"langchain"
] | ### System Info
MacOS
Langchain Version 0.0.181
Python Version 3.11.3
### Who can help?
@eyurtsev I wasn't sure who to reach out to. The following is the signature for adding embeddings to FAISS:
```python
FAISS.add_embeddings(
self,
text_embeddings: 'Iterable[Tuple[str, List[float]]]',
metadatas: 'Optional[List[dict]]' = None,
**kwargs: 'Any',
) -> 'List[str]'
```
Notice that `text_embeddings` takes an iterable. However, when I pass an actual iterator it fails, whereas wrapping the same iterator in `list()` succeeds.
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.vectorstores import FAISS
from langchain.embeddings import OpenAIEmbeddings
vs = FAISS.from_texts(['a'], embedding=OpenAIEmbeddings())
vector = OpenAIEmbeddings().embed_query('b')
# error happens with this next line, see "Expected behavior" below.
vs.add_embeddings(iter([('b', vector)]))
# no error happens when wrapped in a list
vs.add_embeddings(list(iter([('b', vector)])))
```
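Until the annotation and the behavior agree, a trivial user-side shim is to materialize the argument (exactly what the `list()` call in the repro does); sketching it as a helper:
```python
from typing import Iterable, List, Tuple

from langchain.vectorstores import FAISS

def add_embeddings_safe(
    vs: FAISS, text_embeddings: Iterable[Tuple[str, List[float]]]
) -> List[str]:
    # Materialize one-shot iterators so the underlying method can take
    # their length/shape and iterate them more than once.
    return vs.add_embeddings(list(text_embeddings))
```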
### Expected behavior
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
...
File ~/.pyenv/versions/3.11.3/envs/myenv/lib/python3.11/site-packages/faiss/class_wrappers.py:227, in handle_Index.<locals>.replacement_add(self, x)
214 def replacement_add(self, x):
215 """Adds vectors to the index.
216 The index must be trained before vectors can be added to it.
217 The vectors are implicitly numbered in sequence. When `n` vectors are
(...)
224 `dtype` must be float32.
225 """
--> 227 n, d = x.shape
228 assert d == self.d
229 x = np.ascontiguousarray(x, dtype='float32')
ValueError: not enough values to unpack (expected 2, got 1)
``` | FAISS.add_embeddings is typed to take iterables but does not. | https://api.github.com/repos/langchain-ai/langchain/issues/5336/comments | 3 | 2023-05-27T11:29:03Z | 2023-12-07T16:08:35Z | https://github.com/langchain-ai/langchain/issues/5336 | 1,728,661,570 | 5,336 |
[
"langchain-ai",
"langchain"
] | ### System Info
* Langchain: 0.0.181
* OS: Ubuntu Linux 20.04
* Kernel: `Linux iZt4n78zs78m7gw0tztt8lZ 5.4.0-47-generic #51-Ubuntu SMP Fri Sep 4 19:50:52 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux`
* Ubuntu version:
```plain
LSB Version: core-11.1.0ubuntu2-noarch:security-11.1.0ubuntu2-noarch
Distributor ID: Ubuntu
Description: Ubuntu 20.04.1 LTS
Release: 20.04
Codename: focal
```
* Python: Python 3.8.2
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Use the example code provided in [Quick Start: Agents with Chat Models](https://python.langchain.com/en/latest/getting_started/getting_started.html#agents-with-chat-models), but replace the 'serpapi' tool with 'google-serper' tool .
Here's the modified code:
```python
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
chat = ChatOpenAI(temperature=0.3)
llm = OpenAI(temperature=0)
tools = load_tools(["google-serper", "llm-math"], llm=llm)
agent = initialize_agent(tools, chat, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
result = agent.run("Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?")
print(result)
```
When I execute the code above, an error occurs. Here's the error text:
~~~plain
(openai-test) dsdashun@iZt4n78zs78m7gw0tztt8lZ:~/workspaces/openai-test/langchain$ python3 get_started_chat_agent.py
> Entering new AgentExecutor chain...
Traceback (most recent call last):
File "/home/dsdashun/.local/share/virtualenvs/openai-test-pldc8tvg/lib/python3.8/site-packages/langchain/agents/chat/output_parser.py", line 22, in parse
response = json.loads(action.strip())
File "/usr/lib/python3.8/json/__init__.py", line 357, in loads
return _default_decoder.decode(s)
File "/usr/lib/python3.8/json/decoder.py", line 340, in decode
raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 4 column 2 (char 75)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "get_started_chat_agent.py", line 14, in <module>
result = agent.run("Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?")
File "/home/dsdashun/.local/share/virtualenvs/openai-test-pldc8tvg/lib/python3.8/site-packages/langchain/chains/base.py", line 213, in run
return self(args[0])[self.output_keys[0]]
File "/home/dsdashun/.local/share/virtualenvs/openai-test-pldc8tvg/lib/python3.8/site-packages/langchain/chains/base.py", line 116, in __call__
raise e
File "/home/dsdashun/.local/share/virtualenvs/openai-test-pldc8tvg/lib/python3.8/site-packages/langchain/chains/base.py", line 113, in __call__
outputs = self._call(inputs)
File "/home/dsdashun/.local/share/virtualenvs/openai-test-pldc8tvg/lib/python3.8/site-packages/langchain/agents/agent.py", line 792, in _call
next_step_output = self._take_next_step(
File "/home/dsdashun/.local/share/virtualenvs/openai-test-pldc8tvg/lib/python3.8/site-packages/langchain/agents/agent.py", line 672, in _take_next_step
output = self.agent.plan(intermediate_steps, **inputs)
File "/home/dsdashun/.local/share/virtualenvs/openai-test-pldc8tvg/lib/python3.8/site-packages/langchain/agents/agent.py", line 385, in plan
return self.output_parser.parse(full_output)
File "/home/dsdashun/.local/share/virtualenvs/openai-test-pldc8tvg/lib/python3.8/site-packages/langchain/agents/chat/output_parser.py", line 26, in parse
raise OutputParserException(f"Could not parse LLM output: {text}")
langchain.schema.OutputParserException: Could not parse LLM output: Question: Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?
Thought: I should use Serper Search to find out who Olivia Wilde's boyfriend is and then use Calculator to calculate his age raised to the 0.23 power.
Action:
```
{
"action": "Serper Search",
"action_input": "Olivia Wilde boyfriend"
},
{
"action": "Calculator",
"action_input": "Age of Olivia Wilde's boyfriend raised to the 0.23 power"
}
```
~~~
However, if I use the `pdb` debugger to step through the program and pause a little after running `initialize_agent`, everything is fine.
I didn't use the 'serpapi' tool because I don't have an API key for it, so I cannot verify whether the original example code runs successfully on my machine with the 'serpapi' tool.
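As a mitigation rather than a fix (assuming `handle_parsing_errors` is available in this version), the executor can be told to feed `OutputParserException`s back to the model instead of raising:
```python
# The root cause is the model emitting two action JSON blobs at once; this
# just keeps the run alive when parsing fails.
agent = initialize_agent(
    tools,
    chat,
    agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    handle_parsing_errors=True,
)
```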
### Expected behavior
I expect the code can run successfully without any problems, even if I replace the search tool with a similar one. | `Agents with Chat Models` Example Code Abnormal When Using `google-serper` Tool | https://api.github.com/repos/langchain-ai/langchain/issues/5335/comments | 3 | 2023-05-27T09:47:29Z | 2023-09-15T16:10:24Z | https://github.com/langchain-ai/langchain/issues/5335 | 1,728,609,263 | 5,335 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain Version: 0.0.176
Ubuntu x86 23.04
Memory 24gb
AMD EPYC
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.agents import initialize_agent, load_tools
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.llms import LlamaCpp

# Callbacks support token-wise streaming
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
# Verbose is required to pass to the callback manager
# Make sure the model path is correct for your system!
llm_cpp = LlamaCpp(model_path="/vicuna/ggml-vic7b-q4_0.bin", callback_manager=callback_manager)
llm = llm_cpp
tools = load_tools(["serpapi"], llm=llm_cpp)
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
agent.run("What is football?")
```
Result
```
Action Input: "what is football?" I should probably start by defining what football actually is.
Action: Let's [Search] "what is football?"
Action Input: "what is football?"
Observation: Let's [Search] "what is football?" is not a valid tool, try another one.
Thought:
```
### Expected behavior
The agent should search Google and return correct results.
If I change the model from Vicuna to the OpenAI API, it works fine.
| Serp APi and google search API won't work with LLama models like vicuna | https://api.github.com/repos/langchain-ai/langchain/issues/5329/comments | 2 | 2023-05-27T04:23:16Z | 2023-06-30T07:51:03Z | https://github.com/langchain-ai/langchain/issues/5329 | 1,728,448,457 | 5,329 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
What are the key differences between these approaches to querying a database and returning the answer along with its relevant sources?
My main objective is to have a chatbot that draws on a knowledge base while still maintaining conversation history. Its answers must also return the source documents. Which option is best among so many choices?
There are
1. [Question Answering with Sources](https://python.langchain.com/en/latest/modules/chains/index_examples/qa_with_sources.html),
```
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="stuff")
query = "What did the president say about Justice Breyer"
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
```
2. [Retrieval Question Answering with Sources](https://python.langchain.com/en/latest/modules/chains/index_examples/vector_db_qa_with_sources.html)
```
from langchain import OpenAI
chain = RetrievalQAWithSourcesChain.from_chain_type(OpenAI(temperature=0), chain_type="stuff", retriever=docsearch.as_retriever())
chain({"question": "What did the president say about Justice Breyer"}, return_only_outputs=True)
```
3. [Question Answering over Docs](https://python.langchain.com/en/latest/use_cases/question_answering.html)
```
from langchain.indexes import VectorstoreIndexCreator
index = VectorstoreIndexCreator().from_loaders([loader])
query = "What did the president say about Ketanji Brown Jackson"
index.query_with_sources(query)
```
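For my stated objective (conversation history plus sources), the closest fit I have found is `ConversationalRetrievalChain`; a hedged sketch, reusing the `docsearch` vector store from the examples above (I am assuming `output_key="answer"` is required so the memory knows which output to store once sources are returned):
```python
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(
    memory_key="chat_history",
    return_messages=True,
    output_key="answer",  # tell the memory which of the two outputs to store
)
qa = ConversationalRetrievalChain.from_llm(
    ChatOpenAI(temperature=0),
    retriever=docsearch.as_retriever(),
    memory=memory,
    return_source_documents=True,
)
result = qa({"question": "What did the president say about Justice Breyer"})
print(result["answer"], result["source_documents"])
```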
++ probably quite a few more examples I could find if I dig through the documentation. | Difference among various ways to query database and return source information? (Question Answering with Sources, Retrieval Question Answering with Sources, index.query_with_sources, etc.) | https://api.github.com/repos/langchain-ai/langchain/issues/5328/comments | 4 | 2023-05-27T03:28:23Z | 2023-09-18T16:10:35Z | https://github.com/langchain-ai/langchain/issues/5328 | 1,728,430,768 | 5,328 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
LC Release announcements seem to be missing from Discord's Announcements channel since 0.0.166. Looking more closely, these seem to be manual, added by hwchase17.
On Twitter, the most recent release announcement from the LangChainAI account is 0.0.170, viz:
https://twitter.com/search?q=(from%3ALangChainAI)%20release&src=typed_query&f=live
### Suggestion:
I couldn't tell from this project's various actions whether such postings are meant to be automated upon release (Github search on actions isn't great) and just need to be fixed. If not, I think it would be very useful for the community to add such release notification actions, so that the various places people keep up to date are all, well, up to date. | Issue: Fix or automate sync of releases to Discord Announcements channel, Twitter, etc. | https://api.github.com/repos/langchain-ai/langchain/issues/5324/comments | 2 | 2023-05-27T01:24:20Z | 2023-09-10T16:11:21Z | https://github.com/langchain-ai/langchain/issues/5324 | 1,728,392,677 | 5,324 |
[
"langchain-ai",
"langchain"
] | ### System Info
OS = MACOS
langchain=0.0.179 (also tried 0.0.174 and 0.0.178)
### Who can help?
@hwchase17 @agola11
The full code below is single file. imports and other information not added to keep it crisp.
The following works with no issues:
```
llm = AzureOpenAI(openai_api_base=openai_api_base , model="text-davinci-003", engine="text-davinci-003", temperature=0.1, verbose=True, deployment_name="text-davinci-003", deployment_id="text-davinci-003", openai_api_key=openai_api_key)
resp = llm("Tell me pub joke")
print(resp)
```
The following does not work.
```
#get document store
store = getfromstore(collection_name="sou_coll")
# Create vectorstore info object - metadata repo?
vectorstore_info = VectorStoreInfo(
name="sou",
description="sou folder",
vectorstore=store
)
# Convert the document store into a langchain toolkit
toolkit = VectorStoreToolkit(vectorstore_info=vectorstore_info)
# Add the toolkit to an end-to-end LC
agent_executor = create_vectorstore_agent(
llm=llm,
toolkit=toolkit,
verbose=True
)
response = agent_executor.run(prompt)
print(response)
```
I can confirm the document store exists, and the same code with the corresponding OpenAI (not Azure OpenAI) setup works as expected with no issue. Azure OpenAI gives the following error:
```
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 83, in __prepare_create_request
raise error.InvalidRequestError(
openai.error.InvalidRequestError: Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.completion.Completion'>
```
Observation: the LLM itself is configured correctly, since the first part (telling a joke) works, but the agent does not. Please help!
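One thing I have not yet tried (an assumption on my part: the toolkit may construct its own default OpenAI LLM for its internal tools unless one is supplied, which would explain why the direct `llm(...)` call works while the agent fails) is passing the Azure LLM to the toolkit explicitly:
```python
# Hypothesis: without llm=..., the toolkit's tools fall back to a plain
# OpenAI completion client, which lacks the Azure engine/deployment_id.
toolkit = VectorStoreToolkit(vectorstore_info=vectorstore_info, llm=llm)
```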
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
#!/usr/bin/env python3
import sys
from dotenv import load_dotenv

# Load default environment variables (.env)
load_dotenv()

# Import os to set API key
import os
# Import OpenAI as main LLM service
from langchain.llms import AzureOpenAI
from langchain.callbacks import get_openai_callback
# Bring in streamlit for UI/app interface
import streamlit as st
# Import PDF document loaders...there's other ones as well!
from langchain.document_loaders import PyPDFLoader
# Import chroma as the vector store
from langchain.vectorstores import Chroma
from common.funs import getfromstore
# Import vector store stuff
from langchain.agents.agent_toolkits import (
    create_vectorstore_agent,
    VectorStoreToolkit,
    VectorStoreInfo
)

# Set this to `azure`
openai_api_type = os.environ["OPENAI_API_TYPE"] = "azure"
openai_api_version = os.environ["OPENAI_API_VERSION"] = os.environ["AOAI_OPENAI_API_VERSION"]
openai_api_base = os.environ["OPENAI_API_BASE"] = os.environ["AOAI_OPENAI_API_BASE"]
openai_api_key = os.environ["OPENAI_API_KEY"] = os.environ["AOAI_OPENAI_API_KEY"]

# Create instance of OpenAI LLM
#llm = AzureOpenAI(openai_api_base=openai_api_base, model="text-davinci-003", temperature=0.1, verbose=True, deployment_name="text-davinci-003", openai_api_key=openai_api_key)
llm = AzureOpenAI(openai_api_base=openai_api_base, model="text-davinci-003", engine="text-davinci-003", temperature=0.1, verbose=True, deployment_name="text-davinci-003", deployment_id="text-davinci-003", openai_api_key=openai_api_key)

resp = llm("Tell me pub joke")
print(resp)
print("------------")
st.write(resp)
st.write("----------------------")

# get document store
store = getfromstore(collection_name="sou_coll")
#print(store1.get(["metadatas"]))

# Create vectorstore info object - metadata repo?
vectorstore_info = VectorStoreInfo(
    name="sou",
    description="sou folder",
    vectorstore=store
)
# Convert the document store into a langchain toolkit
toolkit = VectorStoreToolkit(vectorstore_info=vectorstore_info)
# Add the toolkit to an end-to-end LC
agent_executor = create_vectorstore_agent(
    llm=llm,
    toolkit=toolkit,
    verbose=True
)

st.title("🦜🔗🤗 What would you like to know?")
st.write("This sample uses Azure OpenAI")

# Create a text input box for the user
prompt = st.text_input('Input your prompt here:')

# If the user hits enter
if prompt:
    with get_openai_callback() as cb:
        #try:
        # Then pass the prompt to the LLM
        response = agent_executor.run(prompt)
        # ...and write it out to the screen
        st.write(response)
        st.write(cb)
        #except Exception as e:
        #    st.warning
        #    st.write("That was a difficult question! I choked on it!! Can you please try again with rephrasing it a bit?")
        #    st.write(cb)
        #    print(e)

    # Find the relevant pages
    search = store.similarity_search_with_score(prompt)
    # Write out the first
    try:
        st.write("This information was found in:")
        for doc in search:
            score = doc[1]
            try:
                page_num = doc[0].metadata['page']
            except:
                page_num = "txt snippets"
            source = doc[0].metadata['source']
            # With a streamlit expander
            with st.expander("Source: " + str(source) + " - Page: " + str(page_num) + "; Similarity Score: " + str(score)):
                st.write(doc[0].page_content)
    except:
        print("unable to get source document detail")
```
### Expected behavior
The video shows the expected output - https://www.youtube.com/watch?v=q27RbxcfGvE
The OpenAI version of this sample is identical except for the LLM and env variables; see https://github.com/ushakrishnan/SearchWithOpenAI/blob/main/pages/6_Q%26A_with_Open_AI.py.
| Issues with Azure OpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/5322/comments | 2 | 2023-05-26T23:47:15Z | 2023-05-31T23:40:55Z | https://github.com/langchain-ai/langchain/issues/5322 | 1,728,349,391 | 5,322 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain='0.0.161'
python='3.9.13'
IPython= '7.31.1'
ipykernel='6.15.2'
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
retriever1 = Pinecone.from_documents(texts, embeddings, index_name='taxation').as_retriever()
retriever2 = Pinecone.from_documents(texts, embeddings, index_name='taxation').as_retriever()

retriever_infos = [
    {
        "name": "sindh",
        "description": "Good for answering questions about Sindh",
        "retriever": retriever1
    },
    {
        "name": "punjab",
        "description": "Good for answering questions about Punjab",
        "retriever": retriever2
    }
]

chain = MultiRetrievalQAChain.from_retrievers(ChatOpenAI(model_name='gpt-3.5-turbo', temperature=0), retriever_infos, verbose=True)
chain.save('chain.json')
```
```
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_20160\230129054.py in <module>
----> 1 chain.save('chain.json')
~\anaconda3\lib\site-packages\langchain\chains\base.py in save(self, file_path)
294
295 # Fetch dictionary to save
--> 296 chain_dict = self.dict()
297
298 if save_path.suffix == ".json":
~\anaconda3\lib\site-packages\langchain\chains\base.py in dict(self, **kwargs)
269 if self.memory is not None:
270 raise ValueError("Saving of memory is not yet supported.")
--> 271 _dict = super().dict()
272 _dict["_type"] = self._chain_type
273 return _dict
~\anaconda3\lib\site-packages\pydantic\main.cp39-win_amd64.pyd in pydantic.main.BaseModel.dict()
~\anaconda3\lib\site-packages\pydantic\main.cp39-win_amd64.pyd in _iter()
~\anaconda3\lib\site-packages\pydantic\main.cp39-win_amd64.pyd in pydantic.main.BaseModel._get_value()
~\anaconda3\lib\site-packages\langchain\chains\base.py in dict(self, **kwargs)
269 if self.memory is not None:
270 raise ValueError("Saving of memory is not yet supported.")
--> 271 _dict = super().dict()
272 _dict["_type"] = self._chain_type
273 return _dict
~\anaconda3\lib\site-packages\pydantic\main.cp39-win_amd64.pyd in pydantic.main.BaseModel.dict()
~\anaconda3\lib\site-packages\pydantic\main.cp39-win_amd64.pyd in _iter()
~\anaconda3\lib\site-packages\pydantic\main.cp39-win_amd64.pyd in pydantic.main.BaseModel._get_value()
~\anaconda3\lib\site-packages\langchain\chains\base.py in dict(self, **kwargs)
269 if self.memory is not None:
270 raise ValueError("Saving of memory is not yet supported.")
--> 271 _dict = super().dict()
272 _dict["_type"] = self._chain_type
273 return _dict
~\anaconda3\lib\site-packages\pydantic\main.cp39-win_amd64.pyd in pydantic.main.BaseModel.dict()
~\anaconda3\lib\site-packages\pydantic\main.cp39-win_amd64.pyd in _iter()
~\anaconda3\lib\site-packages\pydantic\main.cp39-win_amd64.pyd in pydantic.main.BaseModel._get_value()
~\anaconda3\lib\site-packages\langchain\prompts\base.py in dict(self, **kwargs)
186 def dict(self, **kwargs: Any) -> Dict:
187 """Return dictionary representation of prompt."""
--> 188 prompt_dict = super().dict(**kwargs)
189 prompt_dict["_type"] = self._prompt_type
190 return prompt_dict
~\anaconda3\lib\site-packages\pydantic\main.cp39-win_amd64.pyd in pydantic.main.BaseModel.dict()
~\anaconda3\lib\site-packages\pydantic\main.cp39-win_amd64.pyd in _iter()
~\anaconda3\lib\site-packages\pydantic\main.cp39-win_amd64.pyd in pydantic.main.BaseModel._get_value()
~\anaconda3\lib\site-packages\langchain\schema.py in dict(self, **kwargs)
354 """Return dictionary representation of output parser."""
355 output_parser_dict = super().dict()
--> 356 output_parser_dict["_type"] = self._type
357 return output_parser_dict
358
~\anaconda3\lib\site-packages\langchain\schema.py in _type(self)
349 def _type(self) -> str:
350 """Return the type key."""
--> 351 raise NotImplementedError
352
353 def dict(self, **kwargs: Any) -> Dict:
NotImplementedError:
```
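Since saving is not implemented for this chain, a workaround sketch (reusing the names from the repro above) is to persist the construction parameters and rebuild the chain at load time:
```python
import json

# Persist only what is serializable: the router metadata.
retriever_config = [
    {"name": r["name"], "description": r["description"]} for r in retriever_infos
]
with open("chain_config.json", "w") as f:
    json.dump(retriever_config, f)

# Later: reattach live retrievers by name and reconstruct the chain.
with open("chain_config.json") as f:
    loaded = json.load(f)
retrievers = {"sindh": retriever1, "punjab": retriever2}
retriever_infos2 = [dict(cfg, retriever=retrievers[cfg["name"]]) for cfg in loaded]
chain = MultiRetrievalQAChain.from_retrievers(
    ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0), retriever_infos2, verbose=True
)
```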
### Expected behavior
I expected to save the chain on disk for future use. | MultiRetrievalQAChain | https://api.github.com/repos/langchain-ai/langchain/issues/5318/comments | 1 | 2023-05-26T22:32:19Z | 2023-09-10T16:11:27Z | https://github.com/langchain-ai/langchain/issues/5318 | 1,728,309,373 | 5,318 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.181
Python 3.10
OS: Ubuntu
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [X] Async
### Reproduction
```
import asyncio
from functools import lru_cache
from typing import AsyncGenerator
from langchain.text_splitter import RecursiveCharacterTextSplitter
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from langchain.callbacks import AsyncIteratorCallbackHandler
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from pydantic import BaseModel
from langchain.vectorstores import Chroma
from langchain.embeddings.openai import OpenAIEmbeddings
```
```
api_key = "sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
app = FastAPI()
```
```
with open('state_of_the_union.txt') as f:
state_of_the_union = f.read()
text_splitter = RecursiveCharacterTextSplitter(
# Set a really small chunk size, just to show.
chunk_size = 100,
chunk_overlap = 20,
length_function = len,
)
doc_text = text_splitter.create_documents([state_of_the_union])
```
```
embeddings = OpenAIEmbeddings(openai_api_key=api_key)
vector_db = Chroma.from_documents(doc_text, embeddings,persist_directory='db')
retriever = vector_db.as_retriever()
```
```
class ChatRequest(BaseModel):
"""Request model for chat requests.
Includes the conversation ID and the message from the user.
"""
conversation_id: str
message: str
```
```
class StreamingConversationChain:
"""
Class for handling streaming conversation chains.
It creates and stores memory for each conversation,
and generates responses using the ChatOpenAI model from LangChain.
"""
def __init__(self, openai_api_key: str, temperature: float = 0.0):
self.memories = {}
self.openai_api_key = openai_api_key
self.temperature = temperature
async def generate_response(
self, conversation_id: str, message: str
) -> AsyncGenerator[str, None]:
"""
Asynchronous function to generate a response for a conversation.
It creates a new conversation chain for each message and uses a
callback handler to stream responses as they're generated.
:param conversation_id: The ID of the conversation.
:param message: The message from the user.
"""
callback_handler = AsyncIteratorCallbackHandler()
llm = ChatOpenAI(
callbacks=[callback_handler],
streaming=True,
temperature=self.temperature,
openai_api_key=self.openai_api_key,
)
memory = self.memories.get(conversation_id)
if memory is None:
memory = ConversationBufferMemory(memory_key="chat_history",output_key='answer',
return_messages=True)
self.memories[conversation_id] = memory
chain = ConversationalRetrievalChain.from_llm(llm,
retriever=retriever, memory=memory,
chain_type="stuff",
# return_source_documents=True
)
run = asyncio.create_task(chain(({"question": message})))
async for token in callback_handler.aiter():
yield token
await run()
```
```
streaming_conversation_chain = StreamingConversationChain(
openai_api_key=api_key
)
```
```
@app.post("/chat", response_class=StreamingResponse)
async def generate_response(data: ChatRequest) -> StreamingResponse:
"""Endpoint for chat requests.
It uses the StreamingConversationChain instance to generate responses,
and then sends these responses as a streaming response.
:param data: The request data.
"""
return StreamingResponse(
streaming_conversation_chain.generate_response(
data.conversation_id, data.message
),
media_type="text/event-stream",
)
```
```
if __name__ == "__main__":
import uvicorn
uvicorn.run(app)
```
Here is the error traceback:
```
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/home/talha/venv/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 435, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "/home/talha/venv/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
return await self.app(scope, receive, send)
File "/home/talha/venv/lib/python3.10/site-packages/fastapi/applications.py", line 276, in __call__
await super().__call__(scope, receive, send)
File "/home/talha/venv/lib/python3.10/site-packages/starlette/applications.py", line 122, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/talha/venv/lib/python3.10/site-packages/starlette/middleware/errors.py", line 184, in __call__
raise exc
File "/home/talha/venv/lib/python3.10/site-packages/starlette/middleware/errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "/home/talha/venv/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
raise exc
File "/home/talha/venv/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "/home/talha/venv/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
raise e
File "/home/talha/venv/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
await self.app(scope, receive, send)
File "/home/talha/venv/lib/python3.10/site-packages/starlette/routing.py", line 718, in __call__
await route.handle(scope, receive, send)
File "/home/talha/venv/lib/python3.10/site-packages/starlette/routing.py", line 276, in handle
await self.app(scope, receive, send)
File "/home/talha/venv/lib/python3.10/site-packages/starlette/routing.py", line 69, in app
await response(scope, receive, send)
File "/home/talha/venv/lib/python3.10/site-packages/starlette/responses.py", line 270, in __call__
async with anyio.create_task_group() as task_group:
File "/home/talha/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 662, in __aexit__
raise exceptions[0]
File "/home/talha/venv/lib/python3.10/site-packages/starlette/responses.py", line 273, in wrap
await func()
File "/home/talha/venv/lib/python3.10/site-packages/starlette/responses.py", line 262, in stream_response
async for chunk in self.body_iterator:
File "/media/talha/data/nlp/langchain/fastapi/error_rep.py", line 93, in generate_response
run = asyncio.create_task(chain(({"question": message})))
File "/usr/lib/python3.10/asyncio/tasks.py", line 337, in create_task
task = loop.create_task(coro)
File "uvloop/loop.pyx", line 1435, in uvloop.loop.Loop.create_task
TypeError: a coroutine was expected, got {'question': 'what is cnn', 'chat_history': [HumanMessage(content='what is cnn', additional_kwargs={}, example=False), AIMessage(content='CNN (Cable News Network) is a news-based cable television channel and website that provides 24-hour news coverage, analysis, and commentary on current events happening around the world.', additional_kwargs={}, example=False)], 'answer': 'CNN (Cable News Network) is a news-based cable television channel and website that provides 24-hour news coverage, analysis, and commentary on current events happening around the world.'}
```
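A hedged fix sketch (assuming `Chain.acall` is the right async entry point): `chain({...})` runs the chain synchronously and returns a dict, which is exactly what `asyncio.create_task` is complaining about, and a task is awaited rather than called:
```python
# Inside generate_response, schedule the coroutine, not the sync result:
run = asyncio.create_task(chain.acall({"question": message}))
async for token in callback_handler.aiter():
    yield token
await run  # note: `await run`, not `await run()`
```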
### Expected behavior
This code worked with `ConversationChain` and produce streaming output
```
chain = ConversationChain(
memory=memory,
prompt=CHAT_PROMPT_TEMPLATE,
llm=llm,
)
run = asyncio.create_task(chain.arun(input=message))
```
But I want to use ConversationalRetrievalChain. | TypeError: a coroutine was expected, got {'question': query, 'chat_history': {...}} | https://api.github.com/repos/langchain-ai/langchain/issues/5317/comments | 3 | 2023-05-26T21:09:00Z | 2023-09-25T10:16:10Z | https://github.com/langchain-ai/langchain/issues/5317 | 1,728,248,298 | 5,317 |
[
"langchain-ai",
"langchain"
] | ### System Info
google-cloud-aiplatform==1.25.0
langchain==0.0.181
python 3.10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Any list with len > 5 will cause an error.
```python
from langchain.vectorstores import FAISS
from langchain.embeddings import VertexAIEmbeddings
text = ['text_1', 'text_2', 'text_3', 'text_4', 'text_5', 'text_6']
embeddings = VertexAIEmbeddings()
vectorstore = FAISS.from_texts(text, embeddings)
```
```python
InvalidArgument Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/google/api_core/grpc_helpers.py](https://localhost:8080/#) in error_remapped_callable(*args, **kwargs)
72 return callable_(*args, **kwargs)
73 except grpc.RpcError as exc:
---> 74 raise exceptions.from_grpc_error(exc) from exc
75
76 return error_remapped_callable
InvalidArgument: 400 5 instance(s) is allowed per prediction. Actual: 6
```
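A user-side workaround sketch (assuming `FAISS.from_embeddings` is available; the real fix would presumably be batching inside `VertexAIEmbeddings.embed_documents` itself) is to chunk the input to stay under the five-instance limit:
```python
from typing import List

def embed_in_batches(
    emb: VertexAIEmbeddings, texts: List[str], batch_size: int = 5
) -> List[List[float]]:
    # Vertex AI allows at most 5 instances per prediction request.
    vectors: List[List[float]] = []
    for i in range(0, len(texts), batch_size):
        vectors.extend(emb.embed_documents(texts[i : i + batch_size]))
    return vectors

vectorstore = FAISS.from_embeddings(
    list(zip(text, embed_in_batches(embeddings, text))), embeddings
)
```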
### Expected behavior
Expected to successfully vectorize a larger list of items. Maybe implement a step that batches the texts into groups of five inside `VertexAIEmbeddings` (see the sketch above). | VertexAIEmbeddings error when passing a list with of length greater than 5. | https://api.github.com/repos/langchain-ai/langchain/issues/5316/comments | 2 | 2023-05-26T20:31:56Z | 2023-05-29T13:57:42Z | https://github.com/langchain-ai/langchain/issues/5316 | 1,728,211,849 | 5,316 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Is there any way to convert ChatOpenAI models to ONNX format? I've noticed that other models can be converted to ONNX (example: https://github.com/openai/whisper/discussions/134) and I was wondering if similar logic could be applied in this case as well.
### Motivation
I want to save these models in ONNX format (a single file) so I can easily retrieve them and use them for question answering.
### Your contribution
Not sure. I could create a PR if I'm able to succeed in this. | Converting ChatOpenAI model to ONNX format | https://api.github.com/repos/langchain-ai/langchain/issues/5313/comments | 3 | 2023-05-26T19:19:53Z | 2023-09-27T16:06:50Z | https://github.com/langchain-ai/langchain/issues/5313 | 1,728,123,134 | 5,313 |
[
"langchain-ai",
"langchain"
I've built many langchain agents with many types of memory so they remember the context of the conversation correctly.
The problem is that if I send the same message again, the agent generates the same thought and answer again, which is not what I intend.
Example of the incorrect conversation that would happen between me and the agent when using **GPTIndexChatMemory**
**User**:- Hi
**AI**:- Hi, how can I help you today?
**User**: What's machine learning?
**AI**:- Machine learning is a field of artificial intelligence (AI) that focuses on developing algorithms and models that enable computers to learn from data and make predictions or decisions without being explicitly programmed. The primary goal of machine learning is to enable computers to automatically learn and improve from experience or examples. **(good answer referring to my previous question)**
**User**:- I didn't get it.
**AI**:- I apologize if my previous response was not clear. Let me provide a simpler explanation: Machine learning is a branch of artificial intelligence (AI) that focuses on teaching computers to learn and make decisions or predictions without being explicitly programmed for each specific task. **(good answer referring to my previous message; it knew that I didn't get the definition of ML)**
**User**:- what's the physics?
**AI**:- Physics is a natural science that studies the fundamental principles and laws governing the physical world. It seeks to understand the behavior of matter, energy, and the interactions between them. **(good answer referring to my previous message)**
**User**:- I didn't get it. **(the same message again, which is saved in the previous chat history)**
**AI**:- I apologize if my previous response was not clear. Let me provide a simpler explanation: Machine learning is a branch of artificial intelligence (AI) that focuses on teaching computers to learn and make decisions or predictions without being explicitly programmed for each specific task. **(not the correct answer; I'm referring to the definition of physics, not the definition of ML)**
**User**: What's machine learning?
**AI**:- Machine learning is a field of artificial intelligence (AI) that focuses on developing algorithms and models that enable computers to learn from data and make predictions or decisions without being explicitly programmed. The primary goal of machine learning is to enable computers to automatically learn and improve from experience or examples. **(it generated the same answer from the previous chat history, while I need it to give the user more information, not just repeat the previous answer)**
Example of the correct conversation I want to accomplish when using **GPTIndexChatMemory**
**User**:- Hi
**AI**:- Hi, how can I help you today?
**User**: What's machine learning?
**AI**:- Machine learning is a field of artificial intelligence (AI) that focuses on developing algorithms and models that enable computers to learn from data and make predictions or decisions without being explicitly programmed. The primary goal of machine learning is to enable computers to automatically learn and improve from experience or examples. **(good answer referring to my previous question)**
**User**:- I didn't get it.
**AI**:- I apologize if my previous response was not clear. Let me provide a simpler explanation: Machine learning is a branch of artificial intelligence (AI) that focuses on teaching computers to learn and make decisions or predictions without being explicitly programmed for each specific task. **(good answer referring to my previous message and he knew that I didn't the definition of ML)**
**User**:- what's the physics?
**AI**:- Physics is a natural science that studies the fundamental principles and laws governing the physical world. It seeks to understand the behavior of matter, energy, and the interactions between them. **(good answer referring to my previous message)**
**User**:- I didn't get it.
**AI**:- I apologize for any confusion caused. Let me provide a more simplified explanation of physics: Physics is the branch of science that deals with understanding and explaining how the world around us works. It is concerned with studying matter (everything that makes up the physical world) and energy (the ability to do work). **(now it's good because it knows that I'm referring to the definition of physics, not the definition of ML, although the "I didn't get it." message was saved in the previous chat history.)**
**User**: What's machine learning?
**AI**:- Machine learning is a field of artificial intelligence that focuses on developing algorithms and models capable of learning from data and making predictions or decisions. The primary idea behind machine learning is to enable computers to learn and improve automatically without explicit programming. **(better answer: although I repeated the same question, it didn't pull the same answer from the previous chat history)**
I know the problem is with the memory, because if I build my agent with **ConversationBufferWindowMemory** with k = 1, it handles this type of conversation correctly. But since I'm using **GPTIndexChatMemory**, it saves all the messages, questions, and answers of the full conversation in this memory and brings back the same answer from the previous chat history if the **message/question** is repeated, which is totally wrong.
This is my prompt to instruct my agent; it is a **CONVERSATIONAL_REACT_DESCRIPTION** agent.
"""
SMSM bot, your main objective is to provide the most helpful and accurate responses to the user Zeyad. To do this, you have a powerful toolset and the ability to learn and adapt to the conversation's context
GOAL: The priority is to keep the conversation flowing smoothly. Offer new insights, avoid repetitive responses, and refrain from relying on chat history without considering the most recent context. Always place emphasis on the most recent question or topic raised by the user, and tailor your responses to match his inquiries.
Consider the following scenarios:
**Scenario 1**: Whenever the user introduces a new topic, all his subsequent messages are assumed to refer to this latest topic, even if this message/question already exists in the previous chat history as it is in previous conversations under different topics. This context remains until the user changes the topic explicitly. Do not seek clarification on the topic unless the user's message is ambiguous within the context of the latest topic, For example, if the user asked about Machine Learning and then about Physics, and subsequently said, "I didn't get it," your responsibility is to provide further explanation about Physics (the latest topic), and not Machine Learning (the previous topic) or ask which topic he's referring to. The phrase "I didn't get it" must be associated with the most recent topic discussed.
**Scenario 2:** If the user asks the same question or a general knowledge question that has been asked before and you answered it, don't just repeat the previous answer verbatim or without relying on the previous chat history answer. Instead, try to add more value, provide a different perspective, or delve deeper into the topic and aim to generate a better and different answer that provides additional value.
You MUST use the following format to provide the answer to the user:
**Thought**: I have to see what topic we are currently discussing with the user; based on the current topic, deeply analyze the user's message, find out his intention, and see whether the user refers to the current topic or not, regardless of previous chat history and with regard to (Scenario 1, GOAL).
**AI**: [your response here]
Begin!
Previous chat history:
{chat_history}
New input: {input}
"""
That's how I define the agent and my memory:

```python
embed_model = LangchainEmbedding(HuggingFaceEmbeddings())
service_context = ServiceContext.from_defaults(embed_model=embed_model)
index = GPTListIndex([], service_context=service_context)

from llama_index.query_engine import RetrieverQueryEngine
# retriever = index.as_retriever(retriever_mode='embedding')
# query_engine = RetrieverQueryEngine(retriever)

memory = GPTIndexChatMemory(
    index=index,
    memory_key="chat_history",
    query_kwargs={"response_mode": "compact"},
    input_key="input",
)

agent_chain = initialize_agent(
    tools,
    ChatOpenAI(temperature=0),
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    verbose=True,
    handle_parsing_errors=True,
    memory=memory,
)
``` | Issue: All types of langchain memories don't work in a proper way. | https://api.github.com/repos/langchain-ai/langchain/issues/5308/comments | 2 | 2023-05-26T16:33:34Z | 2023-09-18T16:10:41Z | https://github.com/langchain-ai/langchain/issues/5308 | 1,727,924,219 | 5,308 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Add a langchain.embeddings.AnthropicEmbeddings class, similar to the langchain.embeddings.OpenAIEmbeddings class
### Motivation
I am trying to modify this notebook to use Claude by Anthropic instead of OpenAI: https://github.com/pinecone-io/examples/blob/master/generation/langchain/handbook/05-langchain-retrieval-augmentation.ipynb
This notebook uses Pinecone and an OpenAI LLM to do retrieval augmentation, but I would like to use Claude by Anthropic
However, I am stuck because of the lack of a corresponding langchain.embeddings.AnthropicEmbeddings to replace the langchain.embeddings.OpenAIEmbeddings class that is used in this example
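To make the ask concrete, here is a rough sketch of the interface such a class would need in order to drop into the notebook. Note that, to my knowledge, Anthropic does not currently expose a public embeddings endpoint, so the body is deliberately left unimplemented; everything here is a proposal, not an existing API:

```python
from typing import List

from langchain.embeddings.base import Embeddings


class AnthropicEmbeddings(Embeddings):
    """Proposed class mirroring the OpenAIEmbeddings interface."""

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        return [self.embed_query(text) for text in texts]

    def embed_query(self, text: str) -> List[float]:
        # Placeholder: would call an Anthropic embeddings endpoint if one existed.
        raise NotImplementedError("Anthropic does not currently expose an embeddings API")
```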
### Your contribution
I am willing to contribute, but would appreciate some guidance. I am very new to this project | Add a langchain.embeddings.AnthropicEmbeddings class | https://api.github.com/repos/langchain-ai/langchain/issues/5307/comments | 1 | 2023-05-26T16:25:57Z | 2023-05-26T17:02:37Z | https://github.com/langchain-ai/langchain/issues/5307 | 1,727,914,846 | 5,307 |
[
"langchain-ai",
"langchain"
] | ### System Info
- 5.19.0-42-generic # 43~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Fri Apr 21 16:51:08 UTC 2 x86_64 x86_64 x86_64 GNU/Linux
- langchain==0.0.180
- Python 3.10.11
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Set up a retriever using any type of retriever (for example, I used Pinecone).
2. Pass it into the ContextualCompressionRetriever.
3. If the base retriever returns an empty list of documents,
4. it throws an error: **cohere.error.CohereAPIError: invalid request: list of documents must not be empty**
> File "/workspaces/example/.venv/lib/python3.10/site-packages/langchain/retrievers/contextual_compression.py", line 37, in get_relevant_documents
> compressed_docs = self.base_compressor.compress_documents(docs, query)
> File "/workspaces/example/.venv/lib/python3.10/site-packages/langchain/retrievers/document_compressors/cohere_rerank.py", line 57, in compress_documents
> results = self.client.rerank(
> File "/workspaces/example/.venv/lib/python3.10/site-packages/cohere/client.py", line 633, in rerank
> reranking = Reranking(self._request(cohere.RERANK_URL, json=json_body))
> File "/workspaces/example/.venv/lib/python3.10/site-packages/cohere/client.py", line 692, in _request
> self._check_response(json_response, response.headers, response.status_code)
> File "/workspaces/example/.venv/lib/python3.10/site-packages/cohere/client.py", line 642, in _check_response
> raise CohereAPIError(
> **cohere.error.CohereAPIError: invalid request: list of documents must not be empty**
The code looks like this:
```python
retriever = vectorstore.as_retriever()
compressor = CohereRerank()
compression_retriever = ContextualCompressionRetriever(
base_compressor=compressor, base_retriever=retriever
)
return compression_retriever
```
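A possible workaround until this is fixed upstream is a thin wrapper that short-circuits before the Cohere API is called when the retriever comes back empty (a sketch, assuming langchain 0.0.180's `compress_documents` signature):

```python
from typing import Sequence

from langchain.retrievers.document_compressors import CohereRerank
from langchain.schema import Document


class SafeCohereRerank(CohereRerank):
    def compress_documents(
        self, documents: Sequence[Document], query: str
    ) -> Sequence[Document]:
        if not documents:
            # Cohere's rerank endpoint rejects an empty document list.
            return []
        return super().compress_documents(documents, query)
```

Dropping `SafeCohereRerank()` in as the `base_compressor` should give the expected empty-list behavior.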
### Expected behavior
**No error should be thrown**; an empty list should be returned instead. | CohereAPIError thrown when base retriever returns empty documents in ContextualCompressionRetriever using Cohere Rank | https://api.github.com/repos/langchain-ai/langchain/issues/5304/comments | 2 | 2023-05-26T16:10:47Z | 2023-05-28T20:19:35Z | https://github.com/langchain-ai/langchain/issues/5304 | 1,727,893,507 | 5,304 |
[
"langchain-ai",
"langchain"
] | ### System Info
google-cloud-aiplatform==1.25.0
langchain==0.0.180
python 3.11
### Who can help?
@dev2049
@Jflick58
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
question1 = "I am axa, I'm a 2 months old baby.."
question2 = "I like eating 🍌 🍉 🫐 but dislike 🥑"
question3 = "what is my name?"
question4 = "Do i disklike 🍌?"
agent_chain = initialize_agent(
agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
tools=[],
llm=llm,
verbose=True,
max_iterations=3,
memory=ConversationBufferMemory(
memory_key="chat_history", return_messages=True),
)
agent_chain.run(input=question1)
agent_chain.run(input=question2)
agent_chain.run(input=question3)
agent_chain.run(input=question4)
File "/Users/axa/workspace/h/default/genai_learning/post/api/app/routes/v1/quiz_chat.py", line 271, in ask
agent_chain.run(input=question1)
File "/Users/axa/workspace/h/default/genai_learning/post/.venv/lib/python3.11/site-packages/langchain/chains/base.py", line 239, in run
return self(kwargs, callbacks=callbacks)[self.output_keys[0]]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/axa/workspace/h/default/genai_learning/post/.venv/lib/python3.11/site-packages/langchain/chains/base.py", line 140, in __call__
raise e
File "/Users/axa/workspace/h/default/genai_learning/post/.venv/lib/python3.11/site-packages/langchain/chains/base.py", line 134, in __call__
self._call(inputs, run_manager=run_manager)
File "/Users/axa/workspace/h/default/genai_learning/post/.venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 951, in _call
next_step_output = self._take_next_step(
^^^^^^^^^^^^^^^^^^^^^
File "/Users/axa/workspace/h/default/genai_learning/post/.venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 773, in _take_next_step
raise e
File "/Users/axa/workspace/h/default/genai_learning/post/.venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 762, in _take_next_step
output = self.agent.plan(
^^^^^^^^^^^^^^^^
File "/Users/axa/workspace/h/default/genai_learning/post/.venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 444, in plan
return self.output_parser.parse(full_output)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/axa/workspace/h/default/genai_learning/post/.venv/lib/python3.11/site-packages/langchain/agents/conversational/output_parser.py", line 23, in parse
raise OutputParserException(f"Could not parse LLM output: `{text}`")
langchain.schema.OutputParserException: Could not parse LLM output: `Hi Axa, it's nice to meet you! I'm Bard, a large language model, also known as a conversational AI or chatbot trained to be informative and comprehensive. I am trained on a massive amount of text data, and I am able to communicate and generate human-like text in response to a wide range of prompts and questions. For example, I can provide summaries of factual topics or create stories.`
```
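As a mitigation (not a root-cause fix for the Vertex model ignoring the format instructions), the executor can be told to feed parse failures back to the model instead of raising. A sketch against the same setup:

```python
agent_chain = initialize_agent(
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    tools=[],
    llm=llm,
    verbose=True,
    max_iterations=3,
    handle_parsing_errors=True,  # parse errors are sent back as observations instead of raising
    memory=ConversationBufferMemory(memory_key="chat_history", return_messages=True),
)
```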
### Expected behavior
When I use the same code but with ChatOpenAI(), it works perfectly:
> Entering new AgentExecutor chain...
Thought: Do I need to use a tool? No
AI: Hello Axa! As an AI language model, I'm not able to see or interact with you physically, but I'm here to assist you with any questions or topics you might have. How can I assist you today?
> Finished chain.
> Entering new AgentExecutor chain...
Thought: Do I need to use a tool? No
AI: It's great to hear that you enjoy eating bananas, watermelons, and blueberries! However, it's understandable that you might not like avocados. Everyone has their own preferences when it comes to food. Is there anything else you would like to discuss or ask about?
> Finished chain.
> Entering new AgentExecutor chain...
Thought: Do I need to use a tool? No
AI: Your name is Axa, as you mentioned earlier.
> Finished chain.
> Entering new AgentExecutor chain...
Thought: Do I need to use a tool? No
AI: You did not mention that you dislike bananas, so I cannot say for sure. However, based on your previous message, it seems that you enjoy eating bananas.
> Finished chain.
INFO: 127.0.0.1:57044 - "POST /api/v1/quiz/ask HTTP/1.1" 200 OK | Vertex ChatVertexAI() doesn't support initialize_agent() as OutputParserException error | https://api.github.com/repos/langchain-ai/langchain/issues/5301/comments | 1 | 2023-05-26T15:29:14Z | 2023-09-10T16:11:32Z | https://github.com/langchain-ai/langchain/issues/5301 | 1,727,836,402 | 5,301 |
[
"langchain-ai",
"langchain"
] | ### Feature request
We should add support for the following vectorizers in the [weaviate hybrid search](https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/weaviate-hybrid.html):
1. [cohere](https://weaviate.io/developers/weaviate/modules/retriever-vectorizer-modules/text2vec-cohere)
2. [palm](https://weaviate.io/developers/weaviate/modules/retriever-vectorizer-modules/text2vec-palm)
3. [huggingface](https://weaviate.io/developers/weaviate/modules/retriever-vectorizer-modules/text2vec-huggingface)
### Motivation
more flexibility to users
### Your contribution
code review | Weaviate: Add support for other vectorizers in hybrid search | https://api.github.com/repos/langchain-ai/langchain/issues/5300/comments | 9 | 2023-05-26T14:54:53Z | 2023-09-18T16:10:45Z | https://github.com/langchain-ai/langchain/issues/5300 | 1,727,780,002 | 5,300 |
[
"langchain-ai",
"langchain"
] | ### System Info
Version: 0.0.180
Python: 3.10.11
OS: macOS Monterey 12.5.1 (Apple Silicon)
Steps to reproduce:
```
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
# First, let's load the language model we're going to use to control the agent.
chat = ChatOpenAI(temperature=0)
# Next, let's load some tools to use. Note that the `llm-math` tool uses an LLM, so we need to pass that in.
llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
# Finally, let's initialize an agent with the tools, the language model, and the type of agent we want to use.
agent = initialize_agent(tools, chat, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
# Now let's test it out!
agent.run("What is EPAM price in NYSE? What is that number raised to the 0.23 power?")
```
Output:
```
OutputParserException: Could not parse LLM output:
Thought: I need to use a search engine to find the current price of EPAM on NYSE and a calculator to raise it to the 0.23 power.

Action:
{
  "action": "Search",
  "action_input": "EPAM NYSE price"
}
```

(The model wrapped the Action JSON in a fenced block. The full stack trace shows `parse_json_markdown` raising `JSONDecodeError: Expecting value: line 1 column 1 (char 0)` inside `ChatOutputParser.parse` in `langchain/agents/chat/output_parser.py`, which `Agent.plan` in `langchain/agents/agent.py` re-raises as the `OutputParserException` above.)
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run the script shown under "System Info" above.
### Expected behavior
The documented example should run without raising OutputParserException. | Failure to run docpage examples | https://api.github.com/repos/langchain-ai/langchain/issues/5299/comments | 3 | 2023-05-26T13:48:05Z | 2023-06-05T08:51:01Z | https://github.com/langchain-ai/langchain/issues/5299 | 1,727,665,260 | 5,299 |
[
"langchain-ai",
"langchain"
] | ### System Info
`python 3.11`
```
fastapi==0.95.1
langchain==0.0.180
pydantic==1.10.7
uvicorn==0.21.1
openai==0.27.4
```
### Who can help?
@agola11
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [X] Async
### Reproduction
I am trying to create a streaming endpoint in FastAPI; below are the files.
`main.py`
```python
from fastapi import FastAPI
from src.chat_stream import ChatOpenAIStreamingResponse, send_message, StreamRequest
app = FastAPI()
@app.post("/chat_streaming", response_class=StreamingResponse)
async def chat(body: StreamRequest ):
return ChatOpenAIStreamingResponse(send_message(body.message), media_type="text/event-stream")
```
`src/chat_stream.py`
```python
from typing import Any, Awaitable, Callable, Iterator, Optional, Union

from fastapi.responses import StreamingResponse
from pydantic import BaseModel
from starlette.types import Send

from langchain.callbacks.base import AsyncCallbackHandler
from langchain.callbacks.manager import AsyncCallbackManager
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

Sender = Callable[[Union[str, bytes]], Awaitable[None]]
class EmptyIterator(Iterator[Union[str, bytes]]):
def __iter__(self):
return self
def __next__(self):
raise StopIteration
class AsyncStreamCallbackHandler(AsyncCallbackHandler):
"""Callback handler for streaming, inheritance from AsyncCallbackHandler."""
def __init__(self, send: Sender):
super().__init__()
self.send = send
async def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
"""Rewrite on_llm_new_token to send token to client."""
await self.send(f"data: {token}\n\n")
class ChatOpenAIStreamingResponse(StreamingResponse):
"""Streaming response for openai chat model, inheritance from StreamingResponse."""
def __init__(
self,
generate: Callable[[Sender], Awaitable[None]],
status_code: int = 200,
media_type: Optional[str] = None,
) -> None:
super().__init__(
content=EmptyIterator(), status_code=status_code, media_type=media_type
)
self.generate = generate
async def stream_response(self, send: Send) -> None:
"""Rewrite stream_response to send response to client."""
await send(
{
"type": "http.response.start",
"status": self.status_code,
"headers": self.raw_headers,
}
)
async def send_chunk(chunk: Union[str, bytes]):
if not isinstance(chunk, bytes):
chunk = chunk.encode(self.charset)
await send({"type": "http.response.body", "body": chunk, "more_body": True})
# send body to client
await self.generate(send_chunk)
# send empty body to client to close connection
await send({"type": "http.response.body", "body": b"", "more_body": False})
def send_message(message: str) -> Callable[[Sender], Awaitable[None]]:
async def generate(send: Sender):
model = ChatOpenAI(
streaming=True,
verbose=True,
callback_manager=AsyncCallbackManager([AsyncStreamCallbackHandler(send)]),
)
await model.agenerate(messages=[[HumanMessage(content=message)]])
return generate
class StreamRequest(BaseModel):
"""Request body for streaming."""
message: str
```
### Expected behavior
The Endpoint should stream the response from LLM Chain, instead I am getting this error
```
Retrying langchain.chat_models.openai.acompletion_with_retry.<locals>._completion_with_retry in 1.0 seconds as it raised APIConnectionError: Error communicating with OpenAI.
Retrying langchain.chat_models.openai.acompletion_with_retry.<locals>._completion_with_retry in 2.0 seconds as it raised APIConnectionError: Error communicating with OpenAI.
Retrying langchain.chat_models.openai.acompletion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIConnectionError: Error communicating with OpenAI.
Retrying langchain.chat_models.openai.acompletion_with_retry.<locals>._completion_with_retry in 8.0 seconds as it raised APIConnectionError: Error communicating with OpenAI.
Retrying langchain.chat_models.openai.acompletion_with_retry.<locals>._completion_with_retry in 16.0 seconds as it raised APIConnectionError: Error communicating with OpenAI.
```
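The retries above mask the root cause shown in the full traceback below: an `ssl.SSLCertVerificationError` from the macOS Python install. One workaround worth trying (an assumption about the environment, not a langchain fix) is to point the default SSL context at certifi's CA bundle before any outbound call is made:

```python
import os

import certifi

# ssl.create_default_context() (used by aiohttp, which the async OpenAI client
# relies on) honors these variables when locating trusted CA certificates.
os.environ["SSL_CERT_FILE"] = certifi.where()
os.environ["REQUESTS_CA_BUNDLE"] = certifi.where()
```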
```python
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/Project/venv/lib/python3.11/site-packages/aiohttp/connector.py", line 980, in _wrap_create_connection
return await self._loop.create_connection(*args, **kwargs) # type: ignore[return-value] # noqa
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/base_events.py", line 1098, in create_connection
transport, protocol = await self._create_connection_transport(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/base_events.py", line 1131, in _create_connection_transport
await waiter
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/sslproto.py", line 577, in _on_handshake_complete
raise handshake_exc
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/sslproto.py", line 559, in _do_handshake
self._sslobj.do_handshake()
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py", line 979, in do_handshake
self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:992)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Project/venv/lib/python3.11/site-packages/openai/api_requestor.py", line 588, in arequest_raw
result = await session.request(**request_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Project/venv/lib/python3.11/site-packages/aiohttp/client.py", line 536, in _request
conn = await self._connector.connect(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Project/venv/lib/python3.11/site-packages/aiohttp/connector.py", line 540, in connect
proto = await self._create_connection(req, traces, timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Project/venv/lib/python3.11/site-packages/aiohttp/connector.py", line 901, in _create_connection
_, proto = await self._create_direct_connection(req, traces, timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Project/venv/lib/python3.11/site-packages/aiohttp/connector.py", line 1206, in _create_direct_connection
raise last_exc
File "/Project/venv/lib/python3.11/site-packages/aiohttp/connector.py", line 1175, in _create_direct_connection
transp, proto = await self._wrap_create_connection(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Project/venv/lib/python3.11/site-packages/aiohttp/connector.py", line 982, in _wrap_create_connection
raise ClientConnectorCertificateError(req.connection_key, exc) from exc
aiohttp.client_exceptions.ClientConnectorCertificateError: Cannot connect to host api.openai.com:443 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:992)')]
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Project/venv/lib/python3.11/site-packages/uvicorn/protocols/http/h11_impl.py", line 429, in run_asgi
result = await app( # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Project/venv/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Project/venv/lib/python3.11/site-packages/fastapi/applications.py", line 276, in __call__
await super().__call__(scope, receive, send)
File "/Project/venv/lib/python3.11/site-packages/starlette/applications.py", line 122, in __call__
await self.middleware_stack(scope, receive, send)
File "/Project/venv/lib/python3.11/site-packages/starlette/middleware/errors.py", line 184, in __call__
raise exc
File "/Project/venv/lib/python3.11/site-packages/starlette/middleware/errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "/Project/venv/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
raise exc
File "/Project/venv/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "/Project/venv/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
raise e
File "/Project/venv/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
await self.app(scope, receive, send)
File "/Project/venv/lib/python3.11/site-packages/starlette/routing.py", line 718, in __call__
await route.handle(scope, receive, send)
File "/Project/venv/lib/python3.11/site-packages/starlette/routing.py", line 276, in handle
await self.app(scope, receive, send)
File "/Project/venv/lib/python3.11/site-packages/starlette/routing.py", line 69, in app
await response(scope, receive, send)
File "/Project/venv/lib/python3.11/site-packages/starlette/responses.py", line 270, in __call__
async with anyio.create_task_group() as task_group:
File "/Project/venv/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 662, in __aexit__
raise exceptions[0]
File "/Project/venv/lib/python3.11/site-packages/starlette/responses.py", line 273, in wrap
await func()
File "/Project/src/app.py", line 67, in stream_response
await self.generate(send_chunk)
File "/Project/src/app.py", line 80, in generate
await model.agenerate(messages=[[HumanMessage(content=message)]])
File "/Project/venv/lib/python3.11/site-packages/langchain/chat_models/base.py", line 63, in agenerate
results = await asyncio.gather(
^^^^^^^^^^^^^^^^^^^^^
File "/Project/venv/lib/python3.11/site-packages/langchain/chat_models/openai.py", line 297, in _agenerate
async for stream_resp in await acompletion_with_retry(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Project/venv/lib/python3.11/site-packages/langchain/chat_models/openai.py", line 63, in acompletion_with_retry
return await _completion_with_retry(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Project/venv/lib/python3.11/site-packages/tenacity/_asyncio.py", line 88, in async_wrapped
return await fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Project/venv/lib/python3.11/site-packages/tenacity/_asyncio.py", line 47, in __call__
do = self.iter(retry_state=retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Project/venv/lib/python3.11/site-packages/tenacity/__init__.py", line 325, in iter
raise retry_exc.reraise()
^^^^^^^^^^^^^^^^^^^
File "/Project/venv/lib/python3.11/site-packages/tenacity/__init__.py", line 158, in reraise
raise self.last_attempt.result()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/Project/venv/lib/python3.11/site-packages/tenacity/_asyncio.py", line 50, in __call__
result = await fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Project/venv/lib/python3.11/site-packages/langchain/chat_models/openai.py", line 61, in _completion_with_retry
return await llm.client.acreate(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Project/venv/lib/python3.11/site-packages/openai/api_resources/chat_completion.py", line 45, in acreate
return await super().acreate(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Project/venv/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 217, in acreate
response, _, api_key = await requestor.arequest(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Project/venv/lib/python3.11/site-packages/openai/api_requestor.py", line 300, in arequest
result = await self.arequest_raw(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/Project/venv/lib/python3.11/site-packages/openai/api_requestor.py", line 605, in arequest_raw
raise error.APIConnectionError("Error communicating with OpenAI") from e
openai.error.APIConnectionError: Error communicating with OpenAI
``` | APIConnectionError: Error communicating with OpenAI. | https://api.github.com/repos/langchain-ai/langchain/issues/5296/comments | 12 | 2023-05-26T12:14:47Z | 2024-04-19T15:23:44Z | https://github.com/langchain-ai/langchain/issues/5296 | 1,727,514,993 | 5,296 |
[
"langchain-ai",
"langchain"
] | ### System Info
ValueError: `run` not supported when there is not exactly one output key. Got ['result', 'source_documents']
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
qa = RetrievalQA.from_chain_type(
    llm=OpenAI(),
    chain_type="stuff",
    retriever=db.as_retriever(),
    return_source_documents=True,
)

agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True)
# also tried: output_keys=['result', 'source_documents']
agent_chain = AgentExecutor.from_agent_and_tools(
    agent=agent, tools=tools, verbose=True, memory=memory
)
# also tried: return_intermediate_steps=True
```
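The usual way around this error when `return_source_documents=True` is to invoke the chain as a callable instead of `run`, since `run` only supports exactly one output key:

```python
result = qa({"query": "your question here"})
answer = result["result"]
sources = result["source_documents"]
```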
### Expected behavior
The chain should return both the answer and the source documents. | Get the source document info with result | https://api.github.com/repos/langchain-ai/langchain/issues/5295/comments | 4 | 2023-05-26T11:41:44Z | 2023-10-23T16:08:27Z | https://github.com/langchain-ai/langchain/issues/5295 | 1,727,465,007 | 5,295 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
TL;DR: The use of exec() in agents can lead to remote code execution vulnerabilities. Some Huggingface projects use such agents, despite the potential harm of LLM-generated Python code.
#1026 and #814 discuss the security concerns regarding the use of `exec()` in llm_math chain. The comments in #1026 proposed methods to sandbox the code execution, but due to environmental issues, the code was patched to replace `exec()` with `numexpr.evaluate()` (#2943). This restricted the execution capabilities to mathematical functionalities only. This bug was assigned the CVE number CVE-2023-29374.
As shown in the above issues, the usage of `exec()` in a chain can pose a significant security risk, especially when the chain is running on a remote machine. This seems common scenario for projects in Huggingface.
However, in the latest langchain, `exec()` is still used in `PythonReplTool` and `PythonAstReplTool`.
https://github.com/hwchase17/langchain/blob/aec642febb3daa7dbb6a19996aac2efa92bbf1bd/langchain/tools/python/tool.py#L55
https://github.com/hwchase17/langchain/blob/aec642febb3daa7dbb6a19996aac2efa92bbf1bd/langchain/tools/python/tool.py#L102
These functions are called by the Pandas Dataframe Agent, Spark Dataframe Agent, and CSV Agent. It seems they are intentionally designed to pass the LLM output to `PythonREPLTool` or `PythonAstREPLTool` to execute the LLM-generated code on the machine.
The documentation for these agents explicitly states that they should be used with caution since LLM-generated Python code can be potentially harmful. For instance:
https://github.com/hwchase17/langchain/blob/aec642febb3daa7dbb6a19996aac2efa92bbf1bd/docs/modules/agents/toolkits/examples/pandas.ipynb#L12
Despite this, I have observed several projects in Huggingface using `create_pandas_dataframe_agent` and `create_csv_agent`.
### Suggestion:
Fixing this issue as done in llm_math chain seems challenging.
Simply restricting the LLM-generated code to Pandas and Spark execution might not be sufficient because there are still numerous malicious tasks that can be performed using those APIs. For instance, Pandas can read and write files.
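As a minimal illustration (with hypothetical paths) of why a pandas-only allowlist is not a sandbox:

```python
import pandas as pd

# "pandas-only" code can still read and write arbitrary files on the host:
leaked = pd.read_csv("/etc/passwd", sep=":", header=None)  # arbitrary read
leaked.to_csv("/tmp/exfiltrated.csv", index=False)         # arbitrary write
```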
Meanwhile, it seems crucial to emphasize the security concerns related to LLM-generated code for the overall security of LLM apps. Merely limiting execution to specific frameworks or APIs may not fully address the underlying security risks.
| Issue: security concerns with `exec()` via multiple agents and Shell tool | https://api.github.com/repos/langchain-ai/langchain/issues/5294/comments | 3 | 2023-05-26T11:38:23Z | 2024-03-13T16:12:29Z | https://github.com/langchain-ai/langchain/issues/5294 | 1,727,460,382 | 5,294 |
[
"langchain-ai",
"langchain"
] | ### Feature request
It would be very useful to pass a `history` parameter directly, for example:
```python
# Proposed API (pseudocode):
history = [("who are you", "i'm an ai")]

llm = OpenAI()
llm("hello", history=history)

chain = LLMChain(llm=llm, prompt=prompt)
chain({"query": "hello", "history": history})
```
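For reference, the closest existing pattern (an approximation of the proposed API, using memory rather than a `history` argument) is to pre-seed a `ConversationBufferMemory`:

```python
from langchain import LLMChain, OpenAI, PromptTemplate
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(memory_key="history")
memory.save_context({"input": "who are you"}, {"output": "i'm an ai"})  # pre-load history

prompt = PromptTemplate(
    input_variables=["history", "query"],
    template="{history}\nHuman: {query}\nAI:",
)
chain = LLMChain(llm=OpenAI(), prompt=prompt, memory=memory)
print(chain.run(query="hello"))
```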
### Motivation
*
### Your contribution
* | support history in LLMChain and LLM | https://api.github.com/repos/langchain-ai/langchain/issues/5289/comments | 1 | 2023-05-26T09:11:16Z | 2023-09-10T16:11:37Z | https://github.com/langchain-ai/langchain/issues/5289 | 1,727,217,498 | 5,289 |
[
"langchain-ai",
"langchain"
] | ### System Info
```python
import os
from langchain import OpenAI, SQLDatabase, SQLDatabaseChain
from langchain.chat_models import ChatOpenAI
from dotenv import load_dotenv

load_dotenv()
os.environ["OPENAI_API_KEY"] = os.getenv("OPENAI_API_KEY")

db = SQLDatabase.from_uri("sqlite:///data/data.db")
llm = ChatOpenAI(temperature=0, verbose=True)
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
res = db_chain.run("total sales by each region?")
print(res)
```

But if I use text-davinci, it generates a single result.
```python
import os
from langchain import OpenAI, SQLDatabase, SQLDatabaseChain
from langchain.chat_models import ChatOpenAI
from dotenv import load_dotenv

load_dotenv()
os.environ["OPENAI_API_KEY"] = os.getenv("OPENAI_API_KEY")

db = SQLDatabase.from_uri("sqlite:///data/data.db")
llm = OpenAI(temperature=0, verbose=True)
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
res = db_chain.run("total sales by each region?")
print(res)
```

How can I overcome this issue with **ChatOpenAI**?
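One stopgap I'd suggest trying (a client-side trim, an assumption rather than a fix inside the chain) is to cut off everything the chat model appends after its first answer:

```python
res = db_chain.run("total sales by each region?")
# The chat model sometimes continues the few-shot pattern with its own
# "Question:" block; keep only the text before that.
answer = res.split("\nQuestion:")[0].strip()
print(answer)
```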
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run the first script above (the `ChatOpenAI` version).
### Expected behavior
I need a single answer for the user's input query only. After answering, the model keeps adding a new question by itself; these extra add-on questions and queries are not needed with **ChatOpenAI**. | SQL chain generates extra add on question if I use ChatOpenAI inplace of OpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/5286/comments | 7 | 2023-05-26T06:48:08Z | 2023-10-16T22:55:29Z | https://github.com/langchain-ai/langchain/issues/5286 | 1,727,005,383 | 5,286 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Hey guys, thanks for your amazing work. If I want to get a dictionary SQL result instead of the default tuple in SQLDatabaseChain, what settings do I need to change?
### Motivation
Without database table header fields, the articles generated by LLM may contain errors.
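One direction worth exploring (a sketch, assuming SQLAlchemy 1.4+ and that langchain's `SQLDatabase` keeps its `run` interface) is a subclass that returns row mappings instead of tuples:

```python
from sqlalchemy import text

from langchain import SQLDatabase


class DictSQLDatabase(SQLDatabase):
    def run(self, command: str, fetch: str = "all") -> str:
        with self._engine.begin() as connection:
            result = connection.execute(text(command))
            # .mappings() yields dict-like rows keyed by column name.
            rows = [dict(row) for row in result.mappings().all()]
        return str(rows)
```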
### Your contribution
I am currently diving into the code to see how to deal with it. | change tuple sql result to dict sql result | https://api.github.com/repos/langchain-ai/langchain/issues/5284/comments | 2 | 2023-05-26T05:27:36Z | 2023-09-18T16:10:51Z | https://github.com/langchain-ai/langchain/issues/5284 | 1,726,931,734 | 5,284 |
[
"langchain-ai",
"langchain"
] | ### System Info
windows
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The LLM will sometimes generate content in this numbered form:

```
Action 1: xxx
Action Input 1: xxx
Observation 1: xxx
```
```python
regex = (
    r"Action\s*\d*\s*:[\s]*(.*?)[\s]*Action\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)"
)
```
The regex above, from `langchain.agents.mrkl.output_parser`, matches the Action and Action Input in the numbered scenario:

```
Action 1: xxx
Action Input 1: xxx
```

but the stop list is still `['\nObservation:', '\n\tObservation:']`, which cannot stop the LLM's generation, because the LLM emits `Observation 1: ...` instead.
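Until the stop logic is extended, one client-side sketch is to trim the output at any numbered or unnumbered observation marker before parsing:

```python
import re


def trim_hallucinated_observation(text: str) -> str:
    # The stop list only covers "\nObservation:", so also cut the output at
    # "Observation 1:", "Observation 2:", etc.
    return re.split(r"\nObservation\s*\d*\s*:", text)[0]
```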
### Expected behavior
The stop logic should be extended so that numbered markers such as `Observation 1:` also terminate generation. | Stop logic should be optimezed to be compatible with "Conversation 1:" | https://api.github.com/repos/langchain-ai/langchain/issues/5283/comments | 1 | 2023-05-26T05:21:00Z | 2023-09-10T16:11:42Z | https://github.com/langchain-ai/langchain/issues/5283 | 1,726,925,790 | 5,283 |
[
"langchain-ai",
"langchain"
] | ### Feature request
```python
loader = SitemapLoader(
    "https://langchain.readthedocs.io/sitemap.xml",
    filter_modified_dates=["2023-", "2022-12-"],
)
documents = loader.load()
```
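Until such a parameter exists, the same filtering can be approximated by hand. In this sketch, the sitemap namespace handling and the `WebBaseLoader` usage are assumptions about the current API; only URLs whose `<lastmod>` matches the desired prefixes are kept:

```python
import requests
from xml.etree import ElementTree

from langchain.document_loaders import WebBaseLoader

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
sitemap = requests.get("https://langchain.readthedocs.io/sitemap.xml").content
tree = ElementTree.fromstring(sitemap)

urls = []
for url_el in tree.findall("sm:url", NS):
    loc = url_el.findtext("sm:loc", namespaces=NS)
    lastmod = url_el.findtext("sm:lastmod", default="", namespaces=NS)
    if lastmod.startswith(("2023-", "2022-12-")):  # the proposed filter_modified_dates
        urls.append(loc)

documents = WebBaseLoader(urls).load()
```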
### Motivation
Provide enhanced filtering on larger sites
### Your contribution
Provide enhanced filtering on larger sites | Sitemap - add filtering by modified date | https://api.github.com/repos/langchain-ai/langchain/issues/5280/comments | 1 | 2023-05-26T04:52:49Z | 2023-09-10T16:11:47Z | https://github.com/langchain-ai/langchain/issues/5280 | 1,726,903,889 | 5,280 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.180
google-cloud-aiplatform==1.25.0
Have Google Cloud CLI and ran and logged in using `gcloud auth login`
Running locally and online in Google Colab
### Who can help?
@hwchase17 @hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
https://colab.research.google.com/drive/19QGMptiCn49fu4i5ZQ0ygfR74ktQFQlb?usp=sharing
Unexpected behavior: `field "credentials" not yet prepared so type is still a ForwardRef, you might need to call VertexAI.update_forward_refs().` seems to appear only if you pass any credential, valid or invalid, to the VertexAI wrapper from langchain.
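For what it's worth, the workaround the error message itself suggests can be applied before constructing the model; whether `Credentials` must be passed in explicitly depends on how langchain imports it, so treat this as a sketch:

```python
from google.auth.credentials import Credentials

from langchain.llms import VertexAI

# Resolve the pydantic ForwardRef on the `credentials` field; the name is passed
# explicitly in case langchain only imports Credentials under TYPE_CHECKING.
VertexAI.update_forward_refs(Credentials=Credentials)
```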
### The error
This code should not throw `field "credentials" not yet prepared so type is still a ForwardRef, you might need to call VertexAI.update_forward_refs().`. It should either throw no errors, if the credentials, project_id, and location are correct, or throw a specific error from the `vertexai.init` call below if one of the params is wrong; but it doesn't seem to reach that call when a credential is passed in.
```
vertexai.init(project=project_id,location=location,credentials=credentials,)
``` | Issue Passing in Credential to VertexAI model | https://api.github.com/repos/langchain-ai/langchain/issues/5279/comments | 0 | 2023-05-26T04:34:54Z | 2023-05-26T15:31:04Z | https://github.com/langchain-ai/langchain/issues/5279 | 1,726,889,243 | 5,279 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.180
python==3.10
google-cloud-aiplatform==1.25.0
### Who can help?
@hwc
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
It's debatable whether this is a bug or a missing feature, but I'd argue that the Vertex implementation is missing an important element, even though I'm excited to have the support now.
Using the [VertexAI documentation for chat](https://cloud.google.com/vertex-ai/docs/generative-ai/chat/test-chat-prompts), you can initialise the chat model like the below (emphasis mine).
The list of "examples" functions as a separate instruction (few-shot), not as part of the chat history. This is different from how OpenAI does it.
The current langchain implementation doesn't seem to have an option to submit examples, instead combining all messages into the chat history. That would lead to unexpected results if you used it for your examples.
```python
def chat_question(context=None, examples=[], chat_instruction=None):
    chat_model = ChatModel.from_pretrained("chat-bison@001")
    parameters = {
        "temperature": 0.0,
        "max_output_tokens": 300,
        "top_p": 0.3,
        "top_k": 3,
    }
    chat = chat_model.start_chat(
        context=context,
        examples=examples,  # <-- emphasis mine
    )
    response = chat.send_message(chat_instruction, **parameters)
    return response
```
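A sketch of what the requested langchain-side API could look like (purely a proposal; `examples` is not an existing `ChatVertexAI` parameter today):

```python
from langchain.chat_models import ChatVertexAI
from langchain.schema import AIMessage, HumanMessage

chat = ChatVertexAI(
    temperature=0.0,
    # Proposed: few-shot examples kept separate from the running chat history,
    # mapping onto the `examples` argument of the Vertex SDK's start_chat().
    examples=[(HumanMessage(content="2 + 2?"), AIMessage(content="4"))],
)
```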
### Expected behavior
Allow for a set of examples to be passed in when setting up the ChatVertexAI or when using the chat() function.
Apologies if I've missed a way to do this. | VertexAI ChatModel implementation misses few-shot "examples" | https://api.github.com/repos/langchain-ai/langchain/issues/5278/comments | 1 | 2023-05-26T04:02:46Z | 2023-09-15T22:13:02Z | https://github.com/langchain-ai/langchain/issues/5278 | 1,726,867,648 | 5,278 |
[
"langchain-ai",
"langchain"
] | ### System Info
Cannot specify both model and engine
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. I create `ClientChatOpenAI` with the code below:
```python
"""Azure OpenAI chat wrapper."""
from __future__ import annotations
import logging
from typing import Any, Dict
from pydantic import root_validator
from langchain.chat_models.openai import ChatOpenAI
from langchain.utils import get_from_dict_or_env
logger = logging.getLogger(__name__)
class ClientChatOpenAI(ChatOpenAI):
deployment_name: str = ""
openai_api_base: str = ""
openai_api_key: str = ""
openai_organization: str = ""
@root_validator()
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that api key and python package exists in environment."""
openai_api_key = get_from_dict_or_env(
values,
"openai_api_key",
"OPENAI_API_KEY",
)
openai_api_base = get_from_dict_or_env(
values,
"openai_api_base",
"OPENAI_API_BASE",
)
openai_organization = get_from_dict_or_env(
values,
"openai_organization",
"OPENAI_ORGANIZATION",
default="",
)
try:
import openai
openai.api_base = openai_api_base
openai.api_key = openai_api_key
if openai_organization:
openai.organization = openai_organization
except ImportError:
raise ValueError(
"Could not import openai python package. "
"Please install it with `pip install openai`."
)
try:
values["client"] = openai.ChatCompletion
except AttributeError:
raise ValueError(
"`openai` has no `ChatCompletion` attribute, this is likely "
"due to an old version of the openai package. Try upgrading it "
"with `pip install --upgrade openai`."
)
if values["n"] < 1:
raise ValueError("n must be at least 1.")
if values["n"] > 1 and values["streaming"]:
raise ValueError("n must be 1 when streaming.")
return values
@property
def _default_params(self) -> Dict[str, Any]:
"""Get the default parameters for calling OpenAI API."""
return {
**super()._default_params,
"engine": self.deployment_name,
}
```
2. Then I use it like this:
```python
chat = ClientChatOpenAI(
temperature=0,
streaming=True,
openai_api_key=os.getenv("OPENAI_CONFIG_0_API_KEY"),
openai_api_base=os.getenv("OPENAI_CONFIG_0_END_POINT"),
)
batch_messages = [
    [SystemMessage(content="You are an AI assistant."), HumanMessage(content=chat_request.prompts)],
]
result = chat.generate(batch_messages)
print(result.llm_output["token_usage"])
return result
```
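The `Cannot specify both model and engine` error comes from `_default_params` still carrying the parent class's `model` entry while the subclass adds `engine`. A sketch of a fix (assuming an Azure-style endpoint that expects `engine` instead of `model`), replacing the `_default_params` override above:

```python
@property
def _default_params(self) -> Dict[str, Any]:
    params = {**super()._default_params}
    if self.deployment_name:
        params.pop("model", None)  # the OpenAI SDK rejects `model` and `engine` together
        params["engine"] = self.deployment_name
    return params
```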
### Expected behavior
I think the code is correct, so it should not raise `Cannot specify both model and engine`. | when i create ClientChatOpenAI error | https://api.github.com/repos/langchain-ai/langchain/issues/5277/comments | 1 | 2023-05-26T03:42:55Z | 2023-09-10T16:11:53Z | https://github.com/langchain-ai/langchain/issues/5277 | 1,726,855,450 | 5,277 |
[
"langchain-ai",
"langchain"
] | ### Feature request
In the JS SDK of Milvus, there is a function to query documents from an existing collection, while in the Python SDK this function is not available. Instead, the collection can only be constructed from documents, in the following way:
```python
vector_db = Milvus.from_documents(
    docs,
    embeddings,
    connection_args={"host": "127.0.0.1", "port": "19530"},
)
```
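A sketch of what querying an already-built collection might look like by instantiating the class directly (an assumption about the constructor; parameter names mirror `from_documents`):

```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Milvus

# assumption: the collection "my_collection" already exists in Milvus
vector_db = Milvus(
    embedding_function=OpenAIEmbeddings(),
    collection_name="my_collection",
    connection_args={"host": "127.0.0.1", "port": "19530"},
)
docs = vector_db.similarity_search("my question")
```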
### Motivation
I cannot ask multiple questions against an existing collection without rebuilding it.
### Your contribution
no | python SDK can't query documents from an existing collection | https://api.github.com/repos/langchain-ai/langchain/issues/5276/comments | 2 | 2023-05-26T03:40:09Z | 2023-06-01T00:28:00Z | https://github.com/langchain-ai/langchain/issues/5276 | 1,726,853,363 | 5,276 |
[
"langchain-ai",
"langchain"
] | ### System Info
When I initialise ChatVertexAI in a FastAPI app, the thread pool never returns to idle, blocking the server with the error below:
`E0526 10:18:51.289447000 4300375424 thread_pool.cc:230] Waiting for thread pool to idle before forking`
on langchain 0.0.180
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Initialise ChatVertexAI in a fastapi app. ChatOpenAI works fine.
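A minimal repro sketch (assumptions: module-level initialisation, served with a forking worker model such as `uvicorn --workers 2`):

```python
from fastapi import FastAPI
from langchain.chat_models.vertexai import ChatVertexAI

app = FastAPI()
chat = ChatVertexAI()  # opens a gRPC channel before the server forks workers

@app.get("/ping")
def ping():
    return {"ok": True}
```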
### Expected behavior
The app should start and serve requests without the gRPC thread-pool error. | When initializing ChatVertexAI fastapi thread pool becomes unaccessible | https://api.github.com/repos/langchain-ai/langchain/issues/5275/comments | 2 | 2023-05-26T00:48:15Z | 2023-09-10T16:11:57Z | https://github.com/langchain-ai/langchain/issues/5275 | 1,726,669,439 | 5,275
[
"langchain-ai",
"langchain"
] | https://github.com/hwchase17/langchain/blob/7652d2abb01208fd51115e34e18b066824e7d921/langchain/agents/mrkl/output_parser.py#L47
Due to the line above, the `ShellTool` fails when used with the `ZeroShotAgent`. When using `langchain.OpenAI` as the `llm`, I encountered a scenario where ChatGPT provides a string surrounded by single quotes for `Action Input:`. This causes the ShellTool not to recognize the input command, because the surrounding single quotes aren't stripped (I get a "command not found" error). This could easily be fixed by also stripping single quotes from `action_input`:
```
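# also strip single quotes, since the LLM sometimes wraps Action Input in them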
return AgentAction(action, action_input.strip(" ").strip('"').strip("'"), text)
``` | ZeroShotAgent fails with ShellTool due to quotes in llm output | https://api.github.com/repos/langchain-ai/langchain/issues/5271/comments | 3 | 2023-05-25T22:18:12Z | 2023-10-08T16:06:56Z | https://github.com/langchain-ai/langchain/issues/5271 | 1,726,558,628 | 5,271 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.180
google-cloud-aiplatform==1.25.0
SQLAlchemy==2.0.15
duckdb==0.8.0
duckdb-engine==0.7.3
Running inside GCP Vertex AI Notebook (Jupyter Lab essentially jupyterlab==3.4.8)
python 3.7
### Who can help?
@Jflick58
@lkuligin
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create the Vertex AI LLM (using latest version of LangChain)
```python
from langchain.llms import VertexAI

palmllm = VertexAI(model_name='text-bison@001',
                   max_output_tokens=256,
                   temperature=0.2,
                   top_p=0.1,
                   top_k=40,
                   verbose=True)
```
2. Setup the db engine for duckdb in this case
`engine = create_engine("duckdb:///dw.db")`
3. Then create the chain using SQLDatabaseChain (note the use of use_query_checker=True):
```python
# Setup the DB
db = SQLDatabase(engine=engine, metadata=MetaData(bind=engine), include_tables=[table_name])
# Setup the chain
db_chain = SQLDatabaseChain.from_llm(palmllm, db, verbose=True, use_query_checker=True, prompt=PROMPT, return_intermediate_steps=True, top_k=3)
```
4. Run a query against the chain. Notice the SQLQuery output: it is as if the chain is trying to execute "The query is correct." as SQL:
```
> Entering new SQLDatabaseChain chain...
How many countries are there
SQLQuery:The query is correct.
```
This is the error returned:
```
ProgrammingError: (duckdb.ParserException) Parser Error: syntax error at or near "The"
LINE 1: The query is correct.
        ^
[SQL: The query is correct.]
(Background on this error at: https://sqlalche.me/e/14/f405)
```
IMPORTANT:
- If I remove the "use_query_checker=True" then everything works well.
- If I use the OpenAI LLM and don't change anything (except the LLM), then it works with the "use_query_checker=True" setting.
This relates to [#5049](https://github.com/hwchase17/langchain/pull/5049)
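A guess at what is happening (based on the trace above): the query-checker prompt asks the LLM to reproduce or rewrite the query, and PaLM replies with commentary ("The query is correct.") instead of SQL, which the chain then executes verbatim. A hedged sketch of a guard that could be applied around the checker output (`checked_sql` and `sql_cmd` are hypothetical names):

```python
checked_sql = query_checker_output.strip()
if not checked_sql.lower().startswith(("select", "with")):
    checked_sql = sql_cmd  # the checker returned prose, so keep the original query
```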
### Expected behavior
I believe the intention of that flag "use_query_checker=True" is to validate the SQL and allow the chain to recover from a simple syntax error. | use_query_checker for VertexAI fails | https://api.github.com/repos/langchain-ai/langchain/issues/5270/comments | 5 | 2023-05-25T21:22:26Z | 2023-10-05T16:09:44Z | https://github.com/langchain-ai/langchain/issues/5270 | 1,726,507,623 | 5,270 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version: 0.0.180
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [x] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I'm running the example from docs: https://python.langchain.com/en/latest/modules/agents/toolkits/examples/pandas.html.
`agent.run("how many people are 28 years old?")`
gives:
```
> Entering new AgentExecutor chain...
Thought: I need to use the `df` dataframe to find how many people are 28 years old.
Action: python_repl_ast
Action Input: df['Age'] == 28
Observation: 0
Thought: There are no people 28 years old.
Final Answer: 0
```
In other cases, the Action Input the LLM calculates is correct, but the observation (result of applying this action on the dataframe) is incorrect. This makes me believe that the LLM isn't at fault here.
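For reference, the input that would produce the expected count (a minimal sketch; `df` is the Titanic dataframe from the linked notebook):

```python
(df['Age'] == 28).sum()  # boolean mask summed -> number of 28-year-olds
```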
### Expected behavior
Should return 25. | pandas dataframe agent generates correct Action Input, but returns incorrect result | https://api.github.com/repos/langchain-ai/langchain/issues/5269/comments | 11 | 2023-05-25T21:03:00Z | 2024-06-04T21:03:44Z | https://github.com/langchain-ai/langchain/issues/5269 | 1,726,486,276 | 5,269 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi everybody,
I'm working with an LLM setup inspired from @pelyhe 's implementation #4573 .
It uses a RetrievalQA that queries a persistent embedded ChromaDB, then feeds it into a ConversationalChatAgent and then an AgentExecutor.
Currently, this setup works only for basic situations that have nothing to do with documents. Once I ask it something document-related, it gives an empty response. I have a nagging suspicion I've simply wired things up incorrectly, but it's not clear how to fix it.
```python
@st.cache_resource
def load_agent():
    vectorstore = Chroma(persist_directory=CHROMA_DIR)
    basic_prompt_template = """If the context is not relevant,
please answer the question by using your own knowledge about the topic.

###Context:
{context}

###Human:
{question}

###Assistant:
"""
    prompt = PromptTemplate(
        template=basic_prompt_template, input_variables=["context", "question"]
    )
    system_msg = "You are a helpful assistant."
    chain_type_kwargs = {"prompt": prompt}

    # Time to initialize the LLM, as late as possible so everything not requiring the LLM instance to fail fast
    llm = GPT4All(
        model=MODEL,
        verbose=True,
    )

    # Initialise QA chain for document-relevant queries
    qa = RetrievalQA.from_chain_type(
        llm=llm,
        chain_type="stuff",
        retriever=vectorstore.as_retriever(),
        chain_type_kwargs=chain_type_kwargs,
    )
    tools = [
        Tool(
            name="Document tool",
            func=qa.run,
            description="useful for when you need to answer questions from documents.",
        ),
    ]
    agent = ConversationalChatAgent.from_llm_and_tools(
        llm=llm, tools=tools, system_message=system_msg, verbose=True
    )
    return AgentExecutor.from_agent_and_tools(
        agent=agent,
        tools=tools,
        verbose=True,
        memory=ConversationBufferMemory(
            memory_key="chat_history", return_messages=True
        ),
    )


agent = load_agent()

###########################
# Streamlit UI operation. #
###########################

if "generated" not in st.session_state:
    st.session_state["generated"] = []
if "past" not in st.session_state:
    st.session_state["past"] = []


def get_text():
    input_text = st.text_input(label="", key="question")
    return input_text


user_input = get_text()

if user_input:
    try:
        output = agent.run(input=user_input)
    except ValueError as e:
        output = str(e)
        if not output.startswith("Could not parse LLM output: "):
            raise Exception(output)
        output = output.removeprefix("Could not parse LLM output: ").removesuffix("`")
    st.session_state.past.append(user_input)
    st.session_state.generated.append(output)

if st.session_state["generated"]:
    for i in range(len(st.session_state["generated"]) - 1, -1, -1):
        message(st.session_state["generated"][i], key=str(i))
        message(st.session_state["past"][i], is_user=True, key=str(i) + "_user")
```
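One thing worth double-checking (an assumption, not a confirmed fix): the vectorstore is opened without an `embedding_function`, so queries may not be embedded the same way the documents were at ingest time. A sketch:

```python
# `SomeEmbeddings` is a placeholder for whatever embedding class produced the persisted index
vectorstore = Chroma(
    persist_directory=CHROMA_DIR,
    embedding_function=SomeEmbeddings(),  # must match the ingest-time embeddings
)
```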
### Suggestion:
_No response_ | Issue: RetrievalQA -> ConversationalChatAgent -> AgentExecutor gives no response if document-related | https://api.github.com/repos/langchain-ai/langchain/issues/5266/comments | 11 | 2023-05-25T19:58:01Z | 2023-09-18T16:10:56Z | https://github.com/langchain-ai/langchain/issues/5266 | 1,726,411,036 | 5,266 |
[
"langchain-ai",
"langchain"
] | ### System Info
```
langchain==0.0.180
Python 3.11.3 (tags/v3.11.3:f3909b8, Apr 4 2023, 23:49:59) [MSC v.1934 64 bit (AMD64)] on win32
Windows 11
```
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Python Code
```
from langchain.document_loaders import UnstructuredMarkdownLoader
markdown_path = r"Pyspy.md"
loader = UnstructuredMarkdownLoader(markdown_path)
data = loader.load()
```
Markdown file `Pyspy.md`:
````
```
.pip/bin/py-spy top -p 70
```
````
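A possible workaround while this is investigated (a sketch, assuming the file is plain UTF-8 text and unstructured's parsing can be bypassed):

```python
from langchain.document_loaders import TextLoader

loader = TextLoader("Pyspy.md", encoding="utf-8")
data = loader.load()  # List[Document], no filetype detection involved
```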
### Expected behavior
It should return a `List[Document]` in `data`. | UnstructuredMarkdownLoader resulting in `zipfile.BadZipFile: File is not a zip file` | https://api.github.com/repos/langchain-ai/langchain/issues/5264/comments | 10 | 2023-05-25T18:59:18Z | 2023-11-29T17:55:26Z | https://github.com/langchain-ai/langchain/issues/5264 | 1,726,337,382 | 5,264
[
"langchain-ai",
"langchain"
] | ```
~\Anaconda3\lib\site-packages\langchain\memory\vectorstore.py in save_context(self, inputs, outputs)
67 """Save context from this conversation to buffer."""
68 documents = self._form_documents(inputs, outputs)
---> 69 self.retriever.add_documents(documents)
70
71 def clear(self) -> None:
~\Anaconda3\lib\site-packages\langchain\vectorstores\base.py in add_documents(self, documents, **kwargs)
413 def add_documents(self, documents: List[Document], **kwargs: Any) -> List[str]:
414 """Add documents to vectorstore."""
--> 415 return self.vectorstore.add_documents(documents, **kwargs)
416
417 async def aadd_documents(
~\Anaconda3\lib\site-packages\langchain\vectorstores\base.py in add_documents(self, documents, **kwargs)
60 texts = [doc.page_content for doc in documents]
61 metadatas = [doc.metadata for doc in documents]
---> 62 return self.add_texts(texts, metadatas, **kwargs)
63
64 async def aadd_documents(
~\Anaconda3\lib\site-packages\langchain\vectorstores\faiss.py in add_texts(self, texts, metadatas, ids, **kwargs)
150 # Embed and create the documents.
151 embeddings = [self.embedding_function(text) for text in texts]
--> 152 return self.__add(texts, embeddings, metadatas=metadatas, ids=ids, **kwargs)
153
154 def add_embeddings(
~\Anaconda3\lib\site-packages\langchain\vectorstores\faiss.py in __add(self, texts, embeddings, metadatas, ids, **kwargs)
117 if self._normalize_L2:
118 faiss.normalize_L2(vector)
--> 119 self.index.add(vector)
120 # Get list of index, id, and docs.
121 full_info = [(starting_len + i, ids[i], doc) for i, doc in enumerate(documents)]
~\Anaconda3\lib\site-packages\faiss\class_wrappers.py in replacement_add(self, x)
226
227 n, d = x.shape
--> 228 assert d == self.d
229 x = np.ascontiguousarray(x, dtype='float32')
230 self.add_c(n, swig_ptr(x))
AssertionError:
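# Editor's note (assumption): `assert d == self.d` fails because the FAISS index
# was built with a different embedding dimensionality than VertexAIEmbeddings
# produces; rebuilding the index with the same embedding model on both the
# write and read paths should avoid the mismatch.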
``` | Assertion Error when using VertexAIEmbeddings with faiss vectorstore | https://api.github.com/repos/langchain-ai/langchain/issues/5262/comments | 6 | 2023-05-25T18:28:41Z | 2023-12-20T19:12:22Z | https://github.com/langchain-ai/langchain/issues/5262 | 1,726,293,593 | 5,262 |
[
"langchain-ai",
"langchain"
] | ```
~\Anaconda3\lib\site-packages\langchain\chat_models\base.py in generate_prompt(self, prompts, stop, callbacks)
141 ) -> LLMResult:
142 prompt_messages = [p.to_messages() for p in prompts]
--> 143 return self.generate(prompt_messages, stop=stop, callbacks=callbacks)
144
145 async def agenerate_prompt(
~\Anaconda3\lib\site-packages\langchain\chat_models\base.py in generate(self, messages, stop, callbacks)
89 except (KeyboardInterrupt, Exception) as e:
90 run_manager.on_llm_error(e)
---> 91 raise e
92 llm_output = self._combine_llm_outputs([res.llm_output for res in results])
93 generations = [res.generations for res in results]
~\Anaconda3\lib\site-packages\langchain\chat_models\base.py in generate(self, messages, stop, callbacks)
81 )
82 try:
---> 83 results = [
84 self._generate(m, stop=stop, run_manager=run_manager)
85 if new_arg_supported
~\Anaconda3\lib\site-packages\langchain\chat_models\base.py in <listcomp>(.0)
82 try:
83 results = [
---> 84 self._generate(m, stop=stop, run_manager=run_manager)
85 if new_arg_supported
86 else self._generate(m, stop=stop)
~\Anaconda3\lib\site-packages\langchain\chat_models\vertexai.py in _generate(self, messages, stop, run_manager)
123 for pair in history.history:
124 chat._history.append((pair.question.content, pair.answer.content))
--> 125 response = chat.send_message(question.content)
126 text = self._enforce_stop_words(response.text, stop)
127 return ChatResult(generations=[ChatGeneration(message=AIMessage(content=text))])
~\Anaconda3\lib\site-packages\vertexai\language_models\_language_models.py in send_message(self, message, max_output_tokens, temperature, top_k, top_p)
676 ]
677
--> 678 prediction_response = self._model._endpoint.predict(
679 instances=[prediction_instance],
680 parameters=prediction_parameters,
~\Anaconda3\lib\site-packages\google\cloud\aiplatform\models.py in predict(self, instances, parameters, timeout, use_raw_predict)
1544 )
1545 else:
-> 1546 prediction_response = self._prediction_client.predict(
1547 endpoint=self._gca_resource.name,
1548 instances=instances,
~\Anaconda3\lib\site-packages\google\cloud\aiplatform_v1\services\prediction_service\client.py in predict(self, request, endpoint, instances, parameters, retry, timeout, metadata)
600
601 # Send the request.
--> 602 response = rpc(
603 request,
604 retry=retry,
~\Anaconda3\lib\site-packages\google\api_core\gapic_v1\method.py in __call__(self, timeout, retry, *args, **kwargs)
111 kwargs["metadata"] = metadata
112
--> 113 return wrapped_func(*args, **kwargs)
114
115
~\Anaconda3\lib\site-packages\google\api_core\grpc_helpers.py in error_remapped_callable(*args, **kwargs)
72 return callable_(*args, **kwargs)
73 except grpc.RpcError as exc:
---> 74 raise exceptions.from_grpc_error(exc) from exc
75
76 return error_remapped_callable
InternalServerError: 500 Internal error encountered.
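# Editor's note (assumption): a 500 from the PaLM endpoint is server-side; the
# only client-side mitigation is retrying the chain call with backoff, e.g.
# tenacity's retry_if_exception_type(google.api_core.exceptions.InternalServerError).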
``` | Internal error encountered when using VertexAI in ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/5258/comments | 1 | 2023-05-25T16:38:50Z | 2023-09-10T16:12:07Z | https://github.com/langchain-ai/langchain/issues/5258 | 1,726,151,388 | 5,258 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Would be amazing to scan and fetch all the contents from the GitHub API, such as PRs, Issues and Discussions.
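A rough sketch of what an issue loader could do (hypothetical function; the REST endpoint is GitHub's real one):

```python
import requests
from langchain.docstore.document import Document

def load_issues(owner: str, repo: str, token: str) -> list:
    url = f"https://api.github.com/repos/{owner}/{repo}/issues"
    resp = requests.get(url, headers={"Authorization": f"token {token}"}, params={"state": "all"})
    resp.raise_for_status()
    return [
        Document(page_content=i.get("body") or "", metadata={"title": i["title"], "url": i["html_url"]})
        for i in resp.json()
    ]
```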
### Motivation
This would allow asking questions about the history of the project, issues that other users might have found, and much more!
### Your contribution
Not really a python developer here, would take me a while to figure out all the changes required. | Github integration | https://api.github.com/repos/langchain-ai/langchain/issues/5257/comments | 11 | 2023-05-25T16:27:21Z | 2023-11-29T21:21:01Z | https://github.com/langchain-ai/langchain/issues/5257 | 1,726,136,467 | 5,257 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Integration with LocalAI and with its extended endpoints to download models from the gallery.
### Motivation
LocalAI is a self-hosted OpenAI drop-in replacement with support for multiple model families: https://github.com/go-skynet/LocalAI
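Since it is a drop-in replacement, much of this may already work by overriding the OpenAI base URL (a sketch; the port and model name are assumptions about a local LocalAI setup):

```python
from langchain.llms import OpenAI

llm = OpenAI(
    openai_api_base="http://localhost:8080/v1",  # LocalAI endpoint (assumed default)
    openai_api_key="not-needed",                 # LocalAI ignores the key by default
    model_name="ggml-gpt4all-j",                 # whatever model the server exposes
)
```

Native integration would still be needed for the extended endpoints such as the model gallery.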
### Your contribution
Not a Python guru, so this might take me a few cycles. | Add integration for LocalAI | https://api.github.com/repos/langchain-ai/langchain/issues/5256/comments | 7 | 2023-05-25T16:25:18Z | 2024-05-03T16:04:00Z | https://github.com/langchain-ai/langchain/issues/5256 | 1,726,133,919 | 5,256
[
"langchain-ai",
"langchain"
] | ### Question
Will there be future updates where we are allowed to customize answer_gen_llm when using FlareChain?
### Context
In the [documentation](https://python.langchain.com/en/latest/modules/chains/examples/flare.html) it says that:
In order to set up this chain, we will need three things:
- An LLM to generate the answer
- An LLM to generate hypothetical questions to use in retrieval
- A retriever to use to look up answers for
However, the example code only allows specification for the question_gen_llm, not the answer_gen_llm.
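One possible workaround (an assumption based on the `response_chain` field visible in the source linked below; the answer model must still return logprobs):

```python
from langchain import OpenAI
from langchain.chains import FlareChain

flare = FlareChain.from_llm(llm, retriever=retriever, max_generation_len=164, min_prob=0.3)
flare.response_chain.llm = OpenAI(
    model_name="text-davinci-002",   # hypothetical alternative model
    max_tokens=32,
    model_kwargs={"logprobs": 1},    # logprobs are required by FLARE's confidence check
    temperature=0,
)
```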
After referencing the [code](https://github.com/hwchase17/langchain/blob/9c0cb90997db9eb2e2a736df458d39fd7bec8ffb/langchain/chains/flare/base.py) for FlareChain, it seems that the answer_gen_llm is initialized as `OpenAI(max_tokens=32, model_kwargs={"logprobs": 1}, temperature=0)`, which defaults to `"text-davinci-003"` as no model_name is specified. | Inconsistent documentation for langchain.chains.FlareChain | https://api.github.com/repos/langchain-ai/langchain/issues/5255/comments | 2 | 2023-05-25T16:15:17Z | 2023-09-10T16:12:14Z | https://github.com/langchain-ai/langchain/issues/5255 | 1,726,121,249 | 5255
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
hi, team:
I created Chain_A and Chain_B and set streaming=True for both of them.
```python
overall_chain = SequentialChain(
    chains=[chain_A, chain_B],
    input_variables=["era", "title"],
    output_variables=["synopsis", "review"],
    verbose=True)
```
However, the streaming does not work.
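For reference, this is how each sub-chain's LLM is configured for streaming (a sketch with the stock stdout handler; the sub-chains themselves are unchanged):

```python
from langchain.chat_models import ChatOpenAI
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

llm = ChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], temperature=0)
```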
### Suggestion:
_No response_ | Issue: <Streaming mode not work for Sequential Chains> | https://api.github.com/repos/langchain-ai/langchain/issues/5254/comments | 2 | 2023-05-25T15:25:01Z | 2023-09-10T16:12:18Z | https://github.com/langchain-ai/langchain/issues/5254 | 1,726,041,996 | 5,254 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I am using LangChain + the OpenAI API to create a chatbot for private data. I can use the LangChain directory loader class to load files from a directory, but if any new files are added to that directory, how can they be loaded automatically?
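A sketch of one way to do this outside of LangChain (an assumption: the `watchdog` package is acceptable), indexing files as they appear instead of re-running the DirectoryLoader:

```python
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler
from langchain.document_loaders import TextLoader

class NewFileHandler(FileSystemEventHandler):
    def __init__(self, vectorstore):
        self.vectorstore = vectorstore

    def on_created(self, event):
        if not event.is_directory:
            docs = TextLoader(event.src_path).load()
            self.vectorstore.add_documents(docs)  # incremental indexing

observer = Observer()
observer.schedule(NewFileHandler(vectorstore), path="./data", recursive=True)
observer.start()
```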
### Motivation
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/file_directory.html
### Your contribution
If this can be solved, it will make LangChain much more useful for companies running internal knowledge-base sharing. | how to monitoring the new files after directory loader class used | https://api.github.com/repos/langchain-ai/langchain/issues/5252/comments | 3 | 2023-05-25T14:33:02Z | 2023-09-14T16:09:01Z | https://github.com/langchain-ai/langchain/issues/5252 | 1,725,950,539 | 5,252
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hello, would it be possible to add topics (tags) to the repositories? It would make them easier to find and organize, and it's also useful for external tools that fetch the GitHub API to track repos! Here is an example from HuggingFace:
.
<img width="538" alt="Capture d’écran 2023-05-25 à 15 58 11" src="https://github.com/hwchase17/langchain/assets/90518536/8a0029ad-6c44-426b-bc9d-2b01fcad46a7">
.
And here is a more specific screenshot in case I'm using the wrong words (sorry, English isn't my first language):
.
<img width="1440" alt="Capture d’écran 2023-05-25 à 16 03 40" src="https://github.com/hwchase17/langchain/assets/90518536/5aa4574d-1ae4-4bca-8ad5-044f3ce4a3cf">
### Suggestion:
I think you already know how: on the repo page, under About > Topics, add tags like "python", "ai", "artificial intelligence", etc. Thank you! 😃 | Issue: Add topics to the GitHub repos | https://api.github.com/repos/langchain-ai/langchain/issues/5249/comments | 4 | 2023-05-25T14:05:59Z | 2023-12-09T16:06:41Z | https://github.com/langchain-ai/langchain/issues/5249 | 1,725,901,643 | 5,249
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
`!pip3 install langchain==0.0.179 boto3`
After installing langchain with the above command and trying to run [the SageMaker endpoint example](https://python.langchain.com/en/latest/modules/models/llms/integrations/sagemaker.html), I get the error below:
`ImportError: cannot import name 'LLMContentHandler' from 'langchain.llms.sagemaker_endpoint' (/opt/conda/lib/python3.10/site-packages/langchain/llms/sagemaker_endpoint.py)`
Am I missing something?
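A workaround that may apply (an assumption: the class was renamed between releases, with older releases exposing `ContentHandlerBase` instead):

```python
try:
    from langchain.llms.sagemaker_endpoint import LLMContentHandler
except ImportError:
    from langchain.llms.sagemaker_endpoint import ContentHandlerBase as LLMContentHandler
```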
### Suggestion:
_No response_ | Issue: import 'LLMContentHandler' from 'langchain.llms.sagemaker_endpoint failing | https://api.github.com/repos/langchain-ai/langchain/issues/5245/comments | 3 | 2023-05-25T13:28:29Z | 2023-09-14T16:09:07Z | https://github.com/langchain-ai/langchain/issues/5245 | 1,725,833,687 | 5,245 |
[
"langchain-ai",
"langchain"
] | ### Feature request
For a deployment behind a corporate proxy, it's useful to be able to access the API by specifying an explicit proxy.
### Motivation
Currently it's possible to do this by setting the environment variables http_proxy / https_proxy, which proxies the whole Python interpreter. However, this then prevents access to other internal servers: requests to other network resources (e.g. a vector database on a different server, corporate S3 storage, etc.) should not go through the proxy. We are working with the OpenAI API, and currently we cannot both access it and our qdrant database on another server.
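Since the openai client accepts a proxy setting, the wrapper could simply forward it (a sketch of the proposed parameter, not a merged API):

```python
from langchain.llms import OpenAI

llm = OpenAI(openai_proxy="http://corporate-proxy:3128")  # proposed/hypothetical parameter
```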
### Your contribution
Since the openai python package supports the proxy parameter, this is relatively easy to implement for the OpenAI API. I'll submit a PR. | Add possibility to set a proxy for openai API access | https://api.github.com/repos/langchain-ai/langchain/issues/5243/comments | 0 | 2023-05-25T13:00:09Z | 2023-05-25T16:50:27Z | https://github.com/langchain-ai/langchain/issues/5243 | 1,725,784,636 | 5,243 |
[
"langchain-ai",
"langchain"
] | ### System Info
I am using langchain 0.0.176 and hitting the error `'numpy._DTypeMeta' object is not subscriptable` when using Chroma DB for any operation.
### Who can help?
@hwchase17 - please help me out with this error. Do I need to upgrade the version of LangChain to overcome this problem?
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Code snippets producing this behavior:

```python
# 1.
docsearch = Chroma.from_texts(texts, embeddings, metadatas=[{"source": str(i)} for i in range(len(texts))]).as_retriever()

# 2.
docsearch = Chroma.from_texts(texts, embeddings)
query = "...."
docs = docsearch.similarity_search(query)

# 3.
db1 = Chroma.from_documents(docs_1, embeddings)
```
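This error usually points at an older numpy that doesn't support subscripted dtype generics (an assumption based on the error text); upgrading may help: `pip install --upgrade numpy chromadb`.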
### Expected behavior
Should be able to use ChromaDb as a retriever without hitting any error. | 'numpy._DTypeMeta' object is not subscriptable | https://api.github.com/repos/langchain-ai/langchain/issues/5242/comments | 2 | 2023-05-25T12:43:16Z | 2023-09-12T16:13:19Z | https://github.com/langchain-ai/langchain/issues/5242 | 1,725,751,778 | 5,242 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi everybody, does anyone know if there is a way to make a POST request from a custom agent/tool? The idea is that when the user asks for a specific thing, the agent intercepts it and the custom tool performs it. I can't find anything useful in the documentation, and when I try it, it doesn't work.
In my case I have:
```python
class FlowTool(BaseTool):
    name = "Call To Max"
    description = "use the run function when the user ask to make a call to Max. You don't need any parameter"

    def _run(self):
        url = "https://ex.mex.com/web"
        data = {
            "prova": 'ciao'
        }
        response = requests.post(url, json=data, verify=False)
        return 'done'

    def _arun(self, radius: int):
        raise NotImplementedError("This tool does not support async")
```
```python
tools = [FlowTool()]

agent = initialize_agent(
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    tools=tools,
    llm=llm,
    verbose=True,
    max_iterations=3,
    early_stopping_method='generate',
)
```
`agent("Can you make a call to mex?")`
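One likely issue (an assumption from how `BaseTool` dispatches calls): `_run` is always invoked with the tool's input string, so a zero-argument `_run(self)` will fail. A sketch of the corrected signatures:

```python
class FlowTool(BaseTool):
    name = "Call To Max"
    description = "use this tool when the user asks to make a call to Max; the input can be any string"

    def _run(self, query: str) -> str:  # the agent always passes an Action Input string
        response = requests.post("https://ex.mex.com/web", json={"prova": "ciao"}, verify=False)
        return "done"

    def _arun(self, query: str):
        raise NotImplementedError("This tool does not support async")
```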
Thank you for helping me
### Suggestion:
_No response_ | Issue: How to make a request into an agent/tool | https://api.github.com/repos/langchain-ai/langchain/issues/5241/comments | 1 | 2023-05-25T12:31:15Z | 2023-09-10T16:12:38Z | https://github.com/langchain-ai/langchain/issues/5241 | 1,725,733,180 | 5,241 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
LangChain 0.0.179, hosted elasticsearch (Platinum edition)
v0.0.179 introduced Elasticsearch embeddings, great!
But it is only implemented for Elastic Cloud.
I want to be able to run embeddings on my own Elasticsearch cluster.
@jeffvestal @derickson
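A sketch of the kind of constructor that would help (hypothetical API: accepting an existing `Elasticsearch` client instead of a cloud ID):

```python
from elasticsearch import Elasticsearch
from langchain.embeddings import ElasticsearchEmbeddings

es = Elasticsearch("https://my-cluster:9200", basic_auth=("user", "pass"))
embeddings = ElasticsearchEmbeddings.from_es_connection(  # hypothetical constructor
    model_id="sentence-transformers__all-minilm-l6-v2",
    es_connection=es,
)
```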
### Suggestion:
_No response_ | Issue: ElasticsearchEmbeddings does not work on hosted elasticsearch (Platinum) | https://api.github.com/repos/langchain-ai/langchain/issues/5239/comments | 5 | 2023-05-25T12:21:19Z | 2023-05-31T07:40:33Z | https://github.com/langchain-ai/langchain/issues/5239 | 1,725,718,432 | 5,239 |
[
"langchain-ai",
"langchain"
] | ### System Info
When I try to use a ChatGPT plugin with agents as shown in the documentation, some plugins, like the Medium plugin, reach the token limit during the task and raise an error.

### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
https://colab.research.google.com/drive/1Pya_AaPucsgw__OJa0Xho1u8OI1xFqYB#scrollTo=Ri2RPTKrxF6b
### Expected behavior
Should return the ten most recent articles about AI. | Token limit reached trying to use plugin | https://api.github.com/repos/langchain-ai/langchain/issues/5237/comments | 1 | 2023-05-25T11:17:24Z | 2023-09-10T16:12:44Z | https://github.com/langchain-ai/langchain/issues/5237 | 1,725,616,952 | 5,237
[
"langchain-ai",
"langchain"
] | ### System Info
I need to use OpenAPI for calling an API, but that API needs some parameters in the body, and those values need to be taken from the user.
I need to understand how to capture the slot names that must be filled by the user. Is there any way to do this?
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
Need code
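A rough sketch of one slot-filling approach (an assumption, not a built-in feature): use a structured output parser to detect which required body parameters are still missing, then ask the user for them:

```python
from langchain.output_parsers import StructuredOutputParser, ResponseSchema

schemas = [
    ResponseSchema(name="origin", description="departure city, or null if the user has not given it"),
    ResponseSchema(name="date", description="travel date, or null if the user has not given it"),
]
parser = StructuredOutputParser.from_response_schemas(schemas)
# Feed parser.get_format_instructions() into the prompt, parse the reply,
# and re-prompt the user for any field that comes back null.
```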
### Expected behavior
Slots filling from user | Slots Filling in Langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5236/comments | 3 | 2023-05-25T10:50:38Z | 2023-09-17T13:10:59Z | https://github.com/langchain-ai/langchain/issues/5236 | 1,725,576,992 | 5,236 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Allow a user to specify a record ttl for messages/sessions persisted to dynamodb in https://github.com/hwchase17/langchain/blob/5cfa72a130f675c8da5963a11d416f553f692e72/langchain/memory/chat_message_histories/dynamodb.py#L17-L20.
### Motivation
This will allow automated purging of chat history after a specified time period.
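A sketch of what the option might look like (hypothetical parameter; DynamoDB TTL also requires the table to have a TTL attribute enabled):

```python
history = DynamoDBChatMessageHistory(
    table_name="chat-sessions",
    session_id="user-123",
    ttl=60 * 60 * 24,  # hypothetical: expire items after 24 hours
)
# internally each item would carry something like {"expireAt": int(time.time()) + ttl}
```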
### Your contribution
Maybe, depends on my available time. | Support for ttl in DynamoDBChatMessageHistory | https://api.github.com/repos/langchain-ai/langchain/issues/5235/comments | 2 | 2023-05-25T10:35:27Z | 2023-11-24T14:35:31Z | https://github.com/langchain-ai/langchain/issues/5235 | 1,725,555,032 | 5,235 |
[
"langchain-ai",
"langchain"
] | ### Feature request
There are other vector databases that support async usage in LangChain; adding Redis to that list would help programmers who use asynchronous programming in Python. I believe that with a package like aioredis, this should be easily achievable.
### Motivation
The motivation is to support Python async programmers with this feature and also to boost performance when querying the vector store and inserting data into it.
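A sketch of the surface this could add (hypothetical Redis implementations of the async methods already declared on the base `VectorStore`):

```python
docs = await redis_store.asimilarity_search("query", k=4)  # hypothetical for Redis
await redis_store.aadd_texts(["new document"])             # hypothetical for Redis
```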
### Your contribution
I can contribute by opening a PR or by testing the code once it is done. | Make Redis Vector database operations Asynchronous | https://api.github.com/repos/langchain-ai/langchain/issues/5234/comments | 3 | 2023-05-25T10:04:53Z | 2023-09-25T16:07:01Z | https://github.com/langchain-ai/langchain/issues/5234 | 1,725,509,252 | 5,234 |
[
"langchain-ai",
"langchain"
] | ### System Info
Hello,
I'm trying to use ChatVertexAI and I noticed that the following import is not working:
```python
from langchain.chat_models import ChatVertexAI
```
But this one is working correctly:
```python
from langchain.chat_models.vertexai import ChatVertexAI
```
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. install the main branch: `pip install git+https://github.com/hwchase17/langchain.git`
2. try to import `from langchain.chat_models import ChatVertexAI`
3. try to import `from langchain.chat_models.vertexai import ChatVertexAI`
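The likely fix (an assumption from the symptom): export the class from the package `__init__`, e.g.:

```python
# langchain/chat_models/__init__.py
from langchain.chat_models.vertexai import ChatVertexAI

__all__ = ["ChatOpenAI", "AzureChatOpenAI", "ChatAnthropic", "ChatGooglePalm", "ChatVertexAI"]
```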
### Expected behavior
The import `from langchain.chat_models import ChatVertexAI` should work | ChatVertexAI is not imported | https://api.github.com/repos/langchain-ai/langchain/issues/5233/comments | 2 | 2023-05-25T08:46:26Z | 2023-06-02T11:55:03Z | https://github.com/langchain-ai/langchain/issues/5233 | 1,725,368,096 | 5,233 |
[
"langchain-ai",
"langchain"
] | ### System Info
code snippet:
https://python.langchain.com/en/latest/modules/callbacks/getting_started.html?highlight=callbacks#async-callbacks
python:Python 3.9.6
langchain :Version: 0.0.178
### Who can help?
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [X] Async
### Reproduction
1. Copy the source code of the Async Callbacks example from the documentation.
The code is wrong: it contains a syntax error (`await` used outside an async function) and is missing imports.
2. After making a little fix, run it again.
The fixed code is:
```python
import asyncio
import logging
from typing import Any, Dict, List

from langchain.chat_models import ChatOpenAI
from langchain.schema import LLMResult, HumanMessage
from langchain.callbacks.base import AsyncCallbackHandler, BaseCallbackHandler


class MyCustomSyncHandler(BaseCallbackHandler):
    def on_llm_new_token(self, token: str, **kwargs) -> None:
        print(f"Sync handler being called in a `thread_pool_executor`: token: {token}")


class MyCustomAsyncHandler(AsyncCallbackHandler):
    """Async callback handler that can be used to handle callbacks from langchain."""

    async def on_llm_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
    ) -> None:
        """Run when chain starts running."""
        print("zzzz....")
        await asyncio.sleep(0.3)
        class_name = serialized["name"]
        print("Hi! I just woke up. Your llm is starting")

    async def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
        """Run when chain ends running."""
        print("zzzz....")
        await asyncio.sleep(0.3)
        print("Hi! I just woke up. Your llm is ending")


# To enable streaming, we pass in `streaming=True` to the ChatModel constructor.
# Additionally, we pass in a list with our custom handlers.
async def main():
    chat = ChatOpenAI(
        openai_api_key="xxxxxx",
        max_tokens=25,
        streaming=True,
        callbacks=[MyCustomSyncHandler(), MyCustomAsyncHandler()],
    )
    await chat.agenerate([[HumanMessage(content="Tell me a joke")]])


asyncio.run(main())
```
3. It does not behave as the documentation shows; it gets stuck as follows:

### Expected behavior
It should behave as the documentation describes. | AsyncCallbacks : Wrong document and stuck when running in terminal ,which finnal turn out to be error retry | https://api.github.com/repos/langchain-ai/langchain/issues/5229/comments | 2 | 2023-05-25T07:02:21Z | 2023-10-02T16:07:46Z | https://github.com/langchain-ai/langchain/issues/5229 | 1,725,213,622 | 5,229
[
"langchain-ai",
"langchain"
] |
I want to build a chain which can:
1. chat with a human (greetings, etc.),
2. do what `create_csv_agent` does, and
3. keep a memory.
So I was using a conversational agent for chat models together with a memory buffer.
It is able to do 1 and 3 from the list above. I also gave
`tools = [PythonAstREPLTool(locals={"df": df})]` as the tools for this agent.
But I am confused about where I should pass the dataframe `df` to the chat model, similar to how we pass it to `create_csv_agent`.
I tried putting it into the prompt via `prompt.partial`, but I got an error saying the partial method is not implemented for `ChatPromptTemplate`.
I want the chat model to know that it has access to the `df` dataframe, so that questions like "what are the top 2 issues" are answered using that dataframe.
Right now, it replies with something like: sure, I can provide the top 2 issues, but please tell me what data you want me to work on.
Can you help with this? Please let me know if you need additional information.
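A sketch of one way to do it (an assumption: mirror what `create_csv_agent` does and describe the dataframe in the agent's prefix):

```python
from langchain.agents import initialize_agent, AgentType
from langchain.tools.python.tool import PythonAstREPLTool

tools = [PythonAstREPLTool(locals={"df": df})]
prefix = (
    "You are working with a pandas dataframe named `df`. "
    f"Its first rows are:\n{df.head().to_string()}\n"
    "Use the python tool to answer questions about it."
)
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    agent_kwargs={"prefix": prefix},  # assumption: this agent type honors `prefix`
    memory=memory,
    verbose=True,
)
```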
### Suggestion:
_No response_ | Issue: Dataframe with conversation agent for chat models | https://api.github.com/repos/langchain-ai/langchain/issues/5227/comments | 2 | 2023-05-25T03:26:09Z | 2023-09-10T16:12:59Z | https://github.com/langchain-ai/langchain/issues/5227 | 1,725,018,973 | 5,227 |
[
"langchain-ai",
"langchain"
] | ### System Info
I am trying to use the File System tools to generate some boilerplate source code using OpenAI's APIs. The chain runs, but it does not write the file to the file-system.
I think it is because there's an issue with the size of the text that needs to be written to the file, so the agent fails to execute.
My code is as follows
```python
import os
from langchain.tools.file_management import *
from langchain.agents.agent_toolkits import FileManagementToolkit
from langchain.llms.openai import OpenAI
from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType
from dotenv import load_dotenv

load_dotenv()

toolkit = FileManagementToolkit()
write_files = toolkit.get_tools()[5]
list_files = toolkit.get_tools()[6]
read_files = toolkit.get_tools()[4]

llm = OpenAI(temperature=0)

tools = [
    Tool(
        name="Write Files to directory",
        func=write_files.run,
        description="useful for when you need to write files to a local file system"
    ),
    Tool(
        name="List Files in directory",
        func=list_files.run,
        description="useful for when you need to list files in a local file system"
    ),
    Tool(
        name="Read Files in directory",
        func=read_files.run,
        description="useful for when you need to read files in a local file system"
    )
]

self_write_files_git = initialize_agent(
    tools,
    llm,
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True
)

self_write_files_git.run("Generate a source code for a boilerplate Python Flask Application")
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce
1. Use the original source code shown above in the System Info section.
2. Run it in a Jupyter notebook (or similar) and observe the output.
The AgentExecutor produces the right action, but the action_input appears cut off (the JSON is not properly formatted), and as a result it likely doesn't write to the filesystem.
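A plausible cause (an assumption): the `OpenAI` wrapper's default completion limit (256 tokens) truncates the structured-chat JSON mid-string. Raising it may help:

```python
llm = OpenAI(temperature=0, max_tokens=1024)  # leave room for the full JSON action input
```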
### Expected behavior
Based on the plan that shows up, I would expect it to write the file to the file-system. | Write File action_input issues. How to handle when action input is large | https://api.github.com/repos/langchain-ai/langchain/issues/5226/comments | 6 | 2023-05-25T01:38:22Z | 2023-10-24T16:08:23Z | https://github.com/langchain-ai/langchain/issues/5226 | 1,724,948,886 | 5,226
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
How can I use callback functions in a LangChain sequential chain such as 1 -> 2 -> 3? I want to loop function 2 n times in the middle, where the output of function 2 becomes its own input on the next iteration. At the end of the loop, the output of function 2 is fed into function 3, producing the final result.
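A plain-Python sketch of the loop itself (callbacks aside, this is the simplest way to express it; `chain_1/2/3` and `n` are placeholders):

```python
out = chain_1.run(inp)
for _ in range(n):        # feed function 2 its own output n times
    out = chain_2.run(out)
result = chain_3.run(out)
```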
### Suggestion:
_No response_ | Issue:How to use callback functions in a Langchain sequential chain | https://api.github.com/repos/langchain-ai/langchain/issues/5225/comments | 6 | 2023-05-25T01:00:58Z | 2023-11-20T13:09:03Z | https://github.com/langchain-ai/langchain/issues/5225 | 1,724,924,866 | 5,225 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.176
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.chains import PALChain, load_chain  # load_chain import added; it is used below
from langchain import OpenAI
llm = OpenAI(temperature=0, max_tokens=512)
pal_chain = PALChain.from_math_prompt(llm, verbose=True)
question = "Jan has three times the number of pets as Marcia. Marcia has two more pets than Cindy. If Cindy has four pets, how many total pets do the three have?"
pal_chain.save("/Users/liang.zhang/pal_chain.yaml")
loaded_chain = load_chain("/Users/liang.zhang/pal_chain.yaml")
```
Error:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Input In [17], in <cell line: 1>()
----> 1 loaded_chain = load_chain("/Users/liang.zhang/pal_chain.yaml")
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/langchain/chains/loading.py:449, in load_chain(path, **kwargs)
447 return hub_result
448 else:
--> 449 return _load_chain_from_file(path, **kwargs)
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/langchain/chains/loading.py:476, in _load_chain_from_file(file, **kwargs)
473 config["memory"] = kwargs.pop("memory")
475 # Load the chain from the config now.
--> 476 return load_chain_from_config(config, **kwargs)
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/langchain/chains/loading.py:439, in load_chain_from_config(config, **kwargs)
436 raise ValueError(f"Loading {config_type} chain not supported")
438 chain_loader = type_to_loader_dict[config_type]
--> 439 return chain_loader(config, **kwargs)
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/langchain/chains/loading.py:234, in _load_pal_chain(config, **kwargs)
232 if "llm" in config:
233 llm_config = config.pop("llm")
--> 234 llm = load_llm_from_config(llm_config)
235 elif "llm_path" in config:
236 llm = load_llm(config.pop("llm_path"))
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/langchain/llms/loading.py:14, in load_llm_from_config(config)
12 def load_llm_from_config(config: dict) -> BaseLLM:
13 """Load LLM from Config Dict."""
---> 14 if "_type" not in config:
15 raise ValueError("Must specify an LLM Type in config")
16 config_type = config.pop("_type")
TypeError: argument of type 'NoneType' is not iterable
```
### Expected behavior
No errors should occur. | PALChain loading fails | https://api.github.com/repos/langchain-ai/langchain/issues/5224/comments | 0 | 2023-05-25T00:58:09Z | 2023-05-29T13:44:48Z | https://github.com/langchain-ai/langchain/issues/5224 | 1,724,922,616 | 5,224 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.178
python3.11
### Who can help?
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
@staticmethod
def embeddings(texts: List[Document]):
    embeddings = OpenAIEmbeddings()
    vectordb = Chroma.from_documents(texts, embeddings, persist_directory="chroma_db", collection_name="aixplora")
    return vectordb
```
ends up in `openai.error.AuthenticationError: <empty message>`
more context here: https://github.com/grumpyp/aixplora/blob/main/backend/embeddings/index_files.py
This started happening just a few hours ago, btw! It was running before, so this is possibly a bug introduced in a recent release.
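One thing to verify (an assumption: the key isn't reaching the client): pass it explicitly to rule out environment-loading issues, e.g. `OpenAIEmbeddings(openai_api_key=os.environ["OPENAI_API_KEY"])`.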
### Expected behavior
No error! - just embedding into my chroma db :) | openai.error.AuthenticationError: <empty message> | https://api.github.com/repos/langchain-ai/langchain/issues/5215/comments | 4 | 2023-05-24T22:04:59Z | 2023-09-18T16:11:06Z | https://github.com/langchain-ai/langchain/issues/5215 | 1,724,798,523 | 5,215 |
[
"langchain-ai",
"langchain"
] | Using the following script, I can only get back a maximum of 4 documents. With k=1, k=2, k=3, k=4, k=5, k=6, ..., `similarity_search_with_score` returns 1, 2, 3, 4, 4, 4, ... docs.
```python
opensearch_url = "xxxxxxxxx.com"
docsearch = OpenSearchVectorSearch.from_documents(
    docs,
    embedding=HuggingFaceEmbeddings(),
    opensearch_url=opensearch_url,
    index_name="my_index_name",
)
retrieved_docs = docsearch.similarity_search_with_score(query, k=10)
```
This only returns 4 documents even though `len(docs)` is 90+. I tried various indexes and various queries and confirmed the issue is persistent.
Find a [related issue](https://github.com/hwchase17/langchain/issues/1946) (also max out at 4 regardless of k) for Chroma.
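A possible culprit (an assumption from reading the OpenSearch vectorstore source of this era): the approximate-search query builder defaults its `size` to 4 independently of `k`. Passing it explicitly may work around the cap:

```python
retrieved_docs = docsearch.similarity_search_with_score(query, k=10, size=10)  # assumption: the `size` kwarg reaches the query builder
```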
| OpenSearch VectorStore cannot return more than 4 retrieved result. | https://api.github.com/repos/langchain-ai/langchain/issues/5212/comments | 2 | 2023-05-24T20:49:47Z | 2023-05-25T16:51:25Z | https://github.com/langchain-ai/langchain/issues/5212 | 1,724,714,949 | 5,212 |
[
"langchain-ai",
"langchain"
] | Hi, I believe this issue is related to this one: #1372
I'm using the GPT4All integration and get the following error after running `ConversationalRetrievalChain` with `AsyncCallbackManager`:
`ERROR:root:Async generation not implemented for this LLM.`
Changing to `CallbackManager` does not fix anything.
The issue is model-agnostic, i.e., I have used _ggml-gpt4all-j-v1.3-groovy.bin_ and _ggml-mpt-7b-base.bin_. The LangChain version I'm using is `0.0.179`. Any ideas how this can be potentially solved or should we just wait for a new release fixing it?
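A stopgap while async support is missing (an assumption about the cause: the GPT4All LLM lacks `_acall`): push the synchronous call onto a thread:

```python
import asyncio

result = await asyncio.get_event_loop().run_in_executor(
    None, lambda: chain({"question": question, "chat_history": []})
)
```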
### Suggestion:
Release a fix, similar to the one in #1372. | GPT4All chat error with async calls | https://api.github.com/repos/langchain-ai/langchain/issues/5210/comments | 27 | 2023-05-24T19:27:35Z | 2024-03-29T23:22:46Z | https://github.com/langchain-ai/langchain/issues/5210 | 1,724,609,382 | 5,210
[
"langchain-ai",
"langchain"
] | ### Feature request
Enable chains for Chat Models.
I spent some time looking at the following docs:
https://python.langchain.com/en/latest/modules/chains/index_examples/summarize.html
https://docs.langchain.com/docs/components/chains/index_related_chains
as well as looking into the codebase, and it seems this works only for completion models, not chat models.
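For what it's worth, the summarize-chain constructor does accept a chat model object (a sketch; whether map-reduce behaves well with it at this version is an assumption):

```python
from langchain.chat_models import ChatOpenAI
from langchain.chains.summarize import load_summarize_chain

chain = load_summarize_chain(ChatOpenAI(model_name="gpt-4", temperature=0), chain_type="map_reduce")
summary = chain.run(docs)
```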
### Motivation
I would like to use GPT-4, which is only available via the Chat Completions endpoint. I am currently building the chain manually, but I see value in native support in LangChain for a cleaner codebase and easy access for others.
### Your contribution
I can help by submitting a PR, assuming I am not missing something obvious and this isn't already supported. | Enable chains (MapReduce, Refine, ...) for Chat Models. | https://api.github.com/repos/langchain-ai/langchain/issues/5209/comments | 1 | 2023-05-24T19:09:24Z | 2023-05-24T20:38:29Z | https://github.com/langchain-ai/langchain/issues/5209 | 1,724,587,758 | 5,209
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.178
Python 3.11.2
### Who can help?
@agola11 @hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
`chroma.from_documents(texts, embeddings, persist_directory="chroma_db", collection_name="x")`
```python
query_texts = "tell me about whatsmyserp policy"
res = collection.query(
    query_texts=query_texts,
    n_results=n_results,
    where=where or None,
    where_document=where_document or None
)
```
printing res
`{'ids': [[]], 'embeddings': None, 'documents': [[]], 'metadatas': [[]], 'distances': [[0.3748544454574585]]}`
I believe it doesn't create the collection itself, or something related to that, because if I create the collection myself beforehand, it seems to work.
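One possible explanation (an assumption): querying the raw chromadb collection with `query_texts` embeds the query with Chroma's default embedding function, not the OpenAI embeddings LangChain wrote with. Querying through the LangChain wrapper keeps the two sides consistent:

```python
db = Chroma(persist_directory="chroma_db", collection_name="x", embedding_function=embeddings)
docs = db.similarity_search("tell me about whatsmyserp policy", k=4)
```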
More information and our discussion in the thread of the Chroma discord:
https://discord.com/channels/1073293645303795742/1110965198904369374
### Expected behavior
It should at least return the related documents. | Chroma integration .from_documents() isn't working | https://api.github.com/repos/langchain-ai/langchain/issues/5207/comments | 3 | 2023-05-24T18:34:51Z | 2023-09-18T16:11:11Z | https://github.com/langchain-ai/langchain/issues/5207 | 1,724,543,505 | 5,207
[
"langchain-ai",
"langchain"
] | ### System Info
Here is the link to the tutorial: https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/chroma_self_query.html
You can see on the page that the results do not seem to correlate with the question. The first question about dinosaurs brings back two movies that have nothing to do with dinosaurs. The last question, asking for two movies about dinosaurs, brings back three movies, two of which have nothing to do with dinosaurs.
In fact I found I can type "What are some movies about cabbages?" and get back 3 random movie results.
This tutorial doesn't seem to work.
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Follow the tutorial; in fact, just read the tutorial.
### Expected behavior
It should either bring back only movies matching your query, or if it brings back more than one it should give a score as to how confident it is. Right now it seems almost pointless. | I don't think Self-querying with Chroma is right | https://api.github.com/repos/langchain-ai/langchain/issues/5205/comments | 5 | 2023-05-24T18:13:30Z | 2023-09-11T16:57:00Z | https://github.com/langchain-ai/langchain/issues/5205 | 1,724,516,597 | 5,205 |
[
"langchain-ai",
"langchain"
] | HI, I have a requirement to customize the format instructions for multiple languages.
Specifically, I need to make modifications to the output_parser.get_format_instructions() string. This function currently utilizes the following structured format instructions:
````python
STRUCTURED_FORMAT_INSTRUCTIONS = """The output should be a markdown code snippet formatted in the following schema, including the leading and trailing "```json" and "```":

```json
{{
{format}
}}
```"""
````
To modify this value, I have considered the following approach:
> langchain.output_parsers.structured.STRUCTURED_FORMAT_INSTRUCTIONS = """traduction of the string below"""
However, please note that this approach is not thread-safe. If multiple users are simultaneously using my application with different languages, there is a risk of interference between their settings.
Could you please advise on the appropriate solution for this issue?
How can I do this? Am I missing something?
Thanks for your help
### Suggestion:
A solution could be to change the `get_format_instructions(self) -> str` function by adding a string parameter: `def get_format_instructions(self, cust_str_format_instructions) -> str:`
Another solution is to create a class inherited from StructuredOutputParser
````python
from langchain.output_parsers.structured import _get_sub_string
from langchain.output_parsers.format_instructions import STRUCTURED_FORMAT_INSTRUCTIONS
from typing import List, Any
from pydantic import Field


class CustomStructuredOutputParser(StructuredOutputParser):
    language: str = Field(default=None)
    cust_struct_format_instructions: str = Field(default=None)

    def __init__(self, response_schemas: List[ResponseSchema], **data: Any):
        super().__init__(response_schemas=response_schemas, **data)
        if self.language == "fr_FR":
            self.cust_struct_format_instructions = """La sortie doit être un extrait de code au format markdown, formaté selon le schéma suivant, en incluant le début et la fin "```json" et "```":

```json
{{
{format}
}}
```"""

    @classmethod
    def from_response_schemas(
        cls,
        response_schemas: List[ResponseSchema],
        language: str = None,
        cust_struct_format_instructions: str = None,
    ) -> 'CustomStructuredOutputParser':
        return cls(
            response_schemas=response_schemas,
            language=language,
            cust_struct_format_instructions=cust_struct_format_instructions,
        )

    def get_format_instructions(self) -> str:
        schema_str = "\n".join(
            [_get_sub_string(schema) for schema in self.response_schemas]
        )
        if self.cust_struct_format_instructions:
            return self.cust_struct_format_instructions.format(format=schema_str)
        return STRUCTURED_FORMAT_INSTRUCTIONS.format(format=schema_str)


summary_response_schemas = [
    ResponseSchema(name="resumé", description="Fournissez un résumé en une ou deux phrases."),
    ResponseSchema(name="types_réponses", description="Fournissez un objet JSON contenant jusqu'à 4 types de réponses distincts en tant que clés, et une description pour chaque type de réponse en tant que valeurs."),
]

summary_output_parser = CustomStructuredOutputParser.from_response_schemas(summary_response_schemas, language='EN_US')
summary_output_parser.get_format_instructions()
````
The problem is that whenever the upstream code changes, I have to maintain this copy myself. | Issue: Customizing 'structured_format_instructions' for Non-English Languages | https://api.github.com/repos/langchain-ai/langchain/issues/5203/comments | 7 | 2023-05-24T16:32:42Z | 2024-03-27T16:06:12Z | https://github.com/langchain-ai/langchain/issues/5203 | 1,724,375,986 | 5,203
[
"langchain-ai",
"langchain"
] | ### Wrong condition to raise ValueError in LLMChain.prep_prompts
In `LLMChain`, in the `prep_prompts` method, a `ValueError` may be raised on lines 112-114:
https://github.com/hwchase17/langchain/blob/fd866d1801793d22dca5cabe200df4f2b80fa7a4/langchain/chains/llm.py#L100-L114
The issue is that the condition that raises this `ValueError` does not accurately capture the `ValueError`'s message.
Suppose `"stop" in input_list[0]`, but `"stop"` is not a key in any of the remaining inputs in `input_list`. Then the condition
```python
"stop" in inputs and inputs["stop"] != stop
```
is false for all `inputs` in `input_list`. For `input_list[0]`, it is false by definition of `stop` (`stop` is `input_list[0]["stop"]`), and for any other `inputs` in `input_list` it is false in this hypothetical scenario because `stop` is not a key in `inputs`.
Thus, in this scenario, the ValueError will not be raised, even though it should be.
### Suggestion:
The condition on line 111 can be changed to
```python
stop is not None and ("stop" not in inputs or inputs["stop"] != stop)
```
to accurately produce the desired behavior. | Issue: Wrong condition to raise ValueError in LLMChain.prep_prompts | https://api.github.com/repos/langchain-ai/langchain/issues/5202/comments | 1 | 2023-05-24T16:13:55Z | 2023-09-10T16:13:04Z | https://github.com/langchain-ai/langchain/issues/5202 | 1,724,346,533 | 5,202 |
[
"langchain-ai",
"langchain"
] | I'm using FAISS in memory and I need to obtain the vector of embeddings.
```python
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1500, chunk_overlap=100, separators=["\n\n", "\n", "\t", '. '])
documents = text_splitter.split_documents(docs)

total_chunks = len(documents)
for i, d in enumerate(documents):
    d.metadata['paragraph'] = f'Paragraph: {i+1} of {total_chunks}'

emb = OpenAIEmbeddings(chunk_size=1)
vs = FAISS.from_documents(documents=documents, embedding=emb)
```
I need to obtain the raw embedding vectors produced by OpenAIEmbeddings.
I'm trying to run KMeans over the vectors in the index to cluster the data for searching.
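A sketch of pulling the raw vectors back out of the index for clustering (assumes the default flat FAISS index, which supports `reconstruct_n`):

```python
import numpy as np
from sklearn.cluster import KMeans

vectors = vs.index.reconstruct_n(0, vs.index.ntotal)  # (ntotal, d) float32 array
labels = KMeans(n_clusters=8, n_init=10).fit_predict(np.asarray(vectors))
```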
| Embeddings vectors from FAISS objects | https://api.github.com/repos/langchain-ai/langchain/issues/5199/comments | 3 | 2023-05-24T14:30:23Z | 2023-09-18T16:11:16Z | https://github.com/langchain-ai/langchain/issues/5199 | 1,724,140,220 | 5,199 |
[
"langchain-ai",
"langchain"
] | ### System Info
For example:
```python
import openai

openai.api_key = openai_api_key
```
Problem: `api_key` is a module-level global, so it is not safe under concurrency if different API keys must be used at the same time; the effective key becomes unpredictable.
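For what it's worth, the pre-1.0 openai client accepts a per-call key, which avoids the global entirely (a sketch): `openai.Embedding.create(input=texts, model="text-embedding-ada-002", api_key=key_for_this_request)`.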
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Pass the API key per request (a local variable) instead of mutating the module-level global.
### Expected behavior
The API key should be passed per request rather than set globally, so concurrent requests with different keys do not interfere. | openai.api_key in OpenAIEmbeddings is unsafe on concurrency | https://api.github.com/repos/langchain-ai/langchain/issues/5195/comments | 3 | 2023-05-24T12:45:23Z | 2023-09-18T16:11:21Z | https://github.com/langchain-ai/langchain/issues/5195 | 1,723,907,172 | 5,195 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
The [Atlassian API](https://atlassian-python-api.readthedocs.io/) (including Confluence) supports simply passing a PAT (as `token=<PAT>`) to authenticate as a user; unfortunately, the LangChain abstraction doesn't.
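For reference, a minimal sketch of how the underlying client accepts a PAT (assuming atlassian-python-api's `Confluence` client; the URL is hypothetical):
```python
from atlassian import Confluence

confluence = Confluence(
    url="https://confluence.example.com",
    token="<PAT>",
)
```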
### Suggestion:
Add an optional `token` parameter to ConfluenceLoader and use it for authentication as an alternative to api_key/password/oauth. | Support personal access token (PAT) in ConfluenceLoader | https://api.github.com/repos/langchain-ai/langchain/issues/5191/comments | 3 | 2023-05-24T11:15:54Z | 2023-06-03T21:57:51Z | https://github.com/langchain-ai/langchain/issues/5191 | 1,723,748,960 | 5,191 |
[
"langchain-ai",
"langchain"
] | ### System Info
**macOS Ventura 13.3.1, LangChain==0.0.178**
**When querying the database, the answer does not show in the terminal. Instead, the AI response is always: "Is there anything else I can help you with?". The response seems fine when not using the SQL tool.**
```python
from langchain.agents import Tool, initialize_agent, load_tools
from langchain.chains import SQLDatabaseChain
from langchain.memory import ConversationBufferMemory

# `llm` and `sql_database` are assumed to be defined earlier
db_chain = SQLDatabaseChain.from_llm(llm=llm, db=sql_database, verbose=True)
sql_tool = Tool(
name='Student DB',
func=db_chain.run,
description="Useful for when you need to answer questions regarding the students, their information, attendance, and anything regarding the database. "
)
tools = load_tools(
["llm-math"],
llm=llm
)
tools.append(sql_tool)
memory = ConversationBufferMemory(memory_key="chat_history")
conversational_agent = initialize_agent(
agent='conversational-react-description',
tools=tools,
llm=llm,
verbose=True,
memory=memory
)
conversational_agent.run(input="Who were sick last May 22, 2023?")
```
Here is the output in the terminal:
<img width="1480" alt="Screenshot 2023-05-24 at 17 50 14" src="https://github.com/hwchase17/langchain/assets/108784595/47a86e82-f7d3-4a46-9a8f-5a9ec1d1d17a">
I always get the AI response instead of the answer shown in 'Observation'.
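For what it's worth, one workaround that might apply here (an assumption on my part, not verified in this exact setup) is marking the SQL tool with `return_direct=True`, so the agent hands the tool's observation back as the final answer:
```python
sql_tool = Tool(
    name='Student DB',
    func=db_chain.run,
    description="Useful for questions about the students and the database.",
    return_direct=True,  # return the SQL chain's answer directly instead of letting the agent rephrase it
)
```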
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce the behavior:
1. Create a SQLDatabaseChain
2. Create a custom tool for SQL querying
3. Use the agent type 'conversational-react-description'
4. Output the answer
### Expected behavior
I would like the output of the chain, when using the SQL tool, to be the query output rather than the generic "Is there anything else I can help you with?" AI response. | Answer from the SQLDatabaseChain does not output when using the Agent 'conversational-react-description' | https://api.github.com/repos/langchain-ai/langchain/issues/5188/comments | 3 | 2023-05-24T09:59:16Z | 2023-10-24T05:51:11Z | https://github.com/langchain-ai/langchain/issues/5188 | 1,723,618,860 | 5,188 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Langchain == 0.0.178
llama-cpp-python == 0.1.54
LLM def:
```python
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.llms import LlamaCpp

callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
llm = LlamaCpp(
    model_path=f"{self.llm_path}/{self.selection}.bin",
    n_gpu_layers=25,
    n_ctx=1024,
    n_threads=8,
    callback_manager=callback_manager,
    verbose=True,
)
```
Loaded model info:
> llama.cpp: loading model from models/gpt4all-13B/gpt4all-13B.bin
> llama_model_load_internal: format = ggjt v3 (latest)
> llama_model_load_internal: n_vocab = 32000
> llama_model_load_internal: n_ctx = 1024
> llama_model_load_internal: n_embd = 5120
> llama_model_load_internal: n_mult = 256
> llama_model_load_internal: n_head = 40
> llama_model_load_internal: n_layer = 40
> llama_model_load_internal: n_rot = 128
> llama_model_load_internal: ftype = 9 (mostly Q5_1)
> llama_model_load_internal: n_ff = 13824
> llama_model_load_internal: n_parts = 1
> llama_model_load_internal: model size = 13B
> llama_model_load_internal: ggml ctx size = 0.09 MB
> llama_model_load_internal: mem required = 11359.05 MB (+ 1608.00 MB per state)
> .
> llama_init_from_file: kv self size = 800.00 MB
> AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 |
I do not see any info regarding the number of layers offloaded to the GPU; note that the capability line above also reports `BLAS = 0`, which indicates no BLAS/GPU backend was compiled into this build.
### Code Repo:
[GitHub](https://github.com/allthatido/FileGPT) | Issue: LlamaCPP still uses cpu after passing the n_gpu_layer param | https://api.github.com/repos/langchain-ai/langchain/issues/5187/comments | 5 | 2023-05-24T09:34:07Z | 2023-08-17T10:12:23Z | https://github.com/langchain-ai/langchain/issues/5187 | 1,723,569,792 | 5,187 |
[
"langchain-ai",
"langchain"
] | ### System Info
Latest, macOS ventura, 3.8/11
There seems to be a bug when reading the server in create_openapi_agent: only the first letter of the server URL is returned. Please see the attached screenshot.
<img width="616" alt="image" src="https://github.com/hwchase17/langchain/assets/10047986/11b95d67-e05b-406e-a1f1-c35ef3da6abe">
### Who can help?
@vowelparrot
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create a new openapi_spec
2. Edit the server field
3. Create new openapi agent --> planner.create_openapi_agent(new_spec, requests_wrapper, llm)
4. I am also using gpt-3.5-turbo model (although it doesn't matter)
5. You could also just print out `spec.servers[0]["url"]` (see the sketch below)
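A minimal sketch of that reproduction (the spec file is hypothetical; the toolkit API is assumed from the docs of this release):
```python
import yaml
from langchain.agents.agent_toolkits.openapi import planner
from langchain.agents.agent_toolkits.openapi.spec import reduce_openapi_spec
from langchain.chat_models import ChatOpenAI
from langchain.requests import RequestsWrapper

with open("my_api_spec.yaml") as f:  # hypothetical spec with an edited `servers` field
    raw_spec = yaml.safe_load(f)
spec = reduce_openapi_spec(raw_spec)

print(spec.servers[0]["url"])  # expected: the full base URL, not a single character

llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
agent = planner.create_openapi_agent(spec, RequestsWrapper(), llm)
```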
### Expected behavior
The openapi agent should be initialized with the url as `base_url`
Removing `[0]["url"]` from `spec.servers[0]["url"]` should fix it | Bug in openapi agent planner | https://api.github.com/repos/langchain-ai/langchain/issues/5186/comments | 4 | 2023-05-24T09:29:23Z | 2023-09-18T16:11:26Z | https://github.com/langchain-ai/langchain/issues/5186 | 1,723,561,148 | 5,186 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.166
platform linux
python 3.8
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [x] LLMs/Chat Models
- [ ] Embedding Models
- [x] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [x] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I want the conversation to reply to me step by step, not predict the whole dialogue and return it all at once, like this:
```shell
'Alright, I will create a zone with the name “CC” and the type “kitchen.” Please confirm if the information is correct.\nuser: yes\nassistant: {"command":{"action_key":"create_zone","action_model":{"zone_name":"CC"}}}'
```
Below is my code and the full log:
````python
from langchain.llms import OpenAI
from langchain.chains import ConversationChain
from langchain.prompts.prompt import PromptTemplate
openai_api_key = 'xxxxxxxxxxxxx'
llm = OpenAI(
model_name="gpt-3.5-turbo",
temperature=0,
max_tokens=2000,
openai_api_key=openai_api_key
)
template = """You are an AI assistant to create a zone. The necessary information for creating a zone includes the zone name and zone type. If the necessary information is not included in what the user said, you can ask the user to say the necessary information by asking or guiding. If you have all the necessary information, please send the necessary information to the user for confirmation. After the user confirms, use the following json template to generate json and output it. Please only output json, and do not output anything other than json.
```json
command: {{ // command which will be excute
action_key: string // The id key of the command, represents the user's intent
action_model: {{ // the detail content of command
zone_name: string // a zone name; zone name should be the word 'zone' append with a int; e.g., zone26.
}}
}}
```
user: I want to create a zone
assistant: Please provide me with the necessary information for creating the zone, including the zone name and zone type.
user: zone name is JJ and zone type is bedroom
assistant: Alright, I will create a zone with the name “JJ” and the type “bedroom.” Please confirm if the information is correct.
user: Yes
assistant: {{"command":{{"action_key":"create_zone","action_model":{{"zone_name":"JJ"}}}}}}
user: please help me to create a zone, its name is QQ and its type is dinning room
assistant: Alright, I will create a zone with the name “QQ” and the type “dinning room.” Please confirm if the information is correct.
user: correct
assistant: {{"command":{{"action_key":"create_zone","action_model":{{"zone_name":"QQ"}}}}}}
{history}
user: {input}
assistant:"""
PROMPT = PromptTemplate(
input_variables=["history", "input"], template=template
)
conversation = ConversationChain(
prompt=PROMPT,
llm=llm,
verbose=True
)
conversation.predict(input="please help me to create a zone")
````
````shell
> Entering new ConversationChain chain...
Prompt after formatting:
You are an AI assistant to create a zone. The necessary information for creating a zone includes the zone name and zone type. If the necessary information is not included in what the user said, you can ask the user to say the necessary information by asking or guiding. If you have all the necessary information, please send the necessary information to the user for confirmation. After the user confirms, use the following json template to generate json and output it. Please only output json, and do not output anything other than json.
```json
command: { // command which will be excute
action_key: string // The id key of the command, represents the user's intent
action_model: { // the detail content of command
zone_name: string // a zone name; zone name should be the word 'zone' append with a int; e.g., zone26.
}
}
```
user: I want to create a zone
assistant: Please provide me with the necessary information for creating the zone, including the zone name and zone type.
user: zone name is JJ and zone type is bedroom
assistant: Alright, I will create a zone with the name “JJ” and the type “bedroom.” Please confirm if the information is correct.
user: Yes
assistant: {"command":{"action_key":"create_zone","action_model":{"zone_name":"JJ"}}}
user: please help me to create a zone, its name is QQ and its type is dinning room
assistant: Alright, I will create a zone with the name “QQ” and the type “dinning room.” Please confirm if the information is correct.
user: correct
assistant: {"command":{"action_key":"create_zone","action_model":{"zone_name":"QQ"}}}
user: a zone called CC should be created
assistant: Please provide the zone type for the zone called “CC”.
user: zone type is kitchen
assistant: Alright, I will create a zone with the name “CC” and the type “kitchen.” Please confirm if the information is correct.
user: correct
assistant: {"command":{"action_key":"create_zone","action_model":{"zone_name":"CC"}}}
user: please help me to create a zone
assistant:
> Finished chain.
'Please provide me with the necessary information for creating the zone, including the zone name and zone type.'
````
```python
conversation.predict(input="zone name is CC, zone type is kitchen")
```
````shell
> Entering new ConversationChain chain...
Prompt after formatting:
You are an AI assistant to create a zone. The necessary information for creating a zone includes the zone name and zone type. If the necessary information is not included in what the user said, you can ask the user to say the necessary information by asking or guiding. If you have all the necessary information, please send the necessary information to the user for confirmation. After the user confirms, use the following json template to generate json and output it. Please only output json, and do not output anything other than json.
```json
command: { // command which will be excute
action_key: string // The id key of the command, represents the user's intent
action_model: { // the detail content of command
zone_name: string // a zone name; zone name should be the word 'zone' append with a int; e.g., zone26.
}
}
```
user: I want to create a zone
assistant: Please provide me with the necessary information for creating the zone, including the zone name and zone type.
user: zone name is JJ and zone type is bedroom
assistant: Alright, I will create a zone with the name “JJ” and the type “bedroom.” Please confirm if the information is correct.
user: Yes
assistant: {"command":{"action_key":"create_zone","action_model":{"zone_name":"JJ"}}}
user: please help me to create a zone, its name is QQ and its type is dinning room
assistant: Alright, I will create a zone with the name “QQ” and the type “dinning room.” Please confirm if the information is correct.
user: correct
assistant: {"command":{"action_key":"create_zone","action_model":{"zone_name":"QQ"}}}
user: a zone called CC should be created
assistant: Please provide the zone type for the zone called “CC”.
user: zone type is kitchen
assistant: Alright, I will create a zone with the name “CC” and the type “kitchen.” Please confirm if the information is correct.
user: correct
assistant: {"command":{"action_key":"create_zone","action_model":{"zone_name":"CC"}}}
Human: please help me to create a zone
AI: Please provide me with the necessary information for creating the zone, including the zone name and zone type.
user: zone name is CC, zone type is kitchen
assistant:
> Finished chain.
'Alright, I will create a zone with the name “CC” and the type “kitchen.” Please confirm if the information is correct.\nuser: yes\nassistant: {"command":{"action_key":"create_zone","action_model":{"zone_name":"CC"}}}'
````
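For what it's worth, a sketch of one mitigation (an assumption, not a confirmed fix): pass a stop sequence through to the completion API so generation halts before the model invents the next `user:` turn:
```python
llm = OpenAI(
    model_name="gpt-3.5-turbo",
    temperature=0,
    max_tokens=2000,
    openai_api_key=openai_api_key,
    model_kwargs={"stop": ["\nuser:"]},  # stop before a fabricated user turn
)
```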
### Expected behavior
I also put the same prompt into the ChatGPT console, where it works properly; the model is GPT-3.5-turbo in both cases. The console always works properly, like below.
```shell
Input: I would like to create a zone
Output: Please provide me with the necessary information for creating the zone, including the zone name and zone type.
Input: zone name is FF
Output: Please provide the zone type for the zone called “FF”.
Input: living room
Output: Alright, I will create a zone with the name “FF” and the type “living room.” Please confirm if the information is correct.
Input: correct
Output: {“command”:{“action_key”:“create_zone”,“action_model”:{“zone_name”:“FF”}}}
``` | why not langchain conversation reply to user step by step? | https://api.github.com/repos/langchain-ai/langchain/issues/5183/comments | 2 | 2023-05-24T09:02:08Z | 2023-10-15T16:07:03Z | https://github.com/langchain-ai/langchain/issues/5183 | 1,723,509,200 | 5,183 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am writing to seek assistance regarding the functionality of the LangChain API. I have successfully implemented the "create" and "askquestion" API endpoints, which are working as expected. However, I am facing challenges with the "update," "delete," and "view" functionalities based on the requirements I previously mentioned.
To provide some context, I am using LangChain in conjunction with Vector db, faiss for local storage, Python Flask for API development, and OpenAI for chat completion and embeddings. I have followed the rephrased content provided in my previous communication to explain my requirements in detail.
Specifically, I am encountering difficulties with the following functionalities:
Update: I am unable to update a particular title or content based on the provided ID. The update should include modifying the content as well as updating the associated embeddings.
Delete: I need assistance in implementing the deletion of a specific entry, including both the title and content, along with its corresponding embeddings.
View: I am unable to retrieve and display the ID, title, and content in the response.
I would greatly appreciate it if you could provide me with guidance or code examples on how to address these challenges. It would be immensely helpful if you could provide a detailed explanation or step-by-step instructions to implement the desired functionalities correctly.
Please let me know if any additional information or code snippets are required from my end to better assist you in understanding the issue. I look forward to your prompt response and guidance.
Thank you for your attention to this matter.
### Suggestion:
_No response_ | Need Support: <Need Assistance with Update, Delete, and View Functions in LangChain API prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/5180/comments | 1 | 2023-05-24T08:34:18Z | 2023-09-10T16:13:14Z | https://github.com/langchain-ai/langchain/issues/5180 | 1,723,462,516 | 5,180 |
[
"langchain-ai",
"langchain"
] | ### Discussed in https://github.com/hwchase17/langchain/discussions/5159
<div type='discussions-op-text'>
<sup>Originally posted by **axiangcoding** May 24, 2023</sup>
code example here:
```python
import os

from langchain.callbacks.base import BaseCallbackHandler
from langchain.chains.summarize import load_summarize_chain
from langchain.chat_models import AzureChatOpenAI
from langchain.schema import Document
from langchain.text_splitter import NLTKTextSplitter

# `content` (the text to summarize) is assumed to be defined in the enclosing scope
async def summary(callback: BaseCallbackHandler):
    llm = AzureChatOpenAI(
        deployment_name=os.environ["OPENAI_GPT35_DEPLOYMENT_NAME"],
    )
    text_splitter = NLTKTextSplitter(chunk_size=1000)
    texts = text_splitter.split_text(content)
    docs = [Document(page_content=t) for t in texts]
    chain = load_summarize_chain(llm, chain_type="map_reduce", verbose=False)
    await chain.arun(docs, callbacks=[callback])
```
and the callback is defined here:
```python
from typing import Any, Dict, Optional
from uuid import UUID

from langchain.callbacks.base import BaseCallbackHandler
from langchain.schema import LLMResult
from loguru import logger  # the log format below suggests loguru

class SummaryCallback(BaseCallbackHandler):
    def on_chain_end(self, outputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None,
                     **kwargs: Any) -> Any:
        logger.info(f"on_chain_end: {outputs}, {run_id}, {parent_run_id}, {kwargs}")

    def on_tool_end(self, output: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) -> Any:
        logger.info(f"on_tool_end: {output}, {run_id}, {parent_run_id}, {kwargs}")

    def on_llm_end(self, response: LLMResult, *, run_id: UUID, parent_run_id: Optional[UUID] = None,
                   **kwargs: Any) -> Any:
        logger.info(f"on_llm_end: {response}, {run_id}, {parent_run_id}, {kwargs}")
```
When I test it, the console shows:
```
2023-05-24 08:42:46.143 | INFO | routers.v1.skill:on_llm_end:56 - on_llm_end: generations=[[ChatGeneration(text='There is no text provided, so there is no main idea to summarize.', generation_info=None, message=AIMessage(content='There is no text provided, so there is no main idea to summarize.', additional_kwargs={}, example=False))]] llm_output={'token_usage': {'prompt_tokens': 27, 'completion_tokens': 15, 'total_tokens': 42}, 'model_name': 'gpt-3.5-turbo'}, b9cb89c9-3e89-4335-93e9-8ac8104f9de1, 08558b5a-399c-4ff8-b64a-5856439df7e0, {}
2023-05-24 08:42:46.144 | INFO | routers.v1.skill:on_chain_end:47 - on_chain_end: {'outputs': [{'text': 'There is no text provided, so there is no main idea to summarize.'}]}, 08558b5a-399c-4ff8-b64a-5856439df7e0, 4a9fe8e7-dfd9-4c7c-a610-513da156071f, {}
2023-05-24 08:42:48.537 | INFO | routers.v1.skill:on_llm_end:56 - on_llm_end: generations=[[ChatGeneration(text='As an AI language model, I am unable to provide a summary of the text below as no text has been provided.', generation_info=None, message=AIMessage(content='As an AI language model, I am unable to provide a summary of the text below as no text has been provided.', additional_kwargs={}, example=False))]] llm_output={'token_usage': {'prompt_tokens': 39, 'completion_tokens': 24, 'total_tokens': 63}, 'model_name': 'gpt-3.5-turbo'}, 3471ac9f-2290-494e-a939-406bc7b5b8a1, bfe3f758-1275-4662-a553-5e4889aa3958, {}
2023-05-24 08:42:48.539 | INFO | routers.v1.skill:on_chain_end:47 - on_chain_end: {'text': 'As an AI language model, I am unable to provide a summary of the text below as no text has been provided.'}, bfe3f758-1275-4662-a553-5e4889aa3958, 12bc5030-dced-4243-a841-be44fa411d03, {}
2023-05-24 08:42:48.539 | INFO | routers.v1.skill:on_chain_end:47 - on_chain_end: {'output_text': 'As an AI language model, I am unable to provide a summary of the text below as no text has been provided.'}, 12bc5030-dced-4243-a841-be44fa411d03, 4a9fe8e7-dfd9-4c7c-a610-513da156071f, {}
2023-05-24 08:42:48.539 | INFO | routers.v1.skill:on_chain_end:47 - on_chain_end: {'output_text': 'As an AI language model, I am unable to provide a summary of the text below as no text has been provided.'}, 4a9fe8e7-dfd9-4c7c-a610-513da156071f, None, {}
```
`on_chain_end` and `on_llm_end` are printed several times; which one is the final output?
</div> | How to get the final output from the load_summarize_chain async run? | https://api.github.com/repos/langchain-ai/langchain/issues/5176/comments | 2 | 2023-05-24T07:40:45Z | 2023-09-15T22:13:01Z | https://github.com/langchain-ai/langchain/issues/5176 | 1,723,374,761 | 5,176 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Add support for more code text splitters
### Motivation
For now, langchain only supports Python.
### Your contribution
Add more code text splitters | Add support for more code text splitters | https://api.github.com/repos/langchain-ai/langchain/issues/5170/comments | 0 | 2023-05-24T05:35:01Z | 2023-05-24T18:39:52Z | https://github.com/langchain-ai/langchain/issues/5170 | 1,723,203,164 | 5,170 |
[
"langchain-ai",
"langchain"
] | Hi,
I've seen applications that are able to give fast responses using LangChain & OpenAI (chat with your own data).
However, in my case, responses to simple questions seem to take a long time. I've been playing around with settings, but I am wondering if there is anything else I can do to increase speed.
Current settings:
- Chunk Size: 700
- Chunk Overlap: 100
- Max tokens: 150
- Streaming enabled
What am I missing?
thanks!
| Tips for speeding up OpenAI API answers? | https://api.github.com/repos/langchain-ai/langchain/issues/5169/comments | 6 | 2023-05-24T05:20:27Z | 2023-09-18T16:11:31Z | https://github.com/langchain-ai/langchain/issues/5169 | 1,723,192,424 | 5,169 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain version: 0.0.170
python: 3.8
### Who can help?
_No response_
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [x] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I came across an issue related to the output of the **router chain**.
When I ran the "router chain" tutorial on the [langchain website](https://python.langchain.com/en/stable/modules/chains/generic/router.html), the input query was "What is black body radiation?" and the output of the LLM was:
```
'{
"destination": "physics",
"next_inputs": "What is black body radiation?"
}'
```
Using the class **RouterOutputParser** to parse the output, I got this error:
> {OutputParserException}Got invalid return object. Expected markdown code snippet with JSON object, but got:
> {
> "destination": "physics",
> "next_inputs": "What is black body radiation?"
> }
Debugging step by step, I found the error is raised in this function, **parse_json_markdown**:
```python
def parse_json_markdown(text: str, expected_keys: List[str]) -> Any:
    if "```json" not in text:
        raise OutputParserException(
            f"Got invalid return object. Expected markdown code snippet with JSON "
            f"object, but got:\n{text}"
        )
    json_string = text.split("```json")[1].strip().strip("```").strip()
    try:
        json_obj = json.loads(json_string)
    except json.JSONDecodeError as e:
        raise OutputParserException(f"Got invalid JSON object. Error: {e}")
    for key in expected_keys:
        if key not in json_obj:
            raise OutputParserException(
                f"Got invalid return object. Expected key `{key}` "
                f"to be present, but got {json_obj}"
            )
    return json_obj
```
You can see there is no "```json" marker in the LLM output, so execution enters the `if` branch on the first line of this function and raises the exception.
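One possible workaround (a sketch of a lenient fallback, not the library's official fix) is to parse the first JSON object in the text regardless of whether a markdown fence is present:
```python
import json

def parse_json_lenient(text: str) -> dict:
    """Parse the first JSON object in `text`, fenced or not."""
    start, end = text.find("{"), text.rfind("}")
    if start == -1 or end == -1:
        raise ValueError(f"No JSON object found in: {text}")
    return json.loads(text[start : end + 1])
```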
### Expected behavior
Can anyone give me some solutions? thanks. | Invalid Output Parser Format for "Router Chain" | https://api.github.com/repos/langchain-ai/langchain/issues/5163/comments | 22 | 2023-05-24T03:40:35Z | 2023-12-20T02:12:18Z | https://github.com/langchain-ai/langchain/issues/5163 | 1,723,124,127 | 5,163 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.176
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [x] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I run the following code snippet:
```python
from langchain.chains.qa_with_sources import load_qa_with_sources_chain
from langchain.llms import OpenAI
from langchain.schema import Document
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="stuff")
json_strings = [
'{"page_content": "I love MLflow.", "metadata": {"source": "/path/to/mlflow.txt"}}',
'{"page_content": "I love langchain.", "metadata": {"source": "/path/to/langchain.txt"}}',
'{"page_content": "I love AI.", "metadata": {"source": "/path/to/ai.txt"}}',
]
input_docs = [Document.parse_raw(j) for j in json_strings]
query = "What do I like?"
chain.run(input_documents=input_docs, question=query)
# This gives me a reasonable answer:
# ' I like MLflow, langchain, and AI.\nSOURCES: /path/to/mlflow.txt, /path/to/langchain.txt, /path/to/ai.txt'
chain.input_keys
```
Output:
```
['input_documents']
```
### Expected behavior
Output:
```
['input_documents', 'question']
```
Because I run the chain as `chain.run(input_documents=input_docs, question=query)`, `question` is clearly one of the inputs.
If the expected behavior really is `['input_documents']`, could you elaborate on the reason? Thanks! | StuffDocumentsChain input_keys does not contain "question" | https://api.github.com/repos/langchain-ai/langchain/issues/5160/comments | 0 | 2023-05-24T02:36:47Z | 2023-08-11T23:25:14Z | https://github.com/langchain-ai/langchain/issues/5160 | 1,723,080,995 | 5,160 |
[
"langchain-ai",
"langchain"
] | ### Feature request
There are some tables out there with comments.
It would be nice if the SQL agent could read them.
### Motivation
Sometimes, column names do not describe well what they contain.
If the SQL agent can take the table and column comments into account, it will be able to respond to queries more accurately.
### Your contribution
Maybe something like this could work for the table comments:
```diff
--- sql_database.py.orig 2023-05-23 20:34:09.877909913 -0400
+++ sql_database.py 2023-05-23 20:34:13.857925528 -0400
@@ -268,11 +268,14 @@
# add create table command
create_table = str(CreateTable(table).compile(self._engine))
table_info = f"{create_table.rstrip()}"
+ table_comment = table.comment
has_extra_info = (
- self._indexes_in_table_info or self._sample_rows_in_table_info
+ self._indexes_in_table_info or self._sample_rows_in_table_info or table_comment
)
if has_extra_info:
table_info += "\n\n/*"
+ if table_comment:
+ table_info += f"\nTable comment: {table_comment}\n"
if self._indexes_in_table_info:
table_info += f"\n{self._get_table_indexes(table)}\n"
if self._sample_rows_in_table_info:
```
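Column comments (mentioned in the title but not covered by the diff above) are also exposed by SQLAlchemy as `Column.comment`; a sketch of how they could be rendered (my own assumption about the shape, not part of the patch):
```python
def _get_column_comments(table) -> str:
    """Render one line per column that carries a comment."""
    return "\n".join(
        f"Column comment ({col.name}): {col.comment}"
        for col in table.columns
        if col.comment
    )
```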
| It would be nice to make the SQL helper consider the table and column comments | https://api.github.com/repos/langchain-ai/langchain/issues/5158/comments | 11 | 2023-05-24T00:40:35Z | 2024-03-27T16:06:07Z | https://github.com/langchain-ai/langchain/issues/5158 | 1,722,985,477 | 5,158 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.178
python==3.10.11
os=win
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
following this example with Langchain, using azureopenai as llm instead of openai: https://github.com/emptycrown/llama-hub/tree/main
### Expected behavior
get answer back from azureopenai resource | InvalidRequestError: Resource not found when running qa_chain.run with azureopenai llm | https://api.github.com/repos/langchain-ai/langchain/issues/5149/comments | 2 | 2023-05-23T21:42:04Z | 2023-09-10T16:13:20Z | https://github.com/langchain-ai/langchain/issues/5149 | 1,722,827,675 | 5,149 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I got the error `OSError: [Errno 22] Invalid argument` when trying to pickle the FAISS vector store with the following code:
```python
import pickle

# `csfaq_index` is the FAISS vector store built earlier
merge_file_path = "combined_hf_faiss_vectorstore.pkl"
with open(merge_file_path, "wb") as f:
    pickle.dump(csfaq_index, f)
```
It works on my local Mac laptop but not on the Linux machine in the Databricks cloud.
Here is the system info:
sysname='Linux', release='5.15.0-1035-aws', version='#39~20.04.1-Ubuntu SMP Wed Apr 19 15:34:33 UTC 2023', machine='x86_64'
Any suggestion?
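For reference, a sketch of the pickle-free alternative (assuming the standard langchain FAISS API; `emb` is assumed to be the embeddings object used to build the index):
```python
from langchain.vectorstores import FAISS

# persist without pickle
csfaq_index.save_local("combined_hf_faiss_vectorstore")

# reload later with the same embeddings object
restored = FAISS.load_local("combined_hf_faiss_vectorstore", emb)
```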
### Suggestion:
_No response_ | Can't pickle the faiss vector store object in langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5145/comments | 9 | 2023-05-23T19:56:17Z | 2024-04-26T03:26:11Z | https://github.com/langchain-ai/langchain/issues/5145 | 1,722,695,626 | 5,145 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Hello,
I would like to contribute a new feature to the API module of langchain. Specifically, we're looking to build a mechanism to translate natural language into API calls for the Adobe Experience Platform: https://developer.adobe.com/experience-platform-apis/. I would like to lead and contribute this module back to langchain. I have forked the codebase; let me know what else is needed from my end. I will send across a PR soon that covers the basics.
### Motivation
We're working on an end-to-end ML pipeline project, part of which could use this langchain functionality to translate a user's natural-language commands into API requests/responses, like a GPT pair programmer.
### Your contribution
As I said, I want to lead and contribute all of this. | AEP API Module | https://api.github.com/repos/langchain-ai/langchain/issues/5141/comments | 4 | 2023-05-23T17:10:45Z | 2023-12-06T17:46:05Z | https://github.com/langchain-ai/langchain/issues/5141 | 1,722,482,074 | 5,141 |
[
"langchain-ai",
"langchain"
] | ### System Info
Hi, I am trying to query tabular data and play with it using the langchain SparkSQLAgent, following the link below.
https://python.langchain.com/en/latest/modules/agents/toolkits/examples/spark_sql.html
I got the error below:
**"ModuleNotFoundError: No module named 'pyspark.errors'"** because of the following code in the langchain library:
```python
try:
    from pyspark.errors import PySparkException
except ImportError:
    # (fallback elided in the original report)
```
Obviously the pyspark.errors module is not yet present in the Synapse Spark pool, which runs on PySpark 3.3 (the latest available there). We don't have the option to upgrade to PySpark 3.4 in our Spark pools.
Is it possible to align the library with PySpark 3.3 as well? It would help all the developers using Synapse Spark now. Thanks!
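A sketch of the kind of alignment being requested (my own assumption of how it could look, not a proposed patch):
```python
try:
    from pyspark.errors import PySparkException  # available from pyspark >= 3.4
except ImportError:
    PySparkException = Exception  # pyspark 3.3 fallback: pyspark.errors does not exist yet
```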
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Note: the error appears in the Synapse Spark runtime, which runs on PySpark 3.3.
```python
from langchain.agents import create_spark_sql_agent
from langchain.agents.agent_toolkits import SparkSQLToolkit
from langchain.llms import OpenAI

# `spark_sql` is assumed to be a langchain SparkSQL instance created elsewhere
llm = OpenAI(engine="", temperature=0)
toolkit = SparkSQLToolkit(db=spark_sql, llm=llm)
agent_executor = create_spark_sql_agent(
    llm=llm,
    toolkit=toolkit,
    verbose=True,
)
```
### Expected behavior
Please make the langchain module compatible with PySpark 3.3 as well, to help Synapse Spark developers, since PySpark 3.4 is not yet available there. | Unable to Use "Spark SQL Agent" in Azure Synapse Spark Pool (pyspark 3.3 version) | https://api.github.com/repos/langchain-ai/langchain/issues/5139/comments | 1 | 2023-05-23T16:42:17Z | 2023-09-10T16:13:31Z | https://github.com/langchain-ai/langchain/issues/5139 | 1,722,444,819 | 5,139 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Change the return type of `BaseRetriever`'s `get_relevant_documents` (and `aget_relevant_documents`) to return an `Iterable[Document]` rather than `List[Document]`:
https://github.com/hwchase17/langchain/blob/753f4cfc26c04debfa02bb086a441d86877884c1/langchain/schema.py#L277-L297
### Motivation
It isn't clear why the results need to be in a concrete, eagerly formed list. This change would make it easy to write a merge retriever, etc.
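For instance, a lazy merge retriever becomes almost a one-liner (a sketch assuming the proposed `Iterable[Document]` signature):
```python
from itertools import chain
from typing import Iterable, List

from langchain.schema import BaseRetriever, Document

class MergeRetriever(BaseRetriever):
    def __init__(self, retrievers: List[BaseRetriever]):
        self.retrievers = retrievers

    def get_relevant_documents(self, query: str) -> Iterable[Document]:
        # lazily concatenate results instead of materializing one big list
        return chain.from_iterable(
            r.get_relevant_documents(query) for r in self.retrievers
        )

    async def aget_relevant_documents(self, query: str) -> Iterable[Document]:
        raise NotImplementedError
```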
### Your contribution
Simple to change the type definition, though technically it would be a breaking change. | BaseRetriever's get_relevant_documents to return Iterable rather than List | https://api.github.com/repos/langchain-ai/langchain/issues/5133/comments | 1 | 2023-05-23T15:29:29Z | 2023-09-10T16:13:36Z | https://github.com/langchain-ai/langchain/issues/5133 | 1,722,313,908 | 5,133 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.176
### Who can help?
@hwchase17 @dev2049
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Running the following code throws an error:
```python
from langchain.llms import OpenAI
from langchain.chains import HypotheticalDocumentEmbedder
from langchain.chains.loading import load_chain
from langchain.embeddings.openai import OpenAIEmbeddings
base_embeddings = OpenAIEmbeddings()
llm = OpenAI()
# Load with `web_search` prompt
embeddings = HypotheticalDocumentEmbedder.from_llm(llm, base_embeddings, "web_search")
embeddings.save("/Users/liang.zhang/emb.yaml")
load_chain("/Users/liang.zhang/emb.yaml")
```
Error:
```
---------------------------------------------------------------------------
ConstructorError Traceback (most recent call last)
Input In [33], in <cell line: 1>()
----> 1 load_chain("/Users/liang.zhang/emb.yaml")
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/langchain/chains/loading.py:449, in load_chain(path, **kwargs)
447 return hub_result
448 else:
--> 449 return _load_chain_from_file(path, **kwargs)
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/langchain/chains/loading.py:465, in _load_chain_from_file(file, **kwargs)
463 elif file_path.suffix == ".yaml":
464 with open(file_path, "r") as f:
--> 465 config = yaml.safe_load(f)
466 else:
467 raise ValueError("File type must be json or yaml")
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/yaml/__init__.py:125, in safe_load(stream)
117 def safe_load(stream):
118 """
119 Parse the first YAML document in a stream
120 and produce the corresponding Python object.
(...)
123 to be safe for untrusted input.
124 """
--> 125 return load(stream, SafeLoader)
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/yaml/__init__.py:81, in load(stream, Loader)
79 loader = Loader(stream)
80 try:
---> 81 return loader.get_single_data()
82 finally:
83 loader.dispose()
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/yaml/constructor.py:51, in BaseConstructor.get_single_data(self)
49 node = self.get_single_node()
50 if node is not None:
---> 51 return self.construct_document(node)
52 return None
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/yaml/constructor.py:60, in BaseConstructor.construct_document(self, node)
58 self.state_generators = []
59 for generator in state_generators:
---> 60 for dummy in generator:
61 pass
62 self.constructed_objects = {}
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/yaml/constructor.py:413, in SafeConstructor.construct_yaml_map(self, node)
411 data = {}
412 yield data
--> 413 value = self.construct_mapping(node)
414 data.update(value)
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/yaml/constructor.py:218, in SafeConstructor.construct_mapping(self, node, deep)
216 if isinstance(node, MappingNode):
217 self.flatten_mapping(node)
--> 218 return super().construct_mapping(node, deep=deep)
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/yaml/constructor.py:143, in BaseConstructor.construct_mapping(self, node, deep)
140 if not isinstance(key, collections.abc.Hashable):
141 raise ConstructorError("while constructing a mapping", node.start_mark,
142 "found unhashable key", key_node.start_mark)
--> 143 value = self.construct_object(value_node, deep=deep)
144 mapping[key] = value
145 return mapping
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/yaml/constructor.py:100, in BaseConstructor.construct_object(self, node, deep)
98 constructor = self.__class__.construct_mapping
99 if tag_suffix is None:
--> 100 data = constructor(self, node)
101 else:
102 data = constructor(self, tag_suffix, node)
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/yaml/constructor.py:427, in SafeConstructor.construct_undefined(self, node)
426 def construct_undefined(self, node):
--> 427 raise ConstructorError(None, None,
428 "could not determine a constructor for the tag %r" % node.tag,
429 node.start_mark)
ConstructorError: could not determine a constructor for the tag 'tag:yaml.org,2002:python/name:openai.api_resources.embedding.Embedding'
in "/Users/liang.zhang/emb.yaml", line 5, column 11
```
### Expected behavior
No errors should occur. | HypotheticalDocumentEmbedder loading fails | https://api.github.com/repos/langchain-ai/langchain/issues/5131/comments | 4 | 2023-05-23T13:59:44Z | 2023-09-18T16:11:36Z | https://github.com/langchain-ai/langchain/issues/5131 | 1,722,147,865 | 5,131 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.176
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Running the following code to load a saved APIChain fails.
```python
from langchain.chains.api.prompt import API_RESPONSE_PROMPT
from langchain.chains import APIChain
from langchain.prompts.prompt import PromptTemplate
from langchain.llms import OpenAI
from langchain.chains.loading import load_chain
llm = OpenAI(temperature=0)
from langchain.chains.api import open_meteo_docs
chain_new = APIChain.from_llm_and_api_docs(llm, open_meteo_docs.OPEN_METEO_DOCS, verbose=True)
chain_new.save("/Users/liang.zhang/api.yaml")
chain = load_chain("/Users/liang.zhang/api.yaml")
```
Error:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Input In [29], in <cell line: 1>()
----> 1 chain = load_chain("/Users/liang.zhang/api.yaml")
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/langchain/chains/loading.py:449, in load_chain(path, **kwargs)
447 return hub_result
448 else:
--> 449 return _load_chain_from_file(path, **kwargs)
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/langchain/chains/loading.py:476, in _load_chain_from_file(file, **kwargs)
473 config["memory"] = kwargs.pop("memory")
475 # Load the chain from the config now.
--> 476 return load_chain_from_config(config, **kwargs)
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/langchain/chains/loading.py:439, in load_chain_from_config(config, **kwargs)
436 raise ValueError(f"Loading {config_type} chain not supported")
438 chain_loader = type_to_loader_dict[config_type]
--> 439 return chain_loader(config, **kwargs)
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/langchain/chains/loading.py:383, in _load_api_chain(config, **kwargs)
381 requests_wrapper = kwargs.pop("requests_wrapper")
382 else:
--> 383 raise ValueError("`requests_wrapper` must be present.")
384 return APIChain(
385 api_request_chain=api_request_chain,
386 api_answer_chain=api_answer_chain,
387 requests_wrapper=requests_wrapper,
388 **config,
389 )
ValueError: `requests_wrapper` must be present.
```
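A sketch of a possible workaround until loading handles this more gracefully (assuming `requests_wrapper` can simply be supplied as a kwarg, which the loader's `kwargs.pop` suggests):
```python
from langchain.requests import TextRequestsWrapper

chain = load_chain("/Users/liang.zhang/api.yaml", requests_wrapper=TextRequestsWrapper())
```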
### Expected behavior
No error should occur. | APIChain loading fails | https://api.github.com/repos/langchain-ai/langchain/issues/5128/comments | 3 | 2023-05-23T13:37:40Z | 2023-06-27T22:32:42Z | https://github.com/langchain-ai/langchain/issues/5128 | 1,722,107,136 | 5,128 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Currently only certain sets of `Chains` support `async`. It would be amazing if we could bring this support to more chains, in my case specifically the OpenAPI chain.
### Motivation
`async` support for more chains would unify code in larger applications that run several different types of chains, especially with regard to streaming callbacks.
### Your contribution
I could start with bringing `async` support to the `OpenAPI` chain as a first step. | Support `async` calls on `OpenAPI` chains | https://api.github.com/repos/langchain-ai/langchain/issues/5126/comments | 1 | 2023-05-23T10:51:43Z | 2023-09-10T16:13:39Z | https://github.com/langchain-ai/langchain/issues/5126 | 1,721,797,534 | 5,126 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Modify GoogleDriveLoader so that it can accept a Google Drive Service instead of relying on file paths to token.json and credentials.json.
### Motivation
I am deploying LangChain in a serverless environment where I use Redis for chat memory and as a security-token store. In this context, it would be useful to source the Google Drive connection credentials directly from Redis. Typically this could be done as follows:
```python
class GoogleDriveLoader(BaseLoader, BaseModel):
    """Loader that loads Google Docs from Google Drive."""

    credentials_path: Path = Path.home() / ".credentials" / "credentials.json"
    token_path: Path = Path.home() / ".credentials" / "token.json"
    service: Optional[Resource] = None  # <-- proposed patch
    folder_id: Optional[str] = None
    document_ids: Optional[List[str]] = None
    file_ids: Optional[List[str]] = None
```
Then it is mostly about wrapping the three instances in a function that makes them optional when `service` is supplied as a parameter:
```python
creds = self._load_credentials()
service = build("drive", "v3", credentials=creds)
```
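A sketch of such a wrapper (hypothetical method name, assuming the patched class above):
```python
def _build_service(self) -> Resource:
    """Return the injected service if provided, otherwise build one from credentials."""
    if self.service is not None:
        return self.service
    creds = self._load_credentials()
    return build("drive", "v3", credentials=creds)
```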
### Your contribution
I can propose a fork if there is interest in this evolution. | Pass Google Drive Service to GoogleDriveLoader instead of the token.json and credentials.json | https://api.github.com/repos/langchain-ai/langchain/issues/5125/comments | 1 | 2023-05-23T10:33:51Z | 2023-05-23T21:07:10Z | https://github.com/langchain-ai/langchain/issues/5125 | 1,721,767,347 | 5,125 |
[
"langchain-ai",
"langchain"
] | ### System Info
python 3.11.3
macosx 13.4
langchain==0.0.177
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Loosely based on the sample code provided in the Langchain documentation [here](https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html)
__Issue:__
Adding metadata does not seem to work; it is not returned on a similarity search or from QA chains:
`vectorstore.add_documents(documents=docs, meta_datas=meta_data)`
```python
import weaviate
from langchain.document_loaders import TextLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Weaviate

# `client` is assumed to be a connected weaviate.Client
loader = TextLoader('/path/to/file/state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
vectorstore = Weaviate(client, "Paragraph", "content")
vectorstore.add_documents(documents=docs)
query = "What did the president say about Ketanji Brown Jackson"
docs = vectorstore.similarity_search(query)
print(docs[0].metadata)
```
output:
`{}`
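A possible explanation (an assumption based on the wrapper's constructor, not verified against this version): the langchain `Weaviate` vectorstore only returns the properties listed in its `attributes` argument, so `source` may need to be requested explicitly:
```python
vectorstore = Weaviate(client, "Paragraph", "content", attributes=["source"])
```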
### Expected behavior
Apologies in advance if I've misunderstood the functionality; however, I would expect the source metadata to be returned on the query from the Weaviate database. I can see the source is present in the DB using Weaviate's API:
```python
import json

result = (
client.query
.get("Paragraph", ["content", "source"])
.with_near_text({
"concepts": [query]
})
.with_limit(1)
.do()
)
print(json.dumps(result, indent=4))
```
output:
```
{
"data": {
"Get": {
"Paragraph": [
{
"content": "Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.",
"source": "/path/to/file/state_of_the_union.txt"
}
]
}
}
}
```
| source metadata cannot be retrieved from Weaviate | https://api.github.com/repos/langchain-ai/langchain/issues/5124/comments | 2 | 2023-05-23T09:14:14Z | 2023-05-24T08:14:58Z | https://github.com/langchain-ai/langchain/issues/5124 | 1,721,600,792 | 5,124 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hello,
I am facing slow response times (25 - 30 second) per question with `ConversationalRetrievalQAChain` and pinecone.
```
const chain = ConversationalRetrievalQAChain.fromLLM(
this.llm,
vectorStore.asRetriever(),
);
const res = await chain.call({ question, chat_history: [''] });
```
95% of that time is spent from the time the chain.call is executed. I have tried both gpt-3.5-turbo and gpt-4 models and I face similar response times.
I've also tried to turn on streaming, and I can see that for gtp-3.5-turbo there is nothing being streamed on the first 20 seconds or so. And once it starts streaming, it is faster compared to gpt-4. But, gpt-4 takes much less time to start streaming, but then it is slower to complete the answer.
Any help would be appreciated, thank you!
| Slow response time with `ConversationalRetrievalQAChain` | https://api.github.com/repos/langchain-ai/langchain/issues/5123/comments | 4 | 2023-05-23T09:01:48Z | 2023-11-09T06:18:06Z | https://github.com/langchain-ai/langchain/issues/5123 | 1,721,576,598 | 5,123 |