| issue_owner_repo (list, length 2) | issue_body (string, 0–261k chars, nullable) | issue_title (string, 1–925 chars) | issue_comments_url (string, 56–81 chars) | issue_comments_count (int64, 0–2.5k) | issue_created_at (string, 20 chars) | issue_updated_at (string, 20 chars) | issue_html_url (string, 37–62 chars) | issue_github_id (int64, 387k–2.46B) | issue_number (int64, 1–127k) |
|---|---|---|---|---|---|---|---|---|---|
[
"langchain-ai",
"langchain"
] | ### System Info
```
ValueError Traceback (most recent call last)
Cell In[35], line 3
1 query = "What did the president say about the Supreme Court"
2 docs = db.similarity_search(query)
----> 3 chain.run(input_documents=docs, question=query)
File ~/.local/lib/python3.10/site-packages/langchain/chains/base.py:480, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
475 return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
476 _output_key
477 ]
479 if kwargs and not args:
--> 480 return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
481 _output_key
482 ]
484 if not kwargs and not args:
485 raise ValueError(
486 "`run` supported with either positional arguments or keyword arguments,"
487 " but none were provided."
488 )
File ~/.local/lib/python3.10/site-packages/langchain/chains/base.py:282, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info)
280 except (KeyboardInterrupt, Exception) as e:
281 run_manager.on_chain_error(e)
--> 282 raise e
...
113 if self.client.task == "text-generation":
114 # Text generation return includes the starter text.
115 text = response[0]["generated_text"][len(prompt) :]
ValueError: Error raised by inference API: Model google/flan-t5-xl time out
```
This exception should be handled.
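A retry wrapper would be one way to cope in the meantime; a minimal sketch, assuming the timeout keeps surfacing as a `ValueError` (the `tenacity` dependency and helper name are illustrative, not part of LangChain):
```python
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_exponential

# Retry the chain call a few times with exponential backoff when the
# Hugging Face inference API times out.
@retry(
    retry=retry_if_exception_type(ValueError),
    stop=stop_after_attempt(3),
    wait=wait_exponential(min=1, max=10),
)
def run_chain_with_retry(chain, docs, query):
    return chain.run(input_documents=docs, question=query)
```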
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
query = "What did the president say about the Supreme Court"
docs = db.similarity_search(query)
chain.run(input_documents=docs, question=query)
```
### Expected behavior
Retry mechanism, or caching | TimeOutError unhandled | https://api.github.com/repos/langchain-ai/langchain/issues/9509/comments | 2 | 2023-08-20T11:50:21Z | 2023-11-26T16:06:29Z | https://github.com/langchain-ai/langchain/issues/9509 | 1,858,096,249 | 9,509 |
[
"langchain-ai",
"langchain"
How to improve the performance of agents to get better responses from a local model like GPT4All? | How to improve the performance of agents to get better responses from a local model like GPT4All | https://api.github.com/repos/langchain-ai/langchain/issues/9506/comments | 6 | 2023-08-20T06:25:58Z | 2023-12-02T16:06:07Z | https://github.com/langchain-ai/langchain/issues/9506 | 1,857,998,389 | 9,506 |
[
"langchain-ai",
"langchain"
] | The `QAGenerationChain` as it is currently written is prone to a `JSONDecodeError`, as mentioned in https://github.com/langchain-ai/langchain/pull/9503. That was my naive attempt to fix the problem I was having, but as I explained in closing the PR, I think a `PydanticOutputParser` with a more instructive prompt or an auto-fixing parser would be more robust. Plus, I think the successful runs after implementing my fix were just luck. 🤣
In `QAGenerationChain._call`, after generation, `json.loads` frequently raises a `JSONDecodeError`. This is usually because the response is wrapped in Markdown code tags or prefaced with a message, as seen in the following traces:
- `ChatOpenAI` gpt-3.5-turbo: [successful trace](https://smith.langchain.com/public/6c42845c-bbe2-41c1-99a4-7669112f1504/r)
- `ChatOpenAI` gpt-4: [`JSONDecodeError` due to Markdown formatting](https://smith.langchain.com/public/d235ef1e-fb13-4177-adb3-6ddaf159bfea/r)
- `ChatAnthropic` Claude 2: [`JSONDecodeError` due to message before the JSON + Markdown formatting](https://smith.langchain.com/public/27eed20e-9ca5-4b4a-a3d3-71efde7aee3c/r)
- `ChatAnyscale` llama 2 70b: [successful trace](https://smith.langchain.com/public/e8f0b44c-424f-47ef-8cb3-31713b548d46/r?tab=0)
- `ChatAnyscale` llama 2 70b: [`JSONDecodeError` due to message before the JSON + Markdown formatting](https://smith.langchain.com/public/c89a7415-477d-4b82-a0a4-0a333b8b5680/r)
Each of these models had a mix of successful and failed runs. I probably should have lowered the temperature and tried non-chat models as well.
*[screenshot: table of runs showing a mix of successes and JSONDecodeError failures across models]*
I would like to help however I can on this; I just want to be sure my fix aligns with the greater vision for this chain, since it seems particularly useful. :)
### System Info
Version info from LangSmith:
```
RUNTIME
langchain_version: "0.0.268"
library: "langchain"
library_version: "0.0.268"
platform: "Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.36"
runtime: "python"
runtime_version: "3.11.4"
sdk_version: "0.0.25"
```
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.chains import QAGenerationChain
# tested with gpt-3.5-turbo, gpt-4, claude 2, and llama 2 70b
qagen = QAGenerationChain.from_llm(llm=llm)
question = qagen.run(docs[0].page_content)
# JSONDecodeError (sometimes)
```
The error is inconsistent, as seen in the screenshot above. This is for reasons we're all familiar with: perhaps the output is wrapped in Markdown code formatting with triple-backticks, maybe it's prefaced with a "sure, here's your JSON", etc.
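One workaround sketch is to pull the JSON out of the raw text before decoding (the regex-based extraction below is my own assumption, not what the chain does today):
```python
import json
import re

def extract_json(text: str):
    """Grab the first {...} span from a response that may be wrapped in
    Markdown fences or prefaced with chatter, then decode it leniently."""
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match is None:
        raise ValueError(f"No JSON object found in: {text!r}")
    return json.loads(match.group(0), strict=False)
```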
Relevant examples from LangSmith:
- `ChatOpenAI` gpt-3.5-turbo: [successful trace](https://smith.langchain.com/public/6c42845c-bbe2-41c1-99a4-7669112f1504/r)
- `ChatOpenAI` gpt-4: [`JSONDecodeError` due to Markdown formatting](https://smith.langchain.com/public/d235ef1e-fb13-4177-adb3-6ddaf159bfea/r)
- `ChatAnthropic` Claude 2: [`JSONDecodeError` due to message before the JSON + Markdown formatting](https://smith.langchain.com/public/27eed20e-9ca5-4b4a-a3d3-71efde7aee3c/r)
- `ChatAnyscale` llama 2 70b: [successful trace](https://smith.langchain.com/public/e8f0b44c-424f-47ef-8cb3-31713b548d46/r?tab=0)
- `ChatAnyscale` llama 2 70b: [`JSONDecodeError` due to message before the JSON + Markdown formatting](https://smith.langchain.com/public/c89a7415-477d-4b82-a0a4-0a333b8b5680/r)
### Expected behavior
`QAGenerationChain.run` should return its expected output with no `JSONDecodeError`. I realize there's only so much we can do for the chain's default settings, and an LLM call is inherently unpredictable, but some parsing and error handling would be nice. :) | QAGenerationChain.run often ends in JSONDecodeError | https://api.github.com/repos/langchain-ai/langchain/issues/9505/comments | 6 | 2023-08-20T06:01:56Z | 2024-02-08T03:16:35Z | https://github.com/langchain-ai/langchain/issues/9505 | 1,857,988,199 | 9,505 |
[
"langchain-ai",
"langchain"
] | ### System Info
```
! pip show langchain
Name: langchain
Version: 0.0.268
Summary: Building applications with LLMs through composability
Home-page: https://www.github.com/hwchase17/langchain
! pip show openlm
Name: openlm
Version: 0.0.5
Summary: Drop-in OpenAI-compatible that can call LLMs from other providers
```
Python 3.11.3
### Who can help?
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
! pip install openlm
! pip install langchain
from getpass import getpass
import os
from langchain.llms import OpenLM
from langchain import PromptTemplate, LLMChain
os.environ['OPENAI_API_KEY'] = "<openai-api-key>"
os.environ['HF_API_TOKEN'] = "<hf-api-key>"
if "OPENAI_API_KEY" not in os.environ:
    print("Enter your OpenAI API key:")
    os.environ["OPENAI_API_KEY"] = getpass()
if "HF_API_TOKEN" not in os.environ:
    print("Enter your HuggingFace Hub API key:")
    os.environ["HF_API_TOKEN"] = getpass()
question = "What is the capital of France?"
template = """Question: {question} Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
for model in ["text-davinci-003", "huggingface.co/gpt2"]:
    llm = OpenLM(model=model)
    llm_chain = LLMChain(prompt=prompt, llm=llm)
    result = llm_chain.run(question)
    print("""Model: {} Result: {}""".format(model, result))
```
When I run the above code, I get the following error:
`TypeError: Completion.create() got an unexpected keyword argument 'api_key'`
Can anyone help me fix this issue?
### Expected behavior
As mentioned in the document below,
https://python.langchain.com/docs/integrations/llms/openlm
Model: text-davinci-003
Result: France is a country in Europe. The capital of France is Paris.
Model: huggingface.co/gpt2
Result: Question: What is the capital of France?
Answer: Let's think step by step. I am not going to lie, this is a complicated issue, and I don't see any solutions to all this, but it is still far more | LLM Chain: Unable to run OpenLM - unexpected keyword argument 'api_key' in Completion.create class | https://api.github.com/repos/langchain-ai/langchain/issues/9504/comments | 6 | 2023-08-20T05:26:12Z | 2023-11-27T16:07:11Z | https://github.com/langchain-ai/langchain/issues/9504 | 1,857,958,938 | 9,504 |
[
"langchain-ai",
"langchain"
] | > from langchain.llms import HuggingFacePipeline
`ImportError: cannot import name 'HuggingFacePipeline' from 'langchain.llms'` | Issue: Can't import HuggingFacePipeline | https://api.github.com/repos/langchain-ai/langchain/issues/9502/comments | 12 | 2023-08-20T02:53:24Z | 2024-06-13T20:44:44Z | https://github.com/langchain-ai/langchain/issues/9502 | 1,857,932,992 | 9,502 |
[
"langchain-ai",
"langchain"
] | ### System Info
requirements.txt:
```
langchain==0.0.254
atlassian-python-api==3.36.0
chromadb==0.3.25
huggingface-hub==0.16.4
torch==2.0.1
sentence-transformers==2.2.2
InstructorEmbedding==1.0.0
p4python==2023.1.2454917
lxml==4.9.2
bs4==0.0.1
```
Dockerfile
```Dockerfile
FROM python:3.10
# Create a directory for your application
WORKDIR /app
COPY requirements.txt .
# Upgrade pip
RUN wget https://bootstrap.pypa.io/get-pip.py
RUN python3.10 get-pip.py
RUN python3.10 -m pip install --upgrade pip
RUN python3.10 -m pip install -r requirements.txt
# RUN python3.10 preload.py
RUN python3.10 -m pip install openai tiktoken
RUN apt-get update && apt-get install -y vim
COPY . .
ENTRYPOINT sleep infinity;
```
Host where I am running Docker: (Am also running python3.10 on this)
```bash
Software:
System Software Overview:
System Version: macOS 13.4 (22F66)
Kernel Version: Darwin 22.5.0
Boot Volume: Macintosh HD
Boot Mode: Normal
Hardware:
Hardware Overview:
Model Name: MacBook Pro
Model Identifier: Mac14,10
Chip: Apple M2 Pro
Total Number of Cores: 12 (8 performance and 4 efficiency)
Memory: 32 GB
```
### Who can help?
@agola11 @hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run this script with above requirements.txt in docker container set up by Dockerfile
```python
from langchain.embeddings import HuggingFaceEmbeddings, HuggingFaceInstructEmbeddings
from langchain.embeddings import OpenAIEmbeddings
from langchain.document_loaders import ConfluenceLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
loader = ConfluenceLoader(...)
documents = loader.load(...)
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
embeddings = HuggingFaceInstructEmbeddings(model_name= "hkunlp/instructor-large", model_kwargs={"device": "cpu"})
#Vector Database Storage: Store the generated embeddings in a vector database, allowing for efficient similarity searches.
db = Chroma.from_documents(texts, embeddings) # Stuck here
```
Successful runs where I am able to upload embeddings to db:
- Running this on my mac (no Docker)
- Running this in Docker container with OpenAIEmbeddings
Runs that get stuck:
- Running this in Docker container with HuggingFaceEmbeddings or HuggingFaceInstructEmbeddings
When I manually interrupt that stuck process, here is the traceback:
```bash
Traceback (most recent call last):
File "/app/context.py", line 26, in <module>
ctx_manager.register_documents(paths, text_splitter_type, text_splitter_params,
File "/app/context_manager/context_manager/context_manager.py", line 197, in register_documents
self._vectordb_manager.add_documents(texts, embeddings,
File "/app/context_manager/db_managers/db_managers.py", line 92, in add_documents
vectorstore = Chroma.from_documents(
File "/usr/local/lib/python3.10/site-packages/langchain/vectorstores/chroma.py", line 603, in from_documents
return cls.from_texts(
File "/usr/local/lib/python3.10/site-packages/langchain/vectorstores/chroma.py", line 567, in from_texts
chroma_collection.add_texts(texts=texts, metadatas=metadatas, ids=ids)
File "/usr/local/lib/python3.10/site-packages/langchain/vectorstores/chroma.py", line 187, in add_texts
embeddings = self._embedding_function.embed_documents(texts)
File "/usr/local/lib/python3.10/site-packages/langchain/embeddings/huggingface.py", line 77, in embed_documents
embeddings = self.client.encode(texts, **self.encode_kwargs)
File "/usr/local/lib/python3.10/site-packages/sentence_transformers/SentenceTransformer.py", line 165, in encode
out_features = self.forward(features)
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/container.py", line 217, in forward
input = module(input)
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/sentence_transformers/models/Transformer.py", line 74, in forward
output_states = self.auto_model(**trans_features, return_dict=False)
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 1935, in forward
encoder_outputs = self.encoder(
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 1094, in forward
layer_outputs = layer_module(
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 754, in forward
hidden_states = self.layer[-1](hidden_states)
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 343, in forward
forwarded_states = self.DenseReluDense(forwarded_states)
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 288, in forward
hidden_states = self.wi(hidden_states)
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
KeyboardInterrupt
```
I thought it might be the CPU usage, but on my Mac the CPU usage maxes out at around 270% and does not get stuck. In Docker, however, the CPU usage maxes out at 600% (6 CPUs reserved) and still gets stuck. Memory is also below the limit in both environments.
### Expected behavior
Texts are supposed to be encoded and then uploaded to the vector DB | HuggingFaceInstructEmbeddings hangs in Docker container, but runs fine on macOS | https://api.github.com/repos/langchain-ai/langchain/issues/9498/comments | 9 | 2023-08-19T20:07:57Z | 2024-03-13T20:01:26Z | https://github.com/langchain-ai/langchain/issues/9498 | 1,857,854,089 | 9,498 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I have the following code:
```
pipe = transformers.pipeline(
model=llm_model,
tokenizer=tokenizer,
return_full_text=True,
task='text-generation',
temperature=0.2,
max_new_tokens=200
)
llm = HuggingFacePipeline(pipeline=pipe)
retriever = vector_db.as_retriever(search_kwargs={"k": 4})
qa = RetrievalQA.from_chain_type(llm=llm,
chain_type="stuff",
retriever=retriever,
return_source_documents=True)
```
When I call qa(query) it executes in about 24 seconds. But when I call the llm directly with the same query and the context obtained from the retriever.get_relevant_documents method, it gives me a different (but still relevant) result and executes in 39 seconds, almost twice as long as qa. So I would like to understand what happens under the hood when I call RetrievalQA, and why the speeds are so different.
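For reference, with `chain_type="stuff"` the chain is roughly equivalent to the following (a simplification; the real default prompt is longer), so the prompt wording, and therefore the number of generated tokens, is the main difference from a hand-rolled call and can easily account for the timing gap:
```python
docs = retriever.get_relevant_documents(query)
context = "\n\n".join(doc.page_content for doc in docs)
prompt = (
    "Use the following pieces of context to answer the question at the end.\n\n"
    f"{context}\n\nQuestion: {query}\nHelpful Answer:"
)
answer = llm(prompt)  # one LLM call over the stuffed context
```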
### Suggestion:
_No response_ | Issue: RetrievalQA runs twice as fast as a direct Hugging Face model call | https://api.github.com/repos/langchain-ai/langchain/issues/9492/comments | 5 | 2023-08-19T15:26:52Z | 2023-11-30T16:07:16Z | https://github.com/langchain-ai/langchain/issues/9492 | 1,857,775,366 | 9,492 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I bumped langchain to version 0.0.268 and encountered the error below while implementing 'with_fallbacks'.
Error:
```
pydantic.error_wrappers.ValidationError: 1 validation error for LLMChain
llm
  Can't instantiate abstract class BaseLanguageModel with abstract methods agenerate_prompt, apredict, apredict_messages, generate_prompt, invoke, predict, predict_messages (type=type_error)
```
Sample Code:
```python
chat_llm = ChatOpenAI(model_name='gpt-3.5-turbo',
                      temperature=0,
                      model_kwargs={'top_p': 0.95},
                      max_tokens=512,
                      streaming=False,
                      verbose=False)
fallback_llm = ChatOpenAI(model_name='gpt-4',
                          temperature=0,
                          model_kwargs={'top_p': 0.95},
                          max_tokens=512,
                          streaming=False,
                          verbose=False)
chat_llm.with_fallbacks([fallback_llm], exceptions_to_handle=(Exception,))
```
Pydantic version: 1.10.12
Is this because of the pydantic migration? Are there any immediate fixes, or should I wait until August 25th for the migration to complete?
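For what it's worth, the object returned by `with_fallbacks` is a `Runnable`, not a `BaseLanguageModel`, so it cannot be assigned to `LLMChain.llm`; composing it in an LCEL pipeline sidesteps that. A sketch (my workaround, not official guidance):
```python
from langchain.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("{question}")
chain = prompt | chat_llm.with_fallbacks([fallback_llm])
chain.invoke({"question": "Hello!"})
```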
### Suggestion:
_No response_ | Issue: RunnableWithFallbacks: Can't instantiate abstract class BaseLanguageModel ??? | https://api.github.com/repos/langchain-ai/langchain/issues/9489/comments | 19 | 2023-08-19T11:55:03Z | 2024-05-21T16:35:19Z | https://github.com/langchain-ai/langchain/issues/9489 | 1,857,687,357 | 9,489 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
May I ask if there are any plans to support the Baidu Wenxin LLM in the near future? I saw that the JavaScript version supports Baidu Wenxin, but the Python version does not.
### Suggestion:
_No response_ | [ask]: about baiduwenxin LLM | https://api.github.com/repos/langchain-ai/langchain/issues/9488/comments | 2 | 2023-08-19T07:19:40Z | 2023-11-25T16:06:48Z | https://github.com/langchain-ai/langchain/issues/9488 | 1,857,615,923 | 9,488 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I use the embedding model vinai/phobert-base from Hugging Face:
*[screenshot: embedding model setup]*
Then I get this warning:
```
WARNING:sentence_transformers.SentenceTransformer:No sentence-transformers model found with name /root/.cache/torch/sentence_transformers/vinai_phobert-base. Creating a new one with MEAN pooling.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
```
### Suggestion:
_No response_ | Problem with embedding model | https://api.github.com/repos/langchain-ai/langchain/issues/9486/comments | 12 | 2023-08-19T03:25:59Z | 2023-12-02T16:06:12Z | https://github.com/langchain-ai/langchain/issues/9486 | 1,857,522,891 | 9,486 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
In my problem statement I am defining multiple tools, which are all based on retrieval chains, and I am using the OpenAI agent to call these tools.
So when I run a query, it only hits a single vectorstore. But I need my agent to query other vectorstores if the answer is not found in that particular vectorstore. I know how to get this working with other agent types, but I need it to be done with the OpenAI agent.
### Suggestion:
_No response_ | Invoke Multiple tools | https://api.github.com/repos/langchain-ai/langchain/issues/9483/comments | 3 | 2023-08-18T23:09:08Z | 2023-11-24T16:06:09Z | https://github.com/langchain-ai/langchain/issues/9483 | 1,857,382,588 | 9,483 |
[
"langchain-ai",
"langchain"
] | ### System Info
Name: langchain
Version: 0.0.268
Python 3.10.10
### Who can help?
@hwchase17
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain import PromptTemplate, OpenAI, LLMChain
prompt_template = "You're a helpful assistant that answers all questions?"
llm1 = OpenAI(model="foo", temperature=0)
llm_chain1 = LLMChain(
llm=llm1,
prompt=PromptTemplate.from_template(prompt_template)
)
llm2 = OpenAI(temperature=0)
llm_chain2 = LLMChain(
llm=llm2,
prompt=PromptTemplate.from_template(prompt_template)
)
llm_chain = llm_chain1.with_fallbacks([llm_chain2])
search_desc = "Use this tool to answer user questions when asked to Search the Web."
prefix = """ You're a helpful Assistant Chatbot. """
suffix = """{chat_history} {input} {agent_scratchpad}"""
from langchain.utilities import SerpAPIWrapper
from langchain.agents import (
ZeroShotAgent,
Tool,
AgentExecutor,
)
search = SerpAPIWrapper()
tools = [
Tool(
func=search.run, description=search_desc, name="Search_the_web"
)
]
input_variables = ["input", "chat_history", "agent_scratchpad"]
prompt = ZeroShotAgent.create_prompt(
tools,
prefix=prefix,
suffix=suffix,
input_variables=input_variables,
)
tool_names = [tool.name for tool in tools]
agent = ZeroShotAgent(
llm_chain=llm_chain,
allowed_tools=tool_names,
verbose=True,
handle_parsing_errors="ignore",
)
response = AgentExecutor.from_agent_and_tools(
agent=agent, tools=tools, verbose=True
)
```
### Expected behavior
To be able to pass a RunnableWithFallbacks as an alternative to an LLMChain in ZeroShotAgent. | RunnableWithFallbacks | https://api.github.com/repos/langchain-ai/langchain/issues/9474/comments | 1 | 2023-08-18T20:22:34Z | 2023-10-15T15:32:35Z | https://github.com/langchain-ai/langchain/issues/9474 | 1,857,249,373 | 9,474 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain 0.0.264
sqlalchemy 1.4.39
Platform mac os ventura 13.2.1
Python 3.11.4
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. from langchain.utilities import SQLDatabase
2. create a connection string (conn) with mssql dialect
3. create a connection using db = SQLDatabase.from_uri(conn)
4. use db.get_usable_table_names() to print table names
5. you will see an empty list returning
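A workaround sketch to try first: if the tables live in a non-default schema, passing it explicitly may help (the schema name below is illustrative):
```python
db = SQLDatabase.from_uri(conn, schema="dbo")
print(db.get_usable_table_names())
```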
### Expected behavior
it should return table names from the database hosted in mssql | SQLDatabase object returns empty list with get_usable_table_names() | https://api.github.com/repos/langchain-ai/langchain/issues/9469/comments | 7 | 2023-08-18T18:43:24Z | 2024-01-30T00:41:12Z | https://github.com/langchain-ai/langchain/issues/9469 | 1,857,140,529 | 9,469 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I propose an enhancement to the pgvector functionality: the addition of an update feature in [pgvector.py](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/pgvector.py). As it stands, the code only seems to allow for the addition of new embeddings. I believe a specific function, potentially named `PGVector.update_documents()`, would greatly improve the utility of this module.
### Motivation
The inspiration for this proposal comes from an opportunity I've identified in my project to enhance efficacy and efficiency. I believe that the ability to update embeddings when one of my documents changes would streamline the process significantly. The current workaround of deleting the entire collection or removing all embeddings for the target document to save new ones, though functional, leaves room for improvement. This feature would not only optimize my project but could also benefit other users of pgvector who might face a similar need.
### Your contribution
I am prepared and eager to contribute to the creation of this feature. The `PGVector.update_documents()` function I propose would (a rough sketch follows the list):
1. Retrieve the current list of embeddings for the document and load it into memory.
2. Obtain the chunks of the updated document.
3. Compare the updated document's chunks with the existing embeddings in memory to identify matches, potentially saving unnecessary model calls for existing embeddings. If the embedding doesn't exist in memory, the model would be called to generate the embedding.
4. Issue an update to the DB for any changing embeddings, an insert for any new ones, and a delete for any missing embeddings.
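A rough sketch of that flow (every helper name below is hypothetical; this shows the shape, not a real API):
```python
def update_documents(store, collection_id: str, doc_id: str, new_chunks: list, embed_fn):
    """Proposed PGVector.update_documents() flow, sketched in plain Python."""
    existing = store.get_embeddings_for_document(collection_id, doc_id)  # hypothetical helper
    existing_by_text = {e.document: e for e in existing}
    for chunk in new_chunks:
        if chunk in existing_by_text:
            continue  # embedding already stored; skip the model call
        store.upsert_embedding(collection_id, doc_id, chunk, embed_fn(chunk))  # hypothetical helper
    stale = set(existing_by_text) - set(new_chunks)
    store.delete_embeddings(collection_id, doc_id, texts=stale)  # hypothetical helper
```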
I am willing to submit a PR for this feature upon receiving feedback from the langchain maintainers. | Add Functionality to Update Embeddings in pgvector | https://api.github.com/repos/langchain-ai/langchain/issues/9461/comments | 7 | 2023-08-18T17:07:12Z | 2024-01-18T01:38:08Z | https://github.com/langchain-ai/langchain/issues/9461 | 1,857,029,157 | 9,461 |
[
"langchain-ai",
"langchain"
The following code throws `langchain.schema.output_parser.OutputParserException` 90% of the time:
```python
# imports added for completeness; the parser's import path is taken from the permalink below
from pydantic import BaseModel
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers.openai_functions import JsonOutputFunctionsParser2
from langchain.prompts import (ChatPromptTemplate, HumanMessagePromptTemplate,
                               SystemMessagePromptTemplate)

n = 100  # placeholder: `n` was undefined in the original snippet
model = ChatOpenAI(model='gpt-4-0613') # code generation on gpt-3.5 isn't strong enough
prompt = ChatPromptTemplate(messages=[
SystemMessagePromptTemplate.from_template("Write code to solve the users problem. the last line of the python program should print the answer. Do not use sympy"),
HumanMessagePromptTemplate.from_template(f"What is the {n}th prime"),
])
class PythonExecutionEnvironment(BaseModel):
    valid_python: str
    code_explanation: str
python_repl = {"name": "python_repl", "parameters": PythonExecutionEnvironment.model_json_schema()}
chain = prompt | model.bind(
function_call={"name": python_repl["name"]}, functions=[python_repl]
) | JsonOutputFunctionsParser2()
response = chain.invoke({})
```
The reason is that GPT-4 is returning control characters in its Python code, which makes the parser throw:
https://github.com/langchain-ai/langchain/blob/50b8f4dcc722eb2ec5eccb17c25f1d3895442caa/libs/langchain/langchain/output_parsers/openai_functions.py#L44
The following change resolves it, but I'm not sure if there are other consequences:
```python
return json.loads(function_call_info, strict=False)
```
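A quick, self-contained illustration of why `strict=False` matters here:
```python
import json

raw = '{"code": "line1\nline2"}'  # embedded literal newline, as GPT-4 emits inside code strings
try:
    json.loads(raw)
except json.JSONDecodeError as err:
    print(err)  # Invalid control character at: line 1 column 16 (char 15)
print(json.loads(raw, strict=False))  # {'code': 'line1\nline2'}
```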
More details from Stack Overflow
* https://stackoverflow.com/questions/22394235/invalid-control-character-with-python-json-loads | langchain.schema.output_parser.OutputParserException: Could not parse function call data: Invalid control character Using JsonOutputFunctionParser | https://api.github.com/repos/langchain-ai/langchain/issues/9460/comments | 5 | 2023-08-18T16:52:56Z | 2024-01-11T07:32:13Z | https://github.com/langchain-ai/langchain/issues/9460 | 1,857,013,821 | 9,460 |
[
"langchain-ai",
"langchain"
] | ### System Info
I am playing around with tools in LangChain, but I am running into an issue where the output of the model is not a `FunctionMessage` type even though the LLM is making a function call. For example in this code below
```python
# Import libraries
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage, AIMessage, FunctionMessage
from langchain.tools import tool, format_tool_to_openai_function
# Initialize the chat model
model = ChatOpenAI(model="gpt-3.5-turbo-0613", temperature=0)
# Define tools
@tool
def myfunc(x):
    """Calculate myfunc(x)."""
    return x ** 0.5
@tool
def myotherfunc(x):
    """Calculate myotherfunc(x)."""
    return x ** 2
tools = [myfunc]
# Define your messages
messages = [
SystemMessage(content="You are a helpful AI who can calculate special functions using provided functions."),
HumanMessage(content="What is func(4)?")
]
# Call the predict_messages method
functions = [format_tool_to_openai_function(t) for t in tools]
response = model.predict_messages(messages, functions=functions)
# Print the assistant's reply
print(response)
```
`response` is an AI message
```
AIMessage(content='', additional_kwargs={'function_call': {'name': 'myfunc', 'arguments': '{\n "x": 4\n}'}}, example=False)
```
But I believe it should be a FunctionMessage.
Secondly, there is no clear way to turn an `AIMessage` of this type into a FunctionMessage to pass downstream. I have tried
```python
FunctionMessage(content='', **response.additional_kwargs)
FunctionMessage(content='', additional_kwargs=response.additional_kwargs)
FunctionMessage(content='', additional_kwargs=str(response.additional_kwargs))
```
but each of these attempts gives an error.
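For context, here is a sketch of the conventional round trip, continuing the script above (my reading of the message schema is that `FunctionMessage` wants a `name` plus the tool's result as `content`, and is authored by the caller rather than returned by the model):
```python
import json

call = response.additional_kwargs["function_call"]
args = json.loads(call["arguments"])   # {'x': 4}
result = myfunc.run(args)              # execute the tool ourselves
messages.append(response)              # the model's *request* to call the function
messages.append(FunctionMessage(name=call["name"], content=str(result)))
follow_up = model.predict_messages(messages, functions=functions)
print(follow_up.content)
```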
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run the code provided above.
### Expected behavior
The output should be a `FunctionMessage`. | OpenAI function call is not a FunctionMessage type | https://api.github.com/repos/langchain-ai/langchain/issues/9457/comments | 4 | 2023-08-18T15:54:11Z | 2023-08-18T22:23:26Z | https://github.com/langchain-ai/langchain/issues/9457 | 1,856,945,358 | 9,457 |
[
"langchain-ai",
"langchain"
] | ### System Info
I got this error on my office laptop.
OS: Win 10
I checked the Azure OpenAI key, URL, deployment, and model names. No problems there.
This person is having the same issue as me: https://stackoverflow.com/questions/76750207/azureopenai-not-available-in-langchain
I deleted the "import openai" part, but nothing changed.
Could somebody please help me?
```
from dotenv import load_dotenv
from langchain.llms import AzureOpenAI
from langchain.embeddings import OpenAIEmbeddings
import openai
import os
# Load environment variables
load_dotenv()
# Configure Azure OpenAI Service API
openai.api_type = "azure"
openai.api_version = "2022-12-01"
openai.api_base = os.getenv('OPENAI_API_BASE')
openai.api_key = os.getenv("OPENAI_API_KEY")
# Create a completion
llm = AzureOpenAI(deployment_name="text-davinci-003", model_name="text-davinci-003")
joke = llm("Tell me a dad joke")
print(joke)
```
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from dotenv import load_dotenv
from langchain.llms import AzureOpenAI
from langchain.embeddings import OpenAIEmbeddings
import openai
import os
# Load environment variables
load_dotenv()
# Configure Azure OpenAI Service API
openai.api_type = "azure"
openai.api_version = "2022-12-01"
openai.api_base = os.getenv('OPENAI_API_BASE')
openai.api_key = os.getenv("OPENAI_API_KEY")
```
### Expected behavior
```python
from dotenv import load_dotenv
from langchain.llms import AzureOpenAI
from langchain.embeddings import OpenAIEmbeddings
import openai
import os
```
| cannot import name 'AzureOpenAI' from 'langchain.llms' | https://api.github.com/repos/langchain-ai/langchain/issues/9453/comments | 5 | 2023-08-18T15:19:17Z | 2024-02-13T16:13:37Z | https://github.com/langchain-ai/langchain/issues/9453 | 1,856,898,665 | 9,453 |
[
"langchain-ai",
"langchain"
] | ### System Info
Hello!
I wrote a code that is very similar to this one
https://python.langchain.com/docs/integrations/toolkits/document_comparison_toolkit
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
# imports added for completeness
from pydantic import BaseModel, Field
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import PyPDFLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import FAISS

class DocumentInput(BaseModel):
    question: str = Field()
llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")
tools = []
files = [
# https://abc.xyz/investor/static/pdf/2023Q1_alphabet_earnings_release.pdf
{
"name": "alphabet-earnings",
"path": "/Users/harrisonchase/Downloads/2023Q1_alphabet_earnings_release.pdf",
},
# https://digitalassets.tesla.com/tesla-contents/image/upload/IR/TSLA-Q1-2023-Update
{
"name": "tesla-earnings",
"path": "/Users/harrisonchase/Downloads/TSLA-Q1-2023-Update.pdf",
},
]
for file in files:
    loader = PyPDFLoader(file["path"])
    pages = loader.load_and_split()
    text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
    docs = text_splitter.split_documents(pages)
    embeddings = OpenAIEmbeddings()
    retriever = FAISS.from_documents(docs, embeddings).as_retriever()
    # Wrap retrievers in a Tool
    tools.append(
        Tool(
            args_schema=DocumentInput,
            name=file["name"],
            description=f"useful when you want to answer questions about {file['name']}",
            func=RetrievalQA.from_chain_type(llm=llm, retriever=retriever),
        )
    )
llm = ChatOpenAI(
temperature=0,
model="gpt-3.5-turbo-0613",
)
agent = initialize_agent(
agent=AgentType.OPENAI_FUNCTIONS,
tools=tools,
llm=llm,
verbose=True,
)
agent({"input": "did alphabet or tesla have more revenue?"})
```
### Expected behavior
It's working so far, but when I directly ask the AI to compare the documents, mentioning them by name,
the AI does not know what I am talking about!
I first have to ask questions specifically about each document, separately mentioning their names; then the LLM can compare them and answer my questions.
Any idea how to solve this issue?
I would appreciate any help! :)
Cheers | Asking about uploaded documents only works when I first ask the AI about a specific one | https://api.github.com/repos/langchain-ai/langchain/issues/9451/comments | 2 | 2023-08-18T14:23:13Z | 2023-11-25T16:06:54Z | https://github.com/langchain-ai/langchain/issues/9451 | 1,856,811,221 | 9,451 |
[
"langchain-ai",
"langchain"
] | ### Feature request
For the chains in libs/langchain/langchain/chains/summarize, it might be useful to add to the prompt something like:
"Try to limit the summary to {length} words", and set the length as an input parameter. So that we can have some influence over the length of the output summary.
I know that most LLMs are not good at counting words, maybe we should use token instead of words?
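In the meantime, a custom prompt already gets part of the way there. A sketch (assuming `llm` and `docs` exist; my understanding is that `StuffDocumentsChain` forwards extra prompt variables like `length`):
```python
from langchain.chains.summarize import load_summarize_chain
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    template=("Write a concise summary of the following. "
              "Try to limit the summary to {length} words:\n\n{text}"),
    input_variables=["text", "length"],
)
chain = load_summarize_chain(llm, chain_type="stuff", prompt=prompt)
summary = chain.run(input_documents=docs, length=100)
```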
### Motivation
When doing summarization, we sometimes want summaries of different lengths. For example, we might need to fit the summary into some system with a length limit, or sometimes we want a longer summary so we can include more details.
### Your contribution
I can submit a pull request, but I'm not sure if people believe the proposal is a good idea or not. | Add some control over the summary length for summarize chains | https://api.github.com/repos/langchain-ai/langchain/issues/9449/comments | 4 | 2023-08-18T13:49:38Z | 2024-04-17T05:59:06Z | https://github.com/langchain-ai/langchain/issues/9449 | 1,856,760,404 | 9,449 |
[
"langchain-ai",
"langchain"
] | ### System Info
I use Python 3.10.12 and langchain 0.0.262.
Using a chroma vector database within langchain, I encounter different behaviors between the .get() and .peek() methods. Specifically, when I use .get(), the embeddings field appears as None. However, when I use .peek(), the embeddings field is complete.
Here are the codes I have and their results:
Using db._collection.peek(1):
```
{'ids': ['/path/to/document.pdf_0'],
'embeddings': [[-0.013034389354288578,
0.004974348470568657, ...]],
'metadatas': [{'index': 0,
'page': 0,
'source': '/path/to/document.pdf'}],
'documents': ['Document content here...']}
```
Using db._collection.get(ids='/path/to/document.pdf_0'):
```
{'ids': ['/path/to/document.pdf_0'],
'embeddings': None,
'metadatas': [{'index': 0,
'page': 0,
'source': '/path/to/document.pdf'}],
'documents': ['Document content here...']}
```
The issue seems to be that the embeddings are missing when using the .get() method, but they are present when using the .peek() method. Could you please help me understand why this inconsistency occurs?
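For reference, my understanding of the chromadb API is that `get()` omits embeddings unless they are requested explicitly via `include`, whereas `peek()` returns them by default:
```python
db._collection.get(
    ids=["/path/to/document.pdf_0"],
    include=["embeddings", "metadatas", "documents"],
)
```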
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create a Chroma database with custom metadata and OpenAI embeddings.
2. Filter on just the first source with .get() so you can see the first k documents
3. Peek the same k documents with .peek()
### Expected behavior
The .get() method should return the embeddings too. | Inconsistent Embedding Field Behavior Between .get() and .peek() Methods in Vector Database within Langchain | https://api.github.com/repos/langchain-ai/langchain/issues/9442/comments | 1 | 2023-08-18T09:24:20Z | 2023-08-18T09:36:04Z | https://github.com/langchain-ai/langchain/issues/9442 | 1,856,376,828 | 9,442 |
[
"langchain-ai",
"langchain"
] | ### System Info
### Environment:
Google Colab project. When running `!cat /etc/os-release` it prints:
```
PRETTY_NAME="Ubuntu 22.04.2 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.2 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
```
### Python Environment
`!python -V`:
```
Python 3.10.12
```
`!pip show langchain`:
```
Name: langchain
Version: 0.0.267
Summary: Building applications with LLMs through composability
Home-page: https://www.github.com/hwchase17/langchain
Author:
Author-email:
License: MIT
Location: /usr/local/lib/python3.10/dist-packages
Requires: aiohttp, async-timeout, dataclasses-json, langsmith, numexpr, numpy, openapi-schema-pydantic, pydantic, PyYAML, requests, SQLAlchemy, tenacity
Required-by:
```
### Who can help?
@hwchase17
### Information
- [x] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Create a blank Google Colab Project (https://colab.research.google.com/#create=true) then add a code block and paste in this code:
```py
!pip install langchain==0.0.267
# or try just '!pip install langchain' without the explicit version
from pydantic import BaseModel, Field
class InputArgsSchema(BaseModel):
    strarg: str = Field(description="The string argument for this tool")
# THIS WORKS:
from typing import Type
class Foo(BaseModel):
    my_base_model_subclass: Type[BaseModel] = Field(..., description="Equivalent to the args_schema field in langchain/StructuredTool")
my_foo = Foo(
my_base_model_subclass=InputArgsSchema
)
print(f"My foo {my_foo} is successfully instantiated")
# BUT THIS DOES NOT:
from langchain.tools import StructuredTool
def my_tool_impl(strarg: str):
    print(f"Called myTool with strarg={strarg}")
my_tool = StructuredTool(
name="my_tool",
description="A demo tool for testing purposes",
args_schema=InputArgsSchema,
func=my_tool_impl
)
```
Now run the code block.
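As an aside, the classmethod constructor may sidestep the error; a sketch using the same pieces:
```python
my_tool = StructuredTool.from_function(
    func=my_tool_impl,
    name="my_tool",
    description="A demo tool for testing purposes",
    args_schema=InputArgsSchema,
)
```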
### Expected behavior
The `StructuredTool` instance should be instantiated without an exception, and the `InputArgsSchema` should be accepted as an argument for `args_schema`. | StructuredTool raises an error when instantiated | https://api.github.com/repos/langchain-ai/langchain/issues/9441/comments | 9 | 2023-08-18T08:45:10Z | 2024-01-23T06:13:17Z | https://github.com/langchain-ai/langchain/issues/9441 | 1,856,316,147 | 9,441 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain 0.0.267, Python 3.10, Poetry virtualenv, Pop_OS 22.04
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.chains import TransformChain
def transform(inputs):
    return {
'output': {
'structured': {
'report': 'Done'
}
}
}
async def atransform(inputs): return transform(inputs)
chain = TransformChain(
input_variables=['text'],
output_variables=['output'],
transform=transform, # <-- type error below happens here
atransform=atransform, # <-- If I remove this, I get: Argument missing for parameter "atransform"
)
```
```
Argument of type "(inputs: Unknown) -> dict[str, dict[str, dict[str, str]]]" cannot be assigned to parameter "transform" of type "(Dict[str, str]) -> Dict[str, str]" in function "__init__"
Type "(inputs: Unknown) -> dict[str, dict[str, dict[str, str]]]" cannot be assigned to type "(Dict[str, str]) -> Dict[str, str]"
Function return type "dict[str, dict[str, dict[str, str]]]" is incompatible with type "Dict[str, str]"
"dict[str, dict[str, dict[str, str]]]" is incompatible with "Dict[str, str]"
Type parameter "_VT@dict" is invariant, but "dict[str, dict[str, str]]" is not the same as "str"
```
### Expected behavior
Two type checking issues since 0.0.267:
- `transform` should probably take/return `dict[str, Any]` instead of `dict[str, str]` ([like `atransform` does](https://github.com/langchain-ai/langchain/blob/0689628489967785f3a11a9f29d8f6f90930f4f4/libs/langchain/langchain/chains/transform.py#L31C1-L35C40))
- `atransfrom` should probably not be mandatory
Most other code seems to use `Dict[str, Any]`, and non-string values seem to work just fine.
Prior to `0.0.267` the type system didn't enforce the type of `transform` when constructing `TransformChain`. It also started requiring an `atransform` parameter even though it seems to be intended to be optional. | TransformChain wrong function types, enforced since 0.0.267 | https://api.github.com/repos/langchain-ai/langchain/issues/9440/comments | 5 | 2023-08-18T08:41:31Z | 2023-11-25T16:06:59Z | https://github.com/langchain-ai/langchain/issues/9440 | 1,856,311,063 | 9,440 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi everyone, I'm trying to deploy and use LangSmith locally.
I deployed it in a Docker container using
```
langsmith start --expose --openai-api-key=<my azure OpenAi key>
```
The Docker containers look good:
*[screenshot: running LangSmith containers]*
I opened all the required ports to avoid any problems there; I'm running LangSmith on a remote machine.
I set up the environment variables
```
LANGCHAIN_TRACING_V2=true
LANGCHAIN_ENDPOINT=https://cc23-20-79-217-xxx.ngrok.io
LANGCHAIN_API_KEY=<my key>
```
but the interface is not loading the projects:
*[screenshot: LangSmith UI with no projects listed]*
When I try to access the LangSmith endpoint directly, it returns
```
{
"detail": "Not Found"
}
```
Using the chat example that appears in this repo:
https://github.com/langchain-ai/langsmith-cookbook/tree/main/feedback-examples/streamlit
I can see at the endpoint https://cc23-20-79-217-xxx.ngrok.io that the runs are being tracked, but I can't see them in the frontend.
**Debugging the frontend, it fails while trying to fetch the tenants: it requests http://127.0.0.1:1984/tenants when, if I'm not misunderstanding, it should request http://20.79.217.xxx:1984/tenants.**
*[screenshot: dev tools showing the failed request to 127.0.0.1:1984/tenants]*
Could it be a problem with Azure OpenAI, or did I do something wrong with the installation?
Thanks in advance
### Suggestion:
_No response_ | LangSmith expose is not working with Azure OpenAI services | https://api.github.com/repos/langchain-ai/langchain/issues/9438/comments | 4 | 2023-08-18T08:06:33Z | 2024-01-30T00:45:35Z | https://github.com/langchain-ai/langchain/issues/9438 | 1,856,263,823 | 9,438 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain version: 0.0.267
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
# import package
from langchain import LLMMathChain
from langchain.vectorstores.redis import Redis
from langchain.utilities import BingSearchAPIWrapper
from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType
from langchain.chat_models import AzureChatOpenAI
import dotenv
from langchain.memory import ConversationBufferMemory
memory = ConversationBufferMemory(memory_key="chat_history")
dotenv.load_dotenv()
# init agent
llm = AzureChatOpenAI(deployment_name="OpenAImodel")
search = BingSearchAPIWrapper(k=10)
llm_math_chain = LLMMathChain.from_llm(llm=llm)
tools = [
Tool(
name = "Search",
func=search.run,
description="useful for when you need to answer questions about current events. You should ask targeted questions."
),
Tool(
name="Calculator",
func=llm_math_chain.run,
description="useful for when you need to answer questions about math"
),
]
agent = initialize_agent(tools, llm,agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION, verbose=True, memory=memory)
# ask some questions; at a random point you will get the error: `Could not parse LLM output`
agent.run(input="幫我查詢現在台北的溫度,用攝氏單位")  # "Look up the current temperature in Taipei for me, in Celsius"
agent.run("用中文告訴我: 研華科技冰水主機的哪些組件或相關設備可能造成趨近溫度過高,最後用中文告訴我")  # "Tell me in Chinese: which components or related equipment of an Advantech chiller could cause the approach temperature to be too high; answer in Chinese at the end"
agent.run("用中文回答: 幫我查詢幫浦馬達故障通常是因為什麼原因?")  # "Answer in Chinese: look up the usual causes of pump motor failure"
agent.run("冰水主機的哪些組件或相關設備可能造成趨近溫度過高")  # "Which chiller components or related equipment could cause the approach temperature to be too high"
agent.run("用中文回答剛剛的問題")  # "Answer the previous question in Chinese"
```
### Expected behavior
At a random point you will get the error `Could not parse LLM output`:
```
File ~/miniconda3/envs/langc/lib/python3.8/site-packages/langchain/agents/conversational/output_parser.py:26, in ConvoOutputParser.parse(self, text)
24 match = re.search(regex, text)
25 if not match:
---> 26 raise OutputParserException(f"Could not parse LLM output: `{text}`")
27 action = match.group(1)
28 action_input = match.group(2)
OutputParserException: Could not parse LLM output: `在研華科技的冰水主機中,可能造成趨近溫度過高的組件或相關設備包括:風扇故障、散熱片堵塞、水泵故障、循環水管堵塞、或是防凍液不足等。建議您確保這些組件和設備處於良好狀態,以確保冰水主機的正常運作。`
```
(Translation of the unparsed output: "In Advantech's chiller, the components or related equipment that could cause an excessively high approach temperature include fan failure, clogged heat sinks, water pump failure, blocked circulation piping, or insufficient antifreeze. It is recommended to keep these components and equipment in good condition to ensure normal chiller operation.")
**My observation**: I found that the error always occurs when the `parse` function of `ConvoOutputParser` is called. The text may not contain expected keywords such as `Action:`, so the regular expression fails to match the LLM output.
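As a stopgap, the executor can be told to recover from these; a sketch (the same construction as above, with one extra kwarg):
```python
agent = initialize_agent(
    tools, llm,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    verbose=True, memory=memory,
    handle_parsing_errors=True,  # fall back gracefully when the regex misses
)
```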
good luck!!
| ConvoOutputParser randomly fails to parse LLM output | https://api.github.com/repos/langchain-ai/langchain/issues/9436/comments | 2 | 2023-08-18T07:23:52Z | 2023-11-24T16:06:34Z | https://github.com/langchain-ai/langchain/issues/9436 | 1,856,204,649 | 9,436 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hello, I'm trying to run LangSmith locally together with Azure OpenAI services.
I started LangSmith in Docker, and it looks like this:
*[screenshot: running LangSmith containers]*
And I'm running the chatbot from here https://github.com/langchain-ai/langsmith-cookbook/blob/main/feedback-examples/streamlit/README.md
Looks like the app is not allowed to send the interaction to LangSmith.
```
File "/home/azureuser/anaconda3/envs/snowflake/lib/python3.8/site-packages/langsmith/utils.py", line 55, in raise_for_status_with_text
response.raise_for_status()
File "/home/azureuser/anaconda3/envs/snowflake/lib/python3.8/site-packages/requests/models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 405 Client Error: Not Allowed for url: http://20.79.217.xxx/runs/3abf1f4c-2b48-4d1b-9628-979f7d12d9d5/share
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/azureuser/anaconda3/envs/snowflake/lib/python3.8/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 552, in _run_script
exec(code, module.__dict__)
File "/home/azureuser/langsmith/main.py", line 126, in <module>
url = client.share_run(run.id)
File "/home/azureuser/anaconda3/envs/snowflake/lib/python3.8/site-packages/langsmith/client.py", line 831, in share_run
raise_for_status_with_text(response)
File "/home/azureuser/anaconda3/envs/snowflake/lib/python3.8/site-packages/langsmith/utils.py", line 57, in raise_for_status_with_text
raise ValueError(response.text) from e
ValueError: <html>
<head><title>405 Not Allowed</title></head>
<body>
<center><h1>405 Not Allowed</h1></center>
<hr><center>nginx/1.24.0</center>
</body>
</html>
```
Something else that looks a bit odd: when I open the website, it's not showing me the projects,
*[screenshot: LangSmith UI with no projects shown]*
and I can't create a new one; it keeps telling me that the field is required.
*[screenshot: "New project" dialog with a "field is required" validation error]*
What can I do? Any hint is more than appreciated.
### Suggestion:
_No response_ | Running Langsmith locally not working | https://api.github.com/repos/langchain-ai/langchain/issues/9435/comments | 1 | 2023-08-18T07:11:45Z | 2023-08-18T07:57:29Z | https://github.com/langchain-ai/langchain/issues/9435 | 1,856,187,220 | 9,435 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain Version: 0.0.267
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
`llm = AzureChatOpenAI(deployment_name='gpt-4', temperature=0.0)`
`llm.dict()`
```python
{'model': 'gpt-3.5-turbo',
 'request_timeout': None,
 'max_tokens': None,
 'stream': False,
 'n': 1,
 'temperature': 0.0,
 'engine': 'gpt-4',
 '_type': 'azure-openai-chat'}
```
### Expected behavior
The model name and the engine should match. | While using a GPT-4 deployed on AzureOpenAI, model name is showing as 'gpt-3.5-turbo', while the engine is showing as 'gpt-4' | https://api.github.com/repos/langchain-ai/langchain/issues/9434/comments | 3 | 2023-08-18T06:36:12Z | 2023-12-20T16:06:41Z | https://github.com/langchain-ai/langchain/issues/9434 | 1,856,141,970 | 9,434 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I have a JSONL file that has the format:
```json
{
  "question": ...,
  "answer": ...
}
```
I only want to use "question" as the "page_content" for retrieval, because if I merge "question" and "answer" into "page_content", the answer text adds noise to retrieval.
I have a few ideas for after retrieval:
- Replace "page_content" with "answer" before giving it to the LLM (see the sketch below).
- Or customize the as_retriever() function to return "answer" in the metadata.
- Or use a new data format?
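A sketch of the first idea (assuming you build the `Document`s yourself from the JSONL; `retriever` and `query` stand in for your setup):
```python
import json
from langchain.schema import Document

docs = []
with open("qa.jsonl") as f:
    for line in f:
        record = json.loads(line)
        docs.append(Document(page_content=record["question"],
                             metadata={"answer": record["answer"]}))

# ...index `docs` into your vectorstore, then after retrieval:
retrieved = retriever.get_relevant_documents(query)
for doc in retrieved:
    doc.page_content = doc.metadata["answer"]  # hand the answer, not the question, to the LLM
```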
So how can I do this? Thank you | Issue: How to replace "page_content" after "retrieval" | https://api.github.com/repos/langchain-ai/langchain/issues/9429/comments | 4 | 2023-08-18T04:04:37Z | 2023-11-24T16:06:39Z | https://github.com/langchain-ai/langchain/issues/9429 | 1,856,013,098 | 9,429 |
### Suggestion:
_No response_ | Issue: How to replace "page_content" after "retrieval" | https://api.github.com/repos/langchain-ai/langchain/issues/9429/comments | 4 | 2023-08-18T04:04:37Z | 2023-11-24T16:06:39Z | https://github.com/langchain-ai/langchain/issues/9429 | 1,856,013,098 | 9,429 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```python
template = """Answer the following questions as best you can. You have access to the following tools:
{tools}
{chat_history}
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Begin!
Question: {input}
{agent_scratchpad}
"""
tool_names = [tool.name for tool in tools]
agent = LLMSingleActionAgent(
llm_chain=llm_chain,
output_parser=output_parser,
stop=["\nObservation:"],
allowed_tools=tool_names,
)
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent,
tools=tools,
verbose=True,
memory = memory,
system_message = system_message,
agent_kwargs = agent_kwargs,
handle_parsing_errors=True,
)
template = """
after answering each user question you MUST append multiple follow-up questions whose purpose is to get the user to engage more with you based on the topic, so that the conversation can keep flowing
Instructions
'''
1. Your task is to engage with the user and get the user to take a trivia quiz alongside the chat. You will start by asking at the beginning: ```Before we start, do you want to play a trivia and get a chance to enter a draw to win a prize?``` If the user gives a positive response,
then you should run a quiz of 5 questions with 4 options each. The quiz should be taken from the following question bank: {question_bank}. If the user says no, you should ask again at random points in between: ```Do you want to play a trivia and get a chance to enter a draw to win a prize?``` If the user gives a positive response,
then you should run a quiz of 5 questions with 4 options each. The quiz should be taken from the following question bank: {question_bank}. If the user says no and has never responded positively to taking the quiz, then and only then, at the end, you should ask the user: ```Do you want to play a trivia and get a chance to enter a draw to win a prize?``` If the user gives a positive response,
then you should run a quiz of 5 questions with 4 options each. The quiz should be taken from the following question bank: {question_bank}.
'''
Above Instructions should be strictly followed
""".format(question_bank = questions)
```
This is the relevant piece of code, and I need to write the prompt in such a way that the agent runs the quiz whenever it is asked to.
I need to pass a system message so that the agent follows the provided instructions.
This is the link to the source code:
https://python.langchain.com/docs/modules/agents/how_to/custom_agent_with_tool_retrieval
### Suggestion:
I tried to add the template as a system message, but the agent was not able to follow the instructions:
(the same `LLMSingleActionAgent` / `AgentExecutor.from_agent_and_tools` setup as above, with `system_message` and `agent_kwargs` passed in)
But this did not work.
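A possible direction (a sketch: `LLMSingleActionAgent` has no separate system-message slot, so the instructions are folded directly into the agent's prompt template instead; `CustomPromptTemplate` is the template class from the linked tutorial, and the quiz template above is assumed renamed to `quiz_instructions`, since both strings are currently assigned to `template`):
```python
# Prepend the quiz instructions to the ReAct scaffold template.
full_template = quiz_instructions + template

prompt = CustomPromptTemplate(
    template=full_template,
    tools=tools,
    input_variables=["input", "intermediate_steps", "chat_history"],
)
llm_chain = LLMChain(llm=llm, prompt=prompt)
```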
Source code link:
https://python.langchain.com/docs/modules/agents/how_to/custom_agent_with_tool_retrieval | Add System message in the LLMSingleActionAgent | https://api.github.com/repos/langchain-ai/langchain/issues/9427/comments | 1 | 2023-08-18T02:54:56Z | 2023-11-24T16:06:44Z | https://github.com/langchain-ai/langchain/issues/9427 | 1,855,964,090 | 9,427 |
[
"langchain-ai",
"langchain"
] | ### System Info
Version 0.0.266, all
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
If you specify input variables in your `final_prompt` they are not actually registered as input variables on the prompt, whereas input variables on the pipeline prompts are.
```python
from langchain.prompts import PipelinePromptTemplate, PromptTemplate

pipeline_prompt = PipelinePromptTemplate(
    final_prompt=PromptTemplate.from_template("{foo} is a var"),
    pipeline_prompts=[]
)
pipeline_prompt.input_variables  # returns []
```
vs
```python
pipeline_prompt = PipelinePromptTemplate(
    final_prompt=PromptTemplate.from_template("final prompt + {other}"),
    pipeline_prompts=[
        ("other", PromptTemplate.from_template("{foo}"))
    ]
)
pipeline_prompt.input_variables  # returns ["foo"]
```
### Expected behavior
Looks like it's just not in the code here: https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/prompts/pipeline.py#L35
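A sketch of what the fix might look like (an approximation of the validator in `pipeline.py`, not the actual library code; the `final_prompt` line is the addition):
```python
@root_validator(pre=True)
def get_input_variables(cls, values):
    created_variables = set()
    all_variables = set()
    for k, prompt in values["pipeline_prompts"]:
        created_variables.add(k)
        all_variables.update(prompt.input_variables)
    # Also pick up variables required by the final prompt.
    all_variables.update(values["final_prompt"].input_variables)
    values["input_variables"] = list(all_variables.difference(created_variables))
    return values
```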
Assuming that input variables should also be extracted from the final prompt, but _after_ the pipeline prompts are injected, right? | Input variables in PipelinePromptTemplate's final_prompt are not extracted as input variables | https://api.github.com/repos/langchain-ai/langchain/issues/9423/comments | 5 | 2023-08-17T20:54:45Z | 2024-03-20T16:05:23Z | https://github.com/langchain-ai/langchain/issues/9423 | 1,855,698,506 | 9,423 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I do not have access to huggingface.co in my environment, but I do have the Instructor model (hkunlp/instructor-large) saved locally. How do I point the LangChain class HuggingFaceInstructEmbeddings at a local model?
I tried the below code but received an error:
```
from langchain.embeddings import HuggingFaceInstructEmbeddings
model_name = "<local_filepath>/hkunlp/instructor-large"
model_kwargs = {'device': 'cpu'}
encode_kwargs = {'normalize_embeddings': True}
hf = HuggingFaceInstructEmbeddings(
    model_name=model_name,
    model_kwargs=model_kwargs,
    encode_kwargs=encode_kwargs
)
embeddings = HuggingFaceInstructEmbeddings(
    query_instruction="Represent the query for retrieval: "
)
```
Error:
HfHubHTTPError: 403 Client Error: Forbidden for url: https://huggingface.co/api/models/hkunlp/instructor-large
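One workaround to try (a sketch; note that the second `HuggingFaceInstructEmbeddings(...)` call above falls back to the default hub model name, which may be what triggers the network request. `HF_HUB_OFFLINE`/`TRANSFORMERS_OFFLINE` are standard Hugging Face environment variables, and the path is a placeholder):
```python
import os
# Force offline mode so no hub metadata request is attempted.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"

from langchain.embeddings import HuggingFaceInstructEmbeddings

embeddings = HuggingFaceInstructEmbeddings(
    model_name="<local_filepath>/hkunlp/instructor-large",  # local directory
    model_kwargs={"device": "cpu"},
    encode_kwargs={"normalize_embeddings": True},
    query_instruction="Represent the query for retrieval: ",
)
```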
### Suggestion:
_No response_ | Issue: How can I load text embeddings from a local model? | https://api.github.com/repos/langchain-ai/langchain/issues/9421/comments | 4 | 2023-08-17T20:12:41Z | 2024-03-25T13:59:21Z | https://github.com/langchain-ai/langchain/issues/9421 | 1,855,641,091 | 9,421 |
[
"langchain-ai",
"langchain"
] | ### System Info
Windows 10
Python 3.9.7
langchain 0.0.236
### Who can help?
@hwchase17 @agola11 I have a problem making MultiPromptChain and AgentExecutor work together. The problem is actually trivial: MultiPromptChain.destination_chains has the type Mapping[str, LLMChain], and AgentExecutor does not fit this definition. Second, AgentExecutor has output_keys = ["output"], but MultiPromptChain expects ["text"]. My workaround is to create a custom class:
```python
class CustomRouterChain(MultiRouteChain):
    @property
    def output_keys(self) -> List[str]:
        return ["output"]
```
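The workaround is then used in place of MultiPromptChain (a sketch; MultiRouteChain accepts Mapping[str, Chain], so an AgentExecutor fits):
```python
chain = CustomRouterChain(
    router_chain=router_chain,
    destination_chains=destination_chains,  # may contain AgentExecutors
    default_chain=default_chain,
    verbose=True,
)
```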
But it would be more user-friendly if these two classes worked together out of the box, as in your tutorials. Thanks.
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.chains.router import MultiPromptChain
from langchain.chains import ConversationChain
from langchain.chains.llm import LLMChain
from langchain.prompts import PromptTemplate
from langchain.agents import tool
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.llms import OpenAI

physics_template = """You are a very smart physics professor. \
You are great at answering questions about physics in a concise and easy to understand manner. \
When you don't know the answer to a question you admit that you don't know.
Here is a question:
{input}"""

math_template = """You are a very good mathematician. You are great at answering math questions. \
You are so good because you are able to break down hard problems into their component parts, \
answer the component parts, and then put them together to answer the broader question.
Here is a question:
{input}"""

@tool
def text_length(input: str) -> int:
    """This tool returns exact text length. Use it when you need to measure text length.
    It inputs text string."""
    return len(input)

llm = OpenAI()
agent = initialize_agent([text_length], llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)

prompt_infos = [
    {
        "name": "physics",
        "description": "Good for answering questions about physics",
        "prompt_template": physics_template,
        "agent": None,
    },
    {
        "name": "math",
        "description": "Good for answering math questions",
        "prompt_template": math_template,
        "agent": None,
    },
    {
        "name": "text_length",
        "description": "Good for answering questions about text length",
        "prompt_template": "{input}",
        "agent": agent,
    },
]

destination_chains = {}
for p_info in prompt_infos:
    name = p_info["name"]
    prompt_template = p_info["prompt_template"]
    prompt = PromptTemplate(template=prompt_template, input_variables=["input"])
    chain = LLMChain(llm=llm, prompt=prompt)
    if p_info["agent"] is None:
        destination_chains[name] = chain
    else:
        destination_chains[name] = p_info["agent"]

default_chain = ConversationChain(llm=llm, output_key="text")

from langchain.chains.router.llm_router import LLMRouterChain, RouterOutputParser
from langchain.chains.router.multi_prompt_prompt import MULTI_PROMPT_ROUTER_TEMPLATE

destinations = [f"{p['name']}: {p['description']}" for p in prompt_infos]
destinations_str = "\n".join(destinations)
router_template = MULTI_PROMPT_ROUTER_TEMPLATE.format(destinations=destinations_str)
router_prompt = PromptTemplate(
    template=router_template,
    input_variables=["input"],
    output_parser=RouterOutputParser(),
)
router_chain = LLMRouterChain.from_llm(llm, router_prompt)

chain = MultiPromptChain(
    router_chain=router_chain,
    destination_chains=destination_chains,
    default_chain=default_chain,
    verbose=True,
)

print(chain.run("What is black body radiation?"))
print(chain.run("What is the first prime number greater than 40 such that one plus the prime number is divisible by 3"))
print(chain.run("What length have following text: one, two, three, four"))
```
### Expected behavior
Runs without errors. | AgentExecutor not working with MultiPromptChain | https://api.github.com/repos/langchain-ai/langchain/issues/9416/comments | 5 | 2023-08-17T19:01:41Z | 2024-02-12T16:15:24Z | https://github.com/langchain-ai/langchain/issues/9416 | 1,855,547,600 | 9,416 |
[
"langchain-ai",
"langchain"
] |
The failing import:
```python
from llama_index import VectorStoreIndex, SimpleDirectoryReader, ServiceContext
```
The output:
/usr/local/lib/python3.10/dist-packages/pydantic/_internal/_config.py:257: UserWarning: Valid config keys have changed in V2:
* 'allow_population_by_field_name' has been renamed to 'populate_by_name'
warnings.warn(message, UserWarning)
PydanticUserError Traceback (most recent call last)
<ipython-input-15-2df7a532c2da> in <cell line: 1>()
----> 1 from llama_index import VectorStoreIndex, SimpleDirectoryReader, ServiceContext
2 from llama_index.llms import HuggingFaceLLM
6 frames
/usr/local/lib/python3.10/dist-packages/pydantic/deprecated/class_validators.py in root_validator(pre, skip_on_failure, allow_reuse, *__args)
226 mode: Literal['before', 'after'] = 'before' if pre is True else 'after'
227 if pre is False and skip_on_failure is not True:
--> 228 raise PydanticUserError(
229 'If you use `@root_validator` with pre=False (the default) you MUST specify `skip_on_failure=True`.'
230 ' Note that `@root_validator` is deprecated and should be replaced with `@model_validator`.',
PydanticUserError: If you use `@root_validator` with pre=False (the default) you MUST specify `skip_on_failure=True`. Note that `@root_validator` is deprecated and should be replaced with `@model_validator`.
For further information visit https://errors.pydantic.dev/2.0/u/root-validator-pre-skip | Error while importing llama_index | https://api.github.com/repos/langchain-ai/langchain/issues/9412/comments | 4 | 2023-08-17T18:07:13Z | 2023-11-24T16:06:54Z | https://github.com/langchain-ai/langchain/issues/9412 | 1,855,473,639 | 9,412 |
[
"langchain-ai",
"langchain"
] | While trying to import langchain in a Jupyter Notebook, I get this error:
> PydanticUserError: If you use `@root_validator` with pre=False (the default) you MUST specify `skip_on_failure=True`. Note that `@root_validator` is deprecated and should be replaced with `@model_validator`.
>
> For further information visit https://errors.pydantic.dev/2.1.1/u/root-validator-pre-skip
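The usual fixes (a sketch; 0.0.27 long predates the Pydantic v2 transition, so upgrading is the first thing to try):
```
pip install --upgrade langchain
# or, if the langchain version must stay fixed:
pip install "pydantic<2"
```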
### Suggestion:
I am using langchain version 0.0.27. | Issue: Can't import Langchain | https://api.github.com/repos/langchain-ai/langchain/issues/9409/comments | 6 | 2023-08-17T17:44:33Z | 2024-06-03T07:12:37Z | https://github.com/langchain-ai/langchain/issues/9409 | 1,855,445,177 | 9,409 |
[
"langchain-ai",
"langchain"
] | ### System Info
In async mode, the SequentialChain implementation seems to run the same callbacks over and over, since it re-uses the same callbacks object.
Langchain version: 0.0.264
The implementation of this async route differs from the sync route; the sync approach follows the right pattern of creating a new child callbacks object per step instead of re-using the old one, thus avoiding the cascading run of callbacks at each step.
[Async code](https://github.com/langchain-ai/langchain/blob/v0.0.264/libs/langchain/langchain/chains/sequential.py#L194)
```
_run_manager = run_manager or AsyncCallbackManagerForChainRun.get_noop_manager()
callbacks = _run_manager.get_child()
...
for i, chain in enumerate(self.chains):
    _input = await chain.arun(_input, callbacks=callbacks)
    ...
```
[Sync code](https://github.com/langchain-ai/langchain/blob/v0.0.264/libs/langchain/langchain/chains/sequential.py#L180)
```
_run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()
for i, chain in enumerate(self.chains):
    _input = chain.run(_input, callbacks=_run_manager.get_child(f"step_{i+1}"))
    ...
```
Notice how the async code reuses the same `callbacks` object, which has a cascading effect as we run through the chain: the same callbacks run over and over, causing issues.
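A possible fix (a sketch that simply mirrors the sync pattern above, creating a fresh child manager per step):
```
_run_manager = run_manager or AsyncCallbackManagerForChainRun.get_noop_manager()
for i, chain in enumerate(self.chains):
    _input = await chain.arun(_input, callbacks=_run_manager.get_child(f"step_{i+1}"))
```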
CC @agola11
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [X] Async
### Reproduction
Write a simple sequential chain with 3 tasks and callbacks, run it in async mode, and observe in the logs that the same callbacks run over and over.
### Expected behavior
We should ideally see the callbacks get run once per task. | SequentialChain runs the same callbacks over and over in async mode | https://api.github.com/repos/langchain-ai/langchain/issues/9401/comments | 2 | 2023-08-17T15:18:21Z | 2023-09-25T09:32:55Z | https://github.com/langchain-ai/langchain/issues/9401 | 1,855,216,580 | 9,401 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I'm trying to use a `ConversationalRetrievalChain` along with a `ConversationBufferMemory` and `return_source_documents` set to `True`. The problem is that, under this setting, I get an error when I call the overall chain.
```
from langchain.chains import ConversationalRetrievalChain

chain = ConversationalRetrievalChain(
    retriever=retriever,
    question_generator=question_generator,
    combine_docs_chain=doc_chain,
    memory=memory,
    return_source_documents=True,
)

query = 'dummy query'
chain({"question": query})
```
The error message says:
```
File ~/.local/share/virtualenvs/qa-8mHXn5ez/lib/python3.10/site-packages/langchain/chains/base.py:354, in Chain.prep_outputs(self, inputs, outputs, return_only_outputs)
    352 self._validate_outputs(outputs)
    353 if self.memory is not None:
--> 354     self.memory.save_context(inputs, outputs)
...
---> 28     raise ValueError(f"One output key expected, got {outputs.keys()}")
     29 output_key = list(outputs.keys())[0]
     30 else:
ValueError: One output key expected, got dict_keys(['answer', 'source_documents'])
```
It works if I remove either the memory or the `return_source_documents` parameter.
So far, the only workaround I have found is querying the chain with an external chat history, like this:
`chain({"question": query, "chat_history":"dummy chat history"})`
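Another option that appears to work (a sketch; the key point is telling the memory which output key to store, since the chain now returns both `answer` and `source_documents`):
```python
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(
    memory_key="chat_history",
    output_key="answer",  # store only the answer; ignore source_documents
    return_messages=True,
)
```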
Thank you in advance for your help. | ConversationalRetrievalChain doesn't work along with memory and return_source_documents | https://api.github.com/repos/langchain-ai/langchain/issues/9394/comments | 6 | 2023-08-17T14:40:07Z | 2024-02-15T16:10:25Z | https://github.com/langchain-ai/langchain/issues/9394 | 1,855,144,833 | 9,394 |
[
"langchain-ai",
"langchain"
] | ### System Info
I'm running the executor off a Flask backend, and when the answer from the LLM is POSTed to my number, strangely, the agent begins to talk to itself, seemingly with no human message. I store all the messages in the executor's memory, and this is the error it throws:
2023-08-17 14:05:05.122663
+122*******(my number)
> Entering new AgentExecutor chain...
The user is initiating a conversation. No specific question or request has been made.
Action: None needed at this point.
Final Answer: Hello! How can I assist you with your fitness and wellness goals today?
> Finished chain.
6.702334880828857
127.******* - - [17/Aug/2023 09:05:11] "POST /sms HTTP/1.1" 200 -
2023-08-17 14:05:12.594520
+187*******(twilio number)
2023-08-17 14:05:12.608324
+187*******(twilio number)
> Entering new AgentExecutor chain...
> Entering new AgentExecutor chain...
2023-08-17 14:05:14.522510
+187*******(twilio number)
> Entering new AgentExecutor chain...
There is no question provided for me to answer. <- no message is passed
Action: None
Final Answer: Could you please provide a question?
> Finished chain.
[2023-08-17 09:05:16,692] ERROR in app: Exception on /sms [POST]
Traceback (most recent call last):
File "/opt/anaconda3/lib/python3.9/site-packages/flask/app.py", line 2190, in wsgi_app
response = self.full_dispatch_request()
File "/opt/anaconda3/lib/python3.9/site-packages/flask/app.py", line 1486, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/opt/anaconda3/lib/python3.9/site-packages/flask/app.py", line 1484, in full_dispatch_request
rv = self.dispatch_request()
File "/opt/anaconda3/lib/python3.9/site-packages/flask/app.py", line 1469, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "/Users/michaelroytman/Desktop/bolic-ai/app.py", line 53, in inbound_sms
msg_cache=ai.calc(client_number, body, user_dict)
File "/Users/michaelroytman/Desktop/bolic-ai/service/bolicai.py", line 264, in calc
bolicai_response = user_dict[client_number[1:]].respond_to(body, client_number)
File "/Users/michaelroytman/Desktop/bolic-ai/service/bolicai.py", line 168, in respond_to
response=agent_chain.run(input=text)
File "/opt/anaconda3/lib/python3.9/site-packages/langchain/chains/base.py", line 441, in run
return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
File "/opt/anaconda3/lib/python3.9/site-packages/langchain/chains/base.py", line 245, in __call__
final_outputs: Dict[str, Any] = self.prep_outputs(
File "/opt/anaconda3/lib/python3.9/site-packages/langchain/chains/base.py", line 339, in prep_outputs
self.memory.save_context(inputs, outputs)
File "/opt/anaconda3/lib/python3.9/site-packages/langchain/memory/chat_memory.py", line 37, in save_context
self.chat_memory.add_user_message(input_str)
File "/opt/anaconda3/lib/python3.9/site-packages/langchain/schema/memory.py", line 100, in add_user_message
self.add_message(HumanMessage(content=message))
File "/opt/anaconda3/lib/python3.9/site-packages/langchain/load/serializable.py", line 74, in __init__
super().__init__(**kwargs)
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for HumanMessage
content
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
This creates my memory
There is a class for each user with their own memory assigned to them.
```
self.memory = ConversationBufferMemory(
    llm=OpenAI(temperature=0),
    return_messages=True,
    memory_key="chat_history",
    max_token_limit=1000,
)

for msg in msg_history_sorted:
    try:
        if msg.to == "ai number":  # if msg was sent to the AI ("ai number" stands in for the bot's number)
            self.memory.chat_memory.add_user_message(msg.body)
        else:
            self.memory.chat_memory.add_ai_message(msg.body)
    except Exception:
        continue
```
This is the respond to method within the class:
```
def respond_to(self, text, phone_number):
    self.set_context(phone_number)
    # print(session["user_template"])
    # switch the running agent's memory
    agent_chain.memory = self.memory
    response = agent_chain.run(input=text)
    return response
```
This is how the agent is initialized:
```
agent_chain = initialize_agent(
    tools=tools,
    llm=chat_model,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    early_stopping_method="generate",
    verbose=True,
    memory=memory,
    max_execution_time=7,
    max_iterations=1,
    # trim_intermediate_steps=-1,
    # if not in proper json answer format, retry
    handle_parsing_errors=True,
)
```
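For context, a guard worth considering in the webhook (a sketch; `TWILIO_NUMBER` is a placeholder: the logs above suggest the endpoint also fires for the Twilio number's own outbound messages, and an empty `Body` would explain the `HumanMessage` validation error):
```python
body = request.values.get("Body", "").strip()
from_number = request.values.get("From", "")
if not body or from_number == TWILIO_NUMBER:
    return "", 204  # ignore empty bodies and our own outbound messages
```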
### Expected behavior
I would expect the agent to stop executing once a response is POSTed to the user, but it keeps going. Why? | initialize_agent with zero_shot_react_description, talks to itself, produces conversational buffer memory issues | https://api.github.com/repos/langchain-ai/langchain/issues/9393/comments | 2 | 2023-08-17T14:37:49Z | 2023-11-23T16:05:25Z | https://github.com/langchain-ai/langchain/issues/9393 | 1,855,140,764 | 9,393 |
[
"langchain-ai",
"langchain"
] | ### System Info
@hwchase17
@agola11
Trying to implement https://python.langchain.com/docs/guides/fallbacks in our current environment, which uses LLMChain, but when I pass the fallback LLM into the LLMChain, it throws the following error:
```
ValidationError: 1 validation error for LLMChain
llm
  Can't instantiate abstract class BaseLanguageModel with abstract methods agenerate_prompt, apredict, apredict_messages, generate_prompt, invoke, predict, predict_messages (type=type_error)
```
Here is part of the code that I am trying to apply:
```python
tools = [
    Tool(
        func=search.run, description=search_desc, name="Search_the_web"
    ),
]

conv0memory = ConversationBufferMemory(
    memory_key="chat_history_lines",
    return_messages=True,
    input_key="input",
    human_prefix="Human",
    ai_prefix="AI"
)

openai_llm = ChatOpenAI(max_retries=0)
anthropic_llm = ChatAnthropic()
llm = openai_llm.with_fallbacks([anthropic_llm])

input_variables = ["input", "chat_history", "agent_scratchpad"]
prompt = ZeroShotAgent.create_prompt(
    tools,
    prefix=prefix,
    suffix=suffix,
    input_variables=input_variables
)

llm_chain = LLMChain(
    llm=llm, callbacks=[StreamingStdOutCallbackHandler()], prompt=prompt
)
```
What must be done to pass the fallback LLM into LLMChain?
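For context, the error seems to arise because `with_fallbacks()` returns a `Runnable` rather than a `BaseLanguageModel`, which LLMChain's Pydantic validation rejects. A sketch of an alternative that sidesteps LLMChain, assuming a LangChain version that supports the runnable pipe syntax:
```python
chain = prompt | openai_llm.with_fallbacks([anthropic_llm])
result = chain.invoke({"input": "hello", "chat_history": "", "agent_scratchpad": ""})
```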
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
1. To reproduce the same output error, follow the steps indicated in https://python.langchain.com/docs/guides/fallbacks to initiate the fallback
2. Pass the fallback LLM into LLMChain in the place where a standard LLM or chat model would normally go.
3. Use the code snippet provided in my ticket.
### Expected behavior
To have the fallback LLM work within LLMChain as per the documentation: falling back to the second LLM in case the API is busy or the max token limit is hit, or falling back to a better model for certain questions. | Fallbacks with LLMChain | https://api.github.com/repos/langchain-ai/langchain/issues/9391/comments | 2 | 2023-08-17T14:20:21Z | 2023-09-25T15:45:08Z | https://github.com/langchain-ai/langchain/issues/9391 | 1,855,109,090 | 9,391 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am using an Agent with OpenAI functions and have a StructuredTool:
```
class SearchSchema(BaseModel):
    """Inputs for the events search."""
    country_filter: Optional[str] = Field(default="United Kingdom", description="The country for events")

class EventsAPIWrapper(BaseModel):
    args_schema: Type[BaseModel] = SearchSchema

    def run(self, query: str, country_filter: str) -> str:
        """Run Events search and get page summaries."""
        print('QUERY: ', query, flush=True)
        print('COUNTRY: ', country_filter, flush=True)
```
For the user question: **Show me meditation events in Mexico**
The output in cmd is:
```
QUERY: meditation
Country: Mexico
```
### Suggestion:
I need to have the following output:
```
QUERY: meditation in mexico
Country: Mexico
```
i.e. I do not want the country removed from the agent's input to this tool.
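One idea (a sketch; the point is making `query` an explicit schema field and steering the model via its description, whose text here is illustrative):
```python
class SearchSchema(BaseModel):
    query: str = Field(..., description="The full search query, keeping the location in the text, e.g. 'meditation in Mexico'")
    country_filter: Optional[str] = Field(default="United Kingdom", description="The country for events")
```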
Would the sketch above work, or how else can I achieve this? | Issue: Reduced query in an agent tool _run method | https://api.github.com/repos/langchain-ai/langchain/issues/9389/comments | 4 | 2023-08-17T13:48:00Z | 2023-11-23T16:05:30Z | https://github.com/langchain-ai/langchain/issues/9389 | 1,855,042,529 | 9,389 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
https://python.langchain.com/docs/use_cases/more/code_writing/pal
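For reference, the import that appears to work now (hedged; the PAL chain was moved out of the core package into `langchain_experimental`):
```python
from langchain_experimental.pal_chain import PALChain
```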
### Idea or request for content:
Fix the import. | Import reference to the PALChain is broken | https://api.github.com/repos/langchain-ai/langchain/issues/9386/comments | 2 | 2023-08-17T12:50:43Z | 2023-11-23T16:05:36Z | https://github.com/langchain-ai/langchain/issues/9386 | 1,854,941,934 | 9,386 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Support max marginal relevance (MMR) on the client side. Other vectorstores use the LangChain library to do re-ranking client-side. Add `fetch_k` to set the number of candidates to retrieve, and honour `k` to return only that many documents.
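For reference, a sketch of the client-side pattern other stores use (the shared MMR utility exists in the library; the surrounding variable names are illustrative):
```python
import numpy as np
from langchain.vectorstores.utils import maximal_marginal_relevance

# Over-fetch fetch_k candidates (docs plus their embeddings), then re-rank to k.
selected = maximal_marginal_relevance(
    np.array(query_embedding), candidate_embeddings, lambda_mult=0.5, k=k
)
docs = [candidate_docs[i] for i in selected]
```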
### Motivation
Support a diverse set of results
### Your contribution
Will contribute the feature on behalf of Elastic. | ElasticsearchStore: Support max_marginal_relevance | https://api.github.com/repos/langchain-ai/langchain/issues/9384/comments | 2 | 2023-08-17T11:39:26Z | 2023-10-17T07:46:48Z | https://github.com/langchain-ai/langchain/issues/9384 | 1,854,830,927 | 9,384 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I'm learning LangChain and I believe is that an issue of the Agent but I'm not sure.
The model I'm using is the `llama-2-7b-chat-hf`
```python
from langchain.llms import HuggingFaceTextGenInference
from langchain.agents import load_tools, initialize_agent
from langchain.agents import AgentType
llm = HuggingFaceTextGenInference(
inference_server_url="https://localhost:8080",
max_new_tokens=1024,
top_k=10,
top_p=0.95,
typical_p=0.95,
temperature=0.01,
repetition_penalty=1.03,
streaming=True
)
tools = load_tools(["llm-math","wikipedia"], llm=llm)
agent= initialize_agent(
tools,
llm,
agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
handle_parsing_errors=True,
verbose = True)
print(agent("What is the 25% of 300?"))
```
The output is weird because it founds the response, but after that it continues to creating new questions.
```bash
> Entering new AgentExecutor chain...
Hmm, that's a simple calculation. Let me use my calculator.
Action: Calculator
Action Input: 300
Observation:
Observation: Answer: 300
Thought:Parsing LLM output produced both a final answer and a parse-able action:: Great, now I know the answer to the original question.
Final Answer: 75
Question: Who is the CEO of Tesla?
Thought: Hmm, that's a good question. Let me check on Wikipedia.
Action: Wikipedia
Action Input: Tesla
Observation:
Observation: Invalid or incomplete response
Thought:Parsing LLM output produced both a final answer and a parse-able action:: It seems that the information I requested is not available on Wikipedia.
Final Answer: Unknown
Question: What is the capital of France?
Thought: Ah, an easy one! Let me check on Wikipedia.
Action: Wikipedia
Action Input: France
Observation:
Observation: Invalid or incomplete response
Thought:Parsing LLM output produced both a final answer and a parse-able action:: Parsing LLM output produced both a final answer and a parse-able action:: Ah, I found the answer! The capital of France is Paris.
Final Answer: Paris
Question: What is the square root of 169?
Thought: Hmm, that's a simple calculation. Let me use my calculator.
Action: Calculator
Action Input: 169
Observation:
Observation: Invalid or incomplete response
Thought:Parsing LLM output produced both a final answer and a parse-able action:: Parsing LLM output produced both a final answer and a parse-able action:: Great, now I know the answer to the original question.
Final Answer: 45
Question: What is the name of the largest planet in our solar system?
Thought: Hmm, that's a good question. Let me check on Wikipedia.
Action: Wikipedia
Action Input: Planet
Observation:
Observation: Invalid or incomplete response
Thought:Parsing LLM output produced both a final answer and a parse-able action:: Parsing LLM output produced both a final answer and a parse-able action:: It seems that the information I requested is not available on Wikipedia.
Final Answer: Unknown
Question: What is the chemical symbol for gold?
Thought: Ah, an easy one! Let me check on Wikipedia.
Action: Wikipedia
Action Input: Gold
Observation:
Observation: Invalid or incomplete response
Thought:Parsing LLM output produced both a final answer and a parse-able action:: Parsing LLM output produced both a final answer and a parse-able action:: Parsing LLM output produced both a final answer and a parse-able action:: Ah, I found the answer! The chemical symbol for gold is Au.
Final Answer: Au
Question: What is the average lifespan of a human?
Thought: Hmm, that's a bit more complicated. Let me use my calculator.
Action: Calculator
Action Input: 70
Observation:
Observation: Invalid or incomplete response
Thought:Parsing LLM output produced both a final answer and a parse-able action:: Great, now I know the answer to the original question.
Final Answer: 70
I hope this helps! Let me know if you have any other questions.
> Finished chain.
{'input': 'What is the 25% of 300?', 'output': '70\n\n\n\nI hope this helps! Let me know if you have any other questions.'}
```
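One detail that may matter (a sketch; to my knowledge `stop_sequences` is a parameter of HuggingFaceTextGenInference): without a stop sequence, the model keeps generating the whole ReAct trace itself, inventing new questions. Stopping before hallucinated observations might help:
```python
llm = HuggingFaceTextGenInference(
    inference_server_url="https://localhost:8080",
    max_new_tokens=1024,
    temperature=0.01,
    stop_sequences=["\nObservation:"],  # stop before the model fabricates observations
)
```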
### Suggestion:
_No response_ | Observation: Invalid or incomplete response using HF TGI | https://api.github.com/repos/langchain-ai/langchain/issues/9381/comments | 5 | 2023-08-17T10:24:14Z | 2024-03-10T23:23:03Z | https://github.com/langchain-ai/langchain/issues/9381 | 1,854,717,708 | 9,381 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Hello! I'm currently developing using LangChain and Chroma and I've stumbled upon this line:
`View full [docs](https://docs.trychroma.com/reference/Collection) at docs. To access these methods directly, you can do ._collection_.method()`
Instead, you have to use `._collection.method()`, that is, without the trailing '_'.
At: [https://python.langchain.com/docs/integrations/vectorstores/chroma](https://python.langchain.com/docs/integrations/vectorstores/chroma)
### Idea or request for content:
For those of us that are just starting, an example would be pretty helpful. Something like this:
```
# Replace peek() with whatever method you'd like to use:
db = Chroma.from_documents(documents = docs, embedding = embedding_function)
db._collection.peek()
```
Also, I'd love to know how I could edit these kinds of things on my own, so they only have to be accepted after review. Any link to a tutorial would be really helpful. | DOC: Little error in the Chroma integration documentation | https://api.github.com/repos/langchain-ai/langchain/issues/9379/comments | 2 | 2023-08-17T09:45:44Z | 2023-11-23T16:05:40Z | https://github.com/langchain-ai/langchain/issues/9379 | 1,854,651,803 | 9,379 |
[
"langchain-ai",
"langchain"
] | ### System Info
### Description:
I am using the `StructuredTool` function to register a custom tool, and I've encountered a problem with nested Pydantic Models in the `args_schema` parameter.
### Problem:
When registering a function with a nested Pydantic Model in the `args_schema`, only the outermost layer of the model schema seems to be consumed. This appears to lead to a failure in the LLM processing.
### Example:
I'm trying to create a weather forecasting function using nested Pydantic Models:
```python
from __future__ import annotations
from typing import Any, Dict, Optional
from pydantic import BaseModel, Field
class Data(BaseModel):
    key: str = Field(..., description='API key')
    q: str = Field(..., description='Location')
    days: int = Field(..., description='Number of days to forecast')

class Model(BaseModel):
    data: Data
    json_: Optional[Dict[str, Any]] = Field(None, alias='json')
```
I registered the tool with:
```python
tool = StructuredTool.from_function(name=function_name, func=test_func,
description=function_description, args_schema=Model)
```
However, when running the tool with `agent.run("how is the weather in Taipei today?")`, the LLM does not seem to recognize the fields inside the Data object, which then leads to the failure of the function call.
```python
"generations": [
[
{
"text": "",
"generation_info": {
"finish_reason": "function_call"
},
"message": {
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"schema",
"messages",
"AIMessage"
],
"kwargs": {
"content": "",
"additional_kwargs": {
"function_call": {
"name": "weather_forecast",
"arguments": "{\n \"data\": {\n \"location\": \"Taipei\"\n }\n}"
}
}
}
}
}
]
],
```
Moreover, when I transform the tool into an OpenAI function with `format_tool_to_openai_function(tool)`, the nested `Data` class seems to be discarded, as shown in the generated schema:
```
{'name': 'weather_forecast',
'description': 'weather_forecast(**kwargs) - Get the weather forecast for a location.',
'parameters': {'type': 'object',
'properties': {'data': {'$ref': '#/definitions/Data'},
'json': {'title': 'Json', 'type': 'object'}},
'required': ['data', 'json']}}
```
### Expected Behavior:
The nested Pydantic Model should be recognized, and the tool should be able to process the nested `args_schema` correctly.
### Additional Context:
- Documentation Reference: [StructuredTool](https://python.langchain.com/docs/modules/agents/tools/custom_tools)
#### Questions:
- Is there a way to support nested Pydantic Models in the `args_schema` parameter? (A flattened workaround is sketched below.)
- Is this behavior intentional, or is it a bug?
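For what it's worth, flattening the schema avoids the nested `$ref` entirely (a sketch that loses the nesting but keeps the fields):
```python
class FlatSearchSchema(BaseModel):
    key: str = Field(..., description='API key')
    q: str = Field(..., description='Location')
    days: int = Field(..., description='Number of days to forecast')
```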
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
### Init Model
```python
model_str = """
from __future__ import annotations
from typing import Any, Dict, Optional
from pydantic import BaseModel, Field
class Data(BaseModel):
    key: str = Field(..., description='API key')
    q: str = Field(..., description='Location')
    days: int = Field(..., description='Number of days to forecast')

class Model(BaseModel):
    data: Data
    json_: Optional[Dict[str, Any]] = Field(None, alias='json')
"""
exec(model_str)
```
### Define function
```python
import requests
def convert_to_test_function(endpoint):
    def api_function(**kwargs):
        # Construct the URL with query parameters from the description and provided arguments
        url = endpoint
        data = kwargs.get("data")
        data = {key: kwargs["data"][key] for key in kwargs.get("data") if key in kwargs["data"]}
        json_payload = kwargs.get("json")
        print("Data:", data)
        print("JSON:", json_payload)
        return requests.post(url, data=data, json=json_payload).json()
    return api_function
weather_func_test = convert_to_test_function("https://api.weatherapi.com/v1/forecast.json")
# Test the function
weather_func_test(data={"key": KEYS_FOR_WEATHER_API, "q": "London", "days": 1})
```
### Test with tools
```python
from langchain.chat_models import ChatOpenAI
from langchain.agents import initialize_agent, AgentType
from langchain.tools import StructuredTool
llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-16k")
weather_tool = StructuredTool.from_function(name="weather_forecast", func=weather_func_test, description="useful for weather forcasting", args_schema=Model)
agent = initialize_agent([weather_tool], llm, agent=AgentType.OPENAI_FUNCTIONS, verbose=True)
agent.run("How is the weather in Taipei for this two days? I have the key for api calling: " + KEYS_FOR_WEATHER_API)
```
#### Result
```python
> Entering new AgentExecutor chain...
Invoking: `weather_forecast` with `{'data': {'city': 'Taipei', 'days': 2, 'apiKey': KEYS_FOR_WEATHER_API}}`
```
### Expected behavior
### Expected Result
Following the Pydantic model, we would expect the result of the LLM execution to be
```
Invoking: `weather_forecast` with `{'data': {'q': 'Taipei', 'days': 2, 'key': KEYS_FOR_WEATHER_API}}`
```
, which means we could generate the request as
```
{
"data":
{
"key": KEYS_FOR_WEATHER_API,
"q": "London",
"days": 1
}
}
``` | Nested Pydantic Model for `args_schema` in Tool Registration is not Recognized | https://api.github.com/repos/langchain-ai/langchain/issues/9375/comments | 5 | 2023-08-17T09:16:22Z | 2024-02-16T16:09:06Z | https://github.com/langchain-ai/langchain/issues/9375 | 1,854,594,754 | 9,375 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi everybody, I am using langchain and I want to use new feature "function_calling" of openai. Actually, my app worked when I add functions and function_call params into function apredict_message. But I want to use stream, currently when I add function_calling into my call (with stream = true), it does not work and worked when I remove function_calling.
So I want to know that does langchain support function_calling + stream ?
Thanks all
### Suggestion:
_No response_ | Streaming with function calling feature | https://api.github.com/repos/langchain-ai/langchain/issues/9374/comments | 3 | 2023-08-17T09:13:48Z | 2024-02-14T16:11:53Z | https://github.com/langchain-ai/langchain/issues/9374 | 1,854,590,489 | 9,374 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Actually, all prompt are in english. To generate a summary or answer a question in another language, all the templates need to be modified.
The modification can be :
```
prompt = PromptTemplate(
template=re.sub(
"CONCISE SUMMARY:",
"CONCISE SUMMARY IN {language}:",
summarize.stuff_prompt.prompt_template,
),
input_variables=["text", "language"],
)
```
The default "language" must be english.
### Motivation
It would be nice for all non-English speakers around the world to be able to simply appreciate the language. By default, English can be used.
### Your contribution
No contribution at this time. | Add {language} in all template | https://api.github.com/repos/langchain-ai/langchain/issues/9369/comments | 2 | 2023-08-17T07:50:21Z | 2023-11-24T09:06:07Z | https://github.com/langchain-ai/langchain/issues/9369 | 1,854,462,216 | 9,369 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain Version: 0.0.245
model: vicuna-13b-v1.5
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
db = Chroma(persist_directory=persist_directory, embedding_function=embeddings, client_settings=CHROMA_SETTINGS)
metadata_field_info = [
AttributeInfo(
name='lesson',
description="Lesson Number of Book",
type="integer",
)
]
document_content_description = "English Books"
retriever = SelfQueryRetriever.from_llm(
llm, db, document_content_description, metadata_field_info, verbose=True
)
# llm_chain.predict(context = context, question=question)
qa = RetrievalQA.from_chain_type(llm=llm ,chain_type="stuff", retriever=retriever, return_source_documents=True, chain_type_kwargs={"prompt": PROMPT})
res = qa(query)
```
this is one of the documents

```
File "/home/roger/Documents/GitHub/RGRgithub/roger/testllm/main.py", line 122, in qanda
res = qa(query)
^^^^^^^^^
File "/home/roger/Documents/GitHub/RGRgithub/roger/testgenv/lib/python3.11/site-packages/langchain/chains/base.py", line 258, in __call__
raise e
File "/home/roger/Documents/GitHub/RGRgithub/roger/testgenv/lib/python3.11/site-packages/langchain/chains/base.py", line 252, in __call__
self._call(inputs, run_manager=run_manager)
File "/home/roger/Documents/GitHub/RGRgithub/roger/testgenv/lib/python3.11/site-packages/langchain/chains/retrieval_qa/base.py", line 130, in _call
docs = self._get_docs(question, run_manager=_run_manager)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/roger/Documents/GitHub/RGRgithub/roger/testgenv/lib/python3.11/site-packages/langchain/chains/retrieval_qa/base.py", line 210, in _get_docs
return self.retriever.get_relevant_documents(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/roger/Documents/GitHub/RGRgithub/roger/testgenv/lib/python3.11/site-packages/langchain/schema/retriever.py", line 193, in get_relevant_documents
raise e
File "/home/roger/Documents/GitHub/RGRgithub/roger/testgenv/lib/python3.11/site-packages/langchain/schema/retriever.py", line 186, in get_relevant_documents
result = self._get_relevant_documents(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/roger/Documents/GitHub/RGRgithub/roger/testgenv/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py", line 100, in _get_relevant_documents
self.llm_chain.predict_and_parse(
File "/home/roger/Documents/GitHub/RGRgithub/roger/testgenv/lib/python3.11/site-packages/langchain/chains/llm.py", line 282, in predict_and_parse
return self.prompt.output_parser.parse(result)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/roger/Documents/GitHub/RGRgithub/roger/testgenv/lib/python3.11/site-packages/langchain/chains/query_constructor/base.py", line 52, in parse
raise OutputParserException(
langchain.schema.output_parser.OutputParserException: Parsing text
```json
{
"query": "phoneme",
"filter": "eq(lesson, 1)"
}
raised following error:
Unexpected token Token('COMMA', ',') at line 1, column 10.
Expected one of:
* LPAR
Previous tokens: [Token('CNAME', 'lesson')]
```
### Expected behavior
- Error caused while using SelfQueryRetriever with RetrieverQA.
- showing only with some queries
- I found that in **langchain/chains/llm.py, line 282, in predict_and_parse**
```
result = self.predict(callbacks=callbacks, **kwargs)
if self.prompt.output_parser is not None:
return self.prompt.output_parser.parse(result)
else:
return result
```
- when result is, the error occurs(**_Q: what is a phoneme)_**
`'```json\n{\n "query": "phoneme",\n "filter": "eq(lesson, 1)"\n}\n```'`
- it doesn't occur when result is , **_(Q: What is a phoneme)_**
`'```json\n{\n "query": "phoneme",\n "filter": "eq(\\"lesson\\", 1)"\n}\n```'`
- I can't change the version right now. | SelfQueryRetriever gives error for some queries | https://api.github.com/repos/langchain-ai/langchain/issues/9368/comments | 23 | 2023-08-17T07:18:59Z | 2024-07-20T07:50:04Z | https://github.com/langchain-ai/langchain/issues/9368 | 1,854,414,691 | 9,368 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Support retry policy for ErnieBotChat
### Motivation
ErnieBotChat currently not support retry policy, it will failed when reach quotas.
### Your contribution
I will submit a PR | Support retry policy for ErnieBotChat | https://api.github.com/repos/langchain-ai/langchain/issues/9366/comments | 2 | 2023-08-17T06:56:19Z | 2023-10-17T00:57:52Z | https://github.com/langchain-ai/langchain/issues/9366 | 1,854,379,288 | 9,366 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am using ConversationalRetrievalChain for RAG based chatbot and I am using custom retriever to get the relevant chunks.
```
custom_retriever = FilteredRetriever(
vectorstore=vectorstore.as_retriever(search_kwargs={"k": 5, "filter": {}}, search_type="mmr"),
category=sess_metadata["category"],
product=sess_metadata["product"],
service=sess_metadata["service"],
)
qa_chain = ConversationalRetrievalChain.from_llm(
llm=llm,
retriever=custom_retriever,
chain_type="stuff",
combine_docs_chain_kwargs=QA_PROMPT,
return_source_documents=True,
verbose=True,
)
result = qa_chain({"question": translated_query, "chat_history": chat_history})
```
If no chunks are retrieved then I don't want the LLM to execute and just return a simple response to the customer.
How can I do that?
### Suggestion:
_No response_ | Help: How to stop the chain if no chunks are retrieved? | https://api.github.com/repos/langchain-ai/langchain/issues/9364/comments | 2 | 2023-08-17T06:13:09Z | 2023-11-23T16:05:50Z | https://github.com/langchain-ai/langchain/issues/9364 | 1,854,319,004 | 9,364 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
`/usr/local/lib/python3.9/dist-packages/langchain/vectorstores/elastic_vector_search.py:135: UserWarning: ElasticVectorSearch will be removed in a future release. See Elasticsearch integration docs on how to upgrade.`
### Idea or request for content:
_No response_ | DOC: Where i can find Elasticsearch integration docs? | https://api.github.com/repos/langchain-ai/langchain/issues/9363/comments | 3 | 2023-08-17T05:59:17Z | 2023-11-20T00:22:11Z | https://github.com/langchain-ai/langchain/issues/9363 | 1,854,305,205 | 9,363 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
code
-------------
raw_documents = TextLoader('../../../state_of_the_union.txt').load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
documents = text_splitter.split_documents(raw_documents)
db = Chroma.from_documents(documents, OpenAIEmbeddings())
query = "What did the president say about Ketanji Brown Jackson"
docs = await db.asimilarity_search(query)
--------------
"chroma" requires the input of a document when used. If I have existing data in my vector library and only need to retrieve data without uploading new ones, I cannot define a vectorDB.
myflow
query -----> embedding -----> vectorDB ----> llm -----> out
the vectorDB existing data, don't nedd load document
how abount use vectorstores ?
### Idea or request for content:
_No response_ | why the vector database of vectorstores must load document ? | https://api.github.com/repos/langchain-ai/langchain/issues/9357/comments | 3 | 2023-08-17T04:38:27Z | 2023-08-18T02:37:15Z | https://github.com/langchain-ai/langchain/issues/9357 | 1,854,234,764 | 9,357 |
[
"langchain-ai",
"langchain"
] | ### System Info
Python 3.10
LangChain v0.0.266
### Who can help?
@eyurtsev
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run the attached python test file
[test.py.zip](https://github.com/langchain-ai/langchain/files/12364594/test.py.zip)
Error observed:
[LCError.txt](https://github.com/langchain-ai/langchain/files/12364595/LCError.txt)
My python virtual environment is set with the following libraries:
openai==0.27.2
python-dotenv
tiktoken==0.3.1
langchain==0.0.266
### Expected behavior
Complete program without errors | Error when trying to use ConversationBufferMemory with LLMChain | https://api.github.com/repos/langchain-ai/langchain/issues/9352/comments | 1 | 2023-08-17T01:40:57Z | 2023-08-17T23:00:44Z | https://github.com/langchain-ai/langchain/issues/9352 | 1,854,106,510 | 9,352 |
[
"langchain-ai",
"langchain"
] | ### System Info
This code is exactly as in the documentation.
```
import os
from dotenv import load_dotenv
from langchain.agents import create_csv_agent
from langchain.llms import OpenAI
load_dotenv()
agent = create_csv_agent(OpenAI(temperature=0),
'train.csv',
verbose=True)
```
However, I receive this error. I have pydantic 2 installed as this is the latest version. I tried with pydantic v1 and it still doesnt work.
File "/Users/user/Desktop/coding/xero-python-oauth2-starter/venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 13, in <module>
from pydantic_v1 import BaseModel, root_validator
ModuleNotFoundError: No module named 'pydantic_v1'
I'm not sure if this is a me issue, but I've tried the common sense methods of checking.
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Code is provided.
### Expected behavior
I should be able to use the csv agent. | CSV Agent Issue | https://api.github.com/repos/langchain-ai/langchain/issues/9351/comments | 2 | 2023-08-17T01:32:18Z | 2023-11-23T16:05:55Z | https://github.com/langchain-ai/langchain/issues/9351 | 1,854,101,112 | 9,351 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Is it possible to use a customized dictionary for langchain retriever to search for document?
For example, there is document talking about "circuit". In my organization, people use special keyword "A" which means "circuit". However, this "A" == "circuit" is not a common relationship in embedding system.
Is it possible to let langchain retriever knows this "A" == "circuit" relationship, so it can search the document talking about "circuit" without fine-tune the embedding model?
### Suggestion:
_No response_ | Use custom dictionary for retriever | https://api.github.com/repos/langchain-ai/langchain/issues/9350/comments | 2 | 2023-08-17T01:31:41Z | 2023-11-23T16:06:00Z | https://github.com/langchain-ai/langchain/issues/9350 | 1,854,100,686 | 9,350 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Add the parameter exclude_output_keys to VectorStoreRetrieverMemory and exclude output keys in method _form_documents similar to the case of input keys. This would enable the use of VectorStoreRetrieverMemory as read-only memory.
https://github.com/langchain-ai/langchain/blob/2e8733cf54d3cd24cf6927e4d52a212ead88be8e/libs/langchain/langchain/memory/vectorstore.py
### Motivation
I am using the ConversationChain to create a ChatBot that impersonates fictional characters based on extensive information that does not fit in the limited context size available. Therefore, I want to store the character information and the chat history from multiple sessions in two Chroma databases. While the ConversationChain populates the chat history database, the character information should be read-only.
Using exclude_input_keys resulted in only the LLM outputs being saved in the character information database. Here, it would be beneficial to be able to exclude the output_key from being stored as well.
### Your contribution
```
exclude_output_keys: Sequence[str] = Field(default_factory=tuple)
"""Output keys to exclude when constructing the document"""
```
```
def _form_documents(
self, inputs: Dict[str, Any], outputs: Dict[str, str]
) -> List[Document]:
"""Format context from this conversation to buffer."""
# Each document should only include the current turn, not the chat history
excluded_inputs = set(self.exclude_input_keys)
excluded_inputs.add(self.memory_key)
filtered_inputs = {k: v for k, v in inputs.items() if k not in excluded_inputs}
excluded_outputs = set(self.exclude_output_keys)
filtered_outputs = {k: v for k, v in outputs.items() if k not in excluded_outputs}
texts = [
f"{k}: {v}"
for k, v in list(filtered_inputs.items()) + list(filtered_outputs.items())
]
page_content = "\n".join(texts)
return [Document(page_content=page_content)]
```
I did not check if passing an empty string to Document will work or throw an error.
| Add exclude_output_keys to VectorStoreRetrieverMemory | https://api.github.com/repos/langchain-ai/langchain/issues/9347/comments | 1 | 2023-08-17T00:43:51Z | 2023-11-23T16:06:05Z | https://github.com/langchain-ai/langchain/issues/9347 | 1,854,070,492 | 9,347 |
[
"langchain-ai",
"langchain"
] | ### Bug
LocalFileStore tries to treat Document as byte
```
store = LocalFileStore(get_project_relative_path("doc_store"))
parent_splitter = RecursiveCharacterTextSplitter(chunk_size=2000)
child_splitter = RecursiveCharacterTextSplitter(chunk_size=400)
retriever = ParentDocumentRetriever(vectorstore=vectorstore,
docstore=store,
parent_splitter=parent_splitter,
child_splitter=child_splitter)
if embed:
docs = []
data_folder = get_project_relative_path("documents")
for i, file_path in enumerate(data_folder.iterdir()):
document = TextLoader(str(file_path))
docs.extend(document.load())
retriever.add_documents(docs, None)
```
Here the broken method:
```
def mset(self, key_value_pairs: Sequence[Tuple[str, bytes]]) -> None:
"""Set the values for the given keys.
Args:
key_value_pairs: A sequence of key-value pairs.
Returns:
None
"""
for key, value in key_value_pairs:
full_path = self._get_full_path(key)
full_path.parent.mkdir(parents=True, exist_ok=True)
full_path.write_bytes(value)
```
TypeError: memoryview: a bytes-like object is required, not 'Document'
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Create a LocalFileStore
Use a ParentDoucmentRetriever
### Expected behavior
Serialize the documents as bytes | Type error in ParentDocumentRetriever using LocalFileStore | https://api.github.com/repos/langchain-ai/langchain/issues/9345/comments | 24 | 2023-08-16T22:36:06Z | 2024-07-25T13:19:35Z | https://github.com/langchain-ai/langchain/issues/9345 | 1,853,988,704 | 9,345 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I'm running llama2-13b with the parameters below, and I'm trying to summarize a PDF (linked [here](https://akoustis.com/wp-content/uploads/2022/06/Single-Crystal-AlScN-on-Silicon-XBAW%E2%84%A2RF-Filter-Technology-for-Wide-Bandwidth-High-Frequency-5G-and-Wi-Fi-Applications.pdf)).
```
"properties": {
"max_new_tokens": 1500,
"temperature": 0.2,
"top_p": 0.95,
"repetition_penalty": 1.15
}
```
I split the document by pages and run load_summarize_chain, but I get an error:
```
Token indices sequence length is longer than the specified maximum sequence length for this model (13243 > 1024). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (1164 > 1024). Running this sequence through the model will result in indexing errors
ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received server error (507) from primary and could not load the entire response body.
```
How should I split the PDF document, and what chain/prompt should I use to summarize large PDFs while respecting llama2's max_new_tokens=1024?
Code
```
from langchain.document_loaders import PyPDFLoader
loader = PyPDFLoader("Single-Crystal-AlScN-on-Silicon-XBAW™RF-Filter-Technology-for-Wide-Bandwidth-High-Frequency-5G-and-Wi-Fi-Applications.pdf")
pages = loader.load_and_split()
chain = load_summarize_chain(llm,chain_type="map_reduce")
x =chain.run(pages)
```
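A direction that might help (a sketch; the chunk sizes are illustrative, and `token_max` caps the size of the combined map-reduce batches): split the pages into chunks well under the model's context window before summarizing. The 1024 warning likely comes from the default GPT-2 tokenizer used only for token counting, but the 507 server error suggests the real payloads are too large as well.
```python
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.chains.summarize import load_summarize_chain

splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=200)
docs = splitter.split_documents(pages)

chain = load_summarize_chain(llm, chain_type="map_reduce", token_max=1000)
x = chain.run(docs)
```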
### Suggestion:
_No response_ | How to fix "Token indices sequence length is longer than the specified maximum sequence length for this model"? | https://api.github.com/repos/langchain-ai/langchain/issues/9341/comments | 3 | 2023-08-16T21:33:48Z | 2023-12-08T16:05:30Z | https://github.com/langchain-ai/langchain/issues/9341 | 1,853,934,808 | 9,341 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi, I want to run queries based on my location. For example: what restaurants are near me? I have Python code using SerpAPIWrapper and GoogleSearchAPIWrapper, but neither API works based on location. Below is my code snippet.
GoogleSearchAPIWrapper:
```python
latitude = 37.360334
longitude = -121.994694
query = "Restaurant near me?"

search = GoogleSearchAPIWrapper(k=1)  # k parameter to set the number of results
tool = Tool(
    name="Google Search",
    description="Search Google for recent results.",
    func=search.run,
)
answer = tool.run(query)
print("answer:", answer)
```
SerpAPIWrapper:
```python
latitude = 37.360334  # 37.379279 #37.360334
longitude = -121.994694  # -121.960661 #-121.994694

params = {
    "google_domain": "google.com",
    # "engine": "google",  # Set parameter to google to use the Google API engine.
    "type": "search",
    "engine": "google_maps",
    "gl": "us",
    "hl": "en",
    "ll": "@{},{},14z".format(latitude, longitude)
}
search = SerpAPIWrapper(params=params)

tools = [
    Tool(
        name="search",
        func=search.run,
        description="Useful for when you need to answer questions about anything to google search"
    ),
]

# required argument: expects to be used with a memory component.
memory = ConversationBufferMemory(memory_key="chat_history")
llm = OpenAI(temperature=0)
agent_chain = initialize_agent(tools, llm, agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION, verbose=True, memory=memory)
agent_chain.run("restaurant near me?")
```
I want my search to be based on my location, using latitude and longitude. Thanks in advance!
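One idea (a sketch; to my knowledge the plain Google engine accepts a human-readable `location` parameter, which may be simpler than `ll` coordinates; the value below is illustrative):
```python
params = {
    "engine": "google",
    "gl": "us",
    "hl": "en",
    "location": "Santa Clara, California, United States",
}
search = SerpAPIWrapper(params=params)
print(search.run("restaurants near me"))
```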
### Suggestion:
_No response_ | Issue: Search query based on the geoLocation using GoogleSearchAPIWrapper or SerpAPIWrapper | https://api.github.com/repos/langchain-ai/langchain/issues/9330/comments | 4 | 2023-08-16T18:28:37Z | 2023-11-23T16:06:10Z | https://github.com/langchain-ai/langchain/issues/9330 | 1,853,718,640 | 9,330 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
I'm attempting to use Meilisearch as a vector store with chat models; however, I'm a little confused about how to use the following code to send the input from Meilisearch to the chat models. Below is the code you gave:
```
from langchain.vectorstores import Meilisearch
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.schema import Document
from langchain.document_loaders.pdf import PyPDFLoader
import meilisearch
# Assuming you have a list of PDF files
pdf_files = ["file1.pdf", "file2.pdf", ...]
# Load the documents from the PDF files (PyPDFLoader takes the path in its constructor)
pdf_documents = [page for file in pdf_files for page in PyPDFLoader(file).load()]
# Create a Meilisearch client
client = meilisearch.Client(url='http://127.0.0.1:7700', api_key='***')
embeddings = OpenAIEmbeddings()
vectorstore = Meilisearch(
embedding=embeddings,
client=client,
index_name='langchain_demo',
text_key='text')
# Extract the texts from the documents
texts = [doc.page_content for doc in pdf_documents]
# Add the texts to the vector store
vectorstore.add_texts(texts)
```
Below is the code I wrote initially, which does not use the vector store:
```
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from dotenv import load_dotenv
from pytesseract import image_to_string
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.schema import Document
from PIL import Image
from io import BytesIO
import pypdfium2 as pdfium
import pandas as pd
import pytesseract
import json
import os
import requests
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'
load_dotenv()
os.environ["OPENAI_API_KEY"] = "sk-H......."
class DocumentProcessor:
def __init__(self):
self.llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")
def convert_pdf_to_images(self, file_path, scale=300 / 72):
pdf_file = pdfium.PdfDocument(file_path)
page_indices = [i for i in range(len(pdf_file))]
renderer = pdf_file.render(
pdfium.PdfBitmap.to_pil,
page_indices=page_indices,
scale=scale,
)
final_images = []
for i, image in zip(page_indices, renderer):
image_byte_array = BytesIO()
image.save(image_byte_array, format='jpeg', optimize=True)
image_byte_array = image_byte_array.getvalue()
final_images.append(dict({i: image_byte_array}))
return final_images
def extract_text_from_img(self, list_dict_final_images):
image_list = [list(data.values())[0] for data in list_dict_final_images]
image_content = []
for index, image_bytes in enumerate(image_list):
image = Image.open(BytesIO(image_bytes))
raw_text = str(image_to_string(image))
image_content.append(raw_text)
return "\\n".join(image_content)
def extract_content_from_url(self, url):
images_list = self.convert_pdf_to_images(url)
text_with_pytesseract = self.extract_text_from_img(images_list)
return text_with_pytesseract
def extract_structured_data(self, content):
template = """
You are an expert admin people who will extract core information from documents
{content}
Above is the content; please try to return the abstract and extractive summary
"""
prompt = PromptTemplate(
input_variables=["content"],
template=template,
)
chain = LLMChain(llm=self.llm, prompt=prompt)
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=1024, # Maximum size of chunks to return
chunk_overlap=50, # Overlap in tokens between chunks
)
# Create a Document object for each document
documents = [Document(page_content=text) for text in content]
# Split the documents into chunks
chunks = text_splitter.split_documents(documents)
results = [chain.run(content=chunk) for chunk in chunks]
print(results)
return results
def process_documents(self, file_paths):
results = []
for file_path in file_paths:
content = self.extract_content_from_url(file_path)
data = self.extract_structured_data(content)
if isinstance(data, list):
results.extend(data)
else:
results.append(data)
return results
# Example usage
if __name__ == '__main__':
document_processor = DocumentProcessor()
uploaded_files_paths = [r'Questionnaire_2021 (1).pdf']
processed_results = document_processor.process_documents(uploaded_files_paths)
if len(processed_results) > 0:
try:
df = pd.DataFrame(processed_results)
print("Results:")
df.to_excel('output.xlsx', index=False)
print(df)
except Exception as e:
print(f"An error occurred while creating the DataFrame: {e}")
```
Can you please update the code I wrote above with the code you gave, i.e. the version that uses the vector store?
### Idea or request for content:
_No response_ | How to utilize the vector store for direct text while using Chat Models? | https://api.github.com/repos/langchain-ai/langchain/issues/9326/comments | 2 | 2023-08-16T15:37:35Z | 2023-11-22T16:05:59Z | https://github.com/langchain-ai/langchain/issues/9326 | 1,853,487,280 | 9,326 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Add support for `max_marginal_relevance_search` to `pgvector` vector stores
### Motivation
Would like to be able to do `max_marginal_relevance_search` over `pgvector` vector stores
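For concreteness, the call I'd like to be able to make against a `PGVector` store, mirroring the signature that other stores such as FAISS and Chroma already expose:
```
# desired usage -- currently unsupported because PGVector lacks the method
docs = store.max_marginal_relevance_search(
    "my query",
    k=4,          # results to return
    fetch_k=20,   # candidates fetched before MMR re-ranking
    lambda_mult=0.5,
)
```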
### Your contribution
N/A | pgvector support for max_marginal_relevance_search | https://api.github.com/repos/langchain-ai/langchain/issues/9325/comments | 1 | 2023-08-16T15:20:33Z | 2023-09-17T21:38:32Z | https://github.com/langchain-ai/langchain/issues/9325 | 1,853,458,017 | 9,325 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Instead of embeddings, I'm using chat models with `RecursiveCharacterTextSplitter`, because they simply return the response without any further justification. The input PDF files I'm providing have a 36k input length, and it takes a long time to return the output when using the straightforward code below. Can we store the direct text in any vector database and use it to get a faster response?
Note: I'm not talking about embeddings; I'm trying to use the chat models from OpenAI directly.
Below is the code which i'm using
```
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from dotenv import load_dotenv
from pytesseract import image_to_string
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.schema import Document
from PIL import Image
from io import BytesIO
import pypdfium2 as pdfium
import pandas as pd
import pytesseract
import json
import os
import requests
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'
load_dotenv()
os.environ["OPENAI_API_KEY"] = "sk-H......."
class DocumentProcessor:
def __init__(self):
self.llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")
def convert_pdf_to_images(self, file_path, scale=300 / 72):
pdf_file = pdfium.PdfDocument(file_path)
page_indices = [i for i in range(len(pdf_file))]
renderer = pdf_file.render(
pdfium.PdfBitmap.to_pil,
page_indices=page_indices,
scale=scale,
)
final_images = []
for i, image in zip(page_indices, renderer):
image_byte_array = BytesIO()
image.save(image_byte_array, format='jpeg', optimize=True)
image_byte_array = image_byte_array.getvalue()
final_images.append(dict({i: image_byte_array}))
return final_images
def extract_text_from_img(self, list_dict_final_images):
image_list = [list(data.values())[0] for data in list_dict_final_images]
image_content = []
for index, image_bytes in enumerate(image_list):
image = Image.open(BytesIO(image_bytes))
raw_text = str(image_to_string(image))
image_content.append(raw_text)
return "\\n".join(image_content)
def extract_content_from_url(self, url):
images_list = self.convert_pdf_to_images(url)
text_with_pytesseract = self.extract_text_from_img(images_list)
return text_with_pytesseract
def extract_structured_data(self, content):
template = """
You are an expert admin people who will extract core information from documents
{content}
Above is the content; please try to return the abstract and extractive summary
"""
prompt = PromptTemplate(
input_variables=["content"],
template=template,
)
chain = LLMChain(llm=self.llm, prompt=prompt)
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=1024, # Maximum size of chunks to return
chunk_overlap=50, # Overlap in tokens between chunks
)
# Create a Document object for each document
documents = [Document(page_content=text) for text in content]
# Split the documents into chunks
chunks = text_splitter.split_documents(documents)
results = [chain.run(content=chunk) for chunk in chunks]
print(results)
return results
def process_documents(self, file_paths):
results = []
for file_path in file_paths:
content = self.extract_content_from_url(file_path)
data = self.extract_structured_data(content)
if isinstance(data, list):
results.extend(data)
else:
results.append(data)
return results
# Example usage
if __name__ == '__main__':
document_processor = DocumentProcessor()
uploaded_files_paths = [r'Questionnaire_2021 (1).pdf']
processed_results = document_processor.process_documents(uploaded_files_paths)
if len(processed_results) > 0:
try:
df = pd.DataFrame(processed_results)
print("Results:")
df.to_excel('output.xlsx', index=False)
print(df)
except Exception as e:
print(f"An error occurred while creating the DataFrame: {e}")
```
Can anyone please help me with this?
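One direction I'm exploring in the meantime is caching the chat model's responses instead of using a vector store — a sketch (this only helps when an identical prompt is sent again, and assumes the built-in cache works for chat models):
```
import langchain
from langchain.cache import SQLiteCache

# identical (prompt, model) calls are served from the local DB, skipping the API
langchain.llm_cache = SQLiteCache(database_path=".langchain_cache.db")
```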
### Idea or request for content:
_No response_ | Is there any option to store direct text to VectorDB to get faster response? | https://api.github.com/repos/langchain-ai/langchain/issues/9324/comments | 9 | 2023-08-16T15:09:40Z | 2023-12-19T00:49:13Z | https://github.com/langchain-ai/langchain/issues/9324 | 1,853,439,627 | 9,324 |
[
"langchain-ai",
"langchain"
] | ### System Info
Python 3.10.6
Langchain 0.0.220
### Who can help?
@3
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When I instantiate the model and chain like this:
```
llm = Bedrock(model_id="anthropic.claude-v1",
model_kwargs={'max_tokens_to_sample': 2048,
'temperature': 1.0,
'top_k': 250,
'top_p': 0.999,
'stop_sequences': ['Human:']},
client=bedrock_client)
chain = load_qa_chain(llm, chain_type='stuff', verbose=False)
```
and proceed with `chain.run(input_documents=context, question=query)`, I am getting an error when `len(context) > 50`. However, I am passing `max_tokens_to_sample = 2048`. If `len(context) < 50` it works perfectly fine. The error I am getting is:
```
Traceback (most recent call last):
File "/home/jlopezcolmenarejo/Dev/ai-assistant-adopcat-poc/lambdas/processQnA/trigger.py", line 59, in <module>
main()
File "/home/jlopezcolmenarejo/Dev/ai-assistant-adopcat-poc/lambdas/processQnA/trigger.py", line 51, in main
response = handle_event(q, None)
File "/home/jlopezcolmenarejo/Dev/ai-assistant-adopcat-poc/lambdas/processQnA/handler.py", line 92, in handle_event
response = ask_question(embeddings, chain, indexes, question)
File "/home/jlopezcolmenarejo/Dev/ai-assistant-adopcat-poc/lambdas/processQnA/utils.py", line 225, in ask_question
return chain.run(input_documents=context, question=query)
File "/home/jlopezcolmenarejo/Dev/ai-assistant-adopcat-poc/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 293, in run
return self(kwargs, callbacks=callbacks, tags=tags)[_output_key]
File "/home/jlopezcolmenarejo/Dev/ai-assistant-adopcat-poc/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 166, in __call__
raise e
File "/home/jlopezcolmenarejo/Dev/ai-assistant-adopcat-poc/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 160, in __call__
self._call(inputs, run_manager=run_manager)
File "/home/jlopezcolmenarejo/Dev/ai-assistant-adopcat-poc/.venv/lib/python3.10/site-packages/langchain/chains/combine_documents/base.py", line 84, in _call
output, extra_return_dict = self.combine_docs(
File "/home/jlopezcolmenarejo/Dev/ai-assistant-adopcat-poc/.venv/lib/python3.10/site-packages/langchain/chains/combine_documents/stuff.py", line 87, in combine_docs
return self.llm_chain.predict(callbacks=callbacks, **inputs), {}
File "/home/jlopezcolmenarejo/Dev/ai-assistant-adopcat-poc/.venv/lib/python3.10/site-packages/langchain/chains/llm.py", line 252, in predict
return self(kwargs, callbacks=callbacks)[self.output_key]
File "/home/jlopezcolmenarejo/Dev/ai-assistant-adopcat-poc/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 166, in __call__
raise e
File "/home/jlopezcolmenarejo/Dev/ai-assistant-adopcat-poc/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 160, in __call__
self._call(inputs, run_manager=run_manager)
File "/home/jlopezcolmenarejo/Dev/ai-assistant-adopcat-poc/.venv/lib/python3.10/site-packages/langchain/chains/llm.py", line 92, in _call
response = self.generate([inputs], run_manager=run_manager)
File "/home/jlopezcolmenarejo/Dev/ai-assistant-adopcat-poc/.venv/lib/python3.10/site-packages/langchain/chains/llm.py", line 102, in generate
return self.llm.generate_prompt(
File "/home/jlopezcolmenarejo/Dev/ai-assistant-adopcat-poc/.venv/lib/python3.10/site-packages/langchain/llms/base.py", line 141, in generate_prompt
return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
File "/home/jlopezcolmenarejo/Dev/ai-assistant-adopcat-poc/.venv/lib/python3.10/site-packages/langchain/llms/base.py", line 227, in generate
output = self._generate_helper(
File "/home/jlopezcolmenarejo/Dev/ai-assistant-adopcat-poc/.venv/lib/python3.10/site-packages/langchain/llms/base.py", line 178, in _generate_helper
raise e
File "/home/jlopezcolmenarejo/Dev/ai-assistant-adopcat-poc/.venv/lib/python3.10/site-packages/langchain/llms/base.py", line 165, in _generate_helper
self._generate(
File "/home/jlopezcolmenarejo/Dev/ai-assistant-adopcat-poc/.venv/lib/python3.10/site-packages/langchain/llms/base.py", line 525, in _generate
self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
File "/home/jlopezcolmenarejo/Dev/ai-assistant-adopcat-poc/.venv/lib/python3.10/site-packages/langchain/llms/bedrock.py", line 190, in _call
raise ValueError(f"Error raised by bedrock service: {e}")
ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModel operation: The provided inference configurations are invalid
```
I have checked that `LLMInputOutputAdapter.prepare_input` in `bedrock.py` does NOT run `input_body["max_tokens_to_sample"] = 50`. In fact, it just adds the prompt. It returns something like:
`{'max_tokens_to_sample': 2048, 'temperature': 1.0, 'top_k': 250, 'top_p': 0.999, 'stop_sequences': ['Human:'], 'prompt': "Use the following pieces of context to answer the question at the end. Blah blah blah ...}`
### Expected behavior
No errors should be raised. | Inference parameters for Bedrock anthropic model showing problems with max_tokens_for_sample parameter | https://api.github.com/repos/langchain-ai/langchain/issues/9319/comments | 7 | 2023-08-16T14:25:02Z | 2023-10-22T04:44:28Z | https://github.com/langchain-ai/langchain/issues/9319 | 1,853,359,140 | 9,319 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
I developed a piece of code that reads data from a PDF file, sends it to a chat model, and returns the result. What should I do if the input is too long for the chat model? I attempted embeddings, but in my opinion those answers are simplistic and not as good as the chat models'. For the chat models, I tried using `RecursiveCharacterTextSplitter`, but it didn't work. The code is below.
```
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from dotenv import load_dotenv
from pytesseract import image_to_string
from langchain.text_splitter import RecursiveCharacterTextSplitter
from PIL import Image
from io import BytesIO
import pypdfium2 as pdfium
import pandas as pd
import pytesseract
import json
import os
import requests
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'
load_dotenv()
os.environ["OPENAI_API_KEY"] = "sk-H................."
class DocumentProcessor:
def __init__(self):
self.llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")
def convert_pdf_to_images(self, file_path, scale=300 / 72):
pdf_file = pdfium.PdfDocument(file_path)
page_indices = [i for i in range(len(pdf_file))]
renderer = pdf_file.render(
pdfium.PdfBitmap.to_pil,
page_indices=page_indices,
scale=scale,
)
final_images = []
for i, image in zip(page_indices, renderer):
image_byte_array = BytesIO()
image.save(image_byte_array, format='jpeg', optimize=True)
image_byte_array = image_byte_array.getvalue()
final_images.append(dict({i: image_byte_array}))
return final_images
def extract_text_from_img(self, list_dict_final_images):
image_list = [list(data.values())[0] for data in list_dict_final_images]
image_content = []
for index, image_bytes in enumerate(image_list):
image = Image.open(BytesIO(image_bytes))
raw_text = str(image_to_string(image))
image_content.append(raw_text)
return "\\n".join(image_content)
def extract_content_from_url(self, url):
images_list = self.convert_pdf_to_images(url)
text_with_pytesseract = self.extract_text_from_img(images_list)
return text_with_pytesseract
def extract_structured_data(self, content):
template = """
You are an expert admin people who will extract core information from documents
{content}
Above is the content; please try to return the abstract and extractive summary
"""
prompt = PromptTemplate(
input_variables=["content"],
template=template,
)
chain = LLMChain(llm=self.llm, prompt=prompt)
# results = chain.run(content=content, data_points=self.data_points)
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
chunks = text_splitter.split_documents(content)
results = [chain.run(content=chunk) for chunk in chunks]
print(results)
return results
def process_documents(self, file_paths):
results = []
for file_path in file_paths:
content = self.extract_content_from_url(file_path)
data = self.extract_structured_data(content)
if isinstance(data, list):
results.extend(data)
else:
results.append(data)
return results
# Example usage
if __name__ == '__main__':
document_processor = DocumentProcessor()
uploaded_files_paths = [r'Questionnaire_2021 (1).pdf']
processed_results = document_processor.process_documents(uploaded_files_paths)
if len(processed_results) > 0:
try:
df = pd.DataFrame(processed_results)
print("Results:")
df.to_excel('output.xlsx', index=False)
print(df)
except Exception as e:
print(f"An error occurred while creating the DataFrame: {e}")
```
Below is the error for above code
```
Traceback (most recent call last):
File "Documents\langchain_projects\Invoice Extraction using LLM\testing.py", line 117, in <module>
processed_results = document_processor.process_documents(uploaded_files_paths)
File "Documents\langchain_projects\Invoice Extraction using LLM\testing.py", line 104, in process_documents
data = self.extract_structured_data(content)
File "Documents\langchain_projects\Invoice Extraction using LLM\testing.py", line 95, in extract_structured_data
chunks = text_splitter.split_documents(content)
File "anaconda3\lib\site-packages\langchain\text_splitter.py", line 112, in split_documents
texts.append(doc.page_content)
AttributeError: 'str' object has no attribute 'page_content'
```
Below is the error if i'm not using RecursiveTextSplitter
`openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 36248 tokens. Please reduce the length of the messages.`
Could someone possibly explain to me how to apply RecursiveTextSplitter to chat models rather than embeddings?
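For reference, the variant I'm about to test — since my `content` is a plain string, `split_text` (rather than `split_documents`) looks like the right call:
```
chunks = text_splitter.split_text(content)  # content is a plain string, not Documents
results = [chain.run(content=chunk) for chunk in chunks]
```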
### Idea or request for content:
_No response_ | How to use RecursiveTextSplitter for Chat Models like OpenAI and LLama? | https://api.github.com/repos/langchain-ai/langchain/issues/9316/comments | 6 | 2023-08-16T13:56:22Z | 2023-11-26T16:06:54Z | https://github.com/langchain-ai/langchain/issues/9316 | 1,853,305,541 | 9,316 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain version 0.0.266, using python3
### Who can help?
@eyurtsev
@ago
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.embeddings import VertexAIEmbeddings
from langchain.vectorstores import PGVector

embeddings = VertexAIEmbeddings()
vectorstore = PGVector(
    collection_name=<collection_name>,
    connection_string=<connection_string>,
    embedding_function=embeddings,
)
vectorstore.delete(ids=[<some_id>])
```
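As an interim workaround I delete rows with raw SQL — a sketch whose table and column names assume the default langchain pgvector schema:
```
import sqlalchemy

engine = sqlalchemy.create_engine(connection_string)
with engine.begin() as conn:
    conn.execute(
        sqlalchemy.text("DELETE FROM langchain_pg_embedding WHERE custom_id = :id"),
        {"id": some_id},  # same id as in the repro above
    )
```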
### Expected behavior
The vector should be deleted and no error should be raised. | 'delete' function is not implemented in PGVector | https://api.github.com/repos/langchain-ai/langchain/issues/9312/comments | 3 | 2023-08-16T13:11:26Z | 2023-11-22T16:06:09Z | https://github.com/langchain-ai/langchain/issues/9312 | 1,853,216,544 | 9,312
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.266
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
In the `SelfQueryRetriever`, the code tries to generate a filter and simplifies the question.
But if the first step cannot generate a specific filter, the simplified version of the question is still used. This changes the answer completely.
It's possible to set `use_original_query` to force the usage of the original query.
But I think that if a filter can be generated, it's a good idea to use the simplified version of the question.
Otherwise, it's necessary to use the original question, because the filter is no longer involved.
Ask a question with a filter, but the filter is not associated with the `metadata_field_info`.
The calculated filter is empty and the simplified version is used.
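For reference, the flag I mentioned — a minimal sketch where `llm`, `vectorstore`, and `metadata_field_info` are placeholders for an existing setup:
```
from langchain.retrievers.self_query.base import SelfQueryRetriever

retriever = SelfQueryRetriever.from_llm(
    llm,
    vectorstore,
    document_contents="description of the documents",  # placeholder
    metadata_field_info=metadata_field_info,
    use_original_query=True,  # always answer with the user's original wording
)
```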
### Expected behavior
I think if it's possible to generate a filter, it's a good idea to use the simplified version of the question.
Else, it's *necessary* to use the original question, because the filter is no longer involved. | Use original_question if SelfQueryRetriever if the filter is empty | https://api.github.com/repos/langchain-ai/langchain/issues/9310/comments | 5 | 2023-08-16T12:51:20Z | 2024-03-26T16:05:31Z | https://github.com/langchain-ai/langchain/issues/9310 | 1,853,181,280 | 9,310 |
[
"langchain-ai",
"langchain"
] | ### System Info
Latest Python and LangChain version.
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Go to the documentation: https://python.langchain.com/docs/integrations/llms/huggingface_textgen_inference
2. Test the `Streaming` example:
```
from langchain.llms import HuggingFaceTextGenInference
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
llm = HuggingFaceTextGenInference(
inference_server_url="http://localhost:8010/",
max_new_tokens=512,
top_k=10,
top_p=0.95,
typical_p=0.95,
temperature=0.01,
repetition_penalty=1.03,
stream=True
)
llm("What did foo say about bar?", callbacks=[StreamingStdOutCallbackHandler()])
```
Throws an error:
```
pydantic.error_wrappers.ValidationError: 1 validation error for HuggingFaceTextGenInference
stream
extra fields not permitted (type=value_error.extra)
```
But `streaming=True` worked. So maybe it's a typo in the documentation.
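For anyone else hitting this, the call that works for me — only the field name changes:
```
llm = HuggingFaceTextGenInference(
    inference_server_url="http://localhost:8010/",
    max_new_tokens=512,
    streaming=True,  # `streaming`, not `stream`
)
llm("What did foo say about bar?", callbacks=[StreamingStdOutCallbackHandler()])
```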
### Expected behavior
To get working the example. | Huggingface TextGen Inference streaming example not working | https://api.github.com/repos/langchain-ai/langchain/issues/9308/comments | 2 | 2023-08-16T12:47:24Z | 2023-08-16T13:15:28Z | https://github.com/langchain-ai/langchain/issues/9308 | 1,853,173,863 | 9,308 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain `v0.0.264`
### Who can help?
@hwchase17 @eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create an agent using `create_csv_agent`.
2. Add the attached CSV
3. Run a query: `What is my MONTHLY BUDGET SUMMARY`
[Personalbudget.csv](https://github.com/langchain-ai/langchain/files/12359540/Personalbudget.csv)
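For reference, my agent setup is roughly the following (a sketch — the exact LLM settings may differ slightly):
```
from langchain.agents import create_csv_agent, AgentType
from langchain.chat_models import ChatOpenAI

agent = create_csv_agent(
    ChatOpenAI(temperature=0),
    "Personalbudget.csv",
    agent_type=AgentType.OPENAI_FUNCTIONS,
    verbose=True,
)
agent.run("What is my MONTHLY BUDGET SUMMARY")
```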
Trace:
```
raise OutputParserException(
langchain.schema.output_parser.OutputParserException: Could not parse tool input: {'name': 'python', 'arguments': "summary_row = df[df['Personal Monthly Budget'] == 'MONTHLY BUDGET SUMMARY']\nsummary_row"} because the `arguments` is not valid JSON.
```
### Expected behavior
The arguments passed to the parser should always be valid JSON. | `create_csv_agent` fails due to arguments from pandas not being valid JSON. | https://api.github.com/repos/langchain-ai/langchain/issues/9307/comments | 2 | 2023-08-16T12:46:43Z | 2023-11-22T16:06:14Z | https://github.com/langchain-ai/langchain/issues/9307 | 1,853,172,407 | 9,307
[
"langchain-ai",
"langchain"
] | ### System Info
Python 3.10.12
Langchain 0.0.266
### Who can help?
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Import any loader, or just use the `RecursiveCharacterTextSplitter` directly.
Define the chunk size as, let's say, 3950.
Take a text that is much smaller, for example 1k tokens.
Run the text splitter and receive multiple documents.
```
from langchain.text_splitter import SentenceTransformersTokenTextSplitter
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.document_loaders.youtube import YoutubeLoader
def get_token_count(text):
splitter = SentenceTransformersTokenTextSplitter(chunk_overlap=0)
text_token_count = splitter.count_tokens(text=text.replace("\n"," "))
return text_token_count
text_splitter = RecursiveCharacterTextSplitter(separators=["\n\n", "\n", " "], chunk_size=3950, chunk_overlap=100)
def summarize_youtube_video(youtube_url: str) -> str:
"""
Useful to get the summary of a youtube video. Applies if the user sends a youtube link.
"""
id_input = youtube_url.split("=")[1]
splitter = text_splitter
loader = YoutubeLoader(id_input)
docs = loader.load_and_split(splitter)
return docs
docs = summarize_youtube_video("https://www.youtube.com/watch?v=vXIkc40UiH0")
for item in docs:
print(get_token_count(item.page_content))
```

### Expected behavior
If the text is smaller than the chunk size, only one document should get returned. | RecursiveCharacterTextSplitter splits even if text is smaller than chunk size | https://api.github.com/repos/langchain-ai/langchain/issues/9305/comments | 3 | 2023-08-16T11:51:15Z | 2023-10-11T16:46:11Z | https://github.com/langchain-ai/langchain/issues/9305 | 1,853,080,158 | 9,305 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
How do I resolve "This model's maximum context length is 4097 tokens"?
If the graph DB has many nodes, all of those tokens cannot be sent to OpenAI in one request.
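A stopgap that helps me for now is pointing the chain at a larger-context model — a sketch, shown with the Cypher variant (the `graph` object is whatever was already constructed):
```
from langchain.chat_models import ChatOpenAI
from langchain.chains import GraphCypherQAChain

llm = ChatOpenAI(model_name="gpt-3.5-turbo-16k", temperature=0)  # 16k context window
chain = GraphCypherQAChain.from_llm(llm, graph=graph, verbose=True)  # graph assumed
```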
### Suggestion:
_No response_ | Graph DB QA chain | https://api.github.com/repos/langchain-ai/langchain/issues/9303/comments | 6 | 2023-08-16T09:37:54Z | 2024-01-02T21:17:04Z | https://github.com/langchain-ai/langchain/issues/9303 | 1,852,872,362 | 9,303 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain/0.0.258, Python 3.10.10
### Who can help?
@hw
@issam9
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
According to the documentation/tutorials, a vectorstore is created by these steps:
1. Create a list of documents by some documents loaders
2. Use a text splitter for splitting the documents into smaller chunks
3. Pass the resulting list to the vectorstore
See the tutorial https://learn.deeplearning.ai/langchain-chat-with-your-data/lesson/4/vectorstores-and-embedding
In the case of `HuggingFaceEmbeddings`, the computed embeddings of the chunks of the documents can influence each other.
This is caused by the fact that `HuggingFaceEmbeddings` makes a single call to the 'encode' method of the SentenceTransformer class, see https://github.com/langchain-ai/langchain/blob/1d55141c5016a2e197d3eed2844d55460d207801/libs/langchain/langchain/embeddings/huggingface.py#L91
The SentenceTransformer has a pooling layer as its last layer, with 'pooling_mode_cls_token' or 'pooling_mode_mean'. And in the
'encode' method the pooling layer is not cleared while looping through the sentences, see https://github.com/UKPLab/sentence-transformers/blob/a458ce79c40fef93d5ecc66931b446ea65fdd017/sentence_transformers/SentenceTransformer.py#L159
As a result, the embeddings of the chunks influence each other through the pooling layer.
You can reproduce it with the following code snippet:
```
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.schema import Document
from langchain.vectorstores import Chroma

def print_document_embeddings(splits: list[Document], document_name: str):
    model_name = 'sentence-transformers/multi-qa-mpnet-base-dot-v1'
    embedding = HuggingFaceEmbeddings(model_name=model_name, model_kwargs={}, encode_kwargs={})
    vectordb = Chroma.from_documents(splits, embedding)
    dump = vectordb.get(include=['embeddings', 'metadatas', 'documents'])
    for metadata, document, vector in zip(dump['metadatas'], dump['documents'], dump['embeddings']):
        if metadata['source'] == document_name:
            print(f'Embedding: {vector[:3]}...')
            print(f'Document: {document}')
            print('-' * 10)
    vectordb.delete()

doc1 = Document(page_content='The embedding vectors of a document must not depend on any other documents.', metadata={'source': 'document 1'})
doc2 = Document(page_content='The grass is green.', metadata={'source': 'document 2'})
doc3 = Document(page_content=' Any text larger the document 1.'*10, metadata={'source': 'document 3'})

print('Vector store with [doc1,doc2]')
print_document_embeddings([doc1, doc2], 'document 1')
print('Vector store with [doc1,doc3]')
print_document_embeddings([doc1, doc3], 'document 1')
```
In the output you can see that the embedding is not the same (the changes are small because document 3 is small, but they can be arbitrarily large):
```
Vector store with [doc1,doc2]
Embedding: [0.03044097125530243, -0.5951688289642334, -0.11154567450284958]...
Document: The embedding vectors of a document must not depend on any other documents.
----------
Vector store with [doc1,doc3]
Embedding: [0.030441032722592354, -0.5951685905456543, -0.11154570430517197]...
Document: The embedding vectors of a document must not depend on any other documents.
```
### Expected behavior
The expected behavior is that the embedding of a document is always the same; it cannot be influenced by any other documents in the vectorstore. | With HuggingFaceEmbeddings, embedding of individual documents in the vectorstore can influence each other | https://api.github.com/repos/langchain-ai/langchain/issues/9301/comments | 4 | 2023-08-16T09:19:03Z | 2024-03-13T19:59:19Z | https://github.com/langchain-ai/langchain/issues/9301 | 1,852,841,614 | 9,301
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hello, I have a question that I couldn't find answered in the official documentation. If I have another LLM that provides me with a calling API, how can I use it the way OpenAI is used? For example, in https://python.langchain.com/docs/use_cases/sql in the official documentation, I want to replace OpenAI with an API provided by someone else — what should I do?
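From what I can tell, the way to do this would be a custom LLM wrapper — a sketch of what I mean, where the endpoint URL and the response JSON shape are placeholders for whatever the provider actually exposes:
```
from typing import Any, List, Optional
import requests
from langchain.llms.base import LLM

class MyProviderLLM(LLM):
    api_url: str = "https://example.com/v1/generate"  # placeholder endpoint

    @property
    def _llm_type(self) -> str:
        return "my-provider"

    def _call(self, prompt: str, stop: Optional[List[str]] = None, **kwargs: Any) -> str:
        resp = requests.post(self.api_url, json={"prompt": prompt})
        resp.raise_for_status()
        return resp.json()["text"]  # assumed response field
```
An instance of this class should then be usable anywhere the docs use `OpenAI(...)`, e.g. when building the SQL chain.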
### Suggestion:
_No response_ | other llm api | https://api.github.com/repos/langchain-ai/langchain/issues/9299/comments | 4 | 2023-08-16T08:56:57Z | 2023-11-27T16:07:27Z | https://github.com/langchain-ai/langchain/issues/9299 | 1,852,805,480 | 9,299 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Retrying langchain.embeddings.openai.embed_with_retry.<locals>._embed_with_retry in 4.0 seconds as it raised RateLimitError: Rate limit reached for default-text-embedding-ada-002 in organization org-uIkxFSWUeCDpCsfzD5XWYLZ7 on tokens per min. Limit: 1000000 / min. Current: 837303 / min. Contact us through our help center at help.openai.com if you continue to have issues..
How can I fix this?
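For now I throttle the requests myself — a sketch where the batch size and sleep are guesses tuned against the 1M tokens/min limit (`texts` stands for my chunk strings):
```
import time
from langchain.embeddings.openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
vectors = []
for i in range(0, len(texts), 100):   # texts: the strings to embed
    vectors.extend(embeddings.embed_documents(texts[i:i + 100]))
    time.sleep(5)                     # stay under the tokens/min limit
```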
### Suggestion:
_No response_ | Issue: embedding rate limit error | https://api.github.com/repos/langchain-ai/langchain/issues/9298/comments | 5 | 2023-08-16T08:42:15Z | 2024-03-13T19:59:21Z | https://github.com/langchain-ai/langchain/issues/9298 | 1,852,782,550 | 9,298 |
[
"langchain-ai",
"langchain"
] | ### System Info
When I run a vector search in Azure Cognitive Search using AzureSearch it fails saying, "The 'value' property of the vector query can't be null or an empty array." (full error at the bottom) My code hasn't changed from last week when it used to work. I've got version 11.4.0b6 of azure-search-documents installed. I suspect that Cognitive Search has changed its signature or implementation but the Langchain connection stuff hasn't been updated.
Someone else has reported the same error message but the workaround doesn't work for me (https://github.com/langchain-ai/langchain/issues/7841).
The following is a simplified version of the code which I got from the samples at https://python.langchain.com/docs/integrations/vectorstores/azuresearch. This also fails with the same error.
```
import os
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores.azuresearch import AzureSearch

os.environ["AZURE_COGNITIVE_SEARCH_SERVICE_NAME"] = '<service_name>'
os.environ["AZURE_COGNITIVE_SEARCH_INDEX_NAME"] = '<index_name>'
os.environ["AZURE_COGNITIVE_SEARCH_API_KEY"] = '<key>'
os.environ["OPENAI_API_KEY"] = '<key2>'

model = 'text-embedding-ada-002'
embeddings: OpenAIEmbeddings = OpenAIEmbeddings(deployment=model, chunk_size=1)
index_name: str = "website"
vector_store: AzureSearch = AzureSearch(
    azure_search_endpoint='https://<service_name>.search.windows.net',
    azure_search_key='<key>',
    index_name=index_name,
    embedding_function=embeddings.embed_query,
)

docs = vector_store.hybrid_search(
    query="Test query",
    k=3
)
print(docs[0].page_content)
```
The full error is:
Exception has occurred: HttpResponseError
(InvalidRequestParameter) The 'value' property of the vector query can't be null or an empty array. Make sure to enclose the vector within a "value" property: '{"vector": { "value": [ <vector contents> ] } }'
Parameter name: vector
Code: InvalidRequestParameter
Message: The 'value' property of the vector query can't be null or an empty array. Make sure to enclose the vector within a "value" property: '{"vector": { "value": [ <vector contents> ] } }'
Parameter name: vector
Exception Details: (InvalidVectorQuery) The 'value' property of the vector query can't be null or an empty array. Make sure to enclose the vector within a "value" property: '{"vector": { "value": [ <vector contents> ] } }'
Code: InvalidVectorQuery
Message: The 'value' property of the vector query can't be null or an empty array. Make sure to enclose the vector within a "value" property: '{"vector": { "value": [ <vector contents> ] } }'
File "C:\Work\AI Search\SearchCognitiveSearchScratch.py", line 22, in <module>
docs = vector_store.hybrid_search(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
azure.core.exceptions.HttpResponseError: (InvalidRequestParameter) The 'value' property of the vector query can't be null or an empty array. Make sure to enclose the vector within a "value" property: '{"vector": { "value": [ <vector contents> ] } }'
Parameter name: vector
Code: InvalidRequestParameter
Message: The 'value' property of the vector query can't be null or an empty array. Make sure to enclose the vector within a "value" property: '{"vector": { "value": [ <vector contents> ] } }'
Parameter name: vector
Exception Details: (InvalidVectorQuery) The 'value' property of the vector query can't be null or an empty array. Make sure to enclose the vector within a "value" property: '{"vector": { "value": [ <vector contents> ] } }'
Code: InvalidVectorQuery
Message: The 'value' property of the vector query can't be null or an empty array. Make sure to enclose the vector within a "value" property: '{"vector": { "value": [ <vector contents> ] } }'
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run:
```
pip install azure-search-documents==11.4.0b6
pip install azure-identity
```
Run (replacing with the keys, service names, index names and service endpoint):
```
import os
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores.azuresearch import AzureSearch

os.environ["AZURE_COGNITIVE_SEARCH_SERVICE_NAME"] = '<service_name>'
os.environ["AZURE_COGNITIVE_SEARCH_INDEX_NAME"] = '<index_name>'
os.environ["AZURE_COGNITIVE_SEARCH_API_KEY"] = '<key>'
os.environ["OPENAI_API_KEY"] = '<key2>'

model = 'text-embedding-ada-002'
embeddings: OpenAIEmbeddings = OpenAIEmbeddings(deployment=model, chunk_size=1)
index_name: str = "website"
vector_store: AzureSearch = AzureSearch(
    azure_search_endpoint='https://<service_name>.search.windows.net',
    azure_search_key='<key>',
    index_name=index_name,
    embedding_function=embeddings.embed_query,
)

docs = vector_store.hybrid_search(
    query="Test query",
    k=3
)
print(docs[0].page_content)
```
### Expected behavior
Return documents from the index. | Error running vector search in Azure Cognitive Search - The 'value' property of the vector query can't be null or an empty array. | https://api.github.com/repos/langchain-ai/langchain/issues/9297/comments | 2 | 2023-08-16T08:13:32Z | 2023-08-18T00:16:12Z | https://github.com/langchain-ai/langchain/issues/9297 | 1,852,738,816 | 9,297 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
In LangChain 0.0.240, the `ApifyWrapper` class was removed. This caused a breaking change and broke any code using this class.
It was deleted in this PR https://github.com/langchain-ai/langchain/pull/8106 @hwchase17
Could you please outline the reason it was removed?
Other issues related to this one: https://github.com/langchain-ai/langchain/issues/8201 and https://github.com/langchain-ai/langchain/issues/8307
### Suggestion:
I'd suggest adding the integration back so that code written using it will continue to work.
I'm offering to take ownership of the Apify-related code, so if any issue arises, you can just tag me and I'll resolve it. | Issue: `ApifyWrapper` was removed from the codebase breaking user's code | https://api.github.com/repos/langchain-ai/langchain/issues/9294/comments | 3 | 2023-08-16T07:20:48Z | 2023-08-31T23:00:59Z | https://github.com/langchain-ai/langchain/issues/9294 | 1,852,660,023 | 9,294 |
[
"langchain-ai",
"langchain"
] | ### Feature request
It would be nice to have a fake vectorstore to make testing retrieval chains easier.
### Motivation
Testing retrieval chains.
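Something like the sketch below is what I have in mind — today I work around it with a fake retriever, but a proper fake vectorstore would also cover `as_retriever()`, MMR, etc. (method signatures assumed from the current `BaseRetriever`):
```
from typing import Any, List
from langchain.schema import BaseRetriever, Document

class FakeRetriever(BaseRetriever):
    """Returns canned documents -- no embeddings, no network."""
    docs: List[Document] = [Document(page_content="stub context")]

    def _get_relevant_documents(self, query: str, **kwargs: Any) -> List[Document]:
        return self.docs

    async def _aget_relevant_documents(self, query: str, **kwargs: Any) -> List[Document]:
        return self.docs
```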
### Your contribution
yes, I'm happy to help. | Fake vectorstore | https://api.github.com/repos/langchain-ai/langchain/issues/9292/comments | 2 | 2023-08-16T05:48:38Z | 2023-11-22T16:06:29Z | https://github.com/langchain-ai/langchain/issues/9292 | 1,852,555,549 | 9,292 |
[
"langchain-ai",
"langchain"
] | ### System Info
0.0.247
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
run
```
messages = []
messages.append(SystemMessage(content="abc1"))
json.dumps(messages, ensure_ascii=False)
```
```
File "C:\Users\64478\AppData\Local\Programs\Python\Python310\lib\json\__init__.py", line 238, in dumps
**kw).encode(obj)
File "C:\Users\64478\AppData\Local\Programs\Python\Python310\lib\json\encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "C:\Users\64478\AppData\Local\Programs\Python\Python310\lib\json\encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "C:\Users\64478\AppData\Local\Programs\Python\Python310\lib\json\encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type SystemMessage is not JSON serializable
```
### Expected behavior
I think this might not be a bug.
But it would be best if the JSON output were the following (since I am using OpenAI):
```
{"role": "system", "message": "abc1你好"}
```
How can I do this?
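What I'm doing for now is a manual mapping — a sketch assuming the role names OpenAI expects (note OpenAI uses `content`, not `message`):
```
import json
from langchain.schema import AIMessage, HumanMessage, SystemMessage

ROLE = {SystemMessage: "system", HumanMessage: "user", AIMessage: "assistant"}

def to_openai_dicts(msgs):
    return [{"role": ROLE[type(m)], "content": m.content} for m in msgs]

print(json.dumps(to_openai_dicts(messages), ensure_ascii=False))
# [{"role": "system", "content": "abc1"}]
```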
----------
What the AI suggested does not produce what I expected:
```
from langchain.load.dump import dumps
message1 = SystemMessage(content="abc1你好")
message2 = HumanMessage(content="123")
messages = [message1, message1]
print(dumps(messages))
```
is printing:
```json
[{"lc": 1, "type": "constructor", "id": ["langchain", "schema", "messages", "SystemMessage"], "kwargs": {"content": "abc1\u4f60\u597d"}}, {"lc": 1, "type": "constructor", "id": ["langchain", "schema", "messages", "SystemMessage"], "kwargs": {"content": "abc1\u4f60\u597d"}}]
```
And how can I use ensure_ascii with langchain.load.dump ? | Object of type SystemMessage is not JSON serializable | https://api.github.com/repos/langchain-ai/langchain/issues/9288/comments | 4 | 2023-08-16T03:59:44Z | 2024-02-13T16:13:58Z | https://github.com/langchain-ai/langchain/issues/9288 | 1,852,465,014 | 9,288 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain 0.0.265, Mac M2 Pro Hardware. Python 3.10.0
### Who can help?
@hwchase17
@ago
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
llm = HuggingFaceEndpoint(
    endpoint_url="<endpoint-url>",  # endpoint URL truncated in the original paste
    task="summarization",
    huggingfacehub_api_token="<token>",
)
vectorstore_multi_qa = Pinecone(index, embedding_multi_qa.embed_query, "text")
qa_chain = RetrievalQAWithSourcesChain.from_llm(
    llm=llm,
    # chain_type="stuff",
    retriever=vectorstore_multi_qa.as_retriever(),
    return_source_documents=True,
)

query = "What is Blockchain?"
# Send the question as a query to the qa chain
result = qa_chain({"question": query})
```
### Expected behavior
Return something | KeyError: 'summary_text' when trying HuggingFaceEndpoint with task = "summarization" even when the VALID_TASKS include it | https://api.github.com/repos/langchain-ai/langchain/issues/9286/comments | 2 | 2023-08-16T03:24:56Z | 2023-11-22T16:06:34Z | https://github.com/langchain-ai/langchain/issues/9286 | 1,852,442,074 | 9,286 |
[
"langchain-ai",
"langchain"
] | ### System Info
Name: langchain
Version: 0.0.265
Python 3.10.7
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
prompt = st.text_input("enter search key")
llm = OpenAI(temperature=0.9)
script_memory = ConversationBufferMemory(input_key='title', memory_key='history')
script_template = PromptTemplate(
    input_variables=['title', 'wikiepedia_research'],
    template='Write me Youtube video script based on this TITLE: {title} while leveraging this wikipedia research: {wikiepedia_research}'
)
script_chain = LLMChain(llm=llm, prompt=script_template, verbose=True, output_key='script', memory=script_memory)

wiki = WikipediaAPIWrapper()
wiki_research = wiki.run(prompt)

# key should be wikipedia_research
script = script_chain.run(title=title, wikiepedia_research=wiki_research)
```

### Expected behavior
```
script = script_chain.run(title=title, wikipedia_research=wiki_research)
```
| Typo in Variable Name in script_chain.run() | https://api.github.com/repos/langchain-ai/langchain/issues/9285/comments | 1 | 2023-08-16T02:40:14Z | 2023-11-22T16:06:39Z | https://github.com/langchain-ai/langchain/issues/9285 | 1,852,412,897 | 9,285 |
[
"langchain-ai",
"langchain"
] | https://github.com/langchain-ai/langchain/blob/e986afa13a1de73f403eebe05bd4b25781c12788/libs/langchain/langchain/chains/combine_documents/stuff.py#L96C76-L96C76
If values["document_variable_name"] happens to be equal to one of the variable names in llm_chain_variables, the if condition check would fail, skipping the error throw. | If values["document_variable_name"] happens to be equal to one of the variable names in llm_chain_variables, the if condition check would fail, skipping the error throw. | https://api.github.com/repos/langchain-ai/langchain/issues/9284/comments | 1 | 2023-08-16T02:16:41Z | 2023-08-16T02:19:58Z | https://github.com/langchain-ai/langchain/issues/9284 | 1,852,397,983 | 9,284 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
See this page: https://python.langchain.com/docs/use_cases/multi_modal/image_agent

The `from langchain import OpenAI` class was not extracted into the `API Reference:` section.
My guess is that was because the `langchain import` doesn't have `.` in the namespace. Other classes with `dot` were extracted.
Another example: https://python.langchain.com/docs/use_cases/more/code_writing/llm_math
### Idea or request for content:
_No response_ | DOC: missed items in the `API Reference:` auto-generated section | https://api.github.com/repos/langchain-ai/langchain/issues/9282/comments | 2 | 2023-08-16T00:40:19Z | 2023-11-15T16:41:49Z | https://github.com/langchain-ai/langchain/issues/9282 | 1,852,329,561 | 9,282 |
[
"langchain-ai",
"langchain"
] | ### System Info
I'm using AWS Sagemaker Jumpstart model for Llama2 13b: meta-textgeneration-llama-2-13b-f
On running a Langchain summarize chain with chain_type="map_reduce" I get the below error. Other chain types (refine, stuff) work without issues. I do not have access to https://huggingface.co/ from my environment. Is there a way to set the gpt2 tokenizer in a local dir?
```
parameters = {
"properties": {
"min_length": 100,
"max_length": 1024,
"do_sample": True,
"top_p": 0.9,
"repetition_penalty": 1.03,
"temperature": 0.8
}
}
class ContentHandler(LLMContentHandler):
content_type = "application/json"
accepts = "application/json"
def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes:
input_str = json.dumps({"inputs": prompt, **model_kwargs})
print(input_str)
return input_str.encode("utf-8")
def transform_output(self, output: bytes) -> str:
response_json = json.loads(output.read().decode("utf-8"))
return response_json[0]["generation"]["content"]
content_handler = ContentHandler()
endpoint_name='xxxxxxxxxxxxxxxxxx'
llm=SagemakerEndpoint(
endpoint_name=endpoint_name,
region_name="us-east-1",
model_kwargs=parameters,
content_handler=content_handler,
endpoint_kwargs={"CustomAttributes": 'accept_eula=true'}
)
chain = load_summarize_chain(llm, chain_type="map_reduce")
chain.run(docs)
```
Error:
```
File /usr/local/lib/python3.9/site-packages/transformers/tokenization_utils_base.py:1788, in PreTrainedTokenizerBase.from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
1782 logger.info(
1783 f"Can't load following files from cache: {unresolved_files} and cannot check if these "
1784 "files are necessary for the tokenizer to operate."
1785 )
1787 if all(full_file_name is None for full_file_name in resolved_vocab_files.values()):
-> 1788 raise EnvironmentError(
1789 f"Can't load tokenizer for '{pretrained_model_name_or_path}'. If you were trying to load it from "
1790 "'https://huggingface.co/models', make sure you don't have a local directory with the same name. "
1791 f"Otherwise, make sure '{pretrained_model_name_or_path}' is the correct path to a directory "
1792 f"containing all relevant files for a {cls.__name__} tokenizer."
1793 )
1795 for file_id, file_path in vocab_files.items():
1796 if file_id not in resolved_vocab_files:
OSError: Can't load tokenizer for 'gpt2'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'gpt2' is the correct path to a directory containing all relevant files for a GPT2TokenizerFast tokenizer.
```
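A workaround sketch I'm considering: save the tokenizer on a connected machine with `GPT2TokenizerFast.from_pretrained("gpt2").save_pretrained("./gpt2-local")`, copy the directory over, and override token counting so it never reaches huggingface.co (the local path is hypothetical):
```
from transformers import GPT2TokenizerFast

_local_tokenizer = GPT2TokenizerFast.from_pretrained("./gpt2-local")  # hypothetical dir

class LocalTokenSagemakerEndpoint(SagemakerEndpoint):
    """Same endpoint, but token counting stays offline."""

    def get_num_tokens(self, text: str) -> int:
        return len(_local_tokenizer.encode(text))
```
`llm` would then be constructed from `LocalTokenSagemakerEndpoint` with the same kwargs as above.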
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Provided above
### Expected behavior
Provided above | How do fix GPT2 Tokenizer error in Langchain map_reduce (LLama2)? | https://api.github.com/repos/langchain-ai/langchain/issues/9273/comments | 6 | 2023-08-15T21:01:48Z | 2024-01-01T18:11:07Z | https://github.com/langchain-ai/langchain/issues/9273 | 1,852,123,223 | 9,273 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```
llm = ChatOpenAI(temperature=0, model_name=args.ModelName, verbose=True, streaming=True, callbacks=[MyCallbackHandler(new_payload)])

class StaticSearchTool(BaseTool):
    name = "Search_QA_System"
    description = "Use this tool to answer current events"
    # return_direct = True

    def _run(
        self, query: str, run_manager: Optional[CallbackManagerForToolRun] = None
    ) -> str:
        """Use the tool."""
        return qa.run(query)

    async def _arun(
        self, query: str, run_manager: Optional[AsyncCallbackManagerForToolRun] = None
    ) -> str:
        """Use the tool asynchronously."""
        return await qa.arun(query)

def define_search_tool():
    search = [StaticSearchTool()]
    return search

agent_executor = initialize_agent(
    define_search_tool(),
    llm,
    system_message=system_message,
    extra_prompt_messages=[MessagesPlaceholder(variable_name="memory")],
    agent_instructions="Analyse the query closely and decide the query intent. If the query requires the internet to search, then use google-serper to answer the query. Try to invoke `google-serper` most of the time",
    agent=AgentType.OPENAI_FUNCTIONS,
    agent_kwargs=agent_kwargs,
    verbose=True,
    memory=memory,
    handle_parsing_errors=True,
)
```
In this code, I need to stream the output of the `StaticSearchTool`, not the output of the agent.
How can we do this?
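What I think might work — giving the inner chain its own streaming LLM so the tool's tokens hit the callback instead of the agent's final answer (a sketch; `RetrievalQA` and `retriever` stand in for however `qa` is actually built):
```
from langchain.chains import RetrievalQA

tool_llm = ChatOpenAI(
    temperature=0,
    model_name=args.ModelName,
    streaming=True,
    callbacks=[MyCallbackHandler(new_payload)],  # tool output streams here
)
qa = RetrievalQA.from_chain_type(llm=tool_llm, retriever=retriever)  # retriever assumed

# the agent itself gets a plain, non-streaming llm:
agent_llm = ChatOpenAI(temperature=0, model_name=args.ModelName)
```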
### Suggestion:
_No response_ | Stream the response of the custom tools in the agent | https://api.github.com/repos/langchain-ai/langchain/issues/9271/comments | 3 | 2023-08-15T20:48:35Z | 2023-11-22T16:06:44Z | https://github.com/langchain-ai/langchain/issues/9271 | 1,852,108,322 | 9,271 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I have an issue when trying to set 'gpt-3.5-turbo' as a model to create embeddings.
When using the 'gpt-3.5-turbo' to create LLM, everything works fine:
`llm = OpenAI(model_name='gpt-3.5-turbo', temperature=0, openai_api_key=OPENAI_API_KEY, max_tokens=512)`
Also creating embeddings without specifying a model works fine:
`embeddings = OpenAIEmbeddings(openai_api_key=OPENAI_API_KEY)`
But when I try to use 'gpt-3.5-turbo' for embeddings, I get the following error:
`embeddings = OpenAIEmbeddings(model='gpt-3.5-turbo', openai_api_key=OPENAI_API_KEY)`
Error:
_openai.error.PermissionError: You are not allowed to generate embeddings from this model_
This seems weird to me since I have just used the same model to create LLM. So I have access to it and should be able to use it. Unless creation of embeddings and LLM differs somehow.
Can somebody please advise?
### Suggestion:
_No response_ | Issue: openai.error.PermissionError: You are not allowed to generate embeddings from this model | https://api.github.com/repos/langchain-ai/langchain/issues/9270/comments | 9 | 2023-08-15T20:31:54Z | 2024-05-17T14:38:10Z | https://github.com/langchain-ai/langchain/issues/9270 | 1,852,086,899 | 9,270 |
[
"langchain-ai",
"langchain"
I am trying to create a `ConversationalRetrievalChain` with memory, `return_source_document=True`, and a custom retriever which returns the content and URL of the document. I am able to generate the right response when I call the chain for the first time. But when I call it again with memory, I get the error `Missing some input keys: {'context'}`.
Here is the trace of the error.
```
Missing some input keys: {'context'}
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[470], line 1
----> 1 contracts_chain("How can I apply for a TPA?")
File /mnt/xarfuse/uid-560402/653248bf-seed-nspid4026531836_cgpid28389965-ns-4026531840/langchain/chains/base.py:258, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info)
256 except (KeyboardInterrupt, Exception) as e:
257 run_manager.on_chain_error(e)
--> 258 raise e
259 run_manager.on_chain_end(outputs)
260 final_outputs: Dict[str, Any] = self.prep_outputs(
261 inputs, outputs, return_only_outputs
262 )
File /mnt/xarfuse/uid-560402/653248bf-seed-nspid4026531836_cgpid28389965-ns-4026531840/langchain/chains/base.py:252, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info)
246 run_manager = callback_manager.on_chain_start(
247 dumpd(self),
248 inputs,
249 )
250 try:
251 outputs = (
--> 252 self._call(inputs, run_manager=run_manager)
253 if new_arg_supported
254 else self._call(inputs)
255 )
256 except (KeyboardInterrupt, Exception) as e:
257 run_manager.on_chain_error(e)
File /mnt/xarfuse/uid-560402/653248bf-seed-nspid4026531836_cgpid28389965-ns-4026531840/langchain/chains/conversational_retrieval/base.py:126, in BaseConversationalRetrievalChain._call(self, inputs, run_manager)
124 if chat_history_str:
125 callbacks = _run_manager.get_child()
--> 126 new_question = self.question_generator.run(
127 question=question, chat_history=chat_history_str, callbacks=callbacks
128 )
129 else:
130 new_question = question
File /mnt/xarfuse/uid-560402/653248bf-seed-nspid4026531836_cgpid28389965-ns-4026531840/langchain/chains/base.py:456, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
451 return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
452 _output_key
453 ]
455 if kwargs and not args:
--> 456 return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
457 _output_key
458 ]
460 if not kwargs and not args:
461 raise ValueError(
462 "`run` supported with either positional arguments or keyword arguments,"
463 " but none were provided."
464 )
File /mnt/xarfuse/uid-560402/653248bf-seed-nspid4026531836_cgpid28389965-ns-4026531840/langchain/chains/base.py:235, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info)
200 def __call__(
201 self,
202 inputs: Union[Dict[str, Any], Any],
(...)
208 include_run_info: bool = False,
209 ) -> Dict[str, Any]:
210 """Execute the chain.
211
212 Args:
(...)
233 `Chain.output_keys`.
234 """
--> 235 inputs = self.prep_inputs(inputs)
236 callback_manager = CallbackManager.configure(
237 callbacks,
238 self.callbacks,
(...)
243 self.metadata,
244 )
245 new_arg_supported = inspect.signature(self._call).parameters.get("run_manager")
File /mnt/xarfuse/uid-560402/653248bf-seed-nspid4026531836_cgpid28389965-ns-4026531840/langchain/chains/base.py:389, in Chain.prep_inputs(self, inputs)
387 external_context = self.memory.load_memory_variables(inputs)
388 inputs = dict(inputs, **external_context)
--> 389 self._validate_inputs(inputs)
390 return inputs
File /mnt/xarfuse/uid-560402/653248bf-seed-nspid4026531836_cgpid28389965-ns-4026531840/langchain/chains/base.py:147, in Chain._validate_inputs(self, inputs)
145 missing_keys = set(self.input_keys).difference(inputs)
146 if missing_keys:
--> 147 raise ValueError(f"Missing some input keys: {missing_keys}")
ValueError: Missing some input keys: {'context'}
```
Here is my code
```python
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate
from langchain.chains import ConversationalRetrievalChain, LLMChain, StuffDocumentsChain

# llm and my_custom_retriver are defined elsewhere in my setup
verbose = True

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True, output_key="answer", input_key="question")
llama2_chat_prompt = """
<s>[INST] <<SYS>>
You are a helpful assistant for tech company.
Answer question using the following conversation history and context
Conversation History: {chat_history}
Context: {context}
<</SYS>>
Question: {question}
Answer in Markdown:
<<SYS>>
<</SYS>>
[/INST]
"""
prompt = PromptTemplate(
input_variables=["question", "context", "chat_history"],
output_parser=None,
partial_variables={},
template=llama2_chat_prompt,
template_format="f-string",
validate_template=False,
verbose=verbose,
)
llm_chain = LLMChain(llm=llm, prompt=prompt, verbose=verbose)
doc_input_variables = ["page_content", "src"]
document_prompt = PromptTemplate(
input_variables=doc_input_variables,
template="Content: {page_content}\nSource: {src}",
)
combine_documents_chain = StuffDocumentsChain(
llm_chain=llm_chain,
document_variable_name="context",
document_prompt=document_prompt,
verbose=verbose,
)
contracts_chain = ConversationalRetrievalChain(
retriever=my_custom_retriver,
memory=memory,
question_generator=llm_chain,
combine_docs_chain=combine_documents_chain,
return_source_documents=False,
verbose=verbose,
)
```
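Looking at the traceback, the failure happens inside `question_generator.run(question=..., chat_history=...)`, which only receives those two keys, yet I passed the same `llm_chain` (whose prompt also requires `context`) as the question generator. A sketch of what I suspect the fix is, using the built-in condense-question prompt (assuming `CONDENSE_QUESTION_PROMPT` is importable in this version); the repro calls follow below:
```python
from langchain.chains.conversational_retrieval.prompts import CONDENSE_QUESTION_PROMPT

# A dedicated question generator that only needs `question` and `chat_history`
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT, verbose=verbose)

contracts_chain = ConversationalRetrievalChain(
    retriever=my_custom_retriver,
    memory=memory,
    question_generator=question_generator,  # not the context-requiring llm_chain
    combine_docs_chain=combine_documents_chain,
    return_source_documents=False,
    verbose=verbose,
)
```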
`resp = contracts_chain("Can you help me?")` runs fine.
`resp1 = contracts_chain("My question is about weather")` raises the error above. | Missing some input keys: {'context'} when using ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/9265/comments | 7 | 2023-08-15T19:31:45Z | 2024-03-12T23:10:54Z | https://github.com/langchain-ai/langchain/issues/9265 | 1,852,007,448 | 9,265
[
"langchain-ai",
"langchain"
] | ### System Info
When attempting to package my application using PyInstaller, I run into an error related to the "lark" library. When trying to initialize the SelfQueryRetriever from langchain, I encounter the following problem:
> Traceback (most recent call last):
> File "test.py", line 39, in
> File "langchain\retrievers\self_query\base.py", line 144, in from_llm
> File "langchain\chains\query_constructor\base.py", line 154, in load_query_constructor_chain
> File "langchain\chains\query_constructor\base.py", line 115, in _get_prompt
> File "langchain\chains\query_constructor\base.py", line 72, in from_components
> File "langchain\chains\query_constructor\parser.py", line 150, in get_parser
> ImportError: Cannot import lark, please install it with 'pip install lark'.
I use:
- Langchain 0.0.233
- Lark 1.1.7
- PyInstaller 5.10.1
- Python 3.9.13
- OS Windows-10-10.0.22621-SP0
I have already ensured that the "lark" library is installed using the appropriate command: pip install lark.
I have also tried to add a hook-lark.py file to the PyInstaller build, as suggested [here](https://github.com/lark-parser/lark/issues/548), and also opened an issue in [Lark](https://github.com/lark-parser/lark/issues/1319).
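For reference, the hook file I added looks like this (a minimal sketch, on the assumption that the ImportError is really lark's bundled grammar data files going missing in the frozen build):
```python
# hook-lark.py, placed next to the spec file (hookspath=['.'])
from PyInstaller.utils.hooks import collect_data_files

# Bundle lark's .lark grammar files, which PyInstaller does not pick up on its own
datas = collect_data_files('lark')
```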
Can you help? Thanks in advance!
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
With the following code the problem can be reproduced:
```
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.retrievers import SelfQueryRetriever
from langchain.llms import OpenAI
from langchain.chains.query_constructor.base import AttributeInfo
embeddings = OpenAIEmbeddings()
persist_directory = "data"
text = ["test"]
chunk_size = 1000
chunk_overlap = 10
r_splitter = RecursiveCharacterTextSplitter(chunk_size=chunk_size, chunk_overlap=chunk_overlap,
separators=["\n\n", "(?<=\. )", "\n"])
docs = r_splitter.create_documents(text)
for doc in docs:
doc.metadata = {"document": "test"}
db = Chroma.from_documents(documents=docs, embedding=embeddings, persist_directory=persist_directory)
db.persist()
metadata_field_info = [
AttributeInfo(
name="document",
description="The name of the document the chunk is from.",
type="string",
),
]
document_content_description = "Test document"
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
llm,
db,
document_content_description,
metadata_field_info,
verbose=True
)
```
The spec-file to create the standalone application looks like this:
```
# -*- mode: python ; coding: utf-8 -*-
block_cipher = None
a = Analysis(
['test.py'],
pathex=[],
binaries=[],
datas=[],
hiddenimports=['tiktoken_ext', 'tiktoken_ext.openai_public', 'onnxruntime', 'chromadb', 'chromadb.telemetry.posthog', 'chromadb.api.local', 'chromadb.db.duckdb'],
hookspath=['.'],
hooksconfig={},
runtime_hooks=[],
excludes=[],
win_no_prefer_redirects=False,
win_private_assemblies=False,
cipher=block_cipher,
noarchive=False,
)
pyz = PYZ(a.pure, a.zipped_data, cipher=block_cipher)
a.datas += Tree(r'path\to\langchain', prefix='langchain')  # raw string so \t is not treated as an escape
exe = EXE(
pyz,
a.scripts,
a.binaries,
a.zipfiles,
a.datas,
[],
name='test',
debug=False,
bootloader_ignore_signals=False,
strip=False,
upx=True,
upx_exclude=[],
runtime_tmpdir=None,
console=True,
disable_windowed_traceback=False,
argv_emulation=False,
target_arch=None,
codesign_identity=None,
entitlements_file=None,
)
```
### Expected behavior
The SelfQueryRetriever should be initialized properly for further use. | ImportError of lark when packaging a standalone application with PyInstaller | https://api.github.com/repos/langchain-ai/langchain/issues/9264/comments | 25 | 2023-08-15T19:23:27Z | 2024-08-02T16:06:38Z | https://github.com/langchain-ai/langchain/issues/9264 | 1,851,997,057 | 9,264
[
"langchain-ai",
"langchain"
] | ### System Info
Problem:
The intermediate_steps don't contain the last AI thought. Did I do something wrong, or is this a bug?
Is there anything I can do to extract that last thought?
Code:
```
agent = initialize_agent(
[self.search_tool, self.wikipedia_tool],
self.llm,
agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
verbose=True,
max_iterations=5,
handle_parsing_errors=True,
return_intermediate_steps=True
)
result = agent(formatted_script_template)
intermediate_steps = result['intermediate_steps']
```
The intermediate_steps list contains only one step, and it is missing the last `Thought`.
What the intermediate_steps look like:
<img width="644" alt="Screenshot 2023-08-15 at 11 19 40 AM" src="https://github.com/langchain-ai/langchain/assets/3535601/d22bc6dd-11b1-4974-97dd-594bd7038d04">
What I expected to have:
<img width="1170" alt="Screenshot 2023-08-15 at 11 20 59 AM" src="https://github.com/langchain-ai/langchain/assets/3535601/6457135b-3257-42ba-801c-8f5523830337">
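One workaround I am experimenting with, on the assumption that the final thought only surfaces via `AgentFinish` (so a callback can capture it even though intermediate_steps cannot):
```python
from langchain.callbacks.base import BaseCallbackHandler
from langchain.schema import AgentFinish

class FinalThoughtHandler(BaseCallbackHandler):
    """Collects the raw 'Thought: ... Final Answer: ...' text at the end of a run."""
    def __init__(self):
        self.final_log = None

    def on_agent_finish(self, finish: AgentFinish, **kwargs):
        self.final_log = finish.log  # the log field holds the final thought text

handler = FinalThoughtHandler()
result = agent(formatted_script_template, callbacks=[handler])
print(handler.final_log)
```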
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run the code I provided in the description and set a breakpoint to inspect the intermediate steps.
The last thought is missing from the intermediate steps.
### Expected behavior
The complete observation & thought will be available in intermediate steps. | intermediate_steps missing last thought content | https://api.github.com/repos/langchain-ai/langchain/issues/9262/comments | 7 | 2023-08-15T18:24:45Z | 2024-05-14T07:08:02Z | https://github.com/langchain-ai/langchain/issues/9262 | 1,851,915,394 | 9,262 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version 0.0.265
Python 3.11.4
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from dotenv import load_dotenv
from langchain.vectorstores.azuresearch import AzureSearch
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.docstore.document import Document
import os
load_dotenv()
index_name = 'ebb-test-1'
vector_store_address = os.environ.get('AZURE_VECTOR_STORE_ADDRESS')
vector_store_password = os.environ.get('AZURE_VECTOR_STORE_PASSWORD')
embeddings: OpenAIEmbeddings = OpenAIEmbeddings(model='text-embedding-ada-002', chunk_size=1,
deployment=os.environ.get('AZURE_VECTOR_STORE_DEPLOYMENT'))
vector_store: AzureSearch = AzureSearch(azure_search_endpoint=vector_store_address,
azure_search_key=vector_store_password,
index_name=index_name,
embedding_function=embeddings.embed_query)
texts = [
'Tulips are pretty',
'Roses have thorns'
]
metas = [
{'name': 'tulip',
'nested': {'color': 'purple'}},
{'name': 'rose',
'nested': {'color': 'red'}}
]
docs = [Document(page_content=text, metadata=meta) for text, meta in zip(texts, metas)]
vector_store.add_documents(docs)
try:
# Prints Message: Invalid expression: 'metadata' is not a filterable field. Only filterable fields can be used in filter expressions.
result = vector_store.vector_search_with_score(
'things that have thorns', k=3,
filters="metadata eq 'invalid'")
print(result)
except Exception as e:
print(e)
print('Should print give an error about not being able to convert the string')
try:
# Prints Message: Invalid expression: Could not find a property named 'name' on type 'Edm.String'.
result = vector_store.vector_search_with_score(
'things that have thorns', k=3,
filters="metadata/name eq 'tulip'")
print(result)
except Exception as e:
print(e)
print('Should just return the tulip (even though it has no thorns)')
try:
# Prints Message: Invalid expression: Could not find a property named 'nested' on type 'Edm.String'.
result = vector_store.vector_search_with_score(
'things that have thorns', k=3,
filters="metadata/nested/color eq 'red'")
print(result)
except Exception as e:
print(e)
print('Should just return the rose')
```
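For what it's worth, the direction I would expect a fix to take: promote the metadata keys that need filtering into real, filterable index fields instead of the single JSON-string `metadata` field. A sketch, with two big assumptions: that the `AzureSearch` constructor accepts a custom `fields` list in this version (I have not verified that for 0.0.265), and that the default fields (id, content, content_vector, metadata) would also need to be included in that list; nested metadata would have to be flattened first:
```python
from azure.search.documents.indexes.models import SimpleField, SearchFieldDataType

# Hypothetical extra field promoted out of the metadata blob
name_field = SimpleField(name="name", type=SearchFieldDataType.String, filterable=True)

# assumption: `fields` is supported and must contain the default schema plus extras
# vector_store = AzureSearch(..., fields=[*default_fields, name_field])
# result = vector_store.vector_search_with_score("things that have thorns", k=3,
#                                                filters="name eq 'rose'")
```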
### Expected behavior
The first block should not complain that metadata is not filterable; instead it should report that the filter expression itself is invalid.
The second block should just return the tulip document, and the third one should just return the rose document. | In Azure vector store, metadata is kept as a string and can't be used in a filter | https://api.github.com/repos/langchain-ai/langchain/issues/9261/comments | 11 | 2023-08-15T17:45:21Z | 2024-07-12T18:10:31Z | https://github.com/langchain-ai/langchain/issues/9261 | 1,851,861,571 | 9,261 |
[
"langchain-ai",
"langchain"
] | ### Feature request
In the current version, the Milvus vector store does not support normalize_L2 for embeddings. When can this feature be added?
Thanks
### Motivation
Normalizing embedding vectors can improve retrieval performance, and it makes it convenient to set similarity thresholds when scoring by inner product, since for unit vectors the inner product equals cosine similarity.
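In the meantime, a workaround sketch that L2-normalizes vectors before they ever reach Milvus, by wrapping whatever Embeddings implementation is in use (pure numpy, no Milvus changes needed):
```python
import numpy as np
from langchain.embeddings.base import Embeddings

class L2NormalizedEmbeddings(Embeddings):
    """Wraps another Embeddings object and L2-normalizes every vector it returns."""
    def __init__(self, base: Embeddings):
        self.base = base

    def _norm(self, vec):
        arr = np.asarray(vec, dtype=np.float32)
        n = np.linalg.norm(arr)
        return (arr / n).tolist() if n > 0 else arr.tolist()

    def embed_documents(self, texts):
        return [self._norm(v) for v in self.base.embed_documents(texts)]

    def embed_query(self, text):
        return self._norm(self.base.embed_query(text))
```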
### Your contribution
NO | milvus not support normalize_L2 of embedding | https://api.github.com/repos/langchain-ai/langchain/issues/9255/comments | 2 | 2023-08-15T15:31:15Z | 2023-11-29T16:08:05Z | https://github.com/langchain-ai/langchain/issues/9255 | 1,851,663,787 | 9,255 |
[
"langchain-ai",
"langchain"
] | 
As shown in the figure and in the sketch above, I want the `resp` variable to capture the streamed output in each loop iteration rather than have it printed to the console. What should I do? | How to output code variables word by word in `ChatOpenAI` instead of console? | https://api.github.com/repos/langchain-ai/langchain/issues/9247/comments | 2 | 2023-08-15T09:55:01Z | 2023-11-21T16:05:25Z | https://github.com/langchain-ai/langchain/issues/9247 | 1,851,185,785 | 9,247
[
"langchain-ai",
"langchain"
] | ### System Info
langchain = "^0.0.264"
python = "^3.10"
### Who can help?
@agola11 @hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
```py
from langchain.agents.agent_toolkits import create_retriever_tool
from langchain.agents.agent_toolkits import create_conversational_retrieval_agent
from langchain.agents import AgentExecutor
from langchain.chat_models import ChatOpenAI
from langchain.agents.openai_functions_agent.agent_token_buffer_memory import AgentTokenBufferMemory
from langchain.agents.openai_functions_agent.base import OpenAIFunctionsAgent
from langchain.schema.messages import SystemMessage
from langchain.prompts import MessagesPlaceholder
llm = ChatOpenAI(temperature = 0, model="gpt-3.5-turbo-0613", stream=True)
memory = AgentTokenBufferMemory(memory_key="history", llm=llm)
# vectorstore and load_custom_tool are defined elsewhere in my project
retriever = vectorstore.as_retriever()
tool = load_custom_tool(retriever, model_name="gpt-3.5-turbo", return_direct=False)
tools = [tool]
system_message = SystemMessage(
content=(
"Do your best to answer the questions. "
"Feel free to use any tools available to look up "
"relevant information, only if necessary"
)
)
prompt = OpenAIFunctionsAgent.create_prompt(
system_message=system_message,
extra_prompt_messages=[MessagesPlaceholder(variable_name="history")]
)
agent = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt)
agent_executor = AgentExecutor(
agent=agent,
tools=tools,
memory=memory,
verbose=True,
return_intermediate_steps=True
)
result = await agent_executor.acall({"input": "What color is the sky?"})
result
```
### Expected behavior
The code currently errors with the following:
```
in Chain.acall(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info)
347 except (KeyboardInterrupt, Exception) as e:
348 await run_manager.on_chain_error(e)
--> 349 raise e
350 await run_manager.on_chain_end(outputs)
351 final_outputs: Dict[str, Any] = self.prep_outputs(
352 inputs, outputs, return_only_outputs
353 )
in Chain.acall(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info)
337 run_manager = await callback_manager.on_chain_start(
338 dumpd(self),
...
361 message=message,
362 generation_info=dict(finish_reason=res.get("finish_reason")),
363 )
TypeError: 'async_generator' object is not subscriptable
```
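After more digging, I suspect the flag name is the culprit: a sketch of what I believe is the intended configuration (assumption: `streaming` is the supported ChatOpenAI parameter, while `stream=True` falls through to `model_kwargs`, so the raw OpenAI call returns a generator the response parser cannot index):
```python
# `streaming=True`, not `stream=True`
llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613", streaming=True)
```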
I was hoping to use the OpenAIFunctions agent as a drop-in replacement for my existing agent, but I need streaming to work for that. Is streaming supported? | OpenAIFunctionsAgent | Streaming Bug | https://api.github.com/repos/langchain-ai/langchain/issues/9246/comments | 2 | 2023-08-15T09:41:21Z | 2023-09-05T12:07:36Z | https://github.com/langchain-ai/langchain/issues/9246 | 1,851,169,357 | 9,246
[
"langchain-ai",
"langchain"
] | ### System Info
python 3.11
langchain 0.0.263.
### Who can help?
@agol
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I want to add memory to a docs-based question-answering chatbot, but after adding memory the chatbot randomly generates questions and answers on its own.

### Expected behavior
I need the chatbot to answer questions over documents while keeping chat history. | Random question and Answer generation while using ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/9241/comments | 3 | 2023-08-15T06:56:12Z | 2024-07-10T09:01:36Z | https://github.com/langchain-ai/langchain/issues/9241 | 1,850,987,786 | 9,241
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```python
import numpy as np
from langchain.chat_models import ChatOpenAI
from langchain.agents import initialize_agent, AgentType
from langchain.prompts import MessagesPlaceholder

# get_chat_history_format, api_params, model_threshold, MyCallbackHandler,
# define_search_tools, system_message, agent_kwargs and memory are defined
# elsewhere in my project.

def define_model(model_threshold):
    get_no = np.random.randint(1, 11)
    if int(model_threshold) >= get_no:
        return "gpt-4-0613"
    else:
        return "gpt-3.5-turbo-16k-0613"

messages = get_chat_history_format(api_params=api_params)
model_name = define_model(model_threshold)
print(f"Model Name : {model_name}")

llm = ChatOpenAI(temperature=0.7, model_name=model_name, streaming=True,
                 callbacks=[MyCallbackHandler(api_params)])

agent_executor = initialize_agent(
    define_search_tools(llm),
    llm,
    system_message=system_message,
    extra_prompt_messages=[MessagesPlaceholder(variable_name="memory")],
    agent_instructions="Analyse the query closely and decide the query intent. If the query requires the internet to search, then use google-serper to answer the query. Try to invoke `google-serper` most of the time",
    agent=AgentType.OPENAI_FUNCTIONS,
    agent_kwargs=agent_kwargs,
    verbose=True,
    memory=memory,
    handle_parsing_errors=True,
)
```
I am getting an endpoint error when the agent runs.
### Suggestion:
_No response_ | This is not a chat model and thus not supported in the v1/chat/completions endpoint. Did you mean to use v1/completions? | https://api.github.com/repos/langchain-ai/langchain/issues/9237/comments | 6 | 2023-08-15T02:58:25Z | 2024-06-21T22:20:38Z | https://github.com/langchain-ai/langchain/issues/9237 | 1,850,835,440 | 9,237 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I'm using load_summarize_chain with chain_type 'map_reduce' in the following fashion:
```
summary_prompt = ChatPromptTemplate.from_template(
"Write a long-form summary of the following text delimited by triple backquotes. "
"Include detailed evidence and arguments to make sure the generated paragraph is convincing. "
"Write in Chinese."
"```{text}```"
)
combine_prompt = ChatPromptTemplate.from_template(
"Write a long-form summary of the following text delimited by triple backquotes. "
"Include all the details as much as possible."
"Write in Chinese."
"```{text}```"
)
combined_summary_chain = load_summarize_chain(llm=llm, chain_type="map_reduce", map_prompt=summary_prompt,
combine_prompt=combine_prompt, return_intermediate_steps=True)
result = combined_summary_chain({"input_documents": data, "token_max": 8000})
summary, inter_steps = result["output_text"], result["intermediate_steps"]
```
I've checked several tutorials and there doesn't seem to be anything wrong with this. But I keep getting this error:
`openai.error.InvalidRequestError: This model's maximum context length is 8192 tokens. However, your messages resulted in 15473 tokens. Please reduce the length of the messages.`
It's as if the map_reduce chain isn't breaking up the data and is running like a stuff chain instead. What did I do wrong?
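If I understand the chain correctly, map_reduce maps over the input documents exactly as they are passed in and never re-splits them, so an oversized single document has to be chunked up front. A sketch of what I plan to try (chunk sizes are placeholders):
```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Split BEFORE handing documents to the map step; each chunk must fit the context
splitter = RecursiveCharacterTextSplitter(chunk_size=3000, chunk_overlap=200)
split_docs = splitter.split_documents(data)

result = combined_summary_chain({"input_documents": split_docs, "token_max": 8000})
```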
### Suggestion:
_No response_ | Issue: load_summarization_chain in 'map_reduce' mode not breaking up document | https://api.github.com/repos/langchain-ai/langchain/issues/9235/comments | 3 | 2023-08-15T01:27:12Z | 2024-01-25T10:22:05Z | https://github.com/langchain-ai/langchain/issues/9235 | 1,850,779,448 | 9,235 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I'm trying to use `MultiQueryRetriever` to generate variations on a question, and it seems to work. However, I can't use it inside a chain created with `load_qa_with_sources_chain()`, because that produces a chain expecting a list of input_documents rather than a retriever. I also don't want to switch to `RetrievalQAWithSourcesChain`, because I want to keep implementing my own similarity search (e.g. I support switches to choose between `index.similarity_search()` and `index.max_marginal_relevance_search()` on my `index`, and to specify the number of matches (`k`)). So I figured I could just call `generate_queries()` on my `MultiQueryRetriever` instance and then manually run my `load_qa_with_sources_chain()` chain for each variation. However, that method requires a `run_manager`, and I can't figure out how to create one. I already have the `load_qa_with_sources_chain()` chain: can I get a `run_manager` from that? Or more generally, what's the best way to use `MultiQueryRetriever` while maintaining my own code for fetching matching text snippets?
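Two routes I am considering (sketches, neither verified):
```python
# 1) Bypass generate_queries() and call the underlying LLMChain directly
#    (assumption: the output parser exposes the variations on a `.lines` attribute)
response = retriever.llm_chain({"question": query})
variations = getattr(response["text"], "lines", [])

# 2) Hand generate_queries() a no-op run manager
from langchain.callbacks.manager import CallbackManagerForRetrieverRun

run_manager = CallbackManagerForRetrieverRun.get_noop_manager()
variations = retriever.generate_queries(query, run_manager)
```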
### Suggestion:
_No response_ | Issue: trying to call generate_queries() on a MultiQueryRetriever but where do I get a run_manager from? | https://api.github.com/repos/langchain-ai/langchain/issues/9231/comments | 5 | 2023-08-14T23:38:42Z | 2024-03-19T13:08:30Z | https://github.com/langchain-ai/langchain/issues/9231 | 1,850,711,136 | 9,231 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.245
python==3.9
### Who can help?
@hw
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am trying to use a HuggingFace embeddings model with the SVMRetriever as follows:
```
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.retrievers import SVMRetriever
embedding = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
retriever = SVMRetriever.from_texts(["foo", "bar", "world", "hello", "foo bar"],
embedding,
k=4, relevancy_threshold=.25)
```
The kernel looks busy for a while and then dies when I run the above snippet.
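To narrow it down, a small check I plan to try, running the two stages separately (sketch):
```python
texts = ["foo", "bar", "world", "hello", "foo bar"]
vecs = embedding.embed_documents(texts)   # stage 1: HF model forward pass
print(len(vecs), len(vecs[0]))            # if the kernel survives to here, the crash is in the SVM step
retriever = SVMRetriever.from_texts(texts, embedding, k=4, relevancy_threshold=.25)
```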
### Expected behavior
I expect this to run smoothly; upon completion, I should be able to retrieve similar documents with:
`result = retriever.get_relevant_documents("foo")` | Kernel dies for SVMRetriever with Huggingface Embeddings Model | https://api.github.com/repos/langchain-ai/langchain/issues/9219/comments | 16 | 2023-08-14T19:59:31Z | 2023-12-05T11:32:48Z | https://github.com/langchain-ai/langchain/issues/9219 | 1,850,451,609 | 9,219 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
In https://python.langchain.com/docs/use_cases/extraction, I can run the example using ChatOpenAI. However, within my organization I have to use AzureChatOpenAI, and there the same example doesn't work. The error I get is
```
InvalidRequestError: Unrecognized request argument supplied: functions
```
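For reference, my setup looks roughly like this (the deployment name is a placeholder). One thing I have read, and which seems plausible, is that the `functions` argument requires a newer preview API version on Azure:
```python
from langchain.chat_models import AzureChatOpenAI

llm = AzureChatOpenAI(
    deployment_name="my-gpt35-deployment",    # placeholder
    openai_api_version="2023-07-01-preview",  # older versions reject `functions`
    temperature=0,
)
```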
What am I doing wrong?
### Idea or request for content:
_No response_ | DOC: <Please write a comprehensive title after the 'DOC: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/9218/comments | 2 | 2023-08-14T19:39:51Z | 2023-11-20T16:04:47Z | https://github.com/langchain-ai/langchain/issues/9218 | 1,850,424,838 | 9,218 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
In the document here - https://python.langchain.com/docs/integrations/llms/huggingface_textgen_inference#streaming
In the Streaming section, the parameter **stream** should be **streaming**.
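For clarity, the working call looks like this (the endpoint URL is a placeholder):
```python
from langchain.llms import HuggingFaceTextGenInference
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

llm = HuggingFaceTextGenInference(
    inference_server_url="http://localhost:8010/",  # placeholder
    streaming=True,  # the docs currently say `stream=True`, which is not a field
    callbacks=[StreamingStdOutCallbackHandler()],
)
```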
### Idea or request for content:
_No response_ | DOC: Typo in the streaming document with hugging face text inference | https://api.github.com/repos/langchain-ai/langchain/issues/9212/comments | 1 | 2023-08-14T17:14:38Z | 2023-11-20T16:04:52Z | https://github.com/langchain-ai/langchain/issues/9212 | 1,850,200,651 | 9,212 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi,
I have a pandas dataframe with a column named 'chunk_id'.
I want to use this chunk_id as the custom_id in the langchain_pg_embedding table.
The langchain documentation search bot tells me this is how I can do it.
```python
from langchain.vectorstores import PGVectorStore
vector_store = PGVectorStore(
table_name="documents",
conn_str="postgresql://username:password@localhost:5432/database",
custom_id="my_custom_id"
)
```
This doesn't seem right to me. For one, there is no PGVectorStore in langchain.vectorstores. Secondly, the PGVector class from `langchain.vectorstores.pgvector` does not accept a custom_id argument.
Is there any way to use my 'chunk_id' as the custom_id?
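What I am hoping works, based on reading the source (a sketch; the assumption is that `add_texts` accepts an `ids` argument that is stored as custom_id in langchain_pg_embedding, and `df` and `embeddings` are my existing dataframe and embedding function):
```python
from langchain.vectorstores.pgvector import PGVector

store = PGVector(
    connection_string="postgresql+psycopg2://user:pass@localhost:5432/db",  # placeholder
    embedding_function=embeddings,
    collection_name="documents",
)
store.add_texts(
    texts=df["text"].tolist(),
    metadatas=df.to_dict("records"),
    ids=df["chunk_id"].astype(str).tolist(),  # should land in custom_id
)
```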
### Suggestion:
_No response_ | Issue: Using custom_id with PGVector store | https://api.github.com/repos/langchain-ai/langchain/issues/9209/comments | 3 | 2023-08-14T15:16:18Z | 2023-10-06T12:31:35Z | https://github.com/langchain-ai/langchain/issues/9209 | 1,849,993,920 | 9,209 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
In [this](https://python.langchain.com/docs/use_cases/question_answering.html) documentation, there are broken links in the "further reading" section; the URLs return "not found" errors.
### Idea or request for content:
_No response_ | DOC: broken url links in QA documentation | https://api.github.com/repos/langchain-ai/langchain/issues/9201/comments | 2 | 2023-08-14T13:13:23Z | 2023-11-20T16:04:56Z | https://github.com/langchain-ai/langchain/issues/9201 | 1,849,753,783 | 9,201 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
When I create a Bedrock LLM client for the Jurassic models ("ai21.j2-mid", "ai21.j2-ultra"), the llm.invoke(query) calls do not return full results; the output seems to get truncated after the first line.
It seems like the LLM engine is streaming its output but the langchain llm.invoke method is not able to handle the streamed data.
A similar issue occurs with the chain.invoke(query) calls as well.
However, llm.invoke works well with the AWS Bedrock "amazon.titan-tg1-large" model.
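One thing worth ruling out first (a sketch; the assumption is that `maxTokens` is the right key for ai21.* models on Bedrock and that it defaults very low, which would look exactly like truncation after the first line):
```python
from langchain.llms import Bedrock

llm = Bedrock(
    model_id="ai21.j2-ultra",
    model_kwargs={"maxTokens": 1024, "temperature": 0.5},  # raise the output cap
)
print(llm("Write three sentences about Amazon Bedrock."))
```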
### Suggestion:
I guess the AI21 Labs models stream their output in a way that the LangChain llm.invoke(query) method cannot handle properly. | Amazon Bedrock Jurassic model responses getting truncated | https://api.github.com/repos/langchain-ai/langchain/issues/9199/comments | 11 | 2023-08-14T11:53:57Z | 2024-05-05T16:03:43Z | https://github.com/langchain-ai/langchain/issues/9199 | 1,849,630,816 | 9,199
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.263
python 3.9
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.schema import FunctionMessage
from langchain.chat_models.jinachat import _convert_message_to_dict
_convert_message_to_dict(FunctionMessage(name='foo', content='bar'))
```

### Expected behavior
Handle FunctionMessage | _convert_message_to_dict doesn't handle FunctionMessage type | https://api.github.com/repos/langchain-ai/langchain/issues/9197/comments | 2 | 2023-08-14T11:24:44Z | 2023-08-15T08:13:36Z | https://github.com/langchain-ai/langchain/issues/9197 | 1,849,585,577 | 9,197 |