| issue_owner_repo | issue_body | issue_title | issue_comments_url | issue_comments_count | issue_created_at | issue_updated_at | issue_html_url | issue_github_id | issue_number |
|---|---|---|---|---|---|---|---|---|---|
[
"hwchase17",
"langchain"
] | ### System Info
langchain Version: 0.0.348
python Version: Python 3.9.18
OS: Mac OS M2 (Ventura 13.6.2)
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
llm = Bedrock(
    credentials_profile_name=os.environ.get('profile_name'),
    model_id="anthropic.claude-v2",
    model_kwargs={"temperature": 0.1},
    endpoint_url="https://bedrock-runtime.us-east-1.amazonaws.com",
    region_name="us-east-1",
    verbose=True,
)
db = SQLDatabase.from_uri(snowflake_url, sample_rows_in_table_info=3, include_tables=["table_name"])
output = SQLDatabaseChain.from_llm(
    llm,
    db,
    prompt=few_shot_prompt,
    return_intermediate_steps=True,
)
```
Gives the following error:
Error: syntax error line 1 at position 0 unexpected '**The**'.
[SQL: **The** query looks good to me, I don't see any of the common mistakes listed. Here is the original query again: SELECT *
FROM table]
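Until the chain's prompt behaves, one possible workaround (a sketch, not part of `SQLDatabaseChain`; the function name is illustrative) is to post-process the model output and keep only the SQL statement:

```python
import re

def extract_sql(llm_output: str) -> str:
    """Keep only the SQL statement, dropping any leading commentary."""
    match = re.search(r"\b(SELECT|INSERT|UPDATE|DELETE|WITH)\b.*",
                      llm_output, re.IGNORECASE | re.DOTALL)
    if match is None:
        raise ValueError("no SQL statement found in model output")
    return match.group(0).strip()

raw = ("**The** query looks good to me, I don't see any of the common "
       "mistakes listed. Here is the original query again: SELECT *\nFROM table")
print(extract_sql(raw))  # prints "SELECT *" then "FROM table"
```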
### Expected behavior
The output should contain only the SQL query, emitted plainly; it should not be surrounded by quotes or preceded by any commentary.
Desired output:
[SQL: SELECT *
FROM table] | AWS bedrock Claude v2 SQLDatabaseChain produces comments before the SQL Query | https://api.github.com/repos/langchain-ai/langchain/issues/15283/comments | 20 | 2023-12-28T19:51:15Z | 2024-06-08T16:08:26Z | https://github.com/langchain-ai/langchain/issues/15283 | 2,058,773,284 | 15,283 |
[
"hwchase17",
"langchain"
] | ### System Info
```
from langchain.tools import DuckDuckGoSearchRun
from langchain.agents.openai_assistant import OpenAIAssistantRunnable
from langchain.agents import AgentExecutor
tools = [DuckDuckGoSearchRun()]
assistant = OpenAIAssistantRunnable.create_assistant(
    name="langchain assistant",
    instructions="You are a personal math tutor.",
    tools=tools,
    model="gpt-4-1106-preview",
    as_agent=True,
)
logger.debug(assistant)
logger.debug(assistant.assistant_id)
agent_executor = AgentExecutor(agent=assistant, tools=tools, verbose=True)
response = agent_executor.invoke({"content": "whats the weather in london"})
print(response)
logger.debug(response)
```
I am trying to run the example above. It prints out the assistant information and id, but after that it gets completely stuck. I tried to step through it in the debugger, but after a while it continues and never comes back after calling
```
callback_manager = CallbackManager.configure(
    callbacks,
    self.callbacks,
    self.verbose,
    tags,
    self.tags,
    metadata,
    self.metadata,
)
```
in the `__call__` method
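For diagnosing hangs like this, a generic timeout wrapper (a sketch, not a LangChain API; `runnable` stands for any object with an `invoke` method) lets the caller fail fast instead of blocking forever:

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FuturesTimeout

def invoke_with_timeout(runnable, payload, timeout_s=60):
    """Run runnable.invoke(payload), but stop waiting after timeout_s seconds.

    The stuck worker thread cannot be killed from outside; this only
    unblocks the caller so the hang can be reported instead of hanging forever.
    """
    pool = ThreadPoolExecutor(max_workers=1)
    future = pool.submit(runnable.invoke, payload)
    try:
        return future.result(timeout=timeout_s)
    except FuturesTimeout:
        raise RuntimeError(f"invoke did not return within {timeout_s}s")
    finally:
        pool.shutdown(wait=False)
```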
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.tools import DuckDuckGoSearchRun
from langchain.agents.openai_assistant import OpenAIAssistantRunnable
from langchain.agents import AgentExecutor

tools = [DuckDuckGoSearchRun()]
assistant = OpenAIAssistantRunnable.create_assistant(
    name="langchain assistant",
    instructions="You are a personal math tutor.",
    tools=tools,
    model="gpt-4-1106-preview",
    as_agent=True,
)
logger.debug(assistant)
logger.debug(assistant.assistant_id)
agent_executor = AgentExecutor(agent=assistant, tools=tools, verbose=True)
response = agent_executor.invoke({"content": "whats the weather in london"})
print(response)
logger.debug(response)
```
### Expected behavior
To get an output from the agent instead of it being stuck. | OpenAIAssistantRunnable stuck on execution with langchain tools | https://api.github.com/repos/langchain-ai/langchain/issues/15270/comments | 2 | 2023-12-28T13:33:35Z | 2023-12-28T17:46:23Z | https://github.com/langchain-ai/langchain/issues/15270 | 2,058,448,990 | 15,270 |
[
"hwchase17",
"langchain"
] | ### System Info
Python: 3.11
Langchain: 0.0.352
mistralai: 0.0.8
### Who can help?
@efriis
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
If the ChatMistralAI model is used for an agent or similar, an error appears because the official Mistral API does not currently support the `stop` parameter (as other APIs, such as OpenAI's, do).
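A client-side fallback (a sketch mirroring the idea of LangChain's `enforce_stop_tokens` helper, written here as a standalone function) would truncate the completion at the first stop sequence instead of sending `stop` to the API:

```python
from typing import List, Optional

def apply_stop_sequences(text: str, stop: Optional[List[str]]) -> str:
    """Truncate text at the first occurrence of any stop sequence."""
    for seq in stop or []:
        idx = text.find(seq)
        if idx != -1:
            text = text[:idx]
    return text

print(apply_stop_sequences("Thought: hi\nObservation: tool output", ["\nObservation:"]))
# prints "Thought: hi"
```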
### Expected behavior
Although this is something that should ultimately be fixed by Mistral in its official API, one of the following should be done in the meantime:
- Warn the user that this model cannot be used with a stop sequence, instead of breaking execution with the error.
- Implement an own solution for the stop sequence in the same package and do not send that parameter to the official client call. | [mistralai]: Don´t support stop sequence | https://api.github.com/repos/langchain-ai/langchain/issues/15269/comments | 2 | 2023-12-28T13:14:32Z | 2024-01-10T00:27:22Z | https://github.com/langchain-ai/langchain/issues/15269 | 2,058,428,380 | 15,269 |
[
"hwchase17",
"langchain"
] | ### System Info
Is there any way to manipulate the data in a database (update, insert, delete) through a ChatGPT chatbot with OpenAI and LangChain?
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [x] LLMs/Chat Models
- [ ] Embedding Models
- [x] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Is there any way to manipulate the data in a database (update, insert, delete) through a ChatGPT chatbot with OpenAI and LangChain?
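In principle yes: the model produces the SQL and application code executes it against the database. A minimal sketch with `sqlite3`, where a hard-coded string stands in for a ChatGPT-generated statement (the guard function and its name are illustrative, not a LangChain API):

```python
import sqlite3

def run_generated_sql(conn, llm_sql: str, allow_writes: bool = False):
    """Execute model-generated SQL, blocking write statements unless allowed."""
    verb = llm_sql.strip().split()[0].upper()
    if verb in {"INSERT", "UPDATE", "DELETE", "DROP"} and not allow_writes:
        raise PermissionError(f"write statement blocked: {verb}")
    cur = conn.execute(llm_sql)
    conn.commit()
    return cur

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
run_generated_sql(conn, "INSERT INTO users VALUES ('Bob')", allow_writes=True)
print(run_generated_sql(conn, "SELECT * FROM users").fetchall())  # [('Bob',)]
```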
### Expected behavior
The possibility of manipulating the data in a database (update, insert, delete) through a ChatGPT chatbot with OpenAI and LangChain. | Manipulating database using chatgpt | https://api.github.com/repos/langchain-ai/langchain/issues/15266/comments | 7 | 2023-12-28T12:24:15Z | 2024-05-10T03:22:41Z | https://github.com/langchain-ai/langchain/issues/15266 | 2,058,376,378 | 15,266 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
According to the documentation listed under the page: https://python.langchain.com/docs/modules/agents/how_to/add_memory_openai_functions, adding a `BaseChatMemory` as `memory` property to an `OpenAIFunctionAgent` should add "memory" to the agent.
**Example listed under the page:**
>>Human: 'Hi'
>>Agent: 'How can I assist you today'
>>Human: 'My name is Bob'
>>Agent: 'Nice to meet you, Bob! How can I help you today?'
>>Human: 'What is my name'
>>Agent: 'Your name is Bob.'
**Actual result:**
>>Human: 'Hi'
>>Agent: 'How can I assist you today'
>>Human: 'My name is Bob'
>>Agent: 'Nice to meet you, Bob! How can I help you today?'
>>Human: 'What is my name'
>>Agent: 'I am not programmed to say your name'
RCA:
- The example implies that the memory object passed at functions-agent instantiation takes care of converting the previous messages into the required `ChatMessages` model, but the implementation of such an abstraction seems to be missing, at least in langchain >= 0.0.350.
- Upon checking with [visualizer](https://github.com/amosjyng/langchain-visualizer), it is seen that the latest invocation of the agent does not include any "history" of any previous `run` with the `agent`. Curiously, however, the agent executor does contain a variable `memory` which does list the previous conversations.
### Idea or request for content:
**Expected resolution:**
1. Update documentation to point to the correct way of incorporating memory with openai functions agent (ad-hoc implementation possibly)
2. Adding and updating implementation to make this API work as expected.
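Until the documented behavior works, an ad-hoc workaround (a sketch; `agent_fn` stands in for a call into the agent, and the `(role, text)` tuples are illustrative rather than the real `ChatMessages` model) is to carry the history manually and prepend it on every call:

```python
def make_remembering_agent(agent_fn):
    """Wrap a stateless agent callable so each call sees all prior turns."""
    history = []

    def ask(user_msg):
        messages = history + [("human", user_msg)]
        reply = agent_fn(messages)
        history.append(("human", user_msg))
        history.append(("ai", reply))
        return reply

    return ask
```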
| DOC: Issue with the page titled "Add Memory to OpenAI Functions Agent | 🦜️🔗 Langchain" | https://api.github.com/repos/langchain-ai/langchain/issues/15262/comments | 2 | 2023-12-28T10:39:12Z | 2023-12-28T11:05:16Z | https://github.com/langchain-ai/langchain/issues/15262 | 2,058,277,920 | 15,262 |
[
"hwchase17",
"langchain"
] | ### Feature request
It should be possible to search a Chroma vectorstore for a particular Document by its ID. Given that the Document object is required for the `update_document` method, this lack of functionality makes it difficult to update document metadata, which should be a fairly common use-case.
Currently, there are two methods for searching a vectorstore, `get` and `search`, but neither allows me to collect a Document by its id:
`vectorstore.get`: This allows for search via `id`, however, this does not return the actual `Document` object. Instead, the return is a dictionary of lists containing the `id`, `document`, and optionally, the `embeddings` for all matched documents. This provides an easy interface for utilising documents downstream, however, this creates a challenge for document updates as the `update_document` method needs the `Document` object to be passed, which would require needless recreation for updates.
`vectorstore.search`: This returns the `Document` object as required, however, it is not possible to explicitly search via `id`, only similarity search is possible.
As such, it appears that there is currently no easy way to do this at present, without manually recreating the Document from the `get` output.
### Motivation
For my use-case, I am performing offline clustering of my embeddings in order to find the core groups of documents and would like to add the predicted label to each document as metadata "cluster_label".
Below is a simple representation of my current pipeline:
```
all_docs = vectorstore.get(include=["embeddings", "documents"])
doc_ids = all_docs["ids"]
embeddings = np.array(all_docs["embeddings"])
cluster_model, labels = fit_predict_clustering(embeddings, max_components=10)

for doc_id, label in zip(doc_ids, labels):
    # Fetch the document from the vectorstore
    doc = vectorstore.get(doc_id)  # returns Dict[str, List], but I need Document
    # Given the current implementation, I would need to convert the above dictionary to a Document
    ...
    # Update metadata with the cluster label
    doc.metadata["cluster_label"] = label
    vectorstore.update_document(doc_id, doc)
```
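A bulk-conversion utility from `get` output to `List[Document]` might look roughly like this (a sketch: `Document` is a minimal stand-in dataclass, and the `get` result is assumed to be a dict of parallel `ids`/`documents`/`metadatas` lists):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Document:  # minimal stand-in for LangChain's Document
    page_content: str
    metadata: Dict = field(default_factory=dict)

def get_output_to_documents(get_result: Dict[str, List]) -> List[Document]:
    """Convert a vectorstore .get() dict-of-lists into Document objects."""
    texts = get_result["documents"]
    metadatas = get_result.get("metadatas") or [None] * len(texts)
    return [Document(page_content=text, metadata=meta or {})
            for text, meta in zip(texts, metadatas)]

res = {"ids": ["a"], "documents": ["hello"], "metadatas": [{"k": 1}]}
print(get_output_to_documents(res))
```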
### Your contribution
I'm happy to contribute to this feature if deemed beneficial. To my mind, it should be achievable by either:
1. Updating the get method to allow `Document` returning,
2. Including a new method with the required functionality, or
3. Providing a utility for easy bulk conversion from `get` output to `List[Document]`.
However, I'm open to suggestions as to the most fitting solution.
| Get Chroma vectorstore Document by `doc_id` for document / metadata updates. | https://api.github.com/repos/langchain-ai/langchain/issues/15261/comments | 1 | 2023-12-28T09:48:44Z | 2024-04-04T16:09:01Z | https://github.com/langchain-ai/langchain/issues/15261 | 2,058,224,878 | 15,261 |
[
"hwchase17",
"langchain"
] | ### System Info
0.0.350
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
help(qdrant.amax_marginal_relevance_search)
print("&&&&&&&&&&&&&&&&&")
help(qdrant.max_marginal_relevance_search)

hits = await qdrant.amax_marginal_relevance_search(text, k=20, fetch_k=100, filter=filter_empty)
print(hits)
hits1 = qdrant.max_marginal_relevance_search(text, k=20, fetch_k=100, filter=filter_empty)
print(hits1)
```
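A small harness (a sketch; `store` can be any object exposing the two methods, so a stub stands in for the Qdrant vectorstore below) makes the discrepancy easy to assert on:

```python
import asyncio

async def compare_mmr(store, text, k=20, fetch_k=100, flt=None):
    """Return (async_hit_count, sync_hit_count) for the same MMR query."""
    async_hits = await store.amax_marginal_relevance_search(
        text, k=k, fetch_k=fetch_k, filter=flt)
    sync_hits = store.max_marginal_relevance_search(
        text, k=k, fetch_k=fetch_k, filter=flt)
    return len(async_hits), len(sync_hits)

class StubStore:
    async def amax_marginal_relevance_search(self, text, k, fetch_k, filter):
        return []  # reproduces the reported async behaviour
    def max_marginal_relevance_search(self, text, k, fetch_k, filter):
        return ["doc1", "doc2"]

print(asyncio.run(compare_mmr(StubStore(), "query")))  # (0, 2)
```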
### Expected behavior
`qdrant.amax_marginal_relevance_search` returns no results, but `qdrant.max_marginal_relevance_search` returns results. | qdrant.amax_marginal_relevance_search have not results but qdrant.max_marginal_relevance_search hava results | https://api.github.com/repos/langchain-ai/langchain/issues/15256/comments | 1 | 2023-12-28T07:41:26Z | 2023-12-29T03:31:51Z | https://github.com/langchain-ai/langchain/issues/15256 | 2,058,104,532 | 15,256 |
[
"hwchase17",
"langchain"
] | ### System Info
Python: 3.10
from langchain.chat_models import ChatOpenAI
openai = ChatOpenAI(model_name="gpt-3.5-turbo",
temperature=0.8,
max_tokens=60)
The error occurs in openai.py; the error message is: `AttributeError: module 'openai' has no attribute 'OpenAI'`.
The reason, I guess, is a version mismatch.
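The `AttributeError` means LangChain is calling the `openai.OpenAI` client class (introduced in openai 1.0) while an older SDK module is installed. A small probe for the installed SDK generation (a sketch; the function name is illustrative) — aligning the two packages, e.g. by upgrading one or pinning the other, resolves the mismatch:

```python
import types

def openai_sdk_generation(module) -> str:
    """Classify an imported openai module as the v1+ client SDK or the legacy one."""
    return "v1+" if hasattr(module, "OpenAI") else "legacy (<1.0)"

# stand-ins for the two SDK generations (no network or real SDK needed)
legacy = types.SimpleNamespace()
modern = types.SimpleNamespace(OpenAI=object)
print(openai_sdk_generation(legacy), "/", openai_sdk_generation(modern))
```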
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
import os
from langchain.chat_models import ChatOpenAI
from langchain.schema import (
HumanMessage,
SystemMessage
)
openai = ChatOpenAI(model_name="gpt-3.5-turbo",
temperature=0.8,
max_tokens=60)
messages = [
SystemMessage(content="bla"),
HumanMessage(content="bla")
]
response = openai(messages)
print(response)
### Expected behavior
no exception | langchain 0.5.7 not match latest openai | https://api.github.com/repos/langchain-ai/langchain/issues/15255/comments | 1 | 2023-12-28T07:17:09Z | 2024-04-04T16:08:56Z | https://github.com/langchain-ai/langchain/issues/15255 | 2,058,083,922 | 15,255 |
[
"hwchase17",
"langchain"
] | ### Feature request
Similar to the way callbacks are implemented in BaseLLM the embedding class should also support callbacks.
### Motivation
When using embedding models in a RAG application it would be useful to track e.g. the number of tokens.
Callbacks can be used to log usage details to monitoring services (e.g. LangSmith).
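Until such callbacks exist, one interim pattern (a sketch, not the proposed BaseLLM-style callback API; the token count is a rough whitespace approximation) is a decorating wrapper that reports usage to a user-supplied callback around each embedding call:

```python
class CallbackEmbeddings:
    """Wrap any embeddings object and report usage stats via a callback."""

    def __init__(self, inner, on_embed):
        self.inner = inner          # the real embeddings implementation
        self.on_embed = on_embed    # called with a usage dict after each batch

    def embed_documents(self, texts):
        vectors = self.inner.embed_documents(texts)
        self.on_embed({"num_texts": len(texts),
                       "approx_tokens": sum(len(t.split()) for t in texts)})
        return vectors
```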
### Your contribution
There is a closed PR addressing the same topic: https://github.com/langchain-ai/langchain/pull/7920 | Callbacks for embeddings | https://api.github.com/repos/langchain-ai/langchain/issues/15253/comments | 2 | 2023-12-28T06:29:24Z | 2024-06-11T16:07:18Z | https://github.com/langchain-ai/langchain/issues/15253 | 2,058,046,954 | 15,253 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
What should I do if I want to log the number of tokens used by the LLM in a chain built via LCEL?
### Suggestion:
lcel chain token usage tracking | Issue: lcel chain token usage tracking | https://api.github.com/repos/langchain-ai/langchain/issues/15249/comments | 3 | 2023-12-28T04:51:21Z | 2024-06-24T16:07:30Z | https://github.com/langchain-ai/langchain/issues/15249 | 2,057,986,272 | 15,249 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
I do not understand how chains are built so that information is passed between generations. Here is an example of the code from the langchain [documentation](https://python.langchain.com/docs/expression_language/why):
```
from langchain_core.runnables import RunnablePassthrough

prompt = ChatPromptTemplate.from_template(
    "Tell me a short joke about {topic}"
)
output_parser = StrOutputParser()
model = llm

chain = (
    {"topic": RunnablePassthrough()}
    | prompt
    | model
    | output_parser
)
chain.invoke("ice cream")
```
Here the prompt asks to write a joke about ice cream. Based on this example, my question is: how do I make the chain continue further and, for example, analyze this joke (that is, work further with what was generated)?
My first idea was to just create a second prompt and add it to the chain:
```
prompt = ChatPromptTemplate.from_template(
    "Tell me a short joke about {topic}"
)
prompt1 = ChatPromptTemplate.from_template(
    "What was the joke about?"
)
output_parser = StrOutputParser()
model = llm

chain = (
    {"topic": RunnablePassthrough()}
    | prompt
    | model
    | output_parser
    | prompt1
    | model
    | output_parser
)
```
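The reason this fails is that `prompt1` has no input variable, so the parsed joke is discarded before the second model call. A plain-Python sketch of the data flow (simple functions stand in for the prompt/model/parser; no real LLM is called):

```python
def prompt(topic):
    return f"Tell me a short joke about {topic}"

def model(text):  # stand-in for the LLM
    return f"LLM-RESPONSE({text})"

def parser(message):
    return message

# broken: the second template ignores its input, so the joke is lost
def prompt1_without_context(_ignored):
    return "What was the joke about?"

# working: thread the previous output into the next template explicitly
def prompt1_with_context(joke):
    return f"What was the following joke about?\n{joke}"

joke = parser(model(prompt("ice cream")))
print(prompt1_without_context(joke))   # no trace of the joke
print(prompt1_with_context(joke))      # carries the joke forward
```

In LCEL terms the fix is the same idea: route the first output into a named variable, e.g. `... | output_parser | {"joke": RunnablePassthrough()} | ChatPromptTemplate.from_template("What was the following joke about?\n{joke}") | model | output_parser`.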
But it won't work that way, because for some reason the model doesn't know the context...

### Idea or request for content:
_No response_ | DOC: langchain LCEL - transfer of information between generations | https://api.github.com/repos/langchain-ai/langchain/issues/15247/comments | 10 | 2023-12-28T04:06:17Z | 2024-04-05T16:07:50Z | https://github.com/langchain-ai/langchain/issues/15247 | 2,057,963,845 | 15,247 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Based on the documentation and RFC standards referenced in the links:
- https://peps.python.org/pep-0604/
- https://www.blog.pythonlibrary.org/2021/09/11/python-3-10-simplifies-unions-in-type-annotations/
it's evident that using `|` instead of `typing.Union` in type annotations is a feature introduced in Python 3.10.
However, I've observed that in our project's pyproject.toml and ci.yaml files, the Python version is specified as python = ">=3.8.1,<4.0".
This leads me to question whether LangChain will face issues with type checking, or even with running, in the specified Python 3.8 environment, given that Python 3.8 doesn't support the `|` syntax for unions.
If there are any considerations or plans, such as updating the pyproject.toml and ci.yaml to make LangChain compatible with a minimum of Python 3.10, or if it's appropriate for me to submit a PR to address the use of the | operator in type annotations within LangChain, I'd appreciate your input and guidance.
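For context, the two spellings and the compatibility caveat (a sketch; note that `from __future__ import annotations` only defers *annotations*, and does not help runtime uses such as `isinstance(x, int | str)` or libraries like pydantic that evaluate annotation strings):

```python
from typing import Optional

# Pre-3.10 spelling, valid on Python 3.8:
def parse(value: str) -> Optional[int]:
    try:
        return int(value)
    except ValueError:
        return None

# On 3.10+ the same annotation can be written with PEP 604 syntax:
#     def parse(value: str) -> int | None: ...
# On 3.8/3.9 that def raises TypeError at import time unless
# `from __future__ import annotations` defers evaluation, which is why a
# package declaring ">=3.8.1" must keep Union/Optional (or the future import).

print(parse("42"), parse("nope"))  # 42 None
```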
### Suggestion:
Upgrade the minimum Python version, or fix and remove the `|` syntax. I would be happy to do this; please let me know your decision.
@hwchase17 | python 3.10 `|` union syntax compatibility | https://api.github.com/repos/langchain-ai/langchain/issues/15244/comments | 1 | 2023-12-28T02:53:57Z | 2023-12-28T06:06:43Z | https://github.com/langchain-ai/langchain/issues/15244 | 2,057,929,816 | 15,244 |
[
"hwchase17",
"langchain"
] | ### System Info
How can a ChatGLM-6B model loaded by LangChain be quantized?
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
# local model
else:
    from configs.model_config import VLLM_MODEL_DICT
    if kwargs["model_names"][0] in VLLM_MODEL_DICT and args.infer_turbo == "vllm":
        import fastchat.serve.vllm_worker
        from fastchat.serve.vllm_worker import VLLMWorker, app, worker_id
        from vllm import AsyncLLMEngine
        from vllm.engine.arg_utils import AsyncEngineArgs, EngineArgs

        args.tokenizer = args.model_path  # set here if the tokenizer differs from model_path
        args.tokenizer_mode = 'auto'
        args.trust_remote_code = True
        args.download_dir = None
        args.load_format = 'auto'
        args.dtype = 'auto'
        args.seed = 0
        args.worker_use_ray = False
        args.pipeline_parallel_size = 1
        args.tensor_parallel_size = 1
        args.block_size = 16
        args.swap_space = 4  # GiB
        args.gpu_memory_utilization = 0.90
        args.max_num_batched_tokens = None  # max tokens per batch; depends on your GPU and model settings, too large and VRAM runs out
        args.max_num_seqs = 256
        args.disable_log_stats = False
        args.conv_template = None
        args.limit_worker_concurrency = 5
        args.no_register = False
        args.num_gpus = 4  # vllm workers are sharded with tensor parallelism; set this to the number of GPUs
        args.engine_use_ray = False
        args.disable_log_requests = False

        # parameters added after vllm 0.2.1, but not needed here
        args.max_model_len = None
        args.revision = None
        args.quantization = None
        args.max_log_len = None
        args.tokenizer_revision = None

        # new parameter required by vllm 0.2.2
        args.max_paddings = 256

        if args.model_path:
            args.model = args.model_path
        if args.num_gpus > 1:
            args.tensor_parallel_size = args.num_gpus

        for k, v in kwargs.items():
            setattr(args, k, v)

        engine_args = AsyncEngineArgs.from_cli_args(args)
        engine = AsyncLLMEngine.from_engine_args(engine_args)

        worker = VLLMWorker(
            controller_addr=args.controller_address,
            worker_addr=args.worker_address,
            worker_id=worker_id,
            model_path=args.model_path,
            model_names=args.model_names,
            limit_worker_concurrency=args.limit_worker_concurrency,
            no_register=args.no_register,
            llm_engine=engine,
            conv_template=args.conv_template,
        )
        sys.modules["fastchat.serve.vllm_worker"].engine = engine
        sys.modules["fastchat.serve.vllm_worker"].worker = worker
        sys.modules["fastchat.serve.vllm_worker"].logger.setLevel(log_level)
```
### Expected behavior
How can the model be quantized when loading this local model? | How to quantize a ChatGLM-6B model loaded by LangChain | https://api.github.com/repos/langchain-ai/langchain/issues/15243/comments | 3 | 2023-12-28T02:17:17Z | 2024-04-04T16:08:46Z | https://github.com/langchain-ai/langchain/issues/15243 | 2,057,912,633 | 15,243 |
[
"hwchase17",
"langchain"
] | ### System Info
I used the standard code example for Fireworks from the langchain documentation, with my API key inserted. This is the error I get:
```
[llm/start] [1:llm:Fireworks] Entering LLM run with input:
{
"prompts": [
"Name 3 sports."
]
}
[llm/error] [1:llm:Fireworks] [761ms] LLM run errored with error:
"AuthenticationError({'fault': {'faultstring': 'Invalid ApiKey', 'detail': {'errorcode': 'oauth.v2.InvalidApiKey'}}})Traceback (most recent call last):\n\n\n File \"C:\\Users\\akidra\\AppData\\Roaming\\Python\\Python311\\site-packages\\langchain_core\\language_models\\llms.py\", line 540, in _generate_helper\n self._generate(\n\n\n File \"C:\\Users\\akidra\\AppData\\Roaming\\Python\\Python311\\site-packages\\langchain_community\\llms\\fireworks.py\", line 100, in _generate\n response = completion_with_retry_batching(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n\n File \"C:\\Users\\akidra\\AppData\\Roaming\\Python\\Python311\\site-packages\\langchain_community\\llms\\fireworks.py\", line 296, in completion_with_retry_batching\n return batch_sync_run()\n ^^^^^^^^^^^^^^^^\n\n\n File \"C:\\Users\\akidra\\AppData\\Roaming\\Python\\Python311\\site-packages\\langchain_community\\llms\\fireworks.py\", line 293, in batch_sync_run\n results = list(executor.map(_completion_with_retry, prompt))\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n\n File \"C:\\Program Files\\Python311\\Lib\\concurrent\\futures\\_base.py\", line 619, in result_iterator\n yield _result_or_cancel(fs.pop())\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n\n File \"C:\\Program Files\\Python311\\Lib\\concurrent\\futures\\_base.py\", line 317, in _result_or_cancel\n return fut.result(timeout)\n ^^^^^^^^^^^^^^^^^^^\n\n\n File \"C:\\Program Files\\Python311\\Lib\\concurrent\\futures\\_base.py\", line 456, in result\n return self.__get_result()\n ^^^^^^^^^^^^^^^^^^^\n\n\n File \"C:\\Program Files\\Python311\\Lib\\concurrent\\futures\\_base.py\", line 401, in __get_result\n raise self._exception\n\n\n File \"C:\\Program Files\\Python311\\Lib\\concurrent\\futures\\thread.py\", line 58, in run\n result = self.fn(*self.args, **self.kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n\n File \"C:\\Users\\akidra\\AppData\\Roaming\\Python\\Python311\\site-packages\\tenacity\\__init__.py\", line 289, in wrapped_f\n return self(f, 
*args, **kw)\n ^^^^^^^^^^^^^^^^^^^^\n\n\n File \"C:\\Users\\akidra\\AppData\\Roaming\\Python\\Python311\\site-packages\\tenacity\\__init__.py\", line 379, in __call__\n do = self.iter(retry_state=retry_state)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n\n File \"C:\\Users\\akidra\\AppData\\Roaming\\Python\\Python311\\site-packages\\tenacity\\__init__.py\", line 314, in iter\n return fut.result()\n ^^^^^^^^^^^^\n\n\n File \"C:\\Program Files\\Python311\\Lib\\concurrent\\futures\\_base.py\", line 449, in result\n return self.__get_result()\n ^^^^^^^^^^^^^^^^^^^\n\n\n File \"C:\\Program Files\\Python311\\Lib\\concurrent\\futures\\_base.py\", line 401, in __get_result\n raise self._exception\n\n\n File \"C:\\Users\\akidra\\AppData\\Roaming\\Python\\Python311\\site-packages\\tenacity\\__init__.py\", line 382, in __call__\n result = fn(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^\n\n\n File \"C:\\Users\\akidra\\AppData\\Roaming\\Python\\Python311\\site-packages\\langchain_community\\llms\\fireworks.py\", line 289, in _completion_with_retry\n return fireworks.client.Completion.create(**kwargs, prompt=prompt)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n\n File \"C:\\Users\\akidra\\AppData\\Roaming\\Python\\Python311\\site-packages\\fireworks\\client\\base_completion.py\", line 80, in create\n return cls._create_non_streaming(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n\n File \"C:\\Users\\akidra\\AppData\\Roaming\\Python\\Python311\\site-packages\\fireworks\\client\\base_completion.py\", line 158, in _create_non_streaming\n response = client.post_request_non_streaming(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n\n File \"C:\\Users\\akidra\\AppData\\Roaming\\Python\\Python311\\site-packages\\fireworks\\client\\api_client.py\", line 125, in post_request_non_streaming\n self._error_handling(response)\n\n\n File \"C:\\Users\\akidra\\AppData\\Roaming\\Python\\Python311\\site-packages\\fireworks\\client\\api_client.py\", line 91, in _error_handling\n 
self._raise_for_status(resp)\n\n\n File \"C:\\Users\\akidra\\AppData\\Roaming\\Python\\Python311\\site-packages\\fireworks\\client\\api_client.py\", line 67, in _raise_for_status\n raise AuthenticationError(resp.json())\n\n\nfireworks.client.error.AuthenticationError: {'fault': {'faultstring': 'Invalid ApiKey', 'detail': {'errorcode': 'oauth.v2.InvalidApiKey'}}}"
---------------------------------------------------------------------------
AuthenticationError Traceback (most recent call last)
Cell In[25], line 7
1 from langchain.llms.fireworks import Fireworks
3 llm = Fireworks(
4 fireworks_api_key="<BPR7ILI5ar0xAVWKwwAPvE8cyL2yBFpJRGqDGU3QirD6N8W0>",
5 model="accounts/fireworks/models/mixtral-8x7b-instruct",
6 max_tokens=256)
----> 7 llm("Name 3 sports.")
File ~\AppData\Roaming\Python\Python311\site-packages\langchain_core\language_models\llms.py:892, in BaseLLM.__call__(self, prompt, stop, callbacks, tags, metadata, **kwargs)
885 if not isinstance(prompt, str):
886 raise ValueError(
887 "Argument `prompt` is expected to be a string. Instead found "
888 f"{type(prompt)}. If you want to run the LLM on multiple prompts, use "
889 "`generate` instead."
890 )
891 return (
--> 892 self.generate(
893 [prompt],
894 stop=stop,
895 callbacks=callbacks,
896 tags=tags,
897 metadata=metadata,
898 **kwargs,
899 )
900 .generations[0][0]
901 .text
902 )
File ~\AppData\Roaming\Python\Python311\site-packages\langchain_core\language_models\llms.py:666, in BaseLLM.generate(self, prompts, stop, callbacks, tags, metadata, run_name, **kwargs)
650 raise ValueError(
651 "Asked to cache, but no cache found at `langchain.cache`."
652 )
653 run_managers = [
654 callback_manager.on_llm_start(
655 dumpd(self),
(...)
664 )
665 ]
--> 666 output = self._generate_helper(
667 prompts, stop, run_managers, bool(new_arg_supported), **kwargs
668 )
669 return output
670 if len(missing_prompts) > 0:
File ~\AppData\Roaming\Python\Python311\site-packages\langchain_core\language_models\llms.py:553, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
551 for run_manager in run_managers:
552 run_manager.on_llm_error(e, response=LLMResult(generations=[]))
--> 553 raise e
554 flattened_outputs = output.flatten()
555 for manager, flattened_output in zip(run_managers, flattened_outputs):
File ~\AppData\Roaming\Python\Python311\site-packages\langchain_core\language_models\llms.py:540, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
530 def _generate_helper(
531 self,
532 prompts: List[str],
(...)
536 **kwargs: Any,
537 ) -> LLMResult:
538 try:
539 output = (
--> 540 self._generate(
541 prompts,
542 stop=stop,
543 # TODO: support multiple run managers
544 run_manager=run_managers[0] if run_managers else None,
545 **kwargs,
546 )
547 if new_arg_supported
548 else self._generate(prompts, stop=stop)
549 )
550 except BaseException as e:
551 for run_manager in run_managers:
File ~\AppData\Roaming\Python\Python311\site-packages\langchain_community\llms\fireworks.py:100, in Fireworks._generate(self, prompts, stop, run_manager, **kwargs)
98 choices = []
99 for _prompts in sub_prompts:
--> 100 response = completion_with_retry_batching(
101 self,
102 self.use_retry,
103 prompt=_prompts,
104 run_manager=run_manager,
105 stop=stop,
106 **params,
107 )
108 choices.extend(response)
110 return self.create_llm_result(choices, prompts)
File ~\AppData\Roaming\Python\Python311\site-packages\langchain_community\llms\fireworks.py:296, in completion_with_retry_batching(llm, use_retry, run_manager, **kwargs)
293 results = list(executor.map(_completion_with_retry, prompt))
294 return results
--> 296 return batch_sync_run()
File ~\AppData\Roaming\Python\Python311\site-packages\langchain_community\llms\fireworks.py:293, in completion_with_retry_batching.<locals>.batch_sync_run()
291 def batch_sync_run() -> List:
292 with ThreadPoolExecutor() as executor:
--> 293 results = list(executor.map(_completion_with_retry, prompt))
294 return results
File C:\Program Files\Python311\Lib\concurrent\futures\_base.py:619, in Executor.map.<locals>.result_iterator()
616 while fs:
617 # Careful not to keep a reference to the popped future
618 if timeout is None:
--> 619 yield _result_or_cancel(fs.pop())
620 else:
621 yield _result_or_cancel(fs.pop(), end_time - time.monotonic())
File C:\Program Files\Python311\Lib\concurrent\futures\_base.py:317, in _result_or_cancel(***failed resolving arguments***)
315 try:
316 try:
--> 317 return fut.result(timeout)
318 finally:
319 fut.cancel()
File C:\Program Files\Python311\Lib\concurrent\futures\_base.py:456, in Future.result(self, timeout)
454 raise CancelledError()
455 elif self._state == FINISHED:
--> 456 return self.__get_result()
457 else:
458 raise TimeoutError()
File C:\Program Files\Python311\Lib\concurrent\futures\_base.py:401, in Future.__get_result(self)
399 if self._exception:
400 try:
--> 401 raise self._exception
402 finally:
403 # Break a reference cycle with the exception in self._exception
404 self = None
File C:\Program Files\Python311\Lib\concurrent\futures\thread.py:58, in _WorkItem.run(self)
55 return
57 try:
---> 58 result = self.fn(*self.args, **self.kwargs)
59 except BaseException as exc:
60 self.future.set_exception(exc)
File ~\AppData\Roaming\Python\Python311\site-packages\tenacity\__init__.py:289, in BaseRetrying.wraps.<locals>.wrapped_f(*args, **kw)
287 @functools.wraps(f)
288 def wrapped_f(*args: t.Any, **kw: t.Any) -> t.Any:
--> 289 return self(f, *args, **kw)
File ~\AppData\Roaming\Python\Python311\site-packages\tenacity\__init__.py:379, in Retrying.__call__(self, fn, *args, **kwargs)
377 retry_state = RetryCallState(retry_object=self, fn=fn, args=args, kwargs=kwargs)
378 while True:
--> 379 do = self.iter(retry_state=retry_state)
380 if isinstance(do, DoAttempt):
381 try:
File ~\AppData\Roaming\Python\Python311\site-packages\tenacity\__init__.py:314, in BaseRetrying.iter(self, retry_state)
312 is_explicit_retry = fut.failed and isinstance(fut.exception(), TryAgain)
313 if not (is_explicit_retry or self.retry(retry_state)):
--> 314 return fut.result()
316 if self.after is not None:
317 self.after(retry_state)
File C:\Program Files\Python311\Lib\concurrent\futures\_base.py:449, in Future.result(self, timeout)
447 raise CancelledError()
448 elif self._state == FINISHED:
--> 449 return self.__get_result()
451 self._condition.wait(timeout)
453 if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:
File C:\Program Files\Python311\Lib\concurrent\futures\_base.py:401, in Future.__get_result(self)
399 if self._exception:
400 try:
--> 401 raise self._exception
402 finally:
403 # Break a reference cycle with the exception in self._exception
404 self = None
File ~\AppData\Roaming\Python\Python311\site-packages\tenacity\__init__.py:382, in Retrying.__call__(self, fn, *args, **kwargs)
380 if isinstance(do, DoAttempt):
381 try:
--> 382 result = fn(*args, **kwargs)
383 except BaseException: # noqa: B902
384 retry_state.set_exception(sys.exc_info()) # type: ignore[arg-type]
File ~\AppData\Roaming\Python\Python311\site-packages\langchain_community\llms\fireworks.py:289, in completion_with_retry_batching.<locals>._completion_with_retry(prompt)
287 @conditional_decorator(use_retry, retry_decorator)
288 def _completion_with_retry(prompt: str) -> Any:
--> 289 return fireworks.client.Completion.create(**kwargs, prompt=prompt)
File ~\AppData\Roaming\Python\Python311\site-packages\fireworks\client\base_completion.py:80, in BaseCompletion.create(cls, model, prompt_or_messages, request_timeout, stream, client, **kwargs)
76 return cls._create_streaming(
77 model, request_timeout, client=client, **kwargs
78 )
79 else:
---> 80 return cls._create_non_streaming(
81 model, request_timeout, client=client, **kwargs
82 )
File ~\AppData\Roaming\Python\Python311\site-packages\fireworks\client\base_completion.py:158, in BaseCompletion._create_non_streaming(cls, model, request_timeout, client, **kwargs)
156 client = client or FireworksClient(request_timeout=request_timeout)
157 data = {"model": model, "stream": False, **kwargs}
--> 158 response = client.post_request_non_streaming(
159 f"{client.base_url}/{cls.endpoint}", data=data
160 )
161 return cls.response_class(**response)
File ~\AppData\Roaming\Python\Python311\site-packages\fireworks\client\api_client.py:125, in FireworksClient.post_request_non_streaming(self, url, data)
119 with httpx.Client(
120 headers={"Authorization": f"Bearer {self.api_key}"},
121 timeout=self.request_timeout,
122 **self.client_kwargs,
123 ) as client:
124 response = client.post(url, json=data)
--> 125 self._error_handling(response)
126 return response.json()
File ~\AppData\Roaming\Python\Python311\site-packages\fireworks\client\api_client.py:91, in FireworksClient._error_handling(self, resp)
89 if resp.is_error:
90 resp.read()
---> 91 self._raise_for_status(resp)
File ~\AppData\Roaming\Python\Python311\site-packages\fireworks\client\api_client.py:67, in FireworksClient._raise_for_status(self, resp)
65 raise InvalidRequestError(resp.json())
66 elif resp.status_code == 401:
---> 67 raise AuthenticationError(resp.json())
68 elif resp.status_code == 403:
69 raise PermissionError(resp.json())
AuthenticationError: {'fault': {'faultstring': 'Invalid ApiKey', 'detail': {'errorcode': 'oauth.v2.InvalidApiKey'}}}
```
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. i use https://python.langchain.com/docs/integrations/providers/fireworks
2. got the API key in https://app.fireworks.ai/api-keys
3. I inserted my key into this code:
```
from langchain.llms.fireworks import Fireworks
import os
os.environ["FIREWORKS_API_KEY"] = "<My key was here.>"
llm = Fireworks(fireworks_api_key="<My key was here.>")
llm = Fireworks(
fireworks_api_key="<My key was here.>",
model="accounts/fireworks/models/mixtral-8x7b-instruct",
max_tokens=256)
llm("Name 3 sports.")
```
### Expected behavior
this example is from the documentation - I just want it to work to move on. | error when running the sample code from the langchain documentation about fireworks | https://api.github.com/repos/langchain-ai/langchain/issues/15239/comments | 1 | 2023-12-28T01:10:59Z | 2023-12-28T01:24:35Z | https://github.com/langchain-ai/langchain/issues/15239 | 2,057,882,953 | 15,239 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
Hello everyone! Is it possible to use the OpenAI-compatible URL API from text-generation-webui with LangChain? The LangChain [documentation](https://python.langchain.com/docs/integrations/llms/textgen) only talks about localhost, which I don't have access to. I tried inserting my link into `model_url`, and the error appeared both in Google Colab and in the terminal.


### Idea or request for content:
_No response_ | DOC: langchain plus OpenAI-compatible URL API equally error | https://api.github.com/repos/langchain-ai/langchain/issues/15237/comments | 6 | 2023-12-28T00:56:43Z | 2024-01-04T16:19:09Z | https://github.com/langchain-ai/langchain/issues/15237 | 2,057,877,277 | 15,237 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I am receiving this error 2 validation errors for ConversationalRetrievalChain
qa_template
extra fields not permitted (type=value_error.extra)
question_generator_chain_options
extra fields not permitted (type=value_error.extra) , for the following code :
```
retriever = vector_store.as_retriever()
sales_persona_prompt = PromptTemplate.from_template(SALES_PERSONA_PROMPT)
condense_prompt = PromptTemplate.from_template(CONDENSE_PROMPT)
question_generator_chain_options = {
"llm": non_streaming_model,
"template": condense_prompt,
}
chain = ConversationalRetrievalChain.from_llm(
streaming_model,
retriever,
qa_template=sales_persona_prompt,
question_generator_chain_options=question_generator_chain_options,
return_source_documents=False,
)
```
### Suggestion:
_No response_ | Issue: validation errors for ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/15236/comments | 3 | 2023-12-28T00:30:51Z | 2024-04-04T16:08:41Z | https://github.com/langchain-ai/langchain/issues/15236 | 2,057,867,182 | 15,236 |
[
"hwchase17",
"langchain"
] | ### System Info
Langchain version: 0.0.340
Python version: 3.11.0
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am trying to use the HuggingFace Hub wrapper to create a chat model instance and use the model in a chain. However, there seem to be some library discrepancies between various base files.
Below is the code that works:

```python
from langchain_community.llms import HuggingFaceHub
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_community.chat_models.huggingface import ChatHuggingFace
from langchain.prompts import PromptTemplate, ChatPromptTemplate

llm = HuggingFaceHub(
    repo_id="HuggingFaceH4/zephyr-7b-beta",
    task="text-generation",
    model_kwargs={
        "max_new_tokens": 512,
        "top_k": 30,
        "temperature": 0.1,
        "repetition_penalty": 1.03,
    },
)

chat_model = ChatHuggingFace(llm=llm)

messages = [
    SystemMessage(content="You're a zoologist who is able to answer questions about various animals. You are tasked with answering the following question provided"),
    HumanMessage(content="What is the average lifespan of an Elephant?"),
]

res = chat_model.invoke(messages)
print(res.content)
```
I want to modify this to allow the prompt to be more dynamic and potentially include a chain of prompts. Here is my modification:

```python
prompt = ChatPromptTemplate.from_messages(
    [
        SystemMessage(content="You're a zoologist who is able to answer questions about various animals. You are tasked with answering the following question provided"),
        HumanMessage(content="What is the average lifespan of an {animal}?"),
    ]
)

chain1 = prompt | chat_model
chain1.invoke({"animal": "giraffe"})
```
I get the following error: `NotImplementedError: Unsupported message type: <class 'langchain_core.messages.system.SystemMessage'>`. This is because in the `chat.py` file the import statement for the messages is the following:

```python
from langchain.schema.messages import (
    AIMessage,
    AnyMessage,
    BaseMessage,
    ChatMessage,
    HumanMessage,
    SystemMessage,
    get_buffer_string,
)
```

However, the updated version I found in the documentation states to use `langchain_core.messages`.
Even if I update my import statements to the old version, I run into the following error: `TypeError: 'ChatPromptValue' object is not subscriptable`.
### Expected behavior
I should be able to execute the chain and receive the same kind of output as the non-dynamic version of the code (`res.content`).
[
"hwchase17",
"langchain"
] | ### System Info
python = "3.11"
langchain = "0.0.352"
cohere = "4.39"
mlflow = {extras = ["genai"], version = "2.9.2"}
### Who can help?
@harupy
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I followed the official example for embeddings here, except that I am using Cohere instead of OpenAI: https://python.langchain.com/docs/integrations/providers/mlflow
<details>
<summary>Click for specific steps</summary>
More specifically, I installed mlflow genai and set my `COHERE_API_KEY` environment variable:
```bash
pip install 'mlflow[genai]'
export COHERE_API_KEY=...
```
I created `config.yaml` like so:
```yaml
endpoints:
  - name: completions
    endpoint_type: llm/v1/completions
    model:
      provider: cohere
      name: command
      config:
        cohere_api_key: $COHERE_API_KEY
  - name: embeddings
    endpoint_type: llm/v1/embeddings
    model:
      provider: cohere
      name: embed-english-light-v3.0
      config:
        cohere_api_key: $COHERE_API_KEY
```
I started the mlflow deployments server:
```bash
mlflow deployments start-server --config-path config.yaml
```
<details>
<summary>The server started as expected</summary>
```
xxx/python3.11/site-packages/pydantic/_internal/_config.py:321: UserWarning: Valid config keys have changed in V2:
* 'schema_extra' has been renamed to 'json_schema_extra'
warnings.warn(message, UserWarning)
[2023-12-27 13:53:18 -0800] [22480] [INFO] Starting gunicorn 21.2.0
[2023-12-27 13:53:18 -0800] [22480] [INFO] Listening at: http://127.0.0.1:5000 (22480)
[2023-12-27 13:53:18 -0800] [22480] [INFO] Using worker: uvicorn.workers.UvicornWorker
[2023-12-27 13:53:18 -0800] [22481] [INFO] Booting worker with pid: 22481
[2023-12-27 13:53:18 -0800] [22482] [INFO] Booting worker with pid: 22482
xxx/python3.11/site-packages/pydantic/_internal/_config.py:321: UserWarning: Valid config keys have changed in V2:
* 'schema_extra' has been renamed to 'json_schema_extra'
warnings.warn(message, UserWarning)
xxx/python3.11/site-packages/pydantic/_internal/_config.py:321: UserWarning: Valid config keys have changed in V2:
* 'schema_extra' has been renamed to 'json_schema_extra'
warnings.warn(message, UserWarning)
[2023-12-27 13:53:20 -0800] [22481] [INFO] Started server process [22481]
[2023-12-27 13:53:20 -0800] [22481] [INFO] Waiting for application startup.
[2023-12-27 13:53:20 -0800] [22481] [INFO] Application startup complete.
[2023-12-27 13:53:20 -0800] [22482] [INFO] Started server process [22482]
[2023-12-27 13:53:20 -0800] [22482] [INFO] Waiting for application startup.
[2023-12-27 13:53:20 -0800] [22482] [INFO] Application startup complete.
```
</details>
In `test.py`, I added the embeddings example:
```python
from langchain.embeddings import MlflowEmbeddings
embeddings = MlflowEmbeddings(
    target_uri="http://127.0.0.1:5000",
    endpoint="embeddings",
)
print(embeddings.embed_query("hello"))
print(embeddings.embed_documents(["hello"]))
```
And I ran it with `python test.py`.
</details>
Here is the error I got:
```
raise HTTPError(
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http://127.0.0.1:5000/endpoints/embeddings/invocations. Response text: {"detail":{"message":"invalid request: valid input_type must be provided with the provided model"}}
```
<details>
<summary>Full trace</summary>
```
xxx/python3.11/site-packages/pydantic/_internal/_config.py:321: UserWarning: Valid config keys have changed in V2:
* 'schema_extra' has been renamed to 'json_schema_extra'
warnings.warn(message, UserWarning)
Traceback (most recent call last):
File "xxx/python3.11/site-packages/mlflow/utils/request_utils.py", line 52, in augmented_raise_for_status
response.raise_for_status()
File "xxx/python3.11/site-packages/requests/models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http://127.0.0.1:5000/endpoints/embeddings/invocations
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "yyy/test.py", line 8, in <module>
print(embeddings.embed_query("hello"))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "xxx/python3.11/site-packages/langchain_community/embeddings/mlflow.py", line 74, in embed_query
return self.embed_documents([text])[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "xxx/python3.11/site-packages/langchain_community/embeddings/mlflow.py", line 69, in embed_documents
resp = self._client.predict(endpoint=self.endpoint, inputs={"input": txt})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "xxx/python3.11/site-packages/mlflow/deployments/mlflow/__init__.py", line 294, in predict
return self._call_endpoint(
^^^^^^^^^^^^^^^^^^^^
File "xxx/python3.11/site-packages/mlflow/deployments/mlflow/__init__.py", line 139, in _call_endpoint
augmented_raise_for_status(response)
File "xxx/python3.11/site-packages/mlflow/utils/request_utils.py", line 55, in augmented_raise_for_status
raise HTTPError(
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http://127.0.0.1:5000/endpoints/embeddings/invocations. Response text: {"detail":{"message":"invalid request: valid input_type must be provided with the provided model"}}
```
</details>
The issue is that `embed_query`/`embed_documents` don't allow passing in the input_type argument, which is needed by the Cohere API -- see https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/cohere.py#L81
### Proposed solution
My quick solution was to modify the two methods in `MlflowEmbeddings` to allow for kwargs:
```python
class MlflowEmbeddings:
    ...

    def embed_documents(self, texts: List[str], **kwargs) -> List[List[float]]:
        embeddings: List[List[float]] = []
        for txt in _chunk(texts, 20):
            resp = self._client.predict(endpoint=self.endpoint, inputs={"input": txt, **kwargs})
            embeddings.extend(r["embedding"] for r in resp["data"])
        return embeddings

    def embed_query(self, text: str, **kwargs) -> List[float]:
        return self.embed_documents([text], **kwargs)[0]
```
So `test.py` changes to:
```python
print(embeddings.embed_query("hello", input_type="search_query"))
print(embeddings.embed_documents(["hello"], input_type="search_document"))
```
This might not be the best solution since it kind of defeats the purpose of separating `embed_query` and `embed_documents` for Cohere. Another solution is to subclass MlflowEmbeddings for Cohere (and others?).
I intend to open a PR with this change, so any guidance on the best approach is much appreciated!
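To sanity-check that the `**kwargs` actually reach the deployments client, here is a stdlib-only toy with a stub client standing in for the MLflow deployments client (every name in it is illustrative, not the real MLflow API):

```python
# Stub standing in for the MLflow deployments client: records what it was
# asked to predict so we can inspect the payload.
class StubClient:
    def __init__(self):
        self.last_inputs = None

    def predict(self, endpoint, inputs):
        self.last_inputs = inputs
        # Shape mimics the gateway's embeddings response.
        return {"data": [{"embedding": [0.0, 0.0]} for _ in inputs["input"]]}


class PatchedEmbeddings:
    """Toy version of MlflowEmbeddings with the proposed **kwargs change."""

    def __init__(self, client, endpoint):
        self._client = client
        self.endpoint = endpoint

    def embed_documents(self, texts, **kwargs):
        resp = self._client.predict(
            endpoint=self.endpoint, inputs={"input": texts, **kwargs}
        )
        return [r["embedding"] for r in resp["data"]]

    def embed_query(self, text, **kwargs):
        return self.embed_documents([text], **kwargs)[0]


client = StubClient()
emb = PatchedEmbeddings(client, "embeddings")
emb.embed_query("hello", input_type="search_query")
print(client.last_inputs)  # {'input': ['hello'], 'input_type': 'search_query'}
```

With the real class, `input_type` would then flow through the gateway to the Cohere provider unchanged.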
### Expected behavior
The code should generate embeddings for the given words | MlflowEmbeddings: input_type argument is missing, required by Cohere embeddings models | https://api.github.com/repos/langchain-ai/langchain/issues/15234/comments | 2 | 2023-12-27T23:59:40Z | 2024-03-21T20:47:30Z | https://github.com/langchain-ai/langchain/issues/15234 | 2,057,854,254 | 15,234 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
I am currently following the document to use a hugigngface LLM as a chat model: https://python.langchain.com/docs/integrations/chat/huggingface
I have setup my Huggingface API and am using Option 3 (HuggingFaceHub) to instantiate an LLM.
After running this line: chat_model._to_chat_prompt(messages)
I get the following error: ValueError: last message must be a HumanMessage
I am running the code the exactly the same as the documentation, including using the HuggingFaceH4/zephyr-7b-beta model.
Any help in resolving this issue is much appreciated.
### Idea or request for content:
_No response_ | HuggingFace Chat Wrapper - issue with HuggingFaceHub | https://api.github.com/repos/langchain-ai/langchain/issues/15232/comments | 4 | 2023-12-27T21:52:37Z | 2024-04-03T16:09:39Z | https://github.com/langchain-ai/langchain/issues/15232 | 2,057,799,134 | 15,232 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
@dosu-bot Currently I'm experiencing an old bug that was supposedly fixed several patches ago.
```
File "/layers/google.python.pip/pip/lib/python3.10/site-packages/flask/app.py", line 1455, in wsgi_app
response = self.full_dispatch_request()
File "/layers/google.python.pip/pip/lib/python3.10/site-packages/flask/app.py", line 869, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/layers/google.python.pip/pip/lib/python3.10/site-packages/flask/app.py", line 867, in full_dispatch_request
rv = self.dispatch_request()
File "/layers/google.python.pip/pip/lib/python3.10/site-packages/flask/app.py", line 852, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "/layers/google.python.pip/pip/lib/python3.10/site-packages/functions_framework/__init__.py", line 134, in view_func
return function(request._get_current_object())
File "/workspace/main.py", line 109, in entry_point_http
faq_response = chain.invoke(inputs)
File "/layers/google.python.pip/pip/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1510, in invoke
input = step.invoke(
File "/layers/google.python.pip/pip/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 160, in invoke
self.generate_prompt(
File "/layers/google.python.pip/pip/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 491, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File "/layers/google.python.pip/pip/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 378, in generate
raise e
File "/layers/google.python.pip/pip/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 368, in generate
self._generate_with_cache(
File "/layers/google.python.pip/pip/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 524, in _generate_with_cache
return self._generate(
File "/layers/google.python.pip/pip/lib/python3.10/site-packages/langchain/chat_models/vertexai.py", line 187, in _generate
response = chat.send_message(question.content, **msg_params)
TypeError: _ChatSessionBase.send_message() got an unexpected keyword argument 'candidate_count'
```
My current version is 0.0.348 and I'm trying to create a Cloud Function. Here is my code:
```
from google.cloud import bigquery, storage
from langchain.chains.conversation.memory import ConversationBufferWindowMemory
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.chat_models import ChatVertexAI
from langchain.document_loaders import PyPDFLoader
from langchain.document_loaders.csv_loader import CSVLoader
from langchain.memory import ConversationSummaryBufferMemory
from langchain.schema.runnable import RunnablePassthrough, RunnableLambda
from operator import itemgetter
from langchain.schema.output_parser import StrOutputParser
from langchain.callbacks.tracers import ConsoleCallbackHandler
from langchain.embeddings import VertexAIEmbeddings
from langchain.llms import VertexAI
from langchain.prompts import PromptTemplate, ChatPromptTemplate
from langchain.retrievers import BM25Retriever, EnsembleRetriever, ContextualCompressionRetriever
from langchain.retrievers.merger_retriever import MergerRetriever
from langchain.document_transformers import EmbeddingsRedundantFilter
from langchain.retrievers.document_compressors import DocumentCompressorPipeline
from langchain.retrievers.document_compressors import LLMChainExtractor
from langchain.schema import Document
from langchain.schema import StrOutputParser
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma
import google.cloud.storage
from langchain.agents.format_scratchpad import format_log_to_str
from langchain.prompts import StringPromptTemplate
from typing import List, Union
from langchain.prompts import StringPromptTemplate
from langchain.schema import AgentAction, AgentFinish, OutputParserException
from langchain.vectorstores import MatchingEngine
import re
import io
import ipywidgets as widgets
import json
import langchain
import math
import os
import pandas as pd
import time
import logging
from faq_redpro_prompt import template_faq
from faq_redpro_fewshot import few_shot_faq
PROJECT_ID_ME = os.environ.get("PROJECT_ID_ME")
ME_REGION = os.environ.get("ME_REGION")
ME_BUCKET_FAQ = os.environ.get("ME_BUCKET_FAQ")
ME_INDEX_ID_FAQ = os.environ.get("ME_INDEX_ID_FAQ")
ME_INDEX_ENDPOINT_ID_FAQ = os.environ.get("ME_INDEX_ENDPOINT_ID_FAQ")
def entry_point_http(request):
    request_json = request.get_json()

    # Extract the input from the parameter sent by Dialogflow CX
    user_query = request_json.get('sessionInfo', {}).get('parameters', {}).get('user_query')

    # Models
    llm = VertexAI(
        model_name="text-bison",
        temperature=0.1  # Test
    )

    chat = ChatVertexAI(
        model_name="chat-bison@001",
        temperature=0.4,
        top_p=0.8,
        top_k=40,
        max_output_tokens=500
    )

    embeddings = VertexAIEmbeddings(model_name="textembedding-gecko-multilingual@001")

    me_faqs = MatchingEngine.from_components(
        project_id=PROJECT_ID_ME,
        region=ME_REGION,
        gcs_bucket_name=ME_BUCKET_FAQ,
        embedding=embeddings,
        index_id=ME_INDEX_ID_FAQ,
        endpoint_id=ME_INDEX_ENDPOINT_ID_FAQ,
    )

    me_retriever = me_faqs.as_retriever(
        search_type="similarity",
        search_kwargs={
            "k": 2,
        },
    )

    faq_prompt = PromptTemplate(
        template=template_faq,
        input_variables=["context", "question", "few_shot_faq"]
    )

    chain = (
        RunnablePassthrough.assign(
            context=itemgetter("question") | me_retriever,
            question=itemgetter("question"),
            few_shot_faq=itemgetter("few_shot_faq"),
        )
        | faq_prompt
        | chat
        | StrOutputParser()
    )

    inputs = {"question": user_query, "few_shot_faq": few_shot_faq}
    faq_response = chain.invoke(inputs)
    print(f'LangChain response: {faq_response}')

    formatted_results = format_response(faq_response)
    response["fulfillment_response"]["messages"][0]["text"]["text"][0] = formatted_results
    return (response, 200, headers)

def format_response(results):
    answer = results['answer']
    sources = results.get('sources', '')
    if sources != '':
        source_uri = sources
    else:
        source_documents = results.get('source_documents', '')
        if source_documents != '':
            source_uri = results['source_documents'][0].metadata['source']
        else:
            source_uri = 'Não encontrei uma fonte para essa pergunta.'
    formatted_response = f"{answer}\nSources: {source_uri}"
    return formatted_response
```
### Suggestion:
_No response_ | TypeError: _ChatSessionBase.send_message() got an unexpected keyword argument 'candidate_count' | https://api.github.com/repos/langchain-ai/langchain/issues/15228/comments | 1 | 2023-12-27T19:07:13Z | 2024-04-03T16:09:34Z | https://github.com/langchain-ai/langchain/issues/15228 | 2,057,694,401 | 15,228 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
```python
from langchain.document_loaders.parsers.pdf import PDFPlumberParser

def generate_embeddings(config: dict = None, urls=None, file_path=None, persist_directory=None):
    if file_path:
        parser = PDFPlumberParser()
        data = parser.load(file_path)
        processed_data = parser.process(data)
        print(processed_data, "processed_data is-----------------llllllllllllllllllllllllllllll")
```

Below is the error I'm getting:

```
data = parser.load(file_path)
AttributeError: 'PDFPlumberParser' object has no attribute 'load'
```
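For context: in LangChain, *blob parsers* like `PDFPlumberParser` expose `parse()`/`lazy_parse()` over a blob, while the corresponding *loader* (`PDFPlumberLoader`) owns the file path and exposes `.load()`, so the likely fix is to call `PDFPlumberLoader(file_path).load()` instead. The stdlib-only sketch below just illustrates that split with stub classes (all names here are illustrative, not the real LangChain API):

```python
import tempfile

# Toy parser: knows how to turn raw bytes into "documents", but has no
# notion of a file path -- hence no .load() method.
class StubParser:
    def parse(self, blob: bytes) -> list:
        return [blob.decode("utf-8", errors="replace")]

# Toy loader: owns the path and delegates the byte-level work to a parser.
class StubLoader:
    def __init__(self, file_path: str, parser: StubParser):
        self.file_path = file_path
        self.parser = parser

    def load(self) -> list:
        with open(self.file_path, "rb") as f:
            return self.parser.parse(f.read())

with tempfile.NamedTemporaryFile("wb", suffix=".txt", delete=False) as f:
    f.write(b"hello")
    path = f.name

docs = StubLoader(path, StubParser()).load()
print(docs)  # ['hello']
```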
### Suggestion:
_No response_ | Issue: issue with pdfplumber | https://api.github.com/repos/langchain-ai/langchain/issues/15227/comments | 7 | 2023-12-27T18:50:09Z | 2024-04-04T16:08:31Z | https://github.com/langchain-ai/langchain/issues/15227 | 2,057,681,667 | 15,227 |
[
"hwchase17",
"langchain"
] | ### Feature request
As per the documentation there's a package for Gemini support, but it only works with the Gemini API and doesn't work with Vertex AI.
https://python.langchain.com/docs/integrations/platforms/google
However, the Vertex AI docs do mention Gemini (for some reason, Gemini Ultra?), even though trying it with gemini-pro (Gemini Ultra is not out yet, unless the LangChain folks have connections at Google :) ) throws an error indicating that the model doesn't exist.
https://python.langchain.com/docs/integrations/llms/google_vertex_ai_palm#multimodality
`Unknown model publishers/google/models/gemini-pro-vision; {'gs://google-cloud-aiplatform/schema/predict/instance/chat_generation_1.0.0.yaml': <class 'vertexai.language_models.ChatModel'>} (type=value_error)`
### Motivation
Gemini has been out for a while and should be supported by LangChain, since they already made a whole package for it.
### Your contribution
I would be willing to make a PR, but I'm not even sure what the issue is, since the docs suggest it should already be supported.
[
"hwchase17",
"langchain"
] | ### Feature request
I need a mechanism to allow more control over the ANN search performed for a given RAG chain. Consider the initial example:
```
retriever = vectorstore.as_retriever()
template = """You're a helpful assistant who is great at code generation. Don't give me any explanation or summary. I'll give you some examples that may or may not be relevant, and I want you to use the examples to write code that solves the provided problem. Return only the code that solves the problem.
PROBLEM:
{problem}
EXAMPLES:
{context}
ANSWER:
"""
prompt = ChatPromptTemplate.from_template(template)
model = ChatOpenAI()
chain = (
{"context": retriever, "problem": RunnablePassthrough()}
| prompt
| model
| StrOutputParser()
)
chain.invoke("Generate an example Python method that uses LCEL to write a CQL query")
```
This approach assumes that the question will be used for creating the embedding. However, consider something like this:
```
retriever = vectorstore.as_retriever(ann_query="LCEL, AstraDB, CQL")
```
In this situation, when the retriever is invoked to embed the query, instead of performing vector search on the embedding of the very wordy
> "Generate an example Python method that uses LCEL to write a CQL query"
I want vector search to perform ANN on:
> "LCEL, AstraDB, CQL"
so that I have a greater likelihood of having the right docs stuffed into the prompt for the LLM to solve the problem, which was:
> Generate an example Python method that uses LCEL to write a CQL query
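Until something like `ann_query` exists, the decoupling can be approximated by giving the chain two input keys and routing only the terse one to the retriever. The stdlib-only sketch below uses plain functions and a stub retriever instead of real LangChain runnables (`stub_retriever` and the `search_query` key are illustrative assumptions):

```python
# Stand-in for a vector-store retriever: returns canned docs per query.
def stub_retriever(query: str) -> list:
    corpus = {
        "LCEL, AstraDB, CQL": ["doc: LCEL pipe syntax", "doc: CQL SELECT examples"],
    }
    return corpus.get(query, ["doc: nothing relevant"])

def build_prompt_inputs(inputs: dict) -> dict:
    # ANN search runs on the terse `search_query`; the wordy `problem`
    # is what the LLM ultimately sees in the prompt.
    return {
        "context": stub_retriever(inputs["search_query"]),
        "problem": inputs["problem"],
    }

out = build_prompt_inputs({
    "problem": "Generate an example Python method that uses LCEL to write a CQL query",
    "search_query": "LCEL, AstraDB, CQL",
})
print(out["context"])  # ['doc: LCEL pipe syntax', 'doc: CQL SELECT examples']
```

In real LCEL this corresponds to starting the chain with `{"context": itemgetter("search_query") | retriever, "problem": itemgetter("problem")}` and invoking it with both keys.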
### Motivation
RAG results can be poor when the human input is very wordy or contains more info (for the LLM) than we want the vector store to search for. We need a mechanism to allow separation between the vector search query and the LLM query.
### Your contribution
I will create a PR. | Enable manual override of vector search query for controlled RAG | https://api.github.com/repos/langchain-ai/langchain/issues/15221/comments | 1 | 2023-12-27T16:54:54Z | 2024-04-03T16:09:24Z | https://github.com/langchain-ai/langchain/issues/15221 | 2,057,589,837 | 15,221 |
[
"hwchase17",
"langchain"
] | ### System Info
Langchain version: 0.0.341
OpenAI version: 1.3.5
Model: gpt-4-1106-preview
Python version:3.10.13
Platform: Celery worker in Docker Container
### Who can help?
@eyurtsev @hwchase17 @ag
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am working on implementing LangChain Agents in my Python project, which I run using Docker Compose. In my project I am using Celery and have multiple worker services that execute tasks from the queue. The entire setup is working fine and all Celery tasks are executed as expected.
One of these workers is the agent worker, where I have configured a LangChain Agent. I have created a function where I load tools, initialize the agent, and pass the agent input. Here's the full code of my **agent module**:
| Langchain agent not executing properly in Celery worker running as Docker container | https://api.github.com/repos/langchain-ai/langchain/issues/15220/comments | 9 | 2023-12-27T16:42:21Z | 2024-03-14T14:26:45Z | https://github.com/langchain-ai/langchain/issues/15220 | 2,057,579,438 | 15,220 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
As the title says: I haven't found anything about this in the docs. @dosu-bot, please help.
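For what it's worth, the usual recipe at the time of writing is to subclass `SimpleChatModel` (from `langchain.chat_models.base`) and implement `_call` plus the `_llm_type` property (or subclass `BaseChatModel` and implement `_generate` for full control). Since that API shifts between releases, the stdlib-only mock below only imitates the *shape* of such a subclass; every class in it is a stub, not the real LangChain base:

```python
from abc import ABC, abstractmethod

# Stub imitating the shape of LangChain's simple chat-model base class.
class StubSimpleChatModel(ABC):
    @property
    @abstractmethod
    def _llm_type(self) -> str:
        """Identifier used for logging/serialization."""

    @abstractmethod
    def _call(self, messages: list) -> str:
        """Turn a list of messages into a single reply string."""

    def invoke(self, messages: list) -> str:
        return self._call(messages)

# "Custom chat model": echoes the last message back in upper case.
class EchoChatModel(StubSimpleChatModel):
    @property
    def _llm_type(self) -> str:
        return "echo-chat"

    def _call(self, messages: list) -> str:
        return messages[-1].upper()

print(EchoChatModel().invoke(["hi there", "how are you?"]))  # HOW ARE YOU?
```

With the real base class, `_call` receives `BaseMessage` objects (plus `stop` and a run manager) rather than plain strings.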
### Idea or request for content:
Having a proper document on this would help.
[
"hwchase17",
"langchain"
] | ### System Info

I am planning to add a new param like "affection".
How could I set the query data body to fill up the params here? (This is a LangServe setup.)

### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
How do I handle the input variables in LCEL mode?
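The general rule in LCEL is that every placeholder in the prompt template becomes a key of the dict passed to `chain.invoke({...})`, and over LangServe the same dict goes in the JSON body under `"input"`. Here is a stdlib-only sketch of that mapping (the `affection` name is taken from the screenshot and is an assumption):

```python
# Toy stand-in for a prompt template with an extra custom variable.
template = "You are a {affection} assistant. Question: {question}"

def format_prompt(inputs: dict) -> str:
    # Mirrors what PromptTemplate does: each placeholder must be a key
    # of the invoke dict (or of the LangServe request body's "input").
    return template.format(**inputs)

print(format_prompt({"affection": "warm and caring", "question": "hello?"}))
```

So a LangServe request body would look like `{"input": {"affection": "warm and caring", "question": "hello?"}}`.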
### Expected behavior
The value I pass in the request should be mapped to the new variable I added to the prompt.
[
"hwchase17",
"langchain"
] | ### System Info
MacOS, M1 Pro
### Who can help?
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run the following code:
```
import os

from dotenv import load_dotenv
from langchain.chains import ConversationalRetrievalChain
from langchain.prompts.chat import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    SystemMessagePromptTemplate,
)
from langchain.chat_models import ChatOllama
from langchain.vectorstores import FAISS
from langchain.document_loaders import PyPDFLoader
from langchain.embeddings.openai import OpenAIEmbeddings

load_dotenv()

messages = [
    SystemMessagePromptTemplate.from_template(
        "You are a truthful, accurate AI agent that responds to the user's questions, given an AI paper by Apple."
    ),
    HumanMessagePromptTemplate.from_template("What is the paper about, in summary?"),
]
qa_prompt = ChatPromptTemplate.from_messages(messages)

chat_model = ChatOllama(
    model="mistral",
)

loader = PyPDFLoader("./llm_in_a_flash_apple.pdf")
pages = loader.load_and_split()
embeddings = OpenAIEmbeddings(api_key=os.getenv("OPENAI_API_KEY"))
print(pages[0])

db = None
if not os.path.exists("./faiss_index"):
    db = FAISS.from_documents(pages, embeddings)
    db.save_local("./faiss_index")
else:
    db = FAISS.load_local("faiss_index", embeddings)

query = "What is the paper about?"
docs = db.similarity_search_with_score(query)
print(docs[0])

ConversationalRetrievalChain.from_llm(
    llm=chat_model,
    retriever=db.as_retriever(search_type="similarity", search_kwargs={"k": 0.8}),
    verbose=True,
    combine_docs_chain_kwargs={"prompt": qa_prompt},
    return_source_documents=True,
)
```
additional files here: https://github.com/polooner/chatpdf/blob/main/main.py
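For context, this error usually indicates that the prompt handed to the combine-docs chain declares no `context` input variable, so the chain has nowhere to put the retrieved documents. Below is a plain-Python sketch (ordinary `str.format`, not the LangChain API) of the shape the prompt templates need; the wording is illustrative.

```python
# Stand-in prompt templates: the chain stuffs retrieved documents into a
# variable named "context", so the prompt must declare both {context} and
# {question}. Plain str.format here; the wording is illustrative.
system_template = (
    "You are a truthful, accurate AI agent answering questions about an AI paper by Apple.\n"
    "Use only the following retrieved context:\n{context}"
)
human_template = "{question}"

filled = (
    system_template.format(context="doc text here")
    + "\n"
    + human_template.format(question="What is the paper about?")
)
print(filled)
```

In the snippet above, the templates contain no `{context}` or `{question}` placeholders at all, which matches the reported value error. Separately, `search_kwargs={"k": 0.8}` passes a float where `k` is a document count, which may be worth double-checking.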
### Expected behavior
An answer from the Chat Model | Error using ConversationalRetrievalChain.from_llm: "document_variable_name context was not found in llm_chain input_variables: [] (type=value_error)" | https://api.github.com/repos/langchain-ai/langchain/issues/15210/comments | 1 | 2023-12-27T11:46:08Z | 2024-04-03T16:09:09Z | https://github.com/langchain-ai/langchain/issues/15210 | 2,057,277,681 | 15,210 |
[
"hwchase17",
"langchain"
] | ### System Info
Baichuan Chat (with both the Baichuan-Turbo and Baichuan-Turbo-192K models) has updated its APIs. There are breaking changes. For example, BAICHUAN_SECRET_KEY is removed in the latest API but is still required in LangChain. Baichuan's LangChain integration needs to be updated to the latest version.
Also, we have released our new Baichuan-Turbo-192K API. We are adding support for this.
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
https://python.langchain.com/docs/integrations/chat/baichuan
SECRET_KEY has been deprecated.
### Expected behavior
Baichuan Chat works normally. | Fix Baichuan's integration and introduce Baichuan-Turbo-192K API. | https://api.github.com/repos/langchain-ai/langchain/issues/15206/comments | 1 | 2023-12-27T10:21:21Z | 2024-04-03T16:09:04Z | https://github.com/langchain-ai/langchain/issues/15206 | 2,057,190,266 | 15,206 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I am using ConversationalRetrievalChain
with a callback handler for streaming responses back.
> qa_chain =ConversationalRetrievalChain.from_llm(
llm=chat,
retriever=MyVectorStoreRetriever(
vectorstore=vectordb,
search_type="similarity_score_threshold",
search_kwargs={"score_threshold": SIMILARITY_THRESHOLD, "k": 1},
),
return_source_documents=True,
rephrase_question=False,
return_generated_question=False,
)
> response = qa_chain(
{
"question": user_input,
"chat_history": chat_history,
},
callbacks=[stream_handler],
)
```
from typing import Any

from langchain.callbacks.base import BaseCallbackHandler


class StreamHandler(BaseCallbackHandler):
    def __init__(self):
        self.text = ""

    def on_llm_new_token(self, token: str, **kwargs: Any):
        # Initialize old_text
        old_text = self.text
        print("old text ", old_text)
        # Check if the token is not part of the prompts before adding it to the queue
        print("token is", token)
        if token is not None and token != "":
            self.text += token
            # Calculate the new content since the last emission
            new_content = self.text[len(old_text):]
            socketio.emit("update_response", {"response": new_content})
```
I have set `rephrase_question` and `return_generated_question` to False.
Even so, the streaming response contains the rephrased question.
But the final response from the LLM does not contain this rephrased question.
What could be the reason? Please suggest an appropriate solution.
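For context: `ConversationalRetrievalChain` makes two LLM calls, one to condense the question plus chat history into a standalone question, and one to answer it. Callbacks passed at call time attach to both calls, which is why the rephrased question shows up in the stream. A common workaround (hedged, not verified against this exact setup) is to attach the streaming handler only to the answering LLM and pass a callback-free model as `condense_question_llm`. A minimal stand-in without LangChain shows the principle:

```python
# Minimal stand-in (no LangChain): two model objects; only the answering one
# carries the streaming handler, so the rephrase step emits no tokens.
class Handler:
    def __init__(self):
        self.streamed = []

    def on_llm_new_token(self, token):
        self.streamed.append(token)


class FakeLLM:
    def __init__(self, handler=None):
        self.handler = handler

    def generate(self, tokens):
        for tok in tokens:
            if self.handler is not None:
                self.handler.on_llm_new_token(tok)
        return "".join(tokens)


handler = Handler()
condense_llm = FakeLLM()               # rephrases the question, silently
answer_llm = FakeLLM(handler=handler)  # streams the real answer

condense_llm.generate(["standalone", " question"])
answer = answer_llm.generate(["final", " answer"])
print("".join(handler.streamed))
```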
### Suggestion:
_No response_ | Issue: Streaming Response contains the rephrased question in ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/15205/comments | 3 | 2023-12-27T10:20:47Z | 2024-04-03T16:08:59Z | https://github.com/langchain-ai/langchain/issues/15205 | 2,057,189,374 | 15,205 |
[
"hwchase17",
"langchain"
] | ### System Info
OS: MacOS Sonoma
Python: 3.11.6
LangChain: 0.0.352
llama-cpp-python = 0.2.25
pydantic: 1.10.13 (I know that it is not the latest version, but version 1 is still officially supported)
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When trying to use LlamaCpp in conjunction with grammar, I get an error from pydantic. The following code snippet was adapted from the [docs](https://python.langchain.com/docs/integrations/llms/llamacpp#grammars) so that a `LlamaGrammar` is passed, instead of the path to the grammar file.
```python
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain_community.llms.llamacpp import LlamaCpp
from llama_cpp.llama_grammar import LlamaGrammar
from pydantic import BaseModel
class SomeSchema(BaseModel):
    some_field: str

LlamaCpp(
    model_path="some model",
    callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
    verbose=True,
    grammar=LlamaGrammar.from_json_schema(SomeSchema.schema_json()),
)
# Fails with:
# pydantic.errors.ConfigError: field "grammar" not yet prepared so type is still a ForwardRef, you might need to call LlamaCpp.update_forward_refs().
```
The following works though (and the grammar object is used properly):
```python
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain_community.llms.llamacpp import LlamaCpp
from llama_cpp.llama_grammar import LlamaGrammar
from pydantic import BaseModel
class SomeSchema(BaseModel):
    some_field: str

model = LlamaCpp(
    model_path="some model",
    callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
    verbose=True,
)
model.grammar = LlamaGrammar.from_json_schema(SomeSchema.schema_json())
```
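As the error message itself hints, the likely fix is for `LlamaCpp.update_forward_refs()` to be called once `LlamaGrammar` is importable (I have not verified this on the library side). For reference, a library-free illustration of what a forward reference is and what the resolution step does; the `Node` class below is purely illustrative:

```python
from typing import get_type_hints


class Node:
    # "Node" is a forward reference: at class-creation time it is only a string
    next: "Node"


# Resolving the string into the real class is what update_forward_refs()
# does for a pydantic model's fields.
hints = get_type_hints(Node)
print(hints["next"] is Node)
```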
### Expected behavior
It should be possible to pass a `LlamaGrammar` object in the `__init__` of `LlamaCpp`, as per its [definition](https://github.com/langchain-ai/langchain/blob/f36ef0739dbb548cabdb4453e6819fc3d826414f/libs/community/langchain_community/llms/llamacpp.py#L129)
I had a quick look at the pydantic [documentation regarding this problem](https://docs.pydantic.dev/1.10/usage/postponed_annotations/), but I couldn't find the postponed annotation in question. | Pydantic forward ref issue when creating using LlamaCpp with grammar | https://api.github.com/repos/langchain-ai/langchain/issues/15204/comments | 1 | 2023-12-27T10:11:11Z | 2024-04-03T16:08:54Z | https://github.com/langchain-ai/langchain/issues/15204 | 2,057,179,711 | 15,204 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/tools/render.py
https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/tools/convert_to_openai.py
For backward compatibility purposes, should we proceed with a direct import?
### Suggestion:
_No response_ | Issue: Identical Content in Two Files | https://api.github.com/repos/langchain-ai/langchain/issues/15203/comments | 1 | 2023-12-27T09:49:59Z | 2024-04-03T16:08:49Z | https://github.com/langchain-ai/langchain/issues/15203 | 2,057,154,943 | 15,203 |
[
"hwchase17",
"langchain"
] | ### System Info
langchian=0.0.352
qianfan=0.2.4
When I tried the usage of agent in this [video](https://learn.deeplearning.ai/langchain/lesson/7/agents), I changed the model in it from ChatGpt-3.5-turbo to ERNIE-Bot, and the output of agent showed the following error:
```bash
> Entering new AgentExecutor chain...
Could not parse LLM output: xxxxxxxxx
Observation: Invalid or incomplete response
Thought: Could not parse LLM output: xxxxx
Observation: Invalid or incomplete response
...
```
Also, ERNIE-Bot can't call the (llm-math) tool correctly.
I wonder if the problem is a lack of capability in the Qianfan model itself, or if there is a problem in the Qianfan code.
Or is there something wrong with my usage, or some other issue?
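For what it's worth, the parse failures happen when the model's reply does not contain the ReAct action block the agent's output parser expects. Below is a library-agnostic sketch of the kind of tolerant parsing a custom output parser could attempt; the patterns are illustrative, not LangChain's actual parser:

```python
import re

def parse_react(text: str):
    # Try a JSON-ish action block first, then a final answer; return None if
    # neither pattern is present (the "Could not parse LLM output" case).
    m = re.search(r'"action"\s*:\s*"([^"]+)"\s*,\s*"action_input"\s*:\s*"([^"]+)"', text)
    if m:
        return {"action": m.group(1), "action_input": m.group(2)}
    m = re.search(r"Final Answer:\s*(.+)", text, re.S)
    if m:
        return {"final_answer": m.group(1).strip()}
    return None

print(parse_react('{"action": "Calculator", "action_input": "300*0.25"}'))
```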
### Who can help?
@danielhjz
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
**my code**
```python
llm = QianfanChatEndpoint(
    temperature=0.000001,
    model='ERNIE-Bot'
)
tools = load_tools(
    ["llm-math", "wikipedia"],
    llm=llm
)
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    handle_parsing_errors=True,
    verbose=True
)
agent("300的1/4是多少?")  # "What is 1/4 of 300?"
```
**code in the video**
```python
# code in the video
llm = ChatOpenAI(
    temperature=0
)
tools = load_tools(
    ["llm-math", "wikipedia"],
    llm=llm
)
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    handle_parsing_errors=True,
    verbose=True
)
agent("What is the 25% of 300?")
```
### Expected behavior
**Run by ChatOpenAI(temperature=0)**
````bash
> Entering new AgentExecutor chain...
Thought: We need to calculate 25% of 300, which means we need to multiply 300 by 0.25.
Action:
```
{
"action": "Calculator",
"action_input": "300*0.25"
}
```
Observation: Answer: 75.0
Thought:The calculator tool returned the answer 75.0, which is correct.
Final Answer: 25% of 300 is 75.0.
> Finished chain.
{'input': 'What is the 25% of 300?', 'output': '25% of 300 is 75.0.'}
````
| "Could not parse LLM output" when using QianfanChatEndpoint in agent. | https://api.github.com/repos/langchain-ai/langchain/issues/15199/comments | 2 | 2023-12-27T08:49:02Z | 2024-04-04T16:08:26Z | https://github.com/langchain-ai/langchain/issues/15199 | 2,057,093,818 | 15,199 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
```python
conversational_qa_chain = (
    _inputs
    | _context
    | ConfigurableTokenLimitProcessor(model="gpt_35_turbo").configurable_fields(
        model=ConfigurableFieldSingleOption(
            id="model",
            name="model",
            options={
                "gpt_35_turbo": "gpt_35_turbo",
                "gpt_35_turbo_1106": "gpt_35_turbo_1106",
                "gpt_4_1106_preview": "gpt_4_1106_preview",
                "gpt_4_32k": "gpt_4_32k",
            },
            default="gpt_35_turbo",
        )
    )
    | ANSWER_PROMPT
    | llm
    | StrOutputParser()
)
```
```python
chain = conversational_qa_chain.with_types(input_type=ChatHistory).with_fallbacks([RunnableLambda(when_all_is_lost)])
```
```python
add_routes(
    app,
    chain,
    enable_feedback_endpoint=True,
    path="/test",
    config_keys=["llm", "collection_name", "model", "configurable"],
)
```
This chain is served with LangServe. When I send a request to `/test/stream` from the playground, the response is no longer streamed to the screen token by token as it was before adding `with_fallbacks`; instead, the whole response appears at once. What is the reason?
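One plausible explanation (hedged, based on how default streaming tends to work for wrapper runnables): if the fallback wrapper does not implement `stream()` itself, it inherits a default that calls `invoke()` and yields the entire result as a single chunk. A minimal stand-in, with illustrative class names:

```python
class Runnable:
    def invoke(self, x):
        raise NotImplementedError

    def stream(self, x):
        # default: no real streaming; one big chunk produced by invoke()
        yield self.invoke(x)


class TokenModel(Runnable):
    def invoke(self, x):
        return "all the tokens"

    def stream(self, x):
        # real token-by-token streaming
        for tok in self.invoke(x).split():
            yield tok


class WithFallbacks(Runnable):
    # wraps a runnable but does NOT override stream(),
    # so it inherits the one-chunk default above
    def __init__(self, primary):
        self.primary = primary

    def invoke(self, x):
        return self.primary.invoke(x)


chunks_direct = list(TokenModel().stream("q"))
chunks_wrapped = list(WithFallbacks(TokenModel()).stream("q"))
print(chunks_direct)
print(chunks_wrapped)
```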
### Suggestion:
Even if i add `with_fallbacks`, it should be streamed on the screen for each token. | Issue: lcel langserve with_fallbacks streaming | https://api.github.com/repos/langchain-ai/langchain/issues/15195/comments | 4 | 2023-12-27T04:53:43Z | 2024-05-22T16:07:52Z | https://github.com/langchain-ai/langchain/issues/15195 | 2,056,910,699 | 15,195 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I apologize for the naive question; it's not about an error or a bug.
I'm trying to implement routing by following the guide here: https://python.langchain.com/docs/modules/chains/foundational/router
However, I can't figure out how to use RAG.
I tried changing the last code in the guide like this.
```python
final_chain = (
    RunnablePassthrough.assign(topic=itemgetter("input") | classifier_chain)
    | prompt_branch
    | ChatOpenAI()
    | StrOutputParser()
)
```
```python
final_chain = (
    {
        "context": retriever,
        "topic": itemgetter("input") | classifier_chain,
    }
    | prompt_branch
    | llm
    | StrOutputParser()
)
```
But I get the following error:
```shell
File "/Users/user/Library/Python/3.9/lib/python/site-packages/tiktoken/core.py", line 116, in encode
if match := _special_token_regex(disallowed_special).search(text):
TypeError: expected string or buffer
```
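A likely cause of the `TypeError` (hedged): in the second snippet, `"context": retriever` hands the entire input dict to the retriever, whose embedding tokenizer expects a string. Selecting the field first, e.g. `"context": itemgetter("input") | retriever`, is the usual pattern. A stand-in without LangChain:

```python
from operator import itemgetter

def fake_retriever(query):
    # stand-in: a real retriever embeds `query`, which must be a string
    if not isinstance(query, str):
        raise TypeError("expected string or buffer")
    return [f"doc about {query}"]

inputs = {"input": "What is routing?"}

# failing shape: the whole dict reaches the retriever
try:
    fake_retriever(inputs)
except TypeError as e:
    error = str(e)

# working shape: select the string field first, then retrieve
context = fake_retriever(itemgetter("input")(inputs))
print(error, context)
```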
### Suggestion:
_No response_ | Issue: <Please tell me how to combine Routing and RAG in a chain.> | https://api.github.com/repos/langchain-ai/langchain/issues/15193/comments | 5 | 2023-12-27T04:29:43Z | 2024-04-16T16:20:16Z | https://github.com/langchain-ai/langchain/issues/15193 | 2,056,898,273 | 15,193 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Why can't CSVLoader load? Error:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[25], line 1
----> 1 from langchain.document_loaders.csv_loader import CSVLoader
3 loader = CSVLoader(file_path='./data/bugreport.csv', csv_args={
4 'delimiter': ',',
5 'quotechar': '"',
6 'fieldnames': ["URL","Resolved","Backport_of","Submitted","Status","CPU","Priority","Sub_Component","Updated","Fix_Versions","Affected_Version","OS","Type","Resolution","Component"]
7 })
9 data = loader.load()
File D:\miniconda\lib\site-packages\langchain\document_loaders\__init__.py:49
47 from langchain.document_loaders.bigquery import BigQueryLoader
48 from langchain.document_loaders.bilibili import BiliBiliLoader
---> 49 from langchain.document_loaders.blackboard import BlackboardLoader
50 from langchain.document_loaders.blob_loaders import (
51 Blob,
52 BlobLoader,
53 FileSystemBlobLoader,
54 YoutubeAudioLoader,
55 )
56 from langchain.document_loaders.blockchain import BlockchainDocumentLoader
File D:\miniconda\lib\site-packages\langchain\document_loaders\blackboard.py:1
----> 1 from langchain_community.document_loaders.blackboard import BlackboardLoader
3 __all__ = ["BlackboardLoader"]
File D:\miniconda\lib\site-packages\langchain_community\document_loaders\__init__.py:51
49 from langchain_community.document_loaders.bigquery import BigQueryLoader
50 from langchain_community.document_loaders.bilibili import BiliBiliLoader
---> 51 from langchain_community.document_loaders.blackboard import BlackboardLoader
52 from langchain_community.document_loaders.blob_loaders import (
53 Blob,
54 BlobLoader,
55 FileSystemBlobLoader,
56 YoutubeAudioLoader,
57 )
58 from langchain_community.document_loaders.blockchain import BlockchainDocumentLoader
File D:\miniconda\lib\site-packages\langchain_community\document_loaders\blackboard.py:10
7 from langchain_core.documents import Document
9 from langchain_community.document_loaders.directory import DirectoryLoader
---> 10 from langchain_community.document_loaders.pdf import PyPDFLoader
11 from langchain_community.document_loaders.web_base import WebBaseLoader
14 class BlackboardLoader(WebBaseLoader):
File D:\miniconda\lib\site-packages\langchain_community\document_loaders\pdf.py:18
16 from langchain_community.document_loaders.base import BaseLoader
17 from langchain_community.document_loaders.blob_loaders import Blob
---> 18 from langchain_community.document_loaders.parsers.pdf import (
19 AmazonTextractPDFParser,
20 DocumentIntelligenceParser,
21 PDFMinerParser,
22 PDFPlumberParser,
23 PyMuPDFParser,
24 PyPDFium2Parser,
25 PyPDFParser,
26 )
27 from langchain_community.document_loaders.unstructured import UnstructuredFileLoader
29 logger = logging.getLogger(__file__)
File D:\miniconda\lib\site-packages\langchain_community\document_loaders\parsers\__init__.py:5
3 from langchain_community.document_loaders.parsers.grobid import GrobidParser
4 from langchain_community.document_loaders.parsers.html import BS4HTMLParser
----> 5 from langchain_community.document_loaders.parsers.language import LanguageParser
6 from langchain_community.document_loaders.parsers.pdf import (
7 PDFMinerParser,
8 PDFPlumberParser,
(...)
11 PyPDFParser,
12 )
14 __all__ = [
15 "BS4HTMLParser",
16 "DocAIParser",
(...)
24 "PyPDFParser",
25 ]
File D:\miniconda\lib\site-packages\langchain_community\document_loaders\parsers\language\__init__.py:1
----> 1 from langchain_community.document_loaders.parsers.language.language_parser import (
2 LanguageParser,
3 )
5 __all__ = ["LanguageParser"]
File D:\miniconda\lib\site-packages\langchain_community\document_loaders\parsers\language\language_parser.py:24
18 try:
19 from langchain.text_splitter import Language
21 LANGUAGE_EXTENSIONS: Dict[str, str] = {
22 "py": Language.PYTHON,
23 "js": Language.JS,
---> 24 "cobol": Language.COBOL,
25 }
27 LANGUAGE_SEGMENTERS: Dict[str, Any] = {
28 Language.PYTHON: PythonSegmenter,
29 Language.JS: JavaScriptSegmenter,
30 Language.COBOL: CobolSegmenter,
31 }
32 except ImportError:
File D:\miniconda\lib\enum.py:437, in EnumMeta.__getattr__(cls, name)
435 return cls._member_map_[name]
436 except KeyError:
--> 437 raise AttributeError(name) from None
AttributeError: COBOL
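The traceback suggests a version mismatch (hedged): the installed `langchain`'s `text_splitter.Language` enum predates the `COBOL` member that the newer `langchain_community` code references, so upgrading `langchain` and `langchain-community` together is the likely fix. The failure mechanism in plain Python, with a stand-in enum:

```python
from enum import Enum

class Language(str, Enum):
    # stand-in for an older langchain.text_splitter.Language without COBOL
    PYTHON = "python"
    JS = "js"

try:
    Language.COBOL
except AttributeError as e:
    failure = repr(e)
print(failure)
```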
### Suggestion:
_No response_ | Issue: <CSVLoader can't load> | https://api.github.com/repos/langchain-ai/langchain/issues/15192/comments | 9 | 2023-12-27T03:54:38Z | 2024-03-01T05:21:04Z | https://github.com/langchain-ai/langchain/issues/15192 | 2,056,881,303 | 15,192 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I'm going to make a chain through lcel and try to process an error through `with_fallbacks` at the end, but unlike before I put `with_fallbacks`, streaming is not possible and all responses go down at once. Can i process streaming using `with_fallbacks`?
### Suggestion:
lcel `with_fallbacks` streaming | Issue: lcel `with_fallbacks` streaming | https://api.github.com/repos/langchain-ai/langchain/issues/15191/comments | 1 | 2023-12-27T03:40:56Z | 2023-12-27T04:53:55Z | https://github.com/langchain-ai/langchain/issues/15191 | 2,056,875,193 | 15,191 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
If the rate limit of the api key is exceeded when developing a chain through lcel, I want to dynamically change another api key and retry to give the client a normal response, is there a way?
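A library-agnostic sketch of one way to do this: wrap the call, catch the rate-limit error, and retry with the next key. All names below are illustrative; in LCEL the same idea can presumably be expressed with `.with_fallbacks()` over chains configured with different keys.

```python
class RateLimitError(Exception):
    pass

def call_with_key_rotation(call, api_keys):
    # Try each key in order; re-raise the last rate-limit error if all fail.
    last_exc = None
    for key in api_keys:
        try:
            return call(key)
        except RateLimitError as exc:
            last_exc = exc
    raise last_exc

def fake_llm_call(key):
    # stand-in for a model invocation bound to a specific API key
    if key == "exhausted-key":
        raise RateLimitError("rate limit exceeded")
    return f"answer via {key}"

result = call_with_key_rotation(fake_llm_call, ["exhausted-key", "fresh-key"])
print(result)
```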
### Suggestion:
Dynamically catch an error in lcel, change the api key, and try again | Issue: openai api key rate limit error handing | https://api.github.com/repos/langchain-ai/langchain/issues/15190/comments | 2 | 2023-12-27T03:37:31Z | 2024-04-03T16:08:39Z | https://github.com/langchain-ai/langchain/issues/15190 | 2,056,873,561 | 15,190 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
I want to contribute to one of the libs and started a fork. Here are the steps I took:
I am trying to add a new feature but first need to experiment with it. I am unsure on how to get started writing some short scripts to use the libs.
1. I went into ```libs/experimental```, ```libs/core```, ```libs/community``` ```libs/langchain``` and ran ```poetry install``` in all of them.
2. I start an environment from ```libs/langchain``` with ```poetry shell```
3. I created a file inside of it, made some short code:
```python
from langchain.document_loaders import PyPDFLoader
from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma

# load the document and split it into chunks
loader = PyPDFLoader("./llm_in_a_flash_apple.pdf")
documents = loader.load_and_split()

# create the open-source embedding function
embedding_function = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")

# load it into Chroma
db = Chroma.from_documents(documents, embedding_function)

# query it
query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)

# print results
print(docs[0].page_content)
```
and got the following error:
```
Traceback (most recent call last):
File "/Users/polo/langchain/libs/langchain/test.py", line 9, in <module>
loader = PyPDFLoader("./llm_in_a_flash_apple.pdf")
File "/Users/polo/langchain/libs/community/langchain_community/document_loaders/pdf.py", line 154, in __init__
raise ImportError(
ImportError: pypdf package not found, please install it with `pip install pypdf`
```
This is my first time in a Python project like this. I am unsure how to get started using all the different packages while in a fork of the repository. If anyone can guide me I would love to make a PR on this, it is quite daunting for beginners to get around and start contributing!
### Idea or request for content:
_No response_ | DOC: How to write my own short scripts within a fork to test some code? | https://api.github.com/repos/langchain-ai/langchain/issues/15177/comments | 2 | 2023-12-26T18:42:18Z | 2024-05-04T08:50:34Z | https://github.com/langchain-ai/langchain/issues/15177 | 2,056,625,948 | 15,177 |
[
"hwchase17",
"langchain"
] | ### System Info
OS: Windows
Python: 3.9.10
Langchain version: 0.0.352
openai version: 1.6.1
### Who can help?
@BeautyyuYanli
@baskaryan
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.vectorstores.pgvecto_rs import PGVecto_rs
from langchain.embeddings import AzureOpenAIEmbeddings
from dotenv import dotenv_values
import os
```
```
config = dotenv_values(".env")
# os.environ["TIKTOKEN_CACHE_DIR"] = "./cache/tiktoken/"
embeddings = AzureOpenAIEmbeddings(
max_retries=3,
timeout=60,
api_key=config["api_key"],
model="text-embedding-ada-002",
openai_api_type=config["api_type"],
azure_endpoint=config["api_base"]
)
URL = "postgresql+psycopg://{username}:{password}@{host}:{port}/{db_name}".format(
port=config["db_port"],
host=config["db_host"],
username=config["db_user"],
password=config["db_pass"],
db_name=config["db_name"],
)
db = PGVecto_rs(
embedding=embeddings,
db_url=URL,
dimension=1536, # text-embedding-ada-002
collection_name="test",
)
```
```
docs = ["a text about mathematics", "a text about physics"]
meta = [{"id": "1"}, {"id": "2"}]
db.add_texts(
texts=docs,
metadatas=meta
)
retr = db.as_retriever(
search_kwargs = {
"k": 1,
"filter": {"id": "1"}
}
)
```
```
retr.invoke("physics")
```
>[Document(page_content='a text about physics', metadata={'id': '2'})]
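Until the store-side filter is honored, a client-side post-filter over the returned documents can serve as a stopgap (over-fetch with a larger `k` first). A plain-Python sketch with illustrative data shapes:

```python
def post_filter(docs, flt):
    # docs: (page_content, metadata) pairs; keep only exact metadata matches
    return [d for d in docs if all(d[1].get(k) == v for k, v in flt.items())]

docs = [("a text about mathematics", {"id": "1"}),
        ("a text about physics", {"id": "2"})]
print(post_filter(docs, {"id": "1"}))
```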
### Expected behavior
The search should only be performed on documents where the `metadata` field contains `{"id": "1"}`.
In this case, adding a filter makes no difference to the retrieval. | pgvecto.rs: retriever filter not working | https://api.github.com/repos/langchain-ai/langchain/issues/15173/comments | 2 | 2023-12-26T14:35:49Z | 2024-01-15T19:42:01Z | https://github.com/langchain-ai/langchain/issues/15173 | 2,056,466,694 | 15,173 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
````python
def generate_custom_prompt(new_project_qa, query, name, not_uuid):
    check = query.lower()
    result = new_project_qa(query)
    relevant_document = result['source_documents']
    context_text = "\n\n---\n\n".join([doc.page_content for doc in relevant_document])
    # print(context_text,"context_text")
    user_experience_inst = UserExperience.objects.get(not_uuid=not_uuid)
    greetings = ['hi', 'hello', 'hey', 'hui', 'hiiii', 'hii', 'hiii', 'heyyy']
    if check in greetings:
        custom_prompt_template = f"""
        Just simply reply with "Hello {name}! How can I assist you today?"
        """
    elif check not in greetings and user_experience_inst.custom_prompt:
        custom_prompt_template = f"""Answer the question based only on following context: ```{context_text} ```
        You are a chatbot designed to provide answers to User's Questions:```{check}```, delimited by triple backticks.
        Generate your answer to match the user's requirements: {user_experience_inst.custom_prompt}
        If you encounter a question for which you don't know the answer, please respond with 'I don't know' and refrain from making up an answer.
        - Before saying 'I don't know,' please re-verify your vector store to ensure the answer is not present in the database.
        Remember, your goal is to assist the user in the best way possible. If the question is unclear or ambiguous, feel free to ask for clarification.
        User's Question: ```{check}```
        AI Answer:"""
    else:
        # Create the custom prompt template
        custom_prompt_template = f"""Generate your response exclusively from the provided context: {{context_text}}. You function as a chatbot specializing in delivering detailed answers to the User's Question: ```{{check}} ```, enclosed within triple backticks.
        Generate your answer in points in the following format:
        1. Point no 1
        1.1 Its subpoint in details
        1.2 More information if needed.
        2. Point no 2
        2.1 Its subpoint in details
        2.2 More information if needed.
        …
        N. Another main point.
        If you encounter a question for which you don't know the answer based on the predefined points, please respond with 'I don't know' and refrain from making up an answer.
        However, if the answer is not present in the predefined points, then Provide comprehensive information related to the user's query.
        Remember, your goal is to assist the user in the best way possible. If the question is unclear or ambiguous, you can ask for clarification.
        User's Question: ```{{check}} ```
        AI Answer:"""

    # Create the PromptTemplate
    custom_prompt = ChatPromptTemplate.from_template(custom_prompt_template)
    formatted_prompt = custom_prompt.format(context_text=context_text, check=check)
    return formatted_prompt


def retreival_qa_chain(chroma_db_path):
    embedding = OpenAIEmbeddings()
    vectordb = Chroma(persist_directory=chroma_db_path, embedding_function=embedding)
    llm = ChatOpenAI(temperature=0.1)
    retriever = vectordb.as_retriever(search_kwargs={"k": 2})
    qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=retriever, return_source_documents=True)
    return qa
````
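Regarding memory: the code above keeps no conversation state, since each call builds a prompt from scratch. What a memory component does, sketched without LangChain (in the real code this is roughly the role of passing a memory object such as `ConversationBufferMemory` to the chain):

```python
# Library-free sketch: memory stores prior turns and injects them into the
# next prompt, so the model can resolve follow-up questions.
class SimpleConversationMemory:
    def __init__(self):
        self.turns = []

    def save_context(self, user, ai):
        self.turns.append((user, ai))

    def as_history(self):
        return "\n".join(f"User: {u}\nAI: {a}" for u, a in self.turns)


memory = SimpleConversationMemory()
memory.save_context("What is RAG?", "Retrieval-augmented generation.")
prompt = f"{memory.as_history()}\nUser's Question: and how does it work?"
print(prompt)
```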
### Suggestion:
_No response_ | Issue: Explain Memory and How it's implemented in my Case. | https://api.github.com/repos/langchain-ai/langchain/issues/15170/comments | 4 | 2023-12-26T12:45:59Z | 2023-12-27T05:34:44Z | https://github.com/langchain-ai/langchain/issues/15170 | 2,056,381,701 | 15,170 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I would like to build RAG based on Mistral 7B model
The model is already hosted, and I provide llm_url in the custom LLM setup
I am able to make a request and get a response from the URL using the `llm._call` method, however something is wrong with the callbacks in `RetrievalQA.from_chain_type` method
It gives me below error
`'Mistral7B_LLM' object has no attribute 'callbacks'`
Am I missing anything in the below code
```
from pydantic import Extra
import requests
from typing import Any, List, Dict, Callable, Type, Mapping, Optional
from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.base import LLM, BaseLLM


class Mistral7B_LLM(LLM):
    def __init__(self):
        self.__post_init__()

    def __post_init__(self) -> None:
        def _import_mistral7B_llm() -> Any:
            from svcs.vector.src.controllers.llm.mistral7B_serving import Mistral7B_LLM
            return Mistral7B_LLM

        def __getattr__() -> Any:
            return Mistral7B_LLM()

        def get_type_to_cls_dict() -> Dict[str, Callable[[], Type[BaseLLM]]]:
            return {
                "Mistral7B_LLM": _import_mistral7B_llm,
            }

        __all__ = ["Mistral7B_LLM"]

    class Config:
        extra = Extra.forbid

    @property
    def _llm_type(self) -> str:
        return "Mistral7B_LLM"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        max_new_tokens: Optional[int] = 156,
        temperature: Optional[float] = 0.7,
        **kwargs: Any,
    ) -> str:
        if stop is not None:
            raise ValueError("stop kwargs are not permitted.")
        payload = {
            "inputs": [prompt],
            "max_new_tokens": max_new_tokens,
            "temperature": temperature,
        }
        headers = {"Content-Type": "application/json"}
        llm_url = 'my url'
        response = requests.post(llm_url, json=payload, headers=headers, verify=False)
        response.raise_for_status()
        # print("API Response:", response.json())
        answer = response.json()["outputs"].split("[/INST]")[-1]
        return answer

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        """Get the identifying parameters."""
        return {"llmUrl": self.llm_url}
```
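A plausible cause of `'Mistral7B_LLM' object has no attribute 'callbacks'` (hedged): `LLM` is a pydantic model, and the custom `__init__` above never calls `super().__init__()`, so the inherited fields, `callbacks` included, are never initialized. The mechanism in plain Python, with stand-in classes:

```python
class PydanticLikeBase:
    def __init__(self):
        # a pydantic BaseModel sets its declared fields here, callbacks included
        self.callbacks = None


class Broken(PydanticLikeBase):
    def __init__(self):
        pass  # skips super().__init__(): 'callbacks' never exists


class Fixed(PydanticLikeBase):
    def __init__(self):
        super().__init__()  # let the base initialize its fields first


print(hasattr(Broken(), "callbacks"), hasattr(Fixed(), "callbacks"))
```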
### Suggestion:
_No response_ | Issue: Custom Mistral based LLM from API for RetrievalQA chain | https://api.github.com/repos/langchain-ai/langchain/issues/15168/comments | 5 | 2023-12-26T11:56:09Z | 2024-06-26T12:00:33Z | https://github.com/langchain-ai/langchain/issues/15168 | 2,056,342,401 | 15,168 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Given a tool that generates a dataframe, how can I pass it through the chain?
```
llm_with_tools = llm.bind(functions=[format_tool_to_openai_function(t) for t in tools])

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", """
        You are a helpful assistant for marketing department.
        """),
        MessagesPlaceholder(variable_name="history"),
        ("user", """
        Provide the answer to the question with 3 sentences long.
        If the response is related to video-on-demand. Please make sure you return the content id to the answers
        Question: {input}
        """),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)

agent = (
    {
        "input": lambda x: x["input"],
        "dataframe": <<my_dataframe>>,
        "agent_scratchpad": lambda x: format_to_openai_function_messages(
            x["intermediate_steps"]
        ),
        "history": lambda x: x['history'],
    }
    | prompt
    | llm_with_tools
    | OpenAIFunctionsAgentOutputParser()
)

agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
```
Is it possible?
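It should be, in the sense sketched below: every key in that input-mapping dict is computed from the chain input, so a dataframe (e.g. serialized with `to_markdown()` or `to_string()`) can be injected as one more key and referenced from the prompt. A library-free stand-in with illustrative names:

```python
df_text = "item,clicks\nvod_1,42"   # stand-in for a serialized DataFrame

input_map = {
    "input": lambda x: x["input"],
    "dataframe": lambda x: df_text,  # constant injected alongside the input
}

def run(chain_input):
    # mimic the parallel input-mapping step, then the prompt formatting step
    mapped = {k: f(chain_input) for k, f in input_map.items()}
    return "Data:\n{dataframe}\nQuestion: {input}".format(**mapped)

print(run({"input": "Which content id performs best?"}))
```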
### Suggestion:
_No response_ | Issue: Pass additional data through AgentExecutor | https://api.github.com/repos/langchain-ai/langchain/issues/15165/comments | 3 | 2023-12-26T10:47:02Z | 2024-06-19T08:30:56Z | https://github.com/langchain-ai/langchain/issues/15165 | 2,056,290,653 | 15,165 |
[
"hwchase17",
"langchain"
] | ### System Info
python3.10
langchain 0.0.333
### Who can help?
@hwchase17 @agola11 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [x] Async
### Reproduction
1. I tried to use the asynchronous call chain=ConversationalRetrievalChain.from_llm combined with the local knowledge base to find the answer.
2. When chat_history is not passed in chain.acall({"question": query, "chat_history":[]}), it can correctly return the result of the streaming output.
3. When I pass in chat_history, the returned result is new_question. new_question is a processed question and is not the answer I want.
code:
```python
db = FAISS.load_local(COMIXGPT_VECTOR, embeddings)
retriever = db.as_retriever(search_type="similarity_score_threshold",
                            search_kwargs={"score_threshold": score_threshold,
                                           "k": VECTOR_SEARCH_TOP_K})
prompt = PromptTemplate(
    input_variables=["chat_history", "context", "question"],
    template=prompt_template
)
chain = ConversationalRetrievalChain.from_llm(
    llm=model,
    chain_type="stuff",
    retriever=retriever,
    # memory=memory,
    return_source_documents=True,
    return_generated_question=True,
    combine_docs_chain_kwargs={'prompt': prompt},
    condense_question_llm=model,
    verbose=True
)

task = asyncio.create_task(wrap_done(
    chain.acall({"question": query, "chat_history": chat_history}),
    callback.done),
)

if stream:
    async for token in callback.aiter():
        # Use server-sent-events to stream the response
        yield json.dumps({"answer": token}, ensure_ascii=False)
    yield json.dumps({"docs": source_documents}, ensure_ascii=False)
else:
    answer = ""
    async for token in callback.aiter():
        answer += token
    yield json.dumps({"answer": answer,
                      "docs": source_documents},
                     ensure_ascii=False)
await task

return StreamingResponse(knowledge_base_chat_iterator(query=query,
                                                      top_k=top_k,
                                                      history=history,
                                                      chat_history=chat_history,
                                                      model_name=model_name,
                                                      prompt_name=prompt_name),
                         media_type="text/event-stream")
```
### Expected behavior
I don't know why it called LLM twice, and then it returned the updated Question, which was not the Assistant I wanted. The call log is as follows:
log:
> Entering new LLMChain chain...
Prompt after formatting:
Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.
Chat History:
Human: hello
Assistant: Hi there! Is there anything I can help you with? Youre welcome, just tell me~
Human: hello hello make friends
Assistant: ok
Follow Up Input: hello!
Standalone question:
> Entering new StuffDocumentsChain chain...
> Entering new LLMChain chain...
Prompt after formatting:
Chat History:
Human: hello
Assistant: Hi there! Is there anything I can help you with? Youre welcome, just tell me~
Human: hello hello make friends
Assistant: ok
Question: How can I make friends?
Helpful Answer:
2023-12-26 17:27:43,269 - _client.py[line:1758] - INFO: HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
> Finished chain.
> Finished chain.
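If I read the log right, the first `LLMChain` run above is the condense-question step: it only runs when `chat_history` is non-empty, and since `condense_question_llm` is the same streaming model, its tokens land in the same callback, so the callback yields the rephrased question before (or instead of) the answer. A toy illustration of that shared-callback data flow (plain Python, not the real chain):

```python
# Toy pipeline (not LangChain code): two LLM steps share one token callback,
# so the rephrased question streams into the same place as the answer,
# matching the two "> Entering new LLMChain chain..." blocks in the log.
def condense_question(question, on_token):
    standalone = f"standalone: {question}"
    for tok in standalone.split():
        on_token(tok)          # these tokens leak into the answer stream
    return standalone

def answer_question(question, on_token):
    answer = f"answer to {question}"
    for tok in answer.split():
        on_token(tok)
    return answer

tokens = []
q = condense_question("hello!", tokens.append)   # runs only when chat_history is non-empty
final = answer_question(q, tokens.append)
```

Presumably passing a second, non-streaming model instance as `condense_question_llm` would keep the rephrased question out of the stream.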
| 【BUG】ConversationalRetrievalChain.from_llm and pass in chat_history, there is a problem with the callback. | https://api.github.com/repos/langchain-ai/langchain/issues/15164/comments | 2 | 2023-12-26T09:32:29Z | 2024-01-10T03:36:51Z | https://github.com/langchain-ai/langchain/issues/15164 | 2,056,226,273 | 15,164 |
[
"hwchase17",
"langchain"
Is it correct to use CharacterTextSplitter on Confluence documents?
### Issue you'd like to raise.
```python
confluence_url = config.get("confluence_url", None)
username = config.get("username", None)
api_key = config.get("api_key", None)
space_keys = config.get("space_key", None)  # expected to be a list of space keys
documents = []
embedding = OpenAIEmbeddings()
loader = ConfluenceLoader(
    url=confluence_url,
    username=username,
    api_key=api_key
)
for space_key in space_keys:  # avoid shadowing the list with the loop variable
    try:
        documents.extend(loader.load(space_key=space_key, include_attachments=True, limit=100))
    except Exception:
        continue  # skip the failing space instead of discarding everything loaded so far
text_splitter = CharacterTextSplitter(chunk_size=6000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
text_splitter = TokenTextSplitter(chunk_size=1000, chunk_overlap=10, encoding_name="cl100k_base")
texts = text_splitter.split_documents(texts)
```
### Suggestion:
_No response_ | Issue: How it can be splitted ? | https://api.github.com/repos/langchain-ai/langchain/issues/15162/comments | 1 | 2023-12-26T07:41:33Z | 2023-12-26T10:37:39Z | https://github.com/langchain-ai/langchain/issues/15162 | 2,056,133,474 | 15,162 |
[
"hwchase17",
"langchain"
] | ### System Info
When I set `verbose=True` when creating chains that use `ConversationBufferMemory` as memory and **redirect** the output to a txt/log file, the returned messages show that the `ConversationBufferMemory` saves the same round of conversation twice. You can see an example in a later part of this issue.

**This problem does not happen if I just print the returned messages in the terminal instead of redirecting them into files.**

Does `ConversationBufferMemory` actually save the conversation twice? If so, this wastes half of the input tokens sent to the LLM. How can I set some variable so that it saves each round of conversation only once?
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
from langchain.chat_models import ChatOpenAI
llm = ChatOpenAI(model_name = 'gpt-4-1106-preview', temperature = 0.0)
def_memory = ConversationBufferMemory(memory_key="history", return_messages=True)
def_chain = ConversationChain(
llm = llm,
memory = def_memory,
verbose = True)
def_queries = ['When answering questions below, you should play a role as a vehicle system engineer. Your job is to read the VDR (Vehicle Digital Requirement) form and evaluate the quality of the VDR completion. Make your answer as brief as you can. If you understand what I said, reply only [UNDERSTAND].',
'You can see duplication in memory of this query.']
for def_q in def_queries:
ret = def_chain.run(def_q)
def_memory.save_context({"input": def_q}, {"output": ret})
# pls redirect the output into some .txt or .log file
```
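For comparison, here is a toy model (not LangChain code) of my hypothesis: if `ConversationChain.run` already saves each turn via the chain's memory hooks, then the extra `save_context` call in my loop stores every exchange a second time:

```python
# Toy reproduction of the double-save: the chain saves automatically on run(),
# and the manual save_context() on top of it duplicates the entry.
class TinyMemory:
    def __init__(self):
        self.history = []

    def save_context(self, inputs, outputs):
        self.history.append((inputs["input"], outputs["output"]))

class TinyChain:
    def __init__(self, memory):
        self.memory = memory

    def run(self, text):
        answer = f"echo: {text}"
        self.memory.save_context({"input": text}, {"output": answer})  # automatic save
        return answer

memory = TinyMemory()
chain = TinyChain(memory)
ret = chain.run("hello")
memory.save_context({"input": "hello"}, {"output": ret})  # the extra manual save
```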
### Expected behavior
### Below is my redirected gh.log file; I bolded the duplicate part
[1m> Entering new ConversationChain chain...[0m
Prompt after formatting:
[32;1m[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
Current conversation:
[]
Human: When answering questions below, you should play a role as a vehicle system engineer. Your job is to read the VDR (Vehicle Digital Requirement) form and evaluate the quality of the VDR completion. Make your answer as brief as you can. If you understand what I said, reply only [UNDERSTAND].
AI:[0m
[1m> Finished chain.[0m
[1m> Entering new ConversationChain chain...[0m
Prompt after formatting:
[32;1m[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
Current conversation:
**[HumanMessage(content='When answering questions below, you should play a role as a vehicle system engineer. Your job is to read the VDR (Vehicle Digital Requirement) form and evaluate the quality of the VDR completion. Make your answer as brief as you can. If you understand what I said, reply only [UNDERSTAND].'), AIMessage(content='[UNDERSTOOD]'), HumanMessage(content='When answering questions below, you should play a role as a vehicle system engineer. Your job is to read the VDR (Vehicle Digital Requirement) form and evaluate the quality of the VDR completion. Make your answer as brief as you can. If you understand what I said, reply only [UNDERSTAND].'), AIMessage(content='[UNDERSTOOD]')]**
Human: You can see duplication in memory of this query.
AI:[0m
[1m> Finished chain.[0m
| Does ConversationBufferMemory actually save conversation twice? | https://api.github.com/repos/langchain-ai/langchain/issues/15161/comments | 2 | 2023-12-26T07:21:01Z | 2024-01-02T06:47:11Z | https://github.com/langchain-ai/langchain/issues/15161 | 2,056,117,735 | 15,161 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I'm using the OpenAI function-call agent, and the GPT LLM often produces bad tool parameters. I want to achieve this: pass certain params to all tools through some path so that, before every tool gets executed, I can check whether the LLM-produced params are right, or directly use the trusted params I already have.
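To make the ask concrete, this is roughly the kind of hook I'd like, a wrapper (my own helper, not an existing LangChain API) where trusted parameters override whatever the LLM produced before the tool actually runs:

```python
# Sketch: wrap a tool's function so pre-validated, trusted parameters
# override the LLM-generated arguments at execution time.
def with_fixed_params(tool_fn, fixed):
    def wrapped(**llm_args):
        merged = {**llm_args, **fixed}  # trusted values win over LLM values
        return tool_fn(**merged)
    return wrapped

def search(query, user_id):
    return (query, user_id)

safe_search = with_fixed_params(search, {"user_id": "u-123"})
result = safe_search(query="hi", user_id="WRONG")  # LLM's bad user_id is overridden
```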
### Suggestion:
_No response_ | Issue: i want to use langchain callbacks to pass a tool parameter to it? what should i do? | https://api.github.com/repos/langchain-ai/langchain/issues/15160/comments | 1 | 2023-12-26T06:56:58Z | 2024-04-02T16:07:09Z | https://github.com/langchain-ai/langchain/issues/15160 | 2,056,099,364 | 15,160 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
When using Qdrant as a retriever, how do I retrieve the relevant documents together with their similarity scores? For now, I do not see any retriever method that returns both the documents and the scores. However, if I use the vector store to run a similarity search, I have the option to get the documents and the corresponding scores. Isn't there a way to achieve the same thing via the retriever?
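For reference, the workaround I'm considering is a thin helper (the name is mine, not a LangChain API) that calls the store's `similarity_search_with_score` and copies each score into the document's metadata; the tiny stand-in classes below only exist so the sketch runs without a Qdrant server:

```python
# Workaround sketch: keep the similarity score next to each document.
def retrieve_with_scores(vectorstore, query, k=4):
    docs_and_scores = vectorstore.similarity_search_with_score(query, k=k)
    for doc, score in docs_and_scores:
        doc.metadata["score"] = score
    return [doc for doc, _ in docs_and_scores]

# Stand-ins so the sketch can be exercised without a live Qdrant instance:
class _Doc:
    def __init__(self):
        self.metadata = {}

class _Store:
    def similarity_search_with_score(self, query, k=4):
        return [(_Doc(), 0.91), (_Doc(), 0.42)][:k]

docs = retrieve_with_scores(_Store(), "hello", k=2)
```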
### Suggestion:
_No response_ | Issue: When using Qdrant as retriever, how to retrieve the relevant documents with the similarity score? | https://api.github.com/repos/langchain-ai/langchain/issues/15158/comments | 4 | 2023-12-26T06:24:17Z | 2024-04-02T16:07:04Z | https://github.com/langchain-ai/langchain/issues/15158 | 2,056,076,604 | 15,158 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
I was wondering about this part of the code, which defines a `cypher generation template` for LangChain with a Neo4j graph database, from the Neo4j DB QA chain documentation:
```python
from langchain.prompts.prompt import PromptTemplate
CYPHER_GENERATION_TEMPLATE = """Task:Generate Cypher statement to query a graph database.
Instructions:
Use only the provided relationship types and properties in the schema.
Do not use any other relationship types or properties that are not provided.
Schema:
{schema}
Note: Do not include any explanations or apologies in your responses.
Do not respond to any questions that might ask anything else than for you to construct a Cypher statement.
Do not include any text except the generated Cypher statement.
Examples: Here are a few examples of generated Cypher statements for particular questions:
# How many people played in Top Gun?
MATCH (m:Movie {{title:"Top Gun"}})<-[:ACTED_IN]-()
RETURN count(*) AS numberOfActors
The question is:
{question}"""
CYPHER_GENERATION_PROMPT = PromptTemplate(
input_variables=["schema", "question"], template=CYPHER_GENERATION_TEMPLATE
)
chain = GraphCypherQAChain.from_llm(
ChatOpenAI(temperature=0),
graph=graph,
verbose=True,
cypher_prompt=CYPHER_GENERATION_PROMPT,
)
```
Just want to ask: what do the `schema` and `question` variables in the **input_variables** parameter of `PromptTemplate` refer to?
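As far as I can tell, `schema` is filled in automatically by `GraphCypherQAChain` from the connected graph's schema, and `question` is the text passed to `chain.run("text input")`. A stdlib-only illustration of how the two placeholders get substituted:

```python
# Plain str.format() stand-in for what the chain does with the template:
template = "Schema:\n{schema}\nThe question is:\n{question}"
filled = template.format(
    schema="(:Actor)-[:ACTED_IN]->(:Movie)",        # comes from the Neo4j graph
    question="How many people played in Top Gun?",  # comes from chain.run(...)
)
```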
### Idea or request for content:
Please explain what `schema` and `question` refer to: does `schema` come from our connected Neo4j database, and is `question` the text we pass into `chain.run("text input")`? I'm a little confused by the documentation itself and need some explanation. Using one of its examples to explain this would make it much more understandable. | DOC: Need some clarification on Neo4j DB QA chain documentation | https://api.github.com/repos/langchain-ai/langchain/issues/15157/comments | 3 | 2023-12-26T04:36:18Z | 2024-04-02T16:06:59Z | https://github.com/langchain-ai/langchain/issues/15157 | 2,056,019,228 | 15,157
[
"hwchase17",
"langchain"
] | ### System Info
Langchain Version: 0.0.352
Langchain experimental Version: 0.0.47
Python : 3.10
Ubuntu : 22.04
Poetry is being used
**Code: `test.py`**
```python
import json
from langchain.schema import HumanMessage
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chat_models import ChatOllama
chat_model = ChatOllama(model="mistral:instruct")
json_schema = {
"title": "Person",
"description": "Identifying information about a person.",
"type": "object",
"properties": {
"name": {"title": "Name", "description": "The person's name", "type": "string"},
"age": {"title": "Age", "description": "The person's age", "type": "integer"},
"fav_food": {
"title": "Fav Food",
"description": "The person's favorite food",
"type": "string",
},
},
"required": ["name", "age"],
}
messages = [
HumanMessage(
content="Please tell me about a person using the following JSON schema:"
),
HumanMessage(content=json.dumps(json_schema, indent=2)),
HumanMessage(
content="Now, considering the schema, tell me about a person named John who is 35 years old and loves pizza."
),
]
chat_model_response = chat_model(messages)
```
**Error:**
```sh
Traceback (most recent call last):
File "test.py", line 35, in <module>
chat_model_response = chat_model(messages)
File ".venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 636, in __call__
generation = self.generate(
File ".venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 382, in generate
raise e
File ".venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 372, in generate
self._generate_with_cache(
File ".venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 528, in _generate_with_cache
return self._generate(
File ".venv/lib/python3.10/site-packages/langchain_community/chat_models/ollama.py", line 209, in _generate
final_chunk = self._chat_stream_with_aggregation(
File ".venv/lib/python3.10/site-packages/langchain_community/chat_models/ollama.py", line 168, in _chat_stream_with_aggregation
for stream_resp in self._create_chat_stream(messages, stop, **kwargs):
File ".venv/lib/python3.10/site-packages/langchain_community/chat_models/ollama.py", line 155, in _create_chat_stream
yield from self._create_stream(
File ".venv/lib/python3.10/site-packages/langchain_community/llms/ollama.py", line 198, in _create_stream
raise OllamaEndpointNotFoundError(
langchain_community.llms.ollama.OllamaEndpointNotFoundError: Ollama call failed with status code 404.
```
I checked that Ollama is running on port 11434 and it is working fine, but I am still seeing the issue.
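To make that check reproducible, here is the stdlib-only probe I used (the URL below is the Ollama default; adjust it if `OLLAMA_HOST` is set differently):

```python
# Connectivity probe: distinguishes "server down" from "endpoint missing".
import urllib.error
import urllib.request

def probe(url, timeout=2):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status          # server up and endpoint found
    except urllib.error.HTTPError as err:
        return err.code                 # server up, but e.g. 404 on that path
    except OSError:
        return None                     # nothing listening at all

# probe("http://localhost:11434/api/tags") returning 200 while the chat
# call 404s would point at the endpoint path rather than the server.
```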
@hwchase17 @agola11
Need some help on this.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
run the file `test.py`
### Expected behavior
model should complete the predication without any issue | langchain_community.llms.ollama.OllamaEndpointNotFoundError: Ollama call failed with status code 404 | https://api.github.com/repos/langchain-ai/langchain/issues/15147/comments | 9 | 2023-12-25T14:08:45Z | 2024-05-29T12:18:55Z | https://github.com/langchain-ai/langchain/issues/15147 | 2,055,708,933 | 15,147 |
[
"hwchase17",
"langchain"
] | ### System Info
Langchain version 0.352
The SystemMessage is ignored when I invoke the AgentExecutor.run function. The code looks as below.
```
from typing import Tuple, Dict
from langchain.agents import initialize_agent, AgentType
from langchain.agents.agent import AgentExecutor
from langchain.agents.format_scratchpad.openai_functions import format_to_openai_function_messages
from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser
from langchain.memory import ConversationBufferMemory
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.tools.render import format_tool_to_openai_function
from langchain_core.messages import SystemMessage
from elasticsearch_agent.config import cfg
from elasticsearch_agent.tools.index_data_tool import IndexShowDataTool
from elasticsearch_agent.tools.index_details_tool import IndexDetailsTool
from elasticsearch_agent.tools.index_search_tool import create_search_tool
from elasticsearch_agent.tools.list_indices_tool import ListIndicesTool
tools = [
ListIndicesTool(),
IndexShowDataTool(),
IndexDetailsTool(),
create_search_tool(),
]
def elastic_agent_factory() -> AgentExecutor:
system_msg = """
You are a helpful AI ElasticSearch Expert Assistant
**Always you will get the field names of the ElasticSearch index from the Elasticsearch DB as a first step.
You are provided with various tools to help the user to get information from an ElasticSearch index.
you will get the index name from the question. If not provided, show the list of available indices and ask the user to choose it.
You will generate required aggregation queries for any analytical questions asked.
You will use 'aggregations' field in response object for answering analytical queries.
Dont's:
Never assume index names or field names.
"""
agent_kwargs, memory = setup_memory()
agent_kwargs["system_message"] = SystemMessage(content=system_msg)
return initialize_agent(
tools,
cfg.llm,
agent=AgentType.OPENAI_FUNCTIONS,
verbose=False,
agent_kwargs=agent_kwargs,
memory=memory
)
def setup_memory() -> Tuple[Dict, ConversationBufferMemory]:
"""
Sets up memory for the open ai functions agent.
:return a tuple with the agent keyword pairs and the conversation memory.
"""
agent_kwargs = {
"extra_prompt_messages": [MessagesPlaceholder(variable_name="memory")],
}
memory = ConversationBufferMemory(memory_key="memory", return_messages=True)
return agent_kwargs, memory
if __name__ == "__main__":
agent_executor = elastic_agent_factory()
prompt = agent_executor.agent.prompt
print(prompt)
print(type(agent_executor.agent.prompt))
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
run the given code
### Expected behavior
The chain agent should consider using the system message and extra prompt message provided to it. | SystemMessage are not considered while creating AgentExecutor with OPENAI_FUNCTIONS | https://api.github.com/repos/langchain-ai/langchain/issues/15145/comments | 5 | 2023-12-25T12:11:14Z | 2024-04-01T16:06:55Z | https://github.com/langchain-ai/langchain/issues/15145 | 2,055,649,057 | 15,145 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain==0.0.352
langchain-community==0.0.6
langchain-core==0.1.3
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.document_loaders.parsers.audio import OpenAIWhisperParserLocal
whisper = OpenAIWhisperParserLocal(device="cuda")
```
This fails when cuda is requested but is not available and generates the following error:
`AttributeError: 'OpenAIWhisperParserLocal' object has no attribute 'lang_model'`
This is caused by the following logic: https://github.com/langchain-ai/langchain/blob/a2d30428237695f076060dec881bae0258123775/libs/community/langchain_community/document_loaders/parsers/audio.py#L176C18-L176C21
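For illustration, a guarded device pick (my own helper, not the current parser API) that would avoid the late `AttributeError` by resolving the device up front:

```python
# Resolve the device before any model loading, falling back to CPU
# so lang_model is never left unset.
def pick_device(requested):
    try:
        import torch
        cuda_ok = torch.cuda.is_available()
    except ImportError:
        cuda_ok = False
    if requested == "cuda" and not cuda_ok:
        return "cpu"  # graceful fallback instead of a late AttributeError
    return requested
```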
### Expected behavior
Provide a more clear error or fall back to CPU. | OpenAIWhisperParserLocal fails when specifying cuda device but cuda is not available | https://api.github.com/repos/langchain-ai/langchain/issues/15143/comments | 1 | 2023-12-25T09:53:52Z | 2024-04-01T16:06:50Z | https://github.com/langchain-ai/langchain/issues/15143 | 2,055,569,018 | 15,143 |
[
"hwchase17",
"langchain"
] | ### System Info
wsl
conda 23.7.4 python 3.8.11
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
repo_id = "Qwen/Qwen-1_8B-Chat"
llm = HuggingFaceHub(
repo_id=repo_id, model_kwargs={"max_length": 128, "temperature": 0.5}
)
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run(question))
```
output:
```
ValueError: Error raised by inference API: The repository for Qwen/Qwen-1_8B-Chat contains custom code which must be executed to correctly load the model. You can inspect the repository content at https://hf.co/Qwen/Qwen-1_8B-Chat.
Please pass the argument `trust_remote_code=True` to allow custom code to be run.
```
similar issue #6080 | HuggingFaceHub api can not pass trust_remote_code argument | https://api.github.com/repos/langchain-ai/langchain/issues/15141/comments | 1 | 2023-12-25T09:10:42Z | 2024-04-01T16:06:45Z | https://github.com/langchain-ai/langchain/issues/15141 | 2,055,540,800 | 15,141 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
In the current documentations the output of `Upstash Redis Cache` section in LLM Caching documentation seems wrong. The second run after caching is done has wrong output and wrong code and comments written in the code block.
### Idea or request for content:
Update the code block with appropriate comment and matching output to remove the confusion. | DOC: Wrong output in `Upstash Redis Cache` section of LLM Caching documentation | https://api.github.com/repos/langchain-ai/langchain/issues/15139/comments | 1 | 2023-12-25T07:13:29Z | 2024-04-01T16:06:40Z | https://github.com/langchain-ai/langchain/issues/15139 | 2,055,458,803 | 15,139 |
[
"hwchase17",
"langchain"
] | ### System Info
LangChain : 0.0.352
Python : 3.11.5
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. use streaming
```
llm_sm_ep = SagemakerEndpoint(
endpoint_name=endpoint_name,
client=client,
content_handler=content_handler,
model_kwargs=model_param,
endpoint_kwargs=endpoint_param,
streaming=True,
)
```
### Expected behavior
When I use a TGI model, the `invoke_endpoint_with_response_stream` response doesn't have `outputs`. Instead it returns `token` fields, as in the full response below.
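For context, here is the content-handler sketch I'm experimenting with for that shape; the field names are taken from the raw sample that follows below, and the helper itself is my assumption, not a LangChain API:

```python
import json

# Parse one TGI server-sent-events line into the token text.
def parse_tgi_stream_line(line):
    if not line.startswith("data:"):
        return None
    payload = json.loads(line[len("data:"):])
    token = payload.get("token") or {}
    return token.get("text")

sample = ('data:{"token":{"id":601,"text":" time","logprob":-0.10015869,'
          '"special":false},"generated_text":null,"details":null}')
text = parse_tgi_stream_line(sample)
```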
```
data:{"token":{"id":601,"text":" time","logprob":-0.10015869,"special":false},"generated_text":null,"details":null}
``` | Sagemaker Endpoint not working streaming | https://api.github.com/repos/langchain-ai/langchain/issues/15138/comments | 1 | 2023-12-25T06:28:01Z | 2024-04-01T16:06:35Z | https://github.com/langchain-ai/langchain/issues/15138 | 2,055,427,344 | 15,138 |
[
"hwchase17",
"langchain"
] | ### Feature request
Currently, if one wants to use the RetryWithErrorOutputParser, the parsing must be done manually instead of composing a chain that does it for us (including all the nice chain functions: batch, ainvoke, etc.).
There are 2 issues:
1. The RetryWithErrorOutputParser requires the prompt to be given to it as input so that it can do its magic. It does this by implementing a `parse_with_prompt` function. Unfortunately this function is not plumbed all the way into the `BaseParser`, so when it is invoked as part of a regular chain it raises the `NotImplementedError: This OutputParser can only be called by the parse_with_prompt method.` exception.
2. The default output of the chatmodels is to return just the output or `AIMessages`. However in this case we need both the prompt and the output.
### Motivation
Currently we need to run the output parsing for the retry parsing manually. This tends to look something like this:
```
chain = chat_prompt | self.chat_model
output_batch = chain.batch(messages_batch,
config={"max_concurrency": 10,
"callbacks": [tracing_callback_handler]})
prompts_list = tracing_callback_handler.prompts
result_list = tracing_callback_handler.results
parsed_output_batch = []
for idx, output in enumerate(output_batch):
parsed_output = retry_parser.parse_with_prompt(output.content, prompts_list[idx])
parsed_output_batch.append(parsed_output)
```
In the above code the `tracing_callback_handler` is a custom callback handler that persists the prompt and results - which we end up using to give the retry_parser the prompt.
This is cumbersome and it would be awesome if this would just work with the chain itself like so
```
chain = chat_prompt | self.chat_model | retry_parser
output_batch = chain.batch(messages_batch,
config={"max_concurrency": 10,
"callbacks": [tracing_callback_handler]})
```
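To make issue 2 concrete, this is the data flow I'm imagining, in plain Python: the model step emits the prompt alongside the completion so a `parse_with_prompt`-style step downstream can see both:

```python
# Toy plumbing (not LangChain code): (prompt, completion) pairs flow through
# the chain so the retry-style parser never needs a custom callback handler.
def model_step(prompt):
    return (prompt, f"completion for <{prompt}>")

def retry_parse_step(pair):
    prompt, completion = pair       # parse_with_prompt gets both pieces
    return {"prompt": prompt, "parsed": completion.upper()}

parsed = retry_parse_step(model_step("my prompt"))
```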
### Your contribution
If someone can validate that my understanding of the problem is correct - I can go ahead and create a PR for this. | RetryWithErrorOutputParser does not work with LLMChain because it does not implement the `parse` function | https://api.github.com/repos/langchain-ai/langchain/issues/15133/comments | 3 | 2023-12-24T21:26:43Z | 2024-05-06T16:07:59Z | https://github.com/langchain-ai/langchain/issues/15133 | 2,055,216,057 | 15,133 |
[
"hwchase17",
"langchain"
] |
# Adding a prompt template to ConversationalRetrievalChain gives an error with the following code:
```python
template = """Use the following pieces of context to answer the question at the end.
If you don't know the answer,
just say that you don't know.
{context}
Question: {question}
Helpful Answer:"""
QA_CHAIN_PROMPT = PromptTemplate.from_template(template)

from langchain.memory import ConversationBufferMemory
memory = ConversationBufferMemory(
    memory_key="chat_history",
    return_messages=True
)

from langchain.chains import ConversationalRetrievalChain
qa = ConversationalRetrievalChain.from_llm(
    llm,
    retriever=db.as_retriever(),
    memory=memory,
    chain_type_kwargs={"prompt": QA_CHAIN_PROMPT}
)
```
```
ValidationError: 1 validation error for ConversationalRetrievalChain
chain_type_kwargs
  extra fields not permitted (type=value_error.extra)
```
How do I add the prompt template to the chain efficiently?
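For what it's worth, I believe `chain_type_kwargs` is the `RetrievalQA` spelling, and `ConversationalRetrievalChain.from_llm` takes the prompt via `combine_docs_chain_kwargs={"prompt": QA_CHAIN_PROMPT}` instead. A toy model (not LangChain code) of that constructor contract, reproducing the rejection above:

```python
# Toy contract check: unknown keyword fields are rejected, which mirrors
# the "extra fields not permitted" ValidationError.
ALLOWED = {"llm", "retriever", "memory", "combine_docs_chain_kwargs"}

def from_llm_stub(**kwargs):
    unknown = set(kwargs) - ALLOWED
    if unknown:
        raise ValueError(f"extra fields not permitted: {sorted(unknown)}")
    return kwargs

try:
    from_llm_stub(llm="llm", retriever="db", chain_type_kwargs={})
    rejected = False
except ValueError:
    rejected = True   # matches the ValidationError I'm seeing

accepted = from_llm_stub(
    llm="llm",
    retriever="db",
    combine_docs_chain_kwargs={"prompt": "QA_CHAIN_PROMPT"},
)
```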
### Suggestion:
How do I add the prompt template to the chain efficiently? Please, I need help with this. | Adding Prompt template to ConversationalRetrievalChain.from_llm | https://api.github.com/repos/langchain-ai/langchain/issues/15132/comments | 1 | 2023-12-24T21:26:16Z | 2024-03-31T16:06:50Z | https://github.com/langchain-ai/langchain/issues/15132 | 2,055,216,000 | 15,132 |
[
"hwchase17",
"langchain"
] | ### System Info
windows
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The below code fails
```python
import os
from operator import itemgetter
from dotenv import load_dotenv
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import DirectoryLoader
from langchain_community.embeddings import OpenAIEmbeddings
from langchain_community.vectorstores.faiss import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableParallel, RunnablePassthrough
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

load_dotenv()
OPENAI_API_KEY = os.environ.get('OPENAI_API_KEY')

loader = DirectoryLoader("/Users/joyeed/langchain_examples/langchain_examples/data/", glob='**/*.md')
documents = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
text = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings(openai_api_key=OPENAI_API_KEY)
vectorstore = FAISS.from_documents(text, embeddings)
retriever = vectorstore.as_retriever()

prompt_template = ChatPromptTemplate.from_template(
    """
    Write 2 {platform} posts about {topic}?
    """
)
model = ChatOpenAI(openai_api_key=OPENAI_API_KEY)

# Compose the chain for generating posts
chain = (
    {"topic": RunnablePassthrough(), "platform": RunnablePassthrough(), "context": retriever}
    | prompt_template
    | model
    | StrOutputParser()
)

# Invoke the chain to generate a post
output = chain.invoke({"topic": "baseball", "platform": "twitter"})

# Print the generated post
print(output)
```
I think it is failing because `invoke` is expecting a string as input, whereas earlier we were able to pass key/value pairs. It fails in `tiktoken/core.py` in the code below, which expects text:
```python
if match := _special_token_regex(disallowed_special).search(text):
    raise_disallowed_special_token(match.group())
```
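My current hypothesis in plain Python: `RunnablePassthrough()` as a dict value forwards the *whole* input dict, so the retriever (and tiktoken underneath it) receives a dict rather than a string; picking fields with `itemgetter` (already imported at the top of my script) gives each branch just its string:

```python
from operator import itemgetter

inputs = {"topic": "baseball", "platform": "twitter"}

passthrough = lambda x: x   # stand-in for RunnablePassthrough()

with_passthrough = {
    key: fn(inputs)
    for key, fn in {"topic": passthrough, "platform": passthrough}.items()
}
# with_passthrough["topic"] is the whole dict, and that is what hits tiktoken

with_itemgetter = {
    key: fn(inputs)
    for key, fn in {"topic": itemgetter("topic"),
                    "platform": itemgetter("platform")}.items()
}
# with_itemgetter["topic"] is the plain string "baseball"
```

So the fix I'm trying in the real chain is `{"topic": itemgetter("topic"), "platform": itemgetter("platform"), "context": itemgetter("topic") | retriever}`.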
### Expected behavior
invoke should allow accepting JSON inputs | chain.invoke is no longer taking a json as input | https://api.github.com/repos/langchain-ai/langchain/issues/15131/comments | 1 | 2023-12-24T17:35:05Z | 2024-03-31T16:06:45Z | https://github.com/langchain-ai/langchain/issues/15131 | 2,055,171,635 | 15,131 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain version: 0.0.352, Windows 10, Python 3.11.6,
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I'm testing a couple of apps created from Langchain templates and I use dotenv and a .env file in the app's root folder ( default in docs: "my-app" )
I'm not able to work with the Neo4j data added into it when trying to run, e.g.:
**neo4j-advanced-rag-app\packages\neo4j-advanced-rag\ingest.py**
( My .env is in the neo4j-advanced-rag-app folder )
This is strange, because eg. Langsmith related env vars can be used, so I think the issue is not related to all env vars in .env file!
The last terminal error is:
```
Traceback (most recent call last):
File "d:\Projects\AI_testing\LangChain_test\Python_231026\neo4j-advanced-rag-app\packages\neo4j-advanced-rag\ingest.py", line 16, in <module>
graph = Neo4jGraph()
^^^^^^^^^^^^
File "D:\Projects\AI_testing\LangChain_test\Python_231026\langchain-venv\Lib\site-packages\langchain_community\graphs\neo4j_graph.py", line 65, in __init__
url = get_from_env("url", "NEO4J_URI", url)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Projects\AI_testing\LangChain_test\Python_231026\langchain-venv\Lib\site-packages\langchain_core\utils\env.py", line 41, in get_from_env
raise ValueError(
ValueError: Did not find url, please add an environment variable `NEO4J_URI` which contains it, or pass `url` as a named parameter.
```
### Expected behavior
I expect the app created from a template can use all the env vars in the .env file, which is placed into app root folder. | Template issue: Neo4J environmental variables in .env file not found | https://api.github.com/repos/langchain-ai/langchain/issues/15130/comments | 3 | 2023-12-24T14:59:45Z | 2024-03-31T16:06:40Z | https://github.com/langchain-ai/langchain/issues/15130 | 2,055,130,570 | 15,130 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Environment
```
Edition Windows 11 Home
Version 22H2
Installed on 4/30/2023
OS build 22621.2861
Experience Windows Feature Experience Pack 1000.22681.1000.0
langchain package version: "0.0.212"
zod package version: "3.22.4"
typescript package version: "5.1.6"
```
Prompt
```
My prompt data with keys: {chat_history}, {currentPoint}, {language}, {topic} and Last AI message: {lastAiMessage} and User response: {message}| format: json
```
Creating model code
```
// LLM constructor
constructor(args: any[]) {
this.llm = new ChatOpenAI({
openAIApiKey: process.env.OPENAI_API_KEY,
modelName: 'gpt-3.5-turbo-1106',
modelKwargs: {
response_format: {
type: 'json_object',
},
},
});
this.answerScheme = LLMChain.getAnswerScheme();
this.formatInstructions = createParserFromSchema(this.answerScheme).getFormatInstructions();
const prompt = ChatPromptTemplate.fromMessages([new SystemMessage(args.prompt, { "json": true })]);
const memory = new ConversationSummaryMemory({
llm: this.llm,
memoryKey: 'chat_history',
inputKey: 'message',
});
this.chain = new ConversationChain({
llm: this.llm,
prompt,
memory,
verbose: true,
})
}
private static getAnswerScheme() {
return z.object({
answer: z.string(),
action: z.enum(['none', 'next']),
});
}
```
Send message code
```
async sendMessage(chainValues: ChainValues) {
chainValues['currentPoint'] = this.currentPoint;
chainValues['lastAiMessage'] = this.lastAiMessage ?? '';
try {
const modelKwargs = {
response_format: {
type: 'json_object',
},
};
const rawResponse = await this.chain.call({ ...chainValues, ...modelKwargs, format_instructions: this.formatInstructions });
const { answer, action } = this.answerScheme.parse(rawResponse);
this.lastAiMessage = answer;
this.__parseActionKeyword(action);
return answer;
}
catch (error) {
console.error('LLMChain ERROR:', error);
return "Something goes wrong.\n\n" + error;
}
}
```
LLM run with input
```
[llm/start] [1:chain:ConversationChain > 2:llm:ChatOpenAI] Entering LLM run with input: {
"messages": [
[
{
"lc": 1,
"type": "constructor",
"id": [
"langchain_core",
"messages",
"SystemMessage"
],
"kwargs": {
"content": "My prompt data with keys: {chat_history}, {currentPoint}, {language}, {topic} and Last AI message: {lastAiMessage} and User response: {message}| format: json",
"additional_kwargs": {
"json": true
}
}
}
]
]
}
```
Error: `400 'messages' must contain the word 'json' in some form, to use 'response_format' of type 'json_object'.`
### Suggestion:
_No response_ | Issue: LLMChain error. response_format json error with messages. Messages is array of array | https://api.github.com/repos/langchain-ai/langchain/issues/15125/comments | 4 | 2023-12-24T12:57:20Z | 2023-12-24T15:06:36Z | https://github.com/langchain-ai/langchain/issues/15125 | 2,055,093,069 | 15,125 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
My agent tool requires the user to pass 2 parameters. If these 2 parameters are not included in the user's question, how can I prompt the user to enter them?
### Suggestion:
_No response_ | If my agent tool requires user to pass 2 parameters, and if these 2 parameters are not included in the user's question, how can I remind him to enter the parameters | https://api.github.com/repos/langchain-ai/langchain/issues/15122/comments | 1 | 2023-12-24T07:32:59Z | 2024-03-31T16:06:35Z | https://github.com/langchain-ai/langchain/issues/15122 | 2,055,013,662 | 15,122
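One common pattern (a sketch, not an official LangChain API): let the tool validate its own arguments and return a clarification request that the agent relays back to the user; with LangChain you would typically pair this with a `StructuredTool` and an `args_schema`. All names below are illustrative:

```python
def run_lookup_tool(params):
    """Hypothetical two-parameter tool that asks for whatever is missing."""
    required = ("city", "date")
    missing = [name for name in required if not params.get(name)]
    if missing:
        # The agent returns this text, prompting the user to supply the values.
        return "To use this tool, please also provide: " + ", ".join(missing)
    return f"Looking up {params['city']} on {params['date']}..."
```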
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
What is RAG, and how is it implemented? As of now I have finished exploring custom_prompt_template and want to know more about RAG.
### Suggestion:
_No response_ | Issue: what is RAG and how it's implemented? | https://api.github.com/repos/langchain-ai/langchain/issues/15116/comments | 5 | 2023-12-24T06:39:33Z | 2024-04-01T16:06:30Z | https://github.com/langchain-ai/langchain/issues/15116 | 2,055,002,722 | 15,116 |
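In short, RAG (retrieval-augmented generation) retrieves documents relevant to the question and stuffs them into the prompt so the LLM answers from that context. A toy, framework-free sketch of the two steps (the scoring here is naive word overlap, not real embeddings):

```python
def retrieve(query, docs, k=1):
    """Toy retriever: rank documents by word overlap with the query."""
    q = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return ranked[:k]

def build_rag_prompt(query, docs):
    """Stuff the retrieved context into the prompt sent to the LLM."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer based only on this context:\n{context}\n\nQuestion: {query}"

docs = ["Paris is the capital of France.", "Python is a programming language."]
prompt = build_rag_prompt("What is the capital of France?", docs)
```

In LangChain the retriever is usually a vector store (embeddings plus similarity search) and the prompt/LLM call is a chain, but the flow is the same.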
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I suspect a potential issue where Chroma.from_documents might not be embedding and storing vectors for metadata in documents.
I have loaded five tabular documents using DataFrameLoader. However, when attempting to retrieve content based on similarity from the vector store, it appears that sentences in the metadata are not being utilized for matching. I don't see where the documentation clarifies whether this is the expected behavior or whether I might be overlooking a specific argument or setting.
To illustrate, suppose I have a table with three fields: customer_question, agent_answer, and manager_note. If I query using the exact string from one of a manager_note, it surprisingly doesn't return the corresponding document at the top of the results.
**Is this a normal outcome?
Should I modify my table structure to include all relevant content in the page_content_column when setting up the DataFrameLoader?**
Here is the process
```
loader = DataFrameLoader(customer_q_a_001, page_content_column='customer_question')
docs = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=15)
docs = text_splitter.split_documents(docs)
# db = Chroma.from_documents(all_docs, embeddings, persist_directory="./chroma_db")
# db.persist()
db.similarity_search_with_score((query))
```
Library version:
langchain: 0.0.352
I would appreciate any insights or suggestions regarding this question.
### Suggestion:
_No response_ | Chroma.from_documents exclude metadata in embedding? [Question] | https://api.github.com/repos/langchain-ai/langchain/issues/15115/comments | 5 | 2023-12-24T06:13:37Z | 2024-03-31T16:06:25Z | https://github.com/langchain-ai/langchain/issues/15115 | 2,054,997,867 | 15,115 |
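For context (my understanding, worth verifying): vector stores embed only `page_content`; metadata is stored for filtering, not for similarity matching. So yes — if `manager_note` should be matchable, fold it into the page content, e.g. by building a combined column before handing the DataFrame to `DataFrameLoader`. A sketch with illustrative names:

```python
def merge_row_for_embedding(row, content_key, extra_keys):
    """Concatenate the main column with selected metadata columns so all of it is embedded."""
    parts = [str(row[content_key])]
    parts += [f"{key}: {row[key]}" for key in extra_keys if row.get(key)]
    return "\n".join(parts)

row = {
    "customer_question": "How do I reset my password?",
    "agent_answer": "Use the reset link on the login page.",
    "manager_note": "Escalate if SSO is involved.",
}
page_content = merge_row_for_embedding(row, "customer_question", ["agent_answer", "manager_note"])
```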
[
"hwchase17",
"langchain"
] | ### Feature request
It would be great to have adapters support in huggingface embedding class
### Motivation
Many really good embedding models have special adapters for retrieval, for example specter2 which is a leading embedding for scientific texts have many adapters, for example https://huggingface.co/allenai/specter2_aug2023refresh
and current huggingface embedding implementations does not allow using them
### Your contribution
so far I am just implementing it in ugly way in my projects, not sure if/when I will have time for proper PR | add support for embedding models with adapters | https://api.github.com/repos/langchain-ai/langchain/issues/15112/comments | 2 | 2023-12-24T01:18:05Z | 2024-04-03T16:08:34Z | https://github.com/langchain-ai/langchain/issues/15112 | 2,054,952,674 | 15,112 |
[
"hwchase17",
"langchain"
] | ### Feature request
Add streaming support for the Together AI endpoint in LangChain. The official endpoint supports streaming via the `stream_tokens` keyword, so it should not be hard to implement the `_stream` method and add streaming support behind the `streaming = True` flag.
This is what the endpoint outputs when `stream_tokens` is set to `true`:
```
data: {
"choices": [{"text": " the"}],
"request_id": "83a18448f8c030ab-SEA",
"token": {"engine": "", "id": 253, "logprob": 0, "special": false},
"id": "671a9e090c3fe06af8ab9445a46684298b6f5e5b458c4ff8a145bee456eb77cf",
}
data: {
"choices": [{"text": " French"}],
"request_id": "83a18448f8c030ab-SEA",
"token": {"engine": "", "id": 5112, "logprob": -0.8027344, "special": false},
"id": "671a9e090c3fe06af8ab9445a46684298b6f5e5b458c4ff8a145bee456eb77cf",
}
...
data: [DONE]
```
### Motivation
The Together LLM integration does not support streaming even though its endpoints officially support it. Adding streaming greatly improves the user experience by showing model output as it is generated.
### Your contribution
implementing the `_stream` method and processing the streamed API response; this can be done like this:
```python
payload = {
    ...,
    "stream_tokens": True,
}
response = requests.post(..., json=payload, stream=True)
for line in response.iter_lines():
    ...
    yield GenerationChunk(
        text=line["choices"][0]["text"],
        ...
) | [improvement] Add Streaming Support for Together AI | https://api.github.com/repos/langchain-ai/langchain/issues/15109/comments | 1 | 2023-12-23T19:48:33Z | 2024-03-30T16:07:11Z | https://github.com/langchain-ai/langchain/issues/15109 | 2,054,881,350 | 15,109 |
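A framework-free sketch of the decode loop such a `_stream` method would need (the real implementation would also wrap each piece in a `GenerationChunk` and notify the run manager; field names follow the sample events above):

```python
import json

def parse_sse_chunks(lines):
    """Yield text chunks from 'data: {...}' server-sent-event lines, stopping at [DONE]."""
    for raw in lines:
        raw = raw.strip()
        if not raw.startswith("data:"):
            continue
        payload = raw[len("data:"):].strip()
        if payload == "[DONE]":
            break
        event = json.loads(payload)
        yield event["choices"][0]["text"]

sample = [
    'data: {"choices": [{"text": " the"}]}',
    'data: {"choices": [{"text": " French"}]}',
    'data: [DONE]',
]
chunks = list(parse_sse_chunks(sample))
```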
[
"hwchase17",
"langchain"
] | ### Feature request
I am using langchain.vectorstores.redis and langchain.chains.ConversationalRetrievalChain.from_llm
I would like to get the scores of the matching documents with my query.
I know you can filter with the `search_kwargs={"score_threshold": 0.8}`
But still I want to get the similarity scores in the output.
### Motivation
To be able to play with the similarity scores on my end and allow flexibility to the user
### Your contribution
The output should be a list (like now) of tuples (Doc, score). In fact, this already exists as `similarity_search_with_relevance_scores` in `langchain.schema.vectorstore`, so the implementation should be quite straightforward.
Thanks! | Return similarity score ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/15097/comments | 5 | 2023-12-23T11:56:23Z | 2024-04-04T16:08:21Z | https://github.com/langchain-ai/langchain/issues/15097 | 2,054,765,710 | 15,097 |
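Until the chain exposes scores, one client-side shape for this (a sketch — `similarity_search_with_relevance_scores` on the store already returns `(doc, score)` pairs you can post-process):

```python
def filter_by_score(pairs, threshold=0.0):
    """Keep (doc, score) pairs at or above the threshold, best score first."""
    kept = [pair for pair in pairs if pair[1] >= threshold]
    return sorted(kept, key=lambda pair: pair[1], reverse=True)

results = filter_by_score([("doc-a", 0.91), ("doc-b", 0.42)], threshold=0.8)
```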
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I'm trying to initialize an existing collection via:
```python
store = PGVector(
    collection_name=COLLECTION_NAME,
    connection_string=CONNECTION_STRING,
    embedding_function=embeddings,
)
```
I keep getting:
```
Exception has occurred: NoReferencedTableError
Foreign key associated with column 'langchain_pg_embedding.collection_id' could not find table 'langchain_pg_collection' with which to generate a foreign key to target column 'uuid'
```
I've used a docker installation for PGVector and I can confirm that the table langchain_pg_collection does exist with the key.
This is the view from PgAdmin
langchain_pg_collection

and
langchain_pg_embedding

So I'm not sure why its throwing the exception or how to resolve it
If its relevant I had to make this change inside pgvector
```
from sqlalchemy import MetaData

class CollectionStore(BaseModel):
    """Collection store."""

    metadata = MetaData()
    if not metadata.tables.get('langchain_pg_collection'):
        __tablename__ = "langchain_pg_collection"
        name = sqlalchemy.Column(sqlalchemy.String)
        cmetadata = sqlalchemy.Column(JSON)
        embeddings = relationship(
            "EmbeddingStore",
            back_populates="collection",
            passive_deletes=True,
        )
```
to resolve [Table 'langchain_pg_collection' is already defined for this MetaData instance](https://github.com/langchain-ai/langchain/issues/14699)
### Suggestion:
_No response_ | Foreign key associated with column 'langchain_pg_embedding.collection_id' could not find table | https://api.github.com/repos/langchain-ai/langchain/issues/15096/comments | 1 | 2023-12-23T11:56:18Z | 2024-03-30T16:07:01Z | https://github.com/langchain-ai/langchain/issues/15096 | 2,054,765,699 | 15,096 |
[
"hwchase17",
"langchain"
] | ### Feature request
The safety settings are present in the **google_generativeai** library but are **not** exposed in the **langchain_google_genai** library.
The safety settings are basically an array of dictionaries passed along when sending the prompt.
### Motivation
The problem with not having this is that when we use the ChatGoogleGenerativeAI model, if a prompt violates the default safety settings, the model won't return an answer.
If we could change the safety settings and send them with the prompt to the model, this issue would be fixed.
### Your contribution
I am currently reading the code of the library and will raise a PR if i could fix the issue | Feature: No safety settings when using langchain_google_genai's ChatGoogleGenerativeAI | https://api.github.com/repos/langchain-ai/langchain/issues/15095/comments | 22 | 2023-12-23T09:00:07Z | 2024-08-02T10:50:19Z | https://github.com/langchain-ai/langchain/issues/15095 | 2,054,725,088 | 15,095 |
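For reference, the shape the underlying SDK expects (the category/threshold names are taken from Google's Gemini API docs — verify them against the current SDK before relying on them):

```python
# What langchain_google_genai would need to pass through to google.generativeai:
safety_settings = [
    {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_NONE"},
    {"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_ONLY_HIGH"},
]

def validate_safety_settings(settings):
    """Cheap structural check before handing the settings to the SDK."""
    return all(isinstance(s, dict) and {"category", "threshold"} <= set(s) for s in settings)
```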
[
"hwchase17",
"langchain"
] | ### System Info
langchain 0.0.352
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I used LangServe to develop a chain and exposed it as a remote tool. My friend wants to call my chain in his agent. How can he do this?
**Joke chain:**
```
#!/usr/bin/env python
from fastapi import FastAPI
from langchain.prompts import ChatPromptTemplate
from langchain.chat_models import ChatAnthropic, ChatOpenAI
from langserve import add_routes
llm = ChatOpenAI(
openai_api_base=f"http://192.168.1.201:18001/v1",
openai_api_key="EMPTY",
model="gpt-3.5-turbo",
temperature=0.5,
top_p="0.3",
default_headers={"x-heliumos-appId": "general-inference"},
tiktoken_model_name="gpt-3.5-turbo",
verbose=True,
)
app = FastAPI(
title="LangChain Server",
version="1.0",
description="A simple api server using Langchain's Runnable interfaces",
)
prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")
add_routes(
app,
prompt | llm,
path="/joke",
)
if __name__ == "__main__":
import uvicorn
uvicorn.run(app, host="0.0.0.0", port=8000)
```
**Agent:**
```
from langchain.agents import initialize_agent, AgentType
from langchain_community.chat_models import ChatOpenAI
from langserve import RemoteRunnable
from langchain.tools import Tool
llm = ChatOpenAI(
openai_api_base=f"http://xxxx:xxx/v1",
openai_api_key="EMPTY",
model="gpt-3.5-turbo",
temperature=0.5,
top_p="0.3",
tiktoken_model_name="gpt-3.5-turbo",
verbose=True,
)
remote_tool = RemoteRunnable("http://xxx:xxx/joke/")
tools = [
Tool.from_function(
func=remote_tool.invoke,
name="joke",
        description="Use this tool when the user asks to tell a joke",
# coroutine= ... <- you can specify an async method if desired as well
),
]
agent = initialize_agent(
tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
result = agent.run("Tell a joke about taking a taxi")
print(result)
```
The agent always gets an error, because there is no valid input for the remote tool.
### Expected behavior
no error | Agent how to call remote tool (exposed by langserve) | https://api.github.com/repos/langchain-ai/langchain/issues/15094/comments | 1 | 2023-12-23T08:50:23Z | 2024-03-30T16:06:56Z | https://github.com/langchain-ai/langchain/issues/15094 | 2,054,722,951 | 15,094 |
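One likely fix (a sketch — the names and the `topic` key are assumptions based on the joke chain's prompt): the remote chain expects a dict like `{"topic": ...}`, while a ZERO_SHOT agent hands its tool a plain string, so wrap the `RemoteRunnable` before passing it to `Tool.from_function`:

```python
def make_string_tool(runnable_invoke, input_key="topic"):
    """Adapt a dict-input runnable so an agent can call it with a plain string."""
    def tool_func(text):
        result = runnable_invoke({input_key: text})
        # Chat models return a message object; fall back to str() otherwise.
        return getattr(result, "content", str(result))
    return tool_func

# Stand-in for RemoteRunnable("http://.../joke/").invoke in this self-contained sketch:
def fake_invoke(payload):
    return f"a joke about {payload['topic']}"

joke_tool = make_string_tool(fake_invoke)
```

With the real chain, `func=make_string_tool(remote_tool.invoke)` would replace `func=remote_tool.invoke`.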
[
"hwchase17",
"langchain"
] | ### System Info
I'm using the latest version of langchain.
When my system prompt is longer than 23 lines, I get this error:
KeyError: "Input to ChatPromptTemplate is missing variable ''. Expected: ['', 'description'] Received: ['description']"
It's being generated from this snippet:
```python
def generate_output(user_input: str) -> str:
    '''This function will generate the output.scad file.'''
    chain = chat_prompt | chat_model
    print(chain)
    # similarity_search(user_input)
    llm_output = str(chain.invoke({"description": user_input}))  # Error occurs on this line
```
This error does not occur when my system prompt is shorter than 23 lines. Here is the code I'm using:
```python
chat_model = ChatOpenAI(openai_api_key=api_key(), model_name="gpt-4-1106-preview", temperature=0.2, model_kwargs=
                        {"frequency_penalty": 0, "presence_penalty": 0, "top_p": 1})
System_Message = Systemprompt("hello.txt")
Human_Message = "generate python code to {description} "
print("hi")
print(Human_Message)
chat_prompt = ChatPromptTemplate.from_messages([
    ("system", System_Message),
    ("human", Human_Message),
])
```
Here is my Systemprompt function:
```python
def Systemprompt(file_path: str) -> str:
    '''This function will return system prompt.'''
    try:
        with open(file_path, "r") as file:
            text = file.read()
        return text
    except FileNotFoundError:
        return FileNotFoundError
    except IOError as e:
        return IOError
```
How can I fix this?
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
chat_model = ChatOpenAI(openai_api_key=api_key(), model_name="gpt-4-1106-preview", temperature=0.2, model_kwargs=
                        {"frequency_penalty": 0, "presence_penalty": 0, "top_p": 1})
System_Message = Systemprompt("hello.txt")
Human_Message = "generate python code to {description} "
print("hi")
print(Human_Message)
chat_prompt = ChatPromptTemplate.from_messages([
    ("system", System_Message),
    ("human", Human_Message),
])
```
### Expected behavior
The expected behavior is that this shouldn't happen, and I should get Python code.
| Issue with ChatPromptTemplate | https://api.github.com/repos/langchain-ai/langchain/issues/15093/comments | 4 | 2023-12-23T08:09:22Z | 2024-03-31T16:06:10Z | https://github.com/langchain-ai/langchain/issues/15093 | 2,054,713,803 | 15,093 |
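A likely cause (my reading of the error, not confirmed in the issue): `Expected: ['', 'description']` means the template found a variable with an empty name — i.e. once the system-prompt file grows past a certain point it contains a literal `{}` or other braces that `ChatPromptTemplate` treats as placeholders. Escaping braces by doubling them is the usual fix; a sketch:

```python
def escape_braces(text):
    """Double every brace so a .format()-style template treats them as literals."""
    return text.replace("{", "{{").replace("}", "}}")

system_message = escape_braces("Reply with JSON like {} when unsure.")
```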
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
````python
def generate_custom_prompt(new_project_qa, query, name, not_uuid):
    check = query.lower()
    result = new_project_qa(query)
    relevant_document = result['source_documents']
    context_text = "\n\n---\n\n".join([doc.page_content for doc in relevant_document])
    user_experience_inst = UserExperience.objects.get(not_uuid=not_uuid)
    greetings = ['hi', 'hello', 'hey', 'hui', 'hiiii', 'hii', 'hiii', 'heyyy']
    if check in greetings:
        custom_prompt_template = f"""
        Just simply reply with "Hello {name}! How can I assist you today?"
        """
    elif check not in greetings and user_experience_inst.custom_prompt:
        custom_prompt_template = f"""You are a chatbot designed to provide answers to User's Questions, delimited by triple backticks.
        Generate your answer to match the user's requirements: {user_experience_inst.custom_prompt}
        If you encounter a question for which you don't know the answer, please respond with 'I don't know' and refrain from making up an answer.
        - Before saying 'I don't know,' please re-verify your vector store to ensure the answer is not present in the database.
        Remember, your goal is to assist the user in the best way possible. If the question is unclear or ambiguous, feel free to ask for clarification.
        User's Question: ```{check}```
        AI Answer:"""
    else:
        # Create the custom prompt template
        custom_prompt_template = f"""Answer the question based only on the following context: ```{context_text} ```
        You are a chatbot designed to provide answers in details to User's Question: ```{check} ``` which is delimited by triple backticks.
        Generate your answer in points in the following format:
        1. Point no 1
           1.1 Its subpoint in details
           1.2 More information if needed.
        2. Point no 2
           2.1 Its subpoint in details
           2.2 More information if needed.
        …
        N. Another main point.
        If you encounter a question for which you don't know the answer based on the predefined points, please respond with 'I don't know' and refrain from making up an answer.
        However, if the answer is not present in the predefined points, then provide comprehensive information related to the user's query.
        Remember, your goal is to assist the user in the best way possible. If the question is unclear or ambiguous, you can ask for clarification.
        User's Question: ```{check} ```
        AI Answer:"""

    # Create the PromptTemplate
    custom_prompt = ChatPromptTemplate(
        template=custom_prompt_template, input_variables=["check", "context_text"]
    )
    formatted_prompt = custom_prompt.format()
    return formatted_prompt
````
#below is the error I am getting
Traceback (most recent call last):
File "/home/hs/env/lib/python3.8/site-packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
File "/home/hs/env/lib/python3.8/site-packages/django/core/handlers/base.py", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/hs/CustomBot/chatbot/views.py", line 366, in GetChatResponse
custom_message=generate_custom_prompt(chat_qa,query,name,not_uuid)
File "/home/hs/CustomBot/accounts/common_langcain_qa.py", line 70, in generate_custom_prompt
custom_prompt = ChatPromptTemplate(
File "/home/hs/env/lib/python3.8/site-packages/langchain_core/load/serializable.py", line 97, in __init__
super().__init__(**kwargs)
File "pydantic/main.py", line 339, in pydantic.main.BaseModel.__init__
File "pydantic/main.py", line 1050, in pydantic.main.validate_model
File "/home/hs/env/lib/python3.8/site-packages/langchain_core/prompts/chat.py", line 449, in validate_input_variables
messages = values["messages"]
KeyError: 'messages'
### Suggestion:
_No response_ | Issue: Getting error while using ChatPromptTemplate | https://api.github.com/repos/langchain-ai/langchain/issues/15089/comments | 6 | 2023-12-23T05:10:34Z | 2024-04-18T16:21:18Z | https://github.com/langchain-ai/langchain/issues/15089 | 2,054,676,213 | 15,089 |
[
"hwchase17",
"langchain"
] | ### System Info
Python 3.11
langchain 0.0.352
langchain-core 0.1.3
langchain-community 0.0.4 (works with neither `from langchain.llms import OpenAI` nor `langchain.chat_models import ChatOpenAI`)
langchain-community 0.0.2 (works as expected with `from langchain.llms import OpenAI` but it doesn't with `langchain.chat_models import ChatOpenAI`)
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
#### The following code works fine with `langchain-community 0.0.2`:
Please refer to this [LangSmith run](https://smith.langchain.com/public/1c6c7960-e3b7-42fc-8835-6b78520e6580/r)
```python
import config
from langchain.vectorstores.redis import Redis
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.storage import RedisStore
from langchain.embeddings import OpenAIEmbeddings, CacheBackedEmbeddings
from langchain.chains.query_constructor.schema import AttributeInfo
from langchain.llms import OpenAI
embed_store = RedisStore(redis_url=config.REDIS_URL, client_kwargs={
'db': 1}, namespace='embedding_cache')
underlying_embeddings = OpenAIEmbeddings()
embeddings = CacheBackedEmbeddings.from_bytes_store(
underlying_embeddings, embed_store, namespace=underlying_embeddings.model
)
metadata_field_info = [
AttributeInfo(
name="source",
description="The source URL or book title where the document comes from.",
type="string",
),
AttributeInfo(
name="title",
description="The title where the text was taken from. Use this attribute to filter the User Query, but don't filter for exact matches. For example: 'Qué dice el código de trabajo', the filter could be 'Código de Trabajo'; 'Qué dice la ley sobre el teletrabajo', the filter could be 'Teletrabajo'.",
type="string",
),
AttributeInfo(
name="doc_type",
description="Type of document classification to be used only as Filter for the User Query. Laws or Labor Code go under 'Legislación'. Company, Organizational or employer information go under 'Organización'. Company Policies go under 'Política'. Company internal procedures go under 'Procedimiento'.",
type="string",
),
AttributeInfo(
name="keywords",
description="A list of keywords taken from the document to filter the query. Always use this attribute to filter the query when a specific article number is needed. For example: 'Qué dice el artículo 10 del código de trabajo', you must capitalize the words 'articulo' to 'ARTÍCULO', and filter 'ARTÍCULO 10'.",
type="string",
),
]
document_content_description = "Data source comprised of the entire contents of the Costa Rican Labor Code and other related Laws: 1) Código de trabajo; 2) Ley de protección al trabajador; 3) Ley de acoso sexual; 4) Ley de teletrabajo; y 5) Ley de Protección de Datos Personales."
llm = OpenAI(temperature=0.0)
rds_store = Redis.from_existing_index(
embeddings,
index_name=config.INDEX_NAME,
redis_url=config.REDIS_URL,
schema='./Redis_schema.yaml'
)
selfq_retriever = SelfQueryRetriever.from_llm(
llm,
rds_store,
document_content_description,
metadata_field_info,
enable_limit=False,
# verbose=True,
)
retriever = rds_store.as_retriever()
```
By just changing `from langchain.llms import OpenAI` to `from langchain.chat_models import ChatOpenAI` or by upgrading `langchain-community` to version 0.0.4, the query output is as follows and the retrieval doesn't work as intended:
```json
{
"id": [
"langchain",
"chains",
"query_constructor",
"ir",
"StructuredQuery"
],
"lc": 1,
"repr": "StructuredQuery(query='articulo 143', filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='title', value='Código de Trabajo'), limit=None)",
"type": "not_implemented"
}
```
Please refer to this [LangSmith run](https://smith.langchain.com/public/2d4732f0-8712-4e84-9af2-5d13ffc6cb93/r) for the unsuccessful retrieval
### Expected behavior
This is the expected result:
```json
{
"id": [
"langchain",
"chains",
"query_constructor",
"ir",
"StructuredQuery"
],
"lc": 1,
"repr": "StructuredQuery(query=' ', filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='title', value='Código de Trabajo'), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='doc_type', value='Legislación'), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='keywords', value='ARTÍCULO 143')]), limit=None)",
"type": "not_implemented"
}
``` | SelfQueryRetriever broken with latest langchain-community or using ChatOpenAI as llm | https://api.github.com/repos/langchain-ai/langchain/issues/15087/comments | 1 | 2023-12-23T02:55:37Z | 2024-03-30T16:06:46Z | https://github.com/langchain-ai/langchain/issues/15087 | 2,054,631,468 | 15,087 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
#### Issue Description
- **Overview**: The current documentation for the 'Return Source Documents' functionality seems to be outdated or incorrect. The provided code snippet results in errors when executed.
https://python.langchain.com/docs/integrations/providers/vectara/vectara_chat#conversationalretrievalchain-with-question-answering-with-sources
- **Details**:
- The current code:
```python
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = bot({"question": query, "chat_history": chat_history})
```
produces the following error:
```
/python3.9/site-packages/langchain/memory/chat_memory.py", line 29, in _get_input_output
raise ValueError(f"One output key expected, got {outputs.keys()}")
ValueError: One output key expected, got dict_keys(['answer', 'source_documents'])
```
- **Examples**: This issue occurs when trying to use the 'Return Source Documents' as outlined in the current documentation.
#### Additional Information
- **Related Issue**: This documentation update is related to the issue raised in https://github.com/langchain-ai/langchain/issues/2256.
### Idea or request for content:
#### Suggested Fix
- Update the documentation with the correct code snippet:
```python
memory = ConversationBufferMemory(memory_key="chat_history", input_key='question', output_key='answer', return_messages=True)
bot = ConversationalRetrievalChain.from_llm(llm, retriever, memory=memory, return_source_documents=True)
result = bot({"question": query,})
```
- This revision correctly handles the output and does not produce the aforementioned error. | DOC: Documentation Update Needed for 'Return Source Documents' Functionality | https://api.github.com/repos/langchain-ai/langchain/issues/15086/comments | 2 | 2023-12-23T02:21:36Z | 2024-03-30T16:06:41Z | https://github.com/langchain-ai/langchain/issues/15086 | 2,054,623,466 | 15,086 |
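Why `output_key='answer'` fixes it — a framework-free stand-in for the memory's `_get_input_output` selection logic, mirroring the error in the traceback:

```python
def pick_output(outputs, output_key=None):
    """Pick the value memory should save: the explicit key, or the only key present."""
    if output_key is not None:
        return outputs[output_key]
    if len(outputs) != 1:
        raise ValueError(f"One output key expected, got {outputs.keys()}")
    return next(iter(outputs.values()))

result = {"answer": "ok", "source_documents": []}
```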
[
"hwchase17",
"langchain"
] | ### System Info
I'm trying to implement this in SageMaker with Bedrock Claude v2:
https://github.com/langchain-ai/langchain/blob/master/templates/rag-aws-bedrock/rag_aws_bedrock/chain.py
Here is my code
```
import os
from langchain.embeddings import BedrockEmbeddings
from langchain.llms.bedrock import Bedrock
from langchain.prompts import ChatPromptTemplate
from langchain.vectorstores import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.runnables import RunnableParallel, RunnablePassthrough
region = "us-east-1"
model_id="anthropic.claude-v2"
# Set LLM and embeddings
model = Bedrock(
model_id=model_id,
region_name=region,
model_kwargs={"max_tokens_to_sample": 200},
)
bedrock_embeddings = BedrockEmbeddings(model_id="amazon.titan-embed-text-v1")
# Add to vectorDB
vectorstore = FAISS.from_texts(
["harrison worked at kensho"], embedding=bedrock_embeddings
)
retriever = vectorstore.as_retriever()
# Get retriever from vectorstore
retriever = vectorstore.as_retriever()
# RAG prompt
template = """Answer the question based only on the following context:
{context}
Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)
# RAG
chain = (
RunnableParallel({"context": retriever, "question": RunnablePassthrough()})
| prompt
| model
| StrOutputParser()
)
```
I got an unexpected error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[47], line 40
35 prompt = ChatPromptTemplate.from_template(template)
38 # RAG
39 chain = (
---> 40 RunnableParallel({"context": retriever, "question": RunnablePassthrough()})
41 | prompt
42 | model
43 | StrOutputParser()
44 )
TypeError: Expected a Runnable, callable or dict.Instead got an unsupported type: <class 'langchain.schema.vectorstore.VectorStoreRetriever'>
Name: langchain
Version: 0.0.352
Name: langchain-core
Version: 0.1.2
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. !pip install langchain==0.0.352
2.
```
import os
from langchain.embeddings import BedrockEmbeddings
from langchain.llms.bedrock import Bedrock
from langchain.prompts import ChatPromptTemplate
from langchain.vectorstores import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.runnables import RunnableParallel, RunnablePassthrough
region = "us-east-1"
model_id="anthropic.claude-v2"
# Set LLM and embeddings
model = Bedrock(
model_id=model_id,
region_name=region,
model_kwargs={"max_tokens_to_sample": 200},
)
bedrock_embeddings = BedrockEmbeddings(model_id="amazon.titan-embed-text-v1")
# Add to vectorDB
vectorstore = FAISS.from_texts(
["harrison worked at kensho"], embedding=bedrock_embeddings
)
retriever = vectorstore.as_retriever()
# Get retriever from vectorstore
retriever = vectorstore.as_retriever()
# RAG prompt
template = """Answer the question based only on the following context:
{context}
Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)
# RAG
chain = (
RunnableParallel({"context": retriever, "question": RunnablePassthrough()})
| prompt
| model
| StrOutputParser()
)
```
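For what it's worth, this failure mode usually indicates that the `VectorStoreRetriever` class in the installed `langchain` is not in the class hierarchy that the installed `langchain-core`'s `coerce_to_runnable` checks against (a mixed-versions symptom). The sketch below is a toy mirror of that dispatch — `coerce` and `ForeignRetriever` are illustrative stand-ins, not the real library code — showing why wrapping the retriever in a plain function is a workable stopgap:

```python
from typing import Any

def coerce(thing: Any):
    """Toy stand-in for langchain_core's coerce_to_runnable dispatch:
    it accepts callables and dicts, and rejects anything else."""
    if callable(thing):
        return ("callable", thing)
    if isinstance(thing, dict):
        return ("dict", thing)
    raise TypeError(
        f"Expected a Runnable, callable or dict."
        f"Instead got an unsupported type: {type(thing)}"
    )

class ForeignRetriever:
    """Stand-in for a retriever class that is not recognised as a Runnable."""
    def get_relevant_documents(self, query):
        return [f"doc for {query}"]

retriever = ForeignRetriever()

# Passing the instance directly fails the same way the issue reports:
try:
    coerce(retriever)
    direct_ok = True
except TypeError:
    direct_ok = False

# Wrapping the call in a plain function takes the `callable` branch instead:
kind, fn = coerce(lambda q: retriever.get_relevant_documents(q))
docs = fn("harrison")
```

Under that assumption, `RunnableParallel({"context": lambda q: retriever.get_relevant_documents(q), "question": RunnablePassthrough()})` should coerce cleanly — though aligning (or cleanly reinstalling) the `langchain` and `langchain-core` versions is the real fix.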
### Expected behavior
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[47], line 40
35 prompt = ChatPromptTemplate.from_template(template)
38 # RAG
39 chain = (
---> 40 RunnableParallel({"context": retriever, "question": RunnablePassthrough()})
41 | prompt
42 | model
43 | StrOutputParser()
44 )
File /opt/conda/lib/python3.10/site-packages/langchain_core/runnables/base.py:1937, in RunnableParallel.__init__(self, _RunnableParallel__steps, **kwargs)
1934 merged = {**__steps} if __steps is not None else {}
1935 merged.update(kwargs)
1936 super().__init__(
-> 1937 steps={key: coerce_to_runnable(r) for key, r in merged.items()}
1938 )
File /opt/conda/lib/python3.10/site-packages/langchain_core/runnables/base.py:1937, in <dictcomp>(.0)
1934 merged = {**__steps} if __steps is not None else {}
1935 merged.update(kwargs)
1936 super().__init__(
-> 1937 steps={key: coerce_to_runnable(r) for key, r in merged.items()}
1938 )
File /opt/conda/lib/python3.10/site-packages/langchain_core/runnables/base.py:3232, in coerce_to_runnable(thing)
3230 return cast(Runnable[Input, Output], RunnableParallel(thing))
3231 else:
-> 3232 raise TypeError(
3233 f"Expected a Runnable, callable or dict."
3234 f"Instead got an unsupported type: {type(thing)}"
3235 )
TypeError: Expected a Runnable, callable or dict.Instead got an unsupported type: < | Can't use RunnablePassthrough | https://api.github.com/repos/langchain-ai/langchain/issues/15085/comments | 3 | 2023-12-23T00:38:06Z | 2024-03-30T16:06:36Z | https://github.com/langchain-ai/langchain/issues/15085 | 2,054,597,980 | 15,085 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
SQLDatabaseChain is throwing a TypeError when executing the .run or .invoke functions. All arguments or kwargs passed to the class are valid, but the error persists. I have traced back to the database and there is a valid connection. My code and error are below:
```python
sqlite_db = SQLDatabase.from_uri(
"sqlite:///./sqlite/ibdss.sqlite3",
sample_rows_in_table_info=2,
include_tables=["colts"]
)
db_chain = SQLDatabaseChain.from_llm(
llm,
sqlite_db,
return_direct=True,
use_query_checker=True,
return_intermediate_steps=True,
verbose=True
)
sqlite_response = db_chain.invoke({"query": "How many generators have over 10,000 hours of operation?"})
```
The issue happens when using .run as well.
Error Output:
TypeError: must be real number, not str
The traceback points to the invocation of the chain as the error area.
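One plausible source of this exact message — illustrated here with plain Python string formatting only, not a confirmed root cause inside `SQLDatabaseChain` — is a literal `%` somewhere in the generated SQL or a template being treated as a `%`-format placeholder and then fed a string:

```python
# A literal '%' in a string that later goes through %-formatting with a string
# argument reproduces the exact message from the issue. This only shows where
# such a TypeError can come from.
template = "SELECT * FROM colts WHERE hours > %f"

try:
    template % "10000"   # a str where %f expects a number
    message = None
except TypeError as exc:
    message = str(exc)
```

If the generated SQL or a prompt contains a bare `%`, escaping it as `%%` (or inspecting the chain's intermediate steps for the offending string) is a reasonable first debugging step.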
### Suggestion:
_No response_ | SQLDatabaseChain raising TypeError exception with SQLite | https://api.github.com/repos/langchain-ai/langchain/issues/15077/comments | 11 | 2023-12-22T20:25:25Z | 2024-07-12T18:11:07Z | https://github.com/langchain-ai/langchain/issues/15077 | 2,054,482,659 | 15,077 |
[
"hwchase17",
"langchain"
] | ### System Info
I have observed that commands importing from `langchain.llms` (for example, `from langchain.llms import HuggingFaceHub`) cause no problem when I deploy my web app to hosting sites like Streamlit, Render, Heroku, etc.
However, using `langchain.memory` as in `from langchain.memory import ConversationBufferMemory`
and `langchain.prompts` as in `from langchain.prompts import PromptTemplate`
causes deployment to fail on all said hosting sites. They throw a "module not found" error for `langchain.memory` and `langchain.prompts`.
Please note that running in Google Colab or VSCode does not produce this error.
Also, python version and langchain versions are irrelevant to this problem since I have tested through a lot of variations.
Please help out. Thanks in advance
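A frequently reported cause of this pattern (imports that work locally but fail only on the hosting platform) is the host resolving a different — often much older — `langchain` release, for example because the platform's default Python version limits which versions pip can install, or because `requirements.txt` is unpinned. A hedged sketch, assuming the platform reads `requirements.txt` plus a runtime pin (file names and exact versions below are examples and vary by host):

```
# requirements.txt — pin the exact versions that work locally
langchain==0.0.350
flask
SoundFile
SpeechRecognition

# runtime.txt (Heroku) or the platform's Python-version setting
python-3.10.13
```

Comparing `pip show langchain` output locally against the platform's build log is a quick way to confirm whether the deployed version differs.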
### Who can help?
@hwchase17
@agola11
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
import soundfile as sf
from langchain.llms import HuggingFaceHub
# Below line gives error not in vscode or google colab but in deployment to render, heroku, streamlit and other hosting sites as well. I have not used ConversationBufferMemory in the code below even, since this line causes a problem even at initial import
from langchain.memory import ConversationBufferMemory
import speech_recognition as sr
# Initialize any API keys that are needed
import os
from flask import Flask, render_template, request, session, flash, get_flashed_messages

os.environ["HUGGINGFACEHUB_API_TOKEN"] = "write your own api here"

app = Flask(__name__)
app.secret_key = '123'

@app.route("/LLMEXP", methods=['GET', 'POST'])
def llmexp():
    if 'chat_messages' not in session:
        session['chat_messages'] = []
    conversation = HuggingFaceHub(repo_id="google/flan-t5-small", model_kwargs={"temperature": 0.1, "max_length": 256})
    if request.method == 'POST':
        if 'record_audio' in request.form:
            # Record audio and convert to text
            user_input = request.form['user_input']
        else:
            # User input from textarea
            user_input = request.form['user_input']
        if not user_input or user_input.isspace():
            flash("Please provide a valid input.")
        else:
            user_message = {'role': 'user', 'content': f"User: {user_input}"}
            session['chat_messages'].append(user_message)
            response = conversation(user_input)
            if not response.strip():
                response = "I didn't understand the question."
            #text_to_speech(response)
            assistant_message = {'role': 'assistant', 'content': f"Bot: {response}"}
            session['chat_messages'].append(assistant_message)
            session.modified = True
    return render_template("index.html", chat_messages=session['chat_messages'])

if __name__ == "__main__":
    app.run(debug=True)
### Expected behavior
I expect that I do not receive any error related to "langchain.memory" or "langchain.prompts" error while deploying to a hosting site. | Bugs in importing when Deploying a LangChain web app to multiple hosting platforms! | https://api.github.com/repos/langchain-ai/langchain/issues/15074/comments | 1 | 2023-12-22T19:12:34Z | 2024-03-29T16:08:40Z | https://github.com/langchain-ai/langchain/issues/15074 | 2,054,421,969 | 15,074 |
[
"hwchase17",
"langchain"
] | ### Feature request
Google's `gemini-pro` supports function calling. It would be nice to be able to use langchain to support function calling when using the `VertexAI` class similar to OpenAI and OpenAI's version of function calling: https://cloud.google.com/vertex-ai/docs/generative-ai/multimodal/function-calling
### Motivation
Here's a notebook where to access this functionality you have to use the `vertexai` library directly which means we lose the langchain standardization of input and output schemas: https://github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/function-calling/intro_function_calling.ipynb
### Your contribution
Possibly can help. | Feature request: Vertex AI Function Calling | https://api.github.com/repos/langchain-ai/langchain/issues/15073/comments | 1 | 2023-12-22T19:10:42Z | 2024-03-29T16:08:35Z | https://github.com/langchain-ai/langchain/issues/15073 | 2,054,420,480 | 15,073 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain==0.0.351
python==3.10
### Who can help?
@hwchase17 This should be an easy one to fix.
When using regex in the output parser of the StructuredChatAgent, the output parser cuts off the output at the first ending ``` it finds. For example, if my string was
```
"""```json
{
"action": "Final Answer",
"action_input": "Here is some basic Oceananigans code that sets up a simple simulation environment:\n\n```julia\nusing Oceananigans\n\n# Define the grid\ngrid = RectilinearGrid(size=(64, 64, 64), extent=(1, 1, 1))\n\n# Create a nonhydrostatic model\nmodel = NonhydrostaticModel(grid=grid)\n\n# The model is now ready for further configuration and running a simulation.\n```\n\nThis code initializes a `NonhydrostaticModel` on a 64x64x64 `RectilinearGrid` with an extent of 1x1x1 in each direction. The model is created with default settings, which you can customize according to your simulation needs."
}
```"""
```
Then the output parser would get """```json
{
"action": "Final Answer",
"action_input": "Here is some basic Oceananigans code that sets up a simple simulation environment:\n\n```julia\nusing Oceananigans\n\n# Define the grid\ngrid = RectilinearGrid(size=(64, 64, 64), extent=(1, 1, 1))\n\n# Create a nonhydrostatic model\nmodel = NonhydrostaticModel(grid=grid)\n\n# The model is now ready for further configuration and running a simulation.\n```
which would cause an error.
I have not thoroughly verified this solution, but changing the regex in the structured chat output parser -
https://github.com/langchain-ai/langchain/blob/aad3d8bd47d7f5598156ff2bdcc8f736f24a7412/libs/langchain/langchain/agents/structured_chat/output_parser.py#L23
to pattern = re.compile(r"```json\s*\n(.*?)```(?=\s*```json|\Z)", re.DOTALL) seems to fix the problem.
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
import json
import re

pattern = re.compile(r"```(?:json\s+)?(\W.*?)```", re.DOTALL)
text="""```json
{
"action": "Final Answer",
"action_input": "Here is some basic Oceananigans code that sets up a simple simulation environment:\n\n```julia\nusing Oceananigans\n\n# Define the grid\ngrid = RectilinearGrid(size=(64, 64, 64), extent=(1, 1, 1))\n\n# Create a nonhydrostatic model\nmodel = NonhydrostaticModel(grid=grid)\n\n# The model is now ready for further configuration and running a simulation.\n```\n\nThis code initializes a `NonhydrostaticModel` on a 64x64x64 `RectilinearGrid` with an extent of 1x1x1 in each direction. The model is created with default settings, which you can customize according to your simulation needs."
}
```"""
action_match = pattern.search(text)
response = json.loads(action_match.group(1).strip(), strict=False)
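The two patterns can be compared side by side on a reduced version of the failing output. This is a self-contained sketch of the behaviour described above (the nested code block is shortened for brevity):

```python
import json
import re

# Agent output whose action_input itself contains a fenced code block.
text = (
    "```json\n"
    "{\n"
    '"action": "Final Answer",\n'
    '"action_input": "Example:\n```julia\nx = 1\n```\nDone."\n'
    "}\n"
    "```"
)

old_pattern = re.compile(r"```(?:json\s+)?(\W.*?)```", re.DOTALL)
new_pattern = re.compile(r"```json\s*\n(.*?)```(?=\s*```json|\Z)", re.DOTALL)

# The current pattern stops at the first inner ``` and yields invalid JSON.
old_capture = old_pattern.search(text).group(1).strip()
try:
    json.loads(old_capture, strict=False)
    old_parses = True
except json.JSONDecodeError:
    old_parses = False

# The proposed pattern only accepts a closing ``` that is followed by another
# ```json block or the end of the string, so it spans the inner fence.
new_capture = new_pattern.search(text).group(1).strip()
new_result = json.loads(new_capture, strict=False)
```

Here `old_parses` ends up `False`, while `new_result` is the full parsed action including the nested ```` ```julia ```` fence in `action_input`.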
### Expected behavior
I would expect the code to get the entire json blob block but it doesn't. | Structured Chat Output Parser doesn't work when model outputs a code block with ``` around the code block | https://api.github.com/repos/langchain-ai/langchain/issues/15069/comments | 3 | 2023-12-22T17:40:20Z | 2024-04-09T16:14:46Z | https://github.com/langchain-ai/langchain/issues/15069 | 2,054,285,645 | 15,069 |
[
"hwchase17",
"langchain"
] | in retrievalQa from langchain, we have a retriever that retrieves docs from a vector db and provides a context to the llm, let's say i'm using gpt3.5 whose max tokens is 4096... how do i handle huge context to be sent to it ? any suggestions will be appreciated | send context of docs through Chroma().as_retriever multiple times in the same conversation | https://api.github.com/repos/langchain-ai/langchain/issues/15062/comments | 1 | 2023-12-22T13:52:13Z | 2024-03-29T16:08:31Z | https://github.com/langchain-ai/langchain/issues/15062 | 2,053,953,258 | 15,062 |
[
"hwchase17",
"langchain"
] | ### Discussed in https://github.com/langchain-ai/langchain/discussions/15060
<div type='discussions-op-text'>
<sup>Originally posted by **ShehneelAhmedKhan** December 22, 2023</sup>
This is my code:
llm = OpenAI(temperature=0, model_name="gpt-3.5-turbo")
db_chain = SQLDatabaseSequentialChain.from_llm(llm=llm, database=db, verbose=True, top_k=0) # Can use top_k if face error
return db_chain.run(message)
Output:

The answer is a refusal, although the SQLResult actually contains the answer.
Note: This is happening only in this table, other tables are being answered correctly.
using langchain==0.0.175
Any suggestion?</div> | Using SQLDatabaseSequentialChain, discrepancy between Answer and SQLResult | https://api.github.com/repos/langchain-ai/langchain/issues/15061/comments | 2 | 2023-12-22T13:48:55Z | 2024-03-29T16:08:26Z | https://github.com/langchain-ai/langchain/issues/15061 | 2,053,949,456 | 15,061 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I want to force the model to call a specific function, but I couldn't find how to use `tool_choice` with AgentExecutor in the docs.
Please give me a demo. Thanks.
### Suggestion:
_No response_ | How to use tool_choice with initialize_agent? | https://api.github.com/repos/langchain-ai/langchain/issues/15059/comments | 3 | 2023-12-22T13:20:20Z | 2024-03-29T16:08:20Z | https://github.com/langchain-ai/langchain/issues/15059 | 2,053,915,647 | 15,059 |
[
"hwchase17",
"langchain"
] | ### Feature request
The `GoogleDriveLoader` currently supports 3 different ways to authenticate a user.
1. Via a Service Account File
2. Via a Token File
3. Via a Live server
All those ways work perfectly, but I'm missing a way to authenticate the user via an existing JWT. There is a workaround by saving the JWT into a token file and providing this file via the `token_path` parameter. But there should also be a way to provide exactly these credentials via a parameter.
### Motivation
Maybe there is a use case where multiple accounts are required, or the user wants to authenticate over a third-party website running on an external server (not localhost). I think it would be useful to have a parameter that allows the functionality to pass a JWT without having to store it first in a file.
### Your contribution
I'm willing to submit a PR. | Make it possible to give credentials directly via parameter on GoogleDriveLoader | https://api.github.com/repos/langchain-ai/langchain/issues/15058/comments | 3 | 2023-12-22T12:59:29Z | 2024-06-30T23:27:12Z | https://github.com/langchain-ai/langchain/issues/15058 | 2,053,889,545 | 15,058 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hi guys, this world requires a Langchain framework written in the Rust language, and Python is not the future of AI.
### Suggestion:
_No response_ | this world requires a Langchain framework written in the Rust language | https://api.github.com/repos/langchain-ai/langchain/issues/15057/comments | 3 | 2023-12-22T11:01:51Z | 2024-03-29T16:08:15Z | https://github.com/langchain-ai/langchain/issues/15057 | 2,053,758,780 | 15,057 |
[
"hwchase17",
"langchain"
] | ### Feature request
The [Snowflakeconnector](https://docs.snowflake.com/en/developer-guide/python-connector/python-connector-api#functions) supports authentication via browser. It would be nice if the [Langchain Snowflake Loader](https://python.langchain.com/docs/integrations/document_loaders/snowflake) also supports this feature
### Motivation
Basic authentication works in most cases. But there might be a use case where the user wants to use OAuth to log into his Snowflake instance.
### Your contribution
I'm willing to submit a PR | Add external browser authentication for Snowflake. | https://api.github.com/repos/langchain-ai/langchain/issues/15056/comments | 1 | 2023-12-22T10:43:44Z | 2024-03-29T16:08:10Z | https://github.com/langchain-ai/langchain/issues/15056 | 2,053,735,631 | 15,056 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hi,
I want to use the ContextualCompressionRetriever and am wondering what the prompt looks like, and whether it can be used for non-English languages (e.g. German)?
I am using the ContextualCompressionRetriever at the moment and realized that my LLM responses often switch to English, so I am assuming that the prompt for the ContextualCompressionRetriever is in English and the model gets confused about which language to use.
Any recommendations?
### Suggestion:
_No response_ | ContextualCompressionRetriever for non-english languages | https://api.github.com/repos/langchain-ai/langchain/issues/15052/comments | 1 | 2023-12-22T08:16:48Z | 2024-03-29T16:08:05Z | https://github.com/langchain-ai/langchain/issues/15052 | 2,053,554,200 | 15,052 |
[
"hwchase17",
"langchain"
] | ### System Info
When I use the SQL agent, I want to get the table description, but the agent can't manage it. I am sure the REPICA.ICA_PERSON_DATA_ALL table exists.
Question: Describe the REPICA.ICA_PERSON_DATA_ALL table
Thought: I should query the schema of the REPICA.ICA_PERSON_DATA_ALL table to get information about its columns and data types.
Action: sql_db_schema
Action Input: REPICA.ICA_PERSON_DATA_ALL
Observation: Error: table_names {'REPICA.ICA_PERSON_DATA_ALL'} not found in database
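A likely explanation is that the `SQLDatabase` wrapper registers unqualified table names, so the schema-qualified name the agent passes can never match. The check below is a simplified stand-in for the real tool, just to illustrate the mismatch; if that is the cause, passing `schema="REPICA"` to `SQLDatabase.from_uri(...)` (a parameter recent langchain versions expose) and referring to the table as `ICA_PERSON_DATA_ALL` should let the tool find it:

```python
# Toy mirror of the table-name check behind the sql_db_schema tool: the
# database wrapper knows unqualified names, so a schema-qualified request
# never matches. Table names are from the issue; the check itself is a
# simplified stand-in, not the actual langchain code.
usable_table_names = {"ICA_PERSON_DATA_ALL"}   # what the wrapper registered

def schema_tool(table_names: set) -> str:
    missing = table_names - usable_table_names
    if missing:
        return f"Error: table_names {missing} not found in database"
    return "OK"

qualified = schema_tool({"REPICA.ICA_PERSON_DATA_ALL"})   # reproduces the error
unqualified = schema_tool({"ICA_PERSON_DATA_ALL"})        # matches
```

So the two-part fix sketch would be: reflect the right schema up front (`SQLDatabase.from_uri(uri, schema="REPICA", include_tables=["ICA_PERSON_DATA_ALL"])`) and have the agent use the bare table name.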
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
llm = ChatOpenAI(
    openai_api_base=openai_api_base,
    temperature=llm_temp,
    openai_api_key=openai_api_key,
    model_name=llm_model_name,
)
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
agent_executor = create_sql_agent(llm = llm, toolkit=toolkit, prefix=prefix, verbose=True, agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION, top_k=10)
agent_executor.run("Describe the REPICA.ICA_PERSON_DATA_ALL table")
### Expected behavior
i need get REPICA.ICA_PERSON_DATA_ALL desc info | oracle db when use sql agent can't find table name[Error: table_names] | https://api.github.com/repos/langchain-ai/langchain/issues/15051/comments | 1 | 2023-12-22T07:38:33Z | 2024-03-29T16:08:00Z | https://github.com/langchain-ai/langchain/issues/15051 | 2,053,514,880 | 15,051 |
[
"hwchase17",
"langchain"
] | ### System Info
Python 3.10.10
langchain 0.0.350
langchain-community 0.0.3
langchain-core 0.1.1
google-search-results 2.4.2
Windows
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
**Summary**
Firstly, I ask the LangChain agent using code below and get a good response.
`result = agent_executor({"What is the recommended dose of doxycycline for acne for a male aged 12 years old?"})`
However, I ask a new follow-up question by changing the query and get an error.
`result = agent_executor({"What other forms of acne treatment are there for male teenagers around the age of 12 years old ?"})`
`BadRequestError: Error code: 400 - {'error': {'message': "'$.messages[1].content' is invalid.`
**Code Used**
```
model = AzureChatOpenAI(
openai_api_version="2023-12-01-preview",
azure_deployment="35-16k",
temperature=0
)
# Define which tools the agent can use to answer user queries
SERPAPI_API_KEY='------'
search = SerpAPIWrapper(serpapi_api_key=SERPAPI_API_KEY)
tools= [
Tool(
name="Search",
func=search.run,
description="useful for when you need to answer questions about current events"
)
]
# This is needed for both the memory and the prompt
memory_key = "history"
from langchain.agents.openai_functions_agent.agent_token_buffer_memory import (
AgentTokenBufferMemory,
)
memory = AgentTokenBufferMemory(memory_key=memory_key, llm=model, max_token_limit = 13000)
from langchain.agents.openai_functions_agent.base import OpenAIFunctionsAgent
from langchain.prompts import MessagesPlaceholder
from langchain_core.messages import SystemMessage
system_message = SystemMessage(
content=(
"You are an expert Doctor."
"Use the web search tool to gain insights from web search, summarise the results and provide an answer."
"Keep the source links during generation to produce inline citation."
"Cite search results using [${{number}}] notation. Only cite the most \
relevant results that answer the question accurately. Place these citations at the end \
of the sentence or paragraph that reference them - do not put them all at the end. If \
different results refer to different entities within the same name, write separate \
answers for each entity. If you want to cite multiple results for the same sentence, \
format it as `[${{number1}}] [${{number2}}]`. However, you should NEVER do this with the \
same number - if you want to cite `number1` multiple times for a sentence, only do \
`[${{number1}}]` not `[${{number1}}] [${{number1}}]`"
"From the retrieved results, generate an answer."
"At the end of the answer, show the source to the [${{number}}]. Follow this format, every [${{number}}] notation is followed by the corresponding source link "
"You should use bullet points in your answer for readability. Put citations where they apply \
rather than putting them all at the end."
)
)
prompt = OpenAIFunctionsAgent.create_prompt(
system_message=system_message,
extra_prompt_messages=[MessagesPlaceholder(variable_name=memory_key)],
)
#The Agent
agent = OpenAIFunctionsAgent(llm=model, tools=tools, prompt=prompt)
#The Agent Executor
from langchain.agents import AgentExecutor
agent_executor = AgentExecutor(
agent=agent,
tools=tools,
memory=memory,
verbose=True,
return_intermediate_steps=True,
)
result = agent_executor({"What is the recommended dose of doxycycline for acne for a male aged 12 years old?"})
```
**Output**
Shows good results
**Next** I run `agent_executor` with another question and this is where I get a bug.
result = agent_executor({"_new question_"})
Even if I just repeat the question, i.e. use the initial one, I get this error.
**Error**
`---------------------------------------------------------------------------
BadRequestError Traceback (most recent call last)
Cell In[15], line 1
----> 1 result = agent_executor({"What is the recommended dose of doxycycline for acne for a male aged 12 years old?"})
File /anaconda/envs/langchain/lib/python3.10/site-packages/langchain/chains/base.py:312, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    310 except BaseException as e:
    311     run_manager.on_chain_error(e)
--> 312     raise e
    313 run_manager.on_chain_end(outputs)
    314 final_outputs: Dict[str, Any] = self.prep_outputs(
    315     inputs, outputs, return_only_outputs
    316 )
File /anaconda/envs/langchain/lib/python3.10/site-packages/langchain/chains/base.py:306, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    299 run_manager = callback_manager.on_chain_start(
    300     dumpd(self),
    301     inputs,
    302     name=run_name,
    303 )
    304 try:
    305     outputs = (
--> 306         self._call(inputs, run_manager=run_manager)
    307         if new_arg_supported
    308         else self._call(inputs)
...
(...)
    937     stream_cls=stream_cls,
    938 )
BadRequestError: Error code: 400 - {'error': {'message': "'$.messages[1].content' is invalid. Please check the API reference: https://platform.openai.com/docs/api-reference.", 'type': 'invalid_request_error', 'param': None, 'code': None}}`
### Expected behavior
Formatted normal output | Error Code 400. Can ask follow up questions with agent_executor | https://api.github.com/repos/langchain-ai/langchain/issues/15050/comments | 4 | 2023-12-22T07:06:47Z | 2024-04-11T16:17:10Z | https://github.com/langchain-ai/langchain/issues/15050 | 2,053,484,432 | 15,050 |
[
"hwchase17",
"langchain"
] | ### Feature request
Microsoft's new model called [Phi](https://www.microsoft.com/en-us/research/blog/phi-2-the-surprising-power-of-small-language-models/) seems interesting...
### Motivation
Performance of smaller models 2-3B params can compare to models with 7B params.
### Your contribution
I don't know enough to do so, otherwise I would submit a PR... | add Microsoft/Phi support | https://api.github.com/repos/langchain-ai/langchain/issues/15049/comments | 1 | 2023-12-22T06:28:54Z | 2024-03-29T16:07:55Z | https://github.com/langchain-ai/langchain/issues/15049 | 2,053,449,472 | 15,049 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
When I use the agent tool, I need to verify whether there are parameters in the problem. If there are no parameters, I will remind the user to input them. How can I implement this?
### Suggestion:
_No response_ | When I use the agent tool, I need to verify whether there are parameters in the problem. If there are no parameters, I will remind the user to input them. How can I implement this? | https://api.github.com/repos/langchain-ai/langchain/issues/15048/comments | 2 | 2023-12-22T06:18:00Z | 2024-03-29T16:07:50Z | https://github.com/langchain-ai/langchain/issues/15048 | 2,053,440,880 | 15,048 |
[
"hwchase17",
"langchain"
] | ### System Info
Langchain==0.0.352
MacOS intel version
python==3.8.3
networkx==2.7.1
### Who can help?
@hwchase17 @agola11 I have an issue when initializing GraphQAChain using a networkx graph.
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I made a networkx graph and tried to initialize GraphQAChain but got the below error.
<img width="1010" alt="image" src="https://github.com/langchain-ai/langchain/assets/41593805/d4ba54bc-bf43-4303-8550-8ef75fb4b7e0">
<img width="1482" alt="image" src="https://github.com/langchain-ai/langchain/assets/41593805/c4286425-7df7-4c3a-bbc5-3e6e92a69786">
### Expected behavior
I would like to initialize GraphQAChain and chat with the network.
Please help! | GraphQAChain not working when using Networkx graphs !! | https://api.github.com/repos/langchain-ai/langchain/issues/15046/comments | 6 | 2023-12-22T03:28:50Z | 2024-07-27T15:00:45Z | https://github.com/langchain-ai/langchain/issues/15046 | 2,053,324,243 | 15,046 |
[
"hwchase17",
"langchain"
] | ### System Info
LangChain 0.0.348
langchain-nvidia-trt 0.0.1rc0
Python 3.11
### Who can help?
@jdye64
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
You can set up the triton server as in the quick start guide here using a nemotron model:
https://github.com/fciannella/langchain-fciannella/blob/master/libs/partners/nvidia-trt/README.md
Then you can just try sending a request from the LangChain plugin and it will not work.
Same thing if you follow the more complex setup.
The input and output parameters need to be discovered by the client. One option is to use pytriton as a client; another would be to pull the parameters from the server via an API call that returns the Triton configuration.
### Expected behavior
The client should provide a list of the mandatory parameters in the error code in case there is a missing parameter in the request. | NVIDIA Triton+TRT-LLM connector needs to handle dynamic model parameters | https://api.github.com/repos/langchain-ai/langchain/issues/15045/comments | 2 | 2023-12-22T02:17:48Z | 2024-06-08T16:08:15Z | https://github.com/langchain-ai/langchain/issues/15045 | 2,053,274,172 | 15,045 |
[
"hwchase17",
"langchain"
] | https://github.com/langchain-ai/langchain/blob/2460f977c5c20073b41803c41fd08945be34cd60/libs/langchain/langchain/agents/output_parsers/openai_functions.py#L49
Even though you pass your own customized function, gpt-3.5 will often return a function call whose name is "python" and whose arguments are not in JSON format.
I would suggest checking the function name here (an idea borrowed from the Open Interpreter source code).
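A defensive sketch of what that check could look like (a hypothetical helper, not existing LangChain code): verify that the parsed arguments are actually a dict, and fall back gracefully when the model returns a bare code snippet instead of JSON.

```python
import json


def parse_function_arguments(raw_arguments: str) -> dict:
    """Hypothetical defensive parser for function-call arguments.

    Falls back to wrapping the raw string when the model returns
    non-JSON arguments (e.g. a bare Python snippet).
    """
    try:
        parsed = json.loads(raw_arguments)
    except json.JSONDecodeError:
        # Not valid JSON -- wrap the raw text so downstream code still gets a dict.
        return {"__arg1": raw_arguments}
    # json.loads can also succeed with a non-dict (a bare string or number).
    return parsed if isinstance(parsed, dict) else {"__arg1": raw_arguments}


parse_function_arguments('{"word": "educa"}')    # well-formed arguments
parse_function_arguments("print(len('educa'))")  # gpt-3.5 "python" style output
```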
Thanks,
ZD | BUG! The arguments of function calling returned by gpt 3.5 might not be a dict | https://api.github.com/repos/langchain-ai/langchain/issues/15043/comments | 3 | 2023-12-22T01:48:52Z | 2024-04-10T16:14:54Z | https://github.com/langchain-ai/langchain/issues/15043 | 2,053,256,211 | 15,043 |
[
"hwchase17",
"langchain"
] | ### System Info
The example found [here](https://python.langchain.com/docs/integrations/vectorstores/azuresearch) and in particular this code fragment
```
embeddings: OpenAIEmbeddings = OpenAIEmbeddings(deployment=model, chunk_size=1)
index_name: str = "langchain-vector-demo"
vector_store: AzureSearch = AzureSearch(
azure_search_endpoint=vector_store_address,
azure_search_key=vector_store_password,
index_name=index_name,
embedding_function=embeddings.embed_query,
)
```
fails with a message that at the top is:
```
vector_search_configuration is not a known attribute of class <class 'azure.search.documents.indexes.models._index.SearchField'> and will be ignored
semantic_settings is not a known attribute of class <class 'azure.search.documents.indexes.models._index.SearchIndex'> and will be ignored
```
and culminates with:
```
HttpResponseError: (InvalidRequestParameter) The request is invalid. Details: definition : The vector field 'content_vector' must have the property 'vectorSearchConfiguration' set.
Code: InvalidRequestParameter
Message: The request is invalid. Details: definition : The vector field 'content_vector' must have the property 'vectorSearchConfiguration' set.
Exception Details: (InvalidField) The vector field 'content_vector' must have the property 'vectorSearchConfiguration' set. Parameters: definition
Code: InvalidField
Message: The vector field 'content_vector' must have the property 'vectorSearchConfiguration' set. Parameters: definition
```
I am running with `python 3.10`, `openai 1.5.0`, `langchain 0.0.351` and `azure-search-documents 11.4.0`. If I revert to `azure-search-documents 11.4.0b8` the code works.
This appears to be related to the November 2023 Microsoft API change which introduced the concept of a "profile" which aggregates various vector search settings under one name. As a result of this API change, the old "vector_search_configuration" was deprecated and a new "vector_search_profile" was added, along with a new "profiles" object. It seems that the Langchain extension has not been updated for this change and expects the old "vector_search_configuration" property which doesn't exist on newer SDK releases.
See the discussion [here](https://stackoverflow.com/questions/77682544/vector-search-configuration-is-not-a-known-attribute-of-class-class-azure-sear/77694628#77694628).
### Who can help?
@hwchase17
@bas
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The following sample code fragment from [here](https://python.langchain.com/docs/integrations/vectorstores/azuresearch) fails.
```
import os
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores.azuresearch import AzureSearch
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "YOUR_OPENAI_ENDPOINT"
os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"
os.environ["OPENAI_API_VERSION"] = "2023-05-15"
model: str = "text-embedding-ada-002"
vector_store_address: str = "YOUR_AZURE_SEARCH_ENDPOINT"
vector_store_password: str = "YOUR_AZURE_SEARCH_ADMIN_KEY"
embeddings: OpenAIEmbeddings = OpenAIEmbeddings(deployment=model, chunk_size=1)
index_name: str = "langchain-vector-demo"
vector_store: AzureSearch = AzureSearch(
azure_search_endpoint=vector_store_address,
azure_search_key=vector_store_password,
index_name=index_name,
embedding_function=embeddings.embed_query,
)
```
### Expected behavior
The code should run without failure and return a vector store. | AzureSearch Bug -- langchain.vectorstores.azuresearch | https://api.github.com/repos/langchain-ai/langchain/issues/15039/comments | 5 | 2023-12-21T23:45:19Z | 2024-06-01T00:07:40Z | https://github.com/langchain-ai/langchain/issues/15039 | 2,053,180,711 | 15,039 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Seeing an issue in my code that appeared out of nowhere, hoping for some support here. The error message I am seeing is `ValueError: Could not parse output: Answer to inquiry from OpenAI. Score: 90` from the output_parsers/regex.py (https://github.com/langchain-ai/langchain/blob/v0.0.257/libs/langchain/langchain/output_parsers/regex.py#L35)
Langchain version: v0.0.257
LLM OpenAI: https://github.com/langchain-ai/langchain/blob/v0.0.257/libs/langchain/langchain/llms/openai.py
Model: text-davinci-003
Chain type: map_rerank (https://github.com/langchain-ai/langchain/blob/v0.0.257/libs/langchain/langchain/chains/combine_documents/map_rerank.py)
Vector store: Qdrant
Code to reproduce:
```python
import os
from pathlib import Path
from langchain import OpenAI, PromptTemplate
from langchain.chains import RetrievalQAWithSourcesChain
from langchain.memory import FileChatMessageHistory
import logging
from service import db
logging.basicConfig(level=logging.DEBUG)
def conversation(question):
    collection_name = "qdrant_collection_name"
    llm = OpenAI(model='text-davinci-003', openai_api_key=os.environ['OPENAI_API_KEY'], temperature=0.0)

    combine_prompt_template = """
'DOCUMENTS:
{summaries}

QUESTION:
{question}

### Response:
"""
    COMBINE_PROMPT = PromptTemplate(
        template=combine_prompt_template, input_variables=["summaries", "question"]
    )

    qa_chain = RetrievalQAWithSourcesChain.from_chain_type(
        llm,
        chain_type='map_rerank',
        retriever=db.vectorstore(collection_name).as_retriever(),
        return_source_documents=True,
        reduce_k_below_max_tokens=True,
    )
    return qa_chain({"question": question})['answer']


if __name__ == "__main__":
    print(conversation('How do I add sales tax to a transaction in avatax?'))
```
Also posting on Stackoverflow
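For context, map_rerank's output parser applies a fixed regex to the LLM output, and the string above evidently doesn't match it. A more permissive pattern (hypothetical — not the library's default) would still recover the answer and score from exactly that failing output:

```python
import re

# The string that the default parser failed on:
output = "Answer to inquiry from OpenAI. Score: 90"

# A more permissive pattern: capture everything before "Score:" as the
# answer and the trailing digits as the score. (?s) lets "." span newlines.
pattern = re.compile(r"(?s)(.*?)\s*Score:\s*(\d+)")
match = pattern.search(output)
answer, score = match.group(1), int(match.group(2))
```

If this matches your failure mode, one option is to supply a custom map_rerank prompt whose `output_parser` is a `RegexParser` built from such a pattern; the exact wiring depends on your LangChain version.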
### Suggestion:
N/A | Issue: ValueError: Could not parse output: | https://api.github.com/repos/langchain-ai/langchain/issues/15037/comments | 1 | 2023-12-21T23:20:12Z | 2024-03-28T16:08:43Z | https://github.com/langchain-ai/langchain/issues/15037 | 2,053,166,082 | 15,037 |
[
"hwchase17",
"langchain"
] | ## Describe the problem
When doing inference (with the LLM or chat model), we pass an empty list in the POST request whenever the "stop" attribute of `_create_stream` is not set.
This creates a problem when using a model, because Ollama overrides the model's own stop sequence list with the list we pass in the request.
## Solution
I tried this locally: by deleting this [line](https://github.com/langchain-ai/langchain/blob/1b01ee0e3c7f0df5855c7440d471ddab4f0efc7e/libs/community/langchain_community/llms/ollama.py#L162C1-L162C1), the stop list is set by Ollama whenever we have neither a "global stop" (the one defined in the constructor of the Ollama or ChatOllama class) nor a stop sequence passed in the call itself.
Ollama now also supports a new endpoint to retrieve information about the model (GET /api/show), but the response needs parsing, so maybe that's an idea for the future...
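The proposed behaviour can be sketched in plain Python. The function name and payload shape below are illustrative, not Ollama's actual wire format or LangChain's `_create_stream` internals — the point is only the conditional: omit "stop" entirely when the caller didn't set one, so the server keeps the model's own stop sequences.

```python
from typing import Optional


def build_request_body(prompt: str, stop: Optional[list] = None) -> dict:
    """Illustrative sketch of the fix: only send a "stop" list when the
    caller explicitly provided one, instead of always sending [].
    """
    body = {"prompt": prompt, "options": {}}
    if stop:  # override the model's stop list only when explicitly requested
        body["options"]["stop"] = stop
    return body


build_request_body("Hello")                 # no "stop" key -> server default applies
build_request_body("Hello", stop=["</s>"])  # explicit override
```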
Yes, I know the easy answer is "if you are working with one model, just set the stop sequences yourself", but I think that workaround is very bad...
| Ollama integration: The stop sequence is empty when do inference | https://api.github.com/repos/langchain-ai/langchain/issues/15024/comments | 3 | 2023-12-21T19:04:00Z | 2024-03-28T16:08:37Z | https://github.com/langchain-ai/langchain/issues/15024 | 2,052,931,048 | 15,024 |
[
"hwchase17",
"langchain"
] | ### Feature request
I am developing an application using Langchain with the following code
```
from langchain.agents.agent_types import AgentType
from langchain.chat_models import ChatOpenAI
from langchain_experimental.agents.agent_toolkits import create_pandas_dataframe_agent
import pandas as pd
from langchain.llms import OpenAI
df = pd.read_csv("titanic.csv")
agent = create_pandas_dataframe_agent(OpenAI(temperature=0), df, verbose=True)
agent.run("how many rows are there?")
```
I would like to know if the agent can return a DataFrame instead of a text response. For example, if I ask it to add a column, it would return a DataFrame with that column added as the response.
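As far as I know there is no built-in flag for this, but since the agent executes generated code against the very DataFrame you pass in, in-place changes persist, and you can return `df` yourself after `agent.run(...)`. In the sketch below, `exec()` stands in for the agent's Python tool, and `generated_code` is a made-up example of what the model might produce:

```python
import pandas as pd

df = pd.DataFrame({"age": [22, 38, 26]})

# The pandas agent runs model-generated Python against the df you passed in,
# so in-place mutations survive the call. exec() stands in for the agent's
# Python tool here (illustrative only).
generated_code = "df['age_plus_one'] = df['age'] + 1"
exec(generated_code, {"df": df})

# After agent.run(...), simply return the (now modified) DataFrame yourself:
result_df = df
```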
### Motivation
A feature like this would help with automation.
### Your contribution
What can I do to help with the project? Count me in!
[
"hwchase17",
"langchain"
] | ### System Info
I am using below packages.
Python 3.12.1
langchain 0.0.352
pydantic 2.5.2
openai 1.4.0
huggingface-hub 0.19.4
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
While executing the below code from deeplearning.ai
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import CSVLoader
from langchain.vectorstores import DocArrayInMemorySearch
from IPython.display import display, Markdown
file = 'OutdoorClothingCatalog_1000.csv'
loader = CSVLoader(file_path=file, encoding='utf8')
from langchain.indexes import VectorstoreIndexCreator
index = VectorstoreIndexCreator(
    vectorstore_cls=DocArrayInMemorySearch
).from_loaders([loader])
### Expected behavior
The code should execute successfully.
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
# Below is my code

```python
def generate_custom_prompt(query):
    # Create the custom prompt template
    custom_prompt_template = f"""You are a chatbot designed to provide helpful answers to user questions. If you encounter a question for which you don't know the answer, please respond with 'I don't know' and refrain from making up an answer.

User's Question: {query}

In your response, aim for clarity and conciseness. Provide information that directly addresses the user's query. If the answer requires additional details, feel free to ask clarifying questions.

Remember, your goal is to assist the user in the best way possible. If the question is unclear or ambiguous, you can ask for clarification.

Your Helpful Answer:"""

    # Create the PromptTemplate
    custom_prompt = PromptTemplate(
        template=custom_prompt_template, input_variables=["query"]
    )

    # Format the prompt
    formatted_prompt = custom_prompt.format(query=query)
    return formatted_prompt
```
# Below is the error I am getting
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/hs/CustomBot/chatbot/views.py", line 360, in GetChatResponse
custom_message=generate_custom_prompt(query)
File "/home/hs/CustomBot/accounts/common_langcain_qa.py", line 40, in generate_custom_prompt
formatted_prompt = custom_prompt.format(query=query)
File "/home/hs/env/lib/python3.8/site-packages/langchain_core/prompts/prompt.py", line 132, in format
return DEFAULT_FORMATTER_MAPPING[self.template_format](self.template, **kwargs)
File "/usr/lib/python3.8/string.py", line 163, in format
return self.vformat(format_string, args, kwargs)
File "/home/hs/env/lib/python3.8/site-packages/langchain_core/utils/formatting.py", line 29, in vformat
return super().vformat(format_string, args, kwargs)
File "/usr/lib/python3.8/string.py", line 168, in vformat
self.check_unused_args(used_args, args, kwargs)
File "/home/hs/env/lib/python3.8/site-packages/langchain_core/utils/formatting.py", line 18, in check_unused_args
raise KeyError(extra)
KeyError: {'query'}
### Suggestion:
_No response_ | Issue: not getting output as per my prompt template | https://api.github.com/repos/langchain-ai/langchain/issues/15014/comments | 5 | 2023-12-21T15:28:03Z | 2024-04-18T16:34:57Z | https://github.com/langchain-ai/langchain/issues/15014 | 2,052,630,688 | 15,014 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I am following the documentation on https://python.langchain.com/docs/modules/agents/, but since I only have access to an Azure deployment of OpenAI, there is a small deviation from the tutorial. When running the code:
```python
from langchain.chat_models import AzureChatOpenAI
from langchain.agents import tool
from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser
import os

os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_VERSION"] = "2023-07-01-preview"

# Define the AzureChatOpenAI model
llm = AzureChatOpenAI(
    azure_endpoint="https://ENDPOINT.openai.azure.com",
    deployment_name="NAME",
    api_key="KEY",
    temperature=0.5
)

### Create tool
from langchain.agents import tool

@tool
def get_word_length(word: str) -> int:
    """Returns the length of a word."""
    return len(word)

tools = [get_word_length]

### create prompt
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder

prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are very powerful assistant, but bad at calculating lengths of words.",
        ),
        ("user", "{input}"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)

### llm with tool
from langchain.tools.render import format_tool_to_openai_function

llm_with_tools = llm.bind(functions=[format_tool_to_openai_function(t) for t in tools])

#### create agent
from langchain.agents.format_scratchpad import format_to_openai_function_messages
from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser

agent = (
    {
        "input": lambda x: x["input"],
        "agent_scratchpad": lambda x: format_to_openai_function_messages(
            x["intermediate_steps"]
        ),
    }
    | prompt
    | llm_with_tools
    | OpenAIFunctionsAgentOutputParser()
)

agent.invoke({"input": "how many letters in the word educa?", "intermediate_steps": []})
```
I get the following error message, which arises from agent.invoke(...):
openai.NotFoundError: Error code: 404 - {'error': {'message': 'Unrecognized request argument supplied: functions', 'type': 'invalid_request_error', 'param': None, 'code': None}}
### Suggestion:
_No response_ | Error message when adhering to Agents Langchain documentation only substituting OpenAI with AzureOpenAI: "openai.NotFoundError: Error code: 404 - {'error': {'message': 'Unrecognized request argument supplied: functions', 'type': 'invalid_request_error', 'param': None, 'code': None}}" | https://api.github.com/repos/langchain-ai/langchain/issues/15012/comments | 2 | 2023-12-21T14:47:39Z | 2024-04-24T16:40:51Z | https://github.com/langchain-ai/langchain/issues/15012 | 2,052,564,532 | 15,012 |
[
"hwchase17",
"langchain"
] | ### System Info
Everything is latest
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.document_loaders import PyPDFLoader
loader = PyPDFLoader("path/to/document")
pages = loader.load_and_split()
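Until the loader normalizes this itself, a simple workaround is to strip newlines after loading. The `Doc` class below is a dependency-free stand-in for LangChain's `Document` so the sketch runs anywhere; with real loader output you would run the same loop over the `pages` returned by `load_and_split()`.

```python
class Doc:
    """Minimal stand-in for langchain's Document (illustrative only)."""
    def __init__(self, page_content: str):
        self.page_content = page_content


pages = [Doc("first line\nsecond  line\nthird line")]

# Collapse newlines (and any repeated whitespace) after loading:
for doc in pages:
    doc.page_content = " ".join(doc.page_content.split())
```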
### Expected behavior
Documents should not have "\n". | In Pypdf loader "/n " is not removed before creating documents | https://api.github.com/repos/langchain-ai/langchain/issues/15011/comments | 7 | 2023-12-21T14:05:31Z | 2024-08-03T07:40:18Z | https://github.com/langchain-ai/langchain/issues/15011 | 2,052,488,253 | 15,011 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
To adapt a prompt written for OpenAI to some other local LLM, what's the best way to update the prompts in langchain.chains.query_constructor.prompt while keeping the rest of the code the same?
### Suggestion:
_No response_ | Issue: update prompts for local LLM | https://api.github.com/repos/langchain-ai/langchain/issues/15008/comments | 8 | 2023-12-21T12:27:35Z | 2024-04-23T17:06:13Z | https://github.com/langchain-ai/langchain/issues/15008 | 2,052,333,901 | 15,008 |