| issue_owner_repo (list, length 2) | issue_body (string, 0-261k chars, nullable) | issue_title (string, 1-925 chars) | issue_comments_url (string, 56-81 chars) | issue_comments_count (int64, 0-2.5k) | issue_created_at (string, 20 chars) | issue_updated_at (string, 20 chars) | issue_html_url (string, 37-62 chars) | issue_github_id (int64, 387k-2.46B) | issue_number (int64, 1-127k) |
|---|---|---|---|---|---|---|---|---|---|
[
"hwchase17",
"langchain"
] | ### System Info
langchain Version: 0.0.348
python Version: Python 3.9.18
OS: Mac OS M2 (Ventura 13.6.2)
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Pr... | AWS bedrock Claude v2 SQLDatabaseChain produces comments before the SQL Query | https://api.github.com/repos/langchain-ai/langchain/issues/15283/comments | 20 | 2023-12-28T19:51:15Z | 2024-06-08T16:08:26Z | https://github.com/langchain-ai/langchain/issues/15283 | 2,058,773,284 | 15,283 |
[
"hwchase17",
"langchain"
] | ### System Info
```
from langchain.tools import DuckDuckGoSearchRun
from langchain.agents.openai_assistant import OpenAIAssistantRunnable
from langchain.agents import AgentExecutor
tools = [DuckDuckGoSearchRun()]
assistant = OpenAIAssistantRunnable.create_assistant(
name="langchain assistant",
... | OpenAIAssistantRunnable stuck on execution with langchain tools | https://api.github.com/repos/langchain-ai/langchain/issues/15270/comments | 2 | 2023-12-28T13:33:35Z | 2023-12-28T17:46:23Z | https://github.com/langchain-ai/langchain/issues/15270 | 2,058,448,990 | 15,270 |
[
"hwchase17",
"langchain"
] | ### System Info
Python: 3.11
Langchain: 0.0.352
mistralai: 0.0.8
### Who can help?
@efriis
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt ... | [mistralai]: Don't support stop sequence | https://api.github.com/repos/langchain-ai/langchain/issues/15269/comments | 2 | 2023-12-28T13:14:32Z | 2024-01-10T00:27:22Z | https://github.com/langchain-ai/langchain/issues/15269 | 2,058,428,380 | 15,269 |
[
"hwchase17",
"langchain"
] | ### System Info
Is there any way to manipulate the data in a database (update, insert, delete) through a chatgpt chatbot with openai and langchain?
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [x] LLMs/Chat Mo... | Manipulating database using chatgpt | https://api.github.com/repos/langchain-ai/langchain/issues/15266/comments | 7 | 2023-12-28T12:24:15Z | 2024-05-10T03:22:41Z | https://github.com/langchain-ai/langchain/issues/15266 | 2,058,376,378 | 15,266 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
According to the documentation listed under the page: https://python.langchain.com/docs/modules/agents/how_to/add_memory_openai_functions, adding a `BaseChatMemory` as `memory` property to an `OpenAIFunctionAgent` should add "memory" to the agent.
**Example listed under the p... | DOC: Issue with the page titled "Add Memory to OpenAI Functions Agent | 🦜️🔗 Langchain" | https://api.github.com/repos/langchain-ai/langchain/issues/15262/comments | 2 | 2023-12-28T10:39:12Z | 2023-12-28T11:05:16Z | https://github.com/langchain-ai/langchain/issues/15262 | 2,058,277,920 | 15,262 |
[
"hwchase17",
"langchain"
] | ### Feature request
It should be possible to search a Chroma vectorstore for a particular Document by its ID. Given that the Document object is required for the `update_document` method, this lack of functionality makes it difficult to update document metadata, which should be a fairly common use-case.
Currently, ... | Get Chroma vectorstore Document by `doc_id` for document / metadata updates. | https://api.github.com/repos/langchain-ai/langchain/issues/15261/comments | 1 | 2023-12-28T09:48:44Z | 2024-04-04T16:09:01Z | https://github.com/langchain-ai/langchain/issues/15261 | 2,058,224,878 | 15,261 |
[
"hwchase17",
"langchain"
] | ### System Info
0.0.350
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loader... | qdrant.amax_marginal_relevance_search has no results but qdrant.max_marginal_relevance_search has results | https://api.github.com/repos/langchain-ai/langchain/issues/15256/comments | 1 | 2023-12-28T07:41:26Z | 2023-12-29T03:31:51Z | https://github.com/langchain-ai/langchain/issues/15256 | 2,058,104,532 | 15,256 |
[
"hwchase17",
"langchain"
] | ### System Info
Python: 3.10
from langchain.chat_models import ChatOpenAI
openai = ChatOpenAI(model_name="gpt-3.5-turbo",
temperature=0.8,
max_tokens=60)
error occurs at openai.py, error message is: AttributeError: module 'openai' has no attribute 'OpenAI'
the re... | langchain 0.5.7 not match latest openai | https://api.github.com/repos/langchain-ai/langchain/issues/15255/comments | 1 | 2023-12-28T07:17:09Z | 2024-04-04T16:08:56Z | https://github.com/langchain-ai/langchain/issues/15255 | 2,058,083,922 | 15,255 |
[
"hwchase17",
"langchain"
] | ### Feature request
Similar to the way callbacks are implemented in BaseLLM the embedding class should also support callbacks.
### Motivation
When using embedding models in a RAG application it would be useful to track e.g. the number of tokens.
Callbacks can be used to log usage details to monitoring services (e... | Callbacks for embeddings | https://api.github.com/repos/langchain-ai/langchain/issues/15253/comments | 2 | 2023-12-28T06:29:24Z | 2024-06-11T16:07:18Z | https://github.com/langchain-ai/langchain/issues/15253 | 2,058,046,954 | 15,253 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
What should I do if I want to log the number of tokens used by the llm in a chain via lcel?
### Suggestion:
lcel chain token usage tracking | Issue: lcel chain token usage tracking | https://api.github.com/repos/langchain-ai/langchain/issues/15249/comments | 3 | 2023-12-28T04:51:21Z | 2024-06-24T16:07:30Z | https://github.com/langchain-ai/langchain/issues/15249 | 2,057,986,272 | 15,249 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
I do not understand how chains are built with the transfer of information between generations. here is an example of the code in the langchain [documentation](https://python.langchain.com/docs/expression_language/why):
```
from langchain_core.runnables import RunnablePassthrou... | DOC: langchain LCEL - transfer of information between generations | https://api.github.com/repos/langchain-ai/langchain/issues/15247/comments | 10 | 2023-12-28T04:06:17Z | 2024-04-05T16:07:50Z | https://github.com/langchain-ai/langchain/issues/15247 | 2,057,963,845 | 15,247 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Based on the documentation and RFC standards referenced in the links:
- https://peps.python.org/pep-0604/
- https://www.blog.pythonlibrary.org/2021/09/11/python-3-10-simplifies-unions-in-type-annotations/
it's evident that the introduction of using | instead of 'union' for type an... | python 3.10 `|` union syntax compatibility | https://api.github.com/repos/langchain-ai/langchain/issues/15244/comments | 1 | 2023-12-28T02:53:57Z | 2023-12-28T06:06:43Z | https://github.com/langchain-ai/langchain/issues/15244 | 2,057,929,816 | 15,244 |
[
"hwchase17",
"langchain"
] | ### System Info
How to quantize the chatglm-6b model loaded by langchain
### Who can help?
@hwchase17 @hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
-... | How to quantize the chatglm-6b model loaded by langchain | https://api.github.com/repos/langchain-ai/langchain/issues/15243/comments | 3 | 2023-12-28T02:17:17Z | 2024-04-04T16:08:46Z | https://github.com/langchain-ai/langchain/issues/15243 | 2,057,912,633 | 15,243 |
[
"hwchase17",
"langchain"
] | ### System Info
I used the standard code example from the langchain documentation about Fireworks, where I inserted my API key. This is the error I got:
```
[llm/start] [1:llm:Fireworks] Entering LLM run with input:
{
"prompts": [
"Name 3 sports."
]
}
[llm/error] [1:llm:Fireworks] [761ms] LLM run erro... | error when running the sample code from the langchain documentation about fireworks | https://api.github.com/repos/langchain-ai/langchain/issues/15239/comments | 1 | 2023-12-28T01:10:59Z | 2023-12-28T01:24:35Z | https://github.com/langchain-ai/langchain/issues/15239 | 2,057,882,953 | 15,239 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
hello everyone! Is it possible to use the OpenAI-compatible URL API from text-generation-webui with langchain? the langchain [documentation](https://python.langchain.com/docs/integrations/llms/textgen) says about localhost, but I don't have access to it, I tried to insert the link... | DOC: langchain plus OpenAI-compatible URL API equally error | https://api.github.com/repos/langchain-ai/langchain/issues/15237/comments | 6 | 2023-12-28T00:56:43Z | 2024-01-04T16:19:09Z | https://github.com/langchain-ai/langchain/issues/15237 | 2,057,877,277 | 15,237 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I am receiving this error 2 validation errors for ConversationalRetrievalChain
qa_template
extra fields not permitted (type=value_error.extra)
question_generator_chain_options
extra fields not permitted (type=value_error.extra) , for the following code :
```
retriever = v... | Issue: validation errors for ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/15236/comments | 3 | 2023-12-28T00:30:51Z | 2024-04-04T16:08:41Z | https://github.com/langchain-ai/langchain/issues/15236 | 2,057,867,182 | 15,236 |
[
"hwchase17",
"langchain"
] | ### System Info
Langchain version: 0.0.340
Python version: 3.11.0
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt ... | Executing Chain with HuggingFace Models using wrapper | https://api.github.com/repos/langchain-ai/langchain/issues/15235/comments | 1 | 2023-12-28T00:20:19Z | 2024-04-04T16:08:36Z | https://github.com/langchain-ai/langchain/issues/15235 | 2,057,862,832 | 15,235 |
[
"hwchase17",
"langchain"
] | ### System Info
python = "3.11"
langchain = "0.0.352"
cohere = "4.39"
mlflow = {extras = ["genai"], version = "2.9.2"}
### Who can help?
@harupy
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Cha... | MlflowEmbeddings: input_type argument is missing, required by Cohere embeddings models | https://api.github.com/repos/langchain-ai/langchain/issues/15234/comments | 2 | 2023-12-27T23:59:40Z | 2024-03-21T20:47:30Z | https://github.com/langchain-ai/langchain/issues/15234 | 2,057,854,254 | 15,234 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
I am currently following the document to use a huggingface LLM as a chat model: https://python.langchain.com/docs/integrations/chat/huggingface
I have setup my Huggingface API and am using Option 3 (HuggingFaceHub) to instantiate an LLM.
After running this line: chat_model.... | HuggingFace Chat Wrapper - issue with HuggingFaceHub | https://api.github.com/repos/langchain-ai/langchain/issues/15232/comments | 4 | 2023-12-27T21:52:37Z | 2024-04-03T16:09:39Z | https://github.com/langchain-ai/langchain/issues/15232 | 2,057,799,134 | 15,232 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
@dosu-bot Currently I'm experiencing an old bug that was supposed to have been fixed patches ago.
```
File "/layers/google.python.pip/pip/lib/python3.10/site-packages/flask/app.py", line 1455, in wsgi_app
response = self.full_dispatch_request()
File "/layers/google.python.pip/pip/lib... | TypeError: _ChatSessionBase.send_message() got an unexpected keyword argument 'candidate_count' | https://api.github.com/repos/langchain-ai/langchain/issues/15228/comments | 1 | 2023-12-27T19:07:13Z | 2024-04-03T16:09:34Z | https://github.com/langchain-ai/langchain/issues/15228 | 2,057,694,401 | 15,228 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
from langchain.document_loaders.parsers.pdf import PDFPlumberParser
def generate_embeddings(config: dict = None, urls = None, file_path = None, persist_directory=None):
if file_path:
parser = PDFPlumberParser()
data = parser.load(file_path)
processed_da... | Issue: issue with pdfplumber | https://api.github.com/repos/langchain-ai/langchain/issues/15227/comments | 7 | 2023-12-27T18:50:09Z | 2024-04-04T16:08:31Z | https://github.com/langchain-ai/langchain/issues/15227 | 2,057,681,667 | 15,227 |
[
"hwchase17",
"langchain"
] | ### Feature request
As per the documentation there's a package for Gemini support, but this only works with the Gemini API and doesn't work with Vertexai.
https://python.langchain.com/docs/integrations/platforms/google
However in the vertexai docs gemini is mentioned (for some reason gemini ultra ? ) even though when tried wi... | support gemini on vertexai | https://api.github.com/repos/langchain-ai/langchain/issues/15222/comments | 9 | 2023-12-27T17:07:05Z | 2024-04-24T16:47:21Z | https://github.com/langchain-ai/langchain/issues/15222 | 2,057,600,249 | 15,222 |
[
"hwchase17",
"langchain"
] | ### Feature request
I need a mechanism to allow more control over the ANN search performed for a given RAG chain. Consider the initial example:
```
retriever = vectorstore.as_retriever()
template = """You're a helpful assistant who is great at code generation. Don't give me any explanation or summary. I'll give you... | Enable manual override of vector search query for controlled RAG | https://api.github.com/repos/langchain-ai/langchain/issues/15221/comments | 1 | 2023-12-27T16:54:54Z | 2024-04-03T16:09:24Z | https://github.com/langchain-ai/langchain/issues/15221 | 2,057,589,837 | 15,221 |
[
"hwchase17",
"langchain"
] | ### System Info
Langchain version: 0.0.341
OpenAI version: 1.3.5
Model: gpt-4-1106-preview
Python version:3.10.13
Platform: Celery worker in Docker Container
### Who can help?
@eyurtsev @hwchase17 @ag
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
###... | Langchain agent not executing properly in Celery worker running as Docker container | https://api.github.com/repos/langchain-ai/langchain/issues/15220/comments | 9 | 2023-12-27T16:42:21Z | 2024-03-14T14:26:45Z | https://github.com/langchain-ai/langchain/issues/15220 | 2,057,579,438 | 15,220 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
Like the title says, I haven't found anything in the doc - docubot please help
### Idea or request for content:
having a proper document could help | how to create a custom chat model | https://api.github.com/repos/langchain-ai/langchain/issues/15214/comments | 2 | 2023-12-27T13:27:25Z | 2024-04-03T16:09:19Z | https://github.com/langchain-ai/langchain/issues/15214 | 2,057,373,883 | 15,214 |
[
"hwchase17",
"langchain"
] | ### System Info

I am planning to add a new param like "affeciton"
How could I set the query data body to fill in the params here? (Langserve setup!)
" | https://api.github.com/repos/langchain-ai/langchain/issues/15210/comments | 1 | 2023-12-27T11:46:08Z | 2024-04-03T16:09:09Z | https://github.com/langchain-ai/langchain/issues/15210 | 2,057,277,681 | 15,210 |
[
"hwchase17",
"langchain"
] | ### System Info
Baichuan Chat (with both Baichuan-Turbo and Baichuan-Turbo-192K models) has updated their APIs. There are breaking changes. For example, BAICHUAN_SECRET_KEY is removed in the latest API but is still required in Langchain. Baichuan's Langchain integration needs to be updated to the latest version.
A... | Fix Baichuan's integration and introduce Baichuan-Turbo-192K API. | https://api.github.com/repos/langchain-ai/langchain/issues/15206/comments | 1 | 2023-12-27T10:21:21Z | 2024-04-03T16:09:04Z | https://github.com/langchain-ai/langchain/issues/15206 | 2,057,190,266 | 15,206 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I am using ConversationalRetrievalChain
with a callback handler for streaming responses back.
> qa_chain =ConversationalRetrievalChain.from_llm(
llm=chat,
retriever=MyVectorStoreRetriever(
vectorstore=vectordb,
search_type="similarit... | Issue: Streaming Response contains the rephrased question in ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/15205/comments | 3 | 2023-12-27T10:20:47Z | 2024-04-03T16:08:59Z | https://github.com/langchain-ai/langchain/issues/15205 | 2,057,189,374 | 15,205 |
[
"hwchase17",
"langchain"
] | ### System Info
OS: MacOS Sonoma
Python: 3.11.6
LangChain: 0.0.352
llama-cpp-python = 0.2.25
pydantic: 1.10.13 (I know that it is not the latest version, but version 1 is still officially supported)
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own... | Pydantic forward ref issue when creating using LlamaCpp with grammar | https://api.github.com/repos/langchain-ai/langchain/issues/15204/comments | 1 | 2023-12-27T10:11:11Z | 2024-04-03T16:08:54Z | https://github.com/langchain-ai/langchain/issues/15204 | 2,057,179,711 | 15,204 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/tools/render.py
https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/tools/convert_to_openai.py
For backward compatibility purposes, should we proceed with a d... | Issue: Identical Content in Two Files | https://api.github.com/repos/langchain-ai/langchain/issues/15203/comments | 1 | 2023-12-27T09:49:59Z | 2024-04-03T16:08:49Z | https://github.com/langchain-ai/langchain/issues/15203 | 2,057,154,943 | 15,203 |
[
"hwchase17",
"langchain"
] | ### System Info
langchian=0.0.352
qianfan=0.2.4
When I tried the usage of agent in this [video](https://learn.deeplearning.ai/langchain/lesson/7/agents), I changed the model in it from ChatGpt-3.5-turbo to ERNIE-Bot, and the output of agent showed the following error:
```bash
> Entering new AgentExecutor cha... | "Could not parse LLM output" when using QianfanChatEndpoint in agent. | https://api.github.com/repos/langchain-ai/langchain/issues/15199/comments | 2 | 2023-12-27T08:49:02Z | 2024-04-04T16:08:26Z | https://github.com/langchain-ai/langchain/issues/15199 | 2,057,093,818 | 15,199 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
```python
conversational_qa_chain = (
_inputs | _context | ConfigurableTokenLimitProcessor(model="gpt_35_turbo").configurable_fields(
model=ConfigurableFieldSingleOption(
id="model",
name="model",
options={
"gpt_35_turbo": "gpt_35_tu... | Issue: lcel langserve with_fallbacks streaming | https://api.github.com/repos/langchain-ai/langchain/issues/15195/comments | 4 | 2023-12-27T04:53:43Z | 2024-05-22T16:07:52Z | https://github.com/langchain-ai/langchain/issues/15195 | 2,056,910,699 | 15,195 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I apologize for the naive question. It's not about an error or a bug.
I'm trying to implement routing by following the guide here: https://python.langchain.com/docs/modules/chains/foundational/router
However, I can't figure out how to use RAG.
I tried changing the last code in the gu... | Issue: <Please tell me how to combine Routing and RAG in a chain.> | https://api.github.com/repos/langchain-ai/langchain/issues/15193/comments | 5 | 2023-12-27T04:29:43Z | 2024-04-16T16:20:16Z | https://github.com/langchain-ai/langchain/issues/15193 | 2,056,898,273 | 15,193 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Why can't CSVLoader load? Error:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[25], line 1
----> 1 from langchain.document_loaders.csv_loader import CSVLoader
3... | Issue: <CSVLoader can't load> | https://api.github.com/repos/langchain-ai/langchain/issues/15192/comments | 9 | 2023-12-27T03:54:38Z | 2024-03-01T05:21:04Z | https://github.com/langchain-ai/langchain/issues/15192 | 2,056,881,303 | 15,192 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I'm building a chain through lcel and handling errors with `with_fallbacks` at the end, but unlike before I added `with_fallbacks`, streaming is not possible and the whole response arrives at once. Can I stream while using `with_fallbacks`?
### Suggestion:
lcel `with_fall... | Issue: lcel `with_fallbacks` streaming | https://api.github.com/repos/langchain-ai/langchain/issues/15191/comments | 1 | 2023-12-27T03:40:56Z | 2023-12-27T04:53:55Z | https://github.com/langchain-ai/langchain/issues/15191 | 2,056,875,193 | 15,191 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
If the rate limit of the api key is exceeded when developing a chain through lcel, I want to dynamically switch to another api key and retry so the client still gets a normal response. Is there a way?
### Suggestion:
Dynamically catch an error in lcel, change the api key, and try again | Issue: openai api key rate limit error handing | https://api.github.com/repos/langchain-ai/langchain/issues/15190/comments | 2 | 2023-12-27T03:37:31Z | 2024-04-03T16:08:39Z | https://github.com/langchain-ai/langchain/issues/15190 | 2,056,873,561 | 15,190 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
I want to contribute to one of the libs and started a fork. Here are the steps I took:
I am trying to add a new feature but first need to experiment with it. I am unsure on how to get started writing some short scripts to use the libs.
1. I went into ```libs/experimental```... | DOC: How to write my own short scripts within a fork to test some code? | https://api.github.com/repos/langchain-ai/langchain/issues/15177/comments | 2 | 2023-12-26T18:42:18Z | 2024-05-04T08:50:34Z | https://github.com/langchain-ai/langchain/issues/15177 | 2,056,625,948 | 15,177 |
[
"hwchase17",
"langchain"
] | ### System Info
OS: Windows
Python: 3.9.10
Langchain version: 0.0.352
openai version: 1.6.1
### Who can help?
@BeautyyuYanli
@baskaryan
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompt... | pgvecto.rs: retriever filter not working | https://api.github.com/repos/langchain-ai/langchain/issues/15173/comments | 2 | 2023-12-26T14:35:49Z | 2024-01-15T19:42:01Z | https://github.com/langchain-ai/langchain/issues/15173 | 2,056,466,694 | 15,173 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
`def generate_custom_prompt(new_project_qa,query,name,not_uuid):
check = query.lower()
result = new_project_qa(query)
relevant_document = result['source_documents']
context_text="\n\n---\n\n".join([doc.page_content for doc in relevant_document])
# print(cont... | Issue: Explain Memory and How it's implemented in my Case. | https://api.github.com/repos/langchain-ai/langchain/issues/15170/comments | 4 | 2023-12-26T12:45:59Z | 2023-12-27T05:34:44Z | https://github.com/langchain-ai/langchain/issues/15170 | 2,056,381,701 | 15,170 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I would like to build RAG based on Mistral 7B model
The model is already hosted, and I provide llm_url in the custom LLM setup
I am able to make a request and get a response from the URL using the `llm._call` method, however something is wrong with the callbacks in `RetrievalQA.from_... | Issue: Custom Mistral based LLM from API for RetrievalQA chain | https://api.github.com/repos/langchain-ai/langchain/issues/15168/comments | 5 | 2023-12-26T11:56:09Z | 2024-06-26T12:00:33Z | https://github.com/langchain-ai/langchain/issues/15168 | 2,056,342,401 | 15,168 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Given a tool that generates a dataframe, how can I pass it through the chain?
```
llm_with_tools = llm.bind(functions=[format_tool_to_openai_function(t) for t in tools])
prompt = ChatPromptTemplate.from_messages(
[
("system", """
You are a helpful a... | Issue: Pass additional data through AgentExecutor | https://api.github.com/repos/langchain-ai/langchain/issues/15165/comments | 3 | 2023-12-26T10:47:02Z | 2024-06-19T08:30:56Z | https://github.com/langchain-ai/langchain/issues/15165 | 2,056,290,653 | 15,165 |
[
"hwchase17",
"langchain"
] | ### System Info
python3.10
langchain 0.0.333
### Who can help?
@hwchase17 @agola11 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt... | 【BUG】ConversationalRetrievalChain.from_llm and pass in chat_history, there is a problem with the callback. | https://api.github.com/repos/langchain-ai/langchain/issues/15164/comments | 2 | 2023-12-26T09:32:29Z | 2024-01-10T03:36:51Z | https://github.com/langchain-ai/langchain/issues/15164 | 2,056,226,273 | 15,164 |
[
"hwchase17",
"langchain"
Is it correct to use CharacterTextSplitter with Confluence?
### Issue you'd like to raise.
confluence_url = config.get("confluence_url", None)
username = config.get("username", None)
api_key = config.get("api_key", None)
space_key = config.get("space_key", None)
documents = [... | Issue: How can it be split? | https://api.github.com/repos/langchain-ai/langchain/issues/15162/comments | 1 | 2023-12-26T07:41:33Z | 2023-12-26T10:37:39Z | https://github.com/langchain-ai/langchain/issues/15162 | 2,056,133,474 | 15,162 |
[
"hwchase17",
"langchain"
] | ### System Info
When I set `verbose=True` when creating chains using ConversationBufferMemory as memory and **redirect** the output to a txt/log file, the returned messages show that ConversationBufferMemory saves the same round of conversation twice. You can get the example in a later part of this issue.
**This problem ... | Does ConversationBufferMemory actually save conversation twice? | https://api.github.com/repos/langchain-ai/langchain/issues/15161/comments | 2 | 2023-12-26T07:21:01Z | 2024-01-02T06:47:11Z | https://github.com/langchain-ai/langchain/issues/15161 | 2,056,117,735 | 15,161 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I'm using the openai function call agent. The gpt llm often gives bad tool parameters, and I want to achieve this: pass certain params to all tools through some path so that, before every tool gets executed, I can check whether the llm-produced params are right or directly use the certain params ...
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
When using Qdrant as retriever, how to retrieve the relevant documents with the similarity score? For now, I do not see any methods that I can use to retrieve the documents and also return me the similarity score. However, if use the vector store to run similarity search, I have the opti... | Issue: When using Qdrant as retriever, how to retrieve the relevant documents with the similarity score? | https://api.github.com/repos/langchain-ai/langchain/issues/15158/comments | 4 | 2023-12-26T06:24:17Z | 2024-04-02T16:07:04Z | https://github.com/langchain-ai/langchain/issues/15158 | 2,056,076,604 | 15,158 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
I've wondering that in this part of the code in order to define `cypher generation template` of langchain with neo4j graph database from Neo4j DB QA chain Documentation
```python
from langchain.prompts.prompt import PromptTemplate
CYPHER_GENERATION_TEMPLATE = """Task:Genera... | DOC: Need some clarification on Neo4j DB QA chain documentation | https://api.github.com/repos/langchain-ai/langchain/issues/15157/comments | 3 | 2023-12-26T04:36:18Z | 2024-04-02T16:06:59Z | https://github.com/langchain-ai/langchain/issues/15157 | 2,056,019,228 | 15,157 |
[
"hwchase17",
"langchain"
] | ### System Info
Langchain Version: 0.0.352
Langchain experimental Version: 0.0.47
Python : 3.10
Ubuntu : 22.04
Poetry is being used
**Code: `test.py`**
```python
import json
from langchain.schema import HumanMessage
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.stre... | langchain_community.llms.ollama.OllamaEndpointNotFoundError: Ollama call failed with status code 404 | https://api.github.com/repos/langchain-ai/langchain/issues/15147/comments | 9 | 2023-12-25T14:08:45Z | 2024-05-29T12:18:55Z | https://github.com/langchain-ai/langchain/issues/15147 | 2,055,708,933 | 15,147 |
[
"hwchase17",
"langchain"
] | ### System Info
Langchain version 0.352
SystemMessage is ignored when I invoke the AgentExecutor.run function. The code looks as below.
```
from typing import Tuple, Dict
from langchain.agents import initialize_agent, AgentType
from langchain.agents.agent import AgentExecutor
from langchain.agents.format_scr... | SystemMessage are not considered while creating AgentExecutor with OPENAI_FUNCTIONS | https://api.github.com/repos/langchain-ai/langchain/issues/15145/comments | 5 | 2023-12-25T12:11:14Z | 2024-04-01T16:06:55Z | https://github.com/langchain-ai/langchain/issues/15145 | 2,055,649,057 | 15,145 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain==0.0.352
langchain-community==0.0.6
langchain-core==0.1.3
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates ... | OpenAIWhisperParserLocal fails when specifying cuda device but cuda is not available | https://api.github.com/repos/langchain-ai/langchain/issues/15143/comments | 1 | 2023-12-25T09:53:52Z | 2024-04-01T16:06:50Z | https://github.com/langchain-ai/langchain/issues/15143 | 2,055,569,018 | 15,143 |
[
"hwchase17",
"langchain"
] | ### System Info
wsl
conda 23.7.4 python 3.8.11
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [... | HuggingFaceHub api can not pass trust_remote_code argument | https://api.github.com/repos/langchain-ai/langchain/issues/15141/comments | 1 | 2023-12-25T09:10:42Z | 2024-04-01T16:06:45Z | https://github.com/langchain-ai/langchain/issues/15141 | 2,055,540,800 | 15,141 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
In the current documentations the output of `Upstash Redis Cache` section in LLM Caching documentation seems wrong. The second run after caching is done has wrong output and wrong code and comments written in the code block.
### Idea or request for content:
Update the code block... | DOC: Wrong output in `Upstash Redis Cache` section of LLM Caching documentation | https://api.github.com/repos/langchain-ai/langchain/issues/15139/comments | 1 | 2023-12-25T07:13:29Z | 2024-04-01T16:06:40Z | https://github.com/langchain-ai/langchain/issues/15139 | 2,055,458,803 | 15,139 |
[
"hwchase17",
"langchain"
] | ### System Info
LangChain : 0.0.352
Python : 3.11.5
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output P... | Sagemaker Endpoint not working streaming | https://api.github.com/repos/langchain-ai/langchain/issues/15138/comments | 1 | 2023-12-25T06:28:01Z | 2024-04-01T16:06:35Z | https://github.com/langchain-ai/langchain/issues/15138 | 2,055,427,344 | 15,138 |
[
"hwchase17",
"langchain"
] | ### Feature request
Currently if one wants to use the RetryWithErrorOutputParser - we need to do the parsing manually instead of generating a chain that does it for us (including all the nice chain functions: batch, ainvoke, etc)
There are 2 issues:
1. The RetryWithOutputParser requires the prompt to be given to i... | RetryWithErrorOutputParser does not work with LLMChain because it does not implement the `parse` function | https://api.github.com/repos/langchain-ai/langchain/issues/15133/comments | 3 | 2023-12-24T21:26:43Z | 2024-05-06T16:07:59Z | https://github.com/langchain-ai/langchain/issues/15133 | 2,055,216,057 | 15,133 |
[
"hwchase17",
"langchain"
] |
# How Adding a prompt template to conversational retrieval chain giving the code:
`template= """Use the following pieces of context to answer the question at the end.
If you don't know the answer,
just say that you don't know.
{context}
Question: {question}
Helpful Answer:"""
QA_CHAIN_PROMPT = PromptTemp... | Adding Prompt template to ConversationalRetrievalChain.from_llm | https://api.github.com/repos/langchain-ai/langchain/issues/15132/comments | 1 | 2023-12-24T21:26:16Z | 2024-03-31T16:06:50Z | https://github.com/langchain-ai/langchain/issues/15132 | 2,055,216,000 | 15,132 |
[
"hwchase17",
"langchain"
] | ### System Info
windows
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Lo... | chain.invoke is no longer taking a json as input | https://api.github.com/repos/langchain-ai/langchain/issues/15131/comments | 1 | 2023-12-24T17:35:05Z | 2024-03-31T16:06:45Z | https://github.com/langchain-ai/langchain/issues/15131 | 2,055,171,635 | 15,131 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain version: 0.0.352, Windows 10, Python 3.11.6,
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates /... | Template issue: Neo4J environmental variables in .env file not found | https://api.github.com/repos/langchain-ai/langchain/issues/15130/comments | 3 | 2023-12-24T14:59:45Z | 2024-03-31T16:06:40Z | https://github.com/langchain-ai/langchain/issues/15130 | 2,055,130,570 | 15,130 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Environment
```
Edition Windows 11 Home
Version 22H2
Installed on 4/30/2023
OS build 22621.2861
Experience Windows Feature Experience Pack 1000.22681.1000.0
langchain package version: "0.0.212"
zod package version: "3.22.4"
typescript package version: "5.1.6"
```
Prom... | Issue: LLMChain error. response_format json error with messages. Messages is array of array | https://api.github.com/repos/langchain-ai/langchain/issues/15125/comments | 4 | 2023-12-24T12:57:20Z | 2023-12-24T15:06:36Z | https://github.com/langchain-ai/langchain/issues/15125 | 2,055,093,069 | 15,125 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
If my agent tool requires user to pass 2 parameters, and if these 2 parameters are not included in the user's question, how can I remind him to enter the parameters
### Suggestion:
_No response_ | If my agent tool requires user to pass 2 parameters, and if these 2 parameters are not included in the user's question, how can I remind him to enter the parametersIssue: <Please write a comprehensive title after the 'Issue: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/15122/comments | 1 | 2023-12-24T07:32:59Z | 2024-03-31T16:06:35Z | https://github.com/langchain-ai/langchain/issues/15122 | 2,055,013,662 | 15,122 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
what is RAG and how it's implemented as of now I completed exploring custom_prompt_template and want to know more about RAG?
### Suggestion:
_No response_ | Issue: what is RAG and how it's implemented? | https://api.github.com/repos/langchain-ai/langchain/issues/15116/comments | 5 | 2023-12-24T06:39:33Z | 2024-04-01T16:06:30Z | https://github.com/langchain-ai/langchain/issues/15116 | 2,055,002,722 | 15,116 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I suspect a potential issue where Chroma.from_documents might not be embedding and storing vectors for metadata in documents.
I have loaded five tabular documents using DataFrameLoader. However, when attempting to retrieve content based on similarity from the vector store, it appears ... | Chroma.from_documents exclude metadata in embedding? [Question] | https://api.github.com/repos/langchain-ai/langchain/issues/15115/comments | 5 | 2023-12-24T06:13:37Z | 2024-03-31T16:06:25Z | https://github.com/langchain-ai/langchain/issues/15115 | 2,054,997,867 | 15,115 |
[
"hwchase17",
"langchain"
] | ### Feature request
It would be great to have adapters support in huggingface embedding class
### Motivation
Many really good embedding models have special adapters for retrieval, for example specter2 which is a leading embedding for scientific texts have many adapters, for example https://huggingface.co/allenai/sp... | add support for embedding models with adapters | https://api.github.com/repos/langchain-ai/langchain/issues/15112/comments | 2 | 2023-12-24T01:18:05Z | 2024-04-03T16:08:34Z | https://github.com/langchain-ai/langchain/issues/15112 | 2,054,952,674 | 15,112 |
[
"hwchase17",
"langchain"
] | ### Feature request
Add streaming support for Together AI Endpoints in Langchain. The official endpoint supports streaming with `stream_tokens` keyword, which should be not that hard to implement `_stream` method and add streaming support with the `streaming = True` flag
this is what the endpoint output when `stream_... | [improvement] Add Streaming Support for Together AI | https://api.github.com/repos/langchain-ai/langchain/issues/15109/comments | 1 | 2023-12-23T19:48:33Z | 2024-03-30T16:07:11Z | https://github.com/langchain-ai/langchain/issues/15109 | 2,054,881,350 | 15,109 |
[
"hwchase17",
"langchain"
] | ### Feature request
I am using langchain.vectorstores.redis and langchain.chains.ConversationalRetrievalChain.from_llm
I would like to get the scores of the matching documents with my query.
I know you can filter with the `search_kwargs={"score_threshold": 0.8}`
But still I want to get the similarity score... | Return similarity score ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/15097/comments | 5 | 2023-12-23T11:56:23Z | 2024-04-04T16:08:21Z | https://github.com/langchain-ai/langchain/issues/15097 | 2,054,765,710 | 15,097 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I'm trying to initialize an existing collection via:
store = PGVector(
collection_name=COLLECTION_NAME,
connection_string=CONNECTION_STRING,
embedding_function=embeddings,
)
I keep getting:
Exception has occurred: NoReferencedTableError
Foreign key associated wi... | Foreign key associated with column 'langchain_pg_embedding.collection_id' could not find table | https://api.github.com/repos/langchain-ai/langchain/issues/15096/comments | 1 | 2023-12-23T11:56:18Z | 2024-03-30T16:07:01Z | https://github.com/langchain-ai/langchain/issues/15096 | 2,054,765,699 | 15,096 |
[
"hwchase17",
"langchain"
] | ### Feature request
The safety settings are there in the **google_generativeai** library are are **not** there in the **langchain_google_genai** library
The safety settings is an basically array of dictionaries passed when sending the prompt
### Motivation
The problem with not having this is that when we use the Ch... | Feature: No safety settings when using langchain_google_genai's ChatGoogleGenerativeAI | https://api.github.com/repos/langchain-ai/langchain/issues/15095/comments | 22 | 2023-12-23T09:00:07Z | 2024-08-02T10:50:19Z | https://github.com/langchain-ai/langchain/issues/15095 | 2,054,725,088 | 15,095 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain 0.0.352
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document... | Agent how to call remote tool (exposed by langserve) | https://api.github.com/repos/langchain-ai/langchain/issues/15094/comments | 1 | 2023-12-23T08:50:23Z | 2024-03-30T16:06:56Z | https://github.com/langchain-ai/langchain/issues/15094 | 2,054,722,951 | 15,094 |
[
"hwchase17",
"langchain"
] | ### System Info
I'm using the latest version of langchain.
When my system prompt is longer than 23 lines, i get this error:
KeyError: "Input to ChatPromptTemplate is missing variable ''. Expected: ['', 'description'] Received: ['description']"
It's being generated from this snippet:
```
def generate_output(u... | Issue with ChatPromptTemplate | https://api.github.com/repos/langchain-ai/langchain/issues/15093/comments | 4 | 2023-12-23T08:09:22Z | 2024-03-31T16:06:10Z | https://github.com/langchain-ai/langchain/issues/15093 | 2,054,713,803 | 15,093 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
`def generate_custom_prompt(new_project_qa,query,name,not_uuid):
check = query.lower()
result = new_project_qa(query)
relevant_document = result['source_documents']
context_text="\n\n---\n\n".join([doc.page_content for doc in relevant_document])
user_experienc... | Issue: Getting error while using ChatPromptTemplate | https://api.github.com/repos/langchain-ai/langchain/issues/15089/comments | 6 | 2023-12-23T05:10:34Z | 2024-04-18T16:21:18Z | https://github.com/langchain-ai/langchain/issues/15089 | 2,054,676,213 | 15,089 |
[
"hwchase17",
"langchain"
] | ### System Info
Python 3.11
langchain 0.0.352
langchain-core 0.1.3
langchain-community 0.0.4 (doesn't work with neithwer `from langchain.llms import OpenAI` nor `langchain.chat_models import ChatOpenAI`)
langchain-community 0.0.2 (works as expected with `from langchain.llms import OpenAI` but it doesn't with `lang... | SelfQueryRetriever broken with latest langchain-community or using ChatOpenAI as llm | https://api.github.com/repos/langchain-ai/langchain/issues/15087/comments | 1 | 2023-12-23T02:55:37Z | 2024-03-30T16:06:46Z | https://github.com/langchain-ai/langchain/issues/15087 | 2,054,631,468 | 15,087 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
#### Issue Description
- **Overview**: The current documentation for the 'Return Source Documents' functionality seems to be outdated or incorrect. The provided code snippet results in errors when executed.
https://python.langchain.com/docs/integrations/providers/vectara/vec... | DOC: Documentation Update Needed for 'Return Source Documents' Functionality | https://api.github.com/repos/langchain-ai/langchain/issues/15086/comments | 2 | 2023-12-23T02:21:36Z | 2024-03-30T16:06:41Z | https://github.com/langchain-ai/langchain/issues/15086 | 2,054,623,466 | 15,086 |
[
"hwchase17",
"langchain"
] | ### System Info
Im trying to implement this in sagemaker with bedrock claude v2
https://github.com/langchain-ai/langchain/blob/master/templates/rag-aws-bedrock/rag_aws_bedrock/chain.py
Here is my code
```
`import os
from langchain.embeddings import BedrockEmbeddings
from langchain.llms.bedrock import Bedro... | Can't use RunnablePassthrough | https://api.github.com/repos/langchain-ai/langchain/issues/15085/comments | 3 | 2023-12-23T00:38:06Z | 2024-03-30T16:06:36Z | https://github.com/langchain-ai/langchain/issues/15085 | 2,054,597,980 | 15,085 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
SQLDatabaseChain is throwing a TypeError when executing the .run or .invoke functions. All arguments or kwargs passed to the class are valid, but the error persists. I have trapped back to the database and there is a valid connection. My code and error are below:
```python
sqlite_... | SQLDatabaseChain raising TypeError exception with SQLite | https://api.github.com/repos/langchain-ai/langchain/issues/15077/comments | 11 | 2023-12-22T20:25:25Z | 2024-07-12T18:11:07Z | https://github.com/langchain-ai/langchain/issues/15077 | 2,054,482,659 | 15,077 |
[
"hwchase17",
"langchain"
] | ### System Info
I have observed that while using the command belonging to importing "langchain.llms (for example as in from langchain.llms import HuggingFaceHub") have no problem when I deploy my web app to multiple hosting sites like streamlit, Render, Heroku etc.
However, using the "langchain.memory" as in from l... | Bugs in importing when Deploying a LangChain web app to multiple hosting platforms! | https://api.github.com/repos/langchain-ai/langchain/issues/15074/comments | 1 | 2023-12-22T19:12:34Z | 2024-03-29T16:08:40Z | https://github.com/langchain-ai/langchain/issues/15074 | 2,054,421,969 | 15,074 |
[
"hwchase17",
"langchain"
] | ### Feature request
Google's `gemini-pro` supports function calling. It would be nice to be able to use langchain to support function calling when using the `VertexAI` class similar to OpenAI and OpenAI's version of function calling: https://cloud.google.com/vertex-ai/docs/generative-ai/multimodal/function-calling
##... | Feature request: Vertex AI Function Calling | https://api.github.com/repos/langchain-ai/langchain/issues/15073/comments | 1 | 2023-12-22T19:10:42Z | 2024-03-29T16:08:35Z | https://github.com/langchain-ai/langchain/issues/15073 | 2,054,420,480 | 15,073 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain==0.0.351
python==3.10
### Who can help?
@hwchase17 This should be an easy one to fix.
When using regex in the output parser of the StructuredChatAgent, the output parser cuts off the output at the first ending ``` it finds. For example, if my string was
```
"""```json
{
"action... | Structured Chat Output Parser doesn't work when model outputs a code block with ``` around the code block | https://api.github.com/repos/langchain-ai/langchain/issues/15069/comments | 3 | 2023-12-22T17:40:20Z | 2024-04-09T16:14:46Z | https://github.com/langchain-ai/langchain/issues/15069 | 2,054,285,645 | 15,069 |
[
"hwchase17",
"langchain"
] | in retrievalQa from langchain, we have a retriever that retrieves docs from a vector db and provides a context to the llm, let's say i'm using gpt3.5 whose max tokens is 4096... how do i handle huge context to be sent to it ? any suggestions will be appreciated | send context of docs through Chroma().as_retriever multiple times in the same conversation | https://api.github.com/repos/langchain-ai/langchain/issues/15062/comments | 1 | 2023-12-22T13:52:13Z | 2024-03-29T16:08:31Z | https://github.com/langchain-ai/langchain/issues/15062 | 2,053,953,258 | 15,062 |
[
"hwchase17",
"langchain"
] | ### Discussed in https://github.com/langchain-ai/langchain/discussions/15060
<div type='discussions-op-text'>
<sup>Originally posted by **ShehneelAhmedKhan** December 22, 2023</sup>
This is my code:
llm = OpenAI(temperature=0, model_name="gpt-3.5-turbo")
db_chain = SQLDatabaseSequentialChain.from_llm... | Using SQLDatabaseSequentialChain, discrepancy between Answer and SQLResult | https://api.github.com/repos/langchain-ai/langchain/issues/15061/comments | 2 | 2023-12-22T13:48:55Z | 2024-03-29T16:08:26Z | https://github.com/langchain-ai/langchain/issues/15061 | 2,053,949,456 | 15,061 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
i want force the model to call a specific function. but i didn't found how to ues tool_choice with AgentExecutor from doc.
please give me demo. Thanks
### Suggestion:
_No response_ | How to use tool_choice with initialize_agent? | https://api.github.com/repos/langchain-ai/langchain/issues/15059/comments | 3 | 2023-12-22T13:20:20Z | 2024-03-29T16:08:20Z | https://github.com/langchain-ai/langchain/issues/15059 | 2,053,915,647 | 15,059 |
[
"hwchase17",
"langchain"
] | ### Feature request
The `GoogleDriveLoader` currently supports 3 different ways to authenticate a user.
1. Via a Service Account File
2. Via a Token File
3. Via a Live server
All those ways work perfectly, but I'm missing a way to authenticate the user via an existing JWT. There is a workaround by saving the JWT... | Make it possible to give credentials directly via parameter on GoogleDriveLoader | https://api.github.com/repos/langchain-ai/langchain/issues/15058/comments | 3 | 2023-12-22T12:59:29Z | 2024-06-30T23:27:12Z | https://github.com/langchain-ai/langchain/issues/15058 | 2,053,889,545 | 15,058 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hi Guy, this world requires a Langchain framework written in the Rust language, and Python is not the future of AI.
### Suggestion:
_No response_ | this world requires a Langchain framework written in the Rust language | https://api.github.com/repos/langchain-ai/langchain/issues/15057/comments | 3 | 2023-12-22T11:01:51Z | 2024-03-29T16:08:15Z | https://github.com/langchain-ai/langchain/issues/15057 | 2,053,758,780 | 15,057 |
[
"hwchase17",
"langchain"
] | ### Feature request
The [Snowflakeconnector](https://docs.snowflake.com/en/developer-guide/python-connector/python-connector-api#functions) supports authentication via browser. It would be nice if the [Langchain Snowflake Loader](https://python.langchain.com/docs/integrations/document_loaders/snowflake) also supports ... | Add external browser authentication for Snowflake. | https://api.github.com/repos/langchain-ai/langchain/issues/15056/comments | 1 | 2023-12-22T10:43:44Z | 2024-03-29T16:08:10Z | https://github.com/langchain-ai/langchain/issues/15056 | 2,053,735,631 | 15,056 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hi,
I want to use the ContextualCompressionRetriever and wondering how the prompt looks like or if you can use it for non-english languages (e.g. German)?
I am using ContextualCompressionRetriever at the moment and realized that my LLM responses often switch to English, so I am assum... | ContextualCompressionRetriever for non-english languages | https://api.github.com/repos/langchain-ai/langchain/issues/15052/comments | 1 | 2023-12-22T08:16:48Z | 2024-03-29T16:08:05Z | https://github.com/langchain-ai/langchain/issues/15052 | 2,053,554,200 | 15,052 |
[
"hwchase17",
"langchain"
] | ### System Info
when i use sql agent, i want get table desc , but agent can't work , i am sure REPICA.ICA_PERSON_DATA_ALL table is existed
Question: Describe the REPICA.ICA_PERSON_DATA_ALL table
Thought: I should query the schema of the REPICA.ICA_PERSON_DATA_ALL table to get information about its columns and data... | oracle db when use sql agent can't find table name[Error: table_names] | https://api.github.com/repos/langchain-ai/langchain/issues/15051/comments | 1 | 2023-12-22T07:38:33Z | 2024-03-29T16:08:00Z | https://github.com/langchain-ai/langchain/issues/15051 | 2,053,514,880 | 15,051 |
[
"hwchase17",
"langchain"
] | ### System Info
Python 3.10.10
langchain 0.0.350
langchain-community 0.0.3
langchain-core 0.1.1
google-search-results 2.4.2
Windows
### Who can help?
_No response_
### Information
- [ ] The official example noteb... | Error Code 400. Can ask follow up questions with agent_executor | https://api.github.com/repos/langchain-ai/langchain/issues/15050/comments | 4 | 2023-12-22T07:06:47Z | 2024-04-11T16:17:10Z | https://github.com/langchain-ai/langchain/issues/15050 | 2,053,484,432 | 15,050 |
[
"hwchase17",
"langchain"
] | ### Feature request
Microsoft's new model called [Phi](https://www.microsoft.com/en-us/research/blog/phi-2-the-surprising-power-of-small-language-models/) seems interesting...
### Motivation
Performance of smaller models 2-3B params can compare to models with 7B params.
### Your contribution
i don't know enough to... | add Microsoft/Phi support | https://api.github.com/repos/langchain-ai/langchain/issues/15049/comments | 1 | 2023-12-22T06:28:54Z | 2024-03-29T16:07:55Z | https://github.com/langchain-ai/langchain/issues/15049 | 2,053,449,472 | 15,049 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
When I use the agent tool, I need to verify whether there are parameters in the problem. If there are no parameters, I will remind the user to input them. How can I implement this?
### Suggestion:
_No response_ | When I use the agent tool, I need to verify whether there are parameters in the problem. If there are no parameters, I will remind the user to input them. How can I implement this? | https://api.github.com/repos/langchain-ai/langchain/issues/15048/comments | 2 | 2023-12-22T06:18:00Z | 2024-03-29T16:07:50Z | https://github.com/langchain-ai/langchain/issues/15048 | 2,053,440,880 | 15,048 |
[
"hwchase17",
"langchain"
] | ### System Info
Langchain==0.0.352
MacOS intel version
python==3.8.3
networkx==2.7.1
### Who can help?
@hwchase17 @agola11 I have an issue when initializing GraphQAChain using networkx graph.
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X... | GraphQAChain not working when using Networkx graphs !! | https://api.github.com/repos/langchain-ai/langchain/issues/15046/comments | 6 | 2023-12-22T03:28:50Z | 2024-07-27T15:00:45Z | https://github.com/langchain-ai/langchain/issues/15046 | 2,053,324,243 | 15,046 |
[
"hwchase17",
"langchain"
] | ### System Info
LangChain 0.0.348
langchain-nvidia-trt 0.0.1rc0
Python 3.11
### Who can help?
@jdye64
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Template... | NVIDIA Triton+TRT-LLM connector needs to handle dynamic model parameters | https://api.github.com/repos/langchain-ai/langchain/issues/15045/comments | 2 | 2023-12-22T02:17:48Z | 2024-06-08T16:08:15Z | https://github.com/langchain-ai/langchain/issues/15045 | 2,053,274,172 | 15,045 |
[
"hwchase17",
"langchain"
] | https://github.com/langchain-ai/langchain/blob/2460f977c5c20073b41803c41fd08945be34cd60/libs/langchain/langchain/agents/output_parsers/openai_functions.py#L49
Eeven though you pass your customized function, gpt-3.5 will often return a function call which name is "python" and the arguments are not in json format.
... | BUG! The arguments of function calling returned by gpt 3.5 might not be a dict | https://api.github.com/repos/langchain-ai/langchain/issues/15043/comments | 3 | 2023-12-22T01:48:52Z | 2024-04-10T16:14:54Z | https://github.com/langchain-ai/langchain/issues/15043 | 2,053,256,211 | 15,043 |
[
"hwchase17",
"langchain"
] | ### System Info
The example found [here](https://python.langchain.com/docs/integrations/vectorstores/azuresearch) and in particular this code fragment
```
embeddings: OpenAIEmbeddings = OpenAIEmbeddings(deployment=model, chunk_size=1)
index_name: str = "langchain-vector-demo"
vector_store: AzureSearch = AzureSear... | AzureSearch Bug -- langchain.vectorstores.azuresearch | https://api.github.com/repos/langchain-ai/langchain/issues/15039/comments | 5 | 2023-12-21T23:45:19Z | 2024-06-01T00:07:40Z | https://github.com/langchain-ai/langchain/issues/15039 | 2,053,180,711 | 15,039 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Seeing an issue in my code that appeared out of nowhere, hoping for some support here. The error message I am seeing is `ValueError: Could not parse output: Answer to inquiry from OpenAI. Score: 90` from the output_parsers/regex.py (https://github.com/langchain-ai/langchain/blob/v0.0.25... | Issue: ValueError: Could not parse output: | https://api.github.com/repos/langchain-ai/langchain/issues/15037/comments | 1 | 2023-12-21T23:20:12Z | 2024-03-28T16:08:43Z | https://github.com/langchain-ai/langchain/issues/15037 | 2,053,166,082 | 15,037 |
[
"hwchase17",
"langchain"
] | ## Describe the problem
When do inference (with the llm or chat model) we pass a empty list to POST request, when the "stop" attribute of `_create_stream` is not set.
This create a problem when using model, bc ollama override the stop sequence list of the model with the list we pass in the request
## Solution
I... | Ollama integration: The stop sequence is empty when do inference | https://api.github.com/repos/langchain-ai/langchain/issues/15024/comments | 3 | 2023-12-21T19:04:00Z | 2024-03-28T16:08:37Z | https://github.com/langchain-ai/langchain/issues/15024 | 2,052,931,048 | 15,024 |
[
"hwchase17",
"langchain"
] | ### Feature request
I am developing an application using Langchain with the following code
```
from langchain.agents.agent_types import AgentType
from langchain.chat_models import ChatOpenAI
from langchain_experimental.agents.agent_toolkits import create_pandas_dataframe_agent
import pandas as pd
from langch... | It's possible agent Langchain return a Pandas Dataframe? | https://api.github.com/repos/langchain-ai/langchain/issues/15020/comments | 1 | 2023-12-21T18:54:13Z | 2024-03-28T16:08:32Z | https://github.com/langchain-ai/langchain/issues/15020 | 2,052,920,653 | 15,020 |
[
"hwchase17",
"langchain"
] | ### System Info
I am using below packages.
Python 3.12.1
langchain 0.0.352
pydantic 2.5.2
openai 1.4.0
huggingface-hub 0.19.4
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embeddi... | AttributeError while executing VectorstoreIndexCreator & DocArrayInMemorySearch | https://api.github.com/repos/langchain-ai/langchain/issues/15016/comments | 10 | 2023-12-21T16:34:48Z | 2024-04-05T16:07:25Z | https://github.com/langchain-ai/langchain/issues/15016 | 2,052,739,238 | 15,016 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
# below is my code
`def generate_custom_prompt(query):
# Create the custom prompt template
custom_prompt_template = f"""You are a chatbot designed to provide helpful answers to user questions. If you encounter a question for which you don't know the answer, please respond... | Issue: not getting output as per my prompt template | https://api.github.com/repos/langchain-ai/langchain/issues/15014/comments | 5 | 2023-12-21T15:28:03Z | 2024-04-18T16:34:57Z | https://github.com/langchain-ai/langchain/issues/15014 | 2,052,630,688 | 15,014 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I am following the documentation on https://python.langchain.com/docs/modules/agents/, but as I only have access to an Azure deployment of OpenAI, there is a small deviance from the tutorial. When running the code:
`from langchain.chat_models import AzureChatOpenAI
from langchain.age... | Error message when adhering to Agents Langchain documentation only substituting OpenAI with AzureOpenAI: "openai.NotFoundError: Error code: 404 - {'error': {'message': 'Unrecognized request argument supplied: functions', 'type': 'invalid_request_error', 'param': None, 'code': None}}" | https://api.github.com/repos/langchain-ai/langchain/issues/15012/comments | 2 | 2023-12-21T14:47:39Z | 2024-04-24T16:40:51Z | https://github.com/langchain-ai/langchain/issues/15012 | 2,052,564,532 | 15,012 |
[
"hwchase17",
"langchain"
] | ### System Info
Everything is latest
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Docum... | In Pypdf loader "/n " is not removed before creating documents | https://api.github.com/repos/langchain-ai/langchain/issues/15011/comments | 7 | 2023-12-21T14:05:31Z | 2024-08-03T07:40:18Z | https://github.com/langchain-ai/langchain/issues/15011 | 2,052,488,253 | 15,011 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
To fit the prompt from openai to some other local LLM, what's the best way to update the prompts in langchain.chains.query_constructor.prompt while keeping the rest code the same?
### Suggestion:
_No response_ | Issue: update prompts for local LLM | https://api.github.com/repos/langchain-ai/langchain/issues/15008/comments | 8 | 2023-12-21T12:27:35Z | 2024-04-23T17:06:13Z | https://github.com/langchain-ai/langchain/issues/15008 | 2,052,333,901 | 15,008 |