issue_owner_repo | issue_body | issue_title | issue_comments_url | issue_comments_count | issue_created_at | issue_updated_at | issue_html_url | issue_github_id | issue_number |
|---|---|---|---|---|---|---|---|---|---|
[
"hwchase17",
"langchain"
] | ### System Info
langchain v0.1.0
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
-... | Intercepting the output message in a callback handler before it is sent to the output parser | https://api.github.com/repos/langchain-ai/langchain/issues/15830/comments | 5 | 2024-01-10T17:27:36Z | 2024-04-18T16:21:24Z | https://github.com/langchain-ai/langchain/issues/15830 | 2,074,843,378 | 15,830 |
[
"hwchase17",
"langchain"
] | ### System Info
Langchain 0.1.0
python 3.10
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] D... | `chain.get_graph()` doesn't play nicely with `chain.map()` or `list[str]` | https://api.github.com/repos/langchain-ai/langchain/issues/15828/comments | 1 | 2024-01-10T17:15:20Z | 2024-04-17T16:18:38Z | https://github.com/langchain-ai/langchain/issues/15828 | 2,074,820,818 | 15,828 |
[
"hwchase17",
"langchain"
] | ### System Info
python=3.11
langchain= latest
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
-... | KeyError: 'llm' in create_pandas_dataframe_agent | https://api.github.com/repos/langchain-ai/langchain/issues/15819/comments | 4 | 2024-01-10T13:34:12Z | 2024-04-18T16:36:53Z | https://github.com/langchain-ai/langchain/issues/15819 | 2,074,391,510 | 15,819 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Can someone please help me pass a llamaCPP instance into langchain's conversational retrieval chain that uses a retriever.
### Suggestion:
_No response_ | using LLamaCPP with conversational retrieval chain. | https://api.github.com/repos/langchain-ai/langchain/issues/15818/comments | 1 | 2024-01-10T13:29:03Z | 2024-04-17T16:16:51Z | https://github.com/langchain-ai/langchain/issues/15818 | 2,074,381,981 | 15,818 |
[
"hwchase17",
"langchain"
] | ### Feature request
The current document_loaders accept a file path to process. But most of the time, especially if the application is deployed somewhere, the file is uploaded by a user and does not exist on the file system.
Writing those in-memory bytes to disk and re-reading them is an unnecessary step.
It would be good to take BytesIO or... | document_loaders to support BytesIO or an interface for in-memory objects | https://api.github.com/repos/langchain-ai/langchain/issues/15815/comments | 6 | 2024-01-10T12:36:25Z | 2024-04-17T16:20:25Z | https://github.com/langchain-ai/langchain/issues/15815 | 2,074,285,594 | 15,815 |
[
"hwchase17",
"langchain"
] | ### System Info
LC version: 0.1.0
Platform: MacOS
Python version: 3.12.1
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prom... | AzureChatOpenAI: `Configuration key azure_deployment not found in client` | https://api.github.com/repos/langchain-ai/langchain/issues/15814/comments | 1 | 2024-01-10T12:30:52Z | 2024-04-17T16:27:44Z | https://github.com/langchain-ai/langchain/issues/15814 | 2,074,275,377 | 15,814 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
I have tried several tests, even the most basic example in the doc; nothing. Disappointed, because it got me super excited at first:
from langchain_experimental.llms.ollama_functions import OllamaFunctions
from langchain.schema import HumanMessage
model = OllamaFunctions(... | DOC: <https://python.langchain.com/docs/integrations/chat/ollama_functions 'DOC: ' prefix>ollamafunctions not working at all | https://api.github.com/repos/langchain-ai/langchain/issues/15808/comments | 2 | 2024-01-10T09:17:02Z | 2024-07-04T16:07:33Z | https://github.com/langchain-ai/langchain/issues/15808 | 2,073,927,465 | 15,808 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
import os
from langchain.prompts.prompt import PromptTemplate
from langchain.llms import OpenAI
from langchain.chains.question_answering import load_qa_chain
from langchain.chains import (
ConversationalRetrievalChain,
LLMChain
)
from langchain.callbacks.streaming_stdou... | Why can't I store and retrieve vectors? Please help me fix it. | https://api.github.com/repos/langchain-ai/langchain/issues/15806/comments | 1 | 2024-01-10T08:47:26Z | 2024-04-17T16:17:52Z | https://github.com/langchain-ai/langchain/issues/15806 | 2,073,877,371 | 15,806 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
from langchain.chains import ConversationalRetrievalChain, ConversationChain, LLMChain
from langchain.memory import ConversationTokenBufferMemory
from langchain_community.chat_models import ChatOpenAI
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_cor... | Why can't you search vectors? | https://api.github.com/repos/langchain-ai/langchain/issues/15804/comments | 1 | 2024-01-10T07:35:35Z | 2024-04-17T16:22:20Z | https://github.com/langchain-ai/langchain/issues/15804 | 2,073,772,510 | 15,804 |
[
"hwchase17",
"langchain"
] | ### System Info
Issue with current documentation:
I was reading the documentation and in the modules/model_io/concepts page noticed a minor issue with the pagination navigation. Both the "Previous" and "Next" links currently point to the same page ('model_io'), which may lead to confusion for users.
![Screenshot... | DOC: modules/model_io/concepts in documentation | https://api.github.com/repos/langchain-ai/langchain/issues/15803/comments | 1 | 2024-01-10T07:33:55Z | 2024-04-17T16:17:13Z | https://github.com/langchain-ai/langchain/issues/15803 | 2,073,770,325 | 15,803 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
from langchain.chains import ConversationalRetrievalChain, ConversationChain, LLMChain
from langchain.memory import ConversationTokenBufferMemory
from langchain_community.chat_models import ChatOpenAI
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_cor... | Why can't I pass 'res' to embedding_array and perform vector search? Also, please help me find out where else I might be going wrong | https://api.github.com/repos/langchain-ai/langchain/issues/15802/comments | 1 | 2024-01-10T06:35:58Z | 2024-04-17T16:25:14Z | https://github.com/langchain-ai/langchain/issues/15802 | 2,073,696,335 | 15,802 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain==0.0.350
python==3.9.2rc1
### Who can help?
@agola11
Sample code
```
from langchain.output_parsers import ResponseSchema, StructuredOutputParser
from langchain.prompts import PromptTemplate
response_schemas = [
ResponseSchema(name="result", description="answer to the user's q... | Encounter Error (KeyError: {'format_instructions'})while using StructuredOutputParser | https://api.github.com/repos/langchain-ai/langchain/issues/15801/comments | 2 | 2024-01-10T06:00:17Z | 2024-06-14T16:08:42Z | https://github.com/langchain-ai/langchain/issues/15801 | 2,073,649,435 | 15,801 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain==0.0.352
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Documen... | printing intermediate output from RAG chains | https://api.github.com/repos/langchain-ai/langchain/issues/15800/comments | 3 | 2024-01-10T05:52:27Z | 2024-01-11T01:16:50Z | https://github.com/langchain-ai/langchain/issues/15800 | 2,073,641,136 | 15,800 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Please look at the example below. I used ChatPromptTemplate to chat with GPT, and the output always has an "AI: " prefix; how can I remove it?
```python
def chat(self, messages):
history = [("system", SYSTEM)]
for message in messages:
if message["role... | Issue: How to use ChatPromptTemplate? | https://api.github.com/repos/langchain-ai/langchain/issues/15797/comments | 5 | 2024-01-10T05:00:37Z | 2024-07-10T16:05:40Z | https://github.com/langchain-ai/langchain/issues/15797 | 2,073,590,016 | 15,797 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I am developing in a Colab environment and I have Typing_Extensions Issue
Package Version
-------------------------------- ---------------------
absl-py 1.4.0
aiohttp 3.9.1
aiosignal ... | Issue: ImportError in Langchain Community Library When Importing OpenAI Package Due to Typing_Extensions Issue | https://api.github.com/repos/langchain-ai/langchain/issues/15795/comments | 1 | 2024-01-10T03:30:46Z | 2024-04-17T16:33:20Z | https://github.com/langchain-ai/langchain/issues/15795 | 2,073,517,905 | 15,795 |
[
"hwchase17",
"langchain"
] | ### System Info
```
end_response = chain.run(
input=input["input"],
question=input["question"],
callbacks=[StreamingHandler()],
tags=tags,
)
```
```
StreamingHandler() is an extension of the langchain class `BaseCallback... | create_structured_output_chain doesn't invoke the given callback and on_llm_new_token with tokens | https://api.github.com/repos/langchain-ai/langchain/issues/15790/comments | 2 | 2024-01-10T02:43:26Z | 2024-04-18T16:21:24Z | https://github.com/langchain-ai/langchain/issues/15790 | 2,073,482,807 | 15,790 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain==0.0.352
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Documen... | GoogleCloudEnterpriseSearchRetriever returned 'datastore not found' error even with the 'us' configurations | https://api.github.com/repos/langchain-ai/langchain/issues/15785/comments | 7 | 2024-01-10T00:05:52Z | 2024-01-22T23:17:32Z | https://github.com/langchain-ai/langchain/issues/15785 | 2,073,361,082 | 15,785 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I am currently utilizing LangChain version 0.0.335 in my Fast API Python application. In the main.py file, the following code snippet is implemented:
main.py
```
streaming_model = ChatOpenAI(
model_name="gpt-4",
temperature=0.1,
ope... | Issue with LangChain v0.0.335 - Error in ChatOpenAI Callbacks Expected Runnable Instances | https://api.github.com/repos/langchain-ai/langchain/issues/15779/comments | 4 | 2024-01-09T21:13:36Z | 2024-03-02T01:26:21Z | https://github.com/langchain-ai/langchain/issues/15779 | 2,073,178,041 | 15,779 |
[
"hwchase17",
"langchain"
] | ### System Info
**Platform**: Ubuntu 22.04
**Python**: 3.10
**Langchain**:
langchain 0.1.0
langchain-community 0.0.10
langchain-core 0.1.8
langchain-openai 0.0.2
langsmith 0... | chain.batch() doesn't use config options properly (max concurrency) | https://api.github.com/repos/langchain-ai/langchain/issues/15767/comments | 9 | 2024-01-09T18:34:52Z | 2024-06-11T15:43:01Z | https://github.com/langchain-ai/langchain/issues/15767 | 2,072,940,890 | 15,767 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
How can EnsembleRetriever be called asynchronously? I have a dataset with ~1k questions and I wish to find the documents that can best answer each of them. However, calling it sequentially takes a lot of time. Can I run the retriever in parallel for all rows (or chunks of it)? Or is ther... | Async with EnsembleRetriever | https://api.github.com/repos/langchain-ai/langchain/issues/15764/comments | 6 | 2024-01-09T17:13:31Z | 2024-04-18T17:00:46Z | https://github.com/langchain-ai/langchain/issues/15764 | 2,072,810,448 | 15,764 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I am developing a Streamlit application where I aim to stream the agent's responses to the UI. Previously, I was able to achieve this by utilizing chains with a simple call to ```chain.stream()```. However, after switching to agents, I cannot stream its response in the same way given tha... | Issue: Streaming agent's response to Streamlit UI | https://api.github.com/repos/langchain-ai/langchain/issues/15747/comments | 1 | 2024-01-09T13:06:25Z | 2024-01-09T14:42:49Z | https://github.com/langchain-ai/langchain/issues/15747 | 2,072,340,407 | 15,747 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Everything was working fine, but now I'm suddenly receiving all sorts of LangChain deprecation warnings.
I installed the langchain_openai package and also installed langchain_community package too and replaced all the imports with the suggested ones in the error. It went well but now I'm s... | Issue: LangChainDeprecationWarning: The function `__call__` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead. warn_deprecated( | https://api.github.com/repos/langchain-ai/langchain/issues/15741/comments | 11 | 2024-01-09T10:53:09Z | 2024-04-14T20:26:29Z | https://github.com/langchain-ai/langchain/issues/15741 | 2,072,124,775 | 15,741 |
[
"hwchase17",
"langchain"
] | ### System Info
From pyproject.toml:
python=3.11.5
crewai = "0.1.6"
langchain = '==0.0.335'
openai = '==0.28.1'
unstructured = '==0.10.25'
pyowm = '3.3.0'
tools = "^0.1.9"
wikipedia = "1.4.0"
yfinance = "0.2.33"
sec-api = "1.0.17"
tiktoken = "0.5.2"
faiss-cpu = "1.7.4"
python-dotenv = "1.0.0"
### Who c... | Connection error caused failure to post http://localhost:1984/runs in LangSmith API. | https://api.github.com/repos/langchain-ai/langchain/issues/15739/comments | 2 | 2024-01-09T10:36:08Z | 2024-04-25T16:17:04Z | https://github.com/langchain-ai/langchain/issues/15739 | 2,072,094,761 | 15,739 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
import os
from urllib.parse import quote_plus
from langchain.vectorstores.pgvector import PGVector
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationTokenBufferMemory
from langchain_commun... | How can I specify my own designated table for vector search to retrieve vector data for comparison and provide a response to OpenAI for reference? Also, I noticed the prompt disappeared | https://api.github.com/repos/langchain-ai/langchain/issues/15735/comments | 2 | 2024-01-09T08:59:07Z | 2024-04-16T16:20:31Z | https://github.com/langchain-ai/langchain/issues/15735 | 2,071,916,307 | 15,735 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
import os
from urllib.parse import quote_plus
from langchain.vectorstores.pgvector import PGVector
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationTokenBufferMemory
from langchain... | How can I specify my own designated table for vector search to retrieve vector data for comparison and provide a response to OpenAI for reference? Also, I noticed the prompt disappeared. | https://api.github.com/repos/langchain-ai/langchain/issues/15734/comments | 4 | 2024-01-09T08:39:38Z | 2024-01-09T08:48:55Z | https://github.com/langchain-ai/langchain/issues/15734 | 2,071,885,054 | 15,734 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
I just installed langchain 0.1.0 and according to the documentation
https://api.python.langchain.com/en/latest/_modules/langchain_openai/chat_models/azure.html#
AzureChatOpenAI should be in langchain_openai.chat_models but its instead in langchain_community.chat_models
### ... | DOC: AzureChatOpenAI in documentation | https://api.github.com/repos/langchain-ai/langchain/issues/15733/comments | 1 | 2024-01-09T08:21:31Z | 2024-04-16T16:07:23Z | https://github.com/langchain-ai/langchain/issues/15733 | 2,071,858,324 | 15,733 |
[
"hwchase17",
"langchain"
] | ### System Info
This is a random occurrence, perhaps after I ask many questions.
When it happens, only clearing the memory recovers it.
the code to ask:
async for chunk in runnable.astream( #or call astream_log
question,
config
):
... | LangchainTracer.on_llm_error callback: IndexError('list index out of range') | https://api.github.com/repos/langchain-ai/langchain/issues/15732/comments | 3 | 2024-01-09T07:20:53Z | 2024-04-17T16:32:32Z | https://github.com/langchain-ai/langchain/issues/15732 | 2,071,773,809 | 15,732 |
[
"hwchase17",
"langchain"
] | ### System Info
- Langchain 0.1.0
- PHP 6 (a.k.a. Python 3.11.7)
- Windows 9 (a.k.a. Fedora 39)
<details><summary>requirements.txt</summary>
- aiohttp==3.9.1
- aiosignal==1.3.1
- annotated-types==0.6.0
- anyio==4.2.0
- argon2-cffi==23.1.0
- argon2-cffi-bindings==21.2.0
- arrow==1.3.0
- asgiref==3.7.2
- a... | DirectoryLoader use_multithreading inconsistent behavior between true and false (and issue with UnstructuredFileLoader and .json files) | https://api.github.com/repos/langchain-ai/langchain/issues/15731/comments | 2 | 2024-01-09T06:38:32Z | 2024-07-23T16:07:11Z | https://github.com/langchain-ai/langchain/issues/15731 | 2,071,722,976 | 15,731 |
[
"hwchase17",
"langchain"
] | ### System Info
Ubuntu 20.04
I got this while reading a book pdf with extract_images=True.
[113](https://file+.vscode-resource.vscode-cdn.net/home/karan/kj_workspace/kj_argentelm/risk_assessment/backend/~/anaconda3/envs/python39/lib/python3.9/site-packages/langchain_community/document_loaders/parsers/pdf.py:113... | ValueError: cannot reshape array of size 293 into shape (193,121,newaxis) | https://api.github.com/repos/langchain-ai/langchain/issues/15730/comments | 1 | 2024-01-09T05:52:40Z | 2024-01-09T14:40:30Z | https://github.com/langchain-ai/langchain/issues/15730 | 2,071,675,818 | 15,730 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I've seen in the langchain documentation code for vector search in Neo4j which takes `OpenAIEmbeddings()` as an object parameter in order to make an embedding for the input query
```python
index_name = "vector" # default index name
store = Neo4jVector.from_existing_index(
OpenAIE... | Issue: mechanism of embedding parameters in Neo4j Vector object | https://api.github.com/repos/langchain-ai/langchain/issues/15729/comments | 1 | 2024-01-09T04:39:49Z | 2024-01-10T02:55:08Z | https://github.com/langchain-ai/langchain/issues/15729 | 2,071,616,292 | 15,729 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
import os
from urllib.parse import quote_plus
from langchain.vectorstores.pgvector import PGVector
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationTokenBufferMemory
from langchain_commun... | Unable to retrieve my prompt when starting the conversation | https://api.github.com/repos/langchain-ai/langchain/issues/15728/comments | 2 | 2024-01-09T03:13:25Z | 2024-01-09T14:39:02Z | https://github.com/langchain-ai/langchain/issues/15728 | 2,071,551,848 | 15,728 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I have a question: is it possible to get two different types of answers from one prompt? I want my question to be turned from NLP into SQL queries, or else return a common answer from ChatGPT; for example, for "show data purchasing" the answer will be queries, and if the question is "show me rate usd today" it... | promt result | https://api.github.com/repos/langchain-ai/langchain/issues/15719/comments | 1 | 2024-01-08T19:53:03Z | 2024-01-08T19:53:28Z | https://github.com/langchain-ai/langchain/issues/15719 | 2,071,122,353 | 15,719 |
[
"hwchase17",
"langchain"
] | ### System Info
```➜ ~ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 22.04.3 LTS
Release: 22.04
Codename: jammy
```
```
In [2]: langchain.__version__
Out[2]: '0.0.354'
```
```
In [4]: from langchain_core import __version__
In [5]: __version__
... | Extraction: create_extraction_chain_pydantic | https://api.github.com/repos/langchain-ai/langchain/issues/15715/comments | 3 | 2024-01-08T19:11:32Z | 2024-03-08T16:39:50Z | https://github.com/langchain-ai/langchain/issues/15715 | 2,071,064,930 | 15,715 |
[
"hwchase17",
"langchain"
] | ### Feature request
I'm trying to extend AgentExecutor with custom logic, and I want to override how the agent performs actions.
What I'd really need is only to override the `_aperform_agent_action` function; however, this function is defined inside the `_aiter_next_step` function, making it necessary to override the whol... | Extract _aperform_agent_action from _aiter_next_step from AgentExecutor | https://api.github.com/repos/langchain-ai/langchain/issues/15706/comments | 1 | 2024-01-08T14:12:40Z | 2024-01-24T02:22:10Z | https://github.com/langchain-ai/langchain/issues/15706 | 2,070,544,706 | 15,706 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
Dear all
I have this pipeline
```python
translation_cache = ToJSON(key=key, out_dir=Path("results/sabadel/translation"))
translation_prompt = Prompt.from_yaml(Path("prompts/translate.yml"))
translation_chain = (
{
"transcription": lambda data: format_transcr... | DOC: Data Pipeline for humans | https://api.github.com/repos/langchain-ai/langchain/issues/15705/comments | 3 | 2024-01-08T14:10:39Z | 2024-01-09T14:42:08Z | https://github.com/langchain-ai/langchain/issues/15705 | 2,070,541,205 | 15,705 |
[
"hwchase17",
"langchain"
] | ### Feature request
Feature request
They provide a [python client](https://docs.mistral.ai/platform/endpoints/) to access the embedding model
### Motivation
It would be great if we added the new embedding service from Mistral!
### Your contribution
I can work on this and submit a PR | Add support for the Mistral AI Embedding Model | https://api.github.com/repos/langchain-ai/langchain/issues/15702/comments | 2 | 2024-01-08T12:35:54Z | 2024-04-16T16:15:00Z | https://github.com/langchain-ai/langchain/issues/15702 | 2,070,370,106 | 15,702 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
Hi.
I am a newcomer to Langchain following the Quickstart tutorial in a Jupyter Notebook, using the setup recommended by the installation guide. I am following the OpenAI tutorial, rather than the local LLM version.
I followed the exact code in the docs by pasting the cells ... | DOC: Quickstart Code Fails for Retrieval Chain | https://api.github.com/repos/langchain-ai/langchain/issues/15700/comments | 5 | 2024-01-08T10:23:26Z | 2024-01-08T15:54:43Z | https://github.com/langchain-ai/langchain/issues/15700 | 2,070,146,142 | 15,700 |
[
"hwchase17",
"langchain"
] | ### System Info
Python 3.10.12
langchain 0.0.354
### Who can help?
@hwch
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] ... | TypeError: SlackGetChannel._run() got multiple values for argument 'run_manager' | https://api.github.com/repos/langchain-ai/langchain/issues/15698/comments | 2 | 2024-01-08T09:58:38Z | 2024-04-15T16:25:31Z | https://github.com/langchain-ai/langchain/issues/15698 | 2,070,099,650 | 15,698 |
[
"hwchase17",
"langchain"
] | ### System Info
Chroma 0.4.22
Langchain 0.0.354
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- ... | SelfQueryRetriever, ValueError: Expected where operand value to be a str, int, float, or list of those type | https://api.github.com/repos/langchain-ai/langchain/issues/15696/comments | 11 | 2024-01-08T09:48:39Z | 2024-06-10T14:52:24Z | https://github.com/langchain-ai/langchain/issues/15696 | 2,070,080,675 | 15,696 |
[
"hwchase17",
"langchain"
] | ### Feature request
- I want the local LLM (IlamaCpp) to maintain its context, which will significantly improve the efficiency of follow-up questions.
- Currently, the context of IlamaCpp is lost after the first call, necessitating the reprocessing of all tokens for any subsequent question.
- **Proposed Solution:** ... | Reuse KV-Cache with local LLM (IlamaCpp) instead of expensive reprocessing of all history tokens | https://api.github.com/repos/langchain-ai/langchain/issues/15695/comments | 3 | 2024-01-08T09:47:45Z | 2024-03-23T22:37:54Z | https://github.com/langchain-ai/langchain/issues/15695 | 2,070,079,179 | 15,695 |
[
"hwchase17",
"langchain"
] | ### Feature request
Every time i create a milvus object, i load the collection, but there is no way to dynamically know the replica_number of the currently loaded collection, so there is a disadvantage that i have to hand over the different replica_number for each collection as an argument. Therefore, when creating ... | feat: add a flag that determines whether to load the milvus collection | https://api.github.com/repos/langchain-ai/langchain/issues/15694/comments | 1 | 2024-01-08T09:14:35Z | 2024-01-15T19:25:25Z | https://github.com/langchain-ai/langchain/issues/15694 | 2,070,024,246 | 15,694 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
<pre>
```
def generate_custom_prompt(query=None, name=None, not_uuid=None, chroma_db_path=None):
check = query.lower()
embedding = OpenAIEmbeddings()
vectordb = Chroma(persist_directory=chroma_db_path, embedding_function=embedding)
retriever = vectordb.as_retriever(... | Issue: How to add chat history in prompt template | https://api.github.com/repos/langchain-ai/langchain/issues/15692/comments | 5 | 2024-01-08T08:54:42Z | 2024-04-15T16:20:34Z | https://github.com/langchain-ai/langchain/issues/15692 | 2,069,993,594 | 15,692 |
[
"hwchase17",
"langchain"
] | ### Feature request
I suggest supporting the Milvus vector database's new [Dynamic Schema](https://milvus.io/docs/dynamic_schema.md) feature.
### Motivation
According to Milvus:
> Dynamic schema enables users to insert entities with new fields into a Milvus collection without modifying the existing schema. This... | Add Dynamic Schema support for the Milvus vector store | https://api.github.com/repos/langchain-ai/langchain/issues/15690/comments | 3 | 2024-01-08T08:06:51Z | 2024-08-07T16:06:24Z | https://github.com/langchain-ai/langchain/issues/15690 | 2,069,926,013 | 15,690 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
import os
from urllib.parse import quote_plus
from langchain.vectorstores.pgvector import PGVector
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationTokenBufferMemory
from langchain_commun... | Issue: <How can I extract vector data from pgvector for use as a reference in the next conversation to enable long-term memory functionality for my chatbot?> | https://api.github.com/repos/langchain-ai/langchain/issues/15689/comments | 1 | 2024-01-08T07:57:56Z | 2024-04-15T16:24:00Z | https://github.com/langchain-ai/langchain/issues/15689 | 2,069,914,553 | 15,689 |
[
"hwchase17",
"langchain"
] | ### Feature request
from langchain_experimental.sql import SQLDatabaseChain
from langchain.sql_database import SQLDatabase
I'm using the above packages to connect the databricks database(SQLDatabse)and passing it to the model chain(SQLDatabaseChain) to generate the SQLQuery. But I want to close the connection of the... | No close() functionality in langchain.sql_database import SQLDatabase package | https://api.github.com/repos/langchain-ai/langchain/issues/15687/comments | 1 | 2024-01-08T07:38:59Z | 2024-04-15T16:15:25Z | https://github.com/langchain-ai/langchain/issues/15687 | 2,069,891,752 | 15,687 |
[
"hwchase17",
"langchain"
] | Hi,
I have built a RAG app with RetrievalQA and now wanted to try out a new approach. I am using an English LLM but the responses should be in German. E.g. if the user asks something in German "Hallo, wer bist du?", the user query should be translated to "Hello, who are you?" before feeding it into the RAG pipeline.... | Translate User Query and Model Response in RetrievalQA Chain | https://api.github.com/repos/langchain-ai/langchain/issues/15686/comments | 1 | 2024-01-08T07:34:14Z | 2024-04-15T16:37:21Z | https://github.com/langchain-ai/langchain/issues/15686 | 2,069,885,942 | 15,686 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
import os
from urllib.parse import quote_plus
from langchain.vectorstores.pgvector import PGVector
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationTokenBufferMemory
from langchain_comm... | Issue: <How can I store 'res' in a vector database, and have a vector retrieval query for the best solution every time there's an input, to achieve long-term memory for OpenAI responses? Please help me modify this string: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/15685/comments | 2 | 2024-01-08T07:33:59Z | 2024-04-15T16:20:22Z | https://github.com/langchain-ai/langchain/issues/15685 | 2,069,885,649 | 15,685 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
User: Help me reset my password
Agent: Please provide your account number
User: My account is Axxx
Agent: SMS verification code has been sent, please provide SMS verification code
User: 091839
Agent: Account password has been reset to 123456
The Agent is responsible for resetting... | How to use tools for tasks that are dependent on each other | https://api.github.com/repos/langchain-ai/langchain/issues/15684/comments | 1 | 2024-01-08T07:14:13Z | 2024-04-15T16:15:21Z | https://github.com/langchain-ai/langchain/issues/15684 | 2,069,861,793 | 15,684 |
[
"hwchase17",
"langchain"
] | ### System Info
Langchain 0.1.0
Python 3.10.12
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
-... | AgentExecutor.from_agent_and_tools(agent=agent, tools=tools) -> throws KeyError. | https://api.github.com/repos/langchain-ai/langchain/issues/15679/comments | 4 | 2024-01-08T05:19:09Z | 2024-01-08T05:43:30Z | https://github.com/langchain-ai/langchain/issues/15679 | 2,069,692,507 | 15,679 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
I created an app using AzureOpenAI, and initially, the import statement worked fine:
```
from langchain.chat_models import AzureChatOpenAI
```
My original version details were:
```
langchain==0.0.352
langchain-community==0.0.6
langchain-core==0.1.3
openai==1.6.1
``... | class `AzureChatOpenAI` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use langchain_openai.AzureChatOpenAI instead. | https://api.github.com/repos/langchain-ai/langchain/issues/15674/comments | 2 | 2024-01-08T03:59:37Z | 2024-04-16T16:14:59Z | https://github.com/langchain-ai/langchain/issues/15674 | 2,069,592,782 | 15,674 |
[
"hwchase17",
"langchain"
] | ### Feature request
Hi, I am trying to use ConversationalRetrievalChain with Azure Cognitive Search as retriever with streaming capabilities enabled. The code is not providing the output in a streaming manner. I would like to know if there is any such feature which is supported using Langchain combining Azure Cognit... | Support for ConversationalRetrievalChain with Azure Cognitive Search as retriever and Azure Open AI as LLM for Streaming Output | https://api.github.com/repos/langchain-ai/langchain/issues/15673/comments | 2 | 2024-01-08T03:42:19Z | 2024-04-15T16:44:18Z | https://github.com/langchain-ai/langchain/issues/15673 | 2,069,572,435 | 15,673 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.


When I deploy the server, it reports the error shown above.
... | How to manager the new Variables:TypeError: unsupported operand type(s) for |: 'dict' and 'str' | https://api.github.com/repos/langchain-ai/langchain/issues/15672/comments | 5 | 2024-01-08T01:57:00Z | 2024-04-15T16:25:16Z | https://github.com/langchain-ai/langchain/issues/15672 | 2,069,449,221 | 15,672 |
[
"hwchase17",
"langchain"
] | ### System Info
Hi, I am encountering this error when trying to import anything from the `langchain.embeddings` on Amazon linux AMI with python 3.9 and `langchain==0.0.350`
```python
Traceback (most recent call last):
File "/home/ec2-user/app/search/./app.py", line 9, in <module>
from search import make_... | ImportError: cannot import name '_is_openai_v1' | https://api.github.com/repos/langchain-ai/langchain/issues/15671/comments | 3 | 2024-01-08T01:46:22Z | 2024-01-08T15:49:42Z | https://github.com/langchain-ai/langchain/issues/15671 | 2,069,437,208 | 15,671 |
[
"hwchase17",
"langchain"
] | ### System Info
google-cloud-aiplatform==1.35.0,
langchain-0.0.354
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
-... | multiple ResponseSchema | https://api.github.com/repos/langchain-ai/langchain/issues/15670/comments | 3 | 2024-01-08T01:02:36Z | 2024-01-16T00:48:55Z | https://github.com/langchain-ai/langchain/issues/15670 | 2,069,393,611 | 15,670 |
[
"hwchase17",
"langchain"
] | ### System Info
google-cloud-aiplatform==1.35.0,
langchain-0.0.354
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
-... | Adding response_schemas to ChatPromptTemplate.from_messages prompt design | https://api.github.com/repos/langchain-ai/langchain/issues/15669/comments | 2 | 2024-01-07T23:58:01Z | 2024-01-08T00:59:54Z | https://github.com/langchain-ai/langchain/issues/15669 | 2,069,357,139 | 15,669 |
[
"hwchase17",
"langchain"
] | ### System Info
Langchain v0.0.354, Python v3.11, Chroma v0.4.22, Lark v1.1.8
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selec... | SelfQueryRetriever.from_llm raises following issue: ImportError: Cannot import lark, please install it with 'pip install lark'. | https://api.github.com/repos/langchain-ai/langchain/issues/15668/comments | 8 | 2024-01-07T23:44:54Z | 2024-05-15T04:41:38Z | https://github.com/langchain-ai/langchain/issues/15668 | 2,069,348,971 | 15,668 |
[
"hwchase17",
"langchain"
] | ### Feature request
It would be helpful if a RAG chain could output whether it found the answer in the reference, as a boolean value.
### Motivation
From personal ideation.
### Your contribution
N/A | found checker for RAG chain | https://api.github.com/repos/langchain-ai/langchain/issues/15667/comments | 2 | 2024-01-07T23:32:22Z | 2024-07-12T16:03:13Z | https://github.com/langchain-ai/langchain/issues/15667 | 2,069,343,504 | 15,667 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
After upgrading to LangChain 0.1.0, I received deprecation warnings and updated my imports to langchain_community, which cleared that error; I then received deprecation warnings telling me to switch `__call__` to `invoke`:
The function `__call__` was deprecated in LangChain 0.1.0 and will be removed in 0... | Issue: __call__ was deprecated use invoke instead warning persists after switching to invoke | https://api.github.com/repos/langchain-ai/langchain/issues/15665/comments | 2 | 2024-01-07T21:49:55Z | 2024-05-31T15:02:56Z | https://github.com/langchain-ai/langchain/issues/15665 | 2,069,304,783 | 15,665 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hello LangChain community,
We're always happy to see more folks getting involved in contributing to the LangChain codebase.
This is a good first issue if you want to learn more about how to set up
for development in the LangChain codebase.
## Goal
Your contribution will make... | For New Contributors: Update Integration Documentation | https://api.github.com/repos/langchain-ai/langchain/issues/15664/comments | 30 | 2024-01-07T21:22:46Z | 2024-02-12T05:19:32Z | https://github.com/langchain-ai/langchain/issues/15664 | 2,069,295,306 | 15,664 |
[
"hwchase17",
"langchain"
] | ### System Info
Langchain Version: 0.0.354 (also tried with 0.1.0)
Python version: 3.9.18
yfinance version: 0.2.35
OS: Windows 10
### Who can help?
@hwchase17 , @agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/... | using YahooFinanceNewsTool() results to KeyError: 'description' | https://api.github.com/repos/langchain-ai/langchain/issues/15656/comments | 1 | 2024-01-07T13:52:58Z | 2024-04-14T16:16:15Z | https://github.com/langchain-ai/langchain/issues/15656 | 2,069,139,043 | 15,656 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I want to use ChatOpenAI as a tool. I need to pass the agent's chat_history or context into the tool, but a tool generally only accepts a string input, so how do I pass in other parameters?
### Suggestion:
none | Issue: <Please write a comprehensive title after the 'Issue: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/15654/comments | 1 | 2024-01-07T10:35:43Z | 2024-01-07T10:55:01Z | https://github.com/langchain-ai/langchain/issues/15654 | 2,069,076,913 | 15,654 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
When clicking on Redis and then trying to redirect to GitHub to see the implementation, the page is not found
from this [integration's page](https://integrations.langchain.com/memory)
;
const tools = [...];
const executor = await ini... | Creating a conversation agent with tools and history for Ollama | https://api.github.com/repos/langchain-ai/langchain/issues/15650/comments | 2 | 2024-01-07T06:16:24Z | 2024-01-07T15:12:27Z | https://github.com/langchain-ai/langchain/issues/15650 | 2,069,007,389 | 15,650 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I have a warning when I run my langchain code "how to resolve this warning "My code has a warning "D:\anaconda3\envs\py311\Lib\site-packages\langchain\__init__.py:34: UserWarning: Importing verbose from langchain root module is no longer supported. Please use langchain.globals.set_verbos... | Issue: how to resolve this warning "My code has a warning "D:\anaconda3\envs\py311\Lib\site-packages\langchain\__init__.py:34: UserWarning: Importing verbose from langchain root module is no longer supported. Please use langchain.globals.set_verbose() / langchain.globals.get_verbose() instead. warnings.warn("" | https://api.github.com/repos/langchain-ai/langchain/issues/15647/comments | 3 | 2024-01-07T00:49:44Z | 2024-06-17T11:24:20Z | https://github.com/langchain-ai/langchain/issues/15647 | 2,068,909,302 | 15,647 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I had no issues running the langchain code before, but when I moved the callback_handler position, this warning appeared: "D:\anaconda3\envs\py311\Lib\site-packages\langchain\__init__.py:34: UserWarning: Importing verbose from langchain root module is no longer supported. Please use lang... | Issue: My code has a warning "D:\anaconda3\envs\py311\Lib\site-packages\langchain\__init__.py:34: UserWarning: Importing verbose from langchain root module is no longer supported. Please use langchain.globals.set_verbose() / langchain.globals.get_verbose() instead. warnings.warn(" | https://api.github.com/repos/langchain-ai/langchain/issues/15646/comments | 1 | 2024-01-07T00:42:11Z | 2024-01-07T00:48:07Z | https://github.com/langchain-ai/langchain/issues/15646 | 2,068,907,422 | 15,646 |
[
"hwchase17",
"langchain"
] | ### System Info
LangChain 0.0.354, Python 3.11
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Pa... | ChromaDB ParentDocumentRetriever.get_relevant_documents not returning docs despite similarity_search returning matching docs | https://api.github.com/repos/langchain-ai/langchain/issues/15644/comments | 4 | 2024-01-06T22:51:01Z | 2024-01-07T00:56:13Z | https://github.com/langchain-ai/langchain/issues/15644 | 2,068,873,967 | 15,644 |
[
"hwchase17",
"langchain"
] | ### System Info
Using...
langchain==0.0.353
langchain-core==0.1.4
Seems to have broken from yesterday's merges?
```
from langchain.chains.combine_documents.stuff import StuffDocumentsChain
--
2319 | File "/root/.local/lib/python3.9/site-packages/langchain/chains/__init__.py", line 56, in <module>
2320 | ... | Broken imports | https://api.github.com/repos/langchain-ai/langchain/issues/15643/comments | 2 | 2024-01-06T21:23:27Z | 2024-01-06T21:45:16Z | https://github.com/langchain-ai/langchain/issues/15643 | 2,068,840,009 | 15,643 |
[
"hwchase17",
"langchain"
] | ### System Info
Langchain
### Who can help?
LangChain with Gemini Pro
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Docu... | ReadTimeout with Arabic pdf files | https://api.github.com/repos/langchain-ai/langchain/issues/15639/comments | 3 | 2024-01-06T19:35:55Z | 2024-04-13T16:12:05Z | https://github.com/langchain-ai/langchain/issues/15639 | 2,068,795,849 | 15,639 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I have the following ChromaDB setup:
```python
parent_splitter = RecursiveCharacterTextSplitter(chunk_size=2000)
child_splitter = RecursiveCharacterTextSplitter(chunk_size=400)
vectorstore = None
try:
vectorstore = Chroma(persist_directory="storage/deploy/... | Issue: What docstore to use in ChromaDB that isn't in memory? | https://api.github.com/repos/langchain-ai/langchain/issues/15633/comments | 5 | 2024-01-06T10:53:10Z | 2024-03-07T10:29:16Z | https://github.com/langchain-ai/langchain/issues/15633 | 2,068,532,355 | 15,633 |
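# The setup above keeps parent documents in an in-memory docstore, so they
# vanish on restart. ParentDocumentRetriever only needs an object exposing
# mset/mget; below is a minimal file-backed sketch using only the stdlib.
# This is NOT the real LangChain LocalFileStore -- just an illustration of
# the interface, with placeholder names.
import json
from pathlib import Path
from tempfile import mkdtemp

class FileDocStore:
    def __init__(self, root):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def mset(self, pairs):
        # pairs: iterable of (key, value); values must be JSON-serializable
        for key, value in pairs:
            (self.root / key).write_text(json.dumps(value))

    def mget(self, keys):
        # missing keys yield None, mirroring LangChain's BaseStore contract
        return [
            json.loads((self.root / k).read_text())
            if (self.root / k).exists() else None
            for k in keys
        ]

store = FileDocStore(mkdtemp())
store.mset([("doc-1", {"page_content": "hello"})])
print(store.mget(["doc-1", "missing"]))  # [{'page_content': 'hello'}, None]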
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
https://python.langchain.com/docs/integrations/chat/fireworks
Hi, I'm new to LangChain with Fireworks.
I ran the code in the 'ChatFireworks' document and got an issue.
Environment: Python 3.11, Windows 10
```
# Create a simple chain with memory
chain = (
RunnablePassthrough.... | DOC: langchain with Fireworks ai | https://api.github.com/repos/langchain-ai/langchain/issues/15632/comments | 4 | 2024-01-06T10:45:41Z | 2024-04-13T16:16:17Z | https://github.com/langchain-ai/langchain/issues/15632 | 2,068,529,844 | 15,632 |
[
"hwchase17",
"langchain"
] | ### System Info
`langchain==0.1.0`
`langchain-community==0.0.9`
`langchain-core==0.1.7`
`linux 20.04`
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / ... | 'Unrecognized request argument supplied: functions' error when executing agent | following documentation | https://api.github.com/repos/langchain-ai/langchain/issues/15628/comments | 2 | 2024-01-06T06:53:43Z | 2024-01-06T07:02:47Z | https://github.com/langchain-ai/langchain/issues/15628 | 2,068,438,255 | 15,628 |
[
"hwchase17",
"langchain"
] | ### System Info
Python 3.11, Langchain 0.0.354, ChromaDB v0.4.22
### Who can help?
@agola11
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selecto... | AttributeError: module 'chromadb' has no attribute 'config' | https://api.github.com/repos/langchain-ai/langchain/issues/15616/comments | 9 | 2024-01-06T00:06:53Z | 2024-02-23T13:36:44Z | https://github.com/langchain-ai/langchain/issues/15616 | 2,068,219,804 | 15,616 |
[
"hwchase17",
"langchain"
] | ### System Info
LangChain version: 0.0.354
LangChain Community version: 0.0.8
Platform: Apple M3 Pro chip on MacOS Sonoma (MacOS 14.2.1)
Python version: 3.11.7
### Who can help?
@baskaryan has the most recent commits on this section of the code, but it was for moving items to the `langchain_community` package. I'... | `UnstructuredFileLoader` shows `TypeError: expected str, bytes or os.PathLike object, not list` when a list of files is passed in | https://api.github.com/repos/langchain-ai/langchain/issues/15607/comments | 4 | 2024-01-05T22:14:50Z | 2024-01-24T03:37:38Z | https://github.com/langchain-ai/langchain/issues/15607 | 2,068,110,708 | 15,607 |
[
"hwchase17",
"langchain"
] | ### System Info
## System Info
**LangChain Version:** 0.0.354
**Platform:** MacOS Sonoma 14.2.1
**Python Version:** 3.11.6
### Who can help?
@hwchase17
@agola11
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ]... | Playwright Browser Freezing | https://api.github.com/repos/langchain-ai/langchain/issues/15605/comments | 6 | 2024-01-05T20:59:22Z | 2024-07-06T11:44:10Z | https://github.com/langchain-ai/langchain/issues/15605 | 2,068,033,254 | 15,605 |
[
"hwchase17",
"langchain"
] | ### Discussed in https://github.com/langchain-ai/langchain/discussions/15598
<div type='discussions-op-text'>
<sup>Originally posted by **MahdiJafari1** January 5, 2024</sup>
OpenAI deprecated its `text-davinci-003` completion model. I've updated the model to `gpt-3.5-turbo-instruct`. I am encountering an issue... | Issue with LangChain Misclassifying gpt-3.5-turbo-instruct as Chat Model | https://api.github.com/repos/langchain-ai/langchain/issues/15604/comments | 3 | 2024-01-05T20:55:57Z | 2024-01-06T18:29:34Z | https://github.com/langchain-ai/langchain/issues/15604 | 2,068,029,209 | 15,604 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain 0.0.354, Windows 10,Python 3.11.5
### Who can help?
@hwchase17 @eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- ... | Issue with ConversationalRetrievalChain in LangChain - ValueError: Missing Input Keys | https://api.github.com/repos/langchain-ai/langchain/issues/15601/comments | 3 | 2024-01-05T20:09:10Z | 2024-04-15T16:25:10Z | https://github.com/langchain-ai/langchain/issues/15601 | 2,067,974,482 | 15,601 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
There is no way to view the old documentation on [the official site](https://python.langchain.com/). This makes it extremely difficult to develop. It seems as though every week there is another feature that is deleted, thus another page being deleted.
How is this acceptable? I... | DOC: Lack of Documentation Versioning on Langchain Website | https://api.github.com/repos/langchain-ai/langchain/issues/15597/comments | 2 | 2024-01-05T19:17:59Z | 2024-04-13T16:11:50Z | https://github.com/langchain-ai/langchain/issues/15597 | 2,067,913,011 | 15,597 |
[
"hwchase17",
"langchain"
] | ### System Info
Hello Langchain team,
I have encountered an error while using `AgentTokenBufferMemory` and `RedisChatMessageHistory`. The problem occurs because the buffer is not removing old messages when new ones are added. This causes an issue with OpenAI as the context window exceeds the token limit. Upon inves... | AgentTokenBufferMemory does not remove old messages, leading to the "context_length_exceeded" error from OpenAI. | https://api.github.com/repos/langchain-ai/langchain/issues/15593/comments | 2 | 2024-01-05T18:25:18Z | 2024-05-01T16:06:03Z | https://github.com/langchain-ai/langchain/issues/15593 | 2,067,842,408 | 15,593 |
[
"hwchase17",
"langchain"
] | ### Feature request
It would be really great to enhance the VectorStoreRetriever class, by allowing additional (optional) search kwargs to be passed directly to the invoke method. Right now the input type of the invoke method is str, it would be interesting to be able to receive a custom object with "query" and "filte... | Enhance Flexibility in VectorStoreRetriever by Allowing Dynamic search args in invoke Method | https://api.github.com/repos/langchain-ai/langchain/issues/15590/comments | 3 | 2024-01-05T17:29:34Z | 2024-04-12T16:11:30Z | https://github.com/langchain-ai/langchain/issues/15590 | 2,067,752,188 | 15,590 |
[
"hwchase17",
"langchain"
] | ### System Info
python: 3.11
langchain: latest
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
... | In a Chatbot to chat with SQL using openai and langchain, how to integrate the chatbot to make simple conversations | https://api.github.com/repos/langchain-ai/langchain/issues/15587/comments | 7 | 2024-01-05T14:26:07Z | 2024-04-15T16:19:09Z | https://github.com/langchain-ai/langchain/issues/15587 | 2,067,440,671 | 15,587 |
[
"hwchase17",
"langchain"
] | ### System Info
google-cloud-aiplatform==1.35.0,
langchain-0.0.354
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
-... | RAG chain response often includes "\n AI:" in front of actual response | https://api.github.com/repos/langchain-ai/langchain/issues/15586/comments | 4 | 2024-01-05T14:12:29Z | 2024-01-16T00:49:32Z | https://github.com/langchain-ai/langchain/issues/15586 | 2,067,418,916 | 15,586 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain = 0.0.354
This problem appears since commit 62d32bd
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prom... | Chroma as_retriever function with kwargs leads to unexpected keyword argument | https://api.github.com/repos/langchain-ai/langchain/issues/15585/comments | 7 | 2024-01-05T13:35:08Z | 2024-06-26T20:11:46Z | https://github.com/langchain-ai/langchain/issues/15585 | 2,067,361,958 | 15,585 |
[
"hwchase17",
"langchain"
] | ### System Info
Python 3.10.11
Mac M1
Langchain Version: 0.0.353
openai Version: 0.28.0
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates... | Clickhouse SQL Database Agent | https://api.github.com/repos/langchain-ai/langchain/issues/15584/comments | 2 | 2024-01-05T13:31:32Z | 2024-01-06T07:38:01Z | https://github.com/langchain-ai/langchain/issues/15584 | 2,067,357,036 | 15,584 |
[
"hwchase17",
"langchain"
] | Hi,
I am using langchain and llama-cpp-python to do some QA on a text file. When using the llama-2-13b-chat quantized model from [HuggingFace](https://huggingface.co/TheBloke/Llama-2-13B-chat-GGUF/blob/main/llama-2-13b-chat.Q5_K_M.gguf), I am able to create a RetrievalQA chain passing the vectorstore and prompt, but w...
[
"hwchase17",
"langchain"
] | ### System Info
In langchain_community/vectorstores/azuresearch.py, on line 656 the field name is used explicitly, which leads to an error if the index does not have the mentioned field. The suggestion is to replace
`json.loads(result["metadata"]).get("key", ""),`
with `json.loads(result[FIELDS_METADATA]).get("... | metadata is not properly processed when the field does not exists | https://api.github.com/repos/langchain-ai/langchain/issues/15581/comments | 1 | 2024-01-05T11:39:32Z | 2024-01-07T01:05:01Z | https://github.com/langchain-ai/langchain/issues/15581 | 2,067,198,112 | 15,581 |
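The proposed fix can be sketched with only the stdlib: look the metadata field up via the configured constant and fall back to a default when the field or key is absent (the names below are placeholders, not the actual azuresearch.py code):

```python
import json

FIELDS_METADATA = "metadata"  # placeholder for the configured field name

def get_metadata_value(result: dict, key: str, default: str = "") -> str:
    """Parse the metadata JSON defensively: a missing field or key
    yields the default instead of raising KeyError."""
    raw = result.get(FIELDS_METADATA)
    if raw is None:
        return default
    return json.loads(raw).get(key, default)

print(get_metadata_value({"metadata": '{"source": "a.txt"}'}, "source"))  # a.txt
print(get_metadata_value({}, "source"))  # "" (field absent, no KeyError)
```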
[
"hwchase17",
"langchain"
] | ### System Info
google-cloud-aiplatform==1.35.0,
langchain-0.0.354
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / P... | ValueError: variable chat_history should be a list of base messages, got [HumanMessage(content='input message'), "output response"] | https://api.github.com/repos/langchain-ai/langchain/issues/15580/comments | 1 | 2024-01-05T11:34:20Z | 2024-01-05T13:53:05Z | https://github.com/langchain-ai/langchain/issues/15580 | 2,067,191,288 | 15,580 |
[
"hwchase17",
"langchain"
] | ### Discussed in https://github.com/langchain-ai/langchain/discussions/5701
<div type='discussions-op-text'>
<sup>Originally posted by **rdhillbb** June 5, 2023</sup>
Newbie here.
I found an issue while importing 'VectorstoreIndexCreator'
ImportError: cannot import name 'URL' from 'sqlalchemy' (/Users/tst... | Cannot import name 'URL' from 'sqlalchemy' | https://api.github.com/repos/langchain-ai/langchain/issues/15579/comments | 5 | 2024-01-05T11:32:28Z | 2024-05-13T16:09:17Z | https://github.com/langchain-ai/langchain/issues/15579 | 2,067,188,914 | 15,579 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
below is my code
def generate_custom_prompt(query=None,name=None,not_uuid=None,chroma_db_path=None):
check = query.lower()
embedding = OpenAIEmbeddings()
vectordb = Chroma(persist_directory=chroma_db_path, embedding_function=embedding)
retriever = vectordb.as_retr... | Issue:How can I resolve memory with conversation retreival chain error? | https://api.github.com/repos/langchain-ai/langchain/issues/15577/comments | 1 | 2024-01-05T10:33:22Z | 2024-04-12T16:18:52Z | https://github.com/langchain-ai/langchain/issues/15577 | 2,067,106,521 | 15,577 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hello Team,
we are trying to use pypdf to extract the text from the PDF and use the chunks for embedding (details are in the attached code snippet). I have installed all the required packages, and it works fine on my local machine (Windows 10). With the same code snippet and requirement.txt, if i...
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
if file_path.lower().endswith(".xlsx") or file_path.lower().endswith(".xls"):
loader = UnstructuredExcelLoader(file_path, mode="elements")
document = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=100, chunk_overlap... | Issue: Not able to get the expected answers when asking answer of other column corresponding to other column | https://api.github.com/repos/langchain-ai/langchain/issues/15573/comments | 4 | 2024-01-05T08:49:04Z | 2024-04-12T16:16:29Z | https://github.com/langchain-ai/langchain/issues/15573 | 2,066,955,284 | 15,573 |
[
"hwchase17",
"langchain"
] | ### System Info
I want to use the news-api tool, and I have these setting for api key:
```
os.environ["NEWS_API_KEY"] = "9ed***"
tools = load_tools(["news-api"], llm=llm, news_api_key="9ed****", memory=memory)
```
But when the action is activated, the error is:
```
Action: Search
https://newsapi.org... | How to set api key for news-api? | https://api.github.com/repos/langchain-ai/langchain/issues/15572/comments | 2 | 2024-01-05T08:21:08Z | 2024-04-12T22:37:11Z | https://github.com/langchain-ai/langchain/issues/15572 | 2,066,921,236 | 15,572 |
[
"hwchase17",
"langchain"
] | ### System Info
Python 3.11
Langchain 0.0.354
Windows 11
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Promp... | SQLDatabaseToolkit - httpx.UnsupportedProtocol: Request URL is missing an 'http://' or 'https://' protocol | https://api.github.com/repos/langchain-ai/langchain/issues/15567/comments | 1 | 2024-01-05T04:54:04Z | 2024-04-12T16:19:05Z | https://github.com/langchain-ai/langchain/issues/15567 | 2,066,727,007 | 15,567 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain:0.0.353
platform:windows10
python:3.10
I am a beginner in LangChain. I want to use ConversationTokenBufferMemory to manually save the context, but an error occurred. My code is as follows
`import os
from lc.api_key import OPENAI_API_KEY
os.environ['OPENAI_API_KEY'] = OPENAI_API... | ValueError: too many values to unpack (expected 2) | https://api.github.com/repos/langchain-ai/langchain/issues/15564/comments | 1 | 2024-01-05T03:05:15Z | 2024-04-12T16:16:44Z | https://github.com/langchain-ai/langchain/issues/15564 | 2,066,656,658 | 15,564 |
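For reference, `save_context`-style memory APIs take two dicts — inputs and outputs — and several memory classes expect exactly one key in each; an extra key is a common cause of unpack errors like the one above. A stdlib sketch of that contract (an illustration of the failure mode, not LangChain's actual code):

```python
def save_context(inputs: dict, outputs: dict) -> str:
    """Mimic the single-key contract memory classes rely on."""
    if len(inputs) != 1 or len(outputs) != 1:
        raise ValueError("save_context expects single-key input/output dicts")
    (_, in_val), = inputs.items()
    (_, out_val), = outputs.items()
    return f"Human: {in_val}\nAI: {out_val}"

print(save_context({"input": "hi"}, {"output": "hello"}))
```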
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I'm using Langchain 0.0.354 and ChatOpenAI. I want to use parameter "n" in OpenAI API to return "n" completions. However, ChatOpenAI always returns a single output. Ultimately, I would like to build my chain using LCEL as follows: `chain = prompt | ChatOpenAI (n=10) | MyCustomParser`. Ca... | Issue: How to use "n" completions with LCEL | https://api.github.com/repos/langchain-ai/langchain/issues/15560/comments | 1 | 2024-01-04T21:31:34Z | 2024-04-11T16:14:15Z | https://github.com/langchain-ai/langchain/issues/15560 | 2,066,364,259 | 15,560 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I am trying to run the below:
> import requests
> import json
> from langchain.agents import AgentType, initialize_agent, load_tools
> from langchain_community.llms import OpenAI
> from langchain.chat_models import ChatOpenAI
>
>
> llm = ChatOpenAI(temperature=0,model= 'gpt-3... | Authentican error | https://api.github.com/repos/langchain-ai/langchain/issues/15555/comments | 1 | 2024-01-04T19:48:30Z | 2024-04-11T16:22:16Z | https://github.com/langchain-ai/langchain/issues/15555 | 2,066,241,880 | 15,555 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
```
from langchain.vectorstores.pgvector import PGVector
db = PGVector.from_documents(
documents= docs,
embedding = embeddings,
collection_name= "blog_posts",
distance_strategy = DistanceStrategy.COSINE,
connection_string=CONNECTION_STRING
)
```
This code ... | How to specify a custom schema in PGVector.from_documents? | https://api.github.com/repos/langchain-ai/langchain/issues/15553/comments | 2 | 2024-01-04T19:08:46Z | 2024-06-16T16:07:39Z | https://github.com/langchain-ai/langchain/issues/15553 | 2,066,194,527 | 15,553 |
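`PGVector.from_documents` exposes no schema argument in this version; one common workaround — an assumption about your setup, not a documented PGVector feature — is to point Postgres's search_path at the desired schema through libpq's `options` field in the connection string:

```python
from urllib.parse import quote

def with_search_path(conn_str: str, schema: str) -> str:
    """Append a libpq `options` parameter that sets the Postgres
    search_path, so unqualified table names resolve to `schema`."""
    sep = "&" if "?" in conn_str else "?"
    return f"{conn_str}{sep}options={quote(f'-csearch_path={schema}')}"

base = "postgresql+psycopg2://user:pass@localhost:5432/db"
print(with_search_path(base, "blog"))
# postgresql+psycopg2://user:pass@localhost:5432/db?options=-csearch_path%3Dblog
```

The resulting string can then be passed as `connection_string`; whether the driver honors it depends on your Postgres/driver versions.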
[
"hwchase17",
"langchain"
] | ### System Info
Error stack
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
File <command-3066972537097411>, line 1
----> 1 issue_recommendation(
2 review_title="Terrible",
3 revie... | TypeError: expected string or buffer | https://api.github.com/repos/langchain-ai/langchain/issues/15552/comments | 2 | 2024-01-04T19:02:22Z | 2024-06-08T16:08:45Z | https://github.com/langchain-ai/langchain/issues/15552 | 2,066,185,557 | 15,552 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I have a large Agent with lots of memory/observations, but initializing takes too much time. Is there a way to save the memory and load it again? What's the best way to achieve this? Ideally I would like to reuse the vector store for memory and then for each new user save/load the memory/co...
[
"hwchase17",
"langchain"
] | ### System Info
I try to load PDFs with `from langchain.document_loaders import PyPDFDirectoryLoader` and
got this warning: WARNING:pypdf._reader:incorrect startxref pointer(3)
from langchain.document_loaders import PyPDFDirectoryLoader
from langchain_community.document_loaders import PyPDFLoader
loader = PyPDFDirectoryLo... | WARNING:pypdf._reader:incorrect startxref pointer(3) | https://api.github.com/repos/langchain-ai/langchain/issues/15548/comments | 4 | 2024-01-04T16:25:57Z | 2024-04-12T16:12:41Z | https://github.com/langchain-ai/langchain/issues/15548 | 2,065,959,358 | 15,548 |
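This startxref message is a recoverable-parse warning from pypdf's own logger, not a LangChain error; if the pages load fine it can be silenced with the stdlib (the logger name "pypdf" assumes pypdf ≥ 3.x — older PyPDF2 releases used a different logger name):

```python
import logging

# pypdf reports recoverable xref problems at WARNING level; raise the
# threshold so only real errors are shown.
logging.getLogger("pypdf").setLevel(logging.ERROR)

# This no longer prints anything:
logging.getLogger("pypdf").warning("incorrect startxref pointer(3)")
```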
[
"hwchase17",
"langchain"
] | ### System Info
python==3.10
langchain==0.0.326
langdetect==1.0.9
langsmith==0.0.54
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / P... | GraphCypherQAChain doesn't support returning source documents with `return_source_documents` param like the `BaseQAWithSourcesChain` chains | https://api.github.com/repos/langchain-ai/langchain/issues/15543/comments | 3 | 2024-01-04T14:32:14Z | 2024-04-17T16:33:13Z | https://github.com/langchain-ai/langchain/issues/15543 | 2,065,766,207 | 15,543 |