| issue_owner_repo | issue_body | issue_title | issue_comments_url | issue_comments_count | issue_created_at | issue_updated_at | issue_html_url | issue_github_id | issue_number |
|---|---|---|---|---|---|---|---|---|---|
[
"hwchase17",
"langchain"
] | ### System Info
google-cloud-bigquery = "^3.14.1"
google-api-core = "^2.15.0"
google-cloud-core = "^2.4.1"
grpcio = "^1.60.0"
grpcio-tools = "^1.60.0"
langchain-google-genai = "^0.0.5"
langchain-core = "^0.1.5"
google-cloud-aiplatform = "^1.38.1"
langchain-community = "^0.0.8"
### Who can help?
when test the... | INFORMATION_SCHEMA.VECTOR_INDEXES was not found in google big-query | https://api.github.com/repos/langchain-ai/langchain/issues/15538/comments | 5 | 2024-01-04T11:51:00Z | 2024-04-16T16:18:40Z | https://github.com/langchain-ai/langchain/issues/15538 | 2,065,516,520 | 15,538 |
[
"hwchase17",
"langchain"
] | ### Feature request
When using asynchronous loading the `RecursiveUrlLoader`, it would be nice to be able to set a limit for the number of parallel HTTP requests when scraping a website.
Right now, when using async loading it is very likely to get errors like the following:
```
04-01-24 12:02:53 [WARNING] recurs... | feat: limit the number of concurrent requests in the RecursiveUrlLoader | https://api.github.com/repos/langchain-ai/langchain/issues/15536/comments | 1 | 2024-01-04T11:08:15Z | 2024-04-11T16:14:09Z | https://github.com/langchain-ai/langchain/issues/15536 | 2,065,453,597 | 15,536 |
[
"hwchase17",
"langchain"
] | ### System Info
Python version: 3.9.7
Langchain version: 0.0.352
Argilla version: 1.20.0
### Who can help?
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / P... | ArgillaCallback doesn't properly set DEFAULT_API_KEY | https://api.github.com/repos/langchain-ai/langchain/issues/15531/comments | 1 | 2024-01-04T09:53:47Z | 2024-04-11T16:18:21Z | https://github.com/langchain-ai/langchain/issues/15531 | 2,065,338,280 | 15,531 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain 0.0.351
langchain-community 0.0.4
python 3.9
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Tem... | The ChatHuggingFace package cannot be found. "from langchain_community.chat_models.huggingface import ChatHuggingFace",Has ChatHuggingFace changed paths? | https://api.github.com/repos/langchain-ai/langchain/issues/15530/comments | 3 | 2024-01-04T09:24:18Z | 2024-04-11T16:14:09Z | https://github.com/langchain-ai/langchain/issues/15530 | 2,065,294,377 | 15,530 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
below is my code where I am implementing Memory with Prompt Template
def generate_custom_prompt(query=None,name=None,not_uuid=None,chroma_db_path=None):
check = query.lower()
embedding = OpenAIEmbeddings()
vectordb = Chroma(persist_directory=chroma_db_path, embedding_... | Issue: document_variable_name context was not found in llm_chain input_variables: | https://api.github.com/repos/langchain-ai/langchain/issues/15528/comments | 4 | 2024-01-04T08:18:07Z | 2024-06-08T16:08:40Z | https://github.com/langchain-ai/langchain/issues/15528 | 2,065,203,802 | 15,528 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
from langchain.chains import LLMChain
from langchain.prompts import (
ChatPromptTemplate,
HumanMessagePromptTemplate,
MessagesPlaceholder,
SystemMessagePromptTemplate,
)
import psycopg2
import os
from langchain.chat_models import ChatOpenAI
from langchain.memory... | Issue: <How can I extract vector data from pgvector for use as a reference in the next conversation to enable long-term memory functionality for my chatbot?> | https://api.github.com/repos/langchain-ai/langchain/issues/15527/comments | 4 | 2024-01-04T08:14:43Z | 2024-04-11T16:14:05Z | https://github.com/langchain-ai/langchain/issues/15527 | 2,065,199,886 | 15,527 |
[
"hwchase17",
"langchain"
] | ### System Info
0.0.352 and 0.0.353
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Docume... | ImportError: cannot import name 'tracing_enabled' from 'langchain_core.tracers.context' (/usr/local/lib/python3.11/site-packages/langchain_core/tracers/context.py | https://api.github.com/repos/langchain-ai/langchain/issues/15526/comments | 5 | 2024-01-04T08:08:19Z | 2024-01-04T19:50:14Z | https://github.com/langchain-ai/langchain/issues/15526 | 2,065,192,495 | 15,526 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
from langchain.chains import LLMChain
from langchain.prompts import (
ChatPromptTemplate,
HumanMessagePromptTemplate,
MessagesPlaceholder,
SystemMessagePromptTemplate,
)
import psycopg2
import os
from langchain.chat_models import ChatOpenAI
from langchain.memo... | Issue: <"How can I extract vector data from pgvector for use as a reference in the next conversation to enable long-term memory functionality for my chatbot?> | https://api.github.com/repos/langchain-ai/langchain/issues/15525/comments | 2 | 2024-01-04T07:51:08Z | 2024-05-20T16:08:11Z | https://github.com/langchain-ai/langchain/issues/15525 | 2,065,169,209 | 15,525 |
[
"hwchase17",
"langchain"
] | ### System Info
python 3.9.13
langchain==0.0.316
langchain-community==0.0.1
langchain-core==0.0.13
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts /... | AsyncChromiumloader gives attribute error : COBOL | https://api.github.com/repos/langchain-ai/langchain/issues/15524/comments | 6 | 2024-01-04T07:46:03Z | 2024-02-22T00:38:56Z | https://github.com/langchain-ai/langchain/issues/15524 | 2,065,163,019 | 15,524 |
[
"hwchase17",
"langchain"
] | https://github.com/langchain-ai/langchain/blob/7a93356cbc5d89cc0f7dd746d8f1bb52666fd0f1/libs/community/langchain_community/document_loaders/chromium.py#L78C40-L78C44
Hello,
I encountered a RuntimeError when running the code that uses the AsyncChromiumLoader class. The error message is asyncio.run() cannot be call... | RuntimeError when calling asyncio.run() from a running event loop | https://api.github.com/repos/langchain-ai/langchain/issues/15523/comments | 6 | 2024-01-04T07:35:01Z | 2024-06-15T16:06:51Z | https://github.com/langchain-ai/langchain/issues/15523 | 2,065,149,238 | 15,523 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
The current support for Milvus in Langchain seems insufficient in my opinion. Can it be strengthened?
### Suggestion:
The current support for Milvus in Langchain seems insufficient in my opinion. Can it be strengthened? | The current support for Milvus in Langchain seems insufficient in my opinion. Can it be strengthened? | https://api.github.com/repos/langchain-ai/langchain/issues/15522/comments | 1 | 2024-01-04T06:29:52Z | 2024-04-11T16:20:13Z | https://github.com/langchain-ai/langchain/issues/15522 | 2,065,084,378 | 15,522 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I have been trying to use mixtral-7B as LLM agent with langchain. Agent has been provided a PythonREPL tool for any kind of code execution.
While providing the output, it either runs into agent timeout error or else it provide wrong answer (but in correct format). By further analys... | Issue: Unable to use custom parser to parse the (intermediate) LLM chain output | https://api.github.com/repos/langchain-ai/langchain/issues/15521/comments | 1 | 2024-01-04T05:18:04Z | 2024-04-11T16:14:02Z | https://github.com/langchain-ai/langchain/issues/15521 | 2,065,015,603 | 15,521 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain version 0.0.353.
python : 3.9
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Outp... | text_splitter module not found in langchain version 0.0.353. | https://api.github.com/repos/langchain-ai/langchain/issues/15520/comments | 2 | 2024-01-04T05:02:15Z | 2024-04-28T16:25:28Z | https://github.com/langchain-ai/langchain/issues/15520 | 2,065,004,679 | 15,520 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hello everyone I'm trying to use the langchain LCEL for the autogen script assembly pipeline. The first point that I'm trying to implement is for AI to determine which roles are needed in the autogen group chat to solve the user's task. I'm trying to parse the neural network's response t... | Issue: LCEL output parser error | https://api.github.com/repos/langchain-ai/langchain/issues/15518/comments | 7 | 2024-01-04T02:26:08Z | 2024-02-05T17:20:35Z | https://github.com/langchain-ai/langchain/issues/15518 | 2,064,893,373 | 15,518 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Problem Statement:
I am currently working on tracking token consumption for asynchronous chain calls in my application. I am utilizing the AsyncIteratorCallbackHandler and its aiter() method to stream tokens to my client. However, I am facing challenges in determining how to track the t... | Issue: Tracking Token Consumption for Async Chain Calls | https://api.github.com/repos/langchain-ai/langchain/issues/15517/comments | 1 | 2024-01-04T01:43:22Z | 2024-04-11T16:07:51Z | https://github.com/langchain-ai/langchain/issues/15517 | 2,064,865,668 | 15,517 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain==0.0.354
text_generation==0.6.1
python:3.10-slim
### Who can help?
@agola11 @hwaking
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates ... | HuggingFaceTextGenInference Streaming does not output | https://api.github.com/repos/langchain-ai/langchain/issues/15516/comments | 8 | 2024-01-04T01:13:21Z | 2024-01-23T00:01:11Z | https://github.com/langchain-ai/langchain/issues/15516 | 2,064,847,688 | 15,516 |
[
"hwchase17",
"langchain"
] | ### System Info
Windows 10 & Ubuntu 22.04
langchain==0.0.354
langchain-community==0.0.8
langchain-core==0.1.5
Python 3.10.13
https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/textgen.py
### Who can help?
@agola11 @hwchase17
### Information
- [ ] The official ... | Langchain-Community LLM TextGen has wrong API endpoint | https://api.github.com/repos/langchain-ai/langchain/issues/15512/comments | 2 | 2024-01-04T00:12:34Z | 2024-04-11T16:15:15Z | https://github.com/langchain-ai/langchain/issues/15512 | 2,064,809,247 | 15,512 |
[
"hwchase17",
"langchain"
] | In cookbook 3 for multimodal retrieval, `limit = 6` is set while retrieving documents but the number of returned documents is always 4, regardless of the asked query or the value of `limit`. How can I retrieve `top_k` documents in this code? [Specific line is here](https://github.com/langchain-ai/langchain/blob/02f9c76... | Is The Limit Parameter Used to Retrieve Top_k? | https://api.github.com/repos/langchain-ai/langchain/issues/15511/comments | 1 | 2024-01-03T23:56:40Z | 2024-04-11T16:16:13Z | https://github.com/langchain-ai/langchain/issues/15511 | 2,064,798,810 | 15,511 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I've been playing around with the multimodal notebooks introduced in the [docs here.](https://blog.langchain.dev/semi-structured-multi-modal-rag/). However, the number of retrieved documents for every query is always 4. Specifically, for [cookbook 3](https://github.com/langchain-ai/langc... | Can't Specify Top-K retrieved Documents in Multimodal Retrievers | https://api.github.com/repos/langchain-ai/langchain/issues/15510/comments | 3 | 2024-01-03T23:52:17Z | 2024-06-19T03:27:35Z | https://github.com/langchain-ai/langchain/issues/15510 | 2,064,793,905 | 15,510 |
[
"hwchase17",
"langchain"
] | ### System Info
Full traceback:
```
File "/src/app.py", line 9, in <module>
from langchain.chains import ConversationalRetrievalChain
File "/venv/lib/python3.11/site-packages/langchain/chains/__init__.py", line 20, in <module>
from langchain.chains.api.base import APIChain
File "/venv/lib/python3... | ImportError: cannot import name 'tracing_enabled' from 'langchain_core.tracers.context' | https://api.github.com/repos/langchain-ai/langchain/issues/15508/comments | 6 | 2024-01-03T23:11:47Z | 2024-01-04T20:29:53Z | https://github.com/langchain-ai/langchain/issues/15508 | 2,064,767,804 | 15,508 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Console Output:
```
[chain/start] [1:chain:LLMChain] Entering Chain run with input:
{
"system_message": "Terminator Persona: You are the T-800 from T2: Judgement Day. Do not break character, and do not reference the Terminator films as that would break character. If you break chara... | Hugging Face LLM returns empty response for LLMChain via FastAPI | https://api.github.com/repos/langchain-ai/langchain/issues/15506/comments | 1 | 2024-01-03T22:44:36Z | 2024-04-10T16:17:17Z | https://github.com/langchain-ai/langchain/issues/15506 | 2,064,744,737 | 15,506 |
[
"hwchase17",
"langchain"
] | ### Feature request
Able to `persist` between batch when the embedding is between built:
```python
db = Chroma.from_documents(
documents=documents, embedding=embeddings, persist_directory=persist_directory)
db.persist()
return db
```
would be nice to be :
```python
db = Chroma.from_docum... | Embeddings - Persist between batches | https://api.github.com/repos/langchain-ai/langchain/issues/15504/comments | 1 | 2024-01-03T21:53:13Z | 2024-04-10T16:16:45Z | https://github.com/langchain-ai/langchain/issues/15504 | 2,064,695,031 | 15,504 |
[
"hwchase17",
"langchain"
] | ### System Info
0.0.354
### Who can help?
@hwchase17 @agol
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loader... | Fix HuggingFaceHub LLM Integration | https://api.github.com/repos/langchain-ai/langchain/issues/15500/comments | 1 | 2024-01-03T20:17:12Z | 2024-04-10T16:14:15Z | https://github.com/langchain-ai/langchain/issues/15500 | 2,064,594,842 | 15,500 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hi there,
I am learning how to use Pinecone properly with LangChain and OpenAI Embedding. I built an application which can allow user upload PDFs and ask questions about the PDFs. In the application I used Pinecone as the vector database and store embeddings in Pinecone. However, I w... | Issue: Embedding with Pinecone | https://api.github.com/repos/langchain-ai/langchain/issues/15497/comments | 11 | 2024-01-03T19:40:35Z | 2024-01-24T14:57:45Z | https://github.com/langchain-ai/langchain/issues/15497 | 2,064,553,332 | 15,497 |
[
"hwchase17",
"langchain"
] | ### Feature request
Provide a method to create the HNSW for the PGVector vectorstore
### Motivation
There is a similar method implemented for PGEmbedding but the embedding extension will be deprecated
### Your contribution
https://github.com/pgvector/pgvector?tab=readme-ov-file#hnsw
https://github.com... | PGVector method for HNSW | https://api.github.com/repos/langchain-ai/langchain/issues/15496/comments | 5 | 2024-01-03T19:31:06Z | 2024-07-08T16:04:56Z | https://github.com/langchain-ai/langchain/issues/15496 | 2,064,543,064 | 15,496 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hi all,
I am trying to create an app that can act upon a natural language prompt to interact with the Monday API and then successfully carry out the relevant action.
The code I currently have is as follows:
> MondayDocs = """
>
>
> The below is Monday API documentation... | Issue: No connection adaptors were found | https://api.github.com/repos/langchain-ai/langchain/issues/15494/comments | 9 | 2024-01-03T18:58:45Z | 2024-04-11T16:14:00Z | https://github.com/langchain-ai/langchain/issues/15494 | 2,064,506,326 | 15,494 |
[
"hwchase17",
"langchain"
] | ### System Info
``` bash
bash-4.2# pip freeze | grep langchain
langchain==0.0.353
langchain-community==0.0.8
langchain-core==0.1.5
bash-4.2# python --version
Python 3.10.13
bash-4.2# uname -a
Linux 5b9ca59024db 6.1.61-85.141.amzn2023.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Nov 8 00:39:18 UTC 2023 x86_64 x86_64 ... | `langchain-core` cannot import name 'tracing_enabled' | https://api.github.com/repos/langchain-ai/langchain/issues/15491/comments | 7 | 2024-01-03T17:49:06Z | 2024-02-21T14:27:36Z | https://github.com/langchain-ai/langchain/issues/15491 | 2,064,424,956 | 15,491 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I've scoured the internet trying to find an example that would use a custom model (Mistral) with ```HuggingFaceTextGenInference``` which uses ```LLMChain``` to return a **streaming** response via ```fastapi```.
Does anyone have a working example?
### Suggestion:
_No response_ | HELP!: Example of using HuggingFaceTextGenInference, llmchain, and fastapi | https://api.github.com/repos/langchain-ai/langchain/issues/15487/comments | 5 | 2024-01-03T16:55:10Z | 2024-04-10T16:16:31Z | https://github.com/langchain-ai/langchain/issues/15487 | 2,064,352,002 | 15,487 |
[
"hwchase17",
"langchain"
] | ### System Info
**Platform:**
Linux Ubuntu 22.04.1
**Python:**
3.10.12
**Langchain:**
- langchain 0.0.353
- langchain-community 0.0.7
- langchain-core 0.1.5
- langsmith 0.0.77
### Who can help?
@hwchase17
### Information
- [ ] The official exam... | ModuleNotFoundError at import langchain.chains | https://api.github.com/repos/langchain-ai/langchain/issues/15484/comments | 16 | 2024-01-03T15:56:43Z | 2024-04-11T16:15:09Z | https://github.com/langchain-ai/langchain/issues/15484 | 2,064,266,659 | 15,484 |
[
"hwchase17",
"langchain"
] | ### Feature request
Chatglm3 has added many new features compared to previous chatglm and chatglm2, which is particularly useful for users. So there will definitely be more user demands in using Langchain to build a knowledge base, and it's unclear how long it will take for the community to adapt.
chatglm3 github p... | How long can I use Langchain to call the chatglm3 API | https://api.github.com/repos/langchain-ai/langchain/issues/15479/comments | 2 | 2024-01-03T15:09:25Z | 2024-04-17T16:18:32Z | https://github.com/langchain-ai/langchain/issues/15479 | 2,064,195,347 | 15,479 |
[
"hwchase17",
"langchain"
] | ### Feature request
DynamoDBChatMessageHistory class is missing a TTL feature that would allow for history to automatically expire and be deleted by AWS DynamoDB service.
### Motivation
While implementing a chat history using DynamoDBChatMessageHistory, I encoutered a growing history session table. Since AWS DynamoD... | Add TTL support for DynamoDBChatMessageHistory | https://api.github.com/repos/langchain-ai/langchain/issues/15477/comments | 2 | 2024-01-03T14:29:38Z | 2024-01-30T15:50:29Z | https://github.com/langchain-ai/langchain/issues/15477 | 2,064,133,232 | 15,477 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
How do I use SQLDatabaseChain with a big db schema?
I'm using:
`db = SQLDatabase.from_uri(f"postgresql://localhost:5432/test")
db_chain = SQLDatabaseChain(llm=llm, database=db, verbose=True)
`
And all models complain about context window. A smaller database schema works with some mo... | Issue: Large database schema too big for context window using SQLDatabaseChain | https://api.github.com/repos/langchain-ai/langchain/issues/15476/comments | 3 | 2024-01-03T14:05:12Z | 2024-04-11T16:13:57Z | https://github.com/langchain-ai/langchain/issues/15476 | 2,064,094,863 | 15,476 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
Dear all,
Happy new year!
Due to work legacy stuff, I am still forced to use this library. I am wondering if there is a way to pass multiple prompt template (system, human and ai) instead of just one
Hopefully my man Dosubot can help!
Thanks you so much
Cheers,... | Using multiple templates as starter for LCEL | https://api.github.com/repos/langchain-ai/langchain/issues/15475/comments | 2 | 2024-01-03T13:08:44Z | 2024-04-30T16:14:42Z | https://github.com/langchain-ai/langchain/issues/15475 | 2,064,012,239 | 15,475 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
`document_variable_name="\n\n---\n\n".join([doc.page_content for doc in result["source_documents"]])
model = ChatGoogleGenerativeAI(model="gemini-pro",google_api_key=GOOGLE_API_KEY,temperature=0.2,convert_system_message_to_human=True)
template = f"""Use the following pieces of contex... | Issue: Issue while implementing RAG with Gemini LLM | https://api.github.com/repos/langchain-ai/langchain/issues/15474/comments | 5 | 2024-01-03T11:35:01Z | 2024-04-29T16:11:30Z | https://github.com/langchain-ai/langchain/issues/15474 | 2,063,844,827 | 15,474 |
[
"hwchase17",
"langchain"
] | ### System Info
python 3.11
langchain==0.0.350
### Who can help?
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X... | S3FileLoader doesn't provide the way of passing extra (unstructured_kwargs) parameters | https://api.github.com/repos/langchain-ai/langchain/issues/15472/comments | 2 | 2024-01-03T11:04:15Z | 2024-03-27T22:03:50Z | https://github.com/langchain-ai/langchain/issues/15472 | 2,063,787,050 | 15,472 |
[
"hwchase17",
"langchain"
] | ### Feature request
Hello
The best practice is using comments inside the class/function,
however, comments/TODOs have been placed before in many cases,
Splitting it from the class/function hides some free text that can help understand the purpose and intention of class/function and may cause misinterpreted co... | in libs/langchain/langchain/text_splitter.py comments before class/function will be splitter from class/function itself | https://api.github.com/repos/langchain-ai/langchain/issues/15471/comments | 3 | 2024-01-03T10:45:00Z | 2024-04-10T16:13:31Z | https://github.com/langchain-ai/langchain/issues/15471 | 2,063,750,924 | 15,471 |
[
"hwchase17",
"langchain"
] | ### Feature request
Add document loader for CHM (Microsoft Compiled HTML Help) documents, possibly using pychm.
### Motivation
A lot of Windows applications provide documentation in the form of CHM files. Being able to directly load those into the language model, would greatly simplify the workflow of ingesting docu... | Support CHM files | https://api.github.com/repos/langchain-ai/langchain/issues/15469/comments | 2 | 2024-01-03T09:57:30Z | 2024-01-07T17:28:54Z | https://github.com/langchain-ai/langchain/issues/15469 | 2,063,656,337 | 15,469 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
The following is the code I ran, referring to the official documentation. It looks like prompt_template is not effective and there is no explicit call of prompt_template in the code. How should I customize a prompt for initialize_agent.
```python
prompt_template = """
Translate th... | Issue: How to customize prompt | https://api.github.com/repos/langchain-ai/langchain/issues/15467/comments | 3 | 2024-01-03T09:38:42Z | 2024-04-10T16:08:12Z | https://github.com/langchain-ai/langchain/issues/15467 | 2,063,619,530 | 15,467 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
below is my code
`context_text="\n\n---\n\n".join([doc.page_content for doc in result["source_documents"]])
print(context_text,"======================")
question = "Describe the Multi-head attention layer in detail?"
embeddings = GoogleGenerativeAIEmbeddings(model="models/embedding-0... | Issue:While implementing Gemini, getting error while using prompt template | https://api.github.com/repos/langchain-ai/langchain/issues/15466/comments | 2 | 2024-01-03T09:36:43Z | 2024-04-10T16:13:27Z | https://github.com/langchain-ai/langchain/issues/15466 | 2,063,615,767 | 15,466 |
[
"hwchase17",
"langchain"
] | ### System Info
Mint 20.3
Python 3.11
Conda 23.9
Pip 23.3.2
Setuptools 69.0.3
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prom... | "Multiple top-level packages discovered in a flat-layout" when installing from source | https://api.github.com/repos/langchain-ai/langchain/issues/15465/comments | 2 | 2024-01-03T09:35:31Z | 2024-01-03T09:37:30Z | https://github.com/langchain-ai/langchain/issues/15465 | 2,063,613,427 | 15,465 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hello Team,
let me clearify the issue below:
we are using vectordb conversational retriever. we can look into the attached code for more clarity. My issue is i have 4 documents stored in 2 different index. **End-user can select multiple document lets say 3 or 4 document, and ask qu... | Issue: Passing list of index while retreiver in opensearch vectorstore | https://api.github.com/repos/langchain-ai/langchain/issues/15458/comments | 1 | 2024-01-03T08:12:56Z | 2024-04-10T16:08:25Z | https://github.com/langchain-ai/langchain/issues/15458 | 2,063,465,461 | 15,458 |
[
"hwchase17",
"langchain"
] | I am trying to build a langchain SQL database agent where I want to query only one view for now. I have mentioned the view name in the System Prompt and I have passed view_support=True to the SQLDatabase constructor. When I run the query agent is trying to find out the tables instead views. I am sensing that agent has ... | Langchain SQL Database Agent failed to find the view name in the MS SQL database. | https://api.github.com/repos/langchain-ai/langchain/issues/15457/comments | 5 | 2024-01-03T08:03:31Z | 2024-04-10T16:17:08Z | https://github.com/langchain-ai/langchain/issues/15457 | 2,063,449,745 | 15,457 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hi, I want to know if there is an implementation to use ConversationChains, Agents with NVIDIA's Nemo Rails.
Thanks
### Suggestion:
_No response_ | Issues using Nemo rails with ConversationChains and Agents. | https://api.github.com/repos/langchain-ai/langchain/issues/15456/comments | 1 | 2024-01-03T07:30:33Z | 2024-04-10T16:12:57Z | https://github.com/langchain-ai/langchain/issues/15456 | 2,063,398,062 | 15,456 |
[
"hwchase17",
"langchain"
] | ### Feature request
Allow using [bind parameters](https://docs.sqlalchemy.org/en/20/core/connections.html#sqlalchemy.engine.Connection.execute.params.parameters) in SQLDatabase's [run method](https://github.com/langchain-ai/langchain/blob/65afc13b8b53a1ca41a1a3998dad9eb8d83ca917/libs/community/langchain_community/util... | Allow bind variables in SQLDatabase queries | https://api.github.com/repos/langchain-ai/langchain/issues/15449/comments | 1 | 2024-01-03T05:26:19Z | 2024-04-10T16:08:54Z | https://github.com/langchain-ai/langchain/issues/15449 | 2,063,266,287 | 15,449 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain 0.0.353
langchain-community 0.0.7
langchain-core 0.1.4
langchain-experimental 0.0.47
Python 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0] on linux
openai 0.28.0
### Who can help?
_No response_
### Information
- [ ] The offi... | KeyError: 'usage' | https://api.github.com/repos/langchain-ai/langchain/issues/15448/comments | 3 | 2024-01-03T04:03:31Z | 2024-04-10T16:15:30Z | https://github.com/langchain-ai/langchain/issues/15448 | 2,063,217,500 | 15,448 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hi, I'm using an llm chain and would like to stream its output. I'm writing a function that takes it's token. I can get the tokens one by one, but how can I know if this token is the last token of the response?
### Suggestion:
_No response_ | Issue: streaming output. | https://api.github.com/repos/langchain-ai/langchain/issues/15445/comments | 1 | 2024-01-03T03:29:06Z | 2024-04-10T16:12:53Z | https://github.com/langchain-ai/langchain/issues/15445 | 2,063,195,674 | 15,445 |
[
"hwchase17",
"langchain"
] | ### Feature request
If in-memory replica is increased in milvus, `replica_number` should be set when loading collection in langchain, but the default setting is 1 and cannot be changed.
> pymilvus.exceptions.MilvusException: <MilvusException: (code=1100, message=failed to load collection: can't change the replica ... | milvus replica number factorization | https://api.github.com/repos/langchain-ai/langchain/issues/15442/comments | 1 | 2024-01-03T02:58:01Z | 2024-01-05T04:07:25Z | https://github.com/langchain-ai/langchain/issues/15442 | 2,063,178,107 | 15,442 |
[
"hwchase17",
"langchain"
] | ### Feature request
Gemini API is not available in Canada, but i believe it is available through `vertexai.preview.generative_models` in pre-GA mode.
Would it be possible to add a feature using the Vertex AI SDK instead of Gemini API, which i assume is what it is using?
### Motivation
Canada access to Gemini thro... | Accessing Gemini though Vertex AI SDK | https://api.github.com/repos/langchain-ai/langchain/issues/15431/comments | 3 | 2024-01-02T21:52:47Z | 2024-01-03T17:20:09Z | https://github.com/langchain-ai/langchain/issues/15431 | 2,062,998,345 | 15,431 |
[
"hwchase17",
"langchain"
] | ### System Info
```
langchain==0.0.352
langchain-community==0.0.6
langchain-core==0.1.3
```
Python `3.12.1` running inside Docker from `python:3.12-bookworm` on Linux.
### Who can help?
I have a FastAPI app that streams the output of an LLM. The app uses `langchain.chat_models.ChatOpenAI` at runtime, but duri... | FakeStreamingListLLM.astream() yields strings while ChatOpenAI yields AIMessageChunk | https://api.github.com/repos/langchain-ai/langchain/issues/15426/comments | 3 | 2024-01-02T18:44:18Z | 2024-04-10T16:13:01Z | https://github.com/langchain-ai/langchain/issues/15426 | 2,062,806,705 | 15,426 |
[
"hwchase17",
"langchain"
] | ### System Info
Windows 10
Python 3.11.5
langchain==0.0.331
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] O... | Filter conditions are discarded when using multiple filter conditions in similarity_search_with_relevance_scores | https://api.github.com/repos/langchain-ai/langchain/issues/15417/comments | 2 | 2024-01-02T17:35:24Z | 2024-08-06T10:38:34Z | https://github.com/langchain-ai/langchain/issues/15417 | 2,062,736,735 | 15,417 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hi there,
I am creating a simple PDF reading application where I want to push my embeddings into Pinecone.
Here is my code:
## Imports
import streamlit as st
import os
from apikey import apikey
import pickle
from PyPDF2 import PdfReader
# Streamlit - user interface
from... | Issue: Pinecone Embeddings - Error | https://api.github.com/repos/langchain-ai/langchain/issues/15407/comments | 7 | 2024-01-02T15:33:53Z | 2024-01-03T19:31:42Z | https://github.com/langchain-ai/langchain/issues/15407 | 2,062,584,661 | 15,407 |
[
"hwchase17",
"langchain"
] | ### System Info
python: 3.11
langchain:latest
### Who can help?
in an chatbot after running a query it will return the SQLResult, but while giving an output answer complete result is not displayed
code:
import pandas as pd
import sqlalchemy as sal
import os, sys, openai
import constants
from langchain.sq... | How to get the complete output as answer | https://api.github.com/repos/langchain-ai/langchain/issues/15404/comments | 3 | 2024-01-02T14:01:51Z | 2024-04-10T16:12:39Z | https://github.com/langchain-ai/langchain/issues/15404 | 2,062,463,881 | 15,404 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hi! I am trying to replicate this tutorial https://python.langchain.com/docs/integrations/toolkits/playwright on Colab using the same code; the only difference is I am using `ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)` instead of `ChatAnthropic(temperature=0)`.
When I run the ag... | Issue: not able to replicate documentation results | https://api.github.com/repos/langchain-ai/langchain/issues/15403/comments | 1 | 2024-01-02T13:55:08Z | 2024-04-09T16:15:03Z | https://github.com/langchain-ai/langchain/issues/15403 | 2,062,455,269 | 15,403
[
"hwchase17",
"langchain"
] | ### System Info
langchain - 0.0.350
Python - 3.11
chromadb - 0.3.23
OS - Win 10
### Who can help?
@hwchase17
@eyur
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates ... | Warning message- No embedding_function provided, using default embedding function: SentenceTransformerEmbeddingFunction || Chroma db | https://api.github.com/repos/langchain-ai/langchain/issues/15400/comments | 1 | 2024-01-02T10:09:11Z | 2024-04-09T16:14:50Z | https://github.com/langchain-ai/langchain/issues/15400 | 2,062,195,571 | 15,400 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
Links within the documentation fail to open when the URL lacks a trailing '/'. Notably, the links on the page indicated in the example URL below and other links across various pages result in `page not found` errors when the URL lacks a trailing '/'. It's crucial to clarify that b... | DOC: Resolve URL Navigation Issues - Trailing Slash Discrepancy | https://api.github.com/repos/langchain-ai/langchain/issues/15399/comments | 1 | 2024-01-02T09:54:31Z | 2024-04-09T16:14:33Z | https://github.com/langchain-ai/langchain/issues/15399 | 2,062,178,490 | 15,399 |
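On the site side, one mitigation for the trailing-slash discrepancy described above is to normalize internal links before emitting them. A minimal sketch; the `normalize_url` helper and its file-extension heuristic are illustrative assumptions, not part of the docs tooling:

```python
def normalize_url(url: str) -> str:
    # Append a trailing slash unless the path already has one,
    # or the last segment looks like a file (contains a dot).
    base, _, fragment = url.partition("#")
    if not base.endswith("/") and "." not in base.rsplit("/", 1)[-1]:
        base += "/"
    return base + ("#" + fragment if fragment else "")

print(normalize_url("https://python.langchain.com/docs/get_started/quickstart"))
# https://python.langchain.com/docs/get_started/quickstart/
print(normalize_url("https://example.com/page.html"))  # unchanged
```

Running every generated link through such a normalizer would make the pages resolve regardless of which form the author typed.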
[
"hwchase17",
"langchain"
] | ### System Info
/Users/sunwenke/miniconda3/envs/langchain/bin/python /Users/sunwenke/workspace/yongxinApi/langchain/localopenai/sql.py
Traceback (most recent call last):
File "/Users/sunwenke/workspace/yongxinApi/langchain/localopenai/sql.py", line 8, in <module>
entity_store = SQLiteEntityStore(db_file="/Us... | This looks like a bug; after the fix it works | https://api.github.com/repos/langchain-ai/langchain/issues/15396/comments | 1 | 2024-01-02T08:55:07Z | 2024-04-10T16:08:18Z | https://github.com/langchain-ai/langchain/issues/15396 | 2,062,114,468 | 15,396
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hi Langchain Gurus,
I am trying to use SQLDatabaseChain to query and answer questions on a PostgreSQL table. So far the code that I have written uses the following Hugging Face pipeline:
```
model_name ='tiiuae/falcon-7b-instruct'
tokenizer = AutoTokenizer.from_pretrained(model_name)
pipe... | Issue: Unable to use SQLDatabaseChain with Falcon 7b Instruct for quering the postgresql database. | https://api.github.com/repos/langchain-ai/langchain/issues/15395/comments | 1 | 2024-01-02T08:54:23Z | 2024-04-09T16:09:38Z | https://github.com/langchain-ai/langchain/issues/15395 | 2,062,113,700 | 15,395 |
[
"hwchase17",
"langchain"
] | ### System Info
```
pip show langchain_community
Name: langchain-community
Version: 0.0.3
```
```
python --version
Python 3.10.12
```
```
pip show langchain_core
Name: langchain-core
Version: 0.1.1
```
```
pip show pydantic
Name: pydantic
Version: 2.5.1
```
### Who can help?
_No response_
##... | similarity_search get 2 validation errors for DocArrayDoc | https://api.github.com/repos/langchain-ai/langchain/issues/15394/comments | 2 | 2024-01-02T07:44:12Z | 2024-04-11T16:13:54Z | https://github.com/langchain-ai/langchain/issues/15394 | 2,062,050,508 | 15,394 |
[
"hwchase17",
"langchain"
] | ### System Info
Platform: `Mac M1`
Python: `3.8.18`
Lanchain: `0.0.350`
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Tem... | [Filter] Unable to filter dates with MongoDB | https://api.github.com/repos/langchain-ai/langchain/issues/15391/comments | 5 | 2024-01-02T07:03:26Z | 2024-04-09T16:13:23Z | https://github.com/langchain-ai/langchain/issues/15391 | 2,062,017,256 | 15,391 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
If I set the in-memory replica count of Milvus to 3 and then run the following code in LangChain, the following error occurs.
```python3
vector_db = Milvus.from_documents(
docs,
embeddings,
collection_name=collection,
)
```
> pymilvus.exceptions.Milvu... | Issue: Milvus collection load replica number | https://api.github.com/repos/langchain-ai/langchain/issues/15390/comments | 2 | 2024-01-02T06:49:44Z | 2024-04-09T16:13:24Z | https://github.com/langchain-ai/langchain/issues/15390 | 2,062,006,876 | 15,390
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
`from langchain.chains.question_answering import load_qa_chain
template = """
{Your_Prompt}
CONTEXT:
{context}
QUESTION:
{query}
CHAT HISTORY:
{chat_history}
ANSWER:
"""
prompt = PromptTemplate(input_variables=["chat_history", "query", "context"], template=template)
... | Issue:Issue regarding Memory implementation | https://api.github.com/repos/langchain-ai/langchain/issues/15388/comments | 3 | 2024-01-02T06:34:26Z | 2024-04-09T16:13:52Z | https://github.com/langchain-ai/langchain/issues/15388 | 2,061,996,098 | 15,388 |
[
"hwchase17",
"langchain"
] | ### System Info
```
langchain==0.0.335
```
```
python 3.11
```
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
-... | langchain is not `pyinstaller` friendly due to dependency on external files, e.g. `llm_summarization_checker` | https://api.github.com/repos/langchain-ai/langchain/issues/15386/comments | 2 | 2024-01-02T03:17:26Z | 2024-01-02T04:09:36Z | https://github.com/langchain-ai/langchain/issues/15386 | 2,061,900,973 | 15,386 |
[
"hwchase17",
"langchain"
] | ### System Info
Langchain version: 0.0.351
boto3==1.34.3
Python version: 3.11.7
### Who can help?
I use DynamoDBChatMessageHistory as the conversation history; it seems duplicate Human messages are saved to the DynamoDB table every time, while the AI message is saved only once.
Here are the duplicate messages:
 to
if "webPages" in sea... | Bing Search Tool has key value error | https://api.github.com/repos/langchain-ai/langchain/issues/15384/comments | 1 | 2024-01-02T01:05:50Z | 2024-01-02T23:25:02Z | https://github.com/langchain-ai/langchain/issues/15384 | 2,061,856,348 | 15,384 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
from langchain.vectorstores.neo4j_vector import Neo4jVector
ModuleNotFoundError: No module named 'langchain.vectorstores.neo4j_vector'
### Idea or request for content:
The existing documentation for the Neo4J Vector Index incorrectly indicates the use of "Neo4jVector" from th... | ModuleNotFoundError: No module named 'langchain.vectorstores.neo4j_vector' | https://api.github.com/repos/langchain-ai/langchain/issues/15383/comments | 2 | 2024-01-01T22:32:24Z | 2024-01-02T03:25:12Z | https://github.com/langchain-ai/langchain/issues/15383 | 2,061,806,618 | 15,383 |
[
"hwchase17",
"langchain"
] | ### Feature request
To add Async Client support to MongoDB Vector Stores
### Motivation
Currently, LangChain works very well with the PyMongo client, but with async clients like Motor it throws an error; perhaps this is simply not implemented yet?
Reference:
PyMongo Working Docs - https://python.langchai... | [MongoDB] Async Support for Vector Store | https://api.github.com/repos/langchain-ai/langchain/issues/15377/comments | 8 | 2024-01-01T12:48:06Z | 2024-05-02T07:53:49Z | https://github.com/langchain-ai/langchain/issues/15377 | 2,061,541,643 | 15,377 |
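Until native async support lands, one common stopgap is to push the blocking client call onto a worker thread with `asyncio.to_thread`. This is a sketch with a stand-in `similarity_search` function, not the real PyMongo vector-store call:

```python
import asyncio

def similarity_search(query: str) -> list[str]:
    # Stand-in for a blocking PyMongo vector-store call.
    return [f"doc matching {query!r}"]

async def asimilarity_search(query: str) -> list[str]:
    # Run the sync client in a worker thread so the event loop
    # stays responsive while the query executes.
    return await asyncio.to_thread(similarity_search, query)

print(asyncio.run(asimilarity_search("dogs")))
```

The same wrapping pattern applies to any synchronous store method that an async framework needs to await.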
[
"hwchase17",
"langchain"
] | ### System Info
Langchain Version: 0.0.353
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ]... | Playwright Utilities Removed In 0.0.353, Documentation Not Updated | https://api.github.com/repos/langchain-ai/langchain/issues/15372/comments | 3 | 2024-01-01T02:18:08Z | 2024-04-08T16:08:42Z | https://github.com/langchain-ai/langchain/issues/15372 | 2,061,256,601 | 15,372 |
[
"hwchase17",
"langchain"
] | ### System Info
- Python 3.12.1
- MacOS 14.2.1
- langchain-cli 0.0.20 from pip OR langchain-* from git master branch (commit-ish [26f84b7](https://github.com/langchain-ai/langchain/commit/26f84b74d0f7dc4d2211a1a62d47eec36cb1d726)) -- can reproduce with latest code: langchain 0.0.353, langchain-cli 0.0.20, langchain-... | RAG crash: TypeError: Type is not JSON serializable: numpy.ndarray | https://api.github.com/repos/langchain-ai/langchain/issues/15371/comments | 9 | 2024-01-01T01:46:45Z | 2024-06-08T16:08:35Z | https://github.com/langchain-ai/langchain/issues/15371 | 2,061,243,335 | 15,371 |
[
"hwchase17",
"langchain"
] | ### System Info
MacOS 14.0, Jupyter with Python 3.11.6
(base) ➜ llm-env pip show langchain
Name: langchain
Version: 0.0.353
Summary: Building applications with LLMs through composability
Home-page: https://github.com/langchain-ai/langchain
Author:
Author-email:
License: MIT
Location: /lib/python3.11... | ValueError: 'lib/python3.11/site-packages/langchain/agents/agent_toolkits' is not in the subpath of 'lib/python3.11/site-packages/langchain_core' OR one path is relative and the other is absolute. | https://api.github.com/repos/langchain-ai/langchain/issues/15370/comments | 3 | 2024-01-01T01:32:47Z | 2024-02-07T23:07:23Z | https://github.com/langchain-ai/langchain/issues/15370 | 2,061,236,158 | 15,370 |
[
"hwchase17",
"langchain"
] | ### Feature request
The Ollama integration assumes that all models are served at "localhost:11434"; if the Ollama service is hosted on a different machine, the integration will fail.
Can we add an environment variable that, if present, overrides this URL, so the correct URL for the Ollama server can be set?
### M... | Ability to set ollama serve url | https://api.github.com/repos/langchain-ai/langchain/issues/15365/comments | 4 | 2023-12-31T20:05:56Z | 2024-06-08T16:08:31Z | https://github.com/langchain-ai/langchain/issues/15365 | 2,061,158,064 | 15,365 |
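A minimal sketch of the requested behavior, assuming a hypothetical `OLLAMA_BASE_URL` environment variable; both the variable name and the helper are illustrative, not the integration's actual API:

```python
import os

DEFAULT_OLLAMA_URL = "http://localhost:11434"

def ollama_base_url() -> str:
    # Prefer the env var when set; otherwise fall back to the
    # hard-coded default the integration currently assumes.
    return os.environ.get("OLLAMA_BASE_URL", DEFAULT_OLLAMA_URL)

os.environ["OLLAMA_BASE_URL"] = "http://gpu-box:11434"
print(ollama_base_url())  # http://gpu-box:11434
```

The integration would then call `ollama_base_url()` wherever it builds request URLs, so a remote host needs only one environment variable.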
[
"hwchase17",
"langchain"
] | ### System Info
langchain 0.0.353
pygpt4all 1.1.0
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parse... | ImportError: cannot import name 'GPT4ALL' from 'langchain.llms' | https://api.github.com/repos/langchain-ai/langchain/issues/15362/comments | 3 | 2023-12-31T19:14:23Z | 2024-04-09T16:12:57Z | https://github.com/langchain-ai/langchain/issues/15362 | 2,061,148,468 | 15,362 |
[
"hwchase17",
"langchain"
] | ### System Info
Python 3.9
LangChain 0.0.339
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- ... | SQLDatabase returns result without column names. | https://api.github.com/repos/langchain-ai/langchain/issues/15360/comments | 1 | 2023-12-31T17:17:04Z | 2024-04-07T16:07:44Z | https://github.com/langchain-ai/langchain/issues/15360 | 2,061,123,042 | 15,360 |
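The missing-column-names problem above has a straightforward DB-API-level fix: `cursor.description` carries the column names that the raw result tuples lack. A self-contained sqlite3 sketch; the `run_query` helper is illustrative, not LangChain's `SQLDatabase` API:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Ada'), (2, 'Grace')")

def run_query(connection, sql: str) -> list[dict]:
    cur = connection.execute(sql)
    # cursor.description holds one 7-tuple per column; index 0 is the name.
    columns = [col[0] for col in cur.description]
    return [dict(zip(columns, row)) for row in cur.fetchall()]

print(run_query(conn, "SELECT * FROM users"))
# [{'id': 1, 'name': 'Ada'}, {'id': 2, 'name': 'Grace'}]
```

Returning dicts instead of bare tuples gives downstream prompts labelled values rather than positional ones.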
[
"hwchase17",
"langchain"
] | ### System Info
- LangChain: 0.0.353
- System: Ubuntu 22.04
- Python: 3.10.12
### Information
I run the code in the quickstart part of the [document](https://python.langchain.com/docs/get_started/quickstart#agent), code:
```python
from langchain.chat_models import ChatOpenAI
from langchain import hub
fro... | AttributeError: 'VectorStoreRetriever' object has no attribute 'args_schema' | https://api.github.com/repos/langchain-ai/langchain/issues/15359/comments | 2 | 2023-12-31T15:17:25Z | 2024-04-10T16:15:34Z | https://github.com/langchain-ai/langchain/issues/15359 | 2,061,090,976 | 15,359 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain 0.0.353
Python 3.10.13 | packaged by Anaconda, Inc. | (main, Sep 11 2023, 13:24:38) [MSC v.1916 64 bit (AMD64)]
Windows 11
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat ... | SingleFileFacebookMessengerChatLoader fails when the chat contains non-text contents such as stickers and photos. | https://api.github.com/repos/langchain-ai/langchain/issues/15356/comments | 3 | 2023-12-31T09:31:07Z | 2024-01-02T14:36:02Z | https://github.com/langchain-ai/langchain/issues/15356 | 2,061,000,149 | 15,356 |
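A defensive parse for the failure above can be sketched in plain Python: sticker and photo entries in a Messenger export simply lack the `content` key, so the loader could filter on it. The `extract_text_messages` helper below is illustrative, not the loader's actual code, and the export shape is an assumption:

```python
def extract_text_messages(messages: list[dict]) -> list[str]:
    # Sticker/photo entries have no "content" key in a Messenger export,
    # so filter on the key instead of assuming it exists.
    return [m["content"] for m in messages if "content" in m]

chat = [
    {"sender_name": "A", "content": "hi"},
    {"sender_name": "B", "photos": [{"uri": "photo.jpg"}]},  # no text
    {"sender_name": "A", "content": "nice pic"},
]
print(extract_text_messages(chat))  # ['hi', 'nice pic']
```

The same key check would let the loader skip non-text entries instead of raising.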
[
"hwchase17",
"langchain"
] | ### System Info
azure-search-documents==11.4.0b8
langchain==0.0.352
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / ... | AzureSearch semantic_hybrid_search fails due to hardcoding of metadata fields | https://api.github.com/repos/langchain-ai/langchain/issues/15355/comments | 1 | 2023-12-31T08:43:04Z | 2024-04-07T16:07:34Z | https://github.com/langchain-ai/langchain/issues/15355 | 2,060,988,370 | 15,355 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
How can I make output templates in LangChain? That is, for example, I send a request for the AI to write a joke, but with strict adherence to the template [set-up, punchline], and therefore get as a result:
```
Set-up: ...
Punchline: ...
```
and nothing more
### Suggestion:
_No respon... | Issue: output templates in langchain | https://api.github.com/repos/langchain-ai/langchain/issues/15350/comments | 1 | 2023-12-31T00:18:00Z | 2024-04-07T16:07:29Z | https://github.com/langchain-ai/langchain/issues/15350 | 2,060,892,236 | 15,350 |
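One way to enforce such a template without any LangChain machinery is to validate the model output against it after the fact. A minimal sketch; the `parse_joke` helper and its regex are illustrative assumptions:

```python
import re

def parse_joke(text: str) -> dict:
    # Expects exactly the two labelled lines from the template;
    # raises if the model strayed from the format.
    match = re.search(r"Set-up:\s*(.+)\s*Punchline:\s*(.+)", text, re.DOTALL)
    if match is None:
        raise ValueError("output did not follow the template")
    return {"setup": match.group(1).strip(), "punchline": match.group(2).strip()}

raw = "Set-up: Why did the chicken cross the road?\nPunchline: To get to the other side."
print(parse_joke(raw))
```

A caller can retry the LLM request whenever `parse_joke` raises, which in practice pins the output to the `Set-up:`/`Punchline:` shape and nothing more.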
[
"hwchase17",
"langchain"
] | ### System Info
Langchain 0.0.353
Python 3.10.12
System Ubuntu 22.04
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selector... | Chromadb connection error | https://api.github.com/repos/langchain-ai/langchain/issues/15348/comments | 3 | 2023-12-30T18:38:19Z | 2023-12-31T12:17:59Z | https://github.com/langchain-ai/langchain/issues/15348 | 2,060,823,804 | 15,348 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
The [documentation](https://python.langchain.com/docs/use_cases/summarization) describes the different options for summarizing a text, for longer texts the 'map_reduce' option is suggested. It is mentioned further under 'Go deeper' that it is possible to use different LLMs via the... | DOC: Summarization 'map_reduce' - Can't load tokenizer for 'gpt2' | https://api.github.com/repos/langchain-ai/langchain/issues/15347/comments | 11 | 2023-12-30T17:44:16Z | 2024-06-12T15:24:45Z | https://github.com/langchain-ai/langchain/issues/15347 | 2,060,810,975 | 15,347 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Help me understand how I can save the intermediate data of chain execution results?

### Suggestion:
_No response_ | Issue: <Saving intermediate variable chains ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/15345/comments | 2 | 2023-12-30T15:47:22Z | 2024-04-06T16:06:32Z | https://github.com/langchain-ai/langchain/issues/15345 | 2,060,781,653 | 15,345 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
A few days back, I was referring to the [Prompt templates](https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/) page which now shows: "**Page Not Found**"
### Idea or request for content:
I understand that LangChain is an evolving framework undergoing c... | DOC: Prompt Templates "Page Not Found" | https://api.github.com/repos/langchain-ai/langchain/issues/15342/comments | 3 | 2023-12-30T11:14:48Z | 2024-04-14T16:13:36Z | https://github.com/langchain-ai/langchain/issues/15342 | 2,060,716,887 | 15,342 |
[
"hwchase17",
"langchain"
] | ### System Info
LangChain 0.0.353
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ]... | _OllamaCommon contains top_p with int-restriction | https://api.github.com/repos/langchain-ai/langchain/issues/15341/comments | 1 | 2023-12-30T10:29:06Z | 2024-01-15T19:59:40Z | https://github.com/langchain-ai/langchain/issues/15341 | 2,060,706,496 | 15,341 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
`Below is my code for generating a custom prompt, which takes the context and user query and passes them to the model:
def generate_custom_prompt(new_project_qa,query,name,not_uuid):
check = query.lower()
result = new_project_qa(query)
relevant_document = result['source_docume... | Issue: Not getting desired output while implementing memory | https://api.github.com/repos/langchain-ai/langchain/issues/15339/comments | 7 | 2023-12-30T04:32:17Z | 2024-04-06T16:06:27Z | https://github.com/langchain-ai/langchain/issues/15339 | 2,060,626,887 | 15,339 |
[
"hwchase17",
"langchain"
] | ### System Info
New versions
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Load... | This model's maximum context length is 4097 tokens, however you requested 4177 tokens | https://api.github.com/repos/langchain-ai/langchain/issues/15333/comments | 1 | 2023-12-29T23:25:32Z | 2024-04-05T16:08:50Z | https://github.com/langchain-ai/langchain/issues/15333 | 2,060,459,074 | 15,333 |
[
"hwchase17",
"langchain"
] | ### System Info
I've been trying to create a self query retriever so that I can look at metadata field info. This issue comes up. Should I be using another vector store to make this work? I can only really work with FAISS. I cannot use ChromaDB since my Python environment is limited to a previous version.
### Who c... | Self query retriever with Vector Store type <class 'langchain_community.vectorstores.faiss.FAISS'> not supported. | https://api.github.com/repos/langchain-ai/langchain/issues/15331/comments | 4 | 2023-12-29T22:05:18Z | 2024-01-11T22:59:30Z | https://github.com/langchain-ai/langchain/issues/15331 | 2,060,431,327 | 15,331 |
[
"hwchase17",
"langchain"
] | ### Feature request
This proposal requests the integration of the latest OpenAI models, specifically gpt-4-1106-preview, into the existing framework of [relevant GitHub project, e.g., LangChain]. The newer models offer significantly larger context windows, which are crucial for complex SQL querying and other advanced ... | Integration with OpenAI's Latest Models and API Compatibility | https://api.github.com/repos/langchain-ai/langchain/issues/15328/comments | 5 | 2023-12-29T20:33:36Z | 2024-04-11T17:54:09Z | https://github.com/langchain-ai/langchain/issues/15328 | 2,060,386,330 | 15,328 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
How can I use embeddings in LangChain with Fireworks? (I need it for RAG.) The documentation only talks about OpenAIEmbeddings:
https://python.langchain.com/docs/modules/data_connection/text_embedding/
### Idea or request for content:
RAG with fireworks API | DOC: how to use embeddings in langchain with fireworks? | https://api.github.com/repos/langchain-ai/langchain/issues/15325/comments | 1 | 2023-12-29T19:38:49Z | 2024-04-05T16:08:39Z | https://github.com/langchain-ai/langchain/issues/15325 | 2,060,357,840 | 15,325 |
[
"hwchase17",
"langchain"
] | ### System Info
"langchain": "^0.0.211",
MacOS Sonoma 14.2
Next.js 14.0.4
### Who can help?
@agola11
@hwc
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Tem... | Issue when running a simple ChatOllama prompt in Next.js/TypeScript: "Error: Single '}' in template." | https://api.github.com/repos/langchain-ai/langchain/issues/15318/comments | 2 | 2023-12-29T15:48:07Z | 2023-12-29T16:03:41Z | https://github.com/langchain-ai/langchain/issues/15318 | 2,060,210,050 | 15,318 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I have built a custom LLM Agent by following the Documentation provided. The custom agent contains multiple tools, one of them is the "LLMMathChain" which is giving me ValueError, cause my agent is passing "None" as an Action Input. I want to handle that error. So that my chatbot doesn't... | Issue: Error Handling in Tools used in custom agents | https://api.github.com/repos/langchain-ai/langchain/issues/15317/comments | 1 | 2023-12-29T12:44:32Z | 2024-04-05T16:08:35Z | https://github.com/langchain-ai/langchain/issues/15317 | 2,059,715,813 | 15,317 |
[
"hwchase17",
"langchain"
] | ### Feature request
Presently, JSON can be utilized to enable the multimodal capability of GPT-4 series models within ChatOpenAI and OpenAI. However, this functionality lacks portability.
### Motivation
Using multimodal approaches lacks portability, and GPT-4 isn't the sole model employing multimodal capabilities. T... | Add common multi-modal support | https://api.github.com/repos/langchain-ai/langchain/issues/15316/comments | 3 | 2023-12-29T12:42:22Z | 2024-04-08T16:08:22Z | https://github.com/langchain-ai/langchain/issues/15316 | 2,059,700,790 | 15,316
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Below is my code. How can I implement a ConversationChain along with ConversationSummaryMemory in my code?
`def retreival_qa_chain(chroma_db_path):
embedding = OpenAIEmbeddings()
vectordb = Chroma(persist_directory=chroma_db_path, embedding_function=embedding)
llm = ChatOp... | Issue: How can I implement Conversation Chain along with ConversationSummaryMemory | https://api.github.com/repos/langchain-ai/langchain/issues/15315/comments | 1 | 2023-12-29T11:23:25Z | 2024-04-05T16:08:25Z | https://github.com/langchain-ai/langchain/issues/15315 | 2,059,344,749 | 15,315 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I am trying to add a specific prompt template to my ConversationalRetrievalChain. This is my current code:
> PROMPT_TEMPLATE = """
Act as the policies interactive Bot that gives advice on the Company policies, Travel policies, and Information security policies for the company.
Do ... | Issue: document_variable_name context was not found in llm_chain input_variables | https://api.github.com/repos/langchain-ai/langchain/issues/15314/comments | 1 | 2023-12-29T10:42:37Z | 2024-04-05T16:08:20Z | https://github.com/langchain-ai/langchain/issues/15314 | 2,059,302,480 | 15,314 |
[
"hwchase17",
"langchain"
] | ### System Info
lc: 0.0.352, os: ubuntu 22, python 3.10
### Who can help?
### Description
I am encountering a significant performance issue when using Qdrant with HuggingfaceEmbeddings in a CPU-only environment, specifically within a FastAPI endpoint. The process is notably slow, particularly at the `aadd_documen... | Slow aadd_documents using Qdrant and HuggingfaceEmbeddings on CPU | https://api.github.com/repos/langchain-ai/langchain/issues/15310/comments | 1 | 2023-12-29T09:45:06Z | 2024-04-05T16:08:14Z | https://github.com/langchain-ai/langchain/issues/15310 | 2,059,251,491 | 15,310 |
[
"hwchase17",
"langchain"
] | null | b | https://api.github.com/repos/langchain-ai/langchain/issues/15307/comments | 2 | 2023-12-29T08:30:47Z | 2023-12-29T08:37:37Z | https://github.com/langchain-ai/langchain/issues/15307 | 2,059,195,701 | 15,307 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain 0.0.340
### Who can help?
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
... | The retrieval cannot be given the document correctly | https://api.github.com/repos/langchain-ai/langchain/issues/15306/comments | 4 | 2023-12-29T08:00:29Z | 2024-04-08T16:08:17Z | https://github.com/langchain-ai/langchain/issues/15306 | 2,059,175,187 | 15,306 |
[
"hwchase17",
"langchain"
] | Hi @dosu-bot,
This is my code
```
import langchain
from langchain.cache import SQLAlchemyCache, Emb
from sqlalchemy import create_engine
from sqlalchemy.orm import declarative_base
from sqlalchemy import Column, Integer, Text
from urllib.parse import quote_plus
from langchain.llms import OpenAI
Base = dec... | How do i use similarity caching in my code? | https://api.github.com/repos/langchain-ai/langchain/issues/15304/comments | 1 | 2023-12-29T07:36:10Z | 2024-04-05T16:08:05Z | https://github.com/langchain-ai/langchain/issues/15304 | 2,059,159,495 | 15,304 |
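Independent of LangChain's `SQLAlchemyCache`, the idea behind similarity caching can be sketched in a few lines: store (embedding, answer) pairs and return a hit when cosine similarity clears a threshold. Everything below (class name, threshold value) is an illustrative assumption, not the library's API:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

class SimilarityCache:
    """Return a cached answer when a query embedding is close enough."""

    def __init__(self, threshold: float = 0.95):
        self.threshold = threshold
        self.entries: list[tuple[list[float], str]] = []

    def lookup(self, embedding: list[float]):
        # Linear scan; a real cache would use a vector index instead.
        for cached_emb, answer in self.entries:
            if cosine(embedding, cached_emb) >= self.threshold:
                return answer
        return None

    def store(self, embedding: list[float], answer: str) -> None:
        self.entries.append((embedding, answer))

cache = SimilarityCache()
cache.store([1.0, 0.0], "cached answer")
print(cache.lookup([0.99, 0.05]))  # near-duplicate query -> cache hit
print(cache.lookup([0.0, 1.0]))   # dissimilar query -> None
```

A production version would swap the linear scan for a vector store, but the hit/miss logic is the same threshold test.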
[
"hwchase17",
"langchain"
] | Hi @dosu-bot.
Below is my code,
```
from langchain.cache import SQLAlchemyCache
from sqlalchemy import create_engine
engine = create_engine("mssql+pyodbc://JUPYTER\SQLEXPRESS/my_database?driver=ODBC+Driver+17+for+SQL Server")
set_llm_cache(SQLAlchemyCache(engine))
memory = ConversationBufferWindowMemory(k... | Cache not getting saved in ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/15303/comments | 1 | 2023-12-29T06:30:14Z | 2024-04-05T16:07:59Z | https://github.com/langchain-ai/langchain/issues/15303 | 2,059,118,347 | 15,303 |
[
"hwchase17",
"langchain"
] | ### System Info
Hi,
I'm new to this, so I apologize if my lack of in-depth understanding of how this library works caused me to raise a false alarm. I'm trying to run OCR on a PDF image using the UnstructuredPDFLoader, passing the following args:
`
loader = UnstructuredPDFLoader(file_path="myfile.pdf", mode="elem... | Using chipper model with hi_res strategy gives an error | https://api.github.com/repos/langchain-ai/langchain/issues/15300/comments | 2 | 2023-12-29T02:33:48Z | 2024-04-05T16:07:54Z | https://github.com/langchain-ai/langchain/issues/15300 | 2,059,008,076 | 15,300 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain = "^0.0.352"
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
... | Cannot specify asyn clienct for OpenAIAssistantRunnable | https://api.github.com/repos/langchain-ai/langchain/issues/15299/comments | 1 | 2023-12-29T02:29:20Z | 2024-01-29T20:19:49Z | https://github.com/langchain-ai/langchain/issues/15299 | 2,059,006,360 | 15,299 |
[
"hwchase17",
"langchain"
] | ### System Info
Name: langchain
Version: 0.0.352
Name: openai
Version: 1.6.1
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Pr... | Azure function not working - openai error with latest builds | https://api.github.com/repos/langchain-ai/langchain/issues/15289/comments | 3 | 2023-12-28T22:42:25Z | 2023-12-30T12:46:52Z | https://github.com/langchain-ai/langchain/issues/15289 | 2,058,918,716 | 15,289 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain version: 0.0.348
Python 3.9.18
Mac OS M2 (Ventura 13.6.2)
AWS Bedrock Titan text express, Claude v2
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Em... | Incorrect Snowflake SQL dialect in SQLDatabaseChain | https://api.github.com/repos/langchain-ai/langchain/issues/15285/comments | 12 | 2023-12-28T21:26:16Z | 2024-04-22T16:31:04Z | https://github.com/langchain-ai/langchain/issues/15285 | 2,058,832,286 | 15,285 |