| issue_owner_repo | issue_body | issue_title | issue_comments_url | issue_comments_count | issue_created_at | issue_updated_at | issue_html_url | issue_github_id | issue_number |
|---|---|---|---|---|---|---|---|---|---|
[
"hwchase17",
"langchain"
] | ### System Info
Version 0.0.266, all
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Documen... | Input variables in PipelinePromptTemplate's final_prompt are not extracted as input variables | https://api.github.com/repos/langchain-ai/langchain/issues/9423/comments | 5 | 2023-08-17T20:54:45Z | 2024-03-20T16:05:23Z | https://github.com/langchain-ai/langchain/issues/9423 | 1,855,698,506 | 9,423 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I do not have access to huggingface.co in my environment, but I do have the Instructor model (hkunlp/instructor-large) saved locally. How do I utilize the langchain function HuggingFaceInstructEmbeddings to point to a local model?
I tried the below code but received an error:
```
... | Issue: How can I load text embeddings from a local model? | https://api.github.com/repos/langchain-ai/langchain/issues/9421/comments | 4 | 2023-08-17T20:12:41Z | 2024-03-25T13:59:21Z | https://github.com/langchain-ai/langchain/issues/9421 | 1,855,641,091 | 9,421 |
[
"hwchase17",
"langchain"
] | ### System Info
Windows 10
Python 3.9.7
langchain 0.0.236
### Who can help?
@hwchase17 @agola11 I have problem to make work together MultiPromptChain and AgentExecutor. Problem actually is trivial MultiPromptChain.destination_chains has type of Mapping[str, LLMChain] and AgentExecutor did not fit in this definitio... | AgentExecutor not working with MultiPromptChain | https://api.github.com/repos/langchain-ai/langchain/issues/9416/comments | 5 | 2023-08-17T19:01:41Z | 2024-02-12T16:15:24Z | https://github.com/langchain-ai/langchain/issues/9416 | 1,855,547,600 | 9,416 |
[
"hwchase17",
"langchain"
] |
---------------------------------------------------------------------------
from llama_index import VectorStoreIndex, SimpleDirectoryReader, ServiceContext
--------------------------------------------------------------------------
/usr/local/lib/python3.10/dist-packages/pydantic/_internal/_config.py:257: UserWar... | while importing the llama_index occurred error that | https://api.github.com/repos/langchain-ai/langchain/issues/9412/comments | 4 | 2023-08-17T18:07:13Z | 2023-11-24T16:06:54Z | https://github.com/langchain-ai/langchain/issues/9412 | 1,855,473,639 | 9,412 |
[
"hwchase17",
"langchain"
] | While trying to import langchain in a Jupyter Notebook, I get this error.
> PydanticUserError: If you use `@root_validator` with pre=False (the default) you MUST specify `skip_on_failure=True`. Note that `@root_validator` is deprecated and should be replaced with `@model_validator`.
>
> For further information vis... | Issue: Can't import Langchain | https://api.github.com/repos/langchain-ai/langchain/issues/9409/comments | 6 | 2023-08-17T17:44:33Z | 2024-06-03T07:12:37Z | https://github.com/langchain-ai/langchain/issues/9409 | 1,855,445,177 | 9,409 |
[
"hwchase17",
"langchain"
] | ### System Info
In async mode, the SequentialChain implementation seems to run the same callbacks over and over, since it re-uses the same callbacks object.
Langchain version: 0.0.264
The implementation of this async route differs from the sync route, and the sync approach follows the right pattern of generatin... | SequentialChain runs the same callbacks over and over in async mode | https://api.github.com/repos/langchain-ai/langchain/issues/9401/comments | 2 | 2023-08-17T15:18:21Z | 2023-09-25T09:32:55Z | https://github.com/langchain-ai/langchain/issues/9401 | 1,855,216,580 | 9,401 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I'm trying to use a `ConversationalRetrievalChain` along with a `ConversationBufferMemory` and `return_source_documents` set to `True`. The problem is that, under this setting, I get an error when I call the overall chain.
```
from langchain.chains import ConversationalRetrievalCha... | ConversationalRetrievalChain doesn't work along with memory and return_source_documents | https://api.github.com/repos/langchain-ai/langchain/issues/9394/comments | 6 | 2023-08-17T14:40:07Z | 2024-02-15T16:10:25Z | https://github.com/langchain-ai/langchain/issues/9394 | 1,855,144,833 | 9,394 |
[
"hwchase17",
"langchain"
] | ### System Info
I'm running the executor off a flask backend, and when the answer from the llm to my number is POSTED, strangely, the agent begins to talk to itself, with seemingly no human message. I store all the messages within the executor's memory, and this is the error that it throws:
2023-08-17 14:05:05.1226... | initialize_agent with zero_shot_react_description, talks to itself, produces conversational buffer memory issues | https://api.github.com/repos/langchain-ai/langchain/issues/9393/comments | 2 | 2023-08-17T14:37:49Z | 2023-11-23T16:05:25Z | https://github.com/langchain-ai/langchain/issues/9393 | 1,855,140,764 | 9,393 |
[
"hwchase17",
"langchain"
] | ### System Info
@hwchase17
@agola11
Trying to implement https://python.langchain.com/docs/guides/fallbacks into our current environment that is using LLMChain, but when I pass in the fallback llm into the LLMChain, it throws the following error:
`**ValidationError: 1 validation error for LLMChain**
llm
Ca... | Fallbacks with LLMChain | https://api.github.com/repos/langchain-ai/langchain/issues/9391/comments | 2 | 2023-08-17T14:20:21Z | 2023-09-25T15:45:08Z | https://github.com/langchain-ai/langchain/issues/9391 | 1,855,109,090 | 9,391 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I am using Agent with OpenAI.Functions and have a Structured Tool:
```
class SearchSchema(BaseModel):
"""Inputs for get_current_stock_price"""
country_filter: Optional[str] = Field(default="United Kingdom", description="The country for events")
class EventsAPIWrapper(Base... | Issue: Reduced query in a agent tool _run method | https://api.github.com/repos/langchain-ai/langchain/issues/9389/comments | 4 | 2023-08-17T13:48:00Z | 2023-11-23T16:05:30Z | https://github.com/langchain-ai/langchain/issues/9389 | 1,855,042,529 | 9,389 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
https://python.langchain.com/docs/use_cases/more/code_writing/pal
### Idea or request for content:
fix the import | Import reference to the Palchain is broken | https://api.github.com/repos/langchain-ai/langchain/issues/9386/comments | 2 | 2023-08-17T12:50:43Z | 2023-11-23T16:05:36Z | https://github.com/langchain-ai/langchain/issues/9386 | 1,854,941,934 | 9,386 |
[
"hwchase17",
"langchain"
] | ### Feature request
Support max marginal relevance on the client side. Other vectorstores use the langchain lib to do re-ranking on the client side. Add `fetch_k` to set the number of candidates to be retrieved; honour `k` to return only that number of documents.
### Motivation
Support a diverse set of results
### Your co... | ElasticsearchStore: Support max_marginal_relevance | https://api.github.com/repos/langchain-ai/langchain/issues/9384/comments | 2 | 2023-08-17T11:39:26Z | 2023-10-17T07:46:48Z | https://github.com/langchain-ai/langchain/issues/9384 | 1,854,830,927 | 9,384 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I'm learning LangChain and I believe this is an issue with the Agent, but I'm not sure.
The model I'm using is the `llama-2-7b-chat-hf`
```python
from langchain.llms import HuggingFaceTextGenInference
from langchain.agents import load_tools, initialize_agent
from langchain.agents... | Observation: Invalid or incomplete response using HF TGI | https://api.github.com/repos/langchain-ai/langchain/issues/9381/comments | 5 | 2023-08-17T10:24:14Z | 2024-03-10T23:23:03Z | https://github.com/langchain-ai/langchain/issues/9381 | 1,854,717,708 | 9,381 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
Hello! I'm currently developing using LangChain and Chroma and I've stumbled upon this line:
`View full [docs](https://docs.trychroma.com/reference/Collection) at docs. To access these methods directly, you can do ._collection_.method()`
Instead, you have to do `._collection.m... | DOC: Little error in the Chroma integration documentation | https://api.github.com/repos/langchain-ai/langchain/issues/9379/comments | 2 | 2023-08-17T09:45:44Z | 2023-11-23T16:05:40Z | https://github.com/langchain-ai/langchain/issues/9379 | 1,854,651,803 | 9,379 |
[
"hwchase17",
"langchain"
] | ### System Info
### Description:
I am using the `StructuredTool` function to register a custom tool, and I've encountered a problem with nested Pydantic Models in the `args_schema` parameter.
### Problem:
When registering a function with a nested Pydantic Model in the `args_schema`, only the first outer layer of ... | Nested Pydantic Model for `args_schema` in Tool Registration is not Recognized | https://api.github.com/repos/langchain-ai/langchain/issues/9375/comments | 5 | 2023-08-17T09:16:22Z | 2024-02-16T16:09:06Z | https://github.com/langchain-ai/langchain/issues/9375 | 1,854,594,754 | 9,375 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hi everybody, I am using langchain and I want to use the new "function_calling" feature of openai. My app worked when I added the functions and function_call params to apredict_message. But I want to use streaming; currently, when I add function_calling into my call (with stream =... | Streaming with function calling feature | https://api.github.com/repos/langchain-ai/langchain/issues/9374/comments | 3 | 2023-08-17T09:13:48Z | 2024-02-14T16:11:53Z | https://github.com/langchain-ai/langchain/issues/9374 | 1,854,590,489 | 9,374 |
[
"hwchase17",
"langchain"
] | ### Feature request
Currently, all prompts are in English. To generate a summary or answer a question in another language, all the templates need to be modified.
The modification can be:
```
prompt = PromptTemplate(
template=re.sub(
"CONCISE SUMMARY:",
"CONCISE SUMMAR... | Add {language} in all template | https://api.github.com/repos/langchain-ai/langchain/issues/9369/comments | 2 | 2023-08-17T07:50:21Z | 2023-11-24T09:06:07Z | https://github.com/langchain-ai/langchain/issues/9369 | 1,854,462,216 | 9,369 |
[
"hwchase17",
"langchain"
] | ### System Info
Langchain Version: 0.0.245
model: vicuna-13b-v1.5
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Pr... | SelfQueryRetriever gives error for some queries | https://api.github.com/repos/langchain-ai/langchain/issues/9368/comments | 23 | 2023-08-17T07:18:59Z | 2024-07-20T07:50:04Z | https://github.com/langchain-ai/langchain/issues/9368 | 1,854,414,691 | 9,368 |
[
"hwchase17",
"langchain"
] | ### Feature request
Support retry policy for ErnieBotChat
### Motivation
ErnieBotChat currently does not support a retry policy; it will fail when it reaches its quotas.
### Your contribution
I will submit a PR | Support retry policy for ErnieBotChat | https://api.github.com/repos/langchain-ai/langchain/issues/9366/comments | 2 | 2023-08-17T06:56:19Z | 2023-10-17T00:57:52Z | https://github.com/langchain-ai/langchain/issues/9366 | 1,854,379,288 | 9,366 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I am using ConversationalRetrievalChain for a RAG-based chatbot, and I am using a custom retriever to get the relevant chunks.
```
custom_retriever = FilteredRetriever(
vectorstore=vectorstore.as_retriever(search_kwargs={"k": 5, "filter": {}}, search_type="mmr"),
categ... | Help: How to stop the chain if no chunks are retrieved? | https://api.github.com/repos/langchain-ai/langchain/issues/9364/comments | 2 | 2023-08-17T06:13:09Z | 2023-11-23T16:05:50Z | https://github.com/langchain-ai/langchain/issues/9364 | 1,854,319,004 | 9,364 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
`/usr/local/lib/python3.9/dist-packages/langchain/vectorstores/elastic_vector_search.py:135: UserWarning: ElasticVectorSearch will be removed in a future release. See Elasticsearch integration docs on how to upgrade.`
### Idea or request for content:
_No response_ | DOC: Where i can find Elasticsearch integration docs? | https://api.github.com/repos/langchain-ai/langchain/issues/9363/comments | 3 | 2023-08-17T05:59:17Z | 2023-11-20T00:22:11Z | https://github.com/langchain-ai/langchain/issues/9363 | 1,854,305,205 | 9,363 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
code
-------------
raw_documents = TextLoader('../../../state_of_the_union.txt').load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
documents = text_splitter.split_documents(raw_documents)
db = Chroma.from_documents(documents, OpenAIEmbeddings())
... | why the vector database of vectorstores must load document ? | https://api.github.com/repos/langchain-ai/langchain/issues/9357/comments | 3 | 2023-08-17T04:38:27Z | 2023-08-18T02:37:15Z | https://github.com/langchain-ai/langchain/issues/9357 | 1,854,234,764 | 9,357 |
[
"hwchase17",
"langchain"
] | ### System Info
Python 3.10
LangChain v0.0.266
### Who can help?
@eyurtsev
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Outpu... | Error when trying to use ConversationBufferMemory with LLMChain | https://api.github.com/repos/langchain-ai/langchain/issues/9352/comments | 1 | 2023-08-17T01:40:57Z | 2023-08-17T23:00:44Z | https://github.com/langchain-ai/langchain/issues/9352 | 1,854,106,510 | 9,352 |
[
"hwchase17",
"langchain"
] | ### System Info
This code is exactly as in the documentation.
```
import os
from dotenv import load_dotenv
from langchain.agents import create_csv_agent
from langchain.llms import OpenAI
load_dotenv()
agent = create_csv_agent(OpenAI(temperature=0),
'train.csv',
... | CSV Agent Issue | https://api.github.com/repos/langchain-ai/langchain/issues/9351/comments | 2 | 2023-08-17T01:32:18Z | 2023-11-23T16:05:55Z | https://github.com/langchain-ai/langchain/issues/9351 | 1,854,101,112 | 9,351 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Is it possible to use a customized dictionary for a langchain retriever to search for documents?
For example, there is a document talking about "circuit". In my organization, people use the special keyword "A", which means "circuit". However, this "A" == "circuit" is not a common relationship in ... | Use custom dictionary for retriever | https://api.github.com/repos/langchain-ai/langchain/issues/9350/comments | 2 | 2023-08-17T01:31:41Z | 2023-11-23T16:06:00Z | https://github.com/langchain-ai/langchain/issues/9350 | 1,854,100,686 | 9,350 |
[
"hwchase17",
"langchain"
] | ### Feature request
Add the parameter exclude_output_keys to VectorStoreRetrieverMemory and exclude output keys in method _form_documents similar to the case of input keys. This would enable the use of VectorStoreRetrieverMemory as read-only memory.
https://github.com/langchain-ai/langchain/blob/2e8733cf54d3cd24cf6... | Add exclude_output_keys to VectorStoreRetrieverMemory | https://api.github.com/repos/langchain-ai/langchain/issues/9347/comments | 1 | 2023-08-17T00:43:51Z | 2023-11-23T16:06:05Z | https://github.com/langchain-ai/langchain/issues/9347 | 1,854,070,492 | 9,347 |
[
"hwchase17",
"langchain"
] | ### Bug
LocalFileStore tries to treat Document as byte
```
store = LocalFileStore(get_project_relative_path("doc_store"))
parent_splitter = RecursiveCharacterTextSplitter(chunk_size=2000)
child_splitter = RecursiveCharacterTextSplitter(chunk_size=400)
retriever = ParentDocumentRetriever(vectorstore... | Type error in ParentDocumentRetriever using LocalFileStore | https://api.github.com/repos/langchain-ai/langchain/issues/9345/comments | 24 | 2023-08-16T22:36:06Z | 2024-07-25T13:19:35Z | https://github.com/langchain-ai/langchain/issues/9345 | 1,853,988,704 | 9,345 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I'm running llama2-13b with the below parameters, and I'm trying to summarize pdf (linked [here](https://akoustis.com/wp-content/uploads/2022/06/Single-Crystal-AlScN-on-Silicon-XBAW%E2%84%A2RF-Filter-Technology-for-Wide-Bandwidth-High-Frequency-5G-and-Wi-Fi-Applications.pdf))
```
"pr... | How to fix "Token indices sequence length is longer than the specified maximum sequence length for this model"? | https://api.github.com/repos/langchain-ai/langchain/issues/9341/comments | 3 | 2023-08-16T21:33:48Z | 2023-12-08T16:05:30Z | https://github.com/langchain-ai/langchain/issues/9341 | 1,853,934,808 | 9,341 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hi, I want to run a query based on my location. For example: what is a restaurant near me? I have Python code using SerpAPIWrapper and GoogleSearchAPIWrapper, but neither API works based on location. Below is my code snippet.
GoogleSearchAPIWrapper:
`
latitude = 37.36... | Issue: Search query based on the geoLocation using GoogleSearchAPIWrapper or SerpAPIWrapper | https://api.github.com/repos/langchain-ai/langchain/issues/9330/comments | 4 | 2023-08-16T18:28:37Z | 2023-11-23T16:06:10Z | https://github.com/langchain-ai/langchain/issues/9330 | 1,853,718,640 | 9,330 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
I'm attempting to use Meilisearch as a vector store with Chat Models, however I'm a little confused about how to use the following code to send the input from Meilisearch to Chat Models. Below is the code which you gave
```
from langchain.vectorstores import Meilisearch
from ... | How to utilize the vector store for direct text while using Chat Models? | https://api.github.com/repos/langchain-ai/langchain/issues/9326/comments | 2 | 2023-08-16T15:37:35Z | 2023-11-22T16:05:59Z | https://github.com/langchain-ai/langchain/issues/9326 | 1,853,487,280 | 9,326 |
[
"hwchase17",
"langchain"
] | ### Feature request
Add support for `max_marginal_relevance_search` to `pgvector` vector stores
### Motivation
Would like to be able to do `max_marginal_relevance_search` over `pgvector` vector stores
### Your contribution
N/A | pgvector support for max_marginal_relevance_search | https://api.github.com/repos/langchain-ai/langchain/issues/9325/comments | 1 | 2023-08-16T15:20:33Z | 2023-09-17T21:38:32Z | https://github.com/langchain-ai/langchain/issues/9325 | 1,853,458,017 | 9,325 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
Instead of embeddings, I'm using chat models with RecursiveTextSplitter because they simply return the response without any further justification. The input PDF files that I'm providing have a 36k input length. It takes a long time to return the output when using straight code. Ca... | Is there any option to store direct text to VectorDB to get faster response? | https://api.github.com/repos/langchain-ai/langchain/issues/9324/comments | 9 | 2023-08-16T15:09:40Z | 2023-12-19T00:49:13Z | https://github.com/langchain-ai/langchain/issues/9324 | 1,853,439,627 | 9,324 |
[
"hwchase17",
"langchain"
] | ### System Info
Python 3.10.6
Langchain 0.0.220
### Who can help?
@3
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Docu... | Inference parameters for Bedrock anthropic model showing problems with max_tokens_for_sample parameter | https://api.github.com/repos/langchain-ai/langchain/issues/9319/comments | 7 | 2023-08-16T14:25:02Z | 2023-10-22T04:44:28Z | https://github.com/langchain-ai/langchain/issues/9319 | 1,853,359,140 | 9,319 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
I developed a piece of code that reads data from a pdf file, sends it to chat models, and then returns the result. What should I do if the input length is too long when using chat models? I attempted embeddings, but the solutions are simple and not as good as Chat models, in my opin... | How to use RecursiveTextSplitter for Chat Models like OpenAI and LLama? | https://api.github.com/repos/langchain-ai/langchain/issues/9316/comments | 6 | 2023-08-16T13:56:22Z | 2023-11-26T16:06:54Z | https://github.com/langchain-ai/langchain/issues/9316 | 1,853,305,541 | 9,316 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain version 0.0.266, using python3
### Who can help?
@eyurtsev
@ago
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Out... | 'delete' function is not implemented in PGVector | https://api.github.com/repos/langchain-ai/langchain/issues/9312/comments | 3 | 2023-08-16T13:11:26Z | 2023-11-22T16:06:09Z | https://github.com/langchain-ai/langchain/issues/9312 | 1,853,216,544 | 9,312 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain 0.0.266
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document L... | Use original_question if SelfQueryRetriever if the filter is empty | https://api.github.com/repos/langchain-ai/langchain/issues/9310/comments | 5 | 2023-08-16T12:51:20Z | 2024-03-26T16:05:31Z | https://github.com/langchain-ai/langchain/issues/9310 | 1,853,181,280 | 9,310 |
[
"hwchase17",
"langchain"
] | ### System Info
Latest Python and LangChain version.
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors... | Huggingface TextGen Inference streaming example not working | https://api.github.com/repos/langchain-ai/langchain/issues/9308/comments | 2 | 2023-08-16T12:47:24Z | 2023-08-16T13:15:28Z | https://github.com/langchain-ai/langchain/issues/9308 | 1,853,173,863 | 9,308 |
[
"hwchase17",
"langchain"
] | ### System Info
LangChain `v0.0.264`
### Who can help?
@hwchase17 @eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [... | `create_csv_agent` fails due to arguments from pandas not being valid JSON. | https://api.github.com/repos/langchain-ai/langchain/issues/9307/comments | 2 | 2023-08-16T12:46:43Z | 2023-11-22T16:06:14Z | https://github.com/langchain-ai/langchain/issues/9307 | 1,853,172,407 | 9,307 |
[
"hwchase17",
"langchain"
] | ### System Info
Python 3.10.12
Langchain 0.0.266
### Who can help?
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
-... | RecursiveCharacterTextSplitter splits even if text is smaller than chunk size | https://api.github.com/repos/langchain-ai/langchain/issues/9305/comments | 3 | 2023-08-16T11:51:15Z | 2023-10-11T16:46:11Z | https://github.com/langchain-ai/langchain/issues/9305 | 1,853,080,158 | 9,305 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
How to resolve [This model's maximum context length is 4097 tokens]?
If the Graph DB has many nodes, I cannot send all those tokens to OpenAI at one time.
### Suggestion:
_No response_ | Graph DB QA chain | https://api.github.com/repos/langchain-ai/langchain/issues/9303/comments | 6 | 2023-08-16T09:37:54Z | 2024-01-02T21:17:04Z | https://github.com/langchain-ai/langchain/issues/9303 | 1,852,872,362 | 9,303 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain/0.0.258, Python 3.10.10
### Who can help?
@hw
@issam9
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parser... | With HuggingFaceEmbeddings, embedding of individual documents in the vectorstore can influence each other | https://api.github.com/repos/langchain-ai/langchain/issues/9301/comments | 4 | 2023-08-16T09:19:03Z | 2024-03-13T19:59:19Z | https://github.com/langchain-ai/langchain/issues/9301 | 1,852,841,614 | 9,301 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hello, I have a question; I didn't find it in the official documentation. I want to know: if I have another llm now, and it provides me with a calling api, how can I use it like openai, such as this https://python.langchain.com/docs/use_cases/sql in the official document, I ... | other llm api | https://api.github.com/repos/langchain-ai/langchain/issues/9299/comments | 4 | 2023-08-16T08:56:57Z | 2023-11-27T16:07:27Z | https://github.com/langchain-ai/langchain/issues/9299 | 1,852,805,480 | 9,299 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Retrying langchain.embeddings.openai.embed_with_retry.<locals>._embed_with_retry in 4.0 seconds as it raised RateLimitError: Rate limit reached for default-text-embedding-ada-002 in organization org-uIkxFSWUeCDpCsfzD5XWYLZ7 on tokens per min. Limit: 1000000 / min. Current: 837303 / min. ... | Issue: embedding rate limit error | https://api.github.com/repos/langchain-ai/langchain/issues/9298/comments | 5 | 2023-08-16T08:42:15Z | 2024-03-13T19:59:21Z | https://github.com/langchain-ai/langchain/issues/9298 | 1,852,782,550 | 9,298 |
[
"hwchase17",
"langchain"
] | ### System Info
When I run a vector search in Azure Cognitive Search using AzureSearch it fails saying, "The 'value' property of the vector query can't be null or an empty array." (full error at the bottom) My code hasn't changed from last week when it used to work. I've got version 11.4.0b6 of azure-search-documents ... | Error running vector search in Azure Cognitive Search - The 'value' property of the vector query can't be null or an empty array. | https://api.github.com/repos/langchain-ai/langchain/issues/9297/comments | 2 | 2023-08-16T08:13:32Z | 2023-08-18T00:16:12Z | https://github.com/langchain-ai/langchain/issues/9297 | 1,852,738,816 | 9,297 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
In LangChain 0.0.240, the `ApifyWrapper` class was removed. This caused a breaking change and broke any code using this class.
It was deleted in this PR https://github.com/langchain-ai/langchain/pull/8106 @hwchase17
Could you please outline the reason it was removed?
Other issu... | Issue: `ApifyWrapper` was removed from the codebase breaking user's code | https://api.github.com/repos/langchain-ai/langchain/issues/9294/comments | 3 | 2023-08-16T07:20:48Z | 2023-08-31T23:00:59Z | https://github.com/langchain-ai/langchain/issues/9294 | 1,852,660,023 | 9,294 |
[
"hwchase17",
"langchain"
] | ### Feature request
It would be nice to have a fake vectorstore to make testing retrieval chains easier.
### Motivation
testing retrieval chains
### Your contribution
yes, I'm happy to help. | Fake vectorstore | https://api.github.com/repos/langchain-ai/langchain/issues/9292/comments | 2 | 2023-08-16T05:48:38Z | 2023-11-22T16:06:29Z | https://github.com/langchain-ai/langchain/issues/9292 | 1,852,555,549 | 9,292 |
[
"hwchase17",
"langchain"
] | ### System Info
0.0.247
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers... | Object of type SystemMessage is not JSON serializable | https://api.github.com/repos/langchain-ai/langchain/issues/9288/comments | 4 | 2023-08-16T03:59:44Z | 2024-02-13T16:13:58Z | https://github.com/langchain-ai/langchain/issues/9288 | 1,852,465,014 | 9,288 |
[
"hwchase17",
"langchain"
] | ### System Info
LangChain 0.0.265, Mac M2 Pro Hardware. Python 3.10.0
### Who can help?
@hwchase17
@ago
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt S... | KeyError: 'summary_text' when trying HuggingFaceEndpoint with task = "summarization" even when the VALID_TASKS include it | https://api.github.com/repos/langchain-ai/langchain/issues/9286/comments | 2 | 2023-08-16T03:24:56Z | 2023-11-22T16:06:34Z | https://github.com/langchain-ai/langchain/issues/9286 | 1,852,442,074 | 9,286 |
[
"hwchase17",
"langchain"
] | ### System Info
Name: langchain
Version: 0.0.265
Python 3.10.7
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- ... | Typo in Variable Name in script_chain.run() | https://api.github.com/repos/langchain-ai/langchain/issues/9285/comments | 1 | 2023-08-16T02:40:14Z | 2023-11-22T16:06:39Z | https://github.com/langchain-ai/langchain/issues/9285 | 1,852,412,897 | 9,285 |
[
"hwchase17",
"langchain"
] | https://github.com/langchain-ai/langchain/blob/e986afa13a1de73f403eebe05bd4b25781c12788/libs/langchain/langchain/chains/combine_documents/stuff.py#L96C76-L96C76
If values["document_variable_name"] happens to be equal to one of the variable names in llm_chain_variables, the if condition check would fail, skipping th... | If values["document_variable_name"] happens to be equal to one of the variable names in llm_chain_variables, the if condition check would fail, skipping the error throw. | https://api.github.com/repos/langchain-ai/langchain/issues/9284/comments | 1 | 2023-08-16T02:16:41Z | 2023-08-16T02:19:58Z | https://github.com/langchain-ai/langchain/issues/9284 | 1,852,397,983 | 9,284 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
See this page: https://python.langchain.com/docs/use_cases/multi_modal/image_agent

The `from langchain import OpenAI` class was not extracted into the `API Reference:` secti... | DOC: missed items in the `API Reference:` auto-generated section | https://api.github.com/repos/langchain-ai/langchain/issues/9282/comments | 2 | 2023-08-16T00:40:19Z | 2023-11-15T16:41:49Z | https://github.com/langchain-ai/langchain/issues/9282 | 1,852,329,561 | 9,282 |
[
"hwchase17",
"langchain"
] | ### System Info
I'm using AWS Sagemaker Jumpstart model for Llama2 13b: meta-textgeneration-llama-2-13b-f
On running a Langchain summarize chain with chain_type="map_reduce" I get the below error. Other chain types (refine, stuff) work without issues. I do not have access to https://huggingface.co/ from my environm... | How do fix GPT2 Tokenizer error in Langchain map_reduce (LLama2)? | https://api.github.com/repos/langchain-ai/langchain/issues/9273/comments | 6 | 2023-08-15T21:01:48Z | 2024-01-01T18:11:07Z | https://github.com/langchain-ai/langchain/issues/9273 | 1,852,123,223 | 9,273 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
llm = ChatOpenAI(temperature=0, model_name = args.ModelName, verbose = True, streaming = True, callbacks = [MyCallbackHandler(new_payload)])
class StaticSearchTool(BaseTool):
name = "Search_QA_System"
description = "Use this tool to answer current events
# return_dir... | Stream the response of the custom tools in the agent | https://api.github.com/repos/langchain-ai/langchain/issues/9271/comments | 3 | 2023-08-15T20:48:35Z | 2023-11-22T16:06:44Z | https://github.com/langchain-ai/langchain/issues/9271 | 1,852,108,322 | 9,271 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I have an issue when trying to set 'gpt-3.5-turbo' as a model to create embeddings.
When using the 'gpt-3.5-turbo' to create LLM, everything works fine:
`llm = OpenAI(model_name='gpt-3.5-turbo', temperature=0, openai_api_key=OPENAI_API_KEY, max_tokens=512)`
Also creating embeddi... | Issue: openai.error.PermissionError: You are not allowed to generate embeddings from this model | https://api.github.com/repos/langchain-ai/langchain/issues/9270/comments | 9 | 2023-08-15T20:31:54Z | 2024-05-17T14:38:10Z | https://github.com/langchain-ai/langchain/issues/9270 | 1,852,086,899 | 9,270 |
[
"hwchase17",
"langchain"
] | I am trying to create a `ConversationalRetrievalChain` with memory, `return_source_document=True` and a custom retriever which returns content and url of the document. I am able to generate the right response when I call the chain for the first time. But when I call it again with memory I am getting error Missing some ... | Missing some input keys: {'context'} when using ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/9265/comments | 7 | 2023-08-15T19:31:45Z | 2024-03-12T23:10:54Z | https://github.com/langchain-ai/langchain/issues/9265 | 1,852,007,448 | 9,265 |
[
"hwchase17",
"langchain"
] | ### System Info
When attempting to package my application using PyInstaller, I encounter an error related to the "lark" library. When trying to initiate the SelfQueryRetriever from langchain, I encounter the following problem:
> Traceback (most recent call last):
> File "test.py", line 39, in
> File "langchain\... | ImportError of lark when packaging a standalone application with PyInstaller | https://api.github.com/repos/langchain-ai/langchain/issues/9264/comments | 25 | 2023-08-15T19:23:27Z | 2024-08-02T16:06:38Z | https://github.com/langchain-ai/langchain/issues/9264 | 1,851,997,057 | 9,264 |
[
"hwchase17",
"langchain"
] | ### System Info
Problem:
The intermediate_steps don't contain last `AI thought information`. Did I do anything wrong or was that a bug?
Or is there anything I can do to extract the last `AI thought information`?
Code:
```
agent = initialize_agent(
[self.search_tool, self.wikipedia_tool],
... | intermediate_steps missing last thought content | https://api.github.com/repos/langchain-ai/langchain/issues/9262/comments | 7 | 2023-08-15T18:24:45Z | 2024-05-14T07:08:02Z | https://github.com/langchain-ai/langchain/issues/9262 | 1,851,915,394 | 9,262 |
[
"hwchase17",
"langchain"
] | ### System Info
Langchain version 0.0.265
Python 3.11.4
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selec... | In Azure vector store, metadata is kept as a string and can't be used in a filter | https://api.github.com/repos/langchain-ai/langchain/issues/9261/comments | 11 | 2023-08-15T17:45:21Z | 2024-07-12T18:10:31Z | https://github.com/langchain-ai/langchain/issues/9261 | 1,851,861,571 | 9,261 |
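When metadata comes back from the store as a JSON string rather than a mapping, field-level filters cannot match it server-side. A client-side workaround sketch (document shape and field names are illustrative, not the Azure store's actual schema) is to decode the string before filtering:

```python
import json

# Decode stringified metadata before applying a filter. Field names
# ("source", "page") are made up for illustration.
def parse_metadata(raw):
    return json.loads(raw) if isinstance(raw, str) else raw

docs = [
    {"text": "intro", "metadata": '{"source": "a.pdf", "page": 1}'},
    {"text": "body", "metadata": '{"source": "a.pdf", "page": 3}'},
]
hits = [d for d in docs if parse_metadata(d["metadata"])["page"] == 3]
print([d["text"] for d in hits])  # → ['body']
```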
[
"hwchase17",
"langchain"
] | ### Feature request
cur version, milvus not support normalize_L2 of embedding, when can add this feature?
Thanks
### Motivation
Normalization and regularization of features can improve retrieval performance, and it is convenient to set thresholds when calculating similarity through inner product
### Your contrib... | milvus not support normalize_L2 of embedding | https://api.github.com/repos/langchain-ai/langchain/issues/9255/comments | 2 | 2023-08-15T15:31:15Z | 2023-11-29T16:08:05Z | https://github.com/langchain-ai/langchain/issues/9255 | 1,851,663,787 | 9,255 |
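Until the store supports it natively, L2 normalization can be applied client-side before inserting embeddings, which is what the request describes: scaling each vector to unit length so that inner product equals cosine similarity. A minimal sketch:

```python
import math

# Scale a vector to unit L2 norm; zero vectors are returned unchanged
# to avoid division by zero.
def normalize_l2(vec):
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec] if norm > 0 else list(vec)

unit = normalize_l2([3.0, 4.0])
print(unit)  # → [0.6, 0.8]
```

With unit vectors, a fixed inner-product threshold behaves like a cosine-similarity threshold, which is the retrieval benefit the motivation section mentions.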
[
"hwchase17",
"langchain"
] | 
As shown in the figure, I hope that the `resp` variable can capture these outputs in each for loop, rather than displaying them on the console. What should I do? | How to output code variables word by word in `ChatOpenAI` instead of console? | https://api.github.com/repos/langchain-ai/langchain/issues/9247/comments | 2 | 2023-08-15T09:55:01Z | 2023-11-21T16:05:25Z | https://github.com/langchain-ai/langchain/issues/9247 | 1,851,185,785 | 9,247 |
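The usual pattern for this is a collecting callback: instead of a handler that prints each token, accumulate tokens into a variable and read it after the call. The class below assumes a LangChain-style `on_llm_new_token` streaming hook but is written without langchain so it runs anywhere; an instance would be passed via `callbacks=[...]`.

```python
# Token-collecting handler sketch (assumes an on_llm_new_token-style
# streaming hook, as in LangChain's callback interface).
class CollectingHandler:
    def __init__(self):
        self.tokens = []

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        self.tokens.append(token)          # store instead of print

    @property
    def text(self) -> str:
        return "".join(self.tokens)

handler = CollectingHandler()
for tok in ["Hel", "lo", "!"]:             # simulated token stream
    handler.on_llm_new_token(tok)
print(handler.text)  # → Hello!
```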
[
"hwchase17",
"langchain"
] | ### System Info
langchain = "^0.0.264"
python = "^3.10"
### Who can help?
@agola11 @hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- ... | OpenAIFunctionsAgent | Streaming Bug | https://api.github.com/repos/langchain-ai/langchain/issues/9246/comments | 2 | 2023-08-15T09:41:21Z | 2023-09-05T12:07:36Z | https://github.com/langchain-ai/langchain/issues/9246 | 1,851,169,357 | 9,246 |
[
"hwchase17",
"langchain"
] | ### System Info
python 3.11
langchain 0.0.263.
### Who can help?
@agol
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Do... | Random question and Answer generation while using ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/9241/comments | 3 | 2023-08-15T06:56:12Z | 2024-07-10T09:01:36Z | https://github.com/langchain-ai/langchain/issues/9241 | 1,850,987,786 | 9,241 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
def define_model(model_threshold):
get_no = np.random.randint(1, 11)
if int(model_threshold) >= get_no:
return "gpt-4-0613"
else:
return "gpt-3.5-turbo-16k-0613"
messages = get_chat_history_format(api_params=api_params)
model_name = define_model(model_thres... | This is not a chat model and thus not supported in the v1/chat/completions endpoint. Did you mean to use v1/completions? | https://api.github.com/repos/langchain-ai/langchain/issues/9237/comments | 6 | 2023-08-15T02:58:25Z | 2024-06-21T22:20:38Z | https://github.com/langchain-ai/langchain/issues/9237 | 1,850,835,440 | 9,237 |
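One way to fail fast on the error in this record is to validate the selected name against a chat-capable allowlist before calling the chat endpoint. The sketch below mirrors the issue's `define_model` with a seeded, injectable RNG so the threshold logic is testable; the model names are copied from the issue and may be outdated.

```python
import random

# Only these names are routed to the chat completions endpoint in this
# sketch; the set mirrors the models named in the issue.
CHAT_MODELS = {"gpt-4-0613", "gpt-3.5-turbo-16k-0613"}

def define_model(model_threshold, rng=random.Random(0)):
    roll = rng.randint(1, 10)
    name = ("gpt-4-0613" if int(model_threshold) >= roll
            else "gpt-3.5-turbo-16k-0613")
    # Guard: catch endpoint/model mismatches before the API call.
    assert name in CHAT_MODELS, f"{name} is not a chat model"
    return name

print(define_model(10))  # threshold 10 always >= roll → gpt-4-0613
```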
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I'm using the load_summarization_chain with the type of 'map_reduce' in the following fashion:
```
summary_prompt = ChatPromptTemplate.from_template(
"Write a long-form summary of the following text delimited by triple backquotes. "
"Include detailed evidence ... | Issue: load_summarization_chain in 'map_reduce' mode not breaking up document | https://api.github.com/repos/langchain-ai/langchain/issues/9235/comments | 3 | 2023-08-15T01:27:12Z | 2024-01-25T10:22:05Z | https://github.com/langchain-ai/langchain/issues/9235 | 1,850,779,448 | 9,235 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I'm trying to use `MultiQueryRetriever` to generate variations on a question: it seems to work, but I can't use it inside a chain created with `load_qa_with_sources_chain()` because that generates a chain that expects a list of input_documents, rather than a retriever, and I don't want... | Issue: trying to call generate_queries() on a MultiQueryRetriever but where do I get a run_manager from? | https://api.github.com/repos/langchain-ai/langchain/issues/9231/comments | 5 | 2023-08-14T23:38:42Z | 2024-03-19T13:08:30Z | https://github.com/langchain-ai/langchain/issues/9231 | 1,850,711,136 | 9,231 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain==0.0.245
python==3.9
### Who can help?
@hw
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Docu... | Kernel dies for SVMRetriever with Huggingface Embeddings Model | https://api.github.com/repos/langchain-ai/langchain/issues/9219/comments | 16 | 2023-08-14T19:59:31Z | 2023-12-05T11:32:48Z | https://github.com/langchain-ai/langchain/issues/9219 | 1,850,451,609 | 9,219 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
In https://python.langchain.com/docs/use_cases/extraction, I can run this example using the ChatOpenAI method. However, within my organization, I want to use AzureChatOpenAI. However, the same example doesnt work here. The error I get is
```
InvalidRequestError: Unrecognized ... | DOC: <Please write a comprehensive title after the 'DOC: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/9218/comments | 2 | 2023-08-14T19:39:51Z | 2023-11-20T16:04:47Z | https://github.com/langchain-ai/langchain/issues/9218 | 1,850,424,838 | 9,218 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
In document here - https://python.langchain.com/docs/integrations/llms/huggingface_textgen_inference#streaming
Streaming section, parameter **stream** should be **streaming**.
### Idea or request for content:
_No response_ | DOC: Typo in the streaming document with hugging face text inference | https://api.github.com/repos/langchain-ai/langchain/issues/9212/comments | 1 | 2023-08-14T17:14:38Z | 2023-11-20T16:04:52Z | https://github.com/langchain-ai/langchain/issues/9212 | 1,850,200,651 | 9,212 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hi,
I have a pandas dataframe with a column name 'chunk_id'.
I want to use this chunk_id as the custom_id in the langchain_pg_embedding table.
The langchain documentation search bot tells me this is how I can do it.
```python
from langchain.vectorstores import PGVectorStore
... | Issue: Using custom_id with PGVector store | https://api.github.com/repos/langchain-ai/langchain/issues/9209/comments | 3 | 2023-08-14T15:16:18Z | 2023-10-06T12:31:35Z | https://github.com/langchain-ai/langchain/issues/9209 | 1,849,993,920 | 9,209 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
In [this](https://python.langchain.com/docs/use_cases/question_answering.html) documentation, there are broken links in the "further reading" section. The urls are not found.
### Idea or request for content:
_No response_ | DOC: broken url links in QA documentation | https://api.github.com/repos/langchain-ai/langchain/issues/9201/comments | 2 | 2023-08-14T13:13:23Z | 2023-11-20T16:04:56Z | https://github.com/langchain-ai/langchain/issues/9201 | 1,849,753,783 | 9,201 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
When I create a Bedrock LLM client for the Jurassic models - "ai21.j2-mid", "ai21.j2-ultra" - the llm.invoke(query) method does not return full results. The result seems to be truncated after the first line.
It seems like the LLM Engine is streaming it's output but langch... | Issue: Amazon Bedrock Jurassic model responses getting truncated | https://api.github.com/repos/langchain-ai/langchain/issues/9199/comments | 11 | 2023-08-14T11:53:57Z | 2024-05-05T16:03:43Z | https://github.com/langchain-ai/langchain/issues/9199 | 1,849,630,816 | 9,199 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain 0.0.263
python 3.9
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templa... | _convert_message_to_dict doesn't handle FunctionMessage type | https://api.github.com/repos/langchain-ai/langchain/issues/9197/comments | 2 | 2023-08-14T11:24:44Z | 2023-08-15T08:13:36Z | https://github.com/langchain-ai/langchain/issues/9197 | 1,849,585,577 | 9,197 |
[
"hwchase17",
"langchain"
] | ### Feature request
It seems that right now there is no way to pass the filter dynamically on the call of the ConversationalRetrievalChain, the filter can only be specified in the retriever when it's created and used for all the searches.
```
qa = ConversationalRetrievalChainPassArgs.from_llm(
OpenAI(.... | Passing filter through ConversationalRetrievalChain to the underlying vector store | https://api.github.com/repos/langchain-ai/langchain/issues/9195/comments | 21 | 2023-08-14T10:10:40Z | 2024-06-13T00:54:34Z | https://github.com/langchain-ai/langchain/issues/9195 | 1,849,461,369 | 9,195 |
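The behavior the request asks for, a filter chosen per call rather than fixed at retriever construction, can be sketched without langchain. Document shape and field names below are illustrative; a real implementation would push `metadata_filter` down into the vector store query instead of post-filtering.

```python
# Per-call metadata filtering sketch: the filter is an argument of the
# retrieval call, not baked into the retriever at construction time.
def retrieve(docs, query, metadata_filter=None):
    hits = [d for d in docs if query.lower() in d["text"].lower()]
    if metadata_filter:
        hits = [d for d in hits
                if all(d["metadata"].get(k) == v
                       for k, v in metadata_filter.items())]
    return hits

docs = [
    {"text": "refund policy", "metadata": {"tenant": "a"}},
    {"text": "refund policy", "metadata": {"tenant": "b"}},
]
print(retrieve(docs, "refund", {"tenant": "b"}))
```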
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I am just testing a very basic code as follows using LangChain
```
from langchain import HuggingFaceHub
from langchain import PromptTemplate, LLMChain
import asyncio
question = "Who won the FIFA World Cup in the year 1994? "
template = """Question: {question}
Answer: L... | Incomplete responses for basic example when used HF with llama and Falcon LLMS | https://api.github.com/repos/langchain-ai/langchain/issues/9194/comments | 2 | 2023-08-14T09:40:20Z | 2023-11-20T16:05:02Z | https://github.com/langchain-ai/langchain/issues/9194 | 1,849,408,199 | 9,194 |
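Incomplete completions from hub-hosted text-generation models are often just the endpoint's small default generation budget. Building the model kwargs explicitly makes that budget visible; the parameter names below follow Hugging Face text-generation conventions and are an assumption, not verified against every hub model.

```python
# Hedged sketch: make the generation budget explicit instead of
# relying on the endpoint default. Parameter names (max_new_tokens,
# temperature) follow HF text-generation conventions and are an
# assumption here.
def build_model_kwargs(max_new_tokens=256, temperature=0.1):
    if max_new_tokens <= 0:
        raise ValueError("max_new_tokens must be positive")
    return {"max_new_tokens": max_new_tokens, "temperature": temperature}

print(build_model_kwargs())
```

A dict like this would typically be passed through the wrapper's `model_kwargs` argument; checking it locally first avoids silently inheriting a tiny default.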
[
"hwchase17",
"langchain"
] | ### System Info
langchain==0.0.263
python 3.11.4
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parser... | ConstitutionalChain may violate earlier principals when given more than one | https://api.github.com/repos/langchain-ai/langchain/issues/9189/comments | 2 | 2023-08-14T06:45:10Z | 2023-11-20T16:05:06Z | https://github.com/langchain-ai/langchain/issues/9189 | 1,849,110,211 | 9,189 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I am using pusher for streaming purpose.
I am getting error as "Response payload is not completed"
async def pusher__cal(new_payload, token):
# print(token)
session_id = new_payload["session_id"]
session_id_channel = f"{session_id}_channel"
user_id = new_payload["user_id"]
mess... | Error Response payload is not complete | https://api.github.com/repos/langchain-ai/langchain/issues/9187/comments | 4 | 2023-08-14T05:02:23Z | 2024-02-09T16:23:49Z | https://github.com/langchain-ai/langchain/issues/9187 | 1,849,004,724 | 9,187 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
This is the code I am using for custom callback
async def pusher__cal(new_payload, token):
print(token)
session_id = new_payload["session_id"]
session_id_channel = f"{session_id}_channel"
user_id = new_payload["user_id"]
message_id = new_payload["message_id"]
... | Issue for Streaming | https://api.github.com/repos/langchain-ai/langchain/issues/9185/comments | 1 | 2023-08-14T03:59:11Z | 2023-08-14T05:01:21Z | https://github.com/langchain-ai/langchain/issues/9185 | 1,848,951,353 | 9,185 |
[
"hwchase17",
"langchain"
] | ### System Info
### Pylance displays missing parameter errors for `client` and `model`

Neither `client` or `model` are required in the documenation.
## Suggested fix
Update attrib... | Typing Issue: Client and Model defaults are not defined in OpenAI LLM | https://api.github.com/repos/langchain-ai/langchain/issues/9182/comments | 1 | 2023-08-13T22:15:59Z | 2023-11-19T16:04:36Z | https://github.com/langchain-ai/langchain/issues/9182 | 1,848,761,648 | 9,182 |
[
"hwchase17",
"langchain"
] | ### Feature request
https://huggingface.co/inference-endpoints
### Motivation
We do not have support for HuggingFace Inference endpoints for tasks like embeddings and must be easy to implement inheriting the class from Embeddings base class
`InferenceEndpointHuggingFaceEmbeddings(Embeddings):` with only
`... | Add support HuggingFace Inference Endpoint for embeddings | https://api.github.com/repos/langchain-ai/langchain/issues/9181/comments | 6 | 2023-08-13T21:56:46Z | 2024-03-18T16:05:09Z | https://github.com/langchain-ai/langchain/issues/9181 | 1,848,753,869 | 9,181 |
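The requested class can be sketched against a minimal stand-in for the `Embeddings` interface (abstract `embed_documents` / `embed_query`, as the record describes). The transport is injected so the example runs offline; a real version would POST to `endpoint_url`, and the payload shape is an assumption.

```python
from abc import ABC, abstractmethod

# Stand-in for langchain's Embeddings base class.
class Embeddings(ABC):
    @abstractmethod
    def embed_documents(self, texts): ...

    @abstractmethod
    def embed_query(self, text): ...

class InferenceEndpointHuggingFaceEmbeddings(Embeddings):
    def __init__(self, endpoint_url, transport):
        self.endpoint_url = endpoint_url
        self._transport = transport   # callable: payload dict -> vectors

    def embed_documents(self, texts):
        # A real implementation would POST this payload to endpoint_url.
        return self._transport({"inputs": texts})

    def embed_query(self, text):
        return self.embed_documents([text])[0]

fake = lambda payload: [[0.1, 0.2] for _ in payload["inputs"]]
emb = InferenceEndpointHuggingFaceEmbeddings("https://example.invalid", fake)
print(emb.embed_query("hello"))  # → [0.1, 0.2]
```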
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
While navigating the documentation, I came across an issue that I wanted to bring to your attention. It seems that `Python Guide` link on this page
[https://docs.langchain.com/docs/components/agents/agent](https://docs.langchain.com/docs/components/agents/agent)
is returnin... | Documentation Issue - Page Not Found | https://api.github.com/repos/langchain-ai/langchain/issues/9178/comments | 1 | 2023-08-13T18:56:34Z | 2023-11-19T16:04:41Z | https://github.com/langchain-ai/langchain/issues/9178 | 1,848,701,107 | 9,178 |
[
"hwchase17",
"langchain"
] | ### System Info
Langchain Version: 0.0.263 (latest at time of writing)
llama-cpp-python Version: 0.1.77 (latest at time of writing)
Python Version: 3.11.4
Platform: Apple M1 Macbook 16GB
Llama2 Model: llama-2-7b-chat.ggmlv3.q4_1.bin via [https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGML](https://huggingface.co/... | Langchain stripping out emojis when using Llama2 via llama_cpp | https://api.github.com/repos/langchain-ai/langchain/issues/9176/comments | 4 | 2023-08-13T17:59:46Z | 2023-11-27T16:07:36Z | https://github.com/langchain-ai/langchain/issues/9176 | 1,848,682,080 | 9,176 |
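A plausible mechanism for disappearing emojis is multi-byte UTF-8 sequences split across stream chunks: decoding chunk-by-chunk with `errors="ignore"` silently drops a code point that straddles a boundary, while an incremental decoder buffers the partial sequence. Where exactly the split happens in the llama.cpp stack is an assumption here; the sketch only demonstrates the failure mode and the safe decode.

```python
import codecs

# An emoji is 4 UTF-8 bytes; force a chunk boundary in the middle.
raw = "ok 👍".encode("utf-8")
chunks = [raw[:4], raw[4:]]

# Naive per-chunk decode drops the emoji entirely.
naive = "".join(c.decode("utf-8", errors="ignore") for c in chunks)

# Incremental decoder buffers the partial sequence across chunks.
decoder = codecs.getincrementaldecoder("utf-8")()
safe = ("".join(decoder.decode(c) for c in chunks)
        + decoder.decode(b"", final=True))

print(repr(naive), repr(safe))
```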
[
"hwchase17",
"langchain"
] | ### System Info
Langchain Version - 0.0.261
Python Version - 3.9.6
OS - MacOS
### Who can help?
@hwchase17 @ago
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Pro... | Unable to add PromptTemplate/FewShotPromptTemplate to LangChain Agents | https://api.github.com/repos/langchain-ai/langchain/issues/9175/comments | 3 | 2023-08-13T17:55:21Z | 2023-12-25T16:09:25Z | https://github.com/langchain-ai/langchain/issues/9175 | 1,848,681,012 | 9,175 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
Difficult to find information.
Information is incomplete in certain sections, e.g. AgentExecutor.
### Idea or request for content:
_No response_ | DOC: Documentation is frustratingly bad | https://api.github.com/repos/langchain-ai/langchain/issues/9171/comments | 1 | 2023-08-13T11:05:55Z | 2023-11-19T16:04:46Z | https://github.com/langchain-ai/langchain/issues/9171 | 1,848,543,653 | 9,171 |
[
"hwchase17",
"langchain"
] | ### System Info
LangChain Version- 0.0.260, platform- Windows, Python Version-3.9.16
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / P... | Caching (SQLiteCache/InMemoryCache etc.) is not working with ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/9168/comments | 4 | 2023-08-13T05:18:56Z | 2023-12-08T16:47:58Z | https://github.com/langchain-ai/langchain/issues/9168 | 1,848,417,866 | 9,168 |
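LLM caches of this style are keyed on the exact prompt text, which hints at why a conversational chain may never hit the cache: chains that rewrite the prompt each turn (injected chat history, condensed questions) never produce the same key twice. A minimal in-memory sketch of the keying scheme:

```python
# Minimal sketch of prompt-keyed LLM caching (a stand-in for the
# cache interface, keyed on (prompt, llm_string)).
class InMemoryCache:
    def __init__(self):
        self._store = {}

    def lookup(self, prompt, llm_string):
        return self._store.get((prompt, llm_string))

    def update(self, prompt, llm_string, result):
        self._store[(prompt, llm_string)] = result

cache = InMemoryCache()
cache.update("Q: hi", "gpt-3.5", "hello!")
print(cache.lookup("Q: hi", "gpt-3.5"))                  # → hello!
print(cache.lookup("History: ...\nQ: hi", "gpt-3.5"))    # → None (miss)
```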
[
"hwchase17",
"langchain"
] | ### Feature request
The SQL Database Agent utilizes the SQLDatabaseToolkit. This toolkit encompasses four distinct tools:
1. InfoSQLDatabaseTool
2. ListSQLDatabaseTool
3. QuerySQLCheckerTool
4. QuerySQLDataBaseTool
The QuerySQLCheckerTool is designed to identify and rectify errors within SQL code. It employs ... | Passing SQL error in QuerySQLCheckerTool | https://api.github.com/repos/langchain-ai/langchain/issues/9167/comments | 5 | 2023-08-13T05:08:21Z | 2023-11-19T16:04:56Z | https://github.com/langchain-ai/langchain/issues/9167 | 1,848,414,387 | 9,167 |
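The requested loop, feed the database error message back to the checker so the repair is informed by it, can be sketched with injected stubs. In the real toolkit, `execute` would be the database call and `fix` an LLM prompt that includes the error text; both are stand-ins here.

```python
# Retry loop that passes the SQL error back to the fixer on each
# failed attempt.
def run_with_repair(execute, fix, sql, max_retries=2):
    last_err = None
    for _ in range(max_retries + 1):
        try:
            return execute(sql)
        except Exception as err:
            last_err = err
            sql = fix(sql, str(err))     # error text informs the repair
    raise RuntimeError(f"could not repair query: {last_err}")

def execute(sql):
    if "LIMIT" in sql:
        return [("ok",)]
    raise ValueError("syntax error near end of statement")

def fix(sql, error_message):
    # Stub "repair"; a real fixer would prompt the LLM with
    # error_message alongside the failing query.
    return sql + " LIMIT 10"

print(run_with_repair(execute, fix, "SELECT * FROM users"))  # → [('ok',)]
```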
[
"hwchase17",
"langchain"
] | ### System Info
LangChain version used: 0.0.261 Python (this has been observed in older versions of LangChain too)
This context (context attached) is passed from the search results retrieved from Azure vector search.
Question: what are the legal services in Australia?
Response: "Some legal services in Australia ... | Langchain bug: Responding to out of context questions when using GPT4 with Vector Database where as when the context is passed directly to the Azure OpenAI completion endpoint, we consistently get the correct response | https://api.github.com/repos/langchain-ai/langchain/issues/9165/comments | 7 | 2023-08-13T01:21:43Z | 2023-11-19T16:05:01Z | https://github.com/langchain-ai/langchain/issues/9165 | 1,848,338,018 | 9,165 |
[
"hwchase17",
"langchain"
] | ### Feature request
I know that Llamapi is in the experimental phase but can is there way to set the model configurations for the llms.
### Motivation
I have my own finetune llama that works the api, and would like to use it
### Your contribution
N/A | llamapi | https://api.github.com/repos/langchain-ai/langchain/issues/9157/comments | 1 | 2023-08-12T19:49:08Z | 2023-11-18T16:04:47Z | https://github.com/langchain-ai/langchain/issues/9157 | 1,848,207,782 | 9,157 |
[
"hwchase17",
"langchain"
] | ### System Info
Python 3.10
### Who can help?
@hw
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] V... | ConversationalAgent/CHAT_CONVERSATIONAL_REACT_DESCRIPTION doesn't work well with ChatOpenAI 3.5 and Claude models | https://api.github.com/repos/langchain-ai/langchain/issues/9154/comments | 7 | 2023-08-12T13:32:19Z | 2024-02-14T16:11:58Z | https://github.com/langchain-ai/langchain/issues/9154 | 1,848,016,732 | 9,154 |
[
"hwchase17",
"langchain"
] | ### System Info
Hello,
Thanks for the wonderful extension on PG. I am using on DigitialOcean for my SAAS app where i have different schemas for each tenant. I had installed PG vector in public schema, so that all the tenants could make use of them.
My app is built on OpenAI api and I am using the default method fr... | Able to query contents of a PG vector table, ever after dropping the table. | https://api.github.com/repos/langchain-ai/langchain/issues/9152/comments | 2 | 2023-08-12T11:13:01Z | 2023-08-12T12:07:34Z | https://github.com/langchain-ai/langchain/issues/9152 | 1,847,942,408 | 9,152 |
[
"hwchase17",
"langchain"
] | ### Feature request
I am trying to use the from_llm_and_api_docs() function with an API that only supports parameters via the "params" parameter in the request library:
The from_llm_and_api_docs works well with apis that encode parameters in the URl. Eg: https://api.open-meteo.com/v1/forecast/latitude=123&longitud... | Custom Parameter/ Json Payload for from_llm_and_api_docs() function | https://api.github.com/repos/langchain-ai/langchain/issues/9151/comments | 6 | 2023-08-12T10:53:09Z | 2024-05-08T14:23:31Z | https://github.com/langchain-ai/langchain/issues/9151 | 1,847,930,416 | 9,151 |
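The two request styles the feature request contrasts differ only in where the parameters are encoded. Passing a `params` dict (as `requests.get(url, params=...)` does) is essentially the query-string encoding below, which `urllib.parse.urlencode` performs:

```python
from urllib.parse import urlencode

# Build the same request either way: a params dict is just a
# query-string encoding away from the URL-embedded form.
params = {"latitude": 52.52, "longitude": 13.41,
          "hourly": "temperature_2m"}
url = "https://api.open-meteo.com/v1/forecast?" + urlencode(params)
print(url)
```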
[
"hwchase17",
"langchain"
] | ### System Info
langchain==0.0.262
aim==3.17.5
aim-ui==3.17.5
aimrecords==0.0.7
aimrocks==0.4.0
python: 3.11.4
env: MacOS
### Who can help?
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding ... | Error in AimCallbackHandler.on_chain_start callback: 'input' | https://api.github.com/repos/langchain-ai/langchain/issues/9150/comments | 6 | 2023-08-12T04:58:36Z | 2024-02-18T16:07:41Z | https://github.com/langchain-ai/langchain/issues/9150 | 1,847,753,503 | 9,150 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain 0.0.262
text_generation_server. https://github.com/huggingface/text-generation-inference
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat ... | Langchain text_generation client hits: pydantic.error_wrappers.ValidationError: 1 validation error for Response | https://api.github.com/repos/langchain-ai/langchain/issues/9146/comments | 2 | 2023-08-11T21:41:36Z | 2023-08-11T21:45:22Z | https://github.com/langchain-ai/langchain/issues/9146 | 1,847,459,877 | 9,146 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
llm = OpenAI(temperature=0, model_name = args.ModelName)
system_message = SystemMessage(content=template)
agent_kwargs = {
"extra_prompt_messages": [MessagesPlaceholder(variable_name="memory")],
"system_message": system_message,
}
tools = load_tools(["google-serper"], l... | GoogleSerperRun._arun() got an unexpected keyword argument 'arg1' | https://api.github.com/repos/langchain-ai/langchain/issues/9144/comments | 1 | 2023-08-11T19:44:55Z | 2023-08-13T23:07:17Z | https://github.com/langchain-ai/langchain/issues/9144 | 1,847,337,972 | 9,144 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
`API Reference` `langchain.agents Functions` table unreadable:

The `name` column is so long, which makes the `description` column unreadable.
It is because of this value:... | DOC: `API Reference` `langchain.agents Functions ` table unreadable | https://api.github.com/repos/langchain-ai/langchain/issues/9133/comments | 5 | 2023-08-11T17:16:33Z | 2023-11-19T22:54:10Z | https://github.com/langchain-ai/langchain/issues/9133 | 1,847,171,982 | 9,133 |
[
"hwchase17",
"langchain"
] | ### Feature request
**Make the intent identification (SELECT vs UPDATE) within the SPARQL QA chain more resilient**
As originally described in #7758 and further discussed in #8521, some models struggle with providing an unambiguous response to the intent identification prompt, i.e., they return a sentence that contai... | Make intent identification in SPARQL QA chain more resilient | https://api.github.com/repos/langchain-ai/langchain/issues/9132/comments | 5 | 2023-08-11T17:09:45Z | 2024-02-18T17:49:30Z | https://github.com/langchain-ai/langchain/issues/9132 | 1,847,163,390 | 9,132 |
[
"hwchase17",
"langchain"
] | ### System Info
Python 3.10
LangChain: 0.0.245
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- ... | LangChain tools as gpt func, always fails at the JSON decode | https://api.github.com/repos/langchain-ai/langchain/issues/9130/comments | 4 | 2023-08-11T16:48:08Z | 2023-11-19T16:05:06Z | https://github.com/langchain-ai/langchain/issues/9130 | 1,847,137,279 | 9,130 |
[
"hwchase17",
"langchain"
] | ### System Info
I have an extremely simple setup where I created a GH app following the [Github App Quickstart Guide](https://docs.github.com/en/apps/creating-github-apps/writing-code-for-a-github-app/quickstart). I then start a blank repo with the template `README.md` (just the name of the repo in a title).
The ag... | [Request] Note that the 'base branch' needs to match the repository's 'default branch' | https://api.github.com/repos/langchain-ai/langchain/issues/9129/comments | 4 | 2023-08-11T16:31:15Z | 2023-11-20T16:05:11Z | https://github.com/langchain-ai/langchain/issues/9129 | 1,847,115,156 | 9,129 |
[
"hwchase17",
"langchain"
] | ### System Info
in sagemaker.
langchain==0.0.256 or 0.0.249 (I tried both)
Image: Data Science 3.0
Kernel: Python 3
Instance type: ml.t3.medium 2 vCPU + 4 GiB
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Com... | inference configurations are invalid forBedrockEmbeddings models | https://api.github.com/repos/langchain-ai/langchain/issues/9127/comments | 11 | 2023-08-11T15:33:52Z | 2023-12-13T16:07:43Z | https://github.com/langchain-ai/langchain/issues/9127 | 1,847,039,060 | 9,127 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain==0.0.260
model = "gpt-3.5-turbo-16k"
temperature = 0.0
### Who can help?
@hwchase17 and @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prom... | Intermediate answer from STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION received as a final answer | https://api.github.com/repos/langchain-ai/langchain/issues/9122/comments | 13 | 2023-08-11T13:52:19Z | 2024-01-30T00:41:15Z | https://github.com/langchain-ai/langchain/issues/9122 | 1,846,866,526 | 9,122 |
[
"hwchase17",
"langchain"
] | ### Feature request
Many parameters described on the TGI [Swagger](https://huggingface.github.io/text-generation-inference/#/Text%20Generation%20Inference/generate) find their direct equivalent in the corresponding LangChain [API](https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_text_gen_infe... | Add `do_sample` to HuggingFaceTextGenInference | https://api.github.com/repos/langchain-ai/langchain/issues/9120/comments | 3 | 2023-08-11T12:42:50Z | 2023-11-13T08:45:47Z | https://github.com/langchain-ai/langchain/issues/9120 | 1,846,760,511 | 9,120 |