issue_owner_repo listlengths 2 2 | issue_body stringlengths 0 261k ⌀ | issue_title stringlengths 1 925 | issue_comments_url stringlengths 56 81 | issue_comments_count int64 0 2.5k | issue_created_at stringlengths 20 20 | issue_updated_at stringlengths 20 20 | issue_html_url stringlengths 37 62 | issue_github_id int64 387k 2.46B | issue_number int64 1 127k |
|---|---|---|---|---|---|---|---|---|---|
[
"langchain-ai",
"langchain"
] | ### Feature request
[Infino callback handler](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/callbacks/infino_callback.py) as of this writing does not support ChatOpenAI models, as it does not support `on_chat_model_start` callback.
Adding this callback will enable users to track latency, errors and token usage for ChatOpenAI models (in addition to existing support for OpenAI and other non-chat models).
### Motivation
Infino customers have requested this integration, as it extends the Infino callback handler's coverage to OpenAI chat models.
Customer request GitHub issue for Infino - https://github.com/infinohq/infino/issues/93
### Your contribution
I have a working code change for this issue, and will submit a PR shortly. | Support ChatOpenAI models in Infino callback manager | https://api.github.com/repos/langchain-ai/langchain/issues/11607/comments | 5 | 2023-10-10T14:32:26Z | 2024-02-03T07:12:26Z | https://github.com/langchain-ai/langchain/issues/11607 | 1,935,507,768 | 11,607 |
[
"langchain-ai",
"langchain"
] | ### System Info
openai==0.27.6
urllib3==1.26.15
pandas==2.0.1
slack-sdk==3.21.3
pydantic==2.4.2
langchain==0.0.311
SQLAlchemy==2.0.11
mysqlclient==2.2.0
pymysql==1.1.0
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am using AWS Lambda.
### Expected behavior
It should run fine without MariaDB. | LangChain is not working on AWS; it always gives "ImportError: libmariadb.so.3: cannot open shared object file: No such file or directory", even though MariaDB is not used anywhere | https://api.github.com/repos/langchain-ai/langchain/issues/11606/comments | 3 | 2023-10-10T14:13:21Z | 2024-02-08T16:21:06Z | https://github.com/langchain-ai/langchain/issues/11606 | 1,935,467,502 | 11,606 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I built a self-hosted LLM and used LangChain's HuggingFaceTextGenInference so it could run in an offline environment, but an error occurred because the code behind the map_reduce type of summarize_chain forcibly tries to fetch a tokenizer online.
I would like to solve this problem so the self-hosted LLM can be used in an offline environment.
The error message is as follows:
OSError: Can't load tokenizer for 'gpt2'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'gpt2' is the correct path to a directory containing all relevant files for a GPT2TokenizerFast tokenizer.
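One workaround that is often suggested (hedged; the cache path below is purely hypothetical): pre-download the tokenizer once while online, then force the Hugging Face libraries into offline mode so later loads resolve from the local cache instead of the Hub:

```python
import os

# While online, run once so the cache is populated, e.g.:
#   AutoTokenizer.from_pretrained("gpt2")
# Then, in the offline environment, before using transformers:
os.environ["TRANSFORMERS_OFFLINE"] = "1"   # resolve models/tokenizers from cache only
os.environ["HF_HOME"] = "/opt/hf-cache"    # hypothetical path to the copied cache
```

Whether this resolves the chain's internal token counting depends on which tokenizer the chain tries to load, so treat it as a starting point rather than a confirmed fix.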
### Suggestion:
I would like to know how to make the tokenizer used by HuggingFaceTextGenInference work offline, or how to utilize a self-hosted LLM without using HuggingFaceTextGenInference. | Using a self-hosted LLM in an offline environment | https://api.github.com/repos/langchain-ai/langchain/issues/11599/comments | 4 | 2023-10-10T11:22:02Z | 2024-02-09T16:17:44Z | https://github.com/langchain-ai/langchain/issues/11599 | 1,935,118,678 | 11,599 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Add optional multithreading support for `TextSplitter`, e.g for the loop in `TextSplitter.create_documents`:
https://github.com/langchain-ai/langchain/blob/e2a9072b806b1a45b0e4c107b30dddd0f67a453f/libs/langchain/langchain/text_splitter.py#L138-L153
Question: Is there anything opposing this idea / preventing it from a technical perspective?
### Motivation
Text splitting can take up significant time and resources if a custom length function is used to measure chunk length (e.g. based on a huggingface tokenizer's encode method), especially for the `RecursiveCharacterTextSplitter`.
Therefore we want to introduce multithreading support on a document level.
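For illustration only, a minimal sketch (not the actual PR) of what document-level fan-out with a thread pool might look like; `split_fn` stands in for the splitter's `split_text`, and all names here are assumptions:

```python
from concurrent.futures import ThreadPoolExecutor


def split_documents_parallel(texts, split_fn, max_workers=4):
    """Split each text with split_fn, fanning the work out across threads.

    split_fn is any callable str -> list[str]; in the real feature this
    would be the splitter's own split_text method.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # pool.map preserves input order, so results line up with `texts`.
        return list(pool.map(split_fn, texts))
```

Threads help here mainly when the length function releases the GIL (e.g. tokenizers implemented in Rust or C); a pure-Python length function would see little speedup from this pattern.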
### Your contribution
Feature Request: https://github.com/langchain-ai/langchain/issues/11595
PR: https://github.com/langchain-ai/langchain/pull/11598 | Multithreading support for TextSplitter | https://api.github.com/repos/langchain-ai/langchain/issues/11595/comments | 2 | 2023-10-10T10:02:42Z | 2024-02-08T16:21:16Z | https://github.com/langchain-ai/langchain/issues/11595 | 1,934,953,718 | 11,595 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Hey hi hello :)
Some organizations expose Azure OpenAI endpoints via proxies that require additional HTTP headers. Currently the AzureOpenAI class doesn't expose (at least I wasn't able to find one) a capability to set these, unlike what is possible in OpenAIEmbeddings ([https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.openai.OpenAIEmbeddings.html#langchain.embeddings.openai.OpenAIEmbeddings.headers](https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.openai.OpenAIEmbeddings.html#langchain.embeddings.openai.OpenAIEmbeddings.headers)). Something similar was already discussed ([https://github.com/langchain-ai/langchain/issues/2120](https://github.com/langchain-ai/langchain/issues/2120)), and embeddings already have this solved. It would be great to have it for the AzureOpenAI class as well.
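As a sketch of the requested semantics only (the function name is hypothetical, not a real LangChain API): proxy-supplied headers would be merged over the client's defaults on every request, with the extra headers winning on conflict:

```python
from typing import Dict, Optional


def build_request_headers(defaults: Dict[str, str],
                          extra: Optional[Dict[str, str]] = None) -> Dict[str, str]:
    # Extra (proxy-required) headers extend and, on key conflict, override
    # the client's default headers.
    merged = dict(defaults)
    merged.update(extra or {})
    return merged
```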
### Motivation
I need this to get via proxy that needs additional HTTP headers.
### Your contribution
I don't think I can contribute to this. | AzureOpenAI doesn't expose parameter to set custom HTTP headers. | https://api.github.com/repos/langchain-ai/langchain/issues/11593/comments | 3 | 2023-10-10T07:56:04Z | 2024-02-06T16:24:21Z | https://github.com/langchain-ai/langchain/issues/11593 | 1,934,656,142 | 11,593 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I want to get the ids of the documents returned when performing `similarity_search()` or `similarity_search_with_score()`. The id should be present in the metadata: `{"id": id}`.
### Motivation
Want to update the metadata of the documents that are returned in the similarity search. This update can only be done if the documents returned have the id in their metadata. When adding the documents to the vectordb, I am not adding ids, as they are generated automatically if not passed.
### Your contribution
No contributions but below is the changes that can be made:
```
def _results_to_docs_and_scores(results: Any) -> List[Tuple[Document, float]]:
    # (Patch against the Chroma vector store module, where Any, List,
    # Tuple and Document are presumably already imported.)
    return [
        # TODO: Chroma can do batch querying,
        # we shouldn't hard code to the 1st result
        # Merge the returned id into the metadata; `result[1] or {}` guards
        # against a None metadata so the merge cannot raise.
        (
            Document(
                page_content=result[0],
                metadata={**(result[1] or {}), "id": result[3]},
            ),
            result[2],
        )
        for result in zip(
            results["documents"][0],
            results["metadatas"][0],
            results["distances"][0],
            results["ids"][0],
        )
    ]
``` | Return ids for the document returned from the Similarity Search. | https://api.github.com/repos/langchain-ai/langchain/issues/11592/comments | 3 | 2023-10-10T07:14:45Z | 2024-02-15T16:08:35Z | https://github.com/langchain-ai/langchain/issues/11592 | 1,934,574,489 | 11,592 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.311
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.document_loaders import WebBaseLoader
loader = WebBaseLoader("https://academyselfdefense.com/")
data = loader.load()
raw_text = data[0].page_content
print(raw_text)
```
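The page is returning a Cloudflare-style JavaScript challenge rather than the real content. A small helper (a sketch, not part of LangChain) can detect this so callers know to fall back to a JavaScript-capable loader such as a headless-browser-based one:

```python
CHALLENGE_MARKERS = (
    "Just a moment...",
    "Enable JavaScript and cookies to continue",
)


def looks_like_js_challenge(page_text: str) -> bool:
    # True when the fetched text is an anti-bot interstitial, not the page.
    return any(marker in page_text for marker in CHALLENGE_MARKERS)
```

If this returns True, a browser-driven loader (Playwright or Selenium based) is typically needed, since the real content is rendered only after the JavaScript challenge passes; a plain HTTP fetch cannot get past it.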
### Expected behavior
It should load the actual page content instead of `Just a moment...Enable JavaScript and cookies to continue`. | Getting error "Just a moment...Enable JavaScript and cookies to continue" when loading website using WebBaseLoader | https://api.github.com/repos/langchain-ai/langchain/issues/11590/comments | 4 | 2023-10-10T05:53:55Z | 2024-03-02T12:50:18Z | https://github.com/langchain-ai/langchain/issues/11590 | 1,934,432,018 | 11,590 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Is there any plan for supporting BedrockChat async calls?
### Motivation
I am using this API in a FastAPI backend; while receiving data from Bedrock I have to send it back to the frontend. For now the call blocks, so streaming is not possible in this case.
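Until native async support lands, one stop-gap (a sketch, under the assumption that the blocking call is what stalls the event loop) is to push the synchronous call onto a worker thread:

```python
import asyncio


async def agenerate_via_thread(chat_model, prompt: str) -> str:
    # Offload the blocking SDK call so the FastAPI event loop stays
    # responsive. `chat_model.predict` stands in for whatever synchronous
    # entry point is actually used.
    return await asyncio.to_thread(chat_model.predict, prompt)
```

Note this only unblocks concurrent requests; it does not provide token-by-token streaming, which would still need proper async/streaming support in the Bedrock integration itself.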
### Your contribution
no | Request for BedrockChat async functions(BedrockChat.agenerate). | https://api.github.com/repos/langchain-ai/langchain/issues/11589/comments | 3 | 2023-10-10T04:36:10Z | 2024-02-05T23:26:22Z | https://github.com/langchain-ai/langchain/issues/11589 | 1,934,334,394 | 11,589 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain Version: 0.0.311
Python: 3.10.9
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I loaded my vector store from **Pinecone** and tried to find the top-3 most similar documents with their relevance scores:
```
vectorstore = Pinecone(index, embed.embed_query, text_field)
answers = vectorstore.similarity_search_with_score(query, 3)
for item in answers:
print(item[1]) # print out score
```
By running the above code, I can get relevance scores for those 3 documents:
```
0.851780415
0.851505935
0.850369573
```
This looks as expected.
But when I try to use `similarity_search_with_relevance_scores` to find out similar docs:
```
answers= vectorstore.similarity_search_with_relevance_scores(query, score_threshold=0.8)
```
From my understanding it should return at least 3 docs, since we do have docs with similarity above 0.85, but I got the warning `No relevant docs were retrieved using the relevance score threshold 0.8` and an empty result.
I tried the vectorstore retriever as well and got the same warning and empty result:
```
retriever = vectorstore.as_retriever(
search_type="similarity_score_threshold",
search_kwargs={"score_threshold": 0.8}
)
answer = retriever.get_relevant_documents(query)
print(answer)
```
Did I use the score function incorrectly? If not, is there any other way to query with a score threshold?
Thanks
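One plausible explanation (an assumption, not something confirmed here): `similarity_search_with_relevance_scores` passes scores through a store-specific relevance function before applying the threshold. If that function treats the raw score as a distance, a genuinely high similarity gets inverted below the cutoff:

```python
def distance_to_relevance(score: float) -> float:
    # Typical distance-based normalization: relevance = 1 - distance.
    # Applied to a raw *similarity* of ~0.85 this yields ~0.15, which is
    # below the 0.8 threshold and would explain the empty result.
    return 1.0 - score
```

Printing the unthresholded output of `similarity_search_with_relevance_scores(query, k=3)` next to the raw `similarity_search_with_score` values would confirm or rule this out.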
### Expected behavior
When using
```
answers= vectorstore.similarity_search_with_relevance_scores(query, score_threshold=0.8)
```
or
```
retriever = vectorstore.as_retriever(
search_type="similarity_score_threshold",
search_kwargs={"score_threshold": 0.8}
)
answer = retriever.get_relevant_documents(query)
print(answer)
```
It should return me few of doc that have >= 0.8 similarity.
| similarity_search_with_relevance_scores is not working properly with Pinecone | https://api.github.com/repos/langchain-ai/langchain/issues/11587/comments | 3 | 2023-10-10T02:51:40Z | 2023-10-17T03:19:33Z | https://github.com/langchain-ai/langchain/issues/11587 | 1,934,209,959 | 11,587 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Introduce a comprehensive Discord Integration Toolkit to Langchain. This would allow a more seamless and direct interaction with the Discord API through an Agent interface. It should encompass capabilities like message dispatch, channel navigation, user role management, and channel administration.
### Motivation
At present, Langchain offers a limited set of functionalities for interfacing with the Discord API. Specifically, the available method is the DiscordChatLoader, necessitating manual data downloading and uploading in a CSV format. This approach not only lacks versatility but is cumbersome. Furthermore, there's an absence of functions that would empower an LLM Agent to undertake tasks like messaging, channel searches, role assignments, and channel handling on Discord.
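For the message-dispatch piece, even before a full toolkit exists, a minimal building block can be sketched with Discord's incoming-webhook API (the webhook URL would be supplied by the user; the payload shape follows Discord's `{"content": ...}` webhook format, and this sketch is not part of any existing LangChain tool):

```python
import json
import urllib.request


def build_webhook_payload(message: str) -> dict:
    # Discord incoming webhooks accept a JSON body with a "content" field.
    return {"content": message}


def send_discord_message(webhook_url: str, message: str) -> None:
    # Fire-and-forget POST to the webhook; urlopen raises on HTTP errors.
    data = json.dumps(build_webhook_payload(message)).encode("utf-8")
    req = urllib.request.Request(
        webhook_url, data=data, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req)
```

A full Agent tool would wrap something like `send_discord_message` with a name and description so the LLM can invoke it.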
### Your contribution
We are initiating the development phase for this proposal and intend to submit a PR once the feature reaches completion. | Discord Integration Toolkit | https://api.github.com/repos/langchain-ai/langchain/issues/11584/comments | 1 | 2023-10-10T01:39:47Z | 2023-12-01T05:29:28Z | https://github.com/langchain-ai/langchain/issues/11584 | 1,934,118,870 | 11,584 |
[
"langchain-ai",
"langchain"
] | ### Feature request
As of today, it's not possible to use Amazon API Gateway to expose an embeddings model and use it as part of a chain (e.g. ConversationalRetrievalChain). AmazonAPIGateway can currently be used only as an LLM for text generation; you cannot use it for text embeddings generation (e.g. as part of a ConversationalRetrievalChain).
### Motivation
Amazon API Gateway can be adopted for both text generation and text embeddings. Amazon Bedrock provides different types of models (LLMs and embeddings models). In this way, developers can use Amazon API Gateway for Retrieval Augmented Generation solutions.
### Your contribution
The class can be defined as follows:
```
from typing import List

import requests

from langchain.embeddings.base import Embeddings


class AmazonAPIGatewayEmbeddings(Embeddings):
    def __init__(self, api_url, headers):
        self.api_url = api_url
        self.headers = headers

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        results = []
        for text in texts:
            response = requests.post(
                self.api_url,
                json={"inputs": text},
                headers=self.headers,
            )
            results.append(response.json()[0]["embedding"])
        return results

    def embed_query(self, text: str) -> List[float]:
        response = requests.post(
            self.api_url,
            json={"inputs": text},
            headers=self.headers,
        )
        return response.json()[0]["embedding"]


# --- example usage ---
embeddings = AmazonAPIGatewayEmbeddings(
    api_url=f"{api_url}/invoke_model?model_id={model_id}",
    headers={
        ...  # Required headers for the API invocation
    },
)
embeddings.embed_query("Hello, how are you?")
embeddings.embed_query("Hello, how are you?")
``` | AmazonAPIGatewayEmbeddings class for text embeddings | https://api.github.com/repos/langchain-ai/langchain/issues/11580/comments | 1 | 2023-10-09T22:10:30Z | 2024-02-06T16:24:26Z | https://github.com/langchain-ai/langchain/issues/11580 | 1,933,898,140 | 11,580 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I propose to add the Python client for Arcee.ai as an LLM and retriever.
`arcee.py` under [langchain/utilities](https://github.com/langchain-ai/langchain/tree/master/libs/langchain/langchain/utilities)
`arcee.py` under [langchain/llms](https://github.com/langchain-ai/langchain/tree/master/libs/langchain/langchain/llms)
`arcee.py` under [langchain/retrievers](https://github.com/langchain-ai/langchain/tree/master/libs/langchain/langchain/retrievers)
```python
class Arcee(LLM):
    """Client for Arcee's Domain Adapted Language Models (DALMs)."""
    ...
```
```python
class ArceeRetriever(BaseRetriever):
    """Retriever for Arcee's DALMs."""
    ...
```
See: Client docs
https://github.com/arcee-ai/arcee-python
### Motivation
Arcee.ai offers seamless Domain Adaptation with its Specialized Domain Adapted Language Model system.
I want to utilize these adapted language models on https://arcee.ai and build applications with LangChain.
### Your contribution
Discussions - https://github.com/arcee-ai/arcee-python/issues/15
PR - https://github.com/langchain-ai/langchain/pull/11579 | Support of Arcee.ai LLM and Retrievers | https://api.github.com/repos/langchain-ai/langchain/issues/11578/comments | 2 | 2023-10-09T21:26:51Z | 2023-10-10T19:43:10Z | https://github.com/langchain-ai/langchain/issues/11578 | 1,933,846,716 | 11,578 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
End-to-end Example: [GPT+WolframAlpha](https://huggingface.co/spaces/JavaFXpert/Chat-GPT-LangChain)
This link leads to a 404, and a quick Google search does not find a working page.
### Idea or request for content:
_No response_ | DOC: Wolfram Agent Link broken on README | https://api.github.com/repos/langchain-ai/langchain/issues/11574/comments | 1 | 2023-10-09T19:15:40Z | 2024-02-06T16:24:31Z | https://github.com/langchain-ai/langchain/issues/11574 | 1,933,650,569 | 11,574 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
To get documents from collections (vector db) there is a method:
```python
class Collection(BaseModel):
    name: str
    id: UUID
    metadata: Optional[CollectionMetadata] = None
    _client: "API" = PrivateAttr()
    _embedding_function: Optional[EmbeddingFunction] = PrivateAttr()

    def __init__(
        self,
        client: "API",
        name: str,
        id: UUID,
        embedding_function: Optional[EmbeddingFunction] = ef.DefaultEmbeddingFunction(),
        metadata: Optional[CollectionMetadata] = None,
    ):
        self._client = client
        self._embedding_function = embedding_function
        super().__init__(name=name, metadata=metadata, id=id)

    # ... (other methods elided) ...

    def get(
        self,
        ids: Optional[OneOrMany[ID]] = None,
        where: Optional[Where] = None,
        limit: Optional[int] = None,
        offset: Optional[int] = None,
        where_document: Optional[WhereDocument] = None,
        include: Include = ["metadatas", "documents"],
    ) -> GetResult:
        """Get embeddings and their associate data from the data store. If no ids or where filter is provided returns
        all embeddings up to limit starting at offset.

        Args:
            ids: The ids of the embeddings to get. Optional.
            where: A Where type dict used to filter results by. E.g. `{"$and": ["color" : "red", "price": {"$gte": 4.20}]}`. Optional.
            limit: The number of documents to return. Optional.
            offset: The offset to start returning results from. Useful for paging results with limit. Optional.
            where_document: A WhereDocument type dict used to filter by the documents. E.g. `{$contains: {"text": "hello"}}`. Optional.
            include: A list of what to include in the results. Can contain `"embeddings"`, `"metadatas"`, `"documents"`. Ids are always included. Defaults to `["metadatas", "documents"]`. Optional.

        Returns:
            GetResult: A GetResult object containing the results.
        """
        where = validate_where(where) if where else None
        where_document = (
            validate_where_document(where_document) if where_document else None
        )
        ids = validate_ids(maybe_cast_one_to_many(ids)) if ids else None
        include = validate_include(include, allow_distances=False)
        return self._client._get(
            self.id,
            ids,
            where,
            None,
            limit,
            offset,
            where_document=where_document,
            include=include,
        )
```
To query documents by searching for a particular term in the document `where_document = {"$contains": "langchain"}` can be passed to the `get()` method. This value for the key/operator `$contains` is case sensitive. How to search for keywords irrespective of case?
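Until Chroma grows a case-insensitive operator, one workaround (done client-side, sketched here) is to over-fetch with `get()` and post-filter with a case-insensitive regex; another is to store a lowercased copy of the keyword in metadata at ingestion time and always query with `term.lower()`:

```python
import re


def filter_contains_ci(texts, term):
    # Case-insensitive substitute for {"$contains": term}, applied
    # client-side over the page_content strings returned by get().
    pattern = re.compile(re.escape(term), re.IGNORECASE)
    return [t for t in texts if pattern.search(t)]
```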
### Suggestion:
I want to extract entities from a sentence and use them to get documents from Chroma that contain those words, and this lookup needs to be case insensitive. For example, if a document contains the keyword "Langchain", the LLM asked to extract the entity is not guaranteed to generate "Langchain"; it can output "langchain". Capitalizing the first letter of the generated entities can handle some cases, but that does not work for keywords like OpenCV. | Support for case insensitive query search something like "$regex" instead of "$contains" | https://api.github.com/repos/langchain-ai/langchain/issues/11571/comments | 15 | 2023-10-09T18:43:18Z | 2023-10-10T20:31:04Z | https://github.com/langchain-ai/langchain/issues/11571 | 1,933,607,837 | 11,571 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
The current Docusaurus default settings display a link icon in the footer of the website. This icon is a small hyperlink symbol that appears next to external links in the footer. While this may be a helpful feature for some websites, it may not align with the design or functional requirements of our documentation site.
### Idea or request for content:
The goal of this issue is to remove the link icon from the footer of our Docusaurus-powered website. This will result in a cleaner and more minimalistic footer design.
Expected Behavior:
The link icon should be removed from the footer, ensuring that external links are presented without the additional icon.
Attachments:
Current Footer:

| DOC: Remove the Link Icon in the footer due to the docusaurus default settings | https://api.github.com/repos/langchain-ai/langchain/issues/11565/comments | 5 | 2023-10-09T17:57:51Z | 2024-05-19T16:06:53Z | https://github.com/langchain-ai/langchain/issues/11565 | 1,933,542,876 | 11,565 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain v0.0.304
### Who can help?
@hwchase17 @ag
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Use a combine documents chain with Bedrock/anthropic.claude-v2 LLM:
```python
from langchain.chains.question_answering import load_qa_chain
from langchain.llms import Bedrock
llm = Bedrock(model_id="anthropic.claude-v2")
chain = load_qa_chain(llm, chain_type="map_reduce", verbose=verbose, **kwargs)
chain.run(...)
```
### Expected behavior
The chain runs correctly.
Instead, a series of warnings is raised about the missing `transformers` library, about too long a context passed to the tokenizer, etc., and finally the chain fails to run.
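A workaround sketch (the base class below is a stand-in, not the real `Bedrock`): override `get_num_tokens` so counting is delegated to an injected tokenizer instead of the GPT-2 default; in practice the injected callable would come from Anthropic's own tokenizer.

```python
class _BaseLLMStandIn:
    # Stand-in for BaseLanguageModel's GPT-2-based default counter.
    def get_num_tokens(self, text: str) -> int:
        raise NotImplementedError("would load a GPT-2 tokenizer here")


class TokenizerAwareLLM(_BaseLLMStandIn):
    def __init__(self, tokenize=None):
        # tokenize: callable str -> list of tokens. Whitespace split is a
        # crude default used only so this sketch runs without extra deps;
        # a real subclass would pass the Claude tokenizer here.
        self._tokenize = tokenize or str.split

    def get_num_tokens(self, text: str) -> int:
        return len(self._tokenize(text))
```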
## Background
The default implementation of `get_token_ids` and `get_num_tokens` in `BaseLanguageModel` uses a GPT-2 based tokenizer. The `Bedrock` LLM implementation does not override these methods, so it tries to tokenize the text with the incorrect tokenizer. In the particular case of the `anthropic.claude-v2` model, this causes incorrect token counting, requires an otherwise unnecessary dynamic dependency (the `transformers` library), and emits a series of warnings about overly long input text passed to the tokenizer. | Incorrect token counting in Bedrock LLMs | https://api.github.com/repos/langchain-ai/langchain/issues/11560/comments | 2 | 2023-10-09T16:37:33Z | 2024-02-06T16:24:36Z | https://github.com/langchain-ai/langchain/issues/11560 | 1,933,438,442 | 11,560 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
On the class description page there is no list with links to class methods
The class:
langchain.chains.retrieval_qa.base.RetrievalQA
https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval_qa.base.RetrievalQA.html#langchain.chains.retrieval_qa.base.RetrievalQA
List as on this page:
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streaming_stdout.StreamingStdOutCallbackHandler.html#langchain.callbacks.streaming_stdout.StreamingStdOutCallbackHandler
### Idea or request for content:
_No response_ | On the class description page there is no list with links to class methodsDOC: <Please write a comprehensive title after the 'DOC: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/11554/comments | 1 | 2023-10-09T15:02:05Z | 2024-02-06T16:24:41Z | https://github.com/langchain-ai/langchain/issues/11554 | 1,933,283,622 | 11,554 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am getting this error while setting up a RetrievalQA chain using FAISS; the code is as follows.
Using the following versions on a Mac: Python 3.10, faiss-cpu 1.7.4, langchain 0.0.310.
```
embeddings_model_name = os.environ.get('EMBEDDINGS_MODEL_NAME')
faiss_index = os.environ.get('FAISS_INDEX')
embeddings = HuggingFaceEmbeddings(model_name=embeddings_model_name)
self.db = FAISS.load_local(faiss_index, embeddings)
retriever = self.db.as_retriever(
    search_type="mmr",
    search_kwargs={'k': 5, 'fetch_k': 10}
)
prompt = [
    ("human", "Hello"),
    ("assistant", "Hi there!"),
]
qa = RetrievalQA.from_chain_type(
    llm=self.llm,
    chain_type="refine",
    retriever=retriever,
    return_source_documents=True,
    chain_type_kwargs={"prompt": prompt}
)
res = qa({
    "query": query,
    "prompt": prompt,
    "context": "You are helpful"
})
```
The error is as follows:
```
([ErrorWrapper(exc=ExtraError(), loc=('prompt',))], <class 'langchain.chains.combine_documents.refine.RefineDocumentsChain'>)
```
The console shows the following:
```
prompt : [('human', 'Hello'), ('assistant', 'Hi there!')]
self.llm : LlamaCpp
Params: {'model_path': 'models/llama-2-7b-chat.Q8_0.gguf', 'suffix': None, 'max_tokens': 256, 'temperature': 0.8, 'top_p': 0.9, 'logprobs': None, 'echo': False, 'stop_sequences': [], 'repeat_penalty': 1.1, 'top_k': 40}
Failed while retrieve documents: ([ErrorWrapper(exc=ExtraError(), loc=('prompt',))], <class 'langchain.chains.combine_documents.refine.RefineDocumentsChain'>)
Query response None
```
### Suggestion:
_No response_ | Issue: <Getting error while setting up a RetrievalQA Conversation with Faiss_cpu prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/11548/comments | 8 | 2023-10-09T12:24:24Z | 2024-02-13T16:11:08Z | https://github.com/langchain-ai/langchain/issues/11548 | 1,932,951,692 | 11,548 |
[
"langchain-ai",
"langchain"
] | ### System Info
max_marginal_relevance_search is mentioned in the ElasticsearchStore Python documentation, but when calling the referenced API with langchain 0.0.310 and Python 3.9 I receive a NotImplementedError.
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.elasticsearch.ElasticsearchStore.html#langchain.vectorstores.elasticsearch.ElasticsearchStore.max_marginal_relevance_search
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
vector_store = ElasticsearchStore(
embedding=OpenAIEmbeddings(),
index_name="{INDEX_NAME}",
es_cloud_id="{CLOUD_ID}",
es_user="{USER}",
es_password="{PASSWORD}"
)
pages = vector_store.max_marginal_relevance_search("test query")
```
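As a point of reference while the store-side implementation is missing, the MMR selection step itself is small. A pure-Python sketch (not LangChain's implementation; it assumes embeddings are available as plain float lists) that could be applied client-side over raw results:

```python
import math


def _cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0


def mmr(query_vec, doc_vecs, k=4, lambda_mult=0.5):
    """Return indices of up to k docs, trading relevance against diversity."""
    selected = []
    candidates = list(range(len(doc_vecs)))
    while candidates and len(selected) < k:
        best, best_score = None, float("-inf")
        for i in candidates:
            relevance = _cosine(query_vec, doc_vecs[i])
            # Penalize similarity to anything already selected.
            redundancy = max(
                (_cosine(doc_vecs[i], doc_vecs[j]) for j in selected),
                default=0.0,
            )
            score = lambda_mult * relevance - (1 - lambda_mult) * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
        candidates.remove(best)
    return selected
```

With a low `lambda_mult` the second pick favors a dissimilar document over a near-duplicate of the first.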
### Expected behavior
max_marginal_relevance_search functions as per the documentation. | Support max_marginal_relevance_search for ElasticSearchStore | https://api.github.com/repos/langchain-ai/langchain/issues/11547/comments | 3 | 2023-10-09T12:02:23Z | 2024-02-09T16:17:58Z | https://github.com/langchain-ai/langchain/issues/11547 | 1,932,912,761 | 11,547 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
https://arxiv.org/abs/2305.10601
### Suggestion:
https://arxiv.org/abs/2305.10601 | Issue: Add Examples of implementing Tree of Thoughts using Langchain | https://api.github.com/repos/langchain-ai/langchain/issues/11546/comments | 4 | 2023-10-09T11:54:58Z | 2024-02-01T18:48:53Z | https://github.com/langchain-ai/langchain/issues/11546 | 1,932,899,628 | 11,546 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi, I am trying to use a Hugging Face model with LangChain. However, I get this warning:
"FutureWarning: '__init__' (from 'huggingface_hub.inference_api') is deprecated and will be removed from version '0.19.0'. `InferenceApi` client is deprecated in favor of the more feature-complete `InferenceClient`. Check out this guide to learn how to convert your script to use it: https://huggingface.co/docs/huggingface_hub/guides/inference#legacy-inferenceapi-client."
This is my code :
```
os.environ["HUGGINGFACEHUB_API_TOKEN"] = huggingFace_api_key
llm_repo_id = "google/flan-t5-xxl"

def generate_pet_name(animal_type, pet_color):
    llm = HuggingFaceHub(
        repo_id=llm_repo_id, model_kwargs={"temperature": 0.5})
    prompt_template = PromptTemplate(
        input_variables=['animal_type', 'animal_color'],
        template="I have an {animal_type} with {animal_color} in color. Suggest me five name which sound"
        + "appropriate with the color for my {animal_type}")
    name_chain = LLMChain(llm=llm, prompt=prompt_template, output_key="pet_name")
    response = name_chain({'animal_type': animal_type, 'animal_color': pet_color})
    return response
```
How do I get rid of this warning?
### Suggestion:
_No response_ | Issue: Inference Client in Lang chain | https://api.github.com/repos/langchain-ai/langchain/issues/11545/comments | 5 | 2023-10-09T10:15:14Z | 2024-02-12T16:11:59Z | https://github.com/langchain-ai/langchain/issues/11545 | 1,932,738,201 | 11,545 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Based on the discussion in: https://github.com/langchain-ai/langchain/issues/11540
The WebBaseLoader in LangChain has a default User-Agent set in the session headers, and this could be a good enhancement for the RecursiveUrlLoader as well.
Here's a potential solution:
```
class RecursiveUrlLoader(BaseLoader):
    """Load all child links from a URL page."""

    def __init__(
        self,
        url: str,
        max_depth: Optional[int] = 2,
        use_async: Optional[bool] = None,
        extractor: Optional[Callable[[str], str]] = None,
        metadata_extractor: Optional[Callable[[str, str], str]] = None,
        exclude_dirs: Optional[Sequence[str]] = (),
        timeout: Optional[int] = 10,
        prevent_outside: Optional[bool] = True,
        link_regex: Union[str, re.Pattern, None] = None,
        headers: Optional[dict] = None,
        check_response_status: bool = False,
    ) -> None:
        ...
        self.headers = headers if headers is not None else {"User-Agent": "Mozilla/5.0"}
        ...
```
### Motivation
The RecursiveUrlLoader needs a default User-Agent set on the session, like in WebBaseLoader (https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/document_loaders/web_base.py); otherwise some of the websites we try to scrape return a Forbidden error.
Troubleshooting the error took a lot of time, and we finally realised that it was due to the lack of appropriate headers.
```
loader = RecursiveUrlLoader(url=web_page, max_depth=1, extractor=lambda x: Soup(x, "html.parser").text)
docs = loader.load()
docs[0]
# Document(page_content='\n403 Forbidden\n\n403 Forbidden\nnginx\n\n\n', metadata={'source': '.....', 'title': '403 Forbidden', 'language': None})

from fake_useragent import UserAgent
header_template = {}
header_template["User-Agent"] = UserAgent().random
loader = RecursiveUrlLoader(url=web_page, max_depth=1, headers=header_template, extractor=lambda x: Soup(x, "html.parser").text)
docs = loader.load()
docs[0]
# Document(page_content="Hello and Welcome to....)
```
### Your contribution
yes | User-Agent needs to be set for RecursiveUrlLoader | https://api.github.com/repos/langchain-ai/langchain/issues/11541/comments | 2 | 2023-10-09T06:37:51Z | 2024-02-06T16:24:56Z | https://github.com/langchain-ai/langchain/issues/11541 | 1,932,397,687 | 11,541 |
[
"langchain-ai",
"langchain"
] | ### System Info
Python: 3.10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
>>> loader = RecursiveUrlLoader(url=web_page,max_depth=1,extractor=lambda x: Soup(x, "html.parser").text)
>>> docs = loader.load()
>>> docs[0]
Document(page_content='\n403 Forbidden\n\n403 Forbidden\nnginx\n\n\n', metadata={'source': '.....', 'title': '403 Forbidden', 'language': None})
>>> from fake_useragent import UserAgent
>>> header_template = {}
>>> header_template["User-Agent"] = UserAgent().random
>>> loader = RecursiveUrlLoader(url=web_page,max_depth=1,headers=header_template,extractor=lambda x: Soup(x, "html.parser").text)
>>> docs = loader.load()
>>> docs[0]
Document(page_content="Hello and Welcome to....)
```
### Expected behavior
The RecursiveUrlLoader needs an implicit User-Agent defined for the session, like in WebBaseLoader (https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/document_loaders/web_base.py); otherwise some of the websites we are trying to scrape return a forbidden error.
Troubleshooting the error took a lot of time, and we finally realised that it was due to the lack of appropriate headers. | Recursive URL doesn't work on some websites until User-Agent is added | https://api.github.com/repos/langchain-ai/langchain/issues/11540/comments | 2 | 2023-10-09T05:57:59Z | 2024-02-09T16:18:08Z | https://github.com/langchain-ai/langchain/issues/11540 | 1,932,355,135 | 11,540
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.310
### Who can help?
@joemcelroy: It seems that the method `_select_relevance_score_fn` was never implemented in the Elasticsearch vectorstore.
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
elastic_vector_search = ElasticsearchStore(
    index_name='langchain-demo',
    es_connection=es,
    embedding=embedding,
)
retriever = elastic_vector_search.as_retriever(
    search_type="similarity_score_threshold",
    search_kwargs={'score_threshold': 0.8})
```
Error log:
```
  line 268, in similarity_search_with_relevance_scores
    docs_and_similarities = self._similarity_search_with_relevance_scores(
  File "/usr/local/lib/python3.10/site-packages/langchain/schema/vectorstore.py", line 242, in _similarity_search_with_relevance_scores
    relevance_score_fn = self._select_relevance_score_fn()
  File "/usr/local/lib/python3.10/site-packages/langchain/schema/vectorstore.py", line 211, in _select_relevance_score_fn
    raise NotImplementedError
```
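Until a fix lands, the missing hook can be supplied by overriding it in a subclass. Below is a standalone sketch of the kind of `_select_relevance_score_fn` override that unblocks `similarity_score_threshold` retrieval; a stand-in class avoids a live Elasticsearch dependency, and the identity mapping assumes scores are already normalized to [0, 1] (typically the case for Elasticsearch cosine similarity, but verify for your index):

```python
from typing import Callable


class PatchedStore:  # stand-in for a subclass of ElasticsearchStore
    def _select_relevance_score_fn(self) -> Callable[[float], float]:
        # Elasticsearch cosine-similarity scores are already in [0, 1],
        # so the identity function is a reasonable default.
        return lambda score: score


fn = PatchedStore()._select_relevance_score_fn()
print(fn(0.83))  # 0.83
```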
### Expected behavior
I hope to implement the method and submit the changes as soon as possible. Thank you for your understanding. | The ElasticsearchStore implementation is not correct. | https://api.github.com/repos/langchain-ai/langchain/issues/11539/comments | 8 | 2023-10-09T03:43:54Z | 2024-01-22T18:52:22Z | https://github.com/langchain-ai/langchain/issues/11539 | 1,932,250,317 | 11,539 |
[
"langchain-ai",
"langchain"
] | ### Feature request
We are seeking to add a tool to Langchain for Twitter API (v1.1). Integrating the API as a tool will allow agents to search for tweets and timelines using a specific search query that filters by users, locations, hashtags, etc. to respond to prompts.
### Motivation
Although Langchain currently has TwitterTweetLoader, we have noticed a plethora of more parameters the Twitter API provides that are not integrated into Langchain. TwitterTweetLoader currently only allows us to specify a list of users to return tweets from and the maximum number of tweets. It would be beneficial if we had more options to specify [different operators in search queries](https://developer.twitter.com/en/docs/twitter-api/v1/rules-and-filtering/search-operators). A tool also allows an agent to actively use the API to respond to prompts, without the user having to manually create their own custom tool or load tweets manually.
### Your contribution
We have a small team of developers who will be working on this feature request, and we will submit a pull request later in 1-2 months which implements it. We will do our best to follow the guidelines for contributions, as stated in contributing.md. | Integrating Twitter search API as a tool | https://api.github.com/repos/langchain-ai/langchain/issues/11538/comments | 3 | 2023-10-09T02:28:26Z | 2023-12-11T03:16:11Z | https://github.com/langchain-ai/langchain/issues/11538 | 1,932,202,481 | 11,538 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Create a natural language interface with DGraph, enabling LangChain to connect more effectively with clients using DGraph, and the DGraph community overall.
### Motivation
LangChain already boasts a range of graph integrations with databases like Neo4j and AWS Neptune. Including DGraph in LangChain's roster of graph database integrations will significantly enhance LangChain's compatibility with a wide array of software engineering projects that rely on DGraph, strengthening LangChain's position as a versatile LLM solution's provider for the DGraph-powered ecosystem.
### Your contribution
We will be looking to submit a pull request by the end of November that will contain the required code additions along with the requirements as per CONTRIBUTING.MD (involving adding a demo notebook in docs/modules and adding unit and integration tests). | Add DGraph integration with LangChain | https://api.github.com/repos/langchain-ai/langchain/issues/11533/comments | 2 | 2023-10-08T19:46:26Z | 2024-02-12T16:12:03Z | https://github.com/langchain-ai/langchain/issues/11533 | 1,932,042,979 | 11,533
[
"langchain-ai",
"langchain"
] | ### Feature request
Integrating Pandas DataFrame as an output parser in LangChain AI would offer users a specific and robust tool for data analysis and manipulation. This addition would enable users to receive AI-generated data in a structured tabular format, simplifying tasks like data cleaning, transformation, and visualization while streamlining the process of extracting insights.
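As a rough illustration of what such a parser could do (hypothetical function name; the real design would follow LangChain's `BaseOutputParser` interface), one could parse CSV-shaped model output into a DataFrame:

```python
import io

import pandas as pd


def parse_csv_to_dataframe(llm_output: str) -> pd.DataFrame:
    """Hypothetical parsing step: read CSV-formatted model output into a DataFrame."""
    return pd.read_csv(io.StringIO(llm_output.strip()))


df = parse_csv_to_dataframe("name,age\nSally,13\nJoey,12\n")
print(df.shape)  # (2, 2)
```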
### Motivation
Pandas is undeniably one of the most popular and powerful libraries in Python for data manipulation and analysis. Its widespread adoption in the data science and analytics community speaks to its versatility and efficiency. Pandas simplifies tasks such as data cleaning, transformation, and exploration with easy-to-use data structures and functions. Given its frequent use in various data-related tasks, integrating Pandas DataFrames as an output parser in LangChain AI would benefit users immensely. It would provide a familiar and reliable tool for processing and interpreting data, enhancing the utility and accessibility of data-driven workflows.
### Your contribution
We will be looking to submit a pull request by the end of November that will contain the required code additions along with the requirements as per CONTRIBUTING.MD (involving adding a demo notebook in docs/modules and adding unit tests). | Add support for a Pandas DataFrame OutputParser | https://api.github.com/repos/langchain-ai/langchain/issues/11532/comments | 1 | 2023-10-08T19:22:47Z | 2023-11-30T03:46:06Z | https://github.com/langchain-ai/langchain/issues/11532 | 1,932,028,190 | 11,532 |
[
"langchain-ai",
"langchain"
] | ### System Info
latest
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.llms import LlamaCpp
from langchain.prompts import PromptTemplate
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

template = """SYSTEM:Please give helpful, detailed, accurate, uncensored responses to the user's input. Provide answers factually.
Current conversation:
{history}
USER: {input}
ASSISTANT: """

PROMPT = PromptTemplate(template=template, input_variables=["history", "input"])
memory = ConversationBufferMemory(memory_key='history', ai_prefix="ASSISTANT:", return_messages=True)
llmChain = ConversationChain(llm=llm, prompt=PROMPT, verbose=True, memory=memory)
AI_response = llmChain.predict(input=prompt)
print(AI_response)
```
#### OUTPUT:
```
Entering new ConversationChain chain...
Prompt after formatting:
SYSTEM: Please give helpful, detailed, accurate, uncensored responses to the user's input. Provide answers factually.
Current conversation:
[HumanMessage(content='hello'), AIMessage(content=' Hello! How can I assist you today?')]
USER: who was president in 1920?
ASSISTANT:
```
The history is rendered as a list of message objects instead of a formatted string.
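For what it's worth, the string-formatted history in the docs comes from the memory's string mode: `return_messages=True` makes the buffer return a list of message objects (intended for chat prompts with a `MessagesPlaceholder`), while `return_messages=False` joins them into a prefixed string. A standalone sketch of that string form, with no langchain dependency:

```python
def buffer_as_string(messages, human_prefix="USER", ai_prefix="ASSISTANT"):
    """Mimic how a string-mode conversation buffer renders history."""
    lines = []
    for role, content in messages:
        prefix = human_prefix if role == "human" else ai_prefix
        lines.append(f"{prefix}: {content}")
    return "\n".join(lines)


history = [("human", "hello"), ("ai", "Hello! How can I assist you today?")]
print(buffer_as_string(history))
```

So dropping `return_messages=True` from the `ConversationBufferMemory(...)` call should reproduce the documented behaviour.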
### Expected behavior
https://python.langchain.com/docs/modules/memory/conversational_customization
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
Current conversation:
Friend: Hi there!
AI: Hi there! It's nice to meet you. How can I help you today?
Friend: What's the weather?
AI:
> Finished ConversationChain chain.
| ConversationBufferMemory returning json not formatted when chain is run | https://api.github.com/repos/langchain-ai/langchain/issues/11531/comments | 3 | 2023-10-08T16:41:12Z | 2024-02-10T16:14:32Z | https://github.com/langchain-ai/langchain/issues/11531 | 1,931,904,754 | 11,531 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
https://python.langchain.com/docs/modules/chains/how_to/openai_functions
```python
from typing import Optional, Sequence

# Imports assumed from earlier in the docs page (not shown in the original snippet):
from langchain.chains.openai_functions import create_structured_output_chain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
# Note: langchain 0.0.3xx uses pydantic v1 internally. Importing BaseModel/Field
# from pydantic v2 directly can trigger exactly this _OutputFormatter
# ValidationError, so import them from the compatibility shim instead:
from langchain.pydantic_v1 import BaseModel, Field

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)  # model name is an assumption


class Person(BaseModel):
    """Identifying information about a person."""

    name: str = Field(..., description="The person's name")
    age: int = Field(..., description="The person's age")
    fav_food: Optional[str] = Field(None, description="The person's favorite food")


prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a world class algorithm for extracting information in structured formats."),
        ("human", "Use the given format to extract information from the following input: {input}"),
        ("human", "Tip: Make sure to answer in the correct format"),
    ]
)


class People(BaseModel):
    """Identifying information about all people in a text."""

    people: Sequence[Person] = Field(..., description="The people in the text")


chain = create_structured_output_chain(People, llm, prompt, verbose=True)
chain.run(
    "Sally is 13, Joey just turned 12 and loves spinach. Caroline is 10 years older than Sally."
)
```
This does not work; it raises the following error:
```
ValidationError: 1 validation error for _OutputFormatter
output
  value is not a valid dict (type=type_error.dict)
```
### Idea or request for content:
_No response_ | documents have error (create_structured_output_chain) | https://api.github.com/repos/langchain-ai/langchain/issues/11524/comments | 4 | 2023-10-08T08:00:40Z | 2024-02-09T16:18:23Z | https://github.com/langchain-ai/langchain/issues/11524 | 1,931,696,800 | 11,524 |
[
"langchain-ai",
"langchain"
] | I am trying to use the `add_texts` method on a previously instanciated Neo4jVector object for setting the node_label.
The scenario is that I am processing different types of text and want to create different nodes accordingly. Because I am handling the database connection somewhere else in my code, I don't want to establish the connection to the database when embedding texts.
```python
[...]
# initialize vector store
self.vector_store = Neo4jVector(
embedding=self.embeddings_model,
username=neo4j_config['USER'],
password=neo4j_config['PASSWORD'],
url=neo4j_config['URI'],
database=neo4j_config['DATABASE'], # neo4j by default
index_name="vector", # vector by default
embedding_node_property="embedding", # embedding by default
create_id_index=True, # True by default
)
[...]
# embed text
created_node = self.vector_store.add_texts(
[text],
metadatas=[
{
"label": [node_label]
}
],
embedding=self.embeddings_model,
node_label=node_label, # Chunk by default
text_node_property=text_node_property, # text by default
)
```
Currently the `node_label` property is overwritten with the default value and not respected when used as a parameter in **kwargs.
https://github.com/langchain-ai/langchain/blob/eb572f41a65c9636b9e0e5a5fb4210a00a67a353/libs/langchain/langchain/vectorstores/neo4j_vector.py#L125
https://github.com/langchain-ai/langchain/blob/eb572f41a65c9636b9e0e5a5fb4210a00a67a353/libs/langchain/langchain/vectorstores/neo4j_vector.py#L450
According to the Docstring, the function is supposed to accept vectorstore specific parameters:
https://github.com/langchain-ai/langchain/blob/eb572f41a65c9636b9e0e5a5fb4210a00a67a353/libs/langchain/langchain/vectorstores/neo4j_vector.py#L458
In order to conform with this, I added this function:
```python
def update_vector_store_properties(self, **kwargs):
# List of vector store-specific arguments we want to check and potentially update
vector_store_args = ["node_label", "text_node_property", "embedding_node_property"]
# Iterate over these arguments
for arg in vector_store_args:
# Check if the argument is present in kwargs
if arg in kwargs:
# Update the corresponding property of the Neo4jVector instance
setattr(self, arg, kwargs[arg])
```
I then modified `add_embeddings` like this:
```python
def add_embeddings(
self,
texts: Iterable[str],
embeddings: List[List[float]],
metadatas: Optional[List[dict]] = None,
ids: Optional[List[str]] = None,
**kwargs: Any,
) -> List[str]:
"""Add embeddings to the vectorstore.
Args:
texts: Iterable of strings to add to the vectorstore.
embeddings: List of list of embedding vectors.
metadatas: List of metadatas associated with the texts.
kwargs: vectorstore specific parameters
"""
if ids is None:
ids = [str(uuid.uuid1()) for _ in texts]
if not metadatas:
metadatas = [{} for _ in texts]
self.update_vector_store_properties(**kwargs)
[...]
```
Don't really know if this is a missing feature or a bug or if I am just using the Vectorstore wrong, but I thought I'd leave this here in case it might be the desired behavior.
| Neo4jVectorstore: Parse vectorstore specific parameters in **kwargs when calling add_embeddings() | https://api.github.com/repos/langchain-ai/langchain/issues/11515/comments | 3 | 2023-10-07T18:29:54Z | 2024-02-07T16:22:08Z | https://github.com/langchain-ai/langchain/issues/11515 | 1,931,456,319 | 11,515 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain 0.0.310
Python 3.11
Windows 10
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Just following the usual steps:
`from langchain.document_loaders import UnstructuredRSTLoader`
### Expected behavior
Expecting a successful import but greeted with
`ImportError: cannot import name 'Document' from 'langchain.schema' ` | Can't import UnstructuredRSTLoader | https://api.github.com/repos/langchain-ai/langchain/issues/11510/comments | 1 | 2023-10-07T11:17:32Z | 2023-10-07T11:33:42Z | https://github.com/langchain-ai/langchain/issues/11510 | 1,931,310,770 | 11,510 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Loaders are designed to load a single source of data and transform it to a list of Documents. If several sources are to be processed, it is up to the developer to call the Loader several times.
The idea is to propose a new `BatchLoader` object that could simplify this task and enable loading tasks to be launched sequentially, multithreaded, with multiple processes, or asynchronously.
Draft idea:
```
class BatchLoader(BaseLoader):
...
def load(self, method: str = 'sequential', max_workers: int=1) -> List[Document]:
        # TODO: make this cleaner
if method == 'thread':
return self._load_thread(max_workers)
elif method == 'process':
return self._load_process(max_workers)
elif method == 'sequential':
return self._load_sequential()
elif method == 'async':
return asyncio.run(self._async_load())
else:
raise ValueError(f'Invalid method {method}')
...
# Call it with a Loader constructor callable and a set of arguments
batch_loader = BatchLoader(TextLoader, {"file_path": [f"file_{i}.txt" for i in range(1000)]})
```
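To make the draft concrete, here is a standalone sketch of what the threaded path could look like (hypothetical helper name; a stub loader replaces `TextLoader` so the snippet runs by itself):

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import chain


class FakeLoader:  # stub standing in for any langchain loader
    def __init__(self, file_path):
        self.file_path = file_path

    def load(self):
        return [f"doc:{self.file_path}"]


def load_thread(loader_cls, arg_sets, max_workers=4):
    """Instantiate one loader per argument set and run load() on a thread pool."""
    def run(kwargs):
        return loader_cls(**kwargs).load()

    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # pool.map preserves input order, so results stay deterministic.
        return list(chain.from_iterable(pool.map(run, arg_sets)))


docs = load_thread(FakeLoader, [{"file_path": f"file_{i}.txt"} for i in range(3)])
print(docs)  # ['doc:file_0.txt', 'doc:file_1.txt', 'doc:file_2.txt']
```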
### Motivation
In most real use cases we need to load several data sources, combine their outputs, and work with them (e.g. store them in a vector store). Some use cases also involve waiting on I/O-intensive tasks or tasks you want parallelized, so having a meta loader that wraps this complexity for you could be a good idea.
### Your contribution
WIP: https://github.com/langchain-ai/langchain/pull/11527 | [Feature] Batch loader | https://api.github.com/repos/langchain-ai/langchain/issues/11509/comments | 3 | 2023-10-07T09:48:11Z | 2024-05-13T16:07:52Z | https://github.com/langchain-ai/langchain/issues/11509 | 1,931,282,259 | 11,509 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I want streaming when calling ChatGPT with n>1. Can you support this?
### Suggestion:
_No response_ | why when streaming only setup n=1 | https://api.github.com/repos/langchain-ai/langchain/issues/11508/comments | 2 | 2023-10-07T04:08:32Z | 2024-02-07T16:22:13Z | https://github.com/langchain-ai/langchain/issues/11508 | 1,931,176,253 | 11,508 |
[
"langchain-ai",
"langchain"
] | ### Feature request
We propose the addition of Google Scholar querying via SerpApi, as Google Scholar is a prominent platform for accessing academic and scholarly articles.
### Motivation
Google Scholar is a crucial resource for researchers, students, and professionals in various fields. Integrating Google Scholar querying with SerpApi would provide users with the ability to programmatically retrieve search results from Google Scholar, giving the LLMs better and more reliable information to work with.
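As a sanity check on feasibility: SerpApi exposes Google Scholar through `engine="google_scholar"`. The sketch below only builds the request parameters (no network call, no `serpapi` import); a real tool would pass them to SerpApi's search client and read the organic results from the JSON response:

```python
def build_scholar_params(query: str, api_key: str, num: int = 10) -> dict:
    """Assemble SerpApi query parameters for a Google Scholar search."""
    return {
        "engine": "google_scholar",  # SerpApi's Google Scholar engine
        "q": query,                  # free-text query; can include search operators
        "num": num,                  # number of results to request
        "api_key": api_key,
    }


params = build_scholar_params("large language models", api_key="YOUR_SERPAPI_KEY")
print(params["engine"])  # google_scholar
```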
### Your contribution
We intend to submit a pull request for this issue at some point in November. | Add support for google scholar querying with serpapi | https://api.github.com/repos/langchain-ai/langchain/issues/11505/comments | 4 | 2023-10-06T23:45:22Z | 2023-11-13T20:13:57Z | https://github.com/langchain-ai/langchain/issues/11505 | 1,931,050,084 | 11,505 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain==0.0305
Mac
python == 3.9.6
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I have a simple web page summarization function that basically follows the example:
```python
llm = OpenAI(temperature=0)
loader = WebBaseLoader(link)
docs = loader.load()
MAP_PROMPT = PromptTemplate(
template="""Write a concise summary of the following web page(keep all identifiable information about the main entities):
{text}
CONCISE SUMMARY:
""",
input_variables=["text"],
)
REDUCE_PROMPT = PromptTemplate(
template="""Based on the following text, write a concise summary(keep all identifiable information about the main entities)):
{text}
CONCISE SUMMARY:
""",
input_variables=["text"],
)
chain = load_summarize_chain(
llm,
chain_type="map_reduce",
return_intermediate_steps=True,
map_prompt=MAP_PROMPT,
combine_prompt=REDUCE_PROMPT,
)
text_splitter = RecursiveCharacterTextSplitter()
docs = text_splitter.split_documents(docs)
if len(docs) == 0:
return ""
result = chain(
text_splitter.split_documents(documents=docs),
return_only_outputs=True,
)
logging.debug(result)
return result["output_text"]
```
My program loaded a PDF URL (https://www.fiserv.com/content/dam/fiserv-ent/final-files/marketing-collateral/case-studies/regions-bank-case-study-0623.pdf).
WebBaseLoader created a single one-page document with a very large number of characters (13,524,539).
I then started to get this OpenAI library error:
```
error_code=rate_limit_exceeded error_message='Rate limit reached for default-text-davinci-003 in organization org-a6hHivOPvKe9X60G8E6YqM5m on tokens_usage_based per min. Limit: 250000 / min. Current: 245411 / min. Contact us through our help center at help.openai.com if you continue to have issues.' error_param=None error_type=tokens_usage_based message='OpenAI API error received' stream_error=False
```
langchain kept retrying on this error, definitely more than the default 6 times; I counted 14 retry logs from langchain in my terminal.
I think this is a quite critical bug: my OpenAI usage blew past the $120 limit in under 10 minutes because of this.
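Two mitigations seem worth noting (hedged: `max_retries` is a parameter on the langchain OpenAI wrapper, and the size threshold below is purely illustrative). The wrapper's `max_retries` argument caps retry attempts, and a size guard before the chain call keeps a pathological 13.5M-character page from ever reaching the API:

```python
def filter_oversized(texts, max_chars=200_000):
    """Drop inputs that are too large to send to the API (threshold is illustrative)."""
    return [t for t in texts if len(t) <= max_chars]


docs = ["short page", "x" * 300_000]
kept = filter_oversized(docs)
print(len(kept))  # 1
```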
### Expected behavior
The retry limit should kick in to short-circuit this situation, or there should be some other method to prevent documents of unexpected length from being passed to OpenAI. | Retry limit is not respected | https://api.github.com/repos/langchain-ai/langchain/issues/11500/comments | 2 | 2023-10-06T21:37:53Z | 2024-02-07T16:22:18Z | https://github.com/langchain-ai/langchain/issues/11500 | 1,930,961,933 | 11,500
[
"langchain-ai",
"langchain"
] | ### Feature request
The Redis Vectorstore `add_documents()` method calls `add_texts()` which embeds documents one by one like:
```
embedding = (
embeddings[i] if embeddings else self._embeddings.embed_query(text)
)
```
The `add_documents()` method would seem to imply that it is a good way to jointly embed and upload a larger list of documents, but using an API call for each document as above is slow and prone to run into rate limits.
Redis `add_documents()` could pass a kwarg `from_documents=True` to `add_texts()`, which would change the embedding to
```
embedding = (
embeddings[i] if embeddings else self._embeddings.embed_documents(documents)
)
```
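The practical difference is one embedding request for the whole batch instead of one per text. A standalone sketch with a stub embedder (no Redis or langchain imports) that counts calls:

```python
class FakeEmbedder:
    """Stub embedder that counts API calls."""

    def __init__(self):
        self.calls = 0

    def embed_query(self, text):
        self.calls += 1
        return [float(len(text))]

    def embed_documents(self, texts):
        self.calls += 1
        return [[float(len(t))] for t in texts]


texts = ["a", "bb", "ccc"]

per_item = FakeEmbedder()
_ = [per_item.embed_query(t) for t in texts]  # current behaviour: N calls
batched = FakeEmbedder()
_ = batched.embed_documents(texts)            # proposed behaviour: 1 call
print(per_item.calls, batched.calls)  # 3 1
```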
### Motivation
This would simplify the overall use of the Redis Vectorstore and would be more inline with how the example documents imply the add_documents() method should be used. Currently `add_documents()` is not suitable for something like a csv or a long list of smaller documents unless it is used like:
```
add_documents(
documents=split_documents,
embeddings=embedder.embed_documents(texts),
)
```
### Your contribution
I could create a PR for this issue if it makes sense! | Redis Vectorestore.add_documents() should use embed_documents() instead of embed_query() | https://api.github.com/repos/langchain-ai/langchain/issues/11496/comments | 3 | 2023-10-06T20:31:55Z | 2024-02-12T16:12:09Z | https://github.com/langchain-ai/langchain/issues/11496 | 1,930,899,139 | 11,496 |
[
"langchain-ai",
"langchain"
] | ### System Info
MacOS, PGVector, TypeScript, NodeJS.
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Hey!
I have been trying to reproduce LangChain's official PGVector documentation without success. For some reason, when I try to use a top-level await around PGVector, it throws an error.
```typescript
const pgvectorStore = await PGVectorStore.initialize(
  new OpenAIEmbeddings(),
  config
);
```
This is the official documentation.
```typescript
export const pgvectorStore = await PGVectorStore.initialize(
  new OpenAIEmbeddings(),
  config
);
```
This is my version.
I'm using TypeScript, and the error recommends changing both the module and target versions inside compilerOptions; I have done both without success.
```
Top-level 'await' expressions are only allowed when the 'module' option is set to 'es2022', 'esnext', 'system', 'node16', or 'nodenext', and the 'target' option is set to 'es2017' or higher.ts(1378)
```
This is the output if I try to run the code:
```
node:internal/process/esm_loader:108
    internalBinding('errors').triggerUncaughtException(
                              ^
Error: Transform failed with 1 error:
/Users/diogo/Documents/www/ai-api/src/store.ts:34:29: ERROR: Top-level await is currently not supported with the "cjs" output format
    at failureErrorWithLog (/Users/diogo/.npm/_npx/fd45a72a545557e9/node_modules/esbuild/lib/main.js:1649:15)
    at /Users/diogo/.npm/_npx/fd45a72a545557e9/node_modules/esbuild/lib/main.js:847:29
    at responseCallbacks.<computed> (/Users/diogo/.npm/_npx/fd45a72a545557e9/node_modules/esbuild/lib/main.js:703:9)
    at handleIncomingPacket (/Users/diogo/.npm/_npx/fd45a72a545557e9/node_modules/esbuild/lib/main.js:762:9)
    at Socket.readFromStdout (/Users/diogo/.npm/_npx/fd45a72a545557e9/node_modules/esbuild/lib/main.js:679:7)
    at Socket.emit (node:events:517:28)
    at addChunk (node:internal/streams/readable:335:12)
    at readableAddChunk (node:internal/streams/readable:308:9)
    at Readable.push (node:internal/streams/readable:245:10)
    at Pipe.onStreamRead (node:internal/stream_base_commons:190:23) {
  errors: [
    {
      detail: undefined,
      id: '',
      location: {
        column: 29,
        file: '/Users/diogo/Documents/www/ai-api/src/store.ts',
        length: 5,
        line: 34,
        lineText: 'export const pgvectorStore = await PGVectorStore.initialize(',
        namespace: '',
        suggestion: ''
      },
      notes: [],
      pluginName: '',
      text: 'Top-level await is currently not supported with the "cjs" output format'
    }
  ],
  warnings: []
}
```
Would appreciate any help.
### Expected behavior
It should not throw an error. | await PGVectorStore throws an ts(1378) error. | https://api.github.com/repos/langchain-ai/langchain/issues/11492/comments | 2 | 2023-10-06T17:37:48Z | 2024-02-06T16:25:41Z | https://github.com/langchain-ai/langchain/issues/11492 | 1,930,691,209 | 11,492 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
The llm loader `load_llm_from_config` defined [here](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/llms/loading.py#L12) only returns `BaseLLM`s. There is no equivalent chat_model loader in langchain to load Chat Model LLMs from a config that I'm aware of.
What do you think of either extending the `load_llm_from_config` such that it returns both `BaseLLM` and `BaseChatModel`
```
load_llm_from_config(config: dict) -> Union[BaseLLM, BaseChatModel]
```
or create a chat_model loader in `langchain.chat_models.loading.py`
```
load_chat_llm_from_config(config: dict) -> BaseChatModel
```
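A possible shape for the second option, mirroring the `_type` registry pattern used in `langchain/llms/loading.py` (stub registry and model class so the sketch runs standalone; the real registry would map type strings to `BaseChatModel` subclasses such as `ChatOpenAI`):

```python
class FakeChatModel:  # stand-in for a BaseChatModel subclass
    def __init__(self, **kwargs):
        self.kwargs = kwargs


CHAT_TYPE_TO_CLS = {"fake-chat": FakeChatModel}


def load_chat_llm_from_config(config: dict):
    """Load a chat model from a config dict keyed by a '_type' discriminator."""
    config = dict(config)  # avoid mutating the caller's dict
    config_type = config.pop("_type", None)
    if config_type not in CHAT_TYPE_TO_CLS:
        raise ValueError(f"Loading {config_type} chat model not supported")
    return CHAT_TYPE_TO_CLS[config_type](**config)


model = load_chat_llm_from_config({"_type": "fake-chat", "temperature": 0})
print(model.kwargs)  # {'temperature': 0}
```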
### Suggestion:
_No response_ | Issue: load_llm_from_config doesn't working with Chat Models | https://api.github.com/repos/langchain-ai/langchain/issues/11485/comments | 1 | 2023-10-06T15:52:46Z | 2023-10-11T23:29:10Z | https://github.com/langchain-ai/langchain/issues/11485 | 1,930,491,541 | 11,485 |
[
"langchain-ai",
"langchain"
] | ### Feature request
The SentimentAnalysis chain is designed to analyze the sentiment of textual data using LangChain. It uses a language model such as OpenAI's GPT-3.5 or GPT-4 to process text inputs and provide sentiment labels and scores. This chain can be used for various applications, including sentiment analysis of product reviews, social media comments, or customer feedback.
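A minimal sketch of the prompt such a chain might wrap (plain string template, no langchain dependency; the real chain would use `PromptTemplate` + `LLMChain` and parse the model's reply):

```python
SENTIMENT_PROMPT = (
    "Classify the sentiment of the following text as positive, negative, or neutral, "
    "and give a confidence score between 0 and 1.\n\n"
    "Text: {text}\n"
    "Sentiment:"
)

rendered = SENTIMENT_PROMPT.format(text="I love this product!")
print(rendered)
```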
### Motivation
To contribute to open-source
### Your contribution
Yes, I am ready with my PR. | Sentimental analysis chain | https://api.github.com/repos/langchain-ai/langchain/issues/11480/comments | 2 | 2023-10-06T14:21:14Z | 2024-02-06T16:25:46Z | https://github.com/langchain-ai/langchain/issues/11480 | 1,930,280,545 | 11,480
[
"langchain-ai",
"langchain"
] | ### System Info
pip 23.2.1 from /usr/local/lib/python3.12/site-packages/pip (python 3.12)
Python 3.12.0 (main, Oct 3 2023, 01:48:15) [GCC 12.2.0] on linux
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
# The issue
When I run either of these:
`pip3 install 'langchain[all]'` or
`pip install langchain`
The command fails with the full stack trace below.
# How to reproduce easily with docker
`docker run --rm -it python:3 pip install langchain`
# Stack Trace
```
Building wheels for collected packages: aiohttp, frozenlist, multidict, yarl
Building wheel for aiohttp (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for aiohttp (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [160 lines of output]
*********************
* Accelerated build *
*********************
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-x86_64-cpython-312
creating build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/web_ws.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/payload_streamer.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/web.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/helpers.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/web_runner.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/pytest_plugin.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/log.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/web_server.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/http.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/http_writer.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/formdata.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/web_exceptions.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/locks.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/http_websocket.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/test_utils.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/client_ws.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/web_urldispatcher.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/web_request.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/web_middlewares.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/client_proto.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/streams.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/tcp_helpers.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/client.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/typedefs.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/resolver.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/cookiejar.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/payload.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/web_fileresponse.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/worker.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/multipart.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/base_protocol.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/http_parser.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/web_protocol.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/web_log.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/web_routedef.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/http_exceptions.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/connector.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/web_app.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/hdrs.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/web_response.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/__init__.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/client_reqrep.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/tracing.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/client_exceptions.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/abc.py -> build/lib.linux-x86_64-cpython-312/aiohttp
running egg_info
writing aiohttp.egg-info/PKG-INFO
writing dependency_links to aiohttp.egg-info/dependency_links.txt
writing requirements to aiohttp.egg-info/requires.txt
writing top-level names to aiohttp.egg-info/top_level.txt
reading manifest file 'aiohttp.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching 'aiohttp' anywhere in distribution
warning: no previously-included files matching '*.pyc' found anywhere in distribution
warning: no previously-included files matching '*.pyd' found anywhere in distribution
warning: no previously-included files matching '*.so' found anywhere in distribution
warning: no previously-included files matching '*.lib' found anywhere in distribution
warning: no previously-included files matching '*.dll' found anywhere in distribution
warning: no previously-included files matching '*.a' found anywhere in distribution
warning: no previously-included files matching '*.obj' found anywhere in distribution
warning: no previously-included files found matching 'aiohttp/*.html'
no previously-included directories found matching 'docs/_build'
adding license file 'LICENSE.txt'
writing manifest file 'aiohttp.egg-info/SOURCES.txt'
copying aiohttp/_cparser.pxd -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/_find_header.pxd -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/_headers.pxi -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/_helpers.pyi -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/_helpers.pyx -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/_http_parser.pyx -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/_http_writer.pyx -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/_websocket.pyx -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/py.typed -> build/lib.linux-x86_64-cpython-312/aiohttp
creating build/lib.linux-x86_64-cpython-312/aiohttp/.hash
copying aiohttp/.hash/_cparser.pxd.hash -> build/lib.linux-x86_64-cpython-312/aiohttp/.hash
copying aiohttp/.hash/_find_header.pxd.hash -> build/lib.linux-x86_64-cpython-312/aiohttp/.hash
copying aiohttp/.hash/_helpers.pyi.hash -> build/lib.linux-x86_64-cpython-312/aiohttp/.hash
copying aiohttp/.hash/_helpers.pyx.hash -> build/lib.linux-x86_64-cpython-312/aiohttp/.hash
copying aiohttp/.hash/_http_parser.pyx.hash -> build/lib.linux-x86_64-cpython-312/aiohttp/.hash
copying aiohttp/.hash/_http_writer.pyx.hash -> build/lib.linux-x86_64-cpython-312/aiohttp/.hash
copying aiohttp/.hash/_websocket.pyx.hash -> build/lib.linux-x86_64-cpython-312/aiohttp/.hash
copying aiohttp/.hash/hdrs.py.hash -> build/lib.linux-x86_64-cpython-312/aiohttp/.hash
running build_ext
building 'aiohttp._websocket' extension
creating build/temp.linux-x86_64-cpython-312
creating build/temp.linux-x86_64-cpython-312/aiohttp
gcc -fno-strict-overflow -Wsign-compare -DNDEBUG -g -O3 -Wall -fPIC -I/usr/local/include/python3.12 -c aiohttp/_websocket.c -o build/temp.linux-x86_64-cpython-312/aiohttp/_websocket.o
aiohttp/_websocket.c: In function ‘__pyx_pf_7aiohttp_10_websocket__websocket_mask_cython’:
aiohttp/_websocket.c:1475:3: warning: ‘Py_OptimizeFlag’ is deprecated [-Wdeprecated-declarations]
1475 | if (unlikely(!Py_OptimizeFlag)) {
| ^~
In file included from /usr/local/include/python3.12/Python.h:48,
from aiohttp/_websocket.c:6:
/usr/local/include/python3.12/cpython/pydebug.h:13:37: note: declared here
13 | Py_DEPRECATED(3.12) PyAPI_DATA(int) Py_OptimizeFlag;
| ^~~~~~~~~~~~~~~
aiohttp/_websocket.c: In function ‘__Pyx_get_tp_dict_version’:
aiohttp/_websocket.c:2680:5: warning: ‘ma_version_tag’ is deprecated [-Wdeprecated-declarations]
2680 | return likely(dict) ? __PYX_GET_DICT_VERSION(dict) : 0;
| ^~~~~~
In file included from /usr/local/include/python3.12/dictobject.h:90,
from /usr/local/include/python3.12/Python.h:61:
/usr/local/include/python3.12/cpython/dictobject.h:22:34: note: declared here
22 | Py_DEPRECATED(3.12) uint64_t ma_version_tag;
| ^~~~~~~~~~~~~~
aiohttp/_websocket.c: In function ‘__Pyx_get_object_dict_version’:
aiohttp/_websocket.c:2692:5: warning: ‘ma_version_tag’ is deprecated [-Wdeprecated-declarations]
2692 | return (dictptr && *dictptr) ? __PYX_GET_DICT_VERSION(*dictptr) : 0;
| ^~~~~~
/usr/local/include/python3.12/cpython/dictobject.h:22:34: note: declared here
22 | Py_DEPRECATED(3.12) uint64_t ma_version_tag;
| ^~~~~~~~~~~~~~
aiohttp/_websocket.c: In function ‘__Pyx_object_dict_version_matches’:
aiohttp/_websocket.c:2696:5: warning: ‘ma_version_tag’ is deprecated [-Wdeprecated-declarations]
2696 | if (unlikely(!dict) || unlikely(tp_dict_version != __PYX_GET_DICT_VERSION(dict)))
| ^~
/usr/local/include/python3.12/cpython/dictobject.h:22:34: note: declared here
22 | Py_DEPRECATED(3.12) uint64_t ma_version_tag;
| ^~~~~~~~~~~~~~
aiohttp/_websocket.c: In function ‘__Pyx_CLineForTraceback’:
aiohttp/_websocket.c:2741:9: warning: ‘ma_version_tag’ is deprecated [-Wdeprecated-declarations]
2741 | __PYX_PY_DICT_LOOKUP_IF_MODIFIED(
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/local/include/python3.12/cpython/dictobject.h:22:34: note: declared here
22 | Py_DEPRECATED(3.12) uint64_t ma_version_tag;
| ^~~~~~~~~~~~~~
aiohttp/_websocket.c:2741:9: warning: ‘ma_version_tag’ is deprecated [-Wdeprecated-declarations]
2741 | __PYX_PY_DICT_LOOKUP_IF_MODIFIED(
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/local/include/python3.12/cpython/dictobject.h:22:34: note: declared here
22 | Py_DEPRECATED(3.12) uint64_t ma_version_tag;
| ^~~~~~~~~~~~~~
aiohttp/_websocket.c: In function ‘__Pyx_PyInt_As_long’:
aiohttp/_websocket.c:3042:53: error: ‘PyLongObject’ {aka ‘struct _longobject’} has no member named ‘ob_digit’
3042 | const digit* digits = ((PyLongObject*)x)->ob_digit;
| ^~
aiohttp/_websocket.c:3097:53: error: ‘PyLongObject’ {aka ‘struct _longobject’} has no member named ‘ob_digit’
3097 | const digit* digits = ((PyLongObject*)x)->ob_digit;
| ^~
aiohttp/_websocket.c: In function ‘__Pyx_PyInt_As_int’:
aiohttp/_websocket.c:3238:53: error: ‘PyLongObject’ {aka ‘struct _longobject’} has no member named ‘ob_digit’
3238 | const digit* digits = ((PyLongObject*)x)->ob_digit;
| ^~
aiohttp/_websocket.c:3293:53: error: ‘PyLongObject’ {aka ‘struct _longobject’} has no member named ‘ob_digit’
3293 | const digit* digits = ((PyLongObject*)x)->ob_digit;
| ^~
aiohttp/_websocket.c: In function ‘__Pyx_PyIndex_AsSsize_t’:
aiohttp/_websocket.c:3744:45: error: ‘PyLongObject’ {aka ‘struct _longobject’} has no member named ‘ob_digit’
3744 | const digit* digits = ((PyLongObject*)b)->ob_digit;
| ^~
error: command '/usr/bin/gcc' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for aiohttp
Building wheel for frozenlist (pyproject.toml) ... done
Created wheel for frozenlist: filename=frozenlist-1.4.0-cp312-cp312-linux_x86_64.whl size=261459 sha256=a163dbc3bcddc5bf23bf228b0774c1c783465a0456dd3ad5da6b3fa065ec3e23
Stored in directory: /root/.cache/pip/wheels/f1/9c/94/9386cb0ea511a93226456388d41d35f1c24ba15a62ffd7b1ef
Building wheel for multidict (pyproject.toml) ... done
Created wheel for multidict: filename=multidict-6.0.4-cp312-cp312-linux_x86_64.whl size=114930 sha256=d6cde2e1fe8812d821a35581f31f1ff4797800ad474bf9ec1407fe5398ba2739
Stored in directory: /root/.cache/pip/wheels/f6/d8/ff/3c14a64b8f2ab1aa94ba2888f5a988be6ab446ec5c8d1a82da
Building wheel for yarl (pyproject.toml) ... done
Created wheel for yarl: filename=yarl-1.9.2-cp312-cp312-linux_x86_64.whl size=285235 sha256=fb299f423d87aae65e64d9a2b450b6540257c0f1a102ee01330a02f746ac791c
Stored in directory: /root/.cache/pip/wheels/84/e3/6a/7d0fa1abee8e4aa39922b5bd54689b4b5e4269b2821f482a32
Successfully built frozenlist multidict yarl
Failed to build aiohttp
ERROR: Could not build wheels for aiohttp, which is required to install pyproject.toml-based projects
```
### Expected behavior
Installation succeeds | Unable to use Python 3.12 due to `aiohttp` and other dependencies not supporting 3.12 yet | https://api.github.com/repos/langchain-ai/langchain/issues/11479/comments | 9 | 2023-10-06T14:05:06Z | 2024-03-09T07:27:40Z | https://github.com/langchain-ai/langchain/issues/11479 | 1,930,250,920 | 11,479 |
[
"langchain-ai",
"langchain"
] | ### System Info
python: 3.11
langchain: 0.0.309
OS: Windows 10
pip list:
Package Version
---------------------------------------- ---------
aiofiles 23.2.1
aiohttp 3.8.5
aiosignal 1.3.1
anyio 3.7.1
async-timeout 4.0.3
asyncer 0.0.2
attrs 23.1.0
auth0-python 4.4.0
backoff 2.2.1
bcrypt 4.0.1
beautifulsoup4 4.12.2
bidict 0.22.1
certifi 2023.7.22
cffi 1.15.1
chardet 5.2.0
charset-normalizer 3.2.0
chroma-hnswlib 0.7.3
chromadb 0.4.10
click 8.1.7
colorama 0.4.6
coloredlogs 15.0.1
cryptography 41.0.3
dataclasses-json 0.5.14
Deprecated 1.2.14
django-environ 0.10.0
docx2txt 0.8
emoji 2.8.0
faiss-cpu 1.7.4
fastapi 0.97.0
fastapi-socketio 0.0.10
filelock 3.12.2
filetype 1.2.0
flatbuffers 23.5.26
frozenlist 1.4.0
fsspec 2023.6.0
google-search-results 2.4.2
googleapis-common-protos 1.60.0
gpt4all 1.0.9
greenlet 2.0.2
grpcio 1.57.0
h11 0.14.0
html2text 2020.1.16
httpcore 0.17.3
httptools 0.6.0
httpx 0.24.1
huggingface-hub 0.16.4
humanfriendly 10.0
idna 3.4
importlib-metadata 6.8.0
importlib-resources 6.0.1
Jinja2 3.1.2
joblib 1.3.2
jsonpatch 1.33
jsonpointer 2.4
langchain 0.0.309
langdetect 1.0.9
langsmith 0.0.43
lxml 4.9.3
markdownify 0.11.6
MarkupSafe 2.1.3
marshmallow 3.20.1
monotonic 1.6
mpmath 1.3.0
multidict 6.0.4
mypy-extensions 1.0.0
nest-asyncio 1.5.7
networkx 3.1
nltk 3.8.1
nodeenv 1.8.0
numexpr 2.8.5
numpy 1.25.2
onnxruntime 1.15.1
openai 0.28.1
openapi-schema-pydantic 1.2.4
opentelemetry-api 1.19.0
opentelemetry-exporter-otlp 1.19.0
opentelemetry-exporter-otlp-proto-common 1.19.0
opentelemetry-exporter-otlp-proto-grpc 1.19.0
opentelemetry-exporter-otlp-proto-http 1.19.0
opentelemetry-instrumentation 0.40b0
opentelemetry-proto 1.19.0
opentelemetry-sdk 1.19.0
opentelemetry-semantic-conventions 0.40b0
overrides 7.4.0
packaging 23.1
pandas 1.5.3
pdf2image 1.16.3
Pillow 10.0.0
pip 23.2.1
playwright 1.37.0
posthog 3.0.2
prisma 0.9.1
protobuf 4.24.1
pulsar-client 3.3.0
pycparser 2.21
pydantic 1.10.12
pyee 9.0.4
PyJWT 2.8.0
pypdf 3.15.5
PyPika 0.48.9
pyreadline3 3.4.1
python-dateutil 2.8.2
python-dotenv 1.0.0
python-engineio 4.5.1
python-graphql-client 0.4.3
python-iso639 2023.6.15
python-magic 0.4.27
python-socketio 5.8.0
pytz 2023.3
PyYAML 6.0.1
regex 2023.8.8
requests 2.31.0
safetensors 0.3.2
scikit-learn 1.3.0
scipy 1.11.2
sentence-transformers 2.2.2
sentencepiece 0.1.99
setuptools 68.0.0
six 1.16.0
sniffio 1.3.0
soupsieve 2.5
SQLAlchemy 1.4.49
starlette 0.27.0
sympy 1.12
syncer 2.0.3
tabulate 0.9.0
tenacity 8.2.3
threadpoolctl 3.2.0
tiktoken 0.5.1
tokenizers 0.13.3
tomli 2.0.1
tomlkit 0.12.1
torch 2.0.1
torchvision 0.15.2
tqdm 4.66.1
transformers 4.31.0
typing_extensions 4.7.1
typing-inspect 0.9.0
tzdata 2023.3
unstructured 0.10.18
uptrace 1.19.0
urllib3 2.0.4
uvicorn 0.22.0
watchfiles 0.19.0
websockets 11.0.3
wheel 0.38.4
wrapt 1.15.0
yarl 1.9.2
zipp 3.16.2
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.chat_models import AzureChatOpenAI
from langchain.chains import create_extraction_chain
# Accessing the OPENAI_API_KEY
import environ
DIR_COMMON = "./common"
env = environ.Env()
environ.Env.read_env(env_file=DIR_COMMON+"/.env_iveco")
# Schema
schema = {
"properties": {
"name": {"type": "string"},
"height": {"type": "integer"},
"hair_color": {"type": "string"},
},
"required": ["name", "height"],
}
# Input
inp = """Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. Claudia is a brunette and Alex is blonde."""
# # Run chain
llm = AzureChatOpenAI(deployment_name="gpt4-datalab",model_name="gpt-4")
chain = create_extraction_chain(schema, llm,verbose= True)
print(chain.run(inp))
```
When I run it, I get:
```
> Entering new LLMChain chain...
Prompt after formatting:
Human: Extract and save the relevant entities mentionedin the following passage together with their properties.
Only extract the properties mentioned in the 'information_extraction' function.
If a property is not present and is not required in the function parameters, do not include it in the output.
Passage:
Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. Claudia is a brunette and Alex is blonde.
Traceback (most recent call last):
File "C:\Sviluppo\python\AI\Iveco\LangChain\extract_1.1.py", line 28, in <module>
print(chain.run(inp))
^^^^^^^^^^^^^^
File "C:\Users\vw603\Anaconda3\envs\python11\Lib\site-packages\langchain\chains\base.py", line 501, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vw603\Anaconda3\envs\python11\Lib\site-packages\langchain\chains\base.py", line 306, in __call__
raise e
File "C:\Users\vw603\Anaconda3\envs\python11\Lib\site-packages\langchain\chains\base.py", line 300, in __call__
self._call(inputs, run_manager=run_manager)
File "C:\Users\vw603\Anaconda3\envs\python11\Lib\site-packages\langchain\chains\llm.py", line 93, in _call
response = self.generate([inputs], run_manager=run_manager)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vw603\Anaconda3\envs\python11\Lib\site-packages\langchain\chains\llm.py", line 103, in generate
return self.llm.generate_prompt(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vw603\Anaconda3\envs\python11\Lib\site-packages\langchain\chat_models\base.py", line 469, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vw603\Anaconda3\envs\python11\Lib\site-packages\langchain\chat_models\base.py", line 359, in generate
raise e
File "C:\Users\vw603\Anaconda3\envs\python11\Lib\site-packages\langchain\chat_models\base.py", line 349, in generate
self._generate_with_cache(
File "C:\Users\vw603\Anaconda3\envs\python11\Lib\site-packages\langchain\chat_models\base.py", line 501, in _generate_with_cache
return self._generate(
^^^^^^^^^^^^^^^
File "C:\Users\vw603\Anaconda3\envs\python11\Lib\site-packages\langchain\chat_models\openai.py", line 345, in _generate
response = self.completion_with_retry(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vw603\Anaconda3\envs\python11\Lib\site-packages\langchain\chat_models\openai.py", line 284, in completion_with_retry
return _completion_with_retry(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vw603\Anaconda3\envs\python11\Lib\site-packages\tenacity\__init__.py", line 289, in wrapped_f
return self(f, *args, **kw)
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vw603\Anaconda3\envs\python11\Lib\site-packages\tenacity\__init__.py", line 379, in __call__
do = self.iter(retry_state=retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vw603\Anaconda3\envs\python11\Lib\site-packages\tenacity\__init__.py", line 314, in iter
return fut.result()
^^^^^^^^^^^^
File "C:\Users\vw603\Anaconda3\envs\python11\Lib\concurrent\futures\_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "C:\Users\vw603\Anaconda3\envs\python11\Lib\concurrent\futures\_base.py", line 401, in __get_result
raise self._exception
File "C:\Users\vw603\Anaconda3\envs\python11\Lib\site-packages\tenacity\__init__.py", line 382, in __call__
result = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\vw603\Anaconda3\envs\python11\Lib\site-packages\langchain\chat_models\openai.py", line 282, in _completion_with_retry
return self.client.create(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vw603\Anaconda3\envs\python11\Lib\site-packages\openai\api_resources\chat_completion.py", line 25, in create
return super().create(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vw603\Anaconda3\envs\python11\Lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 155, in create
response, _, api_key = requestor.request(
^^^^^^^^^^^^^^^^^^
File "C:\Users\vw603\Anaconda3\envs\python11\Lib\site-packages\openai\api_requestor.py", line 299, in request
resp, got_stream = self._interpret_response(result, stream)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vw603\Anaconda3\envs\python11\Lib\site-packages\openai\api_requestor.py", line 710, in _interpret_response
self._interpret_response_line(
File "C:\Users\vw603\Anaconda3\envs\python11\Lib\site-packages\openai\api_requestor.py", line 775, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: Unrecognized request argument supplied: functions
```
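A likely cause of `Unrecognized request argument supplied: functions` on Azure OpenAI is an `api-version` that predates function-calling support. One possible fix (the exact version string below is an assumption — check Azure's documentation for the current value) is to set the environment variable that langchain reads:

```
OPENAI_API_VERSION=2023-07-01-preview
```

`AzureChatOpenAI` also accepts this directly via its `openai_api_version` argument.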
### Expected behavior
```
> Entering new LLMChain chain...
Prompt after formatting:
Human: Extract and save the relevant entities mentionedin the following passage together with their properties.
Only extract the properties mentioned in the 'information_extraction' function.
If a property is not present and is not required in the function parameters, do not include it in the output.
Passage:
Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. Claudia is a brunette and Alex is blonde.
> Finished chain.
[{'name': 'Alex', 'height': 5, 'hair_color'
``` | Fail using langchain extractor with AzureOpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/11478/comments | 8 | 2023-10-06T13:29:21Z | 2024-03-18T16:05:39Z | https://github.com/langchain-ai/langchain/issues/11478 | 1,930,190,172 | 11,478 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.294
### Who can help?
@hwchase17 , @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Build a `SimpleSequentialChain` that includes a `RunnableLambda` among its chains.
It throws: `AttributeError: 'RunnableLambda' object has no attribute 'run'`
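For context, here is a minimal pure-Python illustration of the failure mode (the stub below only mimics the relevant surface — it is not the real langchain class): as far as I can tell, `SimpleSequentialChain` calls `.run()` on each of its chains, while `Runnable` objects only expose methods like `.invoke()`.

```python
# Stub mimicking only the relevant surface of langchain's RunnableLambda
# (illustrative -- not the real implementation).
class FakeRunnableLambda:
    def __init__(self, func):
        self.func = func

    def invoke(self, value):  # Runnables expose invoke(), not run()
        return self.func(value)

step = FakeRunnableLambda(lambda s: s.upper())
print(step.invoke("hello"))  # HELLO

try:
    step.run("hello")  # roughly what SimpleSequentialChain attempts
except AttributeError as err:
    print(err)  # 'FakeRunnableLambda' object has no attribute 'run'
```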
### Expected behavior
A `RunnableLambda` should have a `run` attribute so it can be used inside a chain...
Or rename the concept of Runnable. | 'RunnableLambda' object has no attribute 'run' | https://api.github.com/repos/langchain-ai/langchain/issues/11477/comments | 2 | 2023-10-06T12:19:43Z | 2024-02-09T16:18:34Z | https://github.com/langchain-ai/langchain/issues/11477 | 1,930,053,551 | 11,477 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.279
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
With GPT-3.5 turbo 16k model,
Query - What is universal workflow type?
Thought: Do I need to use a tool? Yes
Action: chat_with_datasource
Action Input: What is universal workflow type?
Observation: The Universal Workflow type represents logic flows in a system. It includes two types: Flows and Subflows.
Thought: Do I need to use a tool? No
raise OutputParserException(f"Could not parse LLM output: `{text}`")
langchain.schema.output_parser.OutputParserException: Could not parse LLM output: `Do I need to use a tool? No`
Analysis -
However, when I checked, I found it is using
Langchain/agents/Conversational/prompt.py
--------------------------------------------------
FORMAT_INSTRUCTIONS = """To use a tool, please use the following format:
```
Thought: Do I need to use a tool? Yes
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
```
When you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:
```
Thought: Do I need to use a tool? No
{ai_prefix}: [Your response here]
```"""
-----------------------------------------------------
So, when the agent does not need to use a tool, it should generate a response after `{ai_prefix}:` (perhaps the previous LLM response or the observation from the last step),
but it is unable to generate any response, which results in the parser error.
However, when I run the same code with GPT-4, I get the response below and the output is parsed correctly, since the LLM generates
`{ai_prefix}: [Your response here]` as per the prompt instruction, while the GPT-3.5 model fails to generate it.
-----------------------------------------------------------
Thought: Do I need to use a tool? Yes
Action: chat_with_datasource
Action Input: What is universal workflow type?
Observation: The Universal Workflow type represents logic flows in a system. It includes two types: Flows and Subflows.
Thought: Do I need to use a tool? No
AI: The Universal Workflow is a system that represents logic flows. It consists of different types such as Flows and Subflows.
------------------------------------------------------------
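One possible mitigation — this is a sketch, not the actual langchain parser — is a more tolerant version of the conversational agent's output parsing that falls back to treating unparseable text as the final answer instead of raising:

```python
import re

AI_PREFIX = "AI"

def parse_agent_output(text: str):
    """Tolerant sketch of the conversational agent's output parsing.

    Returns ("action", tool, tool_input) for tool calls, or
    ("final", answer) otherwise. Unlike the stock parser, unparseable
    text becomes the final answer instead of raising an exception.
    """
    if f"{AI_PREFIX}:" in text:
        return ("final", text.split(f"{AI_PREFIX}:")[-1].strip())
    match = re.search(r"Action: (.*?)[\n]*Action Input: ([\s\S]*)", text)
    if match:
        return ("action", match.group(1).strip(), match.group(2).strip())
    # Fallback: GPT-3.5 sometimes stops after "Do I need to use a tool? No"
    return ("final", text.strip())

print(parse_agent_output("Do I need to use a tool? No\nAI: The answer is 42."))
# ('final', 'The answer is 42.')
print(parse_agent_output("Do I need to use a tool? No"))
# ('final', 'Do I need to use a tool? No')
```

The stock parser (in `langchain/agents/conversational/output_parser.py`, if I read it right) raises `OutputParserException` whenever neither pattern matches; a fallback like the one above would let the GPT-3.5 behavior degrade gracefully.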
### Expected behavior
With the GPT-3.5 model as well, the agent should generate an `AI:` response when it finds that it does not need to use a tool, which would fix the parse error. | Parser Error with Langchain/agents/Conversational/Output_parser.py | https://api.github.com/repos/langchain-ai/langchain/issues/11475/comments | 3 | 2023-10-06T12:08:15Z | 2024-02-08T16:22:05Z | https://github.com/langchain-ai/langchain/issues/11475 | 1,930,035,345 | 11,475
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hello,
I'm trying to change dynamically a prompt, based on the value of the context in a ConversationalRetrievalChain
```
prompt_template = """You are an AI assistant.
Use the following pieces of context to answer the question, in a precise way, at the end.
Context: {context if context else "some default: I don't have information about it"}
Question: {question}
Helpful Answer :"""
QA_PROMPT = PromptTemplate(
template=prompt_template, input_variables=["context", "question"]
)
llm = AzureChatOpenAI(
temperature=0,
deployment_name="gpt-35-turbo",
)
memory = ConversationBufferWindowMemory(
k=1,
output_key='answer',
memory_key='chat_history',
return_messages=True)
retriever = vector.as_retriever(search_type="similarity_score_threshold", search_kwargs={"score_threshold": .70})
qa_chain = ConversationalRetrievalChain.from_llm(llm=llm,
memory=memory,
retriever=retriever,
condense_question_prompt=CONDENSE_QUESTION_PROMPT,
combine_docs_chain_kwargs={'prompt': QA_PROMPT},
return_source_documents=True,
verbose=True)
```
The idea is that when the retriever returns 0 documents, the context should contain something that pushes the model to say "I don't know" instead of making up an answer. I already tried adding "if you don't know the answer, don't make it up" to the prompt, and it didn't work.
Any idea how to add a condition in the template or control the LLM based on the output of the retriever?
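One workaround sketched in plain Python (note: as far as I can tell, `PromptTemplate` does simple variable substitution, so the inline `{context if context else ...}` expression in the template above will not actually be evaluated): resolve the condition before formatting, i.e. build the context string yourself and fall back to a default when the retriever returns nothing. The names `build_context` and `DEFAULT_CONTEXT` below are illustrative, not langchain APIs.

```python
DEFAULT_CONTEXT = "I don't have information about this topic."

def build_context(docs):
    """Join retrieved document texts into one context string, falling
    back to a default that steers the model toward 'I don't know'."""
    if not docs:
        return DEFAULT_CONTEXT
    return "\n\n".join(d.strip() for d in docs)

prompt_template = (
    "You are an AI assistant.\n"
    "Use the following pieces of context to answer the question at the end.\n"
    "Context: {context}\n"
    "Question: {question}\n"
    "Helpful Answer:"
)

# docs would come from retriever.get_relevant_documents(question)
print(prompt_template.format(context=build_context([]), question="What is X?"))
```

In practice you would call the retriever first (e.g. `retriever.get_relevant_documents(query)`), build the context with something like `build_context`, and pass it to the LLM yourself rather than letting the chain do the document stuffing.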
### Suggestion:
_No response_ | Add condition in Prompt based on the retriever | https://api.github.com/repos/langchain-ai/langchain/issues/11474/comments | 12 | 2023-10-06T11:45:34Z | 2024-02-20T16:08:12Z | https://github.com/langchain-ai/langchain/issues/11474 | 1,930,003,162 | 11,474 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I use Qdrant through langchain to store vectors. But I can't find any example in the docs where the dataset is searched against a previously created collection. How do I load the data? I found that Pinecone has a "from_existing_index" function, which probably does this, but Qdrant doesn't have such a function. I created my own solution using qdrant_client, but I would like to use Langchain to simplify the script. How can I do it?
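For reference, the pattern I'd expect to work is constructing the langchain `Qdrant` wrapper directly around an existing client and collection — sketched below as pseudocode, since the constructor's parameter names (`embeddings` vs. `embedding_function`, etc.) vary across langchain versions:

```
# pseudocode -- adjust parameter names to your langchain version
client = QdrantClient(url="http://localhost:6333")   # or path="/local/db"
store  = Qdrant(client=client,
                collection_name="my_collection",
                embeddings=my_embeddings)            # reuses the existing collection
retriever = store.as_retriever()
```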
### Suggestion:
_No response_ | Qdrant - Load the saved vector db/store ? | https://api.github.com/repos/langchain-ai/langchain/issues/11471/comments | 6 | 2023-10-06T10:37:36Z | 2024-01-08T12:29:50Z | https://github.com/langchain-ai/langchain/issues/11471 | 1,929,887,356 | 11,471 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I am using Weaviate with ConversationalRetrievalChain
I am using `similarity_score_threshold` search type.
It will call `similarity_search_with_score` in `vectorstore/weaviate.py`
Currently it's written like this:
```python
if not self._by_text:
    vector = {"vector": embedded_query}
    result = (
        query_obj.with_near_vector(vector)
        .with_limit(k)
        .with_additional("vector")
        .do()
    )
else:
    result = (
        query_obj.with_near_text(content)
        .with_limit(k)
        .with_additional("vector")
        .do()
    )
```
The `with_additional` function can accept a list of `AdditionalProperties`, so there is no point in hard-coding it to just the vector; let the user pass the additional arguments they need.
I've tested this just by changing `.with_additional("vector")` to `.with_additional(["vector", "certainty"])` and it works just fine.
### Motivation
I am using the conversational chain just because it is easier to manage than creating my own, but its limitations are apparent: I want to retrieve the scores it gets, which is currently not possible.
### Your contribution
I don't know how to pass the `_additional` arguments in from `vectorstore.as_retriever()`, but with the args I want hard-coded, the tests worked fine. | Vectorstore and ConversationalRetrievalChain should return the certainty of the relevance | https://api.github.com/repos/langchain-ai/langchain/issues/11470/comments | 1 | 2023-10-06T10:29:22Z | 2024-02-06T16:26:01Z | https://github.com/langchain-ai/langchain/issues/11470 | 1,929,874,506 | 11,470
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
The first code example following the Pydantic section, right after "Lets define a class with attributes annotated with types.",
results for me in the following pydantic error when using gpt-4:
RuntimeError: no validator found for <class '__main__.Properties'>, see `arbitrary_types_allowed` in Config
The following versions of langchain and pydantic were used:
langchain==0.0.309
pydantic==2.4.2
pydantic_core==2.10.1
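A workaround that is commonly suggested for langchain versions of this era (I have not verified it against 0.0.309 specifically, so treat it as an assumption) is to pin pydantic to v1 in the requirements, since many chains still expect pydantic v1 models:

```
pydantic>=1.10,<2
```

Alternatively, defining the schema class via `langchain.pydantic_v1` (which re-exports the bundled v1 API) may avoid the version mismatch.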
### Idea or request for content:
It would be nice to have the code updated so that it works out of the box.
Perhaps a reference to the extraction functions in the 'openai functions' section would be nice. | DOC: pydantic extraction example code not working | https://api.github.com/repos/langchain-ai/langchain/issues/11468/comments | 2 | 2023-10-06T09:27:55Z | 2024-02-10T16:14:42Z | https://github.com/langchain-ai/langchain/issues/11468 | 1,929,774,144 | 11,468 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain latest version: 0.0.161
"mammoth": "^1.6.0",
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Creating the blob data with fetch:
```js
const blobData = await fetch(totalFiles[i].url).then((res) => res.blob());
```
After this, use the blob data in:
```js
const loader = new DocxLoader(blobData);
```
But when I hit the next step:
```js
const data = await loader.load();
```
it gives me an error:
```
Error: Could not find file in options
    at Object.openZip (unzip.js:10:1)
    at extractRawText (index.js:82:1)
    at DocxLoader.parse (docx.js:25:1)
    at async langChainInitialization (index.js:88:20)
    at async handleRefDocUpload (index.js:136:9)
```
### Expected behavior
It should load the document. | DOCX loader is not working properly in js | https://api.github.com/repos/langchain-ai/langchain/issues/11466/comments | 3 | 2023-10-06T07:22:12Z | 2024-02-08T16:22:16Z | https://github.com/langchain-ai/langchain/issues/11466 | 1,929,588,339 | 11,466 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version : 0.0.292
Python version: 3.9.13
Platform: Apple M1, Sonoma 14.0
### Who can help?
@eyurtsev @hwc
### Information
- [x] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce:
1. Have a google drive folder with multiple documents with formats like`.mp4`, `.md`, `google docs`, `google sheets` etc.
2. Try using `from langchain.document_loaders import GoogleDriveLoader`, it doesn't work as it only supports google docs, google sheets and pdf files.
```
folder_id = "0ABqitCqK0S_MUk9PVA" # have your own folder id, this is an example
loader = GoogleDriveLoader(
gdrive_api_file=GOOGLE_ACCOUNT_FILE,
folder_id= folder_id,
recursive=True,
# num_results=2, # Maximum number of file to load
)
```
3. Try using `from langchain.document_loaders import UnstructuredFileIOLoader` to load
```
folder_id = "0ABqitCqK0S_MUk9PVA" # have your own folder id, this is an example
loader = GoogleDriveLoader(
service_account_key=GOOGLE_ACCOUNT_FILE,
folder_id=folder_id,
file_loader_cls=UnstructuredFileIOLoader,
file_loader_kwargs={"mode": "elements"},
recursive=True
)
```
4. It throws the following error because `.mp4` files are not supported
```
The MIME type is 'video/mp4'. This file type is not currently supported in unstructured.
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Input In [34], in <cell line: 3>()
      1 start_time = time.time()
----> 3 docs = loader.load()
      5 print("Total docs loaded", len(docs))
      7 end_time = time.time() - start_time
File /opt/homebrew/Caskroom/miniforge/base/envs/py39/lib/python3.9/site-packages/langchain/document_loaders/googledrive.py:351, in GoogleDriveLoader.load(self)
    349 """Load documents."""
    350 if self.folder_id:
--> 351     return self._load_documents_from_folder(
    352         self.folder_id, file_types=self.file_types
    353     )
    354 elif self.document_ids:
    355     return self._load_documents_from_ids()
File /opt/homebrew/Caskroom/miniforge/base/envs/py39/lib/python3.9/site-packages/langchain/document_loaders/googledrive.py:257, in GoogleDriveLoader._load_documents_from_folder(self, folder_id, file_types)
    252     returns.extend(self._load_sheet_from_id(file["id"]))  # type: ignore
    253 elif (
    254     file["mimeType"] == "application/pdf"
    255     or self.file_loader_cls is not None
    256 ):
--> 257     returns.extend(self._load_file_from_id(file["id"]))  # type: ignore
    258 else:
    259     pass
File /opt/homebrew/Caskroom/miniforge/base/envs/py39/lib/python3.9/site-packages/langchain/document_loaders/googledrive.py:316, in GoogleDriveLoader._load_file_from_id(self, id)
    314 fh.seek(0)
    315 loader = self.file_loader_cls(file=fh, **self.file_loader_kwargs)
--> 316 docs = loader.load()
    317 for doc in docs:
    318     doc.metadata["source"] = f"https://drive.google.com/file/d/{id}/view"
File /opt/homebrew/Caskroom/miniforge/base/envs/py39/lib/python3.9/site-packages/langchain/document_loaders/unstructured.py:86, in UnstructuredBaseLoader.load(self)
     84 def load(self) -> List[Document]:
     85     """Load file."""
---> 86     elements = self._get_elements()
     87     self._post_process_elements(elements)
     88     if self.mode == "elements":
File /opt/homebrew/Caskroom/miniforge/base/envs/py39/lib/python3.9/site-packages/langchain/document_loaders/unstructured.py:319, in UnstructuredFileIOLoader._get_elements(self)
    316 def _get_elements(self) -> List:
    317     from unstructured.partition.auto import partition
--> 319     return partition(file=self.file, **self.unstructured_kwargs)
File /opt/homebrew/Caskroom/miniforge/base/envs/py39/lib/python3.9/site-packages/unstructured/partition/auto.py:183, in partition(filename, content_type, file, file_filename, url, include_page_breaks, strategy, encoding, paragraph_grouper, headers, ssl_verify, ocr_languages, pdf_infer_table_structure)
    181 else:
    182     msg = "Invalid file" if not filename else f"Invalid file (unknown)"
--> 183     raise ValueError(f"{msg}. The {filetype} file type is not supported in partition.")
    185 for element in elements:
    186     element.metadata.url = url
ValueError: Invalid file. The FileType.UNK file type is not supported in partition.
```
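Until the loader handles this itself, one workaround is to wrap each per-file `load()` call and swallow the `ValueError` that `unstructured` raises for unknown file types. A hedged sketch — the helper name and loader interface are illustrative, not LangChain API:

```python
from typing import Iterable, List


def load_skipping_unsupported(loaders: Iterable) -> List:
    """Collect documents from several loaders, skipping any loader whose
    file type is unsupported instead of aborting the whole run."""
    docs: List = []
    for loader in loaders:
        try:
            docs.extend(loader.load())
        except ValueError as err:  # unstructured raises ValueError for unknown types
            print(f"Skipping unsupported file: {err}")
    return docs
```

The same try/except could in principle be applied inside `_load_documents_from_folder`, which is where the traceback above originates.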
### Expected behavior
I would expect this error to be handled by the loader: simply ignore the unsupported files and load the rest. It could emit a warning to notify the user, but it should not break the code. | The MIME type is 'video/mp4'. This file type is not currently supported in unstructured. | https://api.github.com/repos/langchain-ai/langchain/issues/11464/comments | 2 | 2023-10-06T03:04:26Z | 2024-02-06T16:26:17Z | https://github.com/langchain-ai/langchain/issues/11464 | 1,929,358,704 | 11,464
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
we are following https://python.langchain.com/docs/modules/agents/how_to/custom_llm_agent however, by default it's using one input only, how to make it accept multiple inputs same as STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, thanks.
### Suggestion:
_No response_ | Issue: how to use multiple inputs with custom llm agent | https://api.github.com/repos/langchain-ai/langchain/issues/11460/comments | 2 | 2023-10-06T01:08:12Z | 2024-01-31T23:38:04Z | https://github.com/langchain-ai/langchain/issues/11460 | 1,929,282,468 | 11,460 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain = "^0.0.304"
Python 3.11.5
MacBook Pro, Apple M2 chip, 8GB memory
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.document_loaders import OnlinePDFLoader
OnlinePDFLoader("https://www.africau.edu/images/default/sample.pdf").load()
```
I saw the following runtime import error:
```
ImportError: cannot import name 'PDFResourceManager' from 'pdfminer.converter' (/Users/{user}/Library/Caches/pypoetry/virtualenvs/replai-19tDtF2f-py3.11/lib/python3.11/site-packages/pdfminer/converter.py)
```
### Expected behavior
I expected `OnlinePDFLoader` to load the file. | OnlinePDFLoader crashes with import error | https://api.github.com/repos/langchain-ai/langchain/issues/11459/comments | 4 | 2023-10-05T23:46:16Z | 2024-05-20T18:23:03Z | https://github.com/langchain-ai/langchain/issues/11459 | 1,929,226,840 | 11,459 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain: 0.0.250
Python: 3.11.4
OS: Pop 22.04
### Who can help?
@agola11
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
**Input**
```py
from pathlib import Path
from langchain.document_loaders.blob_loaders.schema import Blob
from langchain.document_loaders.parsers.audio import OpenAIWhisperParser
from pydub import AudioSegment
OPENAI_API_KEY = "<MY-KEY>"
audio_file = Path("~/Downloads/audio.mp3").expanduser()
audio_total_seconds = AudioSegment.from_file(audio_file).duration_seconds
print("Audio length (seconds):", audio_total_seconds)
blob = Blob.from_path(audio_file, mime_type="audio/mp3")
parser = OpenAIWhisperParser(api_key=OPENAI_API_KEY)
parser.parse(blob)
```
**Output**
```
Audio length (seconds): 1200.064
Transcribing part 1!
Transcribing part 2!
Attempt 1 failed. Exception: Audio file is too short. Minimum audio length is 0.1 seconds.
Attempt 2 failed. Exception: Invalid file format. Supported formats: ['flac', 'm4a', 'mp3', 'mp4', 'mpeg', 'mpga', 'oga', 'ogg', 'wav', 'webm']
Attempt 3 failed. Exception: Invalid file format. Supported formats: ['flac', 'm4a', 'mp3', 'mp4', 'mpeg', 'mpga', 'oga', 'ogg', 'wav', 'webm']
Failed to transcribe after 3 attempts.
```
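The arithmetic behind the failure is easy to reproduce: a 1200.064-second file split into 20-minute (1200 s) chunks leaves a ~0.064 s tail, which is below OpenAI's 0.1 s minimum. A quick sketch:

```python
audio_seconds = 1200.064
chunk_seconds = 20 * 60  # OpenAIWhisperParser splits audio into 20-minute chunks

chunks = []
start = 0.0
while start < audio_seconds:
    chunks.append(min(chunk_seconds, audio_seconds - start))
    start += chunk_seconds

print(chunks)  # the tail chunk is shorter than the 0.1 s minimum
```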
### Expected behavior
Due to its internal rule of processing audio in [20-minute chunks](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/document_loaders/parsers/audio.py#L47), `OpenAIWhisperParser` is prone to crashing when transcribing audio with durations that are dangerously close to this limit. Given that OpenAI already has a very low audio length threshold of 0.1 seconds, a simple bypass could effectively resolve this issue.
```python
# https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/document_loaders/parsers/audio.py#L47
chunk_duration = 20
chunk_duration_ms = chunk_duration * 60 * 1000
chunk_length_threshold = 0.1
# Split the audio into chunk_duration_ms chunks
for split_number, i in enumerate(range(0, len(audio), chunk_duration_ms)):
# Audio chunk
chunk = audio[i : i + chunk_duration_ms]
if chunk.duration_seconds < chunk_length_threshold:
continue
``` | `OpenAIWhisperParser` raises error if audio has duration too close to the chunk limit | https://api.github.com/repos/langchain-ai/langchain/issues/11449/comments | 4 | 2023-10-05T19:44:40Z | 2024-02-23T18:09:09Z | https://github.com/langchain-ai/langchain/issues/11449 | 1,928,949,307 | 11,449 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
When I run `from langchain.chains import LLMChain`, I get the following error:
```
RuntimeError: no validator found for <class 're.Pattern'>, see `arbitrary_types_allowed` in Config
```
<img width="887" alt="Screenshot 2023-10-06 at 12 01 49 AM" src="https://github.com/langchain-ai/langchain/assets/85587494/0b171391-bbc6-41f8-b237-17bc99b11b95">
<img width="871" alt="Screenshot 2023-10-06 at 12 01 58 AM" src="https://github.com/langchain-ai/langchain/assets/85587494/592c2688-be0f-4cf4-8e7d-a4a56d7d056c">
### Suggestion:
_No response_ | Error while importing LLMChain | https://api.github.com/repos/langchain-ai/langchain/issues/11448/comments | 3 | 2023-10-05T18:32:29Z | 2024-02-10T16:14:47Z | https://github.com/langchain-ai/langchain/issues/11448 | 1,928,839,177 | 11,448 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I'm trying to make a general chat bot when ever I try to get it to write code I get [ERROR] OutputParserException: Could not parse LLM output:
is there a way to parse all parse of output?
Any help would be great.
### Suggestion:
_No response_ | Issue: Trying to be able to parse all parse of LLM outputs | https://api.github.com/repos/langchain-ai/langchain/issues/11447/comments | 3 | 2023-10-05T17:10:23Z | 2024-02-07T16:22:53Z | https://github.com/langchain-ai/langchain/issues/11447 | 1,928,727,245 | 11,447 |
[
"langchain-ai",
"langchain"
] | ### System Info
**Specs:**
langchain 0.0.301
Python 3.11.4
embeddings = HuggingFace embeddings
llm = Claud 2
EXAMPLE:
1. Chunks object below in my code contains the following string: leflunomide **(LEF) (≤ 20 mg/day)**
`Chroma.from_documents(documents=chunks, embedding=embeddings, collection_name=collection_name, persist_directory=persist_db) `
2. after saving and retrieving from my local file with:
`db = Chroma(persist_directory=persist_db, embedding_function=embeddings, collection_name=collection_name)`
. . . then extracting with . . .
`db.get(include=['documents'])`
3. That string is now: leflunomide **(LEF) ( ≤20 mg/day)**, with a single space newly inserted before the ≤
This matters because it messes up retrieval augmentation with queries like “what doses of leflunomide are appropriate?” using Claude-2 as the llm
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Chunks object below in my code contains the following string: leflunomide **(LEF) (≤ 20 mg/day)**
`Chroma.from_documents(documents=chunks, embedding=embeddings, collection_name=collection_name, persist_directory=persist_db) `
2. after saving and retrieving from my local file with:
`db = Chroma(persist_directory=persist_db, embedding_function=embeddings, collection_name=collection_name)`
. . . then extracting with . . .
`db.get(include=['documents'])`
3. That string is now: leflunomide **(LEF) ( ≤20 mg/day)**, with a single space newly inserted before the ≤
This matters because it messes up retrieval augmentation with queries like “what doses of leflunomide are appropriate?” using Claude-2 as the llm
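One client-side mitigation (not a fix for the store behavior itself) is to normalize whitespace around `≤` in both the stored text and the query before comparison — a hedged sketch:

```python
import re


def normalize_leq_spacing(text: str) -> str:
    """Collapse any whitespace around '≤' into one canonical form."""
    return re.sub(r"\s*≤\s*", " ≤ ", text)


stored = "leflunomide (LEF) ( ≤20 mg/day)"    # what Chroma returned
original = "leflunomide (LEF) (≤ 20 mg/day)"  # what was ingested

assert normalize_leq_spacing(stored) == normalize_leq_spacing(original)
```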
### Expected behavior
See description above | Chroma is adding an extra space (very consistently) in front of '≤ ’ characters and it has a real impact. | https://api.github.com/repos/langchain-ai/langchain/issues/11441/comments | 7 | 2023-10-05T16:00:27Z | 2024-02-12T16:12:14Z | https://github.com/langchain-ai/langchain/issues/11441 | 1,928,619,715 | 11,441 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
_No response_
### Suggestion:
_No response_ | Issue: Is it possible to use a HF LLM using Conversational Retrieval Agent? | https://api.github.com/repos/langchain-ai/langchain/issues/11440/comments | 3 | 2023-10-05T15:54:16Z | 2024-02-10T16:14:57Z | https://github.com/langchain-ai/langchain/issues/11440 | 1,928,609,140 | 11,440 |
[
"langchain-ai",
"langchain"
] | Hello there,
I have deployed an OpenLLM on a managed service that is protected by the use of an auth Bearer token:
```bash
curl -X 'POST' \
'https://themodel.url/v1/generate' \
-H "Authorization: Bearer Sh6Kh4[ . . . super long bearer token ]W18UiWuzsz+0r+U"
-H 'accept: application/json' \
-H 'Content-Type: application/json' \
-d '{
"prompt": "What is the difference between a pigeon",
"llm_config": {
"use[ . . . and so on]
}
}'
```
Curl works like a charm.
In LangChain, I try to create my new llm as such:
```python
llm = OpenLLM(server_url="https://themodel.url/v1/generate", temperature=0.2)
```
But I don't know how to include this Bearer token, I even suspect it is impossible...
**Bot seems to confirm this is impossible as of 5 Oct 2023, as seen in https://github.com/langchain-ai/langchain/discussions/11437**; this issue would then be a feature request.
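Until such a parameter exists, a stopgap is to call the endpoint directly with the header attached. A standard-library sketch — the URL and payload shape are taken from the curl example above, and the token is a placeholder:

```python
import json
import urllib.request

token = "Sh6Kh4...W18UiWuzsz+0r+U"  # your bearer token
payload = {"prompt": "What is the difference between a pigeon", "llm_config": {}}

req = urllib.request.Request(
    "https://themodel.url/v1/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
        "accept": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would actually send the request; omitted here.
```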
Thanks!
Cheers,
Colin
_Originally posted by @ColinLeverger in https://github.com/langchain-ai/langchain/discussions/11437_ | Bearer token support for self-deployed OpenLLM | https://api.github.com/repos/langchain-ai/langchain/issues/11438/comments | 4 | 2023-10-05T15:09:13Z | 2024-02-10T16:15:02Z | https://github.com/langchain-ai/langchain/issues/11438 | 1,928,525,648 | 11,438 |
[
"langchain-ai",
"langchain"
] | ### System Info
python: 3.11
langchain: 0.0.306
### Who can help?
Although PGVector implementation utilizes SQLAlchemy that supports connection pooling features, it gets one connection from the pool, saves it as a class attribute and reuses in all requests.
```python
class PGVector(VectorStore):
# ...
def __init__(
self,
# ...
) -> None:
# ...
self.__post_init__()
def __post_init__(
self,
) -> None:
self._conn = self.connect()
# ...
def create_vector_extension(self) -> None:
try:
with Session(self._conn) as session:
# ...
except Exception as e:
self.logger.exception(e)
def create_tables_if_not_exists(self) -> None:
with self._conn.begin():
Base.metadata.create_all(self._conn)
@contextlib.contextmanager
def _make_session(self) -> Generator[Session, None, None]:
yield Session(self._conn)
# ...
```
This means that:
- Since the same connection is used to handle all requests, they will not be executed in parallel but rather enqueued. In a scenario with a large number of requests, this may not scale well.
- Great pool features like connection recycling and liveness testing cannot be used.
@hwchase17
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Configure Postgres to kill idle sessions after 10 seconds (I'm using the `ankane/pgvector` docker image):
```sql
ALTER SYSTEM SET idle_session_timeout TO '10000';
ALTER SYSTEM SET idle_in_transaction_session_timeout TO '10000';
```
2. Restart the database
3. Confirm the new settings are active
```sql
SHOW ALL;
```
4. Create a sample program that instantiate a `PGVector` class and search by similarity every 30 seconds, sleeping between each iteration
```python
import logging
import os
import time

from dotenv import load_dotenv
from langchain.document_loaders import TextLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores.pgvector import PGVector

if __name__ == '__main__':
# Logging level
logging.basicConfig(level=logging.DEBUG)
load_dotenv()
# PG Vector
driver = os.environ["PGVECTOR_DRIVER"]
host = os.environ["PGVECTOR_HOST"]
port = os.environ["PGVECTOR_PORT"]
database = os.environ["PGVECTOR_DATABASE"]
user = os.environ["PGVECTOR_USER"]
password = os.environ["PGVECTOR_PASSWORD"]
collection_name = "state_of_the_union_test"
connection_string = f'postgresql+{driver}://{user}:{password}@{host}:{port}/{database}'
embeddings = OpenAIEmbeddings()
loader = TextLoader("./state_of_the_union.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
chunks = text_splitter.split_documents(documents)
db = PGVector.from_documents(
embedding=embeddings,
documents=chunks,
collection_name=collection_name,
connection_string=connection_string,
)
for i in range(100):
try:
print('--> *** Searching by similarity ***')
result = db.similarity_search_with_score("foo")
print('--> *** Documents retrieved successfully ***')
# for doc, score in result:
# print(f'{score}:{doc.page_content}')
except Exception as e:
print('--> *** Fail ***')
print(str(e))
print('--> *** Sleeping ***')
time.sleep(30)
print('\n\n')
```
5. Confirm in the console output that some requests fail because the idle connection was closed
```
(poc-pgvector-py3.11) ➜ poc-pgvector python -m poc_pgvector
DEBUG:openai:message='Request to OpenAI API' method=post path=https://.../v1/embeddings
DEBUG:urllib3.util.retry:Converted retries value: 2 -> Retry(total=2, connect=None, read=None, redirect=None, status=None)
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): ...:443
DEBUG:urllib3.connectionpool:https://...:443 "POST /v1/embeddings HTTP/1.1" 200 None
DEBUG:openai:message='OpenAI API response' path=https://.../v1/embeddings processing_ms=211 request_id=ec4ef62b-da79-4055-a8da-66fce881a6e9 response_code=200
--> *** Searching by similarity ***
DEBUG:openai:message='Request to OpenAI API' method=post path=https://.../v1/embeddings
DEBUG:openai:api_version=None data='{"input": [[8134]], "model": "text-embedding-ada-002", "encoding_format": "base64"}' message='Post details'
DEBUG:urllib3.connectionpool:https://...:443 "POST /v1/embeddings HTTP/1.1" 200 None
DEBUG:openai:message='OpenAI API response' path=https://.../v1/embeddings processing_ms=40 request_id=f2b791ac-0c88-4362-b14c-d6a36084a932 response_code=200
--> *** Documents retrieved successfully ***
--> *** Sleeping ***
--> *** Searching by similarity ***
DEBUG:openai:message='Request to OpenAI API' method=post path=https://.../v1/embeddings
DEBUG:openai:api_version=None data='{"input": [[8134]], "model": "text-embedding-ada-002", "encoding_format": "base64"}' message='Post details'
DEBUG:urllib3.connectionpool:https://...:443 "POST /v1/embeddings HTTP/1.1" 200 None
DEBUG:openai:message='OpenAI API response' path=https://.../v1/embeddings processing_ms=30 request_id=de593b87-29ad-4342-81c1-3c495d5b4f56 response_code=200
--> *** Fail ***
(psycopg2.OperationalError) server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
[SQL: SELECT langchain_pg_collection.name AS langchain_pg_collection_name, langchain_pg_collection.cmetadata AS langchain_pg_collection_cmetadata, langchain_pg_collection.uuid AS langchain_pg_collection_uuid
FROM langchain_pg_collection
WHERE langchain_pg_collection.name = %(name_1)s
LIMIT %(param_1)s]
[parameters: {'name_1': 'state_of_the_union_test', 'param_1': 1}]
(Background on this error at: https://sqlalche.me/e/20/e3q8)
--> *** Sleeping ***
--> *** Searching by similarity ***
DEBUG:openai:message='Request to OpenAI API' method=post path=https://.../v1/embeddings
DEBUG:openai:api_version=None data='{"input": [[8134]], "model": "text-embedding-ada-002", "encoding_format": "base64"}' message='Post details'
DEBUG:urllib3.connectionpool:https://...:443 "POST /v1/embeddings HTTP/1.1" 200 None
DEBUG:openai:message='OpenAI API response' path=https://.../v1/embeddings processing_ms=42 request_id=e2cb6145-8ceb-4127-b4ee-c5130ba42e8e response_code=200
--> *** Documents retrieved successfully ***
--> *** Sleeping ***
--> *** Searching by similarity ***
DEBUG:openai:message='Request to OpenAI API' method=post path=https://.../v1/embeddings
DEBUG:openai:api_version=None data='{"input": [[8134]], "model": "text-embedding-ada-002", "encoding_format": "base64"}' message='Post details'
DEBUG:urllib3.connectionpool:https://...:443 "POST /v1/embeddings HTTP/1.1" 200 None
DEBUG:openai:message='OpenAI API response' path=https://.../v1/embeddings processing_ms=38 request_id=05e15050-0c37-4f7c-a118-a3fafc2bf979 response_code=200
--> *** Fail ***
(psycopg2.OperationalError) server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
[SQL: SELECT langchain_pg_collection.name AS langchain_pg_collection_name, langchain_pg_collection.cmetadata AS langchain_pg_collection_cmetadata, langchain_pg_collection.uuid AS langchain_pg_collection_uuid
FROM langchain_pg_collection
WHERE langchain_pg_collection.name = %(name_1)s
LIMIT %(param_1)s]
[parameters: {'name_1': 'state_of_the_union_test', 'param_1': 1}]
(Background on this error at: https://sqlalche.me/e/20/e3q8)
--> *** Sleeping ***
--> *** Searching by similarity ***
DEBUG:openai:message='Request to OpenAI API' method=post path=https://.../v1/embeddings
DEBUG:openai:api_version=None data='{"input": [[8134]], "model": "text-embedding-ada-002", "encoding_format": "base64"}' message='Post details'
DEBUG:urllib3.connectionpool:https://...:443 "POST /v1/embeddings HTTP/1.1" 200 None
DEBUG:openai:message='OpenAI API response' path=https://.../v1/embeddings processing_ms=28 request_id=2deecd8b-42de-42dd-a242-004d9d5811eb response_code=200
--> *** Documents retrieved successfully ***
--> *** Sleeping ***
...
```
### Expected behavior
- Use pooled connections to make feasible executing requests in parallel.
- Allow fine-tuning the connection pool used by PGVector, like configuring its maximum size, the number of seconds to wait before giving up on getting a connection from the pool, and enabling [pessimistic](https://docs.sqlalchemy.org/en/20/core/pooling.html#pool-disconnects-pessimistic) (liveness-testing connections when borrowing them from the pool) or optimistic (connection recycle time) strategies for disconnect handling.
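The pessimistic strategy can be illustrated with a tiny standard-library-only pool, using sqlite3 as a stand-in for Postgres (SQLAlchemy's `create_engine(..., pool_pre_ping=True, pool_recycle=...)` provides the real thing):

```python
import queue
import sqlite3


class TinyPool:
    """Minimal connection pool with a liveness check on checkout
    (the 'pessimistic' strategy SQLAlchemy calls pool_pre_ping)."""

    def __init__(self, db_path: str, size: int = 2):
        self._db_path = db_path
        self._pool: "queue.Queue[sqlite3.Connection]" = queue.Queue()
        for _ in range(size):
            self._pool.put(sqlite3.connect(db_path, check_same_thread=False))

    def checkout(self) -> sqlite3.Connection:
        conn = self._pool.get()
        try:
            conn.execute("SELECT 1")  # pre-ping: verify the connection is alive
        except sqlite3.Error:
            # replace a connection the server (or a timeout) has killed
            conn = sqlite3.connect(self._db_path, check_same_thread=False)
        return conn

    def checkin(self, conn: sqlite3.Connection) -> None:
        self._pool.put(conn)
```

With pre-ping in place, the `idle_session_timeout` scenario from the reproduction would transparently replace the killed connection instead of failing the request.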
| PGVector bound to a single database connection | https://api.github.com/repos/langchain-ai/langchain/issues/11433/comments | 7 | 2023-10-05T12:52:39Z | 2024-05-11T16:07:07Z | https://github.com/langchain-ai/langchain/issues/11433 | 1,928,223,846 | 11,433 |
[
"langchain-ai",
"langchain"
] | ### System Info
Windows 11
Python 3.10
Langchain 0.0.286
Pyinstaller 5.7.0
File "test1.py", line 5, in <module>
File "langchain\document_loaders\unstructured.py", line 86, in load
File "langchain\document_loaders\pdf.py", line 57, in _get_elements
File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
File "<frozen importlib._bootstrap>", line 1149, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 499, in exec_module
File "unstructured\partition\pdf.py", line 40, in <module>
File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
File "<frozen importlib._bootstrap>", line 1149, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 499, in exec_module
File "unstructured\partition\lang.py", line 3, in <module>
File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
File "<frozen importlib._bootstrap>", line 1149, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 499, in exec_module
File "iso639\__init__.py", line 4, in <module>
File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
File "<frozen importlib._bootstrap>", line 1149, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 499, in exec_module
File "iso639\language.py", line 12, in <module>
sqlite3.OperationalError: unable to open database file
[19716] Failed to execute script 'test1' due to unhandled exception!
### Who can help?
@agola11
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.document_loaders import UnstructuredPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
loader = UnstructuredPDFLoader("C:\\Temp\\AI_experimenting\\indexer_test1\\test_docs\\testdoc1.pdf")
data = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=512, chunk_overlap=20)
docs_transformed = text_splitter.split_documents(data)
print (f'Number of chunks of data: {len(docs_transformed )}')
```
Compile the code above using the following Pyinstaller spec file:
```
# -*- mode: python -*-
import os
from os.path import join, dirname, basename
datas = [
("C:/Temp/AI_experimenting/venv/Lib/site-packages/langchain/chains/llm_summarization_checker/prompts/*.txt", "langchain/chains/llm_summarization_checker/prompts"),
]
googleapihidden = ["pkg_resources.py2_warn",
"googleapiclient",
"apiclient",
"google-api-core"]
# list of modules to exclude from analysis
excludes = []
# list of hiddenimports
hiddenimports = googleapihidden
# binary data
# assets
tocs = []
a = Analysis(['test1.py'],
pathex=[os.getcwd()],
binaries=None,
datas=datas,
hiddenimports=hiddenimports,
hookspath=[],
runtime_hooks=[],
excludes=excludes,
win_no_prefer_redirects=False,
win_private_assemblies=False)
pyz = PYZ(a.pure, a.zipped_data)
exe1 = EXE(pyz,
a.scripts,
name='mytest',
exclude_binaries=True,
icon='',
debug=False,
bootloader_ignore_signals=False,
strip=False,
upx=True,
console=True)
coll = COLLECT(exe1,
a.binaries,
a.zipfiles,
a.datas,
*tocs,
strip=False,
upx=True,
name='mytest_V1')
```
Compile with: `pyinstaller --log-level DEBUG test1.spec`
Then in cmd terminal change working directory to ....dist\mytest_V1 and run mytest.exe
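The traceback points at `iso639` (pulled in via `unstructured`) failing to open a SQLite database it ships as package data, which PyInstaller does not bundle by default. A plausible remedy — untested here, offered as an assumption — is to collect that package's data files in the spec, after the existing `datas = [...]` list:

```python
# Spec-file fragment (not standalone code): bundle iso639's database file(s)
from PyInstaller.utils.hooks import collect_data_files

datas += collect_data_files("iso639")
```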
### Expected behavior
The expected behavior would be an output like:
Number of chunks of data: 18 | Loading UnstructeredPDF fails after build with PyInstaller | https://api.github.com/repos/langchain-ai/langchain/issues/11430/comments | 5 | 2023-10-05T12:08:46Z | 2023-10-05T20:04:59Z | https://github.com/langchain-ai/langchain/issues/11430 | 1,928,130,885 | 11,430 |
[
"langchain-ai",
"langchain"
] | ### System Info
I am using Python 3.11.5 and I run into this issue with `ModuleNotFoundError: No module named 'langchain.document_loaders'` after running `pip install 'langchain[all]'`, which appears to be installing langchain-0.0.39
If I then run `pip uninstall langchain`, followed by `pip install langchain`, it proceeds to install langchain-0.0.308 and suddenly my document loaders work again.
- Why does langchain 0.0.39 have document_loaders broken?
- Why does langchain[all] install a different version of langchain?
### Reproduction
1. Install python 3.11.5 (using pyenv on OSX)
2. Run `pip install 'langchain[all]'`
3. Observe langchain version that is installed
4. Run `ipython` and execute `from langchain.document_loaders import WebBaseLoader` to receive the module error.
5. Run `pip uninstall langchain`
6. Run `pip install langchain`
7. Note how an older version of langchain is installed
8. Run `ipython` and execute `from langchain.document_loaders import WebBaseLoader` to see how it's working.
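As an aside on the "newer/older" wording: version strings compare component-wise as integers (not lexically), so `0.0.39` is an *older* release number than `0.0.308` — a quick check:

```python
def version_tuple(v: str) -> tuple:
    """Parse a dotted version string into an integer tuple for comparison."""
    return tuple(int(part) for part in v.split("."))

assert version_tuple("0.0.39") < version_tuple("0.0.308")  # 39 < 308
assert "0.0.39" > "0.0.308"  # naive string comparison gets it backwards
```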
### Expected behavior
`pip install langchain` should install the same underlying version of langchain as the version of langchain that includes all of the various modules. | pip installing 'langchain[all]' installs a newer (broken) version than pip install langchain does | https://api.github.com/repos/langchain-ai/langchain/issues/11426/comments | 1 | 2023-10-05T10:49:12Z | 2023-10-05T11:18:44Z | https://github.com/langchain-ai/langchain/issues/11426 | 1,927,989,312 | 11,426 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
In the [QA using a Retriever docs](https://python.langchain.com/docs/use_cases/question_answering/how_to/vector_db_qa) you can see that each LLM reply begins with an extra leading space. What is the purpose of this space, or is it just a bug in the prompt?
Copied from the docs provided above
```python3
query = "What did the president say about Ketanji Brown Jackson"
qa.run(query)
```
Provides response:
`" The president said that she is one of the nation's top legal minds ..."`
### Idea or request for content:
_No response_ | DOC: QA using a Retriever, why do we need additional empty space? | https://api.github.com/repos/langchain-ai/langchain/issues/11425/comments | 2 | 2023-10-05T10:09:14Z | 2024-02-07T16:23:13Z | https://github.com/langchain-ai/langchain/issues/11425 | 1,927,900,506 | 11,425 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am trying to access documents from SharePoint using Langchain's [SharePointLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.sharepoint.SharePointLoader.html).
Following this [documentation](https://python.langchain.com/docs/integrations/document_loaders/microsoft_sharepoint), I changed the redirect URI to a localhost address. How can I pass the new redirect URI as a parameter to the [SharePointLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.sharepoint.SharePointLoader.html) function?
### Suggestion:
_No response_ | Issue: Custom Redirect URI as parameter in SharePointLoader | https://api.github.com/repos/langchain-ai/langchain/issues/11423/comments | 3 | 2023-10-05T06:16:38Z | 2024-02-07T16:23:18Z | https://github.com/langchain-ai/langchain/issues/11423 | 1,927,461,674 | 11,423 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.300
supabase==1.1.1
### Who can help?
@hwaking @eyurtsev @agola11 @eyurtsev @hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
### **Creation of Supabase client**
```python
supabase_url: str = os.environ.get("SUPABASE_URL")  # type: ignore
supabase_key: str = os.environ.get("SUPABASE_SERVICE_KEY")  # type: ignore
supabase_client = create_client(supabase_url, supabase_key)
```
### Text Splitter creation
```python
text_splitter = CharacterTextSplitter(
    chunk_size=800,
    chunk_overlap=0,
)
```
### **Embeddings**
`embeddings = OpenAIEmbeddings()`
### **Loading the document**
```python
loader = PyPDFLoader("Alice_in_wonderland2.pdf")
pages = loader.load_and_split()
docs = text_splitter.split_documents(pages)
```
### **Save values to Supabase**
`vector_store = SupabaseVectorStore.from_documents(documents=docs, embedding=embeddings, client=supabase_client)`
### **Error encountered**
The above exception was the direct cause of the following exception:
```
Traceback (most recent call last):
File "D:\VSCode\Python\langchain project\supabase-try\test.py", line 34, in <module>
vector_store = SupabaseVectorStore.from_documents(
File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\langchain\vectorstores\base.py", line 417, in from_documents
return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\langchain\vectorstores\supabase.py", line 147, in from_texts
cls._add_vectors(client, table_name, embeddings, docs, ids)
File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\langchain\vectorstores\supabase.py", line 323, in _add_vectors
result = client.from_(table_name).upsert(chunk).execute() # type: ignore
File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\postgrest\_sync\request_builder.py", line 57, in execute
r = self.session.request(
File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\httpx\_client.py", line 814, in request
return self.send(request, auth=auth, follow_redirects=follow_redirects)
File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\httpx\_client.py", line 901, in send
response = self._send_handling_auth(
File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\httpx\_client.py", line 929, in _send_handling_auth
response = self._send_handling_redirects(
File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\httpx\_client.py", line 966, in _send_handling_redirects
response = self._send_single_request(request)
File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\httpx\_client.py", line 1002, in _send_single_request
response = transport.handle_request(request)
File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\httpx\_transports\default.py", line 218, in handle_request
resp = self._pool.handle_request(req)
File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\contextlib.py", line 135, in __exit__
self.gen.throw(type, value, traceback)
File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\httpx\_transports\default.py", line 77, in map_httpcore_exceptions
raise mapped_exc(message) from exc
httpx.WriteTimeout: The write operation timed out
```
### I tried changing the code according to the LangChain docs as follows
```python
vector_store = SupabaseVectorStore.from_documents(
    docs,
    embeddings,
    client=supabase_client,
    table_name="documents",
    query_name="match_documents",
)
```
### Then I encountered the following error
```
2023-10-05 10:33:29,879:INFO - HTTP Request: POST https://scptrclvtrvcwjdunlrn.supabase.co/rest/v1/documents "HTTP/1.1 404 Not Found"
Traceback (most recent call last):
  File "D:\VSCode\Python\langchain project\supabase-try\test.py", line 34, in <module>
    vector_store = SupabaseVectorStore.from_documents(
```
**I didn't create the `documents` table in Supabase manually, as I need it to be created automatically by the code. If it does need to be created manually, I need to know the steps for that and how to integrate it as well. Please help me.**
### Expected behavior
SupabaseVectorStore.from_documents works fine and stores all the embeddings in the vector store. | SupabaseVectorStore.from_documents is not working | https://api.github.com/repos/langchain-ai/langchain/issues/11422/comments | 28 | 2023-10-05T05:39:43Z | 2024-07-04T16:06:49Z | https://github.com/langchain-ai/langchain/issues/11422 | 1,927,423,576 | 11,422 |
[
"langchain-ai",
"langchain"
] | ### System Info
I am using the UnstructuredWordDocumentLoader module to load a .docx file. I have already tried several things, but the returned text data is always missing the percent (%) symbols that appear several times in the .docx file.
How can I keep / recover these percent (%) symbols?
Thanks
### Who can help?
@hwchase17 @agola11 @eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
loader = UnstructuredWordDocumentLoader("example_data/fake.docx", mode="elements")
data = loader.load()
### Expected behavior
The loaded data should also contain the percent (%) characters. | UnstructuredWordDocumentLoader - Missing Percentages (%) characters in data | https://api.github.com/repos/langchain-ai/langchain/issues/11416/comments | 3 | 2023-10-05T00:35:06Z | 2024-02-07T16:23:23Z | https://github.com/langchain-ai/langchain/issues/11416 | 1,927,175,230 | 11,416 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
In the documentation for some examples like [Weaviate](https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.weaviate.Weaviate.html#langchain.vectorstores.weaviate.Weaviate.delete) and [ElasticsearchVectorStore](https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.elasticsearch.ElasticsearchStore.html#langchain.vectorstores.elasticsearch.ElasticsearchStore.delete), the delete method is described as having the ids parameter as optional.
`delete(ids: Optional[List[str]] = None, **kwargs: Any) → None`
However, the ids parameter is mandatory for those vectorstores, and if it is not provided, a ValueError will be raised with the message "No ids provided to delete." Here is the revised [documentation](https://github.com/langchain-ai/langchain/blob/b9fad28f5e093415d76aeb71b5e555eb87fd2ec2/libs/langchain/langchain/vectorstores/elastic_vector_search.py#L336C5-L348C61) to clarify this:
```python
def delete(self, ids: Optional[List[str]] = None, **kwargs: Any) -> None:
    """Delete by vector IDs.

    Args:
        ids: List of ids to delete.
    """
    if ids is None:
        raise ValueError("No ids provided to delete.")
    # TODO: Check if this can be done in bulk
    for id in ids:
        self.client.delete(index=self.index_name, id=id)
```
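Until bulk deletion is supported, a caller-side workaround sketch (the `chunked` helper below is hypothetical, not part of LangChain) is to batch the ids and issue deletes per batch:

```python
def chunked(ids, size=500):
    # Yield successive batches of ids so deletes can be grouped
    # into a bounded number of requests.
    for i in range(0, len(ids), size):
        yield ids[i:i + size]

# Hypothetical usage: issue one delete pass (or one bulk request,
# where the backend supports it) per batch:
# for batch in chunked(all_ids):
#     vectorstore.delete(ids=batch)
```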
### Idea or request for content:
Please clarify in the documentation
Is it also possible to get bulk delete functionality for some vectorstores (e.g. ElasticsearchVectorStore / Weaviate)? | DOC: optional ids issue in delete method in vectorstores | https://api.github.com/repos/langchain-ai/langchain/issues/11414/comments | 1 | 2023-10-04T22:50:28Z | 2024-02-06T16:27:01Z | https://github.com/langchain-ai/langchain/issues/11414 | 1,927,090,102 | 11,414 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I would like to implement a `ListOutputParser` that could transform Markdown list into a string list
### Motivation
It seems that most of the lists generated by LLMs are Markdown lists, so I think this will be useful.
It might also make it easier to handle cases where the generated list doesn't work with the existing `ListOutputParser` (Numered and CommaSeparated).
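A minimal sketch of such a parser, handling only `-`, `*`, and `+` bullets (numbered items would need a second pattern):

```python
import re

def parse_markdown_list(text: str) -> list[str]:
    # Collect every line that starts with a Markdown bullet marker.
    items = re.findall(r"^\s*[-*+]\s+(.*)$", text, re.MULTILINE)
    return [item.strip() for item in items]
```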
### Your contribution
Here is the PR: https://github.com/langchain-ai/langchain/pull/11411 | Feature: Markdown list output parser | https://api.github.com/repos/langchain-ai/langchain/issues/11410/comments | 1 | 2023-10-04T21:38:14Z | 2023-10-07T01:57:03Z | https://github.com/langchain-ai/langchain/issues/11410 | 1,927,019,118 | 11,410 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.303, python 3.9, redis-py 5.0.1, latest redis stack with RedisSearch module
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Try to create a Redis vector store from a non-existent index_name like this:
```python
embeddings = OpenAIEmbeddings()
vectorstore = Redis.from_existing_index(
    embeddings,
    index_name=index_name,
    schema='schema.yaml',
    redis_url=REDIS_URL,
)
```
It executes, returns an object of type Redis, and logs that the index was found. However, it of course fails when queried later.
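For reference, the documented contract can be illustrated with a fail-fast sketch (plain Python, not the actual implementation):

```python
def from_existing_index_check(existing_indices, index_name):
    # Fail fast when the index is missing instead of returning a store
    # that only errors at query time.
    if index_name not in existing_indices:
        raise ValueError(f"Redis index '{index_name}' does not exist")
    return index_name
```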
### Expected behavior
It should raise a ValueError accortding to doc. | Redis from_existing_index does not raise ValueError when index does not exist | https://api.github.com/repos/langchain-ai/langchain/issues/11409/comments | 5 | 2023-10-04T21:30:28Z | 2023-10-09T15:05:22Z | https://github.com/langchain-ai/langchain/issues/11409 | 1,927,010,913 | 11,409 |
[
"langchain-ai",
"langchain"
] | ### System Info
Hi, I am using LLMChainFilter.from_llm(llm) but while running, I am getting this error:
ValueError: BooleanOutputParser expected output value to either be YES or NO. Received Yes, the context is relevant to the question as it provides information about the problem in the.
How do I resolve this error?
Langchain version: 0.0.308
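As a stopgap, a lenient parser can stand in for the strict `BooleanOutputParser` (the helper below is a hypothetical sketch, not part of LangChain), so verbose replies like "Yes, the context is relevant..." still parse:

```python
import re

def lenient_boolean_parse(text: str) -> bool:
    # Search for the first standalone yes/no token instead of requiring
    # the whole output to be exactly YES or NO.
    match = re.search(r"\b(yes|no)\b", text, re.IGNORECASE)
    if match is None:
        raise ValueError(f"Expected YES or NO in output, got: {text!r}")
    return match.group(1).lower() == "yes"
```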
### Who can help?
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import LLMChainExtractor, LLMChainFilter

llm = SageMakerEndpointModel
_filter = LLMChainFilter.from_llm(llm)
compressor = LLMChainExtractor.from_llm(llm)
compression_retriever = ContextualCompressionRetriever(base_compressor=_filter, base_retriever=faiss_retriever)
compressed_docs = compression_retriever.get_relevant_documents("What did the president say about Ketanji Jackson Brown?")
```
### Expected behavior
Get filtered docs | BooleanOutputParser expected output value error | https://api.github.com/repos/langchain-ai/langchain/issues/11408/comments | 6 | 2023-10-04T21:18:38Z | 2024-04-09T20:43:32Z | https://github.com/langchain-ai/langchain/issues/11408 | 1,926,997,959 | 11,408 |
[
"langchain-ai",
"langchain"
] | ### System Info
I’m running this on a local machine Windows 10, Spyder 5.2.1 IDE with Anaconda package management, using python 3.10.
### Who can help?
@leo-gan @holtskinner
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Hi,
I've just started learning to code with Python and work with LLMs. I'm following the [tutorial](https://python.langchain.com/docs/integrations/document_transformers/docai) for setting up LangChain with Google Document AI, and I'm getting the error "InvalidArgument: 400 Request contains an invalid argument." on this line of code:
`docs = list(parser.lazy_parse(blob))`
Here are the things I’ve tried so far:
• Set up gcloud ADC so I can run this as an authorized session; the code wouldn't work otherwise
• Set the permission on the GCS bucket to Storage Admin so I can read/write
• ChatGPT wrote a test to see if the current credentials are working; they are
• ChatGPT wrote a test to see if the DocAIParser object is working; it is
I think there’s some issue with the output path for “lazy_parse” but I can’t get it to work. I've looked into the documentation but I can't tell if I'm missing something or not. How do I get this working?
See full code and full error message below:
```python
import pprint
from google.auth.transport.requests import AuthorizedSession
from google.auth import default
from google.cloud import documentai
from langchain.document_loaders.blob_loaders import Blob
from langchain.document_loaders.parsers import DocAIParser
PROJECT = "[replace with project name]"
GCS_OUTPUT_PATH = "gs://[replace with bucket path]"
PROCESSOR_NAME = "https://us-documentai.googleapis.com/v1/projects/[replace with processor name]"
# Get the credentials object using ADC.
credentials, _ = default()
session = AuthorizedSession(credentials=credentials)
# Create a Document AI client object.
client = documentai.DocumentProcessorServiceClient(credentials=credentials)
"""Tests if the current credentials are working in gcloud."""
import google.auth
def test_credentials():
try:
# Try to authenticate to the Google Cloud API.
google.auth.default()
print("Credentials are valid.")
except Exception as e:
print("Credentials are invalid:", e)
if __name__ == "__main__":
test_credentials()
import logging
from google.cloud import documentai
# Set up logging
logging.basicConfig(level=logging.DEBUG)
# Create DocumentAI client
client = documentai.DocumentProcessorServiceClient()
# Print out actual method call
logging.debug("Calling client.batch_process_documents(%s, %s)",
PROCESSOR_NAME, GCS_OUTPUT_PATH)
"""Test of DocAIParser object is working"""
# Try to create a DocAIParser object.
try:
parser = DocAIParser(
processor_name=PROCESSOR_NAME,
gcs_output_path=GCS_OUTPUT_PATH,
client=client,
)
# If the DocAIParser object was created successfully, then the Google is accepting the parameters.
print("Google is accepting the parameters.")
except Exception as e:
# If the DocAIParser object fails to be created, then the Google is not accepting the parameters.
print("Google is not accepting the parameters:", e)
parser = DocAIParser(
processor_name=PROCESSOR_NAME,
gcs_output_path=GCS_OUTPUT_PATH,
client=client,
)
blob = Blob(path="gs://foia_doc_bucket/input/goog-exhibit-99-1-q1-2023-19.pdf")
docs = list(parser.lazy_parse(blob))
print(len(docs))
```
**********************************************
Full error message:
```
DEBUG:google.auth._default:Checking None for explicit credentials as part of auth process...
DEBUG:google.auth._default:Checking Cloud SDK credentials as part of auth process...
DEBUG:urllib3.util.retry:Converted retries value: 3 -> Retry(total=3, connect=None, read=None, redirect=None, status=None)
DEBUG:google.auth.transport.requests:Making request: POST https://oauth2.googleapis.com/token
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): oauth2.googleapis.com:443
DEBUG:urllib3.connectionpool:https://oauth2.googleapis.com:443 "POST /token HTTP/1.1" 200 None
Traceback (most recent call last):
File "C:\Users\Anaconda\envs\python3_10\lib\site-packages\google\api_core\grpc_helpers.py", line 75, in error_remapped_callable
return callable_(*args, **kwargs)
File "C:\Users\Anaconda\envs\python3_10\lib\site-packages\grpc\_channel.py", line 1161, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "C:\Users\Anaconda\envs\python3_10\lib\site-packages\grpc\_channel.py", line 1004, in _end_unary_response_blocking
raise _InactiveRpcError(state) # pytype: disable=not-instantiable
_InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.INVALID_ARGUMENT
details = "Request contains an invalid argument."
debug_error_string = "UNKNOWN:Error received from peer ipv4:142.250.31.95:443 {created_time:"2023-10-04T19:56:49.9162929+00:00", grpc_status:3, grpc_message:"Request contains an invalid argument."}"
>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\Inspiron 15 amd 5505\Dropbox\[...]\local_doc_upload.py", line 80, in <module>
docs = list(parser.lazy_parse(blob))
File "C:\Users\Anaconda\envs\python3_10\lib\site-packages\langchain\document_loaders\parsers\docai.py", line 91, in lazy_parse
yield from self.batch_parse([blob], gcs_output_path=self._gcs_output_path)
File "C:\Users\Anaconda\envs\python3_10\lib\site-packages\langchain\document_loaders\parsers\docai.py", line 122, in batch_parse
operations = self.docai_parse(blobs, gcs_output_path=output_path)
File "C:\Users\Anaconda\envs\python3_10\lib\site-packages\langchain\document_loaders\parsers\docai.py", line 268, in docai_parse
operations.append(self._client.batch_process_documents(request))
File "C:\Users\Anaconda\envs\python3_10\lib\site-packages\google\cloud\documentai_v1\services\document_processor_service\client.py", line 786, in batch_process_documents
response = rpc(
File "C:\Users\Anaconda\envs\python3_10\lib\site-packages\google\api_core\gapic_v1\method.py", line 131, in __call__
return wrapped_func(*args, **kwargs)
File "C:\Users\Anaconda\envs\python3_10\lib\site-packages\google\api_core\retry.py", line 366, in retry_wrapped_func
return retry_target(
File "C:\Users\Anaconda\envs\python3_10\lib\site-packages\google\api_core\retry.py", line 204, in retry_target
return target()
File "C:\Users\Anaconda\envs\python3_10\lib\site-packages\google\api_core\timeout.py", line 120, in func_with_timeout
return func(*args, **kwargs)
File "C:\Users\Anaconda\envs\python3_10\lib\site-packages\google\api_core\grpc_helpers.py", line 77, in error_remapped_callable
raise exceptions.from_grpc_error(exc) from exc
InvalidArgument: 400 Request contains an invalid argument.
```
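One thing worth checking (an assumption on my part, based on the Document AI client libraries rather than the tutorial): `PROCESSOR_NAME` above is a full `https://` URL, while the API expects the bare resource name `projects/<project>/locations/<location>/processors/<processor_id>`. A quick sanity-check sketch:

```python
import re

def looks_like_processor_name(name: str) -> bool:
    # Document AI expects the bare resource name, not an https:// endpoint URL.
    pattern = r"projects/[^/]+/locations/[^/]+/processors/[^/]+"
    return re.fullmatch(pattern, name) is not None
```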
### Expected behavior
Supposed to output "11" based on the number of pages in [this pdf](https://abc.xyz/assets/a7/5b/9e5ae0364b12b4c883f3cf748226/goog-exhibit-99-1-q1-2023-19.pdf) per the [Doc AI tutorial](https://python.langchain.com/docs/integrations/document_transformers/docai) | Error "InvalidArgument: 400 Request" by following tutorial for Document AI | https://api.github.com/repos/langchain-ai/langchain/issues/11407/comments | 18 | 2023-10-04T20:32:22Z | 2024-02-15T16:08:50Z | https://github.com/langchain-ai/langchain/issues/11407 | 1,926,936,215 | 11,407 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hey there, I'm trying to find a way to prevent an agent from going over the token limit. I've looked at the docs and asked around, with no luck.
This is the error, for context:
[ERROR] InvalidRequestError: This model's maximum context length is 16384 tokens. However, your messages resulted in 17822 tokens. Please reduce the length of the messages.
Any help would be great 😊
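One rough stdlib-only mitigation sketch: trim the oldest history until an estimate fits the context window (the ~4 characters per token figure is an assumption; a real tokenizer such as tiktoken would be more accurate):

```python
def truncate_history(messages, max_tokens=16000, chars_per_token=4):
    # Keep only the most recent messages whose estimated token cost fits.
    kept, total = [], 0
    for msg in reversed(messages):
        cost = len(msg) // chars_per_token + 1
        if total + cost > max_tokens:
            break
        kept.append(msg)
        total += cost
    return list(reversed(kept))
```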
### Suggestion:
_No response_ | Issue: prevent a agent from exceeding the token limit | https://api.github.com/repos/langchain-ai/langchain/issues/11405/comments | 9 | 2023-10-04T20:11:52Z | 2024-02-14T16:09:53Z | https://github.com/langchain-ai/langchain/issues/11405 | 1,926,907,879 | 11,405 |
[
"langchain-ai",
"langchain"
] | ### System Info
python:3.10.13 bookworm (docker)
streamlit
Version: 1.27.1
langchain
Version: 0.0.306
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
# Display assistant response in chat message container
with st.chat_message("🧞♂️"):
    message_placeholder = st.empty()
    cbh = StreamlitCallbackHandler(st.container())
    AI_response = llm_chain.run(prompt, callbacks=[cbh])
```
### Expected behavior
the "Thinking.." spinner STOPS or hides after LLM finishes its response
No Parameters I can find here
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streamlit.streamlit_callback_handler.StreamlitCallbackHandler.html
that would affect this
| StreamlitCallbackHandler thinking.. / spinner Not stopping | https://api.github.com/repos/langchain-ai/langchain/issues/11398/comments | 14 | 2023-10-04T19:30:40Z | 2024-06-25T16:13:40Z | https://github.com/langchain-ai/langchain/issues/11398 | 1,926,846,508 | 11,398 |
[
"langchain-ai",
"langchain"
] | ### System Info
"There was a BUG when using return_source_documents = True with any chain, it was always raising an error!!
!!, this is a temporary fix that requires the 'answer' key to be present there, but FIX IT, it is now impossible to to return_source_docs !!!!!!!!!!!!"
```python
def _get_input_output(
    self, inputs: Dict[str, Any], outputs: Dict[str, Any]
) -> Tuple[str, str]:
    if self.input_key is None:
        prompt_input_key = get_prompt_input_key(inputs, self.memory_variables)
    else:
        prompt_input_key = self.input_key
    if self.output_key is None:
        if 'answer' in outputs:
            output_key = 'answer'
        else:
            raise ValueError("Output key 'answer' not found.")
    else:
        output_key = self.output_key
    return inputs[prompt_input_key], outputs[output_key]
```
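A commonly reported workaround (worth verifying for your setup) is to construct the memory with an explicit `output_key="answer"` instead of patching the library. The selection logic that raises here can be sketched in plain Python:

```python
def pick_output(outputs, output_key=None):
    # With a single output key it is used automatically; with several
    # (e.g. 'answer' plus 'source_documents') an explicit output_key
    # must be supplied to the memory.
    if output_key is not None:
        return outputs[output_key]
    if len(outputs) == 1:
        return next(iter(outputs.values()))
    raise ValueError(f"One output key expected, got {outputs.keys()}")
```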
### Who can help?
@eyur
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
def create_chain():
    retriever = create_retriever(...)  # retriever construction elided
    memory = ...  # memory construction elided
    chain = ConversationalRetrievalChain.from_llm(
        llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo"),
        retriever=retriever,
        verbose=True,
        memory=memory,
        return_source_documents=True,  # !!! Here, it does not work
    )
    return chain


def run_chat():
    chain = create_chain()
    while True:
        query = input("Prompt: ")
        if query in ["quit", "q", "exit"]:
            sys.exit()
        result = chain({"question": query})
        print(f"ANSWER: {result['answer']}")
        print(f"DOCS_USED: {result['source_documents']}")
        query = None
```
### Expected behavior
It will say this error
raise ValueError(f"One output key expected, got {outputs.keys()}")
ValueError: One output key expected, got dict_keys(['answer', 'source_documents']) | Retrun_source_documets does not work !!!! | https://api.github.com/repos/langchain-ai/langchain/issues/11396/comments | 2 | 2023-10-04T19:13:38Z | 2024-02-06T16:27:06Z | https://github.com/langchain-ai/langchain/issues/11396 | 1,926,823,792 | 11,396 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain: 0.0.308
Python: 3.8
```python
import traceback

import boto3
from opensearchpy import AWSV4SignerAuth, RequestsHttpConnection
from langchain.chains import ConversationalRetrievalChain
from langchain.llms import SagemakerEndpoint
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate
from langchain.vectorstores import OpenSearchVectorSearch

# SagemakerEndpointEmbeddingsJumpStart and the content handlers are
# custom classes defined elsewhere in my project.


def get_ml_response(self, question):
    try:
        chain = self.create_conversation_retrieval_chain()
        llm_response = chain({'question': question, "chat_history": []})
        logger.info(f"LLm Response: {llm_response}")
    except Exception:
        err_msg = "Exception! {}".format(traceback.format_exc())
        logger.info(err_msg)


def create_conversation_retrieval_chain(self):
    try:
        session = boto3.Session()
        sts_client = session.client("sts")
        assumed_role = sts_client.assume_role(RoleArn=role_arn, RoleSessionName="AssumeRoleSession")

        # Get the temporary credentials
        credentials = assumed_role["Credentials"]
        access_key = credentials["AccessKeyId"]
        secret_key = credentials["SecretAccessKey"]
        session_token = credentials["SessionToken"]

        # Configure the default AWS Session with assumed role credentials
        # config = Config(region_name="us-west-2")
        session = boto3.Session(
            aws_access_key_id=access_key,
            aws_secret_access_key=secret_key,
            aws_session_token=session_token
        )

        custom_template = """Given the following conversation and a follow up question, rephrase the follow up question
        to be a standalone question. At the end of standalone question add this 'Answer the question
        in English language.' If you do not know the answer reply with 'I am sorry'.
        Chat History: {chat_history}
        Follow Up Input: {question}
        Standalone question:"""
        custom_question_prompt = PromptTemplate.from_template(custom_template)

        auth = AWSV4SignerAuth(credentials, 'us-west-2', 'aoss')
        embeddings = SagemakerEndpointEmbeddingsJumpStart(
            endpoint_name="jumpstart-dft-hf-textembedding-all-minilm-l6-v2",
            region_name="us-west-2",
            content_handler=content_handler_embeddings
        )
        opensearch_vector_search = OpenSearchVectorSearch(
            opensearch_url="<opensearch_url>",
            embedding_function=embeddings,
            index_name="<index_name>",
            http_auth=auth,
            connection_class=RequestsHttpConnection,
        )
        memory = ConversationBufferMemory(
            memory_key="chat_history",
            return_messages=True,
            input_key="question",
            output_key="answer",
        )
        chain = ConversationalRetrievalChain.from_llm(
            llm=SagemakerEndpoint(
                endpoint_name="jumpstart-dft-hf-llm-falcon-40b-instruct-bf16",
                region_name="us-west-2",
                content_handler=content_handler_llm,
            ),
            # for experimentation, you can change number of documents to retrieve here
            retriever=opensearch_vector_search.as_retriever(
                search_kwargs={
                    "k": 3,
                }
            ),
            memory=memory,
            condense_question_prompt=custom_question_prompt,
            return_source_documents=True,
        )
        return chain
    except Exception:
        err_msg = "Exception! {}".format(traceback.format_exc())
        logger.info(err_msg)
```
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
chain = self.create_conversation_retrieval_chain()
llm_response = chain({'question': question, "chat_history": []})
```
### Expected behavior
I am expecting to connect to connect to Sagemaker endpoints using Assume role permissions but getting Access Denied Exception. I tried invoking the endpoints through the lambda function which is working fine. But not able to figure out how to pass the credentials using Langchain. Could you please help me with this. | Access Denied Exception while accessing Cross Account Sagemaker endpoints. | https://api.github.com/repos/langchain-ai/langchain/issues/11392/comments | 2 | 2023-10-04T17:32:33Z | 2024-02-06T16:27:11Z | https://github.com/langchain-ai/langchain/issues/11392 | 1,926,681,297 | 11,392 |
[
"langchain-ai",
"langchain"
] | ### System Info
I'm using Google Colab, authenticating with my own account.
Packages installation:
!pip install google-cloud-discoveryengine google-cloud-aiplatform langchain==0.0.236 -q
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Sample code:
```python
import vertexai
from langchain.llms import VertexAI
from langchain.retrievers import GoogleCloudEnterpriseSearchRetriever
from langchain.chains import RetrievalQAWithSourcesChain

vertexai.init(project=PROJECT_ID, location=REGION)

llm = VertexAI(model_name=MODEL, temperature=0, top_p=0.2, max_output_tokens=1024)

retriever = GoogleCloudEnterpriseSearchRetriever(
    project_id=PROJECT_ID, search_engine_id=DATA_STORE_ID
)

retrieval_qa_with_sources = RetrievalQAWithSourcesChain.from_chain_type(
    llm=llm, chain_type="stuff", retriever=retriever
)

search_query = "How to create a new google calendar?"
results = retrieval_qa_with_sources({"question": search_query})
print(f"Answer: {results['answer']}")
print(f"Sources: {results['sources']}")
```
Return:
```
Answer: To create a new calendar, follow these steps:
1. On the computer, open Google Calendar.
2. On the left, next to "Other calendars," click Add other calendars->Create new calendar.
3. Add a name and description to the calendar.
4. Click on Create a Schedule.
5. To share the calendar, click on it in the left bar and select Share with specific people.
Sources:
```
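For context, a plain-Python sketch of the convention the chain relies on (assuming the standard "stuff" prompt, which asks the model to end with a `SOURCES:` line): when the model omits that line, sources come back empty, which would explain the intermittent behavior.

```python
def split_answer_and_sources(text: str):
    # The chain expects the model's reply to end with a "SOURCES:" line;
    # without it, the sources field ends up empty.
    if "SOURCES:" in text:
        answer, sources = text.split("SOURCES:", 1)
        return answer.strip(), sources.strip()
    return text.strip(), ""
```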
### Expected behavior
Expected to have the sources as URI from Google Cloud Storage. It sometimes return, sometimes not. It is really random. | Sources are not returned in RetrievalQAWithSourcesChain | https://api.github.com/repos/langchain-ai/langchain/issues/11387/comments | 2 | 2023-10-04T16:54:58Z | 2024-02-06T16:27:16Z | https://github.com/langchain-ai/langchain/issues/11387 | 1,926,626,279 | 11,387 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am using LangChain version 0.0.307 and I noticed an issue with the similarity_score_threshold. I just upgraded from 0.0.208 or 288 (I forgot).
This is the code I am using to retrieve the documents from pinecone:
```
docsearch = Pinecone.from_existing_index(index, embeddings, text_key="text")
retriever = docsearch.as_retriever(
search_type="similarity_score_threshold",
search_kwargs={
'score_threshold': 0.6
}
)
retriever_docs_result = retriever.get_relevant_documents(query)
```
it used to work fine but after upgrading the langchain version it stopped returning any docs. Upon investigating the code, I've noticed that inside the `_similarity_search_with_relevance_scores` method in `vectorstore.py` , `docs_and_scores` is returning the scores as expected but then this code tries to calculate the relevance score and inverts the values:
`return [(doc, relevance_score_fn(score)) for doc, score in docs_and_scores]`.
I am getting search scores of 0.7 to 0.85, but the relevance conversion inverts them to (1 - score), i.e. roughly 0.3 to 0.15. The problem is that score_threshold works the opposite way: with score_threshold set to 0.6 it keeps only scores greater than 0.6, which is now impossible, because after the inversion a smaller value means a closer match to the query.
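The mismatch can be shown with plain numbers (a sketch assuming the cosine-style relevance function `1 - score`):

```python
def to_relevance(raw_score: float) -> float:
    # Conversion applied before the threshold check: relevance = 1 - score.
    return 1.0 - raw_score

def passes_threshold(raw_score: float, score_threshold: float) -> bool:
    # The threshold filters on the converted value, so raw scores of
    # 0.7-0.85 become 0.3-0.15 and can never clear a 0.6 threshold.
    return to_relevance(raw_score) >= score_threshold
```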
I am using dot product and I have used the following code as well but it also didn't help:
```python
docsearch = Pinecone(index, embeddings, text_key="text", distance_strategy=DistanceStrategy.MAX_INNER_PRODUCT)
retriever = docsearch.as_retriever(
    search_type="similarity_score_threshold",
    search_kwargs={
        'score_threshold': 0.6
    }
)
```
Can someone help, or am I missing something here?
### Suggestion:
_No response_ | Issue: similarity_score_threshold not working with pinecone as expected | https://api.github.com/repos/langchain-ai/langchain/issues/11386/comments | 2 | 2023-10-04T16:48:26Z | 2024-02-07T16:23:28Z | https://github.com/langchain-ai/langchain/issues/11386 | 1,926,617,079 | 11,386 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I got this error when I built a chatbot with LangChain using VertexAI. I'm seeing this error and couldn't find any details so far.
File "/opt/conda/lib/python3.10/site-packages/langchain/chains/base.py", line 292, in __call__
raise e
File "/opt/conda/lib/python3.10/site-packages/langchain/chains/base.py", line 286, in __call__
self._call(inputs, run_manager=run_manager)
File "/opt/conda/lib/python3.10/site-packages/langchain/chains/conversational_retrieval/base.py", line 141, in _call
answer = self.combine_docs_chain.run(
File "/opt/conda/lib/python3.10/site-packages/langchain/chains/base.py", line 492, in run
return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
File "/opt/conda/lib/python3.10/site-packages/langchain/chains/base.py", line 292, in __call__
raise e
File "/opt/conda/lib/python3.10/site-packages/langchain/chains/base.py", line 286, in __call__
self._call(inputs, run_manager=run_manager)
File "/opt/conda/lib/python3.10/site-packages/langchain/chains/combine_documents/base.py", line 105, in _call
output, extra_return_dict = self.combine_docs(
File "/opt/conda/lib/python3.10/site-packages/langchain/chains/combine_documents/stuff.py", line 171, in combine_docs
return self.llm_chain.predict(callbacks=callbacks, **inputs), {}
File "/opt/conda/lib/python3.10/site-packages/langchain/chains/llm.py", line 257, in predict
return self(kwargs, callbacks=callbacks)[self.output_key]
File "/opt/conda/lib/python3.10/site-packages/langchain/chains/base.py", line 292, in __call__
raise e
File "/opt/conda/lib/python3.10/site-packages/langchain/chains/base.py", line 286, in __call__
self._call(inputs, run_manager=run_manager)
File "/opt/conda/lib/python3.10/site-packages/langchain/chains/llm.py", line 93, in _call
response = self.generate([inputs], run_manager=run_manager)
File "/opt/conda/lib/python3.10/site-packages/langchain/chains/llm.py", line 103, in generate
return self.llm.generate_prompt(
File "/opt/conda/lib/python3.10/site-packages/langchain/llms/base.py", line 504, in generate_prompt
return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/langchain/llms/base.py", line 668, in generate
new_results = self._generate_helper(
File "/opt/conda/lib/python3.10/site-packages/langchain/llms/base.py", line 541, in _generate_helper
raise e
File "/opt/conda/lib/python3.10/site-packages/langchain/llms/base.py", line 528, in _generate_helper
self._generate(
File "/opt/conda/lib/python3.10/site-packages/langchain/llms/vertexai.py", line 281, in _generate
res = completion_with_retry(
File "/opt/conda/lib/python3.10/site-packages/langchain/llms/vertexai.py", line 102, in completion_with_retry
return _completion_with_retry(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/tenacity/__init__.py", line 289, in wrapped_f
return self(f, *args, **kw)
File "/opt/conda/lib/python3.10/site-packages/tenacity/__init__.py", line 379, in __call__
do = self.iter(retry_state=retry_state)
File "/opt/conda/lib/python3.10/site-packages/tenacity/__init__.py", line 314, in iter
return fut.result()
File "/opt/conda/lib/python3.10/concurrent/futures/_base.py", line 451, in result
return self.__get_result()
File "/opt/conda/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/opt/conda/lib/python3.10/site-packages/tenacity/__init__.py", line 382, in __call__
result = fn(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/langchain/llms/vertexai.py", line 100, in _completion_with_retry
return llm.client.predict(*args, **kwargs)
TypeError: TextGenerationModel.predict() got an unexpected keyword argument 'stop_sequences'
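For what it's worth, the failure mode is generic Python: a wrapper forwarding keyword arguments to a callable that does not accept them. A minimal stdlib sketch (the `predict` function below is a hypothetical stand-in for the SDK method, and the signature-filtering workaround is an assumption for illustration, not the library's actual fix):

```python
import inspect

# Hypothetical stand-in for an SDK predict() that accepts no stop_sequences kwarg.
def predict(prompt, temperature=0.0):
    return f"echo: {prompt}"

def completion_with_filtered_kwargs(fn, *args, **kwargs):
    # Drop any keyword arguments the target callable does not declare,
    # instead of forwarding them blindly (which raises TypeError).
    accepted = set(inspect.signature(fn).parameters)
    filtered = {k: v for k, v in kwargs.items() if k in accepted}
    return fn(*args, **filtered)

# Forwarding stop_sequences directly would raise:
#   TypeError: predict() got an unexpected keyword argument 'stop_sequences'
result = completion_with_filtered_kwargs(predict, "hi", stop_sequences=["\n"])
print(result)
```

The real resolution is typically to upgrade the SDK to a version whose `predict()` accepts the keyword.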
### Suggestion:
_No response_ | Issue: TypeError: TextGenerationModel.predict() got an unexpected keyword argument 'stop_sequences' | https://api.github.com/repos/langchain-ai/langchain/issues/11384/comments | 3 | 2023-10-04T16:10:08Z | 2024-02-11T16:12:46Z | https://github.com/langchain-ai/langchain/issues/11384 | 1,926,558,948 | 11,384 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi, I have implemented the indexing workflow as in https://python.langchain.com/docs/modules/data_connection/indexing, making use of Pinecone and SQLite. This runs perfectly within a PyCharm dev environment, but after building the app with PyInstaller I get the following error: `sqlite3.OperationalError: unable to open database file`. I think this one is coming from `langchain.indexes.index`. The error message is very cryptic. Why can't this file be opened? Is it locked? Is it corrupted? No, as I can open it with a desktop SQL manager app. I tried different locations for storing the SQLite database, but it makes no difference.
Does anyone have some ideas on how to solve this?
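Not the reporter, but two stdlib-only checks often explain this error: SQLite raises "unable to open database file" when the path resolves against an unexpected working directory (PyInstaller bundles change it) or when the parent directory does not exist. A hedged sketch; the frozen/executable path logic below is the usual PyInstaller pattern, not something specific to this app:

```python
import os
import sqlite3
import sys
import tempfile

def db_path(name="record_manager.sqlite"):
    # Anchor the SQLite file to an absolute, writable location; when frozen
    # by PyInstaller, resolve next to the executable instead of the cwd.
    if getattr(sys, "frozen", False):
        base = os.path.dirname(sys.executable)
    else:
        base = os.getcwd()
    return os.path.join(base, name)

# sqlite3 raises the same cryptic error when the parent directory is missing:
missing = os.path.join(tempfile.mkdtemp(), "no_such_dir", "cache.sqlite")
try:
    sqlite3.connect(missing)
except sqlite3.OperationalError as exc:
    print(exc)  # unable to open database file

# An absolute path into an existing, writable directory works:
conn = sqlite3.connect(os.path.join(tempfile.mkdtemp(), "cache.sqlite"))
conn.close()
```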
### Suggestion:
_No response_ | Issue: sqlite3.OperationalError: unable to open database file | https://api.github.com/repos/langchain-ai/langchain/issues/11379/comments | 4 | 2023-10-04T12:16:34Z | 2023-10-05T11:22:05Z | https://github.com/langchain-ai/langchain/issues/11379 | 1,926,086,161 | 11,379 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain==0.0.306
Python 3.10.12
(Ubuntu Linux 20.04.6)
### Who can help?
The `ConversationBufferMemory` returns an empty string instead of an empty list when there's nothing stored, which breaks the expectations of the `MessagesPlaceholder` used within the Conversational REACT agent.
Related: https://github.com/langchain-ai/langchain/issues/7365 (where it was commented that changing the loading logic in `ConversationBufferWindowMemory` could break other things).
A possible solution that seems safer (in light of the subsequent type-checking code) could be to default empty strings to empty lists in the `format_messages` method of the `MessagesPlaceholder`, i.e.:
```
class MessagesPlaceholder(BaseMessagePromptTemplate):
...
def format_messages(self, **kwargs: Any) -> List[BaseMessage]:
value = kwargs[self.variable_name]
if not value: ## <-- ADDED CODE
value = [] ## <-- ADDED CODE
if not isinstance(value, list):
raise ValueError(...)
for v in value:
if not isinstance(v, BaseMessage):
raise ValueError(...)
return value
...
```
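A standalone mimic of the proposed behaviour (a hypothetical simplification of `format_messages`; the real method also validates that every element is a `BaseMessage`):

```python
def format_messages(value):
    # Proposed fix: treat an empty buffer ("" from ConversationBufferMemory)
    # as an empty message list instead of failing the list type check.
    if not value:
        value = []
    if not isinstance(value, list):
        raise ValueError(f"variable should be a list of base messages, got {value!r}")
    return value

print(format_messages(""))      # [] -- no longer raises on empty memory
print(format_messages(["hi"]))  # ['hi']
```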
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
To reproduce the behaviour, make sure you have an OPENAI_API_KEY env var and run the following:
```
# pip install langchain openai
import os
from langchain.memory import ChatMessageHistory, ConversationBufferMemory
from langchain.chat_models import ChatOpenAI
from langchain.agents import initialize_agent
from langchain.tools import BaseTool
conversational_memory = ConversationBufferMemory(
memory_key="chat_history",
chat_memory=ChatMessageHistory()
)
llm = ChatOpenAI(
openai_api_key=os.environ['OPENAI_API_KEY'],
temperature=0,
model_name="gpt-4"
)
class MyTool(BaseTool):
name = "yell_loud"
description = "The tool to yell loud!!!"
def _run(self, query, run_manager=None):
"""Use the tool."""
return 'WAAAAH!'
async def _arun(self, query, run_manager=None):
"""Use the tool asynchronously."""
raise NotImplementedError("no async")
agent = initialize_agent(
agent="chat-conversational-react-description",
tools=[MyTool()],
llm=llm,
max_iterations=5,
verbose=True,
memory=conversational_memory,
early_stopping_method='generate'
)
print(agent.run("Please yell very loud, thank you, and then report the result to me."))
# Will raise:
# ValueError: variable chat_history should be a list of base messages, got
# at location:
# "langchain/prompts/chat.py", line 98, in format_messages
```
### Expected behavior
I would expect the script to not complain when the memory is empty and result in something like the following agent interaction (as I got from the modified `format_messages` tentatively suggested above):
```
$> python agent_script.py
> Entering new AgentExecutor chain...
'''json
{
"action": "yell_loud",
"action_input": "Please yell very loud, thank you"
}
'''
Observation: WAAAAH!
Thought:'''json
{
"action": "Final Answer",
"action_input": "The response to your last comment was a loud yell, as you requested."
}
'''
> Finished chain.
The response to your last comment was a loud yell, as you requested.
``` | ConversationBufferMemory returns empty string (and not a list) when empty, breaking agents with memory ("should be a list of base messages, got") | https://api.github.com/repos/langchain-ai/langchain/issues/11376/comments | 3 | 2023-10-04T09:18:45Z | 2024-03-20T18:51:23Z | https://github.com/langchain-ai/langchain/issues/11376 | 1,925,768,723 | 11,376 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I have applied RetrievalQA's chain_type to map_reduce in my application and would like to customize its system message.
### Suggestion:
There is a way to modify the map_reduce_prompt code in the question answering, but I would like to know how to change it to the argument value of RetrievalQA.from_chain_type. | How to change the system message of qa chain? | https://api.github.com/repos/langchain-ai/langchain/issues/11375/comments | 2 | 2023-10-04T08:53:33Z | 2024-02-06T16:27:36Z | https://github.com/langchain-ai/langchain/issues/11375 | 1,925,720,401 | 11,375 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I have a Python REST API which calls OpenAI for embeddings using the following code:
`openai.Embedding.create(input=text, engine='text-embedding-ada-002')['data'][0]['embedding']`
My text length is below 8191 (the max token size for text-embedding-ada-002).
When I call it, sometimes it doesn't return any results or raise an exception; execution blocks after the call, but it gives a result on the 2nd or 3rd attempt for the same input text. It happens from time to time and is not consistent.
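Not a fix for the root cause, but a stdlib pattern that usually mitigates intermittent hangs: run the call with a hard timeout and retry. A sketch under assumptions (the timeout and retry counts are arbitrary, and `fn` would be `openai.Embedding.create`):

```python
import concurrent.futures

def call_with_timeout(fn, *args, timeout=30.0, retries=3, **kwargs):
    """Run fn with a hard timeout; retry when it hangs or fails.

    Note: a timed-out worker thread is abandoned, not killed. That is
    acceptable for an occasional stuck HTTP call, but it leaks a thread.
    """
    last_exc = None
    for _ in range(retries):
        pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
        try:
            future = pool.submit(fn, *args, **kwargs)
            return future.result(timeout=timeout)
        except Exception as exc:  # includes concurrent.futures.TimeoutError
            last_exc = exc
        finally:
            pool.shutdown(wait=False)
    raise last_exc

# Usage sketch (names assume the OpenAI client from the report):
# vec = call_with_timeout(openai.Embedding.create, input=text,
#                         engine="text-embedding-ada-002", timeout=20)
```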
Kindly help me to resolve this issue.
Thanks
Nuwan
### Suggestion:
_No response_ | Issue: Open AI embedding dose not returning results for some times | https://api.github.com/repos/langchain-ai/langchain/issues/11373/comments | 3 | 2023-10-04T05:46:24Z | 2024-02-07T16:23:38Z | https://github.com/langchain-ai/langchain/issues/11373 | 1,925,448,887 | 11,373 |
[
"langchain-ai",
"langchain"
] | ### System Info
I running code in Google Colab:
LangChain version: 0.0.306
Python 3.10.12
Transformers library 4.34.0
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Hello! I have encountered the following error when trying to develop HuggingFace question answering models:
```python
tokenizer = AutoTokenizer.from_pretrained("/content/flan-t5-large")
model = AutoModelForSeq2SeqLM.from_pretrained("/content/flan-t5-large")
pipe = pipeline("text2text-generation", model=model, tokenizer=tokenizer)
llm = HuggingFacePipeline(
    pipeline=pipeline,
    model_kwargs={"temperature": 0, "max_length": 512},
)
chain = load_qa_chain(llm, chain_type="stuff")
query = "My question?"
docs = db.similarity_search(query)
chain.run(input_documents=docs, question=query)
```
But the output is:
```
TypeError                                 Traceback (most recent call last)
[<ipython-input-34-226234bd6dea>](https://localhost:8080/#) in <cell line: 1>()
----> 1 chain.run(input_documents=docs, question=query)

15 frames
[/usr/local/lib/python3.10/dist-packages/transformers/pipelines/__init__.py](https://localhost:8080/#) in pipeline(task, model, config, tokenizer, feature_extractor, image_processor, framework, revision, use_fast, token, device, device_map, torch_dtype, trust_remote_code, model_kwargs, pipeline_class, **kwargs)
    773
    774     # Retrieve the task
--> 775     if task in custom_tasks:
    776         normalized_task = task
    777         targeted_task, task_options = clean_custom_task(custom_tasks[task])

TypeError: unhashable type: 'list'
```
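The exception itself is ordinary Python, independent of transformers: a list being used where a hashable value (a task name string) is expected. For illustration:

```python
custom_tasks = {}
task = ["text2text-generation"]  # a list where a task *string* was expected

try:
    task in custom_tasks          # dict membership hashes the key
except TypeError as exc:
    print(exc)                    # unhashable type: 'list'

assert ("text2text-generation" in custom_tasks) is False  # a plain string is fine
```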
### Expected behavior
I just want to find a solution; I couldn't find anything about this on Google.
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I'm creating a LangChain chatbot (conversational-react-description) with Zep long-term memory, and the bot has access to real-time data. How do I prevent it from using the memory when the same query is asked again a few minutes later?
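One pattern that addresses this (a sketch of the idea, not a built-in Zep or LangChain feature): timestamp each stored turn and expire entries older than a TTL before they are loaded back into the prompt, so stale real-time answers are not replayed:

```python
import time

class TTLMemory:
    """Toy memory that drops turns older than a time-to-live."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self.turns = []            # list of (timestamp, text)

    def add(self, text: str, now=None):
        self.turns.append((now if now is not None else time.time(), text))

    def load(self, now=None):
        now = now if now is not None else time.time()
        # Evict stale entries, then return what survives.
        self.turns = [(t, m) for t, m in self.turns if now - t <= self.ttl]
        return [m for _, m in self.turns]

m = TTLMemory(ttl_seconds=300)
m.add("price of BTC is 27k", now=0)
print(m.load(now=60))    # ['price of BTC is 27k'] -- still fresh
print(m.load(now=1000))  # [] -- stale real-time data expired
```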
### Suggestion:
_No response_ | Issue: Limit Memory Access on LangChain Bot | https://api.github.com/repos/langchain-ai/langchain/issues/11370/comments | 2 | 2023-10-04T03:27:22Z | 2024-02-06T16:27:46Z | https://github.com/langchain-ai/langchain/issues/11370 | 1,925,333,897 | 11,370 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi, I want to clarify some questions about tool and llm usage.
If an agent is initialized with multiple llm and tools,
1. how does the agent choose which llm or tool to use to answer the prompt?
2. How is llm different from tools? Can you give some examples of each category?
3. Can the agent use multiple tools/LLMs? If the agent can use multiple tools, what happens if the outputs generated by such tools/LLMs conflict with each other? In that case, which result will be returned to the user?
For example, in the code below (cited from the LangChain docs), the agent is initialized with both `llm_chain=llm_chain` (where `llm_chain` uses OpenAI) and `tools=tools` (where the tool is the Google search engine). Which one is used to generate the result?
```python
search = GoogleSearchAPIWrapper()
tools = [
Tool(
name="Search",
func=search.run,
description="useful for when you need to answer questions about current events",
)
]
prefix = """Have a conversation with a human, answering the following questions as best you can. You have access to the following tools:"""
suffix = """Begin!"
{chat_history}
Question: {input}
{agent_scratchpad}"""
prompt = ZeroShotAgent.create_prompt(
tools,
prefix=prefix,
suffix=suffix,
input_variables=["input", "chat_history", "agent_scratchpad"],
)
memory = ConversationBufferMemory(memory_key="chat_history")
llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True)
agent_chain = AgentExecutor.from_agent_and_tools(
agent=agent, tools=tools, verbose=True, memory=memory
)
agent_chain.run(input="How many people live in canada?")
agent_chain.run(input="what is their national anthem called?")
```
| Issue: differences between tool and llm | https://api.github.com/repos/langchain-ai/langchain/issues/11365/comments | 2 | 2023-10-03T21:42:07Z | 2024-02-06T16:27:51Z | https://github.com/langchain-ai/langchain/issues/11365 | 1,924,996,244 | 11,365 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
When I use the directory loader, does LangChain index the speaker notes in PowerPoint?
### Suggestion:
_No response_ | Does langchain index the speaker notes in powerpoint? | https://api.github.com/repos/langchain-ai/langchain/issues/11363/comments | 6 | 2023-10-03T21:00:18Z | 2024-02-10T16:15:22Z | https://github.com/langchain-ai/langchain/issues/11363 | 1,924,939,090 | 11,363 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Trying to submit a code update. This would be a first. Help a non software eng out. :)
Here's a manual diff of the intended change:
```
langchain\libs\langchain\langchain\document_loaders\text.py
<<[41] with open(self.file_path, encoding=self.encoding) as f:
>>[41] with open(self.file_path, encoding=self.encoding, errors='replace') as f:
```
The COMMIT_EDITMSG file:
"""
Solves unicode [codec can't decode byte] traceback error:
Traceback (most recent call last):
File "C:\Program Files\Python3\Lib\site-packages\langchain\document_loaders\text.py", line 41, in load
text = f.read()
^^^^^^^^
File "<frozen codecs>", line 322, in decode
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa7 in position 549: invalid start byte
"""
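The one-line change is easy to demonstrate with the stdlib alone: strict UTF-8 decoding raises on the stray 0xa7 byte, while `errors='replace'` substitutes U+FFFD and keeps going:

```python
data = b"caf\xa7"  # 0xa7 is an invalid UTF-8 start byte

try:
    data.decode("utf-8")
except UnicodeDecodeError as exc:
    print(exc)  # 'utf-8' codec can't decode byte 0xa7 in position 3: invalid start byte

# With errors='replace', the bad byte becomes U+FFFD instead of raising:
print(data.decode("utf-8", errors="replace"))
```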
The general flow used:
PS C:\dev\github\langchain\libs\langchain\langchain\document_loaders> notepad .\text.py
PS C:\dev\github\langchain\libs\langchain\langchain\document_loaders> git add text.py
PS C:\dev\github\langchain\libs\langchain\langchain\document_loaders> git checkout -b unicode_replace_fix
Switched to a new branch 'unicode_replace_fix'
PS C:\dev\github\langchain\libs\langchain\langchain\document_loaders> git commit -m "Solves unicode [codec can't decode byte] traceback error."
... // had to update user.name/email
PS C:\dev\github\langchain\libs\langchain\langchain\document_loaders> git commit --amend --reset-author
hint: Waiting for your editor to close the file... unix2dos: converting file C:/Dev/Github/langchain/.git/COMMIT_EDITMSG to DOS format...
dos2unix: converting file C:/Dev/Github/langchain/.git/COMMIT_EDITMSG to Unix format...
[unicode_replace_fix 09c9cb77e] Solves unicode [codec can't decode byte] traceback error:
1 file changed, 1 insertion(+), 1 deletion(-)
// below, I got a github auth popup, went through, succeeded - then denied?
PS C:\dev\github\langchain\libs\langchain\langchain\document_loaders> git push origin unicode_replace_fix
info: please complete authentication in your browser...
remote: Permission to langchain-ai/langchain.git denied to hackedpassword.
fatal: unable to access 'https://github.com/langchain-ai/langchain/': The requested URL returned error: 403
// denied but no diff?
PS C:\dev\github\langchain> git diff HEAD
PS C:\dev\github\langchain> git status
On branch unicode_replace_fix
nothing to commit, working tree clean
PS C:\dev\github\langchain> git diff
PS C:\dev\github\langchain>
Maybe I'm up against a repo push permission issue?
### Suggestion:
_No response_ | Issue: Fixed unicode decode byte error | https://api.github.com/repos/langchain-ai/langchain/issues/11359/comments | 4 | 2023-10-03T19:47:13Z | 2024-05-10T16:07:15Z | https://github.com/langchain-ai/langchain/issues/11359 | 1,924,832,590 | 11,359 |
[
"langchain-ai",
"langchain"
] | ### Feature request
A chatbot using a prompt template (LangChain) gives a proper response to the first prompt "Hi there!" but returns the error below for the second prompt "Give me a few tips on how to start a new garden.":
```
ValueError: Error: Prompt must alternate between '
Human:' and '
Assistant:'.
```
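For context, Anthropic-backed Bedrock models require the rendered prompt to strictly alternate Human/Assistant turns. One common fix (a sketch of the idea, not the library's implementation) is to merge consecutive messages from the same role before rendering:

```python
def normalize(turns):
    """Merge consecutive turns from the same role so the prompt alternates."""
    merged = []
    for role, text in turns:
        if merged and merged[-1][0] == role:
            merged[-1] = (role, merged[-1][1] + "\n" + text)
        else:
            merged.append((role, text))
    return merged

turns = [("Human", "Hi there!"),
         ("Human", "Give me a few tips on how to start a new garden."),
         ("Assistant", "Sure!")]
print(normalize(turns))
```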
### Motivation
Yes its a problem being raised by my customer who is blocked currently
### Your contribution
NA

[Error-Test.ipynb.txt](https://github.com/langchain-ai/langchain/files/12796468/Error-Test.ipynb.txt)
| BedrockChat Error | https://api.github.com/repos/langchain-ai/langchain/issues/11358/comments | 2 | 2023-10-03T19:43:11Z | 2024-02-06T16:28:01Z | https://github.com/langchain-ai/langchain/issues/11358 | 1,924,827,161 | 11,358 |
[
"langchain-ai",
"langchain"
] | ### System Info
Today, all of a sudden, I'm getting an error with status code 429 when using the Conversation chain. Nothing has changed except the day I've run the script. Any ideas?
It works with the normal LLM chain, so it must have to do with the Conversation chain not working, or the memories.
Using Flowise.
Chrome
@hwchase17
@agola11
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Request failed with status code 429 and body {"error":{"message":"Rate limit reached for 10KTPM-200RPM in organization org-ByeURhCRRujcKf7u7QrlrudH on tokens per min. Limit: 10000 / min. Please try again in 6ms. Contact us through our help center at help.openai.com if you continue to have issues.","type":"tokens","param":null,"code":"rate_limit_exceeded"}}
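This is the provider's tokens-per-minute limit (10KTPM here) rather than a chain bug; retrying with exponential backoff usually rides it out. A stdlib sketch; the `RateLimitError` class below is a stand-in for the real exception:

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for the provider's 429 exception."""

def with_backoff(fn, retries=5, base=0.01):
    for attempt in range(retries):
        try:
            return fn()
        except RateLimitError:
            if attempt == retries - 1:
                raise
            # Exponential backoff with jitter before the next attempt.
            time.sleep(base * (2 ** attempt) + random.uniform(0, base))

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError("429: rate_limit_exceeded")
    return "ok"

print(with_backoff(flaky))  # ok, after two retried failures
```

Reducing the tokens sent per request (shorter memory, smaller context) also helps stay under the per-minute budget.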
### Expected behavior
I expect the LLM chain to write the content. | Conversational Chain Error 429 | https://api.github.com/repos/langchain-ai/langchain/issues/11347/comments | 4 | 2023-10-03T16:28:32Z | 2024-02-10T16:15:27Z | https://github.com/langchain-ai/langchain/issues/11347 | 1,924,512,627 | 11,347 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.286
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
``` python
def add_product(self, input: str):
print(f"🟢 Add product {input}")
```
``` python
tools = [
Tool(
name="AddProduct",
func=add_product,
description="""
Add product to order.
Input of this tool must be a single string JSON format:
For example: '{{"name":"Nike Pegasus 40"}}'
"""
)
]
```
``` python
agent = initialize_agent(
tools=tools,
llm=__gpt4,
agent=AgentType.OPENAI_FUNCTIONS,
verbose=True,
prompt=prompt
)
```
### Expected behavior
Error:
```
TypeError: add_product() missing 1 required positional argument: 'input'
```
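The error is plain Python method binding, not agent-specific: `add_product` is written with `self`, but the unbound function was handed to the `Tool`, so the single string the agent passes lands in `self` and `input` is left unfilled. An illustration (class and names are stand-ins):

```python
class OrderTools:
    def add_product(self, input: str) -> str:
        return f"added {input}"

# Unbound: the lone argument fills `self`, leaving `input` missing.
try:
    OrderTools.add_product('{"name": "Nike Pegasus 40"}')
except TypeError as exc:
    print(exc)  # ... missing 1 required positional argument: 'input'

# Bound method from an instance: works as the Tool's func.
tools = OrderTools()
print(tools.add_product('{"name": "Nike Pegasus 40"}'))
```

Passing the bound method (e.g. `func=instance.add_product`) or a plain module-level function avoids the error.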
Expected:
```
Function `add_product` called with correct input.
``` | Missing 1 required positional argument for function that has `self` argument. | https://api.github.com/repos/langchain-ai/langchain/issues/11341/comments | 6 | 2023-10-03T15:06:00Z | 2024-07-15T04:32:06Z | https://github.com/langchain-ai/langchain/issues/11341 | 1,924,354,324 | 11,341 |
[
"langchain-ai",
"langchain"
] | ### System Info
latest version of langchain. python=3.11.4
### Who can help?
@all
### Information
- [x] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import os  # needed for os.environ below

from langchain.agents import *
from langchain.llms import OpenAI
from langchain.sql_database import SQLDatabase
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.sql_database import SQLDatabase
from langchain.llms.openai import OpenAI
from langchain.agents import AgentExecutor
from langchain.chat_models import ChatOpenAI
# from secret_key import openapi_key

openapi_key = "######"
os.environ['OPENAI_API_KEY'] = openapi_key

def chat(question):
    llm = OpenAI(temperature=0)
    tools = load_tools(["llm-math"], llm=llm)
    agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)

    driver = 'ODBC Driver 17 for SQL Server'
    host = '####'
    database = 'chatgpt'
    user = 'rnd'
    password = '###'

    # db_uri = SQLDatabase.from_uri(f"mssql+pyodbc://{user}:{password}@{host}/{database}?driver={driver}")
    db = SQLDatabase.from_uri(f"mssql+pyodbc://{user}:{password}@{host}/{database}?driver=ODBC+Driver+17+for+SQL+Server")

    llm = ChatOpenAI(model_name="gpt-3.5-turbo",
                     temperature=0,
                     max_tokens=1000)

    # toolkit = SQLDatabaseToolkit(db=db)
    toolkit = SQLDatabaseToolkit(db=db, llm=llm)

    agent_executor = create_sql_agent(
        llm=llm,
        toolkit=toolkit,
        verbose=True,
        reduce_k_below_max_tokens=True,
    )

    mrkl = initialize_agent(
        tools,
        ChatOpenAI(temperature=0),
        agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
        verbose=True,
        handle_parsing_errors=True,
    )

    return agent_executor.run(question)
```
### Expected behavior
I need to connect to a test server, but I'm getting an error while connecting to it; it works fine on the local server. I keep getting:
"OperationalError: (pyodbc.OperationalError) ('08001', '[08001] [Microsoft][ODBC Driver 17 for SQL Server]Named Pipes Provider: Could not open a connection to SQL Server [53]. (53) (SQLDriverConnect); [08001] [Microsoft][ODBC Driver 17 for SQL Server]Login timeout expired (0); [08001] [Microsoft][ODBC Driver 17 for SQL Server]A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online. (53)')
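Since the same code works locally, the first thing to rule out is plain network reachability of the test server's SQL port (firewall, DNS, or TCP/IP disabled on the instance). A stdlib check, independent of pyodbc; the hostname below is a placeholder:

```python
import socket

def can_reach(host: str, port: int = 1433, timeout: float = 3.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# False here means the driver never gets a chance: fix firewall/DNS/TCP first.
print(can_reach("your-test-server-hostname"))
```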
" | OperationalError: (pyodbc.OperationalError) ('08001', '[08001] | https://api.github.com/repos/langchain-ai/langchain/issues/11337/comments | 25 | 2023-10-03T14:39:18Z | 2024-02-15T16:08:56Z | https://github.com/langchain-ai/langchain/issues/11337 | 1,924,300,917 | 11,337 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi! I am trying to create a question-answering chatbot over PDFs. There is something distinctive in these documents: articles are referenced by number, for example: article 1.2.2.1.3, article 1.2.3.4.5, article 2.3.4.1.3, etc.
When I ask for a specific article, it can't find the answer and returns "the article x.x.x.x.x is not in the context". I have tried several embedding techniques and vector stores, but it does not work.
Any ideas?
PD: The PDF documents are around 450 pages
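One approach that tends to help here (an assumption, not a guaranteed fix): dotted article numbers embed poorly and look alike in vector space, so extract them into chunk metadata at indexing time and filter exactly, falling back to similarity search for everything else. A stdlib sketch of the tagging/filtering part:

```python
import re

ARTICLE_RE = re.compile(r"article\s+(\d+(?:\.\d+)+)", re.IGNORECASE)

def tag_chunk(text: str) -> dict:
    """Attach the article numbers found in a chunk as exact-match metadata."""
    return {"text": text, "articles": ARTICLE_RE.findall(text)}

def find(chunks, article):
    """Exact filter on metadata instead of embedding similarity."""
    return [c for c in chunks if article in c["articles"]]

chunks = [tag_chunk("Article 1.2.2.1.3 covers imports."),
          tag_chunk("Article 2.3.4.1.3 covers exports.")]
print(find(chunks, "1.2.2.1.3")[0]["text"])
```

Most vector stores let you attach this kind of metadata to each chunk and combine a metadata filter with the similarity query.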
### Suggestion:
_No response_ | Issue: Documents embeddings with many and similar numbers don't return good results | https://api.github.com/repos/langchain-ai/langchain/issues/11331/comments | 5 | 2023-10-03T12:04:42Z | 2024-02-12T16:12:34Z | https://github.com/langchain-ai/langchain/issues/11331 | 1,923,981,257 | 11,331 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Yeah, so I've been attempting to load Llama 2 and utilize LangChain to create a condensed model for my documents. I managed to accomplish the task successfully, as Llama 2 generated the results quite effectively. However, when I tried to speed up the process by referencing LangChain's [LLMChain](https://api.python.langchain.com/en/latest/chains/langchain.chains.llm.LLMChain.html?highlight=llmchain#langchain.chains.llm.LLMChain) and one of its functions -- arun, I kept encountering a "NotImplementedError". It didn't matter which function I tried to use; whether it was acall, arun, or apredict, they all failed.
So here are my codes:
```
# load LLama from local drive
tokenizer=AutoTokenizer.from_pretrained("/content/drive/MyDrive/CoLab/LLama7b")
model=LlamaForCausalLM.from_pretrained("/content/drive/MyDrive/CoLab/LLama7b",quantization_config=quant_config,device_map="auto",)
# define output parameters
prefix_llm=transformers.pipeline(
model=model, tokenizer=tokenizer,
return_full_text=True,
task="text-generation",# temperature=0,
top_p=0.15, top_k=15,
max_new_tokens=1060, repetition_penalty=1.2,
do_sample=True
)
# wrap up with Langchain HuggingfaceLLm class
llm=HuggingFacePipeline(pipeline=prefix_llm, cache=False)
# Combine with LLMChain
prompt: PromptTemplate, input_variables="words"
llm_chain = LLMChain(prompt=prompt, llm=llm)
# calling arun
input: list[str]
llm_chain.run(input[num]) || llm_chain.run(words=input[num]) # run smoothly
llm_chain.arun(input) || llm_chain.arun(words=input) # Raise error NotImplementError
```
```
error comes from langchain/llms/base.py in _agenerate(self, prompts, stop, run_manager, **kwargs)
479 ) -> LLMResult:
480 """Run the LLM on the given prompts."""
--> 481 raise NotImplementedError()
482
483 def _stream(
NotImplementedError:
# in this last function, "run_manager" should be None, "stop" too.
```
As far as I explored, this process goes through 10 frames, starting at:
```
langchain/chains/base.py -> def acall() to
langchain/chains/base.py -> def agenerate()
then jumps to
langchain/chains/llm.py -> def _acall() to
langchain/chains/llm.py -> def agenerate()
finally it stops at
langchain/llms/base.py -> def agenerate() to
langchain/llms/base.py -> def agenerate_prompt(),
langchain/llms/base.py -> def agenerate_helper()
langchain/llms/base.py -> def _agenerate()
```
I don't know what "thing" I should implement, or what LangChain didn't implement. The calling code refers 100% to the [Langchain example](https://python.langchain.com/docs/modules/chains/how_to/async_chain). I suspect it's because LangChain doesn't fully support Llama, since it runs smoothly with the OpenAI API.
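It does not look Llama-specific: `HuggingFacePipeline` in this version appears to provide no async `_agenerate`, so every async entry point bottoms out in the base class's `NotImplementedError` (the OpenAI wrapper works because it ships its own async implementation). One stdlib workaround is to push the blocking sync call onto threads yourself. A sketch; `run_sync` is a stand-in for `llm_chain.run`:

```python
import asyncio

def run_sync(prompt: str) -> str:
    # Stand-in for the blocking llm_chain.run(prompt) call.
    return f"summary of {prompt}"

async def run_many(prompts):
    # asyncio.to_thread (Python 3.9+) runs each blocking call in a worker
    # thread, giving concurrency without needing a native _agenerate.
    return await asyncio.gather(*(asyncio.to_thread(run_sync, p) for p in prompts))

print(asyncio.run(run_many(["a", "b"])))
```

Note that for a single local model this yields limited speedup, since the GPU is still the shared bottleneck.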
### Suggestion:
_No response_ | Issue: LLMChain arun running with Llama7B encounter NotImplementError | https://api.github.com/repos/langchain-ai/langchain/issues/11325/comments | 2 | 2023-10-03T09:18:23Z | 2024-02-07T16:23:58Z | https://github.com/langchain-ai/langchain/issues/11325 | 1,923,694,958 | 11,325 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Is there a codeinterpreter plugin for langchain?
Use Case: I want to generate insights with charts as like the paid ChatGPT out there. Is this possible?
### Suggestion:
_No response_ | CodeInterpreter for Langchain | https://api.github.com/repos/langchain-ai/langchain/issues/11319/comments | 1 | 2023-10-03T04:45:52Z | 2024-02-06T16:28:21Z | https://github.com/langchain-ai/langchain/issues/11319 | 1,923,299,236 | 11,319 |
[
"langchain-ai",
"langchain"
] | ### System Info
- Langchain Version: 0.0.306
- Python Version: 3.10.11
- Azure Cognitive Seach (any sku)
- Embedding that has batch functionality
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce:
1. Get a document (or set of documents) that will produce a few hundred chunks
2. Run the sample code provided by Langchain to index those documents into the Azure Search vector store (sample code below)
3. Run steps 1 and 2 on another vector store that supports batch embedding (milvus implementation supports batch embeddings)
4. Analyze the delta between the two vectorization speeds (Azure Search should be noticeably slower)
Sample code:
```python
import openai
import os
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores.azuresearch import AzureSearch
vector_store_address: str = "YOUR_AZURE_SEARCH_ENDPOINT"
vector_store_password: str = "YOUR_AZURE_SEARCH_ADMIN_KEY"
embeddings: OpenAIEmbeddings = OpenAIEmbeddings(deployment=model, chunk_size=1)
index_name: str = "langchain-vector-demo"
vector_store: AzureSearch = AzureSearch(
azure_search_endpoint=vector_store_address,
azure_search_key=vector_store_password,
index_name=index_name,
embedding_function=embeddings.embed_query,
)
from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
loader = TextLoader("../../../state_of_the_union.txt", encoding="utf-8")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
vector_store.add_documents(documents=docs)
```
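The speed gap in step 4 is dominated by per-request overhead: N chunks mean N round trips when embeddings are computed one by one, versus a handful of batched requests. A toy stdlib illustration of the effect (the overhead constant is made up):

```python
import time

PER_REQUEST_OVERHEAD = 0.002  # seconds per round trip (a made-up number)

def embed_one(text):
    time.sleep(PER_REQUEST_OVERHEAD)           # one round trip per chunk
    return [float(len(text))]

def embed_batch(texts):
    time.sleep(PER_REQUEST_OVERHEAD)           # one round trip for all chunks
    return [[float(len(t))] for t in texts]

texts = ["chunk"] * 100

t0 = time.perf_counter()
one_by_one = [embed_one(t) for t in texts]
t1 = time.perf_counter()
batched = embed_batch(texts)
t2 = time.perf_counter()

print(f"per-item: {t1 - t0:.3f}s  batched: {t2 - t1:.3f}s")
```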
### Expected behavior
The expected behavior for the Azure search implementation to also support batch embedding if the implementation of the embedding class supports batch embedding. This should result in significant speed improvements when comparing the single embedding approach vs the batch embedding approach. | Azure Cognitive Search vector DB store performs slow embedding as it does not utilize the batch embedding functionality | https://api.github.com/repos/langchain-ai/langchain/issues/11313/comments | 6 | 2023-10-02T21:42:29Z | 2024-02-27T11:12:59Z | https://github.com/langchain-ai/langchain/issues/11313 | 1,922,776,905 | 11,313 |
[
"langchain-ai",
"langchain"
] | ### System Info
v: 0.0306
python version: Python 3.11.4
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Recently upgraded to version 0.0.306 of langchain (from 0.0.259).
I now run into the below error:
```
2023-10-02T19:42:20.279102248Z File "/home/appuser/.local/lib/python3.11/site-packages/langchain/__init__.py", line 322, in __getattr__
2023-10-02T19:42:20.279258947Z raise AttributeError(f"Could not find: {name}")
2023-10-02T19:42:20.279296147Z AttributeError: Could not find: llms
```
I have no idea as to what is causing this and from where the call is being made. I know that the error is coming from the below file (line 328)
https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/__init__.py
I am not able to find any attribute called "llms" being set anywhere in our code base, hence I am even more confused about the AttributeError.
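For background (partly speculation on my side): `langchain/__init__.py` uses a module-level `__getattr__` (PEP 562) for lazy imports, and it raises this error for any top-level attribute it does not recognise. That means the lookup can be triggered indirectly, e.g. by pickling, `copy`, or introspection of the `langchain` module object, not only by code you wrote. A minimal mimic of the mechanism:

```python
class LazyModule:
    """Mimic of a module whose __getattr__ gates attribute access (PEP 562)."""
    _known = {"chains", "agents"}

    def __getattr__(self, name):
        # __getattr__ only fires for attributes not found the normal way.
        if name in self._known:
            return f"<loaded {name}>"
        raise AttributeError(f"Could not find: {name}")

mod = LazyModule()
print(mod.chains)   # <loaded chains>
try:
    mod.llms        # anything off the known list fails exactly like the report
except AttributeError as exc:
    print(exc)      # Could not find: llms
```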
### Expected behavior
- Perhaps a stack trace that shows the flow of calls
- Documentation that shows the possibilities of the error | Upgrade to v :0.0.306 - AttributeError: Could not find: llms | https://api.github.com/repos/langchain-ai/langchain/issues/11306/comments | 2 | 2023-10-02T19:50:11Z | 2024-05-01T16:05:15Z | https://github.com/langchain-ai/langchain/issues/11306 | 1,922,568,920 | 11,306 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi everyone,
I've encountered an issue while trying to instantiate the ConversationalRetrievalChain in the Langchain library. It seems to be related to the abstract class BaseRetriever and the required method _get_relevant_documents.
Here's my base code:
```
import os
from langchain.vectorstores import FAISS
from langchain.chat_models import ChatOpenAI
from langchain.chains.question_answering import load_qa_chain
from langchain.embeddings.openai import OpenAIEmbeddings
from dotenv import load_dotenv
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationalRetrievalChain
# Load environment variables
load_dotenv("info.env")
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
# Initialize the FAISS vector database
embeddings = OpenAIEmbeddings(openai_api_key=OPENAI_API_KEY)
index_filenames = ["index1", "index2"]
dbs = {f"category{i+1}": FAISS.load_local(filename, embeddings) for i, filename in enumerate(index_filenames)}
# Load chat model and question answering chain
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0, openai_api_key=OPENAI_API_KEY)
chain = load_qa_chain(llm, chain_type="map_reduce")
# Initialize conversation memory
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
# Set the role of the AI
role = "Assistant"
# Write the role of the AI
memory.save_context({"input": f"AI: As an {role}, I can help you with your questions based on the retrievers and my own data."}, {"output": ""})
# Initialize the ConversationalRetrievalChain with memory and custom retrievers
qa = ConversationalRetrievalChain.from_llm(llm, retriever=dbs, memory=memory)
# Pass in chat history using the memory
while True:
    user_input = input("User: ")

    # Add user input to the conversation history
    memory.save_context({"input": f"User: {user_input}"}, {"output": ""})

    # Check if the user wants to exit
    if user_input == "!exit":
        break

    # Run the chain on the user's query
    response = qa.run(user_input)

    # Print the response with a one-line space and role indication
    print("\n" + f"{role}:\n{response}" + "\n")
```
And here's the error message I received:
```
pydantic.v1.error_wrappers.ValidationError: 1 validation error for ConversationalRetrievalChain
retriever
Can't instantiate abstract class BaseRetriever with abstract method _get_relevant_documents (type=type_error)
```
It appears that the retriever I provided doesn't fully implement the required methods. Could someone provide guidance on how to properly implement a retriever for ConversationalRetrievalChain and help update the code based on that?
Any help would be greatly appreciated. Thank you!
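For reference, the pydantic message mirrors standard Python abstract-base-class behavior: any subclass must implement `_get_relevant_documents` before it can be instantiated. A minimal stdlib sketch of both the failure and the fix (the class names here are illustrative stand-ins, not LangChain's real classes):

```python
from abc import ABC, abstractmethod

class AbstractRetriever(ABC):
    """Illustrative stand-in for an abstract retriever base class."""

    @abstractmethod
    def _get_relevant_documents(self, query: str) -> list:
        """Return documents relevant to the query."""

class DictRetriever(AbstractRetriever):
    """Toy retriever backed by a plain dict of documents."""

    def __init__(self, docs: dict):
        self.docs = docs

    def _get_relevant_documents(self, query: str) -> list:
        # Naive keyword match; a real retriever would use embeddings.
        return [text for text in self.docs.values() if query.lower() in text.lower()]

try:
    AbstractRetriever()  # fails: abstract method not implemented
except TypeError as exc:
    print("TypeError:", exc)

retriever = DictRetriever({"d1": "FAISS indexes documents", "d2": "chat history"})
print(retriever._get_relevant_documents("faiss"))  # ['FAISS indexes documents']
```

In the script above, `retriever=dbs` passes a plain dict of FAISS stores, which is not a retriever object at all; `ConversationalRetrievalChain` expects a single retriever, and LangChain vector stores expose an `as_retriever()` method for exactly that purpose.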
### Suggestion:
_No response_ | Issue: Abstract Class Implementation problem in Retrievers | https://api.github.com/repos/langchain-ai/langchain/issues/11303/comments | 21 | 2023-10-02T19:06:33Z | 2023-11-15T12:51:45Z | https://github.com/langchain-ai/langchain/issues/11303 | 1,922,487,402 | 11,303 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.291
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
elements_for_filter = ['a10', 'b10']
the_filter = {
    'type': {
        'in': elements_for_filter
    }
}

if rag_similarity_strategy == 'similarity_score_threshold':
    retriever = vectordb.as_retriever(
        search_type=rag_similarity_strategy,
        search_kwargs={"k": 4, 'score_threshold': 0.6, 'filter': the_filter},
    )
elif rag_similarity_strategy == 'mmr':
    retriever = vectordb.as_retriever(
        search_type=rag_similarity_strategy,
        search_kwargs={'filter': the_filter, "k": 4, 'lambda_mult': 0.5, 'fetch_k': 20},
    )
```
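For reference, here is a minimal greedy MMR selection sketch in pure Python (illustrative only, not PGVector's actual implementation) in which the metadata filter is applied to the candidate set before MMR re-ranking, which is the behavior this report expects:

```python
def mmr_select(query_sim, pairwise_sim, candidates, k=2, lambda_mult=0.5):
    """Greedy maximal-marginal-relevance selection over candidate doc ids."""
    selected = []
    remaining = list(candidates)
    while remaining and len(selected) < k:
        def score(c):
            relevance = query_sim[c]
            redundancy = max((pairwise_sim[(c, s)] for s in selected), default=0.0)
            return lambda_mult * relevance - (1 - lambda_mult) * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

docs = {"d1": "a10", "d2": "c10", "d3": "b10"}               # doc id -> metadata 'type'
allowed = {"a10", "b10"}                                      # the 'in' filter values
candidates = [d for d, t in docs.items() if t in allowed]     # filter BEFORE MMR

query_sim = {"d1": 0.9, "d3": 0.8}
pairwise_sim = {("d1", "d3"): 0.2, ("d3", "d1"): 0.2}
print(mmr_select(query_sim, pairwise_sim, candidates))        # ['d1', 'd3']
```

Note `d2` never reaches the MMR step because its metadata fails the filter; the bug report describes the filter being skipped entirely on the 'mmr' path.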
### Expected behavior
I expect the retriever to apply the filter when I use 'mmr', as it does when I use 'similarity_score_threshold', but it does not. | PGVector 'mmr' does not apply the filter at the time of retrieval | https://api.github.com/repos/langchain-ai/langchain/issues/11295/comments | 4 | 2023-10-02T16:36:34Z | 2024-02-11T16:13:06Z | https://github.com/langchain-ai/langchain/issues/11295 | 1,922,260,149 | 11,295 |
[
"langchain-ai",
"langchain"
] | ### Feature request
We are using your VLLMOpenAI class in a project to connect to our vLLM server. However, we don't use OpenAI, and it seems odd that the naming suggests the class is specifically for OpenAI when it already works with any vLLM deployment. Generalising the class so its name reflects that would mostly be a matter of refactoring.
### Motivation
We are using the VLLMOpenAI class with other open-source models, and it is misleading and unintuitive to say we use an OpenAI-named class for them. We would like a class named for vLLM deployments in general, or for open-source models.
### Your contribution
No time capacity at the moment :( Other than my genuine enthusiasm for the project and writing up this feature request, I can't really offer much more... | VLLMOpenAI class should be generalised | https://api.github.com/repos/langchain-ai/langchain/issues/11291/comments | 5 | 2023-10-02T14:25:10Z | 2024-02-09T16:19:48Z | https://github.com/langchain-ai/langchain/issues/11291 | 1,922,030,381 | 11,291 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Hi, I'd like to request a new chain type: Chain-of-Verification (CoVe).
### Motivation
I see that there are already an `LLMCheckerChain` and a `SmartLLMChain`, which use related techniques, but implementing the four-step process below, as described in https://arxiv.org/abs/2309.11495, would, I think, still be a very popular feature.
**Chain-of-Verification steps:**
1. drafts an initial response
2. plans verification questions to fact-check its draft
3. answers those questions independently so the answers are not biased by other responses
4. generates its final verified response
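The four steps could be orchestrated roughly as below. This is a sketch only, with a deterministic stub standing in for a real model; the prompts and function names are illustrative rather than a proposed LangChain API:

```python
def chain_of_verification(question: str, llm) -> str:
    """Sketch of the four CoVe stages; `llm` is any callable prompt -> text."""
    draft = llm(f"Answer the question: {question}")                      # 1. initial draft
    plan = llm(f"List verification questions for this draft: {draft}")   # 2. plan fact-checks
    verification_qs = [q.strip() for q in plan.splitlines() if q.strip()]
    answers = [llm(f"Answer independently: {q}") for q in verification_qs]  # 3. answer separately
    evidence = "\n".join(answers)
    return llm(f"Revise the draft '{draft}' using these checks:\n{evidence}")  # 4. final response

# Deterministic stub so the sketch runs without any model or API key.
def stub_llm(prompt: str) -> str:
    return f"[llm output for: {prompt[:40]}...]"

print(chain_of_verification("Who wrote Hamlet?", stub_llm))
```

The key property step 3 preserves is that each verification question is answered in its own call, so the answers are not conditioned on the draft being checked.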
### Your contribution
Not sure yet. | [Feature Request] Chain-of-Verification (CoVe) | https://api.github.com/repos/langchain-ai/langchain/issues/11285/comments | 6 | 2023-10-02T13:38:39Z | 2024-02-13T16:11:28Z | https://github.com/langchain-ai/langchain/issues/11285 | 1,921,942,394 | 11,285 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version 0.0.305
Python version 3.101.1
Platform VScode
I am trying to create a chatbot using langchain and streamlit by running this code:
```
import os
import streamlit as st
from st_chat_message import message
from dotenv import load_dotenv
from langchain.chat_models import ChatOpenAI
from langchain.schema import (
SystemMessage,
HumanMessage,
AIMessage
)
def init():
    load_dotenv()

    # Checking OpenAI API key
    if os.getenv("OPENAI_API_KEY") is None or os.getenv("OPENAI_API_KEY") == "":
        print("OPENAI_API_KEY is NOT set yet")
        exit(1)
    else:
        print("OPENAI_API_KEY is fully set")

    st.set_page_config(
        page_title="ZIKO",
        page_icon="🤖"
    )

def main():
    init()

    chat = ChatOpenAI(temperature=0)

    messages = [
        SystemMessage(content="You are a helpful assistant.")
        # HumanMessage(content=input),
        # AIMessage()
    ]

    st.header("ZIKO 🤖")

    with st.sidebar:
        user = st.text_input("Enter your message: ", key="user")

    if user:
        message(user, is_user=True)
        messages.append(HumanMessage(content=user))
        response = chat(messages)
        message(response.content)

if __name__ == '__main__':
    main()
```
But I am getting this message:
ModuleNotFoundError: No module named 'langchain.chat_models'
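This error most often means the script is being run by a different Python interpreter than the one where langchain was installed (a common pitfall with VS Code and virtual environments). A quick stdlib diagnostic, offered as a hedged check rather than a guaranteed fix:

```python
import importlib.util
import sys

# Which interpreter is actually running, and can it see langchain?
print("interpreter:", sys.executable)
spec = importlib.util.find_spec("langchain")
print("langchain:", spec.origin if spec else "NOT importable from this interpreter")
```

If the interpreter path printed differs from the environment where `pip install langchain` ran, installing into that interpreter with `python -m pip install langchain` usually resolves the ModuleNotFoundError.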
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce:
Installing langchain
Importing langchain.chat_models
### Expected behavior
Module loads successfully | ModuleNotFoundError: No module named 'langchain.chat_models' | https://api.github.com/repos/langchain-ai/langchain/issues/11277/comments | 4 | 2023-10-02T06:45:42Z | 2024-02-21T16:08:24Z | https://github.com/langchain-ai/langchain/issues/11277 | 1,921,339,371 | 11,277 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I want simple methods to use on the various memory classes to enable easy serialization and de serialization of the current state of the convo, crucially including system messages as json dictionaries.
If this is currently already included, I cannot for the life of me find it anywhere in the documentation or the source code.
In particular there doesn't seem to be a way to add system messages directly to the memory.
Heres some pseudo code outlining what this would look like
```python
serialized_memory = memory_instance.serialize_to_json()
new_memory = MemoryClass()
new_memory.load_from_json(serialized_memory)
```
In the new memory, the entire contents of the history(at least what has not been pruned) would be included. Including any system messages, which with the summary variants would include the convo summaries.
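For illustration, a stdlib-only sketch of the requested round trip (the class and method names are hypothetical, not existing LangChain APIs), including a system message carried through serialization:

```python
import json

class SimpleMemory:
    """Toy conversation memory holding role/content message dicts."""

    def __init__(self):
        self.messages = []

    def add(self, role: str, content: str):
        self.messages.append({"role": role, "content": content})

    def serialize_to_json(self) -> str:
        return json.dumps({"messages": self.messages})

    def load_from_json(self, payload: str):
        self.messages = json.loads(payload)["messages"]

memory = SimpleMemory()
memory.add("system", "Summary so far: user asked about serialization.")
memory.add("human", "Can I save this conversation?")

restored = SimpleMemory()
restored.load_from_json(memory.serialize_to_json())
print(restored.messages == memory.messages)  # True
```

Because the payload is a plain dict, the same state could just as easily be written to YAML, a database row, or any other storage format.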
### Motivation
The absence of easy serialization and deserialization methods makes it difficult to save and load conversations in JSON files or other formats. Working with dictionaries is straightforward, and they can be easily converted into various data storage formats.
### Your contribution
Possibly. I just started messing around with this module today, and I am relatively new to Python. I will be doing my best to learn more about how this library works so I can add this feature.
With some more research and experimentation this could be something I could do. | Simple Serialization and Deserialization of Memory classes to and from dictionaries w/ System Messages included | https://api.github.com/repos/langchain-ai/langchain/issues/11275/comments | 8 | 2023-10-02T04:50:53Z | 2024-02-15T16:09:05Z | https://github.com/langchain-ai/langchain/issues/11275 | 1,921,243,571 | 11,275 |