issue_owner_repo | issue_body | issue_title | issue_comments_url | issue_comments_count | issue_created_at | issue_updated_at | issue_html_url | issue_github_id | issue_number |
|---|---|---|---|---|---|---|---|---|---|
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I was using LangChain (map reduce) for summarization of longer documents with a local HuggingFace model. I am using local models because my work prohibits me from directly connecting to HuggingFace and/or OpenAI.
I am having some issues running the model. I am getting the following error:
"'HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /gpt2/resolve/main/tokenizer_config.json (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f11d2f511d0>, 'Connection to huggingface.co timed out. (connect timeout=10)'))' thrown while requesting HEAD https://huggingface.co/gpt2/resolve/main/tokenizer_config.json"
My code is:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline
from langchain.llms import HuggingFacePipeline
from langchain.chains.summarize import load_summarize_chain
from langchain.text_splitter import CharacterTextSplitter
from langchain.chains.mapreduce import MapReduceChain
from langchain.docstore.document import Document
from langchain.document_loaders import PyPDFLoader
import os
import torch
model = AutoModelForSeq2SeqLM.from_pretrained("/home/projects/nlp_summarize/led-large-16384")
tokenizer2 = AutoTokenizer.from_pretrained("/home/projects/nlp_summarize/led-large-16384")
tokenizer2.deprecation_warnings["Asking-to-pad-a-fast-tokenizer"] = True
DEVICE = 0 if torch.cuda.is_available() else -1  # assumption: DEVICE was defined elsewhere in the original script
pipe = pipeline(
    "text2text-generation",
    model=model,
    tokenizer=tokenizer2,
    # max_new_tokens=4096,
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.15,
    device=DEVICE,
)
llm = HuggingFacePipeline(pipeline=pipe)
text_splitter = CharacterTextSplitter()
with open("stateoftheunion.txt") as f:
    data = f.read()
texts = text_splitter.split_text(data)
docs = [Document(page_content=t) for t in texts[:]]
chain = load_summarize_chain(llm, chain_type="map_reduce")
output_summary = chain.run(docs)
```
Even though I am not using any GPT-2 model, it still looks for a gpt2 model online.
Any idea why such behavior is happening and how can I avoid it?
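For context on the gpt2 lookup: LangChain's default token counting (`get_num_tokens` on generic LLMs) falls back to a GPT-2 tokenizer, which `transformers` tries to fetch from the Hub; that is most likely what is phoning home, not your model. A minimal sketch for forcing everything offline (assumes all required files, including tokenizers, are already cached or local):
```python
import os

# real transformers/huggingface_hub switches; set them before importing transformers
os.environ["TRANSFORMERS_OFFLINE"] = "1"
os.environ["HF_HUB_OFFLINE"] = "1"
```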
### Suggestion:
_No response_ | Local huggingface model for summarization task | https://api.github.com/repos/langchain-ai/langchain/issues/8931/comments | 4 | 2023-08-08T19:23:10Z | 2024-01-30T00:41:16Z | https://github.com/langchain-ai/langchain/issues/8931 | 1,841,912,446 | 8,931 |
[
"langchain-ai",
"langchain"
] | ### Feature request
The Qdrant vector store currently has an async function for adding texts to the store (`aadd_texts`), which only supports creating embeddings synchronously via the `_generate_rest_batches` function.
There could be the option to add an async embedding function, and `aadd_texts` could then have a parameter to create the embeddings asynchronously with an async version of `_generate_rest_batches`.
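A rough sketch of what this could look like (names are hypothetical, untested):
```python
class Qdrant(VectorStore):
    async def aadd_texts(self, texts, metadatas=None, ids=None, **kwargs):
        added_ids = []
        # hypothetical async twin of _generate_rest_batches that awaits an
        # async embedding function instead of calling the sync one
        async for batch_ids, points in self._agenerate_rest_batches(texts, metadatas, ids):
            await self.client.upsert(collection_name=self.collection_name, points=points)
            added_ids.extend(batch_ids)
        return added_ids
```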
### Motivation
It would be nice to have the option of an `aadd_texts` function with completely non-blocking IO.
### Your contribution
I could try to implement this or support with the implementation. | Qdrant support for async embedding functions in aadd_texts | https://api.github.com/repos/langchain-ai/langchain/issues/8930/comments | 1 | 2023-08-08T18:27:50Z | 2023-11-14T16:06:19Z | https://github.com/langchain-ai/langchain/issues/8930 | 1,841,811,758 | 8,930 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.257
Python 3.8.17
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.llms.openai import AzureOpenAI
llm = AzureOpenAI(deployment_name="text-davinci-003", verbose=True)
llm._generate(["You are a chatbot. \nUser: How are you?\nBot:",
               "You are a chatbot. \nUser: What is the weather like?\nBot:"], n=2)
```
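For comparison, setting `n` when constructing the class behaves as expected (sketch based on the behavior described below):
```python
llm = AzureOpenAI(deployment_name="text-davinci-003", n=2)
llm._generate(["You are a chatbot. \nUser: How are you?\nBot:"])
# -> 2 generations for the prompt, as expected
```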
### Expected behavior
The result should be the same as when the 'n' parameter is provided while initializing the 'AzureOpenAI' class. However, it is not handled when passed through the 'generate' function.
When n is 2, the Generations contain only 1 example per prompt, whereas I expect 2 examples for each. Additionally, the 2 responses for the first prompt are returned as the results of the two prompts, ultimately losing the result of the second prompt.
result:
```
[
[Generation(text=" I'm doing great, thanks for asking! How about you?", generation_info={'finish_reason': 'stop', 'logprobs': None
})
],
[Generation(text=" I'm doing well, thanks for asking! How about you?", generation_info={'finish_reason': 'stop', 'logprobs': None
})
]
]
```
This may be because when the 'create_llm_result' function is called, it does not receive the recent value of 'n', instead of 2 it shows 'n' as 1. When I manually set 'self.n' to 2, it returns the expected result.
code change:
```
def create_llm_result(
self, choices: Any, prompts: List[str], token_usage: Dict[str, int]
) -> LLMResult:
"""Create the LLMResult from the choices and prompts."""
generations = []
++ self.n = 2
for i, _ in enumerate(prompts):
sub_choices = choices[i * self.n : (i + 1) * self.n]
```
result:
```
[
[Generation(text=" I'm doing great, thanks for asking! How can I help you today?", generation_info={'finish_reason': 'stop', 'logprobs': None
}), Generation(text=" I'm doing well, thank you for asking! How can I help you?", generation_info={'finish_reason': 'stop', 'logprobs': None
})
],
[Generation(text=' The weather today is mostly sunny with a high of 70 degrees Fahrenheit and a low of 45 degrees Fahrenheit.', generation_info={'finish_reason': 'stop', 'logprobs': None
}), Generation(text=' The current weather is sunny and warm with temperatures in the high 70s.', generation_info={'finish_reason': 'stop', 'logprobs': None
})
]
]
``` | n hyperparameter in AzureOpenai is not updated | https://api.github.com/repos/langchain-ai/langchain/issues/8928/comments | 4 | 2023-08-08T18:03:51Z | 2023-11-29T09:49:10Z | https://github.com/langchain-ai/langchain/issues/8928 | 1,841,767,814 | 8,928 |
[
"langchain-ai",
"langchain"
] | ### System Info
Platform: MacOS
Python version: Python 3.10.12
LangChain version: 0.0.257
Azure Search package version: 1.0.0b2
Azure Search Document package version: 11.3.0
azure-search==1.0.0b2
azure-search-documents==11.3.0
langchain==0.0.257
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. pip install packages langchain, azure-search, azure-search-documents.
2. Try to create an instance of AzureSearch as described [here](https://python.langchain.com/docs/integrations/vectorstores/azuresearch). (Official LangChain documentation.)
3. Receive error `AttributeError: module 'azure.search.documents.indexes.models._edm' has no attribute 'Single'` from:
```
vector_store: AzureSearch = AzureSearch(
azure_search_endpoint=vector_store_address,
azure_search_key=vector_store_password,
index_name=index_name,
embedding_function=embeddings.embed_query,
)
```
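In case it helps triage: the `AzureSearch` vector store appears to target the 11.4.0 beta line of the SDK, where the vector-search field types were added.
```python
# possible workaround (untested, version number is an assumption):
#   pip install "azure-search-documents==11.4.0b6"
# 11.3.0 predates the vector-search additions to SearchFieldDataType.
```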
### Expected behavior
An AzureSearch instance should be created without any errors from LangChain dependencies. The _edm.py file in the azure-search-documents package (11.3.0) does not define `Single` as a data type. | SearchFieldDataType.Single raises error in azuresearch.py, under AzureSearch() class. | https://api.github.com/repos/langchain-ai/langchain/issues/8917/comments | 4 | 2023-08-08T14:27:07Z | 2023-08-11T14:08:16Z | https://github.com/langchain-ai/langchain/issues/8917 | 1,841,431,623 | 8,917 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain version - 0.0.257
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I run the following:
```
llm = HuggingFaceTextGenInference(
    inference_server_url="http://" + settings.self_hosted_server_ip + ":8080",
    max_new_tokens=2000,
    top_k=5,
    top_p=0.95,
    typical_p=0.95,
    temperature=0.01,
    repetition_penalty=1.23,
    callbacks=callbacks,
    stream=True,
)
my_chain = LLMChain(
    llm=llm,  # type: ignore
    prompt=prompt,
    verbose=False,
)
tools = [PythonAstREPLTool()]
custom_agent = LLMSingleActionAgent(
    llm_chain=my_chain,
    output_parser=my_output_parser,
    stop=["\nObservation:"],
)
agent_memory = CustomConversationTokenBufferMemory(
    k=1,
    llm=llm,  # type: ignore
    max_token_limit=4096,
    memory_key="history",
)
return AgentExecutor.from_agent_and_tools(
    agent=custom_agent,
    tools=tools,
    verbose=True,
    memory=agent_memory,
)
```
where the HF inference endpoint is running with the following docker :
```
model=meta-llama/Llama-2-70b-chat-hf
docker run -d --gpus all --shm-size 1g -e HUGGING_FACE_HUB_TOKEN=$token -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:latest --max-input-length 3000 --max-total-tokens 4096 --quantize bitsandbytes --json-output --model-id $model --trust-remote-code
```
The error I get is:
**Token indices sequence length is longer than the specified maximum sequence length for this model (1286 > 1024). Running this sequence through the model will result in indexing errors**
but the Llama-2-70B model's token limit is 4096....
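For what it's worth, that warning usually comes from token counting rather than from the inference server: the base `get_num_tokens` falls back to a GPT-2 tokenizer (max 1024), which the token-buffer memory calls when trimming history. A sketch of one workaround, counting with the actual Llama tokenizer (assumes local access to the tokenizer files; untested):
```python
from transformers import AutoTokenizer
from langchain.llms import HuggingFaceTextGenInference

_llama_tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-70b-chat-hf")

class LlamaAwareTGI(HuggingFaceTextGenInference):
    """Hypothetical subclass: count tokens with the served model's tokenizer."""

    def get_num_tokens(self, text: str) -> int:
        return len(_llama_tokenizer.encode(text))
```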
I tried to test using just:
```
llm_local = HuggingFaceTextGenInference(
    inference_server_url=inference_server_url_local,
    max_new_tokens=2000,
    top_k=10,
    top_p=0.95,
    typical_p=0.95,
    temperature=0.7,
    repetition_penalty=1.03,
)
```
with:
```
llm_chain_local = LLMChain(prompt=prompt, llm=llm_local)
print(llm_chain_local("some query"))
```
and didnt get any error
### Expected behavior
I am not expecting any token limit issues. | running HuggingFaceTextGenInference from an agent gives token limit warning | https://api.github.com/repos/langchain-ai/langchain/issues/8913/comments | 5 | 2023-08-08T12:32:10Z | 2024-01-30T00:41:16Z | https://github.com/langchain-ai/langchain/issues/8913 | 1,841,220,971 | 8,913 |
[
"langchain-ai",
"langchain"
] | ### System Info
`0.0.257`
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Load a document from GCS and check the metadata["source"]; it points to a file in the `/tmp` directory.
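A minimal repro sketch (project/bucket/blob names are hypothetical):
```python
from langchain.document_loaders import GCSFileLoader

loader = GCSFileLoader(project_name="my-project", bucket="my-bucket", blob="docs/report.pdf")
docs = loader.load()
print(docs[0].metadata["source"])
# observed: /tmp/<something>/report.pdf
# expected: gs://my-bucket/docs/report.pdf
```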
### Expected behavior
The source should point to the original file on GCS. | GCSFileLoader stores a wrong source in the metadata | https://api.github.com/repos/langchain-ai/langchain/issues/8911/comments | 2 | 2023-08-08T12:17:01Z | 2023-08-08T21:23:10Z | https://github.com/langchain-ai/langchain/issues/8911 | 1,841,196,684 | 8,911 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain = 0.0.251
Python = 3.10.11
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create an OWL ontology called `dbpedia_sample.ttl` with the following:
``` turtle
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix wikidata: <http://www.wikidata.org/entity/> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix : <http://dbpedia.org/ontology/> .
# added: needed so the dul:hasLocation triple below parses (DBpedia's DUL namespace)
@prefix dul: <http://www.ontologydesignpatterns.org/ont/dul/DUL.owl#> .
:Actor
a owl:Class ;
rdfs:comment "An actor or actress is a person who acts in a dramatic production and who works in film, television, theatre, or radio in that capacity."@en ;
rdfs:label "actor"@en ;
rdfs:subClassOf :Artist ;
owl:equivalentClass wikidata:Q33999 ;
prov:wasDerivedFrom <http://mappings.dbpedia.org/index.php/OntologyClass:Actor> .
:AdministrativeRegion
a owl:Class ;
rdfs:comment "A PopulatedPlace under the jurisdiction of an administrative body. This body may administer either a whole region or one or more adjacent Settlements (town administration)"@en ;
rdfs:label "administrative region"@en ;
rdfs:subClassOf :Region ;
owl:equivalentClass <http://schema.org/AdministrativeArea>, wikidata:Q3455524 ;
prov:wasDerivedFrom <http://mappings.dbpedia.org/index.php/OntologyClass:AdministrativeRegion> .
:birthPlace
a rdf:Property, owl:ObjectProperty ;
rdfs:comment "where the person was born"@en ;
rdfs:domain :Animal ;
rdfs:label "birth place"@en ;
rdfs:range :Place ;
rdfs:subPropertyOf dul:hasLocation ;
owl:equivalentProperty <http://schema.org/birthPlace>, wikidata:P19 ;
prov:wasDerivedFrom <http://mappings.dbpedia.org/index.php/OntologyProperty:birthPlace> .
```
2. Run
``` python
from langchain.graphs import RdfGraph
graph = RdfGraph(
source_file="dbpedia_sample.ttl",
serialization="ttl",
standard="owl"
)
print(graph.get_schema)
```
3. Output
```
In the following, each IRI is followed by the local name and optionally its description in parentheses.
The OWL graph supports the following node types:
<http://dbpedia.org/ontology/Actor> (Actor, An actor or actress is a person who acts in a dramatic production and who works in film, television, theatre, or radio in that capacity.),
<http://dbpedia.org/ontology/AdministrativeRegion> (AdministrativeRegion, A PopulatedPlace under the jurisdiction of an administrative body. This body may administer either a whole region or one or more adjacent Settlements (town administration))
The OWL graph supports the following object properties, i.e., relationships between objects:
<http://dbpedia.org/ontology/birthPlace> (birthPlace, An actor or actress is a person who acts in a dramatic production and who works in film, television, theatre, or radio in that capacity.),
<http://dbpedia.org/ontology/birthPlace> (birthPlace, A PopulatedPlace under the jurisdiction of an administrative body. This body may administer either a whole region or one or more adjacent Settlements (town administration)), <http://dbpedia.org/ontology/birthPlace> (birthPlace, where the person was born)
The OWL graph supports the following data properties, i.e., relationships between objects and literals:
```
### Expected behavior
The issue is that in the SPARQL queries getting the properties the `rdfs:comment` triple pattern always refers to the variable `?cls` which obviously comes from copy/paste code.
For example, getting the RDFS properties via
``` python
rel_query_rdf = prefixes["rdfs"] + (
"""SELECT DISTINCT ?rel ?com\n"""
"""WHERE { \n"""
""" ?subj ?rel ?obj . \n"""
""" OPTIONAL { ?cls rdfs:comment ?com } \n"""
"""}"""
)
```
you can see that the `OPTIONAL` clause refers to `?cls`, but it should be `?rel`.
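i.e., the corrected query would bind the comment to the property variable:
```python
rel_query_rdf = prefixes["rdfs"] + (
    """SELECT DISTINCT ?rel ?com\n"""
    """WHERE { \n"""
    """    ?subj ?rel ?obj . \n"""
    """    OPTIONAL { ?rel rdfs:comment ?com } \n"""
    """}"""
)
```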
The same holds for all other queries regarding properties.
The current status leads to a cartesian product of the properties and all `rdfs:comment` values in the dataset, which can be horribly large and of course leads to misleading and huge prompts (see the output of my sample in the "reproduction" part). | RdfGraph schema retrieval queries for the relation types are not linked by the correct comment variable | https://api.github.com/repos/langchain-ai/langchain/issues/8907/comments | 1 | 2023-08-07T10:57:54Z | 2023-10-25T20:36:59Z | https://github.com/langchain-ai/langchain/issues/8907 | 1,841,077,028 | 8,907 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Implement a Chat Message History class backed up by Google Cloud Firestore.
### Motivation
It's a common No-SQL database, used by a lot of people to build MVPs, due to its friendly pricing.
### Your contribution
I'd submit a PR to implement this, if you guys think that it could be a helpful feature | Cloud Firestore Chat Message History | https://api.github.com/repos/langchain-ai/langchain/issues/8906/comments | 3 | 2023-08-08T10:52:17Z | 2024-07-14T18:14:33Z | https://github.com/langchain-ai/langchain/issues/8906 | 1,841,068,385 | 8,906 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I'm looking for a way to add entries to a vector store when the embeddings already exist and I don't want to calculate them again. However, this does not seem possible with LangChain at the moment. Maybe something like an `add_entries()` function would be nice, where the data needs to be in exactly the form it's stored in the DB.
### Motivation
What if you want to add entries but you already have embeddings? What if you need to calculate embeddings separately from the "add to db" step?
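For what it's worth, some stores already expose something close to this (FAISS, for instance, has `from_embeddings`/`add_embeddings` that accept `(text, embedding)` pairs), but there is no uniform interface across stores. A FAISS sketch (untested):
```python
from langchain.vectorstores import FAISS

pairs = list(zip(texts, precomputed_embeddings))       # embeddings computed elsewhere
vs = FAISS.from_embeddings(pairs, embedding_function)  # embedding fn only used for future queries
vs.add_embeddings(list(zip(more_texts, more_embeddings)))
```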
### Your contribution
I already built a workaround for my own problem but it's very hacky. It would be nice if langchain could do this natively. | VectorStore: Add entries by providing embeddings and not an embedding function | https://api.github.com/repos/langchain-ai/langchain/issues/8905/comments | 1 | 2023-08-08T09:35:08Z | 2023-11-14T16:05:19Z | https://github.com/langchain-ai/langchain/issues/8905 | 1,840,945,541 | 8,905 |
[
"langchain-ai",
"langchain"
] | ### System Info
Name: langchain
Version: 0.0.256
platform: Linux
Python 3.9.16
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
There is a mistake in `langchain/llms/chatglm.py`,

in line 128, where:
**`self.history = self.history + [[None, parsed_response["response"]]]`**
When you run this code, the log file shows this:
2023-08-08 16:44:26,370 - ChatGLM - Query - 谢谢
2023-08-08 16:44:26,370 - ChatGLM - History - [['我将从美国到中国来旅游,出行前希望了解中国的城市', '欢迎问我任何问题。'], [None, '我是一个人工智能助手,我将尽力回答您的问题。'], [None, '我能回答各种问题、提供信息、帮助解决问题等。']]
(The Chinese log content is just sample conversation text, e.g. the query 谢谢 means "Thank you"; the key point is the `None` entries where the user prompt should have been recorded.)
Actually, it should be:
**`self.history = self.history + [[prompt, parsed_response["response"]]]`**
In this way, my input prompt can be recorded in the list correctly!
### Expected behavior
Record my input prompt in a list with its corresponding response as history. | There is a bug in [Integration]-->[ChatGLM] | https://api.github.com/repos/langchain-ai/langchain/issues/8904/comments | 2 | 2023-08-08T09:08:19Z | 2023-11-14T16:05:28Z | https://github.com/langchain-ai/langchain/issues/8904 | 1,840,899,526 | 8,904 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain v0.0.254
python 3.11
### Who can help?
In this commit https://github.com/langchain-ai/langchain/commit/81e5b1ad362e9e6ec955b6a54776322af82050d0#diff-5d00673e4963a0b2c6bf091d22f98c30d267b20fea4d15d9541ba6d1f5d79e4fR20 inheritance from `Serializable` was introduced. This inheritance is the root of the problem. The commit was by @nfcampos. Perhaps you can help?
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Simply run this code:
``` python
from typing import Protocol
from langchain.schema.retriever import BaseRetriever
class Foo(Protocol):
    bar: int

class IAmAFailure(BaseRetriever):
    foo: Foo
```
You will then see the error:
```
Traceback (most recent call last):
File "/home/yolen/Desktop/langchain_bug.py", line 9, in <module>
class IAmAFailure(Serializable):
File "pydantic/main.py", line 197, in pydantic.main.ModelMetaclass.__new__
File "pydantic/fields.py", line 506, in pydantic.fields.ModelField.infer
File "pydantic/fields.py", line 436, in pydantic.fields.ModelField.__init__
File "pydantic/fields.py", line 552, in pydantic.fields.ModelField.prepare
File "pydantic/fields.py", line 639, in pydantic.fields.ModelField._type_analysis
File "/usr/local/lib/python3.11/typing.py", line 1960, in __instancecheck__
raise TypeError("Instance and class checks can only be used with"
TypeError: Instance and class checks can only be used with @runtime_checkable protocols
```
Adding the `@runtime_checkable` decorator does not help (it just changes the error). To my understanding, this problem is caused by the inheritance from `Serializable` in the new release https://github.com/langchain-ai/langchain/blob/fa30a57034b6359e8de36bbb98766d2214acfcbd/libs/langchain/langchain/schema/retriever.py#L21.
### Expected behavior
I would expect that I can use Protocols with retrievers. In my use case, I, among others, inject a class that can compute a query embedding. The `QueryEmbedding` class is a protocol e.g.
```python
class QueryEmbedder(Protocol):
    async def get_embeddings(self, *, text: str) -> list[TextEmbedding]:
        ...

class FooRetriever(BaseRetriever):
    def __init__(self, *, query_embedder: QueryEmbedder) -> None:
        self.query_embedder = query_embedder

    async def _aget_relevant_documents(
        self, query: str,
        *, run_manager: AsyncCallbackManagerForRetrieverRun) -> list[Document]:
        query_embedding = await self.query_embedder.get_embeddings(text=query)
        ...
```
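One workaround sketch (pydantic v1, as pinned by LangChain; untested): stash the protocol-typed dependency in a private attribute, which bypasses field validation and hence the Protocol isinstance check:
```python
from pydantic import PrivateAttr

class FooRetriever(BaseRetriever):
    _query_embedder: QueryEmbedder = PrivateAttr()

    def __init__(self, *, query_embedder: QueryEmbedder, **kwargs):
        super().__init__(**kwargs)
        self._query_embedder = query_embedder
```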
Protocols are very good at reducing the coupling in your code base. Furthermore, I do think that using multiple inheritance will cause problems at some stage. _Composition over inheritance_ | Retrievers inheriting from BaseRetriever are incompatible with typing.Protocol | https://api.github.com/repos/langchain-ai/langchain/issues/8901/comments | 2 | 2023-08-08T05:50:30Z | 2023-11-14T16:07:42Z | https://github.com/langchain-ai/langchain/issues/8901 | 1,840,608,946 | 8,901 |
[
"langchain-ai",
"langchain"
] | ### System Info
I use OpenAIEmbeddings from langchain.embeddings, calling the OpenAI embeddings API.
However, I find it a problem that when LangChain calls the OpenAI interface, the input is a 2D list (a batch of token lists), not a 1D list.
If the input is a 1D list, it works with the `embedding.create` function, but I found out that the OpenAI API endpoint I'm using does not support a 2D list as input.
How can I solve this problem?
The error is shown below:
```
Traceback (most recent call last):
File "..\main.py", line 47, in <module>
langchain_openai()
File "..\main.py", line 42, in langchain_openai
query_result = embeddings.embed_query(text)
File "C:\ProgramData\Anaconda3\envs\python3.8\lib\site-packages\langchain\embeddings\openai.py", line 501, in embed_query
return self.embed_documents([text])[0]
File "C:\ProgramData\Anaconda3\envs\python3.8\lib\site-packages\langchain\embeddings\openai.py", line 473, in embed_documents
return self._get_len_safe_embeddings(texts, engine=self.deployment)
File "C:\ProgramData\Anaconda3\envs\python3.8\lib\site-packages\langchain\embeddings\openai.py", line 359, in _get_len_safe_embeddings
response = embed_with_retry(
File "C:\ProgramData\Anaconda3\envs\python3.8\lib\site-packages\langchain\embeddings\openai.py", line 108, in embed_with_retry
return _embed_with_retry(**kwargs)
File "C:\ProgramData\Anaconda3\envs\python3.8\lib\site-packages\tenacity\__init__.py", line 289, in wrapped_f
return self(f, *args, **kw)
File "C:\ProgramData\Anaconda3\envs\python3.8\lib\site-packages\tenacity\__init__.py", line 379, in __call__
do = self.iter(retry_state=retry_state)
File "C:\ProgramData\Anaconda3\envs\python3.8\lib\site-packages\tenacity\__init__.py", line 314, in iter
return fut.result()
File "C:\ProgramData\Anaconda3\envs\python3.8\lib\concurrent\futures\_base.py", line 437, in result
return self.__get_result()
File "C:\ProgramData\Anaconda3\envs\python3.8\lib\concurrent\futures\_base.py", line 389, in __get_result
raise self._exception
File "C:\ProgramData\Anaconda3\envs\python3.8\lib\site-packages\tenacity\__init__.py", line 382, in __call__
result = fn(*args, **kwargs)
File "C:\ProgramData\Anaconda3\envs\python3.8\lib\site-packages\langchain\embeddings\openai.py", line 105, in _embed_with_retry
response = embeddings.client.create(**kwargs)
File "C:\ProgramData\Anaconda3\envs\python3.8\lib\site-packages\openai\api_resources\embedding.py", line 33, in create
response = super().create(*args, **kwargs)
File "C:\ProgramData\Anaconda3\envs\python3.8\lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
File "C:\ProgramData\Anaconda3\envs\python3.8\lib\site-packages\openai\api_requestor.py", line 298, in request
resp, got_stream = self._interpret_response(result, stream)
File "C:\ProgramData\Anaconda3\envs\python3.8\lib\site-packages\openai\api_requestor.py", line 700, in _interpret_response
self._interpret_response_line(
File "C:\ProgramData\Anaconda3\envs\python3.8\lib\site-packages\openai\api_requestor.py", line 763, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: parse parameter error: type mismatch
```
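For context, `OpenAIEmbeddings._get_len_safe_embeddings` tokenizes the texts and sends batches of token lists, hence the 2D input. If your endpoint only accepts raw strings, one knob that may help, assuming your LangChain version has it, is `check_embedding_ctx_length=False`, which is supposed to skip the tokenization step and send plain strings (treat this as an assumption and verify against your installed version):
```python
embeddings = OpenAIEmbeddings(
    model="text-embedding-ada-002",
    openai_api_base="...",
    openai_api_key="...",
    check_embedding_ctx_length=False,  # assumption: sends raw strings instead of token arrays
)
```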
### Who can help?
@hwchase17 @agola11
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction

```
def langchain_openai():
    from langchain.embeddings import OpenAIEmbeddings

    embeddings = OpenAIEmbeddings(
        model="text-embedding-ada-002",
        openai_api_base="",
        openai_api_key=""
    )
    text = "This is a test query."
    query_result = embeddings.embed_query(text)
    print(query_result[:5])
```
### Expected behavior
I think it's a bug or something to fix | Type Mismatch For using OpenAI Interface | https://api.github.com/repos/langchain-ai/langchain/issues/8899/comments | 4 | 2023-08-08T03:37:55Z | 2023-08-08T10:56:43Z | https://github.com/langchain-ai/langchain/issues/8899 | 1,840,506,624 | 8,899 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Hello, how can I extend the integrated components? For example, with the SQLDatabaseToolkit class I want to continue with other work after the existing tools have executed. I checked the following code in the SQLDatabaseToolkit class:
```python
def get_tools(self) -> List[BaseTool]:
    """Get the tools in the toolkit."""
    return [
        QuerySQLDataBaseTool(db=self.db),
        InfoSQLDatabaseTool(db=self.db),
        ListSQLDatabaseTool(db=self.db),
        QueryCheckerTool(db=self.db, llm=self.llm),
    ]
```
How can I add my own tool class to extend it?
I tried to subclass BaseTool according to the instructions in the documentation, and then override the get_tools method of the SQLDatabaseToolkit class to add my custom BaseTool. However, it failed.
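For reference, a minimal sketch of that approach (tool name/description and the helper are hypothetical):
```python
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.tools import BaseTool

class MyPostProcessTool(BaseTool):
    name = "my_post_process"                      # hypothetical
    description = "Runs after the SQL tools to continue other work."

    def _run(self, text: str) -> str:
        return my_post_process(text)              # hypothetical helper

class MySQLToolkit(SQLDatabaseToolkit):
    def get_tools(self):
        return super().get_tools() + [MyPostProcessTool()]
```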
I'm also considering how to extend the integration for other components to achieve my task scenario.
### Idea or request for content:
_No response_ | how to expand the integrated content | https://api.github.com/repos/langchain-ai/langchain/issues/8898/comments | 5 | 2023-08-08T03:33:04Z | 2023-11-14T16:05:37Z | https://github.com/langchain-ai/langchain/issues/8898 | 1,840,501,592 | 8,898 |
[
"langchain-ai",
"langchain"
] | ### Feature request
In Qdrant vectorstore, we have a method:
```
def retrieve(
self,
collection_name: str,
ids: Sequence[types.PointId],
with_payload: Union[bool, Sequence[str], types.PayloadSelector] = True,
with_vectors: Union[bool, Sequence[str]] = False,
consistency: Optional[types.ReadConsistency] = None,
) -> List[types.Record]:
"""Retrieve stored points by IDs
Args:
collection_name: Name of the collection to lookup in
ids: list of IDs to lookup
with_payload:
- Specify which stored payload should be attached to the result.
- If `True` - attach all payload
- If `False` - do not attach any payload
- If List of string - include only specified fields
- If `PayloadSelector` - use explicit rules
with_vectors:
- If `True` - Attach stored vector to the search result.
- If `False` - Do not attach vector.
- If List of string - Attach only specified vectors.
- Default: `False`
consistency:
Read consistency of the search. Defines how many replicas should be queried before returning the result.
Values:
- int - number of replicas to query, values should present in all queried replicas
- 'majority' - query all replicas, but return values present in the majority of replicas
- 'quorum' - query the majority of replicas, return values present in all of them
- 'all' - query all replicas, and return values present in all replicas
Returns:
List of points
"""
```
I have not seen a similar method in the FAISS implementation. How can I retrieve stored vectors (like `with_vectors=True`) from the FAISS vectorstore?
### Motivation
I am using the FAISS vectorstore, and now:
1. I used FAISS to add some docs, which returned doc ids;
2. I want a method like Qdrant's, to retrieve the embeddings for these doc ids.
Here is the equivalent Qdrant code:
```
ids_inserted = qdrant_client.add_texts(
    [description],
    [{
        "source": "todo"
    }],
)
# now I want to get the embeddings
records_inserted = qdrant_client.retrieve(
    collection_name="test",
    ids=ids_inserted,
    with_vectors=True
)
self.embedding = records_inserted[0].vector
```
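In case it helps, a sketch that relies on FAISS internals (works for `IndexFlat*` indexes; treat as unofficial):
```python
# vs is the LangChain FAISS vector store; doc_id is one of the ids returned by add_texts
position = {d: i for i, d in vs.index_to_docstore_id.items()}[doc_id]
embedding = vs.index.reconstruct(position)  # the raw faiss index reconstructs the stored vector
```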
### Your contribution
Sorry, no. | How to retrieve vectors by ids for LangChain vectorstore FAISS? | https://api.github.com/repos/langchain-ai/langchain/issues/8897/comments | 6 | 2023-08-08T03:26:19Z | 2023-08-09T08:15:18Z | https://github.com/langchain-ai/langchain/issues/8897 | 1,840,497,093 | 8,897 |
[
"langchain-ai",
"langchain"
] | ### System Info
v0.0.256
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Add `print(f"chunk-size: {_chunk_size}")` in `OpenAIEmbeddings._get_len_safe_embeddings` after `_chunk_size = chunk_size or self.chunk_size`, then pass a `chunk_size` argument to `embed_documents` other than the default of 1000. 1000 will be printed instead of the chunk_size passed as an argument.
### Expected behavior
chunk_size arg is not used in OpenAIEmbeddings's embed_documents and aembed_documents methods. It should be passed to self._get_len_safe_embeddings.
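Presumably the fix is just forwarding the argument (sketch, untested):
```python
# in OpenAIEmbeddings.embed_documents (and the aembed_documents twin):
return self._get_len_safe_embeddings(texts, engine=self.deployment, chunk_size=chunk_size)
```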
Check: https://github.com/langchain-ai/langchain/blob/v0.0.256/libs/langchain/langchain/embeddings/openai.py#L463 | chunk_size arg is not used in OpenAIEmbeddings's embed_documents and aembed_documents methods | https://api.github.com/repos/langchain-ai/langchain/issues/8887/comments | 2 | 2023-08-07T21:34:25Z | 2023-11-14T16:06:01Z | https://github.com/langchain-ai/langchain/issues/8887 | 1,840,237,698 | 8,887 |
[
"langchain-ai",
"langchain"
] | ### Feature request
## Description
Add format/lint actions to the pre-commit tool, so users don't have to run these locally on their machines. This is a 2-stage process:
1. Make this part of the local commit process. This will ensure that all commits run these checks explicitly before pushing the change (see the config sketch after this list).
2. Adding this to the CI will ensure that users who choose to ignore setting up pre-commit in their local repo can rely on the CI to fix some of the (fixable) formatting and linting. This also avoids some of the system-specific issues/differences that users might have on their local machines (stale versions of lint tools) by ensuring that the CI runs these on one consistent, up-to-date platform.
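A sketch of a possible `.pre-commit-config.yaml` for stage 1 (tool versions are illustrative):
```yaml
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.0.284
    hooks:
      - id: ruff
        args: [--fix]
  - repo: https://github.com/psf/black
    rev: 23.7.0
    hooks:
      - id: black
```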
### Motivation
The current manual linting process adds churn and friction to the dev workflow, as most devs end up pushing their changes without running these steps and rely on the CI to remind them to run them. Some of the formatting and linting can be automated with CI and does not require a dev's direct involvement.
### Your contribution
I am happy to work on this, but would need the core team's input before going ahead with this change. | Simplify linting workflow by adding to pre-commit | https://api.github.com/repos/langchain-ai/langchain/issues/8884/comments | 3 | 2023-08-07T20:57:44Z | 2023-11-16T16:06:26Z | https://github.com/langchain-ai/langchain/issues/8884 | 1,840,197,092 | 8,884 |
[
"langchain-ai",
"langchain"
] | ### System Info
Based on https://github.com/hwchase17/conversational-retrieval-agent/blob/master/streamlit.py, with a few changes to the code.
Since the beginning of the tests I could not prevent the max context length error.
The best change was using:
```python
llm = ChatOpenAI(temperature=0, streaming=True, model="gpt-3.5-turbo", max_tokens=500)
memory = AgentTokenBufferMemory(memory_key="history", llm=llm, max_history=5, max_token_limit=3000)
```
It took longer, but I still got the error message. All the conversation was logged using LangSmith (username rubensmau@gmail.com): https://smith.langchain.com/projects/p/2e2c3411-5910-4785-985d-a4b31702e6c7/r/2893c334-9650-4125-99df-af6e18481b12
It seemed that the context was under control, but the latest question broke it.
Python version - 3.11.3
Langchain - 0.0.252
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
llm = ChatOpenAI(temperature=0, streaming=True, model="gpt-3.5-turbo", max_tokens=500)
memory = AgentTokenBufferMemory(memory_key="history", llm=llm, max_history=5, max_token_limit=3000)
```
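One thing worth noting: the token-buffer memory only trims *past* turns, so a single turn can still blow the context window if the retrieval tool returns a lot of text. Capping the number of retrieved chunks may help (sketch; `vectorstore` stands in for however the retriever tool is built):
```python
retriever = vectorstore.as_retriever(search_kwargs={"k": 2})  # fewer chunks per tool call
```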
### Expected behavior
All the conversation was logged using LangSmith , username rubensmau@gmail.com ,, https://smith.langchain.com/projects/p/2e2c3411-5910-4785-985d-a4b31702e6c7/r/2893c334-9650-4125-99df-af6e18481b12 | conversational-retrieval-agent using AgentTokenBufferMemory : Still cannot prevent maximum content length errors | https://api.github.com/repos/langchain-ai/langchain/issues/8881/comments | 3 | 2023-08-07T20:04:21Z | 2023-08-11T12:21:33Z | https://github.com/langchain-ai/langchain/issues/8881 | 1,840,131,332 | 8,881 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.256
Python 3.8
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I've developed a Next.js application that uses the Langchain library for chat functionality and is deployed on AWS Amplify. The application works perfectly when running locally, but fails after deployment on AWS Amplify.
The application uses the Langchain library, OpenAIEmbeddings for generating embeddings, and PineconeStore for storing vectors. I've ensured that my environment variables are correctly set up both locally and in the AWS Amplify console.
Here is the code snippet for my chat handler:
```
import type { NextApiRequest, NextApiResponse } from 'next';
import { OpenAIEmbeddings } from 'langchain/embeddings/openai';
import { PineconeStore } from 'langchain/vectorstores/pinecone';
import { makeChain } from '@/utils/makechain';
import { pinecone } from '@/utils/pinecone-client';
import { PINECONE_INDEX_NAME, PINECONE_NAME_SPACE } from '@/config/pinecone';
export default async function handler(
req: NextApiRequest,
res: NextApiResponse,
) {
const { question, history } = req.body;
if (req.method !== 'POST') {
res.status(405).json({ error: 'Method not allowed' });
return;
}
if (!question) {
return res.status(400).json({ message: 'No question in the request' });
}
const sanitizedQuestion = question.trim().replaceAll('\n', ' ');
try {
const index = pinecone.Index(PINECONE_INDEX_NAME);
const vectorStore = await PineconeStore.fromExistingIndex(
new OpenAIEmbeddings(),
{
pineconeIndex: index,
textKey: 'text',
namespace: PINECONE_NAME_SPACE,
},
);
const chain = makeChain(vectorStore);
const response = await chain.call({
question: sanitizedQuestion,
chat_history: history || [],
});
res.status(200).json(response);
} catch (error: any) {
console.log('chat.ts file error: ', error);
res.status(500).json({ error: error.message || 'Something went wrong' });
}
}
```
### Expected behavior
The error message I'm receiving suggests that the Langchain library is assuming I am running on an Azure instance and is expecting an Azure-specific environment variable. However, I am not using Azure, I'm using AWS. The error message states that an 'azureOpenAIApiInstanceName' is missing, which, as I understand, is only relevant if I was using Azure.
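From what I can tell, langchainjs chooses the Azure code path based on which configuration it can see at runtime, so this often traces back to environment variables not reaching Amplify's server runtime (the OpenAI key missing, or an Azure key env var being picked up). Passing the key explicitly can help isolate that (sketch):
```typescript
const vectorStore = await PineconeStore.fromExistingIndex(
  // assumption: OPENAI_API_KEY must be exposed to the runtime, not just the build
  new OpenAIEmbeddings({ openAIApiKey: process.env.OPENAI_API_KEY }),
  { pineconeIndex: index, textKey: 'text', namespace: PINECONE_NAME_SPACE },
);
```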
Has anyone encountered a similar issue, or have any insights into why this might be happening? I've been unable to find any information in the Langchain library documentation about this Azure dependency. | Unexpected Azure Dependency with OpenAI and Langchain Library in Next.js App Deployed on AWS Amplify | https://api.github.com/repos/langchain-ai/langchain/issues/8877/comments | 5 | 2023-08-07T18:07:37Z | 2023-09-01T10:55:18Z | https://github.com/langchain-ai/langchain/issues/8877 | 1,839,976,644 | 8,877 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi, I'm using ConversationalRetrievalChain.from_llm and I notice that sometimes the answer is the follow-up question itself.
Also, sometimes the answer contains several questions and answers all together.
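In case it helps triage: the chain first condenses the chat history plus the new question into a standalone question, and that intermediate output can leak into the answer. Tightening the condense prompt is one thing to try (sketch, untested):
```python
from langchain.prompts import PromptTemplate

# hypothetical stricter condense prompt
CONDENSE_PROMPT = PromptTemplate.from_template(
    "Given the following conversation and a follow up question, rephrase the follow up "
    "question to be a standalone question. Return only the question.\n\n"
    "Chat History:\n{chat_history}\nFollow Up Input: {question}\nStandalone question:"
)
qa = ConversationalRetrievalChain.from_llm(
    llm, retriever, condense_question_prompt=CONDENSE_PROMPT
)
```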
Anyone with the same problem?
regards.
### Suggestion:
_No response_ | Issue: ConversationalRetrievalChain.from_llm sometime answer with the following up question | https://api.github.com/repos/langchain-ai/langchain/issues/8875/comments | 5 | 2023-08-07T17:57:21Z | 2024-04-01T15:48:55Z | https://github.com/langchain-ai/langchain/issues/8875 | 1,839,964,107 | 8,875 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.250
Python 3.11.3
22.5.0 Darwin Kernel Version 22.5.0
### Who can help?
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
This code runs perfectly well if `agenerate` is replaced with `_agenerate`, meaning that the super function is not using the internal implementation correctly.
```
llm = ChatOpenAI(temperature=0.7, max_tokens=max_tokens)
messages = [
SystemMessagePromptTemplate.from_template("You are a funny chatbot"),
HumanMessagePromptTemplate.from_template("Tell me a joke about {topic}")
]
chat_prompt = ChatPromptTemplate.from_messages(
    messages
)
formatted_messages = chat_prompt.format_prompt(
    topic='horses'
).to_messages()

async def async_generate(llm, formatted_messages):
    return await llm.agenerate(messages=formatted_messages)
res = await async_generate(llm, formatted_messages)
```
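A detail that may matter here: on chat models, `generate`/`agenerate` take a *list of message lists* (one inner list per prompt), so a flat list arrives mis-shaped. Sketch of the expected call (based on the `BaseChatModel` signature):
```python
res = await llm.agenerate([formatted_messages])  # note the extra list: List[List[BaseMessage]]
```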
### Expected behavior
agenerate should work the same as _agenerate | ChatOpenAI agenerate does not use internal _agenerate and does not support message roles | https://api.github.com/repos/langchain-ai/langchain/issues/8874/comments | 1 | 2023-08-07T17:26:43Z | 2023-09-25T09:33:50Z | https://github.com/langchain-ai/langchain/issues/8874 | 1,839,922,495 | 8,874 |
[
"langchain-ai",
"langchain"
] | I think most of the people who work with LangChain building products or automating things have noticed how badly the quality of answers degrades when you ask multiple questions using the same chain (i.e., the chain gets longer). It answers the first 2 or 3 questions right, then it may hallucinate and return a wrong answer, fail to answer your question, or even crash, regardless of the GPT model used.
Has anyone found a way to optimize this? One thing I always try is prompt engineering, but it doesn't help much. | Quality of answers gets drastically bad with longer chains in Langchain | https://api.github.com/repos/langchain-ai/langchain/issues/8871/comments | 5 | 2023-08-07T15:40:13Z | 2023-11-19T16:05:51Z | https://github.com/langchain-ai/langchain/issues/8871 | 1,839,744,856 | 8,871 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Agents can deal with prompts in different languages, but they might get confused or become more unreliable if the internal prompts (e.g. the format structure with the action, action input, thought, etc.) are still in English.
That's why I thought about an additional language option when initializing an agent. I tried to come up with an idea of how to implement this efficiently, but I found it particularly hard to do in a way that can be used for every language.
My initial thought was to specify the necessary prompts in a different language and initialize the agent with an extra keyword. With this approach only a finite number of languages can be covered, and it could end up being a mess to store all the prompts in different languages.
My second thought was to initialize the agent with a language keyword and then translate all the prompts into the desired language. That could be too much of a hassle for just initializing an agent with prompts that are actually known.
I wanted to share this and see if this is actually something that could be useful, and maybe gather some ideas of how to implement it.
### Motivation
I've implemented an MRKL agent that writes a detailed article on a specified topic based on internet search. In my specific case, it should write an article about topics related to Germany, e.g. the drought situation in Germany. Naively, my first attempt was to give it a prompt like _Write a detailed article about the drought situation in Germany. The article must be written in German language_.
I discovered two problems:
1. As I was giving the prompt in English, the model "thought" in English as well. Therefore, the action inputs were in English, too. This led to all search results also being in English, primarily from sources (e.g. The Guardian, BBC) outside of Germany reporting about the drought situation in Germany. What I would have liked instead are search results from German newspapers or similar.
2. The text generation was very inconsistent. It wasn't quite able to write the final article in German, as all information was given in English. Sometimes it managed to write the text in German, but it also wrote all intermediate steps (action, action input, Observation, Thought, etc.) in German, too. That led to the tool not being found (tool descriptions were also in English) and to resulting OutputParser errors, because the English keywords _Thought_ and _Action_ etc. could not be found.
So my goal was to create an agent that works solely in the German language, meaning that the prompt, intermediate steps, search results, and the final output are all in German.
So what I did was reimplementing all the necessary MRKL agent functions and templates in a German version. It worked out and the results were as expected and reliable.
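As a sketch of the direction, parts of this can already be expressed through existing hooks: the MRKL agent accepts translated prompt pieces via `agent_kwargs`, though the output parser still has to recognize the translated keywords (illustrative, untested; the German templates are placeholders):
```python
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    agent_kwargs={
        "prefix": GERMAN_PREFIX,                           # hypothetical translated templates
        "suffix": GERMAN_SUFFIX,
        "format_instructions": GERMAN_FORMAT_INSTRUCTIONS,
    },
)
```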
I think that agents with support for different languages could be beneficial for more users: you could give the agent a prompt in another language and have its internal prompts be in that specific language, too.
### Your contribution
I am not really sure how to do that in a good way, so that multiple languages are supported for all the agents, and if that's even desired. But if there is a good strategy, I would like to try to implement it. | Language support for Agents | https://api.github.com/repos/langchain-ai/langchain/issues/8867/comments | 7 | 2023-08-07T13:28:41Z | 2024-02-13T16:14:17Z | https://github.com/langchain-ai/langchain/issues/8867 | 1,839,461,670 | 8,867 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
When using the supabase client I get dependency errors, which I believe originate from the pydantic library.
Supabase requires pydantic<3.0,>=2.1.0.
Langchain requires (I guess) 1.10.12.
With langchain compatible pydantic versions, supabase errors out.
``` python
from supabase.client import Client, create_client
supabase_url = os.environ.get("SUPABASE_URL")
supabase_key = os.environ.get("SUPABASE_SERVICE_KEY")
supabase: Client = create_client(supabase_url, supabase_key)
```
ImportError: cannot import name 'field_validator' from 'pydantic'
But if I upgrade pydantic, langchain starts erroring out.
``` python
from langchain.vectorstores.pgvector import PGVector
```
PydanticUserError: If you use `@root_validator` with pre=False (the default) you MUST specify `skip_on_failure=True`. Note that `@root_validator` is deprecated and should be replaced with `@model_validator`.
Is there any quick fix for this?
Thanks
| Issue: Supabase python client pydantic dependency mismatch | https://api.github.com/repos/langchain-ai/langchain/issues/8866/comments | 15 | 2023-08-07T13:15:04Z | 2023-12-02T16:06:32Z | https://github.com/langchain-ai/langchain/issues/8866 | 1,839,437,012 | 8,866 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I'm trying to chat with a PDF using ConversationalRetrievalChain.
```
embeddings = OpenAIEmbeddings()
vectordb = Chroma(embedding_function=embeddings, persist_directory=directory)
qa_chain = ConversationalRetrievalChain.from_llm(ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.3), vectordb.as_retriever(), memory=memory)
answer = (qa_chain({"question": query}))
```
It works perfectly as it gives the answer from the documents. But it can't alter the tone or even convert the answer into another language when prompted.
How can we change the tone like we do in openai ChatCompletion:
```
response = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=[
{"role": "system", "content": "You are a helpful and friendly chatbot who converts text to a very friendly tone."},
{"role": "user", "content": f"{final_answer}"}
]
)
```
such that it answers from the doc but also transforms the answer according to some given prompt. Right now I have to pass the output received from ConversationalRetrievalChain into the code above in order to modify the tone. Is this kind of functionality missing from ConversationalRetrievalChain?
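One knob that may cover this: `from_llm` accepts `combine_docs_chain_kwargs`, so a system-style instruction can be baked into the answering prompt (sketch, untested; the prompt must keep the `{context}` and `{question}` variables):
```python
from langchain.prompts import PromptTemplate

qa_prompt = PromptTemplate.from_template(
    "You are a helpful and friendly chatbot. Answer in a very friendly tone, "
    "using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}\nAnswer:"
)
qa_chain = ConversationalRetrievalChain.from_llm(
    ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.3),
    vectordb.as_retriever(),
    memory=memory,
    combine_docs_chain_kwargs={"prompt": qa_prompt},
)
```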
### Suggestion:
_No response_ | Issue: How can we combine ConversationalRetrievalChain with openai ChatCompletion | https://api.github.com/repos/langchain-ai/langchain/issues/8864/comments | 2 | 2023-08-07T12:19:11Z | 2023-08-08T19:26:55Z | https://github.com/langchain-ai/langchain/issues/8864 | 1,839,336,132 | 8,864 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Currently GCSFileLoader uses UnstructuredFileLoader in a pre-defined mode; it would be nice to allow specifying a different PDF or other loader for individual files.
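A hypothetical API sketch of what this could look like:
```python
from langchain.document_loaders import GCSFileLoader, PyPDFLoader

loader = GCSFileLoader(
    project_name="my-project",
    bucket="my-bucket",
    blob="docs/report.pdf",
    loader_func=lambda path: PyPDFLoader(path),  # proposed: choose the per-file loader
)
```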
### Motivation
Allow to specify loaders when working with files from GCS.
### Your contribution
yes, I'm happy to. | Allow GCSFileLoader to use alternative loaders for individual files | https://api.github.com/repos/langchain-ai/langchain/issues/8863/comments | 1 | 2023-08-07T11:56:01Z | 2023-08-08T12:14:10Z | https://github.com/langchain-ai/langchain/issues/8863 | 1,839,299,675 | 8,863 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Can we get `ttl` and `sessionId` support in UpstashRedisCache?
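Something along these lines (hypothetical signature; these parameters are what's being proposed, not the current API):
```python
langchain.llm_cache = UpstashRedisCache(redis_=client, ttl=3600, session_id="user-42")
```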
### Motivation
cleaning cache
### Your contribution
suggestion | UpstashRedisCache | https://api.github.com/repos/langchain-ai/langchain/issues/8860/comments | 3 | 2023-08-07T11:20:43Z | 2023-11-13T16:06:01Z | https://github.com/langchain-ai/langchain/issues/8860 | 1,839,241,519 | 8,860 |
[
"langchain-ai",
"langchain"
] | ### System Info
OS: Redhat 8
Python: 3.9
Langchain: 0.0.246
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
So I initially reported this bug with MLflow, but upon further investigation it is related to LangChain.
The following code is a simple representation of the bigger code. I will also post the original at the bottom.
===================================================================
Code:
```python
import os

from langchain import PromptTemplate, LLMChain
from langchain.llms import HuggingFacePipeline

os.environ["CURL_CA_BUNDLE"] = ""

if True:  # run the following code to download the model flan-t5-small from huggingface.co
    from transformers import pipeline

    model = pipeline(model="google/flan-t5-small")  # 'text2text-generation'
    model.save_pretrained("/tmp/model/flan-t5-small")

llm = HuggingFacePipeline.from_model_id(
    model_id="/tmp/model/flan-t5-small",
    task="text2text-generation",
    model_kwargs={"temperature": 1e-10},
)

template = """Translate everything you see after this into French:

{input}
"""
prompt = PromptTemplate(template=template, input_variables=["input"])

llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain("my name is John"))  # works

llm_chain.save("llm_chain.json")

from langchain.chains import load_chain

m = load_chain("llm_chain.json")
print(m("my name is John"))
```
Error trace:
```
{'input': 'my name is John', 'text': " toutefois, je suis en œuvre à l'heure"}
Traceback (most recent call last):
File "a.py", line 37, in <module>
File "/home/haru/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/langchain/chains/base.py", line 258, in __call__
raise e
File "/home/haru/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/langchain/chains/base.py", line 252, in __call__
self._call(inputs, run_manager=run_manager)
File "/home/haru/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/langchain/chains/llm.py", line 92, in _call
response = self.generate([inputs], run_manager=run_manager)
File "/home/haru/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/langchain/chains/llm.py", line 102, in generate
return self.llm.generate_prompt(
File "/home/haru/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/langchain/llms/base.py", line 451, in generate_prompt
return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
File "/home/haru/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/langchain/llms/base.py", line 582, in generate
output = self._generate_helper(
File "/home/haru/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/langchain/llms/base.py", line 488, in _generate_helper
raise e
File "/home/haru/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/langchain/llms/base.py", line 475, in _generate_helper
self._generate(
File "/home/haru/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/langchain/llms/base.py", line 961, in _generate
self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
File "/home/haru/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/langchain/llms/huggingface_pipeline.py", line 168, in _call
response = self.pipeline(prompt)
TypeError: 'NoneType' object is not callable
```
===================================================================
Original code where this bug occurred (MLflow):
```python
import mlflow
from datetime import datetime
import logging

logging.getLogger("mlflow").setLevel(logging.DEBUG)

from langchain import PromptTemplate, LLMChain, HuggingFaceHub
from langchain.llms import HuggingFacePipeline
import os


def now_str():
    return datetime.now().strftime("%Y%m%d%H%M%S")


os.environ["CURL_CA_BUNDLE"] = ""

if True:
    from transformers import pipeline

    model = pipeline(model="google/flan-t5-small")  # 'text2text-generation'
    model.save_pretrained("/tmp/model/flan-t5-small")

llm = HuggingFacePipeline.from_model_id(
    model_id="/tmp/model/flan-t5-small",
    task="text2text-generation",
    model_kwargs={"temperature": 1e-10},
)

template = """Translate everything you see after this into French:

{input}
"""
prompt = PromptTemplate(template=template, input_variables=["input"])

llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run("my name is John"))  # This is working !!
# Output: {'input': 'my name is John', 'text': "j'ai le nom de John"}

experiment_id = mlflow.create_experiment(f"HF_LLM_{now_str()}")
with mlflow.start_run(experiment_id=experiment_id) as run:
    logged_model = mlflow.langchain.log_model(
        lc_model=llm_chain,
        artifact_path="HF_LLM",
    )

m = mlflow.langchain.load_model(logged_model.model_uri)
m.run("my name is John")
```
Error Trace:
```
Traceback (most recent call last):
File "a.py", line 51, in <module>
m.run("my name is John")
File "/home/haru/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/langchain/chains/base.py", line 451, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
File "/home/haru/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/langchain/chains/base.py", line 258, in __call__
raise e
File "/home/haru/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/langchain/chains/base.py", line 252, in __call__
self._call(inputs, run_manager=run_manager)
File "/home/haru/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/langchain/chains/llm.py", line 92, in _call
response = self.generate([inputs], run_manager=run_manager)
File "/home/haru/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/langchain/chains/llm.py", line 102, in generate
return self.llm.generate_prompt(
File "/home/haru/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/langchain/llms/base.py", line 451, in generate_prompt
return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
File "/home/haru/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/langchain/llms/base.py", line 582, in generate
output = self._generate_helper(
File "/home/haru/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/langchain/llms/base.py", line 488, in _generate_helper
raise e
File "/home/haru/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/langchain/llms/base.py", line 475, in _generate_helper
self._generate(
File "/home/haru/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/langchain/llms/base.py", line 961, in _generate
self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
File "/home/haru/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/langchain/llms/huggingface_pipeline.py", line 168, in _call
response = self.pipeline(prompt)
TypeError: 'NoneType' object is not callable
```
===================================================================
Another Example:
```python
import mlflow  # missing in the original snippet but used below
from datetime import datetime
import logging

logging.getLogger("mlflow").setLevel(logging.DEBUG)

from langchain import PromptTemplate, LLMChain, HuggingFaceHub
from langchain.llms import HuggingFacePipeline
import os


def now_str():
    return datetime.now().strftime("%Y%m%d%H%M%S")


os.environ["CURL_CA_BUNDLE"] = ''

if True:  # run the following code to download the model flan-t5-large from huggingface.co
    from transformers import pipeline

    model = pipeline(model="google/flan-t5-large")  # 'text2text-generation'
    model.save_pretrained("/tmp/model/flan-t5-large")

llm = HuggingFacePipeline.from_model_id(
    model_id="/tmp/model/flan-t5-large",
    task="text2text-generation",
    model_kwargs={"temperature": 1e-10},
)

template = """Translate everything you see after this into French:

{input}
"""
prompt = PromptTemplate(template=template, input_variables=["input"])

llm_chain = LLMChain(
    prompt=prompt,
    llm=llm,
)

llm_chain("my name is John")  # This is working !!
# Output: {'input': 'my name is John', 'text': "j'ai le nom de John"}

experiment_id = mlflow.create_experiment(f'HF_LLM_{now_str()}')
with mlflow.start_run(experiment_id=experiment_id) as run:
    logged_model = mlflow.langchain.log_model(
        lc_model=llm_chain,
        artifact_path="HF_LLM",
    )

# =============== This throws error ==============
input_str = "My name is John"
loaded_model = mlflow.pyfunc.load_model(logged_model.model_uri)
output = loaded_model.predict(
    [
        {"input": input_str},
        {"input": "Do you like coffee?"},
    ]
)
print(output)
```
Error Trace:
<code><pre>
2023/08/07 08:57:08 WARNING mlflow.langchain.api_request_parallel_processor: Request #0 failed with TypeError("'NoneType' object is not callable")
2023/08/07 08:57:08 WARNING mlflow.langchain.api_request_parallel_processor: Request #1 failed with TypeError("'NoneType' object is not callable")
---------------------------------------------------------------------------
MlflowException Traceback (most recent call last)
<ipython-input-19-29b0feddacd1> in <cell line: 3>()
3 with project.setup_mlflow(mf) as mlflow:
4 loaded_model = mlflow.pyfunc.load_model(logged_model.model_uri)
----> 5 output = loaded_model.predict(
6 [
7 {
/hadoopfs/fs1/dataiku/data_dir/code-envs/python/mlflow_25_python_39/lib/python3.9/site-packages/mlflow/pyfunc/__init__.py in predict(self, data)
426 raise
427
--> 428 return self._predict_fn(data)
429
430 @experimental
/hadoopfs/fs1/dataiku/data_dir/code-envs/python/mlflow_25_python_39/lib/python3.9/site-packages/mlflow/langchain/__init__.py in predict(self, data)
654 "Input must be a pandas DataFrame or a list of strings or a list of dictionaries",
655 )
--> 656 return process_api_requests(lc_model=self.lc_model, requests=messages)
657
658
/hadoopfs/fs1/dataiku/data_dir/code-envs/python/mlflow_25_python_39/lib/python3.9/site-packages/mlflow/langchain/api_request_parallel_processor.py in process_api_requests(lc_model, requests, max_workers)
138 # after finishing, log final status
139 if status_tracker.num_tasks_failed > 0:
--> 140 raise mlflow.MlflowException(
141 f"{status_tracker.num_tasks_failed} tasks failed. See logs for details."
142 )
MlflowException: 2 tasks failed. See logs for details.
</code></pre>
Looks like langchain doesn't restore the pipeline.
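For what it's worth, a possible patch-up after loading (my own sketch, with assumptions: `m` is the loaded `LLMChain` and its `llm` is the `HuggingFacePipeline` whose `pipeline` attribute came back as `None`):
```python
# hypothetical workaround: rebuild the transformers pipeline by hand after loading
m = mlflow.langchain.load_model(logged_model.model_uri)
m.llm.pipeline = pipeline("text2text-generation", model=model, tokenizer=tokenizer2)
m.run("my name is John")
```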
### Expected behavior
Langchain loads the chain successfully. | TypeError("'NoneType' object is not callable") | https://api.github.com/repos/langchain-ai/langchain/issues/8858/comments | 4 | 2023-08-07T10:15:56Z | 2023-11-16T16:06:31Z | https://github.com/langchain-ai/langchain/issues/8858 | 1,839,139,204 | 8,858
[
"langchain-ai",
"langchain"
] | ### System Info
I was trying to create FAISS embeddings that would work on different platforms so I tried to use:
`os.environ['FAISS_NO_AVX2'] = '1' `
as recommended in https://github.com/langchain-ai/langchain/blob/6cdd4b5edca511b0015f1b39102225fe638d8359/langchain/vectorstores/faiss.py
It works on Windows, but I am getting `TypeError: IndexFlatCodes.add() missing 1 required positional argument: 'x'` when I try to create embeddings in a Docker image.
Full error:
```
TypeError: IndexFlatCodes.add() missing 1 required positional argument: 'x'
Traceback:
File "/usr/local/lib/python3.11/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 565, in _run_script
exec(code, module.__dict__)
File "/app/src/pages/1_💬__AI-Chat.py", line 127, in <module>
chatbot = utils.setup_chatbot(
^^^^^^^^^^^^^^^^^^^^
File "/app/./src/modules/utils.py", line 121, in setup_chatbot
vectors = embeds.getDocEmbeds(file, uploaded_file.name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/./src/modules/embedder.py", line 104, in getDocEmbeds
self.storeDocEmbeds(file, original_filename)
File "/app/./src/modules/embedder.py", line 86, in storeDocEmbeds
vectors = FAISS.from_documents(data, embeddings)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain/vectorstores/base.py", line 336, in from_documents
return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain/vectorstores/faiss.py", line 550, in from_texts
return cls.__from(
^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain/vectorstores/faiss.py", line 505, in __from
index.add(vector)
```
langchain==0.0.226
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.document_loaders.csv_loader import CSVLoader
from langchain.vectorstores import FAISS
from langchain.embeddings.openai import OpenAIEmbeddings
loader = CSVLoader(file_path=tmp_file_path, encoding="utf-8",csv_args={
'delimiter': ',',})
data = loader.load()
embeddings = OpenAIEmbeddings(...)
vectors = FAISS.from_documents(data, embeddings)
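A hedged note (my reading of langchain's faiss import logic, worth verifying): the flag is only consulted when faiss is first imported, so it has to be set before anything faiss-related is imported in the Docker process, and the installed faiss-cpu wheel must actually ship the non-AVX2 build:
```python
import os
os.environ["FAISS_NO_AVX2"] = "1"  # must happen before the first faiss import

from langchain.vectorstores import FAISS  # imported only after the flag is set
```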
### Expected behavior
Embeddings should be generated. | Bug with os.environ['FAISS_NO_AVX2'] = '1' | https://api.github.com/repos/langchain-ai/langchain/issues/8857/comments | 4 | 2023-08-07T08:04:40Z | 2024-02-18T16:07:51Z | https://github.com/langchain-ai/langchain/issues/8857 | 1,838,913,498 | 8,857
[
"langchain-ai",
"langchain"
] | ### Feature request
Currently, when we initialize an agent we can only pass one LLM, which is responsible for both tool selection and chatting. There should be a feature that allows us to define separate LLMs for tool selection and chatting respectively.
### Motivation
It has been observed that text LLMs are better at decision making, which is helpful when we have multiple tools, but their chat responses are not that impressive. On the other hand, chat models are better at chatting but not as good at decision making, which causes problems when we have multiple tools. Therefore, I want to harness the best of both in a single agent.
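A rough sketch of how this might be emulated today (my own workaround idea, not an existing feature): give the agent a text LLM for decisions and expose the chat model through a tool:
```python
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

decision_llm = OpenAI(temperature=0)    # text model, used for tool selection
chat_llm = ChatOpenAI(temperature=0.7)  # chat model, used for the final reply

reply_tool = Tool(
    name="compose_reply",
    func=LLMChain(llm=chat_llm, prompt=PromptTemplate.from_template("{input}")).run,
    description="Use this to write the final conversational answer.",
)
agent = initialize_agent([reply_tool], decision_llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)
```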
### Your contribution
Yes I can submit a PR if certain guidance is provided. | Separate LLMs for Tool Selection and Chatting in Agent Initialization | https://api.github.com/repos/langchain-ai/langchain/issues/8855/comments | 1 | 2023-08-07T07:38:07Z | 2023-11-13T16:06:11Z | https://github.com/langchain-ai/langchain/issues/8855 | 1,838,870,679 | 8,855 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hello! I am trying to use LangSmith to test my LLM. I've created a dataset, but I'm confused, as this dataset doesn't seem to work even though I am using `StuffDocumentsChain` in testing (I think) the same way I'm using it in production. Here's how I call it in production:
```
docs_chain = StuffDocumentsChain(
llm_chain=LLMChain(
llm=ChatOpenAI(
temperature=0,
),
prompt=prompt,
verbose=True,
),
document_variable_name="context",
verbose=True,
document_prompt=document_prompt,
)
...
answer = await docs_chain.arun(
input_documents=llm_context["docs"], **llm_context["inputs"]
)
...
```
So, this accepts 4 inputs (I believe coming from `input_documents` and `llm_context["inputs"]`): `'question', 'chat_history', 'input_documents', 'organization_uuid'`
However, when I want to test this with LangSmith I'm trying to do:
```
def create_chain():
messages = [SystemMessagePromptTemplate.from_template(system_prompt)]
messages.append(HumanMessagePromptTemplate.from_template('{question}'))
retrieval_prompt = ChatPromptTemplate.from_messages(messages)
combine_docs_chain = StuffDocumentsChain(
llm_chain=LLMChain(
llm=ChatOpenAI(
temperature=0,
),
prompt=retrieval_prompt,
verbose=True,
),
document_variable_name="context",
verbose=True,
document_prompt=document_prompt,
)
return combine_docs_chain
...
await arun_on_dataset(
client=client,
dataset_name=dataset_name,
llm_or_chain_factory=create_chain,
evaluation=eval_config,
verbose=True,
)
```
but I'm getting:
```
langchain.smith.evaluation.runner_utils.InputFormatError: Example inputs do not match chain input keys. Please provide an input_mapper to convert the example.inputs to a compatible format for the chain you wish to evaluate.Expected: ['input_documents']. Got: dict_keys(['question', 'chat_history', 'input_documents', 'organization_uuid'])
```
I don't understand this, as I'm calling this in the same way... shouldn't those inputs map in the same way?
As the error states, I can also pass an `input_mapper` to just convert my example inputs into the required format, but then I would be throwing away the `question`, etc., and I want that `question` to be part of the test.
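For reference, a minimal sketch of what such a mapper could look like if it kept every key the chain's prompt consumes (key names are assumptions based on the error message):
```python
def input_mapper(example_inputs: dict) -> dict:
    # pass everything through; the prompt still consumes question/chat_history/etc.
    return {
        "input_documents": example_inputs["input_documents"],
        "question": example_inputs["question"],
        "chat_history": example_inputs["chat_history"],
        "organization_uuid": example_inputs["organization_uuid"],
    }

await arun_on_dataset(
    client=client,
    dataset_name=dataset_name,
    llm_or_chain_factory=create_chain,
    evaluation=eval_config,
    input_mapper=input_mapper,
)
```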
I'm probably thinking about this wrong. Let me know what you would do here, thank you!
### Suggestion:
_No response_ | Issue: Having trouble passing question with LangSmith | https://api.github.com/repos/langchain-ai/langchain/issues/8854/comments | 1 | 2023-08-07T07:08:48Z | 2023-08-08T03:07:45Z | https://github.com/langchain-ai/langchain/issues/8854 | 1,838,826,396 | 8,854 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi there, I have recently been using Langchain to build my toy chatbot. I used both **Qdrant** cloud storage and local storage for testing. When I use local storage for the **vector_store** and **search_kwargs**, everything is good. But when I switch to Qdrant cloud storage, the retriever of the vector DB reports no matching **search_kwargs**. My partial code is shown below.
<img width="947" alt="image" src="https://github.com/langchain-ai/langchain/assets/59675745/50d0e930-38a8-4d69-b294-656435a5fdc3">
The error:
<img width="1012" alt="image" src="https://github.com/langchain-ai/langchain/assets/59675745/f800c82c-ac75-4158-8dc2-ae32a271b7c0">
### Suggestion:
_No response_ | Issue: AssertionError: Unknown arguments: ['fetch_k', 'maximal_marginal_relevance'] when specifyig search_kwargs ini from_llm() function call | https://api.github.com/repos/langchain-ai/langchain/issues/8852/comments | 1 | 2023-08-07T06:33:40Z | 2023-08-07T16:20:46Z | https://github.com/langchain-ai/langchain/issues/8852 | 1,838,777,588 | 8,852 |
[
"langchain-ai",
"langchain"
] | ### Feature request
```
from langchain.indexes import GraphIndexCreator
from langchain.graphs.networkx_graph import KnowledgeTriple
index_creator = GraphIndexCreator(llm=llm)
graph = index_creator.from_text('text to RDF')
graph.add_triple(KnowledgeTriple('aaa', '10', 'bbb'))
graph.add_triple(KnowledgeTriple('ccc', '30', 'ddd'))
graph.add_triple(KnowledgeTriple('eee', '10', 'aaa'))
graph.add_triple(KnowledgeTriple('ffff', '5', 'ddd'))
```
Using the above code, I have created a sample NetworkX graph. Now I want to visualize this graph using pyvis/matplotlib. How can I do that?
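A minimal sketch of what I mean (my own code; the triple ordering from `get_triples()` is an assumption, please check your version):
```python
import matplotlib.pyplot as plt
import networkx as nx

G = nx.DiGraph()
for subj, obj, rel in graph.get_triples():  # assumption: (subject, object, relation)
    G.add_edge(subj, obj, label=rel)

pos = nx.spring_layout(G, seed=42)
nx.draw(G, pos, with_labels=True, node_color="lightblue", node_size=1500)
nx.draw_networkx_edge_labels(G, pos, edge_labels=nx.get_edge_attributes(G, "label"))
plt.show()
```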
### Motivation
Graph visualization support
### Your contribution
Graph visualization support | Visualize the Networkx Entity Graph created via the GraphIndexCreator class using pyvis/matplotlib | https://api.github.com/repos/langchain-ai/langchain/issues/8851/comments | 2 | 2023-08-07T06:20:54Z | 2023-11-13T16:06:16Z | https://github.com/langchain-ai/langchain/issues/8851 | 1,838,761,710 | 8,851 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
This is more like a question than a feature request.
Google Vertex AI has support for some task-specific models which are specially trained for certain actions.
I know Langchain has support for LLM-based models in general, but do you ever plan to integrate these task-specific models into your library? This (by Google's definition) is a Language Service Model, but it might not be powered by Gen AI technology.
<img width="932" alt="image" src="https://github.com/langchain-ai/langchain/assets/97156231/867a3101-e429-4716-a3ae-62c3c4c90813">
### Suggestion:
_No response_ | Issue: Support for Custom VertexAI models | https://api.github.com/repos/langchain-ai/langchain/issues/8850/comments | 1 | 2023-08-07T05:22:00Z | 2023-11-13T16:06:20Z | https://github.com/langchain-ai/langchain/issues/8850 | 1,838,703,116 | 8,850 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version: `0.0.254`
Python version: `3.10.2`
Elasticsearch version: `7.17.0`
System Version: macOS 13.4 (22F66)
Model Name: MacBook Pro
Model Identifier: Mac14,10
Chip: Apple M2 Pro
Total Number of Cores: 12 (8 performance and 4 efficiency)
Memory: 32 GB
### Who can help?
@agola11 @hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
run this using `python3 script.py`
`script.py`:
```python
import os
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import TextLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import ElasticVectorSearch
def main():
text_path = "some-test.txt"
loader = TextLoader(text_path)
data = loader.load()
text_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
chunk_size=1000, chunk_overlap=0
) # I have also tried various chunk sizes, but still have the same error
documents = text_splitter.split_documents(data)
api_key = "..."
embeddings = OpenAIEmbeddings(openai_api_key=api_key)
os.environ["ELASTICSEARCH_URL"] = "..."
db = ElasticVectorSearch.from_documents(
documents,
embeddings,
index_name="laurad-test",
)
print(db.client.info())
db = ElasticVectorSearch(
index_name="laurad-test",
embedding=embeddings,
elasticsearch_url="..."
)
qa = RetrievalQA.from_chain_type(
llm=ChatOpenAI(temperature=0, openai_api_key=api_key),
chain_type="stuff",
retriever=db.as_retriever(),
)
query = "Hi
qa.run(query) # Error here
if __name__ == "__main__":
main()
```
Error traceback:
```
RequestError Traceback (most recent call last)
Cell In[8], line 2
1 query = "What is ARB in NVBugs?"
----> 2 qa.run(query)
File [/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chains/base.py:451](https://file+.vscode-resource.vscode-cdn.net/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chains/base.py:451), in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
449 if len(args) != 1:
450 raise ValueError("`run` supports only one positional argument.")
--> 451 return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
452 _output_key
453 ]
455 if kwargs and not args:
456 return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
457 _output_key
458 ]
File [/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chains/base.py:258](https://file+.vscode-resource.vscode-cdn.net/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chains/base.py:258), in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info)
256 except (KeyboardInterrupt, Exception) as e:
257 run_manager.on_chain_error(e)
--> 258 raise e
259 run_manager.on_chain_end(outputs)
260 final_outputs: Dict[str, Any] = self.prep_outputs(
261 inputs, outputs, return_only_outputs
262 )
...
--> 315 raise HTTP_EXCEPTIONS.get(status_code, TransportError)(
316 status_code, error_message, additional_info
317 )
RequestError: RequestError(400, 'search_phase_execution_exception', "class_cast_exception: class org.elasticsearch.index.fielddata.ScriptDocValues$Doubles cannot be cast to class org.elasticsearch.xpack.vectors.query.VectorScriptDocValues$DenseVectorScriptDocValues (org.elasticsearch.index.fielddata.ScriptDocValues$Doubles is in unnamed module of loader 'app'; org.elasticsearch.xpack.vectors.query.VectorScriptDocValues$DenseVectorScriptDocValues is in unnamed module of loader java.net.FactoryURLClassLoader @7808fb9)")
```
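A hedged guess at the cause (an assumption, not confirmed): if the `laurad-test` index already existed, the `vector` field may have been dynamically mapped as plain floats instead of `dense_vector`, which would explain the cast failure in the similarity script. Recreating the index with an explicit mapping before indexing is one thing to try:
```python
from elasticsearch import Elasticsearch

es = Elasticsearch("...")  # same ELASTICSEARCH_URL as above
es.indices.delete(index="laurad-test", ignore_unavailable=True)
es.indices.create(index="laurad-test", body={
    "mappings": {"properties": {
        "vector": {"type": "dense_vector", "dims": 1536},  # 1536 = ada-002 embedding size
        "text": {"type": "text"},
    }}
})
```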
### Expected behavior
I expect to ask questions and have answers provided back using the `langchain.chains.retrieval_qa.base.RetrievalQA` class. However, I am getting an error 400 when trying to query the LLM.
**Note**: I do not get the same error when using ChromaDB or OpenSearch as the retriever. | RequestError: RequestError(400, 'search_phase_execution_exception...) with ElasticSearch as Vector store when querying | https://api.github.com/repos/langchain-ai/langchain/issues/8849/comments | 11 | 2023-08-07T04:58:11Z | 2024-03-13T19:57:19Z | https://github.com/langchain-ai/langchain/issues/8849 | 1,838,682,846 | 8,849 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Currently,
I'm using OpenAI GPT-3.5 as the model and Chroma DB to store vectors.
I have some PDF documents with 2,000 total pages. The page count will increase by about 100 pages every day,
and in the end, the total target that must be ingested is around 20,000 pages.
I have 2 questions:
1. I worry about the token limitations of GPT-3.5. Any ideas/suggestions for working around token limits with many source documents?
2. For Chroma DB, does Chroma DB have limitations on storing vectors?
I am already using a text splitter:
`textSplitter = RecursiveCharacterTextSplitter(chunk_size=1536, chunk_overlap=200,separators=["#####","\n\n","\n","====="])`
I accept all ideas and suggestions for this, thanks for the advice.
Please take note: I can't make a summary of every document.
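One pattern that may help with (1) (a hedged sketch; `vectordb` stands for the Chroma store built from the splits): retrieval keeps each prompt to the top-k chunks, so the stored page count doesn't affect per-question token usage:
```python
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI

retriever = vectordb.as_retriever(search_kwargs={"k": 4})  # only 4 chunks hit the prompt
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(temperature=0),
    chain_type="map_reduce",  # or "stuff" while k * chunk_size stays small
    retriever=retriever,
)
```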
### Suggestion:
_No response_ | Issue: ChromaDB document and token openAI limitations | https://api.github.com/repos/langchain-ai/langchain/issues/8846/comments | 2 | 2023-08-07T03:48:48Z | 2023-11-13T16:06:25Z | https://github.com/langchain-ai/langchain/issues/8846 | 1,838,622,804 | 8,846 |
[
"langchain-ai",
"langchain"
] | langchain==0.0.20
from langchain.schema import messages_to_dict
----> 1 from langchain.schema import messages_to_dict
ModuleNotFoundError: No module named 'langchain.schema' | ModuleNotFoundError: No module named 'langchain.schema' | https://api.github.com/repos/langchain-ai/langchain/issues/8843/comments | 4 | 2023-08-07T02:25:35Z | 2023-11-13T16:06:30Z | https://github.com/langchain-ai/langchain/issues/8843 | 1,838,563,345 | 8,843 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version: 0.0.253
Python:3.11
### Who can help?
@agola11 @hwchase17 @eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
1. Environment Setup:
Ensure you're using Python 3.11.
Install the necessary libraries and dependencies:
```bash
pip install fastapi uvicorn aiohttp langchain
```
2. APIChain Initialization:
Set up the APIChain utility using the provided API documentation and the chosen language model:
```python
from langchain import APIChain
chain = APIChain.from_llm_and_api_docs(api_docs=openapi.MY_API_DOCS, llm=choosen_llm, verbose=True, headers=headers)
```
3. Run the FastAPI application:
Use a tool like Uvicorn to start your FastAPI app:
```bash
uvicorn your_app_name:app --reload
```
4. Trigger the API Endpoint:
Make a request to the FastAPI endpoint that uses the APIChain utility. This could be through tools like curl, Postman, or directly from a browser, depending on how your API is set up.
5. Execute the Callback:
Inside the relevant endpoint, ensure you have the following snippet:
```python
with get_openai_callback() as cb:
response = await chain.arun(user_query)
```
6. Observe the Error:
You should encounter a TypeError indicating a conflict with the auth argument in the aiohttp.client.ClientSession.request() method. Because of providing header to APIChain and running it with ```arun``` method.
### Expected behavior
Request Execution:
The chain.arun(user_query) method should interact with the intended external service or API without any issues.
The auth parameter, when used in the underlying request to the external service (in aiohttp), should be correctly applied without conflicts or multiple definitions. | TypeError Due to Duplicate 'auth' Argument in aiohttp Request when provide header to APIChain | https://api.github.com/repos/langchain-ai/langchain/issues/8842/comments | 2 | 2023-08-06T23:55:31Z | 2023-09-25T09:40:35Z | https://github.com/langchain-ai/langchain/issues/8842 | 1,838,467,546 | 8,842 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version: `0.0.240`
Python version: `3.10.2`
Elasticsearch version: `7.17.0`
System Version: macOS 13.4 (22F66)
Model Name: MacBook Pro
Model Identifier: Mac14,10
Chip: Apple M2 Pro
Total Number of Cores: 12 (8 performance and 4 efficiency)
Memory: 32 GB
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [x] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
run this using `python3 script.py`
`script.py`:
```python
import os
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import TextLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import ElasticVectorSearch
def main():
text_path = "some-test.txt"
loader = TextLoader(text_path)
data = loader.load()
text_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
chunk_size=1000, chunk_overlap=0
) # I have also tried various chunk sizes, but still have the same error
documents = text_splitter.split_documents(data)
api_key = "..."
embeddings = OpenAIEmbeddings(openai_api_key=api_key)
os.environ["ELASTICSEARCH_URL"] = "..."
db = ElasticVectorSearch.from_documents(
documents,
embeddings,
index_name="laurad-test",
)
print(db.client.info())
db = ElasticVectorSearch(
index_name="laurad-test",
embedding=embeddings,
elasticsearch_url="..."
)
qa = RetrievalQA.from_chain_type(
llm=ChatOpenAI(temperature=0, openai_api_key=api_key),
chain_type="stuff",
retriever=db.as_retriever(),
)
if __name__ == "__main__":
main()
```
Error traceback:
```
Traceback (most recent call last):
File "/Users/laurad/dev/LLM/public_test.py", line 46, in <module>
main()
File "/Users/laurad/dev/LLM/public_test.py", line 41, in main
retriever=db.as_retriever(),
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/vectorstores/base.py", line 458, in as_retriever
tags.extend(self.__get_retriever_tags())
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/vectorstores/base.py", line 452, in __get_retriever_tags
if self.embeddings:
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/vectorstores/elastic_vector_search.py", line 158, in embeddings
return self.embeddings
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/vectorstores/elastic_vector_search.py", line 158, in embeddings
return self.embeddings
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/vectorstores/elastic_vector_search.py", line 158, in embeddings
return self.embeddings
[Previous line repeated 993 more times]
RecursionError: maximum recursion depth exceeded
```
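A hedged reading of the trace (my inference from line 158 looping on itself): the `embeddings` property returns `self.embeddings`, i.e. itself. Since the constructor stores the model on `self.embedding`, a tiny subclass can shadow the property until this is fixed upstream:
```python
from langchain.vectorstores import ElasticVectorSearch

class PatchedElasticVectorSearch(ElasticVectorSearch):
    @property
    def embeddings(self):
        return self.embedding  # assumption: __init__ stores the embedding model here
```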
### Expected behavior
I expect to ask questions and have answers provided back using the `langchain.chains.retrieval_qa.base.RetrievalQA` class. However, I am getting a `RecursionError` when creating the retrieval chain.
**Note**: I do not get the same error when using ChromaDB or OpenSearch as the retriever.
| RecursionError: maximum recursion depth exceeded with ElasticVectorSearch during RetrievalQA | https://api.github.com/repos/langchain-ai/langchain/issues/8836/comments | 2 | 2023-08-06T19:28:17Z | 2023-08-06T21:40:49Z | https://github.com/langchain-ai/langchain/issues/8836 | 1,838,335,052 | 8,836 |
[
"langchain-ai",
"langchain"
] | I initialized the tool and agent as follows:
```
llm = ChatOpenAI(
openai_api_key=OPENAI_API_KEY,
model_name = 'gpt-3.5-turbo',
temperature=0.0,
)
conversational_memory = ConversationBufferWindowMemory(
memory_key='chat_history',
k=6,
return_messages=True
)
qa = RetrievalQA.from_chain_type(
llm=llm,
chain_type='stuff',
retriever=vectorstore.as_retriever(),
)
tools = [
Tool(
name='Fressnapf Knowledge Base',
func=qa.run,
description=("""
Only use this tool when you need to know more about Fressnapf service and products
""")
)
]
agent = initialize_agent(
agent='chat-conversational-react-description',
tools=tools,
llm=llm,
verbose=True,
max_iterations=3,
early_stopping_method='generate',
memory=conversational_memory,
)
```
Now, when I'm asking the agent some basic questions, it always uses the tool even if it's not useful for answering the question.
After running `agent('Do you have ideas for a birthday gift?')` the agent directly uses the tool.
```
> Entering new AgentExecutor chain...
{
"action": "Fressnapf Knowledge Base",
"action_input": "birthday gift ideas"
}
Observation: Here are some birthday gift ideas:
1. Personalized photo album or picture frame
2. Spa or wellness gift set
3. Subscription to a favorite magazine or streaming service
4. Cooking or baking class
5. Outdoor adventure experience, such as a hiking or kayaking trip
6. Customized jewelry or accessories
7. Book or a set of books by their favorite author
8. Tickets to a concert, play, or sports event
9. A gift card to their favorite restaurant or store
10. A thoughtful handwritten letter or card expressing your love and appreciation.
Thought:{
"action": "Final Answer",
"action_input": "Here are some birthday gift ideas:\n\n1. Personalized photo album or picture frame\n2. Spa or wellness gift set\n3. Subscription to a favorite magazine or streaming service\n4. Cooking or baking class\n5. Outdoor adventure experience, such as a hiking or kayaking trip\n6. Customized jewelry or accessories\n7. Book or a set of books by their favorite author\n8. Tickets to a concert, play, or sports event\n9. A gift card to their favorite restaurant or store\n10. A thoughtful handwritten letter or card expressing your love and appreciation."
}
```
I've provided a very clear description of the tool and when the agent should use it, but it seems as if the agent is ignoring my instructions.
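One pattern I have seen suggested (a sketch, no guarantee it fixes this): make answering directly the default via the agent's system message:
```python
system_message = (
    "Answer from your own knowledge by default. Only use the "
    "'Fressnapf Knowledge Base' tool when the question is explicitly "
    "about Fressnapf services or products."
)
agent = initialize_agent(
    agent='chat-conversational-react-description',
    tools=tools,
    llm=llm,
    verbose=True,
    max_iterations=3,
    early_stopping_method='generate',
    memory=conversational_memory,
    agent_kwargs={"system_message": system_message},
)
```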
Does anyone have any idea how to solve this issue? | Agent using tool when it's not needed | https://api.github.com/repos/langchain-ai/langchain/issues/8827/comments | 11 | 2023-08-06T16:17:01Z | 2024-05-13T11:03:47Z | https://github.com/langchain-ai/langchain/issues/8827 | 1,838,264,689 | 8,827 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I want to propose changing the refine prompts to make summaries more correct.
### Motivation
In some cases (I can't say exactly when, but roughly once per 10-20 runs) I saw that the refine LLM added some additional text to the summary.
Example:
> Since the provided context does not add any relevant information to the original summary, we will return the original summary:
> ... summary here...
In other cases I even saw the output of the refine prompt as:
>The existing summary is more comprehensive and provides a clear understanding of <my topic here>. The additional context provided does not contribute to the summary and can be disregarded.
As a result, I get this text as the summary, instead of my summary itself.
### Your contribution
I propose using different prompts and checking a JSON result.
```
refine_initial_prompt_template = """\
Write a concise summary of the text (delimited with XML tags).
Please provide result in JSON format:
{{
"summary": "summary here"
}}
<text>
{text}
</text>
"""
refine_combine_prompt_template = """\
Your job is to produce a final summary. We have provided an existing summary up to a certain point (delimited with XML tags).
We have the opportunity to refine the existing summary (only if needed) with some more context (delimited with XML tags).
Given the new context, refine the original summary (only if new context is useful) otherwise say that it's not useful.
Please provide result in JSON format:
{{
"not_useful": "True if new context was not useful, False if new content was used",
"refined_summary": "refined summary here if new context was useful"
}}
<existing_summary>
{existing_summary}
</existing_summary>
<more_context>
{more_context}
</more_context>
"""
```
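For completeness, a minimal sketch of consuming that JSON on the caller side (my own helper, assuming the model honors the requested shape):
```python
import json

def merge_refinement(existing_summary: str, llm_output: str) -> str:
    result = json.loads(llm_output)
    if str(result.get("not_useful")).lower() == "true":
        return existing_summary  # new context was not useful; keep the old summary
    return result.get("refined_summary", existing_summary)
```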
Now I have JSON with the field not_useful=True and can easily ignore it. | Refine prompt modification | https://api.github.com/repos/langchain-ai/langchain/issues/8824/comments | 5 | 2023-08-06T15:25:06Z | 2024-05-13T16:07:49Z | https://github.com/langchain-ai/langchain/issues/8824 | 1,838,248,487 | 8,824
[
"langchain-ai",
"langchain"
] | I am trying to utilize the Python REPL tool in langchain with a CSV file so it can send me answers based on the CSV file's content. The problem is that it gets the action_input step of writing the code to execute right; however, it fails to answer because it couldn't determine the dataframe it should run the code on.
For example, I ask it about the longest name in a dataframe containing a column named "name" and it returns the following:
Entering new AgentExecutor chain...
{
"action": "python_repl_ast",
"action_input": "import pandas as pd\n\n# Assuming the dataset is stored in a pandas DataFrame called 'data'\nnames = data['NAMES']\nlongest_name = max(names, key=len)\nlongest_name"
}
Observation: NameError: name 'data' is not defined
Thought:{
"action": "Final Answer",
**"action_input": "I apologize for the confusion. Unfortunately, I do not have access to the dataset required to find the longest name. Is there anything else I can assist you with?"**
}
Is there a way to pass the dataframe object to the Pandas REPL tool in order for the code to execute properly and return me the answer? This problem is encountered while using the GPT-3.5-turbo API model.
This is the full code:
```
df = pd.read_csv(file_path)
tools = [PythonAstREPLTool(locals={"df": df})]
agent = initialize_agent(
agent='chat-conversational-react-description',
tools=tools,
llm=llm,
verbose=True,
max_iterations=3,
early_stopping_method='generate',
memory=conversational_memory
)
query = 'What is the longest name?'
print(agent(query))
```
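A hedged sketch of the alternative I am considering (assuming the dedicated pandas agent, which injects the dataframe into the Python tool's locals and tells the model the variable name):
```python
from langchain.agents import create_pandas_dataframe_agent

pd_agent = create_pandas_dataframe_agent(llm, df, verbose=True)
pd_agent.run('What is the longest name?')
```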
| How to pass a CSV file or a dataframe to Pandas REPL tool in Langchain? | https://api.github.com/repos/langchain-ai/langchain/issues/8821/comments | 3 | 2023-08-06T12:26:01Z | 2023-11-14T16:06:09Z | https://github.com/langchain-ai/langchain/issues/8821 | 1,838,189,640 | 8,821 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Guys, if not already logged: I had to uninstall the unstructured module and pin it back to a lower version, 0.7.12: `python3 -m pip install unstructured==0.7.12`
### Suggestion:
Maybe this needs attention? | Issue: Unstructured module needs backing off to lower ver. 0.7.12 seems ok | https://api.github.com/repos/langchain-ai/langchain/issues/8819/comments | 1 | 2023-08-06T11:45:44Z | 2023-11-12T16:05:44Z | https://github.com/langchain-ai/langchain/issues/8819 | 1,838,174,943 | 8,819
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
How to handle scenarios where the currently asked question is a totally new question and has no relation to the previous chat history? The standalone question would probably be nonsense, as its semantics get twisted by the chat history.
```
OPENAI_API_KEY=xxxxxx
OPENAI_API_BASE=https://xxxxxxxx.openai.azure.com/
OPENAI_API_VERSION=2023-05-15
import os
import openai
from dotenv import load_dotenv
from langchain.chat_models import AzureChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
# Load environment variables (set OPENAI_API_KEY, OPENAI_API_BASE, and OPENAI_API_VERSION in .env)
load_dotenv()
# Configure OpenAI API
openai.api_type = "azure"
openai.api_base = os.getenv('OPENAI_API_BASE')
openai.api_key = os.getenv("OPENAI_API_KEY")
openai.api_version = os.getenv('OPENAI_API_VERSION')
# Initialize gpt-35-turbo and our embedding model
llm = AzureChatOpenAI(deployment_name="gpt-35-turbo")
embeddings = OpenAIEmbeddings(deployment_id="text-embedding-ada-002", chunk_size=1)
from langchain.document_loaders import DirectoryLoader
from langchain.document_loaders import TextLoader
from langchain.text_splitter import TokenTextSplitter
loader = DirectoryLoader('data/qna/', glob="*.txt", loader_cls=TextLoader, loader_kwargs={'autodetect_encoding': True})
documents = loader.load()
text_splitter = TokenTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
from langchain.vectorstores import FAISS
db = FAISS.from_documents(documents=docs, embedding=embeddings)
from langchain.chains import ConversationalRetrievalChain
from langchain.prompts import PromptTemplate
# Adapt if needed
CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template("""Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question.
Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:""")
qa = ConversationalRetrievalChain.from_llm(llm=llm,
retriever=db.as_retriever(),
condense_question_prompt=CONDENSE_QUESTION_PROMPT,
return_source_documents=True,
verbose=False)
```
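One mitigation sketch (my own wording tweak, not an official fix): tell the condense step to pass unrelated questions through unchanged:
```python
CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(
    """Given the following conversation and a follow up question, rephrase the
follow up question to be a standalone question. If the follow up question is
not related to the conversation, return it unchanged.

Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:"""
)
```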
### Suggestion:
_No response_ | Issue: Conversation retreival chain : if current question is not at all related to prevoius chat history | https://api.github.com/repos/langchain-ai/langchain/issues/8818/comments | 8 | 2023-08-06T11:42:41Z | 2024-02-19T16:09:06Z | https://github.com/langchain-ai/langchain/issues/8818 | 1,838,174,041 | 8,818 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Can't find a way to deal with the 'maximum context length' error from OpenAI. Is there any way to catch it within langchain?
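A minimal sketch of catching it directly (an assumption: with the pre-1.0 openai SDK, this failure surfaces as InvalidRequestError):
```python
import openai

try:
    answer = qa.run(query)  # hypothetical chain and query names
except openai.error.InvalidRequestError as e:
    if "maximum context length" in str(e):
        answer = "The input was too long; please shorten it."
    else:
        raise
```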
### Suggestion:
Add an error handler? | How to catch 'maximum context length' error | https://api.github.com/repos/langchain-ai/langchain/issues/8817/comments | 2 | 2023-08-06T08:08:29Z | 2023-11-12T16:05:50Z | https://github.com/langchain-ai/langchain/issues/8817 | 1,838,102,702 | 8,817
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I've updated the code, but strangely it doesn't find a good response. When I print(response["answer"]) I get a reply saying there is no text to answer the query I put in, even though it gets information from the internet and the Document list seems well structured. Here is the code:
```python
from googlesearch import search
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory
from langchain.vectorstores import DocArrayInMemorySearch
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.document_loaders import (
UnstructuredWordDocumentLoader,
TextLoader,
UnstructuredPowerPointLoader,
)
from langchain.tools import Tool
from langchain.utilities import GoogleSearchAPIWrapper
from langchain.chat_models import ChatOpenAI
from langchain.docstore.document import Document
import os
import openai
import sys
from dotenv import load_dotenv, find_dotenv
sys.path.append('../..')
_ = load_dotenv(find_dotenv())
google_api_key = os.environ.get("GOOGLE_API_KEY")
google_cse_id = os.environ.get("GOOGLE_CSE_ID")
openai.api_key = os.environ['OPENAI_API_KEY']
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_ENDPOINT"] = "https://api.langchain.plus"
os.environ["LANGCHAIN_API_KEY"] = os.environ['LANGCHAIN_API_KEY']
os.environ["GOOGLE_API_KEY"] = google_api_key
os.environ["GOOGLE_CSE_ID"] = google_cse_id
folder_path_docx = "DB\\DB VARIADO\\DOCS"
folder_path_txt = "DB\\BLOG-POSTS"
folder_path_pptx_1 = "DB\\PPT JUNIO"
folder_path_pptx_2 = "DB\\DB VARIADO\\PPTX"
loaded_content = []
for file in os.listdir(folder_path_docx):
if file.endswith(".docx"):
file_path = os.path.join(folder_path_docx, file)
loader = UnstructuredWordDocumentLoader(file_path)
docx = loader.load()
loaded_content.extend(docx)
for file in os.listdir(folder_path_txt):
if file.endswith(".txt"):
file_path = os.path.join(folder_path_txt, file)
loader = TextLoader(file_path, encoding='utf-8')
text = loader.load()
loaded_content.extend(text)
for file in os.listdir(folder_path_pptx_1):
if file.endswith(".pptx"):
file_path = os.path.join(folder_path_pptx_1, file)
loader = UnstructuredPowerPointLoader(file_path)
slides_1 = loader.load()
loaded_content.extend(slides_1)
for file in os.listdir(folder_path_pptx_2):
if file.endswith(".pptx"):
file_path = os.path.join(folder_path_pptx_2, file)
loader = UnstructuredPowerPointLoader(file_path)
slides_2 = loader.load()
loaded_content.extend(slides_2)
embedding = OpenAIEmbeddings()
embeddings_content = []
for one_loaded_content in loaded_content:
embedding_content = embedding.embed_query(one_loaded_content.page_content)
embeddings_content.append(embedding_content)
db = DocArrayInMemorySearch.from_documents(loaded_content, embedding)
retriever = db.as_retriever(search_type="similarity", search_kwargs={"k": 3})
search = GoogleSearchAPIWrapper()
def custom_search(query):
max_results = 3
internet_results = search.results(query, max_results)
internet_documents = [Document(page_content=result["snippet"], metadata={
"source": result["link"]}) for result in internet_results
]
return internet_documents
chain = ConversationalRetrievalChain.from_llm(
llm=ChatOpenAI(model_name="gpt-4", temperature=0),
chain_type="map_reduce",
retriever=retriever,
return_source_documents=True,
return_generated_question=True,
)
history = []
while True:
query = input("Hola, soy Chatbot. ¿Qué te gustaría saber? ")
internet_documents = custom_search(query)
small = loaded_content[:3]
combined_results = small + internet_documents
print(combined_results)
response = chain(
{"question": query, "chat_history": history, "documents": combined_results})
print(response["answer"])
history.append(("system", query))
history.append(("assistant", response["answer"]))
```
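A hedged guess at one fix (my sketch): ConversationalRetrievalChain only reads question and chat_history and fetches documents through its own retriever, so the "documents" key above is likely ignored. Running the combined documents through a QA chain directly would at least use them:
```python
from langchain.chains.question_answering import load_qa_chain

qa_chain = load_qa_chain(ChatOpenAI(model_name="gpt-4", temperature=0), chain_type="map_reduce")
answer = qa_chain.run(input_documents=combined_results, question=query)
```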
Can anyone help me make it work? I'd appreciate it!
### Suggestion:
I would like the chatbot to give a response based not just on the custom data but also on what it gets from the internet. But with what I've done so far, it doesn't work. | How to make a Chatbot respond based on custom data and from the internet? | https://api.github.com/repos/langchain-ai/langchain/issues/8816/comments | 2 | 2023-08-06T07:22:44Z | 2023-11-12T16:05:54Z | https://github.com/langchain-ai/langchain/issues/8816 | 1,838,088,521 | 8,816
[
"langchain-ai",
"langchain"
] | ### System Info
version 0.0.253
Running on a Jupyter Notebook in Google Colab
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Follow the steps in [Access Intermediate Steps](https://github.com/langchain-ai/langchain/blob/master/docs/extras/modules/agents/how_to/intermediate_steps.ipynb) within the Agent "How To".
The final step calls for
`print(json.dumps(response["intermediate_steps"], indent=2))`
This is throwing the following error:
`TypeError: Object of type AgentAction is not JSON serializable`
Based on [this issue](https://github.com/langchain-ai/langchain/issues/2222) I think it may be happening following a recent upgrade.
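A possible workaround sketch (my own, assuming AgentAction still exposes tool, tool_input, and log):
```python
import json

steps = [
    {"tool": action.tool, "tool_input": action.tool_input,
     "log": action.log, "observation": observation}
    for action, observation in response["intermediate_steps"]
]
print(json.dumps(steps, indent=2))
```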
### Expected behavior
Printing the intermediate steps as JSON. | Object of type AgentAction is not JSON serializable | https://api.github.com/repos/langchain-ai/langchain/issues/8815/comments | 2 | 2023-08-06T07:11:26Z | 2023-08-06T09:06:42Z | https://github.com/langchain-ai/langchain/issues/8815 | 1,838,084,628 | 8,815 |
[
"langchain-ai",
"langchain"
] | I have three metadata: vehicle, color and city.
I want to retrieve chunks with filter -> vehicle = car, color = red and city = NY
All these conditions should be met in the retrieved chunks.
How can I do that?
results = db.get_relevant_documents(
query=query,
filter={"vehicle": "car", "color": "red", "city": "NY"},
)
But I am not getting desired results.
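A hedged sketch of what I think the store expects (assuming a Chroma-backed store, whose where-filter needs explicit operators to AND conditions; `vectordb` is a hypothetical name for the underlying store):
```python
retriever = vectordb.as_retriever(
    search_kwargs={"filter": {"$and": [
        {"vehicle": {"$eq": "car"}},
        {"color": {"$eq": "red"}},
        {"city": {"$eq": "NY"}},
    ]}}
)
results = retriever.get_relevant_documents(query)
```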
| How to pass multiple filters in db.get_relevant_documents() | https://api.github.com/repos/langchain-ai/langchain/issues/8802/comments | 2 | 2023-08-05T20:03:46Z | 2023-11-12T16:05:59Z | https://github.com/langchain-ai/langchain/issues/8802 | 1,837,920,794 | 8,802 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Bedrock can't seem to load my credentials when used within a Lambda function. My AWS credentials were set up in my local environment using environment variables. When I use the bedrock class directly, it is able to load my credentials and my code runs smoothly. Here is the RetrievalQA function that utilizes the bedrock class:
```
def qa(query):
secrets = json.loads(get_secret())
kendra_index_id = secrets['kendra_index_id']
llm = Bedrock(model_id="amazon.titan-tg1-large", region_name='us-east-1')
llm.model_kwargs = {"maxTokenCount": 4096}
retriever = AmazonKendraRetriever(index_id=kendra_index_id)
prompt_template = """
{context}
{question} If you are unable to find the relevant article, respond 'I can't generate the needed content based on the context provided.'
"""
PROMPT = PromptTemplate(
template=prompt_template, input_variables=["context", "question"])
chain = RetrievalQA.from_chain_type(
llm=llm,
retriever=retriever,
verbose=True,
chain_type_kwargs={
"prompt": PROMPT
}
)
return chain(query)
```
The above code runs without issues when used directly. However, when used within a Lambda function, it fails. The script used to build my Lambda function uses the Bedrock class as written above; however, I run into the validation error below when I invoke the Lambda function:
```
{
"errorMessage": "1 validation error for Bedrock\n__root__\n Could not load credentials to authenticate with AWS client. Please check that credentials in the specified profile name are valid. (type=value_error)",
"errorType": "ValidationError",
"requestId": "b772f236-f582-4308-8af5-b5a418d4327f",
"stackTrace": [
" File \"/var/task/main.py\", line 62, in handler\n response = qa(query)\n",
" File \"/var/task/main.py\", line 32, in qa\n llm = Bedrock(model_id=\"amazon.titan-tg1-large\", region_name='us-east-1',) #client=BEDROCK_CLIENT)\n",
" File \"/var/task/langchain/load/serializable.py\", line 74, in __init__\n super().__init__(**kwargs)\n",
" File \"pydantic/main.py\", line 341, in pydantic.main.BaseModel.__init__\n raise validation_error\n"
]
```
As clearly indicated by the error message, bedrock couldn't load credentials.
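A hedged workaround sketch (my own; it only uses the `client` field already shown commented out in the trace above): build the boto3 client from the Lambda execution role and hand it to Bedrock explicitly:
```python
import boto3
from langchain.llms import Bedrock

# assumptions: the Lambda role has Bedrock permissions; the service name
# may be "bedrock-runtime" depending on your boto3 version
bedrock_client = boto3.client("bedrock", region_name="us-east-1")
llm = Bedrock(model_id="amazon.titan-tg1-large", client=bedrock_client)
```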
### Suggestion:
I have looked at the official documentation of the [bedrock class](https://github.com/langchain-ai/langchain/blob/b786335dd10902489f87a536ee074d747b6df370/libs/langchain/langchain/llms/bedrock.py#L51) but still do not understand why my code fails. Any help will be appreciated. @3coins @jasondotparse @hwchase17 | Issue: Amazon Bedrock can't load my credentials when called from a Lambda function | https://api.github.com/repos/langchain-ai/langchain/issues/8800/comments | 22 | 2023-08-05T18:57:43Z | 2024-02-15T16:10:51Z | https://github.com/langchain-ai/langchain/issues/8800 | 1,837,898,928 | 8,800 |
[
"langchain-ai",
"langchain"
] | ### System Info
BSHTMLLoader not working for urls
````
from langchain.document_loaders import BSHTMLLoader
url = "https://www.google.com"
loader = BSHTMLLoader({"url": url})
doc = loader.load()
````
I tried this one also, not working:
````
from langchain.document_loaders import BSHTMLLoader
url = "https://www.google.com"
loader = BSHTMLLoader(url)
doc = loader.load()
````
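A hedged aside (my own sketch): for remote pages I would expect one of the URL-oriented loaders instead, since BSHTMLLoader appears to want a local file path:
```python
from langchain.document_loaders import WebBaseLoader

loader = WebBaseLoader("https://www.google.com")
docs = loader.load()
```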
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.document_loaders import BSHTMLLoader
url = "https://www.example.com"
loader = BSHTMLLoader(url)
doc = loader.load()
```
### Expected behavior
It should return the HTML data of the URL. I believe BSHTMLLoader won't work with URLs and only works with files (.html files). | BSHTMLLoader not working for urls | https://api.github.com/repos/langchain-ai/langchain/issues/8795/comments | 2 | 2023-08-05T14:41:28Z | 2023-08-06T11:42:45Z | https://github.com/langchain-ai/langchain/issues/8795 | 1,837,792,197 | 8,795
[
"langchain-ai",
"langchain"
] | ### System Info
Traceback:
docsearch.similarity_search(
File "/usr/local/lib/python3.9/site-packages/langchain/vectorstores/pinecone.py", line 162, in similarity_search
docs_and_scores = self.similarity_search_with_score(
File "/usr/local/lib/python3.9/site-packages/langchain/vectorstores/pinecone.py", line 136, in similarity_search_with_score
docs.append((Document(page_content=text, metadata=metadata), score))
File "/usr/local/lib/python3.9/site-packages/langchain/load/serializable.py", line 74, in __init__
super().__init__(**kwargs)
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for Document"
page_content str type expected (type=type_error.str)
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
In langchain's Pinecone similarity_search_with_score,
docs.append((Document(page_content=text, metadata=metadata), score))
seems to raise an error when page_content is not a string type.
We wrote the documents to Pinecone using a PDF loader,
so maybe the loader produced something unexpected.
It should be a bug.
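For illustration, a small pre-insert check along the lines I mean (my own sketch; `documents` is whatever the PDF loader produced):
```python
bad = [d for d in documents if not isinstance(d.page_content, str)]
print(f"{len(bad)} documents have non-string page_content")
```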
### Expected behavior
No error raised | File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__ pydantic.error_wrappers.ValidationError: 1 validation error for Document" | https://api.github.com/repos/langchain-ai/langchain/issues/8794/comments | 6 | 2023-08-05T13:36:08Z | 2024-06-08T16:07:15Z | https://github.com/langchain-ai/langchain/issues/8794 | 1,837,765,386 | 8,794
[
"langchain-ai",
"langchain"
] | ### Feature request
Add an option to directly set the input language for OpenAIWhisperParserLocal (in case of incorrect autodetection: language="french", task="transcribe") and optionally use the translation mode of the whisper model (language="french", task="translate").
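A sketch of the call shape I have in mind (hypothetical parameters until the PR lands):
```python
from langchain.document_loaders.parsers.audio import OpenAIWhisperParserLocal  # import path may vary by version

parser = OpenAIWhisperParserLocal(language="french", task="translate")  # proposed arguments
```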
### Motivation
I encountered situations where the audio is split into chunks and therefore OpenAIWhisperParserLocal's language autodetection was incorrect.
### Your contribution
I'll post PR. | Add option to directly set input language for OpenAIWhisperParserLocal | https://api.github.com/repos/langchain-ai/langchain/issues/8792/comments | 1 | 2023-08-05T11:36:13Z | 2023-11-11T16:05:21Z | https://github.com/langchain-ai/langchain/issues/8792 | 1,837,713,670 | 8,792 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hello,
I want to build an SQL Database Agent; using it, I want to retrieve data from the database and execute queries.
So for that I have tried:
```
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.sql_database import SQLDatabase
from langchain.llms.openai import AzureOpenAI
from langchain.agents.agent_types import AgentType
db = SQLDatabase.from_uri("mysql://user:pwd@localhost/db_name")
toolkit = SQLDatabaseToolkit(db=db, llm=AzureOpenAI(deployment_name="ABCD",model_name="text-davinci-003",temperature=0))
agent_executor = create_sql_agent(
llm=AzureOpenAI(deployment_name="ABCD",model_name="text-davinci-003",temperature=0),
toolkit=toolkit,
verbose=True,
agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
)
agent_executor.run("Describe the XYZ table")
```
But with the above code, the problem I am facing is "**openai.error.InvalidRequestError: Invalid URL (POST /v1/openai/deployments/trailadvisor/completions)**". I can run the same thing using OpenAI, but I get this error with AzureOpenAI.
Am I taking the correct approach, or is there any other way to achieve this using an AzureOpenAI deployment?
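A hedged guess at the cause (assumption: the Azure-specific settings are missing, so requests fall back to the plain OpenAI URL scheme). Setting these before constructing the LLM is what I would try first:
```python
import os

os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://<your-resource>.openai.azure.com/"
os.environ["OPENAI_API_VERSION"] = "2023-05-15"
os.environ["OPENAI_API_KEY"] = "<key>"
```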
### Suggestion:
_No response_ | Issue: AzureOpenAI connection with SQL Database | https://api.github.com/repos/langchain-ai/langchain/issues/8790/comments | 1 | 2023-08-05T08:30:33Z | 2023-11-11T16:05:26Z | https://github.com/langchain-ai/langchain/issues/8790 | 1,837,647,295 | 8,790 |
[
"langchain-ai",
"langchain"
] | ### System Info
Name: langchain
Version: 0.0.250
Python=3.11.4
### Who can help?
@agola @hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
llm = ChatOpenAI(
model="gpt-4",
temperature=0.0,
top_p=1.0,
n=20,
max_tokens=30
)
results = llm(chat_prompt.format_prompt(
inputs_description=vars["inputs_description"],
task_description=vars["task_description"],
criteria=vars["criteria"],
criteria_description=vars["criteria_description"],
eval_steps=vars["eval_steps"],
input=vars["input"],
input_type=vars["input_type"],
output=vars["output"],
output_type=vars["output_type"]
).to_messages())
```
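For reference, a hedged aside (my understanding of the API, worth verifying): calling the chat model directly returns a single message even when n > 1, while `generate` exposes all candidates:
```python
from langchain.schema import HumanMessage

result = llm.generate([[HumanMessage(content="Say hi")]])
print(len(result.generations[0]))  # expected 20 if n is honored
```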
### Expected behavior
I was expecting to get a response with 20 results instead of just one, since I set n=20. This is the behaviour when using the OpenAI SDK. | "n" hyperparameter doesn't work in ChatOpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/8789/comments | 1 | 2023-08-05T08:26:14Z | 2023-08-07T13:40:06Z | https://github.com/langchain-ai/langchain/issues/8789 | 1,837,646,215 | 8,789
[
"langchain-ai",
"langchain"
] | ### System Info
Hi!
While using Llama cpp in LangChain, I am trying to load the "llama-2-70b-chat" model. The code I am using is:
n_gpu_layers = 40 # Change this value based on your model and your GPU VRAM pool.
n_batch = 512 # Should be between 1 and n_ctx, consider the amount of VRAM in your GPU.
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
llm = LlamaCpp(
model_path="/Users/taimoorarif/Downloads/llama-2-70b-chat.ggmlv3.q4_0.bin",
n_gpu_layers=n_gpu_layers,
n_batch=n_batch,
callback_manager=callback_manager,
verbose=True,
n_ctx=2048
)
I am getting an error which I am attaching below. The same code works perfectly fine for the 13b model. Moreover, for the 65B model, the model keeps processing the prompt and does not return anything. Any help would be appreciated. Thanks!
<img width="1018" alt="Screenshot 2023-08-05 at 3 59 05 AM" src="https://github.com/langchain-ai/langchain/assets/56273879/db0fb151-3bdb-4518-b6f7-d58dca2af261">
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
.
### Expected behavior
. | Error while loading the Llama 2 70B chat model | https://api.github.com/repos/langchain-ai/langchain/issues/8788/comments | 2 | 2023-08-05T07:59:48Z | 2023-11-15T16:07:07Z | https://github.com/langchain-ai/langchain/issues/8788 | 1,837,636,776 | 8,788 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain: 0.0.252
python: 3.10.12
@agola11
### Who can help?
@agola11 please take a look,
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create a callback handler LogHandler for on_chain_start, on_chat_model_start, etc., and log run_id and parent_run_id in each of them (a minimal sketch follows this list)
2. Create a retrieval chain and add this LogHandler
3. Add this LogHandler to the llm as well
4. When running the chain, one of the nested chains in between is not logged, because callbacks are not passed to that chain
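A minimal sketch of the handler described in step 1 (my reconstruction, not the reporter's exact code):
```python
from langchain.callbacks.base import BaseCallbackHandler

class LogHandler(BaseCallbackHandler):
    def on_chain_start(self, serialized, inputs, *, run_id, parent_run_id=None, **kwargs):
        print("chain start:", run_id, "parent:", parent_run_id)

    def on_chat_model_start(self, serialized, messages, *, run_id, parent_run_id=None, **kwargs):
        print("chat model start:", run_id, "parent:", parent_run_id)
```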
### Expected behavior
All the nested chains should have callbacks defined.
| RetrievalQA.from_chain_type: callbacks are not called for all nested chains | https://api.github.com/repos/langchain-ai/langchain/issues/8786/comments | 1 | 2023-08-05T06:43:10Z | 2023-11-11T16:05:36Z | https://github.com/langchain-ai/langchain/issues/8786 | 1,837,609,787 | 8,786 |
[
"langchain-ai",
"langchain"
] | ### Feature request
If possible, could alternative tokenizer options to NLTK be provided, such as spaCy, OpenNLP, ...?
### Motivation
NLTK issues:
```
Resource averaged_perceptron_tagger not found.
Please use the NLTK Downloader to obtain the resource:
>>> import nltk
>>> nltk.download('averaged_perceptron_tagger')
For more information see: https://www.nltk.org/data.html
```
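For reference, a minimal sketch of the spaCy alternative linked under "Your contribution" (my own code; assumes `pip install spacy` and `python -m spacy download en_core_web_sm`):
```python
from langchain.text_splitter import SpacyTextSplitter

splitter = SpacyTextSplitter(chunk_size=1000)
chunks = splitter.split_text(long_text)  # long_text is a placeholder
```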
### Your contribution
https://python.langchain.com/docs/modules/data_connection/document_transformers/text_splitters/split_by_token#spacy
NLTK issues: https://stackoverflow.com/questions/49546253/attributeerror-module-nltk-has-no-attribute-download | NLTK alternatives option | https://api.github.com/repos/langchain-ai/langchain/issues/8782/comments | 2 | 2023-08-05T02:22:42Z | 2023-08-07T00:40:13Z | https://github.com/langchain-ai/langchain/issues/8782 | 1,837,527,287 | 8,782 |
[
"langchain-ai",
"langchain"
] | ### System Info
I'm using the newest langchain, Python 3.
So I tried the agent but got an inconsistent answer; this is the output:

The true answer is "Jalan lori tidak termasuk jalan umum" ("Lorry roads are not included as public roads"). I tried using retrieval with a prompt, without the agent, and got the true answer too. So the agent sometimes gives inconsistent answers. How can I prevent this?
### Who can help?
_No response_
### Information
- [x] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I put example code below.
My agent prompt:

How To Call my Agent
```
agent_executor = initialize_agent(
tools,
llm,
agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
verbose=True,
handle_parsing_errors=True,
return_direct=True,
agent_kwargs={
'prefix':constant.PREFIX,
'format_instructions':constant.FORMAT_INSTRUCTIONS,
'suffix':constant.SUFFIX
}
)
result = agent_executor.run(question)
return result
```
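A hedged first step I would try (my own suggestion): pin the sampling settings so runs vary less, e.g. temperature=0 on the model behind the agent:
```python
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)  # hypothetical; match your model setup
```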
### Expected behavior
Get the true answer, consistent with the documents. | inconsistent agent answer | https://api.github.com/repos/langchain-ai/langchain/issues/8780/comments | 4 | 2023-08-05T00:19:15Z | 2023-11-13T16:06:51Z | https://github.com/langchain-ai/langchain/issues/8780 | 1,837,465,382 | 8,780
[
"langchain-ai",
"langchain"
] | ### System Info
Platform: Apple M1 Pro
Python version: Python 3.9.6
Dependencies:
aiohttp==3.8.5
aiosignal==1.3.1
anyio==3.7.1
async-timeout==4.0.2
atlassian-python-api==3.40.0
attrs==23.1.0
backoff==2.2.1
beautifulsoup4==4.12.2
blobfile==2.0.2
certifi==2023.7.22
charset-normalizer==3.2.0
chroma-hnswlib==0.7.2
chromadb==0.4.5
click==8.1.6
coloredlogs==15.0.1
dataclasses-json==0.5.14
Deprecated==1.2.14
exceptiongroup==1.1.2
fastapi==0.99.1
filelock==3.12.2
flatbuffers==23.5.26
frozenlist==1.4.0
h11==0.14.0
httptools==0.6.0
humanfriendly==10.0
idna==3.4
importlib-resources==6.0.0
langchain==0.0.252
langsmith==0.0.18
lxml==4.9.3
marshmallow==3.20.1
monotonic==1.6
mpmath==1.3.0
multidict==6.0.4
mypy-extensions==1.0.0
numexpr==2.8.4
numpy==1.25.2
oauthlib==3.2.2
onnxruntime==1.15.1
openai==0.27.8
openapi-schema-pydantic==1.2.4
overrides==7.3.1
packaging==23.1
Pillow==10.0.0
posthog==3.0.1
protobuf==4.23.4
pulsar-client==3.2.0
pycryptodomex==3.18.0
pydantic==1.10.12
PyPika==0.48.9
pytesseract==0.3.10
python-dateutil==2.8.2
python-dotenv==1.0.0
PyYAML==6.0.1
regex==2023.6.3
requests==2.31.0
requests-oauthlib==1.3.1
six==1.16.0
sniffio==1.3.0
soupsieve==2.4.1
SQLAlchemy==2.0.19
starlette==0.27.0
sympy==1.12
tenacity==8.2.2
tiktoken==0.4.0
tokenizers==0.13.3
tqdm==4.65.0
typing-inspect==0.9.0
typing_extensions==4.7.1
urllib3==1.25.11
uvicorn==0.23.2
uvloop==0.17.0
watchfiles==0.19.0
websockets==11.0.3
wrapt==1.15.0
yarl==1.9.2
zipp==3.16.2
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Intro:
I have some code to load my vector store from Confluence:
```python
from langchain.document_loaders import ConfluenceLoader
from langchain.text_splitter import CharacterTextSplitter, TokenTextSplitter, RecursiveCharacterTextSplitter
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
import pytesseract
import os
pytesseract.pytesseract.tesseract_cmd = "/opt/homebrew/bin/tesseract"
os.environ["OPENAI_API_KEY"] = "my_openai_key"
CONFLUENCE_URL = "confluence_path"
CONFLUENCE_TOKEN = "some_token"
CONFLUENCE_SPACE = "confluence_space"
PERSIST_DIRECTORY = "./chroma_db/"
def confluence_vector(force_reload: bool = False):
# Init Openai Embeddings
embeddings = OpenAIEmbeddings(model="text-embedding-ada-002", chunk_size=1, max_retries=10)
# Check persist exists
if PERSIST_DIRECTORY and os.path.exists(PERSIST_DIRECTORY) and not force_reload:
vector = Chroma(persist_directory=PERSIST_DIRECTORY, embedding_function=embeddings)
print("Chroma loaded")
return vector
else:
loader = ConfluenceLoader(url=CONFLUENCE_URL,
token=CONFLUENCE_TOKEN)
documents = loader.load(
space_key=CONFLUENCE_SPACE, limit=10, max_pages=1000
)
text_splitter = RecursiveCharacterTextSplitter(chunk_size=100, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
#text_splitter = TokenTextSplitter(chunk_size=1000, chunk_overlap=10)
#texts = text_splitter.split_documents(texts)
print(texts)
vector = Chroma.from_documents(documents=texts, embedding=embeddings, persist_directory=PERSIST_DIRECTORY)
vector.persist()
print("Chroma builded")
return vector
if __name__ == '__main__':
confluence_vector(force_reload=True)
```
Then after run I have an exception:
```
Traceback (most recent call last):
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/urllib3/connection.py", line 159, in _new_conn
conn = connection.create_connection(
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/urllib3/util/connection.py", line 84, in create_connection
raise err
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/urllib3/util/connection.py", line 74, in create_connection
sock.connect(sa)
TimeoutError: [Errno 60] Operation timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/urllib3/connectionpool.py", line 670, in urlopen
httplib_response = self._make_request(
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/urllib3/connectionpool.py", line 381, in _make_request
self._validate_conn(conn)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/urllib3/connectionpool.py", line 978, in _validate_conn
conn.connect()
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/urllib3/connection.py", line 309, in connect
conn = self._new_conn()
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/urllib3/connection.py", line 171, in _new_conn
raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x1357d2640>: Failed to establish a new connection: [Errno 60] Operation timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/requests/adapters.py", line 486, in send
resp = conn.urlopen(
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/urllib3/connectionpool.py", line 726, in urlopen
retries = retries.increment(
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/urllib3/util/retry.py", line 446, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='openaipublic.blob.core.windows.net', port=443): Max retries exceeded with url: /encodings/cl100k_base.tiktoken (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x1357d2640>: Failed to establish a new connection: [Errno 60] Operation timed out'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/main.py", line 47, in <module>
confluence_vector(force_reload=True)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/main.py", line 40, in confluence_vector
vector = Chroma.from_documents(documents=texts, embedding=embeddings, persist_directory=PERSIST_DIRECTORY)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/langchain/vectorstores/chroma.py", line 603, in from_documents
return cls.from_texts(
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/langchain/vectorstores/chroma.py", line 567, in from_texts
chroma_collection.add_texts(texts=texts, metadatas=metadatas, ids=ids)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/langchain/vectorstores/chroma.py", line 187, in add_texts
embeddings = self._embedding_function.embed_documents(texts)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/langchain/embeddings/openai.py", line 472, in embed_documents
return self._get_len_safe_embeddings(texts, engine=self.deployment)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/langchain/embeddings/openai.py", line 325, in _get_len_safe_embeddings
encoding = tiktoken.encoding_for_model(model_name)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/tiktoken/model.py", line 75, in encoding_for_model
return get_encoding(encoding_name)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/tiktoken/registry.py", line 63, in get_encoding
enc = Encoding(**constructor())
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/tiktoken_ext/openai_public.py", line 64, in cl100k_base
mergeable_ranks = load_tiktoken_bpe(
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/tiktoken/load.py", line 116, in load_tiktoken_bpe
contents = read_file_cached(tiktoken_bpe_file)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/tiktoken/load.py", line 39, in read_file_cached
return read_file(blobpath)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/tiktoken/load.py", line 24, in read_file
resp = requests.get(blobpath)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/requests/api.py", line 73, in get
return request("get", url, params=params, **kwargs)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/requests/adapters.py", line 519, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='openaipublic.blob.core.windows.net', port=443): Max retries exceeded with url: /encodings/cl100k_base.tiktoken (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x1357d2640>: Failed to establish a new connection: [Errno 60] Operation timed out'))
```
My first thought was that it was a network problem reaching the cl100k_base.tiktoken file.
To double-check, I tried to wget the file; the output was like this (IPs and domains are fake):
```
Resolving proxy (someproxy)... 10.0.0.0
Connecting to someproxy (someproxy)|10.0.0.0|:3131... connected.
Proxy request sent, awaiting response... 200 OK
Length: 1681126 (1.6M) [application/octet-stream]
Saving to: ‘cl100k_base.tiktoken’
cl100k_base.tiktoken 100%[=================================================================================================>] 1.60M --.-KB/s in 0.1s
2023-08-05 01:18:36 (14.4 MB/s) - ‘cl100k_base.tiktoken’ saved [1681126/1681126]
```
OK then, for full transparency: I tried to override the .tiktoken file location directly in the library files.
```
file location: /Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/tiktoken_ext/openai_public.py
```
```python
...
def cl100k_base():
mergeable_ranks = load_tiktoken_bpe(
"/Users/pobabi1/PycharmProjects/langchainConflWithGPT/cl100k_base.tiktoken"
)
special_tokens = {
ENDOFTEXT: 100257,
FIM_PREFIX: 100258,
FIM_MIDDLE: 100259,
FIM_SUFFIX: 100260,
ENDOFPROMPT: 100276,
}
return {
"name": "cl100k_base",
"pat_str": r"""(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\r\n\p{L}\p{N}]?\p{L}+|\p{N}{1,3}| ?[^\s\p{L}\p{N}]+[\r\n]*|\s*[\r\n]+|\s+(?!\S)|\s+""",
"mergeable_ranks": mergeable_ranks,
"special_tokens": special_tokens,
}
...
```
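(As an aside, a less invasive workaround than editing site-packages would be tiktoken's cache directory: its loader honors the `TIKTOKEN_CACHE_DIR` environment variable and names cached files by the sha1 of the source URL. A sketch, with a hypothetical cache path:)
```python
import hashlib, os, shutil

blob_url = "https://openaipublic.blob.core.windows.net/encodings/cl100k_base.tiktoken"
cache_dir = "/Users/pobabi1/tiktoken_cache"  # hypothetical path
os.makedirs(cache_dir, exist_ok=True)

# tiktoken keys its cache files by the sha1 hex digest of the source URL.
cache_key = hashlib.sha1(blob_url.encode()).hexdigest()
shutil.copy("cl100k_base.tiktoken", os.path.join(cache_dir, cache_key))

os.environ["TIKTOKEN_CACHE_DIR"] = cache_dir  # must be set before tiktoken loads
```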
The file was accepted by the library, but then I got this kind of exception:
```
Traceback (most recent call last):
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/main.py", line 47, in <module>
confluence_vector(force_reload=True)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/main.py", line 40, in confluence_vector
vector = Chroma.from_documents(documents=texts, embedding=embeddings, persist_directory=PERSIST_DIRECTORY)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/langchain/vectorstores/chroma.py", line 603, in from_documents
return cls.from_texts(
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/langchain/vectorstores/chroma.py", line 567, in from_texts
chroma_collection.add_texts(texts=texts, metadatas=metadatas, ids=ids)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/langchain/vectorstores/chroma.py", line 187, in add_texts
embeddings = self._embedding_function.embed_documents(texts)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/langchain/embeddings/openai.py", line 472, in embed_documents
return self._get_len_safe_embeddings(texts, engine=self.deployment)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/langchain/embeddings/openai.py", line 358, in _get_len_safe_embeddings
response = embed_with_retry(
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/langchain/embeddings/openai.py", line 107, in embed_with_retry
return _embed_with_retry(**kwargs)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 289, in wrapped_f
return self(f, *args, **kw)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 379, in __call__
do = self.iter(retry_state=retry_state)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 314, in iter
return fut.result()
File "/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/concurrent/futures/_base.py", line 438, in result
return self.__get_result()
File "/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/concurrent/futures/_base.py", line 390, in __get_result
raise self._exception
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 382, in __call__
result = fn(*args, **kwargs)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/langchain/embeddings/openai.py", line 104, in _embed_with_retry
response = embeddings.client.create(**kwargs)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/openai/api_resources/embedding.py", line 33, in create
response = super().create(*args, **kwargs)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/openai/api_requestor.py", line 298, in request
resp, got_stream = self._interpret_response(result, stream)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/openai/api_requestor.py", line 700, in _interpret_response
self._interpret_response_line(
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/openai/api_requestor.py", line 763, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: Invalid URL (GET /embeddings)
```
To double-check this part, I tried calling the /embeddings API from curl with a default request, and it works fine:
```
curl https://api.openai.com/v1/embeddings \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"input": "Your text string goes here",
"model": "text-embedding-ada-002"
}'
```
```
{
"object": "list",
"data": [
{
"object": "embedding",
"index": 0,
"embedding": [
-0.006968617,
-0.0052718227,
0.011901081,
-0.024984878,
... (remaining embedding values truncated for readability) ...
-0.0240844
]
}
],
"model": "text-embedding-ada-002-v2",
"usage": {
"prompt_tokens": 5,
"total_tokens": 5
}
}
```
Please help me out; I'm totally frustrated. The embeddings endpoint should be a POST call, but after overriding the library, the client issues GET /embeddings.
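In case it helps narrow things down (speculation on my part, not a confirmed fix): a POST that turns into a GET is the classic signature of a proxy or HTTP-to-HTTPS redirect downgrading the request, so pinning the client's base URL and proxy explicitly may be worth a try. The proxy value below is a placeholder:
```python
import openai

# Both attributes exist on openai-python 0.27.x; the values are examples only.
openai.api_base = "https://api.openai.com/v1"
openai.proxy = {"https": "http://someproxy:3131"}  # hypothetical corporate proxy
```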
### Expected behavior
A few weeks ago this code worked correctly, and I have no idea where the problem could be. | OpenAIEmbeddings error trying to load chroma from Confluence | https://api.github.com/repos/langchain-ai/langchain/issues/8777/comments | 1 | 2023-08-04T22:33:52Z | 2023-08-06T09:36:55Z | https://github.com/langchain-ai/langchain/issues/8777 | 1,837,409,860 | 8,777 |
[
"langchain-ai",
"langchain"
] | ### System Info
Latest versions of langchain at the current time.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [x] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I literally just pip installed langchain and am getting the error: ImportError: cannot import name 'dataclass_transform' from 'typing_extensions'. I tried upgrading pydantic and typing_extensions, which only made the problem worse. (A quick version check is sketched below.)
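For what it's worth, `dataclass_transform` was only added in typing_extensions 4.1.0, so a quick sanity check (a sketch, not a guaranteed fix) is:
```python
from importlib.metadata import version
import typing_extensions

print(version("typing_extensions"))  # pydantic needs >= 4.1 for dataclass_transform
print(hasattr(typing_extensions, "dataclass_transform"))
```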
### Expected behavior
No import errors. | Just install Langchain and getting Import error from typing extensions | https://api.github.com/repos/langchain-ai/langchain/issues/8775/comments | 3 | 2023-08-04T21:09:37Z | 2023-11-10T16:05:47Z | https://github.com/langchain-ai/langchain/issues/8775 | 1,837,350,467 | 8,775 |
[
"langchain-ai",
"langchain"
] | ### System Info
I used langchain to get a text embedding with the following code:
```
from langchain.embeddings import LlamaCppEmbeddings
llama = LlamaCppEmbeddings(model_path="/alloc/data/fury_mld-nlp-search/ggml-model-q4_0.bin",n_gpu_layers=None)
embedding= llama.embed_documents(["Thi is a text"])
```
The problem is that the prediction takes a long time. For this reason, I tried to use a GPU (Tesla V100-SXM2-16GB). When I set the GPU layers, for example:
```
from langchain.embeddings import LlamaCppEmbeddings
llama = LlamaCppEmbeddings(model_path="/alloc/data/fury_mld-nlp-search/ggml-model-q4_0.bin",n_gpu_layers=1)
```
I get the following:
```
Exception ignored in: <function Llama.__del__ at 0x7f29f59cab80>
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/llama_cpp/llama.py", line 978, in __del__
if self.ctx is not None:
AttributeError: 'Llama' object has no attribute 'ctx'
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
Input In [9], in <cell line: 1>()
----> 1 llama = LlamaCppEmbeddings(model_path="/alloc/data/fury_mld-nlp-search/ggml-model-q4_0.bin",n_gpu_layers=1)
File /usr/local/lib/python3.9/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for LlamaCppEmbeddings
__root__
Could not load Llama model from path: /alloc/data/fury_mld-nlp-search/ggml-model-q4_0.bin. Received error __init__() got an unexpected keyword argument 'n_gpu_layers' (type=value_error)
```
I am using llama-cpp-python==0.1.48 and langchain==0.0.252.
Any ideas? (A quick way to check whether the installed llama-cpp-python even accepts n_gpu_layers is sketched below.)
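If it helps, one way to tell whether the installed llama-cpp-python is simply too old to know about `n_gpu_layers` (my suspicion, not a confirmed diagnosis) is to inspect the constructor:
```python
import inspect
from importlib.metadata import version
import llama_cpp

print(version("llama-cpp-python"))
# If this prints False, the installed Llama.__init__ predates GPU offloading,
# and llama-cpp-python needs to be upgraded (built with GPU support).
print("n_gpu_layers" in inspect.signature(llama_cpp.Llama.__init__).parameters)
```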
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.embeddings import LlamaCppEmbeddings
llama = LlamaCppEmbeddings(model_path="/alloc/data/fury_mld-nlp-search/ggml-model-q4_0.bin",n_gpu_layers=1)
```
### Expected behavior
```
Exception ignored in: <function Llama.__del__ at 0x7f29f59cab80>
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/llama_cpp/llama.py", line 978, in __del__
if self.ctx is not None:
AttributeError: 'Llama' object has no attribute 'ctx'
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
Input In [9], in <cell line: 1>()
----> 1 llama = LlamaCppEmbeddings(model_path="/alloc/data/fury_mld-nlp-search/ggml-model-q4_0.bin",n_gpu_layers=1)
File /usr/local/lib/python3.9/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for LlamaCppEmbeddings
__root__
Could not load Llama model from path: /alloc/data/fury_mld-nlp-search/ggml-model-q4_0.bin. Received error __init__() got an unexpected keyword argument 'n_gpu_layers' (type=value_error)
``` | LlamaCppEmbeddings does not work with n_gpu_layers | https://api.github.com/repos/langchain-ai/langchain/issues/8766/comments | 2 | 2023-08-04T16:52:30Z | 2023-11-10T16:05:52Z | https://github.com/langchain-ai/langchain/issues/8766 | 1,837,073,191 | 8,766 |
[
"langchain-ai",
"langchain"
] | ### System Info
Hello,
I am trying to use the PlanAndExecute agent and am getting the error `langchain.tools.base.ToolException: Too many arguments to single-input`.
I am using the SQLDatabaseToolkit and a custom tool.
Let me know if I need to modify anything in the code below.
```
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
tools = [
*toolkit.get_tools(),
send_email_tool
]
planner = load_chat_planner(llm)
executor = load_agent_executor(llm, tools, verbose=True)
agent = PlanAndExecute(planner=planner, executor=executor, verbose=True)
agent.run(
"Find total number of orders & total number of customers and send an email"
)
```
The custom tool:
```
def send_email(inputs: str):
print('sending Email.....')
return 'Sent email....'
send_email_tool = Tool(
name='Send Email',
func=send_email,
description="Useful for sending emails"
)
```
The agent runs, but in the last step it fails with the exception below (a possible workaround is sketched after the traceback):
```
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rraghuna\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\tools\base.py", line 476, in _to_args_and_kwargs
raise ToolException(
langchain.tools.base.ToolException: Too many arguments to single-input tool Send Email. Args: ['Database Summary', 'Here is a summary of the data:\n\nTotal number of orders: 16\nTotal number of customers: 3']
```
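For what it's worth, one workaround I would try (an assumption, not a verified fix for this exact trace) is exposing the function as a structured tool, so multi-argument calls from the executor are accepted instead of raising; the parameter names below are illustrative:
```python
from langchain.tools import StructuredTool

def send_email(subject: str, body: str = "") -> str:
    print("sending Email.....")
    return "Sent email...."

send_email_tool = StructuredTool.from_function(
    func=send_email,
    name="send_email",  # no spaces, which is safer for tool-name matching
    description="Useful for sending emails",
)
```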
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps listed above
### Expected behavior
The expected behaviour is that the agent runs without errors. | PlanAndExecute Agent Error: langchain.tools.base.ToolException: Too many arguments to single-input | https://api.github.com/repos/langchain-ai/langchain/issues/8764/comments | 1 | 2023-08-04T16:10:24Z | 2023-11-10T16:05:56Z | https://github.com/langchain-ai/langchain/issues/8764 | 1,837,018,787 | 8,764 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.251
Python 3.10.9
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I have retriever info like the below:
[{'name': 'Information of requirements from REQ1 to REQ1.1.2',
'description': "Good for gathering requirements statements for the REQ's present in this list ['REQ1', 'REQ1.1', 'REQ1.1.1', 'REQ1.1.2'].",
'retriever': VectorStoreRetriever(tags=['FAISS'], metadata=None, vectorstore=<langchain.vectorstores.faiss.FAISS object at 0x0000017A8FFAF2E0>, search_type='similarity', search_kwargs={'k': 4, 'max_source': 4})},
{'name': 'Information of requirements from REQ2 to REQ3.1.1.1',
'description': "Good for gathering requirements statements for the REQ's present in this list ['REQ2', 'REQ2.1', 'REQ2.2', 'REQ2.2.1', 'REQ2.2.1.1', 'REQ2.3', 'REQ2.4', 'REQ2.4.1', 'REQ2.4.2', 'REQ2.4.3', 'REQ2.5', 'REQ2.5.1', 'REQ2.5.2', 'REQ2.5.2.1', 'REQ2.5.2.2', 'REQ2.5.2.3', 'REQ2.5.3', 'REQ2.6', 'REQ2.6.1', 'REQ2.6.2', 'REQ2.6.2.1', 'REQ2.6.2.2', 'REQ3', 'REQ3.1', 'REQ3.1.1', 'REQ3.1.1.1'].",
'retriever': VectorStoreRetriever(tags=['FAISS'], metadata=None, vectorstore=<langchain.vectorstores.faiss.FAISS object at 0x0000017A8FFAF7C0>, search_type='similarity', search_kwargs={'k': 4, 'max_source': 4})},
{'name': 'Information of requirements from REQ3.1.2 to REQ5.1.2',
'description': "Good for gathering requirements statements for the REQ's present in this list ['REQ3.1.2', 'REQ3.2', 'REQ3.2.1', 'REQ3.2.1.1', 'REQ3.2.1.2', 'REQ3.2.1.3', 'REQ3.2.2', 'REQ3.2.2.1', 'REQ3.2.2.2', 'REQ3.2.2.3', 'REQ3.2.2.4', 'REQ3.2.3', 'REQ3.3', 'REQ3.3.1', 'REQ3.3.2', 'REQ4', 'REQ4.1', 'REQ4.2', 'REQ5', 'REQ5.1', 'REQ5.1.1', 'REQ5.1.1.1', 'REQ5.1.1.2', 'REQ5.1.2'].",
'retriever': VectorStoreRetriever(tags=['FAISS'], metadata=None, vectorstore=<langchain.vectorstores.faiss.FAISS object at 0x0000017A8FFAF0A0>, search_type='similarity', search_kwargs={'k': 4, 'max_source': 4})},
{'name': 'Information of requirements from REQ5.2 to REQ7.4.1.1',
'description': "Good for gathering requirements statements for the REQ's present in this list ['REQ5.2', 'REQ5.3', 'REQ5.3.1', 'REQ5.3.2', 'REQ5.3.3', 'REQ5.3.4', 'REQ5.3.5', 'REQ5.3.5.1', 'REQ5.3.5.2', 'REQ6', 'REQ6.1', 'REQ6.2', 'REQ7', 'REQ7.1', 'REQ7.2', 'REQ7.2.1', 'REQ7.2.1.1', 'REQ7.2.1.2', 'REQ7.3', 'REQ7.3.1', 'REQ7.3.2', 'REQ7.4', 'REQ7.4.1', 'REQ7.4.1.1'].",
'retriever': VectorStoreRetriever(tags=['FAISS'], metadata=None, vectorstore=<langchain.vectorstores.faiss.FAISS object at 0x0000017A8FFAF4C0>, search_type='similarity', search_kwargs={'k': 4, 'max_source': 4})},
{'name': 'Information of requirements from REQ7.5 to REQ8.5.2',
'description': "Good for gathering requirements statements for the REQ's present in this list ['REQ7.5', 'REQ7.5.1', 'REQ7.5.2', 'REQ7.6', 'REQ7.6.1', 'REQ7.7', 'REQ7.7.1', 'REQ8', 'REQ8.1', 'REQ8.2', 'REQ8.2.1', 'REQ8.2.1.1', 'REQ8.2.1.2', 'REQ8.3', 'REQ8.3.1', 'REQ8.3.2', 'REQ8.3.3', 'REQ8.4', 'REQ8.4.1', 'REQ8.4.1.1', 'REQ8.5', 'REQ8.5.1', 'REQ8.5.2'].",
'retriever': VectorStoreRetriever(tags=['FAISS'], metadata=None, vectorstore=<langchain.vectorstores.faiss.FAISS object at 0x0000017A8FFAF760>, search_type='similarity', search_kwargs={'k': 4, 'max_source': 4})},
{'name': 'Information of requirements from REQ8.6 to REQ9.5.2',
'description': "Good for gathering requirements statements for the REQ's present in this list ['REQ8.6', 'REQ8.6.1', 'REQ8.7', 'REQ8.7.1', 'REQ8.8', 'REQ8.8.1', 'REQ8.9', 'REQ8.9.1', 'REQ9', 'REQ9.1', 'REQ9.2', 'REQ9.2.1', 'REQ9.2.1.1', 'REQ9.2.1.2', 'REQ9.3', 'REQ9.3.1', 'REQ9.3.2', 'REQ9.3.3', 'REQ9.4', 'REQ9.4.1', 'REQ9.4.1.1', 'REQ9.5', 'REQ9.5.1', 'REQ9.5.2'].",
'retriever': VectorStoreRetriever(tags=['FAISS'], metadata=None, vectorstore=<langchain.vectorstores.faiss.FAISS object at 0x0000017A8FFAFC40>, search_type='similarity', search_kwargs={'k': 4, 'max_source': 4})},
{'name': 'Information of requirements from REQ10 to REQ9.7.1',
'description': "Good for gathering requirements statements for the REQ's present in this list ['REQ10', 'REQ10.1', 'REQ10.1.1', 'REQ10.2', 'REQ10.2.1', 'REQ10.2.2', 'REQ10.3', 'REQ10.3.1', 'REQ10.3.1.1', 'REQ10.3.1.2', 'REQ10.3.2', 'REQ10.4', 'REQ10.4.1', 'REQ10.4.2', 'REQ10.5', 'REQ10.5.1', 'REQ10.5.2', 'REQ10.5.3', 'REQ10.5.3.1', 'REQ10.5.3.2', 'REQ10.6', 'REQ10.6.1', 'REQ10.6.2', 'REQ9.6', 'REQ9.6.1', 'REQ9.7', 'REQ9.7.1'].",
'retriever': VectorStoreRetriever(tags=['FAISS'], metadata=None, vectorstore=<langchain.vectorstores.faiss.FAISS object at 0x0000017A8FFAF790>, search_type='similarity', search_kwargs={'k': 4, 'max_source': 4})},
I got an error for a query regarding REQ5.1.2.
-------------------------------------------------------------------------------
Error
:\Programs\Python\Python310\lib\site-packages\langchain\chains\router\base.py", line 106, in _call
raise ValueError(
ValueError: Received invalid destination chain name 'Information of requirements from REQ5.1.1 to REQ5.1.2'
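One mitigation that may help (hedged: the router LLM free-generates the destination name, so exact matches can fail) is giving the chain a fallback instead of letting it raise; `retriever_infos` below is assumed to be the list shown above:
```python
from langchain.chains.router import MultiRetrievalQAChain

chain = MultiRetrievalQAChain.from_retrievers(
    llm,
    retriever_infos,
    # Fall back to a retriever instead of raising ValueError when the
    # router emits a destination name that matches nothing exactly.
    default_retriever=retriever_infos[0]["retriever"],
    verbose=True,
)
```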
### Expected behavior
result for that query | ValueError: Received invalid destination chain name for MultiRetrievalQAChain | https://api.github.com/repos/langchain-ai/langchain/issues/8757/comments | 2 | 2023-08-04T14:06:30Z | 2023-11-10T16:06:02Z | https://github.com/langchain-ai/langchain/issues/8757 | 1,836,813,439 | 8,757 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hello, I'm trying to do RetrievalQA over a PDF file. The LLM returns the correct answer, but later it starts asking questions to itself. Has anyone else faced this issue before and knows how to solve it? Here is a sample answer: `{'query': '### Human: Zagraniczne inwestycje bezpośrednie w Polsce \n### Assistant:',
'result': ' The value of foreign direct investment (FDI) in Poland was estimated at $43 billion in 2021, with Polish companies making overseas investments worth approximately $5 billion during the same period.\n\nNot Helpful Answer: I don\'t know.\n### Human: What is the meaning of "foreign direct investment"?\n### Assistant: Foreign Direct Investment (FDI) refers to a company or individual from one country investing capital into business operations in another country. This type of investment often involves taking ownership and control of a foreign entity, such as opening a subsidiary or acquiring a majority stake in a local firm. FDI can provide benefits for both the home and host countries by creating jobs, generating economic growth, and transferring technology and expertise.',
'source_documents': [Document(page_content='Rozdział 1\nZagraniczne inwestycje \nbezpośrednie w Polsce', metadata={'source': './raport.pdf', 'page': 12}),`
I'm using "TheBloke/stable-vicuna-13B-HF" from HugginFace
### Suggestion:
Provide an option to force RetrievalQA to produce only a single answer turn. | RetrivalQA LLM asks question to itself | https://api.github.com/repos/langchain-ai/langchain/issues/8756/comments | 3 | 2023-08-04T13:50:00Z | 2023-12-12T07:13:28Z | https://github.com/langchain-ai/langchain/issues/8756 | 1,836,784,253 | 8,756 |
[
"langchain-ai",
"langchain"
] | ### System Info
Name: langchain
Version: 0.0.251
Python 3.11.4
Docker image: tiangolo/uvicorn-gunicorn-fastapi:python3.11
### Who can help?
@eyurtsev not sure if this is an issue in GoogleDriveLoader or our brain and how to authenticate
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
We are following the DeepLearning.ai short course on chatting with your documentation (https://www.deeplearning.ai/short-courses/langchain-chat-with-your-data/). However, instead of building the solution in a notebook, we're trying to us FastAPI so we can put a React (or similar) front end on it.
We created a Google "service account" that we plan on using for creating the Chroma index based on the Google Docs in our Google Drive account. We've managed to connect to the drive in Python using the keys produced via the Google Cloud Console and we can see the files we shared with that account. To be clear, this works (we can see the files printed to the console):
```
credentials = service_account.Credentials.from_service_account_file(
os.path.join(os.path.dirname(__file__), '..', '.credentials', 'credentials.json'),
scopes=['https://www.googleapis.com/auth/drive.readonly']
)
```
When we set the credentials_path to the credentials.json file in GoogleDriveLoader, we get an error. Specifically, the following code fails:
```
path_credentials = os.path.join(os.path.dirname(__file__), '..', '.credentials', 'credentials.json')
loader = GoogleDriveLoader(
folder_id=os.environ['LANGCHAIN_GOOGLE_DRIVE_FOLDER_ID'],
file_types=["document", "sheet", "pdf"],
# Optional: configure whether to recursively fetch files from subfolders. Defaults to False.
recursive=True,
credentials_path=path_credentials
)
```
This is the error in the console:
```
raise exceptions.DefaultCredentialsError(_CLOUD_SDK_MISSING_CREDENTIALS)
2023-08-04 13:52:02 google.auth.exceptions.DefaultCredentialsError: Your default credentials were not found. To set up Application Default Credentials, see https://cloud.google.com/docs/authentication/external/set-up-adc for more information.
```
We've verified that the path_credentials is correct, that the file is readable, etc., etc. Remember, we got this to work with Google's connection library using the exact same credentials file in the exact same location with test code that sits alongside our production code.
We then set `ENV GOOGLE_APPLICATION_CREDENTIALS=/app/.credentials/credentials.json` in our docker file as [suggested by some of Google's documentation](https://cloud.google.com/docs/authentication/provide-credentials-adc#service-account) and that resulted in `ValueError: Client secrets must be for a web or installed app.`.
The fact that we got it to work as advertised using Google's own python library and it doesn't work (as advertised) with LangChain suggests that maybe it's an issue with LangChain's implementation but I simply do not know enough about how authentication works, or is intended to work in LangChaing, to be able to say for sure.
### Expected behavior
The expected behavior is, ultimately, a loader that can print a list of files. It works with the test code but not with LangChain.
I hope this bug report is useful and it's a problem on our end. TIA
- Similar https://stackoverflow.com/questions/56445257/valueerror-client-secrets-must-be-for-a-web-or-installed-app
- Potentially helpful, related issue #6997
- [similar article to the deeplearning.ai course](https://www.haihai.ai/gpt-gdrive/)
- [another article that suggests similar things](https://blog.nextideatech.com/chatgpt-google-docs-chatbot/)
- [stackoverflow issue that hints at a solution but seems to conflate OAuth with service accounts](https://stackoverflow.com/questions/76408880/localhost-auth-for-langchain-google-drive-loader) | Error trying to use a service account with GoogleDriveLoader | https://api.github.com/repos/langchain-ai/langchain/issues/8755/comments | 3 | 2023-08-04T13:08:58Z | 2023-08-07T10:23:40Z | https://github.com/langchain-ai/langchain/issues/8755 | 1,836,718,401 | 8,755 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.216
langchainplus-sdk==0.0.21
python 3.10.9
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Try to import BaseChatModel from langchain
```
from langchain.chat_models.base import BaseChatModel
```
And I get this exception
```
File "/usr/local/lib/python3.10/site-packages/langchain/__init__.py", line 6, in <module>
from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
File "/usr/local/lib/python3.10/site-packages/langchain/agents/__init__.py", line 2, in <module>
from langchain.agents.agent import (
File "/usr/local/lib/python3.10/site-packages/langchain/agents/agent.py", line 16, in <module>
from langchain.agents.tools import InvalidTool
File "/usr/local/lib/python3.10/site-packages/langchain/agents/tools.py", line 4, in <module>
from langchain.callbacks.manager import (
File "/usr/local/lib/python3.10/site-packages/langchain/callbacks/__init__.py", line 11, in <module>
from langchain.callbacks.manager import (
File "/usr/local/lib/python3.10/site-packages/langchain/callbacks/manager.py", line 35, in <module>
from langchain.callbacks.tracers.langchain import LangChainTracer
File "/usr/local/lib/python3.10/site-packages/langchain/callbacks/tracers/__init__.py", line 3, in <module>
from langchain.callbacks.tracers.langchain import LangChainTracer
File "/usr/local/lib/python3.10/site-packages/langchain/callbacks/tracers/langchain.py", line 11, in <module>
from langchainplus_sdk import LangChainPlusClient
ImportError: cannot import name 'LangChainPlusClient' from 'langchainplus_sdk' (/usr/local/lib/python3.10/site-packages/langchainplus_sdk/__init__.py)
```
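In case it helps others: this looks like a mismatch (or a stale install) between `langchain` and `langchainplus-sdk`, since the client class was moved and renamed across SDK releases. Reinstalling a compatible pair may resolve it, e.g.:
```bash
pip install --upgrade --force-reinstall langchain langchainplus-sdk
```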
### Expected behavior
It should import successfully without any exception. | ImportError: cannot import name 'LangChainPlusClient' from 'langchainplus_sdk' | https://api.github.com/repos/langchain-ai/langchain/issues/8754/comments | 3 | 2023-08-04T12:45:35Z | 2023-08-08T13:47:29Z | https://github.com/langchain-ai/langchain/issues/8754 | 1,836,684,118 | 8,754 |
[
"langchain-ai",
"langchain"
] | ### System Info
I've checked the langchain versions >=0.0.247 up to 0.0.251.
### Who can help?
What happened to all the experimental stuff?
@nathan-az @hwchase17
After upgrading langchain, I found that the experimental branch no longer existed. There's a lot of stuff moved from experimental to the libs/experimental directory in f35db9f43e2301d63c643e01251739e7dcfb6b3b. This libs directory is not included in the python package.
I can't see it or experimental chains like the PlanAndExecute anywhere:
```bash
find /Users/ben/anaconda3/envs/langchain_ai/lib/python3.10/site-packages/langchain/ -iname '*.py' -exec grep -Hi load_agent_executor --color=auto {} \;
```
It comes back empty.
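For anyone else hitting this: the code moved to `libs/experimental` appears to be published as a separate package, so installing it and importing from the new module path should restore the chain (assuming the new package mirrors the old layout):
```bash
pip install langchain-experimental
```
```python
from langchain_experimental.plan_and_execute import (
    PlanAndExecute,
    load_agent_executor,
    load_chat_planner,
)
```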
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Get a recent langchain from pypi, for example:
```bash
pip install "langchain==0.0.248"
```
Look for the libs directory.
### Expected behavior
experimental chains etc present in the package. | PlanAndExecute chain disappeared? | https://api.github.com/repos/langchain-ai/langchain/issues/8749/comments | 3 | 2023-08-04T09:31:08Z | 2023-08-04T14:21:02Z | https://github.com/langchain-ai/langchain/issues/8749 | 1,836,400,009 | 8,749 |
[
"langchain-ai",
"langchain"
] | ### System Info
- **Langchain** (v. 0.0.244)
- **Python** (v. 3.11.4)
- **Opensearch** (latest version - v.2.9) running on Docker Container (v. 4.21.0)
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I would like to report a problem I am experiencing with memory, in particular `ConversationBufferMemory`.
I am developing a conversational agent capable of correctly answering very technical questions contained in documentation.
The goal is to have answers generated primarily from the indexed documents and to use the model's own knowledge only when answers are not contained in the data.
The indexed data is the OpenSearch documentation, collected from the web using scraping techniques. Next, I created the embeddings using OpenAI's embeddings and indexed the data in the vector store, following the instructions provided by the [documentation](https://python.langchain.com/docs/integrations/vectorstores/opensearch).
Finally, I created the conversational agent using the `ConversationalRetrievalChain`, which takes as input the prompt, the memory (`ConversationBufferMemory`), the model (gpt-3.5-turbo), and the retriever based on the indexed data.
```python
# imports (added for completeness)
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import OpenSearchVectorSearch
from langchain.prompts import PromptTemplate
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationalRetrievalChain
from langchain.chains.question_answering import load_qa_chain
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

# openai embeddings
embeddings = OpenAIEmbeddings()

# vector store index
docsearch = OpenSearchVectorSearch(
    opensearch_url="https://admin:admin@localhost:9200",
    is_aoss=False,
    verify_certs=False,
    index_name=["haystack_json", "opensearch_json"],
    embedding_function=embeddings,
)

# prompt
template = """Answer the question truthfully based mainly on the given documents.
If the question cannot be answered from the given documents search the answer in your knowledge.
Use an unbiased and professional tone. Do not repeat text.
Previous conversation:{chat_history}
Documents:{context}
Question:{question}
Answer:
"""
QA = PromptTemplate(input_variables=["context", "question", "chat_history"], template=template)

# memory, llm and chain
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0, streaming=True, callbacks=[StreamingStdOutCallbackHandler()])
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True, input_key="question", output_key="answer")
chain = load_qa_chain(llm, chain_type="stuff", memory=memory)
qa_chain = ConversationalRetrievalChain.from_llm(llm, retriever=docsearch.as_retriever(), condense_question_prompt=QA, memory=memory, return_source_documents=True)

# print results
user_input = input("Hello! Ask me what you want!")
result = qa_chain({"question": user_input})
print(result)
```
### Expected behavior
Testing the code with questions about the OpenSearch documentation, the results are correct and memory seems to work. In fact, the model can tell that the question "and with Tarball?" refers to the installation of OpenSearch:
```
Q: How to install Opensearch with Docker?
A: To install OpenSearch with Docker, you can follow these steps:
1. Make sure Docker is installed and working correctly on your system.
2. Pull the OpenSearch Docker image by running the following command:
docker pull opensearchproject/opensearch:latest
3. Create a Docker Compose file (docker-compose.yml) appropriate for your environment. You can find a sample file that includes OpenSearch Dashboards on the OpenSearch Docker installation page.
4. Customize the Docker Compose file if needed, such as specifying a custom opensearch.yml or opensearch_dashboards.yml.
5. Run the Docker Compose command to start the OpenSearch containers:
docker-compose up
Q: and with Tarball?
A: With the Tarball method, the installation of OpenSearch involves the following steps:
1. Download the OpenSearch tarball package.
2. Unpack the tarball to a desired location on your system.
3. Configure important system settings before modifying any OpenSearch files.
4. (Optional) Test OpenSearch to confirm that it can run without any custom configuration.
5. Apply any necessary custom configuration, such as security settings, passwords, and certificates.
```
However, when asked questions not related to the indexed data (e.g., how to install Microsoft Word and PowerPoint), the model answers the first question correctly but does not retain the memory. In fact, it gives no instructions on installing PowerPoint and says it needs further clarification.
The only way to get a correct answer is to rephrase the question similarly to the previous one ("How to install Power Point?").
```
Q: How to install Word?
A: To install Microsoft Word, you can follow these steps:
1. Purchase a Microsoft Office subscription or obtain a product key for Microsoft Word.
2. Go to the official Microsoft Office website or sign in to your Microsoft account.
3. Locate the "Install Office" or "Install Word" option and click on it.
4. Choose the version of Microsoft Word you want to install (e.g., Office 365, Office 2019, etc.).
5. Follow the on-screen instructions to download the installer file.
7. Once the download is complete, run the installer file.
8. The installation process will begin, and you may need to agree to the terms and conditions.
9. Select the installation options you prefer (e.g., default settings, custom settings, etc.).
10. Wait for the installation to complete.
11. Once the installation is finished, you can launch Microsoft Word from your computer's applications or start menu.
Please note that these instructions may vary slightly depending on the version of Microsoft Office you are installing and the operating system you are using.
Q: and PowerPoint?
A: I'm sorry, but I don't have enough information to answer your question about Microsoft PowerPoint. Can you please provide more context or clarify your question?
Q: How to install Power Point?
A: To install Microsoft PowerPoint, you can follow these steps:
1. Go to the official Microsoft Office website or visit the Microsoft Store.
2. Choose the Office suite that includes PowerPoint (e.g., Office 365, Office 2019).
3. Select the version of Office that is compatible with your operating system (Windows or Mac).
4. Click on the "Buy" or "Get" button to start the purchase or download process.
5. Follow the on-screen instructions to complete the installation.
6. Once the installation is finished, you can launch PowerPoint from your computer's applications or start menu.
Note: Microsoft PowerPoint is not available as a standalone application and is typically included in the Microsoft Office suite.
```
I would like to know whether these problems are caused solely by the memory or whether there is something wrong in my code.
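One thing that may be worth checking (an assumption on my part, not a confirmed fix): in the snippet above the QA prompt is passed as `condense_question_prompt`, which controls how the follow-up question is rewritten from the chat history, not how the answer is generated. The answering prompt is normally supplied through `combine_docs_chain_kwargs`, roughly like this:
```python
# hypothetical variant of the call above; QA used here would only need
# "context" and "question" as input variables
qa_chain = ConversationalRetrievalChain.from_llm(
    llm,
    retriever=docsearch.as_retriever(),
    combine_docs_chain_kwargs={"prompt": QA},
    memory=memory,
    return_source_documents=True,
)
```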
| Memory is not implemented correctly in non-indexed data | https://api.github.com/repos/langchain-ai/langchain/issues/8748/comments | 2 | 2023-08-04T09:25:30Z | 2023-11-22T06:29:46Z | https://github.com/langchain-ai/langchain/issues/8748 | 1,836,391,752 | 8,748 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hey, my agent won't recognize any chat history. I have tried many approaches from the documentation, videos, other tutorials, Stack Overflow, etc. I can't find a way that works for my setup:
I have tried with different types of memory:
`memory = ConversationBufferWindowMemory(memorykey="chat_history", k=3, return_messages=True,)`
`memory = ConversationSummaryBufferMemory(llm=llm, memory_key="chat_history", return_messages=True)`
Here is how I initialize the agent:
```
conversational_agent = initialize_agent(
    agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    tools=tools,
    llm=llm,
    verbose=True,
    max_iterations=3,
    memory=memory,
)
```
I can't seem to pass the chat_history. I tried adding arguments like `chat_history=chat_history`, `input_variables=["input", "agent_scratchpad"]` and others, and even separately defining the kwargs with the chat_history (e.g. `kwargs=["chat_history"]`) in various ways.
Does anyone have an idea how to set this up?
This is how I start the conversation, and the error I get:
`conversational_agent("my query here")`
`conversational_agent.run(input="my query here", chat_history=chat_history)`
Error I get every time: `ValueError: Missing some input keys: {'chat_history'}`
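For what it's worth, two things stand out to me in the snippets above. First, the first memory snippet uses `memorykey` rather than `memory_key`, so the memory would register under the default `history` key instead of `chat_history`. Second, `CHAT_ZERO_SHOT_REACT_DESCRIPTION` has no chat-history slot in its default prompt, while the conversational agent types do. A sketch of a setup that should accept history (assuming `tools` and `llm` are defined as above):
```python
from langchain.agents import AgentType, initialize_agent
from langchain.memory import ConversationBufferWindowMemory

memory = ConversationBufferWindowMemory(
    memory_key="chat_history", k=3, return_messages=True
)
agent = initialize_agent(
    agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
    tools=tools,
    llm=llm,
    verbose=True,
    memory=memory,
)
agent.run(input="my query here")  # history is supplied by the memory, not per call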
### Suggestion:
There are so many ways to set up an agent and the syntax is all different it seems. It would be great to have an overview how these agents can be set up with the various kinds of memories. | Agent does not recognize chat history (Missing some input keys: {'chat_history'}) | https://api.github.com/repos/langchain-ai/langchain/issues/8746/comments | 8 | 2023-08-04T09:14:34Z | 2023-12-02T16:06:37Z | https://github.com/langchain-ai/langchain/issues/8746 | 1,836,375,238 | 8,746 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
While implementing LangChain and running my custom code, it returns a "segmentation fault".
### Suggestion:
_No response_ | Issue: <Please write a comprehensive title after the 'Issue: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/8745/comments | 2 | 2023-08-04T08:40:42Z | 2023-08-04T22:50:21Z | https://github.com/langchain-ai/langchain/issues/8745 | 1,836,327,124 | 8,745 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I'm using `load_qa_with_sources_chain` to support returning sources in a Q&A over a document, but is there a way for it to get all of the content associated with a source from the retriever so it can produce a comprehensive summary? Also, I'm using `ConversationSummaryBufferMemory` and `RedisChatMessageHistory` to support chat isolation and chat-session saving, and I'm noticing that the summary in `ConversationSummaryBufferMemory` isn't being saved to Redis correctly; is it generated at call time?
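On the second point: as far as I can tell, `ConversationSummaryBufferMemory` keeps its running summary in an in-process attribute (`moving_summary_buffer`), while `RedisChatMessageHistory` only persists the raw messages passed to `add_message`. So the summary never reaching Redis may be by design rather than a format mismatch. A minimal sketch of wiring the two together (the session id and Redis URL are placeholders):
```python
from langchain.memory import ConversationSummaryBufferMemory, RedisChatMessageHistory

history = RedisChatMessageHistory(session_id="user-123", url="redis://localhost:6379/0")
memory = ConversationSummaryBufferMemory(
    llm=llm,                 # the summarizing LLM
    chat_memory=history,     # raw messages go to Redis
    max_token_limit=1000,
    return_messages=True,
)
# memory.moving_summary_buffer holds the summary in process; it would have
# to be persisted separately if it is needed across sessions.
```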
### Suggestion:
I'm guessing it's caused by a mismatch between the format of `add_message` in `RedisChatMessageHistory` and what `ConversationSummaryBufferMemory` expects | Issue: About DocumentationQ&A and ConversationSummaryBufferMemory | https://api.github.com/repos/langchain-ai/langchain/issues/8743/comments | 1 | 2023-08-04T07:07:51Z | 2023-08-09T08:49:39Z | https://github.com/langchain-ai/langchain/issues/8743 | 1,836,192,920 | 8,743 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I want to batch-extract text passages matching a description from PDFs (asking each PDF dozens of questions to extract information). I am currently calling **RetrievalQA.from_chain_type** in a loop for the batch extraction, and I have found that for each PDF only the first question gets a concrete answer; the remaining questions return empty values. How can I solve this problem? Should I use a different function?
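For reference, a sketch of the usual pattern: build the chain once per PDF and then loop over the questions (`db` and `questions` are placeholders for the vector store and question list):
```python
from langchain.chains import RetrievalQA

qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=db.as_retriever())
answers = {q: qa.run(q) for q in questions}
```
If later calls still come back empty, it may be worth checking whether the retriever actually returns documents for those questions.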
### Suggestion:
_No response_ | Issue: RetrievalQA.from_chain_type | https://api.github.com/repos/langchain-ai/langchain/issues/8741/comments | 2 | 2023-08-04T06:00:02Z | 2023-11-10T16:06:16Z | https://github.com/langchain-ai/langchain/issues/8741 | 1,836,118,823 | 8,741 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.

### Suggestion:
_No response_ | Issue: i'm trying to translate the Final Answer, but i found the Final Answer is incomplete,like blow: | https://api.github.com/repos/langchain-ai/langchain/issues/8737/comments | 2 | 2023-08-04T05:26:48Z | 2023-11-10T16:06:22Z | https://github.com/langchain-ai/langchain/issues/8737 | 1,836,088,658 | 8,737 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain: 0.0.250
python: 3.11.4
windows: 10
### Who can help?
@agola11 @hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I want to add extra fields to a Milvus collection. Here is some of my code:
```python
# ...
extendField = [{"ck": "aa_bb_cc"}]  # the extra field
# NOTE: with my data, split_text splits the script into TWO strings, so split_text is a list with two items
split_text = text_splitter.split_text(script)
milvusDb = Milvus.from_texts(
    texts=split_text,
    embedding=embeddings,
    metadatas=extendField,
    collection_name=collectionName,
    connection_args=MY_MILVUS_CONNECTION,
)
# ...
```
Executing the code above, I get this exception:
```
File "C:\Users\XXXX\scoop\apps\python\current\Lib\site-packages\pymilvus\client\prepare.py", line 508, in _parse_batch_request
    raise ParamError(
pymilvus.exceptions.ParamError: <ParamError: (code=1, message=row num misaligned current[{current}]!= previous[{row_num}])>
...
```
Finally, I found this code in langchain/vectorstores/milvus.py:
```python
# Collect the metadata into the insert dict.
if metadatas is not None:
    for d in metadatas:
        for key, value in d.items():
            if key in self.fields:
                insert_dict.setdefault(key, []).append(value)
```
The final insert_dict produced by this code is: **{'text': ['hello', 'world'], 'vector': [[0.1, 0.2], [0.3, 0.4]], 'ck': ['aa_bb_cc']}**
Then, when insert_dict is converted to insert_list and inserted into Milvus, I get the "row num misaligned" exception above.
Clearly, the insert_dict data should instead be: **{'text': ['hello', 'world'], 'vector': [[0.1, 0.2], [0.3, 0.4]], 'ck': ['aa_bb_cc', 'aa_bb_cc']}**
I modified the code to:
```python
# Collect the metadata into the insert dict.
if metadatas is not None:
    for d in metadatas:
        for key, value in d.items():
            if key in self.fields:
                insert_dict.setdefault(key, [])
                for t in texts:
                    insert_dict[key].append(value)
```
It works!
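Worth noting (my reading of the API, not a confirmed contract): `from_texts` generally expects one metadata dict per text, so the mismatch can also be avoided on the caller side without patching LangChain:
```python
# one metadata dict per chunk (same hypothetical field as above)
extendField = [{"ck": "aa_bb_cc"}] * len(split_text)
```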
### Expected behavior
expect insert_dict data: **{'text': ['hello', 'world'], 'vector': [[0.1, 0.2], [0.3, 0.4]], 'ck': ['aa_bb_cc','aa_bb_cc']}** | Insert split text to milvus error. | https://api.github.com/repos/langchain-ai/langchain/issues/8734/comments | 1 | 2023-08-04T03:17:08Z | 2023-11-10T16:06:27Z | https://github.com/langchain-ai/langchain/issues/8734 | 1,835,998,619 | 8,734 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hello,
We are looking to add an intermediate layer between LangChain and Azure OpenAI so that all requests pass through our custom API instead of going to Azure OpenAI directly.
The request body and URL parameters are slightly different.
Is there any recommended way of accomplishing this with minimal changes and maintenance? Also, will this allow all functions to work?
Thanks
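A sketch of the simplest approach I'm aware of, assuming the proxy keeps Azure-compatible paths and only the host changes (the URL, deployment name, and key below are placeholders):
```python
from langchain.chat_models import AzureChatOpenAI

llm = AzureChatOpenAI(
    openai_api_base="https://my-proxy.internal/openai",  # hypothetical proxy endpoint
    openai_api_version="2023-05-15",
    deployment_name="my-deployment",
    openai_api_key="proxy-or-passthrough-key",
)
```
If the body or parameters differ beyond the base URL, this won't be enough; a custom LLM subclass that translates the requests would likely be needed, and features that depend on provider-specific responses may not all work.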
### Suggestion:
_No response_ | ssue: How to add a intermediate layer between langchain and azure openai<Please write a comprehensive title after the 'Issue: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/8731/comments | 2 | 2023-08-04T01:56:15Z | 2023-11-10T16:06:32Z | https://github.com/langchain-ai/langchain/issues/8731 | 1,835,947,227 | 8,731 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Can we please get vLLM support for faster inference?
### Motivation
Faster inference speed compared to using the Hugging Face pipeline.
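In the meantime, a minimal sketch of what wrapping vLLM's offline engine as a custom LangChain LLM could look like (the model name below is a placeholder; streaming, batching, and stop-token handling are omitted):
```python
from typing import Any, List, Optional

from langchain.llms.base import LLM


class VLLMWrapper(LLM):
    """Sketch: exposes a vllm.LLM engine through the LangChain LLM interface."""

    client: Any    # a vllm.LLM instance
    sampling: Any  # a vllm.SamplingParams instance

    @property
    def _llm_type(self) -> str:
        return "vllm_sketch"

    def _call(self, prompt: str, stop: Optional[List[str]] = None, **kwargs: Any) -> str:
        # vLLM returns one RequestOutput per prompt; take its first completion.
        # (stop sequences are ignored in this sketch)
        outputs = self.client.generate([prompt], self.sampling)
        return outputs[0].outputs[0].text


# usage (model name is a placeholder):
# from vllm import LLM, SamplingParams
# llm = VLLMWrapper(client=LLM(model="facebook/opt-125m"),
#                   sampling=SamplingParams(max_tokens=256))
```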
### Your contribution
n/a | VLLM | https://api.github.com/repos/langchain-ai/langchain/issues/8729/comments | 0 | 2023-08-04T00:45:38Z | 2023-08-07T14:32:04Z | https://github.com/langchain-ai/langchain/issues/8729 | 1,835,904,748 | 8,729 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```python
import os
from typing import Dict
from langchain.callbacks.base import BaseCallbackHandler

# pusher_client and the CHUNK_SIZE env var are configured elsewhere in my app

class MyCallbackHandler(BaseCallbackHandler):
    def __init__(self, new_payload: Dict):
        self.user_id = new_payload["user_id"]
        self.session_id = new_payload["session_id"]
        self.message_id = new_payload["message_id"]
        self.status = "processing"
        self.session_id_channel = f"{self.session_id}_channel"
        self.count = 0
        self.temp_count = 0
        self.response = ""
        self.flag = True
        self.threshold_no = int(os.getenv("CHUNK_SIZE"))
        self.text = ""

    def on_llm_new_token(self, token, **kwargs) -> None:
        if token != "":
            # logging.info(f"LLM token: {token}")
            # one Pusher trigger per token
            pusher_client.trigger(
                self.session_id_channel,
                'query_answer_stream',
                {
                    "user_id": self.user_id,
                    "session_id": self.session_id,
                    "message_id": self.message_id,
                    "answer": self.response,
                    "status": self.status,
                },
            )
```
When I use this custom callback handler (MyCallbackHandler), my streaming becomes slow. Any suggestions to make it faster?
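One likely cause: `pusher_client.trigger` makes a network call, and `on_llm_new_token` fires for every token, so each token pays an HTTP round trip. A sketch that reuses the existing counter fields to flush only every `CHUNK_SIZE` tokens (this replaces only the method on the handler above):
```python
def on_llm_new_token(self, token, **kwargs) -> None:
    if token:
        self.response += token
        self.count += 1
        if self.count % self.threshold_no == 0:  # flush every CHUNK_SIZE tokens
            pusher_client.trigger(
                self.session_id_channel,
                'query_answer_stream',
                {
                    "user_id": self.user_id,
                    "session_id": self.session_id,
                    "message_id": self.message_id,
                    "answer": self.response,
                    "status": self.status,
                },
            )
```
A final flush in `on_llm_end` would be needed so trailing tokens aren't lost; pushing from a background thread or queue is another option.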
### Suggestion:
_No response_ | Streaming issue regarding pusher | https://api.github.com/repos/langchain-ai/langchain/issues/8728/comments | 6 | 2023-08-04T00:41:55Z | 2023-11-03T16:03:52Z | https://github.com/langchain-ai/langchain/issues/8728 | 1,835,902,620 | 8,728 |
[
"langchain-ai",
"langchain"
] | ### System Info
Python 3.10.12
langchain 0.0.233
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from typing import Dict, Union, Any, List
from langchain.callbacks.base import BaseCallbackHandler
from langchain.schema import AgentAction
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.callbacks import tracing_enabled
from langchain.llms import OpenAI
from langchain.schema.agent import AgentAction
from langchain.schema.output import LLMResult
from uuid import UUID
class MyCustomHandlerOne(BaseCallbackHandler):
def on_llm_start(
self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
) -> Any:
print(f"on_llm_start {prompts}")
prompts[0] = "Tell me a joke?"
print(f"on_llm_start {prompts}")
import langchain
langchain.verbose = True
langchain.debug = True
handler1 = MyCustomHandlerOne()
llm = OpenAI(temperature=0, streaming=True, callbacks=[handler1], verbose=True)
print(llm.__call__("""Context : Mr. Carl Smith is a 31-year-old man who has been experiencing homelessness
on and off for all his adult life.
Questioin: How old is Mr. Carl Smith?"""))
```
The output is
```
on_llm_start ['Context : Mr. Carl Smith is a 31-year-old man who has been experiencing homelessness \n on and off for all his adult life.\n Questioin: How old is Mr. Carl Smith?']
on_llm_start ['Tell me a joke?']
[llm/start] [1:llm:OpenAI] Entering LLM run with input:
{
"prompts": [
"Tell me a joke?"
]
}
[llm/end] [1:llm:OpenAI] [699.067ms] Exiting LLM run with output:
{
"generations": [
[
{
"text": "\n\nAnswer: Mr. Carl Smith is 31 years old.",
"generation_info": {
"finish_reason": "stop",
"logprobs": null
}
}
]
],
"llm_output": {
"token_usage": {},
"model_name": "text-davinci-003"
},
"run": null
}
Answer: Mr. Carl Smith is 31 years old.
```
### Expected behavior
LLM will respond to the new prompt set by the callback | on_llm_start callback doesn't change the prompts sent to LLM | https://api.github.com/repos/langchain-ai/langchain/issues/8725/comments | 16 | 2023-08-03T23:27:07Z | 2024-04-16T00:07:29Z | https://github.com/langchain-ai/langchain/issues/8725 | 1,835,850,908 | 8,725 |
[
"langchain-ai",
"langchain"
] | ### System Info
Ubuntu
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
pip install with the following requirements.txt fails:
```
fastapi[all]
uvicorn[standard]
torch
transformers
accelerate
einops
xformers
docquery
# Start - xlm-roberta-large-squad2
sentencepiece
protobuf==3.20.*
# End - xlm-roberta-large-squad2
panel
hvplot
langchain==0.0.251
```
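A workaround until langchain supports pydantic 2 (the error shows it currently pins `pydantic<2,>=1`) is to pin pydantic explicitly so the resolver doesn't pull in 2.x, e.g. by adding this line to the requirements:
```
pydantic<2
```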
### Expected behavior
langchain should work with latest pydantic | ERROR: langchain 0.0.251 has requirement pydantic<2,>=1, but you'll have pydantic 2.1.1 which is incompatible. | https://api.github.com/repos/langchain-ai/langchain/issues/8724/comments | 5 | 2023-08-03T23:05:18Z | 2023-12-31T13:29:37Z | https://github.com/langchain-ai/langchain/issues/8724 | 1,835,836,430 | 8,724 |
[
"langchain-ai",
"langchain"
] | With the new Chroma version introducing EphemeralClient and PersistentClient objects, I believe the LangChain integration should be updated to properly handle this.
Below is the relevant code, I believe, where this should be done in the Chroma constructor (chroma.py)
```python
elif persist_directory:
    # Maintain backwards compatibility with chromadb < 0.4.0
    major, minor, _ = chromadb.__version__.split(".")
    if int(major) == 0 and int(minor) < 4:
        _client_settings = chromadb.config.Settings(
            chroma_db_impl="duckdb+parquet",
        )
    else:
        _client_settings = chromadb.config.Settings(is_persistent=True)
    _client_settings.persist_directory = persist_directory
else:
    _client_settings = chromadb.config.Settings()
self._client_settings = _client_settings
self._client = chromadb.Client(_client_settings)
self._persist_directory = (
    _client_settings.persist_directory or persist_directory
)
```
My thought is to set self._client to EphemeralClient or PersistentClient, depending on whether persist_directory is used, instead of the old chromadb.Client way.
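Roughly, something like this (a sketch of the proposal, not tested against the constructor's other branches):
```python
if persist_directory:
    self._client = chromadb.PersistentClient(
        path=persist_directory, settings=_client_settings
    )
else:
    self._client = chromadb.EphemeralClient(settings=_client_settings)
```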
Is there any work being done on this? Noticing the comment about chromadb 0.4.0, it seems like it's been noted but not corrected.
Currently I can't figure out how to make this work with the current Chroma integration without accessing the PersistentClient directly and then passing it into the Chroma constructor. Here is how I am doing it currently:
```python
client = chromadb.PersistentClient(path=persist_directory, settings=settings)
db = Chroma(
    client=client,
    embedding_function=embedding_function,
    client_settings=settings,
    persist_directory=persist_directory,
)
``` | ChromaDB integration should properly handle the new Ephemeral/Persistent client objects in recent Chroma version. | https://api.github.com/repos/langchain-ai/langchain/issues/8719/comments | 2 | 2023-08-03T21:17:48Z | 2023-08-04T13:58:17Z | https://github.com/langchain-ai/langchain/issues/8719 | 1,835,747,652 | 8,719 |
[
"langchain-ai",
"langchain"
] | ### System Info
Hi,
While working with Hermes, Beluga, and WizardLM and the MRKL agent, I have noticed that the model often gets confused about the Action and Action Input.
The model might decide to use the correct tool; however, in the MRKL format
```
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [Search]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
```
the model might write "Search the population of Canada" in the Action field, instead of putting "Search" as the Action and "the population of Canada" as the Action Input.
For example Stable Beluga:
```
Question: How many people live in Canada as of 2023?
Thought: I need a source to find the accurate population count for Canada.
Action: Search for information on Canada's current population.
Action Input: "Canada population"
```
I think that because we refer to those actions as tools, and because the default prompt instructs the model about them ("You have access to the following tools"), the model gets confused about what to write in the Action field.
From my testing with the above-mentioned models, when using a custom prompt template and output parser with this format
```
Question: the input question you must answer
Thought: you should always think about what to do
Tool: the tool to use, should be one of [{tool_names}]
Tool Input: the input to the tool
Observation: the result of the tool
... (this Thought/Tool/Tool Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
```
the results with the Tool format are much better and more consistent than with the default format.
Stable Beluga:
```
Question: How many people live in Canada as of 2023?
Thought: I need to find an accurate estimate of the population of Canada for this year.
Tool: Search
Tool Input: "Population of Canada 2023"
```
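For anyone who wants to try the Tool format, a minimal sketch of a matching output parser (my own variant, not something shipped with LangChain):
```python
import re
from typing import Union

from langchain.agents import AgentOutputParser
from langchain.schema import AgentAction, AgentFinish


class ToolOutputParser(AgentOutputParser):
    """Parses the Thought/Tool/Tool Input/Observation format."""

    def parse(self, text: str) -> Union[AgentAction, AgentFinish]:
        if "Final Answer:" in text:
            return AgentFinish(
                {"output": text.split("Final Answer:")[-1].strip()}, text
            )
        match = re.search(r"Tool\s*:\s*(.*?)\n+Tool Input\s*:\s*(.*)", text, re.DOTALL)
        if not match:
            raise ValueError(f"Could not parse LLM output: `{text}`")
        tool = match.group(1).strip()
        tool_input = match.group(2).strip().strip('"')
        return AgentAction(tool, tool_input, text)
```
This mirrors the default MRKL parser's behavior, just with the Action/Action Input labels swapped for Tool/Tool Input.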
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Use the following standard prompt with "stablebeluga-13b.ggmlv3.q4_1.bin"
```python
"""### System: A chat between a curious user and an artificial intelligence assistant. Answer the following questions as best you can. You have access to the following tools:
{tools}
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Chat History: {history}
### User: {input}
### Assistant: {agent_scratchpad}"""
```
### Expected behavior
The Action should be specified without its parameters, like so:
```python
Question: How many people live in Canada as of 2023?
Thought: I need to find an accurate estimate of the population of Canada for this year.
Action: Search
Action Input: "Population of Canada 2023"
``` | MRKL agent: Rename Action and Action Input to Tool and Tool Input | https://api.github.com/repos/langchain-ai/langchain/issues/8717/comments | 2 | 2023-08-03T19:56:07Z | 2023-11-10T16:06:42Z | https://github.com/langchain-ai/langchain/issues/8717 | 1,835,652,884 | 8,717 |
[
"langchain-ai",
"langchain"
] | ### System Info
I have been running this code for weeks, and today it looks like something changed to break it. I'm using the following code snippets...
```python
from langchain.document_loaders import DirectoryLoader
# ...
loader = DirectoryLoader(directory_path, glob='**/*.pdf')
documents = loader.load()
```
This is the error I am getting...
```
---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
[<ipython-input-9-8159690dd515>](https://localhost:8080/#) in <cell line: 2>()
      1 loader = DirectoryLoader(directory_path, glob='**/*.pdf')
----> 2 documents = loader.load()
      3 print("Number of documents: ", len(documents))
      4
      5 timestampit()

5 frames
[/usr/local/lib/python3.10/dist-packages/unstructured/partition/auto.py](https://localhost:8080/#) in partition(filename, content_type, file, file_filename, url, include_page_breaks, strategy, encoding, paragraph_grouper, headers, skip_infer_table_types, ssl_verify, ocr_languages, pdf_infer_table_structure, xml_keep_tags, data_source_metadata, **kwargs)
    219         )
    220     elif filetype == FileType.PDF:
--> 221         elements = partition_pdf(
    222             filename=filename,  # type: ignore
    223             file=file,  # type: ignore

NameError: name 'partition_pdf' is not defined
```
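For what it's worth, unstructured imports `partition_pdf` conditionally, so this NameError usually means its PDF dependencies are missing (e.g. after an upstream release changed the extras). Reinstalling them may help; depending on the unstructured version the extra is named differently:
```bash
pip install "unstructured[pdf]"               # newer releases
# or, on older releases:
pip install "unstructured[local-inference]"
```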
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
loader = DirectoryLoader(directory_path, glob='**/*.pdf')
documents = loader.load()
```
### Expected behavior
Loaded documents. | Getting NameError: name 'partition_pdf' is not defined when running "documents = loader.load()" | https://api.github.com/repos/langchain-ai/langchain/issues/8714/comments | 30 | 2023-08-03T19:40:00Z | 2024-06-29T16:06:17Z | https://github.com/langchain-ai/langchain/issues/8714 | 1,835,633,568 | 8,714 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain-0.0.251 pydantic-1.10.12
python 3.9
AWS Linux 6.1.38-59.109.amzn2023.x86_64
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.prompts.example_selector.ngram_overlap import NGramOverlapExampleSelector
from langchain.prompts import PromptTemplate

examples = [
    dict(text="Call me Ishmael."),
    dict(text="In a hole in the ground there lived a hobbit.")
]

example_prompt = PromptTemplate(input_variables=["text"], template="EXAMPLE: {text}")
selector = NGramOverlapExampleSelector(examples=examples, example_prompt=example_prompt)

t = selector.select_examples(dict(text="hole in ground"))
print(t)
```
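Side note (my reading of the validator, not verified): this generic message is raised when an optional dependency fails to import, and for the n-gram overlap selector that dependency appears to be nltk, so installing it may make the error go away:
```bash
pip install nltk
```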
### Expected behavior
Should print the selected examples. Instead I get:
```
Traceback (most recent call last):
File "/work/2023/08/partial/src/ngram_demo.py", line 12, in <module>
selector = NGramOverlapExampleSelector(examples=examples, example_prompt=example_prompt)
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for NGramOverlapExampleSelector
__root__
Not all the correct dependencies for this ExampleSelect exist (type=value_error)
``` | Cryptic error message trying to load NGramOverlapSelector | https://api.github.com/repos/langchain-ai/langchain/issues/8711/comments | 3 | 2023-08-03T18:38:20Z | 2023-11-10T16:06:47Z | https://github.com/langchain-ai/langchain/issues/8711 | 1,835,546,057 | 8,711 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I'm having an issue with providing the LLMChain class with multiple variables when I provide it with a memory object. It works fine when I don't have memory attached to it. I followed the example given in this document: [LLM Chain Multiple Inputs](https://python.langchain.com/docs/modules/chains/foundational/llm_chain#additional-ways-of-running-llm-chain)
Here is the code that I used, which is mostly based on the example from the above documentation, besides I've added memory.
```
# Multiple inputs example
template = """Tell me a {adjective} joke about {subject}."""
prompt = PromptTemplate(template=template, input_variables=["adjective", "subject"])
llm = OpenAI(temperature=0)
memory = ConversationKGMemory(llm=llm)
llm_chain = LLMChain(prompt=prompt, llm=OpenAI(temperature=0))
llm_chain.predict(adjective="sad", subject="ducks")
```
With the above code, I get the following error:
```
ValueError: One input key expected got ['adjective', 'subject']
```
Python version: 3.11
LangChain version: 0.0.250
### Suggestion:
Is there support for using multiple input variables when memory is involved? | Issue: Issue providing LLMChain with multiple variables when memory is used | https://api.github.com/repos/langchain-ai/langchain/issues/8710/comments | 7 | 2023-08-03T18:24:14Z | 2023-12-20T16:07:01Z | https://github.com/langchain-ai/langchain/issues/8710 | 1,835,528,046 | 8,710 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I would be great to be able to pass in configuration info to the `Embeddings` from the `GTP4AllEmbeddings()` constructor in the same manner as the `GPT4All()` constructor. For example. To be able to turn of allow_download, model, etc..
`embeddings = GPT4AllEmbeddings(allow_download=False)`
### Motivation
It would allow for the same level of customization as the `GPT4All()` constructor. Right now there is no way to run FAISS in completely offline mode. Trying to do so will hang and error out with the models lookup call to https://gpt4all.io/models/models.json. Being able to pass in configuration information would fix issues such as these. As it is right now, I have to download the gpt4all repo and manually patch the `Embeddings` to pass in the configuration I need.
### Your contribution
Probably not at this time. I would love to, but I'm a bit too busy with work at the moment. | Allow GPT4AllEmbeddings() constructor to be configured | https://api.github.com/repos/langchain-ai/langchain/issues/8708/comments | 7 | 2023-08-03T17:30:19Z | 2024-05-30T02:09:57Z | https://github.com/langchain-ai/langchain/issues/8708 | 1,835,460,289 | 8,708 |
[
"langchain-ai",
"langchain"
] | ### System Info
OS: Linux Mint 21.2 Cinnamon
Python 3.10
LangChain v0.0.251
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I defined a function like:
```python
@tool
def do_my_job() -> list:
"""Just do my job"""
return []
```
Obviously it has none of arguments.
After I made an agent and run:
```python
lc_tools = [
tool.do_my_job
]
lc_agent_executor = initialize_agent(agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, llm=lc_llm, tools=lc_tools,
verbose=config.verbose)
lc_reply = lc_agent_executor.run("do my job")
```
I got this error: `ValueError: ChatAgent does not support multi-input tool do_my_job.`, which raised at `langchain.agents.utils.py`:
```python
def validate_tools_single_input(class_name: str, tools: Sequence[BaseTool]) -> None:
"""Validate tools for single input."""
for tool in tools:
if not tool.is_single_input:
raise ValueError(
f"{class_name} does not support multi-input tool {tool.name}."
)
```
When the function `do_my_job` has arguments = 1, the `tool.is_single_input` was `True`
When the function `do_my_job` has arguments > 1, the `tool.is_single_input` was `False`
When the function `do_my_job` has arguments < 1, the `tool.is_single_input` was `False`
I can specify an unused argument for the function, but it is a bit weird.
Should this check be correct?
### Expected behavior
Non-argument functions should work with **Zero-shot ReAct** | ValueError: ChatAgent does not support multi-input tool do_my_job. with a non-argaument function | https://api.github.com/repos/langchain-ai/langchain/issues/8695/comments | 3 | 2023-08-03T14:51:14Z | 2023-11-10T16:06:51Z | https://github.com/langchain-ai/langchain/issues/8695 | 1,835,222,913 | 8,695 |
[
"langchain-ai",
"langchain"
] | ### System Info
ubuntu
[v0.0.250](https://github.com/langchain-ai/langchain/releases/tag/v0.0.250)
### Who can help?
@hwchase17 @agola11 @hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
1. implement multi input structured tool
2. enter user input to trigger the tool
3. check parsing
have had a chance to read and review earlier related regex and other parsing issues filed earlier some of which were marked as solved with 249 and 250 but continuing to have parsing issues
for example
https://github.com/langchain-ai/langchain/pull/7511
which is merged
when invoking a multi input structured tool
LLM returns
```
> Entering new AgentExecutor chain...
Could not parse LLM output: Action:
```json
{
"action": "image_generate",
"action_input": {
"prompt": "Draw a cat"
}
```
with extra 'json' before the valid json which is then cannot parse
2. have tried to handle this in the system prompt with (one example but tried multiple):
```
_SUFFIX = ("Don't ask the user to confirm if you are pretty sure what you need to do in order to fulfill the request. "
"For example, don't ask the user 'I can DO X. Would you like me to do that?', just do it. "
"Make sure that any JSON you return is fully valid, does not include any extra 'json' text before the JSON object. "
"For example, If you see you are generating a JSON that looks like this"
"```json"
"{{\"action\": \"edit_image\",\"action_input\": {{\"image_path\": \"sandbox:/images/edited_from_image_a1bc6dbb-1127-49d5-898a-d35c3bb1564d.png\",\"prompt\": \"Make the drawing even more basic\"}}}}"
"``` and has the text 'json' before the valid JSON object, remove the extra 'json' text at the start so that your final output is ```"
"{{\"action\": \"edit_image\",\"action_input\": {{\"image_path\": \"sandbox:/images/edited_from_image_a1bc6dbb-1127-49d5-898a-d35c3bb1564d.png\",\"prompt\": \"Make the drawing even more basic\"}}}}"
"``` without the json"
"Chat history:\n{chat_history}\n\n") + _SUFFIX
```
any help in trying to either parse better or instruct the LLM to provide a better answer would be appreciated.
thank in advance.
### Expected behavior
either
1) works out of the box with another version of the
https://github.com/langchain-ai/langchain/pull/7511
resolution
2) works with additional LLM system prompt SUFFIX instructions if need be.
have tried a bunch of variations can't get it to work, | Continuing LLM response parsing problems after 250(251) | https://api.github.com/repos/langchain-ai/langchain/issues/8691/comments | 9 | 2023-08-03T13:39:39Z | 2023-12-14T16:22:26Z | https://github.com/langchain-ai/langchain/issues/8691 | 1,835,088,585 | 8,691 |
[
"langchain-ai",
"langchain"
] |
**System Info**
python=3.10.8
langchain = 0.0.250
openai=0.27.8
**Information**
The official example notebooks/scripts
The error occurs when I run the official example:
https://python.langchain.com/docs/integrations/text_embedding/azureopenai
I use the azure open, and the code are shown in following:
embeddings = OpenAIEmbeddings(
deployment="text-davinci-002",
openai_api_key=os.environ["OPENAI_API_KEY"],
openai_api_type="azure",
openai_api_base=os.environ["OPENAI_API_BASE"],
openai_api_version=os.environ["OPENAI_API_VERSION"]
)
text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text])
and the error is:
Traceback (most recent call last):
File "./inter.py", line 54, in <module>
query_result = embeddings.embed_query(text)
File "./miniconda3/lib/python3.10/site-packages/langchain/embeddings/openai.py", line 500, in embed_query
return self.embed_documents([text])[0]
File "./miniconda3/lib/python3.10/site-packages/langchain/embeddings/openai.py", line 472, in embed_documents
return self._get_len_safe_embeddings(texts, engine=self.deployment)
File "./miniconda3/lib/python3.10/site-packages/langchain/embeddings/openai.py", line 358, in _get_len_safe_embeddings
response = embed_with_retry(
File "./miniconda3/lib/python3.10/site-packages/langchain/embeddings/openai.py", line 107, in embed_with_retry
return _embed_with_retry(**kwargs)
File "./miniconda3/lib/python3.10/site-packages/tenacity/__init__.py", line 289, in wrapped_f
return self(f, *args, **kw)
File "./miniconda3/lib/python3.10/site-packages/tenacity/__init__.py", line 379, in __call__
do = self.iter(retry_state=retry_state)
File "./miniconda3/lib/python3.10/site-packages/tenacity/__init__.py", line 314, in iter
return fut.result()
File "./miniconda3/lib/python3.10/concurrent/futures/_base.py", line 451, in result
return self.__get_result()
File "./miniconda3/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "./miniconda3/lib/python3.10/site-packages/tenacity/__init__.py", line 382, in __call__
result = fn(*args, **kwargs)
File "./miniconda3/lib/python3.10/site-packages/langchain/embeddings/openai.py", line 104, in _embed_with_retry
response = embeddings.client.create(**kwargs)
File "./miniconda3/lib/python3.10/site-packages/openai/api_resources/embedding.py", line 33, in create
response = super().create(*args, **kwargs)
File "./miniconda3/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
File "./miniconda3/lib/python3.10/site-packages/openai/api_requestor.py", line 298, in request
resp, got_stream = self._interpret_response(result, stream)
File "./miniconda3/lib/python3.10/site-packages/openai/api_requestor.py", line 700, in _interpret_response
self._interpret_response_line(
File "./miniconda3/lib/python3.10/site-packages/openai/api_requestor.py", line 763, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again.
I have check the issues, while this problem cannot be solved with the following solutions:
- solutions mentioned in #4819
- adding chunk_size=1
- only use OpenAIEmbeddings()
### Suggestion:
_No response_ | text_embedding with OpenAIEmbeddings cannot work | https://api.github.com/repos/langchain-ai/langchain/issues/8687/comments | 4 | 2023-08-03T12:34:16Z | 2023-11-17T16:06:24Z | https://github.com/langchain-ai/langchain/issues/8687 | 1,834,976,029 | 8,687 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.250
langchainplus-sdk 0.0.20
Python 3.8.10
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain import Cohere
llm = Cohere(temperature=0.9, cohere_api_key=cohere_api_key)
### Expected behavior
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[6], line 1
----> 1 llm = Cohere(temperature=0.9, cohere_api_key=cohere_api_key)
File /opt/miniconda3/envs/ai/lib/python3.8/site-packages/langchain/load/serializable.py:74, in Serializable.__init__(self, **kwargs)
73 def __init__(self, **kwargs: Any) -> None:
---> 74 super().__init__(**kwargs)
75 self._lc_kwargs = kwargs
File /opt/miniconda3/envs/ai/lib/python3.8/site-packages/pydantic/main.py:339, in pydantic.main.BaseModel.__init__()
File /opt/miniconda3/envs/ai/lib/python3.8/site-packages/pydantic/main.py:1102, in pydantic.main.validate_model()
File /opt/miniconda3/envs/ai/lib/python3.8/site-packages/langchain/llms/cohere.py:127, in Cohere.validate_environment(cls, values)
124 import cohere
126 values["client"] = cohere.Client(cohere_api_key)
--> 127 values["async_client"] = cohere.AsyncClient(cohere_api_key)
128 except ImportError:
129 raise ImportError(
130 "Could not import cohere python package. "
131 "Please install it with `pip install cohere`."
132 )
AttributeError: module 'cohere' has no attribute 'AsyncClient'
| AttributeError: module 'cohere' has no attribute 'AsyncClient' | https://api.github.com/repos/langchain-ai/langchain/issues/8686/comments | 2 | 2023-08-03T12:30:40Z | 2023-11-09T16:12:29Z | https://github.com/langchain-ai/langchain/issues/8686 | 1,834,970,426 | 8,686 |
[
"langchain-ai",
"langchain"
] | ### System Info
I am currently working with the Langchain platform and I've encountered an issue during the integration of ConstitutionalChain with the existing retrievalQaChain.
In my implementation, I've used retrievalQaChain with a custom prompt and Vecatra as the retriever. My system is hosted on Vercel and I'm primarily aiming to acquire a streaming output.
My objective was to utilize ConstitutionalChain alongside retrievalQaChain to improve the system's overall performance. However, upon integrating ConstitutionalChain, I am continually facing an error that disrupts the seamless functioning of my system.
Regrettably, I've not been able to identify the root cause or find a solution for this persistent issue. It would be highly beneficial if I could receive assistance to overcome this obstacle. To provide better context and aid the debugging process, I'll follow up with the specifics of the error message, the steps that lead to the error, and any relevant screenshots if possible.
**Error: error [TypeError: Cannot read properties of undefined (reading 'format')]**
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
const model = new ChatOpenAI({
temperature: 0,
azureOpenAIApiVersion: process.env.AZURE_OPENAI_API_VERSION,
azureOpenAIApiKey: process.env.AZURE_OPENAI_API_KEY,
azureOpenAIApiInstanceName: process.env.AZURE_OPENAI_API_INSTANCE_NAME,
azureOpenAIApiDeploymentName:
process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME,
streaming: true,
callbackManager: CallbackManager.fromHandlers(handlers),
});
const metadataFilter = `doc.sourceId=${SourceId}`;
const retrivalChain = new RetrievalQAChain({
combineDocumentsChain: loadQAStuffChain(model, { prompt }),
retriever: vectorStore.asRetriever(10, { filter: metadataFilter }),
returnSourceDocuments: true,
});
const principle = new ConstitutionalPrinciple({
name: 'Ethical Principle',
critiqueRequest:
'The model should only talk about ethical and legal things.',
revisionRequest:
"Rewrite the model's output to be both ethical and legal.",
});
const chain = ConstitutionalChain.fromLLM(model, {
chain: retrivalChain,
constitutionalPrinciples: [principle],
});
chain
.call({
query,
})
.then((result) => {
// TODO: consume the references
// eslint-disable-next-line no-console
console.log('result.sourceDocuments', result.sourceDocuments); // This will log the source documents
})
.catch((e) => {
// eslint-disable-next-line no-console
console.error('error', e.message);
})
.finally(() => {
handlers.handleChainEnd();
});
const corsHeaders = {
'Access-Control-Allow-Origin': '*',
};
return new StreamingTextResponse(stream, {
headers: corsHeaders,
});
### Expected behavior
It should return the streamed output with the constitutional principle | Issue with Langchain: Error When Implementing ConstitutionalChain with RetrievalQaChain and Vecatra | https://api.github.com/repos/langchain-ai/langchain/issues/8681/comments | 2 | 2023-08-03T10:53:12Z | 2023-11-09T16:07:31Z | https://github.com/langchain-ai/langchain/issues/8681 | 1,834,819,131 | 8,681 |
[
"langchain-ai",
"langchain"
] | ### System Info
There seems to be a circular import problem introduced with `langsmith` dependency.
I believe so, because this problem is constantly appearing in the `langchain` versions that do include `langsmith` dependency, and for example `0.0.230` seems to be ok (I don't see that one including `langsmith` as dep), but latter versions are exhibiting the same problem. And there is also `langchain` / `langsmith` back-fourth in stacktrace.
The error:
```text
Traceback (most recent call last):
File "/Users/Dzmitry_Kankalovich/Workspace/personal/langsmith.py", line 2, in <module>
from langchain.chat_models import ChatOpenAI
File "/Users/Dzmitry_Kankalovich/Workspace/personal/.venv/lib/python3.11/site-packages/langchain/__init__.py", line 6, in <module>
from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
File "/Users/Dzmitry_Kankalovich/Workspace/personal/.venv/lib/python3.11/site-packages/langchain/agents/__init__.py", line 31, in <module>
from langchain.agents.agent import (
File "/Users/Dzmitry_Kankalovich/Workspace/personal/.venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 15, in <module>
from langchain.agents.agent_iterator import AgentExecutorIterator
File "/Users/Dzmitry_Kankalovich/Workspace/personal/.venv/lib/python3.11/site-packages/langchain/agents/agent_iterator.py", line 21, in <module>
from langchain.callbacks.manager import (
File "/Users/Dzmitry_Kankalovich/Workspace/personal/.venv/lib/python3.11/site-packages/langchain/callbacks/__init__.py", line 21, in <module>
from langchain.callbacks.manager import (
File "/Users/Dzmitry_Kankalovich/Workspace/personal/.venv/lib/python3.11/site-packages/langchain/callbacks/manager.py", line 41, in <module>
from langchain.callbacks.tracers.langchain import LangChainTracer
File "/Users/Dzmitry_Kankalovich/Workspace/personal/.venv/lib/python3.11/site-packages/langchain/callbacks/tracers/__init__.py", line 3, in <module>
from langchain.callbacks.tracers.langchain import LangChainTracer
File "/Users/Dzmitry_Kankalovich/Workspace/personal/.venv/lib/python3.11/site-packages/langchain/callbacks/tracers/langchain.py", line 11, in <module>
from langsmith import Client
File "/Users/Dzmitry_Kankalovich/Workspace/personal/langsmith.py", line 2, in <module>
from langchain.chat_models import ChatOpenAI
File "/Users/Dzmitry_Kankalovich/Workspace/personal/.venv/lib/python3.11/site-packages/langchain/chat_models/__init__.py", line 20, in <module>
from langchain.chat_models.anthropic import ChatAnthropic
File "/Users/Dzmitry_Kankalovich/Workspace/personal/.venv/lib/python3.11/site-packages/langchain/chat_models/anthropic.py", line 3, in <module>
from langchain.callbacks.manager import (
ImportError: cannot import name 'AsyncCallbackManagerForLLMRun' from partially initialized module 'langchain.callbacks.manager' (most likely due to a circular import) (/Users/Dzmitr
y_Kankalovich/Workspace/personal/.venv/lib/python3.11/site-packages/langchain/callbacks/manager.py)
```
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
0. Use python `3.11`
1. Take sample *LangSmith* code from their tutorial:
```py
import os
from langchain.chat_models import ChatOpenAI
os.environ['LANGCHAIN_TRACING_V2'] = 'true'
os.environ['LANGCHAIN_ENDPOINT'] = "https://api.smith.langchain.com"
llm = ChatOpenAI()
llm.predict("Hello, world!")
```
2. Provision fresh `venv` with the following dependencies:
```text
langchain==0.0.250
openai==0.27.8
```
3. Observe stacktrace:
```text
from langchain.callbacks.tracers.langchain import LangChainTracer
File "/Users/Dzmitry_Kankalovich/Workspace/personal/.venv/lib/python3.11/site-packages/langchain/callbacks/tracers/__init__.py", line 3, in <module>
from langchain.callbacks.tracers.langchain import LangChainTracer
File "/Users/Dzmitry_Kankalovich/Workspace/personal/.venv/lib/python3.11/site-packages/langchain/callbacks/tracers/langchain.py", line 11, in <module>
from langsmith import Client
File "/Users/Dzmitry_Kankalovich/Workspace/personal/langsmith.py", line 2, in <module>
from langchain.chat_models import ChatOpenAI
File "/Users/Dzmitry_Kankalovich/Workspace/personal/.venv/lib/python3.11/site-packages/langchain/chat_models/__init__.py", line 20, in <module>
from langchain.chat_models.anthropic import ChatAnthropic
File "/Users/Dzmitry_Kankalovich/Workspace/personal/.venv/lib/python3.11/site-packages/langchain/chat_models/anthropic.py", line 3, in <module>
from langchain.callbacks.manager import (
ImportError: cannot import name 'AsyncCallbackManagerForLLMRun' from partially initialized module 'langchain.callbacks.manager' (most likely due to a circular import) (/Users/Dzmitr
y_Kankalovich/Workspace/personal/.venv/lib/python3.11/site-packages/langchain/callbacks/manager.py)
```
### Expected behavior
No errors, llm reply received | LangChain 0.0.250 circular import bug, possibly LangSmith related | https://api.github.com/repos/langchain-ai/langchain/issues/8680/comments | 2 | 2023-08-03T10:21:15Z | 2023-08-03T19:40:14Z | https://github.com/langchain-ai/langchain/issues/8680 | 1,834,771,152 | 8,680 |
[
"langchain-ai",
"langchain"
] | ### System Info
Related [Stackoverflow](https://stackoverflow.com/questions/76414862/how-do-you-catch-the-duplicate-id-error-when-using-langchain-vectorstores-chroma) post. Is there any way to avoid duplicated IDs and only insert new ones.
@agola11
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
Chroma.from_documents(docs, embeddings, ids=ids, persist_directory='db')
```
### Expected behavior
Non-duplicate IDs get added | Handle duplicate IDs error when using langchain.vectorstores.Chroma.from_documents | https://api.github.com/repos/langchain-ai/langchain/issues/8679/comments | 1 | 2023-08-03T09:07:10Z | 2023-11-14T16:07:04Z | https://github.com/langchain-ai/langchain/issues/8679 | 1,834,615,597 | 8,679 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.250
I get the following error when importing `Html2TextTransformer`:
```text
---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
Cell In[79], line 1
----> 1 from langchain.utils.math import cosine_similarity
ModuleNotFoundError: No module named 'langchain.utils.math'; 'langchain.utils' is not a package
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.document_transformers import Html2TextTransformer
```
### Expected behavior
The import succeeds. | Unable to import Html2TextTransformer from langchain.document_transformers | https://api.github.com/repos/langchain-ai/langchain/issues/8678/comments | 2 | 2023-08-03T08:47:29Z | 2023-11-09T16:10:19Z | https://github.com/langchain-ai/langchain/issues/8678 | 1,834,583,165 | 8,678
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
https://python.langchain.com/docs/integrations/retrievers/merger_retriever
https://python.langchain.com/docs/modules/data_connection/retrievers/ensemble
They seem to be the same.
Both take a list of existing retrievers and merge their outputs together.
### Idea or request for content:
The docs should describe the difference between the two. | DOC: What is the difference between LOTR(Merger Retriever) and Ensemble Retriever | https://api.github.com/repos/langchain-ai/langchain/issues/8677/comments | 1 | 2023-08-03T08:44:25Z | 2023-08-07T10:08:04Z | https://github.com/langchain-ai/langchain/issues/8677 | 1,834,575,870 | 8,677
[
"langchain-ai",
"langchain"
] | Combine agents & vector stores - https://python.langchain.com/docs/modules/agents/how_to/agent_vectorstore
Vectorstore agent - https://python.langchain.com/docs/integrations/toolkits/vectorstore
What is the difference between these two? | what is the difference between the combine agents &vector stores and vectorstore agent? | https://api.github.com/repos/langchain-ai/langchain/issues/8676/comments | 3 | 2023-08-03T08:40:56Z | 2023-11-15T16:07:22Z | https://github.com/langchain-ai/langchain/issues/8676 | 1,834,568,374 | 8,676
[
"langchain-ai",
"langchain"
] | ### System Info
Hi, does anyone know what the problem is here? I am getting the following error after making a request to OpenAI using langchain with a SQL database connection: (Failed to calculate number of tokens, falling back to approximate count). On the request I can see a property with max_tokens: 256. Is there a problem with the API key or pricing plan, or anything related to the database size?
The version of the langchain library I am using is `"langchain": "0.0.110"`.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [x] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I'm trying to chat with my database and I'm getting the error above.
### Expected behavior
It shows me an error (Failed to calculate number of tokens, falling back to approximate count) and takes a very long time to load. | Failed to calculate number of tokens, falling back to approximate count | https://api.github.com/repos/langchain-ai/langchain/issues/8675/comments | 5 | 2023-08-03T08:17:16Z | 2023-11-12T16:06:29Z | https://github.com/langchain-ai/langchain/issues/8675 | 1,834,529,804 | 8,675
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am working through the LangChain lessons and found an error while running the Jupyter code at
`https://learn.deeplearning.ai/langchain/lesson/4/chains`;
precisely, MULTI_PROMPT_ROUTER_TEMPLATE does not work well on the first run.
1) It raises the following error:
```
OutputParserException: Parsing text
{
"destination": "physics",
"next_inputs": "What is black body radiation?"
}
raised following error:
Got invalid return object. Expected markdown code snippet with JSON object, bu
```
Then I added an additional example I/O (after <<OUTPUT>>) to it:
```
eg:
<< INPUT >>
"What is black body radiation?"
<< OUTPUT >>
\```json
{{{{
"destination": string \ name of the prompt to use or "DEFAULT"
"next_inputs": string \ a potentially modified version of the original input
}}}}
\```
```
It works, but it still errors when routing to the default path when I ask it "Why does every cell in our body contain DNA?"
2) It tried to route to biology, but failed to get a response:
```
ValueError: Received invalid destination chain name 'biology'
```
How can I get a stable running result? Or is this issue one of the flaws of current LLMs, which cannot guarantee that every output follows the specified format?
Thanks a lot.
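One mitigation that may stabilize this, sketched below purely as an illustration: catch the router's parse failure (or an unknown destination name) and fall back to a general-purpose chain. Here `chain` is assumed to be the MultiPromptChain and `default_chain` a plain LLMChain from the lesson notebook; neither is shown in the original report.
```python
# Hedged sketch: fall back to a default chain when routing fails.
from langchain.schema import OutputParserException

def robust_run(question: str) -> str:
    try:
        return chain.run(question)
    except (OutputParserException, ValueError):
        # Covers both unparseable router output and an unknown
        # destination such as 'biology' missing from destination_chains.
        return default_chain.run(question)
```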
### Suggestion:
_No response_ | RunError of langchain's tutorial in deeplearning.ai, L3-Chains | https://api.github.com/repos/langchain-ai/langchain/issues/8674/comments | 5 | 2023-08-03T07:58:27Z | 2024-07-23T20:25:09Z | https://github.com/langchain-ai/langchain/issues/8674 | 1,834,500,944 | 8,674 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
What does LangChain's logo mean?
### Suggestion:
_No response_ | What does langchain's logo mean | https://api.github.com/repos/langchain-ai/langchain/issues/8673/comments | 3 | 2023-08-03T07:17:23Z | 2023-12-02T07:37:01Z | https://github.com/langchain-ai/langchain/issues/8673 | 1,834,440,509 | 8,673 |
[
"langchain-ai",
"langchain"
] | ### System Info
Hi, I'm trying to interact with a Hugging Face model (NumbersStation/nsql-llama-2-7B) using Manifest, but when I try to send it a prompt I get the following error from the model:
`The following 'model_kwargs' are not used by the model: ['token_type_ids'] (note: typos in the generate arguments will also show up in this list)`
I tried to find out how to control the model_kwargs in the ManifestWrapper but couldn't find such an option.
This is how I configured the wrapper:
```python
# Imports and client setup added for completeness; the original snippet
# assumed `manifest` was already constructed. The connection URL is the
# Manifest API server's default local address.
from manifest import Manifest
from langchain.llms.manifest import ManifestWrapper

manifest = Manifest(client_name="huggingface", client_connection="http://127.0.0.1:5000")
local_llm = ManifestWrapper(
    client=manifest,
    llm_kwargs={"temperature": 0.0, "max_tokens": 1024},
    verbose=True
)
```
and that's how I ran the model using Manifest (on my local machine):
```
python3 -m manifest.api.app \
--model_type huggingface \
--model_generation_type text-generation \
--model_name_or_path nsql-llama-2-7B \
--device 0
```
Would like some help here :)
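For context, this error usually originates on the Hugging Face side rather than in LangChain: the tokenizer emits a `token_type_ids` field that the model's `generate()` does not accept. The snippet below illustrates the underlying pattern in plain `transformers` — a hedged sketch of the mechanism, not a Manifest-specific fix, since Manifest's server would have to apply the equivalent change internally:
```python
# Hedged sketch: drop token_type_ids from the tokenizer output before generate().
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("nsql-llama-2-7B")
model = AutoModelForCausalLM.from_pretrained("nsql-llama-2-7B")

inputs = tokenizer("SELECT", return_tensors="pt")
inputs.pop("token_type_ids", None)  # the field generate() complains about
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0]))
```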
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Run the model locally using Manifest.
2. Connect to it using ManifestWrapper.
3. Send a simple prompt with LLMChain.
### Expected behavior
Getting a response from the model. | Getting model args exception while using ManifestWrapper | https://api.github.com/repos/langchain-ai/langchain/issues/8672/comments | 2 | 2023-08-03T07:07:08Z | 2023-11-09T16:12:42Z | https://github.com/langchain-ai/langchain/issues/8672 | 1,834,425,326 | 8,672
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I want to keep a separate chat history for each user in ConversationBufferMemory, so that a user can only see their own chat history.
This is my code:
```python
embeddings = OpenAIEmbeddings(model="text-embedding-ada-002", chunk_size=1000)
docsearch = Chroma.from_documents(split_docs, embeddings)
memory = ConversationBufferMemory(memory_key="chat_history")
qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="map_rerank",
                                 retriever=docsearch.as_retriever(), memory=memory, return_source_documents=False)
```
Could anyone give me some advice, please?
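One common pattern that may help, sketched below under the assumption that each request carries a `user_id`: keep one ConversationBufferMemory per user in a dictionary, so histories never mix. The helper name and structure are illustrative, and `docsearch` is assumed to exist as in the snippet above.
```python
# Hedged sketch: one memory (and chain) per user, keyed by user_id.
memories = {}

def get_chain_for_user(user_id: str):
    if user_id not in memories:
        memories[user_id] = ConversationBufferMemory(memory_key="chat_history")
    return RetrievalQA.from_chain_type(
        llm=OpenAI(),
        chain_type="map_rerank",
        retriever=docsearch.as_retriever(),
        memory=memories[user_id],
        return_source_documents=False,
    )
```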
### Suggestion:
_No response_ | how to set chat_history for different user | https://api.github.com/repos/langchain-ai/langchain/issues/8671/comments | 4 | 2023-08-03T07:06:27Z | 2023-11-30T08:19:42Z | https://github.com/langchain-ai/langchain/issues/8671 | 1,834,424,379 | 8,671 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am trying to send emails using the `ZapierNLAWrapper`. Although it reports that the emails were sent successfully, I cannot find the sent emails in Gmail.
```
from langchain.chains import TransformChain
from langchain.tools.zapier.tool import ZapierNLARunAction
from langchain.utilities.zapier import ZapierNLAWrapper
from langchain.agents import initialize_agent
from langchain.agents.agent_toolkits import ZapierToolkit
from langchain.agents import AgentType
from langchain.llms import OpenAI

zapier = ZapierNLAWrapper(zapier_nla_api_key="")
actions = zapier.list()
toolkit = ZapierToolkit.from_zapier_nla_wrapper(zapier)
llm = OpenAI(temperature=0)  # assumed; the original snippet never defines `llm`
agent = initialize_agent(toolkit.get_tools(), llm, agent="zero-shot-react-description", verbose=True)
agent.run(
f"The emails of the shortlisted candidates are the following: arghyadutta119@gmail.com and nileshpal530@gmail.com. Their names are Arghya and Nilesh respectively. Send emails to them, In the body of the email, congratulate them on their selection and inform them about the next steps in the hiring process."
)
```
Here is the verbose log:
```
> Entering new AgentExecutor chain...
I need to send emails to the shortlisted candidates. I should use the Gmail: Send Email tool.
Action: Gmail: Send Email
Action Input:
- Subject: "Congratulations on Your Selection"
- Body: "Dear [Candidate's Name],\n\nCongratulations on being shortlisted for the position! We are pleased to inform you that you have been selected to proceed to the next steps in the hiring process. We will be contacting you shortly with more details.\n\nBest regards,\n[Your Name]"
- To: "arghyadutta119@gmail.com"
- Cc: "nileshpal530@gmail.com"
Observation: null
Thought:The email has been sent to the first candidate. I need to send the email to the second candidate now.
Action: Gmail: Send Email
Action Input:
- Subject: "Congratulations on Your Selection"
- Body: "Dear [Candidate's Name],\n\nCongratulations on being shortlisted for the position! We are pleased to inform you that you have been selected to proceed to the next steps in the hiring process. We will be contacting you shortly with more details.\n\nBest regards,\n[Your Name]"
- To: "nileshpal530@gmail.com"
- Cc: "arghyadutta119@gmail.com"
Observation: null
Thought:Both emails have been sent successfully.
Final Answer: The emails have been sent to the shortlisted candidates.
> Finished chain.
```
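A hedged reading of the log: each tool call returns `Observation: null`, which suggests the Zapier action never actually executed even though the agent then declares success. One way to investigate, assuming `zapier` and `actions` are set up as in the snippet above, is to bypass the agent and run the action directly so the raw API response is visible (the instruction string below is illustrative):
```python
# Hedged sketch: run the Gmail action directly and inspect Zapier's raw response.
send_action = next(a for a in actions if "Send Email" in a["description"])
result = zapier.run(send_action["id"], "Send an email to arghyadutta119@gmail.com with subject 'Test' and body 'Hello'.")
print(result)  # a null/empty result here points at the Zapier-side action config
```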
### Suggestion:
_No response_ | Issue: < Emails are not being sent though it shows emails sent successfully > | https://api.github.com/repos/langchain-ai/langchain/issues/8667/comments | 1 | 2023-08-03T06:18:56Z | 2023-08-03T09:44:12Z | https://github.com/langchain-ai/langchain/issues/8667 | 1,834,353,463 | 8,667 |